requests-2.22.0/PKG-INFO
Metadata-Version: 2.1
Name: requests
Version: 2.22.0
Summary: Python HTTP for Humans.
Home-page: http://python-requests.org
Author: Kenneth Reitz
Author-email: me@kennethreitz.org
License: Apache 2.0
Description: Requests: HTTP for Humans™
==========================
Requests is the only *Non-GMO* HTTP library for Python, safe for human
consumption.

Behold, the power of Requests:
``` {.sourceCode .python}
>>> import requests
>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
>>> r.text
u'{"type":"User"...'
>>> r.json()
{u'disk_usage': 368627, u'private_gists': 484, ...}
```
See [the similar code, sans Requests](https://gist.github.com/973705).
Requests allows you to send *organic, grass-fed* HTTP/1.1 requests,
without the need for manual labor. There's no need to manually add query
strings to your URLs, or to form-encode your POST data. Keep-alive and
HTTP connection pooling are 100% automatic, thanks to
[urllib3](https://github.com/shazow/urllib3).
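For instance, a quick sketch of both claims, letting Requests build the
query string and form-encode the body for you (httpbin.org is used here
purely as a demonstration endpoint):
``` {.sourceCode .python}
>>> import requests
>>> # The query string is built from the params dict:
>>> r = requests.get('https://httpbin.org/get', params={'q': 'python'})
>>> r.url
'https://httpbin.org/get?q=python'
>>> # POST data is form-encoded automatically:
>>> r = requests.post('https://httpbin.org/post', data={'key': 'value'})
>>> r.status_code
200
```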
Besides, all the cool kids are doing it. Requests is one of the most
downloaded Python packages of all time, pulling in over 11,000,000
downloads every month. You don't want to be left out!
Feature Support
---------------
Requests is ready for today's web.
- International Domains and URLs
- Keep-Alive & Connection Pooling
- Sessions with Cookie Persistence
- Browser-style SSL Verification
- Basic/Digest Authentication
- Elegant Key/Value Cookies
- Automatic Decompression
- Automatic Content Decoding
- Unicode Response Bodies
- Multipart File Uploads
- HTTP(S) Proxy Support
- Connection Timeouts
- Streaming Downloads
- `.netrc` Support
- Chunked Requests
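Here is a minimal sketch touching a few of the features above (sessions
with cookie persistence, connection timeouts, and streaming downloads);
httpbin.org is only a stand-in endpoint:
``` {.sourceCode .python}
import requests

# Sessions persist cookies (and reuse connections) across requests.
with requests.Session() as s:
    s.get('https://httpbin.org/cookies/set/token/abc123', timeout=5)
    assert s.cookies['token'] == 'abc123'

    # Streaming download: iterate the body without loading it all at once.
    with s.get('https://httpbin.org/bytes/1024', stream=True, timeout=5) as r:
        for chunk in r.iter_content(chunk_size=256):
            pass  # write the chunk to disk, hash it, etc.
```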
Requests officially supports Python 2.7 & 3.5–3.7, and runs great on
PyPy.
Installation
------------
To install Requests, simply use [pipenv](http://pipenv.org/) (or pip, of
course):
``` {.sourceCode .bash}
$ pipenv install requests
✨🍰✨
```
Satisfaction guaranteed.
Documentation
-------------
Fantastic documentation is available at
<http://docs.python-requests.org/>, for a limited time only.
How to Contribute
-----------------
1. Become more familiar with the project by reading our [Contributor's Guide](http://docs.python-requests.org/en/latest/dev/contributing/) and our [development philosophy](http://docs.python-requests.org/en/latest/dev/philosophy/).
2. Check for open issues or open a fresh issue to start a discussion
around a feature idea or a bug. There is a [Contributor
Friendly](https://github.com/requests/requests/issues?direction=desc&labels=Contributor+Friendly&page=1&sort=updated&state=open)
tag for issues that should be ideal for people who are not very
familiar with the codebase yet.
3. Fork [the repository](https://github.com/requests/requests) on
GitHub to start making your changes to the **master** branch (or
branch off of it).
4. Write a test which shows that the bug was fixed or that the feature
works as expected.
5. Send a pull request and bug the maintainer until it gets merged and
published. :) Make sure to add yourself to
[AUTHORS](https://github.com/requests/requests/blob/master/AUTHORS.rst).
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Natural Language :: English
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*
Description-Content-Type: text/markdown
Provides-Extra: security
Provides-Extra: socks
requests-2.22.0/pytest.ini
[pytest]
addopts = -p no:warnings

requests-2.22.0/LICENSE
Copyright 2018 Kenneth Reitz
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
requests-2.22.0/HISTORY.md
Release History
===============
dev
---
**Bugfixes**
- \[Short description of non-trivial change.\]
2.22.0 (2019-05-15)
-------------------
**Dependencies**
- Requests now supports urllib3 v1.25.2.
(note: 1.25.0 and 1.25.1 are incompatible)
**Deprecations**
- Requests has officially stopped support for Python 3.4.
2.21.0 (2018-12-10)
-------------------
**Dependencies**
- Requests now supports idna v2.8.
2.20.1 (2018-11-08)
-------------------
**Bugfixes**
- Fixed bug with unintended Authorization header stripping for
redirects using default ports (http/80, https/443).
2.20.0 (2018-10-18)
-------------------
**Bugfixes**
- Content-Type header parsing is now case-insensitive (e.g.
`charset=utf8` vs. `Charset=utf8`).
- Fixed exception leak where certain redirect urls would raise
uncaught urllib3 exceptions.
- Requests removes Authorization header from requests redirected
from https to http on the same hostname. (CVE-2018-18074)
- `should_bypass_proxies` now handles URIs without hostnames (e.g.
files).
**Dependencies**
- Requests now supports urllib3 v1.24.
**Deprecations**
- Requests has officially stopped support for Python 2.6.
2.19.1 (2018-06-14)
-------------------
**Bugfixes**
- Fixed issue where status\_codes.py's `init` function failed trying
to append to a `__doc__` value of `None`.
2.19.0 (2018-06-12)
-------------------
**Improvements**
- Warn user about possible slowdown when using cryptography version
< 1.3.4
- Check for invalid host in proxy URL, before forwarding request to
adapter.
- Fragments are now properly maintained across redirects. (RFC7231
7.1.2)
- Removed use of cgi module to expedite library load time.
- Added support for SHA-256 and SHA-512 digest auth algorithms.
- Minor performance improvement to `Request.content`.
- Migrate to using collections.abc for 3.7 compatibility.
**Bugfixes**
- Parsing empty `Link` headers with `parse_header_links()` no longer
returns one bogus entry.
- Fixed issue where loading the default certificate bundle from a zip
archive would raise an `IOError`.
- Fixed issue with unexpected `ImportError` on Windows systems which do
not support the `winreg` module.
- DNS resolution in proxy bypass no longer includes the username and
password in the request. This also fixes the issue of DNS queries
failing on macOS.
- Properly normalize adapter prefixes for url comparison.
- Passing `None` as a file pointer to the `files` param no longer
raises an exception.
- Calling `copy` on a `RequestsCookieJar` will now preserve the cookie
policy correctly.
**Dependencies**
- We now support idna v2.7.
- We now support urllib3 v1.23.
2.18.4 (2017-08-15)
-------------------
**Improvements**
- Error messages for invalid headers now include the header name for
easier debugging
**Dependencies**
- We now support idna v2.6.
2.18.3 (2017-08-02)
-------------------
**Improvements**
- Running `$ python -m requests.help` now includes the installed
version of idna.
**Bugfixes**
- Fixed issue where Requests would raise `ConnectionError` instead of
`SSLError` when encountering SSL problems when using urllib3 v1.22.
2.18.2 (2017-07-25)
-------------------
**Bugfixes**
- `requests.help` no longer fails on Python 2.6 due to the absence of
`ssl.OPENSSL_VERSION_NUMBER`.
**Dependencies**
- We now support urllib3 v1.22.
2.18.1 (2017-06-14)
-------------------
**Bugfixes**
- Fix an error in the packaging whereby the `*.whl` contained
incorrect data that regressed the fix in v2.17.3.
2.18.0 (2017-06-14)
-------------------
**Improvements**
- `Response` is now a context manager, so can be used directly in a
`with` statement without first having to be wrapped by
`contextlib.closing()`.
**Bugfixes**
- Resolve installation failure if multiprocessing is not available
- Resolve tests crash if multiprocessing is not able to determine the
number of CPU cores
- Resolve error swallowing in utils set\_environ generator
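A short sketch of the new context-manager behaviour noted above; any
streaming request works, and the URL here is just an illustration:

    import requests

    # Response now closes itself when the with-block exits; no more
    # contextlib.closing() wrapper is needed.
    with requests.get('https://httpbin.org/bytes/1024', stream=True) as r:
        for chunk in r.iter_content(chunk_size=256):
            pass  # consume the body; the connection is released afterwards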
2.17.3 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.2 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.1 (2017-05-29)
-------------------
**Improvements**
- Improved `packages` namespace identity support, for monkeypatching
libraries.
2.17.0 (2017-05-29)
-------------------
**Improvements**
- Removal of the 301 redirect cache. This improves thread-safety.
2.16.5 (2017-05-28)
-------------------
- Improvements to `$ python -m requests.help`.
2.16.4 (2017-05-27)
-------------------
- Introduction of the `$ python -m requests.help` command, for
debugging with maintainers!
2.16.3 (2017-05-27)
-------------------
- Further restored the `requests.packages` namespace for compatibility
reasons.
2.16.2 (2017-05-27)
-------------------
- Further restored the `requests.packages` namespace for compatibility
reasons.
The code modification noted below should no longer be necessary.
2.16.1 (2017-05-27)
-------------------
- Restored the `requests.packages` namespace for compatibility
reasons.
- Bugfix for `urllib3` version parsing.
**Note**: code that was previously written to import against the
`requests.packages` namespace will now have to import the code that
rests at module level. For example:

    from requests.packages.urllib3.poolmanager import PoolManager

will need to be rewritten as:

    from requests.packages import urllib3
    urllib3.poolmanager.PoolManager

Or, even better:

    from urllib3.poolmanager import PoolManager
2.16.0 (2017-05-26)
-------------------
- Unvendor ALL the things!
2.15.1 (2017-05-26)
-------------------
- Everyone makes mistakes.
2.15.0 (2017-05-26)
-------------------
**Improvements**
- Introduction of the `Response.next` property, for getting the next
`PreparedRequest` from a redirect chain (when
`allow_redirects=False`).
- Internal refactoring of `__version__` module.
**Bugfixes**
- Restored once-optional parameter for
`requests.utils.get_environ_proxies()`.
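A sketch of how the `Response.next` property noted above can be used to
walk a redirect chain by hand (httpbin.org's redirect endpoint is
illustrative):

    import requests

    s = requests.Session()
    r = s.get('https://httpbin.org/redirect/3', allow_redirects=False)
    while r.next is not None:
        # r.next is the PreparedRequest for the next hop in the chain.
        r = s.send(r.next, allow_redirects=False)
    print(r.status_code)  # 200 once the chain is exhausted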
2.14.2 (2017-05-10)
-------------------
**Bugfixes**
- Changed a less-than to an equal-to and an or in the dependency
markers to widen compatibility with older setuptools releases.
2.14.1 (2017-05-09)
-------------------
**Bugfixes**
- Changed the dependency markers to widen compatibility with older pip
releases.
2.14.0 (2017-05-09)
-------------------
**Improvements**
- It is now possible to pass `no_proxy` as a key to the `proxies`
dictionary to provide handling similar to the `NO_PROXY` environment
variable.
- When users provide invalid paths to certificate bundle files or
directories Requests now raises `IOError`, rather than failing at
the time of the HTTPS request with a fairly inscrutable certificate
validation error.
- The behavior of `SessionRedirectMixin` was slightly altered.
`resolve_redirects` will now detect a redirect by calling
`get_redirect_target(response)` instead of directly querying
`Response.is_redirect` and `Response.headers['location']`. Advanced
users will be able to process malformed redirects more easily.
- Changed the internal calculation of elapsed request time to have
higher resolution on Windows.
- Added `win_inet_pton` as conditional dependency for the `[socks]`
extra on Windows with Python 2.7.
- Changed the proxy bypass implementation on Windows: the proxy bypass
check doesn't use forward and reverse DNS requests anymore
- URLs with schemes that begin with `http` but are not `http` or
`https` no longer have their host parts forced to lowercase.
**Bugfixes**
- Much improved handling of non-ASCII `Location` header values in
redirects. Fewer `UnicodeDecodeErrors` are encountered on Python 2,
and Python 3 now correctly understands that Latin-1 is unlikely to
be the correct encoding.
- If an attempt to `seek` file to find out its length fails, we now
appropriately handle that by aborting our content-length
calculations.
- Restricted `HTTPDigestAuth` to only respond to auth challenges made
on 4XX responses, rather than to all auth challenges.
- Fixed some code that was firing `DeprecationWarning` on Python 3.6.
- The dismayed person emoticon (`/o\\`) no longer has a big head. I'm
sure this is what you were all worrying about most.
**Miscellaneous**
- Updated bundled urllib3 to v1.21.1.
- Updated bundled chardet to v3.0.2.
- Updated bundled idna to v2.5.
- Updated bundled certifi to 2017.4.17.
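For example, the new `no_proxy` key behaves much like the `NO_PROXY`
environment variable (the hostnames and proxy URL below are
illustrative):

    import requests

    proxies = {
        'https': 'http://proxy.example.com:3128',
        # Hosts matching this list bypass the proxy entirely,
        # mirroring the NO_PROXY environment variable.
        'no_proxy': 'localhost,127.0.0.1,internal.example.com',
    }
    r = requests.get('https://internal.example.com/', proxies=proxies)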
2.13.0 (2017-01-24)
-------------------
**Features**
- Only load the `idna` library when we've determined we need it. This
will save some memory for users.
**Miscellaneous**
- Updated bundled urllib3 to 1.20.
- Updated bundled idna to 2.2.
2.12.5 (2017-01-18)
-------------------
**Bugfixes**
- Fixed an issue with JSON encoding detection, specifically detecting
big-endian UTF-32 with BOM.
2.12.4 (2016-12-14)
-------------------
**Bugfixes**
- Fixed regression from 2.12.2 where non-string types were rejected in
the basic auth parameters. While support for this behaviour has been
readded, the behaviour is deprecated and will be removed in the
future.
2.12.3 (2016-12-01)
-------------------
**Bugfixes**
- Fixed regression from v2.12.1 for URLs with schemes that begin with
"http". These URLs have historically been processed as though they
were HTTP-schemed URLs, and so have had parameters added. This was
removed in v2.12.2 in an overzealous attempt to resolve problems
with IDNA-encoding those URLs. This change was reverted: the other
fixes for IDNA-encoding have been judged to be sufficient to return
to the behaviour Requests had before v2.12.0.
2.12.2 (2016-11-30)
-------------------
**Bugfixes**
- Fixed several issues with IDNA-encoding URLs that are technically
invalid but which are widely accepted. Requests will now attempt to
IDNA-encode a URL if it can but, if it fails, and the host contains
only ASCII characters, it will be passed through optimistically.
This will allow users to opt-in to using IDNA2003 themselves if they
want to, and will also allow technically invalid but still common
hostnames.
- Fixed an issue where URLs with leading whitespace would raise
`InvalidSchema` errors.
- Fixed an issue where some URLs without the HTTP or HTTPS schemes
would still have HTTP URL preparation applied to them.
- Fixed an issue where Unicode strings could not be used in basic
auth.
- Fixed an issue encountered by some Requests plugins where
constructing a Response object would cause `Response.content` to
raise an `AttributeError`.
2.12.1 (2016-11-16)
-------------------
**Bugfixes**
- Updated setuptools 'security' extra for the new PyOpenSSL backend in
urllib3.
**Miscellaneous**
- Updated bundled urllib3 to 1.19.1.
2.12.0 (2016-11-15)
-------------------
**Improvements**
- Updated support for internationalized domain names from IDNA2003 to
IDNA2008. This updated support is required for several forms of IDNs
and is mandatory for .de domains.
- Much improved heuristics for guessing content lengths: Requests will
no longer read an entire `StringIO` into memory.
- Much improved logic for recalculating `Content-Length` headers for
`PreparedRequest` objects.
- Improved tolerance for file-like objects that have no `tell` method
but do have a `seek` method.
- Anything that is a subclass of `Mapping` is now treated like a
dictionary by the `data=` keyword argument.
- Requests now tolerates empty passwords in proxy credentials, rather
than stripping the credentials.
- If a request is made with a file-like object as the body and that
request is redirected with a 307 or 308 status code, Requests will
now attempt to rewind the body object so it can be replayed.
**Bugfixes**
- When calling `response.close`, the call to `close` will be
propagated through to non-urllib3 backends.
- Fixed issue where the `ALL_PROXY` environment variable would be
preferred over scheme-specific variables like `HTTP_PROXY`.
- Fixed issue where non-UTF8 reason phrases got severely mangled by
falling back to decoding using ISO 8859-1 instead.
- Fixed a bug where Requests would not correctly correlate cookies set
when using custom Host headers if those Host headers did not use the
native string type for the platform.
**Miscellaneous**
- Updated bundled urllib3 to 1.19.
- Updated bundled certifi certs to 2016.09.26.
2.11.1 (2016-08-17)
-------------------
**Bugfixes**
- Fixed a bug where using `iter_content` with `decode_unicode=True` for
streamed bodies would raise `AttributeError`. This bug was
introduced in 2.11.
- Strip Content-Type and Transfer-Encoding headers from the header
block when following a redirect that transforms the verb from
POST/PUT to GET.
2.11.0 (2016-08-08)
-------------------
**Improvements**
- Added support for the `ALL_PROXY` environment variable.
- Reject header values that contain leading whitespace or newline
characters to reduce risk of header smuggling.
**Bugfixes**
- Fixed occasional `TypeError` when attempting to decode a JSON
response that occurred in an error case. Now correctly raises a
`ValueError`.
- Requests would incorrectly ignore a non-CIDR IP address in the
`NO_PROXY` environment variable: Requests now treats it as a
specific IP.
- Fixed a bug when sending JSON data that could cause us to encounter
obscure OpenSSL errors in certain network conditions (yes, really).
- Added type checks to ensure that `iter_content` only accepts
integers and `None` for chunk sizes.
- Fixed issue where responses whose body had not been fully consumed
would have the underlying connection closed but not returned to the
connection pool, which could cause Requests to hang in situations
where the `HTTPAdapter` had been configured to use a blocking
connection pool.
**Miscellaneous**
- Updated bundled urllib3 to 1.16.
- Some previous releases accidentally accepted non-strings as
acceptable header values. This release does not.
2.10.0 (2016-04-29)
-------------------
**New Features**
- SOCKS Proxy Support! (requires PySocks;
`$ pip install requests[socks]`)
**Miscellaneous**
- Updated bundled urllib3 to 1.15.1.
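With PySocks installed, SOCKS proxies are configured like any other
proxy; the proxy address below is illustrative:

    import requests

    # Requires: pip install requests[socks]
    proxies = {
        'http': 'socks5://127.0.0.1:9050',
        'https': 'socks5://127.0.0.1:9050',
    }
    r = requests.get('https://example.com/', proxies=proxies)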
2.9.2 (2016-04-29)
------------------
**Improvements**
- Change built-in CaseInsensitiveDict (used for headers) to use
OrderedDict as its underlying datastore.
**Bugfixes**
- Don't use redirect\_cache if allow\_redirects=False
- When passed objects that throw exceptions from `tell()`, send them
via chunked transfer encoding instead of failing.
- Raise a ProxyError for proxy related connection issues.
2.9.1 (2015-12-21)
------------------
**Bugfixes**
- Resolve regression introduced in 2.9.0 that made it impossible to
send binary strings as bodies in Python 3.
- Fixed errors when calculating cookie expiration dates in certain
locales.
**Miscellaneous**
- Updated bundled urllib3 to 1.13.1.
2.9.0 (2015-12-15)
------------------
**Minor Improvements** (Backwards compatible)
- The `verify` keyword argument now supports being passed a path to a
directory of CA certificates, not just a single-file bundle.
- Warnings are now emitted when sending files opened in text mode.
- Added the 511 Network Authentication Required status code to the
status code registry.
**Bugfixes**
- For file-like objects that are not seeked to the very beginning, we
now send the content length for the number of bytes we will actually
read, rather than the total size of the file, allowing partial file
uploads.
- When uploading file-like objects, if they are empty or have no
obvious content length we set `Transfer-Encoding: chunked` rather
than `Content-Length: 0`.
- We correctly receive the response in buffered mode when uploading
chunked bodies.
- We now handle being passed a query string as a bytestring on Python
3, by decoding it as UTF-8.
- Sessions are now closed in all cases (exceptional and not) when
using the functional API rather than leaking and waiting for the
garbage collector to clean them up.
- Correctly handle digest auth headers with a malformed `qop`
directive that contains no token, by treating it the same as if no
`qop` directive was provided at all.
- Minor performance improvements when removing specific cookies by
name.
**Miscellaneous**
- Updated urllib3 to 1.13.
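A sketch of the broadened `verify` argument noted above; both paths
below are assumptions about your local setup:

    import requests

    # Already supported: a single-file CA bundle.
    requests.get('https://example.com/', verify='/path/to/ca-bundle.pem')

    # New in 2.9.0: a directory of CA certificates. The directory must
    # have been processed with the c_rehash utility shipped with OpenSSL.
    requests.get('https://example.com/', verify='/path/to/ca-certs/')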
2.8.1 (2015-10-13)
------------------
**Bugfixes**
- Update certificate bundle to match `certifi` 2015.9.6.2's weak
certificate bundle.
- Fix a bug in 2.8.0 where requests would raise `ConnectTimeout`
instead of `ConnectionError`
- When using the PreparedRequest flow, requests will now correctly
respect the `json` parameter. Broken in 2.8.0.
- When using the PreparedRequest flow, requests will now correctly
handle a Unicode-string method name on Python 2. Broken in 2.8.0.
2.8.0 (2015-10-05)
------------------
**Minor Improvements** (Backwards Compatible)
- Requests now supports per-host proxies. This allows the `proxies`
dictionary to have entries of the form
`{'<scheme>://<hostname>': '<proxy_url>'}`. Host-specific proxies will
be used in preference to the previously-supported scheme-specific
ones, but the previous syntax will continue to work.
- `Response.raise_for_status` now prints the URL that failed as part
of the exception message.
- `requests.utils.get_netrc_auth` now takes a `raise_errors` kwarg,
defaulting to `False`. When `True`, errors parsing `.netrc` files
cause exceptions to be raised.
- Change to bundled projects import logic to make it easier to
unbundle requests downstream.
- Changed the default User-Agent string to avoid leaking data on
Linux: now contains only the requests version.
**Bugfixes**
- The `json` parameter to `post()` and friends will now only be used
if neither `data` nor `files` are present, consistent with the
documentation.
- We now ignore empty fields in the `NO_PROXY` environment variable.
- Fixed problem where `httplib.BadStatusLine` would get raised if
combining `stream=True` with `contextlib.closing`.
- Prevented bugs where we would attempt to return the same connection
back to the connection pool twice when sending a Chunked body.
- Miscellaneous minor internal changes.
- Digest Auth support is now thread safe.
**Updates**
- Updated urllib3 to 1.12.
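A sketch of the per-host syntax next to the older scheme-wide form
(hosts and proxy URLs are illustrative):

    import requests

    proxies = {
        # Scheme-specific (previously supported): all plain-HTTP traffic.
        'http': 'http://proxy.example.com:3128',
        # Host-specific (new in 2.8.0): takes precedence for this host.
        'http://slow.example.com': 'http://other-proxy.example.com:8080',
    }
    r = requests.get('http://slow.example.com/', proxies=proxies)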
2.7.0 (2015-05-03)
------------------
This is the first release that follows our new release process. For
more, see [our
documentation](http://docs.python-requests.org/en/latest/community/release-process/).
**Bugfixes**
- Updated urllib3 to 1.10.4, resolving several bugs involving chunked
transfer encoding and response framing.
2.6.2 (2015-04-23)
------------------
**Bugfixes**
- Fix regression where compressed data that was sent as chunked data
was not properly decompressed. (\#2561)
2.6.1 (2015-04-22)
------------------
**Bugfixes**
- Remove VendorAlias import machinery introduced in v2.5.2.
- Simplify the PreparedRequest.prepare API: We no longer require the
user to pass an empty list to the hooks keyword argument. (c.f.
\#2552)
- Resolve redirects now receives and forwards all of the original
arguments to the adapter. (\#2503)
- Handle UnicodeDecodeErrors when trying to deal with a unicode URL
that cannot be encoded in ASCII. (\#2540)
- Populate the parsed path of the URI field when performing Digest
Authentication. (\#2426)
- Copy a PreparedRequest's CookieJar more reliably when it is not an
instance of RequestsCookieJar. (\#2527)
2.6.0 (2015-03-14)
------------------
**Bugfixes**
- CVE-2015-2296: Fix handling of cookies on redirect. Previously a
cookie without a host value set would use the hostname for the
redirected URL exposing requests users to session fixation attacks
and potentially cookie stealing. This was disclosed privately by
Matthew Daley of [BugFuzz](https://bugfuzz.com). This affects all
versions of requests from v2.1.0 to v2.5.3 (inclusive on both ends).
- Fix error when requests is an `install_requires` dependency and
`python setup.py test` is run. (\#2462)
- Fix error when urllib3 is unbundled and requests continues to use
the vendored import location.
- Include fixes to `urllib3`'s header handling.
- Requests' handling of unvendored dependencies is now more
restrictive.
**Features and Improvements**
- Support bytearrays when passed as parameters in the `files`
argument. (\#2468)
- Avoid data duplication when creating a request with `str`, `bytes`,
or `bytearray` input to the `files` argument.
2.5.3 (2015-02-24)
------------------
**Bugfixes**
- Revert changes to our vendored certificate bundle. For more context
see \#2455 and \#2456.
2.5.2 (2015-02-23)
------------------
**Features and Improvements**
- Add sha256 fingerprint support.
([shazow/urllib3\#540](https://github.com/shazow/urllib3/pull/540))
- Improve the performance of headers.
([shazow/urllib3\#544](https://github.com/shazow/urllib3/pull/544))
**Bugfixes**
- Copy pip's import machinery. When downstream redistributors remove
requests.packages.urllib3 the import machinery will continue to let
those same symbols work. Example usage in requests' documentation
and 3rd-party libraries relying on the vendored copies of urllib3
will work without having to fallback to the system urllib3.
- Attempt to quote parts of the URL on redirect if unquoting and then
quoting fails. (\#2356)
- Fix filename type check for multipart form-data uploads. (\#2411)
- Properly handle the case where a server issuing digest
authentication challenges provides both auth and auth-int
qop-values. (\#2408)
- Fix a socket leak.
([shazow/urllib3\#549](https://github.com/shazow/urllib3/pull/549))
- Fix multiple `Set-Cookie` headers properly.
([shazow/urllib3\#534](https://github.com/shazow/urllib3/pull/534))
- Disable the built-in hostname verification.
([shazow/urllib3\#526](https://github.com/shazow/urllib3/pull/526))
- Fix the behaviour of decoding an exhausted stream.
([shazow/urllib3\#535](https://github.com/shazow/urllib3/pull/535))
**Security**
- Pulled in an updated `cacert.pem`.
- Drop RC4 from the default cipher list.
([shazow/urllib3\#551](https://github.com/shazow/urllib3/pull/551))
2.5.1 (2014-12-23)
------------------
**Behavioural Changes**
- Only catch HTTPErrors in raise\_for\_status (\#2382)
**Bugfixes**
- Handle LocationParseError from urllib3 (\#2344)
- Handle file-like object filenames that are not strings (\#2379)
- Unbreak HTTPDigestAuth handler. Allow new nonces to be negotiated
(\#2389)
2.5.0 (2014-12-01)
------------------
**Improvements**
- Allow usage of urllib3's Retry object with HTTPAdapters (\#2216)
- The `iter_lines` method on a response now accepts a delimiter with
which to split the content (\#2295)
**Behavioural Changes**
- Add deprecation warnings to functions in requests.utils that will be
removed in 3.0 (\#2309)
- Sessions used by the functional API are always closed (\#2326)
- Restrict requests to HTTP/1.1 and HTTP/1.0 (stop accepting HTTP/0.9)
(\#2323)
**Bugfixes**
- Only parse the URL once (\#2353)
- Allow Content-Length header to always be overridden (\#2332)
- Properly handle files in HTTPDigestAuth (\#2333)
- Cap redirect\_cache size to prevent memory abuse (\#2299)
- Fix HTTPDigestAuth handling of redirects after authenticating
successfully (\#2253)
- Fix crash with custom method parameter to Session.request (\#2317)
- Fix how Link headers are parsed using the regular expression library
(\#2271)
**Documentation**
- Add more references for interlinking (\#2348)
- Update CSS for theme (\#2290)
- Update width of buttons and sidebar (\#2289)
- Replace references of Gittip with Gratipay (\#2282)
- Add link to changelog in sidebar (\#2273)
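For instance, the new delimiter argument to `iter_lines` noted above
(the URL is a placeholder for any streaming endpoint):

    import requests

    r = requests.get('https://example.com/stream', stream=True)
    # New in 2.5.0: split the stream on a custom delimiter instead of
    # universal newlines. With a bytes body the delimiter must be bytes.
    for record in r.iter_lines(delimiter=b'\x00'):
        if record:
            print(record)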
2.4.3 (2014-10-06)
------------------
**Bugfixes**
- Unicode URL improvements for Python 2.
- Re-order JSON param for backwards compat.
- Automatically defrag authentication schemes from host/pass URIs.
([\#2249](https://github.com/requests/requests/issues/2249))
2.4.2 (2014-10-05)
------------------
**Improvements**
- FINALLY! Add json parameter for uploads!
([\#2258](https://github.com/requests/requests/pull/2258))
- Support for bytestring URLs on Python 3.x
([\#2238](https://github.com/requests/requests/pull/2238))
**Bugfixes**
- Avoid getting stuck in a loop
([\#2244](https://github.com/requests/requests/pull/2244))
- Multiple calls to iter\* fail with unhelpful error.
([\#2240](https://github.com/requests/requests/issues/2240),
[\#2241](https://github.com/requests/requests/issues/2241))
**Documentation**
- Correct redirection introduction
([\#2245](https://github.com/requests/requests/pull/2245/))
- Added example of how to send multiple files in one request.
([\#2227](https://github.com/requests/requests/pull/2227/))
- Clarify how to pass a custom set of CAs
([\#2248](https://github.com/requests/requests/pull/2248/))
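A sketch of the new `json` parameter next to the manual approach it
replaces (httpbin.org is just a demonstration endpoint):

    import requests

    # Before: json.dumps(payload) into data=, plus a manual
    # 'Content-Type: application/json' header.

    # With the json parameter, Requests serializes the payload and
    # sets the Content-Type header for you:
    r = requests.post('https://httpbin.org/post',
                      json={'user': 'alice', 'active': True})
    print(r.json()['json'])  # {'user': 'alice', 'active': True}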
2.4.1 (2014-09-09)
------------------
- Now has a "security" package extras set,
`$ pip install requests[security]`
- Requests will now use Certifi if it is available.
- Capture and re-raise urllib3 ProtocolError
- Bugfix for responses that attempt to redirect to themselves forever
(wtf?).
2.4.0 (2014-08-29)
------------------
**Behavioral Changes**
- `Connection: keep-alive` header is now sent automatically.
**Improvements**
- Support for connect timeouts! Timeout now accepts a tuple (connect,
read) which is used to set individual connect and read timeouts.
- Allow copying of PreparedRequests without headers/cookies.
- Updated bundled urllib3 version.
- Refactored settings loading from environment -- new
Session.merge\_environment\_settings.
- Handle socket errors in iter\_content.
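The tuple form separates the two phases; a quick sketch (timeout values
are illustrative):

    import requests

    # (connect timeout, read timeout), in seconds.
    r = requests.get('https://example.com/', timeout=(3.05, 27))

    # A single number still applies to both phases:
    r = requests.get('https://example.com/', timeout=5)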
2.3.0 (2014-05-16)
------------------
**API Changes**
- New `Response` property `is_redirect`, which is true when the
library could have processed this response as a redirection (whether
or not it actually did).
- The `timeout` parameter now affects requests with both `stream=True`
and `stream=False` equally.
- The change in v2.0.0 to mandate explicit proxy schemes has been
reverted. Proxy schemes now default to `http://`.
- The `CaseInsensitiveDict` used for HTTP headers now behaves like a
normal dictionary when referenced as a string or viewed in the
interpreter.
**Bugfixes**
- No longer expose Authorization or Proxy-Authorization headers on
redirect. Fix CVE-2014-1829 and CVE-2014-1830 respectively.
- Authorization is re-evaluated each redirect.
- On redirect, pass url as native strings.
- Fall-back to autodetected encoding for JSON when Unicode detection
fails.
- Headers set to `None` on the `Session` are now correctly not sent.
- Correctly honor `decode_unicode` even if it wasn't used earlier in
the same response.
- Stop advertising `compress` as a supported Content-Encoding.
- The `Response.history` parameter is now always a list.
- Many, many `urllib3` bugfixes.
2.2.1 (2014-01-23)
------------------
**Bugfixes**
- Fixes incorrect parsing of proxy credentials that contain a literal
or encoded '\#' character.
- Assorted urllib3 fixes.
2.2.0 (2014-01-09)
------------------
**API Changes**
- New exception: `ContentDecodingError`. Raised instead of `urllib3`
`DecodeError` exceptions.
**Bugfixes**
- Avoid many many exceptions from the buggy implementation of
`proxy_bypass` on OS X in Python 2.6.
- Avoid crashing when attempting to get authentication credentials
from \~/.netrc when running as a user without a home directory.
- Use the correct pool size for pools of connections to proxies.
- Fix iteration of `CookieJar` objects.
- Ensure that cookies are persisted over redirect.
- Switch back to using chardet, since it has merged with charade.
2.1.0 (2013-12-05)
------------------
- Updated CA Bundle, of course.
- Cookies set on individual Requests through a `Session` (e.g. via
`Session.get()`) are no longer persisted to the `Session`.
- Clean up connections when we hit problems during chunked upload,
rather than leaking them.
- Return connections to the pool when a chunked upload is successful,
rather than leaking it.
- Match the HTTPbis recommendation for HTTP 301 redirects.
- Prevent hanging when using streaming uploads and Digest Auth when a
401 is received.
- Values of headers set by Requests are now always the native string
type.
- Fix previously broken SNI support.
- Fix accessing HTTP proxies using proxy authentication.
- Unencode HTTP Basic usernames and passwords extracted from URLs.
- Support for IP address ranges for no\_proxy environment variable
- Parse headers correctly when users override the default `Host:`
header.
- Avoid munging the URL in case of case-sensitive servers.
- Looser URL handling for non-HTTP/HTTPS urls.
- Accept unicode methods in Python 2.6 and 2.7.
- More resilient cookie handling.
- Make `Response` objects pickleable.
- Actually added MD5-sess to Digest Auth instead of pretending to like
last time.
- Updated internal urllib3.
- Fixed @Lukasa's lack of taste.
2.0.1 (2013-10-24)
------------------
- Updated included CA Bundle with new mistrusts and automated process
for the future
- Added MD5-sess to Digest Auth
- Accept per-file headers in multipart file POST messages.
- Fixed: Don't send the full URL on CONNECT messages.
- Fixed: Correctly lowercase a redirect scheme.
- Fixed: Cookies not persisted when set via functional API.
- Fixed: Translate urllib3 ProxyError into a requests ProxyError
derived from ConnectionError.
- Updated internal urllib3 and chardet.
2.0.0 (2013-09-24)
------------------
**API Changes:**
- Keys in the Headers dictionary are now native strings on all Python
versions, i.e. bytestrings on Python 2, unicode on Python 3.
- Proxy URLs now *must* have an explicit scheme. A `MissingSchema`
exception will be raised if they don't.
- Timeouts now apply to read time if `stream=False`.
- `RequestException` is now a subclass of `IOError`, not
`RuntimeError`.
- Added new method to `PreparedRequest` objects:
`PreparedRequest.copy()`.
- Added new method to `Session` objects: `Session.update_request()`.
This method updates a `Request` object with the data (e.g. cookies)
stored on the `Session`.
- Added new method to `Session` objects: `Session.prepare_request()`.
This method updates and prepares a `Request` object, and returns the
corresponding `PreparedRequest` object.
- Added new method to `HTTPAdapter` objects:
`HTTPAdapter.proxy_headers()`. This should not be called directly,
but improves the subclass interface.
- `httplib.IncompleteRead` exceptions caused by incorrect chunked
encoding will now raise a Requests `ChunkedEncodingError` instead.
- Invalid percent-escape sequences now cause a Requests `InvalidURL`
exception to be raised.
- HTTP 208 no longer uses reason phrase `"im_used"`. Correctly uses
`"already_reported"`.
- HTTP 226 reason added (`"im_used"`).
**Bugfixes:**
- Vastly improved proxy support, including the CONNECT verb. Special
thanks to the many contributors who worked towards this improvement.
- Cookies are now properly managed when 401 authentication responses
are received.
- Chunked encoding fixes.
- Support for mixed case schemes.
- Better handling of streaming downloads.
- Retrieve environment proxies from more locations.
- Minor cookies fixes.
- Improved redirect behaviour.
- Improved streaming behaviour, particularly for compressed data.
- Miscellaneous small Python 3 text encoding bugs.
- `.netrc` no longer overrides explicit auth.
- Cookies set by hooks are now correctly persisted on Sessions.
- Fix problem with cookies that specify port numbers in their host
field.
- `BytesIO` can be used to perform streaming uploads.
- More generous parsing of the `no_proxy` environment variable.
- Non-string objects can be passed in data values alongside files.
1.2.3 (2013-05-25)
------------------
- Simple packaging fix
1.2.2 (2013-05-23)
------------------
- Simple packaging fix
1.2.1 (2013-05-20)
------------------
- 301 and 302 redirects now change the verb to GET for all verbs, not
just POST, improving browser compatibility.
- Python 3.3.2 compatibility
- Always percent-encode location headers
- Fix connection adapter matching to be most-specific first
- new argument to the default connection adapter for passing a block
argument
- Prevent a KeyError when there are no link headers
1.2.0 (2013-03-31)
------------------
- Fixed cookies on sessions and on requests
- Significantly change how hooks are dispatched - hooks now receive
all the arguments specified by the user when making a request so
hooks can make a secondary request with the same parameters. This is
especially necessary for authentication handler authors
- certifi support was removed
- Fixed bug where using OAuth 1 with body `signature_type` sent no
data
- Major proxy work thanks to @Lukasa including parsing of proxy
authentication from the proxy url
- Fix DigestAuth handling too many 401s
- Update vendored urllib3 to include SSL bug fixes
- Allow keyword arguments to be passed to `json.loads()` via the
`Response.json()` method
- Don't send `Content-Length` header by default on `GET` or `HEAD`
requests
- Add `elapsed` attribute to `Response` objects to time how long a
request took.
- Fix `RequestsCookieJar`
- Sessions and Adapters are now picklable, i.e., can be used with the
multiprocessing library
- Update charade to version 1.0.3
The change in how hooks are dispatched will likely cause a great deal of
issues.
1.1.0 (2013-01-10)
------------------
- CHUNKED REQUESTS
- Support for iterable response bodies
- Assume servers persist redirect params
- Allow explicit content types to be specified for file data
- Make merge\_kwargs case-insensitive when looking up keys
1.0.3 (2012-12-18)
------------------
- Fix file upload encoding bug
- Fix cookie behavior
1.0.2 (2012-12-17)
------------------
- Proxy fix for HTTPAdapter.
1.0.1 (2012-12-17)
------------------
- Cert verification exception bug.
- Proxy fix for HTTPAdapter.
1.0.0 (2012-12-17)
------------------
- Massive Refactor and Simplification
- Switch to Apache 2.0 license
- Swappable Connection Adapters
- Mountable Connection Adapters
- Mutable ProcessedRequest chain
- /s/prefetch/stream
- Removal of all configuration
- Standard library logging
- Make Response.json() callable, not property.
- Usage of new charade project, which provides python 2 and 3
simultaneous chardet.
- Removal of all hooks except 'response'
- Removal of all authentication helpers (OAuth, Kerberos)
This is not a backwards compatible change.
0.14.2 (2012-10-27)
-------------------
- Improved mime-compatible JSON handling
- Proxy fixes
- Path hack fixes
- Case-Insensitive Content-Encoding headers
- Support for CJK parameters in form posts
0.14.1 (2012-10-01)
-------------------
- Python 3.3 Compatibility
- Simplify default accept-encoding
- Bugfixes
0.14.0 (2012-09-02)
-------------------
- No more iter\_content errors if already downloaded.
0.13.9 (2012-08-25)
-------------------
- Fix for OAuth + POSTs
- Remove exception eating from dispatch\_hook
- General bugfixes
0.13.8 (2012-08-21)
-------------------
- Incredible Link header support :)
0.13.7 (2012-08-19)
-------------------
- Support for (key, value) lists everywhere.
- Digest Authentication improvements.
- Ensure proxy exclusions work properly.
- Clearer UnicodeError exceptions.
- Automatic casting of URLs to strings (fURL and such)
- Bugfixes.
0.13.6 (2012-08-06)
-------------------
- Long awaited fix for hanging connections!
0.13.5 (2012-07-27)
-------------------
- Packaging fix
0.13.4 (2012-07-27)
-------------------
- GSSAPI/Kerberos authentication!
- App Engine 2.7 Fixes!
- Fix leaking connections (from urllib3 update)
- OAuthlib path hack fix
- OAuthlib URL parameters fix.
0.13.3 (2012-07-12)
-------------------
- Use simplejson if available.
- Do not hide SSLErrors behind Timeouts.
- Fixed param handling with urls containing fragments.
- Significantly improved information in User Agent.
- client certificates are ignored when verify=False
0.13.2 (2012-06-28)
-------------------
- Zero dependencies (once again)!
- New: Response.reason
- Sign querystring parameters in OAuth 1.0
- Client certificates no longer ignored when verify=False
- Add openSUSE certificate support
0.13.1 (2012-06-07)
-------------------
- Allow passing a file or file-like object as data.
- Allow hooks to return responses that indicate errors.
- Fix Response.text and Response.json for body-less responses.
0.13.0 (2012-05-29)
-------------------
- Removal of Requests.async in favor of
[grequests](https://github.com/kennethreitz/grequests)
- Allow disabling of cookie persistence.
- New implementation of safe\_mode
- cookies.get now supports default argument
- Session cookies not saved when Session.request is called with
return\_response=False
- Env: no\_proxy support.
- RequestsCookieJar improvements.
- Various bug fixes.
0.12.1 (2012-05-08)
-------------------
- New `Response.json` property.
- Ability to add string file uploads.
- Fix out-of-range issue with iter\_lines.
- Fix iter\_content default size.
- Fix POST redirects containing files.
0.12.0 (2012-05-02)
-------------------
- EXPERIMENTAL OAUTH SUPPORT!
- Proper CookieJar-backed cookies interface with awesome dict-like
interface.
- Speed fix for non-iterated content chunks.
- Move `pre_request` to a more usable place.
- New `pre_send` hook.
- Lazily encode data, params, files.
- Load system Certificate Bundle if `certify` isn't available.
- Cleanups, fixes.
0.11.2 (2012-04-22)
-------------------
- Attempt to use the OS's certificate bundle if `certifi` isn't
available.
- Infinite digest auth redirect fix.
- Multi-part file upload improvements.
- Fix decoding of invalid %encodings in URLs.
- If there is no content in a response, don't raise an error the
second time that content is attempted to be read.
- Upload data on redirects.
0.11.1 (2012-03-30)
-------------------
- POST redirects now break RFC to do what browsers do: Follow up with
a GET.
- New `strict_mode` configuration to disable new redirect behavior.
0.11.0 (2012-03-14)
-------------------
- Private SSL Certificate support
- Remove select.poll from Gevent monkeypatching
- Remove redundant generator for chunked transfer encoding
- Fix: Response.ok raises Timeout Exception in safe\_mode
0.10.8 (2012-03-09)
-------------------
- Generate chunked ValueError fix
- Proxy configuration by environment variables
- Simplification of iter\_lines.
- New trust\_env configuration for disabling system/environment hints.
- Suppress cookie errors.
0.10.7 (2012-03-07)
-------------------
- encode\_uri = False
0.10.6 (2012-02-25)
-------------------
- Allow '=' in cookies.
0.10.5 (2012-02-25)
-------------------
- Response body with 0 content-length fix.
- New async.imap.
- Don't fail on netrc.
0.10.4 (2012-02-20)
-------------------
- Honor netrc.
0.10.3 (2012-02-20)
-------------------
- HEAD requests don't follow redirects anymore.
- raise\_for\_status() doesn't raise for 3xx anymore.
- Make Session objects picklable.
- ValueError for invalid schema URLs.
0.10.2 (2012-01-15)
-------------------
- Vastly improved URL quoting.
- Additional allowed cookie key values.
- Attempted fix for "Too many open files" Error
- Replace unicode errors on first pass, no need for second pass.
- Append '/' to bare-domain urls before query insertion.
- Exceptions now inherit from RuntimeError.
- Binary uploads + auth fix.
- Bugfixes.
0.10.1 (2012-01-23)
-------------------
- PYTHON 3 SUPPORT!
- Dropped 2.5 Support. (*Backwards Incompatible*)
0.10.0 (2012-01-21)
-------------------
- `Response.content` is now bytes-only. (*Backwards Incompatible*)
- New `Response.text` is unicode-only.
- If no `Response.encoding` is specified and `chardet` is available,
`Response.text` will guess an encoding.
- Default to ISO-8859-1 (Western) encoding for "text" subtypes.
- Removal of decode\_unicode. (*Backwards Incompatible*)
- New multiple-hooks system.
- New `Response.register_hook` for registering hooks within the
pipeline.
- `Response.url` is now Unicode.
0.9.3 (2012-01-18)
------------------
- SSL verify=False bugfix (apparent on windows machines).
0.9.2 (2012-01-18)
------------------
- Asynchronous async.send method.
- Support for proper chunk streams with boundaries.
- session argument for Session classes.
- Print entire hook tracebacks, not just exception instance.
- Fix response.iter\_lines from pending next line.
- Fix bug in HTTP-digest auth w/ URI having query strings.
- Fix in Event Hooks section.
- Urllib3 update.
0.9.1 (2012-01-06)
------------------
- danger\_mode for automatic Response.raise\_for\_status()
- Response.iter\_lines refactor
0.9.0 (2011-12-28)
------------------
- verify ssl is default.
0.8.9 (2011-12-28)
------------------
- Packaging fix.
0.8.8 (2011-12-28)
------------------
- SSL CERT VERIFICATION!
- Release of Certifi: Mozilla's cert list.
- New 'verify' argument for SSL requests.
- Urllib3 update.
0.8.7 (2011-12-24)
------------------
- iter\_lines last-line truncation fix
- Force safe\_mode for async requests
- Handle safe\_mode exceptions more consistently
- Fix iteration on null responses in safe\_mode
0.8.6 (2011-12-18)
------------------
- Socket timeout fixes.
- Proxy Authorization support.
0.8.5 (2011-12-14)
------------------
- Response.iter\_lines!
0.8.4 (2011-12-11)
------------------
- Prefetch bugfix.
- Added license to installed version.
0.8.3 (2011-11-27)
------------------
- Converted auth system to use simpler callable objects.
- New session parameter to API methods.
- Display full URL while logging.
0.8.2 (2011-11-19)
------------------
- New Unicode decoding system, based on over-ridable
Response.encoding.
- Proper URL slash-quote handling.
- Cookies with `[`, `]`, and `_` allowed.
0.8.1 (2011-11-15)
------------------
- URL Request path fix
- Proxy fix.
- Timeouts fix.
0.8.0 (2011-11-13)
------------------
- Keep-alive support!
- Complete removal of Urllib2
- Complete removal of Poster
- Complete removal of CookieJars
- New ConnectionError raising
- Safe\_mode for error catching
- prefetch parameter for request methods
- OPTION method
- Async pool size throttling
- File uploads send real names
- Vendored in urllib3
0.7.6 (2011-11-07)
------------------
- Digest authentication bugfix (attach query data to path)
0.7.5 (2011-11-04)
------------------
- Response.content = None if there was an invalid response.
- Redirection auth handling.
0.7.4 (2011-10-26)
------------------
- Session Hooks fix.
0.7.3 (2011-10-23)
------------------
- Digest Auth fix.
0.7.2 (2011-10-23)
------------------
- PATCH Fix.
0.7.1 (2011-10-23)
------------------
- Move away from urllib2 authentication handling.
- Fully Remove AuthManager, AuthObject, &c.
- New tuple-based auth system with handler callbacks.
0.7.0 (2011-10-22)
------------------
- Sessions are now the primary interface.
- Deprecated InvalidMethodException.
- PATCH fix.
- New config system (no more global settings).
0.6.6 (2011-10-19)
------------------
- Session parameter bugfix (params merging).
0.6.5 (2011-10-18)
------------------
- Offline (fast) test suite.
- Session dictionary argument merging.
0.6.4 (2011-10-13)
------------------
- Automatic decoding of unicode, based on HTTP Headers.
- New `decode_unicode` setting.
- Removal of `r.read/close` methods.
- New `r.raw` interface for advanced response usage.\*
- Automatic expansion of parameterized headers.
0.6.3 (2011-10-13)
------------------
- Beautiful `requests.async` module, for making async requests w/
gevent.
0.6.2 (2011-10-09)
------------------
- GET/HEAD obeys allow\_redirects=False.
0.6.1 (2011-08-20)
------------------
- Enhanced status codes experience `\o/`
- Set a maximum number of redirects (`settings.max_redirects`)
- Full Unicode URL support
- Support for protocol-less redirects.
- Allow for arbitrary request types.
- Bugfixes
0.6.0 (2011-08-17)
------------------
- New callback hook system
- New persistent sessions object and context manager
- Transparent Dict-cookie handling
- Status code reference object
- Removed Response.cached
- Added Response.request
- All args are kwargs
- Relative redirect support
- HTTPError handling improvements
- Improved https testing
- Bugfixes
0.5.1 (2011-07-23)
------------------
- International Domain Name Support!
- Access headers without fetching entire body (`read()`)
- Use lists as dicts for parameters
- Add Forced Basic Authentication
- Forced Basic is default authentication type
- `python-requests.org` default User-Agent header
- CaseInsensitiveDict lower-case caching
- Response.history bugfix
0.5.0 (2011-06-21)
------------------
- PATCH Support
- Support for Proxies
- HTTPBin Test Suite
- Redirect Fixes
- settings.verbose stream writing
- Querystrings for all methods
- URLErrors (Connection Refused, Timeout, Invalid URLs) are treated as
explicitly raised
`r = requests.get('hwe://blah'); r.raise_for_status()`
0.4.1 (2011-05-22)
------------------
- Improved Redirection Handling
- New 'allow\_redirects' param for following non-GET/HEAD Redirects
- Settings module refactoring
0.4.0 (2011-05-15)
------------------
- Response.history: list of redirected responses
- Case-Insensitive Header Dictionaries!
- Unicode URLs
0.3.4 (2011-05-14)
------------------
- Urllib2 HTTPAuthentication Recursion fix (Basic/Digest)
- Internal Refactor
- Bytes data upload Bugfix
0.3.3 (2011-05-12)
------------------
- Request timeouts
- Unicode url-encoded data
- Settings context manager and module
0.3.2 (2011-04-15)
------------------
- Automatic Decompression of GZip Encoded Content
- AutoAuth Support for Tupled HTTP Auth
0.3.1 (2011-04-01)
------------------
- Cookie Changes
- Response.read()
- Poster fix
0.3.0 (2011-02-25)
------------------
- Automatic Authentication API Change
- Smarter Query URL Parameterization
- Allow file uploads and POST data together
- New Authentication Manager System
    - Simpler Basic HTTP System
    - Supports all built-in urllib2 Auths
    - Allows for custom Auth Handlers
0.2.4 (2011-02-19)
------------------
- Python 2.5 Support
- PyPy-c v1.4 Support
- Auto-Authentication tests
- Improved Request object constructor
0.2.3 (2011-02-15)
------------------
- New HTTP Handling Methods
    - Response.\_\_nonzero\_\_ (false if bad HTTP Status)
    - Response.ok (True if expected HTTP Status)
    - Response.error (Logged HTTPError if bad HTTP Status)
    - Response.raise\_for\_status() (Raises stored HTTPError)
0.2.2 (2011-02-14)
------------------
- Still handles request in the event of an HTTPError. (Issue \#2)
- Eventlet and Gevent Monkeypatch support.
- Cookie Support (Issue \#1)
0.2.1 (2011-02-14)
------------------
- Added file attribute to POST and PUT requests for multipart-encode
file uploads.
- Added Request.url attribute for context and redirects
0.2.0 (2011-02-14)
------------------
- Birth!
0.0.1 (2011-02-13)
------------------
- Frustration
- Conception
requests-2.22.0/tests/test_utils.py
# -*- coding: utf-8 -*-
import os
import copy
import filecmp
from io import BytesIO
import zipfile
from collections import deque
import pytest
from requests import compat
from requests.cookies import RequestsCookieJar
from requests.structures import CaseInsensitiveDict
from requests.utils import (
address_in_network, dotted_netmask, extract_zipped_paths,
get_auth_from_url, _parse_content_type_header, get_encoding_from_headers,
get_encodings_from_content, get_environ_proxies,
guess_filename, guess_json_utf, is_ipv4_address,
is_valid_cidr, iter_slices, parse_dict_header,
parse_header_links, prepend_scheme_if_needed,
requote_uri, select_proxy, should_bypass_proxies, super_len,
to_key_val_list, to_native_string,
unquote_header_value, unquote_unreserved,
urldefragauth, add_dict_to_cookiejar, set_environ)
from requests._internal_utils import unicode_is_ascii
from .compat import StringIO, cStringIO
class TestSuperLen:
@pytest.mark.parametrize(
'stream, value', (
(StringIO.StringIO, 'Test'),
(BytesIO, b'Test'),
pytest.mark.skipif('cStringIO is None')((cStringIO, 'Test')),
))
def test_io_streams(self, stream, value):
"""Ensures that we properly deal with different kinds of IO streams."""
assert super_len(stream()) == 0
assert super_len(stream(value)) == 4
def test_super_len_correctly_calculates_len_of_partially_read_file(self):
"""Ensure that we handle partially consumed file like objects."""
s = StringIO.StringIO()
s.write('foobarbogus')
assert super_len(s) == 0
@pytest.mark.parametrize('error', [IOError, OSError])
def test_super_len_handles_files_raising_weird_errors_in_tell(self, error):
"""If tell() raises errors, assume the cursor is at position zero."""
class BoomFile(object):
def __len__(self):
return 5
def tell(self):
raise error()
assert super_len(BoomFile()) == 0
@pytest.mark.parametrize('error', [IOError, OSError])
def test_super_len_tell_ioerror(self, error):
"""Ensure that if tell gives an IOError super_len doesn't fail"""
class NoLenBoomFile(object):
def tell(self):
raise error()
def seek(self, offset, whence):
pass
assert super_len(NoLenBoomFile()) == 0
def test_string(self):
assert super_len('Test') == 4
@pytest.mark.parametrize(
'mode, warnings_num', (
('r', 1),
('rb', 0),
))
def test_file(self, tmpdir, mode, warnings_num, recwarn):
file_obj = tmpdir.join('test.txt')
file_obj.write('Test')
with file_obj.open(mode) as fd:
assert super_len(fd) == 4
assert len(recwarn) == warnings_num
def test_super_len_with__len__(self):
foo = [1,2,3,4]
len_foo = super_len(foo)
assert len_foo == 4
def test_super_len_with_no__len__(self):
class LenFile(object):
def __init__(self):
self.len = 5
assert super_len(LenFile()) == 5
def test_super_len_with_tell(self):
foo = StringIO.StringIO('12345')
assert super_len(foo) == 5
foo.read(2)
assert super_len(foo) == 3
def test_super_len_with_fileno(self):
with open(__file__, 'rb') as f:
length = super_len(f)
file_data = f.read()
assert length == len(file_data)
def test_super_len_with_no_matches(self):
"""Ensure that objects without any length methods default to 0"""
assert super_len(object()) == 0
class TestToKeyValList:
@pytest.mark.parametrize(
'value, expected', (
([('key', 'val')], [('key', 'val')]),
((('key', 'val'), ), [('key', 'val')]),
({'key': 'val'}, [('key', 'val')]),
(None, None)
))
def test_valid(self, value, expected):
assert to_key_val_list(value) == expected
def test_invalid(self):
with pytest.raises(ValueError):
to_key_val_list('string')
class TestUnquoteHeaderValue:
@pytest.mark.parametrize(
'value, expected', (
(None, None),
('Test', 'Test'),
('"Test"', 'Test'),
('"Test\\\\"', 'Test\\'),
('"\\\\Comp\\Res"', '\\Comp\\Res'),
))
def test_valid(self, value, expected):
assert unquote_header_value(value) == expected
def test_is_filename(self):
assert unquote_header_value('"\\\\Comp\\Res"', True) == '\\\\Comp\\Res'
class TestGetEnvironProxies:
"""Ensures that IP addresses are correctly matches with ranges
in no_proxy variable.
"""
@pytest.fixture(autouse=True, params=['no_proxy', 'NO_PROXY'])
def no_proxy(self, request, monkeypatch):
monkeypatch.setenv(request.param, '192.168.0.0/24,127.0.0.1,localhost.localdomain,172.16.1.1')
@pytest.mark.parametrize(
'url', (
'http://192.168.0.1:5000/',
'http://192.168.0.1/',
'http://172.16.1.1/',
'http://172.16.1.1:5000/',
'http://localhost.localdomain:5000/v1.0/',
))
def test_bypass(self, url):
assert get_environ_proxies(url, no_proxy=None) == {}
@pytest.mark.parametrize(
'url', (
'http://192.168.1.1:5000/',
'http://192.168.1.1/',
'http://www.requests.com/',
))
def test_not_bypass(self, url):
assert get_environ_proxies(url, no_proxy=None) != {}
@pytest.mark.parametrize(
'url', (
'http://192.168.1.1:5000/',
'http://192.168.1.1/',
'http://www.requests.com/',
))
def test_bypass_no_proxy_keyword(self, url):
no_proxy = '192.168.1.1,requests.com'
assert get_environ_proxies(url, no_proxy=no_proxy) == {}
@pytest.mark.parametrize(
'url', (
'http://192.168.0.1:5000/',
'http://192.168.0.1/',
'http://172.16.1.1/',
'http://172.16.1.1:5000/',
'http://localhost.localdomain:5000/v1.0/',
))
def test_not_bypass_no_proxy_keyword(self, url, monkeypatch):
# This is testing that the 'no_proxy' argument overrides the
# environment variable 'no_proxy'
monkeypatch.setenv('http_proxy', 'http://proxy.example.com:3128/')
no_proxy = '192.168.1.1,requests.com'
assert get_environ_proxies(url, no_proxy=no_proxy) != {}
class TestIsIPv4Address:
def test_valid(self):
assert is_ipv4_address('8.8.8.8')
@pytest.mark.parametrize('value', ('8.8.8.8.8', 'localhost.localdomain'))
def test_invalid(self, value):
assert not is_ipv4_address(value)
class TestIsValidCIDR:
def test_valid(self):
assert is_valid_cidr('192.168.1.0/24')
@pytest.mark.parametrize(
'value', (
'8.8.8.8',
'192.168.1.0/a',
'192.168.1.0/128',
'192.168.1.0/-1',
'192.168.1.999/24',
))
def test_invalid(self, value):
assert not is_valid_cidr(value)
class TestAddressInNetwork:
def test_valid(self):
assert address_in_network('192.168.1.1', '192.168.1.0/24')
def test_invalid(self):
assert not address_in_network('172.16.0.1', '192.168.1.0/24')
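# Editor's note: a self-contained sketch of the CIDR membership check the
# two classes above test. The helper name is an illustrative assumption.
import socket
import struct

def _sketch_address_in_network(ip, cidr):
    addr, bits = cidr.split('/')
    # Convert dotted quads to integers and compare under the prefix mask.
    ip_int = struct.unpack('!L', socket.inet_aton(ip))[0]
    net_int = struct.unpack('!L', socket.inet_aton(addr))[0]
    mask = 0xffffffff ^ ((1 << (32 - int(bits))) - 1)
    return (ip_int & mask) == (net_int & mask)

# _sketch_address_in_network('192.168.1.1', '192.168.1.0/24') -> True
# _sketch_address_in_network('172.16.0.1', '192.168.1.0/24')  -> False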
class TestGuessFilename:
@pytest.mark.parametrize(
'value', (1, type('Fake', (object,), {'name': 1})()),
)
def test_guess_filename_invalid(self, value):
assert guess_filename(value) is None
@pytest.mark.parametrize(
'value, expected_type', (
(b'value', compat.bytes),
(b'value'.decode('utf-8'), compat.str)
))
def test_guess_filename_valid(self, value, expected_type):
obj = type('Fake', (object,), {'name': value})()
result = guess_filename(obj)
assert result == value
assert isinstance(result, expected_type)
class TestExtractZippedPaths:
@pytest.mark.parametrize(
'path', (
'/',
__file__,
pytest.__file__,
'/etc/invalid/location',
))
def test_unzipped_paths_unchanged(self, path):
assert path == extract_zipped_paths(path)
def test_zipped_paths_extracted(self, tmpdir):
zipped_py = tmpdir.join('test.zip')
with zipfile.ZipFile(zipped_py.strpath, 'w') as f:
f.write(__file__)
_, name = os.path.splitdrive(__file__)
zipped_path = os.path.join(zipped_py.strpath, name.lstrip(r'\/'))
extracted_path = extract_zipped_paths(zipped_path)
assert extracted_path != zipped_path
assert os.path.exists(extracted_path)
assert filecmp.cmp(extracted_path, __file__)
class TestContentEncodingDetection:
def test_none(self):
encodings = get_encodings_from_content('')
assert not len(encodings)
@pytest.mark.parametrize(
'content', (
            # HTML5 meta charset attribute
            '<meta charset="UTF-8">',
            # HTML4 pragma directive
            '<meta http-equiv="Content-type" content="text/html;charset=UTF-8">',
            # XHTML 1.x served with text/html MIME type
            '<meta http-equiv="Content-type" content="text/html;charset=UTF-8"/>',
            # XHTML 1.x served as XML
            '<?xml version="1.0" encoding="UTF-8"?>',
))
def test_pragmas(self, content):
encodings = get_encodings_from_content(content)
assert len(encodings) == 1
assert encodings[0] == 'UTF-8'
def test_precedence(self):
        content = '''
        <?xml version="1.0" encoding="XML"?>
        <meta charset="HTML5">
        <meta http-equiv="Content-type" content="text/html;charset=HTML4" />
        '''.strip()
        assert get_encodings_from_content(content) == ['HTML5', 'HTML4', 'XML']
class TestGuessJSONUTF:
@pytest.mark.parametrize(
'encoding', (
'utf-32', 'utf-8-sig', 'utf-16', 'utf-8', 'utf-16-be', 'utf-16-le',
'utf-32-be', 'utf-32-le'
))
def test_encoded(self, encoding):
data = '{}'.encode(encoding)
assert guess_json_utf(data) == encoding
def test_bad_utf_like_encoding(self):
assert guess_json_utf(b'\x00\x00\x00\x00') is None
@pytest.mark.parametrize(
('encoding', 'expected'), (
('utf-16-be', 'utf-16'),
('utf-16-le', 'utf-16'),
('utf-32-be', 'utf-32'),
('utf-32-le', 'utf-32')
))
def test_guess_by_bom(self, encoding, expected):
data = u'\ufeff{}'.encode(encoding)
assert guess_json_utf(data) == expected
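# Editor's note: a compact sketch of the BOM-driven detection that
# test_guess_by_bom exercises (illustrative; the shipped code also
# inspects null-byte patterns for BOM-less input).
import codecs

def _sketch_guess_by_bom(data):
    sample = data[:4]
    if sample.startswith(codecs.BOM_UTF8):
        return 'utf-8-sig'
    # Check UTF-32 first: its little-endian BOM begins with the UTF-16 one.
    if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
        return 'utf-32'
    if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
        return 'utf-16'
    return None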
USER = PASSWORD = "%!*'();:@&=+$,/?#[] "
ENCODED_USER = compat.quote(USER, '')
ENCODED_PASSWORD = compat.quote(PASSWORD, '')
@pytest.mark.parametrize(
'url, auth', (
(
'http://' + ENCODED_USER + ':' + ENCODED_PASSWORD + '@' +
'request.com/url.html#test',
(USER, PASSWORD)
),
(
'http://user:pass@complex.url.com/path?query=yes',
('user', 'pass')
),
(
'http://user:pass%20pass@complex.url.com/path?query=yes',
('user', 'pass pass')
),
(
'http://user:pass pass@complex.url.com/path?query=yes',
('user', 'pass pass')
),
(
'http://user%25user:pass@complex.url.com/path?query=yes',
('user%user', 'pass')
),
(
'http://user:pass%23pass@complex.url.com/path?query=yes',
('user', 'pass#pass')
),
(
'http://complex.url.com/path?query=yes',
('', '')
),
))
def test_get_auth_from_url(url, auth):
assert get_auth_from_url(url) == auth
@pytest.mark.parametrize(
'uri, expected', (
(
# Ensure requoting doesn't break expectations
'http://example.com/fiz?buz=%25ppicture',
'http://example.com/fiz?buz=%25ppicture',
),
(
# Ensure we handle unquoted percent signs in redirects
'http://example.com/fiz?buz=%ppicture',
'http://example.com/fiz?buz=%25ppicture',
),
))
def test_requote_uri_with_unquoted_percents(uri, expected):
"""See: https://github.com/requests/requests/issues/2356"""
assert requote_uri(uri) == expected
@pytest.mark.parametrize(
'uri, expected', (
(
# Illegal bytes
'http://example.com/?a=%--',
'http://example.com/?a=%--',
),
(
# Reserved characters
'http://example.com/?a=%300',
'http://example.com/?a=00',
)
))
def test_unquote_unreserved(uri, expected):
assert unquote_unreserved(uri) == expected
@pytest.mark.parametrize(
'mask, expected', (
(8, '255.0.0.0'),
(24, '255.255.255.0'),
(25, '255.255.255.128'),
))
def test_dotted_netmask(mask, expected):
assert dotted_netmask(mask) == expected
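# Editor's note: the arithmetic behind these cases, sketched with an
# illustrative helper name: /25 keeps the top 25 bits set, i.e.
# 0xffffff80 == 255.255.255.128.
import socket
import struct

def _sketch_dotted_netmask(mask):
    bits = 0xffffffff ^ ((1 << (32 - mask)) - 1)
    return socket.inet_ntoa(struct.pack('>I', bits))

# _sketch_dotted_netmask(25) -> '255.255.255.128'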
http_proxies = {'http': 'http://http.proxy',
'http://some.host': 'http://some.host.proxy'}
all_proxies = {'all': 'socks5://http.proxy',
'all://some.host': 'socks5://some.host.proxy'}
mixed_proxies = {'http': 'http://http.proxy',
'http://some.host': 'http://some.host.proxy',
'all': 'socks5://http.proxy'}
@pytest.mark.parametrize(
'url, expected, proxies', (
('hTTp://u:p@Some.Host/path', 'http://some.host.proxy', http_proxies),
('hTTp://u:p@Other.Host/path', 'http://http.proxy', http_proxies),
('hTTp:///path', 'http://http.proxy', http_proxies),
('hTTps://Other.Host', None, http_proxies),
('file:///etc/motd', None, http_proxies),
('hTTp://u:p@Some.Host/path', 'socks5://some.host.proxy', all_proxies),
('hTTp://u:p@Other.Host/path', 'socks5://http.proxy', all_proxies),
('hTTp:///path', 'socks5://http.proxy', all_proxies),
('hTTps://Other.Host', 'socks5://http.proxy', all_proxies),
('http://u:p@other.host/path', 'http://http.proxy', mixed_proxies),
('http://u:p@some.host/path', 'http://some.host.proxy', mixed_proxies),
('https://u:p@other.host/path', 'socks5://http.proxy', mixed_proxies),
('https://u:p@some.host/path', 'socks5://http.proxy', mixed_proxies),
('https://', 'socks5://http.proxy', mixed_proxies),
# XXX: unsure whether this is reasonable behavior
('file:///etc/motd', 'socks5://http.proxy', all_proxies),
))
def test_select_proxies(url, expected, proxies):
"""Make sure we can select per-host proxies correctly."""
assert select_proxy(url, proxies) == expected
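# Editor's note: a hedged sketch of the per-host lookup order the cases
# above pin down (illustrative): 'scheme://host' beats 'scheme', which
# beats 'all://host', which beats 'all'.
from requests.compat import urlparse

def _sketch_select_proxy(url, proxies):
    parts = urlparse(url)
    if parts.hostname is None:
        # e.g. 'hTTp:///path' has no host: fall back to scheme, then 'all'.
        return proxies.get(parts.scheme, proxies.get('all'))
    for key in (parts.scheme + '://' + parts.hostname,
                parts.scheme,
                'all://' + parts.hostname,
                'all'):
        if key in proxies:
            return proxies[key]
    return None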
@pytest.mark.parametrize(
'value, expected', (
('foo="is a fish", bar="as well"', {'foo': 'is a fish', 'bar': 'as well'}),
('key_without_value', {'key_without_value': None})
))
def test_parse_dict_header(value, expected):
assert parse_dict_header(value) == expected
@pytest.mark.parametrize(
'value, expected', (
(
'application/xml',
('application/xml', {})
),
(
'application/json ; charset=utf-8',
('application/json', {'charset': 'utf-8'})
),
(
'application/json ; Charset=utf-8',
('application/json', {'charset': 'utf-8'})
),
(
'text/plain',
('text/plain', {})
),
(
'multipart/form-data; boundary = something ; boundary2=\'something_else\' ; no_equals ',
('multipart/form-data', {'boundary': 'something', 'boundary2': 'something_else', 'no_equals': True})
),
(
'multipart/form-data; boundary = something ; boundary2="something_else" ; no_equals ',
('multipart/form-data', {'boundary': 'something', 'boundary2': 'something_else', 'no_equals': True})
),
(
'multipart/form-data; boundary = something ; \'boundary2=something_else\' ; no_equals ',
('multipart/form-data', {'boundary': 'something', 'boundary2': 'something_else', 'no_equals': True})
),
(
'multipart/form-data; boundary = something ; "boundary2=something_else" ; no_equals ',
('multipart/form-data', {'boundary': 'something', 'boundary2': 'something_else', 'no_equals': True})
),
(
'application/json ; ; ',
('application/json', {})
)
))
def test__parse_content_type_header(value, expected):
assert _parse_content_type_header(value) == expected
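# Editor's note: a sketch of the tokenizing behaviour asserted above
# (illustrative, not the shipped _parse_content_type_header): split on
# ';', lower-case keys, strip quotes and spaces, and map bare tokens
# such as 'no_equals' to True.
def _sketch_parse_content_type(header):
    tokens = header.split(';')
    content_type, params = tokens[0].strip(), {}
    for param in (token.strip() for token in tokens[1:]):
        if not param:
            continue  # tolerate empty segments like 'application/json ; ; '
        key, _, value = param.partition('=')
        params[key.strip("\"' ").lower()] = value.strip("\"' ") or True
    return content_type, params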
@pytest.mark.parametrize(
'value, expected', (
(
CaseInsensitiveDict(),
None
),
(
CaseInsensitiveDict({'content-type': 'application/json; charset=utf-8'}),
'utf-8'
),
(
CaseInsensitiveDict({'content-type': 'text/plain'}),
'ISO-8859-1'
),
))
def test_get_encoding_from_headers(value, expected):
assert get_encoding_from_headers(value) == expected
@pytest.mark.parametrize(
'value, length', (
('', 0),
('T', 1),
('Test', 4),
('Cont', 0),
('Other', -5),
('Content', None),
))
def test_iter_slices(value, length):
if length is None or (length <= 0 and len(value) > 0):
# Reads all content at once
assert len(list(iter_slices(value, length))) == 1
else:
assert len(list(iter_slices(value, 1))) == length
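# Editor's note: a sketch of the generator contract asserted above
# (illustrative): None or non-positive slice lengths collapse the whole
# string into a single slice.
def _sketch_iter_slices(string, slice_length):
    pos = 0
    if slice_length is None or slice_length <= 0:
        slice_length = len(string)
    while pos < len(string):
        yield string[pos:pos + slice_length]
        pos += slice_length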
@pytest.mark.parametrize(
'value, expected', (
(
            '<http:/.../front.jpeg>; rel=front; type="image/jpeg"',
[{'url': 'http:/.../front.jpeg', 'rel': 'front', 'type': 'image/jpeg'}]
),
(
            '<http:/.../front.jpeg>',
[{'url': 'http:/.../front.jpeg'}]
),
(
            '<http:/.../front.jpeg>;',
[{'url': 'http:/.../front.jpeg'}]
),
(
            '<http:/.../front.jpeg>; type="image/jpeg",<http://.../back.jpeg>;',
[
{'url': 'http:/.../front.jpeg', 'type': 'image/jpeg'},
{'url': 'http://.../back.jpeg'}
]
),
(
'',
[]
),
))
def test_parse_header_links(value, expected):
assert parse_header_links(value) == expected
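# Editor's note: a hedged sketch of the RFC 5988-style parsing the cases
# above describe (illustrative): '<url>; k=v, <url2>; ...' becomes a list
# of dicts keyed by 'url' plus any parameters.
import re

def _sketch_parse_header_links(value):
    links, strip_chars = [], ' \'"'
    value = value.strip(strip_chars)
    if not value:
        return links
    for segment in re.split(', *<', value):
        url, _, params = segment.partition(';')
        link = {'url': url.strip('<> \'"')}
        for param in params.split(';'):
            key, _, val = param.partition('=')
            if not val:
                break
            link[key.strip(strip_chars)] = val.strip(strip_chars)
        links.append(link)
    return links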
@pytest.mark.parametrize(
'value, expected', (
('example.com/path', 'http://example.com/path'),
('//example.com/path', 'http://example.com/path'),
))
def test_prepend_scheme_if_needed(value, expected):
assert prepend_scheme_if_needed(value, 'http') == expected
@pytest.mark.parametrize(
'value, expected', (
('T', 'T'),
(b'T', 'T'),
(u'T', 'T'),
))
def test_to_native_string(value, expected):
assert to_native_string(value) == expected
@pytest.mark.parametrize(
'url, expected', (
('http://u:p@example.com/path?a=1#test', 'http://example.com/path?a=1'),
('http://example.com/path', 'http://example.com/path'),
('//u:p@example.com/path', '//example.com/path'),
('//example.com/path', '//example.com/path'),
('example.com/path', '//example.com/path'),
('scheme:u:p@example.com/path', 'scheme://example.com/path'),
))
def test_urldefragauth(url, expected):
assert urldefragauth(url) == expected
@pytest.mark.parametrize(
'url, expected', (
('http://192.168.0.1:5000/', True),
('http://192.168.0.1/', True),
('http://172.16.1.1/', True),
('http://172.16.1.1:5000/', True),
('http://localhost.localdomain:5000/v1.0/', True),
('http://google.com:6000/', True),
('http://172.16.1.12/', False),
('http://172.16.1.12:5000/', False),
('http://google.com:5000/v1.0/', False),
('file:///some/path/on/disk', True),
))
def test_should_bypass_proxies(url, expected, monkeypatch):
"""Tests for function should_bypass_proxies to check if proxy
can be bypassed or not
"""
monkeypatch.setenv('no_proxy', '192.168.0.0/24,127.0.0.1,localhost.localdomain,172.16.1.1, google.com:6000')
monkeypatch.setenv('NO_PROXY', '192.168.0.0/24,127.0.0.1,localhost.localdomain,172.16.1.1, google.com:6000')
assert should_bypass_proxies(url, no_proxy=None) == expected
@pytest.mark.parametrize(
'url, expected', (
('http://172.16.1.1/', '172.16.1.1'),
('http://172.16.1.1:5000/', '172.16.1.1'),
('http://user:pass@172.16.1.1', '172.16.1.1'),
('http://user:pass@172.16.1.1:5000', '172.16.1.1'),
('http://hostname/', 'hostname'),
('http://hostname:5000/', 'hostname'),
('http://user:pass@hostname', 'hostname'),
('http://user:pass@hostname:5000', 'hostname'),
))
def test_should_bypass_proxies_pass_only_hostname(url, expected, mocker):
"""The proxy_bypass function should be called with a hostname or IP without
a port number or auth credentials.
"""
proxy_bypass = mocker.patch('requests.utils.proxy_bypass')
should_bypass_proxies(url, no_proxy=None)
proxy_bypass.assert_called_once_with(expected)
@pytest.mark.parametrize(
'cookiejar', (
compat.cookielib.CookieJar(),
RequestsCookieJar()
))
def test_add_dict_to_cookiejar(cookiejar):
"""Ensure add_dict_to_cookiejar works for
non-RequestsCookieJar CookieJars
"""
cookiedict = {'test': 'cookies',
'good': 'cookies'}
cj = add_dict_to_cookiejar(cookiejar, cookiedict)
cookies = {cookie.name: cookie.value for cookie in cj}
assert cookiedict == cookies
@pytest.mark.parametrize(
'value, expected', (
(u'test', True),
(u'æíöû', False),
(u'ジェーピーニック', False),
)
)
def test_unicode_is_ascii(value, expected):
assert unicode_is_ascii(value) is expected
@pytest.mark.parametrize(
'url, expected', (
('http://192.168.0.1:5000/', True),
('http://192.168.0.1/', True),
('http://172.16.1.1/', True),
('http://172.16.1.1:5000/', True),
('http://localhost.localdomain:5000/v1.0/', True),
('http://172.16.1.12/', False),
('http://172.16.1.12:5000/', False),
('http://google.com:5000/v1.0/', False),
))
def test_should_bypass_proxies_no_proxy(
url, expected, monkeypatch):
"""Tests for function should_bypass_proxies to check if proxy
can be bypassed or not using the 'no_proxy' argument
"""
no_proxy = '192.168.0.0/24,127.0.0.1,localhost.localdomain,172.16.1.1'
# Test 'no_proxy' argument
assert should_bypass_proxies(url, no_proxy=no_proxy) == expected
@pytest.mark.skipif(os.name != 'nt', reason='Test only on Windows')
@pytest.mark.parametrize(
'url, expected, override', (
('http://192.168.0.1:5000/', True, None),
('http://192.168.0.1/', True, None),
('http://172.16.1.1/', True, None),
('http://172.16.1.1:5000/', True, None),
('http://localhost.localdomain:5000/v1.0/', True, None),
('http://172.16.1.22/', False, None),
('http://172.16.1.22:5000/', False, None),
('http://google.com:5000/v1.0/', False, None),
('http://mylocalhostname:5000/v1.0/', True, ''),
('http://192.168.0.1/', False, ''),
))
def test_should_bypass_proxies_win_registry(url, expected, override,
monkeypatch):
"""Tests for function should_bypass_proxies to check if proxy
can be bypassed or not with Windows registry settings
"""
if override is None:
override = '192.168.*;127.0.0.1;localhost.localdomain;172.16.1.1'
if compat.is_py3:
import winreg
else:
import _winreg as winreg
class RegHandle:
def Close(self):
pass
ie_settings = RegHandle()
proxyEnableValues = deque([1, "1"])
def OpenKey(key, subkey):
return ie_settings
def QueryValueEx(key, value_name):
if key is ie_settings:
if value_name == 'ProxyEnable':
# this could be a string (REG_SZ) or a 32-bit number (REG_DWORD)
proxyEnableValues.rotate()
return [proxyEnableValues[0]]
elif value_name == 'ProxyOverride':
return [override]
monkeypatch.setenv('http_proxy', '')
monkeypatch.setenv('https_proxy', '')
monkeypatch.setenv('ftp_proxy', '')
monkeypatch.setenv('no_proxy', '')
monkeypatch.setenv('NO_PROXY', '')
monkeypatch.setattr(winreg, 'OpenKey', OpenKey)
monkeypatch.setattr(winreg, 'QueryValueEx', QueryValueEx)
assert should_bypass_proxies(url, None) == expected
@pytest.mark.parametrize(
'env_name, value', (
('no_proxy', '192.168.0.0/24,127.0.0.1,localhost.localdomain'),
('no_proxy', None),
('a_new_key', '192.168.0.0/24,127.0.0.1,localhost.localdomain'),
('a_new_key', None),
))
def test_set_environ(env_name, value):
"""Tests set_environ will set environ values and will restore the environ."""
environ_copy = copy.deepcopy(os.environ)
with set_environ(env_name, value):
assert os.environ.get(env_name) == value
assert os.environ == environ_copy
def test_set_environ_raises_exception():
"""Tests set_environ will raise exceptions in context when the
value parameter is None."""
with pytest.raises(Exception) as exception:
with set_environ('test1', None):
raise Exception('Expected exception')
assert 'Expected exception' in str(exception.value)
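# Editor's note: a hedged sketch of a restore-on-exit helper with the
# semantics the two tests above check (illustrative): value=None leaves
# the environment untouched, and the old value is restored even when the
# body raises.
import contextlib
import os

@contextlib.contextmanager
def _sketch_set_environ(env_name, value):
    changed = value is not None
    if changed:
        old_value = os.environ.get(env_name)
        os.environ[env_name] = value
    try:
        yield
    finally:
        if changed:
            if old_value is None:
                os.environ.pop(env_name, None)
            else:
                os.environ[env_name] = old_value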
requests-2.22.0/tests/conftest.py 0000644 0000765 0000024 00000000732 13467270450 020514 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
import pytest
from requests.compat import urljoin
def prepare_url(value):
# Issue #1483: Make sure the URL always has a trailing slash
httpbin_url = value.url.rstrip('/') + '/'
def inner(*suffix):
return urljoin(httpbin_url, '/'.join(suffix))
return inner
@pytest.fixture
def httpbin(httpbin):
return prepare_url(httpbin)
@pytest.fixture
def httpbin_secure(httpbin_secure):
return prepare_url(httpbin_secure)
requests-2.22.0/tests/test_structures.py 0000644 0000765 0000024 00000004312 13467270450 022147 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
import pytest
from requests.structures import CaseInsensitiveDict, LookupDict
class TestCaseInsensitiveDict:
@pytest.fixture(autouse=True)
def setup(self):
"""CaseInsensitiveDict instance with "Accept" header."""
self.case_insensitive_dict = CaseInsensitiveDict()
self.case_insensitive_dict['Accept'] = 'application/json'
def test_list(self):
assert list(self.case_insensitive_dict) == ['Accept']
possible_keys = pytest.mark.parametrize('key', ('accept', 'ACCEPT', 'aCcEpT', 'Accept'))
@possible_keys
def test_getitem(self, key):
assert self.case_insensitive_dict[key] == 'application/json'
@possible_keys
def test_delitem(self, key):
del self.case_insensitive_dict[key]
assert key not in self.case_insensitive_dict
def test_lower_items(self):
assert list(self.case_insensitive_dict.lower_items()) == [('accept', 'application/json')]
def test_repr(self):
assert repr(self.case_insensitive_dict) == "{'Accept': 'application/json'}"
def test_copy(self):
copy = self.case_insensitive_dict.copy()
assert copy is not self.case_insensitive_dict
assert copy == self.case_insensitive_dict
@pytest.mark.parametrize(
'other, result', (
({'AccePT': 'application/json'}, True),
({}, False),
(None, False)
)
)
def test_instance_equality(self, other, result):
assert (self.case_insensitive_dict == other) is result
class TestLookupDict:
@pytest.fixture(autouse=True)
def setup(self):
"""LookupDict instance with "bad_gateway" attribute."""
self.lookup_dict = LookupDict('test')
self.lookup_dict.bad_gateway = 502
def test_repr(self):
        assert repr(self.lookup_dict) == "<lookup 'test'>"
get_item_parameters = pytest.mark.parametrize(
'key, value', (
('bad_gateway', 502),
('not_a_key', None)
)
)
@get_item_parameters
def test_getitem(self, key, value):
assert self.lookup_dict[key] == value
@get_item_parameters
def test_get(self, key, value):
assert self.lookup_dict.get(key) == value
requests-2.22.0/tests/test_packages.py 0000644 0000765 0000024 00000000345 13467270450 021504 0 ustar nateprewitt staff 0000000 0000000 import requests
def test_can_access_urllib3_attribute():
requests.packages.urllib3
def test_can_access_idna_attribute():
requests.packages.idna
def test_can_access_chardet_attribute():
requests.packages.chardet
requests-2.22.0/tests/testserver/ 0000755 0000765 0000024 00000000000 13467272564 020531 5 ustar nateprewitt staff 0000000 0000000 requests-2.22.0/tests/testserver/server.py 0000644 0000765 0000024 00000007211 13467270450 022402 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
import threading
import socket
import select
def consume_socket_content(sock, timeout=0.5):
chunks = 65536
content = b''
while True:
more_to_read = select.select([sock], [], [], timeout)[0]
if not more_to_read:
break
new_content = sock.recv(chunks)
if not new_content:
break
content += new_content
return content
class Server(threading.Thread):
"""Dummy server using for unit testing"""
WAIT_EVENT_TIMEOUT = 5
def __init__(self, handler=None, host='localhost', port=0, requests_to_handle=1, wait_to_close_event=None):
super(Server, self).__init__()
self.handler = handler or consume_socket_content
self.handler_results = []
self.host = host
self.port = port
self.requests_to_handle = requests_to_handle
self.wait_to_close_event = wait_to_close_event
self.ready_event = threading.Event()
self.stop_event = threading.Event()
@classmethod
def text_response_server(cls, text, request_timeout=0.5, **kwargs):
def text_response_handler(sock):
request_content = consume_socket_content(sock, timeout=request_timeout)
sock.send(text.encode('utf-8'))
return request_content
return Server(text_response_handler, **kwargs)
@classmethod
def basic_response_server(cls, **kwargs):
return cls.text_response_server(
"HTTP/1.1 200 OK\r\n" +
"Content-Length: 0\r\n\r\n",
**kwargs
)
def run(self):
try:
self.server_sock = self._create_socket_and_bind()
# in case self.port = 0
self.port = self.server_sock.getsockname()[1]
self.ready_event.set()
self._handle_requests()
if self.wait_to_close_event:
self.wait_to_close_event.wait(self.WAIT_EVENT_TIMEOUT)
finally:
self.ready_event.set() # just in case of exception
self._close_server_sock_ignore_errors()
self.stop_event.set()
def _create_socket_and_bind(self):
sock = socket.socket()
sock.bind((self.host, self.port))
sock.listen(0)
return sock
def _close_server_sock_ignore_errors(self):
try:
self.server_sock.close()
except IOError:
pass
def _handle_requests(self):
for _ in range(self.requests_to_handle):
sock = self._accept_connection()
if not sock:
break
handler_result = self.handler(sock)
self.handler_results.append(handler_result)
def _accept_connection(self):
try:
ready, _, _ = select.select([self.server_sock], [], [], self.WAIT_EVENT_TIMEOUT)
if not ready:
return None
return self.server_sock.accept()[0]
except (select.error, socket.error):
return None
def __enter__(self):
self.start()
self.ready_event.wait(self.WAIT_EVENT_TIMEOUT)
return self.host, self.port
def __exit__(self, exc_type, exc_value, traceback):
if exc_type is None:
self.stop_event.wait(self.WAIT_EVENT_TIMEOUT)
else:
if self.wait_to_close_event:
                # keep the server from waiting for event timeouts
                # if an exception occurs in the main thread
self.wait_to_close_event.set()
# ensure server thread doesn't get stuck waiting for connections
self._close_server_sock_ignore_errors()
self.join()
return False # allow exceptions to propagate
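# Editor's note: a hedged usage sketch mirroring test_testserver.py. The
# context manager yields (host, port) once the listening socket is ready
# and joins the server thread on exit.
if __name__ == '__main__':
    import requests
    with Server.basic_response_server() as (host, port):
        response = requests.get('http://{}:{}'.format(host, port))
        print(response.status_code)  # expected: 200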
requests-2.22.0/tests/testserver/__init__.py 0000644 0000765 0000024 00000000000 13467270450 022620 0 ustar nateprewitt staff 0000000 0000000 requests-2.22.0/tests/test_testserver.py 0000644 0000765 0000024 00000012743 13467270450 022141 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
import threading
import socket
import time
import pytest
import requests
from tests.testserver.server import Server
class TestTestServer:
def test_basic(self):
"""messages are sent and received properly"""
question = b"success?"
answer = b"yeah, success"
def handler(sock):
text = sock.recv(1000)
assert text == question
sock.sendall(answer)
with Server(handler) as (host, port):
sock = socket.socket()
sock.connect((host, port))
sock.sendall(question)
text = sock.recv(1000)
assert text == answer
sock.close()
def test_server_closes(self):
"""the server closes when leaving the context manager"""
with Server.basic_response_server() as (host, port):
sock = socket.socket()
sock.connect((host, port))
sock.close()
with pytest.raises(socket.error):
new_sock = socket.socket()
new_sock.connect((host, port))
def test_text_response(self):
"""the text_response_server sends the given text"""
server = Server.text_response_server(
"HTTP/1.1 200 OK\r\n" +
"Content-Length: 6\r\n" +
"\r\nroflol"
)
with server as (host, port):
r = requests.get('http://{}:{}'.format(host, port))
assert r.status_code == 200
assert r.text == u'roflol'
assert r.headers['Content-Length'] == '6'
def test_basic_response(self):
"""the basic response server returns an empty http response"""
with Server.basic_response_server() as (host, port):
r = requests.get('http://{}:{}'.format(host, port))
assert r.status_code == 200
assert r.text == u''
assert r.headers['Content-Length'] == '0'
def test_basic_waiting_server(self):
"""the server waits for the block_server event to be set before closing"""
block_server = threading.Event()
with Server.basic_response_server(wait_to_close_event=block_server) as (host, port):
sock = socket.socket()
sock.connect((host, port))
sock.sendall(b'send something')
time.sleep(2.5)
sock.sendall(b'still alive')
block_server.set() # release server block
def test_multiple_requests(self):
"""multiple requests can be served"""
requests_to_handle = 5
server = Server.basic_response_server(requests_to_handle=requests_to_handle)
with server as (host, port):
server_url = 'http://{}:{}'.format(host, port)
for _ in range(requests_to_handle):
r = requests.get(server_url)
assert r.status_code == 200
# the (n+1)th request fails
with pytest.raises(requests.exceptions.ConnectionError):
r = requests.get(server_url)
@pytest.mark.skip(reason="this fails non-deterministically under pytest-xdist")
def test_request_recovery(self):
"""can check the requests content"""
# TODO: figure out why this sometimes fails when using pytest-xdist.
server = Server.basic_response_server(requests_to_handle=2)
first_request = b'put your hands up in the air'
second_request = b'put your hand down in the floor'
with server as address:
sock1 = socket.socket()
sock2 = socket.socket()
sock1.connect(address)
sock1.sendall(first_request)
sock1.close()
sock2.connect(address)
sock2.sendall(second_request)
sock2.close()
assert server.handler_results[0] == first_request
assert server.handler_results[1] == second_request
def test_requests_after_timeout_are_not_received(self):
"""the basic response handler times out when receiving requests"""
server = Server.basic_response_server(request_timeout=1)
with server as address:
sock = socket.socket()
sock.connect(address)
time.sleep(1.5)
sock.sendall(b'hehehe, not received')
sock.close()
assert server.handler_results[0] == b''
def test_request_recovery_with_bigger_timeout(self):
"""a biggest timeout can be specified"""
server = Server.basic_response_server(request_timeout=3)
data = b'bananadine'
with server as address:
sock = socket.socket()
sock.connect(address)
time.sleep(1.5)
sock.sendall(data)
sock.close()
assert server.handler_results[0] == data
def test_server_finishes_on_error(self):
"""the server thread exits even if an exception exits the context manager"""
server = Server.basic_response_server()
with pytest.raises(Exception):
with server:
raise Exception()
assert len(server.handler_results) == 0
# if the server thread fails to finish, the test suite will hang
# and get killed by the jenkins timeout.
def test_server_finishes_when_no_connections(self):
"""the server thread exits even if there are no connections"""
server = Server.basic_response_server()
with server:
pass
assert len(server.handler_results) == 0
# if the server thread fails to finish, the test suite will hang
# and get killed by the jenkins timeout.
requests-2.22.0/tests/compat.py 0000644 0000765 0000024 00000000515 13467270450 020151 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
from requests.compat import is_py3
try:
import StringIO
except ImportError:
import io as StringIO
try:
from cStringIO import StringIO as cStringIO
except ImportError:
cStringIO = None
if is_py3:
def u(s):
return s
else:
def u(s):
return s.decode('unicode-escape')
requests-2.22.0/tests/__init__.py 0000644 0000765 0000024 00000000561 13467270450 020426 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""Requests test package initialisation."""
import warnings
import urllib3
from urllib3.exceptions import SNIMissingWarning
# urllib3 sets SNIMissingWarning to only go off once,
# while this test suite requires it to always fire
# so that it occurs during test_requests.test_https_warnings
warnings.simplefilter('always', SNIMissingWarning)
requests-2.22.0/tests/test_hooks.py 0000644 0000765 0000024 00000000674 13467270450 021056 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
import pytest
from requests import hooks
def hook(value):
return value[1:]
@pytest.mark.parametrize(
'hooks_list, result', (
(hook, 'ata'),
([hook, lambda x: None, hook], 'ta'),
)
)
def test_hooks(hooks_list, result):
assert hooks.dispatch_hook('response', {'response': hooks_list}, 'Data') == result
def test_default_hooks():
assert hooks.default_hooks() == {'response': []}
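# Editor's note: a sketch of the dispatch semantics these tests pin down
# (illustrative, not the shipped requests.hooks code): a bare callable is
# treated as a one-element list, and hooks returning None leave the data
# unchanged.
def _sketch_dispatch_hook(key, hooks, hook_data):
    registered = (hooks or {}).get(key) or []
    if callable(registered):
        registered = [registered]
    for hook_fn in registered:
        result = hook_fn(hook_data)
        if result is not None:
            hook_data = result
    return hook_data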
requests-2.22.0/tests/test_requests.py 0000644 0000765 0000024 00000261376 13467270450 021616 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""Tests for Requests."""
from __future__ import division
import json
import os
import pickle
import collections
import contextlib
import warnings
import re
import io
import requests
import pytest
from requests.adapters import HTTPAdapter
from requests.auth import HTTPDigestAuth, _basic_auth_str
from requests.compat import (
Morsel, cookielib, getproxies, str, urlparse,
builtin_str, OrderedDict)
from requests.cookies import (
cookiejar_from_dict, morsel_to_cookie)
from requests.exceptions import (
ConnectionError, ConnectTimeout, InvalidSchema, InvalidURL,
MissingSchema, ReadTimeout, Timeout, RetryError, TooManyRedirects,
ProxyError, InvalidHeader, UnrewindableBodyError, SSLError, InvalidProxyURL)
from requests.models import PreparedRequest
from requests.structures import CaseInsensitiveDict
from requests.sessions import SessionRedirectMixin
from requests.models import urlencode
from requests.hooks import default_hooks
from requests.compat import MutableMapping
from .compat import StringIO, u
from .utils import override_environ
from urllib3.util import Timeout as Urllib3Timeout
# Requests to this URL should always fail with a connection timeout (nothing
# listening on that port)
TARPIT = 'http://10.255.255.1'
try:
from ssl import SSLContext
del SSLContext
HAS_MODERN_SSL = True
except ImportError:
HAS_MODERN_SSL = False
try:
requests.pyopenssl
HAS_PYOPENSSL = True
except AttributeError:
HAS_PYOPENSSL = False
class TestRequests:
digest_auth_algo = ('MD5', 'SHA-256', 'SHA-512')
def test_entry_points(self):
requests.session
requests.session().get
requests.session().head
requests.get
requests.head
requests.put
requests.patch
requests.post
# Not really an entry point, but people rely on it.
from requests.packages.urllib3.poolmanager import PoolManager
@pytest.mark.parametrize(
'exception, url', (
(MissingSchema, 'hiwpefhipowhefopw'),
(InvalidSchema, 'localhost:3128'),
(InvalidSchema, 'localhost.localdomain:3128/'),
(InvalidSchema, '10.122.1.1:3128/'),
(InvalidURL, 'http://'),
))
def test_invalid_url(self, exception, url):
with pytest.raises(exception):
requests.get(url)
def test_basic_building(self):
req = requests.Request()
req.url = 'http://kennethreitz.org/'
req.data = {'life': '42'}
pr = req.prepare()
assert pr.url == req.url
assert pr.body == 'life=42'
@pytest.mark.parametrize('method', ('GET', 'HEAD'))
def test_no_content_length(self, httpbin, method):
req = requests.Request(method, httpbin(method.lower())).prepare()
assert 'Content-Length' not in req.headers
@pytest.mark.parametrize('method', ('POST', 'PUT', 'PATCH', 'OPTIONS'))
def test_no_body_content_length(self, httpbin, method):
req = requests.Request(method, httpbin(method.lower())).prepare()
assert req.headers['Content-Length'] == '0'
@pytest.mark.parametrize('method', ('POST', 'PUT', 'PATCH', 'OPTIONS'))
def test_empty_content_length(self, httpbin, method):
req = requests.Request(method, httpbin(method.lower()), data='').prepare()
assert req.headers['Content-Length'] == '0'
def test_override_content_length(self, httpbin):
headers = {
'Content-Length': 'not zero'
}
r = requests.Request('POST', httpbin('post'), headers=headers).prepare()
assert 'Content-Length' in r.headers
assert r.headers['Content-Length'] == 'not zero'
def test_path_is_not_double_encoded(self):
request = requests.Request('GET', "http://0.0.0.0/get/test case").prepare()
assert request.path_url == '/get/test%20case'
@pytest.mark.parametrize(
'url, expected', (
('http://example.com/path#fragment', 'http://example.com/path?a=b#fragment'),
('http://example.com/path?key=value#fragment', 'http://example.com/path?key=value&a=b#fragment')
))
def test_params_are_added_before_fragment(self, url, expected):
request = requests.Request('GET', url, params={"a": "b"}).prepare()
assert request.url == expected
def test_params_original_order_is_preserved_by_default(self):
param_ordered_dict = OrderedDict((('z', 1), ('a', 1), ('k', 1), ('d', 1)))
session = requests.Session()
request = requests.Request('GET', 'http://example.com/', params=param_ordered_dict)
prep = session.prepare_request(request)
assert prep.url == 'http://example.com/?z=1&a=1&k=1&d=1'
def test_params_bytes_are_encoded(self):
request = requests.Request('GET', 'http://example.com',
params=b'test=foo').prepare()
assert request.url == 'http://example.com/?test=foo'
def test_binary_put(self):
request = requests.Request('PUT', 'http://example.com',
data=u"ööö".encode("utf-8")).prepare()
assert isinstance(request.body, bytes)
def test_whitespaces_are_removed_from_url(self):
# Test for issue #3696
request = requests.Request('GET', ' http://example.com').prepare()
assert request.url == 'http://example.com/'
@pytest.mark.parametrize('scheme', ('http://', 'HTTP://', 'hTTp://', 'HttP://'))
def test_mixed_case_scheme_acceptable(self, httpbin, scheme):
s = requests.Session()
s.proxies = getproxies()
parts = urlparse(httpbin('get'))
url = scheme + parts.netloc + parts.path
r = requests.Request('GET', url)
r = s.send(r.prepare())
assert r.status_code == 200, 'failed for scheme {}'.format(scheme)
def test_HTTP_200_OK_GET_ALTERNATIVE(self, httpbin):
r = requests.Request('GET', httpbin('get'))
s = requests.Session()
s.proxies = getproxies()
r = s.send(r.prepare())
assert r.status_code == 200
def test_HTTP_302_ALLOW_REDIRECT_GET(self, httpbin):
r = requests.get(httpbin('redirect', '1'))
assert r.status_code == 200
assert r.history[0].status_code == 302
assert r.history[0].is_redirect
def test_HTTP_307_ALLOW_REDIRECT_POST(self, httpbin):
r = requests.post(httpbin('redirect-to'), data='test', params={'url': 'post', 'status_code': 307})
assert r.status_code == 200
assert r.history[0].status_code == 307
assert r.history[0].is_redirect
assert r.json()['data'] == 'test'
def test_HTTP_307_ALLOW_REDIRECT_POST_WITH_SEEKABLE(self, httpbin):
byte_str = b'test'
r = requests.post(httpbin('redirect-to'), data=io.BytesIO(byte_str), params={'url': 'post', 'status_code': 307})
assert r.status_code == 200
assert r.history[0].status_code == 307
assert r.history[0].is_redirect
assert r.json()['data'] == byte_str.decode('utf-8')
def test_HTTP_302_TOO_MANY_REDIRECTS(self, httpbin):
try:
requests.get(httpbin('relative-redirect', '50'))
except TooManyRedirects as e:
url = httpbin('relative-redirect', '20')
assert e.request.url == url
assert e.response.url == url
assert len(e.response.history) == 30
else:
pytest.fail('Expected redirect to raise TooManyRedirects but it did not')
def test_HTTP_302_TOO_MANY_REDIRECTS_WITH_PARAMS(self, httpbin):
s = requests.session()
s.max_redirects = 5
try:
s.get(httpbin('relative-redirect', '50'))
except TooManyRedirects as e:
url = httpbin('relative-redirect', '45')
assert e.request.url == url
assert e.response.url == url
assert len(e.response.history) == 5
else:
pytest.fail('Expected custom max number of redirects to be respected but was not')
def test_http_301_changes_post_to_get(self, httpbin):
r = requests.post(httpbin('status', '301'))
assert r.status_code == 200
assert r.request.method == 'GET'
assert r.history[0].status_code == 301
assert r.history[0].is_redirect
def test_http_301_doesnt_change_head_to_get(self, httpbin):
r = requests.head(httpbin('status', '301'), allow_redirects=True)
print(r.content)
assert r.status_code == 200
assert r.request.method == 'HEAD'
assert r.history[0].status_code == 301
assert r.history[0].is_redirect
def test_http_302_changes_post_to_get(self, httpbin):
r = requests.post(httpbin('status', '302'))
assert r.status_code == 200
assert r.request.method == 'GET'
assert r.history[0].status_code == 302
assert r.history[0].is_redirect
def test_http_302_doesnt_change_head_to_get(self, httpbin):
r = requests.head(httpbin('status', '302'), allow_redirects=True)
assert r.status_code == 200
assert r.request.method == 'HEAD'
assert r.history[0].status_code == 302
assert r.history[0].is_redirect
def test_http_303_changes_post_to_get(self, httpbin):
r = requests.post(httpbin('status', '303'))
assert r.status_code == 200
assert r.request.method == 'GET'
assert r.history[0].status_code == 303
assert r.history[0].is_redirect
def test_http_303_doesnt_change_head_to_get(self, httpbin):
r = requests.head(httpbin('status', '303'), allow_redirects=True)
assert r.status_code == 200
assert r.request.method == 'HEAD'
assert r.history[0].status_code == 303
assert r.history[0].is_redirect
def test_header_and_body_removal_on_redirect(self, httpbin):
purged_headers = ('Content-Length', 'Content-Type')
ses = requests.Session()
req = requests.Request('POST', httpbin('post'), data={'test': 'data'})
prep = ses.prepare_request(req)
resp = ses.send(prep)
# Mimic a redirect response
resp.status_code = 302
resp.headers['location'] = 'get'
# Run request through resolve_redirects
next_resp = next(ses.resolve_redirects(resp, prep))
assert next_resp.request.body is None
for header in purged_headers:
assert header not in next_resp.request.headers
def test_transfer_enc_removal_on_redirect(self, httpbin):
purged_headers = ('Transfer-Encoding', 'Content-Type')
ses = requests.Session()
req = requests.Request('POST', httpbin('post'), data=(b'x' for x in range(1)))
prep = ses.prepare_request(req)
assert 'Transfer-Encoding' in prep.headers
# Create Response to avoid https://github.com/kevin1024/pytest-httpbin/issues/33
resp = requests.Response()
resp.raw = io.BytesIO(b'the content')
resp.request = prep
setattr(resp.raw, 'release_conn', lambda *args: args)
# Mimic a redirect response
resp.status_code = 302
resp.headers['location'] = httpbin('get')
        # Run request through resolve_redirects
next_resp = next(ses.resolve_redirects(resp, prep))
assert next_resp.request.body is None
for header in purged_headers:
assert header not in next_resp.request.headers
def test_fragment_maintained_on_redirect(self, httpbin):
fragment = "#view=edit&token=hunter2"
r = requests.get(httpbin('redirect-to?url=get')+fragment)
assert len(r.history) > 0
assert r.history[0].request.url == httpbin('redirect-to?url=get')+fragment
assert r.url == httpbin('get')+fragment
def test_HTTP_200_OK_GET_WITH_PARAMS(self, httpbin):
heads = {'User-agent': 'Mozilla/5.0'}
r = requests.get(httpbin('user-agent'), headers=heads)
assert heads['User-agent'] in r.text
assert r.status_code == 200
def test_HTTP_200_OK_GET_WITH_MIXED_PARAMS(self, httpbin):
heads = {'User-agent': 'Mozilla/5.0'}
r = requests.get(httpbin('get') + '?test=true', params={'q': 'test'}, headers=heads)
assert r.status_code == 200
def test_set_cookie_on_301(self, httpbin):
s = requests.session()
url = httpbin('cookies/set?foo=bar')
s.get(url)
assert s.cookies['foo'] == 'bar'
def test_cookie_sent_on_redirect(self, httpbin):
s = requests.session()
s.get(httpbin('cookies/set?foo=bar'))
r = s.get(httpbin('redirect/1')) # redirects to httpbin('get')
assert 'Cookie' in r.json()['headers']
def test_cookie_removed_on_expire(self, httpbin):
s = requests.session()
s.get(httpbin('cookies/set?foo=bar'))
assert s.cookies['foo'] == 'bar'
s.get(
httpbin('response-headers'),
params={
'Set-Cookie':
'foo=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT'
}
)
assert 'foo' not in s.cookies
def test_cookie_quote_wrapped(self, httpbin):
s = requests.session()
s.get(httpbin('cookies/set?foo="bar:baz"'))
assert s.cookies['foo'] == '"bar:baz"'
def test_cookie_persists_via_api(self, httpbin):
s = requests.session()
r = s.get(httpbin('redirect/1'), cookies={'foo': 'bar'})
assert 'foo' in r.request.headers['Cookie']
assert 'foo' in r.history[0].request.headers['Cookie']
def test_request_cookie_overrides_session_cookie(self, httpbin):
s = requests.session()
s.cookies['foo'] = 'bar'
r = s.get(httpbin('cookies'), cookies={'foo': 'baz'})
assert r.json()['cookies']['foo'] == 'baz'
# Session cookie should not be modified
assert s.cookies['foo'] == 'bar'
def test_request_cookies_not_persisted(self, httpbin):
s = requests.session()
s.get(httpbin('cookies'), cookies={'foo': 'baz'})
# Sending a request with cookies should not add cookies to the session
assert not s.cookies
def test_generic_cookiejar_works(self, httpbin):
cj = cookielib.CookieJar()
cookiejar_from_dict({'foo': 'bar'}, cj)
s = requests.session()
s.cookies = cj
r = s.get(httpbin('cookies'))
# Make sure the cookie was sent
assert r.json()['cookies']['foo'] == 'bar'
# Make sure the session cj is still the custom one
assert s.cookies is cj
def test_param_cookiejar_works(self, httpbin):
cj = cookielib.CookieJar()
cookiejar_from_dict({'foo': 'bar'}, cj)
s = requests.session()
r = s.get(httpbin('cookies'), cookies=cj)
# Make sure the cookie was sent
assert r.json()['cookies']['foo'] == 'bar'
def test_cookielib_cookiejar_on_redirect(self, httpbin):
"""Tests resolve_redirect doesn't fail when merging cookies
with non-RequestsCookieJar cookiejar.
See GH #3579
"""
cj = cookiejar_from_dict({'foo': 'bar'}, cookielib.CookieJar())
s = requests.Session()
s.cookies = cookiejar_from_dict({'cookie': 'tasty'})
# Prepare request without using Session
req = requests.Request('GET', httpbin('headers'), cookies=cj)
prep_req = req.prepare()
# Send request and simulate redirect
resp = s.send(prep_req)
resp.status_code = 302
resp.headers['location'] = httpbin('get')
redirects = s.resolve_redirects(resp, prep_req)
resp = next(redirects)
# Verify CookieJar isn't being converted to RequestsCookieJar
assert isinstance(prep_req._cookies, cookielib.CookieJar)
assert isinstance(resp.request._cookies, cookielib.CookieJar)
assert not isinstance(resp.request._cookies, requests.cookies.RequestsCookieJar)
cookies = {}
for c in resp.request._cookies:
cookies[c.name] = c.value
assert cookies['foo'] == 'bar'
assert cookies['cookie'] == 'tasty'
def test_requests_in_history_are_not_overridden(self, httpbin):
resp = requests.get(httpbin('redirect/3'))
urls = [r.url for r in resp.history]
req_urls = [r.request.url for r in resp.history]
assert urls == req_urls
def test_history_is_always_a_list(self, httpbin):
"""Show that even with redirects, Response.history is always a list."""
resp = requests.get(httpbin('get'))
assert isinstance(resp.history, list)
resp = requests.get(httpbin('redirect/1'))
assert isinstance(resp.history, list)
assert not isinstance(resp.history, tuple)
def test_headers_on_session_with_None_are_not_sent(self, httpbin):
"""Do not send headers in Session.headers with None values."""
ses = requests.Session()
ses.headers['Accept-Encoding'] = None
req = requests.Request('GET', httpbin('get'))
prep = ses.prepare_request(req)
assert 'Accept-Encoding' not in prep.headers
def test_headers_preserve_order(self, httpbin):
"""Preserve order when headers provided as OrderedDict."""
ses = requests.Session()
ses.headers = OrderedDict()
ses.headers['Accept-Encoding'] = 'identity'
ses.headers['First'] = '1'
ses.headers['Second'] = '2'
headers = OrderedDict([('Third', '3'), ('Fourth', '4')])
headers['Fifth'] = '5'
headers['Second'] = '222'
req = requests.Request('GET', httpbin('get'), headers=headers)
prep = ses.prepare_request(req)
items = list(prep.headers.items())
assert items[0] == ('Accept-Encoding', 'identity')
assert items[1] == ('First', '1')
assert items[2] == ('Second', '222')
assert items[3] == ('Third', '3')
assert items[4] == ('Fourth', '4')
assert items[5] == ('Fifth', '5')
@pytest.mark.parametrize('key', ('User-agent', 'user-agent'))
def test_user_agent_transfers(self, httpbin, key):
heads = {key: 'Mozilla/5.0 (github.com/requests/requests)'}
r = requests.get(httpbin('user-agent'), headers=heads)
assert heads[key] in r.text
def test_HTTP_200_OK_HEAD(self, httpbin):
r = requests.head(httpbin('get'))
assert r.status_code == 200
def test_HTTP_200_OK_PUT(self, httpbin):
r = requests.put(httpbin('put'))
assert r.status_code == 200
def test_BASICAUTH_TUPLE_HTTP_200_OK_GET(self, httpbin):
auth = ('user', 'pass')
url = httpbin('basic-auth', 'user', 'pass')
r = requests.get(url, auth=auth)
assert r.status_code == 200
r = requests.get(url)
assert r.status_code == 401
s = requests.session()
s.auth = auth
r = s.get(url)
assert r.status_code == 200
@pytest.mark.parametrize(
'username, password', (
('user', 'pass'),
(u'имя'.encode('utf-8'), u'пароль'.encode('utf-8')),
(42, 42),
(None, None),
))
def test_set_basicauth(self, httpbin, username, password):
auth = (username, password)
url = httpbin('get')
r = requests.Request('GET', url, auth=auth)
p = r.prepare()
assert p.headers['Authorization'] == _basic_auth_str(username, password)
def test_basicauth_encodes_byte_strings(self):
"""Ensure b'test' formats as the byte string "test" rather
than the unicode string "b'test'" in Python 3.
"""
auth = (b'\xc5\xafsername', b'test\xc6\xb6')
r = requests.Request('GET', 'http://localhost', auth=auth)
p = r.prepare()
assert p.headers['Authorization'] == 'Basic xa9zZXJuYW1lOnRlc3TGtg=='
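    # Editor's note: a hedged sketch of how such a header value can be
    # built (illustrative, not requests.auth._basic_auth_str): text is
    # encoded via latin1, then b'user:pass' is base64-encoded. E.g. the
    # byte pair asserted above yields 'Basic xa9zZXJuYW1lOnRlc3TGtg=='.
    @staticmethod
    def _sketch_basic_auth_str(username, password):
        from base64 import b64encode
        if not isinstance(username, bytes):
            username = str(username).encode('latin1')
        if not isinstance(password, bytes):
            password = str(password).encode('latin1')
        return 'Basic ' + b64encode(b':'.join((username, password))).decode('ascii')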
@pytest.mark.parametrize(
'url, exception', (
# Connecting to an unknown domain should raise a ConnectionError
('http://doesnotexist.google.com', ConnectionError),
# Connecting to an invalid port should raise a ConnectionError
('http://localhost:1', ConnectionError),
            # Inputting a URL that cannot be parsed should raise an InvalidURL error
('http://fe80::5054:ff:fe5a:fc0', InvalidURL)
))
def test_errors(self, url, exception):
with pytest.raises(exception):
requests.get(url, timeout=1)
def test_proxy_error(self):
# any proxy related error (address resolution, no route to host, etc) should result in a ProxyError
with pytest.raises(ProxyError):
requests.get('http://localhost:1', proxies={'http': 'non-resolvable-address'})
def test_proxy_error_on_bad_url(self, httpbin, httpbin_secure):
with pytest.raises(InvalidProxyURL):
requests.get(httpbin_secure(), proxies={'https': 'http:/badproxyurl:3128'})
with pytest.raises(InvalidProxyURL):
requests.get(httpbin(), proxies={'http': 'http://:8080'})
with pytest.raises(InvalidProxyURL):
requests.get(httpbin_secure(), proxies={'https': 'https://'})
with pytest.raises(InvalidProxyURL):
requests.get(httpbin(), proxies={'http': 'http:///example.com:8080'})
def test_basicauth_with_netrc(self, httpbin):
auth = ('user', 'pass')
wrong_auth = ('wronguser', 'wrongpass')
url = httpbin('basic-auth', 'user', 'pass')
old_auth = requests.sessions.get_netrc_auth
try:
def get_netrc_auth_mock(url):
return auth
requests.sessions.get_netrc_auth = get_netrc_auth_mock
# Should use netrc and work.
r = requests.get(url)
assert r.status_code == 200
# Given auth should override and fail.
r = requests.get(url, auth=wrong_auth)
assert r.status_code == 401
s = requests.session()
# Should use netrc and work.
r = s.get(url)
assert r.status_code == 200
# Given auth should override and fail.
s.auth = wrong_auth
r = s.get(url)
assert r.status_code == 401
finally:
requests.sessions.get_netrc_auth = old_auth
def test_DIGEST_HTTP_200_OK_GET(self, httpbin):
for authtype in self.digest_auth_algo:
auth = HTTPDigestAuth('user', 'pass')
url = httpbin('digest-auth', 'auth', 'user', 'pass', authtype, 'never')
r = requests.get(url, auth=auth)
assert r.status_code == 200
r = requests.get(url)
assert r.status_code == 401
print(r.headers['WWW-Authenticate'])
s = requests.session()
s.auth = HTTPDigestAuth('user', 'pass')
r = s.get(url)
assert r.status_code == 200
def test_DIGEST_AUTH_RETURNS_COOKIE(self, httpbin):
for authtype in self.digest_auth_algo:
url = httpbin('digest-auth', 'auth', 'user', 'pass', authtype)
auth = HTTPDigestAuth('user', 'pass')
r = requests.get(url)
assert r.cookies['fake'] == 'fake_value'
r = requests.get(url, auth=auth)
assert r.status_code == 200
def test_DIGEST_AUTH_SETS_SESSION_COOKIES(self, httpbin):
for authtype in self.digest_auth_algo:
url = httpbin('digest-auth', 'auth', 'user', 'pass', authtype)
auth = HTTPDigestAuth('user', 'pass')
s = requests.Session()
s.get(url, auth=auth)
assert s.cookies['fake'] == 'fake_value'
def test_DIGEST_STREAM(self, httpbin):
for authtype in self.digest_auth_algo:
auth = HTTPDigestAuth('user', 'pass')
url = httpbin('digest-auth', 'auth', 'user', 'pass', authtype)
r = requests.get(url, auth=auth, stream=True)
assert r.raw.read() != b''
r = requests.get(url, auth=auth, stream=False)
assert r.raw.read() == b''
def test_DIGESTAUTH_WRONG_HTTP_401_GET(self, httpbin):
for authtype in self.digest_auth_algo:
auth = HTTPDigestAuth('user', 'wrongpass')
url = httpbin('digest-auth', 'auth', 'user', 'pass', authtype)
r = requests.get(url, auth=auth)
assert r.status_code == 401
r = requests.get(url)
assert r.status_code == 401
s = requests.session()
s.auth = auth
r = s.get(url)
assert r.status_code == 401
def test_DIGESTAUTH_QUOTES_QOP_VALUE(self, httpbin):
for authtype in self.digest_auth_algo:
auth = HTTPDigestAuth('user', 'pass')
url = httpbin('digest-auth', 'auth', 'user', 'pass', authtype)
r = requests.get(url, auth=auth)
assert '"auth"' in r.request.headers['Authorization']
def test_POSTBIN_GET_POST_FILES(self, httpbin):
url = httpbin('post')
requests.post(url).raise_for_status()
post1 = requests.post(url, data={'some': 'data'})
assert post1.status_code == 200
with open('Pipfile') as f:
post2 = requests.post(url, files={'some': f})
assert post2.status_code == 200
post4 = requests.post(url, data='[{"some": "json"}]')
assert post4.status_code == 200
with pytest.raises(ValueError):
requests.post(url, files=['bad file data'])
def test_invalid_files_input(self, httpbin):
url = httpbin('post')
post = requests.post(url,
files={"random-file-1": None, "random-file-2": 1})
assert b'name="random-file-1"' not in post.request.body
assert b'name="random-file-2"' in post.request.body
def test_POSTBIN_SEEKED_OBJECT_WITH_NO_ITER(self, httpbin):
class TestStream(object):
def __init__(self, data):
self.data = data.encode()
self.length = len(self.data)
self.index = 0
def __len__(self):
return self.length
def read(self, size=None):
if size:
ret = self.data[self.index:self.index + size]
self.index += size
else:
ret = self.data[self.index:]
self.index = self.length
return ret
def tell(self):
return self.index
def seek(self, offset, where=0):
if where == 0:
self.index = offset
elif where == 1:
self.index += offset
elif where == 2:
self.index = self.length + offset
test = TestStream('test')
post1 = requests.post(httpbin('post'), data=test)
assert post1.status_code == 200
assert post1.json()['data'] == 'test'
test = TestStream('test')
test.seek(2)
post2 = requests.post(httpbin('post'), data=test)
assert post2.status_code == 200
assert post2.json()['data'] == 'st'
def test_POSTBIN_GET_POST_FILES_WITH_DATA(self, httpbin):
url = httpbin('post')
requests.post(url).raise_for_status()
post1 = requests.post(url, data={'some': 'data'})
assert post1.status_code == 200
with open('Pipfile') as f:
post2 = requests.post(url, data={'some': 'data'}, files={'some': f})
assert post2.status_code == 200
post4 = requests.post(url, data='[{"some": "json"}]')
assert post4.status_code == 200
with pytest.raises(ValueError):
requests.post(url, files=['bad file data'])
def test_post_with_custom_mapping(self, httpbin):
class CustomMapping(MutableMapping):
def __init__(self, *args, **kwargs):
self.data = dict(*args, **kwargs)
def __delitem__(self, key):
del self.data[key]
def __getitem__(self, key):
return self.data[key]
def __setitem__(self, key, value):
self.data[key] = value
def __iter__(self):
return iter(self.data)
def __len__(self):
return len(self.data)
data = CustomMapping({'some': 'data'})
url = httpbin('post')
found_json = requests.post(url, data=data).json().get('form')
assert found_json == {'some': 'data'}
def test_conflicting_post_params(self, httpbin):
url = httpbin('post')
with open('Pipfile') as f:
pytest.raises(ValueError, "requests.post(url, data='[{\"some\": \"data\"}]', files={'some': f})")
pytest.raises(ValueError, "requests.post(url, data=u('[{\"some\": \"data\"}]'), files={'some': f})")
def test_request_ok_set(self, httpbin):
r = requests.get(httpbin('status', '404'))
assert not r.ok
def test_status_raising(self, httpbin):
r = requests.get(httpbin('status', '404'))
with pytest.raises(requests.exceptions.HTTPError):
r.raise_for_status()
r = requests.get(httpbin('status', '500'))
assert not r.ok
def test_decompress_gzip(self, httpbin):
r = requests.get(httpbin('gzip'))
r.content.decode('ascii')
@pytest.mark.parametrize(
'url, params', (
('/get', {'foo': 'føø'}),
('/get', {'føø': 'føø'}),
('/get', {'føø': 'føø'}),
('/get', {'foo': 'foo'}),
('ø', {'foo': 'foo'}),
))
def test_unicode_get(self, httpbin, url, params):
requests.get(httpbin(url), params=params)
def test_unicode_header_name(self, httpbin):
requests.put(
httpbin('put'),
headers={str('Content-Type'): 'application/octet-stream'},
data='\xff') # compat.str is unicode.
def test_pyopenssl_redirect(self, httpbin_secure, httpbin_ca_bundle):
requests.get(httpbin_secure('status', '301'), verify=httpbin_ca_bundle)
def test_invalid_ca_certificate_path(self, httpbin_secure):
INVALID_PATH = '/garbage'
with pytest.raises(IOError) as e:
requests.get(httpbin_secure(), verify=INVALID_PATH)
assert str(e.value) == 'Could not find a suitable TLS CA certificate bundle, invalid path: {}'.format(INVALID_PATH)
def test_invalid_ssl_certificate_files(self, httpbin_secure):
INVALID_PATH = '/garbage'
with pytest.raises(IOError) as e:
requests.get(httpbin_secure(), cert=INVALID_PATH)
assert str(e.value) == 'Could not find the TLS certificate file, invalid path: {}'.format(INVALID_PATH)
with pytest.raises(IOError) as e:
requests.get(httpbin_secure(), cert=('.', INVALID_PATH))
assert str(e.value) == 'Could not find the TLS key file, invalid path: {}'.format(INVALID_PATH)
def test_http_with_certificate(self, httpbin):
r = requests.get(httpbin(), cert='.')
assert r.status_code == 200
def test_https_warnings(self, httpbin_secure, httpbin_ca_bundle):
"""warnings are emitted with requests.get"""
if HAS_MODERN_SSL or HAS_PYOPENSSL:
warnings_expected = ('SubjectAltNameWarning', )
else:
warnings_expected = ('SNIMissingWarning',
'InsecurePlatformWarning',
'SubjectAltNameWarning', )
with pytest.warns(None) as warning_records:
warnings.simplefilter('always')
requests.get(httpbin_secure('status', '200'),
verify=httpbin_ca_bundle)
warning_records = [item for item in warning_records
if item.category.__name__ != 'ResourceWarning']
warnings_category = tuple(
item.category.__name__ for item in warning_records)
assert warnings_category == warnings_expected
def test_certificate_failure(self, httpbin_secure):
"""
When underlying SSL problems occur, an SSLError is raised.
"""
with pytest.raises(SSLError):
# Our local httpbin does not have a trusted CA, so this call will
# fail if we use our default trust bundle.
requests.get(httpbin_secure('status', '200'))
def test_urlencoded_get_query_multivalued_param(self, httpbin):
r = requests.get(httpbin('get'), params={'test': ['foo', 'baz']})
assert r.status_code == 200
assert r.url == httpbin('get?test=foo&test=baz')
def test_form_encoded_post_query_multivalued_element(self, httpbin):
r = requests.Request(method='POST', url=httpbin('post'),
data=dict(test=['foo', 'baz']))
prep = r.prepare()
assert prep.body == 'test=foo&test=baz'
def test_different_encodings_dont_break_post(self, httpbin):
r = requests.post(httpbin('post'),
data={'stuff': json.dumps({'a': 123})},
params={'blah': 'asdf1234'},
files={'file': ('test_requests.py', open(__file__, 'rb'))})
assert r.status_code == 200
@pytest.mark.parametrize(
'data', (
{'stuff': u('ëlïxr')},
{'stuff': u('ëlïxr').encode('utf-8')},
{'stuff': 'elixr'},
{'stuff': 'elixr'.encode('utf-8')},
))
def test_unicode_multipart_post(self, httpbin, data):
r = requests.post(httpbin('post'),
data=data,
files={'file': ('test_requests.py', open(__file__, 'rb'))})
assert r.status_code == 200
def test_unicode_multipart_post_fieldnames(self, httpbin):
filename = os.path.splitext(__file__)[0] + '.py'
r = requests.Request(
method='POST', url=httpbin('post'),
data={'stuff'.encode('utf-8'): 'elixr'},
files={'file': ('test_requests.py', open(filename, 'rb'))})
prep = r.prepare()
assert b'name="stuff"' in prep.body
assert b'name="b\'stuff\'"' not in prep.body
def test_unicode_method_name(self, httpbin):
files = {'file': open(__file__, 'rb')}
r = requests.request(
method=u('POST'), url=httpbin('post'), files=files)
assert r.status_code == 200
def test_unicode_method_name_with_request_object(self, httpbin):
files = {'file': open(__file__, 'rb')}
s = requests.Session()
req = requests.Request(u('POST'), httpbin('post'), files=files)
prep = s.prepare_request(req)
assert isinstance(prep.method, builtin_str)
assert prep.method == 'POST'
resp = s.send(prep)
assert resp.status_code == 200
def test_non_prepared_request_error(self):
s = requests.Session()
req = requests.Request(u('POST'), '/')
with pytest.raises(ValueError) as e:
s.send(req)
assert str(e.value) == 'You can only send PreparedRequests.'
def test_custom_content_type(self, httpbin):
r = requests.post(
httpbin('post'),
data={'stuff': json.dumps({'a': 123})},
files={
'file1': ('test_requests.py', open(__file__, 'rb')),
'file2': ('test_requests', open(__file__, 'rb'),
'text/py-content-type')})
assert r.status_code == 200
assert b"text/py-content-type" in r.request.body
def test_hook_receives_request_arguments(self, httpbin):
def hook(resp, **kwargs):
assert resp is not None
assert kwargs != {}
s = requests.Session()
r = requests.Request('GET', httpbin(), hooks={'response': hook})
prep = s.prepare_request(r)
s.send(prep)
def test_session_hooks_are_used_with_no_request_hooks(self, httpbin):
hook = lambda x, *args, **kwargs: x
s = requests.Session()
s.hooks['response'].append(hook)
r = requests.Request('GET', httpbin())
prep = s.prepare_request(r)
assert prep.hooks['response'] != []
assert prep.hooks['response'] == [hook]
def test_session_hooks_are_overridden_by_request_hooks(self, httpbin):
hook1 = lambda x, *args, **kwargs: x
hook2 = lambda x, *args, **kwargs: x
assert hook1 is not hook2
s = requests.Session()
s.hooks['response'].append(hook2)
r = requests.Request('GET', httpbin(), hooks={'response': [hook1]})
prep = s.prepare_request(r)
assert prep.hooks['response'] == [hook1]
def test_prepared_request_hook(self, httpbin):
def hook(resp, **kwargs):
resp.hook_working = True
return resp
req = requests.Request('GET', httpbin(), hooks={'response': hook})
prep = req.prepare()
s = requests.Session()
s.proxies = getproxies()
resp = s.send(prep)
assert hasattr(resp, 'hook_working')
def test_prepared_from_session(self, httpbin):
class DummyAuth(requests.auth.AuthBase):
def __call__(self, r):
r.headers['Dummy-Auth-Test'] = 'dummy-auth-test-ok'
return r
req = requests.Request('GET', httpbin('headers'))
assert not req.auth
s = requests.Session()
s.auth = DummyAuth()
prep = s.prepare_request(req)
resp = s.send(prep)
assert resp.json()['headers'][
'Dummy-Auth-Test'] == 'dummy-auth-test-ok'
def test_prepare_request_with_bytestring_url(self):
req = requests.Request('GET', b'https://httpbin.org/')
s = requests.Session()
prep = s.prepare_request(req)
assert prep.url == "https://httpbin.org/"
def test_request_with_bytestring_host(self, httpbin):
s = requests.Session()
resp = s.request(
'GET',
httpbin('cookies/set?cookie=value'),
allow_redirects=False,
headers={'Host': b'httpbin.org'}
)
assert resp.cookies.get('cookie') == 'value'
def test_links(self):
r = requests.Response()
r.headers = {
'cache-control': 'public, max-age=60, s-maxage=60',
'connection': 'keep-alive',
'content-encoding': 'gzip',
'content-type': 'application/json; charset=utf-8',
'date': 'Sat, 26 Jan 2013 16:47:56 GMT',
'etag': '"6ff6a73c0e446c1f61614769e3ceb778"',
'last-modified': 'Sat, 26 Jan 2013 16:22:39 GMT',
            'link': ('<https://api.github.com/users/kennethreitz/repos?'
                     'page=2&per_page=10>; rel="next", <https://api.github.com'
                     '/users/kennethreitz/repos?page=7&per_page=10>; '
                     ' rel="last"'),
'server': 'GitHub.com',
'status': '200 OK',
'vary': 'Accept',
'x-content-type-options': 'nosniff',
'x-github-media-type': 'github.beta',
'x-ratelimit-limit': '60',
'x-ratelimit-remaining': '57'
}
assert r.links['next']['rel'] == 'next'
def test_cookie_parameters(self):
key = 'some_cookie'
value = 'some_value'
secure = True
domain = 'test.com'
rest = {'HttpOnly': True}
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value, secure=secure, domain=domain, rest=rest)
assert len(jar) == 1
assert 'some_cookie' in jar
cookie = list(jar)[0]
assert cookie.secure == secure
assert cookie.domain == domain
assert cookie._rest['HttpOnly'] == rest['HttpOnly']
def test_cookie_as_dict_keeps_len(self):
key = 'some_cookie'
value = 'some_value'
key1 = 'some_cookie1'
value1 = 'some_value1'
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value)
jar.set(key1, value1)
d1 = dict(jar)
d2 = dict(jar.iteritems())
d3 = dict(jar.items())
assert len(jar) == 2
assert len(d1) == 2
assert len(d2) == 2
assert len(d3) == 2
def test_cookie_as_dict_keeps_items(self):
key = 'some_cookie'
value = 'some_value'
key1 = 'some_cookie1'
value1 = 'some_value1'
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value)
jar.set(key1, value1)
d1 = dict(jar)
d2 = dict(jar.iteritems())
d3 = dict(jar.items())
assert d1['some_cookie'] == 'some_value'
assert d2['some_cookie'] == 'some_value'
assert d3['some_cookie1'] == 'some_value1'
def test_cookie_as_dict_keys(self):
key = 'some_cookie'
value = 'some_value'
key1 = 'some_cookie1'
value1 = 'some_value1'
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value)
jar.set(key1, value1)
keys = jar.keys()
assert keys == list(keys)
# make sure one can use keys multiple times
assert list(keys) == list(keys)
def test_cookie_as_dict_values(self):
key = 'some_cookie'
value = 'some_value'
key1 = 'some_cookie1'
value1 = 'some_value1'
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value)
jar.set(key1, value1)
values = jar.values()
assert values == list(values)
# make sure one can use values multiple times
assert list(values) == list(values)
def test_cookie_as_dict_items(self):
key = 'some_cookie'
value = 'some_value'
key1 = 'some_cookie1'
value1 = 'some_value1'
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value)
jar.set(key1, value1)
items = jar.items()
assert items == list(items)
# make sure one can use items multiple times
assert list(items) == list(items)
def test_cookie_duplicate_names_different_domains(self):
key = 'some_cookie'
value = 'some_value'
domain1 = 'test1.com'
domain2 = 'test2.com'
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value, domain=domain1)
jar.set(key, value, domain=domain2)
assert key in jar
items = jar.items()
assert len(items) == 2
# Verify that CookieConflictError is raised if domain is not specified
with pytest.raises(requests.cookies.CookieConflictError):
jar.get(key)
# Verify that CookieConflictError is not raised if domain is specified
cookie = jar.get(key, domain=domain1)
assert cookie == value
def test_cookie_duplicate_names_raises_cookie_conflict_error(self):
key = 'some_cookie'
value = 'some_value'
path = 'some_path'
jar = requests.cookies.RequestsCookieJar()
jar.set(key, value, path=path)
jar.set(key, value)
with pytest.raises(requests.cookies.CookieConflictError):
jar.get(key)
def test_cookie_policy_copy(self):
class MyCookiePolicy(cookielib.DefaultCookiePolicy):
pass
jar = requests.cookies.RequestsCookieJar()
jar.set_policy(MyCookiePolicy())
assert isinstance(jar.copy().get_policy(), MyCookiePolicy)
def test_time_elapsed_blank(self, httpbin):
r = requests.get(httpbin('get'))
td = r.elapsed
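        # Equivalent to td.total_seconds(), spelled out for interpreters
        # that predate timedelta.total_seconds().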
total_seconds = ((td.microseconds + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6)
assert total_seconds > 0.0
def test_empty_response_has_content_none(self):
r = requests.Response()
assert r.content is None
def test_response_is_iterable(self):
r = requests.Response()
io = StringIO.StringIO('abc')
read_ = io.read
def read_mock(amt, decode_content=None):
return read_(amt)
setattr(io, 'read', read_mock)
r.raw = io
assert next(iter(r))
io.close()
def test_response_decode_unicode(self):
"""When called with decode_unicode, Response.iter_content should always
return unicode.
"""
r = requests.Response()
r._content_consumed = True
r._content = b'the content'
r.encoding = 'ascii'
chunks = r.iter_content(decode_unicode=True)
assert all(isinstance(chunk, str) for chunk in chunks)
# also for streaming
r = requests.Response()
r.raw = io.BytesIO(b'the content')
r.encoding = 'ascii'
chunks = r.iter_content(decode_unicode=True)
assert all(isinstance(chunk, str) for chunk in chunks)
def test_response_reason_unicode(self):
# check for unicode HTTP status
r = requests.Response()
r.url = u'unicode URL'
r.reason = u'Komponenttia ei löydy'.encode('utf-8')
r.status_code = 404
r.encoding = None
        assert not r.ok  # the old behaviour used to crash here
def test_response_reason_unicode_fallback(self):
        # check raise_for_status falls back to ISO-8859-1
r = requests.Response()
r.url = 'some url'
reason = u'Komponenttia ei löydy'
r.reason = reason.encode('latin-1')
r.status_code = 500
r.encoding = None
with pytest.raises(requests.exceptions.HTTPError) as e:
r.raise_for_status()
assert reason in e.value.args[0]
def test_response_chunk_size_type(self):
"""Ensure that chunk_size is passed as None or an integer, otherwise
raise a TypeError.
"""
r = requests.Response()
r.raw = io.BytesIO(b'the content')
chunks = r.iter_content(1)
assert all(len(chunk) == 1 for chunk in chunks)
r = requests.Response()
r.raw = io.BytesIO(b'the content')
chunks = r.iter_content(None)
assert list(chunks) == [b'the content']
r = requests.Response()
r.raw = io.BytesIO(b'the content')
with pytest.raises(TypeError):
chunks = r.iter_content("1024")
def test_request_and_response_are_pickleable(self, httpbin):
r = requests.get(httpbin('get'))
# verify we can pickle the original request
assert pickle.loads(pickle.dumps(r.request))
# verify we can pickle the response and that we have access to
# the original request.
pr = pickle.loads(pickle.dumps(r))
assert r.request.url == pr.request.url
assert r.request.headers == pr.request.headers
def test_prepared_request_is_pickleable(self, httpbin):
p = requests.Request('GET', httpbin('get')).prepare()
# Verify PreparedRequest can be pickled and unpickled
r = pickle.loads(pickle.dumps(p))
assert r.url == p.url
assert r.headers == p.headers
assert r.body == p.body
# Verify unpickled PreparedRequest sends properly
s = requests.Session()
resp = s.send(r)
assert resp.status_code == 200
def test_prepared_request_with_file_is_pickleable(self, httpbin):
files = {'file': open(__file__, 'rb')}
r = requests.Request('POST', httpbin('post'), files=files)
p = r.prepare()
# Verify PreparedRequest can be pickled and unpickled
r = pickle.loads(pickle.dumps(p))
assert r.url == p.url
assert r.headers == p.headers
assert r.body == p.body
# Verify unpickled PreparedRequest sends properly
s = requests.Session()
resp = s.send(r)
assert resp.status_code == 200
def test_prepared_request_with_hook_is_pickleable(self, httpbin):
r = requests.Request('GET', httpbin('get'), hooks=default_hooks())
p = r.prepare()
# Verify PreparedRequest can be pickled
r = pickle.loads(pickle.dumps(p))
assert r.url == p.url
assert r.headers == p.headers
assert r.body == p.body
assert r.hooks == p.hooks
# Verify unpickled PreparedRequest sends properly
s = requests.Session()
resp = s.send(r)
assert resp.status_code == 200
def test_cannot_send_unprepared_requests(self, httpbin):
r = requests.Request(url=httpbin())
with pytest.raises(ValueError):
requests.Session().send(r)
def test_http_error(self):
error = requests.exceptions.HTTPError()
assert not error.response
response = requests.Response()
error = requests.exceptions.HTTPError(response=response)
assert error.response == response
error = requests.exceptions.HTTPError('message', response=response)
assert str(error) == 'message'
assert error.response == response
def test_session_pickling(self, httpbin):
r = requests.Request('GET', httpbin('get'))
s = requests.Session()
s = pickle.loads(pickle.dumps(s))
s.proxies = getproxies()
r = s.send(r.prepare())
assert r.status_code == 200
def test_fixes_1329(self, httpbin):
"""Ensure that header updates are done case-insensitively."""
s = requests.Session()
s.headers.update({'ACCEPT': 'BOGUS'})
s.headers.update({'accept': 'application/json'})
r = s.get(httpbin('get'))
headers = r.request.headers
assert headers['accept'] == 'application/json'
assert headers['Accept'] == 'application/json'
assert headers['ACCEPT'] == 'application/json'
def test_uppercase_scheme_redirect(self, httpbin):
parts = urlparse(httpbin('html'))
url = "HTTP://" + parts.netloc + parts.path
r = requests.get(httpbin('redirect-to'), params={'url': url})
assert r.status_code == 200
assert r.url.lower() == url.lower()
def test_transport_adapter_ordering(self):
s = requests.Session()
order = ['https://', 'http://']
assert order == list(s.adapters)
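        # Session.mount() keeps adapters sorted by descending prefix length,
        # so the most specific prefix wins during adapter lookup.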
s.mount('http://git', HTTPAdapter())
s.mount('http://github', HTTPAdapter())
s.mount('http://github.com', HTTPAdapter())
s.mount('http://github.com/about/', HTTPAdapter())
order = [
'http://github.com/about/',
'http://github.com',
'http://github',
'http://git',
'https://',
'http://',
]
assert order == list(s.adapters)
s.mount('http://gittip', HTTPAdapter())
s.mount('http://gittip.com', HTTPAdapter())
s.mount('http://gittip.com/about/', HTTPAdapter())
order = [
'http://github.com/about/',
'http://gittip.com/about/',
'http://github.com',
'http://gittip.com',
'http://github',
'http://gittip',
'http://git',
'https://',
'http://',
]
assert order == list(s.adapters)
s2 = requests.Session()
s2.adapters = {'http://': HTTPAdapter()}
s2.mount('https://', HTTPAdapter())
assert 'http://' in s2.adapters
assert 'https://' in s2.adapters
def test_session_get_adapter_prefix_matching(self):
prefix = 'https://example.com'
more_specific_prefix = prefix + '/some/path'
url_matching_only_prefix = prefix + '/another/path'
url_matching_more_specific_prefix = more_specific_prefix + '/longer/path'
url_not_matching_prefix = 'https://another.example.com/'
s = requests.Session()
prefix_adapter = HTTPAdapter()
more_specific_prefix_adapter = HTTPAdapter()
s.mount(prefix, prefix_adapter)
s.mount(more_specific_prefix, more_specific_prefix_adapter)
assert s.get_adapter(url_matching_only_prefix) is prefix_adapter
assert s.get_adapter(url_matching_more_specific_prefix) is more_specific_prefix_adapter
assert s.get_adapter(url_not_matching_prefix) not in (prefix_adapter, more_specific_prefix_adapter)
def test_session_get_adapter_prefix_matching_mixed_case(self):
mixed_case_prefix = 'hTtPs://eXamPle.CoM/MixEd_CAse_PREfix'
url_matching_prefix = mixed_case_prefix + '/full_url'
s = requests.Session()
my_adapter = HTTPAdapter()
s.mount(mixed_case_prefix, my_adapter)
assert s.get_adapter(url_matching_prefix) is my_adapter
def test_session_get_adapter_prefix_matching_is_case_insensitive(self):
mixed_case_prefix = 'hTtPs://eXamPle.CoM/MixEd_CAse_PREfix'
url_matching_prefix_with_different_case = 'HtTpS://exaMPLe.cOm/MiXeD_caSE_preFIX/another_url'
s = requests.Session()
my_adapter = HTTPAdapter()
s.mount(mixed_case_prefix, my_adapter)
assert s.get_adapter(url_matching_prefix_with_different_case) is my_adapter
def test_header_remove_is_case_insensitive(self, httpbin):
# From issue #1321
s = requests.Session()
s.headers['foo'] = 'bar'
r = s.get(httpbin('get'), headers={'FOO': None})
assert 'foo' not in r.request.headers
def test_params_are_merged_case_sensitive(self, httpbin):
s = requests.Session()
s.params['foo'] = 'bar'
r = s.get(httpbin('get'), params={'FOO': 'bar'})
assert r.json()['args'] == {'foo': 'bar', 'FOO': 'bar'}
def test_long_authinfo_in_url(self):
url = 'http://{}:{}@{}:9000/path?query#frag'.format(
'E8A3BE87-9E3F-4620-8858-95478E385B5B',
'EA770032-DA4D-4D84-8CE9-29C6D910BF1E',
'exactly-------------sixty-----------three------------characters',
)
r = requests.Request('GET', url).prepare()
assert r.url == url
def test_header_keys_are_native(self, httpbin):
headers = {u('unicode'): 'blah', 'byte'.encode('ascii'): 'blah'}
r = requests.Request('GET', httpbin('get'), headers=headers)
p = r.prepare()
# This is testing that they are builtin strings. A bit weird, but there
# we go.
assert 'unicode' in p.headers.keys()
assert 'byte' in p.headers.keys()
def test_header_validation(self, httpbin):
"""Ensure prepare_headers regex isn't flagging valid header contents."""
headers_ok = {'foo': 'bar baz qux',
'bar': u'fbbq'.encode('utf8'),
'baz': '',
'qux': '1'}
r = requests.get(httpbin('get'), headers=headers_ok)
assert r.request.headers['foo'] == headers_ok['foo']
def test_header_value_not_str(self, httpbin):
"""Ensure the header value is of type string or bytes as
per discussion in GH issue #3386
"""
headers_int = {'foo': 3}
headers_dict = {'bar': {'foo': 'bar'}}
headers_list = {'baz': ['foo', 'bar']}
# Test for int
with pytest.raises(InvalidHeader) as excinfo:
r = requests.get(httpbin('get'), headers=headers_int)
assert 'foo' in str(excinfo.value)
# Test for dict
with pytest.raises(InvalidHeader) as excinfo:
r = requests.get(httpbin('get'), headers=headers_dict)
assert 'bar' in str(excinfo.value)
# Test for list
with pytest.raises(InvalidHeader) as excinfo:
r = requests.get(httpbin('get'), headers=headers_list)
assert 'baz' in str(excinfo.value)
def test_header_no_return_chars(self, httpbin):
"""Ensure that a header containing return character sequences raise an
exception. Otherwise, multiple headers are created from single string.
"""
headers_ret = {'foo': 'bar\r\nbaz: qux'}
headers_lf = {'foo': 'bar\nbaz: qux'}
headers_cr = {'foo': 'bar\rbaz: qux'}
# Test for newline
with pytest.raises(InvalidHeader):
r = requests.get(httpbin('get'), headers=headers_ret)
# Test for line feed
with pytest.raises(InvalidHeader):
r = requests.get(httpbin('get'), headers=headers_lf)
# Test for carriage return
with pytest.raises(InvalidHeader):
r = requests.get(httpbin('get'), headers=headers_cr)
def test_header_no_leading_space(self, httpbin):
"""Ensure headers containing leading whitespace raise
InvalidHeader Error before sending.
"""
headers_space = {'foo': ' bar'}
        headers_tab = {'foo': '\tbar'}
        # Test for leading space
with pytest.raises(InvalidHeader):
r = requests.get(httpbin('get'), headers=headers_space)
# Test for tab
with pytest.raises(InvalidHeader):
r = requests.get(httpbin('get'), headers=headers_tab)
@pytest.mark.parametrize('files', ('foo', b'foo', bytearray(b'foo')))
def test_can_send_objects_with_files(self, httpbin, files):
data = {'a': 'this is a string'}
files = {'b': files}
r = requests.Request('POST', httpbin('post'), data=data, files=files)
p = r.prepare()
assert 'multipart/form-data' in p.headers['Content-Type']
def test_can_send_file_object_with_non_string_filename(self, httpbin):
f = io.BytesIO()
f.name = 2
r = requests.Request('POST', httpbin('post'), files={'f': f})
p = r.prepare()
assert 'multipart/form-data' in p.headers['Content-Type']
def test_autoset_header_values_are_native(self, httpbin):
data = 'this is a string'
length = '16'
req = requests.Request('POST', httpbin('post'), data=data)
p = req.prepare()
assert p.headers['Content-Length'] == length
def test_nonhttp_schemes_dont_check_URLs(self):
test_urls = (
'data:image/gif;base64,R0lGODlhAQABAHAAACH5BAUAAAAALAAAAAABAAEAAAICRAEAOw==',
'file:///etc/passwd',
'magnet:?xt=urn:btih:be08f00302bc2d1d3cfa3af02024fa647a271431',
)
for test_url in test_urls:
req = requests.Request('GET', test_url)
preq = req.prepare()
assert test_url == preq.url
def test_auth_is_stripped_on_http_downgrade(self, httpbin, httpbin_secure, httpbin_ca_bundle):
r = requests.get(
httpbin_secure('redirect-to'),
params={'url': httpbin('get')},
auth=('user', 'pass'),
verify=httpbin_ca_bundle
)
assert r.history[0].request.headers['Authorization']
assert 'Authorization' not in r.request.headers
def test_auth_is_retained_for_redirect_on_host(self, httpbin):
r = requests.get(httpbin('redirect/1'), auth=('user', 'pass'))
h1 = r.history[0].request.headers['Authorization']
h2 = r.request.headers['Authorization']
assert h1 == h2
def test_should_strip_auth_host_change(self):
s = requests.Session()
assert s.should_strip_auth('http://example.com/foo', 'http://another.example.com/')
def test_should_strip_auth_http_downgrade(self):
s = requests.Session()
assert s.should_strip_auth('https://example.com/foo', 'http://example.com/bar')
def test_should_strip_auth_https_upgrade(self):
s = requests.Session()
assert not s.should_strip_auth('http://example.com/foo', 'https://example.com/bar')
assert not s.should_strip_auth('http://example.com:80/foo', 'https://example.com/bar')
assert not s.should_strip_auth('http://example.com/foo', 'https://example.com:443/bar')
# Non-standard ports should trigger stripping
assert s.should_strip_auth('http://example.com:8080/foo', 'https://example.com/bar')
assert s.should_strip_auth('http://example.com/foo', 'https://example.com:8443/bar')
def test_should_strip_auth_port_change(self):
s = requests.Session()
assert s.should_strip_auth('http://example.com:1234/foo', 'https://example.com:4321/bar')
@pytest.mark.parametrize(
'old_uri, new_uri', (
('https://example.com:443/foo', 'https://example.com/bar'),
('http://example.com:80/foo', 'http://example.com/bar'),
('https://example.com/foo', 'https://example.com:443/bar'),
('http://example.com/foo', 'http://example.com:80/bar')
))
def test_should_strip_auth_default_port(self, old_uri, new_uri):
s = requests.Session()
assert not s.should_strip_auth(old_uri, new_uri)
def test_manual_redirect_with_partial_body_read(self, httpbin):
s = requests.Session()
r1 = s.get(httpbin('redirect/2'), allow_redirects=False, stream=True)
assert r1.is_redirect
rg = s.resolve_redirects(r1, r1.request, stream=True)
# read only the first eight bytes of the response body,
# then follow the redirect
r1.iter_content(8)
r2 = next(rg)
assert r2.is_redirect
# read all of the response via iter_content,
# then follow the redirect
for _ in r2.iter_content():
pass
r3 = next(rg)
assert not r3.is_redirect
def test_prepare_body_position_non_stream(self):
data = b'the data'
prep = requests.Request('GET', 'http://example.com', data=data).prepare()
assert prep._body_position is None
def test_rewind_body(self):
data = io.BytesIO(b'the data')
prep = requests.Request('GET', 'http://example.com', data=data).prepare()
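        # _body_position records the stream's tell() at prepare time so the
        # body can be rewound if a redirect has to be re-sent.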
assert prep._body_position == 0
assert prep.body.read() == b'the data'
# the data has all been read
assert prep.body.read() == b''
# rewind it back
requests.utils.rewind_body(prep)
assert prep.body.read() == b'the data'
def test_rewind_partially_read_body(self):
data = io.BytesIO(b'the data')
data.read(4) # read some data
prep = requests.Request('GET', 'http://example.com', data=data).prepare()
assert prep._body_position == 4
assert prep.body.read() == b'data'
# the data has all been read
assert prep.body.read() == b''
# rewind it back
requests.utils.rewind_body(prep)
assert prep.body.read() == b'data'
def test_rewind_body_no_seek(self):
class BadFileObj:
def __init__(self, data):
self.data = data
def tell(self):
return 0
def __iter__(self):
return
data = BadFileObj('the data')
prep = requests.Request('GET', 'http://example.com', data=data).prepare()
assert prep._body_position == 0
with pytest.raises(UnrewindableBodyError) as e:
requests.utils.rewind_body(prep)
assert 'Unable to rewind request body' in str(e)
def test_rewind_body_failed_seek(self):
class BadFileObj:
def __init__(self, data):
self.data = data
def tell(self):
return 0
def seek(self, pos, whence=0):
raise OSError()
def __iter__(self):
return
data = BadFileObj('the data')
prep = requests.Request('GET', 'http://example.com', data=data).prepare()
assert prep._body_position == 0
with pytest.raises(UnrewindableBodyError) as e:
requests.utils.rewind_body(prep)
assert 'error occurred when rewinding request body' in str(e)
def test_rewind_body_failed_tell(self):
class BadFileObj:
def __init__(self, data):
self.data = data
def tell(self):
raise OSError()
def __iter__(self):
return
data = BadFileObj('the data')
prep = requests.Request('GET', 'http://example.com', data=data).prepare()
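        # When tell() raises, _body_position is set to a non-integer
        # sentinel, so it is present but unusable for seeking.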
assert prep._body_position is not None
with pytest.raises(UnrewindableBodyError) as e:
requests.utils.rewind_body(prep)
assert 'Unable to rewind request body' in str(e)
def _patch_adapter_gzipped_redirect(self, session, url):
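        # Monkey-patch the adapter so the first response falsely claims a
        # gzip content-encoding, mimicking a misbehaving server.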
adapter = session.get_adapter(url=url)
org_build_response = adapter.build_response
self._patched_response = False
def build_response(*args, **kwargs):
resp = org_build_response(*args, **kwargs)
if not self._patched_response:
resp.raw.headers['content-encoding'] = 'gzip'
self._patched_response = True
return resp
adapter.build_response = build_response
def test_redirect_with_wrong_gzipped_header(self, httpbin):
s = requests.Session()
url = httpbin('redirect/1')
self._patch_adapter_gzipped_redirect(s, url)
s.get(url)
@pytest.mark.parametrize(
'username, password, auth_str', (
('test', 'test', 'Basic dGVzdDp0ZXN0'),
(u'имя'.encode('utf-8'), u'пароль'.encode('utf-8'), 'Basic 0LjQvNGPOtC/0LDRgNC+0LvRjA=='),
))
def test_basic_auth_str_is_always_native(self, username, password, auth_str):
s = _basic_auth_str(username, password)
assert isinstance(s, builtin_str)
assert s == auth_str
def test_requests_history_is_saved(self, httpbin):
r = requests.get(httpbin('redirect/5'))
total = r.history[-1].history
        for i, item in enumerate(r.history):
            assert item.history == total[0:i]
def test_json_param_post_content_type_works(self, httpbin):
r = requests.post(
httpbin('post'),
json={'life': 42}
)
assert r.status_code == 200
assert 'application/json' in r.request.headers['Content-Type']
assert {'life': 42} == r.json()['json']
def test_json_param_post_should_not_override_data_param(self, httpbin):
r = requests.Request(method='POST', url=httpbin('post'),
data={'stuff': 'elixr'},
json={'music': 'flute'})
prep = r.prepare()
assert 'stuff=elixr' == prep.body
def test_response_iter_lines(self, httpbin):
r = requests.get(httpbin('stream/4'), stream=True)
assert r.status_code == 200
it = r.iter_lines()
next(it)
assert len(list(it)) == 3
def test_response_context_manager(self, httpbin):
with requests.get(httpbin('stream/4'), stream=True) as response:
assert isinstance(response, requests.Response)
assert response.raw.closed
def test_unconsumed_session_response_closes_connection(self, httpbin):
s = requests.session()
with contextlib.closing(s.get(httpbin('stream/4'), stream=True)) as response:
pass
assert response._content_consumed is False
assert response.raw.closed
@pytest.mark.xfail
def test_response_iter_lines_reentrant(self, httpbin):
"""Response.iter_lines() is not reentrant safe"""
r = requests.get(httpbin('stream/4'), stream=True)
assert r.status_code == 200
next(r.iter_lines())
assert len(list(r.iter_lines())) == 3
def test_session_close_proxy_clear(self, mocker):
proxies = {
'one': mocker.Mock(),
'two': mocker.Mock(),
}
session = requests.Session()
mocker.patch.dict(session.adapters['http://'].proxy_manager, proxies)
session.close()
proxies['one'].clear.assert_called_once_with()
proxies['two'].clear.assert_called_once_with()
def test_proxy_auth(self):
adapter = HTTPAdapter()
headers = adapter.proxy_headers("http://user:pass@httpbin.org")
assert headers == {'Proxy-Authorization': 'Basic dXNlcjpwYXNz'}
def test_proxy_auth_empty_pass(self):
adapter = HTTPAdapter()
headers = adapter.proxy_headers("http://user:@httpbin.org")
assert headers == {'Proxy-Authorization': 'Basic dXNlcjo='}
def test_response_json_when_content_is_None(self, httpbin):
r = requests.get(httpbin('/status/204'))
# Make sure r.content is None
r.status_code = 0
r._content = False
r._content_consumed = False
assert r.content is None
with pytest.raises(ValueError):
r.json()
def test_response_without_release_conn(self):
"""Test `close` call for non-urllib3-like raw objects.
Should work when `release_conn` attr doesn't exist on `response.raw`.
"""
resp = requests.Response()
resp.raw = StringIO.StringIO('test')
assert not resp.raw.closed
resp.close()
assert resp.raw.closed
def test_empty_stream_with_auth_does_not_set_content_length_header(self, httpbin):
"""Ensure that a byte stream with size 0 will not set both a Content-Length
and Transfer-Encoding header.
"""
auth = ('user', 'pass')
url = httpbin('post')
file_obj = io.BytesIO(b'')
r = requests.Request('POST', url, auth=auth, data=file_obj)
prepared_request = r.prepare()
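        # A zero-length stream has no usable Content-Length, so requests
        # falls back to a chunked upload for it.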
assert 'Transfer-Encoding' in prepared_request.headers
assert 'Content-Length' not in prepared_request.headers
def test_stream_with_auth_does_not_set_transfer_encoding_header(self, httpbin):
"""Ensure that a byte stream with size > 0 will not set both a Content-Length
and Transfer-Encoding header.
"""
auth = ('user', 'pass')
url = httpbin('post')
file_obj = io.BytesIO(b'test data')
r = requests.Request('POST', url, auth=auth, data=file_obj)
prepared_request = r.prepare()
assert 'Transfer-Encoding' not in prepared_request.headers
assert 'Content-Length' in prepared_request.headers
def test_chunked_upload_does_not_set_content_length_header(self, httpbin):
"""Ensure that requests with a generator body stream using
Transfer-Encoding: chunked, not a Content-Length header.
"""
data = (i for i in [b'a', b'b', b'c'])
url = httpbin('post')
r = requests.Request('POST', url, data=data)
prepared_request = r.prepare()
assert 'Transfer-Encoding' in prepared_request.headers
assert 'Content-Length' not in prepared_request.headers
def test_custom_redirect_mixin(self, httpbin):
"""Tests a custom mixin to overwrite ``get_redirect_target``.
Ensures a subclassed ``requests.Session`` can handle a certain type of
malformed redirect responses.
1. original request receives a proper response: 302 redirect
2. following the redirect, a malformed response is given:
status code = HTTP 200
location = alternate url
3. the custom session catches the edge case and follows the redirect
"""
url_final = httpbin('html')
querystring_malformed = urlencode({'location': url_final})
url_redirect_malformed = httpbin('response-headers?%s' % querystring_malformed)
querystring_redirect = urlencode({'url': url_redirect_malformed})
url_redirect = httpbin('redirect-to?%s' % querystring_redirect)
urls_test = [url_redirect,
url_redirect_malformed,
url_final,
]
class CustomRedirectSession(requests.Session):
def get_redirect_target(self, resp):
# default behavior
if resp.is_redirect:
return resp.headers['location']
# edge case - check to see if 'location' is in headers anyways
location = resp.headers.get('location')
if location and (location != resp.url):
return location
return None
session = CustomRedirectSession()
r = session.get(urls_test[0])
assert len(r.history) == 2
assert r.status_code == 200
assert r.history[0].status_code == 302
assert r.history[0].is_redirect
assert r.history[1].status_code == 200
assert not r.history[1].is_redirect
assert r.url == urls_test[2]
class TestCaseInsensitiveDict:
@pytest.mark.parametrize(
'cid', (
CaseInsensitiveDict({'Foo': 'foo', 'BAr': 'bar'}),
CaseInsensitiveDict([('Foo', 'foo'), ('BAr', 'bar')]),
CaseInsensitiveDict(FOO='foo', BAr='bar'),
))
def test_init(self, cid):
assert len(cid) == 2
assert 'foo' in cid
assert 'bar' in cid
def test_docstring_example(self):
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
assert cid['aCCEPT'] == 'application/json'
assert list(cid) == ['Accept']
def test_len(self):
cid = CaseInsensitiveDict({'a': 'a', 'b': 'b'})
cid['A'] = 'a'
assert len(cid) == 2
def test_getitem(self):
cid = CaseInsensitiveDict({'Spam': 'blueval'})
assert cid['spam'] == 'blueval'
assert cid['SPAM'] == 'blueval'
def test_fixes_649(self):
"""__setitem__ should behave case-insensitively."""
cid = CaseInsensitiveDict()
cid['spam'] = 'oneval'
cid['Spam'] = 'twoval'
cid['sPAM'] = 'redval'
cid['SPAM'] = 'blueval'
assert cid['spam'] == 'blueval'
assert cid['SPAM'] == 'blueval'
assert list(cid.keys()) == ['SPAM']
def test_delitem(self):
cid = CaseInsensitiveDict()
cid['Spam'] = 'someval'
del cid['sPam']
assert 'spam' not in cid
assert len(cid) == 0
def test_contains(self):
cid = CaseInsensitiveDict()
cid['Spam'] = 'someval'
assert 'Spam' in cid
assert 'spam' in cid
assert 'SPAM' in cid
assert 'sPam' in cid
assert 'notspam' not in cid
def test_get(self):
cid = CaseInsensitiveDict()
cid['spam'] = 'oneval'
cid['SPAM'] = 'blueval'
assert cid.get('spam') == 'blueval'
assert cid.get('SPAM') == 'blueval'
assert cid.get('sPam') == 'blueval'
assert cid.get('notspam', 'default') == 'default'
def test_update(self):
cid = CaseInsensitiveDict()
cid['spam'] = 'blueval'
cid.update({'sPam': 'notblueval'})
assert cid['spam'] == 'notblueval'
cid = CaseInsensitiveDict({'Foo': 'foo', 'BAr': 'bar'})
cid.update({'fOO': 'anotherfoo', 'bAR': 'anotherbar'})
assert len(cid) == 2
assert cid['foo'] == 'anotherfoo'
assert cid['bar'] == 'anotherbar'
def test_update_retains_unchanged(self):
cid = CaseInsensitiveDict({'foo': 'foo', 'bar': 'bar'})
cid.update({'foo': 'newfoo'})
assert cid['bar'] == 'bar'
def test_iter(self):
cid = CaseInsensitiveDict({'Spam': 'spam', 'Eggs': 'eggs'})
keys = frozenset(['Spam', 'Eggs'])
assert frozenset(iter(cid)) == keys
def test_equality(self):
cid = CaseInsensitiveDict({'SPAM': 'blueval', 'Eggs': 'redval'})
othercid = CaseInsensitiveDict({'spam': 'blueval', 'eggs': 'redval'})
assert cid == othercid
del othercid['spam']
assert cid != othercid
assert cid == {'spam': 'blueval', 'eggs': 'redval'}
assert cid != object()
def test_setdefault(self):
cid = CaseInsensitiveDict({'Spam': 'blueval'})
assert cid.setdefault('spam', 'notblueval') == 'blueval'
assert cid.setdefault('notspam', 'notblueval') == 'notblueval'
def test_lower_items(self):
cid = CaseInsensitiveDict({
'Accept': 'application/json',
'user-Agent': 'requests',
})
keyset = frozenset(lowerkey for lowerkey, v in cid.lower_items())
lowerkeyset = frozenset(['accept', 'user-agent'])
assert keyset == lowerkeyset
def test_preserve_key_case(self):
cid = CaseInsensitiveDict({
'Accept': 'application/json',
'user-Agent': 'requests',
})
keyset = frozenset(['Accept', 'user-Agent'])
assert frozenset(i[0] for i in cid.items()) == keyset
assert frozenset(cid.keys()) == keyset
assert frozenset(cid) == keyset
def test_preserve_last_key_case(self):
cid = CaseInsensitiveDict({
'Accept': 'application/json',
'user-Agent': 'requests',
})
cid.update({'ACCEPT': 'application/json'})
cid['USER-AGENT'] = 'requests'
keyset = frozenset(['ACCEPT', 'USER-AGENT'])
assert frozenset(i[0] for i in cid.items()) == keyset
assert frozenset(cid.keys()) == keyset
assert frozenset(cid) == keyset
def test_copy(self):
cid = CaseInsensitiveDict({
'Accept': 'application/json',
'user-Agent': 'requests',
})
cid_copy = cid.copy()
assert cid == cid_copy
cid['changed'] = True
assert cid != cid_copy
class TestMorselToCookieExpires:
"""Tests for morsel_to_cookie when morsel contains expires."""
def test_expires_valid_str(self):
"""Test case where we convert expires from string time."""
morsel = Morsel()
morsel['expires'] = 'Thu, 01-Jan-1970 00:00:01 GMT'
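        # This timestamp is one second past the Unix epoch, so the parsed
        # expires value should be exactly 1.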
cookie = morsel_to_cookie(morsel)
assert cookie.expires == 1
@pytest.mark.parametrize(
'value, exception', (
(100, TypeError),
('woops', ValueError),
))
def test_expires_invalid_int(self, value, exception):
"""Test case where an invalid type is passed for expires."""
morsel = Morsel()
morsel['expires'] = value
with pytest.raises(exception):
morsel_to_cookie(morsel)
def test_expires_none(self):
"""Test case where expires is None."""
morsel = Morsel()
morsel['expires'] = None
cookie = morsel_to_cookie(morsel)
assert cookie.expires is None
class TestMorselToCookieMaxAge:
"""Tests for morsel_to_cookie when morsel contains max-age."""
def test_max_age_valid_int(self):
"""Test case where a valid max age in seconds is passed."""
morsel = Morsel()
morsel['max-age'] = 60
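        # morsel_to_cookie converts a relative max-age into an absolute
        # expires value, roughly int(time.time()) + max-age.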
cookie = morsel_to_cookie(morsel)
assert isinstance(cookie.expires, int)
def test_max_age_invalid_str(self):
"""Test case where a invalid max age is passed."""
morsel = Morsel()
morsel['max-age'] = 'woops'
with pytest.raises(TypeError):
morsel_to_cookie(morsel)
class TestTimeout:
def test_stream_timeout(self, httpbin):
try:
requests.get(httpbin('delay/10'), timeout=2.0)
except requests.exceptions.Timeout as e:
assert 'Read timed out' in e.args[0].args[0]
@pytest.mark.parametrize(
'timeout, error_text', (
((3, 4, 5), '(connect, read)'),
('foo', 'must be an int, float or None'),
))
def test_invalid_timeout(self, httpbin, timeout, error_text):
with pytest.raises(ValueError) as e:
requests.get(httpbin('get'), timeout=timeout)
assert error_text in str(e)
@pytest.mark.parametrize(
'timeout', (
None,
Urllib3Timeout(connect=None, read=None)
))
def test_none_timeout(self, httpbin, timeout):
"""Check that you can set None as a valid timeout value.
To actually test this behavior, we'd want to check that setting the
timeout to None actually lets the request block past the system default
timeout. However, this would make the test suite unbearably slow.
Instead we verify that setting the timeout to None does not prevent the
request from succeeding.
"""
r = requests.get(httpbin('get'), timeout=timeout)
assert r.status_code == 200
@pytest.mark.parametrize(
'timeout', (
(None, 0.1),
Urllib3Timeout(connect=None, read=0.1)
))
def test_read_timeout(self, httpbin, timeout):
try:
requests.get(httpbin('delay/10'), timeout=timeout)
pytest.fail('The recv() request should time out.')
except ReadTimeout:
pass
@pytest.mark.parametrize(
'timeout', (
(0.1, None),
Urllib3Timeout(connect=0.1, read=None)
))
def test_connect_timeout(self, timeout):
try:
requests.get(TARPIT, timeout=timeout)
pytest.fail('The connect() request should time out.')
except ConnectTimeout as e:
assert isinstance(e, ConnectionError)
assert isinstance(e, Timeout)
@pytest.mark.parametrize(
'timeout', (
(0.1, 0.1),
Urllib3Timeout(connect=0.1, read=0.1)
))
def test_total_timeout_connect(self, timeout):
try:
requests.get(TARPIT, timeout=timeout)
pytest.fail('The connect() request should time out.')
except ConnectTimeout:
pass
def test_encoded_methods(self, httpbin):
"""See: https://github.com/requests/requests/issues/2316"""
r = requests.request(b'GET', httpbin('get'))
assert r.ok
SendCall = collections.namedtuple('SendCall', ('args', 'kwargs'))
class RedirectSession(SessionRedirectMixin):
def __init__(self, order_of_redirects):
self.redirects = order_of_redirects
self.calls = []
self.max_redirects = 30
self.cookies = {}
self.trust_env = False
def send(self, *args, **kwargs):
self.calls.append(SendCall(args, kwargs))
return self.build_response()
def build_response(self):
request = self.calls[-1].args[0]
r = requests.Response()
try:
r.status_code = int(self.redirects.pop(0))
except IndexError:
r.status_code = 200
r.headers = CaseInsensitiveDict({'Location': '/'})
r.raw = self._build_raw()
r.request = request
return r
def _build_raw(self):
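        # requests expects urllib3-style raw objects to expose
        # release_conn(); StringIO lacks it, so attach a no-op stand-in.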
string = StringIO.StringIO('')
setattr(string, 'release_conn', lambda *args: args)
return string
def test_json_encodes_as_bytes():
# urllib3 expects bodies as bytes-like objects
body = {"key": "value"}
p = PreparedRequest()
p.prepare(
method='GET',
url='https://www.example.com/',
json=body
)
assert isinstance(p.body, bytes)
def test_requests_are_updated_each_time(httpbin):
session = RedirectSession([303, 307])
prep = requests.Request('POST', httpbin('post')).prepare()
r0 = session.send(prep)
assert r0.request.method == 'POST'
assert session.calls[-1] == SendCall((r0.request,), {})
redirect_generator = session.resolve_redirects(r0, prep)
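    # resolve_redirects() re-issues each redirected request via send();
    # these are the default keyword arguments it supplies when the caller
    # passes none of its own.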
default_keyword_args = {
'stream': False,
'verify': True,
'cert': None,
'timeout': None,
'allow_redirects': False,
'proxies': {},
}
for response in redirect_generator:
assert response.request.method == 'GET'
send_call = SendCall((response.request,), default_keyword_args)
assert session.calls[-1] == send_call
@pytest.mark.parametrize("var,url,proxy", [
('http_proxy', 'http://example.com', 'socks5://proxy.com:9876'),
('https_proxy', 'https://example.com', 'socks5://proxy.com:9876'),
('all_proxy', 'http://example.com', 'socks5://proxy.com:9876'),
('all_proxy', 'https://example.com', 'socks5://proxy.com:9876'),
])
def test_proxy_env_vars_override_default(var, url, proxy):
session = requests.Session()
prep = PreparedRequest()
prep.prepare(method='GET', url=url)
kwargs = {
var: proxy
}
scheme = urlparse(url).scheme
with override_environ(**kwargs):
proxies = session.rebuild_proxies(prep, {})
assert scheme in proxies
assert proxies[scheme] == proxy
@pytest.mark.parametrize(
'data', (
(('a', 'b'), ('c', 'd')),
(('c', 'd'), ('a', 'b')),
(('a', 'b'), ('c', 'd'), ('e', 'f')),
))
def test_data_argument_accepts_tuples(data):
"""Ensure that the data argument will accept tuples of strings
and properly encode them.
"""
p = PreparedRequest()
p.prepare(
method='GET',
url='http://www.example.com',
data=data,
hooks=default_hooks()
)
assert p.body == urlencode(data)
@pytest.mark.parametrize(
'kwargs', (
None,
{
'method': 'GET',
'url': 'http://www.example.com',
'data': 'foo=bar',
'hooks': default_hooks()
},
{
'method': 'GET',
'url': 'http://www.example.com',
'data': 'foo=bar',
'hooks': default_hooks(),
'cookies': {'foo': 'bar'}
},
{
'method': 'GET',
'url': u('http://www.example.com/üniçø∂é')
},
))
def test_prepared_copy(kwargs):
p = PreparedRequest()
if kwargs:
p.prepare(**kwargs)
copy = p.copy()
for attr in ('method', 'url', 'headers', '_cookies', 'body', 'hooks'):
assert getattr(p, attr) == getattr(copy, attr)
def test_urllib3_retries(httpbin):
from urllib3.util import Retry
s = requests.Session()
s.mount('http://', HTTPAdapter(max_retries=Retry(
total=2, status_forcelist=[500]
)))
with pytest.raises(RetryError):
s.get(httpbin('status/500'))
def test_urllib3_pool_connection_closed(httpbin):
s = requests.Session()
s.mount('http://', HTTPAdapter(pool_connections=0, pool_maxsize=0))
try:
s.get(httpbin('status/200'))
except ConnectionError as e:
assert u"Pool is closed." in str(e)
class TestPreparingURLs(object):
@pytest.mark.parametrize(
'url,expected',
(
('http://google.com', 'http://google.com/'),
(u'http://ジェーピーニック.jp', u'http://xn--hckqz9bzb1cyrb.jp/'),
(u'http://xn--n3h.net/', u'http://xn--n3h.net/'),
(
u'http://ジェーピーニック.jp'.encode('utf-8'),
u'http://xn--hckqz9bzb1cyrb.jp/'
),
(
u'http://straße.de/straße',
u'http://xn--strae-oqa.de/stra%C3%9Fe'
),
(
u'http://straße.de/straße'.encode('utf-8'),
u'http://xn--strae-oqa.de/stra%C3%9Fe'
),
(
u'http://Königsgäßchen.de/straße',
u'http://xn--knigsgchen-b4a3dun.de/stra%C3%9Fe'
),
(
u'http://Königsgäßchen.de/straße'.encode('utf-8'),
u'http://xn--knigsgchen-b4a3dun.de/stra%C3%9Fe'
),
(
b'http://xn--n3h.net/',
u'http://xn--n3h.net/'
),
(
b'http://[1200:0000:ab00:1234:0000:2552:7777:1313]:12345/',
u'http://[1200:0000:ab00:1234:0000:2552:7777:1313]:12345/'
),
(
u'http://[1200:0000:ab00:1234:0000:2552:7777:1313]:12345/',
u'http://[1200:0000:ab00:1234:0000:2552:7777:1313]:12345/'
)
)
)
def test_preparing_url(self, url, expected):
def normalize_percent_encode(x):
# Helper function that normalizes equivalent
# percent-encoded bytes before comparisons
for c in re.findall(r'%[a-fA-F0-9]{2}', x):
x = x.replace(c, c.upper())
return x
r = requests.Request('GET', url=url)
p = r.prepare()
assert normalize_percent_encode(p.url) == expected
@pytest.mark.parametrize(
'url',
(
b"http://*.google.com",
b"http://*",
u"http://*.google.com",
u"http://*",
u"http://☃.net/"
)
)
def test_preparing_bad_url(self, url):
r = requests.Request('GET', url=url)
with pytest.raises(requests.exceptions.InvalidURL):
r.prepare()
@pytest.mark.parametrize(
'url, exception',
(
('http://localhost:-1', InvalidURL),
)
)
def test_redirecting_to_bad_url(self, httpbin, url, exception):
with pytest.raises(exception):
r = requests.get(httpbin('redirect-to'), params={'url': url})
@pytest.mark.parametrize(
'input, expected',
(
(
b"http+unix://%2Fvar%2Frun%2Fsocket/path%7E",
u"http+unix://%2Fvar%2Frun%2Fsocket/path~",
),
(
u"http+unix://%2Fvar%2Frun%2Fsocket/path%7E",
u"http+unix://%2Fvar%2Frun%2Fsocket/path~",
),
(
b"mailto:user@example.org",
u"mailto:user@example.org",
),
(
u"mailto:user@example.org",
u"mailto:user@example.org",
),
(
b"data:SSDimaUgUHl0aG9uIQ==",
u"data:SSDimaUgUHl0aG9uIQ==",
)
)
)
def test_url_mutation(self, input, expected):
"""
This test validates that we correctly exclude some URLs from
preparation, and that we handle others. Specifically, it tests that
any URL whose scheme doesn't begin with "http" is left alone, and
those whose scheme *does* begin with "http" are mutated.
"""
r = requests.Request('GET', url=input)
p = r.prepare()
assert p.url == expected
@pytest.mark.parametrize(
'input, params, expected',
(
(
b"http+unix://%2Fvar%2Frun%2Fsocket/path",
{"key": "value"},
u"http+unix://%2Fvar%2Frun%2Fsocket/path?key=value",
),
(
u"http+unix://%2Fvar%2Frun%2Fsocket/path",
{"key": "value"},
u"http+unix://%2Fvar%2Frun%2Fsocket/path?key=value",
),
(
b"mailto:user@example.org",
{"key": "value"},
u"mailto:user@example.org",
),
(
u"mailto:user@example.org",
{"key": "value"},
u"mailto:user@example.org",
),
)
)
def test_parameters_for_nonstandard_schemes(self, input, params, expected):
"""
Setting parameters for nonstandard schemes is allowed if those schemes
begin with "http", and is forbidden otherwise.
"""
r = requests.Request('GET', url=input, params=params)
p = r.prepare()
assert p.url == expected
requests-2.22.0/tests/utils.py 0000644 0000765 0000024 00000000607 13467270450 020030 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
import contextlib
import os
@contextlib.contextmanager
def override_environ(**kwargs):
save_env = dict(os.environ)
for key, value in kwargs.items():
if value is None:
del os.environ[key]
else:
os.environ[key] = value
try:
yield
finally:
os.environ.clear()
os.environ.update(save_env)
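# Illustrative usage (hypothetical values): a keyword sets a variable for the
# duration of the block, None removes one that is currently set, and the
# original environment is restored on exit.
#
#     with override_environ(HTTPS_PROXY='http://proxy:3128', NO_PROXY=None):
#         ...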
requests-2.22.0/tests/test_help.py 0000644 0000765 0000024 00000001542 13467270450 020656 0 ustar nateprewitt staff 0000000 0000000 # -*- encoding: utf-8
import sys
import pytest
from requests.help import info
def test_system_ssl():
"""Verify we're actually setting system_ssl when it should be available."""
assert info()['system_ssl']['version'] != ''
class VersionedPackage(object):
def __init__(self, version):
self.__version__ = version
def test_idna_without_version_attribute(mocker):
"""Older versions of IDNA don't provide a __version__ attribute, verify
that if we have such a package, we don't blow up.
"""
mocker.patch('requests.help.idna', new=None)
assert info()['idna'] == {'version': ''}
def test_idna_with_version_attribute(mocker):
"""Verify we're actually setting idna version when it should be available."""
mocker.patch('requests.help.idna', new=VersionedPackage('2.6'))
assert info()['idna'] == {'version': '2.6'}
requests-2.22.0/tests/test_lowlevel.py 0000644 0000765 0000024 00000026402 13467270450 021561 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
import pytest
import threading
import requests
from tests.testserver.server import Server, consume_socket_content
from .utils import override_environ
def test_chunked_upload():
"""can safely send generators"""
close_server = threading.Event()
server = Server.basic_response_server(wait_to_close_event=close_server)
data = iter([b'a', b'b', b'c'])
with server as (host, port):
url = 'http://{}:{}/'.format(host, port)
r = requests.post(url, data=data, stream=True)
close_server.set() # release server block
assert r.status_code == 200
assert r.request.headers['Transfer-Encoding'] == 'chunked'
def test_digestauth_401_count_reset_on_redirect():
"""Ensure we correctly reset num_401_calls after a successful digest auth,
followed by a 302 redirect to another digest auth prompt.
See https://github.com/requests/requests/issues/1979.
"""
text_401 = (b'HTTP/1.1 401 UNAUTHORIZED\r\n'
b'Content-Length: 0\r\n'
b'WWW-Authenticate: Digest nonce="6bf5d6e4da1ce66918800195d6b9130d"'
b', opaque="372825293d1c26955496c80ed6426e9e", '
b'realm="me@kennethreitz.com", qop=auth\r\n\r\n')
text_302 = (b'HTTP/1.1 302 FOUND\r\n'
b'Content-Length: 0\r\n'
b'Location: /\r\n\r\n')
text_200 = (b'HTTP/1.1 200 OK\r\n'
b'Content-Length: 0\r\n\r\n')
expected_digest = (b'Authorization: Digest username="user", '
b'realm="me@kennethreitz.com", '
b'nonce="6bf5d6e4da1ce66918800195d6b9130d", uri="/"')
auth = requests.auth.HTTPDigestAuth('user', 'pass')
def digest_response_handler(sock):
# Respond to initial GET with a challenge.
request_content = consume_socket_content(sock, timeout=0.5)
assert request_content.startswith(b"GET / HTTP/1.1")
sock.send(text_401)
# Verify we receive an Authorization header in response, then redirect.
request_content = consume_socket_content(sock, timeout=0.5)
assert expected_digest in request_content
sock.send(text_302)
# Verify Authorization isn't sent to the redirected host,
# then send another challenge.
request_content = consume_socket_content(sock, timeout=0.5)
assert b'Authorization:' not in request_content
sock.send(text_401)
# Verify Authorization is sent correctly again, and return 200 OK.
request_content = consume_socket_content(sock, timeout=0.5)
assert expected_digest in request_content
sock.send(text_200)
return request_content
close_server = threading.Event()
server = Server(digest_response_handler, wait_to_close_event=close_server)
with server as (host, port):
url = 'http://{}:{}/'.format(host, port)
r = requests.get(url, auth=auth)
# Verify server succeeded in authenticating.
assert r.status_code == 200
# Verify Authorization was sent in final request.
assert 'Authorization' in r.request.headers
assert r.request.headers['Authorization'].startswith('Digest ')
# Verify redirect happened as we expected.
assert r.history[0].status_code == 302
close_server.set()
def test_digestauth_401_only_sent_once():
"""Ensure we correctly respond to a 401 challenge once, and then
stop responding if challenged again.
"""
text_401 = (b'HTTP/1.1 401 UNAUTHORIZED\r\n'
b'Content-Length: 0\r\n'
b'WWW-Authenticate: Digest nonce="6bf5d6e4da1ce66918800195d6b9130d"'
b', opaque="372825293d1c26955496c80ed6426e9e", '
b'realm="me@kennethreitz.com", qop=auth\r\n\r\n')
expected_digest = (b'Authorization: Digest username="user", '
b'realm="me@kennethreitz.com", '
b'nonce="6bf5d6e4da1ce66918800195d6b9130d", uri="/"')
auth = requests.auth.HTTPDigestAuth('user', 'pass')
def digest_failed_response_handler(sock):
# Respond to initial GET with a challenge.
request_content = consume_socket_content(sock, timeout=0.5)
assert request_content.startswith(b"GET / HTTP/1.1")
sock.send(text_401)
# Verify we receive an Authorization header in response, then
# challenge again.
request_content = consume_socket_content(sock, timeout=0.5)
assert expected_digest in request_content
sock.send(text_401)
# Verify the client didn't respond to second challenge.
request_content = consume_socket_content(sock, timeout=0.5)
assert request_content == b''
return request_content
close_server = threading.Event()
server = Server(digest_failed_response_handler, wait_to_close_event=close_server)
with server as (host, port):
url = 'http://{}:{}/'.format(host, port)
r = requests.get(url, auth=auth)
# Verify server didn't authenticate us.
assert r.status_code == 401
assert r.history[0].status_code == 401
close_server.set()
def test_digestauth_only_on_4xx():
"""Ensure we only send digestauth on 4xx challenges.
See https://github.com/requests/requests/issues/3772.
"""
text_200_chal = (b'HTTP/1.1 200 OK\r\n'
b'Content-Length: 0\r\n'
b'WWW-Authenticate: Digest nonce="6bf5d6e4da1ce66918800195d6b9130d"'
b', opaque="372825293d1c26955496c80ed6426e9e", '
b'realm="me@kennethreitz.com", qop=auth\r\n\r\n')
auth = requests.auth.HTTPDigestAuth('user', 'pass')
def digest_response_handler(sock):
# Respond to GET with a 200 containing www-authenticate header.
request_content = consume_socket_content(sock, timeout=0.5)
assert request_content.startswith(b"GET / HTTP/1.1")
sock.send(text_200_chal)
# Verify the client didn't respond with auth.
request_content = consume_socket_content(sock, timeout=0.5)
assert request_content == b''
return request_content
close_server = threading.Event()
server = Server(digest_response_handler, wait_to_close_event=close_server)
with server as (host, port):
url = 'http://{}:{}/'.format(host, port)
r = requests.get(url, auth=auth)
# Verify server didn't receive auth from us.
assert r.status_code == 200
assert len(r.history) == 0
close_server.set()
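# Build (env-var, scheme) pairs: http_proxy/https_proxy apply to their own
# scheme, all_proxy applies to both, and each variable is honoured in its
# upper-case spelling as well.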
_schemes_by_var_prefix = [
('http', ['http']),
('https', ['https']),
('all', ['http', 'https']),
]
_proxy_combos = []
for prefix, schemes in _schemes_by_var_prefix:
for scheme in schemes:
_proxy_combos.append(("{}_proxy".format(prefix), scheme))
_proxy_combos += [(var.upper(), scheme) for var, scheme in _proxy_combos]
@pytest.mark.parametrize("var,scheme", _proxy_combos)
def test_use_proxy_from_environment(httpbin, var, scheme):
url = "{}://httpbin.org".format(scheme)
fake_proxy = Server() # do nothing with the requests; just close the socket
with fake_proxy as (host, port):
proxy_url = "socks5://{}:{}".format(host, port)
kwargs = {var: proxy_url}
with override_environ(**kwargs):
# fake proxy's lack of response will cause a ConnectionError
with pytest.raises(requests.exceptions.ConnectionError):
requests.get(url)
# the fake proxy received a request
assert len(fake_proxy.handler_results) == 1
# it had actual content (not checking for SOCKS protocol for now)
assert len(fake_proxy.handler_results[0]) > 0
def test_redirect_rfc1808_to_non_ascii_location():
path = u'š'
expected_path = b'%C5%A1'
redirect_request = [] # stores the second request to the server
def redirect_resp_handler(sock):
consume_socket_content(sock, timeout=0.5)
location = u'//{}:{}/{}'.format(host, port, path)
sock.send(
b'HTTP/1.1 301 Moved Permanently\r\n'
b'Content-Length: 0\r\n'
b'Location: ' + location.encode('utf8') + b'\r\n'
b'\r\n'
)
redirect_request.append(consume_socket_content(sock, timeout=0.5))
sock.send(b'HTTP/1.1 200 OK\r\n\r\n')
close_server = threading.Event()
server = Server(redirect_resp_handler, wait_to_close_event=close_server)
with server as (host, port):
url = u'http://{}:{}'.format(host, port)
r = requests.get(url=url, allow_redirects=True)
assert r.status_code == 200
assert len(r.history) == 1
assert r.history[0].status_code == 301
assert redirect_request[0].startswith(b'GET /' + expected_path + b' HTTP/1.1')
assert r.url == u'{}/{}'.format(url, expected_path.decode('ascii'))
close_server.set()
def test_fragment_not_sent_with_request():
"""Verify that the fragment portion of a URI isn't sent to the server."""
def response_handler(sock):
req = consume_socket_content(sock, timeout=0.5)
sock.send(
b'HTTP/1.1 200 OK\r\n'
            # str() yields the ASCII digits; bytes(int) on Python 3 would
            # instead produce that many NUL bytes.
            b'Content-Length: ' + str(len(req)).encode('ascii') + b'\r\n'
b'\r\n'+req
)
close_server = threading.Event()
server = Server(response_handler, wait_to_close_event=close_server)
with server as (host, port):
url = 'http://{}:{}/path/to/thing/#view=edit&token=hunter2'.format(host, port)
r = requests.get(url)
raw_request = r.content
assert r.status_code == 200
headers, body = raw_request.split(b'\r\n\r\n', 1)
status_line, headers = headers.split(b'\r\n', 1)
assert status_line == b'GET /path/to/thing/ HTTP/1.1'
for frag in (b'view', b'edit', b'token', b'hunter2'):
assert frag not in headers
assert frag not in body
close_server.set()
def test_fragment_update_on_redirect():
"""Verify we only append previous fragment if one doesn't exist on new
location. If a new fragment is encountered in a Location header, it should
be added to all subsequent requests.
"""
def response_handler(sock):
consume_socket_content(sock, timeout=0.5)
sock.send(
b'HTTP/1.1 302 FOUND\r\n'
b'Content-Length: 0\r\n'
b'Location: /get#relevant-section\r\n\r\n'
)
consume_socket_content(sock, timeout=0.5)
sock.send(
b'HTTP/1.1 302 FOUND\r\n'
b'Content-Length: 0\r\n'
b'Location: /final-url/\r\n\r\n'
)
consume_socket_content(sock, timeout=0.5)
sock.send(
b'HTTP/1.1 200 OK\r\n\r\n'
)
close_server = threading.Event()
server = Server(response_handler, wait_to_close_event=close_server)
with server as (host, port):
url = 'http://{}:{}/path/to/thing/#view=edit&token=hunter2'.format(host, port)
r = requests.get(url)
raw_request = r.content
assert r.status_code == 200
assert len(r.history) == 2
assert r.history[0].request.url == url
# Verify we haven't overwritten the location with our previous fragment.
assert r.history[1].request.url == 'http://{}:{}/get#relevant-section'.format(host, port)
# Verify previous fragment is used and not the original.
assert r.url == 'http://{}:{}/final-url/#relevant-section'.format(host, port)
close_server.set()
requests-2.22.0/MANIFEST.in 0000644 0000765 0000024 00000000172 13467270450 016707 0 ustar nateprewitt staff 0000000 0000000 include README.md LICENSE NOTICE HISTORY.md pytest.ini requirements.txt Pipfile Pipfile.lock
recursive-include tests *.py
requests-2.22.0/requests/ 0000755 0000765 0000024 00000000000 13467272564 017034 5 ustar nateprewitt staff 0000000 0000000 requests-2.22.0/requests/cookies.py 0000644 0000765 0000024 00000043776 13467270450 021053 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.cookies
~~~~~~~~~~~~~~~~
Compatibility code to be able to use `cookielib.CookieJar` with requests.
requests.utils imports from here, so be careful with imports.
"""
import copy
import time
import calendar
from ._internal_utils import to_native_string
from .compat import cookielib, urlparse, urlunparse, Morsel, MutableMapping
try:
import threading
except ImportError:
import dummy_threading as threading
class MockRequest(object):
"""Wraps a `requests.Request` to mimic a `urllib2.Request`.
The code in `cookielib.CookieJar` expects this interface in order to correctly
manage cookie policies, i.e., determine whether a cookie can be set, given the
domains of the request and the cookie.
The original request object is read-only. The client is responsible for collecting
the new headers via `get_new_headers()` and interpreting them appropriately. You
probably want `get_cookie_header`, defined below.
"""
def __init__(self, request):
self._r = request
self._new_headers = {}
self.type = urlparse(self._r.url).scheme
def get_type(self):
return self.type
def get_host(self):
return urlparse(self._r.url).netloc
def get_origin_req_host(self):
return self.get_host()
def get_full_url(self):
        # Only return the request's URL if the user didn't set the Host
        # header
if not self._r.headers.get('Host'):
return self._r.url
# If they did set it, retrieve it and reconstruct the expected domain
host = to_native_string(self._r.headers['Host'], encoding='utf-8')
parsed = urlparse(self._r.url)
# Reconstruct the URL as we expect it
return urlunparse([
parsed.scheme, host, parsed.path, parsed.params, parsed.query,
parsed.fragment
])
def is_unverifiable(self):
return True
def has_header(self, name):
return name in self._r.headers or name in self._new_headers
def get_header(self, name, default=None):
return self._r.headers.get(name, self._new_headers.get(name, default))
def add_header(self, key, val):
"""cookielib has no legitimate use for this method; add it back if you find one."""
raise NotImplementedError("Cookie headers should be added with add_unredirected_header()")
def add_unredirected_header(self, name, value):
self._new_headers[name] = value
def get_new_headers(self):
return self._new_headers
@property
def unverifiable(self):
return self.is_unverifiable()
@property
def origin_req_host(self):
return self.get_origin_req_host()
@property
def host(self):
return self.get_host()
class MockResponse(object):
"""Wraps a `httplib.HTTPMessage` to mimic a `urllib.addinfourl`.
...what? Basically, expose the parsed HTTP headers from the server response
the way `cookielib` expects to see them.
"""
def __init__(self, headers):
"""Make a MockResponse for `cookielib` to read.
:param headers: a httplib.HTTPMessage or analogous carrying the headers
"""
self._headers = headers
def info(self):
return self._headers
    def getheaders(self, name):
        return self._headers.getheaders(name)
def extract_cookies_to_jar(jar, request, response):
"""Extract the cookies from the response into a CookieJar.
:param jar: cookielib.CookieJar (not necessarily a RequestsCookieJar)
:param request: our own requests.Request object
:param response: urllib3.HTTPResponse object
"""
if not (hasattr(response, '_original_response') and
response._original_response):
return
    # the _original_response field is the wrapped httplib.HTTPResponse object
req = MockRequest(request)
# pull out the HTTPMessage with the headers and put it in the mock:
res = MockResponse(response._original_response.msg)
jar.extract_cookies(res, req)
def get_cookie_header(jar, request):
"""
Produce an appropriate Cookie header string to be sent with `request`, or None.
:rtype: str
"""
r = MockRequest(request)
jar.add_cookie_header(r)
return r.get_new_headers().get('Cookie')
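# A minimal usage sketch (not part of upstream; the domain, path and values
# below are illustrative). Only cookies whose domain and path match the
# request are emitted::
#
#     >>> import requests
#     >>> jar = requests.cookies.RequestsCookieJar()
#     >>> jar.set('session', 'abc123', domain='example.com', path='/')
#     >>> req = requests.Request('GET', 'http://example.com/').prepare()
#     >>> get_cookie_header(jar, req)
#     'session=abc123'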
def remove_cookie_by_name(cookiejar, name, domain=None, path=None):
"""Unsets a cookie by name, by default over all domains and paths.
    Wraps CookieJar.clear(); this is O(n).
"""
clearables = []
for cookie in cookiejar:
if cookie.name != name:
continue
if domain is not None and domain != cookie.domain:
continue
if path is not None and path != cookie.path:
continue
clearables.append((cookie.domain, cookie.path, cookie.name))
for domain, path, name in clearables:
cookiejar.clear(domain, path, name)
class CookieConflictError(RuntimeError):
"""There are two cookies that meet the criteria specified in the cookie jar.
Use .get and .set and include domain and path args in order to be more specific.
"""
class RequestsCookieJar(cookielib.CookieJar, MutableMapping):
"""Compatibility class; is a cookielib.CookieJar, but exposes a dict
interface.
This is the CookieJar we create by default for requests and sessions that
don't specify one, since some clients may expect response.cookies and
session.cookies to support dict operations.
Requests does not use the dict interface internally; it's just for
compatibility with external client code. All requests code should work
out of the box with externally provided instances of ``CookieJar``, e.g.
``LWPCookieJar`` and ``FileCookieJar``.
Unlike a regular CookieJar, this class is pickleable.
.. warning:: dictionary operations that are normally O(1) may be O(n).
"""
def get(self, name, default=None, domain=None, path=None):
"""Dict-like get() that also supports optional domain and path args in
order to resolve naming collisions from using one cookie jar over
multiple domains.
.. warning:: operation is O(n), not O(1).
"""
try:
return self._find_no_duplicates(name, domain, path)
except KeyError:
return default
def set(self, name, value, **kwargs):
"""Dict-like set() that also supports optional domain and path args in
order to resolve naming collisions from using one cookie jar over
multiple domains.
"""
# support client code that unsets cookies by assignment of a None value:
if value is None:
remove_cookie_by_name(self, name, domain=kwargs.get('domain'), path=kwargs.get('path'))
return
if isinstance(value, Morsel):
c = morsel_to_cookie(value)
else:
c = create_cookie(name, value, **kwargs)
self.set_cookie(c)
return c
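    # A sketch of the dict-like surface described above (values are
    # illustrative)::
    #
    #     >>> jar = RequestsCookieJar()
    #     >>> jar['token'] = 'abc'            # __setitem__ delegates to set()
    #     >>> jar.get('token')
    #     'abc'
    #     >>> jar.get('missing', default='n/a')
    #     'n/a'
    #     >>> jar.set('token', None)          # a None value unsets the cookie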
def iterkeys(self):
"""Dict-like iterkeys() that returns an iterator of names of cookies
from the jar.
.. seealso:: itervalues() and iteritems().
"""
for cookie in iter(self):
yield cookie.name
def keys(self):
"""Dict-like keys() that returns a list of names of cookies from the
jar.
.. seealso:: values() and items().
"""
return list(self.iterkeys())
def itervalues(self):
"""Dict-like itervalues() that returns an iterator of values of cookies
from the jar.
.. seealso:: iterkeys() and iteritems().
"""
for cookie in iter(self):
yield cookie.value
def values(self):
"""Dict-like values() that returns a list of values of cookies from the
jar.
.. seealso:: keys() and items().
"""
return list(self.itervalues())
def iteritems(self):
"""Dict-like iteritems() that returns an iterator of name-value tuples
from the jar.
.. seealso:: iterkeys() and itervalues().
"""
for cookie in iter(self):
yield cookie.name, cookie.value
def items(self):
"""Dict-like items() that returns a list of name-value tuples from the
jar. Allows client-code to call ``dict(RequestsCookieJar)`` and get a
vanilla python dict of key value pairs.
.. seealso:: keys() and values().
"""
return list(self.iteritems())
def list_domains(self):
"""Utility method to list all the domains in the jar."""
domains = []
for cookie in iter(self):
if cookie.domain not in domains:
domains.append(cookie.domain)
return domains
def list_paths(self):
"""Utility method to list all the paths in the jar."""
paths = []
for cookie in iter(self):
if cookie.path not in paths:
paths.append(cookie.path)
return paths
def multiple_domains(self):
"""Returns True if there are multiple domains in the jar.
Returns False otherwise.
:rtype: bool
"""
domains = []
for cookie in iter(self):
if cookie.domain is not None and cookie.domain in domains:
return True
domains.append(cookie.domain)
return False # there is only one domain in jar
def get_dict(self, domain=None, path=None):
"""Takes as an argument an optional domain and path and returns a plain
old Python dict of name-value pairs of cookies that meet the
requirements.
:rtype: dict
"""
dictionary = {}
for cookie in iter(self):
if (
(domain is None or cookie.domain == domain) and
(path is None or cookie.path == path)
):
dictionary[cookie.name] = cookie.value
return dictionary
def __contains__(self, name):
try:
return super(RequestsCookieJar, self).__contains__(name)
except CookieConflictError:
return True
def __getitem__(self, name):
"""Dict-like __getitem__() for compatibility with client code. Throws
exception if there are more than one cookie with name. In that case,
use the more explicit get() method instead.
.. warning:: operation is O(n), not O(1).
"""
return self._find_no_duplicates(name)
def __setitem__(self, name, value):
"""Dict-like __setitem__ for compatibility with client code. Throws
exception if there is already a cookie of that name in the jar. In that
case, use the more explicit set() method instead.
"""
self.set(name, value)
def __delitem__(self, name):
"""Deletes a cookie given a name. Wraps ``cookielib.CookieJar``'s
``remove_cookie_by_name()``.
"""
remove_cookie_by_name(self, name)
def set_cookie(self, cookie, *args, **kwargs):
if hasattr(cookie.value, 'startswith') and cookie.value.startswith('"') and cookie.value.endswith('"'):
cookie.value = cookie.value.replace('\\"', '')
return super(RequestsCookieJar, self).set_cookie(cookie, *args, **kwargs)
def update(self, other):
"""Updates this jar with cookies from another CookieJar or dict-like"""
if isinstance(other, cookielib.CookieJar):
for cookie in other:
self.set_cookie(copy.copy(cookie))
else:
super(RequestsCookieJar, self).update(other)
def _find(self, name, domain=None, path=None):
"""Requests uses this method internally to get cookie values.
If there are conflicting cookies, _find arbitrarily chooses one.
See _find_no_duplicates if you want an exception thrown if there are
conflicting cookies.
:param name: a string containing name of cookie
:param domain: (optional) string containing domain of cookie
:param path: (optional) string containing path of cookie
:return: cookie.value
"""
for cookie in iter(self):
if cookie.name == name:
if domain is None or cookie.domain == domain:
if path is None or cookie.path == path:
return cookie.value
raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path))
def _find_no_duplicates(self, name, domain=None, path=None):
"""Both ``__get_item__`` and ``get`` call this function: it's never
used elsewhere in Requests.
:param name: a string containing name of cookie
:param domain: (optional) string containing domain of cookie
:param path: (optional) string containing path of cookie
:raises KeyError: if cookie is not found
:raises CookieConflictError: if there are multiple cookies
that match name and optionally domain and path
:return: cookie.value
"""
toReturn = None
for cookie in iter(self):
if cookie.name == name:
if domain is None or cookie.domain == domain:
if path is None or cookie.path == path:
if toReturn is not None: # if there are multiple cookies that meet passed in criteria
                            raise CookieConflictError('There are multiple cookies with name %r' % (name))
toReturn = cookie.value # we will eventually return this as long as no cookie conflict
if toReturn:
return toReturn
raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path))
def __getstate__(self):
"""Unlike a normal CookieJar, this class is pickleable."""
state = self.__dict__.copy()
# remove the unpickleable RLock object
state.pop('_cookies_lock')
return state
def __setstate__(self, state):
"""Unlike a normal CookieJar, this class is pickleable."""
self.__dict__.update(state)
if '_cookies_lock' not in self.__dict__:
self._cookies_lock = threading.RLock()
def copy(self):
"""Return a copy of this RequestsCookieJar."""
new_cj = RequestsCookieJar()
new_cj.set_policy(self.get_policy())
new_cj.update(self)
return new_cj
def get_policy(self):
"""Return the CookiePolicy instance used."""
return self._policy
def _copy_cookie_jar(jar):
if jar is None:
return None
if hasattr(jar, 'copy'):
# We're dealing with an instance of RequestsCookieJar
return jar.copy()
# We're dealing with a generic CookieJar instance
new_jar = copy.copy(jar)
new_jar.clear()
for cookie in jar:
new_jar.set_cookie(copy.copy(cookie))
return new_jar
def create_cookie(name, value, **kwargs):
"""Make a cookie from underspecified parameters.
By default, the pair of `name` and `value` will be set for the domain ''
and sent on every request (this is sometimes called a "supercookie").
"""
result = {
'version': 0,
'name': name,
'value': value,
'port': None,
'domain': '',
'path': '/',
'secure': False,
'expires': None,
'discard': True,
'comment': None,
'comment_url': None,
'rest': {'HttpOnly': None},
'rfc2109': False,
}
badargs = set(kwargs) - set(result)
if badargs:
err = 'create_cookie() got unexpected keyword arguments: %s'
raise TypeError(err % list(badargs))
result.update(kwargs)
result['port_specified'] = bool(result['port'])
result['domain_specified'] = bool(result['domain'])
result['domain_initial_dot'] = result['domain'].startswith('.')
result['path_specified'] = bool(result['path'])
return cookielib.Cookie(**result)
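# Illustrative sketch (names and values are made up): any default above can
# be overridden by keyword, and unknown keywords raise TypeError::
#
#     >>> c = create_cookie('lang', 'en', domain='example.com', path='/docs')
#     >>> (c.name, c.value, c.domain, c.path, c.secure)
#     ('lang', 'en', 'example.com', '/docs', False)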
def morsel_to_cookie(morsel):
"""Convert a Morsel object into a Cookie containing the one k/v pair."""
expires = None
if morsel['max-age']:
try:
expires = int(time.time() + int(morsel['max-age']))
except ValueError:
raise TypeError('max-age: %s must be integer' % morsel['max-age'])
elif morsel['expires']:
time_template = '%a, %d-%b-%Y %H:%M:%S GMT'
expires = calendar.timegm(
time.strptime(morsel['expires'], time_template)
)
return create_cookie(
comment=morsel['comment'],
comment_url=bool(morsel['comment']),
discard=False,
domain=morsel['domain'],
expires=expires,
name=morsel.key,
path=morsel['path'],
port=None,
rest={'HttpOnly': morsel['httponly']},
rfc2109=False,
secure=bool(morsel['secure']),
value=morsel.value,
version=morsel['version'] or 0,
)
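# Illustrative sketch: converting a stdlib Morsel (as produced by
# http.cookies when parsing a Set-Cookie header) into a cookielib.Cookie;
# the key and value are made up::
#
#     >>> from requests.compat import Morsel
#     >>> m = Morsel()
#     >>> m.set('flavor', 'oatmeal', 'oatmeal')
#     >>> c = morsel_to_cookie(m)
#     >>> (c.name, c.value)
#     ('flavor', 'oatmeal')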
def cookiejar_from_dict(cookie_dict, cookiejar=None, overwrite=True):
"""Returns a CookieJar from a key/value dictionary.
:param cookie_dict: Dict of key/values to insert into CookieJar.
:param cookiejar: (optional) A cookiejar to add the cookies to.
:param overwrite: (optional) If False, will not replace cookies
already in the jar with new ones.
:rtype: CookieJar
"""
if cookiejar is None:
cookiejar = RequestsCookieJar()
if cookie_dict is not None:
names_from_jar = [cookie.name for cookie in cookiejar]
for name in cookie_dict:
if overwrite or (name not in names_from_jar):
cookiejar.set_cookie(create_cookie(name, cookie_dict[name]))
return cookiejar
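# Sketch of the overwrite flag (keys and values are illustrative)::
#
#     >>> jar = cookiejar_from_dict({'k': 'v1'})
#     >>> jar = cookiejar_from_dict({'k': 'v2'}, cookiejar=jar, overwrite=False)
#     >>> jar['k']   # the existing cookie wins because overwrite=False
#     'v1'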
def merge_cookies(cookiejar, cookies):
"""Add cookies to cookiejar and returns a merged CookieJar.
:param cookiejar: CookieJar object to add the cookies to.
:param cookies: Dictionary or CookieJar object to be added.
:rtype: CookieJar
"""
if not isinstance(cookiejar, cookielib.CookieJar):
raise ValueError('You can only merge into CookieJar')
if isinstance(cookies, dict):
cookiejar = cookiejar_from_dict(
cookies, cookiejar=cookiejar, overwrite=False)
elif isinstance(cookies, cookielib.CookieJar):
try:
cookiejar.update(cookies)
except AttributeError:
for cookie_in_jar in cookies:
cookiejar.set_cookie(cookie_in_jar)
return cookiejar
requests-2.22.0/requests/auth.py 0000644 0000765 0000024 00000023736 13467270450 020352 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.auth
~~~~~~~~~~~~~
This module contains the authentication handlers for Requests.
"""
import os
import re
import time
import hashlib
import threading
import warnings
from base64 import b64encode
from .compat import urlparse, str, basestring
from .cookies import extract_cookies_to_jar
from ._internal_utils import to_native_string
from .utils import parse_dict_header
CONTENT_TYPE_FORM_URLENCODED = 'application/x-www-form-urlencoded'
CONTENT_TYPE_MULTI_PART = 'multipart/form-data'
def _basic_auth_str(username, password):
"""Returns a Basic Auth string."""
# "I want us to put a big-ol' comment on top of it that
# says that this behaviour is dumb but we need to preserve
# it because people are relying on it."
# - Lukasa
#
# These are here solely to maintain backwards compatibility
# for things like ints. This will be removed in 3.0.0.
if not isinstance(username, basestring):
warnings.warn(
"Non-string usernames will no longer be supported in Requests "
"3.0.0. Please convert the object you've passed in ({!r}) to "
"a string or bytes object in the near future to avoid "
"problems.".format(username),
category=DeprecationWarning,
)
username = str(username)
if not isinstance(password, basestring):
warnings.warn(
"Non-string passwords will no longer be supported in Requests "
"3.0.0. Please convert the object you've passed in ({!r}) to "
"a string or bytes object in the near future to avoid "
"problems.".format(password),
category=DeprecationWarning,
)
password = str(password)
# -- End Removal --
if isinstance(username, str):
username = username.encode('latin1')
if isinstance(password, str):
password = password.encode('latin1')
authstr = 'Basic ' + to_native_string(
b64encode(b':'.join((username, password))).strip()
)
return authstr
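# For reference: the result is just base64 over "username:password", e.g.::
#
#     >>> _basic_auth_str('user', 'pass')
#     'Basic dXNlcjpwYXNz'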
class AuthBase(object):
"""Base class that all auth implementations derive from"""
def __call__(self, r):
raise NotImplementedError('Auth hooks must be callable.')
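# A sketch of a custom handler in the style the requests documentation
# suggests: subclass AuthBase and mutate the outgoing request in __call__.
# The X-Token header below is made up for illustration::
#
#     class XTokenAuth(AuthBase):
#         """Attaches a hypothetical X-Token header to the given Request."""
#         def __init__(self, token):
#             self.token = token
#         def __call__(self, r):
#             r.headers['X-Token'] = self.token
#             return r
#
# used as, e.g., ``requests.get(url, auth=XTokenAuth('secret'))``.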
class HTTPBasicAuth(AuthBase):
"""Attaches HTTP Basic Authentication to the given Request object."""
def __init__(self, username, password):
self.username = username
self.password = password
def __eq__(self, other):
return all([
self.username == getattr(other, 'username', None),
self.password == getattr(other, 'password', None)
])
def __ne__(self, other):
return not self == other
def __call__(self, r):
r.headers['Authorization'] = _basic_auth_str(self.username, self.password)
return r
class HTTPProxyAuth(HTTPBasicAuth):
"""Attaches HTTP Proxy Authentication to a given Request object."""
def __call__(self, r):
r.headers['Proxy-Authorization'] = _basic_auth_str(self.username, self.password)
return r
class HTTPDigestAuth(AuthBase):
"""Attaches HTTP Digest Authentication to the given Request object."""
def __init__(self, username, password):
self.username = username
self.password = password
# Keep state in per-thread local storage
self._thread_local = threading.local()
def init_per_thread_state(self):
# Ensure state is initialized just once per-thread
if not hasattr(self._thread_local, 'init'):
self._thread_local.init = True
self._thread_local.last_nonce = ''
self._thread_local.nonce_count = 0
self._thread_local.chal = {}
self._thread_local.pos = None
self._thread_local.num_401_calls = None
def build_digest_header(self, method, url):
"""
:rtype: str
"""
realm = self._thread_local.chal['realm']
nonce = self._thread_local.chal['nonce']
qop = self._thread_local.chal.get('qop')
algorithm = self._thread_local.chal.get('algorithm')
opaque = self._thread_local.chal.get('opaque')
hash_utf8 = None
if algorithm is None:
_algorithm = 'MD5'
else:
_algorithm = algorithm.upper()
# lambdas assume digest modules are imported at the top level
if _algorithm == 'MD5' or _algorithm == 'MD5-SESS':
def md5_utf8(x):
if isinstance(x, str):
x = x.encode('utf-8')
return hashlib.md5(x).hexdigest()
hash_utf8 = md5_utf8
elif _algorithm == 'SHA':
def sha_utf8(x):
if isinstance(x, str):
x = x.encode('utf-8')
return hashlib.sha1(x).hexdigest()
hash_utf8 = sha_utf8
elif _algorithm == 'SHA-256':
def sha256_utf8(x):
if isinstance(x, str):
x = x.encode('utf-8')
return hashlib.sha256(x).hexdigest()
hash_utf8 = sha256_utf8
elif _algorithm == 'SHA-512':
def sha512_utf8(x):
if isinstance(x, str):
x = x.encode('utf-8')
return hashlib.sha512(x).hexdigest()
hash_utf8 = sha512_utf8
KD = lambda s, d: hash_utf8("%s:%s" % (s, d))
if hash_utf8 is None:
return None
# XXX not implemented yet
entdig = None
p_parsed = urlparse(url)
        #: path is the request-uri defined in RFC 2616, which must not be empty
path = p_parsed.path or "/"
if p_parsed.query:
path += '?' + p_parsed.query
A1 = '%s:%s:%s' % (self.username, realm, self.password)
A2 = '%s:%s' % (method, path)
HA1 = hash_utf8(A1)
HA2 = hash_utf8(A2)
if nonce == self._thread_local.last_nonce:
self._thread_local.nonce_count += 1
else:
self._thread_local.nonce_count = 1
ncvalue = '%08x' % self._thread_local.nonce_count
s = str(self._thread_local.nonce_count).encode('utf-8')
s += nonce.encode('utf-8')
s += time.ctime().encode('utf-8')
s += os.urandom(8)
cnonce = (hashlib.sha1(s).hexdigest()[:16])
if _algorithm == 'MD5-SESS':
HA1 = hash_utf8('%s:%s:%s' % (HA1, nonce, cnonce))
if not qop:
respdig = KD(HA1, "%s:%s" % (nonce, HA2))
elif qop == 'auth' or 'auth' in qop.split(','):
noncebit = "%s:%s:%s:%s:%s" % (
nonce, ncvalue, cnonce, 'auth', HA2
)
respdig = KD(HA1, noncebit)
else:
# XXX handle auth-int.
return None
self._thread_local.last_nonce = nonce
# XXX should the partial digests be encoded too?
base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \
'response="%s"' % (self.username, realm, nonce, path, respdig)
if opaque:
base += ', opaque="%s"' % opaque
if algorithm:
base += ', algorithm="%s"' % algorithm
if entdig:
base += ', digest="%s"' % entdig
if qop:
base += ', qop="auth", nc=%s, cnonce="%s"' % (ncvalue, cnonce)
return 'Digest %s' % (base)
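    # For qop="auth" the value built above follows RFC 2617:
    #   HA1      = H(username ":" realm ":" password)
    #   HA2      = H(method ":" request-uri)
    #   response = H(HA1 ":" nonce ":" nc ":" cnonce ":auth:" HA2)
    # where H is the negotiated hash (MD5 by default); MD5-SESS additionally
    # re-hashes HA1 with the nonce and cnonce, as implemented above.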
def handle_redirect(self, r, **kwargs):
"""Reset num_401_calls counter on redirects."""
if r.is_redirect:
self._thread_local.num_401_calls = 1
def handle_401(self, r, **kwargs):
"""
Takes the given response and tries digest-auth, if needed.
:rtype: requests.Response
"""
# If response is not 4xx, do not auth
# See https://github.com/requests/requests/issues/3772
if not 400 <= r.status_code < 500:
self._thread_local.num_401_calls = 1
return r
if self._thread_local.pos is not None:
# Rewind the file position indicator of the body to where
# it was to resend the request.
r.request.body.seek(self._thread_local.pos)
s_auth = r.headers.get('www-authenticate', '')
if 'digest' in s_auth.lower() and self._thread_local.num_401_calls < 2:
self._thread_local.num_401_calls += 1
pat = re.compile(r'digest ', flags=re.IGNORECASE)
self._thread_local.chal = parse_dict_header(pat.sub('', s_auth, count=1))
# Consume content and release the original connection
# to allow our new request to reuse the same one.
r.content
r.close()
prep = r.request.copy()
extract_cookies_to_jar(prep._cookies, r.request, r.raw)
prep.prepare_cookies(prep._cookies)
prep.headers['Authorization'] = self.build_digest_header(
prep.method, prep.url)
_r = r.connection.send(prep, **kwargs)
_r.history.append(r)
_r.request = prep
return _r
self._thread_local.num_401_calls = 1
return r
def __call__(self, r):
# Initialize per-thread state, if needed
self.init_per_thread_state()
# If we have a saved nonce, skip the 401
if self._thread_local.last_nonce:
r.headers['Authorization'] = self.build_digest_header(r.method, r.url)
try:
self._thread_local.pos = r.body.tell()
except AttributeError:
# In the case of HTTPDigestAuth being reused and the body of
# the previous request was a file-like object, pos has the
# file position of the previous body. Ensure it's set to
# None.
self._thread_local.pos = None
r.register_hook('response', self.handle_401)
r.register_hook('response', self.handle_redirect)
self._thread_local.num_401_calls = 1
return r
def __eq__(self, other):
return all([
self.username == getattr(other, 'username', None),
self.password == getattr(other, 'password', None)
])
def __ne__(self, other):
return not self == other
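# Usage sketch (endpoint illustrative; requires network access)::
#
#     >>> import requests
#     >>> url = 'https://httpbin.org/digest-auth/auth/user/pass'
#     >>> requests.get(url, auth=HTTPDigestAuth('user', 'pass'))
#     <Response [200]>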
requests-2.22.0/requests/sessions.py 0000644 0000765 0000024 00000071224 13467270450 021252 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.session
~~~~~~~~~~~~~~~~
This module provides a Session object to manage and persist settings across
requests (cookies, auth, proxies).
"""
import os
import sys
import time
from datetime import timedelta
from .auth import _basic_auth_str
from .compat import cookielib, is_py3, OrderedDict, urljoin, urlparse, Mapping
from .cookies import (
cookiejar_from_dict, extract_cookies_to_jar, RequestsCookieJar, merge_cookies)
from .models import Request, PreparedRequest, DEFAULT_REDIRECT_LIMIT
from .hooks import default_hooks, dispatch_hook
from ._internal_utils import to_native_string
from .utils import to_key_val_list, default_headers, DEFAULT_PORTS
from .exceptions import (
TooManyRedirects, InvalidSchema, ChunkedEncodingError, ContentDecodingError)
from .structures import CaseInsensitiveDict
from .adapters import HTTPAdapter
from .utils import (
requote_uri, get_environ_proxies, get_netrc_auth, should_bypass_proxies,
get_auth_from_url, rewind_body
)
from .status_codes import codes
# formerly defined here, reexposed here for backward compatibility
from .models import REDIRECT_STATI
# Preferred clock, based on which one is more accurate on a given system.
if sys.platform == 'win32':
try: # Python 3.4+
preferred_clock = time.perf_counter
except AttributeError: # Earlier than Python 3.
preferred_clock = time.clock
else:
preferred_clock = time.time
def merge_setting(request_setting, session_setting, dict_class=OrderedDict):
"""Determines appropriate setting for a given request, taking into account
the explicit setting on that request, and the setting in the session. If a
setting is a dictionary, they will be merged together using `dict_class`
"""
if session_setting is None:
return request_setting
if request_setting is None:
return session_setting
# Bypass if not a dictionary (e.g. verify)
if not (
isinstance(session_setting, Mapping) and
isinstance(request_setting, Mapping)
):
return request_setting
merged_setting = dict_class(to_key_val_list(session_setting))
merged_setting.update(to_key_val_list(request_setting))
# Remove keys that are set to None. Extract keys first to avoid altering
# the dictionary during iteration.
none_keys = [k for (k, v) in merged_setting.items() if v is None]
for key in none_keys:
del merged_setting[key]
return merged_setting
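# Illustrative sketch (header names are made up): request-level values win,
# and a None at the request level removes the session-level key entirely::
#
#     >>> merge_setting({'X-Debug': None}, {'X-Debug': '1', 'Accept': '*/*'})
#     OrderedDict([('Accept', '*/*')])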
def merge_hooks(request_hooks, session_hooks, dict_class=OrderedDict):
"""Properly merges both requests and session hooks.
This is necessary because when request_hooks == {'response': []}, the
merge breaks Session hooks entirely.
"""
if session_hooks is None or session_hooks.get('response') == []:
return request_hooks
if request_hooks is None or request_hooks.get('response') == []:
return session_hooks
return merge_setting(request_hooks, session_hooks, dict_class)
class SessionRedirectMixin(object):
def get_redirect_target(self, resp):
"""Receives a Response. Returns a redirect URI or ``None``"""
# Due to the nature of how requests processes redirects this method will
# be called at least once upon the original response and at least twice
# on each subsequent redirect response (if any).
# If a custom mixin is used to handle this logic, it may be advantageous
# to cache the redirect location onto the response object as a private
# attribute.
if resp.is_redirect:
location = resp.headers['location']
            # Currently the underlying http module on py3 decodes headers
            # in latin1, but empirical evidence suggests that latin1 is very
            # rarely used with non-ASCII characters in HTTP headers; a UTF-8
            # encoded header is far more likely. That mismatch causes
            # incorrect handling of UTF-8 encoded Location headers. To solve
            # this, we re-encode the location in latin1.
if is_py3:
location = location.encode('latin1')
return to_native_string(location, 'utf8')
return None
def should_strip_auth(self, old_url, new_url):
"""Decide whether Authorization header should be removed when redirecting"""
old_parsed = urlparse(old_url)
new_parsed = urlparse(new_url)
if old_parsed.hostname != new_parsed.hostname:
return True
# Special case: allow http -> https redirect when using the standard
# ports. This isn't specified by RFC 7235, but is kept to avoid
# breaking backwards compatibility with older versions of requests
# that allowed any redirects on the same host.
if (old_parsed.scheme == 'http' and old_parsed.port in (80, None)
and new_parsed.scheme == 'https' and new_parsed.port in (443, None)):
return False
# Handle default port usage corresponding to scheme.
changed_port = old_parsed.port != new_parsed.port
changed_scheme = old_parsed.scheme != new_parsed.scheme
default_port = (DEFAULT_PORTS.get(old_parsed.scheme, None), None)
if (not changed_scheme and old_parsed.port in default_port
and new_parsed.port in default_port):
return False
# Standard case: root URI must match
return changed_port or changed_scheme
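    # Sketch of the policy above (URLs illustrative): an http -> https
    # upgrade on the same host and default ports keeps the Authorization
    # header, while a host change strips it::
    #
    #     >>> m = SessionRedirectMixin()
    #     >>> m.should_strip_auth('http://example.com/', 'https://example.com/')
    #     False
    #     >>> m.should_strip_auth('https://example.com/', 'https://evil.example/')
    #     True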
def resolve_redirects(self, resp, req, stream=False, timeout=None,
verify=True, cert=None, proxies=None, yield_requests=False, **adapter_kwargs):
"""Receives a Response. Returns a generator of Responses or Requests."""
hist = [] # keep track of history
url = self.get_redirect_target(resp)
previous_fragment = urlparse(req.url).fragment
while url:
prepared_request = req.copy()
# Update history and keep track of redirects.
# resp.history must ignore the original request in this loop
hist.append(resp)
resp.history = hist[1:]
try:
resp.content # Consume socket so it can be released
except (ChunkedEncodingError, ContentDecodingError, RuntimeError):
resp.raw.read(decode_content=False)
if len(resp.history) >= self.max_redirects:
raise TooManyRedirects('Exceeded %s redirects.' % self.max_redirects, response=resp)
# Release the connection back into the pool.
resp.close()
# Handle redirection without scheme (see: RFC 1808 Section 4)
if url.startswith('//'):
parsed_rurl = urlparse(resp.url)
url = '%s:%s' % (to_native_string(parsed_rurl.scheme), url)
# Normalize url case and attach previous fragment if needed (RFC 7231 7.1.2)
parsed = urlparse(url)
if parsed.fragment == '' and previous_fragment:
parsed = parsed._replace(fragment=previous_fragment)
elif parsed.fragment:
previous_fragment = parsed.fragment
url = parsed.geturl()
# Facilitate relative 'location' headers, as allowed by RFC 7231.
# (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource')
            # To comply with RFC 3986, we percent-encode the url.
if not parsed.netloc:
url = urljoin(resp.url, requote_uri(url))
else:
url = requote_uri(url)
prepared_request.url = to_native_string(url)
self.rebuild_method(prepared_request, resp)
# https://github.com/requests/requests/issues/1084
if resp.status_code not in (codes.temporary_redirect, codes.permanent_redirect):
# https://github.com/requests/requests/issues/3490
purged_headers = ('Content-Length', 'Content-Type', 'Transfer-Encoding')
for header in purged_headers:
prepared_request.headers.pop(header, None)
prepared_request.body = None
headers = prepared_request.headers
try:
del headers['Cookie']
except KeyError:
pass
# Extract any cookies sent on the response to the cookiejar
# in the new request. Because we've mutated our copied prepared
# request, use the old one that we haven't yet touched.
extract_cookies_to_jar(prepared_request._cookies, req, resp.raw)
merge_cookies(prepared_request._cookies, self.cookies)
prepared_request.prepare_cookies(prepared_request._cookies)
# Rebuild auth and proxy information.
proxies = self.rebuild_proxies(prepared_request, proxies)
self.rebuild_auth(prepared_request, resp)
# A failed tell() sets `_body_position` to `object()`. This non-None
# value ensures `rewindable` will be True, allowing us to raise an
# UnrewindableBodyError, instead of hanging the connection.
rewindable = (
prepared_request._body_position is not None and
('Content-Length' in headers or 'Transfer-Encoding' in headers)
)
# Attempt to rewind consumed file-like object.
if rewindable:
rewind_body(prepared_request)
# Override the original request.
req = prepared_request
if yield_requests:
yield req
else:
resp = self.send(
req,
stream=stream,
timeout=timeout,
verify=verify,
cert=cert,
proxies=proxies,
allow_redirects=False,
**adapter_kwargs
)
extract_cookies_to_jar(self.cookies, prepared_request, resp.raw)
# extract redirect url, if any, for the next loop
url = self.get_redirect_target(resp)
yield resp
def rebuild_auth(self, prepared_request, response):
"""When being redirected we may want to strip authentication from the
request to avoid leaking credentials. This method intelligently removes
and reapplies authentication where possible to avoid credential loss.
"""
headers = prepared_request.headers
url = prepared_request.url
if 'Authorization' in headers and self.should_strip_auth(response.request.url, url):
# If we get redirected to a new host, we should strip out any
# authentication headers.
del headers['Authorization']
# .netrc might have more auth for us on our new host.
new_auth = get_netrc_auth(url) if self.trust_env else None
if new_auth is not None:
prepared_request.prepare_auth(new_auth)
return
def rebuild_proxies(self, prepared_request, proxies):
"""This method re-evaluates the proxy configuration by considering the
environment variables. If we are redirected to a URL covered by
NO_PROXY, we strip the proxy configuration. Otherwise, we set missing
proxy keys for this URL (in case they were stripped by a previous
redirect).
This method also replaces the Proxy-Authorization header where
necessary.
:rtype: dict
"""
proxies = proxies if proxies is not None else {}
headers = prepared_request.headers
url = prepared_request.url
scheme = urlparse(url).scheme
new_proxies = proxies.copy()
no_proxy = proxies.get('no_proxy')
bypass_proxy = should_bypass_proxies(url, no_proxy=no_proxy)
if self.trust_env and not bypass_proxy:
environ_proxies = get_environ_proxies(url, no_proxy=no_proxy)
proxy = environ_proxies.get(scheme, environ_proxies.get('all'))
if proxy:
new_proxies.setdefault(scheme, proxy)
if 'Proxy-Authorization' in headers:
del headers['Proxy-Authorization']
try:
username, password = get_auth_from_url(new_proxies[scheme])
except KeyError:
username, password = None, None
if username and password:
headers['Proxy-Authorization'] = _basic_auth_str(username, password)
return new_proxies
def rebuild_method(self, prepared_request, response):
"""When being redirected we may want to change the method of the request
based on certain specs or browser behavior.
"""
method = prepared_request.method
# https://tools.ietf.org/html/rfc7231#section-6.4.4
if response.status_code == codes.see_other and method != 'HEAD':
method = 'GET'
# Do what the browsers do, despite standards...
# First, turn 302s into GETs.
if response.status_code == codes.found and method != 'HEAD':
method = 'GET'
# Second, if a POST is responded to with a 301, turn it into a GET.
# This bizarre behaviour is explained in Issue 1704.
if response.status_code == codes.moved and method == 'POST':
method = 'GET'
prepared_request.method = method
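    # Summary of the rewrites above:
    #   303 See Other         -> GET (unless the request was HEAD)
    #   302 Found             -> GET (unless the request was HEAD), per browsers
    #   301 Moved Permanently -> GET only when the request was POST
    #   307/308               -> method and body preserved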
class Session(SessionRedirectMixin):
"""A Requests session.
Provides cookie persistence, connection-pooling, and configuration.
Basic Usage::
>>> import requests
>>> s = requests.Session()
      >>> s.get('https://httpbin.org/get')
      <Response [200]>
    Or as a context manager::
      >>> with requests.Session() as s:
      ...     s.get('https://httpbin.org/get')
      <Response [200]>
"""
__attrs__ = [
'headers', 'cookies', 'auth', 'proxies', 'hooks', 'params', 'verify',
'cert', 'prefetch', 'adapters', 'stream', 'trust_env',
'max_redirects',
]
def __init__(self):
#: A case-insensitive dictionary of headers to be sent on each
        #: :class:`Request <requests.Request>` sent from this
        #: :class:`Session <requests.Session>`.
self.headers = default_headers()
#: Default Authentication tuple or object to attach to
        #: :class:`Request <requests.Request>`.
self.auth = None
#: Dictionary mapping protocol or protocol and host to the URL of the proxy
#: (e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to
        #: be used on each :class:`Request <requests.Request>`.
self.proxies = {}
#: Event-handling hooks.
self.hooks = default_hooks()
#: Dictionary of querystring data to attach to each
        #: :class:`Request <requests.Request>`. The dictionary values may be lists for
#: representing multivalued query parameters.
self.params = {}
#: Stream response content default.
self.stream = False
#: SSL Verification default.
self.verify = True
#: SSL client certificate default, if String, path to ssl client
#: cert file (.pem). If Tuple, ('cert', 'key') pair.
self.cert = None
#: Maximum number of redirects allowed. If the request exceeds this
#: limit, a :class:`TooManyRedirects` exception is raised.
#: This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is
#: 30.
self.max_redirects = DEFAULT_REDIRECT_LIMIT
#: Trust environment settings for proxy configuration, default
#: authentication and similar.
self.trust_env = True
#: A CookieJar containing all currently outstanding cookies set on this
#: session. By default it is a
        #: :class:`RequestsCookieJar <requests.cookies.RequestsCookieJar>`, but
#: may be any other ``cookielib.CookieJar`` compatible object.
self.cookies = cookiejar_from_dict({})
# Default connection adapters.
self.adapters = OrderedDict()
self.mount('https://', HTTPAdapter())
self.mount('http://', HTTPAdapter())
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
def prepare_request(self, request):
"""Constructs a :class:`PreparedRequest ` for
transmission and returns it. The :class:`PreparedRequest` has settings
        merged from the :class:`Request <Request>` instance and those of the
:class:`Session`.
:param request: :class:`Request` instance to prepare with this
session's settings.
:rtype: requests.PreparedRequest
"""
cookies = request.cookies or {}
# Bootstrap CookieJar.
if not isinstance(cookies, cookielib.CookieJar):
cookies = cookiejar_from_dict(cookies)
# Merge with session cookies
merged_cookies = merge_cookies(
merge_cookies(RequestsCookieJar(), self.cookies), cookies)
# Set environment's basic authentication if not explicitly set.
auth = request.auth
if self.trust_env and not auth and not self.auth:
auth = get_netrc_auth(request.url)
p = PreparedRequest()
p.prepare(
method=request.method.upper(),
url=request.url,
files=request.files,
data=request.data,
json=request.json,
headers=merge_setting(request.headers, self.headers, dict_class=CaseInsensitiveDict),
params=merge_setting(request.params, self.params),
auth=merge_setting(auth, self.auth),
cookies=merged_cookies,
hooks=merge_hooks(request.hooks, self.hooks),
)
return p
def request(self, method, url,
params=None, data=None, headers=None, cookies=None, files=None,
auth=None, timeout=None, allow_redirects=True, proxies=None,
hooks=None, stream=None, verify=None, cert=None, json=None):
"""Constructs a :class:`Request `, prepares it and sends it.
Returns :class:`Response ` object.
:param method: method for the new :class:`Request` object.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query
string for the :class:`Request`.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json to send in the body of the
:class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the
:class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the
:class:`Request`.
:param files: (optional) Dictionary of ``'filename': file-like-objects``
for multipart encoding upload.
:param auth: (optional) Auth tuple or callable to enable
Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) ` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Set to True by default.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol or protocol and
hostname to the URL of the proxy.
:param stream: (optional) whether to immediately download the response
content. Defaults to ``False``.
:param verify: (optional) Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use. Defaults to ``True``.
:param cert: (optional) if String, path to ssl client cert file (.pem).
If Tuple, ('cert', 'key') pair.
:rtype: requests.Response
"""
# Create the Request.
req = Request(
method=method.upper(),
url=url,
headers=headers,
files=files,
data=data or {},
json=json,
params=params or {},
auth=auth,
cookies=cookies,
hooks=hooks,
)
prep = self.prepare_request(req)
proxies = proxies or {}
settings = self.merge_environment_settings(
prep.url, proxies, stream, verify, cert
)
# Send the request.
send_kwargs = {
'timeout': timeout,
'allow_redirects': allow_redirects,
}
send_kwargs.update(settings)
resp = self.send(prep, **send_kwargs)
return resp
def get(self, url, **kwargs):
r"""Sends a GET request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', True)
return self.request('GET', url, **kwargs)
def options(self, url, **kwargs):
r"""Sends a OPTIONS request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', True)
return self.request('OPTIONS', url, **kwargs)
def head(self, url, **kwargs):
r"""Sends a HEAD request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', False)
return self.request('HEAD', url, **kwargs)
def post(self, url, data=None, json=None, **kwargs):
r"""Sends a POST request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
"""
return self.request('POST', url, data=data, json=json, **kwargs)
def put(self, url, data=None, **kwargs):
r"""Sends a PUT request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
"""
return self.request('PUT', url, data=data, **kwargs)
def patch(self, url, data=None, **kwargs):
r"""Sends a PATCH request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
"""
return self.request('PATCH', url, data=data, **kwargs)
def delete(self, url, **kwargs):
r"""Sends a DELETE request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:rtype: requests.Response
"""
return self.request('DELETE', url, **kwargs)
def send(self, request, **kwargs):
"""Send a given PreparedRequest.
:rtype: requests.Response
"""
# Set defaults that the hooks can utilize to ensure they always have
# the correct parameters to reproduce the previous request.
kwargs.setdefault('stream', self.stream)
kwargs.setdefault('verify', self.verify)
kwargs.setdefault('cert', self.cert)
kwargs.setdefault('proxies', self.proxies)
# It's possible that users might accidentally send a Request object.
# Guard against that specific failure case.
if isinstance(request, Request):
raise ValueError('You can only send PreparedRequests.')
# Set up variables needed for resolve_redirects and dispatching of hooks
allow_redirects = kwargs.pop('allow_redirects', True)
stream = kwargs.get('stream')
hooks = request.hooks
# Get the appropriate adapter to use
adapter = self.get_adapter(url=request.url)
# Start time (approximately) of the request
start = preferred_clock()
# Send the request
r = adapter.send(request, **kwargs)
# Total elapsed time of the request (approximately)
elapsed = preferred_clock() - start
r.elapsed = timedelta(seconds=elapsed)
# Response manipulation hooks
r = dispatch_hook('response', hooks, r, **kwargs)
# Persist cookies
if r.history:
# If the hooks create history then we want those cookies too
for resp in r.history:
extract_cookies_to_jar(self.cookies, resp.request, resp.raw)
extract_cookies_to_jar(self.cookies, request, r.raw)
# Redirect resolving generator.
gen = self.resolve_redirects(r, request, **kwargs)
# Resolve redirects if allowed.
history = [resp for resp in gen] if allow_redirects else []
# Shuffle things around if there's history.
if history:
# Insert the first (original) request at the start
history.insert(0, r)
# Get the last request made
r = history.pop()
r.history = history
# If redirects aren't being followed, store the response on the Request for Response.next().
if not allow_redirects:
try:
r._next = next(self.resolve_redirects(r, request, yield_requests=True, **kwargs))
except StopIteration:
pass
if not stream:
r.content
return r
def merge_environment_settings(self, url, proxies, stream, verify, cert):
"""
Check the environment and merge it with some settings.
:rtype: dict
"""
# Gather clues from the surrounding environment.
if self.trust_env:
# Set environment's proxies.
no_proxy = proxies.get('no_proxy') if proxies is not None else None
env_proxies = get_environ_proxies(url, no_proxy=no_proxy)
for (k, v) in env_proxies.items():
proxies.setdefault(k, v)
# Look for requests environment configuration and be compatible
# with cURL.
if verify is True or verify is None:
verify = (os.environ.get('REQUESTS_CA_BUNDLE') or
os.environ.get('CURL_CA_BUNDLE'))
# Merge all the kwargs.
proxies = merge_setting(proxies, self.proxies)
stream = merge_setting(stream, self.stream)
verify = merge_setting(verify, self.verify)
cert = merge_setting(cert, self.cert)
return {'verify': verify, 'proxies': proxies, 'stream': stream,
'cert': cert}
def get_adapter(self, url):
"""
Returns the appropriate connection adapter for the given URL.
:rtype: requests.adapters.BaseAdapter
"""
for (prefix, adapter) in self.adapters.items():
if url.lower().startswith(prefix.lower()):
return adapter
# Nothing matches :-/
raise InvalidSchema("No connection adapters were found for '%s'" % url)
def close(self):
"""Closes all adapters and as such the session"""
for v in self.adapters.values():
v.close()
def mount(self, prefix, adapter):
"""Registers a connection adapter to a prefix.
Adapters are sorted in descending order by prefix length.
"""
self.adapters[prefix] = adapter
keys_to_move = [k for k in self.adapters if len(k) < len(prefix)]
for key in keys_to_move:
self.adapters[key] = self.adapters.pop(key)
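    # Sketch (host and retry count are illustrative): a longer, more specific
    # prefix is consulted before the default 'https://' adapter::
    #
    #     >>> s = Session()
    #     >>> s.mount('https://api.example.com', HTTPAdapter(max_retries=3))
    #     >>> s.get_adapter('https://api.example.com/v1/items')  # doctest: +ELLIPSIS
    #     <requests.adapters.HTTPAdapter object at ...>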
def __getstate__(self):
state = {attr: getattr(self, attr, None) for attr in self.__attrs__}
return state
def __setstate__(self, state):
for attr, value in state.items():
setattr(self, attr, value)
def session():
"""
Returns a :class:`Session` for context-management.
.. deprecated:: 1.0.0
This method has been deprecated since version 1.0.0 and is only kept for
backwards compatibility. New code should use :class:`~requests.sessions.Session`
to create a session. This may be removed at a future date.
:rtype: Session
"""
return Session()
requests-2.22.0/requests/hooks.py 0000644 0000765 0000024 00000001365 13467270450 020526 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.hooks
~~~~~~~~~~~~~~
This module provides the capabilities for the Requests hooks system.
Available hooks:
``response``:
The response generated from a Request.
"""
HOOKS = ['response']
def default_hooks():
return {event: [] for event in HOOKS}
# TODO: "response" is the only hook event implemented so far
def dispatch_hook(key, hooks, hook_data, **kwargs):
"""Dispatches a hook dictionary on a given piece of data."""
hooks = hooks or {}
hooks = hooks.get(key)
if hooks:
if hasattr(hooks, '__call__'):
hooks = [hooks]
for hook in hooks:
_hook_data = hook(hook_data, **kwargs)
if _hook_data is not None:
hook_data = _hook_data
return hook_data
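# Sketch of a response hook (URL illustrative): hooks receive the Response
# plus the keyword arguments given to send(), so accept **kwargs::
#
#     >>> import requests
#     >>> def log_url(r, *args, **kwargs):
#     ...     print(r.url)
#     >>> r = requests.get('https://example.com/', hooks={'response': log_url})
#     https://example.com/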
requests-2.22.0/requests/compat.py 0000644 0000765 0000024 00000003216 13467270450 020663 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.compat
~~~~~~~~~~~~~~~
This module handles import compatibility issues between Python 2 and
Python 3.
"""
import chardet
import sys
# -------
# Pythons
# -------
# Syntax sugar.
_ver = sys.version_info
#: Python 2.x?
is_py2 = (_ver[0] == 2)
#: Python 3.x?
is_py3 = (_ver[0] == 3)
try:
import simplejson as json
except ImportError:
import json
# ---------
# Specifics
# ---------
if is_py2:
from urllib import (
quote, unquote, quote_plus, unquote_plus, urlencode, getproxies,
proxy_bypass, proxy_bypass_environment, getproxies_environment)
from urlparse import urlparse, urlunparse, urljoin, urlsplit, urldefrag
from urllib2 import parse_http_list
import cookielib
from Cookie import Morsel
from StringIO import StringIO
from collections import Callable, Mapping, MutableMapping, OrderedDict
builtin_str = str
bytes = str
str = unicode
basestring = basestring
numeric_types = (int, long, float)
integer_types = (int, long)
elif is_py3:
from urllib.parse import urlparse, urlunparse, urljoin, urlsplit, urlencode, quote, unquote, quote_plus, unquote_plus, urldefrag
from urllib.request import parse_http_list, getproxies, proxy_bypass, proxy_bypass_environment, getproxies_environment
from http import cookiejar as cookielib
from http.cookies import Morsel
from io import StringIO
from collections import OrderedDict
from collections.abc import Callable, Mapping, MutableMapping
builtin_str = str
str = str
bytes = bytes
basestring = (str, bytes)
numeric_types = (int, float)
integer_types = (int,)
requests-2.22.0/requests/models.py 0000644 0000765 0000024 00000102642 13467270450 020666 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.models
~~~~~~~~~~~~~~~
This module contains the primary objects that power Requests.
"""
import datetime
import sys
# Import encoding now, to avoid implicit import later.
# Implicit import within threads may cause LookupError when standard library is in a ZIP,
# such as in Embedded Python. See https://github.com/requests/requests/issues/3578.
import encodings.idna
from urllib3.fields import RequestField
from urllib3.filepost import encode_multipart_formdata
from urllib3.util import parse_url
from urllib3.exceptions import (
DecodeError, ReadTimeoutError, ProtocolError, LocationParseError)
from io import UnsupportedOperation
from .hooks import default_hooks
from .structures import CaseInsensitiveDict
from .auth import HTTPBasicAuth
from .cookies import cookiejar_from_dict, get_cookie_header, _copy_cookie_jar
from .exceptions import (
HTTPError, MissingSchema, InvalidURL, ChunkedEncodingError,
ContentDecodingError, ConnectionError, StreamConsumedError)
from ._internal_utils import to_native_string, unicode_is_ascii
from .utils import (
guess_filename, get_auth_from_url, requote_uri,
stream_decode_response_unicode, to_key_val_list, parse_header_links,
iter_slices, guess_json_utf, super_len, check_header_validity)
from .compat import (
Callable, Mapping,
cookielib, urlunparse, urlsplit, urlencode, str, bytes,
is_py2, chardet, builtin_str, basestring)
from .compat import json as complexjson
from .status_codes import codes
#: The set of HTTP status codes that indicate an automatically
#: processable redirect.
REDIRECT_STATI = (
codes.moved, # 301
codes.found, # 302
codes.other, # 303
codes.temporary_redirect, # 307
codes.permanent_redirect, # 308
)
DEFAULT_REDIRECT_LIMIT = 30
CONTENT_CHUNK_SIZE = 10 * 1024
ITER_CHUNK_SIZE = 512
class RequestEncodingMixin(object):
@property
def path_url(self):
"""Build the path URL to use."""
url = []
p = urlsplit(self.url)
path = p.path
if not path:
path = '/'
url.append(path)
query = p.query
if query:
url.append('?')
url.append(query)
return ''.join(url)
@staticmethod
def _encode_params(data):
"""Encode parameters in a piece of data.
Will successfully encode parameters when passed as a dict or a list of
2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
if parameters are supplied as a dict.
"""
if isinstance(data, (str, bytes)):
return data
elif hasattr(data, 'read'):
return data
elif hasattr(data, '__iter__'):
result = []
for k, vs in to_key_val_list(data):
if isinstance(vs, basestring) or not hasattr(vs, '__iter__'):
vs = [vs]
for v in vs:
if v is not None:
result.append(
(k.encode('utf-8') if isinstance(k, str) else k,
v.encode('utf-8') if isinstance(v, str) else v))
return urlencode(result, doseq=True)
else:
return data
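    # Illustrative sketch (keys and values are made up): iterable values are
    # expanded into repeated parameters via urlencode(..., doseq=True)::
    #
    #     >>> RequestEncodingMixin._encode_params([('k', ['v1', 'v2']), ('x', 'y')])
    #     'k=v1&k=v2&x=y'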
@staticmethod
def _encode_files(files, data):
"""Build the body for a multipart/form-data request.
Will successfully encode files when passed as a dict or a list of
tuples. Order is retained if data is a list of tuples but arbitrary
if parameters are supplied as a dict.
        The tuples may be 2-tuples (filename, fileobj), 3-tuples (filename, fileobj, content_type)
        or 4-tuples (filename, fileobj, content_type, custom_headers).
"""
if (not files):
raise ValueError("Files must be provided.")
elif isinstance(data, basestring):
raise ValueError("Data must not be a string.")
new_fields = []
fields = to_key_val_list(data or {})
files = to_key_val_list(files or {})
for field, val in fields:
if isinstance(val, basestring) or not hasattr(val, '__iter__'):
val = [val]
for v in val:
if v is not None:
# Don't call str() on bytestrings: in Py3 it all goes wrong.
if not isinstance(v, bytes):
v = str(v)
new_fields.append(
(field.decode('utf-8') if isinstance(field, bytes) else field,
v.encode('utf-8') if isinstance(v, str) else v))
for (k, v) in files:
# support for explicit filename
ft = None
fh = None
if isinstance(v, (tuple, list)):
if len(v) == 2:
fn, fp = v
elif len(v) == 3:
fn, fp, ft = v
else:
fn, fp, ft, fh = v
else:
fn = guess_filename(v) or k
fp = v
if isinstance(fp, (str, bytes, bytearray)):
fdata = fp
elif hasattr(fp, 'read'):
fdata = fp.read()
elif fp is None:
continue
else:
fdata = fp
rf = RequestField(name=k, data=fdata, filename=fn, headers=fh)
rf.make_multipart(content_type=ft)
new_fields.append(rf)
body, content_type = encode_multipart_formdata(new_fields)
return body, content_type
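    # Sketch of the accepted tuple forms via the public API (filename,
    # header and URL are illustrative)::
    #
    #     import requests
    #     with open('report.csv', 'rb') as fh:
    #         files = {'file': ('report.csv', fh, 'text/csv', {'X-Meta': 'demo'})}
    #         requests.post('https://example.com/upload', files=files)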
class RequestHooksMixin(object):
def register_hook(self, event, hook):
"""Properly register a hook."""
if event not in self.hooks:
raise ValueError('Unsupported event specified, with event name "%s"' % (event))
if isinstance(hook, Callable):
self.hooks[event].append(hook)
elif hasattr(hook, '__iter__'):
self.hooks[event].extend(h for h in hook if isinstance(h, Callable))
def deregister_hook(self, event, hook):
"""Deregister a previously registered hook.
Returns True if the hook existed, False if not.
"""
try:
self.hooks[event].remove(hook)
return True
except ValueError:
return False
class Request(RequestHooksMixin):
"""A user-created :class:`Request ` object.
Used to prepare a :class:`PreparedRequest `, which is sent to the server.
:param method: HTTP method to use.
:param url: URL to send.
:param headers: dictionary of headers to send.
:param files: dictionary of {filename: fileobject} files to multipart upload.
:param data: the body to attach to the request. If a dictionary or
list of tuples ``[(key, value)]`` is provided, form-encoding will
take place.
:param json: json for the body to attach to the request (if files or data is not specified).
:param params: URL parameters to append to the URL. If a dictionary or
list of tuples ``[(key, value)]`` is provided, form-encoding will
take place.
:param auth: Auth handler or (user, pass) tuple.
:param cookies: dictionary or CookieJar of cookies to attach to this request.
:param hooks: dictionary of callback hooks, for internal usage.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'https://httpbin.org/get')
      >>> req.prepare()
      <PreparedRequest [GET]>
"""
def __init__(self,
method=None, url=None, headers=None, files=None, data=None,
params=None, auth=None, cookies=None, hooks=None, json=None):
# Default empty dicts for dict params.
data = [] if data is None else data
files = [] if files is None else files
headers = {} if headers is None else headers
params = {} if params is None else params
hooks = {} if hooks is None else hooks
self.hooks = default_hooks()
for (k, v) in list(hooks.items()):
self.register_hook(event=k, hook=v)
self.method = method
self.url = url
self.headers = headers
self.files = files
self.data = data
self.json = json
self.params = params
self.auth = auth
self.cookies = cookies
def __repr__(self):
        return '<Request [%s]>' % (self.method)
def prepare(self):
"""Constructs a :class:`PreparedRequest ` for transmission and returns it."""
p = PreparedRequest()
p.prepare(
method=self.method,
url=self.url,
headers=self.headers,
files=self.files,
data=self.data,
json=self.json,
params=self.params,
auth=self.auth,
cookies=self.cookies,
hooks=self.hooks,
)
return p
class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
"""The fully mutable :class:`PreparedRequest ` object,
containing the exact bytes that will be sent to the server.
Generated from either a :class:`Request ` object or manually.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'https://httpbin.org/get')
>>> r = req.prepare()
>>> s = requests.Session()
      >>> s.send(r)
      <Response [200]>
"""
def __init__(self):
#: HTTP verb to send to the server.
self.method = None
#: HTTP URL to send the request to.
self.url = None
#: dictionary of HTTP headers.
self.headers = None
# The `CookieJar` used to create the Cookie header will be stored here
# after prepare_cookies is called
self._cookies = None
#: request body to send to the server.
self.body = None
#: dictionary of callback hooks, for internal usage.
self.hooks = default_hooks()
#: integer denoting starting position of a readable file-like body.
self._body_position = None
def prepare(self,
method=None, url=None, headers=None, files=None, data=None,
params=None, auth=None, cookies=None, hooks=None, json=None):
"""Prepares the entire request with the given parameters."""
self.prepare_method(method)
self.prepare_url(url, params)
self.prepare_headers(headers)
self.prepare_cookies(cookies)
self.prepare_body(data, files, json)
self.prepare_auth(auth, url)
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
# This MUST go after prepare_auth. Authenticators could add a hook
self.prepare_hooks(hooks)
def __repr__(self):
        return '<PreparedRequest [%s]>' % (self.method)
def copy(self):
p = PreparedRequest()
p.method = self.method
p.url = self.url
p.headers = self.headers.copy() if self.headers is not None else None
p._cookies = _copy_cookie_jar(self._cookies)
p.body = self.body
p.hooks = self.hooks
p._body_position = self._body_position
return p
def prepare_method(self, method):
"""Prepares the given HTTP method."""
self.method = method
if self.method is not None:
self.method = to_native_string(self.method.upper())
@staticmethod
def _get_idna_encoded_host(host):
import idna
try:
host = idna.encode(host, uts46=True).decode('utf-8')
except idna.IDNAError:
raise UnicodeError
return host
def prepare_url(self, url, params):
"""Prepares the given HTTP URL."""
#: Accept objects that have string representations.
#: We're unable to blindly call unicode/str functions
#: as this will include the bytestring indicator (b'')
#: on python 3.x.
#: https://github.com/requests/requests/pull/2238
if isinstance(url, bytes):
url = url.decode('utf8')
else:
url = unicode(url) if is_py2 else str(url)
# Remove leading whitespaces from url
url = url.lstrip()
# Don't do any URL preparation for non-HTTP schemes like `mailto`,
# `data` etc to work around exceptions from `url_parse`, which
# handles RFC 3986 only.
if ':' in url and not url.lower().startswith('http'):
self.url = url
return
# Support for unicode domain names and paths.
try:
scheme, auth, host, port, path, query, fragment = parse_url(url)
except LocationParseError as e:
raise InvalidURL(*e.args)
if not scheme:
error = ("Invalid URL {0!r}: No schema supplied. Perhaps you meant http://{0}?")
error = error.format(to_native_string(url, 'utf8'))
raise MissingSchema(error)
if not host:
raise InvalidURL("Invalid URL %r: No host supplied" % url)
# In general, we want to try IDNA encoding the hostname if the string contains
# non-ASCII characters. This allows users to automatically get the correct IDNA
# behaviour. For strings containing only ASCII characters, we need to also verify
# it doesn't start with a wildcard (*), before allowing the unencoded hostname.
if not unicode_is_ascii(host):
try:
host = self._get_idna_encoded_host(host)
except UnicodeError:
raise InvalidURL('URL has an invalid label.')
elif host.startswith(u'*'):
raise InvalidURL('URL has an invalid label.')
# Carefully reconstruct the network location
netloc = auth or ''
if netloc:
netloc += '@'
netloc += host
if port:
netloc += ':' + str(port)
# Bare domains aren't valid URLs.
if not path:
path = '/'
if is_py2:
if isinstance(scheme, str):
scheme = scheme.encode('utf-8')
if isinstance(netloc, str):
netloc = netloc.encode('utf-8')
if isinstance(path, str):
path = path.encode('utf-8')
if isinstance(query, str):
query = query.encode('utf-8')
if isinstance(fragment, str):
fragment = fragment.encode('utf-8')
if isinstance(params, (str, bytes)):
params = to_native_string(params)
enc_params = self._encode_params(params)
if enc_params:
if query:
query = '%s&%s' % (query, enc_params)
else:
query = enc_params
url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment]))
self.url = url
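# Illustrative sketch (not part of the library): prepare_url can be
# exercised on its own; params are merged into any existing query string
# and the result is requoted.
#
#   p = PreparedRequest()
#   p.prepare_url('https://httpbin.org/get?a=1', {'b': '2'})
#   p.url  # 'https://httpbin.org/get?a=1&b=2'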
def prepare_headers(self, headers):
"""Prepares the given HTTP headers."""
self.headers = CaseInsensitiveDict()
if headers:
for header in headers.items():
# Raise exception on invalid header value.
check_header_validity(header)
name, value = header
self.headers[to_native_string(name)] = value
def prepare_body(self, data, files, json=None):
"""Prepares the given HTTP body data."""
# Check if file, fo, generator, iterator.
# If not, run through normal process.
# Nottin' on you.
body = None
content_type = None
if not data and json is not None:
# urllib3 requires a bytes-like body. Python 2's json.dumps
# provides this natively, but Python 3 gives a Unicode string.
content_type = 'application/json'
body = complexjson.dumps(json)
if not isinstance(body, bytes):
body = body.encode('utf-8')
is_stream = all([
hasattr(data, '__iter__'),
not isinstance(data, (basestring, list, tuple, Mapping))
])
try:
length = super_len(data)
except (TypeError, AttributeError, UnsupportedOperation):
length = None
if is_stream:
body = data
if getattr(body, 'tell', None) is not None:
# Record the current file position before reading.
# This will allow us to rewind a file in the event
# of a redirect.
try:
self._body_position = body.tell()
except (IOError, OSError):
# This differentiates from None, allowing us to catch
# a failed `tell()` later when trying to rewind the body
self._body_position = object()
if files:
raise NotImplementedError('Streamed bodies and files are mutually exclusive.')
if length:
self.headers['Content-Length'] = builtin_str(length)
else:
self.headers['Transfer-Encoding'] = 'chunked'
else:
# Multi-part file uploads.
if files:
(body, content_type) = self._encode_files(files, data)
else:
if data:
body = self._encode_params(data)
if isinstance(data, basestring) or hasattr(data, 'read'):
content_type = None
else:
content_type = 'application/x-www-form-urlencoded'
self.prepare_content_length(body)
# Add content-type if it wasn't explicitly provided.
if content_type and ('content-type' not in self.headers):
self.headers['Content-Type'] = content_type
self.body = body
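# Illustrative sketch (not part of the library): a dict body is
# form-encoded and gets a Content-Length, while a generator body is
# treated as a stream and sent with Transfer-Encoding: chunked.
#
#   p = PreparedRequest()
#   p.prepare_headers({})
#   p.prepare_body(data={'k': 'v'}, files=None)
#   p.body                          # 'k=v'
#   p.headers['Content-Type']       # 'application/x-www-form-urlencoded'
#
#   q = PreparedRequest()
#   q.prepare_headers({})
#   q.prepare_body(data=(chunk for chunk in [b'a', b'b']), files=None)
#   q.headers['Transfer-Encoding']  # 'chunked'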
def prepare_content_length(self, body):
"""Prepare Content-Length header based on request method and body"""
if body is not None:
length = super_len(body)
if length:
# If length exists, set it. Otherwise, we fallback
# to Transfer-Encoding: chunked.
self.headers['Content-Length'] = builtin_str(length)
elif self.method not in ('GET', 'HEAD') and self.headers.get('Content-Length') is None:
# Set Content-Length to 0 for methods that can have a body
# but don't provide one. (i.e. not GET or HEAD)
self.headers['Content-Length'] = '0'
def prepare_auth(self, auth, url=''):
"""Prepares the given HTTP auth data."""
# If no Auth is explicitly provided, extract it from the URL first.
if auth is None:
url_auth = get_auth_from_url(self.url)
auth = url_auth if any(url_auth) else None
if auth:
if isinstance(auth, tuple) and len(auth) == 2:
# special-case basic HTTP auth
auth = HTTPBasicAuth(*auth)
# Allow auth to make its changes.
r = auth(self)
# Update self to reflect the auth changes.
self.__dict__.update(r.__dict__)
# Recompute Content-Length
self.prepare_content_length(self.body)
def prepare_cookies(self, cookies):
"""Prepares the given HTTP cookie data.
This function eventually generates a ``Cookie`` header from the
given cookies using cookielib. Due to cookielib's design, the header
will not be regenerated if it already exists, meaning this function
can only be called once for the life of the
:class:`PreparedRequest <PreparedRequest>` object. Any subsequent calls
to ``prepare_cookies`` will have no actual effect, unless the "Cookie"
header is removed beforehand.
"""
if isinstance(cookies, cookielib.CookieJar):
self._cookies = cookies
else:
self._cookies = cookiejar_from_dict(cookies)
cookie_header = get_cookie_header(self._cookies, self)
if cookie_header is not None:
self.headers['Cookie'] = cookie_header
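# Illustrative user-side sketch (not part of the library): the Cookie
# header is generated once and is not regenerated while it is present,
# per the docstring above.
#
#   p = PreparedRequest()
#   p.prepare_url('https://httpbin.org/get', None)
#   p.prepare_headers({})
#   p.prepare_cookies({'k': 'v'})   # sets headers['Cookie'] to 'k=v'
#   p.prepare_cookies({'x': 'y'})   # no-op: header already present
#   del p.headers['Cookie']
#   p.prepare_cookies({'x': 'y'})   # now regenerates the header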
def prepare_hooks(self, hooks):
"""Prepares the given hooks."""
# hooks can be passed as None to the prepare method and to this
# method. To prevent iterating over None, simply use an empty list
# if hooks is False-y
hooks = hooks or []
for event in hooks:
self.register_hook(event, hooks[event])
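# Illustrative user-side sketch (not part of the library): a 'response'
# hook registered on the Request is carried through prepare() and fired
# by Session.send once the response arrives.
#
#   def log_url(resp, *args, **kwargs):
#       print(resp.url)
#
#   req = requests.Request('GET', 'https://httpbin.org/get',
#                          hooks={'response': log_url})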
class Response(object):
"""The :class:`Response ` object, which contains a
server's response to an HTTP request.
"""
__attrs__ = [
'_content', 'status_code', 'headers', 'url', 'history',
'encoding', 'reason', 'cookies', 'elapsed', 'request'
]
def __init__(self):
self._content = False
self._content_consumed = False
self._next = None
#: Integer Code of responded HTTP Status, e.g. 404 or 200.
self.status_code = None
#: Case-insensitive Dictionary of Response Headers.
#: For example, ``headers['content-encoding']`` will return the
#: value of a ``'Content-Encoding'`` response header.
self.headers = CaseInsensitiveDict()
#: File-like object representation of response (for advanced usage).
#: Use of ``raw`` requires that ``stream=True`` be set on the request.
# This requirement does not apply for use internally to Requests.
self.raw = None
#: Final URL location of Response.
self.url = None
#: Encoding to decode with when accessing r.text.
self.encoding = None
#: A list of :class:`Response <Response>` objects from
#: the history of the Request. Any redirect responses will end
#: up here. The list is sorted from the oldest to the most recent request.
self.history = []
#: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".
self.reason = None
#: A CookieJar of Cookies the server sent back.
self.cookies = cookiejar_from_dict({})
#: The amount of time elapsed between sending the request
#: and the arrival of the response (as a timedelta).
#: This property specifically measures the time taken between sending
#: the first byte of the request and finishing parsing the headers. It
#: is therefore unaffected by consuming the response content or the
#: value of the ``stream`` keyword argument.
self.elapsed = datetime.timedelta(0)
#: The :class:`PreparedRequest <PreparedRequest>` object to which this
#: is a response.
self.request = None
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
def __getstate__(self):
# Consume everything; accessing the content attribute makes
# sure the content has been fully read.
if not self._content_consumed:
self.content
return {attr: getattr(self, attr, None) for attr in self.__attrs__}
def __setstate__(self, state):
for name, value in state.items():
setattr(self, name, value)
# pickled objects do not have .raw
setattr(self, '_content_consumed', True)
setattr(self, 'raw', None)
def __repr__(self):
return '<Response [%s]>' % (self.status_code)
def __bool__(self):
"""Returns True if :attr:`status_code` is less than 400.
This attribute checks if the status code of the response is between
400 and 600 to see if there was a client error or a server error. If
the status code is between 200 and 400, this will return True. This
is **not** a check to see if the response code is ``200 OK``.
"""
return self.ok
def __nonzero__(self):
"""Returns True if :attr:`status_code` is less than 400.
This attribute checks if the status code of the response is between
400 and 600 to see if there was a client error or a server error. If
the status code is between 200 and 400, this will return True. This
is **not** a check to see if the response code is ``200 OK``.
"""
return self.ok
def __iter__(self):
"""Allows you to use a response as an iterator."""
return self.iter_content(128)
@property
def ok(self):
"""Returns True if :attr:`status_code` is less than 400, False if not.
This attribute checks if the status code of the response is between
400 and 600 to see if there was a client error or a server error. If
the status code is between 200 and 400, this will return True. This
is **not** a check to see if the response code is ``200 OK``.
"""
try:
self.raise_for_status()
except HTTPError:
return False
return True
@property
def is_redirect(self):
"""True if this Response is a well-formed HTTP redirect that could have
been processed automatically (by :meth:`Session.resolve_redirects`).
"""
return ('location' in self.headers and self.status_code in REDIRECT_STATI)
@property
def is_permanent_redirect(self):
"""True if this Response one of the permanent versions of redirect."""
return ('location' in self.headers and self.status_code in (codes.moved_permanently, codes.permanent_redirect))
@property
def next(self):
"""Returns a PreparedRequest for the next request in a redirect chain, if there is one."""
return self._next
@property
def apparent_encoding(self):
"""The apparent encoding, provided by the chardet library."""
return chardet.detect(self.content)['encoding']
def iter_content(self, chunk_size=1, decode_unicode=False):
"""Iterates over the response data. When stream=True is set on the
request, this avoids reading the content at once into memory for
large responses. The chunk size is the number of bytes it should
read into memory. This is not necessarily the length of each item
returned as decoding can take place.
chunk_size must be of type int or None. A value of None will
function differently depending on the value of `stream`.
stream=True will read data as it arrives in whatever size the
chunks are received. If stream=False, data is returned as
a single chunk.
If decode_unicode is True, content will be decoded using the best
available encoding based on the response.
"""
def generate():
# Special case for urllib3.
if hasattr(self.raw, 'stream'):
try:
for chunk in self.raw.stream(chunk_size, decode_content=True):
yield chunk
except ProtocolError as e:
raise ChunkedEncodingError(e)
except DecodeError as e:
raise ContentDecodingError(e)
except ReadTimeoutError as e:
raise ConnectionError(e)
else:
# Standard file-like object.
while True:
chunk = self.raw.read(chunk_size)
if not chunk:
break
yield chunk
self._content_consumed = True
if self._content_consumed and isinstance(self._content, bool):
raise StreamConsumedError()
elif chunk_size is not None and not isinstance(chunk_size, int):
raise TypeError("chunk_size must be an int, it is instead a %s." % type(chunk_size))
# simulate reading small chunks of the content
reused_chunks = iter_slices(self._content, chunk_size)
stream_chunks = generate()
chunks = reused_chunks if self._content_consumed else stream_chunks
if decode_unicode:
chunks = stream_decode_response_unicode(chunks, self)
return chunks
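# Illustrative user-side sketch (not part of the library): streaming a
# large body to disk in fixed-size chunks. Assumes stream=True so the
# body is not read into memory up front.
#
#   with requests.get('https://httpbin.org/bytes/1024', stream=True) as r:
#       with open('out.bin', 'wb') as fh:
#           for chunk in r.iter_content(chunk_size=8192):
#               fh.write(chunk)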
def iter_lines(self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=False, delimiter=None):
"""Iterates over the response data, one line at a time. When
stream=True is set on the request, this avoids reading the
content at once into memory for large responses.
.. note:: This method is not reentrant safe.
"""
pending = None
for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
if pending is not None:
chunk = pending + chunk
if delimiter:
lines = chunk.split(delimiter)
else:
lines = chunk.splitlines()
if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:
pending = lines.pop()
else:
pending = None
for line in lines:
yield line
if pending is not None:
yield pending
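# Illustrative user-side sketch (not part of the library): consuming a
# streamed response line by line, e.g. for a line-delimited feed.
#
#   r = requests.get('https://httpbin.org/stream/3', stream=True)
#   for line in r.iter_lines():
#       if line:                    # filter out keep-alive empty lines
#           print(line.decode('utf-8'))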
@property
def content(self):
"""Content of the response, in bytes."""
if self._content is False:
# Read the contents.
if self._content_consumed:
raise RuntimeError(
'The content for this response was already consumed')
if self.status_code == 0 or self.raw is None:
self._content = None
else:
self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
self._content_consumed = True
# don't need to release the connection; that's been handled by urllib3
# since we exhausted the data.
return self._content
@property
def text(self):
"""Content of the response, in unicode.
If Response.encoding is None, encoding will be guessed using
``chardet``.
The encoding of the response content is determined based solely on HTTP
headers, following RFC 2616 to the letter. If you can take advantage of
non-HTTP knowledge to make a better guess at the encoding, you should
set ``r.encoding`` appropriately before accessing this property.
"""
# Try charset from content-type
content = None
encoding = self.encoding
if not self.content:
return str('')
# Fallback to auto-detected encoding.
if self.encoding is None:
encoding = self.apparent_encoding
# Decode unicode from given encoding.
try:
content = str(self.content, encoding, errors='replace')
except (LookupError, TypeError):
# A LookupError is raised if the encoding was not found which could
# indicate a misspelling or similar mistake.
#
# A TypeError can be raised if encoding is None
#
# So we try blindly encoding.
content = str(self.content, errors='replace')
return content
def json(self, **kwargs):
r"""Returns the json-encoded content of a response, if any.
:param \*\*kwargs: Optional arguments that ``json.loads`` takes.
:raises ValueError: If the response body does not contain valid json.
"""
if not self.encoding and self.content and len(self.content) > 3:
# No encoding set. JSON RFC 4627 section 3 states we should expect
# UTF-8, -16 or -32. Detect which one to use; If the detection or
# decoding fails, fall back to `self.text` (using chardet to make
# a best guess).
encoding = guess_json_utf(self.content)
if encoding is not None:
try:
return complexjson.loads(
self.content.decode(encoding), **kwargs
)
except UnicodeDecodeError:
# Wrong UTF codec detected; usually because it's not UTF-8
# but some other 8-bit codec. This is an RFC violation,
# and the server didn't bother to tell us what codec *was*
# used.
pass
return complexjson.loads(self.text, **kwargs)
@property
def links(self):
"""Returns the parsed header links of the response, if any."""
header = self.headers.get('link')
# l = MultiDict()
l = {}
if header:
links = parse_header_links(header)
for link in links:
key = link.get('rel') or link.get('url')
l[key] = link
return l
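# Illustrative user-side sketch (not part of the library): walking
# paginated results via the parsed Link header (e.g. the GitHub API).
#
#   r = requests.get('https://api.github.com/users/kennethreitz/repos')
#   next_page = r.links.get('next', {}).get('url')
#   if next_page:
#       r = requests.get(next_page)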
def raise_for_status(self):
"""Raises stored :class:`HTTPError`, if one occurred."""
http_error_msg = ''
if isinstance(self.reason, bytes):
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn't utf-8, we fall back to iso-8859-1 for all other
# encodings. (See PR #3538)
try:
reason = self.reason.decode('utf-8')
except UnicodeDecodeError:
reason = self.reason.decode('iso-8859-1')
else:
reason = self.reason
if 400 <= self.status_code < 500:
http_error_msg = u'%s Client Error: %s for url: %s' % (self.status_code, reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = u'%s Server Error: %s for url: %s' % (self.status_code, reason, self.url)
if http_error_msg:
raise HTTPError(http_error_msg, response=self)
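# Illustrative user-side sketch (not part of the library): converting
# error status codes into exceptions near the call site.
#
#   r = requests.get('https://httpbin.org/status/404')
#   try:
#       r.raise_for_status()
#   except requests.exceptions.HTTPError as exc:
#       print(exc)                  # 404 Client Error: ... for url: ...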
def close(self):
"""Releases the connection back to the pool. Once this method has been
called the underlying ``raw`` object must not be accessed again.
*Note: Should not normally need to be called explicitly.*
"""
if not self._content_consumed:
self.raw.close()
release_conn = getattr(self.raw, 'release_conn', None)
if release_conn is not None:
release_conn()
requests-2.22.0/requests/certs.py 0000644 0000765 0000024 00000000705 13467270450 020520 0 ustar nateprewitt staff 0000000 0000000 #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
requests.certs
~~~~~~~~~~~~~~
This module returns the preferred default CA certificate bundle. There is
only one — the one from the certifi package.
If you are packaging Requests, e.g., for a Linux distribution or a managed
environment, you can change the definition of where() to return a separately
packaged CA bundle.
"""
from certifi import where
if __name__ == '__main__':
print(where())
requests-2.22.0/requests/__init__.py 0000644 0000765 0000024 00000007521 13467271065 021145 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
# __
# /__) _ _ _ _ _/ _
# / ( (- (/ (/ (- _) / _)
# /
"""
Requests HTTP Library
~~~~~~~~~~~~~~~~~~~~~
Requests is an HTTP library, written in Python, for human beings. Basic GET
usage:
>>> import requests
>>> r = requests.get('https://www.python.org')
>>> r.status_code
200
>>> 'Python is a programming language' in r.content
True
... or POST:
>>> payload = dict(key1='value1', key2='value2')
>>> r = requests.post('https://httpbin.org/post', data=payload)
>>> print(r.text)
{
...
"form": {
"key2": "value2",
"key1": "value1"
},
...
}
The other HTTP methods are supported - see `requests.api`. Full documentation
is at <http://python-requests.org>.
:copyright: (c) 2017 by Kenneth Reitz.
:license: Apache 2.0, see LICENSE for more details.
"""
import urllib3
import chardet
import warnings
from .exceptions import RequestsDependencyWarning
def check_compatibility(urllib3_version, chardet_version):
urllib3_version = urllib3_version.split('.')
assert urllib3_version != ['dev'] # Verify urllib3 isn't installed from git.
# Sometimes, urllib3 only reports its version as 16.1.
if len(urllib3_version) == 2:
urllib3_version.append('0')
# Check urllib3 for compatibility.
major, minor, patch = urllib3_version # noqa: F811
major, minor, patch = int(major), int(minor), int(patch)
# urllib3 >= 1.21.1, <= 1.25
assert major == 1
assert minor >= 21
assert minor <= 25
# Check chardet for compatibility.
major, minor, patch = chardet_version.split('.')[:3]
major, minor, patch = int(major), int(minor), int(patch)
# chardet >= 3.0.2, < 3.1.0
assert major == 3
assert minor < 1
assert patch >= 2
def _check_cryptography(cryptography_version):
# cryptography < 1.3.4
try:
cryptography_version = list(map(int, cryptography_version.split('.')))
except ValueError:
return
if cryptography_version < [1, 3, 4]:
warning = 'Old version of cryptography ({}) may cause slowdown.'.format(cryptography_version)
warnings.warn(warning, RequestsDependencyWarning)
# Check imported dependencies for compatibility.
try:
check_compatibility(urllib3.__version__, chardet.__version__)
except (AssertionError, ValueError):
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
"version!".format(urllib3.__version__, chardet.__version__),
RequestsDependencyWarning)
# Attempt to enable urllib3's SNI support, if possible
try:
from urllib3.contrib import pyopenssl
pyopenssl.inject_into_urllib3()
# Check cryptography version
from cryptography import __version__ as cryptography_version
_check_cryptography(cryptography_version)
except ImportError:
pass
# urllib3's DependencyWarnings should be silenced.
from urllib3.exceptions import DependencyWarning
warnings.simplefilter('ignore', DependencyWarning)
from .__version__ import __title__, __description__, __url__, __version__
from .__version__ import __build__, __author__, __author_email__, __license__
from .__version__ import __copyright__, __cake__
from . import utils
from . import packages
from .models import Request, Response, PreparedRequest
from .api import request, get, head, post, patch, put, delete, options
from .sessions import session, Session
from .status_codes import codes
from .exceptions import (
RequestException, Timeout, URLRequired,
TooManyRedirects, HTTPError, ConnectionError,
FileModeWarning, ConnectTimeout, ReadTimeout
)
# Set default logging handler to avoid "No handler found" warnings.
import logging
from logging import NullHandler
logging.getLogger(__name__).addHandler(NullHandler())
# FileModeWarnings go off per the default.
warnings.simplefilter('default', FileModeWarning, append=True)
requests-2.22.0/requests/status_codes.py 0000644 0000765 0000024 00000010041 13467270450 022072 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
r"""
The ``codes`` object defines a mapping from common names for HTTP statuses
to their numerical codes, accessible either as attributes or as dictionary
items.
>>> requests.codes['temporary_redirect']
307
>>> requests.codes.teapot
418
>>> requests.codes['\o/']
200
Some codes have multiple names, and both upper- and lower-case versions of
the names are allowed. For example, ``codes.ok``, ``codes.OK``, and
``codes.okay`` all correspond to the HTTP status code 200.
"""
from .structures import LookupDict
_codes = {
# Informational.
100: ('continue',),
101: ('switching_protocols',),
102: ('processing',),
103: ('checkpoint',),
122: ('uri_too_long', 'request_uri_too_long'),
200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '✓'),
201: ('created',),
202: ('accepted',),
203: ('non_authoritative_info', 'non_authoritative_information'),
204: ('no_content',),
205: ('reset_content', 'reset'),
206: ('partial_content', 'partial'),
207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),
208: ('already_reported',),
226: ('im_used',),
# Redirection.
300: ('multiple_choices',),
301: ('moved_permanently', 'moved', '\\o-'),
302: ('found',),
303: ('see_other', 'other'),
304: ('not_modified',),
305: ('use_proxy',),
306: ('switch_proxy',),
307: ('temporary_redirect', 'temporary_moved', 'temporary'),
308: ('permanent_redirect',
'resume_incomplete', 'resume',), # These 2 to be removed in 3.0
# Client Error.
400: ('bad_request', 'bad'),
401: ('unauthorized',),
402: ('payment_required', 'payment'),
403: ('forbidden',),
404: ('not_found', '-o-'),
405: ('method_not_allowed', 'not_allowed'),
406: ('not_acceptable',),
407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),
408: ('request_timeout', 'timeout'),
409: ('conflict',),
410: ('gone',),
411: ('length_required',),
412: ('precondition_failed', 'precondition'),
413: ('request_entity_too_large',),
414: ('request_uri_too_large',),
415: ('unsupported_media_type', 'unsupported_media', 'media_type'),
416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),
417: ('expectation_failed',),
418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),
421: ('misdirected_request',),
422: ('unprocessable_entity', 'unprocessable'),
423: ('locked',),
424: ('failed_dependency', 'dependency'),
425: ('unordered_collection', 'unordered'),
426: ('upgrade_required', 'upgrade'),
428: ('precondition_required', 'precondition'),
429: ('too_many_requests', 'too_many'),
431: ('header_fields_too_large', 'fields_too_large'),
444: ('no_response', 'none'),
449: ('retry_with', 'retry'),
450: ('blocked_by_windows_parental_controls', 'parental_controls'),
451: ('unavailable_for_legal_reasons', 'legal_reasons'),
499: ('client_closed_request',),
# Server Error.
500: ('internal_server_error', 'server_error', '/o\\', '✗'),
501: ('not_implemented',),
502: ('bad_gateway',),
503: ('service_unavailable', 'unavailable'),
504: ('gateway_timeout',),
505: ('http_version_not_supported', 'http_version'),
506: ('variant_also_negotiates',),
507: ('insufficient_storage',),
509: ('bandwidth_limit_exceeded', 'bandwidth'),
510: ('not_extended',),
511: ('network_authentication_required', 'network_auth', 'network_authentication'),
}
codes = LookupDict(name='status_codes')
def _init():
for code, titles in _codes.items():
for title in titles:
setattr(codes, title, code)
if not title.startswith(('\\', '/')):
setattr(codes, title.upper(), code)
def doc(code):
names = ', '.join('``%s``' % n for n in _codes[code])
return '* %d: %s' % (code, names)
global __doc__
__doc__ = (__doc__ + '\n' +
'\n'.join(doc(code) for code in sorted(_codes))
if __doc__ is not None else None)
_init()
requests-2.22.0/requests/packages.py 0000644 0000765 0000024 00000001036 13467270450 021154 0 ustar nateprewitt staff 0000000 0000000 import sys
# This code exists for backwards compatibility reasons.
# I don't like it either. Just look the other way. :)
for package in ('urllib3', 'idna', 'chardet'):
locals()[package] = __import__(package)
# This traversal is apparently necessary such that the identities are
# preserved (requests.packages.urllib3.* is urllib3.*)
for mod in list(sys.modules):
if mod == package or mod.startswith(package + '.'):
sys.modules['requests.packages.' + mod] = sys.modules[mod]
# Kinda cool, though, right?
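# Illustrative user-side sketch (not part of the library): the aliasing
# above keeps old import paths working and preserves module identity.
#
#   import requests.packages
#   import urllib3
#   assert requests.packages.urllib3 is urllib3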
requests-2.22.0/requests/__version__.py 0000644 0000765 0000024 00000000664 13467271065 021670 0 ustar nateprewitt staff 0000000 0000000 # .-. .-. .-. . . .-. .-. .-. .-.
# |( |- |.| | | |- `-. | `-.
# ' ' `-' `-`.`-' `-' `-' ' `-'
__title__ = 'requests'
__description__ = 'Python HTTP for Humans.'
__url__ = 'http://python-requests.org'
__version__ = '2.22.0'
__build__ = 0x022200
__author__ = 'Kenneth Reitz'
__author_email__ = 'me@kennethreitz.org'
__license__ = 'Apache 2.0'
__copyright__ = 'Copyright 2019 Kenneth Reitz'
__cake__ = u'\u2728 \U0001f370 \u2728'
requests-2.22.0/requests/api.py 0000644 0000765 0000024 00000014177 13467270450 020161 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.api
~~~~~~~~~~~~
This module implements the Requests API.
:copyright: (c) 2012 by Kenneth Reitz.
:license: Apache2, see LICENSE for more details.
"""
from . import sessions
def request(method, url, **kwargs):
"""Constructs and sends a :class:`Request `.
:param method: method for the new :class:`Request` object.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
:param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers
to add for the file.
:param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How many seconds to wait for the server to send data
before giving up, as a float, or a :ref:`(connect timeout, read
timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
:param verify: (optional) Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use. Defaults to ``True``.
:param stream: (optional) if ``False``, the response content will be immediately downloaded.
:param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
:return: :class:`Response <Response>` object
:rtype: requests.Response
Usage::
>>> import requests
>>> req = requests.request('GET', 'https://httpbin.org/get')
"""
# By using the 'with' statement we are sure the session is closed, thus we
# avoid leaving sockets open which can trigger a ResourceWarning in some
# cases, and look like a memory leak in others.
with sessions.Session() as session:
return session.request(method=method, url=url, **kwargs)
def get(url, params=None, **kwargs):
r"""Sends a GET request.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary, list of tuples or bytes to send
in the query string for the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', True)
return request('get', url, params=params, **kwargs)
def options(url, **kwargs):
r"""Sends an OPTIONS request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', True)
return request('options', url, **kwargs)
def head(url, **kwargs):
r"""Sends a HEAD request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', False)
return request('head', url, **kwargs)
def post(url, data=None, json=None, **kwargs):
r"""Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('post', url, data=data, json=json, **kwargs)
def put(url, data=None, **kwargs):
r"""Sends a PUT request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('put', url, data=data, **kwargs)
def patch(url, data=None, **kwargs):
r"""Sends a PATCH request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, list of tuples, bytes, or file-like
object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('patch', url, data=data, **kwargs)
def delete(url, **kwargs):
r"""Sends a DELETE request.
:param url: URL for the new :class:`Request` object.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
return request('delete', url, **kwargs)
requests-2.22.0/requests/_internal_utils.py 0000644 0000765 0000024 00000002110 13467270450 022563 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests._internal_utils
~~~~~~~~~~~~~~
Provides utility functions that are consumed internally by Requests
which depend on extremely few external helpers (such as compat)
"""
from .compat import is_py2, builtin_str, str
def to_native_string(string, encoding='ascii'):
"""Given a string object, regardless of type, returns a representation of
that string in the native string type, encoding and decoding where
necessary. This assumes ASCII unless told otherwise.
"""
if isinstance(string, builtin_str):
out = string
else:
if is_py2:
out = string.encode(encoding)
else:
out = string.decode(encoding)
return out
def unicode_is_ascii(u_string):
"""Determine if unicode string only contains ASCII characters.
:param str u_string: unicode string to check. Must be unicode
and not Python 2 `str`.
:rtype: bool
"""
assert isinstance(u_string, str)
try:
u_string.encode('ascii')
return True
except UnicodeEncodeError:
return False
requests-2.22.0/requests/utils.py 0000644 0000765 0000024 00000072541 13467270450 020547 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.utils
~~~~~~~~~~~~~~
This module provides utility functions that are used within Requests
that are also useful for external consumption.
"""
import codecs
import contextlib
import io
import os
import re
import socket
import struct
import sys
import tempfile
import warnings
import zipfile
from .__version__ import __version__
from . import certs
# to_native_string is unused here, but imported here for backwards compatibility
from ._internal_utils import to_native_string
from .compat import parse_http_list as _parse_list_header
from .compat import (
quote, urlparse, bytes, str, OrderedDict, unquote, getproxies,
proxy_bypass, urlunparse, basestring, integer_types, is_py3,
proxy_bypass_environment, getproxies_environment, Mapping)
from .cookies import cookiejar_from_dict
from .structures import CaseInsensitiveDict
from .exceptions import (
InvalidURL, InvalidHeader, FileModeWarning, UnrewindableBodyError)
NETRC_FILES = ('.netrc', '_netrc')
DEFAULT_CA_BUNDLE_PATH = certs.where()
DEFAULT_PORTS = {'http': 80, 'https': 443}
if sys.platform == 'win32':
# provide a proxy_bypass version on Windows without DNS lookups
def proxy_bypass_registry(host):
try:
if is_py3:
import winreg
else:
import _winreg as winreg
except ImportError:
return False
try:
internetSettings = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
r'Software\Microsoft\Windows\CurrentVersion\Internet Settings')
# ProxyEnable could be REG_SZ or REG_DWORD, normalizing it
proxyEnable = int(winreg.QueryValueEx(internetSettings,
'ProxyEnable')[0])
# ProxyOverride is almost always a string
proxyOverride = winreg.QueryValueEx(internetSettings,
'ProxyOverride')[0]
except OSError:
return False
if not proxyEnable or not proxyOverride:
return False
# make a check value list from the registry entry: replace the
# '<local>' string by the localhost entry and the corresponding
# canonical entry.
proxyOverride = proxyOverride.split(';')
# now check if we match one of the registry values.
for test in proxyOverride:
if test == '<local>':
if '.' not in host:
return True
test = test.replace(".", r"\.") # mask dots
test = test.replace("*", r".*") # change glob sequence
test = test.replace("?", r".") # change glob char
if re.match(test, host, re.I):
return True
return False
def proxy_bypass(host): # noqa
"""Return True, if the host should be bypassed.
Checks proxy settings gathered from the environment, if specified,
or the registry.
"""
if getproxies_environment():
return proxy_bypass_environment(host)
else:
return proxy_bypass_registry(host)
def dict_to_sequence(d):
"""Returns an internal sequence dictionary update."""
if hasattr(d, 'items'):
d = d.items()
return d
def super_len(o):
total_length = None
current_position = 0
if hasattr(o, '__len__'):
total_length = len(o)
elif hasattr(o, 'len'):
total_length = o.len
elif hasattr(o, 'fileno'):
try:
fileno = o.fileno()
except io.UnsupportedOperation:
pass
else:
total_length = os.fstat(fileno).st_size
# Having used fstat to determine the file length, we need to
# confirm that this file was opened up in binary mode.
if 'b' not in o.mode:
warnings.warn((
"Requests has determined the content-length for this "
"request using the binary size of the file: however, the "
"file has been opened in text mode (i.e. without the 'b' "
"flag in the mode). This may lead to an incorrect "
"content-length. In Requests 3.0, support will be removed "
"for files in text mode."),
FileModeWarning
)
if hasattr(o, 'tell'):
try:
current_position = o.tell()
except (OSError, IOError):
# This can happen in some weird situations, such as when the file
# is actually a special file descriptor like stdin. In this
# instance, we don't know what the length is, so set it to zero and
# let requests chunk it instead.
if total_length is not None:
current_position = total_length
else:
if hasattr(o, 'seek') and total_length is None:
# StringIO and BytesIO have seek but no useable fileno
try:
# seek to end of file
o.seek(0, 2)
total_length = o.tell()
# seek back to current position to support
# partially read file-like objects
o.seek(current_position or 0)
except (OSError, IOError):
total_length = 0
if total_length is None:
total_length = 0
return max(0, total_length - current_position)
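# Illustrative sketch (not part of the library): super_len reports the
# *remaining* bytes of a partially read file-like object.
#
#   from io import BytesIO
#   buf = BytesIO(b'abcdef')
#   buf.read(2)
#   super_len(buf)                  # 4, not 6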
def get_netrc_auth(url, raise_errors=False):
"""Returns the Requests tuple auth for a given url from netrc."""
try:
from netrc import netrc, NetrcParseError
netrc_path = None
for f in NETRC_FILES:
try:
loc = os.path.expanduser('~/{}'.format(f))
except KeyError:
# os.path.expanduser can fail when $HOME is undefined and
# getpwuid fails. See https://bugs.python.org/issue20164 &
# https://github.com/requests/requests/issues/1846
return
if os.path.exists(loc):
netrc_path = loc
break
# Abort early if there isn't one.
if netrc_path is None:
return
ri = urlparse(url)
# Strip port numbers from netloc. This weird `if...encode`` dance is
# used for Python 3.2, which doesn't support unicode literals.
splitstr = b':'
if isinstance(url, str):
splitstr = splitstr.decode('ascii')
host = ri.netloc.split(splitstr)[0]
try:
_netrc = netrc(netrc_path).authenticators(host)
if _netrc:
# Return with login / password
login_i = (0 if _netrc[0] else 1)
return (_netrc[login_i], _netrc[2])
except (NetrcParseError, IOError):
# If there was a parsing error or a permissions issue reading the file,
# we'll just skip netrc auth unless explicitly asked to raise errors.
if raise_errors:
raise
# AppEngine hackiness.
except (ImportError, AttributeError):
pass
def guess_filename(obj):
"""Tries to guess the filename of the given object."""
name = getattr(obj, 'name', None)
if (name and isinstance(name, basestring) and name[0] != '<' and
name[-1] != '>'):
return os.path.basename(name)
def extract_zipped_paths(path):
"""Replace nonexistent paths that look like they refer to a member of a zip
archive with the location of an extracted copy of the target, or else
just return the provided path unchanged.
"""
if os.path.exists(path):
# this is already a valid path, no need to do anything further
return path
# find the first valid part of the provided path and treat that as a zip archive
# assume the rest of the path is the name of a member in the archive
archive, member = os.path.split(path)
while archive and not os.path.exists(archive):
archive, prefix = os.path.split(archive)
member = '/'.join([prefix, member])
if not zipfile.is_zipfile(archive):
return path
zip_file = zipfile.ZipFile(archive)
if member not in zip_file.namelist():
return path
# we have a valid zip archive and a valid member of that archive
tmp = tempfile.gettempdir()
extracted_path = os.path.join(tmp, *member.split('/'))
if not os.path.exists(extracted_path):
extracted_path = zip_file.extract(member, path=tmp)
return extracted_path
def from_key_val_list(value):
"""Take an object and test to see if it can be represented as a
dictionary. If it can be, return an
OrderedDict, e.g.,
::
>>> from_key_val_list([('key', 'val')])
OrderedDict([('key', 'val')])
>>> from_key_val_list('string')
ValueError: cannot encode objects that are not 2-tuples
>>> from_key_val_list({'key': 'val'})
OrderedDict([('key', 'val')])
:rtype: OrderedDict
"""
if value is None:
return None
if isinstance(value, (str, bytes, bool, int)):
raise ValueError('cannot encode objects that are not 2-tuples')
return OrderedDict(value)
def to_key_val_list(value):
"""Take an object and test to see if it can be represented as a
dictionary. If it can be, return a list of tuples, e.g.,
::
>>> to_key_val_list([('key', 'val')])
[('key', 'val')]
>>> to_key_val_list({'key': 'val'})
[('key', 'val')]
>>> to_key_val_list('string')
ValueError: cannot encode objects that are not 2-tuples.
:rtype: list
"""
if value is None:
return None
if isinstance(value, (str, bytes, bool, int)):
raise ValueError('cannot encode objects that are not 2-tuples')
if isinstance(value, Mapping):
value = value.items()
return list(value)
# From mitsuhiko/werkzeug (used with permission).
def parse_list_header(value):
"""Parse lists as described by RFC 2068 Section 2.
In particular, parse comma-separated lists where the elements of
the list may include quoted-strings. A quoted-string could
contain a comma. A non-quoted string could have quotes in the
middle. Quotes are removed automatically after parsing.
It basically works like :func:`parse_set_header` just that items
may appear multiple times and case sensitivity is preserved.
The return value is a standard :class:`list`:
>>> parse_list_header('token, "quoted value"')
['token', 'quoted value']
To create a header from the :class:`list` again, use the
:func:`dump_header` function.
:param value: a string with a list header.
:return: :class:`list`
:rtype: list
"""
result = []
for item in _parse_list_header(value):
if item[:1] == item[-1:] == '"':
item = unquote_header_value(item[1:-1])
result.append(item)
return result
# From mitsuhiko/werkzeug (used with permission).
def parse_dict_header(value):
"""Parse lists of key, value pairs as described by RFC 2068 Section 2 and
convert them into a python dict:
>>> d = parse_dict_header('foo="is a fish", bar="as well"')
>>> type(d) is dict
True
>>> sorted(d.items())
[('bar', 'as well'), ('foo', 'is a fish')]
If there is no value for a key it will be `None`:
>>> parse_dict_header('key_without_value')
{'key_without_value': None}
To create a header from the :class:`dict` again, use the
:func:`dump_header` function.
:param value: a string with a dict header.
:return: :class:`dict`
:rtype: dict
"""
result = {}
for item in _parse_list_header(value):
if '=' not in item:
result[item] = None
continue
name, value = item.split('=', 1)
if value[:1] == value[-1:] == '"':
value = unquote_header_value(value[1:-1])
result[name] = value
return result
# From mitsuhiko/werkzeug (used with permission).
def unquote_header_value(value, is_filename=False):
r"""Unquotes a header value. (Reversal of :func:`quote_header_value`).
This does not use the real unquoting but what browsers are actually
using for quoting.
:param value: the header value to unquote.
:rtype: str
"""
if value and value[0] == value[-1] == '"':
# this is not the real unquoting, but fixing this so that the
# RFC is met will result in bugs with internet explorer and
# probably some other browsers as well. IE for example is
# uploading files with "C:\foo\bar.txt" as filename
value = value[1:-1]
# if this is a filename and the starting characters look like
# a UNC path, then just return the value without quotes. Using the
# replace sequence below on a UNC path has the effect of turning
# the leading double slash into a single slash and then
# _fix_ie_filename() doesn't work correctly. See #458.
if not is_filename or value[:2] != '\\\\':
return value.replace('\\\\', '\\').replace('\\"', '"')
return value
def dict_from_cookiejar(cj):
"""Returns a key/value dictionary from a CookieJar.
:param cj: CookieJar object to extract cookies from.
:rtype: dict
"""
cookie_dict = {}
for cookie in cj:
cookie_dict[cookie.name] = cookie.value
return cookie_dict
def add_dict_to_cookiejar(cj, cookie_dict):
"""Returns a CookieJar from a key/value dictionary.
:param cj: CookieJar to insert cookies into.
:param cookie_dict: Dict of key/values to insert into CookieJar.
:rtype: CookieJar
"""
return cookiejar_from_dict(cookie_dict, cj)
def get_encodings_from_content(content):
"""Returns encodings from given content string.
:param content: bytestring to extract encodings from.
"""
warnings.warn((
'In requests 3.0, get_encodings_from_content will be removed. For '
'more information, please see the discussion on issue #2266. (This'
' warning should only appear once.)'),
DeprecationWarning)
charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')
return (charset_re.findall(content) +
pragma_re.findall(content) +
xml_re.findall(content))
def _parse_content_type_header(header):
"""Returns content type and parameters from given header
:param header: string
:return: tuple containing content type and dictionary of
parameters
"""
tokens = header.split(';')
content_type, params = tokens[0].strip(), tokens[1:]
params_dict = {}
items_to_strip = "\"' "
for param in params:
param = param.strip()
if param:
key, value = param, True
index_of_equals = param.find("=")
if index_of_equals != -1:
key = param[:index_of_equals].strip(items_to_strip)
value = param[index_of_equals + 1:].strip(items_to_strip)
params_dict[key.lower()] = value
return content_type, params_dict
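# Illustrative sketch (not part of the library): quotes and whitespace
# are stripped from parameter values, and keys are lower-cased.
#
#   _parse_content_type_header('text/html; charset="UTF-8"')
#   # -> ('text/html', {'charset': 'UTF-8'})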
def get_encoding_from_headers(headers):
"""Returns encodings from given HTTP Header Dict.
:param headers: dictionary to extract encoding from.
:rtype: str
"""
content_type = headers.get('content-type')
if not content_type:
return None
content_type, params = _parse_content_type_header(content_type)
if 'charset' in params:
return params['charset'].strip("'\"")
if 'text' in content_type:
return 'ISO-8859-1'
def stream_decode_response_unicode(iterator, r):
"""Stream decodes a iterator."""
if r.encoding is None:
for item in iterator:
yield item
return
decoder = codecs.getincrementaldecoder(r.encoding)(errors='replace')
for chunk in iterator:
rv = decoder.decode(chunk)
if rv:
yield rv
rv = decoder.decode(b'', final=True)
if rv:
yield rv
def iter_slices(string, slice_length):
"""Iterate over slices of a string."""
pos = 0
if slice_length is None or slice_length <= 0:
slice_length = len(string)
while pos < len(string):
yield string[pos:pos + slice_length]
pos += slice_length
def get_unicode_from_response(r):
"""Returns the requested content back in unicode.
:param r: Response object to get unicode content from.
Tried:
1. charset from content-type
2. fall back and replace all unicode characters
:rtype: str
"""
warnings.warn((
'In requests 3.0, get_unicode_from_response will be removed. For '
'more information, please see the discussion on issue #2266. (This'
' warning should only appear once.)'),
DeprecationWarning)
tried_encodings = []
# Try charset from content-type
encoding = get_encoding_from_headers(r.headers)
if encoding:
try:
return str(r.content, encoding)
except UnicodeError:
tried_encodings.append(encoding)
# Fall back:
try:
return str(r.content, encoding, errors='replace')
except TypeError:
return r.content
# The unreserved URI characters (RFC 3986)
UNRESERVED_SET = frozenset(
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" + "0123456789-._~")
def unquote_unreserved(uri):
"""Un-escape any percent-escape sequences in a URI that are unreserved
characters. This leaves all reserved, illegal and non-ASCII bytes encoded.
:rtype: str
"""
parts = uri.split('%')
for i in range(1, len(parts)):
h = parts[i][0:2]
if len(h) == 2 and h.isalnum():
try:
c = chr(int(h, 16))
except ValueError:
raise InvalidURL("Invalid percent-escape sequence: '%s'" % h)
if c in UNRESERVED_SET:
parts[i] = c + parts[i][2:]
else:
parts[i] = '%' + parts[i]
else:
parts[i] = '%' + parts[i]
return ''.join(parts)
def requote_uri(uri):
"""Re-quote the given URI.
This function passes the given URI through an unquote/quote cycle to
ensure that it is fully and consistently quoted.
:rtype: str
"""
safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
safe_without_percent = "!#$&'()*+,/:;=?@[]~"
try:
# Unquote only the unreserved characters
# Then quote only illegal characters (do not quote reserved,
# unreserved, or '%')
return quote(unquote_unreserved(uri), safe=safe_with_percent)
except InvalidURL:
# We couldn't unquote the given URI, so let's try quoting it, but
# there may be unquoted '%'s in the URI. We need to make sure they're
# properly quoted so they do not cause issues elsewhere.
return quote(uri, safe=safe_without_percent)
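# Illustrative sketch (not part of the library): requote_uri is
# idempotent, so an already-quoted URL is not double-escaped.
#
#   requote_uri('https://example.com/a b')     # '.../a%20b'
#   requote_uri('https://example.com/a%20b')   # still '.../a%20b'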
def address_in_network(ip, net):
"""This function allows you to check if an IP belongs to a network subnet
Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24
returns False if ip = 192.168.1.1 and net = 192.168.100.0/24
:rtype: bool
"""
ipaddr = struct.unpack('=L', socket.inet_aton(ip))[0]
netaddr, bits = net.split('/')
netmask = struct.unpack('=L', socket.inet_aton(dotted_netmask(int(bits))))[0]
network = struct.unpack('=L', socket.inet_aton(netaddr))[0] & netmask
return (ipaddr & netmask) == (network & netmask)
def dotted_netmask(mask):
"""Converts mask from /xx format to xxx.xxx.xxx.xxx
Example: if mask is 24 function returns 255.255.255.0
:rtype: str
"""
bits = 0xffffffff ^ (1 << 32 - mask) - 1
return socket.inet_ntoa(struct.pack('>I', bits))
def is_ipv4_address(string_ip):
"""
:rtype: bool
"""
try:
socket.inet_aton(string_ip)
except socket.error:
return False
return True
def is_valid_cidr(string_network):
"""
Very simple check of the cidr format in no_proxy variable.
:rtype: bool
"""
if string_network.count('/') == 1:
try:
mask = int(string_network.split('/')[1])
except ValueError:
return False
if mask < 1 or mask > 32:
return False
try:
socket.inet_aton(string_network.split('/')[0])
except socket.error:
return False
else:
return False
return True
@contextlib.contextmanager
def set_environ(env_name, value):
"""Set the environment variable 'env_name' to 'value'
Save previous value, yield, and then restore the previous value stored in
the environment variable 'env_name'.
If 'value' is None, do nothing"""
value_changed = value is not None
if value_changed:
old_value = os.environ.get(env_name)
os.environ[env_name] = value
try:
yield
finally:
if value_changed:
if old_value is None:
del os.environ[env_name]
else:
os.environ[env_name] = old_value
def should_bypass_proxies(url, no_proxy):
"""
Returns whether we should bypass proxies or not.
:rtype: bool
"""
# Prioritize lowercase environment variables over uppercase
# to keep a consistent behaviour with other http projects (curl, wget).
get_proxy = lambda k: os.environ.get(k) or os.environ.get(k.upper())
# First check whether no_proxy is defined. If it is, check that the URL
# we're getting isn't in the no_proxy list.
no_proxy_arg = no_proxy
if no_proxy is None:
no_proxy = get_proxy('no_proxy')
parsed = urlparse(url)
if parsed.hostname is None:
# URLs don't always have hostnames, e.g. file:/// urls.
return True
if no_proxy:
# We need to check whether we match here. We need to see if we match
# the end of the hostname, both with and without the port.
no_proxy = (
host for host in no_proxy.replace(' ', '').split(',') if host
)
if is_ipv4_address(parsed.hostname):
for proxy_ip in no_proxy:
if is_valid_cidr(proxy_ip):
if address_in_network(parsed.hostname, proxy_ip):
return True
elif parsed.hostname == proxy_ip:
# If no_proxy ip was defined in plain IP notation instead of cidr notation &
# matches the IP of the index
return True
else:
host_with_port = parsed.hostname
if parsed.port:
host_with_port += ':{}'.format(parsed.port)
for host in no_proxy:
if parsed.hostname.endswith(host) or host_with_port.endswith(host):
# The URL does match something in no_proxy, so we don't want
# to apply the proxies on this URL.
return True
with set_environ('no_proxy', no_proxy_arg):
# parsed.hostname can be `None` in cases such as a file URI.
try:
bypass = proxy_bypass(parsed.hostname)
except (TypeError, socket.gaierror):
bypass = False
if bypass:
return True
return False
def get_environ_proxies(url, no_proxy=None):
"""
Return a dict of environment proxies.
:rtype: dict
"""
if should_bypass_proxies(url, no_proxy=no_proxy):
return {}
else:
return getproxies()
def select_proxy(url, proxies):
"""Select a proxy for the url, if applicable.
:param url: The URL being requested
:param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
"""
proxies = proxies or {}
urlparts = urlparse(url)
if urlparts.hostname is None:
return proxies.get(urlparts.scheme, proxies.get('all'))
proxy_keys = [
urlparts.scheme + '://' + urlparts.hostname,
urlparts.scheme,
'all://' + urlparts.hostname,
'all',
]
proxy = None
for proxy_key in proxy_keys:
if proxy_key in proxies:
proxy = proxies[proxy_key]
break
return proxy
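# Illustrative sketch (not part of the library): more specific keys win,
# so a 'scheme://host' entry beats a bare 'scheme' entry.
#
#   proxies = {'http': 'http://proxy:3128',
#              'http://example.com': 'http://special:3128'}
#   select_proxy('http://example.com/path', proxies)
#   # -> 'http://special:3128'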
def default_user_agent(name="python-requests"):
"""
Return a string representing the default user agent.
:rtype: str
"""
return '%s/%s' % (name, __version__)
def default_headers():
"""
:rtype: requests.structures.CaseInsensitiveDict
"""
return CaseInsensitiveDict({
'User-Agent': default_user_agent(),
'Accept-Encoding': ', '.join(('gzip', 'deflate')),
'Accept': '*/*',
'Connection': 'keep-alive',
})
def parse_header_links(value):
"""Return a list of parsed link headers proxies.
i.e. Link: ; rel=front; type="image/jpeg",; rel=back;type="image/jpeg"
:rtype: list
"""
links = []
replace_chars = ' \'"'
value = value.strip(replace_chars)
if not value:
return links
for val in re.split(', *<', value):
try:
url, params = val.split(';', 1)
except ValueError:
url, params = val, ''
link = {'url': url.strip('<> \'"')}
for param in params.split(';'):
try:
key, value = param.split('=')
except ValueError:
break
link[key.strip(replace_chars)] = value.strip(replace_chars)
links.append(link)
return links
# Null bytes; no need to recreate these on each call to guess_json_utf
_null = '\x00'.encode('ascii') # encoding to ASCII for Python 3
_null2 = _null * 2
_null3 = _null * 3
def guess_json_utf(data):
"""
:rtype: str
"""
# JSON always starts with two ASCII characters, so detection is as
# easy as counting the nulls and from their location and count
# determine the encoding. Also detect a BOM, if present.
sample = data[:4]
if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
return 'utf-32' # BOM included
if sample[:3] == codecs.BOM_UTF8:
return 'utf-8-sig' # BOM included, MS style (discouraged)
if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
return 'utf-16' # BOM included
nullcount = sample.count(_null)
if nullcount == 0:
return 'utf-8'
if nullcount == 2:
if sample[::2] == _null2: # 1st and 3rd are null
return 'utf-16-be'
if sample[1::2] == _null2: # 2nd and 4th are null
return 'utf-16-le'
# Did not detect 2 valid UTF-16 ascii-range characters
if nullcount == 3:
if sample[:3] == _null3:
return 'utf-32-be'
if sample[1:] == _null3:
return 'utf-32-le'
# Did not detect a valid UTF-32 ascii-range character
return None
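# Illustrative sketch (not part of the library): the null-byte pattern of
# the first four bytes identifies the UTF flavour without a decoder.
#
#   guess_json_utf('{}'.encode('utf-16-le'))   # 'utf-16-le'
#   guess_json_utf(b'{}')                      # 'utf-8'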
def prepend_scheme_if_needed(url, new_scheme):
"""Given a URL that may or may not have a scheme, prepend the given scheme.
Does not replace a present scheme with the one provided as an argument.
:rtype: str
"""
scheme, netloc, path, params, query, fragment = urlparse(url, new_scheme)
# urlparse is a finicky beast, and sometimes decides that there isn't a
# netloc present. Assume that it's being over-cautious, and switch netloc
# and path if urlparse decided there was no netloc.
if not netloc:
netloc, path = path, netloc
return urlunparse((scheme, netloc, path, params, query, fragment))
def get_auth_from_url(url):
"""Given a url with authentication components, extract them into a tuple of
username,password.
:rtype: (str,str)
"""
parsed = urlparse(url)
try:
auth = (unquote(parsed.username), unquote(parsed.password))
except (AttributeError, TypeError):
auth = ('', '')
return auth
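# Illustrative: credentials are percent-decoded, and URLs without auth
# components yield empty strings rather than raising.
#
#   >>> get_auth_from_url('http://user:p%40ss@example.com/')
#   ('user', 'p@ss')
#   >>> get_auth_from_url('http://example.com/')
#   ('', '')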
# Moved outside of function to avoid recompile every call
_CLEAN_HEADER_REGEX_BYTE = re.compile(b'^\\S[^\\r\\n]*$|^$')
_CLEAN_HEADER_REGEX_STR = re.compile(r'^\S[^\r\n]*$|^$')
def check_header_validity(header):
"""Verifies that header value is a string which doesn't contain
leading whitespace or return characters. This prevents unintended
header injection.
:param header: tuple, in the format (name, value).
"""
name, value = header
if isinstance(value, bytes):
pat = _CLEAN_HEADER_REGEX_BYTE
else:
pat = _CLEAN_HEADER_REGEX_STR
try:
if not pat.match(value):
raise InvalidHeader("Invalid return character or leading space in header: %s" % name)
except TypeError:
raise InvalidHeader("Value for header {%s: %s} must be of type str or "
"bytes, not %s" % (name, value, type(value)))
def urldefragauth(url):
"""
Given a url remove the fragment and the authentication part.
:rtype: str
"""
scheme, netloc, path, params, query, fragment = urlparse(url)
# see func:`prepend_scheme_if_needed`
if not netloc:
netloc, path = path, netloc
netloc = netloc.rsplit('@', 1)[-1]
return urlunparse((scheme, netloc, path, params, query, ''))
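# Illustrative: both the userinfo and the fragment are dropped.
#
#   >>> urldefragauth('https://user:pass@example.com/path#section')
#   'https://example.com/path'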
def rewind_body(prepared_request):
"""Move file pointer back to its recorded starting position
so it can be read again on redirect.
"""
body_seek = getattr(prepared_request.body, 'seek', None)
if body_seek is not None and isinstance(prepared_request._body_position, integer_types):
try:
body_seek(prepared_request._body_position)
except (IOError, OSError):
raise UnrewindableBodyError("An error occurred when rewinding request "
"body for redirect.")
else:
raise UnrewindableBodyError("Unable to rewind request body for redirect.")
requests-2.22.0/requests/exceptions.py 0000644 0000765 0000024 00000006161 13467270450 021563 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.exceptions
~~~~~~~~~~~~~~~~~~~
This module contains the set of Requests' exceptions.
"""
from urllib3.exceptions import HTTPError as BaseHTTPError
class RequestException(IOError):
"""There was an ambiguous exception that occurred while handling your
request.
"""
def __init__(self, *args, **kwargs):
"""Initialize RequestException with `request` and `response` objects."""
response = kwargs.pop('response', None)
self.response = response
self.request = kwargs.pop('request', None)
if (response is not None and not self.request and
hasattr(response, 'request')):
self.request = self.response.request
super(RequestException, self).__init__(*args, **kwargs)
class HTTPError(RequestException):
"""An HTTP error occurred."""
class ConnectionError(RequestException):
"""A Connection error occurred."""
class ProxyError(ConnectionError):
"""A proxy error occurred."""
class SSLError(ConnectionError):
"""An SSL error occurred."""
class Timeout(RequestException):
"""The request timed out.
Catching this error will catch both
:exc:`~requests.exceptions.ConnectTimeout` and
:exc:`~requests.exceptions.ReadTimeout` errors.
"""
class ConnectTimeout(ConnectionError, Timeout):
"""The request timed out while trying to connect to the remote server.
Requests that produced this error are safe to retry.
"""
class ReadTimeout(Timeout):
"""The server did not send any data in the allotted amount of time."""
class URLRequired(RequestException):
"""A valid URL is required to make a request."""
class TooManyRedirects(RequestException):
"""Too many redirects."""
class MissingSchema(RequestException, ValueError):
"""The URL schema (e.g. http or https) is missing."""
class InvalidSchema(RequestException, ValueError):
"""See defaults.py for valid schemas."""
class InvalidURL(RequestException, ValueError):
"""The URL provided was somehow invalid."""
class InvalidHeader(RequestException, ValueError):
"""The header value provided was somehow invalid."""
class InvalidProxyURL(InvalidURL):
"""The proxy URL provided is invalid."""
class ChunkedEncodingError(RequestException):
"""The server declared chunked encoding but sent an invalid chunk."""
class ContentDecodingError(RequestException, BaseHTTPError):
"""Failed to decode response content"""
class StreamConsumedError(RequestException, TypeError):
"""The content for this response was already consumed"""
class RetryError(RequestException):
"""Custom retries logic failed"""
class UnrewindableBodyError(RequestException):
"""Requests encountered an error when trying to rewind a body"""
# Warnings
class RequestsWarning(Warning):
"""Base warning for Requests."""
pass
class FileModeWarning(RequestsWarning, DeprecationWarning):
"""A file was opened in text mode, but Requests determined its binary length."""
pass
class RequestsDependencyWarning(RequestsWarning):
"""An imported dependency doesn't match the expected version range."""
pass
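# The hierarchy above composes into broad handlers; a few checks that follow
# directly from the class definitions (editor's addition):
#
#   >>> issubclass(ConnectTimeout, Timeout) and issubclass(ReadTimeout, Timeout)
#   True
#   >>> issubclass(ConnectTimeout, ConnectionError)   # connect timeouts are retryable connection errors
#   True
#   >>> issubclass(RequestException, IOError)         # one except clause can catch everything
#   True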
requests-2.22.0/requests/structures.py 0000644 0000765 0000024 00000005645 13467270450 021633 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.structures
~~~~~~~~~~~~~~~~~~~
Data structures that power Requests.
"""
from .compat import OrderedDict, Mapping, MutableMapping
class CaseInsensitiveDict(MutableMapping):
"""A case-insensitive ``dict``-like object.
Implements all methods and operations of
``MutableMapping`` as well as dict's ``copy``. Also
provides ``lower_items``.
All keys are expected to be strings. The structure remembers the
case of the last key to be set, and ``iter(instance)``,
``keys()``, ``items()``, ``iterkeys()``, and ``iteritems()``
will contain case-sensitive keys. However, querying and contains
testing is case insensitive::
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
cid['aCCEPT'] == 'application/json' # True
list(cid) == ['Accept'] # True
For example, ``headers['content-encoding']`` will return the
value of a ``'Content-Encoding'`` response header, regardless
of how the header name was originally stored.
If the constructor, ``.update``, or equality comparison
operations are given keys that have equal ``.lower()``s, the
behavior is undefined.
"""
def __init__(self, data=None, **kwargs):
self._store = OrderedDict()
if data is None:
data = {}
self.update(data, **kwargs)
def __setitem__(self, key, value):
# Use the lowercased key for lookups, but store the actual
# key alongside the value.
self._store[key.lower()] = (key, value)
def __getitem__(self, key):
return self._store[key.lower()][1]
def __delitem__(self, key):
del self._store[key.lower()]
def __iter__(self):
return (casedkey for casedkey, mappedvalue in self._store.values())
def __len__(self):
return len(self._store)
def lower_items(self):
"""Like iteritems(), but with all lowercase keys."""
return (
(lowerkey, keyval[1])
for (lowerkey, keyval)
in self._store.items()
)
def __eq__(self, other):
if isinstance(other, Mapping):
other = CaseInsensitiveDict(other)
else:
return NotImplemented
# Compare insensitively
return dict(self.lower_items()) == dict(other.lower_items())
# Copy is required
def copy(self):
return CaseInsensitiveDict(self._store.values())
def __repr__(self):
return str(dict(self.items()))
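# Quick doctest of the behaviour documented above (editor's addition):
#
#   >>> cid = CaseInsensitiveDict({'Accept': 'application/json'})
#   >>> cid['aCCEPT']
#   'application/json'
#   >>> list(cid)                  # the original casing is preserved
#   ['Accept']
#   >>> cid == {'ACCEPT': 'application/json'}
#   True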
class LookupDict(dict):
"""Dictionary lookup object."""
def __init__(self, name=None):
self.name = name
super(LookupDict, self).__init__()
def __repr__(self):
return '<lookup \'%s\'>' % (self.name)
def __getitem__(self, key):
# We allow fall-through here, so values default to None
return self.__dict__.get(key, None)
def get(self, key, default=None):
return self.__dict__.get(key, default)
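# LookupDict is backed by instance attributes rather than dict storage, so
# unknown keys fall through to None instead of raising (illustrative):
#
#   >>> codes = LookupDict(name='status_codes')
#   >>> codes.ok = 200
#   >>> codes['ok']
#   200
#   >>> codes['nonexistent'] is None
#   True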
requests-2.22.0/requests/help.py 0000644 0000765 0000024 00000006673 13467270450 020342 0 ustar nateprewitt staff 0000000 0000000 """Module containing bug report helper(s)."""
from __future__ import print_function
import json
import platform
import sys
import ssl
import idna
import urllib3
import chardet
from . import __version__ as requests_version
try:
from urllib3.contrib import pyopenssl
except ImportError:
pyopenssl = None
OpenSSL = None
cryptography = None
else:
import OpenSSL
import cryptography
def _implementation():
"""Return a dict with the Python implementation and version.
Provide both the name and the version of the Python implementation
currently running. For example, on CPython 2.7.5 it will return
{'name': 'CPython', 'version': '2.7.5'}.
This function works best on CPython and PyPy: in particular, it probably
doesn't work for Jython or IronPython. Future investigation should be done
to work out the correct shape of the code for those platforms.
"""
implementation = platform.python_implementation()
if implementation == 'CPython':
implementation_version = platform.python_version()
elif implementation == 'PyPy':
implementation_version = '%s.%s.%s' % (sys.pypy_version_info.major,
sys.pypy_version_info.minor,
sys.pypy_version_info.micro)
if sys.pypy_version_info.releaselevel != 'final':
implementation_version = ''.join([
implementation_version, sys.pypy_version_info.releaselevel
])
elif implementation == 'Jython':
implementation_version = platform.python_version() # Complete Guess
elif implementation == 'IronPython':
implementation_version = platform.python_version() # Complete Guess
else:
implementation_version = 'Unknown'
return {'name': implementation, 'version': implementation_version}
def info():
"""Generate information for a bug report."""
try:
platform_info = {
'system': platform.system(),
'release': platform.release(),
}
except IOError:
platform_info = {
'system': 'Unknown',
'release': 'Unknown',
}
implementation_info = _implementation()
urllib3_info = {'version': urllib3.__version__}
chardet_info = {'version': chardet.__version__}
pyopenssl_info = {
'version': None,
'openssl_version': '',
}
if OpenSSL:
pyopenssl_info = {
'version': OpenSSL.__version__,
'openssl_version': '%x' % OpenSSL.SSL.OPENSSL_VERSION_NUMBER,
}
cryptography_info = {
'version': getattr(cryptography, '__version__', ''),
}
idna_info = {
'version': getattr(idna, '__version__', ''),
}
system_ssl = ssl.OPENSSL_VERSION_NUMBER
system_ssl_info = {
'version': '%x' % system_ssl if system_ssl is not None else ''
}
return {
'platform': platform_info,
'implementation': implementation_info,
'system_ssl': system_ssl_info,
'using_pyopenssl': pyopenssl is not None,
'pyOpenSSL': pyopenssl_info,
'urllib3': urllib3_info,
'chardet': chardet_info,
'cryptography': cryptography_info,
'idna': idna_info,
'requests': {
'version': requests_version,
},
}
def main():
"""Pretty-print the bug information as JSON."""
print(json.dumps(info(), sort_keys=True, indent=2))
if __name__ == '__main__':
main()
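# The module is intended to be run directly when filing a bug report; the
# JSON it prints depends entirely on the local environment:
#
#   $ python -m requests.help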
requests-2.22.0/requests/adapters.py 0000644 0000765 0000024 00000051540 13467270450 021206 0 ustar nateprewitt staff 0000000 0000000 # -*- coding: utf-8 -*-
"""
requests.adapters
~~~~~~~~~~~~~~~~~
This module contains the transport adapters that Requests uses to define
and maintain connections.
"""
import os.path
import socket
from urllib3.poolmanager import PoolManager, proxy_from_url
from urllib3.response import HTTPResponse
from urllib3.util import parse_url
from urllib3.util import Timeout as TimeoutSauce
from urllib3.util.retry import Retry
from urllib3.exceptions import ClosedPoolError
from urllib3.exceptions import ConnectTimeoutError
from urllib3.exceptions import HTTPError as _HTTPError
from urllib3.exceptions import MaxRetryError
from urllib3.exceptions import NewConnectionError
from urllib3.exceptions import ProxyError as _ProxyError
from urllib3.exceptions import ProtocolError
from urllib3.exceptions import ReadTimeoutError
from urllib3.exceptions import SSLError as _SSLError
from urllib3.exceptions import ResponseError
from urllib3.exceptions import LocationValueError
from .models import Response
from .compat import urlparse, basestring
from .utils import (DEFAULT_CA_BUNDLE_PATH, extract_zipped_paths,
get_encoding_from_headers, prepend_scheme_if_needed,
get_auth_from_url, urldefragauth, select_proxy)
from .structures import CaseInsensitiveDict
from .cookies import extract_cookies_to_jar
from .exceptions import (ConnectionError, ConnectTimeout, ReadTimeout, SSLError,
ProxyError, RetryError, InvalidSchema, InvalidProxyURL,
InvalidURL)
from .auth import _basic_auth_str
try:
from urllib3.contrib.socks import SOCKSProxyManager
except ImportError:
def SOCKSProxyManager(*args, **kwargs):
raise InvalidSchema("Missing dependencies for SOCKS support.")
DEFAULT_POOLBLOCK = False
DEFAULT_POOLSIZE = 10
DEFAULT_RETRIES = 0
DEFAULT_POOL_TIMEOUT = None
class BaseAdapter(object):
"""The Base Transport Adapter"""
def __init__(self):
super(BaseAdapter, self).__init__()
def send(self, request, stream=False, timeout=None, verify=True,
cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple
:param verify: (optional) Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
"""
raise NotImplementedError
def close(self):
"""Cleans up adapter specific items."""
raise NotImplementedError
class HTTPAdapter(BaseAdapter):
"""The built-in HTTP Adapter for urllib3.
Provides a general-case interface for Requests sessions to contact HTTP and
HTTPS urls by implementing the Transport Adapter interface. This class will
usually be created by the :class:`Session <Session>` class under the
covers.
:param pool_connections: The number of urllib3 connection pools to cache.
:param pool_maxsize: The maximum number of connections to save in the pool.
:param max_retries: The maximum number of retries each connection
should attempt. Note, this applies only to failed DNS lookups, socket
connections and connection timeouts, never to requests where data has
made it to the server. By default, Requests does not retry failed
connections. If you need granular control over the conditions under
which we retry a request, import urllib3's ``Retry`` class and pass
that instead.
:param pool_block: Whether the connection pool should block for connections.
Usage::
>>> import requests
>>> s = requests.Session()
>>> a = requests.adapters.HTTPAdapter(max_retries=3)
>>> s.mount('http://', a)
"""
__attrs__ = ['max_retries', 'config', '_pool_connections', '_pool_maxsize',
'_pool_block']
def __init__(self, pool_connections=DEFAULT_POOLSIZE,
pool_maxsize=DEFAULT_POOLSIZE, max_retries=DEFAULT_RETRIES,
pool_block=DEFAULT_POOLBLOCK):
if max_retries == DEFAULT_RETRIES:
self.max_retries = Retry(0, read=False)
else:
self.max_retries = Retry.from_int(max_retries)
self.config = {}
self.proxy_manager = {}
super(HTTPAdapter, self).__init__()
self._pool_connections = pool_connections
self._pool_maxsize = pool_maxsize
self._pool_block = pool_block
self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block)
def __getstate__(self):
return {attr: getattr(self, attr, None) for attr in self.__attrs__}
def __setstate__(self, state):
# Can't handle by adding 'proxy_manager' to self.__attrs__ because
# self.poolmanager uses a lambda function, which isn't pickleable.
self.proxy_manager = {}
self.config = {}
for attr, value in state.items():
setattr(self, attr, value)
self.init_poolmanager(self._pool_connections, self._pool_maxsize,
block=self._pool_block)
def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs):
"""Initializes a urllib3 PoolManager.
This method should not be called from user code, and is only
exposed for use when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param connections: The number of urllib3 connection pools to cache.
:param maxsize: The maximum number of connections to save in the pool.
:param block: Block when no free connections are available.
:param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager.
"""
# save these values for pickling
self._pool_connections = connections
self._pool_maxsize = maxsize
self._pool_block = block
self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
block=block, strict=True, **pool_kwargs)
def proxy_manager_for(self, proxy, **proxy_kwargs):
"""Return urllib3 ProxyManager for the given proxy.
This method should not be called from user code, and is only
exposed for use when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param proxy: The proxy to return a urllib3 ProxyManager for.
:param proxy_kwargs: Extra keyword arguments used to configure the Proxy Manager.
:returns: ProxyManager
:rtype: urllib3.ProxyManager
"""
if proxy in self.proxy_manager:
manager = self.proxy_manager[proxy]
elif proxy.lower().startswith('socks'):
username, password = get_auth_from_url(proxy)
manager = self.proxy_manager[proxy] = SOCKSProxyManager(
proxy,
username=username,
password=password,
num_pools=self._pool_connections,
maxsize=self._pool_maxsize,
block=self._pool_block,
**proxy_kwargs
)
else:
proxy_headers = self.proxy_headers(proxy)
manager = self.proxy_manager[proxy] = proxy_from_url(
proxy,
proxy_headers=proxy_headers,
num_pools=self._pool_connections,
maxsize=self._pool_maxsize,
block=self._pool_block,
**proxy_kwargs)
return manager
def cert_verify(self, conn, url, verify, cert):
"""Verify a SSL certificate. This method should not be called from user
code, and is only exposed for use when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param conn: The urllib3 connection object associated with the cert.
:param url: The requested URL.
:param verify: Either a boolean, in which case it controls whether we verify
the server's TLS certificate, or a string, in which case it must be a path
to a CA bundle to use
:param cert: The SSL certificate to verify.
"""
if url.lower().startswith('https') and verify:
cert_loc = None
# Allow self-specified cert location.
if verify is not True:
cert_loc = verify
if not cert_loc:
cert_loc = extract_zipped_paths(DEFAULT_CA_BUNDLE_PATH)
if not cert_loc or not os.path.exists(cert_loc):
raise IOError("Could not find a suitable TLS CA certificate bundle, "
"invalid path: {}".format(cert_loc))
conn.cert_reqs = 'CERT_REQUIRED'
if not os.path.isdir(cert_loc):
conn.ca_certs = cert_loc
else:
conn.ca_cert_dir = cert_loc
else:
conn.cert_reqs = 'CERT_NONE'
conn.ca_certs = None
conn.ca_cert_dir = None
if cert:
if not isinstance(cert, basestring):
conn.cert_file = cert[0]
conn.key_file = cert[1]
else:
conn.cert_file = cert
conn.key_file = None
if conn.cert_file and not os.path.exists(conn.cert_file):
raise IOError("Could not find the TLS certificate file, "
"invalid path: {}".format(conn.cert_file))
if conn.key_file and not os.path.exists(conn.key_file):
raise IOError("Could not find the TLS key file, "
"invalid path: {}".format(conn.key_file))
def build_response(self, req, resp):
"""Builds a :class:`Response ` object from a urllib3
response. This should not be called from user code, and is only exposed
for use when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`
:param req: The :class:`PreparedRequest <PreparedRequest>` used to generate the response.
:param resp: The urllib3 response object.
:rtype: requests.Response
"""
response = Response()
# Fallback to None if there's no status_code, for whatever reason.
response.status_code = getattr(resp, 'status', None)
# Make headers case-insensitive.
response.headers = CaseInsensitiveDict(getattr(resp, 'headers', {}))
# Set encoding.
response.encoding = get_encoding_from_headers(response.headers)
response.raw = resp
response.reason = response.raw.reason
if isinstance(req.url, bytes):
response.url = req.url.decode('utf-8')
else:
response.url = req.url
# Add new cookies from the server.
extract_cookies_to_jar(response.cookies, req, resp)
# Give the Response some context.
response.request = req
response.connection = self
return response
def get_connection(self, url, proxies=None):
"""Returns a urllib3 connection for the given URL. This should not be
called from user code, and is only exposed for use when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param url: The URL to connect to.
:param proxies: (optional) A Requests-style dictionary of proxies used on this request.
:rtype: urllib3.ConnectionPool
"""
proxy = select_proxy(url, proxies)
if proxy:
proxy = prepend_scheme_if_needed(proxy, 'http')
proxy_url = parse_url(proxy)
if not proxy_url.host:
raise InvalidProxyURL("Please check proxy URL. It is malformed"
" and could be missing the host.")
proxy_manager = self.proxy_manager_for(proxy)
conn = proxy_manager.connection_from_url(url)
else:
# Only scheme should be lower case
parsed = urlparse(url)
url = parsed.geturl()
conn = self.poolmanager.connection_from_url(url)
return conn
def close(self):
"""Disposes of any internal state.
Currently, this closes the PoolManager and any active ProxyManager,
which closes any pooled connections.
"""
self.poolmanager.clear()
for proxy in self.proxy_manager.values():
proxy.clear()
def request_url(self, request, proxies):
"""Obtain the url to use when making the final request.
If the message is being sent through a HTTP proxy, the full URL has to
be used. Otherwise, we should only use the path portion of the URL.
This should not be called from user code, and is only exposed for use
when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param proxies: A dictionary of schemes or schemes and hosts to proxy URLs.
:rtype: str
"""
proxy = select_proxy(request.url, proxies)
scheme = urlparse(request.url).scheme
is_proxied_http_request = (proxy and scheme != 'https')
using_socks_proxy = False
if proxy:
proxy_scheme = urlparse(proxy).scheme.lower()
using_socks_proxy = proxy_scheme.startswith('socks')
url = request.path_url
if is_proxied_http_request and not using_socks_proxy:
url = urldefragauth(request.url)
return url
def add_headers(self, request, **kwargs):
"""Add any headers needed by the connection. As of v2.0 this does
nothing by default, but is left for overriding by users that subclass
the :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
This should not be called from user code, and is only exposed for use
when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param request: The :class:`PreparedRequest <PreparedRequest>` to add headers to.
:param kwargs: The keyword arguments from the call to send().
"""
pass
def proxy_headers(self, proxy):
"""Returns a dictionary of the headers to add to any request sent
through a proxy. This works with urllib3 magic to ensure that they are
correctly sent to the proxy, rather than in a tunnelled request if
CONNECT is being used.
This should not be called from user code, and is only exposed for use
when subclassing the
:class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.
:param proxy: The url of the proxy being used for this request.
:rtype: dict
"""
headers = {}
username, password = get_auth_from_url(proxy)
if username:
headers['Proxy-Authorization'] = _basic_auth_str(username,
password)
return headers
def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
"""Sends PreparedRequest object. Returns Response object.
:param request: The :class:`PreparedRequest <PreparedRequest>` being sent.
:param stream: (optional) Whether to stream the request content.
:param timeout: (optional) How long to wait for the server to send
data before giving up, as a float, or a :ref:`(connect timeout,
read timeout) <timeouts>` tuple.
:type timeout: float or tuple or urllib3 Timeout object
:param verify: (optional) Either a boolean, in which case it controls whether
we verify the server's TLS certificate, or a string, in which case it
must be a path to a CA bundle to use
:param cert: (optional) Any user-provided SSL certificate to be trusted.
:param proxies: (optional) The proxies dictionary to apply to the request.
:rtype: requests.Response
"""
try:
conn = self.get_connection(request.url, proxies)
except LocationValueError as e:
raise InvalidURL(e, request=request)
self.cert_verify(conn, request.url, verify, cert)
url = self.request_url(request, proxies)
self.add_headers(request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
chunked = not (request.body is None or 'Content-Length' in request.headers)
if isinstance(timeout, tuple):
try:
connect, read = timeout
timeout = TimeoutSauce(connect=connect, read=read)
except ValueError as e:
# this may raise a string formatting error.
err = ("Invalid timeout {}. Pass a (connect, read) "
"timeout tuple, or a single float to set "
"both timeouts to the same value".format(timeout))
raise ValueError(err)
elif isinstance(timeout, TimeoutSauce):
pass
else:
timeout = TimeoutSauce(connect=timeout, read=timeout)
try:
if not chunked:
resp = conn.urlopen(
method=request.method,
url=url,
body=request.body,
headers=request.headers,
redirect=False,
assert_same_host=False,
preload_content=False,
decode_content=False,
retries=self.max_retries,
timeout=timeout
)
else:
# Send the request.
if hasattr(conn, 'proxy_pool'):
conn = conn.proxy_pool
low_conn = conn._get_conn(timeout=DEFAULT_POOL_TIMEOUT)
try:
low_conn.putrequest(request.method,
url,
skip_accept_encoding=True)
for header, value in request.headers.items():
low_conn.putheader(header, value)
low_conn.endheaders()
for i in request.body:
low_conn.send(hex(len(i))[2:].encode('utf-8'))
low_conn.send(b'\r\n')
low_conn.send(i)
low_conn.send(b'\r\n')
low_conn.send(b'0\r\n\r\n')
# Receive the response from the server
try:
# For Python 2.7, use buffering of HTTP responses
r = low_conn.getresponse(buffering=True)
except TypeError:
# For compatibility with Python 3.3+
r = low_conn.getresponse()
resp = HTTPResponse.from_httplib(
r,
pool=conn,
connection=low_conn,
preload_content=False,
decode_content=False
)
except:
# If we hit any problems here, clean up the connection.
# Then, reraise so that we can handle the actual exception.
low_conn.close()
raise
except (ProtocolError, socket.error) as err:
raise ConnectionError(err, request=request)
except MaxRetryError as e:
if isinstance(e.reason, ConnectTimeoutError):
# TODO: Remove this in 3.0.0: see #2811
if not isinstance(e.reason, NewConnectionError):
raise ConnectTimeout(e, request=request)
if isinstance(e.reason, ResponseError):
raise RetryError(e, request=request)
if isinstance(e.reason, _ProxyError):
raise ProxyError(e, request=request)
if isinstance(e.reason, _SSLError):
# This branch is for urllib3 v1.22 and later.
raise SSLError(e, request=request)
raise ConnectionError(e, request=request)
except ClosedPoolError as e:
raise ConnectionError(e, request=request)
except _ProxyError as e:
raise ProxyError(e)
except (_SSLError, _HTTPError) as e:
if isinstance(e, _SSLError):
# This branch is for urllib3 versions earlier than v1.22
raise SSLError(e, request=request)
elif isinstance(e, ReadTimeoutError):
raise ReadTimeout(e, request=request)
else:
raise
return self.build_response(request, resp)
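# Editor's sketch of the extension point described in the class docstring:
# mounting an adapter with a custom urllib3 Retry policy (the Retry
# parameters shown are illustrative, not recommendations):
#
#   >>> import requests
#   >>> from urllib3.util.retry import Retry
#   >>> retries = Retry(total=3, backoff_factor=0.5,
#   ...                 status_forcelist=[502, 503, 504])
#   >>> s = requests.Session()
#   >>> s.mount('https://', HTTPAdapter(max_retries=retries))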
requests-2.22.0/README.md 0000644 0000765 0000024 00000007572 13467271156 016447 0 ustar nateprewitt staff 0000000 0000000 Requests: HTTP for Humans™
==========================
[](https://pypi.org/project/requests/)
[](https://pypi.org/project/requests/)
[](https://pypi.org/project/requests/)
[](https://codecov.io/github/requests/requests)
[](https://github.com/requests/requests/graphs/contributors)
[](https://saythanks.io/to/kennethreitz)
Requests is the only *Non-GMO* HTTP library for Python, safe for human
consumption.

Behold, the power of Requests:
``` {.sourceCode .python}
>>> import requests
>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
>>> r.text
u'{"type":"User"...'
>>> r.json()
{u'disk_usage': 368627, u'private_gists': 484, ...}
```
See [the similar code, sans Requests](https://gist.github.com/973705).
[](http://docs.python-requests.org/)
Requests allows you to send *organic, grass-fed* HTTP/1.1 requests,
without the need for manual labor. There's no need to manually add query
strings to your URLs, or to form-encode your POST data. Keep-alive and
HTTP connection pooling are 100% automatic, thanks to
[urllib3](https://github.com/shazow/urllib3).
Besides, all the cool kids are doing it. Requests is one of the most
downloaded Python packages of all time, pulling in over 11,000,000
downloads every month. You don't want to be left out!
Feature Support
---------------
Requests is ready for today's web.
- International Domains and URLs
- Keep-Alive & Connection Pooling
- Sessions with Cookie Persistence
- Browser-style SSL Verification
- Basic/Digest Authentication
- Elegant Key/Value Cookies
- Automatic Decompression
- Automatic Content Decoding
- Unicode Response Bodies
- Multipart File Uploads
- HTTP(S) Proxy Support
- Connection Timeouts
- Streaming Downloads
- `.netrc` Support
- Chunked Requests
Requests officially supports Python 2.7 & 3.5–3.7, and runs great on
PyPy.
Installation
------------
To install Requests, simply use [pipenv](http://pipenv.org/) (or pip, of
course):
``` {.sourceCode .bash}
$ pipenv install requests
✨🍰✨
```
Satisfaction guaranteed.
Documentation
-------------
Fantastic documentation is available at
<http://docs.python-requests.org/>, for a limited time only.
How to Contribute
-----------------
1. Become more familiar with the project by reading our [Contributor's Guide](http://docs.python-requests.org/en/latest/dev/contributing/) and our [development philosophy](http://docs.python-requests.org/en/latest/dev/philosophy/).
2. Check for open issues or open a fresh issue to start a discussion
around a feature idea or a bug. There is a [Contributor
Friendly](https://github.com/requests/requests/issues?direction=desc&labels=Contributor+Friendly&page=1&sort=updated&state=open)
tag for issues that should be ideal for people who are not very
familiar with the codebase yet.
3. Fork [the repository](https://github.com/requests/requests) on
GitHub to start making your changes to the **master** branch (or
branch off of it).
4. Write a test which shows that the bug was fixed or that the feature
works as expected.
5. Send a pull request and bug the maintainer until it gets merged and
published. :) Make sure to add yourself to
[AUTHORS](https://github.com/requests/requests/blob/master/AUTHORS.rst).
requests-2.22.0/Pipfile 0000644 0000765 0000024 00000000672 13467271065 016474 0 ustar nateprewitt staff 0000000 0000000 [[source]]
url = "https://pypi.org/simple/"
verify_ssl = true
name = "pypi"
[dev-packages]
pytest = ">=2.8.0,<=3.10.1"
codecov = "*"
pytest-httpbin = ">=0.0.7,<1.0"
pytest-mock = "*"
pytest-cov = "*"
pytest-xdist = "<=1.25"
alabaster = "*"
readme-renderer = "*"
sphinx = "<=1.5.5"
pysocks = "*"
docutils = "*"
"flake8" = "*"
tox = "*"
detox = "*"
httpbin = ">=0.7.0"
[packages]
"e1839a8" = {path = ".", editable = true, extras = ["socks"]}
requests-2.22.0/setup.py 0000755 0000765 0000024 00000006360 13467271065 016676 0 ustar nateprewitt staff 0000000 0000000 #!/usr/bin/env python
# Learn more: https://github.com/kennethreitz/setup.py
import os
import re
import sys
from codecs import open
from setuptools import setup
from setuptools.command.test import test as TestCommand
here = os.path.abspath(os.path.dirname(__file__))
class PyTest(TestCommand):
user_options = [('pytest-args=', 'a', "Arguments to pass into py.test")]
def initialize_options(self):
TestCommand.initialize_options(self)
try:
from multiprocessing import cpu_count
self.pytest_args = ['-n', str(cpu_count()), '--boxed']
except (ImportError, NotImplementedError):
self.pytest_args = ['-n', '1', '--boxed']
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = []
self.test_suite = True
def run_tests(self):
import pytest
errno = pytest.main(self.pytest_args)
sys.exit(errno)
# 'setup.py publish' shortcut.
if sys.argv[-1] == 'publish':
os.system('python setup.py sdist bdist_wheel')
os.system('twine upload dist/*')
sys.exit()
packages = ['requests']
requires = [
'chardet>=3.0.2,<3.1.0',
'idna>=2.5,<2.9',
'urllib3>=1.21.1,<1.26,!=1.25.0,!=1.25.1',
'certifi>=2017.4.17'
]
test_requirements = [
'pytest-httpbin==0.0.7',
'pytest-cov',
'pytest-mock',
'pytest-xdist',
'PySocks>=1.5.6, !=1.5.7',
'pytest>=2.8.0'
]
about = {}
with open(os.path.join(here, 'requests', '__version__.py'), 'r', 'utf-8') as f:
exec(f.read(), about)
with open('README.md', 'r', 'utf-8') as f:
readme = f.read()
with open('HISTORY.md', 'r', 'utf-8') as f:
history = f.read()
setup(
name=about['__title__'],
version=about['__version__'],
description=about['__description__'],
long_description=readme,
long_description_content_type='text/markdown',
author=about['__author__'],
author_email=about['__author_email__'],
url=about['__url__'],
packages=packages,
package_data={'': ['LICENSE', 'NOTICE'], 'requests': ['*.pem']},
package_dir={'requests': 'requests'},
include_package_data=True,
python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*",
install_requires=requires,
license=about['__license__'],
zip_safe=False,
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Natural Language :: English',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy'
],
cmdclass={'test': PyTest},
tests_require=test_requirements,
extras_require={
'security': ['pyOpenSSL >= 0.14', 'cryptography>=1.3.4', 'idna>=2.0.0'],
'socks': ['PySocks>=1.5.6, !=1.5.7'],
'socks:sys_platform == "win32" and python_version == "2.7"': ['win_inet_pton'],
},
)
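# The extras_require entries above map to optional installs, e.g.:
#   $ pip install "requests[socks]"
#   $ pip install "requests[security]"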
requests-2.22.0/requests.egg-info/ 0000755 0000765 0000024 00000000000 13467272564 020526 5 ustar nateprewitt staff 0000000 0000000 requests-2.22.0/requests.egg-info/PKG-INFO 0000644 0000765 0000024 00000013400 13467272563 021620 0 ustar nateprewitt staff 0000000 0000000 Metadata-Version: 2.1
Name: requests
Version: 2.22.0
Summary: Python HTTP for Humans.
Home-page: http://python-requests.org
Author: Kenneth Reitz
Author-email: me@kennethreitz.org
License: Apache 2.0
Description: Requests: HTTP for Humans™
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Natural Language :: English
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*
Description-Content-Type: text/markdown
Provides-Extra: security
Provides-Extra: socks
requests-2.22.0/requests.egg-info/not-zip-safe 0000644 0000765 0000024 00000000001 13403504261 022732 0 ustar nateprewitt staff 0000000 0000000
requests-2.22.0/requests.egg-info/SOURCES.txt 0000644 0000765 0000024 00000001667 13467272563 022423 0 ustar nateprewitt staff 0000000 0000000 HISTORY.md
LICENSE
MANIFEST.in
Pipfile
Pipfile.lock
README.md
pytest.ini
setup.cfg
setup.py
requests/__init__.py
requests/__version__.py
requests/_internal_utils.py
requests/adapters.py
requests/api.py
requests/auth.py
requests/certs.py
requests/compat.py
requests/cookies.py
requests/exceptions.py
requests/help.py
requests/hooks.py
requests/models.py
requests/packages.py
requests/sessions.py
requests/status_codes.py
requests/structures.py
requests/utils.py
requests.egg-info/PKG-INFO
requests.egg-info/SOURCES.txt
requests.egg-info/dependency_links.txt
requests.egg-info/not-zip-safe
requests.egg-info/requires.txt
requests.egg-info/top_level.txt
tests/__init__.py
tests/compat.py
tests/conftest.py
tests/test_help.py
tests/test_hooks.py
tests/test_lowlevel.py
tests/test_packages.py
tests/test_requests.py
tests/test_structures.py
tests/test_testserver.py
tests/test_utils.py
tests/utils.py
tests/testserver/__init__.py
tests/testserver/server.py requests-2.22.0/requests.egg-info/requires.txt 0000644 0000765 0000024 00000000407 13467272563 023126 0 ustar nateprewitt staff 0000000 0000000 chardet<3.1.0,>=3.0.2
idna<2.9,>=2.5
urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
certifi>=2017.4.17
[security]
pyOpenSSL>=0.14
cryptography>=1.3.4
idna>=2.0.0
[socks]
PySocks!=1.5.7,>=1.5.6
[socks:sys_platform == "win32" and python_version == "2.7"]
win_inet_pton
requests-2.22.0/requests.egg-info/top_level.txt 0000644 0000765 0000024 00000000011 13467272563 023247 0 ustar nateprewitt staff 0000000 0000000 requests
requests-2.22.0/requests.egg-info/dependency_links.txt 0000644 0000765 0000024 00000000001 13467272563 024573 0 ustar nateprewitt staff 0000000 0000000
requests-2.22.0/setup.cfg 0000644 0000765 0000024 00000000146 13467272564 017003 0 ustar nateprewitt staff 0000000 0000000 [bdist_wheel]
universal = 1
[metadata]
license_file = LICENSE
[egg_info]
tag_build =
tag_date = 0
requests-2.22.0/Pipfile.lock 0000644 0000765 0000024 00000107612 13467271065 017425 0 ustar nateprewitt staff 0000000 0000000 {
"_meta": {
"hash": {
"sha256": "33206a3ef69c36187f33224ccf8e694a323ff4f7b2cc92c35fe8e71898b525a0"
},
"pipfile-spec": 6,
"requires": {},
"sources": [
{
"name": "pypi",
"url": "https://pypi.org/simple/",
"verify_ssl": true
}
]
},
"default": {
"certifi": {
"hashes": [
"sha256:59b7658e26ca9c7339e00f8f4636cdfe59d34fa37b9b04f6f9e9926b3cece1a5",
"sha256:b26104d6835d1f5e49452a26eb2ff87fe7090b89dfcaee5ea2212697e1e1d7ae"
],
"version": "==2019.3.9"
},
"chardet": {
"hashes": [
"sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
"sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
],
"version": "==3.0.4"
},
"e1839a8": {
"editable": true,
"extras": [
"socks"
],
"path": "."
},
"idna": {
"hashes": [
"sha256:c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407",
"sha256:ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c"
],
"version": "==2.8"
},
"pysocks": {
"hashes": [
"sha256:15d38914b60dbcb231d276f64882a20435c049450160e953ca7d313d1405f16f",
"sha256:32238918ac0f19e9fd870a8692ac9bd14f5e8752b3c62624cda5851424642210",
"sha256:d9031ea45fdfacbe59a99273e9f0448ddb33c1580fe3831c1b09557c5718977c"
],
"version": "==1.7.0"
},
"requests": {
"editable": true,
"extras": [
"socks"
],
"path": ".",
"version": "==2.22.0"
},
"urllib3": {
"hashes": [
"sha256:a53063d8b9210a7bdec15e7b272776b9d42b2fd6816401a0d43006ad2f9902db",
"sha256:d363e3607d8de0c220d31950a8f38b18d5ba7c0830facd71a1c6b1036b7ce06c"
],
"version": "==1.25.2"
},
"win-inet-pton": {
"hashes": [
"sha256:dd03d942c0d3e2b1cf8bab511844546dfa5f74cb61b241699fa379ad707dea4f",
"sha256:eaf0193cbe7152ac313598a0da7313fb479f769343c0c16c5308f64887dc885b"
],
"markers": "sys_platform == 'win32' and python_version == '2.7'",
"version": "==1.1.0"
}
},
"develop": {
"alabaster": {
"hashes": [
"sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359",
"sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"
],
"index": "pypi",
"version": "==0.7.12"
},
"apipkg": {
"hashes": [
"sha256:37228cda29411948b422fae072f57e31d3396d2ee1c9783775980ee9c9990af6",
"sha256:58587dd4dc3daefad0487f6d9ae32b4542b185e1c36db6993290e7c41ca2b47c"
],
"version": "==1.5"
},
"atomicwrites": {
"hashes": [
"sha256:03472c30eb2c5d1ba9227e4c2ca66ab8287fbfbbda3888aa93dc2e28fc6811b4",
"sha256:75a9445bac02d8d058d5e1fe689654ba5a6556a1dfd8ce6ec55a0ed79866cfa6"
],
"version": "==1.3.0"
},
"attrs": {
"hashes": [
"sha256:69c0dbf2ed392de1cb5ec704444b08a5ef81680a61cb899dc08127123af36a79",
"sha256:f0b870f674851ecbfbbbd364d6b5cbdff9dcedbc7f3f5e18a6891057f21fe399"
],
"version": "==19.1.0"
},
"babel": {
"hashes": [
"sha256:6778d85147d5d85345c14a26aada5e478ab04e39b078b0745ee6870c2b5cf669",
"sha256:8cba50f48c529ca3fa18cf81fa9403be176d374ac4d60738b839122dfaaa3d23"
],
"version": "==2.6.0"
},
"bleach": {
"hashes": [
"sha256:213336e49e102af26d9cde77dd2d0397afabc5a6bf2fed985dc35b5d1e285a16",
"sha256:3fdf7f77adcf649c9911387df51254b813185e32b2c6619f690b593a617e19fa"
],
"version": "==3.1.0"
},
"blinker": {
"hashes": [
"sha256:471aee25f3992bd325afa3772f1063dbdbbca947a041b8b89466dc00d606f8b6"
],
"version": "==1.4"
},
"brotlipy": {
"hashes": [
"sha256:07194f4768eb62a4f4ea76b6d0df6ade185e24ebd85877c351daa0a069f1111a",
"sha256:091b299bf36dd6ef7a06570dbc98c0f80a504a56c5b797f31934d2ad01ae7d17",
"sha256:09ec3e125d16749b31c74f021aba809541b3564e5359f8c265cbae442810b41a",
"sha256:0be698678a114addcf87a4b9496c552c68a2c99bf93cf8e08f5738b392e82057",
"sha256:0fa6088a9a87645d43d7e21e32b4a6bf8f7c3939015a50158c10972aa7f425b7",
"sha256:1ea4e578241504b58f2456a6c69952c88866c794648bdc74baee74839da61d44",
"sha256:2699945a0a992c04fc7dc7fa2f1d0575a2c8b4b769f2874a08e8eae46bef36ae",
"sha256:2a80319ae13ea8dd60ecdc4f5ccf6da3ae64787765923256b62c598c5bba4121",
"sha256:2e5c64522364a9ebcdf47c5744a5ddeb3f934742d31e61ebfbbc095460b47162",
"sha256:36def0b859beaf21910157b4c33eb3b06d8ce459c942102f16988cca6ea164df",
"sha256:3a3e56ced8b15fbbd363380344f70f3b438e0fd1fcf27b7526b6172ea950e867",
"sha256:3c1d5e2cf945a46975bdb11a19257fa057b67591eb232f393d260e7246d9e571",
"sha256:50ca336374131cfad20612f26cc43c637ac0bfd2be3361495e99270883b52962",
"sha256:5de6f7d010b7558f72f4b061a07395c5c3fd57f0285c5af7f126a677b976a868",
"sha256:637847560d671657f993313ecc6c6c6666a936b7a925779fd044065c7bc035b9",
"sha256:653faef61241bf8bf99d73ca7ec4baa63401ba7b2a2aa88958394869379d67c7",
"sha256:786afc8c9bd67de8d31f46e408a3386331e126829114e4db034f91eacb05396d",
"sha256:79aaf217072840f3e9a3b641cccc51f7fc23037496bd71e26211856b93f4b4cb",
"sha256:7e31f7adcc5851ca06134705fcf3478210da45d35ad75ec181e1ce9ce345bb38",
"sha256:8b39abc3256c978f575df5cd7893153277216474f303e26f0e43ba3d3969ef96",
"sha256:9448227b0df082e574c45c983fa5cd4bda7bfb11ea6b59def0940c1647be0c3c",
"sha256:96bc59ff9b5b5552843dc67999486a220e07a0522dddd3935da05dc194fa485c",
"sha256:a07647886e24e2fb2d68ca8bf3ada398eb56fd8eac46c733d4d95c64d17f743b",
"sha256:af65d2699cb9f13b26ec3ba09e75e80d31ff422c03675fcb36ee4dabe588fdc2",
"sha256:b4c98b0d2c9c7020a524ca5bbff42027db1004c6571f8bc7b747f2b843128e7a",
"sha256:c6cc0036b1304dd0073eec416cb2f6b9e37ac8296afd9e481cac3b1f07f9db25",
"sha256:d2c1c724c4ac375feb2110f1af98ecdc0e5a8ea79d068efb5891f621a5b235cb",
"sha256:dc6c5ee0df9732a44d08edab32f8a616b769cc5a4155a12d2d010d248eb3fb07",
"sha256:fd1d1c64214af5d90014d82cee5d8141b13d44c92ada7a0c0ec0679c6f15a471"
],
"version": "==0.7.0"
},
"certifi": {
"hashes": [
"sha256:59b7658e26ca9c7339e00f8f4636cdfe59d34fa37b9b04f6f9e9926b3cece1a5",
"sha256:b26104d6835d1f5e49452a26eb2ff87fe7090b89dfcaee5ea2212697e1e1d7ae"
],
"version": "==2019.3.9"
},
"cffi": {
"hashes": [
"sha256:041c81822e9f84b1d9c401182e174996f0bae9991f33725d059b771744290774",
"sha256:046ef9a22f5d3eed06334d01b1e836977eeef500d9b78e9ef693f9380ad0b83d",
"sha256:066bc4c7895c91812eff46f4b1c285220947d4aa46fa0a2651ff85f2afae9c90",
"sha256:066c7ff148ae33040c01058662d6752fd73fbc8e64787229ea8498c7d7f4041b",
"sha256:2444d0c61f03dcd26dbf7600cf64354376ee579acad77aef459e34efcb438c63",
"sha256:300832850b8f7967e278870c5d51e3819b9aad8f0a2c8dbe39ab11f119237f45",
"sha256:34c77afe85b6b9e967bd8154e3855e847b70ca42043db6ad17f26899a3df1b25",
"sha256:46de5fa00f7ac09f020729148ff632819649b3e05a007d286242c4882f7b1dc3",
"sha256:4aa8ee7ba27c472d429b980c51e714a24f47ca296d53f4d7868075b175866f4b",
"sha256:4d0004eb4351e35ed950c14c11e734182591465a33e960a4ab5e8d4f04d72647",
"sha256:4e3d3f31a1e202b0f5a35ba3bc4eb41e2fc2b11c1eff38b362de710bcffb5016",
"sha256:50bec6d35e6b1aaeb17f7c4e2b9374ebf95a8975d57863546fa83e8d31bdb8c4",
"sha256:55cad9a6df1e2a1d62063f79d0881a414a906a6962bc160ac968cc03ed3efcfb",
"sha256:5662ad4e4e84f1eaa8efce5da695c5d2e229c563f9d5ce5b0113f71321bcf753",
"sha256:59b4dc008f98fc6ee2bb4fd7fc786a8d70000d058c2bbe2698275bc53a8d3fa7",
"sha256:73e1ffefe05e4ccd7bcea61af76f36077b914f92b76f95ccf00b0c1b9186f3f9",
"sha256:a1f0fd46eba2d71ce1589f7e50a9e2ffaeb739fb2c11e8192aa2b45d5f6cc41f",
"sha256:a2e85dc204556657661051ff4bab75a84e968669765c8a2cd425918699c3d0e8",
"sha256:a5457d47dfff24882a21492e5815f891c0ca35fefae8aa742c6c263dac16ef1f",
"sha256:a8dccd61d52a8dae4a825cdbb7735da530179fea472903eb871a5513b5abbfdc",
"sha256:ae61af521ed676cf16ae94f30fe202781a38d7178b6b4ab622e4eec8cefaff42",
"sha256:b012a5edb48288f77a63dba0840c92d0504aa215612da4541b7b42d849bc83a3",
"sha256:d2c5cfa536227f57f97c92ac30c8109688ace8fa4ac086d19d0af47d134e2909",
"sha256:d42b5796e20aacc9d15e66befb7a345454eef794fdb0737d1af593447c6c8f45",
"sha256:dee54f5d30d775f525894d67b1495625dd9322945e7fee00731952e0368ff42d",
"sha256:e070535507bd6aa07124258171be2ee8dfc19119c28ca94c9dfb7efd23564512",
"sha256:e1ff2748c84d97b065cc95429814cdba39bcbd77c9c85c89344b317dc0d9cbff",
"sha256:ed851c75d1e0e043cbf5ca9a8e1b13c4c90f3fbd863dacb01c0808e2b5204201"
],
"version": "==1.12.3"
},
"chardet": {
"hashes": [
"sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
"sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
],
"version": "==3.0.4"
},
"click": {
"hashes": [
"sha256:2335065e6395b9e67ca716de5f7526736bfa6ceead690adf616d925bdc622b13",
"sha256:5b94b49521f6456670fdb30cd82a4eca9412788a93fa6dd6df72c94d5a8ff2d7"
],
"version": "==7.0"
},
"codecov": {
"hashes": [
"sha256:8ed8b7c6791010d359baed66f84f061bba5bd41174bf324c31311e8737602788",
"sha256:ae00d68e18d8a20e9c3288ba3875ae03db3a8e892115bf9b83ef20507732bed4"
],
"index": "pypi",
"version": "==2.0.15"
},
"configparser": {
"hashes": [
"sha256:8be81d89d6e7b4c0d4e44bcc525845f6da25821de80cb5e06e7e0238a2899e32",
"sha256:da60d0014fd8c55eb48c1c5354352e363e2d30bbf7057e5e171a468390184c75"
],
"markers": "python_version < '3.2'",
"version": "==3.7.4"
},
"contextlib2": {
"hashes": [
"sha256:509f9419ee91cdd00ba34443217d5ca51f5a364a404e1dce9e8979cea969ca48",
"sha256:f5260a6e679d2ff42ec91ec5252f4eeffdcf21053db9113bd0a8e4d953769c00"
],
"markers": "python_version < '3.2'",
"version": "==0.5.5"
},
"coverage": {
"hashes": [
"sha256:0c5fe441b9cfdab64719f24e9684502a59432df7570521563d7b1aff27ac755f",
"sha256:2b412abc4c7d6e019ce7c27cbc229783035eef6d5401695dccba80f481be4eb3",
"sha256:3684fabf6b87a369017756b551cef29e505cb155ddb892a7a29277b978da88b9",
"sha256:39e088da9b284f1bd17c750ac672103779f7954ce6125fd4382134ac8d152d74",
"sha256:3c205bc11cc4fcc57b761c2da73b9b72a59f8d5ca89979afb0c1c6f9e53c7390",
"sha256:42692db854d13c6c5e9541b6ffe0fe921fe16c9c446358d642ccae1462582d3b",
"sha256:465ce53a8c0f3a7950dfb836438442f833cf6663d407f37d8c52fe7b6e56d7e8",
"sha256:48020e343fc40f72a442c8a1334284620f81295256a6b6ca6d8aa1350c763bbe",
"sha256:4ec30ade438d1711562f3786bea33a9da6107414aed60a5daa974d50a8c2c351",
"sha256:5296fc86ab612ec12394565c500b412a43b328b3907c0d14358950d06fd83baf",
"sha256:5f61bed2f7d9b6a9ab935150a6b23d7f84b8055524e7be7715b6513f3328138e",
"sha256:6899797ac384b239ce1926f3cb86ffc19996f6fa3a1efbb23cb49e0c12d8c18c",
"sha256:68a43a9f9f83693ce0414d17e019daee7ab3f7113a70c79a3dd4c2f704e4d741",
"sha256:6b8033d47fe22506856fe450470ccb1d8ba1ffb8463494a15cfc96392a288c09",
"sha256:7ad7536066b28863e5835e8cfeaa794b7fe352d99a8cded9f43d1161be8e9fbd",
"sha256:7bacb89ccf4bedb30b277e96e4cc68cd1369ca6841bde7b005191b54d3dd1034",
"sha256:839dc7c36501254e14331bcb98b27002aa415e4af7ea039d9009409b9d2d5420",
"sha256:8e679d1bde5e2de4a909efb071f14b472a678b788904440779d2c449c0355b27",
"sha256:8f9a95b66969cdea53ec992ecea5406c5bd99c9221f539bca1e8406b200ae98c",
"sha256:932c03d2d565f75961ba1d3cec41ddde00e162c5b46d03f7423edcb807734eab",
"sha256:93f965415cc51604f571e491f280cff0f5be35895b4eb5e55b47ae90c02a497b",
"sha256:988529edadc49039d205e0aa6ce049c5ccda4acb2d6c3c5c550c17e8c02c05ba",
"sha256:998d7e73548fe395eeb294495a04d38942edb66d1fa61eb70418871bc621227e",
"sha256:9de60893fb447d1e797f6bf08fdf0dbcda0c1e34c1b06c92bd3a363c0ea8c609",
"sha256:9e80d45d0c7fcee54e22771db7f1b0b126fb4a6c0a2e5afa72f66827207ff2f2",
"sha256:a545a3dfe5082dc8e8c3eb7f8a2cf4f2870902ff1860bd99b6198cfd1f9d1f49",
"sha256:a5d8f29e5ec661143621a8f4de51adfb300d7a476224156a39a392254f70687b",
"sha256:a9abc8c480e103dc05d9b332c6cc9fb1586330356fc14f1aa9c0ca5745097d19",
"sha256:aca06bfba4759bbdb09bf52ebb15ae20268ee1f6747417837926fae990ebc41d",
"sha256:bb23b7a6fd666e551a3094ab896a57809e010059540ad20acbeec03a154224ce",
"sha256:bfd1d0ae7e292105f29d7deaa9d8f2916ed8553ab9d5f39ec65bcf5deadff3f9",
"sha256:c22ab9f96cbaff05c6a84e20ec856383d27eae09e511d3e6ac4479489195861d",
"sha256:c62ca0a38958f541a73cf86acdab020c2091631c137bd359c4f5bddde7b75fd4",
"sha256:c709d8bda72cf4cd348ccec2a4881f2c5848fd72903c185f363d361b2737f773",
"sha256:c968a6aa7e0b56ecbd28531ddf439c2ec103610d3e2bf3b75b813304f8cb7723",
"sha256:ca58eba39c68010d7e87a823f22a081b5290e3e3c64714aac3c91481d8b34d22",
"sha256:df785d8cb80539d0b55fd47183264b7002077859028dfe3070cf6359bf8b2d9c",
"sha256:f406628ca51e0ae90ae76ea8398677a921b36f0bd71aab2099dfed08abd0322f",
"sha256:f46087bbd95ebae244a0eda01a618aff11ec7a069b15a3ef8f6b520db523dcf1",
"sha256:f8019c5279eb32360ca03e9fac40a12667715546eed5c5eb59eb381f2f501260",
"sha256:fc5f4d209733750afd2714e9109816a29500718b32dd9a5db01c0cb3a019b96a"
],
"version": "==4.5.3"
},
"decorator": {
"hashes": [
"sha256:86156361c50488b84a3f148056ea716ca587df2f0de1d34750d35c21312725de",
"sha256:f069f3a01830ca754ba5258fde2278454a0b5b79e0d7f5c13b3b97e57d4acff6"
],
"version": "==4.4.0"
},
"detox": {
"hashes": [
"sha256:e650f95f0c7f5858578014b3b193e5dac76c89285c1bbe4bae598fd641bf9cd3",
"sha256:fcad009e2d20ce61176dc826a2c1562bd712fe53953ca603b455171cf819080f"
],
"index": "pypi",
"version": "==0.19"
},
"dnspython": {
"hashes": [
"sha256:36c5e8e38d4369a08b6780b7f27d790a292b2b08eea01607865bf0936c558e01",
"sha256:f69c21288a962f4da86e56c4905b49d11aba7938d3d740e80d9e366ee4f1632d"
],
"version": "==1.16.0"
},
"docutils": {
"hashes": [
"sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6",
"sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274",
"sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6"
],
"index": "pypi",
"version": "==0.14"
},
"entrypoints": {
"hashes": [
"sha256:589f874b313739ad35be6e0cd7efde2a4e9b6fea91edcc34e58ecbb8dbe56d19",
"sha256:c70dd71abe5a8c85e55e12c19bd91ccfeec11a6e99044204511f9ed547d48451"
],
"version": "==0.3"
},
"enum34": {
"hashes": [
"sha256:2d81cbbe0e73112bdfe6ef8576f2238f2ba27dd0d55752a776c41d38b7da2850",
"sha256:644837f692e5f550741432dd3f223bbb9852018674981b1664e5dc339387588a",
"sha256:6bd0f6ad48ec2aa117d3d141940d484deccda84d4fcd884f5c3d93c23ecd8c79",
"sha256:8ad8c4783bf61ded74527bffb48ed9b54166685e4230386a9ed9b1279e2df5b1"
],
"markers": "python_version < '3.4'",
"version": "==1.1.6"
},
"eventlet": {
"hashes": [
"sha256:c584163e006e613707e224552fafc63e4e0aa31d7de0ab18b481ac0b385254c8",
"sha256:d9d31a3c8dbcedbcce5859a919956d934685b17323fc80e1077cb344a2ffa68d"
],
"version": "==0.24.1"
},
"execnet": {
"hashes": [
"sha256:027ee5d961afa01e97b90d6ccc34b4ed976702bc58e7f092b3c513ea288cb6d2",
"sha256:752a3786f17416d491f833a29217dda3ea4a471fc5269c492eebcee8cc4772d3"
],
"version": "==1.6.0"
},
"filelock": {
"hashes": [
"sha256:b8d5ca5ca1c815e1574aee746650ea7301de63d87935b3463d26368b76e31633",
"sha256:d610c1bb404daf85976d7a82eb2ada120f04671007266b708606565dd03b5be6"
],
"version": "==3.0.10"
},
"flake8": {
"hashes": [
"sha256:859996073f341f2670741b51ec1e67a01da142831aa1fdc6242dbf88dffbe661",
"sha256:a796a115208f5c03b18f332f7c11729812c8c3ded6c46319c59b53efd3819da8"
],
"index": "pypi",
"version": "==3.7.7"
},
"flask": {
"hashes": [
"sha256:2271c0070dbcb5275fad4a82e29f23ab92682dc45f9dfbc22c02ba9b9322ce48",
"sha256:a080b744b7e345ccfcbc77954861cb05b3c63786e93f2b3875e0913d44b43f05"
],
"version": "==1.0.2"
},
"funcsigs": {
"hashes": [
"sha256:330cc27ccbf7f1e992e69fef78261dc7c6569012cf397db8d3de0234e6c937ca",
"sha256:a7bb0f2cf3a3fd1ab2732cb49eba4252c2af4240442415b4abce3b87022a8f50"
],
"markers": "python_version < '3.0'",
"version": "==1.0.2"
},
"functools32": {
"hashes": [
"sha256:89d824aa6c358c421a234d7f9ee0bd75933a67c29588ce50aaa3acdf4d403fa0",
"sha256:f6253dfbe0538ad2e387bd8fdfd9293c925d63553f5813c4e587745416501e6d"
],
"markers": "python_version < '3.2'",
"version": "==3.2.3.post2"
},
"greenlet": {
"hashes": [
"sha256:000546ad01e6389e98626c1367be58efa613fa82a1be98b0c6fc24b563acc6d0",
"sha256:0d48200bc50cbf498716712129eef819b1729339e34c3ae71656964dac907c28",
"sha256:23d12eacffa9d0f290c0fe0c4e81ba6d5f3a5b7ac3c30a5eaf0126bf4deda5c8",
"sha256:37c9ba82bd82eb6a23c2e5acc03055c0e45697253b2393c9a50cef76a3985304",
"sha256:51503524dd6f152ab4ad1fbd168fc6c30b5795e8c70be4410a64940b3abb55c0",
"sha256:8041e2de00e745c0e05a502d6e6db310db7faa7c979b3a5877123548a4c0b214",
"sha256:81fcd96a275209ef117e9ec91f75c731fa18dcfd9ffaa1c0adbdaa3616a86043",
"sha256:853da4f9563d982e4121fed8c92eea1a4594a2299037b3034c3c898cb8e933d6",
"sha256:8b4572c334593d449113f9dc8d19b93b7b271bdbe90ba7509eb178923327b625",
"sha256:9416443e219356e3c31f1f918a91badf2e37acf297e2fa13d24d1cc2380f8fbc",
"sha256:9854f612e1b59ec66804931df5add3b2d5ef0067748ea29dc60f0efdcda9a638",
"sha256:99a26afdb82ea83a265137a398f570402aa1f2b5dfb4ac3300c026931817b163",
"sha256:a19bf883b3384957e4a4a13e6bd1ae3d85ae87f4beb5957e35b0be287f12f4e4",
"sha256:a9f145660588187ff835c55a7d2ddf6abfc570c2651c276d3d4be8a2766db490",
"sha256:ac57fcdcfb0b73bb3203b58a14501abb7e5ff9ea5e2edfa06bb03035f0cff248",
"sha256:bcb530089ff24f6458a81ac3fa699e8c00194208a724b644ecc68422e1111939",
"sha256:beeabe25c3b704f7d56b573f7d2ff88fc99f0138e43480cecdfcaa3b87fe4f87",
"sha256:d634a7ea1fc3380ff96f9e44d8d22f38418c1c381d5fac680b272d7d90883720",
"sha256:d97b0661e1aead761f0ded3b769044bb00ed5d33e1ec865e891a8b128bf7c656"
],
"version": "==0.4.15"
},
"httpbin": {
"hashes": [
"sha256:7a04b5904c80b7aa04dd0a6af6520d68ce17a5db175e66a64b971f8e93d73a26",
"sha256:cbb37790c91575f4f15757f42ad41d9f729eb227d5edbe89e4ec175486db8dfa"
],
"index": "pypi",
"version": "==0.7.0"
},
"idna": {
"hashes": [
"sha256:c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407",
"sha256:ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c"
],
"version": "==2.8"
},
"imagesize": {
"hashes": [
"sha256:3f349de3eb99145973fefb7dbe38554414e5c30abd0c8e4b970a7c9d09f3a1d8",
"sha256:f3832918bc3c66617f92e35f5d70729187676313caa60c187eb0f28b8fe5e3b5"
],
"version": "==1.1.0"
},
"itsdangerous": {
"hashes": [
"sha256:321b033d07f2a4136d3ec762eac9f16a10ccd60f53c0c91af90217ace7ba1f19",
"sha256:b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749"
],
"version": "==1.1.0"
},
"jinja2": {
"hashes": [
"sha256:065c4f02ebe7f7cf559e49ee5a95fb800a9e4528727aec6f24402a5374c65013",
"sha256:14dd6caf1527abb21f08f86c784eac40853ba93edb79552aa1e4b8aef1b61c7b"
],
"version": "==2.10.1"
},
"markupsafe": {
"hashes": [
"sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473",
"sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161",
"sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235",
"sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5",
"sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff",
"sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b",
"sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1",
"sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e",
"sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183",
"sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66",
"sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1",
"sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1",
"sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e",
"sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b",
"sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905",
"sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735",
"sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d",
"sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e",
"sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d",
"sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c",
"sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21",
"sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2",
"sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5",
"sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b",
"sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6",
"sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f",
"sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f",
"sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"
],
"version": "==1.1.1"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
],
"version": "==0.6.1"
},
"mock": {
"hashes": [
"sha256:83657d894c90d5681d62155c82bda9c1187827525880eda8ff5df4ec813437c3",
"sha256:d157e52d4e5b938c550f39eb2fd15610db062441a9c2747d3dbfa9298211d0f8"
],
"markers": "python_version < '3.0'",
"version": "==3.0.5"
},
"monotonic": {
"hashes": [
"sha256:23953d55076df038541e648a53676fb24980f7a1be290cdda21300b3bc21dfb0",
"sha256:552a91f381532e33cbd07c6a2655a21908088962bb8fa7239ecbcc6ad1140cc7"
],
"version": "==1.5"
},
"more-itertools": {
"hashes": [
"sha256:38a936c0a6d98a38bcc2d03fdaaedaba9f412879461dd2ceff8d37564d6522e4",
"sha256:c0a5785b1109a6bd7fac76d6837fd1feca158e54e521ccd2ae8bfe393cc9d4fc",
"sha256:fe7a7cae1ccb57d33952113ff4fa1bc5f879963600ed74918f1236e212ee50b9"
],
"version": "==5.0.0"
},
"pathlib2": {
"hashes": [
"sha256:25199318e8cc3c25dcb45cbe084cc061051336d5a9ea2a12448d3d8cb748f742",
"sha256:5887121d7f7df3603bca2f710e7219f3eca0eb69e0b7cc6e0a022e155ac931a7"
],
"markers": "python_version < '3.6'",
"version": "==2.3.3"
},
"pluggy": {
"hashes": [
"sha256:25a1bc1d148c9a640211872b4ff859878d422bccb59c9965e04eed468a0aa180",
"sha256:964cedd2b27c492fbf0b7f58b3284a09cf7f99b0f715941fb24a439b3af1bd1a"
],
"version": "==0.11.0"
},
"py": {
"hashes": [
"sha256:64f65755aee5b381cea27766a3a147c3f15b9b6b9ac88676de66ba2ae36793fa",
"sha256:dc639b046a6e2cff5bbe40194ad65936d6ba360b52b3c3fe1d08a82dd50b5e53"
],
"version": "==1.8.0"
},
"pycodestyle": {
"hashes": [
"sha256:95a2219d12372f05704562a14ec30bc76b05a5b297b21a5dfe3f6fac3491ae56",
"sha256:e40a936c9a450ad81df37f549d676d127b1b66000a6c500caa2b085bc0ca976c"
],
"version": "==2.5.0"
},
"pycparser": {
"hashes": [
"sha256:a988718abfad80b6b157acce7bf130a30876d27603738ac39f140993246b25b3"
],
"version": "==2.19"
},
"pyflakes": {
"hashes": [
"sha256:17dbeb2e3f4d772725c777fabc446d5634d1038f234e77343108ce445ea69ce0",
"sha256:d976835886f8c5b31d47970ed689944a0262b5f3afa00a5a7b4dc81e5449f8a2"
],
"version": "==2.1.1"
},
"pygments": {
"hashes": [
"sha256:31cba6ffb739f099a85e243eff8cb717089fdd3c7300767d9fc34cb8e1b065f5",
"sha256:5ad302949b3c98dd73f8d9fcdc7e9cb592f120e32a18e23efd7f3dc51194472b"
],
"version": "==2.4.0"
},
"pysocks": {
"hashes": [
"sha256:15d38914b60dbcb231d276f64882a20435c049450160e953ca7d313d1405f16f",
"sha256:32238918ac0f19e9fd870a8692ac9bd14f5e8752b3c62624cda5851424642210",
"sha256:d9031ea45fdfacbe59a99273e9f0448ddb33c1580fe3831c1b09557c5718977c"
],
"version": "==1.7.0"
},
"pytest": {
"hashes": [
"sha256:3f193df1cfe1d1609d4c583838bea3d532b18d6160fd3f55c9447fdca30848ec",
"sha256:e246cf173c01169b9617fc07264b7b1316e78d7a650055235d6d897bc80d9660"
],
"index": "pypi",
"version": "==3.10.1"
},
"pytest-cov": {
"hashes": [
"sha256:2b097cde81a302e1047331b48cadacf23577e431b61e9c6f49a1170bbe3d3da6",
"sha256:e00ea4fdde970725482f1f35630d12f074e121a23801aabf2ae154ec6bdd343a"
],
"index": "pypi",
"version": "==2.7.1"
},
"pytest-forked": {
"hashes": [
"sha256:5fe33fbd07d7b1302c95310803a5e5726a4ff7f19d5a542b7ce57c76fed8135f",
"sha256:d352aaced2ebd54d42a65825722cb433004b4446ab5d2044851d9cc7a00c9e38"
],
"version": "==1.0.2"
},
"pytest-httpbin": {
"hashes": [
"sha256:8cd57e27418a7d7d205fcc9802eea246ed06170e3065abfa76c6d9b40553592c",
"sha256:d3919c5df0b644454129c0066a8ae62db40ac54bacb4cfd89d8cfa58615a4b42"
],
"index": "pypi",
"version": "==0.3.0"
},
"pytest-mock": {
"hashes": [
"sha256:43ce4e9dd5074993e7c021bb1c22cbb5363e612a2b5a76bc6d956775b10758b7",
"sha256:5bf5771b1db93beac965a7347dc81c675ec4090cb841e49d9d34637a25c30568"
],
"index": "pypi",
"version": "==1.10.4"
},
"pytest-xdist": {
"hashes": [
"sha256:96f893094c89fddeaff3f4783f4807f7aeb138be1a0d87a8805057b6af1201b5",
"sha256:aab1402f2b063df48bf044b042707610f8fcc4c49d0eb9c10e79e30b3f26074f"
],
"index": "pypi",
"version": "==1.25.0"
},
"pytz": {
"hashes": [
"sha256:303879e36b721603cc54604edcac9d20401bdbe31e1e4fdee5b9f98d5d31dfda",
"sha256:d747dd3d23d77ef44c6a3526e274af6efeb0a6f1afd5a69ba4d5be4098c8e141"
],
"version": "==2019.1"
},
"raven": {
"extras": [
"flask"
],
"hashes": [
"sha256:3fa6de6efa2493a7c827472e984ce9b020797d0da16f1db67197bcc23c8fae54",
"sha256:44a13f87670836e153951af9a3c80405d36b43097db869a36e92809673692ce4"
],
"version": "==6.10.0"
},
"readme-renderer": {
"hashes": [
"sha256:bb16f55b259f27f75f640acf5e00cf897845a8b3e4731b5c1a436e4b8529202f",
"sha256:c8532b79afc0375a85f10433eca157d6b50f7d6990f337fa498c96cd4bfc203d"
],
"index": "pypi",
"version": "==24.0"
},
"requests": {
"editable": true,
"extras": [
"socks"
],
"path": ".",
"version": "==2.22.0"
},
"scandir": {
"hashes": [
"sha256:2586c94e907d99617887daed6c1d102b5ca28f1085f90446554abf1faf73123e",
"sha256:2ae41f43797ca0c11591c0c35f2f5875fa99f8797cb1a1fd440497ec0ae4b022",
"sha256:2b8e3888b11abb2217a32af0766bc06b65cc4a928d8727828ee68af5a967fa6f",
"sha256:2c712840c2e2ee8dfaf36034080108d30060d759c7b73a01a52251cc8989f11f",
"sha256:4d4631f6062e658e9007ab3149a9b914f3548cb38bfb021c64f39a025ce578ae",
"sha256:67f15b6f83e6507fdc6fca22fedf6ef8b334b399ca27c6b568cbfaa82a364173",
"sha256:7d2d7a06a252764061a020407b997dd036f7bd6a175a5ba2b345f0a357f0b3f4",
"sha256:8c5922863e44ffc00c5c693190648daa6d15e7c1207ed02d6f46a8dcc2869d32",
"sha256:92c85ac42f41ffdc35b6da57ed991575bdbe69db895507af88b9f499b701c188",
"sha256:b24086f2375c4a094a6b51e78b4cf7ca16c721dcee2eddd7aa6494b42d6d519d",
"sha256:cb925555f43060a1745d0a321cca94bcea927c50114b623d73179189a4e100ac"
],
"markers": "python_version < '3.5'",
"version": "==1.10.0"
},
"six": {
"hashes": [
"sha256:3350809f0555b11f552448330d0b52d5f24c91a322ea4a15ef22629740f3761c",
"sha256:d16a0141ec1a18405cd4ce8b4613101da75da0e9a7aec5bdd4fa804d0e0eba73"
],
"version": "==1.12.0"
},
"snowballstemmer": {
"hashes": [
"sha256:919f26a68b2c17a7634da993d91339e288964f93c274f1343e3bbbe2096e1128",
"sha256:9f3bcd3c401c3e862ec0ebe6d2c069ebc012ce142cce209c098ccb5b09136e89"
],
"version": "==1.2.1"
},
"sphinx": {
"hashes": [
"sha256:11f271e7a9398385ed730e90f0bb41dc3815294bdcd395b46ed2d033bc2e7d87",
"sha256:4064ea6c56feeb268838cb8fbbee507d0c3d5d92fa63a7df935a916b52c9e2f5"
],
"index": "pypi",
"version": "==1.5.5"
},
"toml": {
"hashes": [
"sha256:229f81c57791a41d65e399fc06bf0848bab550a9dfd5ed66df18ce5f05e73d5c",
"sha256:235682dd292d5899d361a811df37e04a8828a5b1da3115886b73cf81ebc9100e",
"sha256:f1db651f9657708513243e61e6cc67d101a39bad662eaa9b5546f789338e07a3"
],
"version": "==0.10.0"
},
"tox": {
"hashes": [
"sha256:2a8d8a63660563e41e64e3b5b677e81ce1ffa5e2a93c2c565d3768c287445800",
"sha256:edfca7809925f49bdc110d0a2d9966bbf35a0c25637216d9586e7a5c5de17bfb"
],
"index": "pypi",
"version": "==3.6.1"
},
"typing": {
"hashes": [
"sha256:4027c5f6127a6267a435201981ba156de91ad0d1d98e9ddc2aa173453453492d",
"sha256:57dcf675a99b74d64dacf6fba08fb17cf7e3d5fdff53d4a30ea2a5e7e52543d4",
"sha256:a4c8473ce11a65999c8f59cb093e70686b6c84c98df58c1dae9b3b196089858a"
],
"markers": "python_version < '3.5'",
"version": "==3.6.6"
},
"urllib3": {
"hashes": [
"sha256:a53063d8b9210a7bdec15e7b272776b9d42b2fd6816401a0d43006ad2f9902db",
"sha256:d363e3607d8de0c220d31950a8f38b18d5ba7c0830facd71a1c6b1036b7ce06c"
],
"version": "==1.25.2"
},
"virtualenv": {
"hashes": [
"sha256:99acaf1e35c7ccf9763db9ba2accbca2f4254d61d1912c5ee364f9cc4a8942a0",
"sha256:fe51cdbf04e5d8152af06c075404745a7419de27495a83f0d72518ad50be3ce8"
],
"version": "==16.6.0"
},
"webencodings": {
"hashes": [
"sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78",
"sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"
],
"version": "==0.5.1"
},
"werkzeug": {
"hashes": [
"sha256:865856ebb55c4dcd0630cdd8f3331a1847a819dda7e8c750d3db6f2aa6c0209c",
"sha256:a0b915f0815982fb2a09161cb8f31708052d0951c3ba433ccc5e1aa276507ca6"
],
"version": "==0.15.4"
}
}
}