pax_global_header00006660000000000000000000000064144357776270014540gustar00rootroot0000000000000052 comment=265c50f65930c591f70103b79794124f46697377 gpustat-1.1.1/000077500000000000000000000000001443577762700132275ustar00rootroot00000000000000gpustat-1.1.1/.github/000077500000000000000000000000001443577762700145675ustar00rootroot00000000000000gpustat-1.1.1/.github/ISSUE_TEMPLATE/000077500000000000000000000000001443577762700167525ustar00rootroot00000000000000gpustat-1.1.1/.github/ISSUE_TEMPLATE/bug_report.md000066400000000000000000000015641443577762700214520ustar00rootroot00000000000000--- name: Bug report about: Create a bug report for gpustat title: '' labels: bug assignees: '' --- **Describe the bug** A clear and concise description of what the bug is. **Screenshots or Program Output** Please provide the output of `gpustat --debug` and `nvidia-smi`. Or attach screenshots if applicable. **Environment information:** - OS: [e.g. Ubuntu 18.04 LTS] - NVIDIA Driver version: [Try `nvidia-smi` or `gpustat`] - The name(s) of GPU card: [can be omitted if screenshot attached] - gpustat version: `gpustat --version` - pynvml version: Please provide the output of `pip list | grep nvidia-ml` or `sha1sum $(python -c 'import pynvml; print(pynvml.__file__)')` **Additional context** Add any other context about the problem here. gpustat-1.1.1/.github/ISSUE_TEMPLATE/feature_request.md000066400000000000000000000011221443577762700224730ustar00rootroot00000000000000--- name: Feature request about: Suggest an idea or new feature title: '' labels: '' assignees: '' --- **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is, or what you wish to have implemented. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here. gpustat-1.1.1/.github/ISSUE_TEMPLATE/other--question--discussion--etc--.md000066400000000000000000000002311443577762700255510ustar00rootroot00000000000000--- name: Other (Question, Discussion, etc.) 
about: 'General issues other than bug or feature: Blank template' title: '' labels: '' assignees: '' --- gpustat-1.1.1/.github/workflows/000077500000000000000000000000001443577762700166245ustar00rootroot00000000000000gpustat-1.1.1/.github/workflows/ci.yml000066400000000000000000000031151443577762700177420ustar00rootroot00000000000000name: "Run Tests" on: [push, pull_request, workflow_dispatch] defaults: run: shell: bash jobs: unit-tests: name: "Unit Tests" runs-on: ${{ matrix.os }} timeout-minutes: 10 strategy: matrix: include: - os: ubuntu-20.04 python-version: "3.6" - os: ubuntu-latest python-version: "3.7" - os: ubuntu-latest python-version: "3.8" - os: ubuntu-latest python-version: "3.9" - os: ubuntu-latest python-version: "3.10" - os: ubuntu-latest python-version: "3.11" - os: ubuntu-latest python-version: "3.11" pynvml-version: 11.495.46 - os: windows-latest python-version: "3.8" - os: windows-latest python-version: "3.9" steps: - uses: actions/checkout@v2 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v2 with: python-version: ${{ matrix.python-version }} - name: Upgrade pip run: | python -m pip install -U pip - name: Configure environments run: | python --version - name: Install dependencies run: | pip install -e ".[test]" if [ -n "${{ matrix.pynvml-version }}" ]; then pip install nvidia-ml-py==${{ matrix.pynvml-version }} fi python -m gpustat --version - name: Run tests run: | pytest --color=yes -v -s env: PYTHONIOENCODING: UTF-8 gpustat-1.1.1/.gitignore000066400000000000000000000014341443577762700152210ustar00rootroot00000000000000# managed by setuptools_scm gpustat/_version.py # Byte-compiled / optimized / DLL files __pycache__/ *.py[cod] *$py.class # C extensions *.so # Distribution / packaging .Python env/ venv/ build/ develop-eggs/ dist/ downloads/ eggs/ .eggs/ lib/ lib64/ parts/ sdist/ var/ *.egg-info/ .installed.cfg *.egg # PyInstaller # Usually these files are written by a python script from a template # before PyInstaller builds the exe, so as to inject date/other infos into it. *.manifest *.spec # Installer logs pip-log.txt pip-delete-this-directory.txt # Unit test / coverage reports htmlcov/ .tox/ .coverage .coverage.* .cache nosetests.xml coverage.xml *,cover .hypothesis/ .pytest_cache/ # Translations *.mo *.pot # Django stuff: *.log # Sphinx documentation docs/_build/ # PyBuilder target/ gpustat-1.1.1/CHANGELOG.md000066400000000000000000000111771443577762700150470ustar00rootroot00000000000000Changelog for `gpustat` ======================= ## [v1.1][milestone-1.1] (2023/4/5) [milestone-1.1]: https://github.com/wookayin/gpustat/milestone/5 Bugfixes for better stability and introduces a few minor features. Importantly, nvidia-ml-py version requirement is relaxed to be compatible with modern NVIDIA GPUs. Note: Python minimum version is raised to 3.6+ (compatible with Ubuntu 20.04 LTS). ### New Feature - Add a new flag `--no-processes` to hide process information (@doncamilom) (#133) - Add a new flag `--id` to query specific GPUs only (#125) - Add shell completion via shtab (@Freed-Wu) (#131) - Add error-safe APIs `gpustat.gpu_count()` and `gpustat.is_available()` (#145) ### Enhancements - Relax `nvidia-ml-py` version requirement, allowing versions greater than 11.495 (#143) - Handle Lost GPU and Unknown Error situations (#81, #125) - Print a summary of the error message when an error happens (#142) - Use setuptools-scm to auto-generate `__version__` string. - Add Python 3.11 to CI. 
### Bugfix

- Fix incorrect memory usage information on nvidia drivers 510.39 or higher (#141)
- Fix occasional crash when psutil throws error on reading cpu_percent (#144)
- Fix afterimage texts when the number of processes changes in the watch mode (#100)
- Make gpustat not crash even when there are no GPUs available

## [v1.0][milestone-1.0] (2022/9/4)

[milestone-1.0]: https://github.com/wookayin/gpustat/milestone/4

### Breaking Changes

- Retire Python 2 (#66). Add CI tests for python 3.8 and higher.
- Use official nvidia python bindings (#107).
  - Due to API incompatibility issues, the nvidia driver version should be
    **R450** or higher in order for process information to be correctly displayed.
  - NOTE: `nvidia-ml-py<=11.495.46` is required (`nvidia-ml-py3` shall not be used).
- Use of '--gpuname-width' will truncate longer GPU names (#47).

### New Feature and Enhancements

- Add windows support again, by switching to `blessed` (#78, @skjerns)
- Add '--show-codec (-e)' option: display encoder/decoder utilization (#79, @ChaoticMind)
- Add full process information (-f) (#65, @bethune-bryant)
- Add '--show-all (-a)' flag (#64)
- '--debug' will show more detailed stacktrace/exception information
- Use unicode symbols (#58, @arinbjornk)
- Include nvidia driver version into JSON output (#10)

### Bug Fixes

- Fix color/highlight issues on power usage
- Make color/highlight work correctly when TERM is not set
- Do not list the same GPU process more than once (#84)
- Fix a bug where querying a zombie process could throw errors (#95)
- Fix a bug where psutil may fail to get process info on Windows (#121, #123, @mattip)

### Etc.

- Internal improvements on code style and tests
- CI: Use Github Actions

## [v0.6.0][milestone-0.6] (2019/07/22)

[milestone-0.6]: https://github.com/wookayin/gpustat/issues?q=milestone%3A0.6

- [Feature] Add a flag for fan speed (`-F`, `--show-fan`) (#62, #63), contributed by @bethune-bryant
- [Enhancement] Align query datetime in the header with respect to `--gpuname-width` parameter.
- [Enhancement] Alias `gpustat --watch` to `-i`/`--interval` option.
- [Enhancement] Display NVIDIA driver version in the header (#53)
- [Bugfix] Minor fixes on debug mode
- [Etc] Travis: python 3.7

## [v0.5.0][milestone-0.5] (2018/09/09)

[milestone-0.5]: https://github.com/wookayin/gpustat/issues?q=milestone%3A0.5

- [Feature] Built-in watch mode (`gpustat -i`) (#7, #41).
  - Contributed by @drons and @Stonesjtu, Thanks!
- [Bug] Fix a problem where an extra character was showing (#32)
- [Bug] Fix a bug in json mode where process information is unavailable (#45)
- [Etc.] Refactoring of internal code structure: `gpustat` is now a package (#33)
- [Etc.] More unit tests and better use of code styles (flake8)

## v0.4.1

- Fix a bug that might happen when power_draw is not available (#16)

## v0.4.0

`gpustat` is no longer a zero-dependency script and now depends on some packages. Please install it using pip.

- Use `nvidia-ml-py` bindings and `psutil` to replace command-line calls of `nvidia-smi` and `ps` (#20, Thanks to @Stonesjtu).
- The behavior when piping has changed: output is no longer colored by default; use `--color` explicitly (e.g. 
`watch --color -n1.0 gpustat --color`) - Fix a bug in handling stale-state or zombie process (#16) - Include non-CUDA graphics applications in the process list (#18, Thanks to @kapsh) - Support power usage (#13, #28, Thanks to @cjw85) - Support `--debug` option ## v0.3.1 - Experimental JSON output feature (#10) - Add some properties and dict-style access for `GPUStat` class - Fix Python3 compatibility ## v0.2.0 - Add `--gpuname-width` option - Display long usernames correctly - Support older NVIDIA cards (#6) gpustat-1.1.1/LICENSE000066400000000000000000000020621443577762700142340ustar00rootroot00000000000000The MIT License Copyright (c) 2016 Jongwook Choi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. gpustat-1.1.1/MANIFEST.in000066400000000000000000000000511443577762700147610ustar00rootroot00000000000000include README.md include screenshot.png gpustat-1.1.1/README.md000066400000000000000000000112251443577762700145070ustar00rootroot00000000000000`gpustat` ========= [![pypi](https://img.shields.io/pypi/v/gpustat.svg?maxAge=86400)][pypi_gpustat] [![Build Status](https://travis-ci.org/wookayin/gpustat.svg?branch=master)](https://travis-ci.org/wookayin/gpustat) [![license](https://img.shields.io/github/license/wookayin/gpustat.svg?maxAge=86400)](LICENSE) Just *less* than nvidia-smi? ![Screenshot: gpustat -cp](https://github.com/wookayin/gpustat/blob/master/screenshot.png) NOTE: This works with NVIDIA Graphics Devices only, no AMD support as of now. Contributions are welcome! Self-Promotion: A web interface of `gpustat` is available (in alpha)! Check out [gpustat-web][gpustat-web]. [gpustat-web]: https://github.com/wookayin/gpustat-web Quick Installation ------------------ Install from [PyPI][pypi_gpustat]: ``` pip install gpustat ``` If you don't have root (sudo) privilege, please try installing `gpustat` on user namespace: `pip install --user gpustat`. To install the latest version (master branch) via pip: ``` pip install git+https://github.com/wookayin/gpustat.git@master ``` ### NVIDIA Driver Requirements `gpustat` uses [NVIDIA's official python bindings for NVML library (pynvml)][pypi_pynvml]. As of now `gpustat` requires `nvidia-ml-py >= 11.450.129`, which is compatible with NVIDIA driver versions R450.00 or higher. Please upgrade the NVIDIA driver if `gpustat` fails to display process information. If your NVIDIA driver is too old, you can use older `gpustat` versions (`pip install gpustat<1.0`). See [#107][gh-issue-107] for more details. 
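You can also verify from Python whether your driver and NVML setup is usable. A minimal sketch using the error-safe helpers `gpustat.is_available()` and `gpustat.gpu_count()` (added in gpustat 1.1); both return a fallback value (`False` / `0`) instead of raising when NVML or the driver is missing:

```
# Sketch: check NVML/driver availability without raising on NVML errors.
import gpustat

if gpustat.is_available():
    stats = gpustat.new_query()          # returns a GPUStatCollection
    print("NVIDIA driver:", stats.driver_version)
    print("GPUs detected:", gpustat.gpu_count())
else:
    print("No usable NVIDIA driver or NVML library was found.")
```
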
### Python requirements - gpustat<1.0: Compatible with python 2.7 and >=3.4 - gpustat 1.0: [Python >= 3.4][gh-issue-66] - gpustat 1.1: Python >= 3.6 Usage ----- `$ gpustat` Options (Please see `gpustat --help` for more details): * `--color` : Force colored output (even when stdout is not a tty) * `--no-color` : Suppress colored output * `-u`, `--show-user` : Display username of the process owner * `-c`, `--show-cmd` : Display the process name * `-f`, `--show-full-cmd` : Display full command and cpu stats of running process * `-p`, `--show-pid` : Display PID of the process * `-F`, `--show-fan` : Display GPU fan speed * `-e`, `--show-codec` : Display encoder and/or decoder utilization * `-P`, `--show-power` : Display GPU power usage and/or limit (`draw` or `draw,limit`) * `-a`, `--show-all` : Display all gpu properties above * `--id` : Target and query specific GPUs only with the specified indices (e.g. `--id 0,1,2`) * `--no-processes` : Do not display process information (user, memory) ([#133][gh-issue-133]) * `--watch`, `-i`, `--interval` : Run in watch mode (equivalent to `watch gpustat`) if given. Denotes interval between updates. * `--json` : JSON Output ([#10][gh-issue-10]) * `--print-completion (bash|zsh|tcsh)` : Print a shell completion script. See [#131][gh-issue-131] for usage. ### Tips - Try `gpustat --debug` if something goes wrong. - To periodically watch, try `gpustat --watch` or `gpustat -i` ([#41][gh-issue-41]). - For older versions, one may use `watch --color -n1.0 gpustat --color`. - Running `nvidia-smi daemon` (root privilege required) will make querying GPUs much **faster** and use less CPU ([#54][gh-issue-54]). - The GPU ID (index) shown by `gpustat` (and `nvidia-smi`) is PCI BUS ID, while CUDA uses a different ordering (assigns the fastest GPU with the lowest ID) by default. Therefore, in order to ensure CUDA and `gpustat` use **same GPU index**, configure the `CUDA_DEVICE_ORDER` environment variable to `PCI_BUS_ID` (before setting `CUDA_VISIBLE_DEVICES` for your CUDA program): `export CUDA_DEVICE_ORDER=PCI_BUS_ID`. [pypi_gpustat]: https://pypi.org/project/gpustat/ [pypi_pynvml]: https://pypi.org/project/nvidia-ml-py/#history [gh-issue-10]: https://github.com/wookayin/gpustat/issues/10 [gh-issue-41]: https://github.com/wookayin/gpustat/issues/41 [gh-issue-54]: https://github.com/wookayin/gpustat/issues/54 [gh-issue-66]: https://github.com/wookayin/gpustat/issues/66 [gh-issue-107]: https://github.com/wookayin/gpustat/issues/107 [gh-issue-131]: https://github.com/wookayin/gpustat/issues/131 [gh-issue-133]: https://github.com/wookayin/gpustat/issues/133 Default display --------------- ``` [0] GeForce GTX Titan X | 77°C, 96 % | 11848 / 12287 MB | python/52046(11821M) ``` - `[0]`: GPU index (starts from 0) as PCI_BUS_ID - `GeForce GTX Titan X`: GPU name - `77°C`: GPU Temperature (in Celsius) - `96 %`: GPU Utilization - `11848 / 12287 MB`: GPU Memory Usage (Used / Total) - `python/...`: Running processes on GPU, owner/cmdline/PID (and their GPU memory usage) Changelog --------- See [CHANGELOG.md](CHANGELOG.md) License ------- [MIT License](LICENSE) gpustat-1.1.1/gpustat/000077500000000000000000000000001443577762700147165ustar00rootroot00000000000000gpustat-1.1.1/gpustat/__init__.py000066400000000000000000000013371443577762700170330ustar00rootroot00000000000000""" The gpustat module. 
""" # isort: skip_file try: from ._version import version as __version__ from ._version import version_tuple as __version_tuple__ except (ImportError, AttributeError) as ex: raise ImportError( "Unable to find `gpustat.__version__` string. " "Please try reinstalling gpustat; or if you are on a development " "version, then run `pip install -e .` and try again." ) from ex from .core import GPUStat, GPUStatCollection from .core import new_query, gpu_count, is_available from .cli import print_gpustat, main __all__ = ( '__version__', 'GPUStat', 'GPUStatCollection', 'new_query', 'gpu_count', 'is_available', 'print_gpustat', 'main', ) gpustat-1.1.1/gpustat/__main__.py000066400000000000000000000001761443577762700170140ustar00rootroot00000000000000""" gpustat.__main__ module (to support python -m gpustat) """ from .cli import main if __name__ == '__main__': main() gpustat-1.1.1/gpustat/_shtab.py000066400000000000000000000002401443577762700165240ustar00rootroot00000000000000FILE = None DIRECTORY = DIR = None def add_argument_to(parser, *args, **kwargs): from argparse import Action Action.complete = None return parser gpustat-1.1.1/gpustat/cli.py000066400000000000000000000172721443577762700160500ustar00rootroot00000000000000import os import sys import time from contextlib import suppress from blessed import Terminal from gpustat import __version__ from gpustat.core import GPUStatCollection SHTAB_PREAMBLE = { 'zsh': '''\ # % gpustat -i # float # % gpustat -i - # option # -a Display all gpu properties above # ... _complete_for_one_or_zero() { if [[ ${words[CURRENT]} == -* ]]; then # override the original options _shtab_gpustat_options=(${words[CURRENT - 1]} $_shtab_gpustat_options) _arguments -C $_shtab_gpustat_options else eval "${@[-1]}" fi } ''' } def zsh_choices_to_complete(choices, tag='', description=''): '''Change choices to complete for zsh. https://github.com/zsh-users/zsh/blob/master/Etc/completion-style-guide#L224 ''' complete = 'compadd - ' + ' '.join(filter(len, choices)) if description == '': description = tag if tag != '': complete = '_wanted ' + tag + ' expl ' + description + ' ' + complete return complete def get_complete_for_one_or_zero(input): '''Get shell complete for nargs='?'. Now only support zsh.''' output = {} for sh, complete in input.items(): if sh == 'zsh': output[sh] = "_complete_for_one_or_zero '" + complete + "'" return output def print_gpustat(*, id=None, json=False, debug=False, **kwargs): '''Display the GPU query results into standard output.''' try: gpu_stats = GPUStatCollection.new_query(debug=debug, id=id) except Exception as e: sys.stderr.write('Error on querying NVIDIA devices. 
' 'Use --debug flag to see more details.\n') term = Terminal(stream=sys.stderr) sys.stderr.write(term.red(str(e)) + '\n') if debug: sys.stderr.write('\n') try: import traceback traceback.print_exc(file=sys.stderr) except Exception: # NVMLError can't be processed by traceback: # https://bugs.python.org/issue28603 # as a workaround, simply re-throw the exception raise e sys.stderr.flush() sys.exit(1) if json: gpu_stats.print_json(sys.stdout) else: gpu_stats.print_formatted(sys.stdout, **kwargs) def loop_gpustat(interval=1.0, **kwargs): term = Terminal() with term.fullscreen(): while 1: try: query_start = time.time() # Move cursor to (0, 0) but do not restore original cursor loc print(term.move(0, 0), end='') print_gpustat(eol_char=term.clear_eol + os.linesep, **kwargs) print(term.clear_eos, end='') query_duration = time.time() - query_start sleep_duration = interval - query_duration if sleep_duration > 0: time.sleep(sleep_duration) except KeyboardInterrupt: return 0 def main(*argv): if not argv: argv = list(sys.argv) # attach SIGPIPE handler to properly handle broken pipe try: # sigpipe not available under windows. just ignore in this case import signal signal.signal(signal.SIGPIPE, signal.SIG_DFL) except Exception as e: pass # arguments to gpustat import argparse try: import shtab except ImportError: from . import _shtab as shtab parser = argparse.ArgumentParser('gpustat') shtab.add_argument_to(parser, preamble=SHTAB_PREAMBLE) def nonnegative_int(value): value = int(value) if value < 0: raise argparse.ArgumentTypeError( "Only non-negative integers are allowed.") return value parser_color = parser.add_mutually_exclusive_group() parser_color.add_argument('--force-color', '--color', action='store_true', help='Force to output with colors') parser_color.add_argument('--no-color', action='store_true', help='Suppress colored output') parser.add_argument('--id', help='Target a specific GPU (index).') parser.add_argument('-a', '--show-all', action='store_true', help='Display all gpu properties above') parser.add_argument('-c', '--show-cmd', action='store_true', help='Display cmd name of running process') parser.add_argument( '-f', '--show-full-cmd', action='store_true', default=False, help='Display full command and cpu stats of running process' ) parser.add_argument('-u', '--show-user', action='store_true', help='Display username of running process') parser.add_argument('-p', '--show-pid', action='store_true', help='Display PID of running process') parser.add_argument('-F', '--show-fan-speed', '--show-fan', action='store_true', help='Display GPU fan speed') codec_choices = ['', 'enc', 'dec', 'enc,dec'] parser.add_argument( '-e', '--show-codec', nargs='?', const='enc,dec', default='', choices=codec_choices, help='Show encoder/decoder utilization' ).complete = get_complete_for_one_or_zero( # type: ignore {'zsh': zsh_choices_to_complete(codec_choices, 'codec')} ) power_choices = ['', 'draw', 'limit', 'draw,limit', 'limit,draw'] parser.add_argument( '-P', '--show-power', nargs='?', const='draw,limit', choices=power_choices, help='Show GPU power usage or draw (and/or limit)' ).complete = get_complete_for_one_or_zero( # type: ignore {'zsh': zsh_choices_to_complete(power_choices, 'power')} ) parser.add_argument('--json', action='store_true', default=False, help='Print all the information in JSON format') parser.add_argument( '-i', '--interval', '--watch', nargs='?', type=float, default=0, help='Use watch mode if given; seconds to wait between updates' ).complete = get_complete_for_one_or_zero({'zsh': 
'_numbers float'}) # type: ignore parser.add_argument( '--no-header', dest='show_header', action='store_false', default=True, help='Suppress header message' ) parser.add_argument( '--gpuname-width', type=nonnegative_int, default=None, help='The width at which GPU names will be displayed.' ) parser.add_argument( '--debug', action='store_true', default=False, help='Allow to print additional informations for debugging.' ) parser.add_argument( '--no-processes', dest='no_processes', action='store_true', help='Do not display running process information (memory, user, etc.)' ) parser.add_argument('-v', '--version', action='version', version=('gpustat %s' % __version__)) args = parser.parse_args(argv[1:]) # TypeError: GPUStatCollection.print_formatted() got an unexpected keyword argument 'print_completion' with suppress(AttributeError): del args.print_completion # type: ignore if args.show_all: args.show_cmd = True args.show_user = True args.show_pid = True args.show_fan_speed = True args.show_codec = 'enc,dec' args.show_power = 'draw,limit' del args.show_all if args.interval is None: # with default value args.interval = 1.0 if args.interval > 0: args.interval = max(0.1, args.interval) if args.json: sys.stderr.write("Error: --json and --interval/-i " "can't be used together.\n") sys.exit(1) loop_gpustat(**vars(args)) else: del args.interval print_gpustat(**vars(args)) if __name__ == '__main__': main(*sys.argv) gpustat-1.1.1/gpustat/core.py000066400000000000000000000641421443577762700162270ustar00rootroot00000000000000#!/usr/bin/env python """ Implementation of gpustat @author Jongwook Choi @url https://github.com/wookayin/gpustat """ from typing import Sequence import json import locale import os.path import platform import sys import time from datetime import datetime from io import StringIO import psutil from blessed import Terminal import gpustat.util as util from gpustat.nvml import pynvml as N NOT_SUPPORTED = 'Not Supported' MB = 1024 * 1024 DEFAULT_GPUNAME_WIDTH = 16 IS_WINDOWS = 'windows' in platform.platform().lower() class GPUStat(object): def __init__(self, entry): if not isinstance(entry, dict): raise TypeError( 'entry should be a dict, {} given'.format(type(entry)) ) self.entry = entry def __repr__(self): return self.print_to(StringIO()).getvalue() def keys(self): return self.entry.keys() def __getitem__(self, key): return self.entry[key] @property def available(self): return True @property def index(self): """ Returns the index of GPU (as in nvidia-smi). """ return self.entry['index'] @property def uuid(self): """ Returns the uuid returned by nvidia-smi, e.g. GPU-12345678-abcd-abcd-uuid-123456abcdef """ return self.entry['uuid'] @property def name(self): """ Returns the name of GPU card (e.g. Geforce Titan X) """ return self.entry['name'] @property def memory_total(self): """ Returns the total memory (in MB) as an integer. """ return int(self.entry['memory.total']) @property def memory_used(self): """ Returns the occupied memory (in MB) as an integer. """ return int(self.entry['memory.used']) @property def memory_free(self): """ Returns the free (available) memory (in MB) as an integer. """ v = self.memory_total - self.memory_used return max(v, 0) @property def memory_available(self): """ Returns the available memory (in MB) as an integer. Alias of memory_free. """ return self.memory_free @property def temperature(self): """ Returns the temperature (in celcius) of GPU as an integer, or None if the information is not available. 
""" v = self.entry['temperature.gpu'] return int(v) if v is not None else None @property def fan_speed(self): """ Returns the fan speed percentage (0-100) of maximum intended speed as an integer, or None if the information is not available. """ v = self.entry['fan.speed'] return int(v) if v is not None else None @property def utilization(self): """ Returns the GPU utilization (in percentile), or None if the information is not available. """ v = self.entry['utilization.gpu'] return int(v) if v is not None else None @property def utilization_enc(self): """ Returns the GPU encoder utilization (in percentile), or None if the information is not available. """ v = self.entry['utilization.enc'] return int(v) if v is not None else None @property def utilization_dec(self): """ Returns the GPU decoder utilization (in percentile), or None if the information is not available. """ v = self.entry['utilization.dec'] return int(v) if v is not None else None @property def power_draw(self): """ Returns the GPU power usage in Watts, or None if the information is not available. """ v = self.entry['power.draw'] return int(v) if v is not None else None @property def power_limit(self): """ Returns the (enforced) GPU power limit in Watts, or None if the information is not available. """ v = self.entry['enforced.power.limit'] return int(v) if v is not None else None @property def processes(self): """ Get the list of running processes on the GPU. """ return self.entry['processes'] def print_to(self, fp, *, with_colors=True, # deprecated arg show_cmd=False, show_full_cmd=False, no_processes=False, show_user=False, show_pid=False, show_fan_speed=None, show_codec="", show_power=None, gpuname_width=None, eol_char=os.linesep, term=None, ): if term is None: term = Terminal(stream=sys.stdout) # color settings colors = {} def _conditional(cond_fn, true_value, false_value, error_value=term.bold_black): try: return cond_fn() and true_value or false_value except Exception: return error_value _ENC_THRESHOLD = 50 colors['C0'] = term.normal colors['C1'] = term.cyan colors['CBold'] = term.bold colors['CName'] = _conditional(lambda: self.available, term.blue, term.red) colors['CTemp'] = _conditional(lambda: self.temperature < 50, term.red, term.bold_red) colors['FSpeed'] = _conditional(lambda: self.fan_speed < 30, term.cyan, term.bold_cyan) colors['CMemU'] = _conditional(lambda: self.available, term.bold_yellow, term.bold_black) colors['CMemT'] = _conditional(lambda: self.available, term.yellow, term.bold_black) colors['CMemP'] = term.yellow colors['CCPUMemU'] = term.yellow colors['CUser'] = term.bold_black # gray colors['CUtil'] = _conditional(lambda: self.utilization < 30, term.green, term.bold_green) colors['CUtilEnc'] = _conditional( lambda: self.utilization_enc < _ENC_THRESHOLD, term.green, term.bold_green) colors['CUtilDec'] = _conditional( lambda: self.utilization_dec < _ENC_THRESHOLD, term.green, term.bold_green) colors['CCPUUtil'] = term.green colors['CPowU'] = _conditional( lambda: (self.power_limit is not None and float(self.power_draw) / self.power_limit < 0.4), term.magenta, term.bold_magenta ) colors['CPowL'] = term.magenta colors['CCmd'] = term.color(24) # a bit dark if not with_colors: for k in list(colors.keys()): colors[k] = '' def _repr(v, none_value='??'): return none_value if v is None else v # build one-line display information # we want power use optional, but if deserves being grouped with # temperature and utilization reps = u"%(C1)s[{entry[index]}]%(C0)s " if gpuname_width is None or gpuname_width != 0: 
reps += u"%(CName)s{entry_name:{gpuname_width}}%(C0)s |" reps += u"%(CTemp)s{entry[temperature.gpu]:>3}°C%(C0)s, " if show_fan_speed: reps += "%(FSpeed)s{entry[fan.speed]:>3} %%%(C0)s, " reps += "%(CUtil)s{entry[utilization.gpu]:>3} %%%(C0)s" if show_codec: codec_info = [] if "enc" in show_codec: codec_info.append( "%(CBold)sE: %(C0)s" "%(CUtilEnc)s{entry[utilization.enc]:>3} %%%(C0)s") if "dec" in show_codec: codec_info.append( "%(CBold)sD: %(C0)s" "%(CUtilDec)s{entry[utilization.dec]:>3} %%%(C0)s") reps += " ({})".format(" ".join(codec_info)) if show_power: reps += ", %(CPowU)s{entry[power.draw]:>3}%(C0)s " if show_power is True or 'limit' in show_power: reps += "/ %(CPowL)s{entry[enforced.power.limit]:>3}%(C0)s " reps += "%(CPowL)sW%(C0)s" else: reps += "%(CPowU)sW%(C0)s" reps += " | %(C1)s%(CMemU)s{entry[memory.used]:>5}%(C0)s " \ "/ %(CMemT)s{entry[memory.total]:>5}%(C0)s MB" reps = (reps) % colors class entry_repr_accessor: def __init__(self, entry): self.entry = entry def __getitem__(self, key): return _repr(self.entry[key]) reps = reps.format( entry=entry_repr_accessor(self.entry), entry_name=util.shorten_left( self.entry["name"], width=gpuname_width, placeholder='…'), gpuname_width=gpuname_width or DEFAULT_GPUNAME_WIDTH ) # Add " |" only if processes information is to be added. if not no_processes: reps += " |" def process_repr(p): r = '' if not show_cmd or show_user: r += "{CUser}{}{C0}".format( _repr(p['username'], '--'), **colors ) if show_cmd: if r: r += ':' r += "{C1}{}{C0}".format( _repr(p.get('command', p['pid']), '--'), **colors ) if show_pid: r += ("/%s" % _repr(p['pid'], '--')) r += '({CMemP}{}M{C0})'.format( _repr(p['gpu_memory_usage'], '?'), **colors ) return r def full_process_info(p): r = "{C0} ├─ {:>6} ".format( _repr(p['pid'], '--'), **colors ) r += "{C0}({CCPUUtil}{:4.0f}%{C0}, {CCPUMemU}{:>6}{C0})".format( _repr(p['cpu_percent'], '--'), util.bytes2human(_repr(p['cpu_memory_usage'], 0)), **colors ) full_command_pretty = util.prettify_commandline( p['full_command'], colors['C1'], colors['CCmd']) r += "{C0}: {CCmd}{}{C0}".format( _repr(full_command_pretty, '?'), **colors ) return r processes = self.entry['processes'] full_processes = [] if processes is None and not no_processes: # None (not available) reps += ' ({})'.format(NOT_SUPPORTED) elif not no_processes: for p in processes: reps += ' ' + process_repr(p) if show_full_cmd: full_processes.append(eol_char + full_process_info(p)) if show_full_cmd and full_processes: full_processes[-1] = full_processes[-1].replace('├', '└', 1) reps += ''.join(full_processes) fp.write(reps) return fp def jsonify(self): o = self.entry.copy() if self.entry['processes'] is not None: o['processes'] = [{k: v for (k, v) in p.items() if k != 'gpu_uuid'} for p in self.entry['processes']] return o class InvalidGPU(GPUStat): class FallbackDict(dict): def __missing__(self, key): return "?" 
def __init__(self, gpu_index, message, ex): super().__init__(self.FallbackDict( index=gpu_index, name=message, processes=None )) self.exception = ex @property def available(self): return False class GPUStatCollection(Sequence[GPUStat]): global_processes = {} def __init__(self, gpu_list, driver_version=None): self.gpus = gpu_list # attach additional system information self.hostname = platform.node() self.query_time = datetime.now() self.driver_version = driver_version @staticmethod def clean_processes(): for pid in list(GPUStatCollection.global_processes.keys()): if not psutil.pid_exists(pid): del GPUStatCollection.global_processes[pid] @staticmethod def new_query(debug=False, id=None) -> 'GPUStatCollection': """Query the information of all the GPUs on local machine""" N.nvmlInit() log = util.DebugHelper() def _decode(b): if isinstance(b, bytes): return b.decode('utf-8') # for python3, to unicode return b def get_gpu_info(handle): """Get one GPU information specified by nvml handle""" def safepcall(fn, error_value): # Ignore the exception from psutil when the process is gone # at the moment of querying. See #144. return util.safecall( fn, error_value=error_value, exc_types=(psutil.AccessDenied, psutil.NoSuchProcess, FileNotFoundError)) def get_process_info(nv_process): """Get the process information of specific pid""" process = {} if nv_process.pid not in GPUStatCollection.global_processes: GPUStatCollection.global_processes[nv_process.pid] = \ psutil.Process(pid=nv_process.pid) ps_process: psutil.Process = GPUStatCollection.global_processes[nv_process.pid] # TODO: ps_process is being cached, but the dict below is not. process['username'] = safepcall(ps_process.username, '?') # cmdline returns full path; # as in `ps -o comm`, get short cmdnames. _cmdline = safepcall(ps_process.cmdline, []) if not _cmdline: # sometimes, zombie or unknown (e.g. [kworker/8:2H]) process['command'] = '?' process['full_command'] = ['?'] else: process['command'] = os.path.basename(_cmdline[0]) process['full_command'] = _cmdline # Bytes to MBytes # if drivers are not TTC this will be None. 
usedmem = nv_process.usedGpuMemory // MB if \ nv_process.usedGpuMemory else None process['gpu_memory_usage'] = usedmem process['cpu_percent'] = safepcall(ps_process.cpu_percent, 0.0) process['cpu_memory_usage'] = safepcall( lambda: round((ps_process.memory_percent() / 100.0) * psutil.virtual_memory().total), 0.0) process['pid'] = nv_process.pid return process name = _decode(N.nvmlDeviceGetName(handle)) uuid = _decode(N.nvmlDeviceGetUUID(handle)) try: temperature = N.nvmlDeviceGetTemperature( handle, N.NVML_TEMPERATURE_GPU ) except N.NVMLError as e: log.add_exception("temperature", e) temperature = None # Not supported try: fan_speed = N.nvmlDeviceGetFanSpeed(handle) except N.NVMLError as e: log.add_exception("fan_speed", e) fan_speed = None # Not supported try: # memory: in Bytes # Note that this is a compat-patched API (see gpustat.nvml) memory = N.nvmlDeviceGetMemoryInfo(handle) except N.NVMLError as e: log.add_exception("memory", e) memory = None # Not supported try: utilization = N.nvmlDeviceGetUtilizationRates(handle) except N.NVMLError as e: log.add_exception("utilization", e) utilization = None # Not supported try: utilization_enc = N.nvmlDeviceGetEncoderUtilization(handle) except N.NVMLError as e: log.add_exception("utilization_enc", e) utilization_enc = None # Not supported try: utilization_dec = N.nvmlDeviceGetDecoderUtilization(handle) except N.NVMLError as e: log.add_exception("utilization_dec", e) utilization_dec = None # Not supported try: power = N.nvmlDeviceGetPowerUsage(handle) except N.NVMLError as e: log.add_exception("power", e) power = None try: power_limit = N.nvmlDeviceGetEnforcedPowerLimit(handle) except N.NVMLError as e: log.add_exception("power_limit", e) power_limit = None try: nv_comp_processes = \ N.nvmlDeviceGetComputeRunningProcesses(handle) except N.NVMLError as e: log.add_exception("compute_processes", e) nv_comp_processes = None # Not supported try: nv_graphics_processes = \ N.nvmlDeviceGetGraphicsRunningProcesses(handle) except N.NVMLError as e: log.add_exception("graphics_processes", e) nv_graphics_processes = None # Not supported if nv_comp_processes is None and nv_graphics_processes is None: processes = None else: processes = [] nv_comp_processes = nv_comp_processes or [] nv_graphics_processes = nv_graphics_processes or [] # A single process might run in both of graphics and compute mode, # However we will display the process only once seen_pids = set() for nv_process in nv_comp_processes + nv_graphics_processes: if nv_process.pid in seen_pids: continue seen_pids.add(nv_process.pid) try: process = get_process_info(nv_process) processes.append(process) except psutil.NoSuchProcess: # TODO: add some reminder for NVML broken context # e.g. nvidia-smi reset or reboot the system pass except psutil.AccessDenied: pass except FileNotFoundError: # Ignore the exception which probably has occured # from psutil, due to a non-existent PID (see #95). # The exception should have been translated, but # there appears to be a bug of psutil. It is unlikely # FileNotFoundError is thrown in different situations. pass # TODO: Do not block if full process info is not requested time.sleep(0.1) for process in processes: pid = process['pid'] cache_process: psutil.Process = GPUStatCollection.global_processes[pid] process['cpu_percent'] = safepcall(cache_process.cpu_percent, 0) index = N.nvmlDeviceGetIndex(handle) # GPU Info. 
# We use the same key/spec as per `nvidia-smi --query-help-gpu` gpu_info = { 'index': index, 'uuid': uuid, 'name': name, 'temperature.gpu': temperature, 'fan.speed': fan_speed, 'utilization.gpu': utilization.gpu if utilization else None, 'utilization.enc': utilization_enc[0] if utilization_enc else None, 'utilization.dec': utilization_dec[0] if utilization_dec else None, 'power.draw': power // 1000 if power is not None else None, 'enforced.power.limit': power_limit // 1000 if power_limit is not None else None, # Convert bytes into MBytes 'memory.used': memory.used // MB if memory else None, 'memory.total': memory.total // MB if memory else None, 'processes': processes, } GPUStatCollection.clean_processes() return gpu_info # 1. get the list of gpu and status gpu_list = [] device_count = N.nvmlDeviceGetCount() if id is None: gpus_to_query = range(device_count) elif isinstance(id, str): gpus_to_query = [int(i) for i in id.split(',')] elif isinstance(id, Sequence): gpus_to_query = [int(i) for i in id] else: raise TypeError(f"Unknown id: {id}") for index in gpus_to_query: try: handle = N.nvmlDeviceGetHandleByIndex(index) gpu_info = get_gpu_info(handle) gpu_stat = GPUStat(gpu_info) except N.NVMLError_Unknown as e: gpu_stat = InvalidGPU(index, "((Unknown Error))", e) except N.NVMLError_GpuIsLost as e: gpu_stat = InvalidGPU(index, "((GPU is lost))", e) if isinstance(gpu_stat, InvalidGPU): log.add_exception("GPU %d" % index, gpu_stat.exception) gpu_list.append(gpu_stat) # 2. additional info (driver version, etc). try: driver_version = _decode(N.nvmlSystemGetDriverVersion()) except N.NVMLError as e: log.add_exception("driver_version", e) driver_version = None # N/A if debug: log.report_summary() N.nvmlShutdown() return GPUStatCollection(gpu_list, driver_version=driver_version) def __len__(self): return len(self.gpus) def __iter__(self): return iter(self.gpus) def __getitem__(self, index): return self.gpus[index] def __repr__(self): s = 'GPUStatCollection(host=%s, [\n' % self.hostname s += '\n'.join(' ' + str(g) for g in self.gpus) s += '\n])' return s # --- Printing Functions --- def print_formatted(self, fp=sys.stdout, *, force_color=False, no_color=False, show_cmd=False, show_full_cmd=False, show_user=False, show_pid=False, show_fan_speed=None, show_codec="", show_power=None, gpuname_width=None, show_header=True, no_processes=False, eol_char=os.linesep, ): # ANSI color configuration if force_color and no_color: raise ValueError("--color and --no_color can't" " be used at the same time") if force_color: TERM = os.getenv('TERM') or 'xterm-256color' t_color = Terminal(kind=TERM, force_styling=True) # workaround of issue #32 (watch doesn't recognize sgr0 characters) t_color._normal = u'\x1b[0;10m' elif no_color: t_color = Terminal(force_styling=None) else: t_color = Terminal() # auto, depending on isatty # appearance settings if gpuname_width is None: gpuname_width = max([len(g.entry['name']) for g in self] + [0]) # header if show_header: if IS_WINDOWS: # no localization is available; just use a reasonable default # same as str(timestr) but without ms timestr = self.query_time.strftime('%Y-%m-%d %H:%M:%S') else: time_format = locale.nl_langinfo(locale.D_T_FMT) timestr = self.query_time.strftime(time_format) header_template = '{t.bold_white}{hostname:{width}}{t.normal} ' header_template += '{timestr} ' header_template += '{t.bold_black}{driver_version}{t.normal}' header_msg = header_template.format( hostname=self.hostname, width=(gpuname_width or DEFAULT_GPUNAME_WIDTH) + 3, # len("[?]") 
timestr=timestr, driver_version=self.driver_version, t=t_color, ) fp.write(header_msg.strip()) fp.write(eol_char) # body for g in self: g.print_to(fp, show_cmd=show_cmd, show_full_cmd=show_full_cmd, no_processes=no_processes, show_user=show_user, show_pid=show_pid, show_fan_speed=show_fan_speed, show_codec=show_codec, show_power=show_power, gpuname_width=gpuname_width, eol_char=eol_char, term=t_color) fp.write(eol_char) if len(self.gpus) == 0: print(t_color.yellow("(No GPUs are available)")) fp.flush() def jsonify(self): return { 'hostname': self.hostname, 'driver_version': self.driver_version, 'query_time': self.query_time, "gpus": [g.jsonify() for g in self] } def print_json(self, fp=sys.stdout): def date_handler(obj): if hasattr(obj, 'isoformat'): return obj.isoformat() else: raise TypeError(type(obj)) o = self.jsonify() json.dump(o, fp, indent=4, separators=(',', ': '), default=date_handler) fp.write(os.linesep) fp.flush() def new_query() -> GPUStatCollection: ''' Obtain a new GPUStatCollection instance by querying nvidia-smi to get the list of GPUs and running process information. ''' return GPUStatCollection.new_query() def gpu_count() -> int: '''Return the number of available GPUs in the system.''' try: N.nvmlInit() return N.nvmlDeviceGetCount() except N.NVMLError: return 0 # fallback finally: try: N.nvmlShutdown() except N.NVMLError: pass def is_available() -> bool: '''Return True if the NVML library and GPU devices are available.''' return gpu_count() > 0 gpustat-1.1.1/gpustat/nvml.py000066400000000000000000000137151443577762700162530ustar00rootroot00000000000000"""Imports pynvml with sanity checks and custom patches.""" import warnings import functools import os import sys import textwrap # If this environment variable is set, we will bypass pynvml version validation # so that legacy pynvml (nvidia-ml-py3) can be used. This would be useful # in a case where there are conflicts on pynvml dependencies. # However, beware that pynvml might produce wrong results (see #107). ALLOW_LEGACY_PYNVML = os.getenv("ALLOW_LEGACY_PYNVML", "") ALLOW_LEGACY_PYNVML = ALLOW_LEGACY_PYNVML.lower() not in ('false', '0', '') try: # Check pynvml version: we require 11.450.129 or newer. # https://github.com/wookayin/gpustat/pull/107 import pynvml if not ( # Requires nvidia-ml-py >= 11.460.79 hasattr(pynvml, 'NVML_BRAND_NVIDIA_RTX') or # Requires nvidia-ml-py >= 11.450.129, < 11.510.69 hasattr(pynvml, 'nvmlDeviceGetComputeRunningProcesses_v2') ) and not ALLOW_LEGACY_PYNVML: raise ImportError("pynvml library is outdated.") if not hasattr(pynvml, '_nvmlGetFunctionPointer'): # Unofficial pynvml from @gpuopenanalytics/pynvml, see #153 import pynvml.nvml as pynvml except (ImportError, SyntaxError, RuntimeError) as e: _pynvml = sys.modules.get('pynvml', None) raise ImportError(textwrap.dedent( """\ pynvml is missing or an outdated version is installed. We require nvidia-ml-py>=11.450.129, and the official NVIDIA python bindings should be used; neither nvidia-ml-py3 nor gpuopenanalytics/pynvml. 
For more details, please refer to: https://github.com/wookayin/gpustat/issues/107 The root cause: """ + str(e) + """ Your pynvml installation: """ + repr(_pynvml) + """ ----------------------------------------------------------- Please reinstall `gpustat`: $ pip install --force-reinstall gpustat If it still does not fix the problem, please uninstall pynvml packages and reinstall nvidia-ml-py manually: $ pip uninstall nvidia-ml-py3 pynvml $ pip install --force-reinstall --ignore-installed 'nvidia-ml-py' """)) from e # Monkey-patch nvml due to breaking changes in pynvml. # See #107, #141, and test_gpustat.py for more details. _original_nvmlGetFunctionPointer = pynvml._nvmlGetFunctionPointer _original_nvmlDeviceGetMemoryInfo = pynvml.nvmlDeviceGetMemoryInfo class pynvml_monkeypatch: @staticmethod # Note: must be defined as a staticmethod to allow mocking. def original_nvmlGetFunctionPointer(name): return _original_nvmlGetFunctionPointer(name) FUNCTION_FALLBACKS = { # for pynvml._nvmlGetFunctionPointer 'nvmlDeviceGetComputeRunningProcesses_v3': 'nvmlDeviceGetComputeRunningProcesses_v2', 'nvmlDeviceGetGraphicsRunningProcesses_v3': 'nvmlDeviceGetGraphicsRunningProcesses_v2', } @staticmethod @functools.wraps(pynvml._nvmlGetFunctionPointer) def _nvmlGetFunctionPointer(name): """Our monkey-patched pynvml._nvmlGetFunctionPointer(). See also: test_gpustat::NvidiaDriverMock for test scenarios. See #107. """ M = pynvml_monkeypatch try: ret = M.original_nvmlGetFunctionPointer(name) return ret except pynvml.NVMLError_FunctionNotFound: # type: ignore if name in M.FUNCTION_FALLBACKS: # Lack of ...Processes_v3 APIs happens for # OLD drivers < 510.39.01 && pynvml >= 11.510, where # we fallback to v2 APIs. (see #107 for more details) ret = M.original_nvmlGetFunctionPointer( M.FUNCTION_FALLBACKS[name] ) # populate the cache, so this handler won't get executed again pynvml._nvmlGetFunctionPointer_cache[name] = ret else: # Unknown case, cannot handle. re-raise again raise return ret @staticmethod # Note: must be defined as a staticmethod to allow mocking. def original_nvmlDeviceGetMemoryInfo(*args, **kwargs): return _original_nvmlDeviceGetMemoryInfo(*args, **kwargs) has_memoryinfo_v2 = None @staticmethod @functools.wraps(pynvml.nvmlDeviceGetMemoryInfo) def nvmlDeviceGetMemoryInfo(handle): """A patched version of nvmlDeviceGetMemoryInfo. This tries `version=N.nvmlMemory_v2` if the nvmlDeviceGetMemoryInfo_v2 function is available (for driver >= 515), or fallback to the legacy v1 API for (driver < 515) to yield a correct result. See #141. """ M = pynvml_monkeypatch if M.has_memoryinfo_v2 is None: try: pynvml._nvmlGetFunctionPointer("nvmlDeviceGetMemoryInfo_v2") M.has_memoryinfo_v2 = True except pynvml.NVMLError_FunctionNotFound: # type: ignore M.has_memoryinfo_v2 = False if hasattr(pynvml, 'nvmlMemory_v2'): # pynvml >= 11.510.69 try: memory = M.original_nvmlDeviceGetMemoryInfo( handle, version=pynvml.nvmlMemory_v2) except pynvml.NVMLError_FunctionNotFound: # type: ignore # pynvml >= 11.510 but driver is old (<515.39) memory = M.original_nvmlDeviceGetMemoryInfo(handle) else: if M.has_memoryinfo_v2: warnings.warn( "Your NVIDIA driver requires a compatible version of " "pynvml (>= 11.510.69) installed to display the correct " "memory usage information (See #141 for more details). 
" "Please try `pip install --upgrade nvidia-ml-py`.", category=UserWarning) memory = M.original_nvmlDeviceGetMemoryInfo(handle) return memory setattr(pynvml, '_nvmlGetFunctionPointer', pynvml_monkeypatch._nvmlGetFunctionPointer) setattr(pynvml, 'nvmlDeviceGetMemoryInfo', pynvml_monkeypatch.nvmlDeviceGetMemoryInfo) __all__ = ['pynvml'] gpustat-1.1.1/gpustat/test_gpustat.py000066400000000000000000000657361443577762700200370ustar00rootroot00000000000000""" Unit or integration tests for gpustat """ # flake8: ignore=E501 import ctypes import os import shlex import sys import types from collections import namedtuple from io import StringIO from typing import Any import psutil import pytest from mockito import ANY, mock, unstub, when, when2 import gpustat from gpustat.nvml import pynvml, pynvml_monkeypatch MB = 1024 * 1024 def remove_ansi_codes(s): import re s = re.compile(r'\x1b[^m]*m').sub('', s) s = re.compile(r'\x0f').sub('', s) return s # ----------------------------------------------------------------------------- mock_gpu_handles = [types.SimpleNamespace(value='mock-handle-%d' % i, index=i) for i in range(3)] def _configure_mock(N=pynvml, _scenario_nonexistent_pid=False, # GH-95 _scenario_failing_one_gpu=None, # GH-125, GH-81 ): """Define mock behaviour for pynvml and psutil.{Process,virtual_memory}.""" # without following patch, unhashable NVMLError makes unit test crash N.NVMLError.__hash__ = lambda _: 0 assert issubclass(N.NVMLError, BaseException) unstub(N) # reset all the stubs when(N).nvmlInit().thenReturn() when(N).nvmlShutdown().thenReturn() when(N).nvmlSystemGetDriverVersion().thenReturn('415.27.mock') when(N)._nvmlGetFunctionPointer(...).thenCallOriginalImplementation() NUM_GPUS = 3 when(N).nvmlDeviceGetCount().thenReturn(NUM_GPUS) def _return_or_raise(v): """Return a callable for thenAnswer() to let exceptions re-raised.""" def _callable(*args, **kwargs): del args, kwargs if isinstance(v, Exception): raise v return v return _callable for i in range(NUM_GPUS): handle = mock_gpu_handles[i] if _scenario_failing_one_gpu and i == 2: # see #81, #125 assert (_scenario_failing_one_gpu is N.NVMLError_Unknown or _scenario_failing_one_gpu is N.NVMLError_GpuIsLost) handle = _scenario_failing_one_gpu() # see 81 when(N).nvmlDeviceGetHandleByIndex(i)\ .thenAnswer(_return_or_raise(handle)) when(N).nvmlDeviceGetIndex(handle)\ .thenReturn(i) when(N).nvmlDeviceGetName(handle)\ .thenReturn({ 0: 'GeForce GTX TITAN 0', 1: 'GeForce GTX TITAN 1', 2: 'GeForce RTX 2', }[i].encode()) when(N).nvmlDeviceGetUUID(handle)\ .thenReturn({ 0: b'GPU-10fb0fbd-2696-43f3-467f-d280d906a107', 1: b'GPU-d1df4664-bb44-189c-7ad0-ab86c8cb30e2', 2: b'GPU-50205d95-57b6-f541-2bcb-86c09afed564', }[i]) when(N).nvmlDeviceGetTemperature(handle, N.NVML_TEMPERATURE_GPU)\ .thenReturn([80, 36, 71][i]) when(N).nvmlDeviceGetFanSpeed(handle)\ .thenReturn([16, 53, 100][i]) when(N).nvmlDeviceGetPowerUsage(handle)\ .thenAnswer(_return_or_raise({ 0: 125000, 1: N.NVMLError_NotSupported(), 2: 250000 }[i])) when(N).nvmlDeviceGetEnforcedPowerLimit(handle)\ .thenAnswer(_return_or_raise({ 0: 250000, 1: 250000, 2: N.NVMLError_NotSupported() }[i])) # see also: NvidiaDriverMock mock_memory_t = namedtuple("Memory_t", ['total', 'used']) # c_nvmlMemory_t when(N).nvmlDeviceGetMemoryInfo(handle)\ .thenAnswer(_return_or_raise({ 0: mock_memory_t(total=12883853312, used=8000*MB), 1: mock_memory_t(total=12781551616, used=9000*MB), 2: mock_memory_t(total=12781551616, used=0), }[i])) # this mock function assumes <510.39 behavior (#141) when(N, strict=False)\ 
.nvmlDeviceGetMemoryInfo(handle, version=ANY())\ .thenRaise(N.NVMLError_FunctionNotFound) mock_utilization_t = namedtuple("Utilization_t", ['gpu', 'memory']) when(N).nvmlDeviceGetUtilizationRates(handle)\ .thenAnswer(_return_or_raise({ 0: mock_utilization_t(gpu=76, memory=0), 1: mock_utilization_t(gpu=0, memory=0), 2: N.NVMLError_NotSupported(), # Not Supported }[i])) when(N).nvmlDeviceGetEncoderUtilization(handle)\ .thenAnswer(_return_or_raise({ 0: [88, 167000], # [value, sample_rate] 1: [0, 167000], # [value, sample_rate] 2: N.NVMLError_NotSupported(), # Not Supported }[i])) when(N).nvmlDeviceGetDecoderUtilization(handle)\ .thenAnswer(_return_or_raise({ 0: [67, 167000], # [value, sample_rate] 1: [0, 167000], # [value, sample_rate] 2: N.NVMLError_NotSupported(), # Not Supported }[i])) # running process information: a bit annoying... mock_process_t = namedtuple("Process_t", ['pid', 'usedGpuMemory']) if _scenario_nonexistent_pid: mock_processes_gpu2_erratic = [ mock_process_t(99999, 9999*MB), mock_process_t(99995, 9995*MB), # see issue #95 ] else: mock_processes_gpu2_erratic = N.NVMLError_NotSupported() # see NvidiaDriverMock as well when(N).nvmlDeviceGetComputeRunningProcesses(handle)\ .thenAnswer(_return_or_raise({ 0: [mock_process_t(48448, 4000*MB), mock_process_t(153223, 4000*MB)], 1: [mock_process_t(192453, 3000*MB), mock_process_t(194826, 6000*MB)], 2: mock_processes_gpu2_erratic, # Not Supported or non-existent }[i])) when(N).nvmlDeviceGetGraphicsRunningProcesses(handle)\ .thenAnswer(_return_or_raise({ 0: [mock_process_t(48448, 4000*MB)], 1: [], 2: N.NVMLError_NotSupported(), }[i])) # for psutil mock_pid_map = { # mock/stub information for psutil... 48448: ('user1', 'python', 85.25, 3.1415), 154213: ('user1', 'caffe', 16.89, 100.00), 38310: ('user3', 'python', 26.23, 99.9653), 153223: ('user2', 'python', 15.25, 0.0000), 194826: ('user3', 'caffe', 0.0, 12.5236), 192453: ('user1', 'torch', 123.2, 0.7312), } assert 99999 not in mock_pid_map, 'scenario_nonexistent_pid' assert 99995 not in mock_pid_map, 'scenario_nonexistent_pid (#95)' def _MockedProcess(pid): if pid not in mock_pid_map: if pid == 99995: # simulate a bug reported in #95 raise FileNotFoundError("/proc/99995/stat") else: # for a process that does not exist, NoSuchProcess is the # type of exceptions supposed to be raised by psutil raise psutil.NoSuchProcess(pid=pid) username, cmdline, cpuutil, memutil = mock_pid_map[pid] p: Any = mock(strict=True) # psutil.Process p.username = lambda: username p.cmdline = lambda: [cmdline] p.cpu_percent = lambda: cpuutil p.memory_percent = lambda: memutil p.pid = pid return p when(psutil).Process(...)\ .thenAnswer(_MockedProcess) when(psutil).virtual_memory()\ .thenReturn(mock_memory_t(total=8589934592, used=0)) MOCK_EXPECTED_OUTPUT_DEFAULT = os.linesep.join("""\ [0] GeForce GTX TITAN 0 | 80°C, 76 % | 8000 / 12287 MB | user1(4000M) user2(4000M) [1] GeForce GTX TITAN 1 | 36°C, 0 % | 9000 / 12189 MB | user1(3000M) user3(6000M) [2] GeForce RTX 2 | 71°C, ?? % | 0 / 12189 MB | (Not Supported) """.splitlines()) # noqa: E501 MOCK_EXPECTED_OUTPUT_FULL = os.linesep.join("""\ [0] GeForce GTX TITAN 0 | 80°C, 16 %, 76 % (E: 88 % D: 67 %), 125 / 250 W | 8000 / 12287 MB | user1:python/48448(4000M) user2:python/153223(4000M) [1] GeForce GTX TITAN 1 | 36°C, 53 %, 0 % (E: 0 % D: 0 %), ?? / 250 W | 9000 / 12189 MB | user1:torch/192453(3000M) user3:caffe/194826(6000M) [2] GeForce RTX 2 | 71°C, 100 %, ?? % (E: ?? % D: ?? %), 250 / ?? 
W | 0 / 12189 MB | (Not Supported) """.splitlines()) # noqa: E501 MOCK_EXPECTED_OUTPUT_FULL_PROCESS = os.linesep.join("""\ [0] GeForce GTX TITAN 0 | 80°C, 16 %, 76 % (E: 88 % D: 67 %), 125 / 250 W | 8000 / 12287 MB | user1:python/48448(4000M) user2:python/153223(4000M) ├─ 48448 ( 85%, 257MB): python └─ 153223 ( 15%, 0B): python [1] GeForce GTX TITAN 1 | 36°C, 53 %, 0 % (E: 0 % D: 0 %), ?? / 250 W | 9000 / 12189 MB | user1:torch/192453(3000M) user3:caffe/194826(6000M) ├─ 192453 ( 123%, 59MB): torch └─ 194826 ( 0%, 1025MB): caffe [2] GeForce RTX 2 | 71°C, 100 %, ?? % (E: ?? % D: ?? %), 250 / ?? W | 0 / 12189 MB | (Not Supported) """.splitlines()) # noqa: E501 MOCK_EXPECTED_OUTPUT_NO_PROCESSES = os.linesep.join("""\ [0] GeForce GTX TITAN 0 | 80°C, 76 % | 8000 / 12287 MB [1] GeForce GTX TITAN 1 | 36°C, 0 % | 9000 / 12189 MB [2] GeForce RTX 2 | 71°C, ?? % | 0 / 12189 MB """.splitlines()) # noqa: E501 # ----------------------------------------------------------------------------- @pytest.fixture def scenario_basic(): _configure_mock() @pytest.fixture def scenario_nonexistent_pid(): _configure_mock(_scenario_nonexistent_pid=True) @pytest.fixture def scenario_failing_one_gpu(request: pytest.FixtureRequest): # request.param should be either NVMLError_Unknown or NVMLError_GpuIsLost _configure_mock(_scenario_failing_one_gpu=request.param) return dict(expected_message={ pynvml.NVMLError_GpuIsLost: 'GPU is lost', pynvml.NVMLError_Unknown: 'Unknown Error', }[request.param]) @pytest.fixture def nvidia_driver_version(request: pytest.FixtureRequest): """See NvidiaDriverMock.""" nvidia_mock: NvidiaDriverMock = request.param nvidia_mock(pynvml) if nvidia_mock.name.startswith('430'): # AssertionError: gpustat will print (Not Supported) in this case request.node.add_marker(pytest.mark.xfail( reason="nvmlDeviceGetComputeRunningProcesses_v2 does not exist")) yield nvidia_mock class NvidiaDriverMock: """Simulate the behavior of nvml's low-level functions according to a specific nvidia driver versions, with backward compatibility in concern. In all the scenarios, gpustat should work well with a compatible version of pynvml installed. 
For what has changed on the nvidia driver side (a non-exhaustive list), see https://github.com/NVIDIA/nvidia-settings/blame/main/src/nvml.h https://github.com/NVIDIA/nvidia-settings/blame/main/src/libXNVCtrlAttributes/NvCtrlAttributesPrivate.h Noteworthy changes of nvml driviers: 450.66: nvmlDeviceGetComputeRunningProcesses_v2 510.39.01: nvmlDeviceGetComputeRunningProcesses_v3 (_v2 removed) nvmlDeviceGetMemoryInfo_v2 Relevant github issues: #107: nvmlDeviceGetComputeRunningProcesses_v2 added #141: nvmlDeviceGetMemoryInfo (v1) broken for 510.39.01+ """ INSTANCES = [] def __init__(self, name, **kwargs): self.name = name self.feat = kwargs def __call__(self, N): self.mock_processes(N) self.mock_memoryinfo(N) def mock_processes(self, N): when(N).nvmlDeviceGetComputeRunningProcesses(...).thenCallOriginalImplementation() when(N).nvmlDeviceGetGraphicsRunningProcesses(...).thenCallOriginalImplementation() when(N).nvmlSystemGetDriverVersion().thenReturn(self.name) def process_t(pid, usedGpuMemory): return pynvml.c_nvmlProcessInfo_t( pid=ctypes.c_uint(pid), usedGpuMemory=ctypes.c_ulonglong(usedGpuMemory), ) # more low-level mocking for # nvmlDeviceGetComputeRunningProcesses_{v2, v3} & c_nvmlProcessInfo_t def _nvmlDeviceGetComputeRunningProcesses_v2(handle, c_count, c_procs): # handle: SimpleNamespace (see _configure_mock) if c_count._obj.value == 0: return pynvml.NVML_ERROR_INSUFFICIENT_SIZE else: c_count._obj.value = 2 if handle.index == 0: c = process_t(pid=48448, usedGpuMemory=4000*MB); c_procs[0] = c c = process_t(pid=153223, usedGpuMemory=4000*MB); c_procs[1] = c elif handle.index == 1: c = process_t(pid=192453, usedGpuMemory=3000*MB); c_procs[0] = c c = process_t(pid=194826, usedGpuMemory=6000*MB); c_procs[1] = c else: return pynvml.NVML_ERROR_NOT_SUPPORTED return pynvml.NVML_SUCCESS def _nvmlDeviceGetGraphicsRunningProcesses_v2(handle, c_count, c_procs): if c_count._obj.value == 0: return pynvml.NVML_ERROR_INSUFFICIENT_SIZE else: if handle.index == 0: c_count._obj.value = 1 c = process_t(pid=48448, usedGpuMemory=4000*MB); c_procs[0] = c elif handle.index == 1: c_count._obj.value = 0 else: return pynvml.NVML_ERROR_NOT_SUPPORTED return pynvml.NVML_SUCCESS # Note: N._nvmlGetFunctionPointer might have been monkey-patched, # so this mock should decorate the underlying, unwrapped raw function, # NOT a monkey-patched version of pynvml._nvmlGetFunctionPointer. 
for v in [1, 2, 3]: _v = f'_v{v}' if v != 1 else '' # backward compatible v3 -> v2 stub = when2(pynvml_monkeypatch.original_nvmlGetFunctionPointer, f'nvmlDeviceGetComputeRunningProcesses{_v}') if v <= self.nvmlDeviceGetComputeRunningProcesses_v: stub.thenReturn(_nvmlDeviceGetComputeRunningProcesses_v2) else: stub.thenRaise(pynvml.NVMLError(pynvml.NVML_ERROR_FUNCTION_NOT_FOUND)) stub = when2(pynvml_monkeypatch.original_nvmlGetFunctionPointer, f'nvmlDeviceGetGraphicsRunningProcesses{_v}') if v <= self.nvmlDeviceGetComputeRunningProcesses_v: stub.thenReturn(_nvmlDeviceGetGraphicsRunningProcesses_v2) else: stub.thenRaise(pynvml.NVMLError(pynvml.NVML_ERROR_FUNCTION_NOT_FOUND)) def mock_memoryinfo(self, N): nvmlMemory_v2 = 0x02000028 if self.nvmlDeviceGetMemoryInfo_v == 1: mock_memory_t = namedtuple( "c_nvmlMemory_t", ['total', 'used'], ) elif self.nvmlDeviceGetMemoryInfo_v == 2: mock_memory_t = namedtuple( "c_nvmlMemory_v2_t", ['version', 'total', 'reserved', 'free', 'used'], ) mock_memory_t.__new__.__defaults__ = (nvmlMemory_v2, 0, 0, 0, 0) else: raise NotImplementedError # simulates drivers >= 510.39, where memoryinfo v2 is introduced if self.nvmlDeviceGetMemoryInfo_v == 2: for handle in mock_gpu_handles: # a correct API requires version=... parameter # this assumes nvidia driver is also recent enough. when(pynvml_monkeypatch, strict=False)\ .original_nvmlDeviceGetMemoryInfo(handle, version=nvmlMemory_v2)\ .thenReturn({ 0: mock_memory_t(total=12883853312, used=8000*MB), 1: mock_memory_t(total=12781551616, used=9000*MB), 2: mock_memory_t(total=12781551616, used=0), }[handle.index]) # simulate #141: without the v2 parameter, gives wrong result when(pynvml_monkeypatch)\ .original_nvmlDeviceGetMemoryInfo(handle)\ .thenReturn({ 0: mock_memory_t(total=12883853312, used=8099*MB), 1: mock_memory_t(total=12781551616, used=9099*MB), 2: mock_memory_t(total=12781551616, used=99*MB), }[handle.index]) else: # old drivers < 510.39 for handle in mock_gpu_handles: # when pynvml>=11.510, v2 API can be called but can't be used when(N, strict=False)\ .nvmlDeviceGetMemoryInfo(handle, version=ANY())\ .thenRaise(N.NVMLError_FunctionNotFound) # The v1 API will give a correct result for the v1 API when(N).nvmlDeviceGetMemoryInfo(handle)\ .thenReturn({ 0: mock_memory_t(total=12883853312, used=8000*MB), 1: mock_memory_t(total=12781551616, used=9000*MB), 2: mock_memory_t(total=12781551616, used=0), }[handle.index]) def __getattr__(self, k): return self.feat[k] @property def __name__(self): return self.name def __repr__(self): return self.__name__ NvidiaDriverMock.INSTANCES = [ NvidiaDriverMock('430.xx.xx', nvmlDeviceGetComputeRunningProcesses_v=1, nvmlDeviceGetMemoryInfo_v=1, ), NvidiaDriverMock('450.66', nvmlDeviceGetComputeRunningProcesses_v=2, nvmlDeviceGetMemoryInfo_v=1, ), NvidiaDriverMock('510.39.01', nvmlDeviceGetComputeRunningProcesses_v=3, nvmlDeviceGetMemoryInfo_v=2, ), ] # ----------------------------------------------------------------------------- class TestGPUStat(object): """A pytest class suite for gpustat.""" def setup_method(self): print("") self.maxDiff = 4096 def teardown_method(self): unstub() @staticmethod def capture_output(*args): f = StringIO() import contextlib with contextlib.redirect_stdout(f): # requires python 3.4+ try: gpustat.main(*args) except SystemExit as e: if e.code != 0: raise AssertionError( "Argparse failed (see above error message)") return f.getvalue() # ----------------------------------------------------------------------- @pytest.mark.parametrize("nvidia_driver_version", 
NvidiaDriverMock.INSTANCES, indirect=True) def test_new_query_mocked_basic(self, scenario_basic, nvidia_driver_version): """A basic functionality test, in a case where everything is normal.""" gpustats = gpustat.new_query() fp = StringIO() gpustats.print_formatted( fp=fp, no_color=False, show_user=True, show_cmd=True, show_full_cmd=True, show_pid=True, show_fan_speed=True, show_codec="enc,dec", show_power=True, ) result = fp.getvalue() print(result) unescaped = remove_ansi_codes(result) # remove first line (header) unescaped = os.linesep.join(unescaped.splitlines()[1:]) assert unescaped == MOCK_EXPECTED_OUTPUT_FULL_PROCESS # verify gpustat results (not exhaustive yet) assert gpustats.driver_version == nvidia_driver_version.name g: gpustat.GPUStat = gpustats.gpus[0] assert g.memory_used == 8000 assert g.power_draw == 125 assert g.utilization == 76 assert g.processes and g.processes[0]['pid'] == 48448 def test_new_query_mocked_nonexistent_pid(self, scenario_nonexistent_pid): """ Test a case where nvidia query returns non-existent pids (see #16, #18) for GPU index 2. """ fp = StringIO() gpustats = gpustat.new_query() gpustats.print_formatted(fp=fp) ret = fp.getvalue() print(ret) # gpu 2: should ignore process id line = remove_ansi_codes(ret).split('\n')[3] assert '[2] GeForce RTX 2' in line, str(line) assert '99999' not in line assert '(Not Supported)' not in line @pytest.mark.parametrize("scenario_failing_one_gpu", [ pynvml.NVMLError_GpuIsLost, pynvml.NVMLError_Unknown, ], indirect=True) def test_new_query_mocked_failing_one_gpu(self, scenario_failing_one_gpu): """Test a case where one GPU is failing (see #125).""" fp = StringIO() gpustats = gpustat.new_query() gpustats.print_formatted(fp=fp, show_header=False) ret = fp.getvalue() print(ret) lines = remove_ansi_codes(ret).split('\n') message = scenario_failing_one_gpu['expected_message'] # gpu 2: failing due to unknown error line = lines[2] assert '[2] ((' + message + '))' in line, str(line) assert '99999' not in line assert '?°C, ? %' in line, str(line) assert '? / ? MB' in line, str(line) # other gpus should be displayed normally assert '[0] GeForce GTX TITAN 0' in lines[0] assert '[1] GeForce GTX TITAN 1' in lines[1] def test_attributes_and_items(self, scenario_basic): """Test whether each property of `GPUStat` instance is well-defined.""" g = gpustat.new_query()[1] # includes N/A print("(keys) : %s" % str(g.keys())) print(g) assert g['name'] == g.entry['name'] assert g['uuid'] == g.uuid with pytest.raises(KeyError): g['unknown_key'] print("uuid : %s" % g.uuid) print("name : %s" % g.name) print("memory : used %d total %d avail %d" % ( g.memory_used, g.memory_total, g.memory_available)) print("temperature : %d" % (g.temperature)) print("utilization : %s" % (g.utilization)) print("utilization_enc : %s" % (g.utilization_enc)) print("utilization_dec : %s" % (g.utilization_dec)) def test_main(self, scenario_basic): """Test whether gpustat.main() works well. The behavior is mocked exactly as in test_new_query_mocked(). 
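Concretely (see the body below), the test only sets sys.argv = ['gpustat'] and
calls gpustat.main() under the scenario_basic fixture; there is no explicit
assertion, so a regression would surface as an uncaught exception.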
""" sys.argv = ['gpustat'] gpustat.main() def test_args_commandline(self, scenario_basic): """Tests the end gpustat CLI.""" capture_output = self.capture_output def _remove_ansi_codes_and_header_line(s): unescaped = remove_ansi_codes(s) # remove first line (header) unescaped = os.linesep.join(unescaped.splitlines()[1:]) return unescaped s = capture_output('gpustat', ) assert _remove_ansi_codes_and_header_line(s) == MOCK_EXPECTED_OUTPUT_DEFAULT s = capture_output('gpustat', '--version') assert s.startswith('gpustat ') print(s) s = capture_output('gpustat', '--no-header') assert "[0]" in s.splitlines()[0] s = capture_output('gpustat', '-a') # --show-all assert _remove_ansi_codes_and_header_line(s) == MOCK_EXPECTED_OUTPUT_FULL s = capture_output('gpustat', '--color') assert '\x0f' not in s, "Extra \\x0f found (see issue #32)" assert _remove_ansi_codes_and_header_line(s) == MOCK_EXPECTED_OUTPUT_DEFAULT s = capture_output('gpustat', '--no-color') unescaped = remove_ansi_codes(s) assert s == unescaped # should have no ansi code assert _remove_ansi_codes_and_header_line(s) == MOCK_EXPECTED_OUTPUT_DEFAULT s = capture_output('gpustat', '--no-processes') assert _remove_ansi_codes_and_header_line(s) == MOCK_EXPECTED_OUTPUT_NO_PROCESSES s = capture_output('gpustat', '--id', '1,2') assert _remove_ansi_codes_and_header_line(s) == \ os.linesep.join(MOCK_EXPECTED_OUTPUT_DEFAULT.splitlines()[1:3]) def test_args_commandline_width(self, scenario_basic): capture_output = self.capture_output # see MOCK_EXPECTED_OUTPUT_DEFAULT assert len("GeForce GTX TITAN 0") == 19 s = capture_output('gpustat', '--gpuname-width', '25') print("- Should have width=25") print(s) assert 'GeForce GTX TITAN 0 |' in remove_ansi_codes(s) assert 'GeForce RTX 2 |' in remove_ansi_codes(s) # ^012345 # 19 # See #47 (since v1.0) print("- Should have width=10 (with truncation)") s = capture_output('gpustat', '--gpuname-width', '10') print(s) assert '…X TITAN 0 |' in remove_ansi_codes(s) assert '…rce RTX 2 |' in remove_ansi_codes(s) # 1234567890 print("- Should have width=1 (too short)") s = capture_output('gpustat', '--gpuname-width', '1') print(s) assert '… |' in remove_ansi_codes(s) print("- Should have width=0: no name displayed.") s = capture_output('gpustat', '--gpuname-width', '0') print(s) assert '[0] 80°C' in remove_ansi_codes(s) print("- Invalid inputs") with pytest.raises(AssertionError, match="Argparse failed"): s = capture_output('gpustat', '--gpuname-width', '-1') with pytest.raises(AssertionError, match="Argparse failed"): s = capture_output('gpustat', '--gpuname-width', 'None') def test_args_commandline_showoptions(self, scenario_basic): """Tests gpustat CLI with a variety of --show-xxx options. 
""" capture_output = self.capture_output print('') TEST_OPTS = [] TEST_OPTS += ['-a', '-c', '-u', '-p', '-e', '-P', '-f'] TEST_OPTS += [('-e', ''), ('-P', '')] TEST_OPTS += [('-e', 'enc,dec'), '-Plimit,draw'] TEST_OPTS += ['-cup', '-cpu', '-cufP'] # 'cpuePf' for opt in TEST_OPTS: if isinstance(opt, str): opt = [opt] print('\x1b[30m\x1b[43m', # black_on_yellow '$ gpustat ' + ' '.join(shlex.quote(o) for o in opt), '\x1b(B\x1b[m', sep='') s = capture_output('gpustat', *opt) # TODO: Validate output without hardcoding expected outputs print(s) # Finally, unknown args with pytest.raises(AssertionError): capture_output('gpustat', '--unrecognized-args-in-test') @pytest.mark.skipif(sys.platform == 'win32', reason="Do not run on Windows") def test_no_TERM(self, scenario_basic, monkeypatch): """--color should work well even when executed without TERM, e.g. ssh localhost gpustat --color""" monkeypatch.setenv("TERM", "") s = self.capture_output('gpustat', '--color', '--no-header').rstrip() print(s) assert remove_ansi_codes(s) == MOCK_EXPECTED_OUTPUT_DEFAULT, \ "wrong gpustat output" assert '\x1b[36m' in s, "should contain cyan color code" assert '\x0f' not in s, "Extra \\x0f found (see issue #32)" def test_json_mocked(self, scenario_basic): gpustats = gpustat.new_query() fp = StringIO() gpustats.print_json(fp=fp) import json j = json.loads(fp.getvalue()) from pprint import pprint pprint(j) assert j['driver_version'] == '415.27.mock' assert j['hostname'] assert j['gpus'] if __name__ == '__main__': pytest.main() gpustat-1.1.1/gpustat/util.py000066400000000000000000000056531443577762700162560ustar00rootroot00000000000000""" Miscellaneous Utilities. """ import collections import os.path import sys import traceback from typing import Callable, Tuple, Type, TypeVar, Union T = TypeVar('T') def bytes2human(in_bytes): '''Convert bytes (int) to a human-readable string.''' suffixes = ('B', 'KB', 'MB', 'GB', 'TB', 'PB') suffix = 0 result = int(in_bytes) while result > 9999 and suffix < len(suffixes): result = result >> 10 suffix += 1 if suffix >= len(suffixes): suffix -= 1 return "%d%s" % (result, suffixes[suffix]) def prettify_commandline(cmdline, color_command='', color_text=''): ''' Prettify and colorize a full command-line (given as list of strings), where command (basename) is highlighted in a different color. 
''' # cmdline: Iterable[str] if isinstance(cmdline, str): return cmdline assert cmdline command_p, command_b = os.path.split(cmdline[0]) s = color_text + os.path.join(command_p, color_command + command_b + color_text) if len(cmdline) > 1: s += ' ' s += ' '.join(cmdline[1:]) return s def shorten_left(text, width, placeholder="…"): # text: str if width is None: return text if text is None or len(text) <= width: return text if width < 0: raise ValueError("width must be non-negative.") if width == 0: return "" if width == len(placeholder): return placeholder elif width - len(placeholder) < 0: return placeholder[:width] # raise ValueError("width is smaller than the length of placeholder.") return placeholder + text[-(width - len(placeholder)):] def safecall(fn: Callable[[], T], *, exc_types: Union[Type, Tuple[Type, ...]] = Exception, error_value: T) -> T: """A protected call that suppresses certain types of exceptions.""" try: return fn() except exc_types: # pylint: disable=broad-except return error_value class DebugHelper: def __init__(self): self._reports = [] def add_exception(self, column, e=None): msg = "> An error while retrieving `{column}`: {e}".format( column=column, e=str(e)) self._reports.append((msg, e)) def _write(self, msg): sys.stderr.write(msg) sys.stderr.write('\n') def report_summary(self, concise=True): _seen_messages = collections.defaultdict(int) for msg, e in self._reports: if msg not in _seen_messages or not concise: self._write(msg) self._write(''.join( traceback.format_exception(None, e, e.__traceback__))) _seen_messages[msg] += 1 if concise: for msg, value in _seen_messages.items(): self._write("{msg} -> Total {value} occurrences.".format( msg=msg, value=value)) self._write('') gpustat-1.1.1/gpustat/util_test.py000066400000000000000000000014031443577762700173020ustar00rootroot00000000000000import sys import pytest from gpustat import util def test_safecall(): def _success(): return 42 def _error(): raise FileNotFoundError("oops") assert util.safecall(_success, error_value=None) == 42 assert util.safecall(_error, error_value=-1) == -1 with pytest.raises(FileNotFoundError): # not caught because exc_types does not match assert util.safecall(_error, exc_types=ValueError, error_value=-1) assert util.safecall(_error, error_value=-1, exc_types=FileNotFoundError) == -1 assert util.safecall(_error, error_value=-1, exc_types=(FileNotFoundError, OSError)) == -1 if __name__ == '__main__': sys.exit(pytest.main(["-s", "-v"] + sys.argv)) gpustat-1.1.1/pyproject.toml000066400000000000000000000001451443577762700161430ustar00rootroot00000000000000# pyproject.toml [build-system] requires = ["setuptools>=45", "setuptools_scm[toml]>=6.2", "wheel"] gpustat-1.1.1/requirements.txt000066400000000000000000000003521443577762700165130ustar00rootroot00000000000000# All required dependencies must be specified in setup.py, not here. # # This file lists additional packages that will be installed (BEFORE setup.py) # in the python environment (e.g. Travis CI; see .travis.yml). # vim: set ft=conf: gpustat-1.1.1/screenshot.png000066400000000000000000002055521443577762700161230ustar00rootroot00000000000000(binary PNG screenshot data omitted)
gpustat-1.1.1/setup.cfg000066400000000000000000000000731443577762700150500ustar00rootroot00000000000000[aliases] test = pytest [tool:pytest] addopts = --verbose gpustat-1.1.1/setup.py000066400000000000000000000101071443577762700147400ustar00rootroot00000000000000#!/usr/bin/env python import sys import os import re from setuptools import setup, Command __PATH__ = os.path.abspath(os.path.dirname(__file__)) def read_readme(): with open('README.md') as f: return f.read() def read_version(): try: import setuptools_scm except ImportError as ex: raise ImportError( "setuptools_scm not found. When running setup.py directly, " "setuptools_scm needs to be installed manually. " "Or consider running `pip install -e .` instead." ) version = setuptools_scm.get_version() setuptools_scm.dump_version(root=__PATH__, version=version, write_to='gpustat/_version.py') return version if os.getenv("GPUSTAT_VERSION"): # release process, e.g. GPUSTAT_VERSION="1.1" python setup.py sdist __version__ = os.environ["GPUSTAT_VERSION"] else: # Let the dev version be auto-generated from git tags, or # grab the version information from PKG-INFO for source distribution __version__ = read_version() # adapted from https://github.com/kennethreitz/setup.py class DeployCommand(Command): description = 'Build and deploy the package to PyPI.' user_options = [] def initialize_options(self): pass def finalize_options(self): pass @staticmethod def status(s): print(s) def run(self): import twine # we require twine locally # noqa assert 'dev' not in __version__, ( "Only non-devel versions are allowed.
" "__version__ == {}".format(__version__)) with os.popen("git status --short") as fp: git_status = fp.read().strip() if git_status: print("Error: git repository is not clean.\n") os.system("git status --short") sys.exit(1) try: from shutil import rmtree self.status('Removing previous builds ...') rmtree(os.path.join(__PATH__, 'dist')) except OSError: pass self.status('Building Source and Wheel (universal) distribution ...') os.system("GPUSTAT_VERSION='{}' sh -c '{} setup.py sdist'".format( __version__, sys.executable)) self.status('Uploading the package to PyPI via Twine ...') ret = os.system('twine upload dist/*') if ret != 0: sys.exit(ret) self.status('Creating git tags ...') os.system('git tag v{0}'.format(__version__)) os.system('git tag --list') sys.exit() install_requires = [ 'nvidia-ml-py>=11.450.129', # see #107, #143 'psutil>=5.6.0', # GH-1447 'blessed>=1.17.1', # GH-126 ] tests_requires = [ 'mockito>=1.2.1', 'pytest>=5.4.1', # python 3.6+ 'pytest-runner', ] setup( name='gpustat', version=__version__, license='MIT', description='An utility to monitor NVIDIA GPU status and usage', long_description=read_readme(), long_description_content_type='text/markdown', url='https://github.com/wookayin/gpustat', author='Jongwook Choi', author_email='wookayin@gmail.com', keywords='nvidia-smi gpu cuda monitoring gpustat', classifiers=[ # https://pypi.python.org/pypi?%3Aaction=list_classifiers 'Development Status :: 5 - Production/Stable', 'License :: OSI Approved :: MIT License', 'Operating System :: POSIX :: Linux', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', 'Programming Language :: Python :: 3.11', 'Topic :: System :: Monitoring', ], packages=['gpustat'], install_requires=install_requires, extras_require={'test': tests_requires, 'completion': ['shtab']}, tests_require=tests_requires, entry_points={ 'console_scripts': ['gpustat=gpustat:main'], }, cmdclass={ 'deploy': DeployCommand, }, include_package_data=True, zip_safe=False, python_requires='>=3.6', )