procfs-core-0.17.0/.cargo_vcs_info.json
{
  "git": {
    "sha1": "e303757cd3e21c763e339f6f7e4c9cb4e435ff83"
  },
  "path_in_vcs": "procfs-core"
}

procfs-core-0.17.0/COPYRIGHT.txt
The source code for the procfs library is copyright by Andrew Chin, 2019,
and other contributors.  It is licensed under either of

* Apache License, Version 2.0, http://www.apache.org/licenses/LICENSE-2.0
* MIT license, http://opensource.org/licenses/MIT

at your option.

The documentation of this library is derived from documentation written
by others:

* The proc(5) man page:
  Copyright (C) 1994, 1995 by Daniel Quinlan (quinlan@yggdrasil.com)
  and Copyright (C) 2002-2008,2017 Michael Kerrisk
  with networking additions from Alan Cox (A.Cox@swansea.ac.uk)
  and scsi additions from Michael Neuffer (neuffer@mail.uni-mainz.de)
  and sysctl additions from Andries Brouwer (aeb@cwi.nl)
  and System V IPC (as well as various other) additions from Michael Kerrisk

  Under the GPL Free Documentation License (reproduced below).

* Other manual pages:
  Copyright (c) 2006, 2008 by Michael Kerrisk

  Under the following license:

  Permission is granted to make and distribute verbatim copies of this
  manual provided the copyright notice and this permission notice are
  preserved on all copies.

  Permission is granted to copy and distribute modified versions of this
  manual under the conditions for verbatim copying, provided that the
  entire resulting derived work is distributed under the terms of a
  permission notice identical to this one.

  Since the Linux kernel and libraries are constantly changing, this
  manual page may be incorrect or out-of-date.  The author(s) assume no
  responsibility for errors or omissions, or for damages resulting from
  the use of the information contained herein.
  The author(s) may not have taken the same level of care in the
  production of this manual, which is licensed free of charge, as they
  might when working professionally.

  Formatted or processed versions of this manual, if unaccompanied by
  the source, must acknowledge the copyright and authors of this work.

* The Linux Documentation Project:
  Copyright 2003 Binh Nguyen

  Under the GPL Free Documentation License.  See:
  http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/ln14.html

==================================

Below is a copy of the GPL license:

This is free documentation; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 2 of the License, or (at your
option) any later version.

The GNU General Public License's references to "object code" and
"executables" are to be interpreted as the output of any document
formatting or typesetting system, including intermediate and printed
output.

This manual is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License along
with this manual; if not, see .

==================================

A full copy of the GNU Free Documentation License, version 1.2, can be
found here: https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt

Below is a copy of this license:

                GNU Free Documentation License
                  Version 1.2, November 2002

 Copyright (C) 2000,2001,2002  Free Software Foundation, Inc.
     51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

0.
PREAMBLE The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others. This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software. We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference. 1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. 
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none. The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. 
A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque". Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only. The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. 
These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. 2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. 3. COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. 
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. 4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. B. 
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. C. State on the Title page the name of the publisher of the Modified Version, as the publisher. D. Preserve all the copyright notices of the Document. E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. H. Include an unaltered copy of this License. I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. K. 
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version. N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. O. Preserve any Warranty Disclaimers. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles. You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. 
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. 5. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements". 6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. 
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. 7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate. 8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. 
In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. 9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See https://www.gnu.org/licenses/. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. ADDENDUM: How to use this License for your documents To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: Copyright (c) YEAR YOUR NAME. 
    Permission is granted to copy, distribute and/or modify this document
    under the terms of the GNU Free Documentation License, Version 1.2
    or any later version published by the Free Software Foundation;
    with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
    Texts.  A copy of the license is included in the section entitled
    "GNU Free Documentation License".

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
replace the "with...Texts." line with this:

    with the Invariant Sections being LIST THEIR TITLES, with the
    Front-Cover Texts being LIST, and with the Back-Cover Texts being
    LIST.

If you have Invariant Sections without Cover Texts, or some other
combination of the three, merge those two alternatives to suit the
situation.

If your document contains nontrivial examples of program code, we
recommend releasing these examples in parallel under your choice of
free software license, such as the GNU General Public License, to
permit their use in free software.

procfs-core-0.17.0/Cargo.toml
# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies.
#
# If you are reading this file be aware that the original Cargo.toml
# will likely look very different (and much more reasonable).
# See Cargo.toml.orig for the original contents.
[package]
edition = "2018"
rust-version = "1.48"
name = "procfs-core"
version = "0.17.0"
authors = ["Andrew Chin "]
build = false
autobins = false
autoexamples = false
autotests = false
autobenches = false
description = "Data structures and parsing for the linux procfs pseudo-filesystem"
documentation = "https://docs.rs/procfs-core/"
readme = "README.md"
keywords = [
    "procfs",
    "proc",
    "linux",
    "process",
]
categories = [
    "os::unix-apis",
    "filesystem",
]
license = "MIT OR Apache-2.0"
repository = "https://github.com/eminence/procfs"

[package.metadata.docs.rs]
all-features = true

[lib]
name = "procfs_core"
path = "src/lib.rs"

[dependencies.backtrace]
version = "0.3"
optional = true

[dependencies.bitflags]
version = "2"

[dependencies.chrono]
version = "0.4.20"
features = ["clock"]
optional = true
default-features = false

[dependencies.hex]
version = "0.4"

[dependencies.serde]
version = "1.0"
features = ["derive"]
optional = true

[features]
default = ["chrono"]
serde1 = [
    "serde",
    "bitflags/serde",
]

procfs-core-0.17.0/Cargo.toml.orig
[package]
name = "procfs-core"
documentation = "https://docs.rs/procfs-core/"
description = "Data structures and parsing for the linux procfs pseudo-filesystem"
readme = "../README.md"
version.workspace = true
authors.workspace = true
repository.workspace = true
keywords.workspace = true
categories.workspace = true
license.workspace = true
edition.workspace = true
rust-version.workspace = true

[features]
default = ["chrono"]
serde1 = ["serde", "bitflags/serde"]

[dependencies]
backtrace = { version = "0.3", optional = true }
bitflags = { version = "2" }
chrono = { version = "0.4.20", optional = true, features = ["clock"], default-features = false }
hex = "0.4"
serde = { version = "1.0", features = ["derive"], optional = true }

[package.metadata.docs.rs]
all-features = true

procfs-core-0.17.0/LICENSE-APACHE
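The manifest's `[features]` table shows that `chrono` support is enabled by default and that serde support sits behind the optional `serde1` feature (which also turns on `bitflags/serde`). As an illustrative sketch — the consuming crate is hypothetical — a downstream `Cargo.toml` that wants serde-derived types but not the chrono clock dependency might declare:

```toml
# Hypothetical downstream crate's Cargo.toml (illustrative only).
[dependencies.procfs-core]
version = "0.17"
# Opt out of the default `chrono` feature and enable serde support;
# per the features table above, `serde1` = ["serde", "bitflags/serde"].
default-features = false
features = ["serde1"]
```

Because Cargo unifies features across a dependency graph, any other crate in the same build that keeps the default features will re-enable `chrono` for everyone.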
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship.
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "{}" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright {yyyy} {name of copyright owner} Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. procfs-core-0.17.0/LICENSE-MIT000064400000000000000000000020511046102023000135650ustar 00000000000000Copyright (c) 2015 The procfs Developers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. procfs-core-0.17.0/README.md000064400000000000000000000101321046102023000134070ustar 00000000000000procfs ====== [![Crate](https://img.shields.io/crates/v/procfs.svg)](https://crates.io/crates/procfs) [![Docs](https://docs.rs/procfs/badge.svg)](https://docs.rs/procfs) [![Minimum rustc version](https://img.shields.io/badge/rustc-1.48+-lightgray.svg)](https://github.com/eminence/procfs#minimum-rust-version) This crate is an interface to the `proc` pseudo-filesystem on linux, which is normally mounted as `/proc`. Long-term, this crate aims to be fairly feature complete, but at the moment not all files are exposed. See the docs for info on what's supported, or view the [support.md](https://github.com/eminence/procfs/blob/master/support.md) file in the code repository. ## Examples There are several examples in the docs and in the [examples folder](https://github.com/eminence/procfs/tree/master/procfs/examples) of the code repository. Here's a small example that prints out all processes that are running on the same tty as the calling process. 
This is very similar to what "ps" does in its default mode: ```rust fn main() { let me = procfs::process::Process::myself().unwrap(); let me_stat = me.stat().unwrap(); let tps = procfs::ticks_per_second().unwrap(); println!("{: >5} {: <8} {: >8} {}", "PID", "TTY", "TIME", "CMD"); let tty = format!("pty/{}", me_stat.tty_nr().1); for prc in procfs::process::all_processes().unwrap() { let prc = prc.unwrap(); let stat = prc.stat().unwrap(); if stat.tty_nr == me_stat.tty_nr { // total_time is in seconds let total_time = (stat.utime + stat.stime) as f32 / (tps as f32); println!( "{: >5} {: <8} {: >8} {}", stat.pid, tty, total_time, stat.comm ); } } } ``` Here's another example that shows how to get the current memory usage of the current process: ```rust use procfs::process::Process; fn main() { let me = Process::myself().unwrap(); let me_stat = me.stat().unwrap(); println!("PID: {}", me.pid); let page_size = procfs::page_size(); println!("Memory page size: {}", page_size); println!("== Data from /proc/self/stat:"); println!("Total virtual memory used: {} bytes", me_stat.vsize); println!( "Total resident set: {} pages ({} bytes)", me_stat.rss, me_stat.rss * page_size ); } ``` There are a few ways to get this data, so also checkout the longer [self_memory](https://github.com/eminence/procfs/blob/master/procfs/examples/self_memory.rs) example for more details. ## Cargo features The following cargo features are available: * `chrono` -- Default. Optional. This feature enables a few methods that return values as `DateTime` objects. * `flate2` -- Default. Optional. This feature enables parsing gzip compressed `/proc/config.gz` file via the `procfs::kernel_config` method. * `backtrace` -- Optional. This feature lets you get a stack trace whenever an `InternalError` is raised. * `serde1` -- Optional. This feature allows most structs to be serialized and deserialized using serde 1.0. Note, this feature requires a version of rust newer than 1.48.0 (which is the MSRV for procfs). 
The exact version required is not specified here, since serde does not have an MSRV policy. ## Minimum Rust Version This crate is only tested against the latest stable rustc compiler, but may work with older compilers. See [msrv.md](msrv.md) for more details. ## License The procfs library is licensed under either of * Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. For additional copyright information regarding documentation, please also see the COPYRIGHT.txt file. ### Contribution Contributions are welcome, especially in the areas of documentation and testing on older kernels. Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. procfs-core-0.17.0/src/cgroups.rs000064400000000000000000000107401046102023000147540ustar 00000000000000use crate::ProcResult; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::io::BufRead; #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] /// Container group controller information. pub struct CGroupController { /// The name of the controller. pub name: String, /// The unique ID of the cgroup hierarchy on which this controller is mounted. /// /// If multiple cgroups v1 controllers are bound to the same hierarchy, then each will show /// the same hierarchy ID in this field. The value in this field will be 0 if: /// /// * the controller is not mounted on a cgroups v1 hierarchy; /// * the controller is bound to the cgroups v2 single unified hierarchy; or /// * the controller is disabled (see below). pub hierarchy: u32, /// The number of control groups in this hierarchy using this controller.
pub num_cgroups: u32, /// This field contains the value `true` if this controller is enabled, or `false` if it has been disabled pub enabled: bool, } #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] /// Container group controller information. // This contains a vector, but if each subsystem name is unique, maybe this can be a // hashmap instead pub struct CGroupControllers(pub Vec<CGroupController>); impl crate::FromBufRead for CGroupControllers { fn from_buf_read<R: BufRead>(reader: R) -> ProcResult<Self> { let mut vec = Vec::new(); for line in reader.lines() { let line = line?; if line.starts_with('#') { continue; } let mut s = line.split_whitespace(); let name = expect!(s.next(), "name").to_owned(); let hierarchy = from_str!(u32, expect!(s.next(), "hierarchy")); let num_cgroups = from_str!(u32, expect!(s.next(), "num_cgroups")); let enabled = expect!(s.next(), "enabled") == "1"; vec.push(CGroupController { name, hierarchy, num_cgroups, enabled, }); } Ok(CGroupControllers(vec)) } } /// Information about a process cgroup #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct ProcessCGroup { /// For cgroups version 1 hierarchies, this field contains a unique hierarchy ID number /// that can be matched to a hierarchy ID in /proc/cgroups. For the cgroups version 2 /// hierarchy, this field contains the value 0. pub hierarchy: u32, /// For cgroups version 1 hierarchies, this field contains a comma-separated list of the /// controllers bound to the hierarchy. /// /// For the cgroups version 2 hierarchy, this field is empty. pub controllers: Vec<String>, /// This field contains the pathname of the control group in the hierarchy to which the process /// belongs. /// /// This pathname is relative to the mount point of the hierarchy. pub pathname: String, } /// Information about process cgroups.
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct ProcessCGroups(pub Vec<ProcessCGroup>); impl crate::FromBufRead for ProcessCGroups { fn from_buf_read<R: BufRead>(reader: R) -> ProcResult<Self> { let mut vec = Vec::new(); for line in reader.lines() { let line = line?; if line.starts_with('#') { continue; } let mut s = line.splitn(3, ':'); let hierarchy = from_str!(u32, expect!(s.next(), "hierarchy")); let controllers = expect!(s.next(), "controllers") .split(',') .filter(|s| !s.is_empty()) .map(|s| s.to_owned()) .collect(); let pathname = expect!(s.next(), "path").to_owned(); vec.push(ProcessCGroup { hierarchy, controllers, pathname, }); } Ok(ProcessCGroups(vec)) } } impl IntoIterator for ProcessCGroups { type IntoIter = std::vec::IntoIter<ProcessCGroup>; type Item = ProcessCGroup; fn into_iter(self) -> Self::IntoIter { self.0.into_iter() } } impl<'a> IntoIterator for &'a ProcessCGroups { type IntoIter = std::slice::Iter<'a, ProcessCGroup>; type Item = &'a ProcessCGroup; fn into_iter(self) -> Self::IntoIter { self.0.iter() } } procfs-core-0.17.0/src/cpuinfo.rs000064400000000000000000000154461046102023000147450ustar 00000000000000use crate::{expect, ProcResult}; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::{collections::HashMap, io::BufRead}; /// Represents the data from `/proc/cpuinfo`. /// /// The `fields` field stores the fields that are common among all CPUs. The `cpus` field stores /// CPU-specific info. /// /// For common fields, there are methods that will return the data, converted to a more appropriate /// data type. These methods will all return `None` if the field doesn't exist, or is in some /// unexpected format (in that case, you'll have to access the string data directly).
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct CpuInfo { /// This stores fields that are common among all CPUs pub fields: HashMap<String, String>, pub cpus: Vec<HashMap<String, String>>, } impl crate::FromBufRead for CpuInfo { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { let mut list = Vec::new(); let mut map = Some(HashMap::new()); // the first line of a cpu block must start with "processor" let mut found_first = false; for line in r.lines().flatten() { if !line.is_empty() { let mut s = line.split(':'); let key = expect!(s.next()); if !found_first && key.trim() == "processor" { found_first = true; } if !found_first { continue; } if let Some(value) = s.next() { let key = key.trim().to_owned(); let value = value.trim().to_owned(); map.get_or_insert(HashMap::new()).insert(key, value); } } else if let Some(map) = map.take() { list.push(map); found_first = false; } } if let Some(map) = map.take() { list.push(map); } // find properties that are the same for all cpus assert!(!list.is_empty()); let common_fields: Vec<String> = list[0] .iter() .filter_map(|(key, val)| { if list.iter().all(|map| map.get(key).map_or(false, |v| v == val)) { Some(key.clone()) } else { None } }) .collect(); let mut common_map = HashMap::new(); for (k, v) in &list[0] { if common_fields.contains(k) { common_map.insert(k.clone(), v.clone()); } } for map in &mut list { map.retain(|k, _| !common_fields.contains(k)); } Ok(CpuInfo { fields: common_map, cpus: list, }) } } impl CpuInfo { /// Get the total number of cpu cores. /// /// This is the number of entries in the `/proc/cpuinfo` file. pub fn num_cores(&self) -> usize { self.cpus.len() } /// Get info for a specific cpu. /// /// This will merge the common fields with the cpu-specific fields. /// /// Returns None if the requested cpu index is not found.
pub fn get_info(&self, cpu_num: usize) -> Option<HashMap<&str, &str>> { self.cpus.get(cpu_num).map(|info| { self.fields .iter() .chain(info.iter()) .map(|(k, v)| (k.as_ref(), v.as_ref())) .collect() }) } /// Get the content of a specific field associated to a CPU /// /// If the field is not found in the set of CPU-specific fields, then /// it is returned from the set of common fields. /// /// Returns None if the requested cpu index is not found, or if the field /// is not found. pub fn get_field(&self, cpu_num: usize, field_name: &str) -> Option<&str> { self.cpus.get(cpu_num).and_then(|cpu_fields| { cpu_fields .get(field_name) .or_else(|| self.fields.get(field_name)) .map(|s| s.as_ref()) }) } pub fn model_name(&self, cpu_num: usize) -> Option<&str> { self.get_field(cpu_num, "model name") } pub fn vendor_id(&self, cpu_num: usize) -> Option<&str> { self.get_field(cpu_num, "vendor_id") } /// May not be available on some older 2.6 kernels pub fn physical_id(&self, cpu_num: usize) -> Option<u32> { self.get_field(cpu_num, "physical id").and_then(|s| s.parse().ok()) } pub fn flags(&self, cpu_num: usize) -> Option<Vec<&str>> { self.get_field(cpu_num, "flags") .map(|flags| flags.split_whitespace().collect()) } } #[cfg(test)] mod tests { use super::*; #[test] fn test_cpuinfo_rpi() { // My rpi system includes some stuff at the end of /proc/cpuinfo that we shouldn't parse let data = r#"processor : 0 model name : ARMv7 Processor rev 4 (v7l) BogoMIPS : 38.40 Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 4 processor : 1 model name : ARMv7 Processor rev 4 (v7l) BogoMIPS : 38.40 Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 4 processor : 2 model name : ARMv7 Processor rev 4 (v7l) BogoMIPS : 38.40 Features : half thumb fastmult
vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 4 processor : 3 model name : ARMv7 Processor rev 4 (v7l) BogoMIPS : 38.40 Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32 CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xd03 CPU revision : 4 Hardware : BCM2835 Revision : a020d3 Serial : 0000000012345678 Model : Raspberry Pi 3 Model B Plus Rev 1.3 "#; let r = std::io::Cursor::new(data.as_bytes()); use crate::FromRead; let info = CpuInfo::from_read(r).unwrap(); assert_eq!(info.num_cores(), 4); let info = info.get_info(0).unwrap(); assert!(info.get("model name").is_some()); assert!(info.get("BogoMIPS").is_some()); assert!(info.get("Features").is_some()); assert!(info.get("CPU implementer").is_some()); assert!(info.get("CPU architecture").is_some()); assert!(info.get("CPU variant").is_some()); assert!(info.get("CPU part").is_some()); assert!(info.get("CPU revision").is_some()); } } procfs-core-0.17.0/src/crypto.rs000064400000000000000000000432311046102023000146130ustar 00000000000000use crate::{expect, FromBufRead, ProcError, ProcResult}; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::{ collections::HashMap, convert::TryFrom, io::BufRead, iter::{once, Peekable}, str::FromStr, }; /// Represents the data from `/proc/crypto`. /// /// Each block represents a cryptographic implementation that has been registered with the kernel. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct CryptoTable { pub crypto_blocks: HashMap<String, Vec<CryptoBlock>>, } /// Format of a crypto implementation represented in /proc/crypto.
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct CryptoBlock { pub name: String, pub driver: String, pub module: String, pub priority: isize, pub ref_count: isize, pub self_test: SelfTest, pub internal: bool, pub fips_enabled: bool, pub crypto_type: Type, } impl FromBufRead for CryptoTable { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { let mut lines = r.lines().peekable(); let mut crypto_blocks: HashMap<String, Vec<CryptoBlock>> = HashMap::new(); while let Some(line) = lines.next() { let line = line?; // Just skip empty lines if !line.is_empty() { let mut split = line.split(':'); let name = expect!(split.next()); if name.trim() == "name" { let name = expect!(split.next()).trim().to_string(); let block = CryptoBlock::from_iter(&mut lines, name.as_str())?; let blocks = crypto_blocks.entry(name).or_insert(Vec::new()); blocks.push(block); } } } Ok(CryptoTable { crypto_blocks }) } } impl CryptoTable { pub fn get<T: AsRef<str>>(&self, target: T) -> Option<&Vec<CryptoBlock>> { self.crypto_blocks.get(target.as_ref()) } } impl CryptoBlock { fn from_iter<T: Iterator<Item = std::io::Result<String>>>( iter: &mut Peekable<T>, name: &str, ) -> ProcResult<Self> { let driver = parse_line(iter, "driver", name)?; let module = parse_line(iter, "module", name)?; let priority = from_str!(isize, &parse_line(iter, "priority", name)?); let ref_count = from_str!(isize, &parse_line(iter, "refcnt", name)?); let self_test = SelfTest::try_from(parse_line(iter, "selftest", name)?.as_str())?; let internal = parse_bool(iter, "internal", name)?; let fips_enabled = parse_fips(iter, name)?; let crypto_type = Type::from_iter(iter, name)?; Ok(CryptoBlock { name: name.to_string(), driver, module, priority, ref_count, self_test, internal, fips_enabled, crypto_type, }) } } /// Potential results for selftest.
#[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum SelfTest { Passed, Unknown, } impl TryFrom<&str> for SelfTest { type Error = ProcError; fn try_from(value: &str) -> Result<Self, Self::Error> { Ok(match value { "passed" => Self::Passed, "unknown" => Self::Unknown, _ => { return Err(build_internal_error!(format!( "Could not recognise self test string {value}" ))) } }) } } /// Enumeration of potential types and their associated data. Unknown at end to catch unrecognised types. #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum Type { /// Symmetric Key Cipher Skcipher(Skcipher), /// Single Block Cipher Cipher(Cipher), /// Synchronous Hash Shash(Shash), /// Asynchronous Hash Ahash(Ahash), /// Authenticated Encryption with Associated Data Aead(Aead), /// Random Number Generator Rng(Rng), /// Test algorithm Larval(Larval), /// Synchronous Compression Scomp, /// General Compression Compression, /// Asymmetric Cipher AkCipher, /// Key-agreement Protocol Primitive Kpp, /// Signature Sig, /// Unrecognised type, associated data collected into a hash map Unknown(Unknown), } impl Type { fn from_iter<T: Iterator<Item = std::io::Result<String>>>( iter: &mut Peekable<T>, name: &str, ) -> ProcResult<Self> { let type_name = parse_line(iter, "type", name)?; Ok(match type_name.as_str() { "skcipher" => Self::Skcipher(Skcipher::parse(iter, name)?), "cipher" => Self::Cipher(Cipher::parse(iter, name)?), "shash" => Self::Shash(Shash::parse(iter, name)?), "scomp" => Self::Scomp, "compression" => Self::Compression, "akcipher" => Self::AkCipher, "kpp" => Self::Kpp, "ahash" => Self::Ahash(Ahash::parse(iter, name)?), "aead" => Self::Aead(Aead::parse(iter, name)?), "rng" => Self::Rng(Rng::parse(iter, name)?), "larval" => Self::Larval(Larval::parse(iter, name)?), "sig" => Self::Sig, unknown_name => Self::Unknown(Unknown::parse(iter, unknown_name)), }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct Skcipher { pub async_capable: bool, pub block_size: usize, pub min_key_size: usize, pub max_key_size: usize, pub iv_size: usize, pub chunk_size: usize, pub walk_size: usize, } impl Skcipher { fn parse<T: Iterator<Item = std::io::Result<String>>>(iter: &mut T, name: &str) -> ProcResult<Self> { let async_capable = parse_bool(iter, "async", name)?; let block_size = from_str!(usize, &parse_line(iter, "blocksize", name)?); let min_key_size = from_str!(usize, &parse_line(iter, "min keysize", name)?); let max_key_size = from_str!(usize, &parse_line(iter, "max keysize", name)?); let iv_size = from_str!(usize, &parse_line(iter, "ivsize", name)?); let chunk_size = from_str!(usize, &parse_line(iter, "chunksize", name)?); let walk_size = from_str!(usize, &parse_line(iter, "walksize", name)?); Ok(Self { async_capable, block_size, min_key_size, max_key_size, iv_size, chunk_size, walk_size, }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Cipher { pub block_size: usize, pub min_key_size: usize, pub max_key_size: usize, } impl Cipher { fn parse<T: Iterator<Item = std::io::Result<String>>>(iter: &mut T, name: &str) -> ProcResult<Self> { let block_size = from_str!(usize, &parse_line(iter, "blocksize", name)?); let min_key_size = from_str!(usize, &parse_line(iter, "min keysize", name)?); let max_key_size = from_str!(usize, &parse_line(iter, "max keysize", name)?); Ok(Self { block_size, min_key_size, max_key_size, }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Shash { pub block_size: usize, pub digest_size: usize, } impl Shash { fn parse<T: Iterator<Item = std::io::Result<String>>>(iter: &mut T, name: &str) -> ProcResult<Self> { let block_size = from_str!(usize, &parse_line(iter, "blocksize", name)?); let digest_size = from_str!(usize, &parse_line(iter, "digestsize", name)?); Ok(Self { block_size, digest_size, }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Ahash { pub async_capable: bool, pub block_size: usize,
pub digest_size: usize, } impl Ahash { fn parse<T: Iterator<Item = std::io::Result<String>>>(iter: &mut T, name: &str) -> ProcResult<Self> { let async_capable = parse_bool(iter, "async", name)?; let block_size = from_str!(usize, &parse_line(iter, "blocksize", name)?); let digest_size = from_str!(usize, &parse_line(iter, "digestsize", name)?); Ok(Self { async_capable, block_size, digest_size, }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Aead { pub async_capable: bool, pub block_size: usize, pub iv_size: usize, pub max_auth_size: usize, pub gen_iv: Option<usize>, } impl Aead { fn parse<T: Iterator<Item = std::io::Result<String>>>( iter: &mut Peekable<T>, name: &str, ) -> ProcResult<Self> { let async_capable = parse_bool(iter, "async", name)?; let block_size = from_str!(usize, &parse_line(iter, "blocksize", name)?); let iv_size = from_str!(usize, &parse_line(iter, "ivsize", name)?); let max_auth_size = from_str!(usize, &parse_line(iter, "maxauthsize", name)?); let gen_iv = parse_gen_iv(iter, name)?; Ok(Self { async_capable, block_size, iv_size, max_auth_size, gen_iv, }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Rng { pub seed_size: usize, } impl Rng { fn parse<T: Iterator<Item = std::io::Result<String>>>(iter: &mut T, name: &str) -> ProcResult<Self> { let seed_size = from_str!(usize, &parse_line(iter, "seedsize", name)?); Ok(Self { seed_size }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Larval { pub flags: u32, } impl Larval { fn parse<T: Iterator<Item = std::io::Result<String>>>(iter: &mut T, name: &str) -> ProcResult<Self> { let flags = from_str!(u32, &parse_line(iter, "flags", name)?); Ok(Self { flags }) } } #[derive(Debug, Clone, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Unknown { pub fields: HashMap<String, String>, } impl Unknown { fn parse<T: Iterator<Item = std::io::Result<String>>>(iter: &mut T, unknown_name: &str) -> Self { let fields = iter .map_while(|line| { let line = match line { Ok(line) => line, Err(_) => return None, }; (!line.is_empty()).then(|| {
line.split_once(':') .map(|(k, v)| (k.trim().to_string(), v.trim().to_string())) }) }) .flatten() .chain(once((String::from("name"), unknown_name.to_string()))) .collect(); Self { fields } } } fn parse_line<T: Iterator<Item = std::io::Result<String>>>( iter: &mut T, to_find: &str, name: &str, ) -> ProcResult<String> { let line = expect!(iter.next())?; let (key, val) = expect!(line.split_once(':')); if key.trim() != to_find { return Err(build_internal_error!(format!( "could not locate {to_find} in /proc/crypto, block {name}" ))); } Ok(val.trim().to_string()) } fn parse_fips<T: Iterator<Item = std::io::Result<String>>>( iter: &mut Peekable<T>, name: &str, ) -> ProcResult<bool> { if iter .peek() .map(|line| line.as_ref().is_ok_and(|line| line.contains("fips"))) .unwrap_or(false) { let fips = parse_line(iter, "fips", name)?; if fips == "yes" { return Ok(true); } } Ok(false) } fn parse_bool<T: Iterator<Item = std::io::Result<String>>>( iter: &mut T, to_find: &str, name: &str, ) -> ProcResult<bool> { match parse_line(iter, to_find, name)?.as_str() { "yes" => Ok(true), "no" => Ok(false), _ => Err(build_internal_error!(format!( "{to_find} for {name} was unrecognised term" ))), } } fn parse_gen_iv<T: Iterator<Item = std::io::Result<String>>>( iter: &mut Peekable<T>, name: &str, ) -> ProcResult<Option<usize>> { if iter .peek() .map(|line| line.as_ref().is_ok_and(|line| line.contains("geniv"))) .unwrap_or(false) { let val = parse_line(iter, "geniv", name)?; if val != "" { return Ok(Some(expect!(usize::from_str(&val)))); } } Ok(None) } #[cfg(test)] mod test { use super::*; #[test] fn parse_line_correct() { let line = Ok("name : ghash".to_string()); let mut iter = std::iter::once(line); let val = match parse_line(&mut iter, "name", "parse_line_correct") { Ok(val) => val, Err(e) => panic!("{}", e), }; assert_eq!("ghash", val); } #[test] fn parse_line_incorrect() { let line = Ok("name : ghash".to_string()); let mut iter = std::iter::once(line); let val = match parse_line(&mut iter, "name", "parse_line_incorrect") { Ok(val) => val, Err(e) => panic!("{}", e), }; assert_ne!("hash", val); } #[test] fn parse_block() { let block = r#"driver : deflate-generic module : kernel priority : 0 refcnt
: 2 selftest : passed internal : no type : compression"#; let mut iter = block.lines().map(|s| Ok(s.to_string())).peekable(); let block = CryptoBlock::from_iter(&mut iter, "deflate"); let block = block.expect("Should have read one block"); assert_eq!(block.name, "deflate"); assert_eq!(block.driver, "deflate-generic"); assert_eq!(block.module, "kernel"); assert_eq!(block.priority, 0); assert_eq!(block.ref_count, 2); assert_eq!(block.self_test, SelfTest::Passed); assert_eq!(block.internal, false); assert_eq!(block.crypto_type, Type::Compression); } #[test] fn parse_bad_block() { let block = r#"driver : deflate-generic module : kernel priority : 0 refcnt : 2 selftest : passed internal : no type : aead"#; let mut iter = block.lines().map(|s| Ok(s.to_string())).peekable(); let block = CryptoBlock::from_iter(&mut iter, "deflate"); eprintln!("{block:?}"); assert!(block.is_err()); } #[test] fn parse_two() { let block = r#"name : ccm(aes) driver : ccm_base(ctr(aes-aesni),cbcmac(aes-aesni)) module : ccm priority : 300 refcnt : 4 selftest : passed internal : no type : aead async : no blocksize : 1 ivsize : 16 maxauthsize : 16 geniv : name : ctr(aes) driver : ctr(aes-aesni) module : kernel priority : 300 refcnt : 4 selftest : passed internal : no type : skcipher async : no blocksize : 1 min keysize : 16 max keysize : 32 ivsize : 16 chunksize : 16 walksize : 16 "#; let blocks = CryptoTable::from_buf_read(block.as_bytes()); let blocks = blocks.expect("Should have read two blocks"); assert_eq!(blocks.crypto_blocks.len(), 2); } #[test] fn parse_duplicate_name() { let block = r#"name : deflate driver : deflate-generic module : kernel priority : 0 refcnt : 2 selftest : passed internal : no type : compression name : deflate driver : deflate-non-generic module : kernel priority : 0 refcnt : 2 selftest : passed internal : no type : compression "#; let blocks = CryptoTable::from_buf_read(block.as_bytes()); let blocks = blocks.expect("Should have read two blocks");
assert_eq!(blocks.crypto_blocks.len(), 1); let deflate_vec = blocks .crypto_blocks .get("deflate") .expect("Should have created a vec of deflates"); assert_eq!(deflate_vec.len(), 2); } #[test] fn parse_unknown() { let block = r#"driver : ccm_base(ctr(aes-aesni),cbcmac(aes-aesni)) module : ccm priority : 300 refcnt : 4 selftest : passed internal : no type : unknown key : val key2 : val2 "#; let mut iter = block.lines().map(|s| Ok(s.to_string())).peekable(); let block = CryptoBlock::from_iter(&mut iter, "ccm(aes)"); let block = block.expect("Should have read one block"); let mut compare = HashMap::new(); compare.insert(String::from("key"), String::from("val")); compare.insert(String::from("key2"), String::from("val2")); compare.insert(String::from("name"), String::from("unknown")); assert_eq!(block.crypto_type, Type::Unknown(Unknown { fields: compare })); } #[test] fn parse_unknown_top() { let block = r#"name : ccm(aes) driver : ccm_base(ctr(aes-aesni),cbcmac(aes-aesni)) module : ccm priority : 300 refcnt : 4 selftest : passed internal : no type : unknown key : val key2 : val2 name : ctr(aes) driver : ctr(aes-aesni) module : kernel priority : 300 refcnt : 4 selftest : passed internal : no type : skcipher async : no blocksize : 1 min keysize : 16 max keysize : 32 ivsize : 16 chunksize : 16 walksize : 16 "#; let blocks = CryptoTable::from_buf_read(block.as_bytes()); let blocks = blocks.expect("Should have read two blocks"); assert_eq!(blocks.crypto_blocks.len(), 2); } } procfs-core-0.17.0/src/devices.rs000064400000000000000000000106641046102023000147210ustar 00000000000000use std::io::BufRead; use super::ProcResult; use std::str::FromStr; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// Device entries under `/proc/devices` #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Devices { /// Character devices pub char_devices: Vec<CharDeviceEntry>, /// Block devices, which can be empty if the kernel doesn't support
block devices (without `CONFIG_BLOCK`) pub block_devices: Vec<BlockDeviceEntry>, } /// A character device entry under `/proc/devices` #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct CharDeviceEntry { /// Device major number pub major: u32, /// Device name pub name: String, } /// A block device entry under `/proc/devices` #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct BlockDeviceEntry { /// Device major number pub major: i32, /// Device name pub name: String, } impl super::FromBufRead for Devices { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { enum State { Char, Block, } let mut state = State::Char; // Always start with char devices let mut devices = Devices { char_devices: vec![], block_devices: vec![], }; for line in r.lines() { let line = expect!(line); if line.is_empty() { continue; } else if line.starts_with("Character devices:") { state = State::Char; continue; } else if line.starts_with("Block devices:") { state = State::Block; continue; } let mut s = line.split_whitespace(); match state { State::Char => { let major = expect!(u32::from_str(expect!(s.next()))); let name = expect!(s.next()).to_string(); let char_device_entry = CharDeviceEntry { major, name }; devices.char_devices.push(char_device_entry); } State::Block => { let major = expect!(i32::from_str(expect!(s.next()))); let name = expect!(s.next()).to_string(); let block_device_entry = BlockDeviceEntry { major, name }; devices.block_devices.push(block_device_entry); } } } Ok(devices) } } #[cfg(test)] mod tests { use super::*; #[test] fn test_devices() { use crate::FromBufRead; use std::io::Cursor; let s = "Character devices: 1 mem 4 /dev/vc/0 4 tty 4 ttyS 5 /dev/tty 5 /dev/console 5 /dev/ptmx 7 vcs 10 misc 13 input 29 fb 90 mtd 136 pts 180 usb 188 ttyUSB 189 usb_device Block devices: 7 loop 8 sd 65 sd 71 sd 128 sd 135 sd 254 device-mapper 259 blkext "; let cursor = Cursor::new(s); let devices =
Devices::from_buf_read(cursor).unwrap(); let (chrs, blks) = (devices.char_devices, devices.block_devices); assert_eq!(chrs.len(), 16); assert_eq!(chrs[1].major, 4); assert_eq!(chrs[1].name, "/dev/vc/0"); assert_eq!(chrs[8].major, 10); assert_eq!(chrs[8].name, "misc"); assert_eq!(chrs[15].major, 189); assert_eq!(chrs[15].name, "usb_device"); assert_eq!(blks.len(), 8); assert_eq!(blks[0].major, 7); assert_eq!(blks[0].name, "loop"); assert_eq!(blks[7].major, 259); assert_eq!(blks[7].name, "blkext"); } #[test] fn test_devices_without_block() { use crate::FromBufRead; use std::io::Cursor; let s = "Character devices: 1 mem 4 /dev/vc/0 4 tty 4 ttyS 5 /dev/tty 5 /dev/console 5 /dev/ptmx 7 vcs 10 misc 13 input 29 fb 90 mtd 136 pts 180 usb 188 ttyUSB 189 usb_device "; let cursor = Cursor::new(s); let devices = Devices::from_buf_read(cursor).unwrap(); let (chrs, blks) = (devices.char_devices, devices.block_devices); assert_eq!(chrs.len(), 16); assert_eq!(chrs[1].major, 4); assert_eq!(chrs[1].name, "/dev/vc/0"); assert_eq!(chrs[8].major, 10); assert_eq!(chrs[8].name, "misc"); assert_eq!(chrs[15].major, 189); assert_eq!(chrs[15].name, "usb_device"); assert_eq!(blks.len(), 0); } } procfs-core-0.17.0/src/diskstats.rs000064400000000000000000000112521046102023000153020ustar 00000000000000use crate::{expect, from_str, ProcResult}; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::io::BufRead; /// Disk IO stat information /// /// To fully understand these fields, please see the [iostats.txt](https://www.kernel.org/doc/Documentation/iostats.txt) /// kernel documentation. /// /// For an example, see the [diskstats.rs](https://github.com/eminence/procfs/tree/master/examples) /// example in the source repo. 
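///
/// Each line of `/proc/diskstats` is whitespace-separated. A hedged, std-only
/// sketch of pulling out the first three columns (major, minor, device name);
/// the sample line is illustrative, not captured from a real system:
///
/// ```
/// let line = "   8       0 sda 553 3120 76082 320 73 37 416 92 0 584 412";
/// let mut s = line.split_whitespace();
/// let major: i32 = s.next().unwrap().parse().unwrap();
/// let minor: i32 = s.next().unwrap().parse().unwrap();
/// let name = s.next().unwrap();
/// assert_eq!((major, minor, name), (8, 0, "sda"));
/// ```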
// Doc reference: https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats // Doc reference: https://www.kernel.org/doc/Documentation/iostats.txt #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct DiskStat { /// The device major number pub major: i32, /// The device minor number pub minor: i32, /// Device name pub name: String, /// Reads completed successfully /// /// This is the total number of reads completed successfully pub reads: u64, /// Reads merged /// /// The number of adjacent reads that have been merged for efficiency. pub merged: u64, /// Sectors read successfully /// /// This is the total number of sectors read successfully. pub sectors_read: u64, /// Time spent reading (ms) pub time_reading: u64, /// writes completed pub writes: u64, /// writes merged /// /// The number of adjacent writes that have been merged for efficiency. pub writes_merged: u64, /// Sectors written successfully pub sectors_written: u64, /// Time spent writing (ms) pub time_writing: u64, /// I/Os currently in progress pub in_progress: u64, /// Time spent doing I/Os (ms) pub time_in_progress: u64, /// Weighted time spent doing I/Os (ms) pub weighted_time_in_progress: u64, /// Discards completed successfully /// /// (since kernel 4.18) pub discards: Option, /// Discards merged pub discards_merged: Option, /// Sectors discarded pub sectors_discarded: Option, /// Time spent discarding pub time_discarding: Option, /// Flush requests completed successfully /// /// (since kernel 5.5) pub flushes: Option, /// Time spent flushing pub time_flushing: Option, } /// A list of disk stats. 
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct DiskStats(pub Vec); impl crate::FromBufRead for DiskStats { fn from_buf_read(r: R) -> ProcResult { let mut v = Vec::new(); for line in r.lines() { let line = line?; v.push(DiskStat::from_line(&line)?); } Ok(DiskStats(v)) } } impl DiskStat { pub fn from_line(line: &str) -> ProcResult { let mut s = line.split_whitespace(); let major = from_str!(i32, expect!(s.next())); let minor = from_str!(i32, expect!(s.next())); let name = expect!(s.next()).to_string(); let reads = from_str!(u64, expect!(s.next())); let merged = from_str!(u64, expect!(s.next())); let sectors_read = from_str!(u64, expect!(s.next())); let time_reading = from_str!(u64, expect!(s.next())); let writes = from_str!(u64, expect!(s.next())); let writes_merged = from_str!(u64, expect!(s.next())); let sectors_written = from_str!(u64, expect!(s.next())); let time_writing = from_str!(u64, expect!(s.next())); let in_progress = from_str!(u64, expect!(s.next())); let time_in_progress = from_str!(u64, expect!(s.next())); let weighted_time_in_progress = from_str!(u64, expect!(s.next())); let discards = s.next().and_then(|s| u64::from_str_radix(s, 10).ok()); let discards_merged = s.next().and_then(|s| u64::from_str_radix(s, 10).ok()); let sectors_discarded = s.next().and_then(|s| u64::from_str_radix(s, 10).ok()); let time_discarding = s.next().and_then(|s| u64::from_str_radix(s, 10).ok()); let flushes = s.next().and_then(|s| u64::from_str_radix(s, 10).ok()); let time_flushing = s.next().and_then(|s| u64::from_str_radix(s, 10).ok()); Ok(DiskStat { major, minor, name, reads, merged, sectors_read, time_reading, writes, writes_merged, sectors_written, time_writing, in_progress, time_in_progress, weighted_time_in_progress, discards, discards_merged, sectors_discarded, time_discarding, flushes, time_flushing, }) } } procfs-core-0.17.0/src/iomem.rs000064400000000000000000000035511046102023000144020ustar 
00000000000000#[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::io::BufRead; use super::ProcResult; use crate::{process::Pfn, split_into_num}; #[derive(Debug, PartialEq, Eq, Clone, Hash)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Iomem(pub Vec<(usize, PhysicalMemoryMap)>); impl crate::FromBufRead for Iomem { fn from_buf_read(r: R) -> ProcResult { let mut vec = Vec::new(); for line in r.lines() { let line = expect!(line); let (indent, map) = PhysicalMemoryMap::from_line(&line)?; vec.push((indent, map)); } Ok(Iomem(vec)) } } #[derive(Debug, PartialEq, Eq, Clone, Hash)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct PhysicalMemoryMap { /// The address space in the process that the mapping occupies. pub address: (u64, u64), pub name: String, } impl PhysicalMemoryMap { fn from_line(line: &str) -> ProcResult<(usize, PhysicalMemoryMap)> { let indent = line.chars().take_while(|c| *c == ' ').count() / 2; let line = line.trim(); let mut s = line.split(" : "); let address = expect!(s.next()); let name = expect!(s.next()); Ok(( indent, PhysicalMemoryMap { address: split_into_num(address, '-', 16)?, name: String::from(name), }, )) } /// Get the PFN range for the mapping /// /// First element of the tuple (start) is included. /// Second element (end) is excluded pub fn get_range(&self) -> impl crate::WithSystemInfo { move |si: &crate::SystemInfo| { let start = self.address.0 / si.page_size(); let end = (self.address.1 + 1) / si.page_size(); (Pfn(start), Pfn(end)) } } } procfs-core-0.17.0/src/keyring.rs000064400000000000000000000354661046102023000147560ustar 00000000000000//! Functions related to the in-kernel key management and retention facility //! //! For more details on this facility, see the `keyrings(7)` man page. 
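//!
//! # Example
//!
//! A hedged sketch of the whitespace-separated column layout of a `/proc/keys`
//! line (serial, flags, usage, timeout, permissions, uid, gid, type,
//! description); the sample line is illustrative, not captured from a real
//! system, and the full parsing is done by `Key::from_line` below:
//!
//! ```
//! let line = "059dcf9a I--Q---     1 perm 3f010000  1000  1000 user      mykey: 16";
//! let fields: Vec<&str> = line.split_whitespace().collect();
//! assert_eq!(fields[0], "059dcf9a"); // key serial number, in hex
//! assert_eq!(fields[7], "user");     // key type
//! ```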
use crate::{build_internal_error, expect, from_str, ProcResult}; use bitflags::bitflags; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::{collections::HashMap, io::BufRead, time::Duration}; bitflags! { /// Various key flags #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct KeyFlags: u32 { /// The key has been instantiated const INSTANTIATED = 0x01; /// The key has been revoked const REVOKED = 0x02; /// The key is dead /// /// I.e. the key type has been unregistered. A key may be briefly in this state during garbage collection. const DEAD = 0x04; /// The key contributes to the user's quota const QUOTA = 0x08; /// The key is under construction via a callback to user space const UNDER_CONSTRUCTION = 0x10; /// The key is negatively instantiated const NEGATIVE = 0x20; /// The key has been invalidated const INVALID = 0x40; } } bitflags! { /// Bitflags that represent the permissions for a key #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct PermissionFlags: u32 { /// The attributes of the key may be read /// /// This includes the type, description, and access rights (excluding the security label) const VIEW = 0x01; /// For a key: the payload of the key may be read. For a keyring: the list of serial numbers (keys) to which the keyring has links may be read. const READ = 0x02; /// The payload of the key may be updated and the key may be revoked. /// /// For a keyring, links may be added to or removed from the keyring, and the keyring /// may be cleared completely (all links are removed). const WRITE = 0x04; /// The key may be found by a search. /// /// For keyrings: keys and keyrings that are linked to by the keyring may be searched. const SEARCH = 0x08; /// Links may be created from keyrings to the key.
/// /// The initial link to a key that is established when the key is created doesn't require this permission. const LINK = 0x10; /// The ownership details and security label of the key may be changed, the key's expiration /// time may be set, and the key may be revoked. const SETATTR = 0x20; const ALL = Self::VIEW.bits() | Self::READ.bits() | Self::WRITE.bits() | Self::SEARCH.bits() | Self::LINK.bits() | Self::SETATTR.bits(); } } impl KeyFlags { fn from_str(s: &str) -> KeyFlags { let mut me = KeyFlags::empty(); let mut chars = s.chars(); match chars.next() { Some(c) if c == 'I' => me.insert(KeyFlags::INSTANTIATED), _ => {} } match chars.next() { Some(c) if c == 'R' => me.insert(KeyFlags::REVOKED), _ => {} } match chars.next() { Some(c) if c == 'D' => me.insert(KeyFlags::DEAD), _ => {} } match chars.next() { Some(c) if c == 'Q' => me.insert(KeyFlags::QUOTA), _ => {} } match chars.next() { Some(c) if c == 'U' => me.insert(KeyFlags::UNDER_CONSTRUCTION), _ => {} } match chars.next() { Some(c) if c == 'N' => me.insert(KeyFlags::NEGATIVE), _ => {} } match chars.next() { Some(c) if c == 'i' => me.insert(KeyFlags::INVALID), _ => {} } me } } #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Permissions { pub possessor: PermissionFlags, pub user: PermissionFlags, pub group: PermissionFlags, pub other: PermissionFlags, } impl Permissions { fn from_str(s: &str) -> ProcResult { let possessor = PermissionFlags::from_bits(from_str!(u32, &s[0..2], 16)) .ok_or_else(|| build_internal_error!(format!("Unable to parse {:?} as PermissionFlags", s)))?; let user = PermissionFlags::from_bits(from_str!(u32, &s[2..4], 16)) .ok_or_else(|| build_internal_error!(format!("Unable to parse {:?} as PermissionFlags", s)))?; let group = PermissionFlags::from_bits(from_str!(u32, &s[4..6], 16)) .ok_or_else(|| build_internal_error!(format!("Unable to parse {:?} as PermissionFlags", s)))?; let other = PermissionFlags::from_bits(from_str!(u32, &s[6..8], 
16)) .ok_or_else(|| build_internal_error!(format!("Unable to parse {:?} as PermissionFlags", s)))?; Ok(Permissions { possessor, user, group, other, }) } } #[derive(Debug, Clone, Eq, PartialEq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum KeyTimeout { Permanent, Expired, Timeout(Duration), } impl KeyTimeout { fn from_str(s: &str) -> ProcResult<KeyTimeout> { if s == "perm" { Ok(KeyTimeout::Permanent) } else if s == "expd" { Ok(KeyTimeout::Expired) } else { let (val, unit) = s.split_at(s.len() - 1); let val = from_str!(u64, val); match unit { "s" => Ok(KeyTimeout::Timeout(Duration::from_secs(val))), "m" => Ok(KeyTimeout::Timeout(Duration::from_secs(val * 60))), "h" => Ok(KeyTimeout::Timeout(Duration::from_secs(val * 60 * 60))), "d" => Ok(KeyTimeout::Timeout(Duration::from_secs(val * 60 * 60 * 24))), "w" => Ok(KeyTimeout::Timeout(Duration::from_secs(val * 60 * 60 * 24 * 7))), _ => Err(build_internal_error!(format!("Unable to parse keytimeout of {:?}", s))), } } } } #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum KeyType { /// This is a general-purpose key type. /// /// The key is kept entirely within kernel memory. The payload may be read and updated by /// user-space applications. The payload for keys of this type is a blob of arbitrary /// data of up to 32,767 bytes. /// The description may be any valid string, though it is preferred that it start /// with a colon-delimited prefix representing the service to which the key is of /// interest (for instance "afs:mykey"). User, /// Keyrings are special keys which store a set of links to other keys (including /// other keyrings), analogous to a directory holding links to files. The main /// purpose of a keyring is to prevent other keys from being garbage collected /// because nothing refers to them. /// /// Keyrings with descriptions (names) that begin with a period ('.') are /// reserved to the implementation.
Keyring, /// This key type is essentially the same as "user", but it does not provide /// reading (i.e., the keyctl(2) KEYCTL_READ operation), meaning that the key /// payload is never visible from user space. This is suitable for storing /// username-password pairs that should not be readable from user space. /// /// The description of a "logon" key must start with a non-empty colon-delimited /// prefix whose purpose is to identify the service to which the key belongs. /// (Note that this differs from keys of the "user" type, where the inclusion of /// a prefix is recommended but is not enforced.) Logon, /// This key type is similar to the "user" key type, but it may hold a payload of /// up to 1 MiB in size. This key type is useful for purposes such as holding /// Kerberos ticket caches. /// /// The payload data may be stored in a tmpfs filesystem, rather than in kernel /// memory, if the data size exceeds the overhead of storing the data in the /// filesystem. (Storing the data in a filesystem requires filesystem structures /// to be allocated in the kernel. The size of these structures determines the /// size threshold above which the tmpfs storage method is used.) Since Linux /// 4.8, the payload data is encrypted when stored in tmpfs, thereby preventing /// it from being written unencrypted into swap space.
BigKey, /// Other specialized, but rare, key types Other(String), } impl KeyType { fn from_str(s: &str) -> KeyType { match s { "keyring" => KeyType::Keyring, "user" => KeyType::User, "logon" => KeyType::Logon, "big_key" => KeyType::BigKey, other => KeyType::Other(other.to_string()), } } } /// A key #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Key { /// The ID (serial number) of the key pub id: u64, /// A set of flags describing the state of the key pub flags: KeyFlags, /// Count of the number of kernel credential structures that are /// pinning the key (approximately: the number of threads and open file /// references that refer to this key). pub usage: u32, /// Key timeout pub timeout: KeyTimeout, /// Key permissions pub permissions: Permissions, /// The user ID of the key owner pub uid: u32, /// The group ID of the key. /// /// The value of `None` here means that the key has no group ID; this can occur in certain circumstances for /// keys created by the kernel. pub gid: Option<u32>, /// The type of key pub key_type: KeyType, /// The key description pub description: String, } impl Key { fn from_line(s: &str) -> ProcResult<Key> { let mut s = s.split_whitespace(); let id = from_str!(u64, expect!(s.next()), 16); let s_flags = expect!(s.next()); let usage = from_str!(u32, expect!(s.next())); let s_timeout = expect!(s.next()); let s_perms = expect!(s.next()); let uid = from_str!(u32, expect!(s.next())); let s_gid = expect!(s.next()); let s_type = expect!(s.next()); let desc: Vec<_> = s.collect(); Ok(Key { id, flags: KeyFlags::from_str(s_flags), usage, timeout: KeyTimeout::from_str(s_timeout)?, permissions: Permissions::from_str(s_perms)?, uid, gid: if s_gid == "-1" { None } else { Some(from_str!(u32, s_gid)) }, key_type: KeyType::from_str(s_type), description: desc.join(" "), }) } } /// A set of keys.
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Keys(pub Vec<Key>); impl crate::FromBufRead for Keys { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { let mut v = Vec::new(); for line in r.lines() { let line = line?; v.push(Key::from_line(&line)?); } Ok(Keys(v)) } } /// Information about a user with at least one key #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct KeyUser { /// The user that owns the key pub uid: u32, /// The kernel-internal usage count for the kernel structure used to record key users pub usage: u32, /// The total number of keys owned by the user pub nkeys: u32, /// The number of keys that have been instantiated pub nikeys: u32, /// The number of keys owned by the user pub qnkeys: u32, /// The maximum number of keys that the user may own pub maxkeys: u32, /// The number of bytes consumed in payloads of the keys owned by this user pub qnbytes: u32, /// The upper limit on the number of bytes in key payloads for this user pub maxbytes: u32, } impl KeyUser { fn from_str(s: &str) -> ProcResult<KeyUser> { let mut s = s.split_whitespace(); let uid = expect!(s.next()); let usage = from_str!(u32, expect!(s.next())); let keys = expect!(s.next()); let qkeys = expect!(s.next()); let qbytes = expect!(s.next()); let (nkeys, nikeys) = { let mut s = keys.split('/'); (from_str!(u32, expect!(s.next())), from_str!(u32, expect!(s.next()))) }; let (qnkeys, maxkeys) = { let mut s = qkeys.split('/'); (from_str!(u32, expect!(s.next())), from_str!(u32, expect!(s.next()))) }; let (qnbytes, maxbytes) = { let mut s = qbytes.split('/'); (from_str!(u32, expect!(s.next())), from_str!(u32, expect!(s.next()))) }; Ok(KeyUser { uid: from_str!(u32, &uid[0..uid.len() - 1]), usage, nkeys, nikeys, qnkeys, maxkeys, qnbytes, maxbytes, }) } } /// Information about a set of users with at least one key.
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct KeyUsers(pub HashMap); impl crate::FromBufRead for KeyUsers { fn from_buf_read(r: R) -> ProcResult { let mut map = HashMap::new(); for line in r.lines() { let line = line?; let user = KeyUser::from_str(&line)?; map.insert(user.uid, user); } Ok(KeyUsers(map)) } } #[cfg(test)] mod tests { use super::*; #[test] fn key_flags() { assert_eq!(KeyFlags::from_str("I------"), KeyFlags::INSTANTIATED); assert_eq!(KeyFlags::from_str("IR"), KeyFlags::INSTANTIATED | KeyFlags::REVOKED); assert_eq!(KeyFlags::from_str("IRDQUNi"), KeyFlags::all()); } #[test] fn timeout() { assert_eq!(KeyTimeout::from_str("perm").unwrap(), KeyTimeout::Permanent); assert_eq!(KeyTimeout::from_str("expd").unwrap(), KeyTimeout::Expired); assert_eq!( KeyTimeout::from_str("2w").unwrap(), KeyTimeout::Timeout(Duration::from_secs(1209600)) ); assert_eq!( KeyTimeout::from_str("14d").unwrap(), KeyTimeout::Timeout(Duration::from_secs(1209600)) ); assert_eq!( KeyTimeout::from_str("336h").unwrap(), KeyTimeout::Timeout(Duration::from_secs(1209600)) ); assert_eq!( KeyTimeout::from_str("20160m").unwrap(), KeyTimeout::Timeout(Duration::from_secs(1209600)) ); assert_eq!( KeyTimeout::from_str("1209600s").unwrap(), KeyTimeout::Timeout(Duration::from_secs(1209600)) ); } } procfs-core-0.17.0/src/kpageflags.rs000064400000000000000000000111771046102023000154030ustar 00000000000000use bitflags::bitflags; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; //const fn genmask(high: usize, low: usize) -> u64 { // let mask_bits = size_of::() * 8; // (!0 - (1 << low) + 1) & (!0 >> (mask_bits - 1 - high)) //} bitflags! { /// Represents the fields and flags in a page table entry for a memory page. #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct PhysicalPageFlags: u64 { /// The page is being locked for exclusive access, e.g. 
by undergoing read/write IO const LOCKED = 1 << 0; /// IO error occurred const ERROR = 1 << 1; /// The page has been referenced since last LRU list enqueue/requeue const REFERENCED = 1 << 2; /// The page has up-to-date data. ie. for file backed page: (in-memory data revision >= on-disk one) const UPTODATE = 1 << 3; /// The page has been written to, hence contains new data. i.e. for file backed page: (in-memory data revision > on-disk one) const DIRTY = 1 << 4; /// The page is in one of the LRU lists const LRU = 1 << 5; /// The page is in the active LRU list const ACTIVE = 1 << 6; /// The page is managed by the SLAB/SLOB/SLUB/SLQB kernel memory allocator. When compound page is used, SLUB/SLQB will only set this flag on the head page; SLOB will not flag it at all const SLAB = 1 << 7; /// The page is being synced to disk const WRITEBACK = 1 << 8; /// The page will be reclaimed soon after its pageout IO completed const RECLAIM = 1 << 9; /// A free memory block managed by the buddy system allocator. The buddy system organizes free memory in blocks of various orders. An order N block has 2^N physically contiguous pages, with the BUDDY flag set for and _only_ for the first page const BUDDY = 1 << 10; /// A memory mapped page const MMAP = 1 << 11; /// A memory mapped page that is not part of a file const ANON = 1 << 12; /// The page is mapped to swap space, i.e. has an associated swap entry const SWAPCACHE = 1 << 13; /// The page is backed by swap/RAM const SWAPBACKED = 1 << 14; /// A compound page with order N consists of 2^N physically contiguous pages. A compound page with order 2 takes the form of “HTTT”, where H donates its head page and T donates its tail page(s). The major consumers of compound pages are hugeTLB pages (), the SLUB etc. memory allocators and various device drivers. 
However in this interface, only huge/giga pages are made visible to end users const COMPOUND_HEAD = 1 << 15; /// A compound page tail (see description above) const COMPOUND_TAIL = 1 << 16; /// This is an integral part of a HugeTLB page const HUGE = 1 << 17; /// The page is in the unevictable (non-)LRU list It is somehow pinned and not a candidate for LRU page reclaims, e.g. ramfs pages, shmctl(SHM_LOCK) and mlock() memory segments const UNEVICTABLE = 1 << 18; /// Hardware detected memory corruption on this page: don’t touch the data! const HWPOISON = 1 << 19; /// No page frame exists at the requested address const NOPAGE = 1 << 20; /// Identical memory pages dynamically shared between one or more processes const KSM = 1 << 21; /// Contiguous pages which construct transparent hugepages const THP = 1 << 22; /// The page is logically offline const OFFLINE = 1 << 23; /// Zero page for pfn_zero or huge_zero page const ZERO_PAGE = 1 << 24; /// The page has not been accessed since it was marked idle (see ). Note that this flag may be stale in case the page was accessed via a PTE. To make sure the flag is up-to-date one has to read /sys/kernel/mm/page_idle/bitmap first const IDLE = 1 << 25; /// The page is in use as a page table const PGTABLE = 1 << 26; } } impl PhysicalPageFlags { pub fn parse_info(info: u64) -> Self { PhysicalPageFlags::from_bits_truncate(info) } } #[cfg(test)] mod tests { use super::*; #[test] fn test_kpageflags_parsing() { let pagemap_entry: u64 = 0b0000000000000000000000000000000000000000000000000000000000000001; let info = PhysicalPageFlags::parse_info(pagemap_entry); assert!(info == PhysicalPageFlags::LOCKED); } } procfs-core-0.17.0/src/lib.rs000064400000000000000000001113561046102023000140450ustar 00000000000000#![allow(unknown_lints)] // The suggested fix with `str::parse` removes support for Rust 1.48 #![allow(clippy::from_str_radix_10)] #![deny(broken_intra_doc_links, invalid_html_tags)] //! 
This crate provides an interface into the linux `procfs` filesystem, usually mounted at //! `/proc`. //! //! This is a pseudo-filesystem which is available on almost every linux system and provides an //! interface to kernel data structures. //! //! # `procfs-core` //! //! The `procfs-core` crate is a fully platform-independent crate that contains most of the data-structures and //! parsing code. Most people should first look at the `procfs` crate instead. use bitflags::bitflags; use std::fmt; use std::io::{BufRead, BufReader, Read}; use std::path::{Path, PathBuf}; use std::str::FromStr; use std::{collections::HashMap, time::Duration}; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// Types which can be parsed from a Read implementation. pub trait FromRead: Sized { /// Read the type from a Read. fn from_read<R: Read>(r: R) -> ProcResult<Self>; /// Read the type from a file. fn from_file<P: AsRef<Path>>(path: P) -> ProcResult<Self> { std::fs::File::open(path.as_ref()) .map_err(|e| e.into()) .and_then(|f| Self::from_read(f)) .map_err(|e| e.error_path(path.as_ref())) } } /// Types which can be parsed from a BufRead implementation. pub trait FromBufRead: Sized { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self>; } impl<T: FromBufRead> FromRead for T { fn from_read<R: Read>(r: R) -> ProcResult<Self> { Self::from_buf_read(BufReader::new(r)) } } /// Types which can be parsed from a Read implementation and system info. pub trait FromReadSI: Sized { /// Parse the type from a Read and system info. fn from_read<R: Read>(r: R, system_info: &SystemInfo) -> ProcResult<Self>; /// Parse the type from a file. fn from_file<P: AsRef<Path>>(path: P, system_info: &SystemInfo) -> ProcResult<Self> { std::fs::File::open(path.as_ref()) .map_err(|e| e.into()) .and_then(|f| Self::from_read(f, system_info)) .map_err(|e| e.error_path(path.as_ref())) } } /// Types which can be parsed from a BufRead implementation and system info.
pub trait FromBufReadSI: Sized { fn from_buf_read(r: R, system_info: &SystemInfo) -> ProcResult; } impl FromReadSI for T { fn from_read(r: R, system_info: &SystemInfo) -> ProcResult { Self::from_buf_read(BufReader::new(r), system_info) } } /// Extension traits useful for importing wholesale. pub mod prelude { pub use super::{FromBufRead, FromBufReadSI, FromRead, FromReadSI}; } #[doc(hidden)] pub trait IntoOption { fn into_option(t: Self) -> Option; } impl IntoOption for Option { fn into_option(t: Option) -> Option { t } } impl IntoOption for Result { fn into_option(t: Result) -> Option { t.ok() } } #[doc(hidden)] pub trait IntoResult { fn into(t: Self) -> Result; } #[macro_export] #[doc(hidden)] macro_rules! build_internal_error { ($err: expr) => { crate::ProcError::InternalError(crate::InternalError { msg: format!("Internal Unwrap Error: {}", $err), file: file!(), line: line!(), #[cfg(feature = "backtrace")] backtrace: backtrace::Backtrace::new(), }) }; ($err: expr, $msg: expr) => { crate::ProcError::InternalError(crate::InternalError { msg: format!("Internal Unwrap Error: {}: {}", $msg, $err), file: file!(), line: line!(), #[cfg(feature = "backtrace")] backtrace: backtrace::Backtrace::new(), }) }; } // custom NoneError, since std::option::NoneError is nightly-only // See https://github.com/rust-lang/rust/issues/42327 #[doc(hidden)] pub struct NoneError; impl std::fmt::Display for NoneError { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { write!(f, "NoneError") } } impl IntoResult for Option { fn into(t: Option) -> Result { t.ok_or(NoneError) } } impl IntoResult for Result { fn into(t: Result) -> Result { t } } #[allow(unused_macros)] #[macro_export] #[doc(hidden)] macro_rules! proc_panic { ($e:expr) => { crate::IntoOption::into_option($e).unwrap_or_else(|| { panic!( "Failed to unwrap {}. 
Please report this as a procfs bug.", stringify!($e) ) }) }; ($e:expr, $msg:expr) => { crate::IntoOption::into_option($e).unwrap_or_else(|| { panic!( "Failed to unwrap {} ({}). Please report this as a procfs bug.", stringify!($e), $msg ) }) }; } #[macro_export] #[doc(hidden)] macro_rules! expect { ($e:expr) => { match crate::IntoResult::into($e) { Ok(v) => v, Err(e) => return Err(crate::build_internal_error!(e)), } }; ($e:expr, $msg:expr) => { match crate::IntoResult::into($e) { Ok(v) => v, Err(e) => return Err(crate::build_internal_error!(e, $msg)), } }; } #[macro_export] #[doc(hidden)] macro_rules! from_str { ($t:tt, $e:expr) => {{ let e = $e; crate::expect!( $t::from_str_radix(e, 10), format!("Failed to parse {} ({:?}) as a {}", stringify!($e), e, stringify!($t),) ) }}; ($t:tt, $e:expr, $radix:expr) => {{ let e = $e; crate::expect!( $t::from_str_radix(e, $radix), format!("Failed to parse {} ({:?}) as a {}", stringify!($e), e, stringify!($t)) ) }}; ($t:tt, $e:expr, $radix:expr, pid:$pid:expr) => {{ let e = $e; crate::expect!( $t::from_str_radix(e, $radix), format!( "Failed to parse {} ({:?}) as a {} (pid {})", stringify!($e), e, stringify!($t), $pid ) ) }}; } /// Auxiliary system information interface. /// /// A few function in this crate require some extra system info to compute their results. For example, /// the [crate::process::Stat::rss_bytes()] function needs to know the page size. Since `procfs-core` only parses /// data and never interacts with a real system, this `SystemInfoInterface` is what allows real system info to be used. /// /// If you are a user of the `procfs` crate, you'll normally use the `[procfs::WithCurrentSystemInfo]` trait. 
/// For example: /// /// ```rust,ignore /// use procfs::WithCurrentSystemInfo; /// /// let me = procfs::process::Process::myself().unwrap(); /// let stat = me.stat().unwrap(); /// let bytes = stat.rss_bytes().get(); /// ``` /// /// However, imagine that you captured a process's stat info, along with page size: /// ```rust /// # use procfs_core::{FromRead, WithSystemInfo}; /// # let stat_data = std::io::Cursor::new(b"475071 (cat) R 323893 475071 323893 34826 475071 4194304 94 0 0 0 0 0 0 0 20 0 1 0 201288208 5738496 225 18446744073709551615 94881179934720 94881179954601 140722831478832 0 0 0 0 0 0 0 0 0 17 4 0 0 0 0 0 94881179970608 94881179972224 94881184485376 140722831483757 140722831483777 140722831483777 140722831486955 0"); /// let stat = procfs_core::process::Stat::from_read(stat_data).unwrap(); /// /// let system_info = procfs_core::ExplicitSystemInfo { /// boot_time_secs: 1692972606, /// ticks_per_second: 100, /// page_size: 4096, /// is_little_endian: true, /// }; /// /// let rss_bytes = stat.rss_bytes().with_system_info(&system_info); /// ``` pub trait SystemInfoInterface { fn boot_time_secs(&self) -> ProcResult<u64>; fn ticks_per_second(&self) -> u64; fn page_size(&self) -> u64; /// Whether the system is little endian (true) or big endian (false). fn is_little_endian(&self) -> bool; #[cfg(feature = "chrono")] fn boot_time(&self) -> ProcResult<chrono::DateTime<chrono::Local>> { use chrono::TimeZone; let date_time = expect!(chrono::Local.timestamp_opt(self.boot_time_secs()? as i64, 0).single()); Ok(date_time) } } /// Auxiliary system information. pub type SystemInfo = dyn SystemInfoInterface; /// A convenience struct implementing [SystemInfoInterface] with explicitly-specified values.
#[derive(Debug, Clone, Copy)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct ExplicitSystemInfo { pub boot_time_secs: u64, pub ticks_per_second: u64, pub page_size: u64, pub is_little_endian: bool, } impl SystemInfoInterface for ExplicitSystemInfo { fn boot_time_secs(&self) -> ProcResult { Ok(self.boot_time_secs) } fn ticks_per_second(&self) -> u64 { self.ticks_per_second } fn page_size(&self) -> u64 { self.page_size } fn is_little_endian(&self) -> bool { self.is_little_endian } } /// Values which can provide an output given the [SystemInfo]. pub trait WithSystemInfo<'a>: 'a { type Output: 'a; /// Get the output derived from the given [SystemInfo]. fn with_system_info(self, info: &SystemInfo) -> Self::Output; } impl<'a, F: 'a, R: 'a> WithSystemInfo<'a> for F where F: FnOnce(&SystemInfo) -> R, { type Output = R; fn with_system_info(self, info: &SystemInfo) -> Self::Output { self(info) } } #[doc(hidden)] pub fn from_iter<'a, I, U>(i: I) -> ProcResult where I: IntoIterator, U: FromStr, { let mut iter = i.into_iter(); let val = expect!(iter.next()); match FromStr::from_str(val) { Ok(u) => Ok(u), Err(..) => Err(build_internal_error!("Failed to convert")), } } fn from_iter_optional<'a, I, U>(i: I) -> ProcResult> where I: IntoIterator, U: FromStr, { let mut iter = i.into_iter(); let Some(val) = iter.next() else { return Ok(None); }; match FromStr::from_str(val) { Ok(u) => Ok(Some(u)), Err(..) 
=> Err(build_internal_error!("Failed to convert")), } } mod cgroups; pub use cgroups::*; mod cpuinfo; pub use cpuinfo::*; mod crypto; pub use crypto::*; mod devices; pub use devices::*; mod diskstats; pub use diskstats::*; mod iomem; pub use iomem::*; pub mod keyring; mod locks; pub use locks::*; mod mounts; pub use mounts::*; mod partitions; pub use partitions::*; mod meminfo; pub use meminfo::*; pub mod net; mod pressure; pub use pressure::*; pub mod process; mod kpageflags; pub use kpageflags::*; pub mod sys; pub use sys::kernel::Version as KernelVersion; mod sysvipc_shm; pub use sysvipc_shm::*; mod uptime; pub use uptime::*; // TODO temporary, only for procfs pub trait FromStrRadix: Sized { fn from_str_radix(t: &str, radix: u32) -> Result; } impl FromStrRadix for u64 { fn from_str_radix(s: &str, radix: u32) -> Result { u64::from_str_radix(s, radix) } } impl FromStrRadix for i32 { fn from_str_radix(s: &str, radix: u32) -> Result { i32::from_str_radix(s, radix) } } fn split_into_num(s: &str, sep: char, radix: u32) -> ProcResult<(T, T)> { let mut s = s.split(sep); let a = expect!(FromStrRadix::from_str_radix(expect!(s.next()), radix)); let b = expect!(FromStrRadix::from_str_radix(expect!(s.next()), radix)); Ok((a, b)) } /// This is used to hold both an IO error as well as the path of the file that originated the error #[derive(Debug)] #[doc(hidden)] pub struct IoErrorWrapper { pub path: PathBuf, pub inner: std::io::Error, } impl std::error::Error for IoErrorWrapper {} impl fmt::Display for IoErrorWrapper { fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> { write!(f, "IoErrorWrapper({}): {}", self.path.display(), self.inner) } } /// The main error type for the procfs crate. /// /// For more info, see the [ProcError] type. pub type ProcResult = Result; /// The various error conditions in the procfs crate. /// /// Most of the variants have an `Option` component. 
If the error root cause was related
/// to some operation on a file, the path of this file will be stored in this component.
#[derive(Debug)]
pub enum ProcError {
    /// A standard permission denied error.
    ///
    /// This will be a common error, since some files in the procfs filesystem are only readable by
    /// the root user.
    PermissionDenied(Option<PathBuf>),
    /// This might mean that the process no longer exists, or that your kernel doesn't support the
    /// feature you are trying to use.
    NotFound(Option<PathBuf>),
    /// This might mean that a procfs file has incomplete contents.
    ///
    /// If you encounter this error, consider retrying the operation.
    Incomplete(Option<PathBuf>),
    /// Any other IO error (rare).
    Io(std::io::Error, Option<PathBuf>),
    /// Any other non-IO error (very rare).
    Other(String),
    /// This error indicates that some unexpected error occurred. This is a bug. The inner
    /// [InternalError] struct will contain some more info.
    ///
    /// If you ever encounter this error, consider it a bug in the procfs crate and please report
    /// it on github.
    InternalError(InternalError),
}

/// Extensions for dealing with ProcErrors.
pub trait ProcErrorExt {
    /// Add path information to the error.
    fn error_path(self, path: &Path) -> Self;
}

impl ProcErrorExt for ProcError {
    fn error_path(mut self, path: &Path) -> Self {
        use ProcError::*;
        match &mut self {
            PermissionDenied(p) | NotFound(p) | Incomplete(p) | Io(_, p) if p.is_none() => {
                *p = Some(path.to_owned());
            }
            _ => (),
        }
        self
    }
}

impl<T> ProcErrorExt for ProcResult<T> {
    fn error_path(self, path: &Path) -> Self {
        self.map_err(|e| e.error_path(path))
    }
}

/// An internal error in the procfs crate
///
/// If you encounter this error, consider it a bug and please report it on
/// [github](https://github.com/eminence/procfs).
///
/// If you compile with the optional `backtrace` feature (disabled by default),
/// you can gain access to a stack trace of where the error happened.
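The error taxonomy above pairs each variant with an optional path that is attached after the fact by `ProcErrorExt::error_path`. A stdlib-only sketch of that pattern, classifying an `std::io::ErrorKind` and filling in the path only when it is absent — `MiniProcError`, `classify`, and `with_path` are hypothetical names for illustration, not part of this crate:

```rust
use std::io::{Error, ErrorKind};
use std::path::PathBuf;

// Miniature of the ProcError shape: variant + optional originating path.
#[derive(Debug, PartialEq)]
enum MiniProcError {
    PermissionDenied(Option<PathBuf>),
    NotFound(Option<PathBuf>),
    Io(ErrorKind, Option<PathBuf>),
}

// Classify an io::Error by kind, with no path known yet.
fn classify(e: &Error) -> MiniProcError {
    match e.kind() {
        ErrorKind::PermissionDenied => MiniProcError::PermissionDenied(None),
        ErrorKind::NotFound => MiniProcError::NotFound(None),
        other => MiniProcError::Io(other, None),
    }
}

// Analogue of ProcErrorExt::error_path: set the path only if it is empty.
fn with_path(mut e: MiniProcError, p: &str) -> MiniProcError {
    let path = Some(PathBuf::from(p));
    match &mut e {
        MiniProcError::PermissionDenied(slot)
        | MiniProcError::NotFound(slot)
        | MiniProcError::Io(_, slot)
            if slot.is_none() =>
        {
            *slot = path;
        }
        _ => {}
    }
    e
}

fn main() {
    let err = classify(&Error::from(ErrorKind::NotFound));
    let err = with_path(err, "/proc/12345/stat");
    assert_eq!(
        err,
        MiniProcError::NotFound(Some(PathBuf::from("/proc/12345/stat")))
    );
    println!("{:?}", err);
}
```

The "fill only if `None`" rule mirrors `error_path`, which deliberately preserves the first (innermost) path recorded for an error.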
#[cfg_attr(feature = "serde1", derive(Serialize))]
pub struct InternalError {
    pub msg: String,
    pub file: &'static str,
    pub line: u32,
    #[cfg(feature = "backtrace")]
    #[cfg_attr(feature = "serde1", serde(skip))]
    pub backtrace: backtrace::Backtrace,
}

impl std::fmt::Debug for InternalError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "bug at {}:{} (please report this procfs bug)\n{}",
            self.file, self.line, self.msg
        )
    }
}

impl std::fmt::Display for InternalError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(
            f,
            "bug at {}:{} (please report this procfs bug)\n{}",
            self.file, self.line, self.msg
        )
    }
}

impl From<std::io::Error> for ProcError {
    fn from(io: std::io::Error) -> Self {
        use std::io::ErrorKind;
        let kind = io.kind();
        // the only way we'll have a path for the IO error is if this IO error
        // has an inner type
        if io.get_ref().is_some() {
            let inner = io.into_inner().unwrap();
            // is this inner type an IoErrorWrapper?
            match inner.downcast::<IoErrorWrapper>() {
                Ok(wrapper) => {
                    let path = wrapper.path;
                    match kind {
                        ErrorKind::PermissionDenied => ProcError::PermissionDenied(Some(path)),
                        ErrorKind::NotFound => ProcError::NotFound(Some(path)),
                        _other => {
                            // All platforms happen to have ESRCH=3, and windows actually
                            // translates it to a `NotFound` anyway.
const ESRCH: i32 = 3; if matches!(wrapper.inner.raw_os_error(), Some(raw) if raw == ESRCH) { // This "No such process" error gets mapped into a NotFound error return ProcError::NotFound(Some(path)); } else { ProcError::Io(wrapper.inner, Some(path)) } } } } Err(io) => { // reconstruct the original error ProcError::Io(std::io::Error::new(kind, io), None) } } } else { match kind { ErrorKind::PermissionDenied => ProcError::PermissionDenied(None), ErrorKind::NotFound => ProcError::NotFound(None), _other => ProcError::Io(io, None), } } } } impl From<&'static str> for ProcError { fn from(val: &'static str) -> Self { ProcError::Other(val.to_owned()) } } impl From for ProcError { fn from(val: std::num::ParseIntError) -> Self { ProcError::Other(format!("ParseIntError: {}", val)) } } impl From for ProcError { fn from(e: std::string::ParseError) -> Self { match e {} } } impl std::fmt::Display for ProcError { fn fmt(&self, f: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> { match self { // Variants with paths: ProcError::PermissionDenied(Some(p)) => write!(f, "Permission Denied: {}", p.display()), ProcError::NotFound(Some(p)) => write!(f, "File not found: {}", p.display()), ProcError::Incomplete(Some(p)) => write!(f, "Data incomplete: {}", p.display()), ProcError::Io(inner, Some(p)) => { write!(f, "Unexpected IO error({}): {}", p.display(), inner) } // Variants without paths: ProcError::PermissionDenied(None) => write!(f, "Permission Denied"), ProcError::NotFound(None) => write!(f, "File not found"), ProcError::Incomplete(None) => write!(f, "Data incomplete"), ProcError::Io(inner, None) => write!(f, "Unexpected IO error: {}", inner), ProcError::Other(s) => write!(f, "Unknown error {}", s), ProcError::InternalError(e) => write!(f, "Internal error: {}", e), } } } impl std::error::Error for ProcError {} /// Load average figures. 
/// /// Load averages are calculated as the number of jobs in the run queue (state R) or waiting for /// disk I/O (state D) averaged over 1, 5, and 15 minutes. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct LoadAverage { /// The one-minute load average pub one: f32, /// The five-minute load average pub five: f32, /// The fifteen-minute load average pub fifteen: f32, /// The number of currently runnable kernel scheduling entities (processes, threads). pub cur: u32, /// The number of kernel scheduling entities that currently exist on the system. pub max: u32, /// The fifth field is the PID of the process that was most recently created on the system. pub latest_pid: u32, } impl FromRead for LoadAverage { fn from_read(mut reader: R) -> ProcResult { let mut line = String::new(); reader.read_to_string(&mut line)?; let mut s = line.split_whitespace(); let one = expect!(f32::from_str(expect!(s.next()))); let five = expect!(f32::from_str(expect!(s.next()))); let fifteen = expect!(f32::from_str(expect!(s.next()))); let curmax = expect!(s.next()); let latest_pid = expect!(u32::from_str(expect!(s.next()))); let mut s = curmax.split('/'); let cur = expect!(u32::from_str(expect!(s.next()))); let max = expect!(u32::from_str(expect!(s.next()))); Ok(LoadAverage { one, five, fifteen, cur, max, latest_pid, }) } } /// Possible values for a kernel config option #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum ConfigSetting { Yes, Module, Value(String), } /// The kernel configuration. 
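The `LoadAverage::from_read` implementation above reduces to whitespace splitting plus one `/`-separated `cur/max` pair. A stdlib-only sketch of that parse — the `parse_loadavg` helper is a hypothetical illustration, not this crate's API:

```rust
// Parse a /proc/loadavg line: three float averages, a "runnable/total"
// pair of scheduling-entity counts, and the most recently created PID.
fn parse_loadavg(line: &str) -> Option<(f32, f32, f32, u32, u32, u32)> {
    let mut s = line.split_whitespace();
    let one: f32 = s.next()?.parse().ok()?;
    let five: f32 = s.next()?.parse().ok()?;
    let fifteen: f32 = s.next()?.parse().ok()?;
    // The fourth field is "cur/max", e.g. "3/4280".
    let mut curmax = s.next()?.split('/');
    let cur: u32 = curmax.next()?.parse().ok()?;
    let max: u32 = curmax.next()?.parse().ok()?;
    let latest_pid: u32 = s.next()?.parse().ok()?;
    Some((one, five, fifteen, cur, max, latest_pid))
}

fn main() {
    let parsed = parse_loadavg("2.63 1.00 1.42 3/4280 2496732").unwrap();
    assert_eq!(parsed.3, 3); // currently runnable entities
    assert_eq!(parsed.4, 4280); // total scheduling entities
    assert_eq!(parsed.5, 2496732); // most recently created PID
    println!("{:?}", parsed);
}
```

Note the same fixed field order assumed by `from_read`: averages first, then `cur/max`, then the PID.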
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct KernelConfig(pub HashMap); impl FromBufRead for KernelConfig { fn from_buf_read(r: R) -> ProcResult { let mut map = HashMap::new(); for line in r.lines() { let line = line?; if line.starts_with('#') { continue; } if line.contains('=') { let mut s = line.splitn(2, '='); let name = expect!(s.next()).to_owned(); let value = match expect!(s.next()) { "y" => ConfigSetting::Yes, "m" => ConfigSetting::Module, s => ConfigSetting::Value(s.to_owned()), }; map.insert(name, value); } } Ok(KernelConfig(map)) } } /// The amount of time, measured in ticks, the CPU has been in specific states /// /// These fields are measured in ticks because the underlying data from the kernel is measured in ticks. /// The number of ticks per second is generally 100 on most systems. /// /// To convert this value to seconds, you can divide by the tps. There are also convenience methods /// that you can use too. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct CpuTime { /// Ticks spent in user mode pub user: u64, /// Ticks spent in user mode with low priority (nice) pub nice: u64, /// Ticks spent in system mode pub system: u64, /// Ticks spent in the idle state pub idle: u64, /// Ticks waiting for I/O to complete /// /// This value is not reliable, for the following reasons: /// /// 1. The CPU will not wait for I/O to complete; iowait is the time that a /// task is waiting for I/O to complete. When a CPU goes into idle state /// for outstanding task I/O, another task will be scheduled on this CPU. /// /// 2. On a multi-core CPU, this task waiting for I/O to complete is not running /// on any CPU, so the iowait for each CPU is difficult to calculate. /// /// 3. The value in this field may *decrease* in certain conditions. 
/// /// (Since Linux 2.5.41) pub iowait: Option, /// Ticks servicing interrupts /// /// (Since Linux 2.6.0) pub irq: Option, /// Ticks servicing softirqs /// /// (Since Linux 2.6.0) pub softirq: Option, /// Ticks of stolen time. /// /// Stolen time is the time spent in other operating systems when running in /// a virtualized environment. /// /// (Since Linux 2.6.11) pub steal: Option, /// Ticks spent running a virtual CPU for guest operating systems under control /// of the linux kernel /// /// (Since Linux 2.6.24) pub guest: Option, /// Ticks spent running a niced guest /// /// (Since Linux 2.6.33) pub guest_nice: Option, tps: u64, } impl CpuTime { fn from_str(s: &str, ticks_per_second: u64) -> ProcResult { let mut s = s.split_whitespace(); // Store this field in the struct so we don't have to attempt to unwrap ticks_per_second() when we convert // from ticks into other time units let tps = ticks_per_second; s.next(); let user = from_str!(u64, expect!(s.next())); let nice = from_str!(u64, expect!(s.next())); let system = from_str!(u64, expect!(s.next())); let idle = from_str!(u64, expect!(s.next())); let iowait = s.next().map(|s| Ok(from_str!(u64, s))).transpose()?; let irq = s.next().map(|s| Ok(from_str!(u64, s))).transpose()?; let softirq = s.next().map(|s| Ok(from_str!(u64, s))).transpose()?; let steal = s.next().map(|s| Ok(from_str!(u64, s))).transpose()?; let guest = s.next().map(|s| Ok(from_str!(u64, s))).transpose()?; let guest_nice = s.next().map(|s| Ok(from_str!(u64, s))).transpose()?; Ok(CpuTime { user, nice, system, idle, iowait, irq, softirq, steal, guest, guest_nice, tps, }) } /// Milliseconds spent in user mode pub fn user_ms(&self) -> u64 { let ms_per_tick = 1000 / self.tps; self.user * ms_per_tick } /// Time spent in user mode pub fn user_duration(&self) -> Duration { Duration::from_millis(self.user_ms()) } /// Milliseconds spent in user mode with low priority (nice) pub fn nice_ms(&self) -> u64 { let ms_per_tick = 1000 / self.tps; self.nice * 
ms_per_tick } /// Time spent in user mode with low priority (nice) pub fn nice_duration(&self) -> Duration { Duration::from_millis(self.nice_ms()) } /// Milliseconds spent in system mode pub fn system_ms(&self) -> u64 { let ms_per_tick = 1000 / self.tps; self.system * ms_per_tick } /// Time spent in system mode pub fn system_duration(&self) -> Duration { Duration::from_millis(self.system_ms()) } /// Milliseconds spent in the idle state pub fn idle_ms(&self) -> u64 { let ms_per_tick = 1000 / self.tps; self.idle * ms_per_tick } /// Time spent in the idle state pub fn idle_duration(&self) -> Duration { Duration::from_millis(self.idle_ms()) } /// Milliseconds spent waiting for I/O to complete pub fn iowait_ms(&self) -> Option { let ms_per_tick = 1000 / self.tps; self.iowait.map(|io| io * ms_per_tick) } /// Time spent waiting for I/O to complete pub fn iowait_duration(&self) -> Option { self.iowait_ms().map(Duration::from_millis) } /// Milliseconds spent servicing interrupts pub fn irq_ms(&self) -> Option { let ms_per_tick = 1000 / self.tps; self.irq.map(|ms| ms * ms_per_tick) } /// Time spent servicing interrupts pub fn irq_duration(&self) -> Option { self.irq_ms().map(Duration::from_millis) } /// Milliseconds spent servicing softirqs pub fn softirq_ms(&self) -> Option { let ms_per_tick = 1000 / self.tps; self.softirq.map(|ms| ms * ms_per_tick) } /// Time spent servicing softirqs pub fn softirq_duration(&self) -> Option { self.softirq_ms().map(Duration::from_millis) } /// Milliseconds of stolen time pub fn steal_ms(&self) -> Option { let ms_per_tick = 1000 / self.tps; self.steal.map(|ms| ms * ms_per_tick) } /// Amount of stolen time pub fn steal_duration(&self) -> Option { self.steal_ms().map(Duration::from_millis) } /// Milliseconds spent running a virtual CPU for guest operating systems under control of the linux kernel pub fn guest_ms(&self) -> Option { let ms_per_tick = 1000 / self.tps; self.guest.map(|ms| ms * ms_per_tick) } /// Time spent running a virtual CPU 
for guest operating systems under control of the linux kernel pub fn guest_duration(&self) -> Option { self.guest_ms().map(Duration::from_millis) } /// Milliseconds spent running a niced guest pub fn guest_nice_ms(&self) -> Option { let ms_per_tick = 1000 / self.tps; self.guest_nice.map(|ms| ms * ms_per_tick) } /// Time spent running a niced guest pub fn guest_nice_duration(&self) -> Option { self.guest_nice_ms().map(Duration::from_millis) } } /// Kernel/system statistics, from `/proc/stat` #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct KernelStats { /// The amount of time the system spent in various states pub total: CpuTime, /// The amount of time that specific CPUs spent in various states pub cpu_time: Vec, /// The number of context switches that the system underwent pub ctxt: u64, /// Boot time, in number of seconds since the Epoch pub btime: u64, /// Number of forks since boot pub processes: u64, /// Number of processes in runnable state /// /// (Since Linux 2.5.45) pub procs_running: Option, /// Number of processes blocked waiting for I/O /// /// (Since Linux 2.5.45) pub procs_blocked: Option, } impl FromBufReadSI for KernelStats { fn from_buf_read(r: R, system_info: &SystemInfo) -> ProcResult { let lines = r.lines(); let mut total_cpu = None; let mut cpus = Vec::new(); let mut ctxt = None; let mut btime = None; let mut processes = None; let mut procs_running = None; let mut procs_blocked = None; for line in lines { let line = line?; if line.starts_with("cpu ") { total_cpu = Some(CpuTime::from_str(&line, system_info.ticks_per_second())?); } else if line.starts_with("cpu") { cpus.push(CpuTime::from_str(&line, system_info.ticks_per_second())?); } else if let Some(stripped) = line.strip_prefix("ctxt ") { ctxt = Some(from_str!(u64, stripped)); } else if let Some(stripped) = line.strip_prefix("btime ") { btime = Some(from_str!(u64, stripped)); } else if let Some(stripped) = line.strip_prefix("processes ") { 
processes = Some(from_str!(u64, stripped)); } else if let Some(stripped) = line.strip_prefix("procs_running ") { procs_running = Some(from_str!(u32, stripped)); } else if let Some(stripped) = line.strip_prefix("procs_blocked ") { procs_blocked = Some(from_str!(u32, stripped)); } } Ok(KernelStats { total: expect!(total_cpu), cpu_time: cpus, ctxt: expect!(ctxt), btime: expect!(btime), processes: expect!(processes), procs_running, procs_blocked, }) } } /// Various virtual memory statistics /// /// Since the exact set of statistics will vary from kernel to kernel, and because most of them are /// not well documented, this struct contains a HashMap instead of specific members. Consult the /// kernel source code for more details of this data. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct VmStat(pub HashMap); impl FromBufRead for VmStat { fn from_buf_read(r: R) -> ProcResult { let mut map = HashMap::new(); for line in r.lines() { let line = line?; let mut split = line.split_whitespace(); let name = expect!(split.next()); let val = from_str!(i64, expect!(split.next())); map.insert(name.to_owned(), val); } Ok(VmStat(map)) } } /// Details about a loaded kernel module /// /// For an example, see the [lsmod.rs](https://github.com/eminence/procfs/tree/master/examples) /// example in the source repo. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct KernelModule { /// The name of the module pub name: String, /// The size of the module pub size: u32, /// The number of references in the kernel to this module. This can be -1 if the module is unloading pub refcount: i32, /// A list of modules that depend on this module. 
pub used_by: Vec, /// The module state /// /// This will probably always be "Live", but it could also be either "Unloading" or "Loading" pub state: String, } /// A set of loaded kernel modules #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct KernelModules(pub HashMap); impl FromBufRead for KernelModules { /// This should correspond to the data in `/proc/modules`. fn from_buf_read(r: R) -> ProcResult { // kernel reference: kernel/module.c m_show() let mut map = HashMap::new(); for line in r.lines() { let line: String = line?; let mut s = line.split_whitespace(); let name = expect!(s.next()); let size = from_str!(u32, expect!(s.next())); let refcount = from_str!(i32, expect!(s.next())); let used_by: &str = expect!(s.next()); let state = expect!(s.next()); map.insert( name.to_string(), KernelModule { name: name.to_string(), size, refcount, used_by: if used_by == "-" { Vec::new() } else { used_by .split(',') .filter(|s| !s.is_empty()) .map(|s| s.to_string()) .collect() }, state: state.to_string(), }, ); } Ok(KernelModules(map)) } } /// A list of the arguments passed to the Linux kernel at boot time. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct KernelCmdline(pub Vec); impl FromRead for KernelCmdline { /// This should correspond to the data in `/proc/cmdline`. 
fn from_read(mut r: R) -> ProcResult { let mut buf = String::new(); r.read_to_string(&mut buf)?; Ok(KernelCmdline( buf.split(' ') .filter_map(|s| if !s.is_empty() { Some(s.to_string()) } else { None }) .collect(), )) } } #[cfg(test)] mod tests { use super::*; #[test] fn test_kernel_from_str() { let k = KernelVersion::from_str("1.2.3").unwrap(); assert_eq!(k.major, 1); assert_eq!(k.minor, 2); assert_eq!(k.patch, 3); let k = KernelVersion::from_str("4.9.16-gentoo").unwrap(); assert_eq!(k.major, 4); assert_eq!(k.minor, 9); assert_eq!(k.patch, 16); let k = KernelVersion::from_str("4.9.266-0.1.ac.225.84.332.metal1.x86_64").unwrap(); assert_eq!(k.major, 4); assert_eq!(k.minor, 9); assert_eq!(k.patch, 266); } #[test] fn test_kernel_cmp() { let a = KernelVersion::from_str("1.2.3").unwrap(); let b = KernelVersion::from_str("1.2.3").unwrap(); let c = KernelVersion::from_str("1.2.4").unwrap(); let d = KernelVersion::from_str("1.5.4").unwrap(); let e = KernelVersion::from_str("2.5.4").unwrap(); assert_eq!(a, b); assert!(a < c); assert!(a < d); assert!(a < e); assert!(e > d); assert!(e > c); assert!(e > b); } #[test] fn test_loadavg_from_reader() -> ProcResult<()> { let load_average = LoadAverage::from_read("2.63 1.00 1.42 3/4280 2496732".as_bytes())?; assert_eq!(load_average.one, 2.63); assert_eq!(load_average.five, 1.00); assert_eq!(load_average.fifteen, 1.42); assert_eq!(load_average.max, 4280); assert_eq!(load_average.cur, 3); assert_eq!(load_average.latest_pid, 2496732); Ok(()) } #[test] fn test_from_str() -> ProcResult<()> { assert_eq!(from_str!(u8, "12"), 12); assert_eq!(from_str!(u8, "A", 16), 10); Ok(()) } #[test] fn test_from_str_fail() { fn inner() -> ProcResult<()> { let s = "four"; from_str!(u8, s); unreachable!() } assert!(inner().is_err()) } #[test] fn test_nopanic() { fn _inner() -> ProcResult { let x: Option = None; let y: bool = expect!(x); Ok(y) } let r = _inner(); println!("{:?}", r); assert!(r.is_err()); fn _inner2() -> ProcResult { let _f: std::fs::File = 
expect!(std::fs::File::open("/doesnotexist"));
            Ok(true)
        }
        let r = _inner2();
        println!("{:?}", r);
        assert!(r.is_err());
    }

    #[cfg(feature = "backtrace")]
    #[test]
    fn test_backtrace() {
        fn _inner() -> ProcResult<bool> {
            let _f: std::fs::File = expect!(std::fs::File::open("/doesnotexist"));
            Ok(true)
        }
        let r = _inner();
        println!("{:?}", r);
    }
}

// procfs-core-0.17.0/src/locks.rs

use crate::{expect, from_str, ProcResult};
use std::io::BufRead;

#[cfg(feature = "serde1")]
use serde::{Deserialize, Serialize};

/// The type of a file lock
#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub enum LockType {
    /// A BSD file lock created using `flock`
    FLock,
    /// A POSIX byte-range lock created with `fcntl`
    Posix,
    /// An Open File Description (OFD) lock created with `fcntl`
    ODF,
    /// Some other unknown lock type
    Other(String),
}

impl LockType {
    pub fn as_str(&self) -> &str {
        match self {
            LockType::FLock => "FLOCK",
            LockType::Posix => "POSIX",
            LockType::ODF => "ODF",
            LockType::Other(s) => s.as_ref(),
        }
    }
}

impl From<&str> for LockType {
    fn from(s: &str) -> LockType {
        match s {
            "FLOCK" => LockType::FLock,
            "OFDLCK" => LockType::ODF,
            "POSIX" => LockType::Posix,
            x => LockType::Other(x.to_string()),
        }
    }
}

/// The mode of a lock (advisory or mandatory)
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub enum LockMode {
    Advisory,
    Mandatory,
    /// Some other unknown lock mode
    Other(String),
}

impl LockMode {
    pub fn as_str(&self) -> &str {
        match self {
            LockMode::Advisory => "ADVISORY",
            LockMode::Mandatory => "MANDATORY",
            LockMode::Other(s) => s.as_ref(),
        }
    }
}

impl From<&str> for LockMode {
    fn from(s: &str) -> LockMode {
        match s {
            "ADVISORY" => LockMode::Advisory,
            "MANDATORY" => LockMode::Mandatory,
            x => LockMode::Other(x.to_string()),
        }
    }
}

/// The kind of a lock (read or write)
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1",
derive(Serialize, Deserialize))]
pub enum LockKind {
    /// A read lock (or BSD shared lock)
    Read,
    /// A write lock (or a BSD exclusive lock)
    Write,
    /// Some other unknown lock kind
    Other(String),
}

impl LockKind {
    pub fn as_str(&self) -> &str {
        match self {
            LockKind::Read => "READ",
            LockKind::Write => "WRITE",
            LockKind::Other(s) => s.as_ref(),
        }
    }
}

impl From<&str> for LockKind {
    fn from(s: &str) -> LockKind {
        match s {
            "READ" => LockKind::Read,
            "WRITE" => LockKind::Write,
            x => LockKind::Other(x.to_string()),
        }
    }
}

#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
/// Details about an individual file lock
///
/// For an example, see the [lslocks.rs](https://github.com/eminence/procfs/tree/master/examples)
/// example in the source repo.
pub struct Lock {
    /// The type of lock
    pub lock_type: LockType,
    /// The lock mode (advisory or mandatory)
    pub mode: LockMode,
    /// The kind of lock (read or write)
    pub kind: LockKind,
    /// The process that owns the lock
    ///
    /// Because OFD locks are not owned by a single process (since multiple processes
    /// may have file descriptors that refer to the same FD), this field may be `None`.
    ///
    /// Before kernel 4.14 a bug meant that the PID of the process that initially
    /// acquired the lock was displayed instead of `None`.
    pub pid: Option<i32>,
    /// The major ID of the device containing the FS that contains this lock
    pub devmaj: u32,
    /// The minor ID of the device containing the FS that contains this lock
    pub devmin: u32,
    /// The inode of the locked file
    pub inode: u64,
    /// The offset (in bytes) of the first byte of the lock.
    ///
    /// For BSD locks, this value is always 0.
    pub offset_first: u64,
    /// The offset (in bytes) of the last byte of the lock.
    ///
    /// `None` means the lock extends to the end of the file. For BSD locks,
    /// the value is always `None`.
pub offset_last: Option, } impl Lock { fn from_line(line: &str) -> ProcResult { let mut s = line.split_whitespace(); let _ = expect!(s.next()); let typ = { let t = expect!(s.next()); if t == "->" { // some locks start a "->" which apparently means they are "blocked" (but i'm not sure what that actually means) From::from(expect!(s.next())) } else { From::from(t) } }; let mode = From::from(expect!(s.next())); let kind = From::from(expect!(s.next())); let pid = expect!(s.next()); let disk_inode = expect!(s.next()); let offset_first = from_str!(u64, expect!(s.next())); let offset_last = expect!(s.next()); let mut dis = disk_inode.split(':'); let devmaj = from_str!(u32, expect!(dis.next()), 16); let devmin = from_str!(u32, expect!(dis.next()), 16); let inode = from_str!(u64, expect!(dis.next())); Ok(Lock { lock_type: typ, mode, kind, pid: if pid == "-1" { None } else { Some(from_str!(i32, pid)) }, devmaj, devmin, inode, offset_first, offset_last: if offset_last == "EOF" { None } else { Some(from_str!(u64, offset_last)) }, }) } } #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] /// Details about file locks pub struct Locks(pub Vec); impl super::FromBufRead for Locks { fn from_buf_read(r: R) -> ProcResult { let mut v = Vec::new(); for line in r.lines() { let line = line?; v.push(Lock::from_line(&line)?); } Ok(Locks(v)) } } #[cfg(test)] mod tests { #[test] fn test_blocked() { let data = r#"1: POSIX ADVISORY WRITE 723 00:14:16845 0 EOF 2: FLOCK ADVISORY WRITE 652 00:14:16763 0 EOF 3: FLOCK ADVISORY WRITE 1594 fd:00:396528 0 EOF 4: FLOCK ADVISORY WRITE 1594 fd:00:396527 0 EOF 5: FLOCK ADVISORY WRITE 2851 fd:00:529372 0 EOF 6: POSIX ADVISORY WRITE 1280 00:14:16200 0 0 6: -> POSIX ADVISORY WRITE 1281 00:14:16200 0 0 6: -> POSIX ADVISORY WRITE 1279 00:14:16200 0 0 6: -> POSIX ADVISORY WRITE 1282 00:14:16200 0 0 6: -> POSIX ADVISORY WRITE 1283 00:14:16200 0 0 7: OFDLCK ADVISORY READ -1 00:06:1028 0 EOF 8: FLOCK ADVISORY WRITE 6471 
fd:00:529426 0 EOF
9: FLOCK ADVISORY WRITE 6471 fd:00:529424 0 EOF
10: FLOCK ADVISORY WRITE 6471 fd:00:529420 0 EOF
11: FLOCK ADVISORY WRITE 6471 fd:00:529418 0 EOF
12: POSIX ADVISORY WRITE 1279 00:14:23553 0 EOF
13: FLOCK ADVISORY WRITE 6471 fd:00:393838 0 EOF
14: POSIX ADVISORY WRITE 655 00:14:16146 0 EOF"#;
        for line in data.lines() {
            super::Lock::from_line(line.trim()).unwrap();
        }
    }
}

// procfs-core-0.17.0/src/meminfo.rs

use super::{expect, from_str, ProcResult};
#[cfg(feature = "serde1")]
use serde::{Deserialize, Serialize};
use std::{collections::HashMap, io};

fn convert_to_kibibytes(num: u64, unit: &str) -> ProcResult<u64> {
    match unit {
        "B" => Ok(num),
        "KiB" | "kiB" | "kB" | "KB" => Ok(num * 1024),
        "MiB" | "miB" | "MB" | "mB" => Ok(num * 1024 * 1024),
        "GiB" | "giB" | "GB" | "gB" => Ok(num * 1024 * 1024 * 1024),
        unknown => Err(build_internal_error!(format!("Unknown unit type {}", unknown))),
    }
}

/// This struct reports statistics about memory usage on the system, based on
/// the `/proc/meminfo` file.
///
/// It is used by `free(1)` to report the amount of free and used memory (both
/// physical and swap) on the system as well as the shared memory and
/// buffers used by the kernel. Each struct member is generally reported in
/// bytes, but a few are unitless values.
///
/// Except as noted below, all of the fields have been present since at least
/// Linux 2.6.0. Some fields are optional and are present only if the kernel
/// was configured with various options; those dependencies are noted in the list.
///
/// **Notes**
///
/// While the file shows kilobytes (kB; 1 kB equals 1000 B),
/// it is actually kibibytes (KiB; 1 KiB equals 1024 B).
///
/// All sizes are converted to bytes. Unitless values, like `hugepages_total`, are not affected.
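The `convert_to_kibibytes` helper above normalizes a unit suffix to bytes (despite its name). A stdlib-only sketch of how a full `/proc/meminfo` line might be normalized, including the unitless case — `meminfo_value_bytes` is a hypothetical illustration, not this crate's API:

```rust
// Normalize one meminfo line ("Name: value [unit]") to a byte count,
// leaving unitless fields (e.g. HugePages_Total) untouched.
fn meminfo_value_bytes(line: &str) -> Option<u64> {
    let mut s = line.split_whitespace();
    let _name = s.next()?; // field name, e.g. "MemTotal:"
    let num: u64 = s.next()?.parse().ok()?;
    Some(match s.next() {
        None => num, // unitless field: keep the raw count
        Some("kB") | Some("KiB") => num * 1024,
        Some("MB") | Some("MiB") => num * 1024 * 1024,
        Some(_) => return None, // unknown unit
    })
}

fn main() {
    // /proc/meminfo says "kB" but the values are really kibibytes.
    assert_eq!(meminfo_value_bytes("MemTotal: 16384 kB"), Some(16 * 1024 * 1024));
    assert_eq!(meminfo_value_bytes("HugePages_Total: 2"), Some(2));
}
```

This mirrors the "kB actually means KiB" note above: the suffix is multiplied by 1024, not 1000.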
/// /// This imprecision in /proc/meminfo is known, /// but is not corrected due to legacy concerns - /// programs rely on /proc/meminfo to specify size with the "kB" string. /// /// New fields to this struct may be added at any time (even without a major or minor semver bump). #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[allow(non_snake_case)] #[non_exhaustive] pub struct Meminfo { /// Total usable RAM (i.e., physical RAM minus a few reserved bits and the kernel binary code). pub mem_total: u64, /// The sum of [LowFree](#structfield.low_free) + [HighFree](#structfield.high_free). pub mem_free: u64, /// An estimate of how much memory is available for starting new applications, without swapping. /// /// (since Linux 3.14) pub mem_available: Option, /// Relatively temporary storage for raw disk blocks that shouldn't get tremendously large (20MB or so). pub buffers: u64, /// In-memory cache for files read from the disk (the page cache). Doesn't include SwapCached. pub cached: u64, /// Memory that once was swapped out, is swapped back in but still also is in the swap /// file. /// /// (If memory pressure is high, these pages don't need to be swapped out again /// because they are already in the swap file. This saves I/O.) pub swap_cached: u64, /// Memory that has been used more recently and usually not reclaimed unless absolutely /// necessary. pub active: u64, /// Memory which has been less recently used. It is more eligible to be reclaimed for other /// purposes. pub inactive: u64, /// [To be documented.] /// /// (since Linux 2.6.28) pub active_anon: Option, /// [To be documented.] /// /// (since Linux 2.6.28) pub inactive_anon: Option, /// [To be documented.] /// /// (since Linux 2.6.28) pub active_file: Option, /// [To be documented.] /// /// (since Linux 2.6.28) pub inactive_file: Option, /// [To be documented.] /// /// (From Linux 2.6.28 to 2.6.30, CONFIG_UNEVICTABLE_LRU was required.) 
    pub unevictable: Option<u64>,
    /// [To be documented.]
    ///
    /// (From Linux 2.6.28 to 2.6.30, CONFIG_UNEVICTABLE_LRU was required.)
    pub mlocked: Option<u64>,
    /// Total amount of highmem.
    ///
    /// Highmem is all memory above ~860MB of physical memory. Highmem areas are for use by
    /// user-space programs, or for the page cache. The kernel must use tricks to access this
    /// memory, making it slower to access than lowmem.
    ///
    /// (Starting with Linux 2.6.19, CONFIG_HIGHMEM is required.)
    pub high_total: Option<u64>,
    /// Amount of free highmem.
    ///
    /// (Starting with Linux 2.6.19, CONFIG_HIGHMEM is required.)
    pub high_free: Option<u64>,
    /// Total amount of lowmem.
    ///
    /// Lowmem is memory which can be used for everything that highmem can be used for,
    /// but it is also available for the kernel's use for its own data structures.
    /// Among many other things, it is where everything from Slab is allocated.
    /// Bad things happen when you're out of lowmem.
    ///
    /// (Starting with Linux 2.6.19, CONFIG_HIGHMEM is required.)
    pub low_total: Option<u64>,
    /// Amount of free lowmem.
    ///
    /// (Starting with Linux 2.6.19, CONFIG_HIGHMEM is required.)
    pub low_free: Option<u64>,
    /// [To be documented.]
    ///
    /// (since Linux 2.6.29. CONFIG_MMU is required.)
    pub mmap_copy: Option<u64>,
    /// Total amount of swap space available.
    pub swap_total: u64,
    /// Amount of swap space that is currently unused.
    pub swap_free: u64,
    /// Memory which is waiting to get written back to the disk.
    pub dirty: u64,
    /// Memory which is actively being written back to the disk.
    pub writeback: u64,
    /// Non-file backed pages mapped into user-space page tables.
    ///
    /// (since Linux 2.6.18)
    pub anon_pages: Option<u64>,
    /// Files which have been mapped into memory (with mmap(2)), such as libraries.
    pub mapped: u64,
    /// Amount of memory consumed in tmpfs(5) filesystems.
    ///
    /// (since Linux 2.6.32)
    pub shmem: Option<u64>,
    /// In-kernel data structures cache.
    pub slab: u64,
    /// Part of Slab, that can be reclaimed on memory pressure.
    ///
    /// (since Linux 2.6.19)
    pub s_reclaimable: Option<u64>,
    /// Part of Slab, that cannot be reclaimed on memory pressure.
    ///
    /// (since Linux 2.6.19)
    pub s_unreclaim: Option<u64>,
    /// Amount of memory allocated to kernel stacks.
    ///
    /// (since Linux 2.6.32)
    pub kernel_stack: Option<u64>,
    /// Amount of memory dedicated to the lowest level of page tables.
    ///
    /// (since Linux 2.6.18)
    pub page_tables: Option<u64>,
    /// Amount of memory allocated for secondary page tables. This currently includes KVM mmu
    /// allocations on x86 and arm64.
    ///
    /// (since Linux 6.1)
    pub secondary_page_tables: Option<u64>,
    /// [To be documented.]
    ///
    /// (CONFIG_QUICKLIST is required. Since Linux 2.6.27)
    pub quicklists: Option<u64>,
    /// NFS pages sent to the server, but not yet committed to stable storage.
    ///
    /// (since Linux 2.6.18)
    pub nfs_unstable: Option<u64>,
    /// Memory used for block device "bounce buffers".
    ///
    /// (since Linux 2.6.18)
    pub bounce: Option<u64>,
    /// Memory used by FUSE for temporary writeback buffers.
    ///
    /// (since Linux 2.6.26)
    pub writeback_tmp: Option<u64>,
    /// This is the total amount of memory currently available to be allocated on the system,
    /// expressed in bytes.
    ///
    /// This limit is adhered to only if strict overcommit
    /// accounting is enabled (mode 2 in /proc/sys/vm/overcommit_memory). The limit is calculated
    /// according to the formula described under /proc/sys/vm/overcommit_memory. For further
    /// details, see the kernel source file
    /// [Documentation/vm/overcommit-accounting](https://www.kernel.org/doc/Documentation/vm/overcommit-accounting).
    ///
    /// (since Linux 2.6.10)
    pub commit_limit: Option<u64>,
    /// The amount of memory presently allocated on the system.
    ///
    /// The committed memory is a sum of all of the memory which has been allocated
    /// by processes, even if it has not been "used" by them as of yet.
A process which allocates 1GB of memory (using malloc(3)
    /// or similar), but touches only 300MB of that memory will show up as using only 300MB of memory even if it has the address space
    /// allocated for the entire 1GB.
    ///
    /// This 1GB is memory which has been "committed" to by the VM and can be used at any time by the allocating application. With
    /// strict overcommit enabled on the system (mode 2 in /proc/sys/vm/overcommit_memory), allocations which would exceed the
    /// CommitLimit will not be permitted. This is useful if one needs to guarantee that processes will not fail due to lack of memory once
    /// that memory has been successfully allocated.
    pub committed_as: u64,
    /// Total size of vmalloc memory area.
    pub vmalloc_total: u64,
    /// Amount of vmalloc area which is used.
    pub vmalloc_used: u64,
    /// Largest contiguous block of vmalloc area which is free.
    pub vmalloc_chunk: u64,
    /// [To be documented.]
    ///
    /// (CONFIG_MEMORY_FAILURE is required. Since Linux 2.6.32)
    pub hardware_corrupted: Option<u64>,
    /// Non-file backed huge pages mapped into user-space page tables.
    ///
    /// (CONFIG_TRANSPARENT_HUGEPAGE is required. Since Linux 2.6.38)
    pub anon_hugepages: Option<u64>,
    /// Memory used by shared memory (shmem) and tmpfs(5) allocated with huge pages
    ///
    /// (CONFIG_TRANSPARENT_HUGEPAGE is required. Since Linux 4.8)
    pub shmem_hugepages: Option<u64>,
    /// Shared memory mapped into user space with huge pages.
    ///
    /// (CONFIG_TRANSPARENT_HUGEPAGE is required. Since Linux 4.8)
    pub shmem_pmd_mapped: Option<u64>,
    /// Total CMA (Contiguous Memory Allocator) pages.
    ///
    /// (CONFIG_CMA is required. Since Linux 3.1)
    pub cma_total: Option<u64>,
    /// Free CMA (Contiguous Memory Allocator) pages.
    ///
    /// (CONFIG_CMA is required. Since Linux 3.1)
    pub cma_free: Option<u64>,
    /// The size of the pool of huge pages.
    ///
    /// (CONFIG_HUGETLB_PAGE is required.)
    pub hugepages_total: Option<u64>,
    /// The number of huge pages in the pool that are not yet allocated.
    ///
    /// (CONFIG_HUGETLB_PAGE is required.)
    pub hugepages_free: Option<u64>,
    /// This is the number of huge pages for which a commitment to allocate from the pool has been
    /// made, but no allocation has yet been made.
    ///
    /// These reserved huge pages guarantee that an application will be able to allocate a
    /// huge page from the pool of huge pages at fault time.
    ///
    /// (CONFIG_HUGETLB_PAGE is required. Since Linux 2.6.17)
    pub hugepages_rsvd: Option<u64>,
    /// This is the number of huge pages in the pool above the value in /proc/sys/vm/nr_hugepages.
    ///
    /// The maximum number of surplus huge pages is controlled by /proc/sys/vm/nr_overcommit_hugepages.
    ///
    /// (CONFIG_HUGETLB_PAGE is required. Since Linux 2.6.24)
    pub hugepages_surp: Option<u64>,
    /// The size of huge pages.
    ///
    /// (CONFIG_HUGETLB_PAGE is required.)
    pub hugepagesize: Option<u64>,
    /// Number of bytes of RAM linearly mapped by kernel in 4kB pages. (x86.)
    ///
    /// (since Linux 2.6.27)
    pub direct_map_4k: Option<u64>,
    /// Number of bytes of RAM linearly mapped by kernel in 4MB pages.
    ///
    /// (x86 with CONFIG_X86_64 or CONFIG_X86_PAE enabled. Since Linux 2.6.27)
    pub direct_map_4M: Option<u64>,
    /// Number of bytes of RAM linearly mapped by kernel in 2MB pages.
    ///
    /// (x86 with neither CONFIG_X86_64 nor CONFIG_X86_PAE enabled. Since Linux 2.6.27)
    pub direct_map_2M: Option<u64>,
    /// (x86 with CONFIG_X86_64 and CONFIG_X86_DIRECT_GBPAGES enabled. Since Linux 2.6.27)
    pub direct_map_1G: Option<u64>,
    /// needs documentation
    pub hugetlb: Option<u64>,
    /// Memory allocated to the per-cpu allocator used to back per-cpu allocations.
    ///
    /// This stat excludes the cost of metadata.
    pub per_cpu: Option<u64>,
    /// Kernel allocations that the kernel will attempt to reclaim under memory pressure.
    ///
    /// Includes s_reclaimable, and other direct allocations with a shrinker.
    pub k_reclaimable: Option<u64>,
    /// Undocumented field
    ///
    /// (CONFIG_TRANSPARENT_HUGEPAGE is required. Since Linux 5.4)
    pub file_pmd_mapped: Option<u64>,
    /// Undocumented field
    ///
    /// (CONFIG_TRANSPARENT_HUGEPAGE is required.
Since Linux 5.4)
    pub file_huge_pages: Option<u64>,
    /// Memory consumed by the zswap backend (compressed size).
    ///
    /// (CONFIG_ZSWAP is required. Since Linux 5.19)
    pub z_swap: Option<u64>,
    /// Amount of anonymous memory stored in zswap (original size).
    ///
    /// (CONFIG_ZSWAP is required. Since Linux 5.19)
    pub z_swapped: Option<u64>,
}

impl super::FromBufRead for Meminfo {
    fn from_buf_read<R: io::BufRead>(r: R) -> ProcResult<Self> {
        let mut map = HashMap::new();

        for line in r.lines() {
            let line = expect!(line);
            if line.is_empty() {
                continue;
            }
            let mut s = line.split_whitespace();
            let field = expect!(s.next(), "no field");
            let value = expect!(s.next(), "no value");
            let unit = s.next(); // optional

            let value = from_str!(u64, value);
            let value = if let Some(unit) = unit {
                convert_to_kibibytes(value, unit)?
            } else {
                value
            };

            map.insert(field[..field.len() - 1].to_string(), value);
        }

        // use 'remove' to move the value out of the hashmap
        // if there's anything still left in the map at the end, that
        // means we probably have a bug/typo, or are out-of-date
        let meminfo = Meminfo {
            mem_total: expect!(map.remove("MemTotal")),
            mem_free: expect!(map.remove("MemFree")),
            mem_available: map.remove("MemAvailable"),
            buffers: expect!(map.remove("Buffers")),
            cached: expect!(map.remove("Cached")),
            swap_cached: expect!(map.remove("SwapCached")),
            active: expect!(map.remove("Active")),
            inactive: expect!(map.remove("Inactive")),
            active_anon: map.remove("Active(anon)"),
            inactive_anon: map.remove("Inactive(anon)"),
            active_file: map.remove("Active(file)"),
            inactive_file: map.remove("Inactive(file)"),
            unevictable: map.remove("Unevictable"),
            mlocked: map.remove("Mlocked"),
            high_total: map.remove("HighTotal"),
            high_free: map.remove("HighFree"),
            low_total: map.remove("LowTotal"),
            low_free: map.remove("LowFree"),
            mmap_copy: map.remove("MmapCopy"),
            swap_total: expect!(map.remove("SwapTotal")),
            swap_free: expect!(map.remove("SwapFree")),
            dirty: expect!(map.remove("Dirty")),
            writeback: expect!(map.remove("Writeback")),
            anon_pages:
map.remove("AnonPages"), mapped: expect!(map.remove("Mapped")), shmem: map.remove("Shmem"), slab: expect!(map.remove("Slab")), s_reclaimable: map.remove("SReclaimable"), s_unreclaim: map.remove("SUnreclaim"), kernel_stack: map.remove("KernelStack"), page_tables: map.remove("PageTables"), secondary_page_tables: map.remove("SecPageTables"), quicklists: map.remove("Quicklists"), nfs_unstable: map.remove("NFS_Unstable"), bounce: map.remove("Bounce"), writeback_tmp: map.remove("WritebackTmp"), commit_limit: map.remove("CommitLimit"), committed_as: expect!(map.remove("Committed_AS")), vmalloc_total: expect!(map.remove("VmallocTotal")), vmalloc_used: expect!(map.remove("VmallocUsed")), vmalloc_chunk: expect!(map.remove("VmallocChunk")), hardware_corrupted: map.remove("HardwareCorrupted"), anon_hugepages: map.remove("AnonHugePages"), shmem_hugepages: map.remove("ShmemHugePages"), shmem_pmd_mapped: map.remove("ShmemPmdMapped"), cma_total: map.remove("CmaTotal"), cma_free: map.remove("CmaFree"), hugepages_total: map.remove("HugePages_Total"), hugepages_free: map.remove("HugePages_Free"), hugepages_rsvd: map.remove("HugePages_Rsvd"), hugepages_surp: map.remove("HugePages_Surp"), hugepagesize: map.remove("Hugepagesize"), direct_map_4k: map.remove("DirectMap4k"), direct_map_4M: map.remove("DirectMap4M"), direct_map_2M: map.remove("DirectMap2M"), direct_map_1G: map.remove("DirectMap1G"), k_reclaimable: map.remove("KReclaimable"), per_cpu: map.remove("Percpu"), hugetlb: map.remove("Hugetlb"), file_pmd_mapped: map.remove("FilePmdMapped"), file_huge_pages: map.remove("FileHugePages"), z_swap: map.remove("Zswap"), z_swapped: map.remove("Zswapped"), }; if cfg!(test) { assert!(map.is_empty(), "meminfo map is not empty: {:#?}", map); } Ok(meminfo) } } procfs-core-0.17.0/src/mounts.rs000064400000000000000000000063331046102023000146220ustar 00000000000000use std::{collections::HashMap, io::BufRead}; use super::ProcResult; use std::str::FromStr; #[cfg(feature = "serde1")] use 
serde::{Deserialize, Serialize};

/// A mountpoint entry under `/proc/mounts`
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
#[allow(non_snake_case)]
pub struct MountEntry {
    /// Device
    pub fs_spec: String,
    /// Mountpoint
    pub fs_file: String,
    /// FS type
    pub fs_vfstype: String,
    /// Mount options
    pub fs_mntops: HashMap<String, Option<String>>,
    /// Dump
    pub fs_freq: u8,
    /// Check
    pub fs_passno: u8,
}

impl super::FromBufRead for Vec<MountEntry> {
    fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> {
        let mut vec = Vec::new();

        for line in r.lines() {
            let line = expect!(line);
            let mut s = line.split_whitespace();

            let fs_spec = unmangle_octal(expect!(s.next()));
            let fs_file = unmangle_octal(expect!(s.next()));
            let fs_vfstype = unmangle_octal(expect!(s.next()));
            let fs_mntops = unmangle_octal(expect!(s.next()));
            let fs_mntops: HashMap<String, Option<String>> = fs_mntops
                .split(',')
                .map(|s| {
                    let mut split = s.splitn(2, '=');
                    let k = split.next().unwrap().to_string(); // can not fail, splitn will always return at least 1 element
                    let v = split.next().map(|s| s.to_string());
                    (k, v)
                })
                .collect();
            let fs_freq = expect!(u8::from_str(expect!(s.next())));
            let fs_passno = expect!(u8::from_str(expect!(s.next())));

            let mount_entry = MountEntry {
                fs_spec,
                fs_file,
                fs_vfstype,
                fs_mntops,
                fs_freq,
                fs_passno,
            };

            vec.push(mount_entry);
        }

        Ok(vec)
    }
}

/// Unmangle spaces ' ', tabs '\t', line breaks '\n', backslashes '\\', and hashes '#'
///
/// See https://elixir.bootlin.com/linux/v6.2.8/source/fs/proc_namespace.c#L89
pub(crate) fn unmangle_octal(input: &str) -> String {
    let mut input = input.to_string();

    // the kernel octal-escapes space, tab, newline, backslash, and '#'
    for (octal, c) in [(r"\040", " "), (r"\011", "\t"), (r"\012", "\n"), (r"\134", "\\"), (r"\043", "#")] {
        input = input.replace(octal, c);
    }

    input
}

#[test]
fn test_unmangle_octal() {
    let tests = [
        (r"a\134b\011c\012d\043e", "a\\b\tc\nd#e"), // all escaped chars with abcde in between
        (r"abcd", r"abcd"),                         // do nothing
    ];

    for (input, expected) in tests {
        assert_eq!(unmangle_octal(input), expected);
    }
}

#[test]
fn test_mounts() {
    use
crate::FromBufRead;
    use std::io::Cursor;

    let s = "proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
/dev/mapper/ol-root / xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
Downloads /media/sf_downloads vboxsf rw,nodev,relatime,iocharset=utf8,uid=0,gid=977,dmode=0770,fmode=0770,tag=VBoxAutomounter 0 0";
    let cursor = Cursor::new(s);
    let mounts = Vec::<MountEntry>::from_buf_read(cursor).unwrap();
    assert_eq!(mounts.len(), 4);
}
procfs-core-0.17.0/src/net.rs000064400000000000000000002117011046102023000140600ustar 00000000000000// Don't throw clippy warnings for manual string stripping.
// The suggested fix with `strip_prefix` removes support for Rust 1.33 and 1.38
#![allow(clippy::manual_strip)]

//! Information about the networking layer.
//!
//! This module corresponds to the `/proc/net` directory and contains various information about the
//! networking layer.

use crate::ProcResult;
use crate::{build_internal_error, expect, from_iter, from_str};
use std::collections::HashMap;

use bitflags::bitflags;
use std::io::BufRead;
use std::net::{Ipv4Addr, Ipv6Addr, SocketAddr, SocketAddrV4, SocketAddrV6};
use std::{path::PathBuf, str::FromStr};

#[cfg(feature = "serde1")]
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub enum TcpState {
    Established = 1,
    SynSent,
    SynRecv,
    FinWait1,
    FinWait2,
    TimeWait,
    Close,
    CloseWait,
    LastAck,
    Listen,
    Closing,
    NewSynRecv,
}

impl TcpState {
    pub fn from_u8(num: u8) -> Option<TcpState> {
        match num {
            0x01 => Some(TcpState::Established),
            0x02 => Some(TcpState::SynSent),
            0x03 => Some(TcpState::SynRecv),
            0x04 => Some(TcpState::FinWait1),
            0x05 => Some(TcpState::FinWait2),
            0x06 => Some(TcpState::TimeWait),
            0x07 => Some(TcpState::Close),
            0x08 => Some(TcpState::CloseWait),
            0x09 => Some(TcpState::LastAck),
            0x0A => Some(TcpState::Listen),
            0x0B => Some(TcpState::Closing),
            0x0C => Some(TcpState::NewSynRecv),
            _ => None,
        }
    }

    pub fn to_u8(&self) -> u8 {
        match self {
            TcpState::Established => 0x01,
            TcpState::SynSent => 0x02,
            TcpState::SynRecv => 0x03,
            TcpState::FinWait1 => 0x04,
            TcpState::FinWait2 => 0x05,
            TcpState::TimeWait => 0x06,
            TcpState::Close => 0x07,
            TcpState::CloseWait => 0x08,
            TcpState::LastAck => 0x09,
            TcpState::Listen => 0x0A,
            TcpState::Closing => 0x0B,
            TcpState::NewSynRecv => 0x0C,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub enum UdpState {
    Established = 1,
    Close = 7,
}

impl UdpState {
    pub fn from_u8(num: u8) -> Option<UdpState> {
        match num {
            0x01 => Some(UdpState::Established),
            0x07 => Some(UdpState::Close),
            _ => None,
        }
    }

    pub fn to_u8(&self) -> u8 {
        match self {
            UdpState::Established => 0x01,
            UdpState::Close => 0x07,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub enum UnixState {
    UNCONNECTED = 1,
    CONNECTING = 2,
    CONNECTED = 3,
    DISCONNECTING = 4,
}

impl UnixState {
    pub fn from_u8(num: u8) -> Option<UnixState> {
        match num {
            0x01 => Some(UnixState::UNCONNECTED),
            0x02 => Some(UnixState::CONNECTING),
            0x03 => Some(UnixState::CONNECTED),
            0x04 => Some(UnixState::DISCONNECTING),
            _ => None,
        }
    }

    pub fn to_u8(&self) -> u8 {
        match self {
            UnixState::UNCONNECTED => 0x01,
            UnixState::CONNECTING => 0x02,
            UnixState::CONNECTED => 0x03,
            UnixState::DISCONNECTING => 0x04,
        }
    }
}

/// An entry in the TCP socket table
#[derive(Debug, Clone)]
#[non_exhaustive]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct TcpNetEntry {
    pub local_address: SocketAddr,
    pub remote_address: SocketAddr,
    pub state: TcpState,
    pub rx_queue: u32,
    pub tx_queue: u32,
    pub uid: u32,
    pub inode: u64,
}

/// An entry in the UDP socket table
#[derive(Debug, Clone)]
#[non_exhaustive]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct UdpNetEntry {
    pub local_address: SocketAddr,
    pub remote_address: SocketAddr,
    pub state: UdpState,
    pub rx_queue: u32,
    pub tx_queue: u32,
    pub
uid: u32,
    pub inode: u64,
}

/// An entry in the Unix socket table
#[derive(Debug, Clone)]
#[non_exhaustive]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct UnixNetEntry {
    /// The number of users of the socket
    pub ref_count: u32,
    /// The socket type.
    ///
    /// Possible values are `SOCK_STREAM`, `SOCK_DGRAM`, or `SOCK_SEQPACKET`. These constants can
    /// be found in the libc crate.
    pub socket_type: u16,
    /// The state of the socket
    pub state: UnixState,
    /// The inode number of the socket
    pub inode: u64,
    /// The bound pathname (if any) of the socket.
    ///
    /// Sockets in the abstract namespace are included, and are shown with a path that commences
    /// with the '@' character.
    pub path: Option<PathBuf>,
}

/// Parses an address in the form 00010203:1234
///
/// Also supports IPv6
fn parse_addressport_str(s: &str, little_endian: bool) -> ProcResult<SocketAddr> {
    let mut las = s.split(':');
    let ip_part = expect!(las.next(), "ip_part");
    let port = expect!(las.next(), "port");
    let port = from_str!(u16, port, 16);

    use std::convert::TryInto;

    let read_u32 = if little_endian {
        u32::from_le_bytes
    } else {
        u32::from_be_bytes
    };

    if ip_part.len() == 8 {
        let bytes = expect!(hex::decode(ip_part));
        let ip_u32 = read_u32(bytes[..4].try_into().unwrap());
        let ip = Ipv4Addr::from(ip_u32);
        Ok(SocketAddr::V4(SocketAddrV4::new(ip, port)))
    } else if ip_part.len() == 32 {
        let bytes = expect!(hex::decode(ip_part));
        let ip_a = read_u32(bytes[0..4].try_into().unwrap());
        let ip_b = read_u32(bytes[4..8].try_into().unwrap());
        let ip_c = read_u32(bytes[8..12].try_into().unwrap());
        let ip_d = read_u32(bytes[12..16].try_into().unwrap());
        let ip = Ipv6Addr::new(
            ((ip_a >> 16) & 0xffff) as u16,
            (ip_a & 0xffff) as u16,
            ((ip_b >> 16) & 0xffff) as u16,
            (ip_b & 0xffff) as u16,
            ((ip_c >> 16) & 0xffff) as u16,
            (ip_c & 0xffff) as u16,
            ((ip_d >> 16) & 0xffff) as u16,
            (ip_d & 0xffff) as u16,
        );
        Ok(SocketAddr::V6(SocketAddrV6::new(ip, port, 0, 0)))
    } else {
        Err(build_internal_error!(format!(
            "Unable to parse {:?}
as an address:port",
            s
        )))
    }
}

/// TCP socket entries.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct TcpNetEntries(pub Vec<TcpNetEntry>);

impl super::FromBufReadSI for TcpNetEntries {
    fn from_buf_read<R: BufRead>(r: R, system_info: &crate::SystemInfo) -> ProcResult<Self> {
        let mut vec = Vec::new();

        // first line is a header we need to skip
        for line in r.lines().skip(1) {
            let line = line?;
            let mut s = line.split_whitespace();
            s.next();
            let local_address = expect!(s.next(), "tcp::local_address");
            let rem_address = expect!(s.next(), "tcp::rem_address");
            let state = expect!(s.next(), "tcp::st");
            let mut tx_rx_queue = expect!(s.next(), "tcp::tx_queue:rx_queue").splitn(2, ':');
            let tx_queue = from_str!(u32, expect!(tx_rx_queue.next(), "tcp::tx_queue"), 16);
            let rx_queue = from_str!(u32, expect!(tx_rx_queue.next(), "tcp::rx_queue"), 16);
            s.next(); // skip tr and tm->when
            s.next(); // skip retrnsmt
            let uid = from_str!(u32, expect!(s.next(), "tcp::uid"));
            s.next(); // skip timeout
            let inode = expect!(s.next(), "tcp::inode");

            vec.push(TcpNetEntry {
                local_address: parse_addressport_str(local_address, system_info.is_little_endian())?,
                remote_address: parse_addressport_str(rem_address, system_info.is_little_endian())?,
                rx_queue,
                tx_queue,
                state: expect!(TcpState::from_u8(from_str!(u8, state, 16))),
                uid,
                inode: from_str!(u64, inode),
            });
        }

        Ok(TcpNetEntries(vec))
    }
}

/// UDP socket entries.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct UdpNetEntries(pub Vec<UdpNetEntry>);

impl super::FromBufReadSI for UdpNetEntries {
    fn from_buf_read<R: BufRead>(r: R, system_info: &crate::SystemInfo) -> ProcResult<Self> {
        let mut vec = Vec::new();

        // first line is a header we need to skip
        for line in r.lines().skip(1) {
            let line = line?;
            let mut s = line.split_whitespace();
            s.next();
            let local_address = expect!(s.next(), "udp::local_address");
            let rem_address = expect!(s.next(), "udp::rem_address");
            let state = expect!(s.next(), "udp::st");
            let mut tx_rx_queue = expect!(s.next(), "udp::tx_queue:rx_queue").splitn(2, ':');
            let tx_queue: u32 = from_str!(u32, expect!(tx_rx_queue.next(), "udp::tx_queue"), 16);
            let rx_queue: u32 = from_str!(u32, expect!(tx_rx_queue.next(), "udp::rx_queue"), 16);
            s.next(); // skip tr and tm->when
            s.next(); // skip retrnsmt
            let uid = from_str!(u32, expect!(s.next(), "udp::uid"));
            s.next(); // skip timeout
            let inode = expect!(s.next(), "udp::inode");

            vec.push(UdpNetEntry {
                local_address: parse_addressport_str(local_address, system_info.is_little_endian())?,
                remote_address: parse_addressport_str(rem_address, system_info.is_little_endian())?,
                rx_queue,
                tx_queue,
                state: expect!(UdpState::from_u8(from_str!(u8, state, 16))),
                uid,
                inode: from_str!(u64, inode),
            });
        }

        Ok(UdpNetEntries(vec))
    }
}

/// Unix socket entries.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct UnixNetEntries(pub Vec<UnixNetEntry>);

impl super::FromBufRead for UnixNetEntries {
    fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> {
        let mut vec = Vec::new();

        // first line is a header we need to skip
        for line in r.lines().skip(1) {
            let line = line?;
            let mut s = line.split_whitespace();
            s.next(); // skip table slot number
            let ref_count = from_str!(u32, expect!(s.next()), 16);
            s.next(); // skip protocol, always zero
            s.next(); // skip internal kernel flags
            let socket_type = from_str!(u16, expect!(s.next()), 16);
            let state = from_str!(u8, expect!(s.next()), 16);
            let inode = from_str!(u64, expect!(s.next()));
            let path = s.next().map(PathBuf::from);

            vec.push(UnixNetEntry {
                ref_count,
                socket_type,
                inode,
                state: expect!(UnixState::from_u8(state)),
                path,
            });
        }

        Ok(UnixNetEntries(vec))
    }
}

/// An entry in the ARP table
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct ARPEntry {
    /// IPv4 address
    pub ip_address: Ipv4Addr,
    /// Hardware type
    ///
    /// This will almost always be ETHER (or maybe INFINIBAND)
    pub hw_type: ARPHardware,
    /// Internal kernel flags
    pub flags: ARPFlags,
    /// MAC Address
    pub hw_address: Option<[u8; 6]>,
    /// Device name
    pub device: String,
}

bitflags! {
    /// Hardware type for an ARP table entry.
    // source: include/uapi/linux/if_arp.h
    #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
    #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)]
    pub struct ARPHardware: u32 {
        /// NET/ROM pseudo
        const NETROM = 0;
        /// Ethernet
        const ETHER = 1;
        /// Experimental ethernet
        const EETHER = 2;
        /// AX.25 Level 2
        const AX25 = 3;
        /// PROnet token ring
        const PRONET = 4;
        /// Chaosnet
        const CHAOS = 5;
        /// IEEE 802.2 Ethernet/TR/TB
        const IEEE802 = 6;
        /// Arcnet
        const ARCNET = 7;
        /// APPLEtalk
        const APPLETLK = 8;
        /// Frame Relay DLCI
        const DLCI = 15;
        /// ATM
        const ATM = 19;
        /// Metricom STRIP
        const METRICOM = 23;
        /// IEEE 1394 IPv4 - RFC 2734
        const IEEE1394 = 24;
        /// EUI-64
        const EUI64 = 27;
        /// InfiniBand
        const INFINIBAND = 32;
    }
}

bitflags! {
    /// Flags for ARP entries
    // source: include/uapi/linux/if_arp.h
    #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
    #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)]
    pub struct ARPFlags: u32 {
        /// Completed entry
        const COM = 0x02;
        /// Permanent entry
        const PERM = 0x04;
        /// Publish entry
        const PUBL = 0x08;
        /// Has requested trailers
        const USETRAILERS = 0x10;
        /// Want to use a netmask (only for proxy entries)
        const NETMASK = 0x20;
        /// Don't answer this address
        const DONTPUB = 0x40;
    }
}

/// ARP table entries.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct ArpEntries(pub Vec<ARPEntry>);

impl super::FromBufRead for ArpEntries {
    fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> {
        let mut vec = Vec::new();

        // First line is a header we need to skip
        for line in r.lines().skip(1) {
            // Check if there might have been an IO error.
            let line = line?;
            let mut line = line.split_whitespace();
            let ip_address = expect!(Ipv4Addr::from_str(expect!(line.next())));
            let hw = from_str!(u32, &expect!(line.next())[2..], 16);
            let hw = ARPHardware::from_bits_truncate(hw);
            let flags = from_str!(u32, &expect!(line.next())[2..], 16);
            let flags = ARPFlags::from_bits_truncate(flags);

            let mac = expect!(line.next());
            let mut mac: Vec<ProcResult<u8>> = mac.split(':').map(|s| Ok(from_str!(u8, s, 16))).collect();

            let mac = if mac.len() == 6 {
                let mac_block_f = mac.pop().unwrap()?;
                let mac_block_e = mac.pop().unwrap()?;
                let mac_block_d = mac.pop().unwrap()?;
                let mac_block_c = mac.pop().unwrap()?;
                let mac_block_b = mac.pop().unwrap()?;
                let mac_block_a = mac.pop().unwrap()?;
                if mac_block_a == 0
                    && mac_block_b == 0
                    && mac_block_c == 0
                    && mac_block_d == 0
                    && mac_block_e == 0
                    && mac_block_f == 0
                {
                    None
                } else {
                    Some([
                        mac_block_a,
                        mac_block_b,
                        mac_block_c,
                        mac_block_d,
                        mac_block_e,
                        mac_block_f,
                    ])
                }
            } else {
                None
            };

            // mask is always "*"
            let _mask = expect!(line.next());
            let dev = expect!(line.next());

            vec.push(ARPEntry {
                ip_address,
                hw_type: hw,
                flags,
                hw_address: mac,
                device: dev.to_string(),
            })
        }

        Ok(ArpEntries(vec))
    }
}

/// General statistics for a network interface/device
///
/// For an example, see the [interface_stats.rs](https://github.com/eminence/procfs/tree/master/examples)
/// example in the source repo.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct DeviceStatus {
    /// Name of the interface
    pub name: String,
    /// Total bytes received
    pub recv_bytes: u64,
    /// Total packets received
    pub recv_packets: u64,
    /// Bad packets received
    pub recv_errs: u64,
    /// Packets dropped
    pub recv_drop: u64,
    /// Fifo overrun
    pub recv_fifo: u64,
    /// Frame alignment errors
    pub recv_frame: u64,
    /// Number of compressed packets received
    pub recv_compressed: u64,
    /// Number of multicast packets received
    pub recv_multicast: u64,
    /// Total bytes transmitted
    pub sent_bytes: u64,
    /// Total packets transmitted
    pub sent_packets: u64,
    /// Number of transmission errors
    pub sent_errs: u64,
    /// Number of packets dropped during transmission
    pub sent_drop: u64,
    pub sent_fifo: u64,
    /// Number of collisions
    pub sent_colls: u64,
    /// Number of packets not sent due to carrier errors
    pub sent_carrier: u64,
    /// Number of compressed packets transmitted
    pub sent_compressed: u64,
}

impl DeviceStatus {
    fn from_str(s: &str) -> ProcResult<DeviceStatus> {
        let mut split = s.split_whitespace();
        let name: String = expect!(from_iter(&mut split));
        let recv_bytes = expect!(from_iter(&mut split));
        let recv_packets = expect!(from_iter(&mut split));
        let recv_errs = expect!(from_iter(&mut split));
        let recv_drop = expect!(from_iter(&mut split));
        let recv_fifo = expect!(from_iter(&mut split));
        let recv_frame = expect!(from_iter(&mut split));
        let recv_compressed = expect!(from_iter(&mut split));
        let recv_multicast = expect!(from_iter(&mut split));
        let sent_bytes = expect!(from_iter(&mut split));
        let sent_packets = expect!(from_iter(&mut split));
        let sent_errs = expect!(from_iter(&mut split));
        let sent_drop = expect!(from_iter(&mut split));
        let sent_fifo = expect!(from_iter(&mut split));
        let sent_colls = expect!(from_iter(&mut split));
        let sent_carrier = expect!(from_iter(&mut split));
        let sent_compressed = expect!(from_iter(&mut split));

        Ok(DeviceStatus {
            name:
name.trim_end_matches(':').to_owned(),
            recv_bytes,
            recv_packets,
            recv_errs,
            recv_drop,
            recv_fifo,
            recv_frame,
            recv_compressed,
            recv_multicast,
            sent_bytes,
            sent_packets,
            sent_errs,
            sent_drop,
            sent_fifo,
            sent_colls,
            sent_carrier,
            sent_compressed,
        })
    }
}

/// Device status information for all network interfaces.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct InterfaceDeviceStatus(pub HashMap<String, DeviceStatus>);

impl super::FromBufRead for InterfaceDeviceStatus {
    fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> {
        let mut map = HashMap::new();

        // the first two lines are headers, so skip them
        for line in r.lines().skip(2) {
            let dev = DeviceStatus::from_str(&line?)?;
            map.insert(dev.name.clone(), dev);
        }

        Ok(InterfaceDeviceStatus(map))
    }
}

/// An entry in the ipv4 route table
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct RouteEntry {
    /// Interface to which packets for this route will be sent
    pub iface: String,
    /// The destination network or destination host
    pub destination: Ipv4Addr,
    pub gateway: Ipv4Addr,
    pub flags: u16,
    /// Number of references to this route
    pub refcnt: u16,
    /// Count of lookups for the route
    pub in_use: u16,
    /// The 'distance' to the target (usually counted in hops)
    pub metrics: u32,
    pub mask: Ipv4Addr,
    /// Default maximum transmission unit for TCP connections over this route
    pub mtu: u32,
    /// Default window size for TCP connections over this route
    pub window: u32,
    /// Initial RTT (Round Trip Time)
    pub irtt: u32,
}

/// A set of ipv4 routes.
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct RouteEntries(pub Vec<RouteEntry>);

impl super::FromBufRead for RouteEntries {
    fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> {
        let mut vec = Vec::new();

        // First line is a header we need to skip
        for line in r.lines().skip(1) {
            // Check if there might have been an IO error.
            let line = line?;
            let mut line = line.split_whitespace();
            // network interface name, e.g.
eth0 let iface = expect!(line.next()); let destination = from_str!(u32, expect!(line.next()), 16).to_ne_bytes().into(); let gateway = from_str!(u32, expect!(line.next()), 16).to_ne_bytes().into(); let flags = from_str!(u16, expect!(line.next()), 16); let refcnt = from_str!(u16, expect!(line.next()), 10); let in_use = from_str!(u16, expect!(line.next()), 10); let metrics = from_str!(u32, expect!(line.next()), 10); let mask = from_str!(u32, expect!(line.next()), 16).to_ne_bytes().into(); let mtu = from_str!(u32, expect!(line.next()), 10); let window = from_str!(u32, expect!(line.next()), 10); let irtt = from_str!(u32, expect!(line.next()), 10); vec.push(RouteEntry { iface: iface.to_string(), destination, gateway, flags, refcnt, in_use, metrics, mask, mtu, window, irtt, }); } Ok(RouteEntries(vec)) } } #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] /// The indication of whether this entity is acting as an IP gateway in respect /// to the forwarding of datagrams received by, but not addressed to, this /// entity. IP gateways forward datagrams. IP hosts do not (except those /// source-routed via the host). /// /// Note that for some managed nodes, this object may take on only a subset of /// the values possible. Accordingly, it is appropriate for an agent to return a /// `badValue` response if a management station attempts to change this object /// to an inappropriate value. 
pub enum IpForwarding {
    /// Acting as a gateway
    Forwarding = 1,
    /// Not acting as a gateway
    NotForwarding = 2,
}

impl IpForwarding {
    pub fn from_u8(num: u8) -> Option<IpForwarding> {
        match num {
            1 => Some(IpForwarding::Forwarding),
            2 => Some(IpForwarding::NotForwarding),
            _ => None,
        }
    }

    pub fn to_u8(&self) -> u8 {
        match self {
            IpForwarding::Forwarding => 1,
            IpForwarding::NotForwarding => 2,
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
/// The algorithm used to determine the timeout value used for retransmitting
/// unacknowledged octets.
pub enum TcpRtoAlgorithm {
    /// None of the following
    Other = 1,
    /// A constant rto
    Constant = 2,
    /// MIL-STD-1778, [Appendix B](https://datatracker.ietf.org/doc/html/rfc1213#appendix-B)
    Rsre = 3,
    /// Van Jacobson's algorithm
    ///
    /// Reference: Jacobson, V., "Congestion Avoidance and Control", SIGCOMM 1988, Stanford, California.
    Vanj = 4,
}

impl TcpRtoAlgorithm {
    pub fn from_u8(num: u8) -> Option<TcpRtoAlgorithm> {
        match num {
            1 => Some(TcpRtoAlgorithm::Other),
            2 => Some(TcpRtoAlgorithm::Constant),
            3 => Some(TcpRtoAlgorithm::Rsre),
            4 => Some(TcpRtoAlgorithm::Vanj),
            _ => None,
        }
    }

    pub fn to_u8(&self) -> u8 {
        match self {
            TcpRtoAlgorithm::Other => 1,
            TcpRtoAlgorithm::Constant => 2,
            TcpRtoAlgorithm::Rsre => 3,
            TcpRtoAlgorithm::Vanj => 4,
        }
    }
}

/// This struct holds the data needed for the IP, ICMP, TCP, and UDP management
/// information bases for an SNMP agent.
///
/// For more details, see [RFC1213](https://datatracker.ietf.org/doc/html/rfc1213)
/// and [SNMP counter](https://docs.kernel.org/networking/snmp_counter.html)
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct Snmp {
    pub ip_forwarding: IpForwarding,
    /// The default value inserted into the Time-To-Live field of the IP header
    /// of datagrams originated at this entity, whenever a TTL value is not
    /// supplied by the transport layer protocol.
pub ip_default_ttl: u32, /// The total number of input datagrams received from interfaces, including /// those received in error. pub ip_in_receives: u64, /// The number of input datagrams discarded due to errors in their IP /// headers. pub ip_in_hdr_errors: u64, /// The number of input datagrams discarded because the IP address in their /// IP header's destination field was not a valid address to be received at /// this entity. pub ip_in_addr_errors: u64, /// The number of input datagrams for which this entity was not their final /// IP destination, as a result of which an attempt was made to find a /// route to forward them to that final destination. pub ip_forw_datagrams: u64, /// The number of locally-addressed datagrams received successfully but /// discarded because of an unknown or unsupported protocol. pub ip_in_unknown_protos: u64, /// The number of input IP datagrams for which no problems were encountered /// to prevent their continued processing, but which were discarded /// (e.g., for lack of buffer space). pub ip_in_discards: u64, /// The total number of input datagrams successfully delivered to IP /// user-protocols (including ICMP). /// /// Note that this counter does not include any datagrams discarded while /// awaiting re-assembly. pub ip_in_delivers: u64, /// The total number of IP datagrams which local IP user-protocols /// (including ICMP) supplied to IP in requests for transmission. /// /// Note that this counter does not include any datagrams counted in /// ipForwDatagrams. pub ip_out_requests: u64, /// The number of output IP datagrams for which no problem was encountered /// to prevent their transmission to their destination, but which were /// discarded (e.g., for lack of buffer space). /// /// Note that this counter would include datagrams counted in /// `IpForwDatagrams` if any such packets met this (discretionary) discard /// criterion. 
pub ip_out_discards: u64,
    /// The number of IP datagrams discarded because no route could be found to
    /// transmit them to their destination.
    ///
    /// Note that this counter includes any packets counted in `IpForwDatagrams`
    /// which meet this `no-route` criterion.
    ///
    /// Note that this includes any datagrams which a host cannot route because
    /// all of its default gateways are down.
    pub ip_out_no_routes: u64,
    /// The maximum number of seconds which received fragments are held while
    /// they are awaiting reassembly at this entity.
    pub ip_reasm_timeout: u64,
    /// The number of IP fragments received which needed to be reassembled at
    /// this entity.
    pub ip_reasm_reqds: u64,
    /// The number of IP datagrams successfully re-assembled.
    pub ip_reasm_oks: u64,
    /// The number of failures detected by the IP re-assembly algorithm
    /// (for whatever reason: timed out, errors, etc).
    ///
    /// Note that this is not necessarily a count of discarded IP fragments
    /// since some algorithms (notably the algorithm in [RFC 815](https://datatracker.ietf.org/doc/html/rfc815))
    /// can lose track of the number of fragments by combining them as they are
    /// received.
    pub ip_reasm_fails: u64,
    /// The number of IP datagrams that have been successfully fragmented at
    /// this entity.
    pub ip_frag_oks: u64,
    /// The number of IP datagrams that have been discarded because they needed
    /// to be fragmented at this entity but could not be, e.g., because their
    /// `Don't Fragment` flag was set.
    pub ip_frag_fails: u64,
    /// The number of IP datagram fragments that have been generated as a result
    /// of fragmentation at this entity.
    pub ip_frag_creates: u64,
    /// The total number of ICMP messages which the entity received.
    ///
    /// Note that this counter includes all those counted by `icmp_in_errors`.
    pub icmp_in_msgs: u64,
    /// The number of ICMP messages which the entity received but determined as
    /// having ICMP-specific errors (bad ICMP checksums, bad length, etc.).
pub icmp_in_errors: u64, /// This counter indicates the checksum of the ICMP packet is wrong. /// /// Non RFC1213 field pub icmp_in_csum_errors: u64, /// The number of ICMP Destination Unreachable messages received. pub icmp_in_dest_unreachs: u64, /// The number of ICMP Time Exceeded messages received. pub icmp_in_time_excds: u64, /// The number of ICMP Parameter Problem messages received. pub icmp_in_parm_probs: u64, /// The number of ICMP Source Quench messages received. pub icmp_in_src_quenchs: u64, /// The number of ICMP Redirect messages received. pub icmp_in_redirects: u64, /// The number of ICMP Echo (request) messages received. pub icmp_in_echos: u64, /// The number of ICMP Echo Reply messages received. pub icmp_in_echo_reps: u64, /// The number of ICMP Timestamp (request) messages received. pub icmp_in_timestamps: u64, /// The number of ICMP Timestamp Reply messages received. pub icmp_in_timestamp_reps: u64, /// The number of ICMP Address Mask Request messages received. pub icmp_in_addr_masks: u64, /// The number of ICMP Address Mask Reply messages received. pub icmp_in_addr_mask_reps: u64, /// The total number of ICMP messages which this entity attempted to send. /// /// Note that this counter includes all those counted by `icmp_out_errors`. pub icmp_out_msgs: u64, /// The number of ICMP messages which this entity did not send due to /// problems discovered within ICMP such as a lack of buffers. This value /// should not include errors discovered outside the ICMP layer such as the /// inability of IP to route the resultant datagram. In some /// implementations there may be no types of error which contribute to this /// counter's value. pub icmp_out_errors: u64, /// The number of ICMP Destination Unreachable messages sent. pub icmp_out_dest_unreachs: u64, /// The number of ICMP Time Exceeded messages sent. pub icmp_out_time_excds: u64, /// The number of ICMP Parameter Problem messages sent. 
pub icmp_out_parm_probs: u64, /// The number of ICMP Source Quench messages sent. pub icmp_out_src_quenchs: u64, /// The number of ICMP Redirect messages sent. For a host, this object will /// always be zero, since hosts do not send redirects. pub icmp_out_redirects: u64, /// The number of ICMP Echo (request) messages sent. pub icmp_out_echos: u64, /// The number of ICMP Echo Reply messages sent. pub icmp_out_echo_reps: u64, /// The number of ICMP Timestamp (request) messages sent. pub icmp_out_timestamps: u64, /// The number of ICMP Timestamp Reply messages sent. pub icmp_out_timestamp_reps: u64, /// The number of ICMP Address Mask Request messages sent. pub icmp_out_addr_masks: u64, /// The number of ICMP Address Mask Reply messages sent. pub icmp_out_addr_mask_reps: u64, // ignore ICMP numeric types pub tcp_rto_algorithm: TcpRtoAlgorithm, /// The minimum value permitted by a TCP implementation for the /// retransmission timeout, measured in milliseconds. More refined /// semantics for objects of this type depend upon the algorithm used to /// determine the retransmission timeout. In particular, when the timeout /// algorithm is rsre(3), an object of this type has the semantics of the /// LBOUND quantity described in [RFC 793](https://datatracker.ietf.org/doc/html/rfc793). pub tcp_rto_min: u64, /// The maximum value permitted by a TCP implementation for the /// retransmission timeout, measured in milliseconds. More refined /// semantics for objects of this type depend upon the algorithm used to /// determine the retransmission timeout. In particular, when the timeout /// algorithm is rsre(3), an object of this type has the semantics of the /// UBOUND quantity described in [RFC 793](https://datatracker.ietf.org/doc/html/rfc793). pub tcp_rto_max: u64, /// The limit on the total number of TCP connections the entity can support. /// In entities where the maximum number of connections is dynamic, this /// object should contain the value -1. 
pub tcp_max_conn: i64, /// The number of times TCP connections have made a direct transition to the /// SYN-SENT state from the CLOSED state. pub tcp_active_opens: u64, /// The number of times TCP connections have made a direct transition to the /// SYN-RCVD state from the LISTEN state. pub tcp_passive_opens: u64, /// The number of times TCP connections have made a direct transition to the /// CLOSED state from either the SYN-SENT state or the SYN-RCVD state, plus /// the number of times TCP connections have made a direct transition to the /// LISTEN state from the SYN-RCVD state. pub tcp_attempt_fails: u64, /// The number of times TCP connections have made a direct transition to the /// CLOSED state from either the ESTABLISHED state or the CLOSE-WAIT state. pub tcp_estab_resets: u64, /// The number of TCP connections for which the current state is either /// ESTABLISHED or CLOSE-WAIT. pub tcp_curr_estab: u64, /// The total number of segments received, including those received in /// error. This count includes segments received on currently established /// connections. pub tcp_in_segs: u64, /// The total number of segments sent, including those on current /// connections but excluding those containing only retransmitted octets. pub tcp_out_segs: u64, /// The total number of segments retransmitted - that is, the number of TCP /// segments transmitted containing one or more previously transmitted octets. pub tcp_retrans_segs: u64, /// The total number of segments received in error (e.g., bad TCP checksums). pub tcp_in_errs: u64, /// The number of TCP segments sent containing the RST flag. pub tcp_out_rsts: u64, /// [To be documented.] /// /// Non RFC1213 field pub tcp_in_csum_errors: u64, /// The total number of UDP datagrams delivered to UDP users. pub udp_in_datagrams: u64, /// The total number of received UDP datagrams for which there was no /// application at the destination port. 
pub udp_no_ports: u64, /// The number of received UDP datagrams that could not be delivered for /// reasons other than the lack of an application at the destination port. pub udp_in_errors: u64, /// The total number of UDP datagrams sent from this entity. pub udp_out_datagrams: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_rcvbuf_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_sndbuf_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_in_csum_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_ignored_multi: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_in_datagrams: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_no_ports: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_in_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_out_datagrams: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_rcvbuf_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_sndbuf_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_in_csum_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_ignored_multi: u64, } /// A /proc/net/snmp section /// /// A section represents two lines, 1x header and 1x data /// Each line has a prefix [ip, icmp, icmpmsg, tcp, udp, udplite] /// Eg. 
/// Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens AttemptFails EstabResets CurrEstab InSegs OutSegs RetransSegs InErrs OutRsts InCsumErrors
/// Tcp: 1 200 120000 -1 177 14 0 6 4 11155 10083 18 0 94 0
#[derive(Debug)]
struct SnmpSection {
    prefix: String,
    values: HashMap<String, String>,
}

impl<'a> SnmpSection {
    fn new(hdr: String, data: String) -> ProcResult<Self> {
        let mut hdr = hdr.trim_end().split_whitespace();
        let mut data = data.trim_end().split_whitespace();
        let prefix = expect!(hdr.next()).to_owned();
        expect!(data.next());
        let mut values = HashMap::new();

        for hdr in hdr {
            values.insert(hdr.to_owned(), expect!(data.next()).to_owned());
        }

        Ok(Self { prefix, values })
    }
}

/// An iterator over the /proc/net/snmp sections using `BufRead`.
#[derive(Debug)]
struct SnmpSections<B: BufRead> {
    buf: B,
}

impl<B: BufRead> Iterator for SnmpSections<B> {
    type Item = ProcResult<SnmpSection>;

    fn next(&mut self) -> Option<Self::Item> {
        let mut hdr = String::new();
        match self.buf.read_line(&mut hdr) {
            Ok(0) => None,
            Ok(_n) => {
                let mut data = String::new();
                match self.buf.read_line(&mut data) {
                    Ok(_n) => Some(SnmpSection::new(hdr, data)),
                    Err(e) => Some(Err(e.into())),
                }
            }
            Err(e) => Some(Err(e.into())),
        }
    }
}

impl super::FromBufRead for Snmp {
    fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> {
        let mut map = HashMap::new();
        let sections = SnmpSections { buf: r };
        for section in sections.flatten() {
            let p = &section.prefix;
            for (hdr, v) in &section.values {
                map.insert(format!("{p}{hdr}"), v.to_owned());
            }
        }

        let snmp = Snmp {
            // Ip
            ip_forwarding: expect!(IpForwarding::from_u8(from_str!(
                u8,
                &expect!(map.remove("Ip:Forwarding"))
            ))),
            ip_default_ttl: from_str!(u32, &expect!(map.remove("Ip:DefaultTTL"))),
            ip_in_receives: from_str!(u64, &expect!(map.remove("Ip:InReceives"))),
            ip_in_hdr_errors: from_str!(u64, &expect!(map.remove("Ip:InHdrErrors"))),
            ip_in_addr_errors: from_str!(u64, &expect!(map.remove("Ip:InAddrErrors"))),
            ip_forw_datagrams: from_str!(u64, &expect!(map.remove("Ip:ForwDatagrams"))),
            ip_in_unknown_protos: from_str!(u64, &expect!(map.remove("Ip:InUnknownProtos"))),
            ip_in_discards: from_str!(u64, &expect!(map.remove("Ip:InDiscards"))),
            ip_in_delivers: from_str!(u64, &expect!(map.remove("Ip:InDelivers"))),
            ip_out_requests: from_str!(u64, &expect!(map.remove("Ip:OutRequests"))),
            ip_out_discards: from_str!(u64, &expect!(map.remove("Ip:OutDiscards"))),
            ip_out_no_routes: from_str!(u64, &expect!(map.remove("Ip:OutNoRoutes"))),
            ip_reasm_timeout: from_str!(u64, &expect!(map.remove("Ip:ReasmTimeout"))),
            ip_reasm_reqds: from_str!(u64, &expect!(map.remove("Ip:ReasmReqds"))),
            ip_reasm_oks: from_str!(u64, &expect!(map.remove("Ip:ReasmOKs"))),
            ip_reasm_fails: from_str!(u64, &expect!(map.remove("Ip:ReasmFails"))),
            ip_frag_oks: from_str!(u64, &expect!(map.remove("Ip:FragOKs"))),
            ip_frag_fails: from_str!(u64, &expect!(map.remove("Ip:FragFails"))),
            ip_frag_creates: from_str!(u64, &expect!(map.remove("Ip:FragCreates"))),
            //ip_out_transmits: from_str!(u64, &expect!(map.remove("Ip:OutTransmits"))),
            // Icmp
            icmp_in_msgs: from_str!(u64, &expect!(map.remove("Icmp:InMsgs"))),
            icmp_in_errors: from_str!(u64, &expect!(map.remove("Icmp:InErrors"))),
            icmp_in_csum_errors: from_str!(u64, &expect!(map.remove("Icmp:InCsumErrors"))),
            icmp_in_dest_unreachs: from_str!(u64, &expect!(map.remove("Icmp:InDestUnreachs"))),
            icmp_in_time_excds: from_str!(u64, &expect!(map.remove("Icmp:InTimeExcds"))),
            icmp_in_parm_probs: from_str!(u64, &expect!(map.remove("Icmp:InParmProbs"))),
            icmp_in_src_quenchs: from_str!(u64, &expect!(map.remove("Icmp:InSrcQuenchs"))),
            icmp_in_redirects: from_str!(u64, &expect!(map.remove("Icmp:InRedirects"))),
            icmp_in_echos: from_str!(u64, &expect!(map.remove("Icmp:InEchos"))),
            icmp_in_echo_reps: from_str!(u64, &expect!(map.remove("Icmp:InEchoReps"))),
            icmp_in_timestamps: from_str!(u64, &expect!(map.remove("Icmp:InTimestamps"))),
            icmp_in_timestamp_reps: from_str!(u64, &expect!(map.remove("Icmp:InTimestampReps"))),
            icmp_in_addr_masks: from_str!(u64, &expect!(map.remove("Icmp:InAddrMasks"))),
icmp_in_addr_mask_reps: from_str!(u64, &expect!(map.remove("Icmp:InAddrMaskReps"))), icmp_out_msgs: from_str!(u64, &expect!(map.remove("Icmp:OutMsgs"))), icmp_out_errors: from_str!(u64, &expect!(map.remove("Icmp:OutErrors"))), //icmp_out_rate_limit_global: from_str!(u64, &expect!(map.remove("Icmp:OutRateLimitGlobal"))), //icmp_out_rate_limit_host: from_str!(u64, &expect!(map.remove("Icmp:OutRateLimitHost"))), icmp_out_dest_unreachs: from_str!(u64, &expect!(map.remove("Icmp:OutDestUnreachs"))), icmp_out_time_excds: from_str!(u64, &expect!(map.remove("Icmp:OutTimeExcds"))), icmp_out_parm_probs: from_str!(u64, &expect!(map.remove("Icmp:OutParmProbs"))), icmp_out_src_quenchs: from_str!(u64, &expect!(map.remove("Icmp:OutSrcQuenchs"))), icmp_out_redirects: from_str!(u64, &expect!(map.remove("Icmp:OutRedirects"))), icmp_out_echos: from_str!(u64, &expect!(map.remove("Icmp:OutEchos"))), icmp_out_echo_reps: from_str!(u64, &expect!(map.remove("Icmp:OutEchoReps"))), icmp_out_timestamps: from_str!(u64, &expect!(map.remove("Icmp:OutTimestamps"))), icmp_out_timestamp_reps: from_str!(u64, &expect!(map.remove("Icmp:OutTimestampReps"))), icmp_out_addr_masks: from_str!(u64, &expect!(map.remove("Icmp:OutAddrMasks"))), icmp_out_addr_mask_reps: from_str!(u64, &expect!(map.remove("Icmp:OutAddrMaskReps"))), // Tcp tcp_rto_algorithm: expect!(TcpRtoAlgorithm::from_u8(from_str!( u8, &expect!(map.remove("Tcp:RtoAlgorithm")) ))), tcp_rto_min: from_str!(u64, &expect!(map.remove("Tcp:RtoMin"))), tcp_rto_max: from_str!(u64, &expect!(map.remove("Tcp:RtoMax"))), tcp_max_conn: from_str!(i64, &expect!(map.remove("Tcp:MaxConn"))), tcp_active_opens: from_str!(u64, &expect!(map.remove("Tcp:ActiveOpens"))), tcp_passive_opens: from_str!(u64, &expect!(map.remove("Tcp:PassiveOpens"))), tcp_attempt_fails: from_str!(u64, &expect!(map.remove("Tcp:AttemptFails"))), tcp_estab_resets: from_str!(u64, &expect!(map.remove("Tcp:EstabResets"))), tcp_curr_estab: from_str!(u64, &expect!(map.remove("Tcp:CurrEstab"))), 
tcp_in_segs: from_str!(u64, &expect!(map.remove("Tcp:InSegs"))), tcp_out_segs: from_str!(u64, &expect!(map.remove("Tcp:OutSegs"))), tcp_retrans_segs: from_str!(u64, &expect!(map.remove("Tcp:RetransSegs"))), tcp_in_errs: from_str!(u64, &expect!(map.remove("Tcp:InErrs"))), tcp_out_rsts: from_str!(u64, &expect!(map.remove("Tcp:OutRsts"))), tcp_in_csum_errors: from_str!(u64, &expect!(map.remove("Tcp:InCsumErrors"))), // Udp udp_in_datagrams: from_str!(u64, &expect!(map.remove("Udp:InDatagrams"))), udp_no_ports: from_str!(u64, &expect!(map.remove("Udp:NoPorts"))), udp_in_errors: from_str!(u64, &expect!(map.remove("Udp:InErrors"))), udp_out_datagrams: from_str!(u64, &expect!(map.remove("Udp:OutDatagrams"))), udp_rcvbuf_errors: from_str!(u64, &expect!(map.remove("Udp:RcvbufErrors"))), udp_sndbuf_errors: from_str!(u64, &expect!(map.remove("Udp:SndbufErrors"))), udp_in_csum_errors: from_str!(u64, &expect!(map.remove("Udp:InCsumErrors"))), udp_ignored_multi: from_str!(u64, &expect!(map.remove("Udp:IgnoredMulti"))), //udp_mem_errors: from_str!(u64, &expect!(map.remove("Udp:MemErrors"))), // UdpLite udp_lite_in_datagrams: from_str!(u64, &expect!(map.remove("UdpLite:InDatagrams"))), udp_lite_no_ports: from_str!(u64, &expect!(map.remove("UdpLite:NoPorts"))), udp_lite_in_errors: from_str!(u64, &expect!(map.remove("UdpLite:InErrors"))), udp_lite_out_datagrams: from_str!(u64, &expect!(map.remove("UdpLite:OutDatagrams"))), udp_lite_rcvbuf_errors: from_str!(u64, &expect!(map.remove("UdpLite:RcvbufErrors"))), udp_lite_sndbuf_errors: from_str!(u64, &expect!(map.remove("UdpLite:SndbufErrors"))), udp_lite_in_csum_errors: from_str!(u64, &expect!(map.remove("UdpLite:InCsumErrors"))), udp_lite_ignored_multi: from_str!(u64, &expect!(map.remove("UdpLite:IgnoredMulti"))), //udp_lite_mem_errors: from_str!(u64, &expect!(map.remove("UdpLite:MemErrors"))), }; Ok(snmp) } } /// This struct holds the data needed for the IP, ICMP, TCP, and UDP management /// information bases for an SNMP agent. 
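The header/data zipping done by `SnmpSection::new` above can be illustrated with a self-contained sketch. The header line supplies field names, the data line supplies values, and keys are formed as `Prefix:Field` (e.g. `Tcp:RtoMin`), matching the `map.remove("Tcp:RtoMin")` lookups. The `zip_snmp_section` helper below is hypothetical, not this crate's API:

```rust
use std::collections::HashMap;

// Zip one /proc/net/snmp header/data line pair into "Prefix:Field" keys.
// Returns None if either line is too short.
fn zip_snmp_section(hdr: &str, data: &str) -> Option<HashMap<String, String>> {
    let mut names = hdr.split_whitespace();
    let mut values = data.split_whitespace();
    let prefix = names.next()?; // e.g. "Tcp:" (the colon is part of the token)
    values.next()?;             // the data line repeats the prefix; skip it
    let mut map = HashMap::new();
    for name in names {
        map.insert(format!("{prefix}{name}"), values.next()?.to_owned());
    }
    Some(map)
}

fn main() {
    // The two-line section shown in the doc comment above, abbreviated.
    let hdr = "Tcp: RtoAlgorithm RtoMin RtoMax MaxConn";
    let data = "Tcp: 1 200 120000 -1";
    let map = zip_snmp_section(hdr, data).unwrap();
    assert_eq!(map["Tcp:RtoMin"], "200");
    assert_eq!(map["Tcp:MaxConn"], "-1");
    println!("parsed {} fields", map.len());
}
```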
/// /// Note that this struct is only for IPv6 /// /// For more details, see [RFC1213](https://datatracker.ietf.org/doc/html/rfc1213) /// and [SNMP counter](https://docs.kernel.org/networking/snmp_counter.html) #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Snmp6 { /// The total number of input datagrams received from interfaces, including /// those received in error. pub ip_in_receives: u64, /// The number of input datagrams discarded due to errors in their IP /// headers. pub ip_in_hdr_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub ip_in_too_big_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub ip_in_no_routes: u64, /// The number of input datagrams discarded because the IP address in their /// IP header's destination field was not a valid address to be received at /// this entity. pub ip_in_addr_errors: u64, /// The number of locally-addressed datagrams received successfully but /// discarded because of an unknown or unsupported protocol. pub ip_in_unknown_protos: u64, /// [To be documented.] /// /// Non RFC1213 field pub ip_in_truncated_pkts: u64, /// The number of input IP datagrams for which no problems were encountered /// to prevent their continued processing, but which were discarded /// (e.g., for lack of buffer space). pub ip_in_discards: u64, /// The total number of input datagrams successfully delivered to IP /// user-protocols (including ICMP). /// /// Note that this counter does not include any datagrams discarded while /// awaiting re-assembly. pub ip_in_delivers: u64, /// [To be documented.] /// /// Non RFC1213 field pub ip_out_forw_datagrams: u64, /// The total number of IP datagrams which local IP user-protocols /// (including ICMP) supplied to IP in requests for transmission. /// /// Note that this counter does not include any datagrams counted in /// ipForwDatagrams. 
pub ip_out_requests: u64,
    /// The number of output IP datagrams for which no problem was encountered
    /// to prevent their transmission to their destination, but which were
    /// discarded (e.g., for lack of buffer space).
    ///
    /// Note that this counter would include datagrams counted in
    /// `IpForwDatagrams` if any such packets met this (discretionary) discard
    /// criterion.
    pub ip_out_discards: u64,
    /// The number of IP datagrams discarded because no route could be found to
    /// transmit them to their destination.
    ///
    /// Note that this counter includes any packets counted in `IpForwDatagrams`
    /// which meet this `no-route` criterion.
    ///
    /// Note that this includes any datagrams which a host cannot route because
    /// all of its default gateways are down.
    pub ip_out_no_routes: u64,
    /// The maximum number of seconds which received fragments are held while
    /// they are awaiting reassembly at this entity.
    pub ip_reasm_timeout: u64,
    /// The number of IP fragments received which needed to be reassembled at
    /// this entity.
    pub ip_reasm_reqds: u64,
    /// The number of IP datagrams successfully re-assembled.
    pub ip_reasm_oks: u64,
    /// The number of failures detected by the IP re-assembly algorithm
    /// (for whatever reason: timed out, errors, etc).
    ///
    /// Note that this is not necessarily a count of discarded IP fragments
    /// since some algorithms (notably the algorithm in [RFC 815](https://datatracker.ietf.org/doc/html/rfc815))
    /// can lose track of the number of fragments by combining them as they are
    /// received.
    pub ip_reasm_fails: u64,
    /// The number of IP datagrams that have been successfully fragmented at
    /// this entity.
    pub ip_frag_oks: u64,
    /// The number of IP datagrams that have been discarded because they needed
    /// to be fragmented at this entity but could not be, e.g., because their
    /// `Don't Fragment` flag was set.
    pub ip_frag_fails: u64,
    /// The number of IP datagram fragments that have been generated as a result
    /// of fragmentation at this entity.
pub ip_frag_creates: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_mcast_pkts: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_out_mcast_pkts: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_octets: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_out_octets: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_mcast_octets: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_out_mcast_octets: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_bcast_octets: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_out_bcast_octets: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_no_ect_pkts: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_ect1_pkts: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_ect0_pkts: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub ip_in_ce_pkts: u64,
    /// The total number of ICMP messages which the entity received.
    ///
    /// Note that this counter includes all those counted by `icmp_in_errors`.
    pub icmp_in_msgs: u64,
    /// The number of ICMP messages which the entity received but determined as
    /// having ICMP-specific errors (bad ICMP checksums, bad length, etc.).
    pub icmp_in_errors: u64,
    /// The total number of ICMP messages which this entity attempted to send.
    ///
    /// Note that this counter includes all those counted by `icmp_out_errors`.
    pub icmp_out_msgs: u64,
    /// The number of ICMP messages which this entity did not send due to
    /// problems discovered within ICMP such as a lack of buffers. This value
    /// should not include errors discovered outside the ICMP layer such as the
    /// inability of IP to route the resultant datagram. In some
    /// implementations there may be no types of error which contribute to this
    /// counter's value.
    pub icmp_out_errors: u64,
    /// This counter indicates the checksum of the ICMP packet is wrong.
/// /// Non RFC1213 field pub icmp_in_csum_errors: u64, /// The number of ICMP Destination Unreachable messages received. pub icmp_in_dest_unreachs: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_pkt_too_bigs: u64, /// The number of ICMP Time Exceeded messages received. pub icmp_in_time_excds: u64, /// The number of ICMP Parameter Problem messages received. pub icmp_in_parm_problem: u64, /// The number of ICMP Echo (request) messages received. pub icmp_in_echos: u64, /// The number of ICMP Echo Reply messages received. pub icmp_in_echo_replies: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_group_memb_queries: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_group_memb_responses: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_group_memb_reductions: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_router_solicits: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_router_advertisements: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_neighbor_solicits: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_neighbor_advertisements: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_redirects: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_in_mldv2_reports: u64, /// The number of ICMP Destination Unreachable messages sent. pub icmp_out_dest_unreachs: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_pkt_too_bigs: u64, /// The number of ICMP Time Exceeded messages sent. pub icmp_out_time_excds: u64, /// The number of ICMP Parameter Problem messages sent. pub icmp_out_parm_problems: u64, /// The number of ICMP Echo (request) messages sent. pub icmp_out_echos: u64, /// The number of ICMP Echo Reply messages sent. pub icmp_out_echo_replies: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_group_memb_queries: u64, /// [To be documented.] 
/// /// Non RFC1213 field pub icmp_out_group_memb_responses: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_group_memb_reductions: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_router_solicits: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_router_advertisements: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_neighbor_solicits: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_neighbor_advertisements: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_redirects: u64, /// [To be documented.] /// /// Non RFC1213 field pub icmp_out_mldv2_reports: u64, // // ignore ICMP numeric types // /// The total number of UDP datagrams delivered to UDP users. pub udp_in_datagrams: u64, /// The total number of received UDP datagrams for which there was no /// application at the destination port. pub udp_no_ports: u64, /// The number of received UDP datagrams that could not be delivered for /// reasons other than the lack of an application at the destination port. pub udp_in_errors: u64, /// The total number of UDP datagrams sent from this entity. pub udp_out_datagrams: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_rcvbuf_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_sndbuf_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_in_csum_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_ignored_multi: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_in_datagrams: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_no_ports: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_in_errors: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_out_datagrams: u64, /// [To be documented.] /// /// Non RFC1213 field pub udp_lite_rcvbuf_errors: u64, /// [To be documented.] 
///
    /// Non RFC1213 field
    pub udp_lite_sndbuf_errors: u64,
    /// [To be documented.]
    ///
    /// Non RFC1213 field
    pub udp_lite_in_csum_errors: u64,
}

impl super::FromBufRead for Snmp6 {
    fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> {
        let mut map = HashMap::new();

        for line in r.lines() {
            let line = expect!(line);
            if line.is_empty() {
                continue;
            }
            let mut s = line.split_whitespace();
            let field = expect!(s.next(), "no field");
            if field.starts_with("Icmp6InType") || field.starts_with("Icmp6OutType") {
                continue;
            }
            let value = from_str!(u64, expect!(s.next(), "no value"));
            map.insert(field.to_string(), value);
        }

        let snmp6 = Snmp6 {
            ip_in_receives: expect!(map.remove("Ip6InReceives")),
            ip_in_hdr_errors: expect!(map.remove("Ip6InHdrErrors")),
            ip_in_too_big_errors: expect!(map.remove("Ip6InTooBigErrors")),
            ip_in_no_routes: expect!(map.remove("Ip6InNoRoutes")),
            ip_in_addr_errors: expect!(map.remove("Ip6InAddrErrors")),
            ip_in_unknown_protos: expect!(map.remove("Ip6InUnknownProtos")),
            ip_in_truncated_pkts: expect!(map.remove("Ip6InTruncatedPkts")),
            ip_in_discards: expect!(map.remove("Ip6InDiscards")),
            ip_in_delivers: expect!(map.remove("Ip6InDelivers")),
            ip_out_forw_datagrams: expect!(map.remove("Ip6OutForwDatagrams")),
            ip_out_requests: expect!(map.remove("Ip6OutRequests")),
            ip_out_discards: expect!(map.remove("Ip6OutDiscards")),
            ip_out_no_routes: expect!(map.remove("Ip6OutNoRoutes")),
            ip_reasm_timeout: expect!(map.remove("Ip6ReasmTimeout")),
            ip_reasm_reqds: expect!(map.remove("Ip6ReasmReqds")),
            ip_reasm_oks: expect!(map.remove("Ip6ReasmOKs")),
            ip_reasm_fails: expect!(map.remove("Ip6ReasmFails")),
            ip_frag_oks: expect!(map.remove("Ip6FragOKs")),
            ip_frag_fails: expect!(map.remove("Ip6FragFails")),
            ip_frag_creates: expect!(map.remove("Ip6FragCreates")),
            ip_in_mcast_pkts: expect!(map.remove("Ip6InMcastPkts")),
            ip_out_mcast_pkts: expect!(map.remove("Ip6OutMcastPkts")),
            ip_in_octets: expect!(map.remove("Ip6InOctets")),
            ip_out_octets: expect!(map.remove("Ip6OutOctets")),
            ip_in_mcast_octets:
expect!(map.remove("Ip6InMcastOctets")), ip_out_mcast_octets: expect!(map.remove("Ip6OutMcastOctets")), ip_in_bcast_octets: expect!(map.remove("Ip6InBcastOctets")), ip_out_bcast_octets: expect!(map.remove("Ip6OutBcastOctets")), ip_in_no_ect_pkts: expect!(map.remove("Ip6InNoECTPkts")), ip_in_ect1_pkts: expect!(map.remove("Ip6InECT1Pkts")), ip_in_ect0_pkts: expect!(map.remove("Ip6InECT0Pkts")), ip_in_ce_pkts: expect!(map.remove("Ip6InCEPkts")), icmp_in_msgs: expect!(map.remove("Icmp6InMsgs")), icmp_in_errors: expect!(map.remove("Icmp6InErrors")), icmp_out_msgs: expect!(map.remove("Icmp6OutMsgs")), icmp_out_errors: expect!(map.remove("Icmp6OutErrors")), icmp_in_csum_errors: expect!(map.remove("Icmp6InCsumErrors")), icmp_in_dest_unreachs: expect!(map.remove("Icmp6InDestUnreachs")), icmp_in_pkt_too_bigs: expect!(map.remove("Icmp6InPktTooBigs")), icmp_in_time_excds: expect!(map.remove("Icmp6InTimeExcds")), icmp_in_parm_problem: expect!(map.remove("Icmp6InParmProblems")), icmp_in_echos: expect!(map.remove("Icmp6InEchos")), icmp_in_echo_replies: expect!(map.remove("Icmp6InEchoReplies")), icmp_in_group_memb_queries: expect!(map.remove("Icmp6InGroupMembQueries")), icmp_in_group_memb_responses: expect!(map.remove("Icmp6InGroupMembResponses")), icmp_in_group_memb_reductions: expect!(map.remove("Icmp6InGroupMembReductions")), icmp_in_router_solicits: expect!(map.remove("Icmp6InRouterSolicits")), icmp_in_router_advertisements: expect!(map.remove("Icmp6InRouterAdvertisements")), icmp_in_neighbor_solicits: expect!(map.remove("Icmp6InNeighborSolicits")), icmp_in_neighbor_advertisements: expect!(map.remove("Icmp6InNeighborAdvertisements")), icmp_in_redirects: expect!(map.remove("Icmp6InRedirects")), icmp_in_mldv2_reports: expect!(map.remove("Icmp6InMLDv2Reports")), icmp_out_dest_unreachs: expect!(map.remove("Icmp6OutDestUnreachs")), icmp_out_pkt_too_bigs: expect!(map.remove("Icmp6OutPktTooBigs")), icmp_out_time_excds: expect!(map.remove("Icmp6OutTimeExcds")), icmp_out_parm_problems: 
expect!(map.remove("Icmp6OutParmProblems")), icmp_out_echos: expect!(map.remove("Icmp6OutEchos")), icmp_out_echo_replies: expect!(map.remove("Icmp6OutEchoReplies")), icmp_out_group_memb_queries: expect!(map.remove("Icmp6OutGroupMembQueries")), icmp_out_group_memb_responses: expect!(map.remove("Icmp6OutGroupMembResponses")), icmp_out_group_memb_reductions: expect!(map.remove("Icmp6OutGroupMembReductions")), icmp_out_router_solicits: expect!(map.remove("Icmp6OutRouterSolicits")), icmp_out_router_advertisements: expect!(map.remove("Icmp6OutRouterAdvertisements")), icmp_out_neighbor_solicits: expect!(map.remove("Icmp6OutNeighborSolicits")), icmp_out_neighbor_advertisements: expect!(map.remove("Icmp6OutNeighborAdvertisements")), icmp_out_redirects: expect!(map.remove("Icmp6OutRedirects")), icmp_out_mldv2_reports: expect!(map.remove("Icmp6OutMLDv2Reports")), // // ignore ICMP numeric types // udp_in_datagrams: expect!(map.remove("Udp6InDatagrams")), udp_no_ports: expect!(map.remove("Udp6NoPorts")), udp_in_errors: expect!(map.remove("Udp6InErrors")), udp_out_datagrams: expect!(map.remove("Udp6OutDatagrams")), udp_rcvbuf_errors: expect!(map.remove("Udp6RcvbufErrors")), udp_sndbuf_errors: expect!(map.remove("Udp6SndbufErrors")), udp_in_csum_errors: expect!(map.remove("Udp6InCsumErrors")), udp_ignored_multi: expect!(map.remove("Udp6IgnoredMulti")), udp_lite_in_datagrams: expect!(map.remove("UdpLite6InDatagrams")), udp_lite_no_ports: expect!(map.remove("UdpLite6NoPorts")), udp_lite_in_errors: expect!(map.remove("UdpLite6InErrors")), udp_lite_out_datagrams: expect!(map.remove("UdpLite6OutDatagrams")), udp_lite_rcvbuf_errors: expect!(map.remove("UdpLite6RcvbufErrors")), udp_lite_sndbuf_errors: expect!(map.remove("UdpLite6SndbufErrors")), udp_lite_in_csum_errors: expect!(map.remove("UdpLite6InCsumErrors")), }; if cfg!(test) { assert!(map.is_empty(), "snmp6 map is not empty: {:#?}", map); } Ok(snmp6) } } #[cfg(test)] mod tests { use super::*; use std::net::IpAddr; #[test] fn 
test_parse_ipaddr() { use std::str::FromStr; let addr = parse_addressport_str("0100007F:1234", true).unwrap(); assert_eq!(addr.port(), 0x1234); match addr.ip() { IpAddr::V4(addr) => assert_eq!(addr, Ipv4Addr::new(127, 0, 0, 1)), _ => panic!("Not IPv4"), } // When you connect to [2a00:1450:4001:814::200e]:80 (ipv6.google.com) the entry with // 5014002A14080140000000000E200000:0050 remote endpoint is created in /proc/net/tcp6 // on Linux 4.19. let addr = parse_addressport_str("5014002A14080140000000000E200000:0050", true).unwrap(); assert_eq!(addr.port(), 80); match addr.ip() { IpAddr::V6(addr) => assert_eq!(addr, Ipv6Addr::from_str("2a00:1450:4001:814::200e").unwrap()), _ => panic!("Not IPv6"), } // IPv6 test case from https://stackoverflow.com/questions/41940483/parse-ipv6-addresses-from-proc-net-tcp6-python-2-7/41948004#41948004 let addr = parse_addressport_str("B80D01200000000067452301EFCDAB89:0", true).unwrap(); assert_eq!(addr.port(), 0); match addr.ip() { IpAddr::V6(addr) => assert_eq!(addr, Ipv6Addr::from_str("2001:db8::123:4567:89ab:cdef").unwrap()), _ => panic!("Not IPv6"), } let addr = parse_addressport_str("1234:1234", true); assert!(addr.is_err()); } #[test] fn test_tcpstate_from() { assert_eq!(TcpState::from_u8(0xA).unwrap(), TcpState::Listen); } #[test] fn test_snmp_debian_6_8_12() { // Sample from Debian 6.8.12-1 let data = r#"Ip: Forwarding DefaultTTL InReceives InHdrErrors InAddrErrors ForwDatagrams InUnknownProtos InDiscards InDelivers OutRequests OutDiscards OutNoRoutes ReasmTimeout ReasmReqds ReasmOKs ReasmFails FragOKs FragFails FragCreates OutTransmits Ip: 1 64 58881328 0 1 0 0 0 58879082 12449667 9745 1855 0 4 2 0 0 0 0 12449667 Icmp: InMsgs InErrors InCsumErrors InDestUnreachs InTimeExcds InParmProbs InSrcQuenchs InRedirects InEchos InEchoReps InTimestamps InTimestampReps InAddrMasks InAddrMaskReps OutMsgs OutErrors OutRateLimitGlobal OutRateLimitHost OutDestUnreachs OutTimeExcds OutParmProbs OutSrcQuenchs OutRedirects OutEchos OutEchoReps 
OutTimestamps OutTimestampReps OutAddrMasks OutAddrMaskReps Icmp: 16667 83 0 16667 0 0 0 0 0 0 0 0 0 0 21854 0 2 81 21854 0 0 0 0 0 0 0 0 0 0 IcmpMsg: InType3 OutType3 IcmpMsg: 16667 21854 Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens AttemptFails EstabResets CurrEstab InSegs OutSegs RetransSegs InErrs OutRsts InCsumErrors Tcp: 1 200 120000 -1 88170 33742 29003 4952 9 5129401 4676076 3246 60 40857 0 Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti MemErrors Udp: 48327329 21522 6981741 9605045 6981727 9497 14 478236 0 UdpLite: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti MemErrors UdpLite: 0 0 0 0 0 0 0 0 0 "#; let r = std::io::Cursor::new(data.as_bytes()); use crate::FromRead; let res = Snmp::from_read(r).unwrap(); assert_eq!(res.ip_forwarding, IpForwarding::Forwarding); assert_eq!(res.ip_in_receives, 58881328); assert_eq!(res.ip_in_delivers, 58879082); assert_eq!(res.ip_out_requests, 12449667); assert_eq!(res.ip_out_no_routes, 1855); assert_eq!(res.tcp_rto_algorithm, TcpRtoAlgorithm::Other); assert_eq!(res.tcp_rto_min, 200); assert_eq!(res.tcp_rto_max, 120000); assert_eq!(res.tcp_max_conn, -1); assert_eq!(res.tcp_curr_estab, 9); assert_eq!(res.tcp_in_segs, 5129401); assert_eq!(res.tcp_out_segs, 4676076); assert_eq!(res.udp_in_datagrams, 48327329); assert_eq!(res.udp_in_csum_errors, 14); assert_eq!(res.udp_no_ports, 21522); assert_eq!(res.udp_out_datagrams, 9605045); println!("{res:?}"); } #[test] fn test_snmp_missing_icmp_msg() { // https://github.com/eminence/procfs/issues/310 let data = r#"Ip: Forwarding DefaultTTL InReceives InHdrErrors InAddrErrors ForwDatagrams InUnknownProtos InDiscards InDelivers OutRequests OutDiscards OutNoRoutes ReasmTimeout ReasmReqds ReasmOKs ReasmFails FragOKs FragFails FragCreates OutTransmits Ip: 2 64 12063 0 1 0 0 0 11952 8953 0 0 0 0 0 0 0 0 0 8953 Icmp: InMsgs InErrors InCsumErrors InDestUnreachs InTimeExcds 
InParmProbs InSrcQuenchs InRedirects InEchos InEchoReps InTimestamps InTimestampReps InAddrMasks InAddrMaskReps OutMsgs OutErrors OutRateLimitGlobal OutRateLimitHost OutDestUnreachs OutTimeExcds OutParmProbs OutSrcQuenchs OutRedirects OutEchos OutEchoReps OutTimestamps OutTimestampReps OutAddrMasks OutAddrMaskReps Icmp: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens AttemptFails EstabResets CurrEstab InSegs OutSegs RetransSegs InErrs OutRsts InCsumErrors Tcp: 1 200 120000 -1 177 14 0 6 4 11155 10083 18 0 94 0 Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti MemErrors Udp: 2772 0 0 1890 0 0 0 745 0 UdpLite: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti MemErrors UdpLite: 0 0 0 0 0 0 0 0 0 "#; let r = std::io::Cursor::new(data.as_bytes()); use crate::FromRead; let res = Snmp::from_read(r).unwrap(); println!("{res:?}"); } } procfs-core-0.17.0/src/partitions.rs000064400000000000000000000044031046102023000154650ustar 00000000000000use std::io::BufRead; use super::ProcResult; use std::str::FromStr; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// A partition entry under `/proc/partitions` #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[allow(non_snake_case)] pub struct PartitionEntry { /// Device major number pub major: u16, /// Device minor number pub minor: u16, /// Number of 1024 byte blocks pub blocks: u64, /// Device name pub name: String, } impl super::FromBufRead for Vec { fn from_buf_read(r: R) -> ProcResult { let mut vec = Vec::new(); for line in r.lines().skip(2) { let line = expect!(line); let mut s = line.split_whitespace(); let major = expect!(u16::from_str(expect!(s.next()))); let minor = expect!(u16::from_str(expect!(s.next()))); let blocks = expect!(u64::from_str(expect!(s.next()))); let name = expect!(s.next()).to_string(); 
let partition_entry = PartitionEntry { major, minor, blocks, name, }; vec.push(partition_entry); } Ok(vec) } } #[test] fn test_partitions() { use crate::FromBufRead; use std::io::Cursor; let s = "major minor #blocks name 259 0 1000204632 nvme0n1 259 1 1048576 nvme0n1p1 259 2 1048576 nvme0n1p2 259 3 104857600 nvme0n1p3 259 4 893248512 nvme0n1p4 253 0 104841216 dm-0 252 0 8388608 zram0 253 1 893232128 dm-1 8 0 3953664 sda 8 1 2097152 sda1 8 2 1855488 sda2 253 2 1853440 dm-2 "; let cursor = Cursor::new(s); let partitions = Vec::<PartitionEntry>::from_buf_read(cursor).unwrap(); assert_eq!(partitions.len(), 12); assert_eq!(partitions[3].major, 259); assert_eq!(partitions[3].minor, 3); assert_eq!(partitions[3].blocks, 104857600); assert_eq!(partitions[3].name, "nvme0n1p3"); assert_eq!(partitions[11].major, 253); assert_eq!(partitions[11].minor, 2); assert_eq!(partitions[11].blocks, 1853440); assert_eq!(partitions[11].name, "dm-2"); } procfs-core-0.17.0/src/pressure.rs000064400000000000000000000121351046102023000151420ustar 00000000000000//! Pressure stall information retrieved from `/proc/pressure/cpu`, //! `/proc/pressure/memory` and `/proc/pressure/io`. //! These files may not be available on kernels older than 4.20.0 //! For reference: //! //! See also: use crate::{ProcError, ProcResult}; use std::collections::HashMap; use std::io::BufRead; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// Pressure stall information for either CPU, memory, or IO. /// /// See also: #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct PressureRecord { /// 10 second window /// /// The percentage of time, over a 10 second window, that either some or all tasks were stalled /// waiting for a resource. pub avg10: f32, /// 60 second window /// /// The percentage of time, over a 60 second window, that either some or all tasks were stalled /// waiting for a resource.
pub avg60: f32, /// 300 second window /// /// The percentage of time, over a 300 second window, that either some or all tasks were stalled /// waiting for a resource. pub avg300: f32, /// Total stall time (in microseconds). pub total: u64, } /// CPU pressure information #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct CpuPressure { pub some: PressureRecord, } impl super::FromBufRead for CpuPressure { fn from_buf_read<R: BufRead>(mut r: R) -> ProcResult<Self> { let mut some = String::new(); r.read_line(&mut some)?; Ok(CpuPressure { some: parse_pressure_record(&some)?, }) } } /// Memory pressure information #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct MemoryPressure { /// This record indicates the share of time in which at least some tasks are stalled pub some: PressureRecord, /// This record indicates the share of time in which all non-idle tasks are stalled /// simultaneously. pub full: PressureRecord, } impl super::FromBufRead for MemoryPressure { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { let (some, full) = get_pressure(r)?; Ok(MemoryPressure { some, full }) } } /// IO pressure information #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct IoPressure { /// This record indicates the share of time in which at least some tasks are stalled pub some: PressureRecord, /// This record indicates the share of time in which all non-idle tasks are stalled /// simultaneously.
pub full: PressureRecord, } impl super::FromBufRead for IoPressure { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { let (some, full) = get_pressure(r)?; Ok(IoPressure { some, full }) } } fn get_f32(map: &HashMap<&str, &str>, value: &str) -> ProcResult<f32> { map.get(value).map_or_else( || Err(ProcError::Incomplete(None)), |v| v.parse::<f32>().map_err(|_| ProcError::Incomplete(None)), ) } fn get_total(map: &HashMap<&str, &str>) -> ProcResult<u64> { map.get("total").map_or_else( || Err(ProcError::Incomplete(None)), |v| v.parse::<u64>().map_err(|_| ProcError::Incomplete(None)), ) } fn parse_pressure_record(line: &str) -> ProcResult<PressureRecord> { let mut parsed = HashMap::new(); if !line.starts_with("some") && !line.starts_with("full") { return Err(ProcError::Incomplete(None)); } let values = &line[5..]; for kv_str in values.split_whitespace() { let kv_split = kv_str.split('='); let vec: Vec<&str> = kv_split.collect(); if vec.len() == 2 { parsed.insert(vec[0], vec[1]); } } Ok(PressureRecord { avg10: get_f32(&parsed, "avg10")?, avg60: get_f32(&parsed, "avg60")?, avg300: get_f32(&parsed, "avg300")?, total: get_total(&parsed)?, }) } fn get_pressure<R: BufRead>(mut r: R) -> ProcResult<(PressureRecord, PressureRecord)> { let mut some = String::new(); r.read_line(&mut some)?; let mut full = String::new(); r.read_line(&mut full)?; Ok((parse_pressure_record(&some)?, parse_pressure_record(&full)?)) } #[cfg(test)] mod test { use super::*; use std::f32::EPSILON; #[test] fn test_parse_pressure_record() { let record = parse_pressure_record("full avg10=2.10 avg60=0.12 avg300=0.00 total=391926").unwrap(); assert!(record.avg10 - 2.10 < EPSILON); assert!(record.avg60 - 0.12 < EPSILON); assert!(record.avg300 - 0.00 < EPSILON); assert_eq!(record.total, 391_926); } #[test] fn test_parse_pressure_record_errs() { assert!(parse_pressure_record("avg10=2.10 avg60=0.12 avg300=0.00 total=391926").is_err()); assert!(parse_pressure_record("some avg10=2.10 avg300=0.00 total=391926").is_err()); assert!(parse_pressure_record("some avg10=2.10
avg60=0.00 avg300=0.00").is_err()); } } procfs-core-0.17.0/src/process/clear_refs.rs000064400000000000000000000062761046102023000170660ustar 00000000000000use std::fmt; /// Clearing the PG_Referenced and ACCESSED/YOUNG bits /// provides a method to measure approximately how much memory /// a process is using. One first inspects the values in the /// "Referenced" fields for the VMAs shown in /// `/proc/[pid]/smaps` to get an idea of the memory footprint /// of the process. One then clears the PG_Referenced and /// ACCESSED/YOUNG bits and, after some measured time /// interval, once again inspects the values in the /// "Referenced" fields to get an idea of the change in memory /// footprint of the process during the measured interval. If /// one is interested only in inspecting the selected mapping /// types, then the value 2 or 3 can be used instead of 1. /// /// The `/proc/[pid]/clear_refs` file is present only if the /// CONFIG_PROC_PAGE_MONITOR kernel configuration option is /// enabled. /// /// Only writable by the owner of the process /// /// See `procfs::Process::clear_refs()` and `procfs::Process::pagemap()` #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)] pub enum ClearRefs { /// (since Linux 2.6.22) /// /// Reset the PG_Referenced and ACCESSED/YOUNG bits for /// all the pages associated with the process. (Before /// kernel 2.6.32, writing any nonzero value to this /// file had this effect.) PGReferencedAll = 1, /// (since Linux 2.6.32) /// /// Reset the PG_Referenced and ACCESSED/YOUNG bits for /// all anonymous pages associated with the process. PGReferencedAnonymous = 2, /// (since Linux 2.6.32) /// /// Reset the PG_Referenced and ACCESSED/YOUNG bits for /// all file-mapped pages associated with the process. PGReferencedFile = 3, /// (since Linux 3.11) /// /// Clear the soft-dirty bit for all the pages /// associated with the process. 
This is used (in /// conjunction with `/proc/[pid]/pagemap`) by the checkpoint /// restore system to discover which pages of a /// process have been dirtied since the file /// `/proc/[pid]/clear_refs` was written to. SoftDirty = 4, /// (since Linux 4.0) /// /// Reset the peak resident set size ("high water /// mark") to the process's current resident set size /// value. PeakRSS = 5, } impl fmt::Display for ClearRefs { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!( f, "{}", match self { ClearRefs::PGReferencedAll => 1, ClearRefs::PGReferencedAnonymous => 2, ClearRefs::PGReferencedFile => 3, ClearRefs::SoftDirty => 4, ClearRefs::PeakRSS => 5, } ) } } impl std::str::FromStr for ClearRefs { type Err = &'static str; fn from_str(s: &str) -> Result<Self, Self::Err> { s.parse() .map_err(|_| "Fail to parse clear refs value") .and_then(|n| match n { 1 => Ok(ClearRefs::PGReferencedAll), 2 => Ok(ClearRefs::PGReferencedAnonymous), 3 => Ok(ClearRefs::PGReferencedFile), 4 => Ok(ClearRefs::SoftDirty), 5 => Ok(ClearRefs::PeakRSS), _ => Err("Unknown clear refs value"), }) } } procfs-core-0.17.0/src/process/limit.rs000064400000000000000000000157351046102023000160730ustar 00000000000000use crate::{ProcError, ProcResult}; use std::collections::HashMap; use std::io::BufRead; use std::str::FromStr; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// Process limits /// /// For more details about each of these limits, see the `getrlimit` man page. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Limits { /// Max Cpu Time /// /// This is a limit, in seconds, on the amount of CPU time that the process can consume. pub max_cpu_time: Limit, /// Max file size /// /// This is the maximum size in bytes of files that the process may create. pub max_file_size: Limit, /// Max data size /// /// This is the maximum size of the process's data segment (initialized data, uninitialized /// data, and heap).
pub max_data_size: Limit, /// Max stack size /// /// This is the maximum size of the process stack, in bytes. pub max_stack_size: Limit, /// Max core file size /// /// This is the maximum size of a *core* file in bytes that the process may dump. pub max_core_file_size: Limit, /// Max resident set /// /// This is a limit (in bytes) on the process's resident set (the number of virtual pages /// resident in RAM). pub max_resident_set: Limit, /// Max processes /// /// This is a limit on the number of extant processes (or, more precisely on Linux, threads) for /// the real user ID of the calling process. pub max_processes: Limit, /// Max open files /// /// This specifies a value one greater than the maximum file descriptor number that can be /// opened by this process. pub max_open_files: Limit, /// Max locked memory /// /// This is the maximum number of bytes of memory that may be locked into RAM. pub max_locked_memory: Limit, /// Max address space /// /// This is the maximum size of the process's virtual memory (address space). pub max_address_space: Limit, /// Max file locks /// /// This is a limit on the combined number of flock locks and fcntl leases that this process /// may establish. pub max_file_locks: Limit, /// Max pending signals /// /// This is a limit on the number of signals that may be queued for the real user ID of the /// calling process. pub max_pending_signals: Limit, /// Max msgqueue size /// /// This is a limit on the number of bytes that can be allocated for POSIX message queues for /// the real user ID of the calling process. pub max_msgqueue_size: Limit, /// Max nice priority /// /// This specifies a ceiling to which the process's nice value can be raised using /// `setpriority` or `nice`. pub max_nice_priority: Limit, /// Max realtime priority /// /// This specifies a ceiling on the real-time priority that may be set for this process using /// `sched_setscheduler` and `sched_setparam`.
pub max_realtime_priority: Limit, /// Max realtime timeout /// /// This is a limit (in microseconds) on the amount of CPU time that a process scheduled under /// a real-time scheduling policy may consume without making a blocking system call. pub max_realtime_timeout: Limit, } impl crate::FromBufRead for Limits { fn from_buf_read(r: R) -> ProcResult { let mut lines = r.lines(); let mut map = HashMap::new(); while let Some(Ok(line)) = lines.next() { let line = line.trim(); if line.starts_with("Limit") { continue; } let s: Vec<_> = line.split_whitespace().collect(); let l = s.len(); let (hard_limit, soft_limit, name) = if line.starts_with("Max nice priority") || line.starts_with("Max realtime priority") { // these two limits don't have units, and so need different offsets: let hard_limit = expect!(s.get(l - 1)).to_owned(); let soft_limit = expect!(s.get(l - 2)).to_owned(); let name = s[0..l - 2].join(" "); (hard_limit, soft_limit, name) } else { let hard_limit = expect!(s.get(l - 2)).to_owned(); let soft_limit = expect!(s.get(l - 3)).to_owned(); let name = s[0..l - 3].join(" "); (hard_limit, soft_limit, name) }; let _units = expect!(s.get(l - 1)); map.insert(name.to_owned(), (soft_limit.to_owned(), hard_limit.to_owned())); } let limits = Limits { max_cpu_time: Limit::from_pair(expect!(map.remove("Max cpu time")))?, max_file_size: Limit::from_pair(expect!(map.remove("Max file size")))?, max_data_size: Limit::from_pair(expect!(map.remove("Max data size")))?, max_stack_size: Limit::from_pair(expect!(map.remove("Max stack size")))?, max_core_file_size: Limit::from_pair(expect!(map.remove("Max core file size")))?, max_resident_set: Limit::from_pair(expect!(map.remove("Max resident set")))?, max_processes: Limit::from_pair(expect!(map.remove("Max processes")))?, max_open_files: Limit::from_pair(expect!(map.remove("Max open files")))?, max_locked_memory: Limit::from_pair(expect!(map.remove("Max locked memory")))?, max_address_space: Limit::from_pair(expect!(map.remove("Max 
address space")))?, max_file_locks: Limit::from_pair(expect!(map.remove("Max file locks")))?, max_pending_signals: Limit::from_pair(expect!(map.remove("Max pending signals")))?, max_msgqueue_size: Limit::from_pair(expect!(map.remove("Max msgqueue size")))?, max_nice_priority: Limit::from_pair(expect!(map.remove("Max nice priority")))?, max_realtime_priority: Limit::from_pair(expect!(map.remove("Max realtime priority")))?, max_realtime_timeout: Limit::from_pair(expect!(map.remove("Max realtime timeout")))?, }; if cfg!(test) { assert!(map.is_empty(), "Map isn't empty: {:?}", map); } Ok(limits) } } #[derive(Debug, Copy, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Limit { pub soft_limit: LimitValue, pub hard_limit: LimitValue, } impl Limit { fn from_pair(l: (String, String)) -> ProcResult<Self> { let (soft, hard) = l; Ok(Limit { soft_limit: LimitValue::from_str(&soft)?, hard_limit: LimitValue::from_str(&hard)?, }) } } #[derive(Debug, Copy, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum LimitValue { Unlimited, Value(u64), } impl FromStr for LimitValue { type Err = ProcError; fn from_str(s: &str) -> Result<Self, Self::Err> { if s == "unlimited" { Ok(LimitValue::Unlimited) } else { Ok(LimitValue::Value(from_str!(u64, s))) } } } procfs-core-0.17.0/src/process/mod.rs000064400000000000000000000706741046102023000155420ustar 00000000000000//! Functions and structs related to process information //! //! The primary source of data for functions in this module is the files in a `/proc/<pid>/` //! directory.
use super::*; use crate::from_iter; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::io::Read; use std::path::PathBuf; use std::str::FromStr; mod limit; pub use limit::*; mod stat; pub use stat::*; mod mount; pub use mount::*; mod namespaces; pub use namespaces::*; mod status; pub use status::*; mod schedstat; pub use schedstat::*; mod smaps_rollup; pub use smaps_rollup::*; mod pagemap; pub use pagemap::*; mod clear_refs; pub use clear_refs::*; bitflags! { /// Kernel flags for a process /// /// See also the [Stat::flags()] method. #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct StatFlags: u32 { /// I am an IDLE thread const PF_IDLE = 0x0000_0002; /// Getting shut down const PF_EXITING = 0x0000_0004; /// PI exit done on shut down const PF_EXITPIDONE = 0x0000_0008; /// I'm a virtual CPU const PF_VCPU = 0x0000_0010; /// I'm a workqueue worker const PF_WQ_WORKER = 0x0000_0020; /// Forked but didn't exec const PF_FORKNOEXEC = 0x0000_0040; /// Process policy on mce errors; const PF_MCE_PROCESS = 0x0000_0080; /// Used super-user privileges const PF_SUPERPRIV = 0x0000_0100; /// Dumped core const PF_DUMPCORE = 0x0000_0200; /// Killed by a signal const PF_SIGNALED = 0x0000_0400; ///Allocating memory const PF_MEMALLOC = 0x0000_0800; /// set_user() noticed that RLIMIT_NPROC was exceeded const PF_NPROC_EXCEEDED = 0x0000_1000; /// If unset the fpu must be initialized before use const PF_USED_MATH = 0x0000_2000; /// Used async_schedule*(), used by module init const PF_USED_ASYNC = 0x0000_4000; /// This thread should not be frozen const PF_NOFREEZE = 0x0000_8000; /// Frozen for system suspend const PF_FROZEN = 0x0001_0000; /// I am kswapd const PF_KSWAPD = 0x0002_0000; /// All allocation requests will inherit GFP_NOFS const PF_MEMALLOC_NOFS = 0x0004_0000; /// All allocation requests will inherit GFP_NOIO const PF_MEMALLOC_NOIO = 0x0008_0000; /// Throttle me less: 
I clean memory const PF_LESS_THROTTLE = 0x0010_0000; /// I am a kernel thread const PF_KTHREAD = 0x0020_0000; /// Randomize virtual address space const PF_RANDOMIZE = 0x0040_0000; /// Allowed to write to swap const PF_SWAPWRITE = 0x0080_0000; /// Stalled due to lack of memory const PF_MEMSTALL = 0x0100_0000; /// I'm an Usermodehelper process const PF_UMH = 0x0200_0000; /// Userland is not allowed to meddle with cpus_allowed const PF_NO_SETAFFINITY = 0x0400_0000; /// Early kill for mce process policy const PF_MCE_EARLY = 0x0800_0000; /// All allocation request will have _GFP_MOVABLE cleared const PF_MEMALLOC_NOCMA = 0x1000_0000; /// Thread belongs to the rt mutex tester const PF_MUTEX_TESTER = 0x2000_0000; /// Freezer should not count it as freezable const PF_FREEZER_SKIP = 0x4000_0000; /// This thread called freeze_processes() and should not be frozen const PF_SUSPEND_TASK = 0x8000_0000; } } bitflags! { /// See the [coredump_filter()](struct.Process.html#method.coredump_filter) method. #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct CoredumpFlags: u32 { const ANONYMOUS_PRIVATE_MAPPINGS = 0x01; const ANONYMOUS_SHARED_MAPPINGS = 0x02; const FILEBACKED_PRIVATE_MAPPINGS = 0x04; const FILEBACKED_SHARED_MAPPINGS = 0x08; const ELF_HEADERS = 0x10; const PROVATE_HUGEPAGES = 0x20; const SHARED_HUGEPAGES = 0x40; const PRIVATE_DAX_PAGES = 0x80; const SHARED_DAX_PAGES = 0x100; } } bitflags! { /// The permissions a process has on memory map entries. /// /// Note that the `SHARED` and `PRIVATE` are mutually exclusive, so while you can /// use `MMPermissions::all()` to construct an instance that has all bits set, /// this particular value would never been seen in procfs. 
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord, Default)] pub struct MMPermissions: u8 { /// No permissions const NONE = 0; /// Read permission const READ = 1 << 0; /// Write permission const WRITE = 1 << 1; /// Execute permission const EXECUTE = 1 << 2; /// Memory is shared with another process. /// /// Mutually exclusive with PRIVATE. const SHARED = 1 << 3; /// Memory is private (and copy-on-write) /// /// Mutually exclusive with SHARED. const PRIVATE = 1 << 4; } } impl MMPermissions { fn from_ascii_char(b: u8) -> Self { match b { b'r' => Self::READ, b'w' => Self::WRITE, b'x' => Self::EXECUTE, b's' => Self::SHARED, b'p' => Self::PRIVATE, _ => Self::NONE, } } /// Returns this permission map as a 4-character string, similar to what you /// might see in `/proc/\/maps`. /// /// Note that the SHARED and PRIVATE bits are mutually exclusive, so this /// string is 4 characters long, not 5. pub fn as_str(&self) -> String { let mut s = String::with_capacity(4); s.push(if self.contains(Self::READ) { 'r' } else { '-' }); s.push(if self.contains(Self::WRITE) { 'w' } else { '-' }); s.push(if self.contains(Self::EXECUTE) { 'x' } else { '-' }); s.push(if self.contains(Self::SHARED) { 's' } else if self.contains(Self::PRIVATE) { 'p' } else { '-' }); s } } impl FromStr for MMPermissions { type Err = std::convert::Infallible; fn from_str(s: &str) -> Result { // Only operate on ASCII (byte) values Ok(s.bytes() .map(Self::from_ascii_char) .fold(Self::default(), std::ops::BitOr::bitor)) } } bitflags! { /// Represents the kernel flags associated with the virtual memory area. /// The names of these flags are just those you'll find in the man page, but in upper case. 
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord, Default)] pub struct VmFlags: u32 { /// No flags const NONE = 0; /// Readable const RD = 1 << 0; /// Writable const WR = 1 << 1; /// Executable const EX = 1 << 2; /// Shared const SH = 1 << 3; /// May read const MR = 1 << 4; /// May write const MW = 1 << 5; /// May execute const ME = 1 << 6; /// May share const MS = 1 << 7; /// Stack segment grows down const GD = 1 << 8; /// Pure PFN range const PF = 1 << 9; /// Disable write to the mapped file const DW = 1 << 10; /// Pages are locked in memory const LO = 1 << 11; /// Memory mapped I/O area const IO = 1 << 12; /// Sequential read advise provided const SR = 1 << 13; /// Random read provided const RR = 1 << 14; /// Do not copy area on fork const DC = 1 << 15; /// Do not expand area on remapping const DE = 1 << 16; /// Area is accountable const AC = 1 << 17; /// Swap space is not reserved for the area const NR = 1 << 18; /// Area uses huge TLB pages const HT = 1 << 19; /// Perform synchronous page faults (since Linux 4.15) const SF = 1 << 20; /// Non-linear mapping (removed in Linux 4.0) const NL = 1 << 21; /// Architecture specific flag const AR = 1 << 22; /// Wipe on fork (since Linux 4.14) const WF = 1 << 23; /// Do not include area into core dump const DD = 1 << 24; /// Soft-dirty flag (since Linux 3.13) const SD = 1 << 25; /// Mixed map area const MM = 1 << 26; /// Huge page advise flag const HG = 1 << 27; /// No-huge page advise flag const NH = 1 << 28; /// Mergeable advise flag const MG = 1 << 29; /// Userfaultfd missing pages tracking (since Linux 4.3) const UM = 1 << 30; /// Userfaultfd wprotect pages tracking (since Linux 4.3) const UW = 1 << 31; } } impl VmFlags { fn from_str(flag: &str) -> Self { if flag.len() != 2 { return VmFlags::NONE; } match flag { "rd" => VmFlags::RD, "wr" => VmFlags::WR, "ex" => VmFlags::EX, "sh" => VmFlags::SH, "mr" => VmFlags::MR, "mw" => VmFlags::MW, 
"me" => VmFlags::ME, "ms" => VmFlags::MS, "gd" => VmFlags::GD, "pf" => VmFlags::PF, "dw" => VmFlags::DW, "lo" => VmFlags::LO, "io" => VmFlags::IO, "sr" => VmFlags::SR, "rr" => VmFlags::RR, "dc" => VmFlags::DC, "de" => VmFlags::DE, "ac" => VmFlags::AC, "nr" => VmFlags::NR, "ht" => VmFlags::HT, "sf" => VmFlags::SF, "nl" => VmFlags::NL, "ar" => VmFlags::AR, "wf" => VmFlags::WF, "dd" => VmFlags::DD, "sd" => VmFlags::SD, "mm" => VmFlags::MM, "hg" => VmFlags::HG, "nh" => VmFlags::NH, "mg" => VmFlags::MG, "um" => VmFlags::UM, "uw" => VmFlags::UW, _ => VmFlags::NONE, } } } /// Represents the state of a process. #[derive(Debug, Clone, Copy, Eq, PartialEq)] pub enum ProcState { /// Running (R) Running, /// Sleeping in an interruptible wait (S) Sleeping, /// Waiting in uninterruptible disk sleep (D) Waiting, /// Zombie (Z) Zombie, /// Stopped (on a signal) (T) /// /// Or before Linux 2.6.33, trace stopped Stopped, /// Tracing stop (t) (Linux 2.6.33 onward) Tracing, /// Dead (X) Dead, /// Wakekill (K) (Linux 2.6.33 to 3.13 only) Wakekill, /// Waking (W) (Linux 2.6.33 to 3.13 only) Waking, /// Parked (P) (Linux 3.9 to 3.13 only) Parked, /// Idle (I) Idle, } impl ProcState { pub fn from_char(c: char) -> Option<ProcState> { match c { 'R' => Some(ProcState::Running), 'S' => Some(ProcState::Sleeping), 'D' => Some(ProcState::Waiting), 'Z' => Some(ProcState::Zombie), 'T' => Some(ProcState::Stopped), 't' => Some(ProcState::Tracing), 'X' | 'x' => Some(ProcState::Dead), 'K' => Some(ProcState::Wakekill), 'W' => Some(ProcState::Waking), 'P' => Some(ProcState::Parked), 'I' => Some(ProcState::Idle), _ => None, } } } impl FromStr for ProcState { type Err = ProcError; fn from_str(s: &str) -> Result<Self, Self::Err> { ProcState::from_char(expect!(s.chars().next(), "empty string")) .ok_or_else(|| build_internal_error!("failed to convert")) } } /// This struct contains I/O statistics for the process, built from `/proc/<pid>/io` /// /// # Note /// /// In the current implementation, things are a bit racy on 32-bit systems: if
process A /// reads process B's `/proc/<pid>/io` while process B is updating one of these 64-bit /// counters, process A could see an intermediate result. #[derive(Debug, Copy, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Io { /// Characters read /// /// The number of bytes which this task has caused to be read from storage. This is simply the /// sum of bytes which this process passed to read(2) and similar system calls. It includes /// things such as terminal I/O and is unaffected by whether or not actual physical disk I/O /// was required (the read might have been satisfied from pagecache). pub rchar: u64, /// Characters written /// /// The number of bytes which this task has caused, or shall cause to be written to disk. /// Similar caveats apply here as with rchar. pub wchar: u64, /// Read syscalls /// /// Attempt to count the number of read I/O operations—that is, system calls such as read(2) /// and pread(2). pub syscr: u64, /// Write syscalls /// /// Attempt to count the number of write I/O operations—that is, system calls such as write(2) /// and pwrite(2). pub syscw: u64, /// Bytes read /// /// Attempt to count the number of bytes which this process really did cause to be fetched from /// the storage layer. This is accurate for block-backed filesystems. pub read_bytes: u64, /// Bytes written /// /// Attempt to count the number of bytes which this process caused to be sent to the storage layer. pub write_bytes: u64, /// Cancelled write bytes. /// /// The big inaccuracy here is truncate. If a process writes 1MB to a file and then deletes /// the file, it will in fact perform no writeout. But it will have been accounted as having /// caused 1MB of write. In other words: this field represents the number of bytes which this /// process caused to not happen, by truncating pagecache. A task can cause "negative" I/O too.
/// If this task truncates some dirty pagecache, some I/O which another task has been accounted /// for (in its write_bytes) will not be happening. pub cancelled_write_bytes: u64, } #[derive(Debug, PartialEq, Eq, Clone, Hash)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum MMapPath { /// The file that is backing the mapping. Path(PathBuf), /// The process's heap. Heap, /// The initial process's (also known as the main thread's) stack. Stack, /// A thread's stack (where `<tid>` is a thread ID). It corresponds to the /// `/proc/<pid>/task/<tid>/` path. /// /// (since Linux 3.4) TStack(u32), /// The virtual dynamically linked shared object. Vdso, /// Shared kernel variables Vvar, /// Obsolete virtual syscalls, succeeded by vdso Vsyscall, /// Rollup memory mappings, from `/proc/<pid>/smaps_rollup` Rollup, /// An anonymous mapping as obtained via mmap(2). Anonymous, /// Shared memory segment. The i32 value corresponds to [Shm.key](Shm::key), while [MemoryMap.inode](MemoryMap::inode) corresponds to [Shm.shmid](Shm::shmid) Vsys(i32), /// Some other pseudo-path Other(String), } impl MMapPath { pub fn from(path: &str) -> ProcResult<MMapPath> { Ok(match path.trim() { "" => MMapPath::Anonymous, "[heap]" => MMapPath::Heap, "[stack]" => MMapPath::Stack, "[vdso]" => MMapPath::Vdso, "[vvar]" => MMapPath::Vvar, "[vsyscall]" => MMapPath::Vsyscall, "[rollup]" => MMapPath::Rollup, x if x.starts_with("[stack:") => { let mut s = x[1..x.len() - 1].split(':'); let tid = from_str!(u32, expect!(s.nth(1))); MMapPath::TStack(tid) } x if x.starts_with('[') && x.ends_with(']') => MMapPath::Other(x[1..x.len() - 1].to_string()), x if x.starts_with("/SYSV") => MMapPath::Vsys(u32::from_str_radix(&x[5..13], 16)? as i32), // 32-bit signed hex. /SYSVaabbccdd (deleted) x => MMapPath::Path(PathBuf::from(x)), }) } } /// Represents all entries in a `/proc/<pid>/maps` or `/proc/<pid>/smaps` file.
#[derive(Debug, PartialEq, Eq, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[non_exhaustive] pub struct MemoryMaps(pub Vec<MemoryMap>); impl crate::FromBufRead for MemoryMaps { /// The data should be formatted according to procfs /proc/pid/{maps,smaps,smaps_rollup}. fn from_buf_read<R: BufRead>(reader: R) -> ProcResult<Self> { let mut memory_maps = Vec::new(); let mut line_iter = reader.lines().map(|r| r.map_err(|_| ProcError::Incomplete(None))); let mut current_memory_map: Option<MemoryMap> = None; while let Some(line) = line_iter.next().transpose()? { // Assumes all extension fields (in `/proc/<pid>/smaps`) start with a capital letter, // which seems to be the case. if line.starts_with(|c: char| c.is_ascii_uppercase()) { match current_memory_map.as_mut() { None => return Err(ProcError::Incomplete(None)), Some(mm) => { // This is probably an attribute if line.starts_with("VmFlags") { let flags = line.split_ascii_whitespace(); let flags = flags.skip(1); // Skips the `VmFlags:` part since we don't need it. let flags = flags .map(VmFlags::from_str) // FUTURE: use `Iterator::reduce` .fold(VmFlags::NONE, std::ops::BitOr::bitor); mm.extension.vm_flags = flags; } else { let mut parts = line.split_ascii_whitespace(); let key = parts.next(); let value = parts.next(); if let (Some(k), Some(v)) = (key, value) { // While most entries do have one, not all of them do. let size_suffix = parts.next(); // Limited poking at /proc/<pid>/smaps, and then checking if "MB", "GB", and "TB" appear in the C file that is // supposedly responsible for creating smaps, has led me to believe that the only size suffix we'll ever encounter is // "kB", which is most likely kibibytes. Actually checking if the size suffix is any of the above is a way to // future-proof the code, but I am not sure it is worth doing so.
let size_multiplier = if size_suffix.is_some() { 1024 } else { 1 }; let v = v.parse::<u64>().map_err(|_| { ProcError::Other("Value in `Key: Value` pair was not actually a number".into()) })?; // This ignores the case when our Key: Value pairs are really Key Value pairs. Is this a good idea? let k = k.trim_end_matches(':'); mm.extension.map.insert(k.into(), v * size_multiplier); } } } } } else { if let Some(mm) = current_memory_map.take() { memory_maps.push(mm); } current_memory_map = Some(MemoryMap::from_line(&line)?); } } if let Some(mm) = current_memory_map.take() { memory_maps.push(mm); } Ok(MemoryMaps(memory_maps)) } } impl MemoryMaps { /// Return an iterator over [MemoryMap]. pub fn iter(&self) -> std::slice::Iter<'_, MemoryMap> { self.0.iter() } pub fn len(&self) -> usize { self.0.len() } } impl<'a> IntoIterator for &'a MemoryMaps { type IntoIter = std::slice::Iter<'a, MemoryMap>; type Item = &'a MemoryMap; fn into_iter(self) -> Self::IntoIter { self.iter() } } impl IntoIterator for MemoryMaps { type IntoIter = std::vec::IntoIter<MemoryMap>; type Item = MemoryMap; fn into_iter(self) -> Self::IntoIter { self.0.into_iter() } } /// Represents an entry in a `/proc/<pid>/maps` or `/proc/<pid>/smaps` file. #[derive(Debug, PartialEq, Eq, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct MemoryMap { /// The address space in the process that the mapping occupies. pub address: (u64, u64), pub perms: MMPermissions, /// The offset into the file/whatever pub offset: u64, /// The device (major, minor) pub dev: (i32, i32), /// The inode on that device /// /// 0 indicates that no inode is associated with the memory region, as would be the case with /// BSS (uninitialized data). pub inode: u64, pub pathname: MMapPath, /// Memory mapping extension information, populated when parsing `/proc/<pid>/smaps`. /// /// The members will be `Default::default()` (empty/none) when the information isn't available.
pub extension: MMapExtension, } impl MemoryMap { fn from_line(line: &str) -> ProcResult<MemoryMap> { let mut s = line.splitn(6, ' '); let address = expect!(s.next()); let perms = expect!(s.next()); let offset = expect!(s.next()); let dev = expect!(s.next()); let inode = expect!(s.next()); let path = expect!(s.next()); Ok(MemoryMap { address: split_into_num(address, '-', 16)?, perms: perms.parse()?, offset: from_str!(u64, offset, 16), dev: split_into_num(dev, ':', 16)?, inode: from_str!(u64, inode), pathname: MMapPath::from(path)?, extension: Default::default(), }) } } /// Represents the information about a specific mapping as presented in /proc/\<pid\>/smaps #[derive(Default, Debug, PartialEq, Eq, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct MMapExtension { /// Key-value pairs that may represent statistics about memory usage, or other interesting things, /// such as "ProtectionKey" (if you're on x86 and that kernel config option was specified). /// /// Note that should a key-value pair represent a memory usage statistic, it will be in bytes. /// /// Check your manpage for more information pub map: HashMap<String, u64>, /// Kernel flags associated with the virtual memory area /// /// (since Linux 3.8) pub vm_flags: VmFlags, } impl MMapExtension { /// Return whether the extension information is empty.
pub fn is_empty(&self) -> bool { self.map.is_empty() && self.vm_flags == VmFlags::NONE } } impl crate::FromBufRead for Io { fn from_buf_read<R: BufRead>(reader: R) -> ProcResult<Self> { let mut map = HashMap::new(); for line in reader.lines() { let line = line?; if line.is_empty() || !line.contains(' ') { continue; } let mut s = line.split_whitespace(); let field = expect!(s.next()); let value = expect!(s.next()); let value = from_str!(u64, value); map.insert(field[..field.len() - 1].to_string(), value); } let io = Io { rchar: expect!(map.remove("rchar")), wchar: expect!(map.remove("wchar")), syscr: expect!(map.remove("syscr")), syscw: expect!(map.remove("syscw")), read_bytes: expect!(map.remove("read_bytes")), write_bytes: expect!(map.remove("write_bytes")), cancelled_write_bytes: expect!(map.remove("cancelled_write_bytes")), }; assert!(!cfg!(test) || map.is_empty(), "io map is not empty: {:#?}", map); Ok(io) } } /// Describes a file descriptor opened by a process. #[derive(Clone, Debug)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum FDTarget { /// A file or device Path(PathBuf), /// A socket type, with an inode Socket(u64), Net(u64), Pipe(u64), /// A file descriptor that has no corresponding inode. AnonInode(String), /// A memfd file descriptor with a name. MemFD(String), /// Some other file descriptor type, with an inode.
Other(String, u64), } impl FromStr for FDTarget { type Err = ProcError; fn from_str(s: &str) -> Result<Self, Self::Err> { // helper function that removes the first and last character fn strip_first_last(s: &str) -> ProcResult<&str> { if s.len() > 2 { let mut c = s.chars(); // remove the first and last characters let _ = c.next(); let _ = c.next_back(); Ok(c.as_str()) } else { Err(ProcError::Incomplete(None)) } } if !s.starts_with('/') && s.contains(':') { let mut s = s.split(':'); let fd_type = expect!(s.next()); match fd_type { "socket" => { let inode = expect!(s.next(), "socket inode"); let inode = expect!(u64::from_str_radix(strip_first_last(inode)?, 10)); Ok(FDTarget::Socket(inode)) } "net" => { let inode = expect!(s.next(), "net inode"); let inode = expect!(u64::from_str_radix(strip_first_last(inode)?, 10)); Ok(FDTarget::Net(inode)) } "pipe" => { let inode = expect!(s.next(), "pipe inode"); let inode = expect!(u64::from_str_radix(strip_first_last(inode)?, 10)); Ok(FDTarget::Pipe(inode)) } "anon_inode" => Ok(FDTarget::AnonInode(expect!(s.next(), "anon inode").to_string())), "" => Err(ProcError::Incomplete(None)), x => { let inode = expect!(s.next(), "other inode"); let inode = expect!(u64::from_str_radix(strip_first_last(inode)?, 10)); Ok(FDTarget::Other(x.to_string(), inode)) } } } else if let Some(s) = s.strip_prefix("/memfd:") { Ok(FDTarget::MemFD(s.to_string())) } else { Ok(FDTarget::Path(PathBuf::from(s))) } } } /// Provides information about memory usage, measured in pages.
#[derive(Debug, Clone, Copy)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct StatM { /// Total program size, measured in pages /// /// (same as VmSize in /proc/\<pid\>/status) pub size: u64, /// Resident set size, measured in pages /// /// (same as VmRSS in /proc/\<pid\>/status) pub resident: u64, /// number of resident shared pages (i.e., backed by a file) /// /// (same as RssFile+RssShmem in /proc/\<pid\>/status) pub shared: u64, /// Text (code) pub text: u64, /// library (unused since Linux 2.6; always 0) pub lib: u64, /// data + stack pub data: u64, /// dirty pages (unused since Linux 2.6; always 0) pub dt: u64, } impl crate::FromRead for StatM { fn from_read<R: std::io::Read>(mut r: R) -> ProcResult<Self> { let mut line = String::new(); r.read_to_string(&mut line)?; let mut s = line.split_whitespace(); let size = expect!(from_iter(&mut s)); let resident = expect!(from_iter(&mut s)); let shared = expect!(from_iter(&mut s)); let text = expect!(from_iter(&mut s)); let lib = expect!(from_iter(&mut s)); let data = expect!(from_iter(&mut s)); let dt = expect!(from_iter(&mut s)); if cfg!(test) { assert!(s.next().is_none()); } Ok(StatM { size, resident, shared, text, lib, data, dt, }) } } #[cfg(test)] mod tests { use super::*; #[test] fn parse_memory_map_permissions() { use MMPermissions as P; assert_eq!("rw-p".parse(), Ok(P::READ | P::WRITE | P::PRIVATE)); assert_eq!("r-xs".parse(), Ok(P::READ | P::EXECUTE | P::SHARED)); assert_eq!("----".parse(), Ok(P::NONE)); assert_eq!((P::READ | P::WRITE | P::PRIVATE).as_str(), "rw-p"); assert_eq!((P::READ | P::EXECUTE | P::SHARED).as_str(), "r-xs"); assert_eq!(P::NONE.as_str(), "----"); } } procfs-core-0.17.0/src/process/mount.rs000064400000000000000000000603731046102023000161170ustar 00000000000000use bitflags::bitflags; use crate::{from_iter, ProcResult}; use std::collections::HashMap; use std::io::{BufRead, Lines}; use std::path::PathBuf; use std::time::Duration; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; bitflags!
{ #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct NFSServerCaps: u32 { const NFS_CAP_READDIRPLUS = 1; const NFS_CAP_HARDLINKS = (1 << 1); const NFS_CAP_SYMLINKS = (1 << 2); const NFS_CAP_ACLS = (1 << 3); const NFS_CAP_ATOMIC_OPEN = (1 << 4); const NFS_CAP_LGOPEN = (1 << 5); const NFS_CAP_FILEID = (1 << 6); const NFS_CAP_MODE = (1 << 7); const NFS_CAP_NLINK = (1 << 8); const NFS_CAP_OWNER = (1 << 9); const NFS_CAP_OWNER_GROUP = (1 << 10); const NFS_CAP_ATIME = (1 << 11); const NFS_CAP_CTIME = (1 << 12); const NFS_CAP_MTIME = (1 << 13); const NFS_CAP_POSIX_LOCK = (1 << 14); const NFS_CAP_UIDGID_NOMAP = (1 << 15); const NFS_CAP_STATEID_NFSV41 = (1 << 16); const NFS_CAP_ATOMIC_OPEN_V1 = (1 << 17); const NFS_CAP_SECURITY_LABEL = (1 << 18); const NFS_CAP_SEEK = (1 << 19); const NFS_CAP_ALLOCATE = (1 << 20); const NFS_CAP_DEALLOCATE = (1 << 21); const NFS_CAP_LAYOUTSTATS = (1 << 22); const NFS_CAP_CLONE = (1 << 23); const NFS_CAP_COPY = (1 << 24); const NFS_CAP_OFFLOAD_CANCEL = (1 << 25); } } /// Information about all the mounts in a process's mount namespace. /// /// This data is taken from the `/proc/[pid]/mountinfo` file. pub struct MountInfos(pub Vec<MountInfo>); impl MountInfos { /// Returns a borrowed iterator.
pub fn iter(&self) -> std::slice::Iter<'_, MountInfo> { self.into_iter() } } impl crate::FromBufRead for MountInfos { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { let lines = r.lines(); let mut vec = Vec::new(); for line in lines { vec.push(MountInfo::from_line(&line?)?); } Ok(MountInfos(vec)) } } impl IntoIterator for MountInfos { type IntoIter = std::vec::IntoIter<MountInfo>; type Item = MountInfo; fn into_iter(self) -> Self::IntoIter { self.0.into_iter() } } impl<'a> IntoIterator for &'a MountInfos { type IntoIter = std::slice::Iter<'a, MountInfo>; type Item = &'a MountInfo; fn into_iter(self) -> Self::IntoIter { self.0.iter() } } /// Information about a specific mount in a process's mount namespace. /// /// This data is taken from the `/proc/[pid]/mountinfo` file. /// /// For an example, see the /// [mountinfo.rs](https://github.com/eminence/procfs/tree/master/procfs/examples) example in the /// source repo. #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct MountInfo { /// Mount ID. A unique ID for the mount (but may be reused after `unmount`) pub mnt_id: i32, /// Parent mount ID. The ID of the parent mount (or of self for the root of the mount /// namespace's mount tree). /// /// If the parent mount point lies outside the process's root directory, the ID shown here /// won't have a corresponding record in mountinfo whose mount ID matches this parent mount /// ID (because mount points that lie outside the process's root directory are not shown in /// mountinfo). As a special case of this point, the process's root mount point may have a /// parent mount (for the initramfs filesystem) that lies outside the process's root /// directory, and an entry for that mount point will not appear in mountinfo. pub pid: i32, /// The value of `st_dev` for files on this filesystem pub majmin: String, /// The pathname of the directory in the filesystem which forms the root of this mount.
pub root: String, /// The pathname of the mount point relative to the process's root directory. pub mount_point: PathBuf, /// Per-mount options pub mount_options: HashMap<String, Option<String>>, /// Optional fields pub opt_fields: Vec<MountOptFields>, /// Filesystem type pub fs_type: String, /// Mount source pub mount_source: Option<String>, /// Per-superblock options. pub super_options: HashMap<String, Option<String>>, } impl MountInfo { pub fn from_line(line: &str) -> ProcResult<MountInfo> { let mut split = line.split_whitespace(); let mnt_id = expect!(from_iter(&mut split)); let pid = expect!(from_iter(&mut split)); let majmin: String = expect!(from_iter(&mut split)); let root = expect!(from_iter(&mut split)); let mount_point = expect!(from_iter(&mut split)); let mount_options = { let mut map = HashMap::new(); let all_opts = expect!(split.next()); for opt in all_opts.split(',') { let mut s = opt.splitn(2, '='); let opt_name = expect!(s.next()); map.insert(opt_name.to_owned(), s.next().map(|s| s.to_owned())); } map }; let mut opt_fields = Vec::new(); loop { let f = expect!(split.next()); if f == "-" { break; } let mut s = f.split(':'); let opt = match expect!(s.next()) { "shared" => { let val = expect!(from_iter(&mut s)); MountOptFields::Shared(val) } "master" => { let val = expect!(from_iter(&mut s)); MountOptFields::Master(val) } "propagate_from" => { let val = expect!(from_iter(&mut s)); MountOptFields::PropagateFrom(val) } "unbindable" => MountOptFields::Unbindable, _ => continue, }; opt_fields.push(opt); } let fs_type: String = expect!(from_iter(&mut split)); let mount_source = match expect!(split.next()) { "none" => None, x => Some(x.to_owned()), }; let super_options = { let mut map = HashMap::new(); let all_opts = expect!(split.next()); for opt in all_opts.split(',') { let mut s = opt.splitn(2, '='); let opt_name = expect!(s.next()); map.insert(opt_name.to_owned(), s.next().map(|s| s.to_owned())); } map }; Ok(MountInfo { mnt_id, pid, majmin, root, mount_point, mount_options, opt_fields, fs_type, mount_source, super_options, }) } } ///
Optional fields used in [MountInfo] #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum MountOptFields { /// This mount point is shared in a peer group. Each peer group has a unique ID that is /// automatically generated by the kernel, and all mount points in the same peer group will /// show the same ID Shared(u32), /// This mount is a slave to the specified shared peer group. Master(u32), /// This mount is a slave and receives propagation from the shared peer group PropagateFrom(u32), /// This is an unbindable mount Unbindable, } /// A single entry in [MountStats]. #[derive(Debug, Clone)] #[cfg_attr(test, derive(PartialEq))] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct MountStat { /// The name of the mounted device pub device: Option<String>, /// The mountpoint within the filesystem tree pub mount_point: PathBuf, /// The filesystem type pub fs: String, /// If the mount is NFS, this will contain various NFS statistics pub statistics: Option<MountNFSStatistics>, } /// Mount information from `/proc/<pid>/mountstats`. #[derive(Debug, Clone)] #[cfg_attr(test, derive(PartialEq))] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct MountStats(pub Vec<MountStat>); impl crate::FromBufRead for MountStats { /// This should correspond to data in `/proc/<pid>/mountstats`. fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { let mut v = Vec::new(); let mut lines = r.lines(); while let Some(Ok(line)) = lines.next() { if line.starts_with("device ") { // line will be of the format: // device proc mounted on /proc with fstype proc let mut s = line.split_whitespace(); let device = Some(expect!(s.nth(1)).to_owned()); let mount_point = PathBuf::from(expect!(s.nth(2))); let fs = expect!(s.nth(2)).to_owned(); let statistics = match s.next() { Some(stats) if stats.starts_with("statvers=") => { Some(MountNFSStatistics::from_lines(&mut lines, &stats[9..])?)
} _ => None, }; v.push(MountStat { device, mount_point, fs, statistics, }); } } Ok(MountStats(v)) } } impl IntoIterator for MountStats { type IntoIter = std::vec::IntoIter<MountStat>; type Item = MountStat; fn into_iter(self) -> Self::IntoIter { self.0.into_iter() } } /// Only NFS mounts provide additional statistics in `MountStat` entries. // // Thank you to Chris Siebenmann for their helpful work in documenting these structures: // https://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsIndex #[derive(Debug, Clone)] #[cfg_attr(test, derive(PartialEq))] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct MountNFSStatistics { /// The version of the NFS statistics block. Either "1.0" or "1.1". pub version: String, /// The mount options. /// /// The meaning of these can be found in the manual pages for mount(5) and nfs(5) pub opts: Vec<String>, /// Duration the NFS mount has been in existence. pub age: Duration, // * fsc (?) // * impl_id (NFSv4): Option> /// NFS Capabilities. /// /// See `include/linux/nfs_fs_sb.h` /// /// Some known values: /// * caps: server capabilities. See [NFSServerCaps].
/// * wtmult: server disk block size /// * dtsize: readdir size /// * bsize: server block size pub caps: Vec<String>, // * nfsv4 (NFSv4): Option> pub sec: Vec<String>, pub events: NFSEventCounter, pub bytes: NFSByteCounter, // * RPC iostats version: // * xprt // * per-op statistics pub per_op_stats: NFSPerOpStats, } impl MountNFSStatistics { // Keep reading lines until we get to a blank line fn from_lines<B: BufRead>(r: &mut Lines<B>, statsver: &str) -> ProcResult<MountNFSStatistics> { let mut parsing_per_op = false; let mut opts: Option<Vec<String>> = None; let mut age = None; let mut caps = None; let mut sec = None; let mut bytes = None; let mut events = None; let mut per_op = HashMap::new(); while let Some(Ok(line)) = r.next() { let line = line.trim(); if line.trim() == "" { break; } if !parsing_per_op { if let Some(stripped) = line.strip_prefix("opts:") { opts = Some(stripped.trim().split(',').map(|s| s.to_string()).collect()); } else if let Some(stripped) = line.strip_prefix("age:") { age = Some(Duration::from_secs(from_str!(u64, stripped.trim()))); } else if let Some(stripped) = line.strip_prefix("caps:") { caps = Some(stripped.trim().split(',').map(|s| s.to_string()).collect()); } else if let Some(stripped) = line.strip_prefix("sec:") { sec = Some(stripped.trim().split(',').map(|s| s.to_string()).collect()); } else if let Some(stripped) = line.strip_prefix("bytes:") { bytes = Some(NFSByteCounter::from_str(stripped.trim())?); } else if let Some(stripped) = line.strip_prefix("events:") { events = Some(NFSEventCounter::from_str(stripped.trim())?); } if line == "per-op statistics" { parsing_per_op = true; } } else { let mut split = line.split(':'); let name = expect!(split.next()).to_string(); let stats = NFSOperationStat::from_str(expect!(split.next()))?; per_op.insert(name, stats); } } Ok(MountNFSStatistics { version: statsver.to_string(), opts: expect!(opts, "Failed to find opts field in nfs stats"), age: expect!(age, "Failed to find age field in nfs stats"), caps: expect!(caps, "Failed to find caps field in nfs stats"),
sec: expect!(sec, "Failed to find sec field in nfs stats"), events: expect!(events, "Failed to find events section in nfs stats"), bytes: expect!(bytes, "Failed to find bytes section in nfs stats"), per_op_stats: per_op, }) } /// Attempts to parse the caps= value from the [caps](struct.MountNFSStatistics.html#structfield.caps) field. pub fn server_caps(&self) -> ProcResult<Option<NFSServerCaps>> { for data in &self.caps { if let Some(stripped) = data.strip_prefix("caps=0x") { let val = from_str!(u32, stripped, 16); return Ok(NFSServerCaps::from_bits(val)); } } Ok(None) } } /// Represents NFS data from `/proc/<pid>/mountstats` under the section `events`. /// /// The underlying data structure in the kernel can be found under *fs/nfs/iostat.h* `nfs_iostat`. /// The fields are documented in the kernel source only under *include/linux/nfs_iostat.h* `enum /// nfs_stat_eventcounters`. #[derive(Debug, Copy, Clone)] #[cfg_attr(test, derive(PartialEq))] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct NFSEventCounter { pub inode_revalidate: u64, pub deny_try_revalidate: u64, pub data_invalidate: u64, pub attr_invalidate: u64, pub vfs_open: u64, pub vfs_lookup: u64, pub vfs_access: u64, pub vfs_update_page: u64, pub vfs_read_page: u64, pub vfs_read_pages: u64, pub vfs_write_page: u64, pub vfs_write_pages: u64, pub vfs_get_dents: u64, pub vfs_set_attr: u64, pub vfs_flush: u64, pub vfs_fs_sync: u64, pub vfs_lock: u64, pub vfs_release: u64, pub congestion_wait: u64, pub set_attr_trunc: u64, pub extend_write: u64, pub silly_rename: u64, pub short_read: u64, pub short_write: u64, pub delay: u64, pub pnfs_read: u64, pub pnfs_write: u64, } impl NFSEventCounter { fn from_str(s: &str) -> ProcResult<NFSEventCounter> { let mut s = s.split_whitespace(); Ok(NFSEventCounter { inode_revalidate: from_str!(u64, expect!(s.next())), deny_try_revalidate: from_str!(u64, expect!(s.next())), data_invalidate: from_str!(u64, expect!(s.next())), attr_invalidate: from_str!(u64, expect!(s.next())), vfs_open:
from_str!(u64, expect!(s.next())), vfs_lookup: from_str!(u64, expect!(s.next())), vfs_access: from_str!(u64, expect!(s.next())), vfs_update_page: from_str!(u64, expect!(s.next())), vfs_read_page: from_str!(u64, expect!(s.next())), vfs_read_pages: from_str!(u64, expect!(s.next())), vfs_write_page: from_str!(u64, expect!(s.next())), vfs_write_pages: from_str!(u64, expect!(s.next())), vfs_get_dents: from_str!(u64, expect!(s.next())), vfs_set_attr: from_str!(u64, expect!(s.next())), vfs_flush: from_str!(u64, expect!(s.next())), vfs_fs_sync: from_str!(u64, expect!(s.next())), vfs_lock: from_str!(u64, expect!(s.next())), vfs_release: from_str!(u64, expect!(s.next())), congestion_wait: from_str!(u64, expect!(s.next())), set_attr_trunc: from_str!(u64, expect!(s.next())), extend_write: from_str!(u64, expect!(s.next())), silly_rename: from_str!(u64, expect!(s.next())), short_read: from_str!(u64, expect!(s.next())), short_write: from_str!(u64, expect!(s.next())), delay: from_str!(u64, expect!(s.next())), pnfs_read: from_str!(u64, expect!(s.next())), pnfs_write: from_str!(u64, expect!(s.next())), }) } } /// Represents NFS data from `/proc//mountstats` under the section `bytes`. /// /// The underlying data structure in the kernel can be found under *fs/nfs/iostat.h* `nfs_iostat`. 
/// The fields are documented in the kernel source only under *include/linux/nfs_iostat.h* `enum /// nfs_stat_bytecounters` #[derive(Debug, Copy, Clone)] #[cfg_attr(test, derive(PartialEq))] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct NFSByteCounter { pub normal_read: u64, pub normal_write: u64, pub direct_read: u64, pub direct_write: u64, pub server_read: u64, pub server_write: u64, pub pages_read: u64, pub pages_write: u64, } impl NFSByteCounter { fn from_str(s: &str) -> ProcResult { let mut s = s.split_whitespace(); Ok(NFSByteCounter { normal_read: from_str!(u64, expect!(s.next())), normal_write: from_str!(u64, expect!(s.next())), direct_read: from_str!(u64, expect!(s.next())), direct_write: from_str!(u64, expect!(s.next())), server_read: from_str!(u64, expect!(s.next())), server_write: from_str!(u64, expect!(s.next())), pages_read: from_str!(u64, expect!(s.next())), pages_write: from_str!(u64, expect!(s.next())), }) } } /// Represents NFS data from `/proc//mountstats` under the section of `per-op statistics`. /// /// Here is what the Kernel says about the attributes: /// /// Regarding `operations`, `transmissions` and `major_timeouts`: /// /// > These counters give an idea about how many request /// > transmissions are required, on average, to complete that /// > particular procedure. Some procedures may require more /// > than one transmission because the server is unresponsive, /// > the client is retransmitting too aggressively, or the /// > requests are large and the network is congested. /// /// Regarding `bytes_sent` and `bytes_recv`: /// /// > These count how many bytes are sent and received for a /// > given RPC procedure type. This indicates how much load a /// > particular procedure is putting on the network. These /// > counts include the RPC and ULP headers, and the request /// > payload. 
/// /// Regarding `cum_queue_time`, `cum_resp_time` and `cum_total_req_time`: /// /// > The length of time an RPC request waits in queue before /// > transmission, the network + server latency of the request, /// > and the total time the request spent from init to release /// > are measured. /// /// (source: *include/linux/sunrpc/metrics.h* `struct rpc_iostats`) #[derive(Debug, Clone)] #[cfg_attr(test, derive(PartialEq))] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct NFSOperationStat { /// Count of rpc operations. pub operations: u64, /// Count of rpc transmissions pub transmissions: u64, /// Count of rpc major timeouts pub major_timeouts: u64, /// Count of bytes sent. Includes not only the RPC payload but the RPC headers as well. pub bytes_sent: u64, /// Count of bytes received. The same caveats as for `bytes_sent` apply. pub bytes_recv: u64, /// How long all requests have spent in the queue before being sent. pub cum_queue_time: Duration, /// How long it took to get a response back. pub cum_resp_time: Duration, /// How long all requests have taken from being queued to the point they were completely /// handled.
pub cum_total_req_time: Duration, } impl NFSOperationStat { fn from_str(s: &str) -> ProcResult { let mut s = s.split_whitespace(); let operations = from_str!(u64, expect!(s.next())); let transmissions = from_str!(u64, expect!(s.next())); let major_timeouts = from_str!(u64, expect!(s.next())); let bytes_sent = from_str!(u64, expect!(s.next())); let bytes_recv = from_str!(u64, expect!(s.next())); let cum_queue_time_ms = from_str!(u64, expect!(s.next())); let cum_resp_time_ms = from_str!(u64, expect!(s.next())); let cum_total_req_time_ms = from_str!(u64, expect!(s.next())); Ok(NFSOperationStat { operations, transmissions, major_timeouts, bytes_sent, bytes_recv, cum_queue_time: Duration::from_millis(cum_queue_time_ms), cum_resp_time: Duration::from_millis(cum_resp_time_ms), cum_total_req_time: Duration::from_millis(cum_total_req_time_ms), }) } } pub type NFSPerOpStats = HashMap; #[cfg(test)] mod tests { use super::*; use crate::FromRead; use std::time::Duration; #[test] fn test_mountinfo() { let s = "25 0 8:1 / / rw,relatime shared:1 - ext4 /dev/sda1 rw,errors=remount-ro"; let stat = MountInfo::from_line(s).unwrap(); println!("{:?}", stat); } #[test] fn test_proc_mountstats() { let MountStats(simple) = FromRead::from_read( "device /dev/md127 mounted on /boot with fstype ext2 device /dev/md124 mounted on /home with fstype ext4 device tmpfs mounted on /run/user/0 with fstype tmpfs " .as_bytes(), ) .unwrap(); let simple_parsed = vec![ MountStat { device: Some("/dev/md127".to_string()), mount_point: PathBuf::from("/boot"), fs: "ext2".to_string(), statistics: None, }, MountStat { device: Some("/dev/md124".to_string()), mount_point: PathBuf::from("/home"), fs: "ext4".to_string(), statistics: None, }, MountStat { device: Some("tmpfs".to_string()), mount_point: PathBuf::from("/run/user/0"), fs: "tmpfs".to_string(), statistics: None, }, ]; assert_eq!(simple, simple_parsed); let MountStats(mountstats) = FromRead::from_read("device elwe:/space mounted on /srv/elwe/space with 
fstype nfs4 statvers=1.1 opts: rw,vers=4.1,rsize=131072,wsize=131072,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=krb5,clientaddr=10.0.1.77,local_lock=none age: 3542 impl_id: name='',domain='',date='0,0' caps: caps=0x3ffdf,wtmult=512,dtsize=32768,bsize=0,namlen=255 nfsv4: bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions,pnfs=not configured sec: flavor=6,pseudoflavor=390003 events: 114 1579 5 3 132 20 3019 1 2 3 4 5 115 1 4 1 2 4 3 4 5 6 7 8 9 0 1 bytes: 1 2 3 4 5 6 7 8 RPC iostats version: 1.0 p/v: 100003/4 (nfs) xprt: tcp 909 0 1 0 2 294 294 0 294 0 2 0 0 per-op statistics NULL: 0 0 0 0 0 0 0 0 READ: 1 2 3 4 5 6 7 8 WRITE: 0 0 0 0 0 0 0 0 COMMIT: 0 0 0 0 0 0 0 0 OPEN: 1 1 0 320 420 0 124 124 ".as_bytes()).unwrap(); let nfs_v4 = &mountstats[0]; match &nfs_v4.statistics { Some(stats) => { assert_eq!("1.1".to_string(), stats.version, "mountstats version wrongly parsed."); assert_eq!(Duration::from_secs(3542), stats.age); assert_eq!(1, stats.bytes.normal_read); assert_eq!(114, stats.events.inode_revalidate); assert!(stats.server_caps().unwrap().is_some()); } None => { panic!("Failed to retrieve nfs statistics"); } } } } procfs-core-0.17.0/src/process/namespaces.rs000064400000000000000000000016761046102023000170770ustar 00000000000000use std::collections::HashMap; use std::ffi::OsString; use std::path::PathBuf; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// Information about a namespace #[derive(Debug, Clone, Eq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Namespace { /// Namespace type pub ns_type: OsString, /// Handle to the namespace pub path: PathBuf, /// Namespace identifier (inode number) pub identifier: u64, /// Device id of the namespace pub device_id: u64, } impl PartialEq for Namespace { fn eq(&self, other: &Self) -> bool { // see https://lore.kernel.org/lkml/87poky5ca9.fsf@xmission.com/ self.identifier == other.identifier && self.device_id 
== other.device_id } } /// All namespaces of a process. #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Namespaces(pub HashMap<OsString, Namespace>);
procfs-core-0.17.0/src/process/pagemap.rs
use bitflags::bitflags; use std::{fmt, mem::size_of}; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; const fn genmask(high: usize, low: usize) -> u64 { let mask_bits = size_of::<u64>() * 8; (!0 - (1 << low) + 1) & (!0 >> (mask_bits - 1 - high)) } // source: include/linux/swap.h const MAX_SWAPFILES_SHIFT: usize = 5; // source: fs/proc/task_mmu.c bitflags! { /// Represents the fields and flags in a page table entry for a swapped page. #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct SwapPageFlags: u64 { /// Swap type if swapped #[doc(hidden)] const SWAP_TYPE = genmask(MAX_SWAPFILES_SHIFT - 1, 0); /// Swap offset if swapped #[doc(hidden)] const SWAP_OFFSET = genmask(54, MAX_SWAPFILES_SHIFT); /// PTE is soft-dirty const SOFT_DIRTY = 1 << 55; /// Page is exclusively mapped const MMAP_EXCLUSIVE = 1 << 56; /// Page is file-page or shared-anon const FILE = 1 << 61; /// Page is swapped #[doc(hidden)] const SWAP = 1 << 62; /// Page is present const PRESENT = 1 << 63; } } impl SwapPageFlags { /// Returns the swap type recorded in this entry. pub fn get_swap_type(&self) -> u64 { (*self & Self::SWAP_TYPE).bits() } /// Returns the swap offset recorded in this entry. pub fn get_swap_offset(&self) -> u64 { (*self & Self::SWAP_OFFSET).bits() >> MAX_SWAPFILES_SHIFT } } bitflags! { /// Represents the fields and flags in a page table entry for a memory page.
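The `genmask` helper mirrors the kernel's `GENMASK_ULL`. A standalone sketch of how the swap type/offset bit ranges are carved out of a 64-bit pagemap entry, using the same bit layout (bits 0–4 hold the swap type, bits 5–54 the offset); the free functions `swap_type`/`swap_offset` are hypothetical names for illustration:

```rust
// Build a mask with bits high..=low set (u64 version of the kernel's GENMASK_ULL).
const fn genmask(high: usize, low: usize) -> u64 {
    let mask_bits = std::mem::size_of::<u64>() * 8;
    (!0 - (1 << low) + 1) & (!0 >> (mask_bits - 1 - high))
}

const MAX_SWAPFILES_SHIFT: usize = 5;

// For a swapped page, bits 0..=4 hold the swap type.
fn swap_type(entry: u64) -> u64 {
    entry & genmask(MAX_SWAPFILES_SHIFT - 1, 0)
}

// Bits 5..=54 hold the swap offset, shifted back down to a plain number.
fn swap_offset(entry: u64) -> u64 {
    (entry & genmask(54, MAX_SWAPFILES_SHIFT)) >> MAX_SWAPFILES_SHIFT
}

fn main() {
    assert_eq!(genmask(3, 1), 0b1110);
    // type = 0b10, offset = 0b11 -> low bits of the entry are (offset << 5) | type
    let entry = (0b11 << 5) | 0b10;
    assert_eq!(swap_type(entry), 0b10);
    assert_eq!(swap_offset(entry), 0b11);
    println!("ok");
}
```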
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)] pub struct MemoryPageFlags: u64 { /// Page frame number if present #[doc(hidden)] const PFN = genmask(54, 0); /// PTE is soft-dirty const SOFT_DIRTY = 1 << 55; /// Page is exclusively mapped const MMAP_EXCLUSIVE = 1 << 56; /// Page is file-page or shared-anon const FILE = 1 << 61; /// Page is swapped #[doc(hidden)] const SWAP = 1 << 62; /// Page is present const PRESENT = 1 << 63; } } impl MemoryPageFlags { /// Returns the page frame number recorded in this entry. pub fn get_page_frame_number(&self) -> Pfn { Pfn((*self & Self::PFN).bits()) } } /// A Page Frame Number, representing a 4 kiB physical memory page #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] pub struct Pfn(pub u64); impl fmt::UpperHex for Pfn { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { let val = self.0; fmt::UpperHex::fmt(&val, f) } } impl fmt::LowerHex for Pfn { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { let val = self.0; fmt::LowerHex::fmt(&val, f) } } /// Represents a page table entry in `/proc//pagemap`. 
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)] pub enum PageInfo { /// Entry referring to a memory page MemoryPage(MemoryPageFlags), /// Entry referring to a swapped page SwapPage(SwapPageFlags), } impl PageInfo { pub fn parse_info(info: u64) -> Self { let flags = MemoryPageFlags::from_bits_retain(info); if flags.contains(MemoryPageFlags::SWAP) { Self::SwapPage(SwapPageFlags::from_bits_retain(info)) } else { Self::MemoryPage(flags) } } } #[cfg(test)] mod tests { use super::*; #[test] fn test_genmask() { let mask = genmask(3, 1); assert_eq!(mask, 0b1110); let mask = genmask(3, 0); assert_eq!(mask, 0b1111); let mask = genmask(63, 62); assert_eq!(mask, 0b11 << 62); } #[test] fn test_page_info() { let pagemap_entry: u64 = 0b1000000110000000000000000000000000000000000000000000000000000011; let info = PageInfo::parse_info(pagemap_entry); if let PageInfo::MemoryPage(memory_flags) = info { assert!(memory_flags .contains(MemoryPageFlags::PRESENT | MemoryPageFlags::MMAP_EXCLUSIVE | MemoryPageFlags::SOFT_DIRTY)); assert_eq!(memory_flags.get_page_frame_number(), Pfn(0b11)); } else { panic!("Wrong SWAP decoding"); } let pagemap_entry: u64 = 0b1100000110000000000000000000000000000000000000000000000001100010; let info = PageInfo::parse_info(pagemap_entry); if let PageInfo::SwapPage(swap_flags) = info { assert!( swap_flags.contains(SwapPageFlags::PRESENT | SwapPageFlags::MMAP_EXCLUSIVE | SwapPageFlags::SOFT_DIRTY) ); assert_eq!(swap_flags.get_swap_type(), 0b10); assert_eq!(swap_flags.get_swap_offset(), 0b11); } else { panic!("Wrong SWAP decoding"); } } } procfs-core-0.17.0/src/process/schedstat.rs000064400000000000000000000021621046102023000167310ustar 00000000000000use crate::from_iter; use crate::ProcResult; use std::io::Read; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// Provides scheduler statistics of the process, based on the `/proc//schedstat` file. 
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct Schedstat { /// Time spent on the cpu. /// /// Measured in nanoseconds. pub sum_exec_runtime: u64, /// Time spent waiting on a runqueue. /// /// Measured in nanoseconds. pub run_delay: u64, /// \# of timeslices run on this cpu. pub pcount: u64, } impl crate::FromRead for Schedstat { fn from_read<R: Read>(mut r: R) -> ProcResult<Self> { let mut line = String::new(); r.read_to_string(&mut line)?; let mut s = line.split_whitespace(); let schedstat = Schedstat { sum_exec_runtime: expect!(from_iter(&mut s)), run_delay: expect!(from_iter(&mut s)), pcount: expect!(from_iter(&mut s)), }; if cfg!(test) { assert!(s.next().is_none()); } Ok(schedstat) } }
procfs-core-0.17.0/src/process/smaps_rollup.rs
use super::MemoryMaps; use crate::ProcResult; use std::io::BufRead; #[derive(Debug)] pub struct SmapsRollup { pub memory_map_rollup: MemoryMaps, } impl crate::FromBufRead for SmapsRollup { fn from_buf_read<R: BufRead>(r: R) -> ProcResult<Self> { MemoryMaps::from_buf_read(r).map(|m| SmapsRollup { memory_map_rollup: m }) } }
procfs-core-0.17.0/src/process/stat.rs
use super::ProcState; use super::StatFlags; use crate::{from_iter, from_iter_optional, ProcResult}; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; use std::io::Read; use std::str::FromStr; /// Status information about the process, based on the `/proc/<pid>/stat` file. /// /// Not all fields are available in every kernel. These fields have `Option` types. /// /// New fields to this struct may be added at any time (even without a major or minor semver bump). #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[non_exhaustive] pub struct Stat { /// The process ID. pub pid: i32, /// The filename of the executable, without the parentheses.
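A self-contained sketch of the parse that `Schedstat`'s `from_read` performs: `/proc/<pid>/schedstat` is just three whitespace-separated counters (the helper name `parse_schedstat` is hypothetical, and error handling is collapsed into `Option`):

```rust
// Parse the three numbers of a schedstat line:
// (sum_exec_runtime, run_delay, pcount).
fn parse_schedstat(line: &str) -> Option<(u64, u64, u64)> {
    let mut it = line.split_whitespace().map(|f| f.parse::<u64>());
    let a = it.next()?.ok()?;
    let b = it.next()?.ok()?;
    let c = it.next()?.ok()?;
    Some((a, b, c))
}

fn main() {
    let (runtime, delay, pcount) = parse_schedstat("411889311 155096256 2135").unwrap();
    assert_eq!(runtime, 411889311);
    assert_eq!(delay, 155096256);
    assert_eq!(pcount, 2135);
    println!("ok");
}
```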
/// /// This is visible whether or not the executable is swapped out. /// /// Note that if the actual comm field contains invalid UTF-8 characters, they will be replaced /// here by the U+FFFD replacement character. pub comm: String, /// Process State. /// /// See [state()](#method.state) to get the process state as an enum. pub state: char, /// The PID of the parent of this process. pub ppid: i32, /// The process group ID of the process. pub pgrp: i32, /// The session ID of the process. pub session: i32, /// The controlling terminal of the process. /// /// The minor device number is contained in the combination of bits 31 to 20 and 7 to 0; /// the major device number is in bits 15 to 8. /// /// See [tty_nr()](#method.tty_nr) to get this value decoded into a (major, minor) tuple pub tty_nr: i32, /// The ID of the foreground process group of the controlling terminal of the process. pub tpgid: i32, /// The kernel flags word of the process. /// /// For bit meanings, see the PF_* defines in the Linux kernel source file /// [`include/linux/sched.h`](https://github.com/torvalds/linux/blob/master/include/linux/sched.h). /// /// See [flags()](#method.flags) to get a [`StatFlags`](struct.StatFlags.html) bitfield object. pub flags: u32, /// The number of minor faults the process has made which have not required loading a memory /// page from disk. pub minflt: u64, /// The number of minor faults that the process's waited-for children have made. pub cminflt: u64, /// The number of major faults the process has made which have required loading a memory page /// from disk. pub majflt: u64, /// The number of major faults that the process's waited-for children have made. pub cmajflt: u64, /// Amount of time that this process has been scheduled in user mode, measured in clock ticks /// (divide by `ticks_per_second()`). 
/// /// This includes guest time, guest_time (time spent running a virtual CPU, see below), so that /// applications that are not aware of the guest time field do not lose that time from their /// calculations. pub utime: u64, /// Amount of time that this process has been scheduled in kernel mode, measured in clock ticks /// (divide by `ticks_per_second()`). pub stime: u64, /// Amount of time that this process's waited-for children have been scheduled in /// user mode, measured in clock ticks (divide by `ticks_per_second()`). /// /// This includes guest time, cguest_time (time spent running a virtual CPU, see below). pub cutime: i64, /// Amount of time that this process's waited-for children have been scheduled in kernel /// mode, measured in clock ticks (divide by `ticks_per_second()`). pub cstime: i64, /// For processes running a real-time scheduling policy (policy below; see sched_setscheduler(2)), /// this is the negated scheduling priority, minus one; /// /// That is, a number in the range -2 to -100, /// corresponding to real-time priority 1 to 99. For processes running under a non-real-time /// scheduling policy, this is the raw nice value (setpriority(2)) as represented in the kernel. /// The kernel stores nice values as numbers in the range 0 (high) to 39 (low), corresponding /// to the user-visible nice range of -20 to 19. /// (This explanation is for Linux 2.6) /// /// Before Linux 2.6, this was a scaled value based on the scheduler weighting given to this process. pub priority: i64, /// The nice value (see `setpriority(2)`), a value in the range 19 (low priority) to -20 (high priority). pub nice: i64, /// Number of threads in this process (since Linux 2.6). Before kernel 2.6, this field was /// hard coded to 0 as a placeholder for an earlier removed field. pub num_threads: i64, /// The time in jiffies before the next SIGALRM is sent to the process due to an interval /// timer. 
/// /// Since kernel 2.6.17, this field is no longer maintained, and is hard coded as 0. pub itrealvalue: i64, /// The time the process started after system boot. /// /// In kernels before Linux 2.6, this value was expressed in jiffies. Since Linux 2.6, the /// value is expressed in clock ticks (divide by `sysconf(_SC_CLK_TCK)`). /// #[cfg_attr( feature = "chrono", doc = "See also the [Stat::starttime()] method to get the starttime as a `DateTime` object" )] #[cfg_attr( not(feature = "chrono"), doc = "If you compile with the optional `chrono` feature, you can use the `starttime()` method to get the starttime as a `DateTime` object" )] pub starttime: u64, /// Virtual memory size in bytes. pub vsize: u64, /// Resident Set Size: number of pages the process has in real memory. /// /// This is just the pages which count toward text, data, or stack space. /// This does not include pages which have not been demand-loaded in, or which are swapped out. pub rss: u64, /// Current soft limit in bytes on the rss of the process; see the description of RLIMIT_RSS in /// getrlimit(2). pub rsslim: u64, /// The address above which program text can run. pub startcode: u64, /// The address below which program text can run. pub endcode: u64, /// The address of the start (i.e., bottom) of the stack. pub startstack: u64, /// The current value of ESP (stack pointer), as found in the kernel stack page for the /// process. pub kstkesp: u64, /// The current EIP (instruction pointer). pub kstkeip: u64, /// The bitmap of pending signals, displayed as a decimal number. Obsolete, because it does /// not provide information on real-time signals; use `/proc//status` instead. pub signal: u64, /// The bitmap of blocked signals, displayed as a decimal number. Obsolete, because it does /// not provide information on real-time signals; use `/proc//status` instead. pub blocked: u64, /// The bitmap of ignored signals, displayed as a decimal number. 
Obsolete, because it does /// not provide information on real-time signals; use `/proc//status` instead. pub sigignore: u64, /// The bitmap of caught signals, displayed as a decimal number. Obsolete, because it does not /// provide information on real-time signals; use `/proc//status` instead. pub sigcatch: u64, /// This is the "channel" in which the process is waiting. It is the address of a location /// in the kernel where the process is sleeping. The corresponding symbolic name can be found in /// `/proc//wchan`. pub wchan: u64, /// Number of pages swapped **(not maintained)**. pub nswap: u64, /// Cumulative nswap for child processes **(not maintained)**. pub cnswap: u64, /// Signal to be sent to parent when we die. /// /// (since Linux 2.1.22) pub exit_signal: Option, /// CPU number last executed on. /// /// (since Linux 2.2.8) pub processor: Option, /// Real-time scheduling priority /// /// Real-time scheduling priority, a number in the range 1 to 99 for processes scheduled under a real-time policy, or 0, for non-real-time processes /// /// (since Linux 2.5.19) pub rt_priority: Option, /// Scheduling policy (see sched_setscheduler(2)). /// /// Decode using the `SCHED_*` constants in `linux/sched.h`. /// /// (since Linux 2.5.19) pub policy: Option, /// Aggregated block I/O delays, measured in clock ticks (centiseconds). /// /// (since Linux 2.6.18) pub delayacct_blkio_ticks: Option, /// Guest time of the process (time spent running a virtual CPU for a guest operating system), /// measured in clock ticks (divide by `ticks_per_second()`) /// /// (since Linux 2.6.24) pub guest_time: Option, /// Guest time of the process's children, measured in clock ticks (divide by /// `ticks_per_second()`). /// /// (since Linux 2.6.24) pub cguest_time: Option, /// Address above which program initialized and uninitialized (BSS) data are placed. /// /// (since Linux 3.3) pub start_data: Option, /// Address below which program initialized and uninitialized (BSS) data are placed. 
/// /// (since Linux 3.3) pub end_data: Option<u64>, /// Address above which program heap can be expanded with brk(2). /// /// (since Linux 3.3) pub start_brk: Option<u64>, /// Address above which program command-line arguments (argv) are placed. /// /// (since Linux 3.5) pub arg_start: Option<u64>, /// Address below which program command-line arguments (argv) are placed. /// /// (since Linux 3.5) pub arg_end: Option<u64>, /// Address above which program environment is placed. /// /// (since Linux 3.5) pub env_start: Option<u64>, /// Address below which program environment is placed. /// /// (since Linux 3.5) pub env_end: Option<u64>, /// The thread's exit status in the form reported by waitpid(2). /// /// (since Linux 3.5) pub exit_code: Option<i32>, } impl crate::FromRead for Stat { #[allow(clippy::cognitive_complexity)] fn from_read<R: Read>(mut r: R) -> ProcResult<Self> { // read in entire thing, this is only going to be 1 line let mut buf = Vec::with_capacity(512); r.read_to_end(&mut buf)?; let line = String::from_utf8_lossy(&buf); let buf = line.trim(); // find the first opening paren and the last closing paren, and split off the pid and comm let start_paren = expect!(buf.find('(')); let end_paren = expect!(buf.rfind(')')); let pid_s = &buf[..start_paren - 1]; let comm = buf[start_paren + 1..end_paren].to_string(); let rest = &buf[end_paren + 2..]; let pid = expect!(FromStr::from_str(pid_s)); let mut rest = rest.split(' '); let state = expect!(expect!(rest.next()).chars().next()); let ppid = expect!(from_iter(&mut rest)); let pgrp = expect!(from_iter(&mut rest)); let session = expect!(from_iter(&mut rest)); let tty_nr = expect!(from_iter(&mut rest)); let tpgid = expect!(from_iter(&mut rest)); let flags = expect!(from_iter(&mut rest)); let minflt = expect!(from_iter(&mut rest)); let cminflt = expect!(from_iter(&mut rest)); let majflt = expect!(from_iter(&mut rest)); let cmajflt = expect!(from_iter(&mut rest)); let utime = expect!(from_iter(&mut rest)); let stime = expect!(from_iter(&mut rest)); let cutime = expect!(from_iter(&mut rest));
let cstime = expect!(from_iter(&mut rest)); let priority = expect!(from_iter(&mut rest)); let nice = expect!(from_iter(&mut rest)); let num_threads = expect!(from_iter(&mut rest)); let itrealvalue = expect!(from_iter(&mut rest)); let starttime = expect!(from_iter(&mut rest)); let vsize = expect!(from_iter(&mut rest)); let rss = expect!(from_iter(&mut rest)); let rsslim = expect!(from_iter(&mut rest)); let startcode = expect!(from_iter(&mut rest)); let endcode = expect!(from_iter(&mut rest)); let startstack = expect!(from_iter(&mut rest)); let kstkesp = expect!(from_iter(&mut rest)); let kstkeip = expect!(from_iter(&mut rest)); let signal = expect!(from_iter(&mut rest)); let blocked = expect!(from_iter(&mut rest)); let sigignore = expect!(from_iter(&mut rest)); let sigcatch = expect!(from_iter(&mut rest)); let wchan = expect!(from_iter(&mut rest)); let nswap = expect!(from_iter(&mut rest)); let cnswap = expect!(from_iter(&mut rest)); // Since 2.1.22 let exit_signal = expect!(from_iter_optional(&mut rest)); // Since 2.2.8 let processor = expect!(from_iter_optional(&mut rest)); // Since 2.5.19 let rt_priority = expect!(from_iter_optional(&mut rest)); let policy = expect!(from_iter_optional(&mut rest)); // Since 2.6.18 let delayacct_blkio_ticks = expect!(from_iter_optional(&mut rest)); // Since 2.6.24 let guest_time = expect!(from_iter_optional(&mut rest)); let cguest_time = expect!(from_iter_optional(&mut rest)); // Since 3.3.0 let start_data = expect!(from_iter_optional(&mut rest)); let end_data = expect!(from_iter_optional(&mut rest)); let start_brk = expect!(from_iter_optional(&mut rest)); // Since 3.5.0 let arg_start = expect!(from_iter_optional(&mut rest)); let arg_end = expect!(from_iter_optional(&mut rest)); let env_start = expect!(from_iter_optional(&mut rest)); let env_end = expect!(from_iter_optional(&mut rest)); let exit_code = expect!(from_iter_optional(&mut rest)); Ok(Stat { pid, comm, state, ppid, pgrp, session, tty_nr, tpgid, flags, minflt, cminflt, 
majflt, cmajflt, utime, stime, cutime, cstime, priority, nice, num_threads, itrealvalue, starttime, vsize, rss, rsslim, startcode, endcode, startstack, kstkesp, kstkeip, signal, blocked, sigignore, sigcatch, wchan, nswap, cnswap, exit_signal, processor, rt_priority, policy, delayacct_blkio_ticks, guest_time, cguest_time, start_data, end_data, start_brk, arg_start, arg_end, env_start, env_end, exit_code, }) } } impl Stat { pub fn state(&self) -> ProcResult { ProcState::from_char(self.state) .ok_or_else(|| build_internal_error!(format!("{:?} is not a recognized process state", self.state))) } pub fn tty_nr(&self) -> (i32, i32) { // minor is bits 31-20 and 7-0 // major is 15-8 // mmmmmmmmmmmm____MMMMMMMMmmmmmmmm // 11111111111100000000000000000000 let major = (self.tty_nr & 0xfff00) >> 8; let minor = (self.tty_nr & 0x000ff) | ((self.tty_nr >> 12) & 0xfff00); (major, minor) } /// The kernel flags word of the process, as a bitfield /// /// See also the [Stat::flags](struct.Stat.html#structfield.flags) field. pub fn flags(&self) -> ProcResult { StatFlags::from_bits(self.flags) .ok_or_else(|| build_internal_error!(format!("Can't construct flags bitfield from {:?}", self.flags))) } /// Get the starttime of the process as a `DateTime` object. /// /// See also the [`starttime`](struct.Stat.html#structfield.starttime) field. /// /// This function requires the "chrono" features to be enabled (which it is by default). /// /// Since computing the absolute start time requires knowing the current boot time, this function returns /// a type that needs info about the current machine. 
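The bit-twiddling in `tty_nr()` above can be checked in isolation. A sketch of the same device-number split (minor device number in bits 31–20 and 7–0, major in bits 15–8); the free function `split_tty_nr` is a hypothetical name for illustration:

```rust
// Split a tty_nr device number into (major, minor), mirroring the layout
// described above: minor = bits 31..20 and 7..0, major = bits 15..8.
fn split_tty_nr(tty_nr: i32) -> (i32, i32) {
    let major = (tty_nr & 0xfff00) >> 8;
    let minor = (tty_nr & 0x000ff) | ((tty_nr >> 12) & 0xfff00);
    (major, minor)
}

fn main() {
    // /dev/pts/3 is typically major 136, minor 3, encoded as (136 << 8) | 3.
    let (major, minor) = split_tty_nr((136 << 8) | 3);
    assert_eq!(major, 136);
    assert_eq!(minor, 3);
    println!("ok");
}
```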
/// /// # Example /// /// ```rust,ignore /// use procfs::WithCurrentSystemInfo; /// /// let me = procfs::process::Process::myself().unwrap(); /// let stat = me.stat().unwrap(); /// let start = stat.starttime().get().unwrap(); /// ``` #[cfg(feature = "chrono")] pub fn starttime(&self) -> impl crate::WithSystemInfo>> { move |si: &crate::SystemInfo| { let seconds_since_boot = self.starttime as f32 / si.ticks_per_second() as f32; Ok(si.boot_time()? + chrono::Duration::milliseconds((seconds_since_boot * 1000.0) as i64)) } } /// Gets the Resident Set Size (in bytes) /// /// The `rss` field will return the same value in pages /// /// # Example /// /// Calculating the rss value in bytes requires knowing the page size, so a `SystemInfo` is needed. /// ```rust,ignore /// use procfs::WithCurrentSystemInfo; /// /// let me = procfs::process::Process::myself().unwrap(); /// let stat = me.stat().unwrap(); /// let bytes = stat.rss_bytes().get(); /// ``` pub fn rss_bytes(&self) -> impl crate::WithSystemInfo { move |si: &crate::SystemInfo| self.rss * si.page_size() } } procfs-core-0.17.0/src/process/status.rs000064400000000000000000000362241046102023000163000ustar 00000000000000use crate::{FromStrRadix, ProcResult}; use std::collections::HashMap; use std::io::BufRead; #[cfg(feature = "serde1")] use serde::{Deserialize, Serialize}; /// Status information about the process, based on the `/proc//status` file. /// /// Not all fields are available in every kernel. These fields have `Option` types. /// In general, the current kernel version will tell you what fields you can expect, but this /// isn't totally reliable, since some kernels might backport certain fields, or fields might /// only be present if certain kernel configuration options are enabled. Be prepared to /// handle `None` values. /// /// New fields to this struct may be added at any time (even without a major or minor semver bump). 
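The paren-based split in `Stat::from_read` above exists because `comm` may itself contain spaces or parentheses, so the code uses the first `(` and the *last* `)`. A minimal sketch of that split (the helper name `split_stat_line` is hypothetical, with error handling collapsed into `Option`):

```rust
// Split a /proc/<pid>/stat line into (pid, comm, rest) using the first '('
// and the last ')', so a comm like "my (weird) name" stays intact.
fn split_stat_line(line: &str) -> Option<(i32, &str, &str)> {
    let start = line.find('(')?;
    let end = line.rfind(')')?;
    let pid = line[..start].trim().parse().ok()?;
    // Skip the ')' and the following space; tolerate a line ending at ')'.
    let rest = line.get(end + 2..).unwrap_or("").trim_start();
    Some((pid, &line[start + 1..end], rest))
}

fn main() {
    let (pid, comm, rest) = split_stat_line("1234 (my (weird) name) R 1 1234").unwrap();
    assert_eq!(pid, 1234);
    assert_eq!(comm, "my (weird) name");
    assert_eq!(rest, "R 1 1234");
    println!("ok");
}
```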
#[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[non_exhaustive] pub struct Status { /// Command run by this process. pub name: String, /// Process umask, expressed in octal with a leading zero; see umask(2). (Since Linux 4.7.) pub umask: Option, /// Current state of the process. pub state: String, /// Thread group ID (i.e., Process ID). pub tgid: i32, /// NUMA group ID (0 if none; since Linux 3.13). pub ngid: Option, /// Thread ID (see gettid(2)). pub pid: i32, /// PID of parent process. pub ppid: i32, /// PID of process tracing this process (0 if not being traced). pub tracerpid: i32, /// Real UID. pub ruid: u32, /// Effective UID. pub euid: u32, /// Saved set UID. pub suid: u32, /// Filesystem UID. pub fuid: u32, /// Real GID. pub rgid: u32, /// Effective GID. pub egid: u32, /// Saved set GID. pub sgid: u32, /// Filesystem GID. pub fgid: u32, /// Number of file descriptor slots currently allocated. pub fdsize: u32, /// Supplementary group list. pub groups: Vec, /// Thread group ID (i.e., PID) in each of the PID /// namespaces of which (pid)[struct.Status.html#structfield.pid] is a member. The leftmost entry /// shows the value with respect to the PID namespace of the /// reading process, followed by the value in successively /// nested inner namespaces. (Since Linux 4.1.) pub nstgid: Option>, /// Thread ID in each of the PID namespaces of which /// (pid)[struct.Status.html#structfield.pid] is a member. The fields are ordered as for NStgid. /// (Since Linux 4.1.) pub nspid: Option>, /// Process group ID in each of the PID namespaces of /// which (pid)[struct.Status.html#structfield.pid] is a member. The fields are ordered as for NStgid. (Since Linux 4.1.) pub nspgid: Option>, /// NSsid: descendant namespace session ID hierarchy Session ID /// in each of the PID namespaces of which (pid)[struct.Status.html#structfield.pid] is a member. /// The fields are ordered as for NStgid. (Since Linux 4.1.) 
pub nssid: Option<Vec<i32>>, /// Peak virtual memory size in kibibytes. pub vmpeak: Option<u64>, /// Virtual memory size in kibibytes. pub vmsize: Option<u64>, /// Locked memory size in kibibytes (see mlock(2)). pub vmlck: Option<u64>, /// Pinned memory size in kibibytes (since Linux 3.2). These are /// pages that can't be moved because something needs to /// directly access physical memory. pub vmpin: Option<u64>, /// Peak resident set size in kibibytes ("high water mark"). pub vmhwm: Option<u64>, /// Resident set size in kibibytes. Note that the value here is the /// sum of RssAnon, RssFile, and RssShmem. pub vmrss: Option<u64>, /// Size of resident anonymous memory in kibibytes. (since Linux 4.5). pub rssanon: Option<u64>, /// Size of resident file mappings in kibibytes. (since Linux 4.5). pub rssfile: Option<u64>, /// Size of resident shared memory in kibibytes (includes System V /// shared memory, mappings from tmpfs(5), and shared anonymous /// mappings). (since Linux 4.5). pub rssshmem: Option<u64>, /// Size of data in kibibytes. pub vmdata: Option<u64>, /// Size of stack in kibibytes. pub vmstk: Option<u64>, /// Size of text segments in kibibytes. pub vmexe: Option<u64>, /// Shared library code size in kibibytes. pub vmlib: Option<u64>, /// Page table entries size in kibibytes (since Linux 2.6.10). pub vmpte: Option<u64>, /// Swapped-out virtual memory size by anonymous private /// pages, in kibibytes; shmem swap usage is not included (since Linux 2.6.34). pub vmswap: Option<u64>, /// Size of hugetlb memory portions in kB. (since Linux 4.4). pub hugetlbpages: Option<u64>, /// Number of threads in process containing this thread. pub threads: u64, /// This field contains two slash-separated numbers that /// relate to queued signals for the real user ID of this /// process. The first of these is the number of currently /// queued signals for this real user ID, and the second is the /// resource limit on the number of queued signals for this /// process (see the description of RLIMIT_SIGPENDING in /// getrlimit(2)).
pub sigq: (u64, u64), /// Number of signals pending for thread (see pthreads(7) and signal(7)). pub sigpnd: u64, /// Number of signals pending for process as a whole (see pthreads(7) and signal(7)). pub shdpnd: u64, /// Masks indicating signals being blocked (see signal(7)). pub sigblk: u64, /// Masks indicating signals being ignored (see signal(7)). pub sigign: u64, /// Masks indicating signals being caught (see signal(7)). pub sigcgt: u64, /// Masks of capabilities enabled in inheritable sets (see capabilities(7)). pub capinh: u64, /// Masks of capabilities enabled in permitted sets (see capabilities(7)). pub capprm: u64, /// Masks of capabilities enabled in effective sets (see capabilities(7)). pub capeff: u64, /// Capability Bounding set (since Linux 2.6.26, see capabilities(7)). pub capbnd: Option, /// Ambient capability set (since Linux 4.3, see capabilities(7)). pub capamb: Option, /// Value of the no_new_privs bit (since Linux 4.10, see prctl(2)). pub nonewprivs: Option, /// Seccomp mode of the process (since Linux 3.8, see /// seccomp(2)). 0 means SECCOMP_MODE_DISABLED; 1 means SEC‐ /// COMP_MODE_STRICT; 2 means SECCOMP_MODE_FILTER. This field /// is provided only if the kernel was built with the CON‐ /// FIG_SECCOMP kernel configuration option enabled. pub seccomp: Option, /// Speculative store bypass mitigation status. pub speculation_store_bypass: Option, /// Mask of CPUs on which this process may run (since Linux 2.6.24, see cpuset(7)). pub cpus_allowed: Option>, /// Same as previous, but in "list format" (since Linux 2.6.26, see cpuset(7)). pub cpus_allowed_list: Option>, /// Mask of memory nodes allowed to this process (since Linux 2.6.24, see cpuset(7)). pub mems_allowed: Option>, /// Same as previous, but in "list format" (since Linux 2.6.26, see cpuset(7)). pub mems_allowed_list: Option>, /// Number of voluntary context switches (since Linux 2.6.23). 
    pub voluntary_ctxt_switches: Option<u64>,
    /// Number of involuntary context switches (since Linux 2.6.23).
    pub nonvoluntary_ctxt_switches: Option<u64>,
    /// Contains true if the process is currently dumping core.
    ///
    /// This information can be used by a monitoring process to avoid killing a process that is
    /// currently dumping core, which could result in a corrupted core dump file.
    ///
    /// (Since Linux 4.15)
    pub core_dumping: Option<bool>,
    /// Contains true if the process is allowed to use THP
    ///
    /// (Since Linux 5.0)
    pub thp_enabled: Option<bool>,
}

impl crate::FromBufRead for Status {
    fn from_buf_read<R: BufRead>(reader: R) -> ProcResult<Self> {
        let mut map = HashMap::new();

        for line in reader.lines() {
            let line = line?;
            if line.is_empty() {
                continue;
            }
            let mut s = line.split(':');
            let field = expect!(s.next());
            let value = expect!(s.next()).trim();

            map.insert(field.to_string(), value.to_string());
        }

        let status = Status {
            name: expect!(map.remove("Name")),
            umask: map.remove("Umask").map(|x| Ok(from_str!(u32, &x, 8))).transpose()?,
            state: expect!(map.remove("State")),
            tgid: from_str!(i32, &expect!(map.remove("Tgid"))),
            ngid: map.remove("Ngid").map(|x| Ok(from_str!(i32, &x))).transpose()?,
            pid: from_str!(i32, &expect!(map.remove("Pid"))),
            ppid: from_str!(i32, &expect!(map.remove("PPid"))),
            tracerpid: from_str!(i32, &expect!(map.remove("TracerPid"))),
            ruid: expect!(Status::parse_uid_gid(expect!(map.get("Uid")), 0)),
            euid: expect!(Status::parse_uid_gid(expect!(map.get("Uid")), 1)),
            suid: expect!(Status::parse_uid_gid(expect!(map.get("Uid")), 2)),
            fuid: expect!(Status::parse_uid_gid(&expect!(map.remove("Uid")), 3)),
            rgid: expect!(Status::parse_uid_gid(expect!(map.get("Gid")), 0)),
            egid: expect!(Status::parse_uid_gid(expect!(map.get("Gid")), 1)),
            sgid: expect!(Status::parse_uid_gid(expect!(map.get("Gid")), 2)),
            fgid: expect!(Status::parse_uid_gid(&expect!(map.remove("Gid")), 3)),
            fdsize: from_str!(u32, &expect!(map.remove("FDSize"))),
            groups: Status::parse_list(&expect!(map.remove("Groups")))?,
            nstgid: map.remove("NStgid").map(|x| Status::parse_list(&x)).transpose()?,
            nspid: map.remove("NSpid").map(|x| Status::parse_list(&x)).transpose()?,
            nspgid: map.remove("NSpgid").map(|x| Status::parse_list(&x)).transpose()?,
            nssid: map.remove("NSsid").map(|x| Status::parse_list(&x)).transpose()?,
            vmpeak: Status::parse_with_kb(map.remove("VmPeak"))?,
            vmsize: Status::parse_with_kb(map.remove("VmSize"))?,
            vmlck: Status::parse_with_kb(map.remove("VmLck"))?,
            vmpin: Status::parse_with_kb(map.remove("VmPin"))?,
            vmhwm: Status::parse_with_kb(map.remove("VmHWM"))?,
            vmrss: Status::parse_with_kb(map.remove("VmRSS"))?,
            rssanon: Status::parse_with_kb(map.remove("RssAnon"))?,
            rssfile: Status::parse_with_kb(map.remove("RssFile"))?,
            rssshmem: Status::parse_with_kb(map.remove("RssShmem"))?,
            vmdata: Status::parse_with_kb(map.remove("VmData"))?,
            vmstk: Status::parse_with_kb(map.remove("VmStk"))?,
            vmexe: Status::parse_with_kb(map.remove("VmExe"))?,
            vmlib: Status::parse_with_kb(map.remove("VmLib"))?,
            vmpte: Status::parse_with_kb(map.remove("VmPTE"))?,
            vmswap: Status::parse_with_kb(map.remove("VmSwap"))?,
            hugetlbpages: Status::parse_with_kb(map.remove("HugetlbPages"))?,
            threads: from_str!(u64, &expect!(map.remove("Threads"))),
            sigq: expect!(Status::parse_sigq(&expect!(map.remove("SigQ")))),
            sigpnd: from_str!(u64, &expect!(map.remove("SigPnd")), 16),
            shdpnd: from_str!(u64, &expect!(map.remove("ShdPnd")), 16),
            sigblk: from_str!(u64, &expect!(map.remove("SigBlk")), 16),
            sigign: from_str!(u64, &expect!(map.remove("SigIgn")), 16),
            sigcgt: from_str!(u64, &expect!(map.remove("SigCgt")), 16),
            capinh: from_str!(u64, &expect!(map.remove("CapInh")), 16),
            capprm: from_str!(u64, &expect!(map.remove("CapPrm")), 16),
            capeff: from_str!(u64, &expect!(map.remove("CapEff")), 16),
            capbnd: map.remove("CapBnd").map(|x| Ok(from_str!(u64, &x, 16))).transpose()?,
            capamb: map.remove("CapAmb").map(|x| Ok(from_str!(u64, &x, 16))).transpose()?,
            nonewprivs: map.remove("NoNewPrivs").map(|x| Ok(from_str!(u64, &x))).transpose()?,
            seccomp: map.remove("Seccomp").map(|x| Ok(from_str!(u32, &x))).transpose()?,
            speculation_store_bypass: map.remove("Speculation_Store_Bypass"),
            cpus_allowed: map
                .remove("Cpus_allowed")
                .map(|x| Status::parse_allowed(&x))
                .transpose()?,
            cpus_allowed_list: map
                .remove("Cpus_allowed_list")
                .and_then(|x| Status::parse_allowed_list(&x).ok()),
            mems_allowed: map
                .remove("Mems_allowed")
                .map(|x| Status::parse_allowed(&x))
                .transpose()?,
            mems_allowed_list: map
                .remove("Mems_allowed_list")
                .and_then(|x| Status::parse_allowed_list(&x).ok()),
            voluntary_ctxt_switches: map
                .remove("voluntary_ctxt_switches")
                .map(|x| Ok(from_str!(u64, &x)))
                .transpose()?,
            nonvoluntary_ctxt_switches: map
                .remove("nonvoluntary_ctxt_switches")
                .map(|x| Ok(from_str!(u64, &x)))
                .transpose()?,
            core_dumping: map.remove("CoreDumping").map(|x| x == "1"),
            thp_enabled: map.remove("THP_enabled").map(|x| x == "1"),
        };

        if cfg!(test) && !map.is_empty() {
            // This isn't an error because different kernels may put different data here, and
            // distros may backport these changes into older kernels. Too hard to keep track of.
            eprintln!("Warning: status map is not empty: {:#?}", map);
        }

        Ok(status)
    }
}

impl Status {
    fn parse_with_kb<T: crate::FromStrRadix>(s: Option<String>) -> ProcResult<Option<T>> {
        if let Some(s) = s {
            Ok(Some(from_str!(T, &s.replace(" kB", ""))))
        } else {
            Ok(None)
        }
    }

    #[doc(hidden)]
    pub fn parse_uid_gid(s: &str, i: usize) -> ProcResult<u32> {
        Ok(from_str!(u32, expect!(s.split_whitespace().nth(i))))
    }

    fn parse_sigq(s: &str) -> ProcResult<(u64, u64)> {
        let mut iter = s.split('/');
        let first = from_str!(u64, expect!(iter.next()));
        let second = from_str!(u64, expect!(iter.next()));
        Ok((first, second))
    }

    fn parse_list<T: crate::FromStrRadix>(s: &str) -> ProcResult<Vec<T>> {
        let mut ret = Vec::new();
        for i in s.split_whitespace() {
            ret.push(from_str!(T, i));
        }
        Ok(ret)
    }

    fn parse_allowed(s: &str) -> ProcResult<Vec<u32>> {
        let mut ret = Vec::new();
        for i in s.split(',') {
            ret.push(from_str!(u32, i, 16));
        }
        Ok(ret)
    }

    fn parse_allowed_list(s: &str) -> ProcResult<Vec<(u32, u32)>> {
        let mut ret = Vec::new();
        for s in s.split(',') {
            if s.contains('-') {
                let mut s = s.split('-');
                let beg = from_str!(u32, expect!(s.next()));
                if let Some(x) = s.next() {
                    let end = from_str!(u32, x);
                    ret.push((beg, end));
                }
            } else {
                let beg = from_str!(u32, s);
                let end = from_str!(u32, s);
                ret.push((beg, end));
            }
        }
        Ok(ret)
    }
}
procfs-core-0.17.0/src/sys/kernel/mod.rs

//! Global kernel info / tuning miscellaneous stuff
//!
//! The files in this directory can be used to tune and monitor miscellaneous
//! and general things in the operation of the Linux kernel.

use std::cmp;
use std::collections::HashSet;
use std::str::FromStr;

use bitflags::bitflags;

use crate::{ProcError, ProcResult};

/// Represents a kernel version, in major.minor.release version.
#[derive(Debug, Copy, Clone, Eq, PartialEq, Hash)]
pub struct Version {
    pub major: u8,
    pub minor: u8,
    pub patch: u16,
}

impl Version {
    pub fn new(major: u8, minor: u8, patch: u16) -> Version {
        Version { major, minor, patch }
    }

    /// Parses a kernel version string, in major.minor.release syntax.
    ///
    /// Note that any extra information (stuff after a dash) is ignored.
    ///
    /// # Example
    ///
    /// ```
    /// # use procfs_core::KernelVersion;
    /// let a = KernelVersion::from_str("3.16.0-6-amd64").unwrap();
    /// let b = KernelVersion::new(3, 16, 0);
    /// assert_eq!(a, b);
    /// ```
    #[allow(clippy::should_implement_trait)]
    pub fn from_str(s: &str) -> Result<Self, &'static str> {
        let pos = s.find(|c: char| c != '.' && !c.is_ascii_digit());
        let kernel = if let Some(pos) = pos {
            let (s, _) = s.split_at(pos);
            s
        } else {
            s
        };
        let mut kernel_split = kernel.split('.');

        let major = kernel_split.next().ok_or("Missing major version component")?;
        let minor = kernel_split.next().ok_or("Missing minor version component")?;
        let patch = kernel_split.next().ok_or("Missing patch version component")?;

        let major = major.parse().map_err(|_| "Failed to parse major version")?;
        let minor = minor.parse().map_err(|_| "Failed to parse minor version")?;
        let patch = patch.parse().map_err(|_| "Failed to parse patch version")?;

        Ok(Version { major, minor, patch })
    }
}

impl FromStr for Version {
    type Err = &'static str;

    /// Parses a kernel version string, in major.minor.release syntax.
    ///
    /// Note that any extra information (stuff after a dash) is ignored.
    ///
    /// # Example
    ///
    /// ```
    /// # use procfs_core::KernelVersion;
    /// let a: KernelVersion = "3.16.0-6-amd64".parse().unwrap();
    /// let b = KernelVersion::new(3, 16, 0);
    /// assert_eq!(a, b);
    /// ```
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Version::from_str(s)
    }
}

impl cmp::Ord for Version {
    fn cmp(&self, other: &Self) -> cmp::Ordering {
        match self.major.cmp(&other.major) {
            cmp::Ordering::Equal => match self.minor.cmp(&other.minor) {
                cmp::Ordering::Equal => self.patch.cmp(&other.patch),
                x => x,
            },
            x => x,
        }
    }
}

impl cmp::PartialOrd for Version {
    fn partial_cmp(&self, other: &Self) -> Option<cmp::Ordering> {
        Some(self.cmp(other))
    }
}

/// Represents a kernel type
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct Type {
    pub sysname: String,
}

impl Type {
    pub fn new(sysname: String) -> Type {
        Type { sysname }
    }
}

impl FromStr for Type {
    type Err = &'static str;

    /// Parse a kernel type string
    ///
    /// Notice that in the Linux source code, it is defined as a single string.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(Type::new(s.to_string()))
    }
}

/// Represents kernel build information
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct BuildInfo {
    pub version: String,
    pub flags: HashSet<String>,
    /// This field contains any extra data from the /proc/sys/kernel/version file. It generally
    /// contains the build date of the kernel, but the format of the date can vary.
    ///
    /// A method named `extra_date` is provided which tries to parse some common date formats.
    /// When the date format is not supported, an error will be returned. It depends on the
    /// `chrono` feature.
    pub extra: String,
}

impl BuildInfo {
    pub fn new(version: &str, flags: HashSet<String>, extra: String) -> BuildInfo {
        BuildInfo {
            version: version.to_string(),
            flags,
            extra,
        }
    }

    /// Check if SMP is ON
    pub fn smp(&self) -> bool {
        self.flags.contains("SMP")
    }

    /// Check if PREEMPT is ON
    pub fn preempt(&self) -> bool {
        self.flags.contains("PREEMPT")
    }

    /// Check if PREEMPTRT is ON
    pub fn preemptrt(&self) -> bool {
        self.flags.contains("PREEMPTRT")
    }

    /// Return the version number
    ///
    /// This parses the number from the first digits of the version string. For example, `#21~1` yields 21.
    pub fn version_number(&self) -> ProcResult<u32> {
        let mut version_str = String::new();
        for c in self.version.chars() {
            if c.is_ascii_digit() {
                version_str.push(c);
            } else {
                break;
            }
        }
        let version_number: u32 = version_str.parse().map_err(|_| "Failed to parse version number")?;
        Ok(version_number)
    }

    /// Parse the extra field to a `DateTime` object
    ///
    /// This function may fail, as the timestamp can come in various formats.
    #[cfg(feature = "chrono")]
    pub fn extra_date(&self) -> ProcResult<chrono::DateTime<chrono::Local>> {
        if let Ok(dt) = chrono::DateTime::parse_from_str(
            &format!("{} +0000", &self.extra),
            "%a %b %d %H:%M:%S UTC %Y %z",
        ) {
            return Ok(dt.with_timezone(&chrono::Local));
        }
        if let Ok(dt) = chrono::DateTime::parse_from_str(&self.extra, "%a, %d %b %Y %H:%M:%S %z") {
            return Ok(dt.with_timezone(&chrono::Local));
        }
        Err(ProcError::Other("Failed to parse extra field to date".to_string()))
    }
}

impl FromStr for BuildInfo {
    type Err = &'static str;

    /// Parse a kernel build information string
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut version = String::new();
        let mut flags: HashSet<String> = HashSet::new();
        let mut extra: String = String::new();

        let mut splited = s.split(' ');
        let version_str = splited.next();
        if let Some(version_str) = version_str {
            if let Some(stripped) = version_str.strip_prefix('#') {
                version.push_str(stripped);
            } else {
                return Err("Failed to parse kernel build version");
            }
        } else {
            return Err("Failed to parse kernel build version");
        }

        for s in &mut splited {
            if s.chars().all(char::is_uppercase) {
                flags.insert(s.to_string());
            } else {
                extra.push_str(s);
                extra.push(' ');
                break;
            }
        }
        let remains: Vec<&str> = splited.collect();
        extra.push_str(&remains.join(" "));

        Ok(BuildInfo { version, flags, extra })
    }
}

#[derive(Debug, PartialEq, Eq, Copy, Clone)]
/// Represents the data from `/proc/sys/kernel/sem`
pub struct SemaphoreLimits {
    /// The maximum semaphores per semaphore set
    pub semmsl: u64,
    /// A system-wide limit on the number of semaphores in all semaphore sets
    pub semmns: u64,
    /// The maximum number of operations that may be specified in a semop(2) call
    pub semopm: u64,
    /// A system-wide limit on the maximum number of semaphore identifiers
    pub semmni: u64,
}

impl SemaphoreLimits {
    fn from_str(s: &str) -> Result<Self, &'static str> {
        let mut s = s.split_ascii_whitespace();

        let semmsl = s.next().ok_or("Missing SEMMSL")?;
        let semmns = s.next().ok_or("Missing SEMMNS")?;
        let semopm = s.next().ok_or("Missing SEMOPM")?;
        let semmni = s.next().ok_or("Missing SEMMNI")?;

        let semmsl = semmsl.parse().map_err(|_| "Failed to parse SEMMSL")?;
        let semmns = semmns.parse().map_err(|_| "Failed to parse SEMMNS")?;
        let semopm = semopm.parse().map_err(|_| "Failed to parse SEMOPM")?;
        let semmni = semmni.parse().map_err(|_| "Failed to parse SEMMNI")?;

        Ok(SemaphoreLimits {
            semmsl,
            semmns,
            semopm,
            semmni,
        })
    }
}

impl FromStr for SemaphoreLimits {
    type Err = &'static str;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        SemaphoreLimits::from_str(s)
    }
}

bitflags!
{
    /// Flags representing allowed sysrq functions
    #[derive(Copy, Clone, Debug, Hash, Eq, PartialEq, PartialOrd, Ord)]
    pub struct AllowedFunctions: u16 {
        /// Enable control of console log level
        const ENABLE_CONTROL_LOG_LEVEL = 2;
        /// Enable control of keyboard (SAK, unraw)
        const ENABLE_CONTROL_KEYBOARD = 4;
        /// Enable debugging dumps of processes etc
        const ENABLE_DEBUGGING_DUMPS = 8;
        /// Enable sync command
        const ENABLE_SYNC_COMMAND = 16;
        /// Enable remount read-only
        const ENABLE_REMOUNT_READ_ONLY = 32;
        /// Enable signaling of processes (term, kill, oom-kill)
        const ENABLE_SIGNALING_PROCESSES = 64;
        /// Allow reboot/poweroff
        const ALLOW_REBOOT_POWEROFF = 128;
        /// Allow nicing of all real-time tasks
        const ALLOW_NICING_REAL_TIME_TASKS = 256;
    }
}

/// Values controlling functions allowed to be invoked by the SysRq key
#[derive(Copy, Clone, Debug)]
pub enum SysRq {
    /// Disable sysrq completely
    Disable,
    /// Enable all functions of sysrq
    Enable,
    /// Bitmask of allowed sysrq functions
    AllowedFunctions(AllowedFunctions),
}

impl SysRq {
    pub fn to_number(self) -> u16 {
        match self {
            SysRq::Disable => 0,
            SysRq::Enable => 1,
            SysRq::AllowedFunctions(allowed) => allowed.bits(),
        }
    }

    fn from_str(s: &str) -> ProcResult<Self> {
        match s.parse::<u16>()? {
            0 => Ok(SysRq::Disable),
            1 => Ok(SysRq::Enable),
            x => match AllowedFunctions::from_bits(x) {
                Some(allowed) => Ok(SysRq::AllowedFunctions(allowed)),
                None => Err("Invalid value".into()),
            },
        }
    }
}

impl FromStr for SysRq {
    type Err = ProcError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        SysRq::from_str(s)
    }
}

/// The minimum value that can be written to `/proc/sys/kernel/threads-max` on Linux 4.1 or later
pub const THREADS_MIN: u32 = 20;
/// The maximum value that can be written to `/proc/sys/kernel/threads-max` on Linux 4.1 or later
pub const THREADS_MAX: u32 = 0x3fff_ffff;

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_version() {
        let a = Version::from_str("3.16.0-6-amd64").unwrap();
        let b = Version::new(3, 16, 0);
        assert_eq!(a, b);

        let a = Version::from_str("3.16.0").unwrap();
        let b = Version::new(3, 16, 0);
        assert_eq!(a, b);

        let a = Version::from_str("3.16.0_1").unwrap();
        let b = Version::new(3, 16, 0);
        assert_eq!(a, b);
    }

    #[test]
    fn test_type() {
        let a = Type::from_str("Linux").unwrap();
        assert_eq!(a.sysname, "Linux");
    }

    #[test]
    fn test_build_info() {
        // For Ubuntu, Manjaro, CentOS and others:
        let a = BuildInfo::from_str("#1 SMP PREEMPT Thu Sep 30 15:29:01 UTC 2021").unwrap();
        let mut flags: HashSet<String> = HashSet::new();
        flags.insert("SMP".to_string());
        flags.insert("PREEMPT".to_string());
        assert_eq!(a.version, "1");
        assert_eq!(a.version_number().unwrap(), 1);
        assert_eq!(a.flags, flags);
        assert!(a.smp());
        assert!(a.preempt());
        assert!(!a.preemptrt());
        assert_eq!(a.extra, "Thu Sep 30 15:29:01 UTC 2021");
        #[cfg(feature = "chrono")]
        let _ = a.extra_date().unwrap();

        // For Arch and others:
        let b = BuildInfo::from_str("#1 SMP PREEMPT Fri, 12 Nov 2021 19:22:10 +0000").unwrap();
        assert_eq!(b.version, "1");
        assert_eq!(b.version_number().unwrap(), 1);
        assert_eq!(b.flags, flags);
        assert_eq!(b.extra, "Fri, 12 Nov 2021 19:22:10 +0000");
        assert!(b.smp());
        assert!(b.preempt());
        assert!(!b.preemptrt());
        #[cfg(feature = "chrono")]
        let _ = b.extra_date().unwrap();

        // For Debian and others:
        let c = BuildInfo::from_str("#1 SMP Debian 5.10.46-4 (2021-08-03)").unwrap();
        let mut flags: HashSet<String> = HashSet::new();
        flags.insert("SMP".to_string());
        assert_eq!(c.version, "1");
        assert_eq!(c.version_number().unwrap(), 1);
        assert_eq!(c.flags, flags);
        assert_eq!(c.extra, "Debian 5.10.46-4 (2021-08-03)");
        assert!(c.smp());
        assert!(!c.preempt());
        assert!(!c.preemptrt());
        // Skip the date parsing for now
    }

    #[test]
    fn test_semaphore_limits() {
        // Note that the below string has tab characters in it. Make sure to not remove them.
        let a = SemaphoreLimits::from_str("32000	1024000000	500	32000").unwrap();
        let b = SemaphoreLimits {
            semmsl: 32_000,
            semmns: 1_024_000_000,
            semopm: 500,
            semmni: 32_000,
        };
        assert_eq!(a, b);

        let a = SemaphoreLimits::from_str("1");
        assert!(a.is_err() && a.err().unwrap() == "Missing SEMMNS");

        let a = SemaphoreLimits::from_str("1 string 500 3200");
        assert!(a.is_err() && a.err().unwrap() == "Failed to parse SEMMNS");
    }
}
procfs-core-0.17.0/src/sys/mod.rs

//! Sysctl is a means of configuring certain aspects of the kernel at run-time,
//! and the `/proc/sys/` directory is there so that you don't even need special tools to do it!
//!
//! This directory (present since 1.3.57) contains a number of files
//! and subdirectories corresponding to kernel variables.
//! These variables can be read and sometimes modified using the `/proc` filesystem,
//! and the (deprecated) sysctl(2) system call.
pub mod kernel;
procfs-core-0.17.0/src/sysvipc_shm.rs

use std::io;
use super::{expect, ProcResult};
use std::str::FromStr;

#[cfg(feature = "serde1")]
use serde::{Deserialize, Serialize};

/// A shared memory segment parsed from `/proc/sysvipc/shm`
/// Relation with [crate::process::MMapPath::Vsys]
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
#[allow(non_snake_case)]
pub struct Shm {
    /// Segment key
    pub key: i32,
    /// Segment ID, unique
    pub shmid: u64,
    /// Access permissions, as octal
    pub perms: u16,
    /// Size in bytes
    pub size: u64,
    /// Creator PID
    pub cpid: i32,
    /// Last operator PID
    pub lpid: i32,
    /// Number of attached processes
    pub nattch: u32,
    /// User ID
    pub uid: u16,
    /// Group ID
    pub gid: u16,
    /// Creator UID
    pub cuid: u16,
    /// Creator GID
    pub cgid: u16,
    /// Time of last `shmat` (attach), epoch
    pub atime: u64,
    /// Time of last `shmdt` (detach), epoch
    pub dtime: u64,
    /// Time of last permission change, epoch
    pub ctime: u64,
    /// Current part of the shared memory resident in memory
    pub rss: u64,
    /// Current part of the shared memory in SWAP
    pub swap: u64,
}

/// A set of shared memory segments parsed from `/proc/sysvipc/shm`
#[derive(Debug, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct SharedMemorySegments(pub Vec<Shm>);

impl super::FromBufRead for SharedMemorySegments {
    fn from_buf_read<R: io::BufRead>(r: R) -> ProcResult<Self> {
        let mut vec = Vec::new();

        // See printing code here:
        // https://elixir.bootlin.com/linux/latest/source/ipc/shm.c#L1737
        for line in r.lines().skip(1) {
            let line = expect!(line);
            let mut s = line.split_whitespace();

            let key = expect!(i32::from_str(expect!(s.next())));
            let shmid = expect!(u64::from_str(expect!(s.next())));
            let perms = expect!(u16::from_str(expect!(s.next())));
            let size = expect!(u64::from_str(expect!(s.next())));
            let cpid = expect!(i32::from_str(expect!(s.next())));
            let lpid = expect!(i32::from_str(expect!(s.next())));
            let nattch = expect!(u32::from_str(expect!(s.next())));
            let uid = expect!(u16::from_str(expect!(s.next())));
            let gid = expect!(u16::from_str(expect!(s.next())));
            let cuid = expect!(u16::from_str(expect!(s.next())));
            let cgid = expect!(u16::from_str(expect!(s.next())));
            let atime = expect!(u64::from_str(expect!(s.next())));
            let dtime = expect!(u64::from_str(expect!(s.next())));
            let ctime = expect!(u64::from_str(expect!(s.next())));
            let rss = expect!(u64::from_str(expect!(s.next())));
            let swap = expect!(u64::from_str(expect!(s.next())));

            let shm = Shm {
                key,
                shmid,
                perms,
                size,
                cpid,
                lpid,
                nattch,
                uid,
                gid,
                cuid,
                cgid,
                atime,
                dtime,
                ctime,
                rss,
                swap,
            };
            vec.push(shm);
        }

        Ok(SharedMemorySegments(vec))
    }
}
procfs-core-0.17.0/src/uptime.rs

use crate::{expect, ProcResult};
use std::io::Read;
use std::str::FromStr;
use std::time::Duration;

/// The uptime of the system, based on the `/proc/uptime` file.
#[derive(Debug, Clone)]
#[non_exhaustive]
pub struct Uptime {
    /// The uptime of the system (including time spent in suspend).
    pub uptime: f64,
    /// The sum of how much time each core has spent idle.
    pub idle: f64,
}

impl super::FromRead for Uptime {
    fn from_read<R: Read>(mut r: R) -> ProcResult<Self> {
        let mut buf = Vec::with_capacity(128);
        r.read_to_end(&mut buf)?;
        let line = String::from_utf8_lossy(&buf);
        let buf = line.trim();
        let mut s = buf.split(' ');
        let uptime = expect!(f64::from_str(expect!(s.next())));
        let idle = expect!(f64::from_str(expect!(s.next())));

        Ok(Uptime { uptime, idle })
    }
}

impl Uptime {
    /// The uptime of the system (including time spent in suspend).
    pub fn uptime_duration(&self) -> Duration {
        let secs = self.uptime.trunc() as u64;
        let csecs = (self.uptime.fract() * 100.0).round() as u32;
        let nsecs = csecs * 10_000_000;

        Duration::new(secs, nsecs)
    }

    /// The sum of how much time each core has spent idle.
    pub fn idle_duration(&self) -> Duration {
        let secs = self.idle.trunc() as u64;
        let csecs = (self.idle.fract() * 100.0).round() as u32;
        let nsecs = csecs * 10_000_000;

        Duration::new(secs, nsecs)
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::FromRead;
    use std::io::Cursor;

    #[test]
    fn test_uptime() {
        let reader = Cursor::new(b"2578790.61 1999230.98\n");
        let uptime = Uptime::from_read(reader).unwrap();

        assert_eq!(uptime.uptime_duration(), Duration::new(2578790, 610_000_000));
        assert_eq!(uptime.idle_duration(), Duration::new(1999230, 980_000_000));
    }
}
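The `/proc/uptime` conversion above (trunc to whole seconds, round the fraction to centiseconds, scale to nanoseconds) can be exercised outside the crate. Below is a minimal standalone sketch of that logic; the helper names `parse_uptime_line` and `to_duration` are illustrative and not part of the procfs API:

```rust
use std::time::Duration;

/// Split a `/proc/uptime`-style line ("uptime idle") into two f64 values.
fn parse_uptime_line(line: &str) -> Option<(f64, f64)> {
    let mut parts = line.trim().split(' ');
    let uptime = parts.next()?.parse().ok()?;
    let idle = parts.next()?.parse().ok()?;
    Some((uptime, idle))
}

/// Convert a fractional-second value to a Duration, keeping centisecond
/// precision (the resolution /proc/uptime actually provides). Rounding the
/// fraction first avoids f64 representation error (e.g. 0.6099999...).
fn to_duration(secs: f64) -> Duration {
    let whole = secs.trunc() as u64;
    let csecs = (secs.fract() * 100.0).round() as u32;
    Duration::new(whole, csecs * 10_000_000)
}

fn main() {
    // Same sample line as the crate's test_uptime.
    let (uptime, idle) = parse_uptime_line("2578790.61 1999230.98\n").unwrap();
    assert_eq!(to_duration(uptime), Duration::new(2578790, 610_000_000));
    assert_eq!(to_duration(idle), Duration::new(1999230, 980_000_000));
    println!("ok");
}
```

Note that rounding rather than truncating the fractional part is what makes `2578790.61` map to exactly 610 ms despite the value not being exactly representable as an `f64`.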