# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## 0.2.1 (2024-11-01)

### New Features

- Rename Bpf to Ebpf
  And BpfLoader to EbpfLoader.
  This also adds type aliases to preserve the use of the old names, making
  updating to a new Aya release less of a burden. These aliases are marked
  as deprecated since we'll likely remove them in a later release.

### Bug Fixes

- Fill bss maps with zeros
  The loader should fill bss maps with zeros according to the size of the
  ELF section.
  Failure to do so yields weird verifier messages as follows:

  ```
  cannot access ptr member ops with moff 0 in struct bpf_map with off 0 size 4
  ```

  Reference to this in the cilium/ebpf code is here [1]. I could not find
  a reference in libbpf.

### Other

- cgroup_iter_order NFPROTO* nf_inet_hooks
  Adds the following to codegen:
  - `bpf_cgroup_iter_order`: used in `bpf_link_info.iter.group.order`
  - `NFPROTO_*`: used in `bpf_link_info.netfilter.pf`
  - `nf_inet_hooks`: used in `bpf_link_info.netfilter.hooknum`

  Include `linux/netfilter.h` in `linux_wrapper.h` for `NFPROTO_*` and
  `nf_inet_hooks` to generate.
- revamp MapInfo be more friendly with older kernels
  Adds detection for whether a field is available in `MapInfo`:
  - For `map_type()`, we return the new enum `MapType` instead of the
    integer representation.
  - For fields that can't be zero, we return `Option` type.
  - For `name_as_str()`, it now uses the feature probe `bpf_name()` to
    detect if field is available.
    Although the feature probe checks for program name, it can also be
    used for map name since they were both introduced in the same commit.
- revamp ProgramInfo be more friendly with older kernels
  Purpose of this commit is to add detections for whether a field is
  available in `ProgramInfo`.
  - For `program_type()`, we return the new enum `ProgramType` instead of
    the integer representation.
  - For fields that we know cannot be zero, we return `Option` type.
  - For `name_as_str()`, it now also uses the feature probe `bpf_name()`
    to detect if field is available or not.
  - Two additional feature probes are added for the fields:
    - `prog_info_map_ids()` probe -> `map_ids()` field
    - `prog_info_gpl_compatible()` probe -> `gpl_compatible()` field

  With the `prog_info_map_ids()` probe, the previous implementation that
  I had for `bpf_prog_get_info_by_fd()` is shortened to use the probe
  instead of having to make 2 potential syscalls.
  The `test_loaded_at()` test is also moved into info tests since it is
  better related to the info tests.
- add conversion u32 to enum type for prog, link, & attach type
  Add conversion from u32 to program type, link type, and attach type.
  Additionally, remove duplicate match statement for u32 conversion to
  `BPF_MAP_TYPE_BLOOM_FILTER` & `BPF_MAP_TYPE_CGRP_STORAGE`.
  New error `InvalidTypeBinding` is created to represent when a
  parsed/received value binding to a type is invalid.
  This is used in the new conversions added here, and also replaces
  `InvalidMapTypeError` in `TryFrom` for `bpf_map_type`.
- add archs powerpc64 and s390x to aya
  bpfman, a project using aya, has a requirement to support the powerpc64
  and s390x architectures, so these two architectures are added to aya.
- Generate new bindings

### Test

- adjust test to not use byte arrays
  Where possible, replace the hardcoded byte arrays in the tests with the
  structs they represent, then convert the structs to byte arrays.
- adjust test byte arrays for big endian
  While adding support for s390x (a big endian architecture), we found
  that some of the unit tests have structures and files implemented as
  byte arrays. They are all coded as little endian and need a big endian
  version to work properly.

### New Features (BREAKING)

- Rename BpfRelocationError -> EbpfRelocationError
- Rename BpfSectionKind to EbpfSectionKind

### Commit Statistics

- 25 commits contributed to the release over the course of 241 calendar days.
- 247 days passed between releases.
- 12 commits were understood as [conventional](https://www.conventionalcommits.org).
- 0 issues like '(#ID)' were seen in commit messages

### Commit Details
view details * **Uncategorized** - Merge pull request #1073 from dave-tucker/reloc-bug ([`b2ac9fe`](https://github.com/aya-rs/aya/commit/b2ac9fe85db6c25d0b8155a75a2df96a80a19811)) - Fill bss maps with zeros ([`ca0c32d`](https://github.com/aya-rs/aya/commit/ca0c32d1076af81349a52235a4b6fb3937a697b3)) - Merge pull request #1055 from aya-rs/codegen ([`59b3873`](https://github.com/aya-rs/aya/commit/59b3873a92d1eb49ca1008cb193e962fa95b3e97)) - [codegen] Update libbpf to 80b16457cb23db4d633b17ba0305f29daa2eb307 ([`f8ad84c`](https://github.com/aya-rs/aya/commit/f8ad84c3d322d414f27375044ba694a169abfa76)) - Cgroup_iter_order NFPROTO* nf_inet_hooks ([`366c599`](https://github.com/aya-rs/aya/commit/366c599c2083baf72c40c816da2c530dec7fd612)) - Release aya-obj v0.2.0, aya v0.13.0, safety bump aya v0.13.0 ([`c169b72`](https://github.com/aya-rs/aya/commit/c169b727e6b8f8c2dda57f54b8c77f8b551025c6)) - Appease clippy ([`aa240ba`](https://github.com/aya-rs/aya/commit/aa240baadf99d3fea0477a9b3966789b0f4ffe57)) - Merge pull request #1007 from tyrone-wu/aya/info-api ([`15eb935`](https://github.com/aya-rs/aya/commit/15eb935bce6d41fb67189c48ce582b074544e0ed)) - Revamp MapInfo be more friendly with older kernels ([`fbb0930`](https://github.com/aya-rs/aya/commit/fbb09304a2de0d8baf7ea20c9727fcd2e4fb7f41)) - Revamp ProgramInfo be more friendly with older kernels ([`88f5ac3`](https://github.com/aya-rs/aya/commit/88f5ac31142f1657b41b1ee0f217dcd9125b210a)) - Add conversion u32 to enum type for prog, link, & attach type ([`1634fa7`](https://github.com/aya-rs/aya/commit/1634fa7188e40ed75da53517f1fdb7396c348c34)) - Merge pull request #974 from Billy99/billy99-arch-ppc64-s390x ([`ab5e688`](https://github.com/aya-rs/aya/commit/ab5e688fd49fcfb402ad47d51cb445437fbd8cb7)) - Adjust test to not use byte arrays ([`4dc4b5c`](https://github.com/aya-rs/aya/commit/4dc4b5ccd48bd86e2cc59ad7386514c1531450af)) - Add archs powerpc64 and s390x to aya 
([`b513af1`](https://github.com/aya-rs/aya/commit/b513af12e8baa5c5097eaf0afdae61a830c3f877)) - Adjust test byte arrays for big endian ([`eef7346`](https://github.com/aya-rs/aya/commit/eef7346fb2231f8741410381198015cceeebfac9)) - Merge pull request #989 from aya-rs/codegen ([`8015e10`](https://github.com/aya-rs/aya/commit/8015e100796c550804ccf8fea691c63ec1ac36b8)) - [codegen] Update libbpf to 686f600bca59e107af4040d0838ca2b02c14ff50 ([`8d7446e`](https://github.com/aya-rs/aya/commit/8d7446e01132fe1751605b87a6b4a0165273de15)) - Merge pull request #978 from aya-rs/codegen ([`06aa5c8`](https://github.com/aya-rs/aya/commit/06aa5c8ed344bd0d85096a0fd033ff0bd90a2f88)) - [codegen] Update libbpf to c1a6c770c46c6e78ad6755bf596c23a4e6f6b216 ([`8b50a6a`](https://github.com/aya-rs/aya/commit/8b50a6a5738b5a57121205490d26805c74cb63de)) - Document miri skip reasons ([`35962a4`](https://github.com/aya-rs/aya/commit/35962a4794484aa3b37dadc98a70a659fd107b75)) - Generate new bindings ([`b06ff40`](https://github.com/aya-rs/aya/commit/b06ff402780b80862933791831c578e4c339fc96)) - Merge pull request #528 from dave-tucker/rename-all-the-things ([`63d8d4d`](https://github.com/aya-rs/aya/commit/63d8d4d34bdbbee149047dc0a5e9c2b191f3b32d)) - Rename Bpf to Ebpf ([`8c79b71`](https://github.com/aya-rs/aya/commit/8c79b71bd5699a686f33360520aa95c1a2895fa5)) - Rename BpfRelocationError -> EbpfRelocationError ([`fd48c55`](https://github.com/aya-rs/aya/commit/fd48c55466a23953ce7a4912306e1acf059b498b)) - Rename BpfSectionKind to EbpfSectionKind ([`cf3e2ca`](https://github.com/aya-rs/aya/commit/cf3e2ca677c81224368fb2838ebc5b10ee98419a))
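The u32-to-enum conversions described in this release follow Rust's standard fallible-conversion pattern. The sketch below is illustrative only: apart from the `InvalidTypeBinding` error name taken from the entry above, the type names, variant set, and numeric values are stand-ins, not aya-obj's actual definitions.

```rust
/// Error for when a parsed/received numeric value has no matching enum
/// variant. The name comes from the changelog; the shape is a sketch.
#[derive(Debug, PartialEq)]
pub struct InvalidTypeBinding<T> {
    pub value: T,
}

/// Illustrative subset of a map-type enum (discriminants are examples).
#[derive(Debug, PartialEq)]
pub enum MapType {
    Hash = 1,
    Array = 2,
    BloomFilter = 30,
}

impl TryFrom<u32> for MapType {
    type Error = InvalidTypeBinding<u32>;

    fn try_from(v: u32) -> Result<Self, Self::Error> {
        Ok(match v {
            1 => Self::Hash,
            2 => Self::Array,
            30 => Self::BloomFilter,
            // Unknown values surface the offending number instead of
            // silently mapping to a default or panicking.
            v => return Err(InvalidTypeBinding { value: v }),
        })
    }
}

fn main() {
    assert_eq!(MapType::try_from(2), Ok(MapType::Array));
    assert_eq!(
        MapType::try_from(999),
        Err(InvalidTypeBinding { value: 999 })
    );
}
```

The same shape extends naturally to program, link, and attach types: one `TryFrom<u32>` impl per enum, all sharing the generic error type.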
## 0.2.0 (2024-10-09)

### New Features

- Rename Bpf to Ebpf
  And BpfLoader to EbpfLoader.
  This also adds type aliases to preserve the use of the old names, making
  updating to a new Aya release less of a burden. These aliases are marked
  as deprecated since we'll likely remove them in a later release.

### Other

- revamp MapInfo be more friendly with older kernels
  Adds detection for whether a field is available in `MapInfo`:
  - For `map_type()`, we return the new enum `MapType` instead of the
    integer representation.
  - For fields that can't be zero, we return `Option` type.
  - For `name_as_str()`, it now uses the feature probe `bpf_name()` to
    detect if field is available.
    Although the feature probe checks for program name, it can also be
    used for map name since they were both introduced in the same commit.
- revamp ProgramInfo be more friendly with older kernels
  Purpose of this commit is to add detections for whether a field is
  available in `ProgramInfo`.
  - For `program_type()`, we return the new enum `ProgramType` instead of
    the integer representation.
  - For fields that we know cannot be zero, we return `Option` type.
  - For `name_as_str()`, it now also uses the feature probe `bpf_name()`
    to detect if field is available or not.
  - Two additional feature probes are added for the fields:
    - `prog_info_map_ids()` probe -> `map_ids()` field
    - `prog_info_gpl_compatible()` probe -> `gpl_compatible()` field

  With the `prog_info_map_ids()` probe, the previous implementation that
  I had for `bpf_prog_get_info_by_fd()` is shortened to use the probe
  instead of having to make 2 potential syscalls.
  The `test_loaded_at()` test is also moved into info tests since it is
  better related to the info tests.
- add conversion u32 to enum type for prog, link, & attach type
  Add conversion from u32 to program type, link type, and attach type.
  Additionally, remove duplicate match statement for u32 conversion to
  `BPF_MAP_TYPE_BLOOM_FILTER` & `BPF_MAP_TYPE_CGRP_STORAGE`.
  New error `InvalidTypeBinding` is created to represent when a
  parsed/received value binding to a type is invalid.
  This is used in the new conversions added here, and also replaces
  `InvalidMapTypeError` in `TryFrom` for `bpf_map_type`.
- add archs powerpc64 and s390x to aya
  bpfman, a project using aya, has a requirement to support the powerpc64
  and s390x architectures, so these two architectures are added to aya.
- Generate new bindings

### Test

- adjust test to not use byte arrays
  Where possible, replace the hardcoded byte arrays in the tests with the
  structs they represent, then convert the structs to byte arrays.
- adjust test byte arrays for big endian
  While adding support for s390x (a big endian architecture), we found
  that some of the unit tests have structures and files implemented as
  byte arrays. They are all coded as little endian and need a big endian
  version to work properly.

### New Features (BREAKING)

- Rename BpfRelocationError -> EbpfRelocationError
- Rename BpfSectionKind to EbpfSectionKind

## 0.1.0 (2024-02-28)

### Chore

- Use the cargo workspace package table
  This allows for inheritance of common fields from the workspace root.
  The following fields have been made common:
  - authors
  - license
  - repository
  - homepage
  - edition
- Appease clippy unused imports

### Documentation

- Add CHANGELOG

### Other

- appease new nightly clippy lints

  ```
  error: unnecessary use of `get("foo").is_some()`
      --> aya-obj/src/obj.rs:1690:26
       |
  1690 |         assert!(obj.maps.get("foo").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key("foo")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check
  note: the lint level is defined here
      --> aya-obj/src/lib.rs:68:9
       |
    68 | #![deny(clippy::all, missing_docs)]
       |         ^^^^^^^^^^^
       = note: `#[deny(clippy::unnecessary_get_then_check)]` implied by `#[deny(clippy::all)]`

  error: unnecessary use of `get("foo").is_some()`
      --> aya-obj/src/obj.rs:1777:26
       |
  1777 |         assert!(obj.maps.get("foo").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key("foo")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

  error: unnecessary use of `get("bar").is_some()`
      --> aya-obj/src/obj.rs:1778:26
       |
  1778 |         assert!(obj.maps.get("bar").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key("bar")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

  error: unnecessary use of `get("baz").is_some()`
      --> aya-obj/src/obj.rs:1779:26
       |
  1779 |         assert!(obj.maps.get("baz").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key("baz")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

  error: unnecessary use of `get(".bss").is_some()`
      --> aya-obj/src/obj.rs:1799:26
       |
  1799 |         assert!(obj.maps.get(".bss").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key(".bss")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

  error: unnecessary use of `get(".rodata").is_some()`
      --> aya-obj/src/obj.rs:1810:26
       |
  1810 |         assert!(obj.maps.get(".rodata").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key(".rodata")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

  error: unnecessary use of `get(".rodata.boo").is_some()`
      --> aya-obj/src/obj.rs:1821:26
       |
  1821 |         assert!(obj.maps.get(".rodata.boo").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key(".rodata.boo")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

  error: unnecessary use of `get(".data").is_some()`
      --> aya-obj/src/obj.rs:1832:26
       |
  1832 |         assert!(obj.maps.get(".data").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key(".data")`
       |
       = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#unnecessary_get_then_check

  error: unnecessary use of `get(".data.boo").is_some()`
      --> aya-obj/src/obj.rs:1843:26
       |
  1843 |         assert!(obj.maps.get(".data.boo").is_some());
       |                          ^^^^^^^^^^^^^^^^^^^^^^^^^^ help: replace it with: `contains_key(".data.boo")`
  ```
- Handle lack of match of enum variants correctly
  When comparing `local_spec` with `target_spec` for enum relocations, we
  can encounter a situation where a matching variant in a candidate spec
  doesn't exist.
  Before this change, such a case wasn't handled explicitly and therefore
  resulted in returning the currently constructed `target_spec` at the
  end. The problem is that such a `target_spec` was, due to the lack of a
  match, incomplete. It didn't contain any `accessors` nor `parts`.
  Later usage of such an incomplete `target_spec` was leading to panics,
  since the code operating on enums' `target_spec` expects at least one
  `accessor` to be available.
- don't parse labels as programs
  Fixes a bug introduced by https://github.com/aya-rs/aya/pull/413 where
  we were generating a bunch of spurious LBB* programs.
- remove redundant keys
  `default-features = false` is already in the root Cargo.toml.
- group_imports = "StdExternalCrate"
  High time we stop debating this; let the robots do the work.
- make maps work on kernels not supporting ProgIds
  On startup, the kernel is probed for support of chained program ids for
  CpuMap, DevMap and DevMapHash, and maps are patched at load time to
  have the proper size. Then, at runtime, the support is checked and an
  error is raised if a program id is passed when the kernel does not
  support it.
- add support for map-bound XDP programs
  Such programs are to be bound to cpumap or devmap instead of the usual
  network interfaces.
- `MapFd` and `SockMapFd` are owned
- reduce indirection in section parsing
  Remove repetition of permitted cgroup attach types. Make optionality of
  name more explicit rather than pretending both kind and name are equal
  to section.
- MapData::fd is non-optional
  The primary driver of change here is that `MapData::create` is now a
  factory function that returns `Result` rather than mutating
  `&mut self`. The remaining changes are consequences of that change, the
  most notable of which is the removal of several errors which are no
  longer possible.
- Add clang-format
- s/types.types[i]/*t/ where possible
  We already have a mutable reference in scope, use it where possible.
- Mutate BTF in-place without clone
  The BTF we're working on is Cow anyway so modifying in-place is fine.
  All we need to do is store some information before we start our mutable
  iteration to avoid concurrently borrowing types both mutably and
  immutably.
- use Self instead of restating the type
- avoid multiple vector allocations
  Rather than creating an empty vector and iteratively appending - which
  might induce intermediate allocations - create an ExactSizeIterator and
  collect it into a vector, which should produce exactly one allocation.
- Fix (func|line)_info multiple progs in section
  This commit fixes the (func|line)_info when we have multiple programs
  in the same section. The integration test reloc.bpf.c serves as our
  test case here. This required filtering down the (func|line)_info to
  only that in scope of the current symbol, then adjusting the offsets to
  appease the kernel.
- Remove name from ProgramSection
  The name here is never used as we get the program name from the symbol
  table instead.
- Propagate sleepable into ProgramSection
- Find programs using the symbol table
  This makes a few changes to the way that Aya reads the ELF object
  files.
  1. To find programs in a section, we use the symbols table. This allows
     for cases where multiple programs could appear in the same section.
  2. When parsing our ELF file we build symbols_by_section_index as an
     optimization as we use it for legacy maps, BTF maps and now
     programs.

  As a result of these changes the "NAME" used in `bpf.prog_mut("NAME")`
  is now ALWAYS the same as the function name in the eBPF code, making
  the user experience more consistent.
- better panic messages
  Always include operands in failing assertions. Use assert_matches over
  manual match + panic.
- Define dependencies on the workspace level
  This way we will avoid version mismatches and make differences in
  features across our crates clearer.
- avoid an allocation
- remove dead code
  This logic moved in bb595c4e69ff0c72c8327e7f64d43ca7a4bc16a3. The
  mutation here prevented the compiler from noticing.
- don't allocate static strings
- aya-obj: Make it possible to externally assemble BtfEnum
- Make Features part of the public API
  This commit adds a new probe for bpf_attach_cookie, which would be used
  to implement USDT probes. Since USDT probes aren't currently supported,
  this triggers a dead_code warning in clippy.
  There are cases where exposing FEATURES - our lazy static - is actually
  helpful to users of the library. For example, they may wish to choose
  to load a different version of their bytecode based on current
  features. Or, in the case of an orchestrator like bpfd, we might want
  to allow users to describe which features their program needs and
  return a nice error message if one or more nodes in their cluster don't
  support the necessary feature set.
  To do this without breaking the API, we make all the internal members
  of the `Features` and `BtfFeatures` structs private, and add accessors
  for them. We then add a `features()` API to avoid leaking the
  lazy_static.
- allow global value to be optional
  This allows not erroring out when a global symbol is missing from the
  object.
- update hashbrown requirement from 0.13 to 0.14
  Updates the requirements on [hashbrown](https://github.com/rust-lang/hashbrown)
  to permit the latest version.
  - [Changelog](https://github.com/rust-lang/hashbrown/blob/master/CHANGELOG.md)
  - [Commits](https://github.com/rust-lang/hashbrown/compare/v0.13.1...v0.14.0)

  ---
  updated-dependencies:
  - dependency-name: hashbrown
    dependency-type: direct:production
  ...
- update rbpf requirement from 0.1.0 to 0.2.0
  Updates the requirements on [rbpf](https://github.com/qmonnet/rbpf) to
  permit the latest version.
  - [Commits](https://github.com/qmonnet/rbpf/compare/v0.1.0...v0.2.0)

  ---
  updated-dependencies:
  - dependency-name: rbpf
    dependency-type: direct:production
  ...
- Make relocations less strict
  Missing relocations at load time shouldn't cause an error in aya-obj
  but instead poison related instructions.
  This makes struct flavors work.
- Apply BTF relocations to all functions
  This fixes wrong logic in aya that caused non-entrypoint functions to
  not have any working BTF relocations.
  Also fix missing section_offset computation for instruction offset in
  multiple spots.
- Do not create data maps on kernel without global data support
  Fix map creation failure when a BPF program has a data section on older
  kernels (< 5.2).
  If the BPF program uses that section, relocation will fail accordingly
  and report an error.
- Fix ProgramSection::from_str for bss and rodata sections
- Move program's functions to the same map
- update object requirement from 0.30 to 0.31
  Updates the requirements on [object](https://github.com/gimli-rs/object)
  to permit the latest version.
  - [Release notes](https://github.com/gimli-rs/object/releases)
  - [Changelog](https://github.com/gimli-rs/object/blob/master/CHANGELOG.md)
  - [Commits](https://github.com/gimli-rs/object/compare/0.30.0...0.31.0)

  ---
  updated-dependencies:
  - dependency-name: object
    dependency-type: direct:production
  ...
- flip feature "no_std" to feature "std"
  This fixes `cargo build --all-features` by sidestepping the feature
  unification problem described in The Cargo Book[0].
  Add `cargo hack --feature-powerset` to CI to enforce that this doesn't
  regress (and that all combinations of features work).
  Since error_in_core is nightly-only, use core-error and a fake std
  module to allow aya-obj to build without std on stable.

  [0] https://doc.rust-lang.org/cargo/reference/features.html#feature-unification
- Add sanitize code for kernels without bpf_probe_read_kernel
  Required for kernels before 5.5.
  Also move Features to aya-obj.
- fix DATASEC to STRUCT conversion
  This fixes the following issues:
  - Previously the DATASEC name wasn't sanitized, resulting in
    "Invalid name" being returned by old kernels.
  - The newly created BTF struct had a size of 0, making old kernels
    refuse it.

  This was tested on Debian 10 with kernel 4.19.0-21.
- support relocations across multiple text sections + fixes
  Fix R_BPF_64_64 text relocations in sections other than .text (for
  instance .text.unlikely). Also fix misc bugs triggered by integration
  tests.
- change two drain() calls to into_iter()
- rework `maps` section parsing
  Avoid allocations and add comments explaining how things work.
- fix compilation with nightly
- More discrete feature logging
  Just use the Debug formatter vs. printing a message for each probe.
- Make features a lazy_static
- Add multibuffer support for XDP
- Add support for multibuffer programs
  This adds support for loading XDP programs that are multi-buffer
  capable, which is signalled using the xdp.frags section name. When this
  is set, we should set the BPF_F_XDP_HAS_FRAGS flag when loading the
  program into the kernel.
- btf: add support for BTF_KIND_ENUM64
- btf: fix relocations for signed enums (32 bits)
  Enums now carry a signed bit in the info flags. Take it into account
  when applying enum relocations.
- btf: switch ComputedRelocationValue::value to u64
  This is in preparation for adding Enum64 relocation support.
- Add new map types
  Include all new map types which were included in the last libbpf update
  (5d13fd5acaa9).
- Update `BPF_MAP_TYPE_CGROUP_STORAGE` name to `BPF_MAP_TYPE_CGRP_STORAGE`
  It changed in libbpf.
- update documentation and versioning info
  - Set the version number of `aya-obj` to `0.1.0`.
  - Update the description of the `aya-obj` crate.
  - Add a section in README and rustdoc warning about the unstable API.
- add documentation on program names
  This commit adds documentation on how program names are parsed from
  section names, as is used by `aya_obj::Object.programs` as HashMap
  keys, and updates the examples to use program names.
- fix rustfmt diffs and typos
- add no_std feature
  The crate has few libstd dependencies. Since it should be
  platform-independent in principle, making it no_std like the object
  crate would seem reasonable.
  However, the feature `error_in_core` is not yet stabilized, and the
  thiserror crate currently offers no no_std support.
  When the feature no_std is selected, we enable the `error_in_core`
  feature, switch to thiserror-core and replace the HashMap with the one
  in hashbrown.
- add integration tests against rbpf
- add basic documentation to public members
  Types relevant to maps are moved into aya_obj::maps.
  Some members are marked `pub(crate)` again.
- migrate aya::obj into a separate crate
  To split the crate into two, several changes were made:
  1. Most `pub(crate)` are now `pub` to allow access from Aya;
  2. Parts of BpfError are merged into, for example, RelocationError;
  3. BTF part of Features is moved into the new crate;
  4. `#![deny(missing_docs)]` is removed temporarily;
  5. Some other code gets moved into the new crate, mainly:
     - aya::{bpf_map_def, BtfMapDef, PinningType},
     - aya::programs::{CgroupSock*AttachType},

  The new crate is currently allowing missing_docs. Member visibility
  will be adjusted later to minimize exposure of implementation details.
- migrate bindgen destination

### Test

- avoid lossy string conversions
  We can be strict in tests.

### Commit Statistics

- 146 commits contributed to the release.
- 63 commits were understood as [conventional](https://www.conventionalcommits.org).
- 1 unique issue was worked on: [#608](https://github.com/aya-rs/aya/issues/608)

### Commit Details
view details * **[#608](https://github.com/aya-rs/aya/issues/608)** - Fix load errors for empty (but existent) BTF/BTF.ext sections ([`5894c4c`](https://github.com/aya-rs/aya/commit/5894c4ce82948c7e5fe766f41b690d036fcca907)) * **Uncategorized** - Release aya-obj v0.1.0, aya v0.12.0, safety bump aya-log v0.2.0 ([`0e99fa0`](https://github.com/aya-rs/aya/commit/0e99fa0f340b2fb2e0da3b330aa6555322a77eec)) - Merge pull request #891 from dave-tucker/changelog ([`431ce23`](https://github.com/aya-rs/aya/commit/431ce23f27ef5c36a6b38c73b38f23b1cf007900)) - Add CHANGELOG ([`72e8aab`](https://github.com/aya-rs/aya/commit/72e8aab6c8be8663c5b6ff6b606a51debf512f7d)) - Appease new nightly clippy lints ([`3369169`](https://github.com/aya-rs/aya/commit/3369169aaca6510a47318fc29bbdb801b60b1c21)) - Merge pull request #882 from dave-tucker/metadata ([`0fadd69`](https://github.com/aya-rs/aya/commit/0fadd695377b8a3f0d9a3af3bc8140f0f1bed8d2)) - Use the cargo workspace package table ([`b3e7ef7`](https://github.com/aya-rs/aya/commit/b3e7ef741c5b8d09fc7dc8302576f8174be75ff4)) - Merge pull request #885 from dave-tucker/nightly-up ([`2d72197`](https://github.com/aya-rs/aya/commit/2d721971cfae39e168f0dc4dac1f219490c16fbf)) - Appease clippy unused imports ([`770a95e`](https://github.com/aya-rs/aya/commit/770a95e0779a6a943c2f5439334fa208ac2ca7e6)) - Handle lack of match of enum variants correctly ([`c05a3b6`](https://github.com/aya-rs/aya/commit/c05a3b69b7a94036c380bd64c6de51377987077c)) - Don't parse labels as programs ([`35e21ae`](https://github.com/aya-rs/aya/commit/35e21ae0079d38e90d90fc85d29580c8b44b16d4)) - Merge pull request #812 from tamird/redundant-cargo ([`715d490`](https://github.com/aya-rs/aya/commit/715d49022eefb152ef8817c730d9eac2b3e6d66f)) - Remove redundant keys ([`cc48523`](https://github.com/aya-rs/aya/commit/cc48523347c2be5520779ef8eeadc6d3a68649d0)) - Merge pull request #797 from aya-rs/rustfmt-group-imports 
([`373fb7b`](https://github.com/aya-rs/aya/commit/373fb7bf06ba80ee4c120d8c112f5e810204c472)) - Group_imports = "StdExternalCrate" ([`d16e607`](https://github.com/aya-rs/aya/commit/d16e607fd4b6258b516913071fdacafeb2bbbff9)) - Merge pull request #527 from Tuetuopay/xdpmaps ([`7f9ce06`](https://github.com/aya-rs/aya/commit/7f9ce062f4b8b5cefbe07d8ea47363266f7eacd1)) - Aya, bpf: misc fixes following review comments ([`579e3ce`](https://github.com/aya-rs/aya/commit/579e3cee22ae8e932efb0894ca7fd9ceb91ca7fa)) - Make maps work on kernels not supporting ProgIds ([`00dc7a5`](https://github.com/aya-rs/aya/commit/00dc7a5bd4468b7d86d7f167a49e78d89016e2ac)) - Add support for map-bound XDP programs ([`139f382`](https://github.com/aya-rs/aya/commit/139f3826383daba9a10dc7aacc079f31d28980fc)) - Merge pull request #770 from aya-rs/mapfd-is-owned ([`41d01f6`](https://github.com/aya-rs/aya/commit/41d01f638bc81306749dd0f6aa7d2a677f4de27b)) - `MapFd` and `SockMapFd` are owned ([`f415926`](https://github.com/aya-rs/aya/commit/f41592663cda156082255b93db145cfdd19378e5)) - Merge pull request #766 from aya-rs/obj-better-sense ([`e9690df`](https://github.com/aya-rs/aya/commit/e9690df834b502575321ba32fd09f93eaacb03fa)) - Reduce indirection in section parsing ([`c139627`](https://github.com/aya-rs/aya/commit/c139627f8f180638b786b5e3cd48b8473d96fe56)) - Merge pull request #742 from aya-rs/avoid-utf-assumption ([`8ffd9bb`](https://github.com/aya-rs/aya/commit/8ffd9bb236a4dfc7694bbdac2b6ea1236b238582)) - Avoid lossy string conversions ([`572d047`](https://github.com/aya-rs/aya/commit/572d047e37111b732be49ef3ad6fb16f70aa4063)) - Merge pull request #758 from aya-rs/map-fd-not-option ([`1d5f764`](https://github.com/aya-rs/aya/commit/1d5f764d07c06fa25167d1d4cf341913d4f0cd01)) - MapData::fd is non-optional ([`89bc255`](https://github.com/aya-rs/aya/commit/89bc255f1d14d72a61064b9b40b641b58f8970e0)) - Merge pull request #749 from dave-tucker/clang-format 
([`8ce1c00`](https://github.com/aya-rs/aya/commit/8ce1c00ad8b4ac1362eaf24d99eafd848546c9d3)) - Add clang-format ([`0212400`](https://github.com/aya-rs/aya/commit/02124002c88d7a89d6c9afd89857c4c301e09801)) - Merge pull request #734 from aya-rs/reduce-slicing ([`d3513e7`](https://github.com/aya-rs/aya/commit/d3513e7010cdab04a3d8bb5c7e7518ff67548302)) - S/types.types[i]/*t/ where possible ([`dfb6020`](https://github.com/aya-rs/aya/commit/dfb6020a1dc1d0ee28426bd9e3086dd449f643f7)) - Merge pull request #725 from dave-tucker/enum64 ([`2a55fc7`](https://github.com/aya-rs/aya/commit/2a55fc7bd3a15340b5b644d668f3a387bbdb09d3)) - Aya, aya-obj: Implement ENUM64 fixups ([`e38e256`](https://github.com/aya-rs/aya/commit/e38e2566e3393034b37c299e50c6a4b70d51ad1d)) - Merge pull request #731 from dave-tucker/noclone-btf ([`e210012`](https://github.com/aya-rs/aya/commit/e21001226fc05840867f43f6a4455a4c919e3b91)) - Mutate BTF in-place without clone ([`098d436`](https://github.com/aya-rs/aya/commit/098d4364bd0fb8551f0515cb84afda6aff23ed7f)) - Merge pull request #726 from aya-rs/btf-iter-alloc ([`761e4dd`](https://github.com/aya-rs/aya/commit/761e4ddbe3abf8b9177ebd6984465fe66696728a)) - Use Self instead of restating the type ([`826e0e5`](https://github.com/aya-rs/aya/commit/826e0e5050e9bf9e0cdff6d2a20c1169820d0e57)) - Avoid multiple vector allocations ([`2a054d7`](https://github.com/aya-rs/aya/commit/2a054d76ae167e7c2a6b4bfb1cf51770f93d394a)) - Merge pull request #721 from dave-tucker/fix-funcinfo ([`1979da9`](https://github.com/aya-rs/aya/commit/1979da92a722bacd9c984865a4c7108e22fb618f)) - Fix (func|line)_info multiple progs in section ([`79ea64c`](https://github.com/aya-rs/aya/commit/79ea64ca7fd3cc1b17573b539fd8fa8e76644beb)) - Merge pull request #720 from dave-tucker/programsection-noname ([`e915379`](https://github.com/aya-rs/aya/commit/e9153792f1c18caa5899edc7c05487eb291415a4)) - Remove name from ProgramSection 
([`cca9b8f`](https://github.com/aya-rs/aya/commit/cca9b8f1a7e345a39d852bd18a43974871d3ed4b)) - Merge pull request #711 from dave-tucker/sleepable ([`77e9603`](https://github.com/aya-rs/aya/commit/77e9603976b58491427df049a163e1945bc0bf27)) - Propagate sleepable into ProgramSection ([`677e7bd`](https://github.com/aya-rs/aya/commit/677e7bda4a826aca858311670d1592162b682dff)) - Merge pull request #413 from dave-tucker/fix-names-once-and-for-all ([`e833a71`](https://github.com/aya-rs/aya/commit/e833a71b022b39fa7c7a904b74ef0c55ff7c19ee)) - Merge pull request #704 from aya-rs/better-panic ([`868a9b0`](https://github.com/aya-rs/aya/commit/868a9b00b3701a4e035dc1d70cac934ef836655b)) - Find programs using the symbol table ([`bf7fdff`](https://github.com/aya-rs/aya/commit/bf7fdff1cef28961f096d1c1e00181e0a0c2d14e)) - Better panic messages ([`17f25a6`](https://github.com/aya-rs/aya/commit/17f25a67934ad10443a4fbb62a563b5f6edcaa5f)) - Merge pull request #699 from aya-rs/cache-again-god-damn-it ([`e95f76a`](https://github.com/aya-rs/aya/commit/e95f76a5b348070dd6833d37ea16db04f6afa612)) - Do not escape newlines on Err(LoadError).unwrap() ([`8961be9`](https://github.com/aya-rs/aya/commit/8961be95268d2a4464ef75b0898cf07f9ba44470)) - Merge pull request #667 from vadorovsky/workspace-dependencies ([`f554d42`](https://github.com/aya-rs/aya/commit/f554d421053bc34266afbf8e00b28705ab4b41d2)) - Define dependencies on the workspace level ([`96fa08b`](https://github.com/aya-rs/aya/commit/96fa08bd82233268154edf30b106876f5a4f0e30)) - Merge pull request #665 from aya-rs/dead-code-rm ([`893ab76`](https://github.com/aya-rs/aya/commit/893ab76afaa9f729967eec47cc211f0a46f6268e)) - Avoid an allocation ([`6f2a8c8`](https://github.com/aya-rs/aya/commit/6f2a8c8a5c47098fb5e5a75ecebdff493d486c97)) - Remove dead code ([`d71d1e1`](https://github.com/aya-rs/aya/commit/d71d1e199382379036dc4760e4edbd5e637e07c3)) - Merge pull request #656 from aya-rs/kernel-version-fml 
([`232cd45`](https://github.com/aya-rs/aya/commit/232cd45e41031060238d37fc7f08eb3d63fa2eeb)) - Replace matches with assert_matches ([`961f45d`](https://github.com/aya-rs/aya/commit/961f45da37616b912d2d4ed594036369f3f8285b)) - Merge pull request #650 from aya-rs/test-cleanup ([`61608e6`](https://github.com/aya-rs/aya/commit/61608e64583f9dc599eef9b8db098f38a765b285)) - Run tests with powerset of features ([`8e9712a`](https://github.com/aya-rs/aya/commit/8e9712ac024cbc05dfe8ba09a9dd725e56e34a51)) - Merge pull request #648 from aya-rs/clippy-more ([`a840a17`](https://github.com/aya-rs/aya/commit/a840a17308c1c27867e67baa62942738c5bd2caf)) - Clippy over tests and integration-ebpf ([`e621a09`](https://github.com/aya-rs/aya/commit/e621a09181d0a5ddb6289d8b13d4b89a71de63f1)) - Merge pull request #643 from aya-rs/procfs ([`6e9aba5`](https://github.com/aya-rs/aya/commit/6e9aba55fe8d23aa337b29a1cab890bb54816068)) - Remove verifier log special case ([`b5ebcb7`](https://github.com/aya-rs/aya/commit/b5ebcb7cc5fd0f719567b97f682a0ea0f8e0dc13)) - Merge pull request #641 from aya-rs/logger-messages-plz ([`4c0983b`](https://github.com/aya-rs/aya/commit/4c0983bca962e0e9b2711805ae7fbc6b53457c34)) - Hide details of VerifierLog ([`6b94b20`](https://github.com/aya-rs/aya/commit/6b94b2080dc4c122954beea814b2a1a4569e9aa3)) - Use procfs crate for kernel version parsing ([`b611038`](https://github.com/aya-rs/aya/commit/b611038d5b41a45ca70553550dbdef9aa1fd117c)) - Merge pull request #642 from aya-rs/less-strings ([`32be47a`](https://github.com/aya-rs/aya/commit/32be47a23b94902caadcc7bb1612adbd18318eca)) - Don't allocate static strings ([`27120b3`](https://github.com/aya-rs/aya/commit/27120b328aac5f992eed98b03216a9880a381749)) - Merge pull request #635 from marysaka/misc/aya-obj-enum-public ([`5c86b7e`](https://github.com/aya-rs/aya/commit/5c86b7ee950762d1cc37fc39c788e670869db231)) - Aya-obj: Make it possible to externally assemble BtfEnum 
([`d9dfd94`](https://github.com/aya-rs/aya/commit/d9dfd94f29be8c28b7fe0ef4ab560db49f7514fb)) - Merge pull request #531 from dave-tucker/probe-cookie ([`bc0d021`](https://github.com/aya-rs/aya/commit/bc0d02143f5bc6103cca27d5f0c7a40beacd0668)) - Make Features part of the public API ([`47f764c`](https://github.com/aya-rs/aya/commit/47f764c19185a69a00f3925239797caa98cd5afe)) - Merge pull request #632 from marysaka/feat/global-data-optional ([`b2737d5`](https://github.com/aya-rs/aya/commit/b2737d5b0d18ce09202ca9eb2ce772b1144ea6b8)) - Allow global value to be optional ([`93435fc`](https://github.com/aya-rs/aya/commit/93435fc85400aa036f3890c43c78c9c9eb4baa96)) - Merge pull request #626 from aya-rs/dependabot/cargo/hashbrown-0.14 ([`26c6b92`](https://github.com/aya-rs/aya/commit/26c6b92ef1d58d0703a4a020db02dca65911456c)) - Update hashbrown requirement from 0.13 to 0.14 ([`f5f8083`](https://github.com/aya-rs/aya/commit/f5f8083441afd2daed9344fc2031878d574efaf1)) - Merge pull request #623 from aya-rs/dependabot/cargo/rbpf-0.2.0 ([`53ec1f2`](https://github.com/aya-rs/aya/commit/53ec1f23ea4efe7c686a6a4fb8bb166c8d444dc8)) - Update rbpf requirement from 0.1.0 to 0.2.0 ([`fa3dd4b`](https://github.com/aya-rs/aya/commit/fa3dd4bef252566aa26577a0d42b2ff59ac2ff2a)) - Merge pull request #563 from marysaka/fix/reloc-less-strict ([`85ad019`](https://github.com/aya-rs/aya/commit/85ad0197e0e0e30c99f3af63584f9c569b752a50)) - Make relocations less strict ([`35eaa50`](https://github.com/aya-rs/aya/commit/35eaa50736d9e894eb5122b1070afd7b0442eae6)) - Merge pull request #602 from marysaka/fix/btf-reloc-all-functions ([`3a9a54f`](https://github.com/aya-rs/aya/commit/3a9a54fd9b2f69e2427accbe0451761ecc537197)) - Merge pull request #616 from nak3/fix-bump ([`3211d2c`](https://github.com/aya-rs/aya/commit/3211d2c92801d8208c76856cb271f2b7772a0313)) - Apply BTF relocations to all functions ([`c4e721f`](https://github.com/aya-rs/aya/commit/c4e721f3d334a7c2e5e6d6cd6f4ade0f1334be72)) - [codegen] Update 
libbpf to f7eb43b90f4c8882edf6354f8585094f8f3aade0 ([`0bc886f`](https://github.com/aya-rs/aya/commit/0bc886f1634443d202e24f56cb74d3dce2e66e37)) - Merge pull request #585 from probulate/tag-len-value ([`5165bf2`](https://github.com/aya-rs/aya/commit/5165bf2f99cdc228122bdab505c2059723e95a9f)) - Merge pull request #605 from marysaka/fix/global-data-reloc-ancient-kernels ([`9c437aa`](https://github.com/aya-rs/aya/commit/9c437aafd96bebc5c90fdc7f370b5415174b1019)) - Merge pull request #604 from marysaka/fix/section-kind-from-str ([`3a9058e`](https://github.com/aya-rs/aya/commit/3a9058e7625b56ac26d6bb592dd4c3a93c61d6b0)) - Do not create data maps on kernel without global data support ([`591e212`](https://github.com/aya-rs/aya/commit/591e21267a9bc9adca9818095de5a695cee7ee9b)) - Fix ProgramSection::from_str for bss and rodata sections ([`18b3d75`](https://github.com/aya-rs/aya/commit/18b3d75d096e3c90f8c5b2f7292637a3369f96a6)) - Build tests with all features ([`4e2f832`](https://github.com/aya-rs/aya/commit/4e2f8322cc6ee7ef06a1d5718405964e8da14d18)) - Move program's functions to the same map ([`9e1109b`](https://github.com/aya-rs/aya/commit/9e1109b3ce70a3668771bd11a7fda101eec3ab93)) - Merge pull request #597 from nak3/test-clippy ([`7cd1c64`](https://github.com/aya-rs/aya/commit/7cd1c642e35d271c75eb1e9d65988e539a90f2bf)) - Drop unnecessary mut ([`e67025b`](https://github.com/aya-rs/aya/commit/e67025b66f08592bb7e9a3273d56eb5669b16d90)) - Merge pull request #577 from aya-rs/dependabot/cargo/object-0.31 ([`deb054a`](https://github.com/aya-rs/aya/commit/deb054afa45cfb9ffb7b213f34fc549c9503c0dd)) - Merge pull request #545 from epompeii/lsm_sleepable ([`120b59d`](https://github.com/aya-rs/aya/commit/120b59dd2e42805cf5880ada8f1bd0ba5faf4a44)) - Update object requirement from 0.30 to 0.31 ([`4c78f7f`](https://github.com/aya-rs/aya/commit/4c78f7f1a014cf54d54c805233a0f29eb1ca5eeb)) - Merge pull request #586 from 
probulate/no-std-inversion ([`45efa63`](https://github.com/aya-rs/aya/commit/45efa6384ffbcff82ca55e151c446d930147abf0)) - Flip feature "no_std" to feature "std" ([`33a0a2b`](https://github.com/aya-rs/aya/commit/33a0a2b604e77b63b771b9d0e167c894793492b5)) - Merge branch 'aya-rs:main' into lsm_sleepable ([`1f2006b`](https://github.com/aya-rs/aya/commit/1f2006bfde865cc4308643b21d51cf4a8e69d6d4)) - Merge pull request #583 from 0xrawsec/fix-builtin-linkage ([`b2d5059`](https://github.com/aya-rs/aya/commit/b2d5059ac250b4017ba723e594292f0356c31811)) - - comment changed to be more precise - adapted test to be more readable ([`1464bdc`](https://github.com/aya-rs/aya/commit/1464bdc1d4393e1a4ab5cff3833f784444b1d175)) - Added memmove, memcmp to the list of function changed to BTF_FUNC_STATIC ([`72c1572`](https://github.com/aya-rs/aya/commit/72c15721781f758c65cd4b94def8e907e42d8c35)) - Fixed indent ([`a51c9bc`](https://github.com/aya-rs/aya/commit/a51c9bc532f101302a38cd866b40a5014fa61c54)) - Removed useless line break and comments ([`5b4fc9e`](https://github.com/aya-rs/aya/commit/5b4fc9ea93f32da4c58be4b261905b883c9ea20b)) - Add debug messages ([`74bc754`](https://github.com/aya-rs/aya/commit/74bc754862df5571a4fafb18260bc1e5c4acd9b2)) - Merge pull request #582 from marysaka/feature/no-kern-read-sanitizer ([`b5c2928`](https://github.com/aya-rs/aya/commit/b5c2928b0e0d20c48157a5862f0d2c3dd5dbb784)) - Add sanitize code for kernels without bpf_probe_read_kernel ([`1132b6e`](https://github.com/aya-rs/aya/commit/1132b6e01b86856aa1fddf179fcc7e3825e79406)) - Fixed BTF linkage of memset and memcpy to static ([`4e41da6`](https://github.com/aya-rs/aya/commit/4e41da6a86418e4e2a9241b42301a1abe38e7372)) - Merge pull request #581 from marysaka/fix/datasec-struct-conversion ([`858f77b`](https://github.com/aya-rs/aya/commit/858f77bf2cfb457765b7deb81ba75fb706c71954)) - Fix DATASEC to STRUCT conversion ([`4e33fa0`](https://github.com/aya-rs/aya/commit/4e33fa011e87cdc2fc59025b9e531b4872651cd0)) - 
Merge pull request #572 from alessandrod/reloc-fixes ([`542ada3`](https://github.com/aya-rs/aya/commit/542ada3fe7f9d4d06542253361acc5fadce3f24b)) - Support relocations across multiple text sections + fixes ([`93ac3e9`](https://github.com/aya-rs/aya/commit/93ac3e94bcb47864670c124dfe00e16ed2ab6f5e)) - Change two drain() calls to into_iter() ([`b25a089`](https://github.com/aya-rs/aya/commit/b25a08981986cac4f511433d165560576a8c9856)) - Aya, aya-obj: refactor map relocations ([`401ea5e`](https://github.com/aya-rs/aya/commit/401ea5e8482ece34b6c88de85ec474bdfc577fd4)) - Rework `maps` section parsing ([`5c4f1d6`](https://github.com/aya-rs/aya/commit/5c4f1d69a60e0c5324512a7cfbc4467b7f5d0bca)) - Review ([`85714d5`](https://github.com/aya-rs/aya/commit/85714d5cf3622da49d1442c34caa63451d9efe48)) - Macro ([`6dfb9d8`](https://github.com/aya-rs/aya/commit/6dfb9d82af9c178f4effd7a0c9095442816a014c)) - Obj ([`6a25d4d`](https://github.com/aya-rs/aya/commit/6a25d4ddec42e3408bd823fccc6e64c33575bc5c)) - Fix compilation with nightly ([`dfbe120`](https://github.com/aya-rs/aya/commit/dfbe1207c1bbd105d1daa9b08cec0e9803b5464e)) - Merge pull request #537 from aya-rs/codegen ([`8684a57`](https://github.com/aya-rs/aya/commit/8684a5783db6953b28e42bbbcdc52514fc4e6c37)) - [codegen] Update libbpf to a41e6ef3251cba858021b90c33abb9efdb17f575 ([`24f15ea`](https://github.com/aya-rs/aya/commit/24f15ea25f413633f8c498ee5be046e797acebae)) - More discrete feature logging ([`7479c1d`](https://github.com/aya-rs/aya/commit/7479c1dd6c1356bddb0401dbeea65618674524c9)) - Make features a lazy_static ([`ce22ca6`](https://github.com/aya-rs/aya/commit/ce22ca668f3e7c0f9832d28370457204537d2e50)) - Merge pull request #519 from dave-tucker/frags ([`bc83f20`](https://github.com/aya-rs/aya/commit/bc83f208b11542607e02751126a68b1ca568873b)) - Add multibuffer support for XDP ([`376c486`](https://github.com/aya-rs/aya/commit/376c48640033fdbf8b5199641f353587273f8a32)) - 
Add support for multibuffer programs ([`a18693b`](https://github.com/aya-rs/aya/commit/a18693b42dc986bde06b07540e261ecac59eef24)) - Merge pull request #453 from alessandrod/btf-kind-enum64 ([`e8e2767`](https://github.com/aya-rs/aya/commit/e8e276730e7351888a71f1196ca1bfbc06c22432)) - Btf: add support for BTF_KIND_ENUM64 ([`9a6f814`](https://github.com/aya-rs/aya/commit/9a6f8143a1a4c5c88a373701d74d96596c75242f)) - Merge pull request #501 from alessandrod/fix-enum32-relocs ([`f81b1b9`](https://github.com/aya-rs/aya/commit/f81b1b9f3ec1de5241d8882da56f1d8d7c22d994)) - Btf: fix relocations for signed enums (32 bits) ([`4482db4`](https://github.com/aya-rs/aya/commit/4482db42d86c657826efe80f484f57a601ed2f38)) - Btf: switch ComputedRelocationValue::value to u64 ([`d6b976c`](https://github.com/aya-rs/aya/commit/d6b976c6f1f6163680c179502f4f454d0cec747e)) - Fix lints ([`9f4ef6f`](https://github.com/aya-rs/aya/commit/9f4ef6f67df397c7e243435ccb3bdd517fd467cf)) - Merge pull request #487 from vadorovsky/new-map-types ([`42c4a8b`](https://github.com/aya-rs/aya/commit/42c4a8be7c502d7e7508c636f7c1cb28296c26b8)) - Add new map types ([`3d03c8a`](https://github.com/aya-rs/aya/commit/3d03c8a8e0a9033be8c1ab020129db7790cc7493)) - Merge pull request #483 from aya-rs/codegen ([`0399991`](https://github.com/aya-rs/aya/commit/03999913833ad576d9ba7d1c0123703f49b340a5)) - Update `BPF_MAP_TYPE_CGROUP_STORAGE` name to `BPF_MAP_TYPE_CGRP_STORAGE` ([`cb28533`](https://github.com/aya-rs/aya/commit/cb28533e2f9eb0b2cd80f4bf9515cdec31763749)) - [codegen] Update libbpf to 3423d5e7cdab356d115aef7f987b4a1098ede448 ([`5d13fd5`](https://github.com/aya-rs/aya/commit/5d13fd5acaa90efedb76d371b69431ac9a262fdd)) - Merge pull request #475 from yesh0/aya-obj ([`897957a`](https://github.com/aya-rs/aya/commit/897957ac84370cd1ee463bdf2ff4859333b41012)) - Update documentation and versioning info 
([`9c451a3`](https://github.com/aya-rs/aya/commit/9c451a3357317405dd8e2e4df7d006cee943adcc)) - Add documentation on program names ([`772af17`](https://github.com/aya-rs/aya/commit/772af170aea2feccb5e98cc84125e9e31b9fbe9a)) - Fix rustfmt diffs and typos ([`9ec3447`](https://github.com/aya-rs/aya/commit/9ec3447e891ca770a65f8ff9b71884f25530f515)) - Add no_std feature ([`30f1fab`](https://github.com/aya-rs/aya/commit/30f1fabc05654e8d11dd2648767895123c141c3b)) - Add integration tests against rbpf ([`311ead6`](https://github.com/aya-rs/aya/commit/311ead6760ce53e9503af00391e6631f7387ab4a)) - Add basic documentation to public members ([`e52497c`](https://github.com/aya-rs/aya/commit/e52497cb9c02123ae450ca36fb6f898d24b25c4b)) - Migrate aya::obj into a separate crate ([`ac49827`](https://github.com/aya-rs/aya/commit/ac49827e204801079be2b87160a795ef412bd6cb)) - Migrate bindgen destination ([`81bc307`](https://github.com/aya-rs/aya/commit/81bc307dce452f0aacbfbe8c304089d11ddd8c5e))
aya-obj-0.2.1/Cargo.toml0000644000000031270000000000100103570ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2021" name = "aya-obj" version = "0.2.1" authors = ["Aya Contributors"] build = false autobins = false autoexamples = false autotests = false autobenches = false description = "An eBPF object file parsing library with BTF and relocation support." homepage = "https://aya-rs.dev" documentation = "https://docs.rs/aya-obj" readme = "README.md" keywords = [ "bpf", "btf", "ebpf", "elf", "object", ] license = "MIT OR Apache-2.0" repository = "https://github.com/aya-rs/aya" [lib] name = "aya_obj" path = "src/lib.rs" [dependencies.bytes] version = "1" default-features = false [dependencies.core-error] version = "0.0.0" default-features = true [dependencies.hashbrown] version = "0.15.0" default-features = true [dependencies.log] version = "0.4" default-features = false [dependencies.object] version = "0.36" features = [ "elf", "read_core", ] default-features = false [dependencies.thiserror] version = "1" default-features = false [dev-dependencies.assert_matches] version = "1.5.0" default-features = false [dev-dependencies.rbpf] version = "0.3.0" default-features = false [features] std = [] aya-obj-0.2.1/Cargo.toml.orig000064400000000000000000000013751046102023000140430ustar 00000000000000[package] name = "aya-obj" version = "0.2.1" description = "An eBPF object file parsing library with BTF and relocation support." 
keywords = ["bpf", "btf", "ebpf", "elf", "object"] readme = "README.md" documentation = "https://docs.rs/aya-obj" authors.workspace = true license.workspace = true repository.workspace = true homepage.workspace = true edition.workspace = true [dependencies] bytes = { workspace = true } core-error = { workspace = true, default-features = true } hashbrown = { workspace = true, default-features = true } log = { workspace = true } object = { workspace = true, features = ["elf", "read_core"] } thiserror = { workspace = true } [dev-dependencies] assert_matches = { workspace = true } rbpf = { workspace = true } [features] std = [] aya-obj-0.2.1/README.md000064400000000000000000000031461046102023000124310ustar 00000000000000# aya-obj ## Status This crate includes code that started as internal API used by the [aya] crate. It has been split out so that it can be used by other projects that deal with eBPF object files. Unless you're writing low level eBPF plumbing tools, you should not need to use this crate but see the [aya] crate instead. The API as it is today has a few rough edges and is generally not as polished nor stable as the main [aya] crate API. As always, improvements welcome! [aya]: https://github.com/aya-rs/aya ## Overview eBPF programs written with [libbpf] or [aya-bpf] are usually compiled into an ELF object file, using various sections to store information about the eBPF programs. `aya-obj` is a library for parsing such eBPF object files, with BTF and relocation support. [libbpf]: https://github.com/libbpf/libbpf [aya-bpf]: https://github.com/aya-rs/aya ## Example This example loads a simple eBPF program and runs it with [rbpf]. 
```rust use aya_obj::{generated::bpf_insn, Object}; // Parse the object file let bytes = std::fs::read("program.o").unwrap(); let mut object = Object::parse(&bytes).unwrap(); // Relocate the programs object.relocate_calls().unwrap(); object.relocate_maps(std::iter::empty()).unwrap(); // Run with rbpf let instructions = &object.programs["prog_name"].function.instructions; let data = unsafe { core::slice::from_raw_parts( instructions.as_ptr() as *const u8, instructions.len() * core::mem::size_of::<bpf_insn>(), ) }; let vm = rbpf::EbpfVmNoData::new(Some(data)).unwrap(); let _return = vm.execute_program().unwrap(); ``` [rbpf]: https://github.com/qmonnet/rbpf aya-obj-0.2.1/include/linux_wrapper.h000064400000000000000000000007451046102023000156470ustar 00000000000000#include <linux/bpf.h> #include <linux/btf.h> #include <linux/if_link.h> #include <linux/netfilter.h> #include <linux/perf_event.h> #include <linux/pkt_cls.h> #include <linux/pkt_sched.h> #include <linux/rtnetlink.h> #include <sys/socket.h> /* workaround the fact that bindgen can't parse the IOC macros */ int AYA_PERF_EVENT_IOC_ENABLE = PERF_EVENT_IOC_ENABLE; int AYA_PERF_EVENT_IOC_DISABLE = PERF_EVENT_IOC_DISABLE; int AYA_PERF_EVENT_IOC_SET_BPF = PERF_EVENT_IOC_SET_BPF; aya-obj-0.2.1/src/btf/btf.rs000064400000000000000000002112621046102023000136350ustar 00000000000000use alloc::{ borrow::{Cow, ToOwned as _}, format, string::String, vec, vec::Vec, }; use core::{ffi::CStr, mem, ptr}; use bytes::BufMut; use log::debug; use object::{Endianness, SectionIndex}; #[cfg(not(feature = "std"))] use crate::std; use crate::{ btf::{ info::{FuncSecInfo, LineSecInfo}, relocation::Relocation, Array, BtfEnum, BtfKind, BtfMember, BtfType, Const, Enum, FuncInfo, FuncLinkage, Int, IntEncoding, LineInfo, Struct, Typedef, Union, VarLinkage, }, generated::{btf_ext_header, btf_header}, util::{bytes_of, HashMap}, Object, }; pub(crate) const MAX_RESOLVE_DEPTH: u8 = 32; pub(crate) const MAX_SPEC_LEN: usize = 64; /// The error type returned when `BTF` operations fail.
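///
/// # Example
///
/// A hedged sketch of handling a parse failure; the variants matched here are
/// defined just below, and the four-byte input is deliberately shorter than a
/// valid BTF header:
///
/// ```no_run
/// use aya_obj::btf::{Btf, BtfError};
/// use object::Endianness;
///
/// match Btf::parse(&[0u8; 4], Endianness::default()) {
///     Err(BtfError::InvalidHeader) => eprintln!("not a valid BTF blob"),
///     Err(err) => eprintln!("BTF error: {err}"),
///     Ok(_) => {}
/// }
/// ```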
#[derive(thiserror::Error, Debug)] pub enum BtfError { #[cfg(feature = "std")] /// Error parsing file #[error("error parsing {path}")] FileError { /// file path path: std::path::PathBuf, /// source of the error #[source] error: std::io::Error, }, /// Error parsing BTF header #[error("error parsing BTF header")] InvalidHeader, /// invalid BTF type info segment #[error("invalid BTF type info segment")] InvalidTypeInfo, /// invalid BTF relocation info segment #[error("invalid BTF relocation info segment")] InvalidRelocationInfo, /// invalid BTF type kind #[error("invalid BTF type kind `{kind}`")] InvalidTypeKind { /// type kind kind: u32, }, /// invalid BTF relocation kind #[error("invalid BTF relocation kind `{kind}`")] InvalidRelocationKind { /// type kind kind: u32, }, /// invalid BTF string offset #[error("invalid BTF string offset: {offset}")] InvalidStringOffset { /// offset offset: usize, }, /// invalid BTF info #[error("invalid BTF info, offset: {offset} len: {len} section_len: {section_len}")] InvalidInfo { /// offset offset: usize, /// length len: usize, /// section length section_len: usize, }, /// invalid BTF line infos #[error("invalid BTF line info, offset: {offset} len: {len} section_len: {section_len}")] InvalidLineInfo { /// offset offset: usize, /// length len: usize, /// section length section_len: usize, }, /// unknown BTF type id #[error("Unknown BTF type id `{type_id}`")] UnknownBtfType { /// type id type_id: u32, }, /// unexpected btf type id #[error("Unexpected BTF type id `{type_id}`")] UnexpectedBtfType { /// type id type_id: u32, }, /// unknown BTF type #[error("Unknown BTF type `{type_name}`")] UnknownBtfTypeName { /// type name type_name: String, }, /// maximum depth reached resolving BTF type #[error("maximum depth reached resolving BTF type")] MaximumTypeDepthReached { /// type id type_id: u32, }, #[cfg(feature = "std")] /// Loading the btf failed #[error("the BPF_BTF_LOAD syscall failed. 
Verifier output: {verifier_log}")] LoadError { /// The [`std::io::Error`] returned by the `BPF_BTF_LOAD` syscall. #[source] io_error: std::io::Error, /// The error log produced by the kernel verifier. verifier_log: crate::VerifierLog, }, /// offset not found for symbol #[error("Offset not found for symbol `{symbol_name}`")] SymbolOffsetNotFound { /// name of the symbol symbol_name: String, }, /// btf type that is not VAR found in DATASEC #[error("BTF type that is not VAR was found in DATASEC")] InvalidDatasec, /// unable to determine the size of section #[error("Unable to determine the size of section `{section_name}`")] UnknownSectionSize { /// name of the section section_name: String, }, /// unable to get symbol name #[error("Unable to get symbol name")] InvalidSymbolName, } /// Available BTF features #[derive(Default, Debug)] #[allow(missing_docs)] pub struct BtfFeatures { btf_func: bool, btf_func_global: bool, btf_datasec: bool, btf_float: bool, btf_decl_tag: bool, btf_type_tag: bool, btf_enum64: bool, } impl BtfFeatures { #[doc(hidden)] pub fn new( btf_func: bool, btf_func_global: bool, btf_datasec: bool, btf_float: bool, btf_decl_tag: bool, btf_type_tag: bool, btf_enum64: bool, ) -> Self { BtfFeatures { btf_func, btf_func_global, btf_datasec, btf_float, btf_decl_tag, btf_type_tag, btf_enum64, } } /// Returns true if the BTF_TYPE_FUNC is supported. pub fn btf_func(&self) -> bool { self.btf_func } /// Returns true if the BTF_TYPE_FUNC_GLOBAL is supported. pub fn btf_func_global(&self) -> bool { self.btf_func_global } /// Returns true if the BTF_TYPE_DATASEC is supported. pub fn btf_datasec(&self) -> bool { self.btf_datasec } /// Returns true if the BTF_FLOAT is supported. pub fn btf_float(&self) -> bool { self.btf_float } /// Returns true if the BTF_DECL_TAG is supported. pub fn btf_decl_tag(&self) -> bool { self.btf_decl_tag } /// Returns true if the BTF_TYPE_TAG is supported. 
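///
/// # Example
///
/// A hedged sketch of using this probe to decide whether TYPE_TAG entries must
/// be sanitized (with the derived `Default`, every probe starts out `false`,
/// i.e. "unsupported"):
///
/// ```
/// use aya_obj::btf::BtfFeatures;
///
/// let features = BtfFeatures::default();
/// // On such a kernel, fixup_and_sanitize() rewrites TYPE_TAG into CONST.
/// assert!(!features.btf_type_tag());
/// ```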
pub fn btf_type_tag(&self) -> bool { self.btf_type_tag } /// Returns true if the BTF_KIND_FUNC_PROTO is supported. pub fn btf_kind_func_proto(&self) -> bool { self.btf_func && self.btf_decl_tag } /// Returns true if the BTF_KIND_ENUM64 is supported. pub fn btf_enum64(&self) -> bool { self.btf_enum64 } } /// BPF Type Format metadata. /// /// BTF is a kind of debug metadata that allows eBPF programs compiled against one kernel version /// to be loaded into different kernel versions. /// /// Aya automatically loads BTF metadata if you use `Ebpf::load_file`. You /// only need to explicitly use this type if you want to load BTF from a non-standard /// location or if you are using `Ebpf::load`. #[derive(Clone, Debug)] pub struct Btf { header: btf_header, strings: Vec<u8>, types: BtfTypes, _endianness: Endianness, } impl Btf { /// Creates a new empty instance with its header initialized pub fn new() -> Btf { Btf { header: btf_header { magic: 0xeb9f, version: 0x01, flags: 0x00, hdr_len: 0x18, type_off: 0x00, type_len: 0x00, str_off: 0x00, str_len: 0x00, }, strings: vec![0], types: BtfTypes::default(), _endianness: Endianness::default(), } } pub(crate) fn is_empty(&self) -> bool { // the first one is always BtfType::Unknown self.types.types.len() < 2 } pub(crate) fn types(&self) -> impl Iterator<Item = &BtfType> { self.types.types.iter() } /// Adds a string to BTF metadata, returning an offset pub fn add_string(&mut self, name: &str) -> u32 { let str = name.bytes().chain(std::iter::once(0)); let name_offset = self.strings.len(); self.strings.extend(str); self.header.str_len = self.strings.len() as u32; name_offset as u32 } /// Adds a type to BTF metadata, returning a type id pub fn add_type(&mut self, btf_type: BtfType) -> u32 { let size = btf_type.type_info_size() as u32; let type_id = self.types.len(); self.types.push(btf_type); self.header.type_len += size; self.header.str_off += size; type_id as u32 } /// Loads BTF metadata from `/sys/kernel/btf/vmlinux`.
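///
/// # Example
///
/// A hedged sketch; reading the BTF blob requires the `std` feature and a
/// kernel built with BTF support, so it is marked `no_run`:
///
/// ```no_run
/// use aya_obj::btf::Btf;
///
/// let btf = Btf::from_sys_fs().unwrap();
/// assert!(!btf.to_bytes().is_empty());
/// ```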
#[cfg(feature = "std")] pub fn from_sys_fs() -> Result<Btf, BtfError> { Btf::parse_file("/sys/kernel/btf/vmlinux", Endianness::default()) } /// Loads BTF metadata from the given `path`. #[cfg(feature = "std")] pub fn parse_file<P: AsRef<std::path::Path>>( path: P, endianness: Endianness, ) -> Result<Btf, BtfError> { use std::{borrow::ToOwned, fs}; let path = path.as_ref(); Btf::parse( &fs::read(path).map_err(|error| BtfError::FileError { path: path.to_owned(), error, })?, endianness, ) } /// Parses BTF from binary data of the given endianness pub fn parse(data: &[u8], endianness: Endianness) -> Result<Btf, BtfError> { if data.len() < mem::size_of::<btf_header>() { return Err(BtfError::InvalidHeader); } // safety: btf_header is POD so read_unaligned is safe let header = unsafe { read_btf_header(data) }; let str_off = header.hdr_len as usize + header.str_off as usize; let str_len = header.str_len as usize; if str_off + str_len > data.len() { return Err(BtfError::InvalidHeader); } let strings = data[str_off..str_off + str_len].to_vec(); let types = Btf::read_type_info(&header, data, endianness)?; Ok(Btf { header, strings, types, _endianness: endianness, }) } fn read_type_info( header: &btf_header, data: &[u8], endianness: Endianness, ) -> Result<BtfTypes, BtfError> { let hdr_len = header.hdr_len as usize; let type_off = header.type_off as usize; let type_len = header.type_len as usize; let base = hdr_len + type_off; if base + type_len > data.len() { return Err(BtfError::InvalidTypeInfo); } let mut data = &data[base..base + type_len]; let mut types = BtfTypes::default(); while !data.is_empty() { // Safety: // read() reads POD values from ELF, which is sound, but the values can still contain // internally inconsistent values (like out of bound offsets and such). let ty = unsafe { BtfType::read(data, endianness)? }; data = &data[ty.type_info_size()..]; types.push(ty); } Ok(types) } pub(crate) fn string_at(&self, offset: u32) -> Result<Cow<'_, str>, BtfError> { let btf_header { hdr_len, mut str_off, str_len, ..
} = self.header; str_off += hdr_len; if offset >= str_off + str_len { return Err(BtfError::InvalidStringOffset { offset: offset as usize, }); } let offset = offset as usize; let nul = self.strings[offset..] .iter() .position(|c| *c == 0u8) .ok_or(BtfError::InvalidStringOffset { offset })?; let s = CStr::from_bytes_with_nul(&self.strings[offset..=offset + nul]) .map_err(|_| BtfError::InvalidStringOffset { offset })?; Ok(s.to_string_lossy()) } pub(crate) fn type_by_id(&self, type_id: u32) -> Result<&BtfType, BtfError> { self.types.type_by_id(type_id) } pub(crate) fn resolve_type(&self, root_type_id: u32) -> Result<u32, BtfError> { self.types.resolve_type(root_type_id) } pub(crate) fn type_name(&self, ty: &BtfType) -> Result<Cow<'_, str>, BtfError> { self.string_at(ty.name_offset()) } pub(crate) fn err_type_name(&self, ty: &BtfType) -> Option<String> { self.string_at(ty.name_offset()).ok().map(String::from) } /// Returns a type id matching the type name and [BtfKind] pub fn id_by_type_name_kind(&self, name: &str, kind: BtfKind) -> Result<u32, BtfError> { for (type_id, ty) in self.types().enumerate() { if ty.kind() != kind { continue; } if self.type_name(ty)? == name { return Ok(type_id as u32); } continue; } Err(BtfError::UnknownBtfTypeName { type_name: name.to_owned(), }) } pub(crate) fn type_size(&self, root_type_id: u32) -> Result<usize, BtfError> { let mut type_id = root_type_id; let mut n_elems = 1; for _ in 0..MAX_RESOLVE_DEPTH { let ty = self.types.type_by_id(type_id)?; let size = match ty { BtfType::Array(Array { array, ..
}) => { n_elems = array.len; type_id = array.element_type; continue; } other => { if let Some(size) = other.size() { size } else if let Some(next) = other.btf_type() { type_id = next; continue; } else { return Err(BtfError::UnexpectedBtfType { type_id }); } } }; return Ok((size * n_elems) as usize); } Err(BtfError::MaximumTypeDepthReached { type_id: root_type_id, }) } /// Encodes the metadata as BTF format pub fn to_bytes(&self) -> Vec<u8> { // Safety: btf_header is POD let mut buf = unsafe { bytes_of::<btf_header>(&self.header).to_vec() }; // Skip the first type since it's always BtfType::Unknown for type_by_id to work buf.extend(self.types.to_bytes()); buf.put(self.strings.as_slice()); buf } // This follows the same logic as libbpf's bpf_object__sanitize_btf() function. // https://github.com/libbpf/libbpf/blob/05f94ddbb837f5f4b3161e341eed21be307eaa04/src/libbpf.c#L2701 // // Fixup: The loader needs to adjust values in the BTF before it's loaded into the kernel. // Sanitize: Replace an unsupported BTF type with a placeholder type. // // In addition to the libbpf logic, it performs some fixups to the BTF generated by bpf-linker // for Aya programs. These fixups are gradually moving into bpf-linker itself. pub(crate) fn fixup_and_sanitize( &mut self, section_infos: &HashMap<String, (SectionIndex, u64)>, symbol_offsets: &HashMap<String, u64>, features: &BtfFeatures, ) -> Result<(), BtfError> { // ENUM64 placeholder type needs to be added before we take ownership of // self.types to ensure that the offsets in the BtfHeader are correct. let placeholder_name = self.add_string("enum64_placeholder"); let enum64_placeholder_id = (!features.btf_enum64 && self.types().any(|t| t.kind() == BtfKind::Enum64)) .then(|| { self.add_type(BtfType::Int(Int::new( placeholder_name, 1, IntEncoding::None, 0, ))) }); let mut types = mem::take(&mut self.types); for i in 0..types.types.len() { let t = &mut types.types[i]; let kind = t.kind(); match t { // Fixup PTR for Rust.
// // LLVM emits names for Rust pointer types, which the kernel doesn't like. // While I figure out if this needs fixing in the Kernel or LLVM, we'll // do a fixup here. BtfType::Ptr(ptr) => { ptr.name_offset = 0; } // Sanitize VAR if they are not supported. BtfType::Var(v) if !features.btf_datasec => { *t = BtfType::Int(Int::new(v.name_offset, 1, IntEncoding::None, 0)); } // Sanitize DATASEC if they are not supported. BtfType::DataSec(d) if !features.btf_datasec => { debug!("{}: not supported. replacing with STRUCT", kind); // STRUCT aren't allowed to have "." in their name, fixup this if needed. let mut name_offset = d.name_offset; let name = self.string_at(name_offset)?; // Handle any "." characters in struct names. // Example: ".maps" let fixed_name = name.replace('.', "_"); if fixed_name != name { name_offset = self.add_string(&fixed_name); } let entries = std::mem::take(&mut d.entries); let members = entries .iter() .map(|e| { let mt = types.type_by_id(e.btf_type).unwrap(); BtfMember { name_offset: mt.name_offset(), btf_type: e.btf_type, offset: e.offset * 8, } }) .collect(); // Must reborrow here because we borrow `types` immutably above. let t = &mut types.types[i]; *t = BtfType::Struct(Struct::new(name_offset, members, entries.len() as u32)); } // Fixup DATASEC. // // DATASEC sizes aren't always set by LLVM so we need to fix them // here before loading the btf to the kernel. BtfType::DataSec(d) if features.btf_datasec => { // Start DataSec Fixups let name = self.string_at(d.name_offset)?; let name = name.into_owned(); // Handle any "/" characters in section names. // Example: "maps/hashmap" let fixed_name = name.replace('/', "."); if fixed_name != name { d.name_offset = self.add_string(&fixed_name); } // There are some cases when the compiler does indeed populate the size. if d.size > 0 { debug!("{} {}: size fixup not required", kind, name); } else { // We need to get the size of the section from the ELF file. 
// Fortunately, we cached these when parsing it initially // and we can look this up by name in section_infos. let size = match section_infos.get(&name) { Some((_, size)) => size, None => { return Err(BtfError::UnknownSectionSize { section_name: name }); } }; debug!("{} {}: fixup size to {}", kind, name, size); d.size = *size as u32; // The Vec<btf_var_secinfo> contains BTF_KIND_VAR sections // that need to have their offsets adjusted. To do this, // we need to get the offset from the ELF file. // This was also cached during initial parsing and // we can query by name in symbol_offsets. let mut entries = mem::take(&mut d.entries); let mut fixed_section = d.clone(); for e in entries.iter_mut() { if let BtfType::Var(var) = types.type_by_id(e.btf_type)? { let var_name = self.string_at(var.name_offset)?; if var.linkage == VarLinkage::Static { debug!( "{} {}: VAR {}: fixup not required", kind, name, var_name ); continue; } let offset = match symbol_offsets.get(var_name.as_ref()) { Some(offset) => offset, None => { return Err(BtfError::SymbolOffsetNotFound { symbol_name: var_name.into_owned(), }); } }; e.offset = *offset as u32; debug!( "{} {}: VAR {}: fixup offset {}", kind, name, var_name, offset ); } else { return Err(BtfError::InvalidDatasec); } } fixed_section.entries = entries; // Must reborrow here because we borrow `types` immutably above. let t = &mut types.types[i]; *t = BtfType::DataSec(fixed_section); } } // Fixup FUNC_PROTO. BtfType::FuncProto(ty) if features.btf_func => { for (i, param) in ty.params.iter_mut().enumerate() { if param.name_offset == 0 && param.btf_type != 0 { param.name_offset = self.add_string(&format!("param{i}")); } } } // Sanitize FUNC_PROTO. BtfType::FuncProto(ty) if !features.btf_func => { debug!("{}: not supported.
replacing with ENUM", kind); let members: Vec<BtfEnum> = ty .params .iter() .map(|p| BtfEnum { name_offset: p.name_offset, value: p.btf_type, }) .collect(); let enum_type = BtfType::Enum(Enum::new(ty.name_offset, false, members)); *t = enum_type; } // Sanitize FUNC. BtfType::Func(ty) => { let name = self.string_at(ty.name_offset)?; // Sanitize FUNC. if !features.btf_func { debug!("{}: not supported. replacing with TYPEDEF", kind); *t = BtfType::Typedef(Typedef::new(ty.name_offset, ty.btf_type)); } else if !features.btf_func_global || name == "memset" || name == "memcpy" || name == "memmove" || name == "memcmp" { // Sanitize BTF_FUNC_GLOBAL when not supported and ensure that // memory builtins are marked as static. Globals are type checked // and verified separately from their callers, while instead we // want tracking info (eg bound checks) to be propagated to the // memory builtins. if ty.linkage() == FuncLinkage::Global { if !features.btf_func_global { debug!( "{}: BTF_FUNC_GLOBAL not supported. replacing with BTF_FUNC_STATIC", kind ); } else { debug!("changing FUNC {name} linkage to BTF_FUNC_STATIC"); } ty.set_linkage(FuncLinkage::Static); } } } // Sanitize FLOAT. BtfType::Float(ty) if !features.btf_float => { debug!("{}: not supported. replacing with STRUCT", kind); *t = BtfType::Struct(Struct::new(0, vec![], ty.size)); } // Sanitize DECL_TAG. BtfType::DeclTag(ty) if !features.btf_decl_tag => { debug!("{}: not supported. replacing with INT", kind); *t = BtfType::Int(Int::new(ty.name_offset, 1, IntEncoding::None, 0)); } // Sanitize TYPE_TAG. BtfType::TypeTag(ty) if !features.btf_type_tag => { debug!("{}: not supported. replacing with CONST", kind); *t = BtfType::Const(Const::new(ty.btf_type)); } // Sanitize Signed ENUMs. BtfType::Enum(ty) if !features.btf_enum64 && ty.is_signed() => { debug!("{}: signed ENUMs not supported. Marking as unsigned", kind); ty.set_signed(false); } // Sanitize ENUM64. BtfType::Enum64(ty) if !features.btf_enum64 => { debug!("{}: not supported. 
replacing with UNION", kind); let placeholder_id = enum64_placeholder_id.expect("enum64_placeholder_id must be set"); let members: Vec<BtfMember> = ty .variants .iter() .map(|v| BtfMember { name_offset: v.name_offset, btf_type: placeholder_id, offset: 0, }) .collect(); *t = BtfType::Union(Union::new(ty.name_offset, members.len() as u32, members)); } // The type does not need fixing up or sanitization. _ => {} } } self.types = types; Ok(()) } } impl Default for Btf { fn default() -> Self { Self::new() } } impl Object { /// Fixes up and sanitizes BTF data. /// /// Mostly, it removes unsupported types and works around LLVM behaviours. pub fn fixup_and_sanitize_btf( &mut self, features: &BtfFeatures, ) -> Result<Option<&Btf>, BtfError> { if let Some(ref mut obj_btf) = &mut self.btf { if obj_btf.is_empty() { return Ok(None); } // fixup btf obj_btf.fixup_and_sanitize( &self.section_infos, &self.symbol_offset_by_name, features, )?; Ok(Some(obj_btf)) } else { Ok(None) } } } unsafe fn read_btf_header(data: &[u8]) -> btf_header { // safety: btf_header is POD so read_unaligned is safe ptr::read_unaligned(data.as_ptr() as *const btf_header) } /// Data in the `.BTF.ext` section #[derive(Debug, Clone)] pub struct BtfExt { data: Vec<u8>, _endianness: Endianness, relocations: Vec<(u32, Vec<Relocation>)>, header: btf_ext_header, func_info_rec_size: usize, pub(crate) func_info: FuncInfo, line_info_rec_size: usize, pub(crate) line_info: LineInfo, core_relo_rec_size: usize, } impl BtfExt { pub(crate) fn parse( data: &[u8], endianness: Endianness, btf: &Btf, ) -> Result<BtfExt, BtfError> { #[repr(C)] #[derive(Debug, Copy, Clone)] struct MinimalHeader { pub magic: u16, pub version: u8, pub flags: u8, pub hdr_len: u32, } if data.len() < std::mem::size_of::<MinimalHeader>() { return Err(BtfError::InvalidHeader); } let header = { // first find the actual size of the header by converting into the minimal valid header // Safety: MinimalHeader is POD so read_unaligned is safe let minimal_header = unsafe { ptr::read_unaligned::<MinimalHeader>(data.as_ptr() as *const 
MinimalHeader) }; let len_to_read = minimal_header.hdr_len as usize; // prevent invalid input from causing UB if data.len() < len_to_read { return Err(BtfError::InvalidHeader); } // forwards compatibility: if newer headers are bigger // than the pre-generated btf_ext_header we should only // read up to btf_ext_header let len_to_read = len_to_read.min(std::mem::size_of::<btf_ext_header>()); // now create our full-fledged header; but start with it // zeroed out so unavailable fields stay as zero on older // BTF.ext sections let mut header = std::mem::MaybeUninit::<btf_ext_header>::zeroed(); // Safety: we have checked that len_to_read is less than // size_of::<btf_ext_header>() and less than // data.len(). Additionally, we know that the header has // been initialized so it's safe to call assume_init. unsafe { std::ptr::copy(data.as_ptr(), header.as_mut_ptr() as *mut u8, len_to_read); header.assume_init() } }; let btf_ext_header { hdr_len, func_info_off, func_info_len, line_info_off, line_info_len, core_relo_off, core_relo_len, .. } = header; let rec_size = |offset, len| { let offset = hdr_len as usize + offset as usize; let len = len as usize; // check that there's at least enough space for the `rec_size` field if (len > 0 && len < 4) || offset + len > data.len() { return Err(BtfError::InvalidInfo { offset, len, section_len: data.len(), }); } let read_u32 = if endianness == Endianness::Little { u32::from_le_bytes } else { u32::from_be_bytes }; Ok(if len > 0 { read_u32(data[offset..offset + 4].try_into().unwrap()) as usize } else { 0 }) }; let mut ext = BtfExt { header, relocations: Vec::new(), func_info: FuncInfo::new(), line_info: LineInfo::new(), func_info_rec_size: rec_size(func_info_off, func_info_len)?, line_info_rec_size: rec_size(line_info_off, line_info_len)?, core_relo_rec_size: rec_size(core_relo_off, core_relo_len)?, data: data.to_vec(), _endianness: endianness, }; let func_info_rec_size = ext.func_info_rec_size; ext.func_info.data.extend( SecInfoIter::new(ext.func_info_data(), ext.func_info_rec_size, 
endianness) .map(move |sec| { let name = btf .string_at(sec.name_offset) .ok() .map(String::from) .unwrap(); let info = FuncSecInfo::parse( sec.name_offset, sec.num_info, func_info_rec_size, sec.data, endianness, ); Ok((name, info)) }) .collect::<Result<Vec<_>, _>>()?, ); let line_info_rec_size = ext.line_info_rec_size; ext.line_info.data.extend( SecInfoIter::new(ext.line_info_data(), ext.line_info_rec_size, endianness) .map(move |sec| { let name = btf .string_at(sec.name_offset) .ok() .map(String::from) .unwrap(); let info = LineSecInfo::parse( sec.name_offset, sec.num_info, line_info_rec_size, sec.data, endianness, ); Ok((name, info)) }) .collect::<Result<Vec<_>, _>>()?, ); let rec_size = ext.core_relo_rec_size; ext.relocations.extend( SecInfoIter::new(ext.core_relo_data(), ext.core_relo_rec_size, endianness) .map(move |sec| { let relos = sec .data .chunks(rec_size) .enumerate() .map(|(n, rec)| unsafe { Relocation::parse(rec, n) }) .collect::<Result<Vec<_>, _>>()?; Ok((sec.name_offset, relos)) }) .collect::<Result<Vec<_>, _>>()?, ); Ok(ext) } fn info_data(&self, offset: u32, len: u32) -> &[u8] { let offset = (self.header.hdr_len + offset) as usize; let data = &self.data[offset..offset + len as usize]; if len > 0 { // skip `rec_size` &data[4..] 
} else { data } } fn core_relo_data(&self) -> &[u8] { self.info_data(self.header.core_relo_off, self.header.core_relo_len) } fn func_info_data(&self) -> &[u8] { self.info_data(self.header.func_info_off, self.header.func_info_len) } fn line_info_data(&self) -> &[u8] { self.info_data(self.header.line_info_off, self.header.line_info_len) } pub(crate) fn relocations(&self) -> impl Iterator<Item = &(u32, Vec<Relocation>)> { self.relocations.iter() } pub(crate) fn func_info_rec_size(&self) -> usize { self.func_info_rec_size } pub(crate) fn line_info_rec_size(&self) -> usize { self.line_info_rec_size } } pub(crate) struct SecInfoIter<'a> { data: &'a [u8], offset: usize, rec_size: usize, endianness: Endianness, } impl<'a> SecInfoIter<'a> { fn new(data: &'a [u8], rec_size: usize, endianness: Endianness) -> Self { Self { data, rec_size, offset: 0, endianness, } } } impl<'a> Iterator for SecInfoIter<'a> { type Item = SecInfo<'a>; fn next(&mut self) -> Option<Self::Item> { let data = self.data; if self.offset + 8 >= data.len() { return None; } let read_u32 = if self.endianness == Endianness::Little { u32::from_le_bytes } else { u32::from_be_bytes }; let name_offset = read_u32(data[self.offset..self.offset + 4].try_into().unwrap()); self.offset += 4; let num_info = u32::from_ne_bytes(data[self.offset..self.offset + 4].try_into().unwrap()); self.offset += 4; let data = &data[self.offset..self.offset + (self.rec_size * num_info as usize)]; self.offset += self.rec_size * num_info as usize; Some(SecInfo { name_offset, num_info, data, }) } } /// BtfTypes allows for access and manipulation of a /// collection of BtfType objects #[derive(Debug, Clone)] pub(crate) struct BtfTypes { pub(crate) types: Vec<BtfType>, } impl Default for BtfTypes { fn default() -> Self { Self { types: vec![BtfType::Unknown], } } } impl BtfTypes { pub(crate) fn to_bytes(&self) -> Vec<u8> { let mut buf = vec![]; for t in self.types.iter().skip(1) { let b = t.to_bytes(); buf.extend(b) } buf } pub(crate) fn len(&self) -> usize { self.types.len() } pub(crate) fn 
push(&mut self, value: BtfType) { self.types.push(value) } pub(crate) fn type_by_id(&self, type_id: u32) -> Result<&BtfType, BtfError> { self.types .get(type_id as usize) .ok_or(BtfError::UnknownBtfType { type_id }) } pub(crate) fn resolve_type(&self, root_type_id: u32) -> Result<u32, BtfError> { let mut type_id = root_type_id; for _ in 0..MAX_RESOLVE_DEPTH { let ty = self.type_by_id(type_id)?; use BtfType::*; match ty { Volatile(ty) => { type_id = ty.btf_type; continue; } Const(ty) => { type_id = ty.btf_type; continue; } Restrict(ty) => { type_id = ty.btf_type; continue; } Typedef(ty) => { type_id = ty.btf_type; continue; } TypeTag(ty) => { type_id = ty.btf_type; continue; } _ => return Ok(type_id), } } Err(BtfError::MaximumTypeDepthReached { type_id: root_type_id, }) } } #[derive(Debug)] pub(crate) struct SecInfo<'a> { name_offset: u32, num_info: u32, data: &'a [u8], } #[cfg(test)] mod tests { use assert_matches::assert_matches; use super::*; use crate::btf::{ BtfEnum64, BtfParam, DataSec, DataSecEntry, DeclTag, Enum64, Float, Func, FuncProto, Ptr, TypeTag, Var, }; #[test] fn test_parse_header() { let header = btf_header { magic: 0xeb9f, version: 0x01, flags: 0x00, hdr_len: 0x18, type_off: 0x00, type_len: 0x2a5464, str_off: 0x2a5464, str_len: 0x1c6410, }; let data = unsafe { bytes_of::<btf_header>(&header).to_vec() }; let header = unsafe { read_btf_header(&data) }; assert_eq!(header.magic, 0xeb9f); assert_eq!(header.version, 0x01); assert_eq!(header.flags, 0x00); assert_eq!(header.hdr_len, 0x18); assert_eq!(header.type_off, 0x00); assert_eq!(header.type_len, 0x2a5464); assert_eq!(header.str_off, 0x2a5464); assert_eq!(header.str_len, 0x1c6410); } #[test] fn test_parse_btf() { // this generated BTF data is from an XDP program that simply returns XDP_PASS // compiled using clang #[cfg(target_endian = "little")] let data: &[u8] = &[ 0x9f, 0xeb, 0x01, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0c, 0x01, 0x00, 0x00, 0x0c, 0x01, 0x00, 0x00, 0xe1, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x02, 0x02, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x04, 0x18, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x60, 0x00, 0x00, 0x00, 0x30, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x3f, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0xa0, 0x00, 0x00, 0x00, 0x4e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x04, 0x00, 0x00, 0x00, 0x54, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x0d, 0x06, 0x00, 0x00, 0x00, 0x61, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x65, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x01, 0x69, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x0c, 0x05, 0x00, 0x00, 0x00, 0xb7, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xbc, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0xd0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x09, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0xd9, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x0f, 0x00, 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x78, 0x64, 0x70, 0x5f, 0x6d, 0x64, 0x00, 0x64, 0x61, 0x74, 0x61, 0x00, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x65, 0x6e, 0x64, 0x00, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x00, 0x69, 0x6e, 0x67, 0x72, 0x65, 0x73, 0x73, 0x5f, 0x69, 0x66, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x00, 0x72, 0x78, 0x5f, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x00, 0x65, 0x67, 0x72, 0x65, 0x73, 
0x73, 0x5f, 0x69, 0x66, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x00, 0x5f, 0x5f, 0x75, 0x33, 0x32, 0x00, 0x75, 0x6e, 0x73, 0x69, 0x67, 0x6e, 0x65, 0x64, 0x20, 0x69, 0x6e, 0x74, 0x00, 0x63, 0x74, 0x78, 0x00, 0x69, 0x6e, 0x74, 0x00, 0x78, 0x64, 0x70, 0x5f, 0x70, 0x61, 0x73, 0x73, 0x00, 0x78, 0x64, 0x70, 0x2f, 0x70, 0x61, 0x73, 0x73, 0x00, 0x2f, 0x68, 0x6f, 0x6d, 0x65, 0x2f, 0x64, 0x61, 0x76, 0x65, 0x2f, 0x64, 0x65, 0x76, 0x2f, 0x62, 0x70, 0x66, 0x64, 0x2f, 0x62, 0x70, 0x66, 0x2f, 0x78, 0x64, 0x70, 0x5f, 0x70, 0x61, 0x73, 0x73, 0x2e, 0x62, 0x70, 0x66, 0x2e, 0x63, 0x00, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x58, 0x44, 0x50, 0x5f, 0x50, 0x41, 0x53, 0x53, 0x3b, 0x00, 0x63, 0x68, 0x61, 0x72, 0x00, 0x5f, 0x5f, 0x41, 0x52, 0x52, 0x41, 0x59, 0x5f, 0x53, 0x49, 0x5a, 0x45, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x5f, 0x00, 0x5f, 0x6c, 0x69, 0x63, 0x65, 0x6e, 0x73, 0x65, 0x00, 0x6c, 0x69, 0x63, 0x65, 0x6e, 0x73, 0x65, 0x00, ]; #[cfg(target_endian = "big")] let data: &[u8] = &[ 0xeb, 0x9f, 0x01, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x01, 0x0c, 0x00, 0x00, 0x00, 0xe1, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x60, 0x00, 0x00, 0x00, 0x30, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x3f, 0x00, 0x00, 0x00, 0x30, 0x00, 0x00, 0x00, 0xa0, 0x00, 0x00, 0x00, 0x4e, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x54, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x61, 0x00, 0x00, 0x00, 
0x01, 0x00, 0x00, 0x00, 0x65, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x01, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x69, 0x0c, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0xb7, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xbc, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0xd0, 0x0e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0xd9, 0x0f, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x78, 0x64, 0x70, 0x5f, 0x6d, 0x64, 0x00, 0x64, 0x61, 0x74, 0x61, 0x00, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x65, 0x6e, 0x64, 0x00, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x6d, 0x65, 0x74, 0x61, 0x00, 0x69, 0x6e, 0x67, 0x72, 0x65, 0x73, 0x73, 0x5f, 0x69, 0x66, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x00, 0x72, 0x78, 0x5f, 0x71, 0x75, 0x65, 0x75, 0x65, 0x5f, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x00, 0x65, 0x67, 0x72, 0x65, 0x73, 0x73, 0x5f, 0x69, 0x66, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x00, 0x5f, 0x5f, 0x75, 0x33, 0x32, 0x00, 0x75, 0x6e, 0x73, 0x69, 0x67, 0x6e, 0x65, 0x64, 0x20, 0x69, 0x6e, 0x74, 0x00, 0x63, 0x74, 0x78, 0x00, 0x69, 0x6e, 0x74, 0x00, 0x78, 0x64, 0x70, 0x5f, 0x70, 0x61, 0x73, 0x73, 0x00, 0x78, 0x64, 0x70, 0x2f, 0x70, 0x61, 0x73, 0x73, 0x00, 0x2f, 0x68, 0x6f, 0x6d, 0x65, 0x2f, 0x64, 0x61, 0x76, 0x65, 0x2f, 0x64, 0x65, 0x76, 0x2f, 0x62, 0x70, 0x66, 0x64, 0x2f, 0x62, 0x70, 0x66, 0x2f, 0x78, 0x64, 0x70, 0x5f, 0x70, 0x61, 0x73, 0x73, 0x2e, 0x62, 0x70, 0x66, 0x2e, 0x63, 0x00, 0x20, 0x20, 0x20, 0x20, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6e, 0x20, 0x58, 0x44, 0x50, 0x5f, 0x50, 0x41, 0x53, 0x53, 0x3b, 0x00, 0x63, 0x68, 0x61, 0x72, 0x00, 0x5f, 0x5f, 0x41, 0x52, 0x52, 0x41, 0x59, 0x5f, 0x53, 0x49, 0x5a, 0x45, 0x5f, 0x54, 0x59, 0x50, 0x45, 0x5f, 0x5f, 0x00, 
0x5f, 0x6c, 0x69, 0x63, 0x65, 0x6e, 0x73, 0x65, 0x00, 0x6c, 0x69, 0x63, 0x65, 0x6e, 0x73, 0x65, 0x00, ]; assert_eq!(data.len(), 517); let btf = Btf::parse(data, Endianness::default()).unwrap_or_else(|e| panic!("{}", e)); let data2 = btf.to_bytes(); assert_eq!(data2.len(), 517); assert_eq!(data, data2); const FUNC_LEN: u32 = 0x14; const LINE_INFO_LEN: u32 = 0x1c; const CORE_RELO_LEN: u32 = 0; const DATA_LEN: u32 = (FUNC_LEN + LINE_INFO_LEN + CORE_RELO_LEN) / 4; struct TestStruct { _header: btf_ext_header, _data: [u32; DATA_LEN as usize], } let test_data = TestStruct { _header: btf_ext_header { magic: 0xeb9f, version: 1, flags: 0, hdr_len: 0x20, func_info_off: 0, func_info_len: FUNC_LEN, line_info_off: FUNC_LEN, line_info_len: LINE_INFO_LEN, core_relo_off: FUNC_LEN + LINE_INFO_LEN, core_relo_len: CORE_RELO_LEN, }, _data: [ 0x00000008u32, 0x00000072u32, 0x00000001u32, 0x00000000u32, 0x00000007u32, 0x00000010u32, 0x00000072u32, 0x00000001u32, 0x00000000u32, 0x0000007bu32, 0x000000a2u32, 0x00002c05u32, ], }; let ext_data = unsafe { bytes_of::<TestStruct>(&test_data).to_vec() }; assert_eq!(ext_data.len(), 80); let _: BtfExt = BtfExt::parse(&ext_data, Endianness::default(), &btf) .unwrap_or_else(|e| panic!("{}", e)); } #[test] fn parsing_older_ext_data() { const TYPE_LEN: u32 = 0; const STR_LEN: u32 = 1; struct BtfTestStruct { _header: btf_header, _data: [u8; (TYPE_LEN + STR_LEN) as usize], } let btf_test_data = BtfTestStruct { _header: btf_header { magic: 0xeb9f, version: 0x01, flags: 0x00, hdr_len: 24, type_off: 0, type_len: TYPE_LEN, str_off: TYPE_LEN, str_len: TYPE_LEN + STR_LEN, }, _data: [0x00u8], }; let btf_data = unsafe { bytes_of::<BtfTestStruct>(&btf_test_data).to_vec() }; const FUNC_INFO_LEN: u32 = 4; const LINE_INFO_LEN: u32 = 4; const CORE_RELO_LEN: u32 = 16; let ext_header = btf_ext_header { magic: 0xeb9f, version: 1, flags: 0, hdr_len: 24, func_info_off: 0, func_info_len: FUNC_INFO_LEN, line_info_off: FUNC_INFO_LEN, line_info_len: LINE_INFO_LEN, core_relo_off: FUNC_INFO_LEN + 
LINE_INFO_LEN, core_relo_len: CORE_RELO_LEN, }; let btf_ext_data = unsafe { bytes_of::<btf_ext_header>(&ext_header).to_vec() }; let btf = Btf::parse(&btf_data, Endianness::default()).unwrap(); let btf_ext = BtfExt::parse(&btf_ext_data, Endianness::default(), &btf).unwrap(); assert_eq!(btf_ext.func_info_rec_size(), 8); assert_eq!(btf_ext.line_info_rec_size(), 16); } #[test] fn test_write_btf() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type = BtfType::Int(Int::new(name_offset, 4, IntEncoding::Signed, 0)); btf.add_type(int_type); let name_offset = btf.add_string("widget"); let int_type = BtfType::Int(Int::new(name_offset, 4, IntEncoding::Signed, 0)); btf.add_type(int_type); let btf_bytes = btf.to_bytes(); let raw_btf = btf_bytes.as_slice(); let btf = Btf::parse(raw_btf, Endianness::default()).unwrap_or_else(|e| panic!("{}", e)); assert_eq!(btf.string_at(1).unwrap(), "int"); assert_eq!(btf.string_at(5).unwrap(), "widget"); } #[test] fn test_fixup_ptr() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let name_offset = btf.add_string("&mut int"); let ptr_type_id = btf.add_type(BtfType::Ptr(Ptr::new(name_offset, int_type_id))); let features = Default::default(); btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(ptr_type_id).unwrap(), BtfType::Ptr(fixed) => { assert_eq!(fixed.name_offset, 0); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_var() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let name_offset = btf.add_string("&mut int"); let var_type_id = btf.add_type(BtfType::Var(Var::new( name_offset, int_type_id, VarLinkage::Static, ))); let features 
= BtfFeatures { btf_datasec: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(var_type_id).unwrap(), BtfType::Int(fixed) => { assert_eq!(fixed.name_offset, name_offset); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_datasec() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let var_name_offset = btf.add_string("foo"); let var_type_id = btf.add_type(BtfType::Var(Var::new( var_name_offset, int_type_id, VarLinkage::Static, ))); let name_offset = btf.add_string("data"); let variables = vec![DataSecEntry { btf_type: var_type_id, offset: 0, size: 4, }]; let datasec_type_id = btf.add_type(BtfType::DataSec(DataSec::new(name_offset, variables, 0))); let features = BtfFeatures { btf_datasec: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(datasec_type_id).unwrap(), BtfType::Struct(fixed) => { assert_eq!(fixed.name_offset , name_offset); assert_matches!(*fixed.members, [ BtfMember { name_offset: name_offset1, btf_type, offset: 0, }, ] => { assert_eq!(name_offset1, var_name_offset); assert_eq!(btf_type, var_type_id); }) }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_fixup_datasec() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let name_offset = btf.add_string("foo"); let var_type_id = btf.add_type(BtfType::Var(Var::new( name_offset, int_type_id, VarLinkage::Global, ))); let name_offset = btf.add_string(".data/foo"); let variables = vec![DataSecEntry { 
btf_type: var_type_id, offset: 0, size: 4, }]; let datasec_type_id = btf.add_type(BtfType::DataSec(DataSec::new(name_offset, variables, 0))); let features = BtfFeatures { btf_datasec: true, ..Default::default() }; btf.fixup_and_sanitize( &HashMap::from([(".data/foo".to_owned(), (SectionIndex(0), 32u64))]), &HashMap::from([("foo".to_owned(), 64u64)]), &features, ) .unwrap(); assert_matches!(btf.type_by_id(datasec_type_id).unwrap(), BtfType::DataSec(fixed) => { assert_ne!(fixed.name_offset, name_offset); assert_eq!(fixed.size, 32); assert_matches!(*fixed.entries, [ DataSecEntry { btf_type, offset, size, }, ] => { assert_eq!(btf_type, var_type_id); assert_eq!(offset, 64); assert_eq!(size, 4); } ); assert_eq!(btf.string_at(fixed.name_offset).unwrap(), ".data.foo"); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_func_and_proto() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let params = vec![ BtfParam { name_offset: btf.add_string("a"), btf_type: int_type_id, }, BtfParam { name_offset: btf.add_string("b"), btf_type: int_type_id, }, ]; let func_proto_type_id = btf.add_type(BtfType::FuncProto(FuncProto::new(params, int_type_id))); let inc = btf.add_string("inc"); let func_type_id = btf.add_type(BtfType::Func(Func::new( inc, func_proto_type_id, FuncLinkage::Static, ))); let features = BtfFeatures { btf_func: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(func_proto_type_id).unwrap(), BtfType::Enum(fixed) => { assert_eq!(fixed.name_offset, 0); assert_matches!(*fixed.variants, [ BtfEnum { name_offset: name_offset1, value: value1, }, BtfEnum { name_offset: name_offset2, value: value2, }, ] => { assert_eq!(btf.string_at(name_offset1).unwrap(), "a"); 
assert_eq!(value1, int_type_id); assert_eq!(btf.string_at(name_offset2).unwrap(), "b"); assert_eq!(value2, int_type_id); } ); }); assert_matches!(btf.type_by_id(func_type_id).unwrap(), BtfType::Typedef(fixed) => { assert_eq!(fixed.name_offset, inc); assert_eq!(fixed.btf_type, func_proto_type_id); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_fixup_func_proto() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type = BtfType::Int(Int::new(name_offset, 4, IntEncoding::Signed, 0)); let int_type_id = btf.add_type(int_type); let params = vec![ BtfParam { name_offset: 0, btf_type: int_type_id, }, BtfParam { name_offset: 0, btf_type: int_type_id, }, ]; let func_proto = BtfType::FuncProto(FuncProto::new(params, int_type_id)); let func_proto_type_id = btf.add_type(func_proto); let features = BtfFeatures { btf_func: true, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(func_proto_type_id).unwrap(), BtfType::FuncProto(fixed) => { assert_matches!(*fixed.params, [ BtfParam { name_offset: name_offset1, btf_type: btf_type1, }, BtfParam { name_offset: name_offset2, btf_type: btf_type2, }, ] => { assert_eq!(btf.string_at(name_offset1).unwrap(), "param0"); assert_eq!(btf_type1, int_type_id); assert_eq!(btf.string_at(name_offset2).unwrap(), "param1"); assert_eq!(btf_type2, int_type_id); } ); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_func_global() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let params = vec![ BtfParam { name_offset: btf.add_string("a"), btf_type: int_type_id, }, BtfParam { name_offset: btf.add_string("b"), btf_type: 
int_type_id, }, ]; let func_proto_type_id = btf.add_type(BtfType::FuncProto(FuncProto::new(params, int_type_id))); let inc = btf.add_string("inc"); let func_type_id = btf.add_type(BtfType::Func(Func::new( inc, func_proto_type_id, FuncLinkage::Global, ))); let features = BtfFeatures { btf_func: true, btf_func_global: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(func_type_id).unwrap(), BtfType::Func(fixed) => { assert_eq!(fixed.linkage(), FuncLinkage::Static); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_mem_builtins() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let params = vec![ BtfParam { name_offset: btf.add_string("a"), btf_type: int_type_id, }, BtfParam { name_offset: btf.add_string("b"), btf_type: int_type_id, }, ]; let func_proto_type_id = btf.add_type(BtfType::FuncProto(FuncProto::new(params, int_type_id))); let builtins = ["memset", "memcpy", "memcmp", "memmove"]; for fname in builtins { let func_name_offset = btf.add_string(fname); let func_type_id = btf.add_type(BtfType::Func(Func::new( func_name_offset, func_proto_type_id, FuncLinkage::Global, ))); let features = BtfFeatures { btf_func: true, btf_func_global: true, // to force function name check ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(func_type_id).unwrap(), BtfType::Func(fixed) => { assert_eq!(fixed.linkage(), FuncLinkage::Static); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } } #[test] fn test_sanitize_float() { let mut btf = Btf::new(); let name_offset = btf.add_string("float"); let float_type_id = 
btf.add_type(BtfType::Float(Float::new(name_offset, 16))); let features = BtfFeatures { btf_float: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(float_type_id).unwrap(), BtfType::Struct(fixed) => { assert_eq!(fixed.name_offset, 0); assert_eq!(fixed.size, 16); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_decl_tag() { let mut btf = Btf::new(); let name_offset = btf.add_string("int"); let int_type_id = btf.add_type(BtfType::Int(Int::new( name_offset, 4, IntEncoding::Signed, 0, ))); let name_offset = btf.add_string("foo"); let var_type_id = btf.add_type(BtfType::Var(Var::new( name_offset, int_type_id, VarLinkage::Static, ))); let name_offset = btf.add_string("decl_tag"); let decl_tag_type_id = btf.add_type(BtfType::DeclTag(DeclTag::new(name_offset, var_type_id, -1))); let features = BtfFeatures { btf_decl_tag: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(decl_tag_type_id).unwrap(), BtfType::Int(fixed) => { assert_eq!(fixed.name_offset, name_offset); assert_eq!(fixed.size, 1); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_type_tag() { let mut btf = Btf::new(); let int_type_id = btf.add_type(BtfType::Int(Int::new(0, 4, IntEncoding::Signed, 0))); let name_offset = btf.add_string("int"); let type_tag_type = btf.add_type(BtfType::TypeTag(TypeTag::new(name_offset, int_type_id))); btf.add_type(BtfType::Ptr(Ptr::new(0, type_tag_type))); let features = BtfFeatures { btf_type_tag: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(type_tag_type).unwrap(), BtfType::Const(fixed) => { 
assert_eq!(fixed.btf_type, int_type_id); }); // Ensure we can convert to bytes and back again let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } // Not possible to emulate file system file "/sys/kernel/btf/vmlinux" as big endian, so skip #[test] #[cfg(feature = "std")] #[cfg_attr(miri, ignore = "`open` not available when isolation is enabled")] #[cfg(target_endian = "little")] fn test_read_btf_from_sys_fs() { let btf = Btf::parse_file("/sys/kernel/btf/vmlinux", Endianness::default()).unwrap(); let task_struct_id = btf .id_by_type_name_kind("task_struct", BtfKind::Struct) .unwrap(); // we can't assert on exact ID since this may change across kernel versions assert!(task_struct_id != 0); let netif_id = btf .id_by_type_name_kind("netif_receive_skb", BtfKind::Func) .unwrap(); assert!(netif_id != 0); let u32_def = btf.id_by_type_name_kind("__u32", BtfKind::Typedef).unwrap(); assert!(u32_def != 0); let u32_base = btf.resolve_type(u32_def).unwrap(); assert!(u32_base != 0); let u32_ty = btf.type_by_id(u32_base).unwrap(); assert_eq!(u32_ty.kind(), BtfKind::Int); } #[test] fn test_sanitize_signed_enum() { let mut btf = Btf::new(); let name_offset = btf.add_string("signed_enum"); let name_a = btf.add_string("A"); let name_b = btf.add_string("B"); let name_c = btf.add_string("C"); let enum64_type = Enum::new( name_offset, true, vec![ BtfEnum::new(name_a, -1i32 as u32), BtfEnum::new(name_b, -2i32 as u32), BtfEnum::new(name_c, -3i32 as u32), ], ); let enum_type_id = btf.add_type(BtfType::Enum(enum64_type)); let features = BtfFeatures { btf_enum64: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(enum_type_id).unwrap(), BtfType::Enum(fixed) => { assert!(!fixed.is_signed()); assert_matches!(fixed.variants[..], [ BtfEnum { name_offset: name1, value: 0xFFFF_FFFF }, BtfEnum { name_offset: name2, value: 0xFFFF_FFFE }, BtfEnum { name_offset: name3, value: 0xFFFF_FFFD 
}, ] => { assert_eq!(name1, name_a); assert_eq!(name2, name_b); assert_eq!(name3, name_c); }); }); // Ensure we can convert to bytes and back again. let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } #[test] fn test_sanitize_enum64() { let mut btf = Btf::new(); let name_offset = btf.add_string("enum64"); let name_a = btf.add_string("A"); let name_b = btf.add_string("B"); let name_c = btf.add_string("C"); let enum64_type = Enum64::new( name_offset, false, vec![ BtfEnum64::new(name_a, 1), BtfEnum64::new(name_b, 2), BtfEnum64::new(name_c, 3), ], ); let enum_type_id = btf.add_type(BtfType::Enum64(enum64_type)); let features = BtfFeatures { btf_enum64: false, ..Default::default() }; btf.fixup_and_sanitize(&HashMap::new(), &HashMap::new(), &features) .unwrap(); assert_matches!(btf.type_by_id(enum_type_id).unwrap(), BtfType::Union(fixed) => { let placeholder = btf.id_by_type_name_kind("enum64_placeholder", BtfKind::Int) .expect("enum64_placeholder type not found"); assert_matches!(fixed.members[..], [ BtfMember { name_offset: name_offset1, btf_type: btf_type1, offset: 0 }, BtfMember { name_offset: name_offset2, btf_type: btf_type2, offset: 0 }, BtfMember { name_offset: name_offset3, btf_type: btf_type3, offset: 0 }, ] => { assert_eq!(name_offset1, name_a); assert_eq!(btf_type1, placeholder); assert_eq!(name_offset2, name_b); assert_eq!(btf_type2, placeholder); assert_eq!(name_offset3, name_c); assert_eq!(btf_type3, placeholder); }); }); // Ensure we can convert to bytes and back again. 
let raw = btf.to_bytes(); Btf::parse(&raw, Endianness::default()).unwrap(); } } aya-obj-0.2.1/src/btf/info.rs000064400000000000000000000143751046102023000140230ustar 00000000000000use alloc::{string::String, vec, vec::Vec}; use bytes::BufMut; use object::Endianness; use crate::{ generated::{bpf_func_info, bpf_line_info}, relocation::INS_SIZE, util::{bytes_of, HashMap}, }; /* The func_info subsection layout: * record size for struct bpf_func_info in the func_info subsection * struct btf_sec_func_info for section #1 * a list of bpf_func_info records for section #1 * where struct bpf_func_info mimics one in include/uapi/linux/bpf.h * but may not be identical * struct btf_sec_func_info for section #2 * a list of bpf_func_info records for section #2 * ...... */ /// A collection of [bpf_func_info] collected from the `btf_ext_info_sec` struct /// inside the [FuncInfo] subsection. /// /// See [BPF Type Format (BTF) — The Linux Kernel documentation](https://docs.kernel.org/bpf/btf.html) /// for more information. 
#[derive(Debug, Clone, Default)]
pub struct FuncSecInfo {
    pub(crate) _sec_name_offset: u32,
    /// The number of info entries
    pub num_info: u32,
    /// Info entries
    pub func_info: Vec<bpf_func_info>,
}

impl FuncSecInfo {
    pub(crate) fn parse(
        sec_name_offset: u32,
        num_info: u32,
        rec_size: usize,
        func_info_data: &[u8],
        endianness: Endianness,
    ) -> FuncSecInfo {
        let func_info = func_info_data
            .chunks(rec_size)
            .map(|data| {
                let read_u32 = if endianness == Endianness::Little {
                    u32::from_le_bytes
                } else {
                    u32::from_be_bytes
                };
                let mut offset = 0;
                // ELF instruction offsets are in bytes.
                // Kernel instruction offsets are in instruction units,
                // so we convert by dividing the byte offset by INS_SIZE.
                let insn_off =
                    read_u32(data[offset..offset + 4].try_into().unwrap()) / INS_SIZE as u32;
                offset += 4;
                let type_id = read_u32(data[offset..offset + 4].try_into().unwrap());
                bpf_func_info { insn_off, type_id }
            })
            .collect();

        FuncSecInfo {
            _sec_name_offset: sec_name_offset,
            num_info,
            func_info,
        }
    }

    /// Encodes the [bpf_func_info] entries.
    pub fn func_info_bytes(&self) -> Vec<u8> {
        let mut buf = vec![];
        for l in &self.func_info {
            // Safety: bpf_func_info is POD
            buf.put(unsafe { bytes_of::<bpf_func_info>(l) })
        }
        buf
    }

    /// Returns the number of [bpf_func_info] entries.
    pub fn len(&self) -> usize {
        self.func_info.len()
    }
}

/// A collection of [FuncSecInfo] collected from the `func_info` subsection
/// in the `.BTF.ext` section.
///
/// See [BPF Type Format (BTF) — The Linux Kernel documentation](https://docs.kernel.org/bpf/btf.html)
/// for more information.
#[derive(Debug, Clone)]
pub struct FuncInfo {
    /// The [FuncSecInfo] subsections for some sections, referenced by section names
    pub data: HashMap<String, FuncSecInfo>,
}

impl FuncInfo {
    pub(crate) fn new() -> FuncInfo {
        FuncInfo {
            data: HashMap::new(),
        }
    }

    pub(crate) fn get(&self, name: &str) -> FuncSecInfo {
        match self.data.get(name) {
            Some(d) => d.clone(),
            None => FuncSecInfo::default(),
        }
    }
}

/// A collection of [bpf_line_info] collected from the `btf_ext_info_sec` struct
/// inside the `line_info` subsection.
///
/// See [BPF Type Format (BTF) — The Linux Kernel documentation](https://docs.kernel.org/bpf/btf.html)
/// for more information.
#[derive(Debug, Clone, Default)]
pub struct LineSecInfo {
    // each line info section has a header
    pub(crate) _sec_name_offset: u32,
    /// The number of entries
    pub num_info: u32,
    // followed by one or more bpf_line_info structs
    /// The [bpf_line_info] entries
    pub line_info: Vec<bpf_line_info>,
}

impl LineSecInfo {
    pub(crate) fn parse(
        sec_name_offset: u32,
        num_info: u32,
        rec_size: usize,
        func_info_data: &[u8],
        endianness: Endianness,
    ) -> LineSecInfo {
        let line_info = func_info_data
            .chunks(rec_size)
            .map(|data| {
                let read_u32 = if endianness == Endianness::Little {
                    u32::from_le_bytes
                } else {
                    u32::from_be_bytes
                };
                let mut offset = 0;
                // ELF instruction offsets are in bytes.
                // Kernel instruction offsets are in instruction units,
                // so we convert by dividing the byte offset by INS_SIZE.
                let insn_off =
                    read_u32(data[offset..offset + 4].try_into().unwrap()) / INS_SIZE as u32;
                offset += 4;
                let file_name_off = read_u32(data[offset..offset + 4].try_into().unwrap());
                offset += 4;
                let line_off = read_u32(data[offset..offset + 4].try_into().unwrap());
                offset += 4;
                let line_col = read_u32(data[offset..offset + 4].try_into().unwrap());

                bpf_line_info {
                    insn_off,
                    file_name_off,
                    line_off,
                    line_col,
                }
            })
            .collect();

        LineSecInfo {
            _sec_name_offset: sec_name_offset,
            num_info,
            line_info,
        }
    }

    /// Encodes the entries.
    pub fn line_info_bytes(&self) -> Vec<u8> {
        let mut buf = vec![];
        for l in &self.line_info {
            // Safety: bpf_line_info is POD
            buf.put(unsafe { bytes_of::<bpf_line_info>(l) })
        }
        buf
    }

    /// Returns the number of entries.
    pub fn len(&self) -> usize {
        self.line_info.len()
    }
}

#[derive(Debug, Clone)]
pub(crate) struct LineInfo {
    pub data: HashMap<String, LineSecInfo>,
}

impl LineInfo {
    pub(crate) fn new() -> LineInfo {
        LineInfo {
            data: HashMap::new(),
        }
    }

    pub(crate) fn get(&self, name: &str) -> LineSecInfo {
        match self.data.get(name) {
            Some(d) => d.clone(),
            None => LineSecInfo::default(),
        }
    }
}
aya-obj-0.2.1/src/btf/mod.rs000064400000000000000000000003271046102023000136370ustar 00000000000000
//! BTF loading, parsing and relocation.

#[allow(clippy::module_inception)]
mod btf;
mod info;
mod relocation;
mod types;

pub use btf::*;
pub use info::*;
pub use relocation::BtfRelocationError;
pub use types::*;
aya-obj-0.2.1/src/btf/relocation.rs000064400000000000000000001326311046102023000152230ustar 00000000000000
use alloc::{
    borrow::{Cow, ToOwned as _},
    collections::BTreeMap,
    format,
    string::{String, ToString},
    vec,
    vec::Vec,
};
use core::{mem, ops::Bound::Included, ptr};

use object::SectionIndex;

#[cfg(not(feature = "std"))]
use crate::std;
use crate::{
    btf::{
        fields_are_compatible, types_are_compatible, Array, Btf, BtfError, BtfMember, BtfType,
        IntEncoding, Struct, Union, MAX_SPEC_LEN,
    },
    generated::{
        bpf_core_relo, bpf_core_relo_kind::*, bpf_insn, BPF_ALU, BPF_ALU64, BPF_B, BPF_CALL,
        BPF_DW, BPF_H, BPF_JMP, BPF_K, BPF_LD, BPF_LDX, BPF_ST, BPF_STX, BPF_W, BTF_INT_SIGNED,
    },
    util::HashMap,
    Function, Object,
};

/// The error type returned by [`Object::relocate_btf`].
#[derive(thiserror::Error, Debug)]
#[error("error relocating `{section}`")]
pub struct BtfRelocationError {
    /// The function name
    pub section: String,
    #[source]
    /// The original error
    error: RelocationError,
}

/// Relocation failures
#[derive(thiserror::Error, Debug)]
enum RelocationError {
    #[cfg(feature = "std")]
    /// I/O error
    #[error(transparent)]
    IOError(#[from] std::io::Error),

    /// Section not found
    #[error("section not found")]
    SectionNotFound,

    /// Function not found
    #[error("function not found")]
    FunctionNotFound,

    /// Invalid relocation access string
    #[error("invalid relocation access string {access_str}")]
    InvalidAccessString {
        /// The access string
        access_str: String,
    },

    /// Invalid instruction index referenced by relocation
    #[error("invalid instruction index #{index} referenced by relocation #{relocation_number}, the program contains {num_instructions} instructions")]
    InvalidInstructionIndex {
        /// The invalid instruction index
        index: usize,
        /// Number of instructions in the program
        num_instructions: usize,
        /// The relocation number
        relocation_number: usize,
    },

    /// Multiple candidate target types found with different memory layouts
    #[error("error relocating {type_name}, multiple candidate target types found with different memory layouts: {candidates:?}")]
    ConflictingCandidates {
        /// The type name
        type_name: String,
        /// The candidates
        candidates: Vec<String>,
    },

    /// Maximum nesting level reached evaluating candidate type
    #[error("maximum nesting level reached evaluating candidate type `{}`", err_type_name(.type_name))]
    MaximumNestingLevelReached {
        /// The type name
        type_name: Option<String>,
    },

    /// Invalid access string
    #[error("invalid access string `{spec}` for type `{}`: {error}", err_type_name(.type_name))]
    InvalidAccessIndex {
        /// The type name
        type_name: Option<String>,
        /// The access string
        spec: String,
        /// The index
        index: usize,
        /// The max index
        max_index: usize,
        /// The error message
        error: &'static str,
    },

    /// Relocation not valid for type
    #[error(
        "relocation #{relocation_number} of kind `{relocation_kind}` not valid for type `{type_kind}`: {error}"
    )]
    InvalidRelocationKindForType {
        /// The relocation number
        relocation_number: usize,
        /// The relocation kind
        relocation_kind: String,
        /// The type kind
        type_kind: String,
        /// The error message
        error: &'static str,
    },

    /// Invalid instruction referenced by relocation
    #[error(
        "instruction #{index} referenced by relocation #{relocation_number} is invalid: {error}"
    )]
    InvalidInstruction {
        /// The relocation number
        relocation_number: usize,
        /// The instruction index
        index: usize,
        /// The error message
        error: Cow<'static, str>,
    },

    #[error("applying relocation `{kind:?}` missing target BTF info for type `{type_id}` at instruction #{ins_index}")]
    MissingTargetDefinition {
        kind: RelocationKind,
        type_id: u32,
        ins_index: usize,
    },

    /// BTF error
    #[error("invalid BTF")]
    BtfError(#[from] BtfError),
}

fn err_type_name(name: &Option<String>) -> &str {
    name.as_deref().unwrap_or("[unknown name]")
}

#[derive(Copy, Clone, Debug)]
#[repr(u32)]
enum RelocationKind {
    FieldByteOffset = BPF_CORE_FIELD_BYTE_OFFSET,
    FieldByteSize = BPF_CORE_FIELD_BYTE_SIZE,
    FieldExists = BPF_CORE_FIELD_EXISTS,
    FieldSigned = BPF_CORE_FIELD_SIGNED,
    FieldLShift64 = BPF_CORE_FIELD_LSHIFT_U64,
    FieldRShift64 = BPF_CORE_FIELD_RSHIFT_U64,
    TypeIdLocal = BPF_CORE_TYPE_ID_LOCAL,
    TypeIdTarget = BPF_CORE_TYPE_ID_TARGET,
    TypeExists = BPF_CORE_TYPE_EXISTS,
    TypeSize = BPF_CORE_TYPE_SIZE,
    EnumVariantExists = BPF_CORE_ENUMVAL_EXISTS,
    EnumVariantValue = BPF_CORE_ENUMVAL_VALUE,
}

impl TryFrom<u32> for RelocationKind {
    type Error = BtfError;

    fn try_from(v: u32) -> Result<Self, Self::Error> {
        use RelocationKind::*;

        Ok(match v {
            BPF_CORE_FIELD_BYTE_OFFSET => FieldByteOffset,
            BPF_CORE_FIELD_BYTE_SIZE => FieldByteSize,
            BPF_CORE_FIELD_EXISTS => FieldExists,
            BPF_CORE_FIELD_SIGNED => FieldSigned,
            BPF_CORE_FIELD_LSHIFT_U64 => FieldLShift64,
            BPF_CORE_FIELD_RSHIFT_U64 => FieldRShift64,
            BPF_CORE_TYPE_ID_LOCAL => TypeIdLocal,
            BPF_CORE_TYPE_ID_TARGET => TypeIdTarget,
            BPF_CORE_TYPE_EXISTS => TypeExists,
            BPF_CORE_TYPE_SIZE => TypeSize,
            BPF_CORE_ENUMVAL_EXISTS => EnumVariantExists,
            BPF_CORE_ENUMVAL_VALUE => EnumVariantValue,
            kind => return Err(BtfError::InvalidRelocationKind { kind }),
        })
    }
}

#[derive(Debug, Copy, Clone)]
pub(crate) struct Relocation {
    kind: RelocationKind,
    ins_offset: usize,
    type_id: u32,
    access_str_offset: u32,
    number: usize,
}

impl Relocation {
    #[allow(unused_unsafe)]
    pub(crate) unsafe fn parse(data: &[u8], number: usize) -> Result<Relocation, BtfError> {
        if mem::size_of::<bpf_core_relo>() > data.len() {
            return Err(BtfError::InvalidRelocationInfo);
        }

        let rel = unsafe { ptr::read_unaligned::<bpf_core_relo>(data.as_ptr() as *const _) };

        Ok(Relocation {
            kind: rel.kind.try_into()?,
            ins_offset: rel.insn_off as usize,
            type_id: rel.type_id,
            access_str_offset: rel.access_str_off,
            number,
        })
    }
}

impl Object {
    /// Relocates programs inside this object file with loaded BTF info.
    pub fn relocate_btf(&mut self, target_btf: &Btf) -> Result<(), BtfRelocationError> {
        let (local_btf, btf_ext) = match (&self.btf, &self.btf_ext) {
            (Some(btf), Some(btf_ext)) => (btf, btf_ext),
            _ => return Ok(()),
        };

        let mut candidates_cache = HashMap::<u32, Vec<Candidate>>::new();
        for (sec_name_off, relos) in btf_ext.relocations() {
            let section_name =
                local_btf
                    .string_at(*sec_name_off)
                    .map_err(|e| BtfRelocationError {
                        section: format!("section@{sec_name_off}"),
                        error: RelocationError::BtfError(e),
                    })?;

            let (section_index, _) = self
                .section_infos
                .get(&section_name.to_string())
                .ok_or_else(|| BtfRelocationError {
                    section: section_name.to_string(),
                    error: RelocationError::SectionNotFound,
                })?;

            match relocate_btf_functions(
                section_index,
                &mut self.functions,
                relos,
                local_btf,
                target_btf,
                &mut candidates_cache,
            ) {
                Ok(_) => {}
                Err(error) => {
                    return Err(BtfRelocationError {
                        section: section_name.to_string(),
                        error,
                    })
                }
            }
        }

        Ok(())
    }
}

fn is_relocation_inside_function(
    section_index: &SectionIndex,
    func: &Function,
    rel: &Relocation,
) -> bool {
    if section_index.0 != func.section_index.0 {
        return false;
    }

    let ins_offset =
        rel.ins_offset / mem::size_of::<bpf_insn>();
    let func_offset = func.section_offset / mem::size_of::<bpf_insn>();
    let func_size = func.instructions.len();

    (func_offset..func_offset + func_size).contains(&ins_offset)
}

fn function_by_relocation<'a>(
    section_index: &SectionIndex,
    functions: &'a mut BTreeMap<(usize, u64), Function>,
    rel: &Relocation,
) -> Option<&'a mut Function> {
    functions
        .range_mut((
            Included(&(section_index.0, 0)),
            Included(&(section_index.0, u64::MAX)),
        ))
        .map(|(_, func)| func)
        .find(|func| is_relocation_inside_function(section_index, func, rel))
}

fn relocate_btf_functions<'target>(
    section_index: &SectionIndex,
    functions: &mut BTreeMap<(usize, u64), Function>,
    relos: &[Relocation],
    local_btf: &Btf,
    target_btf: &'target Btf,
    candidates_cache: &mut HashMap<u32, Vec<Candidate<'target>>>,
) -> Result<(), RelocationError> {
    let mut last_function_opt: Option<&mut Function> = None;

    for rel in relos {
        let function = match last_function_opt.take() {
            Some(func) if is_relocation_inside_function(section_index, func, rel) => func,
            _ => function_by_relocation(section_index, functions, rel)
                .ok_or(RelocationError::FunctionNotFound)?,
        };

        let instructions = &mut function.instructions;
        let ins_index = (rel.ins_offset - function.section_offset) / mem::size_of::<bpf_insn>();
        if ins_index >= instructions.len() {
            return Err(RelocationError::InvalidInstructionIndex {
                index: ins_index,
                num_instructions: instructions.len(),
                relocation_number: rel.number,
            });
        }

        let local_ty = local_btf.type_by_id(rel.type_id)?;
        let local_name = &*local_btf.type_name(local_ty)?;
        let access_str = &*local_btf.string_at(rel.access_str_offset)?;
        let local_spec = AccessSpec::new(local_btf, rel.type_id, access_str, *rel)?;

        let matches = match rel.kind {
            // we don't need to look at target types to relocate this value
            RelocationKind::TypeIdLocal => Vec::new(),
            _ => {
                let candidates = match candidates_cache.get(&rel.type_id) {
                    Some(cands) => cands,
                    None => {
                        candidates_cache.insert(
                            rel.type_id,
                            find_candidates(local_ty, local_name, target_btf)?,
                        );
                        candidates_cache.get(&rel.type_id).unwrap()
                    }
                };

                let mut matches = Vec::new();
                for candidate in candidates {
                    if let Some(candidate_spec) = match_candidate(&local_spec, candidate)? {
                        let comp_rel =
                            ComputedRelocation::new(rel, &local_spec, Some(&candidate_spec))?;
                        matches.push((candidate.name.clone(), candidate_spec, comp_rel));
                    }
                }

                matches
            }
        };

        let comp_rel = if !matches.is_empty() {
            let mut matches = matches.into_iter();
            let (_, target_spec, target_comp_rel) = matches.next().unwrap();

            // if there's more than one candidate, make sure that they all resolve to the
            // same value, else the relocation is ambiguous and can't be applied
            let conflicts = matches
                .filter_map(|(cand_name, cand_spec, cand_comp_rel)| {
                    if cand_spec.bit_offset != target_spec.bit_offset {
                        return Some(cand_name);
                    } else if let (Some(cand_comp_rel_target), Some(target_comp_rel_target)) = (
                        cand_comp_rel.target.as_ref(),
                        target_comp_rel.target.as_ref(),
                    ) {
                        if cand_comp_rel_target.value != target_comp_rel_target.value {
                            return Some(cand_name);
                        }
                    }

                    None
                })
                .collect::<Vec<_>>();
            if !conflicts.is_empty() {
                return Err(RelocationError::ConflictingCandidates {
                    type_name: local_name.to_string(),
                    candidates: conflicts,
                });
            }
            target_comp_rel
        } else {
            // there are no candidate matches and therefore no target_spec. This might mean
            // that matching failed, or that the relocation can be applied looking at local
            // types only (eg with EnumVariantExists, FieldExists etc)
            ComputedRelocation::new(rel, &local_spec, None)?
        };

        comp_rel.apply(function, rel, local_btf, target_btf)?;

        last_function_opt = Some(function);
    }

    Ok(())
}

fn flavorless_name(name: &str) -> &str {
    name.split_once("___").map_or(name, |x| x.0)
}

fn find_candidates<'target>(
    local_ty: &BtfType,
    local_name: &str,
    target_btf: &'target Btf,
) -> Result<Vec<Candidate<'target>>, BtfError> {
    let mut candidates = Vec::new();
    let local_name = flavorless_name(local_name);
    for (type_id, ty) in target_btf.types().enumerate() {
        if local_ty.kind() != ty.kind() {
            continue;
        }
        let name = &*target_btf.type_name(ty)?;
        if local_name != flavorless_name(name) {
            continue;
        }

        candidates.push(Candidate {
            name: name.to_owned(),
            btf: target_btf,
            _ty: ty,
            type_id: type_id as u32,
        });
    }

    Ok(candidates)
}

fn match_candidate<'target>(
    local_spec: &AccessSpec,
    candidate: &'target Candidate,
) -> Result<Option<AccessSpec<'target>>, RelocationError> {
    let mut target_spec = AccessSpec {
        btf: candidate.btf,
        root_type_id: candidate.type_id,
        relocation: local_spec.relocation,
        parts: Vec::new(),
        accessors: Vec::new(),
        bit_offset: 0,
    };

    match local_spec.relocation.kind {
        RelocationKind::TypeIdLocal
        | RelocationKind::TypeIdTarget
        | RelocationKind::TypeExists
        | RelocationKind::TypeSize => {
            if types_are_compatible(
                local_spec.btf,
                local_spec.root_type_id,
                candidate.btf,
                candidate.type_id,
            )?
            {
                Ok(Some(target_spec))
            } else {
                Ok(None)
            }
        }
        RelocationKind::EnumVariantExists | RelocationKind::EnumVariantValue => {
            let target_id = candidate.btf.resolve_type(candidate.type_id)?;
            let target_ty = candidate.btf.type_by_id(target_id)?;
            // the first accessor is guaranteed to have a name by construction
            let local_variant_name = local_spec.accessors[0].name.as_ref().unwrap();

            fn match_enum<'a>(
                iterator: impl Iterator<Item = (usize, u32)>,
                candidate: &Candidate,
                local_variant_name: &str,
                target_id: u32,
                mut target_spec: AccessSpec<'a>,
            ) -> Result<Option<AccessSpec<'a>>, RelocationError> {
                for (index, name_offset) in iterator {
                    let target_variant_name = candidate.btf.string_at(name_offset)?;
                    if flavorless_name(local_variant_name)
                        == flavorless_name(&target_variant_name)
                    {
                        target_spec.parts.push(index);
                        target_spec.accessors.push(Accessor {
                            index,
                            type_id: target_id,
                            name: None,
                        });
                        return Ok(Some(target_spec));
                    }
                }
                Ok(None)
            }

            match target_ty {
                BtfType::Enum(en) => match_enum(
                    en.variants
                        .iter()
                        .map(|member| member.name_offset)
                        .enumerate(),
                    candidate,
                    local_variant_name,
                    target_id,
                    target_spec,
                ),
                BtfType::Enum64(en) => match_enum(
                    en.variants
                        .iter()
                        .map(|member| member.name_offset)
                        .enumerate(),
                    candidate,
                    local_variant_name,
                    target_id,
                    target_spec,
                ),
                _ => Ok(None),
            }
        }
        RelocationKind::FieldByteOffset
        | RelocationKind::FieldByteSize
        | RelocationKind::FieldExists
        | RelocationKind::FieldSigned
        | RelocationKind::FieldLShift64
        | RelocationKind::FieldRShift64 => {
            let mut target_id = candidate.type_id;
            for (i, accessor) in local_spec.accessors.iter().enumerate() {
                target_id = candidate.btf.resolve_type(target_id)?;

                if accessor.name.is_some() {
                    if let Some(next_id) = match_member(
                        local_spec.btf,
                        local_spec,
                        accessor,
                        candidate.btf,
                        target_id,
                        &mut target_spec,
                    )? {
                        target_id = next_id;
                    } else {
                        return Ok(None);
                    }
                } else {
                    // i = 0 is the base struct. for i > 0, we need to potentially do bounds checking
                    if i > 0 {
                        let target_ty = candidate.btf.type_by_id(target_id)?;
                        let array = match target_ty {
                            BtfType::Array(Array { array, .. }) => array,
                            _ => return Ok(None),
                        };

                        let var_len = array.len == 0 && {
                            // an array is potentially variable length if it's the last field
                            // of the parent struct and has 0 elements
                            let parent = target_spec.accessors.last().unwrap();
                            let parent_ty = candidate.btf.type_by_id(parent.type_id)?;
                            match parent_ty {
                                BtfType::Struct(s) => parent.index == s.members.len() - 1,
                                _ => false,
                            }
                        };
                        if !var_len && accessor.index >= array.len as usize {
                            return Ok(None);
                        }

                        target_id = candidate.btf.resolve_type(array.element_type)?;
                    }

                    if target_spec.parts.len() == MAX_SPEC_LEN {
                        return Err(RelocationError::MaximumNestingLevelReached {
                            type_name: Some(candidate.name.clone()),
                        });
                    }

                    target_spec.parts.push(accessor.index);
                    target_spec.accessors.push(Accessor {
                        index: accessor.index,
                        type_id: target_id,
                        name: None,
                    });
                    target_spec.bit_offset += accessor.index * candidate.btf.type_size(target_id)?
                        * 8;
                }
            }

            Ok(Some(target_spec))
        }
    }
}

fn match_member<'target>(
    local_btf: &Btf,
    local_spec: &AccessSpec<'_>,
    local_accessor: &Accessor,
    target_btf: &'target Btf,
    target_id: u32,
    target_spec: &mut AccessSpec<'target>,
) -> Result<Option<u32>, RelocationError> {
    let local_ty = local_btf.type_by_id(local_accessor.type_id)?;
    let local_member = match local_ty {
        // this won't panic, bounds are checked when local_spec is built in AccessSpec::new
        BtfType::Struct(s) => s.members.get(local_accessor.index).unwrap(),
        BtfType::Union(u) => u.members.get(local_accessor.index).unwrap(),
        local_ty => panic!("unexpected type {:?}", local_ty),
    };

    let local_name = &*local_btf.string_at(local_member.name_offset)?;
    let target_id = target_btf.resolve_type(target_id)?;
    let target_ty = target_btf.type_by_id(target_id)?;

    let target_members: Vec<&BtfMember> = match target_ty.members() {
        Some(members) => members.collect(),
        // not a fields type, no match
        None => return Ok(None),
    };

    for (index, target_member) in target_members.iter().enumerate() {
        if target_spec.parts.len() == MAX_SPEC_LEN {
            let root_ty = target_spec.btf.type_by_id(target_spec.root_type_id)?;
            return Err(RelocationError::MaximumNestingLevelReached {
                type_name: target_spec.btf.err_type_name(root_ty),
            });
        }

        // this will not panic as we've already established these are fields types
        let bit_offset = target_ty.member_bit_offset(target_member).unwrap();
        let target_name = &*target_btf.string_at(target_member.name_offset)?;

        if target_name.is_empty() {
            let ret = match_member(
                local_btf,
                local_spec,
                local_accessor,
                target_btf,
                target_member.btf_type,
                target_spec,
            )?;
            if ret.is_some() {
                target_spec.bit_offset += bit_offset;
                target_spec.parts.push(index);

                return Ok(ret);
            }
        } else if local_name == target_name {
            if fields_are_compatible(
                local_spec.btf,
                local_member.btf_type,
                target_btf,
                target_member.btf_type,
            )?
            {
                target_spec.bit_offset += bit_offset;
                target_spec.parts.push(index);
                target_spec.accessors.push(Accessor {
                    type_id: target_id,
                    index,
                    name: Some(target_name.to_owned()),
                });

                return Ok(Some(target_member.btf_type));
            } else {
                return Ok(None);
            }
        }
    }

    Ok(None)
}

#[derive(Debug)]
struct AccessSpec<'a> {
    btf: &'a Btf,
    root_type_id: u32,
    parts: Vec<usize>,
    accessors: Vec<Accessor>,
    relocation: Relocation,
    bit_offset: usize,
}

impl<'a> AccessSpec<'a> {
    fn new(
        btf: &'a Btf,
        root_type_id: u32,
        spec: &str,
        relocation: Relocation,
    ) -> Result<AccessSpec<'a>, RelocationError> {
        let parts = spec
            .split(':')
            .map(|s| s.parse::<usize>())
            .collect::<Result<Vec<_>, _>>()
            .map_err(|_| RelocationError::InvalidAccessString {
                access_str: spec.to_string(),
            })?;

        let mut type_id = btf.resolve_type(root_type_id)?;
        let ty = btf.type_by_id(type_id)?;

        let spec = match relocation.kind {
            RelocationKind::TypeIdLocal
            | RelocationKind::TypeIdTarget
            | RelocationKind::TypeExists
            | RelocationKind::TypeSize => {
                if parts != [0] {
                    return Err(RelocationError::InvalidAccessString {
                        access_str: spec.to_string(),
                    });
                }
                AccessSpec {
                    btf,
                    root_type_id,
                    relocation,
                    parts,
                    accessors: Vec::new(),
                    bit_offset: 0,
                }
            }
            RelocationKind::EnumVariantExists | RelocationKind::EnumVariantValue => match ty {
                BtfType::Enum(_) | BtfType::Enum64(_) => {
                    if parts.len() != 1 {
                        return Err(RelocationError::InvalidAccessString {
                            access_str: spec.to_string(),
                        });
                    }
                    let index = parts[0];

                    let (n_variants, name_offset) = match ty {
                        BtfType::Enum(en) => (
                            en.variants.len(),
                            en.variants.get(index).map(|v| v.name_offset),
                        ),
                        BtfType::Enum64(en) => (
                            en.variants.len(),
                            en.variants.get(index).map(|v| v.name_offset),
                        ),
                        _ => unreachable!(),
                    };

                    if name_offset.is_none() {
                        return Err(RelocationError::InvalidAccessIndex {
                            type_name: btf.err_type_name(ty),
                            spec: spec.to_string(),
                            index,
                            max_index: n_variants,
                            error: "tried to access nonexistant enum variant",
                        });
                    }
                    let accessors = vec![Accessor {
                        type_id,
                        index,
                        name: Some(btf.string_at(name_offset.unwrap())?.to_string()),
                    }];

                    AccessSpec {
                        btf,
                        root_type_id,
                        relocation,
                        parts,
                        accessors,
                        bit_offset: 0,
                    }
                }
                _ => {
                    return Err(RelocationError::InvalidRelocationKindForType {
                        relocation_number: relocation.number,
                        relocation_kind: format!("{:?}", relocation.kind),
                        type_kind: format!("{:?}", ty.kind()),
                        error: "enum relocation on non-enum type",
                    })
                }
            },
            RelocationKind::FieldByteOffset
            | RelocationKind::FieldByteSize
            | RelocationKind::FieldExists
            | RelocationKind::FieldSigned
            | RelocationKind::FieldLShift64
            | RelocationKind::FieldRShift64 => {
                let mut accessors = vec![Accessor {
                    type_id,
                    index: parts[0],
                    name: None,
                }];
                let mut bit_offset = accessors[0].index * btf.type_size(type_id)?;
                for index in parts.iter().skip(1).cloned() {
                    type_id = btf.resolve_type(type_id)?;
                    let ty = btf.type_by_id(type_id)?;

                    match ty {
                        BtfType::Struct(Struct { members, .. })
                        | BtfType::Union(Union { members, .. }) => {
                            if index >= members.len() {
                                return Err(RelocationError::InvalidAccessIndex {
                                    type_name: btf.err_type_name(ty),
                                    spec: spec.to_string(),
                                    index,
                                    max_index: members.len(),
                                    error: "out of bounds struct or union access",
                                });
                            }

                            let member = &members[index];
                            bit_offset += ty.member_bit_offset(member).unwrap();

                            if member.name_offset != 0 {
                                accessors.push(Accessor {
                                    type_id,
                                    index,
                                    name: Some(btf.string_at(member.name_offset)?.to_string()),
                                });
                            }

                            type_id = member.btf_type;
                        }
                        BtfType::Array(Array { array, ..
                        }) => {
                            type_id = btf.resolve_type(array.element_type)?;
                            let var_len = array.len == 0 && {
                                // an array is potentially variable length if it's the last field
                                // of the parent struct and has 0 elements
                                let parent = accessors.last().unwrap();
                                let parent_ty = btf.type_by_id(parent.type_id)?;
                                match parent_ty {
                                    BtfType::Struct(s) => index == s.members.len() - 1,
                                    _ => false,
                                }
                            };
                            if !var_len && index >= array.len as usize {
                                return Err(RelocationError::InvalidAccessIndex {
                                    type_name: btf.err_type_name(ty),
                                    spec: spec.to_string(),
                                    index,
                                    max_index: array.len as usize,
                                    error: "array index out of bounds",
                                });
                            }
                            accessors.push(Accessor {
                                type_id,
                                index,
                                name: None,
                            });
                            let size = btf.type_size(type_id)?;
                            bit_offset += index * size * 8;
                        }
                        rel_kind => {
                            return Err(RelocationError::InvalidRelocationKindForType {
                                relocation_number: relocation.number,
                                relocation_kind: format!("{rel_kind:?}"),
                                type_kind: format!("{:?}", ty.kind()),
                                error: "field relocation on a type that doesn't have fields",
                            });
                        }
                    };
                }

                AccessSpec {
                    btf,
                    root_type_id,
                    parts,
                    accessors,
                    relocation,
                    bit_offset,
                }
            }
        };

        Ok(spec)
    }
}

#[derive(Debug)]
struct Accessor {
    type_id: u32,
    index: usize,
    name: Option<String>,
}

#[derive(Debug)]
struct Candidate<'a> {
    name: String,
    btf: &'a Btf,
    _ty: &'a BtfType,
    type_id: u32,
}

#[derive(Debug)]
struct ComputedRelocation {
    local: ComputedRelocationValue,
    target: Option<ComputedRelocationValue>,
}

#[derive(Debug)]
struct ComputedRelocationValue {
    value: u64,
    size: u32,
    type_id: Option<u32>,
}

fn poison_insn(ins: &mut bpf_insn) {
    ins.code = (BPF_JMP | BPF_CALL) as u8;
    ins.set_dst_reg(0);
    ins.set_src_reg(0);
    ins.off = 0;
    ins.imm = 0xBAD2310;
}

impl ComputedRelocation {
    fn new(
        rel: &Relocation,
        local_spec: &AccessSpec,
        target_spec: Option<&AccessSpec>,
    ) -> Result<ComputedRelocation, RelocationError> {
        use RelocationKind::*;
        let ret = match rel.kind {
            FieldByteOffset | FieldByteSize | FieldExists | FieldSigned | FieldLShift64
            | FieldRShift64 => ComputedRelocation {
                local: Self::compute_field_relocation(rel, Some(local_spec))?,
                target:
                    Self::compute_field_relocation(rel, target_spec).ok(),
            },
            TypeIdLocal | TypeIdTarget | TypeExists | TypeSize => ComputedRelocation {
                local: Self::compute_type_relocation(rel, local_spec, target_spec)?,
                target: Self::compute_type_relocation(rel, local_spec, target_spec).ok(),
            },
            EnumVariantExists | EnumVariantValue => ComputedRelocation {
                local: Self::compute_enum_relocation(rel, Some(local_spec))?,
                target: Self::compute_enum_relocation(rel, target_spec).ok(),
            },
        };

        Ok(ret)
    }

    fn apply(
        &self,
        function: &mut Function,
        rel: &Relocation,
        local_btf: &Btf,
        target_btf: &Btf,
    ) -> Result<(), RelocationError> {
        let instructions = &mut function.instructions;
        let num_instructions = instructions.len();
        let ins_index = (rel.ins_offset - function.section_offset) / mem::size_of::<bpf_insn>();
        let ins =
            instructions
                .get_mut(ins_index)
                .ok_or(RelocationError::InvalidInstructionIndex {
                    index: rel.ins_offset,
                    num_instructions,
                    relocation_number: rel.number,
                })?;

        let target = if let Some(target) = self.target.as_ref() {
            target
        } else {
            let is_ld_imm64 = ins.code == (BPF_LD | BPF_DW) as u8;

            poison_insn(ins);

            if is_ld_imm64 {
                let next_ins = instructions.get_mut(ins_index + 1).ok_or(
                    RelocationError::InvalidInstructionIndex {
                        index: (ins_index + 1) * mem::size_of::<bpf_insn>(),
                        num_instructions,
                        relocation_number: rel.number,
                    },
                )?;

                poison_insn(next_ins);
            }

            return Ok(());
        };

        let class = (ins.code & 0x07) as u32;

        let target_value = target.value;

        match class {
            BPF_ALU | BPF_ALU64 => {
                let src_reg = ins.src_reg();
                if src_reg != BPF_K as u8 {
                    return Err(RelocationError::InvalidInstruction {
                        relocation_number: rel.number,
                        index: ins_index,
                        error: format!("invalid src_reg={src_reg:x} expected {BPF_K:x}").into(),
                    });
                }

                ins.imm = target_value as i32;
            }
            BPF_LDX | BPF_ST | BPF_STX => {
                if target_value > i16::MAX as u64 {
                    return Err(RelocationError::InvalidInstruction {
                        relocation_number: rel.number,
                        index: ins_index,
                        error: format!("value `{target_value}` overflows 16 bits offset field")
                            .into(),
                    });
                }

                ins.off =
                    target_value as i16;

                if self.local.size != target.size {
                    let local_ty = local_btf.type_by_id(self.local.type_id.unwrap())?;
                    let target_ty = target_btf.type_by_id(target.type_id.unwrap())?;
                    let unsigned = |info: u32| ((info >> 24) & 0x0F) & BTF_INT_SIGNED == 0;
                    use BtfType::*;
                    match (local_ty, target_ty) {
                        (Ptr(_), Ptr(_)) => {}
                        (Int(local), Int(target))
                            if unsigned(local.data) && unsigned(target.data) => {}
                        _ => {
                            return Err(RelocationError::InvalidInstruction {
                                relocation_number: rel.number,
                                index: ins_index,
                                error: format!(
                                    "original type {} has size {} but target type {} has size {}",
                                    err_type_name(&local_btf.err_type_name(local_ty)),
                                    self.local.size,
                                    err_type_name(&target_btf.err_type_name(target_ty)),
                                    target.size,
                                )
                                .into(),
                            })
                        }
                    }

                    let size = match target.size {
                        8 => BPF_DW,
                        4 => BPF_W,
                        2 => BPF_H,
                        1 => BPF_B,
                        size => {
                            return Err(RelocationError::InvalidInstruction {
                                relocation_number: rel.number,
                                index: ins_index,
                                error: format!("invalid target size {size}").into(),
                            })
                        }
                    } as u8;
                    ins.code = ins.code & 0xE0 | size | ins.code & 0x07;
                }
            }
            BPF_LD => {
                ins.imm = target_value as i32;
                let next_ins = instructions.get_mut(ins_index + 1).ok_or(
                    RelocationError::InvalidInstructionIndex {
                        index: ins_index + 1,
                        num_instructions,
                        relocation_number: rel.number,
                    },
                )?;

                next_ins.imm = (target_value >> 32) as i32;
            }
            class => {
                return Err(RelocationError::InvalidInstruction {
                    relocation_number: rel.number,
                    index: ins_index,
                    error: format!("invalid instruction class {class:x}").into(),
                })
            }
        };

        Ok(())
    }

    fn compute_enum_relocation(
        rel: &Relocation,
        spec: Option<&AccessSpec>,
    ) -> Result<ComputedRelocationValue, RelocationError> {
        use RelocationKind::*;
        let value = match (rel.kind, spec) {
            (EnumVariantExists, spec) => spec.is_some() as u64,
            (EnumVariantValue, Some(spec)) => {
                let accessor = &spec.accessors[0];
                match spec.btf.type_by_id(accessor.type_id)?
{ BtfType::Enum(en) => { let value = en.variants[accessor.index].value; if en.is_signed() { value as i32 as u64 } else { value as u64 } } BtfType::Enum64(en) => { let variant = &en.variants[accessor.index]; (variant.value_high as u64) << 32 | variant.value_low as u64 } // candidate selection ensures that rel_kind == local_kind == target_kind _ => unreachable!(), } } _ => { return Err(RelocationError::MissingTargetDefinition { kind: rel.kind, type_id: rel.type_id, ins_index: rel.ins_offset / mem::size_of::(), })?; } }; Ok(ComputedRelocationValue { value, size: 0, type_id: None, }) } fn compute_field_relocation( rel: &Relocation, spec: Option<&AccessSpec>, ) -> Result { use RelocationKind::*; if let FieldExists = rel.kind { // this is the bpf_preserve_field_info(member_access, FIELD_EXISTENCE) case. If we // managed to build a spec, it means the field exists. return Ok(ComputedRelocationValue { value: spec.is_some() as u64, size: 0, type_id: None, }); } let spec = match spec { Some(spec) => spec, None => { return Err(RelocationError::MissingTargetDefinition { kind: rel.kind, type_id: rel.type_id, ins_index: rel.ins_offset / mem::size_of::(), })?; } }; let accessor = spec.accessors.last().unwrap(); if accessor.name.is_none() { // the last accessor is unnamed, meaning that this is an array access return match rel.kind { FieldByteOffset => Ok(ComputedRelocationValue { value: (spec.bit_offset / 8) as u64, size: spec.btf.type_size(accessor.type_id)? as u32, type_id: Some(accessor.type_id), }), FieldByteSize => Ok(ComputedRelocationValue { value: spec.btf.type_size(accessor.type_id)? 
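The two enum arms above reconstruct a variant's numeric value differently: a 32-bit `BTF_KIND_ENUM` variant is sign-extended when the enum is signed, while a `BTF_KIND_ENUM64` variant is rebuilt from its 32-bit low/high halves. A standalone sketch of that arithmetic (helper names are illustrative):

```rust
// 32-bit enum variant: sign-extend through i32 only when the enum is signed.
fn enum32_value(raw: u32, signed: bool) -> u64 {
    if signed { raw as i32 as u64 } else { raw as u64 }
}

// BTF_KIND_ENUM64 variant: value is split into two u32 words on the wire.
fn enum64_value(value_low: u32, value_high: u32) -> u64 {
    (value_high as u64) << 32 | value_low as u64
}

fn main() {
    // -1 stored as 0xFFFF_FFFF sign-extends to u64::MAX in a signed enum...
    assert_eq!(enum32_value(0xFFFF_FFFF, true), u64::MAX);
    // ...but stays 0xFFFF_FFFF in an unsigned one.
    assert_eq!(enum32_value(0xFFFF_FFFF, false), 0xFFFF_FFFF);
    assert_eq!(enum64_value(0x9ABC_DEF0, 0x1234_5678), 0x1234_5678_9ABC_DEF0);
    println!("ok");
}
```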
as u64, size: 0, type_id: Some(accessor.type_id), }), rel_kind => { let ty = spec.btf.type_by_id(accessor.type_id)?; return Err(RelocationError::InvalidRelocationKindForType { relocation_number: rel.number, relocation_kind: format!("{rel_kind:?}"), type_kind: format!("{:?}", ty.kind()), error: "invalid relocation kind for array type", }); } }; } let ty = spec.btf.type_by_id(accessor.type_id)?; let (ll_ty, member) = match ty { BtfType::Struct(t) => (ty, t.members.get(accessor.index).unwrap()), BtfType::Union(t) => (ty, t.members.get(accessor.index).unwrap()), _ => { return Err(RelocationError::InvalidRelocationKindForType { relocation_number: rel.number, relocation_kind: format!("{:?}", rel.kind), type_kind: format!("{:?}", ty.kind()), error: "field relocation on a type that doesn't have fields", }); } }; let bit_off = spec.bit_offset as u32; let member_type_id = spec.btf.resolve_type(member.btf_type)?; let member_ty = spec.btf.type_by_id(member_type_id)?; let mut byte_size; let mut byte_off; let mut bit_size = ll_ty.member_bit_field_size(member).unwrap() as u32; let is_bitfield = bit_size > 0; if is_bitfield { // find out the smallest int size to load the bitfield byte_size = member_ty.size().unwrap(); byte_off = bit_off / 8 / byte_size * byte_size; while bit_off + bit_size - byte_off * 8 > byte_size * 8 { if byte_size >= 8 { // the bitfield is larger than 8 bytes!? return Err(BtfError::InvalidTypeInfo.into()); } byte_size *= 2; byte_off = bit_off / 8 / byte_size * byte_size; } } else { byte_size = spec.btf.type_size(member_type_id)? 
as u32; bit_size = byte_size * 8; byte_off = spec.bit_offset as u32 / 8; } let mut value = ComputedRelocationValue { value: 0, size: 0, type_id: None, }; #[allow(clippy::wildcard_in_or_patterns)] match rel.kind { FieldByteOffset => { value.value = byte_off as u64; if !is_bitfield { value.size = byte_size; value.type_id = Some(member_type_id); } } FieldByteSize => { value.value = byte_size as u64; } FieldSigned => match member_ty { BtfType::Enum(en) => value.value = en.is_signed() as u64, BtfType::Enum64(en) => value.value = en.is_signed() as u64, BtfType::Int(i) => value.value = i.encoding() as u64 & IntEncoding::Signed as u64, _ => (), }, #[cfg(target_endian = "little")] FieldLShift64 => { value.value = 64 - (bit_off + bit_size - byte_off * 8) as u64; } #[cfg(target_endian = "big")] FieldLShift64 => { value.value = ((8 - byte_size) * 8 + (bit_off - byte_off * 8)) as u64; } FieldRShift64 => { value.value = 64 - bit_size as u64; } kind @ (FieldExists | TypeIdLocal | TypeIdTarget | TypeExists | TypeSize | EnumVariantExists | EnumVariantValue) => { panic!("unexpected relocation kind {:?}", kind) } } Ok(value) } fn compute_type_relocation( rel: &Relocation, local_spec: &AccessSpec, target_spec: Option<&AccessSpec>, ) -> Result { use RelocationKind::*; let value = match (rel.kind, target_spec) { (TypeIdLocal, _) => local_spec.root_type_id as u64, (TypeIdTarget, Some(target_spec)) => target_spec.root_type_id as u64, (TypeExists, target_spec) => target_spec.is_some() as u64, (TypeSize, Some(target_spec)) => { target_spec.btf.type_size(target_spec.root_type_id)? 
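The bitfield path above grows the load window, starting from the member's int size, until it covers the whole bitfield, then the `FieldLShift64`/`FieldRShift64` relocations derive the shifts a 64-bit load needs to extract it. A sketch of that computation under the same arithmetic (little-endian shift shown; `bitfield_window` is an illustrative name):

```rust
// Find the byte offset and smallest power-of-two load size that cover the
// bitfield, mirroring the doubling loop in compute_field_relocation.
fn bitfield_window(bit_off: u32, bit_size: u32, mut byte_size: u32) -> Option<(u32, u32)> {
    let mut byte_off = bit_off / 8 / byte_size * byte_size;
    while bit_off + bit_size - byte_off * 8 > byte_size * 8 {
        if byte_size >= 8 {
            return None; // bitfield larger than 8 bytes
        }
        byte_size *= 2;
        byte_off = bit_off / 8 / byte_size * byte_size;
    }
    Some((byte_off, byte_size))
}

fn main() {
    // A 10-bit field at bit offset 30 inside a 4-byte member: a 4-byte load
    // can't cover bits 30..40, so the window doubles to 8 bytes.
    let (byte_off, byte_size) = bitfield_window(30, 10, 4).unwrap();
    assert_eq!((byte_off, byte_size), (0, 8));
    // Little-endian FieldLShift64 / FieldRShift64 for that window:
    let lshift = 64 - (30 + 10 - byte_off * 8);
    let rshift = 64 - 10;
    assert_eq!((lshift, rshift), (24, 54));
    println!("ok");
}
```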
as u64
            }
            _ => {
                return Err(RelocationError::MissingTargetDefinition {
                    kind: rel.kind,
                    type_id: rel.type_id,
                    ins_index: rel.ins_offset / mem::size_of::<u64>(),
                })?;
            }
        };

        Ok(ComputedRelocationValue {
            value,
            size: 0,
            type_id: None,
        })
    }
}

aya-obj-0.2.1/src/btf/types.rs

#![allow(missing_docs)]

use alloc::{string::ToString, vec, vec::Vec};
use core::{fmt::Display, mem, ptr};

use object::Endianness;

use crate::btf::{Btf, BtfError, MAX_RESOLVE_DEPTH};

#[derive(Clone, Debug)]
pub enum BtfType {
    Unknown,
    Fwd(Fwd),
    Const(Const),
    Volatile(Volatile),
    Restrict(Restrict),
    Ptr(Ptr),
    Typedef(Typedef),
    Func(Func),
    Int(Int),
    Float(Float),
    Enum(Enum),
    Array(Array),
    Struct(Struct),
    Union(Union),
    FuncProto(FuncProto),
    Var(Var),
    DataSec(DataSec),
    DeclTag(DeclTag),
    TypeTag(TypeTag),
    Enum64(Enum64),
}

#[repr(C)]
#[derive(Clone, Debug)]
pub struct Fwd {
    pub(crate) name_offset: u32,
    info: u32,
    _unused: u32,
}

impl Fwd {
    pub(crate) fn to_bytes(&self) -> Vec<u8> {
        bytes_of::<Fwd>(self).to_vec()
    }

    pub(crate) fn kind(&self) -> BtfKind {
        BtfKind::Fwd
    }

    pub(crate) fn type_info_size(&self) -> usize {
        mem::size_of::<Fwd>()
    }
}

#[repr(C)]
#[derive(Clone, Debug)]
pub struct Const {
    pub(crate) name_offset: u32,
    info: u32,
    pub(crate) btf_type: u32,
}

impl Const {
    pub(crate) fn to_bytes(&self) -> Vec<u8> {
        bytes_of::<Const>(self).to_vec()
    }

    pub(crate) fn kind(&self) -> BtfKind {
        BtfKind::Const
    }

    pub(crate) fn type_info_size(&self) -> usize {
        mem::size_of::<Const>()
    }

    pub(crate) fn new(btf_type: u32) -> Self {
        let info = (BtfKind::Const as u32) << 24;
        Self {
            name_offset: 0,
            info,
            btf_type,
        }
    }
}

#[repr(C)]
#[derive(Clone, Debug)]
pub struct Volatile {
    pub(crate) name_offset: u32,
    info: u32,
    pub(crate) btf_type: u32,
}

impl Volatile {
    pub(crate) fn to_bytes(&self) -> Vec<u8> {
        bytes_of::<Volatile>(self).to_vec()
    }

    pub(crate) fn kind(&self) -> BtfKind {
        BtfKind::Volatile
    }

    pub(crate) fn type_info_size(&self) -> usize {
        mem::size_of::<Volatile>()
    }
}

#[derive(Clone, Debug)]
pub struct Restrict {
pub(crate) name_offset: u32, _info: u32, pub(crate) btf_type: u32, } impl Restrict { pub(crate) fn to_bytes(&self) -> Vec { bytes_of::(self).to_vec() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Restrict } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } } #[repr(C)] #[derive(Clone, Debug)] pub struct Ptr { pub(crate) name_offset: u32, info: u32, pub(crate) btf_type: u32, } impl Ptr { pub(crate) fn to_bytes(&self) -> Vec { bytes_of::(self).to_vec() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Ptr } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub fn new(name_offset: u32, btf_type: u32) -> Self { let info = (BtfKind::Ptr as u32) << 24; Self { name_offset, info, btf_type, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct Typedef { pub(crate) name_offset: u32, info: u32, pub(crate) btf_type: u32, } impl Typedef { pub(crate) fn to_bytes(&self) -> Vec { bytes_of::(self).to_vec() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Typedef } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub(crate) fn new(name_offset: u32, btf_type: u32) -> Self { let info = (BtfKind::Typedef as u32) << 24; Self { name_offset, info, btf_type, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct Float { pub(crate) name_offset: u32, info: u32, pub(crate) size: u32, } impl Float { pub(crate) fn to_bytes(&self) -> Vec { bytes_of::(self).to_vec() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Float } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub fn new(name_offset: u32, size: u32) -> Self { let info = (BtfKind::Float as u32) << 24; Self { name_offset, info, size, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct Func { pub(crate) name_offset: u32, info: u32, pub(crate) btf_type: u32, } #[repr(u32)] #[derive(Clone, Debug, PartialEq, Eq)] pub enum FuncLinkage { Static = 0, Global = 1, Extern = 2, Unknown, } impl From for FuncLinkage { fn from(v: u32) -> Self { match v { 0 => FuncLinkage::Static, 1 => 
FuncLinkage::Global, 2 => FuncLinkage::Extern, _ => FuncLinkage::Unknown, } } } impl Func { pub(crate) fn to_bytes(&self) -> Vec { bytes_of::(self).to_vec() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Func } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub fn new(name_offset: u32, proto: u32, linkage: FuncLinkage) -> Self { let mut info = (BtfKind::Func as u32) << 24; info |= (linkage as u32) & 0xFFFF; Self { name_offset, info, btf_type: proto, } } pub(crate) fn linkage(&self) -> FuncLinkage { (self.info & 0xFFF).into() } pub(crate) fn set_linkage(&mut self, linkage: FuncLinkage) { self.info = (self.info & 0xFFFF0000) | (linkage as u32) & 0xFFFF; } } #[repr(C)] #[derive(Clone, Debug)] pub struct TypeTag { pub(crate) name_offset: u32, info: u32, pub(crate) btf_type: u32, } impl TypeTag { pub(crate) fn to_bytes(&self) -> Vec { bytes_of::(self).to_vec() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::TypeTag } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub fn new(name_offset: u32, btf_type: u32) -> Self { let info = (BtfKind::TypeTag as u32) << 24; Self { name_offset, info, btf_type, } } } #[repr(u32)] #[derive(Clone, Debug, Eq, PartialEq)] pub enum IntEncoding { None, Signed = 1, Char = 2, Bool = 4, Unknown, } impl From for IntEncoding { fn from(v: u32) -> Self { match v { 0 => IntEncoding::None, 1 => IntEncoding::Signed, 2 => IntEncoding::Char, 4 => IntEncoding::Bool, _ => IntEncoding::Unknown, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct Int { pub(crate) name_offset: u32, info: u32, pub(crate) size: u32, pub(crate) data: u32, } impl Int { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, size, data, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(size), bytes_of::(data), ] .concat() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Int } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub fn new(name_offset: u32, size: u32, encoding: IntEncoding, 
offset: u32) -> Self { let info = (BtfKind::Int as u32) << 24; let mut data = 0u32; data |= (encoding as u32 & 0x0f) << 24; data |= (offset & 0xff) << 16; data |= (size * 8) & 0xff; Self { name_offset, info, size, data, } } pub(crate) fn encoding(&self) -> IntEncoding { ((self.data & 0x0f000000) >> 24).into() } pub(crate) fn offset(&self) -> u32 { (self.data & 0x00ff0000) >> 16 } // TODO: Remove directive this when this crate is pub #[cfg(test)] pub(crate) fn bits(&self) -> u32 { self.data & 0x000000ff } } #[repr(C)] #[derive(Debug, Clone)] pub struct BtfEnum { pub name_offset: u32, pub value: u32, } impl BtfEnum { pub fn new(name_offset: u32, value: u32) -> Self { Self { name_offset, value } } } #[repr(C)] #[derive(Clone, Debug)] pub struct Enum { pub(crate) name_offset: u32, info: u32, pub(crate) size: u32, pub(crate) variants: Vec, } impl Enum { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, size, variants, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(size), ] .into_iter() .chain(variants.iter().flat_map(|BtfEnum { name_offset, value }| { [bytes_of::(name_offset), bytes_of::(value)] })) .flatten() .copied() .collect() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Enum } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() + mem::size_of::() * self.variants.len() } pub fn new(name_offset: u32, signed: bool, variants: Vec) -> Self { let mut info = (BtfKind::Enum as u32) << 24; info |= (variants.len() as u32) & 0xFFFF; if signed { info |= 1 << 31; } Self { name_offset, info, size: 4, variants, } } pub(crate) fn is_signed(&self) -> bool { self.info >> 31 == 1 } pub(crate) fn set_signed(&mut self, signed: bool) { if signed { self.info |= 1 << 31; } else { self.info &= !(1 << 31); } } } #[repr(C)] #[derive(Debug, Clone)] pub struct BtfEnum64 { pub(crate) name_offset: u32, pub(crate) value_low: u32, pub(crate) value_high: u32, } impl BtfEnum64 { pub fn new(name_offset: u32, value: u64) -> Self { Self { 
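`Int::new` above packs three fields into the single `data` word that `encoding()`, `offset()` and `bits()` later unpack: encoding in bits 24-27, bit offset in bits 16-23, and the width in bits in the low byte. A round-trip sketch of that layout (`pack_int_data` is an illustrative helper, not an aya-obj API):

```rust
// Pack a BTF_KIND_INT data word the way Int::new does.
fn pack_int_data(encoding: u32, offset: u32, size_bytes: u32) -> u32 {
    (encoding & 0x0f) << 24 | (offset & 0xff) << 16 | (size_bytes * 8) & 0xff
}

fn main() {
    const SIGNED: u32 = 1; // IntEncoding::Signed
    let data = pack_int_data(SIGNED, 0, 8); // a plain signed 64-bit int
    // The accessor masks from the Int impl recover each field.
    assert_eq!((data & 0x0f00_0000) >> 24, SIGNED); // encoding()
    assert_eq!((data & 0x00ff_0000) >> 16, 0); // offset()
    assert_eq!(data & 0x0000_00ff, 64); // bits()
    println!("ok");
}
```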
name_offset, value_low: value as u32, value_high: (value >> 32) as u32, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct Enum64 { pub(crate) name_offset: u32, info: u32, pub(crate) size: u32, pub(crate) variants: Vec, } impl Enum64 { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, size, variants, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(size), ] .into_iter() .chain(variants.iter().flat_map( |BtfEnum64 { name_offset, value_low, value_high, }| { [ bytes_of::(name_offset), bytes_of::(value_low), bytes_of::(value_high), ] }, )) .flatten() .copied() .collect() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Enum64 } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() + mem::size_of::() * self.variants.len() } pub(crate) fn is_signed(&self) -> bool { self.info >> 31 == 1 } pub fn new(name_offset: u32, signed: bool, variants: Vec) -> Self { let mut info = (BtfKind::Enum64 as u32) << 24; if signed { info |= 1 << 31 }; info |= (variants.len() as u32) & 0xFFFF; Enum64 { name_offset, info, // According to the documentation: // // https://www.kernel.org/doc/html/next/bpf/btf.html // // The size may be 1/2/4/8. Since BtfEnum64::new() takes a u64, we // can assume that the size is 8. 
size: 8, variants, } } } #[repr(C)] #[derive(Clone, Debug)] pub(crate) struct BtfMember { pub(crate) name_offset: u32, pub(crate) btf_type: u32, pub(crate) offset: u32, } #[repr(C)] #[derive(Clone, Debug)] pub struct Struct { pub(crate) name_offset: u32, info: u32, pub(crate) size: u32, pub(crate) members: Vec, } impl Struct { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, size, members, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(size), ] .into_iter() .chain(members.iter().flat_map( |BtfMember { name_offset, btf_type, offset, }| { [ bytes_of::(name_offset), bytes_of::(btf_type), bytes_of::(offset), ] }, )) .flatten() .copied() .collect() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Struct } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() + mem::size_of::() * self.members.len() } pub(crate) fn new(name_offset: u32, members: Vec, size: u32) -> Self { let mut info = (BtfKind::Struct as u32) << 24; info |= (members.len() as u32) & 0xFFFF; Self { name_offset, info, size, members, } } pub(crate) fn member_bit_offset(&self, member: &BtfMember) -> usize { let k_flag = self.info >> 31 == 1; let bit_offset = if k_flag { member.offset & 0xFFFFFF } else { member.offset }; bit_offset as usize } pub(crate) fn member_bit_field_size(&self, member: &BtfMember) -> usize { let k_flag = (self.info >> 31) == 1; let size = if k_flag { member.offset >> 24 } else { 0 }; size as usize } } #[repr(C)] #[derive(Clone, Debug)] pub struct Union { pub(crate) name_offset: u32, info: u32, pub(crate) size: u32, pub(crate) members: Vec, } impl Union { pub(crate) fn new(name_offset: u32, size: u32, members: Vec) -> Self { let mut info = (BtfKind::Union as u32) << 24; info |= (members.len() as u32) & 0xFFFF; Self { name_offset, info, size, members, } } pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, size, members, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(size), ] .into_iter() 
.chain(members.iter().flat_map( |BtfMember { name_offset, btf_type, offset, }| { [ bytes_of::(name_offset), bytes_of::(btf_type), bytes_of::(offset), ] }, )) .flatten() .copied() .collect() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Union } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() + mem::size_of::() * self.members.len() } pub(crate) fn member_bit_offset(&self, member: &BtfMember) -> usize { let k_flag = self.info >> 31 == 1; let bit_offset = if k_flag { member.offset & 0xFFFFFF } else { member.offset }; bit_offset as usize } pub(crate) fn member_bit_field_size(&self, member: &BtfMember) -> usize { let k_flag = (self.info >> 31) == 1; let size = if k_flag { member.offset >> 24 } else { 0 }; size as usize } } #[repr(C)] #[derive(Clone, Debug)] pub(crate) struct BtfArray { pub(crate) element_type: u32, pub(crate) index_type: u32, pub(crate) len: u32, } #[repr(C)] #[derive(Clone, Debug)] pub struct Array { pub(crate) name_offset: u32, info: u32, _unused: u32, pub(crate) array: BtfArray, } impl Array { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, _unused, array, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(_unused), bytes_of::(array), ] .concat() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Array } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } #[cfg(test)] pub(crate) fn new(name_offset: u32, element_type: u32, index_type: u32, len: u32) -> Self { let info = (BtfKind::Array as u32) << 24; Self { name_offset, info, _unused: 0, array: BtfArray { element_type, index_type, len, }, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct BtfParam { pub name_offset: u32, pub btf_type: u32, } #[repr(C)] #[derive(Clone, Debug)] pub struct FuncProto { pub(crate) name_offset: u32, info: u32, pub(crate) return_type: u32, pub(crate) params: Vec, } impl FuncProto { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, return_type, params, } = self; [ bytes_of::(name_offset), 
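`member_bit_offset` and `member_bit_field_size` in the `Struct` and `Union` impls above both key off `kind_flag` (bit 31 of `info`): when it is set, a member's `offset` word packs the bitfield size in its top byte and the bit offset in the low 24 bits; otherwise the whole word is a plain bit offset. A sketch of that decoding (`member_layout` is an illustrative helper name):

```rust
// Decode a BTF member offset word into (bit_offset, bitfield_size),
// following the kind_flag rules used by Struct/Union above.
fn member_layout(info: u32, offset: u32) -> (u32, u32) {
    let k_flag = info >> 31 == 1;
    if k_flag {
        (offset & 0xFF_FFFF, offset >> 24)
    } else {
        (offset, 0)
    }
}

fn main() {
    // kind_flag set: a 3-bit bitfield starting at bit 69.
    let (bit_off, bit_size) = member_layout(1 << 31, 3 << 24 | 69);
    assert_eq!((bit_off, bit_size), (69, 3));
    // kind_flag clear: the whole word is the bit offset, size is 0.
    assert_eq!(member_layout(0, 69), (69, 0));
    println!("ok");
}
```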
bytes_of::(info), bytes_of::(return_type), ] .into_iter() .chain(params.iter().flat_map( |BtfParam { name_offset, btf_type, }| { [bytes_of::(name_offset), bytes_of::(btf_type)] }, )) .flatten() .copied() .collect() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::FuncProto } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() + mem::size_of::() * self.params.len() } pub fn new(params: Vec, return_type: u32) -> Self { let mut info = (BtfKind::FuncProto as u32) << 24; info |= (params.len() as u32) & 0xFFFF; Self { name_offset: 0, info, return_type, params, } } } #[repr(u32)] #[derive(Clone, Debug, PartialEq, Eq)] pub enum VarLinkage { Static, Global, Extern, Unknown, } impl From for VarLinkage { fn from(v: u32) -> Self { match v { 0 => VarLinkage::Static, 1 => VarLinkage::Global, 2 => VarLinkage::Extern, _ => VarLinkage::Unknown, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct Var { pub(crate) name_offset: u32, info: u32, pub(crate) btf_type: u32, pub(crate) linkage: VarLinkage, } impl Var { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, btf_type, linkage, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(btf_type), bytes_of::(linkage), ] .concat() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::Var } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub fn new(name_offset: u32, btf_type: u32, linkage: VarLinkage) -> Self { let info = (BtfKind::Var as u32) << 24; Self { name_offset, info, btf_type, linkage, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct DataSecEntry { pub btf_type: u32, pub offset: u32, pub size: u32, } #[repr(C)] #[derive(Clone, Debug)] pub struct DataSec { pub(crate) name_offset: u32, info: u32, pub(crate) size: u32, pub(crate) entries: Vec, } impl DataSec { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, size, entries, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(size), ] .into_iter() .chain(entries.iter().flat_map( |DataSecEntry 
{ btf_type, offset, size, }| { [ bytes_of::(btf_type), bytes_of::(offset), bytes_of::(size), ] }, )) .flatten() .copied() .collect() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::DataSec } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() + mem::size_of::() * self.entries.len() } pub fn new(name_offset: u32, entries: Vec, size: u32) -> Self { let mut info = (BtfKind::DataSec as u32) << 24; info |= (entries.len() as u32) & 0xFFFF; Self { name_offset, info, size, entries, } } } #[repr(C)] #[derive(Clone, Debug)] pub struct DeclTag { pub(crate) name_offset: u32, info: u32, pub(crate) btf_type: u32, pub(crate) component_index: i32, } impl DeclTag { pub(crate) fn to_bytes(&self) -> Vec { let Self { name_offset, info, btf_type, component_index, } = self; [ bytes_of::(name_offset), bytes_of::(info), bytes_of::(btf_type), bytes_of::(component_index), ] .concat() } pub(crate) fn kind(&self) -> BtfKind { BtfKind::DeclTag } pub(crate) fn type_info_size(&self) -> usize { mem::size_of::() } pub fn new(name_offset: u32, btf_type: u32, component_index: i32) -> Self { let info = (BtfKind::DeclTag as u32) << 24; Self { name_offset, info, btf_type, component_index, } } } #[derive(Copy, Clone, Debug, Eq, PartialEq, Default)] #[repr(u32)] pub enum BtfKind { #[default] Unknown = 0, Int = 1, Ptr = 2, Array = 3, Struct = 4, Union = 5, Enum = 6, Fwd = 7, Typedef = 8, Volatile = 9, Const = 10, Restrict = 11, Func = 12, FuncProto = 13, Var = 14, DataSec = 15, Float = 16, DeclTag = 17, TypeTag = 18, Enum64 = 19, } impl TryFrom for BtfKind { type Error = BtfError; fn try_from(v: u32) -> Result { use BtfKind::*; Ok(match v { 0 => Unknown, 1 => Int, 2 => Ptr, 3 => Array, 4 => Struct, 5 => Union, 6 => Enum, 7 => Fwd, 8 => Typedef, 9 => Volatile, 10 => Const, 11 => Restrict, 12 => Func, 13 => FuncProto, 14 => Var, 15 => DataSec, 16 => Float, 17 => DeclTag, 18 => TypeTag, 19 => Enum64, kind => return Err(BtfError::InvalidTypeKind { kind }), }) } } impl Display for BtfKind { fn 
fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result { match self { BtfKind::Unknown => write!(f, "[UNKNOWN]"), BtfKind::Int => write!(f, "[INT]"), BtfKind::Float => write!(f, "[FLOAT]"), BtfKind::Ptr => write!(f, "[PTR]"), BtfKind::Array => write!(f, "[ARRAY]"), BtfKind::Struct => write!(f, "[STRUCT]"), BtfKind::Union => write!(f, "[UNION]"), BtfKind::Enum => write!(f, "[ENUM]"), BtfKind::Fwd => write!(f, "[FWD]"), BtfKind::Typedef => write!(f, "[TYPEDEF]"), BtfKind::Volatile => write!(f, "[VOLATILE]"), BtfKind::Const => write!(f, "[CONST]"), BtfKind::Restrict => write!(f, "[RESTRICT]"), BtfKind::Func => write!(f, "[FUNC]"), BtfKind::FuncProto => write!(f, "[FUNC_PROTO]"), BtfKind::Var => write!(f, "[VAR]"), BtfKind::DataSec => write!(f, "[DATASEC]"), BtfKind::DeclTag => write!(f, "[DECL_TAG]"), BtfKind::TypeTag => write!(f, "[TYPE_TAG]"), BtfKind::Enum64 => write!(f, "[ENUM64]"), } } } unsafe fn read(data: &[u8]) -> Result { if mem::size_of::() > data.len() { return Err(BtfError::InvalidTypeInfo); } Ok(ptr::read_unaligned::(data.as_ptr() as *const T)) } unsafe fn read_array(data: &[u8], len: usize) -> Result, BtfError> { if mem::size_of::() * len > data.len() { return Err(BtfError::InvalidTypeInfo); } let data = &data[0..mem::size_of::() * len]; let r = data .chunks(mem::size_of::()) .map(|chunk| ptr::read_unaligned(chunk.as_ptr() as *const T)) .collect(); Ok(r) } impl BtfType { #[allow(unused_unsafe)] pub(crate) unsafe fn read(data: &[u8], endianness: Endianness) -> Result { let ty = unsafe { read_array::(data, 3)? }; let data = &data[mem::size_of::() * 3..]; let vlen = type_vlen(ty[1]); Ok(match type_kind(ty[1])? 
{ BtfKind::Unknown => BtfType::Unknown, BtfKind::Fwd => BtfType::Fwd(Fwd { name_offset: ty[0], info: ty[1], _unused: 0, }), BtfKind::Const => BtfType::Const(Const { name_offset: ty[0], info: ty[1], btf_type: ty[2], }), BtfKind::Volatile => BtfType::Volatile(Volatile { name_offset: ty[0], info: ty[1], btf_type: ty[2], }), BtfKind::Restrict => BtfType::Restrict(Restrict { name_offset: ty[0], _info: ty[1], btf_type: ty[2], }), BtfKind::Ptr => BtfType::Ptr(Ptr { name_offset: ty[0], info: ty[1], btf_type: ty[2], }), BtfKind::Typedef => BtfType::Typedef(Typedef { name_offset: ty[0], info: ty[1], btf_type: ty[2], }), BtfKind::Func => BtfType::Func(Func { name_offset: ty[0], info: ty[1], btf_type: ty[2], }), BtfKind::Int => { if mem::size_of::() > data.len() { return Err(BtfError::InvalidTypeInfo); } let read_u32 = if endianness == Endianness::Little { u32::from_le_bytes } else { u32::from_be_bytes }; BtfType::Int(Int { name_offset: ty[0], info: ty[1], size: ty[2], data: read_u32(data[..mem::size_of::()].try_into().unwrap()), }) } BtfKind::Float => BtfType::Float(Float { name_offset: ty[0], info: ty[1], size: ty[2], }), BtfKind::Enum => BtfType::Enum(Enum { name_offset: ty[0], info: ty[1], size: ty[2], variants: unsafe { read_array::(data, vlen)? }, }), BtfKind::Enum64 => BtfType::Enum64(Enum64 { name_offset: ty[0], info: ty[1], size: ty[2], variants: unsafe { read_array::(data, vlen)? }, }), BtfKind::Array => BtfType::Array(Array { name_offset: ty[0], info: ty[1], _unused: 0, array: unsafe { read(data)? }, }), BtfKind::Struct => BtfType::Struct(Struct { name_offset: ty[0], info: ty[1], size: ty[2], members: unsafe { read_array::(data, vlen)? }, }), BtfKind::Union => BtfType::Union(Union { name_offset: ty[0], info: ty[1], size: ty[2], members: unsafe { read_array::(data, vlen)? }, }), BtfKind::FuncProto => BtfType::FuncProto(FuncProto { name_offset: ty[0], info: ty[1], return_type: ty[2], params: unsafe { read_array::(data, vlen)? 
}, }), BtfKind::Var => BtfType::Var(Var { name_offset: ty[0], info: ty[1], btf_type: ty[2], linkage: unsafe { read(data)? }, }), BtfKind::DataSec => BtfType::DataSec(DataSec { name_offset: ty[0], info: ty[1], size: ty[2], entries: unsafe { read_array::(data, vlen)? }, }), BtfKind::DeclTag => BtfType::DeclTag(DeclTag { name_offset: ty[0], info: ty[1], btf_type: ty[2], component_index: unsafe { read(data)? }, }), BtfKind::TypeTag => BtfType::TypeTag(TypeTag { name_offset: ty[0], info: ty[1], btf_type: ty[2], }), }) } pub(crate) fn to_bytes(&self) -> Vec { match self { BtfType::Unknown => vec![], BtfType::Fwd(t) => t.to_bytes(), BtfType::Const(t) => t.to_bytes(), BtfType::Volatile(t) => t.to_bytes(), BtfType::Restrict(t) => t.to_bytes(), BtfType::Ptr(t) => t.to_bytes(), BtfType::Typedef(t) => t.to_bytes(), BtfType::Func(t) => t.to_bytes(), BtfType::Int(t) => t.to_bytes(), BtfType::Float(t) => t.to_bytes(), BtfType::Enum(t) => t.to_bytes(), BtfType::Enum64(t) => t.to_bytes(), BtfType::Array(t) => t.to_bytes(), BtfType::Struct(t) => t.to_bytes(), BtfType::Union(t) => t.to_bytes(), BtfType::FuncProto(t) => t.to_bytes(), BtfType::Var(t) => t.to_bytes(), BtfType::DataSec(t) => t.to_bytes(), BtfType::DeclTag(t) => t.to_bytes(), BtfType::TypeTag(t) => t.to_bytes(), } } pub(crate) fn size(&self) -> Option { match self { BtfType::Int(t) => Some(t.size), BtfType::Float(t) => Some(t.size), BtfType::Enum(t) => Some(t.size), BtfType::Enum64(t) => Some(t.size), BtfType::Struct(t) => Some(t.size), BtfType::Union(t) => Some(t.size), BtfType::DataSec(t) => Some(t.size), BtfType::Ptr(_) => Some(mem::size_of::<&()>() as u32), _ => None, } } pub(crate) fn btf_type(&self) -> Option { match self { BtfType::Const(t) => Some(t.btf_type), BtfType::Volatile(t) => Some(t.btf_type), BtfType::Restrict(t) => Some(t.btf_type), BtfType::Ptr(t) => Some(t.btf_type), BtfType::Typedef(t) => Some(t.btf_type), // FuncProto contains the return type here, and doesn't directly reference another type 
BtfType::FuncProto(t) => Some(t.return_type), BtfType::Var(t) => Some(t.btf_type), BtfType::DeclTag(t) => Some(t.btf_type), BtfType::TypeTag(t) => Some(t.btf_type), _ => None, } } pub(crate) fn type_info_size(&self) -> usize { match self { BtfType::Unknown => mem::size_of::(), BtfType::Fwd(t) => t.type_info_size(), BtfType::Const(t) => t.type_info_size(), BtfType::Volatile(t) => t.type_info_size(), BtfType::Restrict(t) => t.type_info_size(), BtfType::Ptr(t) => t.type_info_size(), BtfType::Typedef(t) => t.type_info_size(), BtfType::Func(t) => t.type_info_size(), BtfType::Int(t) => t.type_info_size(), BtfType::Float(t) => t.type_info_size(), BtfType::Enum(t) => t.type_info_size(), BtfType::Enum64(t) => t.type_info_size(), BtfType::Array(t) => t.type_info_size(), BtfType::Struct(t) => t.type_info_size(), BtfType::Union(t) => t.type_info_size(), BtfType::FuncProto(t) => t.type_info_size(), BtfType::Var(t) => t.type_info_size(), BtfType::DataSec(t) => t.type_info_size(), BtfType::DeclTag(t) => t.type_info_size(), BtfType::TypeTag(t) => t.type_info_size(), } } pub(crate) fn name_offset(&self) -> u32 { match self { BtfType::Unknown => 0, BtfType::Fwd(t) => t.name_offset, BtfType::Const(t) => t.name_offset, BtfType::Volatile(t) => t.name_offset, BtfType::Restrict(t) => t.name_offset, BtfType::Ptr(t) => t.name_offset, BtfType::Typedef(t) => t.name_offset, BtfType::Func(t) => t.name_offset, BtfType::Int(t) => t.name_offset, BtfType::Float(t) => t.name_offset, BtfType::Enum(t) => t.name_offset, BtfType::Enum64(t) => t.name_offset, BtfType::Array(t) => t.name_offset, BtfType::Struct(t) => t.name_offset, BtfType::Union(t) => t.name_offset, BtfType::FuncProto(t) => t.name_offset, BtfType::Var(t) => t.name_offset, BtfType::DataSec(t) => t.name_offset, BtfType::DeclTag(t) => t.name_offset, BtfType::TypeTag(t) => t.name_offset, } } pub(crate) fn kind(&self) -> BtfKind { match self { BtfType::Unknown => BtfKind::Unknown, BtfType::Fwd(t) => t.kind(), BtfType::Const(t) => t.kind(), 
BtfType::Volatile(t) => t.kind(), BtfType::Restrict(t) => t.kind(), BtfType::Ptr(t) => t.kind(), BtfType::Typedef(t) => t.kind(), BtfType::Func(t) => t.kind(), BtfType::Int(t) => t.kind(), BtfType::Float(t) => t.kind(), BtfType::Enum(t) => t.kind(), BtfType::Enum64(t) => t.kind(), BtfType::Array(t) => t.kind(), BtfType::Struct(t) => t.kind(), BtfType::Union(t) => t.kind(), BtfType::FuncProto(t) => t.kind(), BtfType::Var(t) => t.kind(), BtfType::DataSec(t) => t.kind(), BtfType::DeclTag(t) => t.kind(), BtfType::TypeTag(t) => t.kind(), } } pub(crate) fn is_composite(&self) -> bool { matches!(self, BtfType::Struct(_) | BtfType::Union(_)) } pub(crate) fn members(&self) -> Option> { match self { BtfType::Struct(t) => Some(t.members.iter()), BtfType::Union(t) => Some(t.members.iter()), _ => None, } } pub(crate) fn member_bit_field_size(&self, member: &BtfMember) -> Option { match self { BtfType::Struct(t) => Some(t.member_bit_field_size(member)), BtfType::Union(t) => Some(t.member_bit_field_size(member)), _ => None, } } pub(crate) fn member_bit_offset(&self, member: &BtfMember) -> Option { match self { BtfType::Struct(t) => Some(t.member_bit_offset(member)), BtfType::Union(t) => Some(t.member_bit_offset(member)), _ => None, } } pub(crate) fn is_compatible(&self, other: &BtfType) -> bool { if self.kind() == other.kind() { return true; } matches!( (self.kind(), other.kind()), (BtfKind::Enum, BtfKind::Enum64) | (BtfKind::Enum64, BtfKind::Enum) ) } } fn type_kind(info: u32) -> Result { ((info >> 24) & 0x1F).try_into() } fn type_vlen(info: u32) -> usize { (info & 0xFFFF) as usize } pub(crate) fn types_are_compatible( local_btf: &Btf, root_local_id: u32, target_btf: &Btf, root_target_id: u32, ) -> Result { let mut local_id = root_local_id; let mut target_id = root_target_id; let local_ty = local_btf.type_by_id(local_id)?; let target_ty = target_btf.type_by_id(target_id)?; if !local_ty.is_compatible(target_ty) { return Ok(false); } for _ in 0..MAX_RESOLVE_DEPTH { local_id = 
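The `type_kind` and `type_vlen` helpers defined above slice the shared `info` word that every BTF type header carries: the kind sits in bits 24-28, `kind_flag` in bit 31, and `vlen` (the member/variant/param count) in the low 16 bits. A standalone sketch of that decoding:

```rust
// Extract the kind bits (24-28) from a BTF type info word.
fn type_kind(info: u32) -> u32 {
    (info >> 24) & 0x1F
}

// Extract vlen, the count of members/variants/params, from the low 16 bits.
fn type_vlen(info: u32) -> usize {
    (info & 0xFFFF) as usize
}

fn main() {
    const BTF_KIND_STRUCT: u32 = 4;
    // A struct with kind_flag set and 3 members.
    let info = 1 << 31 | BTF_KIND_STRUCT << 24 | 3;
    assert_eq!(type_kind(info), BTF_KIND_STRUCT);
    assert_eq!(type_vlen(info), 3);
    println!("ok");
}
```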
local_btf.resolve_type(local_id)?; target_id = target_btf.resolve_type(target_id)?; let local_ty = local_btf.type_by_id(local_id)?; let target_ty = target_btf.type_by_id(target_id)?; if !local_ty.is_compatible(target_ty) { return Ok(false); } match local_ty { BtfType::Unknown | BtfType::Struct(_) | BtfType::Union(_) | BtfType::Enum(_) | BtfType::Enum64(_) | BtfType::Fwd(_) | BtfType::Float(_) => return Ok(true), BtfType::Int(local) => { if let BtfType::Int(target) = target_ty { return Ok(local.offset() == 0 && target.offset() == 0); } } BtfType::Ptr(local) => { if let BtfType::Ptr(target) = target_ty { local_id = local.btf_type; target_id = target.btf_type; continue; } } BtfType::Array(Array { array: local, .. }) => { if let BtfType::Array(Array { array: target, .. }) = target_ty { local_id = local.element_type; target_id = target.element_type; continue; } } BtfType::FuncProto(local) => { if let BtfType::FuncProto(target) = target_ty { if local.params.len() != target.params.len() { return Ok(false); } for (l_param, t_param) in local.params.iter().zip(target.params.iter()) { let local_id = local_btf.resolve_type(l_param.btf_type)?; let target_id = target_btf.resolve_type(t_param.btf_type)?; if !types_are_compatible(local_btf, local_id, target_btf, target_id)? 
                        {
                            return Ok(false);
                        }
                    }
                    local_id = local.return_type;
                    target_id = target.return_type;
                    continue;
                }
            }
            local_ty => panic!("unexpected type {:?}", local_ty),
        }
    }
    Err(BtfError::MaximumTypeDepthReached { type_id: local_id })
}

pub(crate) fn fields_are_compatible(
    local_btf: &Btf,
    mut local_id: u32,
    target_btf: &Btf,
    mut target_id: u32,
) -> Result<bool, BtfError> {
    for _ in 0..MAX_RESOLVE_DEPTH {
        local_id = local_btf.resolve_type(local_id)?;
        target_id = target_btf.resolve_type(target_id)?;
        let local_ty = local_btf.type_by_id(local_id)?;
        let target_ty = target_btf.type_by_id(target_id)?;
        if local_ty.is_composite() && target_ty.is_composite() {
            return Ok(true);
        }
        if !local_ty.is_compatible(target_ty) {
            return Ok(false);
        }
        match local_ty {
            BtfType::Fwd(_) | BtfType::Enum(_) | BtfType::Enum64(_) => {
                let flavorless_name =
                    |name: &str| name.split_once("___").map_or(name, |x| x.0).to_string();
                let local_name = flavorless_name(&local_btf.type_name(local_ty)?);
                let target_name = flavorless_name(&target_btf.type_name(target_ty)?);
                return Ok(local_name == target_name);
            }
            BtfType::Int(local) => {
                if let BtfType::Int(target) = target_ty {
                    return Ok(local.offset() == 0 && target.offset() == 0);
                }
            }
            BtfType::Float(_) => return Ok(true),
            BtfType::Ptr(_) => return Ok(true),
            BtfType::Array(Array { array: local, .. }) => {
                if let BtfType::Array(Array {
                    array: target,
                    ..
                }) = target_ty
                {
                    local_id = local.element_type;
                    target_id = target.element_type;
                    continue;
                }
            }
            local_ty => panic!("unexpected type {:?}", local_ty),
        }
    }
    Err(BtfError::MaximumTypeDepthReached { type_id: local_id })
}

fn bytes_of<T>(val: &T) -> &[u8] {
    // Safety: all btf types are POD
    unsafe { crate::util::bytes_of(val) }
}

#[cfg(test)]
mod tests {
    use assert_matches::assert_matches;

    use super::*;

    #[test]
    fn test_read_btf_type_int() {
        let endianness = Endianness::default();
        let bpf_type = BtfType::Int(Int::new(1, 8, IntEncoding::None, 0));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Int(new @ Int {
            name_offset,
            info: _,
            size,
            data: _,
        }) => {
            assert_eq!(name_offset, 1);
            assert_eq!(size, 8);
            assert_eq!(new.bits(), 64);
            assert_eq!(new.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_ptr() {
        let endianness = Endianness::default();
        let bpf_type = BtfType::Ptr(Ptr::new(0, 0x06));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Ptr(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_array() {
        let endianness = Endianness::default();
        let bpf_type = BtfType::Array(Array::new(0, 1, 0x12, 2));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Array(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_struct() {
        let endianness = Endianness::default();
        let members = vec![BtfMember {
            name_offset: 0x0247,
            btf_type: 0x12,
            offset: 0,
        }];
        let bpf_type = BtfType::Struct(Struct::new(0, members, 4));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Struct(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_union() {
        let endianness = Endianness::default();
        let members = vec![BtfMember {
            name_offset: 0x040d,
            btf_type: 0x68,
            offset: 0,
        }];
        let bpf_type = BtfType::Union(Union::new(0, 4, members));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Union(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_enum() {
        let endianness = Endianness::default();
        let enum1 = BtfEnum::new(0xc9, 0);
        let enum2 = BtfEnum::new(0xcf, 1);
        let variants = vec![enum1, enum2];
        let bpf_type = BtfType::Enum(Enum::new(0, false, variants));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Enum(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_fwd() {
        let endianness = Endianness::default();
        let info = (BtfKind::Fwd as u32) << 24;
        let bpf_type = BtfType::Fwd(Fwd {
            name_offset: 0x550b,
            info,
            _unused: 0,
        });
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Fwd(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_typedef() {
        let endianness = Endianness::default();
        let bpf_type = BtfType::Typedef(Typedef::new(0x31, 0x0b));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Typedef(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_volatile() {
        let endianness = Endianness::default();
        let info = (BtfKind::Volatile as u32) << 24;
        let bpf_type = BtfType::Volatile(Volatile {
            name_offset: 0,
            info,
            btf_type: 0x24,
        });
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Volatile(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_const() {
        let endianness = Endianness::default();
        let bpf_type = BtfType::Const(Const::new(1));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Const(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_restrict() {
        let endianness = Endianness::default();
        let info = (BtfKind::Restrict as u32) << 24;
        let bpf_type = BtfType::Restrict(Restrict {
            name_offset: 0,
            _info: info,
            btf_type: 4,
        });
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Restrict(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_func() {
        let endianness = Endianness::default();
        let bpf_type = BtfType::Func(Func::new(0x000f8b17, 0xe4f0, FuncLinkage::Global));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Func(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_func_proto() {
        let endianness = Endianness::default();
        let params = vec![BtfParam {
            name_offset: 0,
            btf_type: 0x12,
        }];
        let bpf_type = BtfType::FuncProto(FuncProto::new(params, 0));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::FuncProto(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_func_var() {
        let endianness = Endianness::default();
        // NOTE: There was no data in /sys/kernel/btf/vmlinux for this type
        let bpf_type = BtfType::Var(Var::new(0, 0xf0, VarLinkage::Static));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Var(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_read_btf_type_func_datasec() {
        let endianness = Endianness::default();
        let entries = vec![DataSecEntry {
            btf_type: 11,
            offset: 0,
            size: 4,
        }];
        let bpf_type = BtfType::DataSec(DataSec::new(0xd9, entries, 0));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::DataSec(DataSec {
            name_offset: _,
            info: _,
            size,
            entries,
        }) => {
            assert_eq!(size, 0);
            assert_matches!(*entries, [
                DataSecEntry {
                    btf_type: 11,
                    offset: 0,
                    size: 4,
                }
            ]);
        }
        );
    }

    #[test]
    fn test_read_btf_type_float() {
        let endianness = Endianness::default();
        let bpf_type = BtfType::Float(Float::new(0x02fd, 8));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Float(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }

    #[test]
    fn test_write_btf_func_proto() {
        let params = vec![
            BtfParam {
                name_offset: 1,
                btf_type: 1,
            },
            BtfParam {
                name_offset: 3,
                btf_type: 1,
            },
        ];
        let func_proto = FuncProto::new(params, 2);
        let data = func_proto.to_bytes();
        assert_matches!(unsafe { BtfType::read(&data, Endianness::default()) }.unwrap(), BtfType::FuncProto(FuncProto {
            name_offset: _,
            info: _,
            return_type: _,
            params,
        }) => {
            assert_matches!(*params, [
                _,
                _,
            ])
        });
    }

    #[test]
    fn test_types_are_compatible() {
        let mut btf = Btf::new();
        let name_offset = btf.add_string("u32");
        let u32t = btf.add_type(BtfType::Int(Int::new(name_offset, 4, IntEncoding::None, 0)));
        let name_offset = btf.add_string("u64");
        let u64t = btf.add_type(BtfType::Int(Int::new(name_offset, 8, IntEncoding::None, 0)));
        let name_offset = btf.add_string("widgets");
        let array_type = btf.add_type(BtfType::Array(Array::new(name_offset, u64t, u32t, 16)));

        assert!(types_are_compatible(&btf, u32t, &btf, u32t).unwrap());
        // int types are compatible if offsets match.
        // size and encoding aren't compared
        assert!(types_are_compatible(&btf, u32t, &btf, u64t).unwrap());

        assert!(types_are_compatible(&btf, array_type, &btf, array_type).unwrap());
    }

    #[test]
    pub fn test_read_btf_type_enum64() {
        let endianness = Endianness::default();
        let variants = vec![BtfEnum64::new(0, 0xbbbbbbbbaaaaaaaau64)];
        let bpf_type = BtfType::Enum64(Enum64::new(0, false, variants));
        let data: &[u8] = &bpf_type.to_bytes();
        assert_matches!(unsafe { BtfType::read(data, endianness) }.unwrap(), BtfType::Enum64(got) => {
            assert_eq!(got.to_bytes(), data);
        });
    }
}

aya-obj-0.2.1/src/generated/btf_internal_bindings.rs

/* automatically generated by rust-bindgen 0.70.1 */
pub type __u8 = ::core::ffi::c_uchar;
pub type __u16 = ::core::ffi::c_ushort;
pub type __u32 = ::core::ffi::c_uint;
pub mod bpf_core_relo_kind {
    pub type Type = ::core::ffi::c_uint;
    pub const BPF_CORE_FIELD_BYTE_OFFSET: Type = 0;
    pub const BPF_CORE_FIELD_BYTE_SIZE: Type = 1;
    pub const BPF_CORE_FIELD_EXISTS: Type = 2;
    pub const BPF_CORE_FIELD_SIGNED: Type = 3;
    pub const BPF_CORE_FIELD_LSHIFT_U64: Type = 4;
    pub const BPF_CORE_FIELD_RSHIFT_U64: Type = 5;
    pub const BPF_CORE_TYPE_ID_LOCAL: Type = 6;
    pub const BPF_CORE_TYPE_ID_TARGET: Type = 7;
    pub const BPF_CORE_TYPE_EXISTS: Type = 8;
    pub const BPF_CORE_TYPE_SIZE: Type = 9;
    pub const BPF_CORE_ENUMVAL_EXISTS: Type = 10;
    pub const BPF_CORE_ENUMVAL_VALUE: Type = 11;
    pub const BPF_CORE_TYPE_MATCHES: Type = 12;
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_core_relo {
    pub insn_off: __u32,
    pub type_id: __u32,
    pub access_str_off: __u32,
    pub kind: bpf_core_relo_kind::Type,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_ext_header {
    pub magic: __u16,
    pub version: __u8,
    pub flags: __u8,
    pub hdr_len: __u32,
    pub func_info_off: __u32,
    pub func_info_len: __u32,
    pub line_info_off: __u32,
    pub line_info_len: __u32,
    pub core_relo_off: __u32,
    pub core_relo_len: __u32,
}
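The `type_kind` and `type_vlen` helpers in `btf/types.rs` above decode the packed `info` word of a BTF type header: the kind lives in bits 24–28 and the member/variant count (`vlen`) in the low 16 bits. A minimal, self-contained sketch of that packing; `pack_info` and the sample kind/vlen values are illustrative and not part of aya-obj's API:

```rust
// Pack and unpack a BTF `info` word: kind in bits 24..29, vlen in bits 0..16,
// mirroring the shift/mask logic of `type_kind` and `type_vlen`.
fn pack_info(kind: u32, vlen: u32) -> u32 {
    ((kind & 0x1F) << 24) | (vlen & 0xFFFF)
}

fn unpack_kind(info: u32) -> u32 {
    (info >> 24) & 0x1F
}

fn unpack_vlen(info: u32) -> usize {
    (info & 0xFFFF) as usize
}

fn main() {
    // Hypothetical example: kind 4 with 3 members.
    let info = pack_info(4, 3);
    assert_eq!(info, 0x0400_0003);
    assert_eq!(unpack_kind(info), 4);
    assert_eq!(unpack_vlen(info), 3);
}
```

The same word layout is what the `(BtfKind::Fwd as u32) << 24` expressions in the tests above construct by hand.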
aya-obj-0.2.1/src/generated/linux_bindings_aarch64.rs

/* automatically generated by rust-bindgen 0.70.1 */
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, Eq, Hash, Ord, PartialEq, PartialOrd)]
pub struct __BindgenBitfieldUnit<Storage> {
    storage: Storage,
}
impl<Storage> __BindgenBitfieldUnit<Storage> {
    #[inline]
    pub const fn new(storage: Storage) -> Self {
        Self { storage }
    }
}
impl<Storage> __BindgenBitfieldUnit<Storage>
where
    Storage: AsRef<[u8]> + AsMut<[u8]>,
{
    #[inline]
    pub fn get_bit(&self, index: usize) -> bool {
        debug_assert!(index / 8 < self.storage.as_ref().len());
        let byte_index = index / 8;
        let byte = self.storage.as_ref()[byte_index];
        let bit_index = if cfg!(target_endian = "big") {
            7 - (index % 8)
        } else {
            index % 8
        };
        let mask = 1 << bit_index;
        byte & mask == mask
    }
    #[inline]
    pub fn set_bit(&mut self, index: usize, val: bool) {
        debug_assert!(index / 8 < self.storage.as_ref().len());
        let byte_index = index / 8;
        let byte = &mut self.storage.as_mut()[byte_index];
        let bit_index = if cfg!(target_endian = "big") {
            7 - (index % 8)
        } else {
            index % 8
        };
        let mask = 1 << bit_index;
        if val {
            *byte |= mask;
        } else {
            *byte &= !mask;
        }
    }
    #[inline]
    pub fn get(&self, bit_offset: usize, bit_width: u8) -> u64 {
        debug_assert!(bit_width <= 64);
        debug_assert!(bit_offset / 8 < self.storage.as_ref().len());
        debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len());
        let mut val = 0;
        for i in 0..(bit_width as usize) {
            if self.get_bit(i + bit_offset) {
                let index = if cfg!(target_endian = "big") {
                    bit_width as usize - 1 - i
                } else {
                    i
                };
                val |= 1 << index;
            }
        }
        val
    }
    #[inline]
    pub fn set(&mut self, bit_offset: usize, bit_width: u8, val: u64) {
        debug_assert!(bit_width <= 64);
        debug_assert!(bit_offset / 8 < self.storage.as_ref().len());
        debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len());
        for i in 0..(bit_width as usize) {
            let mask = 1 << i;
            let val_bit_is_set = val & mask == mask;
            let index =
                if cfg!(target_endian = "big") {
                bit_width as usize - 1 - i
            } else {
                i
            };
            self.set_bit(index + bit_offset, val_bit_is_set);
        }
    }
}
#[repr(C)]
#[derive(Default)]
pub struct __IncompleteArrayField<T>(::core::marker::PhantomData<T>, [T; 0]);
impl<T> __IncompleteArrayField<T> {
    #[inline]
    pub const fn new() -> Self {
        __IncompleteArrayField(::core::marker::PhantomData, [])
    }
    #[inline]
    pub fn as_ptr(&self) -> *const T {
        self as *const _ as *const T
    }
    #[inline]
    pub fn as_mut_ptr(&mut self) -> *mut T {
        self as *mut _ as *mut T
    }
    #[inline]
    pub unsafe fn as_slice(&self, len: usize) -> &[T] {
        ::core::slice::from_raw_parts(self.as_ptr(), len)
    }
    #[inline]
    pub unsafe fn as_mut_slice(&mut self, len: usize) -> &mut [T] {
        ::core::slice::from_raw_parts_mut(self.as_mut_ptr(), len)
    }
}
impl<T> ::core::fmt::Debug for __IncompleteArrayField<T> {
    fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
        fmt.write_str("__IncompleteArrayField")
    }
}
pub const SO_ATTACH_BPF: u32 = 50;
pub const SO_DETACH_BPF: u32 = 27;
pub const BPF_LD: u32 = 0;
pub const BPF_LDX: u32 = 1;
pub const BPF_ST: u32 = 2;
pub const BPF_STX: u32 = 3;
pub const BPF_ALU: u32 = 4;
pub const BPF_JMP: u32 = 5;
pub const BPF_W: u32 = 0;
pub const BPF_H: u32 = 8;
pub const BPF_B: u32 = 16;
pub const BPF_K: u32 = 0;
pub const BPF_ALU64: u32 = 7;
pub const BPF_DW: u32 = 24;
pub const BPF_CALL: u32 = 128;
pub const BPF_F_ALLOW_OVERRIDE: u32 = 1;
pub const BPF_F_ALLOW_MULTI: u32 = 2;
pub const BPF_F_REPLACE: u32 = 4;
pub const BPF_F_BEFORE: u32 = 8;
pub const BPF_F_AFTER: u32 = 16;
pub const BPF_F_ID: u32 = 32;
pub const BPF_F_STRICT_ALIGNMENT: u32 = 1;
pub const BPF_F_ANY_ALIGNMENT: u32 = 2;
pub const BPF_F_TEST_RND_HI32: u32 = 4;
pub const BPF_F_TEST_STATE_FREQ: u32 = 8;
pub const BPF_F_SLEEPABLE: u32 = 16;
pub const BPF_F_XDP_HAS_FRAGS: u32 = 32;
pub const BPF_F_XDP_DEV_BOUND_ONLY: u32 = 64;
pub const BPF_F_TEST_REG_INVARIANTS: u32 = 128;
pub const BPF_F_NETFILTER_IP_DEFRAG: u32 = 1;
pub const BPF_PSEUDO_MAP_FD: u32 = 1;
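On a little-endian target, the `__BindgenBitfieldUnit` accessors above reduce to plain shift-and-mask arithmetic. A simplified sketch of how `bpf_insn`'s two 4-bit register fields (`dst_reg` at bit offset 0, `src_reg` at bit offset 4) pack into a single byte; `pack_regs` and the register numbers are hypothetical helpers for illustration, not part of the generated bindings:

```rust
// Little-endian packing of two 4-bit register fields into one byte, the same
// layout `bpf_insn::new_bitfield_1` produces via __BindgenBitfieldUnit::set.
fn pack_regs(dst_reg: u8, src_reg: u8) -> u8 {
    (dst_reg & 0x0F) | ((src_reg & 0x0F) << 4)
}

fn dst_reg(byte: u8) -> u8 {
    byte & 0x0F
}

fn src_reg(byte: u8) -> u8 {
    byte >> 4
}

fn main() {
    // Hypothetical instruction using r1 as destination and r6 as source.
    let b = pack_regs(1, 6);
    assert_eq!(b, 0x61);
    assert_eq!(dst_reg(b), 1);
    assert_eq!(src_reg(b), 6);
}
```

On big-endian targets the generated accessors reverse the in-byte bit order (the `cfg!(target_endian = "big")` branches above), so this simplified form only matches little-endian layouts.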
pub const BPF_PSEUDO_MAP_IDX: u32 = 5; pub const BPF_PSEUDO_MAP_VALUE: u32 = 2; pub const BPF_PSEUDO_MAP_IDX_VALUE: u32 = 6; pub const BPF_PSEUDO_BTF_ID: u32 = 3; pub const BPF_PSEUDO_FUNC: u32 = 4; pub const BPF_PSEUDO_CALL: u32 = 1; pub const BPF_PSEUDO_KFUNC_CALL: u32 = 2; pub const BPF_F_QUERY_EFFECTIVE: u32 = 1; pub const BPF_F_TEST_RUN_ON_CPU: u32 = 1; pub const BPF_F_TEST_XDP_LIVE_FRAMES: u32 = 2; pub const BTF_INT_SIGNED: u32 = 1; pub const BTF_INT_CHAR: u32 = 2; pub const BTF_INT_BOOL: u32 = 4; pub const NLMSG_ALIGNTO: u32 = 4; pub const XDP_FLAGS_UPDATE_IF_NOEXIST: u32 = 1; pub const XDP_FLAGS_SKB_MODE: u32 = 2; pub const XDP_FLAGS_DRV_MODE: u32 = 4; pub const XDP_FLAGS_HW_MODE: u32 = 8; pub const XDP_FLAGS_REPLACE: u32 = 16; pub const XDP_FLAGS_MODES: u32 = 14; pub const XDP_FLAGS_MASK: u32 = 31; pub const PERF_MAX_STACK_DEPTH: u32 = 127; pub const PERF_MAX_CONTEXTS_PER_STACK: u32 = 8; pub const PERF_FLAG_FD_NO_GROUP: u32 = 1; pub const PERF_FLAG_FD_OUTPUT: u32 = 2; pub const PERF_FLAG_PID_CGROUP: u32 = 4; pub const PERF_FLAG_FD_CLOEXEC: u32 = 8; pub const TC_H_MAJ_MASK: u32 = 4294901760; pub const TC_H_MIN_MASK: u32 = 65535; pub const TC_H_UNSPEC: u32 = 0; pub const TC_H_ROOT: u32 = 4294967295; pub const TC_H_INGRESS: u32 = 4294967281; pub const TC_H_CLSACT: u32 = 4294967281; pub const TC_H_MIN_PRIORITY: u32 = 65504; pub const TC_H_MIN_INGRESS: u32 = 65522; pub const TC_H_MIN_EGRESS: u32 = 65523; pub const TCA_BPF_FLAG_ACT_DIRECT: u32 = 1; pub type __u8 = ::core::ffi::c_uchar; pub type __s16 = ::core::ffi::c_short; pub type __u16 = ::core::ffi::c_ushort; pub type __s32 = ::core::ffi::c_int; pub type __u32 = ::core::ffi::c_uint; pub type __s64 = ::core::ffi::c_longlong; pub type __u64 = ::core::ffi::c_ulonglong; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_insn { pub code: __u8, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 1usize]>, pub off: __s16, pub imm: __s32, } impl bpf_insn { #[inline] pub fn dst_reg(&self) 
-> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 4u8) as u8) } } #[inline] pub fn set_dst_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 4u8, val as u64) } } #[inline] pub fn src_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 4u8) as u8) } } #[inline] pub fn set_src_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 4u8, val as u64) } } #[inline] pub fn new_bitfield_1(dst_reg: __u8, src_reg: __u8) -> __BindgenBitfieldUnit<[u8; 1usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 1usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 4u8, { let dst_reg: u8 = unsafe { ::core::mem::transmute(dst_reg) }; dst_reg as u64 }); __bindgen_bitfield_unit.set(4usize, 4u8, { let src_reg: u8 = unsafe { ::core::mem::transmute(src_reg) }; src_reg as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug)] pub struct bpf_lpm_trie_key { pub prefixlen: __u32, pub data: __IncompleteArrayField<__u8>, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cgroup_iter_order { BPF_CGROUP_ITER_ORDER_UNSPEC = 0, BPF_CGROUP_ITER_SELF_ONLY = 1, BPF_CGROUP_ITER_DESCENDANTS_PRE = 2, BPF_CGROUP_ITER_DESCENDANTS_POST = 3, BPF_CGROUP_ITER_ANCESTORS_UP = 4, } impl bpf_cmd { pub const BPF_PROG_RUN: bpf_cmd = bpf_cmd::BPF_PROG_TEST_RUN; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cmd { BPF_MAP_CREATE = 0, BPF_MAP_LOOKUP_ELEM = 1, BPF_MAP_UPDATE_ELEM = 2, BPF_MAP_DELETE_ELEM = 3, BPF_MAP_GET_NEXT_KEY = 4, BPF_PROG_LOAD = 5, BPF_OBJ_PIN = 6, BPF_OBJ_GET = 7, BPF_PROG_ATTACH = 8, BPF_PROG_DETACH = 9, BPF_PROG_TEST_RUN = 10, BPF_PROG_GET_NEXT_ID = 11, BPF_MAP_GET_NEXT_ID = 12, BPF_PROG_GET_FD_BY_ID = 13, BPF_MAP_GET_FD_BY_ID = 14, BPF_OBJ_GET_INFO_BY_FD = 15, BPF_PROG_QUERY = 16, BPF_RAW_TRACEPOINT_OPEN = 17, BPF_BTF_LOAD = 18, 
BPF_BTF_GET_FD_BY_ID = 19, BPF_TASK_FD_QUERY = 20, BPF_MAP_LOOKUP_AND_DELETE_ELEM = 21, BPF_MAP_FREEZE = 22, BPF_BTF_GET_NEXT_ID = 23, BPF_MAP_LOOKUP_BATCH = 24, BPF_MAP_LOOKUP_AND_DELETE_BATCH = 25, BPF_MAP_UPDATE_BATCH = 26, BPF_MAP_DELETE_BATCH = 27, BPF_LINK_CREATE = 28, BPF_LINK_UPDATE = 29, BPF_LINK_GET_FD_BY_ID = 30, BPF_LINK_GET_NEXT_ID = 31, BPF_ENABLE_STATS = 32, BPF_ITER_CREATE = 33, BPF_LINK_DETACH = 34, BPF_PROG_BIND_MAP = 35, BPF_TOKEN_CREATE = 36, __MAX_BPF_CMD = 37, } impl bpf_map_type { pub const BPF_MAP_TYPE_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED; } impl bpf_map_type { pub const BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_map_type { BPF_MAP_TYPE_UNSPEC = 0, BPF_MAP_TYPE_HASH = 1, BPF_MAP_TYPE_ARRAY = 2, BPF_MAP_TYPE_PROG_ARRAY = 3, BPF_MAP_TYPE_PERF_EVENT_ARRAY = 4, BPF_MAP_TYPE_PERCPU_HASH = 5, BPF_MAP_TYPE_PERCPU_ARRAY = 6, BPF_MAP_TYPE_STACK_TRACE = 7, BPF_MAP_TYPE_CGROUP_ARRAY = 8, BPF_MAP_TYPE_LRU_HASH = 9, BPF_MAP_TYPE_LRU_PERCPU_HASH = 10, BPF_MAP_TYPE_LPM_TRIE = 11, BPF_MAP_TYPE_ARRAY_OF_MAPS = 12, BPF_MAP_TYPE_HASH_OF_MAPS = 13, BPF_MAP_TYPE_DEVMAP = 14, BPF_MAP_TYPE_SOCKMAP = 15, BPF_MAP_TYPE_CPUMAP = 16, BPF_MAP_TYPE_XSKMAP = 17, BPF_MAP_TYPE_SOCKHASH = 18, BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED = 19, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED = 21, BPF_MAP_TYPE_QUEUE = 22, BPF_MAP_TYPE_STACK = 23, BPF_MAP_TYPE_SK_STORAGE = 24, BPF_MAP_TYPE_DEVMAP_HASH = 25, BPF_MAP_TYPE_STRUCT_OPS = 26, BPF_MAP_TYPE_RINGBUF = 27, BPF_MAP_TYPE_INODE_STORAGE = 28, BPF_MAP_TYPE_TASK_STORAGE = 29, BPF_MAP_TYPE_BLOOM_FILTER = 30, BPF_MAP_TYPE_USER_RINGBUF = 31, BPF_MAP_TYPE_CGRP_STORAGE = 32, BPF_MAP_TYPE_ARENA = 33, __MAX_BPF_MAP_TYPE = 34, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum 
bpf_prog_type { BPF_PROG_TYPE_UNSPEC = 0, BPF_PROG_TYPE_SOCKET_FILTER = 1, BPF_PROG_TYPE_KPROBE = 2, BPF_PROG_TYPE_SCHED_CLS = 3, BPF_PROG_TYPE_SCHED_ACT = 4, BPF_PROG_TYPE_TRACEPOINT = 5, BPF_PROG_TYPE_XDP = 6, BPF_PROG_TYPE_PERF_EVENT = 7, BPF_PROG_TYPE_CGROUP_SKB = 8, BPF_PROG_TYPE_CGROUP_SOCK = 9, BPF_PROG_TYPE_LWT_IN = 10, BPF_PROG_TYPE_LWT_OUT = 11, BPF_PROG_TYPE_LWT_XMIT = 12, BPF_PROG_TYPE_SOCK_OPS = 13, BPF_PROG_TYPE_SK_SKB = 14, BPF_PROG_TYPE_CGROUP_DEVICE = 15, BPF_PROG_TYPE_SK_MSG = 16, BPF_PROG_TYPE_RAW_TRACEPOINT = 17, BPF_PROG_TYPE_CGROUP_SOCK_ADDR = 18, BPF_PROG_TYPE_LWT_SEG6LOCAL = 19, BPF_PROG_TYPE_LIRC_MODE2 = 20, BPF_PROG_TYPE_SK_REUSEPORT = 21, BPF_PROG_TYPE_FLOW_DISSECTOR = 22, BPF_PROG_TYPE_CGROUP_SYSCTL = 23, BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE = 24, BPF_PROG_TYPE_CGROUP_SOCKOPT = 25, BPF_PROG_TYPE_TRACING = 26, BPF_PROG_TYPE_STRUCT_OPS = 27, BPF_PROG_TYPE_EXT = 28, BPF_PROG_TYPE_LSM = 29, BPF_PROG_TYPE_SK_LOOKUP = 30, BPF_PROG_TYPE_SYSCALL = 31, BPF_PROG_TYPE_NETFILTER = 32, __MAX_BPF_PROG_TYPE = 33, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_attach_type { BPF_CGROUP_INET_INGRESS = 0, BPF_CGROUP_INET_EGRESS = 1, BPF_CGROUP_INET_SOCK_CREATE = 2, BPF_CGROUP_SOCK_OPS = 3, BPF_SK_SKB_STREAM_PARSER = 4, BPF_SK_SKB_STREAM_VERDICT = 5, BPF_CGROUP_DEVICE = 6, BPF_SK_MSG_VERDICT = 7, BPF_CGROUP_INET4_BIND = 8, BPF_CGROUP_INET6_BIND = 9, BPF_CGROUP_INET4_CONNECT = 10, BPF_CGROUP_INET6_CONNECT = 11, BPF_CGROUP_INET4_POST_BIND = 12, BPF_CGROUP_INET6_POST_BIND = 13, BPF_CGROUP_UDP4_SENDMSG = 14, BPF_CGROUP_UDP6_SENDMSG = 15, BPF_LIRC_MODE2 = 16, BPF_FLOW_DISSECTOR = 17, BPF_CGROUP_SYSCTL = 18, BPF_CGROUP_UDP4_RECVMSG = 19, BPF_CGROUP_UDP6_RECVMSG = 20, BPF_CGROUP_GETSOCKOPT = 21, BPF_CGROUP_SETSOCKOPT = 22, BPF_TRACE_RAW_TP = 23, BPF_TRACE_FENTRY = 24, BPF_TRACE_FEXIT = 25, BPF_MODIFY_RETURN = 26, BPF_LSM_MAC = 27, BPF_TRACE_ITER = 28, BPF_CGROUP_INET4_GETPEERNAME = 29, BPF_CGROUP_INET6_GETPEERNAME = 30, 
BPF_CGROUP_INET4_GETSOCKNAME = 31, BPF_CGROUP_INET6_GETSOCKNAME = 32, BPF_XDP_DEVMAP = 33, BPF_CGROUP_INET_SOCK_RELEASE = 34, BPF_XDP_CPUMAP = 35, BPF_SK_LOOKUP = 36, BPF_XDP = 37, BPF_SK_SKB_VERDICT = 38, BPF_SK_REUSEPORT_SELECT = 39, BPF_SK_REUSEPORT_SELECT_OR_MIGRATE = 40, BPF_PERF_EVENT = 41, BPF_TRACE_KPROBE_MULTI = 42, BPF_LSM_CGROUP = 43, BPF_STRUCT_OPS = 44, BPF_NETFILTER = 45, BPF_TCX_INGRESS = 46, BPF_TCX_EGRESS = 47, BPF_TRACE_UPROBE_MULTI = 48, BPF_CGROUP_UNIX_CONNECT = 49, BPF_CGROUP_UNIX_SENDMSG = 50, BPF_CGROUP_UNIX_RECVMSG = 51, BPF_CGROUP_UNIX_GETPEERNAME = 52, BPF_CGROUP_UNIX_GETSOCKNAME = 53, BPF_NETKIT_PRIMARY = 54, BPF_NETKIT_PEER = 55, __MAX_BPF_ATTACH_TYPE = 56, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_link_type { BPF_LINK_TYPE_UNSPEC = 0, BPF_LINK_TYPE_RAW_TRACEPOINT = 1, BPF_LINK_TYPE_TRACING = 2, BPF_LINK_TYPE_CGROUP = 3, BPF_LINK_TYPE_ITER = 4, BPF_LINK_TYPE_NETNS = 5, BPF_LINK_TYPE_XDP = 6, BPF_LINK_TYPE_PERF_EVENT = 7, BPF_LINK_TYPE_KPROBE_MULTI = 8, BPF_LINK_TYPE_STRUCT_OPS = 9, BPF_LINK_TYPE_NETFILTER = 10, BPF_LINK_TYPE_TCX = 11, BPF_LINK_TYPE_UPROBE_MULTI = 12, BPF_LINK_TYPE_NETKIT = 13, __MAX_BPF_LINK_TYPE = 14, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_perf_event_type { BPF_PERF_EVENT_UNSPEC = 0, BPF_PERF_EVENT_UPROBE = 1, BPF_PERF_EVENT_URETPROBE = 2, BPF_PERF_EVENT_KPROBE = 3, BPF_PERF_EVENT_KRETPROBE = 4, BPF_PERF_EVENT_TRACEPOINT = 5, BPF_PERF_EVENT_EVENT = 6, } pub const BPF_F_KPROBE_MULTI_RETURN: _bindgen_ty_2 = 1; pub type _bindgen_ty_2 = ::core::ffi::c_uint; pub const BPF_F_UPROBE_MULTI_RETURN: _bindgen_ty_3 = 1; pub type _bindgen_ty_3 = ::core::ffi::c_uint; pub const BPF_ANY: _bindgen_ty_4 = 0; pub const BPF_NOEXIST: _bindgen_ty_4 = 1; pub const BPF_EXIST: _bindgen_ty_4 = 2; pub const BPF_F_LOCK: _bindgen_ty_4 = 4; pub type _bindgen_ty_4 = ::core::ffi::c_uint; pub const BPF_F_NO_PREALLOC: _bindgen_ty_5 = 1; pub const BPF_F_NO_COMMON_LRU: 
_bindgen_ty_5 = 2; pub const BPF_F_NUMA_NODE: _bindgen_ty_5 = 4; pub const BPF_F_RDONLY: _bindgen_ty_5 = 8; pub const BPF_F_WRONLY: _bindgen_ty_5 = 16; pub const BPF_F_STACK_BUILD_ID: _bindgen_ty_5 = 32; pub const BPF_F_ZERO_SEED: _bindgen_ty_5 = 64; pub const BPF_F_RDONLY_PROG: _bindgen_ty_5 = 128; pub const BPF_F_WRONLY_PROG: _bindgen_ty_5 = 256; pub const BPF_F_CLONE: _bindgen_ty_5 = 512; pub const BPF_F_MMAPABLE: _bindgen_ty_5 = 1024; pub const BPF_F_PRESERVE_ELEMS: _bindgen_ty_5 = 2048; pub const BPF_F_INNER_MAP: _bindgen_ty_5 = 4096; pub const BPF_F_LINK: _bindgen_ty_5 = 8192; pub const BPF_F_PATH_FD: _bindgen_ty_5 = 16384; pub const BPF_F_VTYPE_BTF_OBJ_FD: _bindgen_ty_5 = 32768; pub const BPF_F_TOKEN_FD: _bindgen_ty_5 = 65536; pub const BPF_F_SEGV_ON_FAULT: _bindgen_ty_5 = 131072; pub const BPF_F_NO_USER_CONV: _bindgen_ty_5 = 262144; pub type _bindgen_ty_5 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_stats_type { BPF_STATS_RUN_TIME = 0, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr { pub __bindgen_anon_1: bpf_attr__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_2, pub batch: bpf_attr__bindgen_ty_3, pub __bindgen_anon_3: bpf_attr__bindgen_ty_4, pub __bindgen_anon_4: bpf_attr__bindgen_ty_5, pub __bindgen_anon_5: bpf_attr__bindgen_ty_6, pub test: bpf_attr__bindgen_ty_7, pub __bindgen_anon_6: bpf_attr__bindgen_ty_8, pub info: bpf_attr__bindgen_ty_9, pub query: bpf_attr__bindgen_ty_10, pub raw_tracepoint: bpf_attr__bindgen_ty_11, pub __bindgen_anon_7: bpf_attr__bindgen_ty_12, pub task_fd_query: bpf_attr__bindgen_ty_13, pub link_create: bpf_attr__bindgen_ty_14, pub link_update: bpf_attr__bindgen_ty_15, pub link_detach: bpf_attr__bindgen_ty_16, pub enable_stats: bpf_attr__bindgen_ty_17, pub iter_create: bpf_attr__bindgen_ty_18, pub prog_bind_map: bpf_attr__bindgen_ty_19, pub token_create: bpf_attr__bindgen_ty_20, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_1 { 
pub map_type: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub inner_map_fd: __u32, pub numa_node: __u32, pub map_name: [::core::ffi::c_char; 16usize], pub map_ifindex: __u32, pub btf_fd: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_value_type_id: __u32, pub map_extra: __u64, pub value_type_btf_obj_fd: __s32, pub map_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_2 { pub map_fd: __u32, pub key: __u64, pub __bindgen_anon_1: bpf_attr__bindgen_ty_2__bindgen_ty_1, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_2__bindgen_ty_1 { pub value: __u64, pub next_key: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_3 { pub in_batch: __u64, pub out_batch: __u64, pub keys: __u64, pub values: __u64, pub count: __u32, pub map_fd: __u32, pub elem_flags: __u64, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_4 { pub prog_type: __u32, pub insn_cnt: __u32, pub insns: __u64, pub license: __u64, pub log_level: __u32, pub log_size: __u32, pub log_buf: __u64, pub kern_version: __u32, pub prog_flags: __u32, pub prog_name: [::core::ffi::c_char; 16usize], pub prog_ifindex: __u32, pub expected_attach_type: __u32, pub prog_btf_fd: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub func_info_cnt: __u32, pub line_info_rec_size: __u32, pub line_info: __u64, pub line_info_cnt: __u32, pub attach_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_4__bindgen_ty_1, pub core_relo_cnt: __u32, pub fd_array: __u64, pub core_relos: __u64, pub core_relo_rec_size: __u32, pub log_true_size: __u32, pub prog_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_4__bindgen_ty_1 { pub attach_prog_fd: __u32, pub attach_btf_obj_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_5 { pub pathname: 
__u64, pub bpf_fd: __u32, pub file_flags: __u32, pub path_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_6__bindgen_ty_1, pub attach_bpf_fd: __u32, pub attach_type: __u32, pub attach_flags: __u32, pub replace_bpf_fd: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_6__bindgen_ty_2, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_2 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_7 { pub prog_fd: __u32, pub retval: __u32, pub data_size_in: __u32, pub data_size_out: __u32, pub data_in: __u64, pub data_out: __u64, pub repeat: __u32, pub duration: __u32, pub ctx_size_in: __u32, pub ctx_size_out: __u32, pub ctx_in: __u64, pub ctx_out: __u64, pub flags: __u32, pub cpu: __u32, pub batch_size: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_8__bindgen_ty_1, pub next_id: __u32, pub open_flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_8__bindgen_ty_1 { pub start_id: __u32, pub prog_id: __u32, pub map_id: __u32, pub btf_id: __u32, pub link_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_9 { pub bpf_fd: __u32, pub info_len: __u32, pub info: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_10 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_10__bindgen_ty_1, pub attach_type: __u32, pub query_flags: __u32, pub attach_flags: __u32, pub prog_ids: __u64, pub __bindgen_anon_2: bpf_attr__bindgen_ty_10__bindgen_ty_2, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub prog_attach_flags: __u64, pub link_ids: __u64, pub link_attach_flags: 
__u64, pub revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_2 { pub prog_cnt: __u32, pub count: __u32, } impl bpf_attr__bindgen_ty_10 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_11 { pub name: __u64, pub prog_fd: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_attr__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_12 { pub btf: __u64, pub btf_log_buf: __u64, pub btf_size: __u32, pub btf_log_size: __u32, pub btf_log_level: __u32, pub btf_log_true_size: __u32, pub btf_flags: __u32, pub btf_token_fd: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_13 { pub pid: __u32, pub fd: __u32, pub flags: __u32, pub buf_len: __u32, pub buf: __u64, pub prog_id: __u32, pub fd_type: __u32, pub probe_offset: __u64, pub probe_addr: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_14__bindgen_ty_2, pub attach_type: __u32, pub flags: __u32, pub __bindgen_anon_3: bpf_attr__bindgen_ty_14__bindgen_ty_3, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_1 { pub prog_fd: __u32, pub map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_2 { pub 
target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3 { pub target_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1, pub perf_event: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2, pub kprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3, pub tracing: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4, pub netfilter: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5, pub tcx: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6, pub uprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7, pub netkit: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1 { pub iter_info: __u64, pub iter_info_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2 { pub bpf_cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3 { pub flags: __u32, pub cnt: __u32, pub syms: __u64, pub addrs: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4 { pub target_btf_id: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct 
bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub cnt: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_15 { pub link_fd: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_15__bindgen_ty_1, pub flags: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_15__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_1 { pub new_prog_fd: __u32, pub new_map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_2 { pub old_prog_fd: __u32, pub old_map_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_16 { pub link_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_17 { pub type_: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_18 { pub link_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_19 { pub prog_fd: __u32, pub map_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_20 { pub flags: __u32, pub bpffs_fd: __u32, } pub const BPF_F_RECOMPUTE_CSUM: _bindgen_ty_6 = 1; pub const BPF_F_INVALIDATE_HASH: _bindgen_ty_6 = 2; pub type _bindgen_ty_6 = ::core::ffi::c_uint; pub const BPF_F_HDR_FIELD_MASK: _bindgen_ty_7 = 15; pub type _bindgen_ty_7 = ::core::ffi::c_uint; pub const BPF_F_PSEUDO_HDR: _bindgen_ty_8 = 16; pub const BPF_F_MARK_MANGLED_0: _bindgen_ty_8 = 32; pub 
const BPF_F_MARK_ENFORCE: _bindgen_ty_8 = 64; pub type _bindgen_ty_8 = ::core::ffi::c_uint; pub const BPF_F_INGRESS: _bindgen_ty_9 = 1; pub type _bindgen_ty_9 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_IPV6: _bindgen_ty_10 = 1; pub type _bindgen_ty_10 = ::core::ffi::c_uint; pub const BPF_F_SKIP_FIELD_MASK: _bindgen_ty_11 = 255; pub const BPF_F_USER_STACK: _bindgen_ty_11 = 256; pub const BPF_F_FAST_STACK_CMP: _bindgen_ty_11 = 512; pub const BPF_F_REUSE_STACKID: _bindgen_ty_11 = 1024; pub const BPF_F_USER_BUILD_ID: _bindgen_ty_11 = 2048; pub type _bindgen_ty_11 = ::core::ffi::c_uint; pub const BPF_F_ZERO_CSUM_TX: _bindgen_ty_12 = 2; pub const BPF_F_DONT_FRAGMENT: _bindgen_ty_12 = 4; pub const BPF_F_SEQ_NUMBER: _bindgen_ty_12 = 8; pub const BPF_F_NO_TUNNEL_KEY: _bindgen_ty_12 = 16; pub type _bindgen_ty_12 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_FLAGS: _bindgen_ty_13 = 16; pub type _bindgen_ty_13 = ::core::ffi::c_uint; pub const BPF_F_INDEX_MASK: _bindgen_ty_14 = 4294967295; pub const BPF_F_CURRENT_CPU: _bindgen_ty_14 = 4294967295; pub const BPF_F_CTXLEN_MASK: _bindgen_ty_14 = 4503595332403200; pub type _bindgen_ty_14 = ::core::ffi::c_ulong; pub const BPF_F_CURRENT_NETNS: _bindgen_ty_15 = -1; pub type _bindgen_ty_15 = ::core::ffi::c_int; pub const BPF_F_ADJ_ROOM_FIXED_GSO: _bindgen_ty_17 = 1; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV4: _bindgen_ty_17 = 2; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV6: _bindgen_ty_17 = 4; pub const BPF_F_ADJ_ROOM_ENCAP_L4_GRE: _bindgen_ty_17 = 8; pub const BPF_F_ADJ_ROOM_ENCAP_L4_UDP: _bindgen_ty_17 = 16; pub const BPF_F_ADJ_ROOM_NO_CSUM_RESET: _bindgen_ty_17 = 32; pub const BPF_F_ADJ_ROOM_ENCAP_L2_ETH: _bindgen_ty_17 = 64; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV4: _bindgen_ty_17 = 128; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV6: _bindgen_ty_17 = 256; pub type _bindgen_ty_17 = ::core::ffi::c_uint; pub const BPF_F_SYSCTL_BASE_NAME: _bindgen_ty_19 = 1; pub type _bindgen_ty_19 = ::core::ffi::c_uint; pub const BPF_F_GET_BRANCH_RECORDS_SIZE: 
_bindgen_ty_21 = 1; pub type _bindgen_ty_21 = ::core::ffi::c_uint; pub const BPF_RINGBUF_BUSY_BIT: _bindgen_ty_24 = 2147483648; pub const BPF_RINGBUF_DISCARD_BIT: _bindgen_ty_24 = 1073741824; pub const BPF_RINGBUF_HDR_SZ: _bindgen_ty_24 = 8; pub type _bindgen_ty_24 = ::core::ffi::c_uint; pub const BPF_F_BPRM_SECUREEXEC: _bindgen_ty_26 = 1; pub type _bindgen_ty_26 = ::core::ffi::c_uint; pub const BPF_F_BROADCAST: _bindgen_ty_27 = 8; pub const BPF_F_EXCLUDE_INGRESS: _bindgen_ty_27 = 16; pub type _bindgen_ty_27 = ::core::ffi::c_uint; #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_devmap_val { pub ifindex: __u32, pub bpf_prog: bpf_devmap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_devmap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_cpumap_val { pub qsize: __u32, pub bpf_prog: bpf_cpumap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_cpumap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_prog_info { pub type_: __u32, pub id: __u32, pub tag: [__u8; 8usize], pub jited_prog_len: __u32, pub xlated_prog_len: __u32, pub jited_prog_insns: __u64, pub xlated_prog_insns: __u64, pub load_time: __u64, pub created_by_uid: __u32, pub nr_map_ids: __u32, pub map_ids: __u64, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub netns_dev: __u64, pub netns_ino: __u64, pub nr_jited_ksyms: __u32, pub nr_jited_func_lens: __u32, pub jited_ksyms: __u64, pub jited_func_lens: __u64, pub btf_id: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub nr_func_info: __u32, pub nr_line_info: __u32, pub line_info: __u64, pub jited_line_info: __u64, pub nr_jited_line_info: __u32, pub line_info_rec_size: __u32, pub jited_line_info_rec_size: __u32, pub nr_prog_tags: __u32, pub prog_tags: __u64, pub run_time_ns: 
__u64, pub run_cnt: __u64, pub recursion_misses: __u64, pub verified_insns: __u32, pub attach_btf_obj_id: __u32, pub attach_btf_id: __u32, } impl bpf_prog_info { #[inline] pub fn gpl_compatible(&self) -> __u32 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u32) } } #[inline] pub fn set_gpl_compatible(&mut self, val: __u32) { unsafe { let val: u32 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn new_bitfield_1(gpl_compatible: __u32) -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let gpl_compatible: u32 = unsafe { ::core::mem::transmute(gpl_compatible) }; gpl_compatible as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_map_info { pub type_: __u32, pub id: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub btf_vmlinux_value_type_id: __u32, pub netns_dev: __u64, pub netns_ino: __u64, pub btf_id: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_id: __u32, pub map_extra: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_btf_info { pub btf: __u64, pub btf_size: __u32, pub id: __u32, pub name: __u64, pub name_len: __u32, pub kernel_btf: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info { pub type_: __u32, pub id: __u32, pub prog_id: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1 { pub raw_tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_1, pub tracing: bpf_link_info__bindgen_ty_1__bindgen_ty_2, pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_3, pub iter: bpf_link_info__bindgen_ty_1__bindgen_ty_4, pub netns: bpf_link_info__bindgen_ty_1__bindgen_ty_5, pub xdp: 
bpf_link_info__bindgen_ty_1__bindgen_ty_6, pub struct_ops: bpf_link_info__bindgen_ty_1__bindgen_ty_7, pub netfilter: bpf_link_info__bindgen_ty_1__bindgen_ty_8, pub kprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_9, pub uprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_10, pub perf_event: bpf_link_info__bindgen_ty_1__bindgen_ty_11, pub tcx: bpf_link_info__bindgen_ty_1__bindgen_ty_12, pub netkit: bpf_link_info__bindgen_ty_1__bindgen_ty_13, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_1 { pub tp_name: __u64, pub tp_name_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_2 { pub attach_type: __u32, pub target_obj_id: __u32, pub target_btf_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_3 { pub cgroup_id: __u64, pub attach_type: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4 { pub target_name: __u64, pub target_name_len: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1, pub __bindgen_anon_2: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1 { pub map: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1 { pub map_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2 { pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1, pub task: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1 { pub cgroup_id: __u64, pub order: __u32, } #[repr(C)] 
#[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2 { pub tid: __u32, pub pid: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_5 { pub netns_ino: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_6 { pub ifindex: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_7 { pub map_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_8 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_9 { pub addrs: __u64, pub count: __u32, pub flags: __u32, pub missed: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_10 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub path_size: __u32, pub count: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11 { pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1 { pub uprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1, pub kprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2, pub tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3, pub event: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1 { pub 
file_name: __u64, pub name_len: __u32, pub offset: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2 { pub func_name: __u64, pub name_len: __u32, pub offset: __u32, pub addr: __u64, pub missed: __u64, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { pub tp_name: __u64, pub name_len: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { pub config: __u64, pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_12 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_13 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct 
bpf_func_info { pub insn_off: __u32, pub type_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_line_info { pub insn_off: __u32, pub file_name_off: __u32, pub line_off: __u32, pub line_col: __u32, } pub const BPF_F_TIMER_ABS: _bindgen_ty_41 = 1; pub const BPF_F_TIMER_CPU_PIN: _bindgen_ty_41 = 2; pub type _bindgen_ty_41 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_header { pub magic: __u16, pub version: __u8, pub flags: __u8, pub hdr_len: __u32, pub type_off: __u32, pub type_len: __u32, pub str_off: __u32, pub str_len: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct btf_type { pub name_off: __u32, pub info: __u32, pub __bindgen_anon_1: btf_type__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union btf_type__bindgen_ty_1 { pub size: __u32, pub type_: __u32, } pub const BTF_KIND_UNKN: _bindgen_ty_42 = 0; pub const BTF_KIND_INT: _bindgen_ty_42 = 1; pub const BTF_KIND_PTR: _bindgen_ty_42 = 2; pub const BTF_KIND_ARRAY: _bindgen_ty_42 = 3; pub const BTF_KIND_STRUCT: _bindgen_ty_42 = 4; pub const BTF_KIND_UNION: _bindgen_ty_42 = 5; pub const BTF_KIND_ENUM: _bindgen_ty_42 = 6; pub const BTF_KIND_FWD: _bindgen_ty_42 = 7; pub const BTF_KIND_TYPEDEF: _bindgen_ty_42 = 8; pub const BTF_KIND_VOLATILE: _bindgen_ty_42 = 9; pub const BTF_KIND_CONST: _bindgen_ty_42 = 10; pub const BTF_KIND_RESTRICT: _bindgen_ty_42 = 11; pub const BTF_KIND_FUNC: _bindgen_ty_42 = 12; pub const BTF_KIND_FUNC_PROTO: _bindgen_ty_42 = 13; pub const BTF_KIND_VAR: _bindgen_ty_42 = 14; pub const BTF_KIND_DATASEC: _bindgen_ty_42 = 15; pub const BTF_KIND_FLOAT: _bindgen_ty_42 = 16; pub const BTF_KIND_DECL_TAG: _bindgen_ty_42 = 17; pub const BTF_KIND_TYPE_TAG: _bindgen_ty_42 = 18; pub const BTF_KIND_ENUM64: _bindgen_ty_42 = 19; pub const NR_BTF_KINDS: _bindgen_ty_42 = 20; pub const BTF_KIND_MAX: _bindgen_ty_42 = 19; pub type _bindgen_ty_42 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_enum { pub name_off: __u32, 
pub val: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_array { pub type_: __u32, pub index_type: __u32, pub nelems: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_member { pub name_off: __u32, pub type_: __u32, pub offset: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_param { pub name_off: __u32, pub type_: __u32, } pub const BTF_VAR_STATIC: _bindgen_ty_43 = 0; pub const BTF_VAR_GLOBAL_ALLOCATED: _bindgen_ty_43 = 1; pub const BTF_VAR_GLOBAL_EXTERN: _bindgen_ty_43 = 2; pub type _bindgen_ty_43 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum btf_func_linkage { BTF_FUNC_STATIC = 0, BTF_FUNC_GLOBAL = 1, BTF_FUNC_EXTERN = 2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var { pub linkage: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var_secinfo { pub type_: __u32, pub offset: __u32, pub size: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_decl_tag { pub component_idx: __s32, } pub const IFLA_XDP_UNSPEC: _bindgen_ty_92 = 0; pub const IFLA_XDP_FD: _bindgen_ty_92 = 1; pub const IFLA_XDP_ATTACHED: _bindgen_ty_92 = 2; pub const IFLA_XDP_FLAGS: _bindgen_ty_92 = 3; pub const IFLA_XDP_PROG_ID: _bindgen_ty_92 = 4; pub const IFLA_XDP_DRV_PROG_ID: _bindgen_ty_92 = 5; pub const IFLA_XDP_SKB_PROG_ID: _bindgen_ty_92 = 6; pub const IFLA_XDP_HW_PROG_ID: _bindgen_ty_92 = 7; pub const IFLA_XDP_EXPECTED_FD: _bindgen_ty_92 = 8; pub const __IFLA_XDP_MAX: _bindgen_ty_92 = 9; pub type _bindgen_ty_92 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum nf_inet_hooks { NF_INET_PRE_ROUTING = 0, NF_INET_LOCAL_IN = 1, NF_INET_FORWARD = 2, NF_INET_LOCAL_OUT = 3, NF_INET_POST_ROUTING = 4, NF_INET_NUMHOOKS = 5, } pub const NFPROTO_UNSPEC: _bindgen_ty_99 = 0; pub const NFPROTO_INET: _bindgen_ty_99 = 1; pub const NFPROTO_IPV4: _bindgen_ty_99 = 2; pub const NFPROTO_ARP: _bindgen_ty_99 = 3; pub const 
NFPROTO_NETDEV: _bindgen_ty_99 = 5; pub const NFPROTO_BRIDGE: _bindgen_ty_99 = 7; pub const NFPROTO_IPV6: _bindgen_ty_99 = 10; pub const NFPROTO_DECNET: _bindgen_ty_99 = 12; pub const NFPROTO_NUMPROTO: _bindgen_ty_99 = 13; pub type _bindgen_ty_99 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_type_id { PERF_TYPE_HARDWARE = 0, PERF_TYPE_SOFTWARE = 1, PERF_TYPE_TRACEPOINT = 2, PERF_TYPE_HW_CACHE = 3, PERF_TYPE_RAW = 4, PERF_TYPE_BREAKPOINT = 5, PERF_TYPE_MAX = 6, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_id { PERF_COUNT_HW_CPU_CYCLES = 0, PERF_COUNT_HW_INSTRUCTIONS = 1, PERF_COUNT_HW_CACHE_REFERENCES = 2, PERF_COUNT_HW_CACHE_MISSES = 3, PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4, PERF_COUNT_HW_BRANCH_MISSES = 5, PERF_COUNT_HW_BUS_CYCLES = 6, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7, PERF_COUNT_HW_STALLED_CYCLES_BACKEND = 8, PERF_COUNT_HW_REF_CPU_CYCLES = 9, PERF_COUNT_HW_MAX = 10, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_id { PERF_COUNT_HW_CACHE_L1D = 0, PERF_COUNT_HW_CACHE_L1I = 1, PERF_COUNT_HW_CACHE_LL = 2, PERF_COUNT_HW_CACHE_DTLB = 3, PERF_COUNT_HW_CACHE_ITLB = 4, PERF_COUNT_HW_CACHE_BPU = 5, PERF_COUNT_HW_CACHE_NODE = 6, PERF_COUNT_HW_CACHE_MAX = 7, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_id { PERF_COUNT_HW_CACHE_OP_READ = 0, PERF_COUNT_HW_CACHE_OP_WRITE = 1, PERF_COUNT_HW_CACHE_OP_PREFETCH = 2, PERF_COUNT_HW_CACHE_OP_MAX = 3, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_result_id { PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0, PERF_COUNT_HW_CACHE_RESULT_MISS = 1, PERF_COUNT_HW_CACHE_RESULT_MAX = 2, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_sw_ids { PERF_COUNT_SW_CPU_CLOCK = 0, PERF_COUNT_SW_TASK_CLOCK = 1, PERF_COUNT_SW_PAGE_FAULTS = 2, PERF_COUNT_SW_CONTEXT_SWITCHES = 3, 
PERF_COUNT_SW_CPU_MIGRATIONS = 4, PERF_COUNT_SW_PAGE_FAULTS_MIN = 5, PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6, PERF_COUNT_SW_ALIGNMENT_FAULTS = 7, PERF_COUNT_SW_EMULATION_FAULTS = 8, PERF_COUNT_SW_DUMMY = 9, PERF_COUNT_SW_BPF_OUTPUT = 10, PERF_COUNT_SW_CGROUP_SWITCHES = 11, PERF_COUNT_SW_MAX = 12, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_sample_format { PERF_SAMPLE_IP = 1, PERF_SAMPLE_TID = 2, PERF_SAMPLE_TIME = 4, PERF_SAMPLE_ADDR = 8, PERF_SAMPLE_READ = 16, PERF_SAMPLE_CALLCHAIN = 32, PERF_SAMPLE_ID = 64, PERF_SAMPLE_CPU = 128, PERF_SAMPLE_PERIOD = 256, PERF_SAMPLE_STREAM_ID = 512, PERF_SAMPLE_RAW = 1024, PERF_SAMPLE_BRANCH_STACK = 2048, PERF_SAMPLE_REGS_USER = 4096, PERF_SAMPLE_STACK_USER = 8192, PERF_SAMPLE_WEIGHT = 16384, PERF_SAMPLE_DATA_SRC = 32768, PERF_SAMPLE_IDENTIFIER = 65536, PERF_SAMPLE_TRANSACTION = 131072, PERF_SAMPLE_REGS_INTR = 262144, PERF_SAMPLE_PHYS_ADDR = 524288, PERF_SAMPLE_AUX = 1048576, PERF_SAMPLE_CGROUP = 2097152, PERF_SAMPLE_DATA_PAGE_SIZE = 4194304, PERF_SAMPLE_CODE_PAGE_SIZE = 8388608, PERF_SAMPLE_WEIGHT_STRUCT = 16777216, PERF_SAMPLE_MAX = 33554432, } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_attr { pub type_: __u32, pub size: __u32, pub config: __u64, pub __bindgen_anon_1: perf_event_attr__bindgen_ty_1, pub sample_type: __u64, pub read_format: __u64, pub _bitfield_align_1: [u32; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, pub __bindgen_anon_2: perf_event_attr__bindgen_ty_2, pub bp_type: __u32, pub __bindgen_anon_3: perf_event_attr__bindgen_ty_3, pub __bindgen_anon_4: perf_event_attr__bindgen_ty_4, pub branch_sample_type: __u64, pub sample_regs_user: __u64, pub sample_stack_user: __u32, pub clockid: __s32, pub sample_regs_intr: __u64, pub aux_watermark: __u32, pub sample_max_stack: __u16, pub __reserved_2: __u16, pub aux_sample_size: __u32, pub __reserved_3: __u32, pub sig_data: __u64, pub config3: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union 
perf_event_attr__bindgen_ty_1 { pub sample_period: __u64, pub sample_freq: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_2 { pub wakeup_events: __u32, pub wakeup_watermark: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_3 { pub bp_addr: __u64, pub kprobe_func: __u64, pub uprobe_path: __u64, pub config1: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_4 { pub bp_len: __u64, pub kprobe_addr: __u64, pub probe_offset: __u64, pub config2: __u64, } impl perf_event_attr { #[inline] pub fn disabled(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_disabled(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn inherit(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_inherit(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn pinned(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_pinned(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn exclusive(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_exclusive(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn exclude_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_exclude_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn 
exclude_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_exclude_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn exclude_hv(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 1u8) as u64) } } #[inline] pub fn set_exclude_hv(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 1u8, val as u64) } } #[inline] pub fn exclude_idle(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(7usize, 1u8) as u64) } } #[inline] pub fn set_exclude_idle(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(7usize, 1u8, val as u64) } } #[inline] pub fn mmap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(8usize, 1u8) as u64) } } #[inline] pub fn set_mmap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(8usize, 1u8, val as u64) } } #[inline] pub fn comm(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(9usize, 1u8) as u64) } } #[inline] pub fn set_comm(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(9usize, 1u8, val as u64) } } #[inline] pub fn freq(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(10usize, 1u8) as u64) } } #[inline] pub fn set_freq(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(10usize, 1u8, val as u64) } } #[inline] pub fn inherit_stat(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(11usize, 1u8) as u64) } } #[inline] pub fn set_inherit_stat(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(11usize, 1u8, val as u64) } } #[inline] pub fn enable_on_exec(&self) -> 
__u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(12usize, 1u8) as u64) } } #[inline] pub fn set_enable_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(12usize, 1u8, val as u64) } } #[inline] pub fn task(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(13usize, 1u8) as u64) } } #[inline] pub fn set_task(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(13usize, 1u8, val as u64) } } #[inline] pub fn watermark(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(14usize, 1u8) as u64) } } #[inline] pub fn set_watermark(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(14usize, 1u8, val as u64) } } #[inline] pub fn precise_ip(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(15usize, 2u8) as u64) } } #[inline] pub fn set_precise_ip(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(15usize, 2u8, val as u64) } } #[inline] pub fn mmap_data(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(17usize, 1u8) as u64) } } #[inline] pub fn set_mmap_data(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(17usize, 1u8, val as u64) } } #[inline] pub fn sample_id_all(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(18usize, 1u8) as u64) } } #[inline] pub fn set_sample_id_all(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(18usize, 1u8, val as u64) } } #[inline] pub fn exclude_host(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(19usize, 1u8) as u64) } } #[inline] pub fn set_exclude_host(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(19usize, 1u8, val as u64) } } #[inline] pub fn exclude_guest(&self) 
-> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(20usize, 1u8) as u64) } } #[inline] pub fn set_exclude_guest(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(20usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(21usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(21usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(22usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(22usize, 1u8, val as u64) } } #[inline] pub fn mmap2(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(23usize, 1u8) as u64) } } #[inline] pub fn set_mmap2(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(23usize, 1u8, val as u64) } } #[inline] pub fn comm_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(24usize, 1u8) as u64) } } #[inline] pub fn set_comm_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(24usize, 1u8, val as u64) } } #[inline] pub fn use_clockid(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(25usize, 1u8) as u64) } } #[inline] pub fn set_use_clockid(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(25usize, 1u8, val as u64) } } #[inline] pub fn context_switch(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(26usize, 1u8) as u64) } } #[inline] pub fn set_context_switch(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(26usize, 
1u8, val as u64) } } #[inline] pub fn write_backward(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(27usize, 1u8) as u64) } } #[inline] pub fn set_write_backward(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(27usize, 1u8, val as u64) } } #[inline] pub fn namespaces(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(28usize, 1u8) as u64) } } #[inline] pub fn set_namespaces(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(28usize, 1u8, val as u64) } } #[inline] pub fn ksymbol(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(29usize, 1u8) as u64) } } #[inline] pub fn set_ksymbol(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(29usize, 1u8, val as u64) } } #[inline] pub fn bpf_event(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(30usize, 1u8) as u64) } } #[inline] pub fn set_bpf_event(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(30usize, 1u8, val as u64) } } #[inline] pub fn aux_output(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(31usize, 1u8) as u64) } } #[inline] pub fn set_aux_output(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(31usize, 1u8, val as u64) } } #[inline] pub fn cgroup(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(32usize, 1u8) as u64) } } #[inline] pub fn set_cgroup(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(32usize, 1u8, val as u64) } } #[inline] pub fn text_poke(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(33usize, 1u8) as u64) } } #[inline] pub fn set_text_poke(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(33usize, 1u8, 
val as u64) } } #[inline] pub fn build_id(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(34usize, 1u8) as u64) } } #[inline] pub fn set_build_id(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(34usize, 1u8, val as u64) } } #[inline] pub fn inherit_thread(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(35usize, 1u8) as u64) } } #[inline] pub fn set_inherit_thread(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(35usize, 1u8, val as u64) } } #[inline] pub fn remove_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(36usize, 1u8) as u64) } } #[inline] pub fn set_remove_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(36usize, 1u8, val as u64) } } #[inline] pub fn sigtrap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(37usize, 1u8) as u64) } } #[inline] pub fn set_sigtrap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(37usize, 1u8, val as u64) } } #[inline] pub fn __reserved_1(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(38usize, 26u8) as u64) } } #[inline] pub fn set___reserved_1(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(38usize, 26u8, val as u64) } } #[inline] pub fn new_bitfield_1( disabled: __u64, inherit: __u64, pinned: __u64, exclusive: __u64, exclude_user: __u64, exclude_kernel: __u64, exclude_hv: __u64, exclude_idle: __u64, mmap: __u64, comm: __u64, freq: __u64, inherit_stat: __u64, enable_on_exec: __u64, task: __u64, watermark: __u64, precise_ip: __u64, mmap_data: __u64, sample_id_all: __u64, exclude_host: __u64, exclude_guest: __u64, exclude_callchain_kernel: __u64, exclude_callchain_user: __u64, mmap2: __u64, comm_exec: __u64, use_clockid: __u64, context_switch: __u64, 
write_backward: __u64, namespaces: __u64, ksymbol: __u64, bpf_event: __u64, aux_output: __u64, cgroup: __u64, text_poke: __u64, build_id: __u64, inherit_thread: __u64, remove_on_exec: __u64, sigtrap: __u64, __reserved_1: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let disabled: u64 = unsafe { ::core::mem::transmute(disabled) }; disabled as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let inherit: u64 = unsafe { ::core::mem::transmute(inherit) }; inherit as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let pinned: u64 = unsafe { ::core::mem::transmute(pinned) }; pinned as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let exclusive: u64 = unsafe { ::core::mem::transmute(exclusive) }; exclusive as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let exclude_user: u64 = unsafe { ::core::mem::transmute(exclude_user) }; exclude_user as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let exclude_kernel: u64 = unsafe { ::core::mem::transmute(exclude_kernel) }; exclude_kernel as u64 }); __bindgen_bitfield_unit.set(6usize, 1u8, { let exclude_hv: u64 = unsafe { ::core::mem::transmute(exclude_hv) }; exclude_hv as u64 }); __bindgen_bitfield_unit.set(7usize, 1u8, { let exclude_idle: u64 = unsafe { ::core::mem::transmute(exclude_idle) }; exclude_idle as u64 }); __bindgen_bitfield_unit.set(8usize, 1u8, { let mmap: u64 = unsafe { ::core::mem::transmute(mmap) }; mmap as u64 }); __bindgen_bitfield_unit.set(9usize, 1u8, { let comm: u64 = unsafe { ::core::mem::transmute(comm) }; comm as u64 }); __bindgen_bitfield_unit.set(10usize, 1u8, { let freq: u64 = unsafe { ::core::mem::transmute(freq) }; freq as u64 }); __bindgen_bitfield_unit.set(11usize, 1u8, { let inherit_stat: u64 = unsafe { ::core::mem::transmute(inherit_stat) }; inherit_stat as u64 }); __bindgen_bitfield_unit.set(12usize, 1u8, { let enable_on_exec: u64 = unsafe { 
::core::mem::transmute(enable_on_exec) }; enable_on_exec as u64 }); __bindgen_bitfield_unit.set(13usize, 1u8, { let task: u64 = unsafe { ::core::mem::transmute(task) }; task as u64 }); __bindgen_bitfield_unit.set(14usize, 1u8, { let watermark: u64 = unsafe { ::core::mem::transmute(watermark) }; watermark as u64 }); __bindgen_bitfield_unit.set(15usize, 2u8, { let precise_ip: u64 = unsafe { ::core::mem::transmute(precise_ip) }; precise_ip as u64 }); __bindgen_bitfield_unit.set(17usize, 1u8, { let mmap_data: u64 = unsafe { ::core::mem::transmute(mmap_data) }; mmap_data as u64 }); __bindgen_bitfield_unit.set(18usize, 1u8, { let sample_id_all: u64 = unsafe { ::core::mem::transmute(sample_id_all) }; sample_id_all as u64 }); __bindgen_bitfield_unit.set(19usize, 1u8, { let exclude_host: u64 = unsafe { ::core::mem::transmute(exclude_host) }; exclude_host as u64 }); __bindgen_bitfield_unit.set(20usize, 1u8, { let exclude_guest: u64 = unsafe { ::core::mem::transmute(exclude_guest) }; exclude_guest as u64 }); __bindgen_bitfield_unit.set(21usize, 1u8, { let exclude_callchain_kernel: u64 = unsafe { ::core::mem::transmute(exclude_callchain_kernel) }; exclude_callchain_kernel as u64 }); __bindgen_bitfield_unit.set(22usize, 1u8, { let exclude_callchain_user: u64 = unsafe { ::core::mem::transmute(exclude_callchain_user) }; exclude_callchain_user as u64 }); __bindgen_bitfield_unit.set(23usize, 1u8, { let mmap2: u64 = unsafe { ::core::mem::transmute(mmap2) }; mmap2 as u64 }); __bindgen_bitfield_unit.set(24usize, 1u8, { let comm_exec: u64 = unsafe { ::core::mem::transmute(comm_exec) }; comm_exec as u64 }); __bindgen_bitfield_unit.set(25usize, 1u8, { let use_clockid: u64 = unsafe { ::core::mem::transmute(use_clockid) }; use_clockid as u64 }); __bindgen_bitfield_unit.set(26usize, 1u8, { let context_switch: u64 = unsafe { ::core::mem::transmute(context_switch) }; context_switch as u64 }); __bindgen_bitfield_unit.set(27usize, 1u8, { let write_backward: u64 = unsafe { 
::core::mem::transmute(write_backward) }; write_backward as u64 }); __bindgen_bitfield_unit.set(28usize, 1u8, { let namespaces: u64 = unsafe { ::core::mem::transmute(namespaces) }; namespaces as u64 }); __bindgen_bitfield_unit.set(29usize, 1u8, { let ksymbol: u64 = unsafe { ::core::mem::transmute(ksymbol) }; ksymbol as u64 }); __bindgen_bitfield_unit.set(30usize, 1u8, { let bpf_event: u64 = unsafe { ::core::mem::transmute(bpf_event) }; bpf_event as u64 }); __bindgen_bitfield_unit.set(31usize, 1u8, { let aux_output: u64 = unsafe { ::core::mem::transmute(aux_output) }; aux_output as u64 }); __bindgen_bitfield_unit.set(32usize, 1u8, { let cgroup: u64 = unsafe { ::core::mem::transmute(cgroup) }; cgroup as u64 }); __bindgen_bitfield_unit.set(33usize, 1u8, { let text_poke: u64 = unsafe { ::core::mem::transmute(text_poke) }; text_poke as u64 }); __bindgen_bitfield_unit.set(34usize, 1u8, { let build_id: u64 = unsafe { ::core::mem::transmute(build_id) }; build_id as u64 }); __bindgen_bitfield_unit.set(35usize, 1u8, { let inherit_thread: u64 = unsafe { ::core::mem::transmute(inherit_thread) }; inherit_thread as u64 }); __bindgen_bitfield_unit.set(36usize, 1u8, { let remove_on_exec: u64 = unsafe { ::core::mem::transmute(remove_on_exec) }; remove_on_exec as u64 }); __bindgen_bitfield_unit.set(37usize, 1u8, { let sigtrap: u64 = unsafe { ::core::mem::transmute(sigtrap) }; sigtrap as u64 }); __bindgen_bitfield_unit.set(38usize, 26u8, { let __reserved_1: u64 = unsafe { ::core::mem::transmute(__reserved_1) }; __reserved_1 as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_mmap_page { pub version: __u32, pub compat_version: __u32, pub lock: __u32, pub index: __u32, pub offset: __s64, pub time_enabled: __u64, pub time_running: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1, pub pmc_width: __u16, pub time_shift: __u16, pub time_mult: __u32, pub time_offset: __u64, pub time_zero: __u64, pub size: __u32, pub __reserved_1: 
__u32, pub time_cycles: __u64, pub time_mask: __u64, pub __reserved: [__u8; 928usize], pub data_head: __u64, pub data_tail: __u64, pub data_offset: __u64, pub data_size: __u64, pub aux_head: __u64, pub aux_tail: __u64, pub aux_offset: __u64, pub aux_size: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_mmap_page__bindgen_ty_1 { pub capabilities: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { pub _bitfield_align_1: [u64; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, } impl perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { #[inline] pub fn cap_bit0(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn cap_bit0_is_deprecated(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0_is_deprecated(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn cap_user_rdpmc(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_rdpmc(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_zero(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } 
#[inline] pub fn set_cap_user_time_zero(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_short(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_short(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn cap_____res(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 58u8) as u64) } } #[inline] pub fn set_cap_____res(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 58u8, val as u64) } } #[inline] pub fn new_bitfield_1( cap_bit0: __u64, cap_bit0_is_deprecated: __u64, cap_user_rdpmc: __u64, cap_user_time: __u64, cap_user_time_zero: __u64, cap_user_time_short: __u64, cap_____res: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let cap_bit0: u64 = unsafe { ::core::mem::transmute(cap_bit0) }; cap_bit0 as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let cap_bit0_is_deprecated: u64 = unsafe { ::core::mem::transmute(cap_bit0_is_deprecated) }; cap_bit0_is_deprecated as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let cap_user_rdpmc: u64 = unsafe { ::core::mem::transmute(cap_user_rdpmc) }; cap_user_rdpmc as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let cap_user_time: u64 = unsafe { ::core::mem::transmute(cap_user_time) }; cap_user_time as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let cap_user_time_zero: u64 = unsafe { ::core::mem::transmute(cap_user_time_zero) }; cap_user_time_zero as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let cap_user_time_short: u64 = unsafe { ::core::mem::transmute(cap_user_time_short) }; cap_user_time_short as u64 }); 
__bindgen_bitfield_unit.set(6usize, 58u8, { let cap_____res: u64 = unsafe { ::core::mem::transmute(cap_____res) }; cap_____res as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_header { pub type_: __u32, pub misc: __u16, pub size: __u16, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_type { PERF_RECORD_MMAP = 1, PERF_RECORD_LOST = 2, PERF_RECORD_COMM = 3, PERF_RECORD_EXIT = 4, PERF_RECORD_THROTTLE = 5, PERF_RECORD_UNTHROTTLE = 6, PERF_RECORD_FORK = 7, PERF_RECORD_READ = 8, PERF_RECORD_SAMPLE = 9, PERF_RECORD_MMAP2 = 10, PERF_RECORD_AUX = 11, PERF_RECORD_ITRACE_START = 12, PERF_RECORD_LOST_SAMPLES = 13, PERF_RECORD_SWITCH = 14, PERF_RECORD_SWITCH_CPU_WIDE = 15, PERF_RECORD_NAMESPACES = 16, PERF_RECORD_KSYMBOL = 17, PERF_RECORD_BPF_EVENT = 18, PERF_RECORD_CGROUP = 19, PERF_RECORD_TEXT_POKE = 20, PERF_RECORD_AUX_OUTPUT_HW_ID = 21, PERF_RECORD_MAX = 22, } pub const TCA_BPF_UNSPEC: _bindgen_ty_154 = 0; pub const TCA_BPF_ACT: _bindgen_ty_154 = 1; pub const TCA_BPF_POLICE: _bindgen_ty_154 = 2; pub const TCA_BPF_CLASSID: _bindgen_ty_154 = 3; pub const TCA_BPF_OPS_LEN: _bindgen_ty_154 = 4; pub const TCA_BPF_OPS: _bindgen_ty_154 = 5; pub const TCA_BPF_FD: _bindgen_ty_154 = 6; pub const TCA_BPF_NAME: _bindgen_ty_154 = 7; pub const TCA_BPF_FLAGS: _bindgen_ty_154 = 8; pub const TCA_BPF_FLAGS_GEN: _bindgen_ty_154 = 9; pub const TCA_BPF_TAG: _bindgen_ty_154 = 10; pub const TCA_BPF_ID: _bindgen_ty_154 = 11; pub const __TCA_BPF_MAX: _bindgen_ty_154 = 12; pub type _bindgen_ty_154 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct ifinfomsg { pub ifi_family: ::core::ffi::c_uchar, pub __ifi_pad: ::core::ffi::c_uchar, pub ifi_type: ::core::ffi::c_ushort, pub ifi_index: ::core::ffi::c_int, pub ifi_flags: ::core::ffi::c_uint, pub ifi_change: ::core::ffi::c_uint, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct tcmsg { pub tcm_family: ::core::ffi::c_uchar, pub tcm__pad1: 
::core::ffi::c_uchar, pub tcm__pad2: ::core::ffi::c_ushort, pub tcm_ifindex: ::core::ffi::c_int, pub tcm_handle: __u32, pub tcm_parent: __u32, pub tcm_info: __u32, } pub const TCA_UNSPEC: _bindgen_ty_172 = 0; pub const TCA_KIND: _bindgen_ty_172 = 1; pub const TCA_OPTIONS: _bindgen_ty_172 = 2; pub const TCA_STATS: _bindgen_ty_172 = 3; pub const TCA_XSTATS: _bindgen_ty_172 = 4; pub const TCA_RATE: _bindgen_ty_172 = 5; pub const TCA_FCNT: _bindgen_ty_172 = 6; pub const TCA_STATS2: _bindgen_ty_172 = 7; pub const TCA_STAB: _bindgen_ty_172 = 8; pub const TCA_PAD: _bindgen_ty_172 = 9; pub const TCA_DUMP_INVISIBLE: _bindgen_ty_172 = 10; pub const TCA_CHAIN: _bindgen_ty_172 = 11; pub const TCA_HW_OFFLOAD: _bindgen_ty_172 = 12; pub const TCA_INGRESS_BLOCK: _bindgen_ty_172 = 13; pub const TCA_EGRESS_BLOCK: _bindgen_ty_172 = 14; pub const __TCA_MAX: _bindgen_ty_172 = 15; pub type _bindgen_ty_172 = ::core::ffi::c_uint; pub const AYA_PERF_EVENT_IOC_ENABLE: ::core::ffi::c_int = 9216; pub const AYA_PERF_EVENT_IOC_DISABLE: ::core::ffi::c_int = 9217; pub const AYA_PERF_EVENT_IOC_SET_BPF: ::core::ffi::c_int = 1074013192;
aya-obj-0.2.1/src/generated/linux_bindings_armv7.rs
/* automatically generated by rust-bindgen 0.70.1 */ #[repr(C)] #[derive(Copy, Clone, Debug, Default, Eq, Hash, Ord, PartialEq, PartialOrd)] pub struct __BindgenBitfieldUnit<Storage> { storage: Storage, } impl<Storage> __BindgenBitfieldUnit<Storage> { #[inline] pub const fn new(storage: Storage) -> Self { Self { storage } } } impl<Storage> __BindgenBitfieldUnit<Storage> where Storage: AsRef<[u8]> + AsMut<[u8]>, { #[inline] pub fn get_bit(&self, index: usize) -> bool { debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = self.storage.as_ref()[byte_index]; let bit_index = if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; byte & mask == mask } #[inline] pub fn set_bit(&mut self, index: usize, val: bool) {
debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = &mut self.storage.as_mut()[byte_index]; let bit_index = if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; if val { *byte |= mask; } else { *byte &= !mask; } } #[inline] pub fn get(&self, bit_offset: usize, bit_width: u8) -> u64 { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len()); let mut val = 0; for i in 0..(bit_width as usize) { if self.get_bit(i + bit_offset) { let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; val |= 1 << index; } } val } #[inline] pub fn set(&mut self, bit_offset: usize, bit_width: u8, val: u64) { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len()); for i in 0..(bit_width as usize) { let mask = 1 << i; let val_bit_is_set = val & mask == mask; let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; self.set_bit(index + bit_offset, val_bit_is_set); } } } #[repr(C)] #[derive(Default)] pub struct __IncompleteArrayField<T>(::core::marker::PhantomData<T>, [T; 0]); impl<T> __IncompleteArrayField<T> { #[inline] pub const fn new() -> Self { __IncompleteArrayField(::core::marker::PhantomData, []) } #[inline] pub fn as_ptr(&self) -> *const T { self as *const _ as *const T } #[inline] pub fn as_mut_ptr(&mut self) -> *mut T { self as *mut _ as *mut T } #[inline] pub unsafe fn as_slice(&self, len: usize) -> &[T] { ::core::slice::from_raw_parts(self.as_ptr(), len) } #[inline] pub unsafe fn as_mut_slice(&mut self, len: usize) -> &mut [T] { ::core::slice::from_raw_parts_mut(self.as_mut_ptr(), len) } } impl<T> ::core::fmt::Debug for __IncompleteArrayField<T> { fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) ->
::core::fmt::Result { fmt.write_str("__IncompleteArrayField") } } pub const SO_ATTACH_BPF: u32 = 50; pub const SO_DETACH_BPF: u32 = 27; pub const BPF_LD: u32 = 0; pub const BPF_LDX: u32 = 1; pub const BPF_ST: u32 = 2; pub const BPF_STX: u32 = 3; pub const BPF_ALU: u32 = 4; pub const BPF_JMP: u32 = 5; pub const BPF_W: u32 = 0; pub const BPF_H: u32 = 8; pub const BPF_B: u32 = 16; pub const BPF_K: u32 = 0; pub const BPF_ALU64: u32 = 7; pub const BPF_DW: u32 = 24; pub const BPF_CALL: u32 = 128; pub const BPF_F_ALLOW_OVERRIDE: u32 = 1; pub const BPF_F_ALLOW_MULTI: u32 = 2; pub const BPF_F_REPLACE: u32 = 4; pub const BPF_F_BEFORE: u32 = 8; pub const BPF_F_AFTER: u32 = 16; pub const BPF_F_ID: u32 = 32; pub const BPF_F_STRICT_ALIGNMENT: u32 = 1; pub const BPF_F_ANY_ALIGNMENT: u32 = 2; pub const BPF_F_TEST_RND_HI32: u32 = 4; pub const BPF_F_TEST_STATE_FREQ: u32 = 8; pub const BPF_F_SLEEPABLE: u32 = 16; pub const BPF_F_XDP_HAS_FRAGS: u32 = 32; pub const BPF_F_XDP_DEV_BOUND_ONLY: u32 = 64; pub const BPF_F_TEST_REG_INVARIANTS: u32 = 128; pub const BPF_F_NETFILTER_IP_DEFRAG: u32 = 1; pub const BPF_PSEUDO_MAP_FD: u32 = 1; pub const BPF_PSEUDO_MAP_IDX: u32 = 5; pub const BPF_PSEUDO_MAP_VALUE: u32 = 2; pub const BPF_PSEUDO_MAP_IDX_VALUE: u32 = 6; pub const BPF_PSEUDO_BTF_ID: u32 = 3; pub const BPF_PSEUDO_FUNC: u32 = 4; pub const BPF_PSEUDO_CALL: u32 = 1; pub const BPF_PSEUDO_KFUNC_CALL: u32 = 2; pub const BPF_F_QUERY_EFFECTIVE: u32 = 1; pub const BPF_F_TEST_RUN_ON_CPU: u32 = 1; pub const BPF_F_TEST_XDP_LIVE_FRAMES: u32 = 2; pub const BTF_INT_SIGNED: u32 = 1; pub const BTF_INT_CHAR: u32 = 2; pub const BTF_INT_BOOL: u32 = 4; pub const NLMSG_ALIGNTO: u32 = 4; pub const XDP_FLAGS_UPDATE_IF_NOEXIST: u32 = 1; pub const XDP_FLAGS_SKB_MODE: u32 = 2; pub const XDP_FLAGS_DRV_MODE: u32 = 4; pub const XDP_FLAGS_HW_MODE: u32 = 8; pub const XDP_FLAGS_REPLACE: u32 = 16; pub const XDP_FLAGS_MODES: u32 = 14; pub const XDP_FLAGS_MASK: u32 = 31; pub const PERF_MAX_STACK_DEPTH: u32 = 127; pub const 
PERF_MAX_CONTEXTS_PER_STACK: u32 = 8; pub const PERF_FLAG_FD_NO_GROUP: u32 = 1; pub const PERF_FLAG_FD_OUTPUT: u32 = 2; pub const PERF_FLAG_PID_CGROUP: u32 = 4; pub const PERF_FLAG_FD_CLOEXEC: u32 = 8; pub const TC_H_MAJ_MASK: u32 = 4294901760; pub const TC_H_MIN_MASK: u32 = 65535; pub const TC_H_UNSPEC: u32 = 0; pub const TC_H_ROOT: u32 = 4294967295; pub const TC_H_INGRESS: u32 = 4294967281; pub const TC_H_CLSACT: u32 = 4294967281; pub const TC_H_MIN_PRIORITY: u32 = 65504; pub const TC_H_MIN_INGRESS: u32 = 65522; pub const TC_H_MIN_EGRESS: u32 = 65523; pub const TCA_BPF_FLAG_ACT_DIRECT: u32 = 1; pub type __u8 = ::core::ffi::c_uchar; pub type __s16 = ::core::ffi::c_short; pub type __u16 = ::core::ffi::c_ushort; pub type __s32 = ::core::ffi::c_int; pub type __u32 = ::core::ffi::c_uint; pub type __s64 = ::core::ffi::c_longlong; pub type __u64 = ::core::ffi::c_ulonglong; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_insn { pub code: __u8, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 1usize]>, pub off: __s16, pub imm: __s32, } impl bpf_insn { #[inline] pub fn dst_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 4u8) as u8) } } #[inline] pub fn set_dst_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 4u8, val as u64) } } #[inline] pub fn src_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 4u8) as u8) } } #[inline] pub fn set_src_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 4u8, val as u64) } } #[inline] pub fn new_bitfield_1(dst_reg: __u8, src_reg: __u8) -> __BindgenBitfieldUnit<[u8; 1usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 1usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 4u8, { let dst_reg: u8 = unsafe { ::core::mem::transmute(dst_reg) }; dst_reg as u64 }); __bindgen_bitfield_unit.set(4usize, 
4u8, { let src_reg: u8 = unsafe { ::core::mem::transmute(src_reg) }; src_reg as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug)] pub struct bpf_lpm_trie_key { pub prefixlen: __u32, pub data: __IncompleteArrayField<__u8>, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cgroup_iter_order { BPF_CGROUP_ITER_ORDER_UNSPEC = 0, BPF_CGROUP_ITER_SELF_ONLY = 1, BPF_CGROUP_ITER_DESCENDANTS_PRE = 2, BPF_CGROUP_ITER_DESCENDANTS_POST = 3, BPF_CGROUP_ITER_ANCESTORS_UP = 4, } impl bpf_cmd { pub const BPF_PROG_RUN: bpf_cmd = bpf_cmd::BPF_PROG_TEST_RUN; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cmd { BPF_MAP_CREATE = 0, BPF_MAP_LOOKUP_ELEM = 1, BPF_MAP_UPDATE_ELEM = 2, BPF_MAP_DELETE_ELEM = 3, BPF_MAP_GET_NEXT_KEY = 4, BPF_PROG_LOAD = 5, BPF_OBJ_PIN = 6, BPF_OBJ_GET = 7, BPF_PROG_ATTACH = 8, BPF_PROG_DETACH = 9, BPF_PROG_TEST_RUN = 10, BPF_PROG_GET_NEXT_ID = 11, BPF_MAP_GET_NEXT_ID = 12, BPF_PROG_GET_FD_BY_ID = 13, BPF_MAP_GET_FD_BY_ID = 14, BPF_OBJ_GET_INFO_BY_FD = 15, BPF_PROG_QUERY = 16, BPF_RAW_TRACEPOINT_OPEN = 17, BPF_BTF_LOAD = 18, BPF_BTF_GET_FD_BY_ID = 19, BPF_TASK_FD_QUERY = 20, BPF_MAP_LOOKUP_AND_DELETE_ELEM = 21, BPF_MAP_FREEZE = 22, BPF_BTF_GET_NEXT_ID = 23, BPF_MAP_LOOKUP_BATCH = 24, BPF_MAP_LOOKUP_AND_DELETE_BATCH = 25, BPF_MAP_UPDATE_BATCH = 26, BPF_MAP_DELETE_BATCH = 27, BPF_LINK_CREATE = 28, BPF_LINK_UPDATE = 29, BPF_LINK_GET_FD_BY_ID = 30, BPF_LINK_GET_NEXT_ID = 31, BPF_ENABLE_STATS = 32, BPF_ITER_CREATE = 33, BPF_LINK_DETACH = 34, BPF_PROG_BIND_MAP = 35, BPF_TOKEN_CREATE = 36, __MAX_BPF_CMD = 37, } impl bpf_map_type { pub const BPF_MAP_TYPE_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED; } impl bpf_map_type { pub const BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_map_type { BPF_MAP_TYPE_UNSPEC = 0, 
BPF_MAP_TYPE_HASH = 1, BPF_MAP_TYPE_ARRAY = 2, BPF_MAP_TYPE_PROG_ARRAY = 3, BPF_MAP_TYPE_PERF_EVENT_ARRAY = 4, BPF_MAP_TYPE_PERCPU_HASH = 5, BPF_MAP_TYPE_PERCPU_ARRAY = 6, BPF_MAP_TYPE_STACK_TRACE = 7, BPF_MAP_TYPE_CGROUP_ARRAY = 8, BPF_MAP_TYPE_LRU_HASH = 9, BPF_MAP_TYPE_LRU_PERCPU_HASH = 10, BPF_MAP_TYPE_LPM_TRIE = 11, BPF_MAP_TYPE_ARRAY_OF_MAPS = 12, BPF_MAP_TYPE_HASH_OF_MAPS = 13, BPF_MAP_TYPE_DEVMAP = 14, BPF_MAP_TYPE_SOCKMAP = 15, BPF_MAP_TYPE_CPUMAP = 16, BPF_MAP_TYPE_XSKMAP = 17, BPF_MAP_TYPE_SOCKHASH = 18, BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED = 19, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED = 21, BPF_MAP_TYPE_QUEUE = 22, BPF_MAP_TYPE_STACK = 23, BPF_MAP_TYPE_SK_STORAGE = 24, BPF_MAP_TYPE_DEVMAP_HASH = 25, BPF_MAP_TYPE_STRUCT_OPS = 26, BPF_MAP_TYPE_RINGBUF = 27, BPF_MAP_TYPE_INODE_STORAGE = 28, BPF_MAP_TYPE_TASK_STORAGE = 29, BPF_MAP_TYPE_BLOOM_FILTER = 30, BPF_MAP_TYPE_USER_RINGBUF = 31, BPF_MAP_TYPE_CGRP_STORAGE = 32, BPF_MAP_TYPE_ARENA = 33, __MAX_BPF_MAP_TYPE = 34, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_prog_type { BPF_PROG_TYPE_UNSPEC = 0, BPF_PROG_TYPE_SOCKET_FILTER = 1, BPF_PROG_TYPE_KPROBE = 2, BPF_PROG_TYPE_SCHED_CLS = 3, BPF_PROG_TYPE_SCHED_ACT = 4, BPF_PROG_TYPE_TRACEPOINT = 5, BPF_PROG_TYPE_XDP = 6, BPF_PROG_TYPE_PERF_EVENT = 7, BPF_PROG_TYPE_CGROUP_SKB = 8, BPF_PROG_TYPE_CGROUP_SOCK = 9, BPF_PROG_TYPE_LWT_IN = 10, BPF_PROG_TYPE_LWT_OUT = 11, BPF_PROG_TYPE_LWT_XMIT = 12, BPF_PROG_TYPE_SOCK_OPS = 13, BPF_PROG_TYPE_SK_SKB = 14, BPF_PROG_TYPE_CGROUP_DEVICE = 15, BPF_PROG_TYPE_SK_MSG = 16, BPF_PROG_TYPE_RAW_TRACEPOINT = 17, BPF_PROG_TYPE_CGROUP_SOCK_ADDR = 18, BPF_PROG_TYPE_LWT_SEG6LOCAL = 19, BPF_PROG_TYPE_LIRC_MODE2 = 20, BPF_PROG_TYPE_SK_REUSEPORT = 21, BPF_PROG_TYPE_FLOW_DISSECTOR = 22, BPF_PROG_TYPE_CGROUP_SYSCTL = 23, BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE = 24, BPF_PROG_TYPE_CGROUP_SOCKOPT = 25, BPF_PROG_TYPE_TRACING = 26, BPF_PROG_TYPE_STRUCT_OPS = 27, 
BPF_PROG_TYPE_EXT = 28, BPF_PROG_TYPE_LSM = 29, BPF_PROG_TYPE_SK_LOOKUP = 30, BPF_PROG_TYPE_SYSCALL = 31, BPF_PROG_TYPE_NETFILTER = 32, __MAX_BPF_PROG_TYPE = 33, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_attach_type { BPF_CGROUP_INET_INGRESS = 0, BPF_CGROUP_INET_EGRESS = 1, BPF_CGROUP_INET_SOCK_CREATE = 2, BPF_CGROUP_SOCK_OPS = 3, BPF_SK_SKB_STREAM_PARSER = 4, BPF_SK_SKB_STREAM_VERDICT = 5, BPF_CGROUP_DEVICE = 6, BPF_SK_MSG_VERDICT = 7, BPF_CGROUP_INET4_BIND = 8, BPF_CGROUP_INET6_BIND = 9, BPF_CGROUP_INET4_CONNECT = 10, BPF_CGROUP_INET6_CONNECT = 11, BPF_CGROUP_INET4_POST_BIND = 12, BPF_CGROUP_INET6_POST_BIND = 13, BPF_CGROUP_UDP4_SENDMSG = 14, BPF_CGROUP_UDP6_SENDMSG = 15, BPF_LIRC_MODE2 = 16, BPF_FLOW_DISSECTOR = 17, BPF_CGROUP_SYSCTL = 18, BPF_CGROUP_UDP4_RECVMSG = 19, BPF_CGROUP_UDP6_RECVMSG = 20, BPF_CGROUP_GETSOCKOPT = 21, BPF_CGROUP_SETSOCKOPT = 22, BPF_TRACE_RAW_TP = 23, BPF_TRACE_FENTRY = 24, BPF_TRACE_FEXIT = 25, BPF_MODIFY_RETURN = 26, BPF_LSM_MAC = 27, BPF_TRACE_ITER = 28, BPF_CGROUP_INET4_GETPEERNAME = 29, BPF_CGROUP_INET6_GETPEERNAME = 30, BPF_CGROUP_INET4_GETSOCKNAME = 31, BPF_CGROUP_INET6_GETSOCKNAME = 32, BPF_XDP_DEVMAP = 33, BPF_CGROUP_INET_SOCK_RELEASE = 34, BPF_XDP_CPUMAP = 35, BPF_SK_LOOKUP = 36, BPF_XDP = 37, BPF_SK_SKB_VERDICT = 38, BPF_SK_REUSEPORT_SELECT = 39, BPF_SK_REUSEPORT_SELECT_OR_MIGRATE = 40, BPF_PERF_EVENT = 41, BPF_TRACE_KPROBE_MULTI = 42, BPF_LSM_CGROUP = 43, BPF_STRUCT_OPS = 44, BPF_NETFILTER = 45, BPF_TCX_INGRESS = 46, BPF_TCX_EGRESS = 47, BPF_TRACE_UPROBE_MULTI = 48, BPF_CGROUP_UNIX_CONNECT = 49, BPF_CGROUP_UNIX_SENDMSG = 50, BPF_CGROUP_UNIX_RECVMSG = 51, BPF_CGROUP_UNIX_GETPEERNAME = 52, BPF_CGROUP_UNIX_GETSOCKNAME = 53, BPF_NETKIT_PRIMARY = 54, BPF_NETKIT_PEER = 55, __MAX_BPF_ATTACH_TYPE = 56, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_link_type { BPF_LINK_TYPE_UNSPEC = 0, BPF_LINK_TYPE_RAW_TRACEPOINT = 1, BPF_LINK_TYPE_TRACING = 2, BPF_LINK_TYPE_CGROUP 
= 3, BPF_LINK_TYPE_ITER = 4, BPF_LINK_TYPE_NETNS = 5, BPF_LINK_TYPE_XDP = 6, BPF_LINK_TYPE_PERF_EVENT = 7, BPF_LINK_TYPE_KPROBE_MULTI = 8, BPF_LINK_TYPE_STRUCT_OPS = 9, BPF_LINK_TYPE_NETFILTER = 10, BPF_LINK_TYPE_TCX = 11, BPF_LINK_TYPE_UPROBE_MULTI = 12, BPF_LINK_TYPE_NETKIT = 13, __MAX_BPF_LINK_TYPE = 14, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_perf_event_type { BPF_PERF_EVENT_UNSPEC = 0, BPF_PERF_EVENT_UPROBE = 1, BPF_PERF_EVENT_URETPROBE = 2, BPF_PERF_EVENT_KPROBE = 3, BPF_PERF_EVENT_KRETPROBE = 4, BPF_PERF_EVENT_TRACEPOINT = 5, BPF_PERF_EVENT_EVENT = 6, } pub const BPF_F_KPROBE_MULTI_RETURN: _bindgen_ty_2 = 1; pub type _bindgen_ty_2 = ::core::ffi::c_uint; pub const BPF_F_UPROBE_MULTI_RETURN: _bindgen_ty_3 = 1; pub type _bindgen_ty_3 = ::core::ffi::c_uint; pub const BPF_ANY: _bindgen_ty_4 = 0; pub const BPF_NOEXIST: _bindgen_ty_4 = 1; pub const BPF_EXIST: _bindgen_ty_4 = 2; pub const BPF_F_LOCK: _bindgen_ty_4 = 4; pub type _bindgen_ty_4 = ::core::ffi::c_uint; pub const BPF_F_NO_PREALLOC: _bindgen_ty_5 = 1; pub const BPF_F_NO_COMMON_LRU: _bindgen_ty_5 = 2; pub const BPF_F_NUMA_NODE: _bindgen_ty_5 = 4; pub const BPF_F_RDONLY: _bindgen_ty_5 = 8; pub const BPF_F_WRONLY: _bindgen_ty_5 = 16; pub const BPF_F_STACK_BUILD_ID: _bindgen_ty_5 = 32; pub const BPF_F_ZERO_SEED: _bindgen_ty_5 = 64; pub const BPF_F_RDONLY_PROG: _bindgen_ty_5 = 128; pub const BPF_F_WRONLY_PROG: _bindgen_ty_5 = 256; pub const BPF_F_CLONE: _bindgen_ty_5 = 512; pub const BPF_F_MMAPABLE: _bindgen_ty_5 = 1024; pub const BPF_F_PRESERVE_ELEMS: _bindgen_ty_5 = 2048; pub const BPF_F_INNER_MAP: _bindgen_ty_5 = 4096; pub const BPF_F_LINK: _bindgen_ty_5 = 8192; pub const BPF_F_PATH_FD: _bindgen_ty_5 = 16384; pub const BPF_F_VTYPE_BTF_OBJ_FD: _bindgen_ty_5 = 32768; pub const BPF_F_TOKEN_FD: _bindgen_ty_5 = 65536; pub const BPF_F_SEGV_ON_FAULT: _bindgen_ty_5 = 131072; pub const BPF_F_NO_USER_CONV: _bindgen_ty_5 = 262144; pub type _bindgen_ty_5 = ::core::ffi::c_uint; 
#[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_stats_type { BPF_STATS_RUN_TIME = 0, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr { pub __bindgen_anon_1: bpf_attr__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_2, pub batch: bpf_attr__bindgen_ty_3, pub __bindgen_anon_3: bpf_attr__bindgen_ty_4, pub __bindgen_anon_4: bpf_attr__bindgen_ty_5, pub __bindgen_anon_5: bpf_attr__bindgen_ty_6, pub test: bpf_attr__bindgen_ty_7, pub __bindgen_anon_6: bpf_attr__bindgen_ty_8, pub info: bpf_attr__bindgen_ty_9, pub query: bpf_attr__bindgen_ty_10, pub raw_tracepoint: bpf_attr__bindgen_ty_11, pub __bindgen_anon_7: bpf_attr__bindgen_ty_12, pub task_fd_query: bpf_attr__bindgen_ty_13, pub link_create: bpf_attr__bindgen_ty_14, pub link_update: bpf_attr__bindgen_ty_15, pub link_detach: bpf_attr__bindgen_ty_16, pub enable_stats: bpf_attr__bindgen_ty_17, pub iter_create: bpf_attr__bindgen_ty_18, pub prog_bind_map: bpf_attr__bindgen_ty_19, pub token_create: bpf_attr__bindgen_ty_20, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_1 { pub map_type: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub inner_map_fd: __u32, pub numa_node: __u32, pub map_name: [::core::ffi::c_char; 16usize], pub map_ifindex: __u32, pub btf_fd: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_value_type_id: __u32, pub map_extra: __u64, pub value_type_btf_obj_fd: __s32, pub map_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_2 { pub map_fd: __u32, pub key: __u64, pub __bindgen_anon_1: bpf_attr__bindgen_ty_2__bindgen_ty_1, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_2__bindgen_ty_1 { pub value: __u64, pub next_key: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_3 { pub in_batch: __u64, pub out_batch: __u64, pub keys: __u64, pub values: __u64, pub 
count: __u32, pub map_fd: __u32, pub elem_flags: __u64, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_4 { pub prog_type: __u32, pub insn_cnt: __u32, pub insns: __u64, pub license: __u64, pub log_level: __u32, pub log_size: __u32, pub log_buf: __u64, pub kern_version: __u32, pub prog_flags: __u32, pub prog_name: [::core::ffi::c_char; 16usize], pub prog_ifindex: __u32, pub expected_attach_type: __u32, pub prog_btf_fd: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub func_info_cnt: __u32, pub line_info_rec_size: __u32, pub line_info: __u64, pub line_info_cnt: __u32, pub attach_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_4__bindgen_ty_1, pub core_relo_cnt: __u32, pub fd_array: __u64, pub core_relos: __u64, pub core_relo_rec_size: __u32, pub log_true_size: __u32, pub prog_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_4__bindgen_ty_1 { pub attach_prog_fd: __u32, pub attach_btf_obj_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_5 { pub pathname: __u64, pub bpf_fd: __u32, pub file_flags: __u32, pub path_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_6__bindgen_ty_1, pub attach_bpf_fd: __u32, pub attach_type: __u32, pub attach_flags: __u32, pub replace_bpf_fd: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_6__bindgen_ty_2, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_2 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_7 { pub prog_fd: __u32, pub retval: __u32, pub data_size_in: __u32, pub data_size_out: __u32, pub data_in: __u64, pub data_out: __u64, pub repeat: __u32, pub 
duration: __u32, pub ctx_size_in: __u32, pub ctx_size_out: __u32, pub ctx_in: __u64, pub ctx_out: __u64, pub flags: __u32, pub cpu: __u32, pub batch_size: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_8__bindgen_ty_1, pub next_id: __u32, pub open_flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_8__bindgen_ty_1 { pub start_id: __u32, pub prog_id: __u32, pub map_id: __u32, pub btf_id: __u32, pub link_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_9 { pub bpf_fd: __u32, pub info_len: __u32, pub info: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_10 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_10__bindgen_ty_1, pub attach_type: __u32, pub query_flags: __u32, pub attach_flags: __u32, pub prog_ids: __u64, pub __bindgen_anon_2: bpf_attr__bindgen_ty_10__bindgen_ty_2, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub prog_attach_flags: __u64, pub link_ids: __u64, pub link_attach_flags: __u64, pub revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_2 { pub prog_cnt: __u32, pub count: __u32, } impl bpf_attr__bindgen_ty_10 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_11 { pub name: __u64, pub prog_fd: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_attr__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: 
__BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_12 { pub btf: __u64, pub btf_log_buf: __u64, pub btf_size: __u32, pub btf_log_size: __u32, pub btf_log_level: __u32, pub btf_log_true_size: __u32, pub btf_flags: __u32, pub btf_token_fd: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_13 { pub pid: __u32, pub fd: __u32, pub flags: __u32, pub buf_len: __u32, pub buf: __u64, pub prog_id: __u32, pub fd_type: __u32, pub probe_offset: __u64, pub probe_addr: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_14__bindgen_ty_2, pub attach_type: __u32, pub flags: __u32, pub __bindgen_anon_3: bpf_attr__bindgen_ty_14__bindgen_ty_3, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_1 { pub prog_fd: __u32, pub map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_2 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3 { pub target_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1, pub perf_event: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2, pub kprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3, pub tracing: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4, pub netfilter: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5, pub tcx: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6, pub uprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7, pub netkit: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1 { pub iter_info: __u64, pub iter_info_len: __u32, } #[repr(C)] 
#[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2 { pub bpf_cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3 { pub flags: __u32, pub cnt: __u32, pub syms: __u64, pub addrs: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4 { pub target_btf_id: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub cnt: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_15 { pub link_fd: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_15__bindgen_ty_1, pub flags: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_15__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_1 { pub new_prog_fd: __u32, 
pub new_map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_2 { pub old_prog_fd: __u32, pub old_map_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_16 { pub link_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_17 { pub type_: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_18 { pub link_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_19 { pub prog_fd: __u32, pub map_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_20 { pub flags: __u32, pub bpffs_fd: __u32, } pub const BPF_F_RECOMPUTE_CSUM: _bindgen_ty_6 = 1; pub const BPF_F_INVALIDATE_HASH: _bindgen_ty_6 = 2; pub type _bindgen_ty_6 = ::core::ffi::c_uint; pub const BPF_F_HDR_FIELD_MASK: _bindgen_ty_7 = 15; pub type _bindgen_ty_7 = ::core::ffi::c_uint; pub const BPF_F_PSEUDO_HDR: _bindgen_ty_8 = 16; pub const BPF_F_MARK_MANGLED_0: _bindgen_ty_8 = 32; pub const BPF_F_MARK_ENFORCE: _bindgen_ty_8 = 64; pub type _bindgen_ty_8 = ::core::ffi::c_uint; pub const BPF_F_INGRESS: _bindgen_ty_9 = 1; pub type _bindgen_ty_9 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_IPV6: _bindgen_ty_10 = 1; pub type _bindgen_ty_10 = ::core::ffi::c_uint; pub const BPF_F_SKIP_FIELD_MASK: _bindgen_ty_11 = 255; pub const BPF_F_USER_STACK: _bindgen_ty_11 = 256; pub const BPF_F_FAST_STACK_CMP: _bindgen_ty_11 = 512; pub const BPF_F_REUSE_STACKID: _bindgen_ty_11 = 1024; pub const BPF_F_USER_BUILD_ID: _bindgen_ty_11 = 2048; pub type _bindgen_ty_11 = ::core::ffi::c_uint; pub const BPF_F_ZERO_CSUM_TX: _bindgen_ty_12 = 2; pub const BPF_F_DONT_FRAGMENT: _bindgen_ty_12 = 4; pub const BPF_F_SEQ_NUMBER: _bindgen_ty_12 = 8; pub const BPF_F_NO_TUNNEL_KEY: _bindgen_ty_12 = 16; pub type _bindgen_ty_12 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_FLAGS: _bindgen_ty_13 = 16; pub type _bindgen_ty_13 = 
::core::ffi::c_uint; pub const BPF_F_INDEX_MASK: _bindgen_ty_14 = 4294967295; pub const BPF_F_CURRENT_CPU: _bindgen_ty_14 = 4294967295; pub const BPF_F_CTXLEN_MASK: _bindgen_ty_14 = 4503595332403200; pub type _bindgen_ty_14 = ::core::ffi::c_ulonglong; pub const BPF_F_CURRENT_NETNS: _bindgen_ty_15 = -1; pub type _bindgen_ty_15 = ::core::ffi::c_int; pub const BPF_F_ADJ_ROOM_FIXED_GSO: _bindgen_ty_17 = 1; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV4: _bindgen_ty_17 = 2; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV6: _bindgen_ty_17 = 4; pub const BPF_F_ADJ_ROOM_ENCAP_L4_GRE: _bindgen_ty_17 = 8; pub const BPF_F_ADJ_ROOM_ENCAP_L4_UDP: _bindgen_ty_17 = 16; pub const BPF_F_ADJ_ROOM_NO_CSUM_RESET: _bindgen_ty_17 = 32; pub const BPF_F_ADJ_ROOM_ENCAP_L2_ETH: _bindgen_ty_17 = 64; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV4: _bindgen_ty_17 = 128; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV6: _bindgen_ty_17 = 256; pub type _bindgen_ty_17 = ::core::ffi::c_uint; pub const BPF_F_SYSCTL_BASE_NAME: _bindgen_ty_19 = 1; pub type _bindgen_ty_19 = ::core::ffi::c_uint; pub const BPF_F_GET_BRANCH_RECORDS_SIZE: _bindgen_ty_21 = 1; pub type _bindgen_ty_21 = ::core::ffi::c_uint; pub const BPF_RINGBUF_BUSY_BIT: _bindgen_ty_24 = 2147483648; pub const BPF_RINGBUF_DISCARD_BIT: _bindgen_ty_24 = 1073741824; pub const BPF_RINGBUF_HDR_SZ: _bindgen_ty_24 = 8; pub type _bindgen_ty_24 = ::core::ffi::c_uint; pub const BPF_F_BPRM_SECUREEXEC: _bindgen_ty_26 = 1; pub type _bindgen_ty_26 = ::core::ffi::c_uint; pub const BPF_F_BROADCAST: _bindgen_ty_27 = 8; pub const BPF_F_EXCLUDE_INGRESS: _bindgen_ty_27 = 16; pub type _bindgen_ty_27 = ::core::ffi::c_uint; #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_devmap_val { pub ifindex: __u32, pub bpf_prog: bpf_devmap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_devmap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_cpumap_val { pub qsize: __u32, pub bpf_prog: bpf_cpumap_val__bindgen_ty_1, } 
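The `bpf_devmap_val` and `bpf_cpumap_val` definitions above both pair a plain field with a C union selecting a program by `fd` or by `id`. A standalone sketch of that pattern (using local mirror types, not the generated bindings themselves) shows how such a union member is written and read back:

```rust
// Illustrative mirror of the generated bpf_cpumap_val layout (names are
// local to this sketch, not the bindgen identifiers).
#[repr(C)]
#[derive(Copy, Clone)]
union CpumapProg {
    fd: i32, // program selected by file descriptor
    id: u32, // program selected by kernel-assigned id
}

#[repr(C)]
#[derive(Copy, Clone)]
struct CpumapVal {
    qsize: u32,
    bpf_prog: CpumapProg,
}

fn main() {
    // Writing one union member and reading the same member back is
    // well-defined for these plain-old-data fields; reading a union
    // field is still an unsafe operation in Rust.
    let val = CpumapVal {
        qsize: 192,
        bpf_prog: CpumapProg { id: 7 },
    };
    let id = unsafe { val.bpf_prog.id };
    println!("qsize={} prog_id={}", val.qsize, id);
}
```

Both union members are 4 bytes wide, so the whole value stays 8 bytes with `#[repr(C)]`, matching what the kernel expects as a map value.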
#[repr(C)] #[derive(Copy, Clone)] pub union bpf_cpumap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_prog_info { pub type_: __u32, pub id: __u32, pub tag: [__u8; 8usize], pub jited_prog_len: __u32, pub xlated_prog_len: __u32, pub jited_prog_insns: __u64, pub xlated_prog_insns: __u64, pub load_time: __u64, pub created_by_uid: __u32, pub nr_map_ids: __u32, pub map_ids: __u64, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub netns_dev: __u64, pub netns_ino: __u64, pub nr_jited_ksyms: __u32, pub nr_jited_func_lens: __u32, pub jited_ksyms: __u64, pub jited_func_lens: __u64, pub btf_id: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub nr_func_info: __u32, pub nr_line_info: __u32, pub line_info: __u64, pub jited_line_info: __u64, pub nr_jited_line_info: __u32, pub line_info_rec_size: __u32, pub jited_line_info_rec_size: __u32, pub nr_prog_tags: __u32, pub prog_tags: __u64, pub run_time_ns: __u64, pub run_cnt: __u64, pub recursion_misses: __u64, pub verified_insns: __u32, pub attach_btf_obj_id: __u32, pub attach_btf_id: __u32, } impl bpf_prog_info { #[inline] pub fn gpl_compatible(&self) -> __u32 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u32) } } #[inline] pub fn set_gpl_compatible(&mut self, val: __u32) { unsafe { let val: u32 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn new_bitfield_1(gpl_compatible: __u32) -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let gpl_compatible: u32 = unsafe { ::core::mem::transmute(gpl_compatible) }; gpl_compatible as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_map_info { pub type_: __u32, pub id: 
__u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub btf_vmlinux_value_type_id: __u32, pub netns_dev: __u64, pub netns_ino: __u64, pub btf_id: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_id: __u32, pub map_extra: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_btf_info { pub btf: __u64, pub btf_size: __u32, pub id: __u32, pub name: __u64, pub name_len: __u32, pub kernel_btf: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info { pub type_: __u32, pub id: __u32, pub prog_id: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1 { pub raw_tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_1, pub tracing: bpf_link_info__bindgen_ty_1__bindgen_ty_2, pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_3, pub iter: bpf_link_info__bindgen_ty_1__bindgen_ty_4, pub netns: bpf_link_info__bindgen_ty_1__bindgen_ty_5, pub xdp: bpf_link_info__bindgen_ty_1__bindgen_ty_6, pub struct_ops: bpf_link_info__bindgen_ty_1__bindgen_ty_7, pub netfilter: bpf_link_info__bindgen_ty_1__bindgen_ty_8, pub kprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_9, pub uprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_10, pub perf_event: bpf_link_info__bindgen_ty_1__bindgen_ty_11, pub tcx: bpf_link_info__bindgen_ty_1__bindgen_ty_12, pub netkit: bpf_link_info__bindgen_ty_1__bindgen_ty_13, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_1 { pub tp_name: __u64, pub tp_name_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_2 { pub attach_type: __u32, pub target_obj_id: __u32, pub target_btf_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_3 { pub cgroup_id: __u64, pub attach_type: __u32, } 
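`bpf_link_info` above is a tagged union: the kernel fills exactly one variant of `__bindgen_anon_1`, and the leading `type_` field says which. A standalone sketch (simplified local mirror types, assuming the kernel's `BPF_LINK_TYPE_CGROUP = 3` discriminant) of the check-before-read pattern a caller should follow:

```rust
// Discriminant value from the kernel's enum bpf_link_type.
const BPF_LINK_TYPE_CGROUP: u32 = 3;

// Simplified mirrors of the generated structs; only the cgroup variant
// is kept for the sketch.
#[repr(C)]
#[derive(Copy, Clone)]
struct CgroupLink {
    cgroup_id: u64,
    attach_type: u32,
}

#[repr(C)]
#[derive(Copy, Clone)]
union LinkDetails {
    cgroup: CgroupLink,
    // other variants elided
}

#[repr(C)]
struct LinkInfo {
    type_: u32,
    id: u32,
    prog_id: u32,
    details: LinkDetails,
}

fn cgroup_id(info: &LinkInfo) -> Option<u64> {
    // Only read `details.cgroup` when the discriminant says that
    // variant is the one the kernel populated.
    (info.type_ == BPF_LINK_TYPE_CGROUP)
        .then(|| unsafe { info.details.cgroup.cgroup_id })
}

fn main() {
    let info = LinkInfo {
        type_: BPF_LINK_TYPE_CGROUP,
        id: 1,
        prog_id: 42,
        details: LinkDetails {
            cgroup: CgroupLink { cgroup_id: 99, attach_type: 0 },
        },
    };
    println!("{:?}", cgroup_id(&info));
}
```

The same discipline applies to every union in these bindings (`bpf_attr`, `perf_event_attr`, and the nested `bpf_link_info` variants): reading a member other than the one the kernel wrote yields garbage, even though it is not undefined behavior for these all-integer fields.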
#[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4 { pub target_name: __u64, pub target_name_len: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1, pub __bindgen_anon_2: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1 { pub map: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1 { pub map_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2 { pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1, pub task: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1 { pub cgroup_id: __u64, pub order: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2 { pub tid: __u32, pub pid: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_5 { pub netns_ino: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_6 { pub ifindex: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_7 { pub map_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_8 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_9 { pub addrs: __u64, pub count: __u32, pub flags: __u32, pub missed: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, 
Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_10 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub path_size: __u32, pub count: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11 { pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1 { pub uprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1, pub kprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2, pub tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3, pub event: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1 { pub file_name: __u64, pub name_len: __u32, pub offset: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2 { pub func_name: __u64, pub name_len: __u32, pub offset: __u32, pub addr: __u64, pub missed: __u64, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { pub tp_name: __u64, pub name_len: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct 
bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { pub config: __u64, pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_12 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_13 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_func_info { pub insn_off: __u32, pub type_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_line_info { pub insn_off: __u32, pub file_name_off: __u32, pub line_off: __u32, pub line_col: __u32, } pub const BPF_F_TIMER_ABS: _bindgen_ty_41 = 1; pub const BPF_F_TIMER_CPU_PIN: _bindgen_ty_41 = 2; pub type _bindgen_ty_41 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_header { pub magic: __u16, pub version: __u8, pub flags: __u8, pub hdr_len: __u32, pub type_off: __u32, pub type_len: __u32, pub str_off: __u32, pub str_len: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct btf_type { pub name_off: __u32, pub info: __u32, pub __bindgen_anon_1: btf_type__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union btf_type__bindgen_ty_1 { pub size: __u32, pub type_: __u32, } pub const BTF_KIND_UNKN: _bindgen_ty_42 = 0; pub const BTF_KIND_INT: _bindgen_ty_42 = 1; pub 
const BTF_KIND_PTR: _bindgen_ty_42 = 2; pub const BTF_KIND_ARRAY: _bindgen_ty_42 = 3; pub const BTF_KIND_STRUCT: _bindgen_ty_42 = 4; pub const BTF_KIND_UNION: _bindgen_ty_42 = 5; pub const BTF_KIND_ENUM: _bindgen_ty_42 = 6; pub const BTF_KIND_FWD: _bindgen_ty_42 = 7; pub const BTF_KIND_TYPEDEF: _bindgen_ty_42 = 8; pub const BTF_KIND_VOLATILE: _bindgen_ty_42 = 9; pub const BTF_KIND_CONST: _bindgen_ty_42 = 10; pub const BTF_KIND_RESTRICT: _bindgen_ty_42 = 11; pub const BTF_KIND_FUNC: _bindgen_ty_42 = 12; pub const BTF_KIND_FUNC_PROTO: _bindgen_ty_42 = 13; pub const BTF_KIND_VAR: _bindgen_ty_42 = 14; pub const BTF_KIND_DATASEC: _bindgen_ty_42 = 15; pub const BTF_KIND_FLOAT: _bindgen_ty_42 = 16; pub const BTF_KIND_DECL_TAG: _bindgen_ty_42 = 17; pub const BTF_KIND_TYPE_TAG: _bindgen_ty_42 = 18; pub const BTF_KIND_ENUM64: _bindgen_ty_42 = 19; pub const NR_BTF_KINDS: _bindgen_ty_42 = 20; pub const BTF_KIND_MAX: _bindgen_ty_42 = 19; pub type _bindgen_ty_42 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_enum { pub name_off: __u32, pub val: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_array { pub type_: __u32, pub index_type: __u32, pub nelems: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_member { pub name_off: __u32, pub type_: __u32, pub offset: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_param { pub name_off: __u32, pub type_: __u32, } pub const BTF_VAR_STATIC: _bindgen_ty_43 = 0; pub const BTF_VAR_GLOBAL_ALLOCATED: _bindgen_ty_43 = 1; pub const BTF_VAR_GLOBAL_EXTERN: _bindgen_ty_43 = 2; pub type _bindgen_ty_43 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum btf_func_linkage { BTF_FUNC_STATIC = 0, BTF_FUNC_GLOBAL = 1, BTF_FUNC_EXTERN = 2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var { pub linkage: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var_secinfo { pub type_: __u32, pub offset: __u32, pub 
size: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_decl_tag { pub component_idx: __s32, } pub const IFLA_XDP_UNSPEC: _bindgen_ty_92 = 0; pub const IFLA_XDP_FD: _bindgen_ty_92 = 1; pub const IFLA_XDP_ATTACHED: _bindgen_ty_92 = 2; pub const IFLA_XDP_FLAGS: _bindgen_ty_92 = 3; pub const IFLA_XDP_PROG_ID: _bindgen_ty_92 = 4; pub const IFLA_XDP_DRV_PROG_ID: _bindgen_ty_92 = 5; pub const IFLA_XDP_SKB_PROG_ID: _bindgen_ty_92 = 6; pub const IFLA_XDP_HW_PROG_ID: _bindgen_ty_92 = 7; pub const IFLA_XDP_EXPECTED_FD: _bindgen_ty_92 = 8; pub const __IFLA_XDP_MAX: _bindgen_ty_92 = 9; pub type _bindgen_ty_92 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum nf_inet_hooks { NF_INET_PRE_ROUTING = 0, NF_INET_LOCAL_IN = 1, NF_INET_FORWARD = 2, NF_INET_LOCAL_OUT = 3, NF_INET_POST_ROUTING = 4, NF_INET_NUMHOOKS = 5, } pub const NFPROTO_UNSPEC: _bindgen_ty_99 = 0; pub const NFPROTO_INET: _bindgen_ty_99 = 1; pub const NFPROTO_IPV4: _bindgen_ty_99 = 2; pub const NFPROTO_ARP: _bindgen_ty_99 = 3; pub const NFPROTO_NETDEV: _bindgen_ty_99 = 5; pub const NFPROTO_BRIDGE: _bindgen_ty_99 = 7; pub const NFPROTO_IPV6: _bindgen_ty_99 = 10; pub const NFPROTO_DECNET: _bindgen_ty_99 = 12; pub const NFPROTO_NUMPROTO: _bindgen_ty_99 = 13; pub type _bindgen_ty_99 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_type_id { PERF_TYPE_HARDWARE = 0, PERF_TYPE_SOFTWARE = 1, PERF_TYPE_TRACEPOINT = 2, PERF_TYPE_HW_CACHE = 3, PERF_TYPE_RAW = 4, PERF_TYPE_BREAKPOINT = 5, PERF_TYPE_MAX = 6, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_id { PERF_COUNT_HW_CPU_CYCLES = 0, PERF_COUNT_HW_INSTRUCTIONS = 1, PERF_COUNT_HW_CACHE_REFERENCES = 2, PERF_COUNT_HW_CACHE_MISSES = 3, PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4, PERF_COUNT_HW_BRANCH_MISSES = 5, PERF_COUNT_HW_BUS_CYCLES = 6, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7, PERF_COUNT_HW_STALLED_CYCLES_BACKEND = 8, 
PERF_COUNT_HW_REF_CPU_CYCLES = 9, PERF_COUNT_HW_MAX = 10, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_id { PERF_COUNT_HW_CACHE_L1D = 0, PERF_COUNT_HW_CACHE_L1I = 1, PERF_COUNT_HW_CACHE_LL = 2, PERF_COUNT_HW_CACHE_DTLB = 3, PERF_COUNT_HW_CACHE_ITLB = 4, PERF_COUNT_HW_CACHE_BPU = 5, PERF_COUNT_HW_CACHE_NODE = 6, PERF_COUNT_HW_CACHE_MAX = 7, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_id { PERF_COUNT_HW_CACHE_OP_READ = 0, PERF_COUNT_HW_CACHE_OP_WRITE = 1, PERF_COUNT_HW_CACHE_OP_PREFETCH = 2, PERF_COUNT_HW_CACHE_OP_MAX = 3, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_result_id { PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0, PERF_COUNT_HW_CACHE_RESULT_MISS = 1, PERF_COUNT_HW_CACHE_RESULT_MAX = 2, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_sw_ids { PERF_COUNT_SW_CPU_CLOCK = 0, PERF_COUNT_SW_TASK_CLOCK = 1, PERF_COUNT_SW_PAGE_FAULTS = 2, PERF_COUNT_SW_CONTEXT_SWITCHES = 3, PERF_COUNT_SW_CPU_MIGRATIONS = 4, PERF_COUNT_SW_PAGE_FAULTS_MIN = 5, PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6, PERF_COUNT_SW_ALIGNMENT_FAULTS = 7, PERF_COUNT_SW_EMULATION_FAULTS = 8, PERF_COUNT_SW_DUMMY = 9, PERF_COUNT_SW_BPF_OUTPUT = 10, PERF_COUNT_SW_CGROUP_SWITCHES = 11, PERF_COUNT_SW_MAX = 12, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_sample_format { PERF_SAMPLE_IP = 1, PERF_SAMPLE_TID = 2, PERF_SAMPLE_TIME = 4, PERF_SAMPLE_ADDR = 8, PERF_SAMPLE_READ = 16, PERF_SAMPLE_CALLCHAIN = 32, PERF_SAMPLE_ID = 64, PERF_SAMPLE_CPU = 128, PERF_SAMPLE_PERIOD = 256, PERF_SAMPLE_STREAM_ID = 512, PERF_SAMPLE_RAW = 1024, PERF_SAMPLE_BRANCH_STACK = 2048, PERF_SAMPLE_REGS_USER = 4096, PERF_SAMPLE_STACK_USER = 8192, PERF_SAMPLE_WEIGHT = 16384, PERF_SAMPLE_DATA_SRC = 32768, PERF_SAMPLE_IDENTIFIER = 65536, PERF_SAMPLE_TRANSACTION = 131072, PERF_SAMPLE_REGS_INTR = 262144, PERF_SAMPLE_PHYS_ADDR = 524288, 
PERF_SAMPLE_AUX = 1048576, PERF_SAMPLE_CGROUP = 2097152, PERF_SAMPLE_DATA_PAGE_SIZE = 4194304, PERF_SAMPLE_CODE_PAGE_SIZE = 8388608, PERF_SAMPLE_WEIGHT_STRUCT = 16777216, PERF_SAMPLE_MAX = 33554432, } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_attr { pub type_: __u32, pub size: __u32, pub config: __u64, pub __bindgen_anon_1: perf_event_attr__bindgen_ty_1, pub sample_type: __u64, pub read_format: __u64, pub _bitfield_align_1: [u32; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, pub __bindgen_anon_2: perf_event_attr__bindgen_ty_2, pub bp_type: __u32, pub __bindgen_anon_3: perf_event_attr__bindgen_ty_3, pub __bindgen_anon_4: perf_event_attr__bindgen_ty_4, pub branch_sample_type: __u64, pub sample_regs_user: __u64, pub sample_stack_user: __u32, pub clockid: __s32, pub sample_regs_intr: __u64, pub aux_watermark: __u32, pub sample_max_stack: __u16, pub __reserved_2: __u16, pub aux_sample_size: __u32, pub __reserved_3: __u32, pub sig_data: __u64, pub config3: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_1 { pub sample_period: __u64, pub sample_freq: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_2 { pub wakeup_events: __u32, pub wakeup_watermark: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_3 { pub bp_addr: __u64, pub kprobe_func: __u64, pub uprobe_path: __u64, pub config1: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_4 { pub bp_len: __u64, pub kprobe_addr: __u64, pub probe_offset: __u64, pub config2: __u64, } impl perf_event_attr { #[inline] pub fn disabled(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_disabled(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn inherit(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 
1u8) as u64) } } #[inline] pub fn set_inherit(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn pinned(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_pinned(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn exclusive(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_exclusive(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn exclude_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_exclude_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn exclude_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_exclude_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn exclude_hv(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 1u8) as u64) } } #[inline] pub fn set_exclude_hv(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 1u8, val as u64) } } #[inline] pub fn exclude_idle(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(7usize, 1u8) as u64) } } #[inline] pub fn set_exclude_idle(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(7usize, 1u8, val as u64) } } #[inline] pub fn mmap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(8usize, 1u8) as u64) } 
} #[inline] pub fn set_mmap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(8usize, 1u8, val as u64) } } #[inline] pub fn comm(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(9usize, 1u8) as u64) } } #[inline] pub fn set_comm(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(9usize, 1u8, val as u64) } } #[inline] pub fn freq(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(10usize, 1u8) as u64) } } #[inline] pub fn set_freq(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(10usize, 1u8, val as u64) } } #[inline] pub fn inherit_stat(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(11usize, 1u8) as u64) } } #[inline] pub fn set_inherit_stat(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(11usize, 1u8, val as u64) } } #[inline] pub fn enable_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(12usize, 1u8) as u64) } } #[inline] pub fn set_enable_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(12usize, 1u8, val as u64) } } #[inline] pub fn task(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(13usize, 1u8) as u64) } } #[inline] pub fn set_task(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(13usize, 1u8, val as u64) } } #[inline] pub fn watermark(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(14usize, 1u8) as u64) } } #[inline] pub fn set_watermark(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(14usize, 1u8, val as u64) } } #[inline] pub fn precise_ip(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(15usize, 2u8) as u64) } } #[inline] pub fn 
set_precise_ip(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(15usize, 2u8, val as u64) } } #[inline] pub fn mmap_data(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(17usize, 1u8) as u64) } } #[inline] pub fn set_mmap_data(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(17usize, 1u8, val as u64) } } #[inline] pub fn sample_id_all(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(18usize, 1u8) as u64) } } #[inline] pub fn set_sample_id_all(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(18usize, 1u8, val as u64) } } #[inline] pub fn exclude_host(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(19usize, 1u8) as u64) } } #[inline] pub fn set_exclude_host(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(19usize, 1u8, val as u64) } } #[inline] pub fn exclude_guest(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(20usize, 1u8) as u64) } } #[inline] pub fn set_exclude_guest(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(20usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(21usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(21usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(22usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(22usize, 1u8, val as u64) } } #[inline] pub fn mmap2(&self) -> __u64 { unsafe { 
::core::mem::transmute(self._bitfield_1.get(23usize, 1u8) as u64) } } #[inline] pub fn set_mmap2(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(23usize, 1u8, val as u64) } } #[inline] pub fn comm_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(24usize, 1u8) as u64) } } #[inline] pub fn set_comm_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(24usize, 1u8, val as u64) } } #[inline] pub fn use_clockid(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(25usize, 1u8) as u64) } } #[inline] pub fn set_use_clockid(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(25usize, 1u8, val as u64) } } #[inline] pub fn context_switch(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(26usize, 1u8) as u64) } } #[inline] pub fn set_context_switch(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(26usize, 1u8, val as u64) } } #[inline] pub fn write_backward(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(27usize, 1u8) as u64) } } #[inline] pub fn set_write_backward(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(27usize, 1u8, val as u64) } } #[inline] pub fn namespaces(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(28usize, 1u8) as u64) } } #[inline] pub fn set_namespaces(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(28usize, 1u8, val as u64) } } #[inline] pub fn ksymbol(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(29usize, 1u8) as u64) } } #[inline] pub fn set_ksymbol(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(29usize, 1u8, val as u64) } } #[inline] pub fn bpf_event(&self) -> __u64 { 
unsafe { ::core::mem::transmute(self._bitfield_1.get(30usize, 1u8) as u64) } } #[inline] pub fn set_bpf_event(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(30usize, 1u8, val as u64) } } #[inline] pub fn aux_output(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(31usize, 1u8) as u64) } } #[inline] pub fn set_aux_output(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(31usize, 1u8, val as u64) } } #[inline] pub fn cgroup(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(32usize, 1u8) as u64) } } #[inline] pub fn set_cgroup(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(32usize, 1u8, val as u64) } } #[inline] pub fn text_poke(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(33usize, 1u8) as u64) } } #[inline] pub fn set_text_poke(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(33usize, 1u8, val as u64) } } #[inline] pub fn build_id(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(34usize, 1u8) as u64) } } #[inline] pub fn set_build_id(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(34usize, 1u8, val as u64) } } #[inline] pub fn inherit_thread(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(35usize, 1u8) as u64) } } #[inline] pub fn set_inherit_thread(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(35usize, 1u8, val as u64) } } #[inline] pub fn remove_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(36usize, 1u8) as u64) } } #[inline] pub fn set_remove_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(36usize, 1u8, val as u64) } } #[inline] pub fn sigtrap(&self) -> __u64 { 
unsafe { ::core::mem::transmute(self._bitfield_1.get(37usize, 1u8) as u64) } } #[inline] pub fn set_sigtrap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(37usize, 1u8, val as u64) } } #[inline] pub fn __reserved_1(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(38usize, 26u8) as u64) } } #[inline] pub fn set___reserved_1(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(38usize, 26u8, val as u64) } } #[inline] pub fn new_bitfield_1( disabled: __u64, inherit: __u64, pinned: __u64, exclusive: __u64, exclude_user: __u64, exclude_kernel: __u64, exclude_hv: __u64, exclude_idle: __u64, mmap: __u64, comm: __u64, freq: __u64, inherit_stat: __u64, enable_on_exec: __u64, task: __u64, watermark: __u64, precise_ip: __u64, mmap_data: __u64, sample_id_all: __u64, exclude_host: __u64, exclude_guest: __u64, exclude_callchain_kernel: __u64, exclude_callchain_user: __u64, mmap2: __u64, comm_exec: __u64, use_clockid: __u64, context_switch: __u64, write_backward: __u64, namespaces: __u64, ksymbol: __u64, bpf_event: __u64, aux_output: __u64, cgroup: __u64, text_poke: __u64, build_id: __u64, inherit_thread: __u64, remove_on_exec: __u64, sigtrap: __u64, __reserved_1: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let disabled: u64 = unsafe { ::core::mem::transmute(disabled) }; disabled as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let inherit: u64 = unsafe { ::core::mem::transmute(inherit) }; inherit as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let pinned: u64 = unsafe { ::core::mem::transmute(pinned) }; pinned as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let exclusive: u64 = unsafe { ::core::mem::transmute(exclusive) }; exclusive as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let exclude_user: u64 
= unsafe { ::core::mem::transmute(exclude_user) }; exclude_user as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let exclude_kernel: u64 = unsafe { ::core::mem::transmute(exclude_kernel) }; exclude_kernel as u64 }); __bindgen_bitfield_unit.set(6usize, 1u8, { let exclude_hv: u64 = unsafe { ::core::mem::transmute(exclude_hv) }; exclude_hv as u64 }); __bindgen_bitfield_unit.set(7usize, 1u8, { let exclude_idle: u64 = unsafe { ::core::mem::transmute(exclude_idle) }; exclude_idle as u64 }); __bindgen_bitfield_unit.set(8usize, 1u8, { let mmap: u64 = unsafe { ::core::mem::transmute(mmap) }; mmap as u64 }); __bindgen_bitfield_unit.set(9usize, 1u8, { let comm: u64 = unsafe { ::core::mem::transmute(comm) }; comm as u64 }); __bindgen_bitfield_unit.set(10usize, 1u8, { let freq: u64 = unsafe { ::core::mem::transmute(freq) }; freq as u64 }); __bindgen_bitfield_unit.set(11usize, 1u8, { let inherit_stat: u64 = unsafe { ::core::mem::transmute(inherit_stat) }; inherit_stat as u64 }); __bindgen_bitfield_unit.set(12usize, 1u8, { let enable_on_exec: u64 = unsafe { ::core::mem::transmute(enable_on_exec) }; enable_on_exec as u64 }); __bindgen_bitfield_unit.set(13usize, 1u8, { let task: u64 = unsafe { ::core::mem::transmute(task) }; task as u64 }); __bindgen_bitfield_unit.set(14usize, 1u8, { let watermark: u64 = unsafe { ::core::mem::transmute(watermark) }; watermark as u64 }); __bindgen_bitfield_unit.set(15usize, 2u8, { let precise_ip: u64 = unsafe { ::core::mem::transmute(precise_ip) }; precise_ip as u64 }); __bindgen_bitfield_unit.set(17usize, 1u8, { let mmap_data: u64 = unsafe { ::core::mem::transmute(mmap_data) }; mmap_data as u64 }); __bindgen_bitfield_unit.set(18usize, 1u8, { let sample_id_all: u64 = unsafe { ::core::mem::transmute(sample_id_all) }; sample_id_all as u64 }); __bindgen_bitfield_unit.set(19usize, 1u8, { let exclude_host: u64 = unsafe { ::core::mem::transmute(exclude_host) }; exclude_host as u64 }); __bindgen_bitfield_unit.set(20usize, 1u8, { let exclude_guest: u64 
= unsafe { ::core::mem::transmute(exclude_guest) }; exclude_guest as u64 }); __bindgen_bitfield_unit.set(21usize, 1u8, { let exclude_callchain_kernel: u64 = unsafe { ::core::mem::transmute(exclude_callchain_kernel) }; exclude_callchain_kernel as u64 }); __bindgen_bitfield_unit.set(22usize, 1u8, { let exclude_callchain_user: u64 = unsafe { ::core::mem::transmute(exclude_callchain_user) }; exclude_callchain_user as u64 }); __bindgen_bitfield_unit.set(23usize, 1u8, { let mmap2: u64 = unsafe { ::core::mem::transmute(mmap2) }; mmap2 as u64 }); __bindgen_bitfield_unit.set(24usize, 1u8, { let comm_exec: u64 = unsafe { ::core::mem::transmute(comm_exec) }; comm_exec as u64 }); __bindgen_bitfield_unit.set(25usize, 1u8, { let use_clockid: u64 = unsafe { ::core::mem::transmute(use_clockid) }; use_clockid as u64 }); __bindgen_bitfield_unit.set(26usize, 1u8, { let context_switch: u64 = unsafe { ::core::mem::transmute(context_switch) }; context_switch as u64 }); __bindgen_bitfield_unit.set(27usize, 1u8, { let write_backward: u64 = unsafe { ::core::mem::transmute(write_backward) }; write_backward as u64 }); __bindgen_bitfield_unit.set(28usize, 1u8, { let namespaces: u64 = unsafe { ::core::mem::transmute(namespaces) }; namespaces as u64 }); __bindgen_bitfield_unit.set(29usize, 1u8, { let ksymbol: u64 = unsafe { ::core::mem::transmute(ksymbol) }; ksymbol as u64 }); __bindgen_bitfield_unit.set(30usize, 1u8, { let bpf_event: u64 = unsafe { ::core::mem::transmute(bpf_event) }; bpf_event as u64 }); __bindgen_bitfield_unit.set(31usize, 1u8, { let aux_output: u64 = unsafe { ::core::mem::transmute(aux_output) }; aux_output as u64 }); __bindgen_bitfield_unit.set(32usize, 1u8, { let cgroup: u64 = unsafe { ::core::mem::transmute(cgroup) }; cgroup as u64 }); __bindgen_bitfield_unit.set(33usize, 1u8, { let text_poke: u64 = unsafe { ::core::mem::transmute(text_poke) }; text_poke as u64 }); __bindgen_bitfield_unit.set(34usize, 1u8, { let build_id: u64 = unsafe { ::core::mem::transmute(build_id) 
}; build_id as u64 }); __bindgen_bitfield_unit.set(35usize, 1u8, { let inherit_thread: u64 = unsafe { ::core::mem::transmute(inherit_thread) }; inherit_thread as u64 }); __bindgen_bitfield_unit.set(36usize, 1u8, { let remove_on_exec: u64 = unsafe { ::core::mem::transmute(remove_on_exec) }; remove_on_exec as u64 }); __bindgen_bitfield_unit.set(37usize, 1u8, { let sigtrap: u64 = unsafe { ::core::mem::transmute(sigtrap) }; sigtrap as u64 }); __bindgen_bitfield_unit.set(38usize, 26u8, { let __reserved_1: u64 = unsafe { ::core::mem::transmute(__reserved_1) }; __reserved_1 as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_mmap_page { pub version: __u32, pub compat_version: __u32, pub lock: __u32, pub index: __u32, pub offset: __s64, pub time_enabled: __u64, pub time_running: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1, pub pmc_width: __u16, pub time_shift: __u16, pub time_mult: __u32, pub time_offset: __u64, pub time_zero: __u64, pub size: __u32, pub __reserved_1: __u32, pub time_cycles: __u64, pub time_mask: __u64, pub __reserved: [__u8; 928usize], pub data_head: __u64, pub data_tail: __u64, pub data_offset: __u64, pub data_size: __u64, pub aux_head: __u64, pub aux_tail: __u64, pub aux_offset: __u64, pub aux_size: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_mmap_page__bindgen_ty_1 { pub capabilities: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { pub _bitfield_align_1: [u64; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, } impl perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { #[inline] pub fn cap_bit0(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val 
as u64) } } #[inline] pub fn cap_bit0_is_deprecated(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0_is_deprecated(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn cap_user_rdpmc(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_rdpmc(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_zero(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_zero(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_short(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_short(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn cap_____res(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 58u8) as u64) } } #[inline] pub fn set_cap_____res(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 58u8, val as u64) } } #[inline] pub fn new_bitfield_1( cap_bit0: __u64, cap_bit0_is_deprecated: __u64, cap_user_rdpmc: __u64, cap_user_time: __u64, cap_user_time_zero: __u64, cap_user_time_short: __u64, cap_____res: __u64, ) -> 
__BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let cap_bit0: u64 = unsafe { ::core::mem::transmute(cap_bit0) }; cap_bit0 as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let cap_bit0_is_deprecated: u64 = unsafe { ::core::mem::transmute(cap_bit0_is_deprecated) }; cap_bit0_is_deprecated as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let cap_user_rdpmc: u64 = unsafe { ::core::mem::transmute(cap_user_rdpmc) }; cap_user_rdpmc as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let cap_user_time: u64 = unsafe { ::core::mem::transmute(cap_user_time) }; cap_user_time as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let cap_user_time_zero: u64 = unsafe { ::core::mem::transmute(cap_user_time_zero) }; cap_user_time_zero as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let cap_user_time_short: u64 = unsafe { ::core::mem::transmute(cap_user_time_short) }; cap_user_time_short as u64 }); __bindgen_bitfield_unit.set(6usize, 58u8, { let cap_____res: u64 = unsafe { ::core::mem::transmute(cap_____res) }; cap_____res as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_header { pub type_: __u32, pub misc: __u16, pub size: __u16, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_type { PERF_RECORD_MMAP = 1, PERF_RECORD_LOST = 2, PERF_RECORD_COMM = 3, PERF_RECORD_EXIT = 4, PERF_RECORD_THROTTLE = 5, PERF_RECORD_UNTHROTTLE = 6, PERF_RECORD_FORK = 7, PERF_RECORD_READ = 8, PERF_RECORD_SAMPLE = 9, PERF_RECORD_MMAP2 = 10, PERF_RECORD_AUX = 11, PERF_RECORD_ITRACE_START = 12, PERF_RECORD_LOST_SAMPLES = 13, PERF_RECORD_SWITCH = 14, PERF_RECORD_SWITCH_CPU_WIDE = 15, PERF_RECORD_NAMESPACES = 16, PERF_RECORD_KSYMBOL = 17, PERF_RECORD_BPF_EVENT = 18, PERF_RECORD_CGROUP = 19, PERF_RECORD_TEXT_POKE = 20, PERF_RECORD_AUX_OUTPUT_HW_ID = 21, PERF_RECORD_MAX = 22, } pub const 
TCA_BPF_UNSPEC: _bindgen_ty_154 = 0; pub const TCA_BPF_ACT: _bindgen_ty_154 = 1; pub const TCA_BPF_POLICE: _bindgen_ty_154 = 2; pub const TCA_BPF_CLASSID: _bindgen_ty_154 = 3; pub const TCA_BPF_OPS_LEN: _bindgen_ty_154 = 4; pub const TCA_BPF_OPS: _bindgen_ty_154 = 5; pub const TCA_BPF_FD: _bindgen_ty_154 = 6; pub const TCA_BPF_NAME: _bindgen_ty_154 = 7; pub const TCA_BPF_FLAGS: _bindgen_ty_154 = 8; pub const TCA_BPF_FLAGS_GEN: _bindgen_ty_154 = 9; pub const TCA_BPF_TAG: _bindgen_ty_154 = 10; pub const TCA_BPF_ID: _bindgen_ty_154 = 11; pub const __TCA_BPF_MAX: _bindgen_ty_154 = 12; pub type _bindgen_ty_154 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct ifinfomsg { pub ifi_family: ::core::ffi::c_uchar, pub __ifi_pad: ::core::ffi::c_uchar, pub ifi_type: ::core::ffi::c_ushort, pub ifi_index: ::core::ffi::c_int, pub ifi_flags: ::core::ffi::c_uint, pub ifi_change: ::core::ffi::c_uint, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct tcmsg { pub tcm_family: ::core::ffi::c_uchar, pub tcm__pad1: ::core::ffi::c_uchar, pub tcm__pad2: ::core::ffi::c_ushort, pub tcm_ifindex: ::core::ffi::c_int, pub tcm_handle: __u32, pub tcm_parent: __u32, pub tcm_info: __u32, } pub const TCA_UNSPEC: _bindgen_ty_172 = 0; pub const TCA_KIND: _bindgen_ty_172 = 1; pub const TCA_OPTIONS: _bindgen_ty_172 = 2; pub const TCA_STATS: _bindgen_ty_172 = 3; pub const TCA_XSTATS: _bindgen_ty_172 = 4; pub const TCA_RATE: _bindgen_ty_172 = 5; pub const TCA_FCNT: _bindgen_ty_172 = 6; pub const TCA_STATS2: _bindgen_ty_172 = 7; pub const TCA_STAB: _bindgen_ty_172 = 8; pub const TCA_PAD: _bindgen_ty_172 = 9; pub const TCA_DUMP_INVISIBLE: _bindgen_ty_172 = 10; pub const TCA_CHAIN: _bindgen_ty_172 = 11; pub const TCA_HW_OFFLOAD: _bindgen_ty_172 = 12; pub const TCA_INGRESS_BLOCK: _bindgen_ty_172 = 13; pub const TCA_EGRESS_BLOCK: _bindgen_ty_172 = 14; pub const __TCA_MAX: _bindgen_ty_172 = 15; pub type _bindgen_ty_172 = ::core::ffi::c_uint; pub const AYA_PERF_EVENT_IOC_ENABLE: 
::core::ffi::c_int = 9216; pub const AYA_PERF_EVENT_IOC_DISABLE: ::core::ffi::c_int = 9217; pub const AYA_PERF_EVENT_IOC_SET_BPF: ::core::ffi::c_int = 1074013192;
aya-obj-0.2.1/src/generated/linux_bindings_powerpc64.rs
/* automatically generated by rust-bindgen 0.70.1 */ #[repr(C)] #[derive(Copy, Clone, Debug, Default, Eq, Hash, Ord, PartialEq, PartialOrd)] pub struct __BindgenBitfieldUnit<Storage> { storage: Storage, } impl<Storage> __BindgenBitfieldUnit<Storage> { #[inline] pub const fn new(storage: Storage) -> Self { Self { storage } } } impl<Storage> __BindgenBitfieldUnit<Storage> where Storage: AsRef<[u8]> + AsMut<[u8]>, { #[inline] pub fn get_bit(&self, index: usize) -> bool { debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = self.storage.as_ref()[byte_index]; let bit_index = if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; byte & mask == mask } #[inline] pub fn set_bit(&mut self, index: usize, val: bool) { debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = &mut self.storage.as_mut()[byte_index]; let bit_index = if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; if val { *byte |= mask; } else { *byte &= !mask; } } #[inline] pub fn get(&self, bit_offset: usize, bit_width: u8) -> u64 { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len()); let mut val = 0; for i in 0..(bit_width as usize) { if self.get_bit(i + bit_offset) { let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; val |= 1 << index; } } val } #[inline] pub fn set(&mut self, bit_offset: usize, bit_width: u8, val: u64) { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + 
(bit_width as usize)) / 8 <= self.storage.as_ref().len()); for i in 0..(bit_width as usize) { let mask = 1 << i; let val_bit_is_set = val & mask == mask; let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; self.set_bit(index + bit_offset, val_bit_is_set); } } } #[repr(C)] #[derive(Default)] pub struct __IncompleteArrayField<T>(::core::marker::PhantomData<T>, [T; 0]); impl<T> __IncompleteArrayField<T> { #[inline] pub const fn new() -> Self { __IncompleteArrayField(::core::marker::PhantomData, []) } #[inline] pub fn as_ptr(&self) -> *const T { self as *const _ as *const T } #[inline] pub fn as_mut_ptr(&mut self) -> *mut T { self as *mut _ as *mut T } #[inline] pub unsafe fn as_slice(&self, len: usize) -> &[T] { ::core::slice::from_raw_parts(self.as_ptr(), len) } #[inline] pub unsafe fn as_mut_slice(&mut self, len: usize) -> &mut [T] { ::core::slice::from_raw_parts_mut(self.as_mut_ptr(), len) } } impl<T> ::core::fmt::Debug for __IncompleteArrayField<T> { fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result { fmt.write_str("__IncompleteArrayField") } } pub const SO_ATTACH_BPF: u32 = 50; pub const SO_DETACH_BPF: u32 = 27; pub const BPF_LD: u32 = 0; pub const BPF_LDX: u32 = 1; pub const BPF_ST: u32 = 2; pub const BPF_STX: u32 = 3; pub const BPF_ALU: u32 = 4; pub const BPF_JMP: u32 = 5; pub const BPF_W: u32 = 0; pub const BPF_H: u32 = 8; pub const BPF_B: u32 = 16; pub const BPF_K: u32 = 0; pub const BPF_ALU64: u32 = 7; pub const BPF_DW: u32 = 24; pub const BPF_CALL: u32 = 128; pub const BPF_F_ALLOW_OVERRIDE: u32 = 1; pub const BPF_F_ALLOW_MULTI: u32 = 2; pub const BPF_F_REPLACE: u32 = 4; pub const BPF_F_BEFORE: u32 = 8; pub const BPF_F_AFTER: u32 = 16; pub const BPF_F_ID: u32 = 32; pub const BPF_F_STRICT_ALIGNMENT: u32 = 1; pub const BPF_F_ANY_ALIGNMENT: u32 = 2; pub const BPF_F_TEST_RND_HI32: u32 = 4; pub const BPF_F_TEST_STATE_FREQ: u32 = 8; pub const BPF_F_SLEEPABLE: u32 = 16; pub const BPF_F_XDP_HAS_FRAGS: u32 = 32; pub const 
BPF_F_XDP_DEV_BOUND_ONLY: u32 = 64; pub const BPF_F_TEST_REG_INVARIANTS: u32 = 128; pub const BPF_F_NETFILTER_IP_DEFRAG: u32 = 1; pub const BPF_PSEUDO_MAP_FD: u32 = 1; pub const BPF_PSEUDO_MAP_IDX: u32 = 5; pub const BPF_PSEUDO_MAP_VALUE: u32 = 2; pub const BPF_PSEUDO_MAP_IDX_VALUE: u32 = 6; pub const BPF_PSEUDO_BTF_ID: u32 = 3; pub const BPF_PSEUDO_FUNC: u32 = 4; pub const BPF_PSEUDO_CALL: u32 = 1; pub const BPF_PSEUDO_KFUNC_CALL: u32 = 2; pub const BPF_F_QUERY_EFFECTIVE: u32 = 1; pub const BPF_F_TEST_RUN_ON_CPU: u32 = 1; pub const BPF_F_TEST_XDP_LIVE_FRAMES: u32 = 2; pub const BTF_INT_SIGNED: u32 = 1; pub const BTF_INT_CHAR: u32 = 2; pub const BTF_INT_BOOL: u32 = 4; pub const NLMSG_ALIGNTO: u32 = 4; pub const XDP_FLAGS_UPDATE_IF_NOEXIST: u32 = 1; pub const XDP_FLAGS_SKB_MODE: u32 = 2; pub const XDP_FLAGS_DRV_MODE: u32 = 4; pub const XDP_FLAGS_HW_MODE: u32 = 8; pub const XDP_FLAGS_REPLACE: u32 = 16; pub const XDP_FLAGS_MODES: u32 = 14; pub const XDP_FLAGS_MASK: u32 = 31; pub const PERF_MAX_STACK_DEPTH: u32 = 127; pub const PERF_MAX_CONTEXTS_PER_STACK: u32 = 8; pub const PERF_FLAG_FD_NO_GROUP: u32 = 1; pub const PERF_FLAG_FD_OUTPUT: u32 = 2; pub const PERF_FLAG_PID_CGROUP: u32 = 4; pub const PERF_FLAG_FD_CLOEXEC: u32 = 8; pub const TC_H_MAJ_MASK: u32 = 4294901760; pub const TC_H_MIN_MASK: u32 = 65535; pub const TC_H_UNSPEC: u32 = 0; pub const TC_H_ROOT: u32 = 4294967295; pub const TC_H_INGRESS: u32 = 4294967281; pub const TC_H_CLSACT: u32 = 4294967281; pub const TC_H_MIN_PRIORITY: u32 = 65504; pub const TC_H_MIN_INGRESS: u32 = 65522; pub const TC_H_MIN_EGRESS: u32 = 65523; pub const TCA_BPF_FLAG_ACT_DIRECT: u32 = 1; pub type __u8 = ::core::ffi::c_uchar; pub type __s16 = ::core::ffi::c_short; pub type __u16 = ::core::ffi::c_ushort; pub type __s32 = ::core::ffi::c_int; pub type __u32 = ::core::ffi::c_uint; pub type __s64 = ::core::ffi::c_long; pub type __u64 = ::core::ffi::c_ulong; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_insn { pub code: __u8, pub 
_bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 1usize]>, pub off: __s16, pub imm: __s32, } impl bpf_insn { #[inline] pub fn dst_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 4u8) as u8) } } #[inline] pub fn set_dst_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 4u8, val as u64) } } #[inline] pub fn src_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 4u8) as u8) } } #[inline] pub fn set_src_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 4u8, val as u64) } } #[inline] pub fn new_bitfield_1(dst_reg: __u8, src_reg: __u8) -> __BindgenBitfieldUnit<[u8; 1usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 1usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 4u8, { let dst_reg: u8 = unsafe { ::core::mem::transmute(dst_reg) }; dst_reg as u64 }); __bindgen_bitfield_unit.set(4usize, 4u8, { let src_reg: u8 = unsafe { ::core::mem::transmute(src_reg) }; src_reg as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug)] pub struct bpf_lpm_trie_key { pub prefixlen: __u32, pub data: __IncompleteArrayField<__u8>, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cgroup_iter_order { BPF_CGROUP_ITER_ORDER_UNSPEC = 0, BPF_CGROUP_ITER_SELF_ONLY = 1, BPF_CGROUP_ITER_DESCENDANTS_PRE = 2, BPF_CGROUP_ITER_DESCENDANTS_POST = 3, BPF_CGROUP_ITER_ANCESTORS_UP = 4, } impl bpf_cmd { pub const BPF_PROG_RUN: bpf_cmd = bpf_cmd::BPF_PROG_TEST_RUN; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cmd { BPF_MAP_CREATE = 0, BPF_MAP_LOOKUP_ELEM = 1, BPF_MAP_UPDATE_ELEM = 2, BPF_MAP_DELETE_ELEM = 3, BPF_MAP_GET_NEXT_KEY = 4, BPF_PROG_LOAD = 5, BPF_OBJ_PIN = 6, BPF_OBJ_GET = 7, BPF_PROG_ATTACH = 8, BPF_PROG_DETACH = 9, BPF_PROG_TEST_RUN = 10, BPF_PROG_GET_NEXT_ID = 11, BPF_MAP_GET_NEXT_ID = 12, 
BPF_PROG_GET_FD_BY_ID = 13, BPF_MAP_GET_FD_BY_ID = 14, BPF_OBJ_GET_INFO_BY_FD = 15, BPF_PROG_QUERY = 16, BPF_RAW_TRACEPOINT_OPEN = 17, BPF_BTF_LOAD = 18, BPF_BTF_GET_FD_BY_ID = 19, BPF_TASK_FD_QUERY = 20, BPF_MAP_LOOKUP_AND_DELETE_ELEM = 21, BPF_MAP_FREEZE = 22, BPF_BTF_GET_NEXT_ID = 23, BPF_MAP_LOOKUP_BATCH = 24, BPF_MAP_LOOKUP_AND_DELETE_BATCH = 25, BPF_MAP_UPDATE_BATCH = 26, BPF_MAP_DELETE_BATCH = 27, BPF_LINK_CREATE = 28, BPF_LINK_UPDATE = 29, BPF_LINK_GET_FD_BY_ID = 30, BPF_LINK_GET_NEXT_ID = 31, BPF_ENABLE_STATS = 32, BPF_ITER_CREATE = 33, BPF_LINK_DETACH = 34, BPF_PROG_BIND_MAP = 35, BPF_TOKEN_CREATE = 36, __MAX_BPF_CMD = 37, } impl bpf_map_type { pub const BPF_MAP_TYPE_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED; } impl bpf_map_type { pub const BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_map_type { BPF_MAP_TYPE_UNSPEC = 0, BPF_MAP_TYPE_HASH = 1, BPF_MAP_TYPE_ARRAY = 2, BPF_MAP_TYPE_PROG_ARRAY = 3, BPF_MAP_TYPE_PERF_EVENT_ARRAY = 4, BPF_MAP_TYPE_PERCPU_HASH = 5, BPF_MAP_TYPE_PERCPU_ARRAY = 6, BPF_MAP_TYPE_STACK_TRACE = 7, BPF_MAP_TYPE_CGROUP_ARRAY = 8, BPF_MAP_TYPE_LRU_HASH = 9, BPF_MAP_TYPE_LRU_PERCPU_HASH = 10, BPF_MAP_TYPE_LPM_TRIE = 11, BPF_MAP_TYPE_ARRAY_OF_MAPS = 12, BPF_MAP_TYPE_HASH_OF_MAPS = 13, BPF_MAP_TYPE_DEVMAP = 14, BPF_MAP_TYPE_SOCKMAP = 15, BPF_MAP_TYPE_CPUMAP = 16, BPF_MAP_TYPE_XSKMAP = 17, BPF_MAP_TYPE_SOCKHASH = 18, BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED = 19, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED = 21, BPF_MAP_TYPE_QUEUE = 22, BPF_MAP_TYPE_STACK = 23, BPF_MAP_TYPE_SK_STORAGE = 24, BPF_MAP_TYPE_DEVMAP_HASH = 25, BPF_MAP_TYPE_STRUCT_OPS = 26, BPF_MAP_TYPE_RINGBUF = 27, BPF_MAP_TYPE_INODE_STORAGE = 28, BPF_MAP_TYPE_TASK_STORAGE = 29, BPF_MAP_TYPE_BLOOM_FILTER = 30, BPF_MAP_TYPE_USER_RINGBUF = 31, 
BPF_MAP_TYPE_CGRP_STORAGE = 32, BPF_MAP_TYPE_ARENA = 33, __MAX_BPF_MAP_TYPE = 34, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_prog_type { BPF_PROG_TYPE_UNSPEC = 0, BPF_PROG_TYPE_SOCKET_FILTER = 1, BPF_PROG_TYPE_KPROBE = 2, BPF_PROG_TYPE_SCHED_CLS = 3, BPF_PROG_TYPE_SCHED_ACT = 4, BPF_PROG_TYPE_TRACEPOINT = 5, BPF_PROG_TYPE_XDP = 6, BPF_PROG_TYPE_PERF_EVENT = 7, BPF_PROG_TYPE_CGROUP_SKB = 8, BPF_PROG_TYPE_CGROUP_SOCK = 9, BPF_PROG_TYPE_LWT_IN = 10, BPF_PROG_TYPE_LWT_OUT = 11, BPF_PROG_TYPE_LWT_XMIT = 12, BPF_PROG_TYPE_SOCK_OPS = 13, BPF_PROG_TYPE_SK_SKB = 14, BPF_PROG_TYPE_CGROUP_DEVICE = 15, BPF_PROG_TYPE_SK_MSG = 16, BPF_PROG_TYPE_RAW_TRACEPOINT = 17, BPF_PROG_TYPE_CGROUP_SOCK_ADDR = 18, BPF_PROG_TYPE_LWT_SEG6LOCAL = 19, BPF_PROG_TYPE_LIRC_MODE2 = 20, BPF_PROG_TYPE_SK_REUSEPORT = 21, BPF_PROG_TYPE_FLOW_DISSECTOR = 22, BPF_PROG_TYPE_CGROUP_SYSCTL = 23, BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE = 24, BPF_PROG_TYPE_CGROUP_SOCKOPT = 25, BPF_PROG_TYPE_TRACING = 26, BPF_PROG_TYPE_STRUCT_OPS = 27, BPF_PROG_TYPE_EXT = 28, BPF_PROG_TYPE_LSM = 29, BPF_PROG_TYPE_SK_LOOKUP = 30, BPF_PROG_TYPE_SYSCALL = 31, BPF_PROG_TYPE_NETFILTER = 32, __MAX_BPF_PROG_TYPE = 33, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_attach_type { BPF_CGROUP_INET_INGRESS = 0, BPF_CGROUP_INET_EGRESS = 1, BPF_CGROUP_INET_SOCK_CREATE = 2, BPF_CGROUP_SOCK_OPS = 3, BPF_SK_SKB_STREAM_PARSER = 4, BPF_SK_SKB_STREAM_VERDICT = 5, BPF_CGROUP_DEVICE = 6, BPF_SK_MSG_VERDICT = 7, BPF_CGROUP_INET4_BIND = 8, BPF_CGROUP_INET6_BIND = 9, BPF_CGROUP_INET4_CONNECT = 10, BPF_CGROUP_INET6_CONNECT = 11, BPF_CGROUP_INET4_POST_BIND = 12, BPF_CGROUP_INET6_POST_BIND = 13, BPF_CGROUP_UDP4_SENDMSG = 14, BPF_CGROUP_UDP6_SENDMSG = 15, BPF_LIRC_MODE2 = 16, BPF_FLOW_DISSECTOR = 17, BPF_CGROUP_SYSCTL = 18, BPF_CGROUP_UDP4_RECVMSG = 19, BPF_CGROUP_UDP6_RECVMSG = 20, BPF_CGROUP_GETSOCKOPT = 21, BPF_CGROUP_SETSOCKOPT = 22, BPF_TRACE_RAW_TP = 23, BPF_TRACE_FENTRY = 24, 
BPF_TRACE_FEXIT = 25, BPF_MODIFY_RETURN = 26, BPF_LSM_MAC = 27, BPF_TRACE_ITER = 28, BPF_CGROUP_INET4_GETPEERNAME = 29, BPF_CGROUP_INET6_GETPEERNAME = 30, BPF_CGROUP_INET4_GETSOCKNAME = 31, BPF_CGROUP_INET6_GETSOCKNAME = 32, BPF_XDP_DEVMAP = 33, BPF_CGROUP_INET_SOCK_RELEASE = 34, BPF_XDP_CPUMAP = 35, BPF_SK_LOOKUP = 36, BPF_XDP = 37, BPF_SK_SKB_VERDICT = 38, BPF_SK_REUSEPORT_SELECT = 39, BPF_SK_REUSEPORT_SELECT_OR_MIGRATE = 40, BPF_PERF_EVENT = 41, BPF_TRACE_KPROBE_MULTI = 42, BPF_LSM_CGROUP = 43, BPF_STRUCT_OPS = 44, BPF_NETFILTER = 45, BPF_TCX_INGRESS = 46, BPF_TCX_EGRESS = 47, BPF_TRACE_UPROBE_MULTI = 48, BPF_CGROUP_UNIX_CONNECT = 49, BPF_CGROUP_UNIX_SENDMSG = 50, BPF_CGROUP_UNIX_RECVMSG = 51, BPF_CGROUP_UNIX_GETPEERNAME = 52, BPF_CGROUP_UNIX_GETSOCKNAME = 53, BPF_NETKIT_PRIMARY = 54, BPF_NETKIT_PEER = 55, __MAX_BPF_ATTACH_TYPE = 56, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_link_type { BPF_LINK_TYPE_UNSPEC = 0, BPF_LINK_TYPE_RAW_TRACEPOINT = 1, BPF_LINK_TYPE_TRACING = 2, BPF_LINK_TYPE_CGROUP = 3, BPF_LINK_TYPE_ITER = 4, BPF_LINK_TYPE_NETNS = 5, BPF_LINK_TYPE_XDP = 6, BPF_LINK_TYPE_PERF_EVENT = 7, BPF_LINK_TYPE_KPROBE_MULTI = 8, BPF_LINK_TYPE_STRUCT_OPS = 9, BPF_LINK_TYPE_NETFILTER = 10, BPF_LINK_TYPE_TCX = 11, BPF_LINK_TYPE_UPROBE_MULTI = 12, BPF_LINK_TYPE_NETKIT = 13, __MAX_BPF_LINK_TYPE = 14, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_perf_event_type { BPF_PERF_EVENT_UNSPEC = 0, BPF_PERF_EVENT_UPROBE = 1, BPF_PERF_EVENT_URETPROBE = 2, BPF_PERF_EVENT_KPROBE = 3, BPF_PERF_EVENT_KRETPROBE = 4, BPF_PERF_EVENT_TRACEPOINT = 5, BPF_PERF_EVENT_EVENT = 6, } pub const BPF_F_KPROBE_MULTI_RETURN: _bindgen_ty_2 = 1; pub type _bindgen_ty_2 = ::core::ffi::c_uint; pub const BPF_F_UPROBE_MULTI_RETURN: _bindgen_ty_3 = 1; pub type _bindgen_ty_3 = ::core::ffi::c_uint; pub const BPF_ANY: _bindgen_ty_4 = 0; pub const BPF_NOEXIST: _bindgen_ty_4 = 1; pub const BPF_EXIST: _bindgen_ty_4 = 2; pub const BPF_F_LOCK: 
_bindgen_ty_4 = 4; pub type _bindgen_ty_4 = ::core::ffi::c_uint; pub const BPF_F_NO_PREALLOC: _bindgen_ty_5 = 1; pub const BPF_F_NO_COMMON_LRU: _bindgen_ty_5 = 2; pub const BPF_F_NUMA_NODE: _bindgen_ty_5 = 4; pub const BPF_F_RDONLY: _bindgen_ty_5 = 8; pub const BPF_F_WRONLY: _bindgen_ty_5 = 16; pub const BPF_F_STACK_BUILD_ID: _bindgen_ty_5 = 32; pub const BPF_F_ZERO_SEED: _bindgen_ty_5 = 64; pub const BPF_F_RDONLY_PROG: _bindgen_ty_5 = 128; pub const BPF_F_WRONLY_PROG: _bindgen_ty_5 = 256; pub const BPF_F_CLONE: _bindgen_ty_5 = 512; pub const BPF_F_MMAPABLE: _bindgen_ty_5 = 1024; pub const BPF_F_PRESERVE_ELEMS: _bindgen_ty_5 = 2048; pub const BPF_F_INNER_MAP: _bindgen_ty_5 = 4096; pub const BPF_F_LINK: _bindgen_ty_5 = 8192; pub const BPF_F_PATH_FD: _bindgen_ty_5 = 16384; pub const BPF_F_VTYPE_BTF_OBJ_FD: _bindgen_ty_5 = 32768; pub const BPF_F_TOKEN_FD: _bindgen_ty_5 = 65536; pub const BPF_F_SEGV_ON_FAULT: _bindgen_ty_5 = 131072; pub const BPF_F_NO_USER_CONV: _bindgen_ty_5 = 262144; pub type _bindgen_ty_5 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_stats_type { BPF_STATS_RUN_TIME = 0, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr { pub __bindgen_anon_1: bpf_attr__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_2, pub batch: bpf_attr__bindgen_ty_3, pub __bindgen_anon_3: bpf_attr__bindgen_ty_4, pub __bindgen_anon_4: bpf_attr__bindgen_ty_5, pub __bindgen_anon_5: bpf_attr__bindgen_ty_6, pub test: bpf_attr__bindgen_ty_7, pub __bindgen_anon_6: bpf_attr__bindgen_ty_8, pub info: bpf_attr__bindgen_ty_9, pub query: bpf_attr__bindgen_ty_10, pub raw_tracepoint: bpf_attr__bindgen_ty_11, pub __bindgen_anon_7: bpf_attr__bindgen_ty_12, pub task_fd_query: bpf_attr__bindgen_ty_13, pub link_create: bpf_attr__bindgen_ty_14, pub link_update: bpf_attr__bindgen_ty_15, pub link_detach: bpf_attr__bindgen_ty_16, pub enable_stats: bpf_attr__bindgen_ty_17, pub iter_create: bpf_attr__bindgen_ty_18, pub prog_bind_map: 
bpf_attr__bindgen_ty_19, pub token_create: bpf_attr__bindgen_ty_20, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_1 { pub map_type: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub inner_map_fd: __u32, pub numa_node: __u32, pub map_name: [::core::ffi::c_char; 16usize], pub map_ifindex: __u32, pub btf_fd: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_value_type_id: __u32, pub map_extra: __u64, pub value_type_btf_obj_fd: __s32, pub map_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_2 { pub map_fd: __u32, pub key: __u64, pub __bindgen_anon_1: bpf_attr__bindgen_ty_2__bindgen_ty_1, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_2__bindgen_ty_1 { pub value: __u64, pub next_key: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_3 { pub in_batch: __u64, pub out_batch: __u64, pub keys: __u64, pub values: __u64, pub count: __u32, pub map_fd: __u32, pub elem_flags: __u64, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_4 { pub prog_type: __u32, pub insn_cnt: __u32, pub insns: __u64, pub license: __u64, pub log_level: __u32, pub log_size: __u32, pub log_buf: __u64, pub kern_version: __u32, pub prog_flags: __u32, pub prog_name: [::core::ffi::c_char; 16usize], pub prog_ifindex: __u32, pub expected_attach_type: __u32, pub prog_btf_fd: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub func_info_cnt: __u32, pub line_info_rec_size: __u32, pub line_info: __u64, pub line_info_cnt: __u32, pub attach_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_4__bindgen_ty_1, pub core_relo_cnt: __u32, pub fd_array: __u64, pub core_relos: __u64, pub core_relo_rec_size: __u32, pub log_true_size: __u32, pub prog_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_4__bindgen_ty_1 { pub 
attach_prog_fd: __u32, pub attach_btf_obj_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_5 { pub pathname: __u64, pub bpf_fd: __u32, pub file_flags: __u32, pub path_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_6__bindgen_ty_1, pub attach_bpf_fd: __u32, pub attach_type: __u32, pub attach_flags: __u32, pub replace_bpf_fd: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_6__bindgen_ty_2, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_2 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_7 { pub prog_fd: __u32, pub retval: __u32, pub data_size_in: __u32, pub data_size_out: __u32, pub data_in: __u64, pub data_out: __u64, pub repeat: __u32, pub duration: __u32, pub ctx_size_in: __u32, pub ctx_size_out: __u32, pub ctx_in: __u64, pub ctx_out: __u64, pub flags: __u32, pub cpu: __u32, pub batch_size: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_8__bindgen_ty_1, pub next_id: __u32, pub open_flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_8__bindgen_ty_1 { pub start_id: __u32, pub prog_id: __u32, pub map_id: __u32, pub btf_id: __u32, pub link_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_9 { pub bpf_fd: __u32, pub info_len: __u32, pub info: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_10 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_10__bindgen_ty_1, pub attach_type: __u32, pub query_flags: __u32, pub attach_flags: __u32, pub prog_ids: __u64, pub __bindgen_anon_2: bpf_attr__bindgen_ty_10__bindgen_ty_2, pub 
_bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub prog_attach_flags: __u64, pub link_ids: __u64, pub link_attach_flags: __u64, pub revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_2 { pub prog_cnt: __u32, pub count: __u32, } impl bpf_attr__bindgen_ty_10 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_11 { pub name: __u64, pub prog_fd: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_attr__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_12 { pub btf: __u64, pub btf_log_buf: __u64, pub btf_size: __u32, pub btf_log_size: __u32, pub btf_log_level: __u32, pub btf_log_true_size: __u32, pub btf_flags: __u32, pub btf_token_fd: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_13 { pub pid: __u32, pub fd: __u32, pub flags: __u32, pub buf_len: __u32, pub buf: __u64, pub prog_id: __u32, pub fd_type: __u32, pub probe_offset: __u64, pub probe_addr: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_14__bindgen_ty_2, pub attach_type: __u32, pub flags: __u32, pub __bindgen_anon_3: bpf_attr__bindgen_ty_14__bindgen_ty_3, } #[repr(C)] #[derive(Copy, Clone)] pub union 
bpf_attr__bindgen_ty_14__bindgen_ty_1 { pub prog_fd: __u32, pub map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_2 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3 { pub target_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1, pub perf_event: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2, pub kprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3, pub tracing: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4, pub netfilter: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5, pub tcx: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6, pub uprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7, pub netkit: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1 { pub iter_info: __u64, pub iter_info_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2 { pub bpf_cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3 { pub flags: __u32, pub cnt: __u32, pub syms: __u64, pub addrs: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4 { pub target_btf_id: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union 
bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub cnt: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_15 { pub link_fd: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_15__bindgen_ty_1, pub flags: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_15__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_1 { pub new_prog_fd: __u32, pub new_map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_2 { pub old_prog_fd: __u32, pub old_map_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_16 { pub link_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_17 { pub type_: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_18 { pub link_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_19 { pub prog_fd: __u32, pub map_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_20 { pub flags: __u32, pub bpffs_fd: __u32, } pub const BPF_F_RECOMPUTE_CSUM: _bindgen_ty_6 = 1; pub const BPF_F_INVALIDATE_HASH: _bindgen_ty_6 = 2; pub type _bindgen_ty_6 = ::core::ffi::c_uint; pub const BPF_F_HDR_FIELD_MASK: 
_bindgen_ty_7 = 15; pub type _bindgen_ty_7 = ::core::ffi::c_uint; pub const BPF_F_PSEUDO_HDR: _bindgen_ty_8 = 16; pub const BPF_F_MARK_MANGLED_0: _bindgen_ty_8 = 32; pub const BPF_F_MARK_ENFORCE: _bindgen_ty_8 = 64; pub type _bindgen_ty_8 = ::core::ffi::c_uint; pub const BPF_F_INGRESS: _bindgen_ty_9 = 1; pub type _bindgen_ty_9 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_IPV6: _bindgen_ty_10 = 1; pub type _bindgen_ty_10 = ::core::ffi::c_uint; pub const BPF_F_SKIP_FIELD_MASK: _bindgen_ty_11 = 255; pub const BPF_F_USER_STACK: _bindgen_ty_11 = 256; pub const BPF_F_FAST_STACK_CMP: _bindgen_ty_11 = 512; pub const BPF_F_REUSE_STACKID: _bindgen_ty_11 = 1024; pub const BPF_F_USER_BUILD_ID: _bindgen_ty_11 = 2048; pub type _bindgen_ty_11 = ::core::ffi::c_uint; pub const BPF_F_ZERO_CSUM_TX: _bindgen_ty_12 = 2; pub const BPF_F_DONT_FRAGMENT: _bindgen_ty_12 = 4; pub const BPF_F_SEQ_NUMBER: _bindgen_ty_12 = 8; pub const BPF_F_NO_TUNNEL_KEY: _bindgen_ty_12 = 16; pub type _bindgen_ty_12 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_FLAGS: _bindgen_ty_13 = 16; pub type _bindgen_ty_13 = ::core::ffi::c_uint; pub const BPF_F_INDEX_MASK: _bindgen_ty_14 = 4294967295; pub const BPF_F_CURRENT_CPU: _bindgen_ty_14 = 4294967295; pub const BPF_F_CTXLEN_MASK: _bindgen_ty_14 = 4503595332403200; pub type _bindgen_ty_14 = ::core::ffi::c_ulong; pub const BPF_F_CURRENT_NETNS: _bindgen_ty_15 = -1; pub type _bindgen_ty_15 = ::core::ffi::c_int; pub const BPF_F_ADJ_ROOM_FIXED_GSO: _bindgen_ty_17 = 1; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV4: _bindgen_ty_17 = 2; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV6: _bindgen_ty_17 = 4; pub const BPF_F_ADJ_ROOM_ENCAP_L4_GRE: _bindgen_ty_17 = 8; pub const BPF_F_ADJ_ROOM_ENCAP_L4_UDP: _bindgen_ty_17 = 16; pub const BPF_F_ADJ_ROOM_NO_CSUM_RESET: _bindgen_ty_17 = 32; pub const BPF_F_ADJ_ROOM_ENCAP_L2_ETH: _bindgen_ty_17 = 64; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV4: _bindgen_ty_17 = 128; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV6: _bindgen_ty_17 = 256; pub type _bindgen_ty_17 
= ::core::ffi::c_uint; pub const BPF_F_SYSCTL_BASE_NAME: _bindgen_ty_19 = 1; pub type _bindgen_ty_19 = ::core::ffi::c_uint; pub const BPF_F_GET_BRANCH_RECORDS_SIZE: _bindgen_ty_21 = 1; pub type _bindgen_ty_21 = ::core::ffi::c_uint; pub const BPF_RINGBUF_BUSY_BIT: _bindgen_ty_24 = 2147483648; pub const BPF_RINGBUF_DISCARD_BIT: _bindgen_ty_24 = 1073741824; pub const BPF_RINGBUF_HDR_SZ: _bindgen_ty_24 = 8; pub type _bindgen_ty_24 = ::core::ffi::c_uint; pub const BPF_F_BPRM_SECUREEXEC: _bindgen_ty_26 = 1; pub type _bindgen_ty_26 = ::core::ffi::c_uint; pub const BPF_F_BROADCAST: _bindgen_ty_27 = 8; pub const BPF_F_EXCLUDE_INGRESS: _bindgen_ty_27 = 16; pub type _bindgen_ty_27 = ::core::ffi::c_uint; #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_devmap_val { pub ifindex: __u32, pub bpf_prog: bpf_devmap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_devmap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_cpumap_val { pub qsize: __u32, pub bpf_prog: bpf_cpumap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_cpumap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_prog_info { pub type_: __u32, pub id: __u32, pub tag: [__u8; 8usize], pub jited_prog_len: __u32, pub xlated_prog_len: __u32, pub jited_prog_insns: __u64, pub xlated_prog_insns: __u64, pub load_time: __u64, pub created_by_uid: __u32, pub nr_map_ids: __u32, pub map_ids: __u64, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub netns_dev: __u64, pub netns_ino: __u64, pub nr_jited_ksyms: __u32, pub nr_jited_func_lens: __u32, pub jited_ksyms: __u64, pub jited_func_lens: __u64, pub btf_id: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub nr_func_info: __u32, pub nr_line_info: __u32, pub line_info: __u64, pub jited_line_info: 
__u64, pub nr_jited_line_info: __u32, pub line_info_rec_size: __u32, pub jited_line_info_rec_size: __u32, pub nr_prog_tags: __u32, pub prog_tags: __u64, pub run_time_ns: __u64, pub run_cnt: __u64, pub recursion_misses: __u64, pub verified_insns: __u32, pub attach_btf_obj_id: __u32, pub attach_btf_id: __u32, } impl bpf_prog_info { #[inline] pub fn gpl_compatible(&self) -> __u32 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u32) } } #[inline] pub fn set_gpl_compatible(&mut self, val: __u32) { unsafe { let val: u32 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn new_bitfield_1(gpl_compatible: __u32) -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let gpl_compatible: u32 = unsafe { ::core::mem::transmute(gpl_compatible) }; gpl_compatible as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_map_info { pub type_: __u32, pub id: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub btf_vmlinux_value_type_id: __u32, pub netns_dev: __u64, pub netns_ino: __u64, pub btf_id: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_id: __u32, pub map_extra: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_btf_info { pub btf: __u64, pub btf_size: __u32, pub id: __u32, pub name: __u64, pub name_len: __u32, pub kernel_btf: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info { pub type_: __u32, pub id: __u32, pub prog_id: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1 { pub raw_tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_1, pub tracing: bpf_link_info__bindgen_ty_1__bindgen_ty_2, pub 
cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_3, pub iter: bpf_link_info__bindgen_ty_1__bindgen_ty_4, pub netns: bpf_link_info__bindgen_ty_1__bindgen_ty_5, pub xdp: bpf_link_info__bindgen_ty_1__bindgen_ty_6, pub struct_ops: bpf_link_info__bindgen_ty_1__bindgen_ty_7, pub netfilter: bpf_link_info__bindgen_ty_1__bindgen_ty_8, pub kprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_9, pub uprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_10, pub perf_event: bpf_link_info__bindgen_ty_1__bindgen_ty_11, pub tcx: bpf_link_info__bindgen_ty_1__bindgen_ty_12, pub netkit: bpf_link_info__bindgen_ty_1__bindgen_ty_13, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_1 { pub tp_name: __u64, pub tp_name_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_2 { pub attach_type: __u32, pub target_obj_id: __u32, pub target_btf_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_3 { pub cgroup_id: __u64, pub attach_type: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4 { pub target_name: __u64, pub target_name_len: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1, pub __bindgen_anon_2: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1 { pub map: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1 { pub map_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2 { pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1, pub task: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2, } #[repr(C)] 
#[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1 { pub cgroup_id: __u64, pub order: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2 { pub tid: __u32, pub pid: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_5 { pub netns_ino: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_6 { pub ifindex: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_7 { pub map_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_8 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_9 { pub addrs: __u64, pub count: __u32, pub flags: __u32, pub missed: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_10 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub path_size: __u32, pub count: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11 { pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1 { pub uprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1, pub kprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2, pub tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3, pub event: 
bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1 { pub file_name: __u64, pub name_len: __u32, pub offset: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2 { pub func_name: __u64, pub name_len: __u32, pub offset: __u32, pub addr: __u64, pub missed: __u64, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { pub tp_name: __u64, pub name_len: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { pub config: __u64, pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_12 { pub ifindex: __u32, pub attach_type: __u32, } 
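The `__BindgenBitfieldUnit` accessors generated above (for example `bpf_prog_info::gpl_compatible` / `set_gpl_compatible`, or the `perf_event_attr` flag getters further down) all pack narrow fields into a fixed-size byte array addressed bit by bit. A minimal sketch of that get/set logic, using a hypothetical `BitfieldUnit` stand-in rather than the real `__BindgenBitfieldUnit` type (which is defined elsewhere in this file):

```rust
// Minimal stand-in for bindgen's bitfield storage: a fixed-size byte
// array addressed bit-by-bit, little-endian bit order within each byte.
struct BitfieldUnit<const N: usize> {
    storage: [u8; N],
}

impl<const N: usize> BitfieldUnit<N> {
    fn new() -> Self {
        Self { storage: [0u8; N] }
    }

    // Read `width` bits starting at `bit_offset`.
    fn get(&self, bit_offset: usize, width: u8) -> u64 {
        let mut val = 0u64;
        for i in 0..width as usize {
            let bit = bit_offset + i;
            if self.storage[bit / 8] & (1 << (bit % 8)) != 0 {
                val |= 1 << i;
            }
        }
        val
    }

    // Write the low `width` bits of `val` starting at `bit_offset`.
    fn set(&mut self, bit_offset: usize, width: u8, val: u64) {
        for i in 0..width as usize {
            let bit = bit_offset + i;
            let mask = 1u8 << (bit % 8);
            if val & (1 << i) != 0 {
                self.storage[bit / 8] |= mask;
            } else {
                self.storage[bit / 8] &= !mask;
            }
        }
    }
}

fn main() {
    let mut unit = BitfieldUnit::<4>::new();
    // Mirrors set_gpl_compatible: one bit at offset 0 of a 4-byte unit.
    unit.set(0, 1, 1);
    assert_eq!(unit.get(0, 1), 1);
    // A 2-bit field, like perf_event_attr::precise_ip at offset 15.
    unit.set(15, 2, 3);
    assert_eq!(unit.get(15, 2), 3);
}
```

This is why the generated `new_bitfield_1` constructors return a zeroed unit and the typed accessors translate a field name into a fixed `(bit_offset, width)` pair.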
#[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_13 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_func_info { pub insn_off: __u32, pub type_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_line_info { pub insn_off: __u32, pub file_name_off: __u32, pub line_off: __u32, pub line_col: __u32, } pub const BPF_F_TIMER_ABS: _bindgen_ty_41 = 1; pub const BPF_F_TIMER_CPU_PIN: _bindgen_ty_41 = 2; pub type _bindgen_ty_41 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_header { pub magic: __u16, pub version: __u8, pub flags: __u8, pub hdr_len: __u32, pub type_off: __u32, pub type_len: __u32, pub str_off: __u32, pub str_len: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct btf_type { pub name_off: __u32, pub info: __u32, pub __bindgen_anon_1: btf_type__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union btf_type__bindgen_ty_1 { pub size: __u32, pub type_: __u32, } pub const BTF_KIND_UNKN: _bindgen_ty_42 = 0; pub const BTF_KIND_INT: _bindgen_ty_42 = 1; pub const BTF_KIND_PTR: _bindgen_ty_42 = 2; pub const BTF_KIND_ARRAY: _bindgen_ty_42 = 3; pub const BTF_KIND_STRUCT: _bindgen_ty_42 = 4; pub const BTF_KIND_UNION: _bindgen_ty_42 = 5; pub const BTF_KIND_ENUM: _bindgen_ty_42 = 6; pub const BTF_KIND_FWD: _bindgen_ty_42 = 7; pub const BTF_KIND_TYPEDEF: _bindgen_ty_42 = 8; pub const BTF_KIND_VOLATILE: _bindgen_ty_42 = 9; pub const BTF_KIND_CONST: _bindgen_ty_42 = 10; pub const BTF_KIND_RESTRICT: _bindgen_ty_42 = 11; pub const BTF_KIND_FUNC: _bindgen_ty_42 = 12; pub const BTF_KIND_FUNC_PROTO: _bindgen_ty_42 = 13; pub const BTF_KIND_VAR: _bindgen_ty_42 = 14; pub const BTF_KIND_DATASEC: _bindgen_ty_42 = 15; pub const BTF_KIND_FLOAT: _bindgen_ty_42 = 16; pub const BTF_KIND_DECL_TAG: _bindgen_ty_42 = 17; pub const BTF_KIND_TYPE_TAG: _bindgen_ty_42 = 18; pub const BTF_KIND_ENUM64: _bindgen_ty_42 = 19; pub const NR_BTF_KINDS: 
_bindgen_ty_42 = 20; pub const BTF_KIND_MAX: _bindgen_ty_42 = 19; pub type _bindgen_ty_42 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_enum { pub name_off: __u32, pub val: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_array { pub type_: __u32, pub index_type: __u32, pub nelems: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_member { pub name_off: __u32, pub type_: __u32, pub offset: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_param { pub name_off: __u32, pub type_: __u32, } pub const BTF_VAR_STATIC: _bindgen_ty_43 = 0; pub const BTF_VAR_GLOBAL_ALLOCATED: _bindgen_ty_43 = 1; pub const BTF_VAR_GLOBAL_EXTERN: _bindgen_ty_43 = 2; pub type _bindgen_ty_43 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum btf_func_linkage { BTF_FUNC_STATIC = 0, BTF_FUNC_GLOBAL = 1, BTF_FUNC_EXTERN = 2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var { pub linkage: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var_secinfo { pub type_: __u32, pub offset: __u32, pub size: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_decl_tag { pub component_idx: __s32, } pub const IFLA_XDP_UNSPEC: _bindgen_ty_92 = 0; pub const IFLA_XDP_FD: _bindgen_ty_92 = 1; pub const IFLA_XDP_ATTACHED: _bindgen_ty_92 = 2; pub const IFLA_XDP_FLAGS: _bindgen_ty_92 = 3; pub const IFLA_XDP_PROG_ID: _bindgen_ty_92 = 4; pub const IFLA_XDP_DRV_PROG_ID: _bindgen_ty_92 = 5; pub const IFLA_XDP_SKB_PROG_ID: _bindgen_ty_92 = 6; pub const IFLA_XDP_HW_PROG_ID: _bindgen_ty_92 = 7; pub const IFLA_XDP_EXPECTED_FD: _bindgen_ty_92 = 8; pub const __IFLA_XDP_MAX: _bindgen_ty_92 = 9; pub type _bindgen_ty_92 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum nf_inet_hooks { NF_INET_PRE_ROUTING = 0, NF_INET_LOCAL_IN = 1, NF_INET_FORWARD = 2, NF_INET_LOCAL_OUT = 3, NF_INET_POST_ROUTING = 4, NF_INET_NUMHOOKS = 5, } 
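The `info` word of `btf_type` above packs several fields: bits 0..=15 hold `vlen` (the member, enumerator, or parameter count), bits 24..=28 hold one of the `BTF_KIND_*` constants, and bit 31 is the `kind_flag`. A sketch of the decoding, with helper names (`btf_info_kind` and friends) chosen here for illustration rather than taken from this crate:

```rust
// Decode the packed `info` field of a BTF type header.
// Bit layout (per the kernel's linux/btf.h):
//   bits  0..=15  vlen (number of members / enum values / params)
//   bits 24..=28  kind (one of the BTF_KIND_* constants)
//   bit  31       kind_flag
const BTF_KIND_STRUCT: u32 = 4; // matches the constant in this file

fn btf_info_vlen(info: u32) -> u32 {
    info & 0xffff
}

fn btf_info_kind(info: u32) -> u32 {
    (info >> 24) & 0x1f
}

fn btf_info_kflag(info: u32) -> bool {
    (info >> 31) != 0
}

fn main() {
    // A struct with 3 members and kind_flag set, built by hand:
    let info = (1u32 << 31) | (BTF_KIND_STRUCT << 24) | 3;
    assert_eq!(btf_info_kind(info), BTF_KIND_STRUCT);
    assert_eq!(btf_info_vlen(info), 3);
    assert!(btf_info_kflag(info));
}
```

For kinds that carry a size (`BTF_KIND_INT`, `BTF_KIND_STRUCT`, ...) the `btf_type__bindgen_ty_1` union is read as `size`; for kinds that reference another type it is read as `type_`.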
pub const NFPROTO_UNSPEC: _bindgen_ty_99 = 0; pub const NFPROTO_INET: _bindgen_ty_99 = 1; pub const NFPROTO_IPV4: _bindgen_ty_99 = 2; pub const NFPROTO_ARP: _bindgen_ty_99 = 3; pub const NFPROTO_NETDEV: _bindgen_ty_99 = 5; pub const NFPROTO_BRIDGE: _bindgen_ty_99 = 7; pub const NFPROTO_IPV6: _bindgen_ty_99 = 10; pub const NFPROTO_DECNET: _bindgen_ty_99 = 12; pub const NFPROTO_NUMPROTO: _bindgen_ty_99 = 13; pub type _bindgen_ty_99 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_type_id { PERF_TYPE_HARDWARE = 0, PERF_TYPE_SOFTWARE = 1, PERF_TYPE_TRACEPOINT = 2, PERF_TYPE_HW_CACHE = 3, PERF_TYPE_RAW = 4, PERF_TYPE_BREAKPOINT = 5, PERF_TYPE_MAX = 6, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_id { PERF_COUNT_HW_CPU_CYCLES = 0, PERF_COUNT_HW_INSTRUCTIONS = 1, PERF_COUNT_HW_CACHE_REFERENCES = 2, PERF_COUNT_HW_CACHE_MISSES = 3, PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4, PERF_COUNT_HW_BRANCH_MISSES = 5, PERF_COUNT_HW_BUS_CYCLES = 6, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7, PERF_COUNT_HW_STALLED_CYCLES_BACKEND = 8, PERF_COUNT_HW_REF_CPU_CYCLES = 9, PERF_COUNT_HW_MAX = 10, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_id { PERF_COUNT_HW_CACHE_L1D = 0, PERF_COUNT_HW_CACHE_L1I = 1, PERF_COUNT_HW_CACHE_LL = 2, PERF_COUNT_HW_CACHE_DTLB = 3, PERF_COUNT_HW_CACHE_ITLB = 4, PERF_COUNT_HW_CACHE_BPU = 5, PERF_COUNT_HW_CACHE_NODE = 6, PERF_COUNT_HW_CACHE_MAX = 7, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_id { PERF_COUNT_HW_CACHE_OP_READ = 0, PERF_COUNT_HW_CACHE_OP_WRITE = 1, PERF_COUNT_HW_CACHE_OP_PREFETCH = 2, PERF_COUNT_HW_CACHE_OP_MAX = 3, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_result_id { PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0, PERF_COUNT_HW_CACHE_RESULT_MISS = 1, PERF_COUNT_HW_CACHE_RESULT_MAX = 2, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, 
PartialEq, Eq)] pub enum perf_sw_ids { PERF_COUNT_SW_CPU_CLOCK = 0, PERF_COUNT_SW_TASK_CLOCK = 1, PERF_COUNT_SW_PAGE_FAULTS = 2, PERF_COUNT_SW_CONTEXT_SWITCHES = 3, PERF_COUNT_SW_CPU_MIGRATIONS = 4, PERF_COUNT_SW_PAGE_FAULTS_MIN = 5, PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6, PERF_COUNT_SW_ALIGNMENT_FAULTS = 7, PERF_COUNT_SW_EMULATION_FAULTS = 8, PERF_COUNT_SW_DUMMY = 9, PERF_COUNT_SW_BPF_OUTPUT = 10, PERF_COUNT_SW_CGROUP_SWITCHES = 11, PERF_COUNT_SW_MAX = 12, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_sample_format { PERF_SAMPLE_IP = 1, PERF_SAMPLE_TID = 2, PERF_SAMPLE_TIME = 4, PERF_SAMPLE_ADDR = 8, PERF_SAMPLE_READ = 16, PERF_SAMPLE_CALLCHAIN = 32, PERF_SAMPLE_ID = 64, PERF_SAMPLE_CPU = 128, PERF_SAMPLE_PERIOD = 256, PERF_SAMPLE_STREAM_ID = 512, PERF_SAMPLE_RAW = 1024, PERF_SAMPLE_BRANCH_STACK = 2048, PERF_SAMPLE_REGS_USER = 4096, PERF_SAMPLE_STACK_USER = 8192, PERF_SAMPLE_WEIGHT = 16384, PERF_SAMPLE_DATA_SRC = 32768, PERF_SAMPLE_IDENTIFIER = 65536, PERF_SAMPLE_TRANSACTION = 131072, PERF_SAMPLE_REGS_INTR = 262144, PERF_SAMPLE_PHYS_ADDR = 524288, PERF_SAMPLE_AUX = 1048576, PERF_SAMPLE_CGROUP = 2097152, PERF_SAMPLE_DATA_PAGE_SIZE = 4194304, PERF_SAMPLE_CODE_PAGE_SIZE = 8388608, PERF_SAMPLE_WEIGHT_STRUCT = 16777216, PERF_SAMPLE_MAX = 33554432, } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_attr { pub type_: __u32, pub size: __u32, pub config: __u64, pub __bindgen_anon_1: perf_event_attr__bindgen_ty_1, pub sample_type: __u64, pub read_format: __u64, pub _bitfield_align_1: [u32; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, pub __bindgen_anon_2: perf_event_attr__bindgen_ty_2, pub bp_type: __u32, pub __bindgen_anon_3: perf_event_attr__bindgen_ty_3, pub __bindgen_anon_4: perf_event_attr__bindgen_ty_4, pub branch_sample_type: __u64, pub sample_regs_user: __u64, pub sample_stack_user: __u32, pub clockid: __s32, pub sample_regs_intr: __u64, pub aux_watermark: __u32, pub sample_max_stack: __u16, pub __reserved_2: 
__u16, pub aux_sample_size: __u32, pub __reserved_3: __u32, pub sig_data: __u64, pub config3: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_1 { pub sample_period: __u64, pub sample_freq: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_2 { pub wakeup_events: __u32, pub wakeup_watermark: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_3 { pub bp_addr: __u64, pub kprobe_func: __u64, pub uprobe_path: __u64, pub config1: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_4 { pub bp_len: __u64, pub kprobe_addr: __u64, pub probe_offset: __u64, pub config2: __u64, } impl perf_event_attr { #[inline] pub fn disabled(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_disabled(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn inherit(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_inherit(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn pinned(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_pinned(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn exclusive(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_exclusive(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn exclude_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_exclude_user(&mut 
self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn exclude_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_exclude_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn exclude_hv(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 1u8) as u64) } } #[inline] pub fn set_exclude_hv(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 1u8, val as u64) } } #[inline] pub fn exclude_idle(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(7usize, 1u8) as u64) } } #[inline] pub fn set_exclude_idle(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(7usize, 1u8, val as u64) } } #[inline] pub fn mmap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(8usize, 1u8) as u64) } } #[inline] pub fn set_mmap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(8usize, 1u8, val as u64) } } #[inline] pub fn comm(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(9usize, 1u8) as u64) } } #[inline] pub fn set_comm(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(9usize, 1u8, val as u64) } } #[inline] pub fn freq(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(10usize, 1u8) as u64) } } #[inline] pub fn set_freq(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(10usize, 1u8, val as u64) } } #[inline] pub fn inherit_stat(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(11usize, 1u8) as u64) } } #[inline] pub fn set_inherit_stat(&mut self, val: __u64) { unsafe { 
let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(11usize, 1u8, val as u64) } } #[inline] pub fn enable_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(12usize, 1u8) as u64) } } #[inline] pub fn set_enable_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(12usize, 1u8, val as u64) } } #[inline] pub fn task(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(13usize, 1u8) as u64) } } #[inline] pub fn set_task(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(13usize, 1u8, val as u64) } } #[inline] pub fn watermark(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(14usize, 1u8) as u64) } } #[inline] pub fn set_watermark(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(14usize, 1u8, val as u64) } } #[inline] pub fn precise_ip(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(15usize, 2u8) as u64) } } #[inline] pub fn set_precise_ip(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(15usize, 2u8, val as u64) } } #[inline] pub fn mmap_data(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(17usize, 1u8) as u64) } } #[inline] pub fn set_mmap_data(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(17usize, 1u8, val as u64) } } #[inline] pub fn sample_id_all(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(18usize, 1u8) as u64) } } #[inline] pub fn set_sample_id_all(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(18usize, 1u8, val as u64) } } #[inline] pub fn exclude_host(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(19usize, 1u8) as u64) } } #[inline] pub fn set_exclude_host(&mut self, val: __u64) { 
unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(19usize, 1u8, val as u64) } } #[inline] pub fn exclude_guest(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(20usize, 1u8) as u64) } } #[inline] pub fn set_exclude_guest(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(20usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(21usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(21usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(22usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(22usize, 1u8, val as u64) } } #[inline] pub fn mmap2(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(23usize, 1u8) as u64) } } #[inline] pub fn set_mmap2(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(23usize, 1u8, val as u64) } } #[inline] pub fn comm_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(24usize, 1u8) as u64) } } #[inline] pub fn set_comm_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(24usize, 1u8, val as u64) } } #[inline] pub fn use_clockid(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(25usize, 1u8) as u64) } } #[inline] pub fn set_use_clockid(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(25usize, 1u8, val as u64) } } #[inline] pub fn context_switch(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(26usize, 1u8) as u64) } } 
#[inline] pub fn set_context_switch(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(26usize, 1u8, val as u64) } } #[inline] pub fn write_backward(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(27usize, 1u8) as u64) } } #[inline] pub fn set_write_backward(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(27usize, 1u8, val as u64) } } #[inline] pub fn namespaces(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(28usize, 1u8) as u64) } } #[inline] pub fn set_namespaces(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(28usize, 1u8, val as u64) } } #[inline] pub fn ksymbol(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(29usize, 1u8) as u64) } } #[inline] pub fn set_ksymbol(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(29usize, 1u8, val as u64) } } #[inline] pub fn bpf_event(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(30usize, 1u8) as u64) } } #[inline] pub fn set_bpf_event(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(30usize, 1u8, val as u64) } } #[inline] pub fn aux_output(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(31usize, 1u8) as u64) } } #[inline] pub fn set_aux_output(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(31usize, 1u8, val as u64) } } #[inline] pub fn cgroup(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(32usize, 1u8) as u64) } } #[inline] pub fn set_cgroup(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(32usize, 1u8, val as u64) } } #[inline] pub fn text_poke(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(33usize, 1u8) as u64) } } 
#[inline] pub fn set_text_poke(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(33usize, 1u8, val as u64) } } #[inline] pub fn build_id(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(34usize, 1u8) as u64) } } #[inline] pub fn set_build_id(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(34usize, 1u8, val as u64) } } #[inline] pub fn inherit_thread(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(35usize, 1u8) as u64) } } #[inline] pub fn set_inherit_thread(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(35usize, 1u8, val as u64) } } #[inline] pub fn remove_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(36usize, 1u8) as u64) } } #[inline] pub fn set_remove_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(36usize, 1u8, val as u64) } } #[inline] pub fn sigtrap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(37usize, 1u8) as u64) } } #[inline] pub fn set_sigtrap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(37usize, 1u8, val as u64) } } #[inline] pub fn __reserved_1(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(38usize, 26u8) as u64) } } #[inline] pub fn set___reserved_1(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(38usize, 26u8, val as u64) } } #[inline] pub fn new_bitfield_1( disabled: __u64, inherit: __u64, pinned: __u64, exclusive: __u64, exclude_user: __u64, exclude_kernel: __u64, exclude_hv: __u64, exclude_idle: __u64, mmap: __u64, comm: __u64, freq: __u64, inherit_stat: __u64, enable_on_exec: __u64, task: __u64, watermark: __u64, precise_ip: __u64, mmap_data: __u64, sample_id_all: __u64, exclude_host: __u64, exclude_guest: 
__u64, exclude_callchain_kernel: __u64, exclude_callchain_user: __u64, mmap2: __u64, comm_exec: __u64, use_clockid: __u64, context_switch: __u64, write_backward: __u64, namespaces: __u64, ksymbol: __u64, bpf_event: __u64, aux_output: __u64, cgroup: __u64, text_poke: __u64, build_id: __u64, inherit_thread: __u64, remove_on_exec: __u64, sigtrap: __u64, __reserved_1: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let disabled: u64 = unsafe { ::core::mem::transmute(disabled) }; disabled as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let inherit: u64 = unsafe { ::core::mem::transmute(inherit) }; inherit as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let pinned: u64 = unsafe { ::core::mem::transmute(pinned) }; pinned as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let exclusive: u64 = unsafe { ::core::mem::transmute(exclusive) }; exclusive as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let exclude_user: u64 = unsafe { ::core::mem::transmute(exclude_user) }; exclude_user as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let exclude_kernel: u64 = unsafe { ::core::mem::transmute(exclude_kernel) }; exclude_kernel as u64 }); __bindgen_bitfield_unit.set(6usize, 1u8, { let exclude_hv: u64 = unsafe { ::core::mem::transmute(exclude_hv) }; exclude_hv as u64 }); __bindgen_bitfield_unit.set(7usize, 1u8, { let exclude_idle: u64 = unsafe { ::core::mem::transmute(exclude_idle) }; exclude_idle as u64 }); __bindgen_bitfield_unit.set(8usize, 1u8, { let mmap: u64 = unsafe { ::core::mem::transmute(mmap) }; mmap as u64 }); __bindgen_bitfield_unit.set(9usize, 1u8, { let comm: u64 = unsafe { ::core::mem::transmute(comm) }; comm as u64 }); __bindgen_bitfield_unit.set(10usize, 1u8, { let freq: u64 = unsafe { ::core::mem::transmute(freq) }; freq as u64 }); __bindgen_bitfield_unit.set(11usize, 1u8, { let inherit_stat: u64 = unsafe { 
::core::mem::transmute(inherit_stat) }; inherit_stat as u64 }); __bindgen_bitfield_unit.set(12usize, 1u8, { let enable_on_exec: u64 = unsafe { ::core::mem::transmute(enable_on_exec) }; enable_on_exec as u64 }); __bindgen_bitfield_unit.set(13usize, 1u8, { let task: u64 = unsafe { ::core::mem::transmute(task) }; task as u64 }); __bindgen_bitfield_unit.set(14usize, 1u8, { let watermark: u64 = unsafe { ::core::mem::transmute(watermark) }; watermark as u64 }); __bindgen_bitfield_unit.set(15usize, 2u8, { let precise_ip: u64 = unsafe { ::core::mem::transmute(precise_ip) }; precise_ip as u64 }); __bindgen_bitfield_unit.set(17usize, 1u8, { let mmap_data: u64 = unsafe { ::core::mem::transmute(mmap_data) }; mmap_data as u64 }); __bindgen_bitfield_unit.set(18usize, 1u8, { let sample_id_all: u64 = unsafe { ::core::mem::transmute(sample_id_all) }; sample_id_all as u64 }); __bindgen_bitfield_unit.set(19usize, 1u8, { let exclude_host: u64 = unsafe { ::core::mem::transmute(exclude_host) }; exclude_host as u64 }); __bindgen_bitfield_unit.set(20usize, 1u8, { let exclude_guest: u64 = unsafe { ::core::mem::transmute(exclude_guest) }; exclude_guest as u64 }); __bindgen_bitfield_unit.set(21usize, 1u8, { let exclude_callchain_kernel: u64 = unsafe { ::core::mem::transmute(exclude_callchain_kernel) }; exclude_callchain_kernel as u64 }); __bindgen_bitfield_unit.set(22usize, 1u8, { let exclude_callchain_user: u64 = unsafe { ::core::mem::transmute(exclude_callchain_user) }; exclude_callchain_user as u64 }); __bindgen_bitfield_unit.set(23usize, 1u8, { let mmap2: u64 = unsafe { ::core::mem::transmute(mmap2) }; mmap2 as u64 }); __bindgen_bitfield_unit.set(24usize, 1u8, { let comm_exec: u64 = unsafe { ::core::mem::transmute(comm_exec) }; comm_exec as u64 }); __bindgen_bitfield_unit.set(25usize, 1u8, { let use_clockid: u64 = unsafe { ::core::mem::transmute(use_clockid) }; use_clockid as u64 }); __bindgen_bitfield_unit.set(26usize, 1u8, { let context_switch: u64 = unsafe { 
::core::mem::transmute(context_switch) }; context_switch as u64 }); __bindgen_bitfield_unit.set(27usize, 1u8, { let write_backward: u64 = unsafe { ::core::mem::transmute(write_backward) }; write_backward as u64 }); __bindgen_bitfield_unit.set(28usize, 1u8, { let namespaces: u64 = unsafe { ::core::mem::transmute(namespaces) }; namespaces as u64 }); __bindgen_bitfield_unit.set(29usize, 1u8, { let ksymbol: u64 = unsafe { ::core::mem::transmute(ksymbol) }; ksymbol as u64 }); __bindgen_bitfield_unit.set(30usize, 1u8, { let bpf_event: u64 = unsafe { ::core::mem::transmute(bpf_event) }; bpf_event as u64 }); __bindgen_bitfield_unit.set(31usize, 1u8, { let aux_output: u64 = unsafe { ::core::mem::transmute(aux_output) }; aux_output as u64 }); __bindgen_bitfield_unit.set(32usize, 1u8, { let cgroup: u64 = unsafe { ::core::mem::transmute(cgroup) }; cgroup as u64 }); __bindgen_bitfield_unit.set(33usize, 1u8, { let text_poke: u64 = unsafe { ::core::mem::transmute(text_poke) }; text_poke as u64 }); __bindgen_bitfield_unit.set(34usize, 1u8, { let build_id: u64 = unsafe { ::core::mem::transmute(build_id) }; build_id as u64 }); __bindgen_bitfield_unit.set(35usize, 1u8, { let inherit_thread: u64 = unsafe { ::core::mem::transmute(inherit_thread) }; inherit_thread as u64 }); __bindgen_bitfield_unit.set(36usize, 1u8, { let remove_on_exec: u64 = unsafe { ::core::mem::transmute(remove_on_exec) }; remove_on_exec as u64 }); __bindgen_bitfield_unit.set(37usize, 1u8, { let sigtrap: u64 = unsafe { ::core::mem::transmute(sigtrap) }; sigtrap as u64 }); __bindgen_bitfield_unit.set(38usize, 26u8, { let __reserved_1: u64 = unsafe { ::core::mem::transmute(__reserved_1) }; __reserved_1 as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_mmap_page { pub version: __u32, pub compat_version: __u32, pub lock: __u32, pub index: __u32, pub offset: __s64, pub time_enabled: __u64, pub time_running: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1, pub 
pmc_width: __u16, pub time_shift: __u16, pub time_mult: __u32, pub time_offset: __u64, pub time_zero: __u64, pub size: __u32, pub __reserved_1: __u32, pub time_cycles: __u64, pub time_mask: __u64, pub __reserved: [__u8; 928usize], pub data_head: __u64, pub data_tail: __u64, pub data_offset: __u64, pub data_size: __u64, pub aux_head: __u64, pub aux_tail: __u64, pub aux_offset: __u64, pub aux_size: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_mmap_page__bindgen_ty_1 { pub capabilities: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { pub _bitfield_align_1: [u64; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, } impl perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { #[inline] pub fn cap_bit0(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn cap_bit0_is_deprecated(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0_is_deprecated(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn cap_user_rdpmc(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_rdpmc(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as 
u64) } } #[inline] pub fn cap_user_time_zero(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_zero(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_short(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_short(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn cap_____res(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 58u8) as u64) } } #[inline] pub fn set_cap_____res(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 58u8, val as u64) } } #[inline] pub fn new_bitfield_1( cap_bit0: __u64, cap_bit0_is_deprecated: __u64, cap_user_rdpmc: __u64, cap_user_time: __u64, cap_user_time_zero: __u64, cap_user_time_short: __u64, cap_____res: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let cap_bit0: u64 = unsafe { ::core::mem::transmute(cap_bit0) }; cap_bit0 as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let cap_bit0_is_deprecated: u64 = unsafe { ::core::mem::transmute(cap_bit0_is_deprecated) }; cap_bit0_is_deprecated as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let cap_user_rdpmc: u64 = unsafe { ::core::mem::transmute(cap_user_rdpmc) }; cap_user_rdpmc as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let cap_user_time: u64 = unsafe { ::core::mem::transmute(cap_user_time) }; cap_user_time as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let cap_user_time_zero: u64 = unsafe { ::core::mem::transmute(cap_user_time_zero) }; cap_user_time_zero as u64 }); 
__bindgen_bitfield_unit.set(5usize, 1u8, { let cap_user_time_short: u64 = unsafe { ::core::mem::transmute(cap_user_time_short) }; cap_user_time_short as u64 }); __bindgen_bitfield_unit.set(6usize, 58u8, { let cap_____res: u64 = unsafe { ::core::mem::transmute(cap_____res) }; cap_____res as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_header { pub type_: __u32, pub misc: __u16, pub size: __u16, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_type { PERF_RECORD_MMAP = 1, PERF_RECORD_LOST = 2, PERF_RECORD_COMM = 3, PERF_RECORD_EXIT = 4, PERF_RECORD_THROTTLE = 5, PERF_RECORD_UNTHROTTLE = 6, PERF_RECORD_FORK = 7, PERF_RECORD_READ = 8, PERF_RECORD_SAMPLE = 9, PERF_RECORD_MMAP2 = 10, PERF_RECORD_AUX = 11, PERF_RECORD_ITRACE_START = 12, PERF_RECORD_LOST_SAMPLES = 13, PERF_RECORD_SWITCH = 14, PERF_RECORD_SWITCH_CPU_WIDE = 15, PERF_RECORD_NAMESPACES = 16, PERF_RECORD_KSYMBOL = 17, PERF_RECORD_BPF_EVENT = 18, PERF_RECORD_CGROUP = 19, PERF_RECORD_TEXT_POKE = 20, PERF_RECORD_AUX_OUTPUT_HW_ID = 21, PERF_RECORD_MAX = 22, } pub const TCA_BPF_UNSPEC: _bindgen_ty_154 = 0; pub const TCA_BPF_ACT: _bindgen_ty_154 = 1; pub const TCA_BPF_POLICE: _bindgen_ty_154 = 2; pub const TCA_BPF_CLASSID: _bindgen_ty_154 = 3; pub const TCA_BPF_OPS_LEN: _bindgen_ty_154 = 4; pub const TCA_BPF_OPS: _bindgen_ty_154 = 5; pub const TCA_BPF_FD: _bindgen_ty_154 = 6; pub const TCA_BPF_NAME: _bindgen_ty_154 = 7; pub const TCA_BPF_FLAGS: _bindgen_ty_154 = 8; pub const TCA_BPF_FLAGS_GEN: _bindgen_ty_154 = 9; pub const TCA_BPF_TAG: _bindgen_ty_154 = 10; pub const TCA_BPF_ID: _bindgen_ty_154 = 11; pub const __TCA_BPF_MAX: _bindgen_ty_154 = 12; pub type _bindgen_ty_154 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct ifinfomsg { pub ifi_family: ::core::ffi::c_uchar, pub __ifi_pad: ::core::ffi::c_uchar, pub ifi_type: ::core::ffi::c_ushort, pub ifi_index: ::core::ffi::c_int, pub ifi_flags: 
::core::ffi::c_uint, pub ifi_change: ::core::ffi::c_uint, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct tcmsg { pub tcm_family: ::core::ffi::c_uchar, pub tcm__pad1: ::core::ffi::c_uchar, pub tcm__pad2: ::core::ffi::c_ushort, pub tcm_ifindex: ::core::ffi::c_int, pub tcm_handle: __u32, pub tcm_parent: __u32, pub tcm_info: __u32, } pub const TCA_UNSPEC: _bindgen_ty_172 = 0; pub const TCA_KIND: _bindgen_ty_172 = 1; pub const TCA_OPTIONS: _bindgen_ty_172 = 2; pub const TCA_STATS: _bindgen_ty_172 = 3; pub const TCA_XSTATS: _bindgen_ty_172 = 4; pub const TCA_RATE: _bindgen_ty_172 = 5; pub const TCA_FCNT: _bindgen_ty_172 = 6; pub const TCA_STATS2: _bindgen_ty_172 = 7; pub const TCA_STAB: _bindgen_ty_172 = 8; pub const TCA_PAD: _bindgen_ty_172 = 9; pub const TCA_DUMP_INVISIBLE: _bindgen_ty_172 = 10; pub const TCA_CHAIN: _bindgen_ty_172 = 11; pub const TCA_HW_OFFLOAD: _bindgen_ty_172 = 12; pub const TCA_INGRESS_BLOCK: _bindgen_ty_172 = 13; pub const TCA_EGRESS_BLOCK: _bindgen_ty_172 = 14; pub const __TCA_MAX: _bindgen_ty_172 = 15; pub type _bindgen_ty_172 = ::core::ffi::c_uint; pub const AYA_PERF_EVENT_IOC_ENABLE: ::core::ffi::c_int = 536880128; pub const AYA_PERF_EVENT_IOC_DISABLE: ::core::ffi::c_int = 536880129; pub const AYA_PERF_EVENT_IOC_SET_BPF: ::core::ffi::c_int = -2147212280; aya-obj-0.2.1/src/generated/linux_bindings_riscv64.rs000064400000000000000000002356331046102023000206451ustar 00000000000000/* automatically generated by rust-bindgen 0.70.1 */ #[repr(C)] #[derive(Copy, Clone, Debug, Default, Eq, Hash, Ord, PartialEq, PartialOrd)] pub struct __BindgenBitfieldUnit<Storage> { storage: Storage, } impl<Storage> __BindgenBitfieldUnit<Storage> { #[inline] pub const fn new(storage: Storage) -> Self { Self { storage } } } impl<Storage> __BindgenBitfieldUnit<Storage> where Storage: AsRef<[u8]> + AsMut<[u8]>, { #[inline] pub fn get_bit(&self, index: usize) -> bool { debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = self.storage.as_ref()[byte_index]; let bit_index
= if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; byte & mask == mask } #[inline] pub fn set_bit(&mut self, index: usize, val: bool) { debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = &mut self.storage.as_mut()[byte_index]; let bit_index = if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; if val { *byte |= mask; } else { *byte &= !mask; } } #[inline] pub fn get(&self, bit_offset: usize, bit_width: u8) -> u64 { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len()); let mut val = 0; for i in 0..(bit_width as usize) { if self.get_bit(i + bit_offset) { let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; val |= 1 << index; } } val } #[inline] pub fn set(&mut self, bit_offset: usize, bit_width: u8, val: u64) { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len()); for i in 0..(bit_width as usize) { let mask = 1 << i; let val_bit_is_set = val & mask == mask; let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; self.set_bit(index + bit_offset, val_bit_is_set); } } } #[repr(C)] #[derive(Default)] pub struct __IncompleteArrayField<T>(::core::marker::PhantomData<T>, [T; 0]); impl<T> __IncompleteArrayField<T> { #[inline] pub const fn new() -> Self { __IncompleteArrayField(::core::marker::PhantomData, []) } #[inline] pub fn as_ptr(&self) -> *const T { self as *const _ as *const T } #[inline] pub fn as_mut_ptr(&mut self) -> *mut T { self as *mut _ as *mut T } #[inline] pub unsafe fn as_slice(&self, len: usize) -> &[T] { ::core::slice::from_raw_parts(self.as_ptr(), len) } #[inline] pub unsafe fn as_mut_slice(&mut self, len: usize) -> &mut
[T] { ::core::slice::from_raw_parts_mut(self.as_mut_ptr(), len) } } impl<T> ::core::fmt::Debug for __IncompleteArrayField<T> { fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result { fmt.write_str("__IncompleteArrayField") } } pub const SO_ATTACH_BPF: u32 = 50; pub const SO_DETACH_BPF: u32 = 27; pub const BPF_LD: u32 = 0; pub const BPF_LDX: u32 = 1; pub const BPF_ST: u32 = 2; pub const BPF_STX: u32 = 3; pub const BPF_ALU: u32 = 4; pub const BPF_JMP: u32 = 5; pub const BPF_W: u32 = 0; pub const BPF_H: u32 = 8; pub const BPF_B: u32 = 16; pub const BPF_K: u32 = 0; pub const BPF_ALU64: u32 = 7; pub const BPF_DW: u32 = 24; pub const BPF_CALL: u32 = 128; pub const BPF_F_ALLOW_OVERRIDE: u32 = 1; pub const BPF_F_ALLOW_MULTI: u32 = 2; pub const BPF_F_REPLACE: u32 = 4; pub const BPF_F_BEFORE: u32 = 8; pub const BPF_F_AFTER: u32 = 16; pub const BPF_F_ID: u32 = 32; pub const BPF_F_STRICT_ALIGNMENT: u32 = 1; pub const BPF_F_ANY_ALIGNMENT: u32 = 2; pub const BPF_F_TEST_RND_HI32: u32 = 4; pub const BPF_F_TEST_STATE_FREQ: u32 = 8; pub const BPF_F_SLEEPABLE: u32 = 16; pub const BPF_F_XDP_HAS_FRAGS: u32 = 32; pub const BPF_F_XDP_DEV_BOUND_ONLY: u32 = 64; pub const BPF_F_TEST_REG_INVARIANTS: u32 = 128; pub const BPF_F_NETFILTER_IP_DEFRAG: u32 = 1; pub const BPF_PSEUDO_MAP_FD: u32 = 1; pub const BPF_PSEUDO_MAP_IDX: u32 = 5; pub const BPF_PSEUDO_MAP_VALUE: u32 = 2; pub const BPF_PSEUDO_MAP_IDX_VALUE: u32 = 6; pub const BPF_PSEUDO_BTF_ID: u32 = 3; pub const BPF_PSEUDO_FUNC: u32 = 4; pub const BPF_PSEUDO_CALL: u32 = 1; pub const BPF_PSEUDO_KFUNC_CALL: u32 = 2; pub const BPF_F_QUERY_EFFECTIVE: u32 = 1; pub const BPF_F_TEST_RUN_ON_CPU: u32 = 1; pub const BPF_F_TEST_XDP_LIVE_FRAMES: u32 = 2; pub const BTF_INT_SIGNED: u32 = 1; pub const BTF_INT_CHAR: u32 = 2; pub const BTF_INT_BOOL: u32 = 4; pub const NLMSG_ALIGNTO: u32 = 4; pub const XDP_FLAGS_UPDATE_IF_NOEXIST: u32 = 1; pub const XDP_FLAGS_SKB_MODE: u32 = 2; pub const XDP_FLAGS_DRV_MODE: u32 = 4; pub const XDP_FLAGS_HW_MODE: 
u32 = 8; pub const XDP_FLAGS_REPLACE: u32 = 16; pub const XDP_FLAGS_MODES: u32 = 14; pub const XDP_FLAGS_MASK: u32 = 31; pub const PERF_MAX_STACK_DEPTH: u32 = 127; pub const PERF_MAX_CONTEXTS_PER_STACK: u32 = 8; pub const PERF_FLAG_FD_NO_GROUP: u32 = 1; pub const PERF_FLAG_FD_OUTPUT: u32 = 2; pub const PERF_FLAG_PID_CGROUP: u32 = 4; pub const PERF_FLAG_FD_CLOEXEC: u32 = 8; pub const TC_H_MAJ_MASK: u32 = 4294901760; pub const TC_H_MIN_MASK: u32 = 65535; pub const TC_H_UNSPEC: u32 = 0; pub const TC_H_ROOT: u32 = 4294967295; pub const TC_H_INGRESS: u32 = 4294967281; pub const TC_H_CLSACT: u32 = 4294967281; pub const TC_H_MIN_PRIORITY: u32 = 65504; pub const TC_H_MIN_INGRESS: u32 = 65522; pub const TC_H_MIN_EGRESS: u32 = 65523; pub const TCA_BPF_FLAG_ACT_DIRECT: u32 = 1; pub type __u8 = ::core::ffi::c_uchar; pub type __s16 = ::core::ffi::c_short; pub type __u16 = ::core::ffi::c_ushort; pub type __s32 = ::core::ffi::c_int; pub type __u32 = ::core::ffi::c_uint; pub type __s64 = ::core::ffi::c_longlong; pub type __u64 = ::core::ffi::c_ulonglong; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_insn { pub code: __u8, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 1usize]>, pub off: __s16, pub imm: __s32, } impl bpf_insn { #[inline] pub fn dst_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 4u8) as u8) } } #[inline] pub fn set_dst_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 4u8, val as u64) } } #[inline] pub fn src_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 4u8) as u8) } } #[inline] pub fn set_src_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 4u8, val as u64) } } #[inline] pub fn new_bitfield_1(dst_reg: __u8, src_reg: __u8) -> __BindgenBitfieldUnit<[u8; 1usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 1usize]> = 
Default::default(); __bindgen_bitfield_unit.set(0usize, 4u8, { let dst_reg: u8 = unsafe { ::core::mem::transmute(dst_reg) }; dst_reg as u64 }); __bindgen_bitfield_unit.set(4usize, 4u8, { let src_reg: u8 = unsafe { ::core::mem::transmute(src_reg) }; src_reg as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug)] pub struct bpf_lpm_trie_key { pub prefixlen: __u32, pub data: __IncompleteArrayField<__u8>, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cgroup_iter_order { BPF_CGROUP_ITER_ORDER_UNSPEC = 0, BPF_CGROUP_ITER_SELF_ONLY = 1, BPF_CGROUP_ITER_DESCENDANTS_PRE = 2, BPF_CGROUP_ITER_DESCENDANTS_POST = 3, BPF_CGROUP_ITER_ANCESTORS_UP = 4, } impl bpf_cmd { pub const BPF_PROG_RUN: bpf_cmd = bpf_cmd::BPF_PROG_TEST_RUN; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cmd { BPF_MAP_CREATE = 0, BPF_MAP_LOOKUP_ELEM = 1, BPF_MAP_UPDATE_ELEM = 2, BPF_MAP_DELETE_ELEM = 3, BPF_MAP_GET_NEXT_KEY = 4, BPF_PROG_LOAD = 5, BPF_OBJ_PIN = 6, BPF_OBJ_GET = 7, BPF_PROG_ATTACH = 8, BPF_PROG_DETACH = 9, BPF_PROG_TEST_RUN = 10, BPF_PROG_GET_NEXT_ID = 11, BPF_MAP_GET_NEXT_ID = 12, BPF_PROG_GET_FD_BY_ID = 13, BPF_MAP_GET_FD_BY_ID = 14, BPF_OBJ_GET_INFO_BY_FD = 15, BPF_PROG_QUERY = 16, BPF_RAW_TRACEPOINT_OPEN = 17, BPF_BTF_LOAD = 18, BPF_BTF_GET_FD_BY_ID = 19, BPF_TASK_FD_QUERY = 20, BPF_MAP_LOOKUP_AND_DELETE_ELEM = 21, BPF_MAP_FREEZE = 22, BPF_BTF_GET_NEXT_ID = 23, BPF_MAP_LOOKUP_BATCH = 24, BPF_MAP_LOOKUP_AND_DELETE_BATCH = 25, BPF_MAP_UPDATE_BATCH = 26, BPF_MAP_DELETE_BATCH = 27, BPF_LINK_CREATE = 28, BPF_LINK_UPDATE = 29, BPF_LINK_GET_FD_BY_ID = 30, BPF_LINK_GET_NEXT_ID = 31, BPF_ENABLE_STATS = 32, BPF_ITER_CREATE = 33, BPF_LINK_DETACH = 34, BPF_PROG_BIND_MAP = 35, BPF_TOKEN_CREATE = 36, __MAX_BPF_CMD = 37, } impl bpf_map_type { pub const BPF_MAP_TYPE_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED; } impl bpf_map_type { pub const BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: bpf_map_type = 
bpf_map_type::BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED;
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_map_type {
    BPF_MAP_TYPE_UNSPEC = 0,
    BPF_MAP_TYPE_HASH = 1,
    BPF_MAP_TYPE_ARRAY = 2,
    BPF_MAP_TYPE_PROG_ARRAY = 3,
    BPF_MAP_TYPE_PERF_EVENT_ARRAY = 4,
    BPF_MAP_TYPE_PERCPU_HASH = 5,
    BPF_MAP_TYPE_PERCPU_ARRAY = 6,
    BPF_MAP_TYPE_STACK_TRACE = 7,
    BPF_MAP_TYPE_CGROUP_ARRAY = 8,
    BPF_MAP_TYPE_LRU_HASH = 9,
    BPF_MAP_TYPE_LRU_PERCPU_HASH = 10,
    BPF_MAP_TYPE_LPM_TRIE = 11,
    BPF_MAP_TYPE_ARRAY_OF_MAPS = 12,
    BPF_MAP_TYPE_HASH_OF_MAPS = 13,
    BPF_MAP_TYPE_DEVMAP = 14,
    BPF_MAP_TYPE_SOCKMAP = 15,
    BPF_MAP_TYPE_CPUMAP = 16,
    BPF_MAP_TYPE_XSKMAP = 17,
    BPF_MAP_TYPE_SOCKHASH = 18,
    BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED = 19,
    BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20,
    BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED = 21,
    BPF_MAP_TYPE_QUEUE = 22,
    BPF_MAP_TYPE_STACK = 23,
    BPF_MAP_TYPE_SK_STORAGE = 24,
    BPF_MAP_TYPE_DEVMAP_HASH = 25,
    BPF_MAP_TYPE_STRUCT_OPS = 26,
    BPF_MAP_TYPE_RINGBUF = 27,
    BPF_MAP_TYPE_INODE_STORAGE = 28,
    BPF_MAP_TYPE_TASK_STORAGE = 29,
    BPF_MAP_TYPE_BLOOM_FILTER = 30,
    BPF_MAP_TYPE_USER_RINGBUF = 31,
    BPF_MAP_TYPE_CGRP_STORAGE = 32,
    BPF_MAP_TYPE_ARENA = 33,
    __MAX_BPF_MAP_TYPE = 34,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_prog_type {
    BPF_PROG_TYPE_UNSPEC = 0,
    BPF_PROG_TYPE_SOCKET_FILTER = 1,
    BPF_PROG_TYPE_KPROBE = 2,
    BPF_PROG_TYPE_SCHED_CLS = 3,
    BPF_PROG_TYPE_SCHED_ACT = 4,
    BPF_PROG_TYPE_TRACEPOINT = 5,
    BPF_PROG_TYPE_XDP = 6,
    BPF_PROG_TYPE_PERF_EVENT = 7,
    BPF_PROG_TYPE_CGROUP_SKB = 8,
    BPF_PROG_TYPE_CGROUP_SOCK = 9,
    BPF_PROG_TYPE_LWT_IN = 10,
    BPF_PROG_TYPE_LWT_OUT = 11,
    BPF_PROG_TYPE_LWT_XMIT = 12,
    BPF_PROG_TYPE_SOCK_OPS = 13,
    BPF_PROG_TYPE_SK_SKB = 14,
    BPF_PROG_TYPE_CGROUP_DEVICE = 15,
    BPF_PROG_TYPE_SK_MSG = 16,
    BPF_PROG_TYPE_RAW_TRACEPOINT = 17,
    BPF_PROG_TYPE_CGROUP_SOCK_ADDR = 18,
    BPF_PROG_TYPE_LWT_SEG6LOCAL = 19,
    BPF_PROG_TYPE_LIRC_MODE2 = 20,
    BPF_PROG_TYPE_SK_REUSEPORT = 21,
    BPF_PROG_TYPE_FLOW_DISSECTOR = 22,
    BPF_PROG_TYPE_CGROUP_SYSCTL = 23,
    BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE = 24,
    BPF_PROG_TYPE_CGROUP_SOCKOPT = 25,
    BPF_PROG_TYPE_TRACING = 26,
    BPF_PROG_TYPE_STRUCT_OPS = 27,
    BPF_PROG_TYPE_EXT = 28,
    BPF_PROG_TYPE_LSM = 29,
    BPF_PROG_TYPE_SK_LOOKUP = 30,
    BPF_PROG_TYPE_SYSCALL = 31,
    BPF_PROG_TYPE_NETFILTER = 32,
    __MAX_BPF_PROG_TYPE = 33,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_attach_type {
    BPF_CGROUP_INET_INGRESS = 0,
    BPF_CGROUP_INET_EGRESS = 1,
    BPF_CGROUP_INET_SOCK_CREATE = 2,
    BPF_CGROUP_SOCK_OPS = 3,
    BPF_SK_SKB_STREAM_PARSER = 4,
    BPF_SK_SKB_STREAM_VERDICT = 5,
    BPF_CGROUP_DEVICE = 6,
    BPF_SK_MSG_VERDICT = 7,
    BPF_CGROUP_INET4_BIND = 8,
    BPF_CGROUP_INET6_BIND = 9,
    BPF_CGROUP_INET4_CONNECT = 10,
    BPF_CGROUP_INET6_CONNECT = 11,
    BPF_CGROUP_INET4_POST_BIND = 12,
    BPF_CGROUP_INET6_POST_BIND = 13,
    BPF_CGROUP_UDP4_SENDMSG = 14,
    BPF_CGROUP_UDP6_SENDMSG = 15,
    BPF_LIRC_MODE2 = 16,
    BPF_FLOW_DISSECTOR = 17,
    BPF_CGROUP_SYSCTL = 18,
    BPF_CGROUP_UDP4_RECVMSG = 19,
    BPF_CGROUP_UDP6_RECVMSG = 20,
    BPF_CGROUP_GETSOCKOPT = 21,
    BPF_CGROUP_SETSOCKOPT = 22,
    BPF_TRACE_RAW_TP = 23,
    BPF_TRACE_FENTRY = 24,
    BPF_TRACE_FEXIT = 25,
    BPF_MODIFY_RETURN = 26,
    BPF_LSM_MAC = 27,
    BPF_TRACE_ITER = 28,
    BPF_CGROUP_INET4_GETPEERNAME = 29,
    BPF_CGROUP_INET6_GETPEERNAME = 30,
    BPF_CGROUP_INET4_GETSOCKNAME = 31,
    BPF_CGROUP_INET6_GETSOCKNAME = 32,
    BPF_XDP_DEVMAP = 33,
    BPF_CGROUP_INET_SOCK_RELEASE = 34,
    BPF_XDP_CPUMAP = 35,
    BPF_SK_LOOKUP = 36,
    BPF_XDP = 37,
    BPF_SK_SKB_VERDICT = 38,
    BPF_SK_REUSEPORT_SELECT = 39,
    BPF_SK_REUSEPORT_SELECT_OR_MIGRATE = 40,
    BPF_PERF_EVENT = 41,
    BPF_TRACE_KPROBE_MULTI = 42,
    BPF_LSM_CGROUP = 43,
    BPF_STRUCT_OPS = 44,
    BPF_NETFILTER = 45,
    BPF_TCX_INGRESS = 46,
    BPF_TCX_EGRESS = 47,
    BPF_TRACE_UPROBE_MULTI = 48,
    BPF_CGROUP_UNIX_CONNECT = 49,
    BPF_CGROUP_UNIX_SENDMSG = 50,
    BPF_CGROUP_UNIX_RECVMSG = 51,
    BPF_CGROUP_UNIX_GETPEERNAME = 52,
    BPF_CGROUP_UNIX_GETSOCKNAME = 53,
    BPF_NETKIT_PRIMARY = 54,
    BPF_NETKIT_PEER = 55,
    __MAX_BPF_ATTACH_TYPE = 56,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_link_type {
    BPF_LINK_TYPE_UNSPEC = 0,
    BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
    BPF_LINK_TYPE_TRACING = 2,
    BPF_LINK_TYPE_CGROUP = 3,
    BPF_LINK_TYPE_ITER = 4,
    BPF_LINK_TYPE_NETNS = 5,
    BPF_LINK_TYPE_XDP = 6,
    BPF_LINK_TYPE_PERF_EVENT = 7,
    BPF_LINK_TYPE_KPROBE_MULTI = 8,
    BPF_LINK_TYPE_STRUCT_OPS = 9,
    BPF_LINK_TYPE_NETFILTER = 10,
    BPF_LINK_TYPE_TCX = 11,
    BPF_LINK_TYPE_UPROBE_MULTI = 12,
    BPF_LINK_TYPE_NETKIT = 13,
    __MAX_BPF_LINK_TYPE = 14,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_perf_event_type {
    BPF_PERF_EVENT_UNSPEC = 0,
    BPF_PERF_EVENT_UPROBE = 1,
    BPF_PERF_EVENT_URETPROBE = 2,
    BPF_PERF_EVENT_KPROBE = 3,
    BPF_PERF_EVENT_KRETPROBE = 4,
    BPF_PERF_EVENT_TRACEPOINT = 5,
    BPF_PERF_EVENT_EVENT = 6,
}
pub const BPF_F_KPROBE_MULTI_RETURN: _bindgen_ty_2 = 1;
pub type _bindgen_ty_2 = ::core::ffi::c_uint;
pub const BPF_F_UPROBE_MULTI_RETURN: _bindgen_ty_3 = 1;
pub type _bindgen_ty_3 = ::core::ffi::c_uint;
pub const BPF_ANY: _bindgen_ty_4 = 0;
pub const BPF_NOEXIST: _bindgen_ty_4 = 1;
pub const BPF_EXIST: _bindgen_ty_4 = 2;
pub const BPF_F_LOCK: _bindgen_ty_4 = 4;
pub type _bindgen_ty_4 = ::core::ffi::c_uint;
pub const BPF_F_NO_PREALLOC: _bindgen_ty_5 = 1;
pub const BPF_F_NO_COMMON_LRU: _bindgen_ty_5 = 2;
pub const BPF_F_NUMA_NODE: _bindgen_ty_5 = 4;
pub const BPF_F_RDONLY: _bindgen_ty_5 = 8;
pub const BPF_F_WRONLY: _bindgen_ty_5 = 16;
pub const BPF_F_STACK_BUILD_ID: _bindgen_ty_5 = 32;
pub const BPF_F_ZERO_SEED: _bindgen_ty_5 = 64;
pub const BPF_F_RDONLY_PROG: _bindgen_ty_5 = 128;
pub const BPF_F_WRONLY_PROG: _bindgen_ty_5 = 256;
pub const BPF_F_CLONE: _bindgen_ty_5 = 512;
pub const BPF_F_MMAPABLE: _bindgen_ty_5 = 1024;
pub const BPF_F_PRESERVE_ELEMS: _bindgen_ty_5 = 2048;
pub const BPF_F_INNER_MAP: _bindgen_ty_5 = 4096;
pub const BPF_F_LINK: _bindgen_ty_5 = 8192;
pub const BPF_F_PATH_FD: _bindgen_ty_5 = 16384;
pub const BPF_F_VTYPE_BTF_OBJ_FD: _bindgen_ty_5 = 32768;
pub const
BPF_F_TOKEN_FD: _bindgen_ty_5 = 65536; pub const BPF_F_SEGV_ON_FAULT: _bindgen_ty_5 = 131072; pub const BPF_F_NO_USER_CONV: _bindgen_ty_5 = 262144; pub type _bindgen_ty_5 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_stats_type { BPF_STATS_RUN_TIME = 0, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr { pub __bindgen_anon_1: bpf_attr__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_2, pub batch: bpf_attr__bindgen_ty_3, pub __bindgen_anon_3: bpf_attr__bindgen_ty_4, pub __bindgen_anon_4: bpf_attr__bindgen_ty_5, pub __bindgen_anon_5: bpf_attr__bindgen_ty_6, pub test: bpf_attr__bindgen_ty_7, pub __bindgen_anon_6: bpf_attr__bindgen_ty_8, pub info: bpf_attr__bindgen_ty_9, pub query: bpf_attr__bindgen_ty_10, pub raw_tracepoint: bpf_attr__bindgen_ty_11, pub __bindgen_anon_7: bpf_attr__bindgen_ty_12, pub task_fd_query: bpf_attr__bindgen_ty_13, pub link_create: bpf_attr__bindgen_ty_14, pub link_update: bpf_attr__bindgen_ty_15, pub link_detach: bpf_attr__bindgen_ty_16, pub enable_stats: bpf_attr__bindgen_ty_17, pub iter_create: bpf_attr__bindgen_ty_18, pub prog_bind_map: bpf_attr__bindgen_ty_19, pub token_create: bpf_attr__bindgen_ty_20, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_1 { pub map_type: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub inner_map_fd: __u32, pub numa_node: __u32, pub map_name: [::core::ffi::c_char; 16usize], pub map_ifindex: __u32, pub btf_fd: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_value_type_id: __u32, pub map_extra: __u64, pub value_type_btf_obj_fd: __s32, pub map_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_2 { pub map_fd: __u32, pub key: __u64, pub __bindgen_anon_1: bpf_attr__bindgen_ty_2__bindgen_ty_1, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_2__bindgen_ty_1 { pub value: 
__u64, pub next_key: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_3 { pub in_batch: __u64, pub out_batch: __u64, pub keys: __u64, pub values: __u64, pub count: __u32, pub map_fd: __u32, pub elem_flags: __u64, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_4 { pub prog_type: __u32, pub insn_cnt: __u32, pub insns: __u64, pub license: __u64, pub log_level: __u32, pub log_size: __u32, pub log_buf: __u64, pub kern_version: __u32, pub prog_flags: __u32, pub prog_name: [::core::ffi::c_char; 16usize], pub prog_ifindex: __u32, pub expected_attach_type: __u32, pub prog_btf_fd: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub func_info_cnt: __u32, pub line_info_rec_size: __u32, pub line_info: __u64, pub line_info_cnt: __u32, pub attach_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_4__bindgen_ty_1, pub core_relo_cnt: __u32, pub fd_array: __u64, pub core_relos: __u64, pub core_relo_rec_size: __u32, pub log_true_size: __u32, pub prog_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_4__bindgen_ty_1 { pub attach_prog_fd: __u32, pub attach_btf_obj_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_5 { pub pathname: __u64, pub bpf_fd: __u32, pub file_flags: __u32, pub path_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_6__bindgen_ty_1, pub attach_bpf_fd: __u32, pub attach_type: __u32, pub attach_flags: __u32, pub replace_bpf_fd: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_6__bindgen_ty_2, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_2 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub 
struct bpf_attr__bindgen_ty_7 { pub prog_fd: __u32, pub retval: __u32, pub data_size_in: __u32, pub data_size_out: __u32, pub data_in: __u64, pub data_out: __u64, pub repeat: __u32, pub duration: __u32, pub ctx_size_in: __u32, pub ctx_size_out: __u32, pub ctx_in: __u64, pub ctx_out: __u64, pub flags: __u32, pub cpu: __u32, pub batch_size: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_8__bindgen_ty_1, pub next_id: __u32, pub open_flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_8__bindgen_ty_1 { pub start_id: __u32, pub prog_id: __u32, pub map_id: __u32, pub btf_id: __u32, pub link_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_9 { pub bpf_fd: __u32, pub info_len: __u32, pub info: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_10 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_10__bindgen_ty_1, pub attach_type: __u32, pub query_flags: __u32, pub attach_flags: __u32, pub prog_ids: __u64, pub __bindgen_anon_2: bpf_attr__bindgen_ty_10__bindgen_ty_2, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub prog_attach_flags: __u64, pub link_ids: __u64, pub link_attach_flags: __u64, pub revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_2 { pub prog_cnt: __u32, pub count: __u32, } impl bpf_attr__bindgen_ty_10 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_11 { pub name: __u64, pub prog_fd: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 
4usize]>, pub cookie: __u64, } impl bpf_attr__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_12 { pub btf: __u64, pub btf_log_buf: __u64, pub btf_size: __u32, pub btf_log_size: __u32, pub btf_log_level: __u32, pub btf_log_true_size: __u32, pub btf_flags: __u32, pub btf_token_fd: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_13 { pub pid: __u32, pub fd: __u32, pub flags: __u32, pub buf_len: __u32, pub buf: __u64, pub prog_id: __u32, pub fd_type: __u32, pub probe_offset: __u64, pub probe_addr: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_14__bindgen_ty_2, pub attach_type: __u32, pub flags: __u32, pub __bindgen_anon_3: bpf_attr__bindgen_ty_14__bindgen_ty_3, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_1 { pub prog_fd: __u32, pub map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_2 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3 { pub target_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1, pub perf_event: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2, pub kprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3, pub tracing: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4, pub netfilter: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5, pub tcx: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6, pub uprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7, pub netkit: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8, } 
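The `new_bitfield_1()` constructors and the `get`/`set` calls on `__BindgenBitfieldUnit` above are bindgen's way of exposing C bitfields (such as the `cookie`-adjacent padding bits here, or `bpf_prog_info::gpl_compatible` later in this file) from Rust. The sketch below is a hypothetical, self-contained miniature of that storage scheme — `MiniBitfieldUnit` is not a real aya-obj or bindgen type — showing how a flag of a given bit offset and width is packed LSB-first into a byte array:

```rust
// Hypothetical miniature of bindgen's __BindgenBitfieldUnit, for illustration
// only: bits are addressed by (offset, width) and packed LSB-first per byte.
#[derive(Default)]
struct MiniBitfieldUnit {
    storage: [u8; 4],
}

impl MiniBitfieldUnit {
    // Read `width` bits starting at absolute bit offset `bit`.
    fn get(&self, bit: usize, width: u8) -> u64 {
        let mut val = 0u64;
        for i in 0..width as usize {
            let idx = bit + i;
            if (self.storage[idx / 8] >> (idx % 8)) & 1 == 1 {
                val |= 1u64 << i;
            }
        }
        val
    }

    // Write the low `width` bits of `val` starting at absolute bit offset `bit`.
    fn set(&mut self, bit: usize, width: u8, val: u64) {
        for i in 0..width as usize {
            let idx = bit + i;
            let mask = 1u8 << (idx % 8);
            if (val >> i) & 1 == 1 {
                self.storage[idx / 8] |= mask;
            } else {
                self.storage[idx / 8] &= !mask;
            }
        }
    }
}

fn main() {
    let mut unit = MiniBitfieldUnit::default();
    // Analogous to a generated set_*(1) for a 1-bit flag at offset 0.
    unit.set(0, 1, 1);
    assert_eq!(unit.get(0, 1), 1);
    // A second 1-bit flag at offset 3 lands in the same storage byte.
    unit.set(3, 1, 1);
    assert_eq!(unit.storage[0], 0b1001);
}
```

The real generated accessors (e.g. `gpl_compatible()` / `set_gpl_compatible()`) are just fixed-offset wrappers around this kind of get/set pair.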
#[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1 { pub iter_info: __u64, pub iter_info_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2 { pub bpf_cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3 { pub flags: __u32, pub cnt: __u32, pub syms: __u64, pub addrs: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4 { pub target_btf_id: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub cnt: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_15 { pub link_fd: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_15__bindgen_ty_1, pub flags: __u32, pub 
__bindgen_anon_2: bpf_attr__bindgen_ty_15__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_1 { pub new_prog_fd: __u32, pub new_map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_2 { pub old_prog_fd: __u32, pub old_map_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_16 { pub link_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_17 { pub type_: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_18 { pub link_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_19 { pub prog_fd: __u32, pub map_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_20 { pub flags: __u32, pub bpffs_fd: __u32, } pub const BPF_F_RECOMPUTE_CSUM: _bindgen_ty_6 = 1; pub const BPF_F_INVALIDATE_HASH: _bindgen_ty_6 = 2; pub type _bindgen_ty_6 = ::core::ffi::c_uint; pub const BPF_F_HDR_FIELD_MASK: _bindgen_ty_7 = 15; pub type _bindgen_ty_7 = ::core::ffi::c_uint; pub const BPF_F_PSEUDO_HDR: _bindgen_ty_8 = 16; pub const BPF_F_MARK_MANGLED_0: _bindgen_ty_8 = 32; pub const BPF_F_MARK_ENFORCE: _bindgen_ty_8 = 64; pub type _bindgen_ty_8 = ::core::ffi::c_uint; pub const BPF_F_INGRESS: _bindgen_ty_9 = 1; pub type _bindgen_ty_9 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_IPV6: _bindgen_ty_10 = 1; pub type _bindgen_ty_10 = ::core::ffi::c_uint; pub const BPF_F_SKIP_FIELD_MASK: _bindgen_ty_11 = 255; pub const BPF_F_USER_STACK: _bindgen_ty_11 = 256; pub const BPF_F_FAST_STACK_CMP: _bindgen_ty_11 = 512; pub const BPF_F_REUSE_STACKID: _bindgen_ty_11 = 1024; pub const BPF_F_USER_BUILD_ID: _bindgen_ty_11 = 2048; pub type _bindgen_ty_11 = ::core::ffi::c_uint; pub const BPF_F_ZERO_CSUM_TX: _bindgen_ty_12 = 2; pub const BPF_F_DONT_FRAGMENT: _bindgen_ty_12 = 4; pub const BPF_F_SEQ_NUMBER: _bindgen_ty_12 = 8; pub const 
BPF_F_NO_TUNNEL_KEY: _bindgen_ty_12 = 16; pub type _bindgen_ty_12 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_FLAGS: _bindgen_ty_13 = 16; pub type _bindgen_ty_13 = ::core::ffi::c_uint; pub const BPF_F_INDEX_MASK: _bindgen_ty_14 = 4294967295; pub const BPF_F_CURRENT_CPU: _bindgen_ty_14 = 4294967295; pub const BPF_F_CTXLEN_MASK: _bindgen_ty_14 = 4503595332403200; pub type _bindgen_ty_14 = ::core::ffi::c_ulong; pub const BPF_F_CURRENT_NETNS: _bindgen_ty_15 = -1; pub type _bindgen_ty_15 = ::core::ffi::c_int; pub const BPF_F_ADJ_ROOM_FIXED_GSO: _bindgen_ty_17 = 1; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV4: _bindgen_ty_17 = 2; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV6: _bindgen_ty_17 = 4; pub const BPF_F_ADJ_ROOM_ENCAP_L4_GRE: _bindgen_ty_17 = 8; pub const BPF_F_ADJ_ROOM_ENCAP_L4_UDP: _bindgen_ty_17 = 16; pub const BPF_F_ADJ_ROOM_NO_CSUM_RESET: _bindgen_ty_17 = 32; pub const BPF_F_ADJ_ROOM_ENCAP_L2_ETH: _bindgen_ty_17 = 64; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV4: _bindgen_ty_17 = 128; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV6: _bindgen_ty_17 = 256; pub type _bindgen_ty_17 = ::core::ffi::c_uint; pub const BPF_F_SYSCTL_BASE_NAME: _bindgen_ty_19 = 1; pub type _bindgen_ty_19 = ::core::ffi::c_uint; pub const BPF_F_GET_BRANCH_RECORDS_SIZE: _bindgen_ty_21 = 1; pub type _bindgen_ty_21 = ::core::ffi::c_uint; pub const BPF_RINGBUF_BUSY_BIT: _bindgen_ty_24 = 2147483648; pub const BPF_RINGBUF_DISCARD_BIT: _bindgen_ty_24 = 1073741824; pub const BPF_RINGBUF_HDR_SZ: _bindgen_ty_24 = 8; pub type _bindgen_ty_24 = ::core::ffi::c_uint; pub const BPF_F_BPRM_SECUREEXEC: _bindgen_ty_26 = 1; pub type _bindgen_ty_26 = ::core::ffi::c_uint; pub const BPF_F_BROADCAST: _bindgen_ty_27 = 8; pub const BPF_F_EXCLUDE_INGRESS: _bindgen_ty_27 = 16; pub type _bindgen_ty_27 = ::core::ffi::c_uint; #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_devmap_val { pub ifindex: __u32, pub bpf_prog: bpf_devmap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_devmap_val__bindgen_ty_1 { pub fd: 
::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_cpumap_val { pub qsize: __u32, pub bpf_prog: bpf_cpumap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_cpumap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_prog_info { pub type_: __u32, pub id: __u32, pub tag: [__u8; 8usize], pub jited_prog_len: __u32, pub xlated_prog_len: __u32, pub jited_prog_insns: __u64, pub xlated_prog_insns: __u64, pub load_time: __u64, pub created_by_uid: __u32, pub nr_map_ids: __u32, pub map_ids: __u64, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub netns_dev: __u64, pub netns_ino: __u64, pub nr_jited_ksyms: __u32, pub nr_jited_func_lens: __u32, pub jited_ksyms: __u64, pub jited_func_lens: __u64, pub btf_id: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub nr_func_info: __u32, pub nr_line_info: __u32, pub line_info: __u64, pub jited_line_info: __u64, pub nr_jited_line_info: __u32, pub line_info_rec_size: __u32, pub jited_line_info_rec_size: __u32, pub nr_prog_tags: __u32, pub prog_tags: __u64, pub run_time_ns: __u64, pub run_cnt: __u64, pub recursion_misses: __u64, pub verified_insns: __u32, pub attach_btf_obj_id: __u32, pub attach_btf_id: __u32, } impl bpf_prog_info { #[inline] pub fn gpl_compatible(&self) -> __u32 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u32) } } #[inline] pub fn set_gpl_compatible(&mut self, val: __u32) { unsafe { let val: u32 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn new_bitfield_1(gpl_compatible: __u32) -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let gpl_compatible: u32 = unsafe { 
::core::mem::transmute(gpl_compatible) }; gpl_compatible as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_map_info { pub type_: __u32, pub id: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub btf_vmlinux_value_type_id: __u32, pub netns_dev: __u64, pub netns_ino: __u64, pub btf_id: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_id: __u32, pub map_extra: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_btf_info { pub btf: __u64, pub btf_size: __u32, pub id: __u32, pub name: __u64, pub name_len: __u32, pub kernel_btf: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info { pub type_: __u32, pub id: __u32, pub prog_id: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1 { pub raw_tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_1, pub tracing: bpf_link_info__bindgen_ty_1__bindgen_ty_2, pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_3, pub iter: bpf_link_info__bindgen_ty_1__bindgen_ty_4, pub netns: bpf_link_info__bindgen_ty_1__bindgen_ty_5, pub xdp: bpf_link_info__bindgen_ty_1__bindgen_ty_6, pub struct_ops: bpf_link_info__bindgen_ty_1__bindgen_ty_7, pub netfilter: bpf_link_info__bindgen_ty_1__bindgen_ty_8, pub kprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_9, pub uprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_10, pub perf_event: bpf_link_info__bindgen_ty_1__bindgen_ty_11, pub tcx: bpf_link_info__bindgen_ty_1__bindgen_ty_12, pub netkit: bpf_link_info__bindgen_ty_1__bindgen_ty_13, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_1 { pub tp_name: __u64, pub tp_name_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_2 { pub attach_type: __u32, pub 
target_obj_id: __u32, pub target_btf_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_3 { pub cgroup_id: __u64, pub attach_type: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4 { pub target_name: __u64, pub target_name_len: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1, pub __bindgen_anon_2: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1 { pub map: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1 { pub map_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2 { pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1, pub task: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1 { pub cgroup_id: __u64, pub order: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2 { pub tid: __u32, pub pid: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_5 { pub netns_ino: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_6 { pub ifindex: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_7 { pub map_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_8 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, 
Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_9 { pub addrs: __u64, pub count: __u32, pub flags: __u32, pub missed: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_10 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub path_size: __u32, pub count: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11 { pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1 { pub uprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1, pub kprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2, pub tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3, pub event: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1 { pub file_name: __u64, pub name_len: __u32, pub offset: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2 { pub func_name: __u64, pub name_len: __u32, pub offset: __u32, pub addr: __u64, pub missed: __u64, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { pub tp_name: __u64, pub name_len: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { #[inline] pub fn new_bitfield_1() -> 
__BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { pub config: __u64, pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_12 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_13 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_func_info { pub insn_off: __u32, pub type_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_line_info { pub insn_off: __u32, pub file_name_off: __u32, pub line_off: __u32, pub line_col: __u32, } pub const BPF_F_TIMER_ABS: _bindgen_ty_41 = 1; pub const BPF_F_TIMER_CPU_PIN: _bindgen_ty_41 = 2; pub type _bindgen_ty_41 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_header { pub magic: __u16, pub version: __u8, pub flags: __u8, pub hdr_len: __u32, pub type_off: __u32, pub type_len: __u32, pub str_off: __u32, pub str_len: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct btf_type { pub name_off: __u32, pub info: __u32, pub __bindgen_anon_1: 
btf_type__bindgen_ty_1,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union btf_type__bindgen_ty_1 {
    pub size: __u32,
    pub type_: __u32,
}
pub const BTF_KIND_UNKN: _bindgen_ty_42 = 0;
pub const BTF_KIND_INT: _bindgen_ty_42 = 1;
pub const BTF_KIND_PTR: _bindgen_ty_42 = 2;
pub const BTF_KIND_ARRAY: _bindgen_ty_42 = 3;
pub const BTF_KIND_STRUCT: _bindgen_ty_42 = 4;
pub const BTF_KIND_UNION: _bindgen_ty_42 = 5;
pub const BTF_KIND_ENUM: _bindgen_ty_42 = 6;
pub const BTF_KIND_FWD: _bindgen_ty_42 = 7;
pub const BTF_KIND_TYPEDEF: _bindgen_ty_42 = 8;
pub const BTF_KIND_VOLATILE: _bindgen_ty_42 = 9;
pub const BTF_KIND_CONST: _bindgen_ty_42 = 10;
pub const BTF_KIND_RESTRICT: _bindgen_ty_42 = 11;
pub const BTF_KIND_FUNC: _bindgen_ty_42 = 12;
pub const BTF_KIND_FUNC_PROTO: _bindgen_ty_42 = 13;
pub const BTF_KIND_VAR: _bindgen_ty_42 = 14;
pub const BTF_KIND_DATASEC: _bindgen_ty_42 = 15;
pub const BTF_KIND_FLOAT: _bindgen_ty_42 = 16;
pub const BTF_KIND_DECL_TAG: _bindgen_ty_42 = 17;
pub const BTF_KIND_TYPE_TAG: _bindgen_ty_42 = 18;
pub const BTF_KIND_ENUM64: _bindgen_ty_42 = 19;
pub const NR_BTF_KINDS: _bindgen_ty_42 = 20;
pub const BTF_KIND_MAX: _bindgen_ty_42 = 19;
pub type _bindgen_ty_42 = ::core::ffi::c_uint;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_enum {
    pub name_off: __u32,
    pub val: __s32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_array {
    pub type_: __u32,
    pub index_type: __u32,
    pub nelems: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_member {
    pub name_off: __u32,
    pub type_: __u32,
    pub offset: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_param {
    pub name_off: __u32,
    pub type_: __u32,
}
pub const BTF_VAR_STATIC: _bindgen_ty_43 = 0;
pub const BTF_VAR_GLOBAL_ALLOCATED: _bindgen_ty_43 = 1;
pub const BTF_VAR_GLOBAL_EXTERN: _bindgen_ty_43 = 2;
pub type _bindgen_ty_43 = ::core::ffi::c_uint;
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum btf_func_linkage {
    BTF_FUNC_STATIC = 0,
    BTF_FUNC_GLOBAL = 1,
    BTF_FUNC_EXTERN = 2,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_var {
    pub linkage: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_var_secinfo {
    pub type_: __u32,
    pub offset: __u32,
    pub size: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_decl_tag {
    pub component_idx: __s32,
}
pub const IFLA_XDP_UNSPEC: _bindgen_ty_92 = 0;
pub const IFLA_XDP_FD: _bindgen_ty_92 = 1;
pub const IFLA_XDP_ATTACHED: _bindgen_ty_92 = 2;
pub const IFLA_XDP_FLAGS: _bindgen_ty_92 = 3;
pub const IFLA_XDP_PROG_ID: _bindgen_ty_92 = 4;
pub const IFLA_XDP_DRV_PROG_ID: _bindgen_ty_92 = 5;
pub const IFLA_XDP_SKB_PROG_ID: _bindgen_ty_92 = 6;
pub const IFLA_XDP_HW_PROG_ID: _bindgen_ty_92 = 7;
pub const IFLA_XDP_EXPECTED_FD: _bindgen_ty_92 = 8;
pub const __IFLA_XDP_MAX: _bindgen_ty_92 = 9;
pub type _bindgen_ty_92 = ::core::ffi::c_uint;
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum nf_inet_hooks {
    NF_INET_PRE_ROUTING = 0,
    NF_INET_LOCAL_IN = 1,
    NF_INET_FORWARD = 2,
    NF_INET_LOCAL_OUT = 3,
    NF_INET_POST_ROUTING = 4,
    NF_INET_NUMHOOKS = 5,
}
pub const NFPROTO_UNSPEC: _bindgen_ty_99 = 0;
pub const NFPROTO_INET: _bindgen_ty_99 = 1;
pub const NFPROTO_IPV4: _bindgen_ty_99 = 2;
pub const NFPROTO_ARP: _bindgen_ty_99 = 3;
pub const NFPROTO_NETDEV: _bindgen_ty_99 = 5;
pub const NFPROTO_BRIDGE: _bindgen_ty_99 = 7;
pub const NFPROTO_IPV6: _bindgen_ty_99 = 10;
pub const NFPROTO_DECNET: _bindgen_ty_99 = 12;
pub const NFPROTO_NUMPROTO: _bindgen_ty_99 = 13;
pub type _bindgen_ty_99 = ::core::ffi::c_uint;
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_type_id {
    PERF_TYPE_HARDWARE = 0,
    PERF_TYPE_SOFTWARE = 1,
    PERF_TYPE_TRACEPOINT = 2,
    PERF_TYPE_HW_CACHE = 3,
    PERF_TYPE_RAW = 4,
    PERF_TYPE_BREAKPOINT = 5,
    PERF_TYPE_MAX = 6,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_hw_id {
    PERF_COUNT_HW_CPU_CYCLES = 0,
    PERF_COUNT_HW_INSTRUCTIONS = 1,
    PERF_COUNT_HW_CACHE_REFERENCES = 2,
PERF_COUNT_HW_CACHE_MISSES = 3, PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4, PERF_COUNT_HW_BRANCH_MISSES = 5, PERF_COUNT_HW_BUS_CYCLES = 6, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7, PERF_COUNT_HW_STALLED_CYCLES_BACKEND = 8, PERF_COUNT_HW_REF_CPU_CYCLES = 9, PERF_COUNT_HW_MAX = 10, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_id { PERF_COUNT_HW_CACHE_L1D = 0, PERF_COUNT_HW_CACHE_L1I = 1, PERF_COUNT_HW_CACHE_LL = 2, PERF_COUNT_HW_CACHE_DTLB = 3, PERF_COUNT_HW_CACHE_ITLB = 4, PERF_COUNT_HW_CACHE_BPU = 5, PERF_COUNT_HW_CACHE_NODE = 6, PERF_COUNT_HW_CACHE_MAX = 7, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_id { PERF_COUNT_HW_CACHE_OP_READ = 0, PERF_COUNT_HW_CACHE_OP_WRITE = 1, PERF_COUNT_HW_CACHE_OP_PREFETCH = 2, PERF_COUNT_HW_CACHE_OP_MAX = 3, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_result_id { PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0, PERF_COUNT_HW_CACHE_RESULT_MISS = 1, PERF_COUNT_HW_CACHE_RESULT_MAX = 2, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_sw_ids { PERF_COUNT_SW_CPU_CLOCK = 0, PERF_COUNT_SW_TASK_CLOCK = 1, PERF_COUNT_SW_PAGE_FAULTS = 2, PERF_COUNT_SW_CONTEXT_SWITCHES = 3, PERF_COUNT_SW_CPU_MIGRATIONS = 4, PERF_COUNT_SW_PAGE_FAULTS_MIN = 5, PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6, PERF_COUNT_SW_ALIGNMENT_FAULTS = 7, PERF_COUNT_SW_EMULATION_FAULTS = 8, PERF_COUNT_SW_DUMMY = 9, PERF_COUNT_SW_BPF_OUTPUT = 10, PERF_COUNT_SW_CGROUP_SWITCHES = 11, PERF_COUNT_SW_MAX = 12, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_sample_format { PERF_SAMPLE_IP = 1, PERF_SAMPLE_TID = 2, PERF_SAMPLE_TIME = 4, PERF_SAMPLE_ADDR = 8, PERF_SAMPLE_READ = 16, PERF_SAMPLE_CALLCHAIN = 32, PERF_SAMPLE_ID = 64, PERF_SAMPLE_CPU = 128, PERF_SAMPLE_PERIOD = 256, PERF_SAMPLE_STREAM_ID = 512, PERF_SAMPLE_RAW = 1024, PERF_SAMPLE_BRANCH_STACK = 2048, PERF_SAMPLE_REGS_USER = 4096, 
PERF_SAMPLE_STACK_USER = 8192, PERF_SAMPLE_WEIGHT = 16384, PERF_SAMPLE_DATA_SRC = 32768, PERF_SAMPLE_IDENTIFIER = 65536, PERF_SAMPLE_TRANSACTION = 131072, PERF_SAMPLE_REGS_INTR = 262144, PERF_SAMPLE_PHYS_ADDR = 524288, PERF_SAMPLE_AUX = 1048576, PERF_SAMPLE_CGROUP = 2097152, PERF_SAMPLE_DATA_PAGE_SIZE = 4194304, PERF_SAMPLE_CODE_PAGE_SIZE = 8388608, PERF_SAMPLE_WEIGHT_STRUCT = 16777216, PERF_SAMPLE_MAX = 33554432, } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_attr { pub type_: __u32, pub size: __u32, pub config: __u64, pub __bindgen_anon_1: perf_event_attr__bindgen_ty_1, pub sample_type: __u64, pub read_format: __u64, pub _bitfield_align_1: [u32; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, pub __bindgen_anon_2: perf_event_attr__bindgen_ty_2, pub bp_type: __u32, pub __bindgen_anon_3: perf_event_attr__bindgen_ty_3, pub __bindgen_anon_4: perf_event_attr__bindgen_ty_4, pub branch_sample_type: __u64, pub sample_regs_user: __u64, pub sample_stack_user: __u32, pub clockid: __s32, pub sample_regs_intr: __u64, pub aux_watermark: __u32, pub sample_max_stack: __u16, pub __reserved_2: __u16, pub aux_sample_size: __u32, pub __reserved_3: __u32, pub sig_data: __u64, pub config3: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_1 { pub sample_period: __u64, pub sample_freq: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_2 { pub wakeup_events: __u32, pub wakeup_watermark: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_3 { pub bp_addr: __u64, pub kprobe_func: __u64, pub uprobe_path: __u64, pub config1: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_4 { pub bp_len: __u64, pub kprobe_addr: __u64, pub probe_offset: __u64, pub config2: __u64, } impl perf_event_attr { #[inline] pub fn disabled(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_disabled(&mut self, 
val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn inherit(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_inherit(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn pinned(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_pinned(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn exclusive(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_exclusive(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn exclude_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_exclude_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn exclude_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_exclude_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn exclude_hv(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 1u8) as u64) } } #[inline] pub fn set_exclude_hv(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 1u8, val as u64) } } #[inline] pub fn exclude_idle(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(7usize, 1u8) as u64) } } #[inline] pub fn set_exclude_idle(&mut self, val: __u64) 
{ unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(7usize, 1u8, val as u64) } } #[inline] pub fn mmap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(8usize, 1u8) as u64) } } #[inline] pub fn set_mmap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(8usize, 1u8, val as u64) } } #[inline] pub fn comm(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(9usize, 1u8) as u64) } } #[inline] pub fn set_comm(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(9usize, 1u8, val as u64) } } #[inline] pub fn freq(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(10usize, 1u8) as u64) } } #[inline] pub fn set_freq(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(10usize, 1u8, val as u64) } } #[inline] pub fn inherit_stat(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(11usize, 1u8) as u64) } } #[inline] pub fn set_inherit_stat(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(11usize, 1u8, val as u64) } } #[inline] pub fn enable_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(12usize, 1u8) as u64) } } #[inline] pub fn set_enable_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(12usize, 1u8, val as u64) } } #[inline] pub fn task(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(13usize, 1u8) as u64) } } #[inline] pub fn set_task(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(13usize, 1u8, val as u64) } } #[inline] pub fn watermark(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(14usize, 1u8) as u64) } } #[inline] pub fn set_watermark(&mut self, val: __u64) { unsafe { let val: u64 = 
::core::mem::transmute(val); self._bitfield_1.set(14usize, 1u8, val as u64) } } #[inline] pub fn precise_ip(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(15usize, 2u8) as u64) } } #[inline] pub fn set_precise_ip(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(15usize, 2u8, val as u64) } } #[inline] pub fn mmap_data(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(17usize, 1u8) as u64) } } #[inline] pub fn set_mmap_data(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(17usize, 1u8, val as u64) } } #[inline] pub fn sample_id_all(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(18usize, 1u8) as u64) } } #[inline] pub fn set_sample_id_all(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(18usize, 1u8, val as u64) } } #[inline] pub fn exclude_host(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(19usize, 1u8) as u64) } } #[inline] pub fn set_exclude_host(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(19usize, 1u8, val as u64) } } #[inline] pub fn exclude_guest(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(20usize, 1u8) as u64) } } #[inline] pub fn set_exclude_guest(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(20usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(21usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(21usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(22usize, 1u8) as u64) } } #[inline] pub fn 
set_exclude_callchain_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(22usize, 1u8, val as u64) } } #[inline] pub fn mmap2(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(23usize, 1u8) as u64) } } #[inline] pub fn set_mmap2(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(23usize, 1u8, val as u64) } } #[inline] pub fn comm_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(24usize, 1u8) as u64) } } #[inline] pub fn set_comm_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(24usize, 1u8, val as u64) } } #[inline] pub fn use_clockid(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(25usize, 1u8) as u64) } } #[inline] pub fn set_use_clockid(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(25usize, 1u8, val as u64) } } #[inline] pub fn context_switch(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(26usize, 1u8) as u64) } } #[inline] pub fn set_context_switch(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(26usize, 1u8, val as u64) } } #[inline] pub fn write_backward(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(27usize, 1u8) as u64) } } #[inline] pub fn set_write_backward(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(27usize, 1u8, val as u64) } } #[inline] pub fn namespaces(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(28usize, 1u8) as u64) } } #[inline] pub fn set_namespaces(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(28usize, 1u8, val as u64) } } #[inline] pub fn ksymbol(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(29usize, 1u8) as u64) } } 
#[inline] pub fn set_ksymbol(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(29usize, 1u8, val as u64) } } #[inline] pub fn bpf_event(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(30usize, 1u8) as u64) } } #[inline] pub fn set_bpf_event(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(30usize, 1u8, val as u64) } } #[inline] pub fn aux_output(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(31usize, 1u8) as u64) } } #[inline] pub fn set_aux_output(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(31usize, 1u8, val as u64) } } #[inline] pub fn cgroup(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(32usize, 1u8) as u64) } } #[inline] pub fn set_cgroup(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(32usize, 1u8, val as u64) } } #[inline] pub fn text_poke(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(33usize, 1u8) as u64) } } #[inline] pub fn set_text_poke(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(33usize, 1u8, val as u64) } } #[inline] pub fn build_id(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(34usize, 1u8) as u64) } } #[inline] pub fn set_build_id(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(34usize, 1u8, val as u64) } } #[inline] pub fn inherit_thread(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(35usize, 1u8) as u64) } } #[inline] pub fn set_inherit_thread(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(35usize, 1u8, val as u64) } } #[inline] pub fn remove_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(36usize, 1u8) as u64) } } 
#[inline] pub fn set_remove_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(36usize, 1u8, val as u64) } } #[inline] pub fn sigtrap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(37usize, 1u8) as u64) } } #[inline] pub fn set_sigtrap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(37usize, 1u8, val as u64) } } #[inline] pub fn __reserved_1(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(38usize, 26u8) as u64) } } #[inline] pub fn set___reserved_1(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(38usize, 26u8, val as u64) } } #[inline] pub fn new_bitfield_1( disabled: __u64, inherit: __u64, pinned: __u64, exclusive: __u64, exclude_user: __u64, exclude_kernel: __u64, exclude_hv: __u64, exclude_idle: __u64, mmap: __u64, comm: __u64, freq: __u64, inherit_stat: __u64, enable_on_exec: __u64, task: __u64, watermark: __u64, precise_ip: __u64, mmap_data: __u64, sample_id_all: __u64, exclude_host: __u64, exclude_guest: __u64, exclude_callchain_kernel: __u64, exclude_callchain_user: __u64, mmap2: __u64, comm_exec: __u64, use_clockid: __u64, context_switch: __u64, write_backward: __u64, namespaces: __u64, ksymbol: __u64, bpf_event: __u64, aux_output: __u64, cgroup: __u64, text_poke: __u64, build_id: __u64, inherit_thread: __u64, remove_on_exec: __u64, sigtrap: __u64, __reserved_1: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let disabled: u64 = unsafe { ::core::mem::transmute(disabled) }; disabled as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let inherit: u64 = unsafe { ::core::mem::transmute(inherit) }; inherit as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let pinned: u64 = unsafe { ::core::mem::transmute(pinned) }; 
pinned as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let exclusive: u64 = unsafe { ::core::mem::transmute(exclusive) }; exclusive as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let exclude_user: u64 = unsafe { ::core::mem::transmute(exclude_user) }; exclude_user as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let exclude_kernel: u64 = unsafe { ::core::mem::transmute(exclude_kernel) }; exclude_kernel as u64 }); __bindgen_bitfield_unit.set(6usize, 1u8, { let exclude_hv: u64 = unsafe { ::core::mem::transmute(exclude_hv) }; exclude_hv as u64 }); __bindgen_bitfield_unit.set(7usize, 1u8, { let exclude_idle: u64 = unsafe { ::core::mem::transmute(exclude_idle) }; exclude_idle as u64 }); __bindgen_bitfield_unit.set(8usize, 1u8, { let mmap: u64 = unsafe { ::core::mem::transmute(mmap) }; mmap as u64 }); __bindgen_bitfield_unit.set(9usize, 1u8, { let comm: u64 = unsafe { ::core::mem::transmute(comm) }; comm as u64 }); __bindgen_bitfield_unit.set(10usize, 1u8, { let freq: u64 = unsafe { ::core::mem::transmute(freq) }; freq as u64 }); __bindgen_bitfield_unit.set(11usize, 1u8, { let inherit_stat: u64 = unsafe { ::core::mem::transmute(inherit_stat) }; inherit_stat as u64 }); __bindgen_bitfield_unit.set(12usize, 1u8, { let enable_on_exec: u64 = unsafe { ::core::mem::transmute(enable_on_exec) }; enable_on_exec as u64 }); __bindgen_bitfield_unit.set(13usize, 1u8, { let task: u64 = unsafe { ::core::mem::transmute(task) }; task as u64 }); __bindgen_bitfield_unit.set(14usize, 1u8, { let watermark: u64 = unsafe { ::core::mem::transmute(watermark) }; watermark as u64 }); __bindgen_bitfield_unit.set(15usize, 2u8, { let precise_ip: u64 = unsafe { ::core::mem::transmute(precise_ip) }; precise_ip as u64 }); __bindgen_bitfield_unit.set(17usize, 1u8, { let mmap_data: u64 = unsafe { ::core::mem::transmute(mmap_data) }; mmap_data as u64 }); __bindgen_bitfield_unit.set(18usize, 1u8, { let sample_id_all: u64 = unsafe { ::core::mem::transmute(sample_id_all) }; sample_id_all as 
u64 }); __bindgen_bitfield_unit.set(19usize, 1u8, { let exclude_host: u64 = unsafe { ::core::mem::transmute(exclude_host) }; exclude_host as u64 }); __bindgen_bitfield_unit.set(20usize, 1u8, { let exclude_guest: u64 = unsafe { ::core::mem::transmute(exclude_guest) }; exclude_guest as u64 }); __bindgen_bitfield_unit.set(21usize, 1u8, { let exclude_callchain_kernel: u64 = unsafe { ::core::mem::transmute(exclude_callchain_kernel) }; exclude_callchain_kernel as u64 }); __bindgen_bitfield_unit.set(22usize, 1u8, { let exclude_callchain_user: u64 = unsafe { ::core::mem::transmute(exclude_callchain_user) }; exclude_callchain_user as u64 }); __bindgen_bitfield_unit.set(23usize, 1u8, { let mmap2: u64 = unsafe { ::core::mem::transmute(mmap2) }; mmap2 as u64 }); __bindgen_bitfield_unit.set(24usize, 1u8, { let comm_exec: u64 = unsafe { ::core::mem::transmute(comm_exec) }; comm_exec as u64 }); __bindgen_bitfield_unit.set(25usize, 1u8, { let use_clockid: u64 = unsafe { ::core::mem::transmute(use_clockid) }; use_clockid as u64 }); __bindgen_bitfield_unit.set(26usize, 1u8, { let context_switch: u64 = unsafe { ::core::mem::transmute(context_switch) }; context_switch as u64 }); __bindgen_bitfield_unit.set(27usize, 1u8, { let write_backward: u64 = unsafe { ::core::mem::transmute(write_backward) }; write_backward as u64 }); __bindgen_bitfield_unit.set(28usize, 1u8, { let namespaces: u64 = unsafe { ::core::mem::transmute(namespaces) }; namespaces as u64 }); __bindgen_bitfield_unit.set(29usize, 1u8, { let ksymbol: u64 = unsafe { ::core::mem::transmute(ksymbol) }; ksymbol as u64 }); __bindgen_bitfield_unit.set(30usize, 1u8, { let bpf_event: u64 = unsafe { ::core::mem::transmute(bpf_event) }; bpf_event as u64 }); __bindgen_bitfield_unit.set(31usize, 1u8, { let aux_output: u64 = unsafe { ::core::mem::transmute(aux_output) }; aux_output as u64 }); __bindgen_bitfield_unit.set(32usize, 1u8, { let cgroup: u64 = unsafe { ::core::mem::transmute(cgroup) }; cgroup as u64 }); 
__bindgen_bitfield_unit.set(33usize, 1u8, { let text_poke: u64 = unsafe { ::core::mem::transmute(text_poke) }; text_poke as u64 }); __bindgen_bitfield_unit.set(34usize, 1u8, { let build_id: u64 = unsafe { ::core::mem::transmute(build_id) }; build_id as u64 }); __bindgen_bitfield_unit.set(35usize, 1u8, { let inherit_thread: u64 = unsafe { ::core::mem::transmute(inherit_thread) }; inherit_thread as u64 }); __bindgen_bitfield_unit.set(36usize, 1u8, { let remove_on_exec: u64 = unsafe { ::core::mem::transmute(remove_on_exec) }; remove_on_exec as u64 }); __bindgen_bitfield_unit.set(37usize, 1u8, { let sigtrap: u64 = unsafe { ::core::mem::transmute(sigtrap) }; sigtrap as u64 }); __bindgen_bitfield_unit.set(38usize, 26u8, { let __reserved_1: u64 = unsafe { ::core::mem::transmute(__reserved_1) }; __reserved_1 as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_mmap_page { pub version: __u32, pub compat_version: __u32, pub lock: __u32, pub index: __u32, pub offset: __s64, pub time_enabled: __u64, pub time_running: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1, pub pmc_width: __u16, pub time_shift: __u16, pub time_mult: __u32, pub time_offset: __u64, pub time_zero: __u64, pub size: __u32, pub __reserved_1: __u32, pub time_cycles: __u64, pub time_mask: __u64, pub __reserved: [__u8; 928usize], pub data_head: __u64, pub data_tail: __u64, pub data_offset: __u64, pub data_size: __u64, pub aux_head: __u64, pub aux_tail: __u64, pub aux_offset: __u64, pub aux_size: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_mmap_page__bindgen_ty_1 { pub capabilities: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { pub _bitfield_align_1: [u64; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, } impl perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { #[inline] pub fn 
cap_bit0(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn cap_bit0_is_deprecated(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0_is_deprecated(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn cap_user_rdpmc(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_rdpmc(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_zero(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_zero(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_short(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_short(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn cap_____res(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 58u8) as u64) } } #[inline] pub fn set_cap_____res(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); 
self._bitfield_1.set(6usize, 58u8, val as u64) } } #[inline] pub fn new_bitfield_1( cap_bit0: __u64, cap_bit0_is_deprecated: __u64, cap_user_rdpmc: __u64, cap_user_time: __u64, cap_user_time_zero: __u64, cap_user_time_short: __u64, cap_____res: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let cap_bit0: u64 = unsafe { ::core::mem::transmute(cap_bit0) }; cap_bit0 as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let cap_bit0_is_deprecated: u64 = unsafe { ::core::mem::transmute(cap_bit0_is_deprecated) }; cap_bit0_is_deprecated as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let cap_user_rdpmc: u64 = unsafe { ::core::mem::transmute(cap_user_rdpmc) }; cap_user_rdpmc as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let cap_user_time: u64 = unsafe { ::core::mem::transmute(cap_user_time) }; cap_user_time as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let cap_user_time_zero: u64 = unsafe { ::core::mem::transmute(cap_user_time_zero) }; cap_user_time_zero as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let cap_user_time_short: u64 = unsafe { ::core::mem::transmute(cap_user_time_short) }; cap_user_time_short as u64 }); __bindgen_bitfield_unit.set(6usize, 58u8, { let cap_____res: u64 = unsafe { ::core::mem::transmute(cap_____res) }; cap_____res as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_header { pub type_: __u32, pub misc: __u16, pub size: __u16, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_type { PERF_RECORD_MMAP = 1, PERF_RECORD_LOST = 2, PERF_RECORD_COMM = 3, PERF_RECORD_EXIT = 4, PERF_RECORD_THROTTLE = 5, PERF_RECORD_UNTHROTTLE = 6, PERF_RECORD_FORK = 7, PERF_RECORD_READ = 8, PERF_RECORD_SAMPLE = 9, PERF_RECORD_MMAP2 = 10, PERF_RECORD_AUX = 11, PERF_RECORD_ITRACE_START = 12, PERF_RECORD_LOST_SAMPLES = 13, 
PERF_RECORD_SWITCH = 14, PERF_RECORD_SWITCH_CPU_WIDE = 15, PERF_RECORD_NAMESPACES = 16, PERF_RECORD_KSYMBOL = 17, PERF_RECORD_BPF_EVENT = 18, PERF_RECORD_CGROUP = 19, PERF_RECORD_TEXT_POKE = 20, PERF_RECORD_AUX_OUTPUT_HW_ID = 21, PERF_RECORD_MAX = 22, } pub const TCA_BPF_UNSPEC: _bindgen_ty_154 = 0; pub const TCA_BPF_ACT: _bindgen_ty_154 = 1; pub const TCA_BPF_POLICE: _bindgen_ty_154 = 2; pub const TCA_BPF_CLASSID: _bindgen_ty_154 = 3; pub const TCA_BPF_OPS_LEN: _bindgen_ty_154 = 4; pub const TCA_BPF_OPS: _bindgen_ty_154 = 5; pub const TCA_BPF_FD: _bindgen_ty_154 = 6; pub const TCA_BPF_NAME: _bindgen_ty_154 = 7; pub const TCA_BPF_FLAGS: _bindgen_ty_154 = 8; pub const TCA_BPF_FLAGS_GEN: _bindgen_ty_154 = 9; pub const TCA_BPF_TAG: _bindgen_ty_154 = 10; pub const TCA_BPF_ID: _bindgen_ty_154 = 11; pub const __TCA_BPF_MAX: _bindgen_ty_154 = 12; pub type _bindgen_ty_154 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct ifinfomsg { pub ifi_family: ::core::ffi::c_uchar, pub __ifi_pad: ::core::ffi::c_uchar, pub ifi_type: ::core::ffi::c_ushort, pub ifi_index: ::core::ffi::c_int, pub ifi_flags: ::core::ffi::c_uint, pub ifi_change: ::core::ffi::c_uint, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct tcmsg { pub tcm_family: ::core::ffi::c_uchar, pub tcm__pad1: ::core::ffi::c_uchar, pub tcm__pad2: ::core::ffi::c_ushort, pub tcm_ifindex: ::core::ffi::c_int, pub tcm_handle: __u32, pub tcm_parent: __u32, pub tcm_info: __u32, } pub const TCA_UNSPEC: _bindgen_ty_172 = 0; pub const TCA_KIND: _bindgen_ty_172 = 1; pub const TCA_OPTIONS: _bindgen_ty_172 = 2; pub const TCA_STATS: _bindgen_ty_172 = 3; pub const TCA_XSTATS: _bindgen_ty_172 = 4; pub const TCA_RATE: _bindgen_ty_172 = 5; pub const TCA_FCNT: _bindgen_ty_172 = 6; pub const TCA_STATS2: _bindgen_ty_172 = 7; pub const TCA_STAB: _bindgen_ty_172 = 8; pub const TCA_PAD: _bindgen_ty_172 = 9; pub const TCA_DUMP_INVISIBLE: _bindgen_ty_172 = 10; pub const TCA_CHAIN: _bindgen_ty_172 = 11; pub const 
TCA_HW_OFFLOAD: _bindgen_ty_172 = 12;
pub const TCA_INGRESS_BLOCK: _bindgen_ty_172 = 13;
pub const TCA_EGRESS_BLOCK: _bindgen_ty_172 = 14;
pub const __TCA_MAX: _bindgen_ty_172 = 15;
pub type _bindgen_ty_172 = ::core::ffi::c_uint;
pub const AYA_PERF_EVENT_IOC_ENABLE: ::core::ffi::c_int = 9216;
pub const AYA_PERF_EVENT_IOC_DISABLE: ::core::ffi::c_int = 9217;
pub const AYA_PERF_EVENT_IOC_SET_BPF: ::core::ffi::c_int = 1074013192;
aya-obj-0.2.1/src/generated/linux_bindings_s390x.rs
/* automatically generated by rust-bindgen 0.70.1 */
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, Eq, Hash, Ord, PartialEq, PartialOrd)]
pub struct __BindgenBitfieldUnit<Storage> {
    storage: Storage,
}
impl<Storage> __BindgenBitfieldUnit<Storage> {
    #[inline]
    pub const fn new(storage: Storage) -> Self {
        Self { storage }
    }
}
impl<Storage> __BindgenBitfieldUnit<Storage>
where
    Storage: AsRef<[u8]> + AsMut<[u8]>,
{
    #[inline]
    pub fn get_bit(&self, index: usize) -> bool {
        debug_assert!(index / 8 < self.storage.as_ref().len());
        let byte_index = index / 8;
        let byte = self.storage.as_ref()[byte_index];
        let bit_index = if cfg!(target_endian = "big") {
            7 - (index % 8)
        } else {
            index % 8
        };
        let mask = 1 << bit_index;
        byte & mask == mask
    }
    #[inline]
    pub fn set_bit(&mut self, index: usize, val: bool) {
        debug_assert!(index / 8 < self.storage.as_ref().len());
        let byte_index = index / 8;
        let byte = &mut self.storage.as_mut()[byte_index];
        let bit_index = if cfg!(target_endian = "big") {
            7 - (index % 8)
        } else {
            index % 8
        };
        let mask = 1 << bit_index;
        if val {
            *byte |= mask;
        } else {
            *byte &= !mask;
        }
    }
    #[inline]
    pub fn get(&self, bit_offset: usize, bit_width: u8) -> u64 {
        debug_assert!(bit_width <= 64);
        debug_assert!(bit_offset / 8 < self.storage.as_ref().len());
        debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len());
        let mut val = 0;
        for i in 0..(bit_width as usize) {
            if self.get_bit(i + bit_offset) {
                let index = if cfg!(target_endian = "big") {
                    bit_width as usize - 1 - i
                } else {
                    i
                };
                val |= 1 << index;
            }
        }
        val
    }
    #[inline]
    pub fn set(&mut self, bit_offset: usize, bit_width: u8, val: u64) {
        debug_assert!(bit_width <= 64);
        debug_assert!(bit_offset / 8 < self.storage.as_ref().len());
        debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len());
        for i in 0..(bit_width as usize) {
            let mask = 1 << i;
            let val_bit_is_set = val & mask == mask;
            let index = if cfg!(target_endian = "big") {
                bit_width as usize - 1 - i
            } else {
                i
            };
            self.set_bit(index + bit_offset, val_bit_is_set);
        }
    }
}
#[repr(C)]
#[derive(Default)]
pub struct __IncompleteArrayField<T>(::core::marker::PhantomData<T>, [T; 0]);
impl<T> __IncompleteArrayField<T> {
    #[inline]
    pub const fn new() -> Self {
        __IncompleteArrayField(::core::marker::PhantomData, [])
    }
    #[inline]
    pub fn as_ptr(&self) -> *const T {
        self as *const _ as *const T
    }
    #[inline]
    pub fn as_mut_ptr(&mut self) -> *mut T {
        self as *mut _ as *mut T
    }
    #[inline]
    pub unsafe fn as_slice(&self, len: usize) -> &[T] {
        ::core::slice::from_raw_parts(self.as_ptr(), len)
    }
    #[inline]
    pub unsafe fn as_mut_slice(&mut self, len: usize) -> &mut [T] {
        ::core::slice::from_raw_parts_mut(self.as_mut_ptr(), len)
    }
}
impl<T> ::core::fmt::Debug for __IncompleteArrayField<T> {
    fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result {
        fmt.write_str("__IncompleteArrayField")
    }
}
pub const SO_ATTACH_BPF: u32 = 50;
pub const SO_DETACH_BPF: u32 = 27;
pub const BPF_LD: u32 = 0;
pub const BPF_LDX: u32 = 1;
pub const BPF_ST: u32 = 2;
pub const BPF_STX: u32 = 3;
pub const BPF_ALU: u32 = 4;
pub const BPF_JMP: u32 = 5;
pub const BPF_W: u32 = 0;
pub const BPF_H: u32 = 8;
pub const BPF_B: u32 = 16;
pub const BPF_K: u32 = 0;
pub const BPF_ALU64: u32 = 7;
pub const BPF_DW: u32 = 24;
pub const BPF_CALL: u32 = 128;
pub const BPF_F_ALLOW_OVERRIDE: u32 = 1;
pub const BPF_F_ALLOW_MULTI: u32 = 2;
pub const BPF_F_REPLACE: u32 = 4;
pub const BPF_F_BEFORE: u32 = 8;
pub const BPF_F_AFTER: u32 = 16;
pub const
BPF_F_ID: u32 = 32; pub const BPF_F_STRICT_ALIGNMENT: u32 = 1; pub const BPF_F_ANY_ALIGNMENT: u32 = 2; pub const BPF_F_TEST_RND_HI32: u32 = 4; pub const BPF_F_TEST_STATE_FREQ: u32 = 8; pub const BPF_F_SLEEPABLE: u32 = 16; pub const BPF_F_XDP_HAS_FRAGS: u32 = 32; pub const BPF_F_XDP_DEV_BOUND_ONLY: u32 = 64; pub const BPF_F_TEST_REG_INVARIANTS: u32 = 128; pub const BPF_F_NETFILTER_IP_DEFRAG: u32 = 1; pub const BPF_PSEUDO_MAP_FD: u32 = 1; pub const BPF_PSEUDO_MAP_IDX: u32 = 5; pub const BPF_PSEUDO_MAP_VALUE: u32 = 2; pub const BPF_PSEUDO_MAP_IDX_VALUE: u32 = 6; pub const BPF_PSEUDO_BTF_ID: u32 = 3; pub const BPF_PSEUDO_FUNC: u32 = 4; pub const BPF_PSEUDO_CALL: u32 = 1; pub const BPF_PSEUDO_KFUNC_CALL: u32 = 2; pub const BPF_F_QUERY_EFFECTIVE: u32 = 1; pub const BPF_F_TEST_RUN_ON_CPU: u32 = 1; pub const BPF_F_TEST_XDP_LIVE_FRAMES: u32 = 2; pub const BTF_INT_SIGNED: u32 = 1; pub const BTF_INT_CHAR: u32 = 2; pub const BTF_INT_BOOL: u32 = 4; pub const NLMSG_ALIGNTO: u32 = 4; pub const XDP_FLAGS_UPDATE_IF_NOEXIST: u32 = 1; pub const XDP_FLAGS_SKB_MODE: u32 = 2; pub const XDP_FLAGS_DRV_MODE: u32 = 4; pub const XDP_FLAGS_HW_MODE: u32 = 8; pub const XDP_FLAGS_REPLACE: u32 = 16; pub const XDP_FLAGS_MODES: u32 = 14; pub const XDP_FLAGS_MASK: u32 = 31; pub const PERF_MAX_STACK_DEPTH: u32 = 127; pub const PERF_MAX_CONTEXTS_PER_STACK: u32 = 8; pub const PERF_FLAG_FD_NO_GROUP: u32 = 1; pub const PERF_FLAG_FD_OUTPUT: u32 = 2; pub const PERF_FLAG_PID_CGROUP: u32 = 4; pub const PERF_FLAG_FD_CLOEXEC: u32 = 8; pub const TC_H_MAJ_MASK: u32 = 4294901760; pub const TC_H_MIN_MASK: u32 = 65535; pub const TC_H_UNSPEC: u32 = 0; pub const TC_H_ROOT: u32 = 4294967295; pub const TC_H_INGRESS: u32 = 4294967281; pub const TC_H_CLSACT: u32 = 4294967281; pub const TC_H_MIN_PRIORITY: u32 = 65504; pub const TC_H_MIN_INGRESS: u32 = 65522; pub const TC_H_MIN_EGRESS: u32 = 65523; pub const TCA_BPF_FLAG_ACT_DIRECT: u32 = 1; pub type __u8 = ::core::ffi::c_uchar; pub type __s16 = ::core::ffi::c_short; pub 
type __u16 = ::core::ffi::c_ushort; pub type __s32 = ::core::ffi::c_int; pub type __u32 = ::core::ffi::c_uint; pub type __s64 = ::core::ffi::c_longlong; pub type __u64 = ::core::ffi::c_ulonglong; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_insn { pub code: __u8, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 1usize]>, pub off: __s16, pub imm: __s32, } impl bpf_insn { #[inline] pub fn dst_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 4u8) as u8) } } #[inline] pub fn set_dst_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 4u8, val as u64) } } #[inline] pub fn src_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 4u8) as u8) } } #[inline] pub fn set_src_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 4u8, val as u64) } } #[inline] pub fn new_bitfield_1(dst_reg: __u8, src_reg: __u8) -> __BindgenBitfieldUnit<[u8; 1usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 1usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 4u8, { let dst_reg: u8 = unsafe { ::core::mem::transmute(dst_reg) }; dst_reg as u64 }); __bindgen_bitfield_unit.set(4usize, 4u8, { let src_reg: u8 = unsafe { ::core::mem::transmute(src_reg) }; src_reg as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug)] pub struct bpf_lpm_trie_key { pub prefixlen: __u32, pub data: __IncompleteArrayField<__u8>, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cgroup_iter_order { BPF_CGROUP_ITER_ORDER_UNSPEC = 0, BPF_CGROUP_ITER_SELF_ONLY = 1, BPF_CGROUP_ITER_DESCENDANTS_PRE = 2, BPF_CGROUP_ITER_DESCENDANTS_POST = 3, BPF_CGROUP_ITER_ANCESTORS_UP = 4, } impl bpf_cmd { pub const BPF_PROG_RUN: bpf_cmd = bpf_cmd::BPF_PROG_TEST_RUN; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cmd { 
    BPF_MAP_CREATE = 0,
    BPF_MAP_LOOKUP_ELEM = 1,
    BPF_MAP_UPDATE_ELEM = 2,
    BPF_MAP_DELETE_ELEM = 3,
    BPF_MAP_GET_NEXT_KEY = 4,
    BPF_PROG_LOAD = 5,
    BPF_OBJ_PIN = 6,
    BPF_OBJ_GET = 7,
    BPF_PROG_ATTACH = 8,
    BPF_PROG_DETACH = 9,
    BPF_PROG_TEST_RUN = 10,
    BPF_PROG_GET_NEXT_ID = 11,
    BPF_MAP_GET_NEXT_ID = 12,
    BPF_PROG_GET_FD_BY_ID = 13,
    BPF_MAP_GET_FD_BY_ID = 14,
    BPF_OBJ_GET_INFO_BY_FD = 15,
    BPF_PROG_QUERY = 16,
    BPF_RAW_TRACEPOINT_OPEN = 17,
    BPF_BTF_LOAD = 18,
    BPF_BTF_GET_FD_BY_ID = 19,
    BPF_TASK_FD_QUERY = 20,
    BPF_MAP_LOOKUP_AND_DELETE_ELEM = 21,
    BPF_MAP_FREEZE = 22,
    BPF_BTF_GET_NEXT_ID = 23,
    BPF_MAP_LOOKUP_BATCH = 24,
    BPF_MAP_LOOKUP_AND_DELETE_BATCH = 25,
    BPF_MAP_UPDATE_BATCH = 26,
    BPF_MAP_DELETE_BATCH = 27,
    BPF_LINK_CREATE = 28,
    BPF_LINK_UPDATE = 29,
    BPF_LINK_GET_FD_BY_ID = 30,
    BPF_LINK_GET_NEXT_ID = 31,
    BPF_ENABLE_STATS = 32,
    BPF_ITER_CREATE = 33,
    BPF_LINK_DETACH = 34,
    BPF_PROG_BIND_MAP = 35,
    BPF_TOKEN_CREATE = 36,
    __MAX_BPF_CMD = 37,
}
impl bpf_map_type {
    pub const BPF_MAP_TYPE_CGROUP_STORAGE: bpf_map_type =
        bpf_map_type::BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED;
}
impl bpf_map_type {
    pub const BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: bpf_map_type =
        bpf_map_type::BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED;
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_map_type {
    BPF_MAP_TYPE_UNSPEC = 0,
    BPF_MAP_TYPE_HASH = 1,
    BPF_MAP_TYPE_ARRAY = 2,
    BPF_MAP_TYPE_PROG_ARRAY = 3,
    BPF_MAP_TYPE_PERF_EVENT_ARRAY = 4,
    BPF_MAP_TYPE_PERCPU_HASH = 5,
    BPF_MAP_TYPE_PERCPU_ARRAY = 6,
    BPF_MAP_TYPE_STACK_TRACE = 7,
    BPF_MAP_TYPE_CGROUP_ARRAY = 8,
    BPF_MAP_TYPE_LRU_HASH = 9,
    BPF_MAP_TYPE_LRU_PERCPU_HASH = 10,
    BPF_MAP_TYPE_LPM_TRIE = 11,
    BPF_MAP_TYPE_ARRAY_OF_MAPS = 12,
    BPF_MAP_TYPE_HASH_OF_MAPS = 13,
    BPF_MAP_TYPE_DEVMAP = 14,
    BPF_MAP_TYPE_SOCKMAP = 15,
    BPF_MAP_TYPE_CPUMAP = 16,
    BPF_MAP_TYPE_XSKMAP = 17,
    BPF_MAP_TYPE_SOCKHASH = 18,
    BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED = 19,
    BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20,
    BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED = 21,
    BPF_MAP_TYPE_QUEUE = 22,
    BPF_MAP_TYPE_STACK = 23,
    BPF_MAP_TYPE_SK_STORAGE = 24,
    BPF_MAP_TYPE_DEVMAP_HASH = 25,
    BPF_MAP_TYPE_STRUCT_OPS = 26,
    BPF_MAP_TYPE_RINGBUF = 27,
    BPF_MAP_TYPE_INODE_STORAGE = 28,
    BPF_MAP_TYPE_TASK_STORAGE = 29,
    BPF_MAP_TYPE_BLOOM_FILTER = 30,
    BPF_MAP_TYPE_USER_RINGBUF = 31,
    BPF_MAP_TYPE_CGRP_STORAGE = 32,
    BPF_MAP_TYPE_ARENA = 33,
    __MAX_BPF_MAP_TYPE = 34,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_prog_type {
    BPF_PROG_TYPE_UNSPEC = 0,
    BPF_PROG_TYPE_SOCKET_FILTER = 1,
    BPF_PROG_TYPE_KPROBE = 2,
    BPF_PROG_TYPE_SCHED_CLS = 3,
    BPF_PROG_TYPE_SCHED_ACT = 4,
    BPF_PROG_TYPE_TRACEPOINT = 5,
    BPF_PROG_TYPE_XDP = 6,
    BPF_PROG_TYPE_PERF_EVENT = 7,
    BPF_PROG_TYPE_CGROUP_SKB = 8,
    BPF_PROG_TYPE_CGROUP_SOCK = 9,
    BPF_PROG_TYPE_LWT_IN = 10,
    BPF_PROG_TYPE_LWT_OUT = 11,
    BPF_PROG_TYPE_LWT_XMIT = 12,
    BPF_PROG_TYPE_SOCK_OPS = 13,
    BPF_PROG_TYPE_SK_SKB = 14,
    BPF_PROG_TYPE_CGROUP_DEVICE = 15,
    BPF_PROG_TYPE_SK_MSG = 16,
    BPF_PROG_TYPE_RAW_TRACEPOINT = 17,
    BPF_PROG_TYPE_CGROUP_SOCK_ADDR = 18,
    BPF_PROG_TYPE_LWT_SEG6LOCAL = 19,
    BPF_PROG_TYPE_LIRC_MODE2 = 20,
    BPF_PROG_TYPE_SK_REUSEPORT = 21,
    BPF_PROG_TYPE_FLOW_DISSECTOR = 22,
    BPF_PROG_TYPE_CGROUP_SYSCTL = 23,
    BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE = 24,
    BPF_PROG_TYPE_CGROUP_SOCKOPT = 25,
    BPF_PROG_TYPE_TRACING = 26,
    BPF_PROG_TYPE_STRUCT_OPS = 27,
    BPF_PROG_TYPE_EXT = 28,
    BPF_PROG_TYPE_LSM = 29,
    BPF_PROG_TYPE_SK_LOOKUP = 30,
    BPF_PROG_TYPE_SYSCALL = 31,
    BPF_PROG_TYPE_NETFILTER = 32,
    __MAX_BPF_PROG_TYPE = 33,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_attach_type {
    BPF_CGROUP_INET_INGRESS = 0,
    BPF_CGROUP_INET_EGRESS = 1,
    BPF_CGROUP_INET_SOCK_CREATE = 2,
    BPF_CGROUP_SOCK_OPS = 3,
    BPF_SK_SKB_STREAM_PARSER = 4,
    BPF_SK_SKB_STREAM_VERDICT = 5,
    BPF_CGROUP_DEVICE = 6,
    BPF_SK_MSG_VERDICT = 7,
    BPF_CGROUP_INET4_BIND = 8,
    BPF_CGROUP_INET6_BIND = 9,
    BPF_CGROUP_INET4_CONNECT = 10,
    BPF_CGROUP_INET6_CONNECT = 11,
    BPF_CGROUP_INET4_POST_BIND = 12,
    BPF_CGROUP_INET6_POST_BIND = 13,
    BPF_CGROUP_UDP4_SENDMSG = 14,
    BPF_CGROUP_UDP6_SENDMSG = 15,
    BPF_LIRC_MODE2 = 16,
    BPF_FLOW_DISSECTOR = 17,
    BPF_CGROUP_SYSCTL = 18,
    BPF_CGROUP_UDP4_RECVMSG = 19,
    BPF_CGROUP_UDP6_RECVMSG = 20,
    BPF_CGROUP_GETSOCKOPT = 21,
    BPF_CGROUP_SETSOCKOPT = 22,
    BPF_TRACE_RAW_TP = 23,
    BPF_TRACE_FENTRY = 24,
    BPF_TRACE_FEXIT = 25,
    BPF_MODIFY_RETURN = 26,
    BPF_LSM_MAC = 27,
    BPF_TRACE_ITER = 28,
    BPF_CGROUP_INET4_GETPEERNAME = 29,
    BPF_CGROUP_INET6_GETPEERNAME = 30,
    BPF_CGROUP_INET4_GETSOCKNAME = 31,
    BPF_CGROUP_INET6_GETSOCKNAME = 32,
    BPF_XDP_DEVMAP = 33,
    BPF_CGROUP_INET_SOCK_RELEASE = 34,
    BPF_XDP_CPUMAP = 35,
    BPF_SK_LOOKUP = 36,
    BPF_XDP = 37,
    BPF_SK_SKB_VERDICT = 38,
    BPF_SK_REUSEPORT_SELECT = 39,
    BPF_SK_REUSEPORT_SELECT_OR_MIGRATE = 40,
    BPF_PERF_EVENT = 41,
    BPF_TRACE_KPROBE_MULTI = 42,
    BPF_LSM_CGROUP = 43,
    BPF_STRUCT_OPS = 44,
    BPF_NETFILTER = 45,
    BPF_TCX_INGRESS = 46,
    BPF_TCX_EGRESS = 47,
    BPF_TRACE_UPROBE_MULTI = 48,
    BPF_CGROUP_UNIX_CONNECT = 49,
    BPF_CGROUP_UNIX_SENDMSG = 50,
    BPF_CGROUP_UNIX_RECVMSG = 51,
    BPF_CGROUP_UNIX_GETPEERNAME = 52,
    BPF_CGROUP_UNIX_GETSOCKNAME = 53,
    BPF_NETKIT_PRIMARY = 54,
    BPF_NETKIT_PEER = 55,
    __MAX_BPF_ATTACH_TYPE = 56,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_link_type {
    BPF_LINK_TYPE_UNSPEC = 0,
    BPF_LINK_TYPE_RAW_TRACEPOINT = 1,
    BPF_LINK_TYPE_TRACING = 2,
    BPF_LINK_TYPE_CGROUP = 3,
    BPF_LINK_TYPE_ITER = 4,
    BPF_LINK_TYPE_NETNS = 5,
    BPF_LINK_TYPE_XDP = 6,
    BPF_LINK_TYPE_PERF_EVENT = 7,
    BPF_LINK_TYPE_KPROBE_MULTI = 8,
    BPF_LINK_TYPE_STRUCT_OPS = 9,
    BPF_LINK_TYPE_NETFILTER = 10,
    BPF_LINK_TYPE_TCX = 11,
    BPF_LINK_TYPE_UPROBE_MULTI = 12,
    BPF_LINK_TYPE_NETKIT = 13,
    __MAX_BPF_LINK_TYPE = 14,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_perf_event_type {
    BPF_PERF_EVENT_UNSPEC = 0,
    BPF_PERF_EVENT_UPROBE = 1,
    BPF_PERF_EVENT_URETPROBE = 2,
    BPF_PERF_EVENT_KPROBE = 3,
    BPF_PERF_EVENT_KRETPROBE = 4,
    BPF_PERF_EVENT_TRACEPOINT = 5,
    BPF_PERF_EVENT_EVENT = 6,
}
pub const BPF_F_KPROBE_MULTI_RETURN: _bindgen_ty_2 = 1;
pub type _bindgen_ty_2 = ::core::ffi::c_uint;
pub const BPF_F_UPROBE_MULTI_RETURN: _bindgen_ty_3 = 1;
pub type _bindgen_ty_3 = ::core::ffi::c_uint;
pub const BPF_ANY: _bindgen_ty_4 = 0;
pub const BPF_NOEXIST: _bindgen_ty_4 = 1;
pub const BPF_EXIST: _bindgen_ty_4 = 2;
pub const BPF_F_LOCK: _bindgen_ty_4 = 4;
pub type _bindgen_ty_4 = ::core::ffi::c_uint;
pub const BPF_F_NO_PREALLOC: _bindgen_ty_5 = 1;
pub const BPF_F_NO_COMMON_LRU: _bindgen_ty_5 = 2;
pub const BPF_F_NUMA_NODE: _bindgen_ty_5 = 4;
pub const BPF_F_RDONLY: _bindgen_ty_5 = 8;
pub const BPF_F_WRONLY: _bindgen_ty_5 = 16;
pub const BPF_F_STACK_BUILD_ID: _bindgen_ty_5 = 32;
pub const BPF_F_ZERO_SEED: _bindgen_ty_5 = 64;
pub const BPF_F_RDONLY_PROG: _bindgen_ty_5 = 128;
pub const BPF_F_WRONLY_PROG: _bindgen_ty_5 = 256;
pub const BPF_F_CLONE: _bindgen_ty_5 = 512;
pub const BPF_F_MMAPABLE: _bindgen_ty_5 = 1024;
pub const BPF_F_PRESERVE_ELEMS: _bindgen_ty_5 = 2048;
pub const BPF_F_INNER_MAP: _bindgen_ty_5 = 4096;
pub const BPF_F_LINK: _bindgen_ty_5 = 8192;
pub const BPF_F_PATH_FD: _bindgen_ty_5 = 16384;
pub const BPF_F_VTYPE_BTF_OBJ_FD: _bindgen_ty_5 = 32768;
pub const BPF_F_TOKEN_FD: _bindgen_ty_5 = 65536;
pub const BPF_F_SEGV_ON_FAULT: _bindgen_ty_5 = 131072;
pub const BPF_F_NO_USER_CONV: _bindgen_ty_5 = 262144;
pub type _bindgen_ty_5 = ::core::ffi::c_uint;
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum bpf_stats_type {
    BPF_STATS_RUN_TIME = 0,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr {
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_1,
    pub __bindgen_anon_2: bpf_attr__bindgen_ty_2,
    pub batch: bpf_attr__bindgen_ty_3,
    pub __bindgen_anon_3: bpf_attr__bindgen_ty_4,
    pub __bindgen_anon_4: bpf_attr__bindgen_ty_5,
    pub __bindgen_anon_5: bpf_attr__bindgen_ty_6,
    pub test: bpf_attr__bindgen_ty_7,
    pub __bindgen_anon_6: bpf_attr__bindgen_ty_8,
    pub info: bpf_attr__bindgen_ty_9,
    pub query: bpf_attr__bindgen_ty_10,
    pub raw_tracepoint: bpf_attr__bindgen_ty_11,
    pub __bindgen_anon_7: bpf_attr__bindgen_ty_12,
    pub task_fd_query: bpf_attr__bindgen_ty_13,
    pub link_create: bpf_attr__bindgen_ty_14,
    pub link_update: bpf_attr__bindgen_ty_15,
    pub link_detach: bpf_attr__bindgen_ty_16,
    pub enable_stats: bpf_attr__bindgen_ty_17,
    pub iter_create: bpf_attr__bindgen_ty_18,
    pub prog_bind_map: bpf_attr__bindgen_ty_19,
    pub token_create: bpf_attr__bindgen_ty_20,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_1 {
    pub map_type: __u32,
    pub key_size: __u32,
    pub value_size: __u32,
    pub max_entries: __u32,
    pub map_flags: __u32,
    pub inner_map_fd: __u32,
    pub numa_node: __u32,
    pub map_name: [::core::ffi::c_char; 16usize],
    pub map_ifindex: __u32,
    pub btf_fd: __u32,
    pub btf_key_type_id: __u32,
    pub btf_value_type_id: __u32,
    pub btf_vmlinux_value_type_id: __u32,
    pub map_extra: __u64,
    pub value_type_btf_obj_fd: __s32,
    pub map_token_fd: __s32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_2 {
    pub map_fd: __u32,
    pub key: __u64,
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_2__bindgen_ty_1,
    pub flags: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_2__bindgen_ty_1 {
    pub value: __u64,
    pub next_key: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_3 {
    pub in_batch: __u64,
    pub out_batch: __u64,
    pub keys: __u64,
    pub values: __u64,
    pub count: __u32,
    pub map_fd: __u32,
    pub elem_flags: __u64,
    pub flags: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_4 {
    pub prog_type: __u32,
    pub insn_cnt: __u32,
    pub insns: __u64,
    pub license: __u64,
    pub log_level: __u32,
    pub log_size: __u32,
    pub log_buf: __u64,
    pub kern_version: __u32,
    pub prog_flags: __u32,
    pub prog_name: [::core::ffi::c_char; 16usize],
    pub prog_ifindex: __u32,
    pub expected_attach_type: __u32,
    pub prog_btf_fd: __u32,
    pub func_info_rec_size: __u32,
    pub func_info: __u64,
    pub func_info_cnt: __u32,
    pub line_info_rec_size: __u32,
    pub line_info: __u64,
    pub line_info_cnt: __u32,
    pub attach_btf_id: __u32,
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_4__bindgen_ty_1,
    pub core_relo_cnt: __u32,
    pub fd_array: __u64,
    pub core_relos: __u64,
    pub core_relo_rec_size: __u32,
    pub log_true_size: __u32,
    pub prog_token_fd: __s32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_4__bindgen_ty_1 {
    pub attach_prog_fd: __u32,
    pub attach_btf_obj_fd: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_5 {
    pub pathname: __u64,
    pub bpf_fd: __u32,
    pub file_flags: __u32,
    pub path_fd: __s32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_6 {
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_6__bindgen_ty_1,
    pub attach_bpf_fd: __u32,
    pub attach_type: __u32,
    pub attach_flags: __u32,
    pub replace_bpf_fd: __u32,
    pub __bindgen_anon_2: bpf_attr__bindgen_ty_6__bindgen_ty_2,
    pub expected_revision: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_6__bindgen_ty_1 {
    pub target_fd: __u32,
    pub target_ifindex: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_6__bindgen_ty_2 {
    pub relative_fd: __u32,
    pub relative_id: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_7 {
    pub prog_fd: __u32,
    pub retval: __u32,
    pub data_size_in: __u32,
    pub data_size_out: __u32,
    pub data_in: __u64,
    pub data_out: __u64,
    pub repeat: __u32,
    pub duration: __u32,
    pub ctx_size_in: __u32,
    pub ctx_size_out: __u32,
    pub ctx_in: __u64,
    pub ctx_out: __u64,
    pub flags: __u32,
    pub cpu: __u32,
    pub batch_size: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_8 {
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_8__bindgen_ty_1,
    pub next_id: __u32,
    pub open_flags: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_8__bindgen_ty_1 {
    pub start_id: __u32,
    pub prog_id: __u32,
    pub map_id: __u32,
    pub btf_id: __u32,
    pub link_id: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_9 {
    pub bpf_fd: __u32,
    pub info_len: __u32,
    pub info: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_10 {
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_10__bindgen_ty_1,
    pub attach_type: __u32,
    pub query_flags: __u32,
    pub attach_flags: __u32,
    pub prog_ids: __u64,
    pub __bindgen_anon_2: bpf_attr__bindgen_ty_10__bindgen_ty_2,
    pub _bitfield_align_1: [u8; 0],
    pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>,
    pub prog_attach_flags: __u64,
    pub link_ids: __u64,
    pub link_attach_flags: __u64,
    pub revision: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_10__bindgen_ty_1 {
    pub target_fd: __u32,
    pub target_ifindex: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_10__bindgen_ty_2 {
    pub prog_cnt: __u32,
    pub count: __u32,
}
impl bpf_attr__bindgen_ty_10 {
    #[inline]
    pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> {
        let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default();
        __bindgen_bitfield_unit
    }
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_11 {
    pub name: __u64,
    pub prog_fd: __u32,
    pub _bitfield_align_1: [u8; 0],
    pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>,
    pub cookie: __u64,
}
impl bpf_attr__bindgen_ty_11 {
    #[inline]
    pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> {
        let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default();
        __bindgen_bitfield_unit
    }
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_12 {
    pub btf: __u64,
    pub btf_log_buf: __u64,
    pub btf_size: __u32,
    pub btf_log_size: __u32,
    pub btf_log_level: __u32,
    pub btf_log_true_size: __u32,
    pub btf_flags: __u32,
    pub btf_token_fd: __s32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_13 {
    pub pid: __u32,
    pub fd: __u32,
    pub flags: __u32,
    pub buf_len: __u32,
    pub buf: __u64,
    pub prog_id: __u32,
    pub fd_type: __u32,
    pub probe_offset: __u64,
    pub probe_addr: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14 {
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_1,
    pub __bindgen_anon_2: bpf_attr__bindgen_ty_14__bindgen_ty_2,
    pub attach_type: __u32,
    pub flags: __u32,
    pub __bindgen_anon_3: bpf_attr__bindgen_ty_14__bindgen_ty_3,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_14__bindgen_ty_1 {
    pub prog_fd: __u32,
    pub map_fd: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_14__bindgen_ty_2 {
    pub target_fd: __u32,
    pub target_ifindex: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_14__bindgen_ty_3 {
    pub target_btf_id: __u32,
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1,
    pub perf_event: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2,
    pub kprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3,
    pub tracing: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4,
    pub netfilter: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5,
    pub tcx: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6,
    pub uprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7,
    pub netkit: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1 {
    pub iter_info: __u64,
    pub iter_info_len: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2 {
    pub bpf_cookie: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3 {
    pub flags: __u32,
    pub cnt: __u32,
    pub syms: __u64,
    pub addrs: __u64,
    pub cookies: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4 {
    pub target_btf_id: __u32,
    pub cookie: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5 {
    pub pf: __u32,
    pub hooknum: __u32,
    pub priority: __s32,
    pub flags: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6 {
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1,
    pub expected_revision: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1 {
    pub relative_fd: __u32,
    pub relative_id: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7 {
    pub path: __u64,
    pub offsets: __u64,
    pub ref_ctr_offsets: __u64,
    pub cookies: __u64,
    pub cnt: __u32,
    pub flags: __u32,
    pub pid: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8 {
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1,
    pub expected_revision: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1 {
    pub relative_fd: __u32,
    pub relative_id: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_attr__bindgen_ty_15 {
    pub link_fd: __u32,
    pub __bindgen_anon_1: bpf_attr__bindgen_ty_15__bindgen_ty_1,
    pub flags: __u32,
    pub __bindgen_anon_2: bpf_attr__bindgen_ty_15__bindgen_ty_2,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_15__bindgen_ty_1 {
    pub new_prog_fd: __u32,
    pub new_map_fd: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_attr__bindgen_ty_15__bindgen_ty_2 {
    pub old_prog_fd: __u32,
    pub old_map_fd: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_16 {
    pub link_fd: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_17 {
    pub type_: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_18 {
    pub link_fd: __u32,
    pub flags: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_19 {
    pub prog_fd: __u32,
    pub map_fd: __u32,
    pub flags: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_attr__bindgen_ty_20 {
    pub flags: __u32,
    pub bpffs_fd: __u32,
}
pub const BPF_F_RECOMPUTE_CSUM: _bindgen_ty_6 = 1;
pub const BPF_F_INVALIDATE_HASH: _bindgen_ty_6 = 2;
pub type _bindgen_ty_6 = ::core::ffi::c_uint;
pub const BPF_F_HDR_FIELD_MASK: _bindgen_ty_7 = 15;
pub type _bindgen_ty_7 = ::core::ffi::c_uint;
pub const BPF_F_PSEUDO_HDR: _bindgen_ty_8 = 16;
pub const BPF_F_MARK_MANGLED_0: _bindgen_ty_8 = 32;
pub const BPF_F_MARK_ENFORCE: _bindgen_ty_8 = 64;
pub type _bindgen_ty_8 = ::core::ffi::c_uint;
pub const BPF_F_INGRESS: _bindgen_ty_9 = 1;
pub type _bindgen_ty_9 = ::core::ffi::c_uint;
pub const BPF_F_TUNINFO_IPV6: _bindgen_ty_10 = 1;
pub type _bindgen_ty_10 = ::core::ffi::c_uint;
pub const BPF_F_SKIP_FIELD_MASK: _bindgen_ty_11 = 255;
pub const BPF_F_USER_STACK: _bindgen_ty_11 = 256;
pub const BPF_F_FAST_STACK_CMP: _bindgen_ty_11 = 512;
pub const BPF_F_REUSE_STACKID: _bindgen_ty_11 = 1024;
pub const BPF_F_USER_BUILD_ID: _bindgen_ty_11 = 2048;
pub type _bindgen_ty_11 = ::core::ffi::c_uint;
pub const BPF_F_ZERO_CSUM_TX: _bindgen_ty_12 = 2;
pub const BPF_F_DONT_FRAGMENT: _bindgen_ty_12 = 4;
pub const BPF_F_SEQ_NUMBER: _bindgen_ty_12 = 8;
pub const BPF_F_NO_TUNNEL_KEY: _bindgen_ty_12 = 16;
pub type _bindgen_ty_12 = ::core::ffi::c_uint;
pub const BPF_F_TUNINFO_FLAGS: _bindgen_ty_13 = 16;
pub type _bindgen_ty_13 = ::core::ffi::c_uint;
pub const BPF_F_INDEX_MASK: _bindgen_ty_14 = 4294967295;
pub const BPF_F_CURRENT_CPU: _bindgen_ty_14 = 4294967295;
pub const BPF_F_CTXLEN_MASK: _bindgen_ty_14 = 4503595332403200;
pub type _bindgen_ty_14 = ::core::ffi::c_ulong;
pub const BPF_F_CURRENT_NETNS: _bindgen_ty_15 = -1;
pub type _bindgen_ty_15 = ::core::ffi::c_int;
pub const BPF_F_ADJ_ROOM_FIXED_GSO: _bindgen_ty_17 = 1;
pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV4: _bindgen_ty_17 = 2;
pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV6: _bindgen_ty_17 = 4;
pub const BPF_F_ADJ_ROOM_ENCAP_L4_GRE: _bindgen_ty_17 = 8;
pub const BPF_F_ADJ_ROOM_ENCAP_L4_UDP: _bindgen_ty_17 = 16;
pub const BPF_F_ADJ_ROOM_NO_CSUM_RESET: _bindgen_ty_17 = 32;
pub const BPF_F_ADJ_ROOM_ENCAP_L2_ETH: _bindgen_ty_17 = 64;
pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV4: _bindgen_ty_17 = 128;
pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV6: _bindgen_ty_17 = 256;
pub type _bindgen_ty_17 = ::core::ffi::c_uint;
pub const BPF_F_SYSCTL_BASE_NAME: _bindgen_ty_19 = 1;
pub type _bindgen_ty_19 = ::core::ffi::c_uint;
pub const BPF_F_GET_BRANCH_RECORDS_SIZE: _bindgen_ty_21 = 1;
pub type _bindgen_ty_21 = ::core::ffi::c_uint;
pub const BPF_RINGBUF_BUSY_BIT: _bindgen_ty_24 = 2147483648;
pub const BPF_RINGBUF_DISCARD_BIT: _bindgen_ty_24 = 1073741824;
pub const BPF_RINGBUF_HDR_SZ: _bindgen_ty_24 = 8;
pub type _bindgen_ty_24 = ::core::ffi::c_uint;
pub const BPF_F_BPRM_SECUREEXEC: _bindgen_ty_26 = 1;
pub type _bindgen_ty_26 = ::core::ffi::c_uint;
pub const BPF_F_BROADCAST: _bindgen_ty_27 = 8;
pub const BPF_F_EXCLUDE_INGRESS: _bindgen_ty_27 = 16;
pub type _bindgen_ty_27 = ::core::ffi::c_uint;
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_devmap_val {
    pub ifindex: __u32,
    pub bpf_prog: bpf_devmap_val__bindgen_ty_1,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_devmap_val__bindgen_ty_1 {
    pub fd: ::core::ffi::c_int,
    pub id: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_cpumap_val {
    pub qsize: __u32,
    pub bpf_prog: bpf_cpumap_val__bindgen_ty_1,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_cpumap_val__bindgen_ty_1 {
    pub fd: ::core::ffi::c_int,
    pub id: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_prog_info {
    pub type_: __u32,
    pub id: __u32,
    pub tag: [__u8; 8usize],
    pub jited_prog_len: __u32,
    pub xlated_prog_len: __u32,
    pub jited_prog_insns: __u64,
    pub xlated_prog_insns: __u64,
    pub load_time: __u64,
    pub created_by_uid: __u32,
    pub nr_map_ids: __u32,
    pub map_ids: __u64,
    pub name: [::core::ffi::c_char; 16usize],
    pub ifindex: __u32,
    pub _bitfield_align_1: [u8; 0],
    pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>,
    pub netns_dev: __u64,
    pub netns_ino: __u64,
    pub nr_jited_ksyms: __u32,
    pub nr_jited_func_lens: __u32,
    pub jited_ksyms: __u64,
    pub jited_func_lens: __u64,
    pub btf_id: __u32,
    pub func_info_rec_size: __u32,
    pub func_info: __u64,
    pub nr_func_info: __u32,
    pub nr_line_info: __u32,
    pub line_info: __u64,
    pub jited_line_info: __u64,
    pub nr_jited_line_info: __u32,
    pub line_info_rec_size: __u32,
    pub jited_line_info_rec_size: __u32,
    pub nr_prog_tags: __u32,
    pub prog_tags: __u64,
    pub run_time_ns: __u64,
    pub run_cnt: __u64,
    pub recursion_misses: __u64,
    pub verified_insns: __u32,
    pub attach_btf_obj_id: __u32,
    pub attach_btf_id: __u32,
}
impl bpf_prog_info {
    #[inline]
    pub fn gpl_compatible(&self) -> __u32 {
        unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u32) }
    }
    #[inline]
    pub fn set_gpl_compatible(&mut self, val: __u32) {
        unsafe {
            let val: u32 = ::core::mem::transmute(val);
            self._bitfield_1.set(0usize, 1u8, val as u64)
        }
    }
    #[inline]
    pub fn new_bitfield_1(gpl_compatible: __u32) -> __BindgenBitfieldUnit<[u8; 4usize]> {
        let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default();
        __bindgen_bitfield_unit.set(0usize, 1u8, {
            let gpl_compatible: u32 = unsafe { ::core::mem::transmute(gpl_compatible) };
            gpl_compatible as u64
        });
        __bindgen_bitfield_unit
    }
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_map_info {
    pub type_: __u32,
    pub id: __u32,
    pub key_size: __u32,
    pub value_size: __u32,
    pub max_entries: __u32,
    pub map_flags: __u32,
    pub name: [::core::ffi::c_char; 16usize],
    pub ifindex: __u32,
    pub btf_vmlinux_value_type_id: __u32,
    pub netns_dev: __u64,
    pub netns_ino: __u64,
    pub btf_id: __u32,
    pub btf_key_type_id: __u32,
    pub btf_value_type_id: __u32,
    pub btf_vmlinux_id: __u32,
    pub map_extra: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_btf_info {
    pub btf: __u64,
    pub btf_size: __u32,
    pub id: __u32,
    pub name: __u64,
    pub name_len: __u32,
    pub kernel_btf: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_link_info {
    pub type_: __u32,
    pub id: __u32,
    pub prog_id: __u32,
    pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_link_info__bindgen_ty_1 {
    pub raw_tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_1,
    pub tracing: bpf_link_info__bindgen_ty_1__bindgen_ty_2,
    pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_3,
    pub iter: bpf_link_info__bindgen_ty_1__bindgen_ty_4,
    pub netns: bpf_link_info__bindgen_ty_1__bindgen_ty_5,
    pub xdp: bpf_link_info__bindgen_ty_1__bindgen_ty_6,
    pub struct_ops: bpf_link_info__bindgen_ty_1__bindgen_ty_7,
    pub netfilter: bpf_link_info__bindgen_ty_1__bindgen_ty_8,
    pub kprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_9,
    pub uprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_10,
    pub perf_event: bpf_link_info__bindgen_ty_1__bindgen_ty_11,
    pub tcx: bpf_link_info__bindgen_ty_1__bindgen_ty_12,
    pub netkit: bpf_link_info__bindgen_ty_1__bindgen_ty_13,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_1 {
    pub tp_name: __u64,
    pub tp_name_len: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_2 {
    pub attach_type: __u32,
    pub target_obj_id: __u32,
    pub target_btf_id: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_3 {
    pub cgroup_id: __u64,
    pub attach_type: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4 {
    pub target_name: __u64,
    pub target_name_len: __u32,
    pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1,
    pub __bindgen_anon_2: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1 {
    pub map: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1 {
    pub map_id: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2 {
    pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1,
    pub task: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1 {
    pub cgroup_id: __u64,
    pub order: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2 {
    pub tid: __u32,
    pub pid: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_5 {
    pub netns_ino: __u32,
    pub attach_type: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_6 {
    pub ifindex: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_7 {
    pub map_id: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_8 {
    pub pf: __u32,
    pub hooknum: __u32,
    pub priority: __s32,
    pub flags: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_9 {
    pub addrs: __u64,
    pub count: __u32,
    pub flags: __u32,
    pub missed: __u64,
    pub cookies: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_10 {
    pub path: __u64,
    pub offsets: __u64,
    pub ref_ctr_offsets: __u64,
    pub cookies: __u64,
    pub path_size: __u32,
    pub count: __u32,
    pub flags: __u32,
    pub pid: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11 {
    pub type_: __u32,
    pub _bitfield_align_1: [u8; 0],
    pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>,
    pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1 {
    pub uprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1,
    pub kprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2,
    pub tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3,
    pub event: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1 {
    pub file_name: __u64,
    pub name_len: __u32,
    pub offset: __u32,
    pub cookie: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2 {
    pub func_name: __u64,
    pub name_len: __u32,
    pub offset: __u32,
    pub addr: __u64,
    pub missed: __u64,
    pub cookie: __u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 {
    pub tp_name: __u64,
    pub name_len: __u32,
    pub _bitfield_align_1: [u8; 0],
    pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>,
    pub cookie: __u64,
}
impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 {
    #[inline]
    pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> {
        let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default();
        __bindgen_bitfield_unit
    }
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 {
    pub config: __u64,
    pub type_: __u32,
    pub _bitfield_align_1: [u8; 0],
    pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>,
    pub cookie: __u64,
}
impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 {
    #[inline]
    pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> {
        let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default();
        __bindgen_bitfield_unit
    }
}
impl bpf_link_info__bindgen_ty_1__bindgen_ty_11 {
    #[inline]
    pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> {
        let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default();
        __bindgen_bitfield_unit
    }
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_12 {
    pub ifindex: __u32,
    pub attach_type: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_13 {
    pub ifindex: __u32,
    pub attach_type: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_func_info {
    pub insn_off: __u32,
    pub type_id: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bpf_line_info {
    pub insn_off: __u32,
    pub file_name_off: __u32,
    pub line_off: __u32,
    pub line_col: __u32,
}
pub const BPF_F_TIMER_ABS: _bindgen_ty_41 = 1;
pub const BPF_F_TIMER_CPU_PIN: _bindgen_ty_41 = 2;
pub type _bindgen_ty_41 = ::core::ffi::c_uint;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_header {
    pub magic: __u16,
    pub version: __u8,
    pub flags: __u8,
    pub hdr_len: __u32,
    pub type_off: __u32,
    pub type_len: __u32,
    pub str_off: __u32,
    pub str_len: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct btf_type {
    pub name_off: __u32,
    pub info: __u32,
    pub __bindgen_anon_1: btf_type__bindgen_ty_1,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union btf_type__bindgen_ty_1 {
    pub size: __u32,
    pub type_: __u32,
}
pub const BTF_KIND_UNKN: _bindgen_ty_42 = 0;
pub const BTF_KIND_INT: _bindgen_ty_42 = 1;
pub const BTF_KIND_PTR: _bindgen_ty_42 = 2;
pub const BTF_KIND_ARRAY: _bindgen_ty_42 = 3;
pub const BTF_KIND_STRUCT: _bindgen_ty_42 = 4;
pub const BTF_KIND_UNION: _bindgen_ty_42 = 5;
pub const BTF_KIND_ENUM: _bindgen_ty_42 = 6;
pub const BTF_KIND_FWD: _bindgen_ty_42 = 7;
pub const BTF_KIND_TYPEDEF: _bindgen_ty_42 = 8;
pub const BTF_KIND_VOLATILE: _bindgen_ty_42 = 9;
pub const BTF_KIND_CONST: _bindgen_ty_42 = 10;
pub const BTF_KIND_RESTRICT: _bindgen_ty_42 = 11;
pub const BTF_KIND_FUNC: _bindgen_ty_42 = 12;
pub const BTF_KIND_FUNC_PROTO: _bindgen_ty_42 = 13;
pub const BTF_KIND_VAR: _bindgen_ty_42 = 14;
pub const BTF_KIND_DATASEC: _bindgen_ty_42 = 15;
pub const BTF_KIND_FLOAT: _bindgen_ty_42 = 16;
pub const BTF_KIND_DECL_TAG: _bindgen_ty_42 = 17;
pub const BTF_KIND_TYPE_TAG: _bindgen_ty_42 = 18;
pub const BTF_KIND_ENUM64: _bindgen_ty_42 = 19;
pub const NR_BTF_KINDS: _bindgen_ty_42 = 20;
pub const BTF_KIND_MAX: _bindgen_ty_42 = 19;
pub type _bindgen_ty_42 = ::core::ffi::c_uint;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_enum {
    pub name_off: __u32,
    pub val: __s32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_array {
    pub type_: __u32,
    pub index_type: __u32,
    pub nelems: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_member {
    pub name_off: __u32,
    pub type_: __u32,
    pub offset: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_param {
    pub name_off: __u32,
    pub type_: __u32,
}
pub const BTF_VAR_STATIC: _bindgen_ty_43 = 0;
pub const BTF_VAR_GLOBAL_ALLOCATED: _bindgen_ty_43 = 1;
pub const BTF_VAR_GLOBAL_EXTERN: _bindgen_ty_43 = 2;
pub type _bindgen_ty_43 = ::core::ffi::c_uint;
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum btf_func_linkage {
    BTF_FUNC_STATIC = 0,
    BTF_FUNC_GLOBAL = 1,
    BTF_FUNC_EXTERN = 2,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_var {
    pub linkage: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_var_secinfo {
    pub type_: __u32,
    pub offset: __u32,
    pub size: __u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct btf_decl_tag {
    pub component_idx: __s32,
}
pub const IFLA_XDP_UNSPEC: _bindgen_ty_92 = 0;
pub const IFLA_XDP_FD: _bindgen_ty_92 = 1;
pub const IFLA_XDP_ATTACHED: _bindgen_ty_92 = 2;
pub const IFLA_XDP_FLAGS: _bindgen_ty_92 = 3;
pub const IFLA_XDP_PROG_ID: _bindgen_ty_92 = 4;
pub const IFLA_XDP_DRV_PROG_ID: _bindgen_ty_92 = 5;
pub const IFLA_XDP_SKB_PROG_ID: _bindgen_ty_92 = 6;
pub const IFLA_XDP_HW_PROG_ID: _bindgen_ty_92 = 7;
pub const IFLA_XDP_EXPECTED_FD: _bindgen_ty_92 = 8;
pub const __IFLA_XDP_MAX: _bindgen_ty_92 = 9;
pub type _bindgen_ty_92 = ::core::ffi::c_uint;
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum nf_inet_hooks {
    NF_INET_PRE_ROUTING = 0,
    NF_INET_LOCAL_IN = 1,
    NF_INET_FORWARD = 2,
    NF_INET_LOCAL_OUT = 3,
    NF_INET_POST_ROUTING = 4,
    NF_INET_NUMHOOKS = 5,
}
pub const NFPROTO_UNSPEC: _bindgen_ty_99 = 0;
pub const NFPROTO_INET: _bindgen_ty_99 = 1;
pub const NFPROTO_IPV4: _bindgen_ty_99 = 2;
pub const NFPROTO_ARP: _bindgen_ty_99 = 3;
pub const NFPROTO_NETDEV: _bindgen_ty_99 = 5;
pub const NFPROTO_BRIDGE: _bindgen_ty_99 = 7;
pub const NFPROTO_IPV6: _bindgen_ty_99 = 10;
pub const NFPROTO_DECNET: _bindgen_ty_99 = 12;
pub const NFPROTO_NUMPROTO: _bindgen_ty_99 = 13;
pub type _bindgen_ty_99 = ::core::ffi::c_uint;
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_type_id {
    PERF_TYPE_HARDWARE = 0,
    PERF_TYPE_SOFTWARE = 1,
    PERF_TYPE_TRACEPOINT = 2,
    PERF_TYPE_HW_CACHE = 3,
    PERF_TYPE_RAW = 4,
    PERF_TYPE_BREAKPOINT = 5,
    PERF_TYPE_MAX = 6,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_hw_id {
    PERF_COUNT_HW_CPU_CYCLES = 0,
    PERF_COUNT_HW_INSTRUCTIONS = 1,
    PERF_COUNT_HW_CACHE_REFERENCES = 2,
    PERF_COUNT_HW_CACHE_MISSES = 3,
    PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4,
    PERF_COUNT_HW_BRANCH_MISSES = 5,
    PERF_COUNT_HW_BUS_CYCLES = 6,
    PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7,
    PERF_COUNT_HW_STALLED_CYCLES_BACKEND = 8,
    PERF_COUNT_HW_REF_CPU_CYCLES = 9,
    PERF_COUNT_HW_MAX = 10,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_hw_cache_id {
    PERF_COUNT_HW_CACHE_L1D = 0,
    PERF_COUNT_HW_CACHE_L1I = 1,
    PERF_COUNT_HW_CACHE_LL = 2,
    PERF_COUNT_HW_CACHE_DTLB = 3,
    PERF_COUNT_HW_CACHE_ITLB = 4,
    PERF_COUNT_HW_CACHE_BPU = 5,
    PERF_COUNT_HW_CACHE_NODE = 6,
    PERF_COUNT_HW_CACHE_MAX = 7,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_hw_cache_op_id {
    PERF_COUNT_HW_CACHE_OP_READ = 0,
    PERF_COUNT_HW_CACHE_OP_WRITE = 1,
    PERF_COUNT_HW_CACHE_OP_PREFETCH = 2,
    PERF_COUNT_HW_CACHE_OP_MAX = 3,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_hw_cache_op_result_id {
    PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0,
    PERF_COUNT_HW_CACHE_RESULT_MISS = 1,
    PERF_COUNT_HW_CACHE_RESULT_MAX = 2,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_sw_ids {
    PERF_COUNT_SW_CPU_CLOCK = 0,
    PERF_COUNT_SW_TASK_CLOCK = 1,
    PERF_COUNT_SW_PAGE_FAULTS = 2,
    PERF_COUNT_SW_CONTEXT_SWITCHES = 3,
    PERF_COUNT_SW_CPU_MIGRATIONS = 4,
    PERF_COUNT_SW_PAGE_FAULTS_MIN = 5,
    PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6,
    PERF_COUNT_SW_ALIGNMENT_FAULTS = 7,
    PERF_COUNT_SW_EMULATION_FAULTS = 8,
    PERF_COUNT_SW_DUMMY = 9,
    PERF_COUNT_SW_BPF_OUTPUT = 10,
    PERF_COUNT_SW_CGROUP_SWITCHES = 11,
    PERF_COUNT_SW_MAX = 12,
}
#[repr(u32)]
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)]
pub enum perf_event_sample_format {
    PERF_SAMPLE_IP = 1,
    PERF_SAMPLE_TID = 2,
    PERF_SAMPLE_TIME = 4,
    PERF_SAMPLE_ADDR = 8,
    PERF_SAMPLE_READ = 16,
    PERF_SAMPLE_CALLCHAIN = 32,
    PERF_SAMPLE_ID = 64,
    PERF_SAMPLE_CPU = 128,
    PERF_SAMPLE_PERIOD = 256,
    PERF_SAMPLE_STREAM_ID = 512,
    PERF_SAMPLE_RAW = 1024,
    PERF_SAMPLE_BRANCH_STACK = 2048,
    PERF_SAMPLE_REGS_USER = 4096,
    PERF_SAMPLE_STACK_USER = 8192,
    PERF_SAMPLE_WEIGHT = 16384,
    PERF_SAMPLE_DATA_SRC = 32768,
    PERF_SAMPLE_IDENTIFIER = 65536,
    PERF_SAMPLE_TRANSACTION = 131072,
    PERF_SAMPLE_REGS_INTR = 262144,
    PERF_SAMPLE_PHYS_ADDR = 524288,
    PERF_SAMPLE_AUX = 1048576,
    PERF_SAMPLE_CGROUP = 2097152,
    PERF_SAMPLE_DATA_PAGE_SIZE = 4194304,
    PERF_SAMPLE_CODE_PAGE_SIZE = 8388608,
    PERF_SAMPLE_WEIGHT_STRUCT = 16777216,
    PERF_SAMPLE_MAX = 33554432,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct perf_event_attr {
    pub type_: __u32,
    pub size: __u32,
    pub config: __u64,
    pub __bindgen_anon_1: perf_event_attr__bindgen_ty_1,
    pub sample_type: __u64,
    pub read_format: __u64,
    pub _bitfield_align_1: [u32; 0],
    pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>,
    pub __bindgen_anon_2: perf_event_attr__bindgen_ty_2,
    pub bp_type: __u32,
    pub __bindgen_anon_3: perf_event_attr__bindgen_ty_3,
    pub __bindgen_anon_4: perf_event_attr__bindgen_ty_4,
    pub branch_sample_type: __u64,
    pub sample_regs_user: __u64,
    pub sample_stack_user: __u32,
    pub clockid: __s32,
    pub sample_regs_intr: __u64,
    pub aux_watermark: __u32,
    pub sample_max_stack: __u16,
    pub __reserved_2: __u16,
    pub aux_sample_size: __u32,
    pub __reserved_3: __u32,
    pub sig_data: __u64,
    pub config3: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union perf_event_attr__bindgen_ty_1 {
    pub sample_period: __u64,
    pub sample_freq: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union perf_event_attr__bindgen_ty_2 {
    pub wakeup_events: __u32,
    pub wakeup_watermark: __u32,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union perf_event_attr__bindgen_ty_3 {
    pub bp_addr: __u64,
    pub kprobe_func: __u64,
    pub uprobe_path: __u64,
    pub config1: __u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub union perf_event_attr__bindgen_ty_4 {
    pub bp_len: __u64,
    pub kprobe_addr: __u64,
    pub probe_offset: __u64,
    pub config2: __u64,
}
impl perf_event_attr {
    #[inline]
    pub fn disabled(&self) -> __u64 {
        unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) }
    }
    #[inline]
    pub fn set_disabled(&mut self, val: __u64) {
        unsafe {
            let val: u64 = ::core::mem::transmute(val);
            self._bitfield_1.set(0usize, 1u8, val as u64)
        }
    }
    #[inline]
    pub fn inherit(&self) -> __u64 {
        unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) }
    }
    #[inline]
    pub fn set_inherit(&mut self, val: __u64) {
        unsafe {
            let val: u64 = ::core::mem::transmute(val);
            self._bitfield_1.set(1usize, 1u8, val as u64)
        }
    }
    #[inline]
    pub fn pinned(&self) -> __u64 {
        unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) }
    }
    #[inline]
    pub fn set_pinned(&mut self, val: __u64) {
        unsafe {
            let val: u64 = ::core::mem::transmute(val);
            self._bitfield_1.set(2usize, 1u8, val as u64)
        }
    }
    #[inline]
    pub fn exclusive(&self) -> __u64 {
        unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) }
    }
    #[inline]
    pub fn set_exclusive(&mut self, val: __u64) {
unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn exclude_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_exclude_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn exclude_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_exclude_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn exclude_hv(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 1u8) as u64) } } #[inline] pub fn set_exclude_hv(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 1u8, val as u64) } } #[inline] pub fn exclude_idle(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(7usize, 1u8) as u64) } } #[inline] pub fn set_exclude_idle(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(7usize, 1u8, val as u64) } } #[inline] pub fn mmap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(8usize, 1u8) as u64) } } #[inline] pub fn set_mmap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(8usize, 1u8, val as u64) } } #[inline] pub fn comm(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(9usize, 1u8) as u64) } } #[inline] pub fn set_comm(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(9usize, 1u8, val as u64) } } #[inline] pub fn freq(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(10usize, 1u8) as u64) } } #[inline] pub fn set_freq(&mut self, val: __u64) { unsafe { let val: u64 = 
::core::mem::transmute(val); self._bitfield_1.set(10usize, 1u8, val as u64) } } #[inline] pub fn inherit_stat(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(11usize, 1u8) as u64) } } #[inline] pub fn set_inherit_stat(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(11usize, 1u8, val as u64) } } #[inline] pub fn enable_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(12usize, 1u8) as u64) } } #[inline] pub fn set_enable_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(12usize, 1u8, val as u64) } } #[inline] pub fn task(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(13usize, 1u8) as u64) } } #[inline] pub fn set_task(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(13usize, 1u8, val as u64) } } #[inline] pub fn watermark(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(14usize, 1u8) as u64) } } #[inline] pub fn set_watermark(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(14usize, 1u8, val as u64) } } #[inline] pub fn precise_ip(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(15usize, 2u8) as u64) } } #[inline] pub fn set_precise_ip(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(15usize, 2u8, val as u64) } } #[inline] pub fn mmap_data(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(17usize, 1u8) as u64) } } #[inline] pub fn set_mmap_data(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(17usize, 1u8, val as u64) } } #[inline] pub fn sample_id_all(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(18usize, 1u8) as u64) } } #[inline] pub fn set_sample_id_all(&mut self, val: __u64) { unsafe { let val: 
u64 = ::core::mem::transmute(val); self._bitfield_1.set(18usize, 1u8, val as u64) } } #[inline] pub fn exclude_host(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(19usize, 1u8) as u64) } } #[inline] pub fn set_exclude_host(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(19usize, 1u8, val as u64) } } #[inline] pub fn exclude_guest(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(20usize, 1u8) as u64) } } #[inline] pub fn set_exclude_guest(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(20usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(21usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(21usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(22usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(22usize, 1u8, val as u64) } } #[inline] pub fn mmap2(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(23usize, 1u8) as u64) } } #[inline] pub fn set_mmap2(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(23usize, 1u8, val as u64) } } #[inline] pub fn comm_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(24usize, 1u8) as u64) } } #[inline] pub fn set_comm_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(24usize, 1u8, val as u64) } } #[inline] pub fn use_clockid(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(25usize, 1u8) as u64) } } #[inline] pub fn 
set_use_clockid(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(25usize, 1u8, val as u64) } } #[inline] pub fn context_switch(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(26usize, 1u8) as u64) } } #[inline] pub fn set_context_switch(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(26usize, 1u8, val as u64) } } #[inline] pub fn write_backward(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(27usize, 1u8) as u64) } } #[inline] pub fn set_write_backward(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(27usize, 1u8, val as u64) } } #[inline] pub fn namespaces(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(28usize, 1u8) as u64) } } #[inline] pub fn set_namespaces(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(28usize, 1u8, val as u64) } } #[inline] pub fn ksymbol(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(29usize, 1u8) as u64) } } #[inline] pub fn set_ksymbol(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(29usize, 1u8, val as u64) } } #[inline] pub fn bpf_event(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(30usize, 1u8) as u64) } } #[inline] pub fn set_bpf_event(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(30usize, 1u8, val as u64) } } #[inline] pub fn aux_output(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(31usize, 1u8) as u64) } } #[inline] pub fn set_aux_output(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(31usize, 1u8, val as u64) } } #[inline] pub fn cgroup(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(32usize, 1u8) as u64) } } #[inline] 
pub fn set_cgroup(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(32usize, 1u8, val as u64) } } #[inline] pub fn text_poke(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(33usize, 1u8) as u64) } } #[inline] pub fn set_text_poke(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(33usize, 1u8, val as u64) } } #[inline] pub fn build_id(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(34usize, 1u8) as u64) } } #[inline] pub fn set_build_id(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(34usize, 1u8, val as u64) } } #[inline] pub fn inherit_thread(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(35usize, 1u8) as u64) } } #[inline] pub fn set_inherit_thread(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(35usize, 1u8, val as u64) } } #[inline] pub fn remove_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(36usize, 1u8) as u64) } } #[inline] pub fn set_remove_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(36usize, 1u8, val as u64) } } #[inline] pub fn sigtrap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(37usize, 1u8) as u64) } } #[inline] pub fn set_sigtrap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(37usize, 1u8, val as u64) } } #[inline] pub fn __reserved_1(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(38usize, 26u8) as u64) } } #[inline] pub fn set___reserved_1(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(38usize, 26u8, val as u64) } } #[inline] pub fn new_bitfield_1( disabled: __u64, inherit: __u64, pinned: __u64, exclusive: __u64, exclude_user: __u64, 
exclude_kernel: __u64, exclude_hv: __u64, exclude_idle: __u64, mmap: __u64, comm: __u64, freq: __u64, inherit_stat: __u64, enable_on_exec: __u64, task: __u64, watermark: __u64, precise_ip: __u64, mmap_data: __u64, sample_id_all: __u64, exclude_host: __u64, exclude_guest: __u64, exclude_callchain_kernel: __u64, exclude_callchain_user: __u64, mmap2: __u64, comm_exec: __u64, use_clockid: __u64, context_switch: __u64, write_backward: __u64, namespaces: __u64, ksymbol: __u64, bpf_event: __u64, aux_output: __u64, cgroup: __u64, text_poke: __u64, build_id: __u64, inherit_thread: __u64, remove_on_exec: __u64, sigtrap: __u64, __reserved_1: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let disabled: u64 = unsafe { ::core::mem::transmute(disabled) }; disabled as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let inherit: u64 = unsafe { ::core::mem::transmute(inherit) }; inherit as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let pinned: u64 = unsafe { ::core::mem::transmute(pinned) }; pinned as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let exclusive: u64 = unsafe { ::core::mem::transmute(exclusive) }; exclusive as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let exclude_user: u64 = unsafe { ::core::mem::transmute(exclude_user) }; exclude_user as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let exclude_kernel: u64 = unsafe { ::core::mem::transmute(exclude_kernel) }; exclude_kernel as u64 }); __bindgen_bitfield_unit.set(6usize, 1u8, { let exclude_hv: u64 = unsafe { ::core::mem::transmute(exclude_hv) }; exclude_hv as u64 }); __bindgen_bitfield_unit.set(7usize, 1u8, { let exclude_idle: u64 = unsafe { ::core::mem::transmute(exclude_idle) }; exclude_idle as u64 }); __bindgen_bitfield_unit.set(8usize, 1u8, { let mmap: u64 = unsafe { ::core::mem::transmute(mmap) }; mmap as u64 }); __bindgen_bitfield_unit.set(9usize, 
1u8, { let comm: u64 = unsafe { ::core::mem::transmute(comm) }; comm as u64 }); __bindgen_bitfield_unit.set(10usize, 1u8, { let freq: u64 = unsafe { ::core::mem::transmute(freq) }; freq as u64 }); __bindgen_bitfield_unit.set(11usize, 1u8, { let inherit_stat: u64 = unsafe { ::core::mem::transmute(inherit_stat) }; inherit_stat as u64 }); __bindgen_bitfield_unit.set(12usize, 1u8, { let enable_on_exec: u64 = unsafe { ::core::mem::transmute(enable_on_exec) }; enable_on_exec as u64 }); __bindgen_bitfield_unit.set(13usize, 1u8, { let task: u64 = unsafe { ::core::mem::transmute(task) }; task as u64 }); __bindgen_bitfield_unit.set(14usize, 1u8, { let watermark: u64 = unsafe { ::core::mem::transmute(watermark) }; watermark as u64 }); __bindgen_bitfield_unit.set(15usize, 2u8, { let precise_ip: u64 = unsafe { ::core::mem::transmute(precise_ip) }; precise_ip as u64 }); __bindgen_bitfield_unit.set(17usize, 1u8, { let mmap_data: u64 = unsafe { ::core::mem::transmute(mmap_data) }; mmap_data as u64 }); __bindgen_bitfield_unit.set(18usize, 1u8, { let sample_id_all: u64 = unsafe { ::core::mem::transmute(sample_id_all) }; sample_id_all as u64 }); __bindgen_bitfield_unit.set(19usize, 1u8, { let exclude_host: u64 = unsafe { ::core::mem::transmute(exclude_host) }; exclude_host as u64 }); __bindgen_bitfield_unit.set(20usize, 1u8, { let exclude_guest: u64 = unsafe { ::core::mem::transmute(exclude_guest) }; exclude_guest as u64 }); __bindgen_bitfield_unit.set(21usize, 1u8, { let exclude_callchain_kernel: u64 = unsafe { ::core::mem::transmute(exclude_callchain_kernel) }; exclude_callchain_kernel as u64 }); __bindgen_bitfield_unit.set(22usize, 1u8, { let exclude_callchain_user: u64 = unsafe { ::core::mem::transmute(exclude_callchain_user) }; exclude_callchain_user as u64 }); __bindgen_bitfield_unit.set(23usize, 1u8, { let mmap2: u64 = unsafe { ::core::mem::transmute(mmap2) }; mmap2 as u64 }); __bindgen_bitfield_unit.set(24usize, 1u8, { let comm_exec: u64 = unsafe { 
::core::mem::transmute(comm_exec) }; comm_exec as u64 }); __bindgen_bitfield_unit.set(25usize, 1u8, { let use_clockid: u64 = unsafe { ::core::mem::transmute(use_clockid) }; use_clockid as u64 }); __bindgen_bitfield_unit.set(26usize, 1u8, { let context_switch: u64 = unsafe { ::core::mem::transmute(context_switch) }; context_switch as u64 }); __bindgen_bitfield_unit.set(27usize, 1u8, { let write_backward: u64 = unsafe { ::core::mem::transmute(write_backward) }; write_backward as u64 }); __bindgen_bitfield_unit.set(28usize, 1u8, { let namespaces: u64 = unsafe { ::core::mem::transmute(namespaces) }; namespaces as u64 }); __bindgen_bitfield_unit.set(29usize, 1u8, { let ksymbol: u64 = unsafe { ::core::mem::transmute(ksymbol) }; ksymbol as u64 }); __bindgen_bitfield_unit.set(30usize, 1u8, { let bpf_event: u64 = unsafe { ::core::mem::transmute(bpf_event) }; bpf_event as u64 }); __bindgen_bitfield_unit.set(31usize, 1u8, { let aux_output: u64 = unsafe { ::core::mem::transmute(aux_output) }; aux_output as u64 }); __bindgen_bitfield_unit.set(32usize, 1u8, { let cgroup: u64 = unsafe { ::core::mem::transmute(cgroup) }; cgroup as u64 }); __bindgen_bitfield_unit.set(33usize, 1u8, { let text_poke: u64 = unsafe { ::core::mem::transmute(text_poke) }; text_poke as u64 }); __bindgen_bitfield_unit.set(34usize, 1u8, { let build_id: u64 = unsafe { ::core::mem::transmute(build_id) }; build_id as u64 }); __bindgen_bitfield_unit.set(35usize, 1u8, { let inherit_thread: u64 = unsafe { ::core::mem::transmute(inherit_thread) }; inherit_thread as u64 }); __bindgen_bitfield_unit.set(36usize, 1u8, { let remove_on_exec: u64 = unsafe { ::core::mem::transmute(remove_on_exec) }; remove_on_exec as u64 }); __bindgen_bitfield_unit.set(37usize, 1u8, { let sigtrap: u64 = unsafe { ::core::mem::transmute(sigtrap) }; sigtrap as u64 }); __bindgen_bitfield_unit.set(38usize, 26u8, { let __reserved_1: u64 = unsafe { ::core::mem::transmute(__reserved_1) }; __reserved_1 as u64 }); __bindgen_bitfield_unit } } 
#[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_mmap_page { pub version: __u32, pub compat_version: __u32, pub lock: __u32, pub index: __u32, pub offset: __s64, pub time_enabled: __u64, pub time_running: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1, pub pmc_width: __u16, pub time_shift: __u16, pub time_mult: __u32, pub time_offset: __u64, pub time_zero: __u64, pub size: __u32, pub __reserved_1: __u32, pub time_cycles: __u64, pub time_mask: __u64, pub __reserved: [__u8; 928usize], pub data_head: __u64, pub data_tail: __u64, pub data_offset: __u64, pub data_size: __u64, pub aux_head: __u64, pub aux_tail: __u64, pub aux_offset: __u64, pub aux_size: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_mmap_page__bindgen_ty_1 { pub capabilities: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { pub _bitfield_align_1: [u64; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, } impl perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { #[inline] pub fn cap_bit0(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn cap_bit0_is_deprecated(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0_is_deprecated(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn cap_user_rdpmc(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_rdpmc(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } 
#[inline] pub fn cap_user_time(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_zero(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_zero(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_short(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_short(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn cap_____res(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 58u8) as u64) } } #[inline] pub fn set_cap_____res(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 58u8, val as u64) } } #[inline] pub fn new_bitfield_1( cap_bit0: __u64, cap_bit0_is_deprecated: __u64, cap_user_rdpmc: __u64, cap_user_time: __u64, cap_user_time_zero: __u64, cap_user_time_short: __u64, cap_____res: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let cap_bit0: u64 = unsafe { ::core::mem::transmute(cap_bit0) }; cap_bit0 as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let cap_bit0_is_deprecated: u64 = unsafe { ::core::mem::transmute(cap_bit0_is_deprecated) }; cap_bit0_is_deprecated as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let cap_user_rdpmc: u64 = unsafe { ::core::mem::transmute(cap_user_rdpmc) }; cap_user_rdpmc as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, 
{ let cap_user_time: u64 = unsafe { ::core::mem::transmute(cap_user_time) }; cap_user_time as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let cap_user_time_zero: u64 = unsafe { ::core::mem::transmute(cap_user_time_zero) }; cap_user_time_zero as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let cap_user_time_short: u64 = unsafe { ::core::mem::transmute(cap_user_time_short) }; cap_user_time_short as u64 }); __bindgen_bitfield_unit.set(6usize, 58u8, { let cap_____res: u64 = unsafe { ::core::mem::transmute(cap_____res) }; cap_____res as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_header { pub type_: __u32, pub misc: __u16, pub size: __u16, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_type { PERF_RECORD_MMAP = 1, PERF_RECORD_LOST = 2, PERF_RECORD_COMM = 3, PERF_RECORD_EXIT = 4, PERF_RECORD_THROTTLE = 5, PERF_RECORD_UNTHROTTLE = 6, PERF_RECORD_FORK = 7, PERF_RECORD_READ = 8, PERF_RECORD_SAMPLE = 9, PERF_RECORD_MMAP2 = 10, PERF_RECORD_AUX = 11, PERF_RECORD_ITRACE_START = 12, PERF_RECORD_LOST_SAMPLES = 13, PERF_RECORD_SWITCH = 14, PERF_RECORD_SWITCH_CPU_WIDE = 15, PERF_RECORD_NAMESPACES = 16, PERF_RECORD_KSYMBOL = 17, PERF_RECORD_BPF_EVENT = 18, PERF_RECORD_CGROUP = 19, PERF_RECORD_TEXT_POKE = 20, PERF_RECORD_AUX_OUTPUT_HW_ID = 21, PERF_RECORD_MAX = 22, } pub const TCA_BPF_UNSPEC: _bindgen_ty_154 = 0; pub const TCA_BPF_ACT: _bindgen_ty_154 = 1; pub const TCA_BPF_POLICE: _bindgen_ty_154 = 2; pub const TCA_BPF_CLASSID: _bindgen_ty_154 = 3; pub const TCA_BPF_OPS_LEN: _bindgen_ty_154 = 4; pub const TCA_BPF_OPS: _bindgen_ty_154 = 5; pub const TCA_BPF_FD: _bindgen_ty_154 = 6; pub const TCA_BPF_NAME: _bindgen_ty_154 = 7; pub const TCA_BPF_FLAGS: _bindgen_ty_154 = 8; pub const TCA_BPF_FLAGS_GEN: _bindgen_ty_154 = 9; pub const TCA_BPF_TAG: _bindgen_ty_154 = 10; pub const TCA_BPF_ID: _bindgen_ty_154 = 11; pub const __TCA_BPF_MAX: _bindgen_ty_154 = 12; pub type _bindgen_ty_154 
= ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct ifinfomsg { pub ifi_family: ::core::ffi::c_uchar, pub __ifi_pad: ::core::ffi::c_uchar, pub ifi_type: ::core::ffi::c_ushort, pub ifi_index: ::core::ffi::c_int, pub ifi_flags: ::core::ffi::c_uint, pub ifi_change: ::core::ffi::c_uint, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct tcmsg { pub tcm_family: ::core::ffi::c_uchar, pub tcm__pad1: ::core::ffi::c_uchar, pub tcm__pad2: ::core::ffi::c_ushort, pub tcm_ifindex: ::core::ffi::c_int, pub tcm_handle: __u32, pub tcm_parent: __u32, pub tcm_info: __u32, } pub const TCA_UNSPEC: _bindgen_ty_172 = 0; pub const TCA_KIND: _bindgen_ty_172 = 1; pub const TCA_OPTIONS: _bindgen_ty_172 = 2; pub const TCA_STATS: _bindgen_ty_172 = 3; pub const TCA_XSTATS: _bindgen_ty_172 = 4; pub const TCA_RATE: _bindgen_ty_172 = 5; pub const TCA_FCNT: _bindgen_ty_172 = 6; pub const TCA_STATS2: _bindgen_ty_172 = 7; pub const TCA_STAB: _bindgen_ty_172 = 8; pub const TCA_PAD: _bindgen_ty_172 = 9; pub const TCA_DUMP_INVISIBLE: _bindgen_ty_172 = 10; pub const TCA_CHAIN: _bindgen_ty_172 = 11; pub const TCA_HW_OFFLOAD: _bindgen_ty_172 = 12; pub const TCA_INGRESS_BLOCK: _bindgen_ty_172 = 13; pub const TCA_EGRESS_BLOCK: _bindgen_ty_172 = 14; pub const __TCA_MAX: _bindgen_ty_172 = 15; pub type _bindgen_ty_172 = ::core::ffi::c_uint; pub const AYA_PERF_EVENT_IOC_ENABLE: ::core::ffi::c_int = 9216; pub const AYA_PERF_EVENT_IOC_DISABLE: ::core::ffi::c_int = 9217; pub const AYA_PERF_EVENT_IOC_SET_BPF: ::core::ffi::c_int = 1074013192;

aya-obj-0.2.1/src/generated/linux_bindings_x86_64.rs

/* automatically generated by rust-bindgen 0.70.1 */ #[repr(C)] #[derive(Copy, Clone, Debug, Default, Eq, Hash, Ord, PartialEq, PartialOrd)] pub struct __BindgenBitfieldUnit<Storage> { storage: Storage, } impl<Storage> __BindgenBitfieldUnit<Storage> { #[inline] pub const fn new(storage: Storage) -> Self { Self { storage } } } impl<Storage> __BindgenBitfieldUnit<Storage> where
Storage: AsRef<[u8]> + AsMut<[u8]>, { #[inline] pub fn get_bit(&self, index: usize) -> bool { debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = self.storage.as_ref()[byte_index]; let bit_index = if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; byte & mask == mask } #[inline] pub fn set_bit(&mut self, index: usize, val: bool) { debug_assert!(index / 8 < self.storage.as_ref().len()); let byte_index = index / 8; let byte = &mut self.storage.as_mut()[byte_index]; let bit_index = if cfg!(target_endian = "big") { 7 - (index % 8) } else { index % 8 }; let mask = 1 << bit_index; if val { *byte |= mask; } else { *byte &= !mask; } } #[inline] pub fn get(&self, bit_offset: usize, bit_width: u8) -> u64 { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len()); let mut val = 0; for i in 0..(bit_width as usize) { if self.get_bit(i + bit_offset) { let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; val |= 1 << index; } } val } #[inline] pub fn set(&mut self, bit_offset: usize, bit_width: u8, val: u64) { debug_assert!(bit_width <= 64); debug_assert!(bit_offset / 8 < self.storage.as_ref().len()); debug_assert!((bit_offset + (bit_width as usize)) / 8 <= self.storage.as_ref().len()); for i in 0..(bit_width as usize) { let mask = 1 << i; let val_bit_is_set = val & mask == mask; let index = if cfg!(target_endian = "big") { bit_width as usize - 1 - i } else { i }; self.set_bit(index + bit_offset, val_bit_is_set); } } } #[repr(C)] #[derive(Default)] pub struct __IncompleteArrayField<T>(::core::marker::PhantomData<T>, [T; 0]); impl<T> __IncompleteArrayField<T> { #[inline] pub const fn new() -> Self { __IncompleteArrayField(::core::marker::PhantomData, []) } #[inline] pub fn as_ptr(&self) -> *const T { self as *const _ as *const T } #[inline] pub fn
as_mut_ptr(&mut self) -> *mut T { self as *mut _ as *mut T } #[inline] pub unsafe fn as_slice(&self, len: usize) -> &[T] { ::core::slice::from_raw_parts(self.as_ptr(), len) } #[inline] pub unsafe fn as_mut_slice(&mut self, len: usize) -> &mut [T] { ::core::slice::from_raw_parts_mut(self.as_mut_ptr(), len) } } impl<T> ::core::fmt::Debug for __IncompleteArrayField<T> { fn fmt(&self, fmt: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result { fmt.write_str("__IncompleteArrayField") } } pub const SO_ATTACH_BPF: u32 = 50; pub const SO_DETACH_BPF: u32 = 27; pub const BPF_LD: u32 = 0; pub const BPF_LDX: u32 = 1; pub const BPF_ST: u32 = 2; pub const BPF_STX: u32 = 3; pub const BPF_ALU: u32 = 4; pub const BPF_JMP: u32 = 5; pub const BPF_W: u32 = 0; pub const BPF_H: u32 = 8; pub const BPF_B: u32 = 16; pub const BPF_K: u32 = 0; pub const BPF_ALU64: u32 = 7; pub const BPF_DW: u32 = 24; pub const BPF_CALL: u32 = 128; pub const BPF_F_ALLOW_OVERRIDE: u32 = 1; pub const BPF_F_ALLOW_MULTI: u32 = 2; pub const BPF_F_REPLACE: u32 = 4; pub const BPF_F_BEFORE: u32 = 8; pub const BPF_F_AFTER: u32 = 16; pub const BPF_F_ID: u32 = 32; pub const BPF_F_STRICT_ALIGNMENT: u32 = 1; pub const BPF_F_ANY_ALIGNMENT: u32 = 2; pub const BPF_F_TEST_RND_HI32: u32 = 4; pub const BPF_F_TEST_STATE_FREQ: u32 = 8; pub const BPF_F_SLEEPABLE: u32 = 16; pub const BPF_F_XDP_HAS_FRAGS: u32 = 32; pub const BPF_F_XDP_DEV_BOUND_ONLY: u32 = 64; pub const BPF_F_TEST_REG_INVARIANTS: u32 = 128; pub const BPF_F_NETFILTER_IP_DEFRAG: u32 = 1; pub const BPF_PSEUDO_MAP_FD: u32 = 1; pub const BPF_PSEUDO_MAP_IDX: u32 = 5; pub const BPF_PSEUDO_MAP_VALUE: u32 = 2; pub const BPF_PSEUDO_MAP_IDX_VALUE: u32 = 6; pub const BPF_PSEUDO_BTF_ID: u32 = 3; pub const BPF_PSEUDO_FUNC: u32 = 4; pub const BPF_PSEUDO_CALL: u32 = 1; pub const BPF_PSEUDO_KFUNC_CALL: u32 = 2; pub const BPF_F_QUERY_EFFECTIVE: u32 = 1; pub const BPF_F_TEST_RUN_ON_CPU: u32 = 1; pub const BPF_F_TEST_XDP_LIVE_FRAMES: u32 = 2; pub const BTF_INT_SIGNED: u32 = 1; pub const
BTF_INT_CHAR: u32 = 2; pub const BTF_INT_BOOL: u32 = 4; pub const NLMSG_ALIGNTO: u32 = 4; pub const XDP_FLAGS_UPDATE_IF_NOEXIST: u32 = 1; pub const XDP_FLAGS_SKB_MODE: u32 = 2; pub const XDP_FLAGS_DRV_MODE: u32 = 4; pub const XDP_FLAGS_HW_MODE: u32 = 8; pub const XDP_FLAGS_REPLACE: u32 = 16; pub const XDP_FLAGS_MODES: u32 = 14; pub const XDP_FLAGS_MASK: u32 = 31; pub const PERF_MAX_STACK_DEPTH: u32 = 127; pub const PERF_MAX_CONTEXTS_PER_STACK: u32 = 8; pub const PERF_FLAG_FD_NO_GROUP: u32 = 1; pub const PERF_FLAG_FD_OUTPUT: u32 = 2; pub const PERF_FLAG_PID_CGROUP: u32 = 4; pub const PERF_FLAG_FD_CLOEXEC: u32 = 8; pub const TC_H_MAJ_MASK: u32 = 4294901760; pub const TC_H_MIN_MASK: u32 = 65535; pub const TC_H_UNSPEC: u32 = 0; pub const TC_H_ROOT: u32 = 4294967295; pub const TC_H_INGRESS: u32 = 4294967281; pub const TC_H_CLSACT: u32 = 4294967281; pub const TC_H_MIN_PRIORITY: u32 = 65504; pub const TC_H_MIN_INGRESS: u32 = 65522; pub const TC_H_MIN_EGRESS: u32 = 65523; pub const TCA_BPF_FLAG_ACT_DIRECT: u32 = 1; pub type __u8 = ::core::ffi::c_uchar; pub type __s16 = ::core::ffi::c_short; pub type __u16 = ::core::ffi::c_ushort; pub type __s32 = ::core::ffi::c_int; pub type __u32 = ::core::ffi::c_uint; pub type __s64 = ::core::ffi::c_longlong; pub type __u64 = ::core::ffi::c_ulonglong; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_insn { pub code: __u8, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 1usize]>, pub off: __s16, pub imm: __s32, } impl bpf_insn { #[inline] pub fn dst_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 4u8) as u8) } } #[inline] pub fn set_dst_reg(&mut self, val: __u8) { unsafe { let val: u8 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 4u8, val as u64) } } #[inline] pub fn src_reg(&self) -> __u8 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 4u8) as u8) } } #[inline] pub fn set_src_reg(&mut self, val: __u8) { unsafe { let val: u8 = 
::core::mem::transmute(val); self._bitfield_1.set(4usize, 4u8, val as u64) } } #[inline] pub fn new_bitfield_1(dst_reg: __u8, src_reg: __u8) -> __BindgenBitfieldUnit<[u8; 1usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 1usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 4u8, { let dst_reg: u8 = unsafe { ::core::mem::transmute(dst_reg) }; dst_reg as u64 }); __bindgen_bitfield_unit.set(4usize, 4u8, { let src_reg: u8 = unsafe { ::core::mem::transmute(src_reg) }; src_reg as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug)] pub struct bpf_lpm_trie_key { pub prefixlen: __u32, pub data: __IncompleteArrayField<__u8>, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cgroup_iter_order { BPF_CGROUP_ITER_ORDER_UNSPEC = 0, BPF_CGROUP_ITER_SELF_ONLY = 1, BPF_CGROUP_ITER_DESCENDANTS_PRE = 2, BPF_CGROUP_ITER_DESCENDANTS_POST = 3, BPF_CGROUP_ITER_ANCESTORS_UP = 4, } impl bpf_cmd { pub const BPF_PROG_RUN: bpf_cmd = bpf_cmd::BPF_PROG_TEST_RUN; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_cmd { BPF_MAP_CREATE = 0, BPF_MAP_LOOKUP_ELEM = 1, BPF_MAP_UPDATE_ELEM = 2, BPF_MAP_DELETE_ELEM = 3, BPF_MAP_GET_NEXT_KEY = 4, BPF_PROG_LOAD = 5, BPF_OBJ_PIN = 6, BPF_OBJ_GET = 7, BPF_PROG_ATTACH = 8, BPF_PROG_DETACH = 9, BPF_PROG_TEST_RUN = 10, BPF_PROG_GET_NEXT_ID = 11, BPF_MAP_GET_NEXT_ID = 12, BPF_PROG_GET_FD_BY_ID = 13, BPF_MAP_GET_FD_BY_ID = 14, BPF_OBJ_GET_INFO_BY_FD = 15, BPF_PROG_QUERY = 16, BPF_RAW_TRACEPOINT_OPEN = 17, BPF_BTF_LOAD = 18, BPF_BTF_GET_FD_BY_ID = 19, BPF_TASK_FD_QUERY = 20, BPF_MAP_LOOKUP_AND_DELETE_ELEM = 21, BPF_MAP_FREEZE = 22, BPF_BTF_GET_NEXT_ID = 23, BPF_MAP_LOOKUP_BATCH = 24, BPF_MAP_LOOKUP_AND_DELETE_BATCH = 25, BPF_MAP_UPDATE_BATCH = 26, BPF_MAP_DELETE_BATCH = 27, BPF_LINK_CREATE = 28, BPF_LINK_UPDATE = 29, BPF_LINK_GET_FD_BY_ID = 30, BPF_LINK_GET_NEXT_ID = 31, BPF_ENABLE_STATS = 32, BPF_ITER_CREATE = 33, BPF_LINK_DETACH = 34, BPF_PROG_BIND_MAP = 35, 
BPF_TOKEN_CREATE = 36, __MAX_BPF_CMD = 37, } impl bpf_map_type { pub const BPF_MAP_TYPE_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED; } impl bpf_map_type { pub const BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE: bpf_map_type = bpf_map_type::BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED; } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_map_type { BPF_MAP_TYPE_UNSPEC = 0, BPF_MAP_TYPE_HASH = 1, BPF_MAP_TYPE_ARRAY = 2, BPF_MAP_TYPE_PROG_ARRAY = 3, BPF_MAP_TYPE_PERF_EVENT_ARRAY = 4, BPF_MAP_TYPE_PERCPU_HASH = 5, BPF_MAP_TYPE_PERCPU_ARRAY = 6, BPF_MAP_TYPE_STACK_TRACE = 7, BPF_MAP_TYPE_CGROUP_ARRAY = 8, BPF_MAP_TYPE_LRU_HASH = 9, BPF_MAP_TYPE_LRU_PERCPU_HASH = 10, BPF_MAP_TYPE_LPM_TRIE = 11, BPF_MAP_TYPE_ARRAY_OF_MAPS = 12, BPF_MAP_TYPE_HASH_OF_MAPS = 13, BPF_MAP_TYPE_DEVMAP = 14, BPF_MAP_TYPE_SOCKMAP = 15, BPF_MAP_TYPE_CPUMAP = 16, BPF_MAP_TYPE_XSKMAP = 17, BPF_MAP_TYPE_SOCKHASH = 18, BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED = 19, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED = 21, BPF_MAP_TYPE_QUEUE = 22, BPF_MAP_TYPE_STACK = 23, BPF_MAP_TYPE_SK_STORAGE = 24, BPF_MAP_TYPE_DEVMAP_HASH = 25, BPF_MAP_TYPE_STRUCT_OPS = 26, BPF_MAP_TYPE_RINGBUF = 27, BPF_MAP_TYPE_INODE_STORAGE = 28, BPF_MAP_TYPE_TASK_STORAGE = 29, BPF_MAP_TYPE_BLOOM_FILTER = 30, BPF_MAP_TYPE_USER_RINGBUF = 31, BPF_MAP_TYPE_CGRP_STORAGE = 32, BPF_MAP_TYPE_ARENA = 33, __MAX_BPF_MAP_TYPE = 34, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_prog_type { BPF_PROG_TYPE_UNSPEC = 0, BPF_PROG_TYPE_SOCKET_FILTER = 1, BPF_PROG_TYPE_KPROBE = 2, BPF_PROG_TYPE_SCHED_CLS = 3, BPF_PROG_TYPE_SCHED_ACT = 4, BPF_PROG_TYPE_TRACEPOINT = 5, BPF_PROG_TYPE_XDP = 6, BPF_PROG_TYPE_PERF_EVENT = 7, BPF_PROG_TYPE_CGROUP_SKB = 8, BPF_PROG_TYPE_CGROUP_SOCK = 9, BPF_PROG_TYPE_LWT_IN = 10, BPF_PROG_TYPE_LWT_OUT = 11, BPF_PROG_TYPE_LWT_XMIT = 12, BPF_PROG_TYPE_SOCK_OPS = 13, BPF_PROG_TYPE_SK_SKB = 14, 
BPF_PROG_TYPE_CGROUP_DEVICE = 15, BPF_PROG_TYPE_SK_MSG = 16, BPF_PROG_TYPE_RAW_TRACEPOINT = 17, BPF_PROG_TYPE_CGROUP_SOCK_ADDR = 18, BPF_PROG_TYPE_LWT_SEG6LOCAL = 19, BPF_PROG_TYPE_LIRC_MODE2 = 20, BPF_PROG_TYPE_SK_REUSEPORT = 21, BPF_PROG_TYPE_FLOW_DISSECTOR = 22, BPF_PROG_TYPE_CGROUP_SYSCTL = 23, BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE = 24, BPF_PROG_TYPE_CGROUP_SOCKOPT = 25, BPF_PROG_TYPE_TRACING = 26, BPF_PROG_TYPE_STRUCT_OPS = 27, BPF_PROG_TYPE_EXT = 28, BPF_PROG_TYPE_LSM = 29, BPF_PROG_TYPE_SK_LOOKUP = 30, BPF_PROG_TYPE_SYSCALL = 31, BPF_PROG_TYPE_NETFILTER = 32, __MAX_BPF_PROG_TYPE = 33, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_attach_type { BPF_CGROUP_INET_INGRESS = 0, BPF_CGROUP_INET_EGRESS = 1, BPF_CGROUP_INET_SOCK_CREATE = 2, BPF_CGROUP_SOCK_OPS = 3, BPF_SK_SKB_STREAM_PARSER = 4, BPF_SK_SKB_STREAM_VERDICT = 5, BPF_CGROUP_DEVICE = 6, BPF_SK_MSG_VERDICT = 7, BPF_CGROUP_INET4_BIND = 8, BPF_CGROUP_INET6_BIND = 9, BPF_CGROUP_INET4_CONNECT = 10, BPF_CGROUP_INET6_CONNECT = 11, BPF_CGROUP_INET4_POST_BIND = 12, BPF_CGROUP_INET6_POST_BIND = 13, BPF_CGROUP_UDP4_SENDMSG = 14, BPF_CGROUP_UDP6_SENDMSG = 15, BPF_LIRC_MODE2 = 16, BPF_FLOW_DISSECTOR = 17, BPF_CGROUP_SYSCTL = 18, BPF_CGROUP_UDP4_RECVMSG = 19, BPF_CGROUP_UDP6_RECVMSG = 20, BPF_CGROUP_GETSOCKOPT = 21, BPF_CGROUP_SETSOCKOPT = 22, BPF_TRACE_RAW_TP = 23, BPF_TRACE_FENTRY = 24, BPF_TRACE_FEXIT = 25, BPF_MODIFY_RETURN = 26, BPF_LSM_MAC = 27, BPF_TRACE_ITER = 28, BPF_CGROUP_INET4_GETPEERNAME = 29, BPF_CGROUP_INET6_GETPEERNAME = 30, BPF_CGROUP_INET4_GETSOCKNAME = 31, BPF_CGROUP_INET6_GETSOCKNAME = 32, BPF_XDP_DEVMAP = 33, BPF_CGROUP_INET_SOCK_RELEASE = 34, BPF_XDP_CPUMAP = 35, BPF_SK_LOOKUP = 36, BPF_XDP = 37, BPF_SK_SKB_VERDICT = 38, BPF_SK_REUSEPORT_SELECT = 39, BPF_SK_REUSEPORT_SELECT_OR_MIGRATE = 40, BPF_PERF_EVENT = 41, BPF_TRACE_KPROBE_MULTI = 42, BPF_LSM_CGROUP = 43, BPF_STRUCT_OPS = 44, BPF_NETFILTER = 45, BPF_TCX_INGRESS = 46, BPF_TCX_EGRESS = 47, BPF_TRACE_UPROBE_MULTI 
= 48, BPF_CGROUP_UNIX_CONNECT = 49, BPF_CGROUP_UNIX_SENDMSG = 50, BPF_CGROUP_UNIX_RECVMSG = 51, BPF_CGROUP_UNIX_GETPEERNAME = 52, BPF_CGROUP_UNIX_GETSOCKNAME = 53, BPF_NETKIT_PRIMARY = 54, BPF_NETKIT_PEER = 55, __MAX_BPF_ATTACH_TYPE = 56, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_link_type { BPF_LINK_TYPE_UNSPEC = 0, BPF_LINK_TYPE_RAW_TRACEPOINT = 1, BPF_LINK_TYPE_TRACING = 2, BPF_LINK_TYPE_CGROUP = 3, BPF_LINK_TYPE_ITER = 4, BPF_LINK_TYPE_NETNS = 5, BPF_LINK_TYPE_XDP = 6, BPF_LINK_TYPE_PERF_EVENT = 7, BPF_LINK_TYPE_KPROBE_MULTI = 8, BPF_LINK_TYPE_STRUCT_OPS = 9, BPF_LINK_TYPE_NETFILTER = 10, BPF_LINK_TYPE_TCX = 11, BPF_LINK_TYPE_UPROBE_MULTI = 12, BPF_LINK_TYPE_NETKIT = 13, __MAX_BPF_LINK_TYPE = 14, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_perf_event_type { BPF_PERF_EVENT_UNSPEC = 0, BPF_PERF_EVENT_UPROBE = 1, BPF_PERF_EVENT_URETPROBE = 2, BPF_PERF_EVENT_KPROBE = 3, BPF_PERF_EVENT_KRETPROBE = 4, BPF_PERF_EVENT_TRACEPOINT = 5, BPF_PERF_EVENT_EVENT = 6, } pub const BPF_F_KPROBE_MULTI_RETURN: _bindgen_ty_2 = 1; pub type _bindgen_ty_2 = ::core::ffi::c_uint; pub const BPF_F_UPROBE_MULTI_RETURN: _bindgen_ty_3 = 1; pub type _bindgen_ty_3 = ::core::ffi::c_uint; pub const BPF_ANY: _bindgen_ty_4 = 0; pub const BPF_NOEXIST: _bindgen_ty_4 = 1; pub const BPF_EXIST: _bindgen_ty_4 = 2; pub const BPF_F_LOCK: _bindgen_ty_4 = 4; pub type _bindgen_ty_4 = ::core::ffi::c_uint; pub const BPF_F_NO_PREALLOC: _bindgen_ty_5 = 1; pub const BPF_F_NO_COMMON_LRU: _bindgen_ty_5 = 2; pub const BPF_F_NUMA_NODE: _bindgen_ty_5 = 4; pub const BPF_F_RDONLY: _bindgen_ty_5 = 8; pub const BPF_F_WRONLY: _bindgen_ty_5 = 16; pub const BPF_F_STACK_BUILD_ID: _bindgen_ty_5 = 32; pub const BPF_F_ZERO_SEED: _bindgen_ty_5 = 64; pub const BPF_F_RDONLY_PROG: _bindgen_ty_5 = 128; pub const BPF_F_WRONLY_PROG: _bindgen_ty_5 = 256; pub const BPF_F_CLONE: _bindgen_ty_5 = 512; pub const BPF_F_MMAPABLE: _bindgen_ty_5 = 1024; pub const 
BPF_F_PRESERVE_ELEMS: _bindgen_ty_5 = 2048; pub const BPF_F_INNER_MAP: _bindgen_ty_5 = 4096; pub const BPF_F_LINK: _bindgen_ty_5 = 8192; pub const BPF_F_PATH_FD: _bindgen_ty_5 = 16384; pub const BPF_F_VTYPE_BTF_OBJ_FD: _bindgen_ty_5 = 32768; pub const BPF_F_TOKEN_FD: _bindgen_ty_5 = 65536; pub const BPF_F_SEGV_ON_FAULT: _bindgen_ty_5 = 131072; pub const BPF_F_NO_USER_CONV: _bindgen_ty_5 = 262144; pub type _bindgen_ty_5 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum bpf_stats_type { BPF_STATS_RUN_TIME = 0, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr { pub __bindgen_anon_1: bpf_attr__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_2, pub batch: bpf_attr__bindgen_ty_3, pub __bindgen_anon_3: bpf_attr__bindgen_ty_4, pub __bindgen_anon_4: bpf_attr__bindgen_ty_5, pub __bindgen_anon_5: bpf_attr__bindgen_ty_6, pub test: bpf_attr__bindgen_ty_7, pub __bindgen_anon_6: bpf_attr__bindgen_ty_8, pub info: bpf_attr__bindgen_ty_9, pub query: bpf_attr__bindgen_ty_10, pub raw_tracepoint: bpf_attr__bindgen_ty_11, pub __bindgen_anon_7: bpf_attr__bindgen_ty_12, pub task_fd_query: bpf_attr__bindgen_ty_13, pub link_create: bpf_attr__bindgen_ty_14, pub link_update: bpf_attr__bindgen_ty_15, pub link_detach: bpf_attr__bindgen_ty_16, pub enable_stats: bpf_attr__bindgen_ty_17, pub iter_create: bpf_attr__bindgen_ty_18, pub prog_bind_map: bpf_attr__bindgen_ty_19, pub token_create: bpf_attr__bindgen_ty_20, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_1 { pub map_type: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub inner_map_fd: __u32, pub numa_node: __u32, pub map_name: [::core::ffi::c_char; 16usize], pub map_ifindex: __u32, pub btf_fd: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_value_type_id: __u32, pub map_extra: __u64, pub value_type_btf_obj_fd: __s32, pub map_token_fd: __s32, } #[repr(C)] #[derive(Copy, 
Clone)] pub struct bpf_attr__bindgen_ty_2 { pub map_fd: __u32, pub key: __u64, pub __bindgen_anon_1: bpf_attr__bindgen_ty_2__bindgen_ty_1, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_2__bindgen_ty_1 { pub value: __u64, pub next_key: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_3 { pub in_batch: __u64, pub out_batch: __u64, pub keys: __u64, pub values: __u64, pub count: __u32, pub map_fd: __u32, pub elem_flags: __u64, pub flags: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_4 { pub prog_type: __u32, pub insn_cnt: __u32, pub insns: __u64, pub license: __u64, pub log_level: __u32, pub log_size: __u32, pub log_buf: __u64, pub kern_version: __u32, pub prog_flags: __u32, pub prog_name: [::core::ffi::c_char; 16usize], pub prog_ifindex: __u32, pub expected_attach_type: __u32, pub prog_btf_fd: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub func_info_cnt: __u32, pub line_info_rec_size: __u32, pub line_info: __u64, pub line_info_cnt: __u32, pub attach_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_4__bindgen_ty_1, pub core_relo_cnt: __u32, pub fd_array: __u64, pub core_relos: __u64, pub core_relo_rec_size: __u32, pub log_true_size: __u32, pub prog_token_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_4__bindgen_ty_1 { pub attach_prog_fd: __u32, pub attach_btf_obj_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_5 { pub pathname: __u64, pub bpf_fd: __u32, pub file_flags: __u32, pub path_fd: __s32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_6__bindgen_ty_1, pub attach_bpf_fd: __u32, pub attach_type: __u32, pub attach_flags: __u32, pub replace_bpf_fd: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_6__bindgen_ty_2, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union 
bpf_attr__bindgen_ty_6__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_6__bindgen_ty_2 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_7 { pub prog_fd: __u32, pub retval: __u32, pub data_size_in: __u32, pub data_size_out: __u32, pub data_in: __u64, pub data_out: __u64, pub repeat: __u32, pub duration: __u32, pub ctx_size_in: __u32, pub ctx_size_out: __u32, pub ctx_in: __u64, pub ctx_out: __u64, pub flags: __u32, pub cpu: __u32, pub batch_size: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_8__bindgen_ty_1, pub next_id: __u32, pub open_flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_8__bindgen_ty_1 { pub start_id: __u32, pub prog_id: __u32, pub map_id: __u32, pub btf_id: __u32, pub link_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_9 { pub bpf_fd: __u32, pub info_len: __u32, pub info: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_10 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_10__bindgen_ty_1, pub attach_type: __u32, pub query_flags: __u32, pub attach_flags: __u32, pub prog_ids: __u64, pub __bindgen_anon_2: bpf_attr__bindgen_ty_10__bindgen_ty_2, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub prog_attach_flags: __u64, pub link_ids: __u64, pub link_attach_flags: __u64, pub revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_1 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_10__bindgen_ty_2 { pub prog_cnt: __u32, pub count: __u32, } impl bpf_attr__bindgen_ty_10 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: 
__BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_11 { pub name: __u64, pub prog_fd: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_attr__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_12 { pub btf: __u64, pub btf_log_buf: __u64, pub btf_size: __u32, pub btf_log_size: __u32, pub btf_log_level: __u32, pub btf_log_true_size: __u32, pub btf_flags: __u32, pub btf_token_fd: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_13 { pub pid: __u32, pub fd: __u32, pub flags: __u32, pub buf_len: __u32, pub buf: __u64, pub prog_id: __u32, pub fd_type: __u32, pub probe_offset: __u64, pub probe_addr: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_1, pub __bindgen_anon_2: bpf_attr__bindgen_ty_14__bindgen_ty_2, pub attach_type: __u32, pub flags: __u32, pub __bindgen_anon_3: bpf_attr__bindgen_ty_14__bindgen_ty_3, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_1 { pub prog_fd: __u32, pub map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_2 { pub target_fd: __u32, pub target_ifindex: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3 { pub target_btf_id: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1, pub perf_event: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2, pub kprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3, pub tracing: 
bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4, pub netfilter: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5, pub tcx: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6, pub uprobe_multi: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7, pub netkit: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_1 { pub iter_info: __u64, pub iter_info_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_2 { pub bpf_cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_3 { pub flags: __u32, pub cnt: __u32, pub syms: __u64, pub addrs: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_4 { pub target_btf_id: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_5 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_6__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_7 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub cnt: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8 { pub __bindgen_anon_1: bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1, pub expected_revision: __u64, } #[repr(C)] 
#[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_14__bindgen_ty_3__bindgen_ty_8__bindgen_ty_1 { pub relative_fd: __u32, pub relative_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_attr__bindgen_ty_15 { pub link_fd: __u32, pub __bindgen_anon_1: bpf_attr__bindgen_ty_15__bindgen_ty_1, pub flags: __u32, pub __bindgen_anon_2: bpf_attr__bindgen_ty_15__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_1 { pub new_prog_fd: __u32, pub new_map_fd: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_attr__bindgen_ty_15__bindgen_ty_2 { pub old_prog_fd: __u32, pub old_map_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_16 { pub link_fd: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_17 { pub type_: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_18 { pub link_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_19 { pub prog_fd: __u32, pub map_fd: __u32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_attr__bindgen_ty_20 { pub flags: __u32, pub bpffs_fd: __u32, } pub const BPF_F_RECOMPUTE_CSUM: _bindgen_ty_6 = 1; pub const BPF_F_INVALIDATE_HASH: _bindgen_ty_6 = 2; pub type _bindgen_ty_6 = ::core::ffi::c_uint; pub const BPF_F_HDR_FIELD_MASK: _bindgen_ty_7 = 15; pub type _bindgen_ty_7 = ::core::ffi::c_uint; pub const BPF_F_PSEUDO_HDR: _bindgen_ty_8 = 16; pub const BPF_F_MARK_MANGLED_0: _bindgen_ty_8 = 32; pub const BPF_F_MARK_ENFORCE: _bindgen_ty_8 = 64; pub type _bindgen_ty_8 = ::core::ffi::c_uint; pub const BPF_F_INGRESS: _bindgen_ty_9 = 1; pub type _bindgen_ty_9 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_IPV6: _bindgen_ty_10 = 1; pub type _bindgen_ty_10 = ::core::ffi::c_uint; pub const BPF_F_SKIP_FIELD_MASK: _bindgen_ty_11 = 255; pub const BPF_F_USER_STACK: _bindgen_ty_11 = 256; pub const BPF_F_FAST_STACK_CMP: 
_bindgen_ty_11 = 512; pub const BPF_F_REUSE_STACKID: _bindgen_ty_11 = 1024; pub const BPF_F_USER_BUILD_ID: _bindgen_ty_11 = 2048; pub type _bindgen_ty_11 = ::core::ffi::c_uint; pub const BPF_F_ZERO_CSUM_TX: _bindgen_ty_12 = 2; pub const BPF_F_DONT_FRAGMENT: _bindgen_ty_12 = 4; pub const BPF_F_SEQ_NUMBER: _bindgen_ty_12 = 8; pub const BPF_F_NO_TUNNEL_KEY: _bindgen_ty_12 = 16; pub type _bindgen_ty_12 = ::core::ffi::c_uint; pub const BPF_F_TUNINFO_FLAGS: _bindgen_ty_13 = 16; pub type _bindgen_ty_13 = ::core::ffi::c_uint; pub const BPF_F_INDEX_MASK: _bindgen_ty_14 = 4294967295; pub const BPF_F_CURRENT_CPU: _bindgen_ty_14 = 4294967295; pub const BPF_F_CTXLEN_MASK: _bindgen_ty_14 = 4503595332403200; pub type _bindgen_ty_14 = ::core::ffi::c_ulong; pub const BPF_F_CURRENT_NETNS: _bindgen_ty_15 = -1; pub type _bindgen_ty_15 = ::core::ffi::c_int; pub const BPF_F_ADJ_ROOM_FIXED_GSO: _bindgen_ty_17 = 1; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV4: _bindgen_ty_17 = 2; pub const BPF_F_ADJ_ROOM_ENCAP_L3_IPV6: _bindgen_ty_17 = 4; pub const BPF_F_ADJ_ROOM_ENCAP_L4_GRE: _bindgen_ty_17 = 8; pub const BPF_F_ADJ_ROOM_ENCAP_L4_UDP: _bindgen_ty_17 = 16; pub const BPF_F_ADJ_ROOM_NO_CSUM_RESET: _bindgen_ty_17 = 32; pub const BPF_F_ADJ_ROOM_ENCAP_L2_ETH: _bindgen_ty_17 = 64; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV4: _bindgen_ty_17 = 128; pub const BPF_F_ADJ_ROOM_DECAP_L3_IPV6: _bindgen_ty_17 = 256; pub type _bindgen_ty_17 = ::core::ffi::c_uint; pub const BPF_F_SYSCTL_BASE_NAME: _bindgen_ty_19 = 1; pub type _bindgen_ty_19 = ::core::ffi::c_uint; pub const BPF_F_GET_BRANCH_RECORDS_SIZE: _bindgen_ty_21 = 1; pub type _bindgen_ty_21 = ::core::ffi::c_uint; pub const BPF_RINGBUF_BUSY_BIT: _bindgen_ty_24 = 2147483648; pub const BPF_RINGBUF_DISCARD_BIT: _bindgen_ty_24 = 1073741824; pub const BPF_RINGBUF_HDR_SZ: _bindgen_ty_24 = 8; pub type _bindgen_ty_24 = ::core::ffi::c_uint; pub const BPF_F_BPRM_SECUREEXEC: _bindgen_ty_26 = 1; pub type _bindgen_ty_26 = ::core::ffi::c_uint; pub const BPF_F_BROADCAST: 
_bindgen_ty_27 = 8; pub const BPF_F_EXCLUDE_INGRESS: _bindgen_ty_27 = 16; pub type _bindgen_ty_27 = ::core::ffi::c_uint; #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_devmap_val { pub ifindex: __u32, pub bpf_prog: bpf_devmap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_devmap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_cpumap_val { pub qsize: __u32, pub bpf_prog: bpf_cpumap_val__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_cpumap_val__bindgen_ty_1 { pub fd: ::core::ffi::c_int, pub id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_prog_info { pub type_: __u32, pub id: __u32, pub tag: [__u8; 8usize], pub jited_prog_len: __u32, pub xlated_prog_len: __u32, pub jited_prog_insns: __u64, pub xlated_prog_insns: __u64, pub load_time: __u64, pub created_by_uid: __u32, pub nr_map_ids: __u32, pub map_ids: __u64, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub netns_dev: __u64, pub netns_ino: __u64, pub nr_jited_ksyms: __u32, pub nr_jited_func_lens: __u32, pub jited_ksyms: __u64, pub jited_func_lens: __u64, pub btf_id: __u32, pub func_info_rec_size: __u32, pub func_info: __u64, pub nr_func_info: __u32, pub nr_line_info: __u32, pub line_info: __u64, pub jited_line_info: __u64, pub nr_jited_line_info: __u32, pub line_info_rec_size: __u32, pub jited_line_info_rec_size: __u32, pub nr_prog_tags: __u32, pub prog_tags: __u64, pub run_time_ns: __u64, pub run_cnt: __u64, pub recursion_misses: __u64, pub verified_insns: __u32, pub attach_btf_obj_id: __u32, pub attach_btf_id: __u32, } impl bpf_prog_info { #[inline] pub fn gpl_compatible(&self) -> __u32 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u32) } } #[inline] pub fn set_gpl_compatible(&mut self, val: __u32) { unsafe { let val: u32 = ::core::mem::transmute(val); 
self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn new_bitfield_1(gpl_compatible: __u32) -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let gpl_compatible: u32 = unsafe { ::core::mem::transmute(gpl_compatible) }; gpl_compatible as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_map_info { pub type_: __u32, pub id: __u32, pub key_size: __u32, pub value_size: __u32, pub max_entries: __u32, pub map_flags: __u32, pub name: [::core::ffi::c_char; 16usize], pub ifindex: __u32, pub btf_vmlinux_value_type_id: __u32, pub netns_dev: __u64, pub netns_ino: __u64, pub btf_id: __u32, pub btf_key_type_id: __u32, pub btf_value_type_id: __u32, pub btf_vmlinux_id: __u32, pub map_extra: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_btf_info { pub btf: __u64, pub btf_size: __u32, pub id: __u32, pub name: __u64, pub name_len: __u32, pub kernel_btf: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info { pub type_: __u32, pub id: __u32, pub prog_id: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1 { pub raw_tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_1, pub tracing: bpf_link_info__bindgen_ty_1__bindgen_ty_2, pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_3, pub iter: bpf_link_info__bindgen_ty_1__bindgen_ty_4, pub netns: bpf_link_info__bindgen_ty_1__bindgen_ty_5, pub xdp: bpf_link_info__bindgen_ty_1__bindgen_ty_6, pub struct_ops: bpf_link_info__bindgen_ty_1__bindgen_ty_7, pub netfilter: bpf_link_info__bindgen_ty_1__bindgen_ty_8, pub kprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_9, pub uprobe_multi: bpf_link_info__bindgen_ty_1__bindgen_ty_10, pub perf_event: bpf_link_info__bindgen_ty_1__bindgen_ty_11, pub tcx: bpf_link_info__bindgen_ty_1__bindgen_ty_12, pub netkit: 
bpf_link_info__bindgen_ty_1__bindgen_ty_13, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_1 { pub tp_name: __u64, pub tp_name_len: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_2 { pub attach_type: __u32, pub target_obj_id: __u32, pub target_btf_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_3 { pub cgroup_id: __u64, pub attach_type: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4 { pub target_name: __u64, pub target_name_len: __u32, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1, pub __bindgen_anon_2: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1 { pub map: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_1__bindgen_ty_1 { pub map_id: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2 { pub cgroup: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1, pub task: bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_1 { pub cgroup_id: __u64, pub order: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_4__bindgen_ty_2__bindgen_ty_2 { pub tid: __u32, pub pid: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_5 { pub netns_ino: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_6 { pub ifindex: __u32, } 
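The `type_` field of `bpf_link_info` above is a raw `__u32` that callers must map back to `bpf_link_type`. The changelog for this release describes adding fallible u32-to-enum conversions with a new `InvalidTypeBinding` error; the following is a self-contained sketch of that pattern with illustrative names (`LinkType`, `InvalidTypeBinding`'s shape here is assumed, not the crate's actual definition):

```rust
// Sketch of the u32 -> enum conversion pattern described in the changelog.
// `LinkType` and the exact error shape are illustrative assumptions.
#[derive(Debug, PartialEq)]
enum LinkType {
    Unspec,        // BPF_LINK_TYPE_UNSPEC = 0
    RawTracepoint, // BPF_LINK_TYPE_RAW_TRACEPOINT = 1
    Tracing,       // BPF_LINK_TYPE_TRACING = 2
    Cgroup,        // BPF_LINK_TYPE_CGROUP = 3
}

// Error carrying the unrecognized raw value, so callers can report it.
#[derive(Debug, PartialEq)]
struct InvalidTypeBinding {
    value: u32,
}

impl TryFrom<u32> for LinkType {
    type Error = InvalidTypeBinding;

    fn try_from(v: u32) -> Result<Self, Self::Error> {
        Ok(match v {
            0 => LinkType::Unspec,
            1 => LinkType::RawTracepoint,
            2 => LinkType::Tracing,
            3 => LinkType::Cgroup,
            value => return Err(InvalidTypeBinding { value }),
        })
    }
}

fn main() {
    assert_eq!(LinkType::try_from(2), Ok(LinkType::Tracing));
    assert!(LinkType::try_from(99).is_err());
}
```

Values received from the kernel (e.g. via `BPF_OBJ_GET_INFO_BY_FD`) may come from a newer kernel than the bindings were generated against, which is why the conversion is fallible rather than a plain `match` that panics on unknown discriminants.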
#[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_7 { pub map_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_8 { pub pf: __u32, pub hooknum: __u32, pub priority: __s32, pub flags: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_9 { pub addrs: __u64, pub count: __u32, pub flags: __u32, pub missed: __u64, pub cookies: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_10 { pub path: __u64, pub offsets: __u64, pub ref_ctr_offsets: __u64, pub cookies: __u64, pub path_size: __u32, pub count: __u32, pub flags: __u32, pub pid: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11 { pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub __bindgen_anon_1: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1 { pub uprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1, pub kprobe: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2, pub tracepoint: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3, pub event: bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_1 { pub file_name: __u64, pub name_len: __u32, pub offset: __u32, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_2 { pub func_name: __u64, pub name_len: __u32, pub offset: __u32, pub addr: __u64, pub missed: __u64, pub cookie: __u64, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct 
bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { pub tp_name: __u64, pub name_len: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_3 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { pub config: __u64, pub type_: __u32, pub _bitfield_align_1: [u8; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 4usize]>, pub cookie: __u64, } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11__bindgen_ty_1__bindgen_ty_4 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } impl bpf_link_info__bindgen_ty_1__bindgen_ty_11 { #[inline] pub fn new_bitfield_1() -> __BindgenBitfieldUnit<[u8; 4usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 4usize]> = Default::default(); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_12 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_link_info__bindgen_ty_1__bindgen_ty_13 { pub ifindex: __u32, pub attach_type: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_func_info { pub insn_off: __u32, pub type_id: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct bpf_line_info { pub insn_off: __u32, pub file_name_off: __u32, pub line_off: __u32, pub line_col: __u32, } pub const BPF_F_TIMER_ABS: _bindgen_ty_41 = 1; pub const BPF_F_TIMER_CPU_PIN: _bindgen_ty_41 = 2; pub type _bindgen_ty_41 = ::core::ffi::c_uint; #[repr(C)] 
#[derive(Debug, Copy, Clone)] pub struct btf_header { pub magic: __u16, pub version: __u8, pub flags: __u8, pub hdr_len: __u32, pub type_off: __u32, pub type_len: __u32, pub str_off: __u32, pub str_len: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub struct btf_type { pub name_off: __u32, pub info: __u32, pub __bindgen_anon_1: btf_type__bindgen_ty_1, } #[repr(C)] #[derive(Copy, Clone)] pub union btf_type__bindgen_ty_1 { pub size: __u32, pub type_: __u32, } pub const BTF_KIND_UNKN: _bindgen_ty_42 = 0; pub const BTF_KIND_INT: _bindgen_ty_42 = 1; pub const BTF_KIND_PTR: _bindgen_ty_42 = 2; pub const BTF_KIND_ARRAY: _bindgen_ty_42 = 3; pub const BTF_KIND_STRUCT: _bindgen_ty_42 = 4; pub const BTF_KIND_UNION: _bindgen_ty_42 = 5; pub const BTF_KIND_ENUM: _bindgen_ty_42 = 6; pub const BTF_KIND_FWD: _bindgen_ty_42 = 7; pub const BTF_KIND_TYPEDEF: _bindgen_ty_42 = 8; pub const BTF_KIND_VOLATILE: _bindgen_ty_42 = 9; pub const BTF_KIND_CONST: _bindgen_ty_42 = 10; pub const BTF_KIND_RESTRICT: _bindgen_ty_42 = 11; pub const BTF_KIND_FUNC: _bindgen_ty_42 = 12; pub const BTF_KIND_FUNC_PROTO: _bindgen_ty_42 = 13; pub const BTF_KIND_VAR: _bindgen_ty_42 = 14; pub const BTF_KIND_DATASEC: _bindgen_ty_42 = 15; pub const BTF_KIND_FLOAT: _bindgen_ty_42 = 16; pub const BTF_KIND_DECL_TAG: _bindgen_ty_42 = 17; pub const BTF_KIND_TYPE_TAG: _bindgen_ty_42 = 18; pub const BTF_KIND_ENUM64: _bindgen_ty_42 = 19; pub const NR_BTF_KINDS: _bindgen_ty_42 = 20; pub const BTF_KIND_MAX: _bindgen_ty_42 = 19; pub type _bindgen_ty_42 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_enum { pub name_off: __u32, pub val: __s32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_array { pub type_: __u32, pub index_type: __u32, pub nelems: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_member { pub name_off: __u32, pub type_: __u32, pub offset: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_param { pub name_off: __u32, pub type_: __u32, } 
pub const BTF_VAR_STATIC: _bindgen_ty_43 = 0; pub const BTF_VAR_GLOBAL_ALLOCATED: _bindgen_ty_43 = 1; pub const BTF_VAR_GLOBAL_EXTERN: _bindgen_ty_43 = 2; pub type _bindgen_ty_43 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum btf_func_linkage { BTF_FUNC_STATIC = 0, BTF_FUNC_GLOBAL = 1, BTF_FUNC_EXTERN = 2, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var { pub linkage: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_var_secinfo { pub type_: __u32, pub offset: __u32, pub size: __u32, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct btf_decl_tag { pub component_idx: __s32, } pub const IFLA_XDP_UNSPEC: _bindgen_ty_92 = 0; pub const IFLA_XDP_FD: _bindgen_ty_92 = 1; pub const IFLA_XDP_ATTACHED: _bindgen_ty_92 = 2; pub const IFLA_XDP_FLAGS: _bindgen_ty_92 = 3; pub const IFLA_XDP_PROG_ID: _bindgen_ty_92 = 4; pub const IFLA_XDP_DRV_PROG_ID: _bindgen_ty_92 = 5; pub const IFLA_XDP_SKB_PROG_ID: _bindgen_ty_92 = 6; pub const IFLA_XDP_HW_PROG_ID: _bindgen_ty_92 = 7; pub const IFLA_XDP_EXPECTED_FD: _bindgen_ty_92 = 8; pub const __IFLA_XDP_MAX: _bindgen_ty_92 = 9; pub type _bindgen_ty_92 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum nf_inet_hooks { NF_INET_PRE_ROUTING = 0, NF_INET_LOCAL_IN = 1, NF_INET_FORWARD = 2, NF_INET_LOCAL_OUT = 3, NF_INET_POST_ROUTING = 4, NF_INET_NUMHOOKS = 5, } pub const NFPROTO_UNSPEC: _bindgen_ty_99 = 0; pub const NFPROTO_INET: _bindgen_ty_99 = 1; pub const NFPROTO_IPV4: _bindgen_ty_99 = 2; pub const NFPROTO_ARP: _bindgen_ty_99 = 3; pub const NFPROTO_NETDEV: _bindgen_ty_99 = 5; pub const NFPROTO_BRIDGE: _bindgen_ty_99 = 7; pub const NFPROTO_IPV6: _bindgen_ty_99 = 10; pub const NFPROTO_DECNET: _bindgen_ty_99 = 12; pub const NFPROTO_NUMPROTO: _bindgen_ty_99 = 13; pub type _bindgen_ty_99 = ::core::ffi::c_uint; #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_type_id { PERF_TYPE_HARDWARE = 0, 
PERF_TYPE_SOFTWARE = 1, PERF_TYPE_TRACEPOINT = 2, PERF_TYPE_HW_CACHE = 3, PERF_TYPE_RAW = 4, PERF_TYPE_BREAKPOINT = 5, PERF_TYPE_MAX = 6, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_id { PERF_COUNT_HW_CPU_CYCLES = 0, PERF_COUNT_HW_INSTRUCTIONS = 1, PERF_COUNT_HW_CACHE_REFERENCES = 2, PERF_COUNT_HW_CACHE_MISSES = 3, PERF_COUNT_HW_BRANCH_INSTRUCTIONS = 4, PERF_COUNT_HW_BRANCH_MISSES = 5, PERF_COUNT_HW_BUS_CYCLES = 6, PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7, PERF_COUNT_HW_STALLED_CYCLES_BACKEND = 8, PERF_COUNT_HW_REF_CPU_CYCLES = 9, PERF_COUNT_HW_MAX = 10, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_id { PERF_COUNT_HW_CACHE_L1D = 0, PERF_COUNT_HW_CACHE_L1I = 1, PERF_COUNT_HW_CACHE_LL = 2, PERF_COUNT_HW_CACHE_DTLB = 3, PERF_COUNT_HW_CACHE_ITLB = 4, PERF_COUNT_HW_CACHE_BPU = 5, PERF_COUNT_HW_CACHE_NODE = 6, PERF_COUNT_HW_CACHE_MAX = 7, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_id { PERF_COUNT_HW_CACHE_OP_READ = 0, PERF_COUNT_HW_CACHE_OP_WRITE = 1, PERF_COUNT_HW_CACHE_OP_PREFETCH = 2, PERF_COUNT_HW_CACHE_OP_MAX = 3, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_hw_cache_op_result_id { PERF_COUNT_HW_CACHE_RESULT_ACCESS = 0, PERF_COUNT_HW_CACHE_RESULT_MISS = 1, PERF_COUNT_HW_CACHE_RESULT_MAX = 2, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_sw_ids { PERF_COUNT_SW_CPU_CLOCK = 0, PERF_COUNT_SW_TASK_CLOCK = 1, PERF_COUNT_SW_PAGE_FAULTS = 2, PERF_COUNT_SW_CONTEXT_SWITCHES = 3, PERF_COUNT_SW_CPU_MIGRATIONS = 4, PERF_COUNT_SW_PAGE_FAULTS_MIN = 5, PERF_COUNT_SW_PAGE_FAULTS_MAJ = 6, PERF_COUNT_SW_ALIGNMENT_FAULTS = 7, PERF_COUNT_SW_EMULATION_FAULTS = 8, PERF_COUNT_SW_DUMMY = 9, PERF_COUNT_SW_BPF_OUTPUT = 10, PERF_COUNT_SW_CGROUP_SWITCHES = 11, PERF_COUNT_SW_MAX = 12, } #[repr(u32)] #[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_sample_format { 
PERF_SAMPLE_IP = 1, PERF_SAMPLE_TID = 2, PERF_SAMPLE_TIME = 4, PERF_SAMPLE_ADDR = 8, PERF_SAMPLE_READ = 16, PERF_SAMPLE_CALLCHAIN = 32, PERF_SAMPLE_ID = 64, PERF_SAMPLE_CPU = 128, PERF_SAMPLE_PERIOD = 256, PERF_SAMPLE_STREAM_ID = 512, PERF_SAMPLE_RAW = 1024, PERF_SAMPLE_BRANCH_STACK = 2048, PERF_SAMPLE_REGS_USER = 4096, PERF_SAMPLE_STACK_USER = 8192, PERF_SAMPLE_WEIGHT = 16384, PERF_SAMPLE_DATA_SRC = 32768, PERF_SAMPLE_IDENTIFIER = 65536, PERF_SAMPLE_TRANSACTION = 131072, PERF_SAMPLE_REGS_INTR = 262144, PERF_SAMPLE_PHYS_ADDR = 524288, PERF_SAMPLE_AUX = 1048576, PERF_SAMPLE_CGROUP = 2097152, PERF_SAMPLE_DATA_PAGE_SIZE = 4194304, PERF_SAMPLE_CODE_PAGE_SIZE = 8388608, PERF_SAMPLE_WEIGHT_STRUCT = 16777216, PERF_SAMPLE_MAX = 33554432, } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_attr { pub type_: __u32, pub size: __u32, pub config: __u64, pub __bindgen_anon_1: perf_event_attr__bindgen_ty_1, pub sample_type: __u64, pub read_format: __u64, pub _bitfield_align_1: [u32; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, pub __bindgen_anon_2: perf_event_attr__bindgen_ty_2, pub bp_type: __u32, pub __bindgen_anon_3: perf_event_attr__bindgen_ty_3, pub __bindgen_anon_4: perf_event_attr__bindgen_ty_4, pub branch_sample_type: __u64, pub sample_regs_user: __u64, pub sample_stack_user: __u32, pub clockid: __s32, pub sample_regs_intr: __u64, pub aux_watermark: __u32, pub sample_max_stack: __u16, pub __reserved_2: __u16, pub aux_sample_size: __u32, pub __reserved_3: __u32, pub sig_data: __u64, pub config3: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_1 { pub sample_period: __u64, pub sample_freq: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_2 { pub wakeup_events: __u32, pub wakeup_watermark: __u32, } #[repr(C)] #[derive(Copy, Clone)] pub union perf_event_attr__bindgen_ty_3 { pub bp_addr: __u64, pub kprobe_func: __u64, pub uprobe_path: __u64, pub config1: __u64, } #[repr(C)] #[derive(Copy, 
Clone)] pub union perf_event_attr__bindgen_ty_4 { pub bp_len: __u64, pub kprobe_addr: __u64, pub probe_offset: __u64, pub config2: __u64, } impl perf_event_attr { #[inline] pub fn disabled(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_disabled(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn inherit(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_inherit(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn pinned(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_pinned(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn exclusive(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_exclusive(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn exclude_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_exclude_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn exclude_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] pub fn set_exclude_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn exclude_hv(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 1u8) as u64) } } #[inline] pub fn 
set_exclude_hv(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 1u8, val as u64) } } #[inline] pub fn exclude_idle(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(7usize, 1u8) as u64) } } #[inline] pub fn set_exclude_idle(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(7usize, 1u8, val as u64) } } #[inline] pub fn mmap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(8usize, 1u8) as u64) } } #[inline] pub fn set_mmap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(8usize, 1u8, val as u64) } } #[inline] pub fn comm(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(9usize, 1u8) as u64) } } #[inline] pub fn set_comm(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(9usize, 1u8, val as u64) } } #[inline] pub fn freq(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(10usize, 1u8) as u64) } } #[inline] pub fn set_freq(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(10usize, 1u8, val as u64) } } #[inline] pub fn inherit_stat(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(11usize, 1u8) as u64) } } #[inline] pub fn set_inherit_stat(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(11usize, 1u8, val as u64) } } #[inline] pub fn enable_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(12usize, 1u8) as u64) } } #[inline] pub fn set_enable_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(12usize, 1u8, val as u64) } } #[inline] pub fn task(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(13usize, 1u8) as u64) } } #[inline] pub fn set_task(&mut self, val: 
__u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(13usize, 1u8, val as u64) } } #[inline] pub fn watermark(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(14usize, 1u8) as u64) } } #[inline] pub fn set_watermark(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(14usize, 1u8, val as u64) } } #[inline] pub fn precise_ip(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(15usize, 2u8) as u64) } } #[inline] pub fn set_precise_ip(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(15usize, 2u8, val as u64) } } #[inline] pub fn mmap_data(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(17usize, 1u8) as u64) } } #[inline] pub fn set_mmap_data(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(17usize, 1u8, val as u64) } } #[inline] pub fn sample_id_all(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(18usize, 1u8) as u64) } } #[inline] pub fn set_sample_id_all(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(18usize, 1u8, val as u64) } } #[inline] pub fn exclude_host(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(19usize, 1u8) as u64) } } #[inline] pub fn set_exclude_host(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(19usize, 1u8, val as u64) } } #[inline] pub fn exclude_guest(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(20usize, 1u8) as u64) } } #[inline] pub fn set_exclude_guest(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(20usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_kernel(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(21usize, 1u8) as u64) } } #[inline] pub fn 
set_exclude_callchain_kernel(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(21usize, 1u8, val as u64) } } #[inline] pub fn exclude_callchain_user(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(22usize, 1u8) as u64) } } #[inline] pub fn set_exclude_callchain_user(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(22usize, 1u8, val as u64) } } #[inline] pub fn mmap2(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(23usize, 1u8) as u64) } } #[inline] pub fn set_mmap2(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(23usize, 1u8, val as u64) } } #[inline] pub fn comm_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(24usize, 1u8) as u64) } } #[inline] pub fn set_comm_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(24usize, 1u8, val as u64) } } #[inline] pub fn use_clockid(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(25usize, 1u8) as u64) } } #[inline] pub fn set_use_clockid(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(25usize, 1u8, val as u64) } } #[inline] pub fn context_switch(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(26usize, 1u8) as u64) } } #[inline] pub fn set_context_switch(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(26usize, 1u8, val as u64) } } #[inline] pub fn write_backward(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(27usize, 1u8) as u64) } } #[inline] pub fn set_write_backward(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(27usize, 1u8, val as u64) } } #[inline] pub fn namespaces(&self) -> __u64 { unsafe { 
::core::mem::transmute(self._bitfield_1.get(28usize, 1u8) as u64) } } #[inline] pub fn set_namespaces(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(28usize, 1u8, val as u64) } } #[inline] pub fn ksymbol(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(29usize, 1u8) as u64) } } #[inline] pub fn set_ksymbol(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(29usize, 1u8, val as u64) } } #[inline] pub fn bpf_event(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(30usize, 1u8) as u64) } } #[inline] pub fn set_bpf_event(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(30usize, 1u8, val as u64) } } #[inline] pub fn aux_output(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(31usize, 1u8) as u64) } } #[inline] pub fn set_aux_output(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(31usize, 1u8, val as u64) } } #[inline] pub fn cgroup(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(32usize, 1u8) as u64) } } #[inline] pub fn set_cgroup(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(32usize, 1u8, val as u64) } } #[inline] pub fn text_poke(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(33usize, 1u8) as u64) } } #[inline] pub fn set_text_poke(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(33usize, 1u8, val as u64) } } #[inline] pub fn build_id(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(34usize, 1u8) as u64) } } #[inline] pub fn set_build_id(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(34usize, 1u8, val as u64) } } #[inline] pub fn inherit_thread(&self) -> __u64 { unsafe { 
::core::mem::transmute(self._bitfield_1.get(35usize, 1u8) as u64) } } #[inline] pub fn set_inherit_thread(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(35usize, 1u8, val as u64) } } #[inline] pub fn remove_on_exec(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(36usize, 1u8) as u64) } } #[inline] pub fn set_remove_on_exec(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(36usize, 1u8, val as u64) } } #[inline] pub fn sigtrap(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(37usize, 1u8) as u64) } } #[inline] pub fn set_sigtrap(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(37usize, 1u8, val as u64) } } #[inline] pub fn __reserved_1(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(38usize, 26u8) as u64) } } #[inline] pub fn set___reserved_1(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(38usize, 26u8, val as u64) } } #[inline] pub fn new_bitfield_1( disabled: __u64, inherit: __u64, pinned: __u64, exclusive: __u64, exclude_user: __u64, exclude_kernel: __u64, exclude_hv: __u64, exclude_idle: __u64, mmap: __u64, comm: __u64, freq: __u64, inherit_stat: __u64, enable_on_exec: __u64, task: __u64, watermark: __u64, precise_ip: __u64, mmap_data: __u64, sample_id_all: __u64, exclude_host: __u64, exclude_guest: __u64, exclude_callchain_kernel: __u64, exclude_callchain_user: __u64, mmap2: __u64, comm_exec: __u64, use_clockid: __u64, context_switch: __u64, write_backward: __u64, namespaces: __u64, ksymbol: __u64, bpf_event: __u64, aux_output: __u64, cgroup: __u64, text_poke: __u64, build_id: __u64, inherit_thread: __u64, remove_on_exec: __u64, sigtrap: __u64, __reserved_1: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = 
Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let disabled: u64 = unsafe { ::core::mem::transmute(disabled) }; disabled as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let inherit: u64 = unsafe { ::core::mem::transmute(inherit) }; inherit as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let pinned: u64 = unsafe { ::core::mem::transmute(pinned) }; pinned as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let exclusive: u64 = unsafe { ::core::mem::transmute(exclusive) }; exclusive as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let exclude_user: u64 = unsafe { ::core::mem::transmute(exclude_user) }; exclude_user as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let exclude_kernel: u64 = unsafe { ::core::mem::transmute(exclude_kernel) }; exclude_kernel as u64 }); __bindgen_bitfield_unit.set(6usize, 1u8, { let exclude_hv: u64 = unsafe { ::core::mem::transmute(exclude_hv) }; exclude_hv as u64 }); __bindgen_bitfield_unit.set(7usize, 1u8, { let exclude_idle: u64 = unsafe { ::core::mem::transmute(exclude_idle) }; exclude_idle as u64 }); __bindgen_bitfield_unit.set(8usize, 1u8, { let mmap: u64 = unsafe { ::core::mem::transmute(mmap) }; mmap as u64 }); __bindgen_bitfield_unit.set(9usize, 1u8, { let comm: u64 = unsafe { ::core::mem::transmute(comm) }; comm as u64 }); __bindgen_bitfield_unit.set(10usize, 1u8, { let freq: u64 = unsafe { ::core::mem::transmute(freq) }; freq as u64 }); __bindgen_bitfield_unit.set(11usize, 1u8, { let inherit_stat: u64 = unsafe { ::core::mem::transmute(inherit_stat) }; inherit_stat as u64 }); __bindgen_bitfield_unit.set(12usize, 1u8, { let enable_on_exec: u64 = unsafe { ::core::mem::transmute(enable_on_exec) }; enable_on_exec as u64 }); __bindgen_bitfield_unit.set(13usize, 1u8, { let task: u64 = unsafe { ::core::mem::transmute(task) }; task as u64 }); __bindgen_bitfield_unit.set(14usize, 1u8, { let watermark: u64 = unsafe { ::core::mem::transmute(watermark) }; watermark as u64 }); 
__bindgen_bitfield_unit.set(15usize, 2u8, { let precise_ip: u64 = unsafe { ::core::mem::transmute(precise_ip) }; precise_ip as u64 }); __bindgen_bitfield_unit.set(17usize, 1u8, { let mmap_data: u64 = unsafe { ::core::mem::transmute(mmap_data) }; mmap_data as u64 }); __bindgen_bitfield_unit.set(18usize, 1u8, { let sample_id_all: u64 = unsafe { ::core::mem::transmute(sample_id_all) }; sample_id_all as u64 }); __bindgen_bitfield_unit.set(19usize, 1u8, { let exclude_host: u64 = unsafe { ::core::mem::transmute(exclude_host) }; exclude_host as u64 }); __bindgen_bitfield_unit.set(20usize, 1u8, { let exclude_guest: u64 = unsafe { ::core::mem::transmute(exclude_guest) }; exclude_guest as u64 }); __bindgen_bitfield_unit.set(21usize, 1u8, { let exclude_callchain_kernel: u64 = unsafe { ::core::mem::transmute(exclude_callchain_kernel) }; exclude_callchain_kernel as u64 }); __bindgen_bitfield_unit.set(22usize, 1u8, { let exclude_callchain_user: u64 = unsafe { ::core::mem::transmute(exclude_callchain_user) }; exclude_callchain_user as u64 }); __bindgen_bitfield_unit.set(23usize, 1u8, { let mmap2: u64 = unsafe { ::core::mem::transmute(mmap2) }; mmap2 as u64 }); __bindgen_bitfield_unit.set(24usize, 1u8, { let comm_exec: u64 = unsafe { ::core::mem::transmute(comm_exec) }; comm_exec as u64 }); __bindgen_bitfield_unit.set(25usize, 1u8, { let use_clockid: u64 = unsafe { ::core::mem::transmute(use_clockid) }; use_clockid as u64 }); __bindgen_bitfield_unit.set(26usize, 1u8, { let context_switch: u64 = unsafe { ::core::mem::transmute(context_switch) }; context_switch as u64 }); __bindgen_bitfield_unit.set(27usize, 1u8, { let write_backward: u64 = unsafe { ::core::mem::transmute(write_backward) }; write_backward as u64 }); __bindgen_bitfield_unit.set(28usize, 1u8, { let namespaces: u64 = unsafe { ::core::mem::transmute(namespaces) }; namespaces as u64 }); __bindgen_bitfield_unit.set(29usize, 1u8, { let ksymbol: u64 = unsafe { ::core::mem::transmute(ksymbol) }; ksymbol as u64 }); 
__bindgen_bitfield_unit.set(30usize, 1u8, { let bpf_event: u64 = unsafe { ::core::mem::transmute(bpf_event) }; bpf_event as u64 }); __bindgen_bitfield_unit.set(31usize, 1u8, { let aux_output: u64 = unsafe { ::core::mem::transmute(aux_output) }; aux_output as u64 }); __bindgen_bitfield_unit.set(32usize, 1u8, { let cgroup: u64 = unsafe { ::core::mem::transmute(cgroup) }; cgroup as u64 }); __bindgen_bitfield_unit.set(33usize, 1u8, { let text_poke: u64 = unsafe { ::core::mem::transmute(text_poke) }; text_poke as u64 }); __bindgen_bitfield_unit.set(34usize, 1u8, { let build_id: u64 = unsafe { ::core::mem::transmute(build_id) }; build_id as u64 }); __bindgen_bitfield_unit.set(35usize, 1u8, { let inherit_thread: u64 = unsafe { ::core::mem::transmute(inherit_thread) }; inherit_thread as u64 }); __bindgen_bitfield_unit.set(36usize, 1u8, { let remove_on_exec: u64 = unsafe { ::core::mem::transmute(remove_on_exec) }; remove_on_exec as u64 }); __bindgen_bitfield_unit.set(37usize, 1u8, { let sigtrap: u64 = unsafe { ::core::mem::transmute(sigtrap) }; sigtrap as u64 }); __bindgen_bitfield_unit.set(38usize, 26u8, { let __reserved_1: u64 = unsafe { ::core::mem::transmute(__reserved_1) }; __reserved_1 as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Copy, Clone)] pub struct perf_event_mmap_page { pub version: __u32, pub compat_version: __u32, pub lock: __u32, pub index: __u32, pub offset: __s64, pub time_enabled: __u64, pub time_running: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1, pub pmc_width: __u16, pub time_shift: __u16, pub time_mult: __u32, pub time_offset: __u64, pub time_zero: __u64, pub size: __u32, pub __reserved_1: __u32, pub time_cycles: __u64, pub time_mask: __u64, pub __reserved: [__u8; 928usize], pub data_head: __u64, pub data_tail: __u64, pub data_offset: __u64, pub data_size: __u64, pub aux_head: __u64, pub aux_tail: __u64, pub aux_offset: __u64, pub aux_size: __u64, } #[repr(C)] #[derive(Copy, Clone)] pub union 
perf_event_mmap_page__bindgen_ty_1 { pub capabilities: __u64, pub __bindgen_anon_1: perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { pub _bitfield_align_1: [u64; 0], pub _bitfield_1: __BindgenBitfieldUnit<[u8; 8usize]>, } impl perf_event_mmap_page__bindgen_ty_1__bindgen_ty_1 { #[inline] pub fn cap_bit0(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(0usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(0usize, 1u8, val as u64) } } #[inline] pub fn cap_bit0_is_deprecated(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(1usize, 1u8) as u64) } } #[inline] pub fn set_cap_bit0_is_deprecated(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(1usize, 1u8, val as u64) } } #[inline] pub fn cap_user_rdpmc(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(2usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_rdpmc(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(2usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(3usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(3usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_zero(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(4usize, 1u8) as u64) } } #[inline] pub fn set_cap_user_time_zero(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(4usize, 1u8, val as u64) } } #[inline] pub fn cap_user_time_short(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(5usize, 1u8) as u64) } } #[inline] 
pub fn set_cap_user_time_short(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(5usize, 1u8, val as u64) } } #[inline] pub fn cap_____res(&self) -> __u64 { unsafe { ::core::mem::transmute(self._bitfield_1.get(6usize, 58u8) as u64) } } #[inline] pub fn set_cap_____res(&mut self, val: __u64) { unsafe { let val: u64 = ::core::mem::transmute(val); self._bitfield_1.set(6usize, 58u8, val as u64) } } #[inline] pub fn new_bitfield_1( cap_bit0: __u64, cap_bit0_is_deprecated: __u64, cap_user_rdpmc: __u64, cap_user_time: __u64, cap_user_time_zero: __u64, cap_user_time_short: __u64, cap_____res: __u64, ) -> __BindgenBitfieldUnit<[u8; 8usize]> { let mut __bindgen_bitfield_unit: __BindgenBitfieldUnit<[u8; 8usize]> = Default::default(); __bindgen_bitfield_unit.set(0usize, 1u8, { let cap_bit0: u64 = unsafe { ::core::mem::transmute(cap_bit0) }; cap_bit0 as u64 }); __bindgen_bitfield_unit.set(1usize, 1u8, { let cap_bit0_is_deprecated: u64 = unsafe { ::core::mem::transmute(cap_bit0_is_deprecated) }; cap_bit0_is_deprecated as u64 }); __bindgen_bitfield_unit.set(2usize, 1u8, { let cap_user_rdpmc: u64 = unsafe { ::core::mem::transmute(cap_user_rdpmc) }; cap_user_rdpmc as u64 }); __bindgen_bitfield_unit.set(3usize, 1u8, { let cap_user_time: u64 = unsafe { ::core::mem::transmute(cap_user_time) }; cap_user_time as u64 }); __bindgen_bitfield_unit.set(4usize, 1u8, { let cap_user_time_zero: u64 = unsafe { ::core::mem::transmute(cap_user_time_zero) }; cap_user_time_zero as u64 }); __bindgen_bitfield_unit.set(5usize, 1u8, { let cap_user_time_short: u64 = unsafe { ::core::mem::transmute(cap_user_time_short) }; cap_user_time_short as u64 }); __bindgen_bitfield_unit.set(6usize, 58u8, { let cap_____res: u64 = unsafe { ::core::mem::transmute(cap_____res) }; cap_____res as u64 }); __bindgen_bitfield_unit } } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct perf_event_header { pub type_: __u32, pub misc: __u16, pub size: __u16, } #[repr(u32)] 
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq)] pub enum perf_event_type { PERF_RECORD_MMAP = 1, PERF_RECORD_LOST = 2, PERF_RECORD_COMM = 3, PERF_RECORD_EXIT = 4, PERF_RECORD_THROTTLE = 5, PERF_RECORD_UNTHROTTLE = 6, PERF_RECORD_FORK = 7, PERF_RECORD_READ = 8, PERF_RECORD_SAMPLE = 9, PERF_RECORD_MMAP2 = 10, PERF_RECORD_AUX = 11, PERF_RECORD_ITRACE_START = 12, PERF_RECORD_LOST_SAMPLES = 13, PERF_RECORD_SWITCH = 14, PERF_RECORD_SWITCH_CPU_WIDE = 15, PERF_RECORD_NAMESPACES = 16, PERF_RECORD_KSYMBOL = 17, PERF_RECORD_BPF_EVENT = 18, PERF_RECORD_CGROUP = 19, PERF_RECORD_TEXT_POKE = 20, PERF_RECORD_AUX_OUTPUT_HW_ID = 21, PERF_RECORD_MAX = 22, } pub const TCA_BPF_UNSPEC: _bindgen_ty_154 = 0; pub const TCA_BPF_ACT: _bindgen_ty_154 = 1; pub const TCA_BPF_POLICE: _bindgen_ty_154 = 2; pub const TCA_BPF_CLASSID: _bindgen_ty_154 = 3; pub const TCA_BPF_OPS_LEN: _bindgen_ty_154 = 4; pub const TCA_BPF_OPS: _bindgen_ty_154 = 5; pub const TCA_BPF_FD: _bindgen_ty_154 = 6; pub const TCA_BPF_NAME: _bindgen_ty_154 = 7; pub const TCA_BPF_FLAGS: _bindgen_ty_154 = 8; pub const TCA_BPF_FLAGS_GEN: _bindgen_ty_154 = 9; pub const TCA_BPF_TAG: _bindgen_ty_154 = 10; pub const TCA_BPF_ID: _bindgen_ty_154 = 11; pub const __TCA_BPF_MAX: _bindgen_ty_154 = 12; pub type _bindgen_ty_154 = ::core::ffi::c_uint; #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct ifinfomsg { pub ifi_family: ::core::ffi::c_uchar, pub __ifi_pad: ::core::ffi::c_uchar, pub ifi_type: ::core::ffi::c_ushort, pub ifi_index: ::core::ffi::c_int, pub ifi_flags: ::core::ffi::c_uint, pub ifi_change: ::core::ffi::c_uint, } #[repr(C)] #[derive(Debug, Copy, Clone)] pub struct tcmsg { pub tcm_family: ::core::ffi::c_uchar, pub tcm__pad1: ::core::ffi::c_uchar, pub tcm__pad2: ::core::ffi::c_ushort, pub tcm_ifindex: ::core::ffi::c_int, pub tcm_handle: __u32, pub tcm_parent: __u32, pub tcm_info: __u32, } pub const TCA_UNSPEC: _bindgen_ty_172 = 0; pub const TCA_KIND: _bindgen_ty_172 = 1; pub const TCA_OPTIONS: _bindgen_ty_172 = 2; pub const 
TCA_STATS: _bindgen_ty_172 = 3; pub const TCA_XSTATS: _bindgen_ty_172 = 4; pub const TCA_RATE: _bindgen_ty_172 = 5; pub const TCA_FCNT: _bindgen_ty_172 = 6; pub const TCA_STATS2: _bindgen_ty_172 = 7; pub const TCA_STAB: _bindgen_ty_172 = 8; pub const TCA_PAD: _bindgen_ty_172 = 9; pub const TCA_DUMP_INVISIBLE: _bindgen_ty_172 = 10; pub const TCA_CHAIN: _bindgen_ty_172 = 11; pub const TCA_HW_OFFLOAD: _bindgen_ty_172 = 12; pub const TCA_INGRESS_BLOCK: _bindgen_ty_172 = 13; pub const TCA_EGRESS_BLOCK: _bindgen_ty_172 = 14; pub const __TCA_MAX: _bindgen_ty_172 = 15; pub type _bindgen_ty_172 = ::core::ffi::c_uint; pub const AYA_PERF_EVENT_IOC_ENABLE: ::core::ffi::c_int = 9216; pub const AYA_PERF_EVENT_IOC_DISABLE: ::core::ffi::c_int = 9217; pub const AYA_PERF_EVENT_IOC_SET_BPF: ::core::ffi::c_int = 1074013192; aya-obj-0.2.1/src/generated/mod.rs000064400000000000000000000021231046102023000150160ustar 00000000000000//! eBPF bindings generated by rust-bindgen #![allow( dead_code, non_camel_case_types, non_snake_case, clippy::all, missing_docs )] mod btf_internal_bindings; #[cfg(target_arch = "aarch64")] mod linux_bindings_aarch64; #[cfg(target_arch = "arm")] mod linux_bindings_armv7; #[cfg(target_arch = "powerpc64")] mod linux_bindings_powerpc64; #[cfg(target_arch = "riscv64")] mod linux_bindings_riscv64; #[cfg(target_arch = "s390x")] mod linux_bindings_s390x; #[cfg(target_arch = "x86_64")] mod linux_bindings_x86_64; // don't re-export __u8 __u16 etc which are already exported by the // linux_bindings_* module pub use btf_internal_bindings::{bpf_core_relo, bpf_core_relo_kind, btf_ext_header}; #[cfg(target_arch = "aarch64")] pub use linux_bindings_aarch64::*; #[cfg(target_arch = "arm")] pub use linux_bindings_armv7::*; #[cfg(target_arch = "powerpc64")] pub use linux_bindings_powerpc64::*; #[cfg(target_arch = "riscv64")] pub use linux_bindings_riscv64::*; #[cfg(target_arch = "s390x")] pub use linux_bindings_s390x::*; #[cfg(target_arch = "x86_64")] pub use 
linux_bindings_x86_64::*;

aya-obj-0.2.1/src/lib.rs

//! An eBPF object file parsing library with BTF and relocation support.
//!
//! # Status
//!
//! This crate includes code that started as internal API used by
//! the [aya] crate. It has been split out so that it can be used by
//! other projects that deal with eBPF object files. Unless you're writing
//! low level eBPF plumbing tools, you should not need to use this crate;
//! see the [aya] crate instead.
//!
//! The API as it is today has a few rough edges and is generally not as
//! polished nor stable as the main [aya] crate API. As always,
//! improvements welcome!
//!
//! [aya]: https://github.com/aya-rs/aya
//!
//! # Overview
//!
//! eBPF programs written with [libbpf] or [aya-bpf] are usually compiled
//! into an ELF object file, using various sections to store information
//! about the eBPF programs.
//!
//! `aya-obj` is a library for parsing such eBPF object files, with BTF and
//! relocation support.
//!
//! [libbpf]: https://github.com/libbpf/libbpf
//! [aya-bpf]: https://github.com/aya-rs/aya
//!
//! # Example
//!
//! This example loads a simple eBPF program and runs it with [rbpf].
//!
//! ```no_run
//! use aya_obj::{generated::bpf_insn, Object};
//!
//! // Parse the object file
//! let bytes = std::fs::read("program.o").unwrap();
//! let mut object = Object::parse(&bytes).unwrap();
//! // Relocate the programs
//! #[cfg(feature = "std")]
//! let text_sections = std::collections::HashSet::new();
//! #[cfg(not(feature = "std"))]
//! let text_sections = hashbrown::HashSet::new();
//! object.relocate_calls(&text_sections).unwrap();
//! object.relocate_maps(std::iter::empty(), &text_sections).unwrap();
//!
//! // Run with rbpf
//! let function = object.functions.get(&object.programs["prog_name"].function_key()).unwrap();
//! let instructions = &function.instructions;
//! let data = unsafe {
//!     core::slice::from_raw_parts(
//!         instructions.as_ptr() as *const u8,
//!         instructions.len() * core::mem::size_of::<bpf_insn>(),
//!     )
//! };
//! let vm = rbpf::EbpfVmNoData::new(Some(data)).unwrap();
//! let _return = vm.execute_program().unwrap();
//! ```
//!
//! [rbpf]: https://github.com/qmonnet/rbpf

#![no_std]
#![doc(
    html_logo_url = "https://aya-rs.dev/assets/images/crabby.svg",
    html_favicon_url = "https://aya-rs.dev/assets/images/crabby.svg"
)]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![deny(clippy::all, missing_docs)]
#![allow(clippy::missing_safety_doc, clippy::len_without_is_empty)]

extern crate alloc;
#[cfg(feature = "std")]
extern crate std;

#[cfg(not(feature = "std"))]
mod std {
    pub mod error {
        pub use core_error::Error;
    }
    pub use core::*;
    pub mod os {
        pub mod fd {
            pub type RawFd = core::ffi::c_int;
        }
    }
}

pub mod btf;
pub mod generated;
pub mod links;
pub mod maps;
pub mod obj;
pub mod programs;
pub mod relocation;
mod util;

pub use maps::Map;
pub use obj::*;

/// An error returned from the verifier.
///
/// Provides a [`Debug`] implementation that doesn't escape newlines.
pub struct VerifierLog(alloc::string::String);

impl VerifierLog {
    /// Create a new verifier log.
    pub fn new(log: alloc::string::String) -> Self {
        Self(log)
    }
}

impl std::fmt::Debug for VerifierLog {
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
        let Self(log) = self;
        f.write_str(log)
    }
}

impl std::fmt::Display for VerifierLog {
    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
        <Self as std::fmt::Debug>::fmt(self, f)
    }
}

aya-obj-0.2.1/src/links.rs

//! Link type bindings.
use crate::{
    generated::{bpf_attach_type, bpf_link_type},
    InvalidTypeBinding,
};

impl TryFrom<u32> for bpf_link_type {
    type Error = InvalidTypeBinding<u32>;

    fn try_from(link_type: u32) -> Result<Self, Self::Error> {
        use bpf_link_type::*;
        Ok(match link_type {
            x if x == BPF_LINK_TYPE_UNSPEC as u32 => BPF_LINK_TYPE_UNSPEC,
            x if x == BPF_LINK_TYPE_RAW_TRACEPOINT as u32 => BPF_LINK_TYPE_RAW_TRACEPOINT,
            x if x == BPF_LINK_TYPE_TRACING as u32 => BPF_LINK_TYPE_TRACING,
            x if x == BPF_LINK_TYPE_CGROUP as u32 => BPF_LINK_TYPE_CGROUP,
            x if x == BPF_LINK_TYPE_ITER as u32 => BPF_LINK_TYPE_ITER,
            x if x == BPF_LINK_TYPE_NETNS as u32 => BPF_LINK_TYPE_NETNS,
            x if x == BPF_LINK_TYPE_XDP as u32 => BPF_LINK_TYPE_XDP,
            x if x == BPF_LINK_TYPE_PERF_EVENT as u32 => BPF_LINK_TYPE_PERF_EVENT,
            x if x == BPF_LINK_TYPE_KPROBE_MULTI as u32 => BPF_LINK_TYPE_KPROBE_MULTI,
            x if x == BPF_LINK_TYPE_STRUCT_OPS as u32 => BPF_LINK_TYPE_STRUCT_OPS,
            x if x == BPF_LINK_TYPE_NETFILTER as u32 => BPF_LINK_TYPE_NETFILTER,
            x if x == BPF_LINK_TYPE_TCX as u32 => BPF_LINK_TYPE_TCX,
            x if x == BPF_LINK_TYPE_UPROBE_MULTI as u32 => BPF_LINK_TYPE_UPROBE_MULTI,
            x if x == BPF_LINK_TYPE_NETKIT as u32 => BPF_LINK_TYPE_NETKIT,
            _ => return Err(InvalidTypeBinding { value: link_type }),
        })
    }
}

impl TryFrom<u32> for bpf_attach_type {
    type Error = InvalidTypeBinding<u32>;

    fn try_from(attach_type: u32) -> Result<Self, Self::Error> {
        use bpf_attach_type::*;
        Ok(match attach_type {
            x if x == BPF_CGROUP_INET_INGRESS as u32 => BPF_CGROUP_INET_INGRESS,
            x if x == BPF_CGROUP_INET_EGRESS as u32 => BPF_CGROUP_INET_EGRESS,
            x if x == BPF_CGROUP_INET_SOCK_CREATE as u32 => BPF_CGROUP_INET_SOCK_CREATE,
            x if x == BPF_CGROUP_SOCK_OPS as u32 => BPF_CGROUP_SOCK_OPS,
            x if x == BPF_SK_SKB_STREAM_PARSER as u32 => BPF_SK_SKB_STREAM_PARSER,
            x if x == BPF_SK_SKB_STREAM_VERDICT as u32 => BPF_SK_SKB_STREAM_VERDICT,
            x if x == BPF_CGROUP_DEVICE as u32 => BPF_CGROUP_DEVICE,
            x if x == BPF_SK_MSG_VERDICT as u32 => BPF_SK_MSG_VERDICT,
            x if x == BPF_CGROUP_INET4_BIND as u32 => BPF_CGROUP_INET4_BIND,
            x if x ==
BPF_CGROUP_INET6_BIND as u32 => BPF_CGROUP_INET6_BIND, x if x == BPF_CGROUP_INET4_CONNECT as u32 => BPF_CGROUP_INET4_CONNECT, x if x == BPF_CGROUP_INET6_CONNECT as u32 => BPF_CGROUP_INET6_CONNECT, x if x == BPF_CGROUP_INET4_POST_BIND as u32 => BPF_CGROUP_INET4_POST_BIND, x if x == BPF_CGROUP_INET6_POST_BIND as u32 => BPF_CGROUP_INET6_POST_BIND, x if x == BPF_CGROUP_UDP4_SENDMSG as u32 => BPF_CGROUP_UDP4_SENDMSG, x if x == BPF_CGROUP_UDP6_SENDMSG as u32 => BPF_CGROUP_UDP6_SENDMSG, x if x == BPF_LIRC_MODE2 as u32 => BPF_LIRC_MODE2, x if x == BPF_FLOW_DISSECTOR as u32 => BPF_FLOW_DISSECTOR, x if x == BPF_CGROUP_SYSCTL as u32 => BPF_CGROUP_SYSCTL, x if x == BPF_CGROUP_UDP4_RECVMSG as u32 => BPF_CGROUP_UDP4_RECVMSG, x if x == BPF_CGROUP_UDP6_RECVMSG as u32 => BPF_CGROUP_UDP6_RECVMSG, x if x == BPF_CGROUP_GETSOCKOPT as u32 => BPF_CGROUP_GETSOCKOPT, x if x == BPF_CGROUP_SETSOCKOPT as u32 => BPF_CGROUP_SETSOCKOPT, x if x == BPF_TRACE_RAW_TP as u32 => BPF_TRACE_RAW_TP, x if x == BPF_TRACE_FENTRY as u32 => BPF_TRACE_FENTRY, x if x == BPF_TRACE_FEXIT as u32 => BPF_TRACE_FEXIT, x if x == BPF_MODIFY_RETURN as u32 => BPF_MODIFY_RETURN, x if x == BPF_LSM_MAC as u32 => BPF_LSM_MAC, x if x == BPF_TRACE_ITER as u32 => BPF_TRACE_ITER, x if x == BPF_CGROUP_INET4_GETPEERNAME as u32 => BPF_CGROUP_INET4_GETPEERNAME, x if x == BPF_CGROUP_INET6_GETPEERNAME as u32 => BPF_CGROUP_INET6_GETPEERNAME, x if x == BPF_CGROUP_INET4_GETSOCKNAME as u32 => BPF_CGROUP_INET4_GETSOCKNAME, x if x == BPF_CGROUP_INET6_GETSOCKNAME as u32 => BPF_CGROUP_INET6_GETSOCKNAME, x if x == BPF_XDP_DEVMAP as u32 => BPF_XDP_DEVMAP, x if x == BPF_CGROUP_INET_SOCK_RELEASE as u32 => BPF_CGROUP_INET_SOCK_RELEASE, x if x == BPF_XDP_CPUMAP as u32 => BPF_XDP_CPUMAP, x if x == BPF_SK_LOOKUP as u32 => BPF_SK_LOOKUP, x if x == BPF_XDP as u32 => BPF_XDP, x if x == BPF_SK_SKB_VERDICT as u32 => BPF_SK_SKB_VERDICT, x if x == BPF_SK_REUSEPORT_SELECT as u32 => BPF_SK_REUSEPORT_SELECT, x if x == BPF_SK_REUSEPORT_SELECT_OR_MIGRATE as u32 
=> { BPF_SK_REUSEPORT_SELECT_OR_MIGRATE } x if x == BPF_PERF_EVENT as u32 => BPF_PERF_EVENT, x if x == BPF_TRACE_KPROBE_MULTI as u32 => BPF_TRACE_KPROBE_MULTI, x if x == BPF_LSM_CGROUP as u32 => BPF_LSM_CGROUP, x if x == BPF_STRUCT_OPS as u32 => BPF_STRUCT_OPS, x if x == BPF_NETFILTER as u32 => BPF_NETFILTER, x if x == BPF_TCX_INGRESS as u32 => BPF_TCX_INGRESS, x if x == BPF_TCX_EGRESS as u32 => BPF_TCX_EGRESS, x if x == BPF_TRACE_UPROBE_MULTI as u32 => BPF_TRACE_UPROBE_MULTI, x if x == BPF_CGROUP_UNIX_CONNECT as u32 => BPF_CGROUP_UNIX_CONNECT, x if x == BPF_CGROUP_UNIX_SENDMSG as u32 => BPF_CGROUP_UNIX_SENDMSG, x if x == BPF_CGROUP_UNIX_RECVMSG as u32 => BPF_CGROUP_UNIX_RECVMSG, x if x == BPF_CGROUP_UNIX_GETPEERNAME as u32 => BPF_CGROUP_UNIX_GETPEERNAME, x if x == BPF_CGROUP_UNIX_GETSOCKNAME as u32 => BPF_CGROUP_UNIX_GETSOCKNAME, x if x == BPF_NETKIT_PRIMARY as u32 => BPF_NETKIT_PRIMARY, x if x == BPF_NETKIT_PEER as u32 => BPF_NETKIT_PEER, _ => return Err(InvalidTypeBinding { value: attach_type }), }) } } aya-obj-0.2.1/src/maps.rs000064400000000000000000000227171046102023000132540ustar 00000000000000//! Map struct and type bindings. 
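The `u32` conversions in this crate (link, attach, and map types) all use the same match-guard pattern, since Rust has no built-in cast from an integer to a C-like enum. A self-contained sketch of the pattern, using a hypothetical three-variant enum and a simplified, non-generic `InvalidTypeBinding`:

```rust
// Hypothetical miniature of the bindgen-generated enums; variant names and
// values are illustrative, not the real bpf_link_type.
#[derive(Debug, PartialEq, Clone, Copy)]
#[repr(u32)]
enum LinkType {
    Unspec = 0,
    RawTracepoint = 1,
    Tracing = 2,
}

// Simplified stand-in for the crate's InvalidTypeBinding error.
#[derive(Debug, PartialEq)]
struct InvalidTypeBinding {
    value: u32,
}

impl TryFrom<u32> for LinkType {
    type Error = InvalidTypeBinding;

    fn try_from(v: u32) -> Result<Self, Self::Error> {
        use LinkType::*;
        // Each guard compares the raw value against one variant's discriminant.
        Ok(match v {
            x if x == Unspec as u32 => Unspec,
            x if x == RawTracepoint as u32 => RawTracepoint,
            x if x == Tracing as u32 => Tracing,
            _ => return Err(InvalidTypeBinding { value: v }),
        })
    }
}

fn main() {
    assert_eq!(LinkType::try_from(2), Ok(LinkType::Tracing));
    assert_eq!(LinkType::try_from(99), Err(InvalidTypeBinding { value: 99 }));
    println!("ok");
}
```

The guard form is verbose but avoids `unsafe` transmutes and fails closed: any value the enum does not name becomes an `InvalidTypeBinding` error instead of an invalid enum value.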
use alloc::vec::Vec;
use core::mem;

#[cfg(not(feature = "std"))]
use crate::std;
use crate::{EbpfSectionKind, InvalidTypeBinding};

impl TryFrom<u32> for crate::generated::bpf_map_type {
    type Error = InvalidTypeBinding<u32>;

    fn try_from(map_type: u32) -> Result<Self, Self::Error> {
        use crate::generated::bpf_map_type::*;
        Ok(match map_type {
            x if x == BPF_MAP_TYPE_UNSPEC as u32 => BPF_MAP_TYPE_UNSPEC,
            x if x == BPF_MAP_TYPE_HASH as u32 => BPF_MAP_TYPE_HASH,
            x if x == BPF_MAP_TYPE_ARRAY as u32 => BPF_MAP_TYPE_ARRAY,
            x if x == BPF_MAP_TYPE_PROG_ARRAY as u32 => BPF_MAP_TYPE_PROG_ARRAY,
            x if x == BPF_MAP_TYPE_PERF_EVENT_ARRAY as u32 => BPF_MAP_TYPE_PERF_EVENT_ARRAY,
            x if x == BPF_MAP_TYPE_PERCPU_HASH as u32 => BPF_MAP_TYPE_PERCPU_HASH,
            x if x == BPF_MAP_TYPE_PERCPU_ARRAY as u32 => BPF_MAP_TYPE_PERCPU_ARRAY,
            x if x == BPF_MAP_TYPE_STACK_TRACE as u32 => BPF_MAP_TYPE_STACK_TRACE,
            x if x == BPF_MAP_TYPE_CGROUP_ARRAY as u32 => BPF_MAP_TYPE_CGROUP_ARRAY,
            x if x == BPF_MAP_TYPE_LRU_HASH as u32 => BPF_MAP_TYPE_LRU_HASH,
            x if x == BPF_MAP_TYPE_LRU_PERCPU_HASH as u32 => BPF_MAP_TYPE_LRU_PERCPU_HASH,
            x if x == BPF_MAP_TYPE_LPM_TRIE as u32 => BPF_MAP_TYPE_LPM_TRIE,
            x if x == BPF_MAP_TYPE_ARRAY_OF_MAPS as u32 => BPF_MAP_TYPE_ARRAY_OF_MAPS,
            x if x == BPF_MAP_TYPE_HASH_OF_MAPS as u32 => BPF_MAP_TYPE_HASH_OF_MAPS,
            x if x == BPF_MAP_TYPE_DEVMAP as u32 => BPF_MAP_TYPE_DEVMAP,
            x if x == BPF_MAP_TYPE_SOCKMAP as u32 => BPF_MAP_TYPE_SOCKMAP,
            x if x == BPF_MAP_TYPE_CPUMAP as u32 => BPF_MAP_TYPE_CPUMAP,
            x if x == BPF_MAP_TYPE_XSKMAP as u32 => BPF_MAP_TYPE_XSKMAP,
            x if x == BPF_MAP_TYPE_SOCKHASH as u32 => BPF_MAP_TYPE_SOCKHASH,
            x if x == BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED as u32 => {
                BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED
            }
            x if x == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY as u32 => BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
            x if x == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED as u32 => {
                BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED
            }
            x if x == BPF_MAP_TYPE_QUEUE as u32 => BPF_MAP_TYPE_QUEUE,
            x if x == BPF_MAP_TYPE_STACK as u32 =>
BPF_MAP_TYPE_STACK, x if x == BPF_MAP_TYPE_SK_STORAGE as u32 => BPF_MAP_TYPE_SK_STORAGE, x if x == BPF_MAP_TYPE_DEVMAP_HASH as u32 => BPF_MAP_TYPE_DEVMAP_HASH, x if x == BPF_MAP_TYPE_STRUCT_OPS as u32 => BPF_MAP_TYPE_STRUCT_OPS, x if x == BPF_MAP_TYPE_RINGBUF as u32 => BPF_MAP_TYPE_RINGBUF, x if x == BPF_MAP_TYPE_INODE_STORAGE as u32 => BPF_MAP_TYPE_INODE_STORAGE, x if x == BPF_MAP_TYPE_TASK_STORAGE as u32 => BPF_MAP_TYPE_TASK_STORAGE, x if x == BPF_MAP_TYPE_BLOOM_FILTER as u32 => BPF_MAP_TYPE_BLOOM_FILTER, x if x == BPF_MAP_TYPE_USER_RINGBUF as u32 => BPF_MAP_TYPE_USER_RINGBUF, x if x == BPF_MAP_TYPE_CGRP_STORAGE as u32 => BPF_MAP_TYPE_CGRP_STORAGE, x if x == BPF_MAP_TYPE_ARENA as u32 => BPF_MAP_TYPE_ARENA, _ => return Err(InvalidTypeBinding { value: map_type }), }) } } /// BTF definition of a map #[derive(Copy, Clone, Debug, Default, PartialEq, Eq)] pub struct BtfMapDef { pub(crate) map_type: u32, pub(crate) key_size: u32, pub(crate) value_size: u32, pub(crate) max_entries: u32, pub(crate) map_flags: u32, pub(crate) pinning: PinningType, /// BTF type id of the map key pub btf_key_type_id: u32, /// BTF type id of the map value pub btf_value_type_id: u32, } /// The pinning type /// /// Upon pinning a map, a file representation is created for the map, /// so that the map can be alive and retrievable across sessions. 
#[repr(u32)]
#[derive(Copy, Clone, Debug, PartialEq, Eq, Default)]
pub enum PinningType {
    /// No pinning
    #[default]
    None = 0,
    /// Pin by the name
    ByName = 1,
}

/// The error type returned when failing to parse a [PinningType]
#[derive(Debug, thiserror::Error)]
pub enum PinningError {
    /// Unsupported pinning type
    #[error("unsupported pinning type `{pinning_type}`")]
    Unsupported {
        /// The unsupported pinning type
        pinning_type: u32,
    },
}

impl TryFrom<u32> for PinningType {
    type Error = PinningError;

    fn try_from(value: u32) -> Result<Self, Self::Error> {
        match value {
            0 => Ok(PinningType::None),
            1 => Ok(PinningType::ByName),
            pinning_type => Err(PinningError::Unsupported { pinning_type }),
        }
    }
}

/// Map definition in legacy BPF map declaration style
#[allow(non_camel_case_types)]
#[repr(C)]
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq)]
pub struct bpf_map_def {
    // minimum features required by old BPF programs
    /// The map type
    pub map_type: u32,
    /// The key_size
    pub key_size: u32,
    /// The value size
    pub value_size: u32,
    /// Max entry number
    pub max_entries: u32,
    /// Map flags
    pub map_flags: u32,
    // optional features
    /// Id
    pub id: u32,
    /// Pinning type
    pub pinning: PinningType,
}

/// The first five __u32 of `bpf_map_def` must be defined.
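That requirement pins down the minimum size of a legacy map definition: the first five fields (`map_type`, `key_size`, `value_size`, `max_entries`, `map_flags`) are each a `u32`, so the minimum is 20 bytes. A quick standalone check:

```rust
use core::mem;

// Five mandatory u32 fields: map_type, key_size, value_size,
// max_entries, map_flags.
const MINIMUM_MAP_SIZE: usize = mem::size_of::<u32>() * 5;

fn main() {
    assert_eq!(MINIMUM_MAP_SIZE, 20);
    println!("{MINIMUM_MAP_SIZE}");
}
```

A `maps`-section symbol shorter than this cannot carry a valid legacy definition; the optional `id` and `pinning` fields may follow but are not required.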
pub(crate) const MINIMUM_MAP_SIZE: usize = mem::size_of::() * 5; /// Map data defined in `maps` or `.maps` sections #[derive(Debug, Clone)] pub enum Map { /// A map defined in the `maps` section Legacy(LegacyMap), /// A map defined in the `.maps` section Btf(BtfMap), } impl Map { /// Returns the map type pub fn map_type(&self) -> u32 { match self { Map::Legacy(m) => m.def.map_type, Map::Btf(m) => m.def.map_type, } } /// Returns the key size in bytes pub fn key_size(&self) -> u32 { match self { Map::Legacy(m) => m.def.key_size, Map::Btf(m) => m.def.key_size, } } /// Returns the value size in bytes pub fn value_size(&self) -> u32 { match self { Map::Legacy(m) => m.def.value_size, Map::Btf(m) => m.def.value_size, } } /// Set the value size in bytes pub fn set_value_size(&mut self, size: u32) { match self { Map::Legacy(m) => m.def.value_size = size, Map::Btf(m) => m.def.value_size = size, } } /// Returns the max entry number pub fn max_entries(&self) -> u32 { match self { Map::Legacy(m) => m.def.max_entries, Map::Btf(m) => m.def.max_entries, } } /// Sets the max entry number pub fn set_max_entries(&mut self, v: u32) { match self { Map::Legacy(m) => m.def.max_entries = v, Map::Btf(m) => m.def.max_entries = v, } } /// Returns the map flags pub fn map_flags(&self) -> u32 { match self { Map::Legacy(m) => m.def.map_flags, Map::Btf(m) => m.def.map_flags, } } /// Returns the pinning type of the map pub fn pinning(&self) -> PinningType { match self { Map::Legacy(m) => m.def.pinning, Map::Btf(m) => m.def.pinning, } } /// Returns the map data pub fn data(&self) -> &[u8] { match self { Map::Legacy(m) => &m.data, Map::Btf(m) => &m.data, } } /// Returns the map data as mutable pub fn data_mut(&mut self) -> &mut Vec { match self { Map::Legacy(m) => m.data.as_mut(), Map::Btf(m) => m.data.as_mut(), } } /// Returns the section index pub fn section_index(&self) -> usize { match self { Map::Legacy(m) => m.section_index, Map::Btf(m) => m.section_index, } } /// Returns the section kind. 
    pub fn section_kind(&self) -> EbpfSectionKind {
        match self {
            Map::Legacy(m) => m.section_kind,
            Map::Btf(_) => EbpfSectionKind::BtfMaps,
        }
    }

    /// Returns the symbol index.
    ///
    /// This is `None` for data maps (.bss, .data and .rodata) since those don't
    /// need symbols in order to be relocated.
    pub fn symbol_index(&self) -> Option<usize> {
        match self {
            Map::Legacy(m) => m.symbol_index,
            Map::Btf(m) => Some(m.symbol_index),
        }
    }
}

/// A map declared with legacy BPF map declaration style, most likely from a `maps` section.
///
/// See [Drop support for legacy BPF map declaration syntax - Libbpf: the road to v1.0](https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#drop-support-for-legacy-bpf-map-declaration-syntax)
/// for more info.
#[derive(Debug, Clone)]
pub struct LegacyMap {
    /// The definition of the map
    pub def: bpf_map_def,
    /// The section index
    pub section_index: usize,
    /// The section kind
    pub section_kind: EbpfSectionKind,
    /// The symbol index.
    ///
    /// This is None for data maps (.bss .data and .rodata). We don't need
    /// symbols to relocate those since they don't contain multiple maps, but
    /// are just a flat array of bytes.
    pub symbol_index: Option<usize>,
    /// The map data
    pub data: Vec<u8>,
}

/// A BTF-defined map, most likely from a `.maps` section.
#[derive(Debug, Clone)]
pub struct BtfMap {
    /// The definition of the map
    pub def: BtfMapDef,
    pub(crate) section_index: usize,
    pub(crate) symbol_index: usize,
    pub(crate) data: Vec<u8>,
}

aya-obj-0.2.1/src/obj.rs

//! Object file loading, parsing, and relocation.
use alloc::{ borrow::ToOwned, collections::BTreeMap, ffi::CString, string::{String, ToString}, vec, vec::Vec, }; use core::{ffi::CStr, mem, ptr, slice::from_raw_parts_mut, str::FromStr}; use log::debug; use object::{ read::{Object as ElfObject, ObjectSection, Section as ObjSection}, Endianness, ObjectSymbol, ObjectSymbolTable, RelocationTarget, SectionIndex, SectionKind, SymbolKind, }; #[cfg(not(feature = "std"))] use crate::std; use crate::{ btf::{ Array, Btf, BtfError, BtfExt, BtfFeatures, BtfType, DataSecEntry, FuncSecInfo, LineSecInfo, }, generated::{ bpf_insn, bpf_map_info, bpf_map_type::BPF_MAP_TYPE_ARRAY, BPF_CALL, BPF_F_RDONLY_PROG, BPF_JMP, BPF_K, }, maps::{bpf_map_def, BtfMap, BtfMapDef, LegacyMap, Map, PinningType, MINIMUM_MAP_SIZE}, programs::{ CgroupSockAddrAttachType, CgroupSockAttachType, CgroupSockoptAttachType, XdpAttachType, }, relocation::*, util::HashMap, }; const KERNEL_VERSION_ANY: u32 = 0xFFFF_FFFE; /// Features implements BPF and BTF feature detection #[derive(Default, Debug)] #[allow(missing_docs)] pub struct Features { bpf_name: bool, bpf_probe_read_kernel: bool, bpf_perf_link: bool, bpf_global_data: bool, bpf_cookie: bool, cpumap_prog_id: bool, devmap_prog_id: bool, prog_info_map_ids: bool, prog_info_gpl_compatible: bool, btf: Option, } impl Features { #[doc(hidden)] #[allow(clippy::too_many_arguments)] pub fn new( bpf_name: bool, bpf_probe_read_kernel: bool, bpf_perf_link: bool, bpf_global_data: bool, bpf_cookie: bool, cpumap_prog_id: bool, devmap_prog_id: bool, prog_info_map_ids: bool, prog_info_gpl_compatible: bool, btf: Option, ) -> Self { Self { bpf_name, bpf_probe_read_kernel, bpf_perf_link, bpf_global_data, bpf_cookie, cpumap_prog_id, devmap_prog_id, prog_info_map_ids, prog_info_gpl_compatible, btf, } } /// Returns whether BPF program names and map names are supported. 
/// /// Although the feature probe performs the check for program name, we can use this to also /// detect if map name is supported since they were both introduced in the same commit. pub fn bpf_name(&self) -> bool { self.bpf_name } /// Returns whether the bpf_probe_read_kernel helper is supported. pub fn bpf_probe_read_kernel(&self) -> bool { self.bpf_probe_read_kernel } /// Returns whether bpf_links are supported for Kprobes/Uprobes/Tracepoints. pub fn bpf_perf_link(&self) -> bool { self.bpf_perf_link } /// Returns whether BPF program global data is supported. pub fn bpf_global_data(&self) -> bool { self.bpf_global_data } /// Returns whether BPF program cookie is supported. pub fn bpf_cookie(&self) -> bool { self.bpf_cookie } /// Returns whether XDP CPU Maps support chained program IDs. pub fn cpumap_prog_id(&self) -> bool { self.cpumap_prog_id } /// Returns whether XDP Device Maps support chained program IDs. pub fn devmap_prog_id(&self) -> bool { self.devmap_prog_id } /// Returns whether `bpf_prog_info` supports `nr_map_ids` & `map_ids` fields. pub fn prog_info_map_ids(&self) -> bool { self.prog_info_map_ids } /// Returns whether `bpf_prog_info` supports `gpl_compatible` field. pub fn prog_info_gpl_compatible(&self) -> bool { self.prog_info_gpl_compatible } /// If BTF is supported, returns which BTF features are supported. pub fn btf(&self) -> Option<&BtfFeatures> { self.btf.as_ref() } } /// The loaded object file representation #[derive(Clone, Debug)] pub struct Object { /// The endianness pub endianness: Endianness, /// Program license pub license: CString, /// Kernel version pub kernel_version: Option, /// Program BTF pub btf: Option, /// Program BTF.ext pub btf_ext: Option, /// Referenced maps pub maps: HashMap, /// A hash map of programs, using the program names parsed /// in [ProgramSection]s as keys. 
pub programs: HashMap, /// Functions pub functions: BTreeMap<(usize, u64), Function>, pub(crate) relocations: HashMap>, pub(crate) symbol_table: HashMap, pub(crate) symbols_by_section: HashMap>, pub(crate) section_infos: HashMap, // symbol_offset_by_name caches symbols that could be referenced from a // BTF VAR type so the offsets can be fixed up pub(crate) symbol_offset_by_name: HashMap, } /// An eBPF program #[derive(Debug, Clone)] pub struct Program { /// The license pub license: CString, /// The kernel version pub kernel_version: Option, /// The section containing the program pub section: ProgramSection, /// The section index of the program pub section_index: usize, /// The address of the program pub address: u64, } impl Program { /// The key used by [Object::functions] pub fn function_key(&self) -> (usize, u64) { (self.section_index, self.address) } } /// An eBPF function #[derive(Debug, Clone)] pub struct Function { /// The address pub address: u64, /// The function name pub name: String, /// The section index pub section_index: SectionIndex, /// The section offset pub section_offset: usize, /// The eBPF byte code instructions pub instructions: Vec, /// The function info pub func_info: FuncSecInfo, /// The line info pub line_info: LineSecInfo, /// Function info record size pub func_info_rec_size: usize, /// Line info record size pub line_info_rec_size: usize, } /// Section types containing eBPF programs /// /// # Section Name Parsing /// /// Section types are parsed from the section name strings. /// /// In order for Aya to treat a section as a [ProgramSection], /// there are a few requirements: /// - The section must be an executable code section. /// - The section name must conform to [Program Types and ELF Sections]. 
/// /// [Program Types and ELF Sections]: https://docs.kernel.org/bpf/libbpf/program_types.html /// /// # Unsupported Sections /// /// Currently, the following section names are not supported yet: /// - `flow_dissector`: `BPF_PROG_TYPE_FLOW_DISSECTOR` /// - `ksyscall+` or `kretsyscall+` /// - `usdt+` /// - `kprobe.multi+` or `kretprobe.multi+`: `BPF_TRACE_KPROBE_MULTI` /// - `lsm_cgroup+` /// - `lwt_in`, `lwt_out`, `lwt_seg6local`, `lwt_xmit` /// - `raw_tp.w+`, `raw_tracepoint.w+` /// - `action` /// - `sk_reuseport/migrate`, `sk_reuseport` /// - `syscall` /// - `struct_ops+` /// - `fmod_ret+`, `fmod_ret.s+` /// - `iter+`, `iter.s+` #[derive(Debug, Clone)] #[allow(missing_docs)] pub enum ProgramSection { KRetProbe, KProbe, UProbe { sleepable: bool, }, URetProbe { sleepable: bool, }, TracePoint, SocketFilter, Xdp { frags: bool, attach_type: XdpAttachType, }, SkMsg, SkSkbStreamParser, SkSkbStreamVerdict, SockOps, SchedClassifier, CgroupSkb, CgroupSkbIngress, CgroupSkbEgress, CgroupSockAddr { attach_type: CgroupSockAddrAttachType, }, CgroupSysctl, CgroupSockopt { attach_type: CgroupSockoptAttachType, }, LircMode2, PerfEvent, RawTracePoint, Lsm { sleepable: bool, }, BtfTracePoint, FEntry { sleepable: bool, }, FExit { sleepable: bool, }, Extension, SkLookup, CgroupSock { attach_type: CgroupSockAttachType, }, CgroupDevice, } impl FromStr for ProgramSection { type Err = ParseError; fn from_str(section: &str) -> Result { use ProgramSection::*; // parse the common case, eg "xdp/program_name" or // "sk_skb/stream_verdict/program_name" let mut pieces = section.split('/'); let mut next = || { pieces .next() .ok_or_else(|| ParseError::InvalidProgramSection { section: section.to_owned(), }) }; let kind = next()?; Ok(match kind { "kprobe" => KProbe, "kretprobe" => KRetProbe, "uprobe" => UProbe { sleepable: false }, "uprobe.s" => UProbe { sleepable: true }, "uretprobe" => URetProbe { sleepable: false }, "uretprobe.s" => URetProbe { sleepable: true }, "xdp" | "xdp.frags" => Xdp { 
frags: kind == "xdp.frags", attach_type: match pieces.next() { None => XdpAttachType::Interface, Some("cpumap") => XdpAttachType::CpuMap, Some("devmap") => XdpAttachType::DevMap, Some(_) => { return Err(ParseError::InvalidProgramSection { section: section.to_owned(), }) } }, }, "tp_btf" => BtfTracePoint, "tracepoint" | "tp" => TracePoint, "socket" => SocketFilter, "sk_msg" => SkMsg, "sk_skb" => { let name = next()?; match name { "stream_parser" => SkSkbStreamParser, "stream_verdict" => SkSkbStreamVerdict, _ => { return Err(ParseError::InvalidProgramSection { section: section.to_owned(), }) } } } "sockops" => SockOps, "classifier" => SchedClassifier, "cgroup_skb" => { let name = next()?; match name { "ingress" => CgroupSkbIngress, "egress" => CgroupSkbEgress, _ => { return Err(ParseError::InvalidProgramSection { section: section.to_owned(), }) } } } "cgroup" => { let name = next()?; match name { "skb" => CgroupSkb, "sysctl" => CgroupSysctl, "dev" => CgroupDevice, "getsockopt" => CgroupSockopt { attach_type: CgroupSockoptAttachType::Get, }, "setsockopt" => CgroupSockopt { attach_type: CgroupSockoptAttachType::Set, }, "sock" => CgroupSock { attach_type: CgroupSockAttachType::default(), }, "post_bind4" => CgroupSock { attach_type: CgroupSockAttachType::PostBind4, }, "post_bind6" => CgroupSock { attach_type: CgroupSockAttachType::PostBind6, }, "sock_create" => CgroupSock { attach_type: CgroupSockAttachType::SockCreate, }, "sock_release" => CgroupSock { attach_type: CgroupSockAttachType::SockRelease, }, "bind4" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::Bind4, }, "bind6" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::Bind6, }, "connect4" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::Connect4, }, "connect6" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::Connect6, }, "getpeername4" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::GetPeerName4, }, "getpeername6" => CgroupSockAddr { attach_type: 
CgroupSockAddrAttachType::GetPeerName6, }, "getsockname4" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::GetSockName4, }, "getsockname6" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::GetSockName6, }, "sendmsg4" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::UDPSendMsg4, }, "sendmsg6" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::UDPSendMsg6, }, "recvmsg4" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::UDPRecvMsg4, }, "recvmsg6" => CgroupSockAddr { attach_type: CgroupSockAddrAttachType::UDPRecvMsg6, }, _ => { return Err(ParseError::InvalidProgramSection { section: section.to_owned(), }); } } } "lirc_mode2" => LircMode2, "perf_event" => PerfEvent, "raw_tp" | "raw_tracepoint" => RawTracePoint, "lsm" => Lsm { sleepable: false }, "lsm.s" => Lsm { sleepable: true }, "fentry" => FEntry { sleepable: false }, "fentry.s" => FEntry { sleepable: true }, "fexit" => FExit { sleepable: false }, "fexit.s" => FExit { sleepable: true }, "freplace" => Extension, "sk_lookup" => SkLookup, _ => { return Err(ParseError::InvalidProgramSection { section: section.to_owned(), }) } }) } } impl Object { /// Parses the binary data as an object file into an [Object] pub fn parse(data: &[u8]) -> Result { let obj = object::read::File::parse(data).map_err(ParseError::ElfError)?; let endianness = obj.endianness(); let license = if let Some(section) = obj.section_by_name("license") { parse_license(Section::try_from(§ion)?.data)? } else { CString::new("GPL").unwrap() }; let kernel_version = if let Some(section) = obj.section_by_name("version") { parse_version(Section::try_from(§ion)?.data, endianness)? 
} else { None }; let mut bpf_obj = Object::new(endianness, license, kernel_version); if let Some(symbol_table) = obj.symbol_table() { for symbol in symbol_table.symbols() { let name = symbol .name() .ok() .map(String::from) .ok_or(BtfError::InvalidSymbolName)?; let sym = Symbol { index: symbol.index().0, name: Some(name.clone()), section_index: symbol.section().index().map(|i| i.0), address: symbol.address(), size: symbol.size(), is_definition: symbol.is_definition(), kind: symbol.kind(), }; bpf_obj.symbol_table.insert(symbol.index().0, sym); if let Some(section_idx) = symbol.section().index() { bpf_obj .symbols_by_section .entry(section_idx) .or_default() .push(symbol.index().0); } if symbol.is_global() || symbol.kind() == SymbolKind::Data { bpf_obj.symbol_offset_by_name.insert(name, symbol.address()); } } } // .BTF and .BTF.ext sections must be parsed first // as they're required to prepare function and line information // when parsing program sections if let Some(s) = obj.section_by_name(".BTF") { bpf_obj.parse_section(Section::try_from(&s)?)?; if let Some(s) = obj.section_by_name(".BTF.ext") { bpf_obj.parse_section(Section::try_from(&s)?)?; } } for s in obj.sections() { if let Ok(name) = s.name() { if name == ".BTF" || name == ".BTF.ext" { continue; } } bpf_obj.parse_section(Section::try_from(&s)?)?; } Ok(bpf_obj) } fn new(endianness: Endianness, license: CString, kernel_version: Option) -> Object { Object { endianness, license, kernel_version, btf: None, btf_ext: None, maps: HashMap::new(), programs: HashMap::new(), functions: BTreeMap::new(), relocations: HashMap::new(), symbol_table: HashMap::new(), symbols_by_section: HashMap::new(), section_infos: HashMap::new(), symbol_offset_by_name: HashMap::new(), } } /// Patches map data pub fn patch_map_data( &mut self, globals: HashMap<&str, (&[u8], bool)>, ) -> Result<(), ParseError> { let symbols: HashMap = self .symbol_table .iter() .filter(|(_, s)| s.name.is_some()) .map(|(_, s)| 
(s.name.as_ref().unwrap().clone(), s)) .collect(); for (name, (data, must_exist)) in globals { if let Some(symbol) = symbols.get(name) { if data.len() as u64 != symbol.size { return Err(ParseError::InvalidGlobalData { name: name.to_string(), sym_size: symbol.size, data_size: data.len(), }); } let (_, map) = self .maps .iter_mut() // assumption: there is only one map created per section where we're trying to // patch data. this assumption holds true for the .rodata section at least .find(|(_, m)| symbol.section_index == Some(m.section_index())) .ok_or_else(|| ParseError::MapNotFound { index: symbol.section_index.unwrap_or(0), })?; let start = symbol.address as usize; let end = start + symbol.size as usize; if start > end || end > map.data().len() { return Err(ParseError::InvalidGlobalData { name: name.to_string(), sym_size: symbol.size, data_size: data.len(), }); } map.data_mut().splice(start..end, data.iter().cloned()); } else if must_exist { return Err(ParseError::SymbolNotFound { name: name.to_owned(), }); } } Ok(()) } fn parse_btf(&mut self, section: &Section) -> Result<(), BtfError> { self.btf = Some(Btf::parse(section.data, self.endianness)?); Ok(()) } fn parse_btf_ext(&mut self, section: &Section) -> Result<(), BtfError> { self.btf_ext = Some(BtfExt::parse( section.data, self.endianness, self.btf.as_ref().unwrap(), )?); Ok(()) } fn parse_programs(&mut self, section: &Section) -> Result<(), ParseError> { let program_section = ProgramSection::from_str(section.name)?; let syms = self.symbols_by_section .get(§ion.index) .ok_or(ParseError::NoSymbolsForSection { section_name: section.name.to_string(), })?; for symbol_index in syms { let symbol = self .symbol_table .get(symbol_index) .expect("all symbols in symbols_by_section are also in symbol_table"); // Here we get both ::Label (LBB*) and ::Text symbols, and we only want the latter. 
let name = match (symbol.name.as_ref(), symbol.kind) { (Some(name), SymbolKind::Text) if !name.is_empty() => name, _ => continue, }; let (p, f) = self.parse_program(section, program_section.clone(), name.to_string(), symbol)?; let key = p.function_key(); self.programs.insert(f.name.clone(), p); self.functions.insert(key, f); } Ok(()) } fn parse_program( &self, section: &Section, program_section: ProgramSection, name: String, symbol: &Symbol, ) -> Result<(Program, Function), ParseError> { let offset = symbol.address as usize - section.address as usize; let (func_info, line_info, func_info_rec_size, line_info_rec_size) = get_func_and_line_info(self.btf_ext.as_ref(), symbol, section, offset, true); let start = symbol.address as usize; let end = (symbol.address + symbol.size) as usize; let function = Function { name: name.to_owned(), address: symbol.address, section_index: section.index, section_offset: start, instructions: copy_instructions(§ion.data[start..end])?, func_info, line_info, func_info_rec_size, line_info_rec_size, }; Ok(( Program { license: self.license.clone(), kernel_version: self.kernel_version, section: program_section.clone(), section_index: section.index.0, address: symbol.address, }, function, )) } fn parse_text_section(&mut self, section: Section) -> Result<(), ParseError> { let mut symbols_by_address = HashMap::new(); for sym in self.symbol_table.values() { if sym.is_definition && sym.kind == SymbolKind::Text && sym.section_index == Some(section.index.0) { if symbols_by_address.contains_key(&sym.address) { return Err(ParseError::SymbolTableConflict { section_index: section.index.0, address: sym.address, }); } symbols_by_address.insert(sym.address, sym); } } let mut offset = 0; while offset < section.data.len() { let address = section.address + offset as u64; let sym = symbols_by_address .get(&address) .ok_or(ParseError::UnknownSymbol { section_index: section.index.0, address, })?; if sym.size == 0 { return Err(ParseError::InvalidSymbol { index: 
sym.index, name: sym.name.clone(), }); } let (func_info, line_info, func_info_rec_size, line_info_rec_size) = get_func_and_line_info(self.btf_ext.as_ref(), sym, §ion, offset, false); self.functions.insert( (section.index.0, sym.address), Function { address, name: sym.name.clone().unwrap(), section_index: section.index, section_offset: offset, instructions: copy_instructions( §ion.data[offset..offset + sym.size as usize], )?, func_info, line_info, func_info_rec_size, line_info_rec_size, }, ); offset += sym.size as usize; } if !section.relocations.is_empty() { self.relocations.insert( section.index, section .relocations .into_iter() .map(|rel| (rel.offset, rel)) .collect(), ); } Ok(()) } fn parse_btf_maps(&mut self, section: &Section) -> Result<(), ParseError> { if self.btf.is_none() { return Err(ParseError::NoBTF); } let btf = self.btf.as_ref().unwrap(); let maps: HashMap<&String, usize> = self .symbols_by_section .get(§ion.index) .ok_or(ParseError::NoSymbolsForSection { section_name: section.name.to_owned(), })? .iter() .filter_map(|s| { let symbol = self.symbol_table.get(s).unwrap(); symbol.name.as_ref().map(|name| (name, symbol.index)) }) .collect(); for t in btf.types() { if let BtfType::DataSec(datasec) = &t { let type_name = match btf.type_name(t) { Ok(name) => name, _ => continue, }; if type_name == section.name { // each btf_var_secinfo contains a map for info in &datasec.entries { let (map_name, def) = parse_btf_map_def(btf, info)?; let symbol_index = maps.get(&map_name) .ok_or_else(|| ParseError::SymbolNotFound { name: map_name.to_string(), })?; self.maps.insert( map_name, Map::Btf(BtfMap { def, section_index: section.index.0, symbol_index: *symbol_index, data: Vec::new(), }), ); } } } } Ok(()) } // Parses multiple map definition contained in a single `maps` section (which is // different from `.maps` which is used for BTF). We can tell where each map is // based on the symbol table. 
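The carving described in the comment above boils down to slicing the section data at each symbol's `[address, address + size)` range, one map per symbol. A minimal sketch with a hypothetical `Sym` type standing in for the crate's symbol table entry:

```rust
// Hypothetical symbol record: only the fields the slicing needs.
struct Sym {
    address: usize,
    size: usize,
}

// Each symbol in a `maps` section marks one legacy map definition;
// its bytes are the sub-slice [address, address + size) of the section.
fn map_bytes<'a>(section: &'a [u8], sym: &Sym) -> &'a [u8] {
    &section[sym.address..sym.address + sym.size]
}

fn main() {
    let section = [0u8, 1, 2, 3, 4, 5, 6, 7];
    let sym = Sym { address: 4, size: 4 };
    assert_eq!(map_bytes(&section, &sym), &[4u8, 5, 6, 7][..]);
    println!("ok");
}
```

The real parser then feeds each sub-slice to `parse_map_def`, and errors out if the section has no symbols at all.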
fn parse_maps_section<'a, I: Iterator>( &self, maps: &mut HashMap, section: &Section, symbols: I, ) -> Result<(), ParseError> { let mut have_symbols = false; // each symbol in the section is a separate map for i in symbols { let sym = self.symbol_table.get(i).ok_or(ParseError::SymbolNotFound { name: i.to_string(), })?; let start = sym.address as usize; let end = start + sym.size as usize; let data = §ion.data[start..end]; let name = sym .name .as_ref() .ok_or(ParseError::MapSymbolNameNotFound { i: *i })?; let def = parse_map_def(name, data)?; maps.insert( name.to_string(), Map::Legacy(LegacyMap { section_index: section.index.0, section_kind: section.kind, symbol_index: Some(sym.index), def, data: Vec::new(), }), ); have_symbols = true; } if !have_symbols { return Err(ParseError::NoSymbolsForSection { section_name: section.name.to_owned(), }); } Ok(()) } fn parse_section(&mut self, section: Section) -> Result<(), ParseError> { self.section_infos .insert(section.name.to_owned(), (section.index, section.size)); match section.kind { EbpfSectionKind::Data | EbpfSectionKind::Rodata | EbpfSectionKind::Bss => { self.maps .insert(section.name.to_string(), parse_data_map_section(§ion)?); } EbpfSectionKind::Text => self.parse_text_section(section)?, EbpfSectionKind::Btf => self.parse_btf(§ion)?, EbpfSectionKind::BtfExt => self.parse_btf_ext(§ion)?, EbpfSectionKind::BtfMaps => self.parse_btf_maps(§ion)?, EbpfSectionKind::Maps => { // take out self.maps so we can borrow the iterator below // without cloning or collecting let mut maps = mem::take(&mut self.maps); // extract the symbols for the .maps section, we'll need them // during parsing let symbols = self .symbols_by_section .get(§ion.index) .ok_or(ParseError::NoSymbolsForSection { section_name: section.name.to_owned(), })? .iter(); let res = self.parse_maps_section(&mut maps, §ion, symbols); // put the maps back self.maps = maps; res? 
} EbpfSectionKind::Program => { self.parse_programs(§ion)?; if !section.relocations.is_empty() { self.relocations.insert( section.index, section .relocations .into_iter() .map(|rel| (rel.offset, rel)) .collect(), ); } } EbpfSectionKind::Undefined | EbpfSectionKind::License | EbpfSectionKind::Version => {} } Ok(()) } /// Sanitize BPF functions. pub fn sanitize_functions(&mut self, features: &Features) { for function in self.functions.values_mut() { function.sanitize(features); } } } fn insn_is_helper_call(ins: &bpf_insn) -> bool { let klass = (ins.code & 0x07) as u32; let op = (ins.code & 0xF0) as u32; let src = (ins.code & 0x08) as u32; klass == BPF_JMP && op == BPF_CALL && src == BPF_K && ins.src_reg() == 0 && ins.dst_reg() == 0 } const BPF_FUNC_PROBE_READ: i32 = 4; const BPF_FUNC_PROBE_READ_STR: i32 = 45; const BPF_FUNC_PROBE_READ_USER: i32 = 112; const BPF_FUNC_PROBE_READ_KERNEL: i32 = 113; const BPF_FUNC_PROBE_READ_USER_STR: i32 = 114; const BPF_FUNC_PROBE_READ_KERNEL_STR: i32 = 115; impl Function { fn sanitize(&mut self, features: &Features) { for inst in &mut self.instructions { if !insn_is_helper_call(inst) { continue; } match inst.imm { BPF_FUNC_PROBE_READ_USER | BPF_FUNC_PROBE_READ_KERNEL if !features.bpf_probe_read_kernel => { inst.imm = BPF_FUNC_PROBE_READ; } BPF_FUNC_PROBE_READ_USER_STR | BPF_FUNC_PROBE_READ_KERNEL_STR if !features.bpf_probe_read_kernel => { inst.imm = BPF_FUNC_PROBE_READ_STR; } _ => {} } } } } /// Errors caught during parsing the object file #[derive(Debug, thiserror::Error)] #[allow(missing_docs)] pub enum ParseError { #[error("error parsing ELF data")] ElfError(object::read::Error), /// Error parsing BTF object #[error("BTF error")] BtfError(#[from] BtfError), #[error("invalid license `{data:?}`: missing NULL terminator")] MissingLicenseNullTerminator { data: Vec }, #[error("invalid license `{data:?}`")] InvalidLicense { data: Vec }, #[error("invalid kernel version `{data:?}`")] InvalidKernelVersion { data: Vec }, #[error("error 
parsing section with index {index}")] SectionError { index: usize, error: object::read::Error, }, #[error("unsupported relocation target")] UnsupportedRelocationTarget, #[error("invalid program section `{section}`")] InvalidProgramSection { section: String }, #[error("invalid program code")] InvalidProgramCode, #[error("error parsing map `{name}`")] InvalidMapDefinition { name: String }, #[error("two or more symbols in section `{section_index}` have the same address {address:#X}")] SymbolTableConflict { section_index: usize, address: u64 }, #[error("unknown symbol in section `{section_index}` at address {address:#X}")] UnknownSymbol { section_index: usize, address: u64 }, #[error("invalid symbol, index `{index}` name: {}", .name.as_ref().unwrap_or(&"[unknown]".into()))] InvalidSymbol { index: usize, name: Option }, #[error("symbol {name} has size `{sym_size}`, but provided data is of size `{data_size}`")] InvalidGlobalData { name: String, sym_size: u64, data_size: usize, }, #[error("symbol with name {name} not found in the symbols table")] SymbolNotFound { name: String }, #[error("map for section with index {index} not found")] MapNotFound { index: usize }, #[error("the map number {i} in the `maps` section doesn't have a symbol name")] MapSymbolNameNotFound { i: usize }, #[error("no symbols found in the {section_name} section")] NoSymbolsForSection { section_name: String }, /// No BTF parsed for object #[error("no BTF parsed for object")] NoBTF, } /// Invalid bindings to the bpf type from the parsed/received value. pub struct InvalidTypeBinding { /// The value parsed/received. pub value: T, } /// The kind of an ELF section. 
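///
/// Section names map to kinds by prefix; a minimal sketch of the mapping
/// (illustrative, mirroring `EbpfSectionKind::from_name` below):
///
/// ```ignore
/// assert_eq!(EbpfSectionKind::from_name(".rodata"), EbpfSectionKind::Rodata);
/// assert_eq!(EbpfSectionKind::from_name("license"), EbpfSectionKind::License);
/// // names that match no known prefix fall back to Undefined
/// assert_eq!(EbpfSectionKind::from_name("foo"), EbpfSectionKind::Undefined);
/// ```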
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub enum EbpfSectionKind {
    /// Undefined
    Undefined,
    /// `maps`
    Maps,
    /// `.maps`
    BtfMaps,
    /// A program section
    Program,
    /// `.data`
    Data,
    /// `.rodata`
    Rodata,
    /// `.bss`
    Bss,
    /// `.text`
    Text,
    /// `.BTF`
    Btf,
    /// `.BTF.ext`
    BtfExt,
    /// `license`
    License,
    /// `version`
    Version,
}

impl EbpfSectionKind {
    fn from_name(name: &str) -> EbpfSectionKind {
        if name.starts_with("license") {
            EbpfSectionKind::License
        } else if name.starts_with("version") {
            EbpfSectionKind::Version
        } else if name.starts_with("maps") {
            EbpfSectionKind::Maps
        } else if name.starts_with(".maps") {
            EbpfSectionKind::BtfMaps
        } else if name.starts_with(".text") {
            EbpfSectionKind::Text
        } else if name.starts_with(".bss") {
            EbpfSectionKind::Bss
        } else if name.starts_with(".data") {
            EbpfSectionKind::Data
        } else if name.starts_with(".rodata") {
            EbpfSectionKind::Rodata
        } else if name == ".BTF" {
            EbpfSectionKind::Btf
        } else if name == ".BTF.ext" {
            EbpfSectionKind::BtfExt
        } else {
            EbpfSectionKind::Undefined
        }
    }
}

#[derive(Debug)]
struct Section<'a> {
    index: SectionIndex,
    kind: EbpfSectionKind,
    address: u64,
    name: &'a str,
    data: &'a [u8],
    size: u64,
    relocations: Vec<Relocation>,
}

impl<'a> TryFrom<&'a ObjSection<'_, '_>> for Section<'a> {
    type Error = ParseError;

    fn try_from(section: &'a ObjSection) -> Result<Section<'a>, ParseError> {
        let index = section.index();
        let map_err = |error| ParseError::SectionError {
            index: index.0,
            error,
        };
        let name = section.name().map_err(map_err)?;
        let kind = match EbpfSectionKind::from_name(name) {
            EbpfSectionKind::Undefined => {
                if section.kind() == SectionKind::Text && section.size() > 0 {
                    EbpfSectionKind::Program
                } else {
                    EbpfSectionKind::Undefined
                }
            }
            k => k,
        };
        Ok(Section {
            index,
            kind,
            address: section.address(),
            name,
            data: section.data().map_err(map_err)?,
            size: section.size(),
            relocations: section
                .relocations()
                .map(|(offset, r)| {
                    Ok(Relocation {
                        symbol_index: match r.target() {
                            RelocationTarget::Symbol(index) => index.0,
                            _ => return Err(ParseError::UnsupportedRelocationTarget),
                        },
                        offset,
                        size: r.size(),
                    })
                })
                .collect::<Result<Vec<_>, _>>()?,
        })
    }
}

fn parse_license(data: &[u8]) -> Result<CString, ParseError> {
    if data.len() < 2 {
        return Err(ParseError::InvalidLicense {
            data: data.to_vec(),
        });
    }

    if data[data.len() - 1] != 0 {
        return Err(ParseError::MissingLicenseNullTerminator {
            data: data.to_vec(),
        });
    }

    Ok(CStr::from_bytes_with_nul(data)
        .map_err(|_| ParseError::InvalidLicense {
            data: data.to_vec(),
        })?
        .to_owned())
}

fn parse_version(data: &[u8], endianness: object::Endianness) -> Result<Option<u32>, ParseError> {
    let data = match data.len() {
        4 => data.try_into().unwrap(),
        _ => {
            return Err(ParseError::InvalidKernelVersion {
                data: data.to_vec(),
            })
        }
    };

    let v = match endianness {
        object::Endianness::Big => u32::from_be_bytes(data),
        object::Endianness::Little => u32::from_le_bytes(data),
    };

    Ok(if v == KERNEL_VERSION_ANY { None } else { Some(v) })
}

// Gets an integer value from a BTF map definition K/V pair.
// type_id should be a PTR to an ARRAY.
// the value is encoded in the array nr_elems field.
fn get_map_field(btf: &Btf, type_id: u32) -> Result<u32, BtfError> {
    let pty = match &btf.type_by_id(type_id)? {
        BtfType::Ptr(pty) => pty,
        other => {
            return Err(BtfError::UnexpectedBtfType {
                type_id: other.btf_type().unwrap_or(0),
            })
        }
    };
    // Safety: union
    let arr = match &btf.type_by_id(pty.btf_type)? {
        BtfType::Array(Array { array, .. }) => array,
        other => {
            return Err(BtfError::UnexpectedBtfType {
                type_id: other.btf_type().unwrap_or(0),
            })
        }
    };
    Ok(arr.len)
}

// Parses '.bss', '.data' and '.rodata' sections. These sections are arrays of
// bytes and are relocated based on their section index.
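//
// For example (a sketch; the 16-byte value size is illustrative), a 16-byte
// `.rodata` section becomes a single-entry, read-only array map:
//
//     bpf_map_def { map_type: BPF_MAP_TYPE_ARRAY, key_size: 4,
//                   value_size: 16, max_entries: 1,
//                   map_flags: BPF_F_RDONLY_PROG, ..Default::default() }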
fn parse_data_map_section(section: &Section) -> Result<Map, ParseError> {
    let (def, data) = match section.kind {
        EbpfSectionKind::Data | EbpfSectionKind::Rodata => {
            let def = bpf_map_def {
                map_type: BPF_MAP_TYPE_ARRAY as u32,
                key_size: mem::size_of::<u32>() as u32,
                // We need to use section.size here since
                // .bss will always have data.len() == 0
                value_size: section.size as u32,
                max_entries: 1,
                map_flags: if section.kind == EbpfSectionKind::Rodata {
                    BPF_F_RDONLY_PROG
                } else {
                    0
                },
                ..Default::default()
            };
            (def, section.data.to_vec())
        }
        EbpfSectionKind::Bss => {
            let def = bpf_map_def {
                map_type: BPF_MAP_TYPE_ARRAY as u32,
                key_size: mem::size_of::<u32>() as u32,
                value_size: section.size as u32,
                max_entries: 1,
                map_flags: 0,
                ..Default::default()
            };
            (def, vec![0; section.size as usize])
        }
        _ => unreachable!(),
    };
    Ok(Map::Legacy(LegacyMap {
        section_index: section.index.0,
        section_kind: section.kind,
        // Data maps don't require symbols to be relocated
        symbol_index: None,
        def,
        data,
    }))
}

fn parse_map_def(name: &str, data: &[u8]) -> Result<bpf_map_def, ParseError> {
    if data.len() < MINIMUM_MAP_SIZE {
        return Err(ParseError::InvalidMapDefinition {
            name: name.to_owned(),
        });
    }

    if data.len() < mem::size_of::<bpf_map_def>() {
        let mut map_def = bpf_map_def::default();
        unsafe {
            let map_def_ptr =
                from_raw_parts_mut(&mut map_def as *mut bpf_map_def as *mut u8, data.len());
            map_def_ptr.copy_from_slice(data);
        }
        Ok(map_def)
    } else {
        Ok(unsafe { ptr::read_unaligned(data.as_ptr() as *const bpf_map_def) })
    }
}

fn parse_btf_map_def(btf: &Btf, info: &DataSecEntry) -> Result<(String, BtfMapDef), BtfError> {
    let ty = match btf.type_by_id(info.btf_type)? {
        BtfType::Var(var) => var,
        other => {
            return Err(BtfError::UnexpectedBtfType {
                type_id: other.btf_type().unwrap_or(0),
            })
        }
    };
    let map_name = btf.string_at(ty.name_offset)?;

    let mut map_def = BtfMapDef::default();

    // Safety: union
    let root_type = btf.resolve_type(ty.btf_type)?;
    let s = match btf.type_by_id(root_type)? {
        BtfType::Struct(s) => s,
        other => {
            return Err(BtfError::UnexpectedBtfType {
                type_id: other.btf_type().unwrap_or(0),
            })
        }
    };

    for m in &s.members {
        match btf.string_at(m.name_offset)?.as_ref() {
            "type" => {
                map_def.map_type = get_map_field(btf, m.btf_type)?;
            }
            "key" => {
                if let BtfType::Ptr(pty) = btf.type_by_id(m.btf_type)? {
                    // Safety: union
                    let t = pty.btf_type;
                    map_def.key_size = btf.type_size(t)? as u32;
                    map_def.btf_key_type_id = t;
                } else {
                    return Err(BtfError::UnexpectedBtfType {
                        type_id: m.btf_type,
                    });
                }
            }
            "key_size" => {
                map_def.key_size = get_map_field(btf, m.btf_type)?;
            }
            "value" => {
                if let BtfType::Ptr(pty) = btf.type_by_id(m.btf_type)? {
                    let t = pty.btf_type;
                    map_def.value_size = btf.type_size(t)? as u32;
                    map_def.btf_value_type_id = t;
                } else {
                    return Err(BtfError::UnexpectedBtfType {
                        type_id: m.btf_type,
                    });
                }
            }
            "value_size" => {
                map_def.value_size = get_map_field(btf, m.btf_type)?;
            }
            "max_entries" => {
                map_def.max_entries = get_map_field(btf, m.btf_type)?;
            }
            "map_flags" => {
                map_def.map_flags = get_map_field(btf, m.btf_type)?;
            }
            "pinning" => {
                let pinning = get_map_field(btf, m.btf_type)?;
                map_def.pinning = PinningType::try_from(pinning).unwrap_or_else(|_| {
                    debug!("{} is not a valid pin type. using PIN_NONE", pinning);
                    PinningType::None
                });
            }
            other => {
                debug!("skipping unknown map section: {}", other);
                continue;
            }
        }
    }
    Ok((map_name.to_string(), map_def))
}

/// Parses a [bpf_map_info] into a [Map].
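///
/// A map is treated as a BTF map when the kernel reports a non-zero
/// `btf_key_type_id`; otherwise it is parsed as a legacy map. A minimal
/// sketch (assuming `info` was obtained from the kernel, e.g. via
/// `BPF_OBJ_GET_INFO_BY_FD`):
///
/// ```ignore
/// let map = parse_map_info(info, PinningType::None);
/// match map {
///     Map::Btf(_) => { /* info.btf_key_type_id != 0 */ }
///     Map::Legacy(_) => { /* no BTF key type reported */ }
/// }
/// ```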
pub fn parse_map_info(info: bpf_map_info, pinned: PinningType) -> Map {
    if info.btf_key_type_id != 0 {
        Map::Btf(BtfMap {
            def: BtfMapDef {
                map_type: info.type_,
                key_size: info.key_size,
                value_size: info.value_size,
                max_entries: info.max_entries,
                map_flags: info.map_flags,
                pinning: pinned,
                btf_key_type_id: info.btf_key_type_id,
                btf_value_type_id: info.btf_value_type_id,
            },
            section_index: 0,
            symbol_index: 0,
            data: Vec::new(),
        })
    } else {
        Map::Legacy(LegacyMap {
            def: bpf_map_def {
                map_type: info.type_,
                key_size: info.key_size,
                value_size: info.value_size,
                max_entries: info.max_entries,
                map_flags: info.map_flags,
                pinning: pinned,
                id: info.id,
            },
            section_index: 0,
            symbol_index: None,
            section_kind: EbpfSectionKind::Undefined,
            data: Vec::new(),
        })
    }
}

/// Copies a block of eBPF instructions
pub fn copy_instructions(data: &[u8]) -> Result<Vec<bpf_insn>, ParseError> {
    if data.len() % mem::size_of::<bpf_insn>() > 0 {
        return Err(ParseError::InvalidProgramCode);
    }
    let instructions = data
        .chunks_exact(mem::size_of::<bpf_insn>())
        .map(|d| unsafe { ptr::read_unaligned(d.as_ptr() as *const bpf_insn) })
        .collect::<Vec<_>>();
    Ok(instructions)
}

fn get_func_and_line_info(
    btf_ext: Option<&BtfExt>,
    symbol: &Symbol,
    section: &Section,
    offset: usize,
    rewrite_insn_off: bool,
) -> (FuncSecInfo, LineSecInfo, usize, usize) {
    btf_ext
        .map(|btf_ext| {
            let instruction_offset = (offset / INS_SIZE) as u32;
            let symbol_size_instructions = (symbol.size as usize / INS_SIZE) as u32;

            let mut func_info = btf_ext.func_info.get(section.name);
            func_info.func_info.retain_mut(|f| {
                let retain = f.insn_off == instruction_offset;
                if retain && rewrite_insn_off {
                    f.insn_off = 0;
                }
                retain
            });

            let mut line_info = btf_ext.line_info.get(section.name);
            line_info
                .line_info
                .retain_mut(|l| match l.insn_off.checked_sub(instruction_offset) {
                    None => false,
                    Some(insn_off) => {
                        let retain = insn_off < symbol_size_instructions;
                        if retain && rewrite_insn_off {
                            l.insn_off = insn_off
                        }
                        retain
                    }
                });

            (
                func_info,
                line_info,
                btf_ext.func_info_rec_size(),
                btf_ext.line_info_rec_size(),
            )
        })
        .unwrap_or_default()
}

#[cfg(test)]
mod tests {
    use alloc::vec;

    use assert_matches::assert_matches;

    use super::*;
    use crate::generated::btf_ext_header;

    const FAKE_INS_LEN: u64 = 8;

    fn fake_section<'a>(
        kind: EbpfSectionKind,
        name: &'a str,
        data: &'a [u8],
        index: Option<usize>,
    ) -> Section<'a> {
        let idx = index.unwrap_or(0);
        Section {
            index: SectionIndex(idx),
            kind,
            address: 0,
            name,
            data,
            size: data.len() as u64,
            relocations: Vec::new(),
        }
    }

    fn fake_ins() -> bpf_insn {
        bpf_insn {
            code: 0,
            _bitfield_align_1: [],
            _bitfield_1: bpf_insn::new_bitfield_1(0, 0),
            off: 0,
            imm: 0,
        }
    }

    fn fake_sym(obj: &mut Object, section_index: usize, address: u64, name: &str, size: u64) {
        let idx = obj.symbol_table.len();
        obj.symbol_table.insert(
            idx + 1,
            Symbol {
                index: idx + 1,
                section_index: Some(section_index),
                name: Some(name.to_string()),
                address,
                size,
                is_definition: false,
                kind: SymbolKind::Text,
            },
        );
        obj.symbols_by_section
            .entry(SectionIndex(section_index))
            .or_default()
            .push(idx + 1);
    }

    fn bytes_of<T>(val: &T) -> &[u8] {
        // Safety: This is for testing only
        unsafe { crate::util::bytes_of(val) }
    }

    #[test]
    fn test_parse_generic_error() {
        assert_matches!(Object::parse(&b"foo"[..]), Err(ParseError::ElfError(_)))
    }

    #[test]
    fn test_parse_license() {
        assert_matches!(parse_license(b""), Err(ParseError::InvalidLicense { .. }));

        assert_matches!(parse_license(b"\0"), Err(ParseError::InvalidLicense { .. }));

        assert_matches!(
            parse_license(b"GPL"),
            Err(ParseError::MissingLicenseNullTerminator { .. })
        );

        assert_eq!(parse_license(b"GPL\0").unwrap().to_str().unwrap(), "GPL");
    }

    #[test]
    fn test_parse_version() {
        assert_matches!(
            parse_version(b"", Endianness::Little),
            Err(ParseError::InvalidKernelVersion { .. })
        );

        assert_matches!(
            parse_version(b"123", Endianness::Little),
            Err(ParseError::InvalidKernelVersion { .. })
        );

        assert_matches!(
            parse_version(&0xFFFF_FFFEu32.to_le_bytes(), Endianness::Little),
            Ok(None)
        );

        assert_matches!(
            parse_version(&0xFFFF_FFFEu32.to_be_bytes(), Endianness::Big),
            Ok(None)
        );

        assert_matches!(
            parse_version(&1234u32.to_le_bytes(), Endianness::Little),
            Ok(Some(1234))
        );
    }

    #[test]
    fn test_parse_map_def_error() {
        assert_matches!(
            parse_map_def("foo", &[]),
            Err(ParseError::InvalidMapDefinition { .. })
        );
    }

    #[test]
    fn test_parse_map_short() {
        let def = bpf_map_def {
            map_type: 1,
            key_size: 2,
            value_size: 3,
            max_entries: 4,
            map_flags: 5,
            id: 0,
            pinning: PinningType::None,
        };

        assert_eq!(
            parse_map_def("foo", &bytes_of(&def)[..MINIMUM_MAP_SIZE]).unwrap(),
            def
        );
    }

    #[test]
    fn test_parse_map_def() {
        let def = bpf_map_def {
            map_type: 1,
            key_size: 2,
            value_size: 3,
            max_entries: 4,
            map_flags: 5,
            id: 6,
            pinning: PinningType::ByName,
        };

        assert_eq!(parse_map_def("foo", bytes_of(&def)).unwrap(), def);
    }

    #[test]
    fn test_parse_map_def_with_padding() {
        let def = bpf_map_def {
            map_type: 1,
            key_size: 2,
            value_size: 3,
            max_entries: 4,
            map_flags: 5,
            id: 6,
            pinning: PinningType::ByName,
        };
        let mut buf = [0u8; 128];
        unsafe { ptr::write_unaligned(buf.as_mut_ptr() as *mut _, def) };

        assert_eq!(parse_map_def("foo", &buf).unwrap(), def);
    }

    #[test]
    fn test_parse_map_data() {
        let map_data = b"map data";
        assert_matches!(
            parse_data_map_section(&fake_section(EbpfSectionKind::Data, ".bss", map_data, None)),
            Ok(Map::Legacy(LegacyMap {
                section_index: 0,
                section_kind: EbpfSectionKind::Data,
                symbol_index: None,
                def: bpf_map_def {
                    map_type: _map_type,
                    key_size: 4,
                    value_size,
                    max_entries: 1,
                    map_flags: 0,
                    id: 0,
                    pinning: PinningType::None,
                },
                data,
            })) if data == map_data && value_size == map_data.len() as u32
        )
    }

    fn fake_obj() -> Object {
        Object::new(Endianness::Little, CString::new("GPL").unwrap(), None)
    }

    #[test]
    fn sanitizes_empty_btf_files_to_none() {
        let mut obj = fake_obj();

        let btf = Btf::new();
        let btf_bytes = btf.to_bytes();
        obj.parse_section(fake_section(EbpfSectionKind::Btf, ".BTF", &btf_bytes, None))
            .unwrap();

        const FUNC_INFO_LEN: u32 = 4;
        const LINE_INFO_LEN: u32 = 4;
        const CORE_RELO_LEN: u32 = 16;
        let ext_header = btf_ext_header {
            magic: 0xeb9f,
            version: 1,
            flags: 0,
            hdr_len: 24,
            func_info_off: 0,
            func_info_len: FUNC_INFO_LEN,
            line_info_off: FUNC_INFO_LEN,
            line_info_len: LINE_INFO_LEN,
            core_relo_off: FUNC_INFO_LEN + LINE_INFO_LEN,
            core_relo_len: CORE_RELO_LEN,
        };
        let btf_ext_bytes = bytes_of::<btf_ext_header>(&ext_header).to_vec();
        obj.parse_section(fake_section(
            EbpfSectionKind::BtfExt,
            ".BTF.ext",
            &btf_ext_bytes,
            None,
        ))
        .unwrap();

        let btf = obj.fixup_and_sanitize_btf(&BtfFeatures::default()).unwrap();
        assert!(btf.is_none());
    }

    #[test]
    fn test_parse_program_error() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", 1);

        assert_matches!(
            obj.parse_programs(&fake_section(
                EbpfSectionKind::Program,
                "kprobe/foo",
                &42u32.to_ne_bytes(),
                None,
            )),
            Err(ParseError::InvalidProgramCode)
        );
    }

    #[test]
    fn test_parse_program() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        obj.parse_programs(&fake_section(
            EbpfSectionKind::Program,
            "kprobe/foo",
            bytes_of(&fake_ins()),
            None,
        ))
        .unwrap();

        let prog_foo = obj.programs.get("foo").unwrap();

        assert_matches!(prog_foo, Program {
            license,
            kernel_version: None,
            section: ProgramSection::KProbe { .. },
            ..
        } => assert_eq!(license.to_str().unwrap(), "GPL"));

        assert_matches!(
            obj.functions.get(&prog_foo.function_key()),
            Some(Function {
                name,
                address: 0,
                section_index: SectionIndex(0),
                section_offset: 0,
                instructions,
                ..
            }) if name == "foo" && instructions.len() == 1
        )
    }

    #[test]
    fn test_parse_section_map() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", mem::size_of::<bpf_map_def>() as u64);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Maps,
                "maps/foo",
                bytes_of(&bpf_map_def {
                    map_type: 1,
                    key_size: 2,
                    value_size: 3,
                    max_entries: 4,
                    map_flags: 5,
                    ..Default::default()
                }),
                None,
            )),
            Ok(())
        );
        assert!(obj.maps.contains_key("foo"));
    }

    #[test]
    fn test_parse_multiple_program_in_same_section() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);
        fake_sym(&mut obj, 0, FAKE_INS_LEN, "bar", FAKE_INS_LEN);

        let insns = [fake_ins(), fake_ins()];
        let data = bytes_of(&insns);

        obj.parse_programs(&fake_section(EbpfSectionKind::Program, "kprobe", data, None))
            .unwrap();

        let prog_foo = obj.programs.get("foo").unwrap();
        let function_foo = obj.functions.get(&prog_foo.function_key()).unwrap();
        let prog_bar = obj.programs.get("bar").unwrap();
        let function_bar = obj.functions.get(&prog_bar.function_key()).unwrap();

        assert_matches!(prog_foo, Program {
            license,
            kernel_version: None,
            section: ProgramSection::KProbe { .. },
            ..
        } => assert_eq!(license.to_str().unwrap(), "GPL"));
        assert_matches!(
            function_foo,
            Function {
                name,
                address: 0,
                section_index: SectionIndex(0),
                section_offset: 0,
                instructions,
                ..
            } if name == "foo" && instructions.len() == 1
        );

        assert_matches!(prog_bar, Program {
            license,
            kernel_version: None,
            section: ProgramSection::KProbe { .. },
            ..
        } => assert_eq!(license.to_str().unwrap(), "GPL"));
        assert_matches!(
            function_bar,
            Function {
                name,
                address: 8,
                section_index: SectionIndex(0),
                section_offset: 8,
                instructions,
                ..
            } if name == "bar" && instructions.len() == 1
        );
    }

    #[test]
    fn test_parse_section_multiple_maps() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", mem::size_of::<bpf_map_def>() as u64);
        fake_sym(&mut obj, 0, 28, "bar", mem::size_of::<bpf_map_def>() as u64);
        fake_sym(&mut obj, 0, 60, "baz", mem::size_of::<bpf_map_def>() as u64);
        let def = &bpf_map_def {
            map_type: 1,
            key_size: 2,
            value_size: 3,
            max_entries: 4,
            map_flags: 5,
            ..Default::default()
        };
        let map_data = bytes_of(def).to_vec();
        let mut buf = vec![];
        buf.extend(&map_data);
        buf.extend(&map_data);
        // throw in some padding
        buf.extend([0, 0, 0, 0]);
        buf.extend(&map_data);
        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Maps,
                "maps",
                buf.as_slice(),
                None
            )),
            Ok(())
        );
        assert!(obj.maps.contains_key("foo"));
        assert!(obj.maps.contains_key("bar"));
        assert!(obj.maps.contains_key("baz"));
        for map in obj.maps.values() {
            assert_matches!(map, Map::Legacy(m) => {
                assert_eq!(&m.def, def);
            })
        }
    }

    #[test]
    fn test_parse_section_data() {
        let mut obj = fake_obj();
        assert_matches!(
            obj.parse_section(fake_section(EbpfSectionKind::Data, ".bss", b"map data", None)),
            Ok(())
        );
        assert!(obj.maps.contains_key(".bss"));

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Data,
                ".rodata",
                b"map data",
                None
            )),
            Ok(())
        );
        assert!(obj.maps.contains_key(".rodata"));

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Data,
                ".rodata.boo",
                b"map data",
                None
            )),
            Ok(())
        );
        assert!(obj.maps.contains_key(".rodata.boo"));

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Data,
                ".data",
                b"map data",
                None
            )),
            Ok(())
        );
        assert!(obj.maps.contains_key(".data"));

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Data,
                ".data.boo",
                b"map data",
                None
            )),
            Ok(())
        );
        assert!(obj.maps.contains_key(".data.boo"));
    }

    #[test]
    fn test_parse_section_kprobe() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "kprobe/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::KProbe { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_uprobe() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "uprobe/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::UProbe { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_uprobe_sleepable() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "uprobe.s/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::UProbe {
                    sleepable: true,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_uretprobe() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "uretprobe/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::URetProbe { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_uretprobe_sleepable() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "uretprobe.s/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::URetProbe {
                    sleepable: true,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_trace_point() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);
        fake_sym(&mut obj, 1, 0, "bar", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "tracepoint/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::TracePoint { .. },
                ..
            })
        );

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "tp/foo/bar",
                bytes_of(&fake_ins()),
                Some(1),
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("bar"),
            Some(Program {
                section: ProgramSection::TracePoint { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_socket_filter() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "socket/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::SocketFilter { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_xdp() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "xdp",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::Xdp { frags: false, .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_xdp_frags() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "xdp.frags",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::Xdp { frags: true, .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_raw_tp() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);
        fake_sym(&mut obj, 1, 0, "bar", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "raw_tp/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::RawTracePoint { .. },
                ..
            })
        );

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "raw_tracepoint/bar",
                bytes_of(&fake_ins()),
                Some(1)
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("bar"),
            Some(Program {
                section: ProgramSection::RawTracePoint { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_lsm() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "lsm/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::Lsm {
                    sleepable: false,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_lsm_sleepable() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "lsm.s/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::Lsm {
                    sleepable: true,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_btf_tracepoint() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "tp_btf/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::BtfTracePoint { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_skskb_unnamed() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "stream_parser", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "sk_skb/stream_parser",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("stream_parser"),
            Some(Program {
                section: ProgramSection::SkSkbStreamParser { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_skskb_named() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "my_parser", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "sk_skb/stream_parser/my_parser",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("my_parser"),
            Some(Program {
                section: ProgramSection::SkSkbStreamParser { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_fentry() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "fentry/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::FEntry { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_fentry_sleepable() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "fentry.s/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::FEntry {
                    sleepable: true,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_fexit() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "fexit/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::FExit { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_fexit_sleepable() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "fexit.s/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::FExit {
                    sleepable: true,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_cgroup_skb_ingress_unnamed() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "ingress", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup_skb/ingress",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("ingress"),
            Some(Program {
                section: ProgramSection::CgroupSkbIngress { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_cgroup_skb_ingress_named() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup_skb/ingress/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::CgroupSkbIngress { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_cgroup_skb_no_direction_unnamed() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "skb", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup/skb",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("skb"),
            Some(Program {
                section: ProgramSection::CgroupSkb { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_cgroup_skb_no_direction_named() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup/skb/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::CgroupSkb { .. },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_sock_addr_named() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup/connect4/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::CgroupSockAddr {
                    attach_type: CgroupSockAddrAttachType::Connect4,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_sock_addr_unnamed() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "connect4", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup/connect4",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("connect4"),
            Some(Program {
                section: ProgramSection::CgroupSockAddr {
                    attach_type: CgroupSockAddrAttachType::Connect4,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_sockopt_named() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "foo", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup/getsockopt/foo",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("foo"),
            Some(Program {
                section: ProgramSection::CgroupSockopt {
                    attach_type: CgroupSockoptAttachType::Get,
                    ..
                },
                ..
            })
        );
    }

    #[test]
    fn test_parse_section_sockopt_unnamed() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "getsockopt", FAKE_INS_LEN);

        assert_matches!(
            obj.parse_section(fake_section(
                EbpfSectionKind::Program,
                "cgroup/getsockopt",
                bytes_of(&fake_ins()),
                None
            )),
            Ok(())
        );
        assert_matches!(
            obj.programs.get("getsockopt"),
            Some(Program {
                section: ProgramSection::CgroupSockopt {
                    attach_type: CgroupSockoptAttachType::Get,
                    ..
            })
        );
    }

    #[test]
    fn test_patch_map_data() {
        let mut obj = fake_obj();
        obj.maps.insert(
            ".rodata".to_owned(),
            Map::Legacy(LegacyMap {
                def: bpf_map_def {
                    map_type: BPF_MAP_TYPE_ARRAY as u32,
                    key_size: mem::size_of::<u32>() as u32,
                    value_size: 3,
                    max_entries: 1,
                    map_flags: BPF_F_RDONLY_PROG,
                    id: 1,
                    pinning: PinningType::None,
                },
                section_index: 1,
                section_kind: EbpfSectionKind::Rodata,
                symbol_index: Some(1),
                data: vec![0, 0, 0],
            }),
        );
        obj.symbol_table.insert(
            1,
            Symbol {
                index: 1,
                section_index: Some(1),
                name: Some("my_config".to_owned()),
                address: 0,
                size: 3,
                is_definition: true,
                kind: SymbolKind::Data,
            },
        );

        let test_data: &[u8] = &[1, 2, 3];
        obj.patch_map_data(HashMap::from([
            ("my_config", (test_data, true)),
            ("optional_variable", (test_data, false)),
        ]))
        .unwrap();

        let map = obj.maps.get(".rodata").unwrap();
        assert_eq!(test_data, map.data());
    }

    #[test]
    fn test_parse_btf_map_section() {
        let mut obj = fake_obj();
        fake_sym(&mut obj, 0, 0, "map_1", 0);
        fake_sym(&mut obj, 0, 0, "map_2", 0);

        // generated from:
        // objcopy --dump-section .BTF=test.btf ./target/bpfel-unknown-none/debug/multimap-btf.bpf.o
        // hexdump -v -e '7/1 "0x%02X, " 1/1 " 0x%02X,\n"' test.btf
        #[cfg(target_endian = "little")]
        let data: &[u8] = &[
            0x9F, 0xEB, 0x01, 0x00, 0x18, 0x00, 0x00, 0x00,
            0x00, 0x00, 0x00, 0x00, 0xF0, 0x01, 0x00, 0x00,
            0xF0, 0x01, 0x00, 0x00, 0xCC, 0x01, 0x00, 0x00,
            0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02,
            0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
            0x00, 0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x00,
            0x20, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
            0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00,
            0x02, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00,
            0x02, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00,
            0x00, 0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x00,
            0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
            0x00, 0x00, 0x00, 0x02, 0x06, 0x00, 0x00, 0x00,
            0x19, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08,
            0x07, 0x00, 0x00, 0x00, 0x1F, 0x00, 0x00, 0x00,
            0x00, 0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x00,
            0x20,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x09, 0x00, 0x00, 0x00, 0x2C, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x0A, 0x00, 0x00, 0x00, 0x32, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x08, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x0C, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x04, 0x20, 0x00, 0x00, 0x00, 0x45, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4A, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x4E, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x54, 0x00, 0x00, 0x00, 0x0B, 0x00, 0x00, 0x00, 0xC0, 0x00, 0x00, 0x00, 0x60, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0E, 0x0D, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x04, 0x20, 0x00, 0x00, 0x00, 0x45, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4A, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x4E, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x54, 0x00, 0x00, 0x00, 0x0B, 0x00, 0x00, 0x00, 0xC0, 0x00, 0x00, 0x00, 0x66, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0E, 0x0F, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x0D, 0x02, 0x00, 0x00, 0x00, 0x6C, 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, 0x00, 0x70, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x0C, 0x12, 0x00, 0x00, 0x00, 0xB0, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0xB5, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0E, 0x15, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0xBE, 0x01, 
0x00, 0x00, 0x02, 0x00, 0x00, 0x0F, 0x00, 0x00, 0x00, 0x00, 0x0E, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0xC4, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00, 0x0F, 0x00, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x69, 0x6E, 0x74, 0x00, 0x5F, 0x5F, 0x41, 0x52, 0x52, 0x41, 0x59, 0x5F, 0x53, 0x49, 0x5A, 0x45, 0x5F, 0x54, 0x59, 0x50, 0x45, 0x5F, 0x5F, 0x00, 0x5F, 0x5F, 0x75, 0x33, 0x32, 0x00, 0x75, 0x6E, 0x73, 0x69, 0x67, 0x6E, 0x65, 0x64, 0x20, 0x69, 0x6E, 0x74, 0x00, 0x5F, 0x5F, 0x75, 0x36, 0x34, 0x00, 0x75, 0x6E, 0x73, 0x69, 0x67, 0x6E, 0x65, 0x64, 0x20, 0x6C, 0x6F, 0x6E, 0x67, 0x20, 0x6C, 0x6F, 0x6E, 0x67, 0x00, 0x74, 0x79, 0x70, 0x65, 0x00, 0x6B, 0x65, 0x79, 0x00, 0x76, 0x61, 0x6C, 0x75, 0x65, 0x00, 0x6D, 0x61, 0x78, 0x5F, 0x65, 0x6E, 0x74, 0x72, 0x69, 0x65, 0x73, 0x00, 0x6D, 0x61, 0x70, 0x5F, 0x31, 0x00, 0x6D, 0x61, 0x70, 0x5F, 0x32, 0x00, 0x63, 0x74, 0x78, 0x00, 0x62, 0x70, 0x66, 0x5F, 0x70, 0x72, 0x6F, 0x67, 0x00, 0x74, 0x72, 0x61, 0x63, 0x65, 0x70, 0x6F, 0x69, 0x6E, 0x74, 0x00, 0x2F, 0x76, 0x61, 0x72, 0x2F, 0x68, 0x6F, 0x6D, 0x65, 0x2F, 0x64, 0x61, 0x76, 0x65, 0x2F, 0x64, 0x65, 0x76, 0x2F, 0x61, 0x79, 0x61, 0x2D, 0x72, 0x73, 0x2F, 0x61, 0x79, 0x61, 0x2F, 0x74, 0x65, 0x73, 0x74, 0x2F, 0x69, 0x6E, 0x74, 0x65, 0x67, 0x72, 0x61, 0x74, 0x69, 0x6F, 0x6E, 0x2D, 0x65, 0x62, 0x70, 0x66, 0x2F, 0x73, 0x72, 0x63, 0x2F, 0x62, 0x70, 0x66, 0x2F, 0x6D, 0x75, 0x6C, 0x74, 0x69, 0x6D, 0x61, 0x70, 0x2D, 0x62, 0x74, 0x66, 0x2E, 0x62, 0x70, 0x66, 0x2E, 0x63, 0x00, 0x69, 0x6E, 0x74, 0x20, 0x62, 0x70, 0x66, 0x5F, 0x70, 0x72, 0x6F, 0x67, 0x28, 0x76, 0x6F, 0x69, 0x64, 0x20, 0x2A, 0x63, 0x74, 0x78, 0x29, 0x00, 0x09, 0x5F, 0x5F, 0x75, 0x33, 0x32, 0x20, 0x6B, 0x65, 0x79, 0x20, 0x3D, 0x20, 0x30, 0x3B, 0x00, 0x09, 0x5F, 0x5F, 0x75, 0x36, 0x34, 0x20, 0x74, 0x77, 0x65, 0x6E, 0x74, 0x79, 0x5F, 0x66, 0x6F, 0x75, 0x72, 0x20, 0x3D, 0x20, 0x32, 0x34, 0x3B, 
0x00, 0x09, 0x5F, 0x5F, 0x75, 0x36, 0x34, 0x20, 0x66, 0x6F, 0x72, 0x74, 0x79, 0x5F, 0x74, 0x77, 0x6F, 0x20, 0x3D, 0x20, 0x34, 0x32, 0x3B, 0x00, 0x20, 0x20, 0x20, 0x20, 0x62, 0x70, 0x66, 0x5F, 0x6D, 0x61, 0x70, 0x5F, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x5F, 0x65, 0x6C, 0x65, 0x6D, 0x28, 0x26, 0x6D, 0x61, 0x70, 0x5F, 0x31, 0x2C, 0x20, 0x26, 0x6B, 0x65, 0x79, 0x2C, 0x20, 0x26, 0x74, 0x77, 0x65, 0x6E, 0x74, 0x79, 0x5F, 0x66, 0x6F, 0x75, 0x72, 0x2C, 0x20, 0x42, 0x50, 0x46, 0x5F, 0x41, 0x4E, 0x59, 0x29, 0x3B, 0x00, 0x20, 0x20, 0x20, 0x20, 0x62, 0x70, 0x66, 0x5F, 0x6D, 0x61, 0x70, 0x5F, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x5F, 0x65, 0x6C, 0x65, 0x6D, 0x28, 0x26, 0x6D, 0x61, 0x70, 0x5F, 0x32, 0x2C, 0x20, 0x26, 0x6B, 0x65, 0x79, 0x2C, 0x20, 0x26, 0x66, 0x6F, 0x72, 0x74, 0x79, 0x5F, 0x74, 0x77, 0x6F, 0x2C, 0x20, 0x42, 0x50, 0x46, 0x5F, 0x41, 0x4E, 0x59, 0x29, 0x3B, 0x00, 0x09, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6E, 0x20, 0x30, 0x3B, 0x00, 0x63, 0x68, 0x61, 0x72, 0x00, 0x5F, 0x6C, 0x69, 0x63, 0x65, 0x6E, 0x73, 0x65, 0x00, 0x2E, 0x6D, 0x61, 0x70, 0x73, 0x00, 0x6C, 0x69, 0x63, 0x65, 0x6E, 0x73, 0x65, 0x00, ]; #[cfg(target_endian = "big")] let data: &[u8] = &[ 0xEB, 0x9F, 0x01, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xF0, 0x00, 0x00, 0x01, 0xF0, 0x00, 0x00, 0x01, 0xCC, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x01, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x05, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x19, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x1F, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x2C, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x00, 0x00, 0x32, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x45, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4A, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x4E, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x54, 0x00, 0x00, 0x00, 0x0B, 0x00, 0x00, 0x00, 0xC0, 0x00, 0x00, 0x00, 0x60, 0x0E, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x45, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4A, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x4E, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x54, 0x00, 0x00, 0x00, 0x0B, 0x00, 0x00, 0x00, 0xC0, 0x00, 0x00, 0x00, 0x66, 0x0E, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0F, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x6C, 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, 0x00, 0x70, 0x0C, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x01, 0xB0, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x01, 0xB5, 0x0E, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x01, 0xBE, 0x0F, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x0E, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x01, 0xC4, 0x0F, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x69, 0x6E, 0x74, 0x00, 0x5F, 0x5F, 0x41, 0x52, 0x52, 0x41, 0x59, 0x5F, 0x53, 0x49, 0x5A, 0x45, 0x5F, 0x54, 0x59, 0x50, 0x45, 0x5F, 0x5F, 0x00, 0x5F, 0x5F, 0x75, 0x33, 0x32, 0x00, 0x75, 0x6E, 0x73, 0x69, 0x67, 0x6E, 0x65, 0x64, 0x20, 0x69, 0x6E, 0x74, 0x00, 0x5F, 0x5F, 0x75, 0x36, 0x34, 0x00, 0x75, 0x6E, 0x73, 0x69, 0x67, 0x6E, 0x65, 0x64, 0x20, 0x6C, 0x6F, 0x6E, 0x67, 0x20, 0x6C, 0x6F, 0x6E, 0x67, 0x00, 0x74, 0x79, 0x70, 0x65, 0x00, 0x6B, 0x65, 0x79, 0x00, 0x76, 0x61, 0x6C, 0x75, 0x65, 0x00, 0x6D, 0x61, 0x78, 0x5F, 0x65, 0x6E, 0x74, 0x72, 0x69, 0x65, 0x73, 0x00, 0x6D, 0x61, 0x70, 0x5F, 0x31, 0x00, 0x6D, 0x61, 0x70, 0x5F, 0x32, 0x00, 0x63, 0x74, 0x78, 0x00, 0x62, 0x70, 0x66, 0x5F, 0x70, 0x72, 0x6F, 0x67, 0x00, 0x74, 0x72, 0x61, 0x63, 0x65, 0x70, 0x6F, 0x69, 0x6E, 0x74, 0x00, 0x2F, 0x76, 0x61, 0x72, 0x2F, 0x68, 0x6F, 0x6D, 0x65, 0x2F, 0x64, 0x61, 0x76, 0x65, 0x2F, 0x64, 0x65, 0x76, 0x2F, 0x61, 0x79, 0x61, 0x2D, 0x72, 0x73, 0x2F, 0x61, 0x79, 0x61, 0x2F, 0x74, 0x65, 0x73, 0x74, 0x2F, 0x69, 0x6E, 0x74, 0x65, 0x67, 0x72, 0x61, 0x74, 0x69, 0x6F, 0x6E, 0x2D, 0x65, 0x62, 0x70, 0x66, 0x2F, 0x73, 0x72, 0x63, 0x2F, 0x62, 0x70, 0x66, 0x2F, 0x6D, 0x75, 0x6C, 0x74, 0x69, 0x6D, 0x61, 0x70, 0x2D, 0x62, 0x74, 0x66, 0x2E, 0x62, 0x70, 0x66, 0x2E, 0x63, 0x00, 0x69, 0x6E, 0x74, 0x20, 0x62, 0x70, 0x66, 0x5F, 0x70, 0x72, 0x6F, 0x67, 0x28, 0x76, 0x6F, 0x69, 0x64, 0x20, 0x2A, 0x63, 0x74, 0x78, 0x29, 0x00, 0x09, 0x5F, 0x5F, 0x75, 0x33, 0x32, 0x20, 0x6B, 0x65, 0x79, 0x20, 0x3D, 0x20, 0x30, 0x3B, 0x00, 0x09, 0x5F, 0x5F, 0x75, 0x36, 0x34, 0x20, 0x74, 0x77, 0x65, 0x6E, 0x74, 0x79, 0x5F, 0x66, 0x6F, 0x75, 0x72, 0x20, 0x3D, 0x20, 0x32, 0x34, 0x3B, 0x00, 0x09, 0x5F, 0x5F, 0x75, 0x36, 0x34, 0x20, 0x66, 0x6F, 
0x72, 0x74, 0x79, 0x5F, 0x74, 0x77, 0x6F, 0x20, 0x3D, 0x20, 0x34, 0x32, 0x3B, 0x00, 0x20, 0x20, 0x20, 0x20, 0x62, 0x70, 0x66, 0x5F, 0x6D, 0x61, 0x70, 0x5F, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x5F, 0x65, 0x6C, 0x65, 0x6D, 0x28, 0x26, 0x6D, 0x61, 0x70, 0x5F, 0x31, 0x2C, 0x20, 0x26, 0x6B, 0x65, 0x79, 0x2C, 0x20, 0x26, 0x74, 0x77, 0x65, 0x6E, 0x74, 0x79, 0x5F, 0x66, 0x6F, 0x75, 0x72, 0x2C, 0x20, 0x42, 0x50, 0x46, 0x5F, 0x41, 0x4E, 0x59, 0x29, 0x3B, 0x00, 0x20, 0x20, 0x20, 0x20, 0x62, 0x70, 0x66, 0x5F, 0x6D, 0x61, 0x70, 0x5F, 0x75, 0x70, 0x64, 0x61, 0x74, 0x65, 0x5F, 0x65, 0x6C, 0x65, 0x6D, 0x28, 0x26, 0x6D, 0x61, 0x70, 0x5F, 0x32, 0x2C, 0x20, 0x26, 0x6B, 0x65, 0x79, 0x2C, 0x20, 0x26, 0x66, 0x6F, 0x72, 0x74, 0x79, 0x5F, 0x74, 0x77, 0x6F, 0x2C, 0x20, 0x42, 0x50, 0x46, 0x5F, 0x41, 0x4E, 0x59, 0x29, 0x3B, 0x00, 0x09, 0x72, 0x65, 0x74, 0x75, 0x72, 0x6E, 0x20, 0x30, 0x3B, 0x00, 0x63, 0x68, 0x61, 0x72, 0x00, 0x5F, 0x6C, 0x69, 0x63, 0x65, 0x6E, 0x73, 0x65, 0x00, 0x2E, 0x6D, 0x61, 0x70, 0x73, 0x00, 0x6C, 0x69, 0x63, 0x65, 0x6E, 0x73, 0x65, 0x00, ]; let btf_section = fake_section(EbpfSectionKind::Btf, ".BTF", data, None); obj.parse_section(btf_section).unwrap(); let map_section = fake_section(EbpfSectionKind::BtfMaps, ".maps", &[], None); obj.parse_section(map_section).unwrap(); let map = obj.maps.get("map_1").unwrap(); assert_matches!(map, Map::Btf(m) => { assert_eq!(m.def.key_size, 4); assert_eq!(m.def.value_size, 8); assert_eq!(m.def.max_entries, 1); }); } } aya-obj-0.2.1/src/programs/cgroup_sock.rs000064400000000000000000000016761046102023000164650ustar 00000000000000//! Cgroup socket programs. use crate::generated::bpf_attach_type; /// Defines where to attach a `CgroupSock` program. #[derive(Copy, Clone, Debug, Default)] pub enum CgroupSockAttachType { /// Called after the IPv4 bind events. PostBind4, /// Called after the IPv6 bind events. PostBind6, /// Attach to IPv4 connect events. #[default] SockCreate, /// Attach to IPv6 connect events. 
    SockRelease,
}

impl From<CgroupSockAttachType> for bpf_attach_type {
    fn from(s: CgroupSockAttachType) -> bpf_attach_type {
        match s {
            CgroupSockAttachType::PostBind4 => bpf_attach_type::BPF_CGROUP_INET4_POST_BIND,
            CgroupSockAttachType::PostBind6 => bpf_attach_type::BPF_CGROUP_INET6_POST_BIND,
            CgroupSockAttachType::SockCreate => bpf_attach_type::BPF_CGROUP_INET_SOCK_CREATE,
            CgroupSockAttachType::SockRelease => bpf_attach_type::BPF_CGROUP_INET_SOCK_RELEASE,
        }
    }
}

aya-obj-0.2.1/src/programs/cgroup_sock_addr.rs

//! Cgroup socket address programs.
use crate::generated::bpf_attach_type;

/// Defines where to attach a `CgroupSockAddr` program.
#[derive(Copy, Clone, Debug)]
pub enum CgroupSockAddrAttachType {
    /// Attach to IPv4 bind events.
    Bind4,
    /// Attach to IPv6 bind events.
    Bind6,
    /// Attach to IPv4 connect events.
    Connect4,
    /// Attach to IPv6 connect events.
    Connect6,
    /// Attach to IPv4 getpeername events.
    GetPeerName4,
    /// Attach to IPv6 getpeername events.
    GetPeerName6,
    /// Attach to IPv4 getsockname events.
    GetSockName4,
    /// Attach to IPv6 getsockname events.
    GetSockName6,
    /// Attach to IPv4 udp_sendmsg events.
    UDPSendMsg4,
    /// Attach to IPv6 udp_sendmsg events.
    UDPSendMsg6,
    /// Attach to IPv4 udp_recvmsg events.
    UDPRecvMsg4,
    /// Attach to IPv6 udp_recvmsg events.
    UDPRecvMsg6,
}

impl From<CgroupSockAddrAttachType> for bpf_attach_type {
    fn from(s: CgroupSockAddrAttachType) -> bpf_attach_type {
        match s {
            CgroupSockAddrAttachType::Bind4 => bpf_attach_type::BPF_CGROUP_INET4_BIND,
            CgroupSockAddrAttachType::Bind6 => bpf_attach_type::BPF_CGROUP_INET6_BIND,
            CgroupSockAddrAttachType::Connect4 => bpf_attach_type::BPF_CGROUP_INET4_CONNECT,
            CgroupSockAddrAttachType::Connect6 => bpf_attach_type::BPF_CGROUP_INET6_CONNECT,
            CgroupSockAddrAttachType::GetPeerName4 => bpf_attach_type::BPF_CGROUP_INET4_GETPEERNAME,
            CgroupSockAddrAttachType::GetPeerName6 => bpf_attach_type::BPF_CGROUP_INET6_GETPEERNAME,
            CgroupSockAddrAttachType::GetSockName4 => bpf_attach_type::BPF_CGROUP_INET4_GETSOCKNAME,
            CgroupSockAddrAttachType::GetSockName6 => bpf_attach_type::BPF_CGROUP_INET6_GETSOCKNAME,
            CgroupSockAddrAttachType::UDPSendMsg4 => bpf_attach_type::BPF_CGROUP_UDP4_SENDMSG,
            CgroupSockAddrAttachType::UDPSendMsg6 => bpf_attach_type::BPF_CGROUP_UDP6_SENDMSG,
            CgroupSockAddrAttachType::UDPRecvMsg4 => bpf_attach_type::BPF_CGROUP_UDP4_RECVMSG,
            CgroupSockAddrAttachType::UDPRecvMsg6 => bpf_attach_type::BPF_CGROUP_UDP6_RECVMSG,
        }
    }
}

aya-obj-0.2.1/src/programs/cgroup_sockopt.rs

//! Cgroup socket option programs.
use crate::generated::bpf_attach_type;

/// Defines where to attach a `CgroupSockopt` program.
#[derive(Copy, Clone, Debug)]
pub enum CgroupSockoptAttachType {
    /// Attach to GetSockopt.
    Get,
    /// Attach to SetSockopt.
    Set,
}

impl From<CgroupSockoptAttachType> for bpf_attach_type {
    fn from(s: CgroupSockoptAttachType) -> bpf_attach_type {
        match s {
            CgroupSockoptAttachType::Get => bpf_attach_type::BPF_CGROUP_GETSOCKOPT,
            CgroupSockoptAttachType::Set => bpf_attach_type::BPF_CGROUP_SETSOCKOPT,
        }
    }
}

aya-obj-0.2.1/src/programs/mod.rs

//! Program struct and type bindings.
pub mod cgroup_sock;
pub mod cgroup_sock_addr;
pub mod cgroup_sockopt;
mod types;
pub mod xdp;

pub use cgroup_sock::CgroupSockAttachType;
pub use cgroup_sock_addr::CgroupSockAddrAttachType;
pub use cgroup_sockopt::CgroupSockoptAttachType;
pub use xdp::XdpAttachType;

aya-obj-0.2.1/src/programs/types.rs

//! Program type bindings.
use crate::{
    generated::bpf_prog_type::{self, *},
    InvalidTypeBinding,
};

impl TryFrom<u32> for bpf_prog_type {
    type Error = InvalidTypeBinding<u32>;

    fn try_from(prog_type: u32) -> Result<Self, Self::Error> {
        Ok(match prog_type {
            x if x == BPF_PROG_TYPE_UNSPEC as u32 => BPF_PROG_TYPE_UNSPEC,
            x if x == BPF_PROG_TYPE_SOCKET_FILTER as u32 => BPF_PROG_TYPE_SOCKET_FILTER,
            x if x == BPF_PROG_TYPE_KPROBE as u32 => BPF_PROG_TYPE_KPROBE,
            x if x == BPF_PROG_TYPE_SCHED_CLS as u32 => BPF_PROG_TYPE_SCHED_CLS,
            x if x == BPF_PROG_TYPE_SCHED_ACT as u32 => BPF_PROG_TYPE_SCHED_ACT,
            x if x == BPF_PROG_TYPE_TRACEPOINT as u32 => BPF_PROG_TYPE_TRACEPOINT,
            x if x == BPF_PROG_TYPE_XDP as u32 => BPF_PROG_TYPE_XDP,
            x if x == BPF_PROG_TYPE_PERF_EVENT as u32 => BPF_PROG_TYPE_PERF_EVENT,
            x if x == BPF_PROG_TYPE_CGROUP_SKB as u32 => BPF_PROG_TYPE_CGROUP_SKB,
            x if x == BPF_PROG_TYPE_CGROUP_SOCK as u32 => BPF_PROG_TYPE_CGROUP_SOCK,
            x if x == BPF_PROG_TYPE_LWT_IN as u32 => BPF_PROG_TYPE_LWT_IN,
            x if x == BPF_PROG_TYPE_LWT_OUT as u32 => BPF_PROG_TYPE_LWT_OUT,
            x if x == BPF_PROG_TYPE_LWT_XMIT as u32 => BPF_PROG_TYPE_LWT_XMIT,
            x if x == BPF_PROG_TYPE_SOCK_OPS as u32 => BPF_PROG_TYPE_SOCK_OPS,
            x if x == BPF_PROG_TYPE_SK_SKB as u32 => BPF_PROG_TYPE_SK_SKB,
            x if x == BPF_PROG_TYPE_CGROUP_DEVICE as u32 => BPF_PROG_TYPE_CGROUP_DEVICE,
            x if x == BPF_PROG_TYPE_SK_MSG as u32 => BPF_PROG_TYPE_SK_MSG,
            x if x == BPF_PROG_TYPE_RAW_TRACEPOINT as u32 => BPF_PROG_TYPE_RAW_TRACEPOINT,
            x if x == BPF_PROG_TYPE_CGROUP_SOCK_ADDR as u32 => BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
            x if x == BPF_PROG_TYPE_LWT_SEG6LOCAL as u32 => BPF_PROG_TYPE_LWT_SEG6LOCAL,
            x if x == BPF_PROG_TYPE_LIRC_MODE2 as u32 => BPF_PROG_TYPE_LIRC_MODE2,
            x if x == BPF_PROG_TYPE_SK_REUSEPORT as u32 => BPF_PROG_TYPE_SK_REUSEPORT,
            x if x == BPF_PROG_TYPE_FLOW_DISSECTOR as u32 => BPF_PROG_TYPE_FLOW_DISSECTOR,
            x if x == BPF_PROG_TYPE_CGROUP_SYSCTL as u32 => BPF_PROG_TYPE_CGROUP_SYSCTL,
            x if x == BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE as u32 => {
                BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE
            }
            x if x == BPF_PROG_TYPE_CGROUP_SOCKOPT as u32 => BPF_PROG_TYPE_CGROUP_SOCKOPT,
            x if x == BPF_PROG_TYPE_TRACING as u32 => BPF_PROG_TYPE_TRACING,
            x if x == BPF_PROG_TYPE_STRUCT_OPS as u32 => BPF_PROG_TYPE_STRUCT_OPS,
            x if x == BPF_PROG_TYPE_EXT as u32 => BPF_PROG_TYPE_EXT,
            x if x == BPF_PROG_TYPE_LSM as u32 => BPF_PROG_TYPE_LSM,
            x if x == BPF_PROG_TYPE_SK_LOOKUP as u32 => BPF_PROG_TYPE_SK_LOOKUP,
            x if x == BPF_PROG_TYPE_SYSCALL as u32 => BPF_PROG_TYPE_SYSCALL,
            x if x == BPF_PROG_TYPE_NETFILTER as u32 => BPF_PROG_TYPE_NETFILTER,
            _ => return Err(InvalidTypeBinding { value: prog_type }),
        })
    }
}

aya-obj-0.2.1/src/programs/xdp.rs

//! XDP programs.
use crate::generated::bpf_attach_type;

/// Defines where to attach an `XDP` program.
#[derive(Copy, Clone, Debug)]
pub enum XdpAttachType {
    /// Attach to a network interface.
    Interface,
    /// Attach to a cpumap. Requires kernel 5.9 or later.
    CpuMap,
    /// Attach to a devmap. Requires kernel 5.8 or later.
    DevMap,
}

impl From<XdpAttachType> for bpf_attach_type {
    fn from(value: XdpAttachType) -> Self {
        match value {
            XdpAttachType::Interface => bpf_attach_type::BPF_XDP,
            XdpAttachType::CpuMap => bpf_attach_type::BPF_XDP_CPUMAP,
            XdpAttachType::DevMap => bpf_attach_type::BPF_XDP_DEVMAP,
        }
    }
}

aya-obj-0.2.1/src/relocation.rs

//! Program relocation handling.
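The `TryFrom<u32>` implementation in types.rs above relies on a match-guard idiom (`x if x == VARIANT as u32 => VARIANT`) because Rust has no safe built-in cast from an integer back to a `#[repr(u32)]` enum. A minimal standalone sketch of the same pattern follows; the toy names (`ToyProgType`, the variant values) are illustrative assumptions and not part of the crate:

```rust
// Standalone sketch (not aya-obj code): converting a raw u32 back into a
// #[repr(u32)] enum with match guards, returning the offending value on failure.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u32)]
enum ToyProgType {
    Unspec = 0,
    SocketFilter = 1,
    Kprobe = 2,
}

// Mirrors the shape of aya-obj's `InvalidTypeBinding` error: it carries the
// value that failed to bind to any known variant.
#[derive(Debug, PartialEq, Eq)]
struct InvalidTypeBinding {
    value: u32,
}

impl TryFrom<u32> for ToyProgType {
    type Error = InvalidTypeBinding;

    fn try_from(v: u32) -> Result<Self, Self::Error> {
        use ToyProgType::*;
        Ok(match v {
            // Each guard compares the raw value against a variant cast to u32.
            x if x == Unspec as u32 => Unspec,
            x if x == SocketFilter as u32 => SocketFilter,
            x if x == Kprobe as u32 => Kprobe,
            _ => return Err(InvalidTypeBinding { value: v }),
        })
    }
}

fn main() {
    assert_eq!(ToyProgType::try_from(1), Ok(ToyProgType::SocketFilter));
    assert_eq!(
        ToyProgType::try_from(99),
        Err(InvalidTypeBinding { value: 99 })
    );
    println!("ok");
}
```

The guard form is more verbose than a `match v { 0 => …, 1 => …, _ => … }` table, but it keeps each arm tied to the enum variant itself, so the numeric values never need to be repeated by hand.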
use alloc::{borrow::ToOwned, collections::BTreeMap, string::String};
use core::mem;

use log::debug;
use object::{SectionIndex, SymbolKind};

#[cfg(not(feature = "std"))]
use crate::std;
use crate::{
    generated::{
        bpf_insn, BPF_CALL, BPF_JMP, BPF_K, BPF_PSEUDO_CALL, BPF_PSEUDO_FUNC, BPF_PSEUDO_MAP_FD,
        BPF_PSEUDO_MAP_VALUE,
    },
    maps::Map,
    obj::{Function, Object},
    util::{HashMap, HashSet},
    EbpfSectionKind,
};

pub(crate) const INS_SIZE: usize = mem::size_of::<bpf_insn>();

/// The error type returned by [`Object::relocate_maps`] and [`Object::relocate_calls`]
#[derive(thiserror::Error, Debug)]
#[error("error relocating `{function}`")]
pub struct EbpfRelocationError {
    /// The function name
    function: String,
    #[source]
    /// The original error
    error: RelocationError,
}

/// Relocation failures
#[derive(Debug, thiserror::Error)]
pub enum RelocationError {
    /// Unknown symbol
    #[error("unknown symbol, index `{index}`")]
    UnknownSymbol {
        /// The symbol index
        index: usize,
    },

    /// Section not found
    #[error("section `{section_index}` not found, referenced by symbol `{}` #{symbol_index}", .symbol_name.clone().unwrap_or_default())]
    SectionNotFound {
        /// The section index
        section_index: usize,
        /// The symbol index
        symbol_index: usize,
        /// The symbol name
        symbol_name: Option<String>,
    },

    /// Unknown function
    #[error("function {address:#x} not found while relocating `{caller_name}`")]
    UnknownFunction {
        /// The function address
        address: u64,
        /// The caller name
        caller_name: String,
    },

    /// Unknown function
    #[error("program at section {section_index} and address {address:#x} was not found while relocating")]
    UnknownProgram {
        /// The function section index
        section_index: usize,
        /// The function address
        address: u64,
    },

    /// Invalid relocation offset
    #[error("invalid offset `{offset}` applying relocation #{relocation_number}")]
    InvalidRelocationOffset {
        /// The relocation offset
        offset: u64,
        /// The relocation number
        relocation_number: usize,
    },
}

#[derive(Debug, Copy, Clone)]
pub(crate) struct Relocation {
    // byte offset of the instruction to be relocated
    pub(crate) offset: u64,
    pub(crate) size: u8,
    // index of the symbol to relocate to
    pub(crate) symbol_index: usize,
}

#[derive(Debug, Clone)]
pub(crate) struct Symbol {
    pub(crate) index: usize,
    pub(crate) section_index: Option<usize>,
    pub(crate) name: Option<String>,
    pub(crate) address: u64,
    pub(crate) size: u64,
    pub(crate) is_definition: bool,
    pub(crate) kind: SymbolKind,
}

impl Object {
    /// Relocates the map references
    pub fn relocate_maps<'a, I: Iterator<Item = (&'a str, i32, &'a Map)>>(
        &mut self,
        maps: I,
        text_sections: &HashSet<usize>,
    ) -> Result<(), EbpfRelocationError> {
        let mut maps_by_section = HashMap::new();
        let mut maps_by_symbol = HashMap::new();
        for (name, fd, map) in maps {
            maps_by_section.insert(map.section_index(), (name, fd, map));
            if let Some(index) = map.symbol_index() {
                maps_by_symbol.insert(index, (name, fd, map));
            }
        }

        for function in self.functions.values_mut() {
            if let Some(relocations) = self.relocations.get(&function.section_index) {
                relocate_maps(
                    function,
                    relocations.values(),
                    &maps_by_section,
                    &maps_by_symbol,
                    &self.symbol_table,
                    text_sections,
                )
                .map_err(|error| EbpfRelocationError {
                    function: function.name.clone(),
                    error,
                })?;
            }
        }

        Ok(())
    }

    /// Relocates function calls
    pub fn relocate_calls(
        &mut self,
        text_sections: &HashSet<usize>,
    ) -> Result<(), EbpfRelocationError> {
        for (name, program) in self.programs.iter() {
            let linker = FunctionLinker::new(
                &self.functions,
                &self.relocations,
                &self.symbol_table,
                text_sections,
            );

            let func_orig =
                self.functions
                    .get(&program.function_key())
                    .ok_or_else(|| EbpfRelocationError {
                        function: name.clone(),
                        error: RelocationError::UnknownProgram {
                            section_index: program.section_index,
                            address: program.address,
                        },
                    })?;

            let func = linker.link(func_orig).map_err(|error| EbpfRelocationError {
                function: name.to_owned(),
                error,
            })?;

            self.functions.insert(program.function_key(), func);
        }

        Ok(())
    }
}

fn relocate_maps<'a, I: Iterator<Item = &'a Relocation>>(
    fun: &mut Function,
    relocations: I,
    maps_by_section: &HashMap<usize, (&str, i32, &Map)>,
    maps_by_symbol: &HashMap<usize, (&str, i32, &Map)>,
    symbol_table: &HashMap<usize, Symbol>,
    text_sections: &HashSet<usize>,
) -> Result<(), RelocationError> {
    let section_offset = fun.section_offset;
    let instructions = &mut fun.instructions;
    let function_size = instructions.len() * INS_SIZE;

    for (rel_n, rel) in relocations.enumerate() {
        let rel_offset = rel.offset as usize;
        if rel_offset < section_offset || rel_offset >= section_offset + function_size {
            // the relocation doesn't apply to this function
            continue;
        }

        // make sure that the relocation offset is properly aligned
        let ins_offset = rel_offset - section_offset;
        if ins_offset % INS_SIZE != 0 {
            return Err(RelocationError::InvalidRelocationOffset {
                offset: rel.offset,
                relocation_number: rel_n,
            });
        }
        let ins_index = ins_offset / INS_SIZE;

        // a map relocation points to the ELF section that contains the map
        let sym = symbol_table
            .get(&rel.symbol_index)
            .ok_or(RelocationError::UnknownSymbol {
                index: rel.symbol_index,
            })?;

        let Some(section_index) = sym.section_index else {
            // this is not a map relocation
            continue;
        };

        // calls and relocation to .text symbols are handled in a separate step
        if insn_is_call(&instructions[ins_index]) || text_sections.contains(&section_index) {
            continue;
        }

        let (_name, fd, map) = if let Some(m) = maps_by_symbol.get(&rel.symbol_index) {
            let map = &m.2;
            debug!(
                "relocating map by symbol index {:?}, kind {:?} at insn {ins_index} in section {}",
                map.symbol_index(),
                map.section_kind(),
                fun.section_index.0
            );
            debug_assert_eq!(map.symbol_index().unwrap(), rel.symbol_index);
            m
        } else {
            let Some(m) = maps_by_section.get(&section_index) else {
                debug!("failed relocating map by section index {}", section_index);
                return Err(RelocationError::SectionNotFound {
                    symbol_index: rel.symbol_index,
                    symbol_name: sym.name.clone(),
                    section_index,
                });
            };
            let map = &m.2;
            debug!(
                "relocating map by section index {}, kind {:?} at insn {ins_index} in section {}",
                map.section_index(),
                map.section_kind(),
                fun.section_index.0,
            );

            debug_assert_eq!(map.symbol_index(), None);
            debug_assert!(matches!(
                map.section_kind(),
                EbpfSectionKind::Bss | EbpfSectionKind::Data | EbpfSectionKind::Rodata
            ));
            m
        };
        debug_assert_eq!(map.section_index(), section_index);

        if !map.data().is_empty() {
            instructions[ins_index].set_src_reg(BPF_PSEUDO_MAP_VALUE as u8);
            instructions[ins_index + 1].imm = instructions[ins_index].imm + sym.address as i32;
        } else {
            instructions[ins_index].set_src_reg(BPF_PSEUDO_MAP_FD as u8);
        }
        instructions[ins_index].imm = *fd;
    }

    Ok(())
}

struct FunctionLinker<'a> {
    functions: &'a BTreeMap<(usize, u64), Function>,
    linked_functions: HashMap<u64, usize>,
    relocations: &'a HashMap<SectionIndex, HashMap<u64, Relocation>>,
    symbol_table: &'a HashMap<usize, Symbol>,
    text_sections: &'a HashSet<usize>,
}

impl<'a> FunctionLinker<'a> {
    fn new(
        functions: &'a BTreeMap<(usize, u64), Function>,
        relocations: &'a HashMap<SectionIndex, HashMap<u64, Relocation>>,
        symbol_table: &'a HashMap<usize, Symbol>,
        text_sections: &'a HashSet<usize>,
    ) -> FunctionLinker<'a> {
        FunctionLinker {
            functions,
            linked_functions: HashMap::new(),
            relocations,
            symbol_table,
            text_sections,
        }
    }

    fn link(mut self, program_function: &Function) -> Result<Function, RelocationError> {
        let mut fun = program_function.clone();
        // relocate calls in the program's main function. As relocation happens,
        // it will trigger linking in all the callees.
        self.relocate(&mut fun, program_function)?;

        // this now includes the program function plus all the other functions called during
        // execution
        Ok(fun)
    }

    fn link_function(
        &mut self,
        program: &mut Function,
        fun: &Function,
    ) -> Result<usize, RelocationError> {
        if let Some(fun_ins_index) = self.linked_functions.get(&fun.address) {
            return Ok(*fun_ins_index);
        };

        // append fun.instructions to the program and record that `fun.address` has been inserted
        // at `start_ins`. We'll use `start_ins` to do pc-relative calls.
        let start_ins = program.instructions.len();
        program.instructions.extend(&fun.instructions);
        debug!(
            "linked function `{}` at instruction {}",
            fun.name, start_ins
        );

        // link func and line info into the main program
        // the offset needs to be adjusted
        self.link_func_and_line_info(program, fun, start_ins)?;

        self.linked_functions.insert(fun.address, start_ins);

        // relocate `fun`, recursively linking in all the callees
        self.relocate(program, fun)?;

        Ok(start_ins)
    }

    fn relocate(&mut self, program: &mut Function, fun: &Function) -> Result<(), RelocationError> {
        let relocations = self.relocations.get(&fun.section_index);

        let n_instructions = fun.instructions.len();
        let start_ins = program.instructions.len() - n_instructions;

        debug!(
            "relocating program `{}` function `{}` size {}",
            program.name, fun.name, n_instructions
        );

        // process all the instructions. We can't only loop over relocations since we need to
        // patch pc-relative calls too.
        for ins_index in start_ins..start_ins + n_instructions {
            let ins = program.instructions[ins_index];
            let is_call = insn_is_call(&ins);

            let rel = relocations
                .and_then(|relocations| {
                    relocations
                        .get(&((fun.section_offset + (ins_index - start_ins) * INS_SIZE) as u64))
                })
                .and_then(|rel| {
                    // get the symbol for the relocation
                    self.symbol_table
                        .get(&rel.symbol_index)
                        .map(|sym| (rel, sym))
                })
                .filter(|(_rel, sym)| {
                    // only consider text relocations, data relocations are
                    // relocated in relocate_maps()
                    sym.kind == SymbolKind::Text
                        || sym
                            .section_index
                            .map(|section_index| self.text_sections.contains(&section_index))
                            .unwrap_or(false)
                });

            // not a call and not a text relocation, we don't need to do anything
            if !is_call && rel.is_none() {
                continue;
            }

            let (callee_section_index, callee_address) = if let Some((rel, sym)) = rel {
                let address = match sym.kind {
                    SymbolKind::Text => sym.address,
                    // R_BPF_64_32 this is a call
                    SymbolKind::Section if rel.size == 32 => {
                        sym.address + (ins.imm + 1) as u64 * INS_SIZE as u64
                    }
                    // R_BPF_64_64 this is a ld_imm64 text relocation
                    SymbolKind::Section if rel.size == 64 => sym.address + ins.imm as u64,
                    _ => todo!(), // FIXME: return an error here,
                };
                (sym.section_index.unwrap(), address)
            } else {
                // The caller and the callee are in the same ELF section and this is a pc-relative
                // call. Resolve the pc-relative imm to an absolute address.
                let ins_size = INS_SIZE as i64;
                (
                    fun.section_index.0,
                    (fun.section_offset as i64
                        + ((ins_index - start_ins) as i64) * ins_size
                        + (ins.imm + 1) as i64 * ins_size) as u64,
                )
            };

            debug!(
                "relocating {} to callee address {:#x} in section {} ({}) at instruction {ins_index}",
                if is_call { "call" } else { "reference" },
                callee_address,
                callee_section_index,
                if rel.is_some() { "relocation" } else { "pc-relative" },
            );

            // lookup and link the callee if it hasn't been linked already. `callee_ins_index` will
            // contain the instruction index of the callee inside the program.
            let callee = self
                .functions
                .get(&(callee_section_index, callee_address))
                .ok_or(RelocationError::UnknownFunction {
                    address: callee_address,
                    caller_name: fun.name.clone(),
                })?;

            debug!("callee is `{}`", callee.name);

            let callee_ins_index = self.link_function(program, callee)? as i32;

            let ins = &mut program.instructions[ins_index];
            let ins_index = ins_index as i32;
            ins.imm = callee_ins_index - ins_index - 1;
            debug!(
                "callee `{}` is at ins {callee_ins_index}, {} from current instruction {ins_index}",
                callee.name, ins.imm
            );
            if !is_call {
                ins.set_src_reg(BPF_PSEUDO_FUNC as u8);
            }
        }

        debug!(
            "finished relocating program `{}` function `{}`",
            program.name, fun.name
        );

        Ok(())
    }

    fn link_func_and_line_info(
        &mut self,
        program: &mut Function,
        fun: &Function,
        start: usize,
    ) -> Result<(), RelocationError> {
        let func_info = &fun.func_info.func_info;
        let func_info = func_info.iter().cloned().map(|mut info| {
            // `start` is the new instruction offset of `fun` within `program`
            info.insn_off = start as u32;
            info
        });
        program.func_info.func_info.extend(func_info);
        program.func_info.num_info = program.func_info.func_info.len() as u32;

        let line_info = &fun.line_info.line_info;
        if !line_info.is_empty() {
            // this is the original offset
            let original_start_off = line_info[0].insn_off;

            let line_info = line_info.iter().cloned().map(|mut info| {
                // rebase offsets on top of start, which is the offset of the
                // function in the program being linked
                info.insn_off = start as u32 + (info.insn_off - original_start_off);
                info
            });

            program.line_info.line_info.extend(line_info);
            program.line_info.num_info = program.func_info.func_info.len() as u32;
        }

        Ok(())
    }
}

fn insn_is_call(ins: &bpf_insn) -> bool {
    let klass = (ins.code & 0x07) as u32;
    let op = (ins.code & 0xF0) as u32;
    let src = (ins.code & 0x08) as u32;

    klass == BPF_JMP
        && op == BPF_CALL
        && src == BPF_K
        && ins.src_reg() as u32 == BPF_PSEUDO_CALL
        && ins.dst_reg() == 0
        && ins.off == 0
}

#[cfg(test)]
mod test {
    use alloc::{string::ToString, vec, vec::Vec};

    use super::*;
    use crate::maps::{BtfMap, LegacyMap};

    fn fake_sym(index: usize, section_index: usize, address: u64, name: &str, size: u64) -> Symbol {
        Symbol {
            index,
            section_index: Some(section_index),
            name: Some(name.to_string()),
            address,
            size,
            is_definition: false,
kind: SymbolKind::Data, } } fn ins(bytes: &[u8]) -> bpf_insn { unsafe { core::ptr::read_unaligned(bytes.as_ptr() as *const _) } } fn fake_legacy_map(symbol_index: usize) -> Map { Map::Legacy(LegacyMap { def: Default::default(), section_index: 0, section_kind: EbpfSectionKind::Undefined, symbol_index: Some(symbol_index), data: Vec::new(), }) } fn fake_btf_map(symbol_index: usize) -> Map { Map::Btf(BtfMap { def: Default::default(), section_index: 0, symbol_index, data: Vec::new(), }) } fn fake_func(name: &str, instructions: Vec) -> Function { Function { address: Default::default(), name: name.to_string(), section_index: SectionIndex(0), section_offset: Default::default(), instructions, func_info: Default::default(), line_info: Default::default(), func_info_rec_size: Default::default(), line_info_rec_size: Default::default(), } } #[test] fn test_single_legacy_map_relocation() { let mut fun = fake_func( "test", vec![ins(&[ 0x18, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ])], ); let symbol_table = HashMap::from([(1, fake_sym(1, 0, 0, "test_map", 0))]); let relocations = [Relocation { offset: 0x0, symbol_index: 1, size: 64, }]; let maps_by_section = HashMap::new(); let map = fake_legacy_map(1); let maps_by_symbol = HashMap::from([(1, ("test_map", 1, &map))]); relocate_maps( &mut fun, relocations.iter(), &maps_by_section, &maps_by_symbol, &symbol_table, &HashSet::new(), ) .unwrap(); assert_eq!(fun.instructions[0].src_reg(), BPF_PSEUDO_MAP_FD as u8); assert_eq!(fun.instructions[0].imm, 1); } #[test] fn test_multiple_legacy_map_relocation() { let mut fun = fake_func( "test", vec![ ins(&[ 0x18, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ]), ins(&[ 0x18, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ]), ], ); let symbol_table = HashMap::from([ (1, fake_sym(1, 0, 0, "test_map_1", 0)), (2, fake_sym(2, 0, 0, "test_map_2", 0)), ]); let 
relocations = [ Relocation { offset: 0x0, symbol_index: 1, size: 64, }, Relocation { offset: mem::size_of::() as u64, symbol_index: 2, size: 64, }, ]; let maps_by_section = HashMap::new(); let map_1 = fake_legacy_map(1); let map_2 = fake_legacy_map(2); let maps_by_symbol = HashMap::from([ (1, ("test_map_1", 1, &map_1)), (2, ("test_map_2", 2, &map_2)), ]); relocate_maps( &mut fun, relocations.iter(), &maps_by_section, &maps_by_symbol, &symbol_table, &HashSet::new(), ) .unwrap(); assert_eq!(fun.instructions[0].src_reg(), BPF_PSEUDO_MAP_FD as u8); assert_eq!(fun.instructions[0].imm, 1); assert_eq!(fun.instructions[1].src_reg(), BPF_PSEUDO_MAP_FD as u8); assert_eq!(fun.instructions[1].imm, 2); } #[test] fn test_single_btf_map_relocation() { let mut fun = fake_func( "test", vec![ins(&[ 0x18, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ])], ); let symbol_table = HashMap::from([(1, fake_sym(1, 0, 0, "test_map", 0))]); let relocations = [Relocation { offset: 0x0, symbol_index: 1, size: 64, }]; let maps_by_section = HashMap::new(); let map = fake_btf_map(1); let maps_by_symbol = HashMap::from([(1, ("test_map", 1, &map))]); relocate_maps( &mut fun, relocations.iter(), &maps_by_section, &maps_by_symbol, &symbol_table, &HashSet::new(), ) .unwrap(); assert_eq!(fun.instructions[0].src_reg(), BPF_PSEUDO_MAP_FD as u8); assert_eq!(fun.instructions[0].imm, 1); } #[test] fn test_multiple_btf_map_relocation() { let mut fun = fake_func( "test", vec![ ins(&[ 0x18, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ]), ins(&[ 0x18, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, ]), ], ); let symbol_table = HashMap::from([ (1, fake_sym(1, 0, 0, "test_map_1", 0)), (2, fake_sym(2, 0, 0, "test_map_2", 0)), ]); let relocations = [ Relocation { offset: 0x0, symbol_index: 1, size: 64, }, Relocation { offset: mem::size_of::() as u64, symbol_index: 2, size: 64, }, ]; let 
maps_by_section = HashMap::new(); let map_1 = fake_btf_map(1); let map_2 = fake_btf_map(2); let maps_by_symbol = HashMap::from([ (1, ("test_map_1", 1, &map_1)), (2, ("test_map_2", 2, &map_2)), ]); relocate_maps( &mut fun, relocations.iter(), &maps_by_section, &maps_by_symbol, &symbol_table, &HashSet::new(), ) .unwrap(); assert_eq!(fun.instructions[0].src_reg(), BPF_PSEUDO_MAP_FD as u8); assert_eq!(fun.instructions[0].imm, 1); assert_eq!(fun.instructions[1].src_reg(), BPF_PSEUDO_MAP_FD as u8); assert_eq!(fun.instructions[1].imm, 2); } } aya-obj-0.2.1/src/util.rs000064400000000000000000000007531046102023000132650ustar 00000000000000use core::{mem, slice}; #[cfg(feature = "std")] pub(crate) use std::collections::HashMap; #[cfg(feature = "std")] pub(crate) use std::collections::HashSet; #[cfg(not(feature = "std"))] pub(crate) use hashbrown::HashMap; #[cfg(not(feature = "std"))] pub(crate) use hashbrown::HashSet; /// bytes_of converts a to a byte slice pub(crate) unsafe fn bytes_of(val: &T) -> &[u8] { let size = mem::size_of::(); slice::from_raw_parts(slice::from_ref(val).as_ptr().cast(), size) }