(source archive: waypipe v0.9.1, commit 4b62e794b662bd7da2af7ce74adfd0fa10bf9dba)

==> waypipe-v0.9.1/.clang-format <==
# Only including options for C only
Language: Cpp
AlignAfterOpenBracket: DontAlign
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
AlignEscapedNewlines: Right
AlignOperands: true
AlignTrailingComments: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: All
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
  AfterControlStatement: false
  AfterEnum: false
  AfterFunction: false
  AfterStruct: false
  AfterUnion: false
  AfterExternBlock: false
  BeforeCatch: false
  BeforeElse: false
  IndentBraces: false
  SplitEmptyFunction: true
  SplitEmptyRecord: true
  SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Linux
BreakBeforeInheritanceComma: false
BreakBeforeTernaryOperators: true
BreakStringLiterals: false
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
CompactNamespaces: false
ConstructorInitializerAllOnOneLineOrOnePerLine: false
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 16
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
FixNamespaceComments: true
ForEachMacros:
  - foreach
  - Q_FOREACH
  - BOOST_FOREACH
IncludeBlocks: Preserve
IncludeCategories:
  - Regex: '^"(llvm|llvm-c|clang|clang-c)/'
    Priority: 2
  - Regex: '^(<|"(gtest|gmock|isl|json)/)'
    Priority: 3
  - Regex: '.*'
    Priority: 1
IncludeIsMainRegex: '(Test)?$'
IndentCaseLabels: false
IndentPPDirectives: None
IndentWidth: 8
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: true
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 19
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakString: 1000
PenaltyBreakTemplateDeclaration: 10
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 60
PointerAlignment: Right
ReflowComments: true
SortIncludes: true
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceBeforeParens: ControlStatements
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp03
TabWidth: 8
UseTab: ForContinuationAndIndentation

==> waypipe-v0.9.1/.gitignore <==
waypipe
build/
Doxyfile
html
latex
doc
test/matrix
/build-minimal/

==> waypipe-v0.9.1/CONTRIBUTING.md <==
Contributing guidelines
===============================================================================

## Formatting

To avoid needless time spent formatting things, this project has
autoformatting set up. Yes, it's often ugly, but after using it long enough
you'll forget that code can look nice.

Python scripts are formatted with black[0], and C code with clang-format[1].
The script `autoformat.sh` at the root of the directory should format all
source code files in the project.

[0] https://github.com/python/black
[1] https://clang.llvm.org/docs/ClangFormat.html

## Types

* Typedefs should be used only for function signatures, and never applied
  to structs.
* `short`, `long`, and `long long` should not be used, in favor of
  `int16_t` and `int64_t`.
* All wire-format structures should use fixed size types. It's safe to
  assume that buffers will never be larger than about 1 GB, so buffer sizes
  and indices do not require 64-bit types when used in protocol message
  headers.
* `printf` should be called with the correct format codes. For example,
  `%zd` for `ssize_t`, and the `PRIu32` macro for `uint32_t`.
* Avoid unnecessary casts.

## Comments

Explain precisely that which is not obvious. `/* ... */` is preferred to
`// ...` for longer comments; the leading `/*` and trailing `*/` do not need
lines of their own. Use Doxygen style (`/**`) for functions and structs that
need commenting, but not to the point where it hinders source code
readability. Waypipe is not a library.

## Memory and errors

All error conditions should be handled, including the errors produced by
allocation failures. (It is relatively easy to test for allocation failure
by `LD_PRELOAD`ing a library that redefines malloc et al.; see for instance
"mallocfail" and "failmalloc". `ulimit -v` may be less effective.)

Some errors are unrecoverable, and for those cases Waypipe should shut down
cleanly. For instance, if Waypipe cannot replicate a file descriptor, then an
application connected through it will almost certainly crash, and it's better
to have Waypipe exit instead. Other errors can safely be ignored -- if
fine-grained damage tracking fails, a sane fallback would be to assume that
an entire surface is damaged.
==> waypipe-v0.9.1/COPYING <==
Copyright © 2019 Manuel Stoeckl

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

The above is the version of the MIT "Expat" License used by X.org:
http://cgit.freedesktop.org/xorg/xserver/tree/COPYING

==> waypipe-v0.9.1/README.md <==
Waypipe
================================================================================

`waypipe` is a proxy for Wayland[0] clients. It forwards Wayland messages and
serializes changes to shared memory buffers over a single socket. This makes
application forwarding similar to `ssh -X` [1] feasible.

[0] [https://wayland.freedesktop.org/](https://wayland.freedesktop.org/)
[1] [https://wiki.archlinux.org/title/OpenSSH#X11_forwarding](https://wiki.archlinux.org/title/OpenSSH#X11_forwarding)

## Usage

`waypipe` should be installed on both the local and remote computers.
There is a user-friendly command line pattern which prefixes a call to `ssh`
and automatically sets up a reverse tunnel for protocol data. For example,

    waypipe ssh user@theserver weston-terminal

will run `ssh`, connect to `theserver`, and remotely run `weston-terminal`,
using local and remote `waypipe` processes to synchronize the shared memory
buffers used by Wayland clients between both computers. Command line arguments
before `ssh` apply only to `waypipe`; those after `ssh` belong to `ssh`.

Alternatively, one can launch the local and remote processes by hand, with the
following set of shell commands:

    /usr/bin/waypipe -s /tmp/socket-local client &
    ssh -R /tmp/socket-remote:/tmp/socket-local -t user@theserver \
        /usr/bin/waypipe -s /tmp/socket-remote server -- \
        /usr/bin/weston-terminal
    kill %1

It's possible to set up the local and remote processes so that, when the
connection between the sockets used by each end breaks, one can create a new
forwarded socket on the remote side and reconnect the two processes. For a
more detailed example, see the man page.

## Installing

Build with meson[0]. A typical incantation is

    cd /path/to/waypipe/ && cd ..
    mkdir build-waypipe
    meson --buildtype debugoptimized waypipe build-waypipe
    ninja -C build-waypipe install

Core build requirements:

* meson (build, >= 0.47.
with dependencies `ninja`, `pkg-config`, `python3`)
* C compiler

Optional dependencies:

* liblz4 (for fast compression, >=1.7.0)
* libzstd (for slower compression, >= 0.4.6)
* libgbm (to support programs using OpenGL via DMABUFs)
* libdrm (same as for libgbm)
* ffmpeg (>=3.1, needs avcodec/avutil/swscale for lossy video encoding)
* libva (for hardware video encoding and decoding)
* scdoc (to generate a man page)
* sys/sdt.h (to provide static tracepoints for profiling)
* ssh (runtime, OpenSSH >= 6.7, for Unix domain socket forwarding)
* libx264 (ffmpeg runtime, for software video decoding and encoding)

[0] [https://mesonbuild.com/](https://mesonbuild.com/)
[1] [https://git.sr.ht/~sircmpwn/scdoc](https://git.sr.ht/~sircmpwn/scdoc)

## Reporting issues

Waypipe is developed at [0]; file bug reports or submit patches here.

In general, if a program does not work properly under Waypipe, it is a bug
worth reporting. If possible, before doing so ensure both computers are using
the most recently released version of Waypipe (or are built from git master).

A workaround that may help for some programs using OpenGL or Vulkan is to run
Waypipe with the `--no-gpu` flag, which may force them to use software
rendering and shared memory buffers. (Please still file a bug.)

Some programs may require specific environment variable settings or command
line flags to run remotely; a few examples are given in the man page[1].

Useful information for bug reports includes:

* If a Waypipe process has crashed on either end of the connection, a full
  stack trace, with debug symbols. (In gdb, `bt full`).
* If the program uses OpenGL or Vulkan, the graphics cards and drivers on
  both computers.
* The output of `waypipe --version` on both ends of the connection
* Logs when Waypipe is run with the `--debug` flag, or when the program is
  run with the environment variable setting `WAYLAND_DEBUG=1`.
* Screenshots of any visual glitches.
[0] [https://gitlab.freedesktop.org/mstoeckl/waypipe/](https://gitlab.freedesktop.org/mstoeckl/waypipe/)
[1] [https://gitlab.freedesktop.org/mstoeckl/waypipe/-/blob/master/waypipe.scd](https://gitlab.freedesktop.org/mstoeckl/waypipe/-/blob/master/waypipe.scd)

## Technical Limitations

Waypipe does not have a full view of the Wayland protocol. It includes a
compiled form of the base protocol and several extension protocols, but is
not able to parse all messages that the programs it connects send.
Fortunately, the Wayland wire protocol is partially self-describing, so
Waypipe can parse the messages it needs (those related to resources shared
with file descriptors) while ignoring the rest. This makes Waypipe partially
forward-compatible: if a future protocol comes out about details (for
example, about window positioning) which do not require that file descriptors
be sent, then applications will be able to use that protocol even with older
versions of Waypipe.

The tradeoff to allowing messages that Waypipe can not parse is that Waypipe
can only make minor modifications to the wire protocol. In particular, adding
or removing any Wayland protocol objects would require changing all messages
that refer to them, including those messages that Waypipe does not parse.
This precludes, for example, global object deduplication tricks that could
reduce startup time for complicated applications.

Shared memory buffer updates, including those for the contents of windows,
are tracked by keeping a "mirror" copy of the buffer that represents the view
which the opposing instance of Waypipe has. This way, Waypipe can send only
the regions of the buffer that have changed relative to the remote copy.
This is more efficient than resending the entire buffer on every update,
which is good for applications with reasonably static user interfaces (like
a text editor or email client).
However, with programs with animations where the interaction latency matters
(like games or certain audio tools), major window updates will unavoidably
produce a lag spike. The additional memory cost of keeping mirrors is
moderate.

The video encoding option for DMABUFs currently maintains a video stream for
each buffer that is used by a window surface. Since surfaces typically rotate
between a small number of buffers, a video encoded window will appear to
flicker as it switches rapidly between the underlying buffers, each of whose
video streams has different encoding artifacts.

The `zwp_linux_explicit_synchronization_v1` Wayland protocol is currently
not supported.

Waypipe does not work between computers that use different byte orders.

==> waypipe-v0.9.1/autoformat.sh <==
#!/bin/sh
set -e
black -q test/*.py protocols/*.py
clang-format -style=file --assume-filename=C -i src/*.h src/*.c test/*.c test/*.h

==> waypipe-v0.9.1/meson.build <==
project(
	'waypipe',
	'c',
	license: 'MIT/Expat',
	meson_version: '>=0.47.0',
	default_options: [
		'c_std=c11',
		'warning_level=3',
		'werror=true',
	],
	version: '0.9.1',
)

# DEFAULT_SOURCE implies POSIX_C_SOURCE 200809L + extras like CMSG_LEN
# requires glibc >= 2.19 (2014), freebsd libc (since 2016?), musl >= 1.15 (2014)
add_project_arguments('-D_DEFAULT_SOURCE', language: 'c')
# Sometimes ignoring the result of read()/write() is the right thing to do
add_project_arguments('-Wno-unused-result', language: 'c')

cc = meson.get_compiler('c')

config_data = configuration_data()

# mention version
version = '"@0@"'.format(meson.project_version())
git = find_program('git', native: true, required: false)
if git.found()
	dir_arg = '--git-dir=@0@/.git'.format(meson.source_root())
	commit = run_command([git, dir_arg, 'rev-parse', '--verify', '-q', 'HEAD'])
	if commit.returncode() == 0
		version = '"@0@ (commit
@1@)"'.format(meson.project_version(), commit.stdout().strip())
	endif
endif
config_data.set('WAYPIPE_VERSION', version)

# Make build reproducible if possible
python3 = import('python').find_installation()
prefix_finder = 'import os.path; print(os.path.join(os.path.relpath(\'@0@\', \'@1@\'),\'\'))'
r = run_command(python3, '-c', prefix_finder.format(meson.source_root(), meson.build_root()))
relative_dir = r.stdout().strip()
if cc.has_argument('-fmacro-prefix-map=/prefix/to/hide=')
	add_project_arguments(
		'-fmacro-prefix-map=@0@='.format(relative_dir),
		language: 'c',
	)
else
	add_project_arguments(
		'-DWAYPIPE_REL_SRC_DIR="@0@"'.format(relative_dir),
		language: 'c',
	)
endif

libgbm = dependency('gbm', required: get_option('with_dmabuf'))
libdrm = dependency('libdrm', required: get_option('with_dmabuf'))
if libgbm.found() and libdrm.found()
	config_data.set('HAS_DMABUF', 1, description: 'Support DMABUF replication')
	has_dmabuf = true
else
	has_dmabuf = false
endif

pthreads = dependency('threads')
rt = cc.find_library('rt')

# XXX dtrace -G (Solaris, FreeBSD, NetBSD) isn't supported yet
is_linux = host_machine.system() == 'linux'
is_darwin = host_machine.system() == 'darwin'
if (is_linux or is_darwin) and get_option('with_systemtap') and cc.has_header('sys/sdt.h')
	config_data.set('HAS_USDT', 1, description: 'Enable static trace probes')
endif

has_flag_to_host = '''
// linux/vm_sockets.h doesn't compile on its own
// "invalid application of 'sizeof' to incomplete type 'struct sockaddr'"
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#ifndef VMADDR_FLAG_TO_HOST
#error
#endif
int main(void) { return 0; }
'''
if is_linux and cc.has_header('linux/vm_sockets.h') and cc.compiles(has_flag_to_host, name: 'has VMADDR_FLAG_TO_HOST')
	config_data.set('HAS_VSOCK', 1, description: 'Enable VM Sockets (VSOCK)')
endif

liblz4 = dependency('liblz4', version: '>=1.7.0', required: get_option('with_lz4'))
if liblz4.found()
	config_data.set('HAS_LZ4', 1, description: 'Enable LZ4 compression')
endif
libzstd = dependency('libzstd',
		version: '>=0.4.6', required: get_option('with_zstd'))
if libzstd.found()
	config_data.set('HAS_ZSTD', 1, description: 'Enable Zstd compression')
endif

libavcodec = dependency('libavcodec', required: get_option('with_video'))
libavutil = dependency('libavutil', required: get_option('with_video'))
libswscale = dependency('libswscale', required: get_option('with_video'))
libva = dependency('libva', required: get_option('with_vaapi'))
if libavcodec.found() and libavutil.found() and libswscale.found()
	config_data.set('HAS_VIDEO', 1, description: 'Enable video (de)compression')
	if libva.found()
		config_data.set('HAS_VAAPI', 1, description: 'Enable hardware video (de)compression with VAAPI')
	endif
endif

waypipe_includes = [include_directories('protocols'), include_directories('src')]
if libdrm.found()
	waypipe_includes += include_directories(libdrm.get_pkgconfig_variable('includedir'))
endif

subdir('protocols')
subdir('src')
subdir('test')

scdoc = dependency('scdoc', version: '>=1.9.4', native: true, required: get_option('man-pages'))
if scdoc.found()
	scdoc_prog = find_program(scdoc.get_pkgconfig_variable('scdoc'), native: true)
	sh = find_program('sh', native: true)
	mandir = get_option('mandir')
	custom_target(
		'waypipe.1',
		input: 'waypipe.scd',
		output: 'waypipe.1',
		command: [
			sh, '-c', '@0@ < @INPUT@ > @1@'.format(scdoc_prog.path(), 'waypipe.1')
		],
		install: true,
		install_dir: '@0@/man1'.format(mandir)
	)
endif

==> waypipe-v0.9.1/meson_options.txt <==
option('man-pages', type: 'feature', value: 'auto', description: 'Generate and install man pages')
option('with_video', type : 'feature', value : 'auto', description : 'Link with ffmpeg libraries and provide a command line option to display all buffers using a video stream')
option('with_dmabuf', type : 'feature', value : 'auto', description : 'Support DMABUFs, the file descriptors used to exchange data for e.g.
OpenGL applications')
option('with_lz4', type : 'feature', value : 'auto', description : 'Support LZ4 as a compression mechanism')
option('with_zstd', type : 'feature', value : 'auto', description : 'Support ZStandard as a compression mechanism')
option('with_vaapi', type : 'feature', value : 'auto', description : 'Link with libva and use VAAPI to perform hardware video output color space conversions on GPU')
option('with_systemtap', type: 'boolean', value: true, description: 'Enable tracing using sdt and provide static tracepoints for profiling')
# It is recommended to keep these on; Waypipe will automatically select the highest available instruction set at runtime
option('with_avx512f', type: 'boolean', value: true, description: 'Compile with support for AVX512f SIMD instructions')
option('with_avx2', type: 'boolean', value: true, description: 'Compile with support for AVX2 SIMD instructions')
option('with_sse3', type: 'boolean', value: true, description: 'Compile with support for SSE3 SIMD instructions')
option('with_neon_opts', type: 'boolean', value: true, description: 'Compile with support for ARM64 neon instructions')

==> waypipe-v0.9.1/minimal_build.sh <==
#!/bin/sh
set -e
echo "This script is a backup build system in case meson/ninja are unavailable."
echo "No optional features or optimizations are included. Waypipe will be slow."
echo "Requirements: python3, gcc, libc+pthreads"
echo "Enter to continue, interrupt to exit."
read unused
mkdir -p build-minimal
cd build-minimal
echo "Generating code..."
python3 ../protocols/symgen.py data ../protocols/function_list.txt protocols.c \
	../protocols/*.xml
python3 ../protocols/symgen.py header ../protocols/function_list.txt protocols.h \
	../protocols/*.xml
echo '#define WAYPIPE_VERSION "minimal"' > config-waypipe.h
echo "Compiling..."
gcc -D_DEFAULT_SOURCE -Os -I. \
	-I../protocols/ -lpthread -o waypipe protocols.c \
	../src/bench.c ../src/client.c ../src/dmabuf.c ../src/handlers.c \
	../src/interval.c ../src/kernel.c ../src/mainloop.c ../src/parsing.c \
	../src/platform.c ../src/server.c ../src/shadow.c ../src/util.c \
	../src/video.c ../src/waypipe.c
cd ..
echo "Done. See ./build-minimal/waypipe"

==> waypipe-v0.9.1/protocols/function_list.txt <==
gtk_primary_selection_offer_req_receive
gtk_primary_selection_source_evt_send
wl_buffer_evt_release
wl_data_offer_req_receive
wl_data_source_evt_send
wl_display_evt_delete_id
wl_display_evt_error
wl_display_req_get_registry
wl_display_req_sync
wl_drm_evt_device
wl_drm_req_create_prime_buffer
wl_keyboard_evt_keymap
wl_registry_evt_global
wl_registry_evt_global_remove
wl_registry_req_bind
wl_shm_req_create_pool
wl_shm_pool_req_create_buffer
wl_shm_pool_req_resize
wl_surface_req_attach
wl_surface_req_commit
wl_surface_req_damage
wl_surface_req_damage_buffer
wl_surface_req_set_buffer_transform
wl_surface_req_set_buffer_scale
wp_presentation_evt_clock_id
wp_presentation_feedback_evt_presented
wp_presentation_req_feedback
xdg_toplevel_req_set_title
zwlr_data_control_offer_v1_req_receive
zwlr_data_control_source_v1_evt_send
zwlr_export_dmabuf_frame_v1_evt_frame
zwlr_export_dmabuf_frame_v1_evt_object
zwlr_export_dmabuf_frame_v1_evt_ready
zwlr_gamma_control_v1_req_set_gamma
zwlr_screencopy_frame_v1_evt_ready
zwlr_screencopy_frame_v1_req_copy
zwp_linux_dmabuf_feedback_v1_evt_done
zwp_linux_dmabuf_feedback_v1_evt_format_table
zwp_linux_dmabuf_feedback_v1_evt_main_device
zwp_linux_dmabuf_feedback_v1_evt_tranche_done
zwp_linux_dmabuf_feedback_v1_evt_tranche_target_device
zwp_linux_dmabuf_feedback_v1_evt_tranche_formats
zwp_linux_dmabuf_feedback_v1_evt_tranche_flags
zwp_linux_buffer_params_v1_evt_created
zwp_linux_buffer_params_v1_req_add
zwp_linux_buffer_params_v1_req_create
zwp_linux_buffer_params_v1_req_create_immed
zwp_linux_dmabuf_v1_evt_modifier
zwp_linux_dmabuf_v1_req_get_default_feedback
zwp_linux_dmabuf_v1_req_get_surface_feedback
zwp_primary_selection_offer_v1_req_receive
zwp_primary_selection_source_v1_evt_send

==> waypipe-v0.9.1/protocols/gtk-primary-selection.xml <==
Copyright © 2015, 2016 Red Hat

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

This protocol provides the ability to have a primary selection device to
match that of the X server. This primary selection is a shortcut to the
common clipboard selection, where text just needs to be selected in order to
allow copying it elsewhere. The de facto way to perform this action is the
middle mouse button, although it is not limited to this one.
Clients wishing to honor primary selection should create a primary selection
source and set it as the selection through
wp_primary_selection_device.set_selection whenever the text selection
changes. In order to minimize calls in pointer-driven text selection, it
should happen only once after the operation finished. Similarly, a NULL
source should be set when text is unselected.

wp_primary_selection_offer objects are first announced through the
wp_primary_selection_device.data_offer event. Immediately after this event,
the primary data offer will emit wp_primary_selection_offer.offer events to
let the client know of the mime types being offered.

When the primary selection changes, the client with the keyboard focus will
receive wp_primary_selection_device.selection events. Only the client with
the keyboard focus will receive such events with a non-NULL
wp_primary_selection_offer. Across keyboard focus changes, previously focused
clients will receive wp_primary_selection_device.selection events with a NULL
wp_primary_selection_offer.

In order to request the primary selection data, the client must pass a recent
serial pertaining to the press event that is triggering the operation; if the
compositor deems the serial valid and recent, the
wp_primary_selection_source.send event will happen in the other end to let
the transfer begin. The client owning the primary selection should write the
requested data, and close the file descriptor immediately.

If the primary selection owner client disappeared during the transfer, the
client reading the data will receive a wp_primary_selection_device.selection
event with a NULL wp_primary_selection_offer; the client should take this as
a hint to finish the reads related to the no longer existing offer. The
primary selection owner should be checking for errors during writes, merely
cancelling the ongoing transfer if any happened.

The primary selection device manager is a singleton global object that
provides access to the primary selection.
It allows the creation of wp_primary_selection_source objects, as well as
retrieving the per-seat wp_primary_selection_device objects.

Create a new primary selection source.

Create a new data device for a given seat.

Destroy the primary selection device manager.

Replaces the current selection. The previous owner of the primary selection
will receive a wp_primary_selection_source.cancelled event. To unset the
selection, set the source to NULL.

Introduces a new wp_primary_selection_offer object that may be used to
receive the current primary selection. Immediately following this event, the
new wp_primary_selection_offer object will send
wp_primary_selection_offer.offer events to describe the offered mime types.

The wp_primary_selection_device.selection event is sent to notify the client
of a new primary selection. This event is sent after the
wp_primary_selection.data_offer event introducing this object, and after the
offer has announced its mimetypes through wp_primary_selection_offer.offer.
The data_offer is valid until a new offer or NULL is received or until the
client loses keyboard focus. The client must destroy the previous selection
data_offer, if any, upon receiving this event.

Destroy the primary selection device.

A wp_primary_selection_offer represents an offer to transfer the contents of
the primary selection clipboard to the client. Similar to wl_data_offer, the
offer also describes the mime types that the data can be converted to and
provides the mechanisms for transferring the data directly to the client.

To transfer the contents of the primary selection clipboard, the client
issues this request and indicates the mime type that it wants to receive. The
transfer happens through the passed file descriptor (typically created with
the pipe system call). The source client writes the data in the mime type
representation requested and then closes the file descriptor.
The receiving client reads from the read end of the pipe until EOF and closes
its end, at which point the transfer is complete.

Destroy the primary selection offer.

Sent immediately after announcing the wp_primary_selection_offer through
wp_primary_selection_device.data_offer. One event is sent per offered mime
type.

The source side of a wp_primary_selection_offer; it provides a way to
describe the offered data and respond to requests to transfer the requested
contents of the primary selection clipboard.

This request adds a mime type to the set of mime types advertised to targets.
Can be called several times to offer multiple types.

Destroy the primary selection source.

Request for the current primary selection contents from the client. Send the
specified mime type over the passed file descriptor, then close it.

This primary selection source is no longer valid. The client should clean up
and destroy this primary selection source.

==> waypipe-v0.9.1/protocols/input-method-unstable-v2.xml <==
Copyright © 2008-2011 Kristian Høgsberg
Copyright © 2010-2011 Intel Corporation
Copyright © 2012-2013 Collabora, Ltd.
Copyright © 2012, 2013 Intel Corporation
Copyright © 2015, 2016 Jan Arne Petersen
Copyright © 2017, 2018 Red Hat, Inc.
Copyright © 2018 Purism SPC

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next
paragraph) shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

This protocol allows applications to act as input methods for compositors. An
input method context is used to manage the state of the input method. Text
strings are UTF-8 encoded, their indices and lengths are in bytes. This
document adheres to RFC 2119 when using words like "must", "should", "may",
etc.

Warning! The protocol described in this file is experimental and backward
incompatible changes may be made. Backward compatible changes may be added
together with the corresponding interface version bump. Backward incompatible
changes are done by bumping the version number in the protocol and interface
names and resetting the interface version. Once the protocol is to be
declared stable, the 'z' prefix and the version number in the protocol and
interface names are removed and the interface version number is reset.

An input method object allows clients to compose text. The object connects
the client to a text input in an application, and lets the client serve as an
input method for a seat.

The zwp_input_method_v2 object can occupy two distinct states: active and
inactive. In the active state, the object is associated to and communicates
with a text input. In the inactive state, there is no associated text input,
and the only communication is with the compositor. Initially, the input
method is in the inactive state.

Requests issued in the inactive state must be accepted by the compositor.
Because of the serial mechanism, and the state reset on activate event, they will not have any effect on the state of the next text input. There must be no more than one input method object per seat. Notification that a text input focused on this seat requested the input method to be activated. This event serves the purpose of providing the compositor with an active input method. This event resets all state associated with previous enable, disable, surrounding_text, text_change_cause, and content_type events, as well as the state associated with set_preedit_string, commit_string, and delete_surrounding_text requests. In addition, it marks the zwp_input_method_v2 object as active, and makes any existing zwp_input_popup_surface_v2 objects visible. The surrounding_text, and content_type events must follow before the next done event if the text input supports the respective functionality. State set with this event is double-buffered. It will get applied on the next zwp_input_method_v2.done event, and stay valid until changed. Notification that no focused text input currently needs an active input method on this seat. This event marks the zwp_input_method_v2 object as inactive. The compositor must make all existing zwp_input_popup_surface_v2 objects invisible until the next activate event. State set with this event is double-buffered. It will get applied on the next zwp_input_method_v2.done event, and stay valid until changed. Updates the surrounding plain text around the cursor, excluding the preedit text. If any preedit text is present, it is replaced with the cursor for the purpose of this event. The argument text is a buffer containing the surrounding text, and must include the cursor position, and the complete selection. It should contain additional characters before and after these. There is a maximum length of wayland messages, so text can not be longer than 4000 bytes. cursor is the byte offset of the cursor within the text buffer.
anchor is the byte offset of the selection anchor within the text buffer. If there is no selected text, anchor must be the same as cursor. If this event does not arrive before the first done event, the input method may assume that the text input does not support this functionality and ignore following surrounding_text events. Values set with this event are double-buffered. They will get applied and set to initial values on the next zwp_input_method_v2.done event. The initial state for affected fields is empty, meaning that the text input does not support sending surrounding text. If the empty values get applied, subsequent attempts to change them may have no effect. Tells the input method why the text surrounding the cursor changed. Whenever the client detects an external change in text, cursor, or anchor position, it must issue this request to the compositor. This request is intended to give the input method a chance to update the preedit text in an appropriate way, e.g. by removing it when the user starts typing with a keyboard. cause describes the source of the change. The value set with this event is double-buffered. It will get applied and set to its initial value on the next zwp_input_method_v2.done event. The initial value of cause is input_method. Indicates the content type and hint for the current zwp_input_method_v2 instance. Values set with this event are double-buffered. They will get applied on the next zwp_input_method_v2.done event. The initial value for hint is none, and the initial value for purpose is normal. Atomically applies state changes recently sent to the client. The done event establishes and updates the state of the client, and must be issued after any changes to apply them. Text input state (content purpose, content hint, surrounding text, and change cause) is conceptually double-buffered within an input method context. Events modify the pending state, as opposed to the current state in use by the input method. 
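The pending/current split described above can be sketched in C: event handlers only write into a pending copy of the state, and only the done event atomically promotes it. The names below (im_state, im_context, handle_*) are illustrative, not part of the protocol or of waypipe.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the double-buffered text-input state. */
struct im_state {
	uint32_t content_hint;
	uint32_t content_purpose;
	char surrounding[4000]; /* protocol caps text at 4000 bytes */
	uint32_t change_cause;
};

struct im_context {
	struct im_state pending; /* modified by incoming events */
	struct im_state current; /* what the input method acts on */
	uint32_t done_count;     /* number of done events seen so far */
};

/* An event handler only touches the pending copy... */
static void handle_content_type(struct im_context *ctx, uint32_t hint,
		uint32_t purpose)
{
	ctx->pending.content_hint = hint;
	ctx->pending.content_purpose = purpose;
}

/* ...and done atomically replaces the current state with it. */
static void handle_done(struct im_context *ctx)
{
	ctx->current = ctx->pending;
	ctx->done_count++;
}
```

Tracking done_count also gives the input method the serial it must later echo in its commit request.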
A done event atomically applies all pending state, replacing the current state. After done, the new pending state is as documented for each related request. Events must be applied in the order of arrival. Neither current nor pending state is modified unless noted otherwise. Send the commit string text for insertion to the application. Inserts a string at current cursor position (see commit event sequence). The string to commit could be either just a single character after a key press or the result of some composing. The argument text is a buffer containing the string to insert. There is a maximum length of wayland messages, so text can not be longer than 4000 bytes. Values set with this event are double-buffered. They must be applied and reset to initial on the next zwp_input_method_v2.commit request. The initial value of text is an empty string. Send the pre-edit string text to the application text input. Place a new composing text (pre-edit) at the current cursor position. Any previously set composing text must be removed. Any previously existing selected text must be removed. The cursor is moved to a new position within the preedit string. The argument text is a buffer containing the preedit string. There is a maximum length of wayland messages, so text can not be longer than 4000 bytes. The arguments cursor_begin and cursor_end are counted in bytes relative to the beginning of the submitted string buffer. Cursor should be hidden by the text input when both are equal to -1. cursor_begin indicates the beginning of the cursor. cursor_end indicates the end of the cursor. It may be equal to or different from cursor_begin. Values set with this event are double-buffered. They must be applied on the next zwp_input_method_v2.commit request. The initial value of text is an empty string. The initial values of cursor_begin and cursor_end are both 0. Remove the surrounding text.
before_length and after_length are the number of bytes before and after the current cursor index (excluding the preedit text) to delete. If any preedit text is present, it is replaced with the cursor for the purpose of this event. In effect before_length is counted from the beginning of preedit text, and after_length from its end (see commit event sequence). Values set with this event are double-buffered. They must be applied and reset to initial on the next zwp_input_method_v2.commit request. The initial values of both before_length and after_length are 0. Apply state changes from commit_string, set_preedit_string and delete_surrounding_text requests. The state relating to these events is double-buffered, and each one modifies the pending state. This request replaces the current state with the pending state. The connected text input is expected to proceed by evaluating the changes in the following order: 1. Replace existing preedit string with the cursor. 2. Delete requested surrounding text. 3. Insert commit string with the cursor at its end. 4. Calculate surrounding text to send. 5. Insert new preedit text in cursor position. 6. Place cursor inside preedit text. The serial number reflects the last state of the zwp_input_method_v2 object known to the client. The value of the serial argument must be equal to the number of done events already issued by that object. When the compositor receives a commit request with a serial different than the number of past done events, it must proceed as normal, except it should not change the current state of the zwp_input_method_v2 object. Creates a new zwp_input_popup_surface_v2 object wrapping a given surface. The surface gets assigned the "input_popup" role. If the surface already has an assigned role, the compositor must issue a protocol error. Allow an input method to receive hardware keyboard input and process key events to generate text events (with pre-edit) over the wire. 
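The first three steps of the commit sequence can be sketched on a plain byte buffer: the preedit is assumed to be already replaced by the cursor, the requested surrounding bytes are deleted, and the commit string is inserted with the cursor at its end. The struct edit layout and function name are hypothetical, for illustration only.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical editor buffer: a NUL-terminated text[] plus a byte
 * cursor index (offsets are in bytes, as the protocol requires). */
struct edit {
	char text[256];
	size_t cursor;
};

/* Steps 1-3 of the commit sequence; surrounding-text recalculation and
 * the new preedit (steps 4-6) are omitted. Assumes the caller already
 * validated that before/after stay within the buffer. */
static void apply_commit(struct edit *e, size_t before, size_t after,
		const char *commit)
{
	size_t len = strlen(e->text);
	size_t start = e->cursor - before;
	size_t end = e->cursor + after;
	size_t clen = strlen(commit);

	/* 2. Delete requested surrounding text (including the NUL). */
	memmove(e->text + start, e->text + end, len - end + 1);
	/* 3. Insert commit string with the cursor at its end. */
	memmove(e->text + start + clen, e->text + start,
			strlen(e->text) - start + 1);
	memcpy(e->text + start, commit, clen);
	e->cursor = start + clen;
}
```

For example, with text "hello world", cursor 5, before_length 2, after_length 3, and commit string "XY", the result is "helXYrld" with the cursor after "XY".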
This allows input methods which compose multiple key events for inputting text like it is done for CJK languages. The compositor should send all keyboard events on the seat to the grab holder via the returned wl_keyboard object. Nevertheless, the compositor may decide not to forward any particular event. The compositor must not further process any event after it has been forwarded to the grab holder. Releasing the resulting wl_keyboard object releases the grab. The input method ceased to be available. The compositor must issue this event as the only event on the object if there was another input_method object associated with the same seat at the time of its creation. The compositor must issue this event when the object is no longer usable, e.g. due to seat removal. The input method context becomes inert and should be destroyed after deactivation is handled. Any further requests and events except for the destroy request must be ignored. Destroys the zwp_input_method_v2 object and any associated child objects, i.e. zwp_input_popup_surface_v2 and zwp_input_method_keyboard_grab_v2. This interface marks a surface as a popup for interacting with an input method. The compositor should place it near the active text input area. It must be visible if and only if the input method is in the active state. The client must not destroy the underlying wl_surface while the zwp_input_popup_surface_v2 object exists. Notify about the position of the area of the text input expressed as a rectangle in surface local coordinates. This is a hint to the input method telling it the relative position of the text being entered. The zwp_input_method_keyboard_grab_v2 interface represents an exclusive grab of the wl_keyboard interface associated with the seat. This event provides a file descriptor to the client which can be memory-mapped to provide a keyboard mapping description. A key was pressed or released. The time argument is a timestamp with millisecond granularity, with an undefined base.
Notifies clients that the modifier and/or group state has changed, and it should update its local state. Informs the client about the keyboard's repeat rate and delay. This event is sent as soon as the zwp_input_method_keyboard_grab_v2 object has been created, and is guaranteed to be received by the client before any key press event. Negative values for either rate or delay are illegal. A rate of zero will disable any repeating (regardless of the value of delay). This event can be sent later on as well with a new value if necessary, so clients should continue listening for the event past the creation of zwp_input_method_keyboard_grab_v2. The input method manager allows the client to become the input method on a chosen seat. No more than one input method must be associated with any seat at any given time. Request a new zwp_input_method_v2 object associated with a given seat. Destroys the zwp_input_method_manager_v2 object. The zwp_input_method_v2 objects originating from it remain valid.

waypipe-v0.9.1/protocols/linux-dmabuf-unstable-v1.xml

Copyright © 2014, 2015 Collabora, Ltd. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Following the interfaces from: https://www.khronos.org/registry/egl/extensions/EXT/EGL_EXT_image_dma_buf_import.txt https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_image_dma_buf_import_modifiers.txt and the Linux DRM sub-system's AddFb2 ioctl. This interface offers ways to create generic dmabuf-based wl_buffers. Clients can use the get_surface_feedback request to get dmabuf feedback for a particular surface. If the client wants to retrieve feedback not tied to a surface, they can use the get_default_feedback request. The following are required from clients: - Clients must ensure that either all data in the dma-buf is coherent for all subsequent read access or that coherency is correctly handled by the underlying kernel-side dma-buf implementation. - Don't make any more attachments after sending the buffer to the compositor. Making more attachments later increases the risk of the compositor not being able to use (re-import) an existing dmabuf-based wl_buffer. The underlying graphics stack must ensure the following: - The dmabuf file descriptors relayed to the server will stay valid for the whole lifetime of the wl_buffer. This means the server may at any time use those fds to import the dmabuf into any kernel sub-system that might accept it. However, when the underlying graphics stack fails to deliver the promise, because of e.g. a device hot-unplug which raises internal errors, after the wl_buffer has been successfully created the compositor must not raise protocol errors to the client when dmabuf import later fails. To create a wl_buffer from one or more dmabufs, a client creates a zwp_linux_dmabuf_params_v1 object with a zwp_linux_dmabuf_v1.create_params request. 
All planes required by the intended format are added with the 'add' request. Finally, a 'create' or 'create_immed' request is issued, which has the following outcome depending on the import success. The 'create' request, - on success, triggers a 'created' event which provides the final wl_buffer to the client. - on failure, triggers a 'failed' event to convey that the server cannot use the dmabufs received from the client. For the 'create_immed' request, - on success, the server immediately imports the added dmabufs to create a wl_buffer. No event is sent from the server in this case. - on failure, the server can choose to either: - terminate the client by raising a fatal error. - mark the wl_buffer as failed, and send a 'failed' event to the client. If the client uses a failed wl_buffer as an argument to any request, the behaviour is compositor implementation-defined. For all DRM formats and unless specified in another protocol extension, pre-multiplied alpha is used for pixel values. Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset. Objects created through this interface, especially wl_buffers, will remain valid. This temporary object is used to collect multiple dmabuf handles into a single batch to create a wl_buffer. It can only be used once and should be destroyed after a 'created' or 'failed' event has been received. This event advertises one buffer format that the server supports. All the supported formats are advertised once when the client binds to this interface. 
A roundtrip after binding guarantees that the client has received all supported formats. For the definition of the format codes, see the zwp_linux_buffer_params_v1::create request. Starting version 4, the format event is deprecated and must not be sent by compositors. Instead, use get_default_feedback or get_surface_feedback. This event advertises the formats that the server supports, along with the modifiers supported for each format. All the supported modifiers for all the supported formats are advertised once when the client binds to this interface. A roundtrip after binding guarantees that the client has received all supported format-modifier pairs. For legacy support, DRM_FORMAT_MOD_INVALID (that is, modifier_hi == 0x00ffffff and modifier_lo == 0xffffffff) is allowed in this event. It indicates that the server can support the format with an implicit modifier. When a plane has DRM_FORMAT_MOD_INVALID as its modifier, it is as if no explicit modifier is specified. The effective modifier will be derived from the dmabuf. A compositor that sends valid modifiers and DRM_FORMAT_MOD_INVALID for a given format supports both explicit modifiers and implicit modifiers. For the definition of the format and modifier codes, see the zwp_linux_buffer_params_v1::create and zwp_linux_buffer_params_v1::add requests. Starting version 4, the modifier event is deprecated and must not be sent by compositors. Instead, use get_default_feedback or get_surface_feedback. This request creates a new wp_linux_dmabuf_feedback object not bound to a particular surface. This object will deliver feedback about dmabuf parameters to use if the client doesn't support per-surface feedback (see get_surface_feedback). This request creates a new wp_linux_dmabuf_feedback object for the specified wl_surface. This object will deliver feedback about dmabuf parameters to use for buffers attached to this surface. 
If the surface is destroyed before the wp_linux_dmabuf_feedback object, the feedback object becomes inert. This temporary object is a collection of dmabufs and other parameters that together form a single logical buffer. The temporary object may eventually create one wl_buffer unless cancelled by destroying it before requesting 'create'. Single-planar formats only require one dmabuf, however multi-planar formats may require more than one dmabuf. For all formats, an 'add' request must be called once per plane (even if the underlying dmabuf fd is identical). You must use consecutive plane indices ('plane_idx' argument for 'add') from zero to the number of planes used by the drm_fourcc format code. All planes required by the format must be given exactly once, but can be given in any order. Each plane index can be set only once. Cleans up the temporary data sent to the server for dmabuf-based wl_buffer creation. This request adds one dmabuf to the set in this zwp_linux_buffer_params_v1. The 64-bit unsigned value combined from modifier_hi and modifier_lo is the dmabuf layout modifier. DRM AddFB2 ioctl calls this the fb modifier, which is defined in drm_mode.h of Linux UAPI. This is an opaque token. Drivers use this token to express tiling, compression, etc. driver-specific modifications to the base format defined by the DRM fourcc code. Starting from version 4, the invalid_format protocol error is sent if the format + modifier pair was not advertised as supported. This request raises the PLANE_IDX error if plane_idx is too large. The error PLANE_SET is raised if attempting to set a plane that was already set. This asks for creation of a wl_buffer from the added dmabuf buffers. The wl_buffer is not created immediately but returned via the 'created' event if the dmabuf sharing succeeds. The sharing may fail at runtime for reasons a client cannot predict, in which case the 'failed' event is triggered. 
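On the wire the 64-bit layout modifier travels as a modifier_hi/modifier_lo pair of 32-bit arguments. A minimal C sketch of recombining it, and of recognizing the legacy DRM_FORMAT_MOD_INVALID sentinel (modifier_hi == 0x00ffffff, modifier_lo == 0xffffffff, per the modifier event description); the helper names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* DRM_FORMAT_MOD_INVALID as the hi/lo pair above combines to. */
#define DRM_FORMAT_MOD_INVALID ((uint64_t)0x00ffffffffffffffULL)

/* Recombine the modifier_hi/modifier_lo pair from the wire. */
static uint64_t combine_modifier(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | (uint64_t)lo;
}

/* True when the plane carries no explicit modifier; the effective
 * modifier is then derived from the dmabuf itself. */
static int is_implicit_modifier(uint64_t mod)
{
	return mod == DRM_FORMAT_MOD_INVALID;
}
```

For any other value the modifier is an opaque driver token (tiling, compression, etc.) and must be passed through unchanged.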
The 'format' argument is a DRM_FORMAT code, as defined by libdrm's drm_fourcc.h. The Linux kernel's DRM sub-system is the authoritative source on how the format codes should work. The 'flags' is a bitfield of the flags defined in enum "flags". 'y_invert' means that the image needs to be y-flipped. Flag 'interlaced' means that the frame in the buffer is not progressive as usual, but interlaced. An interlaced buffer as supported here must always contain both top and bottom fields. The top field always begins on the first pixel row. The temporal ordering between the two fields is top field first, unless 'bottom_first' is specified. It is undefined whether 'bottom_first' is ignored if 'interlaced' is not set. This protocol does not convey any information about field rate, duration, or timing, other than the relative ordering between the two fields in one buffer. A compositor may have to estimate the intended field rate from the incoming buffer rate. It is undefined whether the time of receiving wl_surface.commit with a new buffer attached, applying the wl_surface state, wl_surface.frame callback trigger, presentation, or any other point in the compositor cycle is used to measure the frame or field times. There is no support for detecting missed or late frames/fields/buffers either, and there is no support whatsoever for cooperating with interlaced compositor output. The composited image quality resulting from the use of interlaced buffers is explicitly undefined. A compositor may use elaborate hardware features or software to deinterlace and create progressive output frames from a sequence of interlaced input buffers, or it may produce substandard image quality. However, compositors that cannot guarantee reasonable image quality in all cases are recommended to just reject all interlaced buffers.
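A small sketch of decoding the 'flags' bitfield. The values assumed below (y_invert = 1, interlaced = 2, bottom_first = 4) come from the flags enum in the upstream zwp_linux_buffer_params_v1 XML, not from this text, and the macro names are illustrative rather than the generated header names:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed flag values from the upstream "flags" enum. */
#define DMABUF_FLAG_Y_INVERT     1u
#define DMABUF_FLAG_INTERLACED   2u
#define DMABUF_FLAG_BOTTOM_FIRST 4u

/* bottom_first only has a defined meaning when interlaced is set;
 * the spec leaves it undefined otherwise, so treat it as ignored. */
static int is_bottom_field_first(uint32_t flags)
{
	return (flags & DMABUF_FLAG_INTERLACED) &&
	       (flags & DMABUF_FLAG_BOTTOM_FIRST);
}
```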
Any argument errors, including non-positive width or height, mismatch between the number of planes and the format, bad format, bad offset or stride, may be indicated by fatal protocol errors: INCOMPLETE, INVALID_FORMAT, INVALID_DIMENSIONS, OUT_OF_BOUNDS. Dmabuf import errors in the server that are not obvious client bugs are returned via the 'failed' event as non-fatal. This allows attempting dmabuf sharing and falling back in the client if it fails. This request can be sent only once in the object's lifetime, after which the only legal request is destroy. This object should be destroyed after issuing a 'create' request. Attempting to use this object after issuing 'create' raises the ALREADY_USED protocol error. It is not mandatory to issue 'create'. If a client wants to cancel the buffer creation, it can just destroy this object. This event indicates that the attempted buffer creation was successful. It provides the new wl_buffer referencing the dmabuf(s). Upon receiving this event, the client should destroy the zlinux_dmabuf_params object. This event indicates that the attempted buffer creation has failed. It usually means that one of the dmabuf constraints has not been fulfilled. Upon receiving this event, the client should destroy the zlinux_buffer_params object. This asks for immediate creation of a wl_buffer by importing the added dmabufs. In case of import success, no event is sent from the server, and the wl_buffer is ready to be used by the client. Upon import failure, either of the following may happen, as seen fit by the implementation: - the client is terminated with one of the following fatal protocol errors: - INCOMPLETE, INVALID_FORMAT, INVALID_DIMENSIONS, OUT_OF_BOUNDS, in case of argument errors such as mismatch between the number of planes and the format, bad format, non-positive width or height, or bad offset or stride. - INVALID_WL_BUFFER, in case the cause for failure is unknown or platform specific.
- the server creates an invalid wl_buffer, marks it as failed and sends a 'failed' event to the client. The result of using this invalid wl_buffer as an argument in any request by the client is defined by the compositor implementation. This takes the same arguments as a 'create' request, and obeys the same restrictions. This object advertises dmabuf parameters feedback. This includes the preferred devices and the supported formats/modifiers. The parameters are sent once when this object is created and whenever they change. The done event is always sent once after all parameters have been sent. When a single parameter changes, all parameters are re-sent by the compositor. Compositors can re-send the parameters when the current client buffer allocations are sub-optimal. Compositors should not re-send the parameters if re-allocating the buffers would not result in a more optimal configuration. In particular, compositors should avoid sending the exact same parameters multiple times in a row. The tranche_target_device and tranche_modifier events are grouped by tranches of preference. For each tranche, a tranche_target_device, one tranche_flags and one or more tranche_modifier events are sent, followed by a tranche_done event finishing the list. The tranches are sent in descending order of preference. All formats and modifiers in the same tranche have the same preference. To send parameters, the compositor sends one main_device event, tranches (each consisting of one tranche_target_device event, one tranche_flags event, tranche_modifier events and then a tranche_done event), then one done event. Using this request a client can tell the server that it is not going to use the wp_linux_dmabuf_feedback object anymore. This event is sent after all parameters of a wp_linux_dmabuf_feedback object have been sent. This allows changes to the wp_linux_dmabuf_feedback parameters to be seen as atomic, even if they happen via multiple events. 
This event provides a file descriptor which can be memory-mapped to access the format and modifier table. The table contains a tightly packed array of consecutive format + modifier pairs. Each pair is 16 bytes wide. It contains a format as a 32-bit unsigned integer, followed by 4 bytes of unused padding, and a modifier as a 64-bit unsigned integer. The native endianness is used. The client must map the file descriptor in read-only private mode. Compositors are not allowed to mutate the table file contents once this event has been sent. Instead, compositors must create a new, separate table file and re-send feedback parameters. Compositors are allowed to store duplicate format + modifier pairs in the table. This event advertises the main device that the server prefers to use when direct scan-out to the target device isn't possible. The advertised main device may be different for each wp_linux_dmabuf_feedback object, and may change over time. There is exactly one main device. The compositor must send at least one preference tranche with tranche_target_device equal to main_device. Clients need to create buffers that the main device can import and read from, otherwise creating the dmabuf wl_buffer will fail (see the wp_linux_buffer_params.create and create_immed requests for details). The main device will also likely be kept active by the compositor, so clients can use it instead of waking up another device for power savings. In general the device is a DRM node. The DRM node type (primary vs. render) is unspecified. Clients must not rely on the compositor sending a particular node type. Clients cannot check two devices for equality by comparing the dev_t value. If explicit modifiers are not supported and the client performs buffer allocations on a different device than the main device, then the client must force the buffer to have a linear layout. This event splits tranche_target_device and tranche_modifier events in preference tranches. 
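The format table layout described above — tightly packed 16-byte entries of a 32-bit format, 4 bytes of padding, and a 64-bit modifier in native endianness — can be read with a plain struct. The struct and function names are illustrative; a real client would mmap the received fd read-only and index into it the same way:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* One 16-byte entry of the format/modifier table. */
struct format_entry {
	uint32_t format;   /* DRM_FORMAT code */
	uint32_t pad;      /* 4 bytes of unused padding */
	uint64_t modifier; /* layout modifier */
};
_Static_assert(sizeof(struct format_entry) == 16,
		"each table entry is 16 bytes wide");

/* Look up the pair at a given index; tranche_formats later refers to
 * entries by such (16-bit) indices. memcpy avoids alignment issues
 * when reading straight out of the mapped table. */
static struct format_entry table_get(const void *table, uint16_t idx)
{
	struct format_entry e;
	memcpy(&e, (const char *)table + (size_t)idx * 16, sizeof e);
	return e;
}
```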
It is sent after a set of tranche_target_device and tranche_modifier events; it represents the end of a tranche. The next tranche will have a lower preference. This event advertises the target device that the server prefers to use for a buffer created given this tranche. The advertised target device may be different for each preference tranche, and may change over time. There is exactly one target device per tranche. The target device may be a scan-out device, for example if the compositor prefers to directly scan-out a buffer created given this tranche. The target device may be a rendering device, for example if the compositor prefers to texture from said buffer. The client can use this hint to allocate the buffer in a way that makes it accessible from the target device, ideally directly. The buffer must still be accessible from the main device, either through direct import or through a potentially more expensive fallback path. If the buffer can't be directly imported from the main device then clients must be prepared for the compositor changing the tranche priority or making wl_buffer creation fail (see the wp_linux_buffer_params.create and create_immed requests for details). If the device is a DRM node, the DRM node type (primary vs. render) is unspecified. Clients must not rely on the compositor sending a particular node type. Clients cannot check two devices for equality by comparing the dev_t value. This event is tied to a preference tranche, see the tranche_done event. This event advertises the format + modifier combinations that the compositor supports. It carries an array of indices, each referring to a format + modifier pair in the last received format table (see the format_table event). Each index is a 16-bit unsigned integer in native endianness. For legacy support, DRM_FORMAT_MOD_INVALID is an allowed modifier. It indicates that the server can support the format with an implicit modifier. 
When a buffer has DRM_FORMAT_MOD_INVALID as its modifier, it is as if no explicit modifier is specified. The effective modifier will be derived from the dmabuf. A compositor that sends valid modifiers and DRM_FORMAT_MOD_INVALID for a given format supports both explicit modifiers and implicit modifiers. Compositors must not send duplicate format + modifier pairs within the same tranche or across two different tranches with the same target device and flags. This event is tied to a preference tranche, see the tranche_done event. For the definition of the format and modifier codes, see the wp_linux_buffer_params.create request. This event sets tranche-specific flags. The scanout flag is a hint that direct scan-out may be attempted by the compositor on the target device if the client appropriately allocates a buffer. How to allocate a buffer that can be scanned out on the target device is implementation-defined. This event is tied to a preference tranche, see the tranche_done event.

waypipe-v0.9.1/protocols/meson.build

symgen_path = join_paths(meson.current_source_dir(), 'symgen.py')
sendgen_path = join_paths(meson.current_source_dir(), 'sendgen.py')
fn_list = join_paths(meson.current_source_dir(), 'function_list.txt')

# Include a copy of these protocols in the repository, rather than looking
# for packages containing them, to:
# a) avoid versioning problems as new protocols/methods are introduced
# b) keep the minimum build complexity for waypipe low
# c) be able to relay through newer protocols than are default on a system
protocols = [
	'wayland.xml',
	'xdg-shell.xml',
	'presentation-time.xml',
	'linux-dmabuf-unstable-v1.xml',
	'gtk-primary-selection.xml',
	'input-method-unstable-v2.xml',
	'primary-selection-unstable-v1.xml',
	'virtual-keyboard-unstable-v1.xml',
	'wlr-screencopy-unstable-v1.xml',
	'wlr-export-dmabuf-unstable-v1.xml',
	'wlr-data-control-unstable-v1.xml',
	'wlr-gamma-control-unstable-v1.xml',
	'wayland-drm.xml',
]

protocols_src = []
protocols_src += custom_target(
	'protocol code',
	output: 'protocols.c',
	input: protocols,
	depend_files: [fn_list, symgen_path],
	command: [python3, symgen_path, 'data', fn_list, '@OUTPUT@', '@INPUT@'],
)
protocols_src += custom_target(
	'protocol header',
	output: 'protocols.h',
	input: protocols,
	depend_files: [fn_list, symgen_path],
	command: [python3, symgen_path, 'header', fn_list, '@OUTPUT@', '@INPUT@'],
)

# For use in test
abs_protocols = []
foreach xml : protocols
	abs_protocols += join_paths(meson.current_source_dir(), xml)
endforeach

waypipe-v0.9.1/protocols/presentation-time.xml

Copyright © 2013-2014 Collabora, Ltd. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. The main feature of this interface is accurate presentation timing feedback to ensure smooth video playback while maintaining audio/video synchronization.
Some features use the concept of a presentation clock, which is defined in the presentation.clock_id event. A content update for a wl_surface is submitted by a wl_surface.commit request. Request 'feedback' associates with the wl_surface.commit and provides feedback on the content update, particularly the final realized presentation time. When the final realized presentation time is available, e.g. after a framebuffer flip completes, the requested presentation_feedback.presented events are sent. The final presentation time can differ from the compositor's predicted display update time and the update's target time, especially when the compositor misses its target vertical blanking period. These fatal protocol errors may be emitted in response to illegal presentation requests. Informs the server that the client will no longer be using this protocol object. Existing objects created by this object are not affected. Request presentation feedback for the current content submission on the given surface. This creates a new presentation_feedback object, which will deliver the feedback information once. If multiple presentation_feedback objects are created for the same submission, they will all deliver the same information. For details on what information is returned, see the presentation_feedback interface. This event tells the client in which clock domain the compositor interprets the timestamps used by the presentation extension. This clock is called the presentation clock. The compositor sends this event when the client binds to the presentation interface. The presentation clock does not change during the lifetime of the client connection. The clock identifier is platform dependent. On Linux/glibc, the identifier value is one of the clockid_t values accepted by clock_gettime(). clock_gettime() is defined by POSIX.1-2001. Timestamps in this clock domain are expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples, each component being an unsigned 32-bit value. 
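The split-timestamp convention described here (two unsigned 32-bit halves for the seconds, plus nanoseconds) can be sketched in Python, in the style of the repository's own generator scripts. `combine_timestamp` is an illustrative helper, not part of the protocol or of waypipe:

```python
def combine_timestamp(tv_sec_hi, tv_sec_lo, tv_nsec):
    """Assemble a single nanosecond count from a protocol timestamp triple.

    tv_sec_hi/tv_sec_lo are the high and low unsigned 32-bit halves of the
    64-bit seconds value; tv_nsec is the fractional part in nanoseconds.
    """
    if not (0 <= tv_nsec <= 999_999_999):
        raise ValueError("tv_nsec out of range for a valid timestamp")
    tv_sec = (tv_sec_hi << 32) | tv_sec_lo
    return tv_sec * 1_000_000_000 + tv_nsec
```

A client would feed the three event arguments straight into such a helper before comparing against its own reading of the presentation clock.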
Whole seconds are in tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo, and the additional fractional part in tv_nsec as nanoseconds. Hence, for valid timestamps tv_nsec must be in [0, 999999999]. Note that clock_id applies only to the presentation clock, and implies nothing about e.g. the timestamps used in the Wayland core protocol input events. Compositors should prefer a clock which does not jump and is not slewed e.g. by NTP. The absolute value of the clock is irrelevant. Precision of one millisecond or better is recommended. Clients must be able to query the current clock value directly, not by asking the compositor. A presentation_feedback object returns an indication that a wl_surface content update has become visible to the user. One object corresponds to one content update submission (wl_surface.commit). There are two possible outcomes: the content update is presented to the user, and a presentation timestamp delivered; or, the user did not see the content update because it was superseded or its surface destroyed, and the content update is discarded. Once a presentation_feedback object has delivered a 'presented' or 'discarded' event it is automatically destroyed. As presentation can be synchronized to only one output at a time, this event tells which output it was. This event is only sent prior to the presented event. As clients may bind to the same global wl_output multiple times, this event is sent for each bound instance that matches the synchronized output. If a client has not bound to the right wl_output global at all, this event is not sent. These flags provide information about how the presentation of the related content update was done. The intent is to help clients assess the reliability of the feedback and the visual quality with respect to possible tearing and timings. The presentation was synchronized to the "vertical retrace" by the display hardware such that tearing does not happen. 
Relying on software scheduling is not acceptable for this flag. If presentation is done by a copy to the active frontbuffer, then it must guarantee that tearing cannot happen. The display hardware provided measurements that the hardware driver converted into a presentation timestamp. Sampling a clock in software is not acceptable for this flag. The display hardware signalled that it started using the new image content. The opposite of this is e.g. a timer being used to guess when the display hardware has switched to the new image content. The presentation of this update was done zero-copy. This means the buffer from the client was given to display hardware as is, without copying it. Compositing with OpenGL counts as copying, even if textured directly from the client buffer. Possible zero-copy cases include direct scanout of a fullscreen surface and a surface on a hardware overlay. The associated content update was displayed to the user at the indicated time (tv_sec_hi/lo, tv_nsec). For the interpretation of the timestamp, see presentation.clock_id event. The timestamp corresponds to the time when the content update turned into light the first time on the surface's main output. Compositors may approximate this from the framebuffer flip completion events from the system, and the latency of the physical display path if known. This event is preceded by all related sync_output events telling which output's refresh cycle the feedback corresponds to, i.e. the main output for the surface. Compositors are recommended to choose the output containing the largest part of the wl_surface, or keeping the output they previously chose. Having a stable presentation output association helps clients predict future output refreshes (vblank). The 'refresh' argument gives the compositor's prediction of how many nanoseconds after tv_sec, tv_nsec the very next output refresh may occur. 
This is to further aid clients in predicting future refreshes, i.e., estimating the timestamps targeting the next few vblanks. If such prediction cannot usefully be done, the argument is zero. If the output does not have a constant refresh rate, explicit video mode switches excluded, then the refresh argument must be zero. The 64-bit value combined from seq_hi and seq_lo is the value of the output's vertical retrace counter when the content update was first scanned out to the display. This value must be compatible with the definition of MSC in GLX_OML_sync_control specification. Note, that if the display path has a non-zero latency, the time instant specified by this counter may differ from the timestamp's. If the output does not have a concept of vertical retrace or a refresh cycle, or the output device is self-refreshing without a way to query the refresh count, then the arguments seq_hi and seq_lo must be zero. The content update was never displayed to the user. waypipe-v0.9.1/protocols/primary-selection-unstable-v1.xml000066400000000000000000000243451463133614300237540ustar00rootroot00000000000000 Copyright © 2015, 2016 Red Hat Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. This protocol provides the ability to have a primary selection device to match that of the X server. This primary selection is a shortcut to the common clipboard selection, where text just needs to be selected in order to allow copying it elsewhere. The de facto way to perform this action is the middle mouse button, although it is not limited to this one. Clients wishing to honor primary selection should create a primary selection source and set it as the selection through wp_primary_selection_device.set_selection whenever the text selection changes. In order to minimize calls in pointer-driven text selection, it should happen only once after the operation finished. Similarly, a NULL source should be set when text is unselected. wp_primary_selection_offer objects are first announced through the wp_primary_selection_device.data_offer event. Immediately after this event, the primary data offer will emit wp_primary_selection_offer.offer events to let know of the mime types being offered. When the primary selection changes, the client with the keyboard focus will receive wp_primary_selection_device.selection events. Only the client with the keyboard focus will receive such events with a non-NULL wp_primary_selection_offer. Across keyboard focus changes, previously focused clients will receive wp_primary_selection_device.events with a NULL wp_primary_selection_offer. In order to request the primary selection data, the client must pass a recent serial pertaining to the press event that is triggering the operation, if the compositor deems the serial valid and recent, the wp_primary_selection_source.send event will happen in the other end to let the transfer begin. 
The client owning the primary selection should write the requested data, and close the file descriptor immediately. If the primary selection owner client disappeared during the transfer, the client reading the data will receive a wp_primary_selection_device.selection event with a NULL wp_primary_selection_offer, the client should take this as a hint to finish the reads related to the no longer existing offer. The primary selection owner should be checking for errors during writes, merely cancelling the ongoing transfer if any happened. The primary selection device manager is a singleton global object that provides access to the primary selection. It allows to create wp_primary_selection_source objects, as well as retrieving the per-seat wp_primary_selection_device objects. Create a new primary selection source. Create a new data device for a given seat. Destroy the primary selection device manager. Replaces the current selection. The previous owner of the primary selection will receive a wp_primary_selection_source.cancelled event. To unset the selection, set the source to NULL. Introduces a new wp_primary_selection_offer object that may be used to receive the current primary selection. Immediately following this event, the new wp_primary_selection_offer object will send wp_primary_selection_offer.offer events to describe the offered mime types. The wp_primary_selection_device.selection event is sent to notify the client of a new primary selection. This event is sent after the wp_primary_selection.data_offer event introducing this object, and after the offer has announced its mimetypes through wp_primary_selection_offer.offer. The data_offer is valid until a new offer or NULL is received or until the client loses keyboard focus. The client must destroy the previous selection data_offer, if any, upon receiving this event. Destroy the primary selection device. 
A wp_primary_selection_offer represents an offer to transfer the contents of the primary selection clipboard to the client. Similar to wl_data_offer, the offer also describes the mime types that the data can be converted to and provides the mechanisms for transferring the data directly to the client. To transfer the contents of the primary selection clipboard, the client issues this request and indicates the mime type that it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the mime type representation requested and then closes the file descriptor. The receiving client reads from the read end of the pipe until EOF and closes its end, at which point the transfer is complete. Destroy the primary selection offer. Sent immediately after creating announcing the wp_primary_selection_offer through wp_primary_selection_device.data_offer. One event is sent per offered mime type. The source side of a wp_primary_selection_offer, it provides a way to describe the offered data and respond to requests to transfer the requested contents of the primary selection clipboard. This request adds a mime type to the set of mime types advertised to targets. Can be called several times to offer multiple types. Destroy the primary selection source. Request for the current primary selection contents from the client. Send the specified mime type over the passed file descriptor, then close it. This primary selection source is no longer valid. The client should clean up and destroy this primary selection source. 
waypipe-v0.9.1/protocols/sendgen.py000066400000000000000000000153721463133614300174220ustar00rootroot00000000000000#!/usr/bin/env python3 import os, sys, fnmatch import xml.etree.ElementTree as ET """ A static protocol code generator for the task of creating the wire representation of a list of events/requests """ wltype_to_ctypes = { "uint": "uint32_t ", "fixed": "uint32_t ", "int": "int32_t ", "object": "struct wp_objid ", "new_id": "struct wp_objid ", "string": "const char *", "fd": "int ", } def write_enum(ostream, iface_name, enum): enum_name = enum.attrib["name"] is_bitfield = "bitfield" in enum.attrib and enum.attrib["bitfield"] == "true" for entry in enum: if entry.tag != "entry": continue entry_name = entry.attrib["name"] entry_value = entry.attrib["value"] full_name = (iface_name + "_" + enum_name + "_" + entry_name).upper() print("#define {} {}".format(full_name, entry_value), file=ostream) def is_exportable(func_name, export_list): for e in export_list: if fnmatch.fnmatchcase(func_name, e): return True return False def write_func(ostream, iface_name, func, is_request, func_no, export_list): func_name = ( iface_name + "_" + ("req" if is_request else "evt") + "_" + func.attrib["name"] ) for_export = is_exportable(func_name, export_list) if not for_export: return c_sig = ["struct transfer_states *ts", "struct wp_objid " + iface_name + "_id"] w_args = [] num_fd_args = 0 num_reg_args = 0 num_obj_args = 0 num_new_args = 0 num_stretch_args = 0 for arg in func: if arg.tag != "arg": continue arg_name = arg.attrib["name"] arg_type = arg.attrib["type"] arg_interface = arg.attrib["interface"] if "interface" in arg.attrib else None if arg_type == "new_id" and arg_interface is None: # Special case, for wl_registry_bind c_sig.append("const char *interface") c_sig.append("uint32_t version") c_sig.append("struct wp_objid id") w_args.append(("interface", "string", None)) w_args.append(("version", "uint", None)) w_args.append((arg_name, "new_id", None)) num_obj_args += 1 
num_new_args += 1 num_reg_args += 3 num_stretch_args += 1 continue if arg_type == "array": c_sig.append("int " + arg_name + "_count") c_sig.append("const uint8_t *" + arg_name + "_val") else: c_sig.append(wltype_to_ctypes[arg_type] + arg_name) w_args.append((arg_name, arg_type, arg_interface)) if arg_type == "fd": num_fd_args += 1 else: num_reg_args += 1 if arg_type == "object" or arg_type == "new_id": num_obj_args += 1 if arg_type == "new_id": num_new_args += 1 if arg_type in ("array", "string"): num_stretch_args += 1 send_signature = "static void send_{}({}) ".format(func_name, ", ".join(c_sig)) W = lambda *x: print(*x, file=ostream) # Write function definition W(send_signature + " {") W("\tts->fd_size = 0;") W("\tts->msg_space[0] = {}.id;".format(iface_name + "_id")) W("\tts->msg_size = 2;") tmp_names = ["ctx"] for i, (arg_name, arg_type, arg_interface) in enumerate(w_args): if arg_type == "array": raise NotImplementedError() continue elif arg_type == "fd": W("\tts->fd_space[ts->fd_size++] = {};".format(arg_name)) continue elif arg_type == "string": W("\tserialize_string(ts, {});".format(arg_name)) continue elif arg_type == "object" or arg_type == "new_id": W("\tts->msg_space[ts->msg_size++] = {}.id;".format(arg_name)) elif arg_type == "int": W("\tts->msg_space[ts->msg_size++] = (uint32_t){};".format(arg_name)) elif arg_type == "uint" or arg_type == "fixed": W("\tts->msg_space[ts->msg_size++] = {};".format(arg_name)) else: raise KeyError(arg_type) W("\tts->msg_space[1] = ((uint32_t)ts->msg_size << 18) | {};".format(func_no)) if is_request: W("\tts->send(ts, ts->app, ts->comp);") else: W("\tts->send(ts, ts->comp, ts->app);") W("}") if __name__ == "__main__": req_file, dest = sys.argv[1:3] sources = sys.argv[3:] assert dest.endswith(".h") dest_shortname = dest[:-2] header_flag = dest_shortname.upper().replace("/", "_") + "_H" export_list = open(req_file).read().split("\n") with open(dest, "w") as ostream: W = lambda *x: print(*x, file=ostream) W("#ifndef 
{}".format(header_flag)) W("#include ") W("#include ") W("#include ") W("struct test_state;") W("struct wp_objid { uint32_t id; };") W("struct transfer_states {") W("\tuint32_t msg_space[256];") W("\tint fd_space[16];") W("\tunsigned int msg_size;") W("\tunsigned int fd_size;") W("\tstruct test_state *app;") W("\tstruct test_state *comp;") W( "\tvoid (*send)(struct transfer_states *, struct test_state *src, struct test_state *dst);" ) W("};") # note: this script assumes that serialize_string will be used W("static void serialize_string(struct transfer_states *ts, const char *str) {") W("\tif (str) {") W("\t\tsize_t slen = strlen(str) + 1;") W("\t\tts->msg_space[ts->msg_size] = (uint32_t)slen;") W("\t\tmemcpy(&ts->msg_space[ts->msg_size + 1], str, slen);") W("\t\tts->msg_size += ((uint32_t)slen + 0x7) >> 2;") W("\t} else {") W("\t\tts->msg_space[ts->msg_size++] = 0;") W("\t}") W("}") for source in sorted(sources): tree = ET.parse(source) root = tree.getroot() for interface in root: if interface.tag != "interface": continue iface_name = interface.attrib["name"] func_data = [] nreq, nevt = 0, 0 for item in interface: if item.tag == "enum": write_enum(ostream, iface_name, item) elif item.tag == "request": write_func(ostream, iface_name, item, True, nreq, export_list) nreq += 1 elif item.tag == "event": write_func(ostream, iface_name, item, False, nevt, export_list) nevt += 1 elif item.tag == "description": pass else: raise Exception(item.tag) W("#endif /* {} */".format(header_flag)) waypipe-v0.9.1/protocols/symgen.py000077500000000000000000000356771463133614300173160ustar00rootroot00000000000000#!/usr/bin/env python3 import os, sys, fnmatch import xml.etree.ElementTree as ET import argparse """ A static protocol code generator. 
""" wltype_to_ctypes = { "uint": "uint32_t ", "fixed": "uint32_t ", "int": "int32_t ", "object": "struct wp_object *", "new_id": "struct wp_object *", "string": "const char *", "fd": "int ", } def superstring(a, b): na, nb = len(a), len(b) if nb > na: b, a, nb, na = a, b, na, nb # A contains B for i in range(na - nb + 1): if a[i : nb + i] == b: return a # suffix of B is prefix of A ba_overlap = 0 for i in range(1, nb): if b[-i:] == a[:i]: ba_overlap = i # suffix of A is prefix of B ab_overlap = 0 for i in range(1, nb): if a[-i:] == b[:i]: ab_overlap = i if ba_overlap > ab_overlap: return b + a[ba_overlap:] else: return a + b[ab_overlap:] def get_offset(haystack, needle): for i in range(len(haystack) - len(needle) + 1): if haystack[i : i + len(needle)] == needle: return i return None def shortest_superstring(strings): """ Given strings L_1,...L_n over domain U, report an approximation of the shortest superstring of the lists, and offsets of the L_i into this string. Has O(n^3) runtime; O(n^2 polylog) is possible. 
""" if not len(strings): return None pool = [] for s in strings: if s not in pool: pool.append(s) while len(pool) > 1: max_overlap = 0 best = None for i in range(len(pool)): for j in range(i): d = len(pool[i]) + len(pool[j]) - len(superstring(pool[i], pool[j])) if d >= max_overlap: max_overlap = d best = (j, i) s = superstring(pool[best[0]], pool[best[1]]) del pool[best[1]] del pool[best[0]] pool.append(s) sstring = pool[0] for s in strings: assert get_offset(sstring, s) != None, ("substring property", sstring, s) return sstring def write_enum(is_header, ostream, iface_name, enum): if not is_header: return enum_name = enum.attrib["name"] is_bitfield = "bitfield" in enum.attrib and enum.attrib["bitfield"] == "true" long_name = iface_name + "_" + enum_name print("enum " + long_name + " {", file=ostream) for entry in enum: if entry.tag != "entry": continue entry_name = entry.attrib["name"] entry_value = entry.attrib["value"] full_name = long_name.upper() + "_" + entry_name.upper() print("\t" + full_name + " = " + entry_value + ",", file=ostream) print("};", file=ostream) def write_version(is_header, ostream, iface_name, version): if not is_header: return print( "#define " + iface_name.upper() + "_INTERFACE_VERSION " + str(version), file=ostream, ) def is_exportable(func_name, export_list): for e in export_list: if fnmatch.fnmatchcase(func_name, e): return True return False def write_func(is_header, ostream, func_name, func): c_sig = ["struct context *ctx"] w_args = [] num_fd_args = 0 num_reg_args = 0 num_obj_args = 0 num_new_args = 0 num_stretch_args = 0 for arg in func: if arg.tag != "arg": continue arg_name = arg.attrib["name"] arg_type = arg.attrib["type"] arg_interface = arg.attrib["interface"] if "interface" in arg.attrib else None if arg_type == "new_id" and arg_interface is None: # Special case, for wl_registry_bind c_sig.append("const char *interface") c_sig.append("uint32_t version") c_sig.append("struct wp_object *id") w_args.append(("interface", "string", 
None)) w_args.append(("version", "uint", None)) w_args.append((arg_name, "new_id", None)) num_obj_args += 1 num_new_args += 1 num_reg_args += 3 num_stretch_args += 1 continue if arg_type == "array": c_sig.append("uint32_t " + arg_name + "_count") c_sig.append("const uint8_t *" + arg_name + "_val") else: c_sig.append(wltype_to_ctypes[arg_type] + arg_name) w_args.append((arg_name, arg_type, arg_interface)) if arg_type == "fd": num_fd_args += 1 else: num_reg_args += 1 if arg_type == "object" or arg_type == "new_id": num_obj_args += 1 if arg_type == "new_id": num_new_args += 1 if arg_type in ("array", "string"): num_stretch_args += 1 do_signature = "void do_{}({});".format(func_name, ", ".join(c_sig)) handle_signature = "static void call_{}(struct context *ctx, const uint32_t *payload, const int *fds, struct message_tracker *mt)".format( func_name ) W = lambda *x: print(*x, file=ostream) if is_header: W(do_signature) if not is_header: # Write function definition W(do_signature) W(handle_signature + " {") if num_reg_args > 0: W("\tunsigned int i = 0;") if num_fd_args > 0: W("\tunsigned int k = 0;") tmp_names = ["ctx"] n_fds_left = num_fd_args n_reg_left = num_reg_args for i, (arg_name, arg_type, arg_interface) in enumerate(w_args): if arg_type == "array": n_reg_left -= 1 W( "\tconst uint8_t *arg{}_b = (const uint8_t *)&payload[i + 1];".format( i ) ) W("\tuint32_t arg{}_a = payload[i];".format(i)) if n_reg_left > 0: W("\ti += 1 + (unsigned int)((arg{}_a + 0x3) >> 2);".format(i)) tmp_names.append("arg{}_a".format(i)) tmp_names.append("arg{}_b".format(i)) continue tmp_names.append("arg{}".format(i)) if arg_type == "fd": n_fds_left -= 1 W("\tint arg{} = fds[{}];".format(i, "k++" if n_fds_left > 0 else "k")) continue n_reg_left -= 1 if arg_type == "string": W("\tconst char *arg{} = (const char *)&payload[i + 1];".format(i)) W("\tif (!payload[i]) arg{} = NULL;".format(i)) if n_reg_left > 0: W("\ti += 1 + ((payload[i] + 0x3) >> 2);") continue i_incr = "i++" if n_reg_left > 0 
else "i" if arg_type == "object" or arg_type == "new_id": if arg_interface is None: intf_str = "NULL" else: intf_str = "&intf_" + arg_interface W( "\tstruct wp_object *arg{} = get_object(mt, payload[{}], {});".format( i, i_incr, intf_str ) ) elif arg_type == "int": W("\tint32_t arg{} = (int32_t)payload[{}];".format(i, i_incr)) elif arg_type == "uint" or arg_type == "fixed": W("\tuint32_t arg{} = payload[{}];".format(i, i_incr)) W("\tdo_{}({});".format(func_name, ", ".join(tmp_names))) if num_obj_args == 0: W("\t(void)mt;") if num_fd_args == 0: W("\t(void)fds;") if num_reg_args == 0: W("\t(void)payload;") W("}") def load_msg_data(func_name, func, for_export): w_args = [] for arg in func: if arg.tag != "arg": continue arg_name = arg.attrib["name"] arg_type = arg.attrib["type"] arg_interface = arg.attrib["interface"] if "interface" in arg.attrib else None if arg_type == "new_id" and arg_interface is None: w_args.append(("interface", "string", None)) w_args.append(("version", "uint", None)) w_args.append((arg_name, "new_id", None)) else: w_args.append((arg_name, arg_type, arg_interface)) new_objs = [] for arg_name, arg_type, arg_interface in w_args: if arg_type == "new_id": new_objs.append( "&intf_" + arg_interface if arg_interface is not None else "NULL" ) # gap coding: 0=end,1=new_obj,2=array,3=string num_fd_args = 0 gaps = [0] gap_ends = [] for arg_name, arg_type, arg_interface in w_args: if arg_type == "fd": num_fd_args += 1 continue gaps[-1] += 1 if arg_type in ("new_id", "string", "array"): gap_ends.append({"new_id": 1, "string": 3, "array": 2}[arg_type]) gaps.append(0) gap_ends.append(0) gap_codes = [str(g * 4 + e) for g, e in zip(gaps, gap_ends)] is_destructor = "type" in func.attrib and func.attrib["type"] == "destructor" is_request = item.tag == "request" short_name = func.attrib["name"] return ( is_request, func_name, short_name, new_objs, gap_codes, is_destructor, num_fd_args, for_export, ) def write_interface( ostream, iface_name, func_data, 
gap_code_array, new_obj_array, dest_name ): reqs, evts = [], [] for x in func_data: if x[0]: reqs.append(x) else: evts.append(x) W = lambda *x: print(*x, file=ostream) if len(reqs) > 0 or len(evts) > 0: W("static const struct msg_data msgs_" + iface_name + "[] = {") msg_names = [] for x in reqs + evts: ( is_request, func_name, short_name, new_objs, gap_codes, is_destructor, num_fd_args, for_export, ) = x msg_names.append(short_name) mda = [] mda.append( "gaps_{} + {}".format(dest_name, get_offset(gap_code_array, gap_codes)) ) if len(new_objs) > 0: mda.append( "objt_{} + {}".format(dest_name, get_offset(new_obj_array, new_objs)) ) else: mda.append("NULL") mda.append(("call_" + func_name) if for_export else "NULL") mda.append(str(num_fd_args)) mda.append("true" if is_destructor else "false") W("\t{" + ", ".join(mda) + "},") mcn = "NULL" if len(reqs) > 0 or len(evts) > 0: W("};") mcn = "msgs_" + iface_name W("const struct wp_interface intf_" + iface_name + " = {") W("\t" + mcn + ",") W("\t" + str(len(reqs)) + ",") W("\t" + str(len(evts)) + ",") W('\t"{}",'.format(iface_name)) W('\t"{}",'.format("\\0".join(msg_names))) W("};") if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("mode", help="Either 'header' or 'data'.") parser.add_argument( "export_list", help="List of events/requests which need parsing." 
) parser.add_argument("output_file", help="C file to create.") parser.add_argument("protocols", nargs="+", help="XML protocol files to use.") args = parser.parse_args() is_header = {"data": False, "header": True}[args.mode] if is_header: assert args.output_file[-2:] == ".h" else: assert args.output_file[-2:] == ".c" dest_name = os.path.basename(args.output_file)[:-2].replace("-", "_") export_list = open(args.export_list).read().split("\n") intfset = set() for source in args.protocols: tree = ET.parse(source) root = tree.getroot() for intf in root: if intf.tag == "interface": intfset.add(intf.attrib["name"]) for msg in intf: for arg in msg: if "interface" in arg.attrib: intfset.add(arg.attrib["interface"]) interfaces = sorted(intfset) header_guard = "{}_H".format(dest_name.upper()) with open(args.output_file, "w") as ostream: W = lambda *x: print(*x, file=ostream) if is_header: W("#ifndef {}".format(header_guard)) W("#define {}".format(header_guard)) W() W('#include "symgen_types.h"') if not is_header: W("#include ") for intf in interfaces: W("extern const struct wp_interface intf_{};".format(intf)) gap_code_list = [] new_obj_list = [] interface_data = [] for source in sorted(args.protocols): tree = ET.parse(source) root = tree.getroot() for interface in root: if interface.tag != "interface": continue iface_name = interface.attrib["name"] write_version( is_header, ostream, iface_name, interface.attrib["version"] ) func_data = [] for item in interface: if item.tag == "enum": write_enum(is_header, ostream, iface_name, item) elif item.tag == "request" or item.tag == "event": is_req = item.tag == "request" func_name = ( iface_name + "_" + ("req" if is_req else "evt") + "_" + item.attrib["name"] ) for_export = is_exportable(func_name, export_list) if for_export: write_func(is_header, ostream, func_name, item) if not is_header: func_data.append(load_msg_data(func_name, item, for_export)) elif item.tag == "description": pass else: raise Exception(item.tag) for x in 
func_data: gap_code_list.append(x[4]) new_obj_list.append(x[3]) interface_data.append((iface_name, func_data)) if not is_header: gap_code_array = shortest_superstring(gap_code_list) new_obj_array = shortest_superstring(new_obj_list) if new_obj_array is not None: W("static const struct wp_interface *objt_" + dest_name + "[] = {") W("\t" + ",\n\t".join(new_obj_array)) W("};") if gap_code_array is not None: W("static const uint16_t gaps_" + dest_name + "[] = {") W("\t" + ",\n\t".join(gap_code_array)) W("};") for iface_name, func_data in interface_data: write_interface( ostream, iface_name, func_data, gap_code_array, new_obj_array, dest_name, ) if is_header: W() W("#endif /* {} */".format(header_guard)) waypipe-v0.9.1/protocols/symgen_types.h000066400000000000000000000025461463133614300203230ustar00rootroot00000000000000#ifndef SYMGEN_TYPES_H #define SYMGEN_TYPES_H #include #include struct context; struct message_tracker; struct wp_object; typedef void (*wp_callfn_t)(struct context *ctx, const uint32_t *payload, const int *fds, struct message_tracker *mt); #define GAP_CODE_END 0x0 #define GAP_CODE_OBJ 0x1 #define GAP_CODE_ARR 0x2 #define GAP_CODE_STR 0x3 struct msg_data { /* Number of 4-byte blocks until next nontrivial input. 
* (Note: 16-bit length is sufficient since message lengths also 16-bit) * Lowest 2 bits indicate if what follows is end/obj/array/string */ const uint16_t* gaps; /* Pointer to new object types, can be null if none indicated */ const struct wp_interface **new_objs; /* Function pointer to parse + invoke do_ handler */ const wp_callfn_t call; /* Number of associated file descriptors */ const int16_t n_fds; /* Whether message destroys the object */ bool is_destructor; }; struct wp_interface { /* msgs[0..nreq-1] are reqs; msgs[nreq..nreq+nevt-1] are evts */ const struct msg_data *msgs; const int nreq, nevt; /* The name of the interface */ const char *name; /* The names of the messages, in order; stored tightly packed */ const char *msg_names; }; /* User should define this function. */ struct wp_object *get_object(struct message_tracker *mt, uint32_t id, const struct wp_interface *intf); #endif /* SYMGEN_TYPES_H */ waypipe-v0.9.1/protocols/virtual-keyboard-unstable-v1.xml000066400000000000000000000114261463133614300235660ustar00rootroot00000000000000 Copyright © 2008-2011 Kristian Høgsberg Copyright © 2010-2013 Intel Corporation Copyright © 2012-2013 Collabora, Ltd. Copyright © 2018 Purism SPC Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The virtual keyboard provides an application with requests which emulate the behaviour of a physical keyboard. This interface can be used by clients on its own to provide raw input events, or it can accompany the input method protocol.

Provide a file descriptor to the compositor which can be memory-mapped to provide a keyboard mapping description. Format carries a value from the keymap_format enumeration.

A key was pressed or released. The time argument is a timestamp with millisecond granularity, with an undefined base. All requests regarding a single object must share the same clock. Keymap must be set before issuing this request. State carries a value from the key_state enumeration.

Notifies the compositor that the modifier and/or group state has changed, and it should update state. The client should use the wl_keyboard.modifiers event to synchronize its internal state with seat state. Keymap must be set before issuing this request.

A virtual keyboard manager allows an application to provide keyboard input events as if they came from a physical keyboard.

Creates a new virtual keyboard associated with a seat. If the compositor enables a keyboard to perform arbitrary actions, it should present an error when an untrusted client requests a new keyboard.
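The keymap request above expects a file descriptor the compositor can memory-map. A minimal sketch of producing such a descriptor on Linux, assuming `memfd_create` with sealing is available; the helper name `make_keymap_fd` is hypothetical and not part of waypipe or the protocol:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create a mappable file descriptor carrying a keymap string, of the
 * kind that could be passed with zwp_virtual_keyboard_v1.keymap.
 * Includes the trailing NUL, as xkb keymap consumers expect.
 * Returns the fd, or -1 on failure; writes the length to *len_out. */
static int make_keymap_fd(const char *keymap, size_t *len_out)
{
	size_t len = strlen(keymap) + 1;
	int fd = memfd_create("keymap", MFD_CLOEXEC | MFD_ALLOW_SEALING);
	if (fd == -1)
		return -1;
	if (ftruncate(fd, (off_t)len) == -1 ||
			write(fd, keymap, len) != (ssize_t)len) {
		close(fd);
		return -1;
	}
	/* Seal size and contents so the receiver can trust them while
	 * the region is mapped */
	fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE);
	*len_out = len;
	return fd;
}
```

The receiver would mmap the descriptor read-only; sealing is optional but lets it avoid defensive copies.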
waypipe-v0.9.1/protocols/wayland-drm.xml

Copyright © 2008-2011 Kristian Høgsberg
Copyright © 2010-2011 Intel Corporation

Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.

THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Bitmask of capabilities.

waypipe-v0.9.1/protocols/wayland.xml

Copyright © 2008-2011 Kristian Høgsberg
Copyright © 2010-2011 Intel Corporation
Copyright © 2012-2013 Collabora, Ltd.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The core global object. This is a special singleton object. It is used for internal Wayland protocol features.

The sync request asks the server to emit the 'done' event on the returned wl_callback object. Since requests are handled in-order and events are delivered in-order, this can be used as a barrier to ensure all previous requests and the resulting events have been handled. The object returned by this request will be destroyed by the compositor after the callback is fired and as such the client must not attempt to use it after that point. The callback_data passed in the callback is the event serial.

This request creates a registry object that allows the client to list and bind the global objects available from the compositor. It should be noted that the server side resources consumed in response to a get_registry request can only be released when the client disconnects, not when the client side proxy is destroyed.
Therefore, clients should invoke get_registry as infrequently as possible to avoid wasting memory.

The error event is sent out when a fatal (non-recoverable) error has occurred. The object_id argument is the object where the error occurred, most often in response to a request to that object. The code identifies the error and is defined by the object interface. As such, each interface defines its own set of error codes. The message is a brief description of the error, for (debugging) convenience. These errors are global and can be emitted in response to any server request.

This event is used internally by the object ID management logic. When a client deletes an object that it had created, the server will send this event to acknowledge that it has seen the delete request. When the client receives this event, it will know that it can safely reuse the object ID.

The singleton global registry object. The server has a number of global objects that are available to all clients. These objects typically represent an actual object in the server (for example, an input device) or they are singleton objects that provide extension functionality. When a client creates a registry object, the registry object will emit a global event for each global currently in the registry. Globals come and go as a result of device or monitor hotplugs, reconfiguration or other events, and the registry will send out global and global_remove events to keep the client up to date with the changes. To mark the end of the initial burst of events, the client can use the wl_display.sync request immediately after calling wl_display.get_registry. A client can bind to a global object by using the bind request. This creates a client-side handle that lets the object emit events to the client and lets the client invoke requests on the object.

Binds a new, client-created object to the server using the specified name as the identifier.

Notify the client of global objects.
The event notifies the client that a global object with the given name is now available, and it implements the given version of the given interface.

Notify the client of removed global objects. This event notifies the client that the global identified by name is no longer available. If the client bound to the global using the bind request, the client should now destroy that object. The object remains valid and requests to the object will be ignored until the client destroys it, to avoid races between the global going away and a client sending a request to it.

Clients can handle the 'done' event to get notified when the related request is done. Note, because wl_callback objects are created from multiple independent factory interfaces, the wl_callback interface is frozen at version 1.

Notify the client when the related request is done.

A compositor. This object is a singleton global. The compositor is in charge of combining the contents of multiple surfaces into one displayable output.

Ask the compositor to create a new surface.

Ask the compositor to create a new region.

The wl_shm_pool object encapsulates a piece of memory shared between the compositor and client. Through the wl_shm_pool object, the client can allocate shared memory wl_buffer objects. All objects created through the same pool share the same underlying mapped memory. Reusing the mapped memory avoids the setup/teardown overhead and is useful when interactively resizing a surface or for many small buffers.

Create a wl_buffer object from the pool. The buffer is created offset bytes into the pool and has width and height as specified. The stride argument specifies the number of bytes from the beginning of one row to the beginning of the next. The format is the pixel format of the buffer and must be one of those advertised through the wl_shm.format event. A buffer will keep a reference to the pool it was created from so it is valid to destroy the pool immediately after creating a buffer from it.
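The offset/stride geometry of create_buffer implies a simple validity check: row i begins at offset + i * stride, so the last byte of the buffer lies at offset + stride * height. A sketch of that arithmetic; `buffer_fits_pool` is a hypothetical helper, not a waypipe or libwayland function:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Check that a create_buffer request of the given geometry fits inside
 * a pool of pool_size bytes: each of the height rows starts stride
 * bytes after the previous one, and a row must hold at least
 * width * bytes_per_pixel bytes of pixel data. */
static bool buffer_fits_pool(int32_t pool_size, int32_t offset,
		int32_t width, int32_t height, int32_t stride,
		int32_t bytes_per_pixel)
{
	if (offset < 0 || width <= 0 || height <= 0 ||
			stride < width * bytes_per_pixel)
		return false;
	/* 64-bit arithmetic avoids overflow on large strides */
	int64_t end = (int64_t)offset + (int64_t)stride * height;
	return end <= pool_size;
}
```

A compositor performing this check would raise a protocol error on failure rather than return false.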
Destroy the shared memory pool. The mmapped memory will be released when all buffers that have been created from this pool are gone.

This request will cause the server to remap the backing memory for the pool from the file descriptor passed when the pool was created, but using the new size. This request can only be used to make the pool bigger. This request only changes the amount of bytes that are mmapped by the server and does not touch the file corresponding to the file descriptor passed at creation time. It is the client's responsibility to ensure that the file is at least as big as the new pool size.

A singleton global object that provides support for shared memory. Clients can create wl_shm_pool objects using the create_pool request. On binding the wl_shm object one or more format events are emitted to inform clients about the valid pixel formats that can be used for buffers.

These errors can be emitted in response to wl_shm requests.

This describes the memory layout of an individual pixel. All renderers should support argb8888 and xrgb8888 but any other formats are optional and may not be supported by the particular renderer in use. The drm format codes match the macros defined in drm_fourcc.h, except argb8888 and xrgb8888. The formats actually supported by the compositor will be reported by the format event. For all wl_shm formats and unless specified in another protocol extension, pre-multiplied alpha is used for pixel values.

Create a new wl_shm_pool object. The pool can be used to create shared memory based buffer objects. The server will mmap size bytes of the passed file descriptor, to use as backing memory for the pool.

Informs the client about a valid pixel format that can be used for buffers. Known formats include argb8888 and xrgb8888.

A buffer provides the content for a wl_surface. Buffers are created through factory interfaces such as wl_shm, wp_linux_buffer_params (from the linux-dmabuf protocol extension) or similar.
It has a width and a height and can be attached to a wl_surface, but the mechanism by which a client provides and updates the contents is defined by the buffer factory interface. If the buffer uses a format that has an alpha channel, the alpha channel is assumed to be premultiplied in the color channels unless otherwise specified. Note, because wl_buffer objects are created from multiple independent factory interfaces, the wl_buffer interface is frozen at version 1.

Destroy a buffer. If and how you need to release the backing storage is defined by the buffer factory interface. For possible side-effects to a surface, see wl_surface.attach.

Sent when this wl_buffer is no longer used by the compositor. The client is now free to reuse or destroy this buffer and its backing storage. If a client receives a release event before the frame callback requested in the same wl_surface.commit that attaches this wl_buffer to a surface, then the client is immediately free to reuse the buffer and its backing storage, and does not need a second buffer for the next surface content update. Typically this is possible when the compositor maintains a copy of the wl_surface contents, e.g. as a GL texture. This is an important optimization for GL(ES) compositors with wl_shm clients.

A wl_data_offer represents a piece of data offered for transfer by another client (the source client). It is used by the copy-and-paste and drag-and-drop mechanisms. The offer describes the different mime types that the data can be converted to and provides the mechanism for transferring the data directly from the source client.

Indicate that the client can accept the given mime type, or NULL for not accepted. For objects of version 2 or older, this request is used by the client to give feedback whether the client can receive the given mime type, or NULL if none is accepted; the feedback does not determine whether the drag-and-drop operation succeeds or not.
For objects of version 3 or newer, this request determines the final result of the drag-and-drop operation. If the end result is that no mime types were accepted, the drag-and-drop operation will be cancelled and the corresponding drag source will receive wl_data_source.cancelled. Clients may still use this event in conjunction with wl_data_source.action for feedback.

To transfer the offered data, the client issues this request and indicates the mime type it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the mime type representation requested and then closes the file descriptor. The receiving client reads from the read end of the pipe until EOF and then closes its end, at which point the transfer is complete. This request may happen multiple times for different mime types, both before and after wl_data_device.drop. Drag-and-drop destination clients may preemptively fetch data or examine it more closely to determine acceptance.

Destroy the data offer.

Sent immediately after creating the wl_data_offer object. One event per offered mime type.

Notifies the compositor that the drag destination successfully finished the drag-and-drop operation. Upon receiving this request, the compositor will emit wl_data_source.dnd_finished on the drag source client. It is a client error to perform other requests than wl_data_offer.destroy after this one. It is also an error to perform this request after a NULL mime type has been set in wl_data_offer.accept or no action was received through wl_data_offer.action. If a wl_data_offer.finish request is received for a non drag and drop operation, the invalid_finish protocol error is raised.

Sets the actions that the destination side client supports for this operation. This request may trigger the emission of wl_data_source.action and wl_data_offer.action events if the compositor needs to change the selected action.
This request can be called multiple times throughout the drag-and-drop operation, typically in response to wl_data_device.enter or wl_data_device.motion events. This request determines the final result of the drag-and-drop operation. If the end result is that no action is accepted, the drag source will receive wl_data_source.cancelled. The dnd_actions argument must contain only values expressed in the wl_data_device_manager.dnd_actions enum, and the preferred_action argument must only contain one of those values set, otherwise it will result in a protocol error.

While managing an "ask" action, the destination drag-and-drop client may perform further wl_data_offer.receive requests, and is expected to perform one last wl_data_offer.set_actions request with a preferred action other than "ask" (and optionally wl_data_offer.accept) before requesting wl_data_offer.finish, in order to convey the action selected by the user. If the preferred action is not in the wl_data_offer.source_actions mask, an error will be raised. If the "ask" action is dismissed (e.g. user cancellation), the client is expected to perform wl_data_offer.destroy right away. This request can only be made on drag-and-drop offers, a protocol error will be raised otherwise.

This event indicates the actions offered by the data source. It will be sent immediately after creating the wl_data_offer object, or anytime the source side changes its offered actions through wl_data_source.set_actions.

This event indicates the action selected by the compositor after matching the source/destination side actions. Only one action (or none) will be offered here. This event can be emitted multiple times during the drag-and-drop operation in response to destination side action changes through wl_data_offer.set_actions.
This event will no longer be emitted after wl_data_device.drop happened on the drag-and-drop destination; the client must honor the last action received, or the last preferred one set through wl_data_offer.set_actions when handling an "ask" action. Compositors may also change the selected action on the fly, mainly in response to keyboard modifier changes during the drag-and-drop operation. The most recent action received is always the valid one. Prior to receiving wl_data_device.drop, the chosen action may change (e.g. due to keyboard modifiers being pressed). At the time of receiving wl_data_device.drop the drag-and-drop destination must honor the last action received. Action changes may still happen after wl_data_device.drop, especially on "ask" actions, where the drag-and-drop destination may choose another action afterwards. Action changes happening at this stage are always the result of inter-client negotiation; the compositor shall no longer be able to induce a different action. Upon "ask" actions, it is expected that the drag-and-drop destination may potentially choose a different action and/or mime type, based on wl_data_offer.source_actions and finally chosen by the user (e.g. popping up a menu with the available options). The final wl_data_offer.set_actions and wl_data_offer.accept requests must happen before the call to wl_data_offer.finish.

The wl_data_source object is the source side of a wl_data_offer. It is created by the source client in a data transfer and provides a way to describe the offered data and a way to respond to requests to transfer the data.

This request adds a mime type to the set of mime types advertised to targets. Can be called several times to offer multiple types.

Destroy the data source.

Sent when a target accepts pointer_focus or motion events. If a target does not accept any of the offered types, type is NULL. Used for feedback during drag-and-drop.

Request for data from the client.
Send the data as the specified mime type over the passed file descriptor, then close it.

This data source is no longer valid. There are several reasons why this could happen:

- The data source has been replaced by another data source.
- The drag-and-drop operation was performed, but the drop destination did not accept any of the mime types offered through wl_data_source.target.
- The drag-and-drop operation was performed, but the drop destination did not select any of the actions present in the mask offered through wl_data_source.action.
- The drag-and-drop operation was performed but didn't happen over a surface.
- The compositor cancelled the drag-and-drop operation (e.g. compositor dependent timeouts to avoid stale drag-and-drop transfers).

The client should clean up and destroy this data source. For objects of version 2 or older, wl_data_source.cancelled will only be emitted if the data source was replaced by another data source.

Sets the actions that the source side client supports for this operation. This request may trigger wl_data_source.action and wl_data_offer.action events if the compositor needs to change the selected action. The dnd_actions argument must contain only values expressed in the wl_data_device_manager.dnd_actions enum, otherwise it will result in a protocol error. This request must be made once only, and can only be made on sources used in drag-and-drop, so it must be performed before wl_data_device.start_drag. Attempting to use the source other than for drag-and-drop will raise a protocol error.

The user performed the drop action. This event does not indicate acceptance; wl_data_source.cancelled may still be emitted afterwards if the drop destination does not accept any mime type. However, this event might not be received if the compositor cancelled the drag-and-drop operation before this event could happen. Note that the data_source may still be used in the future and should not be destroyed here.
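The pipe-based transfer described above for wl_data_offer.receive and wl_data_source.send needs no Wayland machinery to sketch: the source writes and closes its end, the destination reads until EOF. A minimal POSIX illustration; `read_all` is a hypothetical helper:

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Read from fd until EOF, as a drag-and-drop destination does on the
 * read end of the pipe it passed with wl_data_offer.receive. Returns
 * the number of bytes read (at most cap). */
static ssize_t read_all(int fd, char *buf, size_t cap)
{
	size_t total = 0;
	while (total < cap) {
		ssize_t n = read(fd, buf + total, cap - total);
		if (n <= 0)
			break; /* 0 means EOF: transfer complete */
		total += (size_t)n;
	}
	return (ssize_t)total;
}
```

In a real client the read end would be registered with the event loop as nonblocking rather than drained in a loop, but the EOF-terminated framing is the same.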
The drop destination finished interoperating with this data source, so the client is now free to destroy this data source and free all associated data. If the action used to perform the operation was "move", the source can now delete the transferred data.

This event indicates the action selected by the compositor after matching the source/destination side actions. Only one action (or none) will be offered here. This event can be emitted multiple times during the drag-and-drop operation, mainly in response to destination side changes through wl_data_offer.set_actions, and as the data device enters/leaves surfaces. It is only possible to receive this event after wl_data_source.dnd_drop_performed if the drag-and-drop operation ended in an "ask" action, in which case the final wl_data_source.action event will happen immediately before wl_data_source.dnd_finished. Compositors may also change the selected action on the fly, mainly in response to keyboard modifier changes during the drag-and-drop operation. The most recent action received is always the valid one. The chosen action may change alongside negotiation (e.g. an "ask" action can turn into a "move" operation), so the effects of the final action must always be applied in wl_data_offer.dnd_finished. Clients can trigger cursor surface changes from this point, so they reflect the current action.

There is one wl_data_device per seat which can be obtained from the global wl_data_device_manager singleton. A wl_data_device provides access to inter-client data transfer mechanisms such as copy-and-paste and drag-and-drop.

This request asks the compositor to start a drag-and-drop operation on behalf of the client. The source argument is the data source that provides the data for the eventual data transfer. If source is NULL, enter, leave and motion events are sent only to the client that initiated the drag and the client is expected to handle the data passing internally.
If source is destroyed, the drag-and-drop session will be cancelled. The origin surface is the surface where the drag originates and the client must have an active implicit grab that matches the serial. The icon surface is an optional (can be NULL) surface that provides an icon to be moved around with the cursor. Initially, the top-left corner of the icon surface is placed at the cursor hotspot, but subsequent wl_surface.attach requests can move the relative position. Attach requests must be confirmed with wl_surface.commit as usual. The icon surface is given the role of a drag-and-drop icon. If the icon surface already has another role, it raises a protocol error. The input region is ignored for wl_surfaces with the role of a drag-and-drop icon.

This request asks the compositor to set the selection to the data from the source on behalf of the client. To unset the selection, set the source to NULL.

The data_offer event introduces a new wl_data_offer object, which will subsequently be used in either the data_device.enter event (for drag-and-drop) or the data_device.selection event (for selections). Immediately following the data_device.data_offer event, the new data_offer object will send out data_offer.offer events to describe the mime types it offers.

This event is sent when an active drag-and-drop pointer enters a surface owned by the client. The position of the pointer at enter time is provided by the x and y arguments, in surface-local coordinates.

This event is sent when the drag-and-drop pointer leaves the surface and the session ends. The client must destroy the wl_data_offer introduced at enter time at this point.

This event is sent when the drag-and-drop pointer moves within the currently focused surface. The new position of the pointer is provided by the x and y arguments, in surface-local coordinates.

The event is sent when a drag-and-drop operation is ended because the implicit grab is removed.
The drag-and-drop destination is expected to honor the last action received through wl_data_offer.action; if the resulting action is "copy" or "move", the destination can still perform wl_data_offer.receive requests, and is expected to end all transfers with a wl_data_offer.finish request. If the resulting action is "ask", the action will not be considered final. The drag-and-drop destination is expected to perform one last wl_data_offer.set_actions request, or wl_data_offer.destroy in order to cancel the operation.

The selection event is sent out to notify the client of a new wl_data_offer for the selection for this device. The data_device.data_offer and the data_offer.offer events are sent out immediately before this event to introduce the data offer object. The selection event is sent to a client immediately before receiving keyboard focus and when a new selection is set while the client has keyboard focus. The data_offer is valid until a new data_offer or NULL is received or until the client loses keyboard focus. Switching surface with keyboard focus within the same client doesn't mean a new selection will be sent. The client must destroy the previous selection data_offer, if any, upon receiving this event.

This request destroys the data device.

The wl_data_device_manager is a singleton global object that provides access to inter-client data transfer mechanisms such as copy-and-paste and drag-and-drop. These mechanisms are tied to a wl_seat and this interface lets a client get a wl_data_device corresponding to a wl_seat. Depending on the version bound, the objects created from the bound wl_data_device_manager object will have different requirements for functioning properly. See wl_data_source.set_actions, wl_data_offer.accept and wl_data_offer.finish for details.

Create a new data source.

Create a new data device for a given seat.

This is a bitmask of the available/preferred actions in a drag-and-drop operation.
In the compositor, the selected action is a result of matching the actions offered by the source and destination sides. "action" events with a "none" action will be sent to both source and destination if there is no match. All further checks will effectively happen on (source actions ∩ destination actions). In addition, compositors may also pick different actions in reaction to key modifiers being pressed. One common design that is used in major toolkits (and the behavior recommended for compositors) is:

- If no modifiers are pressed, the first match (in bit order) will be used.
- Pressing Shift selects "move", if enabled in the mask.
- Pressing Control selects "copy", if enabled in the mask.

Behavior beyond that is considered implementation-dependent. Compositors may for example bind other modifiers (like Alt/Meta) or drags initiated with other buttons than BTN_LEFT to specific actions (e.g. "ask").

This interface is implemented by servers that provide desktop-style user interfaces. It allows clients to associate a wl_shell_surface with a basic surface. Note! This protocol is deprecated and not intended for production use. For desktop-style user interfaces, use xdg_shell. Compositors and clients should not implement this interface.

Create a shell surface for an existing surface. This gives the wl_surface the role of a shell surface. If the wl_surface already has another role, it raises a protocol error. Only one shell surface can be associated with a given surface.

An interface that may be implemented by a wl_surface, for implementations that provide a desktop-style user interface. It provides requests to treat surfaces like toplevel, fullscreen or popup windows, move, resize or maximize them, associate metadata like title and class, etc. On the server side the object is automatically destroyed when the related wl_surface is destroyed. On the client side, wl_shell_surface_destroy() must be called before destroying the wl_surface object.
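The action-matching rule quoted earlier (intersect the source and destination masks, then take the first match in bit order when no modifier is pressed) reduces to two bitwise operations. A sketch, assuming the wl_data_device_manager.dnd_actions values copy = 1, move = 2, ask = 4; `choose_dnd_action` is a hypothetical helper:

```c
#include <assert.h>
#include <stdint.h>

/* Default compositor choice when no modifier is pressed: intersect the
 * two action masks and keep only the lowest set bit ("first match in
 * bit order"). Returns 0 for "none" when there is no match. */
static uint32_t choose_dnd_action(uint32_t source_actions,
		uint32_t dest_actions)
{
	uint32_t match = source_actions & dest_actions;
	return match & (~match + 1); /* isolate lowest set bit */
}
```

Modifier handling would then override this result: Shift forces move (bit 2) and Control forces copy (bit 1) when those bits survive the intersection.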
A client must respond to a ping event with a pong request or the client may be deemed unresponsive.

Start a pointer-driven move of the surface. This request must be used in response to a button press event. The server may ignore move requests depending on the state of the surface (e.g. fullscreen or maximized).

These values are used to indicate which edge of a surface is being dragged in a resize operation. The server may use this information to adapt its behavior, e.g. choose an appropriate cursor image.

Start a pointer-driven resizing of the surface. This request must be used in response to a button press event. The server may ignore resize requests depending on the state of the surface (e.g. fullscreen or maximized).

Map the surface as a toplevel surface. A toplevel surface is not fullscreen, maximized or transient.

These flags specify details of the expected behaviour of transient surfaces. Used in the set_transient request.

Map the surface relative to an existing surface. The x and y arguments specify the location of the upper left corner of the surface relative to the upper left corner of the parent surface, in surface-local coordinates. The flags argument controls details of the transient behaviour.

Hints to indicate to the compositor how to deal with a conflict between the dimensions of the surface and the dimensions of the output. The compositor is free to ignore this parameter.

Map the surface as a fullscreen surface. If an output parameter is given then the surface will be made fullscreen on that output. If the client does not specify the output then the compositor will apply its policy - usually choosing the output on which the surface has the biggest surface area. The client may specify a method to resolve a size conflict between the output size and the surface size - this is provided through the method parameter. The framerate parameter is used only when the method is set to "driver", to indicate the preferred framerate.
A value of 0 indicates that the client does not care about framerate. The framerate is specified in mHz; that is, a framerate of 60000 is 60 Hz. A method of "scale" or "driver" implies a scaling operation of the surface, either via a direct scaling operation or a change of the output mode. This will override any kind of output scaling, so that mapping a surface with a buffer size equal to the mode can fill the screen independent of buffer_scale. A method of "fill" means we don't scale up the buffer, however any output scale is applied. This means that you may run into an edge case where the application maps a buffer with the same size of the output mode but buffer_scale 1 (thus making a surface larger than the output). In this case it is allowed to downscale the results to fit the screen. The compositor must reply to this request with a configure event with the dimensions for the output on which the surface will be made fullscreen.

Map the surface as a popup. A popup surface is a transient surface with an added pointer grab. An existing implicit grab will be changed to owner-events mode, and the popup grab will continue after the implicit grab ends (i.e. releasing the mouse button does not cause the popup to be unmapped). The popup grab continues until the window is destroyed or a mouse button is pressed in any other client's window. A click in any of the client's surfaces is reported as normal, however, clicks in other clients' surfaces will be discarded and trigger the callback. The x and y arguments specify the location of the upper left corner of the surface relative to the upper left corner of the parent surface, in surface-local coordinates.

Map the surface as a maximized surface. If an output parameter is given then the surface will be maximized on that output. If the client does not specify the output then the compositor will apply its policy - usually choosing the output on which the surface has the biggest surface area.
The compositor will reply with a configure event telling the expected new surface size. The operation is completed on the next buffer attach to this surface.

A maximized surface typically fills the entire output it is bound to, except for desktop elements such as panels. This is the main difference between a maximized shell surface and a fullscreen shell surface. The details depend on the compositor implementation.

Set a short title for the surface. This string may be used to identify the surface in a task bar, window list, or other user interface elements provided by the compositor. The string must be encoded in UTF-8.

Set a class for the surface. The surface class identifies the general class of applications to which the surface belongs. A common convention is to use the file name (or the full path if it is a non-standard location) of the application's .desktop file as the class.

Ping a client to check if it is receiving events and sending requests. A client is expected to reply with a pong request.

The configure event asks the client to resize its surface. The size is a hint, in the sense that the client is free to ignore it if it doesn't resize, to pick a smaller size (to satisfy aspect ratio), or to resize in steps of NxM pixels. The edges parameter provides a hint about how the surface was resized. The client may use this information to decide how to adjust its content to the new size (e.g. a scrolling area might adjust its content position to leave the viewable content unmoved). The client is free to dismiss all but the last configure event it received. The width and height arguments specify the size of the window in surface-local coordinates.

The popup_done event is sent out when a popup grab is broken, that is, when the user clicks a surface that doesn't belong to the client owning the popup surface.

A surface is a rectangular area that may be displayed on zero or more outputs, and shown any number of times at the compositor's discretion.
They can present wl_buffers, receive user input, and define a local coordinate system.

The size of a surface (and relative positions on it) is described in surface-local coordinates, which may differ from the buffer coordinates of the pixel content, in case a buffer_transform or a buffer_scale is used.

A surface without a "role" is fairly useless: a compositor does not know where, when or how to present it. The role is the purpose of a wl_surface. Examples of roles are a cursor for a pointer (as set by wl_pointer.set_cursor), a drag icon (wl_data_device.start_drag), a sub-surface (wl_subcompositor.get_subsurface), and a window as defined by a shell protocol (e.g. wl_shell.get_shell_surface).

A surface can have only one role at a time. Initially a wl_surface does not have a role. Once a wl_surface is given a role, it is set permanently for the whole lifetime of the wl_surface object. Giving the current role again is allowed, unless explicitly forbidden by the relevant interface specification.

Surface roles are given by requests in other interfaces such as wl_pointer.set_cursor. The request should explicitly mention that this request gives a role to a wl_surface. Often, this request also creates a new protocol object that represents the role and adds additional functionality to wl_surface. When a client wants to destroy a wl_surface, they must destroy this role object before the wl_surface, otherwise a defunct_role_object error is sent.

Destroying the role object does not remove the role from the wl_surface, but it may stop the wl_surface from "playing the role". For instance, if a wl_subsurface object is destroyed, the wl_surface it was created for will be unmapped and forget its position and z-order. It is allowed to create a wl_subsurface for the same wl_surface again, but it is not allowed to use the wl_surface as a cursor (cursor is a different role than sub-surface, and role switching is not allowed).
These errors can be emitted in response to wl_surface requests.

Deletes the surface and invalidates its object ID.

Set a buffer as the content of this surface.

The new size of the surface is calculated based on the buffer size transformed by the inverse buffer_transform and the inverse buffer_scale. This means that at commit time the supplied buffer size must be an integer multiple of the buffer_scale. If that's not the case, an invalid_size error is sent.

The x and y arguments specify the location of the new pending buffer's upper left corner, relative to the current buffer's upper left corner, in surface-local coordinates. In other words, the x and y, combined with the new surface size define in which directions the surface's size changes. Setting anything other than 0 as x and y arguments is discouraged, and should instead be replaced with using the separate wl_surface.offset request.

When the bound wl_surface version is 5 or higher, passing any non-zero x or y is a protocol violation, and will result in an 'invalid_offset' error being raised. The x and y arguments are ignored and do not change the pending state. To achieve equivalent semantics, use wl_surface.offset.

Surface contents are double-buffered state, see wl_surface.commit.

The initial surface contents are void; there is no content. wl_surface.attach assigns the given wl_buffer as the pending wl_buffer. wl_surface.commit makes the pending wl_buffer the new surface contents, and the size of the surface becomes the size calculated from the wl_buffer, as described above. After commit, there is no pending buffer until the next attach.

Committing a pending wl_buffer allows the compositor to read the pixels in the wl_buffer. The compositor may access the pixels at any time after the wl_surface.commit request. When the compositor will not access the pixels anymore, it will send the wl_buffer.release event. Only after receiving wl_buffer.release, the client may reuse the wl_buffer.
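The size rule above (surface size = buffer size divided by buffer_scale, with the buffer size required to be an integer multiple of the scale) can be sketched as a small check; all names here are illustrative, not libwayland API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the commit-time size rule: returns false where a compositor
 * would raise invalid_size (or invalid_scale for a non-positive scale). */
static bool surface_size_from_buffer(int32_t buf_w, int32_t buf_h,
                int32_t scale, int32_t *out_w, int32_t *out_h)
{
        if (scale <= 0 || buf_w % scale != 0 || buf_h % scale != 0)
                return false; /* protocol error on commit */
        *out_w = buf_w / scale;
        *out_h = buf_h / scale;
        return true;
}
```

For example, a 1280x960 buffer at scale 2 yields a 640x480 surface, while a 1281-wide buffer at scale 2 is rejected.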
A wl_buffer that has been attached and then replaced by another attach instead of committed will not receive a release event, and is not used by the compositor.

If a pending wl_buffer has been committed to more than one wl_surface, the delivery of wl_buffer.release events becomes undefined. A well behaved client should not rely on wl_buffer.release events in this case. Alternatively, a client could create multiple wl_buffer objects from the same backing storage or use wp_linux_buffer_release.

Destroying the wl_buffer after wl_buffer.release does not change the surface contents. Destroying the wl_buffer before wl_buffer.release is allowed as long as the underlying buffer storage isn't re-used (this can happen e.g. on client process termination). However, if the client destroys the wl_buffer before receiving the wl_buffer.release event and mutates the underlying buffer storage, the surface contents become undefined immediately.

If wl_surface.attach is sent with a NULL wl_buffer, the following wl_surface.commit will remove the surface content.

This request is used to describe the regions where the pending buffer is different from the current surface contents, and where the surface therefore needs to be repainted. The compositor ignores the parts of the damage that fall outside of the surface.

Damage is double-buffered state, see wl_surface.commit.

The damage rectangle is specified in surface-local coordinates, where x and y specify the upper left corner of the damage rectangle.

The initial value for pending damage is empty: no damage. wl_surface.damage adds pending damage: the new pending damage is the union of old pending damage and the given rectangle. wl_surface.commit assigns pending damage as the current damage, and clears pending damage. The server will clear the current damage as it repaints the surface.

Note! New clients should not use this request.
Instead damage can be posted with wl_surface.damage_buffer which uses buffer coordinates instead of surface coordinates.

Request a notification when it is a good time to start drawing a new frame, by creating a frame callback. This is useful for throttling redrawing operations, and driving animations.

When a client is animating on a wl_surface, it can use the 'frame' request to get notified when it is a good time to draw and commit the next frame of animation. If the client commits an update earlier than that, it is likely that some updates will not make it to the display, and the client is wasting resources by drawing too often.

The frame request will take effect on the next wl_surface.commit. The notification will only be posted for one frame unless requested again. For a wl_surface, the notifications are posted in the order the frame requests were committed.

The server must send the notifications so that a client will not send excessive updates, while still allowing the highest possible update rate for clients that wait for the reply before drawing again. The server should give some time for the client to draw and commit after sending the frame callback events to let it hit the next output refresh.

A server should avoid signaling the frame callbacks if the surface is not visible in any way, e.g. the surface is off-screen, or completely obscured by other opaque surfaces.

The object returned by this request will be destroyed by the compositor after the callback is fired and as such the client must not attempt to use it after that point. The callback_data passed in the callback is the current time, in milliseconds, with an undefined base.

This request sets the region of the surface that contains opaque content. The opaque region is an optimization hint for the compositor that lets it optimize the redrawing of content behind opaque regions.
Setting an opaque region is not required for correct behaviour, but marking transparent content as opaque will result in repaint artifacts.

The opaque region is specified in surface-local coordinates. The compositor ignores the parts of the opaque region that fall outside of the surface.

Opaque region is double-buffered state, see wl_surface.commit. wl_surface.set_opaque_region changes the pending opaque region. wl_surface.commit copies the pending region to the current region. Otherwise, the pending and current regions are never changed.

The initial value for an opaque region is empty. Setting the pending opaque region has copy semantics, and the wl_region object can be destroyed immediately. A NULL wl_region causes the pending opaque region to be set to empty.

This request sets the region of the surface that can receive pointer and touch events. Input events happening outside of this region will try the next surface in the server surface stack. The compositor ignores the parts of the input region that fall outside of the surface.

The input region is specified in surface-local coordinates.

Input region is double-buffered state, see wl_surface.commit. wl_surface.set_input_region changes the pending input region. wl_surface.commit copies the pending region to the current region. Otherwise the pending and current regions are never changed, except cursor and icon surfaces are special cases, see wl_pointer.set_cursor and wl_data_device.start_drag.

The initial value for an input region is infinite. That means the whole surface will accept input. Setting the pending input region has copy semantics, and the wl_region object can be destroyed immediately. A NULL wl_region causes the input region to be set to infinite.

Surface state (input, opaque, and damage regions, attached buffers, etc.) is double-buffered. Protocol requests modify the pending state, as opposed to the current state in use by the compositor.
A commit request atomically applies all pending state, replacing the current state. After commit, the new pending state is as documented for each related request.

On commit, a pending wl_buffer is applied first, and all other state second. This means that all coordinates in double-buffered state are relative to the new wl_buffer coming into use, except for wl_surface.attach itself. If there is no pending wl_buffer, the coordinates are relative to the current surface contents.

All requests that need a commit to become effective are documented to affect double-buffered state. Other interfaces may add further double-buffered surface state.

This is emitted whenever a surface's creation, movement, or resizing results in some part of it being within the scanout region of an output. Note that a surface may be overlapping with zero or more outputs.

This is emitted whenever a surface's creation, movement, or resizing results in it no longer having any part of it within the scanout region of an output.

Clients should not use the number of outputs the surface is on for frame throttling purposes. The surface might be hidden even if no leave event has been sent, and the compositor might expect new surface content updates even if no enter event has been sent. The frame event should be used instead.

This request sets an optional transformation on how the compositor interprets the contents of the buffer attached to the surface. The accepted values for the transform parameter are the values for wl_output.transform.

Buffer transform is double-buffered state, see wl_surface.commit. A newly created surface has its buffer transformation set to normal. wl_surface.set_buffer_transform changes the pending buffer transformation. wl_surface.commit copies the pending buffer transformation to the current one. Otherwise, the pending and current values are never changed.
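The pending/current split described above can be modeled with a few lines of C; this is a minimal sketch with illustrative names, not a real compositor data structure:

```c
#include <assert.h>

/* Minimal model of double-buffered surface state: requests only mutate
 * the pending copy; commit atomically replaces current with pending. */
struct surface_state {
        int buffer_scale;
};

struct model_surface {
        struct surface_state current, pending;
};

static void model_set_buffer_scale(struct model_surface *s, int scale)
{
        s->pending.buffer_scale = scale; /* invisible until commit */
}

static void model_commit(struct model_surface *s)
{
        s->current = s->pending; /* apply all pending state atomically */
}
```

Until model_commit runs, the compositor-visible (current) scale is unchanged, mirroring how a real compositor only reads committed state.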
The purpose of this request is to allow clients to render content according to the output transform, thus permitting the compositor to use certain optimizations even if the display is rotated. Using hardware overlays and scanning out a client buffer for fullscreen surfaces are examples of such optimizations. Those optimizations are highly dependent on the compositor implementation, so the use of this request should be considered on a case-by-case basis.

Note that if the transform value includes 90 or 270 degree rotation, the width of the buffer will become the surface height and the height of the buffer will become the surface width.

If transform is not one of the values from the wl_output.transform enum the invalid_transform protocol error is raised.

This request sets an optional scaling factor on how the compositor interprets the contents of the buffer attached to the window.

Buffer scale is double-buffered state, see wl_surface.commit. A newly created surface has its buffer scale set to 1. wl_surface.set_buffer_scale changes the pending buffer scale. wl_surface.commit copies the pending buffer scale to the current one. Otherwise, the pending and current values are never changed.

The purpose of this request is to allow clients to supply higher resolution buffer data for use on high resolution outputs. It is intended that you pick the same buffer scale as the scale of the output that the surface is displayed on. This means the compositor can avoid scaling when rendering the surface on that output.

Note that if the scale is larger than 1, then you have to attach a buffer that is larger (by a factor of scale in each dimension) than the desired surface size.

If scale is not positive the invalid_scale protocol error is raised.

This request is used to describe the regions where the pending buffer is different from the current surface contents, and where the surface therefore needs to be repainted.
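The width/height swap for 90/270-degree transforms can be sketched as follows. In the core protocol the wl_output.transform values with a 90 or 270 degree component (90, 270, flipped_90, flipped_270) happen to be exactly the odd enum values, so a parity test suffices; the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: derive surface dimensions from buffer dimensions, swapping
 * width and height for transforms that rotate by 90 or 270 degrees. */
static void surface_dims_for_transform(int32_t transform, int32_t buf_w,
                int32_t buf_h, int32_t *out_w, int32_t *out_h)
{
        if (transform & 1) { /* 90, 270, flipped_90, flipped_270 */
                *out_w = buf_h;
                *out_h = buf_w;
        } else {
                *out_w = buf_w;
                *out_h = buf_h;
        }
}
```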
The compositor ignores the parts of the damage that fall outside of the surface.

Damage is double-buffered state, see wl_surface.commit.

The damage rectangle is specified in buffer coordinates, where x and y specify the upper left corner of the damage rectangle.

The initial value for pending damage is empty: no damage. wl_surface.damage_buffer adds pending damage: the new pending damage is the union of old pending damage and the given rectangle. wl_surface.commit assigns pending damage as the current damage, and clears pending damage. The server will clear the current damage as it repaints the surface.

This request differs from wl_surface.damage in only one way - it takes damage in buffer coordinates instead of surface-local coordinates. While this generally is more intuitive than surface coordinates, it is especially desirable when using wp_viewport or when a drawing library (like EGL) is unaware of buffer scale and buffer transform.

Note: Because buffer transformation changes and damage requests may be interleaved in the protocol stream, it is impossible to determine the actual mapping between surface and buffer damage until wl_surface.commit time. Therefore, compositors wishing to take both kinds of damage into account will have to accumulate damage from the two requests separately and only transform from one to the other after receiving the wl_surface.commit.

The x and y arguments specify the location of the new pending buffer's upper left corner, relative to the current buffer's upper left corner, in surface-local coordinates. In other words, the x and y, combined with the new surface size define in which directions the surface's size changes.

Surface location offset is double-buffered state, see wl_surface.commit.

This request is semantically equivalent to, and replaces, the x and y arguments in the wl_surface.attach request in wl_surface versions prior to 5. See wl_surface.attach for details.
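For the normal transform, the mapping between buffer damage and surface damage is a division by buffer_scale, rounding the origin down and the far edge up so no damaged pixel is lost; rotated or flipped transforms would additionally permute the rectangle. A sketch with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

struct rect {
        int32_t x, y, w, h;
};

/* Sketch: convert a buffer-coordinate damage rect to surface coordinates
 * for the normal transform, expanding to cover partial surface pixels. */
static struct rect buffer_damage_to_surface(struct rect b, int32_t scale)
{
        struct rect s;
        s.x = b.x / scale;
        s.y = b.y / scale;
        s.w = (b.x + b.w + scale - 1) / scale - s.x; /* round edge up */
        s.h = (b.y + b.h + scale - 1) / scale - s.y;
        return s;
}
```

For example, at scale 2 a 2x2 buffer rect at (1, 1) straddles surface pixels and expands to a 2x2 surface rect at (0, 0).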
This event indicates the preferred buffer scale for this surface. It is sent whenever the compositor's preference changes. It is intended that scaling aware clients use this event to scale their content and use wl_surface.set_buffer_scale to indicate the scale they have rendered with. This allows clients to supply a higher detail buffer.

This event indicates the preferred buffer transform for this surface. It is sent whenever the compositor's preference changes. It is intended that transform aware clients use this event to apply the transform to their content and use wl_surface.set_buffer_transform to indicate the transform they have rendered with.

A seat is a group of keyboards, pointer and touch devices. This object is published as a global during start up, or when such a device is hot plugged. A seat typically has a pointer and maintains a keyboard focus and a pointer focus.

This is a bitmask of capabilities this seat has; if a member is set, then it is present on the seat.

These errors can be emitted in response to wl_seat requests.

This is emitted whenever a seat gains or loses the pointer, keyboard or touch capabilities. The argument is a capability enum containing the complete set of capabilities this seat has.

When the pointer capability is added, a client may create a wl_pointer object using the wl_seat.get_pointer request. This object will receive pointer events until the capability is removed in the future.

When the pointer capability is removed, a client should destroy the wl_pointer objects associated with the seat where the capability was removed, using the wl_pointer.release request. No further pointer events will be received on these objects.

In some compositors, if a seat regains the pointer capability and a client has a previously obtained wl_pointer object of version 4 or less, that object may start sending pointer events again. This behavior is considered a misinterpretation of the intended behavior and must not be relied upon by the client.
wl_pointer objects of version 5 or later must not send events if created before the most recent event notifying the client of an added pointer capability.

The above behavior also applies to wl_keyboard and wl_touch with the keyboard and touch capabilities, respectively.

The ID provided will be initialized to the wl_pointer interface for this seat.

This request only takes effect if the seat has the pointer capability, or has had the pointer capability in the past. It is a protocol violation to issue this request on a seat that has never had the pointer capability. The missing_capability error will be sent in this case.

The ID provided will be initialized to the wl_keyboard interface for this seat.

This request only takes effect if the seat has the keyboard capability, or has had the keyboard capability in the past. It is a protocol violation to issue this request on a seat that has never had the keyboard capability. The missing_capability error will be sent in this case.

The ID provided will be initialized to the wl_touch interface for this seat.

This request only takes effect if the seat has the touch capability, or has had the touch capability in the past. It is a protocol violation to issue this request on a seat that has never had the touch capability. The missing_capability error will be sent in this case.

In a multi-seat configuration the seat name can be used by clients to help identify which physical devices the seat represents.

The seat name is a UTF-8 string with no convention defined for its contents. Each name is unique among all wl_seat globals. The name is only guaranteed to be unique for the current compositor instance.

The same seat names are used for all clients. Thus, the name can be shared across processes to refer to a specific wl_seat global.

The name event is sent after binding to the seat global. This event is only sent once per seat object, and the name does not change over the lifetime of the wl_seat global.
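Since the capabilities argument is a bitmask, the usual listener logic tests individual bits before creating or releasing the corresponding device object. A sketch using the core protocol's wl_seat.capability bit values (pointer = 1, keyboard = 2, touch = 4); the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Bit values match the core wl_seat.capability enum. */
enum seat_cap {
        SEAT_CAP_POINTER = 1,
        SEAT_CAP_KEYBOARD = 2,
        SEAT_CAP_TOUCH = 4,
};

/* Sketch: test one capability bit in the event's bitmask argument. */
static int seat_has_cap(uint32_t capabilities, enum seat_cap cap)
{
        return (capabilities & cap) != 0;
}
```

In a real capabilities handler, a client would call wl_seat.get_pointer when the pointer bit appears and wl_pointer.release when it disappears.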
Compositors may re-use the same seat name if the wl_seat global is destroyed and re-created later.

Using this request a client can tell the server that it is not going to use the seat object anymore.

The wl_pointer interface represents one or more input devices, such as mice, which control the pointer location and pointer_focus of a seat.

The wl_pointer interface generates motion, enter and leave events for the surfaces that the pointer is located over, and button and axis events for button presses, button releases and scrolling.

Set the pointer surface, i.e., the surface that contains the pointer image (cursor). This request gives the surface the role of a cursor. If the surface already has another role, it raises a protocol error.

The cursor actually changes only if the pointer focus for this device is one of the requesting client's surfaces or the surface parameter is the current pointer surface. If there was a previous surface set with this request it is replaced. If surface is NULL, the pointer image is hidden.

The parameters hotspot_x and hotspot_y define the position of the pointer surface relative to the pointer location. Its top-left corner is always at (x, y) - (hotspot_x, hotspot_y), where (x, y) are the coordinates of the pointer location, in surface-local coordinates.

On surface.attach requests to the pointer surface, hotspot_x and hotspot_y are decremented by the x and y parameters passed to the request. Attach must be confirmed by wl_surface.commit as usual. The hotspot can also be updated by passing the currently set pointer surface to this request with new values for hotspot_x and hotspot_y.

The input region is ignored for wl_surfaces with the role of a cursor. When the use as a cursor ends, the wl_surface is unmapped.

The serial parameter must match the latest wl_pointer.enter serial number sent to the client. Otherwise the request will be ignored.

Notification that this seat's pointer is focused on a certain surface.
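The hotspot rule above reduces to a small subtraction; a sketch with an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: the cursor surface's top-left corner is the pointer location
 * minus the hotspot, i.e. (x, y) - (hotspot_x, hotspot_y). */
static void cursor_top_left(int32_t ptr_x, int32_t ptr_y, int32_t hot_x,
                int32_t hot_y, int32_t *out_x, int32_t *out_y)
{
        *out_x = ptr_x - hot_x;
        *out_y = ptr_y - hot_y;
}
```

So a cursor image whose hotspot is at (4, 8) and whose pointer is at (100, 100) is drawn with its top-left corner at (96, 92).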
When a seat's focus enters a surface, the pointer image is undefined and a client should respond to this event by setting an appropriate pointer image with the set_cursor request.

Notification that this seat's pointer is no longer focused on a certain surface. The leave notification is sent before the enter notification for the new focus.

Notification of pointer location change. The arguments surface_x and surface_y are the location relative to the focused surface.

Describes the physical state of a button that produced the button event.

Mouse button click and release notifications. The location of the click is given by the last motion or enter event. The time argument is a timestamp with millisecond granularity, with an undefined base.

The button is a button code as defined in the Linux kernel's linux/input-event-codes.h header file, e.g. BTN_LEFT.

Any 16-bit button code value is reserved for future additions to the kernel's event code list. All other button codes above 0xFFFF are currently undefined but may be used in future versions of this protocol.

Describes the axis types of scroll events.

Scroll and other axis notifications.

For scroll events (vertical and horizontal scroll axes), the value parameter is the length of a vector along the specified axis in a coordinate space identical to those of motion events, representing a relative movement along the specified axis.

For devices that support movements non-parallel to axes multiple axis events will be emitted. When applicable, for example for touch pads, the server can choose to emit scroll events where the motion vector is equivalent to a motion event vector. When applicable, a client can transform its content relative to the scroll distance.

Using this request a client can tell the server that it is not going to use the pointer object anymore. This request destroys the pointer proxy object, so clients must not call wl_pointer_destroy() after using this request.
Indicates the end of a set of events that logically belong together. A client is expected to accumulate the data in all events within the frame before proceeding.

All wl_pointer events before a wl_pointer.frame event belong logically together. For example, in a diagonal scroll motion the compositor will send an optional wl_pointer.axis_source event, two wl_pointer.axis events (horizontal and vertical) and finally a wl_pointer.frame event. The client may use this information to calculate a diagonal vector for scrolling.

When multiple wl_pointer.axis events occur within the same frame, the motion vector is the combined motion of all events. When a wl_pointer.axis and a wl_pointer.axis_stop event occur within the same frame, this indicates that axis movement in one axis has stopped but continues in the other axis. When multiple wl_pointer.axis_stop events occur within the same frame, this indicates that these axes stopped in the same instance.

A wl_pointer.frame event is sent for every logical event group, even if the group only contains a single wl_pointer event. Specifically, a client may get a sequence: motion, frame, button, frame, axis, frame, axis_stop, frame.

The wl_pointer.enter and wl_pointer.leave events are logical events generated by the compositor and not the hardware. These events are also grouped by a wl_pointer.frame. When a pointer moves from one surface to another, a compositor should group the wl_pointer.leave event within the same wl_pointer.frame. However, a client must not rely on wl_pointer.leave and wl_pointer.enter being in the same wl_pointer.frame. Compositor-specific policies may require the wl_pointer.leave and wl_pointer.enter event being split across multiple wl_pointer.frame groups.

Describes the source types for axis events. This indicates to the client how an axis event was physically generated; a client may adjust the user interface accordingly.
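The per-frame accumulation described above can be sketched as a tiny state machine: axis events arriving before the frame event are combined into one motion vector, which the client consumes when the frame event closes the group. Names are illustrative; in the core protocol axis 0 is vertical_scroll and axis 1 is horizontal_scroll:

```c
#include <assert.h>

struct axis_frame {
        double horizontal, vertical;
};

/* Sketch: fold each axis event into the pending frame's motion vector. */
static void on_axis(struct axis_frame *f, int axis, double value)
{
        if (axis == 0)
                f->vertical += value;
        else
                f->horizontal += value;
}

/* Sketch: the frame event ends the group; return the combined vector
 * and reset the accumulator for the next group. */
static struct axis_frame on_frame(struct axis_frame *f)
{
        struct axis_frame done = *f;
        f->horizontal = 0.0;
        f->vertical = 0.0;
        return done;
}
```

A diagonal scroll then arrives as two on_axis calls followed by one on_frame call yielding the diagonal vector.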
For example, scroll events from a "finger" source may be in a smooth coordinate space with kinetic scrolling whereas a "wheel" source may be in discrete steps of a number of lines.

The "continuous" axis source is a device generating events in a continuous coordinate space, but using something other than a finger. One example for this source is button-based scrolling where the vertical motion of a device is converted to scroll events while a button is held down.

The "wheel tilt" axis source indicates that the actual device is a wheel but the scroll event is not caused by a rotation but a (usually sideways) tilt of the wheel.

Source information for scroll and other axes.

This event does not occur on its own. It is sent before a wl_pointer.frame event and carries the source information for all events within that frame.

The source specifies how this event was generated. If the source is wl_pointer.axis_source.finger, a wl_pointer.axis_stop event will be sent when the user lifts the finger off the device.

If the source is wl_pointer.axis_source.wheel, wl_pointer.axis_source.wheel_tilt or wl_pointer.axis_source.continuous, a wl_pointer.axis_stop event may or may not be sent. Whether a compositor sends an axis_stop event for these sources is hardware-specific and implementation-dependent; clients must not rely on receiving an axis_stop event for these scroll sources and should treat scroll sequences from these scroll sources as unterminated by default.

This event is optional. If the source is unknown for a particular axis event sequence, no event is sent. Only one wl_pointer.axis_source event is permitted per frame. The order of wl_pointer.axis_discrete and wl_pointer.axis_source is not guaranteed.

Stop notification for scroll and other axes.

For some wl_pointer.axis_source types, a wl_pointer.axis_stop event is sent to notify a client that the axis sequence has terminated. This enables the client to implement kinetic scrolling.
See the wl_pointer.axis_source documentation for information on when this event may be generated.

Any wl_pointer.axis events with the same axis_source after this event should be considered as the start of a new axis motion.

The timestamp is to be interpreted identically to the timestamp in the wl_pointer.axis event. The timestamp value may be the same as a preceding wl_pointer.axis event.

Discrete step information for scroll and other axes.

This event carries the axis value of the wl_pointer.axis event in discrete steps (e.g. mouse wheel clicks). This event is deprecated with wl_pointer version 8 - this event is not sent to clients supporting version 8 or later.

This event does not occur on its own, it is coupled with a wl_pointer.axis event that represents this axis value on a continuous scale. The protocol guarantees that each axis_discrete event is always followed by exactly one axis event with the same axis number within the same wl_pointer.frame. Note that the protocol allows for other events to occur between the axis_discrete and its coupled axis event, including other axis_discrete or axis events. A wl_pointer.frame must not contain more than one axis_discrete event per axis type.

This event is optional; continuous scrolling devices like two-finger scrolling on touchpads do not have discrete steps and do not generate this event.

The discrete value carries the directional information. e.g. a value of -2 is two steps towards the negative direction of this axis.

The axis number is identical to the axis number in the associated axis event.

The order of wl_pointer.axis_discrete and wl_pointer.axis_source is not guaranteed.

Discrete high-resolution scroll information.

This event carries high-resolution wheel scroll information, with each multiple of 120 representing one logical scroll step (a wheel detent).
For example, an axis_value120 of 30 is one quarter of a logical scroll step in the positive direction, and a value120 of -240 is two logical scroll steps in the negative direction within the same hardware event. Clients that rely on discrete scrolling should accumulate the value120 to multiples of 120 before processing the event.

The value120 must not be zero.

This event replaces the wl_pointer.axis_discrete event in clients supporting wl_pointer version 8 or later.

Where a wl_pointer.axis_source event occurs in the same wl_pointer.frame, the axis source applies to this event. The order of wl_pointer.axis_value120 and wl_pointer.axis_source is not guaranteed.

This specifies the direction of the physical motion that caused a wl_pointer.axis event, relative to the wl_pointer.axis direction.

Relative directional information of the entity causing the axis motion.

For a wl_pointer.axis event, the wl_pointer.axis_relative_direction event specifies the movement direction of the entity causing the wl_pointer.axis event. For example:
- if a user's fingers on a touchpad move down and this causes a wl_pointer.axis vertical_scroll down event, the physical direction is 'identical'
- if a user's fingers on a touchpad move down and this causes a wl_pointer.axis vertical_scroll up event ('natural scrolling'), the physical direction is 'inverted'.

A client may use this information to adjust scroll motion of components. Specifically, enabling natural scrolling causes the content to change direction compared to traditional scrolling. Some widgets like volume control sliders should usually match the physical direction regardless of whether natural scrolling is active. This event enables clients to match the scroll direction of a widget to the physical direction.

This event does not occur on its own, it is coupled with a wl_pointer.axis event that represents this axis value.
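The accumulation rule above ("accumulate the value120 to multiples of 120") can be sketched as a small helper (illustrative name, not libwayland API) that keeps a running remainder and emits whole steps:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: fold a value120 delta into an accumulator and return the
 * number of whole logical scroll steps now available; the remainder
 * stays in *accum for the next event. */
static int32_t value120_steps(int32_t *accum, int32_t value120)
{
        *accum += value120;
        int32_t steps = *accum / 120; /* C division truncates toward zero */
        *accum -= steps * 120;
        return steps;
}
```

Four events of +30 therefore yield one step on the fourth event, and a single -240 event yields two steps in the negative direction.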
The protocol guarantees that each axis_relative_direction event is always followed by exactly one axis event with the same axis number within the same wl_pointer.frame. Note that the protocol allows for other events to occur between the axis_relative_direction and its coupled axis event. The axis number is identical to the axis number in the associated axis event. The order of wl_pointer.axis_relative_direction, wl_pointer.axis_discrete and wl_pointer.axis_source is not guaranteed. The wl_keyboard interface represents one or more keyboards associated with a seat. This specifies the format of the keymap provided to the client with the wl_keyboard.keymap event. This event provides a file descriptor to the client which can be memory-mapped in read-only mode to provide a keyboard mapping description. From version 7 onwards, the fd must be mapped with MAP_PRIVATE by the recipient, as MAP_SHARED may fail. Notification that this seat's keyboard focus is on a certain surface. The compositor must send the wl_keyboard.modifiers event after this event. Notification that this seat's keyboard focus is no longer on a certain surface. The leave notification is sent before the enter notification for the new focus. After this event client must assume that all keys, including modifiers, are lifted and also it must stop key repeating if there's some going on. Describes the physical state of a key that produced the key event. A key was pressed or released. The time argument is a timestamp with millisecond granularity, with an undefined base. The key is a platform-specific key code that can be interpreted by feeding it to the keyboard mapping (see the keymap event). If this event produces a change in modifiers, then the resulting wl_keyboard.modifiers event must be sent after this event. Notifies clients that the modifier and/or group state has changed, and it should update its local state. Informs the client about the keyboard's repeat rate and delay. 
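The keymap-mapping requirement mentioned above (read-only, MAP_PRIVATE from version 7 onwards) can be sketched like this. `map_keymap` is a hypothetical helper; a real client would receive `fd` and `size` from the wl_keyboard.keymap event.

```c
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a keymap fd read-only. From wl_seat version 7 onwards the fd must be
 * mapped with MAP_PRIVATE, as MAP_SHARED may fail. The fd is closed after
 * mapping (the mapping keeps its own reference). Returns NULL on error. */
static char *map_keymap(int fd, size_t size)
{
        void *data = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        return data == MAP_FAILED ? NULL : (char *)data;
}
```

The caller would hand the mapped text to its keymap compiler (e.g. xkbcommon) and `munmap` it afterwards.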
This event is sent as soon as the wl_keyboard object has been created, and is guaranteed to be received by the client before any key press event. Negative values for either rate or delay are illegal. A rate of zero will disable any repeating (regardless of the value of delay). This event can be sent later on as well with a new value if necessary, so clients should continue listening for the event past the creation of wl_keyboard. The wl_touch interface represents a touchscreen associated with a seat. Touch interactions can consist of one or more contacts. For each contact, a series of events is generated, starting with a down event, followed by zero or more motion events, and ending with an up event. Events relating to the same contact point can be identified by the ID of the sequence. A new touch point has appeared on the surface. This touch point is assigned a unique ID. Future events from this touch point reference this ID. The ID ceases to be valid after a touch up event and may be reused in the future. The touch point has disappeared. No further events will be sent for this touch point and the touch point's ID is released and may be reused in a future touch down event. A touch point has changed coordinates. Indicates the end of a set of events that logically belong together. A client is expected to accumulate the data in all events within the frame before proceeding. A wl_touch.frame terminates at least one event but otherwise no guarantee is provided about the set of events within a frame. A client must assume that any state not updated in a frame is unchanged from the previously known state. Sent if the compositor decides the touch stream is a global gesture. No further events are sent to the clients from that particular gesture. Touch cancellation applies to all touch points currently active on this client's surface. The client is responsible for finalizing the touch points, future touch points on this surface may reuse the touch point ID. 
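The repeat_info semantics above can be illustrated numerically. `repeat_timeout_ms` is a hypothetical helper that computes when the n-th repeat of a held key fires, assuming the event's conventional units (rate in repeats per second, delay in milliseconds before repeating starts).

```c
#include <stdint.h>

/* Time in ms after key-down at which the n-th repeated key event (n starting
 * at 0) fires: the first repeat after `delay` ms, then one every 1000/rate
 * ms. A rate of zero disables repeating regardless of delay; -1 marks that. */
static int64_t repeat_timeout_ms(int32_t rate, int32_t delay, int32_t n)
{
        if (rate <= 0)
                return -1; /* repeating disabled */
        return (int64_t)delay + (int64_t)n * 1000 / rate;
}
```

With a typical rate of 25 and delay of 600, repeats fire at 600 ms, 640 ms, 680 ms, and so on.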
Sent when a touchpoint has changed its shape. This event does not occur on its own. It is sent before a wl_touch.frame event and carries the new shape information for any previously reported, or new touch points of that frame. Other events describing the touch point such as wl_touch.down, wl_touch.motion or wl_touch.orientation may be sent within the same wl_touch.frame. A client should treat these events as a single logical touch point update. The order of wl_touch.shape, wl_touch.orientation and wl_touch.motion is not guaranteed. A wl_touch.down event is guaranteed to occur before the first wl_touch.shape event for this touch ID but both events may occur within the same wl_touch.frame. A touchpoint shape is approximated by an ellipse through the major and minor axis length. The major axis length describes the longer diameter of the ellipse, while the minor axis length describes the shorter diameter. Major and minor are orthogonal and both are specified in surface-local coordinates. The center of the ellipse is always at the touchpoint location as reported by wl_touch.down or wl_touch.move. This event is only sent by the compositor if the touch device supports shape reports. The client has to make reasonable assumptions about the shape if it did not receive this event. Sent when a touchpoint has changed its orientation. This event does not occur on its own. It is sent before a wl_touch.frame event and carries the new shape information for any previously reported, or new touch points of that frame. Other events describing the touch point such as wl_touch.down, wl_touch.motion or wl_touch.shape may be sent within the same wl_touch.frame. A client should treat these events as a single logical touch point update. The order of wl_touch.shape, wl_touch.orientation and wl_touch.motion is not guaranteed. A wl_touch.down event is guaranteed to occur before the first wl_touch.orientation event for this touch ID but both events may occur within the same wl_touch.frame. 
The orientation describes the clockwise angle of a touchpoint's major axis to the positive surface y-axis and is normalized to the -180 to +180 degree range. The granularity of orientation depends on the touch device, some devices only support binary rotation values between 0 and 90 degrees. This event is only sent by the compositor if the touch device supports orientation reports. An output describes part of the compositor geometry. The compositor works in the 'compositor coordinate system' and an output corresponds to a rectangular area in that space that is actually visible. This typically corresponds to a monitor that displays part of the compositor space. This object is published as global during start up, or when a monitor is hotplugged. This enumeration describes how the physical pixels on an output are laid out. This describes the transform that a compositor will apply to a surface to compensate for the rotation or mirroring of an output device. The flipped values correspond to an initial flip around a vertical axis followed by rotation. The purpose is mainly to allow clients to render accordingly and tell the compositor, so that for fullscreen surfaces, the compositor will still be able to scan out directly from client surfaces. The geometry event describes geometric properties of the output. The event is sent when binding to the output object and whenever any of the properties change. The physical size can be set to zero if it doesn't make sense for this output (e.g. for projectors or virtual outputs). The geometry event will be followed by a done event (starting from version 2). Note: wl_output only advertises partial information about the output position and identification. Some compositors, for instance those not implementing a desktop-style output layout or those exposing virtual outputs, might fake this information. Instead of using x and y, clients should use xdg_output.logical_position. 
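One practical consequence of the transform values described above is that 90- and 270-degree transforms swap a surface's width and height. A minimal sketch follows; the enum values mirror the core protocol's wl_output.transform numbering, redeclared locally here for self-containment.

```c
#include <stdbool.h>
#include <stdint.h>

/* wl_output.transform values, as numbered in the core protocol enum. */
enum output_transform {
        TRANSFORM_NORMAL = 0,
        TRANSFORM_90 = 1,
        TRANSFORM_180 = 2,
        TRANSFORM_270 = 3,
        TRANSFORM_FLIPPED = 4,
        TRANSFORM_FLIPPED_90 = 5,
        TRANSFORM_FLIPPED_180 = 6,
        TRANSFORM_FLIPPED_270 = 7,
};

/* Size of a w x h buffer after the given transform: the 90/270 variants
 * (flipped or not) exchange width and height; the others preserve them. */
static void transformed_size(int32_t transform, int32_t w, int32_t h,
                int32_t *out_w, int32_t *out_h)
{
        bool swap = transform == TRANSFORM_90 || transform == TRANSFORM_270 ||
                    transform == TRANSFORM_FLIPPED_90 ||
                    transform == TRANSFORM_FLIPPED_270;
        *out_w = swap ? h : w;
        *out_h = swap ? w : h;
}
```

A client rendering for a 90-degree-rotated output would allocate its buffer with these swapped dimensions so the compositor can scan it out directly.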
Instead of using make and model, clients should use name and description. These flags describe properties of an output mode. They are used in the flags bitfield of the mode event. The mode event describes an available mode for the output. The event is sent when binding to the output object and there will always be one mode, the current mode. The event is sent again if an output changes mode, for the mode that is now current. In other words, the current mode is always the last mode that was received with the current flag set. Non-current modes are deprecated. A compositor can decide to only advertise the current mode and never send other modes. Clients should not rely on non-current modes. The size of a mode is given in physical hardware units of the output device. This is not necessarily the same as the output size in the global compositor space. For instance, the output may be scaled, as described in wl_output.scale, or transformed, as described in wl_output.transform. Clients willing to retrieve the output size in the global compositor space should use xdg_output.logical_size instead. The vertical refresh rate can be set to zero if it doesn't make sense for this output (e.g. for virtual outputs). The mode event will be followed by a done event (starting from version 2). Clients should not use the refresh rate to schedule frames. Instead, they should use the wl_surface.frame event or the presentation-time protocol. Note: this information is not always meaningful for all outputs. Some compositors, such as those exposing virtual outputs, might fake the refresh rate or the size. This event is sent after all other properties have been sent after binding to the output object and after any other property changes done after that. This allows changes to the output properties to be seen as atomic, even if they happen via multiple events. This event contains scaling geometry information that is not in the geometry event. 
It may be sent after binding the output object or if the output scale changes later. If it is not sent, the client should assume a scale of 1. A scale larger than 1 means that the compositor will automatically scale surface buffers by this amount when rendering. This is used for very high resolution displays where applications rendering at the native resolution would be too small to be legible. It is intended that scaling aware clients track the current output of a surface, and if it is on a scaled output it should use wl_surface.set_buffer_scale with the scale of the output. That way the compositor can avoid scaling the surface, and the client can supply a higher detail image. The scale event will be followed by a done event. Using this request a client can tell the server that it is not going to use the output object anymore. Many compositors will assign user-friendly names to their outputs, show them to the user, allow the user to refer to an output, etc. The client may wish to know this name as well to offer the user similar behaviors. The name is a UTF-8 string with no convention defined for its contents. Each name is unique among all wl_output globals. The name is only guaranteed to be unique for the compositor instance. The same output name is used for all clients for a given wl_output global. Thus, the name can be shared across processes to refer to a specific wl_output global. The name is not guaranteed to be persistent across sessions, thus cannot be used to reliably identify an output in e.g. configuration files. Examples of names include 'HDMI-A-1', 'WL-1', 'X11-1', etc. However, do not assume that the name is a reflection of an underlying DRM connector, X11 connection, etc. The name event is sent after binding the output object. This event is only sent once per output object, and the name does not change over the lifetime of the wl_output global. Compositors may re-use the same output name if the wl_output global is destroyed and re-created later. 
Compositors should avoid re-using the same name if possible. The name event will be followed by a done event. Many compositors can produce human-readable descriptions of their outputs. The client may wish to know this description as well, e.g. for output selection purposes. The description is a UTF-8 string with no convention defined for its contents. The description is not guaranteed to be unique among all wl_output globals. Examples might include 'Foocorp 11" Display' or 'Virtual X11 output via :1'. The description event is sent after binding the output object and whenever the description changes. The description is optional, and may not be sent at all. The description event will be followed by a done event. A region object describes an area. Region objects are used to describe the opaque and input regions of a surface. Destroy the region. This will invalidate the object ID. Add the specified rectangle to the region. Subtract the specified rectangle from the region. The global interface exposing sub-surface compositing capabilities. A wl_surface, that has sub-surfaces associated, is called the parent surface. Sub-surfaces can be arbitrarily nested and create a tree of sub-surfaces. The root surface in a tree of sub-surfaces is the main surface. The main surface cannot be a sub-surface, because sub-surfaces must always have a parent. A main surface with its sub-surfaces forms a (compound) window. For window management purposes, this set of wl_surface objects is to be considered as a single window, and it should also behave as such. The aim of sub-surfaces is to offload some of the compositing work within a window from clients to the compositor. A prime example is a video player with decorations and video in separate wl_surface objects. This should allow the compositor to pass YUV video buffer processing to dedicated overlay hardware when possible. Informs the server that the client will not be using this protocol object anymore. 
This does not affect any other objects, wl_subsurface objects included. Create a sub-surface interface for the given surface, and associate it with the given parent surface. This turns a plain wl_surface into a sub-surface. The to-be sub-surface must not already have another role, and it must not have an existing wl_subsurface object. Otherwise the bad_surface protocol error is raised. Adding sub-surfaces to a parent is a double-buffered operation on the parent (see wl_surface.commit). The effect of adding a sub-surface becomes visible on the next time the state of the parent surface is applied. The parent surface must not be one of the child surface's descendants, and the parent must be different from the child surface, otherwise the bad_parent protocol error is raised. This request modifies the behaviour of wl_surface.commit request on the sub-surface, see the documentation on wl_subsurface interface. An additional interface to a wl_surface object, which has been made a sub-surface. A sub-surface has one parent surface. A sub-surface's size and position are not limited to that of the parent. Particularly, a sub-surface is not automatically clipped to its parent's area. A sub-surface becomes mapped, when a non-NULL wl_buffer is applied and the parent surface is mapped. The order of which one happens first is irrelevant. A sub-surface is hidden if the parent becomes hidden, or if a NULL wl_buffer is applied. These rules apply recursively through the tree of surfaces. The behaviour of a wl_surface.commit request on a sub-surface depends on the sub-surface's mode. The possible modes are synchronized and desynchronized, see methods wl_subsurface.set_sync and wl_subsurface.set_desync. Synchronized mode caches the wl_surface state to be applied when the parent's state gets applied, and desynchronized mode applies the pending wl_surface state directly. A sub-surface is initially in the synchronized mode. 
Sub-surfaces also have another kind of state, which is managed by wl_subsurface requests, as opposed to wl_surface requests. This state includes the sub-surface position relative to the parent surface (wl_subsurface.set_position), and the stacking order of the parent and its sub-surfaces (wl_subsurface.place_above and .place_below). This state is applied when the parent surface's wl_surface state is applied, regardless of the sub-surface's mode. As the exception, set_sync and set_desync are effective immediately. The main surface can be thought to be always in desynchronized mode, since it does not have a parent in the sub-surfaces sense. Even if a sub-surface is in desynchronized mode, it will behave as in synchronized mode, if its parent surface behaves as in synchronized mode. This rule is applied recursively throughout the tree of surfaces. This means, that one can set a sub-surface into synchronized mode, and then assume that all its child and grand-child sub-surfaces are synchronized, too, without explicitly setting them. Destroying a sub-surface takes effect immediately. If you need to synchronize the removal of a sub-surface to the parent surface update, unmap the sub-surface first by attaching a NULL wl_buffer, update parent, and then destroy the sub-surface. If the parent wl_surface object is destroyed, the sub-surface is unmapped. The sub-surface interface is removed from the wl_surface object that was turned into a sub-surface with a wl_subcompositor.get_subsurface request. The wl_surface's association to the parent is deleted. The wl_surface is unmapped immediately. This schedules a sub-surface position change. The sub-surface will be moved so that its origin (top left corner pixel) will be at the location x, y of the parent surface coordinate system. The coordinates are not restricted to the parent surface area. Negative values are allowed. The scheduled coordinates will take effect whenever the state of the parent surface is applied. 
When this happens depends on whether the parent surface is in synchronized mode or not. See wl_subsurface.set_sync and wl_subsurface.set_desync for details. If more than one set_position request is invoked by the client before the commit of the parent surface, the position of a new request always replaces the scheduled position from any previous request. The initial position is 0, 0. This sub-surface is taken from the stack, and put back just above the reference surface, changing the z-order of the sub-surfaces. The reference surface must be one of the sibling surfaces, or the parent surface. Using any other surface, including this sub-surface, will cause a protocol error. The z-order is double-buffered. Requests are handled in order and applied immediately to a pending state. The final pending state is copied to the active state the next time the state of the parent surface is applied. When this happens depends on whether the parent surface is in synchronized mode or not. See wl_subsurface.set_sync and wl_subsurface.set_desync for details. A new sub-surface is initially added as the top-most in the stack of its siblings and parent. The sub-surface is placed just below the reference surface. See wl_subsurface.place_above. Change the commit behaviour of the sub-surface to synchronized mode, also described as the parent dependent mode. In synchronized mode, wl_surface.commit on a sub-surface will accumulate the committed state in a cache, but the state will not be applied and hence will not change the compositor output. The cached state is applied to the sub-surface immediately after the parent surface's state is applied. This ensures atomic updates of the parent and all its synchronized sub-surfaces. Applying the cached state will invalidate the cache, so further parent surface commits do not (re-)apply old state. See wl_subsurface for the recursive effect of this mode. 
Change the commit behaviour of the sub-surface to desynchronized mode, also described as independent or freely running mode. In desynchronized mode, wl_surface.commit on a sub-surface will apply the pending state directly, without caching, as happens normally with a wl_surface. Calling wl_surface.commit on the parent surface has no effect on the sub-surface's wl_surface state. This mode allows a sub-surface to be updated on its own.

If cached state exists when wl_surface.commit is called in desynchronized mode, the pending state is added to the cached state, and applied as a whole. This invalidates the cache.

Note: even if a sub-surface is set to desynchronized, a parent sub-surface may override it to behave as synchronized. For details, see wl_subsurface. If a surface's parent surface behaves as desynchronized, then the cached state is applied on set_desync.

waypipe-v0.9.1/protocols/wlr-data-control-unstable-v1.xml

Copyright © 2018 Simon Ser
Copyright © 2019 Ivan Molodetskikh

Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.
THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. This protocol allows a privileged client to control data devices. In particular, the client will be able to manage the current selection and take the role of a clipboard manager. Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset. This interface is a manager that allows creating per-seat data device controls. Create a new data source. Create a data device that can be used to manage a seat's selection. All objects created by the manager will still remain valid, until their appropriate destroy request has been called. This interface allows a client to manage a seat's selection. When the seat is destroyed, this object becomes inert. This request asks the compositor to set the selection to the data from the source on behalf of the client. The given source may not be used in any further set_selection or set_primary_selection requests. Attempting to use a previously used source is a protocol error. To unset the selection, set the source to NULL. Destroys the data device object. 
The data_offer event introduces a new wlr_data_control_offer object, which will subsequently be used in either the wlr_data_control_device.selection event (for the regular clipboard selections) or the wlr_data_control_device.primary_selection event (for the primary clipboard selections). Immediately following the wlr_data_control_device.data_offer event, the new data_offer object will send out wlr_data_control_offer.offer events to describe the MIME types it offers. The selection event is sent out to notify the client of a new wlr_data_control_offer for the selection for this device. The wlr_data_control_device.data_offer and the wlr_data_control_offer.offer events are sent out immediately before this event to introduce the data offer object. The selection event is sent to a client when a new selection is set. The wlr_data_control_offer is valid until a new wlr_data_control_offer or NULL is received. The client must destroy the previous selection wlr_data_control_offer, if any, upon receiving this event. The first selection event is sent upon binding the wlr_data_control_device object. This data control object is no longer valid and should be destroyed by the client. The primary_selection event is sent out to notify the client of a new wlr_data_control_offer for the primary selection for this device. The wlr_data_control_device.data_offer and the wlr_data_control_offer.offer events are sent out immediately before this event to introduce the data offer object. The primary_selection event is sent to a client when a new primary selection is set. The wlr_data_control_offer is valid until a new wlr_data_control_offer or NULL is received. The client must destroy the previous primary selection wlr_data_control_offer, if any, upon receiving this event. If the compositor supports primary selection, the first primary_selection event is sent upon binding the wlr_data_control_device object. 
This request asks the compositor to set the primary selection to the data from the source on behalf of the client. The given source may not be used in any further set_selection or set_primary_selection requests. Attempting to use a previously used source is a protocol error. To unset the primary selection, set the source to NULL. The compositor will ignore this request if it does not support primary selection. The wlr_data_control_source object is the source side of a wlr_data_control_offer. It is created by the source client in a data transfer and provides a way to describe the offered data and a way to respond to requests to transfer the data. This request adds a MIME type to the set of MIME types advertised to targets. Can be called several times to offer multiple types. Calling this after wlr_data_control_device.set_selection is a protocol error. Destroys the data source object. Request for data from the client. Send the data as the specified MIME type over the passed file descriptor, then close it. This data source is no longer valid. The data source has been replaced by another data source. The client should clean up and destroy this data source. A wlr_data_control_offer represents a piece of data offered for transfer by another client (the source client). The offer describes the different MIME types that the data can be converted to and provides the mechanism for transferring the data directly from the source client. To transfer the offered data, the client issues this request and indicates the MIME type it wants to receive. The transfer happens through the passed file descriptor (typically created with the pipe system call). The source client writes the data in the MIME type representation requested and then closes the file descriptor. The receiving client reads from the read end of the pipe until EOF and then closes its end, at which point the transfer is complete. This request may happen multiple times for different MIME types. 
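The pipe-based transfer described for wlr_data_control_offer.receive boils down to reading from the read end until the source closes its end. `read_offer_pipe` is a hypothetical helper for the receiving side; wiring up the actual receive request is omitted.

```c
#include <stddef.h>
#include <stdlib.h>
#include <unistd.h>

/* Read the full payload from the read end of the pipe whose write end was
 * passed to the receive request, until the source client closes it (EOF).
 * Returns a NUL-terminated heap buffer, or NULL on error; *len gets the
 * byte count. The fd is closed on success, completing the transfer. */
static char *read_offer_pipe(int fd, size_t *len)
{
        size_t cap = 4096, used = 0;
        char *buf = malloc(cap + 1);
        if (!buf)
                return NULL;
        for (;;) {
                if (used == cap) {
                        cap *= 2;
                        char *nbuf = realloc(buf, cap + 1);
                        if (!nbuf) {
                                free(buf);
                                return NULL;
                        }
                        buf = nbuf;
                }
                ssize_t n = read(fd, buf + used, cap - used);
                if (n < 0) {
                        free(buf);
                        return NULL;
                }
                if (n == 0)
                        break; /* EOF: source closed its end */
                used += (size_t)n;
        }
        close(fd);
        buf[used] = '\0';
        *len = used;
        return buf;
}
```

A real client would create the pipe with pipe(), pass the write end to the receive request, close its own copy of the write end, and then read as above.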
Destroys the data offer object.

Sent immediately after creating the wlr_data_control_offer object. One event per offered MIME type.

waypipe-v0.9.1/protocols/wlr-export-dmabuf-unstable-v1.xml

Copyright © 2018 Rostislav Pehlivanov

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

An interface to capture surfaces in an efficient way by exporting DMA-BUFs.

Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset.
This object is a manager with which to start capturing from sources.

Capture the next frame of an entire output.

All objects created by the manager will still remain valid, until their appropriate destroy request has been called.

This object represents a single DMA-BUF frame. If the capture is successful, the compositor will first send a "frame" event, followed by one or several "object" events. When the frame is available for readout, the "ready" event is sent. If the capture failed, the "cancel" event is sent. This can happen anytime before the "ready" event. Once either a "ready" or a "cancel" event is received, the client should destroy the frame. Once an "object" event is received, the client is responsible for closing the associated file descriptor. All frames are read-only and may not be written into or altered.

Special flags that should be respected by the client.

Main event supplying the client with information about the frame. If the capture didn't fail, this event is always emitted first before any other events. This event is followed by a number of "object" events as specified by the "num_objects" argument.

Event which serves to supply the client with the file descriptors containing the data for each object. After receiving this event, the client must always close the file descriptor as soon as it is done with it, even if the frame fails.

This event is sent as soon as the frame is presented, indicating it is available for reading. This event includes the time at which presentation happened. The timestamp is expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples, each component being an unsigned 32-bit value. Whole seconds are in tv_sec which is a 64-bit value combined from tv_sec_hi and tv_sec_lo, and the additional fractional part in tv_nsec as nanoseconds. Hence, for valid timestamps tv_nsec must be in [0, 999999999]. The seconds part may have an arbitrary offset at start. After receiving this event, the client should destroy this object.
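The split timestamp carried by the "ready" event can be recombined exactly as described. A minimal sketch; `ready_timestamp_ns` is a hypothetical name.

```c
#include <stdint.h>

/* Combine the "ready" event's tv_sec_hi/tv_sec_lo/tv_nsec triple into a
 * single nanosecond count since the clock's (arbitrary) origin. tv_sec is
 * a 64-bit value split into two unsigned 32-bit halves, and tv_nsec must
 * be in [0, 999999999]. */
static uint64_t ready_timestamp_ns(uint32_t tv_sec_hi, uint32_t tv_sec_lo,
                uint32_t tv_nsec)
{
        uint64_t tv_sec = ((uint64_t)tv_sec_hi << 32) | tv_sec_lo;
        return tv_sec * UINT64_C(1000000000) + tv_nsec;
}
```

Since the seconds part may have an arbitrary offset, the result is only meaningful for differences between timestamps, not as wall-clock time.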
Indicates reason for cancelling the frame.

If the capture failed, or if the frame is no longer valid after the "frame" event has been emitted, this event will be used to inform the client to scrap the frame. If the failure is temporary, the client may capture the same source again. If the failure is permanent, any further attempts to capture the same source will fail again. After receiving this event, the client should destroy this object.

Unreferences the frame. This request must be called as soon as it's no longer used. It can be called at any time by the client. The client will still have to close any FDs it has been given.

waypipe-v0.9.1/protocols/wlr-gamma-control-unstable-v1.xml

Copyright © 2015 Giulio Camuffo
Copyright © 2018 Simon Ser

Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.

THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
This protocol allows a privileged client to set the gamma tables for outputs.

Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump. Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset.

This interface is a manager that allows creating per-output gamma controls.

Create a gamma control that can be used to adjust gamma tables for the provided output.

All objects created by the manager will still remain valid, until their appropriate destroy request has been called.

This interface allows a client to adjust gamma tables for a particular output. The client will receive the gamma size, and will then be able to set gamma tables. At any time the compositor can send a failed event indicating that this object is no longer valid. There can only be at most one gamma control object per output, which has exclusive access to this particular output. When the gamma control object is destroyed, the gamma table is restored to its original value.

Advertise the size of each gamma ramp. This event is sent immediately when the gamma control object is created.

Set the gamma table. The file descriptor can be memory-mapped to provide the raw gamma table, which contains successive gamma ramps for the red, green and blue channels. Each gamma ramp is an array of 16-bit unsigned integers which has the same length as the gamma size. The file descriptor data must have the same length as three times the gamma size.

This event indicates that the gamma control is no longer valid.
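The table layout described above — three consecutive ramps of 16-bit values, one per channel, each ramp_size entries long — can be sketched as follows. This fills an identity ramp into a plain buffer; a real client would write the buffer into a memfd/shm file and pass that descriptor to set_gamma (a hedged sketch; the helper name is illustrative, not a library API):

```c
#include <stdint.h>
#include <stdlib.h>

/* Build an identity gamma table: successive red, green and blue ramps,
 * each ramp_size 16-bit entries. The caller frees the buffer. */
static uint16_t *make_identity_gamma_table(uint32_t ramp_size)
{
	uint16_t *table = calloc(3 * (size_t)ramp_size, sizeof(uint16_t));
	if (!table || ramp_size < 2)
		return table;
	for (uint32_t i = 0; i < ramp_size; i++) {
		/* Scale the index to the full 16-bit range. */
		uint16_t v = (uint16_t)((uint64_t)i * 0xffff / (ramp_size - 1));
		table[0 * ramp_size + i] = v; /* red ramp */
		table[1 * ramp_size + i] = v; /* green ramp */
		table[2 * ramp_size + i] = v; /* blue ramp */
	}
	return table;
}
```

The ramp_size value here is the one advertised by the gamma_size event.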
This can happen for a number of reasons, including:
- The output doesn't support gamma tables
- Setting the gamma tables failed
- Another client already has exclusive gamma control for this output
- The compositor has transferred gamma control to another client

Upon receiving this event, the client should destroy this object.

Destroys the gamma control object. If the object is still valid, this restores the original gamma tables.

waypipe-v0.9.1/protocols/wlr-screencopy-unstable-v1.xml

Copyright © 2018 Simon Ser
Copyright © 2019 Andri Yngvason

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This protocol allows clients to ask the compositor to copy part of the screen content to a client buffer.

Warning! The protocol described in this file is experimental and backward incompatible changes may be made. Backward compatible changes may be added together with the corresponding interface version bump.
Backward incompatible changes are done by bumping the version number in the protocol and interface names and resetting the interface version. Once the protocol is to be declared stable, the 'z' prefix and the version number in the protocol and interface names are removed and the interface version number is reset.

This object is a manager which offers requests to start capturing from a source.

Capture the next frame of an entire output.

Capture the next frame of an output's region. The region is given in output logical coordinates, see xdg_output.logical_size. The region will be clipped to the output's extents.

All objects created by the manager will still remain valid, until their appropriate destroy request has been called.

This object represents a single frame. When created, a series of buffer events will be sent, each representing a supported buffer type. The "buffer_done" event is sent afterwards to indicate that all supported buffer types have been enumerated. The client will then be able to send a "copy" request. If the capture is successful, the compositor will send a "flags" followed by a "ready" event. For objects version 2 or lower, wl_shm buffers are always supported, i.e. the "buffer" event is guaranteed to be sent. If the capture failed, the "failed" event is sent. This can happen anytime before the "ready" event. Once either a "ready" or a "failed" event is received, the client should destroy the frame.

Provides information about wl_shm buffer parameters that need to be used for this frame. This event is sent once after the frame is created if wl_shm buffers are supported.

Copy the frame to the supplied buffer. The buffer must have the correct size, see zwlr_screencopy_frame_v1.buffer and zwlr_screencopy_frame_v1.linux_dmabuf. The buffer needs to have a supported format. If the frame is successfully copied, "flags" and "ready" events are sent. Otherwise, a "failed" event is sent.

Provides flags about the frame.
This event is sent once before the "ready" event.

Called as soon as the frame is copied, indicating it is available for reading. This event includes the time at which presentation happened. The timestamp is expressed as tv_sec_hi, tv_sec_lo, tv_nsec triples, each component being an unsigned 32-bit value. Whole seconds are in tv_sec, which is a 64-bit value combined from tv_sec_hi and tv_sec_lo, and the additional fractional part is in tv_nsec as nanoseconds. Hence, for valid timestamps tv_nsec must be in [0, 999999999]. The seconds part may have an arbitrary offset at start. After receiving this event, the client should destroy the object.

This event indicates that the attempted frame copy has failed. After receiving this event, the client should destroy the object.

Destroys the frame. This request can be sent at any time by the client.

Same as copy, except it waits until there is damage to copy.

This event is sent right before the ready event when copy_with_damage is requested. It may be generated multiple times for each copy_with_damage request. The arguments describe a box around an area that has changed since the last copy request that was derived from the current screencopy manager instance. The union of all regions received between the call to copy_with_damage and a ready event is the total damage since the prior ready event.

Provides information about linux-dmabuf buffer parameters that need to be used for this frame. This event is sent once after the frame is created if linux-dmabuf buffers are supported.

This event is sent once after all buffer events have been sent. The client should proceed to create a buffer of one of the supported types, and send a "copy" request.

waypipe-v0.9.1/protocols/xdg-shell.xml

Copyright © 2008-2013 Kristian Høgsberg
Copyright © 2013 Rafael Antognolli
Copyright © 2013 Jasper St. Pierre
Copyright © 2010-2013 Intel Corporation
Copyright © 2015-2017 Samsung Electronics Co., Ltd
Copyright © 2015-2017 Red Hat Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The xdg_wm_base interface is exposed as a global object enabling clients to turn their wl_surfaces into windows in a desktop environment. It defines the basic functionality needed for clients and the compositor to create windows that can be dragged, resized, maximized, etc, as well as creating transient windows such as popup menus.

Destroy this xdg_wm_base object. Destroying a bound xdg_wm_base object while there are surfaces still alive created by this xdg_wm_base object instance is illegal and will result in a protocol error.

Create a positioner object. A positioner object is used to position surfaces relative to some parent surface. See the interface description and xdg_surface.get_popup for details.

This creates an xdg_surface for the given surface.
While xdg_surface itself is not a role, the corresponding surface may only be assigned a role extending xdg_surface, such as xdg_toplevel or xdg_popup. It is illegal to create an xdg_surface for a wl_surface which already has an assigned role and this will result in a protocol error. An xdg_surface is used as basis to define a role to a given surface, such as xdg_toplevel or xdg_popup. It also manages functionality shared between xdg_surface based surface roles. See the documentation of xdg_surface for more details about what an xdg_surface is and how it is used.

A client must respond to a ping event with a pong request, or the client may be deemed unresponsive. See xdg_wm_base.ping.

The ping event asks the client if it's still alive. Pass the serial specified in the event back to the compositor by sending a "pong" request back with the specified serial. See xdg_wm_base.pong. Compositors can use this to determine if the client is still alive. It's unspecified what will happen if the client doesn't respond to the ping request, or in what timeframe. Clients should try to respond in a reasonable amount of time. A compositor is free to ping in any way it wants, but a client must always respond to any xdg_wm_base object it created.

The xdg_positioner provides a collection of rules for the placement of a child surface relative to a parent surface. Rules can be defined to ensure the child surface remains within the visible area's borders, and to specify how the child surface changes its position, such as sliding along an axis, or flipping around a rectangle. These positioner-created rules are constrained by the requirement that a child surface must intersect with or be at least partially adjacent to its parent surface. See the various requests for details about possible rules.

At the time of the request, the compositor makes a copy of the rules specified by the xdg_positioner.
Thus, after the request is complete the xdg_positioner object can be destroyed or reused; further changes to the object will have no effect on previous usages.

For an xdg_positioner object to be considered complete, it must have a non-zero size set by set_size, and a non-zero anchor rectangle set by set_anchor_rect. Passing an incomplete xdg_positioner object when positioning a surface raises an error.

Notify the compositor that the xdg_positioner will no longer be used.

Set the size of the surface that is to be positioned with the positioner object. The size is in surface-local coordinates and corresponds to the window geometry. See xdg_surface.set_window_geometry. If a zero or negative size is set the invalid_input error is raised.

Specify the anchor rectangle within the parent surface that the child surface will be placed relative to. The rectangle is relative to the window geometry as defined by xdg_surface.set_window_geometry of the parent surface. When the xdg_positioner object is used to position a child surface, the anchor rectangle may not extend outside the window geometry of the positioned child's parent surface. If a negative size is set the invalid_input error is raised.

Defines the anchor point for the anchor rectangle. The specified anchor is used to derive an anchor point that the child surface will be positioned relative to. If a corner anchor is set (e.g. 'top_left' or 'bottom_right'), the anchor point will be at the specified corner; otherwise, the derived anchor point will be centered on the specified edge, or in the center of the anchor rectangle if no edge is specified.

Defines in what direction a surface should be positioned, relative to the anchor point of the parent surface. If a corner gravity is specified (e.g. 'bottom_right' or 'top_left'), then the child surface will be placed towards the specified gravity; otherwise, the child surface will be centered over the anchor point on any axis that had no gravity specified.
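The anchor-point derivation described above (corner if two edges are given, edge midpoint if one, rectangle center if none) can be sketched with plain arithmetic. This is an illustrative model, not the compositor's implementation; the type and enum names are made up for the example:

```c
#include <stdint.h>

/* Illustrative edge flags; the real protocol uses an anchor enum with
 * combined values such as top_left. */
enum edge { EDGE_TOP = 1, EDGE_BOTTOM = 2, EDGE_LEFT = 4, EDGE_RIGHT = 8 };

struct rect { int32_t x, y, width, height; };
struct point { int32_t x, y; };

/* Derive the anchor point on an anchor rectangle: a corner when two
 * perpendicular edges are set, the midpoint of a single edge, or the
 * rectangle's center when no edge is set. */
static struct point derive_anchor_point(struct rect r, uint32_t edges)
{
	struct point p = { r.x + r.width / 2, r.y + r.height / 2 };
	if (edges & EDGE_LEFT)
		p.x = r.x;
	else if (edges & EDGE_RIGHT)
		p.x = r.x + r.width;
	if (edges & EDGE_TOP)
		p.y = r.y;
	else if (edges & EDGE_BOTTOM)
		p.y = r.y + r.height;
	return p;
}
```

For example, with an anchor rectangle at (10, 20) of size 100x50, the 'bottom' anchor yields the midpoint of the bottom edge, (60, 70).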
The constraint adjustment value defines ways the compositor will adjust the position of the surface, if the unadjusted position would result in the surface being partly constrained. Whether a surface is considered 'constrained' is left to the compositor to determine. For example, the surface may be partly outside the compositor's defined 'work area', thus necessitating the child surface's position be adjusted until it is entirely inside the work area. The adjustments can be combined, according to a defined precedence: 1) Flip, 2) Slide, 3) Resize.

Don't alter the surface position even if it is constrained on some axis, for example partially outside the edge of an output.

Slide the surface along the x axis until it is no longer constrained. First try to slide towards the direction of the gravity on the x axis until either the edge in the opposite direction of the gravity is unconstrained or the edge in the direction of the gravity is constrained. Then try to slide towards the opposite direction of the gravity on the x axis until either the edge in the direction of the gravity is unconstrained or the edge in the opposite direction of the gravity is constrained.

Slide the surface along the y axis until it is no longer constrained. First try to slide towards the direction of the gravity on the y axis until either the edge in the opposite direction of the gravity is unconstrained or the edge in the direction of the gravity is constrained. Then try to slide towards the opposite direction of the gravity on the y axis until either the edge in the direction of the gravity is unconstrained or the edge in the opposite direction of the gravity is constrained.

Invert the anchor and gravity on the x axis if the surface is constrained on the x axis. For example, if the left edge of the surface is constrained, the gravity is 'left' and the anchor is 'left', change the gravity to 'right' and the anchor to 'right'.
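As a rough illustration of the slide behaviour described above, the one-axis case can be modeled as follows. This is a deliberately simplified sketch, not the compositor's algorithm: the work area is taken as [0, area) and gravity is reduced to a sign:

```c
#include <stdint.h>

/* Slide a child interval [x, x + w) along one axis so that as much of
 * it as possible lies inside the work area [0, area). When the child
 * is larger than the area it cannot be fully unconstrained; following
 * the text's precedence, the edge opposite the gravity direction ends
 * up unconstrained (gravity = +1 means towards positive coordinates). */
static int32_t slide_axis(int32_t x, int32_t w, int32_t area, int gravity)
{
	if (w >= area)
		return gravity > 0 ? 0 : area - w;
	if (x < 0)
		return 0;
	if (x + w > area)
		return area - w;
	return x; /* already unconstrained */
}
```

For instance, a 50-wide popup at x = 80 in a 100-wide area slides back to x = 50.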
If the adjusted position also ends up being constrained, the resulting position of the flip_x adjustment will be the one before the adjustment.

Invert the anchor and gravity on the y axis if the surface is constrained on the y axis. For example, if the bottom edge of the surface is constrained, the gravity is 'bottom' and the anchor is 'bottom', change the gravity to 'top' and the anchor to 'top'. The adjusted position is calculated given the original anchor rectangle and offset, but with the new flipped anchor and gravity values. If the adjusted position also ends up being constrained, the resulting position of the flip_y adjustment will be the one before the adjustment.

Resize the surface horizontally so that it is completely unconstrained.

Resize the surface vertically so that it is completely unconstrained.

Specify how the window should be positioned if the originally intended position caused the surface to be constrained, meaning at least partially outside positioning boundaries set by the compositor. The adjustment is set by constructing a bitmask describing the adjustment to be made when the surface is constrained on that axis. If no bit for one axis is set, the compositor will assume that the child surface should not change its position on that axis when constrained. If more than one bit for one axis is set, the order of how adjustments are applied is specified in the corresponding adjustment descriptions. The default adjustment is none.

Specify the surface position offset relative to the position of the anchor on the anchor rectangle and the anchor on the surface. For example, if the anchor of the anchor rectangle is at (x, y), the surface has the gravity bottom|right, and the offset is (ox, oy), the calculated surface position will be (x + ox, y + oy). The offset position of the surface is the one used for constraint testing. See set_constraint_adjustment.
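The set_offset arithmetic above is just a translation of the derived anchor point; the worked example in the text ((x, y) plus (ox, oy) under gravity bottom|right) looks like this as code (a trivial sketch; names are illustrative):

```c
#include <stdint.h>

struct pos { int32_t x, y; };

/* With gravity bottom|right the child's top-left corner lands on the
 * anchor point, so the offset translates the position directly:
 * anchor (x, y) plus offset (ox, oy) gives (x + ox, y + oy). */
static struct pos apply_offset(struct pos anchor, int32_t ox, int32_t oy)
{
	struct pos p = { anchor.x + ox, anchor.y + oy };
	return p;
}
```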
An example use case is placing a popup menu on top of a user interface element, while aligning the user interface element of the parent surface with some user interface element placed somewhere in the popup surface.

When set reactive, the surface is reconstrained if the conditions used for constraining changed, e.g. the parent window moved. If the conditions changed and the popup was reconstrained, an xdg_popup.configure event is sent with updated geometry, followed by an xdg_surface.configure event.

Set the parent window geometry the compositor should use when positioning the popup. The compositor may use this information to determine the future state the popup should be constrained using. If this doesn't match the dimension of the parent the popup is eventually positioned against, the behavior is undefined. The arguments are given in the surface-local coordinate space.

Set the serial of an xdg_surface.configure event this positioner will be used in response to. The compositor may use this information together with set_parent_size to determine what future state the popup should be constrained using.

An interface that may be implemented by a wl_surface, for implementations that provide a desktop-style user interface. It provides a base set of functionality required to construct user interface elements requiring management by the compositor, such as toplevel windows, menus, etc. The types of functionality are split into xdg_surface roles. Creating an xdg_surface does not set the role for a wl_surface. In order to map an xdg_surface, the client must create a role-specific object using, e.g., get_toplevel, get_popup. The wl_surface for any given xdg_surface can have at most one role, and may not be assigned any role not based on xdg_surface. A role must be assigned before any other requests are made to the xdg_surface object. The client must call wl_surface.commit on the corresponding wl_surface for the xdg_surface state to take effect.
Creating an xdg_surface from a wl_surface which has a buffer attached or committed is a client error, and any attempts by a client to attach or manipulate a buffer prior to the first xdg_surface.configure call must also be treated as errors.

After creating a role-specific object and setting it up, the client must perform an initial commit without any buffer attached. The compositor will reply with an xdg_surface.configure event. The client must acknowledge it and is then allowed to attach a buffer to map the surface.

Mapping an xdg_surface-based role surface is defined as making it possible for the surface to be shown by the compositor. Note that a mapped surface is not guaranteed to be visible once it is mapped.

For an xdg_surface to be mapped by the compositor, the following conditions must be met:
(1) the client has assigned an xdg_surface-based role to the surface
(2) the client has set and committed the xdg_surface state and the role-dependent state to the surface
(3) the client has committed a buffer to the surface

A newly-unmapped surface is considered to have met condition (1) out of the 3 required conditions for mapping a surface if its role surface has not been destroyed.

Destroy the xdg_surface object. An xdg_surface must only be destroyed after its role object has been destroyed.

This creates an xdg_toplevel object for the given xdg_surface and gives the associated wl_surface the xdg_toplevel role. See the documentation of xdg_toplevel for more details about what an xdg_toplevel is and how it is used.

This creates an xdg_popup object for the given xdg_surface and gives the associated wl_surface the xdg_popup role. If null is passed as a parent, a parent surface must be specified using some other protocol, before committing the initial state. See the documentation of xdg_popup for more details about what an xdg_popup is and how it is used.

The window geometry of a surface is its "visible bounds" from the user's perspective.
Client-side decorations often have invisible portions like drop-shadows which should be ignored for the purposes of aligning, placing and constraining windows.

The window geometry is double buffered, and will be applied at the time wl_surface.commit of the corresponding wl_surface is called.

When maintaining a position, the compositor should treat the (x, y) coordinate of the window geometry as the top left corner of the window. A client changing the (x, y) window geometry coordinate should in general not alter the position of the window.

Once the window geometry of the surface is set, it is not possible to unset it, and it will remain the same until set_window_geometry is called again, even if a new subsurface or buffer is attached. If never set, the value is the full bounds of the surface, including any subsurfaces. This updates dynamically on every commit. This unset is meant for extremely simple clients.

The arguments are given in the surface-local coordinate space of the wl_surface associated with this xdg_surface. The width and height must be greater than zero. Setting an invalid size will raise an error. When applied, the effective window geometry will be the set window geometry clamped to the bounding rectangle of the combined geometry of the surface of the xdg_surface and the associated subsurfaces.

When a configure event is received, if a client commits the surface in response to the configure event, then the client must make an ack_configure request sometime before the commit request, passing along the serial of the configure event. For instance, for toplevel surfaces the compositor might use this information to move a surface to the top left only when the client has drawn itself for the maximized or fullscreen state. If the client receives multiple configure events before it can respond to one, it only has to ack the last configure event.
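The "only ack the last configure event" rule above means a client can simply overwrite a pending serial on every configure event and acknowledge once, just before committing. A minimal bookkeeping sketch (not real libwayland code; the names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

struct configure_state {
	uint32_t pending_serial;
	bool has_pending;
};

/* Each xdg_surface.configure event overwrites the pending serial;
 * earlier unacknowledged configures are superseded. */
static void on_configure(struct configure_state *s, uint32_t serial)
{
	s->pending_serial = serial;
	s->has_pending = true;
}

/* Immediately before wl_surface.commit: returns true and the serial
 * to pass to xdg_surface.ack_configure if an ack is still owed. */
static bool take_pending_ack(struct configure_state *s, uint32_t *serial)
{
	if (!s->has_pending)
		return false;
	*serial = s->pending_serial;
	s->has_pending = false;
	return true;
}
```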
A client is not required to commit immediately after sending an ack_configure request - it may even ack_configure several times before its next surface commit. A client may send multiple ack_configure requests before committing, but only the last request sent before a commit indicates which configure event the client really is responding to.

The configure event marks the end of a configure sequence. A configure sequence is a set of one or more events configuring the state of the xdg_surface, including the final xdg_surface.configure event. Where applicable, xdg_surface surface roles will during a configure sequence extend this event as a latched state sent as events before the xdg_surface.configure event. Such events should be considered to make up a set of atomically applied configuration states, where the xdg_surface.configure commits the accumulated state. Clients should arrange their surface for the new states, and then send an ack_configure request with the serial sent in this configure event at some point before committing the new surface. If the client receives multiple configure events before it can respond to one, it is free to discard all but the last event it received.

This interface defines an xdg_surface role which allows a surface to, among other things, set window-like properties such as maximize, fullscreen, and minimize, set application-specific metadata like title and id, as well as trigger user interactive operations such as interactive resize and move.

Unmapping an xdg_toplevel means that the surface cannot be shown by the compositor until it is explicitly mapped again. All active operations (e.g., move, resize) are canceled and all attributes (e.g. title, state, stacking, ...) are discarded for an xdg_toplevel surface when it is unmapped. The xdg_toplevel returns to the state it had right after xdg_surface.get_toplevel.
The client can re-map the toplevel by performing a commit without any buffer attached, waiting for a configure event and handling it as usual (see xdg_surface description).

Attaching a null buffer to a toplevel unmaps the surface.

This request destroys the role surface and unmaps the surface; see "Unmapping" behavior in interface section for details.

Set the "parent" of this surface. This surface should be stacked above the parent surface and all other ancestor surfaces. Parent windows should be set on dialogs, toolboxes, or other "auxiliary" surfaces, so that the parent is raised when the dialog is raised. Setting a null parent for a child window removes any parent-child relationship for the child. Setting a null parent for a window which currently has no parent is a no-op. If the parent is unmapped then its children are managed as though the parent of the now-unmapped parent has become the parent of this surface. If no parent exists for the now-unmapped parent then the children are managed as though they have no parent surface.

Set a short title for the surface. This string may be used to identify the surface in a task bar, window list, or other user interface elements provided by the compositor. The string must be encoded in UTF-8.

Set an application identifier for the surface. The app ID identifies the general class of applications to which the surface belongs. The compositor can use this to group multiple surfaces together, or to determine how to launch a new application. For D-Bus activatable applications, the app ID is used as the D-Bus service name. The compositor shell will try to group application surfaces together by their app ID. As a best practice, it is suggested to select app IDs that match the basename of the application's .desktop file. For example, "org.freedesktop.FooViewer" where the .desktop file is "org.freedesktop.FooViewer.desktop".
Like other properties, a set_app_id request can be sent after the xdg_toplevel has been mapped to update the property.

See the desktop-entry specification [0] for more details on application identifiers and how they relate to well-known D-Bus names and .desktop files.

[0] http://standards.freedesktop.org/desktop-entry-spec/

Clients implementing client-side decorations might want to show a context menu when right-clicking on the decorations, giving the user a menu that they can use to maximize or minimize the window. This request asks the compositor to pop up such a window menu at the given position, relative to the local surface coordinates of the parent surface. There are no guarantees as to what menu items the window menu contains. This request must be used in response to some sort of user action like a button press, key press, or touch down event.

Start an interactive, user-driven move of the surface. This request must be used in response to some sort of user action like a button press, key press, or touch down event. The passed serial is used to determine the type of interactive move (touch, pointer, etc). The server may ignore move requests depending on the state of the surface (e.g. fullscreen or maximized), or if the passed serial is no longer valid. If triggered, the surface will lose the focus of the device (wl_pointer, wl_touch, etc) used for the move. It is up to the compositor to visually indicate that the move is taking place, such as updating a pointer cursor, during the move. There is no guarantee that the device focus will return when the move is completed.

These values are used to indicate which edge of a surface is being dragged in a resize operation.

Start a user-driven, interactive resize of the surface. This request must be used in response to some sort of user action like a button press, key press, or touch down event. The passed serial is used to determine the type of interactive resize (touch, pointer, etc).
The server may ignore resize requests depending on the state of the surface (e.g. fullscreen or maximized).

If triggered, the client will receive configure events with the "resize" state enum value and the expected sizes. See the "resize" enum value for more details about what is required. The client must also acknowledge configure events using "ack_configure". After the resize is completed, the client will receive another "configure" event without the resize state.

If triggered, the surface also will lose the focus of the device (wl_pointer, wl_touch, etc) used for the resize. It is up to the compositor to visually indicate that the resize is taking place, such as updating a pointer cursor, during the resize. There is no guarantee that the device focus will return when the resize is completed.

The edges parameter specifies how the surface should be resized, and is one of the values of the resize_edge enum. The compositor may use this information to update the surface position for example when dragging the top left corner. The compositor may also use this information to adapt its behavior, e.g. choose an appropriate cursor image.

The different state values used on the surface. This is designed for state values like maximized, fullscreen. It is paired with the configure event to ensure that both the client and the compositor setting the state can be synchronized. States set in this way are double-buffered. They will get applied on the next commit.

The surface is maximized. The window geometry specified in the configure event must be obeyed by the client. The client should draw without shadow or other decoration outside of the window geometry.

The surface is fullscreen. The window geometry specified in the configure event is a maximum; the client cannot resize beyond it. For a surface to cover the whole fullscreened area, the geometry dimensions must be obeyed by the client. For more details, see xdg_toplevel.set_fullscreen.

The surface is being resized.
The window geometry specified in the configure event is a maximum; the client cannot resize beyond it. Clients that have aspect ratio or cell sizing configuration can use a smaller size, however.

Client window decorations should be painted as if the window is active. Do not assume this means that the window actually has keyboard or pointer focus.

The window is currently in a tiled layout and the left edge is considered to be adjacent to another part of the tiling grid.

The window is currently in a tiled layout and the right edge is considered to be adjacent to another part of the tiling grid.

The window is currently in a tiled layout and the top edge is considered to be adjacent to another part of the tiling grid.

The window is currently in a tiled layout and the bottom edge is considered to be adjacent to another part of the tiling grid.

Set a maximum size for the window. The client can specify a maximum size so that the compositor does not try to configure the window beyond this size. The width and height arguments are in window geometry coordinates. See xdg_surface.set_window_geometry. Values set in this way are double-buffered. They will get applied on the next commit. The compositor can use this information to allow or disallow different states like maximize or fullscreen and draw accurate animations. Similarly, a tiling window manager may use this information to place and resize client windows in a more effective way. The client should not rely on the compositor to obey the maximum size. The compositor may decide to ignore the values set by the client and request a larger size. If never set, or set to a value of zero in the request, the client has no expected maximum size in the given dimension. As a result, a client wishing to reset the maximum size to an unspecified state can use zero for width and height in the request. Requesting a maximum size to be smaller than the minimum size of a surface is illegal and will result in a protocol error.
The width and height must be greater than or equal to zero. Using strictly negative values for width and height will result in a protocol error.

Set a minimum size for the window. The client can specify a minimum size so that the compositor does not try to configure the window below this size. The width and height arguments are in window geometry coordinates. See xdg_surface.set_window_geometry.

Values set in this way are double-buffered. They will get applied on the next commit.

The compositor can use this information to allow or disallow different states like maximize or fullscreen and draw accurate animations. Similarly, a tiling window manager may use this information to place and resize client windows in a more effective way. The client should not rely on the compositor to obey the minimum size. The compositor may decide to ignore the values set by the client and request a smaller size.

If never set, or set to zero in the request, the client has no expected minimum size in the given dimension. As a result, a client wishing to reset the minimum size to an unspecified state can use zero for width and height in the request.

Requesting a minimum size to be larger than the maximum size of a surface is illegal and will result in a protocol error. The width and height must be greater than or equal to zero. Using strictly negative values for width and height will result in a protocol error.

Maximize the surface. After requesting that the surface should be maximized, the compositor will respond by emitting a configure event. Whether this configure actually sets the window maximized is subject to compositor policies. The client must then update its content, drawing in the configured state. The client must also acknowledge the configure when committing the new content (see ack_configure).

It is up to the compositor to decide how and where to maximize the surface, for example which output and what region of the screen should be used.
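The validity rules for set_max_size/set_min_size can be condensed into a small check: zero means "unset" in that dimension, strictly negative values are a protocol error, and a set maximum smaller than a set minimum is a protocol error. The helper below is a hypothetical compositor-side sketch of those rules, not part of any xdg-shell implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical validity check for a toplevel's requested min/max sizes:
 * negative values are illegal, zero means unset, and a nonzero maximum
 * below a nonzero minimum is illegal. */
static bool size_request_valid(int32_t min_w, int32_t min_h, int32_t max_w,
                int32_t max_h)
{
        if (min_w < 0 || min_h < 0 || max_w < 0 || max_h < 0) {
                return false; /* strictly negative: protocol error */
        }
        /* zero = unset, so only compare dimensions where both are set */
        if (max_w != 0 && min_w != 0 && max_w < min_w) {
                return false;
        }
        if (max_h != 0 && min_h != 0 && max_h < min_h) {
                return false;
        }
        return true;
}
```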
If the surface was already maximized, the compositor will still emit a configure event with the "maximized" state.

If the surface is in a fullscreen state, this request has no direct effect. It may alter the state the surface is returned to when unmaximized unless overridden by the compositor.

Unmaximize the surface. After requesting that the surface should be unmaximized, the compositor will respond by emitting a configure event. Whether this actually un-maximizes the window is subject to compositor policies. If available and applicable, the compositor will include the window geometry dimensions the window had prior to being maximized in the configure event. The client must then update its content, drawing it in the configured state. The client must also acknowledge the configure when committing the new content (see ack_configure).

It is up to the compositor to position the surface after it was unmaximized; usually the position the surface had before maximizing, if applicable.

If the surface was already not maximized, the compositor will still emit a configure event without the "maximized" state.

If the surface is in a fullscreen state, this request has no direct effect. It may alter the state the surface is returned to when unmaximized unless overridden by the compositor.

Make the surface fullscreen. After requesting that the surface should be fullscreened, the compositor will respond by emitting a configure event. Whether the client is actually put into a fullscreen state is subject to compositor policies. The client must also acknowledge the configure when committing the new content (see ack_configure).

The output passed by the request indicates the client's preference as to which display it should be set fullscreen on. If this value is NULL, it's up to the compositor to choose which display will be used to map this surface.
If the surface doesn't cover the whole output, the compositor will position the surface in the center of the output and compensate with border fill covering the rest of the output. The content of the border fill is undefined, but should be assumed to be in some way that attempts to blend into the surrounding area (e.g. solid black).

If the fullscreened surface is not opaque, the compositor must make sure that other screen content not part of the same surface tree (made up of subsurfaces, popups or similarly coupled surfaces) are not visible below the fullscreened surface.

Make the surface no longer fullscreen. After requesting that the surface should be unfullscreened, the compositor will respond by emitting a configure event. Whether this actually removes the fullscreen state of the client is subject to compositor policies.

Making a surface unfullscreen sets states for the surface based on the following:
* the state(s) it may have had before becoming fullscreen
* any state(s) decided by the compositor
* any state(s) requested by the client while the surface was fullscreen

The compositor may include the previous window geometry dimensions in the configure event, if applicable. The client must also acknowledge the configure when committing the new content (see ack_configure).

Request that the compositor minimize your surface. There is no way to know if the surface is currently minimized, nor is there any way to unset minimization on this surface.

If you are looking to throttle redrawing when minimized, please instead use the wl_surface.frame event for this, as this will also work with live previews on windows in Alt-Tab, Expose or similar compositor features.

This configure event asks the client to resize its toplevel surface or to change its state. The configured state should not be applied immediately. See xdg_surface.configure for details.
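The centering rule for a fullscreen surface smaller than its output is simple integer arithmetic. The sketch below is illustrative only (the helper name is not from the protocol); it computes where a compositor following the rule above would place the surface, with border fill covering the remainder.

```c
#include <assert.h>
#include <stdint.h>

/* Place a surface at the center of an output; the area outside
 * (*x, *y)..(*x + surf_w, *y + surf_h) would be border fill. */
static void center_on_output(int32_t out_w, int32_t out_h, int32_t surf_w,
                int32_t surf_h, int32_t *x, int32_t *y)
{
        *x = (out_w - surf_w) / 2;
        *y = (out_h - surf_h) / 2;
}
```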
The width and height arguments specify a hint to the window about how its surface should be resized in window geometry coordinates. See set_window_geometry.

If the width or height arguments are zero, it means the client should decide its own window dimension. This may happen when the compositor needs to configure the state of the surface but doesn't have any information about any previous or expected dimension.

The states listed in the event specify how the width/height arguments should be interpreted, and possibly how it should be drawn. Clients must send an ack_configure in response to this event. See xdg_surface.configure and xdg_surface.ack_configure for details.

The close event is sent by the compositor when the user wants the surface to be closed. This should be equivalent to the user clicking the close button in client-side decorations, if your application has any.

This is only a request that the user intends to close the window. The client may choose to ignore this request, or show a dialog to ask the user to save their data, etc.

A popup surface is a short-lived, temporary surface. It can be used to implement for example menus, popovers, tooltips and other similar user interface concepts.

A popup can be made to take an explicit grab. See xdg_popup.grab for details.

When the popup is dismissed, a popup_done event will be sent out, and at the same time the surface will be unmapped. See the xdg_popup.popup_done event for details.

Explicitly destroying the xdg_popup object will also dismiss the popup and unmap the surface. Clients that want to dismiss the popup when another surface of their own is clicked should dismiss the popup using the destroy request.

A newly created xdg_popup will be stacked on top of all previously created xdg_popup surfaces associated with the same xdg_toplevel.

The parent of an xdg_popup must be mapped (see the xdg_surface description) before the xdg_popup itself.
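The "zero means the client decides" rule for configure dimensions reduces to a one-line selection between the compositor's hint and the client's own preference. This is a hedged sketch; the helper name is illustrative and not part of any toolkit API.

```c
#include <assert.h>
#include <stdint.h>

/* Pick the dimension a client should draw at: a nonzero configured
 * value is the compositor's requested size in window geometry
 * coordinates, while zero lets the client use its own preference. */
static int32_t effective_dimension(int32_t configured, int32_t preferred)
{
        return configured != 0 ? configured : preferred;
}
```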
The client must call wl_surface.commit on the corresponding wl_surface for the xdg_popup state to take effect.

This destroys the popup. Explicitly destroying the xdg_popup object will also dismiss the popup, and unmap the surface. If this xdg_popup is not the "topmost" popup, a protocol error will be sent.

This request makes the created popup take an explicit grab. An explicit grab will be dismissed when the user dismisses the popup, or when the client destroys the xdg_popup. This can be done by the user clicking outside the surface, using the keyboard, or even locking the screen through closing the lid or a timeout.

If the compositor denies the grab, the popup will be immediately dismissed.

This request must be used in response to some sort of user action like a button press, key press, or touch down event. The serial number of the event should be passed as 'serial'.

The parent of a grabbing popup must either be an xdg_toplevel surface or another xdg_popup with an explicit grab. If the parent is another xdg_popup it means that the popups are nested, with this popup now being the topmost popup.

Nested popups must be destroyed in the reverse order they were created in, e.g. the only popup you are allowed to destroy at all times is the topmost one.

When compositors choose to dismiss a popup, they may dismiss every nested grabbing popup as well. When a compositor dismisses popups, it will follow the same dismissing order as required from the client.

The parent of a grabbing popup must either be another xdg_popup with an active explicit grab, or an xdg_popup or xdg_toplevel, if there are no explicit grabs already taken.

If the topmost grabbing popup is destroyed, the grab will be returned to the parent of the popup, if that parent previously had an explicit grab.

If the parent is a grabbing popup which has already been dismissed, this popup will be immediately dismissed. If the parent is a popup that did not take an explicit grab, an error will be raised.
During a popup grab, the client owning the grab will receive pointer and touch events for all their surfaces as normal (similar to an "owner-events" grab in X11 parlance), while the topmost grabbing popup will always have keyboard focus.

This event asks the popup surface to configure itself given the configuration. The configured state should not be applied immediately. See xdg_surface.configure for details.

The x and y arguments represent the position the popup was placed at given the xdg_positioner rule, relative to the upper left corner of the window geometry of the parent surface.

For version 2 or older, the configure event for an xdg_popup is only ever sent once for the initial configuration. Starting with version 3, it may be sent again if the popup is set up with an xdg_positioner with set_reactive requested, or in response to xdg_popup.reposition requests.

The popup_done event is sent out when a popup is dismissed by the compositor. The client should destroy the xdg_popup object at this point.

Reposition an already-mapped popup. The popup will be placed given the details in the passed xdg_positioner object, and an xdg_popup.repositioned followed by xdg_popup.configure and xdg_surface.configure will be emitted in response. Any parameters set by the previous positioner will be discarded.

The passed token will be sent in the corresponding xdg_popup.repositioned event. The new popup position will not take effect until the corresponding configure event is acknowledged by the client. See xdg_popup.repositioned for details. The token itself is opaque, and has no other special meaning.

If multiple reposition requests are sent, the compositor may skip all but the last one.

If the popup is repositioned in response to a configure event for its parent, the client should send an xdg_positioner.set_parent_configure and possibly an xdg_positioner.set_parent_size request to allow the compositor to properly constrain the popup.
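Because the compositor may skip all but the last reposition request, a client only needs to remember the token of its most recent request and treat a repositioned event as current when the tokens match. The struct and function below are an illustrative sketch of that bookkeeping, not part of the protocol or any toolkit.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Client-side bookkeeping for xdg_popup.reposition tokens: only the
 * repositioned event carrying the most recently sent token describes
 * the popup's current position. */
struct reposition_state {
        uint32_t last_sent_token;
};

static bool repositioned_is_current(
                const struct reposition_state *st, uint32_t event_token)
{
        return event_token == st->last_sent_token;
}
```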
If the popup is repositioned together with a parent that is being resized, but not in response to a configure event, the client should send an xdg_positioner.set_parent_size request.

The repositioned event is sent as part of a popup configuration sequence, together with xdg_popup.configure and lastly xdg_surface.configure to notify the completion of a reposition request. The repositioned event is to notify about the completion of a xdg_popup.reposition request. The token argument is the token passed in the xdg_popup.reposition request.

Immediately after this event is emitted, xdg_popup.configure and xdg_surface.configure will be sent with the updated size and position, as well as a new configure serial. The client should optionally update the content of the popup, but must acknowledge the new popup configuration for the new position to take effect. See xdg_surface.ack_configure for details.

waypipe-v0.9.1/src/bench.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT.
 * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
#include "shadow.h"
#include "util.h"

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

struct compression_range {
        enum compression_mode mode;
        int min_val;
        int max_val;
        const char *desc;
};

static const struct compression_range comp_ranges[] = {
        {COMP_NONE, 0, 0, "none"},
#ifdef HAS_LZ4
        {COMP_LZ4, -10, 16, "lz4"},
#endif
#ifdef HAS_ZSTD
        {COMP_ZSTD, -10, 22, "zstd"},
#endif
};

static void *create_text_like_image(size_t size)
{
        uint8_t *data = malloc(size);
        if (!data) {
                return NULL;
        }
        for (size_t i = 0; i < size; i++) {
                size_t step = i / 203 - i / 501;
                bool s = step % 2 == 0;
                data[i] = (uint8_t)(s ? ((step >> 1) & 0x2) + 0xfe : 0x00);
        }
        // int f = open("1.rgb", O_RDONLY);
        // read(f, data, size);
        // close(f);
        return data;
}

static void *create_video_like_image(size_t size)
{
        uint8_t *data = malloc(size);
        if (!data) {
                return NULL;
        }
        for (size_t i = 0; i < size; i++) {
                /* primary sequence, with runs, but avoiding obvious repetition
                 * then add fine grain, a main source of complexity in real
                 * images */
                uint32_t noise = (uint32_t)rand() % 2;
                data[i] = (uint8_t)(i + i / 101 + i / 33 + noise);
        }
        // int f = open("0.rgb", O_RDONLY);
        // read(f, data, size);
        // close(f);
        return data;
}

/** Create a shuffled variation of the original image.
*/ static void perturb(void *data, size_t size) { uint8_t *bytes = (uint8_t *)data; for (int i = 0; i < 50; i++) { // TODO: avoid redundant motion, and make this very fast size_t low = (size_t)rand() % size; size_t high = (size_t)rand() % size; if (low >= high) { continue; } for (size_t k = 0; k < (high - low) / 2; k++) { uint8_t tmp = bytes[low + k]; bytes[low + k] = bytes[high - k]; bytes[high - k] = tmp; } } } struct bench_result { const struct compression_range *rng; int level; float comp_time, dcomp_time; }; static int float_compare(const void *a, const void *b) { float va = *(const float *)a; float vb = *(const float *)b; if (va < vb) return -1; if (va > vb) return 1; return 0; } static int compare_bench_result(const void *a, const void *b) { const struct bench_result *va = (const struct bench_result *)a; const struct bench_result *vb = (const struct bench_result *)b; if (va->comp_time < vb->comp_time) return -1; if (va->comp_time > vb->comp_time) return 1; return 0; } struct diff_comp_results { /* Compressed packet size, in bytes */ float packet_size; /* Time to construct compressed diff, in seconds */ float diffcomp_time; /* Diff size / buffer size */ float diff_frac; /* Compressed size / original size */ float comp_frac; }; static int compare_timespec(const struct timespec *a, const struct timespec *b) { if (a->tv_sec != b->tv_sec) return a->tv_sec < b->tv_sec ? -1 : 1; if (a->tv_nsec != b->tv_nsec) return a->tv_nsec < b->tv_nsec ? 
-1 : 1; return 0; } /* requires delta >= 0 */ static struct timespec timespec_add(struct timespec base, int64_t delta_ns) { struct timespec ret; ret.tv_sec = base.tv_sec + delta_ns / 1000000000LL; ret.tv_nsec = base.tv_nsec + delta_ns % 1000000000LL; if (ret.tv_nsec > 1000000000LL) { ret.tv_nsec -= 1000000000LL; ret.tv_sec++; } return ret; } static int64_t timespec_sub(struct timespec a, struct timespec b) { return (a.tv_sec - b.tv_sec) * 1000000000LL + (a.tv_nsec - b.tv_nsec); } #define NSAMPLES 5 static struct bench_result run_sub_bench(bool first, const struct compression_range *rng, int level, float bandwidth_mBps, int n_worker_threads, unsigned int seed, bool text_like, size_t test_size, void *image) { /* Reset seed, so that all random image * perturbations are consistent between runs */ srand(seed); /* Setup a shadow structure */ struct thread_pool pool; setup_thread_pool(&pool, rng->mode, level, n_worker_threads); if (first) { printf("Running compression level benchmarks, assuming bandwidth=%g MB/s, with %d threads\n", bandwidth_mBps, pool.nthreads); } struct fd_translation_map map; setup_translation_map(&map, false); struct wmsg_open_file file_msg; file_msg.remote_id = 0; file_msg.file_size = (uint32_t)test_size; file_msg.size_and_type = transfer_header( sizeof(struct wmsg_open_file), WMSG_OPEN_FILE); struct render_data render; memset(&render, 0, sizeof(render)); render.disabled = true; render.drm_fd = 1; render.av_disabled = true; struct bytebuf msg = {.size = sizeof(struct wmsg_open_file), .data = (char *)&file_msg}; (void)apply_update(&map, &pool, &render, WMSG_OPEN_FILE, 0, &msg); struct shadow_fd *sfd = get_shadow_for_rid(&map, 0); int iter = 0; float samples[NSAMPLES]; float diff_frac[NSAMPLES], comp_frac[NSAMPLES]; for (; !shutdown_flag && iter < NSAMPLES; iter++) { /* Reset image state */ memcpy(sfd->mem_local, image, test_size); memcpy(sfd->mem_mirror, image, test_size); perturb(sfd->mem_local, test_size); sfd->is_dirty = true; 
damage_everything(&sfd->damage); /* Create transfer queue */ struct transfer_queue transfer_data; memset(&transfer_data, 0, sizeof(struct transfer_queue)); pthread_mutex_init(&transfer_data.async_recv_queue.lock, NULL); struct timespec t0, t1; clock_gettime(CLOCK_REALTIME, &t0); collect_update(&pool, sfd, &transfer_data, false); start_parallel_work(&pool, &transfer_data.async_recv_queue); /* A restricted main loop, in which transfer blocks are * instantaneously consumed when previous blocks have been * 'sent' */ struct timespec next_write_time = {.tv_sec = 0, .tv_nsec = 0}; size_t total_wire_size = 0; size_t net_diff_size = 0; while (1) { uint8_t flush[64]; (void)read(pool.selfpipe_r, flush, sizeof(flush)); /* Run tasks on main thread, just like the main loop */ bool done = false; struct task_data task; bool has_task = request_work_task(&pool, &task, &done); if (has_task) { run_task(&task, &pool.threads[0]); pthread_mutex_lock(&pool.work_mutex); pool.tasks_in_progress--; pthread_mutex_unlock(&pool.work_mutex); } struct timespec cur_time; clock_gettime(CLOCK_REALTIME, &cur_time); if (compare_timespec(&next_write_time, &cur_time) < 0) { transfer_load_async(&transfer_data); if (transfer_data.start < transfer_data.end) { struct iovec v = transfer_data.vecs [transfer_data.start++]; float delay_s = (float)v.iov_len / (bandwidth_mBps * 1e6f); total_wire_size += v.iov_len; /* Only one message type will be * produced for diffs */ struct wmsg_buffer_diff *header = v.iov_base; net_diff_size += (size_t)(header->diff_size + header->ntrailing); /* Advance timer for next receipt */ int64_t delay_ns = (int64_t)(delay_s * 1e9f); next_write_time = timespec_add( cur_time, delay_ns); } } else { /* Very short delay, for poll loop */ bool tasks_remaining = false; pthread_mutex_lock(&pool.work_mutex); tasks_remaining = pool.stack_count > 0; pthread_mutex_unlock(&pool.work_mutex); struct timespec delay_time; delay_time.tv_sec = 0; delay_time.tv_nsec = 10000; if (!tasks_remaining) { 
int64_t nsecs_left = timespec_sub( next_write_time, cur_time); if (nsecs_left > 1000000000LL) { nsecs_left = 1000000000LL; } if (nsecs_left > delay_time.tv_nsec) { delay_time.tv_nsec = nsecs_left; } } nanosleep(&delay_time, NULL); } bool all_sent = false; all_sent = transfer_data.start == transfer_data.end; if (done && all_sent) { break; } } finish_update(sfd); cleanup_transfer_queue(&transfer_data); clock_gettime(CLOCK_REALTIME, &t1); struct diff_comp_results r; r.packet_size = (float)total_wire_size; r.diffcomp_time = 1.0f * (float)(t1.tv_sec - t0.tv_sec) + 1e-9f * (float)(t1.tv_nsec - t0.tv_nsec); r.comp_frac = r.packet_size / (float)net_diff_size; r.diff_frac = (float)net_diff_size / (float)test_size; samples[iter] = r.diffcomp_time; diff_frac[iter] = r.diff_frac; comp_frac[iter] = r.comp_frac; } /* Cleanup sfd and helper structures */ cleanup_thread_pool(&pool); cleanup_translation_map(&map); qsort(samples, (size_t)iter, sizeof(float), float_compare); qsort(diff_frac, (size_t)iter, sizeof(float), float_compare); qsort(comp_frac, (size_t)iter, sizeof(float), float_compare); /* Using order statistics, because moment statistics a) require * libm; b) don't work well with outliers. */ float median = samples[iter / 2]; float hiqr = (samples[(iter * 3) / 4] - samples[iter / 4]) / 2; float dmedian = diff_frac[iter / 2]; float dhiqr = (diff_frac[(iter * 3) / 4] - diff_frac[iter / 4]) / 2; float cmedian = comp_frac[iter / 2]; float chiqr = (comp_frac[(iter * 3) / 4] - comp_frac[iter / 4]) / 2; struct bench_result res; res.rng = rng; res.level = level; printf("%s, %s=%d: transfer %f+/-%f sec, diff %f+/-%f, comp %f+/-%f\n", text_like ? "txt" : "img", rng->desc, level, median, hiqr, dmedian, dhiqr, cmedian, chiqr); res.comp_time = median; res.dcomp_time = hiqr; return res; } int run_bench(float bandwidth_mBps, uint32_t test_size, int n_worker_threads) { /* 4MB test image - 1024x1024x4. 
Any smaller, and unrealistic caching * speedups may occur */ struct timespec tp; clock_gettime(CLOCK_REALTIME, &tp); srand((unsigned int)tp.tv_nsec); void *text_image = create_text_like_image(test_size); void *vid_image = create_video_like_image(test_size); if (!text_image || !vid_image) { free(text_image); free(vid_image); wp_error("Failed to allocate test images"); return EXIT_FAILURE; } /* Q: store an array of all the modes -> outputs */ // Then sort that array int ntests = 0; for (size_t c = 0; c < sizeof(comp_ranges) / sizeof(comp_ranges[0]); c++) { ntests += comp_ranges[c].max_val - comp_ranges[c].min_val + 1; } /* For the content, the mode is generally consistent */ struct bench_result *tresults = calloc((size_t)ntests, sizeof(struct bench_result)); struct bench_result *iresults = calloc((size_t)ntests, sizeof(struct bench_result)); int ntres = 0, nires = 0; for (int k = 0; k < 2; k++) { bool text_like = k == 0; int j = 0; for (size_t c = 0; !shutdown_flag && c < sizeof(comp_ranges) / sizeof(comp_ranges[0]); c++) { for (int lvl = comp_ranges[c].min_val; !shutdown_flag && lvl <= comp_ranges[c].max_val; lvl++) { struct bench_result res = run_sub_bench(j == 0, &comp_ranges[c], lvl, bandwidth_mBps, n_worker_threads, (unsigned int)tp.tv_nsec, text_like, test_size, text_like ? text_image : vid_image); if (text_like) { tresults[j++] = res; ntres++; } else { iresults[j++] = res; nires++; } } } } for (int k = 0; k < 2; k++) { bool text_like = k == 0; struct bench_result *results = text_like ? tresults : iresults; int nr = text_like ? ntres : nires; if (nr <= 0) { continue; } /* Print best recommendation */ qsort(results, (size_t)nr, sizeof(struct bench_result), compare_bench_result); struct bench_result best = results[0]; printf("%s, best compression level: \"%s=%d\", with %f+/-%f sec for sample transfer\n", text_like ? 
"Text heavy image" : "Photo-like image", best.rng->desc, best.level, best.comp_time, best.dcomp_time); } free(tresults); free(iresults); free(vid_image); free(text_image); return EXIT_SUCCESS; } waypipe-v0.9.1/src/client.c000066400000000000000000000560561463133614300156160ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
 */
#include "main.h"

#include <errno.h>
#include <fcntl.h>
#include <inttypes.h>
#include <poll.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

static inline uint32_t conntoken_version(uint32_t header)
{
        return header >> 16;
}

static int check_conn_header(uint32_t header, const struct main_config *config,
                char *err, size_t err_size)
{
        if ((header >> 16) != WAYPIPE_PROTOCOL_VERSION) {
                const char *endian_warning = "";
                if ((header & CONN_FIXED_BIT) == 0 &&
                                (header & CONN_UNSET_BIT) != 0) {
                        endian_warning = " It is also possible that server endianness does not match client";
                }
                snprintf(err, err_size,
                                "Waypipe client is rejecting connection header %08" PRIx32
                                "; as Waypipe server (application-side) protocol version (%u) is incompatible with Waypipe client protocol version (%u, from waypipe %s). Check that both sides have compatible versions of Waypipe.%s",
                                header, conntoken_version(header),
                                WAYPIPE_PROTOCOL_VERSION, WAYPIPE_VERSION,
                                endian_warning);
                return -1;
        }
        /* Skip the following checks if config is null
         * (i.e., called from reconnection loop) */
        if (!config) {
                return 0;
        }
        /* For now, reject mismatches in compression format and video coding
         * setting, and print an error. Adopting whatever the server asks for
         * is a minor security issue -- e.g., video handling is a good target
         * for exploits, and compression can cost CPU time, especially if the
         * initial connection mechanism were to be expanded to allow setting
         * compression level.
 */
        if ((header & CONN_COMPRESSION_MASK) == CONN_ZSTD_COMPRESSION) {
                if (config->compression != COMP_ZSTD) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=ZSTD the Waypipe server expected",
                                        compression_mode_to_str(
                                                        config->compression));
                        return -1;
                }
        } else if ((header & CONN_COMPRESSION_MASK) == CONN_LZ4_COMPRESSION) {
                if (config->compression != COMP_LZ4) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=LZ4 the Waypipe server expected",
                                        compression_mode_to_str(
                                                        config->compression));
                        return -1;
                }
        } else if ((header & CONN_COMPRESSION_MASK) == CONN_NO_COMPRESSION) {
                if (config->compression != COMP_NONE) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the compression=NONE the Waypipe server expected",
                                        compression_mode_to_str(
                                                        config->compression));
                        return -1;
                }
        } else if ((header & CONN_COMPRESSION_MASK) != 0) {
                snprintf(err, err_size,
                                "Waypipe client is rejecting connection, Waypipe client is configured for compression=%s, not the unidentified compression type the Waypipe server expected",
                                compression_mode_to_str(config->compression));
                return -1;
        }

        if ((header & CONN_VIDEO_MASK) == CONN_VP9_VIDEO) {
                if (!config->video_if_possible) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server");
                        return -1;
                }
                if (config->video_fmt != VIDEO_VP9) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client was not configured for the VP9 video coding format requested by the Waypipe server");
                        return -1;
                }
        } else if ((header & CONN_VIDEO_MASK) == CONN_H264_VIDEO) {
                if (!config->video_if_possible) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server");
                        return -1;
                }
                if (config->video_fmt != VIDEO_H264) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client was not configured for the H264 video coding format requested by the Waypipe server");
                        return -1;
                }
        } else if ((header & CONN_VIDEO_MASK) == CONN_AV1_VIDEO) {
                if (!config->video_if_possible) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client was not run with video encoding enabled, unlike Waypipe server");
                        return -1;
                }
                if (config->video_fmt != VIDEO_AV1) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client was not configured for the AV1 video coding format requested by the Waypipe server");
                        return -1;
                }
        } else if ((header & CONN_VIDEO_MASK) == CONN_NO_VIDEO) {
                if (config->video_if_possible) {
                        snprintf(err, err_size,
                                        "Waypipe client is rejecting connection, Waypipe client has video encoding enabled, but Waypipe server does not");
                        return -1;
                }
        } else if ((header & CONN_VIDEO_MASK) != 0) {
                snprintf(err, err_size,
                                "Waypipe client is rejecting connection, Waypipe client was not configured for the unidentified video coding format requested by the Waypipe server");
                return -1;
        }
        return 0;
}

static void apply_conn_header(uint32_t header, struct main_config *config)
{
        if (header & CONN_NO_DMABUF_SUPPORT) {
                if (config) {
                        config->no_gpu = true;
                }
        }
        // todo: consider allowing to disable video encoding
}

static void write_rejection_message(int channel_fd, char *msg)
{
        char buf[512];
        size_t len = print_wrapped_error(buf, sizeof(buf), msg);
        if (!len) {
                wp_error("Failed to print wrapped error for message of length %zu, not enough space",
                                strlen(msg));
                return;
        }
        ssize_t written = write(channel_fd, buf, len);
        if (written != (ssize_t)len) {
                wp_error("Failed to send rejection message, only %d bytes of %d written",
                                (int)written, (int)len);
        }
}

static inline bool key_match(
                const uint32_t key1[static 3], const uint32_t key2[static 3])
{
        return key1[0] == key2[0] && key1[1] ==
                        key2[1] && key1[2] == key2[2];
}

static int get_inherited_socket(const char *wayland_socket)
{
        uint32_t val;
        if (parse_uint32(wayland_socket, &val) == -1 || ((int)val) < 0) {
                wp_error("Failed to parse \"%s\" (value of WAYLAND_SOCKET) as a nonnegative integer, exiting",
                                wayland_socket);
                return -1;
        }
        int fd = (int)val;
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1 && errno == EBADF) {
                wp_error("The file descriptor WAYLAND_SOCKET=%d was invalid, exiting",
                                fd);
                return -1;
        }
        return fd;
}

static int get_display_path(char *path, size_t max_len)
{
        const char *display = getenv("WAYLAND_DISPLAY");
        if (!display) {
                wp_error("WAYLAND_DISPLAY is not set, exiting");
                return -1;
        }
        if (display[0] != '/') {
                const char *xdg_runtime_dir = getenv("XDG_RUNTIME_DIR");
                if (!xdg_runtime_dir) {
                        wp_error("XDG_RUNTIME_DIR is not set, exiting");
                        return -1;
                }
                if (multi_strcat(path, max_len, xdg_runtime_dir, "/", display,
                                    NULL) == 0) {
                        wp_error("full WAYLAND_DISPLAY path '%s' is longer than %zu bytes, exiting",
                                        display, max_len);
                        return -1;
                }
        } else {
                if (strlen(display) + 1 >= max_len) {
                        wp_error("WAYLAND_DISPLAY='%s' is longer than %zu bytes, exiting",
                                        display, max_len);
                        return -1;
                }
                strcpy(path, display);
        }
        return 0;
}

static int run_single_client_reconnector(
                int channelsock, int linkfd, struct connection_token conn_id)
{
        int retcode = EXIT_SUCCESS;
        while (!shutdown_flag) {
                struct pollfd pf[2];
                pf[0].fd = channelsock;
                pf[0].events = POLLIN;
                pf[0].revents = 0;
                pf[1].fd = linkfd;
                pf[1].events = 0;
                pf[1].revents = 0;

                int r = poll(pf, 2, -1);
                if (r == -1 && errno == EINTR) {
                        continue;
                } else if (r == -1) {
                        retcode = EXIT_FAILURE;
                        break;
                } else if (r == 0) {
                        // Nothing to read
                        continue;
                }
                if (pf[1].revents & POLLHUP) {
                        /* Hang up, main thread has closed its link */
                        break;
                }
                if (!(pf[0].revents & POLLIN)) {
                        continue;
                }
                int newclient = accept(channelsock, NULL, NULL);
                if (newclient == -1) {
                        if (errno == EAGAIN || errno == EWOULDBLOCK) {
                                // The wakeup may have been spurious
                                continue;
                        }
wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } wp_debug("Reconnection to oneshot client"); struct connection_token new_conn; memset(&new_conn, 0, sizeof(new_conn)); if (read(newclient, &new_conn.header, sizeof(new_conn.header)) != sizeof(new_conn.header)) { wp_error("Failed to get connection id header"); goto done; } if (check_conn_header(new_conn.header, NULL, NULL, 0) < 0) { goto done; } if (read(newclient, &new_conn.key, sizeof(new_conn.key)) != sizeof(new_conn.key)) { wp_error("Failed to get connection id key"); goto done; } if (!key_match(new_conn.key, conn_id.key)) { wp_error("Connection attempt with unmatched key"); goto done; } bool update = new_conn.header & CONN_RECONNECTABLE_BIT; if (!update) { wp_error("Connection token is missing update flag"); goto done; } if (send_one_fd(linkfd, newclient) == -1) { wp_error("Failed to get connection id"); retcode = EXIT_FAILURE; checked_close(newclient); break; } done: checked_close(newclient); } checked_close(channelsock); checked_close(linkfd); return retcode; } static int run_single_client(int channelsock, pid_t *eol_pid, const struct main_config *config, int disp_fd) { /* To support reconnection attempts, this mode creates a child * reconnection watcher process, linked via socketpair */ int retcode = EXIT_SUCCESS; int chanclient = -1; struct connection_token conn_id; memset(&conn_id, 0, sizeof(conn_id)); while (!shutdown_flag) { int status = -1; if (wait_for_pid_and_clean(eol_pid, &status, WNOHANG, NULL)) { eol_pid = 0; // < in case eol_pid is recycled wp_debug("Child (ssh) died, exiting"); // Copy the exit code retcode = WEXITSTATUS(status); break; } struct pollfd cs; cs.fd = channelsock; cs.events = POLLIN; cs.revents = 0; int r = poll(&cs, 1, -1); if (r == -1) { if (errno == EINTR) { // If SIGCHLD, we will check the child. 
// If SIGINT, the loop ends continue; } retcode = EXIT_FAILURE; break; } else if (r == 0) { // Nothing to read continue; } chanclient = accept(channelsock, NULL, NULL); if (chanclient == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The wakeup may have been spurious continue; } wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } char err_msg[512]; wp_debug("New connection to client"); if (read(chanclient, &conn_id.header, sizeof(conn_id.header)) != sizeof(conn_id.header)) { wp_error("Failed to get connection id header"); goto fail_cc; } if (check_conn_header(conn_id.header, config, err_msg, sizeof(err_msg)) < 0) { wp_error("%s", err_msg); write_rejection_message(chanclient, err_msg); goto fail_cc; } if (read(chanclient, &conn_id.key, sizeof(conn_id.key)) != sizeof(conn_id.key)) { wp_error("Failed to get connection id key"); goto fail_cc; } break; fail_cc: retcode = EXIT_FAILURE; checked_close(chanclient); chanclient = -1; break; } if (retcode == EXIT_FAILURE || shutdown_flag || chanclient == -1) { checked_close(channelsock); checked_close(disp_fd); return retcode; } if (conn_id.header & CONN_UPDATE_BIT) { wp_error("Initial connection token had update flag set"); checked_close(channelsock); checked_close(disp_fd); return retcode; } /* Fork a reconnection handler, only if the connection is * reconnectable/has a nonzero id */ int linkfds[2] = {-1, -1}; if (conn_id.header & CONN_RECONNECTABLE_BIT) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linkfds) == -1) { wp_error("Failed to create socketpair: %s", strerror(errno)); checked_close(chanclient); return EXIT_FAILURE; } pid_t reco_pid = fork(); if (reco_pid == -1) { wp_error("Fork failure: %s", strerror(errno)); checked_close(chanclient); return EXIT_FAILURE; } else if (reco_pid == 0) { if (linkfds[0] != -1) { checked_close(linkfds[0]); } checked_close(chanclient); checked_close(disp_fd); int rc = run_single_client_reconnector( channelsock, linkfds[1], conn_id); exit(rc); } 
checked_close(linkfds[1]); } checked_close(channelsock); struct main_config mod_config = *config; apply_conn_header(conn_id.header, &mod_config); return main_interface_loop( chanclient, disp_fd, linkfds[0], &mod_config, true); } void send_new_connection_fd( struct conn_map *connmap, uint32_t key[static 3], int new_fd) { for (int i = 0; i < connmap->count; i++) { if (key_match(connmap->data[i].token.key, key)) { if (send_one_fd(connmap->data[i].linkfd, new_fd) == -1) { wp_error("Failed to send new connection fd to subprocess: %s", strerror(errno)); } break; } } } static void handle_new_client_connection(int cwd_fd, struct pollfd *other_fds, int n_other_fds, int chanclient, struct conn_map *connmap, const struct main_config *config, const struct socket_path disp_path, const struct connection_token *conn_id) { bool reconnectable = conn_id->header & CONN_RECONNECTABLE_BIT; if (reconnectable && buf_ensure_size(connmap->count + 1, sizeof(struct conn_addr), &connmap->size, (void **)&connmap->data) == -1) { wp_error("Failed to allocate space to track connection"); goto fail_cc; } int linkfds[2] = {-1, -1}; if (reconnectable) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linkfds) == -1) { wp_error("Failed to create socketpair: %s", strerror(errno)); goto fail_cc; } } pid_t npid = fork(); if (npid == 0) { // Run forked process, with the only shared // state being the new channel socket for (int i = 0; i < n_other_fds; i++) { if (other_fds[i].fd != chanclient) { checked_close(other_fds[i].fd); } } if (reconnectable) { checked_close(linkfds[0]); } for (int i = 0; i < connmap->count; i++) { checked_close(connmap->data[i].linkfd); } int display_fd = -1; if (connect_to_socket(cwd_fd, disp_path, NULL, &display_fd) == -1) { exit(EXIT_FAILURE); } checked_close(cwd_fd); struct main_config mod_config = *config; apply_conn_header(conn_id->header, &mod_config); int rc = main_interface_loop(chanclient, display_fd, linkfds[1], &mod_config, true); check_unclosed_fds(); exit(rc); } else if 
(npid == -1) { wp_error("Fork failure: %s", strerror(errno)); goto fail_ps; } // Remove connection from this process if (reconnectable) { checked_close(linkfds[1]); connmap->data[connmap->count++] = (struct conn_addr){.linkfd = linkfds[0], .token = *conn_id, .pid = npid}; } return; fail_ps: checked_close(linkfds[0]); fail_cc: checked_close(chanclient); return; } #define NUM_INCOMPLETE_CONNECTIONS 63 static void drop_incoming_connection(struct pollfd *fds, struct connection_token *tokens, uint8_t *bytes_read, int index, int incomplete) { checked_close(fds[index].fd); if (index != incomplete - 1) { size_t shift = (size_t)(incomplete - 1 - index); memmove(fds + index, fds + index + 1, sizeof(struct pollfd) * shift); memmove(tokens + index, tokens + index + 1, sizeof(struct connection_token) * shift); memmove(bytes_read + index, bytes_read + index + 1, sizeof(uint8_t) * shift); } memset(&fds[incomplete - 1], 0, sizeof(struct pollfd)); memset(&tokens[incomplete - 1], 0, sizeof(struct connection_token)); bytes_read[incomplete - 1] = 0; } static int run_multi_client(int cwd_fd, int channelsock, pid_t *eol_pid, const struct main_config *config, const struct socket_path disp_path) { struct conn_map connmap = {.data = NULL, .count = 0, .size = 0}; /* Keep track of the main socket, and all connections which have not * yet fully provided their connection token. 
If we run out of space, * the oldest incomplete connection gets dropped */ struct pollfd fds[NUM_INCOMPLETE_CONNECTIONS + 1]; struct connection_token tokens[NUM_INCOMPLETE_CONNECTIONS]; uint8_t bytes_read[NUM_INCOMPLETE_CONNECTIONS]; int incomplete = 0; memset(fds, 0, sizeof(fds)); memset(tokens, 0, sizeof(tokens)); memset(bytes_read, 0, sizeof(bytes_read)); fds[0].fd = channelsock; fds[0].events = POLLIN; fds[0].revents = 0; int retcode = EXIT_SUCCESS; while (!shutdown_flag) { int status = -1; if (wait_for_pid_and_clean( eol_pid, &status, WNOHANG, &connmap)) { wp_debug("Child (ssh) died, exiting"); // Copy the exit code retcode = WEXITSTATUS(status); break; } int r = poll(fds, 1 + (nfds_t)incomplete, -1); if (r == -1) { if (errno == EINTR) { // If SIGCHLD, we will check the child. // If SIGINT, the loop ends continue; } retcode = EXIT_FAILURE; break; } else if (r == 0) { // Nothing to read continue; } for (int i = 0; i < incomplete; i++) { if (!(fds[i + 1].revents & POLLIN)) { continue; } int cur_fd = fds[i + 1].fd; char *dest = ((char *)&tokens[i]) + bytes_read[i]; ssize_t s = read(cur_fd, dest, 16 - bytes_read[i]); if (s == -1) { wp_error("Failed to read from connection: %s", strerror(errno)); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } else if (s == 0) { /* connection closed */ wp_error("Connection closed early"); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } bytes_read[i] += (uint8_t)s; if (bytes_read[i] - (uint8_t)s < 4 && bytes_read[i] >= 4) { char err_msg[512]; /* Validate connection token header */ if (check_conn_header(tokens[i].header, config, err_msg, sizeof(err_msg)) < 0) { wp_error("%s", err_msg); write_rejection_message( cur_fd, err_msg); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } } if (bytes_read[i] < 16) { continue; } /* Validate connection token key */ if (tokens[i].header & CONN_UPDATE_BIT) { 
send_new_connection_fd(&connmap, tokens[i].key, cur_fd); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; continue; } /* Failures here are logged, but should not * affect this process' ability to e.g. handle * reconnections. */ handle_new_client_connection(cwd_fd, fds, 1 + incomplete, cur_fd, &connmap, config, disp_path, &tokens[i]); drop_incoming_connection(fds + 1, tokens, bytes_read, i, incomplete); incomplete--; } /* Process new connections second, to give incomplete * connections a chance to clear first */ if (fds[0].revents & POLLIN) { int chanclient = accept(channelsock, NULL, NULL); if (chanclient == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The wakeup may have been spurious continue; } // should errors like econnaborted exit? wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } wp_debug("New connection to client"); if (set_nonblocking(chanclient) == -1) { wp_error("Error making new connection nonblocking: %s", strerror(errno)); checked_close(chanclient); continue; } if (incomplete == NUM_INCOMPLETE_CONNECTIONS) { wp_error("Dropping oldest incomplete connection (out of %d)", NUM_INCOMPLETE_CONNECTIONS); drop_incoming_connection(fds + 1, tokens, bytes_read, 0, incomplete); incomplete--; } fds[1 + incomplete].fd = chanclient; fds[1 + incomplete].events = POLLIN; fds[1 + incomplete].revents = 0; memset(&tokens[incomplete], 0, sizeof(struct connection_token)); bytes_read[incomplete] = 0; incomplete++; } } for (int i = 0; i < incomplete; i++) { checked_close(fds[i + 1].fd); } for (int i = 0; i < connmap.count; i++) { checked_close(connmap.data[i].linkfd); } free(connmap.data); checked_close(channelsock); return retcode; } int run_client(int cwd_fd, const char *sock_folder_name, int sock_folder_fd, const char *sock_filename, const struct main_config *config, bool oneshot, const char *wayland_socket, pid_t eol_pid, int channelsock) { wp_debug("I'm a client listening on '%s' / '%s'", 
sock_folder_name, sock_filename); wp_debug("version: %s", WAYPIPE_VERSION); /* Connect to Wayland display. We don't use the wayland-client * function here, because its errors aren't immediately useful, * and older Wayland versions have edge cases */ int dispfd = -1; struct sockaddr_un display_filename = {0}; char display_folder[256] = {0}; if (wayland_socket) { dispfd = get_inherited_socket(wayland_socket); if (dispfd == -1) { goto fail; } /* This socket is inherited and meant to be closed by Waypipe */ if (dispfd >= 0 && dispfd < 256) { inherited_fds[dispfd / 64] &= ~(1uLL << (dispfd % 64)); } } else { if (get_display_path(display_folder, sizeof(display_folder)) == -1) { goto fail; } if (split_socket_path(display_folder, &display_filename) == -1) { goto fail; } } struct socket_path display_path = { .folder = display_folder, .filename = &display_filename, }; if (oneshot) { if (!wayland_socket) { connect_to_socket(cwd_fd, display_path, NULL, &dispfd); } } else { int test_conn = -1; if (connect_to_socket(cwd_fd, display_path, NULL, &test_conn) == -1) { goto fail; } checked_close(test_conn); } wp_debug("A wayland compositor is available. Proceeding."); /* These handlers close the channelsock and dispfd */ int retcode; if (oneshot) { retcode = run_single_client( channelsock, &eol_pid, config, dispfd); } else { retcode = run_multi_client(cwd_fd, channelsock, &eol_pid, config, display_path); } if (!config->vsock) { unlink_at_folder(cwd_fd, sock_folder_fd, sock_folder_name, sock_filename); } int cleanup_type = shutdown_flag ? 
WNOHANG : 0;
        int status = -1;
        // Don't return until all child processes complete
        if (wait_for_pid_and_clean(&eol_pid, &status, cleanup_type, NULL)) {
                retcode = WEXITSTATUS(status);
        }
        return retcode;

fail:
        close(channelsock);
        if (eol_pid) {
                waitpid(eol_pid, NULL, 0);
        }
        if (!config->vsock) {
                unlink_at_folder(cwd_fd, sock_folder_fd, sock_folder_name,
                                sock_filename);
        }
        return EXIT_FAILURE;
}

waypipe-v0.9.1/src/dmabuf.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "dmabuf.h"

#include "util.h"

#ifndef HAS_DMABUF

int init_render_data(struct render_data *data)
{
        data->disabled = true;
        (void)data;
        return -1;
}
void cleanup_render_data(struct render_data *data) { (void)data; }
struct gbm_bo *import_dmabuf(struct render_data *rd, int fd, size_t *size,
                const struct dmabuf_slice_data *info)
{
        (void)rd;
        (void)fd;
        (void)size;
        (void)info;
        return NULL;
}
int get_unique_dmabuf_handle(
                struct render_data *rd, int fd, struct gbm_bo **temporary_bo)
{
        (void)rd;
        (void)fd;
        (void)temporary_bo;
        return -1;
}
struct gbm_bo *make_dmabuf(
                struct render_data *rd, const struct dmabuf_slice_data *info)
{
        (void)rd;
        (void)info;
        return NULL;
}
int export_dmabuf(struct gbm_bo *bo)
{
        (void)bo;
        return -1;
}
void destroy_dmabuf(struct gbm_bo *bo) { (void)bo; }
void *map_dmabuf(struct gbm_bo *bo, bool write, void **map_handle,
                uint32_t *exp_stride)
{
        (void)bo;
        (void)write;
        (void)map_handle;
        (void)exp_stride;
        return NULL;
}
int unmap_dmabuf(struct gbm_bo *bo, void *map_handle)
{
        (void)bo;
        (void)map_handle;
        return 0;
}
uint32_t dmabuf_get_simple_format_for_plane(uint32_t format, int plane)
{
        (void)format;
        (void)plane;
        return 0;
}
uint32_t dmabuf_get_stride(struct gbm_bo *bo)
{
        (void)bo;
        return 0;
}

#else /* HAS_DMABUF */

#include <errno.h>
#include <fcntl.h>
#include <gbm.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#ifndef DRM_FORMAT_MOD_INVALID
#define DRM_FORMAT_MOD_INVALID 0x00ffffffffffffffULL
#endif

int init_render_data(struct render_data *data)
{
        /* render node support can be disabled either by choice
         * or when a previous version fails */
        if (data->disabled) {
                return -1;
        }
        if (data->drm_fd != -1) {
                // Silent return, idempotent
                return 0;
        }
        const char *card = data->drm_node_path ? data->drm_node_path
                                               : "/dev/dri/renderD128";

        int drm_fd = open(card, O_RDWR | O_CLOEXEC | O_NOCTTY);
        if (drm_fd == -1) {
                wp_error("Failed to open drm fd for %s: %s", card,
                                strerror(errno));
                data->disabled = true;
                return -1;
        }

        struct gbm_device *dev = gbm_create_device(drm_fd);
        if (!dev) {
                data->disabled = true;
                checked_close(drm_fd);
                wp_error("Failed to create gbm device from drm_fd");
                return -1;
        }

        data->drm_fd = drm_fd;
        data->dev = dev;
        /* Set the path to the card used for protocol handlers to see */
        data->drm_node_path = card;
        /* Assume true initially, fall back to old buffer creation path
         * if the newer path errors out */
        data->supports_modifiers = true;
        return 0;
}

void cleanup_render_data(struct render_data *data)
{
        if (data->drm_fd != -1) {
                gbm_device_destroy(data->dev);
                checked_close(data->drm_fd);
                data->dev = NULL;
                data->drm_fd = -1;
        }
}

static bool dmabuf_info_valid(const struct dmabuf_slice_data *info)
{
        if (info->height > (1u << 24) || info->width > (1u << 24) ||
                        info->num_planes > 4 || info->num_planes == 0) {
                wp_error("Invalid DMABUF slice data: height %" PRIu32
                         " width %" PRIu32 " num_planes %" PRIu32,
                                info->height, info->width, info->num_planes);
                return false;
        }
        return true;
}

struct gbm_bo *import_dmabuf(struct render_data *rd, int fd, size_t *size,
                const struct dmabuf_slice_data *info)
{
        struct gbm_bo *bo;
        if (!dmabuf_info_valid(info)) {
                return NULL;
        }
        /* Multiplanar formats are all rather badly supported by
         * drivers/libgbm/libdrm/compositors/applications/everything.
*/ struct gbm_import_fd_modifier_data data; // Select all plane metadata associated to planes linked // to this fd data.modifier = info->modifier; data.num_fds = 0; uint32_t simple_format = 0; for (int i = 0; i < info->num_planes; i++) { if (info->using_planes[i]) { data.fds[data.num_fds] = fd; data.strides[data.num_fds] = (int)info->strides[i]; data.offsets[data.num_fds] = (int)info->offsets[i]; data.num_fds++; if (!simple_format) { simple_format = dmabuf_get_simple_format_for_plane( info->format, i); } } } if (!simple_format) { simple_format = info->format; } data.width = info->width; data.height = info->height; data.format = simple_format; bo = gbm_bo_import(rd->dev, GBM_BO_IMPORT_FD_MODIFIER, &data, GBM_BO_USE_RENDERING); if (!bo) { wp_error("Failed to import dmabuf (format %x, modifier %" PRIx64 ") to gbm bo: %s", info->format, info->modifier, strerror(errno)); return NULL; } /* todo: find out how to correctly map multiplanar formats */ *size = gbm_bo_get_stride(bo) * gbm_bo_get_height(bo); return bo; } int get_unique_dmabuf_handle( struct render_data *rd, int fd, struct gbm_bo **temporary_bo) { struct gbm_import_fd_data data; data.fd = fd; data.width = 1; data.stride = 1; data.height = 1; data.format = GBM_FORMAT_R8; *temporary_bo = gbm_bo_import( rd->dev, GBM_BO_IMPORT_FD, &data, GBM_BO_USE_RENDERING); if (!*temporary_bo) { return -1; } // This effectively reduces to DRM_IOCTL_PRIME_FD_TO_HANDLE. Is the // runtime dependency worth it? int handle = gbm_bo_get_handle(*temporary_bo).s32; return handle; } struct gbm_bo *make_dmabuf( struct render_data *rd, const struct dmabuf_slice_data *info) { struct gbm_bo *bo; if (!dmabuf_info_valid(info)) { return NULL; } retry: if (!rd->supports_modifiers || info->modifier == DRM_FORMAT_MOD_INVALID) { uint32_t simple_format = dmabuf_get_simple_format_for_plane( info->format, 0); /* If the modifier is nonzero, assume that the backend * preferred modifier matches it. 
With this old API, there * really isn't any way to do this better */ bo = gbm_bo_create(rd->dev, info->width, info->height, simple_format, GBM_BO_USE_RENDERING | (info->modifier ? 0 : GBM_BO_USE_LINEAR)); if (!bo) { wp_error("Failed to make dmabuf (old path): %s", strerror(errno)); return NULL; } uint64_t mod = gbm_bo_get_modifier(bo); if (info->modifier != DRM_FORMAT_MOD_INVALID && mod != DRM_FORMAT_MOD_INVALID && mod != info->modifier) { wp_error("DMABUF with format %08x, autoselected modifier %" PRIx64 " does not match desired %" PRIx64 ", expect a crash", simple_format, mod, info->modifier); } } else { uint64_t modifiers[2] = {info->modifier, GBM_BO_USE_RENDERING}; uint32_t simple_format = dmabuf_get_simple_format_for_plane( info->format, 0); /* Whether just size and modifiers suffice to replicate * a surface is driver dependent, and requires actual testing * with the hardware. * * i915 DRM ioctls cover size, swizzling, tiling state, only. * amdgpu, size + allocation domain/caching/align flags * etnaviv, size + caching flags * tegra, vc4: size + tiling + flags * radeon: size + tiling + flags, including pitch * * Note that gbm doesn't have a specific api for creating * buffers with minimal information, or even just getting * the size of the buffer contents. 
*/ bo = gbm_bo_create_with_modifiers(rd->dev, info->width, info->height, simple_format, modifiers, 2); if (!bo && errno == ENOSYS) { wp_debug("Creating a DMABUF with modifiers explicitly set is not supported; retrying"); rd->supports_modifiers = false; goto retry; } if (!bo) { wp_error("Failed to make dmabuf (with format %x, modifier %" PRIx64 "): %s", simple_format, info->modifier, strerror(errno)); return NULL; } } return bo; } int export_dmabuf(struct gbm_bo *bo) { int fd = gbm_bo_get_fd(bo); if (fd == -1) { wp_error("Failed to export dmabuf: %s", strerror(errno)); } return fd; } void destroy_dmabuf(struct gbm_bo *bo) { if (bo) { gbm_bo_destroy(bo); } } void *map_dmabuf(struct gbm_bo *bo, bool write, void **map_handle, uint32_t *exp_stride) { if (!bo) { wp_error("Tried to map null gbm_bo"); return NULL; } /* With i965, the map handle MUST initially point to a NULL pointer; * otherwise the handler silently exits, sometimes with misleading errno * :-( */ *map_handle = NULL; uint32_t stride; uint32_t width = gbm_bo_get_width(bo); uint32_t height = gbm_bo_get_height(bo); /* As of writing, with amdgpu, GBM_BO_TRANSFER_WRITE invalidates * regions not written to during the mapping, while iris preserves * the original buffer contents. GBM documentation does not say which * WRITE behavior is correct. What the individual drivers do may change * in the future. Specifying READ_WRITE preserves the old contents with * both drivers. */ uint32_t flags = write ? GBM_BO_TRANSFER_READ_WRITE : GBM_BO_TRANSFER_READ; void *data = gbm_bo_map( bo, 0, 0, width, height, flags, &stride, map_handle); if (!data) { // errno is useless here wp_error("Failed to map dmabuf"); return NULL; } *exp_stride = stride; return data; } int unmap_dmabuf(struct gbm_bo *bo, void *map_handle) { gbm_bo_unmap(bo, map_handle); return 0; } // TODO: support DRM formats, like DRM_FORMAT_RGB888_A8 and // DRM_FORMAT_ARGB16161616F, defined in drm_fourcc.h. 
struct multiplanar_info {
        uint32_t format;
        struct {
                int subsample_w;
                int subsample_h;
                int cpp;
        } planes[3];
};
static const struct multiplanar_info plane_table[] = {
                {GBM_FORMAT_NV12, {{1, 1, 1}, {2, 2, 2}}},
                {GBM_FORMAT_NV21, {{1, 1, 1}, {2, 2, 2}}},
                {GBM_FORMAT_NV16, {{1, 1, 1}, {2, 1, 2}}},
                {GBM_FORMAT_NV61, {{1, 1, 1}, {2, 1, 2}}},
                {GBM_FORMAT_YUV410, {{1, 1, 1}, {4, 4, 1}, {4, 4, 1}}},
                {GBM_FORMAT_YVU410, {{1, 1, 1}, {4, 4, 1}, {4, 4, 1}}},
                {GBM_FORMAT_YUV411, {{1, 1, 1}, {4, 1, 1}, {4, 1, 1}}},
                {GBM_FORMAT_YVU411, {{1, 1, 1}, {4, 1, 1}, {4, 1, 1}}},
                {GBM_FORMAT_YUV420, {{1, 1, 1}, {2, 2, 1}, {2, 2, 1}}},
                {GBM_FORMAT_YVU420, {{1, 1, 1}, {2, 2, 1}, {2, 2, 1}}},
                {GBM_FORMAT_YUV422, {{1, 1, 1}, {2, 1, 1}, {2, 1, 1}}},
                {GBM_FORMAT_YVU422, {{1, 1, 1}, {2, 1, 1}, {2, 1, 1}}},
                {GBM_FORMAT_YUV444, {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}}},
                {GBM_FORMAT_YVU444, {{1, 1, 1}, {1, 1, 1}, {1, 1, 1}}}, {0}};

uint32_t dmabuf_get_simple_format_for_plane(uint32_t format, int plane)
{
        const uint32_t by_cpp[] = {0, GBM_FORMAT_R8, GBM_FORMAT_GR88,
                        GBM_FORMAT_RGB888, GBM_BO_FORMAT_ARGB8888};
        for (int i = 0; plane_table[i].format; i++) {
                if (plane_table[i].format == format) {
                        int cpp = plane_table[i].planes[plane].cpp;
                        return by_cpp[cpp];
                }
        }
        if (format == GBM_FORMAT_YUYV || format == GBM_FORMAT_YVYU ||
                        format == GBM_FORMAT_UYVY ||
                        format == GBM_FORMAT_VYUY ||
                        format == GBM_FORMAT_AYUV) {
                return by_cpp[4];
        }
        return format;
}

uint32_t dmabuf_get_stride(struct gbm_bo *bo) { return gbm_bo_get_stride(bo); }

#endif /* HAS_DMABUF */

waypipe-v0.9.1/src/dmabuf.h

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of
the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#ifndef WAYPIPE_DMABUF_H
#define WAYPIPE_DMABUF_H

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef void *VADisplay;
typedef unsigned int VAGenericID;
typedef VAGenericID VAConfigID;

struct render_data {
        bool disabled;
        int drm_fd;
        const char *drm_node_path;
        struct gbm_device *dev;
        bool supports_modifiers;
        /* video hardware context */
        bool av_disabled;
        int av_bpf;
        int av_video_fmt;
        struct AVBufferRef *av_hwdevice_ref;
        struct AVBufferRef *av_drmdevice_ref;
        VADisplay av_vadisplay;
        VAConfigID av_copy_config;
};

/** Additional information to help serialize a dmabuf */
struct dmabuf_slice_data {
        /* This information partially duplicates that of a gbm_bo. However, for
         * instance with weston, it is possible for the compositor to handle
         * multibuffer multiplanar images, even though a driver may only
         * support multiplanar images derived from a single underlying dmabuf.
         */
        uint32_t width;
        uint32_t height;
        uint32_t format;
        int32_t num_planes;
        uint32_t offsets[4];
        uint32_t strides[4];
        uint64_t modifier;
        // to which planes is the matching dmabuf assigned?
        uint8_t using_planes[4];
        char pad[4];
};
static_assert(sizeof(struct dmabuf_slice_data) == 64, "size check");

int init_render_data(struct render_data *);
void cleanup_render_data(struct render_data *);
struct gbm_bo *make_dmabuf(
                struct render_data *rd, const struct dmabuf_slice_data *info);
int export_dmabuf(struct gbm_bo *bo);
/** Import DMABUF to a GBM buffer object. */
struct gbm_bo *import_dmabuf(struct render_data *rd, int fd, size_t *size,
                const struct dmabuf_slice_data *info);
void destroy_dmabuf(struct gbm_bo *bo);
/** Map a DMABUF for reading or for writing */
void *map_dmabuf(struct gbm_bo *bo, bool write, void **map_handle,
                uint32_t *exp_stride);
int unmap_dmabuf(struct gbm_bo *bo, void *map_handle);
/** The handle values are unique among the set of currently active buffer
 * objects. To compare a set of buffer objects, produce handles in a batch, and
 * then free the temporary buffer objects in a batch */
int get_unique_dmabuf_handle(
                struct render_data *rd, int fd, struct gbm_bo **temporary_bo);
uint32_t dmabuf_get_simple_format_for_plane(uint32_t format, int plane);
uint32_t dmabuf_get_stride(struct gbm_bo *bo);
/** Returns the number of bytes per pixel for WL or DRM format 'format', if the
 * format is an RGBA-type single plane format. For YUV-type or planar formats,
 * returns -1.
 */
int get_shm_bytes_per_pixel(uint32_t format);

#endif // WAYPIPE_DMABUF_H

waypipe-v0.9.1/src/handlers.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
*/ #include "main.h" #include "parsing.h" #include "shadow.h" #include #include #include #include #include #include #include #ifndef DRM_FORMAT_MOD_INVALID #define DRM_FORMAT_MOD_INVALID 0x00ffffffffffffffULL #endif struct obj_wl_shm_pool { struct wp_object base; struct shadow_fd *owned_buffer; }; enum buffer_type { BUF_SHM, BUF_DMA }; // This should be a safe limit for the maximum number of dmabuf planes #define MAX_DMABUF_PLANES 8 struct obj_wl_buffer { struct wp_object base; enum buffer_type type; struct shadow_fd *shm_buffer; int32_t shm_offset; int32_t shm_width; int32_t shm_height; int32_t shm_stride; uint32_t shm_format; int dmabuf_nplanes; int32_t dmabuf_width; int32_t dmabuf_height; uint32_t dmabuf_format; uint32_t dmabuf_flags; struct shadow_fd *dmabuf_buffers[MAX_DMABUF_PLANES]; uint32_t dmabuf_offsets[MAX_DMABUF_PLANES]; uint32_t dmabuf_strides[MAX_DMABUF_PLANES]; uint64_t dmabuf_modifiers[MAX_DMABUF_PLANES]; uint64_t unique_id; }; struct damage_record { int x, y, width, height; bool buffer_coordinates; }; struct damage_list { struct damage_record *list; int len; int size; }; #define SURFACE_DAMAGE_BACKLOG 7 struct obj_wl_surface { struct wp_object base; /* The zeroth list is the "current" one, 1st was damage provided at last * commit, etc. */ struct damage_list damage_lists[SURFACE_DAMAGE_BACKLOG]; /* Unique buffer identifiers to which the above damage lists apply */ uint64_t attached_buffer_uids[SURFACE_DAMAGE_BACKLOG]; uint32_t attached_buffer_id; /* protocol object id */ int32_t scale; int32_t transform; }; struct obj_wlr_screencopy_frame { struct wp_object base; /* Link to a wp_buffer instead of its underlying data, * because if the buffer object is destroyed early, then * we do not want to accidentally write over a section of a shm_pool * which is now used for transport in the reverse direction. 
*/ uint32_t buffer_id; }; struct obj_wp_presentation { struct wp_object base; // reference clock - given clock int64_t clock_delta_nsec; int clock_id; }; struct obj_wp_presentation_feedback { struct wp_object base; int64_t clock_delta_nsec; }; struct obj_zwp_linux_dmabuf_params { struct wp_object base; struct shadow_fd *sfds; // These variables are set by 'params.create', and passed on in // params.created int32_t create_width; int32_t create_height; uint32_t create_format; uint32_t create_flags; struct { int fd; struct shadow_fd *buffer; uint32_t offset; uint32_t stride; uint64_t modifier; } add[MAX_DMABUF_PLANES]; int nplanes; }; struct format_table_entry { uint32_t format; uint32_t padding; uint64_t modifier; }; struct dmabuf_tranche { uint32_t flags; uint16_t *tranche; size_t tranche_size; }; struct obj_zwp_linux_dmabuf_feedback { struct wp_object base; struct format_table_entry *table; size_t table_len; dev_t main_device; /* the tranche being edited until tranche_done is called */ dev_t current_device; /* the tranche being edited until tranche_done is called */ struct dmabuf_tranche current; /* list of all tranches */ struct dmabuf_tranche *tranches; size_t tranche_count; }; struct obj_wlr_export_dmabuf_frame { struct wp_object base; uint32_t width; uint32_t height; uint32_t format; uint64_t modifier; // At the moment, no message reordering support, for lack of a client // to test it with struct { struct shadow_fd *buffer; uint32_t offset; uint32_t stride; uint64_t modifier; } objects[MAX_DMABUF_PLANES]; uint32_t nobjects; }; /* List of interfaces which may be advertised as globals */ static const struct wp_interface *const global_interfaces[] = { &intf_gtk_primary_selection_device_manager, &intf_wl_compositor, &intf_wl_data_device_manager, &intf_wl_drm, &intf_wl_output, &intf_wl_seat, &intf_wl_shm, &intf_wl_subcompositor, &intf_wp_presentation, &intf_xdg_wm_base, &intf_zwlr_data_control_manager_v1, &intf_zwlr_export_dmabuf_manager_v1, 
		&intf_zwlr_gamma_control_manager_v1,
		&intf_zwlr_screencopy_manager_v1,
		&intf_zwp_input_method_manager_v2,
		&intf_zwp_linux_dmabuf_v1,
		&intf_zwp_primary_selection_device_manager_v1,
		&intf_zwp_virtual_keyboard_manager_v1,
};

/* List of interfaces which are never advertised as globals */
static const struct wp_interface *const non_global_interfaces[] = {
		&intf_gtk_primary_selection_offer,
		&intf_gtk_primary_selection_source,
		&intf_wl_buffer,
		&intf_wl_data_offer,
		&intf_wl_data_source,
		&intf_wl_display,
		&intf_wl_keyboard,
		&intf_wl_registry,
		&intf_wl_shm_pool,
		&intf_wl_surface,
		&intf_wp_presentation_feedback,
		&intf_zwlr_data_control_offer_v1,
		&intf_zwlr_data_control_source_v1,
		&intf_zwlr_export_dmabuf_frame_v1,
		&intf_zwlr_gamma_control_v1,
		&intf_zwlr_screencopy_frame_v1,
		&intf_zwp_linux_buffer_params_v1,
		&intf_zwp_primary_selection_offer_v1,
		&intf_zwp_primary_selection_source_v1,
};

static void cleanup_dmabuf_params_fds(struct obj_zwp_linux_dmabuf_params *r)
{
	// Sometimes multiple entries point to the same buffer
	for (int i = 0; i < MAX_DMABUF_PLANES; i++) {
		int fd = r->add[i].fd;
		if (fd != -1) {
			checked_close(fd);
			for (int k = 0; k < MAX_DMABUF_PLANES; k++) {
				if (fd == r->add[k].fd) {
					r->add[k].fd = -1;
				}
			}
		}
	}
}

void destroy_wp_object(struct wp_object *object)
{
	if (object->type == &intf_wl_shm_pool) {
		struct obj_wl_shm_pool *r = (struct obj_wl_shm_pool *)object;
		if (r->owned_buffer) {
			shadow_decref_protocol(r->owned_buffer);
		}
	} else if (object->type == &intf_wl_buffer) {
		struct obj_wl_buffer *r = (struct obj_wl_buffer *)object;
		for (int i = 0; i < MAX_DMABUF_PLANES; i++) {
			if (r->dmabuf_buffers[i]) {
				shadow_decref_protocol(r->dmabuf_buffers[i]);
			}
		}
		if (r->shm_buffer) {
			shadow_decref_protocol(r->shm_buffer);
		}
	} else if (object->type == &intf_wl_surface) {
		struct obj_wl_surface *r = (struct obj_wl_surface *)object;
		for (int i = 0; i < SURFACE_DAMAGE_BACKLOG; i++) {
			free(r->damage_lists[i].list);
		}
	} else if (object->type == &intf_zwlr_screencopy_frame_v1) {
		struct
		obj_wlr_screencopy_frame *r =
				(struct obj_wlr_screencopy_frame *)object;
		(void)r;
	} else if (object->type == &intf_wp_presentation) {
	} else if (object->type == &intf_wp_presentation_feedback) {
	} else if (object->type == &intf_zwp_linux_buffer_params_v1) {
		struct obj_zwp_linux_dmabuf_params *r =
				(struct obj_zwp_linux_dmabuf_params *)object;
		for (int i = 0; i < MAX_DMABUF_PLANES; i++) {
			if (r->add[i].buffer) {
				shadow_decref_protocol(r->add[i].buffer);
			}
		}
		cleanup_dmabuf_params_fds(r);
	} else if (object->type == &intf_zwlr_export_dmabuf_frame_v1) {
		struct obj_wlr_export_dmabuf_frame *r =
				(struct obj_wlr_export_dmabuf_frame *)object;
		for (int i = 0; i < MAX_DMABUF_PLANES; i++) {
			if (r->objects[i].buffer) {
				shadow_decref_protocol(r->objects[i].buffer);
			}
		}
	} else if (object->type == &intf_zwp_linux_dmabuf_feedback_v1) {
		struct obj_zwp_linux_dmabuf_feedback *r =
				(struct obj_zwp_linux_dmabuf_feedback *)object;
		free(r->table);
		if (r->tranche_count > 0) {
			for (size_t i = 0; i < r->tranche_count; i++) {
				free(r->tranches[i].tranche);
			}
			free(r->tranches);
		}
	}
	free(object);
}
struct wp_object *create_wp_object(uint32_t id, const struct wp_interface *type)
{
	/* Note: if custom types are ever implemented for globals, they would
	 * need special replacement logic when the type is set */
	size_t sz;
	if (type == &intf_wl_shm_pool) {
		sz = sizeof(struct obj_wl_shm_pool);
	} else if (type == &intf_wl_buffer) {
		sz = sizeof(struct obj_wl_buffer);
	} else if (type == &intf_wl_surface) {
		sz = sizeof(struct obj_wl_surface);
	} else if (type == &intf_zwlr_screencopy_frame_v1) {
		sz = sizeof(struct obj_wlr_screencopy_frame);
	} else if (type == &intf_wp_presentation) {
		sz = sizeof(struct obj_wp_presentation);
	} else if (type == &intf_wp_presentation_feedback) {
		sz = sizeof(struct obj_wp_presentation_feedback);
	} else if (type == &intf_zwp_linux_buffer_params_v1) {
		sz = sizeof(struct obj_zwp_linux_dmabuf_params);
	} else if (type == &intf_zwlr_export_dmabuf_frame_v1) {
		sz = sizeof(struct
				obj_wlr_export_dmabuf_frame);
	} else if (type == &intf_zwp_linux_dmabuf_feedback_v1) {
		sz = sizeof(struct obj_zwp_linux_dmabuf_feedback);
	} else {
		sz = sizeof(struct wp_object);
	}
	struct wp_object *new_obj = calloc(1, sz);
	if (!new_obj) {
		wp_error("Failed to allocate new wp_object id=%d type=%s", id,
				type->name);
		return NULL;
	}
	new_obj->obj_id = id;
	new_obj->type = type;
	new_obj->is_zombie = false;

	if (type == &intf_zwp_linux_buffer_params_v1) {
		struct obj_zwp_linux_dmabuf_params *params =
				(struct obj_zwp_linux_dmabuf_params *)new_obj;
		for (int i = 0; i < MAX_DMABUF_PLANES; i++) {
			params->add[i].fd = -1;
		}
	} else if (type == &intf_wl_surface) {
		((struct obj_wl_surface *)new_obj)->scale = 1;
	}
	return new_obj;
}

void do_wl_display_evt_error(struct context *ctx, struct wp_object *object_id,
		uint32_t code, const char *message)
{
	const char *type_name =
			object_id ? (object_id->type ? object_id->type->name
						     : "")
				  : "";
	wp_error("Display sent fatal error message %s, code %u: %s", type_name,
			code,
			message ?
					message : "");
	(void)ctx;
}
void do_wl_display_evt_delete_id(struct context *ctx, uint32_t id)
{
	struct wp_object *obj = tracker_get(ctx->tracker, id);
	/* ensure this isn't miscalled to have wl_display delete itself */
	if (obj && obj != ctx->obj) {
		tracker_remove(ctx->tracker, obj);
		destroy_wp_object(obj);
	}
}
void do_wl_display_req_get_registry(
		struct context *ctx, struct wp_object *registry)
{
	(void)ctx;
	(void)registry;
}
void do_wl_display_req_sync(struct context *ctx, struct wp_object *callback)
{
	(void)ctx;
	(void)callback;
}

void do_wl_registry_evt_global(struct context *ctx, uint32_t name,
		const char *interface, uint32_t version)
{
	if (!interface) {
		wp_debug("Interface name provided via wl_registry::global was NULL");
		return;
	}
	bool requires_rnode = false;
	requires_rnode |= !strcmp(interface, "wl_drm");
	requires_rnode |= !strcmp(interface, "zwp_linux_dmabuf_v1");
	requires_rnode |= !strcmp(interface, "zwlr_export_dmabuf_manager_v1");
	if (requires_rnode) {
		if (init_render_data(&ctx->g->render) == -1) {
			/* A gpu connection supported by waypipe is required on
			 * both sides, since data transfers may occur in both
			 * directions, and modifying textures may require
			 * driver support */
			wp_debug("Discarding protocol advertisement for %s, render node support disabled",
					interface);
			ctx->drop_this_msg = true;
			return;
		}
	}

	if (!strcmp(interface, "zwp_linux_dmabuf_v1")) {
		/* Higher versions will very likely require new Waypipe code to
		 * support, so limit this to what Waypipe supports */
		if (ctx->message[2 + 1 + 1 + 5] >
				ZWP_LINUX_DMABUF_V1_INTERFACE_VERSION) {
			ctx->message[2 + 1 + 1 + 5] =
					ZWP_LINUX_DMABUF_V1_INTERFACE_VERSION;
		}
	}
	if (!strcmp(interface, "wl_shm")) {
		/* Higher versions will very likely require new Waypipe code to
		 * support, so limit this to what Waypipe supports */
		if (ctx->message[2 + 1 + 1 + 2] > WL_SHM_INTERFACE_VERSION) {
			ctx->message[2 + 1 + 1 + 2] = WL_SHM_INTERFACE_VERSION;
		}
	}

	bool unsupported = false;
	// requires novel fd translation, not yet
	// supported
	unsupported |= !strcmp(
			interface, "zwp_linux_explicit_synchronization_v1");
	unsupported |= !strcmp(interface, "wp_linux_drm_syncobj_manager_v1");
	if (unsupported) {
		wp_debug("Hiding %s advertisement, unsupported", interface);
		ctx->drop_this_msg = true;
	}

	(void)name;
	(void)version;
}
void do_wl_registry_evt_global_remove(struct context *ctx, uint32_t name)
{
	(void)ctx;
	(void)name;
}

void do_wl_registry_req_bind(struct context *ctx, uint32_t name,
		const char *interface, uint32_t version, struct wp_object *id)
{
	if (!interface) {
		wp_debug("Interface name provided to wl_registry::bind was NULL");
		return;
	}
	/* The object has already been created, but its type is NULL */
	struct wp_object *the_object = id;
	uint32_t obj_id = the_object->obj_id;
	for (size_t i = 0; i < sizeof(non_global_interfaces) /
					   sizeof(non_global_interfaces[0]);
			i++) {
		if (!strcmp(interface, non_global_interfaces[i]->name)) {
			wp_error("Interface %s does not support binding globals",
					non_global_interfaces[i]->name);
			/* exit search, discard unbound object */
			goto fail;
		}
	}

	for (size_t i = 0; i < sizeof(global_interfaces) /
					   sizeof(global_interfaces[0]);
			i++) {
		if (!strcmp(interface, global_interfaces[i]->name)) {
			// Set the object type
			the_object->type = global_interfaces[i];
			if (global_interfaces[i] == &intf_wp_presentation) {
				struct wp_object *new_object = create_wp_object(
						obj_id, &intf_wp_presentation);
				if (!new_object) {
					return;
				}
				tracker_replace_existing(
						ctx->tracker, new_object);
				free(the_object);
			}
			return;
		}
	}
fail:
	wp_debug("Unhandled protocol %s name=%d id=%d (v%d)", interface, name,
			the_object->obj_id, version);
	tracker_remove(ctx->tracker, the_object);
	free(the_object);
	(void)name;
	(void)version;
}

void do_wl_buffer_evt_release(struct context *ctx) { (void)ctx; }

int get_shm_bytes_per_pixel(uint32_t format)
{
	switch (format) {
	case 0x34325241: /* DRM_FORMAT_ARGB8888 */
	case 0x34325258: /* DRM_FORMAT_XRGB8888 */
	case WL_SHM_FORMAT_ARGB8888:
	case WL_SHM_FORMAT_XRGB8888:
		return 4;
	case
			WL_SHM_FORMAT_C8:
	case WL_SHM_FORMAT_RGB332:
	case WL_SHM_FORMAT_BGR233:
		return 1;
	case WL_SHM_FORMAT_XRGB4444:
	case WL_SHM_FORMAT_XBGR4444:
	case WL_SHM_FORMAT_RGBX4444:
	case WL_SHM_FORMAT_BGRX4444:
	case WL_SHM_FORMAT_ARGB4444:
	case WL_SHM_FORMAT_ABGR4444:
	case WL_SHM_FORMAT_RGBA4444:
	case WL_SHM_FORMAT_BGRA4444:
	case WL_SHM_FORMAT_XRGB1555:
	case WL_SHM_FORMAT_XBGR1555:
	case WL_SHM_FORMAT_RGBX5551:
	case WL_SHM_FORMAT_BGRX5551:
	case WL_SHM_FORMAT_ARGB1555:
	case WL_SHM_FORMAT_ABGR1555:
	case WL_SHM_FORMAT_RGBA5551:
	case WL_SHM_FORMAT_BGRA5551:
	case WL_SHM_FORMAT_RGB565:
	case WL_SHM_FORMAT_BGR565:
		return 2;
	case WL_SHM_FORMAT_RGB888:
	case WL_SHM_FORMAT_BGR888:
		return 3;
	case WL_SHM_FORMAT_XBGR8888:
	case WL_SHM_FORMAT_RGBX8888:
	case WL_SHM_FORMAT_BGRX8888:
	case WL_SHM_FORMAT_ABGR8888:
	case WL_SHM_FORMAT_RGBA8888:
	case WL_SHM_FORMAT_BGRA8888:
	case WL_SHM_FORMAT_XRGB2101010:
	case WL_SHM_FORMAT_XBGR2101010:
	case WL_SHM_FORMAT_RGBX1010102:
	case WL_SHM_FORMAT_BGRX1010102:
	case WL_SHM_FORMAT_ARGB2101010:
	case WL_SHM_FORMAT_ABGR2101010:
	case WL_SHM_FORMAT_RGBA1010102:
	case WL_SHM_FORMAT_BGRA1010102:
		return 4;
	case WL_SHM_FORMAT_YUYV:
	case WL_SHM_FORMAT_YVYU:
	case WL_SHM_FORMAT_UYVY:
	case WL_SHM_FORMAT_VYUY:
	case WL_SHM_FORMAT_AYUV:
	case WL_SHM_FORMAT_NV12:
	case WL_SHM_FORMAT_NV21:
	case WL_SHM_FORMAT_NV16:
	case WL_SHM_FORMAT_NV61:
	case WL_SHM_FORMAT_YUV410:
	case WL_SHM_FORMAT_YVU410:
	case WL_SHM_FORMAT_YUV411:
	case WL_SHM_FORMAT_YVU411:
	case WL_SHM_FORMAT_YUV420:
	case WL_SHM_FORMAT_YVU420:
	case WL_SHM_FORMAT_YUV422:
	case WL_SHM_FORMAT_YVU422:
	case WL_SHM_FORMAT_YUV444:
	case WL_SHM_FORMAT_YVU444:
		goto planar;
	case WL_SHM_FORMAT_R8:
		return 1;
	case WL_SHM_FORMAT_R16:
	case WL_SHM_FORMAT_RG88:
	case WL_SHM_FORMAT_GR88:
		return 2;
	case WL_SHM_FORMAT_RG1616:
	case WL_SHM_FORMAT_GR1616:
		return 4;
	case WL_SHM_FORMAT_XRGB16161616F:
	case WL_SHM_FORMAT_XBGR16161616F:
	case WL_SHM_FORMAT_ARGB16161616F:
	case WL_SHM_FORMAT_ABGR16161616F:
	case WL_SHM_FORMAT_AXBXGXRX106106106106:
		return 8;
	case
			WL_SHM_FORMAT_XYUV8888:
	case WL_SHM_FORMAT_VUY888:
	case WL_SHM_FORMAT_VUY101010:
	case WL_SHM_FORMAT_Y210:
	case WL_SHM_FORMAT_Y212:
	case WL_SHM_FORMAT_Y216:
	case WL_SHM_FORMAT_Y410:
	case WL_SHM_FORMAT_Y412:
	case WL_SHM_FORMAT_Y416:
	case WL_SHM_FORMAT_XVYU2101010:
	case WL_SHM_FORMAT_XVYU12_16161616:
	case WL_SHM_FORMAT_XVYU16161616:
	case WL_SHM_FORMAT_Y0L0:
	case WL_SHM_FORMAT_X0L0:
	case WL_SHM_FORMAT_Y0L2:
	case WL_SHM_FORMAT_X0L2:
	case WL_SHM_FORMAT_YUV420_8BIT:
	case WL_SHM_FORMAT_YUV420_10BIT:
	case WL_SHM_FORMAT_XRGB8888_A8:
	case WL_SHM_FORMAT_XBGR8888_A8:
	case WL_SHM_FORMAT_RGBX8888_A8:
	case WL_SHM_FORMAT_BGRX8888_A8:
	case WL_SHM_FORMAT_RGB888_A8:
	case WL_SHM_FORMAT_BGR888_A8:
	case WL_SHM_FORMAT_RGB565_A8:
	case WL_SHM_FORMAT_BGR565_A8:
	case WL_SHM_FORMAT_NV24:
	case WL_SHM_FORMAT_NV42:
	case WL_SHM_FORMAT_P210:
	case WL_SHM_FORMAT_P010:
	case WL_SHM_FORMAT_P012:
	case WL_SHM_FORMAT_P016:
	case WL_SHM_FORMAT_NV15:
	case WL_SHM_FORMAT_Q410:
	case WL_SHM_FORMAT_Q401:
		goto planar;
	case WL_SHM_FORMAT_XRGB16161616:
	case WL_SHM_FORMAT_XBGR16161616:
	case WL_SHM_FORMAT_ARGB16161616:
	case WL_SHM_FORMAT_ABGR16161616:
		return 8;
	// todo: adjust API to handle bit packed formats
	case WL_SHM_FORMAT_C1:
	case WL_SHM_FORMAT_C2:
	case WL_SHM_FORMAT_C4:
	case WL_SHM_FORMAT_D1:
	case WL_SHM_FORMAT_D2:
	case WL_SHM_FORMAT_D4:
		goto planar;
	case WL_SHM_FORMAT_D8:
		return 1;
	case WL_SHM_FORMAT_R1:
	case WL_SHM_FORMAT_R2:
	case WL_SHM_FORMAT_R4:
		goto planar;
	case WL_SHM_FORMAT_R10:
	case WL_SHM_FORMAT_R12:
		return 2;
	case WL_SHM_FORMAT_AVUY8888:
	case WL_SHM_FORMAT_XVUY8888:
		return 4;
	case WL_SHM_FORMAT_P030:
		goto planar;
	default:
		wp_error("Unidentified WL_SHM format %x", format);
		return -1;
	}
planar:
	return -1;
}

static void compute_damage_coordinates(int *xlow, int *xhigh, int *ylow,
		int *yhigh, const struct damage_record *rec, int buf_width,
		int buf_height, int transform, int scale)
{
	if (rec->buffer_coordinates) {
		*xlow = rec->x;
		*xhigh = rec->x + rec->width;
		*ylow = rec->y;
		*yhigh = rec->y + rec->height;
	} else {
		int xl = rec->x * scale;
		int yl = rec->y * scale;
		int xh = (rec->width + rec->x) * scale;
		int yh = (rec->y + rec->height) * scale;

		/* Each of the eight transformations corresponds to a
		 * unique set of reflections: X<->Y | Xflip | Yflip */
		uint32_t magic = 0x74125630;
		/* idx     76543210
		 * xyexch  10101010
		 * xflip   11000110
		 * yflip   10011100
		 */
		bool xyexch = magic & (1u << (4 * transform));
		bool xflip = magic & (1u << (4 * transform + 1));
		bool yflip = magic & (1u << (4 * transform + 2));
		int ew = xyexch ? buf_height : buf_width;
		int eh = xyexch ? buf_width : buf_height;
		if (xflip) {
			int tmp = ew - xh;
			xh = ew - xl;
			xl = tmp;
		}
		if (yflip) {
			int tmp = eh - yh;
			yh = eh - yl;
			yl = tmp;
		}
		if (xyexch) {
			*xlow = yl;
			*xhigh = yh;
			*ylow = xl;
			*yhigh = xh;
		} else {
			*xlow = xl;
			*xhigh = xh;
			*ylow = yl;
			*yhigh = yh;
		}
	}
}
void do_wl_surface_req_attach(struct context *ctx, struct wp_object *buffer,
		int32_t x, int32_t y)
{
	(void)x;
	(void)y;
	struct wp_object *bufobj = (struct wp_object *)buffer;
	if (!bufobj) {
		/* A null buffer can legitimately be sent to remove
		 * surface contents, presumably with shell-defined
		 * semantics */
		wp_debug("Buffer to be attached is null");
		return;
	}
	if (bufobj->type != &intf_wl_buffer) {
		wp_error("Buffer to be attached has the wrong type");
		return;
	}
	struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj;
	surface->attached_buffer_id = bufobj->obj_id;
}
static void rotate_damage_lists(struct obj_wl_surface *surface)
{
	free(surface->damage_lists[SURFACE_DAMAGE_BACKLOG - 1].list);
	memmove(surface->damage_lists + 1, surface->damage_lists,
			(SURFACE_DAMAGE_BACKLOG - 1) *
					sizeof(struct damage_list));
	memset(surface->damage_lists, 0, sizeof(struct damage_list));
	memmove(surface->attached_buffer_uids + 1,
			surface->attached_buffer_uids,
			(SURFACE_DAMAGE_BACKLOG - 1) * sizeof(uint64_t));
	surface->attached_buffer_uids[0] = 0;
}
void do_wl_surface_req_commit(struct context *ctx)
{
	struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj;
	if (!surface->attached_buffer_id) {
		/* The wl_surface.commit operation applies all "pending
		 * state", much of which we don't care about. Typically,
		 * when a wl_surface is first created, it is soon
		 * committed to atomically update state variables. An
		 * attached wl_buffer is not required. */
		return;
	}
	if (ctx->on_display_side) {
		/* commit signifies a client-side update only */
		return;
	}
	struct wp_object *obj =
			tracker_get(ctx->tracker, surface->attached_buffer_id);
	if (!obj) {
		wp_error("Attached buffer no longer exists");
		return;
	}
	if (obj->type != &intf_wl_buffer) {
		wp_error("Buffer to commit has the wrong type, and may have been recycled");
		return;
	}
	struct obj_wl_buffer *buf = (struct obj_wl_buffer *)obj;
	surface->attached_buffer_uids[0] = buf->unique_id;
	if (buf->type == BUF_DMA) {
		rotate_damage_lists(surface);

		for (int i = 0; i < buf->dmabuf_nplanes; i++) {
			struct shadow_fd *sfd = buf->dmabuf_buffers[i];
			if (!sfd) {
				wp_error("dmabuf surface buffer is missing plane %d",
						i);
				continue;
			}
			if (!(sfd->type == FDC_DMABUF ||
					    sfd->type == FDC_DMAVID_IR)) {
				wp_error("fd associated with dmabuf surface is not a dmabuf");
				continue;
			}

			// detailed damage tracking is not yet supported
			sfd->is_dirty = true;
			damage_everything(&sfd->damage);
		}
		return;
	} else if (buf->type != BUF_SHM) {
		wp_error("wp_buffer is backed neither by DMA nor SHM, not yet supported");
		return;
	}
	struct shadow_fd *sfd = buf->shm_buffer;
	if (!sfd) {
		wp_error("wp_buffer to be committed has no fd");
		return;
	}
	if (sfd->type != FDC_FILE) {
		wp_error("fd associated with surface is not file-like");
		return;
	}
	sfd->is_dirty = true;
	int bpp = get_shm_bytes_per_pixel(buf->shm_format);
	if (bpp == -1) {
		wp_error("Encountered unknown/planar/subsampled wl_shm format %x; marking entire buffer",
				buf->shm_format);
		goto backup;
	}
	if (surface->scale <= 0) {
		wp_error("Invalid buffer scale during commit (%d), assuming everything damaged",
				surface->scale);
		goto backup;
	}
	if (surface->transform < 0 || surface->transform >= 8) {
		wp_error("Invalid buffer transform during commit (%d), assuming everything damaged",
				surface->transform);
		goto backup;
	}

	/* The damage specified as of wl_surface commit indicates which region
	 * of the surface has changed between the last commit and the current
	 * one. However, the last time the attached buffer was used may have
	 * been several commits ago, so we need to replay all the damage up
	 * to the current point. */
	int age = -1;
	int n_damaged_rects = surface->damage_lists[0].len;
	for (int j = 1; j < SURFACE_DAMAGE_BACKLOG; j++) {
		if (surface->attached_buffer_uids[0] ==
				surface->attached_buffer_uids[j]) {
			age = j;
			break;
		}
		n_damaged_rects += surface->damage_lists[j].len;
	}
	if (age == -1) {
		/* cannot find last time buffer+surface combo was used */
		goto backup;
	}

	struct ext_interval *damage_array = malloc(
			sizeof(struct ext_interval) * (size_t)n_damaged_rects);
	if (!damage_array) {
		wp_error("Failed to allocate damage array");
		goto backup;
	}
	int i = 0;

	// Translate damage stack into damage records for the fd buffer
	for (int k = 0; k < age; k++) {
		const struct damage_list *frame_damage =
				&surface->damage_lists[k];
		for (int j = 0; j < frame_damage->len; j++) {
			int xlow, xhigh, ylow, yhigh;
			compute_damage_coordinates(&xlow, &xhigh, &ylow, &yhigh,
					&frame_damage->list[j], buf->shm_width,
					buf->shm_height, surface->transform,
					surface->scale);

			/* Clip the damage rectangle to the containing
			 * buffer.
			 */
			xlow = clamp(xlow, 0, buf->shm_width);
			xhigh = clamp(xhigh, 0, buf->shm_width);
			ylow = clamp(ylow, 0, buf->shm_height);
			yhigh = clamp(yhigh, 0, buf->shm_height);

			damage_array[i].start = buf->shm_offset +
						buf->shm_stride * ylow +
						bpp * xlow;
			damage_array[i].rep = yhigh - ylow;
			damage_array[i].stride = buf->shm_stride;
			damage_array[i].width = bpp * (xhigh - xlow);
			i++;
		}
	}

	merge_damage_records(&sfd->damage, i, damage_array,
			ctx->g->threads.diff_alignment_bits);
	free(damage_array);
	rotate_damage_lists(surface);
	return;

backup:
	if (1) {
		/* damage the entire buffer (but no other part of the
		 * shm_pool) */
		struct ext_interval full_surface_damage;
		full_surface_damage.start = buf->shm_offset;
		full_surface_damage.rep = 1;
		full_surface_damage.stride = 0;
		full_surface_damage.width = buf->shm_stride * buf->shm_height;
		merge_damage_records(&sfd->damage, 1, &full_surface_damage,
				ctx->g->threads.diff_alignment_bits);
	}
	rotate_damage_lists(surface);
	return;
}

static void append_damage_record(struct obj_wl_surface *surface, int32_t x,
		int32_t y, int32_t width, int32_t height,
		bool in_buffer_coordinates)
{
	struct damage_list *current = &surface->damage_lists[0];
	if (buf_ensure_size(current->len + 1, sizeof(struct damage_record),
			    &current->size, (void **)&current->list) == -1) {
		wp_error("Failed to allocate space for damage list, dropping damage record");
		return;
	}

	// A rectangle of the buffer was damaged, hence backing buffers
	// may be updated.
	struct damage_record *damage = &current->list[current->len++];
	damage->buffer_coordinates = in_buffer_coordinates;
	damage->x = x;
	damage->y = y;
	damage->width = width;
	damage->height = height;
}
void do_wl_surface_req_damage(struct context *ctx, int32_t x, int32_t y,
		int32_t width, int32_t height)
{
	if (ctx->on_display_side) {
		// The display side does not need to track the damage
		return;
	}
	append_damage_record((struct obj_wl_surface *)ctx->obj, x, y, width,
			height, false);
}
void do_wl_surface_req_damage_buffer(struct context *ctx, int32_t x, int32_t y,
		int32_t width, int32_t height)
{
	if (ctx->on_display_side) {
		// The display side does not need to track the damage
		return;
	}
	append_damage_record((struct obj_wl_surface *)ctx->obj, x, y, width,
			height, true);
}
void do_wl_surface_req_set_buffer_transform(
		struct context *ctx, int32_t transform)
{
	struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj;
	surface->transform = transform;
}
void do_wl_surface_req_set_buffer_scale(struct context *ctx, int32_t scale)
{
	struct obj_wl_surface *surface = (struct obj_wl_surface *)ctx->obj;
	surface->scale = scale;
}

void do_wl_keyboard_evt_keymap(
		struct context *ctx, uint32_t format, int fd, uint32_t size)
{
	size_t fdsz = 0;
	enum fdcat fdtype = get_fd_type(fd, &fdsz);
	if (fdtype == FDC_UNKNOWN) {
		fdtype = FDC_FILE;
		fdsz = (size_t)size;
	}
	if (fdtype != FDC_FILE || fdsz != size) {
		wp_error("keymap candidate fd %d was not file-like (type=%s), and with size=%zu did not match %u",
				fd, fdcat_to_str(fdtype), fdsz, size);
		return;
	}
	struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render,
			&ctx->g->threads, fd, FDC_FILE, fdsz, NULL, false);
	if (!sfd) {
		wp_error("Failed to create shadow for keymap fd=%d", fd);
		return;
	}
	/* The keyboard file descriptor is never changed after being sent.
	 * Mark the shadow structure as owned by the protocol, so it can be
	 * automatically deleted as soon as the fd has been transferred.
	 */
	sfd->has_owner = true;
	(void)format;
}

void do_wl_shm_req_create_pool(
		struct context *ctx, struct wp_object *id, int fd, int32_t size)
{
	struct obj_wl_shm_pool *the_shm_pool = (struct obj_wl_shm_pool *)id;

	if (size <= 0) {
		wp_error("Ignoring attempt to create a wl_shm_pool with size %d",
				size);
	}

	size_t fdsz = 0;
	enum fdcat fdtype = get_fd_type(fd, &fdsz);
	if (fdtype == FDC_UNKNOWN) {
		fdtype = FDC_FILE;
		fdsz = (size_t)size;
	}
	/* It may be valid for the file descriptor size to be larger
	 * than the immediately advertised size, since the call to
	 * wl_shm.create_pool may be followed by wl_shm_pool.resize,
	 * which then increases the size */
	if (fdtype != FDC_FILE || (int32_t)fdsz < size) {
		wp_error("File type or size mismatch for fd %d with claimed: %s %s | %zu %u",
				fd, fdcat_to_str(fdtype),
				fdcat_to_str(FDC_FILE), fdsz, size);
		return;
	}

	struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render,
			&ctx->g->threads, fd, FDC_FILE, fdsz, NULL, false);
	if (!sfd) {
		return;
	}
	the_shm_pool->owned_buffer = shadow_incref_protocol(sfd);
	/* We only send shm_pool updates when the buffers created from the
	 * pool are used.
	 * Some applications make the pool >> actual buffers,
	 * so this can reduce communication by a lot */
	reset_damage(&sfd->damage);
}
void do_wl_shm_pool_req_resize(struct context *ctx, int32_t size)
{
	struct obj_wl_shm_pool *the_shm_pool =
			(struct obj_wl_shm_pool *)ctx->obj;

	if (!the_shm_pool->owned_buffer) {
		wp_error("Pool to be resized owns no buffer");
		return;
	}
	if ((int32_t)the_shm_pool->owned_buffer->buffer_size >= size) {
		// The underlying buffer was already resized by the time
		// this protocol message was received
		return;
	}
	/* The display side will be updated already via buffer update msg */
	if (!ctx->on_display_side) {
		extend_shm_shadow(&ctx->g->threads, the_shm_pool->owned_buffer,
				(size_t)size);
	}
}
void do_wl_shm_pool_req_create_buffer(struct context *ctx, struct wp_object *id,
		int32_t offset, int32_t width, int32_t height, int32_t stride,
		uint32_t format)
{
	struct obj_wl_shm_pool *the_shm_pool =
			(struct obj_wl_shm_pool *)ctx->obj;
	struct obj_wl_buffer *the_buffer = (struct obj_wl_buffer *)id;
	if (!the_buffer) {
		wp_error("No buffer available");
		return;
	}
	struct shadow_fd *sfd = the_shm_pool->owned_buffer;
	if (!sfd) {
		wp_error("Creating a wl_buffer from a pool that does not own an fd");
		return;
	}

	the_buffer->type = BUF_SHM;
	the_buffer->shm_buffer =
			shadow_incref_protocol(the_shm_pool->owned_buffer);
	the_buffer->shm_offset = offset;
	the_buffer->shm_width = width;
	the_buffer->shm_height = height;
	the_buffer->shm_stride = stride;
	the_buffer->shm_format = format;
	the_buffer->unique_id = ctx->g->tracker.buffer_seqno++;
}

void do_zwlr_screencopy_frame_v1_evt_ready(struct context *ctx,
		uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec)
{
	struct obj_wlr_screencopy_frame *frame =
			(struct obj_wlr_screencopy_frame *)ctx->obj;
	if (!frame->buffer_id) {
		wp_error("frame has no copy target");
		return;
	}
	struct wp_object *obj = (struct wp_object *)tracker_get(
			ctx->tracker, frame->buffer_id);
	if (!obj) {
		wp_error("frame copy target no longer exists");
		return;
	}
	if
			(obj->type != &intf_wl_buffer) {
		wp_error("frame copy target is not a wl_buffer");
		return;
	}
	struct obj_wl_buffer *buffer = (struct obj_wl_buffer *)obj;
	struct shadow_fd *sfd = buffer->shm_buffer;
	if (!sfd) {
		wp_error("frame copy target does not own any buffers");
		return;
	}
	if (sfd->type != FDC_FILE) {
		wp_error("frame copy target buffer file descriptor (RID=%d) was not file-like (type=%d)",
				sfd->remote_id, sfd->type);
		return;
	}
	if (buffer->type != BUF_SHM) {
		wp_error("screencopy not yet supported for non-shm-backed buffers");
		return;
	}

	if (!ctx->on_display_side) {
		// The display side performs the update
		return;
	}
	sfd->is_dirty = true;
	/* The protocol guarantees that the buffer attributes match
	 * those of the written frame */
	const struct ext_interval interval = {.start = buffer->shm_offset,
			.width = buffer->shm_height * buffer->shm_stride,
			.stride = 0,
			.rep = 1};
	merge_damage_records(&sfd->damage, 1, &interval,
			ctx->g->threads.diff_alignment_bits);

	(void)tv_sec_lo;
	(void)tv_sec_hi;
	(void)tv_nsec;
}
void do_zwlr_screencopy_frame_v1_req_copy(
		struct context *ctx, struct wp_object *buffer)
{
	struct obj_wlr_screencopy_frame *frame =
			(struct obj_wlr_screencopy_frame *)ctx->obj;
	struct wp_object *buf = (struct wp_object *)buffer;
	if (buf->type != &intf_wl_buffer) {
		wp_error("frame copy destination is not a wl_buffer");
		return;
	}
	frame->buffer_id = buf->obj_id;
}

static int64_t timespec_diff(struct timespec val, struct timespec sub)
{
	// Overflows only with 68 year error, insignificant
	return (val.tv_sec - sub.tv_sec) * 1000000000LL +
	       (val.tv_nsec - sub.tv_nsec);
}
void do_wp_presentation_evt_clock_id(struct context *ctx, uint32_t clk_id)
{
	struct obj_wp_presentation *pres =
			(struct obj_wp_presentation *)ctx->obj;
	pres->clock_id = (int)clk_id;
	int reference_clock = CLOCK_REALTIME;

	if (pres->clock_id == reference_clock) {
		pres->clock_delta_nsec = 0;
	} else {
		/* Estimate the difference in baseline between clocks.
		 * (TODO: Is there a syscall for this?) do median of 3?
		 */
		struct timespec t0, t1, t2;
		clock_gettime(pres->clock_id, &t0);
		clock_gettime(reference_clock, &t1);
		clock_gettime(pres->clock_id, &t2);
		int64_t diff1m0 = timespec_diff(t1, t0);
		int64_t diff2m1 = timespec_diff(t2, t1);
		pres->clock_delta_nsec = (diff1m0 - diff2m1) / 2;
	}
}
void do_wp_presentation_req_feedback(struct context *ctx,
		struct wp_object *surface, struct wp_object *callback)
{
	struct obj_wp_presentation *pres =
			(struct obj_wp_presentation *)ctx->obj;
	struct obj_wp_presentation_feedback *feedback =
			(struct obj_wp_presentation_feedback *)callback;
	(void)surface;

	feedback->clock_delta_nsec = pres->clock_delta_nsec;
}
void do_wp_presentation_feedback_evt_presented(struct context *ctx,
		uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec,
		uint32_t refresh, uint32_t seq_hi, uint32_t seq_lo,
		uint32_t flags)
{
	struct obj_wp_presentation_feedback *feedback =
			(struct obj_wp_presentation_feedback *)ctx->obj;
	(void)refresh;
	(void)seq_hi;
	(void)seq_lo;
	(void)flags;

	/* convert local to reference, on display side */
	int dir = ctx->on_display_side ?
			1 : -1;

	uint64_t sec = tv_sec_lo + tv_sec_hi * 0x100000000uLL;
	int64_t nsec = tv_nsec;
	nsec += dir * feedback->clock_delta_nsec;
	sec = (uint64_t)((int64_t)sec + nsec / 1000000000LL);
	nsec = nsec % 1000000000L;
	if (nsec < 0) {
		nsec += 1000000000L;
		sec--;
	}
	// Size not changed, no other edits required
	ctx->message[2] = (uint32_t)(sec / 0x100000000uLL);
	ctx->message[3] = (uint32_t)(sec % 0x100000000uLL);
	ctx->message[4] = (uint32_t)nsec;
}

void do_wl_drm_evt_device(struct context *ctx, const char *name)
{
	if (ctx->on_display_side) {
		/* Replacing the (remote) DRM device path with a local
		 * render node path only is useful on the application
		 * side */
		return;
	}
	if (!name) {
		wp_debug("Device name provided via wl_drm::device was NULL");
		return;
	}
	if (!ctx->g->render.drm_node_path) {
		/* While the render node should have been initialized in
		 * wl_registry.global, setting this path, we still don't want
		 * to crash even if this gets called by accident */
		wp_debug("wl_drm::device, local render node not set up");
		return;
	}
	int path_len = (int)strlen(ctx->g->render.drm_node_path);
	int message_bytes = 8 + 4 + 4 * ((path_len + 1 + 3) / 4);
	if (message_bytes > ctx->message_available_space) {
		wp_error("Not enough space to modify DRM device advertisement from '%s' to '%s'",
				name, ctx->g->render.drm_node_path);
		return;
	}
	ctx->message_length = message_bytes;
	uint32_t *payload = ctx->message + 2;
	memset(payload, 0, (size_t)message_bytes - 8);
	payload[0] = (uint32_t)path_len + 1;
	memcpy(ctx->message + 3, ctx->g->render.drm_node_path,
			(size_t)path_len);
	uint32_t meth = (ctx->message[1] << 16) >> 16;
	ctx->message[1] = message_header_2((uint32_t)message_bytes, meth);
}

void do_wl_drm_req_create_prime_buffer(struct context *ctx,
		struct wp_object *id, int name, int32_t width, int32_t height,
		uint32_t format, int32_t offset0, int32_t stride0,
		int32_t offset1, int32_t stride1, int32_t offset2,
		int32_t stride2)
{
	struct obj_wl_buffer *buf = (struct obj_wl_buffer *)id;
	struct dmabuf_slice_data info
			= {
			.num_planes = 1,
			.width = (uint32_t)width,
			.height = (uint32_t)height,
			.modifier = DRM_FORMAT_MOD_INVALID,
			.format = format,
			.offsets = {(uint32_t)offset0, (uint32_t)offset1,
					(uint32_t)offset2, 0},
			.strides = {(uint32_t)stride0, (uint32_t)stride1,
					(uint32_t)stride2, 0},
			.using_planes = {true, false, false, false},
	};
	struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render,
			&ctx->g->threads, name, FDC_DMABUF, 0, &info, false);
	if (!sfd) {
		return;
	}
	buf->type = BUF_DMA;
	buf->dmabuf_nplanes = 1;
	buf->dmabuf_buffers[0] = shadow_incref_protocol(sfd);
	buf->dmabuf_width = width;
	buf->dmabuf_height = height;
	buf->dmabuf_format = format;
	// handling multiple offsets (?)
	buf->dmabuf_offsets[0] = (uint32_t)offset0;
	buf->dmabuf_strides[0] = (uint32_t)stride0;
	buf->unique_id = ctx->g->tracker.buffer_seqno++;

	if (ctx->on_display_side) {
		/* the new dmabuf being created is not guaranteed to
		 * have the original offset/stride parameters, so reset
		 * them */
		ctx->message[6] = 0;
		ctx->message[7] = dmabuf_get_stride(sfd->dmabuf_bo);
	}
}

static bool dmabuf_format_permitted(
		struct context *ctx, uint32_t format, uint64_t modifier)
{
	if (ctx->g->config->only_linear_dmabuf) {
		/* MOD_INVALID is allowed because some drivers don't support
		 * LINEAR. Every modern GPU+driver should be able to handle
		 * LINEAR. Conditionally blocking INVALID (i.e, if LINEAR is an
		 * option) can break things when the application-side Waypipe
		 * instance does not support LINEAR.
		 */
		if (modifier != 0 && modifier != DRM_FORMAT_MOD_INVALID) {
			return false;
		}
	}
	/* Filter out formats which are not recognized, or multiplane */
	if (get_shm_bytes_per_pixel(format) == -1) {
		return false;
	}
	/* Blacklist intel modifiers which introduce a second color control
	 * surface; todo: add support for these, eventually */
	if (modifier == (1uLL << 56 | 4) || modifier == (1uLL << 56 | 5) ||
			modifier == (1uLL << 56 | 6) ||
			modifier == (1uLL << 56 | 7) ||
			modifier == (1uLL << 56 | 8)) {
		return false;
	}
	return true;
}

void do_zwp_linux_dmabuf_v1_evt_modifier(struct context *ctx, uint32_t format,
		uint32_t modifier_hi, uint32_t modifier_lo)
{
	(void)format;
	uint64_t modifier = modifier_hi * 0x100000000uLL + modifier_lo;
	// Prevent all advertisements for dmabufs with modifiers
	if (!dmabuf_format_permitted(ctx, format, modifier)) {
		ctx->drop_this_msg = true;
	}
}
void do_zwp_linux_dmabuf_v1_req_get_default_feedback(
		struct context *ctx, struct wp_object *id)
{
	// todo: use this to find the correct main device
	(void)ctx;
	(void)id;
}
void do_zwp_linux_dmabuf_v1_req_get_surface_feedback(struct context *ctx,
		struct wp_object *id, struct wp_object *surface)
{
	(void)ctx;
	(void)id;
	(void)surface;
}

void do_zwp_linux_buffer_params_v1_evt_created(
		struct context *ctx, struct wp_object *buffer)
{
	struct obj_zwp_linux_dmabuf_params *params =
			(struct obj_zwp_linux_dmabuf_params *)ctx->obj;
	struct obj_wl_buffer *buf = (struct obj_wl_buffer *)buffer;
	buf->type = BUF_DMA;
	buf->dmabuf_nplanes = params->nplanes;
	for (int i = 0; i < params->nplanes; i++) {
		if (!params->add[i].buffer) {
			wp_error("dmabuf backed wl_buffer plane %d was missing",
					i);
			continue;
		}
		// Move protocol reference from `params` to `buf`
		// (The params object can only be used to create one buffer,
		// so this ensures that if the params object leaks, the
		// shadow_fd does not leak as well.)
buf->dmabuf_buffers[i] = params->add[i].buffer; buf->dmabuf_offsets[i] = params->add[i].offset; buf->dmabuf_strides[i] = params->add[i].stride; buf->dmabuf_modifiers[i] = params->add[i].modifier; params->add[i].buffer = NULL; } cleanup_dmabuf_params_fds(params); buf->dmabuf_flags = params->create_flags; buf->dmabuf_width = params->create_width; buf->dmabuf_height = params->create_height; buf->dmabuf_format = params->create_format; buf->unique_id = ctx->g->tracker.buffer_seqno++; } void do_zwp_linux_buffer_params_v1_req_add(struct context *ctx, int fd, uint32_t plane_idx, uint32_t offset, uint32_t stride, uint32_t modifier_hi, uint32_t modifier_lo) { struct obj_zwp_linux_dmabuf_params *params = (struct obj_zwp_linux_dmabuf_params *)ctx->obj; if (params->nplanes != (int)plane_idx) { wp_error("Expected sequentially assigned plane fds: got new_idx=%d != %d=nplanes", plane_idx, params->nplanes); return; } if (params->nplanes >= MAX_DMABUF_PLANES) { wp_error("Too many planes"); return; } params->nplanes++; params->add[plane_idx].fd = fd; params->add[plane_idx].offset = offset; params->add[plane_idx].stride = stride; params->add[plane_idx].modifier = modifier_lo + modifier_hi * 0x100000000uLL; // Only perform rearrangement on the client side, for now if (true) { ctx->drop_this_msg = true; } } static uint32_t append_zwp_linux_buffer_params_v1_req_add(uint32_t *msg, bool display_side, uint32_t obj_id, uint32_t plane_idx, uint32_t offset, uint32_t stride, uint32_t modifier_hi, uint32_t modifier_lo) { uint32_t msg_size = 2; if (msg) { msg[0] = obj_id; msg[msg_size++] = plane_idx; msg[msg_size++] = offset; msg[msg_size++] = stride; msg[msg_size++] = modifier_hi; msg[msg_size++] = modifier_lo; msg[1] = ((uint32_t)msg_size << 18) | 1; /* Tag the message as having one file descriptor */ if (!display_side) { msg[1] |= (uint32_t)(1 << 11); } } else { msg_size += 5; } return msg_size; } void do_zwp_linux_buffer_params_v1_req_create(struct context *ctx, int32_t width, int32_t height, 
uint32_t format, uint32_t flags) { struct obj_zwp_linux_dmabuf_params *params = (struct obj_zwp_linux_dmabuf_params *)ctx->obj; params->create_flags = flags; params->create_width = width; params->create_height = height; params->create_format = format; struct dmabuf_slice_data info = {.width = (uint32_t)width, .height = (uint32_t)height, .format = format, .num_planes = params->nplanes, .strides = {params->add[0].stride, params->add[1].stride, params->add[2].stride, params->add[3].stride}, .offsets = {params->add[0].offset, params->add[1].offset, params->add[2].offset, params->add[3].offset}}; bool all_same_fds = true; for (int i = 1; i < params->nplanes; i++) { if (params->add[i].fd != params->add[0].fd) { all_same_fds = false; } } for (int i = 0; i < params->nplanes; i++) { memset(info.using_planes, 0, sizeof(info.using_planes)); for (int k = 0; k < min(params->nplanes, 4); k++) { if (params->add[k].fd == params->add[i].fd) { info.using_planes[k] = 1; info.modifier = params->add[k].modifier; } } enum fdcat res_type = FDC_DMABUF; if (ctx->g->config->video_if_possible) { // TODO: multibuffer support if (all_same_fds && video_supports_dmabuf_format(format, info.modifier)) { res_type = ctx->on_display_side ? FDC_DMAVID_IW : FDC_DMAVID_IR; } } /* note: the `info` provided includes the incoming/as-if stride * data. */ struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, params->add[i].fd, res_type, 0, &info, false); if (!sfd) { continue; } if (ctx->on_display_side) { /* the new dmabuf being created is not guaranteed to * have the original offset/stride parameters, so reset * them */ params->add[i].offset = 0; params->add[i].stride = dmabuf_get_stride(sfd->dmabuf_bo); } /* increment for each extra time this fd will be sent */ if (sfd->has_owner) { shadow_incref_transfer(sfd); } // Convert the stored fds to buffer pointers now.
params->add[i].buffer = shadow_incref_protocol(sfd); } if (true) { // Update file descriptors int nfds = params->nplanes; if (nfds > ctx->fds->size - ctx->fds->zone_end) { wp_error("Not enough space to reintroduce zwp_linux_buffer_params_v1.add message fds"); return; } int nmoved = (ctx->fds->zone_end - ctx->fds->zone_start); memmove(ctx->fds->data + ctx->fds->zone_start + nfds, ctx->fds->data + ctx->fds->zone_start, (size_t)nmoved * sizeof(int)); for (int i = 0; i < params->nplanes; i++) { ctx->fds->data[ctx->fds->zone_start + i] = params->add[i].fd; } /* We inject `nfds` new file descriptors, and advance the zone * of queued file descriptors forward, since the injected file * descriptors will not be used by the parser, but will still * be transported out. */ ctx->fds->zone_start += nfds; ctx->fds->zone_end += nfds; ctx->fds_changed = true; // Update data int net_length = ctx->message_length; uint32_t extra = 0; for (int i = 0; i < params->nplanes; i++) { extra += append_zwp_linux_buffer_params_v1_req_add(NULL, ctx->on_display_side, params->base.obj_id, (uint32_t)i, params->add[i].offset, params->add[i].stride, (uint32_t)(params->add[i].modifier >> 32), (uint32_t)(params->add[i].modifier)); } net_length += (int)(sizeof(uint32_t) * extra); if (net_length > ctx->message_available_space) { wp_error("Not enough space to reintroduce zwp_linux_buffer_params_v1.add message data"); return; } char *cmsg = (char *)ctx->message; memmove(cmsg + net_length - ctx->message_length, cmsg, (size_t)ctx->message_length); size_t start = 0; for (int i = 0; i < params->nplanes; i++) { uint32_t step = append_zwp_linux_buffer_params_v1_req_add( (uint32_t *)(cmsg + start), ctx->on_display_side, params->base.obj_id, (uint32_t)i, params->add[i].offset, params->add[i].stride, (uint32_t)(params->add[i].modifier >> 32), (uint32_t)(params->add[i].modifier)); start += step * sizeof(uint32_t); } wp_debug("Reintroducing add requests for zwp_linux_buffer_params_v1, going from %d to %d bytes", 
ctx->message_length, net_length); ctx->message_length = net_length; } // Avoid closing in destroy_wp_object for (int i = 0; i < MAX_DMABUF_PLANES; i++) { params->add[i].fd = -1; } } void do_zwp_linux_buffer_params_v1_req_create_immed(struct context *ctx, struct wp_object *buffer_id, int32_t width, int32_t height, uint32_t format, uint32_t flags) { // There isn't really that much unnecessary copying. Note that // 'create' may modify messages do_zwp_linux_buffer_params_v1_req_create( ctx, width, height, format, flags); do_zwp_linux_buffer_params_v1_evt_created(ctx, buffer_id); } void do_zwp_linux_dmabuf_feedback_v1_evt_done(struct context *ctx) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; int worst_case_space = 2; for (size_t i = 0; i < obj->tranche_count; i++) { for (size_t j = 0; j < obj->tranches[i].tranche_size; j++) { uint16_t idx = obj->tranches[i].tranche[j]; if (idx > obj->table_len) { wp_error("Tranche format index %u out of bounds [0,%zu)", idx, obj->table_len); return; } } worst_case_space += 2 + 3 + 3 + 3 + ((int)sizeof(dev_t) + 3) / 4 + ((int)obj->tranches[i].tranche_size + 1) / 2; } if (ctx->message_available_space < worst_case_space * 4) { wp_error("Not enough space to introduce all tranche fields"); return; } /* Inject messages for filtered tranche parameters here */ size_t m = 0; for (size_t i = 0; i < obj->tranche_count; i++) { bool empty = true; for (size_t j = 0; j < obj->tranches[i].tranche_size; j++) { uint16_t idx = obj->tranches[i].tranche[j]; if (dmabuf_format_permitted(ctx, obj->table[idx].format, obj->table[idx].modifier)) { empty = false; break; } } if (empty) { /* discard tranche, has no entries */ continue; } size_t s; s = 3 + ((sizeof(dev_t) + 3) / 4); ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 4); // tranche_target_device ctx->message[m + 2] = sizeof(dev_t); memcpy(&ctx->message[m + 3], &obj->main_device, sizeof(dev_t)); m += s; s = 3; 
ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 6); // tranche_flags ctx->message[m + 2] = obj->tranches[i].flags; m += s; size_t w = 0; uint16_t *fmts = (uint16_t *)&ctx->message[m + 3]; for (size_t j = 0; j < obj->tranches[i].tranche_size; j++) { uint16_t idx = obj->tranches[i].tranche[j]; if (dmabuf_format_permitted(ctx, obj->table[idx].format, obj->table[idx].modifier)) { fmts[w++] = idx; } } s = 3 + ((w + 1) / 2); ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 5); // tranche_formats ctx->message[m + 2] = (uint32_t)(2 * w); m += s; s = 2; ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2( 4 * (uint32_t)s, 3); // tranche_done m += s; } ctx->message[m] = obj->base.obj_id; ctx->message[m + 1] = message_header_2(8, 0); // done m += 2; ctx->message_length = (int)(m * 4); for (size_t i = 0; i < obj->tranche_count; i++) { free(obj->tranches[i].tranche); } free(obj->tranches); obj->tranches = NULL; obj->tranche_count = 0; } void do_zwp_linux_dmabuf_feedback_v1_evt_format_table( struct context *ctx, int fd, uint32_t size) { size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd, &fdsz); if (fdtype == FDC_UNKNOWN) { fdtype = FDC_FILE; } if (fdtype != FDC_FILE || fdsz != size) { wp_error("format table fd %d was not file-like (type=%s), and size=%zu did not match %u", fd, fdcat_to_str(fdtype), fdsz, size); return; } struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_FILE, size, NULL, false); if (!sfd) { return; } /* Mark the shadow structure as owned by the protocol, but do not * increase the protocol refcount, so that as soon as it gets * transferred it is destroyed */ sfd->has_owner = true; struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; free(obj->table); obj->table_len = sfd->buffer_size / sizeof(struct format_table_entry); obj->table = calloc(obj->table_len, sizeof(struct
format_table_entry)); if (!obj->table) { wp_error("failed to allocate copy of dmabuf feedback format table"); return; } memcpy(obj->table, sfd->mem_local, obj->table_len * sizeof(struct format_table_entry)); } void do_zwp_linux_dmabuf_feedback_v1_evt_main_device(struct context *ctx, uint32_t device_count, const uint8_t *device_val) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; if ((size_t)device_count != sizeof(dev_t)) { wp_error("Invalid dev_t size %zu, should be %zu", (size_t)device_count, sizeof(dev_t)); return; } if (ctx->on_display_side) { memcpy(&obj->main_device, device_val, sizeof(dev_t)); } else { // adopt the main device from the render fd being used struct stat fsdata; memset(&fsdata, 0, sizeof(fsdata)); int ret = fstat(ctx->g->render.drm_fd, &fsdata); if (ret == -1) { wp_error("Failed to get render device info"); return; } obj->main_device = fsdata.st_rdev; } /* todo: add support for changing render devices in waypipe */ } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_done(struct context *ctx) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; if (obj->main_device != obj->current_device && ctx->on_display_side) { /* Filter out/ignore all tranches for anything but the main * device. */ return; } void *next = realloc(obj->tranches, (obj->tranche_count + 1) * sizeof(*obj->tranches)); if (!next) { wp_error("Failed to resize tranche list"); return; } obj->tranches = next; obj->tranches[obj->tranche_count] = obj->current; obj->tranche_count++; /* it is unclear whether flags/device get in a valid use of the * protocol, but assuming they do not costs nothing. */ // todo: what about the tranche? 
obj->current.tranche = NULL; obj->current.tranche_size = 0; /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_target_device( struct context *ctx, uint32_t device_count, const uint8_t *device_val) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; if ((size_t)device_count != sizeof(dev_t)) { wp_error("Invalid dev_t size %zu, should be %zu", (size_t)device_count, sizeof(dev_t)); } memcpy(&obj->current_device, device_val, sizeof(dev_t)); /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_formats(struct context *ctx, uint32_t indices_count, const uint8_t *indices_val) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; size_t num_indices = (size_t)indices_count / 2; free(obj->current.tranche); obj->current.tranche_size = num_indices; obj->current.tranche = calloc(num_indices, sizeof(uint16_t)); if (!obj->current.tranche) { wp_error("failed to allocate for tranche"); return; } // todo: translation to formats+modifiers should be performed // immediately, in case format table changes between tranches memcpy(obj->current.tranche, indices_val, num_indices * sizeof(uint16_t)); /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwp_linux_dmabuf_feedback_v1_evt_tranche_flags( struct context *ctx, uint32_t flags) { struct obj_zwp_linux_dmabuf_feedback *obj = (struct obj_zwp_linux_dmabuf_feedback *)ctx->obj; obj->current.flags = flags; /* discard message, will be resent later if needed */ ctx->drop_this_msg = true; } void do_zwlr_export_dmabuf_frame_v1_evt_frame(struct context *ctx, uint32_t width, uint32_t height, uint32_t offset_x, uint32_t offset_y, uint32_t buffer_flags, uint32_t flags, uint32_t format, uint32_t mod_high, uint32_t mod_low, uint32_t num_objects) { struct 
obj_wlr_export_dmabuf_frame *frame = (struct obj_wlr_export_dmabuf_frame *)ctx->obj; frame->width = width; frame->height = height; (void)offset_x; (void)offset_y; // the 'transient' flag could be cleared, technically (void)flags; (void)buffer_flags; frame->format = format; frame->modifier = mod_high * 0x100000000uLL + mod_low; frame->nobjects = num_objects; if (frame->nobjects > MAX_DMABUF_PLANES) { wp_error("Too many (%u) frame objects required", frame->nobjects); frame->nobjects = MAX_DMABUF_PLANES; } } void do_zwlr_export_dmabuf_frame_v1_evt_object(struct context *ctx, uint32_t index, int fd, uint32_t size, uint32_t offset, uint32_t stride, uint32_t plane_index) { struct obj_wlr_export_dmabuf_frame *frame = (struct obj_wlr_export_dmabuf_frame *)ctx->obj; if (index > frame->nobjects) { wp_error("Cannot add frame object with index %u >= %u", index, frame->nobjects); return; } if (frame->objects[index].buffer) { wp_error("Cannot add frame object with index %u, already used", frame->nobjects); return; } frame->objects[index].offset = offset; frame->objects[index].stride = stride; // for lack of a test program, we assume all dmabufs passed in // here are distinct, and hence need no 'multiplane' adjustments struct dmabuf_slice_data info = {.width = frame->width, .height = frame->height, .format = frame->format, .num_planes = (int32_t)frame->nobjects, .strides = {frame->objects[0].stride, frame->objects[1].stride, frame->objects[2].stride, frame->objects[3].stride}, .offsets = {frame->objects[0].offset, frame->objects[1].offset, frame->objects[2].offset, frame->objects[3].offset}, .using_planes = {false, false, false, false}, .modifier = frame->modifier}; info.using_planes[index] = true; struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_DMABUF, 0, &info, false); if (!sfd) { return; } if (sfd->buffer_size < size) { wp_error("Frame object %u has a dmabuf with less (%u) than the advertised (%u) size", index, 
(uint32_t)sfd->buffer_size, size); } // Convert the stored fds to buffer pointers now. frame->objects[index].buffer = shadow_incref_protocol(sfd); // in practice, index+1? (void)plane_index; } void do_zwlr_export_dmabuf_frame_v1_evt_ready(struct context *ctx, uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec) { struct obj_wlr_export_dmabuf_frame *frame = (struct obj_wlr_export_dmabuf_frame *)ctx->obj; if (!ctx->on_display_side) { /* The client side does not update the buffer */ return; } (void)tv_sec_hi; (void)tv_sec_lo; (void)tv_nsec; for (uint32_t i = 0; i < frame->nobjects; i++) { struct shadow_fd *sfd = frame->objects[i].buffer; if (sfd) { sfd->is_dirty = true; damage_everything(&sfd->damage); } } } static void translate_data_transfer_fd(struct context *ctx, int32_t fd) { /* treat the fd as a one-way pipe, even if it is e.g. a file or * socketpair, with additional properties. The fd being sent * around should be, according to the protocol, only written into and * closed */ (void)translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_PIPE, 0, NULL, true); } void do_gtk_primary_selection_offer_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_gtk_primary_selection_source_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwp_primary_selection_offer_v1_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwp_primary_selection_source_v1_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwlr_data_control_offer_v1_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwlr_data_control_source_v1_evt_send( struct context *ctx, const char *mime_type, int fd) { 
translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_wl_data_offer_req_receive( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_wl_data_source_evt_send( struct context *ctx, const char *mime_type, int fd) { translate_data_transfer_fd(ctx, fd); (void)mime_type; } void do_zwlr_gamma_control_v1_req_set_gamma(struct context *ctx, int fd) { size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd, &fdsz); if (fdtype == FDC_UNKNOWN) { fdtype = FDC_FILE; /* fdsz fallback? */ } // TODO: use file size from earlier in the protocol, because some // systems may send file-like objects not supporting fstat if (fdtype != FDC_FILE) { wp_error("gamma ramp fd %d was not file-like (type=%s)", fd, fdcat_to_str(fdtype)); return; } struct shadow_fd *sfd = translate_fd(&ctx->g->map, &ctx->g->render, &ctx->g->threads, fd, FDC_FILE, fdsz, NULL, false); if (!sfd) { return; } /* Mark the shadow structure as owned by the protocol, but do not * increase the protocol refcount, so that as soon as it gets * transferred it is destroyed */ sfd->has_owner = true; } #define MSGNO_XDG_TOPLEVEL_REQ_SET_TITLE 2 void do_xdg_toplevel_req_set_title(struct context *ctx, const char *str) { if (!ctx->g->config->title_prefix) { return; } size_t prefix_len = strlen(ctx->g->config->title_prefix); if (4 + (int)prefix_len >= ctx->message_available_space) { wp_error("Not enough space (%d left, at most %d needed) to prepend title prefix", ctx->message_available_space, 4 + prefix_len); return; } size_t title_len = strlen(str); size_t str_part = alignz(prefix_len + title_len + 1, 4); ctx->message[1] = message_header_2((uint32_t)str_part + 12, MSGNO_XDG_TOPLEVEL_REQ_SET_TITLE); ctx->message[2] = (uint32_t)(prefix_len + title_len + 1); char *v = (char *)&ctx->message[3]; // Using memmove, as str=&ctx->message[3] memmove(v + prefix_len, v, title_len); memset(v + prefix_len + title_len, 0, str_part - prefix_len - title_len); memcpy(v, 
ctx->g->config->title_prefix, prefix_len); ctx->message_length = 12 + (int)str_part; } const struct wp_interface *the_display_interface = &intf_wl_display; waypipe-v0.9.1/src/interval.c000066400000000000000000000207571463133614300161630ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "shadow.h" #include <stdlib.h> #include <string.h> struct merge_stack_elem { int offset; int count; }; struct merge_stack { struct interval *data; int size; int count; }; static int stream_merge(int a_count, const struct interval *__restrict__ a_list, int b_count, const struct interval *__restrict__ b_list, struct interval *__restrict__ c_list, int margin) { int ia = 0, ib = 0, ic = 0; int cursor = INT32_MIN; (void)a_count; (void)b_count; /* the loop exit condition appears to be faster than checking * ia < a_count && ib < b_count */ while (true) { struct interval sel = (a_list[ia].start < b_list[ib].start) ? a_list[ia++] : b_list[ib++]; if (sel.start == INT32_MAX) { break; } int new_cursor = cursor < sel.end ? sel.end : cursor; if (sel.start >= cursor + margin) { c_list[ic++] = sel; } else { c_list[ic - 1].end = new_cursor; } cursor = new_cursor; } /* add end sentinel */ c_list[ic] = (struct interval){.start = INT32_MAX, .end = INT32_MAX}; return ic; } static int fix_merge_stack_property(int size, struct merge_stack_elem *stack, struct merge_stack *base, struct merge_stack *temp, int merge_margin, bool force_compact, int *absorbed) { while (size > 1) { struct merge_stack_elem top = stack[size - 1]; struct merge_stack_elem nxt = stack[size - 2]; if (2 * top.count <= nxt.count && !force_compact) { return size; } if (buf_ensure_size(top.count + nxt.count + 1, sizeof(struct interval), &temp->size, (void **)&temp->data) == -1) { wp_error("Failed to resize a merge buffer, some damage intervals may be lost"); return size; } int xs = stream_merge(top.count, &base->data[top.offset], nxt.count, &base->data[nxt.offset], temp->data, merge_margin); /* There are more complicated/multi-buffer alternatives with * fewer memory copies, but this is already <20% of stream * merge time */ memcpy(&base->data[nxt.offset], temp->data, (size_t)(xs + 1) * sizeof(struct interval)); base->count = nxt.offset + xs + 1; stack[size - 1] = (struct merge_stack_elem){ .offset = 0, .count = 0}; stack[size - 2] = (struct merge_stack_elem){ .offset = nxt.offset, .count = xs}; size--; *absorbed += (top.count + nxt.count - xs); } return size; } static int unpack_ext_interval(struct interval *vec, const struct ext_interval e, int alignment_bits) { int iw
= 0; int last_end = INT32_MIN; for (int ir = 0; ir < e.rep; ir++) { int start = e.start + ir * e.stride; int end = start + e.width; start = (start >> alignment_bits) << alignment_bits; end = ((end + (1 << alignment_bits) - 1) >> alignment_bits) << alignment_bits; if (start > last_end) { vec[iw].start = start; vec[iw].end = end; last_end = end; iw++; } else { vec[iw - 1].end = end; last_end = end; } } /* end sentinel */ vec[iw] = (struct interval){.start = INT32_MAX, .end = INT32_MAX}; return iw; } /* By writing a mergesort by hand, we can detect duplicates early. * * TODO: optimize output with run-length-encoded segments * TODO: explicit time limiting/adaptive margin! */ void merge_mergesort(const int old_count, struct interval *old_list, const int new_count, const struct ext_interval *const new_list, int *dst_count, struct interval **dst_list, int merge_margin, int alignment_bits) { /* Stack-based mergesort: the buffer at position `i+1` * should be <= 1/2 times the size of the buffer at * position `i`; buffers will be merged * to maintain this invariant */ // TODO: improve memory management! 
struct merge_stack_elem substack[32]; int substack_size = 0; memset(substack, 0, sizeof(substack)); struct merge_stack base = {.data = NULL, .count = 0, .size = 0}; struct merge_stack temp = {.data = NULL, .count = 0, .size = 0}; if (old_count) { /* seed the stack with the previous damage * interval list, * including trailing terminator */ base.data = old_list; base.size = old_count + 1; base.count = old_count + 1; substack[substack_size++] = (struct merge_stack_elem){ .offset = 0, .count = old_count}; } int src_count = 0, absorbed = 0; for (int jn = 0; jn < new_count; jn++) { struct ext_interval e = new_list[jn]; /* ignore invalid intervals -- also, if e.start * is close to INT32_MIN, the stream merge * breaks */ if (e.width <= 0 || e.rep <= 0 || e.start < 0) { continue; } /* To limit CPU time, if it is very likely that * an interval would be merged anyway, then * replace it with its containing interval. */ int remaining = src_count - absorbed; bool force_combine = (absorbed > 30000) || 10 * remaining < src_count; int64_t intv_end = e.start + e.stride * (int64_t)(e.rep - 1) + e.width; if (intv_end >= INT32_MAX) { /* overflow protection */ e.width = INT32_MAX - 1 - e.start; e.rep = 1; } /* Remove internal gaps that are smaller than the * margin and hence * would need to be merged away anyway.
*/ if (e.width > e.stride - merge_margin || force_combine) { e.width = e.stride * (e.rep - 1) + e.width; e.rep = 1; } if (buf_ensure_size(base.count + e.rep + 1, sizeof(struct interval), &base.size, (void **)&base.data) == -1) { wp_error("Failed to resize a merge buffer, some damage intervals may be lost"); continue; } struct interval *vec = &base.data[base.count]; int iw = unpack_ext_interval(vec, e, alignment_bits); src_count += iw; substack[substack_size] = (struct merge_stack_elem){ .offset = base.count, .count = iw}; substack_size++; base.count += iw + 1; /* merge down the stack as far as possible */ substack_size = fix_merge_stack_property(substack_size, substack, &base, &temp, merge_margin, false, &absorbed); } /* collapse the stack into a final interval */ fix_merge_stack_property(substack_size, substack, &base, &temp, merge_margin, true, &absorbed); free(temp.data); *dst_list = base.data; *dst_count = substack[0].count; } /* This value must be larger than 8, or diffs will explode */ #define MERGE_MARGIN 256 void merge_damage_records(struct damage *base, int nintervals, const struct ext_interval *const new_list, int alignment_bits) { for (int i = 0; i < nintervals; i++) { base->acc_damage_stat += new_list[i].width * new_list[i].rep; base->acc_count++; } // Fast return if there is nothing to do if (base->damage == DAMAGE_EVERYTHING || nintervals <= 0) { return; } if (nintervals >= (1 << 30) || base->ndamage_intvs >= (1 << 30)) { /* avoid overflow in merge routine; also would be cheaper to * damage everything at this point; */ damage_everything(base); return; } merge_mergesort(base->ndamage_intvs, base->damage, nintervals, new_list, &base->ndamage_intvs, &base->damage, MERGE_MARGIN, alignment_bits); } void reset_damage(struct damage *base) { if (base->damage != DAMAGE_EVERYTHING) { free(base->damage); } base->damage = NULL; base->ndamage_intvs = 0; base->acc_damage_stat = 0; base->acc_count = 0; } void damage_everything(struct damage *base) { if (base->damage 
!= DAMAGE_EVERYTHING) { free(base->damage); } base->damage = DAMAGE_EVERYTHING; base->ndamage_intvs = 0; } waypipe-v0.9.1/src/interval.h000066400000000000000000000065321463133614300161630ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_INTERVAL_H #define WAYPIPE_INTERVAL_H #include <stdint.h> /** A slight modification of the standard 'damage' rectangle * formulation, written to be agnostic of whatever buffers * underlie the system. * * [start,start+width),[start+stride,start+stride+width), * ... [start+(rep-1)*stride,start+(rep-1)*stride+width) */ struct ext_interval { int32_t start; /** Subinterval width */ int32_t width; /** Number of distinct subinterval start positions. For a single * interval, this is one.
*/ int32_t rep; /** Spacing between start positions, should be > width, unless * there is only one subinterval, in which case the value shouldn't * matter and is conventionally set to 0. */ int32_t stride; }; /** [start, end). (This is better than {start,width}, since width computations * are rare and trivial, while merging code branches frequently off of * endpoints) */ struct interval { int32_t start; int32_t end; }; #define DAMAGE_EVERYTHING ((struct interval *)-1) /** Interval-based damage tracking. If damage is NULL, there is * no recorded damage. If damage is DAMAGE_EVERYTHING, the entire * region should be updated. If ndamage_intvs > 0, then * damage points to an array of struct interval objects. */ struct damage { struct interval *damage; int ndamage_intvs; int64_t acc_damage_stat; int acc_count; }; /** Given an array of extended intervals, update the base damage structure * so that it contains a reasonably small disjoint set of extended intervals * which contains the old base set and the new set. Before merging, all * interval boundaries will be rounded to the next multiple of * `1 << alignment_bits`.
*/ void merge_damage_records(struct damage *base, int nintervals, const struct ext_interval *const new_list, int alignment_bits); /** Set damage to empty */ void reset_damage(struct damage *base); /** Expand damage to cover everything */ void damage_everything(struct damage *base); /* internal merge driver, made visible for testing */ void merge_mergesort(const int old_count, struct interval *old_list, const int new_count, const struct ext_interval *const new_list, int *dst_count, struct interval **dst_list, int merge_margin, int alignment_bits); #endif // WAYPIPE_INTERVAL_H waypipe-v0.9.1/src/kernel.c000066400000000000000000000216501463133614300156100ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "kernel.h" #include "interval.h" #include "util.h" #include #include #include #include #include static size_t run_interval_diff_C(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end) { const uint64_t *__restrict__ mod = imod; uint64_t *__restrict__ base = ibase; uint64_t *__restrict__ diff = (uint64_t *__restrict__)idiff; /* we paper over gaps of a given window size, to avoid fine * grained context switches */ const size_t i_start = i; size_t dc = 0; uint64_t changed_val = i < i_end ? mod[i] : 0; uint64_t base_val = i < i_end ? base[i] : 0; i++; // Alternating scanners, ending with a mispredict each. bool clear_exit = false; while (i < i_end) { while (changed_val == base_val && i < i_end) { changed_val = mod[i]; base_val = base[i]; i++; } if (i == i_end) { /* it's possible that the last value actually; * see exit block */ clear_exit = true; break; } uint32_t *ctrl_blocks = (uint32_t *)&diff[dc++]; ctrl_blocks[0] = (uint32_t)((i - 1) * 2); diff[dc++] = changed_val; base[i - 1] = changed_val; // changed_val != base_val, difference occurs at early // index size_t nskip = 0; // we could only sentinel this assuming a tiny window // size while (i < i_end && nskip <= (size_t)diff_window_size / 2) { base_val = base[i]; changed_val = mod[i]; base[i] = changed_val; i++; diff[dc++] = changed_val; nskip++; nskip *= (base_val == changed_val); } dc -= nskip; ctrl_blocks[1] = (uint32_t)((i - nskip) * 2); /* our sentinel, at worst, causes overcopy by one. 
this * is fine */ } /* If only the last block changed */ if ((clear_exit || i_start + 1 == i_end) && changed_val != base_val) { uint32_t *ctrl_blocks = (uint32_t *)&diff[dc++]; ctrl_blocks[0] = (uint32_t)(i_end - 1) * 2; ctrl_blocks[1] = (uint32_t)i_end * 2; diff[dc++] = changed_val; base[i_end - 1] = changed_val; } return dc * 2; } #ifdef HAVE_AVX512F static bool avx512f_available(void) { return __builtin_cpu_supports("avx512f"); } size_t run_interval_diff_avx512f(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif #ifdef HAVE_AVX2 static bool avx2_available(void) { return __builtin_cpu_supports("avx2"); } size_t run_interval_diff_avx2(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif #ifdef HAVE_NEON bool neon_available(void); // in platform.c size_t run_interval_diff_neon(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif #ifdef HAVE_SSE3 static bool sse3_available(void) { return __builtin_cpu_supports("sse3"); } size_t run_interval_diff_sse3(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ idiff, size_t i, const size_t i_end); #endif interval_diff_fn_t get_diff_function(enum diff_type type, int *alignment_bits) { #ifdef HAVE_AVX512F if ((type == DIFF_FASTEST || type == DIFF_AVX512F) && avx512f_available()) { *alignment_bits = 6; return run_interval_diff_avx512f; } #endif #ifdef HAVE_AVX2 if ((type == DIFF_FASTEST || type == DIFF_AVX2) && avx2_available()) { *alignment_bits = 6; return run_interval_diff_avx2; } #endif #ifdef HAVE_NEON if ((type == DIFF_FASTEST || type == DIFF_NEON) && neon_available()) { *alignment_bits = 4; return run_interval_diff_neon; } #endif #ifdef HAVE_SSE3 if ((type == 
DIFF_FASTEST || type == DIFF_SSE3) && sse3_available()) { *alignment_bits = 5; return run_interval_diff_sse3; } #endif if ((type == DIFF_FASTEST || type == DIFF_C)) { *alignment_bits = 3; return run_interval_diff_C; } *alignment_bits = 0; return NULL; } /** Construct the main portion of a diff. The provided arguments should * be validated beforehand. All intervals, as well as the base/changed data * pointers, should be aligned to the alignment size associated with the * interval diff function */ size_t construct_diff_core(interval_diff_fn_t idiff_fn, int alignment_bits, const struct interval *__restrict__ damaged_intervals, int n_intervals, void *__restrict__ base, const void *__restrict__ changed, void *__restrict__ diff) { uint32_t *diff_blocks = (uint32_t *)diff; size_t cursor = 0; for (int i = 0; i < n_intervals; i++) { struct interval e = damaged_intervals[i]; size_t bend = (size_t)e.end >> alignment_bits; size_t bstart = (size_t)e.start >> alignment_bits; cursor += (*idiff_fn)(24, changed, base, diff_blocks + cursor, bstart, bend); } return cursor * sizeof(uint32_t); } size_t construct_diff_trailing(size_t size, int alignment_bits, char *__restrict__ base, const char *__restrict__ changed, char *__restrict__ diff) { size_t alignment = 1u << alignment_bits; size_t ntrailing = size % alignment; size_t offset = size - ntrailing; bool tail_change = false; if (ntrailing > 0) { for (size_t i = 0; i < ntrailing; i++) { tail_change |= base[offset + i] != changed[offset + i]; } } if (tail_change) { for (size_t i = 0; i < ntrailing; i++) { diff[i] = changed[offset + i]; base[offset + i] = changed[offset + i]; } return ntrailing; } return 0; } void apply_diff(size_t size, char *__restrict__ target1, char *__restrict__ target2, size_t diffsize, size_t ntrailing, const char *__restrict__ diff) { size_t nblocks = size / sizeof(uint32_t); size_t ndiffblocks = diffsize / sizeof(uint32_t); uint32_t *__restrict__ t1_blocks = (uint32_t *)target1; uint32_t *__restrict__ 
t2_blocks = (uint32_t *)target2; uint32_t *__restrict__ diff_blocks = (uint32_t *)diff; for (size_t i = 0; i < ndiffblocks;) { size_t nfrom = (size_t)diff_blocks[i]; size_t nto = (size_t)diff_blocks[i + 1]; size_t span = nto - nfrom; if (nto > nblocks || nfrom >= nto || i + (nto - nfrom) >= ndiffblocks) { wp_error("Invalid copy range [%zu,%zu) > %zu=nblocks or [%zu,%zu) > %zu=ndiffblocks", nfrom, nto, nblocks, i + 1, i + 1 + span, ndiffblocks); return; } memcpy(t1_blocks + nfrom, diff_blocks + i + 2, sizeof(uint32_t) * span); memcpy(t2_blocks + nfrom, diff_blocks + i + 2, sizeof(uint32_t) * span); i += span + 2; } if (ntrailing > 0) { size_t offset = size - ntrailing; for (size_t i = 0; i < ntrailing; i++) { target1[offset + i] = diff[diffsize + i]; target2[offset + i] = diff[diffsize + i]; } } } void stride_shifted_copy(char *dest, const char *src, size_t src_start, size_t copy_length, size_t row_length, size_t src_stride, size_t dst_stride) { size_t src_end = src_start + copy_length; size_t lrow = src_start / src_stride; size_t trow = src_end / src_stride; /* special case: inside a segment */ if (lrow == trow) { size_t cstart = src_start - lrow * src_stride; if (cstart < row_length) { size_t cend = src_end - trow * src_stride; cend = cend > row_length ? row_length : cend; memcpy(dest + dst_stride * lrow + cstart, src + src_start, cend - cstart); } return; } /* leading segment */ if (src_start > lrow * src_stride) { size_t igap = src_start - lrow * src_stride; if (igap < row_length) { memcpy(dest + dst_stride * lrow + igap, src + src_start, row_length - igap); } } /* main body */ size_t srow = (src_start + src_stride - 1) / src_stride; for (size_t i = srow; i < trow; i++) { memcpy(dest + dst_stride * i, src + src_stride * i, row_length); } /* trailing segment */ if (src_end > trow * src_stride) { size_t local = src_end - trow * src_stride; local = local > row_length ? 
row_length : local; memcpy(dest + dst_stride * trow, src + src_end - local, local); } } waypipe-v0.9.1/src/kernel.h000066400000000000000000000064051463133614300156160ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #ifndef WAYPIPE_KERNEL_H #define WAYPIPE_KERNEL_H #include #include struct interval; typedef size_t (*interval_diff_fn_t)(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end); enum diff_type { DIFF_FASTEST, DIFF_AVX512F, DIFF_AVX2, DIFF_SSE3, DIFF_NEON, DIFF_C, }; /** Returns a function pointer to a diff construction kernel, and indicates * the alignment of the data which is to be passed in */ interval_diff_fn_t get_diff_function(enum diff_type type, int *alignment_bits); /** Given intervals aligned to 1< #include #include #include #ifdef __x86_64__ static inline int tzcnt(uint64_t v) { return (int)_tzcnt_u64(v); } #else static inline int tzcnt(uint64_t v) { return v ? __builtin_ctzll(v) : 64; } #endif #ifdef __x86_64__ static inline int lzcnt(uint64_t v) { return (int)_lzcnt_u64(v); } #else static inline int lzcnt(uint64_t v) { return v ? __builtin_clzll(v) : 64; } #endif size_t run_interval_diff_avx2(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const __m256i *__restrict__ mod = imod; __m256i *__restrict__ base = ibase; size_t dc = 0; while (1) { /* Loop: no changes */ uint32_t *ctrl_blocks = &diff[dc]; dc += 2; int trailing_unchanged = 0; for (; i < i_end; i++) { __m256i m0 = _mm256_load_si256(&mod[2 * i]); __m256i m1 = _mm256_load_si256(&mod[2 * i + 1]); __m256i b0 = _mm256_load_si256(&base[2 * i]); __m256i b1 = _mm256_load_si256(&base[2 * i + 1]); __m256i eq0 = _mm256_cmpeq_epi32(m0, b0); __m256i eq1 = _mm256_cmpeq_epi32(m1, b1); /* It's very hard to tell which loop exit method is * better, since the routine is typically bandwidth * limited */ #if 1 uint32_t mask0 = (uint32_t)_mm256_movemask_epi8(eq0); uint32_t mask1 = (uint32_t)_mm256_movemask_epi8(eq1); uint64_t mask = mask0 + mask1 * 0x100000000uLL; if (~mask) { #else __m256i andv = _mm256_and_si256(eq0, eq1); if 
(_mm256_testz_si256(andv, _mm256_set1_epi8(-1))) { uint32_t mask0 = (uint32_t)_mm256_movemask_epi8( eq0); uint32_t mask1 = (uint32_t)_mm256_movemask_epi8( eq1); uint64_t mask = mask0 + mask1 * 0x100000000uLL; #endif _mm256_store_si256(&base[2 * i], m0); _mm256_store_si256(&base[2 * i + 1], m1); /* Write the changed bytes, starting at the * first modified term, * and set the n_unchanged counter */ size_t ncom = (size_t)tzcnt(~mask) >> 2; size_t block_shift = (ncom & 7); uint64_t esmask = 0xffffffffuLL << (block_shift * 4); __m128i halfsize = _mm_set_epi64x( 0uLL, (long long)esmask); __m256i estoremask = _mm256_cvtepi8_epi64(halfsize); _mm256_maskstore_epi32( (int *)&diff[dc - block_shift], estoremask, ncom < 8 ? m0 : m1); if (ncom < 8) { _mm256_storeu_si256( (__m256i *)&diff[dc + 8 - block_shift], m1); } dc += 16 - ncom; trailing_unchanged = lzcnt(~mask) >> 2; ctrl_blocks[0] = (uint32_t)(16 * i + ncom); i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(16 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Loop: until no changes for DIFF_WINDOW +/- 4 spaces */ for (; i < i_end; i++) { __m256i m0 = _mm256_load_si256(&mod[2 * i]); __m256i m1 = _mm256_load_si256(&mod[2 * i + 1]); __m256i b0 = _mm256_load_si256(&base[2 * i]); __m256i b1 = _mm256_load_si256(&base[2 * i + 1]); __m256i eq0 = _mm256_cmpeq_epi32(m0, b0); __m256i eq1 = _mm256_cmpeq_epi32(m1, b1); uint32_t mask0 = (uint32_t)_mm256_movemask_epi8(eq0); uint32_t mask1 = (uint32_t)_mm256_movemask_epi8(eq1); uint64_t mask = mask0 + mask1 * 0x100000000uLL; /* Reset trailing counter if anything changed */ bool clear = ~mask == 0; trailing_unchanged = clear * trailing_unchanged + (lzcnt(~mask) >> 2); _mm256_storeu_si256((__m256i *)&diff[dc], m0); _mm256_storeu_si256((__m256i *)&diff[dc + 8], m1); dc += 16; if (trailing_unchanged > diff_window_size) { i++; break; } _mm256_store_si256(&base[2 * i], m0); _mm256_store_si256(&base[2 * i + 1], m1); } /* 
Write coda */ dc -= (size_t)trailing_unchanged; ctrl_blocks[1] = (uint32_t)(16 * i - (size_t)trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.9.1/src/kernel_avx512f.c000066400000000000000000000064221463133614300170640ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include #include #include #include size_t run_interval_diff_avx512f(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const __m512i *mod = imod; __m512i *base = ibase; size_t dc = 0; while (1) { /* Loop: no changes */ uint32_t *ctrl_blocks = (uint32_t *)&diff[dc]; dc += 2; int trailing_unchanged = 0; for (; i < i_end; i++) { __m512i m = _mm512_load_si512(&mod[i]); __m512i b = _mm512_load_si512(&base[i]); uint32_t mask = (uint32_t)_mm512_cmpeq_epi32_mask(m, b); if (mask != 0xffff) { _mm512_store_si512(&base[i], m); size_t ncom = (size_t)_tzcnt_u32( ~(unsigned int)mask); __mmask16 storemask = (__mmask16)(0xffffu << ncom); #if 0 __m512i v = _mm512_maskz_compress_epi32( storemask, m); _mm512_storeu_si512(&diff[dc], v); #else _mm512_mask_storeu_epi32( &diff[dc - ncom], storemask, m); #endif dc += 16 - ncom; trailing_unchanged = (int)_lzcnt_u32(~mask & 0xffff) - 16; ctrl_blocks[0] = (uint32_t)(16 * i + ncom); i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(16 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Loop: until an entire window is clear */ for (; i < i_end; i++) { __m512i m = _mm512_load_si512(&mod[i]); __m512i b = _mm512_load_si512(&base[i]); uint32_t mask = (uint32_t)_mm512_cmpeq_epi32_mask(m, b); /* Reset trailing counter if anything changed */ uint32_t amask = ~(mask << 16); int clear = (mask == 0xffff) ? 
1 : 0; trailing_unchanged = clear * trailing_unchanged + (int)_lzcnt_u32(amask); _mm512_storeu_si512(&diff[dc], m); dc += 16; if (trailing_unchanged > diff_window_size) { i++; break; } _mm512_store_si512(&base[i], m); } /* Write coda */ dc -= (size_t)trailing_unchanged; ctrl_blocks[1] = (uint32_t)(16 * i - (size_t)trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.9.1/src/kernel_neon.c000066400000000000000000000070131463133614300166240ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include #include #include #include #include size_t run_interval_diff_neon(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const uint64_t *__restrict__ mod = imod; uint64_t *__restrict__ base = ibase; size_t dc = 0; while (1) { uint32_t *ctrl_blocks = &diff[dc]; dc += 2; /* Loop: no changes */ size_t trailing_unchanged = 0; for (; i < i_end; i++) { /* Q: does it make sense to unroll by 2, cutting branch * count in half? */ uint64x2_t b = vld1q_u64(&base[2 * i]); uint64x2_t m = vld1q_u64(&mod[2 * i]); uint64x2_t x = veorq_u64(m, b); uint32x2_t o = vqmovn_u64(x); uint64_t n = vget_lane_u64(vreinterpret_u64_u32(o), 0); if (n) { vst1q_u64(&base[2 * i], m); bool lead_empty = vget_lane_u32(o, 0) == 0; /* vtbl only works on u64 chunks, so we branch * instead */ if (lead_empty) { vst1_u64((uint64_t *)&diff[dc], vget_high_u64(m)); trailing_unchanged = 0; ctrl_blocks[0] = (uint32_t)(4 * i + 2); dc += 2; } else { vst1q_u64((uint64_t *)&diff[dc], m); trailing_unchanged = 2 * (vget_lane_u32(o, 1) == 0); ctrl_blocks[0] = (uint32_t)(4 * i); dc += 4; } trailing_unchanged = 0; i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(4 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Main copy loop */ for (; i < i_end; i++) { uint64x2_t m = vld1q_u64(&mod[2 * i]); uint64x2_t b = vld1q_u64(&base[2 * i]); uint64x2_t x = veorq_u64(m, b); uint32x2_t o = vqmovn_u64(x); uint64_t n = vget_lane_u64(vreinterpret_u64_u32(o), 0); /* Reset trailing counter if anything changed */ trailing_unchanged = trailing_unchanged * (n == 0); size_t nt = (size_t)((vget_lane_u32(o, 1) == 0) * (1 + (vget_lane_u32(o, 0) == 0))); trailing_unchanged += 2 * nt; vst1q_u64((uint64_t *)&diff[dc], m); dc += 4; if (trailing_unchanged > (size_t)diff_window_size) { i++; break; } vst1q_u64(&base[2 * i], m); } /* Write coda */ dc -= trailing_unchanged; 
ctrl_blocks[1] = (uint32_t)(4 * i - trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.9.1/src/kernel_sse3.c000066400000000000000000000101311463133614300165350ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include #include #include #include // sse #include // sse2 #include // sse3 size_t run_interval_diff_sse3(const int diff_window_size, const void *__restrict__ imod, void *__restrict__ ibase, uint32_t *__restrict__ diff, size_t i, const size_t i_end) { const __m128i *__restrict__ mod = imod; __m128i *__restrict__ base = ibase; size_t dc = 0; while (1) { /* Loop: no changes */ uint32_t *ctrl_blocks = (uint32_t *)&diff[dc]; dc += 2; int trailing_unchanged = 0; for (; i < i_end; i++) { __m128i b0 = _mm_load_si128(&base[2 * i]); __m128i b1 = _mm_load_si128(&base[2 * i + 1]); __m128i m0 = _mm_load_si128(&mod[2 * i]); __m128i m1 = _mm_load_si128(&mod[2 * i + 1]); /* pxor + ptest + branch could be faster, depending on * compiler choices */ __m128i eq0 = _mm_cmpeq_epi32(m0, b0); __m128i eq1 = _mm_cmpeq_epi32(m1, b1); uint32_t mask = (uint32_t)_mm_movemask_epi8(eq0); mask |= ((uint32_t)_mm_movemask_epi8(eq1)) << 16; if (mask != 0xffffffff) { _mm_storeu_si128(&base[2 * i], m0); _mm_storeu_si128(&base[2 * i + 1], m1); /* Write the changed bytes, starting at the * first modified term, and set the unchanged * counter. 
*/ size_t ncom = (size_t)__builtin_ctz(~mask) >> 2; union { __m128i s[2]; uint32_t v[8]; } tmp; tmp.s[0] = m0; tmp.s[1] = m1; for (size_t z = ncom; z < 8; z++) { diff[dc++] = tmp.v[z]; } trailing_unchanged = __builtin_clz(~mask) >> 2; ctrl_blocks[0] = (uint32_t)(8 * i + ncom); i++; if (i >= i_end) { /* Last block, hence will not enter copy * loop */ ctrl_blocks[1] = (uint32_t)(8 * i); dc += 2; } break; } } if (i >= i_end) { dc -= 2; break; } /* Loop: until no changes for DIFF_WINDOW +/- 4 spaces */ for (; i < i_end; i++) { __m128i b0 = _mm_load_si128(&base[2 * i]); __m128i b1 = _mm_load_si128(&base[2 * i + 1]); __m128i m0 = _mm_load_si128(&mod[2 * i]); __m128i m1 = _mm_load_si128(&mod[2 * i + 1]); __m128i eq0 = _mm_cmpeq_epi32(m0, b0); __m128i eq1 = _mm_cmpeq_epi32(m1, b1); uint32_t mask = (uint32_t)_mm_movemask_epi8(eq0); mask |= ((uint32_t)_mm_movemask_epi8(eq1)) << 16; bool clear = mask == 0xffffffff; /* Because clz is undefined when mask=0, extend */ uint64_t ext_mask = ((uint64_t)mask) << 32; int nleading = __builtin_clzll(~ext_mask); trailing_unchanged = clear * (trailing_unchanged + 8) + (!clear) * (nleading >> 2); _mm_storeu_si128((__m128i *)&diff[dc], m0); _mm_storeu_si128((__m128i *)&diff[dc + 4], m1); dc += 8; if (trailing_unchanged > diff_window_size) { i++; break; } _mm_storeu_si128(&base[2 * i], m0); _mm_storeu_si128(&base[2 * i + 1], m1); } /* Write coda */ dc -= (size_t)trailing_unchanged; ctrl_blocks[1] = (uint32_t)(8 * i - (size_t)trailing_unchanged); if (i >= i_end) { break; } } return dc; } waypipe-v0.9.1/src/main.h000066400000000000000000000055661463133614300152710ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell 
copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_MAIN_H #define WAYPIPE_MAIN_H #include "parsing.h" #include "shadow.h" #include "util.h" struct main_config { const char *drm_node; int n_worker_threads; enum compression_mode compression; int compression_level; bool no_gpu; bool only_linear_dmabuf; bool video_if_possible; int video_bpf; enum video_coding_fmt video_fmt; bool prefer_hwvideo; bool old_video_mode; bool vsock; uint32_t vsock_cid; uint32_t vsock_port; bool vsock_to_host; const char *title_prefix; }; struct globals { const struct main_config *config; struct fd_translation_map map; struct render_data render; struct message_tracker tracker; struct thread_pool threads; }; /** Main processing loop * * chanfd: connected socket to channel * progfd: connected socket to Wayland program * linkfd: optional socket providing new chanfds. (-1 means not provided) * * Returns either EXIT_SUCCESS or EXIT_FAILURE (if exit caused by an error.) 
*/ int main_interface_loop(int chanfd, int progfd, int linkfd, const struct main_config *config, bool display_side); /** Act as a Wayland server */ int run_server(int cwd_fd, struct socket_path socket_path, const char *display_suffix, const char *control_path, const struct main_config *config, bool oneshot, bool unlink_at_end, char *const app_argv[], bool login_shell_if_backup); /** Act as a Wayland client */ int run_client(int cwd_fd, const char *sock_folder_name, int sock_folder_fd, const char *sock_filename, const struct main_config *config, bool oneshot, const char *wayland_socket, pid_t eol_pid, int channelsock); /** Run benchmarking tool; n_worker_threads defined as with \ref main_config */ int run_bench(float bandwidth_mBps, uint32_t test_size, int n_worker_threads); #endif // WAYPIPE_MAIN_H waypipe-v0.9.1/src/mainloop.c000066400000000000000000001361771463133614300161610ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "main.h" #include #include #include #include #include #include #include #include #include #include // The maximum number of fds libwayland can recvmsg at once #define MAX_LIBWAY_FDS 28 static ssize_t iovec_read( int conn, char *buf, size_t buflen, struct int_window *fds) { char cmsgdata[(CMSG_LEN(MAX_LIBWAY_FDS * sizeof(int32_t)))] = {0}; struct iovec the_iovec; the_iovec.iov_len = buflen; the_iovec.iov_base = buf; struct msghdr msg = {0}; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = &cmsgdata; msg.msg_controllen = sizeof(cmsgdata); msg.msg_flags = 0; ssize_t ret = recvmsg(conn, &msg, 0); if (msg.msg_flags & MSG_CTRUNC) { wp_error("Warning, control data was truncated in recvmsg"); } // Read cmsg struct cmsghdr *header = CMSG_FIRSTHDR(&msg); while (header) { struct cmsghdr *nxt_hdr = CMSG_NXTHDR(&msg, header); if (header->cmsg_level != SOL_SOCKET || header->cmsg_type != SCM_RIGHTS) { header = nxt_hdr; continue; } int *data = (int *)CMSG_DATA(header); int nf = (int)((header->cmsg_len - CMSG_LEN(0)) / sizeof(int)); if (buf_ensure_size(fds->zone_end + nf, sizeof(int), &fds->size, (void **)&fds->data) == -1) { wp_error("Failed to allocate space for new fds"); errno = ENOMEM; ret = -1; } else { for (int i = 0; i < nf; i++) { fds->data[fds->zone_end++] = data[i]; } } header = nxt_hdr; } return ret; } static ssize_t iovec_write(int conn, const char *buf, size_t buflen, const int *fds, int numfds, int *nfds_written) { bool overflow = numfds > MAX_LIBWAY_FDS; struct iovec the_iovec; the_iovec.iov_len = overflow ? 
1 : buflen; the_iovec.iov_base = (char *)buf; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; union { char buf[CMSG_SPACE(sizeof(int) * MAX_LIBWAY_FDS)]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); if (numfds > 0) { msg.msg_control = uc.buf; msg.msg_controllen = sizeof(uc.buf); struct cmsghdr *frst = CMSG_FIRSTHDR(&msg); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; *nfds_written = min(numfds, MAX_LIBWAY_FDS); size_t nwritten = (size_t)(*nfds_written); memcpy(CMSG_DATA(frst), fds, nwritten * sizeof(int)); for (int i = 0; i < numfds; i++) { int flags = fcntl(fds[i], F_GETFL, 0); if (flags == -1 && errno == EBADF) { wp_error("Writing invalid fd %d", fds[i]); } } frst->cmsg_len = CMSG_LEN(nwritten * sizeof(int)); msg.msg_controllen = CMSG_SPACE(nwritten * sizeof(int)); wp_debug("Writing %d fds to cmsg data", *nfds_written); } else { *nfds_written = 0; } ssize_t ret = sendmsg(conn, &msg, 0); return ret; } static int translate_fds(struct fd_translation_map *map, struct render_data *render, struct thread_pool *threads, int nfds, const int fds[], int ids[]) { for (int i = 0; i < nfds; i++) { struct shadow_fd *sfd = get_shadow_for_local_fd(map, fds[i]); if (!sfd) { /* Autodetect type + create shadow fd */ size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fds[i], &fdsz); sfd = translate_fd(map, render, threads, fds[i], fdtype, fdsz, NULL, false); } if (sfd) { ids[i] = sfd->remote_id; } else { return -1; } } return 0; } /** Given a list of global ids, and an up-to-date translation map, produce local * file descriptors */ static void untranslate_ids(struct fd_translation_map *map, int nids, const int *ids, int *fds) { for (int i = 0; i < nids; i++) { struct shadow_fd *shadow = get_shadow_for_rid(map, ids[i]); if (!shadow) { wp_error("Could not untranslate remote id %d in map. 
Application will probably crash.", ids[i]); fds[i] = -1; } else { fds[i] = shadow->fd_local; } } } enum wm_state { WM_WAITING_FOR_PROGRAM, WM_WAITING_FOR_CHANNEL, WM_TERMINAL }; /** This state corresponds to the in-progress transfer from the program * (compositor or application) and its pipes/buffers to the channel. */ struct way_msg_state { enum wm_state state; /** Window zone contains the message data which has been read * but not yet parsed/copied to proto_write */ struct char_window proto_read; /** Buffer of complete protocol messages to be written to the channel */ struct char_window proto_write; /** Queue of fds to be used by protocol parser */ struct int_window fds; /** Individual messages, to be sent out via writev and deleted on * acknowledgement */ struct transfer_queue transfers; /** bytes written in this cycle, for debug */ int total_written; /** Maximum chunk size to writev at once*/ int max_iov; /** Transfers to send after the compute queue is empty */ int ntrailing; struct iovec trailing[3]; /** Statically allocated message acknowledgement messages; due * to the way they are updated out of order, at most two are needed */ struct wmsg_ack ack_msgs[2]; }; enum cm_state { CM_WAITING_FOR_PROGRAM, CM_WAITING_FOR_CHANNEL, CM_TERMINAL }; /** This state corresponds to the in-progress transfer from the channel * to the program and the buffers/pipes on which will be written. 
*/ struct chan_msg_state { enum cm_state state; /** Edited protocol data which is being written to the program */ struct char_window proto_write; /**< FDs that should immediately be transferred to the program */ struct int_window transf_fds; /**< FD queue for the protocol parser */ struct int_window proto_fds; #define RECV_GOAL_READ_SIZE 131072 char *recv_buffer; // ring-like buffer for message data size_t recv_size; size_t recv_start; // (recv_buffer+recv_start) should be a message header size_t recv_end; // last byte read from channel, always >=recv_start int recv_unhandled_messages; // number of messages to parse }; /** State used by both forward and reverse messages */ struct cross_state { /* Which was the last message received from the other * application, for which acknowledgement was sent? */ uint32_t last_acked_msgno; /* Which was the last message number received from the other * application? */ uint32_t last_received_msgno; /* What was the highest number message received from the other * application? (matches last_received, unless we needed a restart) */ uint32_t newest_received_msgno; /* Which was the last message number sent to the other application which * was acknowledged by that side? 
*/ uint32_t last_confirmed_msgno; }; static int interpret_chanmsg(struct chan_msg_state *cmsg, struct cross_state *cxs, struct globals *g, bool display_side, char *packet) { uint32_t size_and_type = *(uint32_t *)packet; size_t unpadded_size = transfer_size(size_and_type); enum wmsg_type type = transfer_type(size_and_type); if (type == WMSG_CLOSE) { /* No new messages from the channel to the program will be * allowed after this */ cmsg->state = CM_TERMINAL; wp_debug("Other side has closed"); if (unpadded_size < 8) { return ERR_FATAL; } int32_t code = ((int32_t *)packet)[1]; if (code == ERR_FATAL) { return ERR_FATAL; } else if (code == ERR_NOMEM) { return ERR_NOMEM; } else { return ERR_STOP; } } else if (type == WMSG_RESTART) { struct wmsg_restart *ackm = (struct wmsg_restart *)packet; wp_debug("Received WMSG_RESTART: remote last saw ack %d (we last recvd %d, acked %d)", ackm->last_ack_received, cxs->last_received_msgno, cxs->last_acked_msgno); cxs->last_received_msgno = ackm->last_ack_received; return 0; } else if (type == WMSG_ACK_NBLOCKS) { struct wmsg_ack *ackm = (struct wmsg_ack *)packet; wp_debug("Received WMSG_ACK_NBLOCKS: remote recvd %u", ackm->messages_received); if (msgno_gt(ackm->messages_received, cxs->last_confirmed_msgno)) { cxs->last_confirmed_msgno = ackm->messages_received; } return 0; } else { cxs->last_received_msgno++; if (msgno_gt(cxs->newest_received_msgno, cxs->last_received_msgno)) { /* Skip packet, as we already received it */ wp_debug("Ignoring replayed message %d (newest=%d)", cxs->last_received_msgno, cxs->newest_received_msgno); return 0; } cxs->newest_received_msgno = cxs->last_received_msgno; } if (type == WMSG_INJECT_RIDS) { const int32_t *fds = &((const int32_t *)packet)[1]; int nfds = (int)((unpadded_size - sizeof(uint32_t)) / sizeof(int32_t)); wp_debug("Received WMSG_INJECT_RIDS with %d fds", nfds); if (buf_ensure_size(nfds, sizeof(int), &cmsg->transf_fds.size, (void **)&cmsg->transf_fds.data) == -1) { wp_error("Allocation failure 
for fd transfer queue, expect a crash"); return ERR_NOMEM; } /* Reset transfer buffer; all fds in here were already sent */ cmsg->transf_fds.zone_start = 0; cmsg->transf_fds.zone_end = nfds; untranslate_ids(&g->map, nfds, fds, cmsg->transf_fds.data); if (nfds > 0) { if (buf_ensure_size(cmsg->proto_fds.zone_end + nfds, sizeof(int), &cmsg->proto_fds.size, (void **)&cmsg->proto_fds.data) == -1) { wp_error("Allocation failure for fd protocol queue"); return ERR_NOMEM; } // Append the new file descriptors to the parsing queue memcpy(cmsg->proto_fds.data + cmsg->proto_fds.zone_end, cmsg->transf_fds.data, sizeof(int) * (size_t)nfds); cmsg->proto_fds.zone_end += nfds; } return 0; } else if (type == WMSG_PROTOCOL) { /* While by construction, the provided message buffer should be * aligned with individual message boundaries, it is not * guaranteed that all file descriptors provided will be used by * the messages. This makes fd handling more complicated. */ int protosize = (int)(unpadded_size - sizeof(uint32_t)); wp_debug("Received WMSG_PROTOCOL with %d bytes of messages", protosize); // TODO: have message editing routines ensure size, so // that this limit can be tighter if (buf_ensure_size(protosize + 1024, 1, &cmsg->proto_write.size, (void **)&cmsg->proto_write.data) == -1) { wp_error("Allocation failure for message workspace"); return ERR_NOMEM; } cmsg->proto_write.zone_end = 0; cmsg->proto_write.zone_start = 0; struct char_window src; src.data = packet + sizeof(uint32_t); src.zone_start = 0; src.zone_end = protosize; src.size = protosize; parse_and_prune_messages(g, display_side, display_side, &src, &cmsg->proto_write, &cmsg->proto_fds); if (src.zone_start != src.zone_end) { wp_error("did not expect partial messages over channel, only parsed %d/%d bytes", src.zone_start, src.zone_end); return ERR_FATAL; } /* Update file descriptor queue */ if (cmsg->proto_fds.zone_end > cmsg->proto_fds.zone_start) { memmove(cmsg->proto_fds.data, cmsg->proto_fds.data + 
cmsg->proto_fds.zone_start, sizeof(int) * (size_t)(cmsg->proto_fds.zone_end - cmsg->proto_fds.zone_start)); cmsg->proto_fds.zone_end -= cmsg->proto_fds.zone_start; } return 0; } else { if (unpadded_size < sizeof(struct wmsg_basic)) { wp_error("Message is too small to contain header+RID, %zu bytes", unpadded_size); return ERR_FATAL; } const struct wmsg_basic *op_header = (const struct wmsg_basic *)packet; struct bytebuf msg = { .data = packet, .size = unpadded_size, }; wp_debug("Received %s for RID=%d (len %zu)", wmsg_type_to_str(type), op_header->remote_id, unpadded_size); return apply_update(&g->map, &g->threads, &g->render, type, op_header->remote_id, &msg); } } static int advance_chanmsg_chanread(struct chan_msg_state *cmsg, struct cross_state *cxs, int chanfd, bool display_side, struct globals *g) { /* Setup read operation to be able to read a minimum number of bytes, * wrapping around as early as overlap conditions permit */ if (cmsg->recv_unhandled_messages == 0) { struct iovec vec[2]; memset(vec, 0, sizeof(vec)); int nvec; if (cmsg->recv_start == cmsg->recv_end) { /* A fresh packet */ cmsg->recv_start = 0; cmsg->recv_end = 0; nvec = 1; vec[0].iov_base = cmsg->recv_buffer; vec[0].iov_len = (size_t)(cmsg->recv_size / 2); } else if (cmsg->recv_end < cmsg->recv_start + sizeof(uint32_t)) { /* Didn't quite finish reading the header */ int recvsz = (int)cmsg->recv_size; if (buf_ensure_size((int)cmsg->recv_end + RECV_GOAL_READ_SIZE, 1, &recvsz, (void **)&cmsg->recv_buffer) == -1) { wp_error("Allocation failure, resizing receive buffer failed"); return ERR_NOMEM; } cmsg->recv_size = (size_t)recvsz; nvec = 1; vec[0].iov_base = cmsg->recv_buffer + cmsg->recv_end; vec[0].iov_len = RECV_GOAL_READ_SIZE; } else { /* Continuing an old packet; space made available last * time */ uint32_t *header = (uint32_t *)&cmsg->recv_buffer [cmsg->recv_start]; size_t sz = alignz(transfer_size(*header), 4); size_t read_end = cmsg->recv_start + sz; bool wraparound = cmsg->recv_start >= 
RECV_GOAL_READ_SIZE; if (!wraparound) { read_end = maxu(read_end, cmsg->recv_end + RECV_GOAL_READ_SIZE); } int recvsz = (int)cmsg->recv_size; if (buf_ensure_size((int)read_end, 1, &recvsz, (void **)&cmsg->recv_buffer) == -1) { wp_error("Allocation failure, resizing receive buffer failed"); return ERR_NOMEM; } cmsg->recv_size = (size_t)recvsz; nvec = 1; vec[0].iov_base = cmsg->recv_buffer + cmsg->recv_end; vec[0].iov_len = read_end - cmsg->recv_end; if (wraparound) { nvec = 2; vec[1].iov_base = cmsg->recv_buffer; vec[1].iov_len = cmsg->recv_start; } } ssize_t r = readv(chanfd, vec, nvec); if (r == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { wp_debug("Read would block"); return 0; } else if (r == 0 || (r == -1 && errno == ECONNRESET)) { wp_debug("Channel connection closed"); return ERR_DISCONN; } else if (r == -1) { wp_error("chanfd read failure: %s", strerror(errno)); return ERR_FATAL; } else { if (nvec == 2 && (size_t)r >= vec[0].iov_len) { /* Complete parsing this message */ int cm_ret = interpret_chanmsg(cmsg, cxs, g, display_side, cmsg->recv_buffer + cmsg->recv_start); if (cm_ret < 0) { return cm_ret; } cmsg->recv_start = 0; cmsg->recv_end = (size_t)r - vec[0].iov_len; if (cmsg->proto_write.zone_start < cmsg->proto_write.zone_end) { goto next_stage; } } else { cmsg->recv_end += (size_t)r; } } } /* Recount unhandled messages */ cmsg->recv_unhandled_messages = 0; size_t i = cmsg->recv_start; while (i + sizeof(uint32_t) <= cmsg->recv_end) { uint32_t *header = (uint32_t *)&cmsg->recv_buffer[i]; size_t sz = alignz(transfer_size(*header), 4); if (sz == 0) { wp_error("Encountered malformed zero size packet"); return ERR_FATAL; } i += sz; if (i > cmsg->recv_end) { break; } cmsg->recv_unhandled_messages++; } while (cmsg->recv_unhandled_messages > 0) { char *packet_start = &cmsg->recv_buffer[cmsg->recv_start]; uint32_t *header = (uint32_t *)packet_start; size_t sz = transfer_size(*header); int cm_ret = interpret_chanmsg( cmsg, cxs, g, display_side, packet_start); 
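Each message on the channel is framed by a single uint32_t header that packs the unpadded byte size together with the wmsg_type, and messages are padded to 4-byte alignment on the wire; that is why the loops here advance by alignz(transfer_size(*header), 4). A minimal standalone sketch of this framing arithmetic follows; the choice of a 5-bit type field is an assumption for illustration only, the real helpers live in waypipe's headers.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirrors of transfer_header/transfer_size/transfer_type;
 * the 5-bit type field here is illustrative, not the real bit layout */
static uint32_t demo_transfer_header(size_t size, uint32_t type)
{
	return ((uint32_t)size << 5) | type;
}
static size_t demo_transfer_size(uint32_t header) { return header >> 5; }
static uint32_t demo_transfer_type(uint32_t header) { return header & 0x1fu; }

/* Round x up to a multiple of the power-of-two a, like alignz */
static size_t demo_alignz(size_t x, size_t a)
{
	return (x + a - 1) & ~(a - 1);
}
```

With such a layout, a 13-byte message round-trips its size and type through the header and occupies demo_alignz(13, 4) == 16 bytes on the wire.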
if (cm_ret < 0) { return cm_ret; } cmsg->recv_start += alignz(sz, 4); cmsg->recv_unhandled_messages--; if (cmsg->proto_write.zone_start < cmsg->proto_write.zone_end) { goto next_stage; } } return 0; next_stage: /* When protocol data was sent, switch to trying to write the protocol * data to its socket, before trying to parse any other message */ cmsg->state = CM_WAITING_FOR_PROGRAM; DTRACE_PROBE(waypipe, chanmsg_program_wait); return 0; } static int advance_chanmsg_progwrite(struct chan_msg_state *cmsg, int progfd, bool display_side, struct globals *g) { const char *progdesc = display_side ? "compositor" : "application"; // Write as much as possible while (cmsg->proto_write.zone_start < cmsg->proto_write.zone_end) { ssize_t wc = iovec_write(progfd, cmsg->proto_write.data + cmsg->proto_write.zone_start, (size_t)(cmsg->proto_write.zone_end - cmsg->proto_write.zone_start), cmsg->transf_fds.data, cmsg->transf_fds.zone_end, &cmsg->transf_fds.zone_start); if (wc == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { wp_debug("Write to the %s would block", progdesc); return 0; } else if (wc == -1 && (errno == EPIPE || errno == ECONNRESET)) { wp_error("%s has closed", progdesc); /* The program has closed its end of the connection, * so waypipe can also cease to process all messages and * data updates that would be directed to it */ cmsg->state = CM_TERMINAL; return ERR_STOP; } else if (wc == -1) { wp_error("%s write failure %zd: %s", progdesc, wc, strerror(errno)); return ERR_FATAL; } else { cmsg->proto_write.zone_start += (int)wc; wp_debug("Wrote to %s, %d/%d bytes in chunk %zd, %d/%d fds", progdesc, cmsg->proto_write.zone_start, cmsg->proto_write.zone_end, wc, cmsg->transf_fds.zone_start, cmsg->transf_fds.zone_end); if (cmsg->transf_fds.zone_start > 0) { decref_transferred_fds(&g->map, cmsg->transf_fds.zone_start, cmsg->transf_fds.data); memmove(cmsg->transf_fds.data, cmsg->transf_fds.data + cmsg->transf_fds.zone_start, (size_t)(cmsg->transf_fds.zone_end - 
cmsg->transf_fds.zone_start) * sizeof(int)); cmsg->transf_fds.zone_end -= cmsg->transf_fds.zone_start; } } } if (cmsg->proto_write.zone_start == cmsg->proto_write.zone_end) { wp_debug("Write to the %s succeeded", progdesc); cmsg->state = CM_WAITING_FOR_CHANNEL; DTRACE_PROBE(waypipe, chanmsg_channel_wait); } return 0; } static int advance_chanmsg_transfer(struct globals *g, struct chan_msg_state *cmsg, struct cross_state *cxs, bool display_side, int chanfd, int progfd, bool any_changes) { if (!any_changes) { return 0; } if (cmsg->state == CM_WAITING_FOR_CHANNEL) { return advance_chanmsg_chanread( cmsg, cxs, chanfd, display_side, g); } else if (cmsg->state == CM_WAITING_FOR_PROGRAM) { return advance_chanmsg_progwrite(cmsg, progfd, display_side, g); } return 0; } static void clear_old_transfers( struct transfer_queue *td, uint32_t inclusive_cutoff) { for (int i = 0; i < td->end; i++) { if (td->vecs[i].iov_len == 0) { wp_error("Unexpected zero sized item %d [%d,%d)", i, td->start, td->end); } } int k = 0; for (int i = 0; i < td->start; i++) { if (!msgno_gt(inclusive_cutoff, td->meta[i].msgno)) { break; } if (!td->meta[i].static_alloc) { free(td->vecs[i].iov_base); } td->vecs[i].iov_base = NULL; td->vecs[i].iov_len = 0; k = i + 1; } if (k > 0) { size_t nshift = (size_t)(td->end - k); memmove(td->meta, td->meta + k, nshift * sizeof(td->meta[0])); memmove(td->vecs, td->vecs + k, nshift * sizeof(td->vecs[0])); td->start -= k; td->end -= k; } } /* Returns 0 if successful, ERR_FATAL on fatal error, ERR_DISCONN if the * channel connection closed */ static int partial_write_transfer(int chanfd, struct transfer_queue *td, int *total_written, int max_iov) { // Waiting for channel write to complete if (td->start < td->end) { /* Advance the current element by amount actually written */ char *orig_base = td->vecs[td->start].iov_base; size_t orig_len = td->vecs[td->start].iov_len; td->vecs[td->start].iov_base = orig_base + td->partial_write_amt; td->vecs[td->start].iov_len = orig_len - td->partial_write_amt; int count = 
min(max_iov, td->end - td->start); ssize_t wr = writev(chanfd, &td->vecs[td->start], count); td->vecs[td->start].iov_base = orig_base; td->vecs[td->start].iov_len = orig_len; if (wr == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { return 0; } else if (wr == -1 && (errno == ECONNRESET || errno == EPIPE)) { wp_debug("Channel connection closed"); return ERR_DISCONN; } else if (wr == -1) { wp_error("chanfd write failure: %s", strerror(errno)); return ERR_FATAL; } size_t uwr = (size_t)wr; *total_written += (int)wr; while (uwr > 0 && td->start < td->end) { /* Skip past zero-length blocks */ if (td->vecs[td->start].iov_len == 0) { td->start++; continue; } size_t left = td->vecs[td->start].iov_len - td->partial_write_amt; if (left > uwr) { /* Block partially completed */ td->partial_write_amt += uwr; uwr = 0; } else { /* Block completed */ td->partial_write_amt = 0; uwr -= left; td->start++; } } } return 0; } static int inject_acknowledge( struct way_msg_state *wmsg, struct cross_state *cxs) { if (transfer_ensure_size(&wmsg->transfers, wmsg->transfers.end + 1) == -1) { wp_error("Failed to allocate space for ack message transfer"); return -1; } /* To avoid infinite regress, receive acknowledgement * messages do not themselves increase the message counters. */ uint32_t ack_msgno; if (wmsg->transfers.start == wmsg->transfers.end) { ack_msgno = wmsg->transfers.last_msgno; } else { ack_msgno = wmsg->transfers.meta[wmsg->transfers.start].msgno; } /* This is the next point where messages can be changed */ int next_slot = (wmsg->transfers.partial_write_amt > 0) ? 
wmsg->transfers.start + 1 : wmsg->transfers.start; struct wmsg_ack *not_in_prog_msg = NULL; struct wmsg_ack *queued_msg = NULL; for (size_t i = 0; i < 2; i++) { if (wmsg->transfers.partial_write_amt > 0 && wmsg->transfers.vecs[wmsg->transfers.start] .iov_base == &wmsg->ack_msgs[i]) { not_in_prog_msg = &wmsg->ack_msgs[1 - i]; } if (next_slot < wmsg->transfers.end && wmsg->transfers.vecs[next_slot].iov_base == &wmsg->ack_msgs[i]) { queued_msg = &wmsg->ack_msgs[i]; } } if (!queued_msg) { /* Insert a message--which is not partially written-- * in the next available slot, pushing forward other * messages */ if (!not_in_prog_msg) { queued_msg = &wmsg->ack_msgs[0]; } else { queued_msg = not_in_prog_msg; } if (next_slot < wmsg->transfers.end) { size_t nmoved = (size_t)(wmsg->transfers.end - next_slot); memmove(wmsg->transfers.vecs + next_slot + 1, wmsg->transfers.vecs + next_slot, sizeof(*wmsg->transfers.vecs) * nmoved); memmove(wmsg->transfers.meta + next_slot + 1, wmsg->transfers.meta + next_slot, sizeof(*wmsg->transfers.meta) * nmoved); } wmsg->transfers.vecs[next_slot].iov_len = sizeof(struct wmsg_ack); wmsg->transfers.vecs[next_slot].iov_base = queued_msg; wmsg->transfers.meta[next_slot].msgno = ack_msgno; wmsg->transfers.meta[next_slot].static_alloc = true; wmsg->transfers.end++; } /* Modify the message which is now next up in the transfer * queue */ queued_msg->size_and_type = transfer_header( sizeof(struct wmsg_ack), WMSG_ACK_NBLOCKS); queued_msg->messages_received = cxs->last_received_msgno; cxs->last_acked_msgno = cxs->last_received_msgno; return 0; } static int advance_waymsg_chanwrite(struct way_msg_state *wmsg, struct cross_state *cxs, struct globals *g, int chanfd, bool display_side) { const char *progdesc = display_side ? "compositor" : "application"; /* Copy the data in the transfer queue to the write queue. 
*/ (void)transfer_load_async(&wmsg->transfers); // First, clear out any transfers that are no longer needed clear_old_transfers(&wmsg->transfers, cxs->last_confirmed_msgno); /* Acknowledge the other side's transfers as soon as possible */ if (cxs->last_acked_msgno != cxs->last_received_msgno) { (void)inject_acknowledge(wmsg, cxs); } int ret = partial_write_transfer(chanfd, &wmsg->transfers, &wmsg->total_written, wmsg->max_iov); if (ret < 0) { return ret; } bool is_done = false; struct task_data task; bool has_task = request_work_task(&g->threads, &task, &is_done); /* Run a task ourselves, making use of the main thread */ if (has_task) { run_task(&task, &g->threads.threads[0]); pthread_mutex_lock(&g->threads.work_mutex); g->threads.tasks_in_progress--; pthread_mutex_unlock(&g->threads.work_mutex); /* To skip the next poll */ uint8_t triv = 0; if (write(g->threads.selfpipe_w, &triv, 1) == -1) { wp_error("Failed to write to self-pipe"); } } if (is_done) { /* It's possible for the last task to complete between * `transfer_load_async` and `request_work_task` in this * function, so copy out any remaining messages.`*/ (void)transfer_load_async(&wmsg->transfers); } if (is_done && wmsg->ntrailing > 0) { for (int i = 0; i < wmsg->ntrailing; i++) { transfer_add(&wmsg->transfers, wmsg->trailing[i].iov_len, wmsg->trailing[i].iov_base); } wmsg->ntrailing = 0; memset(wmsg->trailing, 0, sizeof(wmsg->trailing)); } if (wmsg->transfers.start == wmsg->transfers.end && is_done) { for (struct shadow_fd_link *lcur = g->map.link.l_next, *lnxt = lcur->l_next; lcur != &g->map.link; lcur = lnxt, lnxt = lcur->l_next) { /* Note: finish_update() may delete `cur` */ struct shadow_fd *cur = (struct shadow_fd *)lcur; finish_update(cur); destroy_shadow_if_unreferenced(cur); } /* Reset work queue */ pthread_mutex_lock(&g->threads.work_mutex); if (g->threads.stack_count > 0 || g->threads.tasks_in_progress > 0) { wp_error("Multithreading state failure"); } g->threads.do_work = false; 
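The write to g->threads.selfpipe_w in this function is the classic self-pipe wakeup: the read end sits in the main poll() set, so finishing a task on the main thread forces the next poll() to return immediately instead of blocking. A self-contained sketch of the pattern, with illustrative names:

```c
#include <assert.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

/* Create a nonblocking self-pipe; returns 0 on success, -1 on failure */
static int demo_selfpipe(int fds[2])
{
	if (pipe(fds) == -1) {
		return -1;
	}
	fcntl(fds[0], F_SETFL, O_NONBLOCK);
	fcntl(fds[1], F_SETFL, O_NONBLOCK);
	return 0;
}

/* Wake up any poll() watching the read end */
static void demo_wake(int write_fd)
{
	char byte = 0;
	if (write(write_fd, &byte, 1) == -1) {
		/* nonblocking pipe full: a wakeup is already pending */
	}
}

/* Poll the read end with zero timeout; returns 1 if signalled, draining it */
static int demo_check(int read_fd)
{
	struct pollfd pfd = {.fd = read_fd, .events = POLLIN, .revents = 0};
	if (poll(&pfd, 1, 0) > 0 && (pfd.revents & POLLIN)) {
		char buf[64];
		(void)read(read_fd, buf, sizeof(buf)); /* drain */
		return 1;
	}
	return 0;
}
```

Draining the pipe on wakeup (as the main loop does for selfpipe_r) keeps repeated wakeups from accumulating unread bytes.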
g->threads.stack_count = 0; g->threads.tasks_in_progress = 0; pthread_mutex_unlock(&g->threads.work_mutex); DTRACE_PROBE(waypipe, channel_write_end); size_t unacked_bytes = 0; for (int i = 0; i < wmsg->transfers.end; i++) { unacked_bytes += wmsg->transfers.vecs[i].iov_len; } wp_debug("Sent %d-byte message from %s to channel; %zu-bytes in flight", wmsg->total_written, progdesc, unacked_bytes); /* do not delete the used transfers yet; we need a remote * acknowledgement */ wmsg->total_written = 0; wmsg->state = WM_WAITING_FOR_PROGRAM; } return 0; } static int advance_waymsg_progread(struct way_msg_state *wmsg, struct globals *g, int progfd, bool display_side, bool progsock_readable) { const char *progdesc = display_side ? "compositor" : "application"; // We have data to read from programs/pipes bool new_proto_data = false; int old_fbuffer_end = wmsg->fds.zone_end; if (progsock_readable) { // Read /once/ ssize_t rc = iovec_read(progfd, wmsg->proto_read.data + wmsg->proto_read.zone_end, (size_t)(wmsg->proto_read.size - wmsg->proto_read.zone_end), &wmsg->fds); if (rc == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { // do nothing } else if (rc == 0 || (rc == -1 && errno == ECONNRESET)) { wp_debug("%s has closed", progdesc); // state transitions handled in main loop return ERR_STOP; } else if (rc == -1) { wp_error("%s read failure: %s", progdesc, strerror(errno)); return ERR_FATAL; } else { // We have successfully read some data. 
wmsg->proto_read.zone_end += (int)rc; new_proto_data = true; } } if (new_proto_data) { wp_debug("Read %d new file descriptors, have %d total now", wmsg->fds.zone_end - old_fbuffer_end, wmsg->fds.zone_end); if (buf_ensure_size(wmsg->proto_read.size + 1024, 1, &wmsg->proto_write.size, (void **)&wmsg->proto_write.data) == -1) { wp_error("Allocation failure for message workspace"); return ERR_NOMEM; } wmsg->proto_write.zone_start = 0; wmsg->proto_write.zone_end = 0; parse_and_prune_messages(g, display_side, !display_side, &wmsg->proto_read, &wmsg->proto_write, &wmsg->fds); /* Recycle partial message bytes */ if (wmsg->proto_read.zone_start > 0) { if (wmsg->proto_read.zone_end > wmsg->proto_read.zone_start) { memmove(wmsg->proto_read.data, wmsg->proto_read.data + wmsg->proto_read.zone_start, (size_t)(wmsg->proto_read.zone_end - wmsg->proto_read.zone_start)); } wmsg->proto_read.zone_end -= wmsg->proto_read.zone_start; wmsg->proto_read.zone_start = 0; } } read_readable_pipes(&g->map); for (struct shadow_fd_link *lcur = g->map.link.l_next, *lnxt = lcur->l_next; lcur != &g->map.link; lcur = lnxt, lnxt = lcur->l_next) { /* Note: finish_update() may delete `cur` */ struct shadow_fd *cur = (struct shadow_fd *)lcur; collect_update(&g->threads, cur, &wmsg->transfers, g->config->old_video_mode); /* collecting updates can reset `pipe.remote_can_X` state, so * garbage collect the sfd immediately after */ destroy_shadow_if_unreferenced(cur); } int num_mt_tasks = start_parallel_work( &g->threads, &wmsg->transfers.async_recv_queue); if (new_proto_data) { /* Send all file descriptors which have been used by the * protocol parser, translating them if this has not already * been done */ if (wmsg->fds.zone_start > 0) { size_t act_size = (size_t)wmsg->fds.zone_start * sizeof(int32_t) + sizeof(uint32_t); uint32_t *msg = malloc(act_size); if (!msg) { // TODO: use a ring buffer for allocations, // and figure out how to block until it is clear wp_error("Failed to allocate file desc tx msg"); 
return ERR_NOMEM; } msg[0] = transfer_header(act_size, WMSG_INJECT_RIDS); int32_t *rbuffer = (int32_t *)(msg + 1); /* Translate and adjust refcounts */ if (translate_fds(&g->map, &g->render, &g->threads, wmsg->fds.zone_start, wmsg->fds.data, rbuffer) == -1) { free(msg); return ERR_FATAL; } decref_transferred_rids( &g->map, wmsg->fds.zone_start, rbuffer); memmove(wmsg->fds.data, wmsg->fds.data + wmsg->fds.zone_start, sizeof(int) * (size_t)(wmsg->fds.zone_end - wmsg->fds.zone_start)); wmsg->fds.zone_end -= wmsg->fds.zone_start; wmsg->fds.zone_start = 0; /* Add message to trailing queue */ wmsg->trailing[wmsg->ntrailing].iov_len = act_size; wmsg->trailing[wmsg->ntrailing].iov_base = msg; wmsg->ntrailing++; } if (wmsg->proto_write.zone_end > 0) { wp_debug("We are transferring a data buffer with %d bytes", wmsg->proto_write.zone_end); size_t act_size = (size_t)wmsg->proto_write.zone_end + sizeof(uint32_t); uint32_t protoh = transfer_header( act_size, WMSG_PROTOCOL); uint8_t *copy_proto = malloc(alignz(act_size, 4)); if (!copy_proto) { wp_error("Failed to allocate protocol tx msg"); return ERR_NOMEM; } memcpy(copy_proto, &protoh, sizeof(uint32_t)); memcpy(copy_proto + sizeof(uint32_t), wmsg->proto_write.data, (size_t)wmsg->proto_write.zone_end); memset(copy_proto + sizeof(uint32_t) + wmsg->proto_write .zone_end, 0, alignz(act_size, 4) - act_size); wmsg->trailing[wmsg->ntrailing].iov_len = alignz(act_size, 4); wmsg->trailing[wmsg->ntrailing].iov_base = copy_proto; wmsg->ntrailing++; } } int n_transfers = wmsg->transfers.end - wmsg->transfers.start; size_t net_bytes = 0; for (int i = wmsg->transfers.start; i < wmsg->transfers.end; i++) { net_bytes += wmsg->transfers.vecs[i].iov_len; } if (n_transfers > 0 || num_mt_tasks > 0 || wmsg->ntrailing > 0) { wp_debug("Channel message start (%d blobs, %zu bytes, %d trailing, %d tasks)", n_transfers, net_bytes, wmsg->ntrailing, num_mt_tasks); wmsg->state = WM_WAITING_FOR_CHANNEL; DTRACE_PROBE(waypipe, channel_write_start); } return 0; 
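The WMSG_PROTOCOL branch above prepends a 4-byte header and zero-pads the payload to 4-byte alignment before queueing it for the channel. The bookkeeping can be sketched in isolation as follows; demo_wrap and its placeholder header value are illustrative, not waypipe's actual encoding.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Round up to a multiple of four, like alignz(x, 4) */
static size_t demo_alignz4(size_t x) { return (x + 3) & ~(size_t)3; }

/* Wrap `len` payload bytes into a demo transfer: a 4-byte header
 * (placeholder contents here), the payload, then zero padding up to
 * 4-byte alignment. Returns a heap buffer of *out_len bytes, or NULL. */
static uint8_t *demo_wrap(const void *payload, size_t len, size_t *out_len)
{
	size_t act_size = len + sizeof(uint32_t);
	size_t padded = demo_alignz4(act_size);
	uint8_t *buf = calloc(1, padded); /* calloc zeroes the padding */
	if (!buf) {
		return NULL;
	}
	uint32_t header = (uint32_t)act_size; /* placeholder header value */
	memcpy(buf, &header, sizeof(header));
	memcpy(buf + sizeof(header), payload, len);
	*out_len = padded;
	return buf;
}
```

Using calloc for the whole padded buffer avoids sending uninitialized trailing bytes, matching the explicit memset of the padding in the code above.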
} static int advance_waymsg_transfer(struct globals *g, struct way_msg_state *wmsg, struct cross_state *cxs, bool display_side, int chanfd, int progfd, bool progsock_readable) { if (wmsg->state == WM_WAITING_FOR_CHANNEL) { return advance_waymsg_chanwrite( wmsg, cxs, g, chanfd, display_side); } else if (wmsg->state == WM_WAITING_FOR_PROGRAM) { return advance_waymsg_progread(wmsg, g, progfd, display_side, progsock_readable); } return 0; } static int read_new_chanfd(int linkfd, struct int_window *recon_fds) { uint8_t tmp = 0; ssize_t rd = iovec_read(linkfd, (char *)&tmp, 1, recon_fds); if (rd == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) { // do nothing return -1; } else if (rd == 0 || (rd == -1 && errno == ECONNRESET)) { wp_error("link has closed"); // sentinel value, to indicate that linkfd should be closed return -2; } else if (rd == -1) { wp_error("link read failure: %s", strerror(errno)); return -1; } for (int i = 0; i < recon_fds->zone_end - 1; i++) { checked_close(recon_fds->data[i]); } int ret_fd = -1; if (recon_fds->zone_end > 0) { ret_fd = recon_fds->data[recon_fds->zone_end - 1]; } recon_fds->zone_end = 0; return ret_fd; } static int reconnect_loop(int linkfd, int progfd, struct int_window *recon_fds) { while (!shutdown_flag) { struct pollfd rcfs[2]; rcfs[0].fd = linkfd; rcfs[0].events = POLLIN; rcfs[0].revents = 0; rcfs[1].fd = progfd; rcfs[1].events = 0; rcfs[1].revents = 0; int r = poll(rcfs, 2, -1); if (r == -1) { if (errno == EINTR) { continue; } else { break; } } if (rcfs[0].revents & POLLIN) { int nfd = read_new_chanfd(linkfd, recon_fds); if (nfd != -1) { return nfd; } } if (rcfs[0].revents & POLLHUP || rcfs[1].revents & POLLHUP) { return -1; } } return -1; } static void reset_connection(struct cross_state *cxs, struct chan_msg_state *cmsg, struct way_msg_state *wmsg, int chanfd) { /* Discard partial read transfer, throwing away complete but unread * messages, and trailing remnants */ cmsg->recv_end = 0; cmsg->recv_start = 0; 
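Message numbers such as last_confirmed_msgno are 32-bit counters that may wrap, so ordering tests like msgno_gt cannot use a plain `>`; the usual fix is serial-number arithmetic, comparing via a signed difference. A sketch of what such a comparison presumably looks like (the real msgno_gt may differ in detail):

```c
#include <stdbool.h>
#include <stdint.h>

/* Wraparound-safe "a is newer than b", in the style of RFC 1982 serial
 * number arithmetic; a stand-in for the msgno_gt used in this file */
static bool demo_msgno_gt(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) > 0;
}
```

Under this rule 0 counts as newer than UINT32_MAX, so acknowledgement bookkeeping keeps working when the counters wrap.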
cmsg->recv_unhandled_messages = 0; clear_old_transfers(&wmsg->transfers, cxs->last_confirmed_msgno); wp_debug("Resetting connection: %d blocks unacknowledged", wmsg->transfers.end); if (wmsg->transfers.end > 0) { /* If there was any data in flight, restart. If there wasn't * anything in flight, then the remote side shouldn't notice the * difference */ struct wmsg_restart restart; restart.size_and_type = transfer_header(sizeof(restart), WMSG_RESTART); restart.last_ack_received = cxs->last_confirmed_msgno; wmsg->transfers.start = 0; wmsg->transfers.partial_write_amt = 0; wp_debug("Sending restart message: last ack=%d", restart.last_ack_received); if (write(chanfd, &restart, sizeof(restart)) != sizeof(restart)) { wp_error("Failed to write restart message"); } } if (set_nonblocking(chanfd) == -1) { wp_error("Error making new channel connection nonblocking: %s", strerror(errno)); } (void)cxs; } static int set_connections_nonblocking( int chanfd, int progfd, int linkfd, bool display_side) { const char *progdesc = display_side ? "compositor" : "application"; if (set_nonblocking(chanfd) == -1) { wp_error("Error making channel connection nonblocking: %s", strerror(errno)); return -1; } if (set_nonblocking(progfd) == -1) { wp_error("Error making %s connection nonblocking: %s", progdesc, strerror(errno)); return -1; } if (linkfd != -1 && set_nonblocking(linkfd) == -1) { wp_error("Error making link connection nonblocking: %s", strerror(errno)); return -1; } return 0; } int main_interface_loop(int chanfd, int progfd, int linkfd, const struct main_config *config, bool display_side) { if (set_connections_nonblocking(chanfd, progfd, linkfd, display_side) == -1) { if (linkfd != -1) { checked_close(linkfd); } checked_close(chanfd); checked_close(progfd); return EXIT_FAILURE; } const char *progdesc = display_side ? 
"compositor" : "application"; wp_debug("Running main loop on %s side", progdesc); struct way_msg_state way_msg; memset(&way_msg, 0, sizeof(way_msg)); struct chan_msg_state chan_msg; memset(&chan_msg, 0, sizeof(chan_msg)); struct cross_state cross_data; memset(&cross_data, 0, sizeof(cross_data)); struct globals g; memset(&g, 0, sizeof(g)); way_msg.state = WM_WAITING_FOR_PROGRAM; /* AFAIK, there is no documented upper bound for the size of a * Wayland protocol message, but libwayland (in wl_buffer_put) * effectively limits message sizes to 4096 bytes. We must * therefore adopt a limit at least as large. */ const int max_read_size = 4096; way_msg.proto_read.size = max_read_size; way_msg.proto_read.data = malloc((size_t)way_msg.proto_read.size); way_msg.fds.size = 128; way_msg.fds.data = malloc((size_t)way_msg.fds.size * sizeof(int)); way_msg.proto_write.size = 2 * max_read_size; way_msg.proto_write.data = malloc((size_t)way_msg.proto_write.size); way_msg.max_iov = get_iov_max(); int mut_ret = pthread_mutex_init( &way_msg.transfers.async_recv_queue.lock, NULL); if (mut_ret) { wp_error("Mutex creation failed: %s", strerror(mut_ret)); goto init_failure_cleanup; } chan_msg.state = CM_WAITING_FOR_CHANNEL; chan_msg.recv_size = 2 * RECV_GOAL_READ_SIZE; chan_msg.recv_buffer = malloc((size_t)chan_msg.recv_size); chan_msg.proto_write.size = max_read_size * 2; chan_msg.proto_write.data = malloc((size_t)chan_msg.proto_write.size); if (!chan_msg.proto_write.data || !chan_msg.recv_buffer || !way_msg.proto_write.data || !way_msg.fds.data || !way_msg.proto_read.data) { wp_error("Failed to allocate a message scratch buffer"); goto init_failure_cleanup; } /* The first packet received will be #1 */ way_msg.transfers.last_msgno = 1; g.config = config; g.render = (struct render_data){ .drm_node_path = config->drm_node, .drm_fd = -1, .dev = NULL, .disabled = config->no_gpu, .av_disabled = config->no_gpu || !config->prefer_hwvideo, .av_bpf = config->video_bpf, .av_video_fmt = 
(int)config->video_fmt, .av_hwdevice_ref = NULL, .av_drmdevice_ref = NULL, .av_vadisplay = NULL, .av_copy_config = 0, }; if (setup_thread_pool(&g.threads, config->compression, config->compression_level, config->n_worker_threads) == -1) { goto init_failure_cleanup; } setup_translation_map(&g.map, display_side); if (init_message_tracker(&g.tracker) == -1) { goto init_failure_cleanup; } struct int_window recon_fds = { .data = NULL, .size = 0, .zone_start = 0, .zone_end = 0, }; bool needs_new_channel = false; struct pollfd *pfds = NULL; int pfds_size = 0; int exit_code = 0; while (!shutdown_flag && exit_code == 0 && !(way_msg.state == WM_TERMINAL && chan_msg.state == CM_TERMINAL)) { int psize = 4 + count_npipes(&g.map); if (buf_ensure_size(psize, sizeof(struct pollfd), &pfds_size, (void **)&pfds) == -1) { wp_error("Allocation failure, not enough space for pollfds"); exit_code = ERR_NOMEM; break; } pfds[0].fd = chanfd; pfds[1].fd = progfd; pfds[2].fd = linkfd; pfds[3].fd = g.threads.selfpipe_r; pfds[0].events = 0; pfds[1].events = 0; pfds[2].events = POLLIN; pfds[3].events = POLLIN; if (way_msg.state == WM_WAITING_FOR_CHANNEL) { pfds[0].events |= POLLOUT; } else if (way_msg.state == WM_WAITING_FOR_PROGRAM) { pfds[1].events |= POLLIN; } if (chan_msg.state == CM_WAITING_FOR_CHANNEL) { pfds[0].events |= POLLIN; } else if (chan_msg.state == CM_WAITING_FOR_PROGRAM) { pfds[1].events |= POLLOUT; } bool check_read = way_msg.state == WM_WAITING_FOR_PROGRAM; int npoll = 4 + fill_with_pipes(&g.map, pfds + 4, check_read); bool own_msg_pending = (cross_data.last_acked_msgno != cross_data.last_received_msgno) && way_msg.state == WM_WAITING_FOR_PROGRAM; bool unread_chan_msgs = chan_msg.state == CM_WAITING_FOR_CHANNEL && chan_msg.recv_unhandled_messages > 0; int poll_delay; if (unread_chan_msgs) { /* There is work to do, so continue */ poll_delay = 0; } else if (own_msg_pending) { /* To coalesce acknowledgements, we wait for a minimum * amount */ poll_delay = 20; } else { poll_delay = 
-1; } int r = poll(pfds, (nfds_t)npoll, poll_delay); if (r == -1) { if (errno == EINTR) { wp_error("poll interrupted: shutdown=%c", shutdown_flag ? 'Y' : 'n'); continue; } else { wp_error("poll failed due to %s, stopping", strerror(errno)); exit_code = ERR_FATAL; break; } } if (pfds[3].revents & POLLIN) { /* After the self pipe has been used to wake up the * connection, drain it */ char tmp[64]; (void)read(g.threads.selfpipe_r, tmp, sizeof(tmp)); } mark_pipe_object_statuses(&g.map, npoll - 4, pfds + 4); /* POLLHUP sometimes implies POLLIN, but not on all systems. * Checking POLLHUP|POLLIN means that we can detect EOF when * we actually do try to read from the sockets, but also, if * there was data in the pipe just before the hang up, then we * can read and handle that data. */ bool progsock_readable = pfds[1].revents & (POLLIN | POLLHUP); bool chanmsg_active = (pfds[0].revents & (POLLIN | POLLHUP)) || (pfds[1].revents & POLLOUT) || unread_chan_msgs; bool maybe_new_channel = (pfds[2].revents & (POLLIN | POLLHUP)); if (maybe_new_channel) { int new_fd = read_new_chanfd(linkfd, &recon_fds); if (new_fd >= 0) { if (chanfd != -1) { checked_close(chanfd); } chanfd = new_fd; reset_connection(&cross_data, &chan_msg, &way_msg, chanfd); needs_new_channel = false; } else if (new_fd == -2) { wp_error("Link to root process hang-up detected"); checked_close(linkfd); linkfd = -1; } } if (needs_new_channel && linkfd != -1) { wp_error("Channel hang up detected, waiting for reconnection"); int new_fd = reconnect_loop(linkfd, progfd, &recon_fds); if (new_fd < 0) { // -1 is read failure or misc error, -2 is HUP exit_code = ERR_FATAL; break; } else { /* Actually handle the reconnection/reset state */ if (chanfd != -1) { checked_close(chanfd); } chanfd = new_fd; reset_connection(&cross_data, &chan_msg, &way_msg, chanfd); needs_new_channel = false; } } else if (needs_new_channel) { wp_error("Channel hang up detected, no reconnection link, fatal"); exit_code = ERR_FATAL; break; } // Q: 
randomize the order of these, to highlight // accidental dependencies? for (int m = 0; m < 2; m++) { int tr; if (m == 0) { tr = advance_chanmsg_transfer(&g, &chan_msg, &cross_data, display_side, chanfd, progfd, chanmsg_active); } else { tr = advance_waymsg_transfer(&g, &way_msg, &cross_data, display_side, chanfd, progfd, progsock_readable); } if (tr >= 0) { /* do nothing */ } else if (tr == ERR_DISCONN) { /* Channel connection has at least * partially been shut down, so close it * fully. */ checked_close(chanfd); chanfd = -1; if (linkfd == -1) { wp_error("Channel hang up detected, no reconnection link, fatal"); exit_code = ERR_FATAL; break; } needs_new_channel = true; } else if (tr == ERR_STOP) { if (m == 0) { /* Stop returned while writing: Wayland * connection has at least partially * shut down, so close it fully. */ checked_close(progfd); progfd = -1; } else { /* Stop returned while reading */ checked_close(progfd); progfd = -1; if (way_msg.state == WM_WAITING_FOR_PROGRAM) { way_msg.state = WM_TERMINAL; } if (chan_msg.state == CM_WAITING_FOR_PROGRAM || chan_msg.recv_start == chan_msg.recv_end) { chan_msg.state = CM_TERMINAL; } } } else { /* Fatal error, close and flush */ exit_code = tr; break; } /* If the program connection has closed, and * waypipe is not currently transferring * any message to the channel, then shut down the * program->channel transfers. (The reverse * situation with the channel connection is not * a cause for permanent closure, thanks to * reconnection support) */ if (progfd == -1) { if (way_msg.state == WM_WAITING_FOR_PROGRAM) { way_msg.state = WM_TERMINAL; } if (chan_msg.state == CM_WAITING_FOR_PROGRAM || chan_msg.recv_start == chan_msg.recv_end) { chan_msg.state = CM_TERMINAL; } } } // Periodic maintenance. 
It doesn't matter who does this flush_writable_pipes(&g.map); } free(pfds); free(recon_fds.data); wp_debug("Exiting main loop (%d, %d, %d), attempting close message", exit_code, way_msg.state, chan_msg.state); init_failure_cleanup: /* It's possible, but very very unlikely, that waypipe gets closed * while Wayland protocol messages are being written to the program * and the most recent message was only partially written. */ exit_code = ERR_FATAL; if (chan_msg.proto_write.zone_start != chan_msg.proto_write.zone_end) { wp_debug("Final write to %s was incomplete, %d/%d", progdesc, chan_msg.proto_write.zone_start, chan_msg.proto_write.zone_end); } if (!display_side && progfd != -1) { char error[128]; if (exit_code == ERR_FATAL) { size_t len = print_display_error(error, sizeof(error), 3, "waypipe internal error"); if (write(progfd, error, len) == -1) { wp_error("Failed to send waypipe error notification: %s", strerror(errno)); } } else if (exit_code == ERR_NOMEM) { size_t len = print_display_error( error, sizeof(error), 2, "no memory"); if (write(progfd, error, len) == -1) { wp_error("Failed to send OOM notification: %s", strerror(errno)); } } } /* Attempt to notify remote end that the application has closed, * waiting at most for a very short amount of time */ if (way_msg.transfers.start != way_msg.transfers.end) { wp_error("Final write to channel was incomplete, %d+%zu/%d", way_msg.transfers.start, way_msg.transfers.partial_write_amt, way_msg.transfers.end); } if (chanfd != -1) { struct pollfd close_poll; close_poll.fd = chanfd; close_poll.events = POLLOUT; int close_ret = poll(&close_poll, 1, 200); if (close_ret == 0) { wp_debug("Exit poll timed out"); } uint32_t close_msg[2]; close_msg[0] = transfer_header(sizeof(close_msg), WMSG_CLOSE); close_msg[1] = exit_code == ERR_STOP ? 
0 : (uint32_t)exit_code; wp_debug("Sending close message, modecode=%d", close_msg[1]); if (write(chanfd, &close_msg, sizeof(close_msg)) == -1) { wp_error("Failed to send close notification: %s", strerror(errno)); } } else { wp_debug("Channel closed, hence no close notification"); } cleanup_thread_pool(&g.threads); cleanup_message_tracker(&g.tracker); cleanup_translation_map(&g.map); cleanup_render_data(&g.render); cleanup_hwcontext(&g.render); free(way_msg.proto_read.data); free(way_msg.proto_write.data); free(way_msg.fds.data); cleanup_transfer_queue(&way_msg.transfers); for (int i = 0; i < way_msg.ntrailing; i++) { free(way_msg.trailing[i].iov_base); } free(chan_msg.transf_fds.data); free(chan_msg.proto_fds.data); free(chan_msg.recv_buffer); free(chan_msg.proto_write.data); if (chanfd != -1) { checked_close(chanfd); } if (progfd != -1) { checked_close(progfd); } if (linkfd != -1) { checked_close(linkfd); } return EXIT_SUCCESS; } waypipe-v0.9.1/src/meson.build000066400000000000000000000054721463133614300163320ustar00rootroot00000000000000 waypipe_source_files = ['dmabuf.c', 'handlers.c', 'kernel.c', 'mainloop.c', 'parsing.c', 'platform.c', 'shadow.c', 'interval.c', 'util.c', 'video.c'] waypipe_deps = [ pthreads, # To run expensive computations in parallel rt, # For shared memory ] if config_data.has('HAS_DMABUF') # General GPU buffer creation, aligned with dmabuf proto waypipe_deps += [libgbm] endif if config_data.has('HAS_LZ4') waypipe_deps += [liblz4] # Fast compression option endif if config_data.has('HAS_ZSTD') waypipe_deps += [libzstd] # Slow compression option endif if config_data.has('HAS_VIDEO') waypipe_deps += [libavcodec,libavutil,libswscale] endif if config_data.has('HAS_VAAPI') waypipe_deps += [libva] # For NV12->RGB conversions endif # Conditionally compile SIMD-optimized code. 
# (The meson simd module is a bit too limited for this) kernel_libs = [] if cc.has_argument('-mavx512f') and cc.has_argument('-mlzcnt') and cc.has_argument('-mbmi') and get_option('with_avx512f') kernel_libs += static_library('kernel_avx512f', 'kernel_avx512f.c', c_args:['-mavx512f', '-mlzcnt', '-mbmi']) config_data.set('HAVE_AVX512F', 1, description: 'Compiler supports AVX-512F') endif if cc.has_argument('-mavx2') and cc.has_argument('-mlzcnt') and cc.has_argument('-mbmi') and get_option('with_avx2') kernel_libs += static_library('kernel_avx2', 'kernel_avx2.c', c_args:['-mavx2', '-mlzcnt', '-mbmi']) config_data.set('HAVE_AVX2', 1, description: 'Compiler supports AVX2') endif if cc.has_argument('-msse3') and get_option('with_sse3') kernel_libs += static_library('kernel_sse3', 'kernel_sse3.c', c_args:['-msse3']) config_data.set('HAVE_SSE3', 1, description: 'Compiler supports SSE 3') endif if ( host_machine.cpu_family() == 'aarch64' or cc.has_argument('-mfpu=neon') ) and get_option('with_neon_opts') neon_args = host_machine.cpu_family() == 'aarch64' ? [] : ['-mfpu=neon'] # Clang additionally enforces that NEON code only be compiled # to target a CPU that actually supports NEON instructions, # so bump the host CPU version for the optionally executed code only. 
if host_machine.cpu_family() == 'arm' and cc.get_id() == 'clang' host_cpu = host_machine.cpu() if host_cpu.contains('4') or host_cpu.contains('5') or host_cpu.contains('6') neon_args += ['-march=armv7-a'] endif endif kernel_libs += static_library('kernel_neon', 'kernel_neon.c', c_args:neon_args) config_data.set('HAVE_NEON', 1, description: 'Compiler supports NEON') endif configure_file( output: 'config-waypipe.h', configuration: config_data, ) lib_waypipe_src = static_library( 'waypipe_src', waypipe_source_files + protocols_src, include_directories: waypipe_includes, link_with: kernel_libs, dependencies: waypipe_deps, ) waypipe_prog = executable( 'waypipe', ['waypipe.c', 'bench.c', 'client.c', 'server.c'], link_with: lib_waypipe_src, install: true ) waypipe-v0.9.1/src/parsing.c000066400000000000000000000410421463133614300157700ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. 
*/ #include "parsing.h" #include "main.h" #include "util.h" #include #include #include #include static const char *get_type_name(struct wp_object *obj) { return obj->type ? obj->type->name : ""; } const char *get_nth_packed_string(const char *pack, int n) { for (int i = 0; i < n; i++) { pack += strlen(pack) + 1; } return pack; } static struct wp_object *tree_rotate_left(struct wp_object *n) { struct wp_object *tmp = n->t_right; n->t_right = tmp->t_left; tmp->t_left = n; return tmp; } static struct wp_object *tree_rotate_right(struct wp_object *n) { struct wp_object *tmp = n->t_left; n->t_left = tmp->t_right; tmp->t_right = n; return tmp; } static void tree_link_right(struct wp_object **cur, struct wp_object **rn) { (*rn)->t_left = *cur; *rn = *cur; *cur = (*cur)->t_left; } static void tree_link_left(struct wp_object **cur, struct wp_object **ln) { (*ln)->t_right = *cur; *ln = *cur; *cur = (*cur)->t_right; } /* Splay operation, following Sleator+Tarjan, 1985 */ static struct wp_object *tree_branch_splay(struct wp_object *root, uint32_t key) { if (!root) { return NULL; } struct wp_object bg = {.t_left = NULL, .t_right = NULL}; struct wp_object *ln = &bg; struct wp_object *rn = &bg; struct wp_object *cur = root; while (key != cur->obj_id) { if (key < cur->obj_id) { if (cur->t_left && key < cur->t_left->obj_id) { cur = tree_rotate_right(cur); } if (!cur->t_left) { break; } tree_link_right(&cur, &rn); } else { if (cur->t_right && key > cur->t_right->obj_id) { cur = tree_rotate_left(cur); } if (!cur->t_right) { break; } tree_link_left(&cur, &ln); } } ln->t_right = cur->t_left; rn->t_left = cur->t_right; cur->t_left = bg.t_right; cur->t_right = bg.t_left; return cur; } static void tree_insert(struct wp_object **tree, struct wp_object *new_node) { /* Reset these, just in case */ new_node->t_left = NULL; new_node->t_right = NULL; struct wp_object *r = *tree; if (!r) { *tree = new_node; return; } r = tree_branch_splay(r, new_node->obj_id); if (new_node->obj_id < r->obj_id) { 
new_node->t_left = r->t_left; new_node->t_right = r; r->t_left = NULL; r = new_node; } else if (new_node->obj_id > r->obj_id) { new_node->t_right = r->t_right; new_node->t_left = r; r->t_right = NULL; r = new_node; } else { /* already in tree, no effect? or do silent override */ } *tree = r; } static void tree_remove(struct wp_object **tree, uint32_t key) { struct wp_object *r = *tree; r = tree_branch_splay(r, key); if (!r || r->obj_id != key) { /* wasn't in tree */ return; } struct wp_object *lbranch = r->t_left; struct wp_object *rbranch = r->t_right; if (!lbranch) { *tree = rbranch; return; } r = tree_branch_splay(lbranch, key); r->t_right = rbranch; *tree = r; } static struct wp_object *tree_lookup(struct wp_object **tree, uint32_t key) { *tree = tree_branch_splay(*tree, key); if (*tree && (*tree)->obj_id == key) { return *tree; } return NULL; } static void tree_clear(struct wp_object **tree, void (*node_free)(struct wp_object *object)) { struct wp_object *root = *tree; while (root) { root = tree_branch_splay(root, 0); struct wp_object *right = root->t_right; root->t_right = NULL; node_free(root); root = right; } *tree = NULL; } void tracker_insert(struct message_tracker *mt, struct wp_object *obj) { struct wp_object *old_obj = tree_lookup(&mt->objtree_root, obj->obj_id); if (old_obj) { /* We /always/ replace the object, to ensure that map * elements are never duplicated and make the deletion * process cause crashes */ if (!old_obj->is_zombie) { wp_error("Replacing object @%u that already exists: old type %s, new type %s", obj->obj_id, get_type_name(old_obj), get_type_name(obj)); } /* Zombie objects (server allocated, client deleted) are * only acknowledged destroyed by the server when they * are replaced. 
*/ tree_remove(&mt->objtree_root, old_obj->obj_id); destroy_wp_object(old_obj); } tree_insert(&mt->objtree_root, obj); } void tracker_replace_existing( struct message_tracker *mt, struct wp_object *new_obj) { tree_remove(&mt->objtree_root, new_obj->obj_id); tree_insert(&mt->objtree_root, new_obj); } void tracker_remove(struct message_tracker *mt, struct wp_object *obj) { tree_remove(&mt->objtree_root, obj->obj_id); } struct wp_object *tracker_get(struct message_tracker *mt, uint32_t id) { return tree_lookup(&mt->objtree_root, id); } struct wp_object *get_object(struct message_tracker *mt, uint32_t id, const struct wp_interface *intf) { (void)intf; return tracker_get(mt, id); } int init_message_tracker(struct message_tracker *mt) { memset(mt, 0, sizeof(*mt)); /* heap allocate this, so we don't need to protect against adversarial * replacement */ struct wp_object *disp = create_wp_object(1, the_display_interface); if (!disp) { return -1; } tracker_insert(mt, disp); return 0; } void cleanup_message_tracker(struct message_tracker *mt) { tree_clear(&mt->objtree_root, destroy_wp_object); } static bool word_has_empty_bytes(uint32_t v) { return ((v & 0xFF) == 0) || ((v & 0xFF00) == 0) || ((v & 0xFF0000) == 0) || ((v & 0xFF000000) == 0); } bool size_check(const struct msg_data *data, const uint32_t *payload, unsigned int true_length, int fd_length) { if (data->n_fds > fd_length) { wp_error("Msg overflow, not enough fds %d > %d", data->n_fds, fd_length); return false; } const uint16_t *gaps = data->gaps; uint32_t pos = 0; for (;; gaps++) { uint16_t g = (*gaps >> 2); uint16_t e = (*gaps & 0x3); pos += g; if (pos > true_length) { wp_error("Msg overflow, not enough words %d > %d", pos, true_length); return false; } switch (e) { case GAP_CODE_STR: { uint32_t x_words = (payload[pos - 1] + 3) / 4; uint32_t end_idx = pos + x_words - 1; if (end_idx < true_length && !word_has_empty_bytes( payload[end_idx])) { wp_error("Msg overflow, string termination %d < %d, %d, %x %d", pos, 
true_length, x_words, payload[end_idx], word_has_empty_bytes( payload[end_idx])); return false; } pos += x_words; } break; case GAP_CODE_ARR: pos += (payload[pos - 1] + 3) / 4; break; case GAP_CODE_OBJ: break; case GAP_CODE_END: return true; } } } /* Given a size-checked request, try to construct all the new objects * that the request requires. Return true if successful, false otherwise. * * The argument `caller_obj` should be the object on which the request was * invoked; this function checks to make sure that object is not * overwritten by accident/corrupt input. */ static bool build_new_objects(const struct msg_data *data, const uint32_t *payload, struct message_tracker *mt, const struct wp_object *caller_obj, int msg_offset) { const uint16_t *gaps = data->gaps; uint32_t pos = 0; uint32_t objno = 0; for (;; gaps++) { uint16_t g = (*gaps >> 2); uint16_t e = (*gaps & 0x3); pos += g; switch (e) { case GAP_CODE_STR: case GAP_CODE_ARR: pos += (payload[pos - 1] + 3) / 4; break; case GAP_CODE_OBJ: { uint32_t new_id = payload[pos - 1]; if (new_id == caller_obj->obj_id) { wp_error("In %s.%s, tried to create object id=%u conflicting with object being called, also id=%u", caller_obj->type->name, get_nth_packed_string( caller_obj->type->msg_names, msg_offset), new_id, caller_obj->obj_id); return false; } struct wp_object *new_obj = create_wp_object( new_id, data->new_objs[objno]); if (!new_obj) { return false; } tracker_insert(mt, new_obj); objno++; } break; case GAP_CODE_END: return true; } } } int peek_message_size(const void *data) { return (int)(((const uint32_t *)data)[1] >> 16); } enum parse_state handle_message(struct globals *g, bool display_side, bool from_client, struct char_window *chars, struct int_window *fds) { bool to_wire = from_client == !display_side; const uint32_t *const header = (uint32_t *)&chars->data[chars->zone_start]; uint32_t obj = header[0]; int len = (int)(header[1] >> 16); int meth = (int)((header[1] << 16) >> 16); if (len != chars->zone_end - 
chars->zone_start) { wp_error("Message length disagreement %d vs %d", len, chars->zone_end - chars->zone_start); return PARSE_ERROR; } // display: object = 0? struct wp_object *objh = tracker_get(&g->tracker, obj); if (!objh || !objh->type) { wp_debug("Unidentified object %d with %s", obj, from_client ? "request" : "event"); return PARSE_UNKNOWN; } /* Identify the message type. Messages sent over the wire are tagged * with the number of file descriptors that are bound to the message. * This incidentally limits the number of fds to 31, and number of * messages per type 2047. */ int num_fds_with_message = -1; if (!to_wire) { num_fds_with_message = meth >> 11; meth = meth & ((1 << 11) - 1); if (num_fds_with_message > 0) { wp_debug("Reading message tagged with %d fds.", num_fds_with_message); } // Strip out the FD counters ((uint32_t *)&chars->data[chars->zone_start])[1] &= ~(uint32_t)((1 << 16) - (1 << 11)); } const struct wp_interface *intf = objh->type; int nmsgs = from_client ? intf->nreq : intf->nevt; if (meth < 0 || meth >= nmsgs) { wp_debug("Unidentified request #%d (of %d) on interface %s", meth, nmsgs, intf->name); return PARSE_UNKNOWN; } int meth_offset = from_client ? 
meth : meth + intf->nreq; const struct msg_data *msg = &intf->msgs[meth_offset]; const uint32_t *payload = header + 2; if (!size_check(msg, payload, (unsigned int)len / 4 - 2, fds->zone_end - fds->zone_start)) { wp_error("Message %x %s@%u.%s parse length overflow", payload, intf->name, objh->obj_id, get_nth_packed_string( intf->msg_names, meth_offset)); return PARSE_UNKNOWN; } if (!build_new_objects(msg, payload, &g->tracker, objh, meth_offset)) { return PARSE_UNKNOWN; } int fds_used = 0; struct context ctx = { .g = g, .tracker = &g->tracker, .obj = objh, .on_display_side = display_side, .drop_this_msg = false, .message = (uint32_t *)&chars->data[chars->zone_start], .message_length = len, .message_available_space = chars->size - chars->zone_start, .fds = fds, .fds_changed = false, }; if (msg->call) { (*msg->call)(&ctx, payload, &fds->data[fds->zone_start], &g->tracker); } if (num_fds_with_message >= 0 && msg->n_fds != num_fds_with_message) { wp_error("Message used %d file descriptors, but was tagged as using %d", msg->n_fds, num_fds_with_message); } fds_used += msg->n_fds; if (objh->obj_id >= 0xff000000 && msg->is_destructor) { /* Unfortunately, the wayland server library does not explicitly * acknowledge the client requested deletion of objects that the * wayland server has created; the client assumes success, * except by creating a new object that overrides the existing * id. * * To correctly vanish all events in flight, we mark the element * as having been zombified; it will only be destroyed when a * new element is created to take its place, since there could * still be e.g. data transfers in the channel, and it's best * that those only vanish when needed. * * Fortunately, wl_registry::bind objects are all client * produced. 
* * TODO: early struct shadow_fd closure for all deletion * requests, with a matching zombie flag to vanish transfers; * * TODO: avert the zombie apocalypse, where the compositor * sends creation notices for a full hierarchy of objects * before it receives the root's .destroy request. */ objh->is_zombie = true; } if (ctx.drop_this_msg) { wp_debug("Dropping %s.%s, with %d fds", intf->name, get_nth_packed_string( intf->msg_names, meth_offset), fds_used); chars->zone_end = chars->zone_start; int nmoved = fds->zone_end - fds->zone_start - fds_used; memmove(&fds->data[fds->zone_start], &fds->data[fds->zone_start + fds_used], (size_t)nmoved * sizeof(int)); fds->zone_end -= fds_used; return PARSE_KNOWN; } if (!ctx.fds_changed) { // Default, autoadvance fd queue, unless handler disagreed. fds->zone_start += fds_used; // Tag message with number of FDs. If the fds were modified // nontrivially, (i.e, ctx.fds_changed is true), tagging is // handler's responsibility if (to_wire) { if (fds_used >= 32 || meth >= 2048) { wp_error("Message used %d>=32 file descriptors or had index %d>=2048. 
FD tagging failed, expect a crash.", fds_used, meth); } if (fds_used > 0) { wp_debug("Tagging message with %d fds.", fds_used); ((uint32_t *)&chars->data[chars->zone_start]) [1] |= (uint32_t)(fds_used << 11); } } } if (fds->zone_end < fds->zone_start) { wp_error("Handler error after %s.%s: fdzs = %d > %d = fdze", intf->name, get_nth_packed_string( intf->msg_names, meth_offset), fds->zone_start, fds->zone_end); } // Move the end, in case there were changes chars->zone_end = chars->zone_start + ctx.message_length; return PARSE_KNOWN; } void parse_and_prune_messages(struct globals *g, bool on_display_side, bool from_client, struct char_window *source_bytes, struct char_window *dest_bytes, struct int_window *fds) { bool anything_unknown = false; struct char_window scan_bytes; scan_bytes.data = dest_bytes->data; scan_bytes.zone_start = dest_bytes->zone_start; scan_bytes.zone_end = dest_bytes->zone_start; scan_bytes.size = dest_bytes->size; DTRACE_PROBE1(waypipe, parse_enter, source_bytes->zone_end - source_bytes->zone_start); for (; source_bytes->zone_start < source_bytes->zone_end;) { if (source_bytes->zone_end - source_bytes->zone_start < 8) { // Not enough remaining bytes to parse the // header wp_debug("Insufficient bytes for header: %d %d", source_bytes->zone_start, source_bytes->zone_end); break; } int msgsz = peek_message_size( &source_bytes->data[source_bytes->zone_start]); if (msgsz % 4 != 0) { wp_debug("Wayland messages lengths must be divisible by 4"); break; } if (source_bytes->zone_start + msgsz > source_bytes->zone_end) { wp_debug("Insufficient bytes"); // Not enough remaining bytes to contain the // message break; } if (msgsz < 8) { wp_debug("Degenerate message, claimed len=%d", msgsz); // Not enough remaining bytes to contain the // message break; } /* We copy the message to the trailing end of the * in-progress buffer; the parser may elect to modify * the message's size */ memcpy(&scan_bytes.data[scan_bytes.zone_start], 
&source_bytes->data[source_bytes->zone_start], (size_t)msgsz); source_bytes->zone_start += msgsz; scan_bytes.zone_end = scan_bytes.zone_start + msgsz; enum parse_state pstate = handle_message(g, on_display_side, from_client, &scan_bytes, fds); if (pstate == PARSE_UNKNOWN || pstate == PARSE_ERROR) { anything_unknown = true; } scan_bytes.zone_start = scan_bytes.zone_end; } dest_bytes->zone_end = scan_bytes.zone_end; if (anything_unknown) { // All-un-owned buffers are assumed to have changed. // (Note that in some cases, a new protocol could imply // a change for an existing buffer; it may make sense to // mark everything dirty, then.) for (struct shadow_fd_link *lcur = g->map.link.l_next, *lnxt = lcur->l_next; lcur != &g->map.link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (!cur->has_owner) { cur->is_dirty = true; } } } DTRACE_PROBE(waypipe, parse_exit); return; } waypipe-v0.9.1/src/parsing.h000066400000000000000000000134301463133614300157750ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_PARSING_H #define WAYPIPE_PARSING_H #include #include struct char_window; struct int_window; struct fd_translation_map; struct main_config; struct wp_interface; /** An object used by the wayland protocol. Specific types may extend * this struct, using the following data as a header */ struct wp_object { struct wp_object *t_left, *t_right; // inline tree implementation const struct wp_interface *type; // Use to lookup the message handler uint32_t obj_id; bool is_zombie; // object deleted but not yet acknowledged remotely }; struct message_tracker { /* Tree containing all objects that are currently alive or zombie */ struct wp_object *objtree_root; /* sequence number to discriminate between wl_buffer objects; object ids * and pointers are not guaranteed to be unique */ uint64_t buffer_seqno; }; /** Context object, to be passed to the protocol handler functions */ struct context { struct globals *const g; struct message_tracker *const tracker; struct wp_object *obj; bool drop_this_msg; /* If true, running as waypipe client, and interfacing with compositor's * buffers */ const bool on_display_side; /* The transferred message can be rewritten in place, and resized, as * long as there is space available. Setting 'fds_changed' will * prevent the fd zone start from autoincrementing after running * the function, which may be useful when injecting messages with fds */ const int message_available_space; uint32_t *const message; int message_length; bool fds_changed; struct int_window *const fds; }; /** Add a protocol object to the list, replacing any preceding object with * the same id. 
*/ void tracker_insert(struct message_tracker *mt, struct wp_object *obj); void tracker_remove(struct message_tracker *mt, struct wp_object *obj); /** Replace an object that is already in the protocol list with a new object * that has the same id; will silently fail if id not present */ void tracker_replace_existing( struct message_tracker *mt, struct wp_object *obj); struct wp_object *tracker_get(struct message_tracker *mt, uint32_t id); int init_message_tracker(struct message_tracker *mt); void cleanup_message_tracker(struct message_tracker *mt); /** Read message size from header; the 8 bytes beyond data must exist */ int peek_message_size(const void *data); /** Generate the second uint32_t field of a message header; this assumes no * fds or equivalently no fd count subfield */ static inline uint32_t message_header_2(uint32_t size_bytes, uint32_t msgno) { return (size_bytes << 16) | msgno; } const char *get_nth_packed_string(const char *pack, int n); enum parse_state { PARSE_KNOWN, PARSE_UNKNOWN, PARSE_ERROR }; /** * The return value is false iff the given message should be dropped. * The flag `unidentified_changes` is set to true if the message does * not correspond to a known protocol. * * The message data payload may be modified and increased in size. * * The window `chars` should start at the message start, end * at its end, and indicate remaining space. * The window `fds` should start at the next fd in the queue, ends * with the last. * * The start and end of `chars` will be moved to the new end of the message. * The end of `fds` may be moved if any fds are inserted or discarded. * The start of fds will be moved, depending on how many fds were consumed. 
*/ enum parse_state handle_message(struct globals *g, bool on_display_side, bool from_client, struct char_window *chars, struct int_window *fds); /** * Given a set of messages and fds, parse the messages, and if indicated * by parsing logic, compact the message buffer by removing selected * messages, or edit message contents. * * The `source_bytes` window indicates the range of unread data; it's * zone start point will be advanced. The 'dest_bytes' window indicates * the range of written data; it's zone end point will be advanced. * * The file descriptor queue `fds` will have its start advanced, leaving only * file descriptors that have not yet been read. Further edits may be made * to inject new file descriptors. */ void parse_and_prune_messages(struct globals *g, bool on_display_side, bool from_client, struct char_window *source_bytes, struct char_window *dest_bytes, struct int_window *fds); // handlers.c /** Create a new Wayland protocol object of the given type; some types * produce structs extending from wp_object */ struct wp_object *create_wp_object( uint32_t it, const struct wp_interface *type); /** Type-specific destruction routines, also dereferencing linked shadow_fds */ void destroy_wp_object(struct wp_object *object); extern const struct wp_interface *the_display_interface; #endif // WAYPIPE_PARSING_H waypipe-v0.9.1/src/platform.c000066400000000000000000000113761463133614300161600ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including 
the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef _GNU_SOURCE #define _GNU_SOURCE #endif #include "config-waypipe.h" #include #include #include #include #include #include #include #include #include #include #if defined(__linux__) && defined(__arm__) #include #include #elif defined(__FreeBSD__) && defined(__arm__) #include #endif #if defined(__linux__) /* memfd_create was introduced in glibc 2.27 */ #if !defined(__GLIBC__) || (__GLIBC__ >= 2 && __GLIBC_MINOR__ >= 27) #define HAS_MEMFD 1 #endif #endif #if defined(__linux__) #define HAS_O_PATH 1 #endif int create_anon_file(void) { int new_fileno; #ifdef HAS_MEMFD new_fileno = memfd_create("waypipe", 0); #elif defined(SHM_ANON) new_fileno = shm_open(SHM_ANON, O_RDWR, 0600); #else // Fallback code. 
// Should not be used from multiple threads
	static int counter = 0;
	int pid = getpid();
	counter++;
	char tmp_name[64];
	sprintf(tmp_name, "/waypipe%d-data_%d", pid, counter);
	new_fileno = shm_open(tmp_name, O_EXCL | O_RDWR | O_CREAT, 0644);
	if (new_fileno == -1) {
		return -1;
	}
	(void)shm_unlink(tmp_name);
#endif
	return new_fileno;
}

int get_hardware_thread_count(void)
{
	return (int)sysconf(_SC_NPROCESSORS_ONLN);
}

int get_iov_max(void) { return (int)sysconf(_SC_IOV_MAX); }

#ifdef HAVE_NEON
bool neon_available(void)
{
	/* The actual methods are platform-dependent */
#if defined(__linux__) && defined(__arm__)
	return (getauxval(AT_HWCAP) & HWCAP_NEON) != 0;
#elif defined(__FreeBSD__) && defined(__arm__)
	unsigned long hwcap = 0;
	elf_aux_info(AT_HWCAP, &hwcap, sizeof(hwcap));
	return (hwcap & HWCAP_NEON) != 0;
#endif
	return true;
}
#endif

static void *align_ptr(void *ptr, size_t alignment)
{
	return (uint8_t *)ptr + ((alignment - (uintptr_t)ptr) % alignment);
}

void *zeroed_aligned_alloc(size_t bytes, size_t alignment, void **handle)
{
	if (*handle) {
		/* require a clean handle */
		return NULL;
	}
	*handle = calloc(bytes + alignment - 1, 1);
	return align_ptr(*handle, alignment);
}

void *zeroed_aligned_realloc(size_t old_size_bytes, size_t new_size_bytes,
		size_t alignment, void *data, void **handle)
{
	/* warning: this might copy a lot of data */
	if (new_size_bytes <= 2 * old_size_bytes) {
		void *old_handle = *handle;
		ptrdiff_t old_offset = (uint8_t *)data - (uint8_t *)old_handle;
		void *new_handle = realloc(
				old_handle, new_size_bytes + alignment - 1);
		if (!new_handle) {
			return NULL;
		}
		void *new_data = align_ptr(new_handle, alignment);
		ptrdiff_t new_offset =
				(uint8_t *)new_data - (uint8_t *)new_handle;
		if (old_offset != new_offset) {
			/* realloc broke the alignment offset; shift the data
			 * relative to the allocation handle */
			memmove((uint8_t *)new_handle + new_offset,
					(uint8_t *)new_handle + old_offset,
					new_size_bytes > old_size_bytes
							? old_size_bytes
							: new_size_bytes);
		}
		if (new_size_bytes > old_size_bytes) {
			memset((uint8_t *)new_data + old_size_bytes, 0,
					new_size_bytes - old_size_bytes);
		}
		*handle = new_handle;
		return new_data;
	} else {
		void *new_handle = calloc(new_size_bytes + alignment - 1, 1);
		if (!new_handle) {
			return NULL;
		}
		void *new_data = align_ptr(new_handle, alignment);
		memcpy(new_data, data,
				new_size_bytes > old_size_bytes
						? old_size_bytes
						: new_size_bytes);
		free(*handle);
		*handle = new_handle;
		return new_data;
	}
}

void zeroed_aligned_free(void *data, void **handle)
{
	(void)data;
	free(*handle);
	*handle = NULL;
}

int open_folder(const char *name)
{
	const char *path = name[0] ? name : ".";
#ifdef HAS_O_PATH
	return open(path, O_PATH);
#else
	return open(path, O_RDONLY | O_DIRECTORY);
#endif
}

waypipe-v0.9.1/src/server.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include static inline uint32_t conntoken_header(const struct main_config *config, bool reconnectable, bool update) { uint32_t header = (WAYPIPE_PROTOCOL_VERSION << 16) | CONN_FIXED_BIT; header |= (update ? CONN_UPDATE_BIT : 0); header |= (reconnectable ? CONN_RECONNECTABLE_BIT : 0); // TODO: stop compile gating the 'COMP' enum entries #ifdef HAS_LZ4 header |= (config->compression == COMP_LZ4 ? CONN_LZ4_COMPRESSION : 0); #endif #ifdef HAS_ZSTD header |= (config->compression == COMP_ZSTD ? CONN_ZSTD_COMPRESSION : 0); #endif if (config->compression == COMP_NONE) { header |= CONN_NO_COMPRESSION; } if (config->video_if_possible) { header |= (config->video_fmt == VIDEO_H264 ? CONN_H264_VIDEO : 0); header |= (config->video_fmt == VIDEO_VP9 ? CONN_VP9_VIDEO : 0); header |= (config->video_fmt == VIDEO_AV1 ? CONN_AV1_VIDEO : 0); } else { header |= CONN_NO_VIDEO; } #ifdef HAS_DMABUF header |= (config->no_gpu ? CONN_NO_DMABUF_SUPPORT : 0); #else header |= CONN_NO_DMABUF_SUPPORT; #endif return header; } /** Fill the key for a token using random data with a very low accidental * collision probability. 
Whatever data was in the key before will be shuffled * in.*/ static void fill_random_key(struct connection_token *token) { token->key[0] *= 13; token->key[1] *= 17; token->key[2] *= 29; struct timespec tp; clock_gettime(CLOCK_REALTIME, &tp); token->key[0] += (uint32_t)getpid(); token->key[1] += 1 + (uint32_t)tp.tv_sec; token->key[2] += 2 + (uint32_t)tp.tv_nsec; int devrand = open("/dev/urandom", O_RDONLY | O_NOCTTY); if (devrand != -1) { uint32_t tmp[3]; errno = 0; (void)read(devrand, tmp, sizeof(tmp)); checked_close(devrand); token->key[0] ^= tmp[0]; token->key[1] ^= tmp[1]; token->key[2] ^= tmp[2]; } } static int read_path(int control_pipe, char *path, size_t path_space) { /* It is unlikely that a signal would interrupt a read of a ~100 byte * sockaddr; and if used properly, the control pipe should never be * sent much more data than that */ ssize_t amt = read(control_pipe, path, path_space - 1); if (amt == -1) { wp_error("Failed to read from control pipe: %s", strerror(errno)); return -1; } else if (amt == (ssize_t)path_space - 1) { wp_error("Too much data sent to control pipe\n"); return -1; } path[amt] = '\0'; return 0; } static int run_single_server_reconnector(int cwd_fd, int control_pipe, int linkfd, const struct connection_token *flagged_token) { int retcode = EXIT_SUCCESS; while (!shutdown_flag) { struct pollfd pf[2]; pf[0].fd = control_pipe; pf[0].events = POLLIN; pf[0].revents = 0; pf[1].fd = linkfd; pf[1].events = 0; pf[1].revents = 0; int r = poll(pf, 2, -1); if (r == -1 && errno == EINTR) { continue; } else if (r == -1) { retcode = EXIT_FAILURE; break; } else if (r == 0) { // Nothing to read continue; } if (pf[1].revents & POLLHUP) { /* Hang up, main thread has closed its link */ break; } if (pf[0].revents & POLLIN) { char sockaddr_folder[512]; if (read_path(control_pipe, sockaddr_folder, sizeof(sockaddr_folder)) == -1) { continue; } struct sockaddr_un sockaddr_filename = {0}; if (split_socket_path(sockaddr_folder, &sockaddr_filename)) { continue; } 
struct socket_path sockaddr_path = { .filename = &sockaddr_filename, .folder = sockaddr_folder, }; int new_conn = -1; if (connect_to_socket(cwd_fd, sockaddr_path, NULL, &new_conn) == -1) { wp_error("Socket path \"%s\"/\"%s\" was invalid: %s", sockaddr_path.folder, sockaddr_path.filename ->sun_path, strerror(errno)); /* Socket path was invalid */ continue; } if (write(new_conn, flagged_token, sizeof(*flagged_token)) != sizeof(*flagged_token)) { wp_error("Failed to write to new connection: %s", strerror(errno)); checked_close(new_conn); continue; } if (send_one_fd(linkfd, new_conn) == -1) { wp_error("Failed to send new connection to subprocess: %s", strerror(errno)); } checked_close(new_conn); } } checked_close(control_pipe); checked_close(linkfd); return retcode; } static int run_single_server(int cwd_fd, int control_pipe, struct socket_path socket_path, bool unlink_at_end, int server_link, const struct main_config *config) { int chanfd = -1, chanfolder_fd = -1; if (connect_to_socket(cwd_fd, socket_path, &chanfolder_fd, &chanfd) == -1) { goto fail_srv; } /* Only unlink the socket if it actually was a socket */ if (unlink_at_end) { unlink_at_folder(cwd_fd, chanfolder_fd, socket_path.folder, socket_path.filename->sun_path); } checked_close(chanfolder_fd); bool reconnectable = control_pipe != -1; struct connection_token token; memset(&token, 0, sizeof(token)); fill_random_key(&token); token.header = conntoken_header(config, reconnectable, false); wp_debug("Connection token header: %08" PRIx32, token.header); if (write(chanfd, &token, sizeof(token)) != sizeof(token)) { wp_error("Failed to write connection token to socket"); goto fail_cfd; } int linkfds[2] = {-1, -1}; if (control_pipe != -1) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linkfds) == -1) { wp_error("Failed to create socketpair: %s", strerror(errno)); goto fail_cfd; } pid_t reco_pid = fork(); if (reco_pid == -1) { wp_error("Fork failure: %s", strerror(errno)); checked_close(linkfds[0]); 
checked_close(linkfds[1]); goto fail_cfd; } else if (reco_pid == 0) { checked_close(chanfd); checked_close(linkfds[0]); checked_close(server_link); /* Further uses of the token will be to reconnect */ token.header |= CONN_UPDATE_BIT; int rc = run_single_server_reconnector(cwd_fd, control_pipe, linkfds[1], &token); exit(rc); } checked_close(control_pipe); checked_close(linkfds[1]); } int ret = main_interface_loop( chanfd, server_link, linkfds[0], config, false); return ret; fail_cfd: checked_close(chanfd); fail_srv: checked_close(server_link); return EXIT_FAILURE; } static int handle_new_server_connection(int cwd_fd, struct socket_path current_sockaddr, int control_pipe, int wdisplay_socket, int appfd, struct conn_map *connmap, const struct main_config *config, const struct connection_token *new_token) { bool reconnectable = control_pipe != -1; if (reconnectable && buf_ensure_size(connmap->count + 1, sizeof(struct conn_addr), &connmap->size, (void **)&connmap->data) == -1) { wp_error("Failed to allocate memory to track new connection"); goto fail_appfd; } int chanfd = -1; if (!config->vsock) { if (connect_to_socket(cwd_fd, current_sockaddr, NULL, &chanfd) == -1) { goto fail_appfd; } } else { #ifdef HAS_VSOCK if (connect_to_vsock(config->vsock_port, config->vsock_cid, config->vsock_to_host, &chanfd) == -1) { goto fail_appfd; } #endif } if (write(chanfd, new_token, sizeof(*new_token)) != sizeof(*new_token)) { wp_error("Failed to write connection token: %s", strerror(errno)); goto fail_chanfd; } int linksocks[2] = {-1, -1}; if (reconnectable) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, linksocks) == -1) { wp_error("Socketpair for process link failed: %s", strerror(errno)); goto fail_chanfd; } } pid_t npid = fork(); if (npid == 0) { // Run forked process, with the only shared state being the // new channel socket checked_close(wdisplay_socket); if (reconnectable) { checked_close(control_pipe); checked_close(linksocks[0]); } for (int i = 0; i < connmap->count; i++) { if 
(connmap->data[i].linkfd != -1) { checked_close(connmap->data[i].linkfd); } } int rc = main_interface_loop( chanfd, appfd, linksocks[1], config, false); check_unclosed_fds(); exit(rc); } else if (npid == -1) { wp_error("Fork failure: %s", strerror(errno)); if (reconnectable) { checked_close(linksocks[0]); checked_close(linksocks[1]); } goto fail_chanfd; } // This process no longer needs the application connection checked_close(chanfd); checked_close(appfd); if (reconnectable) { checked_close(linksocks[1]); connmap->data[connmap->count++] = (struct conn_addr){ .token = *new_token, .pid = npid, .linkfd = linksocks[0], }; } return 0; fail_chanfd: checked_close(chanfd); fail_appfd: checked_close(appfd); return -1; } static int update_connections(int cwd_fd, struct socket_path new_sock, int new_sock_folder, struct conn_map *connmap) { /* TODO: what happens if there's a partial failure? */ for (int i = 0; i < connmap->count; i++) { int chanfd = -1; if (connect_to_socket_at_folder(cwd_fd, new_sock_folder, new_sock.filename, &chanfd) == -1) { wp_error("Failed to connect to socket at \"%s\"/\"%s\": %s", new_sock.folder, new_sock.filename->sun_path, strerror(errno)); return -1; } struct connection_token flagged_token = connmap->data[i].token; flagged_token.header |= CONN_UPDATE_BIT; if (write(chanfd, &flagged_token, sizeof(flagged_token)) != sizeof(flagged_token)) { wp_error("Failed to write token to replacement connection: %s", strerror(errno)); checked_close(chanfd); continue; } /* ignore return value -- errors like the other process having * closed the connection do not count as this processes' problem */ (void)send_one_fd(connmap->data[i].linkfd, chanfd); checked_close(chanfd); } return 0; } static int run_multi_server(int cwd_fd, int control_pipe, struct socket_path socket_addr, bool unlink_at_end, int wdisplay_socket, const struct main_config *config, pid_t *child_pid) { struct conn_map connmap = {.data = NULL, .count = 0, .size = 0}; struct sockaddr_un 
current_sockaddr_filename = *socket_addr.filename; char current_sockaddr_folder[256] = {0}; int retcode = EXIT_SUCCESS; struct socket_path current_sockaddr = socket_addr; // TODO: grab the folder, on startup; then connectat within the folder // we do not need to remember the folder name, thankfully struct pollfd pfs[2]; pfs[0].fd = wdisplay_socket; pfs[0].events = POLLIN; pfs[0].revents = 0; pfs[1].fd = control_pipe; pfs[1].events = POLLIN; pfs[1].revents = 0; struct connection_token token; memset(&token, 0, sizeof(token)); token.header = conntoken_header(config, control_pipe != -1, false); wp_debug("Connection token header: %08" PRIx32, token.header); int current_folder_fd = open_folder(current_sockaddr.folder); if (current_folder_fd == -1) { wp_error("Failed to open folder '%s' for connection socket: %s", current_sockaddr.folder, strerror(errno)); retcode = EXIT_FAILURE; shutdown_flag = true; } while (!shutdown_flag) { int status = -1; if (wait_for_pid_and_clean( child_pid, &status, WNOHANG, &connmap)) { wp_debug("Child program has died, exiting"); retcode = WEXITSTATUS(status); break; } int r = poll(pfs, 1 + (control_pipe != -1), -1); if (r == -1) { if (errno == EINTR) { // If SIGCHLD, we will check the child. 
// If SIGINT, the loop ends continue; } wp_error("Poll failed: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } else if (r == 0) { continue; } if (pfs[1].revents & POLLIN) { struct sockaddr_un new_sockaddr_filename = {0}; char new_sockaddr_folder[sizeof( current_sockaddr_folder)] = {0}; if (read_path(control_pipe, new_sockaddr_folder, sizeof(new_sockaddr_folder)) == -1) { goto end_new_path; } if (split_socket_path(new_sockaddr_folder, &new_sockaddr_filename) == -1) { goto end_new_path; } struct socket_path new_sockaddr = { .filename = &new_sockaddr_filename, .folder = new_sockaddr_folder, }; int new_folder_fd = open_folder(new_sockaddr_folder); if (new_folder_fd == -1) { wp_error("Failed to open folder '%s' for proposed reconnection socket: %s", new_sockaddr_folder, strerror(errno)); goto end_new_path; } if (update_connections(cwd_fd, new_sockaddr, new_folder_fd, &connmap) == -1) { /* failed to connect to the new socket */ goto end_new_path; } bool same_path = !strcmp(current_sockaddr.filename ->sun_path, new_sockaddr.filename ->sun_path) && files_equiv(current_folder_fd, new_folder_fd); /* If switching connections succeeded, adopt the new * socket. (We avoid deleting if the old socket was * replaced by a new socket at the same name in the * same folder.) 
*/ if (unlink_at_end && !same_path) { unlink_at_folder(cwd_fd, current_folder_fd, current_sockaddr.folder, current_sockaddr.filename ->sun_path); } checked_close(current_folder_fd); current_folder_fd = new_folder_fd; memcpy(current_sockaddr_folder, new_sockaddr_folder, sizeof(current_sockaddr_folder)); memcpy(¤t_sockaddr_filename, &new_sockaddr_filename, sizeof(current_sockaddr_filename)); current_sockaddr = (struct socket_path){ .filename = ¤t_sockaddr_filename, .folder = current_sockaddr_folder, }; end_new_path:; } if (pfs[0].revents & POLLIN) { int appfd = accept(wdisplay_socket, NULL, NULL); if (appfd == -1) { if (errno == EAGAIN || errno == EWOULDBLOCK) { // The wakeup may have been // spurious continue; } wp_error("Connection failure: %s", strerror(errno)); retcode = EXIT_FAILURE; break; } else { wp_debug("New connection to server"); fill_random_key(&token); if (handle_new_server_connection(cwd_fd, current_sockaddr, control_pipe, wdisplay_socket, appfd, &connmap, config, &token) == -1) { retcode = EXIT_FAILURE; break; } } } } if (unlink_at_end) { unlink_at_folder(cwd_fd, current_folder_fd, current_sockaddr.folder, current_sockaddr.filename->sun_path); } checked_close(wdisplay_socket); if (control_pipe != -1) { checked_close(control_pipe); } checked_close(current_folder_fd); for (int i = 0; i < connmap.count; i++) { checked_close(connmap.data[i].linkfd); } free(connmap.data); return retcode; } /* requires >=256 byte shell/shellname buffers */ static void setup_login_shell_command(char shell[static 256], char shellname[static 256], bool login_shell) { strcpy(shellname, "-sh"); strcpy(shell, "/bin/sh"); // Select the preferred shell on the system char *shell_env = getenv("SHELL"); if (!shell_env) { return; } int len = (int)strlen(shell_env); if (len >= 254) { wp_error("Environment variable $SHELL is too long at %d bytes, falling back to %s", len, shell); return; } strcpy(shell, shell_env); if (login_shell) { /* Create a login shell. 
The convention for this is to
		 * prefix the name of the shell with a single hyphen */
		int start = len;
		for (; start-- > 0;) {
			if (shell[start] == '/') {
				start++;
				break;
			}
		}
		shellname[0] = '-';
		strcpy(shellname + 1, shell + start);
	} else {
		strcpy(shellname, shell);
	}
}

extern char **environ;

int run_server(int cwd_fd, struct socket_path socket_path,
		const char *display_suffix, const char *control_path,
		const struct main_config *config, bool oneshot,
		bool unlink_at_end, char *const app_argv[],
		bool login_shell_if_backup)
{
	wp_debug("I'm a server connecting on %s x %s, running: %s",
			socket_path.folder, socket_path.filename->sun_path,
			app_argv[0]);
	wp_debug("version: %s", WAYPIPE_VERSION);

	struct sockaddr_un display_path;
	memset(&display_path, 0, sizeof(display_path));
	int display_folder_fd = -1;

	// Setup connection to program
	int wayland_socket = -1, server_link = -1, wdisplay_socket = -1;
	if (oneshot) {
		int csockpair[2];
		if (socketpair(AF_UNIX, SOCK_STREAM, 0, csockpair) == -1) {
			wp_error("Socketpair failed: %s", strerror(errno));
			return EXIT_FAILURE;
		}
		wayland_socket = csockpair[1];
		server_link = csockpair[0];
		/* only set cloexec for `server_link`, as `wayland_socket`
		 * is meant to be inherited */
		if (set_cloexec(server_link) == -1) {
			close(wayland_socket);
			close(server_link);
			return EXIT_FAILURE;
		}
	} else {
		// Bind a socket for WAYLAND_DISPLAY, and listen
		int nmaxclients = 128;
		char display_folder[512];
		memset(&display_folder, 0, sizeof(display_folder));
		if (display_suffix[0] == '/') {
			if (strlen(display_suffix) >= sizeof(display_folder)) {
				wp_error("Absolute path '%s' specified for WAYLAND_DISPLAY is too long (%zu bytes >= %zu)",
						display_suffix,
						strlen(display_suffix),
						sizeof(display_folder));
				return EXIT_FAILURE;
			}
			strcpy(display_folder, display_suffix);
		} else {
			const char *xdg_dir = getenv("XDG_RUNTIME_DIR");
			if (!xdg_dir) {
				wp_error("Env. var XDG_RUNTIME_DIR not available, cannot place display socket for WAYLAND_DISPLAY=\"%s\"",
						display_suffix);
				return EXIT_FAILURE;
			}
			if (multi_strcat(display_folder,
					sizeof(display_folder), xdg_dir, "/",
					display_suffix, NULL) == 0) {
				wp_error("Path '%s'/'%s' specified for WAYLAND_DISPLAY is too long (%zu bytes >= %zu)",
						xdg_dir, display_suffix,
						strlen(xdg_dir) + 1 +
								strlen(display_suffix),
						sizeof(display_folder));
				return EXIT_FAILURE;
			}
		}

		if (split_socket_path(display_folder, &display_path) == -1) {
			return EXIT_FAILURE;
		}
		struct socket_path path;
		path.filename = &display_path;
		path.folder = display_folder;
		if (setup_nb_socket(cwd_fd, path, nmaxclients,
				&display_folder_fd, &wdisplay_socket) == -1) {
			// Error messages already made
			return EXIT_FAILURE;
		}
		if (set_cloexec(display_folder_fd) == -1 ||
				set_cloexec(wdisplay_socket) == -1) {
			close(display_folder_fd);
			close(wdisplay_socket);
			return EXIT_FAILURE;
		}
	}

	/* Set env variables for child process */
	if (oneshot) {
		char bufs2[16];
		sprintf(bufs2, "%d", wayland_socket);
		// Provide the other socket in the pair to child
		// application
		unsetenv("WAYLAND_DISPLAY");
		setenv("WAYLAND_SOCKET", bufs2, 1);
	} else {
		// Since Wayland 1.15, absolute paths are supported in
		// WAYLAND_DISPLAY
		unsetenv("WAYLAND_SOCKET");
		setenv("WAYLAND_DISPLAY", display_suffix, 1);
	}

	// Launch program.
	pid_t pid = -1;
	{
		const char *application = app_argv[0];
		char shell[256];
		char shellname[256];
		char *shellcmd[2] = {shellname, NULL};
		if (!application) {
			setup_login_shell_command(shell, shellname,
					login_shell_if_backup);
			application = shell;
			app_argv = shellcmd;
		}
		int err = posix_spawnp(&pid, application, NULL, NULL,
				app_argv, environ);
		if (err) {
			wp_error("Spawn failure for '%s': %s", application,
					strerror(err));
			if (!oneshot) {
				unlink_at_folder(cwd_fd, display_folder_fd,
						NULL, display_path.sun_path);
				checked_close(display_folder_fd);
				checked_close(wdisplay_socket);
			} else {
				checked_close(wayland_socket);
				checked_close(server_link);
			}
			return EXIT_FAILURE;
		}
	}
	/* Drop any env variables that were set for the child process */
	unsetenv("WAYLAND_SOCKET");
	unsetenv("WAYLAND_DISPLAY");
	if (oneshot) {
		// We no longer need to see this side
		checked_close(wayland_socket);
	}

	int control_pipe = -1;
	if (control_path) {
		if (mkfifo(control_path, 0644) == -1) {
			wp_error("Failed to make a control FIFO at %s: %s",
					control_path, strerror(errno));
		} else {
			/* To prevent getting POLLHUP spam after the first user
			 * closes this pipe, open both read and write ends of
			 * the named pipe */
			control_pipe = open(control_path,
					O_RDWR | O_NONBLOCK | O_NOCTTY);
			if (control_pipe == -1) {
				wp_error("Failed to open created FIFO %s for reading: %s",
						control_path,
						strerror(errno));
			}
		}
	}

	int retcode = EXIT_SUCCESS;
	/* These functions will close server_link, wdisplay_socket, and
	 * control_pipe */
	if (oneshot) {
		retcode = run_single_server(cwd_fd, control_pipe, socket_path,
				unlink_at_end, server_link, config);
	} else {
		retcode = run_multi_server(cwd_fd, control_pipe, socket_path,
				unlink_at_end, wdisplay_socket, config, &pid);
	}
	if (control_pipe != -1) {
		unlink(control_path);
	}
	if (!oneshot) {
		unlink_at_folder(cwd_fd, display_folder_fd, NULL,
				display_path.sun_path);
		checked_close(display_folder_fd);
	}

	// Wait for child processes to exit
	wp_debug("Waiting for child handlers and program");

	int status = -1;
	if
	(wait_for_pid_and_clean(&pid, &status,
			shutdown_flag ? WNOHANG : 0, NULL)) {
		wp_debug("Child program has died, exiting");
		retcode = WEXITSTATUS(status);
	}
	wp_debug("Program ended");
	return retcode;
}

waypipe-v0.9.1/src/shadow.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
*/ #include "shadow.h" #include #include #include #include #include #include #include #include #include #include #include #ifdef HAS_LZ4 #include #include #endif #ifdef HAS_ZSTD #include #endif struct shadow_fd *get_shadow_for_local_fd( struct fd_translation_map *map, int lfd) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->fd_local == lfd) { return cur; } } return NULL; } struct shadow_fd *get_shadow_for_rid(struct fd_translation_map *map, int rid) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->remote_id == rid) { return cur; } } return NULL; } static void destroy_unlinked_sfd(struct shadow_fd *sfd) { wp_debug("Destroying %s RID=%d", fdcat_to_str(sfd->type), sfd->remote_id); /* video must be cleaned up before any buffers that it may rely on */ destroy_video_data(sfd); /* free all accumulated damage records */ reset_damage(&sfd->damage); free(sfd->damage_task_interval_store); if (sfd->type == FDC_FILE) { munmap(sfd->mem_local, sfd->buffer_size); zeroed_aligned_free(sfd->mem_mirror, &sfd->mem_mirror_handle); } else if (sfd->type == FDC_DMABUF || sfd->type == FDC_DMAVID_IR || sfd->type == FDC_DMAVID_IW) { if (sfd->dmabuf_map_handle) { unmap_dmabuf(sfd->dmabuf_bo, sfd->dmabuf_map_handle); } destroy_dmabuf(sfd->dmabuf_bo); zeroed_aligned_free(sfd->mem_mirror, &sfd->mem_mirror_handle); if (sfd->dmabuf_warped_handle) { zeroed_aligned_free(sfd->dmabuf_warped, &sfd->dmabuf_warped_handle); } } else if (sfd->type == FDC_PIPE) { if (sfd->pipe.fd != sfd->fd_local && sfd->pipe.fd != -1) { checked_close(sfd->pipe.fd); } free(sfd->pipe.recv.data); free(sfd->pipe.send.data); } if (sfd->fd_local != -1) { checked_close(sfd->fd_local); } free(sfd); } static void cleanup_thread_local(struct 
thread_data *data) { #ifdef HAS_ZSTD ZSTD_freeCCtx(data->comp_ctx.zstd_ccontext); ZSTD_freeDCtx(data->comp_ctx.zstd_dcontext); #endif #ifdef HAS_LZ4 free(data->comp_ctx.lz4_extstate); #endif free(data->tmp_buf); } static void setup_thread_local(struct thread_data *data, enum compression_mode mode, int compression_level) { struct comp_ctx *ctx = &data->comp_ctx; ctx->zstd_ccontext = NULL; ctx->zstd_dcontext = NULL; ctx->lz4_extstate = NULL; #ifdef HAS_LZ4 if (mode == COMP_LZ4) { /* Like LZ4Frame, integer codes indicate compression level. * Negative numbers are acceleration, positive use the HC * routines */ if (compression_level <= 0) { ctx->lz4_extstate = malloc((size_t)LZ4_sizeofState()); } else { ctx->lz4_extstate = malloc((size_t)LZ4_sizeofStateHC()); } } #endif #ifdef HAS_ZSTD if (mode == COMP_ZSTD) { ctx->zstd_ccontext = ZSTD_createCCtx(); ctx->zstd_dcontext = ZSTD_createDCtx(); } #endif (void)mode; (void)compression_level; data->tmp_buf = NULL; data->tmp_size = 0; } void cleanup_translation_map(struct fd_translation_map *map) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; destroy_unlinked_sfd(cur); } map->link.l_next = &map->link; map->link.l_prev = &map->link; } bool destroy_shadow_if_unreferenced(struct shadow_fd *sfd) { bool autodelete = sfd->has_owner; if (sfd->type == FDC_PIPE && !sfd->pipe.can_read && !sfd->pipe.can_write && !sfd->pipe.remote_can_read && !sfd->pipe.remote_can_write) { autodelete = true; } if (sfd->refcount.protocol == 0 && sfd->refcount.transfer == 0 && sfd->refcount.compute == false && autodelete) { /* remove shadowfd from list */ sfd->link.l_prev->l_next = sfd->link.l_next; sfd->link.l_next->l_prev = sfd->link.l_prev; sfd->link.l_next = NULL; sfd->link.l_prev = NULL; destroy_unlinked_sfd(sfd); return true; } else if (sfd->refcount.protocol < 0 || sfd->refcount.transfer < 0) { wp_error("Negative 
refcount for rid=%d: %d protocol references, %d transfer references", sfd->remote_id, sfd->refcount.protocol, sfd->refcount.transfer); } return false; } static void *worker_thread_main(void *arg); void setup_translation_map(struct fd_translation_map *map, bool display_side) { map->local_sign = display_side ? -1 : 1; map->link.l_next = &map->link; map->link.l_prev = &map->link; map->max_local_id = 1; } static void shutdown_threads(struct thread_pool *pool) { pthread_mutex_lock(&pool->work_mutex); free(pool->stack); struct task_data task; memset(&task, 0, sizeof(task)); task.type = TASK_STOP; pool->stack = &task; pool->stack_count = 1; pool->stack_size = 1; pool->do_work = true; pthread_cond_broadcast(&pool->work_cond); pthread_mutex_unlock(&pool->work_mutex); if (pool->threads) { for (int i = 1; i < pool->nthreads; i++) { if (pool->threads[i].thread) { pthread_join(pool->threads[i].thread, NULL); } } } pool->stack = NULL; } int setup_thread_pool(struct thread_pool *pool, enum compression_mode compression, int comp_level, int n_threads) { memset(pool, 0, sizeof(struct thread_pool)); pool->diff_func = get_diff_function( DIFF_FASTEST, &pool->diff_alignment_bits); pool->compression = compression; pool->compression_level = comp_level; if (n_threads <= 0) { // platform dependent int nt = get_hardware_thread_count(); pool->nthreads = max(nt / 2, 1); } else { pool->nthreads = n_threads; } pool->stack_size = 0; pool->stack_count = 0; pool->stack = NULL; pool->tasks_in_progress = 0; pool->do_work = true; /* Thread #0 is the 'main' thread */ pool->threads = calloc( (size_t)pool->nthreads, sizeof(struct thread_data)); if (!pool->threads) { wp_error("Failed to allocate list of thread data"); return -1; } int ret; ret = pthread_mutex_init(&pool->work_mutex, NULL); if (ret) { wp_error("Mutex creation failed: %s", strerror(ret)); return -1; } ret = pthread_cond_init(&pool->work_cond, NULL); if (ret) { wp_error("Condition variable creation failed: %s", strerror(ret)); return -1; } 
pool->threads[0].pool = pool; pool->threads[0].thread = pthread_self(); for (int i = 1; i < pool->nthreads; i++) { pool->threads[i].pool = pool; ret = pthread_create(&pool->threads[i].thread, NULL, worker_thread_main, &pool->threads[i]); if (ret) { wp_error("Thread creation failed: %s", strerror(ret)); // Stop making new threads, but keep what is there pool->nthreads = i; break; } } /* Setup thread local data from the main thread, to avoid requiring * the worker threads to allocate pools, for a few fixed buffers */ for (int i = 0; i < pool->nthreads; i++) { setup_thread_local(&pool->threads[i], compression, comp_level); } int fds[2]; if (pipe(fds) == -1) { wp_error("Failed to create pipe: %s", strerror(errno)); } pool->selfpipe_r = fds[0]; pool->selfpipe_w = fds[1]; if (set_nonblocking(pool->selfpipe_r) == -1) { wp_error("Failed to make read end of pipe nonblocking: %s", strerror(errno)); } return 0; } void cleanup_thread_pool(struct thread_pool *pool) { shutdown_threads(pool); if (pool->threads) { for (int i = 0; i < pool->nthreads; i++) { cleanup_thread_local(&pool->threads[i]); } } pthread_mutex_destroy(&pool->work_mutex); pthread_cond_destroy(&pool->work_cond); free(pool->threads); free(pool->stack); checked_close(pool->selfpipe_r); checked_close(pool->selfpipe_w); } const char *fdcat_to_str(enum fdcat cat) { switch (cat) { case FDC_UNKNOWN: return "FDC_UNKNOWN"; case FDC_FILE: return "FDC_FILE"; case FDC_PIPE: return "FDC_PIPE"; case FDC_DMABUF: return "FDC_DMABUF"; case FDC_DMAVID_IR: return "FDC_DMAVID_IR"; case FDC_DMAVID_IW: return "FDC_DMAVID_IW"; } return ""; } const char *compression_mode_to_str(enum compression_mode mode) { switch (mode) { case COMP_NONE: return "NONE"; case COMP_LZ4: return "LZ4"; case COMP_ZSTD: return "ZSTD"; default: return ""; } } enum fdcat get_fd_type(int fd, size_t *size) { struct stat fsdata; memset(&fsdata, 0, sizeof(fsdata)); int ret = fstat(fd, &fsdata); if (ret == -1) { wp_error("The fd %d is not file-like: %s", fd, 
				strerror(errno));
		return FDC_UNKNOWN;
	} else if (S_ISREG(fsdata.st_mode)) {
		if (size) {
			*size = (size_t)fsdata.st_size;
		}
		return FDC_FILE;
	} else if (S_ISFIFO(fsdata.st_mode) || S_ISCHR(fsdata.st_mode) ||
			S_ISSOCK(fsdata.st_mode)) {
		if (S_ISCHR(fsdata.st_mode)) {
			wp_error("The fd %d, size %" PRId64
				 ", mode %x is a character device. Proceeding under the assumption that it is pipe-like.",
					fd, (int64_t)fsdata.st_size,
					fsdata.st_mode);
		}
		if (S_ISSOCK(fsdata.st_mode)) {
			wp_error("The fd %d, size %" PRId64
				 ", mode %x is a socket. Proceeding under the assumption that it is pipe-like.",
					fd, (int64_t)fsdata.st_size,
					fsdata.st_mode);
		}
		return FDC_PIPE;
	} else {
		/* Note: we cannot at the moment reliably identify a dmabuf;
		 * trying to do so by importing it may fail if we have the
		 * wrong parameters. */
		wp_error("The fd %d has an unusual mode %x (type=%x): blk=%d chr=%d dir=%d lnk=%d reg=%d fifo=%d sock=%d; expect an application crash!",
				fd, fsdata.st_mode, fsdata.st_mode & S_IFMT,
				S_ISBLK(fsdata.st_mode),
				S_ISCHR(fsdata.st_mode),
				S_ISDIR(fsdata.st_mode),
				S_ISLNK(fsdata.st_mode),
				S_ISREG(fsdata.st_mode),
				S_ISFIFO(fsdata.st_mode),
				S_ISSOCK(fsdata.st_mode));
		return FDC_UNKNOWN;
	}
}

static size_t compress_bufsize(struct thread_pool *pool, size_t max_input)
{
	switch (pool->compression) {
	default:
	case COMP_NONE:
		(void)max_input;
		return 0;
#ifdef HAS_LZ4
	case COMP_LZ4:
		/* This bound applies for both LZ4 and LZ4HC compressors */
		return (size_t)LZ4_compressBound((int)max_input);
#endif
#ifdef HAS_ZSTD
	case COMP_ZSTD:
		return ZSTD_compressBound(max_input);
#endif
	}
	return 0;
}

/* With the selected compression method, compress the buffer
 * {isize,ibuf}, possibly modifying {msize,mbuf}, and setting
 * {wsize,wbuf} to indicate the result */
static void compress_buffer(struct thread_pool *pool, struct comp_ctx *ctx,
		size_t isize, const char *ibuf, size_t msize, char *mbuf,
		struct bytebuf *dst)
{
	(void)ctx;
	// Ensure inputs always nontrivial
	if (isize == 0) {
		dst->size = 0;
dst->data = (char *)ibuf;
		return;
	}
	DTRACE_PROBE1(waypipe, compress_buffer_enter, isize);
	switch (pool->compression) {
	default:
	case COMP_NONE:
		(void)msize;
		(void)mbuf;
		dst->size = isize;
		dst->data = (char *)ibuf;
		break;
#ifdef HAS_LZ4
	case COMP_LZ4: {
		int ws;
		if (pool->compression_level <= 0) {
			ws = LZ4_compress_fast_extState(ctx->lz4_extstate,
					ibuf, mbuf, (int)isize, (int)msize,
					-pool->compression_level);
		} else {
			ws = LZ4_compress_HC_extStateHC(ctx->lz4_extstate,
					ibuf, mbuf, (int)isize, (int)msize,
					pool->compression_level);
		}
		if (ws == 0) {
			wp_error("LZ4 compression failed for %zu bytes in %zu of space",
					isize, msize);
		}
		dst->size = (size_t)ws;
		dst->data = (char *)mbuf;
		break;
	}
#endif
#ifdef HAS_ZSTD
	case COMP_ZSTD: {
		size_t ws = ZSTD_compressCCtx(ctx->zstd_ccontext, mbuf, msize,
				ibuf, isize, pool->compression_level);
		if (ZSTD_isError(ws)) {
			wp_error("Zstd compression failed for %d bytes in %d of space: %s",
					(int)isize, (int)msize,
					ZSTD_getErrorName(ws));
		}
		dst->size = (size_t)ws;
		dst->data = (char *)mbuf;
		break;
	}
#endif
	}
	DTRACE_PROBE1(waypipe, compress_buffer_exit, dst->size);
}

/* With the selected compression method, uncompress the buffer {isize,ibuf},
 * to precisely msize bytes, setting {wsize,wbuf} to indicate the result.
 * If the compression mode requires it, the decompressed data is written
 * into {msize,mbuf}.
*/ static void uncompress_buffer(struct thread_pool *pool, struct comp_ctx *ctx, size_t isize, const char *ibuf, size_t msize, char *mbuf, size_t *wsize, const char **wbuf) { (void)ctx; // Ensure inputs always nontrivial if (isize == 0) { *wsize = 0; *wbuf = ibuf; return; } DTRACE_PROBE1(waypipe, uncompress_buffer_enter, isize); switch (pool->compression) { default: case COMP_NONE: (void)mbuf; (void)msize; *wsize = isize; *wbuf = ibuf; break; #ifdef HAS_LZ4 case COMP_LZ4: { int ws = LZ4_decompress_safe( ibuf, mbuf, (int)isize, (int)msize); if (ws < 0 || (size_t)ws != msize) { wp_error("Lz4 decompression failed for %d bytes to %d of space, used %d", (int)isize, (int)msize, ws); } *wsize = (size_t)ws; *wbuf = mbuf; break; } #endif #ifdef HAS_ZSTD case COMP_ZSTD: { size_t ws = ZSTD_decompressDCtx( ctx->zstd_dcontext, mbuf, msize, ibuf, isize); if (ZSTD_isError(ws) || (size_t)ws != msize) { wp_error("Zstd decompression failed for %d bytes to %d of space: %s", (int)isize, (int)msize, ZSTD_getErrorName(ws)); ws = 0; } *wsize = ws; *wbuf = mbuf; break; } #endif } DTRACE_PROBE1(waypipe, uncompress_buffer_exit, *wsize); } struct shadow_fd *translate_fd(struct fd_translation_map *map, struct render_data *render, struct thread_pool *threads, int fd, enum fdcat type, size_t file_sz, const struct dmabuf_slice_data *info, bool force_pipe_iw) { struct shadow_fd *sfd = get_shadow_for_local_fd(map, fd); if (sfd) { return sfd; } if (type == FDC_DMAVID_IR || type == FDC_DMAVID_IW) { if (!info) { wp_error("No dmabuf info provided"); return NULL; } } // Create a new translation map. 
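`translate_fd` (and later `open_sfd`) inserts each new `shadow_fd` at the head of the map's circular, intrusive doubly-linked list with four pointer stores. The pattern in isolation, with illustrative names of our own (waypipe's actual node type is embedded in `struct shadow_fd` as `link`):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative intrusive circular-list node and head-insertion, mirroring
 * the four assignments made on sfd->link in translate_fd/open_sfd. */
struct wp_link {
	struct wp_link *l_prev, *l_next;
};

static void wp_link_init(struct wp_link *head)
{
	head->l_prev = head;
	head->l_next = head;
}

static void wp_link_insert_after(struct wp_link *head, struct wp_link *elem)
{
	elem->l_prev = head;
	elem->l_next = head->l_next;
	elem->l_prev->l_next = elem;
	elem->l_next->l_prev = elem;
}
```

Because the list is circular with a sentinel head, insertion and removal need no NULL checks.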
sfd = calloc(1, sizeof(struct shadow_fd)); if (!sfd) { wp_error("Failed to allocate shadow_fd structure"); return NULL; } sfd->link.l_prev = &map->link; sfd->link.l_next = map->link.l_next; sfd->link.l_prev->l_next = &sfd->link; sfd->link.l_next->l_prev = &sfd->link; sfd->fd_local = fd; sfd->mem_local = NULL; sfd->mem_mirror = NULL; sfd->mem_mirror_handle = NULL; sfd->buffer_size = 0; sfd->remote_id = (map->max_local_id++) * map->local_sign; sfd->type = type; // File changes must be propagated sfd->is_dirty = true; /* files/dmabufs are damaged by default; shm_pools are explicitly * undamaged in handlers.c */ damage_everything(&sfd->damage); sfd->has_owner = false; /* Start the number of expected transfers to channel remaining * at one, and number of protocol objects referencing this * shadow_fd at zero.*/ sfd->refcount.transfer = 1; sfd->refcount.protocol = 0; sfd->refcount.compute = false; sfd->only_here = true; wp_debug("Creating new %s shadow RID=%d for local fd %d", fdcat_to_str(sfd->type), sfd->remote_id, fd); switch (sfd->type) { case FDC_FILE: { if (file_sz >= UINT32_MAX / 2) { wp_error("Failed to create shadow structure, file size %zu too large to transfer", file_sz); return sfd; } sfd->buffer_size = file_sz; sfd->file_readonly = false; // both r/w permissions, because the side which allocates the // memory does not always have to be the side that modifies it sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); if (sfd->mem_local == MAP_FAILED && (errno == EPERM || errno == EACCES)) { wp_debug("Initial mmap for RID=%d failed, trying private+readonly", sfd->remote_id); // Some files are memfds that are sealed // to be read-only sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ, MAP_PRIVATE, fd, 0); if (sfd->mem_local != MAP_FAILED) { sfd->file_readonly = true; } } if (sfd->mem_local == MAP_FAILED) { wp_error("Mmap failed when creating shadow RID=%d: %s", sfd->remote_id, strerror(errno)); return sfd; } // This will 
be created at the first transfer.
		// todo: why not create it now?
		sfd->mem_mirror = NULL;
	} break;
	case FDC_PIPE: {
		// Make this end of the pipe nonblocking, so that we can
		// include it in our main loop.
		if (set_nonblocking(sfd->fd_local) == -1) {
			wp_error("Failed to make fd nonblocking");
		}
		sfd->pipe.fd = sfd->fd_local;

		if (force_pipe_iw) {
			sfd->pipe.can_write = true;
		} else {
			/* this classification overestimates with
			 * socketpairs that have partially been shutdown.
			 * what about platform-specific RW pipes? */
			int flags = fcntl(fd, F_GETFL, 0);
			if (flags == -1) {
				wp_error("fcntl F_GETFL failed!");
			}
			if ((flags & O_ACCMODE) == O_RDONLY) {
				sfd->pipe.can_read = true;
			} else if ((flags & O_ACCMODE) == O_WRONLY) {
				sfd->pipe.can_write = true;
			} else {
				sfd->pipe.can_read = true;
				sfd->pipe.can_write = true;
			}
		}
	} break;
	case FDC_DMAVID_IR: {
		sfd->video_fmt = render->av_video_fmt;
		memcpy(&sfd->dmabuf_info, info,
				sizeof(struct dmabuf_slice_data));
		init_render_data(render);
		sfd->dmabuf_bo = import_dmabuf(render, sfd->fd_local,
				&sfd->buffer_size, &sfd->dmabuf_info);
		if (!sfd->dmabuf_bo) {
			return sfd;
		}
		if (setup_video_encode(sfd, render, threads->nthreads) == -1) {
			wp_error("Video encoding setup failed for RID=%d",
					sfd->remote_id);
		}
	} break;
	case FDC_DMAVID_IW: {
		sfd->video_fmt = render->av_video_fmt;
		memcpy(&sfd->dmabuf_info, info,
				sizeof(struct dmabuf_slice_data));
		// TODO: multifd-dmabuf video surface
		init_render_data(render);
		sfd->dmabuf_bo = import_dmabuf(render, sfd->fd_local,
				&sfd->buffer_size, &sfd->dmabuf_info);
		if (!sfd->dmabuf_bo) {
			return sfd;
		}
		if (setup_video_decode(sfd, render) == -1) {
			wp_error("Video decoding setup failed for RID=%d",
					sfd->remote_id);
		}
	} break;
	case FDC_DMABUF: {
		sfd->buffer_size = 0;

		init_render_data(render);
		memcpy(&sfd->dmabuf_info, info,
				sizeof(struct dmabuf_slice_data));
		sfd->dmabuf_bo = import_dmabuf(render, sfd->fd_local,
				&sfd->buffer_size, &sfd->dmabuf_info);
		if (!sfd->dmabuf_bo) {
			return sfd;
		}
		// to be created on first transfer
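The `FDC_PIPE` branch infers a pipe end's direction from the `O_ACCMODE` bits returned by `fcntl(F_GETFL)`. A minimal standalone version of that check (the helper name and the fallback-to-both-directions behavior are our adaptation of the branch above):

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

/* Report whether fd is readable/writable based on its open access mode,
 * as the FDC_PIPE classification does. On fcntl failure, conservatively
 * assume both directions, matching the catch-all branch above. */
static void classify_pipe_end(int fd, int *can_read, int *can_write)
{
	int flags = fcntl(fd, F_GETFL, 0);
	if (flags == -1) {
		*can_read = 1;
		*can_write = 1;
		return;
	}
	int acc = flags & O_ACCMODE;
	*can_read = (acc == O_RDONLY || acc == O_RDWR);
	*can_write = (acc == O_WRONLY || acc == O_RDWR);
}
```

As the comment in the source notes, this overestimates for socketpairs that have been partially shut down, since their access mode still reports both directions.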
sfd->mem_mirror = NULL; } break; case FDC_UNKNOWN: wp_error("Trying to create shadow_fd for unknown filedesc type"); break; } return sfd; } static void *shrink_buffer(void *buf, size_t sz) { void *nbuf = realloc(buf, sz); if (nbuf) { return nbuf; } else { wp_debug("Failed to shrink buffer with realloc, not a problem"); return buf; } } /* Construct and optionally compress a diff between sfd->mem_mirror and * the actual memmap'd data, and synchronize sfd->mem_mirror */ static void worker_run_compress_diff( struct task_data *task, struct thread_data *local) { struct shadow_fd *sfd = task->sfd; struct thread_pool *pool = local->pool; size_t diffsize = (size_t)-1; size_t damage_space = 0; for (int i = 0; i < task->damage_len; i++) { int range = task->damage_intervals[i].end - task->damage_intervals[i].start; damage_space += (size_t)range + 8; } if (task->damaged_end) { damage_space += 1u << pool->diff_alignment_bits; } DTRACE_PROBE1(waypipe, worker_compdiff_enter, damage_space); char *diff_buffer = NULL; char *diff_target = NULL; if (pool->compression == COMP_NONE) { diff_buffer = malloc( damage_space + sizeof(struct wmsg_buffer_diff)); if (!diff_buffer) { wp_error("Allocation failed, dropping diff transfer block"); goto end; } diff_target = diff_buffer + sizeof(struct wmsg_buffer_diff); } else { if (buf_ensure_size((int)damage_space, 1, &local->tmp_size, &local->tmp_buf) == -1) { wp_error("Allocation failed, dropping diff transfer block"); goto end; } diff_target = local->tmp_buf; } DTRACE_PROBE1(waypipe, construct_diff_enter, task->damage_len); char *source = sfd->mem_local; if (sfd->type == FDC_DMABUF && sfd->dmabuf_map_stride != sfd->dmabuf_info.strides[0]) { size_t tx_stride = (size_t)sfd->dmabuf_info.strides[0]; size_t common = (size_t)minu(sfd->dmabuf_map_stride, tx_stride); /* copy mapped data to temporary buffer whose stride matches * what is sent over the wire */ for (int i = 0; i < task->damage_len; i++) { size_t start = 
(size_t)task->damage_intervals[i].start; size_t end = (size_t)task->damage_intervals[i].end; size_t loc_start = (start % tx_stride) + (start / tx_stride) * sfd->dmabuf_map_stride; size_t loc_end = (end % tx_stride) + (end / tx_stride) * sfd->dmabuf_map_stride; stride_shifted_copy(sfd->dmabuf_warped, sfd->mem_local, loc_start, loc_end - loc_start, common, sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); } if (task->damaged_end) { size_t alignment = 1u << pool->diff_alignment_bits; size_t start = alignment * (sfd->buffer_size / alignment); size_t end = sfd->buffer_size; size_t loc_start = (start % tx_stride) + (start / tx_stride) * sfd->dmabuf_map_stride; size_t loc_end = (end % tx_stride) + (end / tx_stride) * sfd->dmabuf_map_stride; stride_shifted_copy(sfd->dmabuf_warped, sfd->mem_local, loc_start, loc_end - loc_start, common, sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); } source = sfd->dmabuf_warped; } diffsize = construct_diff_core(pool->diff_func, pool->diff_alignment_bits, task->damage_intervals, task->damage_len, sfd->mem_mirror, source, diff_target); size_t ntrailing = 0; if (task->damaged_end) { ntrailing = construct_diff_trailing(sfd->buffer_size, pool->diff_alignment_bits, sfd->mem_mirror, source, diff_target + diffsize); } DTRACE_PROBE1(waypipe, construct_diff_exit, diffsize); if (diffsize == 0 && ntrailing == 0) { free(diff_buffer); goto end; } uint8_t *msg; size_t sz; size_t net_diff_sz = diffsize + ntrailing; if (pool->compression == COMP_NONE) { sz = net_diff_sz + sizeof(struct wmsg_buffer_diff); msg = (uint8_t *)diff_buffer; } else { struct bytebuf dst; size_t comp_size = compress_bufsize(pool, net_diff_sz); char *comp_buf = malloc(alignz(comp_size, 4) + sizeof(struct wmsg_buffer_diff)); if (!comp_buf) { wp_error("Allocation failed, dropping diff transfer block"); goto end; } compress_buffer(pool, &local->comp_ctx, net_diff_sz, diff_target, comp_size, comp_buf + sizeof(struct wmsg_buffer_diff), &dst); sz = dst.size + sizeof(struct 
wmsg_buffer_diff); msg = (uint8_t *)comp_buf; } msg = shrink_buffer(msg, alignz(sz, 4)); memset(msg + sz, 0, alignz(sz, 4) - sz); struct wmsg_buffer_diff header; header.size_and_type = transfer_header(sz, WMSG_BUFFER_DIFF); header.remote_id = sfd->remote_id; header.diff_size = (uint32_t)diffsize; header.ntrailing = (uint32_t)ntrailing; memcpy(msg, &header, sizeof(struct wmsg_buffer_diff)); transfer_async_add(task->msg_queue, msg, alignz(sz, 4)); end: DTRACE_PROBE1(waypipe, worker_compdiff_exit, diffsize); } /* Compress data for sfd->mem_mirror, and synchronize sfd->mem_mirror */ static void worker_run_compress_block( struct task_data *task, struct thread_data *local) { struct shadow_fd *sfd = task->sfd; struct thread_pool *pool = local->pool; if (task->zone_end == task->zone_start) { wp_error("Skipping task"); return; } /* Allocate a disjoint target interval to each worker */ size_t source_start = (size_t)task->zone_start; size_t source_end = (size_t)task->zone_end; DTRACE_PROBE1(waypipe, worker_comp_enter, source_end - source_start); /* Update mirror to match local */ if (sfd->type == FDC_DMABUF && sfd->dmabuf_map_stride != sfd->dmabuf_info.strides[0]) { uint32_t tx_stride = sfd->dmabuf_info.strides[0]; size_t common = (size_t)minu(sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); size_t loc_start = (source_start % tx_stride) + (source_start / tx_stride) * sfd->dmabuf_map_stride; size_t loc_end = (source_end % tx_stride) + (source_end / tx_stride) * sfd->dmabuf_map_stride; stride_shifted_copy(sfd->mem_mirror, sfd->mem_local, loc_start, loc_end - loc_start, common, sfd->dmabuf_map_stride, sfd->dmabuf_info.strides[0]); } else { memcpy(sfd->mem_mirror + source_start, sfd->mem_local + source_start, source_end - source_start); } size_t sz = 0; uint8_t *msg; if (pool->compression == COMP_NONE) { sz = sizeof(struct wmsg_buffer_fill) + (source_end - source_start); msg = malloc(alignz(sz, 4)); if (!msg) { wp_error("Allocation failed, dropping fill transfer block"); 
goto end; } memcpy(msg + sizeof(struct wmsg_buffer_fill), sfd->mem_mirror + source_start, source_end - source_start); } else { size_t comp_size = compress_bufsize( pool, source_end - source_start); msg = malloc(alignz(comp_size, 4) + sizeof(struct wmsg_buffer_fill)); if (!msg) { wp_error("Allocation failed, dropping fill transfer block"); goto end; } struct bytebuf dst; compress_buffer(pool, &local->comp_ctx, source_end - source_start, &sfd->mem_mirror[source_start], comp_size, (char *)msg + sizeof(struct wmsg_buffer_fill), &dst); sz = dst.size + sizeof(struct wmsg_buffer_fill); msg = shrink_buffer(msg, alignz(sz, 4)); } memset(msg + sz, 0, alignz(sz, 4) - sz); struct wmsg_buffer_fill header; header.size_and_type = transfer_header(sz, WMSG_BUFFER_FILL); header.remote_id = sfd->remote_id; header.start = (uint32_t)source_start; header.end = (uint32_t)source_end; memcpy(msg, &header, sizeof(struct wmsg_buffer_fill)); transfer_async_add(task->msg_queue, msg, alignz(sz, 4)); end: DTRACE_PROBE1(waypipe, worker_comp_exit, sz - sizeof(struct wmsg_buffer_fill)); } /* Optionally compress the data in mem_mirror, and set up the initial * transfer blocks */ static void queue_fill_transfers(struct thread_pool *threads, struct shadow_fd *sfd, struct transfer_queue *transfers) { // new transfer, we send file contents verbatim const int chunksize = 262144; int region_start = (int)sfd->remote_bufsize; int region_end = (int)sfd->buffer_size; if (region_start > region_end) { wp_error("Cannot queue fill transfers for a size reduction from %d to %d bytes", region_start, region_end); return; } if (region_start == region_end) { return; } /* Keep sfd alive at least until write to channel is done */ sfd->refcount.compute = true; int nshards = ceildiv((region_end - region_start), chunksize); pthread_mutex_lock(&threads->work_mutex); if (buf_ensure_size(threads->stack_count + nshards, sizeof(struct task_data), &threads->stack_size, (void **)&threads->stack) == -1) { wp_error("Allocation 
failed, dropping some fill tasks"); pthread_mutex_unlock(&threads->work_mutex); return; } for (int i = 0; i < nshards; i++) { struct task_data task; memset(&task, 0, sizeof(task)); task.type = TASK_COMPRESS_BLOCK; task.sfd = sfd; task.msg_queue = &transfers->async_recv_queue; task.zone_start = split_interval( region_start, region_end, nshards, i); task.zone_end = split_interval( region_start, region_end, nshards, i + 1); threads->stack[threads->stack_count++] = task; } pthread_mutex_unlock(&threads->work_mutex); } static void queue_diff_transfers(struct thread_pool *threads, struct shadow_fd *sfd, struct transfer_queue *transfers) { const int chunksize = 262144; if (!sfd->damage.damage) { return; } /* Keep sfd alive at least until write to channel is done */ sfd->refcount.compute = true; int bs = 1 << threads->diff_alignment_bits; int align_end = bs * ((int)sfd->buffer_size / bs); bool check_tail = false; int net_damage = 0; if (sfd->damage.damage == DAMAGE_EVERYTHING) { reset_damage(&sfd->damage); struct ext_interval all = {.start = 0, .width = align_end, .rep = 1, .stride = 0}; merge_damage_records(&sfd->damage, 1, &all, threads->diff_alignment_bits); check_tail = true; net_damage = align_end; } else { for (int ir = 0, iw = 0; ir < sfd->damage.ndamage_intvs; ir++) { /* Extend all damage to the nearest alignment block */ struct interval e = sfd->damage.damage[ir]; check_tail |= e.end > align_end; e.end = min(e.end, align_end); if (e.start < e.end) { /* End clipping may produce empty/degenerate * intervals, so filter them out now */ sfd->damage.damage[iw++] = e; net_damage += e.end - e.start; } if (e.end & (bs - 1) || e.start & (bs - 1)) { wp_error("Interval [%d, %d) is not aligned", e.start, e.end); } } } int nshards = ceildiv(net_damage, chunksize); /* Instead of allocating individual buffers for each task, create a * global damage tracking buffer into which tasks index. It will be * deleted in `finish_update`. 
*/ struct interval *intvs = malloc( sizeof(struct interval) * (size_t)(sfd->damage.ndamage_intvs + nshards)); int *offsets = calloc((size_t)nshards + 1, sizeof(int)); if (!offsets || !intvs) { // TODO: avoid making this allocation entirely wp_error("Failed to allocate diff region control buffer, dropping diff tasks"); free(intvs); free(offsets); return; } sfd->damage_task_interval_store = intvs; int tot_blocks = net_damage / bs; int ir = 0, iw = 0, acc_prev_blocks = 0; for (int shard = 0; shard < nshards; shard++) { int s_lower = split_interval(0, tot_blocks, nshards, shard); int s_upper = split_interval(0, tot_blocks, nshards, shard + 1); while (acc_prev_blocks < s_upper && ir < sfd->damage.ndamage_intvs) { struct interval e = sfd->damage.damage[ir]; const int w = (e.end - e.start) / bs; int a_low = max(0, s_lower - acc_prev_blocks); int a_high = min(w, s_upper - acc_prev_blocks); struct interval r = { .start = e.start + bs * a_low, .end = e.start + bs * a_high, }; intvs[iw++] = r; if (acc_prev_blocks + w > s_upper) { break; } else { acc_prev_blocks += w; ir++; } } offsets[shard + 1] = iw; } /* Reset damage, once it has been applied */ reset_damage(&sfd->damage); pthread_mutex_lock(&threads->work_mutex); if (buf_ensure_size(threads->stack_count + nshards, sizeof(struct task_data), &threads->stack_size, (void **)&threads->stack) == -1) { wp_error("Allocation failed, dropping some diff tasks"); pthread_mutex_unlock(&threads->work_mutex); free(offsets); return; } for (int i = 0; i < nshards; i++) { struct task_data task; memset(&task, 0, sizeof(task)); task.type = TASK_COMPRESS_DIFF; task.sfd = sfd; task.msg_queue = &transfers->async_recv_queue; task.damage_len = offsets[i + 1] - offsets[i]; task.damage_intervals = &sfd->damage_task_interval_store[offsets[i]]; task.damaged_end = (i == nshards - 1) && check_tail; threads->stack[threads->stack_count++] = task; } pthread_mutex_unlock(&threads->work_mutex); free(offsets); } static void add_dmabuf_create_request(struct 
transfer_queue *transfers, struct shadow_fd *sfd, enum wmsg_type variant) { size_t actual_len = sizeof(struct wmsg_open_dmabuf) + sizeof(struct dmabuf_slice_data); size_t padded_len = alignz(actual_len, 4); uint8_t *data = calloc(1, padded_len); struct wmsg_open_dmabuf *header = (struct wmsg_open_dmabuf *)data; header->file_size = (uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header(actual_len, variant); memcpy(data + sizeof(struct wmsg_open_dmabuf), &sfd->dmabuf_info, sizeof(struct dmabuf_slice_data)); transfer_add(transfers, padded_len, data); } static void add_dmabuf_create_request_v2(struct transfer_queue *transfers, struct shadow_fd *sfd, enum wmsg_type variant, enum video_coding_fmt fmt) { size_t actual_len = sizeof(struct wmsg_open_dmavid) + sizeof(struct dmabuf_slice_data); static_assert((sizeof(struct wmsg_open_dmavid) + sizeof(struct dmabuf_slice_data)) % 4 == 0, "alignment"); uint8_t *data = calloc(1, actual_len); struct wmsg_open_dmavid *header = (struct wmsg_open_dmavid *)data; header->file_size = (uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header(actual_len, variant); header->vid_flags = (uint32_t)fmt; memcpy(data + sizeof(*header), &sfd->dmabuf_info, sizeof(struct dmabuf_slice_data)); transfer_add(transfers, actual_len, data); } static void add_file_create_request( struct transfer_queue *transfers, struct shadow_fd *sfd) { struct wmsg_open_file *header = calloc(1, sizeof(struct wmsg_open_file)); header->file_size = (uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header( sizeof(struct wmsg_open_file), WMSG_OPEN_FILE); transfer_add(transfers, sizeof(struct wmsg_open_file), header); } void finish_update(struct shadow_fd *sfd) { if (!sfd->refcount.compute) { return; } if (sfd->type == FDC_DMABUF && sfd->dmabuf_map_handle) { // if this fails, unmap_dmabuf will print error 
(void)unmap_dmabuf(sfd->dmabuf_bo, sfd->dmabuf_map_handle); sfd->dmabuf_map_handle = NULL; sfd->mem_local = NULL; } if (sfd->damage_task_interval_store) { free(sfd->damage_task_interval_store); sfd->damage_task_interval_store = NULL; } sfd->refcount.compute = false; } void collect_update(struct thread_pool *threads, struct shadow_fd *sfd, struct transfer_queue *transfers, bool use_old_dmavid_req) { switch (sfd->type) { case FDC_FILE: { if (!sfd->is_dirty) { // File is clean, we have no reason to believe // that its contents could have changed return; } // Clear dirty state sfd->is_dirty = false; if (sfd->only_here) { // increase space, to avoid overflow when // writing this buffer along with padding size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); if (!sfd->mem_mirror) { wp_error("Failed to allocate mirror"); return; } sfd->only_here = false; sfd->remote_bufsize = 0; add_file_create_request(transfers, sfd); sfd->remote_bufsize = sfd->buffer_size; queue_diff_transfers(threads, sfd, transfers); return; } if (sfd->remote_bufsize < sfd->buffer_size) { struct wmsg_open_file *header = calloc( 1, sizeof(struct wmsg_open_file)); header->file_size = (uint32_t)sfd->buffer_size; header->remote_id = sfd->remote_id; header->size_and_type = transfer_header( sizeof(struct wmsg_open_file), WMSG_EXTEND_FILE); transfer_add(transfers, sizeof(struct wmsg_open_file), header); sfd->remote_bufsize = sfd->buffer_size; } queue_diff_transfers(threads, sfd, transfers); } break; case FDC_DMABUF: { // If buffer is clean, do not check for changes if (!sfd->is_dirty) { return; } sfd->is_dirty = false; bool first = false; if (sfd->only_here) { sfd->only_here = false; first = true; add_dmabuf_create_request( transfers, sfd, WMSG_OPEN_DMABUF); } if (!sfd->dmabuf_bo) { // ^ was not previously able to create buffer return; } if (!sfd->mem_local) { sfd->mem_local = 
map_dmabuf(sfd->dmabuf_bo, false, &sfd->dmabuf_map_handle, &sfd->dmabuf_map_stride); if (!sfd->mem_local) { return; } } if (first) { size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); sfd->dmabuf_warped = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->dmabuf_warped_handle); if (!sfd->mem_mirror || !sfd->dmabuf_warped) { wp_error("Failed to allocate mirror"); return; } sfd->remote_bufsize = 0; queue_fill_transfers(threads, sfd, transfers); sfd->remote_bufsize = sfd->buffer_size; } else { // TODO: detailed damage tracking damage_everything(&sfd->damage); queue_diff_transfers(threads, sfd, transfers); } /* Unmapping will be handled by finish_update() */ } break; case FDC_DMAVID_IR: { if (!sfd->is_dirty) { return; } sfd->is_dirty = false; if (!sfd->dmabuf_bo || !sfd->video_context) { // ^ was not previously able to create buffer return; } if (sfd->only_here) { sfd->only_here = false; if (use_old_dmavid_req) { add_dmabuf_create_request(transfers, sfd, WMSG_OPEN_DMAVID_DST); } else { add_dmabuf_create_request_v2(transfers, sfd, WMSG_OPEN_DMAVID_DST_V2, sfd->video_fmt); } } collect_video_from_mirror(sfd, transfers); } break; case FDC_DMAVID_IW: { sfd->is_dirty = false; if (sfd->only_here) { sfd->only_here = false; if (use_old_dmavid_req) { add_dmabuf_create_request(transfers, sfd, WMSG_OPEN_DMAVID_SRC); } else { add_dmabuf_create_request_v2(transfers, sfd, WMSG_OPEN_DMAVID_SRC_V2, sfd->video_fmt); } } } break; case FDC_PIPE: { // Pipes always update, no matter what the message // stream indicates. 
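The wire messages assembled in `collect_update` are framed with `transfer_header()` and zero-padded to 4-byte boundaries via `alignz()`. Assuming the usual waypipe framing, in which a 5-bit message type occupies the low bits beneath the size field, a sketch of such helpers (these are illustrative reimplementations under that assumption, not the project's actual header definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed header layout: size in the upper 27 bits, type in the low 5. */
static uint32_t sketch_transfer_header(size_t size, uint32_t type)
{
	return ((uint32_t)size << 5) | type;
}
static size_t sketch_transfer_size(uint32_t header)
{
	return (size_t)(header >> 5);
}
static uint32_t sketch_transfer_type(uint32_t header)
{
	return header & 0x1f;
}
/* Round x up to a multiple of `align`, which must be a power of two;
 * used to pad message bodies to 4-byte boundaries. */
static size_t sketch_alignz(size_t x, size_t align)
{
	return (x + align - 1) & ~(align - 1);
}
```

Note that the header stores the unpadded size, while `alignz(sz, 4) - sz` trailing bytes are zeroed before the message is queued, so the receiver can recover the exact body length.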
if (sfd->only_here) { sfd->only_here = false; struct wmsg_basic *createh = calloc(1, sizeof(struct wmsg_basic)); enum wmsg_type type; if (sfd->pipe.can_read && !sfd->pipe.can_write) { type = WMSG_OPEN_IW_PIPE; sfd->pipe.remote_can_write = true; } else if (sfd->pipe.can_write && !sfd->pipe.can_read) { type = WMSG_OPEN_IR_PIPE; sfd->pipe.remote_can_read = true; } else { type = WMSG_OPEN_RW_PIPE; sfd->pipe.remote_can_read = true; sfd->pipe.remote_can_write = true; } createh->size_and_type = transfer_header( sizeof(struct wmsg_basic), type); createh->remote_id = sfd->remote_id; transfer_add(transfers, sizeof(struct wmsg_basic), createh); } if (sfd->pipe.recv.used > 0) { size_t msgsz = sizeof(struct wmsg_basic) + (size_t)sfd->pipe.recv.used; char *buf = malloc(alignz(msgsz, 4)); struct wmsg_basic *header = (struct wmsg_basic *)buf; header->size_and_type = transfer_header( msgsz, WMSG_PIPE_TRANSFER); header->remote_id = sfd->remote_id; memcpy(buf + sizeof(struct wmsg_basic), sfd->pipe.recv.data, (size_t)sfd->pipe.recv.used); memset(buf + msgsz, 0, alignz(msgsz, 4) - msgsz); transfer_add(transfers, alignz(msgsz, 4), buf); sfd->pipe.recv.used = 0; } if (!sfd->pipe.can_read && sfd->pipe.remote_can_write) { struct wmsg_basic *header = calloc(1, sizeof(struct wmsg_basic)); header->size_and_type = transfer_header( sizeof(struct wmsg_basic), WMSG_PIPE_SHUTDOWN_W); header->remote_id = sfd->remote_id; transfer_add(transfers, sizeof(struct wmsg_basic), header); sfd->pipe.remote_can_write = false; } if (!sfd->pipe.can_write && sfd->pipe.remote_can_read) { struct wmsg_basic *header = calloc(1, sizeof(struct wmsg_basic)); header->size_and_type = transfer_header( sizeof(struct wmsg_basic), WMSG_PIPE_SHUTDOWN_R); header->remote_id = sfd->remote_id; transfer_add(transfers, sizeof(struct wmsg_basic), header); sfd->pipe.remote_can_read = false; } } break; case FDC_UNKNOWN: break; } } static void increase_buffer_sizes(struct shadow_fd *sfd, struct thread_pool *threads, size_t new_size) { 
size_t old_size = sfd->buffer_size; munmap(sfd->mem_local, old_size); sfd->buffer_size = new_size; sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ | PROT_WRITE, MAP_SHARED, sfd->fd_local, 0); if (sfd->mem_local == MAP_FAILED) { wp_error("Mmap failed to remap increased buffer for RID=%d: %s", sfd->remote_id, strerror(errno)); return; } /* if resize happens before any transfers, mirror may still be zero */ if (sfd->mem_mirror) { // todo: handle allocation failures size_t alignment = 1u << threads->diff_alignment_bits; void *new_mirror = zeroed_aligned_realloc( alignz(old_size, alignment), alignz(sfd->buffer_size, alignment), alignment, sfd->mem_mirror, &sfd->mem_mirror_handle); if (!new_mirror) { wp_error("Failed to reallocate mirror"); return; } sfd->mem_mirror = new_mirror; } } static void pipe_close_write(struct shadow_fd *sfd) { if (sfd->pipe.can_read) { /* if pipe.fd is both readable and writable, assume * socket */ shutdown(sfd->pipe.fd, SHUT_WR); } else { checked_close(sfd->pipe.fd); if (sfd->fd_local == sfd->pipe.fd) { sfd->fd_local = -1; } sfd->pipe.fd = -1; } sfd->pipe.can_write = false; /* Also free any accumulated data that was not delivered */ free(sfd->pipe.send.data); memset(&sfd->pipe.send, 0, sizeof(sfd->pipe.send)); } static void pipe_close_read(struct shadow_fd *sfd) { if (sfd->pipe.can_write) { /* if pipe.fd is both readable and writable, assume * socket */ // TODO: check return value, can legitimately fail with ENOBUFS shutdown(sfd->pipe.fd, SHUT_RD); } else { checked_close(sfd->pipe.fd); if (sfd->fd_local == sfd->pipe.fd) { sfd->fd_local = -1; } sfd->pipe.fd = -1; } sfd->pipe.can_read = false; } static int open_sfd(struct fd_translation_map *map, struct shadow_fd **sfd_ptr, int remote_id) { if (*sfd_ptr) { wp_error("shadow structure for RID=%d was already created", remote_id); return ERR_FATAL; } wp_debug("Introducing new fd, remoteid=%d", remote_id); struct shadow_fd *sfd = calloc(1, sizeof(struct shadow_fd)); if (!sfd) { 
wp_error("failed to allocate shadow structure for RID=%d",
				remote_id);
		return ERR_FATAL;
	}
	sfd->link.l_prev = &map->link;
	sfd->link.l_next = map->link.l_next;
	sfd->link.l_prev->l_next = &sfd->link;
	sfd->link.l_next->l_prev = &sfd->link;
	sfd->remote_id = remote_id;
	sfd->fd_local = -1;
	sfd->is_dirty = false;
	/* a received file descriptor is up to date by default */
	reset_damage(&sfd->damage);
	sfd->only_here = false;
	/* Start the object reference at one, so that, if it is owned by
	 * some known protocol object, it can not be deleted until the
	 * fd has at least been transferred over the Wayland connection */
	sfd->refcount.transfer = 1;
	sfd->refcount.protocol = 0;
	sfd->refcount.compute = false;
	*sfd_ptr = sfd;
	return 0;
}

static int check_message_min_size(
		enum wmsg_type type, const struct bytebuf *msg, size_t min_size)
{
	if (msg->size < min_size) {
		wp_error("Message size for %s is smaller than expected (%zu bytes vs %zu bytes)",
				wmsg_type_to_str(type), msg->size, min_size);
		return ERR_FATAL;
	}
	return 0;
}

static int check_sfd_type_2(struct shadow_fd *sfd, int remote_id,
		enum wmsg_type mtype, enum fdcat ftype1, enum fdcat ftype2)
{
	if (!sfd) {
		wp_error("shadow structure for RID=%d was not available",
				remote_id);
		return ERR_FATAL;
	}
	if (sfd->type != ftype1 && sfd->type != ftype2) {
		wp_error("Trying to apply %s to RID=%d which has incompatible type=%s",
				wmsg_type_to_str(mtype), remote_id,
				fdcat_to_str(sfd->type));
		return ERR_FATAL;
	}
	return 0;
}

static int check_sfd_type(struct shadow_fd *sfd, int remote_id,
		enum wmsg_type mtype, enum fdcat ftype)
{
	return check_sfd_type_2(sfd, remote_id, mtype, ftype, ftype);
}

int apply_update(struct fd_translation_map *map, struct thread_pool *threads,
		struct render_data *render, enum wmsg_type type, int remote_id,
		const struct bytebuf *msg)
{
	struct shadow_fd *sfd = get_shadow_for_rid(map, remote_id);
	int ret = 0;
	switch (type) {
	default:
	case WMSG_RESTART:
	case WMSG_CLOSE:
	case WMSG_ACK_NBLOCKS:
	case WMSG_INJECT_RIDS:
	case
WMSG_PROTOCOL: { if (wmsg_type_is_known(type)) { wp_error("Unexpected update type: %s", wmsg_type_to_str(type)); } else { wp_error("Unidentified update type, number %u. " "This may be caused by the Waypipe instances " "on different sides of the connection having " "incompatible versions or options.", (unsigned)type); } return ERR_FATAL; } /* SFD creation messages */ case WMSG_OPEN_FILE: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_open_file))) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } const struct wmsg_open_file header = *(const struct wmsg_open_file *)msg->data; sfd->type = FDC_FILE; sfd->mem_local = NULL; sfd->buffer_size = header.file_size; sfd->remote_bufsize = sfd->buffer_size; size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); if (!sfd->mem_mirror) { wp_error("Failed to allocate mirror"); return 0; } sfd->fd_local = create_anon_file(); if (sfd->fd_local == -1) { wp_error("Failed to create anon file for object %d: %s", sfd->remote_id, strerror(errno)); return 0; } /* ftruncate zero initializes the file by default, matching * the zeroed mem_mirror buffer */ if (ftruncate(sfd->fd_local, (off_t)sfd->buffer_size) == -1) { wp_error("Failed to resize anon file to size %zu for reason: %s", sfd->buffer_size, strerror(errno)); return 0; } sfd->mem_local = mmap(NULL, sfd->buffer_size, PROT_READ | PROT_WRITE, MAP_SHARED, sfd->fd_local, 0); if (sfd->mem_local == MAP_FAILED) { wp_error("Failed to mmap newly created shm file for object %d: %s", sfd->remote_id, strerror(errno)); sfd->mem_local = NULL; return 0; } return 0; } case WMSG_OPEN_DMABUF: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_open_dmabuf) + sizeof(struct dmabuf_slice_data))) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } sfd->type = FDC_DMABUF; memcpy(&sfd->dmabuf_info, 
msg->data + sizeof(struct wmsg_open_dmabuf), sizeof(struct dmabuf_slice_data)); /* allocate a mirror buffer that matches dimensions of incoming * data from the remote; this may disagree with the mapped size * of the buffer */ sfd->buffer_size = sfd->dmabuf_info.height * sfd->dmabuf_info.strides[0]; size_t alignment = 1u << threads->diff_alignment_bits; sfd->mem_mirror = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->mem_mirror_handle); sfd->dmabuf_warped = zeroed_aligned_alloc( alignz(sfd->buffer_size, alignment), alignment, &sfd->dmabuf_warped_handle); if (!sfd->mem_mirror || !sfd->dmabuf_warped) { wp_error("Failed to allocate mirror"); return 0; } wp_debug("Creating remote DMAbuf of %d bytes", (int)sfd->buffer_size); // Create mirror from first transfer // The file can only actually be created when we know // what type it is? if (init_render_data(render) == -1) { sfd->fd_local = -1; return 0; } sfd->dmabuf_bo = make_dmabuf(render, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { sfd->fd_local = -1; return 0; } sfd->fd_local = export_dmabuf(sfd->dmabuf_bo); return 0; } case WMSG_OPEN_DMAVID_DST: case WMSG_OPEN_DMAVID_DST_V2: { const size_t min_msg_size = sizeof(struct dmabuf_slice_data) + ((type == WMSG_OPEN_DMAVID_DST_V2) ? 
sizeof(struct wmsg_open_dmavid) : sizeof(struct wmsg_open_dmabuf)); if ((ret = check_message_min_size(type, msg, min_msg_size)) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } /* remote read data, this side writes data */ sfd->type = FDC_DMAVID_IW; if (type == WMSG_OPEN_DMAVID_DST) { const struct wmsg_open_dmabuf header = *(const struct wmsg_open_dmabuf *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmabuf), sizeof(struct dmabuf_slice_data)); sfd->video_fmt = VIDEO_H264; } else { const struct wmsg_open_dmavid header = *(const struct wmsg_open_dmavid *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmavid), sizeof(struct dmabuf_slice_data)); uint32_t vid_type = header.vid_flags & 0xff; if (vid_type == (uint32_t)VIDEO_H264 || vid_type == (uint32_t)VIDEO_VP9 || vid_type == (uint32_t)VIDEO_AV1) { sfd->video_fmt = (enum video_coding_fmt)vid_type; } else { wp_error("Unidentified video format %u for RID=%d", vid_type, sfd->remote_id); return ERR_FATAL; } } if (init_render_data(render) == -1) { sfd->fd_local = -1; return 0; } sfd->dmabuf_bo = make_dmabuf(render, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { wp_error("FDC_DMAVID_IW: RID=%d make_dmabuf failure, sz=%d (%d)", sfd->remote_id, (int)sfd->buffer_size, (int)sizeof(struct dmabuf_slice_data)); return 0; } sfd->fd_local = export_dmabuf(sfd->dmabuf_bo); if (setup_video_decode(sfd, render) == -1) { wp_error("Video decoding setup failed for RID=%d", sfd->remote_id); } return 0; } case WMSG_OPEN_DMAVID_SRC: case WMSG_OPEN_DMAVID_SRC_V2: { const size_t min_msg_size = sizeof(struct dmabuf_slice_data) + ((type == WMSG_OPEN_DMAVID_SRC_V2) ?
sizeof(struct wmsg_open_dmavid) : sizeof(struct wmsg_open_dmabuf)); if ((ret = check_message_min_size(type, msg, min_msg_size)) < 0) { return ret; } if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } /* remote writes data, this side reads data */ sfd->type = FDC_DMAVID_IR; // TODO: deduplicate this section with WMSG_OPEN_DMAVID_DST, // or stop handling V1 and V2 in the same branch if (type == WMSG_OPEN_DMAVID_SRC) { const struct wmsg_open_dmabuf header = *(const struct wmsg_open_dmabuf *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmabuf), sizeof(struct dmabuf_slice_data)); sfd->video_fmt = VIDEO_H264; } else { const struct wmsg_open_dmavid header = *(const struct wmsg_open_dmavid *) msg->data; sfd->buffer_size = header.file_size; memcpy(&sfd->dmabuf_info, msg->data + sizeof(struct wmsg_open_dmavid), sizeof(struct dmabuf_slice_data)); uint32_t vid_type = header.vid_flags & 0xff; if (vid_type == (uint32_t)VIDEO_H264 || vid_type == (uint32_t)VIDEO_VP9 || vid_type == (uint32_t)VIDEO_AV1) { sfd->video_fmt = (enum video_coding_fmt)vid_type; } else { wp_error("Unidentified video format %u for RID=%d", vid_type, sfd->remote_id); return ERR_FATAL; } } if (init_render_data(render) == -1) { sfd->fd_local = -1; return 0; } sfd->dmabuf_bo = make_dmabuf(render, &sfd->dmabuf_info); if (!sfd->dmabuf_bo) { wp_error("FDC_DMAVID_IR: RID=%d make_dmabuf failure", sfd->remote_id); return 0; } sfd->fd_local = export_dmabuf(sfd->dmabuf_bo); if (setup_video_encode(sfd, render, threads->nthreads) == -1) { wp_error("Video encoding setup failed for RID=%d", sfd->remote_id); } return 0; } case WMSG_OPEN_RW_PIPE: case WMSG_OPEN_IW_PIPE: case WMSG_OPEN_IR_PIPE: { if ((ret = open_sfd(map, &sfd, remote_id)) < 0) { return ret; } sfd->type = FDC_PIPE; int pipedes[2]; if (type == WMSG_OPEN_RW_PIPE) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, pipedes) == -1) { wp_error("Failed to create a socketpair: %s", strerror(errno)); return 0;
} } else { if (pipe(pipedes) == -1) { wp_error("Failed to create a pipe: %s", strerror(errno)); return 0; } } /* We pass 'fd_local' to the client, although we only * read and write from pipe_fd if it exists. */ if (type == WMSG_OPEN_IR_PIPE) { // Read end is 0; the other process writes sfd->fd_local = pipedes[1]; sfd->pipe.fd = pipedes[0]; sfd->pipe.can_read = true; sfd->pipe.remote_can_write = true; } else if (type == WMSG_OPEN_IW_PIPE) { // Write end is 1; the other process reads sfd->fd_local = pipedes[0]; sfd->pipe.fd = pipedes[1]; sfd->pipe.can_write = true; sfd->pipe.remote_can_read = true; } else { // FDC_PIPE_RW // Here, it doesn't matter which end is which sfd->fd_local = pipedes[0]; sfd->pipe.fd = pipedes[1]; sfd->pipe.can_read = true; sfd->pipe.can_write = true; sfd->pipe.remote_can_read = true; sfd->pipe.remote_can_write = true; } if (set_nonblocking(sfd->pipe.fd) == -1) { wp_error("Failed to make private pipe end nonblocking: %s", strerror(errno)); return 0; } return 0; } /* SFD update messages */ case WMSG_EXTEND_FILE: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_open_file))) < 0) { return ret; } if ((ret = check_sfd_type(sfd, remote_id, type, FDC_FILE)) < 0) { return ret; } const struct wmsg_open_file *header = (const struct wmsg_open_file *)msg->data; if (header->file_size <= sfd->buffer_size) { wp_error("File extend message for RID=%d does not increase size %u %zu", remote_id, header->file_size, sfd->buffer_size); return ERR_FATAL; } if (ftruncate(sfd->fd_local, (off_t)header->file_size) == -1) { wp_error("Failed to resize file buffer: %s", strerror(errno)); return 0; } increase_buffer_sizes(sfd, threads, (size_t)header->file_size); // the extension implies the remote buffer is at least as large sfd->remote_bufsize = sfd->buffer_size; return 0; } case WMSG_BUFFER_FILL: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_buffer_fill))) < 0) { return ret; } if ((ret = check_sfd_type_2(sfd, remote_id, type, FDC_FILE,
FDC_DMABUF)) < 0) { return ret; } if (sfd->type == FDC_FILE && sfd->file_readonly) { wp_debug("Ignoring a fill update to readonly file at RID=%d", remote_id); return 0; } const struct wmsg_buffer_fill *header = (const struct wmsg_buffer_fill *)msg->data; size_t uncomp_size = header->end - header->start; struct thread_data *local = &threads->threads[0]; if (buf_ensure_size((int)uncomp_size, 1, &local->tmp_size, &local->tmp_buf) == -1) { wp_error("Failed to expand temporary decompression buffer, dropping update"); return 0; } const char *act_buffer = NULL; size_t act_size = 0; uncompress_buffer(threads, &threads->threads[0].comp_ctx, msg->size - sizeof(struct wmsg_buffer_fill), msg->data + sizeof(struct wmsg_buffer_fill), uncomp_size, local->tmp_buf, &act_size, &act_buffer); // `memsize+8*remote_nthreads` is the worst-case diff // expansion if (header->end > sfd->buffer_size) { wp_error("Transfer end overflow %" PRIu32 " > %zu", header->end, sfd->buffer_size); return ERR_FATAL; } if (act_size != header->end - header->start) { wp_error("Transfer size mismatch %zu %" PRIu32, act_size, header->end - header->start); return ERR_FATAL; } if (sfd->type == FDC_DMABUF) { int bpp = get_shm_bytes_per_pixel( sfd->dmabuf_info.format); if (bpp == -1) { wp_error("Skipping update of RID=%d, non-RGBA/monoplane fmt %x", sfd->remote_id, sfd->dmabuf_info.format); return 0; } memcpy(sfd->mem_mirror + header->start, act_buffer, header->end - header->start); void *handle = NULL; uint32_t map_stride = 0; char *mem_local = map_dmabuf(sfd->dmabuf_bo, true, &handle, &map_stride); if (!mem_local) { wp_error("Failed to apply fill to RID=%d, fd not mapped", sfd->remote_id); return 0; } uint32_t in_stride = sfd->dmabuf_info.strides[0]; if (map_stride == in_stride) { memcpy(mem_local + header->start, sfd->mem_mirror + header->start, header->end - header->start); } else { /* stride changing transfer */ uint32_t row_length = (uint32_t)bpp * sfd->dmabuf_info.width; uint32_t copy_size = 
(uint32_t)minu(row_length, minu(map_stride, in_stride)); stride_shifted_copy(mem_local, act_buffer - header->start, header->start, header->end - header->start, copy_size, in_stride, map_stride); } if (unmap_dmabuf(sfd->dmabuf_bo, handle) == -1) { return 0; } } else { memcpy(sfd->mem_mirror + header->start, act_buffer, header->end - header->start); memcpy(sfd->mem_local + header->start, act_buffer, header->end - header->start); } return 0; } case WMSG_BUFFER_DIFF: { if ((ret = check_message_min_size(type, msg, sizeof(struct wmsg_buffer_diff))) < 0) { return ret; } if ((ret = check_sfd_type_2(sfd, remote_id, type, FDC_FILE, FDC_DMABUF)) < 0) { return ret; } if (sfd->type == FDC_FILE && sfd->file_readonly) { wp_debug("Ignoring a diff update to readonly file at RID=%d", remote_id); return 0; } const struct wmsg_buffer_diff *header = (const struct wmsg_buffer_diff *)msg->data; struct thread_data *local = &threads->threads[0]; if (buf_ensure_size((int)(header->diff_size + header->ntrailing), 1, &local->tmp_size, &local->tmp_buf) == -1) { wp_error("Failed to expand temporary decompression buffer, dropping update"); return 0; } const char *act_buffer = NULL; size_t act_size = 0; uncompress_buffer(threads, &threads->threads[0].comp_ctx, msg->size - sizeof(struct wmsg_buffer_diff), msg->data + sizeof(struct wmsg_buffer_diff), header->diff_size + header->ntrailing, local->tmp_buf, &act_size, &act_buffer); // `memsize+8*remote_nthreads` is the worst-case diff // expansion if (act_size != header->diff_size + header->ntrailing) { wp_error("Transfer size mismatch %zu %u", act_size, header->diff_size + header->ntrailing); return ERR_FATAL; } if (sfd->type == FDC_DMABUF) { int bpp = get_shm_bytes_per_pixel( sfd->dmabuf_info.format); if (bpp == -1) { wp_error("Skipping update of RID=%d, non-RGBA/monoplane fmt %x", sfd->remote_id, sfd->dmabuf_info.format); return 0; } void *handle = NULL; uint32_t map_stride = 0; char *mem_local = map_dmabuf(sfd->dmabuf_bo, true, &handle, 
&map_stride); if (!mem_local) { wp_error("Failed to apply diff to RID=%d, fd not mapped", sfd->remote_id); return 0; } uint32_t in_stride = sfd->dmabuf_info.strides[0]; uint32_t row_length = (uint32_t)bpp * sfd->dmabuf_info.width; uint32_t copy_size = (uint32_t)minu(row_length, minu(map_stride, in_stride)); (void)in_stride; size_t nblocks = sfd->buffer_size / sizeof(uint32_t); size_t ndiffblocks = header->diff_size / sizeof(uint32_t); uint32_t *diff_blocks = (uint32_t *)act_buffer; for (size_t i = 0; i < ndiffblocks;) { size_t nfrom = (size_t)diff_blocks[i]; size_t nto = (size_t)diff_blocks[i + 1]; size_t span = nto - nfrom; if (nto > nblocks || nfrom >= nto || i + (nto - nfrom) >= ndiffblocks) { wp_error("Invalid copy range [%zu,%zu) > %zu=nblocks or [%zu,%zu) > %zu=ndiffblocks", nfrom, nto, nblocks, i + 1, i + 1 + span, ndiffblocks); break; } memcpy(sfd->mem_mirror + sizeof(uint32_t) * nfrom, diff_blocks + i + 2, sizeof(uint32_t) * span); stride_shifted_copy(mem_local, (char *)((diff_blocks + i + 2) - nfrom), sizeof(uint32_t) * nfrom, sizeof(uint32_t) * span, copy_size, in_stride, map_stride); i += span + 2; } if (header->ntrailing > 0) { size_t offset = sfd->buffer_size - header->ntrailing; memcpy(sfd->mem_mirror + offset, act_buffer + header->diff_size, header->ntrailing); stride_shifted_copy(mem_local, (act_buffer + header->diff_size) - offset, offset, header->ntrailing, copy_size, in_stride, map_stride); } if (unmap_dmabuf(sfd->dmabuf_bo, handle) == -1) { return 0; } } else { DTRACE_PROBE2(waypipe, apply_diff_enter, sfd->buffer_size, header->diff_size); apply_diff(sfd->buffer_size, sfd->mem_mirror, sfd->mem_local, header->diff_size, header->ntrailing, act_buffer); DTRACE_PROBE(waypipe, apply_diff_exit); } return 0; } case WMSG_PIPE_TRANSFER: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_PIPE)) < 0) { return ret; } if (!sfd->pipe.can_write || sfd->pipe.pending_w_shutdown) { wp_debug("Discarding transfer to pipe RID=%d, because pipe cannot be written 
to", remote_id); return 0; } size_t transf_data_sz = msg->size - sizeof(struct wmsg_basic); int netsize = sfd->pipe.send.used + (int)transf_data_sz; if (buf_ensure_size(netsize, 1, &sfd->pipe.send.size, (void **)&sfd->pipe.send.data) == -1) { wp_error("Failed to expand pipe transfer buffer, dropping data"); return 0; } memcpy(sfd->pipe.send.data + sfd->pipe.send.used, msg->data + sizeof(struct wmsg_basic), transf_data_sz); sfd->pipe.send.used = netsize; // The pipe itself will be flushed/or closed later by // flush_writable_pipes sfd->pipe.writable = true; return 0; } case WMSG_PIPE_SHUTDOWN_R: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_PIPE)) < 0) { return ret; } sfd->pipe.remote_can_write = false; if (!sfd->pipe.can_read) { wp_debug("Discarding read shutdown to pipe RID=%d, which cannot read", remote_id); return 0; } pipe_close_read(sfd); return 0; } case WMSG_PIPE_SHUTDOWN_W: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_PIPE)) < 0) { return ret; } sfd->pipe.remote_can_read = false; if (!sfd->pipe.can_write) { wp_debug("Discarding write shutdown to pipe RID=%d, which cannot write", remote_id); return 0; } if (sfd->pipe.send.used <= 0) { pipe_close_write(sfd); } else { /* Shutdown as soon as the current data has been written */ sfd->pipe.pending_w_shutdown = true; } return 0; } case WMSG_SEND_DMAVID_PACKET: { if ((ret = check_sfd_type(sfd, remote_id, type, FDC_DMAVID_IW)) < 0) { return ret; } if (!sfd->dmabuf_bo) { wp_error("Applying update to nonexistent dma buffer object rid=%d", sfd->remote_id); return 0; } struct bytebuf data = { .data = msg->data + sizeof(struct wmsg_basic), .size = msg->size - sizeof(struct wmsg_basic)}; apply_video_packet(sfd, render, &data); return 0; } }; /* all returns should happen inside switch, so none here */ } bool shadow_decref_protocol(struct shadow_fd *sfd) { sfd->refcount.protocol--; return destroy_shadow_if_unreferenced(sfd); } bool shadow_decref_transfer(struct shadow_fd *sfd) { sfd->refcount.transfer--; if 
(sfd->refcount.transfer == 0 && sfd->type == FDC_PIPE) { /* fd_local has been transferred for the last time, so close * it and make it match pipe.fd, just as on the side where * the original pipe was introduced */ if (sfd->pipe.fd != sfd->fd_local) { checked_close(sfd->fd_local); sfd->fd_local = sfd->pipe.fd; } } return destroy_shadow_if_unreferenced(sfd); } struct shadow_fd *shadow_incref_protocol(struct shadow_fd *sfd) { sfd->has_owner = true; sfd->refcount.protocol++; return sfd; } struct shadow_fd *shadow_incref_transfer(struct shadow_fd *sfd) { sfd->has_owner = true; if (sfd->type == FDC_PIPE && sfd->refcount.transfer == 0) { wp_error("The other pipe end may have been closed"); } sfd->refcount.transfer++; return sfd; } void decref_transferred_fds(struct fd_translation_map *map, int nfds, int fds[]) { for (int i = 0; i < nfds; i++) { struct shadow_fd *sfd = get_shadow_for_local_fd(map, fds[i]); shadow_decref_transfer(sfd); } } void decref_transferred_rids( struct fd_translation_map *map, int nids, int ids[]) { for (int i = 0; i < nids; i++) { struct shadow_fd *sfd = get_shadow_for_rid(map, ids[i]); shadow_decref_transfer(sfd); } } int count_npipes(const struct fd_translation_map *map) { int np = 0; for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->type == FDC_PIPE) { np++; } } return np; } int fill_with_pipes(const struct fd_translation_map *map, struct pollfd *pfds, bool check_read) { int np = 0; for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->type == FDC_PIPE && cur->pipe.fd != -1) { pfds[np].fd = cur->pipe.fd; pfds[np].events = 0; if (check_read && cur->pipe.readable) { pfds[np].events |= POLLIN; } if (cur->pipe.send.used > 0) { pfds[np].events |= POLLOUT; } np++; } } return np; } 
static struct shadow_fd *get_shadow_for_pipe_fd( struct fd_translation_map *map, int pipefd) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; if (cur->type == FDC_PIPE && cur->pipe.fd == pipefd) { return cur; } } return NULL; } void mark_pipe_object_statuses( struct fd_translation_map *map, int nfds, struct pollfd *pfds) { for (int i = 0; i < nfds; i++) { int lfd = pfds[i].fd; struct shadow_fd *sfd = get_shadow_for_pipe_fd(map, lfd); if (!sfd) { wp_error("Failed to find shadow struct for .pipe_fd=%d", lfd); continue; } if (pfds[i].revents & POLLIN || pfds[i].revents & POLLHUP) { /* In */ sfd->pipe.readable = true; } if (pfds[i].revents & POLLOUT) { sfd->pipe.writable = true; } if (pfds[i].revents & POLLERR) { wp_debug("Pipe poll returned POLLERR for .pipe_fd=%d, closing", lfd); if (sfd->pipe.can_read) { pipe_close_read(sfd); } if (sfd->pipe.can_write) { pipe_close_write(sfd); } } } } void flush_writable_pipes(struct fd_translation_map *map) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *sfd = (struct shadow_fd *)lcur; if (sfd->type != FDC_PIPE || !sfd->pipe.writable || sfd->pipe.send.used <= 0) { continue; } sfd->pipe.writable = false; wp_debug("Flushing %zd bytes into RID=%d", sfd->pipe.send.used, sfd->remote_id); ssize_t changed = write(sfd->pipe.fd, sfd->pipe.send.data, (size_t)sfd->pipe.send.used); if (changed == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) { wp_debug("Writing to pipe RID=%d would block", sfd->remote_id); continue; } else if (changed == -1 && (errno == EPIPE || errno == EBADF)) { /* No process has access to the other end of the pipe, * or the file descriptor is otherwise permanently * unwriteable */ pipe_close_write(sfd); } else if (changed == -1) { wp_error("Failed to write into pipe with remote_id=%d: 
%s", sfd->remote_id, strerror(errno)); } else { wp_debug("Wrote %zd more bytes into pipe RID=%d", changed, sfd->remote_id); sfd->pipe.send.used -= (int)changed; if (sfd->pipe.send.used > 0) { memmove(sfd->pipe.send.data, sfd->pipe.send.data + changed, (size_t)sfd->pipe.send.used); } if (sfd->pipe.send.used <= 0 && sfd->pipe.pending_w_shutdown) { /* A shutdown request was made, but can only be * applied now that the write buffer has been * cleared */ pipe_close_write(sfd); sfd->pipe.pending_w_shutdown = false; } } } /* Destroy any new unreferenced objects */ for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; destroy_shadow_if_unreferenced(cur); } } void read_readable_pipes(struct fd_translation_map *map) { for (struct shadow_fd_link *lcur = map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *sfd = (struct shadow_fd *)lcur; if (sfd->type != FDC_PIPE || !sfd->pipe.readable) { continue; } if (sfd->pipe.recv.size == 0) { sfd->pipe.recv.size = 32768; sfd->pipe.recv.data = malloc((size_t)sfd->pipe.recv.size); } if (sfd->pipe.recv.size > sfd->pipe.recv.used) { sfd->pipe.readable = false; ssize_t changed = read(sfd->pipe.fd, sfd->pipe.recv.data + sfd->pipe.recv.used, (size_t)(sfd->pipe.recv.size - sfd->pipe.recv.used)); if (changed == 0) { /* No process has access to the other end of the * pipe */ pipe_close_read(sfd); } else if (changed == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) { wp_debug("Reading from pipe RID=%d would block", sfd->remote_id); continue; } else if (changed == -1) { wp_error("Failed to read from pipe with remote_id=%d: %s", sfd->remote_id, strerror(errno)); } else { wp_debug("Read %zd more bytes from pipe RID=%d", changed, sfd->remote_id); sfd->pipe.recv.used += (int)changed; } } } /* Destroy any new unreferenced objects */ for (struct shadow_fd_link *lcur = 
map->link.l_next, *lnxt = lcur->l_next; lcur != &map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; destroy_shadow_if_unreferenced(cur); } } void extend_shm_shadow(struct thread_pool *threads, struct shadow_fd *sfd, size_t new_size) { if (sfd->buffer_size >= new_size) { return; } // Verify that the file size actually increased struct stat st; int fs = fstat(sfd->fd_local, &st); if (fs == -1) { wp_error("Checking file size failed: %s", strerror(errno)); return; } if ((size_t)st.st_size < new_size) { wp_error("Trying to resize file larger (%d) than the actual file size (%d), ignoring", (int)new_size, (int)st.st_size); return; } increase_buffer_sizes(sfd, threads, new_size); // leave `sfd->remote_bufsize` unchanged, and mark dirty sfd->is_dirty = true; } void run_task(struct task_data *task, struct thread_data *local) { if (task->type == TASK_COMPRESS_BLOCK) { worker_run_compress_block(task, local); } else if (task->type == TASK_COMPRESS_DIFF) { worker_run_compress_diff(task, local); } else { wp_error("Unidentified task type"); } } int start_parallel_work(struct thread_pool *pool, struct thread_msg_recv_buf *recv_queue) { pthread_mutex_lock(&pool->work_mutex); if (recv_queue->zone_start != recv_queue->zone_end) { wp_error("Some async messages not yet sent"); } recv_queue->zone_start = 0; recv_queue->zone_end = 0; int num_mt_tasks = pool->stack_count; if (buf_ensure_size(num_mt_tasks, sizeof(struct iovec), &recv_queue->size, (void **)&recv_queue->data) == -1) { wp_error("Failed to provide enough space for receive queue, skipping all work tasks"); num_mt_tasks = 0; } pool->do_work = num_mt_tasks > 0; /* Start the work tasks here */ if (num_mt_tasks > 0) { pthread_cond_broadcast(&pool->work_cond); } pthread_mutex_unlock(&pool->work_mutex); return num_mt_tasks; } bool request_work_task( struct thread_pool *pool, struct task_data *task, bool *is_done) { pthread_mutex_lock(&pool->work_mutex); *is_done = pool->stack_count == 0 && 
pool->tasks_in_progress == 0; bool has_task = false; if (pool->stack_count > 0 && pool->do_work) { int i = pool->stack_count - 1; if (pool->stack[i].type != TASK_STOP) { *task = pool->stack[i]; has_task = true; pool->stack_count--; pool->tasks_in_progress++; if (pool->stack_count <= 0) { pool->do_work = false; } } } pthread_mutex_unlock(&pool->work_mutex); return has_task; } static void *worker_thread_main(void *arg) { struct thread_data *data = arg; struct thread_pool *pool = data->pool; /* The loop is globally locked by default, and only unlocked in * pthread_cond_wait. Yes, there are fancier and faster schemes. */ pthread_mutex_lock(&pool->work_mutex); while (1) { while (!pool->do_work) { pthread_cond_wait(&pool->work_cond, &pool->work_mutex); } if (pool->stack_count <= 0) { pool->do_work = false; continue; } /* Copy task, since the queue may be resized */ int i = pool->stack_count - 1; struct task_data task = pool->stack[i]; if (task.type == TASK_STOP) { break; } pool->tasks_in_progress++; pool->stack_count--; if (pool->stack_count <= 0) { pool->do_work = false; } pthread_mutex_unlock(&pool->work_mutex); run_task(&task, data); pthread_mutex_lock(&pool->work_mutex); uint8_t triv = 0; pool->tasks_in_progress--; if (write(pool->selfpipe_w, &triv, 1) == -1) { wp_error("Failed to write to self-pipe"); } } pthread_mutex_unlock(&pool->work_mutex); return NULL; }
waypipe-v0.9.1/src/shadow.h
/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright
notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_SHADOW_H #define WAYPIPE_SHADOW_H #include <pthread.h> #include <stdbool.h> #include <stdint.h> #include <stdlib.h> #include "dmabuf.h" #include "interval.h" #include "kernel.h" #include "util.h" struct pollfd; typedef VAGenericID VAContextID; typedef VAGenericID VASurfaceID; typedef VAGenericID VABufferID; typedef struct ZSTD_CCtx_s ZSTD_CCtx; typedef struct ZSTD_DCtx_s ZSTD_DCtx; struct comp_ctx { void *lz4_extstate; ZSTD_CCtx *zstd_ccontext; ZSTD_DCtx *zstd_dcontext; }; enum compression_mode { COMP_NONE, COMP_LZ4, COMP_ZSTD, }; struct shadow_fd_link { struct shadow_fd_link *l_prev, *l_next; /* Doubly linked list */ }; struct fd_translation_map { struct shadow_fd_link link; /* store in first position */ int max_local_id; int local_sign; }; /** Thread pool and associated global information */ struct thread_pool { int nthreads; struct thread_data *threads; // including a slot for the zero thread /* Compression information is globally shared, to save memory, and * because most rapidly changing application buffers have similar * content and use the same settings */ enum compression_mode compression; int compression_level; interval_diff_fn_t diff_func; int diff_alignment_bits; // Mutable state pthread_mutex_t work_mutex; pthread_cond_t work_cond; bool do_work; int stack_count, stack_size; struct task_data *stack; // TODO: distinct queues for wayland->channel and channel->wayland, // to
make multithreaded decompression possible int tasks_in_progress; // to wake the main loop int selfpipe_r, selfpipe_w; }; struct thread_data { pthread_t thread; struct thread_pool *pool; /* Thread local data */ struct comp_ctx comp_ctx; /* A local temporary buffer, used to e.g. store diff sections before * compression */ void *tmp_buf; int tmp_size; }; enum task_type { TASK_STOP, TASK_COMPRESS_BLOCK, TASK_COMPRESS_DIFF, }; /** Specification for a task to be run on another thread */ struct task_data { enum task_type type; struct shadow_fd *sfd; /* For block compression option */ int zone_start, zone_end; /* For diff compression option */ struct interval *damage_intervals; int damage_len; bool damaged_end; struct thread_msg_recv_buf *msg_queue; }; /** Shadow object types, signifying file descriptor type and usage */ enum fdcat { FDC_UNKNOWN, FDC_FILE, /* Shared memory buffer */ FDC_PIPE, /* pipe-like object */ FDC_DMABUF, /* DMABUF buffer (will be exactly replicated) */ FDC_DMAVID_IR, /* DMABUF-based video, reading from program */ FDC_DMAVID_IW, /* DMABUF-based video, writing to program */ }; struct pipe_buffer { char *data; int size; int used; }; /** Reference count for a struct shadow_fd; the object can be safely deleted * iff all counts are zero/false. */ struct refcount { /** How many protocol objects refer to this shadow structure */ int protocol; /** How many times must the shadow_fd still be sent to the Wayland * program */ int transfer; /** Do any thread tasks potentially refer to this */ bool compute; }; struct pipe_state { /** Temporary buffers to contain small chunks of data, before it is * transported further */ struct pipe_buffer send; struct pipe_buffer recv; /** Internal file descriptor through which all pipe interactions * are mediated. This equals fd_local, except during the time period * where the shadow_fd is created but the fd_local has not yet been * sent to the remote process. 
*/ int fd; /** 4 bits are needed for the pipe state machine (once the pipe has * been created. They describe the properties of `pipe_fd` locally * and remotely */ bool can_read, can_write; bool remote_can_read, remote_can_write; /** What is the state of the pipe, according to poll ? * (POLLIN|POLLHUP -> readable ; POLLOUT -> writeable) */ bool readable, writable; bool pending_w_shutdown; }; /** * @brief The shadow_fd struct * * This structure is created to track each file descriptor used by the * Wayland protocol. */ struct shadow_fd { struct shadow_fd_link link; /* part of doubly linked list */ enum fdcat type; int remote_id; // + if created serverside; - if created clientside int fd_local; /** true iff the shadow structure is newly created and no message * to create a copy has been sent yet */ bool only_here; // Dirty state. bool has_owner; // Are there protocol handlers which control the // is_dirty flag? bool is_dirty; // If so, should this file be scanned for updates? struct damage damage; /* For worker threads, contains their allocated damage intervals */ struct interval *damage_task_interval_store; struct refcount refcount; // common buffers for file-like types /* total memory size of either the dmabuf or the file */ size_t buffer_size; /* mmap'd long term for files, short term for dmabufs */ char *mem_local; /* exact mirror of the contents, with proper alignment */ char *mem_mirror; void *mem_mirror_handle; // File data size_t remote_bufsize; // used to check for and send file extensions bool file_readonly; // Pipe data struct pipe_state pipe; // DMAbuf data struct gbm_bo *dmabuf_bo; struct dmabuf_slice_data dmabuf_info; void *dmabuf_map_handle; /* Nonnull when DMABUF is currently mapped */ uint32_t dmabuf_map_stride; /* stride at which mem_local is mapped */ /* temporary cache of stride-fixed mem_local. 
Same dimensions as * mem_mirror */ char *dmabuf_warped; void *dmabuf_warped_handle; // Video data struct AVCodecContext *video_context; struct AVFrame *video_local_frame; /* In format matching DMABUF */ struct AVFrame *video_tmp_frame; /* To hold intermediate copies */ struct AVFrame *video_yuv_frame; /* In enc/dec preferred format */ void *video_yuv_frame_data; void *video_local_frame_data; struct AVPacket *video_packet; struct SwsContext *video_color_context; int64_t video_frameno; enum video_coding_fmt video_fmt; VASurfaceID video_va_surface; VAContextID video_va_context; VABufferID video_va_pipeline; }; const char *compression_mode_to_str(enum compression_mode mode); void setup_translation_map(struct fd_translation_map *map, bool display_side); void cleanup_translation_map(struct fd_translation_map *map); int setup_thread_pool(struct thread_pool *pool, enum compression_mode compression, int compression_level, int n_threads); void cleanup_thread_pool(struct thread_pool *pool); /** Given a file descriptor, return which type code would be applied to its * shadow entry. (For example, FDC_PIPE_IR for a pipe-like object that can only * be read.) Sets *size if non-NULL and if the object is an FDC_FILE. */ enum fdcat get_fd_type(int fd, size_t *size); const char *fdcat_to_str(enum fdcat cat); /** Given a local file descriptor, type hint, and already computed size, * produce matching global id, and register it into the translation map if * not already done. The function can also be provided with optional extra * information (*info). * * This may return NULL on allocation failure; other failures will in general * warn and disable replication features. **/ struct shadow_fd *translate_fd(struct fd_translation_map *map, struct render_data *render, struct thread_pool *threads, int fd, enum fdcat type, size_t sz, const struct dmabuf_slice_data *info, bool force_pipe_iw); /** Given a struct shadow_fd, produce some number of corresponding file update * transfer messages. 
All pointers will be to existing memory. */
void collect_update(struct thread_pool *threads, struct shadow_fd *cur,
		struct transfer_queue *transfers, bool use_old_dmavid_req);
/** After all thread pool tasks have completed, reduce refcounts and clean up
 * related data. The caller should then invoke destroy_shadow_if_unreferenced.
 */
void finish_update(struct shadow_fd *sfd);
/** Apply a data update message to an element in the translation map, creating
 * an entry when there is none.
 *
 * Returns -1 if the error is the fault of the other waypipe instance,
 * 0 otherwise. (For example, syscall failure => 0, bad message length => -1.)
 */
int apply_update(struct fd_translation_map *map, struct thread_pool *threads,
		struct render_data *render, enum wmsg_type type, int remote_id,
		const struct bytebuf *msg);
/** Get the shadow structure associated to a remote id, or NULL if it does not
 * exist */
struct shadow_fd *get_shadow_for_rid(struct fd_translation_map *map, int rid);
/** Get the shadow structure for a local file descriptor, or NULL if it does
 * not exist */
struct shadow_fd *get_shadow_for_local_fd(
		struct fd_translation_map *map, int lfd);
/** Count the number of pipe fds being maintained by the translation map */
int count_npipes(const struct fd_translation_map *map);
/** Fill in pollfd entries, with POLLIN | POLLOUT, for applicable pipe objects.
 * Specifically, if check_read is true, indicate all readable pipes.
 * Also, indicate all writeable pipes for which we also have something to
 * write.
*/ int fill_with_pipes(const struct fd_translation_map *map, struct pollfd *pfds, bool check_read); /** mark pipe shadows as being ready to read or write */ void mark_pipe_object_statuses( struct fd_translation_map *map, int nfds, struct pollfd *pfds); /** For pipes marked writeable, flush as much buffered data as possible */ void flush_writable_pipes(struct fd_translation_map *map); /** For pipes marked readable, read as much data as possible without blocking */ void read_readable_pipes(struct fd_translation_map *map); /** pipe file descriptors should never be removed, since then close-detection * fails. This closes the second pipe ends if we own both of them */ void close_local_pipe_ends(struct fd_translation_map *map); /** If a pipe is remotely closed, but not locally closed, then close it too */ void close_rclosed_pipes(struct fd_translation_map *map); /** Reduce the reference count for a shadow structure which is owned. The * structure should not be used by the caller after this point. Returns true if * pointer deleted. */ bool shadow_decref_protocol(struct shadow_fd *); bool shadow_decref_transfer(struct shadow_fd *); /** Increase the reference count of a shadow structure, and mark it as being * owned. For convenience, returns the passed-in structure. 
*/ struct shadow_fd *shadow_incref_protocol(struct shadow_fd *); struct shadow_fd *shadow_incref_transfer(struct shadow_fd *); /** If the shadow structure has no references, destroy it and remove it from the * map */ bool destroy_shadow_if_unreferenced(struct shadow_fd *sfd); /** Decrease reference count for all objects in the given list, deleting * iff they are owned by protocol objects and have refcount zero */ void decref_transferred_fds( struct fd_translation_map *map, int nfds, int fds[]); void decref_transferred_rids( struct fd_translation_map *map, int nids, int ids[]); /** If sfd->type == FDC_FILE, increase the size of the backing data to support * at least new_size, and mark the new part of underlying file as dirty */ void extend_shm_shadow(struct thread_pool *threads, struct shadow_fd *sfd, size_t new_size); /** Notify the threads so that they can start working on the tasks in the pool, * and return the total number of tasks */ int start_parallel_work(struct thread_pool *pool, struct thread_msg_recv_buf *recv_queue); /** Return true if there is a work task (not a stop task) remaining for the * main thread to work on; also set *is_done if all tasks have completed. 
*/
bool request_work_task(struct thread_pool *pool, struct task_data *task,
		bool *is_done);
/** Run a work task */
void run_task(struct task_data *task, struct thread_data *local);

// video.c
void cleanup_hwcontext(struct render_data *rd);
bool video_supports_dmabuf_format(uint32_t format, uint64_t modifier);
bool video_supports_shm_format(uint32_t format);
/** Fast check for whether video coding format can be used */
bool video_supports_coding_format(enum video_coding_fmt fmt);
/** set redirect for ffmpeg logging through wp_log */
void setup_video_logging(void);
void destroy_video_data(struct shadow_fd *sfd);
/** These need to have the dmabuf/dmabuf_info set beforehand */
int setup_video_encode(
		struct shadow_fd *sfd, struct render_data *rd, int nthreads);
int setup_video_decode(struct shadow_fd *sfd, struct render_data *rd);
/** the video frame to be transferred should already have been transferred into
 * `sfd->mem_mirror`. */
void collect_video_from_mirror(
		struct shadow_fd *sfd, struct transfer_queue *transfers);
/** Decompress a video packet and apply the new frame onto the shadow_fd */
void apply_video_packet(struct shadow_fd *sfd, struct render_data *rd,
		const struct bytebuf *data);

#endif // WAYPIPE_SHADOW_H

waypipe-v0.9.1/src/util.c
/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "util.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include #ifdef HAS_VSOCK #include #endif int parse_uint32(const char *str, uint32_t *val) { if (!str[0] || (str[0] == '0' && str[1])) { return -1; } uint64_t v = 0; for (const char *cursor = str; *cursor; cursor++) { if (*cursor < '0' || *cursor > '9') { return -1; } uint64_t s = (uint64_t)(*cursor - '0'); v *= 10; v += s; if (v >= (1uLL << 32)) { return -1; } } *val = (uint32_t)v; return 0; } /* An integer-to-string converter which is async-signal-safe, unlike sprintf */ static char *uint_to_str(uint32_t i, char buf[static 11]) { char *pos = &buf[10]; *pos = '\0'; while (i) { --pos; *pos = (char)((i % 10) + (uint32_t)'0'); i /= 10; } return pos; } size_t multi_strcat(char *dest, size_t dest_space, ...) { size_t net_len = 0; va_list args; va_start(args, dest_space); while (true) { const char *str = va_arg(args, const char *); if (!str) { break; } net_len += strlen(str); if (net_len >= dest_space) { va_end(args); dest[0] = '\0'; return 0; } } va_end(args); va_start(args, dest_space); char *pos = dest; while (true) { const char *str = va_arg(args, const char *); if (!str) { break; } size_t len = strlen(str); memcpy(pos, str, len); pos += len; } va_end(args); *pos = '\0'; return net_len; } bool is_utf8(const char *str) { /* See Unicode Standard 15.0.0, Chapter 3, D92 and Table 3.7. 
*/ const uint8_t *v = (const uint8_t *)str; while (*v) { if (v[0] <= 0x7f) { v++; } else if (v[0] <= 0xdf) { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } v += 2; } else if (v[0] <= 0xef) { if (v[0] == 0xe0) { if (v[1] < 0xa0 || v[1] > 0xbf) { return false; } } else if (v[0] <= 0xec) { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } } else if (v[0] == 0xed) { if (v[1] < 0x80 || v[1] > 0x9f) { return false; } } else { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } } if (v[2] < 0x80 || v[2] > 0xbf) { return false; } v += 3; } else if (v[0] <= 0xf4) { if (v[0] == 0xf0) { if (v[1] < 0x90 || v[1] > 0xbf) { return false; } } else if (v[0] <= 0xf3) { if (v[1] < 0x80 || v[1] > 0xbf) { return false; } } else { if (v[1] < 0x80 || v[1] > 0x8f) { return false; } } if (v[2] < 0x80 || v[2] > 0xbf || v[3] < 0x80 || v[3] > 0xbf) { return false; } v += 4; } else { return false; } } return true; } bool shutdown_flag = false; uint64_t inherited_fds[4] = {0, 0, 0, 0}; void handle_sigint(int sig) { (void)sig; char buf[48]; char tmp[11]; const char *pidstr = uint_to_str((uint32_t)getpid(), tmp); size_t len = multi_strcat( buf, sizeof(buf), "SIGINT(", pidstr, ")\n", NULL); (void)write(STDERR_FILENO, buf, len); shutdown_flag = true; } int set_nonblocking(int fd) { int flags = fcntl(fd, F_GETFL, 0); if (flags == -1) { return -1; } return fcntl(fd, F_SETFL, flags | O_NONBLOCK); } int set_cloexec(int fd) { int flags = fcntl(fd, F_GETFD, 0); if (flags == -1) { return -1; } return fcntl(fd, F_SETFD, flags | FD_CLOEXEC); } int setup_nb_socket(int cwd_fd, struct socket_path path, int nmaxclients, int *folder_fd_out, int *socket_fd_out) { if (path.filename->sun_family != AF_UNIX) { wp_error("Address family should be AF_UNIX, was %d", path.filename->sun_family); return -1; } if (strchr(path.filename->sun_path, '/')) { wp_error("Address '%s' should be a pure filename and not contain any forward slashes", path.filename->sun_path); return -1; } int sock = socket(AF_UNIX, SOCK_STREAM, 0); if 
(sock == -1) { wp_error("Error creating socket: %s", strerror(errno)); return -1; } if (set_nonblocking(sock) == -1) { wp_error("Error making socket nonblocking: %s", strerror(errno)); checked_close(sock); return -1; } int folder_fd = open_folder(path.folder); if (folder_fd == -1) { wp_error("Error opening folder in which to connect to socket: %s", strerror(errno)); checked_close(sock); return -1; } if (fchdir(folder_fd) == -1) { wp_error("Error changing to folder '%s'", path.folder); checked_close(sock); checked_close(folder_fd); return -1; } if (bind(sock, (struct sockaddr *)path.filename, sizeof(*path.filename)) == -1) { wp_error("Error binding socket at %s: %s", path.filename->sun_path, strerror(errno)); checked_close(sock); checked_close(folder_fd); if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } return -1; } if (listen(sock, nmaxclients) == -1) { wp_error("Error listening to socket at %s: %s", path.filename->sun_path, strerror(errno)); checked_close(sock); checked_close(folder_fd); unlink(path.filename->sun_path); if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } return -1; } if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } *folder_fd_out = folder_fd; *socket_fd_out = sock; return 0; } int connect_to_socket_at_folder(int cwd_fd, int folder_fd, const struct sockaddr_un *filename, int *socket_fd) { if (filename->sun_family != AF_UNIX) { wp_error("Address family should be AF_UNIX, was %d", filename->sun_family); return -1; } if (strchr(filename->sun_path, '/')) { wp_error("Address '%s' should be a pure filename and not contain any forward slashes", filename->sun_path); return -1; } int chanfd = socket(AF_UNIX, SOCK_STREAM, 0); if (chanfd == -1) { wp_error("Error creating socket: %s", strerror(errno)); return -1; } if (fchdir(folder_fd) == -1) { wp_error("Error changing to folder\n"); checked_close(chanfd); return -1; } if (connect(chanfd, (struct 
sockaddr *)filename, sizeof(*filename)) == -1) { wp_error("Error connecting to socket (%s): %s", filename->sun_path, strerror(errno)); checked_close(chanfd); if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } return -1; } if (fchdir(cwd_fd) == -1) { wp_error("Error returning to current working directory"); } *socket_fd = chanfd; return 0; } int connect_to_socket(int cwd_fd, struct socket_path path, int *folder_fd_out, int *socket_fd_out) { int folder_fd = open_folder(path.folder); if (folder_fd == -1) { wp_error("Error opening folder in which to connect to socket: %s", strerror(errno)); return -1; } int ret = connect_to_socket_at_folder( cwd_fd, folder_fd, path.filename, socket_fd_out); if (folder_fd_out && ret == 0) { *folder_fd_out = folder_fd; } else { checked_close(folder_fd); } return ret; } int split_socket_path(char *src_path, struct sockaddr_un *rel_socket) { size_t l = strlen(src_path); if (l == 0) { wp_error("Socket path to split is empty"); return -1; } size_t s = l; while (src_path[s] != '/' && s > 0) { s--; } if (l - s >= sizeof(rel_socket->sun_path)) { wp_error("Filename part '%s' of socket path is too long: %zu bytes >= sizeof(sun_path) = %zu", src_path + s, l - s, sizeof(rel_socket->sun_path)); return -1; } size_t t = (src_path[s] == '/') ? s + 1 : 0; rel_socket->sun_family = AF_UNIX; memset(rel_socket->sun_path, 0x3f, sizeof(rel_socket->sun_path)); memcpy(rel_socket->sun_path, src_path + t, l - t + 1); src_path[s] = '\0'; return 0; } void unlink_at_folder(int orig_dir_fd, int target_dir_fd, const char *target_dir_name, const char *filename) { if (fchdir(target_dir_fd) == -1) { wp_error("Error switching folder to '%s': %s", target_dir_name ? target_dir_name : "(null)", strerror(errno)); return; } if (unlink(filename) == -1) { wp_error("Unlinking '%s' in '%s' failed: %s", filename, target_dir_name ? 
target_dir_name : "(null)", strerror(errno)); } if (fchdir(orig_dir_fd) == -1) { wp_error("Error switching folder back to cwd: %s", strerror(errno)); } } bool files_equiv(int fd_a, int fd_b) { struct stat stat_a, stat_b; if (fstat(fd_a, &stat_a) == -1) { wp_error("fstat failed, %s", strerror(errno)); return false; } if (fstat(fd_b, &stat_b) == -1) { wp_error("fstat failed, %s", strerror(errno)); return false; } return (stat_a.st_dev == stat_b.st_dev) && (stat_a.st_ino == stat_b.st_ino); } void set_initial_fds(void) { struct pollfd checklist[256]; for (int i = 0; i < 256; i++) { checklist[i].fd = i; checklist[i].events = 0; checklist[i].revents = 0; } if (poll(checklist, 256, 0) == -1) { wp_error("fd-checking poll failed: %s", strerror(errno)); return; } for (int i = 0; i < 256; i++) { if (!(checklist[i].revents & POLLNVAL)) { inherited_fds[i / 64] |= (1uLL << (i % 64)); } } } void check_unclosed_fds(void) { /* Verify that all file descriptors have been closed. Since most * instances have <<256 file descriptors open at a given time, it is * safe to only check up to that level */ struct pollfd checklist[256]; for (int i = 0; i < 256; i++) { checklist[i].fd = i; checklist[i].events = 0; checklist[i].revents = 0; } if (poll(checklist, 256, 0) == -1) { wp_error("fd-checking poll failed: %s", strerror(errno)); return; } for (int i = 0; i < 256; i++) { bool initial_fd = (inherited_fds[i / 64] & (1uLL << (i % 64))) != 0; if (initial_fd) { if (checklist[i].revents & POLLNVAL) { wp_error("Unexpected closed fd %d", i); } } else { if (checklist[i].revents & POLLNVAL) { continue; } #ifdef __linux__ char fd_path[64]; char link[256]; sprintf(fd_path, "/proc/self/fd/%d", i); ssize_t len = readlink(fd_path, link, sizeof(link) - 1); if (len == -1) { wp_error("Failed to readlink /proc/self/fd/%d for unexpected open fd %d", i, i); } else { link[len] = '\0'; if (!strcmp(link, "/var/lib/sss/mc/passwd")) { wp_debug("Known issue, leaked fd %d to /var/lib/sss/mc/passwd", i); } else { 
wp_debug("Unexpected open fd %d: %s", i, link); } } #else wp_debug("Unexpected open fd %d", i); #endif } } } size_t print_display_error(char *dest, size_t dest_space, uint32_t error_code, const char *message) { if (dest_space < 20) { return 0; } size_t msg_len = strlen(message) + 1; size_t net_len = 4 * ((msg_len + 0x3) / 4) + 20; if (net_len > dest_space) { return 0; } uint32_t header[5] = {0x1, (uint32_t)net_len << 16, 0x1, error_code, (uint32_t)msg_len}; memcpy(dest, header, sizeof(header)); memcpy(dest + sizeof(header), message, msg_len); if (msg_len % 4 != 0) { size_t trailing = 4 - msg_len % 4; uint8_t zeros[4] = {0, 0, 0, 0}; memcpy(dest + sizeof(header) + msg_len, zeros, trailing); } return net_len; } size_t print_wrapped_error(char *dest, size_t dest_space, const char *message) { size_t msg_len = print_display_error( dest + 4, dest_space - 4, 3, message); if (msg_len == 0) { return 0; } uint32_t header = transfer_header(msg_len + 4, WMSG_PROTOCOL); memcpy(dest, &header, sizeof(header)); return msg_len + 4; } int send_one_fd(int socket, int fd) { union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); struct cmsghdr *frst = (struct cmsghdr *)(uc.buf); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; *((int *)CMSG_DATA(frst)) = fd; frst->cmsg_len = CMSG_LEN(sizeof(int)); struct iovec the_iovec; the_iovec.iov_len = 1; uint8_t dummy_data = 1; the_iovec.iov_base = &dummy_data; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_flags = 0; msg.msg_control = uc.buf; msg.msg_controllen = CMSG_SPACE(sizeof(int)); return (int)sendmsg(socket, &msg, 0); } bool wait_for_pid_and_clean(pid_t *target_pid, int *status, int options, struct conn_map *map) { bool found = false; while (1) { int stat; pid_t r = waitpid((pid_t)-1, &stat, options); if (r == 0 || (r == -1 && (errno == ECHILD || errno == EINTR))) { // Valid exit reasons, not an error errno = 
0; return found; } else if (r == -1) { wp_error("waitpid failed: %s", strerror(errno)); return found; } wp_debug("Child process %d has died", r); if (map) { /* Clean out all entries matching that pid */ int iw = 0; for (int ir = 0; ir < map->count; ir++) { map->data[iw] = map->data[ir]; if (map->data[ir].pid != r) { iw++; } else { checked_close(map->data[ir].linkfd); } } map->count = iw; } if (r == *target_pid) { *target_pid = 0; *status = stat; found = true; } } } int buf_ensure_size(int count, size_t obj_size, int *space, void **data) { int x = *space; if (count <= x) { return 0; } if (count >= INT32_MAX / 2 || count <= 0) { return -1; } if (x < 1) { x = 1; } while (x < count) { x *= 2; } void *new_data = realloc(*data, (size_t)x * obj_size); if (!new_data) { return -1; } *data = new_data; *space = x; return 0; } static const char *const wmsg_types[] = { "WMSG_PROTOCOL", "WMSG_INJECT_RIDS", "WMSG_OPEN_FILE", "WMSG_EXTEND_FILE", "WMSG_OPEN_DMABUF", "WMSG_BUFFER_FILL", "WMSG_BUFFER_DIFF", "WMSG_OPEN_IR_PIPE", "WMSG_OPEN_IW_PIPE", "WMSG_OPEN_RW_PIPE", "WMSG_PIPE_TRANSFER", "WMSG_PIPE_SHUTDOWN_R", "WMSG_PIPE_SHUTDOWN_W", "WMSG_OPEN_DMAVID_SRC", "WMSG_OPEN_DMAVID_DST", "WMSG_SEND_DMAVID_PACKET", "WMSG_ACK_NBLOCKS", "WMSG_RESTART", "WMSG_CLOSE", "WMSG_OPEN_DMAVID_SRC_V2", "WMSG_OPEN_DMAVID_DST_V2", }; const char *wmsg_type_to_str(enum wmsg_type tp) { if (tp >= sizeof(wmsg_types) / sizeof(wmsg_types[0])) { return "???"; } return wmsg_types[tp]; } bool wmsg_type_is_known(enum wmsg_type tp) { return (size_t)tp < (sizeof(wmsg_types) / sizeof(wmsg_types[0])); } int transfer_ensure_size(struct transfer_queue *transfers, int count) { int sz = transfers->size; if (buf_ensure_size(count, sizeof(*transfers->vecs), &sz, (void **)&transfers->vecs) == -1) { return -1; } sz = transfers->size; if (buf_ensure_size(count, sizeof(*transfers->meta), &sz, (void **)&transfers->meta) == -1) { return -1; } transfers->size = sz; return 0; } int transfer_add(struct transfer_queue *w, size_t 
size, void *data) { if (size == 0) { return 0; } if (transfer_ensure_size(w, w->end + 1) == -1) { return -1; } w->vecs[w->end].iov_len = size; w->vecs[w->end].iov_base = data; w->meta[w->end].msgno = w->last_msgno; w->meta[w->end].static_alloc = false; w->end++; w->last_msgno++; return 0; } void transfer_async_add(struct thread_msg_recv_buf *q, void *data, size_t sz) { struct iovec vec; vec.iov_len = sz; vec.iov_base = data; pthread_mutex_lock(&q->lock); q->data[q->zone_end++] = vec; pthread_mutex_unlock(&q->lock); } int transfer_load_async(struct transfer_queue *w) { pthread_mutex_lock(&w->async_recv_queue.lock); int zstart = w->async_recv_queue.zone_start; int zend = w->async_recv_queue.zone_end; w->async_recv_queue.zone_start = zend; pthread_mutex_unlock(&w->async_recv_queue.lock); for (int i = zstart; i < zend; i++) { struct iovec v = w->async_recv_queue.data[i]; memset(&w->async_recv_queue.data[i], 0, sizeof(struct iovec)); if (v.iov_len == 0 || v.iov_base == NULL) { wp_error("Unexpected empty message"); continue; } /* Only fill/diff messages are received async, so msgno * is always incremented */ if (transfer_add(w, v.iov_len, v.iov_base) == -1) { wp_error("Failed to add message to transfer queue"); pthread_mutex_unlock(&w->async_recv_queue.lock); return -1; } } return 0; } void cleanup_transfer_queue(struct transfer_queue *td) { for (int i = td->async_recv_queue.zone_start; i < td->async_recv_queue.zone_end; i++) { free(td->async_recv_queue.data[i].iov_base); } pthread_mutex_destroy(&td->async_recv_queue.lock); free(td->async_recv_queue.data); for (int i = 0; i < td->end; i++) { if (!td->meta[i].static_alloc) { free(td->vecs[i].iov_base); } } free(td->vecs); free(td->meta); } #ifdef HAS_VSOCK int connect_to_vsock(uint32_t port, uint32_t cid, bool to_host, int *socket_fd) { wp_debug("Connecting to vsock on port %d, cid %d, send to host %d", port, cid, to_host); int chanfd = socket(AF_VSOCK, SOCK_STREAM, 0); if (chanfd == -1) { wp_error("Error creating socket: 
%s", strerror(errno));
		return -1;
	}
	struct sockaddr_vm addr;
	memset(&addr, 0, sizeof(struct sockaddr_vm));
	addr.svm_family = AF_VSOCK;
	addr.svm_port = port;
	addr.svm_cid = cid;
	if (to_host) {
		addr.svm_flags = VMADDR_FLAG_TO_HOST;
	}
	if ((connect(chanfd, (struct sockaddr *)&addr,
			    sizeof(struct sockaddr_vm))) == -1) {
		wp_error("Error connecting to vsock at port %d: %s", port,
				strerror(errno));
		checked_close(chanfd);
		return -1;
	}
	*socket_fd = chanfd;
	return 0;
}

int listen_on_vsock(uint32_t port, int nmaxclients, int *socket_fd_out)
{
	wp_debug("Listening on vsock port %d", port);
	int sock = socket(AF_VSOCK, SOCK_STREAM, 0);
	if (sock == -1) {
		wp_error("Error creating socket: %s", strerror(errno));
		return -1;
	}
	if (set_nonblocking(sock) == -1) {
		wp_error("Error making socket nonblocking: %s",
				strerror(errno));
		checked_close(sock);
		return -1;
	}
	struct sockaddr_vm addr;
	memset(&addr, 0, sizeof(struct sockaddr_vm));
	addr.svm_family = AF_VSOCK;
	addr.svm_port = port;
	addr.svm_cid = VMADDR_CID_ANY;
	if (bind(sock, (struct sockaddr *)&addr,
			    sizeof(struct sockaddr_vm)) == -1) {
		wp_error("Error binding vsock at cid %d port %d: %s",
				addr.svm_cid, port, strerror(errno));
		checked_close(sock);
		return -1;
	}
	if (listen(sock, nmaxclients) == -1) {
		wp_error("Error listening to socket at cid %d port %d: %s",
				addr.svm_cid, port, strerror(errno));
		checked_close(sock);
		return -1;
	}
	*socket_fd_out = sock;
	return 0;
}
#endif

waypipe-v0.9.1/src/util.h
/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following
conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #ifndef WAYPIPE_UTIL_H #define WAYPIPE_UTIL_H #include #include #include #include #include #include #include #include "config-waypipe.h" #ifdef HAS_USDT #include #else #define DTRACE_PROBE(provider, probe) (void)0 #define DTRACE_PROBE1(provider, probe, parm1) (void)0 #define DTRACE_PROBE2(provider, probe, parm1, parm2) (void)0 #define DTRACE_PROBE3(provider, probe, parm1, parm2, parm3) (void)0 #endif // On SIGINT, this is set to true. The main program should then cleanup ASAP extern bool shutdown_flag; extern uint64_t inherited_fds[4]; void handle_sigint(int sig); /** Basic mathematical operations. */ // use macros? static inline int max(int a, int b) { return a > b ? a : b; } static inline int min(int a, int b) { return a < b ? a : b; } static inline uint64_t maxu(uint64_t a, uint64_t b) { return a > b ? a : b; } static inline uint64_t minu(uint64_t a, uint64_t b) { return a < b ? 
a : b; }
static inline int clamp(int v, int lower, int upper)
{
	return max(min(v, upper), lower);
}
static inline int align(int v, int m) { return m * ((v + m - 1) / m); }
static inline size_t alignz(size_t v, size_t m)
{
	return m * ((v + m - 1) / m);
}
/* only valid for nonnegative v and positive u */
static inline int floordiv(int v, int u) { return v / u; }
static inline int ceildiv(int v, int u) { return (v + u - 1) / u; }
/* valid as long as nparts < 2**15, (hi - lo) < 2**31 */
static inline int split_interval(int lo, int hi, int nparts, int index)
{
	return lo + index * ((hi - lo) / nparts) +
	       (index * ((hi - lo) % nparts)) / nparts;
}
/** Parse a base-10 integer, forbidding leading whitespace, + sign, decimal
 * separators, and locale-dependent formats */
int parse_uint32(const char *str, uint32_t *val);
/* Multiple string concatenation; returns the number of bytes written and
 * ensures null termination. Is async-signal-safe, unlike sprintf.
 * The last argument must be NULL. If there is not enough space, returns 0. */
size_t multi_strcat(char *dest, size_t dest_space, ...);
/** Is the string a well-formed UTF-8 code point sequence, per Unicode 15.0? */
bool is_utf8(const char *str);
/** Make the file underlying this file descriptor nonblocking.
 * Silently return -1 on failure. */
int set_nonblocking(int fd);
/** Set the close-on-exec flag for the file descriptor.
 * Silently return -1 on failure. */
int set_cloexec(int fd);
/* socket path lengths being overly constrained, it is perhaps best to enforce
 * this constraint as early as possible by using this type */
struct sockaddr_un;
struct socket_path {
	const char *folder;
	const struct sockaddr_un *filename;
};
/** Create a nonblocking AF_UNIX/SOCK_STREAM socket at folder/filename,
 * and listen with nmaxclients.
 *
 * Prints its own error messages; returns -1 on failure.
 *
 * If successful, sets the value of folder_fd to the folder, and socket_fd
 * to the created socket.
 *
 * After creating the socket, will fchdir back to cwd_fd.
*/
int setup_nb_socket(int cwd_fd, struct socket_path socket_path,
		int nmaxclients, int *folder_fd, int *socket_fd);
/** Opens the folder, and connects to a (relative) socket in that
 * folder given by filename. Abstract sockets?
 *
 * After opening folder, will fchdir back to cwd_fd.
 *
 * If successful, sets the value of folder_fd to the folder, and socket_fd
 * to the created socket. (If folder_fd is NULL, then nothing is returned
 * there.)
 *
 * If successful, returns 0; otherwise returns -1.
 **/
int connect_to_socket(int cwd_fd, struct socket_path socket_path,
		int *folder_fd, int *socket_fd);
int connect_to_socket_at_folder(int cwd_fd, int folder_fd,
		const struct sockaddr_un *socket_filename, int *socket_fd);
/** Return true iff fd_a/fd_b correspond to the same filesystem file.
 * If fstat fails, files are assumed to be unequal. */
bool files_equiv(int fd_a, int fd_b);
/**
 * Reads src_path, trims off the filename part, and places the filename
 * in rel_socket; if the file name is too long, returns -1, otherwise
 * returns 0. Sets the sun_family of `rel_socket` to AF_UNIX. If src_path
 * contains no folder separators, then src_path is truncated down to the
 * empty string.
 */
int split_socket_path(char *src_path, struct sockaddr_un *rel_socket);
/**
 * Unlink `filename` in `target_dir_fd`, and then fchdir back to `orig_dir_fd`.
 * The value of `target_dir_name` may be NULL, and is only used for error
 * messages.
*/ void unlink_at_folder(int orig_dir_fd, int target_dir_fd, const char *target_dir_name, const char *filename); /** Call close(fd), logging error when fd is invalid */ #define checked_close(fd) \ if (close(fd) == -1) { \ wp_error("close(%d) failed: %s", fd, strerror(errno)); \ } /** Set the list of initially available fds (typically stdin/out/errno) */ void set_initial_fds(void); /** Verify that all file descriptors (except for the initial ones) are closed */ void check_unclosed_fds(void); /** Set the file descriptor to be close-on-exec; return -1 if unsuccessful */ int set_cloexec(int fd); /** Write the Wayland wire representation of a wl_display.error(error_code, * message) event into array `dest`. Return its length in bytes, or 0 if there * is not enough space. */ size_t print_display_error(char *dest, size_t dest_space, uint32_t error_code, const char *message); /** Write the Waypipe wire message of type WMSG_PROTOCOL containing a display * error as from print_display_error(..., 3, message) above. Return wire message * length in bytes, or 0 if there is not enough space. */ size_t print_wrapped_error(char *dest, size_t dest_space, const char *message); #define WAYPIPE_PROTOCOL_VERSION 0x1u /** If the byte order is wrong, the fixed set/unset bits are swapped */ #define CONN_FIXED_BIT (0x1u << 7) #define CONN_UNSET_BIT (0x1u << 31) /** The waypipe-server sends this if it supports reconnections, in which case * the main client process should remember which child to route reconnections * to. */ #define CONN_RECONNECTABLE_BIT (0x1u << 0) /** This is set when reconnecting to an established waypipe-client child process */ #define CONN_UPDATE_BIT (0x1u << 1) /** The waypipe-server sends this to indicate that it does not support DMABUFs, * so the waypipe-client side does not even need to check if it can support * them. If this is not set, the waypipe-client will support (or not) DMABUFs * depending on its flags and local capabilities. 
*/
#define CONN_NO_DMABUF_SUPPORT (0x1u << 2)
/** Indicate which compression format the waypipe-server can accept. For
 * backwards compatibility, if none of these flags is set, assume the server
 * and client match. */
#define CONN_COMPRESSION_MASK (0x7u << 8)
#define CONN_NO_COMPRESSION (0x1u << 8)
#define CONN_LZ4_COMPRESSION (0x2u << 8)
#define CONN_ZSTD_COMPRESSION (0x3u << 8)
/** Indicate which video coding format the waypipe-server can accept. For
 * backwards compatibility, if none of these flags is set, assume the server
 * and client match. */
#define CONN_VIDEO_MASK (0x7u << 11)
#define CONN_NO_VIDEO (0x1u << 11)
#define CONN_VP9_VIDEO (0x2u << 11)
#define CONN_H264_VIDEO (0x3u << 11)
#define CONN_AV1_VIDEO (0x4u << 11)
struct connection_token {
	/** Indicate protocol version (top 16 bits), endianness, and
	 * reconnection flags. The highest bit must stay clear. */
	uint32_t header;
	uint32_t key[3]; /** Random bits used to identify the connection */
};
/** A type to help keep track of the connection handling processes */
struct conn_addr {
	struct connection_token token;
	pid_t pid;
	int linkfd;
};
struct conn_map {
	struct conn_addr *data;
	int count, size;
};
/** A useful helper routine for lists and stacks. `count` is the number of
 * objects that will be needed; `obj_size` their size; `*space` the number
 * of objects that the malloc'd data can contain, and `data` the list buffer
 * itself. If count > *space, resize the list and update *space. Returns -1
 * on allocation failure */
int buf_ensure_size(int count, size_t obj_size, int *space, void **data);
/** sendmsg a file descriptor over socket */
int send_one_fd(int socket, int fd);
enum log_level { WP_DEBUG = 0, WP_ERROR = 1 };
typedef void (*log_handler_func_t)(const char *file, int line,
		enum log_level level, const char *fmt, ...);
/** These log functions should be set by whichever translation units have a
 * 'main'. The first is the debug handler, the second the error handler.
Set them to * NULL to disable log messages. */ extern log_handler_func_t log_funcs[2]; #ifdef WAYPIPE_REL_SRC_DIR #define WAYPIPE__FILE__ \ ((const char *)__FILE__ + sizeof(WAYPIPE_REL_SRC_DIR) - 1) #else #define WAYPIPE__FILE__ __FILE__ #endif /** No trailing ;, user must supply. The first vararg must be the format string. */ #define wp_error(...) \ if (log_funcs[WP_ERROR]) \ (*log_funcs[WP_ERROR])(WAYPIPE__FILE__, __LINE__, WP_ERROR, __VA_ARGS__) #define wp_debug(...) \ if (log_funcs[WP_DEBUG]) \ (*log_funcs[WP_DEBUG])(WAYPIPE__FILE__, __LINE__, WP_DEBUG, __VA_ARGS__) /** Run waitpid in a loop until there are no more zombies to clean up. If the * target_pid was one of the completed processes, set status, return true. The * `options` flag will be passed to waitpid. If `map` is not NULL, remove * entries in the connection map which were closed. * * The value *target_pid is set to 0 once the corresponding process has died, * as a convenience to check only the first child process with pid == * *target_pid. */ bool wait_for_pid_and_clean(pid_t *target_pid, int *status, int options, struct conn_map *map); /** An unrecoverable error-- say, running out of file descriptors */ #define ERR_FATAL -1 /** A memory allocation failed; might be fatal, might not be */ #define ERR_NOMEM -2 /** For main loop, channel disconnection */ #define ERR_DISCONN -3 /** For main loop, program disconnection */ #define ERR_STOP -4 /** A helper type, since very often buffers and their sizes are passed together * (or returned together) as arguments */ struct bytebuf { size_t size; char *data; }; struct char_window { char *data; int size; int zone_start; int zone_end; }; struct int_window { int *data; int size; int zone_start; int zone_end; }; /** * @brief Wire format message types * * Each message indicates what the receiving side should do. */ enum wmsg_type { /** Send over a set of Wayland protocol messages. 
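As the comment above notes, the log handlers are installed by whichever translation unit has a main(), and a NULL entry disables that log level entirely. A self-contained sketch of that pattern follows; `count_and_print` and `demo_logging` are hypothetical names for illustration.

```c
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>

enum log_level { WP_DEBUG = 0, WP_ERROR = 1 };
typedef void (*log_handler_func_t)(const char *file, int line,
		enum log_level level, const char *fmt, ...);
log_handler_func_t log_funcs[2] = {NULL, NULL};

/* Mirrors the wp_error pattern above: logging is skipped entirely when no
 * handler is installed. There is no trailing `;`; the caller supplies it. */
#define wp_error(...) \
	if (log_funcs[WP_ERROR]) \
	(*log_funcs[WP_ERROR])(__FILE__, __LINE__, WP_ERROR, __VA_ARGS__)

static int messages_logged = 0;

/* A minimal handler of the kind a translation unit with main() might set */
static void count_and_print(const char *file, int line, enum log_level level,
		const char *fmt, ...)
{
	va_list args;
	va_start(args, fmt);
	fprintf(stderr, "[%s:%d] %s: ", file, line,
			level == WP_ERROR ? "error" : "debug");
	vfprintf(stderr, fmt, args);
	va_end(args);
	fputc('\n', stderr);
	messages_logged++;
}

void demo_logging(void)
{
	wp_error("this call is dropped: no handler installed");
	log_funcs[WP_ERROR] = count_and_print;
	wp_error("close(%d) failed", 5);
}
```

Because wp_error expands to a bare if statement, callers must supply the trailing semicolon, and uses inside if/else chains should be wrapped in braces to avoid dangling-else surprises.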
Preceding messages * must create or update file descriptors and inject file descriptors * to the queue. */ // TODO: use extra bits to make parsing more consistent between systems; // i.e., to ensure that # of file descriptors consumed is the same WMSG_PROTOCOL, // header uint32_t, then protocol messages /** Inject file descriptors into the receiver's buffer, for use by the * protocol parser. */ WMSG_INJECT_RIDS, // header uint32_t, then fds /** Create a new shared memory file of the given size. * Format: \ref wmsg_open_file */ WMSG_OPEN_FILE, /** Provide a new (larger) size for the file buffer. * Format: \ref wmsg_open_file */ WMSG_EXTEND_FILE, /** Create a new DMABUF with the given size and \ref dmabuf_slice_data. * Format: \ref wmsg_open_dmabuf */ WMSG_OPEN_DMABUF, /** Fill the region of the file with the following data. The data * should be compressed according to the global compression option. * Format: \ref wmsg_buffer_fill */ WMSG_BUFFER_FILL, /** Apply a diff to the file. The diff contents may be compressed. * Format: \ref wmsg_buffer_diff */ WMSG_BUFFER_DIFF, /** Create a new pipe, with the given remote R/W status */ WMSG_OPEN_IR_PIPE, // wmsg_basic WMSG_OPEN_IW_PIPE, // wmsg_basic WMSG_OPEN_RW_PIPE, // wmsg_basic /** Transfer data to the pipe */ WMSG_PIPE_TRANSFER, // wmsg_basic /** Shutdown the read end of the pipe that waypipe uses. */ WMSG_PIPE_SHUTDOWN_R, // wmsg_basic /** Shutdown the write end of the pipe that waypipe uses. */ WMSG_PIPE_SHUTDOWN_W, // wmsg_basic /** Create a DMABUF (with following data parameters) that will be used * to produce/consume video frames. Format: \ref wmsg_open_dmabuf. * Deprecated and may be disabled/removed in the future.
*/ WMSG_OPEN_DMAVID_SRC, WMSG_OPEN_DMAVID_DST, /** Send a packet of video data to the destination */ WMSG_SEND_DMAVID_PACKET, // wmsg_basic /** Acknowledge that a given number of messages has been received, so * that the sender of those messages no longer needs to store them * for replaying in case of reconnection. Format: \ref wmsg_ack */ WMSG_ACK_NBLOCKS, /** When restarting a connection, indicate the number of the message * which will be sent next. Format: \ref wmsg_restart */ WMSG_RESTART, // wmsg_restart /** When the remote program is closing. Format: only the header */ WMSG_CLOSE, /** Create a DMABUF (with following data parameters) that will be used * to produce/consume video frames. Format: \ref wmsg_open_dmavid */ WMSG_OPEN_DMAVID_SRC_V2, WMSG_OPEN_DMAVID_DST_V2, }; const char *wmsg_type_to_str(enum wmsg_type tp); bool wmsg_type_is_known(enum wmsg_type tp); struct wmsg_open_file { uint32_t size_and_type; int32_t remote_id; uint32_t file_size; }; static_assert(sizeof(struct wmsg_open_file) == 12, "size check"); struct wmsg_open_dmabuf { uint32_t size_and_type; int32_t remote_id; uint32_t file_size; /* following this, provide struct dmabuf_slice_data */ }; static_assert(sizeof(struct wmsg_open_dmabuf) == 12, "size check"); enum video_coding_fmt { VIDEO_H264 = 0, VIDEO_VP9 = 1, VIDEO_AV1 = 2, }; struct wmsg_open_dmavid { uint32_t size_and_type; int32_t remote_id; uint32_t file_size; uint32_t vid_flags; /* lowest 8 bits determine video type */ /* immediately followed by struct dmabuf_slice_data */ }; static_assert(sizeof(struct wmsg_open_dmavid) == 16, "size check"); struct wmsg_buffer_fill { uint32_t size_and_type; int32_t remote_id; uint32_t start; /**< [start, end), in bytes of zone to be written */ uint32_t end; /* following this, the possibly-compressed data */ }; static_assert(sizeof(struct wmsg_buffer_fill) == 16, "size check"); struct wmsg_buffer_diff { uint32_t size_and_type; int32_t remote_id; uint32_t diff_size; /**< in bytes, when uncompressed */ 
uint32_t ntrailing; /**< number of 'trailing' bytes, copied to tail */ /* following this, the possibly-compressed diff data */ }; static_assert(sizeof(struct wmsg_buffer_diff) == 16, "size check"); struct wmsg_basic { uint32_t size_and_type; int32_t remote_id; }; static_assert(sizeof(struct wmsg_basic) == 8, "size check"); struct wmsg_ack { uint32_t size_and_type; uint32_t messages_received; }; static_assert(sizeof(struct wmsg_ack) == 8, "size check"); struct wmsg_restart { uint32_t size_and_type; uint32_t last_ack_received; }; static_assert(sizeof(struct wmsg_restart) == 8, "size check"); /** size: the number of bytes in the message, /excluding/ trailing padding. */ static inline uint32_t transfer_header(size_t size, enum wmsg_type type) { return ((uint32_t)size << 5) | (uint32_t)type; } static inline size_t transfer_size(uint32_t header) { return (size_t)header >> 5; } static inline enum wmsg_type transfer_type(uint32_t header) { return (enum wmsg_type)(header & ((1u << 5) - 1)); } /** Worker tasks write their resulting messages to this receive buffer, * and the main thread periodically checks the messages and appends the results * to the main transfer queue. */ struct thread_msg_recv_buf { // TODO: make this lock free, using the fact that valid iovecs have // nonzero fields struct iovec *data; /** [zone_start, zone_end] contains the set of entries which might * contain data */ int zone_start, zone_end, size; pthread_mutex_t lock; }; static inline int msgno_gt(uint32_t a, uint32_t b) { return !((a - b) & (1u << 31)); } struct transfer_block_meta { /** Indicates to which message the corresponding data block belongs. */ uint32_t msgno; /** If true, data is not heap allocated */ bool static_alloc; }; /** A queue of data blocks to be written to the channel.
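The inline helpers above pack a 27-bit byte count and a 5-bit wmsg_type into a single header word, and msgno_gt compares message sequence numbers modulo 2^32 so that comparisons stay correct across counter wraparound. This standalone copy of those helpers demonstrates the round-trip and the wraparound behavior; the abbreviated enum exists only to make the example compile.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum wmsg_type { WMSG_PROTOCOL = 0, WMSG_INJECT_RIDS = 1 }; /* abbreviated */

/* Low 5 bits of the header carry the message type; the upper 27 bits carry
 * the message size in bytes, excluding trailing padding. */
static inline uint32_t transfer_header(size_t size, enum wmsg_type type)
{
	return ((uint32_t)size << 5) | (uint32_t)type;
}
static inline size_t transfer_size(uint32_t header)
{
	return (size_t)header >> 5;
}
static inline enum wmsg_type transfer_type(uint32_t header)
{
	return (enum wmsg_type)(header & ((1u << 5) - 1));
}
/* Wraparound-safe comparison: nonzero when `a` is at or ahead of `b`
 * modulo 2^32, i.e. when bit 31 of (a - b) is clear. Note that it also
 * returns nonzero for a == b. */
static inline int msgno_gt(uint32_t a, uint32_t b)
{
	return !((a - b) & (1u << 31));
}
```

For example, a message numbered 1 just after the counter wrapped still compares as newer than message 0xffffffff.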
This should only * be used by the main thread; worker tasks should write to a \ref * thread_msg_recv_buf, from which the main thread should in turn collect data */ struct transfer_queue { /** Data to be written */ struct iovec *vecs; /** Vector with metadata for matching entries of `vecs` */ struct transfer_block_meta *meta; /** start: next block to write. end: just after last block to write; * size: number of iovec blocks */ int start, end, size; /** How much of the block at 'start' has been written */ size_t partial_write_amt; /** The most recent message number, to be incremented after almost all * message types */ uint32_t last_msgno; /** Messages added from a worker thread are introduced here, and should * be periodically copied onto the main queue */ struct thread_msg_recv_buf async_recv_queue; }; /** Ensure the queue has space for 'count' elements */ int transfer_ensure_size(struct transfer_queue *transfers, int count); /** Add a transfer message to the queue, expanding the queue as necessary. * This increments the last_msgno, and thus should not be used * for WMSG_ACK_NBLOCKS messages. */ int transfer_add(struct transfer_queue *transfers, size_t size, void *data); /** Destroy the transfer queue, deallocating all attached buffers */ void cleanup_transfer_queue(struct transfer_queue *transfers); /** Move any asynchronously loaded messages to the queue */ int transfer_load_async(struct transfer_queue *w); /** Add a message to the async queue */ void transfer_async_add(struct thread_msg_recv_buf *q, void *data, size_t sz); /* Functions that are usually platform specific */ int create_anon_file(void); int get_hardware_thread_count(void); int get_iov_max(void); /** For large allocations only; functions providing aligned-and-zeroed * allocations.
They return NULL on allocation failure.*/ void *zeroed_aligned_alloc(size_t bytes, size_t alignment, void **handle); void *zeroed_aligned_realloc(size_t old_size_bytes, size_t new_size_bytes, size_t alignment, void *data, void **handle); void zeroed_aligned_free(void *data, void **handle); /** Returns a file descriptor for the folder that can be fchdir'd to, or * -1 on failure, setting errno. If `name` is the empty string, opens the * current directory. */ int open_folder(const char *name); #ifdef HAS_VSOCK int connect_to_vsock(uint32_t port, uint32_t cid, bool to_host, int *socket_fd); int listen_on_vsock(uint32_t port, int nmaxclients, int *socket_fd_out); #endif #endif // WAYPIPE_UTIL_H waypipe-v0.9.1/src/video.c /* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE.
*/ #include "shadow.h" #if !defined(HAS_VIDEO) || !defined(HAS_DMABUF) void setup_video_logging(void) {} bool video_supports_dmabuf_format(uint32_t format, uint64_t modifier) { (void)format; (void)modifier; return false; } bool video_supports_shm_format(uint32_t format) { (void)format; return false; } bool video_supports_coding_format(enum video_coding_fmt fmt) { (void)fmt; return false; } void cleanup_hwcontext(struct render_data *rd) { (void)rd; } void destroy_video_data(struct shadow_fd *sfd) { (void)sfd; } int setup_video_encode( struct shadow_fd *sfd, struct render_data *rd, int nthreads) { (void)sfd; (void)rd; (void)nthreads; return -1; } int setup_video_decode(struct shadow_fd *sfd, struct render_data *rd) { (void)sfd; (void)rd; return -1; } void collect_video_from_mirror( struct shadow_fd *sfd, struct transfer_queue *transfers) { (void)sfd; (void)transfers; } void apply_video_packet(struct shadow_fd *sfd, struct render_data *rd, const struct bytebuf *data) { (void)rd; (void)sfd; (void)data; } #else /* HAS_VIDEO */ #include <libavcodec/avcodec.h> #include <libavutil/hwcontext.h> #include <libavutil/hwcontext_drm.h> #include <libavutil/imgutils.h> #include <libavutil/log.h> #include <libavutil/opt.h> #include <libavutil/pixdesc.h> #include <libswscale/swscale.h> #include <string.h> #ifdef HAS_VAAPI #include <libavutil/hwcontext_vaapi.h> #include <va/va_drmcommon.h> #include <va/va_vpp.h> #endif /* these are equivalent to the GBM formats */ #include <libdrm/drm_fourcc.h> #define VIDEO_H264_HW_ENCODER "h264_vaapi" #define VIDEO_H264_SW_ENCODER "libx264" #define VIDEO_H264_DECODER "h264" #define VIDEO_VP9_HW_ENCODER "vp9_vaapi" #define VIDEO_VP9_SW_ENCODER "libvpx-vp9" #define VIDEO_VP9_DECODER "vp9" /* librav1e currently is not sufficient as its low-latency mode doesn't * appear to entirely turn off lookahead, and a few frames of latency * are unavoidable; this may be fixed in the future.
* * libsvtav1 -- might work, if suitable controls for zero latency can be found * * libaom-av1 -- works, but may be slower than the other options */ // #define VIDEO_AV1_SW_ENCODER "libsvtav1" #define VIDEO_AV1_SW_ENCODER "libaom-av1" #define VIDEO_AV1_DECODER "libdav1d" static enum AVPixelFormat drm_to_av(uint32_t format) { /* The avpixel formats are specified with reversed endianness relative * to DRM formats */ switch (format) { case 0: return AV_PIX_FMT_BGR0; case DRM_FORMAT_C8: /* indexed */ return AV_PIX_FMT_NONE; case DRM_FORMAT_R8: return AV_PIX_FMT_GRAY8; case DRM_FORMAT_RGB565: return AV_PIX_FMT_RGB565LE; /* there really isn't a matching format, because no fast video * codec supports alpha. Expect unusual error patterns */ case DRM_FORMAT_GR88: return AV_PIX_FMT_YUYV422; case DRM_FORMAT_RGB888: return AV_PIX_FMT_BGR24; case DRM_FORMAT_BGR888: return AV_PIX_FMT_RGB24; case DRM_FORMAT_XRGB8888: return AV_PIX_FMT_BGR0; case DRM_FORMAT_XBGR8888: return AV_PIX_FMT_RGB0; case DRM_FORMAT_RGBX8888: return AV_PIX_FMT_0BGR; case DRM_FORMAT_BGRX8888: return AV_PIX_FMT_0RGB; #if LIBAVUTIL_VERSION_INT >= AV_VERSION_INT(57, 7, 100) /* While X2RGB10LE was available earlier than X2BGR10LE, conversions to * X2RGB10LE were broken until just before X2BGR10LE was added */ case DRM_FORMAT_XRGB2101010: return AV_PIX_FMT_X2RGB10LE; case DRM_FORMAT_XBGR2101010: return AV_PIX_FMT_X2BGR10LE; #endif case DRM_FORMAT_NV12: return AV_PIX_FMT_NV12; case DRM_FORMAT_NV21: return AV_PIX_FMT_NV21; case DRM_FORMAT_YVU410: case DRM_FORMAT_YUV410: return AV_PIX_FMT_YUV410P; case DRM_FORMAT_YVU411: case DRM_FORMAT_YUV411: return AV_PIX_FMT_YUV411P; case DRM_FORMAT_YVU420: case DRM_FORMAT_YUV420: return AV_PIX_FMT_YUV420P; case DRM_FORMAT_YVU422: case DRM_FORMAT_YUV422: return AV_PIX_FMT_YUV422P; case DRM_FORMAT_YVU444: case DRM_FORMAT_YUV444: return AV_PIX_FMT_YUV444P; case DRM_FORMAT_YUYV: return AV_PIX_FMT_NONE; case DRM_FORMAT_YVYU: return AV_PIX_FMT_UYVY422; case DRM_FORMAT_UYVY: return 
AV_PIX_FMT_YVYU422; case DRM_FORMAT_VYUY: return AV_PIX_FMT_YUYV422; default: return AV_PIX_FMT_NONE; } } static bool needs_vu_flip(uint32_t drm_format) { switch (drm_format) { case DRM_FORMAT_YVU410: case DRM_FORMAT_YVU411: case DRM_FORMAT_YVU420: case DRM_FORMAT_YVU422: case DRM_FORMAT_YVU444: return true; } return false; } bool video_supports_dmabuf_format(uint32_t format, uint64_t modifier) { /* cannot handle CCS modifiers at the moment due to extra 'plane' issues */ if (modifier == fourcc_mod_code(INTEL, 4) /* Y_TILED_CCS */ || modifier == fourcc_mod_code(INTEL, 5) /* Yf_TILED_CCS */ || modifier == fourcc_mod_code(INTEL, 6) /* Y_TILED_GEN12_RC_CCS */ || modifier == fourcc_mod_code(INTEL, 7) /* Y_TILED_GEN12_MC_CCS */ || modifier == fourcc_mod_code(INTEL, 8) /* Y_TILED_GEN12_RC_CCS_CC */) { return false; } return drm_to_av(format) != AV_PIX_FMT_NONE; } bool video_supports_shm_format(uint32_t format) { if (format == 0) { return true; } return video_supports_dmabuf_format(format, 0); } static const struct AVCodec *get_video_sw_encoder( enum video_coding_fmt fmt, bool print_error) { const struct AVCodec *codec = NULL; switch (fmt) { case VIDEO_H264: codec = avcodec_find_encoder_by_name(VIDEO_H264_SW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_H264_SW_ENCODER); } return codec; case VIDEO_VP9: codec = avcodec_find_encoder_by_name(VIDEO_VP9_SW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_VP9_SW_ENCODER); } return codec; case VIDEO_AV1: codec = avcodec_find_encoder_by_name(VIDEO_AV1_SW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_AV1_SW_ENCODER); } return codec; default: return NULL; } } static const struct AVCodec *get_video_hw_encoder( enum video_coding_fmt fmt, bool print_error) { const struct AVCodec *codec = NULL; switch (fmt) { case VIDEO_H264: codec = avcodec_find_encoder_by_name(VIDEO_H264_HW_ENCODER); if (!codec && print_error) { 
wp_error("Failed to find encoder \"%s\"", VIDEO_H264_HW_ENCODER); } return codec; case VIDEO_VP9: codec = avcodec_find_encoder_by_name(VIDEO_VP9_HW_ENCODER); if (!codec && print_error) { wp_error("Failed to find encoder \"%s\"", VIDEO_VP9_HW_ENCODER); } return codec; case VIDEO_AV1: return NULL; default: return NULL; } } static const struct AVCodec *get_video_decoder( enum video_coding_fmt fmt, bool print_error) { const struct AVCodec *codec = NULL; switch (fmt) { case VIDEO_H264: codec = avcodec_find_decoder_by_name(VIDEO_H264_DECODER); if (!codec && print_error) { wp_error("Failed to find decoder \"%s\"", VIDEO_H264_DECODER); } return codec; case VIDEO_VP9: codec = avcodec_find_decoder_by_name(VIDEO_VP9_DECODER); if (!codec && print_error) { wp_error("Failed to find decoder \"%s\"", VIDEO_VP9_DECODER); } return codec; case VIDEO_AV1: codec = avcodec_find_decoder_by_name(VIDEO_AV1_DECODER); if (!codec && print_error) { wp_error("Failed to find decoder \"%s\"", VIDEO_AV1_DECODER); } return codec; default: return NULL; } } bool video_supports_coding_format(enum video_coding_fmt fmt) { return get_video_sw_encoder(fmt, false) && get_video_decoder(fmt, false); } static void video_log_callback( void *aux, int level, const char *fmt, va_list args) { (void)aux; enum log_level wp_level = (level <= AV_LOG_WARNING) ? WP_ERROR : WP_DEBUG; log_handler_func_t fn = log_funcs[wp_level]; if (!fn) { return; } char buf[1024]; int len = vsnprintf(buf, 1023, fmt, args); if (len < 0) { return; } if (len > 1022) { /* vsnprintf returns the untruncated length */ len = 1022; } while (len > 1 && buf[len - 1] == '\n') { buf[len - 1] = 0; len--; } (*fn)("ffmpeg", 0, wp_level, "%s", buf); } void setup_video_logging(void) { if (log_funcs[WP_DEBUG]) { av_log_set_level(AV_LOG_INFO); } else { av_log_set_level(AV_LOG_WARNING); } av_log_set_callback(video_log_callback); } #ifdef HAS_VAAPI static uint32_t drm_to_va_fourcc(uint32_t drm_fourcc) { switch (drm_fourcc) { /* At the moment, Intel/AMD VAAPI implementations only support * various YUV configurations and RGB32. (No other RGB variants).
* See also libavutil / hwcontext_vaapi.c / vaapi_drm_format_map[] */ case DRM_FORMAT_XRGB8888: return VA_FOURCC_BGRX; case DRM_FORMAT_XBGR8888: return VA_FOURCC_RGBX; case DRM_FORMAT_RGBX8888: return VA_FOURCC_XBGR; case DRM_FORMAT_BGRX8888: return VA_FOURCC_XRGB; case DRM_FORMAT_NV12: return VA_FOURCC_NV12; } return 0; } static uint32_t va_fourcc_to_rt(uint32_t va_fourcc) { switch (va_fourcc) { case VA_FOURCC_BGRX: case VA_FOURCC_RGBX: return VA_RT_FORMAT_RGB32; case VA_FOURCC_NV12: return VA_RT_FORMAT_YUV420; } return 0; } static int setup_vaapi_pipeline(struct shadow_fd *sfd, struct render_data *rd, uint32_t width, uint32_t height) { VADisplay vadisp = rd->av_vadisplay; uintptr_t buffer_val = (uintptr_t)sfd->fd_local; uint32_t va_fourcc = drm_to_va_fourcc(sfd->dmabuf_info.format); if (va_fourcc == 0) { wp_error("Could not convert DRM format %x to VA fourcc", sfd->dmabuf_info.format); return -1; } uint32_t rt_format = va_fourcc_to_rt(va_fourcc); VASurfaceAttribExternalBuffers buffer_desc; buffer_desc.num_buffers = 1; buffer_desc.buffers = &buffer_val; buffer_desc.pixel_format = va_fourcc; buffer_desc.flags = 0; buffer_desc.width = width; buffer_desc.height = height; buffer_desc.data_size = (uint32_t)sfd->buffer_size; buffer_desc.num_planes = (uint32_t)sfd->dmabuf_info.num_planes; for (int i = 0; i < (int)sfd->dmabuf_info.num_planes; i++) { buffer_desc.offsets[i] = sfd->dmabuf_info.offsets[i]; buffer_desc.pitches[i] = sfd->dmabuf_info.strides[i]; } VASurfaceAttrib attribs[3]; attribs[0].type = VASurfaceAttribPixelFormat; attribs[0].flags = VA_SURFACE_ATTRIB_SETTABLE; attribs[0].value.type = VAGenericValueTypeInteger; attribs[0].value.value.i = 0; attribs[1].type = VASurfaceAttribMemoryType; attribs[1].flags = VA_SURFACE_ATTRIB_SETTABLE; attribs[1].value.type = VAGenericValueTypeInteger; attribs[1].value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME; attribs[2].type = VASurfaceAttribExternalBufferDescriptor; attribs[2].flags = VA_SURFACE_ATTRIB_SETTABLE; 
attribs[2].value.type = VAGenericValueTypePointer; attribs[2].value.value.p = &buffer_desc; sfd->video_va_surface = 0; sfd->video_va_context = 0; sfd->video_va_pipeline = 0; VAStatus stat = vaCreateSurfaces(vadisp, rt_format, buffer_desc.width, buffer_desc.height, &sfd->video_va_surface, 1, attribs, 3); if (stat != VA_STATUS_SUCCESS) { wp_error("Create surface failed: %s", vaErrorStr(stat)); sfd->video_va_surface = 0; return -1; } stat = vaCreateContext(vadisp, rd->av_copy_config, (int)buffer_desc.width, (int)buffer_desc.height, 0, &sfd->video_va_surface, 1, &sfd->video_va_context); if (stat != VA_STATUS_SUCCESS) { wp_error("Create context failed %s", vaErrorStr(stat)); vaDestroySurfaces(vadisp, &sfd->video_va_surface, 1); sfd->video_va_surface = 0; sfd->video_va_context = 0; return -1; } stat = vaCreateBuffer(vadisp, sfd->video_va_context, VAProcPipelineParameterBufferType, sizeof(VAProcPipelineParameterBuffer), 1, NULL, &sfd->video_va_pipeline); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to create pipeline buffer: %s", vaErrorStr(stat)); vaDestroySurfaces(vadisp, &sfd->video_va_surface, 1); vaDestroyContext(vadisp, sfd->video_va_context); sfd->video_va_surface = 0; sfd->video_va_context = 0; sfd->video_va_pipeline = 0; return -1; } return 0; } static void cleanup_vaapi_pipeline(struct shadow_fd *sfd) { if (!sfd->video_va_surface && !sfd->video_va_context && !sfd->video_va_pipeline) { return; } AVHWDeviceContext *vwdc = (AVHWDeviceContext *) sfd->video_context->hw_device_ctx->data; if (vwdc->type != AV_HWDEVICE_TYPE_VAAPI) { return; } AVVAAPIDeviceContext *vdctx = (AVVAAPIDeviceContext *)vwdc->hwctx; VADisplay vadisp = vdctx->display; if (sfd->video_va_surface) { vaDestroySurfaces(vadisp, &sfd->video_va_surface, 1); sfd->video_va_surface = 0; } if (sfd->video_va_context) { vaDestroyContext(vadisp, sfd->video_va_context); sfd->video_va_context = 0; } if (sfd->video_va_pipeline) { vaDestroyBuffer(vadisp, sfd->video_va_pipeline); sfd->video_va_pipeline = 0; } 
} static void run_vaapi_conversion(struct shadow_fd *sfd, struct render_data *rd, struct AVFrame *va_frame) { VADisplay vadisp = rd->av_vadisplay; if (va_frame->format != AV_PIX_FMT_VAAPI) { wp_error("Non-vaapi pixel format: %s", av_get_pix_fmt_name(va_frame->format)); } VASurfaceID src_surf = (VASurfaceID)(ptrdiff_t)va_frame->data[3]; int stat = vaBeginPicture( vadisp, sfd->video_va_context, sfd->video_va_surface); if (stat != VA_STATUS_SUCCESS) { wp_error("Begin picture config failed: %s", vaErrorStr(stat)); } VAProcPipelineParameterBuffer *pipeline_param; stat = vaMapBuffer(vadisp, sfd->video_va_pipeline, (void **)&pipeline_param); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to map pipeline buffer: %s", vaErrorStr(stat)); } pipeline_param->surface = src_surf; pipeline_param->surface_region = NULL; pipeline_param->output_region = NULL; pipeline_param->output_background_color = 0; pipeline_param->filter_flags = VA_FILTER_SCALING_FAST; pipeline_param->filters = NULL; pipeline_param->num_filters = 0; stat = vaUnmapBuffer(vadisp, sfd->video_va_pipeline); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to unmap pipeline buffer: %s", vaErrorStr(stat)); } stat = vaRenderPicture(vadisp, sfd->video_va_context, &sfd->video_va_pipeline, 1); if (stat != VA_STATUS_SUCCESS) { wp_error("Failed to render picture: %s", vaErrorStr(stat)); } stat = vaEndPicture(vadisp, sfd->video_va_context); if (stat != VA_STATUS_SUCCESS) { wp_error("End picture failed: %s", vaErrorStr(stat)); } stat = vaSyncSurface(vadisp, sfd->video_va_surface); if (stat != VA_STATUS_SUCCESS) { wp_error("Sync surface failed: %s", vaErrorStr(stat)); } } #endif void destroy_video_data(struct shadow_fd *sfd) { if (sfd->video_context) { #ifdef HAS_VAAPI cleanup_vaapi_pipeline(sfd); #endif /* free contexts (which, theoretically, could have hooks into * frames/packets) first */ avcodec_free_context(&sfd->video_context); sws_freeContext(sfd->video_color_context); if (sfd->video_yuv_frame_data) {
av_freep(sfd->video_yuv_frame_data); } if (sfd->video_local_frame_data) { av_freep(sfd->video_local_frame_data); } av_frame_free(&sfd->video_local_frame); av_frame_free(&sfd->video_tmp_frame); av_frame_free(&sfd->video_yuv_frame); av_packet_free(&sfd->video_packet); } } static void copy_onto_video_mirror(const char *buffer, uint32_t map_stride, AVFrame *frame, const struct dmabuf_slice_data *info) { for (int i = 0; i < info->num_planes; i++) { int j = i; if (needs_vu_flip(info->format) && (i == 1 || i == 2)) { j = 3 - i; } for (size_t r = 0; r < info->height; r++) { uint8_t *dst = frame->data[j] + frame->linesize[j] * (int)r; const char *src = buffer + (size_t)info->offsets[i] + (size_t)map_stride * r; /* todo: handle multiplanar strides properly */ size_t common = (size_t)minu(map_stride, (uint64_t)frame->linesize[j]); memcpy(dst, src, common); } } } static void copy_from_video_mirror(char *buffer, uint32_t map_stride, const AVFrame *frame, const struct dmabuf_slice_data *info) { for (int i = 0; i < info->num_planes; i++) { int j = i; if (needs_vu_flip(info->format) && (i == 1 || i == 2)) { j = 3 - i; } for (size_t r = 0; r < info->height; r++) { const uint8_t *src = frame->data[j] + frame->linesize[j] * (int)r; char *dst = buffer + (size_t)info->offsets[i] + (size_t)map_stride * r; /* todo: handle multiplanar strides properly */ size_t common = (size_t)minu(map_stride, (uint64_t)frame->linesize[j]); memcpy(dst, src, common); } } } static bool pad_hardware_size( int width, int height, int *new_width, int *new_height) { /* VAAPI drivers often impose additional alignment restrictions; for * example, requiring that width be 16-aligned, or that tiled buffers be * 128-aligned. See also intel-vaapi-driver, i965_drv_video.c, * i965_suface_external_memory() [sic] ; */ *new_width = align(width, 16); *new_height = align(height, 16); if (width % 16 != 0) { /* Something goes wrong with VAAPI/buffer state when the * width (or stride?) 
is not a multiple of 16, and GEM_MMAP * ioctls start failing */ return false; } return true; } static int init_hwcontext(struct render_data *rd) { if (rd->av_disabled) { return -1; } if (rd->av_hwdevice_ref != NULL) { return 0; } if (init_render_data(rd) == -1) { rd->av_disabled = true; return -1; } rd->av_vadisplay = 0; rd->av_copy_config = 0; rd->av_drmdevice_ref = NULL; // Q: what does this even do? rd->av_drmdevice_ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_DRM); if (!rd->av_drmdevice_ref) { wp_error("Failed to allocate AV DRM device context"); rd->av_disabled = true; return -1; } AVHWDeviceContext *hwdc = (AVHWDeviceContext *)rd->av_drmdevice_ref->data; AVDRMDeviceContext *dctx = hwdc->hwctx; dctx->fd = rd->drm_fd; if (av_hwdevice_ctx_init(rd->av_drmdevice_ref)) { wp_error("Failed to initialize AV DRM device context"); rd->av_disabled = true; return -1; } /* We create a derived context here, to ensure that the drm fd matches * that which was used to create the DMABUFs. Also, this ensures that * the VA implementation doesn't look for a connection via e.g. 
Wayland * or X11 */ if (av_hwdevice_ctx_create_derived(&rd->av_hwdevice_ref, AV_HWDEVICE_TYPE_VAAPI, rd->av_drmdevice_ref, 0) < 0) { wp_error("Failed to create VAAPI hardware device"); rd->av_disabled = true; return -1; } #ifdef HAS_VAAPI AVHWDeviceContext *vwdc = (AVHWDeviceContext *)rd->av_hwdevice_ref->data; AVVAAPIDeviceContext *vdctx = (AVVAAPIDeviceContext *)vwdc->hwctx; if (!vdctx) { wp_error("No vaapi device context"); rd->av_disabled = true; return -1; } rd->av_vadisplay = vdctx->display; int stat = vaCreateConfig(rd->av_vadisplay, VAProfileNone, VAEntrypointVideoProc, NULL, 0, &rd->av_copy_config); if (stat != VA_STATUS_SUCCESS) { wp_error("Create config failed: %s", vaErrorStr(stat)); rd->av_disabled = true; return -1; } #endif return 0; } void cleanup_hwcontext(struct render_data *rd) { rd->av_disabled = true; #ifdef HAS_VAAPI if (rd->av_vadisplay && rd->av_copy_config) { vaDestroyConfig(rd->av_vadisplay, rd->av_copy_config); } #endif if (rd->av_hwdevice_ref) { av_buffer_unref(&rd->av_hwdevice_ref); } if (rd->av_drmdevice_ref) { av_buffer_unref(&rd->av_drmdevice_ref); } } static void configure_low_latency_enc_context(struct AVCodecContext *ctx, bool sw, enum video_coding_fmt fmt, int bpf, int nthreads) { // "time" is only meaningful in terms of the frames provided int nom_fps = 25; ctx->time_base = (AVRational){1, nom_fps}; ctx->framerate = (AVRational){nom_fps, 1}; /* B-frames are directly tied to latency, since each one * is predicted using its preceding and following * frames. The gop size is chosen by the driver. */ ctx->gop_size = -1; ctx->max_b_frames = 0; // Q: how to get this to zero?
	// low latency
	ctx->delay = 0;
	if (sw) {
		ctx->bit_rate = bpf * nom_fps;
		if (fmt == VIDEO_H264) {
			if (av_opt_set(ctx->priv_data, "preset", "ultrafast",
					    0) != 0) {
				wp_error("Failed to set x264 encode ultrafast preset");
			}
			if (av_opt_set(ctx->priv_data, "tune", "zerolatency",
					    0) != 0) {
				wp_error("Failed to set x264 encode zerolatency");
			}
		} else if (fmt == VIDEO_VP9) {
			if (av_opt_set(ctx->priv_data, "lag-in-frames", "0",
					    0) != 0) {
				wp_error("Failed to set vp9 encode lag");
			}
			if (av_opt_set(ctx->priv_data, "quality", "realtime",
					    0) != 0) {
				wp_error("Failed to set vp9 quality");
			}
			if (av_opt_set(ctx->priv_data, "speed", "8", 0) != 0) {
				wp_error("Failed to set vp9 speed");
			}
		} else if (fmt == VIDEO_AV1) {
			// AOM-AV1
			if (av_opt_set(ctx->priv_data, "usage", "realtime",
					    0) != 0) {
				wp_error("Failed to set av1 usage");
			}
			if (av_opt_set(ctx->priv_data, "lag-in-frames", "0",
					    0) != 0) {
				wp_error("Failed to set av1 lag");
			}
			if (av_opt_set(ctx->priv_data, "cpu-used", "8", 0) !=
					0) {
				wp_error("Failed to set av1 speed");
			}
			// Use multi-threaded encoding
			ctx->thread_count = nthreads;
		}
	} else {
		ctx->bit_rate = bpf * nom_fps;
		if (fmt == VIDEO_H264) {
			/* with i965/gen8, hardware encoding is faster but has
			 * significantly worse quality per bitrate than x264 */
			if (av_opt_set(ctx->priv_data, "profile", "main", 0) !=
					0) {
				wp_error("Failed to set h264 encode main profile");
			}
		}
	}
}

static int setup_hwvideo_encode(
		struct shadow_fd *sfd, struct render_data *rd, int nthreads)
{
	/* NV12 is the preferred format for Intel VAAPI; see also
	 * intel-vaapi-driver/src/i965_drv_video.c . Packed formats like
	 * YUV420P typically don't work. */
	const enum AVPixelFormat videofmt = AV_PIX_FMT_NV12;

	const struct AVCodec *codec = get_video_hw_encoder(sfd->video_fmt, true);
	if (!codec) {
		return -1;
	}
	struct AVCodecContext *ctx = avcodec_alloc_context3(codec);
	configure_low_latency_enc_context(
			ctx, false, sfd->video_fmt, rd->av_bpf, nthreads);
	if (!pad_hardware_size((int)sfd->dmabuf_info.width,
			    (int)sfd->dmabuf_info.height, &ctx->width,
			    &ctx->height)) {
		wp_error("Video dimensions (WxH = %dx%d) not alignable to use hardware video encoding",
				sfd->dmabuf_info.width,
				sfd->dmabuf_info.height);
		goto fail_alignment;
	}

	AVHWFramesConstraints *constraints =
			av_hwdevice_get_hwframe_constraints(
					rd->av_hwdevice_ref, NULL);
	if (!constraints) {
		wp_error("Failed to get hardware frame constraints");
		goto fail_hwframe_constraints;
	}
	enum AVPixelFormat hw_format = constraints->valid_hw_formats[0];
	av_hwframe_constraints_free(&constraints);

	AVBufferRef *frame_ref = av_hwframe_ctx_alloc(rd->av_hwdevice_ref);
	if (!frame_ref) {
		wp_error("Failed to allocate frame reference");
		goto fail_frameref;
	}
	AVHWFramesContext *fctx = (AVHWFramesContext *)frame_ref->data;
	/* hw fmt is e.g. "vaapi_vld" */
	fctx->format = hw_format;
	fctx->sw_format = videofmt;
	fctx->width = ctx->width;
	fctx->height = ctx->height;
	int err = av_hwframe_ctx_init(frame_ref);
	if (err < 0) {
		wp_error("Failed to init hardware frame context, %s",
				av_err2str(err));
		goto fail_hwframe_init;
	}

	ctx->pix_fmt = hw_format;
	ctx->hw_frames_ctx = av_buffer_ref(frame_ref);
	if (!ctx->hw_frames_ctx) {
		wp_error("Failed to reference hardware frame context for codec context");
		goto fail_ctx_hwfctx;
	}

	int open_err = avcodec_open2(ctx, codec, NULL);
	if (open_err < 0) {
		wp_error("Failed to open codec: %s", av_err2str(open_err));
		goto fail_codec_open;
	}

	/* Create a VAAPI frame linked to the sfd DMABUF */
	struct AVDRMFrameDescriptor *framedesc =
			av_mallocz(sizeof(struct AVDRMFrameDescriptor));
	if (!framedesc) {
		wp_error("Failed to allocate DRM frame descriptor");
		goto fail_framedesc_alloc;
	}
	/* todo: multiplanar support */
	framedesc->nb_objects = 1;
	framedesc->objects[0].format_modifier = sfd->dmabuf_info.modifier;
	framedesc->objects[0].fd = sfd->fd_local;
	framedesc->objects[0].size = sfd->buffer_size;
	framedesc->nb_layers = 1;
	framedesc->layers[0].nb_planes = sfd->dmabuf_info.num_planes;
	framedesc->layers[0].format = sfd->dmabuf_info.format;
	for (int i = 0; i < (int)sfd->dmabuf_info.num_planes; i++) {
		framedesc->layers[0].planes[i].object_index = 0;
		framedesc->layers[0].planes[i].offset =
				sfd->dmabuf_info.offsets[i];
		framedesc->layers[0].planes[i].pitch =
				sfd->dmabuf_info.strides[i];
	}

	AVFrame *local_frame = av_frame_alloc();
	if (!local_frame) {
		wp_error("Failed to allocate local frame");
		goto fail_frame_alloc;
	}
	local_frame->width = ctx->width;
	local_frame->height = ctx->height;
	local_frame->format = AV_PIX_FMT_DRM_PRIME;
	local_frame->buf[0] = av_buffer_create((uint8_t *)framedesc,
			sizeof(struct AVDRMFrameDescriptor),
			av_buffer_default_free, local_frame, 0);
	if (!local_frame->buf[0]) {
		wp_error("Failed to reference count frame DRM description");
		goto fail_framedesc_ref;
	}
	local_frame->data[0] = (uint8_t *)framedesc;
	local_frame->hw_frames_ctx = av_buffer_ref(frame_ref);
	if (!local_frame->hw_frames_ctx) {
		wp_error("Failed to reference hardware frame context for local frame");
		goto fail_frame_hwfctx;
	}

	AVFrame *yuv_frame = av_frame_alloc();
	if (!yuv_frame) {
		wp_error("Failed to allocate yuv frame");
		goto fail_yuv_frame;
	}
	yuv_frame->format = hw_format;
	yuv_frame->hw_frames_ctx = av_buffer_ref(frame_ref);
	if (!yuv_frame->hw_frames_ctx) {
		wp_error("Failed to reference hardware frame context for yuv frame");
		goto fail_yuv_hwfctx;
	}

	int map_err = av_hwframe_map(yuv_frame, local_frame, 0);
	if (map_err) {
		wp_error("Failed to map (DRM) local frame to (hardware) yuv frame: %s",
				av_err2str(map_err));
		goto fail_map;
	}

	struct AVPacket *pkt = av_packet_alloc();
	if (!pkt) {
		wp_error("Failed to allocate av packet");
		goto fail_pkt_alloc;
	}

	av_buffer_unref(&frame_ref);

	sfd->video_context = ctx;
	sfd->video_local_frame = local_frame;
	sfd->video_yuv_frame = yuv_frame;
	sfd->video_packet = pkt;
	return 0;

fail_pkt_alloc:
fail_map:
fail_yuv_hwfctx:
	av_frame_free(&yuv_frame);
fail_yuv_frame:
fail_framedesc_ref:
fail_frame_hwfctx:
	av_frame_free(&local_frame);
fail_frame_alloc:
fail_framedesc_alloc:
fail_codec_open:
fail_ctx_hwfctx:
fail_hwframe_init:
	av_buffer_unref(&frame_ref);
fail_frameref:
fail_hwframe_constraints:
fail_alignment:
	avcodec_free_context(&ctx);
	return -1;
}

int setup_video_encode(
		struct shadow_fd *sfd, struct render_data *rd, int nthreads)
{
	if (sfd->video_context) {
		wp_error("Video context already set up for sfd RID=%d",
				sfd->remote_id);
		return -1;
	}

	bool has_hw = init_hwcontext(rd) == 0;
	/* Attempt hardware encoding, and if it doesn't succeed, fall back
	 * to software encoding */
	if (has_hw && setup_hwvideo_encode(sfd, rd, nthreads) == 0) {
		return 0;
	}

	enum AVPixelFormat avpixfmt = drm_to_av(sfd->dmabuf_info.format);
	if (avpixfmt == AV_PIX_FMT_NONE) {
		wp_error("Failed to find matching AvPixelFormat for %x",
				sfd->dmabuf_info.format);
		return -1;
	}
	enum AVPixelFormat videofmt = AV_PIX_FMT_YUV420P;
	if (sws_isSupportedInput(avpixfmt) == 0) {
		wp_error("frame format %s not supported",
				av_get_pix_fmt_name(avpixfmt));
		return -1;
	}
	if (sws_isSupportedInput(videofmt) == 0) {
		wp_error("videofmt %s not supported",
				av_get_pix_fmt_name(videofmt));
		return -1;
	}

	const struct AVCodec *codec = get_video_sw_encoder(sfd->video_fmt, true);
	if (!codec) {
		return -1;
	}
	struct AVCodecContext *ctx = avcodec_alloc_context3(codec);
	ctx->pix_fmt = videofmt;
	configure_low_latency_enc_context(
			ctx, true, sfd->video_fmt, rd->av_bpf, nthreads);

	/* Increase image sizes as needed to ensure codec can run */
	ctx->width = (int)sfd->dmabuf_info.width;
	ctx->height = (int)sfd->dmabuf_info.height;
	int linesize_align[AV_NUM_DATA_POINTERS];
	avcodec_align_dimensions2(
			ctx, &ctx->width, &ctx->height, linesize_align);

	struct AVPacket *pkt = av_packet_alloc();

	if (avcodec_open2(ctx, codec, NULL) < 0) {
		wp_error("Failed to open codec");
		return -1;
	}

	struct AVFrame *local_frame = av_frame_alloc();
	if (!local_frame) {
		wp_error("Could not allocate video frame");
		return -1;
	}
	local_frame->format = avpixfmt;
	/* adopt padded sizes */
	local_frame->width = ctx->width;
	local_frame->height = ctx->height;
	if (av_image_alloc(local_frame->data, local_frame->linesize,
			    local_frame->width, local_frame->height, avpixfmt,
			    64) < 0) {
		wp_error("Failed to allocate temp image");
		return -1;
	}

	struct AVFrame *yuv_frame = av_frame_alloc();
	yuv_frame->width = ctx->width;
	yuv_frame->height = ctx->height;
	yuv_frame->format = videofmt;
	if (av_image_alloc(yuv_frame->data, yuv_frame->linesize,
			    yuv_frame->width, yuv_frame->height, videofmt,
			    64) < 0) {
		wp_error("Failed to allocate temp image");
		return -1;
	}

	struct SwsContext *sws = sws_getContext(local_frame->width,
			local_frame->height, avpixfmt, yuv_frame->width,
			yuv_frame->height, videofmt, SWS_BILINEAR, NULL, NULL,
			NULL);
	if (!sws) {
		wp_error("Could not create software color conversion context");
		return -1;
	}

	sfd->video_yuv_frame = yuv_frame;
	/* recorded pointer to be freed to match av_image_alloc */
	sfd->video_yuv_frame_data = &yuv_frame->data[0];
	sfd->video_local_frame = local_frame;
	sfd->video_local_frame_data = &local_frame->data[0];
	sfd->video_packet = pkt;
	sfd->video_context = ctx;
	sfd->video_color_context = sws;
	return 0;
}

static enum AVPixelFormat get_decode_format(
		AVCodecContext *ctx, const enum AVPixelFormat *pix_fmts)
{
	(void)ctx;
	for (const enum AVPixelFormat *p = pix_fmts; *p != AV_PIX_FMT_NONE;
			p++) {
		/* Prefer VAAPI output, if available. */
		if (*p == AV_PIX_FMT_VAAPI) {
			return AV_PIX_FMT_VAAPI;
		}
	}
	/* YUV420P is the typical software option, but this function is only
	 * called when VAAPI is already available */
	return AV_PIX_FMT_NONE;
}

int setup_video_decode(struct shadow_fd *sfd, struct render_data *rd)
{
	bool has_hw = init_hwcontext(rd) == 0;

	enum AVPixelFormat avpixfmt = drm_to_av(sfd->dmabuf_info.format);
	if (avpixfmt == AV_PIX_FMT_NONE) {
		wp_error("Failed to find matching AvPixelFormat for %x",
				sfd->dmabuf_info.format);
		return -1;
	}
	enum AVPixelFormat videofmt = AV_PIX_FMT_YUV420P;
	if (sws_isSupportedInput(avpixfmt) == 0) {
		wp_error("source pixel format %x not supported", avpixfmt);
		return -1;
	}
	if (sws_isSupportedInput(videofmt) == 0) {
		wp_error("AV_PIX_FMT_YUV420P not supported");
		return -1;
	}

	const struct AVCodec *codec = get_video_decoder(sfd->video_fmt, true);
	if (!codec) {
		return -1;
	}
	struct AVCodecContext *ctx = avcodec_alloc_context3(codec);
	if (!ctx) {
		wp_error("Failed to allocate context");
		return -1;
	}

	ctx->delay = 0;
	if (has_hw) {
		/* If alignment permits, use hardware decoding */
		has_hw = pad_hardware_size((int)sfd->dmabuf_info.width,
				(int)sfd->dmabuf_info.height, &ctx->width,
				&ctx->height);
	}
	if (has_hw) {
		ctx->hw_device_ctx = av_buffer_ref(rd->av_hwdevice_ref);
		if (!ctx->hw_device_ctx) {
			wp_error("Failed to reference hardware device context");
		}
		ctx->get_format = get_decode_format;
	} else {
		ctx->pix_fmt = videofmt;
		/* set context dimensions, and allocate buffer to write into */
		ctx->width = (int)sfd->dmabuf_info.width;
		ctx->height = (int)sfd->dmabuf_info.height;
		int linesize_align[AV_NUM_DATA_POINTERS];
		avcodec_align_dimensions2(
				ctx, &ctx->width, &ctx->height, linesize_align);
	}
	if (avcodec_open2(ctx, codec, NULL) < 0) {
		wp_error("Failed to open codec");
	}

	struct AVFrame *yuv_frame = av_frame_alloc();
	if (!yuv_frame) {
		wp_error("Could not allocate yuv frame");
		return -1;
	}
	struct AVPacket *pkt = av_packet_alloc();
	if (!pkt) {
		wp_error("Could not allocate video packet");
		return -1;
	}

	if (ctx->hw_device_ctx) {
#ifdef HAS_VAAPI
		if (rd->av_vadisplay) {
			setup_vaapi_pipeline(sfd, rd, (uint32_t)ctx->width,
					(uint32_t)ctx->height);
		}
#endif
	}

	sfd->video_yuv_frame = yuv_frame;
	sfd->video_packet = pkt;
	sfd->video_context = ctx;
	/* yuv_frame not allocated by us */
	sfd->video_yuv_frame_data = NULL;
	/* will be allocated on frame receipt */
	sfd->video_local_frame = NULL;
	sfd->video_color_context = NULL;
	return 0;
}

void collect_video_from_mirror(
		struct shadow_fd *sfd, struct transfer_queue *transfers)
{
	if (sfd->video_color_context) {
		/* If using software encoding, need to convert to YUV */
		void *handle = NULL;
		uint32_t map_stride = 0;
		void *data = map_dmabuf(
				sfd->dmabuf_bo, false, &handle, &map_stride);
		if (!data) {
			return;
		}
		copy_onto_video_mirror(data, map_stride,
				sfd->video_local_frame, &sfd->dmabuf_info);
		unmap_dmabuf(sfd->dmabuf_bo, handle);

		if (sws_scale(sfd->video_color_context,
				    (const uint8_t *const *)sfd
						    ->video_local_frame->data,
				    sfd->video_local_frame->linesize, 0,
				    sfd->video_local_frame->height,
				    sfd->video_yuv_frame->data,
				    sfd->video_yuv_frame->linesize) < 0) {
			wp_error("Failed to perform color conversion");
		}
	}

	sfd->video_yuv_frame->pts = sfd->video_frameno++;
	int sendstat = avcodec_send_frame(
			sfd->video_context, sfd->video_yuv_frame);
	if (sendstat < 0) {
		wp_error("Failed to create frame: %s", av_err2str(sendstat));
		return;
	}
	// assume 1-1 frames to packets, at the moment
	int recvstat = avcodec_receive_packet(
			sfd->video_context, sfd->video_packet);
	if (recvstat == AVERROR(EINVAL)) {
		wp_error("Failed to receive packet for RID=%d", sfd->remote_id);
		return;
	} else if (recvstat == AVERROR(EAGAIN)) {
		wp_error("Packet for RID=%d needs more input", sfd->remote_id);
	}
	if (recvstat == 0) {
		struct AVPacket *pkt = sfd->video_packet;
		size_t pktsz = (size_t)pkt->buf->size;
		size_t msgsz = sizeof(struct wmsg_basic) + pktsz;
		char *buf = malloc(alignz(msgsz, 4));
		struct wmsg_basic *header = (struct wmsg_basic *)buf;
		header->size_and_type = transfer_header(
				msgsz, WMSG_SEND_DMAVID_PACKET);
		header->remote_id = sfd->remote_id;
		memcpy(buf + sizeof(struct wmsg_basic), pkt->buf->data, pktsz);
		memset(buf + msgsz, 0, alignz(msgsz, 4) - msgsz);
		transfer_add(transfers, alignz(msgsz, 4), buf);
		av_packet_unref(pkt);
	}
}

static int setup_color_conv(struct shadow_fd *sfd, struct AVFrame *cpu_frame)
{
	struct AVCodecContext *ctx = sfd->video_context;

	enum AVPixelFormat avpixfmt = drm_to_av(sfd->dmabuf_info.format);

	struct AVFrame *local_frame = av_frame_alloc();
	if (!local_frame) {
		wp_error("Could not allocate video frame");
		return -1;
	}
	local_frame->format = avpixfmt;
	/* adopt padded sizes */
	local_frame->width = ctx->width;
	local_frame->height = ctx->height;
	if (av_image_alloc(local_frame->data, local_frame->linesize,
			    local_frame->width, local_frame->height, avpixfmt,
			    64) < 0) {
		wp_error("Failed to allocate local image");
		av_frame_free(&local_frame);
		return -1;
	}

	struct SwsContext *sws = sws_getContext(cpu_frame->width,
			cpu_frame->height, cpu_frame->format,
			local_frame->width, local_frame->height, avpixfmt,
			SWS_BILINEAR, NULL, NULL, NULL);
	if (!sws) {
		wp_error("Could not create software color conversion context");
		av_freep(&local_frame->data[0]);
		av_frame_free(&local_frame);
		return -1;
	}

	sfd->video_local_frame = local_frame;
	sfd->video_local_frame_data = &local_frame->data[0];
	sfd->video_color_context = sws;
	return 0;
}

void apply_video_packet(struct shadow_fd *sfd, struct render_data *rd,
		const struct bytebuf *msg)
{
	sfd->video_packet->data = (uint8_t *)msg->data;
	sfd->video_packet->size = (int)msg->size;
	int sendstat = avcodec_send_packet(
			sfd->video_context, sfd->video_packet);
	if (sendstat < 0) {
		wp_error("Failed to send packet: %s", av_err2str(sendstat));
	}

	/* Receive all produced frames, ignoring all but the most recent */
	while (true) {
		int recvstat = avcodec_receive_frame(
				sfd->video_context, sfd->video_yuv_frame);
		if (recvstat == 0) {
			struct AVFrame *cpu_frame = sfd->video_yuv_frame;
#if HAS_VAAPI
			if (sfd->video_va_surface &&
					sfd->video_yuv_frame->format ==
							AV_PIX_FMT_VAAPI) {
				run_vaapi_conversion(
						sfd, rd, sfd->video_yuv_frame);
				continue;
			}
#else
			(void)rd;
#endif
			if (sfd->video_yuv_frame->format == AV_PIX_FMT_VAAPI) {
				if (!sfd->video_tmp_frame) {
					sfd->video_tmp_frame = av_frame_alloc();
					if (!sfd->video_tmp_frame) {
						wp_error("Failed to allocate temporary frame");
					}
				}
				int tferr = av_hwframe_transfer_data(
						sfd->video_tmp_frame,
						sfd->video_yuv_frame, 0);
				if (tferr < 0) {
					wp_error("Failed to transfer hwframe data: %s",
							av_err2str(tferr));
				}
				cpu_frame = sfd->video_tmp_frame;
			}
			if (!cpu_frame) {
				return;
			}

			if (!sfd->video_color_context) {
				if (setup_color_conv(sfd, cpu_frame) == -1) {
					return;
				}
			}

			/* Handle frame immediately, since the next receive run
			 * will clear it again */
			if (sws_scale(sfd->video_color_context,
					    (const uint8_t *const *)
							    cpu_frame->data,
					    cpu_frame->linesize, 0,
					    cpu_frame->height,
					    sfd->video_local_frame->data,
					    sfd->video_local_frame->linesize) <
					0) {
				wp_error("Failed to perform color conversion");
			}

			if (!sfd->dmabuf_bo) {
				// ^ was not previously able to create buffer
				wp_error("DMABUF was not created");
				return;
			}
			/* Copy data onto DMABUF */
			uint32_t map_stride = 0;
			void *handle = NULL;
			void *data = map_dmabuf(sfd->dmabuf_bo, true, &handle,
					&map_stride);
			if (!data) {
				return;
			}
			copy_from_video_mirror(data, map_stride,
					sfd->video_local_frame,
					&sfd->dmabuf_info);
			unmap_dmabuf(sfd->dmabuf_bo, handle);
		} else {
			if (recvstat != AVERROR(EAGAIN)) {
				wp_error("Failed to receive frame due to error: %s",
						av_err2str(recvstat));
			}
			break;
		}
	}
}

#endif /* HAS_VIDEO && HAS_DMABUF */

waypipe-v0.9.1/src/waypipe.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
#include "main.h"

/* NOTE: the system header names below were stripped during extraction;
 * this list is reconstructed from the functions used in this file */
#include <getopt.h>
#include <fcntl.h>
#include <signal.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

enum waypipe_mode {
	MODE_FAIL = 0x0,
	MODE_SSH = 0x1,
	MODE_CLIENT = 0x2,
	MODE_SERVER = 0x4,
	MODE_RECON = 0x8,
	MODE_BENCH = 0x10
};

static bool log_to_tty = false;
static enum waypipe_mode log_mode = MODE_FAIL;
static bool log_anti_staircase = false;
log_handler_func_t log_funcs[2] = {NULL, NULL};

/* Usage: Wrapped to 79 characters */
static const char usage_string[] =
		"Usage: waypipe [options] mode ...\n"
		"A proxy for Wayland protocol applications.\n"
		"Example: waypipe ssh user@server weston-terminal\n"
		"\n"
		"Modes:\n"
		"  ssh [...]    Wrap an ssh invocation to run waypipe on both ends of the\n"
		"                 connection, and automatically forward Wayland applications.\n"
		"  server CMD   Run remotely to invoke CMD and forward application data through\n"
		"                 a socket to a matching 'waypipe client' instance.\n"
		"  client       Run locally to create a Unix socket to which 'waypipe server'\n"
		"                 instances can connect.\n"
		"  recon C T    Reconnect a 'waypipe server' instance. Writes the new Unix\n"
		"                 socket path T to the control pipe C.\n"
		"  bench B      Given a connection bandwidth B in MB/sec, estimate the best\n"
		"                 compression level used to send data\n"
		"\n"
		"Options:\n"
		"  -c, --compress C  choose compression method: lz4[=#], zstd[=#], none\n"
		"  -d, --debug       print debug messages\n"
		"  -h, --help        display this help and exit\n"
		"  -n, --no-gpu      disable protocols which would use GPU resources\n"
		"  -o, --oneshot     only permit one connected application\n"
		"  -s, --socket S    set the socket path to either create or connect to:\n"
		"                      server default: /tmp/waypipe-server.sock\n"
		"                      client default: /tmp/waypipe-client.sock\n"
		"                      ssh: sets the prefix for the socket path\n"
		"                      vsock: [[s]CID:]port\n"
		"      --version     print waypipe version and exit\n"
		"      --allow-tiled allow gpu buffers (DMABUFs) with format modifiers\n"
		"      --control C   server,ssh: set control pipe to reconnect server\n"
		"      --display D   server,ssh: the Wayland display name or path\n"
		"      --drm-node R  set the local render node. default: /dev/dri/renderD128\n"
		"      --remote-node R  ssh: set the remote render node path\n"
		"      --remote-bin R   ssh: set the remote waypipe binary. default: waypipe\n"
		"      --login-shell server: if server CMD is empty, run a login shell\n"
		"      --threads T   set thread pool size, default=hardware threads/2\n"
		"      --title-prefix P  prepend P to all window titles\n"
		"      --unlink-socket  server: unlink the socket that waypipe connects to\n"
		"      --video[=V]   compress certain linear dmabufs only with a video codec\n"
		"                      V is list of options: sw,hw,bpf=1.2e5,h264,vp9,av1\n"
		"      --vsock       use vsock instead of unix socket\n"
		"\n";

static int usage(int retcode)
{
	FILE *ostream = retcode == EXIT_SUCCESS ? stdout : stderr;
	fprintf(ostream, usage_string);
	return retcode;
}

static void log_handler(const char *file, int line, enum log_level level,
		const char *fmt, ...)
{
	struct timespec ts;
	clock_gettime(CLOCK_REALTIME, &ts);
	int pid = getpid();

	char mode;
	if (log_mode == MODE_SERVER) {
		mode = level == WP_DEBUG ? 's' : 'S';
	} else {
		mode = level == WP_DEBUG ? 'c' : 'C';
	}

	char msg[1024];
	int nwri = 0;
	if (log_to_tty) {
		msg[nwri++] = '\x1b';
		msg[nwri++] = '[';
		msg[nwri++] = '3';
		/* blue for waypipe client, green for waypipe server,
		 * (or unformatted for waypipe server if no pty is made */
		msg[nwri++] = log_mode == MODE_SERVER ? '2' : '4';
		msg[nwri++] = 'm';
		if (level == WP_ERROR) {
			/* bold errors */
			msg[nwri++] = '\x1b';
			msg[nwri++] = '[';
			msg[nwri++] = '1';
			msg[nwri++] = 'm';
		}
	}

	int sec = (int)(ts.tv_sec % 100);
	int usec = (int)(ts.tv_nsec / 1000);
	nwri += sprintf(msg + nwri, "%c%d:%3d.%06d [%s:%3d] ", mode, pid, sec,
			usec, file, line);

	va_list args;
	va_start(args, fmt);
	nwri += vsnprintf(msg + nwri, (size_t)(1000 - nwri), fmt, args);
	va_end(args);

	if (log_to_tty) {
		msg[nwri++] = '\x1b';
		msg[nwri++] = '[';
		msg[nwri++] = '0';
		msg[nwri++] = 'm';
		/* to avoid 'staircase' rendering when ssh has the '-t' flag
		 * and sets raw mode for the shared terminal output */
		if (log_anti_staircase) {
			msg[nwri++] = '\r';
		}
	}
	msg[nwri++] = '\n';
	msg[nwri] = 0;

	// single short writes are atomic for pipes, at least
	(void)write(STDERR_FILENO, msg, (size_t)nwri);
}

static void handle_noop(int sig) { (void)sig; }

/* Configure signal handling policies */
static int setup_sighandlers(void)
{
	struct sigaction ia; // SIGINT: abort operations, and set a flag
	ia.sa_handler = handle_sigint;
	sigemptyset(&ia.sa_mask);
	ia.sa_flags = 0;
	struct sigaction ca; // SIGCHLD: restart operations, but EINTR on poll
	ca.sa_handler = handle_noop;
	sigemptyset(&ca.sa_mask);
	ca.sa_flags = SA_RESTART | SA_NOCLDSTOP;
	struct sigaction pa;
	pa.sa_handler = SIG_IGN;
	sigemptyset(&pa.sa_mask);
	pa.sa_flags = 0;
	if (sigaction(SIGINT, &ia, NULL) == -1) {
		wp_error("Failed to set signal action for SIGINT");
		return -1;
	}
	if (sigaction(SIGCHLD, &ca, NULL) == -1) {
		wp_error("Failed to set signal action for SIGCHLD");
		return -1;
	}
	if (sigaction(SIGPIPE, &pa, NULL) == -1) {
		wp_error("Failed to set signal action for SIGPIPE");
		return -1;
	}
	return 0;
}

/* produces a random token with a low accidental collision probability */
static void fill_rand_token(char tok[static 8])
{
	struct timespec tp;
	clock_gettime(CLOCK_REALTIME, &tp);
	uint32_t seed = (uint32_t)(getpid() + tp.tv_sec + (tp.tv_nsec << 2));
	srand(seed);
	for (int i = 0; i < 8; i++) {
		unsigned int r = ((unsigned int)rand()) % 62;
		if (r < 26) {
			tok[i] = (char)(r + 'a');
		} else if (r < 52) {
			tok[i] = (char)(r - 26 + 'A');
		} else {
			tok[i] = (char)(r - 52 + '0');
		}
	}
}

/* Scan a suffix which is either empty or has the form =N, returning true
 * if it matches */
static bool parse_level_choice(const char *str, int *dest, int defval)
{
	if (str[0] == '\0') {
		*dest = defval;
		return true;
	}
	if (str[0] != '=') {
		return false;
	}
	str++;
	int sign = 1;
	if (str[0] == '-') {
		sign = -1;
		str++;
	}
	uint32_t val;
	if (parse_uint32(str, &val) == -1 || (int)val < 0) {
		return false;
	}
	*dest = sign * (int)val;
	return true;
}

/* Identifies the index at which the `destination` occurs in an openssh
 * command, and also sets a boolean if pty allocation was requested by an
 * ssh flag */
static int locate_openssh_cmd_hostname(
		int argc, char *const *argv, bool *allocates_pty)
{
	/* Based on command line help for openssh 8.0 */
	char fixletters[] = "46AaCfGgKkMNnqsTtVvXxYy";
	char argletters[] = "BbcDEeFIiJLlmOopQRSWw";
	int dstidx = 0;
	while (dstidx < argc) {
		if (argv[dstidx][0] == '-' &&
				strchr(argletters, argv[dstidx][1]) != NULL &&
				argv[dstidx][2] == 0) {
			dstidx += 2;
			continue;
		}
		if (argv[dstidx][0] == '-' &&
				strchr(fixletters, argv[dstidx][1]) != NULL) {
			for (const char *c = &argv[dstidx][1]; *c; c++) {
				*allocates_pty |= (*c == 't');
				*allocates_pty &= (*c != 'T');
			}
			dstidx++;
			continue;
		}
		if (argv[dstidx][0] == '-' && argv[dstidx][1] == '-' &&
				argv[dstidx][2] == 0) {
			dstidx++;
			break;
		}
		break;
	}
	if (dstidx >= argc || argv[dstidx][0] == '-') {
		return -1;
	}
	return dstidx;
}

/* Send the socket at 'recon_path' to the control socket at 'control_path'.
 * Because connections are made by address, the waypipe server root process
 * must be able to connect to the `recon path`. */
static int run_recon(const char *control_path, const char *recon_path)
{
	size_t len = strlen(recon_path);
	if (len >= 108) {
		fprintf(stderr, "Reconnection socket path \"%s\" too long, %d>=%d\n",
				control_path, (int)len, 108);
		return EXIT_FAILURE;
	}
	int cfd = open(control_path, O_WRONLY | O_NOCTTY);
	if (cfd == -1) {
		fprintf(stderr, "Failed to open control pipe at \"%s\"\n",
				control_path);
		return EXIT_FAILURE;
	}
	ssize_t written = write(cfd, recon_path, len + 1);
	close(cfd);
	if ((size_t)written != len + 1) {
		fprintf(stderr, "Failed to write to control pipe\n");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}

#ifdef HAS_VIDEO
static int parse_video_string(const char *str, struct main_config *config)
{
	char tmp[128];
	size_t l = strlen(str);
	if (l >= 127) {
		return -1;
	}
	memcpy(tmp, str, l + 1);

	char *part = strtok(tmp, ",");
	while (part) {
		if (!strcmp(part, "h264")) {
			config->video_fmt = VIDEO_H264;
		} else if (!strcmp(part, "vp9")) {
			config->video_fmt = VIDEO_VP9;
		} else if (!strcmp(part, "av1")) {
			config->video_fmt = VIDEO_AV1;
		} else if (!strcmp(part, "hw")) {
			config->prefer_hwvideo = true;
		} else if (!strcmp(part, "sw")) {
			config->prefer_hwvideo = false;
		} else if (!strncmp(part, "bpf=", 4)) {
			char *ep;
			double bpf = strtod(part + 4, &ep);
			if (*ep == 0 && bpf <= 1e9 && bpf >= 1.0) {
				config->video_bpf = (int)bpf;
			} else {
				return -1;
			}
		} else {
			return -1;
		}
		part = strtok(NULL, ",");
	}
	return 0;
}
#endif

#ifdef HAS_VSOCK
static int parse_vsock_addr(const char *str, struct main_config *config)
{
	char tmp[128];
	size_t l = strlen(str);
	if (l >= 127) {
		return -1;
	}
	memcpy(tmp, str, l + 1);

	char *port = strchr(tmp, ':');
	if (port) {
		char *cid = tmp;
		port[0] = 0;
		port = port + 1;
		size_t cid_len = strlen(cid);
		if (cid_len > 0) {
			if (cid[0] == 's') {
				if (cid_len < 2) {
					return -1;
				}
				config->vsock_to_host = true;
				if (parse_uint32(cid + 1,
						    &config->vsock_cid) == -1) {
					return -1;
				}
			} else {
				config->vsock_to_host = false;
				if (parse_uint32(cid, &config->vsock_cid) ==
						-1) {
					return -1;
				}
			}
		}
	} else {
		port = tmp;
	}
	if (parse_uint32(port, &config->vsock_port) == -1) {
		return -1;
	}
	if (config->vsock_port <= 0) {
		return -1;
	}
	return 0;
}
#endif

static const char *feature_names[] = {
		"lz4",
		"zstd",
		"dmabuf",
		"video",
		"vaapi",
};
static const bool feature_flags[] = {
#ifdef HAS_LZ4
		true,
#else
		false,
#endif
#ifdef HAS_ZSTD
		true,
#else
		false,
#endif
#ifdef HAS_DMABUF
		true,
#else
		false,
#endif
#ifdef HAS_VIDEO
		true,
#else
		false,
#endif
#ifdef HAS_VAAPI
		true,
#else
		false,
#endif
};

#define ARG_VERSION 1000
#define ARG_DISPLAY 1001
#define ARG_DRMNODE 1002
#define ARG_ALLOW_TILED 1003
#define ARG_LOGIN_SHELL 1004
#define ARG_REMOTENODE 1005
#define ARG_THREADS 1006
#define ARG_UNLINK 1007
#define ARG_VIDEO 1008
#define ARG_HWVIDEO 1009
#define ARG_CONTROL 1010
#define ARG_WAYPIPE_BINARY 1011
#define ARG_BENCH_TEST_SIZE 1012
#define ARG_VSOCK 1013
#define ARG_TITLE_PREFIX 1014

static const struct option options[] = {
		{"compress", required_argument, NULL, 'c'},
		{"debug", no_argument, NULL, 'd'},
		{"help", no_argument, NULL, 'h'},
		{"no-gpu", no_argument, NULL, 'n'},
		{"oneshot", no_argument, NULL, 'o'},
		{"socket", required_argument, NULL, 's'},
		{"version", no_argument, NULL, ARG_VERSION},
		{"allow-tiled", no_argument, NULL, ARG_ALLOW_TILED},
		{"unlink-socket", no_argument, NULL, ARG_UNLINK},
		{"drm-node", required_argument, NULL, ARG_DRMNODE},
		{"remote-node", required_argument, NULL, ARG_REMOTENODE},
		{"remote-bin", required_argument, NULL, ARG_WAYPIPE_BINARY},
		{"login-shell", no_argument, NULL, ARG_LOGIN_SHELL},
		{"video", optional_argument, NULL, ARG_VIDEO},
		{"hwvideo", no_argument, NULL, ARG_HWVIDEO},
		{"threads", required_argument, NULL, ARG_THREADS},
		{"display", required_argument, NULL, ARG_DISPLAY},
		{"control", required_argument, NULL, ARG_CONTROL},
		{"test-size", required_argument, NULL, ARG_BENCH_TEST_SIZE},
		{"vsock", no_argument, NULL, ARG_VSOCK},
		{"title-prefix", required_argument, NULL, ARG_TITLE_PREFIX},
		{0, 0, NULL, 0}};

struct arg_permissions {
	int val;
	uint32_t mode_mask;
};
#define ALL_MODES (uint32_t)-1
static const struct arg_permissions arg_permissions[] = {
		{'c', MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{'d', ALL_MODES},
		{'h', MODE_FAIL},
		{'n', MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{'o', MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{'s', MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{ARG_VERSION, MODE_FAIL},
		{ARG_ALLOW_TILED, MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{ARG_UNLINK, MODE_SERVER},
		{ARG_DRMNODE, MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{ARG_REMOTENODE, MODE_SSH},
		{ARG_WAYPIPE_BINARY, MODE_SSH},
		{ARG_LOGIN_SHELL, MODE_SERVER},
		{ARG_VIDEO, MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{ARG_HWVIDEO, MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{ARG_THREADS, MODE_SSH | MODE_CLIENT | MODE_SERVER |
					      MODE_BENCH},
		{ARG_DISPLAY, MODE_SSH | MODE_SERVER},
		{ARG_CONTROL, MODE_SSH | MODE_SERVER},
		{ARG_BENCH_TEST_SIZE, MODE_BENCH},
		{ARG_VSOCK, MODE_SSH | MODE_CLIENT | MODE_SERVER},
		{ARG_TITLE_PREFIX, MODE_SSH | MODE_CLIENT | MODE_SERVER}};

/* envp is nonstandard, so use environ */
extern char **environ;

int main(int argc, char **argv)
{
	bool help = false;
	bool version = false;
	bool fail = false;
	bool debug = false;
	bool oneshot = false;
	bool unlink_at_end = false;
	bool login_shell = false;
	char *remote_drm_node = NULL;
	char *comp_string = NULL;
	char *nthread_string = NULL;
	char *wayland_display = NULL;
	char *waypipe_binary = "waypipe";
	char *control_path = NULL;
	char *socketpath = NULL;
	uint32_t bench_test_size = (1u << 22) + 13;

	struct main_config config = {
			.n_worker_threads = 0,
			.drm_node = NULL,
#ifdef HAS_LZ4
			.compression = COMP_LZ4,
#else
			.compression = COMP_NONE,
#endif
			.compression_level = 0,
			.no_gpu = false,
			.only_linear_dmabuf = true,
			.video_if_possible = false,
			.video_bpf = 0,
			.video_fmt = VIDEO_H264,
			.prefer_hwvideo = false,
			.vsock = false,
			.vsock_cid = 2,         /* VMADDR_CID_HOST */
			.vsock_to_host = false, /*
			VMADDR_FLAG_TO_HOST */
			.vsock_port = 0,
			.title_prefix = NULL,
	};

	/* We do not parse any getopt arguments happening after the mode choice
	 * string, so as not to interfere with them. */
	enum waypipe_mode mode = MODE_FAIL;
	int mode_argc = 0;
	while (mode_argc < argc) {
		if (!strcmp(argv[mode_argc], "ssh")) {
			mode = MODE_SSH;
			break;
		}
		if (!strcmp(argv[mode_argc], "client")) {
			mode = MODE_CLIENT;
			break;
		}
		if (!strcmp(argv[mode_argc], "server")) {
			mode = MODE_SERVER;
			break;
		}
		if (!strcmp(argv[mode_argc], "recon")) {
			mode = MODE_RECON;
			break;
		}
		if (!strcmp(argv[mode_argc], "bench")) {
			mode = MODE_BENCH;
			break;
		}
		mode_argc++;
	}

	while (true) {
		/* todo: set opterr to 0 and use custom error handler */
		int opt = getopt_long(
				mode_argc, argv, "c:dhnos:", options, NULL);
		if (opt == -1) {
			break;
		}

		const struct arg_permissions *perms = NULL;
		for (size_t k = 0; k < sizeof(arg_permissions) /
						       sizeof(arg_permissions[0]);
				k++) {
			if (arg_permissions[k].val == opt) {
				perms = &arg_permissions[k];
			}
		}
		if (!perms) {
			fail = true;
			break;
		}
		if (!(mode & perms->mode_mask) && mode != MODE_FAIL) {
			fprintf(stderr, "Option %s is not allowed in mode %s\n",
					argv[optind - 1], argv[mode_argc]);
			return EXIT_FAILURE;
		}

		switch (opt) {
		case 'c':
			if (!strcmp(optarg, "none")) {
				config.compression = COMP_NONE;
				config.compression_level = 0;
			} else if (!strncmp(optarg, "lz4", 3) &&
					parse_level_choice(optarg + 3,
							&config.compression_level,
							-1)) {
#ifdef HAS_LZ4
				config.compression = COMP_LZ4;
#else
				fprintf(stderr, "Compression method lz4 not available: this copy of Waypipe was not built with LZ4 compression support.\n");
				return EXIT_FAILURE;
#endif
			} else if (!strncmp(optarg, "zstd", 4) &&
					parse_level_choice(optarg + 4,
							&config.compression_level,
							5)) {
#ifdef HAS_ZSTD
				config.compression = COMP_ZSTD;
#else
				fprintf(stderr, "Compression method zstd not available: this copy of Waypipe was not built with Zstd compression support.\n");
				return EXIT_FAILURE;
#endif
			} else {
				fail = true;
			}
			comp_string = optarg;
			break;
		case 'd':
			debug = true;
			break;
		case 'h':
			help = true;
			break;
		case 'n':
			config.no_gpu = true;
			break;
		case 'o':
			oneshot = true;
			break;
		case 's':
			socketpath = optarg;
			break;
		case ARG_VERSION:
			version = true;
			break;
		case ARG_DISPLAY:
			wayland_display = optarg;
			break;
		case ARG_CONTROL:
			control_path = optarg;
			break;
		case ARG_UNLINK:
			unlink_at_end = true;
			break;
		case ARG_DRMNODE:
			config.drm_node = optarg;
			break;
		case ARG_REMOTENODE:
			remote_drm_node = optarg;
			break;
		case ARG_LOGIN_SHELL:
			login_shell = true;
			break;
		case ARG_ALLOW_TILED:
			config.only_linear_dmabuf = false;
			break;
#ifdef HAS_VIDEO
		case ARG_VIDEO:
			config.video_if_possible = true;
			if (optarg) {
				if (parse_video_string(optarg, &config) == -1) {
					fail = true;
				}
				config.old_video_mode = false;
			}
			break;
		case ARG_HWVIDEO:
			config.video_if_possible = true;
			config.prefer_hwvideo = true;
			break;
#else
		case ARG_VIDEO:
		case ARG_HWVIDEO:
			fprintf(stderr, "Option %s not allowed: this copy of Waypipe was not built with video support.\n",
					argv[optind - 1]);
			return EXIT_FAILURE;
#endif
		case ARG_THREADS: {
			uint32_t nthreads;
			if (parse_uint32(optarg, &nthreads) == -1 ||
					nthreads > (1u << 16)) {
				fail = true;
			}
			config.n_worker_threads = (int)nthreads;
			nthread_string = optarg;
		} break;
		case ARG_WAYPIPE_BINARY:
			waypipe_binary = optarg;
			break;
		case ARG_BENCH_TEST_SIZE: {
			if (parse_uint32(optarg, &bench_test_size) == -1 ||
					bench_test_size > (1u << 30)) {
				fail = true;
			}
		} break;
		case ARG_VSOCK:
#ifdef HAS_VSOCK
			config.vsock = true;
			break;
#else
			fprintf(stderr, "Option --vsock not allowed: this copy of Waypipe was not built with support for Linux VM sockets.\n");
			return EXIT_FAILURE;
#endif
		case ARG_TITLE_PREFIX:
			if (!is_utf8(optarg)) {
				fprintf(stderr, "Title prefix argument must be valid UTF-8.\n");
				return EXIT_FAILURE;
			}
			if (strlen(optarg) > 128) {
				fprintf(stderr, "Title prefix is too long (>128 bytes).\n");
				return EXIT_FAILURE;
			}
			config.title_prefix = optarg;
			break;
		default:
			fail = true;
			break;
		}
	}

	if (optind < mode_argc) {
		fprintf(stderr, "unexpected argument: %s\n", argv[optind]);
		/* there is an extra parameter before the mode argument */
		fail = true;
	}
	argv += mode_argc;
	argc -= mode_argc;

	if (fail) {
		return usage(EXIT_FAILURE);
	} else if (version) {
		fprintf(stdout, "waypipe " WAYPIPE_VERSION "\n");
		fprintf(stdout, "features:");
		for (size_t i = 0; i < sizeof(feature_flags) /
						       sizeof(feature_flags[0]);
				i++) {
			if (feature_flags[i]) {
				fprintf(stdout, " %s", feature_names[i]);
			}
		}
		fprintf(stdout, "\n");
		fprintf(stdout, "unavailable:");
		for (size_t i = 0; i < sizeof(feature_flags) /
						       sizeof(feature_flags[0]);
				i++) {
			if (!feature_flags[i]) {
				fprintf(stdout, " %s", feature_names[i]);
			}
		}
		fprintf(stdout, "\n");
		return EXIT_SUCCESS;
	} else if (help) {
		return usage(EXIT_SUCCESS);
	} else if (mode == MODE_FAIL || argc < 1) {
		return usage(EXIT_FAILURE);
	}
	if (mode == MODE_CLIENT && argc > 1) {
		// In client mode, we do not start an application
		return usage(EXIT_FAILURE);
	} else if (mode == MODE_RECON && argc != 3) {
		// The reconnection helper takes exactly two trailing arguments
		return usage(EXIT_FAILURE);
	} else if (mode == MODE_BENCH && argc != 2) {
		return usage(EXIT_FAILURE);
	}
	argv++;
	argc--;
	if (argc > 0 && !strcmp(argv[0], "--")) {
		argv++;
		argc--;
	}

	if (config.video_bpf == 0) {
		config.video_bpf = config.prefer_hwvideo ? 360000 : 120000;
	}

#ifdef HAS_VSOCK
	if (config.vsock) {
		if (socketpath == NULL) {
			fprintf(stderr, "Socket option (-s, --socket) is required when vsock is enabled\n");
			return EXIT_FAILURE;
		}
		if (parse_vsock_addr(socketpath, &config) == -1) {
			fprintf(stderr, "Invalid vsock address specification: '%s' does not match form [[s]CID:]port\n",
					socketpath);
			return EXIT_FAILURE;
		}
	}
#endif

	if (debug) {
		log_funcs[0] = log_handler;
	}
	log_funcs[1] = log_handler;
	log_mode = mode;
	log_anti_staircase = false;
	log_to_tty = isatty(STDERR_FILENO);

	setup_video_logging();

	if (setup_sighandlers() == -1) {
		return EXIT_FAILURE;
	}
	set_initial_fds();

	/* Waypipe connects/binds/unlinks sockets using relative paths,
	 * to work around a) bad Unix socket API which limits path lengths
	 * b) race conditions when directories are moved and renamed.
	 * Unfortunately, for lack of connectat/bindat, this is done
	 * by changing the current working directory of the process to
	 * the desired folder, performing the operation, and then going
	 * back. */
	int cwd_fd = open_folder(".");
	if (cwd_fd == -1) {
		wp_error("Error: cannot open current directory.\n");
		return EXIT_FAILURE;
	}
	if (set_cloexec(cwd_fd) == -1) {
		wp_error("Error: cannot set cloexec on current directory fd.\n");
		return EXIT_FAILURE;
	}

	const char *wayland_socket = getenv("WAYLAND_SOCKET");
	if (wayland_socket != NULL) {
		oneshot = true;
	}

	int ret;
	if (mode == MODE_RECON) {
		ret = run_recon(argv[0], argv[1]);
	} else if (mode == MODE_BENCH) {
		char *endptr = NULL;
		float bw = strtof(argv[0], &endptr);
		if (*endptr != 0) {
			wp_error("Failed to parse bandwidth '%s' in MB/sec\n",
					argv[0]);
			return EXIT_FAILURE;
		}
		ret = run_bench(bw, bench_test_size, config.n_worker_threads);
	} else if (mode == MODE_CLIENT) {
		struct sockaddr_un sockaddr;
		memset(&sockaddr, 0, sizeof(sockaddr));
		if (socketpath && split_socket_path(socketpath, &sockaddr) ==
						  -1) {
			ret = EXIT_FAILURE;
		} else {
			struct socket_path client_sock_path;
			client_sock_path.folder = socketpath ?
socketpath : "/tmp/"; client_sock_path.filename = &sockaddr; if (!socketpath) { sockaddr.sun_family = AF_UNIX; strcpy(sockaddr.sun_path, "waypipe-client.sock"); } int nmaxclients = oneshot ? 1 : 128; int client_folder_fd = -1, channelsock = -1; if (!config.vsock) { if (setup_nb_socket(cwd_fd, client_sock_path, nmaxclients, &client_folder_fd, &channelsock) == -1) { return EXIT_FAILURE; } } else { #ifdef HAS_VSOCK if (listen_on_vsock(config.vsock_port, nmaxclients, &channelsock) == -1) { return EXIT_FAILURE; } #endif } ret = run_client(cwd_fd, client_sock_path.folder, client_folder_fd, client_sock_path.filename->sun_path, &config, oneshot, wayland_socket, 0, channelsock); if (!config.vsock) { checked_close(client_folder_fd); } } } else if (mode == MODE_SERVER) { char *const *app_argv = (char *const *)argv; char display_path[20]; if (!wayland_display) { char rbytes[9]; fill_rand_token(rbytes); rbytes[8] = 0; sprintf(display_path, "wayland-%s", rbytes); wayland_display = display_path; } struct sockaddr_un sockaddr; memset(&sockaddr, 0, sizeof(sockaddr)); if (socketpath && split_socket_path(socketpath, &sockaddr) == -1) { ret = EXIT_FAILURE; } else { struct socket_path server_sock_path; server_sock_path.folder = socketpath ? 
socketpath : "/tmp/"; server_sock_path.filename = &sockaddr; if (!socketpath) { sockaddr.sun_family = AF_UNIX; strcpy(sockaddr.sun_path, "waypipe-server.sock"); } ret = run_server(cwd_fd, server_sock_path, wayland_display, control_path, &config, oneshot, unlink_at_end, app_argv, login_shell); } } else { struct sockaddr_un clientsock = {0}; char socket_folder[512] = {0}; if (socketpath) { if (strlen(socketpath) >= sizeof(socket_folder)) { wp_error("Socket path prefix is too long\n"); close(cwd_fd); return EXIT_FAILURE; } strcpy(socket_folder, socketpath); if (split_socket_path(socket_folder, &clientsock) == -1) { close(cwd_fd); return EXIT_FAILURE; } } else { clientsock.sun_family = AF_UNIX; strcpy(clientsock.sun_path, "waypipe"); strcpy(socket_folder, "/tmp/"); socketpath = "/tmp/waypipe"; } if (strlen(clientsock.sun_path) + sizeof("-server-88888888.sock") >= sizeof(clientsock.sun_path)) { wp_error("Socket path prefix filename '%s' is too long (more than %zu bytes).\n", socketpath, sizeof(clientsock.sun_path) - sizeof("-server-88888888.sock")); } bool allocates_pty = false; int dstidx = locate_openssh_cmd_hostname( argc, argv, &allocates_pty); if (dstidx < 0) { fprintf(stderr, "waypipe: Failed to locate destination in ssh command string\n"); close(cwd_fd); return EXIT_FAILURE; } /* If there are no arguments following the destination */ bool needs_login_shell = dstidx + 1 == argc; if (needs_login_shell || allocates_pty) { log_anti_staircase = true; } char rbytes[9]; fill_rand_token(rbytes); rbytes[8] = 0; sprintf(clientsock.sun_path + strlen(clientsock.sun_path), "-client-%s.sock", rbytes); struct socket_path client_sock_path = { .filename = &clientsock, .folder = socket_folder, }; int nmaxclients = oneshot ? 
1 : 128; int channel_folder_fd = -1, channelsock = -1; if (!config.vsock) { if (setup_nb_socket(cwd_fd, client_sock_path, nmaxclients, &channel_folder_fd, &channelsock) == -1) { close(cwd_fd); return EXIT_FAILURE; } if (set_cloexec(channelsock) == -1 || set_cloexec(channel_folder_fd) == -1) { wp_error("Failed to make client socket or its folder cloexec"); close(channel_folder_fd); close(channelsock); close(cwd_fd); return EXIT_FAILURE; } } else { #ifdef HAS_VSOCK if (listen_on_vsock(config.vsock_port, nmaxclients, &channelsock) == -1) { return EXIT_FAILURE; } if (set_cloexec(channelsock) == -1) { wp_error("Failed to make client socket or its folder cloexec"); close(channelsock); close(cwd_fd); return EXIT_FAILURE; } #endif } pid_t conn_pid; { char linkage[512]; char serversock[256]; char video_str[140]; char remote_display[20]; if (!config.vsock) { sprintf(serversock, "%s-server-%s.sock", socketpath, rbytes); sprintf(linkage, "%s-server-%s.sock:%s-client-%s.sock", socketpath, rbytes, socketpath, rbytes); } else { sprintf(serversock, "%d", config.vsock_port); } sprintf(remote_display, "wayland-%s", rbytes); if (!wayland_display) { wayland_display = remote_display; } int nextra = 14 + debug + oneshot + 2 * (remote_drm_node != NULL) + 2 * (control_path != NULL) + config.video_if_possible + !config.only_linear_dmabuf + 2 * needs_login_shell + 2 * (config.n_worker_threads != 0); char **arglist = calloc((size_t)(argc + nextra), sizeof(char *)); int offset = 0; arglist[offset++] = "ssh"; if (needs_login_shell) { /* Force tty allocation, if we are attempting a * login shell. The user-override is a -T flag, * and a second -t will ensure a login shell * even if `waypipe ssh` was not run from a pty. * Unfortunately, -t disables newline * translation on the local side; see * `log_handler`. 
*/ arglist[offset++] = "-t"; } if (!config.vsock) { arglist[offset++] = "-R"; arglist[offset++] = linkage; } for (int i = 0; i <= dstidx; i++) { arglist[offset + i] = argv[i]; } arglist[dstidx + 1 + offset++] = waypipe_binary; if (debug) { arglist[dstidx + 1 + offset++] = "-d"; } if (oneshot) { arglist[dstidx + 1 + offset++] = "-o"; } /* Always send the compression flag, because the default * was changed from NONE to LZ4. */ arglist[dstidx + 1 + offset++] = "-c"; if (!comp_string) { switch (config.compression) { case COMP_LZ4: comp_string = "lz4"; break; case COMP_ZSTD: comp_string = "zstd"; break; default: comp_string = "none"; break; } } arglist[dstidx + 1 + offset++] = comp_string; if (needs_login_shell) { arglist[dstidx + 1 + offset++] = "--login-shell"; } if (config.video_if_possible) { if (!config.old_video_mode) { char *vid_type = NULL; switch (config.video_fmt) { case VIDEO_H264: vid_type = "h264"; break; case VIDEO_VP9: vid_type = "vp9"; break; case VIDEO_AV1: vid_type = "av1"; break; } sprintf(video_str, "--video=%s,%s,bpf=%d", vid_type, config.prefer_hwvideo ? "hw" : "sw", config.video_bpf); arglist[dstidx + 1 + offset++] = video_str; } else { arglist[dstidx + 1 + offset++] = config.prefer_hwvideo ?
"--hwvideo" : "--video"; } } if (!config.only_linear_dmabuf) { arglist[dstidx + 1 + offset++] = "--allow-tiled"; } if (remote_drm_node) { arglist[dstidx + 1 + offset++] = "--drm-node"; arglist[dstidx + 1 + offset++] = remote_drm_node; } if (config.n_worker_threads != 0) { arglist[dstidx + 1 + offset++] = "--threads"; arglist[dstidx + 1 + offset++] = nthread_string; } if (control_path) { arglist[dstidx + 1 + offset++] = "--control"; arglist[dstidx + 1 + offset++] = control_path; } arglist[dstidx + 1 + offset++] = "--unlink-socket"; arglist[dstidx + 1 + offset++] = "-s"; arglist[dstidx + 1 + offset++] = serversock; arglist[dstidx + 1 + offset++] = "--display"; arglist[dstidx + 1 + offset++] = wayland_display; if (config.vsock) { arglist[dstidx + 1 + offset++] = "--vsock"; } arglist[dstidx + 1 + offset++] = "server"; for (int i = dstidx + 1; i < argc; i++) { arglist[offset + i] = argv[i]; } arglist[argc + offset] = NULL; int err = posix_spawnp(&conn_pid, arglist[0], NULL, NULL, arglist, environ); if (err) { wp_error("Failed to spawn ssh process: %s", strerror(err)); close(channelsock); free(arglist); return EXIT_FAILURE; } free(arglist); } ret = run_client(cwd_fd, client_sock_path.folder, channel_folder_fd, client_sock_path.filename->sun_path, &config, oneshot, wayland_socket, conn_pid, channelsock); if (!config.vsock) { checked_close(channel_folder_fd); } } checked_close(cwd_fd); check_unclosed_fds(); return ret; } waypipe-v0.9.1/test/000077500000000000000000000000001463133614300143505ustar00rootroot00000000000000waypipe-v0.9.1/test/build_matrix.py000077500000000000000000000060571463133614300174200ustar00rootroot00000000000000#!/usr/bin/env python3 import sys, os, subprocess, shutil """ Script to check that Waypipe builds and that tests pass in all of its configurations. 
""" waypipe_root, build_root = sys.argv[1], sys.argv[2] os.makedirs(build_root, exist_ok=True) setups = [ ("regular", ["--buildtype", "debugoptimized"], {}), ("release", ["--buildtype", "release"], {}), ("clang", ["--buildtype", "debugoptimized"], {"CC": "clang"}), ( "clang-tsan", ["--buildtype", "debugoptimized", "-Db_sanitize=thread"], {"CC": "clang"}, ), ( "clang-asan", ["--buildtype", "debugoptimized", "-Db_sanitize=address,undefined"], {"CC": "clang"}, ), ( "empty", [ "--buildtype", "debugoptimized", "-Dwith_video=disabled", "-Dwith_lz4=disabled", "-Dwith_zstd=disabled", "-Dwith_dmabuf=disabled", ], {"CC": "gcc"}, ), ( "novideo", [ "--buildtype", "debugoptimized", "-Dwith_video=disabled", ], {"CC": "gcc"}, ), ( "nolz4", [ "--buildtype", "debugoptimized", "-Dwith_lz4=disabled", ], {"CC": "gcc"}, ), ( "unity", ["--buildtype", "debugoptimized", "--unity", "on", "--unity-size", "400"], {"CC": "gcc", "CFLAGS": "-pedantic -D_GNU_SOURCE"}, ), ( "error", ["--buildtype", "debugoptimized"], {"CC": "gcc", "CFLAGS": "-Wunused-result -std=c11 -pedantic -ggdb3 -O1"}, ), ] main_options = ["video", "dmabuf", "lz4", "zstd", "vaapi"] bool_map = {True: "enabled", False: "disabled"} for compiler in ["gcc", "clang"]: for flags in range(2 ** len(main_options)): bool_options = [(2**i) & flags != 0 for i in range(len(main_options))] name = "-".join( ["poly", compiler] + [m for m, b in zip(main_options, bool_options) if b] ) flag_values = [ "-Dwith_{}={}".format(m, bool_map[b]) for m, b in zip(main_options, bool_options) ] setups.append( (name, ["--buildtype", "debugoptimized"] + flag_values, {"CC": compiler}) ) if len(sys.argv) >= 4: setups = [(s, c, e) for s, c, e in setups if s == sys.argv[3]] base_env = os.environ.copy() for key, options, env in setups: print(key, end=" ") sys.stdout.flush() nenv = base_env.copy() for e in env: nenv[e] = env[e] bdir = os.path.join(build_root, key) try: shutil.rmtree(bdir) except FileNotFoundError: pass r1 = subprocess.run( ["meson", waypipe_root, 
bdir] + options, capture_output=True, env=nenv ) if r1.returncode: print("failed") print(r1.stdout, r1.stderr, r1.returncode) continue r2 = subprocess.run(["ninja", "test"], cwd=bdir, capture_output=True, env=nenv) if r2.returncode: print("failed") print(r2.stdout, r2.stderr, r2.returncode) continue print("passed")
waypipe-v0.9.1/test/common.c
/* * Copyright © 2021 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE.
*/ #include "common.h" #include #include #include #include #include #include #include #include #include uint64_t time_value = 0; uint64_t local_time_offset = 0; void *read_file_into_mem(const char *path, size_t *len) { int fd = open(path, O_RDONLY | O_NOCTTY); if (fd == -1) { fprintf(stderr, "Failed to open '%s'", path); return NULL; } *len = (size_t)lseek(fd, 0, SEEK_END); if (*len == 0) { checked_close(fd); return EXIT_SUCCESS; } lseek(fd, 0, SEEK_SET); void *buf = malloc(*len); if (read(fd, buf, *len) == -1) { return NULL; } checked_close(fd); return buf; } void send_wayland_msg(struct test_state *src, const struct msg msg, struct transfer_queue *transfers) { /* assume every message uses up 1usec */ time_value += 1000; struct char_window proto_mid; // todo: test_(re)alloc for tests, to abort (but still pass?) if // allocations fail? proto_mid.data = calloc(16384, 1); proto_mid.size = 16384; proto_mid.zone_start = 0; proto_mid.zone_end = 0; struct int_window fd_window; fd_window.size = msg.nfds + 1024; fd_window.data = calloc((size_t)fd_window.size, sizeof(int)); fd_window.zone_start = 0; fd_window.zone_end = 0; if (msg.nfds > 0) { memcpy(fd_window.data, msg.fds, sizeof(uint32_t) * (size_t)msg.nfds); } fd_window.zone_end = msg.nfds; /* The protocol source window is an exact copy of the message, and only * zone_start/zone_end are ever modified */ struct char_window proto_src; proto_src.data = calloc((size_t)msg.len, sizeof(uint32_t)); proto_src.size = msg.len * (int)sizeof(uint32_t); memcpy(proto_src.data, msg.data, (size_t)proto_src.size); proto_src.zone_start = 0; proto_src.zone_end = proto_src.size; local_time_offset = src->local_time_offset; parse_and_prune_messages(&src->glob, src->display_side, !src->display_side, &proto_src, &proto_mid, &fd_window); if (fd_window.zone_start != fd_window.zone_end) { wp_error("Not all fds were consumed, final unused window %d %d", fd_window.zone_start, fd_window.zone_end); src->failed = true; goto cleanup; } /* Replace fds 
with RIDs in place */ for (int i = 0; i < fd_window.zone_start; i++) { struct shadow_fd *sfd = get_shadow_for_local_fd( &src->glob.map, fd_window.data[i]); if (!sfd) { /* Autodetect type + create shadow fd */ size_t fdsz = 0; enum fdcat fdtype = get_fd_type(fd_window.data[i], &fdsz); sfd = translate_fd(&src->glob.map, &src->glob.render, &src->glob.threads, fd_window.data[i], fdtype, fdsz, NULL, false); } if (sfd) { fd_window.data[i] = sfd->remote_id; } else { wp_error("failed to translate"); src->failed = true; goto cleanup; } } for (struct shadow_fd_link *lcur = src->glob.map.link.l_next, *lnxt = lcur->l_next; lcur != &src->glob.map.link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *cur = (struct shadow_fd *)lcur; collect_update(&src->glob.threads, cur, transfers, src->config.old_video_mode); destroy_shadow_if_unreferenced(cur); } decref_transferred_rids( &src->glob.map, fd_window.zone_start, fd_window.data); { start_parallel_work(&src->glob.threads, &transfers->async_recv_queue); bool is_done; struct task_data task; while (request_work_task(&src->glob.threads, &task, &is_done)) { run_task(&task, &src->glob.threads.threads[0]); src->glob.threads.tasks_in_progress--; } (void)transfer_load_async(transfers); } for (struct shadow_fd_link *lcur = src->glob.map.link.l_next, *lnxt = lcur->l_next; lcur != &src->glob.map.link; lcur = lnxt, lnxt = lcur->l_next) { /* Note: finish_update() may delete `cur` */ struct shadow_fd *cur = (struct shadow_fd *)lcur; finish_update(cur); destroy_shadow_if_unreferenced(cur); } if (fd_window.zone_start > 0) { size_t tsz = sizeof(uint32_t) * (1 + (size_t)fd_window.zone_start); void *tmsg = calloc(tsz, 1); ((uint32_t *)tmsg)[0] = transfer_header(tsz, WMSG_INJECT_RIDS); memcpy((char *)tmsg + 4, fd_window.data, 4 * (size_t)fd_window.zone_start); transfer_add(transfers, tsz, tmsg); } if (proto_mid.zone_end > 0) { size_t tsz = sizeof(uint32_t) + (size_t)proto_mid.zone_end; void *tmsg = calloc(tsz, 1); ((uint32_t *)tmsg)[0] = 
transfer_header(tsz, WMSG_PROTOCOL); memcpy((char *)tmsg + 4, proto_mid.data, (size_t)proto_mid.zone_end); transfer_add(transfers, tsz, tmsg); } cleanup: free(proto_src.data); free(proto_mid.data); free(fd_window.data); } void receive_wire(struct test_state *dst, struct transfer_queue *transfers) { struct char_window proto_mid; proto_mid.data = NULL; proto_mid.size = 0; proto_mid.zone_start = 0; proto_mid.zone_end = 0; const size_t fd_padding = 1024; struct int_window fd_window; fd_window.data = calloc(fd_padding, 4); fd_window.size = (int)fd_padding; fd_window.zone_start = 0; fd_window.zone_end = 0; struct char_window proto_end; proto_end.data = calloc(16384, 1); proto_end.size = 16384; proto_end.zone_start = 0; proto_end.zone_end = 0; for (int i = 0; i < transfers->end; i++) { char *msg = transfers->vecs[i].iov_base; size_t real_sz = transfers->vecs[i].iov_len; uint32_t header = ((uint32_t *)msg)[0]; size_t sz = transfer_size(header); if (sz != real_sz) { wp_error("Transfer nominal size %zu did not match actual %zu", sz, real_sz); goto cleanup; } /* note: we assume there is at most one inj_rid message * per batch*/ if (transfer_type(header) == WMSG_PROTOCOL) { void *ndata = realloc(proto_mid.data, (size_t)proto_mid.zone_end + (sz - 4)); if (!ndata) { wp_error("Failed to reallocate recv side proto data"); goto cleanup; } proto_mid.data = ndata; memcpy(proto_mid.data + proto_mid.zone_end, msg + 4, sz - 4); proto_mid.zone_end += (int)(sz - 4); proto_mid.size = proto_mid.zone_end; } else if (transfer_type(header) == WMSG_INJECT_RIDS) { void *ndata = realloc(fd_window.data, sizeof(int) * (size_t)fd_window.zone_end + (sz - 4) + fd_padding); if (!ndata) { wp_error("Failed to reallocate recv side fd data"); goto cleanup; } fd_window.data = ndata; memcpy(fd_window.data + fd_window.zone_end, msg + 4, sz - 4); fd_window.zone_end += (int)(sz - 4) / 4; fd_window.size = fd_window.zone_end; } else { int rid = (int)((uint32_t *)msg)[1]; struct bytebuf bb; bb.data = msg; bb.size 
= sz; int r = apply_update(&dst->glob.map, &dst->glob.threads, &dst->glob.render, transfer_type(header), rid, &bb); if (r < 0) { wp_error("Applying update failed"); goto cleanup; } } } /* Convert RIDs back to fds */ for (int i = fd_window.zone_start; i < fd_window.zone_end; i++) { struct shadow_fd *sfd = get_shadow_for_rid( &dst->glob.map, fd_window.data[i]); if (sfd) { fd_window.data[i] = sfd->fd_local; } else { fd_window.data[i] = -1; wp_error("Failed to get shadow_fd for RID=%d, index %d", fd_window.data[i], i); } } local_time_offset = dst->local_time_offset; parse_and_prune_messages(&dst->glob, dst->display_side, dst->display_side, &proto_mid, &proto_end, &fd_window); /* Finally, take the output fds, and append them to the output stack; * ditto with the output messages. Assume for now messages are 1-in * 1-out */ dst->nrcvd++; dst->rcvd = realloc(dst->rcvd, sizeof(struct msg) * (size_t)dst->nrcvd); struct msg *lastmsg = &dst->rcvd[dst->nrcvd - 1]; memset(lastmsg, 0, sizeof(struct msg)); /* Save the fds that were marked used (which should be all of them) */ if (fd_window.zone_start > 0) { lastmsg->nfds = fd_window.zone_start; lastmsg->fds = malloc( sizeof(int) * (size_t)fd_window.zone_start); for (int i = 0; i < fd_window.zone_start; i++) { /* duplicate fd, so it's still usable if shadowfd gone */ lastmsg->fds[i] = dup(fd_window.data[i]); } } if (proto_end.zone_end > 0) { lastmsg->len = proto_end.zone_end; lastmsg->data = malloc( sizeof(uint32_t) * (size_t)proto_end.zone_end); memcpy(lastmsg->data, proto_end.data, (size_t)proto_end.zone_end); } cleanup: free(proto_end.data); free(proto_mid.data); free(fd_window.data); } /* Sends a Wayland protocol message to src, and records output messages * in dst. 
*/ void send_protocol_msg(struct test_state *src, struct test_state *dst, const struct msg msg) { if (src->failed || dst->failed) { wp_error("at least one side broken, skipping msg"); return; } struct transfer_queue transfers; memset(&transfers, 0, sizeof(transfers)); pthread_mutex_init(&transfers.async_recv_queue.lock, NULL); /* On destination side, a bit easier; process transfers, and * then deliver all messages */ send_wayland_msg(src, msg, &transfers); receive_wire(dst, &transfers); cleanup_transfer_queue(&transfers); } int setup_state(struct test_state *s, bool display_side, bool has_gpu) { memset(s, 0, sizeof(*s)); s->config = (struct main_config){.drm_node = NULL, .n_worker_threads = 1, .compression = COMP_NONE, .compression_level = 0, .no_gpu = !has_gpu, .only_linear_dmabuf = true, .video_if_possible = false, .video_bpf = 120000, .video_fmt = VIDEO_H264, .prefer_hwvideo = false, .old_video_mode = false}; s->glob.config = &s->config; s->glob.render = (struct render_data){ .drm_node_path = s->config.drm_node, .drm_fd = -1, .dev = NULL, .disabled = s->config.no_gpu, .av_disabled = s->config.no_gpu || !s->config.prefer_hwvideo, .av_bpf = s->config.video_bpf, .av_video_fmt = (int)s->config.video_fmt, .av_hwdevice_ref = NULL, .av_drmdevice_ref = NULL, .av_vadisplay = NULL, .av_copy_config = 0, }; // leave render data to be set up on demand, just as in // main_loop? // TODO: what compositors _don't_ support GPU stuff? 
setup_thread_pool(&s->glob.threads, s->config.compression, s->config.compression_level, s->config.n_worker_threads); setup_translation_map(&s->glob.map, display_side); init_message_tracker(&s->glob.tracker); setup_video_logging(); s->display_side = display_side; // TODO: make a transfer queue for outgoing stuff return 0; } void cleanup_state(struct test_state *s) { cleanup_message_tracker(&s->glob.tracker); cleanup_translation_map(&s->glob.map); cleanup_render_data(&s->glob.render); cleanup_hwcontext(&s->glob.render); cleanup_thread_pool(&s->glob.threads); for (int i = 0; i < s->nrcvd; i++) { free(s->rcvd[i].data); for (int j = 0; j < s->rcvd[i].nfds; j++) { checked_close(s->rcvd[i].fds[j]); } free(s->rcvd[i].fds); } free(s->rcvd); } void test_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...) { (void)level; printf("[%s:%d] ", file, line); va_list args; va_start(args, fmt); vprintf(fmt, args); va_end(args); printf("\n"); } void test_atomic_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...) 
{ pthread_t tid = pthread_self(); char msg[1024]; int nwri = 0; nwri += sprintf(msg + nwri, "%" PRIx64 " [%s:%3d] ", (uint64_t)tid, file, line); va_list args; va_start(args, fmt); nwri += vsnprintf(msg + nwri, (size_t)(1022 - nwri), fmt, args); va_end(args); msg[nwri++] = '\n'; msg[nwri] = 0; (void)write(STDOUT_FILENO, msg, (size_t)nwri); (void)level; }
waypipe-v0.9.1/test/common.h
/* * Copyright © 2021 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE.
*/ #ifndef WAYPIPE_TESTCOMMON_H #define WAYPIPE_TESTCOMMON_H #include "main.h" #include "parsing.h" #include "util.h" /** a simple log handler to STDOUT for use by test programs */ void test_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...); void test_atomic_log_handler(const char *file, int line, enum log_level level, const char *fmt, ...); extern uint64_t time_value; extern uint64_t local_time_offset; void *read_file_into_mem(const char *path, size_t *len); struct msg { uint32_t *data; int len; int *fds; int nfds; }; struct test_state { struct main_config config; struct globals glob; bool display_side; bool failed; /* messages received from the other side */ int nrcvd; struct msg *rcvd; uint64_t local_time_offset; }; void send_wayland_msg(struct test_state *src, const struct msg msg, struct transfer_queue *queue); void receive_wire(struct test_state *src, struct transfer_queue *queue); void send_protocol_msg(struct test_state *src, struct test_state *dst, const struct msg msg); int setup_state(struct test_state *s, bool display_side, bool has_gpu); void cleanup_state(struct test_state *s); #endif /* WAYPIPE_TESTCOMMON_H */
waypipe-v0.9.1/test/damage_merge.c
/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software.
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "shadow.h" #include <stdint.h> #include <stdio.h> #include <stdlib.h> #include <string.h> static void fill_overcopy_pattern( int Ntotal, int margin, struct ext_interval *data) { int stride = 100 + margin + 1; for (int i = 0; i < Ntotal; i++) { data[i] = (struct ext_interval){ .start = (i) % (Ntotal / 2) * stride, .width = 100 - (i > Ntotal / 2), .rep = 100, .stride = stride, }; } } static void fill_line_crossing_pattern( int Ntotal, int margin, struct ext_interval *data) { int step = (margin + 1); int boxsize = ceildiv(Ntotal, 2) * step; for (int i = 0; i < Ntotal; i++) { if (i % 2 == 0) { data[i] = (struct ext_interval){ .start = (i / 2) * step, .width = 1, .rep = boxsize, .stride = boxsize, }; } else { data[i] = (struct ext_interval){ .start = (i / 2) * boxsize, .width = boxsize, .rep = 1, .stride = 0, }; } } } static void fill_vline_pattern( int Ntotal, int margin, struct ext_interval *data) { int step = (margin + 2); int stride = Ntotal * step; for (int i = 0; i < Ntotal; i++) { data[i] = (struct ext_interval){ .start = i * step, .width = 1, .rep = 2, .stride = stride, }; } } static int randint(int max) { int cap = RAND_MAX - RAND_MAX % max; while (1) { int x = rand(); if (x >= cap) { continue; } return x % max; } } static void fill_circle_pattern( int Ntotal, int margin, struct ext_interval *data) { srand((uint32_t)(Ntotal + 165 * margin)); int i = 0; int R = (int)((2 * margin + Ntotal) * 0.3); int s = (2 * margin + Ntotal) / 2; while (i < Ntotal) { int x = randint(2 * R); int w = randint(2 * R - x) +
1; int y = randint(2 * R); int h = randint(2 * R - y) + 1; int64_t x2a = (x - R) * (x - R); int64_t x2b = (x + w - R) * (x + w - R); int64_t x2 = x2a < x2b ? x2b : x2a; int64_t y2a = (y - R) * (y - R); int64_t y2b = (y + w - R) * (y + w - R); int64_t y2 = y2a < y2b ? y2b : y2a; if (x2 + y2 >= R * R) { continue; } data[i++] = (struct ext_interval){ .start = s * y + x, .width = w, .rep = h, .stride = s, }; } } static void fill_snow_pattern(int Ntotal, int margin, struct ext_interval *data) { srand((uint32_t)(Ntotal + 165 * margin)); int size = 4; while (size * size < Ntotal * margin) { size = size + size / 4; } for (int i = 0; i < Ntotal; i++) { int x = randint(size); int y = randint(size); data[i] = (struct ext_interval){ .start = size * y + x, .width = 1, .rep = 1, .stride = size, }; } } struct pattern { const char *name; void (*func)(int Ntotal, int margin, struct ext_interval *data); }; static const struct pattern patterns[] = {{"overcopy", fill_overcopy_pattern}, {"line-crossing", fill_line_crossing_pattern}, {"circle", fill_circle_pattern}, {"snow", fill_snow_pattern}, {"vline", fill_vline_pattern}, {NULL, NULL}}; static inline int eint_low(const struct ext_interval i) { return i.start; } static inline int eint_high(const struct ext_interval i) { return i.start + (i.rep - 1) * i.stride + i.width; } static void write_eint( struct ext_interval e, char *buf, int minv, uint8_t value) { for (int k = 0; k < e.rep; k++) { memset(&buf[e.start + e.stride * k - minv], value, (size_t)e.width); } } /** Verify that: * - the new set of intervals covers the old * - the new set of intervals is disjoint within margin */ static bool check_solution_properties(int nsub, const struct ext_interval *sub, int nsup, const struct interval *sup, int margin) { int minv = INT32_MAX, maxv = INT32_MIN; for (int i = 0; i < nsup; i++) { minv = min(minv, sup[i].start); maxv = max(maxv, sup[i].end); } for (int i = 0; i < nsub; i++) { minv = min(minv, eint_low(sub[i])); maxv = max(maxv, 
eint_high(sub[i])); } if (minv > maxv) { return true; } minv -= margin; maxv += margin; char *test = calloc((size_t)(maxv - minv), 1); // Fast & stupid containment test for (int i = 0; i < nsub; i++) { write_eint(sub[i], test, minv, 1); } for (int i = 0; i < nsup; i++) { struct interval e = sup[i]; if (memchr(&test[e.start - minv - margin], 2, (size_t)(e.end - e.start + 2 * margin)) != NULL) { printf("Internal overlap failure\n"); free(test); return false; } memset(&test[e.start - minv], 2, (size_t)(e.end - e.start)); } bool yes = memchr(test, 1, (size_t)(maxv - minv)) == NULL; if (!yes) { int count = 0; for (int i = 0; i < maxv - minv; i++) { count += test[i] == 1; } printf("Fail count: %d/%d\n", count, maxv - minv); if (maxv - minv < 200) { for (int i = 0; i < maxv - minv; i++) { printf("%d", test[i]); } printf("\n"); } } free(test); return yes; } static int convert_to_simple( struct interval *vec, int count, const struct ext_interval *ext) { int k = 0; for (int i = 0; i < count; i++) { for (int j = 0; j < ext[i].rep; j++) { vec[k].start = ext[i].start + j * ext[i].stride; vec[k].end = vec[k].start + ext[i].width; k++; } } return k; } static int simple_lexsort(const void *L, const void *R) { const struct interval *l = L; const struct interval *r = R; if (l->start != r->start) { return l->start - r->start; } return l->end - r->end; } /** A merge operation which reduces the compound intervals to simple intervals, * and then merges them that way. After all, this only expands memory use and * runtime by a factor of screen height... 
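The comment above describes the fallback merge strategy: expand every compound (strided) interval into simple ones, sort lexicographically, and coalesce runs whose gaps are smaller than the margin. A standalone sketch of that coalescing pass (names here are illustrative, not waypipe's):

```c
#include <assert.h>
#include <stdlib.h>

struct iv {
	int start;
	int end;
};

static int iv_lexsort(const void *L, const void *R)
{
	const struct iv *l = L, *r = R;
	if (l->start != r->start) {
		return l->start - r->start;
	}
	return l->end - r->end;
}

/* Sort intervals, then compact in place: any interval starting within
 * `margin` of the current group's end is absorbed into it. Returns the
 * number of merged intervals. */
static int merge_with_margin(struct iv *vec, int n, int margin)
{
	qsort(vec, (size_t)n, sizeof(struct iv), iv_lexsort);
	int r = 0, w = 0;
	while (r < n) {
		int start = vec[r].start;
		int end = vec[r].end;
		r++;
		while (r < n && vec[r].start < end + margin) {
			end = end > vec[r].end ? end : vec[r].end;
			r++;
		}
		vec[w].start = start;
		vec[w].end = end;
		w++;
	}
	return w;
}
```

The single pass works because, after the sort, every interval that can merge with the current group starts no earlier than the group does.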
*/ static void __attribute__((noinline)) merge_simple(const int old_count, struct ext_interval *old_list, const int new_count, const struct ext_interval *const new_list, int *dst_count, struct interval **dst_list, int merge_margin) { int nintervals = 0; for (int i = 0; i < old_count; i++) { nintervals += old_list[i].rep; } for (int i = 0; i < new_count; i++) { nintervals += new_list[i].rep; } struct interval *vec = malloc((size_t)nintervals * sizeof(struct interval)); int base = convert_to_simple(vec, old_count, old_list); convert_to_simple(&vec[base], new_count, new_list); // divide and conquer would be faster here qsort(vec, (size_t)nintervals, sizeof(struct interval), simple_lexsort); int r = 0, w = 0; while (r < nintervals) { // inside loop. int end = vec[w].end; r++; // the interval already contains itself while (r < nintervals && vec[r].start < end + merge_margin) { end = max(end, vec[r].end); r++; } vec[w].end = end; w++; if (r < nintervals) { vec[w] = vec[r]; } } *dst_list = vec; *dst_count = w; } static int get_coverage(const int c, const struct interval *li) { int n = 0; for (int i = 0; i < c; i++) { n += li[i].end - li[i].start; } return n; } log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; bool all_success = true; srand(0); // no larger, because e.g. 
test sizes are (margins*N)^2 int margins[] = {2, 11, 32, 1}; int nvec[] = {1000, 50, 10, 30}; for (int z = 0; z < (int)(sizeof(nvec) / sizeof(nvec[0])); z++) { for (int ip = 0; patterns[ip].name; ip++) { /* Pattern tests: we generate a given pattern of damage * rectangles, apply the merge function, and verify that * all the desired result properties hold */ struct ext_interval *data = calloc((size_t)nvec[z], sizeof(struct ext_interval)); printf("\n---- pattern=%s, N=%d, margin=%d\n", patterns[ip].name, nvec[z], margins[z]); (*patterns[ip].func)(nvec[z], margins[z], data); // check that minv >= 0, maxv is <= 1GB int64_t minv = 0, maxv = 0; for (int i = 0; i < nvec[z]; i++) { int64_t high = data[i].start + ((int64_t)data[i].rep) * data[i].stride + data[i].width; maxv = maxv > high ? maxv : high; minv = minv < data[i].start ? minv : data[i].start; } if (minv < 0) { printf("generated interval set violates lower bound, skipping\n"); continue; } if (maxv > 0x40000000LL) { printf("generated interval set would use too much memory to check, skipping\n"); continue; } const char *names[2] = {"simple", "merges"}; for (int k = 0; k < 2; k++) { int dst_count = 0; struct interval *dst_list = NULL; int margin = margins[z]; struct timespec t0, t1; clock_gettime(CLOCK_MONOTONIC, &t0); if (k == 0) { merge_simple(0, NULL, nvec[z], data, &dst_count, &dst_list, margin); } else if (k == 1) { merge_mergesort(0, NULL, nvec[z], data, &dst_count, &dst_list, margin, 0); } clock_gettime(CLOCK_MONOTONIC, &t1); double elapsed01 = 1.0 * (double)(t1.tv_sec - t0.tv_sec) + 1e-9 * (double)(t1.tv_nsec - t0.tv_nsec); bool pass = check_solution_properties(nvec[z], data, dst_count, dst_list, margins[z]); all_success &= pass; int coverage = get_coverage( dst_count, dst_list); printf("%s operation took %9.5f ms, %d intervals, %d bytes, %s\n", names[k], elapsed01 * 1e3, dst_count, coverage, pass ? "pass" : "FAIL"); free(dst_list); } free(data); } } return all_success ? 
EXIT_SUCCESS : EXIT_FAILURE;
}
waypipe-v0.9.1/test/diff_roundtrip.c
/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
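The benchmark loop above times each merge with CLOCK_MONOTONIC and combines the two timespec fields into seconds. That conversion can be factored into a small helper, sketched here for clarity:

```c
#include <assert.h>
#include <time.h>

/* Elapsed seconds between two CLOCK_MONOTONIC samples, combining the
 * seconds and nanoseconds fields exactly as the benchmark loop does. */
static double elapsed_sec(struct timespec t0, struct timespec t1)
{
	return 1.0 * (double)(t1.tv_sec - t0.tv_sec) +
	       1e-9 * (double)(t1.tv_nsec - t0.tv_nsec);
}
```

Because the nanosecond difference may be negative when the second field rolls over, the two terms must be summed as signed values, as above.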
*/ #include "common.h" #include "shadow.h" #include #include #include #include static int64_t rand_gap_fill(char *data, size_t size, int max_run) { if (max_run == -1) { memset(data, rand(), size); return 1; } else if (max_run == -2) { memset(data, 0, size); return 0; } max_run = max(2, max_run); size_t pos = 0; int64_t nruns = 0; while (pos < size) { int gap1 = (rand() % max_run); gap1 = min((int)(size - pos), gap1); pos += (size_t)gap1; int gap2 = (rand() % max_run); gap2 = min((int)(size - pos), gap2); int val = rand(); memset(&data[pos], val, (size_t)gap2); pos += (size_t)gap2; nruns++; } return nruns; } struct subtest { size_t size; int max_gap; uint32_t seed; int shards; }; static const struct subtest subtests[] = { {256, 128, 0x11, 3}, {333333, 128, 0x11, 3}, {39, 2, 0x13, 17}, {10000000, 262144, 0x21, 1}, {4, 4, 0x41, 1}, {65537, 177, 0x51, 1}, {17777, 2, 0x61, 1}, {60005, 60005, 0x71, 1}, {1 << 16, -1, 0x71, 4}, {1 << 16, -2, 0x71, 4}, {1 << 24, -1, 0x71, 4}, {1 << 24, -2, 0x71, 4}, }; static const enum diff_type diff_types[5] = { DIFF_AVX512F, DIFF_AVX2, DIFF_SSE3, DIFF_NEON, DIFF_C, }; static const char *diff_names[5] = { "avx512", "avx2 ", "sse3 ", "neon ", "plainC", }; static bool run_subtest(int i, const struct subtest test, char *diff, char *source, char *mirror, char *target1, char *target2, interval_diff_fn_t diff_fn, int alignment_bits, const char *diff_name) { uint64_t ns01 = 0, ns12 = 0; int64_t nruns = 0; size_t net_diffsize = 0; srand((uint32_t)test.seed); memset(mirror, 0, test.size); memset(target1, 0, test.size); memset(target2, 0, test.size); int roughtime = (int)test.size + test.shards * 500; int repetitions = min(100, max(1000000000 / roughtime, 1)); bool all_success = true; for (int x = 0; x < repetitions; x++) { nruns += rand_gap_fill(source, test.size, test.max_gap); net_diffsize = 0; for (int s = 0; s < test.shards; s++) { struct interval damage; damage.start = split_interval( 0, (int)test.size, test.shards, s); damage.end = 
split_interval( 0, (int)test.size, test.shards, s + 1); int alignment = 1 << alignment_bits; damage.start = alignment * (damage.start / alignment); damage.end = alignment * (damage.end / alignment); struct timespec t0, t1, t2; clock_gettime(CLOCK_MONOTONIC, &t0); size_t diffsize = 0; if (damage.start < damage.end) { diffsize = construct_diff_core(diff_fn, alignment_bits, &damage, 1, mirror, source, diff); } size_t ntrailing = 0; if (s == test.shards - 1) { ntrailing = construct_diff_trailing(test.size, alignment_bits, mirror, source, diff + diffsize); } clock_gettime(CLOCK_MONOTONIC, &t1); apply_diff(test.size, target1, target2, diffsize, ntrailing, diff); clock_gettime(CLOCK_MONOTONIC, &t2); ns01 += (uint64_t)((t1.tv_sec - t0.tv_sec) * 1000000000LL + (t1.tv_nsec - t0.tv_nsec)); ns12 += (uint64_t)((t2.tv_sec - t1.tv_sec) * 1000000000LL + (t2.tv_nsec - t1.tv_nsec)); net_diffsize += diffsize + ntrailing; } if (memcmp(target1, source, test.size)) { printf("Failed to synchronize\n"); int ndiff = 0; for (size_t k = 0; k < test.size; k++) { if (target1[k] != source[k] || mirror[k] != source[k]) { if (ndiff > 300) { printf("and still more differences\n"); break; } printf("i %d: target1 %02x mirror %02x source %02x\n", (int)k, (uint8_t)target1[k], (uint8_t)mirror[k], (uint8_t)source[k]); ndiff++; } } all_success = false; break; } } double scale = 1.0 / ((double)repetitions * (double)test.size); printf("%s #%2d, : %6.3f,%6.3f,%6.3f ns/byte create,apply,net (%d/%d@%d), %.1f bytes/run\n", diff_name, i, (double)ns01 * scale, (double)ns12 * scale, (double)(ns01 + ns12) * scale, (int)net_diffsize, (int)test.size, test.shards, (double)repetitions * (double)test.size / (double)nruns); return all_success; } log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; bool all_success = true; const int nsubtests = (sizeof(subtests) / sizeof(subtests[0])); for (int i = 0; i < nsubtests; i++) { struct subtest test = 
subtests[i];
		/* Use maximum alignment */
		const size_t bufsize = alignz(test.size + 8 + 64, 64);
		char *diff = aligned_alloc(64, bufsize);
		char *source = aligned_alloc(64, bufsize);
		char *mirror = aligned_alloc(64, bufsize);
		char *target1 = aligned_alloc(64, bufsize);
		char *target2 = aligned_alloc(64, bufsize);
		const int ntypes = sizeof(diff_types) / sizeof(diff_types[0]);
		for (int a = 0; a < ntypes; a++) {
			int alignment_bits;
			interval_diff_fn_t diff_fn = get_diff_function(
					diff_types[a], &alignment_bits);
			if (!diff_fn) {
				continue;
			}
			all_success &= run_subtest(i, test, diff, source,
					mirror, target1, target2, diff_fn,
					alignment_bits, diff_names[a]);
		}
		free(diff);
		free(source);
		free(mirror);
		free(target1);
		free(target2);
	}
	return all_success ? EXIT_SUCCESS : EXIT_FAILURE;
}
waypipe-v0.9.1/test/fake_ssh.c
/*
 * Copyright © 2021 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT.
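run_subtest() above shards the damage range with split_interval() and rounds each boundary down to the diff function's alignment. The helper's exact definition lives elsewhere in the tree; a plausible proportional-split sketch (an assumption for illustration, not waypipe's verbatim code):

```c
#include <assert.h>

/* Boundary i (0..nshards) of an even partition of [lo, hi) into
 * nshards pieces; assumed behavior of waypipe's split_interval(). */
static int split_interval_demo(int lo, int hi, int nshards, int i)
{
	return lo + (int)(((long long)(hi - lo) * i) / nshards);
}

/* Round v down to a multiple of 1 << alignment_bits, as done to the
 * damage boundaries before diffing. */
static int align_down(int v, int alignment_bits)
{
	int alignment = 1 << alignment_bits;
	return alignment * (v / alignment);
}
```

Boundary 0 is always `lo` and boundary `nshards` is always `hi`, so consecutive boundaries tile the range with no gaps or overlaps.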
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include #include #include #include #include static int usage(void) { fprintf(stderr, "usage: fake_ssh [-R A:B] [-t] destination command...\n"); return EXIT_FAILURE; } int main(int argc, char **argv) { if (argc < 2) { return usage(); } argv++; argc--; bool pseudoterminal = false; char *link = NULL; char *destination = NULL; while (argc > 0) { if (strcmp(argv[0], "-t") == 0) { pseudoterminal = true; argv++; argc--; } else if (strcmp(argv[0], "-R") == 0) { link = argv[1]; argv += 2; argc -= 2; } else { destination = argv[0]; argv++; argc--; break; } } if (link) { char *p1 = link, *p2 = NULL; for (char *c = link; *c; c++) { if (*c == ':') { *c = '\0'; p2 = c + 1; break; } } if (!p2) { fprintf(stderr, "Failed to split forwarding descriptor '%s'\n", p1); return EXIT_FAILURE; } unlink(p1); if (symlink(p2, p1) == -1) { fprintf(stderr, "Symlinking '%s' to '%s' failed\n", p2, p1); return EXIT_FAILURE; } } (void)destination; (void)pseudoterminal; if (execvp(argv[0], argv) == -1) { fprintf(stderr, "Failed to run program '%s'\n", argv[0]); return EXIT_FAILURE; } return EXIT_SUCCESS; } waypipe-v0.9.1/test/fd_mirror.c000066400000000000000000000323111463133614300164770ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this 
permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "shadow.h" #include #include #include #include #include #include #include #include #include struct compression_settings { enum compression_mode mode; int level; }; static const struct compression_settings comp_modes[] = { {COMP_NONE, 0}, #ifdef HAS_LZ4 {COMP_LZ4, 1}, #endif #ifdef HAS_ZSTD {COMP_ZSTD, 5}, #endif }; #ifdef HAS_DMABUF #include #define TEST_2CPP_FORMAT GBM_FORMAT_GR88 #else #define TEST_2CPP_FORMAT 0 #endif static int update_file(int file_fd, struct gbm_bo *bo, size_t sz, int seqno) { (void)bo; if (rand() % 11 == 0) { /* no change */ return 0; } void *data = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, file_fd, 0); if (data == MAP_FAILED) { return -1; } size_t start = (size_t)rand() % sz; size_t end = (size_t)rand() % sz; if (start > end) { size_t tmp = start; start = end; end = tmp; } memset((char *)data + start, seqno, end - start); munmap(data, sz); return (int)(end - start); } static int update_dmabuf(int file_fd, struct gbm_bo *bo, size_t sz, int seqno) { (void)file_fd; if (rand() % 11 == 0) { /* no change */ return 0; } void *map_handle = NULL; uint32_t stride; void *data = map_dmabuf(bo, true, &map_handle, &stride); if (data == MAP_FAILED) { return -1; } size_t start = (size_t)rand() % sz; size_t end = (size_t)rand() % sz; if (start > end) { size_t tmp = start; start = end; end = tmp; } memset((char *)data + 
start, seqno, end - start); unmap_dmabuf(bo, map_handle); return (int)(end - start); } static struct bytebuf combine_transfer_blocks(struct transfer_queue *td) { size_t net_size = 0; for (int i = td->start; i < td->end; i++) { net_size += td->vecs[i].iov_len; } struct bytebuf ret_block; ret_block.size = net_size; ret_block.data = malloc(net_size); size_t pos = 0; for (int i = td->start; i < td->end; i++) { memcpy(ret_block.data + pos, td->vecs[i].iov_base, td->vecs[i].iov_len); pos += td->vecs[i].iov_len; } return ret_block; } static bool check_match(int orig_fd, int copy_fd, struct gbm_bo *orig_bo, struct gbm_bo *copy_bo, enum fdcat otype, enum fdcat ctype) { if (ctype != otype) { wp_error("Mirrored file descriptor has different type: ot=%d ct=%d", otype, ctype); return false; } void *ohandle = NULL, *chandle = NULL; void *cdata = NULL, *odata = NULL; bool pass; if (otype == FDC_FILE) { struct stat ofsdata = {0}, cfsdata = {0}; if (fstat(orig_fd, &ofsdata) == -1) { wp_error("Failed to stat original file descriptor"); return false; } if (fstat(copy_fd, &cfsdata) == -1) { wp_error("Failed to stat copied file descriptor"); return false; } size_t csz = (size_t)cfsdata.st_size; size_t osz = (size_t)ofsdata.st_size; if (csz != osz) { wp_error("Mirrored file descriptor has different size: os=%d cs=%d", (int)osz, (int)csz); return false; } cdata = mmap(NULL, csz, PROT_READ, MAP_SHARED, copy_fd, 0); if (cdata == MAP_FAILED) { return false; } odata = mmap(NULL, osz, PROT_READ, MAP_SHARED, orig_fd, 0); if (odata == MAP_FAILED) { munmap(cdata, csz); return false; } pass = memcmp(cdata, odata, csz) == 0; munmap(odata, osz); munmap(cdata, csz); } else if (otype == FDC_DMABUF) { uint32_t copy_stride, orig_stride; cdata = map_dmabuf(copy_bo, false, &chandle, ©_stride); if (cdata == NULL) { return false; } odata = map_dmabuf(orig_bo, false, &ohandle, &orig_stride); if (odata == NULL) { unmap_dmabuf(copy_bo, chandle); return false; } /* todo: check the file descriptor contents */ 
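combine_transfer_blocks() above flattens a run of iovecs from the transfer queue into one contiguous buffer so the result can be replayed as a byte stream. The same pattern in isolation (helper name chosen here):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/* Concatenate iovecs [start, end) into a single malloc'd buffer;
 * writes the total length to *total. Caller frees the result. */
static char *flatten_iovecs(const struct iovec *vecs, int start, int end,
		size_t *total)
{
	size_t net = 0;
	for (int i = start; i < end; i++) {
		net += vecs[i].iov_len;
	}
	char *buf = malloc(net ? net : 1);
	size_t pos = 0;
	for (int i = start; i < end; i++) {
		memcpy(buf + pos, vecs[i].iov_base, vecs[i].iov_len);
		pos += vecs[i].iov_len;
	}
	*total = net;
	return buf;
}
```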
pass = true; unmap_dmabuf(orig_bo, ohandle); unmap_dmabuf(copy_bo, chandle); } else { return false; } if (!pass) { wp_error("Mirrored file descriptor contents differ"); } return pass; } static void wait_for_thread_pool(struct thread_pool *pool) { bool done = false; while (!done) { uint8_t flush[64]; (void)read(pool->selfpipe_r, flush, sizeof(flush)); /* Also run tasks on main thread, just like the real version */ // TODO: create a 'threadpool.c' struct task_data task; bool has_task = request_work_task(pool, &task, &done); if (has_task) { run_task(&task, &pool->threads[0]); pthread_mutex_lock(&pool->work_mutex); pool->tasks_in_progress--; pthread_mutex_unlock(&pool->work_mutex); /* To skip the next poll */ } else { /* Wait a short amount */ struct timespec waitspec; waitspec.tv_sec = 0; waitspec.tv_nsec = 100000; nanosleep(&waitspec, NULL); } } } static bool test_transfer(struct fd_translation_map *src_map, struct fd_translation_map *dst_map, struct thread_pool *src_pool, struct thread_pool *dst_pool, int rid, bool expect_changes, struct render_data *render_data) { struct transfer_queue transfer_data; memset(&transfer_data, 0, sizeof(struct transfer_queue)); pthread_mutex_init(&transfer_data.async_recv_queue.lock, NULL); struct shadow_fd *src_shadow = get_shadow_for_rid(src_map, rid); collect_update(src_pool, src_shadow, &transfer_data, false); start_parallel_work(src_pool, &transfer_data.async_recv_queue); wait_for_thread_pool(src_pool); finish_update(src_shadow); transfer_load_async(&transfer_data); if (!expect_changes) { size_t ns = 0; for (int i = transfer_data.start; i < transfer_data.end; i++) { ns += transfer_data.vecs[i].iov_len; } if (transfer_data.end == transfer_data.start) { /* nothing sent */ cleanup_transfer_queue(&transfer_data); return true; } /* Redundant transfers are acceptable, if inefficient */ wp_error("Collecting updates gave a transfer (%zd bytes, %d blocks) when none was expected", ns, transfer_data.end - transfer_data.start); } if 
(transfer_data.end == transfer_data.start) { wp_error("Collecting updates gave a unexpected number (%d) of transfers", transfer_data.end - transfer_data.start); cleanup_transfer_queue(&transfer_data); return false; } struct bytebuf res = combine_transfer_blocks(&transfer_data); cleanup_transfer_queue(&transfer_data); size_t start = 0; while (start < res.size) { struct bytebuf tmp; tmp.data = &res.data[start]; uint32_t hb = ((uint32_t *)tmp.data)[0]; int32_t xid = ((int32_t *)tmp.data)[1]; tmp.size = transfer_size(hb); apply_update(dst_map, dst_pool, render_data, transfer_type(hb), xid, &tmp); start += alignz(tmp.size, 4); } free(res.data); /* first round, this only exists after the transfer */ struct shadow_fd *dst_shadow = get_shadow_for_rid(dst_map, rid); return check_match(src_shadow->fd_local, dst_shadow->fd_local, src_shadow->dmabuf_bo, dst_shadow->dmabuf_bo, src_shadow->type, dst_shadow->type); } /* This test closes the provided file fd */ static bool test_mirror(int new_file_fd, size_t sz, int (*update)(int fd, struct gbm_bo *bo, size_t sz, int seqno), struct compression_settings comp_mode, int n_src_threads, int n_dst_threads, struct render_data *rd, const struct dmabuf_slice_data *slice_data) { struct fd_translation_map src_map; setup_translation_map(&src_map, false); struct thread_pool src_pool; setup_thread_pool(&src_pool, comp_mode.mode, comp_mode.level, n_src_threads); struct fd_translation_map dst_map; setup_translation_map(&dst_map, true); struct thread_pool dst_pool; setup_thread_pool(&dst_pool, comp_mode.mode, comp_mode.level, n_dst_threads); size_t fdsz = 0; enum fdcat fdtype; if (slice_data) { fdtype = FDC_DMABUF; } else { fdtype = get_fd_type(new_file_fd, &fdsz); } struct shadow_fd *src_shadow = translate_fd(&src_map, rd, NULL, new_file_fd, fdtype, fdsz, slice_data, false); struct shadow_fd *dst_shadow = NULL; int rid = src_shadow->remote_id; bool pass = true; for (int i = 0; i < 7; i++) { bool fwd = i == 0 || i % 2; int target_fd = fwd ? 
src_shadow->fd_local : dst_shadow->fd_local; struct gbm_bo *target_bo = fwd ? src_shadow->dmabuf_bo : dst_shadow->dmabuf_bo; bool expect_changes = false; if (i == 5 && fdtype == FDC_FILE) { sz = (sz * 7) / 5; if (ftruncate(target_fd, (off_t)sz) == -1) { wp_error("failed to resize file"); break; } extend_shm_shadow(fwd ? &src_pool : &dst_pool, fwd ? src_shadow : dst_shadow, sz); expect_changes = true; } int ndiff = i > 0 ? (*update)(target_fd, target_bo, sz, i) : (int)sz; if (ndiff == -1) { pass = false; break; } expect_changes = expect_changes || (ndiff > 0); bool subpass; if (fwd) { src_shadow->is_dirty = true; damage_everything(&src_shadow->damage); subpass = test_transfer(&src_map, &dst_map, &src_pool, &dst_pool, rid, expect_changes, rd); } else { dst_shadow->is_dirty = true; damage_everything(&dst_shadow->damage); subpass = test_transfer(&dst_map, &src_map, &dst_pool, &dst_pool, rid, expect_changes, rd); } pass &= subpass; if (!pass) { break; } dst_shadow = get_shadow_for_rid(&dst_map, rid); } cleanup_translation_map(&src_map); cleanup_translation_map(&dst_map); cleanup_thread_pool(&src_pool); cleanup_thread_pool(&dst_pool); return pass; } log_handler_func_t log_funcs[2] = {NULL, test_atomic_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; if (mkdir("run", S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH) == -1 && errno != EEXIST) { wp_error("Not allowed to create test directory, cannot run tests."); return EXIT_FAILURE; } /* to avoid warnings when the driver dmabuf size constraints require * significant alignment, the width/height are already 64 aligned */ const size_t test_width = 1024; const size_t test_height = 1280; const size_t test_cpp = 2; const size_t test_size = test_width * test_height * test_cpp; const struct dmabuf_slice_data slice_data = { .width = (uint32_t)test_width, .height = (uint32_t)test_height, .format = TEST_2CPP_FORMAT, .num_planes = 1, .modifier = 0, .offsets = {0, 0, 0, 0}, .strides = {(uint32_t)(test_width * test_cpp), 0, 
0, 0}, .using_planes = {true, false, false, false}, }; uint8_t *test_pattern = malloc(test_size); for (size_t i = 0; i < test_size; i++) { test_pattern[i] = (uint8_t)i; } struct render_data *rd = calloc(1, sizeof(struct render_data)); rd->drm_fd = -1; rd->av_disabled = true; bool has_dmabuf = TEST_2CPP_FORMAT != 0; if (has_dmabuf && init_render_data(rd) == -1) { has_dmabuf = false; } bool all_success = true; srand(0); for (size_t c = 0; c < sizeof(comp_modes) / sizeof(comp_modes[0]); c++) { for (int gt = 1; gt <= 5; gt++) { for (int rt = 1; rt <= 5; rt++) { int file_fd = create_anon_file(); if (file_fd == -1) { wp_error("Failed to create test file: %s", strerror(errno)); continue; } if (write(file_fd, test_pattern, test_size) != (ssize_t)test_size) { wp_error("Failed to write to test file: %s", strerror(errno)); checked_close(file_fd); continue; } bool pass = test_mirror(file_fd, test_size, update_file, comp_modes[c], gt, rt, rd, NULL); printf(" FILE comp=%d src_thread=%d dst_thread=%d, %s\n", (int)c, gt, rt, pass ? "pass" : "FAIL"); all_success &= pass; if (has_dmabuf) { struct gbm_bo *bo = make_dmabuf( rd, &slice_data); if (!bo) { has_dmabuf = false; continue; } void *map_handle = NULL; uint32_t stride; void *data = map_dmabuf(bo, true, &map_handle, &stride); if (!data) { destroy_dmabuf(bo); has_dmabuf = false; continue; } memcpy(data, test_pattern, test_size); unmap_dmabuf(bo, map_handle); int dmafd = export_dmabuf(bo); if (dmafd == -1) { has_dmabuf = false; continue; } destroy_dmabuf(bo); bool dpass = test_mirror(dmafd, test_size, update_dmabuf, comp_modes[c], gt, rt, rd, &slice_data); printf("DMABUF comp=%d src_thread=%d dst_thread=%d, %s\n", (int)c, gt, rt, dpass ? "pass" : "FAIL"); all_success &= dpass; } } } } cleanup_render_data(rd); free(rd); free(test_pattern); printf("All pass: %c\n", all_success ? 'Y' : 'n'); return all_success ? 
EXIT_SUCCESS : EXIT_FAILURE;
}
waypipe-v0.9.1/test/fuzz_hook_det.c
/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
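The fd_mirror test above obtains its shared-memory test files through create_anon_file() and sizes them with ftruncate(). On Linux, one way such a helper may be implemented is with an anonymous memfd; this is a sketch under that assumption (waypipe's actual helper also has non-memfd fallbacks for other platforms):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sys/mman.h>
#include <unistd.h>

/* Create an anonymous, resizable in-memory file of the given size;
 * returns a file descriptor, or -1 on failure. */
static int make_anon_file_demo(size_t size)
{
	int fd = memfd_create("test-file", 0);
	if (fd == -1) {
		return -1;
	}
	if (ftruncate(fd, (off_t)size) == -1) {
		close(fd);
		return -1;
	}
	return fd;
}
```

The resulting descriptor can be mmap'd and shared across processes just like the shm files the test mirrors.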
*/ #include "common.h" #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include log_handler_func_t log_funcs[2] = {NULL, NULL}; int main(int argc, char **argv) { if (argc == 1 || !strcmp(argv[1], "--help")) { printf("Usage: ./fuzz_hook_det [--server] [--log] {input_file}\n"); printf("A program to run and control Wayland and channel inputs for core Waypipe operations\n"); return EXIT_FAILURE; } bool display_side = true; if (argc > 1 && !strcmp(argv[1], "--server")) { display_side = false; argc--; argv++; } if (argc > 1 && !strcmp(argv[1], "--log")) { log_funcs[0] = test_atomic_log_handler; log_funcs[1] = test_atomic_log_handler; argc--; argv++; } size_t len; char *buf = read_file_into_mem(argv[1], &len); if (!buf) { return EXIT_FAILURE; } printf("Loaded %zu bytes\n", len); struct test_state ts; if (setup_state(&ts, display_side, true) == -1) { return -1; } char *ignore_buf = malloc(65536); /* Main loop: RW from socketpairs with sendmsg, with short wait */ int64_t file_nwords = (int64_t)len / 4; int64_t cursor = 0; uint32_t *data = (uint32_t *)buf; while (cursor < file_nwords) { uint32_t header = data[cursor++]; bool wayland_side = header & 0x1; bool add_file = header & 0x2; int new_fileno = -1; if (add_file && wayland_side && cursor < file_nwords) { uint32_t fsize = data[cursor++]; if (fsize == 0) { /* 'copy' sink */ new_fileno = open("/dev/null", O_WRONLY | O_NOCTTY); if (new_fileno == -1) { wp_error("Failed to open /dev/null"); } } else { /* avoid buffer overflow */ fsize = fsize > 1000000 ? 
1000000 : fsize; new_fileno = create_anon_file(); if (ftruncate(new_fileno, (off_t)fsize) == -1) { wp_error("Failed to resize tempfile"); checked_close(new_fileno); break; } } } uint32_t packet_size = header >> 2; int64_t words_left = file_nwords - cursor; if (packet_size > 2048) { packet_size = 2048; } if (packet_size > (uint32_t)words_left) { packet_size = (uint32_t)words_left; } struct transfer_queue transfers; memset(&transfers, 0, sizeof(transfers)); pthread_mutex_init(&transfers.async_recv_queue.lock, NULL); if (wayland_side) { /* Send a message (incl fds) */ struct msg m; m.data = &data[cursor]; m.len = (int)packet_size; if (new_fileno != -1) { m.fds = &new_fileno; m.nfds = 1; } else { m.fds = NULL; m.nfds = 0; } send_wayland_msg(&ts, m, &transfers); /* ignore any created transfers, since this is only * a test of one side */ } else { /* Send a transfer */ void *msg_copy = calloc(packet_size, 4); memcpy(msg_copy, &data[cursor], packet_size * 4); transfer_add(&transfers, packet_size * 4, msg_copy); receive_wire(&ts, &transfers); } cleanup_transfer_queue(&transfers); cursor += packet_size; } cleanup_state(&ts); free(buf); free(ignore_buf); return EXIT_SUCCESS; } waypipe-v0.9.1/test/fuzz_hook_ext.c000066400000000000000000000170021463133614300174120ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. 
* * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include struct copy_setup { int conn; int wayl; bool is_display_side; struct main_config *mc; }; static void *start_looper(void *data) { struct copy_setup *setup = (struct copy_setup *)data; main_interface_loop(setup->conn, setup->wayl, -1, setup->mc, setup->is_display_side); return NULL; } log_handler_func_t log_funcs[2] = {NULL, NULL}; int main(int argc, char **argv) { if (argc == 1 || !strcmp(argv[1], "--help")) { printf("Usage: ./fuzz_hook_ext [--log] {input_file}\n"); printf("A program to run and control Wayland inputs for a linked client/server pair, from a file.\n"); return EXIT_FAILURE; } if (argc > 1 && !strcmp(argv[1], "--log")) { log_funcs[0] = test_atomic_log_handler; log_funcs[1] = test_atomic_log_handler; argc--; argv++; } setup_video_logging(); size_t len; char *buf = read_file_into_mem(argv[1], &len); if (!buf) { return EXIT_FAILURE; } printf("Loaded %zu bytes\n", len); int srv_fds[2], cli_fds[2], conn_fds[2]; if (socketpair(AF_UNIX, SOCK_STREAM, 0, srv_fds) == -1 || socketpair(AF_UNIX, SOCK_STREAM, 0, cli_fds) == -1 || socketpair(AF_UNIX, SOCK_STREAM, 0, conn_fds) == -1) { printf("Socketpair failed\n"); return EXIT_FAILURE; } struct main_config config = { .drm_node = NULL, .n_worker_threads = 1, .compression = COMP_NONE, .compression_level = 0, .no_gpu = true, /* until we can construct dmabufs here */ .only_linear_dmabuf = 
false, .video_if_possible = true, .prefer_hwvideo = false, }; pthread_t thread_a, thread_b; struct copy_setup server_conf = {.conn = conn_fds[0], .wayl = srv_fds[1], .is_display_side = true, .mc = &config}; struct copy_setup client_conf = {.conn = conn_fds[1], .wayl = cli_fds[1], .is_display_side = false, .mc = &config}; if (pthread_create(&thread_a, NULL, start_looper, &server_conf) == -1) { printf("Thread failed\n"); } if (pthread_create(&thread_b, NULL, start_looper, &client_conf) == -1) { printf("Thread failed\n"); } char *ignore_buf = malloc(65536); /* Main loop: RW from socketpairs with sendmsg, with short wait */ int64_t file_nwords = (int64_t)len / 4; int64_t cursor = 0; uint32_t *data = (uint32_t *)buf; while (cursor < file_nwords) { uint32_t header = data[cursor++]; bool to_server = header & 0x1; bool add_file = header & 0x2; int new_fileno = -1; if (add_file && cursor < file_nwords) { uint32_t fsize = data[cursor++]; if (fsize == 0) { /* 'copy' sink */ new_fileno = open("/dev/null", O_WRONLY | O_NOCTTY); if (new_fileno == -1) { wp_error("Failed to open /dev/null"); } } else { /* avoid buffer overflow */ fsize = fsize > 1000000 ? 1000000 : fsize; new_fileno = create_anon_file(); if (ftruncate(new_fileno, (off_t)fsize) == -1) { wp_error("Failed to resize tempfile"); checked_close(new_fileno); break; } } } uint32_t packet_size = header >> 2; int64_t words_left = file_nwords - cursor; if (packet_size > 2048) { packet_size = 2048; } if (packet_size > (uint32_t)words_left) { packet_size = (uint32_t)words_left; } /* 2 msec max delay for 8KB of data, assuming no system * interference, should be easily attainable */ int max_write_delay_ms = 1; int max_read_delay_ms = 2; int send_fd = to_server ? 
srv_fds[0] : cli_fds[0]; /* Write packet to stream */ struct pollfd write_pfd; write_pfd.fd = send_fd; write_pfd.events = POLLOUT; int nw; retry_poll: nw = poll(&write_pfd, 1, max_write_delay_ms); if (nw == -1) { if (new_fileno != -1) { checked_close(new_fileno); } if (errno == EINTR) { goto retry_poll; } printf("Poll error\n"); break; } else if (nw == 1) { /* Send message */ struct iovec the_iovec; the_iovec.iov_len = packet_size * 4; the_iovec.iov_base = (char *)&data[cursor]; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); if (new_fileno != -1) { msg.msg_control = uc.buf; msg.msg_controllen = sizeof(uc.buf); struct cmsghdr *frst = CMSG_FIRSTHDR(&msg); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; memcpy(CMSG_DATA(frst), &new_fileno, sizeof(int)); frst->cmsg_len = CMSG_LEN(sizeof(int)); msg.msg_controllen = CMSG_SPACE(sizeof(int)); } int target_fd = to_server ? srv_fds[0] : cli_fds[0]; ssize_t ret = sendmsg(target_fd, &msg, 0); if (ret == -1) { wp_error("Error in sendmsg"); break; } } else { wp_error("Failed to send message before timeout"); } if (new_fileno != -1) { checked_close(new_fileno); } /* Wait up to max_delay for a response. Almost all packets * should be passed on unmodified; a very small fraction * are dropped */ struct pollfd read_pfds[2]; read_pfds[0].fd = srv_fds[0]; read_pfds[1].fd = cli_fds[0]; read_pfds[0].events = POLLIN; read_pfds[1].events = POLLIN; int nr = poll(read_pfds, 2, packet_size > 0 ? 
max_read_delay_ms : 0); if (nr == -1) { if (errno == EINTR) { continue; } printf("Poll error\n"); break; } else if (nr == 0) { wp_debug("No reply to sent packet %d", packet_size); } for (int i = 0; i < 2; i++) { if (read_pfds[i].revents & POLLIN) { char cmsgdata[(CMSG_LEN(28 * sizeof(int32_t)))]; struct iovec the_iovec; the_iovec.iov_len = 65536; the_iovec.iov_base = ignore_buf; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = &cmsgdata; msg.msg_controllen = sizeof(cmsgdata); msg.msg_flags = 0; ssize_t ret = recvmsg(read_pfds[i].fd, &msg, 0); if (ret == -1) { wp_error("Error in recvmsg"); } } } cursor += packet_size; } checked_close(srv_fds[0]); checked_close(cli_fds[0]); pthread_join(thread_a, NULL); pthread_join(thread_b, NULL); free(buf); free(ignore_buf); return EXIT_SUCCESS; } waypipe-v0.9.1/test/fuzz_hook_int.c000066400000000000000000000167431463133614300174170ustar00rootroot00000000000000/* * Copyright © 2019 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "main.h" #include #include #include #include #include #include #include #include #include #include #include #include #include #include struct copy_setup { int conn; int wayl; bool is_display_side; struct main_config *mc; }; static void *start_looper(void *data) { struct copy_setup *setup = (struct copy_setup *)data; main_interface_loop(setup->conn, setup->wayl, -1, setup->mc, setup->is_display_side); return NULL; } log_handler_func_t log_funcs[2] = {NULL, NULL}; int main(int argc, char **argv) { if (argc == 1 || !strcmp(argv[1], "--help")) { printf("Usage: ./fuzz_hook_int [--server] [--log] {input_file}\n"); printf("A program to run and control Wayland and channel inputs for a waypipe main loop\n"); return EXIT_FAILURE; } bool display_side = true; if (argc > 1 && !strcmp(argv[1], "--server")) { display_side = false; argc--; argv++; } if (argc > 1 && !strcmp(argv[1], "--log")) { log_funcs[0] = test_atomic_log_handler; log_funcs[1] = test_atomic_log_handler; argc--; argv++; } setup_video_logging(); size_t len; char *buf = read_file_into_mem(argv[1], &len); if (!buf) { return EXIT_FAILURE; } printf("Loaded %zu bytes\n", len); int way_fds[2], conn_fds[2]; if (socketpair(AF_UNIX, SOCK_STREAM, 0, way_fds) == -1 || socketpair(AF_UNIX, SOCK_STREAM, 0, conn_fds) == -1) { printf("Socketpair failed\n"); return EXIT_FAILURE; } struct main_config config = { .drm_node = NULL, .n_worker_threads = 1, .compression = COMP_NONE, .compression_level = 0, .no_gpu = true, /* until we can construct dmabufs here */ .only_linear_dmabuf = false, .video_if_possible = true, .prefer_hwvideo = false, }; pthread_t thread; struct copy_setup conf = {.conn = conn_fds[1], .wayl = way_fds[1], 
.is_display_side = display_side, .mc = &config}; if (pthread_create(&thread, NULL, start_looper, &conf) == -1) { printf("Thread failed\n"); return EXIT_FAILURE; } char *ignore_buf = malloc(65536); /* Main loop: RW from socketpairs with sendmsg, with short wait */ int64_t file_nwords = (int64_t)len / 4; int64_t cursor = 0; uint32_t *data = (uint32_t *)buf; while (cursor < file_nwords) { uint32_t header = data[cursor++]; bool wayland_side = header & 0x1; bool add_file = header & 0x2; int new_fileno = -1; if (add_file && wayland_side && cursor < file_nwords) { uint32_t fsize = data[cursor++]; if (fsize == 0) { /* 'copy' sink */ new_fileno = open("/dev/null", O_WRONLY | O_NOCTTY); if (new_fileno == -1) { wp_error("Failed to open /dev/null"); } } else { /* avoid buffer overflow */ fsize = fsize > 1000000 ? 1000000 : fsize; new_fileno = create_anon_file(); if (ftruncate(new_fileno, (off_t)fsize) == -1) { wp_error("Failed to resize tempfile"); checked_close(new_fileno); break; } } } uint32_t packet_size = header >> 2; int64_t words_left = file_nwords - cursor; if (packet_size > 2048) { packet_size = 2048; } if (packet_size > (uint32_t)words_left) { packet_size = (uint32_t)words_left; } /* 2 msec max delay for 8KB of data, assuming no system * interference, should be easily attainable */ int max_write_delay_ms = 1; int max_read_delay_ms = 2; int send_fd = wayland_side ? 
way_fds[0] : conn_fds[0]; /* Write packet to stream */ struct pollfd write_pfd; write_pfd.fd = send_fd; write_pfd.events = POLLOUT; int nw; retry_poll: nw = poll(&write_pfd, 1, max_write_delay_ms); if (nw == -1) { if (new_fileno != -1) { checked_close(new_fileno); } if (errno == EINTR) { goto retry_poll; } printf("Poll error\n"); break; } else if (nw == 1 && wayland_side) { /* Send message */ struct iovec the_iovec; the_iovec.iov_len = packet_size * 4; the_iovec.iov_base = (char *)&data[cursor]; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = NULL; msg.msg_controllen = 0; msg.msg_flags = 0; union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } uc; memset(uc.buf, 0, sizeof(uc.buf)); if (new_fileno != -1) { msg.msg_control = uc.buf; msg.msg_controllen = sizeof(uc.buf); struct cmsghdr *frst = CMSG_FIRSTHDR(&msg); frst->cmsg_level = SOL_SOCKET; frst->cmsg_type = SCM_RIGHTS; memcpy(CMSG_DATA(frst), &new_fileno, sizeof(int)); frst->cmsg_len = CMSG_LEN(sizeof(int)); msg.msg_controllen = CMSG_SPACE(sizeof(int)); } ssize_t ret = sendmsg(way_fds[0], &msg, 0); if (ret == -1) { wp_error("Error in sendmsg"); break; } } else if (nw == 1 && !wayland_side) { ssize_t ret = write(conn_fds[0], (char *)&data[cursor], packet_size * 4); if (ret == -1) { wp_error("Error in write"); break; } } else { wp_error("Failed to send message before timeout"); } if (new_fileno != -1) { checked_close(new_fileno); } /* Wait up to max_delay for a response. Almost all packets * should be passed on unmodified; a very small fraction * are dropped */ struct pollfd read_pfds[2]; read_pfds[0].fd = way_fds[0]; read_pfds[1].fd = conn_fds[0]; read_pfds[0].events = POLLIN; read_pfds[1].events = POLLIN; int nr = poll(read_pfds, 2, packet_size > 0 ? 
max_read_delay_ms : 0); if (nr == -1) { if (errno == EINTR) { continue; } printf("Poll error\n"); break; } else if (nr == 0) { wp_debug("No reply to sent packet %d", packet_size); } for (int i = 0; i < 2; i++) { if (read_pfds[i].revents & POLLIN) { char cmsgdata[(CMSG_LEN(28 * sizeof(int32_t)))]; struct iovec the_iovec; the_iovec.iov_len = 65536; the_iovec.iov_base = ignore_buf; struct msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &the_iovec; msg.msg_iovlen = 1; msg.msg_control = &cmsgdata; msg.msg_controllen = sizeof(cmsgdata); msg.msg_flags = 0; ssize_t ret = recvmsg(read_pfds[i].fd, &msg, 0); if (ret == -1) { wp_error("Error in recvmsg"); } } } cursor += packet_size; } checked_close(conn_fds[0]); checked_close(way_fds[0]); pthread_join(thread, NULL); free(buf); free(ignore_buf); return EXIT_SUCCESS; } waypipe-v0.9.1/test/headless.py000077500000000000000000000226761463133614300165320ustar00rootroot00000000000000#!/usr/bin/env python3 if __name__ != "__main__": quit(1) import os, subprocess, time, signal import multiprocessing def try_unlink(path): try: os.unlink(path) except FileNotFoundError: pass def wait_until_exists(path): for i in range(100): if os.path.exists(path): return True time.sleep(0.01) else: return False def safe_cleanup(process): assert type(process) == subprocess.Popen for i in range(3): if process.poll() is None: # certain weston client programs appear to initiate shutdown proceedings correctly; however, they appear to wait for a frame beforehand, and the headless weston doesn't ask for additional frames process.send_signal(signal.SIGINT) time.sleep(0.5) try: process.wait(100) except subprocess.TimeoutExpired: process.kill() try: process.wait(1) except subprocess.TimeoutExpired: # no third chances process.terminate() weston_path = os.environ["TEST_WESTON_PATH"] waypipe_path = os.environ["TEST_WAYPIPE_PATH"] ld_library_path = ( os.environ["LD_LIBRARY_PATH"] if "LD_LIBRARY_PATH" in os.environ else "" ) sub_tests = { "SHM": 
["TEST_WESTON_SHM_PATH"], "EGL": ["TEST_WESTON_EGL_PATH", "-o"], "DMABUF": ["TEST_WESTON_DMA_PATH"], "TERM": ["TEST_WESTON_TERM_PATH"], "PRES": ["TEST_WESTON_PRES_PATH"], "SUBSURF": ["TEST_WESTON_SUBSURF_PATH"], } for k, v in list(sub_tests.items()): if v[0] in os.environ: v[0] = os.environ[v[0]] else: del sub_tests[k] xdg_runtime_dir = os.path.abspath("./run/") # weston does not currently appear to support setting absolute socket paths socket_path = "w_sock" abs_socket_path = os.path.join(xdg_runtime_dir, socket_path) mainenv = {"XDG_RUNTIME_DIR": xdg_runtime_dir, "LD_LIBRARY_PATH": ld_library_path} weston_command = [ weston_path, "--backend=headless-backend.so", "--socket=" + socket_path, # "--use-pixman", "--width=1111", "--height=777", ] arguments = subprocess.check_output([weston_path, "--help"]).decode() if "--use-gl" in arguments: weston_command.append("--use-gl") try: import psutil except ImportError: psutil = None nontrivial_failures = False subenv = { "WAYLAND_DISPLAY": abs_socket_path, "WAYLAND_DEBUG": "1", "XDG_RUNTIME_DIR": xdg_runtime_dir, "LD_LIBRARY_PATH": ld_library_path, "ASAN_OPTIONS": "detect_leaks=0", } wp_serv_env = { "WAYLAND_DEBUG": "1", "XDG_RUNTIME_DIR": xdg_runtime_dir, "LD_LIBRARY_PATH": ld_library_path, "ASAN_OPTIONS": "detect_leaks=0", } subproc_args = {"env": subenv, "stdin": subprocess.DEVNULL, "stderr": subprocess.STDOUT} wp_serv_args = { "env": wp_serv_env, "stdin": subprocess.DEVNULL, "stderr": subprocess.STDOUT, } def get_child_process(proc_pid, expected_name, sub_test_name): if psutil is not None: # assuming pid has not been recycled/duplicated proc = psutil.Process(proc_pid) if proc.name() == "waypipe": for i in range(5): kids = proc.children() if len(kids) > 0: break time.sleep(0.01) else: print( "For test", sub_test_name, "waypipe server's command may have crashed", ) if len(kids) == 1: wp_child = kids[0] try: if wp_child.name() != expected_name: print( "Unusual child process name", wp_child.name(), "does not match", 
expected_name, ) except psutil.NoSuchProcess: pass def open_logfile(name): path = os.path.join(xdg_runtime_dir, name) return path, open(path, "wb") def start_waypipe(socket_path, control_path, logfile, command, oneshot): prefix = [waypipe_path, "--debug", "--socket", socket_path] if oneshot: prefix += ["--oneshot"] client_command = prefix + ["client"] server_command = prefix + ["--control", control_path, "server"] + command client = subprocess.Popen(client_command, stdout=logfile, **subproc_args) if not wait_until_exists(socket_path): raise Exception("The waypipe socket file at " + socket_path + " did not appear") server = subprocess.Popen(server_command, stdout=logfile, **wp_serv_args) return server, client def cleanup_oneshot(client, server, child): if child is not None: try: child.send_signal(signal.SIGINT) except psutil.NoSuchProcess: time.sleep(0.1) safe_cleanup(server) time.sleep(0.1) safe_cleanup(client) else: server.wait() client.wait() else: safe_cleanup(server) time.sleep(0.1) safe_cleanup(client) return client.returncode, server.returncode def cleanup_multi(client, server, child): if child is not None: try: child.send_signal(signal.SIGINT) except psutil.NoSuchProcess: pass time.sleep(0.1) safe_cleanup(server) time.sleep(0.1) safe_cleanup(client) return client.returncode, server.returncode def run_sub_test(args): sub_test_name, command = args nontrivial_failures = False ocontrol_path = os.path.join(xdg_runtime_dir, sub_test_name + "_octrl") mcontrol_path = os.path.join(xdg_runtime_dir, sub_test_name + "_mctrl") owp_socket_path = os.path.join(xdg_runtime_dir, sub_test_name + "_osocket") mwp_socket_path = os.path.join(xdg_runtime_dir, sub_test_name + "_msocket") try_unlink(owp_socket_path) try_unlink(mwp_socket_path) try_unlink(ocontrol_path) try_unlink(mcontrol_path) ref_log_path, ref_out = open_logfile(sub_test_name + "_ref_out.txt") ref_proc = subprocess.Popen(command, stdout=ref_out, **subproc_args) owp_log_path, owp_out = open_logfile(sub_test_name + 
"_owp_out.txt") mwp_log_path, mwp_out = open_logfile(sub_test_name + "_mwp_out.txt") owp_server, owp_client = start_waypipe( owp_socket_path, ocontrol_path, owp_out, command, True ) mwp_server, mwp_client = start_waypipe( mwp_socket_path, mcontrol_path, mwp_out, command, False ) owp_child = get_child_process( owp_server.pid, os.path.basename(command[0]), sub_test_name ) mwp_child = get_child_process( mwp_server.pid, os.path.basename(command[0]), sub_test_name ) print("Launched", sub_test_name) time.sleep(1) # Verify that replacing the control pipe (albeit with itself) doesn't break anything # (Since the connection is a unix domain socket, almost no packets will be in flight, # so the test isn't that comprehensive) print("Resetting", sub_test_name) open(ocontrol_path, "w").write(owp_socket_path) open(mcontrol_path, "w").write(mwp_socket_path) try_unlink(ocontrol_path) try_unlink(mcontrol_path) time.sleep(1) print("Closing", sub_test_name) # Beware sudden PID reuse... safe_cleanup(ref_proc) ref_out.close() occode, oscode = cleanup_oneshot(owp_client, owp_server, owp_child) mccode, mscode = cleanup_multi(mwp_client, mwp_server, mwp_child) try_unlink(owp_socket_path) try_unlink(mwp_socket_path) owp_out.close() mwp_out.close() # -2, because applications sometimes return with the sigint error if ref_proc.returncode not in (0, -2): print( "Test {}, run directly, failed (code={}). See logfile at {}".format( sub_test_name, ref_proc.returncode, ref_log_path ) ) else: if oscode in (0, -2) and occode == 0: print("Oneshot test", sub_test_name, "passed") else: print( "Oneshot test {}, run indirectly, failed (ccode={} scode={}). See logfile at {}".format( sub_test_name, occode, oscode, owp_log_path ) ) nontrivial_failures = True if mscode in (0, -2) and mccode in (0, -2): print("Regular test", sub_test_name, "passed") else: print( "Regular test {}, run indirectly, failed (ccode={} scode={}). 
See logfile at {}".format(
                sub_test_name, mccode, mscode, mwp_log_path
            )
        )
        nontrivial_failures = True
    return nontrivial_failures


os.makedirs(xdg_runtime_dir, mode=0o700, exist_ok=True)
os.chmod(xdg_runtime_dir, 0o700)
try_unlink(abs_socket_path)
try_unlink(abs_socket_path + ".lock")

weston_log_path = os.path.join(xdg_runtime_dir, "weston_out.txt")
weston_out = open(weston_log_path, "wb")
weston_proc = subprocess.Popen(
    weston_command,
    env=mainenv,
    stdin=subprocess.DEVNULL,
    stdout=weston_out,
    stderr=subprocess.STDOUT,
)

# Otherwise it's a race between weston and the clients
if not wait_until_exists(abs_socket_path):
    raise Exception(
        "weston failed to create expected display socket path, " + abs_socket_path
    )

with multiprocessing.Pool(3) as pool:
    nontriv_failures = pool.map(run_sub_test, [(k, v) for k, v in sub_tests.items()])

safe_cleanup(weston_proc)
weston_out.close()
if weston_proc.returncode != 0:
    print("Running headless weston failed. See logfile at ", weston_log_path)

if any(nontriv_failures):
    quit(1)
quit(0)

waypipe-v0.9.1/test/meson.build
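The fuzz-hook drivers earlier in this directory attach file descriptors to their messages as SCM_RIGHTS ancillary data (the CMSG_SPACE/CMSG_FIRSTHDR setup in fuzz_hook_ext.c and fuzz_hook_int.c). The same mechanism is available from Python's socket module, which can be handy when experimenting with inputs for those harnesses; the snippet below is an illustrative standalone sketch, not part of the waypipe test suite:

```python
import array
import os
import socket

# An AF_UNIX socket pair can carry file descriptors as SCM_RIGHTS
# ancillary data; this mirrors the cmsg setup used by the fuzz hooks.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

rd, wr = os.pipe()
os.write(wr, b"hello")
os.close(wr)

# Send one byte of ordinary data plus the pipe's read end.
fds = array.array("i", [rd])
parent.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
os.close(rd)

# The kernel duplicates the descriptor into the receiving end.
msg, ancdata, flags, _ = child.recvmsg(1, socket.CMSG_SPACE(fds.itemsize))
level, ctype, data = ancdata[0]
assert (level, ctype) == (socket.SOL_SOCKET, socket.SCM_RIGHTS)
received = array.array("i")
received.frombytes(data[: received.itemsize])
payload = os.read(received[0], 16)
os.close(received[0])
parent.close()
child.close()
print(msg, payload)  # b'x' b'hello'
```

Note that, as in the C harnesses, the descriptor must be closed on the sending side after `sendmsg`; the receiver owns its own duplicate.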
['ASAN_OPTIONS=detect_leaks=0'], timeout: 40) test_proto_functions = files('protocol_functions.txt') proto_send_src = custom_target( 'protocol_control message serialization', output: 'protocol_functions.h', depend_files: [test_proto_functions, sendgen_path] + abs_protocols, command: [python3, sendgen_path, test_proto_functions, '@OUTPUT@'] + abs_protocols, ) test_protocol = executable( 'protocol_control', ['protocol_control.c', proto_send_src], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src] ) test('That common Wayland message patterns work', test_protocol, env: ['ASAN_OPTIONS=detect_leaks=0'], timeout: 20) test_pipe = executable( 'pipe_mirror', ['pipe_mirror.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src] ) test('How well pipes are replicated', test_pipe, timeout: 20) test_fnlist = files('test_fnlist.txt') testproto_src = custom_target( 'test-proto code', output: 'protocol-@BASENAME@.c', input: 'test-proto.xml', depend_files: [test_fnlist, symgen_path], command: [python3, symgen_path, 'data', test_fnlist, '@OUTPUT@', '@INPUT@'], ) testproto_header = custom_target( 'test-proto client-header', output: 'protocol-@BASENAME@.h', input: 'test-proto.xml', depend_files: [test_fnlist, symgen_path], command: [python3, symgen_path, 'header', test_fnlist, '@OUTPUT@', '@INPUT@'], ) test_parse = executable( 'wire_parse', ['wire_parse.c', testproto_src, testproto_header], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], ) test('That protocol parsing fails cleanly', test_parse, timeout: 5) fake_ssh = executable( 'ssh', ['fake_ssh.c'] ) weston_dep = dependency('weston', required: false) testprog_paths = [] if weston_dep.found() # Sometimes weston's test clients are installed here instead testprog_paths += weston_dep.get_pkgconfig_variable('libexecdir') endif weston_prog = find_program('weston', required: false) base_envlist = [ 
'TEST_WAYPIPE_PATH=@0@'.format(waypipe_prog.full_path()), ] headless_envlist = base_envlist if weston_prog.found() headless_envlist += 'TEST_WESTON_PATH=@0@'.format(weston_prog.path()) endif test_programs = [ ['TEST_WESTON_SHM_PATH', 'weston-simple-shm'], # ['TEST_WESTON_EGL_PATH', 'weston-simple-egl'], ['TEST_WESTON_TERM_PATH', 'weston-terminal'], ['TEST_WESTON_PRES_PATH', 'weston-presentation-shm'], ['TEST_WESTON_SUBSURF_PATH', 'weston-subsurfaces'], ] if has_dmabuf test_programs += [['TEST_WESTON_DMA_PATH', 'weston-simple-dmabuf-egl']] endif have_test_progs = false foreach t : test_programs test_prog = find_program(t[1], required: false) foreach p : testprog_paths if not test_prog.found() test_prog = find_program(join_paths(p, t[1]), required: false) endif endforeach if test_prog.found() have_test_progs = true headless_envlist += '@0@=@1@'.format(t[0], test_prog.path()) endif endforeach if weston_prog.found() and have_test_progs test_headless = join_paths(meson.current_source_dir(), 'headless.py') test('If clients crash when run with weston via waypipe', python3, args: test_headless, env: headless_envlist, timeout: 30) endif sleep_prog = find_program('sleep') startup_envlist = base_envlist startup_envlist += ['TEST_SLEEP_PATH=' + sleep_prog.path()] startup_envlist += ['TEST_FAKE_SSH_PATH=' + fake_ssh.full_path()] test_startup = join_paths(meson.current_source_dir(), 'startup_failure.py') test('That waypipe exits cleanly given a bad setup', python3, args: test_startup, env: startup_envlist, timeout: 30 ) fuzz_hook_ext = executable( 'fuzz_hook_ext', ['fuzz_hook_ext.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], dependencies: [pthreads] ) fuzz_hook_int = executable( 'fuzz_hook_int', ['fuzz_hook_int.c'], include_directories: waypipe_includes, link_with: [lib_waypipe_src, common_src], dependencies: [pthreads] ) fuzz_hook_det = executable( 'fuzz_hook_det', ['fuzz_hook_det.c'], include_directories: waypipe_includes, link_with: 
[lib_waypipe_src, common_src]
)

test('That `waypipe bench` doesn\'t crash',
	waypipe_prog,
	timeout: 20,
	args: ['--threads', '2', '--test-size', '16384', 'bench', '100.0']
)

waypipe-v0.9.1/test/pipe_mirror.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
*/ #include "common.h" #include "shadow.h" #include #include #include #include #include #include #include #include #include static int shadow_sync(struct fd_translation_map *src_map, struct fd_translation_map *dst_map) { struct transfer_queue queue; memset(&queue, 0, sizeof(queue)); pthread_mutex_init(&queue.async_recv_queue.lock, NULL); read_readable_pipes(src_map); for (struct shadow_fd_link *lcur = src_map->link.l_next, *lnxt = lcur->l_next; lcur != &src_map->link; lcur = lnxt, lnxt = lcur->l_next) { struct shadow_fd *sfd = (struct shadow_fd *)lcur; collect_update(NULL, sfd, &queue, false); /* collecting updates can reset `remote_can_X` state, so * garbage collect the sfd */ destroy_shadow_if_unreferenced(sfd); } for (int i = 0; i < queue.end; i++) { if (queue.vecs[i].iov_len < 8) { cleanup_transfer_queue(&queue); wp_error("Invalid message"); return -1; } const uint32_t *header = (const uint32_t *)queue.vecs[i].iov_base; struct bytebuf msg; msg.data = queue.vecs[i].iov_base; msg.size = transfer_size(header[0]); if (apply_update(dst_map, NULL, NULL, transfer_type(header[0]), (int32_t)header[1], &msg) == -1) { wp_error("Update failed"); cleanup_transfer_queue(&queue); return -1; } } flush_writable_pipes(dst_map); int nt = queue.end; cleanup_transfer_queue(&queue); return nt; } static int create_pseudo_pipe(bool can_read, bool can_write, bool half_open_socket, int *spec_end, int *opp_end) { bool pipe_possible = can_read != can_write; int pipe_fds[2]; if (half_open_socket || !pipe_possible) { if (socketpair(AF_UNIX, SOCK_STREAM, 0, pipe_fds) == -1) { wp_error("Socketpair failed"); return -1; } if (!can_read) { shutdown(pipe_fds[0], SHUT_RD); } if (!can_write) { shutdown(pipe_fds[0], SHUT_WR); } } else { if (pipe(pipe_fds) == -1) { wp_error("Pipe failed"); return -1; } if (can_write) { int tmp = pipe_fds[0]; pipe_fds[0] = pipe_fds[1]; pipe_fds[1] = tmp; } } *spec_end = pipe_fds[0]; *opp_end = pipe_fds[1]; return 0; } static char fd_is_readable(int fd) { int flags = 
fcntl(fd, F_GETFL, 0);
	if (flags == -1) {
		wp_error("fcntl F_GETFL failed!");
		return '?';
	}
	flags = flags & O_ACCMODE;
	return (flags == O_RDONLY || flags == O_RDWR) ? 'R' : 'n';
}
static char fd_is_writable(int fd)
{
	int flags = fcntl(fd, F_GETFL, 0);
	if (flags == -1) {
		wp_error("fcntl F_GETFL failed!");
		return '?';
	}
	flags = flags & O_ACCMODE;
	return (flags == O_WRONLY || flags == O_RDWR) ? 'W' : 'n';
}
static void print_pipe_state(const char *desc, struct pipe_state *p)
{
	printf("%s state: %c %c %c %c%s\n", desc, p->can_read ? 'R' : 'n',
			p->can_write ? 'W' : 'n',
			p->remote_can_read ? 'R' : 'n',
			p->remote_can_write ? 'W' : 'n',
			p->pending_w_shutdown ? " shutdownWpending" : "");
}
static bool test_pipe_mirror(bool close_src, bool can_read, bool can_write,
		bool half_open_socket, bool interpret_as_force_iw)
{
	if (can_read == can_write && half_open_socket) {
		return true;
	}
	printf("\nTesting:%s%s%s%s%s\n", can_read ? " read" : "",
			can_write ? " write" : "",
			half_open_socket ? " socket" : "",
			interpret_as_force_iw ? " force_iw" : "",
			close_src ?
" close_src" : " close_dst"); int spec_end, opp_end, anti_end = -1; if (create_pseudo_pipe(can_read, can_write, half_open_socket, &spec_end, &opp_end) == -1) { return false; } struct fd_translation_map src_map; setup_translation_map(&src_map, false); struct fd_translation_map dst_map; setup_translation_map(&dst_map, true); bool success = true; /* Step 1: replicate */ struct shadow_fd *src_shadow = translate_fd(&src_map, NULL, NULL, spec_end, FDC_PIPE, 0, NULL, interpret_as_force_iw); shadow_decref_transfer(src_shadow); int rid = src_shadow->remote_id; if (shadow_sync(&src_map, &dst_map) == -1) { success = false; goto cleanup; } struct shadow_fd *dst_shadow = get_shadow_for_rid(&dst_map, rid); if (!dst_shadow) { printf("Failed to create remote shadow structure\n"); success = false; goto cleanup; } anti_end = dup(dst_shadow->fd_local); shadow_decref_transfer(dst_shadow); if (set_nonblocking(anti_end) == -1 || set_nonblocking(opp_end) == -1) { printf("Failed to make user fds nonblocking\n"); success = false; goto cleanup; } printf("spec %c %c %c %c | opp %c %c | anti %c %c\n", can_read ? 'R' : 'n', can_write ? 'W' : 'n', fd_is_readable(spec_end), fd_is_writable(spec_end), fd_is_readable(opp_end), fd_is_writable(opp_end), fd_is_readable(anti_end), fd_is_writable(anti_end)); print_pipe_state("dst", &dst_shadow->pipe); print_pipe_state("src", &src_shadow->pipe); /* Step 2: transfer tests */ for (int i = 0; i < 4; i++) { bool from_src = i % 2; /* Smaller than a pipe buffer, so writing should always succeed */ char buf[4096]; memset(buf, rand(), sizeof(buf)); int write_fd = from_src ? opp_end : anti_end; int read_fd = from_src ? anti_end : opp_end; const char *target = from_src ? "src" : "dst"; const char *antitarget = from_src ? 
"dst" : "src"; if (fd_is_writable(write_fd) != 'W') { /* given proper replication, the reverse end should * be readable */ continue; } int amt = max(rand() % 4096, 1); ssize_t ret = write(write_fd, buf, (size_t)amt); if (ret == amt) { struct shadow_fd *mod_sfd = from_src ? src_shadow : dst_shadow; mod_sfd->pipe.readable = true; /* Write successful */ if (shadow_sync(from_src ? &src_map : &dst_map, from_src ? &dst_map : &src_map) == -1) { success = false; goto cleanup; } bool believe_read = can_read && !interpret_as_force_iw; bool expect_transfer_fail = (from_src && !believe_read) || (!from_src && !can_write); // todo: try multiple sync cycles (?) ssize_t rr = read(read_fd, buf, 4096); bool tf_pass = rr == amt; if (!expect_transfer_fail) { /* on some systems, pipe is bidirectional, * making some additional transfers succeed. * This is fine. */ success = success && tf_pass; } const char *resdesc = tf_pass != expect_transfer_fail ? "expected" : "unexpected"; if (tf_pass) { printf("Send packet to %s, and received it from %s, %s\n", target, antitarget, resdesc); } else { printf("Failed to receive packet from %s, %d %zd %s, %s\n", antitarget, read_fd, rr, strerror(errno), resdesc); } } } /* Step 3: close one end, and verify that the other end is closed */ // TODO: test partial shutdowns as well, all 2^4 cases for a single // cycle; and test epipe closing by queuing additional data struct shadow_fd *cls_shadow = close_src ? src_shadow : dst_shadow; if (close_src) { checked_close(opp_end); opp_end = -1; } else { checked_close(anti_end); anti_end = -1; } bool shutdown_deletes = (cls_shadow->pipe.can_read && !cls_shadow->pipe.can_write); /* Special cases, which aren't very important */ shutdown_deletes |= (interpret_as_force_iw && !cls_shadow->pipe.can_write && close_src); cls_shadow->pipe.readable = cls_shadow->pipe.can_read; cls_shadow->pipe.writable = cls_shadow->pipe.can_write; if (shadow_sync(close_src ? &src_map : &dst_map, close_src ? 
&dst_map : &src_map) == -1) { success = false; goto cleanup; } bool deleted_shadows = true; if (dst_map.link.l_next != &dst_map.link) { print_pipe_state("dst", &dst_shadow->pipe); deleted_shadows = false; } if (src_map.link.l_next != &src_map.link) { print_pipe_state("src", &src_shadow->pipe); deleted_shadows = false; } bool correct_teardown = deleted_shadows == shutdown_deletes; success = success && correct_teardown; printf("Deleted shadows: %c (expected %c)\n", deleted_shadows ? 'Y' : 'n', shutdown_deletes ? 'Y' : 'n'); printf("Test: %s\n", success ? "pass" : "FAIL"); cleanup: if (anti_end != -1) { checked_close(anti_end); } if (opp_end != -1) { checked_close(opp_end); } cleanup_translation_map(&src_map); cleanup_translation_map(&dst_map); return success; } log_handler_func_t log_funcs[2] = {NULL, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; struct sigaction act; act.sa_handler = SIG_IGN; sigemptyset(&act.sa_mask); act.sa_flags = 0; if (sigaction(SIGPIPE, &act, NULL) == -1) { printf("Sigaction failed\n"); return EXIT_SUCCESS; } srand(0); bool all_success = true; for (uint32_t bits = 0; bits < 32; bits++) { bool pass = test_pipe_mirror(bits & 1, bits & 2, bits & 4, bits & 8, bits & 16); all_success = all_success && pass; } printf("\nSuccess: %c\n", all_success ? 'Y' : 'n'); return all_success ? 
EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.9.1/test/protocol_control.c /* * Copyright © 2020 Manuel Stoeckl * * Permission is hereby granted, free of charge, to any person obtaining * a copy of this software and associated documentation files (the * "Software"), to deal in the Software without restriction, including * without limitation the rights to use, copy, modify, merge, publish, * distribute, sublicense, and/or sell copies of the Software, and to * permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice (including the * next paragraph) shall be included in all copies or substantial * portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ #include "common.h" #include "main.h" #include "parsing.h" #include "util.h" #include "protocol_functions.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/mman.h> #include <time.h> #include <unistd.h> struct msgtransfer { struct test_state *src; struct test_state *dst; }; /* Override the libc clock_gettime, so we can test presentation-time * protocol. Note: the video drivers sometimes call this function.
*/ int clock_gettime(clockid_t clock_id, struct timespec *tp) { /* Assume every call costs 1ns */ time_value += 1; if (clock_id == CLOCK_REALTIME) { tp->tv_sec = (int64_t)(time_value / 1000000000uLL); tp->tv_nsec = (int64_t)(time_value % 1000000000uLL); } else { tp->tv_sec = (int64_t)((time_value + local_time_offset) / 1000000000uLL); tp->tv_nsec = (int64_t)((time_value + local_time_offset) % 1000000000uLL); } return 0; } static void print_pass(bool pass) { fprintf(stdout, "%s\n", pass ? "PASS" : "FAIL"); } static char *make_filled_pattern(size_t size, uint32_t contents) { uint32_t *mem = calloc(1, size); for (size_t i = 0; i < size / 4; i++) { mem[i] = contents; } return (char *)mem; } static int make_filled_file(size_t size, const char *contents) { int fd = create_anon_file(); ftruncate(fd, (off_t)size); uint32_t *mem = (uint32_t *)mmap( NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); memcpy(mem, contents, size); munmap(mem, size); return fd; } static bool check_file_contents(int fd, size_t size, const char *contents) { if (fd == -1) { return false; } off_t fsize = lseek(fd, 0, SEEK_END); if (fsize != (off_t)size) { wp_error("fd size mismatch: %lld %zu", (long long)fsize, size); return false; } uint32_t *mem = (uint32_t *)mmap( NULL, size, PROT_READ, MAP_PRIVATE, fd, 0); if (mem == MAP_FAILED) { wp_error("Failed to map file"); return false; } bool match = memcmp(mem, contents, size) == 0; munmap(mem, size); return match; } static int get_only_fd_from_msg(const struct test_state *s) { if (s->rcvd && s->rcvd[s->nrcvd - 1].nfds == 1) { return s->rcvd[s->nrcvd - 1].fds[0]; } else { return -1; } } static int get_fd_from_nth_to_last_msg(const struct test_state *s, int nth) { if (!s->rcvd || s->nrcvd < nth) { return -1; } const struct msg *m = &s->rcvd[s->nrcvd - nth]; if (m->nfds != 1) { return -1; } return m->fds[0]; } static void msg_send_handler(struct transfer_states *ts, struct test_state *src, struct test_state *dst) { struct msg m; m.data = ts->msg_space; m.fds =
ts->fd_space; m.len = (int)ts->msg_size; m.nfds = (int)ts->fd_size; for (int i = 0; i < m.nfds; i++) { m.fds[i] = dup(m.fds[i]); if (m.fds[i] == -1) { wp_error("Invalid fd provided"); } } send_protocol_msg(src, dst, m); memset(ts->msg_space, 0, sizeof(ts->msg_space)); memset(ts->fd_space, 0, sizeof(ts->fd_space)); } static int setup_tstate(struct transfer_states *ts) { memset(ts, 0, sizeof(*ts)); ts->send = msg_send_handler; ts->comp = calloc(1, sizeof(struct test_state)); ts->app = calloc(1, sizeof(struct test_state)); if (!ts->comp || !ts->app) { goto fail_alloc; } if (setup_state(ts->comp, true, true) == -1) { goto fail_comp_setup; } if (setup_state(ts->app, false, true) == -1) { goto fail_app_setup; } return 0; fail_app_setup: cleanup_state(ts->app); fail_comp_setup: cleanup_state(ts->comp); fail_alloc: free(ts->comp); free(ts->app); return -1; } static void cleanup_tstate(struct transfer_states *ts) { cleanup_state(ts->comp); cleanup_state(ts->app); free(ts->comp); free(ts->app); } static bool test_fixed_shm_buffer_copy(void) { fprintf(stdout, "\n shm_pool+buffer test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; char *testpat = make_filled_pattern(16384, 0xFEDCBA98); int fd = make_filled_file(16384, testpat); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, shm = {0x3}, compositor = {0x4}, pool = {0x5}, buffer = {0x6}, surface = {0x7}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_shm", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind(&T, registry, 1, "wl_shm", 1, shm); send_wl_registry_req_bind( &T, registry, 2, "wl_compositor", 1, compositor); send_wl_shm_req_create_pool(&T, shm, pool, fd, 16384); ret_fd = get_only_fd_from_msg(T.comp); send_wl_shm_pool_req_create_buffer( &T, pool, buffer, 0, 64, 64, 256, 0x30334258); 
send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(ret_fd, 16384, testpat); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } static bool test_fixed_shm_screencopy_copy(void) { fprintf(stdout, "\n screencopy test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; char *testpat_orig = make_filled_pattern(16384, 0xFEDCBA98); char *testpat_screen = make_filled_pattern(16384, 0x77557755); int fd = make_filled_file(16384, testpat_orig); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, shm = {0x3}, output = {0x4}, pool = {0x5}, buffer = {0x6}, frame = {0x7}, screencopy = {0x8}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_shm", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_output", 1); send_wl_registry_evt_global( &T, registry, 3, "zwlr_screencopy_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 1, "wl_shm", 1, shm); send_wl_registry_req_bind(&T, registry, 2, "wl_output", 1, output); send_wl_registry_req_bind(&T, registry, 3, "zwlr_screencopy_manager_v1", 1, screencopy); send_wl_shm_req_create_pool(&T, shm, pool, fd, 16384); ret_fd = get_only_fd_from_msg(T.comp); if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } send_zwlr_screencopy_manager_v1_req_capture_output( &T, screencopy, frame, 0, output); send_zwlr_screencopy_frame_v1_evt_buffer(&T, frame, 0, 64, 64, 16384); send_wl_shm_pool_req_create_buffer( &T, pool, buffer, 0, 64, 64, 256, 
0x30334258); send_zwlr_screencopy_frame_v1_req_copy(&T, frame, buffer); uint32_t *mem = (uint32_t *)mmap(NULL, 16384, PROT_READ | PROT_WRITE, MAP_SHARED, ret_fd, 0); memcpy(mem, testpat_screen, 16384); munmap(mem, 16384); send_zwlr_screencopy_frame_v1_evt_flags(&T, frame, 0); send_zwlr_screencopy_frame_v1_evt_ready(&T, frame, 0, 12345, 600000000); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(fd, 16384, testpat_screen); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat_screen); free(testpat_orig); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } static bool test_fixed_keymap_copy(void) { fprintf(stdout, "\n Keymap test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; char *testpat = make_filled_pattern(16384, 0xFEDCBA98); int fd = make_filled_file(16384, testpat); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, seat = {0x3}, keyboard = {0x4}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_seat", 7); send_wl_registry_req_bind(&T, registry, 1, "wl_seat", 7, seat); send_wl_seat_evt_capabilities(&T, seat, 3); send_wl_seat_req_get_keyboard(&T, seat, keyboard); send_wl_keyboard_evt_keymap(&T, keyboard, 1, fd, 16384); ret_fd = get_only_fd_from_msg(T.app); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(ret_fd, 16384, testpat); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } #define DMABUF_FORMAT 875713112 static int create_dmabuf(void) { struct render_data rd; memset(&rd, 0, sizeof(rd)); rd.drm_fd = -1; rd.av_disabled = 
true; const size_t test_width = 256; const size_t test_height = 384; const size_t test_cpp = 4; const size_t test_size = test_width * test_height * test_cpp; const struct dmabuf_slice_data slice_data = { .width = (uint32_t)test_width, .height = (uint32_t)test_height, .format = DMABUF_FORMAT, .num_planes = 1, .modifier = 0, .offsets = {0, 0, 0, 0}, .strides = {(uint32_t)(test_width * test_cpp), 0, 0, 0}, .using_planes = {true, false, false, false}, }; int dmafd = -1; if (init_render_data(&rd) == -1) { return -1; } struct gbm_bo *bo = make_dmabuf(&rd, &slice_data); if (!bo) { goto end; } void *map_handle = NULL; uint32_t stride; void *data = map_dmabuf(bo, true, &map_handle, &stride); if (!data) { destroy_dmabuf(bo); goto end; } /* TODO: the best test pattern is a colored gradient, so we can * check whether the copy flips things or not */ memset(data, 0x80, test_size); unmap_dmabuf(bo, map_handle); dmafd = export_dmabuf(bo); if (dmafd == -1) { goto end; } end: destroy_dmabuf(bo); cleanup_render_data(&rd); return dmafd; } enum dmabuf_copy_type { COPY_LINUX_DMABUF, COPY_LINUX_DMABUF_INDIR, COPY_DRM_PRIME, COPY_WLR_EXPORT, }; static bool test_fixed_dmabuf_copy(enum dmabuf_copy_type type) { const char *const types[] = {"linux-dmabuf", "linux-dmabuf-indir", "drm-prime", "wlr-export"}; fprintf(stdout, "\n DMABUF test, %s\n", types[(int)type]); int dmabufd = create_dmabuf(); const int width = 256, height = 384; if (dmabufd == -1) { return true; } struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; int ret_fd = -1; switch (type) { case COPY_LINUX_DMABUF: { struct wp_objid display = {0x1}, registry = {0x2}, linux_dmabuf = {0x3}, compositor = {0x4}, params = {0x5}, buffer = {0x6}, surface = {0x7}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global( &T, registry, 1, "zwp_linux_dmabuf_v1", 1); send_wl_registry_evt_global( &T, registry, 2, "wl_compositor", 1); 
send_wl_registry_req_bind(&T, registry, 1, "zwp_linux_dmabuf_v1", 1, linux_dmabuf); send_wl_registry_req_bind(&T, registry, 12, "wl_compositor", 1, compositor); send_zwp_linux_dmabuf_v1_evt_modifier( &T, linux_dmabuf, DMABUF_FORMAT, 0, 0); send_zwp_linux_dmabuf_v1_req_create_params( &T, linux_dmabuf, params); send_zwp_linux_buffer_params_v1_req_add( &T, params, dmabufd, 0, 0, 256 * 4, 0, 0); send_zwp_linux_buffer_params_v1_req_create_immed( &T, params, buffer, 256, 384, DMABUF_FORMAT, 0); /* this message + previous, after reordering, are treated as one * bundle; if that is fixed, this will break, and 1 should * become 2 */ ret_fd = get_fd_from_nth_to_last_msg(T.comp, 1); send_zwp_linux_buffer_params_v1_req_destroy(&T, params); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); } break; case COPY_LINUX_DMABUF_INDIR: { struct wp_objid display = {0x1}, registry = {0x2}, linux_dmabuf = {0x3}, compositor = {0x4}, params = {0x5}, buffer = {0x6}, surface = {0x7}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global( &T, registry, 1, "zwp_linux_dmabuf_v1", 1); send_wl_registry_evt_global( &T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind(&T, registry, 1, "zwp_linux_dmabuf_v1", 1, linux_dmabuf); send_wl_registry_req_bind(&T, registry, 12, "wl_compositor", 1, compositor); send_zwp_linux_dmabuf_v1_evt_modifier( &T, linux_dmabuf, DMABUF_FORMAT, 0, 0); send_zwp_linux_dmabuf_v1_req_create_params( &T, linux_dmabuf, params); send_zwp_linux_buffer_params_v1_req_add( &T, params, dmabufd, 0, 0, 256 * 4, 0, 0); send_zwp_linux_buffer_params_v1_req_create( &T, params, 256, 384, DMABUF_FORMAT, 0); /* this message + previous, after reordering, are treated as one * bundle; if that is fixed, this will break, and 1 should * become 2 */ ret_fd = get_fd_from_nth_to_last_msg(T.comp, 1); 
send_zwp_linux_buffer_params_v1_evt_created(&T, params, buffer); send_zwp_linux_buffer_params_v1_req_destroy(&T, params); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); } break; case COPY_DRM_PRIME: { struct wp_objid display = {0x1}, registry = {0x2}, wl_drm = {0x3}, compositor = {0x4}, buffer = {0x5}, surface = {0x6}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_drm", 1); send_wl_registry_evt_global( &T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind(&T, registry, 1, "wl_drm", 1, wl_drm); send_wl_registry_req_bind(&T, registry, 12, "wl_compositor", 1, compositor); send_wl_drm_evt_device(&T, wl_drm, "/dev/dri/renderD128"); send_wl_drm_evt_format(&T, wl_drm, DMABUF_FORMAT); send_wl_drm_evt_capabilities(&T, wl_drm, 1); send_wl_drm_req_create_prime_buffer(&T, wl_drm, buffer, dmabufd, width, height, DMABUF_FORMAT, 0, width * 4, 0, 0, 0, 0); ret_fd = get_fd_from_nth_to_last_msg(T.comp, 1); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_attach(&T, surface, buffer, 0, 0); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wl_surface_req_commit(&T, surface); } break; case COPY_WLR_EXPORT: { /* note: here the compositor creates and sends fd to client */ struct wp_objid display = {0x1}, registry = {0x2}, export_manager = {0x3}, output = {0x4}, dmabuf_frame = {0x5}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "zwlr_export_dmabuf_manager_v1", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_output", 1); send_wl_registry_req_bind(&T, registry, 1, "zwlr_export_dmabuf_manager_v1", 1, export_manager); send_wl_registry_req_bind( &T, registry, 12, "wl_output", 1, output); send_zwlr_export_dmabuf_manager_v1_req_capture_output( &T, 
export_manager, dmabuf_frame, 1, output); send_zwlr_export_dmabuf_frame_v1_evt_frame(&T, dmabuf_frame, width, height, 0, 0, 0, 1, DMABUF_FORMAT, 0, 0, 1); send_zwlr_export_dmabuf_frame_v1_evt_object(&T, dmabuf_frame, 0, dmabufd, width * height * 4, 0, width * 4, 0); ret_fd = get_only_fd_from_msg(T.app); send_zwlr_export_dmabuf_frame_v1_evt_ready( &T, dmabuf_frame, 555555, 555555555, 333333333); } break; } if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } // TODO: verify that the FD contents are correct end: checked_close(dmabufd); /* todo: the drm_fd may be dup'd by libgbm but not freed */ cleanup_tstate(&T); print_pass(pass); return pass; } enum data_device_type { DDT_WAYLAND, DDT_GTK_PRIMARY, DDT_PRIMARY, DDT_WLR, }; static const char *const data_device_type_strs[] = {"wayland main", "gtk primary selection", "primary selection", "wlroots data control"}; /* Confirm that wl_data_offer.receive creates a pipe matching the input */ static bool test_data_offer(enum data_device_type type) { fprintf(stdout, "\n Data offer test: %s\n", data_device_type_strs[type]); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; int src_pipe[2]; pipe(src_pipe); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, ddman = {0x3}, seat = {0x4}, ddev = {0x5}, offer = {0xff000001}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_seat", 7); send_wl_registry_req_bind(&T, registry, 1, "wl_seat", 7, seat); switch (type) { case DDT_WAYLAND: send_wl_registry_evt_global( &T, registry, 2, "wl_data_device_manager", 3); send_wl_registry_req_bind(&T, registry, 2, "wl_data_device_manager", 3, ddman); send_wl_data_device_manager_req_get_data_device( &T, ddman, ddev, seat); send_wl_data_device_evt_data_offer(&T, ddev, offer); send_wl_data_offer_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_wl_data_device_evt_selection(&T, 
ddev, offer); send_wl_data_offer_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; case DDT_GTK_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "gtk_primary_selection_device_manager", 1); send_wl_registry_req_bind(&T, registry, 2, "gtk_primary_selection_device_manager", 1, ddman); send_gtk_primary_selection_device_manager_req_get_device( &T, ddman, ddev, seat); send_gtk_primary_selection_device_evt_data_offer( &T, ddev, offer); send_gtk_primary_selection_offer_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_gtk_primary_selection_device_evt_selection( &T, ddev, offer); send_gtk_primary_selection_offer_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; case DDT_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1, ddman); send_zwp_primary_selection_device_manager_v1_req_get_device( &T, ddman, ddev, seat); send_zwp_primary_selection_device_v1_evt_data_offer( &T, ddev, offer); send_zwp_primary_selection_offer_v1_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_zwp_primary_selection_device_v1_evt_selection( &T, ddev, offer); send_zwp_primary_selection_offer_v1_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; case DDT_WLR: send_wl_registry_evt_global(&T, registry, 2, "zwlr_data_control_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwlr_data_control_manager_v1", 1, ddman); send_zwlr_data_control_manager_v1_req_get_data_device( &T, ddman, ddev, seat); send_zwlr_data_control_device_v1_evt_data_offer( &T, ddev, offer); send_zwlr_data_control_offer_v1_evt_offer( &T, offer, "text/plain;charset=utf-8"); send_zwlr_data_control_device_v1_evt_selection(&T, ddev, offer); send_zwlr_data_control_offer_v1_req_receive(&T, offer, "text/plain;charset=utf-8", src_pipe[1]); break; } ret_fd = get_only_fd_from_msg(T.comp); /* confirm receipt of fd with the 
correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } uint8_t tmp = 0xab; if (write(ret_fd, &tmp, 1) != 1) { wp_error("Fd not writable"); pass = false; goto end; } end: checked_close(src_pipe[0]); checked_close(src_pipe[1]); cleanup_tstate(&T); print_pass(pass); return pass; } /* Confirm that wl_data_source.data_offer creates a pipe matching the input */ static bool test_data_source(enum data_device_type type) { fprintf(stdout, "\n Data source test: %s\n", data_device_type_strs[type]); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; int dst_pipe[2]; pipe(dst_pipe); int ret_fd = -1; struct wp_objid display = {0x1}, registry = {0x2}, ddman = {0x3}, seat = {0x4}, ddev = {0x5}, dsource = {0x6}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wl_seat", 7); send_wl_registry_req_bind(&T, registry, 1, "wl_seat", 7, seat); switch (type) { case DDT_WAYLAND: send_wl_registry_evt_global( &T, registry, 2, "wl_data_device_manager", 1); send_wl_registry_req_bind(&T, registry, 2, "wl_data_device_manager", 1, ddman); send_wl_data_device_manager_req_get_data_device( &T, ddman, ddev, seat); send_wl_data_device_manager_req_create_data_source( &T, ddman, dsource); send_wl_data_source_req_offer( &T, dsource, "text/plain;charset=utf-8"); send_wl_data_device_req_set_selection(&T, ddev, dsource, 9999); send_wl_data_source_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; case DDT_GTK_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "gtk_primary_selection_device_manager", 1); send_wl_registry_req_bind(&T, registry, 2, "gtk_primary_selection_device_manager", 1, ddman); send_gtk_primary_selection_device_manager_req_get_device( &T, ddman, ddev, seat); send_gtk_primary_selection_device_manager_req_create_source( &T, ddman, dsource); send_gtk_primary_selection_source_req_offer( &T, 
dsource, "text/plain;charset=utf-8"); send_gtk_primary_selection_device_req_set_selection( &T, ddev, dsource, 9999); send_gtk_primary_selection_source_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; case DDT_PRIMARY: send_wl_registry_evt_global(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwp_primary_selection_device_manager_v1", 1, ddman); send_zwp_primary_selection_device_manager_v1_req_get_device( &T, ddman, ddev, seat); send_zwp_primary_selection_device_manager_v1_req_create_source( &T, ddman, dsource); send_zwp_primary_selection_source_v1_req_offer( &T, dsource, "text/plain;charset=utf-8"); send_zwp_primary_selection_device_v1_req_set_selection( &T, ddev, dsource, 9999); send_zwp_primary_selection_source_v1_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; case DDT_WLR: send_wl_registry_evt_global(&T, registry, 2, "zwlr_data_control_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 2, "zwlr_data_control_manager_v1", 1, ddman); send_zwlr_data_control_manager_v1_req_get_data_device( &T, ddman, ddev, seat); send_zwlr_data_control_manager_v1_req_create_data_source( &T, ddman, dsource); send_zwlr_data_control_source_v1_req_offer( &T, dsource, "text/plain;charset=utf-8"); send_zwlr_data_control_device_v1_req_set_selection( &T, ddev, dsource); send_zwlr_data_control_source_v1_evt_send(&T, dsource, "text/plain;charset=utf-8", dst_pipe[0]); break; } ret_fd = get_only_fd_from_msg(T.app); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } /* todo: check readable */ end: checked_close(dst_pipe[0]); checked_close(dst_pipe[1]); cleanup_tstate(&T); print_pass(pass); return pass; } /* Check that gamma_control copies the input file */ static bool test_gamma_control(void) { fprintf(stdout, "\n Gamma control test\n"); struct transfer_states T; if (setup_tstate(&T) == 
-1) { wp_error("Test setup failed"); return true; } bool pass = true; int ret_fd = -1; char *testpat = make_filled_pattern(1024, 0x12345678); int fd = make_filled_file(1024, testpat); struct wp_objid display = {0x1}, registry = {0x2}, gamma_manager = {0x3}, output = {0x4}, gamma_control = {0x5}; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global( &T, registry, 1, "zwlr_gamma_control_manager_v1", 1); send_wl_registry_req_bind(&T, registry, 1, "zwlr_gamma_control_manager_v1", 1, gamma_manager); send_wl_registry_evt_global(&T, registry, 1, "wl_output", 3); send_wl_registry_req_bind(&T, registry, 1, "wl_output", 3, output); send_zwlr_gamma_control_manager_v1_req_get_gamma_control( &T, gamma_manager, gamma_control, output); send_zwlr_gamma_control_v1_evt_gamma_size(&T, gamma_control, 1024); send_zwlr_gamma_control_v1_req_set_gamma(&T, gamma_control, fd); ret_fd = get_only_fd_from_msg(T.comp); /* confirm receipt of fd with the correct contents; if not, * reject */ if (ret_fd == -1) { wp_error("Fd not passed through"); pass = false; goto end; } pass = check_file_contents(ret_fd, 1024, testpat); if (!pass) { wp_error("Failed to transfer file"); } end: free(testpat); checked_close(fd); cleanup_tstate(&T); print_pass(pass); return pass; } /* Check that presentation-time feedback timestamps are translated to the * receiving side's clock */ static bool test_presentation_time(void) { fprintf(stdout, "\n Presentation time test\n"); struct transfer_states T; if (setup_tstate(&T) == -1) { wp_error("Test setup failed"); return true; } bool pass = true; struct wp_objid display = {0x1}, registry = {0x2}, presentation = {0x3}, compositor = {0x4}, surface = {0x5}, feedback = {0x6}; T.app->local_time_offset = 500; T.comp->local_time_offset = 600; send_wl_display_req_get_registry(&T, display, registry); send_wl_registry_evt_global(&T, registry, 1, "wp_presentation", 1); send_wl_registry_evt_global(&T, registry, 2, "wl_compositor", 1); send_wl_registry_req_bind( &T, registry, 1, "wp_presentation", 1,
presentation); /* todo: run another branch with CLOCK_REALTIME */ send_wp_presentation_evt_clock_id(&T, presentation, CLOCK_MONOTONIC); send_wl_registry_req_bind( &T, registry, 12, "wl_compositor", 1, compositor); send_wl_compositor_req_create_surface(&T, compositor, surface); send_wl_surface_req_damage(&T, surface, 0, 0, 64, 64); send_wp_presentation_req_feedback(&T, presentation, surface, feedback); send_wl_surface_req_commit(&T, surface); send_wp_presentation_feedback_evt_presented( &T, feedback, 0, 30, 120000, 16666666, 0, 0, 7); const struct msg *const last_msg = &T.app->rcvd[T.app->nrcvd - 1]; uint32_t tv_sec_hi = last_msg->data[2], tv_sec_lo = last_msg->data[3], tv_nsec = last_msg->data[4]; if (tv_nsec != 120000 + T.app->local_time_offset - T.comp->local_time_offset) { wp_error("Time translation failed %d %d %d", tv_sec_hi, tv_sec_lo, tv_nsec); pass = false; goto end; } /* look at timestamp */ if (!pass) { goto end; } end: cleanup_tstate(&T); print_pass(pass); return pass; } /* Check whether the video encoding feature can replicate a uniform * color image */ static bool test_fixed_video_color_copy(enum video_coding_fmt fmt, bool hw) { (void)fmt; (void)hw; /* todo: back out if no dmabuf support or no video support */ return true; } log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler}; int main(int argc, char **argv) { (void)argc; (void)argv; set_initial_fds(); int ntest = 21; int nsuccess = 0; nsuccess += test_fixed_shm_buffer_copy(); nsuccess += test_fixed_shm_screencopy_copy(); nsuccess += test_fixed_keymap_copy(); nsuccess += test_fixed_dmabuf_copy(COPY_LINUX_DMABUF); nsuccess += test_fixed_dmabuf_copy(COPY_LINUX_DMABUF_INDIR); nsuccess += test_fixed_dmabuf_copy(COPY_DRM_PRIME); nsuccess += test_fixed_dmabuf_copy(COPY_WLR_EXPORT); nsuccess += test_data_offer(DDT_WAYLAND); nsuccess += test_data_offer(DDT_PRIMARY); nsuccess += test_data_offer(DDT_GTK_PRIMARY); nsuccess += test_data_offer(DDT_WLR); nsuccess += test_data_source(DDT_WAYLAND); 
nsuccess += test_data_source(DDT_PRIMARY); nsuccess += test_data_source(DDT_GTK_PRIMARY); nsuccess += test_data_source(DDT_WLR); nsuccess += test_gamma_control(); nsuccess += test_presentation_time(); nsuccess += test_fixed_video_color_copy(VIDEO_H264, false); nsuccess += test_fixed_video_color_copy(VIDEO_H264, true); nsuccess += test_fixed_video_color_copy(VIDEO_VP9, false); nsuccess += test_fixed_video_color_copy(VIDEO_AV1, false); // TODO: add tests for handling of common errors, e.g. invalid fd, // or type confusion fprintf(stdout, "\n%d of %d cases passed\n", nsuccess, ntest); check_unclosed_fds(); return (nsuccess == ntest) ? EXIT_SUCCESS : EXIT_FAILURE; } waypipe-v0.9.1/test/protocol_functions.txt000066400000000000000000000052241463133614300210450ustar00rootroot00000000000000gtk_primary_selection_device_evt_data_offer gtk_primary_selection_device_evt_selection gtk_primary_selection_device_manager_req_create_source gtk_primary_selection_device_manager_req_get_device gtk_primary_selection_device_req_set_selection gtk_primary_selection_offer_evt_offer gtk_primary_selection_offer_req_receive gtk_primary_selection_source_evt_send gtk_primary_selection_source_req_offer wl_compositor_req_create_surface wl_data_device_evt_data_offer wl_data_device_evt_selection wl_data_device_manager_req_create_data_source wl_data_device_manager_req_get_data_device wl_data_device_req_set_selection wl_data_offer_evt_offer wl_data_offer_req_receive wl_data_source_evt_send wl_data_source_req_offer wl_display_req_get_registry wl_drm_evt_device wl_drm_evt_format wl_drm_evt_capabilities wl_drm_req_create_prime_buffer wl_keyboard_evt_keymap wl_registry_evt_global wl_registry_req_bind wl_seat_evt_capabilities wl_seat_req_get_keyboard wl_shm_pool_req_create_buffer wl_shm_req_create_pool wl_surface_req_attach wl_surface_req_commit wl_surface_req_damage wp_presentation_evt_clock_id wp_presentation_req_feedback wp_presentation_feedback_evt_presented zwlr_data_control_device_v1_evt_data_offer 
zwlr_data_control_device_v1_evt_selection zwlr_data_control_device_v1_req_set_selection zwlr_data_control_manager_v1_req_create_data_source zwlr_data_control_manager_v1_req_get_data_device zwlr_data_control_offer_v1_evt_offer zwlr_data_control_offer_v1_req_receive zwlr_data_control_source_v1_evt_send zwlr_data_control_source_v1_req_offer zwlr_export_dmabuf_manager_v1_req_capture_output zwlr_export_dmabuf_frame_v1_evt_frame zwlr_export_dmabuf_frame_v1_evt_object zwlr_export_dmabuf_frame_v1_evt_ready zwlr_gamma_control_manager_v1_req_get_gamma_control zwlr_gamma_control_v1_evt_gamma_size zwlr_gamma_control_v1_req_set_gamma zwlr_screencopy_frame_v1_evt_buffer zwlr_screencopy_frame_v1_evt_flags zwlr_screencopy_frame_v1_evt_ready zwlr_screencopy_frame_v1_req_copy zwlr_screencopy_manager_v1_req_capture_output zwp_linux_buffer_params_v1_evt_created zwp_linux_buffer_params_v1_req_add zwp_linux_buffer_params_v1_req_create zwp_linux_buffer_params_v1_req_create_immed zwp_linux_buffer_params_v1_req_destroy zwp_linux_dmabuf_v1_evt_modifier zwp_linux_dmabuf_v1_req_create_params zwp_primary_selection_device_manager_v1_req_create_source zwp_primary_selection_device_manager_v1_req_get_device zwp_primary_selection_device_v1_evt_data_offer zwp_primary_selection_device_v1_evt_selection zwp_primary_selection_device_v1_req_set_selection zwp_primary_selection_offer_v1_evt_offer zwp_primary_selection_offer_v1_req_receive zwp_primary_selection_source_v1_evt_send zwp_primary_selection_source_v1_req_offer waypipe-v0.9.1/test/startup_failure.py000077500000000000000000000173051463133614300201440ustar00rootroot00000000000000#!/usr/bin/env python3 """ Verifying all the ways in which waypipe can fail before even making a connection. 
""" if __name__ != "__main__": quit(1) import os, subprocess, time, signal, socket def try_unlink(path): try: os.unlink(path) except FileNotFoundError: pass def make_socket(path): folder, filename = os.path.split(path) cwdir = os.open(".", os.O_RDONLY | os.O_DIRECTORY) display_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) os.chdir(folder) display_socket.bind(filename) display_socket.listen() os.fchdir(cwdir) os.close(cwdir) return display_socket waypipe_path = os.environ["TEST_WAYPIPE_PATH"] sleep_path = os.environ["TEST_SLEEP_PATH"] fake_ssh_path = os.environ["TEST_FAKE_SSH_PATH"] ld_library_path = ( os.environ["LD_LIBRARY_PATH"] if "LD_LIBRARY_PATH" in os.environ else "" ) xdg_runtime_dir = os.path.abspath("./run/") os.makedirs(xdg_runtime_dir, mode=0o700, exist_ok=True) os.chmod(xdg_runtime_dir, 0o700) all_succeeding = True wayland_display = "wayland-display" client_socket_path = xdg_runtime_dir + "/client-socket" server_socket_path = xdg_runtime_dir + "/server-socket" ssh_socket_path = xdg_runtime_dir + "/ssh-socket" wayland_display_path = xdg_runtime_dir + "/" + wayland_display try_unlink(wayland_display_path) display_socket = make_socket(wayland_display_path) USE_SOCKETPAIR = 1 << 1 EXPECT_SUCCESS = 1 << 2 EXPECT_TIMEOUT = 1 << 3 EXPECT_FAILURE = 1 << 4 def run_test(name, command, env, flags): try_unlink(client_socket_path) try_unlink(server_socket_path) try_unlink(server_socket_path + ".disp.sock") if flags & USE_SOCKETPAIR: sockets = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM) conn_socket = 999 os.dup2(sockets[1].fileno(), conn_socket, inheritable=True) env = dict(env, WAYLAND_SOCKET=str(conn_socket)) pfds = [conn_socket] else: pfds = [] timed_out = False log_path = os.path.join(xdg_runtime_dir, "sfail_{}.txt".format(name)) logfile = open(log_path, "wb") print(env, " ".join(command)) proc = subprocess.Popen( command, env=env, stdin=subprocess.DEVNULL, stdout=logfile, stderr=subprocess.STDOUT, pass_fds=pfds, start_new_session=True, ) 
    try:
        output, none = proc.communicate(timeout=1.0)
    except subprocess.TimeoutExpired as e:
        # Program is waiting indefinitely for something.
        # Kill it, and all children.
        pgrp = os.getpgid(proc.pid)
        os.killpg(pgrp, signal.SIGKILL)
        retcode = None
        timed_out = True
    else:
        retcode = proc.returncode
    logfile.close()
    output = open(log_path, "rb").read()

    if flags & USE_SOCKETPAIR:
        os.close(conn_socket)

    log_path = os.path.join(xdg_runtime_dir, "weston_out.txt")
    with open(log_path, "wb") as out:
        out.write(output)

    result = (
        "timeout"
        if timed_out
        else ("fail({})".format(retcode) if retcode != 0 else "pass")
    )

    global all_succeeding
    if flags & EXPECT_SUCCESS:
        if timed_out or retcode != 0:
            print(
                "Run {} failed when it should have succeeded".format(name),
                output,
                retcode,
                "timeout" if timed_out else "notimeout",
            )
            all_succeeding = False
        else:
            print("Run {} passed.".format(name), output)
    elif flags & EXPECT_FAILURE:
        if timed_out or retcode == 0:
            print(
                "Run {} succeeded when it should have failed".format(name),
                output,
                retcode,
                "timeout" if timed_out else "notimeout",
            )
            all_succeeding = False
        else:
            print("Run {} passed.".format(name), output)
    elif flags & EXPECT_TIMEOUT:
        if not timed_out:
            print(
                "Run {} stopped when it should have continued".format(name),
                output,
                retcode,
            )
            all_succeeding = False
        else:
            print("Run {} passed.".format(name), output)
    else:
        raise NotImplementedError


wait_cmd = [sleep_path, "10.0"]
invalid_hostname = "@"
fake_ssh_dir = os.path.dirname(fake_ssh_path)
waypipe_dir = os.path.dirname(waypipe_path)
base_env = {"LD_LIBRARY_PATH": ld_library_path, "PATH": ""}
standard_env = dict(base_env, XDG_RUNTIME_DIR=xdg_runtime_dir)
ssh_only_env = dict(standard_env, PATH=fake_ssh_dir)
ssh_env = dict(standard_env, PATH=fake_ssh_dir + ":" + waypipe_dir)

# Configurations that should fail
run_test(
    "b_client_long_disp",
    [waypipe_path, "-s", client_socket_path, "client"],
    dict(base_env, WAYLAND_DISPLAY=("/" + "x" * 107)),
    EXPECT_FAILURE,
)
run_test(
    "b_client_disp_dne",
[waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_DISPLAY=xdg_runtime_dir + "/dne"), EXPECT_FAILURE, ) run_test( "b_client_no_env", [waypipe_path, "-s", client_socket_path, "client"], base_env, EXPECT_FAILURE, ) run_test( "b_server_oneshot_no_env", [waypipe_path, "-o", "-s", server_socket_path, "server"] + wait_cmd, base_env, EXPECT_TIMEOUT, ) run_test( "b_client_bad_pipe1", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_SOCKET="33"), EXPECT_FAILURE, ) run_test( "b_client_bad_pipe2", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_SOCKET="777777777777777777777777777"), EXPECT_FAILURE, ) run_test( "b_client_bad_pipe3", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_SOCKET="0x33"), EXPECT_FAILURE, ) run_test( "b_client_nxdg_offset", [waypipe_path, "-s", client_socket_path, "client"], dict(base_env, WAYLAND_DISPLAY=wayland_display), EXPECT_FAILURE, ) run_test( "b_server_no_env", [waypipe_path, "-s", server_socket_path, "server"] + wait_cmd, base_env, EXPECT_FAILURE, ) run_test( "g_ssh_test_nossh_env", [waypipe_path, "-o", "-s", ssh_socket_path, "ssh", invalid_hostname] + wait_cmd, dict(standard_env, WAYLAND_DISPLAY=wayland_display), EXPECT_FAILURE, ) # Configurations that should succeed run_test( "g_help", [waypipe_path, "--help"], base_env, EXPECT_SUCCESS, ) run_test( "g_server_std_env", [waypipe_path, "-s", server_socket_path, "server"] + wait_cmd, standard_env, EXPECT_TIMEOUT, ) run_test( "g_client_std_env", [waypipe_path, "-s", client_socket_path, "client"], dict(standard_env, WAYLAND_DISPLAY=wayland_display_path), EXPECT_TIMEOUT, ) run_test( "g_client_offset_sock", [waypipe_path, "-s", client_socket_path, "client"], dict(standard_env, WAYLAND_DISPLAY=wayland_display), EXPECT_TIMEOUT, ) run_test( "g_client_pipe_env", [waypipe_path, "-s", client_socket_path, "client"], dict(standard_env), EXPECT_TIMEOUT | USE_SOCKETPAIR, ) run_test( "g_ssh_test_oneshot", 
[waypipe_path, "-o", "-s", ssh_socket_path, "ssh", invalid_hostname] + wait_cmd, dict(ssh_env, WAYLAND_DISPLAY=wayland_display), EXPECT_TIMEOUT, ) run_test( "g_ssh_test_reg", [waypipe_path, "-s", ssh_socket_path, "ssh", invalid_hostname] + wait_cmd, dict(ssh_env, WAYLAND_DISPLAY=wayland_display), EXPECT_TIMEOUT, ) run_test( "g_ssh_test_remotebin", [ waypipe_path, "--oneshot", "--remote-bin", waypipe_path, "-s", ssh_socket_path, "ssh", invalid_hostname, ] + wait_cmd, dict(ssh_only_env, WAYLAND_DISPLAY=wayland_display), EXPECT_TIMEOUT, ) try_unlink(client_socket_path) try_unlink(wayland_display_path) quit(0 if all_succeeding else 1) waypipe-v0.9.1/test/test-proto.xml000066400000000000000000000054171463133614300172210ustar00rootroot00000000000000 Copyright © 2019 Manuel Stoeckl Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that\n the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of the copyright holders not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. The copyright holders make no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty. THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
waypipe-v0.9.1/test/test.sh

#!/bin/sh
root=`pwd`
waypipe=`which waypipe`
program=`which ${1:-weston-terminal}`
debug=
debug=-d

# Orange=client, purple=server
rm -f /tmp/waypipe-server.sock /tmp/waypipe-client.sock
($waypipe -o $debug client 2>&1 | sed 's/.*/&/') &
# ssh-to-self; should have a local keypair set up
(ssh -R /tmp/waypipe-server.sock:/tmp/waypipe-client.sock localhost $waypipe -o $debug server -- $program) 2>&1 | sed 's/.*/&/'
kill %1
rm -f /tmp/waypipe-server.sock /tmp/waypipe-client.sock

waypipe-v0.9.1/test/test_fnlist.txt

*

waypipe-v0.9.1/test/trace_bcc.sh

#!/bin/sh
set -e
# With bcc 'tplist -l `which waypipe`', can list all probes
# With bcc 'trace', can print events, arguments, and timestamps
sudo /usr/share/bcc/tools/trace -t \
	'u:/usr/bin/waypipe:construct_diff_exit "diffsize %d", arg1' \
	'u:/usr/bin/waypipe:construct_diff_enter "rects %d", arg1' \
	'u:/usr/bin/waypipe:apply_diff_enter "size %d diffsize %d", arg1, arg2' \
	'u:/usr/bin/waypipe:apply_diff_exit' \
	'u:/usr/bin/waypipe:channel_write_end' \
	'u:/usr/bin/waypipe:channel_write_start "size %d", arg1' \
	'u:/usr/bin/waypipe:worker_comp_enter "index %d", arg1' \
	'u:/usr/bin/waypipe:worker_comp_exit "index %d", arg1' \
	'u:/usr/bin/waypipe:worker_compdiff_enter "index %d", arg1' \
	'u:/usr/bin/waypipe:worker_compdiff_exit "index %d", arg1'

waypipe-v0.9.1/test/trace_perf.sh

#!/bin/sh
set -x
# This probably requires root to set up the probes, and
# a low sys/kernel/perf_event_paranoid to record them.
# Also, perf record can create huge (>1 GB) files on busy machines,
# so it's recommended to run this on a tmpfs
prog=$(which waypipe)
capture_time=${1:-120}

setup="perf buildid-cache -a `which waypipe` ; perf probe -d sdt_waypipe:* ; perf probe sdt_waypipe:* ;"
sudo -- sh -c "$setup"
sudo perf record -e sdt_waypipe:*,sched:sched_switch -aR sleep $capture_time
sudo chmod 644 perf.data
perf script --ns | gzip -9 >scriptfile.gz

waypipe-v0.9.1/test/wire_parse.c

/*
 * Copyright © 2019 Manuel Stoeckl
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sublicense, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include "common.h"
#include "parsing.h"
#include "shadow.h"
#include "util.h"

/* NOTE: the angle-bracket header names were lost during text extraction;
 * they are restored here from what the code below actually uses
 * (uint32_t, printf, EXIT_SUCCESS, strcmp) */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "protocol-test-proto.h"

/* from parsing.c */
bool size_check(const struct msg_data *data, const uint32_t *payload,
		unsigned int true_length, int fd_length);

void do_xtype_req_blue(struct context *ctx, const char *interface,
		uint32_t version, struct wp_object *id, int b, int32_t c,
		uint32_t d, struct wp_object *e, const char *f, uint32_t g)
{
	char buf[256];
	sprintf(buf, "%s %u %u %d %d %u %u %s %u", interface, version,
			id ? id->obj_id : 0, b, c, d, e ? e->obj_id : 0, f, g);
	printf("%s\n", buf);
	ctx->drop_this_msg = strcmp(buf, "babacba 4441 992 7771 3331 4442 991 (null) 4443") != 0;
}

void do_xtype_evt_yellow(struct context *ctx, uint32_t c)
{
	char buf[256];
	sprintf(buf, "%u", c);
	printf("%s\n", buf);
	ctx->drop_this_msg = strcmp(buf, "4441") != 0;
}

void do_ytype_req_green(struct context *ctx, uint32_t a, const char *b,
		const char *c, int d, const char *e, struct wp_object *f,
		uint32_t g_count, const uint8_t *g_val)
{
	char buf[256];
	sprintf(buf, "%u %s %s %d %s %u %u %x|%x|%x|%x|%x|%x|%x|%x", a, b, c,
			d, e, f ? f->obj_id : 0, g_count, g_val[0], g_val[1],
			g_val[2], g_val[3], g_val[4], g_val[5], g_val[6],
			g_val[7]);
	printf("%s\n", buf);
	ctx->drop_this_msg = strcmp(buf, "4441 bea (null) 7771 cbbc 991 8 81|80|81|80|90|99|99|99") != 0;
}

void do_ytype_evt_red(struct context *ctx, struct wp_object *a, int32_t b,
		int c, struct wp_object *d, int32_t e, int32_t f,
		struct wp_object *g, int32_t h, uint32_t i, const char *j,
		int k, uint32_t l_count, const uint8_t *l_val, uint32_t n,
		const char *m, struct wp_object *o, int p, struct wp_object *q)
{
	char buf[256];
	sprintf(buf, "%u %d %d %u %d %d %u %d %u %s %d %u %x|%x|%x %u %s %u %d %u",
			a ? a->obj_id : 0, b, c, d ? d->obj_id : 0, e, f,
			g ? g->obj_id : 0, h, i, j, k, l_count, l_val[0],
			l_val[1], l_val[2], n, m, o ? o->obj_id : 0, p,
			q ?
			q->obj_id : 0);
	printf("%s\n", buf);
	ctx->drop_this_msg = strcmp(buf, "0 33330 8881 0 33331 33332 0 33333 44440 bcaba 8882 3 80|80|80 99990 (null) 992 8883 991") != 0;
}

struct wire_test {
	const struct wp_interface *intf;
	int msg_offset;
	int fds[4];
	uint32_t words[50];
	int nfds;
	int nwords;
};

static inline uint32_t pack_u32(uint8_t a0, uint8_t a1, uint8_t a2, uint8_t a3)
{
	union {
		uint8_t s[4];
		uint32_t v;
	} u;
	u.s[0] = a0;
	u.s[1] = a1;
	u.s[2] = a2;
	u.s[3] = a3;
	return u.v;
}

log_handler_func_t log_funcs[2] = {test_log_handler, test_log_handler};

int main(int argc, char **argv)
{
	(void)argc;
	(void)argv;

	struct message_tracker mt;
	init_message_tracker(&mt);
	struct wp_object *old_display = tracker_get(&mt, 1);
	tracker_remove(&mt, old_display);
	destroy_wp_object(old_display);

	struct wp_object xobj;
	xobj.type = &intf_xtype;
	xobj.is_zombie = false;
	xobj.obj_id = 991;
	tracker_insert(&mt, &xobj);

	struct wp_object yobj;
	yobj.type = &intf_ytype;
	yobj.is_zombie = false;
	yobj.obj_id = 992;
	tracker_insert(&mt, &yobj);

	struct context ctx = {.obj = &xobj, .g = NULL};

	struct wire_test tests[] = {
			{&intf_xtype, 0, {7771},
					{8, pack_u32(0x62, 0x61, 0x62, 0x61),
							pack_u32(0x63, 0x62, 0x61, 0),
							4441, yobj.obj_id, 3331,
							4442, xobj.obj_id, 0,
							4443},
					1, 10},
			{&intf_xtype, 1, {0}, {4441}, 0, 1},
			{&intf_ytype, 0, {7771},
					{4441, 4, pack_u32(0x62, 0x65, 0x61, 0),
							0, 5,
							pack_u32(0x63, 0x62, 0x62, 0x63),
							pack_u32(0, 0x99, 0x99, 0x99),
							xobj.obj_id, 8,
							pack_u32(0x81, 0x80, 0x81, 0x80),
							pack_u32(0x90, 0x99, 0x99, 0x99)},
					1, 11},
			{&intf_ytype, 1, {8881, 8882, 8883},
					{7770, 33330, 7771, 33331, 33332, 7773,
							33333, 44440, 6,
							pack_u32(0x62, 0x63, 0x61, 0x62),
							pack_u32(0x61, 0, 0x99, 0x99),
							3,
							pack_u32(0x80, 0x80, 0x80, 0x11),
							99990, 0, yobj.obj_id,
							xobj.obj_id},
					3, 17}};

	bool all_success = true;
	for (size_t t = 0; t < sizeof(tests) / sizeof(tests[0]); t++) {
		struct wire_test *wt = &tests[t];

		ctx.drop_this_msg = false;
		wp_callfn_t func = wt->intf->msgs[wt->msg_offset].call;
		(*func)(&ctx, wt->words,
				wt->fds, &mt);
		if (ctx.drop_this_msg) {
			all_success = false;
		}
		printf("Function call %s.%s, %s\n", wt->intf->name,
				get_nth_packed_string(wt->intf->msg_names,
						wt->msg_offset),
				ctx.drop_this_msg ? "FAIL" : "pass");

		for (int fdlen = wt->nfds; fdlen >= 0; fdlen--) {
			for (int length = wt->nwords; length >= 0; length--) {
				if (fdlen != wt->nfds && length < wt->nwords) {
					/* the fd check is really trivial */
					continue;
				}

				bool expect_success = (wt->nwords == length) &&
						(fdlen == wt->nfds);
				printf("Trying: %d/%d words, %d/%d fds\n",
						length, wt->nwords, fdlen,
						wt->nfds);

				bool sp = size_check(
						&wt->intf->msgs[wt->msg_offset],
						wt->words, (unsigned int)length,
						fdlen);
				if (sp != expect_success) {
					wp_error("size check FAIL (%c, expected %c) at %d/%d chars, %d/%d fds",
							sp ? 'Y' : 'n',
							expect_success ? 'Y' : 'n',
							length, wt->nwords,
							fdlen, wt->nfds);
				}
				all_success &= (sp == expect_success);
			}
		}
	}

	tracker_remove(&mt, &xobj);
	tracker_remove(&mt, &yobj);
	cleanup_message_tracker(&mt);

	printf("Net result: %s\n", all_success ? "pass" : "FAIL");
	return all_success ? EXIT_SUCCESS : EXIT_FAILURE;
}

waypipe-v0.9.1/waypipe.scd

waypipe(1)

# NAME

waypipe - A transparent proxy for Wayland applications

# SYNOPSIS

*waypipe* [options...] *ssh* [ssh options] _destination_ _command..._

*waypipe* [options...] *client*++
*waypipe* [options...] *server* -- _command..._++
*waypipe* *recon* _control_pipe_ _new_socket_path_++
*waypipe* *bench* _bandwidth_++
*waypipe* [*--version*] [*-h*, *--help*]

\[options...\] = [*-c*, *--compress* C] [*-d*, *--debug*] [*-n*, *--no-gpu*]
[*-o*, *--oneshot*] [*-s*, *--socket* S] [*--allow-tiled*] [*--control* C]
[*--display* D] [*--drm-node* R] [*--remote-node* R] [*--remote-bin* R]
[*--login-shell*] [*--threads* T] [*--title-prefix* P] [*--unlink-socket*]
[*--video*[=V]] [*--vsock*]

# DESCRIPTION

Waypipe is a proxy for Wayland clients, with the aim of supporting behavior
like *ssh -X*.
Prefixing an *ssh ...* command to become *waypipe ssh ...* will automatically
run *waypipe* both locally and remotely, and modify the ssh command to set up
forwarding between the two instances of *waypipe*. The remote instance will
act like a Wayland compositor, letting Wayland applications that are run
remotely be displayed locally.

When run as *waypipe client*, it will open a socket (by default at
_/tmp/waypipe-client.sock_) and will connect to the local Wayland compositor
and forward all Wayland applications which were linked to it over the socket
by a matching *waypipe server* instance.

When run as *waypipe server*, it will run the command that follows in its
command line invocation, set up its own Wayland compositor socket, and try to
connect to its matching *waypipe client* socket (by default
_/tmp/waypipe-server.sock_) and forward all the Wayland clients that connect
to its fake compositor socket to the matching *waypipe client*.

The *waypipe recon* mode is used to reconnect a *waypipe server* instance
which has had a control pipe (option *--control*) set. The new socket path
should indicate a Unix socket whose connections are forwarded to the *waypipe
client* that the *waypipe server* was initially connected to.

The *waypipe bench* mode can be used to estimate, given a specific connection
_bandwidth_ in MB/sec, which compression options produce the lowest latency.
It tests two synthetic images, one made to be roughly as compressible as
images containing text, and one made to be roughly as compressible as images
containing pictures.

# OPTIONS

*-c C, --compress C*
	Select the compression method applied to data transfers. Options are
	_none_ (for high-bandwidth networks), _lz4_ (intermediate), _zstd_
	(slow connection). The default compression is _lz4_.† The compression
	level can be chosen by appending = followed by a number. For example,
	if *C* is _zstd=7_, waypipe will use level 7 Zstd compression.
	† Unless *waypipe* is built without LZ4 support, in which case the
	default compression will be _none_.

*-d, --debug*
	Print debug log messages.

*-h, --help*
	Show help message and quit.

*-n, --no-gpu*
	Block protocols like wayland-drm and linux-dmabuf which require access
	to e.g. render nodes.

*-o, --oneshot*
	Only permit a single connection, and exit when it is closed.

*-s S, --socket S*
	Use *S* as the path for the Unix socket. The default socket path for
	server mode is _/tmp/waypipe-server.sock_; for client mode, it is
	_/tmp/waypipe-client.sock_; and in ssh mode, *S* gives the prefix used
	by both the client and the server for their socket paths. The default
	prefix in ssh mode is _/tmp/waypipe_. When vsock is enabled, use *S*
	to specify a CID and a port number.

*--version*
	Briefly describe Waypipe's version and the features it was built with,
	then quit. Possible features: LZ4 compression support, ZSTD
	compression support, ability to transfer DMABUFs, video compression
	support, VAAPI hardware video de/encoding support.

*--allow-tiled*
	By default, waypipe filters out all advertised DMABUF formats which
	have format layout modifiers, as CPU access to these formats may be
	very slow. Setting this flag disables the filtering. Since tiled
	images often permit faster GPU operations, most OpenGL applications
	will select tiling modifiers when they are available.

*--control C*
	For server or ssh mode, provide the path to the "control pipe" that
	will be created by the server. Writing (with *waypipe recon C T*, or
	'echo -n T > C') a new socket path to this pipe will make the server
	instance replace all running connections with connections to the new
	Unix socket. The new socket should ultimately forward data to the
	same waypipe client that the server was connected to before.

*--display D*
	For server or ssh mode, provide _WAYLAND_DISPLAY_ and let waypipe
	configure its Wayland display socket to have a matching path.
	(If *D* is not an absolute path, the socket will be created in the
	folder given by the environment variable _XDG_RUNTIME_DIR_.)

*--drm-node R*
	Specify the path *R* to the drm device that this instance of waypipe
	should use and (in server mode) notify connecting applications about.

*--remote-node R*
	In ssh mode, specify the path *R* to the drm device that the remote
	instance of waypipe (running in server mode) should use.

*--remote-bin R*
	In ssh mode, specify the path *R* to the waypipe binary on the remote
	computer, or its name if it is available in _PATH_. It defaults to
	*waypipe* if this option isn't passed.

*--login-shell*
	Only for server mode; if no command is being run, open a login shell.

*--threads T*
	Set the number of total threads (including the main thread) which a
	*waypipe* instance will create. These threads will be used to
	parallelize compression operations. This flag is passed on to
	*waypipe server* when given to *waypipe ssh*. The flag also controls
	the thread count for *waypipe bench*. The default behavior (choosable
	by setting *T* to _0_) is to use half as many threads as the computer
	has hardware threads available.

*--title-prefix P*
	Prepend *P* to any window titles specified using the XDG shell
	protocol. In ssh mode, the prefix is applied only on the client side.

*--unlink-socket*
	Only for server mode; on shutdown, unlink the Unix socket that
	waypipe connects to.

*--video[=V]*
	Compress specific DMABUF formats using a lossy video codec. Opaque,
	10-bit, and multiplanar formats, among others, are not supported.
	*V* is a comma separated list of options to control the video
	encoding. Using the *--video* flag without setting any options is
	equivalent to using the default setting of:
	*--video=sw,bpf=120000,h264*. Later options supersede earlier ones.

	*sw*
		Use software encoding and decoding.

	*hw*
		Use hardware (VAAPI) encoding and decoding, if available.
		This can be finicky and may only work with specific window
		buffer formats and sizes.
	*h264*
		Use H.264 encoded video.

	*vp9*
		Use VP9 encoded video.

	*bpf=B*
		Set the target bit rate of the video encoder, in units of
		bits per frame. *B* can be written as an integer or with
		exponential notation; thus *--video=bpf=7.5e5* is equivalent
		to *--video=bpf=750000*.

*--hwvideo*
	Deprecated option, equivalent to *--video=hw*.

*--vsock*
	Use vsock instead of unix sockets. This is used when waypipe is
	running in virtual machines. With this option enabled, specify a CID
	and a port number in *S*. CID is only used in the server mode and can
	be omitted when connecting from a guest virtual machine to host.

# EXAMPLE

The following *waypipe ssh* subcommand will attempt to run *weston-flower*
on the server _exserv_, displaying the result on the local system.

```
waypipe ssh user@exserv weston-flower
```

One can obtain similar behavior by explicitly running waypipe and ssh:

```
waypipe --socket /tmp/socket-client client &
ssh -R /tmp/socket-server:/tmp/socket-client user@exserv \\
	waypipe --socket /tmp/socket-server server -- weston-flower
kill %1
```

Waypipe may be run locally without an SSH connection by specifying matching
socket paths. For example:

```
waypipe --socket /tmp/waypipe.sock client &
waypipe --socket /tmp/waypipe.sock server weston-simple-dmabuf-egl
kill %1
rm /tmp/waypipe.sock
```

Using transports other than SSH is a bit more complicated.
A recipe with ncat to connect to _remote_ from computer _local_:

```
$ waypipe --socket /tmp/waypipe-remote.sock client &
$ ncat --ssl -lk 12345 --sh-exec 'ncat -U /tmp/waypipe-remote.sock' &
$ ssh user@remote
> ncat -lkU /tmp/waypipe-local.sock --sh-exec 'ncat --ssl local 12345' &
> waypipe --display wayland-local \\
	--socket /tmp/waypipe-local.sock server -- sleep inf &
> WAYLAND_DISPLAY=wayland-local application
```

Given a certificate file, socat can also provide an encrypted connection
(remove 'verify=0' to check certificates):

```
$ waypipe --socket /tmp/waypipe-remote.sock client &
$ socat openssl-listen:12345,reuseaddr,cert=certificate.pem,verify=0,fork \\
	unix-connect:/tmp/waypipe-remote.sock
$ ssh user@remote
> socat unix-listen:/tmp/waypipe-local.sock,reuseaddr,fork \\
	openssl-connect:local:12345,verify=0 &
> waypipe --socket /tmp/waypipe-local.sock server -- application
```

Many applications require specific environment variables to use Wayland
instead of X11. If ssh isn't configured to support loading
_~/.ssh/environment_, or to allow specific variables to be set with
_AcceptEnv_/_SetEnv_, one can run *waypipe ssh* without a command (and
thereby open a login shell), or use *env* to set the needed variables each
time:

```
waypipe ssh user@host env XDG_SESSION_TYPE=wayland dolphin
```

In some cases, one may wish to set environment variables for the *waypipe
server* process itself; the above trick with *env* will not do this, because
the *env* process will be a child of *waypipe server*, not the other way
around. Instead, one can use _~/.ssh/environment_, or use the *--remote-bin*
option to change the remote Waypipe instance to a shell script that sets the
environment before running the actual *waypipe* program.

Waypipe has support for reconnecting a *waypipe client* and a *waypipe
server* instance when whatever was used to transfer data between their
sockets fails. For this to work, waypipe must still be running on both sides
of the connection.
As the *waypipe ssh* wrapper will automatically close both the *waypipe
client* and the *waypipe server* when the connection fails, the client and
server modes must be run separately. For example, to persistently forward
applications running on server _rserv_ to a local Wayland compositor running
on _lserv_, one would first set up a waypipe client instance on _lserv_,

```
waypipe -s /tmp/waypipe.sock client &
```

and on server _rserv_, establish socket forwarding and run the server

```
ssh -fN -L /tmp/waypipe-lserv.sock:/tmp/waypipe.sock user@lserv
waypipe -s /tmp/waypipe-lserv.sock --control /tmp/ctrl-lserv.pipe \\
	--display wayland-lserv server -- sleep inf &
```

then set _WAYLAND_DISPLAY=wayland-lserv_ and run the desired applications.
When the ssh forwarding breaks, on _rserv_, reconnect with

```
ssh -fN -L /tmp/waypipe-lserv-2.sock:/tmp/waypipe.sock user@lserv
waypipe recon /tmp/ctrl-lserv.pipe /tmp/waypipe-lserv-2.sock
```

## Running waypipe in virtual machines

When running waypipe in virtual machines on the same host, it is possible to
use vsock for efficient inter-vm communication. The following scenarios are
supported:

- Running applications on host from guest.
```
host> waypipe --vsock -s 1234 client
guest> waypipe --vsock -s 1234 server weston-terminal
```
- Running applications in a guest virtual machine from host.
```
guest> waypipe --vsock -s 1234 client
host> waypipe --vsock -s 3:1234 server weston-terminal
```
In this example waypipe server connects to a virtual machine with CID 3 on
port 1234.
- Running applications in a guest virtual machine from other guest virtual
machines.
When running both client and server in virtual machines, it is possible to
enable the VMADDR_FLAG_TO_HOST flag for sibling communication by prefixing
the CID with an s:
```
guest> waypipe --vsock -s 1234 client
guest> waypipe --vsock -s s3:1234 server weston-terminal
```
In this case all packets will be routed to host where they can be forwarded
to another virtual machine with a vhost-device-vsock device or some other
utility.

# ENVIRONMENT

When running as a server, by default _WAYLAND_DISPLAY_ will be set for the
invoked process.

If the *--oneshot* flag is set, waypipe will instead set _WAYLAND_SOCKET_
and inherit an already connected socketpair file descriptor to the invoked
(child) process. Some programs open and close a Wayland connection
repeatedly as part of their initialization, and will not work correctly with
this flag.

# EXIT STATUS

*waypipe ssh* will exit with the exit status code from the remote command,
or with return code 1 if there has been an error.

# SECURITY

Waypipe does not provide any strong security guarantees, and connecting to
untrusted servers is not recommended. It does not filter which Wayland
protocols the compositor makes available to the client (with a few
exceptions for protocols that require file descriptors which Waypipe cannot
yet handle). For example, if a Wayland compositor gives all its clients
access to a screenshot or lock-screen protocol, then proxied clients run
under Waypipe can also make screenshots or lock the screen. In general,
applications are not well tested against malicious compositors, and
compositors are not well tested against malicious clients. Waypipe can
connect the two, and may blindly forward denial-of-service and other
attacks.

Waypipe itself is written in C and links to compression, graphics, and video
libraries; both it and these libraries may have security bugs.
Some risk can be avoided by building Waypipe with DMABUF support turned off,
or running Waypipe with the *--no-gpu* flag so that it does not expose
graphics libraries.

*waypipe ssh* has no explicit protections against timing attacks; an
observer to the resulting network traffic may, by studying the size and
timing of packets, learn information about the user's interaction with a
Wayland client proxied through *waypipe ssh*. For example: a lack of
activity suggests the user is not currently using the application, while an
intermittent stream of messages from the compositor to the client may
indicate mouse movement (or maybe something else: the contents of the
messages are protected by *ssh*.)

The memory used by Waypipe processes may, at a given time, include Wayland
messages encoding user input, and the contents of current and recent frames
drawn for application windows. Swap should be encrypted to prevent this data
from being leaked to disk.

# BUGS

File bug reports at: https://gitlab.freedesktop.org/mstoeckl/waypipe/

Some programs (gnome-terminal, firefox, kate, among others) have special
mechanisms to ensure that only one process is running at a time. Starting
those programs under Waypipe while they are running under a different
Wayland compositor may silently open a window or tab in the original
instance of the program. Such programs may have a command line argument to
create a new instance.

# SEE ALSO

*weston*(1), *ssh*(1), *socat*(1), *ncat*(1)
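As a worked check of the *--video=bpf* arithmetic from the OPTIONS section: the option is specified in bits per frame, so the channel bandwidth a video stream targets follows from the per-frame budget times the frame rate. The 60 Hz rate below is an assumption for illustration (the actual rate depends on how often the application redraws), not something waypipe fixes:

```shell
# Bandwidth implied by the default bits-per-frame budget (--video=bpf=120000),
# assuming the application redraws at 60 frames per second.
bpf=120000   # default bits per frame
fps=60       # assumed refresh rate (illustrative only)
bits_per_sec=$((bpf * fps))
echo "$bits_per_sec bits/s"              # 7200000 bits/s
echo "$((bits_per_sec / 8 / 1000)) KB/s" # 900 KB/s
```

Under this assumption, a single full-rate video stream at the default setting targets roughly 0.9 MB/s before any other traffic, which is why *bpf* is the knob to raise on fast links or lower on constrained ones.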