obs-studio-32.1.0-sources/COPYING

GNU GENERAL PUBLIC LICENSE
Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have.
You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

The precise terms and conditions for copying, distribution and modification follow.

GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not covered by this License; they are outside its scope.
The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.

1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.

You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.

2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:

a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.

b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.

c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License.
(Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.

In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.

3.
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License.
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

9.
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

Copyright (C)

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.

, 1 April 1989
Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.
obs-studio-32.1.0-sources/.clang-format

# please use clang-format version 16 or later
Standard: c++17
AccessModifierOffset: -8
AlignAfterOpenBracket: Align
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
AlignEscapedNewlines: Left
AlignOperands: true
AlignTrailingComments: true
AllowAllArgumentsOnNextLine: false
AllowAllConstructorInitializersOnNextLine: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: Inline
AllowShortIfStatementsOnASingleLine: false
AllowShortLambdasOnASingleLine: Inline
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
AlwaysBreakTemplateDeclarations: false
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
  AfterClass: false
  AfterControlStatement: false
  AfterEnum: false
  AfterFunction: true
  AfterNamespace: false
  AfterObjCDeclaration: false
  AfterStruct: false
  AfterUnion: false
  AfterExternBlock: false
  BeforeCatch: false
  BeforeElse: false
  IndentBraces: false
  SplitEmptyFunction: true
  SplitEmptyRecord: true
  SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Custom
BreakBeforeTernaryOperators: true
BreakConstructorInitializers: BeforeColon
BreakStringLiterals: false # apparently unpredictable
ColumnLimit: 120
CompactNamespaces: false
ConstructorInitializerAllOnOneLineOrOnePerLine: true
ConstructorInitializerIndentWidth: 8
ContinuationIndentWidth: 8
Cpp11BracedListStyle: true
DerivePointerAlignment: false
DisableFormat: false
FixNamespaceComments: true
ForEachMacros:
  - 'json_object_foreach'
  - 'json_object_foreach_safe'
  - 'json_array_foreach'
  - 'HASH_ITER'
IncludeBlocks: Preserve
IndentCaseLabels: false
IndentPPDirectives: None
IndentWidth: 8
IndentWrappedFunctionNames: false
KeepEmptyLinesAtTheStartOfBlocks: true
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBinPackProtocolList: Auto
ObjCBlockIndentWidth: 8
ObjCSpaceAfterProperty: true
ObjCSpaceBeforeProtocolList: true
PenaltyBreakAssignment: 10
PenaltyBreakBeforeFirstCallParameter: 30
PenaltyBreakComment: 10
PenaltyBreakFirstLessLess: 0
PenaltyBreakString: 10
PenaltyExcessCharacter: 100
PenaltyReturnTypeOnItsOwnLine: 60
PointerAlignment: Right
ReflowComments: false
SkipMacroDefinitionBody: true
SortIncludes: false
SortUsingDeclarations: false
SpaceAfterCStyleCast: false
SpaceAfterLogicalNot: false
SpaceAfterTemplateKeyword: false
SpaceBeforeAssignmentOperators: true
SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInCStyleCastParentheses: false
SpacesInContainerLiterals: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
StatementMacros:
  - 'Q_OBJECT'
TabWidth: 8
TypenameMacros:
  - 'DARRAY'
UseTab: ForContinuationAndIndentation
---
Language: ObjC
AccessModifierOffset: 2
AlignArrayOfStructures: Right
AlignConsecutiveAssignments: None
AlignConsecutiveBitFields: None
AlignConsecutiveDeclarations: None
AlignConsecutiveMacros:
  Enabled: true
  AcrossEmptyLines: false
  AcrossComments: true
AllowShortBlocksOnASingleLine: Never
AllowShortEnumsOnASingleLine: false
AllowShortFunctionsOnASingleLine: Empty
AllowShortIfStatementsOnASingleLine: Never
AllowShortLambdasOnASingleLine: None
AttributeMacros: ['__unused', '__autoreleasing', '_Nonnull', '__bridge']
BitFieldColonSpacing: Both
#BreakBeforeBraces: Webkit
BreakBeforeBraces: Custom
BraceWrapping:
  AfterCaseLabel: false
  AfterClass: true
  AfterControlStatement: Never
  AfterEnum: false
  AfterFunction: true
  AfterNamespace: false
  AfterObjCDeclaration: false
  AfterStruct: false
  AfterUnion: false
  AfterExternBlock: false
  BeforeCatch: false
  BeforeElse: false
  BeforeLambdaBody: false
  BeforeWhile: false
  IndentBraces: false
  SplitEmptyFunction: false
  SplitEmptyRecord: false
  SplitEmptyNamespace: true
BreakAfterAttributes: Never
BreakArrays: false
BreakBeforeConceptDeclarations: Allowed
BreakBeforeInlineASMColon: OnlyMultiline
BreakConstructorInitializers: AfterColon
BreakInheritanceList: AfterComma
ColumnLimit: 120
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
EmptyLineAfterAccessModifier: Never
EmptyLineBeforeAccessModifier: LogicalBlock
ExperimentalAutoDetectBinPacking: false
FixNamespaceComments: true
IndentAccessModifiers: false
IndentCaseBlocks: false
IndentCaseLabels: true
IndentExternBlock: Indent
IndentGotoLabels: false
IndentRequiresClause: true
IndentWidth: 4
IndentWrappedFunctionNames: true
InsertBraces: false
InsertNewlineAtEOF: true
KeepEmptyLinesAtTheStartOfBlocks: false
LambdaBodyIndentation: Signature
NamespaceIndentation: All
ObjCBinPackProtocolList: Auto
ObjCBlockIndentWidth: 4
ObjCBreakBeforeNestedBlockParam: false
ObjCSpaceAfterProperty: true
ObjCSpaceBeforeProtocolList: true
PPIndentWidth: -1
PackConstructorInitializers: NextLine
QualifierAlignment: Leave
ReferenceAlignment: Right
RemoveSemicolon: false
RequiresClausePosition: WithPreceding
RequiresExpressionIndentation: OuterScope
SeparateDefinitionBlocks: Leave
ShortNamespaceLines: 1
SortIncludes: false
#SortUsingDeclarations: LexicographicNumeric
SortUsingDeclarations: true
SpaceAfterCStyleCast: true
SpaceAfterLogicalNot: false
SpaceAroundPointerQualifiers: Default
SpaceBeforeCaseColon: false
SpaceBeforeCpp11BracedList: true
SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceBeforeRangeBasedForLoopColon: true
SpaceBeforeSquareBrackets: false
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInConditionalStatement: false
SpacesInLineCommentPrefix:
  Minimum: 1
  Maximum: -1
Standard: c++17
TabWidth: 4
UseTab: Never
obs-studio-32.1.0-sources/CONTRIBUTING.rst

Contributing
============

Quick Links for Contributing
----------------------------

- Compiling and building OBS Studio: https://github.com/obsproject/obs-studio/wiki/Install-Instructions
- Our bug tracker: https://github.com/obsproject/obs-studio/issues
- Discord server: https://obsproject.com/discord
- Development chat: #development on the Discord server (see above)
- Development forum: https://obsproject.com/forum/list/general-development.21/
- Developer/API documentation: https://obsproject.com/docs
- To contribute language translations, do not make pull requests. Instead, use Crowdin. Read here for more information: https://github.com/obsproject/obs-studio/wiki/How-To-Contribute-Translations-For-OBS
- To add a new service to OBS Studio, please see the service submission guidelines: https://github.com/obsproject/obs-studio/wiki/Service-Submission-Guidelines

General Guidelines
------------------

- The OBS Project uses English as a common language. Please ensure that any submissions have at least machine-translated English descriptions and titles.
- Templates for pull requests and issues must be properly filled out. Failure to do so may result in your PR or issue being closed. The templates request the bare minimum amount of information required for us to process them.
- Contributors to the OBS Project are expected to abide by the OBS Project Code of Conduct: https://github.com/obsproject/obs-studio/blob/master/COC.rst

Coding Guidelines
-----------------

- OBS Studio uses kernel normal form (Linux variant). For more information, please read here: https://github.com/torvalds/linux/blob/master/Documentation/process/coding-style.rst
- Avoid trailing spaces. To view trailing spaces before making a commit, use "git diff" on your changes. If colors are enabled for git in the command prompt, it will show any whitespace issues marked in red.
- Tabs for indentation, spaces for alignment. Tabs are treated as 8 columns wide.
- 120 columns max.
- Comments and names of variables/functions/etc. must be in English.
- Formatting scripts (macOS/Linux only) are available `here <./build-aux>`__.

Commit Guidelines
-----------------

- OBS Studio uses the 50/72 standard for commits: 50 characters max for the title (excluding the module prefix), an empty line, and then a full description of the commit, wrapped to 72 columns max. See this link for more information: http://chris.beams.io/posts/git-commit/
- Make sure commit titles are always in present tense and are not followed by punctuation.
- Prefix each commit title with the module name, followed by a colon and a space (unless modifying a file in the base directory). After that, the first word should be capitalized. For example, if you are modifying the obs-ffmpeg plugin::

      obs-ffmpeg: Fix bug with audio output

  Or for libobs::

      libobs: Fix source not displaying

  Note: When modifying cmake modules, just prefix with "cmake".
- If you still need examples, please view the commit history.
- Commit titles and descriptions must be in English.

AI/Machine Learning Policy
--------------------------

AI/machine learning systems such as those based on the GPT family (Copilot, ChatGPT, etc.) are prone to generating plausible-sounding but wrong code that makes incorrect assumptions about OBS internals or the APIs it interfaces with. This means code generated by such systems will require human review and is likely to require human intervention. If the submitter is unable to undertake that work themselves due to a lack of understanding of the OBS codebase and/or programming, the submission has a high likelihood of being invalid. Such invalid submissions end up taking maintainers' time to review and respond away from legitimate submissions.
Additionally, such systems have been demonstrated to reproduce code contained in the training data, which may have been originally published under a license that would prohibit its inclusion in OBS.

Because of the above concerns, we have opted to take the following policy towards submissions with regard to the use of these AI tools:

- Submissions created largely or entirely by AI systems are not allowed.
- The use of GitHub Copilot and other assistive AI technologies is heavily discouraged.
- Low-effort or incorrect submissions that are determined to have been generated by, or created with the aid of, such systems may lead to a ban from contributing to the repository or project as a whole.

obs-studio-32.1.0-sources/CMakePresets.json

{
  "version": 8,
  "cmakeMinimumRequired": {"major": 3, "minor": 28, "patch": 0},
  "configurePresets": [
    {
      "name": "environmentVars",
      "hidden": true,
      "cacheVariables": {
        "RESTREAM_CLIENTID": {"type": "STRING", "value": "$penv{RESTREAM_CLIENTID}"},
        "RESTREAM_HASH": {"type": "STRING", "value": "$penv{RESTREAM_HASH}"},
        "TWITCH_CLIENTID": {"type": "STRING", "value": "$penv{TWITCH_CLIENTID}"},
        "TWITCH_HASH": {"type": "STRING", "value": "$penv{TWITCH_HASH}"},
        "YOUTUBE_CLIENTID": {"type": "STRING", "value": "$penv{YOUTUBE_CLIENTID}"},
        "YOUTUBE_CLIENTID_HASH": {"type": "STRING", "value": "$penv{YOUTUBE_CLIENTID_HASH}"},
        "YOUTUBE_SECRET": {"type": "STRING", "value": "$penv{YOUTUBE_SECRET}"},
        "YOUTUBE_SECRET_HASH": {"type": "STRING", "value": "$penv{YOUTUBE_SECRET_HASH}"}
      }
    },
    {
      "name": "dependencies",
      "hidden": true,
      "vendor": {
        "obsproject.com/obs-studio": {
          "dependencies": {
            "prebuilt": {
              "version": "2025-08-23",
              "baseUrl": "https://github.com/obsproject/obs-deps/releases/download",
              "label": "Pre-Built obs-deps",
              "hashes": {
                "macos-universal": "9403bb43fb0a9bb215739a5659ca274fe884dbbbcd22bd9ca781c961fb041c42",
                "windows-x64": "8de229cff6f1981508c0eb646b35e644633a5855787b9f5d3b90ae2aeb87ffc1",
                "windows-x86": "fb3c68b75911f292b3206e346053638db1c73605957207445a0a92b33ab5e00a",
                "windows-arm64": "dd87ba00a6cbc153182fb62b3678a3b5021d1d11eb2730442060937a645eb97e"
              }
            },
            "qt6": {
              "version": "2025-08-23",
              "baseUrl": "https://github.com/obsproject/obs-deps/releases/download",
              "label": "Pre-Built Qt6",
              "hashes": {
                "macos-universal": "990f11638b80a4509e14e8c315f6e4caa0861e37fcd3113a256fbff835ffca29",
                "windows-x64": "c62e82483bc7c0bf199e8ac3220c66a85a6e8a0cd69a05b6d44f873b830e415f",
                "windows-arm64": "cc8ec983de9b7d81aa98beeb1b989d707ee3c73b85b4d41c85d94114eba81f91"
              },
              "debugSymbols": {
                "windows-x64": "aae88a17e0211cb37db6a8602f2e20d69255be1f9700c699008ca5adbce1dde2",
                "windows-arm64": "6e866490277a8b29e82a87fc2f22407f93ddaf86444ea0d284370339a05511b3"
              }
            },
            "cef": {
              "version": "6533",
              "baseUrl": "https://cdn-fastly.obsproject.com/downloads",
              "label": "Chromium Embedded Framework",
              "hashes": {
                "macos-x86_64": "37bf7571a48c5dfa8519817e4a90a3503a0eb30f9eadd68f4c3e783e363f272a",
                "macos-arm64": "429b50e74f6c174dcfe2f14d8204b54add497eaafe117f7b69ce6bb2354d2626",
                "ubuntu-x86_64": "7963335519a19ccdc5233f7334c5ab023026e2f3e9a0cc417007c09d86608146",
                "ubuntu-aarch64": "642514469eaa29a5c887891084d2e73f7dc2d7405f7dfa7726b2dbc24b309999",
                "windows-x64": "922efbda1f2f8be9e5b2754d878a14d90afc81f04e94fc9101a7513e2b5cecc1",
                "windows-arm64": "df9df4bd85826b4c071c6db404fd59cf93efd9c58ec3ab64e204466ae19bb02a"
              },
              "revision": {
                "macos-x86_64": 5,
                "macos-arm64": 5,
                "ubuntu-x86_64": 6,
                "ubuntu-aarch64": 6,
                "windows-x64": 2
              }
            }
          },
          "tools": {
            "sparkle": {
              "version": "2.6.4",
              "baseUrl": "https://github.com/sparkle-project/Sparkle/releases/download",
              "label": "Sparkle 2",
              "hash": "50612a06038abc931f16011d7903b8326a362c1074dabccb718404ce8e585f0b"
            }
          }
        }
      }
    },
    {
      "name": "macos",
      "displayName": "macOS",
      "description": "Default macOS build (single architecture only)",
      "inherits": ["environmentVars"],
      "condition": {"type": "equals", "lhs": "${hostSystemName}", "rhs": "Darwin"},
      "generator": "Xcode",
      "binaryDir": "${sourceDir}/build_macos",
      "cacheVariables": {
        "CMAKE_OSX_DEPLOYMENT_TARGET": {"type": "STRING", "value": "12.0"},
        "OBS_CODESIGN_IDENTITY": {"type": "STRING", "value": "$penv{CODESIGN_IDENT}"},
        "OBS_CODESIGN_TEAM": {"type": "STRING", "value": "$penv{CODESIGN_TEAM}"},
        "OBS_PROVISIONING_PROFILE": {"type": "STRING", "value": "$penv{PROVISIONING_PROFILE}"},
        "VIRTUALCAM_DEVICE_UUID": {"type": "STRING", "value": "7626645E-4425-469E-9D8B-97E0FA59AC75"},
        "VIRTUALCAM_SINK_UUID": {"type": "STRING", "value": "A3F16177-7044-4DD8-B900-72E2419F7A9A"},
        "VIRTUALCAM_SOURCE_UUID": {"type": "STRING", "value": "A8D7B8AA-65AD-4D21-9C42-66480DBFA8E1"},
        "SPARKLE_APPCAST_URL": {"type": "STRING", "value": "https://obsproject.com/osx_update/updates_$(ARCHS)_v2.xml"},
        "SPARKLE_PUBLIC_KEY": {"type": "STRING", "value": "HQ5/Ba9VHOuEWaM0jtVjZzgHKFJX9YTl+HNVpgNF0iM="},
        "ENABLE_BROWSER": true
      }
    },
    {
      "name": "macos-ci",
      "displayName": "macOS (CI)",
      "description": "CI macOS build (single architecture only)",
      "inherits": ["macos"],
      "warnings": {"dev": true, "deprecated": true},
      "cacheVariables": {
        "CMAKE_COMPILE_WARNING_AS_ERROR": true,
        "CMAKE_XCODE_ATTRIBUTE_COMPILATION_CACHE_ENABLE_CACHING": "YES",
        "CMAKE_XCODE_ATTRIBUTE_COMPILATION_CACHE_CAS_PATH": "$penv{XCODE_CAS_PATH}"
      }
    },
    {
      "name": "ubuntu",
      "displayName": "Ubuntu",
      "description": "obs-studio for Ubuntu",
      "inherits": ["environmentVars"],
      "condition": {"type": "equals", "lhs": "${hostSystemName}", "rhs": "Linux"},
      "binaryDir": "${sourceDir}/build_ubuntu",
      "generator": "Ninja",
      "warnings": {"dev": true, "deprecated": true},
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "Debug",
        "CMAKE_INSTALL_LIBDIR": "lib/CMAKE_SYSTEM_PROCESSOR-linux-gnu",
        "ENABLE_AJA": false,
        "ENABLE_VLC": true,
        "ENABLE_WAYLAND": true,
        "ENABLE_WEBRTC": false
      }
    },
    {
      "name": "ubuntu-ci",
      "inherits": ["ubuntu"],
      "cacheVariables": {
        "CMAKE_BUILD_TYPE": "RelWithDebInfo",
        "CMAKE_COMPILE_WARNING_AS_ERROR": true,
        "CMAKE_COLOR_DIAGNOSTICS": true,
        "ENABLE_CCACHE": true
      }
    },
    {
      "name": "windows-x64",
      "displayName": "Windows x64",
      "description": "Default Windows build (x64)",
      "inherits": ["environmentVars"],
      "condition": {"type": "equals", "lhs": "${hostSystemName}", "rhs": "Windows"},
      "architecture": "x64,version=10.0.22621.0",
      "binaryDir": "${sourceDir}/build_x64",
      "generator": "Visual Studio 17 2022",
      "cacheVariables": {
        "GPU_PRIORITY_VAL": {"type": "STRING", "value": "$penv{GPU_PRIORITY_VAL}"},
        "VIRTUALCAM_GUID": {"type": "STRING", "value": "A3FCE0F5-3493-419F-958A-ABA1250EC20B"},
        "ENABLE_BROWSER": true
      }
    },
    {
      "name": "windows-ci-x64",
      "displayName": "Windows x64 (CI)",
      "description": "CI Windows build (x64)",
      "inherits": ["windows-x64"],
      "warnings": {"dev": true, "deprecated": true},
      "cacheVariables": {
        "CMAKE_COMPILE_WARNING_AS_ERROR": true
      }
    },
    {
      "name": "windows-arm64",
      "displayName": "Windows ARM64",
      "description": "Default Windows build (ARM64)",
      "inherits": ["environmentVars"],
      "condition": {"type": "equals", "lhs": "${hostSystemName}", "rhs": "Windows"},
      "architecture": "ARM64,version=10.0.22621.0",
      "binaryDir": "${sourceDir}/build_arm64",
      "generator": "Visual Studio 17 2022",
      "cacheVariables": {
        "GPU_PRIORITY_VAL": {"type": "STRING", "value": "$penv{GPU_PRIORITY_VAL}"},
        "VIRTUALCAM_GUID": {"type": "STRING", "value": "A3FCE0F5-3493-419F-958A-ABA1250EC20B"},
        "ENABLE_AJA": false,
        "ENABLE_BROWSER": true,
        "ENABLE_SCRIPTING": false,
        "ENABLE_VST": false
      }
    },
    {
      "name": "windows-ci-arm64",
      "displayName": "Windows ARM64 (CI)",
      "description": "CI Windows build (ARM64)",
      "inherits": ["windows-arm64"],
      "warnings": {"dev": true, "deprecated": true},
      "cacheVariables": {
        "CMAKE_COMPILE_WARNING_AS_ERROR": true
      }
    }
  ],
  "buildPresets": [
    {
      "name": "windows-x64",
      "configurePreset": "windows-x64",
      "displayName": "Windows 64-bit",
      "description": "Windows build for 64-bit (aka x64)",
      "configuration": "RelWithDebInfo"
    },
    {
      "name": "windows-arm64",
      "configurePreset":
"windows-arm64", "displayName": "Windows on ARM 64-bit", "description": "Windows build for ARM 64-bit (aka ARM64)", "configuration": "RelWithDebInfo" } ] } obs-studio-32.1.0-sources/.swift-format000644 001751 001751 00000001525 15153330235 020662 0ustar00runnerrunner000000 000000 { "version": 1, "lineLength": 120, "indentation": { "spaces": 4 }, "tabWidth": 4, "maximumBlankLines": 1, "respectsExistingLineBreaks": true, "lineBreakBeforeControlFlowKeywords": false, "lineBreakBeforeEachArgument": false, "lineBreakBeforeEachGenericRequirement": false, "lineBreakBetweenDeclarationAttributes": false, "prioritizeKeepingFunctionOutputTogether": false, "indentConditionalCompilationBlocks": true, "lineBreakAroundMultilineExpressionChainComponents": false, "fileScopedDeclarationPrivacy": {"accessLevel": "private"}, "indentSwitchCaseLabels": false, "spacesAroundRangeFormationOperators": false, "noAssignmentInExpressions": { "allowedFunctions" : ["XCTAssertNoThrow"] }, "multiElementCollectionTrailingCommas": true, "indentBlankLines": false, } obs-studio-32.1.0-sources/libobs-metal/000755 001751 001751 00000000000 15153330731 020607 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs-metal/libobs-metal-Bridging-Header.h000644 001751 001751 00000002331 15153330235 026241 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #import #import #import #import #import #import #import #import #import #import static const char *const device_name = "Metal"; static const char *const preprocessor_name = "_Metal"; obs-studio-32.1.0-sources/libobs-metal/metal-indexbuffer.swift000644 001751 001751 00000016465 15153330235 025301 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ import Foundation import Metal /// Creates a ``MetalIndexBuffer`` object to share with `libobs` and hold the provided indices /// /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - type: Size of each index value (16 bit or 32 bit) /// - indices: Opaque pointer to index buffer data set up by `libobs` /// - num: Count of vertices present at the memory address provided by the `indices` argument /// - flags: Bit field of `libobs` buffer flags /// - Returns: Opaque pointer to a retained ``MetalIndexBuffer`` instance if valid index type was provided, `nil` /// otherwise /// /// > Note: The ownership of the memory pointed to by `indices` is implicitly transferred to the ``MetalIndexBuffer`` /// instance, but is not managed by Swift. @_cdecl("device_indexbuffer_create") public func device_indexbuffer_create( device: UnsafeRawPointer, type: gs_index_type, indices: UnsafeMutableRawPointer, num: UInt32, flags: UInt32 ) -> OpaquePointer? { let device: MetalDevice = unretained(device) guard let indexType = type.mtlType else { return nil } let indexBuffer = MetalIndexBuffer( device: device, type: indexType, data: indices, count: Int(num), dynamic: (Int32(flags) & GS_DYNAMIC) != 0 ) return indexBuffer.getRetained() } /// Sets up a ``MetalIndexBuffer`` as the index buffer for the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - indexbuffer: Opaque pointer to ``MetalIndexBuffer`` instance shared with `libobs` /// /// > Note: The reference count of the ``MetalIndexBuffer`` instance will not be increased by this call. /// /// > Important: If a `nil` pointer is provided as the index buffer, the index buffer will be _unset_. @_cdecl("device_load_indexbuffer") public func device_load_indexbuffer(device: UnsafeRawPointer, indexbuffer: UnsafeRawPointer?) 
{ let device: MetalDevice = unretained(device) if let indexbuffer { device.renderState.indexBuffer = unretained(indexbuffer) } else { device.renderState.indexBuffer = nil } } /// Requests the deinitialization of a shared ``MetalIndexBuffer`` instance /// - Parameter indexBuffer: Opaque pointer to ``MetalIndexBuffer`` instance shared with `libobs` /// /// The deinitialization is handled automatically by Swift after the ownership of the instance has been transferred /// into the function and becomes the last strong reference to it. After the function leaves its scope, the object will /// be deinitialized and deallocated automatically. /// /// > Note: The index buffer data memory is implicitly owned by the ``MetalIndexBuffer`` instance and will be manually /// cleaned up and deallocated by the instance's `deinit` method. @_cdecl("gs_indexbuffer_destroy") public func gs_indexbuffer_destroy(indexBuffer: UnsafeRawPointer) { let _ = retained(indexBuffer) as MetalIndexBuffer } /// Requests the index buffer's current data to be transferred into GPU memory /// - Parameter indexBuffer: Opaque pointer to ``MetalIndexBuffer`` instance shared with `libobs` /// /// This function will call `gs_indexbuffer_flush_direct` with `nil` data pointer. @_cdecl("gs_indexbuffer_flush") public func gs_indexbuffer_flush(indexBuffer: UnsafeRawPointer) { gs_indexbuffer_flush_direct(indexBuffer: indexBuffer, data: nil) } /// Requests the index buffer to be updated with the provided data and then transferred into GPU memory /// - Parameters: /// - indexBuffer: Opaque pointer to ``MetalIndexBuffer`` instance shared with `libobs` /// - data: Opaque pointer to index buffer data set up by `libobs` /// /// This function is called to ensure that the index buffer data that is contained in the memory pointed at by the /// `data` argument is uploaded into GPU memory. If a `nil` pointer is provided instead, the data provided to the /// instance during creation will be used instead. 
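The `unretained(_:)` and `retained(_:)` helpers called by these entry points wrap Swift's `Unmanaged` API: `getRetained()` hands `libobs` a +1 reference, borrowing entry points such as `device_load_indexbuffer` take it back without consuming it, and `gs_indexbuffer_destroy` consumes it so the object deinitializes. A minimal self-contained sketch of that ownership scheme (the generic helper signatures and the `Dummy` class are illustrative assumptions, not the project's actual code):

```swift
/// Borrow a class instance from an opaque pointer without touching its reference
/// count (the pattern used by the loading and getter entry points).
func unretained<T: AnyObject>(_ pointer: UnsafeRawPointer) -> T {
    Unmanaged<T>.fromOpaque(pointer).takeUnretainedValue()
}

/// Take back ownership of the +1 reference that `passRetained` created
/// (the pattern used by the destroy entry points).
func retained<T: AnyObject>(_ pointer: UnsafeRawPointer) -> T {
    Unmanaged<T>.fromOpaque(pointer).takeRetainedValue()
}

final class Dummy {
    static var liveInstances = 0
    init() { Dummy.liveInstances += 1 }
    deinit { Dummy.liveInstances -= 1 }
}

func demo() {
    // Equivalent of getRetained(): the instance now carries a +1 reference
    // conceptually owned by the C caller.
    let opaque = UnsafeRawPointer(Unmanaged.passRetained(Dummy()).toOpaque())

    // Borrowing does not change the reference count; the object stays alive.
    let borrowed: Dummy = unretained(opaque)
    precondition(Dummy.liveInstances == 1)
    _ = borrowed

    // Consuming balances the earlier retain; once the last strong reference
    // disappears, Swift runs deinit automatically.
    let _ = retained(opaque) as Dummy
}

demo()
```

After `demo()` returns, every strong reference is gone and `Dummy.liveInstances` is back to zero, mirroring how `gs_indexbuffer_destroy` triggers deinitialization once the transferred reference leaves the function's scope.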
@_cdecl("gs_indexbuffer_flush_direct")
public func gs_indexbuffer_flush_direct(indexBuffer: UnsafeRawPointer, data: UnsafeMutableRawPointer?) {
    let indexBuffer: MetalIndexBuffer = unretained(indexBuffer)

    indexBuffer.setupBuffers(data)
}

/// Returns an opaque pointer to the index buffer data associated with the ``MetalIndexBuffer`` instance
/// - Parameter indexBuffer: Opaque pointer to ``MetalIndexBuffer`` instance shared with `libobs`
/// - Returns: Opaque pointer to index buffer data in memory
///
/// The returned opaque pointer represents the unchanged memory address that was provided for the creation of the index
/// buffer object.
///
/// > Warning: There is only limited memory safety associated with this pointer. It is implicitly owned and its
/// lifetime is managed by the ``MetalIndexBuffer`` instance, but it was originally created by `libobs`.
@_cdecl("gs_indexbuffer_get_data")
public func gs_indexbuffer_get_data(indexBuffer: UnsafeRawPointer) -> UnsafeMutableRawPointer? {
    let indexBuffer: MetalIndexBuffer = unretained(indexBuffer)

    return indexBuffer.indexData
}

/// Returns the number of indices associated with the ``MetalIndexBuffer`` instance
/// - Parameter indexBuffer: Opaque pointer to ``MetalIndexBuffer`` instance shared with `libobs`
/// - Returns: Number of indices
///
/// > Note: This returns the same number that was provided for the creation of the index buffer object.
@_cdecl("gs_indexbuffer_get_num_indices") public func gs_indexbuffer_get_num_indices(indexBuffer: UnsafeRawPointer) -> UInt32 { let indexBuffer: MetalIndexBuffer = unretained(indexBuffer) return UInt32(indexBuffer.count) } /// Gets the index buffer type as a `libobs` enum value /// - Parameter indexBuffer: Opaque pointer to ``MetalIndexBuffer`` instance shared with `libobs` /// - Returns: Index buffer type as identified by the `gs_index_type` enum /// /// > Warning: As the `gs_index_type` enumeration does not provide an "invalid" value (and thus `0` becomes a valid /// value), this function has no way to communicate an incompatible index buffer type that might be introduced at a /// later point. @_cdecl("gs_indexbuffer_get_type") public func gs_indexbuffer_get_type(indexBuffer: UnsafeRawPointer) -> gs_index_type { let indexBuffer: MetalIndexBuffer = unretained(indexBuffer) switch indexBuffer.type { case .uint16: return GS_UNSIGNED_SHORT case .uint32: return GS_UNSIGNED_LONG @unknown default: assertionFailure("gs_indexbuffer_get_type: Unsupported index buffer type \(indexBuffer.type)") return GS_UNSIGNED_SHORT } } obs-studio-32.1.0-sources/libobs-metal/MTLOrigin+Extensions.swift000644 001751 001751 00000002061 15153330235 025622 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ import Foundation import Metal extension MTLOrigin: @retroactive Equatable { public static func == (lhs: MTLOrigin, rhs: MTLOrigin) -> Bool { lhs.x == rhs.x && lhs.y == rhs.y && lhs.z == rhs.z } } obs-studio-32.1.0-sources/libobs-metal/MetalBuffer.swift000644 001751 001751 00000027175 15153330235 024074 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal enum MetalBufferType { case vertex case index } /// The MetalBuffer class serves as the super class for both vertex and index buffer objects. /// /// It provides convenience functions to pass buffer instances as retained and unretained opaque pointers and provides /// a generic buffer factory method. 
class MetalBuffer {
    enum BufferDataType {
        case vertex
        case normal
        case tangent
        case color
        case texcoord
    }

    private let device: MTLDevice
    fileprivate let isDynamic: Bool

    init(device: MetalDevice, isDynamic: Bool) {
        self.device = device.device
        self.isDynamic = isDynamic
    }

    /// Creates a new buffer with the provided data or updates an existing buffer with the provided data
    /// - Parameters:
    ///   - buffer: Reference to a buffer variable to either receive the new buffer or provide an existing buffer
    ///   - data: Pointer to raw data of provided type `T`
    ///   - count: Number of elements of type `T` to be written into the buffer
    ///   - dynamic: `true` if underlying buffer is dynamically updated for each frame, `false` otherwise.
    ///
    /// > Note: Some sources (like the `text-freetype2` source) generate "dynamic" buffers but don't update them at
    /// every frame and instead treat them as "static" buffers. For this reason `MTLBuffer` objects have to be cached
    /// and re-used per `MetalBuffer` instance and cannot be dynamically provided from a pool of buffers of a `MTLHeap`.
    fileprivate func createOrUpdateBuffer<T>(buffer: inout MTLBuffer?, data: UnsafeMutablePointer<T>, count: Int, dynamic: Bool) {
        let size = MemoryLayout<T>.size * count
        let alignedSize = (size + 15) & ~15

        if buffer != nil {
            if dynamic && buffer!.length == alignedSize {
                buffer!.contents().copyMemory(from: data, byteCount: size)
                return
            }
        }

        buffer = device.makeBuffer(
            bytes: data, length: alignedSize, options: [.cpuCacheModeWriteCombined, .storageModeShared])
    }

    /// Gets an opaque pointer for the ``MetalBuffer`` instance and increases its reference count by one
    /// - Returns: `OpaquePointer` to class instance
    ///
    /// > Note: Use this method when the instance is to be shared via an `OpaquePointer` and needs to be retained. Any
    /// opaque pointer shared this way needs to be converted into a retained reference again to ensure automatic
    /// deinitialization by the Swift runtime.
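Two details of `createOrUpdateBuffer` above are worth isolating: the buffer length is rounded up to the next 16-byte boundary with `(size + 15) & ~15`, and a dynamic buffer is only reused in place when its existing length matches that aligned size. The rounding itself can be checked in isolation (`alignedLength` is an illustrative name, not part of the renderer):

```swift
/// Round a byte count up to the next multiple of 16, matching the
/// `(size + 15) & ~15` computation used for MTLBuffer allocations above.
func alignedLength(forByteCount size: Int) -> Int {
    (size + 15) & ~15
}

let sizes = [0, 1, 15, 16, 17, 100]
let aligned = sizes.map(alignedLength(forByteCount:))
// aligned == [0, 16, 16, 16, 32, 112]
```

Because reuse requires an exact length match, a "dynamic" update whose aligned size differs from the cached buffer's length falls through to a fresh `makeBuffer` allocation instead.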
func getRetained() -> OpaquePointer { let retained = Unmanaged.passRetained(self).toOpaque() return OpaquePointer(retained) } /// Gets an opaque pointer for the ``MetalBuffer`` instance without increasing its reference count /// - Returns: `OpaquePointer` to class instance func getUnretained() -> OpaquePointer { let unretained = Unmanaged.passUnretained(self).toOpaque() return OpaquePointer(unretained) } } final class MetalVertexBuffer: MetalBuffer { public var vertexData: UnsafeMutablePointer? private var points: MTLBuffer? private var normals: MTLBuffer? private var tangents: MTLBuffer? private var vertexColors: MTLBuffer? private var uvCoordinates: [MTLBuffer?] init(device: MetalDevice, data: UnsafeMutablePointer, dynamic: Bool) { self.vertexData = data self.uvCoordinates = Array(repeating: nil, count: data.pointee.num_tex) super.init(device: device, isDynamic: dynamic) if !dynamic { setupBuffers() } } /// Sets up buffer objects for the data provided in the provided `gs_vb_data` structure /// - Parameter data: Pointer to a `gs_vb_data` instance /// /// The provided `gs_vb_data` instance is expected to: /// * Always contain vertex data /// * Optionally contain normals data /// * Optionally contain tangents data /// * Optionally contain color data /// * Optionally contain either 2 or 4 texture coordinates per vertex /// /// > Note: The color data needs to be converted from the packed UInt32 format used by `libobs` into a normalized /// vector of Float32 values as Metal does not support implicit conversion of these types when vertex data is /// provided in a single buffer to a vertex shader. public func setupBuffers(data: UnsafeMutablePointer? = nil) { guard let data = data ?? 
self.vertexData else { assertionFailure("MetalBuffer: Unable to create MTLBuffers without vertex data") return } let numVertices = data.pointee.num createOrUpdateBuffer(buffer: &points, data: data.pointee.points, count: numVertices, dynamic: isDynamic) #if DEBUG points?.label = "Vertex buffer points data" #endif if let normalsData = data.pointee.normals { createOrUpdateBuffer(buffer: &normals, data: normalsData, count: numVertices, dynamic: isDynamic) #if DEBUG normals?.label = "Vertex buffer normals data" #endif } if let tangentsData = data.pointee.tangents { createOrUpdateBuffer(buffer: &tangents, data: tangentsData, count: numVertices, dynamic: isDynamic) #if DEBUG tangents?.label = "Vertex buffer tangents data" #endif } if let colorsData = data.pointee.colors { var unpackedColors = [SIMD4]() unpackedColors.reserveCapacity(4) for i in 0..(start: $0, count: 4) let color = SIMD4( x: Float(colorValues[0]) / 255.0, y: Float(colorValues[1]) / 255.0, z: Float(colorValues[2]) / 255.0, w: Float(colorValues[3]) / 255.0 ) unpackedColors.append(color) } } unpackedColors.withUnsafeMutableBufferPointer { createOrUpdateBuffer( buffer: &vertexColors, data: $0.baseAddress!, count: numVertices, dynamic: isDynamic) } #if DEBUG vertexColors?.label = "Vertex buffer colors data" #endif } guard data.pointee.num_tex > 0 else { return } let textureVertices = UnsafeMutableBufferPointer( start: data.pointee.tvarray, count: data.pointee.num_tex) for (textureSlot, textureVertex) in textureVertices.enumerated() { textureVertex.array.withMemoryRebound(to: Float32.self, capacity: textureVertex.width * numVertices) { createOrUpdateBuffer( buffer: &uvCoordinates[textureSlot], data: $0, count: textureVertex.width * numVertices, dynamic: isDynamic) } #if DEBUG uvCoordinates[textureSlot]?.label = "Vertex buffer texture uv data (texture slot \(textureSlot))" #endif } } /// Gets a collection of all ` MTLBuffer` objects created for the vertex data contained in the ``MetalBuffer``. 
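The note above explains that `libobs` delivers vertex colors as packed 32-bit values, while the Metal vertex shaders expect normalized `Float32` vectors. Lifted out of the buffer plumbing, the per-channel conversion looks roughly like this (a sketch assuming one byte per channel in memory order; the function name and byte order are illustrative, not the renderer's exact code):

```swift
/// Unpack one packed 32-bit color (one byte per channel) into a normalized
/// SIMD4<Float32>, dividing each channel by 255 as setupBuffers does.
func unpackColor(_ packed: UInt32) -> SIMD4<Float32> {
    // littleEndian pins the in-memory byte order regardless of host platform.
    let bytes = withUnsafeBytes(of: packed.littleEndian) { Array($0) }
    return SIMD4<Float32>(
        Float32(bytes[0]) / 255.0,
        Float32(bytes[1]) / 255.0,
        Float32(bytes[2]) / 255.0,
        Float32(bytes[3]) / 255.0
    )
}

let opaqueWhite = unpackColor(0xFFFF_FFFF)  // SIMD4<Float32>(1, 1, 1, 1)
```

Doing this on the CPU avoids relying on implicit integer-to-float conversion in the vertex stage, which Metal does not perform for packed color data delivered this way.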
/// - Parameter shader: ``MetalShader`` instance for which the buffers will be used /// - Returns: Array for `MTLBuffer`s in the order required by the shader /// /// > Important: To ensure that the data in the buffers is aligned with the structures declared in the shaders, /// each ``MetalShader`` provides a "buffer order". The corresponding collection will contain the associated /// ``MTLBuffer`` objects in this order. public func getShaderBuffers(for shader: MetalShader) -> [MTLBuffer] { var bufferList = [MTLBuffer]() for bufferType in shader.bufferOrder { switch bufferType { case .vertex: if let points { bufferList.append(points) } case .normal: if let normals { bufferList.append(normals) } case .tangent: if let tangents { bufferList.append(tangents) } case .color: if let vertexColors { bufferList.append(vertexColors) } case .texcoord: guard shader.textureCount == uvCoordinates.count else { assertionFailure( "MetalBuffer: Amount of available texture uv coordinates not sufficient for vertex shader") break } for i in 0.. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import AppKit import CoreVideo import Foundation import Metal class OBSSwapChain { enum ColorRange { case sdr case hdrPQ case hdrHLG } private weak var device: MetalDevice? private var view: NSView? 
var colorRange: ColorRange var edrHeadroom: CGFloat = 0.0 let layer: CAMetalLayer var renderTarget: MetalTexture? var viewSize: MTLSize var fence: MTLFence var discard: Bool = false init?(device: MetalDevice, size: MTLSize, colorSpace: gs_color_format) { self.device = device self.viewSize = size self.layer = CAMetalLayer() self.layer.framebufferOnly = false self.layer.device = device.device self.layer.drawableSize = CGSize(width: viewSize.width, height: viewSize.height) self.layer.pixelFormat = .bgra8Unorm_srgb self.layer.colorspace = CGColorSpace(name: CGColorSpace.sRGB) self.layer.wantsExtendedDynamicRangeContent = false self.layer.edrMetadata = nil self.layer.displaySyncEnabled = false self.colorRange = .sdr guard let fence = device.device.makeFence() else { return nil } self.fence = fence } /// Updates the provided view to use the `CAMetalLayer` managed by the ``OBSSwapChain`` /// - Parameter view: `NSView` instance to update /// /// > Important: This function has to be called from the main thread @MainActor func updateView(_ view: NSView) { self.view = view view.layer = self.layer view.wantsLayer = true updateEdrHeadroom() } /// Updates the EDR headroom value on the ``OBSSwapChain`` with the value from the screen the managed `NSView` is /// associated with. /// /// This is necessary to ensure that the projector uses the appropriate SDR or EDR output depending on the screen /// the view is on. 
@MainActor func updateEdrHeadroom() { guard let view = self.view else { return } if let screen = view.window?.screen { self.edrHeadroom = screen.maximumPotentialExtendedDynamicRangeColorComponentValue } else { self.edrHeadroom = CGFloat(1.0) } } /// Resizes the drawable of the managed `CAMetalLayer` to the provided size /// - Parameter size: Desired new size of the drawable /// /// This is usually achieved via a delegate method directly on the associated `NSView` instance, but because the /// view is managed by Qt, the resize event is routed manually into the ``OBSSwapChain`` instance by `libobs`. func resize(_ size: MTLSize) { guard viewSize.width != size.width || viewSize.height != size.height else { return } viewSize = size layer.drawableSize = CGSize( width: viewSize.width, height: viewSize.height) renderTarget = nil } /// Gets an opaque pointer for the ``OBSSwapChain`` instance and increases its reference count by one /// - Returns: `OpaquePointer` to class instance /// /// > Note: Use this method when the instance is to be shared via an `OpaquePointer` and needs to be retained. Any /// opaque pointer shared this way needs to be converted into a retained reference again to ensure automatic /// deinitialization by the Swift runtime. 
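`updateEdrHeadroom` above stores the screen's maximum potential EDR headroom so projector output can fall back to SDR when the display offers no extended range; AppKit reports `1.0` for such displays, which is also the fallback used when no screen is available. The decision itself reduces to a threshold check. The enum, function name, and cutoff below are assumptions for illustration only:

```swift
import Foundation

enum OutputMode { case sdr, edr }

/// A headroom of 1.0 means the display cannot show values beyond SDR white;
/// anything larger indicates usable extended dynamic range.
func outputMode(forHeadroom headroom: CGFloat) -> OutputMode {
    headroom > 1.0 ? .edr : .sdr
}
```

Re-querying the headroom whenever the view changes windows or screens, as `updateView` does, keeps this decision correct when a projector is dragged between SDR and EDR displays.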
func getRetained() -> OpaquePointer { let retained = Unmanaged.passRetained(self).toOpaque() return OpaquePointer(retained) } /// Gets an opaque pointer for the ``OBSSwapChain`` instance without increasing its reference count /// - Returns: `OpaquePointer` to class instance func getUnretained() -> OpaquePointer { let unretained = Unmanaged.passUnretained(self).toOpaque() return OpaquePointer(unretained) } } obs-studio-32.1.0-sources/libobs-metal/metal-shader.swift000644 001751 001751 00000066143 15153330235 024244 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal private typealias ParserError = MetalError.OBSShaderParserError private typealias ShaderError = MetalError.OBSShaderError private typealias MetalShaderError = MetalError.MetalShaderError /// Creates a ``MetalShader`` instance from the given shader string for use as a vertex shader. 
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - shader: C character pointer with the contents of the `libobs` effect file
///   - file: C character pointer with the contents of the `libobs` effect file location
///   - error_string: Pointer for another C character pointer with the contents of an error description
/// - Returns: Opaque pointer to a new ``MetalShader`` instance on success or `nil` on error
///
/// The string pointed to by the `shader` argument is a re-compiled shader string created from the associated "effect"
/// file (which will contain multiple effects). Each effect is made up of several passes (though usually only a single
/// pass is defined), each of which contains a vertex and fragment shader. This function is then called with just the
/// vertex shader string.
///
/// This vertex shader string needs to be parsed again and transpiled into a Metal shader string, which is handled by
/// the ``OBSShader`` class. The transpiled string is then used to create the actual ``MetalShader`` instance.
@_cdecl("device_vertexshader_create")
public func device_vertexshader_create(
    device: UnsafeRawPointer, shader: UnsafePointer<CChar>, file: UnsafePointer<CChar>,
    error_string: UnsafeMutablePointer<UnsafeMutablePointer<CChar>>
) -> OpaquePointer?
{
    let device: MetalDevice = unretained(device)

    let content = String(cString: shader)
    let fileLocation = String(cString: file)

    do {
        let obsShader = try OBSShader(type: .vertex, content: content, fileLocation: fileLocation)
        let transpiled = try obsShader.transpiled()

        guard let metaData = obsShader.metaData else {
            OBSLog(.error, "device_vertexshader_create: No required metadata found for transpiled shader")
            return nil
        }

        let metalShader = try MetalShader(device: device, source: transpiled, type: .vertex, data: metaData)

        return metalShader.getRetained()
    } catch let error as ParserError {
        switch error {
        case .parseFail(let description):
            OBSLog(.error, "device_vertexshader_create: Error parsing shader.\n\(description)")
        default:
            OBSLog(.error, "device_vertexshader_create: Error parsing shader.\n\(error.description)")
        }
    } catch let error as ShaderError {
        switch error {
        case .transpileError(let description):
            OBSLog(.error, "device_vertexshader_create: Error transpiling shader.\n\(description)")
        case .parseError(let description):
            OBSLog(.error, "device_vertexshader_create: OBS parser error.\n\(description)")
        case .parseFail(let description):
            OBSLog(.error, "device_vertexshader_create: OBS parser failure.\n\(description)")
        default:
            OBSLog(.error, "device_vertexshader_create: OBS shader error.\n\(error.description)")
        }
    } catch {
        switch error {
        case let error as MetalShaderError:
            OBSLog(.error, "device_vertexshader_create: Error compiling shader.\n\(error.description)")
        case let error as MetalError.MTLDeviceError:
            OBSLog(.error, "device_vertexshader_create: Device error compiling shader.\n\(error.description)")
        default:
            OBSLog(.error, "device_vertexshader_create: Unknown error occurred")
        }
    }

    return nil
}

/// Creates a ``MetalShader`` instance from the given shader string for use as a fragment shader.
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - shader: C character pointer with the contents of the `libobs` effect file
///   - file: C character pointer with the contents of the `libobs` effect file location
///   - error_string: Pointer for another C character pointer with the contents of an error description
/// - Returns: Opaque pointer to a new ``MetalShader`` instance on success or `nil` on error
///
/// The string pointed to by the `shader` argument is a re-compiled shader string created from the associated "effect"
/// file (which will contain multiple effects). Each effect is made up of several passes (though usually only a single
/// pass is defined), each of which contains a vertex and fragment shader. This function is then called with just the
/// fragment shader string.
///
/// This fragment shader string needs to be parsed again and transpiled into a Metal shader string, which is handled by
/// the ``OBSShader`` class. The transpiled string is then used to create the actual ``MetalShader`` instance.
@_cdecl("device_pixelshader_create")
public func device_pixelshader_create(
    device: UnsafeRawPointer, shader: UnsafePointer<CChar>, file: UnsafePointer<CChar>,
    error_string: UnsafeMutablePointer<UnsafeMutablePointer<CChar>>
) -> OpaquePointer?
{
    let device: MetalDevice = unretained(device)

    let content = String(cString: shader)
    let fileLocation = String(cString: file)

    do {
        let obsShader = try OBSShader(type: .fragment, content: content, fileLocation: fileLocation)
        let transpiled = try obsShader.transpiled()

        guard let metaData = obsShader.metaData else {
            OBSLog(.error, "device_pixelshader_create: No required metadata found for transpiled shader")
            return nil
        }

        let metalShader = try MetalShader(device: device, source: transpiled, type: .fragment, data: metaData)

        return metalShader.getRetained()
    } catch let error as ParserError {
        switch error {
        case .parseFail(let description):
            OBSLog(.error, "device_pixelshader_create: Error parsing shader.\n\(description)")
        default:
            OBSLog(.error, "device_pixelshader_create: Error parsing shader.\n\(error.description)")
        }
    } catch let error as ShaderError {
        switch error {
        case .transpileError(let description):
            OBSLog(.error, "device_pixelshader_create: Error transpiling shader.\n\(description)")
        case .parseError(let description):
            OBSLog(.error, "device_pixelshader_create: OBS parser error.\n\(description)")
        case .parseFail(let description):
            OBSLog(.error, "device_pixelshader_create: OBS parser failure.\n\(description)")
        default:
            OBSLog(.error, "device_pixelshader_create: OBS shader error.\n\(error.description)")
        }
    } catch {
        switch error {
        case let error as MetalShaderError:
            OBSLog(.error, "device_pixelshader_create: Error compiling shader.\n\(error.description)")
        case let error as MetalError.MTLDeviceError:
            OBSLog(.error, "device_pixelshader_create: Device error compiling shader.\n\(error.description)")
        default:
            OBSLog(.error, "device_pixelshader_create: Unknown error occurred")
        }
    }

    return nil
}

/// Loads the ``MetalShader`` instance for use as the vertex shader for the current render pipeline descriptor.
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - vertShader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
///
/// This function will simply set up the ``MTLFunction`` wrapped by the ``MetalShader`` instance as the current
/// pipeline descriptor's `vertexFunction`. The Metal renderer will lazily create new render pipeline states for each
/// permutation of pipeline descriptors, which is a comparatively costly operation but will only occur once for any
/// such permutation.
///
/// > Note: If a `NULL` pointer is passed for the `vertShader` argument, the vertex function on the current render
/// pipeline descriptor will be _unset_.
///
@_cdecl("device_load_vertexshader")
public func device_load_vertexshader(device: UnsafeRawPointer, vertShader: UnsafeRawPointer?) {
    let device: MetalDevice = unretained(device)

    if let vertShader {
        let shader: MetalShader = unretained(vertShader)

        guard shader.type == .vertex else {
            assertionFailure("device_load_vertexshader: Invalid shader type \(shader.type)")
            return
        }

        device.renderState.vertexShader = shader
        device.renderState.pipelineDescriptor.vertexFunction = shader.function
        device.renderState.pipelineDescriptor.vertexDescriptor = shader.vertexDescriptor
    } else {
        device.renderState.vertexShader = nil
        device.renderState.pipelineDescriptor.vertexFunction = nil
        device.renderState.pipelineDescriptor.vertexDescriptor = nil
    }
}

/// Loads the ``MetalShader`` instance for use as the fragment shader for the current render pipeline descriptor.
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - pixelShader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
///
/// This function will simply set up the ``MTLFunction`` wrapped by the ``MetalShader`` instance as the current
/// pipeline descriptor's `fragmentFunction`.
/// The Metal renderer will lazily create new render pipeline states for
/// each permutation of pipeline descriptors, which is a comparatively costly operation but will only occur once for
/// any such permutation.
///
/// As any fragment function is potentially associated with a number of textures and associated sampler states, the
/// associated arrays are reset whenever a new fragment function is set up.
///
/// > Note: If a `NULL` pointer is passed for the `pixelShader` argument, the fragment function on the current render
/// pipeline descriptor will be _unset_.
///
@_cdecl("device_load_pixelshader")
public func device_load_pixelshader(device: UnsafeRawPointer, pixelShader: UnsafeRawPointer?) {
    let device: MetalDevice = unretained(device)

    for index in 0..<Int(GS_MAX_TEXTURES) {
        device.renderState.textures[index] = nil
        device.renderState.samplers[index] = nil
    }

    if let pixelShader {
        let shader: MetalShader = unretained(pixelShader)

        guard shader.type == .fragment else {
            assertionFailure("device_load_pixelshader: Invalid shader type \(shader.type)")
            return
        }

        device.renderState.fragmentShader = shader
        device.renderState.pipelineDescriptor.fragmentFunction = shader.function
    } else {
        device.renderState.fragmentShader = nil
        device.renderState.pipelineDescriptor.fragmentFunction = nil
    }
}

/// Gets the ``MetalShader`` set up as the current vertex shader for the pipeline
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
/// - Returns: Opaque pointer to ``MetalShader`` instance if a vertex shader is currently set up or `nil` otherwise
@_cdecl("device_get_vertex_shader")
public func device_get_vertex_shader(device: UnsafeRawPointer) -> OpaquePointer? {
    let device: MetalDevice = unretained(device)

    if let shader = device.renderState.vertexShader {
        return shader.getUnretained()
    } else {
        return nil
    }
}

/// Gets the ``MetalShader`` set up as the current fragment shader for the pipeline
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
/// - Returns: Opaque pointer to ``MetalShader`` instance if a fragment shader is currently set up or `nil` otherwise
@_cdecl("device_get_pixel_shader")
public func device_get_pixel_shader(device: UnsafeRawPointer) -> OpaquePointer? {
    let device: MetalDevice = unretained(device)

    if let shader = device.renderState.fragmentShader {
        return shader.getUnretained()
    } else {
        return nil
    }
}

/// Requests the deinitialization of the ``MetalShader`` instance shared with `libobs`
/// - Parameter shader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
///
/// Ownership of the ``MetalShader`` instance will be transferred into the function and if this was the last strong
/// reference to it, the object will be automatically deinitialized and deallocated by Swift.
@_cdecl("gs_shader_destroy")
public func gs_shader_destroy(shader: UnsafeRawPointer) {
    let _ = retained(shader) as MetalShader
}

/// Gets the number of uniform parameters used on the ``MetalShader``
/// - Parameter shader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
/// - Returns: Number of uniforms
@_cdecl("gs_shader_get_num_params")
public func gs_shader_get_num_params(shader: UnsafeRawPointer) -> UInt32 {
    let shader: MetalShader = unretained(shader)

    return UInt32(shader.uniforms.count)
}

/// Gets a uniform parameter from the ``MetalShader`` by its array index
/// - Parameters:
///   - shader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
///   - param: Array index of uniform parameter to get
/// - Returns: Opaque pointer to a ``ShaderUniform`` instance if the index is within the uniform array's bounds or
/// `nil` otherwise
///
/// This function requires that the array indices of the uniforms array do not change for a ``MetalShader`` and also
/// that the exact order of uniforms is identical between `libobs`'s interpretation of the effects file and the
/// transpiled shader's analysis of the uniforms.
///
/// > Important: The opaque pointer for the ``ShaderUniform`` instance is passed unretained and as such can become
/// invalid when its owning ``MetalShader`` instance either is deinitialized itself or is replaced in the uniforms
/// array.
@_cdecl("gs_shader_get_param_by_idx")
public func gs_shader_get_param_by_idx(shader: UnsafeRawPointer, param: UInt32) -> OpaquePointer?
{
    let shader: MetalShader = unretained(shader)

    guard param < shader.uniforms.count else {
        return nil
    }

    let uniform = shader.uniforms[Int(param)]
    let unretained = Unmanaged.passUnretained(uniform).toOpaque()

    return OpaquePointer(unretained)
}

/// Gets a uniform parameter from the ``MetalShader`` by its name
/// - Parameters:
///   - shader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
///   - param: C character array pointer with the name of the requested uniform parameter
/// - Returns: Opaque pointer to a ``ShaderUniform`` instance if any uniform with the provided name was found or `nil`
/// otherwise
///
/// > Important: The opaque pointer for the ``ShaderUniform`` instance is passed unretained and as such can become
/// invalid when its owning ``MetalShader`` instance either is deinitialized itself or is replaced in the uniforms
/// array.
///
@_cdecl("gs_shader_get_param_by_name")
public func gs_shader_get_param_by_name(shader: UnsafeRawPointer, param: UnsafeMutablePointer<CChar>) -> OpaquePointer? {
    let shader: MetalShader = unretained(shader)
    let paramName = String(cString: param)

    for uniform in shader.uniforms {
        if uniform.name == paramName {
            let unretained = Unmanaged.passUnretained(uniform).toOpaque()
            return OpaquePointer(unretained)
        }
    }

    return nil
}

/// Gets the uniform parameter associated with the view projection matrix used by the ``MetalShader``
/// - Parameter shader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
/// - Returns: Opaque pointer to a ``ShaderUniform`` instance if a uniform for the view projection matrix was found
/// or `nil` otherwise
///
/// The uniform for the view projection matrix has the associated name `viewProj` in the Metal renderer, thus a
/// name-based lookup is used to find the associated ``ShaderUniform`` instance.
///
/// > Important: The opaque pointer for the ``ShaderUniform`` instance is passed unretained and as such can become
/// invalid when its owning ``MetalShader`` instance either is deinitialized itself or is replaced in the uniforms
/// array.
///
@_cdecl("gs_shader_get_viewproj_matrix")
public func gs_shader_get_viewproj_matrix(shader: UnsafeRawPointer) -> OpaquePointer? {
    let shader: MetalShader = unretained(shader)
    let paramName = "viewProj"

    for uniform in shader.uniforms {
        if uniform.name == paramName {
            let unretained = Unmanaged.passUnretained(uniform).toOpaque()
            return OpaquePointer(unretained)
        }
    }

    return nil
}

/// Gets the uniform parameter associated with the world projection matrix used by the ``MetalShader``
/// - Parameter shader: Opaque pointer to ``MetalShader`` instance shared with `libobs`
/// - Returns: Opaque pointer to a ``ShaderUniform`` instance if a uniform for the world projection matrix was found
/// or `nil` otherwise
///
/// The uniform for the world projection matrix has the associated name `worldProj` in the Metal renderer, thus a
/// name-based lookup is used to find the associated ``ShaderUniform`` instance.
///
/// > Important: The opaque pointer for the ``ShaderUniform`` instance is passed unretained and as such can become
/// invalid when its owning ``MetalShader`` instance either is deinitialized itself or is replaced in the uniforms
/// array.
@_cdecl("gs_shader_get_world_matrix")
public func gs_shader_get_world_matrix(shader: UnsafeRawPointer) -> OpaquePointer?
{
    let shader: MetalShader = unretained(shader)
    let paramName = "worldProj"

    for uniform in shader.uniforms {
        if uniform.name == paramName {
            let unretained = Unmanaged.passUnretained(uniform).toOpaque()
            return OpaquePointer(unretained)
        }
    }

    return nil
}

/// Gets the name and uniform type from the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - info: Pointer to a `gs_shader_param_info` struct pre-allocated by `libobs`
///
/// > Warning: The C character array pointer holding the name of the uniform is managed by Swift and might become
/// invalid at any point in time.
@_cdecl("gs_shader_get_param_info")
public func gs_shader_get_param_info(shaderParam: UnsafeRawPointer, info: UnsafeMutablePointer<gs_shader_param_info>) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    shaderUniform.name.withCString {
        info.pointee.name = $0
    }
    info.pointee.type = shaderUniform.gsType
}

/// Sets a boolean value on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: Boolean value to set for the uniform
@_cdecl("gs_shader_set_bool")
public func gs_shader_set_bool(shaderParam: UnsafeRawPointer, val: Bool) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    withUnsafePointer(to: val) {
        shaderUniform.setParameter(data: $0, size: MemoryLayout<Bool>.size)
    }
}

/// Sets a 32-bit floating point value on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: 32-bit floating point value to set for the uniform
@_cdecl("gs_shader_set_float")
public func gs_shader_set_float(shaderParam: UnsafeRawPointer, val: Float32) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    withUnsafePointer(to: val) {
        shaderUniform.setParameter(data: $0, size: MemoryLayout<Float32>.size)
    }
}

/// Sets a 32-bit signed integer value
/// on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: 32-bit signed integer value to set for the uniform
@_cdecl("gs_shader_set_int")
public func gs_shader_set_int(shaderParam: UnsafeRawPointer, val: Int32) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    withUnsafePointer(to: val) {
        shaderUniform.setParameter(data: $0, size: MemoryLayout<Int32>.size)
    }
}

/// Sets a 3x3 matrix of 32-bit floating point values on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: A 3x3 matrix of 32-bit floating point values
///
/// The 3x3 matrix is converted into a 4x4 matrix (padded with zeros) before actually being set as the uniform data
@_cdecl("gs_shader_set_matrix3")
public func gs_shader_set_matrix3(shaderParam: UnsafeRawPointer, val: UnsafePointer<matrix3>) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    var newMatrix = matrix4()
    matrix4_from_matrix3(&newMatrix, val)

    shaderUniform.setParameter(data: &newMatrix, size: MemoryLayout<matrix4>.size)
}

/// Sets a 4x4 matrix of 32-bit floating point values on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: A 4x4 matrix of 32-bit floating point values
@_cdecl("gs_shader_set_matrix4")
public func gs_shader_set_matrix4(shaderParam: UnsafeRawPointer, val: UnsafePointer<matrix4>) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    shaderUniform.setParameter(data: val, size: MemoryLayout<matrix4>.size)
}

/// Sets a vector of 2 32-bit floating point values on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: A vector of 2 32-bit floating point values
@_cdecl("gs_shader_set_vec2")
public func gs_shader_set_vec2(shaderParam: UnsafeRawPointer,
                               val: UnsafePointer<vec2>) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    shaderUniform.setParameter(data: val, size: MemoryLayout<vec2>.size)
}

/// Sets a vector of 3 32-bit floating point values on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: A vector of 3 32-bit floating point values
@_cdecl("gs_shader_set_vec3")
public func gs_shader_set_vec3(shaderParam: UnsafeRawPointer, val: UnsafePointer<vec3>) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    shaderUniform.setParameter(data: val, size: MemoryLayout<vec3>.size)
}

/// Sets a vector of 4 32-bit floating point values on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: A vector of 4 32-bit floating point values
@_cdecl("gs_shader_set_vec4")
public func gs_shader_set_vec4(shaderParam: UnsafeRawPointer, val: UnsafePointer<vec4>) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    shaderUniform.setParameter(data: val, size: MemoryLayout<vec4>.size)
}

/// Sets up the data of a `gs_shader_texture` struct as a uniform on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: A pointer to a `gs_shader_texture` containing an opaque pointer to the actual ``MetalTexture`` instance
///   and an sRGB gamma state flag
///
/// The struct's data is copied verbatim into the uniform, which allows reconstruction of the pointer at a later point
/// as long as the actual ``MetalTexture`` instance still exists.
@_cdecl("gs_shader_set_texture")
public func gs_shader_set_texture(shaderParam: UnsafeRawPointer, val: UnsafePointer<gs_shader_texture>?)
{
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    if let val {
        shaderUniform.setParameter(data: val, size: MemoryLayout<gs_shader_texture>.size)
    }
}

/// Sets an arbitrary value on the ``ShaderUniform`` instance
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - val: Opaque pointer to some unknown data for use as the uniform
///   - size: The size of the data available at the memory pointed to by the `val` argument
///
/// The ``ShaderUniform`` itself is set up to hold a specific uniform type, each of which is associated with the
/// number of bytes required for it. If the size of the data pointed to by `val` does not match this size, the uniform
/// will not be updated.
///
/// If the ``ShaderUniform`` expects a texture parameter, the pointer will be bound as memory of a `gs_shader_texture`
/// instance before setting it up.
@_cdecl("gs_shader_set_val")
public func gs_shader_set_val(shaderParam: UnsafeRawPointer, val: UnsafeRawPointer, size: UInt32) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    let size = Int(size)
    let valueSize = shaderUniform.gsType.size

    guard valueSize == size else {
        assertionFailure("gs_shader_set_val: Required size of uniform does not match size of input")
        return
    }

    if shaderUniform.gsType == GS_SHADER_PARAM_TEXTURE {
        let shaderTexture = val.bindMemory(to: gs_shader_texture.self, capacity: 1)
        shaderUniform.setParameter(data: shaderTexture, size: valueSize)
    } else {
        let bytes = val.bindMemory(to: UInt8.self, capacity: valueSize)
        shaderUniform.setParameter(data: bytes, size: valueSize)
    }
}

/// Resets the ``ShaderUniform``'s current data with its default data
/// - Parameter shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///
/// Each ``ShaderUniform`` is optionally set up with a set of default data (stored as an array of bytes) which is
/// simply copied into the current values.
@_cdecl("gs_shader_set_default")
public func gs_shader_set_default(shaderParam: UnsafeRawPointer) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)

    if let defaultValues = shaderUniform.defaultValues {
        shaderUniform.currentValues = Array(defaultValues)
    }
}

/// Sets up the ``MTLSamplerState`` as the sampler state for the ``ShaderUniform``
/// - Parameters:
///   - shaderParam: Opaque pointer to ``ShaderUniform`` instance shared with `libobs`
///   - sampler: Opaque pointer to ``MTLSamplerState`` instance shared with `libobs`
///
/// If the uniform represents a texture for use in the associated shader, this function will also set up the provided
/// ``MTLSamplerState`` for the associated texture's texture slot.
@_cdecl("gs_shader_set_next_sampler")
public func gs_shader_set_next_sampler(shaderParam: UnsafeRawPointer, sampler: UnsafeRawPointer) {
    let shaderUniform: MetalShader.ShaderUniform = unretained(shaderParam)
    let samplerState = Unmanaged<any MTLSamplerState>.fromOpaque(sampler).takeUnretainedValue()

    shaderUniform.samplerState = samplerState
}
obs-studio-32.1.0-sources/libobs-metal/MetalShader+Extensions.swift000644 001751 001751 00000002331 15153330235 026207 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2024 by Patrick Heyer

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

import Foundation
import Metal

/// Adds the equality operator to make ``MetalShader`` instances equatable. Comparison is based on the source string
/// and function type.
extension MetalShader: Equatable {
    static func == (lhs: MetalShader, rhs: MetalShader) -> Bool {
        return lhs.source == rhs.source && lhs.function.functionType == rhs.function.functionType
    }
}
obs-studio-32.1.0-sources/libobs-metal/README.md000644 001751 001751 00000016653 15153330235 022100 0ustar00runnerrunner000000 000000 libobs-metal
============

This is an alpha-quality implementation of a Metal renderer backend for OBS Studio, exclusive to Apple Silicon Macs.
It supports all default source types, filters, and transitions provided by OBS Studio.

## Overview

* The renderer backend is implemented entirely in Swift
* A C interface header is generated automatically via the `-emit-objc-header` compile flag, and `@_cdecl("")` attributes are used to expose desired functions to `libobs`
* Only Metal Version 3 is supported (this is by design)
* Only Apple Silicon Macs are supported (this is by design)

## Implemented functionality

* Default source types are supported:
    * Color Source
    * Image Source
    * Media Source
    * SCK Capture Source
    * Browser Source
    * Capture Card and Video Capture Device Source
    * Text (FreeType 2)
* Default transitions are supported:
    * Cut
    * Fade
    * Stinger
    * Fade To Color
    * Luma Wipe
* Default filters are supported:
    * Apply LUT
    * Chroma Key
    * Color Correction
    * Crop/Pad
    * Image Mask/Blend
    * Luma Key
    * Scaling/Aspect Ratio
    * Scroll
    * Sharpen
* sRGB-aware rendering is enabled by default
* HDR output in previews and projectors is supported on screens which have EDR support
    * HDR output is not tonemapped by OBS - if the screen has EDR support, the previews will always output content in their actual format
* Recording and streaming with VideoToolbox encoders works
* Preview, separate projectors, and multi-view all work
(with caveats, see below)

## Known Issues

* Previews can stutter or be stuck with low FPS - will not be fully fixed before alpha release (see below)
* Not all possible encoder configurations have been tested
* Performance is not optimized (see below)

## The State Of Previews

To manually render contents into a window using Metal, one has to use a `CAMetalLayer` that is set to be an `NSView`'s
backing layer. This layer can provide a `CAMetalDrawable` object which the compositor will use when it renders a new
frame of the desktop. This drawable can provide a texture that OBS Studio can render into to generate output like the
main preview.

Because Metal is much more integrated with macOS than OpenGL and designed with energy efficiency in mind, a
`CAMetalLayer` will never provide more drawables than necessary, which means that there can be at most 3 drawables
"in flight". If all available drawables are in use (either by OBS Studio to render into or by the compositor to render
the desktop output), a request for a new drawable will block until an old drawable expires and a new one has been
generated.

This means that if OBS renders at a higher framerate than the operating system's compositor, it will exhaust this
budget and OBS Studio's renderer will be stalled and will have to wait until a new drawable is available. This
effectively means that OBS Studio's maximum frame rate is limited to the operating system's screen refresh interval.

The current implementation avoids the issue of stalling OBS Studio's video render framerate, at the cost of possible
framerate issues with the preview itself. OBS will always render a preview at its own framerate (which can be higher
but also lower than the operating system's refresh interval), and a callback provided to macOS will be used instead to
copy (or "blit") this preview texture into a drawable that is only kept around as short as necessary to finish this
copy operation.
This decouples the update of previews from the rendering of their contents, but obviously makes this blit operation
dependent on a projector having finished rendering, as otherwise the callback might blit an incomplete preview or
multi-view. It is this synchronization that can lead to slow and "choppy" frame rates if the refresh interval of the
operating system and the interval at which OBS can finish rendering a preview are too misaligned.

**Note:** This is a known issue, and work on a fix or better implementation of preview rendering is in progress. As
the way `CAMetalLayer` works is the opposite of the way `DXGISwapChain`s work, it requires a lot more resource
management and housekeeping in the Metal backend to get right.

## On Performance

Compiled in Release configuration, the Metal renderer already has about the same CPU impact and render times as the
OpenGL renderer on an M1 Mac, even though neither the Swift code nor the Metal code has been optimized in any way.
The late generation (and switching) of pipeline states and buffers is a costly operation, and the way OBS Studio's
renderer operates puts a natural ceiling on the performance improvements the Metal renderer could achieve (as it does
lots of small render operations but with a lot of context switching between CPU and GPU).

In Debug mode the performance is a bit worse, but that's in part due to Xcode using the debug variant of the Metal
framework, which allows inspection and reflection on all Metal types, including live previews of textures, buffers,
debugging of shaders, and more.

Usually one would prefer to upload all data in big batches (preferably into a big `MTLHeap` object) and then pick and
choose elements for each render pass to limit the switching between CPU and GPU, but this is not compatible with how
OBS Studio's renderer works at this moment.
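For reference, the batching pattern described above can be sketched as follows. This is an illustrative sketch only — the helper name `makeBatchedBuffers` and the sizing logic are assumptions for the example, not code from this backend:

```swift
import Metal

// Sketch: sub-allocate per-draw buffers from a single MTLHeap so a render pass can
// declare residency once (via useHeap) instead of once per resource.
// The helper name and sizing strategy here are illustrative, not part of libobs-metal.
func makeBatchedBuffers(device: MTLDevice, sizes: [Int]) -> [MTLBuffer]? {
    let heapDescriptor = MTLHeapDescriptor()
    heapDescriptor.storageMode = .private
    heapDescriptor.size = sizes.reduce(0) { total, length in
        // Ask the device how much heap space each buffer needs, including alignment.
        let sizeAndAlign = device.heapBufferSizeAndAlign(length: length, options: .storageModePrivate)
        let aligned = (total + sizeAndAlign.align - 1) / sizeAndAlign.align * sizeAndAlign.align
        return aligned + sizeAndAlign.size
    }

    guard let heap = device.makeHeap(descriptor: heapDescriptor) else { return nil }

    var buffers: [MTLBuffer] = []
    for length in sizes {
        guard let buffer = heap.makeBuffer(length: length, options: .storageModePrivate) else { return nil }
        buffers.append(buffer)
    }
    return buffers
}

// A render encoder would then call encoder.useHeap(heap) once per pass and bind
// individual buffers per draw, avoiding repeated per-resource residency bookkeeping.
```

With `libobs`'s many small, interleaved draw calls there is no single point at which such a batch could be assembled, which is why this optimization is not applicable here.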
**Note:** All these observations are based on OBS Studio's own CPU and render time statistics, which are flawed as the
clock speeds of either CPU or GPU are not taken into account.

## Required Fixes and Workarounds

* Metal Shading Language is stricter than HLSL and GLSL and does not allow type punning or implicit casting - all type
  conversions have to be explicit - and commonly allows only a specific set of types for vector data, colour data, or
  UV coordinates
    * The transpiler has to force conversions to unsigned integers and unsigned integer vectors for texture `Load`
      calls because `libobs` shaders depend on the implicit conversion of a 32-bit float vector to integer values when
      passed to the texture's load command (`read` in MSL)
    * Metal has no support for BGRX/RGBX formats; color always has to be specified using a vector of 4 floats, while
      some `libobs` shaders assume BGRX and only provide a `float3` value in their pixel shaders. Transpiled Metal
      shaders instead return a `float4` with a `1.0` alpha value
    * This might not be exhaustive, as other - so far untested - shaders might depend on other implicit conversions of
      HLSL/GLSL and will require additional workarounds and wrapping of existing code to return the correct types
      expected by MSL
* Metal does not support unpacking `UInt32` values into a `float4` in vertex data provided via the `[[stage_in]]`
  attribute to benefit from vertex fetch (where the pipeline itself is made aware of the buffer layout via a vertex
  descriptor and thus fetches the data from the buffer as needed) vs. the classic "vertex push" method
    * This is commonly used in `libobs` to provide color buffer data - to fix this, the values are unpacked and
      converted into a `float4` when the GPU buffers are created for a vertex buffer
* There is no explicit clear command in Metal, as clears are implemented as a command that is run when a render target
  (or more precisely a tile of the render target) is loaded into tile memory for a render pass.
  If no render pass occurs, no load command is executed and the render target is not cleared. OBS Studio depends on a
  "clear" call actually clearing the texture, thus an explicit (but lightweight) draw call is scheduled to ensure that
  the render target is loaded, cleared, and stored.
obs-studio-32.1.0-sources/libobs-metal/MetalDevice.swift000644 001751 001751 00000077737 15153330235 024063 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2024 by Patrick Heyer

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

import AppKit
import Foundation
import Metal
import simd

/// Describes which clear actions to take when an explicit clear is requested
struct ClearState {
    var colorAction: MTLLoadAction = .dontCare
    var depthAction: MTLLoadAction = .dontCare
    var stencilAction: MTLLoadAction = .dontCare
    var clearColor: MTLClearColor = MTLClearColor()
    var clearDepth: Double = 0.0
    var clearStencil: UInt32 = 0
    var clearTarget: MetalTexture?
        = nil
}

/// Object wrapping an `MTLDevice` object and providing convenience functions for interaction with `libobs`
class MetalDevice {
    private let identityMatrix = matrix_float4x4.init(diagonal: SIMD4(1.0, 1.0, 1.0, 1.0))

    private let fallbackVertexBuffer: MTLBuffer
    private var nopVertexFunction: MTLFunction
    private var pipelines = [Int: MTLRenderPipelineState]()
    private var depthStencilStates = [Int: MTLDepthStencilState]()
    private var obsSignalCallbacks = [MetalSignalType: () -> Void]()
    private var displayLink: CVDisplayLink?

    let device: MTLDevice
    let commandQueue: MTLCommandQueue

    var renderState: MetalRenderState
    var swapChains = [OBSSwapChain]()

    let swapChainQueue = DispatchQueue(label: "swapchainUpdateQueue", qos: .userInteractive)

    init(device: MTLDevice) throws {
        self.device = device

        guard let commandQueue = device.makeCommandQueue() else {
            throw MetalError.MTLDeviceError.commandQueueCreationFailure
        }

        guard let buffer = device.makeBuffer(length: 1, options: .storageModePrivate) else {
            throw MetalError.MTLDeviceError.bufferCreationFailure("Fallback vertex buffer")
        }

        let nopVertexSource = "[[vertex]] float4 vsNop() { return (float4)0; }"

        let compileOptions = MTLCompileOptions()
        if #available(macOS 15, *) {
            compileOptions.mathMode = .fast
        } else {
            compileOptions.fastMathEnabled = true
        }

        guard let library = try?
            device.makeLibrary(source: nopVertexSource, options: compileOptions),
            let function = library.makeFunction(name: "vsNop")
        else {
            throw MetalError.MTLDeviceError.shaderCompilationFailure("Vertex NOP shader")
        }

        CVDisplayLinkCreateWithActiveCGDisplays(&displayLink)

        if displayLink == nil {
            throw MetalError.MTLDeviceError.displayLinkCreationFailure
        }

        self.commandQueue = commandQueue
        self.nopVertexFunction = function
        self.fallbackVertexBuffer = buffer

        self.renderState = MetalRenderState(
            viewMatrix: identityMatrix,
            projectionMatrix: identityMatrix,
            viewProjectionMatrix: identityMatrix,
            scissorRectEnabled: false,
            gsColorSpace: GS_CS_SRGB
        )

        let clearPipelineDescriptor = renderState.clearPipelineDescriptor
        clearPipelineDescriptor.colorAttachments[0].isBlendingEnabled = false
        clearPipelineDescriptor.vertexFunction = nopVertexFunction
        clearPipelineDescriptor.fragmentFunction = nil
        clearPipelineDescriptor.inputPrimitiveTopology = .point

        setupSignalHandlers()
        setupDisplayLink()
    }

    func dispatchSignal(type: MetalSignalType) {
        if let callback = obsSignalCallbacks[type] {
            callback()
        }
    }

    /// Creates signal handlers for specific OBS signals and adds them to a collection of signal handlers using the
    /// signal name as their key
    private func setupSignalHandlers() {
        let videoResetCallback = { [self] in
            guard let displayLink else { return }

            CVDisplayLinkStop(displayLink)
            CVDisplayLinkStart(displayLink)
        }

        obsSignalCallbacks.updateValue(videoResetCallback, forKey: MetalSignalType.videoReset)
    }

    /// Sets up the `CVDisplayLink` used by the ``MetalDevice`` to synchronize projector output with the operating
    /// system's screen refresh rate.
    private func setupDisplayLink() {
        func displayLinkCallback(
            displayLink: CVDisplayLink,
            _ now: UnsafePointer<CVTimeStamp>,
            _ outputTime: UnsafePointer<CVTimeStamp>,
            _ flagsIn: CVOptionFlags,
            _ flagsOut: UnsafeMutablePointer<CVOptionFlags>,
            _ displayLinkContext: UnsafeMutableRawPointer?
) -> CVReturn { guard let displayLinkContext else { return kCVReturnSuccess } let metalDevice = unsafeBitCast(displayLinkContext, to: MetalDevice.self) metalDevice.blitSwapChains() return kCVReturnSuccess } let opaqueSelf = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()) CVDisplayLinkSetOutputCallback(displayLink!, displayLinkCallback, opaqueSelf) } /// Iterates over all ``OBSSwapChain`` instances present on the ``MetalDevice`` instance and encodes a block /// transfer command on the GPU to copy the contents of the projector rendered by `libobs`'s render loop into the /// drawable provided by a `CAMetalLayer`. func blitSwapChains() { guard swapChains.count > 0 else { return } guard let commandBuffer = commandQueue.makeCommandBuffer(), let encoder = commandBuffer.makeBlitCommandEncoder() else { return } self.swapChainQueue.sync { swapChains = swapChains.filter { $0.discard == false } } for swapChain in swapChains { guard let renderTarget = swapChain.renderTarget, let drawable = swapChain.layer.nextDrawable() else { continue } guard renderTarget.texture.width == drawable.texture.width, renderTarget.texture.height == drawable.texture.height, renderTarget.texture.pixelFormat == drawable.texture.pixelFormat else { continue } autoreleasepool { encoder.waitForFence(swapChain.fence) encoder.copy(from: renderTarget.texture, to: drawable.texture) commandBuffer.present(drawable) } } encoder.endEncoding() commandBuffer.commit() } /// Simulates an explicit "clear" command commonly used in OpenGL or Direct3D11 implementations. /// - Parameter state: A ``ClearState`` object holding the requested clear actions /// /// Metal (like Direct3D12 and Vulkan) does not have an explicit clear command anymore. Devices with M- and /// A-series SOCs have deferred tile-based GPUs which do not load render targets as single large textures, but /// instead interact with textures via tiles. 
A load and store command is executed every time this occurs and a /// clear is achieved via a load command. /// /// If no actual rendering occurs however, no load or store commands are executed, and a render target will be /// "untouched". This would lead to issues in situations like switching to an empty scene, as the lack of any /// sources would trigger no draw calls. /// /// Thus an explicit draw call needs to be scheduled to achieve the same outcome as the explicit "clear" call in /// legacy APIs. This is achieved using the most lightweight pipeline possible: /// * A single vertex shader that returns 0 for all points /// * No fragment shader /// * Just load and store commands /// /// While this is less efficient than the "native" approach, it is the best way to ensure the expected /// output with `libobs`'s rendering system. /// func clear(state: ClearState) throws { try ensureCommandBuffer() let commandBuffer = renderState.commandBuffer! guard let renderTarget = renderState.renderTarget else { return } let pipelineDescriptor = renderState.clearPipelineDescriptor if renderState.useSRGBGamma && renderTarget.sRGBtexture != nil { pipelineDescriptor.colorAttachments[0].pixelFormat = renderTarget.sRGBtexture!.pixelFormat } else { pipelineDescriptor.colorAttachments[0].pixelFormat = renderTarget.texture.pixelFormat } pipelineDescriptor.colorAttachments[0].isBlendingEnabled = false if let depthStencilAttachment = renderState.depthStencilAttachment { pipelineDescriptor.depthAttachmentPixelFormat = depthStencilAttachment.texture.pixelFormat pipelineDescriptor.stencilAttachmentPixelFormat = depthStencilAttachment.texture.pixelFormat } else { pipelineDescriptor.depthAttachmentPixelFormat = .invalid pipelineDescriptor.stencilAttachmentPixelFormat = .invalid } let stateHash = pipelineDescriptor.hashValue let renderPipelineState: MTLRenderPipelineState if let pipelineState = pipelines[stateHash] { renderPipelineState = pipelineState } else { do { let pipelineState = try
device.makeRenderPipelineState(descriptor: pipelineDescriptor) pipelines.updateValue(pipelineState, forKey: stateHash) renderPipelineState = pipelineState } catch { throw MetalError.MTLDeviceError.pipelineStateCreationFailure } } let depthStencilDescriptor = MTLDepthStencilDescriptor() depthStencilDescriptor.isDepthWriteEnabled = false let depthStateHash = depthStencilDescriptor.hashValue let depthStencilState: MTLDepthStencilState if let state = depthStencilStates[depthStateHash] { depthStencilState = state } else { guard let state = device.makeDepthStencilState(descriptor: depthStencilDescriptor) else { throw MetalError.MTLDeviceError.depthStencilStateCreationFailure } depthStencilStates.updateValue(state, forKey: depthStateHash) depthStencilState = state } let renderPassDescriptor = MTLRenderPassDescriptor() if state.colorAction == .clear { renderPassDescriptor.colorAttachments[0].loadAction = .clear renderPassDescriptor.colorAttachments[0].storeAction = .store renderPassDescriptor.colorAttachments[0].clearColor = state.clearColor } else { renderPassDescriptor.colorAttachments[0].loadAction = state.colorAction } if state.depthAction == .clear { renderPassDescriptor.depthAttachment.loadAction = .clear renderPassDescriptor.depthAttachment.storeAction = .store renderPassDescriptor.depthAttachment.clearDepth = state.clearDepth } else { renderPassDescriptor.depthAttachment.loadAction = state.depthAction } if state.stencilAction == .clear { renderPassDescriptor.stencilAttachment.loadAction = .clear renderPassDescriptor.stencilAttachment.storeAction = .store renderPassDescriptor.stencilAttachment.clearStencil = state.clearStencil } else { renderPassDescriptor.stencilAttachment.loadAction = state.stencilAction } if renderState.useSRGBGamma && renderTarget.sRGBtexture != nil { renderPassDescriptor.colorAttachments[0].texture = renderTarget.sRGBtexture! 
} else { renderPassDescriptor.colorAttachments[0].texture = renderTarget.texture } renderTarget.hasPendingWrites = true renderState.inFlightRenderTargets.insert(renderTarget) renderPassDescriptor.colorAttachments[0].level = 0 renderPassDescriptor.colorAttachments[0].slice = 0 renderPassDescriptor.colorAttachments[0].depthPlane = 0 if let zstencilAttachment = renderState.depthStencilAttachment { renderPassDescriptor.depthAttachment.texture = zstencilAttachment.texture renderPassDescriptor.stencilAttachment.texture = zstencilAttachment.texture } else { renderPassDescriptor.depthAttachment.texture = nil renderPassDescriptor.stencilAttachment.texture = nil } guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else { throw MetalError.MTLCommandBufferError.encoderCreationFailure } encoder.setRenderPipelineState(renderPipelineState) if renderState.depthStencilAttachment != nil { encoder.setDepthStencilState(depthStencilState) } encoder.setCullMode(.none) encoder.drawPrimitives(type: .point, vertexStart: 0, vertexCount: 1, instanceCount: 1, baseInstance: 0) encoder.endEncoding() } /// Schedules a draw call on the GPU with the information currently set up in the ``MetalRenderState``. /// - Parameters: /// - primitiveType: Type of primitives to render /// - vertexStart: Start index for the vertices to be drawn /// - vertexCount: Number of vertices to be drawn /// /// Modern APIs like Metal have moved away from the "magic state" mental model used by legacy APIs like OpenGL or /// Direct3D11, which required the APIs to validate the "global state" at every draw call. Instead, Metal requires /// the creation of a pipeline object which is immutable after creation and thus has to run validation once and can /// then run draw calls directly.
/// /// Due to the nature of OBS Studio, the pipeline state can change constantly, as blending, filtering, and /// conversion of data can be changed by users of the program at any time, which means that the combination of blend /// modes, shaders, and attachments changes frequently. /// /// To avoid a costly re-creation of pipelines for every draw call, pipelines are cached after creation, and if a /// draw call uses an established pipeline, it is reused from the cache instead. While this cannot avoid the cost /// of creating new pipelines during runtime, it mitigates the cost for consecutive draw calls. func draw(primitiveType: MTLPrimitiveType, vertexStart: Int, vertexCount: Int) throws { try ensureCommandBuffer() let commandBuffer = renderState.commandBuffer! guard let renderTarget = renderState.renderTarget else { return } guard renderState.vertexBuffer != nil || vertexCount > 0 else { assertionFailure("MetalDevice: Attempted to render without a vertex buffer set") return } guard let vertexShader = renderState.vertexShader else { assertionFailure("MetalDevice: Attempted to render without vertex shader set") return } guard let fragmentShader = renderState.fragmentShader else { assertionFailure("MetalDevice: Attempted to render without fragment shader set") return } let renderPipelineDescriptor = renderState.pipelineDescriptor let renderPassDescriptor = renderState.renderPassDescriptor if renderState.isRendertargetChanged { if renderState.useSRGBGamma && renderTarget.sRGBtexture != nil { renderPipelineDescriptor.colorAttachments[0].pixelFormat = renderTarget.sRGBtexture!.pixelFormat renderPassDescriptor.colorAttachments[0].texture = renderTarget.sRGBtexture!
} else { renderPipelineDescriptor.colorAttachments[0].pixelFormat = renderTarget.texture.pixelFormat renderPassDescriptor.colorAttachments[0].texture = renderTarget.texture } renderTarget.hasPendingWrites = true renderState.inFlightRenderTargets.insert(renderTarget) if let zstencilAttachment = renderState.depthStencilAttachment { renderPipelineDescriptor.depthAttachmentPixelFormat = zstencilAttachment.texture.pixelFormat renderPipelineDescriptor.stencilAttachmentPixelFormat = zstencilAttachment.texture.pixelFormat renderPassDescriptor.depthAttachment.texture = zstencilAttachment.texture renderPassDescriptor.stencilAttachment.texture = zstencilAttachment.texture } else { renderPipelineDescriptor.depthAttachmentPixelFormat = .invalid renderPipelineDescriptor.stencilAttachmentPixelFormat = .invalid renderPassDescriptor.depthAttachment.texture = nil renderPassDescriptor.stencilAttachment.texture = nil } } renderPassDescriptor.colorAttachments[0].loadAction = .load renderPassDescriptor.depthAttachment.loadAction = .load renderPassDescriptor.stencilAttachment.loadAction = .load let stateHash = renderState.pipelineDescriptor.hashValue let pipelineState: MTLRenderPipelineState if let state = pipelines[stateHash] { pipelineState = state } else { do { let state = try device.makeRenderPipelineState(descriptor: renderPipelineDescriptor) pipelines.updateValue(state, forKey: stateHash) pipelineState = state } catch { throw MetalError.MTLDeviceError.pipelineStateCreationFailure } } guard let commandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else { throw MetalError.MTLCommandBufferError.encoderCreationFailure } commandEncoder.setRenderPipelineState(pipelineState) if let effect: OpaquePointer = gs_get_effect() { gs_effect_update_params(effect) } commandEncoder.setViewport(renderState.viewPort) commandEncoder.setFrontFacing(.counterClockwise) commandEncoder.setCullMode(renderState.cullMode) if let scissorRect = renderState.scissorRect, 
renderState.scissorRectEnabled { commandEncoder.setScissorRect(scissorRect) } let depthStateHash = renderState.depthStencilDescriptor.hashValue let depthStencilState: MTLDepthStencilState if let state = depthStencilStates[depthStateHash] { depthStencilState = state } else { guard let state = device.makeDepthStencilState(descriptor: renderState.depthStencilDescriptor) else { throw MetalError.MTLDeviceError.depthStencilStateCreationFailure } depthStencilStates.updateValue(state, forKey: depthStateHash) depthStencilState = state } commandEncoder.setDepthStencilState(depthStencilState) var gsViewMatrix: matrix4 = matrix4() gs_matrix_get(&gsViewMatrix) let viewMatrix = matrix_float4x4( rows: [ SIMD4(gsViewMatrix.x.x, gsViewMatrix.x.y, gsViewMatrix.x.z, gsViewMatrix.x.w), SIMD4(gsViewMatrix.y.x, gsViewMatrix.y.y, gsViewMatrix.y.z, gsViewMatrix.y.w), SIMD4(gsViewMatrix.z.x, gsViewMatrix.z.y, gsViewMatrix.z.z, gsViewMatrix.z.w), SIMD4(gsViewMatrix.t.x, gsViewMatrix.t.y, gsViewMatrix.t.z, gsViewMatrix.t.w), ] ) renderState.viewProjectionMatrix = (viewMatrix * renderState.projectionMatrix) if let viewProjectionUniform = vertexShader.viewProjection { viewProjectionUniform.setParameter( data: &renderState.viewProjectionMatrix, size: MemoryLayout<matrix_float4x4>.size) } vertexShader.uploadShaderParameters(encoder: commandEncoder) fragmentShader.uploadShaderParameters(encoder: commandEncoder) if let vertexBuffer = renderState.vertexBuffer { let buffers = vertexBuffer.getShaderBuffers(for: vertexShader) commandEncoder.setVertexBuffers( buffers, offsets: .init(repeating: 0, count: buffers.count), range: 0..<buffers.count) } if let indexBuffer = renderState.indexBuffer, let bufferData = indexBuffer.indexData { commandEncoder.drawIndexedPrimitives( type: primitiveType, indexCount: (vertexCount > 0) ?
vertexCount : indexBuffer.count, indexType: indexBuffer.type, indexBuffer: bufferData, indexBufferOffset: 0 ) } else { if let vertexBuffer = renderState.vertexBuffer, let vertexData = vertexBuffer.vertexData { commandEncoder.drawPrimitives( type: primitiveType, vertexStart: vertexStart, vertexCount: vertexData.pointee.num ) } else { commandEncoder.drawPrimitives( type: primitiveType, vertexStart: vertexStart, vertexCount: vertexCount ) } } commandEncoder.endEncoding() } /// Creates a command buffer on the render state if none exists func ensureCommandBuffer() throws { if renderState.commandBuffer == nil { guard let buffer = commandQueue.makeCommandBuffer() else { throw MetalError.MTLCommandQueueError.commandBufferCreationFailure } renderState.commandBuffer = buffer } } /// Updates a memory fence used on the GPU to signal that the current render target (which is associated with an /// ``OBSSwapChain``) is available for other GPU commands. /// /// This is necessary as the final output of projectors needs to be blitted into the drawables provided by the /// `CAMetalLayer` of each ``OBSSwapChain`` at the screen refresh interval, but projectors are usually rendered /// using tens of separate draw calls. /// /// Thus a virtual "display render stage" state is maintained by the Metal renderer, which is started when an /// ``OBSSwapChain`` instance is loaded by `libobs` and ended when `device_end_scene` is called. func finishDisplayRenderStage() { let buffer = commandQueue.makeCommandBufferWithUnretainedReferences() let encoder = buffer?.makeBlitCommandEncoder() guard let buffer, let encoder, let swapChain = renderState.swapChain else { return } encoder.updateFence(swapChain.fence) encoder.endEncoding() buffer.commit() } /// Ensures that all encoded render commands in the current command buffer are committed to the command queue for /// execution on the GPU.
/// /// This is particularly important when textures (or texture data) are to be blitted into other textures or buffers, /// as pending GPU commands in the existing buffer need to run before any commands that rely on the results of those /// draw commands. /// /// Within the same queue this is ensured by Metal itself, but requires the commands to be encoded and committed /// in the desired order. func finishPendingCommands() { guard let commandBuffer = renderState.commandBuffer, commandBuffer.status != .committed else { return } commandBuffer.commit() renderState.inFlightRenderTargets.forEach { $0.hasPendingWrites = false } renderState.inFlightRenderTargets.removeAll(keepingCapacity: true) renderState.commandBuffer = nil } /// Copies the contents of a texture into another texture of identical dimensions /// - Parameters: /// - source: Source texture to copy from /// - destination: Destination texture to copy to /// /// This function requires both textures to have been created with the same dimensions, otherwise the copy /// operation will fail. /// /// If the source texture has pending writes (e.g., it was used as the render target for a clear or draw command), /// then the current command buffer will be committed to ensure that the blit command encoded by this function /// happens after the pending commands. func copyTexture(source: MetalTexture, destination: MetalTexture) throws { if source.hasPendingWrites { finishPendingCommands() } try ensureCommandBuffer() let buffer = renderState.commandBuffer!
let encoder = buffer.makeBlitCommandEncoder() guard let encoder else { throw MetalError.MTLCommandQueueError.commandBufferCreationFailure } encoder.copy(from: source.texture, to: destination.texture) encoder.endEncoding() } /// Copies the contents of a texture into a texture for CPU access /// - Parameters: /// - source: Source texture to copy from /// - destination: Destination texture to copy to /// /// This function requires both textures to have been created with the same dimensions, otherwise the copy operation /// will fail. /// /// If the source texture has pending writes (e.g., it was used as the render target for a clear or draw command), /// then the current command buffer will be committed to ensure that the blit command encoded by this function /// happens after the pending commands. /// /// > Important: This function differs from ``copyTexture`` insofar as it will wait for the completion of all /// commands in the command queue to ensure that the GPU has actually completed the blit into the destination /// texture. func stageTexture(source: MetalTexture, destination: MetalTexture) throws { if source.hasPendingWrites { finishPendingCommands() } let buffer = commandQueue.makeCommandBufferWithUnretainedReferences() let encoder = buffer?.makeBlitCommandEncoder() guard let buffer, let encoder else { throw MetalError.MTLCommandQueueError.commandBufferCreationFailure } encoder.copy(from: source.texture, to: destination.texture) encoder.endEncoding() buffer.commit() buffer.waitUntilCompleted() } /// Copies the contents of a texture into a buffer for CPU access /// - Parameters: /// - source: Source texture to copy from /// - destination: Destination buffer to copy to /// /// This function requires that the destination buffer has been created with enough capacity to hold the source /// texture's pixel data.
/// /// If the source texture has pending writes (e.g., it was used as the render target for a clear or draw command), /// then the current command buffer will be committed to ensure that the blit command encoded by this function /// happens after the pending commands. /// /// > Important: This function will wait for the completion of all commands in the command queue to ensure that the /// GPU has actually completed the blit into the destination buffer. /// func stageTextureToBuffer(source: MetalTexture, destination: MetalStageBuffer) throws { if source.hasPendingWrites { finishPendingCommands() } let buffer = commandQueue.makeCommandBufferWithUnretainedReferences() let encoder = buffer?.makeBlitCommandEncoder() guard let buffer, let encoder else { throw MetalError.MTLCommandQueueError.commandBufferCreationFailure } encoder.copy( from: source.texture, sourceSlice: 0, sourceLevel: 0, sourceOrigin: .init(x: 0, y: 0, z: 0), sourceSize: .init(width: source.texture.width, height: source.texture.height, depth: 1), to: destination.buffer, destinationOffset: 0, destinationBytesPerRow: destination.width * destination.format.bytesPerPixel!, destinationBytesPerImage: 0) encoder.endEncoding() buffer.commit() buffer.waitUntilCompleted() } /// Copies the contents of a buffer into a texture for GPU access /// - Parameters: /// - source: Source buffer to copy from /// - destination: Destination texture to copy to /// /// This function requires that the destination texture has been created with enough capacity to hold the source /// buffer's pixel data.
/// func stageBufferToTexture(source: MetalStageBuffer, destination: MetalTexture) throws { let buffer = commandQueue.makeCommandBufferWithUnretainedReferences() let encoder = buffer?.makeBlitCommandEncoder() guard let buffer, let encoder else { throw MetalError.MTLCommandQueueError.commandBufferCreationFailure } encoder.copy( from: source.buffer, sourceOffset: 0, sourceBytesPerRow: source.width * source.format.bytesPerPixel!, sourceBytesPerImage: 0, sourceSize: .init(width: source.width, height: source.height, depth: 1), to: destination.texture, destinationSlice: 0, destinationLevel: 0, destinationOrigin: .init(x: 0, y: 0, z: 0) ) encoder.endEncoding() buffer.commit() buffer.waitUntilScheduled() } /// Copies a region from a source texture into a region of a destination texture /// - Parameters: /// - source: Source texture to copy from /// - sourceRegion: Region of the source texture to copy from /// - destination: Destination texture to copy to /// - destinationRegion: Destination region to copy into /// /// This function requires that the destination region fits within the dimensions of the destination texture, /// otherwise the copy operation will fail. /// /// If the source texture has pending writes (e.g., it was used as the render target for a clear or draw command), /// then the current command buffer will be committed to ensure that the blit command encoded by this function /// happens after the pending commands. 
/// func copyTextureRegion( source: MetalTexture, sourceRegion: MTLRegion, destination: MetalTexture, destinationRegion: MTLRegion ) throws { if source.hasPendingWrites { finishPendingCommands() } let buffer = commandQueue.makeCommandBufferWithUnretainedReferences() let encoder = buffer?.makeBlitCommandEncoder() guard let buffer, let encoder else { throw MetalError.MTLCommandQueueError.commandBufferCreationFailure } encoder.copy( from: source.texture, sourceSlice: 0, sourceLevel: 0, sourceOrigin: sourceRegion.origin, sourceSize: sourceRegion.size, to: destination.texture, destinationSlice: 0, destinationLevel: 0, destinationOrigin: destinationRegion.origin ) encoder.endEncoding() buffer.commit() } /// Stops the `CVDisplayLink` used by the ``MetalDevice`` instance func shutdown() { guard let displayLink else { return } CVDisplayLinkStop(displayLink) self.displayLink = nil } deinit { shutdown() } } obs-studio-32.1.0-sources/libobs-metal/MTLCullMode+Extensions.swift /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
******/ import Foundation import Metal extension MTLCullMode { /// Conversion of the cull mode into its corresponding `libobs` type var obsMode: gs_cull_mode { switch self { case .back: return GS_BACK case .front: return GS_FRONT default: return GS_NEITHER } } } obs-studio-32.1.0-sources/libobs-metal/OBSShader.swift /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
******/ import Foundation import Metal private enum SampleVariant { case load case sample case sampleBias case sampleGrad case sampleLevel } private struct VariableType: OptionSet { var rawValue: UInt static let typeUniform = VariableType(rawValue: 1 << 0) static let typeStruct = VariableType(rawValue: 1 << 1) static let typeStructMember = VariableType(rawValue: 1 << 2) static let typeInput = VariableType(rawValue: 1 << 3) static let typeOutput = VariableType(rawValue: 1 << 4) static let typeTexture = VariableType(rawValue: 1 << 5) static let typeConstant = VariableType(rawValue: 1 << 6) } private struct OBSShaderFunction { let name: String var returnType: String var typeMap: [String: String] var requiresUniformBuffers: Bool var textures: [String] var samplers: [String] var arguments: [OBSShaderVariable] let gsFunction: UnsafeMutablePointer<shader_func> } private struct OBSShaderVariable { let name: String var type: String var mapping: String? var storageType: VariableType var requiredBy: Set<String> var returnedBy: Set<String> var isStage: Bool var attributeId: Int?
var isConstant: Bool var isReference: Bool let gsVariable: UnsafeMutablePointer<shader_var> } private struct OBSShaderStruct { let name: String var storageType: VariableType var members: [OBSShaderVariable] let gsVariable: UnsafeMutablePointer<shader_struct> } private struct MSLTemplates { static let header = """ #include <metal_stdlib> using namespace metal; """ static let variable = "[qualifier] [type] [name] [mapping]" static let shaderStruct = """ typedef struct { [variable] } [typename]; """ static let function = "[decorator] [type] [name]([parameters]) {[content]}" } private typealias ParserError = MetalError.OBSShaderParserError private typealias ShaderError = MetalError.OBSShaderError class OBSShader { private let type: MTLFunctionType private let content: String private let fileLocation: String private var parser: shader_parser private var parsed: Bool private var uniformsOrder = [String]() private var uniforms = [String: OBSShaderVariable]() private var structs = [String: OBSShaderStruct]() private var functionsOrder = [String]() private var functions = [String: OBSShaderFunction]() private var referenceVariables = [String]() var metaData: MetalShader.ShaderData?
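// The uniform layout computed in `buildMetadata` below rounds each buffer offset up to the
// uniform's alignment with the power-of-two trick `(offset + (alignment - 1)) & ~(alignment - 1)`.
// A minimal sketch of that computation; the helper name `alignedOffset` is illustrative and not
// part of the original source:

```swift
/// Rounds `offset` up to the next multiple of `alignment` (which must be a power of two),
/// mirroring the expression used when packing uniforms into the constant buffer.
func alignedOffset(_ offset: Int, alignment: Int) -> Int {
    precondition(alignment > 0 && alignment & (alignment - 1) == 0, "alignment must be a power of two")
    return (offset + (alignment - 1)) & ~(alignment - 1)
}

// Example: a 4-byte float at offset 0 followed by a 16-byte-aligned float4x4
// places the matrix at offset 16, not 4.
```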
init(type: MTLFunctionType, content: String, fileLocation: String) throws { guard type == .vertex || type == .fragment else { throw ShaderError.unsupportedType } self.type = type self.content = content self.fileLocation = fileLocation self.parsed = false self.parser = shader_parser() try withUnsafeMutablePointer(to: &parser) { shader_parser_init($0) let result = shader_parse($0, content.cString(using: .utf8), content.cString(using: .utf8)) let warnings = shader_parser_geterrors($0) if let warnings { throw ShaderError.parseError(String(cString: warnings)) } if !result { throw ShaderError.parseFail("Shader failed to parse: \(fileLocation)") } else { self.parsed = true } } } /// Transpiles a `libobs` effect string into a Metal Shader Language (MSL) string /// - Returns: MSL string representing the transpiled shader func transpiled() throws -> String { try analyzeUniforms() try analyzeParameters() try analyzeFunctions() let uniforms = try transpileUniforms() let structs = try transpileStructs() let functions = try transpileFunctions() self.metaData = try buildMetadata() return [MSLTemplates.header, uniforms, structs, functions].joined(separator: "\n\n") } /// Builds a metadata object for the current shader /// - Returns: ``ShaderData`` object with the shader metadata /// /// The effects used by `libobs` are written in HLSL with some customizations to allow multiple shaders within the /// same effects file (which is supported natively by MSL). As MSL does not support "global" variables, uniforms /// have to be provided explicitly via buffers and the data inside those buffers needs to be laid out in the correct /// way. /// /// Uniforms are converted into `struct` objects in the shader files and as MSL is based on C++14, these structs /// will have a size, stride, and alignment, set by the compiler. Thus the uniform data used by the shader needs to /// be laid out in the buffer according to this alignment. 
/// /// The layout of vertex buffer data also needs to be communicated using `MTLVertexDescriptor` instances for vertex /// shaders and `MTLSamplerState` instances for fragment shaders. Both will be created and set up in a /// ``ShaderData`` which is used to create the actual ``MetalShader`` object. private func buildMetadata() throws -> MetalShader.ShaderData { var uniformInfo = [MetalShader.ShaderUniform]() var textureSlot = 0 var uniformBufferSize = 0 /// The order of buffers and uniforms is "load-bearing" as the order (and thus alignment and offsets) of /// uniforms in the corresponding uniforms struct are /// influenced by it. for uniformName in uniformsOrder { guard let uniform = uniforms[uniformName] else { throw ParserError.parseFail("No uniform data found for '\(uniformName)'") } let gsType = get_shader_param_type(uniform.gsVariable.pointee.type) let isTexture = uniform.storageType.contains(.typeTexture) let byteSize: Int let alignment: Int let bufferOffset: Int if isTexture { byteSize = 0 alignment = 0 bufferOffset = uniformBufferSize } else { byteSize = gsType.mtlSize alignment = gsType.mtlAlignment bufferOffset = (uniformBufferSize + (alignment - 1)) & ~(alignment - 1) } let shaderUniform = MetalShader.ShaderUniform( name: uniform.name, gsType: gsType, textureSlot: (isTexture ? textureSlot : 0), samplerState: nil, byteOffset: bufferOffset ) shaderUniform.defaultValues = Array( UnsafeMutableBufferPointer( start: uniform.gsVariable.pointee.default_val.array, count: uniform.gsVariable.pointee.default_val.num) ) shaderUniform.currentValues = shaderUniform.defaultValues uniformBufferSize = bufferOffset + byteSize if isTexture { textureSlot += 1 } uniformInfo.append(shaderUniform) } guard let mainFunction = functions["main"] else { throw ParserError.missingMainFunction } let parameterMapper = { (mapping: String) -> MetalBuffer.BufferDataType? 
in switch mapping { case "POSITION": .vertex case "NORMAL": .normal case "TANGENT": .tangent case "COLOR": .color case _ where mapping.hasPrefix("TEXCOORD"): .texcoord default: .none } } let descriptorMapper = { (parameter: OBSShaderVariable) -> (MTLVertexFormat, Int)? in guard let mapping = parameter.mapping else { return nil } let type = parameter.type switch mapping { case "COLOR": return (.float4, MemoryLayout<SIMD4<Float>>.size) case "POSITION", "NORMAL", "TANGENT": return (.float4, MemoryLayout<SIMD4<Float>>.size) case _ where mapping.hasPrefix("TEXCOORD"): guard let numCoordinates = type[type.index(type.startIndex, offsetBy: 5)].wholeNumberValue else { assertionFailure("Unsupported type \(type) for texture parameter") return nil } let format: MTLVertexFormat = switch numCoordinates { case 0: .float case 2: .float2 case 3: .float3 case 4: .float4 default: .invalid } guard format != .invalid else { assertionFailure("OBSShader: Unsupported amount of texture coordinates '\(numCoordinates)'") return nil } return (format, MemoryLayout<Float>.size * numCoordinates) case "VERTEXID": return nil default: assertionFailure("OBSShader: Unsupported mapping \(mapping)") return nil } } switch type { case .vertex: var bufferOrder = [MetalBuffer.BufferDataType]() var descriptorData = [(MTLVertexFormat, Int)?]() let descriptor = MTLVertexDescriptor() for argument in mainFunction.arguments { if argument.storageType.contains(.typeStruct) { let actualStructType = argument.type.replacingOccurrences(of: "_In", with: "") guard let shaderStruct = structs[actualStructType] else { throw ParserError.parseFail("Shader function without struct metadata encountered ") } for shaderParameter in shaderStruct.members { if let mapping = shaderParameter.mapping, let mapping = parameterMapper(mapping) { bufferOrder.append(mapping) } if let description = descriptorMapper(shaderParameter) { descriptorData.append(description) } } } else { if let mapping = argument.mapping, let mapping = parameterMapper(mapping) {
bufferOrder.append(mapping) } if let description = descriptorMapper(argument) { descriptorData.append(description) } } } let textureUnitCount = bufferOrder.filter({ $0 == .texcoord }).count for (attributeId, description) in descriptorData.filter({ $0 != nil }).enumerated() { descriptor.attributes[attributeId].bufferIndex = attributeId descriptor.attributes[attributeId].format = description!.0 descriptor.layouts[attributeId].stride = description!.1 } return MetalShader.ShaderData( uniforms: uniformInfo, bufferOrder: bufferOrder, vertexDescriptor: descriptor, samplerDescriptors: nil, bufferSize: uniformBufferSize, textureCount: textureUnitCount ) case .fragment: var samplers = [MTLSamplerDescriptor]() for i in 0..<parser.samplers.num { let sampler: UnsafeMutablePointer<shader_sampler>? = parser.samplers.array.advanced(by: i) if let sampler { var sampler_info = gs_sampler_info() shader_sampler_convert(sampler, &sampler_info) let borderColor: MTLSamplerBorderColor = switch sampler_info.border_color { case 0x00_00_00_FF: .opaqueBlack case 0xFF_FF_FF_FF: .opaqueWhite default: .transparentBlack } let descriptor = MTLSamplerDescriptor() descriptor.borderColor = borderColor descriptor.maxAnisotropy = Int(sampler_info.max_anisotropy) guard let sAddressMode = sampler_info.address_u.mtlMode, let tAddressMode = sampler_info.address_v.mtlMode, let rAddressMode = sampler_info.address_w.mtlMode, let minMagFilter = sampler_info.filter.minMagFilter, let mipFilter = sampler_info.filter.mipFilter else { samplers.append(descriptor) continue } descriptor.sAddressMode = sAddressMode descriptor.tAddressMode = tAddressMode descriptor.rAddressMode = rAddressMode descriptor.minFilter = minMagFilter descriptor.magFilter = minMagFilter descriptor.mipFilter = mipFilter samplers.append(descriptor) } } return MetalShader.ShaderData( uniforms: uniformInfo, bufferOrder: [], vertexDescriptor: nil, samplerDescriptors: samplers, bufferSize: uniformBufferSize, textureCount: 0 ) default: throw ShaderError.unsupportedType } } /// Analyzes shader uniform parameters parsed by
the ``libobs`` shader parser. /// /// Each global variable declared as a "uniform" is stored as an ``OBSShaderVariable`` struct, which will be /// extended with additional metadata by later analysis steps. /// /// This is necessary as MSL does not support global variables and all data needs to be explicitly provided /// via buffer objects, which requires these "uniforms" to be wrapped into a single struct and passed as an explicit /// buffer object. private func analyzeUniforms() throws { for i in 0..<parser.params.num { let uniform: UnsafeMutablePointer<shader_var>? = parser.params.array.advanced(by: i) guard let uniform, let name = uniform.pointee.name, let type = uniform.pointee.type else { throw ParserError.parseFail("Uniform is missing name or type information") } let mapping: String? = if let mapping = uniform.pointee.mapping { String(cString: mapping) } else { nil } var data = OBSShaderVariable( name: String(cString: name), type: String(cString: type), mapping: mapping, storageType: .typeUniform, requiredBy: [], returnedBy: [], isStage: false, attributeId: 0, isConstant: (uniform.pointee.var_type == SHADER_VAR_CONST), isReference: false, gsVariable: uniform ) if self.type == .fragment { /// A texture uniform does not contribute to the uniform buffer if data.type.hasPrefix("texture") { data.storageType.remove(.typeUniform) data.storageType.insert(.typeTexture) } } uniformsOrder.append(data.name) uniforms.updateValue(data, forKey: data.name) } } /// Analyzes struct parameter declarations parsed by the ``libobs`` shader parser. /// /// Structured data declarations are used to pass data into and out of shaders. /// /// Whereas HLSL allows one to use "InOut" structures with attribute mappings (e.g., using the same type definition /// for vertex data going in and out of a vertex shader), MSL does not allow the mixing of input mappings and output /// mappings in the same type definition.
/// /// Thus, when the same struct type is used as an input argument for a function but also used as its output type, it /// needs to be split up into two separate types for the MSL shader. /// /// This function will first detect all struct type definitions in the shader file, then check whether each is used as /// an input argument or function output, and update the associated ``OBSShaderVariable`` structs accordingly. private func analyzeParameters() throws { for i in 0..<parser.structs.num { let shaderStruct: UnsafeMutablePointer<shader_struct>? = parser.structs.array.advanced(by: i) guard let shaderStruct, let name = shaderStruct.pointee.name else { throw ParserError.parseFail("Constant data struct has no name") } var parameters = [OBSShaderVariable]() parameters.reserveCapacity(shaderStruct.pointee.vars.num) for j in 0..<shaderStruct.pointee.vars.num { let variablePointer: UnsafeMutablePointer<shader_var>? = shaderStruct.pointee.vars.array.advanced(by: j) guard let variablePointer, let variableName = variablePointer.pointee.name, let variableType = variablePointer.pointee.type else { throw ParserError.parseFail("Constant data variable has no name or type information") } let mapping: String? = if let variableMapping = variablePointer.pointee.mapping { String(cString: variableMapping) } else { nil } let variable = OBSShaderVariable( name: String(cString: variableName), type: String(cString: variableType), mapping: mapping, storageType: .typeStructMember, requiredBy: [], returnedBy: [], isStage: false, attributeId: nil, isConstant: false, isReference: false, gsVariable: variablePointer ) parameters.append(variable) } let data = OBSShaderStruct( name: String(cString: name), storageType: [], members: parameters, gsVariable: shaderStruct ) structs.updateValue(data, forKey: data.name) } for i in 0..<parser.funcs.num { let function: UnsafeMutablePointer<shader_func>?
= parser.funcs.array.advanced(by: i) guard let function, let functionName = function.pointee.name, let returnType = function.pointee.return_type else { throw ParserError.parseFail("Shader function has no name or type information") } var functionData = OBSShaderFunction( name: String(cString: functionName), returnType: String(cString: returnType), typeMap: [:], requiresUniformBuffers: false, textures: [], samplers: [], arguments: [], gsFunction: function ) for j in 0..<function.pointee.params.num { let parameter: UnsafeMutablePointer<shader_var>? = function.pointee.params.array.advanced(by: j) guard let parameter, let parameterName = parameter.pointee.name, let parameterType = parameter.pointee.type else { throw ParserError.parseFail("Function parameter has no name or type information") } let mapping: String? = if let parameterMapping = parameter.pointee.mapping { String(cString: parameterMapping) } else { nil } /// Most effects do not seem to use `out` or `inout` function arguments, but the lanczos scale filter /// does. The most straightforward way /// to support this pattern is to use C++-style references with the `thread` storage specifier.
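/// For illustration (hypothetical parameter names, not taken from an actual effect): an HLSL-style declaration
/// such as `inout float4 color` is emitted in MSL as `thread float4 &color`, and `out float2 uv` becomes
/// `thread float2 &uv`, so that writes made inside the callee remain visible to the caller.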
let isReferenceVariable = (parameter.pointee.var_type == SHADER_VAR_OUT || parameter.pointee.var_type == SHADER_VAR_INOUT) var parameterData = OBSShaderVariable( name: String(cString: parameterName), type: String(cString: parameterType), mapping: mapping, storageType: .typeInput, requiredBy: [functionData.name], returnedBy: [], isStage: false, attributeId: nil, isConstant: (parameter.pointee.var_type == SHADER_VAR_CONST), isReference: isReferenceVariable, gsVariable: parameter ) if isReferenceVariable { referenceVariables.append(parameterData.name) } if parameterData.type == functionData.returnType { parameterData.returnedBy.insert(functionData.name) } if !functionData.typeMap.keys.contains(parameterData.name) { functionData.typeMap.updateValue(parameterData.type, forKey: parameterData.name) } /// Metal does not support using the same attribute mappings for structs as input to shader functions /// and output. They need to use different /// mappings and thus every "InOut" struct by `libobs` needs to be split up into a separate input and /// output struct type. 
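/// As a sketch (struct names are placeholders): a `VertInOut` type used both as the argument and the return type
/// of a vertex entry point is split into `VertInOut_In`, whose members carry `[[attribute(n)]]` input mappings,
/// and `VertInOut_Out`, whose members carry output mappings such as `[[position]]`; the argument is retyped to the
/// former and the return type to the latter.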
for var shaderStruct in structs.values { if shaderStruct.name == parameterData.type { shaderStruct.storageType.insert(.typeInput) parameterData.storageType.insert(.typeStruct) if shaderStruct.name == functionData.returnType { shaderStruct.storageType.insert(.typeOutput) parameterData.storageType.insert(.typeOutput) parameterData.type.append("_In") functionData.returnType.append("_Out") } structs.updateValue(shaderStruct, forKey: shaderStruct.name) } } functionData.arguments.append(parameterData) } if var shaderStruct = structs[functionData.returnType] { shaderStruct.storageType.insert(.typeOutput) structs.updateValue(shaderStruct, forKey: shaderStruct.name) } functions.updateValue(functionData, forKey: functionData.name) } } /// Analyzes function data parsed by the ``libobs`` shader parser. /// /// As MSL does not support uniforms or using the same struct type for input and output, function bodies themselves /// need to be parsed again and checked for their usage of these types or variables. /// /// Due to the way that the ``libobs`` parser works, each body of a block (either within curly braces or /// parentheses) is analyzed recursively, updating the same ``OBSShaderFunction`` struct. /// /// After a full analysis pass, this struct should contain information about all uniforms, textures, and samplers /// used (or passed on) by the function. private func analyzeFunctions() throws { for i in 0..<parser.funcs.num { let function: UnsafeMutablePointer<shader_func>?
= parser.funcs.array.advanced(by: i) guard var function, var token = function.pointee.start, let functionName = function.pointee.name else { throw ParserError.parseFail("Shader function has no name") } let functionData = functions[String(cString: functionName)] guard var functionData else { throw ParserError.parseFail("Shader function without function metadata encountered") } try analyzeFunction(function: &function, functionData: &functionData, token: &token, end: "}") functionData.textures = functionData.textures.unique() functionData.samplers = functionData.samplers.unique() functions.updateValue(functionData, forKey: functionData.name) functionsOrder.append(functionData.name) } } /// Analyzes a function body or source scope to check for use of global variables, textures, or samplers. /// /// Because MSL does not support global variables, uniforms, textures, or samplers need to be passed explicitly to a /// function. This requires scanning the entire function body (recursively in the case of separate function scopes /// denoted by curly braces or parentheses) for any occurrence of a known uniform, texture, or sampler variable /// name.
/// /// - Parameters: /// - function: Pointer to a ``shader_func`` element representing a parsed shader function /// - functionData: Reference to an ``OBSShaderFunction`` struct, which will be updated by this function /// - token: Pointer to a ``cf_token`` element used to interact with the shader parser provided by ``libobs`` /// - end: The sentinel character at which analysis (and parsing) should stop private func analyzeFunction( function: inout UnsafeMutablePointer<shader_func>, functionData: inout OBSShaderFunction, token: inout UnsafeMutablePointer<cf_token>, end: String ) throws { let uniformNames = (uniforms.filter { !$0.value.storageType.contains(.typeTexture) }).keys while token.pointee.type != CFTOKEN_NONE { token = token.successor() if token.pointee.str.isEqualTo(end) { break } let stringToken = token.pointee.str.getString() if token.pointee.type == CFTOKEN_NAME { if uniformNames.contains(stringToken) && functionData.requiresUniformBuffers == false { functionData.requiresUniformBuffers = true } if let function = functions[stringToken] { if function.requiresUniformBuffers && functionData.requiresUniformBuffers == false { functionData.requiresUniformBuffers = true } functionData.textures.append(contentsOf: function.textures) functionData.samplers.append(contentsOf: function.samplers) } if type == .fragment { for uniform in uniforms.values { if stringToken == uniform.name && uniform.storageType.contains(.typeTexture) { functionData.textures.append(stringToken) } } for i in 0..<parser.samplers.num { let sampler: UnsafeMutablePointer<shader_sampler>?
= parser.samplers.array.advanced(by: i) guard let sampler, let samplerName = sampler.pointee.name else { break } if stringToken == String(cString: samplerName) { functionData.samplers.append(stringToken) } } } } else if token.pointee.type == CFTOKEN_OTHER { if token.pointee.str.isEqualTo("{") { try analyzeFunction(function: &function, functionData: &functionData, token: &token, end: "}") } else if token.pointee.str.isEqualTo("(") { try analyzeFunction(function: &function, functionData: &functionData, token: &token, end: ")") } } } } /// Transpiles the uniform global variables used by the shader into a `UniformData` struct that contains the /// uniforms. /// - Returns: String representing the uniform data struct private func transpileUniforms() throws -> String { var output = [String]() for uniformName in uniformsOrder { if var uniform = uniforms[uniformName] { uniform.isStage = false uniform.attributeId = 0 if !uniform.storageType.contains(.typeTexture) { let variableString = try transpileVariable(variable: uniform) output.append("\(variableString);") } } } if output.count > 0 { let replacements = [ ("[variable]", output.joined(separator: "\n")), ("[typename]", "UniformData"), ] let uniformString = replacements.reduce(into: MSLTemplates.shaderStruct) { string, replacement in string = string.replacingOccurrences(of: replacement.0, with: replacement.1) } return uniformString } else { return "" } } /// Transpiles the vertex data structs used by the shader /// - Returns: String representing the vertex data structs private func transpileStructs() throws -> String { var output = [String]() for var shaderStruct in structs.values { if shaderStruct.storageType.isSuperset(of: [.typeInput, .typeOutput]) { /// Metal does not support using the same attribute mappings for structs as input to shader functions /// and output. They need to use different mappings and thus every "InOut" struct by `libobs` needs to /// be split up into a separate input and output struct type. 
for suffix in ["_In", "_Out"] { var variables = [String]() for (structVariableId, var structVariable) in shaderStruct.members.enumerated() { let variableString: String switch suffix { case "_In": structVariable.storageType.formUnion([.typeInput]) structVariable.attributeId = structVariableId variableString = try transpileVariable(variable: structVariable) structVariable.storageType.remove([.typeInput]) case "_Out": structVariable.storageType.formUnion([.typeOutput]) variableString = try transpileVariable(variable: structVariable) structVariable.storageType.remove([.typeOutput]) default: throw ParserError.parseFail("Shader struct with unknown suffix encountered") } variables.append("\(variableString);") shaderStruct.members[structVariableId] = structVariable } let replacements = [ ("[variable]", variables.joined(separator: "\n")), ("[typename]", "\(shaderStruct.name)\(suffix)"), ] let result = replacements.reduce(into: MSLTemplates.shaderStruct) { string, replacement in string = string.replacingOccurrences(of: replacement.0, with: replacement.1) } output.append(result) } } else { var variables = [String]() for (structVariableId, var structVariable) in shaderStruct.members.enumerated() { if shaderStruct.storageType.contains(.typeInput) { structVariable.storageType.insert(.typeInput) structVariable.attributeId = structVariableId } else if shaderStruct.storageType.contains(.typeOutput) { structVariable.storageType.insert(.typeOutput) } let variableString = try transpileVariable(variable: structVariable) structVariable.storageType.subtract([.typeInput, .typeOutput]) variables.append("\(variableString);") shaderStruct.members[structVariableId] = structVariable } let replacements = [ ("[variable]", variables.joined(separator: "\n")), ("[typename]", shaderStruct.name), ] let result = replacements.reduce(into: MSLTemplates.shaderStruct) { string, replacement in string = string.replacingOccurrences(of: replacement.0, with: replacement.1) } output.append(result) } } if
output.count > 0 { return output.joined(separator: "\n\n") } else { return "" } } /// Transpiles a shader function into its MSL variant /// - Returns: String representing the transpiled MSL shader function private func transpileFunctions() throws -> String { var output = [String]() for functionName in functionsOrder { guard let function = functions[functionName], var token = function.gsFunction.pointee.start else { throw ParserError.parseFail("Shader function has no name") } var stageConsumed = false let isMain = functionName == "main" var variables = [String]() for var variable in function.arguments { if isMain && !stageConsumed { variable.isStage = true stageConsumed = true } try variables.append(transpileVariable(variable: variable)) } /// As Metal has no support for global constants, the constant data needs to be wrapped into a `struct` /// and the associated data is uploaded into a vertex buffer at a specific index (30 in this case). /// /// Buffers are not automatically available to shader functions but are passed into the function explicitly /// as arguments. /// /// As `libobs` effects are based around a "main" entry function (something strongly discouraged by Metal), /// each "main" function needs to receive the actual buffer as an argument and each function called _by_ /// the main function and which internally accesses the uniform needs to have that uniform passed /// explicitly as an argument as well.
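/// As a sketch (assuming a hypothetical helper named `adjust`): a helper with the effect signature
/// `float4 adjust(float2 uv)` that reads a uniform is emitted as
/// `float4 adjust(float2 uv, constant UniformData &uniforms)`, and every call site inside "main" is extended to
/// `adjust(uv, uniforms)` so that the buffer reaches the helper explicitly.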
if (uniforms.values.filter { !$0.storageType.contains(.typeTexture) }).count > 0 { if isMain { variables.append("constant UniformData &uniforms [[buffer(30)]]") } else if function.requiresUniformBuffers { variables.append("constant UniformData &uniforms") } } if type == .fragment { var textureId = 0 for uniformName in uniformsOrder { guard let uniform = uniforms[uniformName] else { break } if uniform.storageType.contains(.typeTexture) { if isMain { let variableString = try transpileVariable(variable: uniform) variables.append("\(variableString) [[texture(\(textureId))]]") textureId += 1 } else if function.textures.contains(uniform.name) { let variableString = try transpileVariable(variable: uniform) variables.append(variableString) } } } var samplerId = 0 for i in 0..<parser.samplers.num { let sampler: UnsafeMutablePointer<shader_sampler>? = parser.samplers.array.advanced(by: i) if let sampler, let samplerName = sampler.pointee.name { let name = String(cString: samplerName) if isMain { let variableString = "sampler \(name) [[sampler(\(samplerId))]]" variables.append(variableString) samplerId += 1 } else if function.samplers.contains(name) { let variableString = "sampler \(name)" variables.append(variableString) } } } } let mappedType = try convertToMTLType(gsType: function.returnType) let functionContent: String var replacements = [(String, String)]() /// Metal shaders do not have "main" functions - a single shader file usually contains all shader functions /// used by an application, each identified by their name and type decorator. This is not supported by OBS, /// so each shader needs to have a "main" function that calls the actual shader function, which thus /// requires a new shader library to be created for each effect file.
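/// The generated entry point therefore takes roughly this MSL shape (identifiers are placeholders):
///
///     [[vertex]] VertData_Out _main(VertData_In vert_in [[stage_in]],
///                                   constant UniformData &uniforms [[buffer(30)]]) { ... }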
if isMain { replacements = [ ("[name]", "_main"), ("[parameters]", variables.joined(separator: ", ")), ] switch type { case .vertex: replacements.append(("[decorator]", "[[vertex]]")) case .fragment: replacements.append(("[decorator]", "[[fragment]]")) default: fatalError("OBSShader: Unsupported shader type \(type)") } let temporaryContent = try transpileFunctionContent(token: &token, end: "}") if type == .fragment && isMain && mappedType == "float3" { replacements.append(("[type]", "float4")) // TODO: Replace with Swift-native Regex once macOS 13+ is minimum target let regex = try NSRegularExpression(pattern: "return (.+);") functionContent = regex.stringByReplacingMatches( in: temporaryContent, range: NSRange(location: 0, length: temporaryContent.utf16.count), withTemplate: "return float4($1, 1);" ) } else { functionContent = temporaryContent replacements.append(("[type]", mappedType)) } replacements.append(("[content]", functionContent)) } else { functionContent = try transpileFunctionContent(token: &token, end: "}") replacements = [ ("[decorator]", ""), ("[type]", mappedType), ("[name]", function.name), ("[parameters]", variables.joined(separator: ", ")), ("[content]", functionContent), ] } let result = replacements.reduce(into: MSLTemplates.function) { string, replacement in string = string.replacingOccurrences(of: replacement.0, with: replacement.1) } output.append(result) } if output.count > 0 { return output.joined(separator: "\n\n") } else { return "" } } /// Transpiles a variable into its MSL variant /// - Parameter variable: Variable to transpile /// - Returns: String representing a transpiled variable /// /// Variables can either be members of a `struct` or an argument to a function. The ``OBSShaderVariable`` instance /// has a `storageType` property which encodes the use of the variable and helps in creation of the appropriate MSL /// string representation.
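/// For example (hypothetical member): a vertex input struct member declared as `float2 uv` with attribute id 1 is
/// emitted as `float2 uv [[attribute(1)]]`, whereas the same member in the corresponding output struct variant
/// would instead carry its converted semantic mapping.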
private func transpileVariable(variable: OBSShaderVariable) throws -> String { var mappings = [String]() var metalMapping: String var indent = 0 let metalType = try convertToMTLType(gsType: variable.type) if variable.storageType.contains(.typeUniform) { indent = 4 } else if variable.storageType.isSuperset(of: [.typeInput, .typeStructMember]) { switch type { case .vertex: indent = 4 /// Attributes are used to associate a member of a uniform `struct` with its data in the vertex buffer /// stage. if let attributeId = variable.attributeId { mappings.append("attribute(\(attributeId))") } case .fragment: indent = 4 if let mappingPointer = variable.gsVariable.pointee.mapping, let mappedString = convertToMTLMapping(gsMapping: String(cString: mappingPointer)) { mappings.append(mappedString) } default: fatalError("OBSShader: Unsupported shader function type \(type)") } } else if variable.storageType.isSuperset(of: [.typeOutput, .typeStructMember]) { indent = 4 if let mappingPointer = variable.gsVariable.pointee.mapping, let mappedString = convertToMTLMapping(gsMapping: String(cString: mappingPointer)) { mappings.append(mappedString) } } else { indent = 0 if variable.isStage { if let mappingPointer = variable.gsVariable.pointee.mapping, let mappedString = convertToMTLMapping(gsMapping: String(cString: mappingPointer)) { mappings.append(mappedString) } else { mappings.append("stage_in") } } } if mappings.count > 0 { metalMapping = " [[\(mappings.joined(separator: ", "))]]" } else { metalMapping = "" } let qualifier = if variable.storageType.contains(.typeConstant) { " constant " } else if variable.isReference { " thread " } else { "" } let name = if variable.isReference { "&\(variable.name)" } else { variable.name } let result = "\(String(repeating: " ", count: indent))\(qualifier)\(metalType) \(name)\(metalMapping)" return result } /// Transpiles the body of a function into its MSL representation /// - Parameters: /// - token: Stateful `libobs` parser token pointer /// - end: 
String representing which ends function body parsing if matched /// - Returns: String representing the body of a MSL shader function /// /// OBS effect function content needs to be transpiled into MSL function content token by token, as each token /// needs to be matched not only against direct translations (e.g., a HLSL function name into its appropriate MSL /// variant) but also to detect if a token represents a uniform variable which will not be available as a global /// variable in MSL, but instead will only exist as part of the `uniform` struct that was explicitly passed into /// the function. /// /// Similarly, if a function call is encountered, the function's metadata needs to be checked for use of such a /// uniform and the call signature extended to explicitly pass the data into the called function. /// /// Because Metal does not implicitly or automagically coerce types (but the effects files sometimes rely on this), /// some arguments and parameters need to be explicitly wrapped in casts to wider types (e.g., a `float3` is /// returned from a fragment shader, but fragment shaders _have to_ provide a `float4`). /// /// There are many such conversions necessary, as MSL is more strict than HLSL or GLSL when it comes to type safety. 
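/// For example, a fragment "main" declared with a `float3` return type has its body rewritten so that
/// `return rgb;` (with `rgb` as a placeholder) becomes `return float4(rgb, 1);`, matching the widened `float4`
/// return type of the generated entry point.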
private func transpileFunctionContent(token: inout UnsafeMutablePointer<cf_token>, end: String) throws -> String { var content = [String]() while token.pointee.type != CFTOKEN_NONE { token = token.successor() if token.pointee.str.isEqualTo(end) { break } let stringToken = token.pointee.str.getString() if token.pointee.type == CFTOKEN_NAME { let type = try convertToMTLType(gsType: stringToken) if stringToken == "obs_glsl_compile" { content.append("false") continue } if type != stringToken { content.append(type) continue } if let intrinsic = try convertToMTLIntrinsic(intrinsic: stringToken) { content.append(intrinsic) continue } if stringToken == "mul" { try content.append(convertToMTLMultiplication(token: &token)) continue } else if stringToken == "mad" { try content.append(convertToMTLMultiplyAdd(token: &token)) continue } else { var skip = false for uniform in uniforms.values { if uniform.name == stringToken && uniform.storageType.contains(.typeTexture) { try content.append(createSampler(token: &token)) skip = true break } } if skip { continue } } if uniforms.keys.contains(stringToken) { let priorToken = token.predecessor() let priorString = priorToken.pointee.str.getString() if priorString != "."
{ content.append("uniforms.\(stringToken)") continue } } var skip = false for shaderStruct in structs.values { if shaderStruct.name == stringToken { if shaderStruct.storageType.isSuperset(of: [.typeInput, .typeOutput]) { content.append("\(stringToken)_Out") skip = true break } } } if skip { continue } if let comparison = try convertToMTLComparison(token: &token) { content.append(comparison) continue } content.append(stringToken) } else if token.pointee.type == CFTOKEN_OTHER { if token.pointee.str.isEqualTo("{") { let blockContent = try transpileFunctionContent(token: &token, end: "}") content.append("{\(blockContent)}") continue } else if token.pointee.str.isEqualTo("(") { let priorToken = token.predecessor() let functionName = priorToken.pointee.str.getString() var functionParameters = [String]() let parameters = try transpileFunctionContent(token: &token, end: ")") if functionName == "int3" { let intParameters = parameters.split( separator: ",", maxSplits: 3, omittingEmptySubsequences: true) switch intParameters.count { case 3: functionParameters.append( "int(\(intParameters[0])), int(\(intParameters[1])), int(\(intParameters[2]))") case 2: functionParameters.append("int2(\(intParameters[0])), int(\(intParameters[1]))") case 1: functionParameters.append("\(intParameters[0])") default: throw ParserError.parseFail("int3 constructor with invalid number of arguments encountered") } } else { functionParameters.append(parameters) } if let additionalArguments = generateAdditionalArguments(for: functionName) { functionParameters.append(additionalArguments) } content.append("(\(functionParameters.joined(separator: ", ")))") continue } content.append(stringToken) } else { content.append(stringToken) } } return content.joined() } /// Converts a HLSL-like type into a MSL type if possible /// - Parameter gsType: HLSL-like type string /// - Returns: MSL type string private func convertToMTLType(gsType: String) throws -> String { switch gsType { case "texture2d": return
"texture2d<float>" case "texture3d": return "texture3d<float>" case "texture_cube": return "texturecube<float>" case "texture_rect": throw ParserError.unsupportedType case "half2": return "float2" case "half3": return "float3" case "half4": return "float4" case "half": return "float" case "min16float2": return "half2" case "min16float3": return "half3" case "min16float4": return "half4" case "min16float": return "half" case "min10float": throw ParserError.unsupportedType case "double": throw ParserError.unsupportedType case "min16int2": return "short2" case "min16int3": return "short3" case "min16int4": return "short4" case "min16int": return "short" case "min16uint2": return "ushort2" case "min16uint3": return "ushort3" case "min16uint4": return "ushort4" case "min16uint": return "ushort" case "min13int": throw ParserError.unsupportedType default: return gsType } } /// Converts an HLSL-like uniform mapping into a MSL attribute decoration if possible /// - Parameter gsMapping: HLSL-like mapping /// - Returns: MSL attribute string private func convertToMTLMapping(gsMapping: String) -> String? { switch gsMapping { case "POSITION": return "position" case "VERTEXID": return "vertex_id" default: return nil } } /// Converts a HLSL-like comparison to a vector-safe MSL comparison operation /// - Parameter token: Start token of the comparison in the function body /// - Returns: MSL comparison operation /// /// A comparison operation that involves a vector will always result in a boolean vector in MSL (and not a scalar /// value). Thus any function that compares two vectors will also produce a vector result /// (e.g., float2 == float2 -> bool2). This will break when a ternary expression is used, as its first element /// (the condition) needs to be a scalar boolean in MSL. /// /// Wrapping the comparison in `all` ensures that a single scalar `true` is returned if all elements of the /// resulting boolean vectors are `true` as well.
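/// For example (hypothetical operands): a ternary such as `(uv == float2(0, 0)) ? a : b` would produce a `bool2`
/// condition in MSL; rewriting it as `all(uv == float2(0, 0)) ? a : b` reduces the comparison to the scalar
/// boolean the ternary requires.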
private func convertToMTLComparison(token: inout UnsafeMutablePointer<cf_token>) throws -> String? { var isComparator = false let nextToken = token.successor() if nextToken.pointee.type == CFTOKEN_OTHER { let comparators = ["==", "!=", "<", "<=", ">=", ">"] for comparator in comparators { if nextToken.pointee.str.isEqualTo(comparator) { isComparator = true break } } } if isComparator { var cfp = parser.cfp cfp.cur_token = token let lhs = cfp.cur_token.pointee.str.getString() guard cfp.advanceToken() else { throw ParserError.missingNextToken } let comparator = cfp.cur_token.pointee.str.getString() guard cfp.advanceToken() else { throw ParserError.missingNextToken } let rhs = cfp.cur_token.pointee.str.getString() return "all(\(lhs) \(comparator) \(rhs))" } else { return nil } } /// Converts a HLSL-like intrinsic into its MSL representation /// - Parameter intrinsic: HLSL-like intrinsic string /// - Returns: MSL intrinsic string private func convertToMTLIntrinsic(intrinsic: String) throws -> String? { switch intrinsic { case "clip": throw ParserError.unsupportedType case "ddx": return "dfdx" case "ddy": return "dfdy" case "frac": return "fract" case "lerp": return "mix" default: return nil } } /// Converts a HLSL-like multiplication function call into a direct multiplication /// - Parameter token: Start token of the multiplication in the function body /// - Returns: MSL multiplication string private func convertToMTLMultiplication(token: inout UnsafeMutablePointer<cf_token>) throws -> String { var cfp = parser.cfp cfp.cur_token = token guard cfp.advanceToken() else { throw ParserError.missingNextToken } guard cfp.tokenIsEqualTo("(") else { throw ParserError.unexpectedToken } guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let lhs = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.advanceToken() else { throw ParserError.missingNextToken } cfp.cur_token = cfp.cur_token.predecessor() let rhs = try transpileFunctionContent(token: &cfp.cur_token, end:
")") token = cfp.cur_token return "(\(lhs)) * (\(rhs))" } /// Converts a HLSL-like multiply+add function call into a direct multiplication followed by addition /// - Parameter token: Start token of the multiply+add in the function body /// - Returns: MSL multiplication and addition string private func convertToMTLMultiplyAdd(token: inout UnsafeMutablePointer<cf_token>) throws -> String { var cfp = parser.cfp cfp.cur_token = token guard cfp.advanceToken() else { throw ParserError.missingNextToken } guard cfp.tokenIsEqualTo("(") else { throw ParserError.unexpectedToken } guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let first = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let second = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let third = try transpileFunctionContent(token: &cfp.cur_token, end: ")") token = cfp.cur_token return "((\(first)) * (\(second))) + (\(third))" } /// Creates an MSL sampler call from a HLSL-like sampler call /// - Parameter token: Start token of the sampler call in the function /// - Returns: String of an MSL sampler call private func createSampler(token: inout UnsafeMutablePointer<cf_token>) throws -> String { var cfp = parser.cfp cfp.cur_token = token let stringToken = token.pointee.str.getString() guard cfp.advanceToken() else { throw ParserError.missingNextToken } guard cfp.tokenIsEqualTo(".") else { throw ParserError.unexpectedToken } guard cfp.advanceToken() else { throw ParserError.missingNextToken } guard cfp.cur_token.pointee.type == CFTOKEN_NAME else { throw ParserError.unexpectedToken } let textureCall: String if cfp.tokenIsEqualTo("Sample") { textureCall = try createTextureCall(token: &cfp.cur_token, callType: .sample) } else if cfp.tokenIsEqualTo("SampleBias") { textureCall = try createTextureCall(token: &cfp.cur_token, callType: .sampleBias) } else if
cfp.tokenIsEqualTo("SampleGrad") { textureCall = try createTextureCall(token: &cfp.cur_token, callType: .sampleGrad) } else if cfp.tokenIsEqualTo("SampleLevel") { textureCall = try createTextureCall(token: &cfp.cur_token, callType: .sampleLevel) } else if cfp.tokenIsEqualTo("Load") { textureCall = try createTextureCall(token: &cfp.cur_token, callType: .load) } else { throw ParserError.missingNextToken } token = cfp.cur_token return "\(stringToken).\(textureCall)" } /// Creates a MSL sampler call based on the sampling type /// - Parameters: /// - token: Start token of the sampler call arguments in the function body /// - callType: Type of sampling used /// - Returns: String of an MSL sampler call private func createTextureCall(token: inout UnsafeMutablePointer, callType: SampleVariant) throws -> String { var cfp = parser.cfp cfp.cur_token = token guard cfp.advanceToken() else { throw ParserError.missingNextToken } guard cfp.tokenIsEqualTo("(") else { throw ParserError.unexpectedToken } guard cfp.hasNextToken() else { throw ParserError.missingNextToken } switch callType { case .sample: let first = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let second = try transpileFunctionContent(token: &cfp.cur_token, end: ")") token = cfp.cur_token return "sample(\(first), \(second))" case .sampleBias: let first = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let second = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let third = try transpileFunctionContent(token: &cfp.cur_token, end: ")") token = cfp.cur_token return "sample(\(first), \(second), bias(\(third)))" case .sampleGrad: let first = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let 
second = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let third = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let fourth = try transpileFunctionContent(token: &cfp.cur_token, end: ")") token = cfp.cur_token return "sample(\(first), \(second), gradient2d(\(third), \(fourth)))" case .sampleLevel: let first = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let second = try transpileFunctionContent(token: &cfp.cur_token, end: ",") guard cfp.hasNextToken() else { throw ParserError.missingNextToken } let third = try transpileFunctionContent(token: &cfp.cur_token, end: ")") token = cfp.cur_token return "sample(\(first), \(second), level(\(third)))" case .load: let first = try transpileFunctionContent(token: &cfp.cur_token, end: ")") let loadCall: String /// Many load calls in OBS effects files rely on implicit type conversion, which is not allowed in MSL in /// addition to `read` calls only accepting a `uint2` followed by a `uint`. Any instance of a `int3` thus /// needs to be converted into the appropriate variant compatible with the `read` call. if first.hasPrefix("int3(") { let loadParameters = first[ first.index(first.startIndex, offsetBy: 5).. String? { var output = [String]() for function in functions.values { if function.name != functionName { continue } if function.requiresUniformBuffers { output.append("uniforms") } for texture in function.textures { for uniform in uniforms.values { if uniform.name == texture && uniform.storageType.contains(.typeTexture) { output.append(texture) } } } for sampler in function.samplers { for i in 0..? 
= parser.samplers.array.advanced(by: i) if let samplerPointer { if sampler == String(cString: samplerPointer.pointee.name) { output.append(sampler) } } } } } if output.count > 0 { return output.joined(separator: ", ") } return nil } deinit { withUnsafeMutablePointer(to: &parser) { shader_parser_free($0) } } } obs-studio-32.1.0-sources/libobs-metal/libobs+Extensions.swift000644 001751 001751 00000032057 15153330235 025300 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal import simd public enum OBSLogLevel: Int32 { case error = 100 case warning = 200 case info = 300 case debug = 400 } extension strref { mutating func getString() -> String { let buffer = UnsafeRawBufferPointer(start: self.array, count: self.len) let string = String(decoding: buffer, as: UTF8.self) return string } mutating func isEqualTo(_ comparison: String) -> Bool { return strref_cmp(&self, comparison.cString(using: .utf8)) == 0 } mutating func isEqualToCString(_ comparison: UnsafeMutablePointer?) 
-> Bool { if let comparison { let result = withUnsafeMutablePointer(to: &self) { strref_cmp($0, comparison) == 0 } return result } return false } } extension cf_parser { mutating func advanceToken() -> Bool { let result = withUnsafeMutablePointer(to: &self) { cf_next_token($0) } return result } mutating func hasNextToken() -> Bool { let result = withUnsafeMutablePointer(to: &self) { var nextToken: UnsafeMutablePointer? switch $0.pointee.cur_token.pointee.type { case CFTOKEN_SPACETAB, CFTOKEN_NEWLINE, CFTOKEN_NONE: nextToken = $0.pointee.cur_token default: nextToken = $0.pointee.cur_token.advanced(by: 1) } if var nextToken { while nextToken.pointee.type == CFTOKEN_SPACETAB || nextToken.pointee.type == CFTOKEN_NEWLINE { nextToken = nextToken.successor() } return nextToken.pointee.type != CFTOKEN_NONE } else { return false } } return result } mutating func tokenIsEqualTo(_ comparison: String) -> Bool { let result = withUnsafeMutablePointer(to: &self) { cf_token_is($0, comparison.cString(using: .utf8)) } return result } } extension gs_shader_param_type { var size: Int { switch self { case GS_SHADER_PARAM_BOOL, GS_SHADER_PARAM_INT, GS_SHADER_PARAM_FLOAT: return MemoryLayout.size case GS_SHADER_PARAM_INT2, GS_SHADER_PARAM_VEC2: return MemoryLayout.size * 2 case GS_SHADER_PARAM_INT3, GS_SHADER_PARAM_VEC3: return MemoryLayout.size * 3 case GS_SHADER_PARAM_INT4, GS_SHADER_PARAM_VEC4: return MemoryLayout.size * 4 case GS_SHADER_PARAM_MATRIX4X4: return MemoryLayout.size * 4 * 4 case GS_SHADER_PARAM_TEXTURE: return MemoryLayout.size case GS_SHADER_PARAM_STRING, GS_SHADER_PARAM_UNKNOWN: return 0 default: return 0 } } var mtlSize: Int { switch self { case GS_SHADER_PARAM_BOOL, GS_SHADER_PARAM_INT, GS_SHADER_PARAM_FLOAT: return MemoryLayout.size case GS_SHADER_PARAM_INT2, GS_SHADER_PARAM_VEC2: return MemoryLayout.size case GS_SHADER_PARAM_INT3, GS_SHADER_PARAM_VEC3: return MemoryLayout.size case GS_SHADER_PARAM_INT4, GS_SHADER_PARAM_VEC4: return MemoryLayout.size case 
GS_SHADER_PARAM_MATRIX4X4: return MemoryLayout.size case GS_SHADER_PARAM_TEXTURE: return MemoryLayout.size case GS_SHADER_PARAM_STRING, GS_SHADER_PARAM_UNKNOWN: return 0 default: return 0 } } var mtlAlignment: Int { switch self { case GS_SHADER_PARAM_BOOL, GS_SHADER_PARAM_INT, GS_SHADER_PARAM_FLOAT: return MemoryLayout.alignment case GS_SHADER_PARAM_INT2, GS_SHADER_PARAM_VEC2: return MemoryLayout.alignment case GS_SHADER_PARAM_INT3, GS_SHADER_PARAM_VEC3: return MemoryLayout.alignment case GS_SHADER_PARAM_INT4, GS_SHADER_PARAM_VEC4: return MemoryLayout.alignment case GS_SHADER_PARAM_MATRIX4X4: return MemoryLayout.alignment case GS_SHADER_PARAM_TEXTURE: return 0 case GS_SHADER_PARAM_STRING, GS_SHADER_PARAM_UNKNOWN: return 0 default: return 0 } } } extension gs_color_format { var sRGBVariant: MTLPixelFormat? { switch self { case GS_RGBA: return .rgba8Unorm_srgb case GS_BGRX, GS_BGRA: return .bgra8Unorm_srgb default: return nil } } var mtlFormat: MTLPixelFormat { switch self { case GS_A8: return .a8Unorm case GS_R8: return .r8Unorm case GS_R8G8: return .rg8Unorm case GS_R16: return .r16Unorm case GS_R16F: return .r16Float case GS_RG16: return .rg16Unorm case GS_RG16F: return .rg16Float case GS_R32F: return .r32Float case GS_RG32F: return .rg32Float case GS_RGBA: return .rgba8Unorm case GS_BGRX, GS_BGRA: return .bgra8Unorm case GS_R10G10B10A2: return .rgb10a2Unorm case GS_RGBA16: return .rgba16Unorm case GS_RGBA16F: return .rgba16Float case GS_RGBA32F: return .rgba32Float case GS_DXT1: return .bc1_rgba case GS_DXT3: return .bc2_rgba case GS_DXT5: return .bc3_rgba default: return .invalid } } } extension gs_color_space { var colorFormat: gs_color_format { switch self { case GS_CS_SRGB_16F, GS_CS_709_SCRGB: return GS_RGBA16F default: return GS_RGBA } } var pixelFormat: MTLPixelFormat? 
{ switch self { case GS_CS_SRGB: .bgra8Unorm_srgb case GS_CS_709_SCRGB: nil case GS_CS_709_EXTENDED: .bgra10_xr_srgb case GS_CS_SRGB_16F: nil default: nil } } } extension gs_depth_test { var mtlFunction: MTLCompareFunction { switch self { case GS_NEVER: return .never case GS_LESS: return .less case GS_LEQUAL: return .lessEqual case GS_EQUAL: return .equal case GS_GEQUAL: return .greaterEqual case GS_GREATER: return .greater case GS_NOTEQUAL: return .notEqual case GS_ALWAYS: return .always default: return .never } } } extension gs_stencil_op_type { var mtlOperation: MTLStencilOperation { switch self { case GS_KEEP: return .keep case GS_ZERO: return .zero case GS_REPLACE: return .replace case GS_INCR: return .incrementWrap case GS_DECR: return .decrementWrap case GS_INVERT: return .invert default: return .keep } } } extension gs_blend_type { var blendFactor: MTLBlendFactor? { switch self { case GS_BLEND_ZERO: return .zero case GS_BLEND_ONE: return .one case GS_BLEND_SRCCOLOR: return .sourceColor case GS_BLEND_INVSRCCOLOR: return .oneMinusSourceColor case GS_BLEND_SRCALPHA: return .sourceAlpha case GS_BLEND_INVSRCALPHA: return .oneMinusSourceAlpha case GS_BLEND_DSTCOLOR: return .destinationColor case GS_BLEND_INVDSTCOLOR: return .oneMinusDestinationColor case GS_BLEND_DSTALPHA: return .destinationAlpha case GS_BLEND_INVDSTALPHA: return .oneMinusDestinationAlpha case GS_BLEND_SRCALPHASAT: return .sourceAlphaSaturated default: return nil } } } extension gs_blend_op_type { var mtlOperation: MTLBlendOperation? { switch self { case GS_BLEND_OP_ADD: return .add case GS_BLEND_OP_MAX: return .max case GS_BLEND_OP_MIN: return .min case GS_BLEND_OP_SUBTRACT: return .subtract case GS_BLEND_OP_REVERSE_SUBTRACT: return .reverseSubtract default: return nil } } } extension gs_cull_mode { var mtlMode: MTLCullMode { switch self { case GS_BACK: return .back case GS_FRONT: return .front default: return .none } } } extension gs_draw_mode { var mtlPrimitive: MTLPrimitiveType? 
{
        switch self {
        case GS_POINTS: return .point
        case GS_LINES: return .line
        case GS_LINESTRIP: return .lineStrip
        case GS_TRIS: return .triangle
        case GS_TRISTRIP: return .triangleStrip
        default: return nil
        }
    }
}

extension gs_rect {
    var mtlViewPort: MTLViewport {
        MTLViewport(
            originX: Double(self.x), originY: Double(self.y),
            width: Double(self.cx), height: Double(self.cy),
            znear: 0.0, zfar: 1.0)
    }

    var mtlScissorRect: MTLScissorRect {
        MTLScissorRect(
            x: Int(self.x), y: Int(self.y),
            width: Int(self.cx), height: Int(self.cy))
    }
}

extension gs_zstencil_format {
    var mtlFormat: MTLPixelFormat {
        switch self {
        case GS_ZS_NONE: return .invalid
        case GS_Z16: return .depth16Unorm
        case GS_Z24_S8: return .depth24Unorm_stencil8
        case GS_Z32F: return .depth32Float
        case GS_Z32F_S8X24: return .depth32Float_stencil8
        default: return .invalid
        }
    }
}

extension gs_index_type {
    var mtlType: MTLIndexType? {
        switch self {
        case GS_UNSIGNED_SHORT: return .uint16
        case GS_UNSIGNED_LONG: return .uint32
        default: return nil
        }
    }

    var byteSize: Int {
        guard let indexType = self.mtlType else { return 0 }

        let byteSize = if indexType == .uint16 { 2 } else { 4 }
        return byteSize
    }
}

extension gs_address_mode {
    var mtlMode: MTLSamplerAddressMode? {
        switch self {
        case GS_ADDRESS_WRAP: return .repeat
        case GS_ADDRESS_CLAMP: return .clampToEdge
        case GS_ADDRESS_MIRROR: return .mirrorRepeat
        case GS_ADDRESS_BORDER: return .clampToBorderColor
        case GS_ADDRESS_MIRRORONCE: return .mirrorClampToEdge
        default: return nil
        }
    }
}

extension gs_sample_filter {
    var minMagFilter: MTLSamplerMinMagFilter? {
        switch self {
        case GS_FILTER_POINT, GS_FILTER_MIN_MAG_POINT_MIP_LINEAR, GS_FILTER_MIN_POINT_MAG_LINEAR_MIP_POINT,
            GS_FILTER_MIN_POINT_MAG_MIP_LINEAR:
            return .nearest
        case GS_FILTER_LINEAR, GS_FILTER_MIN_LINEAR_MAG_MIP_POINT, GS_FILTER_MIN_LINEAR_MAG_POINT_MIP_LINEAR,
            GS_FILTER_MIN_MAG_LINEAR_MIP_POINT, GS_FILTER_ANISOTROPIC:
            return .linear
        default: return nil
        }
    }

    var mipFilter: MTLSamplerMipFilter?
{ switch self { case GS_FILTER_POINT, GS_FILTER_MIN_MAG_POINT_MIP_LINEAR, GS_FILTER_MIN_POINT_MAG_LINEAR_MIP_POINT, GS_FILTER_MIN_POINT_MAG_MIP_LINEAR: return .nearest case GS_FILTER_LINEAR, GS_FILTER_MIN_LINEAR_MAG_MIP_POINT, GS_FILTER_MIN_LINEAR_MAG_POINT_MIP_LINEAR, GS_FILTER_MIN_MAG_LINEAR_MIP_POINT, GS_FILTER_ANISOTROPIC: return .linear default: return nil } } } obs-studio-32.1.0-sources/libobs-metal/MetalShader.swift000644 001751 001751 00000026610 15153330235 024062 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal class MetalShader { /// This class wraps a single uniform shader variable, which will hold the data associated with the uniform updated /// by `libobs` at each render loop, which is then converted and set as vertex or fragment bytes for a render pass /// by the ``MetalDevice/draw`` function. class ShaderUniform { let name: String let gsType: gs_shader_param_type fileprivate let textureSlot: Int var samplerState: MTLSamplerState? fileprivate let byteOffset: Int var currentValues: [UInt8]? var defaultValues: [UInt8]? 
        fileprivate var hasUpdates: Bool

        init(
            name: String, gsType: gs_shader_param_type, textureSlot: Int, samplerState: MTLSamplerState?,
            byteOffset: Int
        ) {
            self.name = name
            self.gsType = gsType
            self.textureSlot = textureSlot
            self.samplerState = samplerState
            self.byteOffset = byteOffset
            self.currentValues = nil
            self.defaultValues = nil
            self.hasUpdates = false
        }

        /// Sets the data for the shader uniform
        /// - Parameters:
        ///   - data: Pointer to data of type `T`
        ///   - size: Size of data available at the pointer provided by `data`
        ///
        /// This function will reinterpret the data provided by the pointer as raw bytes and store it as raw bytes on
        /// the Uniform.
        public func setParameter<T>(data: UnsafePointer<T>?, size: Int) {
            guard let data else {
                assertionFailure(
                    "MetalShader.ShaderUniform: Attempted to set a shader parameter with an empty data pointer")
                return
            }

            data.withMemoryRebound(to: UInt8.self, capacity: size) {
                self.currentValues = Array(UnsafeBufferPointer(start: $0, count: size))
            }

            hasUpdates = true
        }
    }

    /// This struct serves as a data container to communicate shader metadata between the ``OBSShader`` shader
    /// transpiler and the actual ``MetalShader`` instances created with them.
    struct ShaderData {
        let uniforms: [ShaderUniform]
        let bufferOrder: [MetalBuffer.BufferDataType]
        let vertexDescriptor: MTLVertexDescriptor?
        let samplerDescriptors: [MTLSamplerDescriptor]?
        let bufferSize: Int
        let textureCount: Int
    }

    private weak var device: MetalDevice?

    let source: String

    private var uniformData: [UInt8]
    private var uniformSize: Int
    private var uniformBuffer: MTLBuffer?

    private let library: MTLLibrary
    let function: MTLFunction

    var uniforms: [ShaderUniform]
    var vertexDescriptor: MTLVertexDescriptor?
    var textureCount = 0
    var samplers: [MTLSamplerState]?
    let type: MTLFunctionType
    let bufferOrder: [MetalBuffer.BufferDataType]
    var viewProjection: ShaderUniform?
init(device: MetalDevice, source: String, type: MTLFunctionType, data: ShaderData) throws { self.device = device self.source = source self.type = type self.uniforms = data.uniforms self.bufferOrder = data.bufferOrder self.uniformSize = (data.bufferSize + 0x0F) & ~0x0F self.uniformData = [UInt8](repeating: 0, count: self.uniformSize) self.textureCount = data.textureCount switch type { case .vertex: guard let descriptor = data.vertexDescriptor else { throw MetalError.MetalShaderError.missingVertexDescriptor } self.vertexDescriptor = descriptor self.viewProjection = self.uniforms.first(where: { $0.name == "ViewProj" }) case .fragment: guard let samplerDescriptors = data.samplerDescriptors else { throw MetalError.MetalShaderError.missingSamplerDescriptors } var samplers = [MTLSamplerState]() samplers.reserveCapacity(samplerDescriptors.count) for descriptor in samplerDescriptors { guard let samplerState = device.device.makeSamplerState(descriptor: descriptor) else { throw MetalError.MTLDeviceError.samplerStateCreationFailure } samplers.append(samplerState) } self.samplers = samplers default: fatalError("MetalShader: Unsupported shader type \(type)") } do { library = try device.device.makeLibrary(source: source, options: nil) } catch { throw MetalError.MTLDeviceError.shaderCompilationFailure("Failed to create shader library") } guard let function = library.makeFunction(name: "_main") else { throw MetalError.MTLDeviceError.shaderCompilationFailure("Failed to create '_main' function") } self.function = function } /// Updates the Metal-specific data associated with a ``ShaderUniform`` with the raw bytes provided by `libobs` /// - Parameter uniform: Inout reference to the ``ShaderUniform`` instance /// /// Uniform data is provided by `libobs` precisely in the format required by the shader (and interpreted by /// `libobs`), which means that the raw bytes stored on the ``ShaderUniform`` are usually already in the correct /// order and can be used without reinterpretation. 
/// /// The exception to this rule is data for textures, which represents a copy of a `gs_shader_texture` struct that /// itself contains the pointer address of an `OpaquePointer` for a ``MetalTexture`` instance. private func updateUniform(uniform: inout ShaderUniform) { guard let device = self.device else { return } guard let currentValues = uniform.currentValues else { return } if uniform.gsType == GS_SHADER_PARAM_TEXTURE { var textureObject: OpaquePointer? var isSrgb = false currentValues.withUnsafeBufferPointer { $0.baseAddress?.withMemoryRebound(to: gs_shader_texture.self, capacity: 1) { textureObject = $0.pointee.tex isSrgb = $0.pointee.srgb } } if let textureObject { let texture: MetalTexture = unretained(UnsafeRawPointer(textureObject)) if texture.sRGBtexture != nil, isSrgb { device.renderState.textures[uniform.textureSlot] = texture.sRGBtexture! } else { device.renderState.textures[uniform.textureSlot] = texture.texture } } if let samplerState = uniform.samplerState { device.renderState.samplers[uniform.textureSlot] = samplerState uniform.samplerState = nil } } else { if uniform.hasUpdates { let startIndex = uniform.byteOffset let endIndex = uniform.byteOffset + currentValues.count uniformData.replaceSubrange(startIndex...size * data.count let alignedSize = (size + 0x0F) & ~0x0F if buffer != nil { if buffer!.length == alignedSize { buffer!.contents().copyMemory(from: data, byteCount: size) return } } buffer = device.device.makeBuffer(bytes: data, length: alignedSize) } /// Sets uniform data for a current render encoder either directly as a buffer /// - Parameter encoder: `MTLRenderCommandEncoder` for a render pass that requires the uniform data /// /// Uniform data will be uploaded at index 30 (the very last available index) and is available as a single /// contiguous block of data. Uniforms are declared as structs in the Metal Shaders and explicitly passed into /// each function that requires access to them. 
func uploadShaderParameters(encoder: MTLRenderCommandEncoder) { for var uniform in uniforms { updateUniform(uniform: &uniform) } guard uniformSize > 0 else { return } switch function.functionType { case .vertex: switch uniformData.count { case 0..<4096: encoder.setVertexBytes(&uniformData, length: uniformData.count, index: 30) default: createOrUpdateBuffer(buffer: &uniformBuffer, data: &uniformData) #if DEBUG uniformBuffer?.label = "Vertex shader uniform buffer" #endif encoder.setVertexBuffer(uniformBuffer, offset: 0, index: 30) } case .fragment: switch uniformData.count { case 0..<4096: encoder.setFragmentBytes(&uniformData, length: uniformData.count, index: 30) default: createOrUpdateBuffer(buffer: &uniformBuffer, data: &uniformData) #if DEBUG uniformBuffer?.label = "Fragment shader uniform buffer" #endif encoder.setFragmentBuffer(uniformBuffer, offset: 0, index: 30) } default: fatalError("MetalShader: Unsupported shader type \(function.functionType)") } } /// Gets an opaque pointer for the ``MetalShader`` instance and increases its reference count by one /// - Returns: `OpaquePointer` to class instance /// /// > Note: Use this method when the instance is to be shared via an `OpaquePointer` and needs to be retained. Any /// opaque pointer shared this way needs to be converted into a retained reference again to ensure automatic /// deinitialization by the Swift runtime. 
func getRetained() -> OpaquePointer { let retained = Unmanaged.passRetained(self).toOpaque() return OpaquePointer(retained) } /// Gets an opaque pointer for the ``MetalShader`` instance without increasing its reference count /// - Returns: `OpaquePointer` to class instance func getUnretained() -> OpaquePointer { let unretained = Unmanaged.passUnretained(self).toOpaque() return OpaquePointer(unretained) } } obs-studio-32.1.0-sources/libobs-metal/MetalRenderState.swift000644 001751 001751 00000006074 15153330235 025076 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal import simd /// The MetalRenderState struct emulates a state object like Direct3D's `ID3D11DeviceContext`, holding references to /// elements of a render pipeline that would be considered the "current" variant of each. 
/// /// Typical "current" state elements include (but are not limited to): /// /// * Variant of the render target for linear color writes /// * Variant of the render target for color writes with automatic sRGB gamma encoding /// * View matrix and view projection matrix /// * Vertex buffer and optional index buffer /// * Depth stencil attachment /// * Vertex shader /// * Fragment shader /// * View port size /// * Cull mode /// /// These references are swapped out by OBS for each "scene" and "scene items" within it before issuing draw calls, /// thus actual pipelines need to be created "on demand" based on the pipeline descriptor and stored in a cache to /// avoid the cost of pipeline validation on consecutive render passes. struct MetalRenderState { var viewMatrix: matrix_float4x4 var projectionMatrix: matrix_float4x4 var viewProjectionMatrix: matrix_float4x4 var renderTarget: MetalTexture? var sRGBrenderTarget: MetalTexture? var depthStencilAttachment: MetalTexture? var isRendertargetChanged = false var vertexBuffer: MetalVertexBuffer? var indexBuffer: MetalIndexBuffer? var vertexShader: MetalShader? var fragmentShader: MetalShader? var viewPort = MTLViewport() var cullMode = MTLCullMode.none var scissorRectEnabled: Bool var scissorRect: MTLScissorRect? var gsColorSpace: gs_color_space var useSRGBGamma = false var swapChain: OBSSwapChain? var isInDisplaysRenderStage = false var pipelineDescriptor = MTLRenderPipelineDescriptor() var clearPipelineDescriptor = MTLRenderPipelineDescriptor() var renderPassDescriptor = MTLRenderPassDescriptor() var depthStencilDescriptor = MTLDepthStencilDescriptor() var commandBuffer: MTLCommandBuffer? 
var textures = [MTLTexture?](repeating: nil, count: Int(GS_MAX_TEXTURES)) var samplers = [MTLSamplerState?](repeating: nil, count: Int(GS_MAX_TEXTURES)) var projections = [matrix_float4x4]() var inFlightRenderTargets = Set() } obs-studio-32.1.0-sources/libobs-metal/CMakeLists.txt000644 001751 001751 00000003664 15153330235 023357 0ustar00runnerrunner000000 000000 cmake_minimum_required(VERSION 3.28...3.30) add_library(libobs-metal SHARED) add_library(OBS::libobs-metal ALIAS libobs-metal) target_sources( libobs-metal PRIVATE CVPixelFormat+Extensions.swift MTLCullMode+Extensions.swift MTLOrigin+Extensions.swift MTLPixelFormat+Extensions.swift MTLRegion+Extensions.swift MTLSize+Extensions.swift MTLTexture+Extensions.swift MTLTextureDescriptor+Extensions.swift MTLTextureType+Extensions.swift MTLViewport+Extensions.swift MetalBuffer.swift MetalDevice.swift MetalError.swift MetalRenderState.swift MetalShader+Extensions.swift MetalShader.swift MetalStageBuffer.swift MetalTexture.swift OBSShader.swift OBSSwapChain.swift Sequence+Hashable.swift libobs+Extensions.swift libobs+SignalHandlers.swift libobs-metal-Bridging-Header.h metal-indexbuffer.swift metal-samplerstate.swift metal-shader.swift metal-stagesurf.swift metal-subsystem.swift metal-swapchain.swift metal-texture2d.swift metal-texture3d.swift metal-unimplemented.swift metal-vertexbuffer.swift metal-zstencilbuffer.swift ) target_link_libraries(libobs-metal PRIVATE OBS::libobs) target_enable_feature(libobs "Metal renderer") set_property(SOURCE OBSMetalRenderer.swift APPEND PROPERTY COMPILE_FLAGS -emit-objc-header) set_target_properties_obs( libobs-metal PROPERTIES FOLDER core VERSION 0 PREFIX "" ) set_target_xcode_properties( libobs-metal PROPERTIES SWIFT_VERSION 6.0 CLANG_ENABLE_OBJC_ARC YES CLANG_WARN_SUSPICIOUS_IMPLICIT_CONVERSION YES GCC_WARN_SHADOW YES CLANG_ENABLE_MODULES YES CLANG_MODULES_AUTOLINK YES GCC_STRICT_ALIASING YES DEFINES_MODULE YES SWIFT_OBJC_BRIDGING_HEADER 
"${CMAKE_CURRENT_SOURCE_DIR}/libobs-metal-Bridging-Header.h" ) obs-studio-32.1.0-sources/libobs-metal/MTLTextureType+Extensions.swift000644 001751 001751 00000002500 15153330235 026673 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal extension MTLTextureType { /// Converts the Metal texture type into a compatible `libobs` texture type or `nil` if no compatible mapping is /// possible. var gsTextureType: gs_texture_type? { switch self { case .type2D: return GS_TEXTURE_2D case .type3D: return GS_TEXTURE_3D case .typeCube: return GS_TEXTURE_CUBE default: return nil } } } obs-studio-32.1.0-sources/libobs-metal/MTLRegion+Extensions.swift000644 001751 001751 00000002057 15153330235 025623 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal extension MTLRegion: @retroactive Equatable { public static func == (lhs: MTLRegion, rhs: MTLRegion) -> Bool { lhs.origin == rhs.origin && lhs.size == rhs.size } } obs-studio-32.1.0-sources/libobs-metal/metal-stagesurf.swift000644 001751 001751 00000013713 15153330235 024774 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ import Foundation import Metal /// Creates a ``MetalStageBuffer`` instance for use as a stage surface by `libobs` /// - Parameters: /// - device: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs` /// - width: Number of data rows /// - height: Number of data columns /// - format: Color format of the stage surface texture as defined by `libobs`'s `gs_color_format` struct /// - Returns: A ``MetalStageBuffer`` instance that wraps a `MTLBuffer` or a `nil` pointer otherwise /// /// Stage surfaces are used by `libobs` for transfer of image data from the GPU to the CPU. The most common use case is /// to block transfer (blit) the video output texture into a staging texture and then downloading the texture data from /// the staging texture into CPU memory. @_cdecl("device_stagesurface_create") public func device_stagesurface_create(device: UnsafeRawPointer, width: UInt32, height: UInt32, format: gs_color_format) -> OpaquePointer? { let device: MetalDevice = unretained(device) guard let buffer = MetalStageBuffer( device: device, width: Int(width), height: Int(height), format: format.mtlFormat ) else { OBSLog(.error, "device_stagesurface_create: Unable to create MetalStageBuffer with provided format \(format)") return nil } return buffer.getRetained() } /// Requests the deinitialization of the ``MetalStageBuffer`` instance that was shared with `libobs` /// - Parameter stagesurf: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs` /// /// The ownership of the shared pointer is transferred into this function and the instance is placed under Swift's /// memory management again. 
@_cdecl("gs_stagesurface_destroy") public func gs_stagesurface_destroy(stagesurf: UnsafeRawPointer) { let _ = retained(stagesurf) as MetalStageBuffer } /// Gets the "width" of the staging texture /// - Parameter stagesurf: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs` /// - Returns: Width of the buffered image in pixels @_cdecl("gs_stagesurface_get_width") public func gs_stagesurface_get_width(stagesurf: UnsafeRawPointer) -> UInt32 { let stageSurface: MetalStageBuffer = unretained(stagesurf) return UInt32(stageSurface.width) } /// Gets the "height" of the staging texture /// - Parameter stagesurf: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs` /// - Returns: Height of the buffered image in pixels @_cdecl("gs_stagesurface_get_height") public func gs_stagesurface_get_height(stagesurf: UnsafeRawPointer) -> UInt32 { let stageSurface: MetalStageBuffer = unretained(stagesurf) return UInt32(stageSurface.height) } /// Gets the color format of the staged image data /// - Parameter stagesurf: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs` /// - Returns: Color format as `libobs`'s own `gs_color_format` enum value /// /// The Metal color format is automatically converted into its corresponding `gs_color_format` variant. @_cdecl("gs_stagesurface_get_color_format") public func gs_stagesurface_get_color_format(stagesurf: UnsafeRawPointer) -> gs_color_format { let stageSurface: MetalStageBuffer = unretained(stagesurf) return stageSurface.format.gsColorFormat } /// Provides a pointer to memory that contains the buffer's raw data.
/// - Parameters: /// - stagesurf: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs` /// - ptr: Opaque pointer to memory which itself can hold a pointer to the actual image data /// - linesize: Opaque pointer to memory which itself can hold the row size of the image data /// - Returns: `true` if the data can be provided, `false` otherwise /// /// Metal does not provide "map" and "unmap" operations as they exist in Direct3D11, as resource management and /// synchronization need to be handled explicitly by the application. To reduce unnecessary copy operations, the /// original texture's data was copied into a `MTLBuffer` (instead of another texture) using a block transfer on the /// GPU. /// /// As the Metal renderer is only available on Apple Silicon machines, this means that the buffer itself is available /// for direct access by the CPU and thus a pointer to the raw bytes of the buffer can be shared with `libobs`. @_cdecl("gs_stagesurface_map") public func gs_stagesurface_map( stagesurf: UnsafeRawPointer, ptr: UnsafeMutablePointer<UnsafeMutableRawPointer>, linesize: UnsafeMutablePointer<UInt32> ) -> Bool { let stageSurface: MetalStageBuffer = unretained(stagesurf) ptr.pointee = stageSurface.buffer.contents() linesize.pointee = UInt32(stageSurface.width * stageSurface.format.bytesPerPixel!) return true } /// Signals that the downloaded image data of the stage texture is not needed anymore. /// /// - Parameter stagesurf: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs` /// /// This function has no effect as the `MTLBuffer` used by the ``MetalStageBuffer`` does not need to be "unmapped".
@_cdecl("gs_stagesurface_unmap") public func gs_stagesurface_unmap(stagesurf: UnsafeRawPointer) { return } obs-studio-32.1.0-sources/libobs-metal/MTLSize+Extensions.swift000644 001751 001751 00000002105 15153330235 025304 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal extension MTLSize: @retroactive Equatable { public static func == (lhs: MTLSize, rhs: MTLSize) -> Bool { lhs.width == rhs.width && lhs.height == rhs.height && lhs.depth == rhs.depth } } obs-studio-32.1.0-sources/libobs-metal/MetalStageBuffer.swift000644 001751 001751 00000004645 15153330235 025055 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal class MetalStageBuffer { let device: MetalDevice let buffer: MTLBuffer let format: MTLPixelFormat let width: Int let height: Int init?(device: MetalDevice, width: Int, height: Int, format: MTLPixelFormat) { self.device = device self.width = width self.height = height self.format = format guard let bytesPerPixel = format.bytesPerPixel, let buffer = device.device.makeBuffer( length: width * height * bytesPerPixel, options: .storageModeShared ) else { return nil } self.buffer = buffer } /// Gets an opaque pointer for the ``MetalStageBuffer`` instance and increases its reference count by one /// - Returns: `OpaquePointer` to class instance /// /// > Note: Use this method when the instance is to be shared via an `OpaquePointer` and needs to be retained. Any /// opaque pointer shared this way needs to be converted into a retained reference again to ensure automatic /// deinitialization by the Swift runtime. 
func getRetained() -> OpaquePointer { let retained = Unmanaged.passRetained(self).toOpaque() return OpaquePointer(retained) } /// Gets an opaque pointer for the ``MetalStageBuffer`` instance without increasing its reference count /// - Returns: `OpaquePointer` to class instance func getUnretained() -> OpaquePointer { let unretained = Unmanaged.passUnretained(self).toOpaque() return OpaquePointer(unretained) } } obs-studio-32.1.0-sources/libobs-metal/metal-samplerstate.swift000644 001751 001751 00000010402 15153330235 025465 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal /// Creates a new ``MTLSamplerDescriptor`` to share as an opaque pointer with `libobs` /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - info: Sampler information encoded as a `gs_sampler_info` struct /// - Returns: Opaque pointer to a new ``MTLSamplerDescriptor`` instance on success, `nil` otherwise @_cdecl("device_samplerstate_create") public func device_samplerstate_create(device: UnsafeRawPointer, info: gs_sampler_info) -> OpaquePointer? 
{ let device: MetalDevice = unretained(device) guard let sAddressMode = info.address_u.mtlMode, let tAddressMode = info.address_v.mtlMode, let rAddressMode = info.address_w.mtlMode else { assertionFailure("device_samplerstate_create: Invalid address modes provided") return nil } guard let minFilter = info.filter.minMagFilter, let magFilter = info.filter.minMagFilter, let mipFilter = info.filter.mipFilter else { assertionFailure("device_samplerstate_create: Invalid filter modes provided") return nil } let descriptor = MTLSamplerDescriptor() descriptor.sAddressMode = sAddressMode descriptor.tAddressMode = tAddressMode descriptor.rAddressMode = rAddressMode descriptor.minFilter = minFilter descriptor.magFilter = magFilter descriptor.mipFilter = mipFilter descriptor.maxAnisotropy = min(16, max(1, Int(info.max_anisotropy))) descriptor.compareFunction = .always descriptor.borderColor = if (info.border_color & 0x00_00_00_FF) == 0 { .transparentBlack } else if info.border_color == 0xFF_FF_FF_FF { .opaqueWhite } else { .opaqueBlack } guard let samplerState = device.device.makeSamplerState(descriptor: descriptor) else { assertionFailure("device_samplerstate_create: Unable to create sampler state") return nil } let retained = Unmanaged.passRetained(samplerState).toOpaque() return OpaquePointer(retained) } /// Requests the deinitialization of the ``MTLSamplerState`` instance shared with `libobs` /// - Parameter samplerstate: Opaque pointer to ``MTLSamplerState`` instance shared with `libobs` /// /// Ownership of the ``MTLSamplerState`` instance will be transferred into the function, and if this was the last /// strong reference to it, the object will be automatically deinitialized and deallocated by Swift.
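Metal rejects sampler anisotropy values outside its supported range of 1 through 16, so the `max_anisotropy` value supplied by `libobs` has to be clamped; swapping the `min` and `max` calls silently pins the result to one end of the range instead. A minimal pure-Swift sketch of the intended clamp (the `clampAnisotropy` helper name is illustrative, not part of the OBS sources):

```swift
// Clamp a caller-supplied anisotropy value into Metal's supported 1...16 range.
// `clampAnisotropy` is a hypothetical helper, not part of the OBS sources.
func clampAnisotropy(_ requested: UInt32) -> Int {
    min(16, max(1, Int(requested)))
}

print(clampAnisotropy(0))   // lifted to Metal's minimum of 1
print(clampAnisotropy(8))   // passed through unchanged
print(clampAnisotropy(64))  // capped at Metal's maximum of 16
```

Writing the clamp as `max(16, min(1, x))` instead would always evaluate to 16, regardless of the requested value.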
@_cdecl("gs_samplerstate_destroy") public func gs_samplerstate_destroy(samplerstate: UnsafeRawPointer) { let _ = retained(samplerstate) as MTLSamplerState } /// Loads the provided ``MTLSamplerState`` into the current pipeline's sampler array at the requested texture unit /// number /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - samplerstate: Opaque pointer to ``MTLSamplerState`` instance shared with `libobs` /// - unit: Number identifying the "texture slot" used by OBS Studio's renderer. /// /// Texture slot numbers are equivalent to array index and represent a direct mapping between samplers and textures. @_cdecl("device_load_samplerstate") public func device_load_samplerstate(device: UnsafeRawPointer, samplerstate: UnsafeRawPointer, unit: UInt32) { let device: MetalDevice = unretained(device) let samplerState: MTLSamplerState = unretained(samplerstate) device.renderState.samplers[Int(unit)] = samplerState } obs-studio-32.1.0-sources/libobs-metal/MTLViewport+Extensions.swift000644 001751 001751 00000002616 15153330235 026220 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ import Foundation import Metal extension MTLViewport: @retroactive Equatable { /// Checks two ``MTLViewport`` objects for equality /// - Parameters: /// - lhs: First ``MTLViewport`` object /// - rhs: Second ``MTLViewport`` object /// - Returns: `true` if the dimensions and origins of both viewports match, `false` otherwise. public static func == (lhs: MTLViewport, rhs: MTLViewport) -> Bool { lhs.width == rhs.width && lhs.height == rhs.height && lhs.originX == rhs.originX && lhs.originY == rhs.originY } } obs-studio-32.1.0-sources/libobs-metal/Sequence+Hashable.swift000644 001751 001751 00000002335 15153330235 025142 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ extension Sequence where Iterator.Element: Hashable { /// Filters a `Sequence` to only contain its unique elements, retaining the order of first occurrence.
/// - Returns: Filtered `Sequence` with unique elements of original `Sequence` func unique() -> [Iterator.Element] { var seen: Set<Iterator.Element> = [] return filter { seen.insert($0).inserted } } } obs-studio-32.1.0-sources/libobs-metal/metal-subsystem.swift000644 001751 001751 00000132642 15153330235 025032 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ import Foundation import Metal import simd @inlinable public func unretained<Instance>(_ pointer: UnsafeRawPointer) -> Instance where Instance: AnyObject { Unmanaged.fromOpaque(pointer).takeUnretainedValue() } @inlinable public func retained<Instance>(_ pointer: UnsafeRawPointer) -> Instance where Instance: AnyObject { Unmanaged.fromOpaque(pointer).takeRetainedValue() } @inlinable public func OBSLog(_ level: OBSLogLevel, _ format: String, _ args: CVarArg...) { let logMessage = String.localizedStringWithFormat(format, args) logMessage.withCString { cMessage in withVaList([cMessage]) { arguments in blogva(level.rawValue, "%s", arguments) } } } /// Returns the graphics API name implemented by the "device".
/// - Returns: Constant pointer to a C string with the API name /// @_cdecl("device_get_name") public func device_get_name() -> UnsafePointer<CChar> { return device_name } /// Gets the graphics API identifier number for the "device". /// - Returns: Numerical identifier /// @_cdecl("device_get_type") public func device_get_type() -> Int32 { return GS_DEVICE_METAL } /// Returns a string to be used as a suffix for libobs' shader preprocessor, which will be used as part of a shader's /// identifying information. /// - Returns: Constant pointer to a C string with the suffix text @_cdecl("device_preprocessor_name") public func device_preprocessor_name() -> UnsafePointer<CChar> { return preprocessor_name } /// Creates a new Metal device instance and stores an opaque pointer to a ``MetalDevice`` instance in the provided /// pointer. /// /// - Parameters: /// - devicePointer: Pointer to memory allocated by the caller to receive the pointer of the created device instance /// - adapter: Numerical identifier of a graphics display adapter to create the device on. /// - Returns: Device creation result value defined as preprocessor macro in libobs' graphics API header /// /// This method will increment the reference count on the created ``MetalDevice`` instance to ensure it will not be /// deallocated until `libobs` actively relinquishes ownership of it via a call to `device_destroy`. /// /// > Important: As the Metal API is only supported on Apple Silicon devices, the adapter argument is effectively /// ignored (there is only ever one "adapter" in an Apple Silicon machine and thus only the "default" device is used).
@_cdecl("device_create") public func device_create(devicePointer: UnsafeMutableRawPointer, adapter: UInt32) -> Int32 { guard NSProtocolFromString("MTLDevice") != nil else { OBSLog(.error, "This Mac does not support Metal.") return GS_ERROR_NOT_SUPPORTED } OBSLog(.info, "---------------------------------") guard let metalDevice = MTLCreateSystemDefaultDevice() else { OBSLog(.error, "Unable to initialize Metal device.") return GS_ERROR_FAIL } var descriptions: [String] = [] descriptions.append("Initializing Metal...") descriptions.append("\t- Name : \(metalDevice.name)") descriptions.append("\t- Unified Memory : \(metalDevice.hasUnifiedMemory ? "Yes" : "No")") descriptions.append("\t- Raytracing Support : \(metalDevice.supportsRaytracing ? "Yes" : "No")") if #available(macOS 14.0, *) { descriptions.append("\t- Architecture : \(metalDevice.architecture.name)") } OBSLog(.info, descriptions.joined(separator: "\n")) do { let device = try MetalDevice(device: metalDevice) let retained = Unmanaged.passRetained(device).toOpaque() let signalName = MetalSignalType.videoReset.rawValue let signalHandler = obs_get_signal_handler() signalName.withCString { signal_handler_connect(signalHandler, $0, metal_video_reset_handler, retained) } devicePointer.storeBytes(of: OpaquePointer(retained), as: OpaquePointer.self) } catch { OBSLog(.error, "Unable to create MetalDevice wrapper instance") return GS_ERROR_FAIL } return GS_SUCCESS } /// Uninitializes the Metal device instance created for libobs. /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// /// This method will take ownership of the reference shared with `libobs` and thus return all strong references to the /// shared ``MetalDevice`` instance to pure Swift code (and thus its own memory managed). The active call to /// ``MetalDevice/shutdown()`` is necessary to ensure that internal clean up code runs _before_ `libobs` runs any of /// its own clean up code (which is not memory safe). 
@_cdecl("device_destroy") public func device_destroy(device: UnsafeMutableRawPointer) { let signalName = MetalSignalType.videoReset.rawValue let signalHandler = obs_get_signal_handler() signalName.withCString { signal_handler_disconnect(signalHandler, $0, metal_video_reset_handler, device) } let device: MetalDevice = retained(device) device.shutdown() } /// Returns opaque pointer to actual (wrapped) API-specific device object /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - Returns: Opaque pointer to ``MTLDevice`` object wrapped by ``MetalDevice`` instance /// /// The pointer shared by this function is unretained and is thus unsafe. It doesn't seem that anything in OBS Studio's /// codebase actually uses this function, but it is part of the graphics API and thus has to be implemented. @_cdecl("device_get_device_obj") public func device_get_device_obj(device: UnsafeMutableRawPointer) -> OpaquePointer? { let metalDevice: MetalDevice = unretained(device) let mtlDevice = metalDevice.device return OpaquePointer(Unmanaged.passUnretained(mtlDevice).toOpaque()) } /// Sets up the blend factor to be used by the current pipeline. /// /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - src: `libobs` blend type for the source /// - dest: `libobs` blend type for the destination /// /// This function uses the same blend factor for color and alpha channel. The enum values provided by `libobs` are /// converted into their appropriate ``MTLBlendFactor``variants automatically (if possible). /// /// > Important: Calling this function can trigger the creation of an entirely new render pipeline state, which is a /// costly operation. 
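The blend factors configured by these functions plug into the standard GPU blend equation, `result = source * srcFactor + destination * dstFactor`, for the default `add` blend operation. A plain-Swift sketch of that arithmetic for a single color channel (the `blend` helper is illustrative, not part of the sources):

```swift
// One channel of the additive blend equation applied by the render pipeline:
// result = src * srcFactor + dst * dstFactor
func blend(src: Double, dst: Double, srcFactor: Double, dstFactor: Double) -> Double {
    src * srcFactor + dst * dstFactor
}

// Conventional alpha blending (GS_BLEND_SRCALPHA / GS_BLEND_INVSRCALPHA)
// with a source alpha of 0.25 over a black destination:
let srcAlpha = 0.25
let blended = blend(src: 1.0, dst: 0.0, srcFactor: srcAlpha, dstFactor: 1.0 - srcAlpha)
print(blended)  // 0.25
```

The "separate" variant below simply allows different factor pairs for the color channels and the alpha channel while keeping this same equation.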
@_cdecl("device_blend_function") public func device_blend_function(device: UnsafeRawPointer, src: gs_blend_type, dest: gs_blend_type) { device_blend_function_separate( device: device, src_c: src, dest_c: dest, src_a: src, dest_a: dest ) } /// Sets up the color and alpha blend factors to be used by the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - src_c: `libobs` blend factor for the source color /// - dest_c: `libobs` blend factor for the destination color /// - src_a: `libobs` blend factor for the source alpha channel /// - dest_a: `libobs` blend factor for the destination alpha channel /// /// This function uses different blend factors for color and alpha channel. The enum values provided by `libobs` are /// converted into their appropriate ``MTLBlendFactor`` variants automatically (if possible). /// /// > Important: Calling this function can trigger the creation of an entirely new render pipeline state, which is a /// costly operation. @_cdecl("device_blend_function_separate") public func device_blend_function_separate( device: UnsafeRawPointer, src_c: gs_blend_type, dest_c: gs_blend_type, src_a: gs_blend_type, dest_a: gs_blend_type ) { let device: MetalDevice = unretained(device) let pipelineDescriptor = device.renderState.pipelineDescriptor guard let sourceRGBFactor = src_c.blendFactor, let sourceAlphaFactor = src_a.blendFactor, let destinationRGBFactor = dest_c.blendFactor, let destinationAlphaFactor = dest_a.blendFactor else { assertionFailure( """ device_blend_function_separate: Incompatible blend factors used. 
Values: - Source RGB : \(src_c) - Source Alpha : \(src_a) - Destination RGB : \(dest_c) - Destination Alpha : \(dest_a) """) return } pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = sourceRGBFactor pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = sourceAlphaFactor pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = destinationRGBFactor pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = destinationAlphaFactor } /// Sets the blend operation to be used by the current pipeline. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - op: `libobs` blend operation name /// /// This function converts the provided `libobs` value into its appropriate ``MTLBlendOperation`` variant automatically /// (if possible). /// /// > Important: Calling this function can trigger the creation of an entirely new render pipeline state, which is a /// costly operation. @_cdecl("device_blend_op") public func device_blend_op(device: UnsafeRawPointer, op: gs_blend_op_type) { let device: MetalDevice = unretained(device) let pipelineDescriptor = device.renderState.pipelineDescriptor guard let blendOperation = op.mtlOperation else { assertionFailure("device_blend_op: Incompatible blend operation provided. Value: \(op)") return } pipelineDescriptor.colorAttachments[0].rgbBlendOperation = blendOperation } /// Returns the _current_ color space as set up by any preceding calls of the `libobs` renderer. /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - Returns: Color space enum value as defined by `libobs` /// /// This color space value is commonly set by `libobs`' renderer to check the "current state", and make necessary /// switches to ensure color-correct rendering /// (e.g., to check if the renderer uses an SDR color space but the current source might provide HDR image data). 
This value is effectively just retained as a state variable for `libobs`. @_cdecl("device_get_color_space") public func device_get_color_space(device: UnsafeRawPointer) -> gs_color_space { let device: MetalDevice = unretained(device) return device.renderState.gsColorSpace } /// Signals the beginning of a new render loop iteration by the `libobs` renderer. /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// /// This function is the first graphics API-specific function called by the `libobs` render loop and can be used as a /// signal to reset any lingering state of the prior loop iteration. /// /// For the Metal renderer this ensures that the current render target, current swap chain, as well as the list of /// active swap chains is reset. As the Metal renderer also needs to keep track of whether `libobs` is rendering any /// "displays", the associated state variable is also reset here. @_cdecl("device_begin_frame") public func device_begin_frame(device: UnsafeRawPointer) { let device: MetalDevice = unretained(device) device.renderState.useSRGBGamma = false device.renderState.renderTarget = nil device.renderState.swapChain = nil device.renderState.isInDisplaysRenderStage = false return } /// Gets a pointer to the current render target /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - Returns: Opaque pointer to ``MetalTexture`` object representing the render target /// /// OBS Studio's renderer only ever uses a single render target at a time and switches them out if it needs to /// render a different output. Due to this single state approach, it needs to retain any "current" values before /// replacing them with (temporary) new values. It does so by retrieving pointers to the current objects set up within /// the graphics API's opaque implementation and storing them for later use.
@_cdecl("device_get_render_target") public func device_get_render_target(device: UnsafeRawPointer) -> OpaquePointer? { let device: MetalDevice = unretained(device) guard let renderTarget = device.renderState.renderTarget else { return nil } return renderTarget.getUnretained() } /// Replaces the "current" render target and zstencil attachment with the objects associated with any provided non-`nil` /// pointers. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - tex: Opaque (optional) pointer to ``MetalTexture`` instance shared with `libobs` /// - zstencil: Opaque (optional) pointer to ``MetalTexture`` instance shared with `libobs` /// /// This setter function is often used in conjunction with its associated getter function to temporarily "switch state" /// of the renderer by retaining a pointer to the "current" render target, setting up a new one, issuing a draw call, /// before restoring the original render target. /// /// This is regularly used for "texrender" instances, such as combining the chroma and luma components of a video frame /// (and uploaded as single- and dual-channel textures respectively) back into an RGB texture. This texture is then /// used as the "output" of its corresponding source in the "actual" render pass, which will use the original render /// target again. @_cdecl("device_set_render_target") public func device_set_render_target(device: UnsafeRawPointer, tex: UnsafeRawPointer?, zstencil: UnsafeRawPointer?) { device_set_render_target_with_color_space( device: device, tex: tex, zstencil: zstencil, space: GS_CS_SRGB ) } /// Replaces the "current" render target and zstencil attachment with the objects associated with any provided non-`nil` /// pointers and also updates the "current" color space used by the renderer.
/// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - tex: Opaque (optional) pointer to ``MetalTexture`` instance shared with `libobs` /// - zstencil: Opaque (optional) pointer to ``MetalTexture`` instance shared with `libobs` /// - space: `libobs`-based color space value /// /// This setter function is often used in conjunction with its associated getter function to temporarily "switch state" /// of the renderer by retaining a pointer to the "current" render target, setting up a new one, issuing a draw call, /// before restoring the original render target. /// /// This is regularly used for "texrender" instances, such as combining the chroma and luma components of a video frame /// (and uploaded as single- and dual-channel textures respectively) back into an RGB texture. This texture is then /// used as the "output" of its corresponding source in the "actual" render pass, which will use the original render /// target again. /// /// A `nil` pointer provided for either the render target or zstencil attachment means that the "current" value for /// either should be removed, leaving the renderer in an "invalid" state at least for the render target (using no /// zstencil attachment is a valid state however). /// /// > Important: Use this variant if you need to also update the "current" color space which might be checked by /// sources' render function to check whether linear gamma or sRGB's gamma will be used to encode color values. 
@_cdecl("device_set_render_target_with_color_space") public func device_set_render_target_with_color_space( device: UnsafeRawPointer, tex: UnsafeRawPointer?, zstencil: UnsafeRawPointer?, space: gs_color_space ) { let device: MetalDevice = unretained(device) if let tex { let metalTexture: MetalTexture = unretained(tex) device.renderState.renderTarget = metalTexture device.renderState.isRendertargetChanged = true } else { device.renderState.renderTarget = nil } if let zstencil { let zstencilAttachment: MetalTexture = unretained(zstencil) device.renderState.depthStencilAttachment = zstencilAttachment device.renderState.isRendertargetChanged = true } else { device.renderState.depthStencilAttachment = nil } device.renderState.gsColorSpace = space } /// Switches the current render state to use sRGB gamma encoding and decoding when reading from textures and writing /// into render targets /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - enable: Boolean to enable or disable the automatic sRGB gamma encoding and decoding /// /// OBS Studio's renderer has been retroactively updated to use sRGB color primaries _and_ gamma encoding by /// preference, but not by default. Any source has to opt-in to the use of automatic sRGB gamma encoding and decoding, /// while the default is still to use linear gamma. /// /// This method is thus used by sources to enable or disable the associated behavior and control the way color values /// generated by fragment shaders are written into the render target. @_cdecl("device_enable_framebuffer_srgb") public func device_enable_framebuffer_srgb(device: UnsafeRawPointer, enable: Bool) { let device: MetalDevice = unretained(device) if device.renderState.useSRGBGamma != enable { device.renderState.useSRGBGamma = enable device.renderState.isRendertargetChanged = true } } /// Retrieves the current render state's setting for using automatic encoding and decoding of color values using sRGB /// gamma. 
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - Returns: Boolean value of the sRGB gamma setting /// /// This function is used to check the current state, which might have been explicitly changed by calls to /// ``device_enable_framebuffer_srgb``. /// /// A source might only be able to work with color values that already have sRGB gamma applied to them and thus /// might want to ensure that the color values provided by the fragment shader will not have the sRGB gamma curve /// encoded on them again. /// /// By calling this function, a source can check if automatic gamma encoding is enabled and then turn it off /// explicitly, which will ensure that color data is written as-is and no additional encoding will take place. @_cdecl("device_framebuffer_srgb_enabled") public func device_framebuffer_srgb_enabled(device: UnsafeRawPointer) -> Bool { let device: MetalDevice = unretained(device) return device.renderState.useSRGBGamma } /// Signals the beginning of a new scene. /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// /// OBS Studio's renderer signals a new scene for each "display" and for every "video mix", which implicitly signals a /// change of output format. This usually also implies that all current textures that might have been set up for /// fragment shaders should be reset. For Metal this also requires creating a new "current" command buffer which should /// contain all GPU commands necessary to render the "scene".
@_cdecl("device_begin_scene") public func device_begin_scene(device: UnsafeMutableRawPointer) { let device: MetalDevice = unretained(device) for index in 0..<Int(GS_MAX_TEXTURES) { device.renderState.textures[index] = nil } } /// Clears the current render target and depth stencil attachment with the provided clear values. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - clearFlags: `libobs` bit field describing which attachments should be cleared /// - color: Pointer to a `vec4` holding the clear color /// - depth: Clear value for the depth attachment /// - stencil: Clear value for the stencil attachment @_cdecl("device_clear") public func device_clear( device: UnsafeRawPointer, clearFlags: UInt32, color: UnsafePointer<vec4>, depth: Float, stencil: UInt8 ) { let device: MetalDevice = unretained(device) var clearState = ClearState() if (Int32(clearFlags) & GS_CLEAR_COLOR) != 0 { clearState.colorAction = .clear clearState.clearColor = MTLClearColor( red: Double(color.pointee.x), green: Double(color.pointee.y), blue: Double(color.pointee.z), alpha: Double(color.pointee.w) ) } else { clearState.colorAction = .load } if (Int32(clearFlags) & GS_CLEAR_DEPTH) != 0 { clearState.clearDepth = Double(depth) clearState.depthAction = .clear } else { clearState.depthAction = .load } if (Int32(clearFlags) & GS_CLEAR_STENCIL) != 0 { clearState.clearStencil = UInt32(stencil) clearState.stencilAction = .clear } else { clearState.stencilAction = .load } do { try device.clear(state: clearState) } catch let error as MetalError.MTLDeviceError { OBSLog(.error, "device_clear: \(error.description)") } catch { OBSLog(.error, "device_clear: Unknown error occurred") } } /// Returns whether the current display is ready to present a frame generated by the renderer /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - Returns: Boolean value to state whether a frame generated by the renderer could actually be displayed /// /// As OBS Studio's renderer is not synced with the operating system's compositor, situations could arise where the /// renderer needs to be able to "hand off" a generated display output to the compositor but might not be able to /// because it's not "ready" to receive such a frame. If that is the case, the graphics API can check for such a state /// and return `false` here, allowing `libobs` to skip rendering the output for the "current" display entirely.
/// /// In Direct3D11 the `DXGI_SWAP_EFFECT_FLIP_DISCARD` flip effect is used, which allows OBS Studio to render a preview /// into a buffer without having to care about the compositor. This is not possible in Metal as it's not the /// application that provides the output buffer, it's the compositor which provides a "drawable" surface. For each /// display there can only be a maximum of 3 drawables "in flight"; a request for any further drawable will stall /// the renderer. /// /// There is currently no way to check for the number of available drawables, which could be used to return `false` /// here and would allow `libobs` to skip output rendering on its current frame and try again on the next. /// /// > Note: This check applies to the display associated with whichever "swap chain" might be "current" and thus /// depends on swap chain state. @_cdecl("device_is_present_ready") public func device_is_present_ready(device: UnsafeRawPointer) -> Bool { return true } /// Commits the current command buffer to schedule and execute the GPU commands encoded within it and waits until they /// have been scheduled.
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// /// OBS Studio's renderer will call this function when it is finished setting up all draw commands for the video output /// texture, and also after it has used the GPU to encode a video output frame. @_cdecl("device_flush") public func device_flush(device: UnsafeRawPointer) { let device: MetalDevice = unretained(device) device.finishPendingCommands() } /// Sets the "current" cull mode to be used by the next draw call /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - mode: `libobs` cull mode identifier /// /// Converts the cull mode provided by `libobs` into its appropriate ``MTLCullMode`` variant. @_cdecl("device_set_cull_mode") public func device_set_cull_mode(device: UnsafeRawPointer, mode: gs_cull_mode) { let device: MetalDevice = unretained(device) device.renderState.cullMode = mode.mtlMode } /// Gets the "current" cull mode that was set up for the next draw call /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - Returns: `libobs` cull mode /// /// Converts the currently configured ``MTLCullMode`` into its `libobs` variant. @_cdecl("device_get_cull_mode") public func device_get_cull_mode(device: UnsafeRawPointer) -> gs_cull_mode { let device: MetalDevice = unretained(device) return device.renderState.cullMode.obsMode } /// Switches blending of the next draw operation with the contents of the "current" framebuffer. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - enable: `true` if contents should be blended, `false` otherwise /// /// This function directly enables or disables blending for the first render target set up in the current pipeline.
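The `.mtlMode` and `.obsMode` conversions used by the two cull-mode functions are defined in extensions elsewhere in libobs-metal; a plausible sketch of the forward mapping (the `CullMode` enum below is a hypothetical stand-in for the imported `gs_cull_mode`, not the real type):

```swift
import Metal

// Hypothetical stand-in for the libobs gs_cull_mode enum (GS_BACK, GS_FRONT,
// GS_NEITHER); the real conversion lives in an extension on the imported C type.
enum CullMode {
    case back, front, neither

    // gs_cull_mode -> MTLCullMode, case for case; GS_NEITHER disables culling.
    var mtlMode: MTLCullMode {
        switch self {
        case .back: return .back
        case .front: return .front
        case .neither: return .none
        }
    }
}
```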
@_cdecl("device_enable_blending") public func device_enable_blending(device: UnsafeRawPointer, enable: Bool) { let device: MetalDevice = unretained(device) device.renderState.pipelineDescriptor.colorAttachments[0].isBlendingEnabled = enable } /// Switches depth testing on the next draw operation with the contents of the current depth stencil buffer. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - enable: `true` if depth testing should be enabled, `false` otherwise /// /// This function directly enables or disables depth testing for the depth stencil attachment set up in the current pipeline. @_cdecl("device_enable_depth_test") public func device_enable_depth_test(device: UnsafeRawPointer, enable: Bool) { let device: MetalDevice = unretained(device) device.renderState.depthStencilDescriptor.isDepthWriteEnabled = enable } /// Sets the read mask in the depth stencil descriptor set up in the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - enable: `true` if the read mask should be `1`, `false` for a read mask of `0` /// /// The `MTLDepthStencilDescriptor` can differentiate between a front facing stencil and a back facing stencil. As /// `libobs` does not make this distinction, both values will be set to the same value. @_cdecl("device_enable_stencil_test") public func device_enable_stencil_test(device: UnsafeRawPointer, enable: Bool) { let device: MetalDevice = unretained(device) device.renderState.depthStencilDescriptor.frontFaceStencil.readMask = enable ? 1 : 0 device.renderState.depthStencilDescriptor.backFaceStencil.readMask = enable ?
1 : 0 } /// Sets the write mask in the depth stencil descriptor set up in the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - enable: `true` if the write mask should be `1`, `false` for a write mask of `0` /// /// The `MTLDepthStencilDescriptor` can differentiate between a front facing stencil and a back facing stencil. As /// `libobs` does not make this distinction, both values will be set to the same value. @_cdecl("device_enable_stencil_write") public func device_enable_stencil_write(device: UnsafeRawPointer, enable: Bool) { let device: MetalDevice = unretained(device) device.renderState.depthStencilDescriptor.frontFaceStencil.writeMask = enable ? 1 : 0 device.renderState.depthStencilDescriptor.backFaceStencil.writeMask = enable ? 1 : 0 } /// Sets the color write mask for the render target set up in the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - red: `true` if the red color channel should be written, `false` otherwise /// - green: `true` if the green color channel should be written, `false` otherwise /// - blue: `true` if the blue color channel should be written, `false` otherwise /// - alpha: `true` if the alpha channel should be written, `false` otherwise /// /// The separate `bool` values are converted into an ``MTLColorWriteMask`` which is then set up on the first render /// target of the current pipeline. 
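``MTLColorWriteMask`` is an option set, so the four per-channel booleans fold into a single mask value; building it the same way as described above, with alpha writes disabled, gives:

```swift
import Metal

// Build the mask incrementally, as device_enable_color does, leaving out .alpha.
var colorMask = MTLColorWriteMask()
colorMask.insert(.red)
colorMask.insert(.green)
colorMask.insert(.blue)

// The result is equivalent to the combined option set without the alpha channel.
assert(colorMask == [.red, .green, .blue])
assert(!colorMask.contains(.alpha))
```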
@_cdecl("device_enable_color") public func device_enable_color(device: UnsafeRawPointer, red: Bool, green: Bool, blue: Bool, alpha: Bool) { let device: MetalDevice = unretained(device) var colorMask = MTLColorWriteMask() if red { colorMask.insert(.red) } if green { colorMask.insert(.green) } if blue { colorMask.insert(.blue) } if alpha { colorMask.insert(.alpha) } device.renderState.pipelineDescriptor.colorAttachments[0].writeMask = colorMask } /// Sets the depth compare function for the depth stencil descriptor to be used in the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - test: `libobs` enum describing the depth compare function to use /// /// The enum value provided by `libobs` is converted into a ``MTLCompareFunction``, which is then set directly as the /// compare function on the depth stencil descriptor. @_cdecl("device_depth_function") public func device_depth_function(device: UnsafeRawPointer, test: gs_depth_test) { let device: MetalDevice = unretained(device) device.renderState.depthStencilDescriptor.depthCompareFunction = test.mtlFunction } /// Sets the stencil compare functions for the specified stencil side(s) on the depth stencil descriptor in the current /// pipeline. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - side: The stencil side(s) for which the compare function should be set up /// - test: `libobs` enum describing the stencil test function to use /// /// The enum values provided by `libobs` are first checked for the stencil side, after which the compare function value /// itself is converted into a ``MTLCompareFunction``, which is then set directly as the compare function on the depth /// stencil descriptor. 
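The `test.mtlFunction` conversion relied on by the depth and stencil functions is defined elsewhere in libobs-metal; a plausible sketch of the mapping onto ``MTLCompareFunction`` (the `DepthTest` enum below is a hypothetical stand-in for the imported `gs_depth_test`, whose cases are GS_NEVER through GS_ALWAYS):

```swift
import Metal

// Hypothetical stand-in for gs_depth_test; the real conversion lives in an
// extension on the imported C enum.
enum DepthTest {
    case never, less, lessEqual, equal, greaterEqual, greater, notEqual, always

    // gs_depth_test -> MTLCompareFunction, case for case.
    var mtlFunction: MTLCompareFunction {
        switch self {
        case .never: return .never
        case .less: return .less
        case .lessEqual: return .lessEqual
        case .equal: return .equal
        case .greaterEqual: return .greaterEqual
        case .greater: return .greater
        case .notEqual: return .notEqual
        case .always: return .always
        }
    }
}
```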
@_cdecl("device_stencil_function") public func device_stencil_function(device: UnsafeRawPointer, side: gs_stencil_side, test: gs_depth_test) { let device: MetalDevice = unretained(device) let stencilCompareFunction: (MTLCompareFunction, MTLCompareFunction) if side == GS_STENCIL_FRONT { stencilCompareFunction = (test.mtlFunction, .never) } else if side == GS_STENCIL_BACK { stencilCompareFunction = (.never, test.mtlFunction) } else { stencilCompareFunction = (test.mtlFunction, test.mtlFunction) } device.renderState.depthStencilDescriptor.frontFaceStencil.stencilCompareFunction = stencilCompareFunction.0 device.renderState.depthStencilDescriptor.backFaceStencil.stencilCompareFunction = stencilCompareFunction.1 } /// Sets the stencil fail, depth fail, and depth pass operations for the specified stencil side(s) on the depth stencil /// descriptor for the current pipeline. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - side: The stencil side(s) for which the fail and pass operations should be set up /// - fail: `libobs` enum value describing the stencil fail operation /// - zfail: `libobs` enum value describing the depth fail operation /// - zpass: `libobs` enum value describing the depth pass operation /// /// The enum values provided by `libobs` are first checked for the stencil side, after which the operation values /// themselves are converted into their ``MTLStencilOperation`` variants, which are then set directly on the depth /// stencil descriptor.
@_cdecl("device_stencil_op") public func device_stencil_op( device: UnsafeRawPointer, side: gs_stencil_side, fail: gs_stencil_op_type, zfail: gs_stencil_op_type, zpass: gs_stencil_op_type ) { let device: MetalDevice = unretained(device) let stencilFailOperation: (MTLStencilOperation, MTLStencilOperation) let depthFailOperation: (MTLStencilOperation, MTLStencilOperation) let depthPassOperation: (MTLStencilOperation, MTLStencilOperation) if side == GS_STENCIL_FRONT { stencilFailOperation = (fail.mtlOperation, .keep) depthFailOperation = (zfail.mtlOperation, .keep) depthPassOperation = (zpass.mtlOperation, .keep) } else if side == GS_STENCIL_BACK { stencilFailOperation = (.keep, fail.mtlOperation) depthFailOperation = (.keep, zfail.mtlOperation) depthPassOperation = (.keep, zpass.mtlOperation) } else { stencilFailOperation = (fail.mtlOperation, fail.mtlOperation) depthFailOperation = (zfail.mtlOperation, zfail.mtlOperation) depthPassOperation = (zpass.mtlOperation, zpass.mtlOperation) } device.renderState.depthStencilDescriptor.frontFaceStencil.stencilFailureOperation = stencilFailOperation.0 device.renderState.depthStencilDescriptor.frontFaceStencil.depthFailureOperation = depthFailOperation.0 device.renderState.depthStencilDescriptor.frontFaceStencil.depthStencilPassOperation = depthPassOperation.0 device.renderState.depthStencilDescriptor.backFaceStencil.stencilFailureOperation = stencilFailOperation.1 device.renderState.depthStencilDescriptor.backFaceStencil.depthFailureOperation = depthFailOperation.1 device.renderState.depthStencilDescriptor.backFaceStencil.depthStencilPassOperation = depthPassOperation.1 } /// Sets up the viewport for use in the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - x: Origin X coordinate for the viewport /// - y: Origin Y coordinate for the viewport /// - width: Width of the viewport /// - height: Height of the viewport /// /// The separate values for origin and 
dimension are converted into an ``MTLViewport`` which is then retained as the /// "current" viewport for later use when the pipeline is actually set up. @_cdecl("device_set_viewport") public func device_set_viewport(device: UnsafeRawPointer, x: Int32, y: Int32, width: Int32, height: Int32) { let device: MetalDevice = unretained(device) let viewPort = MTLViewport( originX: Double(x), originY: Double(y), width: Double(width), height: Double(height), znear: 0.0, zfar: 1.0 ) device.renderState.viewPort = viewPort } /// Gets the origin and dimensions of the viewport currently set up for use by the pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - rect: A pointer to a ``gs_rect`` struct in memory /// /// The function is provided a pointer to a ``gs_rect`` instance in memory which can hold the x and y values for the /// origin and dimension of the viewport. /// /// This function is usually called when some source needs to retain the current "state" of the pipeline (of which /// there can ever only be one) and overwrite the state with its own (in this case its own viewport). To be able to /// restore the prior state, the "current" state needs to be retrieved from the pipeline.
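The two viewport functions effectively round-trip between the integer `gs_rect` representation used by `libobs` and Metal's double-based ``MTLViewport`` with a fixed 0...1 depth range; a minimal sketch of that round trip:

```swift
import Metal

// Store a viewport the way device_set_viewport does...
let viewPort = MTLViewport(
    originX: 0, originY: 0, width: 1920, height: 1080, znear: 0.0, zfar: 1.0)

// ...and read it back into gs_rect-style integers as device_get_viewport does.
let x = Int32(viewPort.originX), y = Int32(viewPort.originY)
let cx = Int32(viewPort.width), cy = Int32(viewPort.height)
assert((x, y, cx, cy) == (0, 0, 1920, 1080))
```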
@_cdecl("device_get_viewport") public func device_get_viewport(device: UnsafeRawPointer, rect: UnsafeMutablePointer<gs_rect>) { let device: MetalDevice = unretained(device) rect.pointee.x = Int32(device.renderState.viewPort.originX) rect.pointee.y = Int32(device.renderState.viewPort.originY) rect.pointee.cx = Int32(device.renderState.viewPort.width) rect.pointee.cy = Int32(device.renderState.viewPort.height) } /// Sets up a scissor rect to be used by the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - rect: Pointer to a ``gs_rect`` struct in memory that contains origin and dimension of the scissor rect /// /// The ``gs_rect`` is converted into a ``MTLScissorRect`` object before saving it in the "current" render state /// for use in the next draw call. @_cdecl("device_set_scissor_rect") public func device_set_scissor_rect(device: UnsafeRawPointer, rect: UnsafePointer<gs_rect>?) { let device: MetalDevice = unretained(device) if let rect { device.renderState.scissorRect = rect.pointee.mtlScissorRect device.renderState.scissorRectEnabled = true } else { device.renderState.scissorRect = nil device.renderState.scissorRectEnabled = false } } /// Sets up an orthographic projection matrix with the provided view frustum /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - left: Left edge of view frustum on the near plane /// - right: Right edge of view frustum on the near plane /// - top: Top edge of view frustum on the near plane /// - bottom: Bottom edge of view frustum on the near plane /// - near: Distance of near plane on the Z axis /// - far: Distance of far plane on the Z axis @_cdecl("device_ortho") public func device_ortho( device: UnsafeRawPointer, left: Float, right: Float, top: Float, bottom: Float, near: Float, far: Float ) { let device: MetalDevice = unretained(device) let rml = right - left let bmt = bottom - top let fmn = far - near
device.renderState.projectionMatrix = matrix_float4x4( rows: [ SIMD4((2.0 / rml), 0.0, 0.0, 0.0), SIMD4(0.0, (2.0 / -bmt), 0.0, 0.0), SIMD4(0.0, 0.0, (1 / fmn), 0.0), SIMD4((left + right) / -rml, (bottom + top) / bmt, near / -fmn, 1.0), ] ) } /// Sets up a perspective projection matrix with the provided view frustum /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - left: Left edge of view frustum on the near plane /// - right: Right edge of view frustum on the near plane /// - top: Top edge of view frustum on the near plane /// - bottom: Bottom edge of view frustum on the near plane /// - near: Distance of near plane on the Z axis /// - far: Distance of far plane on the Z axis @_cdecl("device_frustum") public func device_frustum( device: UnsafeRawPointer, left: Float, right: Float, top: Float, bottom: Float, near: Float, far: Float ) { let device: MetalDevice = unretained(device) let rml = right - left let tmb = top - bottom let fmn = far - near device.renderState.projectionMatrix = matrix_float4x4( columns: ( SIMD4(((2 * near) / rml), 0.0, 0.0, 0.0), SIMD4(0.0, ((2 * near) / tmb), 0.0, 0.0), SIMD4(((left + right) / rml), ((top + bottom) / tmb), (-far / fmn), -1.0), SIMD4(0.0, 0.0, (-(far * near) / fmn), 0.0) ) ) } /// Requests the current projection matrix to be pushed into a projection stack /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// /// OBS Studio's renderer works with the assumption of one big "current" state stack, which requires the entire state /// to be changed to meet different rendering requirements. Part of this state is the current projection matrix, which /// might need to be replaced temporarily. This function will be called when another projection matrix will be set up /// to allow for its restoration later. 
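The orthographic matrix assembled in `device_ortho` can be sanity-checked by pushing frustum corners through it as row vectors, matching the `mul(pos, ViewProj)` row-vector convention OBS Studio's shaders use; the top-left corner on the near plane should land at NDC (-1, 1) with depth 0. The frustum values below are arbitrary sample inputs chosen so the arithmetic stays exact:

```swift
import simd

// Sample frustum: a 1920x1080 canvas with a power-of-two depth range.
let (left, right, top, bottom): (Float, Float, Float, Float) = (0, 1920, 0, 1080)
let (near, far): (Float, Float) = (-128, 128)
let rml = right - left, bmt = bottom - top, fmn = far - near

// Same row layout as device_ortho.
let ortho = matrix_float4x4(rows: [
    SIMD4(2.0 / rml, 0, 0, 0),
    SIMD4(0, 2.0 / -bmt, 0, 0),
    SIMD4(0, 0, 1 / fmn, 0),
    SIMD4((left + right) / -rml, (bottom + top) / bmt, near / -fmn, 1),
])

// Row-vector multiplication, as in mul(float4(pos, 1), ViewProj).
let topLeftNear = SIMD4<Float>(left, top, near, 1) * ortho
assert(simd_length(topLeftNear - SIMD4<Float>(-1, 1, 0, 1)) < 1e-6)
```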
@_cdecl("device_projection_push") public func device_projection_push(device: UnsafeRawPointer) { let device: MetalDevice = unretained(device) device.renderState.projections.append(device.renderState.projectionMatrix) } /// Requests the most recently pushed projection matrix to be removed from the stack and set up as the new current /// matrix /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// /// OBS Studio's renderer works with the assumption of one big "current" state stack. This requires some elements of /// this state to be temporarily retained and reinstated afterwards. This function will reinstate the most recently /// added matrix as the new "current" matrix. @_cdecl("device_projection_pop") public func device_projection_pop(device: UnsafeRawPointer) { let device: MetalDevice = unretained(device) device.renderState.projectionMatrix = device.renderState.projections.removeLast() } /// Checks whether the current display is capable of displaying high dynamic range content. /// /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - monitor: Opaque pointer of a platform-dependent monitor identifier /// - Returns: `true` if the display is capable of displaying high dynamic range content, `false` otherwise /// /// On macOS this capability is described by the ``NSScreen/maximumPotentialExtendedDynamicRangeColorComponentValue`` /// property, which can be checked using the ``NSWindow/screen`` property after retrieving the ``NSView/window`` /// property.
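On macOS the capability described above reduces to comparing a screen's potential EDR headroom against 1.0; a minimal sketch using the ``NSScreen`` property the documentation names (the `screenSupportsHDR` helper and the commented usage are illustrative, not part of the renderer):

```swift
import AppKit

// Values above 1.0 indicate the display can exceed SDR reference white.
func screenSupportsHDR(_ screen: NSScreen) -> Bool {
    screen.maximumPotentialExtendedDynamicRangeColorComponentValue > 1.0
}

// Hypothetical usage with the NSView hosting a preview display:
// if let screen = view.window?.screen, screenSupportsHDR(screen) { ... }
```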
@_cdecl("device_is_monitor_hdr") public func device_is_monitor_hdr(device: UnsafeRawPointer, monitor: UnsafeRawPointer) -> Bool { let device: MetalDevice = unretained(device) guard let swapChain = device.renderState.swapChain else { return false } return swapChain.edrHeadroom > 1.0 }

obs-studio-32.1.0-sources/libobs-metal/MTLPixelFormat+Extensions.swift

/****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ import CoreGraphics import CoreVideo import Foundation import Metal extension MTLPixelFormat { /// Property to check whether the pixel format is an 8-bit format var is8Bit: Bool { switch self { case .a8Unorm, .r8Unorm, .r8Snorm, .r8Uint, .r8Sint: return true case .r8Unorm_srgb: return true default: return false } } /// Property to check whether the pixel format is a 16-bit format var is16Bit: Bool { switch self { case .r16Unorm, .r16Snorm, .r16Uint, .r16Sint: return true case .rg8Unorm, .rg8Snorm, .rg8Uint, .rg8Sint: return true case .r16Float: return true case .rg8Unorm_srgb: return true default: return false } } /// Property to check whether the pixel format is a packed 16-bit format var isPacked16Bit: Bool { switch self { case .b5g6r5Unorm, .a1bgr5Unorm, .abgr4Unorm, .bgr5A1Unorm: return true default: return false } } /// Property to check whether the pixel format is a 32-bit format var is32Bit: Bool { switch self { case .r32Uint, .r32Sint: return true case .r32Float: return true case .rg16Unorm, .rg16Snorm, .rg16Uint, .rg16Sint: return true case .rg16Float: return true case .rgba8Unorm, .rgba8Snorm, .rgba8Uint, .rgba8Sint, .bgra8Unorm: return true case .rgba8Unorm_srgb, .bgra8Unorm_srgb: return true default: return false } } /// Property to check whether the pixel format is a packed 32-bit format var isPacked32Bit: Bool { switch self { case .rgb10a2Unorm, .rgb10a2Uint, .bgr10a2Unorm: return true case .rg11b10Float: return true case .rgb9e5Float: return true case .bgr10_xr, .bgr10_xr_srgb: return true default: return false } } /// Property to check whether the pixel format is a 64-bit format var is64Bit: Bool { switch self { case .rg32Uint, .rg32Sint: return true case .rg32Float: return true case .rgba16Unorm, .rgba16Snorm, .rgba16Uint, .rgba16Sint: return true case .rgba16Float: return true case .bgra10_xr, .bgra10_xr_srgb: return true default: return false } } /// Property to
check whether the pixel format is a 128-bit format var is128Bit: Bool { switch self { case .rgba32Uint, .rgba32Sint: return true case .rgba32Float: return true default: return false } } /// Property to check whether the pixel format will trigger automatic sRGB gamma encoding and decoding var isSRGB: Bool { switch self { case .r8Unorm_srgb, .rg8Unorm_srgb, .bgra8Unorm_srgb, .rgba8Unorm_srgb: return true case .bgr10_xr_srgb, .bgra10_xr_srgb: return true case .astc_4x4_srgb, .astc_5x4_srgb, .astc_5x5_srgb, .astc_6x5_srgb, .astc_6x6_srgb, .astc_8x5_srgb, .astc_8x6_srgb, .astc_8x8_srgb, .astc_10x5_srgb, .astc_10x6_srgb, .astc_10x8_srgb, .astc_10x10_srgb, .astc_12x10_srgb, .astc_12x12_srgb: return true case .bc1_rgba_srgb, .bc2_rgba_srgb, .bc3_rgba_srgb, .bc7_rgbaUnorm_srgb: return true case .eac_rgba8_srgb, .etc2_rgb8_srgb, .etc2_rgb8a1_srgb: return true default: return false } } /// Property to check whether the pixel format is an extended dynamic range (EDR) format var isEDR: Bool { switch self { case .bgr10_xr, .bgra10_xr, .bgr10_xr_srgb, .bgra10_xr_srgb: return true default: return false } } /// Property to check whether the pixel format uses a form of texture compression var isCompressed: Bool { switch self { // S3TC case .bc1_rgba, .bc1_rgba_srgb, .bc2_rgba, .bc2_rgba_srgb, .bc3_rgba, .bc3_rgba_srgb: return true // RGTC case .bc4_rUnorm, .bc4_rSnorm, .bc5_rgUnorm, .bc5_rgSnorm: return true // BPTC case .bc6H_rgbFloat, .bc6H_rgbuFloat, .bc7_rgbaUnorm, .bc7_rgbaUnorm_srgb: return true // EAC case .eac_r11Unorm, .eac_r11Snorm, .eac_rg11Unorm, .eac_rg11Snorm, .eac_rgba8, .eac_rgba8_srgb: return true // ETC case .etc2_rgb8, .etc2_rgb8_srgb, .etc2_rgb8a1, .etc2_rgb8a1_srgb: return true // ASTC case .astc_4x4_srgb, .astc_5x4_srgb, .astc_5x5_srgb, .astc_6x5_srgb, .astc_6x6_srgb, .astc_8x5_srgb, .astc_8x6_srgb, .astc_8x8_srgb, .astc_10x5_srgb, .astc_10x6_srgb, .astc_10x8_srgb, .astc_10x10_srgb, .astc_12x10_srgb, .astc_12x12_srgb, .astc_4x4_ldr, .astc_5x4_ldr, .astc_5x5_ldr,
.astc_6x5_ldr, .astc_6x6_ldr, .astc_8x5_ldr, .astc_8x6_ldr, .astc_8x8_ldr, .astc_10x5_ldr, .astc_10x6_ldr, .astc_10x8_ldr, .astc_10x10_ldr, .astc_12x10_ldr, .astc_12x12_ldr: return true // ASTC HDR case .astc_4x4_hdr, .astc_5x4_hdr, .astc_5x5_hdr, .astc_6x5_hdr, .astc_6x6_hdr, .astc_8x5_hdr, .astc_8x6_hdr, .astc_8x8_hdr, .astc_10x5_hdr, .astc_10x6_hdr, .astc_10x8_hdr, .astc_10x10_hdr, .astc_12x10_hdr, .astc_12x12_hdr: return true default: return false } } /// Property to check whether the pixel format is a depth buffer format var isDepth: Bool { switch self { case .depth16Unorm, .depth32Float: return true default: return false } } /// Property to check whether the pixel format is a stencil format var isStencil: Bool { switch self { case .stencil8, .x24_stencil8, .x32_stencil8, .depth24Unorm_stencil8, .depth32Float_stencil8: return true default: return false } } /// Returns the number of color components used by the pixel format var componentCount: Int? { switch self { case .a8Unorm, .r8Unorm, .r8Snorm, .r8Uint, .r8Sint, .r8Unorm_srgb: return 1 case .r16Unorm, .r16Snorm, .r16Uint, .r16Sint, .r16Float: return 1 case .r32Uint, .r32Sint, .r32Float: return 1 case .rg8Unorm, .rg8Snorm, .rg8Uint, .rg8Sint, .rg8Unorm_srgb: return 2 case .rg16Unorm, .rg16Snorm, .rg16Uint, .rg16Sint: return 2 case .rg32Uint, .rg32Sint, .rg32Float: return 2 case .b5g6r5Unorm, .rg11b10Float, .rgb9e5Float, .gbgr422, .bgrg422: return 3 case .a1bgr5Unorm, .abgr4Unorm, .bgr5A1Unorm: return 4 case .rgba8Unorm, .rgba8Snorm, .rgba8Uint, .rgba8Sint, .rgba8Unorm_srgb, .bgra8Unorm, .bgra8Unorm_srgb: return 4 case .rgb10a2Unorm, .rgb10a2Uint, .bgr10a2Unorm, .bgr10_xr, .bgr10_xr_srgb: return 4 case .rgba16Unorm, .rgba16Snorm, .rgba16Uint, .rgba16Sint, .rgba16Float: return 4 case .rgba32Uint, .rgba32Sint, .rgba32Float: return 4 case .bc4_rUnorm, .bc4_rSnorm, .eac_r11Unorm, .eac_r11Snorm: return 1 case .bc5_rgUnorm, .bc5_rgSnorm: return 2 case .bc6H_rgbFloat, .bc6H_rgbuFloat, .eac_rg11Unorm, .eac_rg11Snorm,
.etc2_rgb8, .etc2_rgb8_srgb: return 3 case .bc1_rgba, .bc1_rgba_srgb, .bc2_rgba, .bc2_rgba_srgb, .bc3_rgba, .bc3_rgba_srgb, .etc2_rgb8a1, .etc2_rgb8a1_srgb, .eac_rgba8, .eac_rgba8_srgb, .bc7_rgbaUnorm, .bc7_rgbaUnorm_srgb: return 4 default: return nil } } /// Conversion of pixel format to `libobs` color format var gsColorFormat: gs_color_format { switch self { case .a8Unorm: return GS_A8 case .r8Unorm: return GS_R8 case .rgba8Unorm: return GS_RGBA case .bgra8Unorm: return GS_BGRA case .rgb10a2Unorm: return GS_R10G10B10A2 case .rgba16Unorm: return GS_RGBA16 case .r16Unorm: return GS_R16 case .rgba16Float: return GS_RGBA16F case .rgba32Float: return GS_RGBA32F case .rg16Float: return GS_RG16F case .rg32Float: return GS_RG32F case .r16Float: return GS_R16F case .r32Float: return GS_R32F case .bc1_rgba: return GS_DXT1 case .bc2_rgba: return GS_DXT3 case .bc3_rgba: return GS_DXT5 default: return GS_UNKNOWN } } /// Returns the bits per pixel based on the pixel format var bitsPerPixel: Int? { if self.is8Bit { return 8 } else if self.is16Bit || self.isPacked16Bit { return 16 } else if self.is32Bit || self.isPacked32Bit { return 32 } else if self.is64Bit { return 64 } else if self.is128Bit { return 128 } else { return nil } } /// Returns the bytes per pixel based on the pixel format var bytesPerPixel: Int? { if self.is8Bit { return 1 } else if self.is16Bit || self.isPacked16Bit { return 2 } else if self.is32Bit { return 4 } else if self.isPacked32Bit { switch self { case .rgb10a2Unorm, .rgb10a2Uint, .bgr10a2Unorm, .rg11b10Float, .rgb9e5Float: return 4 case .bgr10_xr, .bgr10_xr_srgb: return 8 default: return nil } } else if self.is64Bit { return 8 } else { return nil } } /// Returns the bits used per color component of the pixel format var bitsPerComponent: Int?
{ if !self.isCompressed { if let bitsPerPixel = self.bitsPerPixel, let componentCount = self.componentCount { return bitsPerPixel / componentCount } } return nil } } extension MTLPixelFormat { /// Converts the pixel format into a compatible CoreGraphics color space var colorSpace: CGColorSpace? { switch self { case .a8Unorm, .r8Unorm, .r8Snorm, .r8Uint, .r8Sint, .r16Unorm, .r16Snorm, .r16Uint, .r16Sint, .r16Float, .r32Uint, .r32Sint, .r32Float: return CGColorSpace(name: CGColorSpace.linearGray) case .rg8Unorm, .rg8Snorm, .rg8Uint, .rg8Sint, .rgba8Unorm, .rgba8Snorm, .rgba8Uint, .rgba8Sint, .bgra8Unorm, .rgba16Unorm, .rgba16Snorm, .rgba16Uint, .rgba16Sint: return CGColorSpace(name: CGColorSpace.linearSRGB) case .rg8Unorm_srgb, .rgba8Unorm_srgb, .bgra8Unorm_srgb: return CGColorSpace(name: CGColorSpace.sRGB) case .rg16Float, .rg32Float, .rgba16Float, .rgba32Float, .bgr10_xr, .bgr10a2Unorm: return CGColorSpace(name: CGColorSpace.extendedLinearSRGB) case .bgr10_xr_srgb: return CGColorSpace(name: CGColorSpace.extendedSRGB) default: return nil } } } extension MTLPixelFormat { /// Initializes a ``MTLPixelFormat`` with a compatible CoreVideo video pixel format init?(osType: OSType) { guard let pixelFormat = osType.mtlFormat else { return nil } self = pixelFormat } /// Conversion of the pixel format into a compatible CoreVideo video pixel format var videoPixelFormat: OSType? 
{ switch self { case .r8Unorm, .r8Unorm_srgb: return kCVPixelFormatType_OneComponent8 case .r16Float: return kCVPixelFormatType_OneComponent16Half case .r32Float: return kCVPixelFormatType_OneComponent32Float case .rg8Unorm, .rg8Unorm_srgb: return kCVPixelFormatType_TwoComponent8 case .rg16Float: return kCVPixelFormatType_TwoComponent16Half case .rg32Float: return kCVPixelFormatType_TwoComponent32Float case .bgra8Unorm, .bgra8Unorm_srgb: return kCVPixelFormatType_32BGRA case .rgba8Unorm, .rgba8Unorm_srgb: return kCVPixelFormatType_32RGBA case .rgba16Float: return kCVPixelFormatType_64RGBAHalf case .rgba32Float: return kCVPixelFormatType_128RGBAFloat default: return nil } } }

obs-studio-32.1.0-sources/libobs-metal/metal-swapchain.swift

/****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

import AppKit
import Foundation

/// Creates an ``OBSSwapChain`` instance for use as a pseudo swap chain implementation to be shared with `libobs`
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - data: Pointer to platform-specific `gs_init_data` struct
/// - Returns: Opaque pointer to a new ``OBSSwapChain`` on success or `nil` on error
///
/// As interaction with UI elements needs to happen on the main thread of macOS, this function is marked with
/// `@MainActor`. This is also necessary because ``OBSSwapChain/updateView`` itself interacts with the ``NSView``
/// instance passed via the `data` argument and also has to occur on the main thread.
///
/// As applications cannot manage their own swap chain on macOS, the ``OBSSwapChain`` class merely wraps the
/// management of the ``CAMetalLayer`` that will be associated with the ``NSView`` and handles the drawables used to
/// render their contents.
///
/// > Important: This function can only be called from the main thread.
@MainActor
@_cdecl("device_swapchain_create")
public func device_swapchain_create(device: UnsafeMutableRawPointer, data: UnsafePointer<gs_init_data>)
    -> OpaquePointer?
{
    let device: MetalDevice = unretained(device)
    let view = data.pointee.window.view.takeUnretainedValue() as! NSView

    let size = MTLSize(
        width: Int(data.pointee.cx),
        height: Int(data.pointee.cy),
        depth: 0
    )

    guard let swapChain = OBSSwapChain(device: device, size: size, colorSpace: data.pointee.format) else {
        return nil
    }

    swapChain.updateView(view)

    device.swapChainQueue.sync {
        device.swapChains.append(swapChain)
    }

    return swapChain.getRetained()
}

/// Updates the internal size parameter and dimensions of the ``CAMetalLayer`` managed by the ``OBSSwapChain``
/// instance
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - width: Width to update the layer's dimensions to
///   - height: Height to update the layer's dimensions to
///
/// As the relationship between the ``CAMetalLayer`` and the ``NSView`` it is associated with is managed indirectly,
/// the Metal layer cannot directly react to size changes (even though it would be possible to do so). Instead,
/// `AppKit` will report a size change to the application, which will be picked up by Qt, which will emit a size
/// change event on the main loop, which will update the internal state of the ``OBSQTDisplay`` class. These changes
/// are asynchronously picked up by the `libobs` render loop, which will then call this function.
@_cdecl("device_resize")
public func device_resize(device: UnsafeMutableRawPointer, width: UInt32, height: UInt32) {
    let device: MetalDevice = unretained(device)

    guard let swapChain = device.renderState.swapChain else { return }

    swapChain.resize(.init(width: Int(width), height: Int(height), depth: 0))
}

/// This function does nothing on Metal
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///
/// The intended purpose of this function is to update the render target in the "current" swap chain with the color
/// space of its "display" and thus pick up changes in color spaces between different screens.
///
/// On macOS this just requires updating the EDR headroom for the screen the view might be associated with, as the
/// actual color space and EDR capabilities are evaluated on every render loop.
///
/// > Important: This function can only be called from the main thread.
@_cdecl("device_update_color_space")
public func device_update_color_space(device: UnsafeRawPointer) {
    let device: MetalDevice = unretained(device)

    guard device.renderState.swapChain != nil else { return }

    nonisolated(unsafe) let swapChain = device.renderState.swapChain!

    Task { @MainActor in
        swapChain.updateEdrHeadroom()
    }
}

/// Gets the dimensions of the ``CAMetalLayer`` managed by the ``OBSSwapChain`` instance set up in the current
/// pipeline
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - cx: Pointer to memory for the width of the layer
///   - cy: Pointer to memory for the height of the layer
@_cdecl("device_get_size")
public func device_get_size(
    device: UnsafeMutableRawPointer, cx: UnsafeMutablePointer<UInt32>, cy: UnsafeMutablePointer<UInt32>
) {
    let device: MetalDevice = unretained(device)

    guard let swapChain = device.renderState.swapChain else {
        cx.pointee = 0
        cy.pointee = 0
        return
    }

    cx.pointee = UInt32(swapChain.viewSize.width)
    cy.pointee = UInt32(swapChain.viewSize.height)
}

/// Gets the width of the ``CAMetalLayer`` managed by the ``OBSSwapChain`` instance set up in the current pipeline
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
/// - Returns: Width of the layer
@_cdecl("device_get_width")
public func device_get_width(device: UnsafeRawPointer) -> UInt32 {
    let device: MetalDevice = unretained(device)

    guard let swapChain = device.renderState.swapChain else { return 0 }

    return UInt32(swapChain.viewSize.width)
}

/// Gets the height of the ``CAMetalLayer`` managed by the ``OBSSwapChain`` instance set up in the current pipeline
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
/// - Returns: Height of the layer
@_cdecl("device_get_height")
public func device_get_height(device: UnsafeRawPointer) -> UInt32 {
    let device: MetalDevice = unretained(device)

    guard let swapChain = device.renderState.swapChain else { return 0 }

    return UInt32(swapChain.viewSize.height)
}

/// Sets up the ``OBSSwapChain`` for use in the current pipeline
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - swap: Opaque pointer to ``OBSSwapChain`` instance shared with `libobs`
///
/// The first call of this function in any render loop marks the "begin" of OBS Studio's display render stage. There
/// will only ever be one "current" swap chain in use by `libobs`, and there is no dedicated call to "reset" or
/// unload the current swap chain; instead, either a new swap chain is loaded or the "scene end" function is called.
@_cdecl("device_load_swapchain")
public func device_load_swapchain(device: UnsafeRawPointer, swap: UnsafeRawPointer) {
    let device: MetalDevice = unretained(device)
    let swapChain: OBSSwapChain = unretained(swap)

    if swapChain.edrHeadroom > 1.0 {
        var videoInfo: obs_video_info = obs_video_info()
        obs_get_video_info(&videoInfo)

        let videoColorSpace = videoInfo.colorspace

        switch videoColorSpace {
        case VIDEO_CS_2100_PQ:
            if swapChain.colorRange != .hdrPQ {
                // TODO: Investigate whether it's viable to use PQ or HLG tone mapping for the preview
                // Use the following code to enable it for either:
                // 2100 PQ:
                // let maxLuminance = obs_get_video_hdr_nominal_peak_level()
                // swapChain.layer.edrMetadata = .hdr10(
                //     minLuminance: 0.0001, maxLuminance: maxLuminance, opticalOutputScale: 10000)
                // HLG:
                // swapChain.layer.edrMetadata = .hlg
                swapChain.layer.pixelFormat = .rgba16Float
                swapChain.layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)
                swapChain.layer.wantsExtendedDynamicRangeContent = true
                swapChain.layer.edrMetadata = nil
                swapChain.colorRange = .hdrPQ
                swapChain.renderTarget = nil
            }
        case VIDEO_CS_2100_HLG:
            if swapChain.colorRange != .hdrHLG {
                swapChain.layer.pixelFormat = .rgba16Float
                swapChain.layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)
                swapChain.layer.wantsExtendedDynamicRangeContent = true
                swapChain.layer.edrMetadata = nil
                swapChain.colorRange = .hdrHLG
                swapChain.renderTarget = nil
            }
        default:
            if swapChain.colorRange != .sdr {
                swapChain.layer.pixelFormat = .bgra8Unorm_srgb
                swapChain.layer.colorspace = CGColorSpace(name: CGColorSpace.sRGB)
                swapChain.layer.wantsExtendedDynamicRangeContent = false
                swapChain.layer.edrMetadata = nil
                swapChain.colorRange = .sdr
                swapChain.renderTarget = nil
            }
        }
    } else {
        if swapChain.colorRange != .sdr {
            swapChain.layer.pixelFormat = .bgra8Unorm_srgb
            swapChain.layer.colorspace = CGColorSpace(name: CGColorSpace.sRGB)
            swapChain.layer.wantsExtendedDynamicRangeContent = false
            swapChain.layer.edrMetadata = nil
            swapChain.colorRange = .sdr
            swapChain.renderTarget = nil
        }
    }

    switch swapChain.colorRange {
    case .hdrHLG, .hdrPQ:
        device.renderState.gsColorSpace = GS_CS_709_EXTENDED
        device.renderState.useSRGBGamma = false
    case .sdr:
        device.renderState.gsColorSpace = GS_CS_SRGB
        device.renderState.useSRGBGamma = true
    }

    if let renderTarget = swapChain.renderTarget {
        device.renderState.renderTarget = renderTarget
    } else {
        let descriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: swapChain.layer.pixelFormat,
            width: Int(swapChain.layer.drawableSize.width),
            height: Int(swapChain.layer.drawableSize.height),
            mipmapped: false)
        descriptor.usage = [.renderTarget]

        guard let renderTarget = MetalTexture(device: device, descriptor: descriptor) else { return }

        swapChain.renderTarget = renderTarget
        device.renderState.renderTarget = renderTarget
    }

    device.renderState.depthStencilAttachment = nil
    device.renderState.isRendertargetChanged = true
    device.renderState.isInDisplaysRenderStage = true
    device.renderState.swapChain = swapChain
}

/// Requests deinitialization of the ``OBSSwapChain`` instance shared with `libobs`
/// - Parameter swapChain: Opaque pointer to ``OBSSwapChain`` instance shared with `libobs`
///
/// The ownership of the shared pointer is transferred into this function and the instance is placed under Swift's
/// memory management again.
@_cdecl("gs_swapchain_destroy")
public func gs_swapchain_destroy(swapChain: UnsafeMutableRawPointer) {
    let swapChain = retained(swapChain) as OBSSwapChain

    swapChain.discard = true
}

/* ==== obs-studio-32.1.0-sources/libobs-metal/MetalError.swift ==== */

/******************************************************************************
    Copyright (C) 2024 by Patrick Heyer

    This program is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by the
    Free Software Foundation, either version 2 of the License, or (at your
    option) any later version.

    This program is distributed in the hope that it will be useful, but
    WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
    Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

enum MetalError {
    enum MTLCommandQueueError: Error, CustomStringConvertible {
        case commandBufferCreationFailure

        var description: String {
            switch self {
            case .commandBufferCreationFailure:
                "MTLCommandQueue failed to create command buffer"
            }
        }
    }

    enum MTLDeviceError: Error, CustomStringConvertible {
        case commandQueueCreationFailure
        case displayLinkCreationFailure
        case bufferCreationFailure(String)
        case shaderCompilationFailure(String)
        case pipelineStateCreationFailure
        case depthStencilStateCreationFailure
        case samplerStateCreationFailure

        var description: String {
            switch self {
            case .commandQueueCreationFailure:
                "MTLDevice failed to create command queue"
            case .displayLinkCreationFailure:
                "MTLDevice failed to create CVDisplayLink for projector output"
            case .bufferCreationFailure(_):
                "MTLDevice failed to create buffer"
            case .shaderCompilationFailure(_):
                "MTLDevice failed to create shader library and function"
            case .pipelineStateCreationFailure:
                "MTLDevice failed to create render pipeline state"
            case .depthStencilStateCreationFailure:
                "MTLDevice failed to create depth stencil state"
            case .samplerStateCreationFailure:
                "MTLDevice failed to create sampler state with provided descriptor"
            }
        }
    }

    enum MTLCommandBufferError: Error, CustomStringConvertible {
        case encoderCreationFailure

        var description: String {
            switch self {
            case .encoderCreationFailure:
                "MTLCommandBuffer failed to create command encoder"
            }
        }
    }

    enum MetalShaderError: Error, CustomStringConvertible {
        case missingVertexDescriptor
        case missingSamplerDescriptors

        var description: String {
            switch self {
            case .missingVertexDescriptor:
                "MetalShader of type vertex requires a vertex descriptor"
            case .missingSamplerDescriptors:
                "MetalShader of type fragment requires at least a single sampler descriptor"
            }
        }
    }

    enum OBSShaderParserError: Error, CustomStringConvertible {
        case parseFail(String)
        case unsupportedType
        case missingNextToken
        case unexpectedToken
        case missingMainFunction

        var description: String {
            switch self {
            case .parseFail:
                "Failed to parse provided shader string"
            case .unsupportedType:
                "Provided GS type is not convertible to a Metal type"
            case .missingNextToken:
                "Required next token not found in parser token collection"
            case .unexpectedToken:
                "Required next token had unexpected type in parser token collection"
            case .missingMainFunction:
                "Shader has no main function"
            }
        }
    }

    enum OBSShaderError: Error, CustomStringConvertible {
        case unsupportedType
        case parseFail(String)
        case parseError(String)
        case transpileError(String)

        var description: String {
            switch self {
            case .unsupportedType:
                "Unsupported Metal shader type"
            case .parseFail(_):
                "OBS shader parser failed to parse effect"
            case .parseError(_):
                "OBS shader parser encountered warnings and/or errors while parsing effect"
            case .transpileError(_):
                "Transpiling OBS effects file into MSL shader failed"
            }
        }
    }
}

/* ==== obs-studio-32.1.0-sources/libobs-metal/MetalTexture.swift ==== */

/******************************************************************************
    Copyright (C) 2024 by Patrick Heyer

    This program is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by the
    Free Software Foundation, either version 2 of the License, or (at your
    option) any later version.

    This program is distributed in the hope that it will be useful, but
    WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
    Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

import CoreVideo
import Foundation
import Metal

private let bgraSurfaceFormat = kCVPixelFormatType_32BGRA  // 0x42_47_52_41
private let l10rSurfaceFormat = kCVPixelFormatType_ARGB2101010LEPacked  // 0x6C_31_30_72

enum MetalTextureMapMode {
    case unmapped
    case read
    case write
}

/// Struct used for data exchange between ``MetalTexture`` and `libobs` API functions during mapping and unmapping
/// of textures.
struct MetalTextureMapping {
    let mode: MetalTextureMapMode
    let rowSize: Int
    let data: UnsafeMutableRawPointer
}

/// Convenience class for managing ``MTLTexture`` objects
class MetalTexture {
    private let descriptor: MTLTextureDescriptor
    private var mappingMode: MetalTextureMapMode
    private let resourceID: UUID

    weak var device: MetalDevice?

    var data: UnsafeMutableRawPointer?
    var hasPendingWrites: Bool = false
    var sRGBtexture: MTLTexture?
    var texture: MTLTexture
    var stageBuffer: MetalStageBuffer?

    /// Binds the provided `IOSurfaceRef` to a new `MTLTexture` instance
    /// - Parameters:
    ///   - device: `MTLDevice` instance to use for texture object creation
    ///   - surface: `IOSurfaceRef` reference to an existing `IOSurface`
    /// - Returns: `MTLTexture` instance if the texture was created successfully, `nil` otherwise
    private static func bindSurface(device: MetalDevice, surface: IOSurfaceRef) -> MTLTexture? {
        guard let pixelFormat = MTLPixelFormat.init(osType: IOSurfaceGetPixelFormat(surface)) else {
            assertionFailure("MetalDevice: IOSurface pixel format is not supported")
            return nil
        }

        let descriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: pixelFormat,
            width: IOSurfaceGetWidth(surface),
            height: IOSurfaceGetHeight(surface),
            mipmapped: false
        )
        descriptor.usage = [.shaderRead]

        let texture = device.device.makeTexture(descriptor: descriptor, iosurface: surface, plane: 0)

        return texture
    }

    /// Creates a new ``MetalTexture`` instance with the provided `MTLTextureDescriptor`
    /// - Parameters:
    ///   - device: `MTLDevice` instance to use for texture object creation
    ///   - descriptor: `MTLTextureDescriptor` to use for texture object creation
    init?(device: MetalDevice, descriptor: MTLTextureDescriptor) {
        self.device = device

        let texture = device.device.makeTexture(descriptor: descriptor)

        guard let texture else {
            assertionFailure(
                "MetalTexture: Failed to create texture with size \(descriptor.width)x\(descriptor.height)")
            return nil
        }

        self.texture = texture
        self.resourceID = UUID()
        self.mappingMode = .unmapped
        self.descriptor = texture.descriptor

        updateSRGBView()
    }

    /// Creates a new ``MetalTexture`` instance with the provided `IOSurfaceRef`
    /// - Parameters:
    ///   - device: `MTLDevice` instance to use for texture object creation
    ///   - surface: `IOSurfaceRef` to use for texture object creation
    init?(device: MetalDevice, surface: IOSurfaceRef) {
        self.device = device

        let texture = MetalTexture.bindSurface(device: device, surface: surface)

        guard let texture else {
            assertionFailure("MetalTexture: Failed to create texture with IOSurface")
            return nil
        }

        self.texture = texture
        self.resourceID = UUID()
        self.mappingMode = .unmapped
        self.descriptor = texture.descriptor

        updateSRGBView()
    }

    /// Creates a new ``MetalTexture`` instance with the provided `MTLTexture`
    /// - Parameters:
    ///   - device: `MTLDevice` instance to use for future texture operations
    ///   - texture: `MTLTexture` to wrap in the ``MetalTexture`` instance
    init?(device: MetalDevice, texture: MTLTexture) {
        self.device = device
        self.texture = texture
        self.resourceID = UUID()
        self.mappingMode = .unmapped
        self.descriptor = texture.descriptor

        updateSRGBView()
    }

    /// Creates a new ``MetalTexture`` instance with a placeholder texture
    /// - Parameter device: `MTLDevice` instance to use for future texture operations
    ///
    /// This constructor creates a "placeholder" object that can be shared with `libobs` or updated with an actual
    /// `MTLTexture` later.
    init?(device: MetalDevice) {
        self.device = device

        let descriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .bgra8Unorm, width: 2, height: 2, mipmapped: false)

        guard let texture = device.device.makeTexture(descriptor: descriptor) else {
            assertionFailure("MetalTexture: Failed to create placeholder texture object")
            return nil
        }

        self.texture = texture
        self.sRGBtexture = nil
        self.resourceID = UUID()
        self.mappingMode = .unmapped
        self.descriptor = texture.descriptor
    }

    /// Updates the ``MetalTexture`` with a new `IOSurfaceRef`
    /// - Parameter surface: Updated `IOSurfaceRef` to a new `IOSurface`
    /// - Returns: `true` if the update was successful, `false` otherwise
    ///
    /// "Rebinding" was used with the OpenGL backend, but is not available in Metal. Instead, a new `MTLTexture` is
    /// created with the provided `IOSurfaceRef` and the ``MetalTexture`` is updated accordingly.
    func rebind(surface: IOSurfaceRef) -> Bool {
        guard let device = self.device, let texture = MetalTexture.bindSurface(device: device, surface: surface)
        else {
            assertionFailure("MetalTexture: Failed to rebind IOSurface to texture")
            return false
        }

        self.texture = texture
        updateSRGBView()

        return true
    }

    /// Creates a `MTLTextureView` for the texture wrapped by the ``MetalTexture`` instance with a corresponding
    /// sRGB pixel format, if the texture's pixel format has an appropriate sRGB variant.
    func updateSRGBView() {
        guard !texture.isFramebufferOnly else {
            self.sRGBtexture = nil
            return
        }

        let sRGBFormat: MTLPixelFormat? =
            switch texture.pixelFormat {
            case .bgra8Unorm: .bgra8Unorm_srgb
            case .rgba8Unorm: .rgba8Unorm_srgb
            case .r8Unorm: .r8Unorm_srgb
            case .rg8Unorm: .rg8Unorm_srgb
            case .bgra10_xr: .bgra10_xr_srgb
            default: nil
            }

        if let sRGBFormat {
            self.sRGBtexture = texture.makeTextureView(pixelFormat: sRGBFormat)
        } else {
            self.sRGBtexture = nil
        }
    }

    /// Downloads pixel data from the wrapped `MTLTexture` to the memory location provided by a pointer.
    /// - Parameters:
    ///   - data: Pointer to memory that should receive the texture data
    ///   - mipmapLevel: Mipmap level of the texture to copy data from
    ///
    /// > Important: The access of texture data is neither protected nor synchronized. If any draw calls to the
    /// texture take place while this function is executed, the downloaded data will reflect this. Use explicit
    /// synchronization before initiating a download to prevent this.
    func download(data: UnsafeMutableRawPointer, mipmapLevel: Int = 0) {
        let mipmapWidth = texture.width >> mipmapLevel
        let mipmapHeight = texture.height >> mipmapLevel
        let rowSize = mipmapWidth * texture.pixelFormat.bytesPerPixel!

        let region = MTLRegionMake2D(0, 0, mipmapWidth, mipmapHeight)
        texture.getBytes(data, bytesPerRow: rowSize, from: region, mipmapLevel: mipmapLevel)
    }

    /// Uploads pixel data into the wrapped `MTLTexture` from the memory location provided by a pointer.
    /// - Parameters:
    ///   - data: Pointer to memory that contains the texture data
    ///   - mipmapLevels: Mipmap levels of the texture to copy data into
    ///
    /// > Important: The write access of texture data is neither protected nor synchronized. If any draw calls use
    /// this texture for reading or writing while this function is executed, the upload might have been incomplete
    /// or the data might have been overwritten by the GPU. Use explicit synchronization before initiating an
    /// upload to prevent this.
    func upload(data: UnsafePointer<UnsafeRawPointer?>, mipmapLevels: Int) {
        let bytesPerPixel = texture.pixelFormat.bytesPerPixel!

        switch texture.textureType {
        case .type2D, .typeCube:
            let textureCount = if texture.textureType == .typeCube { 6 } else { 1 }
            let data = UnsafeBufferPointer(start: data, count: (textureCount * mipmapLevels))

            for i in 0..<textureCount {
                for mipmapLevel in 0..<mipmapLevels {
                    guard let data = data[(i * mipmapLevels) + mipmapLevel] else { continue }

                    let mipmapWidth = texture.width >> mipmapLevel
                    let mipmapHeight = texture.height >> mipmapLevel
                    let rowSize = mipmapWidth * bytesPerPixel

                    let region = MTLRegionMake2D(0, 0, mipmapWidth, mipmapHeight)
                    texture.replace(
                        region: region, mipmapLevel: mipmapLevel, slice: i, withBytes: data,
                        bytesPerRow: rowSize, bytesPerImage: 0)
                }
            }
        case .type3D:
            let data = UnsafeBufferPointer(start: data, count: mipmapLevels)

            for (mipmapLevel, mipmapData) in data.enumerated() {
                guard let mipmapData else { break }

                let mipmapWidth = texture.width >> mipmapLevel
                let mipmapHeight = texture.height >> mipmapLevel
                let mipmapDepth = texture.depth >> mipmapLevel
                let rowSize = mipmapWidth * bytesPerPixel
                let imageSize = rowSize * mipmapHeight

                let region = MTLRegionMake3D(0, 0, 0, mipmapWidth, mipmapHeight, mipmapDepth)
                texture.replace(
                    region: region,
                    mipmapLevel: mipmapLevel,
                    slice: 0,
                    withBytes: mipmapData,
                    bytesPerRow: rowSize,
                    bytesPerImage: imageSize
                )
            }
        default:
            fatalError("MetalTexture: Unsupported texture type \(texture.textureType)")
        }

        if texture.mipmapLevelCount > 1 {
            let device = self.device!

            try? device.ensureCommandBuffer()

            guard let buffer = device.renderState.commandBuffer, let encoder = buffer.makeBlitCommandEncoder()
            else {
                assertionFailure("MetalTexture: Failed to create command buffer for mipmap generation")
                return
            }

            encoder.generateMipmaps(for: texture)
            encoder.endEncoding()
        }
    }

    /// Emulates the "map" operation available in Direct3D, providing a pointer for texture uploads or downloads
    /// - Parameters:
    ///   - mode: Map mode to use (writing or reading)
    ///   - mipmapLevel: Mipmap level to map
    /// - Returns: A ``MetalTextureMapping`` struct that provides the result of the mapping
    ///
    /// In Direct3D a "map" operation will do many things at once depending on the current state of its pipelines
    /// and the mapping mode used:
    /// * When mapped for writing, Direct3D will provide a pointer to CPU memory into which an application can
    ///   write new texture data.
    /// * When mapped for reading, Direct3D will provide a pointer to CPU memory into which it has copied the
    ///   contents of the texture.
    ///
    /// In either case, the texture will be blocked from access by the GPU until it is unmapped again. In some
    /// cases a "map" operation will also implicitly initiate a "flush" operation to ensure that pending GPU
    /// commands involving this texture are submitted before it becomes unavailable.
    ///
    /// Metal does not provide such a convenience method, and because `libobs` operates under the assumption that
    /// it has to copy its own data into a memory location provided by Direct3D, this has to be emulated explicitly
    /// here, albeit without the blocking of access to the texture.
    ///
    /// This function always needs to be balanced by an appropriate ``unmap`` call.
    func map(mode: MetalTextureMapMode, mipmapLevel: Int = 0) -> MetalTextureMapping?
{
        guard mappingMode == .unmapped else {
            assertionFailure("MetalTexture: Attempted to map already-mapped texture.")
            return nil
        }

        let mipmapWidth = texture.width >> mipmapLevel
        let mipmapHeight = texture.height >> mipmapLevel
        let rowSize = mipmapWidth * texture.pixelFormat.bytesPerPixel!
        let dataSize = rowSize * mipmapHeight

        // TODO: Evaluate whether a blit to/from a `MTLBuffer` with its `contents` pointer shared is more efficient
        let data = UnsafeMutableRawBufferPointer.allocate(
            byteCount: dataSize, alignment: MemoryLayout<UInt8>.alignment)

        guard let baseAddress = data.baseAddress else { return nil }

        if mode == .read {
            download(data: baseAddress, mipmapLevel: mipmapLevel)
        }

        self.data = baseAddress
        self.mappingMode = mode

        let mapping = MetalTextureMapping(
            mode: mode,
            rowSize: rowSize,
            data: baseAddress
        )

        return mapping
    }

    /// Emulates the "unmap" operation available in Direct3D
    /// - Parameter mipmapLevel: The mipmap level that is to be unmapped
    ///
    /// This function will replace the contents of the "mapped" texture with the data written into the memory
    /// provided by the "mapping".
    ///
    /// As such, this function always has to balance the corresponding ``map`` call to ensure that the data
    /// written into the provided memory location is written into the texture and the memory itself is deallocated.
    func unmap(mipmapLevel: Int = 0) {
        guard mappingMode != .unmapped else {
            assertionFailure("MetalTexture: Attempted to unmap an unmapped texture")
            return
        }

        let mipmapWidth = texture.width >> mipmapLevel
        let mipmapHeight = texture.height >> mipmapLevel
        let rowSize = mipmapWidth * texture.pixelFormat.bytesPerPixel!
        let region = MTLRegionMake2D(0, 0, mipmapWidth, mipmapHeight)

        if let textureData = self.data {
            if self.mappingMode == .write {
                texture.replace(
                    region: region,
                    mipmapLevel: mipmapLevel,
                    withBytes: textureData,
                    bytesPerRow: rowSize
                )
            }

            textureData.deallocate()
            self.data = nil
        }

        self.mappingMode = .unmapped
    }

    /// Gets an opaque pointer for the ``MetalTexture`` instance and increases its reference count by one
    /// - Returns: `OpaquePointer` to the class instance
    ///
    /// > Note: Use this method when the instance is to be shared via an `OpaquePointer` and needs to be retained.
    /// Any opaque pointer shared this way needs to be converted into a retained reference again to ensure
    /// automatic deinitialization by the Swift runtime.
    func getRetained() -> OpaquePointer {
        let retained = Unmanaged.passRetained(self).toOpaque()

        return OpaquePointer(retained)
    }

    /// Gets an opaque pointer for the ``MetalTexture`` instance without increasing its reference count
    /// - Returns: `OpaquePointer` to the class instance
    func getUnretained() -> OpaquePointer {
        let unretained = Unmanaged.passUnretained(self).toOpaque()

        return OpaquePointer(unretained)
    }
}

/// Extends the ``MetalTexture`` class with comparison operators and a hash function to enable use inside a `Set`
/// collection
extension MetalTexture: Hashable {
    static func == (lhs: MetalTexture, rhs: MetalTexture) -> Bool {
        lhs.resourceID == rhs.resourceID
    }

    static func != (lhs: MetalTexture, rhs: MetalTexture) -> Bool {
        lhs.resourceID != rhs.resourceID
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(resourceID)
    }
}

/* ==== obs-studio-32.1.0-sources/libobs-metal/metal-zstencilbuffer.swift ==== */

/******************************************************************************
    Copyright (C) 2024 by Patrick Heyer

    This program is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by the Free
Software Foundation, either version 2 of the License, or (at your option)
    any later version.

    This program is distributed in the hope that it will be useful, but
    WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
    Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

import Foundation
import Metal

/// Creates a ``MetalTexture`` for use as a depth stencil attachment
/// - Parameters:
///   - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
///   - width: Desired width of the texture
///   - height: Desired height of the texture
///   - format: Desired color format of the depth stencil attachment as described by `gs_zstencil_format`
/// - Returns: Opaque pointer to a created ``MetalTexture`` instance or a `NULL` pointer on error
@_cdecl("device_zstencil_create")
public func device_zstencil_create(device: UnsafeRawPointer, width: UInt32, height: UInt32, format: gs_zstencil_format)
    -> OpaquePointer?
{
    let device: MetalDevice = unretained(device)

    let descriptor = MTLTextureDescriptor.init(
        width: width,
        height: height,
        colorFormat: format
    )

    guard let descriptor, let texture = MetalTexture(device: device, descriptor: descriptor) else {
        return nil
    }

    return texture.getRetained()
}

/// Gets the ``MetalTexture`` instance used as the depth stencil attachment for the current pipeline
/// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs`
/// - Returns: Opaque pointer to the ``MetalTexture`` instance if any is set, `nil` otherwise
@_cdecl("device_get_zstencil_target")
public func device_get_zstencil_target(device: UnsafeRawPointer) -> OpaquePointer? {
    let device: MetalDevice = unretained(device)

    guard let stencilAttachment = device.renderState.depthStencilAttachment else {
        return nil
    }

    return stencilAttachment.getUnretained()
}

/// Requests deinitialization of the ``MetalTexture`` instance shared with `libobs`
/// - Parameter zstencil: Opaque pointer to ``MetalTexture`` instance shared with `libobs`
///
/// The ownership of the shared pointer is transferred into this function and the instance is placed under Swift's
/// memory management again.
@_cdecl("gs_zstencil_destroy")
public func gs_zstencil_destroy(zstencil: UnsafeRawPointer) {
    let _ = retained(zstencil) as MetalTexture
}

/* ==== obs-studio-32.1.0-sources/libobs-metal/metal-texture3d.swift ==== */

/******************************************************************************
    Copyright (C) 2024 by Patrick Heyer

    This program is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by the
    Free Software Foundation, either version 2 of the License, or (at your
    option) any later version.

    This program is distributed in the hope that it will be useful, but
    WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
    Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ import Foundation import Metal /// Creates a three-dimensional ``MetalTexture`` instance with the specified usage options and the raw image data /// (if provided) /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - size: Desired size of the texture /// - color_format: Desired color format of the texture as described by `gs_color_format` /// - levels: Amount of mip map levels to generate for the texture /// - data: Optional pointer to raw pixel data per mip map level /// - flags: Texture resource use information encoded as `libobs` bitfield /// - Returns: Opaque pointer to a created ``MetalTexture`` instance or a `NULL` pointer on error /// /// This function will create a new ``MTLTexture`` wrapped within a ``MetalTexture`` class and also upload any pixel /// data if non-`NULL` pointers have been provided via the `data` argument. /// /// > Important: If mipmap generation is requested, execution will be blocked by waiting for the blit command encoder /// to generate the mipmaps. @_cdecl("device_voltexture_create") public func device_voltexture_create( device: UnsafeRawPointer, width: UInt32, height: UInt32, depth: UInt32, color_format: gs_color_format, levels: UInt32, data: UnsafePointer?>?, flags: UInt32 ) -> OpaquePointer? 
{ let device = Unmanaged<MetalDevice>.fromOpaque(device).takeUnretainedValue() let descriptor = MTLTextureDescriptor.init( type: .type3D, width: width, height: height, depth: depth, colorFormat: color_format, levels: levels, flags: flags ) guard let descriptor, let texture = MetalTexture(device: device, descriptor: descriptor) else { return nil } if let data { texture.upload(data: data, mipmapLevels: descriptor.mipmapLevelCount) } return texture.getRetained() } /// Requests deinitialization of the ``MetalTexture`` instance shared with `libobs` /// - Parameter voltex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// /// The ownership of the shared pointer is transferred into this function and the instance is placed under /// Swift's memory management again. @_cdecl("gs_voltexture_destroy") public func gs_voltexture_destroy(voltex: UnsafeRawPointer) { let _ = retained(voltex) as MetalTexture } /// Gets the width of the texture wrapped by the ``MetalTexture`` instance /// - Parameter voltex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Width of the texture @_cdecl("gs_voltexture_get_width") public func gs_voltexture_get_width(voltex: UnsafeRawPointer) -> UInt32 { let texture: MetalTexture = unretained(voltex) return UInt32(texture.texture.width) } /// Gets the height of the texture wrapped by the ``MetalTexture`` instance /// - Parameter voltex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Height of the texture @_cdecl("gs_voltexture_get_height") public func gs_voltexture_get_height(voltex: UnsafeRawPointer) -> UInt32 { let texture: MetalTexture = unretained(voltex) return UInt32(texture.texture.height) } /// Gets the depth of the texture wrapped by the ``MetalTexture`` instance /// - Parameter voltex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Depth of the texture @_cdecl("gs_voltexture_get_depth") public func gs_voltexture_get_depth(voltex: 
UnsafeRawPointer) -> UInt32 { let texture: MetalTexture = unretained(voltex) return UInt32(texture.texture.depth) } /// Gets the color format of the texture wrapped by the ``MetalTexture`` instance /// - Parameter voltex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Color format as defined by the `gs_color_format` enumeration @_cdecl("gs_voltexture_get_color_format") public func gs_voltexture_get_color_format(voltex: UnsafeRawPointer) -> gs_color_format { let texture: MetalTexture = unretained(voltex) return texture.texture.pixelFormat.gsColorFormat } obs-studio-32.1.0-sources/libobs-metal/CVPixelFormat+Extensions.swift000644 001751 001751 00000003664 15153330235 026513 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ import CoreVideo import Metal extension OSType { /// Conversion of CoreVideo pixel formats into corresponding Metal pixel formats var mtlFormat: MTLPixelFormat? 
{ switch self { case kCVPixelFormatType_OneComponent8: return .r8Unorm case kCVPixelFormatType_OneComponent16Half: return .r16Float case kCVPixelFormatType_OneComponent32Float: return .r32Float case kCVPixelFormatType_TwoComponent8: return .rg8Unorm case kCVPixelFormatType_TwoComponent16Half: return .rg16Float case kCVPixelFormatType_TwoComponent32Float: return .rg32Float case kCVPixelFormatType_32BGRA: return .bgra8Unorm case kCVPixelFormatType_32RGBA: return .rgba8Unorm case kCVPixelFormatType_64RGBAHalf: return .rgba16Float case kCVPixelFormatType_128RGBAFloat: return .rgba32Float case kCVPixelFormatType_ARGB2101010LEPacked: return .bgr10a2Unorm default: return nil } } } obs-studio-32.1.0-sources/libobs-metal/MTLTextureDescriptor+Extensions.swift000644 001751 001751 00000006264 15153330235 030103 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. 
******************************************************************************/ import Metal extension MTLTextureDescriptor { /// Convenience initializer for a texture descriptor with `libobs` data /// - Parameters: /// - type: Metal texture type /// - width: Width of texture /// - height: Height of texture /// - depth: Depth of texture /// - colorFormat: `libobs` color format for the texture /// - levels: Mip map levels /// - flags: Additional usage flags as `libobs` bitfield convenience init?( type: MTLTextureType, width: UInt32, height: UInt32, depth: UInt32, colorFormat: gs_color_format, levels: UInt32, flags: UInt32 ) { let arrayLength: Int switch type { case .type2D: arrayLength = 1 case .type3D: arrayLength = 1 case .typeCube: arrayLength = 6 default: assertionFailure("MTLTextureDescriptor: Unsupported texture type for libobs initializer") return nil } self.init() self.textureType = type self.pixelFormat = colorFormat.mtlFormat self.width = Int(width) self.height = Int(height) self.depth = Int(depth) self.sampleCount = 1 self.arrayLength = arrayLength self.cpuCacheMode = .defaultCache self.allowGPUOptimizedContents = true self.hazardTrackingMode = .default if (Int32(flags) & GS_BUILD_MIPMAPS) != 0 { self.mipmapLevelCount = Int(levels) } else { self.mipmapLevelCount = 1 } if (Int32(flags) & GS_RENDER_TARGET) != 0 { self.storageMode = .private self.usage = [.shaderRead, .renderTarget] } else { self.storageMode = .shared self.usage = [.shaderRead] } } convenience init?(width: UInt32, height: UInt32, colorFormat: gs_zstencil_format) { self.init() self.textureType = .type2D self.pixelFormat = colorFormat.mtlFormat self.width = Int(width) self.height = Int(height) self.depth = 1 self.sampleCount = 1 self.arrayLength = 1 self.cpuCacheMode = .defaultCache self.allowGPUOptimizedContents = true self.hazardTrackingMode = .default self.mipmapLevelCount = 1 self.storageMode = .private self.usage = [.shaderRead] } } 
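The `retained(_:)`, `unretained(_:)`, and `getRetained()` helpers used throughout these files follow Swift's `Unmanaged` bridging pattern for handing class instances across the C boundary to `libobs`. A minimal, self-contained sketch of that pattern (using a hypothetical `Resource` class in place of the real `MetalTexture`/`MetalDevice` types; the helper bodies here are assumptions mirroring how the calls are used above, not the project's actual implementation):

```swift
import Foundation

final class Resource {
    let name: String
    init(name: String) { self.name = name }
}

// Hand a +1-retained opaque pointer to C; mirrors the `getRetained()` calls
// that every `*_create` function uses to share an instance with libobs.
func getRetained(_ object: Resource) -> OpaquePointer {
    OpaquePointer(Unmanaged.passRetained(object).toOpaque())
}

// Borrow the instance without touching its reference count; mirrors the
// `unretained(_:)` helper used by getters and `device_load_*` calls.
func unretained(_ pointer: OpaquePointer) -> Resource {
    Unmanaged<Resource>.fromOpaque(UnsafeRawPointer(pointer)).takeUnretainedValue()
}

// Take ownership back; mirrors the `retained(_:)` helper used by the
// `*_destroy` functions, after which ARC can deinitialize the object.
func retained(_ pointer: OpaquePointer) -> Resource {
    Unmanaged<Resource>.fromOpaque(UnsafeRawPointer(pointer)).takeRetainedValue()
}

let handle = getRetained(Resource(name: "texture"))
print(unretained(handle).name)  // borrow: reference count unchanged
let owned = retained(handle)    // balances the passRetained from creation
print(owned.name)
```

Every `*_create` function hands out a +1 retained pointer via `passRetained`, every getter borrows with `takeUnretainedValue`, and every `*_destroy` function balances the retain with `takeRetainedValue`, letting the instance deinitialize once the returned reference leaves scope.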
obs-studio-32.1.0-sources/libobs-metal/metal-vertexbuffer.swift000644 001751 001751 00000013074 15153330235 025500 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ /// Creates a new ``MetalVertexBuffer`` instance with the given vertex buffer data and usage flags /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - data: Pointer to `gs_vb_data` vertex buffer data created by `libobs` /// - flags: Usage flags encoded as `libobs` bitmask /// - Returns: Opaque pointer to a new ``MetalVertexBuffer`` instance if successful, `nil` otherwise /// /// > Note: The ownership of the memory pointed to by `data` is implicitly transferred to the ``MetalVertexBuffer`` /// instance, but is not managed by Swift. 
@_cdecl("device_vertexbuffer_create") public func device_vertexbuffer_create(device: UnsafeRawPointer, data: UnsafeMutablePointer<gs_vb_data>, flags: UInt32) -> OpaquePointer { let device: MetalDevice = unretained(device) let vertexBuffer = MetalVertexBuffer( device: device, data: data, dynamic: (Int32(flags) & GS_DYNAMIC) != 0 ) return vertexBuffer.getRetained() } /// Requests the deinitialization of a shared ``MetalVertexBuffer`` instance /// - Parameter vertBuffer: Opaque pointer to ``MetalVertexBuffer`` instance shared with `libobs` /// /// The deinitialization is handled automatically by Swift after the ownership of the instance has been transferred /// into the function and becomes the last strong reference to it. After the function leaves its scope, the object will /// be deinitialized and deallocated automatically. /// /// > Note: The vertex buffer data memory is implicitly owned by the ``MetalVertexBuffer`` instance and will be /// manually cleaned up and deallocated by the instance's ``deinit`` method. @_cdecl("gs_vertexbuffer_destroy") public func gs_vertexbuffer_destroy(vertBuffer: UnsafeRawPointer) { let _ = retained(vertBuffer) as MetalVertexBuffer } /// Sets up a ``MetalVertexBuffer`` as the vertex buffer for the current pipeline /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - vertBuffer: Opaque pointer to ``MetalVertexBuffer`` instance shared with `libobs` /// /// > Note: The reference count of the ``MetalVertexBuffer`` instance will not be increased by this call. /// /// > Important: If a `nil` pointer is provided as the vertex buffer, the vertex buffer will be _unset_. @_cdecl("device_load_vertexbuffer") public func device_load_vertexbuffer(device: UnsafeRawPointer, vertBuffer: UnsafeMutableRawPointer?) 
{ let device: MetalDevice = unretained(device) if let vertBuffer { device.renderState.vertexBuffer = unretained(vertBuffer) } else { device.renderState.vertexBuffer = nil } } /// Requests the vertex buffer's current data to be transferred into GPU memory /// - Parameter vertbuffer: Opaque pointer to ``MetalVertexBuffer`` instance shared with `libobs` /// /// This function will call `gs_vertexbuffer_flush_direct` with a `nil` pointer as the data pointer. @_cdecl("gs_vertexbuffer_flush") public func gs_vertexbuffer_flush(vertbuffer: UnsafeRawPointer) { gs_vertexbuffer_flush_direct(vertbuffer: vertbuffer, data: nil) } /// Requests the vertex buffer to be updated with the provided data and then transferred into GPU memory /// - Parameters: /// - vertbuffer: Opaque pointer to ``MetalVertexBuffer`` instance shared with `libobs` /// - data: Opaque pointer to vertex buffer data set up by `libobs` /// /// This function is called to ensure that the vertex buffer data that is contained in the memory pointed at by the /// `data` argument is uploaded into GPU memory. /// /// If a `nil` pointer is provided, the data provided to the instance during creation will be used instead. @_cdecl("gs_vertexbuffer_flush_direct") public func gs_vertexbuffer_flush_direct(vertbuffer: UnsafeRawPointer, data: UnsafeMutablePointer<gs_vb_data>?) { let vertexBuffer: MetalVertexBuffer = unretained(vertbuffer) vertexBuffer.setupBuffers(data: data) } /// Returns an opaque pointer to the vertex buffer data associated with the ``MetalVertexBuffer`` instance /// - Parameter vertBuffer: Opaque pointer to ``MetalVertexBuffer`` instance shared with `libobs` /// - Returns: Opaque pointer to vertex buffer data in memory /// /// The returned opaque pointer represents the unchanged memory address that was provided for the creation of the vertex /// buffer object. /// /// > Warning: There is only limited memory safety associated with this pointer. 
It is implicitly owned and its /// lifetime is managed by the ``MetalVertexBuffer`` /// instance, but it was originally created by `libobs`. @_cdecl("gs_vertexbuffer_get_data") public func gs_vertexbuffer_get_data(vertBuffer: UnsafeRawPointer) -> UnsafeMutablePointer<gs_vb_data>? { let vertexBuffer: MetalVertexBuffer = unretained(vertBuffer) return vertexBuffer.vertexData } obs-studio-32.1.0-sources/libobs-metal/metal-texture2d.swift000644 001751 001751 00000060543 15153330235 024722 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. 
******************************************************************************/ import Foundation import Metal /// Creates a two-dimensional ``MetalTexture`` instance with the specified usage options and the raw image data (if /// provided) /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - width: Desired width of the texture /// - height: Desired height of the texture /// - color_format: Desired color format of the texture as described by `gs_color_format` /// - levels: Amount of mip map levels to generate for the texture /// - data: Optional pointer to raw pixel data per mip map level /// - flags: Texture resource use information encoded as `libobs` bitfield /// - Returns: Opaque pointer to a created ``MetalTexture`` instance or a `nil` pointer on error /// /// This function will create a new ``MTLTexture`` wrapped within a ``MetalTexture`` class and also upload any pixel /// data if non-`nil` pointers have been provided via the `data` argument. /// /// > Important: If mipmap generation is requested, execution will be blocked by waiting for the blit command encoder /// to generate the mipmaps. @_cdecl("device_texture_create") public func device_texture_create( device: UnsafeRawPointer, width: UInt32, height: UInt32, color_format: gs_color_format, levels: UInt32, data: UnsafePointer<UnsafePointer<UInt8>?>?, flags: UInt32 ) -> OpaquePointer? 
{ let device: MetalDevice = unretained(device) let descriptor = MTLTextureDescriptor.init( type: .type2D, width: width, height: height, depth: 1, colorFormat: color_format, levels: levels, flags: flags ) guard let descriptor, let texture = MetalTexture(device: device, descriptor: descriptor) else { return nil } if let data { texture.upload(data: data, mipmapLevels: descriptor.mipmapLevelCount) } return texture.getRetained() } /// Creates a ``MetalTexture`` instance for a cube texture with the specified usage options and the raw image data (if provided) /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - size: Desired edge length for the cube /// - color_format: Desired color format of the texture as described by `gs_color_format` /// - levels: Amount of mip map levels to generate for the texture /// - data: Optional pointer to raw pixel data per mip map level /// - flags: Texture resource use information encoded as `libobs` bitfield /// - Returns: Opaque pointer to created ``MetalTexture`` instance or a `nil` pointer on error /// /// This function will create a new ``MTLTexture`` wrapped within a ``MetalTexture`` class and also upload any pixel /// data if non-`nil` pointers have /// been provided via the `data` argument. /// /// > Important: If mipmap generation is requested, execution will be blocked by waiting for the blit command encoder /// to generate the mipmaps. @_cdecl("device_cubetexture_create") public func device_cubetexture_create( device: UnsafeRawPointer, size: UInt32, color_format: gs_color_format, levels: UInt32, data: UnsafePointer<UnsafePointer<UInt8>?>?, flags: UInt32 ) -> OpaquePointer? 
{ let device: MetalDevice = unretained(device) let descriptor = MTLTextureDescriptor.init( type: .typeCube, width: size, height: size, depth: 1, colorFormat: color_format, levels: levels, flags: flags ) guard let descriptor, let texture = MetalTexture(device: device, descriptor: descriptor) else { return nil } if let data { texture.upload(data: data, mipmapLevels: descriptor.mipmapLevelCount) } return texture.getRetained() } /// Requests deinitialization of the ``MetalTexture`` instance shared with `libobs` /// - Parameter texture: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// /// The ownership of the shared pointer is transferred into this function and the instance is placed under Swift's /// memory management again. @_cdecl("gs_texture_destroy") public func gs_texture_destroy(texture: UnsafeRawPointer) { let _ = retained(texture) as MetalTexture } /// Gets the type of the texture wrapped by the ``MetalTexture`` instance /// - Parameter texture: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Texture type identified by `gs_texture_type` enum value /// /// > Warning: As `libobs` has no enum value for "invalid texture type", there is no way for this function to signal /// that the wrapped texture has an incompatible ``MTLTextureType``. Instead of crashing the program (which would /// avoid undefined behavior), this function will return the 2D texture type value instead, which is incorrect, but is /// more in line with how OBS Studio handles undefined behavior. @_cdecl("device_get_texture_type") public func device_get_texture_type(texture: UnsafeRawPointer) -> gs_texture_type { let texture: MetalTexture = unretained(texture) return texture.texture.textureType.gsTextureType ?? 
GS_TEXTURE_2D } /// Requests the ``MetalTexture`` instance to be loaded as one of the current pipeline's fragment attachments in the /// specified texture slot /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - unit: Texture slot for fragment attachment /// /// OBS Studio expects pipelines to support fragment attachments for textures and samplers up to the amount defined in /// the `GS_MAX_TEXTURES` preprocessor directive. The order of these calls can be arbitrary, so at any point in time a /// request to load a texture into slot "5" can take place, even if slots 0 to 4 are empty. @_cdecl("device_load_texture") public func device_load_texture(device: UnsafeRawPointer, tex: UnsafeRawPointer, unit: UInt32) { let device: MetalDevice = unretained(device) let texture: MetalTexture = unretained(tex) device.renderState.textures[Int(unit)] = texture.texture } /// Requests an sRGB variant of a ``MetalTexture`` instance to be set as one of the current pipeline's fragment /// attachments in the specified texture slot. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - unit: Texture slot for fragment attachment /// /// OBS Studio expects pipelines to support fragment attachments for textures and samplers up to the amount defined in /// the `GS_MAX_TEXTURES` preprocessor directive. The order of these calls can be arbitrary, so at any point in time a /// request to load a texture into slot "5" can take place, even if slots 0 to 4 are empty. /// /// > Important: This variant of the texture load functions expects a texture whose color values are already sRGB gamma /// encoded and thus also expects that the color values used in the fragment shader will have been automatically /// decoded into linear gamma. 
If the ``MetalTexture`` instance has no dedicated ``MetalTexture/sRGBtexture`` instance, /// it will use the normal ``MetalTexture/texture`` instance instead. @_cdecl("device_load_texture_srgb") public func device_load_texture_srgb(device: UnsafeRawPointer, tex: UnsafeRawPointer, unit: UInt32) { let device: MetalDevice = unretained(device) let texture: MetalTexture = unretained(tex) if let sRGBtexture = texture.sRGBtexture { device.renderState.textures[Int(unit)] = sRGBtexture } else { device.renderState.textures[Int(unit)] = texture.texture } } /// Copies image data from a region in the source ``MetalTexture`` into a destination ``MetalTexture`` at the provided /// origin /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - dst: Opaque pointer to ``MetalTexture`` instance shared with `libobs`, used as destination for the copy operation /// - dst_x: X coordinate of the origin in the destination texture /// - dst_y: Y coordinate of the origin in the destination texture /// - src: Opaque pointer to ``MetalTexture`` instance shared with `libobs`, used as source for the copy operation /// - src_x: X coordinate of the origin in the source texture /// - src_y: Y coordinate of the origin in the source texture /// - src_w: Width of the region in the source texture /// - src_h: Height of the region in the source texture /// /// This function will fail if the destination texture's dimensions aren't large enough to hold the region copied from /// the source texture. This check takes the desired origin within the destination texture and the region's size /// into account and checks whether the total dimensions of the destination are large enough (starting at the /// destination origin) to hold the source's region. /// /// > Important: Execution will **not** be blocked; the copy operation will be committed to the command queue and /// executed at some point after this function returns. 
@_cdecl("device_copy_texture_region") public func device_copy_texture_region( device: UnsafeRawPointer, dst: UnsafeRawPointer, dst_x: UInt32, dst_y: UInt32, src: UnsafeRawPointer, src_x: UInt32, src_y: UInt32, src_w: UInt32, src_h: UInt32 ) { let device: MetalDevice = unretained(device) let source: MetalTexture = unretained(src) let destination: MetalTexture = unretained(dst) var sourceRegion = MTLRegion( origin: .init(x: Int(src_x), y: Int(src_y), z: 0), size: .init(width: Int(src_w), height: Int(src_h), depth: 1) ) let destinationRegion = MTLRegion( origin: .init(x: Int(dst_x), y: Int(dst_y), z: 0), size: .init(width: destination.texture.width, height: destination.texture.height, depth: 1) ) if sourceRegion.size.width == 0 { sourceRegion.size.width = source.texture.width - sourceRegion.origin.x } if sourceRegion.size.height == 0 { sourceRegion.size.height = source.texture.height - sourceRegion.origin.y } guard destinationRegion.size.width - destinationRegion.origin.x >= sourceRegion.size.width && destinationRegion.size.height - destinationRegion.origin.y >= sourceRegion.size.height else { OBSLog( .error, "device_copy_texture_region: Destination texture \(destinationRegion.size) is not large enough to hold source region (\(sourceRegion.size) at origin \(destinationRegion.origin))" ) return } do { try device.copyTextureRegion( source: source, sourceRegion: sourceRegion, destination: destination, destinationRegion: destinationRegion) } catch let error as MetalError.MTLDeviceError { OBSLog(.error, "device_copy_texture_region: \(error.description)") } catch { OBSLog(.error, "device_copy_texture_region: Unknown error occurred") } } /// Copies the image data from the source ``MetalTexture`` into the destination ``MetalTexture`` /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - dst: Opaque pointer to ``MetalTexture`` instance shared with `libobs`, used as destination for the copy /// operation /// - src: Opaque pointer to ``MetalTexture`` instance 
shared with `libobs`, used as source for the copy operation /// /// > Warning: This function requires that the source and destination texture dimensions are identical; otherwise the /// copy operation will fail. /// /// > Important: Execution will **not** be blocked; the copy operation will be committed to the command queue and /// executed at some point after this function returns. @_cdecl("device_copy_texture") public func device_copy_texture(device: UnsafeRawPointer, dst: UnsafeRawPointer, src: UnsafeRawPointer) { let device: MetalDevice = unretained(device) let source: MetalTexture = unretained(src) let destination: MetalTexture = unretained(dst) do { try device.copyTexture(source: source, destination: destination) } catch let error as MetalError.MTLDeviceError { OBSLog(.error, "device_copy_texture: \(error.description)") } catch { OBSLog(.error, "device_copy_texture: Unknown error occurred") } } /// Copies the image data from the source ``MetalTexture`` into the destination ``MetalStageBuffer`` and blocks execution /// until the copy operation has finished. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - dst: Opaque pointer to ``MetalStageBuffer`` instance shared with `libobs`, used as destination for the copy /// operation /// - src: Opaque pointer to ``MetalTexture`` instance shared with `libobs`, used as source for the copy operation /// /// > Important: Execution will be blocked by waiting for the blit command encoder to finish the copy operation. 
@_cdecl("device_stage_texture") public func device_stage_texture(device: UnsafeRawPointer, dst: UnsafeRawPointer, src: UnsafeRawPointer) { let device: MetalDevice = unretained(device) let source: MetalTexture = unretained(src) let destination: MetalStageBuffer = unretained(dst) do { try device.stageTextureToBuffer(source: source, destination: destination) } catch let error as MetalError.MTLDeviceError { OBSLog(.error, "device_stage_texture: \(error.description)") } catch { OBSLog(.error, "device_stage_texture: Unknown error occurred") } } /// Gets the width of the texture wrapped by the ``MetalTexture`` instance /// - Parameter tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Width of the texture @_cdecl("gs_texture_get_width") public func device_texture_get_width(tex: UnsafeRawPointer) -> UInt32 { let texture: MetalTexture = unretained(tex) return UInt32(texture.texture.width) } /// Gets the height of the texture wrapped by the ``MetalTexture`` instance /// - Parameter tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Height of the texture @_cdecl("gs_texture_get_height") public func device_texture_get_height(tex: UnsafeRawPointer) -> UInt32 { let texture: MetalTexture = unretained(tex) return UInt32(texture.texture.height) } /// Gets the color format of the texture wrapped by the ``MetalTexture`` instance /// - Parameter tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Color format as defined by the `gs_color_format` enumeration @_cdecl("gs_texture_get_color_format") public func gs_texture_get_color_format(tex: UnsafeRawPointer) -> gs_color_format { let texture: MetalTexture = unretained(tex) return texture.texture.pixelFormat.gsColorFormat } /// Allocates memory for an update of the texture's image data wrapped by the ``MetalTexture`` instance. 
/// - Parameters: /// - tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - ptr: Pointer to memory for the raw image data /// - linesize: Pointer to integer for the row size of the texture /// - Returns: `true` if the mapping memory was allocated successfully, `false` otherwise /// /// Metal does not provide "map" and "unmap" operations as they exist in Direct3D11, as resource management and /// synchronization needs to be handled explicitly by the application. Thus "mapping" just means that enough memory for /// raw image data is allocated and an unmanaged pointer to that memory is shared with `libobs` for writing the image data. /// /// To ensure that the data written into the memory provided by this function is actually used to update the texture, /// the corresponding function `gs_texture_unmap` needs to be used. /// /// > Important: This function can only be used to **push** new image data into the texture. To _pull_ image data from /// the texture, use a stage surface instead. @_cdecl("gs_texture_map") public func gs_texture_map( tex: UnsafeRawPointer, ptr: UnsafeMutablePointer<UnsafeMutableRawPointer>, linesize: UnsafeMutablePointer<UInt32> ) -> Bool { let texture: MetalTexture = unretained(tex) guard texture.texture.textureType == .type2D, let device = texture.device else { return false } let stageBuffer: MetalStageBuffer if texture.stageBuffer == nil || (texture.stageBuffer!.width != texture.texture.width || texture.stageBuffer!.height != texture.texture.height) { guard let buffer = MetalStageBuffer( device: device, width: texture.texture.width, height: texture.texture.height, format: texture.texture.pixelFormat ) else { OBSLog(.error, "gs_texture_map: Unable to create MetalStageBuffer for mapping texture") return false } texture.stageBuffer = buffer stageBuffer = buffer } else { stageBuffer = texture.stageBuffer! } ptr.pointee = stageBuffer.buffer.contents() linesize.pointee = UInt32(stageBuffer.width * stageBuffer.format.bytesPerPixel!) 
return true } /// Writes back raw image data into the texture wrapped by the ``MetalTexture`` instance /// - Parameter tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// /// This function needs to be used in tandem with `gs_texture_map`, which allocates memory for raw image data that /// should be used in an update of the wrapped `MTLTexture`. This function will then actually replace the image data /// in the texture with that raw image data and deallocate the memory that was allocated during `gs_texture_map`. @_cdecl("gs_texture_unmap") public func gs_texture_unmap(tex: UnsafeRawPointer) { let texture: MetalTexture = unretained(tex) guard texture.texture.textureType == .type2D, let stageBuffer = texture.stageBuffer, let device = texture.device else { return } do { try device.stageBufferToTexture(source: stageBuffer, destination: texture) } catch let error as MetalError.MTLDeviceError { OBSLog(.error, "gs_texture_unmap: \(error.description)") } catch { OBSLog(.error, "gs_texture_unmap: Unknown error occurred") } } /// Gets an opaque pointer to the ``MTLTexture`` instance wrapped by the provided ``MetalTexture`` instance /// - Parameter tex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Opaque pointer to ``MTLTexture`` instance /// /// > Important: The opaque pointer returned by this function is **unretained**, which means that the ``MTLTexture`` /// instance it refers to might be deinitialized at any point when no other Swift code holds a strong reference to it. 
@_cdecl("gs_texture_get_obj") public func gs_texture_get_obj(tex: UnsafeRawPointer) -> OpaquePointer { let texture: MetalTexture = unretained(tex) let unretained = Unmanaged.passUnretained(texture.texture).toOpaque() return OpaquePointer(unretained) } /// Requests deinitialization of the ``MetalTexture`` instance shared with `libobs` /// - Parameter cubetex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// /// The ownership of the shared pointer is transferred into this function and the instance is placed under /// Swift's memory management again. @_cdecl("gs_cubetexture_destroy") public func gs_cubetexture_destroy(cubetex: UnsafeRawPointer) { let _ = retained(cubetex) as MetalTexture } /// Gets the edge size of the cube texture wrapped by the ``MetalTexture`` instance /// - Parameter cubetex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Edge size of the cube @_cdecl("gs_cubetexture_get_size") public func gs_cubetexture_get_size(cubetex: UnsafeRawPointer) -> UInt32 { let texture: MetalTexture = unretained(cubetex) return UInt32(texture.texture.width) } /// Gets the color format of the cube texture wrapped by the ``MetalTexture`` instance /// - Parameter cubetex: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - Returns: Color format value @_cdecl("gs_cubetexture_get_color_format") public func gs_cubetexture_get_color_format(cubetex: UnsafeRawPointer) -> gs_color_format { let texture: MetalTexture = unretained(cubetex) return texture.texture.pixelFormat.gsColorFormat } /// Gets the device capability state for shared textures /// - Parameter device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - Returns: Always `true` /// /// While Metal provides a specific "shared texture" type, OBS Studio understands this to mean "textures shared between /// processes", which is usually achieved using ``IOSurface`` references on macOS. 
Metal textures can be created from /// these references, so this is always `true`. @_cdecl("device_shared_texture_available") public func device_shared_texture_available(device: UnsafeRawPointer) -> Bool { return true } /// Creates a ``MetalTexture`` wrapping an ``MTLTexture`` that was created using the provided ``IOSurface`` reference. /// - Parameters: /// - device: Opaque pointer to ``MetalDevice`` instance shared with `libobs` /// - iosurf: ``IOSurface`` reference to use as the image data source for the texture /// - Returns: An opaque pointer to a ``MetalTexture`` instance on success, `nil` otherwise /// /// If the provided ``IOSurface`` uses a video image format that has no compatible ``Metal`` pixel format, creation of /// the texture will fail. @_cdecl("device_texture_create_from_iosurface") public func device_texture_create_from_iosurface(device: UnsafeRawPointer, iosurf: IOSurfaceRef) -> OpaquePointer? { let device: MetalDevice = unretained(device) let texture = MetalTexture(device: device, surface: iosurf) guard let texture else { return nil } return texture.getRetained() } /// Replaces the current ``IOSurface``-based ``MTLTexture`` wrapped by the provided ``MetalTexture`` instance with a /// new instance. /// - Parameters: /// - texture: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - iosurf: ``IOSurface`` reference to use as the image data source for the texture /// - Returns: An opaque pointer to a ``MetalTexture`` instance on success, `nil` otherwise /// /// The "rebind" mentioned in the function name is limited to the ``MTLTexture`` instance wrapped inside the /// ``MetalTexture`` instance, as textures are immutable objects (but their underlying data is mutable). This allows /// `libobs` to hold onto the same opaque ``MetalTexture`` pointer even though the backing surface might have changed. 
@_cdecl("gs_texture_rebind_iosurface") public func gs_texture_rebind_iosurface(texture: UnsafeRawPointer, iosurf: IOSurfaceRef) -> Bool { let texture: MetalTexture = unretained(texture) return texture.rebind(surface: iosurf) } /// Creates a new ``MetalTexture`` instance with an opaque shared texture "handle" /// - Parameters: /// - device: Opaque pointer to ``MetalTexture`` instance shared with `libobs` /// - handle: Arbitrary handle value that needs to be reinterpreted into the correct platform specific shared /// reference type /// - Returns: An opaque pointer to a ``MetalTexture`` instance on success, `nil` otherwise /// /// The "handle" is a generalised argument used on all platforms and needs to be converted into a platform-specific /// type before the "shared" texture can be created. In case of macOS this means converting the unsigned integer into /// a ``IOSurface`` address. /// /// > Warning: As the handle is a 32-bit integer, this can break on 64-bit systems if the ``IOSurface`` pointer /// address does not fit into a 32-bit number. @_cdecl("device_texture_open_shared") public func device_texture_open_shared(device: UnsafeRawPointer, handle: UInt32) -> OpaquePointer? { if let reference = IOSurfaceLookupFromMachPort(handle) { let texture = device_texture_create_from_iosurface(device: device, iosurf: reference) return texture } else { return nil } } obs-studio-32.1.0-sources/libobs-metal/libobs+SignalHandlers.swift000644 001751 001751 00000002603 15153330235 026031 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ import Foundation enum MetalSignalType: String { case videoReset = "video_reset" } /// Dispatches the video reset event to the ``MetalDevice`` instance /// - Parameters: /// - param: Opaque pointer to a ``MetalDevice`` instance /// - _: Unused pointer to signal callback data public func metal_video_reset_handler(_ param: UnsafeMutableRawPointer?, _: UnsafeMutablePointer<calldata_t>?) { guard let param else { return } let metalDevice = unsafeBitCast(param, to: MetalDevice.self) metalDevice.dispatchSignal(type: .videoReset) } obs-studio-32.1.0-sources/libobs-metal/MTLTexture+Extensions.swift000644 001751 001751 00000005404 15153330235 026037 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ import Foundation import Metal extension MTLTexture { /// Creates an opaque pointer of a ``MTLTexture`` instance and increases the reference count. /// - Returns: Opaque pointer for the ``MTLTexture`` func getRetained() -> OpaquePointer { let retained = Unmanaged.passRetained(self).toOpaque() return OpaquePointer(retained) } /// Creates an opaque pointer of a ``MTLTexture`` instance without increasing the reference count. /// - Returns: Opaque pointer for the ``MTLTexture`` func getUnretained() -> OpaquePointer { let unretained = Unmanaged.passUnretained(self).toOpaque() return OpaquePointer(unretained) } } extension MTLTexture { /// Convenience property to get the texture's size as a ``MTLSize`` object var size: MTLSize { .init( width: self.width, height: self.height, depth: self.depth ) } /// Convenience property to get the texture's region as a ``MTLRegion`` object var region: MTLRegion { .init( origin: .init(x: 0, y: 0, z: 0), size: self.size ) } /// Gets a new ``MTLTextureDescriptor`` instance with the properties of the texture var descriptor: MTLTextureDescriptor { let descriptor = MTLTextureDescriptor() descriptor.textureType = self.textureType descriptor.pixelFormat = self.pixelFormat descriptor.width = self.width descriptor.height = self.height descriptor.depth = self.depth descriptor.mipmapLevelCount = self.mipmapLevelCount descriptor.sampleCount = self.sampleCount descriptor.arrayLength = self.arrayLength descriptor.storageMode = self.storageMode descriptor.cpuCacheMode = self.cpuCacheMode descriptor.usage = self.usage descriptor.allowGPUOptimizedContents = self.allowGPUOptimizedContents return descriptor } } obs-studio-32.1.0-sources/libobs-metal/metal-unimplemented.swift000644 001751 001751 00000005343 15153330235 025637 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2024 by Patrick Heyer This 
program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ @_cdecl("device_load_default_samplerstate") public func device_load_default_samplerstate(device: UnsafeRawPointer, b_3d: Bool, unit: Int) { return } @_cdecl("device_enter_context") public func device_enter_context(device: UnsafeMutableRawPointer) { return } @_cdecl("device_leave_context") public func device_leave_context(device: UnsafeMutableRawPointer) { return } @_cdecl("device_timer_create") public func device_timer_create(device: UnsafeRawPointer) { return } @_cdecl("device_timer_range_create") public func device_timer_range_create(device: UnsafeRawPointer) { } @_cdecl("gs_timer_destroy") public func gs_timer_destroy(timer: UnsafeRawPointer) { return } @_cdecl("gs_timer_begin") public func gs_timer_begin(timer: UnsafeRawPointer) { return } @_cdecl("gs_timer_end") public func gs_timer_end(timer: UnsafeRawPointer) { return } @_cdecl("gs_timer_get_data") public func gs_timer_get_data(timer: UnsafeRawPointer) -> Bool { return false } @_cdecl("gs_timer_range_destroy") public func gs_timer_range_destroy(range: UnsafeRawPointer) { return } @_cdecl("gs_timer_range_begin") public func gs_timer_range_begin(range: UnsafeRawPointer) { return } @_cdecl("gs_timer_range_end") public func gs_timer_range_end(range: UnsafeRawPointer) { return } @_cdecl("gs_timer_range_get_data") public func gs_timer_range_get_data(range: UnsafeRawPointer, 
disjoint: Bool, frequency: UInt64) -> Bool { return false } @_cdecl("device_debug_marker_begin") public func device_debug_marker_begin(device: UnsafeRawPointer, monitor: UnsafeMutableRawPointer) { return } @_cdecl("device_debug_marker_end") public func device_debug_marker_end(device: UnsafeRawPointer) { return } @_cdecl("device_set_cube_render_target") public func device_set_cube_render_target( device: UnsafeRawPointer, cubetex: UnsafeRawPointer, side: Int, zstencil: UnsafeRawPointer ) { return } obs-studio-32.1.0-sources/libobs/000755 001751 001751 00000000000 15153330731 017507 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/obs-win-crash-handler.c000644 001751 001751 00000040433 15153330235 023745 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #include <windows.h> #include <dbghelp.h> #include <shellapi.h> #include <tlhelp32.h> #include <psapi.h> #include <inttypes.h> #include <time.h> #include "obs-config.h" #include "util/dstr.h" #include "util/platform.h" #include "util/windows/win-version.h" typedef BOOL(WINAPI *ENUMERATELOADEDMODULES64)(HANDLE process, PENUMLOADED_MODULES_CALLBACK64 enum_loaded_modules_callback, PVOID user_context); typedef DWORD(WINAPI *SYMSETOPTIONS)(DWORD sym_options); typedef BOOL(WINAPI *SYMINITIALIZE)(HANDLE process, PCTSTR user_search_path, BOOL invade_process); typedef BOOL(WINAPI *SYMCLEANUP)(HANDLE process); typedef BOOL(WINAPI *STACKWALK64)(DWORD machine_type, HANDLE process, HANDLE thread, LPSTACKFRAME64 stack_frame, PVOID context_record, PREAD_PROCESS_MEMORY_ROUTINE64 read_memory_routine, PFUNCTION_TABLE_ACCESS_ROUTINE64 function_table_access_routine, PGET_MODULE_BASE_ROUTINE64 get_module_base_routine, PTRANSLATE_ADDRESS_ROUTINE64 translate_address); typedef BOOL(WINAPI *SYMREFRESHMODULELIST)(HANDLE process); typedef PVOID(WINAPI *SYMFUNCTIONTABLEACCESS64)(HANDLE process, DWORD64 addr_base); typedef DWORD64(WINAPI *SYMGETMODULEBASE64)(HANDLE process, DWORD64 addr); typedef BOOL(WINAPI *SYMFROMADDR)(HANDLE process, DWORD64 address, PDWORD64 displacement, PSYMBOL_INFOW symbol); typedef BOOL(WINAPI *SYMGETMODULEINFO64)(HANDLE process, DWORD64 addr, PIMAGEHLP_MODULE64 module_info); typedef DWORD64(WINAPI *SYMLOADMODULE64)(HANDLE process, HANDLE file, PSTR image_name, PSTR module_name, DWORD64 base_of_dll, DWORD size_of_dll); typedef BOOL(WINAPI *MINIDUMPWRITEDUMP)(HANDLE process, DWORD process_id, HANDLE file, MINIDUMP_TYPE dump_type, PMINIDUMP_EXCEPTION_INFORMATION exception_param, PMINIDUMP_USER_STREAM_INFORMATION user_stream_param, PMINIDUMP_CALLBACK_INFORMATION callback_param); typedef HINSTANCE(WINAPI *SHELLEXECUTEA)(HWND hwnd, LPCTSTR operation, LPCTSTR file, LPCTSTR parameters, LPCTSTR directory, INT show_flags); typedef HRESULT(WINAPI *GETTHREADDESCRIPTION)(HANDLE thread,
PWSTR *desc); struct stack_trace { CONTEXT context; DWORD64 instruction_ptr; STACKFRAME64 frame; DWORD image_type; }; struct exception_handler_data { SYMINITIALIZE sym_initialize; SYMCLEANUP sym_cleanup; SYMSETOPTIONS sym_set_options; SYMFUNCTIONTABLEACCESS64 sym_function_table_access64; SYMGETMODULEBASE64 sym_get_module_base64; SYMFROMADDR sym_from_addr; SYMGETMODULEINFO64 sym_get_module_info64; SYMREFRESHMODULELIST sym_refresh_module_list; STACKWALK64 stack_walk64; ENUMERATELOADEDMODULES64 enumerate_loaded_modules64; MINIDUMPWRITEDUMP minidump_write_dump; HMODULE dbghelp; SYMBOL_INFOW *sym_info; PEXCEPTION_POINTERS exception; struct win_version_info win_version; SYSTEMTIME time_info; HANDLE process; struct stack_trace main_trace; struct dstr str; struct dstr cpu_info; struct dstr module_name; struct dstr module_list; }; static inline void exception_handler_data_free(struct exception_handler_data *data) { LocalFree(data->sym_info); dstr_free(&data->str); dstr_free(&data->cpu_info); dstr_free(&data->module_name); dstr_free(&data->module_list); FreeLibrary(data->dbghelp); } static inline void *get_proc(HMODULE module, const char *func) { return (void *)GetProcAddress(module, func); } #define GET_DBGHELP_IMPORT(target, str) \ do { \ data->target = get_proc(data->dbghelp, str); \ if (!data->target) \ return false; \ } while (false) static inline bool get_dbghelp_imports(struct exception_handler_data *data) { data->dbghelp = LoadLibraryW(L"DbgHelp"); if (!data->dbghelp) return false; GET_DBGHELP_IMPORT(sym_initialize, "SymInitialize"); GET_DBGHELP_IMPORT(sym_cleanup, "SymCleanup"); GET_DBGHELP_IMPORT(sym_set_options, "SymSetOptions"); GET_DBGHELP_IMPORT(sym_function_table_access64, "SymFunctionTableAccess64"); GET_DBGHELP_IMPORT(sym_get_module_base64, "SymGetModuleBase64"); GET_DBGHELP_IMPORT(sym_from_addr, "SymFromAddrW"); GET_DBGHELP_IMPORT(sym_get_module_info64, "SymGetModuleInfo64"); GET_DBGHELP_IMPORT(sym_refresh_module_list, "SymRefreshModuleList"); 
GET_DBGHELP_IMPORT(stack_walk64, "StackWalk64"); GET_DBGHELP_IMPORT(enumerate_loaded_modules64, "EnumerateLoadedModulesW64"); GET_DBGHELP_IMPORT(minidump_write_dump, "MiniDumpWriteDump"); return true; } static inline void init_instruction_data(struct stack_trace *trace) { #if defined(_M_ARM64) trace->instruction_ptr = trace->context.Pc; trace->frame.AddrPC.Offset = trace->instruction_ptr; trace->frame.AddrFrame.Offset = trace->context.Fp; trace->frame.AddrStack.Offset = trace->context.Sp; trace->image_type = IMAGE_FILE_MACHINE_ARM64; #elif defined(_WIN64) trace->instruction_ptr = trace->context.Rip; trace->frame.AddrPC.Offset = trace->instruction_ptr; trace->frame.AddrFrame.Offset = trace->context.Rbp; trace->frame.AddrStack.Offset = trace->context.Rsp; trace->image_type = IMAGE_FILE_MACHINE_AMD64; #else trace->instruction_ptr = trace->context.Eip; trace->frame.AddrPC.Offset = trace->instruction_ptr; trace->frame.AddrFrame.Offset = trace->context.Ebp; trace->frame.AddrStack.Offset = trace->context.Esp; trace->image_type = IMAGE_FILE_MACHINE_I386; #endif trace->frame.AddrFrame.Mode = AddrModeFlat; trace->frame.AddrPC.Mode = AddrModeFlat; trace->frame.AddrStack.Mode = AddrModeFlat; } extern bool sym_initialize_called; static inline void init_sym_info(struct exception_handler_data *data) { data->sym_set_options(SYMOPT_UNDNAME | SYMOPT_FAIL_CRITICAL_ERRORS | SYMOPT_LOAD_ANYTHING); if (!sym_initialize_called) data->sym_initialize(data->process, NULL, true); else data->sym_refresh_module_list(data->process); data->sym_info = LocalAlloc(LPTR, sizeof(*data->sym_info) + 256); if (data->sym_info) { data->sym_info->SizeOfStruct = sizeof(SYMBOL_INFO); data->sym_info->MaxNameLen = 256; } } static inline void init_version_info(struct exception_handler_data *data) { get_win_ver(&data->win_version); } #define PROCESSOR_REG_KEY L"HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0" #define CPU_ERROR "" static inline void init_cpu_info(struct exception_handler_data *data) { HKEY key; 
LSTATUS status; status = RegOpenKeyW(HKEY_LOCAL_MACHINE, PROCESSOR_REG_KEY, &key); if (status == ERROR_SUCCESS) { wchar_t str[1024]; DWORD size = 1024; status = RegQueryValueExW(key, L"ProcessorNameString", NULL, NULL, (LPBYTE)str, &size); if (status == ERROR_SUCCESS) dstr_from_wcs(&data->cpu_info, str); else dstr_copy(&data->cpu_info, CPU_ERROR); } else { dstr_copy(&data->cpu_info, CPU_ERROR); } } static BOOL CALLBACK enum_all_modules(PCTSTR module_name, DWORD64 module_base, ULONG module_size, struct exception_handler_data *data) { char name_utf8[MAX_PATH]; os_wcs_to_utf8(module_name, 0, name_utf8, MAX_PATH); if (data->main_trace.instruction_ptr >= module_base && data->main_trace.instruction_ptr < module_base + module_size) { dstr_copy(&data->module_name, name_utf8); strlwr(data->module_name.array); } #ifdef _WIN64 dstr_catf(&data->module_list, "%016" PRIX64 "-%016" PRIX64 " %s\r\n", module_base, module_base + module_size, name_utf8); #else dstr_catf(&data->module_list, "%08" PRIX64 "-%08" PRIX64 " %s\r\n", module_base, module_base + module_size, name_utf8); #endif return true; } static inline void init_module_info(struct exception_handler_data *data) { data->enumerate_loaded_modules64(data->process, (PENUMLOADED_MODULES_CALLBACK64)enum_all_modules, data); } extern const char *get_win_release_id(); static inline void write_header(struct exception_handler_data *data) { char date_time[80]; time_t now = time(0); struct tm ts; ts = *localtime(&now); strftime(date_time, sizeof(date_time), "%Y-%m-%d, %X", &ts); const char *obs_bitness; if (sizeof(void *) == 8) obs_bitness = "64"; else obs_bitness = "32"; const char *release_id = get_win_release_id(); dstr_catf(&data->str, "Unhandled exception: %x\r\n" "Date/Time: %s\r\n" "Fault address: %" PRIX64 " (%s)\r\n" "libobs version: " OBS_VERSION " (%s-bit)\r\n" "Windows version: %d.%d build %d (release: %s; revision: %d; " "%s-bit)\r\n" "CPU: %s\r\n\r\n", data->exception->ExceptionRecord->ExceptionCode, date_time, 
data->main_trace.instruction_ptr, data->module_name.array, obs_bitness, data->win_version.major, data->win_version.minor, data->win_version.build, release_id, data->win_version.revis, is_64_bit_windows() ? "64" : "32", data->cpu_info.array); } struct module_info { DWORD64 addr; char name_utf8[MAX_PATH]; }; static BOOL CALLBACK enum_module(PCTSTR module_name, DWORD64 module_base, ULONG module_size, struct module_info *info) { if (info->addr >= module_base && info->addr < module_base + module_size) { os_wcs_to_utf8(module_name, 0, info->name_utf8, MAX_PATH); strlwr(info->name_utf8); return false; } return true; } static inline void get_module_name(struct exception_handler_data *data, struct module_info *info) { data->enumerate_loaded_modules64(data->process, (PENUMLOADED_MODULES_CALLBACK64)enum_module, info); } static inline bool walk_stack(struct exception_handler_data *data, HANDLE thread, struct stack_trace *trace) { struct module_info module_info = {0}; DWORD64 func_offset; char sym_name[256]; char *p; bool success = data->stack_walk64(trace->image_type, data->process, thread, &trace->frame, &trace->context, NULL, data->sym_function_table_access64, data->sym_get_module_base64, NULL); if (!success) return false; module_info.addr = trace->frame.AddrPC.Offset; get_module_name(data, &module_info); if (!!module_info.name_utf8[0]) { p = strrchr(module_info.name_utf8, '\\'); p = p ? 
(p + 1) : module_info.name_utf8; } else { strcpy(module_info.name_utf8, ""); p = module_info.name_utf8; } if (data->sym_info) { success = !!data->sym_from_addr(data->process, trace->frame.AddrPC.Offset, &func_offset, data->sym_info); if (success) os_wcs_to_utf8(data->sym_info->Name, 0, sym_name, 256); } else { success = false; } #ifdef _WIN64 #define SUCCESS_FORMAT \ "%016I64X %016I64X %016I64X %016I64X " \ "%016I64X %016I64X %s!%s+0x%I64x\r\n" #define FAIL_FORMAT \ "%016I64X %016I64X %016I64X %016I64X " \ "%016I64X %016I64X %s!0x%I64x\r\n" #else #define SUCCESS_FORMAT \ "%08.8I64X %08.8I64X %08.8I64X %08.8I64X " \ "%08.8I64X %08.8I64X %s!%s+0x%I64x\r\n" #define FAIL_FORMAT \ "%08.8I64X %08.8I64X %08.8I64X %08.8I64X " \ "%08.8I64X %08.8I64X %s!0x%I64x\r\n" trace->frame.AddrStack.Offset &= 0xFFFFFFFFF; trace->frame.AddrPC.Offset &= 0xFFFFFFFFF; trace->frame.Params[0] &= 0xFFFFFFFF; trace->frame.Params[1] &= 0xFFFFFFFF; trace->frame.Params[2] &= 0xFFFFFFFF; trace->frame.Params[3] &= 0xFFFFFFFF; #endif if (success && (data->sym_info->Flags & SYMFLAG_EXPORT) == 0) { dstr_catf(&data->str, SUCCESS_FORMAT, trace->frame.AddrStack.Offset, trace->frame.AddrPC.Offset, trace->frame.Params[0], trace->frame.Params[1], trace->frame.Params[2], trace->frame.Params[3], p, sym_name, func_offset); } else { dstr_catf(&data->str, FAIL_FORMAT, trace->frame.AddrStack.Offset, trace->frame.AddrPC.Offset, trace->frame.Params[0], trace->frame.Params[1], trace->frame.Params[2], trace->frame.Params[3], p, trace->frame.AddrPC.Offset); } return true; } #ifdef _WIN64 #define TRACE_TOP \ "Stack EIP Arg0 " \ "Arg1 Arg2 Arg3 Address\r\n" #else #define TRACE_TOP \ "Stack EIP Arg0 " \ "Arg1 Arg2 Arg3 Address\r\n" #endif static inline char *get_thread_name(HANDLE thread) { static GETTHREADDESCRIPTION get_thread_desc = NULL; static bool failed = false; if (!get_thread_desc) { if (failed) { return NULL; } HMODULE k32 = LoadLibraryW(L"kernel32.dll"); get_thread_desc = 
(GETTHREADDESCRIPTION)GetProcAddress(k32, "GetThreadDescription"); if (!get_thread_desc) { failed = true; return NULL; } } wchar_t *w_name; HRESULT hr = get_thread_desc(thread, &w_name); if (FAILED(hr) || !w_name) { return NULL; } struct dstr name = {0}; dstr_from_wcs(&name, w_name); if (name.len) dstr_insert_ch(&name, 0, ' '); LocalFree(w_name); return name.array; } static inline void write_thread_trace(struct exception_handler_data *data, THREADENTRY32 *entry, bool first_thread) { bool crash_thread = entry->th32ThreadID == GetCurrentThreadId(); struct stack_trace trace = {0}; struct stack_trace *ptrace; HANDLE thread; char *thread_name; if (first_thread != crash_thread) return; if (entry->th32OwnerProcessID != GetCurrentProcessId()) return; thread = OpenThread(THREAD_ALL_ACCESS, false, entry->th32ThreadID); if (!thread) return; trace.context.ContextFlags = CONTEXT_ALL; GetThreadContext(thread, &trace.context); init_instruction_data(&trace); thread_name = get_thread_name(thread); dstr_catf(&data->str, "\r\nThread %lX:%s%s\r\n" TRACE_TOP, entry->th32ThreadID, thread_name ? thread_name : "", crash_thread ? " (Crashed)" : ""); bfree(thread_name); ptrace = crash_thread ? 
&data->main_trace : &trace; while (walk_stack(data, thread, ptrace)) ; CloseHandle(thread); } static inline void write_thread_traces(struct exception_handler_data *data) { THREADENTRY32 entry = {0}; HANDLE snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, GetCurrentProcessId()); bool success; if (snapshot == INVALID_HANDLE_VALUE) return; entry.dwSize = sizeof(entry); success = !!Thread32First(snapshot, &entry); while (success) { write_thread_trace(data, &entry, true); success = !!Thread32Next(snapshot, &entry); } success = !!Thread32First(snapshot, &entry); while (success) { write_thread_trace(data, &entry, false); success = !!Thread32Next(snapshot, &entry); } CloseHandle(snapshot); } static inline void write_module_list(struct exception_handler_data *data) { dstr_cat(&data->str, "\r\nLoaded modules:\r\n"); #ifdef _WIN64 dstr_cat(&data->str, "Base Address Module\r\n"); #else dstr_cat(&data->str, "Base Address Module\r\n"); #endif dstr_cat_dstr(&data->str, &data->module_list); } /* ------------------------------------------------------------------------- */ static inline void handle_exception(struct exception_handler_data *data, PEXCEPTION_POINTERS exception) { if (!get_dbghelp_imports(data)) return; data->exception = exception; data->process = GetCurrentProcess(); data->main_trace.context = *exception->ContextRecord; GetSystemTime(&data->time_info); init_sym_info(data); init_version_info(data); init_cpu_info(data); init_instruction_data(&data->main_trace); init_module_info(data); write_header(data); write_thread_traces(data); write_module_list(data); } static LONG CALLBACK exception_handler(PEXCEPTION_POINTERS exception) { struct exception_handler_data data = {0}; static bool inside_handler = false; /* don't use if a debugger is present */ if (IsDebuggerPresent()) return EXCEPTION_CONTINUE_SEARCH; if (inside_handler) return EXCEPTION_CONTINUE_SEARCH; inside_handler = true; handle_exception(&data, exception); bcrash("%s", data.str.array); 
exception_handler_data_free(&data); inside_handler = false; return EXCEPTION_CONTINUE_SEARCH; } void initialize_crash_handler(void) { static bool initialized = false; if (!initialized) { SetUnhandledExceptionFilter(exception_handler); initialized = true; } } obs-studio-32.1.0-sources/libobs/obsconfig.h.in000644 001751 001751 00000000556 15153330235 022243 0ustar00runnerrunner000000 000000 #pragma once #cmakedefine OBS_DATA_PATH "@OBS_DATA_PATH@" #cmakedefine OBS_PLUGIN_PATH "@OBS_PLUGIN_PATH@" #cmakedefine OBS_PLUGIN_DESTINATION "@OBS_PLUGIN_DESTINATION@" #cmakedefine GIO_FOUND #cmakedefine PULSEAUDIO_FOUND #cmakedefine XCB_XINPUT_FOUND #cmakedefine ENABLE_WAYLAND #define OBS_RELEASE_CANDIDATE @OBS_RELEASE_CANDIDATE@ #define OBS_BETA @OBS_BETA@ obs-studio-32.1.0-sources/libobs/obs-display.c000644 001751 001751 00000020442 15153330235 022102 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #include "graphics/vec4.h" #include "obs.h" #include "obs-internal.h" bool obs_display_init(struct obs_display *display, const struct gs_init_data *graphics_data) { pthread_mutex_init_value(&display->draw_callbacks_mutex); pthread_mutex_init_value(&display->draw_info_mutex); #if defined(_WIN32) /* Conservative test for NVIDIA flickering in multi-GPU setups */ display->use_clear_workaround = gs_get_adapter_count() > 1 && !gs_can_adapter_fast_clear(); #else display->use_clear_workaround = false; #endif if (graphics_data) { display->swap = gs_swapchain_create(graphics_data); if (!display->swap) { blog(LOG_ERROR, "obs_display_init: Failed to " "create swap chain"); return false; } const uint32_t cx = graphics_data->cx; const uint32_t cy = graphics_data->cy; display->cx = cx; display->cy = cy; display->next_cx = cx; display->next_cy = cy; } if (pthread_mutex_init(&display->draw_callbacks_mutex, NULL) != 0) { blog(LOG_ERROR, "obs_display_init: Failed to create mutex"); return false; } if (pthread_mutex_init(&display->draw_info_mutex, NULL) != 0) { blog(LOG_ERROR, "obs_display_init: Failed to create mutex"); return false; } display->enabled = true; return true; } obs_display_t *obs_display_create(const struct gs_init_data *graphics_data, uint32_t background_color) { struct obs_display *display = bzalloc(sizeof(struct obs_display)); gs_enter_context(obs->video.graphics); display->background_color = background_color; if (!obs_display_init(display, graphics_data)) { obs_display_destroy(display); display = NULL; } else { pthread_mutex_lock(&obs->data.displays_mutex); display->prev_next = &obs->data.first_display; display->next = obs->data.first_display; obs->data.first_display = display; if (display->next) display->next->prev_next = &display->next; pthread_mutex_unlock(&obs->data.displays_mutex); } gs_leave_context(); return display; } void obs_display_free(obs_display_t *display) { 
pthread_mutex_destroy(&display->draw_callbacks_mutex); pthread_mutex_destroy(&display->draw_info_mutex); da_free(display->draw_callbacks); if (display->swap) { gs_swapchain_destroy(display->swap); display->swap = NULL; } } void obs_display_destroy(obs_display_t *display) { if (display) { pthread_mutex_lock(&obs->data.displays_mutex); if (display->prev_next) *display->prev_next = display->next; if (display->next) display->next->prev_next = display->prev_next; pthread_mutex_unlock(&obs->data.displays_mutex); obs_enter_graphics(); obs_display_free(display); obs_leave_graphics(); bfree(display); } } void obs_display_resize(obs_display_t *display, uint32_t cx, uint32_t cy) { if (!display) return; pthread_mutex_lock(&display->draw_info_mutex); display->next_cx = cx; display->next_cy = cy; pthread_mutex_unlock(&display->draw_info_mutex); } void obs_display_update_color_space(obs_display_t *display) { if (!display) return; pthread_mutex_lock(&display->draw_info_mutex); display->update_color_space = true; pthread_mutex_unlock(&display->draw_info_mutex); } void obs_display_add_draw_callback(obs_display_t *display, void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param) { if (!display) return; struct draw_callback data = {draw, param}; pthread_mutex_lock(&display->draw_callbacks_mutex); da_push_back(display->draw_callbacks, &data); pthread_mutex_unlock(&display->draw_callbacks_mutex); } void obs_display_remove_draw_callback(obs_display_t *display, void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param) { if (!display) return; struct draw_callback data = {draw, param}; pthread_mutex_lock(&display->draw_callbacks_mutex); da_erase_item(display->draw_callbacks, &data); pthread_mutex_unlock(&display->draw_callbacks_mutex); } static inline bool render_display_begin(struct obs_display *display, uint32_t cx, uint32_t cy, bool update_color_space) { struct vec4 clear_color; gs_load_swapchain(display->swap); if ((display->cx != cx) || (display->cy != cy)) { 
gs_resize(cx, cy); display->cx = cx; display->cy = cy; } else if (update_color_space) { gs_update_color_space(); } const bool success = gs_is_present_ready(); if (success) { gs_begin_scene(); /* * In contrast to OpenGL or Direct3D 11, Metal and Direct3D 12 require the clear color to use linear gamma * as either the load command to clear the render target (Metal) or the explicit clear command seem to operate * on the render target in linear space. * * As OpenGL is implemented via Metal on Apple Silicon Macs and "glClear" has to be emulated via an explicit * render pass that returns the clear color for every fragment, the color becomes subject to automatic sRGB * gamma encoding if the render target uses an sRGB color format. */ #if defined(__APPLE__) && defined(__aarch64__) vec4_from_rgba_srgb(&clear_color, display->background_color); #else if (gs_get_color_space() == GS_CS_SRGB) vec4_from_rgba(&clear_color, display->background_color); else vec4_from_rgba_srgb(&clear_color, display->background_color); #endif clear_color.w = 1.0f; const bool use_clear_workaround = display->use_clear_workaround; uint32_t clear_flags = GS_CLEAR_DEPTH | GS_CLEAR_STENCIL; if (!use_clear_workaround) clear_flags |= GS_CLEAR_COLOR; gs_clear(clear_flags, &clear_color, 1.0f, 0); gs_enable_depth_test(false); /* gs_enable_blending(false); */ gs_set_cull_mode(GS_NEITHER); gs_ortho(0.0f, (float)cx, 0.0f, (float)cy, -100.0f, 100.0f); gs_set_viewport(0, 0, cx, cy); if (use_clear_workaround) { gs_effect_t *const solid_effect = obs->video.solid_effect; gs_effect_set_vec4(gs_effect_get_param_by_name(solid_effect, "color"), &clear_color); while (gs_effect_loop(solid_effect, "Solid")) gs_draw_sprite(NULL, 0, cx, cy); } } return success; } static inline void render_display_end() { gs_end_scene(); } void render_display(struct obs_display *display) { uint32_t cx, cy; bool update_color_space; if (!display || !display->enabled) return; /* -------------------------------------------- */ 
pthread_mutex_lock(&display->draw_info_mutex); cx = display->next_cx; cy = display->next_cy; update_color_space = display->update_color_space; display->update_color_space = false; pthread_mutex_unlock(&display->draw_info_mutex); /* -------------------------------------------- */ if (render_display_begin(display, cx, cy, update_color_space)) { GS_DEBUG_MARKER_BEGIN(GS_DEBUG_COLOR_DISPLAY, "obs_display"); pthread_mutex_lock(&display->draw_callbacks_mutex); for (size_t i = 0; i < display->draw_callbacks.num; i++) { struct draw_callback *callback; callback = display->draw_callbacks.array + i; callback->draw(callback->param, cx, cy); } pthread_mutex_unlock(&display->draw_callbacks_mutex); render_display_end(); GS_DEBUG_MARKER_END(); gs_present(); } } void obs_display_set_enabled(obs_display_t *display, bool enable) { if (display) display->enabled = enable; } bool obs_display_enabled(obs_display_t *display) { return display ? display->enabled : false; } void obs_display_set_background_color(obs_display_t *display, uint32_t color) { if (display) display->background_color = color; } void obs_display_size(obs_display_t *display, uint32_t *width, uint32_t *height) { *width = 0; *height = 0; if (display) { pthread_mutex_lock(&display->draw_info_mutex); *width = display->cx; *height = display->cy; pthread_mutex_unlock(&display->draw_info_mutex); } } obs-studio-32.1.0-sources/libobs/obs-audio-controls.h /* Copyright (C) 2014 by Leonhard Oelke This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. */ #pragma once #include "obs.h" /** * @file * @brief header for audio controls * * @brief Audio controls for use in GUIs */ #ifdef __cplusplus extern "C" { #endif /** * @brief Fader types */ enum obs_fader_type { /** * @brief A simple cubic fader for controlling audio levels * * This is a very common type of software fader since it yields good * results while being quite performant. * The input value is mapped to mul values with the simple formula x^3. */ OBS_FADER_CUBIC, /** * @brief A fader compliant to IEC 60-268-18 * * This type of fader has several segments with different slopes that * map deflection linearly to dB values. The segments are defined as * in the following table: * @code Deflection | Volume ------------------------------------------ [ 100 %, 75 % ] | [ 0 dB, -9 dB ] [ 75 %, 50 % ] | [ -9 dB, -20 dB ] [ 50 %, 30 % ] | [ -20 dB, -30 dB ] [ 30 %, 15 % ] | [ -30 dB, -40 dB ] [ 15 %, 7.5 % ] | [ -40 dB, -50 dB ] [ 7.5 %, 2.5 % ] | [ -50 dB, -60 dB ] [ 2.5 %, 0 % ] | [ -60 dB, -inf dB ] @endcode */ OBS_FADER_IEC, /** * @brief Logarithmic fader */ OBS_FADER_LOG }; /** * @brief Peak meter types */ enum obs_peak_meter_type { /** * @brief A simple peak meter measuring the maximum of all samples. * * This was a very common type of peak meter used for audio, but * is not very accurate with regard to further audio processing. */ SAMPLE_PEAK_METER, /** * @brief An accurate peak meter measuring the maximum of inter-sample peaks. * * This meter is more computationally intensive due to 4x oversampling * to determine the true peak to an accuracy of +/- 0.5 dB. */ TRUE_PEAK_METER }; /** * @brief Create a fader * @param type the type of the fader * @return pointer to the fader object * * A fader object is used to map input values from a gui element to dB and * subsequently multiplier values used by libobs to mix audio.
* The current "position" of the fader is internally stored as dB value. */ EXPORT obs_fader_t *obs_fader_create(enum obs_fader_type type); /** * @brief Destroy a fader * @param fader pointer to the fader object * * Destroy the fader and free all related data */ EXPORT void obs_fader_destroy(obs_fader_t *fader); /** * @brief Set the fader dB value * @param fader pointer to the fader object * @param db new dB value * @return true if value was set without clamping */ EXPORT bool obs_fader_set_db(obs_fader_t *fader, const float db); /** * @brief Get the current fader dB value * @param fader pointer to the fader object * @return current fader dB value */ EXPORT float obs_fader_get_db(obs_fader_t *fader); /** * @brief Set the fader value from deflection * @param fader pointer to the fader object * @param def new deflection * @return true if value was set without clamping * * This sets the new fader value from the supplied deflection, in case the * resulting value was clamped due to limits this function will return false. * The deflection is typically in the range [0.0, 1.0] but may be higher in * order to provide some amplification. In order for this to work the high dB * limit has to be set. 
*/ EXPORT bool obs_fader_set_deflection(obs_fader_t *fader, const float def); /** * @brief Get the current fader deflection * @param fader pointer to the fader object * @return current fader deflection */ EXPORT float obs_fader_get_deflection(obs_fader_t *fader); /** * @brief Set the fader value from multiplier * @param fader pointer to the fader object * @param mul new multiplier value * @return true if the value was set without clamping */ EXPORT bool obs_fader_set_mul(obs_fader_t *fader, const float mul); /** * @brief Get the current fader multiplier value * @param fader pointer to the fader object * @return current fader multiplier */ EXPORT float obs_fader_get_mul(obs_fader_t *fader); /** * @brief Attach the fader to a source * @param fader pointer to the fader object * @param source pointer to the source object * @return true on success * * When the fader is attached to a source it will automatically sync its state * to the volume of the source. */ EXPORT bool obs_fader_attach_source(obs_fader_t *fader, obs_source_t *source); /** * @brief Detach the fader from the currently attached source * @param fader pointer to the fader object */ EXPORT void obs_fader_detach_source(obs_fader_t *fader); typedef void (*obs_fader_changed_t)(void *param, float db); EXPORT void obs_fader_add_callback(obs_fader_t *fader, obs_fader_changed_t callback, void *param); EXPORT void obs_fader_remove_callback(obs_fader_t *fader, obs_fader_changed_t callback, void *param); /** * @brief Create a volume meter * @param type the mapping type to use for the volume meter * @return pointer to the volume meter object * * A volume meter object is used to prepare the sound levels reported by audio * sources for display in a GUI. * It will automatically take source volume into account and map the levels * to a range [0.0f, 1.0f].
*/ EXPORT obs_volmeter_t *obs_volmeter_create(enum obs_fader_type type); /** * @brief Destroy a volume meter * @param volmeter pointer to the volmeter object * * Destroy the volume meter and free all related data */ EXPORT void obs_volmeter_destroy(obs_volmeter_t *volmeter); /** * @brief Attach the volume meter to a source * @param volmeter pointer to the volume meter object * @param source pointer to the source object * @return true on success * * When the volume meter is attached to a source it will start to listen to * volume updates on the source and after preparing the data emit its own * signal. */ EXPORT bool obs_volmeter_attach_source(obs_volmeter_t *volmeter, obs_source_t *source); /** * @brief Detach the volume meter from the currently attached source * @param volmeter pointer to the volume meter object */ EXPORT void obs_volmeter_detach_source(obs_volmeter_t *volmeter); /** * @brief Set the peak meter type for the volume meter * @param volmeter pointer to the volume meter object * @param peak_meter_type the peak meter type to use; select TRUE_PEAK_METER if true peak needs to be measured */ EXPORT void obs_volmeter_set_peak_meter_type(obs_volmeter_t *volmeter, enum obs_peak_meter_type peak_meter_type); /** * @brief Get the number of channels which are configured for this source.
* @param volmeter pointer to the volume meter object */ EXPORT int obs_volmeter_get_nr_channels(obs_volmeter_t *volmeter); typedef void (*obs_volmeter_updated_t)(void *param, const float magnitude[MAX_AUDIO_CHANNELS], const float peak[MAX_AUDIO_CHANNELS], const float input_peak[MAX_AUDIO_CHANNELS]); EXPORT void obs_volmeter_add_callback(obs_volmeter_t *volmeter, obs_volmeter_updated_t callback, void *param); EXPORT void obs_volmeter_remove_callback(obs_volmeter_t *volmeter, obs_volmeter_updated_t callback, void *param); EXPORT float obs_mul_to_db(float mul); EXPORT float obs_db_to_mul(float db); typedef float (*obs_fader_conversion_t)(const float val); EXPORT obs_fader_conversion_t obs_fader_db_to_def(obs_fader_t *fader); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-scene.h /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once #include "obs.h" #include "graphics/matrix4.h" /* how obs scene!
*/ struct item_action { bool visible; uint64_t timestamp; }; struct obs_scene_item { volatile long ref; volatile bool removed; bool is_group; bool is_scene; bool update_transform; bool update_group_resize; int64_t id; struct obs_scene *parent; struct obs_source *source; volatile long active_refs; volatile long defer_update; volatile long defer_group_resize; bool user_visible; bool visible; bool selected; bool locked; gs_texrender_t *item_render; struct obs_sceneitem_crop crop; bool absolute_coordinates; struct vec2 pos; struct vec2 scale; struct vec2 scale_ref; float rot; uint32_t align; /* last width/height of the source, this is used to check whether * the transform needs updating */ uint32_t last_width; uint32_t last_height; struct vec2 output_scale; enum obs_scale_type scale_filter; enum obs_blending_method blend_method; enum obs_blending_type blend_type; struct matrix4 box_transform; struct vec2 box_scale; struct matrix4 draw_transform; enum obs_bounds_type bounds_type; uint32_t bounds_align; struct vec2 bounds; bool crop_to_bounds; struct obs_sceneitem_crop bounds_crop; obs_hotkey_pair_id toggle_visibility; obs_data_t *private_settings; pthread_mutex_t actions_mutex; DARRAY(struct item_action) audio_actions; struct obs_source *show_transition; struct obs_source *hide_transition; uint32_t show_transition_duration; uint32_t hide_transition_duration; /* would do **prev_next, but not really great for reordering */ struct obs_scene_item *prev; struct obs_scene_item *next; }; struct obs_scene { struct obs_source *source; bool is_group; bool custom_size; uint32_t cx; uint32_t cy; bool absolute_coordinates; uint32_t last_width; uint32_t last_height; int64_t id_counter; pthread_mutex_t video_mutex; pthread_mutex_t audio_mutex; struct obs_scene_item *first_item; }; obs-studio-32.1.0-sources/libobs/obs-source.c
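The `struct obs_scene_item` definition above links scene items into a doubly linked list via `prev`/`next` pointers, and its in-source comment notes that a `**prev_next` scheme is "not really great for reordering". A minimal self-contained sketch of why two plain pointers make reordering cheap, using a hypothetical `item` type rather than the real struct:

```c
#include <stddef.h>

/* Hypothetical stand-in for struct obs_scene_item's list links. */
struct item {
	int id;
	struct item *prev;
	struct item *next;
};

/* Detach an item from the list. With both prev and next pointers no
 * list walk is needed; all four link updates are O(1). */
static void item_detach(struct item **first, struct item *it)
{
	if (it->prev)
		it->prev->next = it->next;
	else
		*first = it->next;
	if (it->next)
		it->next->prev = it->prev;
	it->prev = it->next = NULL;
}

/* Re-insert an item directly after `pos`, or at the head if pos is NULL. */
static void item_insert_after(struct item **first, struct item *it, struct item *pos)
{
	if (!pos) {
		it->prev = NULL;
		it->next = *first;
		if (*first)
			(*first)->prev = it;
		*first = it;
	} else {
		it->prev = pos;
		it->next = pos->next;
		if (pos->next)
			pos->next->prev = it;
		pos->next = it;
	}
}
```

Detach plus re-insert needs no walk from `first_item`, which is the operation scene-item reordering performs; with only a `**prev_next` back-link, finding the insertion point's predecessor link would be more awkward.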
/****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include <inttypes.h> #include <math.h> #include "media-io/format-conversion.h" #include "media-io/video-frame.h" #include "media-io/audio-io.h" #include "util/threading.h" #include "util/platform.h" #include "util/util_uint64.h" #include "callback/calldata.h" #include "graphics/matrix3.h" #include "graphics/vec3.h" #include "obs.h" #include "obs-internal.h" #define get_weak(source) ((obs_weak_source_t *)source->context.control) static bool filter_compatible(obs_source_t *source, obs_source_t *filter); static inline bool data_valid(const struct obs_source *source, const char *f) { return obs_source_valid(source, f) && source->context.data; } static inline bool deinterlacing_enabled(const struct obs_source *source) { return source->deinterlace_mode != OBS_DEINTERLACE_MODE_DISABLE; } static inline bool destroying(const struct obs_source *source) { return os_atomic_load_long(&source->destroying); } struct obs_source_info *get_source_info(const char *id) { for (size_t i = 0; i < obs->source_types.num; i++) { struct obs_source_info *info = &obs->source_types.array[i]; if (strcmp(info->id, id) == 0) return info; } return NULL; } struct obs_source_info *get_source_info2(const char *unversioned_id, uint32_t ver) { for (size_t i = 0; i <
obs->source_types.num; i++) { struct obs_source_info *info = &obs->source_types.array[i]; if (strcmp(info->unversioned_id, unversioned_id) == 0 && info->version == ver) return info; } return NULL; } static const char *source_signals[] = { "void destroy(ptr source)", "void remove(ptr source)", "void update(ptr source)", "void save(ptr source)", "void load(ptr source)", "void activate(ptr source)", "void deactivate(ptr source)", "void show(ptr source)", "void hide(ptr source)", "void mute(ptr source, bool muted)", "void push_to_mute_changed(ptr source, bool enabled)", "void push_to_mute_delay(ptr source, int delay)", "void push_to_talk_changed(ptr source, bool enabled)", "void push_to_talk_delay(ptr source, int delay)", "void enable(ptr source, bool enabled)", "void rename(ptr source, string new_name, string prev_name)", "void volume(ptr source, in out float volume)", "void update_properties(ptr source)", "void update_flags(ptr source, int flags)", "void audio_sync(ptr source, int out int offset)", "void audio_balance(ptr source, in out float balance)", "void audio_mixers(ptr source, in out int mixers)", "void audio_monitoring(ptr source, int type)", "void audio_activate(ptr source)", "void audio_deactivate(ptr source)", "void filter_add(ptr source, ptr filter)", "void filter_remove(ptr source, ptr filter)", "void reorder_filters(ptr source)", "void transition_start(ptr source)", "void transition_video_stop(ptr source)", "void transition_stop(ptr source)", "void media_play(ptr source)", "void media_pause(ptr source)", "void media_restart(ptr source)", "void media_stopped(ptr source)", "void media_next(ptr source)", "void media_previous(ptr source)", "void media_started(ptr source)", "void media_ended(ptr source)", NULL, }; bool obs_source_init_context(struct obs_source *source, obs_data_t *settings, const char *name, const char *uuid, obs_data_t *hotkey_data, bool private) { if (!obs_context_data_init(&source->context, OBS_OBJ_TYPE_SOURCE, settings, name, uuid, 
hotkey_data, private)) return false; return signal_handler_add_array(source->context.signals, source_signals); } const char *obs_source_get_display_name(const char *id) { const struct obs_source_info *info = get_source_info(id); return (info != NULL) ? info->get_name(info->type_data) : NULL; } obs_module_t *obs_source_get_module(const char *id) { obs_module_t *module = obs->first_module; while (module) { for (size_t i = 0; i < module->sources.num; i++) { if (strcmp(module->sources.array[i], id) == 0) { return module; } } module = module->next; } module = obs->first_disabled_module; while (module) { for (size_t i = 0; i < module->sources.num; i++) { if (strcmp(module->sources.array[i], id) == 0) { return module; } } module = module->next; } return NULL; } enum obs_module_load_state obs_source_load_state(const char *id) { if (!id) return OBS_MODULE_INVALID; if (obs_source_type_is_scene(id) || obs_source_type_is_group(id)) return OBS_MODULE_ENABLED; obs_module_t *module = obs_source_get_module(id); if (!module) { return OBS_MODULE_MISSING; } return module->load_state; } static void allocate_audio_output_buffer(struct obs_source *source) { size_t size = sizeof(float) * AUDIO_OUTPUT_FRAMES * MAX_AUDIO_CHANNELS * MAX_AUDIO_MIXES; float *ptr = bzalloc(size); for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) { size_t mix_pos = mix * AUDIO_OUTPUT_FRAMES * MAX_AUDIO_CHANNELS; for (size_t i = 0; i < MAX_AUDIO_CHANNELS; i++) { source->audio_output_buf[mix][i] = ptr + mix_pos + AUDIO_OUTPUT_FRAMES * i; } } } static void allocate_audio_mix_buffer(struct obs_source *source) { size_t size = sizeof(float) * AUDIO_OUTPUT_FRAMES * MAX_AUDIO_CHANNELS; float *ptr = bzalloc(size); for (size_t i = 0; i < MAX_AUDIO_CHANNELS; i++) { source->audio_mix_buf[i] = ptr + AUDIO_OUTPUT_FRAMES * i; } } static inline bool is_audio_source(const struct obs_source *source) { return source->info.output_flags & OBS_SOURCE_AUDIO; } static inline bool is_composite_source(const struct obs_source *source) { 
return source->info.output_flags & OBS_SOURCE_COMPOSITE; } static inline bool requires_canvas(const struct obs_source *source) { return source->info.output_flags & OBS_SOURCE_REQUIRES_CANVAS; } extern char *find_libobs_data_file(const char *file); /* internal initialization */ static bool obs_source_init(struct obs_source *source) { source->user_volume = 1.0f; source->volume = 1.0f; source->sync_offset = 0; source->balance = 0.5f; source->audio_active = true; pthread_mutex_init_value(&source->filter_mutex); pthread_mutex_init_value(&source->async_mutex); pthread_mutex_init_value(&source->audio_mutex); pthread_mutex_init_value(&source->audio_buf_mutex); pthread_mutex_init_value(&source->audio_cb_mutex); pthread_mutex_init_value(&source->caption_cb_mutex); pthread_mutex_init_value(&source->media_actions_mutex); if (pthread_mutex_init_recursive(&source->filter_mutex) != 0) return false; if (pthread_mutex_init(&source->audio_buf_mutex, NULL) != 0) return false; if (pthread_mutex_init(&source->audio_actions_mutex, NULL) != 0) return false; if (pthread_mutex_init(&source->audio_cb_mutex, NULL) != 0) return false; if (pthread_mutex_init(&source->audio_mutex, NULL) != 0) return false; if (pthread_mutex_init_recursive(&source->async_mutex) != 0) return false; if (pthread_mutex_init(&source->caption_cb_mutex, NULL) != 0) return false; if (pthread_mutex_init(&source->media_actions_mutex, NULL) != 0) return false; if (is_audio_source(source) || is_composite_source(source)) allocate_audio_output_buffer(source); if (source->info.audio_mix) allocate_audio_mix_buffer(source); if (source->info.type == OBS_SOURCE_TYPE_TRANSITION) { if (!obs_transition_init(source)) return false; } obs_context_init_control(&source->context, source, (obs_destroy_cb)obs_source_destroy); source->deinterlace_top_first = true; source->audio_mixers = 0xFF; source->private_settings = obs_data_create(); return true; } static void obs_source_init_finalize(struct obs_source *source, obs_canvas_t *canvas) { if 
(is_audio_source(source)) { pthread_mutex_lock(&obs->data.audio_sources_mutex); source->next_audio_source = obs->data.first_audio_source; source->prev_next_audio_source = &obs->data.first_audio_source; if (obs->data.first_audio_source) obs->data.first_audio_source->prev_next_audio_source = &source->next_audio_source; obs->data.first_audio_source = source; pthread_mutex_unlock(&obs->data.audio_sources_mutex); } if (!source->context.private) { if (requires_canvas(source)) { obs_canvas_insert_source(canvas, source); } else { obs_context_data_insert_name(&source->context, &obs->data.sources_mutex, &obs->data.public_sources); } } obs_context_data_insert_uuid(&source->context, &obs->data.sources_mutex, &obs->data.sources); } static bool obs_source_hotkey_mute(void *data, obs_hotkey_pair_id id, obs_hotkey_t *key, bool pressed) { UNUSED_PARAMETER(id); UNUSED_PARAMETER(key); struct obs_source *source = data; if (!pressed || obs_source_muted(source)) return false; obs_source_set_muted(source, true); return true; } static bool obs_source_hotkey_unmute(void *data, obs_hotkey_pair_id id, obs_hotkey_t *key, bool pressed) { UNUSED_PARAMETER(id); UNUSED_PARAMETER(key); struct obs_source *source = data; if (!pressed || !obs_source_muted(source)) return false; obs_source_set_muted(source, false); return true; } static void obs_source_hotkey_push_to_mute(void *data, obs_hotkey_id id, obs_hotkey_t *key, bool pressed) { struct audio_action action = {.timestamp = os_gettime_ns(), .type = AUDIO_ACTION_PTM, .set = pressed}; UNUSED_PARAMETER(id); UNUSED_PARAMETER(key); struct obs_source *source = data; pthread_mutex_lock(&source->audio_actions_mutex); da_push_back(source->audio_actions, &action); pthread_mutex_unlock(&source->audio_actions_mutex); source->user_push_to_mute_pressed = pressed; } static void obs_source_hotkey_push_to_talk(void *data, obs_hotkey_id id, obs_hotkey_t *key, bool pressed) { struct audio_action action = {.timestamp = os_gettime_ns(), .type = AUDIO_ACTION_PTT, .set 
= pressed}; UNUSED_PARAMETER(id); UNUSED_PARAMETER(key); struct obs_source *source = data; pthread_mutex_lock(&source->audio_actions_mutex); da_push_back(source->audio_actions, &action); pthread_mutex_unlock(&source->audio_actions_mutex); source->user_push_to_talk_pressed = pressed; } static void obs_source_init_audio_hotkeys(struct obs_source *source) { if (!(source->info.output_flags & OBS_SOURCE_AUDIO) || source->info.type != OBS_SOURCE_TYPE_INPUT) { source->mute_unmute_key = OBS_INVALID_HOTKEY_ID; source->push_to_talk_key = OBS_INVALID_HOTKEY_ID; return; } source->mute_unmute_key = obs_hotkey_pair_register_source(source, "libobs.mute", obs->hotkeys.mute, "libobs.unmute", obs->hotkeys.unmute, obs_source_hotkey_mute, obs_source_hotkey_unmute, source, source); source->push_to_mute_key = obs_hotkey_register_source(source, "libobs.push-to-mute", obs->hotkeys.push_to_mute, obs_source_hotkey_push_to_mute, source); source->push_to_talk_key = obs_hotkey_register_source(source, "libobs.push-to-talk", obs->hotkeys.push_to_talk, obs_source_hotkey_push_to_talk, source); } void obs_source_audio_output_capture_device_activated(void *vptr, calldata_t *cd) { UNUSED_PARAMETER(vptr); obs_source_t *src = calldata_ptr(cd, "source"); if (!src) return; obs_data_t *settings = obs_source_get_settings(src); const char *device_id = obs_data_get_string(settings, "device_id"); obs_source_audio_output_capture_device_changed(src, device_id); obs_data_release(settings); } extern bool devices_match(const char *id1, const char *id2); void obs_source_audio_output_capture_device_changed(obs_source_t *src, const char *device_id) { struct obs_core_audio *audio = &obs->audio; if (!audio->monitoring_device_name) return; if (!(src->info.output_flags & OBS_SOURCE_DO_NOT_SELF_MONITOR)) return; const char *mon_id = audio->monitoring_device_id; bool id_match = false; #ifdef __APPLE__ extern void get_desktop_default_id(char **p_id); if (device_id && strcmp(device_id, "default") == 0) { char *def_id = NULL; 
get_desktop_default_id(&def_id); id_match = devices_match(def_id, mon_id); if (def_id) bfree(def_id); } else { id_match = devices_match(device_id, mon_id); } #else id_match = devices_match(device_id, mon_id); #endif struct calldata cd; uint8_t stack[128]; calldata_init_fixed(&cd, stack, sizeof(stack)); if (id_match) { calldata_set_ptr(&cd, "source", src); signal_handler_signal(obs->signals, "deduplication_changed", &cd); signal_handler_connect(src->context.signals, "activate", obs_source_audio_output_capture_device_activated, NULL); blog(LOG_INFO, "Device for 'Audio Output Capture' source %s is also used for audio monitoring." "\nDeduplication logic is being applied to all monitored sources.", src->context.name); } else { if (src == audio->monitoring_duplicating_source) { calldata_set_ptr(&cd, "source", NULL); signal_handler_disconnect(src->context.signals, "activate", obs_source_audio_output_capture_device_activated, NULL); signal_handler_signal(obs->signals, "deduplication_changed", &cd); blog(LOG_INFO, "Deduplication logic stopped."); } } } static obs_source_t *obs_source_create_internal(const char *id, const char *name, const char *uuid, obs_data_t *settings, obs_data_t *hotkey_data, bool private, uint32_t last_obs_ver, obs_canvas_t *canvas) { struct obs_source *source = bzalloc(sizeof(struct obs_source)); const struct obs_source_info *info = get_source_info(id); if (!info) { blog(LOG_ERROR, "Source ID '%s' not found", id); source->info.id = bstrdup(id); source->owns_info_id = true; source->info.unversioned_id = bstrdup(source->info.id); } else { source->info = *info; /* Always mark filters as private so they aren't found by * source enum/search functions. 
* * XXX: Fix design flaws with filters */ if (info->type == OBS_SOURCE_TYPE_FILTER) private = true; } source->mute_unmute_key = OBS_INVALID_HOTKEY_PAIR_ID; source->push_to_mute_key = OBS_INVALID_HOTKEY_ID; source->push_to_talk_key = OBS_INVALID_HOTKEY_ID; source->last_obs_ver = last_obs_ver; if (!obs_source_init_context(source, settings, name, uuid, hotkey_data, private)) goto fail; if (info) { if (info->get_defaults) { info->get_defaults(source->context.settings); } if (info->get_defaults2) { info->get_defaults2(info->type_data, source->context.settings); } } if (!obs_source_init(source)) goto fail; /* Scenes need canvases, fall back to using default canvas if none provided here. */ if (requires_canvas(source) && !canvas) { blog(LOG_WARNING, "Attempted to add Scene without specifying a canvas! Using default canvas instead."); canvas = obs->data.main_canvas; } if (!private) obs_source_init_audio_hotkeys(source); /* allow the source to be created even if creation fails so that the * user's data doesn't become lost */ if (info && info->create) source->context.data = info->create(source->context.settings, source); if ((!info || info->create) && !source->context.data) blog(LOG_ERROR, "Failed to create source '%s'!", name); blog(LOG_DEBUG, "%ssource '%s' (%s) created", private ? 
"private " : "", name, id); source->flags = source->default_flags; source->enabled = true; /* audio deduplication initialization */ source->audio_is_duplicated = false; obs_source_init_finalize(source, canvas); if (!private) { if (canvas) obs_source_dosignal_canvas(source, canvas, "source_create_canvas", NULL); if (!canvas || canvas == obs->data.main_canvas) obs_source_dosignal(source, "source_create", NULL); } return source; fail: blog(LOG_ERROR, "obs_source_create failed"); obs_source_destroy(source); return NULL; } obs_source_t *obs_source_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data) { return obs_source_create_internal(id, name, NULL, settings, hotkey_data, false, LIBOBS_API_VER, NULL); } obs_source_t *obs_source_create_private(const char *id, const char *name, obs_data_t *settings) { return obs_source_create_internal(id, name, NULL, settings, NULL, true, LIBOBS_API_VER, NULL); } obs_source_t *obs_source_create_canvas(obs_canvas_t *canvas, const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data) { return obs_source_create_internal(id, name, NULL, settings, hotkey_data, false, LIBOBS_API_VER, canvas); } obs_source_t *obs_source_create_set_last_ver(obs_canvas_t *canvas, const char *id, const char *name, const char *uuid, obs_data_t *settings, obs_data_t *hotkey_data, uint32_t last_obs_ver, bool is_private) { return obs_source_create_internal(id, name, uuid, settings, hotkey_data, is_private, last_obs_ver, canvas); } static char *get_new_filter_name(obs_source_t *dst, const char *name) { struct dstr new_name = {0}; int inc = 0; dstr_copy(&new_name, name); for (;;) { obs_source_t *existing_filter = obs_source_get_filter_by_name(dst, new_name.array); if (!existing_filter) break; obs_source_release(existing_filter); dstr_printf(&new_name, "%s %d", name, ++inc + 1); } return new_name.array; } static void duplicate_filters(obs_source_t *dst, obs_source_t *src, bool private) { DARRAY(obs_source_t *) 
filters;
	da_init(filters);

	pthread_mutex_lock(&src->filter_mutex);
	da_reserve(filters, src->filters.num);
	for (size_t i = 0; i < src->filters.num; i++) {
		obs_source_t *s = obs_source_get_ref(src->filters.array[i]);
		if (s)
			da_push_back(filters, &s);
	}
	pthread_mutex_unlock(&src->filter_mutex);

	for (size_t i = filters.num; i > 0; i--) {
		obs_source_t *src_filter = filters.array[i - 1];
		char *new_name = get_new_filter_name(dst, src_filter->context.name);
		bool enabled = obs_source_enabled(src_filter);

		obs_source_t *dst_filter = obs_source_duplicate(src_filter, new_name, private);
		obs_source_set_enabled(dst_filter, enabled);
		bfree(new_name);
		obs_source_filter_add(dst, dst_filter);
		obs_source_release(dst_filter);
		obs_source_release(src_filter);
	}

	da_free(filters);
}

void obs_source_copy_filters(obs_source_t *dst, obs_source_t *src)
{
	if (!obs_source_valid(dst, "obs_source_copy_filters"))
		return;
	if (!obs_source_valid(src, "obs_source_copy_filters"))
		return;

	duplicate_filters(dst, src, dst->context.private);
}

static void duplicate_filter(obs_source_t *dst, obs_source_t *filter)
{
	if (!filter_compatible(dst, filter))
		return;

	char *new_name = get_new_filter_name(dst, filter->context.name);
	bool enabled = obs_source_enabled(filter);

	obs_source_t *dst_filter = obs_source_duplicate(filter, new_name, true);
	obs_source_set_enabled(dst_filter, enabled);
	bfree(new_name);
	obs_source_filter_add(dst, dst_filter);
	obs_source_release(dst_filter);
}

void obs_source_copy_single_filter(obs_source_t *dst, obs_source_t *filter)
{
	if (!obs_source_valid(dst, "obs_source_copy_single_filter"))
		return;
	if (!obs_source_valid(filter, "obs_source_copy_single_filter"))
		return;

	duplicate_filter(dst, filter);
}

obs_source_t *obs_source_duplicate(obs_source_t *source, const char *new_name, bool create_private)
{
	obs_source_t *new_source;
	obs_data_t *settings;

	if (!obs_source_valid(source, "obs_source_duplicate"))
		return NULL;

	if (source->info.type == OBS_SOURCE_TYPE_SCENE) {
		obs_scene_t *scene = obs_scene_from_source(source);
		if (scene && !create_private) {
			return obs_source_get_ref(source);
		}
		if (!scene)
			scene = obs_group_from_source(source);
		if (!scene)
			return NULL;

		obs_scene_t *new_scene = obs_scene_duplicate(
			scene, new_name, create_private ? OBS_SCENE_DUP_PRIVATE_COPY : OBS_SCENE_DUP_COPY);
		obs_source_t *new_source = obs_scene_get_source(new_scene);
		return new_source;
	}

	if ((source->info.output_flags & OBS_SOURCE_DO_NOT_DUPLICATE) != 0) {
		return obs_source_get_ref(source);
	}

	settings = obs_data_create();
	obs_data_apply(settings, source->context.settings);

	new_source = create_private ? obs_source_create_private(source->info.id, new_name, settings)
				    : obs_source_create(source->info.id, new_name, settings, NULL);

	new_source->audio_mixers = source->audio_mixers;
	new_source->sync_offset = source->sync_offset;
	new_source->user_volume = source->user_volume;
	new_source->user_muted = source->user_muted;
	new_source->volume = source->volume;
	new_source->muted = source->muted;
	new_source->flags = source->flags;

	obs_data_apply(new_source->private_settings, source->private_settings);

	if (source->info.type != OBS_SOURCE_TYPE_FILTER)
		duplicate_filters(new_source, source, create_private);

	obs_data_release(settings);
	return new_source;
}

void obs_source_frame_init(struct obs_source_frame *frame, enum video_format format, uint32_t width, uint32_t height)
{
	struct video_frame vid_frame;

	if (!obs_ptr_valid(frame, "obs_source_frame_init"))
		return;

	video_frame_init(&vid_frame, format, width, height);
	frame->format = format;
	frame->width = width;
	frame->height = height;

	for (size_t i = 0; i < MAX_AV_PLANES; i++) {
		frame->data[i] = vid_frame.data[i];
		frame->linesize[i] = vid_frame.linesize[i];
	}
}

static inline void obs_source_frame_decref(struct obs_source_frame *frame)
{
	if (os_atomic_dec_long(&frame->refs) == 0)
		obs_source_frame_destroy(frame);
}

static bool obs_source_filter_remove_refless(obs_source_t *source, obs_source_t *filter);
static void obs_source_destroy_defer(struct obs_source *source);

void obs_source_destroy(struct obs_source *source)
{
	if (!obs_source_valid(source, "obs_source_destroy"))
		return;

	if (os_atomic_set_long(&source->destroying, true) == true) {
		blog(LOG_ERROR, "Double destroy just occurred. "
				"Something called addref on a source "
				"after it was already fully released, "
				"I guess.");
		return;
	}

	if (is_audio_source(source)) {
		pthread_mutex_lock(&source->audio_cb_mutex);
		da_free(source->audio_cb_list);
		pthread_mutex_unlock(&source->audio_cb_mutex);
	}

	pthread_mutex_lock(&source->caption_cb_mutex);
	da_free(source->caption_cb_list);
	pthread_mutex_unlock(&source->caption_cb_mutex);

	if (source->info.type == OBS_SOURCE_TYPE_TRANSITION)
		obs_transition_clear(source);

	pthread_mutex_lock(&obs->data.audio_sources_mutex);
	if (source->prev_next_audio_source) {
		*source->prev_next_audio_source = source->next_audio_source;
		if (source->next_audio_source)
			source->next_audio_source->prev_next_audio_source = source->prev_next_audio_source;
	}
	pthread_mutex_unlock(&obs->data.audio_sources_mutex);

	if (source->filter_parent)
		obs_source_filter_remove_refless(source->filter_parent, source);

	while (source->filters.num)
		obs_source_filter_remove(source, source->filters.array[0]);

	obs_context_data_remove_uuid(&source->context, &obs->data.sources_mutex, &obs->data.sources);
	if (!source->context.private) {
		if (requires_canvas(source)) {
			obs_canvas_remove_source(source);
		} else {
			obs_context_data_remove_name(&source->context, &obs->data.sources_mutex,
						     &obs->data.public_sources);
		}
	}

	source_profiler_remove_source(source);

	/* defer source destroy */
	os_task_queue_queue_task(obs->destruction_task_thread, (os_task_t)obs_source_destroy_defer, source);
}

static void obs_source_destroy_defer(struct obs_source *source)
{
	size_t i;

	/* prevents the destruction of sources if destroy triggered inside of
	 * a video tick call */
	obs_context_wait(&source->context);

	obs_source_dosignal(source, "source_destroy", "destroy");

	if (source->context.data) {
		source->info.destroy(source->context.data);
		source->context.data = NULL;
	}

	blog(LOG_DEBUG, "%ssource '%s' destroyed", source->context.private ? "private " : "", source->context.name);

	audio_monitor_destroy(source->monitor);

	obs_hotkey_unregister(source->push_to_talk_key);
	obs_hotkey_unregister(source->push_to_mute_key);
	obs_hotkey_pair_unregister(source->mute_unmute_key);

	for (i = 0; i < source->async_cache.num; i++)
		obs_source_frame_decref(source->async_cache.array[i].frame);

	gs_enter_context(obs->video.graphics);
	if (source->async_texrender)
		gs_texrender_destroy(source->async_texrender);
	if (source->async_prev_texrender)
		gs_texrender_destroy(source->async_prev_texrender);
	for (size_t c = 0; c < MAX_AV_PLANES; c++) {
		gs_texture_destroy(source->async_textures[c]);
		gs_texture_destroy(source->async_prev_textures[c]);
	}
	if (source->filter_texrender)
		gs_texrender_destroy(source->filter_texrender);
	if (source->color_space_texrender)
		gs_texrender_destroy(source->color_space_texrender);
	gs_leave_context();

	for (i = 0; i < MAX_AV_PLANES; i++)
		bfree(source->audio_data.data[i]);
	for (i = 0; i < MAX_AUDIO_CHANNELS; i++)
		deque_free(&source->audio_input_buf[i]);
	audio_resampler_destroy(source->resampler);
	bfree(source->audio_output_buf[0][0]);
	bfree(source->audio_mix_buf[0]);

	obs_source_frame_destroy(source->async_preload_frame);

	if (source->info.type == OBS_SOURCE_TYPE_TRANSITION)
		obs_transition_free(source);

	da_free(source->audio_actions);
	da_free(source->audio_cb_list);
	da_free(source->caption_cb_list);
	da_free(source->async_cache);
	da_free(source->async_frames);
	da_free(source->filters);
	da_free(source->media_actions);

	pthread_mutex_destroy(&source->filter_mutex);
	pthread_mutex_destroy(&source->audio_actions_mutex);
	pthread_mutex_destroy(&source->audio_buf_mutex);
	pthread_mutex_destroy(&source->audio_cb_mutex);
	pthread_mutex_destroy(&source->audio_mutex);
	pthread_mutex_destroy(&source->caption_cb_mutex);
	pthread_mutex_destroy(&source->async_mutex);
	pthread_mutex_destroy(&source->media_actions_mutex);
	obs_data_release(source->private_settings);
	obs_context_data_free(&source->context);

	if (source->owns_info_id) {
		bfree((void *)source->info.id);
		bfree((void *)source->info.unversioned_id);
	}

	bfree(source);
}

void obs_source_addref(obs_source_t *source)
{
	if (!source)
		return;

	obs_ref_addref(&source->context.control->ref);
}

void obs_source_release(obs_source_t *source)
{
	if (!obs && source) {
		blog(LOG_WARNING, "Tried to release a source when the OBS "
				  "core is shut down!");
		return;
	}

	if (!source)
		return;

	obs_weak_source_t *control = get_weak(source);
	if (obs_ref_release(&control->ref)) {
		obs_source_destroy(source);
		obs_weak_source_release(control);
	}
}

void obs_weak_source_addref(obs_weak_source_t *weak)
{
	if (!weak)
		return;

	obs_weak_ref_addref(&weak->ref);
}

void obs_weak_source_release(obs_weak_source_t *weak)
{
	if (!weak)
		return;

	if (obs_weak_ref_release(&weak->ref))
		bfree(weak);
}

obs_source_t *obs_source_get_ref(obs_source_t *source)
{
	if (!source)
		return NULL;

	return obs_weak_source_get_source(get_weak(source));
}

obs_weak_source_t *obs_source_get_weak_source(obs_source_t *source)
{
	if (!source)
		return NULL;

	obs_weak_source_t *weak = get_weak(source);
	obs_weak_source_addref(weak);
	return weak;
}

obs_source_t *obs_weak_source_get_source(obs_weak_source_t *weak)
{
	if (!weak)
		return NULL;

	if (obs_weak_ref_get_ref(&weak->ref))
		return weak->source;

	return NULL;
}

bool obs_weak_source_expired(obs_weak_source_t *weak)
{
	return weak ? obs_weak_ref_expired(&weak->ref) : true;
}

bool obs_weak_source_references_source(obs_weak_source_t *weak, obs_source_t *source)
{
	return weak && source && weak->source == source;
}

void obs_source_remove(obs_source_t *source)
{
	if (!obs_source_valid(source, "obs_source_remove"))
		return;

	if (!source->removed) {
		obs_source_t *s = obs_source_get_ref(source);
		if (s) {
			s->removed = true;
			obs_source_dosignal(s, "source_remove", "remove");

			/* Remove from canvas if there is one. */
			if (source->canvas)
				obs_canvas_remove_source(s);

			obs_source_release(s);
		}
	}
}

bool obs_source_removed(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_removed") ? source->removed : true;
}

static inline obs_data_t *get_defaults(const struct obs_source_info *info)
{
	obs_data_t *settings = obs_data_create();
	if (info->get_defaults2)
		info->get_defaults2(info->type_data, settings);
	else if (info->get_defaults)
		info->get_defaults(settings);
	return settings;
}

obs_data_t *obs_source_settings(const char *id)
{
	const struct obs_source_info *info = get_source_info(id);
	return (info) ? get_defaults(info) : NULL;
}

obs_data_t *obs_get_source_defaults(const char *id)
{
	const struct obs_source_info *info = get_source_info(id);
	return info ? get_defaults(info) : NULL;
}

obs_properties_t *obs_get_source_properties(const char *id)
{
	const struct obs_source_info *info = get_source_info(id);

	if (info && (info->get_properties || info->get_properties2)) {
		obs_data_t *defaults = get_defaults(info);
		obs_properties_t *props;

		if (info->get_properties2)
			props = info->get_properties2(NULL, info->type_data);
		else
			props = info->get_properties(NULL);

		obs_properties_apply_settings(props, defaults);
		obs_data_release(defaults);
		return props;
	}
	return NULL;
}

obs_missing_files_t *obs_source_get_missing_files(const obs_source_t *source)
{
	if (!data_valid(source, "obs_source_get_missing_files"))
		return obs_missing_files_create();

	if (source->info.missing_files) {
		return source->info.missing_files(source->context.data);
	}

	return obs_missing_files_create();
}

void obs_source_replace_missing_file(obs_missing_file_cb cb, obs_source_t *source, const char *new_path, void *data)
{
	if (!data_valid(source, "obs_source_replace_missing_file"))
		return;

	cb(source->context.data, new_path, data);
}

bool obs_is_source_configurable(const char *id)
{
	const struct obs_source_info *info = get_source_info(id);
	return info && (info->get_properties || info->get_properties2);
}

bool obs_source_configurable(const obs_source_t *source)
{
	return data_valid(source, "obs_source_configurable") &&
	       (source->info.get_properties || source->info.get_properties2);
}

obs_properties_t *obs_source_properties(const obs_source_t *source)
{
	if (!data_valid(source, "obs_source_properties"))
		return NULL;

	if (source->info.get_properties2) {
		obs_properties_t *props;
		props = source->info.get_properties2(source->context.data, source->info.type_data);
		obs_properties_apply_settings(props, source->context.settings);
		return props;
	} else if (source->info.get_properties) {
		obs_properties_t *props;
		props = source->info.get_properties(source->context.data);
		obs_properties_apply_settings(props, source->context.settings);
		return props;
	}

	return NULL;
}

uint32_t obs_source_get_output_flags(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_output_flags") ? source->info.output_flags : 0;
}

uint32_t obs_get_source_output_flags(const char *id)
{
	const struct obs_source_info *info = get_source_info(id);
	return info ? info->output_flags : 0;
}

static void obs_source_deferred_update(obs_source_t *source)
{
	if (source->context.data && source->info.update) {
		long count = os_atomic_load_long(&source->defer_update_count);
		source->info.update(source->context.data, source->context.settings);
		os_atomic_compare_swap_long(&source->defer_update_count, count, 0);

		obs_source_dosignal(source, "source_update", "update");
	}
}

void obs_source_update(obs_source_t *source, obs_data_t *settings)
{
	if (!obs_source_valid(source, "obs_source_update"))
		return;

	if (settings) {
		obs_data_apply(source->context.settings, settings);
	}

	if (source->info.output_flags & OBS_SOURCE_VIDEO) {
		os_atomic_inc_long(&source->defer_update_count);
	} else if (source->context.data && source->info.update) {
		source->info.update(source->context.data, source->context.settings);
		obs_source_dosignal(source, "source_update", "update");
	}
}

void obs_source_reset_settings(obs_source_t *source, obs_data_t *settings)
{
	if (!obs_source_valid(source, "obs_source_reset_settings"))
		return;

	obs_data_clear(source->context.settings);
	obs_source_update(source, settings);
}

void obs_source_update_properties(obs_source_t *source)
{
	if (!obs_source_valid(source, "obs_source_update_properties"))
		return;

	obs_source_dosignal(source, NULL, "update_properties");
}

void obs_source_send_mouse_click(obs_source_t *source, const struct obs_mouse_event *event, int32_t type,
				 bool mouse_up, uint32_t click_count)
{
	if (!obs_source_valid(source, "obs_source_send_mouse_click"))
		return;

	if (source->info.output_flags & OBS_SOURCE_INTERACTION) {
		if (source->info.mouse_click) {
			source->info.mouse_click(source->context.data, event, type, mouse_up, click_count);
		}
	}
}

void obs_source_send_mouse_move(obs_source_t *source, const struct obs_mouse_event *event, bool mouse_leave)
{
	if (!obs_source_valid(source, "obs_source_send_mouse_move"))
		return;

	if (source->info.output_flags & OBS_SOURCE_INTERACTION) {
		if (source->info.mouse_move) {
			source->info.mouse_move(source->context.data, event, mouse_leave);
		}
	}
}

void obs_source_send_mouse_wheel(obs_source_t *source, const struct obs_mouse_event *event, int x_delta, int y_delta)
{
	if (!obs_source_valid(source, "obs_source_send_mouse_wheel"))
		return;

	if (source->info.output_flags & OBS_SOURCE_INTERACTION) {
		if (source->info.mouse_wheel) {
			source->info.mouse_wheel(source->context.data, event, x_delta, y_delta);
		}
	}
}

void obs_source_send_focus(obs_source_t *source, bool focus)
{
	if (!obs_source_valid(source, "obs_source_send_focus"))
		return;

	if (source->info.output_flags & OBS_SOURCE_INTERACTION) {
		if (source->info.focus) {
			source->info.focus(source->context.data, focus);
		}
	}
}

void obs_source_send_key_click(obs_source_t *source, const struct obs_key_event *event, bool key_up)
{
	if (!obs_source_valid(source, "obs_source_send_key_click"))
		return;

	if (source->info.output_flags & OBS_SOURCE_INTERACTION) {
		if (source->info.key_click) {
			source->info.key_click(source->context.data, event, key_up);
		}
	}
}

bool obs_source_get_texcoords_centered(obs_source_t *source)
{
	return source->texcoords_centered;
}

void obs_source_set_texcoords_centered(obs_source_t *source, bool centered)
{
	source->texcoords_centered = centered;
}

static void activate_source(obs_source_t *source)
{
	if (source->context.data && source->info.activate)
		source->info.activate(source->context.data);
	obs_source_dosignal(source, "source_activate", "activate");
}

static void deactivate_source(obs_source_t *source)
{
	if (source->context.data && source->info.deactivate)
		source->info.deactivate(source->context.data);
	obs_source_dosignal(source, "source_deactivate", "deactivate");
}

static void show_source(obs_source_t *source)
{
	if (source->context.data && source->info.show)
		source->info.show(source->context.data);
	obs_source_dosignal(source, "source_show", "show");
}

static void hide_source(obs_source_t *source)
{
	if (source->context.data && source->info.hide)
		source->info.hide(source->context.data);
	obs_source_dosignal(source, "source_hide", "hide");
}

static void activate_tree(obs_source_t *parent, obs_source_t *child, void *param)
{
	os_atomic_inc_long(&child->activate_refs);

	UNUSED_PARAMETER(parent);
	UNUSED_PARAMETER(param);
}

static void deactivate_tree(obs_source_t *parent, obs_source_t *child, void *param)
{
	os_atomic_dec_long(&child->activate_refs);

	UNUSED_PARAMETER(parent);
	UNUSED_PARAMETER(param);
}

static void show_tree(obs_source_t *parent, obs_source_t *child, void *param)
{
	os_atomic_inc_long(&child->show_refs);

	UNUSED_PARAMETER(parent);
	UNUSED_PARAMETER(param);
}

static void hide_tree(obs_source_t *parent, obs_source_t *child, void *param)
{
	os_atomic_dec_long(&child->show_refs);

	UNUSED_PARAMETER(parent);
	UNUSED_PARAMETER(param);
}

void obs_source_activate(obs_source_t *source, enum view_type type)
{
	if (!obs_source_valid(source, "obs_source_activate"))
		return;

	os_atomic_inc_long(&source->show_refs);
	obs_source_enum_active_tree(source, show_tree, NULL);

	if (type == MAIN_VIEW) {
		os_atomic_inc_long(&source->activate_refs);
		obs_source_enum_active_tree(source, activate_tree, NULL);
	}
}

void obs_source_deactivate(obs_source_t *source, enum view_type type)
{
	if (!obs_source_valid(source, "obs_source_deactivate"))
		return;

	if (os_atomic_load_long(&source->show_refs) > 0) {
		os_atomic_dec_long(&source->show_refs);
		obs_source_enum_active_tree(source, hide_tree, NULL);
	}

	if (type == MAIN_VIEW) {
		if (os_atomic_load_long(&source->activate_refs) > 0) {
			os_atomic_dec_long(&source->activate_refs);
			obs_source_enum_active_tree(source, deactivate_tree, NULL);
		}
	}
}

static inline struct obs_source_frame *get_closest_frame(obs_source_t *source, uint64_t sys_time);

static void filter_frame(obs_source_t *source, struct obs_source_frame **ref_frame)
{
	struct obs_source_frame *frame = *ref_frame;
	if (frame) {
		os_atomic_inc_long(&frame->refs);
		frame = filter_async_video(source, frame);
		if (frame)
			os_atomic_dec_long(&frame->refs);
	}

	*ref_frame = frame;
}

void process_media_actions(obs_source_t *source)
{
	struct media_action action = {0};

	for (;;) {
		pthread_mutex_lock(&source->media_actions_mutex);
		if (source->media_actions.num) {
			action = source->media_actions.array[0];
			da_pop_front(source->media_actions);
		} else {
			action.type = MEDIA_ACTION_NONE;
		}
		pthread_mutex_unlock(&source->media_actions_mutex);

		switch (action.type) {
		case MEDIA_ACTION_NONE:
			return;
		case MEDIA_ACTION_PLAY_PAUSE:
			source->info.media_play_pause(source->context.data, action.pause);
			if (action.pause)
				obs_source_dosignal(source, NULL, "media_pause");
			else
				obs_source_dosignal(source, NULL, "media_play");
			break;
		case MEDIA_ACTION_RESTART:
			source->info.media_restart(source->context.data);
			obs_source_dosignal(source, NULL, "media_restart");
			break;
		case MEDIA_ACTION_STOP:
			source->info.media_stop(source->context.data);
			obs_source_dosignal(source, NULL, "media_stopped");
			break;
		case MEDIA_ACTION_NEXT:
			source->info.media_next(source->context.data);
			obs_source_dosignal(source, NULL, "media_next");
			break;
		case MEDIA_ACTION_PREVIOUS:
			source->info.media_previous(source->context.data);
			obs_source_dosignal(source, NULL, "media_previous");
			break;
		case MEDIA_ACTION_SET_TIME:
			source->info.media_set_time(source->context.data, action.ms);
			break;
		}
	}
}

static void async_tick(obs_source_t *source)
{
	uint64_t sys_time = obs->video.video_time;

	pthread_mutex_lock(&source->async_mutex);

	if (deinterlacing_enabled(source)) {
		deinterlace_process_last_frame(source, sys_time);
	} else {
		if (source->cur_async_frame) {
			remove_async_frame(source, source->cur_async_frame);
			source->cur_async_frame = NULL;
		}

		source->cur_async_frame = get_closest_frame(source, sys_time);
	}

	source->last_sys_timestamp = sys_time;

	if (deinterlacing_enabled(source))
		filter_frame(source, &source->prev_async_frame);
	filter_frame(source, &source->cur_async_frame);

	if (source->cur_async_frame)
		source->async_update_texture = set_async_texture_size(source, source->cur_async_frame);
	pthread_mutex_unlock(&source->async_mutex);
}

void obs_source_video_tick(obs_source_t *source, float seconds)
{
	bool now_showing, now_active;

	if (!obs_source_valid(source, "obs_source_video_tick"))
		return;

	if (source->info.type == OBS_SOURCE_TYPE_TRANSITION)
		obs_transition_tick(source, seconds);

	if ((source->info.output_flags & OBS_SOURCE_ASYNC) != 0)
		async_tick(source);

	if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) != 0)
		process_media_actions(source);

	if (os_atomic_load_long(&source->defer_update_count) > 0)
		obs_source_deferred_update(source);

	/* reset the filter render texture information once every frame */
	if (source->filter_texrender)
		gs_texrender_reset(source->filter_texrender);

	/* call show/hide if the reference changed */
	now_showing = !!source->show_refs;
	if (now_showing != source->showing) {
		if (now_showing) {
			show_source(source);
		} else {
			hide_source(source);
		}

		if (source->filters.num) {
			for (size_t i = source->filters.num; i > 0; i--) {
				obs_source_t *filter = source->filters.array[i - 1];
				if (now_showing) {
					show_source(filter);
				} else {
					hide_source(filter);
				}
			}
		}

		source->showing = now_showing;
	}

	/* call activate/deactivate if the reference changed */
	now_active = !!source->activate_refs;
	if (now_active != source->active) {
		if (now_active) {
			activate_source(source);
		} else {
			deactivate_source(source);
		}

		if (source->filters.num) {
			for (size_t i = source->filters.num; i > 0; i--) {
				obs_source_t *filter = source->filters.array[i - 1];
				if (now_active) {
					activate_source(filter);
				} else {
					deactivate_source(filter);
				}
			}
		}

		source->active = now_active;
	}

	if (source->context.data && source->info.video_tick)
		source->info.video_tick(source->context.data, seconds);

	source->async_rendered = false;
	source->deinterlace_rendered = false;
}

/* unless the value is 3+ hours worth of frames, this won't overflow */
static inline uint64_t conv_frames_to_time(const size_t sample_rate, const size_t frames)
{
	if (!sample_rate)
		return 0;

	return util_mul_div64(frames, 1000000000ULL, sample_rate);
}

static inline size_t conv_time_to_frames(const size_t sample_rate, const uint64_t duration)
{
	return (size_t)util_mul_div64(duration, sample_rate, 1000000000ULL);
}

/* maximum buffer size */
#define MAX_BUF_SIZE (1000 * AUDIO_OUTPUT_FRAMES * sizeof(float))

/* time threshold in nanoseconds to ensure audio timing is as seamless as
 * possible */
#define TS_SMOOTHING_THRESHOLD 70000000ULL

static inline void reset_audio_timing(obs_source_t *source, uint64_t timestamp, uint64_t os_time)
{
	source->timing_set = true;
	source->timing_adjust = os_time - timestamp;
}

static void reset_audio_data(obs_source_t *source, uint64_t os_time)
{
	for (size_t i = 0; i < MAX_AUDIO_CHANNELS; i++) {
		if (source->audio_input_buf[i].size)
			deque_pop_front(&source->audio_input_buf[i], NULL, source->audio_input_buf[i].size);
	}

	source->last_audio_input_buf_size = 0;
	source->audio_ts = os_time;
	source->next_audio_sys_ts_min = os_time;
}

static void handle_ts_jump(obs_source_t *source, uint64_t expected, uint64_t ts, uint64_t diff, uint64_t os_time)
{
	blog(LOG_DEBUG,
	     "Timestamp for source '%s' jumped by '%" PRIu64 "', "
	     "expected value %" PRIu64 ", input value %" PRIu64,
	     source->context.name, diff, expected, ts);

	pthread_mutex_lock(&source->audio_buf_mutex);
	reset_audio_timing(source, ts, os_time);
	reset_audio_data(source, os_time);
	pthread_mutex_unlock(&source->audio_buf_mutex);
}

static void source_signal_audio_data(obs_source_t *source, const struct audio_data *in, bool muted)
{
	pthread_mutex_lock(&source->audio_cb_mutex);

	for (size_t i = source->audio_cb_list.num; i > 0; i--) {
		struct audio_cb_info info = source->audio_cb_list.array[i - 1];
		info.callback(info.param, source, in, muted);
	}

	pthread_mutex_unlock(&source->audio_cb_mutex);
}

static inline uint64_t uint64_diff(uint64_t ts1, uint64_t ts2)
{
	return (ts1 < ts2) ? (ts2 - ts1) : (ts1 - ts2);
}

static inline size_t get_buf_placement(audio_t *audio, uint64_t offset)
{
	uint32_t sample_rate = audio_output_get_sample_rate(audio);
	return (size_t)util_mul_div64(offset, sample_rate, 1000000000ULL);
}

static void source_output_audio_place(obs_source_t *source, const struct audio_data *in)
{
	audio_t *audio = obs->audio.audio;
	size_t buf_placement;
	size_t channels = audio_output_get_channels(audio);
	size_t size = in->frames * sizeof(float);

	if (!source->audio_ts || in->timestamp < source->audio_ts)
		reset_audio_data(source, in->timestamp);

	buf_placement = get_buf_placement(audio, in->timestamp - source->audio_ts) * sizeof(float);

#if DEBUG_AUDIO == 1
	blog(LOG_DEBUG, "frames: %lu, size: %lu, placement: %lu, base_ts: %llu, ts: %llu", (unsigned long)in->frames,
	     (unsigned long)source->audio_input_buf[0].size, (unsigned long)buf_placement, source->audio_ts,
	     in->timestamp);
#endif

	/* do not allow the circular buffers to become too big */
	if ((buf_placement + size) > MAX_BUF_SIZE)
		return;

	for (size_t i = 0; i < channels; i++) {
		deque_place(&source->audio_input_buf[i], buf_placement, in->data[i], size);
		deque_pop_back(&source->audio_input_buf[i], NULL,
			       source->audio_input_buf[i].size - (buf_placement + size));
	}

	source->last_audio_input_buf_size = 0;
}

static inline void source_output_audio_push_back(obs_source_t *source, const struct audio_data *in)
{
	audio_t *audio = obs->audio.audio;
	size_t channels = audio_output_get_channels(audio);
	size_t size = in->frames * sizeof(float);

	/* do not allow the circular buffers to become too big */
	if ((source->audio_input_buf[0].size + size) > MAX_BUF_SIZE)
		return;

	for (size_t i = 0; i < channels; i++)
		deque_push_back(&source->audio_input_buf[i], in->data[i], size);

	/* reset audio input buffer size to ensure that audio doesn't get
	 * perpetually cut */
	source->last_audio_input_buf_size = 0;
}

static inline bool source_muted(obs_source_t *source, uint64_t os_time)
{
	if (source->push_to_mute_enabled && source->user_push_to_mute_pressed)
		source->push_to_mute_stop_time = os_time + source->push_to_mute_delay * 1000000;

	if (source->push_to_talk_enabled && source->user_push_to_talk_pressed)
		source->push_to_talk_stop_time = os_time + source->push_to_talk_delay * 1000000;

	bool push_to_mute_active = source->user_push_to_mute_pressed || os_time < source->push_to_mute_stop_time;
	bool push_to_talk_active = source->user_push_to_talk_pressed || os_time < source->push_to_talk_stop_time;

	return !source->enabled || source->user_muted || (source->push_to_mute_enabled && push_to_mute_active) ||
	       (source->push_to_talk_enabled && !push_to_talk_active);
}

static void source_output_audio_data(obs_source_t *source, const struct audio_data *data)
{
	size_t sample_rate = audio_output_get_sample_rate(obs->audio.audio);
	struct audio_data in = *data;
	uint64_t diff;
	uint64_t os_time = os_gettime_ns();
	int64_t sync_offset;
	bool using_direct_ts = false;
	bool push_back = false;

	/* detects 'directly' set timestamps as long as they're within
	 * a certain threshold */
	if (uint64_diff(in.timestamp, os_time) < MAX_TS_VAR) {
		source->timing_adjust = 0;
		source->timing_set = true;
		using_direct_ts = true;
	}

	if (!source->timing_set) {
		reset_audio_timing(source, in.timestamp, os_time);
	} else if (source->next_audio_ts_min != 0) {
		diff = uint64_diff(source->next_audio_ts_min, in.timestamp);

		/* smooth audio if within threshold */
		if (diff > MAX_TS_VAR && !using_direct_ts)
			handle_ts_jump(source, source->next_audio_ts_min, in.timestamp, diff, os_time);
		else if (diff < TS_SMOOTHING_THRESHOLD) {
			if (source->async_unbuffered && source->async_decoupled)
				source->timing_adjust = os_time - in.timestamp;
			in.timestamp = source->next_audio_ts_min;
		} else {
			blog(LOG_DEBUG,
			     "Audio timestamp for '%s' exceeded TS_SMOOTHING_THRESHOLD, diff=%" PRIu64
			     " ns, expected %" PRIu64 ", input %" PRIu64,
			     source->context.name, diff, source->next_audio_ts_min, in.timestamp);
		}
	}

	source->next_audio_ts_min = in.timestamp + conv_frames_to_time(sample_rate, in.frames);

	in.timestamp += source->timing_adjust;

	pthread_mutex_lock(&source->audio_buf_mutex);

	if (source->next_audio_sys_ts_min == in.timestamp) {
		push_back = true;
	} else if (source->next_audio_sys_ts_min) {
		diff = uint64_diff(source->next_audio_sys_ts_min, in.timestamp);

		if (diff < TS_SMOOTHING_THRESHOLD) {
			push_back = true;
		} else if (diff > MAX_TS_VAR) {
			/* This typically only happens if used with async video when
			 * audio/video start transitioning in to a timestamp jump.
			 * Audio will typically have a timestamp jump, and then video
			 * will have a timestamp jump. If that case is encountered,
			 * just clear the audio data in that small window and force a
			 * resync. This handles all cases rather than just looping. */
			reset_audio_timing(source, data->timestamp, os_time);
			in.timestamp = data->timestamp + source->timing_adjust;
		}
	}

	sync_offset = source->sync_offset;
	in.timestamp += sync_offset;
	in.timestamp -= source->resample_offset;

	source->next_audio_sys_ts_min = source->next_audio_ts_min + source->timing_adjust;

	if (source->last_sync_offset != sync_offset) {
		if (source->last_sync_offset)
			push_back = false;

		source->last_sync_offset = sync_offset;
	}

	if (source->monitoring_type != OBS_MONITORING_TYPE_MONITOR_ONLY) {
		if (push_back && source->audio_ts)
			source_output_audio_push_back(source, &in);
		else
			source_output_audio_place(source, &in);
	}

	pthread_mutex_unlock(&source->audio_buf_mutex);

	source_signal_audio_data(source, data, source_muted(source, os_time));
}

enum convert_type {
	CONVERT_NONE,
	CONVERT_NV12,
	CONVERT_420,
	CONVERT_420_PQ,
	CONVERT_420_A,
	CONVERT_422,
	CONVERT_422P10LE,
	CONVERT_422_A,
	CONVERT_422_PACK,
	CONVERT_444,
	CONVERT_444P12LE,
	CONVERT_444_A,
	CONVERT_444P12LE_A,
	CONVERT_444_A_PACK,
	CONVERT_800,
	CONVERT_RGB_LIMITED,
	CONVERT_BGR3,
	CONVERT_I010,
	CONVERT_P010,
	CONVERT_V210,
	CONVERT_R10L,
};

static inline enum convert_type get_convert_type(enum video_format format, bool full_range, uint8_t trc)
{
	switch (format) {
	case VIDEO_FORMAT_I420:
		return (trc == VIDEO_TRC_PQ) ? CONVERT_420_PQ : CONVERT_420;
	case VIDEO_FORMAT_NV12:
		return CONVERT_NV12;
	case VIDEO_FORMAT_I444:
		return CONVERT_444;
	case VIDEO_FORMAT_I412:
		return CONVERT_444P12LE;
	case VIDEO_FORMAT_I422:
		return CONVERT_422;
	case VIDEO_FORMAT_I210:
		return CONVERT_422P10LE;
	case VIDEO_FORMAT_YVYU:
	case VIDEO_FORMAT_YUY2:
	case VIDEO_FORMAT_UYVY:
		return CONVERT_422_PACK;
	case VIDEO_FORMAT_Y800:
		return CONVERT_800;
	case VIDEO_FORMAT_NONE:
	case VIDEO_FORMAT_RGBA:
	case VIDEO_FORMAT_BGRA:
	case VIDEO_FORMAT_BGRX:
		return full_range ? CONVERT_NONE : CONVERT_RGB_LIMITED;
	case VIDEO_FORMAT_BGR3:
		return CONVERT_BGR3;
	case VIDEO_FORMAT_I40A:
		return CONVERT_420_A;
	case VIDEO_FORMAT_I42A:
		return CONVERT_422_A;
	case VIDEO_FORMAT_YUVA:
		return CONVERT_444_A;
	case VIDEO_FORMAT_YA2L:
		return CONVERT_444P12LE_A;
	case VIDEO_FORMAT_AYUV:
		return CONVERT_444_A_PACK;
	case VIDEO_FORMAT_I010:
		return CONVERT_I010;
	case VIDEO_FORMAT_P010:
		return CONVERT_P010;
	case VIDEO_FORMAT_V210:
		return CONVERT_V210;
	case VIDEO_FORMAT_R10L:
		return CONVERT_R10L;
	case VIDEO_FORMAT_P216:
	case VIDEO_FORMAT_P416:
		/* Unimplemented */
		break;
	}

	return CONVERT_NONE;
}

static inline bool set_packed422_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	source->async_convert_width[0] = half_width;
	source->async_convert_height[0] = height;
	source->async_texture_formats[0] = GS_BGRA;
	source->async_channel_count = 1;
	return true;
}

static inline bool set_packed444_alpha_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width;
	source->async_convert_height[0] = frame->height;
	source->async_texture_formats[0] = GS_BGRA;
	source->async_channel_count = 1;
	return true;
}

static inline bool set_planar444_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width;
	source->async_convert_width[1] = frame->width;
	source->async_convert_width[2] = frame->width;
	source->async_convert_height[0] = frame->height;
	source->async_convert_height[1] = frame->height;
	source->async_convert_height[2] = frame->height;
	source->async_texture_formats[0] = GS_R8;
	source->async_texture_formats[1] = GS_R8;
	source->async_texture_formats[2] = GS_R8;
	source->async_channel_count = 3;
	return true;
}

static inline bool set_planar444_16_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width;
	source->async_convert_width[1] = frame->width;
	source->async_convert_width[2] = frame->width;
	source->async_convert_height[0] = frame->height;
	source->async_convert_height[1] = frame->height;
	source->async_convert_height[2] = frame->height;
	source->async_texture_formats[0] = GS_R16;
	source->async_texture_formats[1] = GS_R16;
	source->async_texture_formats[2] = GS_R16;
	source->async_channel_count = 3;
	return true;
}

static inline bool set_planar444_alpha_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width;
	source->async_convert_width[1] = frame->width;
	source->async_convert_width[2] = frame->width;
	source->async_convert_width[3] = frame->width;
	source->async_convert_height[0] = frame->height;
	source->async_convert_height[1] = frame->height;
	source->async_convert_height[2] = frame->height;
	source->async_convert_height[3] = frame->height;
	source->async_texture_formats[0] = GS_R8;
	source->async_texture_formats[1] = GS_R8;
	source->async_texture_formats[2] = GS_R8;
	source->async_texture_formats[3] = GS_R8;
	source->async_channel_count = 4;
	return true;
}

static inline bool set_planar444_16_alpha_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width;
	source->async_convert_width[1] = frame->width;
	source->async_convert_width[2] = frame->width;
	source->async_convert_width[3] = frame->width;
	source->async_convert_height[0] = frame->height;
	source->async_convert_height[1] = frame->height;
	source->async_convert_height[2] = frame->height;
	source->async_convert_height[3] = frame->height;
	source->async_texture_formats[0] = GS_R16;
	source->async_texture_formats[1] = GS_R16;
	source->async_texture_formats[2] = GS_R16;
	source->async_texture_formats[3] = GS_R16;
	source->async_channel_count = 4;
	return true;
}

static inline bool set_planar420_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	const uint32_t half_height = (height + 1) / 2;
	source->async_convert_width[0] = width;
	source->async_convert_width[1] = half_width;
	source->async_convert_width[2] = half_width;
	source->async_convert_height[0] = height;
	source->async_convert_height[1] = half_height;
	source->async_convert_height[2] = half_height;
	source->async_texture_formats[0] = GS_R8;
	source->async_texture_formats[1] = GS_R8;
	source->async_texture_formats[2] = GS_R8;
	source->async_channel_count = 3;
	return true;
}

static inline bool set_planar420_alpha_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	const uint32_t half_height = (height + 1) / 2;
	source->async_convert_width[0] = width;
	source->async_convert_width[1] = half_width;
	source->async_convert_width[2] = half_width;
	source->async_convert_width[3] = width;
	source->async_convert_height[0] = height;
	source->async_convert_height[1] = half_height;
	source->async_convert_height[2] = half_height;
	source->async_convert_height[3] = height;
	source->async_texture_formats[0] = GS_R8;
	source->async_texture_formats[1] = GS_R8;
	source->async_texture_formats[2] = GS_R8;
	source->async_texture_formats[3] = GS_R8;
	source->async_channel_count = 4;
	return true;
}

static inline bool set_planar422_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	source->async_convert_width[0] = width;
	source->async_convert_width[1] = half_width;
	source->async_convert_width[2] = half_width;
	source->async_convert_height[0] = height;
	source->async_convert_height[1] = height;
	source->async_convert_height[2] = height;
	source->async_texture_formats[0] = GS_R8;
	source->async_texture_formats[1] = GS_R8;
	source->async_texture_formats[2] = GS_R8;
	source->async_channel_count = 3;
	return true;
}

static inline bool set_planar422_16_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	source->async_convert_width[0] = width;
	source->async_convert_width[1] = half_width;
	source->async_convert_width[2] = half_width;
	source->async_convert_height[0] = height;
	source->async_convert_height[1] = height;
	source->async_convert_height[2] = height;
	source->async_texture_formats[0] = GS_R16;
	source->async_texture_formats[1] = GS_R16;
	source->async_texture_formats[2] = GS_R16;
	source->async_channel_count = 3;
	return true;
}

static inline bool set_planar422_alpha_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	source->async_convert_width[0] = width;
	source->async_convert_width[1] = half_width;
	source->async_convert_width[2] = half_width;
	source->async_convert_width[3] = width;
	source->async_convert_height[0] = height;
	source->async_convert_height[1] = height;
	source->async_convert_height[2] = height;
	source->async_convert_height[3] = height;
	source->async_texture_formats[0] = GS_R8;
	source->async_texture_formats[1] = GS_R8;
	source->async_texture_formats[2] = GS_R8;
	source->async_texture_formats[3] = GS_R8;
	source->async_channel_count = 4;
	return true;
}

static inline bool set_nv12_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	const uint32_t half_height = (height + 1) / 2;
	source->async_convert_width[0] = width;
	source->async_convert_width[1] = half_width;
	source->async_convert_height[0] = height;
	source->async_convert_height[1] = half_height;
	source->async_texture_formats[0] = GS_R8;
	source->async_texture_formats[1] = GS_R8G8;
	source->async_channel_count = 2;
	return true;
}

static inline bool set_y800_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width;
	source->async_convert_height[0] = frame->height;
	source->async_texture_formats[0] = GS_R8;
	source->async_channel_count = 1;
	return true;
}

static inline bool set_rgb_limited_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width;
	source->async_convert_height[0] = frame->height;
	source->async_texture_formats[0] = convert_video_format(frame->format, frame->trc);
	source->async_channel_count = 1;
	return true;
}

static inline bool set_bgr3_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	source->async_convert_width[0] = frame->width * 3;
	source->async_convert_height[0] = frame->height;
	source->async_texture_formats[0] = GS_R8;
	source->async_channel_count = 1;
	return true;
}

static inline bool set_i010_sizes(struct obs_source *source, const struct obs_source_frame *frame)
{
	const uint32_t width = frame->width;
	const uint32_t height = frame->height;
	const uint32_t half_width = (width + 1) / 2;
	const uint32_t half_height = (height + 1) / 2;
	source->async_convert_width[0] = width;
source->async_convert_width[1] = half_width; source->async_convert_width[2] = half_width; source->async_convert_height[0] = height; source->async_convert_height[1] = half_height; source->async_convert_height[2] = half_height; source->async_texture_formats[0] = GS_R16; source->async_texture_formats[1] = GS_R16; source->async_texture_formats[2] = GS_R16; source->async_channel_count = 3; return true; } static inline bool set_p010_sizes(struct obs_source *source, const struct obs_source_frame *frame) { const uint32_t width = frame->width; const uint32_t height = frame->height; const uint32_t half_width = (width + 1) / 2; const uint32_t half_height = (height + 1) / 2; source->async_convert_width[0] = width; source->async_convert_width[1] = half_width; source->async_convert_height[0] = height; source->async_convert_height[1] = half_height; source->async_texture_formats[0] = GS_R16; source->async_texture_formats[1] = GS_RG16; source->async_channel_count = 2; return true; } static inline bool set_v210_sizes(struct obs_source *source, const struct obs_source_frame *frame) { const uint32_t width = frame->width; const uint32_t height = frame->height; const uint32_t adjusted_width = ((width + 5) / 6) * 4; source->async_convert_width[0] = adjusted_width; source->async_convert_height[0] = height; source->async_texture_formats[0] = GS_R10G10B10A2; source->async_channel_count = 1; return true; } static inline bool set_r10l_sizes(struct obs_source *source, const struct obs_source_frame *frame) { source->async_convert_width[0] = frame->width; source->async_convert_height[0] = frame->height; source->async_texture_formats[0] = GS_BGRA_UNORM; source->async_channel_count = 1; return true; } static inline bool init_gpu_conversion(struct obs_source *source, const struct obs_source_frame *frame) { switch (get_convert_type(frame->format, frame->full_range, frame->trc)) { case CONVERT_422_PACK: return set_packed422_sizes(source, frame); case CONVERT_420: case CONVERT_420_PQ: return 
set_planar420_sizes(source, frame); case CONVERT_422: return set_planar422_sizes(source, frame); case CONVERT_422P10LE: return set_planar422_16_sizes(source, frame); case CONVERT_NV12: return set_nv12_sizes(source, frame); case CONVERT_444: return set_planar444_sizes(source, frame); case CONVERT_444P12LE: return set_planar444_16_sizes(source, frame); case CONVERT_800: return set_y800_sizes(source, frame); case CONVERT_RGB_LIMITED: return set_rgb_limited_sizes(source, frame); case CONVERT_BGR3: return set_bgr3_sizes(source, frame); case CONVERT_420_A: return set_planar420_alpha_sizes(source, frame); case CONVERT_422_A: return set_planar422_alpha_sizes(source, frame); case CONVERT_444_A: return set_planar444_alpha_sizes(source, frame); case CONVERT_444P12LE_A: return set_planar444_16_alpha_sizes(source, frame); case CONVERT_444_A_PACK: return set_packed444_alpha_sizes(source, frame); case CONVERT_I010: return set_i010_sizes(source, frame); case CONVERT_P010: return set_p010_sizes(source, frame); case CONVERT_V210: return set_v210_sizes(source, frame); case CONVERT_R10L: return set_r10l_sizes(source, frame); case CONVERT_NONE: assert(false && "No conversion requested"); break; } return false; } bool set_async_texture_size(struct obs_source *source, const struct obs_source_frame *frame) { enum convert_type cur = get_convert_type(frame->format, frame->full_range, frame->trc); if (source->async_width == frame->width && source->async_height == frame->height && source->async_format == frame->format && source->async_full_range == frame->full_range && source->async_trc == frame->trc) return true; source->async_width = frame->width; source->async_height = frame->height; source->async_format = frame->format; source->async_full_range = frame->full_range; source->async_trc = frame->trc; gs_enter_context(obs->video.graphics); for (size_t c = 0; c < MAX_AV_PLANES; c++) { gs_texture_destroy(source->async_textures[c]); source->async_textures[c] = NULL; 
gs_texture_destroy(source->async_prev_textures[c]); source->async_prev_textures[c] = NULL; } gs_texrender_destroy(source->async_texrender); gs_texrender_destroy(source->async_prev_texrender); source->async_texrender = NULL; source->async_prev_texrender = NULL; const enum gs_color_format format = convert_video_format(frame->format, frame->trc); const bool async_gpu_conversion = (cur != CONVERT_NONE) && init_gpu_conversion(source, frame); source->async_gpu_conversion = async_gpu_conversion; if (async_gpu_conversion) { source->async_texrender = gs_texrender_create(format, GS_ZS_NONE); for (int c = 0; c < source->async_channel_count; ++c) source->async_textures[c] = gs_texture_create(source->async_convert_width[c], source->async_convert_height[c], source->async_texture_formats[c], 1, NULL, GS_DYNAMIC); } else { source->async_textures[0] = gs_texture_create(frame->width, frame->height, format, 1, NULL, GS_DYNAMIC); } if (deinterlacing_enabled(source)) set_deinterlace_texture_size(source); gs_leave_context(); return source->async_textures[0] != NULL; } static void upload_raw_frame(gs_texture_t *tex[MAX_AV_PLANES], const struct obs_source_frame *frame) { switch (get_convert_type(frame->format, frame->full_range, frame->trc)) { case CONVERT_422_PACK: case CONVERT_800: case CONVERT_RGB_LIMITED: case CONVERT_BGR3: case CONVERT_420: case CONVERT_420_PQ: case CONVERT_422: case CONVERT_422P10LE: case CONVERT_NV12: case CONVERT_444: case CONVERT_444P12LE: case CONVERT_420_A: case CONVERT_422_A: case CONVERT_444_A: case CONVERT_444P12LE_A: case CONVERT_444_A_PACK: case CONVERT_I010: case CONVERT_P010: case CONVERT_V210: case CONVERT_R10L: for (size_t c = 0; c < MAX_AV_PLANES; c++) { if (tex[c]) gs_texture_set_image(tex[c], frame->data[c], frame->linesize[c], false); } break; case CONVERT_NONE: assert(false && "No conversion requested"); break; } } static const char *select_conversion_technique(enum video_format format, bool full_range, uint8_t trc) { switch (format) { case 
VIDEO_FORMAT_UYVY: return "UYVY_Reverse"; case VIDEO_FORMAT_YUY2: switch (trc) { case VIDEO_TRC_PQ: return "YUY2_PQ_Reverse"; case VIDEO_TRC_HLG: return "YUY2_HLG_Reverse"; default: return "YUY2_Reverse"; } case VIDEO_FORMAT_YVYU: return "YVYU_Reverse"; case VIDEO_FORMAT_I420: switch (trc) { case VIDEO_TRC_PQ: return "I420_PQ_Reverse"; case VIDEO_TRC_HLG: return "I420_HLG_Reverse"; default: return "I420_Reverse"; } case VIDEO_FORMAT_NV12: switch (trc) { case VIDEO_TRC_PQ: return "NV12_PQ_Reverse"; case VIDEO_TRC_HLG: return "NV12_HLG_Reverse"; default: return "NV12_Reverse"; } case VIDEO_FORMAT_I444: return "I444_Reverse"; case VIDEO_FORMAT_I412: switch (trc) { case VIDEO_TRC_PQ: return "I412_PQ_Reverse"; case VIDEO_TRC_HLG: return "I412_HLG_Reverse"; default: return "I412_Reverse"; } case VIDEO_FORMAT_Y800: return full_range ? "Y800_Full" : "Y800_Limited"; case VIDEO_FORMAT_BGR3: return full_range ? "BGR3_Full" : "BGR3_Limited"; case VIDEO_FORMAT_I422: return "I422_Reverse"; case VIDEO_FORMAT_I210: switch (trc) { case VIDEO_TRC_PQ: return "I210_PQ_Reverse"; case VIDEO_TRC_HLG: return "I210_HLG_Reverse"; default: return "I210_Reverse"; } case VIDEO_FORMAT_I40A: return "I40A_Reverse"; case VIDEO_FORMAT_I42A: return "I42A_Reverse"; case VIDEO_FORMAT_YUVA: return "YUVA_Reverse"; case VIDEO_FORMAT_YA2L: return "YA2L_Reverse"; case VIDEO_FORMAT_AYUV: return "AYUV_Reverse"; case VIDEO_FORMAT_I010: { switch (trc) { case VIDEO_TRC_PQ: return "I010_PQ_2020_709_Reverse"; case VIDEO_TRC_HLG: return "I010_HLG_2020_709_Reverse"; default: return "I010_SRGB_Reverse"; } } case VIDEO_FORMAT_P010: { switch (trc) { case VIDEO_TRC_PQ: return "P010_PQ_2020_709_Reverse"; case VIDEO_TRC_HLG: return "P010_HLG_2020_709_Reverse"; default: return "P010_SRGB_Reverse"; } } case VIDEO_FORMAT_V210: { switch (trc) { case VIDEO_TRC_PQ: return "V210_PQ_2020_709_Reverse"; case VIDEO_TRC_HLG: return "V210_HLG_2020_709_Reverse"; default: return "V210_SRGB_Reverse"; } } case VIDEO_FORMAT_R10L: { switch 
(trc) { case VIDEO_TRC_PQ: return full_range ? "R10L_PQ_2020_709_Full_Reverse" : "R10L_PQ_2020_709_Limited_Reverse"; case VIDEO_TRC_HLG: return full_range ? "R10L_HLG_2020_709_Full_Reverse" : "R10L_HLG_2020_709_Limited_Reverse"; default: return full_range ? "R10L_SRGB_Full_Reverse" : "R10L_SRGB_Limited_Reverse"; } } case VIDEO_FORMAT_BGRA: case VIDEO_FORMAT_BGRX: case VIDEO_FORMAT_RGBA: case VIDEO_FORMAT_NONE: if (full_range) assert(false && "No conversion requested"); else return "RGB_Limited"; break; case VIDEO_FORMAT_P216: case VIDEO_FORMAT_P416: /* Unimplemented */ break; } return NULL; } static bool need_linear_output(enum video_format format) { return (format == VIDEO_FORMAT_I010) || (format == VIDEO_FORMAT_P010) || (format == VIDEO_FORMAT_I210) || (format == VIDEO_FORMAT_I412) || (format == VIDEO_FORMAT_YA2L); } static inline void set_eparam(gs_effect_t *effect, const char *name, float val) { gs_eparam_t *param = gs_effect_get_param_by_name(effect, name); gs_effect_set_float(param, val); } static bool update_async_texrender(struct obs_source *source, const struct obs_source_frame *frame, gs_texture_t *tex[MAX_AV_PLANES], gs_texrender_t *texrender) { GS_DEBUG_MARKER_BEGIN(GS_DEBUG_COLOR_CONVERT_FORMAT, "Convert Format"); gs_texrender_reset(texrender); upload_raw_frame(tex, frame); uint32_t cx = source->async_width; uint32_t cy = source->async_height; const char *tech_name = select_conversion_technique(frame->format, frame->full_range, frame->trc); gs_effect_t *conv = obs->video.conversion_effect; gs_technique_t *tech = gs_effect_get_technique(conv, tech_name); const bool linear = need_linear_output(frame->format); const bool success = gs_texrender_begin(texrender, cx, cy); if (success) { const bool previous = gs_framebuffer_srgb_enabled(); gs_enable_framebuffer_srgb(linear); gs_enable_blending(false); gs_technique_begin(tech); gs_technique_begin_pass(tech, 0); if (tex[0]) gs_effect_set_texture(gs_effect_get_param_by_name(conv, "image"), tex[0]); if (tex[1]) 
gs_effect_set_texture(gs_effect_get_param_by_name(conv, "image1"), tex[1]); if (tex[2]) gs_effect_set_texture(gs_effect_get_param_by_name(conv, "image2"), tex[2]); if (tex[3]) gs_effect_set_texture(gs_effect_get_param_by_name(conv, "image3"), tex[3]); set_eparam(conv, "width", (float)cx); set_eparam(conv, "height", (float)cy); set_eparam(conv, "width_d2", (float)cx * 0.5f); set_eparam(conv, "height_d2", (float)cy * 0.5f); set_eparam(conv, "width_x2_i", 0.5f / (float)cx); set_eparam(conv, "height_x2_i", 0.5f / (float)cy); /* BT.2408 says higher than 1000 isn't comfortable */ float hlg_peak_level = obs->video.hdr_nominal_peak_level; if (hlg_peak_level > 1000.f) hlg_peak_level = 1000.f; const float maximum_nits = (frame->trc == VIDEO_TRC_HLG) ? hlg_peak_level : 10000.f; set_eparam(conv, "maximum_over_sdr_white_nits", maximum_nits / obs_get_video_sdr_white_level()); const float hlg_exponent = 0.2f + (0.42f * log10f(hlg_peak_level / 1000.f)); set_eparam(conv, "hlg_exponent", hlg_exponent); set_eparam(conv, "hdr_lw", (float)frame->max_luminance); set_eparam(conv, "hdr_lmax", obs_get_video_hdr_nominal_peak_level()); struct vec4 vec0, vec1, vec2; vec4_set(&vec0, frame->color_matrix[0], frame->color_matrix[1], frame->color_matrix[2], frame->color_matrix[3]); vec4_set(&vec1, frame->color_matrix[4], frame->color_matrix[5], frame->color_matrix[6], frame->color_matrix[7]); vec4_set(&vec2, frame->color_matrix[8], frame->color_matrix[9], frame->color_matrix[10], frame->color_matrix[11]); gs_effect_set_vec4(gs_effect_get_param_by_name(conv, "color_vec0"), &vec0); gs_effect_set_vec4(gs_effect_get_param_by_name(conv, "color_vec1"), &vec1); gs_effect_set_vec4(gs_effect_get_param_by_name(conv, "color_vec2"), &vec2); if (!frame->full_range) { gs_eparam_t *min_param = gs_effect_get_param_by_name(conv, "color_range_min"); gs_effect_set_val(min_param, frame->color_range_min, sizeof(float) * 3); gs_eparam_t *max_param = gs_effect_get_param_by_name(conv, "color_range_max"); 
gs_effect_set_val(max_param, frame->color_range_max, sizeof(float) * 3); } gs_draw(GS_TRIS, 0, 3); gs_technique_end_pass(tech); gs_technique_end(tech); gs_enable_blending(true); gs_enable_framebuffer_srgb(previous); gs_texrender_end(texrender); } GS_DEBUG_MARKER_END(); return success; } bool update_async_texture(struct obs_source *source, const struct obs_source_frame *frame, gs_texture_t *tex, gs_texrender_t *texrender) { gs_texture_t *tex3[MAX_AV_PLANES] = {tex, NULL, NULL, NULL, NULL, NULL, NULL, NULL}; return update_async_textures(source, frame, tex3, texrender); } bool update_async_textures(struct obs_source *source, const struct obs_source_frame *frame, gs_texture_t *tex[MAX_AV_PLANES], gs_texrender_t *texrender) { enum convert_type type; source->async_flip = frame->flip; source->async_linear_alpha = (frame->flags & OBS_SOURCE_FRAME_LINEAR_ALPHA) != 0; if (source->async_gpu_conversion && texrender) return update_async_texrender(source, frame, tex, texrender); type = get_convert_type(frame->format, frame->full_range, frame->trc); if (type == CONVERT_NONE) { gs_texture_set_image(tex[0], frame->data[0], frame->linesize[0], false); return true; } return false; } static inline void obs_source_draw_texture(struct obs_source *source, gs_effect_t *effect) { gs_texture_t *tex = source->async_textures[0]; gs_eparam_t *param; if (source->async_texrender) tex = gs_texrender_get_texture(source->async_texrender); if (!tex) return; param = gs_effect_get_param_by_name(effect, "image"); const bool linear_srgb = gs_get_linear_srgb(); const bool previous = gs_framebuffer_srgb_enabled(); gs_enable_framebuffer_srgb(linear_srgb); if (linear_srgb) { gs_effect_set_texture_srgb(param, tex); } else { gs_effect_set_texture(param, tex); } gs_draw_sprite(tex, source->async_flip ? 
GS_FLIP_V : 0, 0, 0); gs_enable_framebuffer_srgb(previous); } static void recreate_async_texture(obs_source_t *source, enum gs_color_format format) { uint32_t cx = gs_texture_get_width(source->async_textures[0]); uint32_t cy = gs_texture_get_height(source->async_textures[0]); gs_texture_destroy(source->async_textures[0]); source->async_textures[0] = gs_texture_create(cx, cy, format, 1, NULL, GS_DYNAMIC); } static inline void check_to_swap_bgrx_bgra(obs_source_t *source, struct obs_source_frame *frame) { enum gs_color_format format = gs_texture_get_color_format(source->async_textures[0]); if (format == GS_BGRX && frame->format == VIDEO_FORMAT_BGRA) { recreate_async_texture(source, GS_BGRA); } else if (format == GS_BGRA && frame->format == VIDEO_FORMAT_BGRX) { recreate_async_texture(source, GS_BGRX); } } static void obs_source_update_async_video(obs_source_t *source) { if (!source->async_rendered) { source->async_rendered = true; struct obs_source_frame *frame = obs_source_get_frame(source); if (frame) { check_to_swap_bgrx_bgra(source, frame); if (!source->async_decoupled || !source->async_unbuffered) { source->timing_adjust = obs->video.video_time - frame->timestamp; source->timing_set = true; } if (source->async_update_texture) { update_async_textures(source, frame, source->async_textures, source->async_texrender); source->async_update_texture = false; } source->async_last_rendered_ts = frame->timestamp; obs_source_release_frame(source, frame); } } } static void rotate_async_video(obs_source_t *source, long rotation) { float x = 0; float y = 0; switch (rotation) { case 90: y = (float)source->async_width; break; case 270: case -90: x = (float)source->async_height; break; case 180: x = (float)source->async_width; y = (float)source->async_height; } gs_matrix_translate3f(x, y, 0); gs_matrix_rotaa4f(0.0f, 0.0f, -1.0f, RAD((float)rotation)); } static inline void obs_source_render_async_video(obs_source_t *source) { if (source->async_textures[0] && source->async_active) { 
gs_timer_t *timer = NULL; const uint64_t start = source_profiler_source_render_begin(&timer); const enum gs_color_space source_space = convert_video_space(source->async_format, source->async_trc); gs_effect_t *const effect = obs_get_base_effect(OBS_EFFECT_DEFAULT); const char *tech_name = "Draw"; float multiplier = 1.0; const enum gs_color_space current_space = gs_get_color_space(); bool linear_srgb = gs_get_linear_srgb(); bool nonlinear_alpha = false; switch (source_space) { case GS_CS_SRGB: linear_srgb = linear_srgb || (current_space != GS_CS_SRGB); nonlinear_alpha = linear_srgb && !source->async_linear_alpha; switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: case GS_CS_709_EXTENDED: if (nonlinear_alpha) tech_name = "DrawNonlinearAlpha"; break; case GS_CS_709_SCRGB: tech_name = nonlinear_alpha ? "DrawNonlinearAlphaMultiply" : "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.0f; } break; case GS_CS_SRGB_16F: if (current_space == GS_CS_709_SCRGB) { tech_name = "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.0f; } break; case GS_CS_709_EXTENDED: switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: tech_name = "DrawTonemap"; linear_srgb = true; break; case GS_CS_709_SCRGB: tech_name = "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.0f; break; case GS_CS_709_EXTENDED: break; } break; case GS_CS_709_SCRGB: switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: tech_name = "DrawMultiplyTonemap"; multiplier = 80.0f / obs_get_video_sdr_white_level(); linear_srgb = true; break; case GS_CS_709_EXTENDED: tech_name = "DrawMultiply"; multiplier = 80.0f / obs_get_video_sdr_white_level(); break; case GS_CS_709_SCRGB: break; } } const bool previous = gs_set_linear_srgb(linear_srgb); gs_technique_t *const tech = gs_effect_get_technique(effect, tech_name); gs_effect_set_float(gs_effect_get_param_by_name(effect, "multiplier"), multiplier); gs_technique_begin(tech); gs_technique_begin_pass(tech, 0); 
long rotation = source->async_rotation; if (rotation) { gs_matrix_push(); rotate_async_video(source, rotation); } if (nonlinear_alpha) { gs_blend_state_push(); gs_blend_function(GS_BLEND_ONE, GS_BLEND_INVSRCALPHA); } obs_source_draw_texture(source, effect); if (nonlinear_alpha) { gs_blend_state_pop(); } if (rotation) { gs_matrix_pop(); } gs_technique_end_pass(tech); gs_technique_end(tech); gs_set_linear_srgb(previous); source_profiler_source_render_end(source, start, timer); } } static inline void obs_source_render_filters(obs_source_t *source) { obs_source_t *first_filter; pthread_mutex_lock(&source->filter_mutex); first_filter = obs_source_get_ref(source->filters.array[0]); pthread_mutex_unlock(&source->filter_mutex); source->rendering_filter = true; obs_source_video_render(first_filter); source->rendering_filter = false; obs_source_release(first_filter); } static inline uint32_t get_async_width(const obs_source_t *source) { return ((source->async_rotation % 180) == 0) ? source->async_width : source->async_height; } static inline uint32_t get_async_height(const obs_source_t *source) { return ((source->async_rotation % 180) == 0) ? source->async_height : source->async_width; } static uint32_t get_base_width(const obs_source_t *source) { bool is_filter = !!source->filter_parent; bool func_valid = source->context.data && source->info.get_width; if (source->info.type == OBS_SOURCE_TYPE_TRANSITION) { return source->enabled ? source->transition_actual_cx : 0; } else if (func_valid && (!is_filter || source->enabled)) { return source->info.get_width(source->context.data); } else if (is_filter) { return get_base_width(source->filter_target); } return source->async_active ? get_async_width(source) : 0; } static uint32_t get_base_height(const obs_source_t *source) { bool is_filter = !!source->filter_parent; bool func_valid = source->context.data && source->info.get_height; if (source->info.type == OBS_SOURCE_TYPE_TRANSITION) { return source->enabled ? 
source->transition_actual_cy : 0; } else if (func_valid && (!is_filter || source->enabled)) { return source->info.get_height(source->context.data); } else if (is_filter) { return get_base_height(source->filter_target); } return source->async_active ? get_async_height(source) : 0; } static void source_render(obs_source_t *source, gs_effect_t *effect) { gs_timer_t *timer = NULL; const uint64_t start = source_profiler_source_render_begin(&timer); void *const data = source->context.data; const enum gs_color_space current_space = gs_get_color_space(); const enum gs_color_space source_space = obs_source_get_color_space(source, 1, &current_space); const char *convert_tech = NULL; float multiplier = 1.0; enum gs_color_format format = gs_get_format_from_space(source_space); switch (source_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: switch (current_space) { case GS_CS_709_EXTENDED: convert_tech = "Draw"; break; case GS_CS_709_SCRGB: convert_tech = "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.0f; break; case GS_CS_SRGB: break; case GS_CS_SRGB_16F: break; } break; case GS_CS_709_EXTENDED: switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: convert_tech = "DrawTonemap"; break; case GS_CS_709_SCRGB: convert_tech = "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.0f; break; case GS_CS_709_EXTENDED: break; } break; case GS_CS_709_SCRGB: switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: convert_tech = "DrawMultiplyTonemap"; multiplier = 80.0f / obs_get_video_sdr_white_level(); break; case GS_CS_709_EXTENDED: convert_tech = "DrawMultiply"; multiplier = 80.0f / obs_get_video_sdr_white_level(); break; case GS_CS_709_SCRGB: break; } } if (convert_tech) { if (source->color_space_texrender) { if (gs_texrender_get_format(source->color_space_texrender) != format) { gs_texrender_destroy(source->color_space_texrender); source->color_space_texrender = NULL; } } if (!source->color_space_texrender) { source->color_space_texrender = 
gs_texrender_create(format, GS_ZS_NONE); } gs_texrender_reset(source->color_space_texrender); const int cx = get_base_width(source); const int cy = get_base_height(source); if (gs_texrender_begin_with_color_space(source->color_space_texrender, cx, cy, source_space)) { gs_enable_blending(false); struct vec4 clear_color; vec4_zero(&clear_color); gs_clear(GS_CLEAR_COLOR, &clear_color, 0.0f, 0); gs_ortho(0.0f, (float)cx, 0.0f, (float)cy, -100.0f, 100.0f); source->info.video_render(data, effect); gs_enable_blending(true); gs_texrender_end(source->color_space_texrender); gs_effect_t *default_effect = obs->video.default_effect; gs_technique_t *tech = gs_effect_get_technique(default_effect, convert_tech); const bool previous = gs_framebuffer_srgb_enabled(); gs_enable_framebuffer_srgb(true); gs_texture_t *const tex = gs_texrender_get_texture(source->color_space_texrender); gs_effect_set_texture_srgb(gs_effect_get_param_by_name(default_effect, "image"), tex); gs_effect_set_float(gs_effect_get_param_by_name(default_effect, "multiplier"), multiplier); gs_blend_state_push(); gs_blend_function(GS_BLEND_ONE, GS_BLEND_INVSRCALPHA); const size_t passes = gs_technique_begin(tech); for (size_t i = 0; i < passes; i++) { gs_technique_begin_pass(tech, i); gs_draw_sprite(tex, 0, 0, 0); gs_technique_end_pass(tech); } gs_technique_end(tech); gs_blend_state_pop(); gs_enable_framebuffer_srgb(previous); } } else { source->info.video_render(data, effect); } source_profiler_source_render_end(source, start, timer); } void obs_source_default_render(obs_source_t *source) { if (source->context.data) { gs_effect_t *effect = obs->video.default_effect; gs_technique_t *tech = gs_effect_get_technique(effect, "Draw"); size_t passes, i; passes = gs_technique_begin(tech); for (i = 0; i < passes; i++) { gs_technique_begin_pass(tech, i); source_render(source, effect); gs_technique_end_pass(tech); } gs_technique_end(tech); } } static inline void obs_source_main_render(obs_source_t *source) { uint32_t flags = 
source->info.output_flags; bool custom_draw = (flags & OBS_SOURCE_CUSTOM_DRAW) != 0; bool srgb_aware = (flags & OBS_SOURCE_SRGB) != 0; bool default_effect = !source->filter_parent && source->filters.num == 0 && !custom_draw; bool previous_srgb = false; if (!srgb_aware) { previous_srgb = gs_get_linear_srgb(); gs_set_linear_srgb(false); } if (default_effect) { obs_source_default_render(source); } else if (source->context.data) { source_render(source, custom_draw ? NULL : gs_get_effect()); } if (!srgb_aware) gs_set_linear_srgb(previous_srgb); } static bool ready_async_frame(obs_source_t *source, uint64_t sys_time); #if GS_USE_DEBUG_MARKERS static const char *get_type_format(enum obs_source_type type) { switch (type) { case OBS_SOURCE_TYPE_INPUT: return "Input: %s"; case OBS_SOURCE_TYPE_FILTER: return "Filter: %s"; case OBS_SOURCE_TYPE_TRANSITION: return "Transition: %s"; case OBS_SOURCE_TYPE_SCENE: return "Scene: %s"; default: return "[Unknown]: %s"; } } #endif static inline void render_video(obs_source_t *source) { if (source->info.type != OBS_SOURCE_TYPE_FILTER && (source->info.output_flags & OBS_SOURCE_VIDEO) == 0) { if (source->filter_parent) obs_source_skip_video_filter(source); return; } if (source->info.type == OBS_SOURCE_TYPE_INPUT && (source->info.output_flags & OBS_SOURCE_ASYNC) != 0 && !source->rendering_filter) { if (deinterlacing_enabled(source)) deinterlace_update_async_video(source); obs_source_update_async_video(source); } if (!source->context.data || !source->enabled) { if (source->filter_parent) obs_source_skip_video_filter(source); return; } GS_DEBUG_MARKER_BEGIN_FORMAT(GS_DEBUG_COLOR_SOURCE, get_type_format(source->info.type), obs_source_get_name(source)); if (source->filters.num && !source->rendering_filter) obs_source_render_filters(source); else if (source->info.video_render) obs_source_main_render(source); else if (source->filter_target) obs_source_video_render(source->filter_target); else if (deinterlacing_enabled(source)) 
deinterlace_render(source); else obs_source_render_async_video(source); GS_DEBUG_MARKER_END(); } void obs_source_video_render(obs_source_t *source) { if (!obs_source_valid(source, "obs_source_video_render")) return; source = obs_source_get_ref(source); if (source) { render_video(source); obs_source_release(source); } } static uint32_t get_recurse_width(obs_source_t *source) { uint32_t width; pthread_mutex_lock(&source->filter_mutex); width = (source->filters.num) ? get_base_width(source->filters.array[0]) : get_base_width(source); pthread_mutex_unlock(&source->filter_mutex); return width; } static uint32_t get_recurse_height(obs_source_t *source) { uint32_t height; pthread_mutex_lock(&source->filter_mutex); height = (source->filters.num) ? get_base_height(source->filters.array[0]) : get_base_height(source); pthread_mutex_unlock(&source->filter_mutex); return height; } uint32_t obs_source_get_width(obs_source_t *source) { if (!data_valid(source, "obs_source_get_width")) return 0; return (source->info.type != OBS_SOURCE_TYPE_FILTER) ? get_recurse_width(source) : get_base_width(source); } uint32_t obs_source_get_height(obs_source_t *source) { if (!data_valid(source, "obs_source_get_height")) return 0; return (source->info.type != OBS_SOURCE_TYPE_FILTER) ? 
get_recurse_height(source) : get_base_height(source); } enum gs_color_space obs_source_get_color_space(obs_source_t *source, size_t count, const enum gs_color_space *preferred_spaces) { if (!data_valid(source, "obs_source_get_color_space")) return GS_CS_SRGB; if (source->info.type != OBS_SOURCE_TYPE_FILTER && (source->info.output_flags & OBS_SOURCE_VIDEO) == 0) { if (source->filter_parent) return obs_source_get_color_space(source->filter_parent, count, preferred_spaces); } if (!source->context.data || !source->enabled) { if (source->filter_target) return obs_source_get_color_space(source->filter_target, count, preferred_spaces); } if (source->info.output_flags & OBS_SOURCE_ASYNC) { const enum gs_color_space video_space = convert_video_space(source->async_format, source->async_trc); enum gs_color_space space = video_space; for (size_t i = 0; i < count; ++i) { space = preferred_spaces[i]; if (space == video_space) break; } return space; } assert(source->context.data); return source->info.video_get_color_space ? source->info.video_get_color_space(source->context.data, count, preferred_spaces) : GS_CS_SRGB; } uint32_t obs_source_get_base_width(obs_source_t *source) { if (!data_valid(source, "obs_source_get_base_width")) return 0; return get_base_width(source); } uint32_t obs_source_get_base_height(obs_source_t *source) { if (!data_valid(source, "obs_source_get_base_height")) return 0; return get_base_height(source); } obs_source_t *obs_filter_get_parent(const obs_source_t *filter) { return obs_ptr_valid(filter, "obs_filter_get_parent") ? filter->filter_parent : NULL; } obs_source_t *obs_filter_get_target(const obs_source_t *filter) { return obs_ptr_valid(filter, "obs_filter_get_target") ? 
filter->filter_target : NULL; } #define OBS_SOURCE_AV (OBS_SOURCE_ASYNC_VIDEO | OBS_SOURCE_AUDIO) static bool filter_compatible(obs_source_t *source, obs_source_t *filter) { uint32_t s_caps = source->info.output_flags & OBS_SOURCE_AV; uint32_t f_caps = filter->info.output_flags & OBS_SOURCE_AV; if ((f_caps & OBS_SOURCE_AUDIO) != 0 && (f_caps & OBS_SOURCE_VIDEO) == 0) f_caps &= ~OBS_SOURCE_ASYNC; return (s_caps & f_caps) == f_caps; } void obs_source_filter_add(obs_source_t *source, obs_source_t *filter) { struct calldata cd; uint8_t stack[128]; if (!obs_source_valid(source, "obs_source_filter_add")) return; if (!obs_ptr_valid(filter, "obs_source_filter_add")) return; pthread_mutex_lock(&source->filter_mutex); if (da_find(source->filters, &filter, 0) != DARRAY_INVALID) { blog(LOG_WARNING, "Tried to add a filter that was already " "present on the source"); pthread_mutex_unlock(&source->filter_mutex); return; } if (!source->owns_info_id && !filter_compatible(source, filter)) { pthread_mutex_unlock(&source->filter_mutex); return; } filter = obs_source_get_ref(filter); if (!obs_ptr_valid(filter, "obs_source_filter_add")) return; filter->filter_parent = source; filter->filter_target = !source->filters.num ? 
source : source->filters.array[0]; da_insert(source->filters, 0, &filter); pthread_mutex_unlock(&source->filter_mutex); calldata_init_fixed(&cd, stack, sizeof(stack)); calldata_set_ptr(&cd, "source", source); calldata_set_ptr(&cd, "filter", filter); signal_handler_signal(obs->signals, "source_filter_add", &cd); signal_handler_signal(source->context.signals, "filter_add", &cd); blog(LOG_DEBUG, "- filter '%s' (%s) added to source '%s'", filter->context.name, filter->info.id, source->context.name); if (filter->info.filter_add) filter->info.filter_add(filter->context.data, filter->filter_parent); } static bool obs_source_filter_remove_refless(obs_source_t *source, obs_source_t *filter) { struct calldata cd; uint8_t stack[128]; size_t idx; pthread_mutex_lock(&source->filter_mutex); idx = da_find(source->filters, &filter, 0); if (idx == DARRAY_INVALID) { pthread_mutex_unlock(&source->filter_mutex); return false; } if (idx > 0) { obs_source_t *prev = source->filters.array[idx - 1]; prev->filter_target = filter->filter_target; } da_erase(source->filters, idx); pthread_mutex_unlock(&source->filter_mutex); calldata_init_fixed(&cd, stack, sizeof(stack)); calldata_set_ptr(&cd, "source", source); calldata_set_ptr(&cd, "filter", filter); signal_handler_signal(obs->signals, "source_filter_remove", &cd); signal_handler_signal(source->context.signals, "filter_remove", &cd); blog(LOG_DEBUG, "- filter '%s' (%s) removed from source '%s'", filter->context.name, filter->info.id, source->context.name); if (filter->info.filter_remove) filter->info.filter_remove(filter->context.data, filter->filter_parent); filter->filter_parent = NULL; filter->filter_target = NULL; return true; } void obs_source_filter_remove(obs_source_t *source, obs_source_t *filter) { if (!obs_source_valid(source, "obs_source_filter_remove")) return; if (!obs_ptr_valid(filter, "obs_source_filter_remove")) return; if (obs_source_filter_remove_refless(source, filter)) obs_source_release(filter); } static size_t 
find_next_filter(obs_source_t *source, obs_source_t *filter, size_t cur_idx)
{
	bool curAsync = (filter->info.output_flags & OBS_SOURCE_ASYNC) != 0;
	bool nextAsync;
	obs_source_t *next;

	if (cur_idx == source->filters.num - 1)
		return DARRAY_INVALID;

	next = source->filters.array[cur_idx + 1];
	nextAsync = (next->info.output_flags & OBS_SOURCE_ASYNC) != 0;

	if (nextAsync == curAsync)
		return cur_idx + 1;
	else
		return find_next_filter(source, filter, cur_idx + 1);
}

static size_t find_prev_filter(obs_source_t *source, obs_source_t *filter, size_t cur_idx)
{
	bool curAsync = (filter->info.output_flags & OBS_SOURCE_ASYNC) != 0;
	bool prevAsync;
	obs_source_t *prev;

	if (cur_idx == 0)
		return DARRAY_INVALID;

	prev = source->filters.array[cur_idx - 1];
	prevAsync = (prev->info.output_flags & OBS_SOURCE_ASYNC) != 0;

	if (prevAsync == curAsync)
		return cur_idx - 1;
	else
		return find_prev_filter(source, filter, cur_idx - 1);
}

static void reorder_filter_targets(obs_source_t *source)
{
	/* rebuild every filter's render target after a reorder; not the
	 * nicest way of dealing with things, but simple and correct */
	for (size_t i = 0; i < source->filters.num; i++) {
		obs_source_t *next_filter = (i == source->filters.num - 1) ?
source : source->filters.array[i + 1]; source->filters.array[i]->filter_target = next_filter; } } /* moves filters above/below matching filter types */ static bool move_filter_dir(obs_source_t *source, obs_source_t *filter, enum obs_order_movement movement) { size_t idx; idx = da_find(source->filters, &filter, 0); if (idx == DARRAY_INVALID) return false; if (movement == OBS_ORDER_MOVE_UP) { size_t next_id = find_next_filter(source, filter, idx); if (next_id == DARRAY_INVALID) return false; da_move_item(source->filters, idx, next_id); } else if (movement == OBS_ORDER_MOVE_DOWN) { size_t prev_id = find_prev_filter(source, filter, idx); if (prev_id == DARRAY_INVALID) return false; da_move_item(source->filters, idx, prev_id); } else if (movement == OBS_ORDER_MOVE_TOP) { if (idx == source->filters.num - 1) return false; da_move_item(source->filters, idx, source->filters.num - 1); } else if (movement == OBS_ORDER_MOVE_BOTTOM) { if (idx == 0) return false; da_move_item(source->filters, idx, 0); } reorder_filter_targets(source); return true; } void obs_source_filter_set_order(obs_source_t *source, obs_source_t *filter, enum obs_order_movement movement) { bool success; if (!obs_source_valid(source, "obs_source_filter_set_order")) return; if (!obs_ptr_valid(filter, "obs_source_filter_set_order")) return; pthread_mutex_lock(&source->filter_mutex); success = move_filter_dir(source, filter, movement); pthread_mutex_unlock(&source->filter_mutex); if (success) obs_source_dosignal(source, NULL, "reorder_filters"); } int obs_source_filter_get_index(obs_source_t *source, obs_source_t *filter) { if (!obs_source_valid(source, "obs_source_filter_get_index")) return -1; if (!obs_ptr_valid(filter, "obs_source_filter_get_index")) return -1; size_t idx; pthread_mutex_lock(&source->filter_mutex); idx = da_find(source->filters, &filter, 0); pthread_mutex_unlock(&source->filter_mutex); return idx != DARRAY_INVALID ? 
(int)idx : -1; } static bool set_filter_index(obs_source_t *source, obs_source_t *filter, size_t index) { size_t idx = da_find(source->filters, &filter, 0); if (idx == DARRAY_INVALID) return false; da_move_item(source->filters, idx, index); reorder_filter_targets(source); return true; } void obs_source_filter_set_index(obs_source_t *source, obs_source_t *filter, size_t index) { bool success; if (!obs_source_valid(source, "obs_source_filter_set_index")) return; if (!obs_ptr_valid(filter, "obs_source_filter_set_index")) return; pthread_mutex_lock(&source->filter_mutex); success = set_filter_index(source, filter, index); pthread_mutex_unlock(&source->filter_mutex); if (success) obs_source_dosignal(source, NULL, "reorder_filters"); } obs_data_t *obs_source_get_settings(const obs_source_t *source) { if (!obs_source_valid(source, "obs_source_get_settings")) return NULL; obs_data_addref(source->context.settings); return source->context.settings; } struct obs_source_frame *filter_async_video(obs_source_t *source, struct obs_source_frame *in) { size_t i; pthread_mutex_lock(&source->filter_mutex); for (i = source->filters.num; i > 0; i--) { struct obs_source *filter = source->filters.array[i - 1]; if (!filter->enabled) continue; if (filter->context.data && filter->info.filter_video) { in = filter->info.filter_video(filter->context.data, in); if (!in) break; } } pthread_mutex_unlock(&source->filter_mutex); return in; } static inline void copy_frame_data_line(struct obs_source_frame *dst, const struct obs_source_frame *src, uint32_t plane, uint32_t y) { uint32_t pos_src = y * src->linesize[plane]; uint32_t pos_dst = y * dst->linesize[plane]; uint32_t bytes = dst->linesize[plane] < src->linesize[plane] ? 
dst->linesize[plane] : src->linesize[plane]; memcpy(dst->data[plane] + pos_dst, src->data[plane] + pos_src, bytes); } static inline void copy_frame_data_plane(struct obs_source_frame *dst, const struct obs_source_frame *src, uint32_t plane, uint32_t lines) { if (dst->linesize[plane] != src->linesize[plane]) { for (uint32_t y = 0; y < lines; y++) copy_frame_data_line(dst, src, plane, y); } else { memcpy(dst->data[plane], src->data[plane], (size_t)dst->linesize[plane] * (size_t)lines); } } static void copy_frame_data(struct obs_source_frame *dst, const struct obs_source_frame *src) { dst->flip = src->flip; dst->flags = src->flags; dst->trc = src->trc; dst->full_range = src->full_range; dst->max_luminance = src->max_luminance; dst->timestamp = src->timestamp; memcpy(dst->color_matrix, src->color_matrix, sizeof(float) * 16); if (!dst->full_range) { size_t const size = sizeof(float) * 3; memcpy(dst->color_range_min, src->color_range_min, size); memcpy(dst->color_range_max, src->color_range_max, size); } switch (src->format) { case VIDEO_FORMAT_I420: case VIDEO_FORMAT_I010: { const uint32_t height = dst->height; const uint32_t half_height = (height + 1) / 2; copy_frame_data_plane(dst, src, 0, height); copy_frame_data_plane(dst, src, 1, half_height); copy_frame_data_plane(dst, src, 2, half_height); break; } case VIDEO_FORMAT_NV12: case VIDEO_FORMAT_P010: { const uint32_t height = dst->height; const uint32_t half_height = (height + 1) / 2; copy_frame_data_plane(dst, src, 0, height); copy_frame_data_plane(dst, src, 1, half_height); break; } case VIDEO_FORMAT_I444: case VIDEO_FORMAT_I422: case VIDEO_FORMAT_I210: case VIDEO_FORMAT_I412: copy_frame_data_plane(dst, src, 0, dst->height); copy_frame_data_plane(dst, src, 1, dst->height); copy_frame_data_plane(dst, src, 2, dst->height); break; case VIDEO_FORMAT_YVYU: case VIDEO_FORMAT_YUY2: case VIDEO_FORMAT_UYVY: case VIDEO_FORMAT_NONE: case VIDEO_FORMAT_RGBA: case VIDEO_FORMAT_BGRA: case VIDEO_FORMAT_BGRX: case VIDEO_FORMAT_Y800: 
case VIDEO_FORMAT_BGR3: case VIDEO_FORMAT_AYUV: case VIDEO_FORMAT_V210: case VIDEO_FORMAT_R10L: copy_frame_data_plane(dst, src, 0, dst->height); break; case VIDEO_FORMAT_I40A: { const uint32_t height = dst->height; const uint32_t half_height = (height + 1) / 2; copy_frame_data_plane(dst, src, 0, height); copy_frame_data_plane(dst, src, 1, half_height); copy_frame_data_plane(dst, src, 2, half_height); copy_frame_data_plane(dst, src, 3, height); break; } case VIDEO_FORMAT_I42A: case VIDEO_FORMAT_YUVA: case VIDEO_FORMAT_YA2L: copy_frame_data_plane(dst, src, 0, dst->height); copy_frame_data_plane(dst, src, 1, dst->height); copy_frame_data_plane(dst, src, 2, dst->height); copy_frame_data_plane(dst, src, 3, dst->height); break; case VIDEO_FORMAT_P216: case VIDEO_FORMAT_P416: /* Unimplemented */ break; } } void obs_source_frame_copy(struct obs_source_frame *dst, const struct obs_source_frame *src) { copy_frame_data(dst, src); } static inline bool async_texture_changed(struct obs_source *source, const struct obs_source_frame *frame) { enum convert_type prev, cur; prev = get_convert_type(source->async_cache_format, source->async_cache_full_range, source->async_cache_trc); cur = get_convert_type(frame->format, frame->full_range, frame->trc); return source->async_cache_width != frame->width || source->async_cache_height != frame->height || prev != cur; } static inline void free_async_cache(struct obs_source *source) { for (size_t i = 0; i < source->async_cache.num; i++) obs_source_frame_decref(source->async_cache.array[i].frame); da_resize(source->async_cache, 0); da_resize(source->async_frames, 0); source->cur_async_frame = NULL; source->prev_async_frame = NULL; } #define MAX_UNUSED_FRAME_DURATION 5 /* frees frame allocations if they haven't been used for a specific period * of time */ static void clean_cache(obs_source_t *source) { for (size_t i = source->async_cache.num; i > 0; i--) { struct async_frame *af = &source->async_cache.array[i - 1]; if (!af->used) { if 
(++af->unused_count == MAX_UNUSED_FRAME_DURATION) {
				obs_source_frame_destroy(af->frame);
				da_erase(source->async_cache, i - 1);
			}
		}
	}
}

#define MAX_ASYNC_FRAMES 30

/* If the returned frame is non-NULL, the caller owns a reference: it must
 * eventually call os_atomic_dec_long(&output->refs) and, when the count
 * reaches zero, obs_source_frame_destroy(output). */
static inline struct obs_source_frame *cache_video(struct obs_source *source,
						   const struct obs_source_frame *frame)
{
	struct obs_source_frame *new_frame = NULL;

	pthread_mutex_lock(&source->async_mutex);

	if (source->async_frames.num >= MAX_ASYNC_FRAMES) {
		free_async_cache(source);
		source->last_frame_ts = 0;
		pthread_mutex_unlock(&source->async_mutex);
		return NULL;
	}

	if (async_texture_changed(source, frame)) {
		free_async_cache(source);
		source->async_cache_width = frame->width;
		source->async_cache_height = frame->height;
	}

	const enum video_format format = frame->format;
	source->async_cache_format = format;
	source->async_cache_full_range = frame->full_range;
	source->async_cache_trc = frame->trc;

	for (size_t i = 0; i < source->async_cache.num; i++) {
		struct async_frame *af = &source->async_cache.array[i];
		if (!af->used) {
			new_frame = af->frame;
			new_frame->format = format;
			af->used = true;
			af->unused_count = 0;
			break;
		}
	}

	clean_cache(source);

	if (!new_frame) {
		struct async_frame new_af;

		new_frame = obs_source_frame_create(format, frame->width,
						    frame->height);
		new_af.frame = new_frame;
		new_af.used = true;
		new_af.unused_count = 0;
		new_frame->refs = 1;

		da_push_back(source->async_cache, &new_af);
	}

	os_atomic_inc_long(&new_frame->refs);

	pthread_mutex_unlock(&source->async_mutex);

	copy_frame_data(new_frame, frame);

	return new_frame;
}

static void obs_source_output_video_internal(obs_source_t *source,
					     const struct obs_source_frame *frame)
{
	if (!obs_source_valid(source, "obs_source_output_video"))
		return;

	if (!frame) {
		pthread_mutex_lock(&source->async_mutex);
		source->async_active = false;
		source->last_frame_ts = 0;
		free_async_cache(source);
		pthread_mutex_unlock(&source->async_mutex);
		return;
	}
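The ownership contract around cache_video() — the cache holds one reference, each consumer takes another via os_atomic_inc_long(), and the frame is destroyed only when the last reference is dropped — can be sketched in miniature. Everything below (`demo_frame` and the helper names) is a hypothetical illustration, not libobs API; libobs uses its own os_atomic_* wrappers rather than C11 atomics directly.

```c
#include <stdatomic.h>

/* Minimal stand-in for the obs_source_frame refcount protocol. */
struct demo_frame {
	atomic_long refs;
	int destroyed; /* real code frees the allocation instead */
};

static void demo_frame_init(struct demo_frame *f)
{
	atomic_init(&f->refs, 1); /* the cache's own reference */
	f->destroyed = 0;
}

static void demo_frame_addref(struct demo_frame *f)
{
	atomic_fetch_add(&f->refs, 1);
}

/* returns 1 if this release destroyed the frame */
static int demo_frame_release(struct demo_frame *f)
{
	/* fetch_sub returns the previous value, so 1 means "now zero" */
	if (atomic_fetch_sub(&f->refs, 1) == 1) {
		f->destroyed = 1;
		return 1;
	}
	return 0;
}
```

A consumer pattern mirroring obs_source_output_video_internal(): take a reference when handing the frame onward, and treat a release that returns 1 as the point where the frame must no longer be touched.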
source_profiler_async_frame_received(source); struct obs_source_frame *output = cache_video(source, frame); /* ------------------------------------------- */ pthread_mutex_lock(&source->async_mutex); if (output) { if (os_atomic_dec_long(&output->refs) == 0) { obs_source_frame_destroy(output); output = NULL; } else { da_push_back(source->async_frames, &output); source->async_active = true; } } pthread_mutex_unlock(&source->async_mutex); } void obs_source_output_video(obs_source_t *source, const struct obs_source_frame *frame) { if (destroying(source)) return; if (!frame) { obs_source_output_video_internal(source, NULL); return; } struct obs_source_frame new_frame = *frame; new_frame.full_range = format_is_yuv(frame->format) ? new_frame.full_range : true; obs_source_output_video_internal(source, &new_frame); } void obs_source_output_video2(obs_source_t *source, const struct obs_source_frame2 *frame) { if (destroying(source)) return; if (!frame) { obs_source_output_video_internal(source, NULL); return; } struct obs_source_frame new_frame = {0}; enum video_range_type range = resolve_video_range(frame->format, frame->range); for (size_t i = 0; i < MAX_AV_PLANES; i++) { new_frame.data[i] = frame->data[i]; new_frame.linesize[i] = frame->linesize[i]; } new_frame.width = frame->width; new_frame.height = frame->height; new_frame.timestamp = frame->timestamp; new_frame.format = frame->format; new_frame.full_range = range == VIDEO_RANGE_FULL; new_frame.max_luminance = 0; new_frame.flip = frame->flip; new_frame.flags = frame->flags; new_frame.trc = frame->trc; memcpy(&new_frame.color_matrix, &frame->color_matrix, sizeof(frame->color_matrix)); memcpy(&new_frame.color_range_min, &frame->color_range_min, sizeof(frame->color_range_min)); memcpy(&new_frame.color_range_max, &frame->color_range_max, sizeof(frame->color_range_max)); obs_source_output_video_internal(source, &new_frame); } void obs_source_set_async_rotation(obs_source_t *source, long rotation) { if (source) 
source->async_rotation = rotation; } void obs_source_output_cea708(obs_source_t *source, const struct obs_source_cea_708 *captions) { if (destroying(source)) return; if (!captions) { return; } pthread_mutex_lock(&source->caption_cb_mutex); for (size_t i = source->caption_cb_list.num; i > 0; i--) { struct caption_cb_info info = source->caption_cb_list.array[i - 1]; info.callback(info.param, source, captions); } pthread_mutex_unlock(&source->caption_cb_mutex); } void obs_source_add_caption_callback(obs_source_t *source, obs_source_caption_t callback, void *param) { struct caption_cb_info info = {callback, param}; if (!obs_source_valid(source, "obs_source_add_caption_callback")) return; pthread_mutex_lock(&source->caption_cb_mutex); da_push_back(source->caption_cb_list, &info); pthread_mutex_unlock(&source->caption_cb_mutex); } void obs_source_remove_caption_callback(obs_source_t *source, obs_source_caption_t callback, void *param) { struct caption_cb_info info = {callback, param}; if (!obs_source_valid(source, "obs_source_remove_caption_callback")) return; pthread_mutex_lock(&source->caption_cb_mutex); da_erase_item(source->caption_cb_list, &info); pthread_mutex_unlock(&source->caption_cb_mutex); } static inline bool preload_frame_changed(obs_source_t *source, const struct obs_source_frame *in) { if (!source->async_preload_frame) return true; return in->width != source->async_preload_frame->width || in->height != source->async_preload_frame->height || in->format != source->async_preload_frame->format; } static void obs_source_preload_video_internal(obs_source_t *source, const struct obs_source_frame *frame) { if (!obs_source_valid(source, "obs_source_preload_video")) return; if (destroying(source)) return; if (!frame) return; if (preload_frame_changed(source, frame)) { obs_source_frame_destroy(source->async_preload_frame); source->async_preload_frame = obs_source_frame_create(frame->format, frame->width, frame->height); } copy_frame_data(source->async_preload_frame, 
frame); source->last_frame_ts = frame->timestamp; } void obs_source_preload_video(obs_source_t *source, const struct obs_source_frame *frame) { if (destroying(source)) return; if (!frame) { obs_source_preload_video_internal(source, NULL); return; } struct obs_source_frame new_frame = *frame; new_frame.full_range = format_is_yuv(frame->format) ? new_frame.full_range : true; obs_source_preload_video_internal(source, &new_frame); } void obs_source_preload_video2(obs_source_t *source, const struct obs_source_frame2 *frame) { if (destroying(source)) return; if (!frame) { obs_source_preload_video_internal(source, NULL); return; } struct obs_source_frame new_frame = {0}; enum video_range_type range = resolve_video_range(frame->format, frame->range); for (size_t i = 0; i < MAX_AV_PLANES; i++) { new_frame.data[i] = frame->data[i]; new_frame.linesize[i] = frame->linesize[i]; } new_frame.width = frame->width; new_frame.height = frame->height; new_frame.timestamp = frame->timestamp; new_frame.format = frame->format; new_frame.full_range = range == VIDEO_RANGE_FULL; new_frame.max_luminance = 0; new_frame.flip = frame->flip; new_frame.flags = frame->flags; new_frame.trc = frame->trc; memcpy(&new_frame.color_matrix, &frame->color_matrix, sizeof(frame->color_matrix)); memcpy(&new_frame.color_range_min, &frame->color_range_min, sizeof(frame->color_range_min)); memcpy(&new_frame.color_range_max, &frame->color_range_max, sizeof(frame->color_range_max)); obs_source_preload_video_internal(source, &new_frame); } void obs_source_show_preloaded_video(obs_source_t *source) { uint64_t sys_ts; if (!obs_source_valid(source, "obs_source_show_preloaded_video")) return; if (destroying(source)) return; if (!source->async_preload_frame) return; obs_enter_graphics(); set_async_texture_size(source, source->async_preload_frame); update_async_textures(source, source->async_preload_frame, source->async_textures, source->async_texrender); source->async_active = true; obs_leave_graphics(); 
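The plane copy used by copy_frame_data() above falls back to a row-by-row copy whenever the source and destination strides differ, moving only the smaller of the two linesizes per row; when the strides match, one memcpy of stride × lines suffices. A standalone sketch of that logic (the function name is ours, not libobs API):

```c
#include <stdint.h>
#include <string.h>

/* Stride-aware plane copy: handles dst/src linesize mismatch safely. */
static void copy_plane(uint8_t *dst, uint32_t dst_stride,
		       const uint8_t *src, uint32_t src_stride,
		       uint32_t lines)
{
	if (dst_stride == src_stride) {
		/* identical layout: copy the whole plane at once */
		memcpy(dst, src, (size_t)dst_stride * lines);
		return;
	}

	/* differing strides: copy min(dst_stride, src_stride) per row so
	 * neither buffer is over-read or over-written */
	uint32_t bytes = dst_stride < src_stride ? dst_stride : src_stride;
	for (uint32_t y = 0; y < lines; y++)
		memcpy(dst + (size_t)y * dst_stride,
		       src + (size_t)y * src_stride, bytes);
}
```

Padding bytes between rows of the wider buffer are left untouched, which matches the row-by-row behavior of copy_frame_data_line().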
pthread_mutex_lock(&source->audio_buf_mutex); sys_ts = (source->monitoring_type != OBS_MONITORING_TYPE_MONITOR_ONLY) ? os_gettime_ns() : 0; reset_audio_timing(source, source->last_frame_ts, sys_ts); reset_audio_data(source, sys_ts); pthread_mutex_unlock(&source->audio_buf_mutex); } static void obs_source_set_video_frame_internal(obs_source_t *source, const struct obs_source_frame *frame) { if (!obs_source_valid(source, "obs_source_set_video_frame")) return; if (!frame) return; obs_enter_graphics(); if (preload_frame_changed(source, frame)) { obs_source_frame_destroy(source->async_preload_frame); source->async_preload_frame = obs_source_frame_create(frame->format, frame->width, frame->height); } copy_frame_data(source->async_preload_frame, frame); set_async_texture_size(source, source->async_preload_frame); update_async_textures(source, source->async_preload_frame, source->async_textures, source->async_texrender); source->last_frame_ts = frame->timestamp; obs_leave_graphics(); } void obs_source_set_video_frame(obs_source_t *source, const struct obs_source_frame *frame) { if (destroying(source)) return; if (!frame) { obs_source_preload_video_internal(source, NULL); return; } struct obs_source_frame new_frame = *frame; new_frame.full_range = format_is_yuv(frame->format) ? 
new_frame.full_range : true; obs_source_set_video_frame_internal(source, &new_frame); } void obs_source_set_video_frame2(obs_source_t *source, const struct obs_source_frame2 *frame) { if (destroying(source)) return; if (!frame) { obs_source_preload_video_internal(source, NULL); return; } struct obs_source_frame new_frame = {0}; enum video_range_type range = resolve_video_range(frame->format, frame->range); for (size_t i = 0; i < MAX_AV_PLANES; i++) { new_frame.data[i] = frame->data[i]; new_frame.linesize[i] = frame->linesize[i]; } new_frame.width = frame->width; new_frame.height = frame->height; new_frame.timestamp = frame->timestamp; new_frame.format = frame->format; new_frame.full_range = range == VIDEO_RANGE_FULL; new_frame.max_luminance = 0; new_frame.flip = frame->flip; new_frame.flags = frame->flags; new_frame.trc = frame->trc; memcpy(&new_frame.color_matrix, &frame->color_matrix, sizeof(frame->color_matrix)); memcpy(&new_frame.color_range_min, &frame->color_range_min, sizeof(frame->color_range_min)); memcpy(&new_frame.color_range_max, &frame->color_range_max, sizeof(frame->color_range_max)); obs_source_set_video_frame_internal(source, &new_frame); } static inline struct obs_audio_data *filter_async_audio(obs_source_t *source, struct obs_audio_data *in) { size_t i; for (i = source->filters.num; i > 0; i--) { struct obs_source *filter = source->filters.array[i - 1]; if (!filter->enabled) continue; if (filter->context.data && filter->info.filter_audio) { in = filter->info.filter_audio(filter->context.data, in); if (!in) return NULL; } } return in; } static inline void reset_resampler(obs_source_t *source, const struct obs_source_audio *audio) { const struct audio_output_info *obs_info; struct resample_info output_info; obs_info = audio_output_get_info(obs->audio.audio); output_info.format = obs_info->format; output_info.samples_per_sec = obs_info->samples_per_sec; output_info.speakers = obs_info->speakers; source->sample_info.format = audio->format; 
source->sample_info.samples_per_sec = audio->samples_per_sec; source->sample_info.speakers = audio->speakers; audio_resampler_destroy(source->resampler); source->resampler = NULL; source->resample_offset = 0; if (source->sample_info.samples_per_sec == obs_info->samples_per_sec && source->sample_info.format == obs_info->format && source->sample_info.speakers == obs_info->speakers) { source->audio_failed = false; return; } source->resampler = audio_resampler_create(&output_info, &source->sample_info); source->audio_failed = source->resampler == NULL; if (source->resampler == NULL) blog(LOG_ERROR, "creation of resampler failed"); } static void copy_audio_data(obs_source_t *source, const uint8_t *const data[], uint32_t frames, uint64_t ts) { size_t planes = audio_output_get_planes(obs->audio.audio); size_t blocksize = audio_output_get_block_size(obs->audio.audio); size_t size = (size_t)frames * blocksize; bool resize = source->audio_storage_size < size; source->audio_data.frames = frames; source->audio_data.timestamp = ts; for (size_t i = 0; i < planes; i++) { /* ensure audio storage capacity */ if (resize) { bfree(source->audio_data.data[i]); source->audio_data.data[i] = bmalloc(size); } memcpy(source->audio_data.data[i], data[i], size); } if (resize) source->audio_storage_size = size; } /* TODO: SSE optimization */ static void downmix_to_mono_planar(struct obs_source *source, uint32_t frames) { size_t channels = audio_output_get_channels(obs->audio.audio); const float channels_i = 1.0f / (float)channels; float **data = (float **)source->audio_data.data; for (size_t channel = 1; channel < channels; channel++) { for (uint32_t frame = 0; frame < frames; frame++) data[0][frame] += data[channel][frame]; } for (uint32_t frame = 0; frame < frames; frame++) data[0][frame] *= channels_i; for (size_t channel = 1; channel < channels; channel++) { for (uint32_t frame = 0; frame < frames; frame++) data[channel][frame] = data[0][frame]; } } static void 
process_audio_balancing(struct obs_source *source, uint32_t frames, float balance, enum obs_balance_type type) { float **data = (float **)source->audio_data.data; switch (type) { case OBS_BALANCE_TYPE_SINE_LAW: for (uint32_t frame = 0; frame < frames; frame++) { data[0][frame] = data[0][frame] * sinf((1.0f - balance) * (M_PI / 2.0f)); data[1][frame] = data[1][frame] * sinf(balance * (M_PI / 2.0f)); } break; case OBS_BALANCE_TYPE_SQUARE_LAW: for (uint32_t frame = 0; frame < frames; frame++) { data[0][frame] = data[0][frame] * sqrtf(1.0f - balance); data[1][frame] = data[1][frame] * sqrtf(balance); } break; case OBS_BALANCE_TYPE_LINEAR: for (uint32_t frame = 0; frame < frames; frame++) { data[0][frame] = data[0][frame] * (1.0f - balance); data[1][frame] = data[1][frame] * balance; } break; default: break; } } /* resamples/remixes new audio to the designated main audio output format */ static void process_audio(obs_source_t *source, const struct obs_source_audio *audio) { uint32_t frames = audio->frames; bool mono_output; if (source->sample_info.samples_per_sec != audio->samples_per_sec || source->sample_info.format != audio->format || source->sample_info.speakers != audio->speakers) reset_resampler(source, audio); if (source->audio_failed) return; if (source->resampler) { uint8_t *output[MAX_AV_PLANES]; memset(output, 0, sizeof(output)); audio_resampler_resample(source->resampler, output, &frames, &source->resample_offset, audio->data, audio->frames); copy_audio_data(source, (const uint8_t *const *)output, frames, audio->timestamp); } else { copy_audio_data(source, audio->data, audio->frames, audio->timestamp); } mono_output = audio_output_get_channels(obs->audio.audio) == 1; if (!mono_output && source->sample_info.speakers == SPEAKERS_STEREO && (source->balance > 0.51f || source->balance < 0.49f)) { process_audio_balancing(source, frames, source->balance, OBS_BALANCE_TYPE_SINE_LAW); } if (!mono_output && (source->flags & OBS_SOURCE_FLAG_FORCE_MONO) != 0) 
downmix_to_mono_planar(source, frames); } void obs_source_output_audio(obs_source_t *source, const struct obs_source_audio *audio_in) { struct obs_audio_data *output; if (!obs_source_valid(source, "obs_source_output_audio")) return; if (destroying(source)) return; if (!obs_ptr_valid(audio_in, "obs_source_output_audio")) return; /* sets unused data pointers to NULL automatically because apparently * some filter plugins aren't checking the actual channel count, and * instead are checking to see whether the pointer is non-zero. */ struct obs_source_audio audio = *audio_in; size_t channels = get_audio_planes(audio.format, audio.speakers); for (size_t i = channels; i < MAX_AUDIO_CHANNELS; i++) audio.data[i] = NULL; process_audio(source, &audio); pthread_mutex_lock(&source->filter_mutex); output = filter_async_audio(source, &source->audio_data); if (output) { struct audio_data data; for (int i = 0; i < MAX_AV_PLANES; i++) data.data[i] = output->data[i]; data.frames = output->frames; data.timestamp = output->timestamp; pthread_mutex_lock(&source->audio_mutex); source_output_audio_data(source, &data); pthread_mutex_unlock(&source->audio_mutex); } pthread_mutex_unlock(&source->filter_mutex); } void remove_async_frame(obs_source_t *source, struct obs_source_frame *frame) { if (frame) frame->prev_frame = false; for (size_t i = 0; i < source->async_cache.num; i++) { struct async_frame *f = &source->async_cache.array[i]; if (f->frame == frame) { f->used = false; break; } } } /* #define DEBUG_ASYNC_FRAMES 1 */ static bool ready_async_frame(obs_source_t *source, uint64_t sys_time) { struct obs_source_frame *next_frame = source->async_frames.array[0]; struct obs_source_frame *frame = NULL; uint64_t sys_offset = sys_time - source->last_sys_timestamp; uint64_t frame_time = next_frame->timestamp; uint64_t frame_offset = 0; if (source->async_unbuffered) { while (source->async_frames.num > 1) { da_erase(source->async_frames, 0); remove_async_frame(source, next_frame); next_frame = 
source->async_frames.array[0]; } source->last_frame_ts = next_frame->timestamp; return true; } #if DEBUG_ASYNC_FRAMES blog(LOG_DEBUG, "source->last_frame_ts: %llu, frame_time: %llu, " "sys_offset: %llu, frame_offset: %llu, " "number of frames: %lu", source->last_frame_ts, frame_time, sys_offset, frame_time - source->last_frame_ts, (unsigned long)source->async_frames.num); #endif /* account for timestamp invalidation */ if (frame_out_of_bounds(source, frame_time)) { #if DEBUG_ASYNC_FRAMES blog(LOG_DEBUG, "timing jump"); #endif source->last_frame_ts = next_frame->timestamp; return true; } else { frame_offset = frame_time - source->last_frame_ts; source->last_frame_ts += sys_offset; } while (source->last_frame_ts > next_frame->timestamp) { /* this tries to reduce the needless frame duplication, also * helps smooth out async rendering to frame boundaries. In * other words, tries to keep the framerate as smooth as * possible */ if (frame && (source->last_frame_ts - next_frame->timestamp) < 2000000) break; if (frame) da_erase(source->async_frames, 0); #if DEBUG_ASYNC_FRAMES blog(LOG_DEBUG, "new frame, " "source->last_frame_ts: %llu, " "next_frame->timestamp: %llu", source->last_frame_ts, next_frame->timestamp); #endif remove_async_frame(source, frame); if (source->async_frames.num == 1) return true; frame = next_frame; next_frame = source->async_frames.array[1]; /* more timestamp checking and compensating */ if ((next_frame->timestamp - frame_time) > MAX_TS_VAR) { #if DEBUG_ASYNC_FRAMES blog(LOG_DEBUG, "timing jump"); #endif source->last_frame_ts = next_frame->timestamp - frame_offset; } frame_time = next_frame->timestamp; frame_offset = frame_time - source->last_frame_ts; } #if DEBUG_ASYNC_FRAMES if (!frame) blog(LOG_DEBUG, "no frame!"); #endif return frame != NULL; } static inline struct obs_source_frame *get_closest_frame(obs_source_t *source, uint64_t sys_time) { if (!source->async_frames.num) return NULL; if (!source->last_frame_ts || ready_async_frame(source, 
sys_time)) { struct obs_source_frame *frame = source->async_frames.array[0]; da_erase(source->async_frames, 0); if (!source->last_frame_ts) source->last_frame_ts = frame->timestamp; return frame; } return NULL; } /* * Ensures that cached frames are displayed on time. If multiple frames * were cached between renders, then releases the unnecessary frames and uses * the frame with the closest timing to ensure sync. Also ensures that timing * with audio is synchronized. */ struct obs_source_frame *obs_source_get_frame(obs_source_t *source) { struct obs_source_frame *frame = NULL; if (!obs_source_valid(source, "obs_source_get_frame")) return NULL; pthread_mutex_lock(&source->async_mutex); frame = source->cur_async_frame; source->cur_async_frame = NULL; if (frame) { os_atomic_inc_long(&frame->refs); } pthread_mutex_unlock(&source->async_mutex); return frame; } void obs_source_release_frame(obs_source_t *source, struct obs_source_frame *frame) { if (!frame) return; if (!source) { obs_source_frame_destroy(frame); } else { pthread_mutex_lock(&source->async_mutex); if (os_atomic_dec_long(&frame->refs) == 0) obs_source_frame_destroy(frame); else remove_async_frame(source, frame); pthread_mutex_unlock(&source->async_mutex); } } const char *obs_source_get_name(const obs_source_t *source) { return obs_source_valid(source, "obs_source_get_name") ? source->context.name : NULL; } const char *obs_source_get_uuid(const obs_source_t *source) { return obs_source_valid(source, "obs_source_get_uuid") ? 
		source->context.uuid : NULL;
}

void obs_source_set_name(obs_source_t *source, const char *name)
{
	if (!obs_source_valid(source, "obs_source_set_name"))
		return;

	if (!name || !*name || !source->context.name || strcmp(name, source->context.name) != 0) {
		if (requires_canvas(source)) {
			obs_canvas_rename_source(source, name);
		} else {
			struct calldata data;
			char *prev_name = bstrdup(source->context.name);

			if (!source->context.private) {
				obs_context_data_setname_ht(&source->context, name, &obs->data.public_sources);
			} else {
				obs_context_data_setname(&source->context, name);
			}

			calldata_init(&data);
			calldata_set_ptr(&data, "source", source);
			calldata_set_string(&data, "new_name", source->context.name);
			calldata_set_string(&data, "prev_name", prev_name);
			if (!source->context.private)
				signal_handler_signal(obs->signals, "source_rename", &data);
			signal_handler_signal(source->context.signals, "rename", &data);
			calldata_free(&data);
			bfree(prev_name);
		}
	}
}

enum obs_source_type obs_source_get_type(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_type") ? source->info.type : OBS_SOURCE_TYPE_INPUT;
}

const char *obs_source_get_id(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_id") ? source->info.id : NULL;
}

const char *obs_source_get_unversioned_id(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_unversioned_id") ?
		source->info.unversioned_id : NULL;
}

static inline void render_filter_bypass(obs_source_t *target, gs_effect_t *effect, const char *tech_name)
{
	gs_technique_t *tech = gs_effect_get_technique(effect, tech_name);
	size_t passes, i;

	passes = gs_technique_begin(tech);
	for (i = 0; i < passes; i++) {
		gs_technique_begin_pass(tech, i);
		obs_source_video_render(target);
		gs_technique_end_pass(tech);
	}
	gs_technique_end(tech);
}

static inline void render_filter_tex(gs_texture_t *tex, gs_effect_t *effect, uint32_t width, uint32_t height,
				     const char *tech_name)
{
	gs_technique_t *tech = gs_effect_get_technique(effect, tech_name);
	gs_eparam_t *image = gs_effect_get_param_by_name(effect, "image");
	size_t passes, i;

	const bool linear_srgb = gs_get_linear_srgb();

	const bool previous = gs_framebuffer_srgb_enabled();
	gs_enable_framebuffer_srgb(linear_srgb);

	if (linear_srgb)
		gs_effect_set_texture_srgb(image, tex);
	else
		gs_effect_set_texture(image, tex);

	passes = gs_technique_begin(tech);
	for (i = 0; i < passes; i++) {
		gs_technique_begin_pass(tech, i);
		gs_draw_sprite(tex, 0, width, height);
		gs_technique_end_pass(tech);
	}
	gs_technique_end(tech);

	gs_enable_framebuffer_srgb(previous);
}

static inline bool can_bypass(obs_source_t *target, obs_source_t *parent, uint32_t filter_flags,
			      uint32_t parent_flags, enum obs_allow_direct_render allow_direct,
			      enum gs_color_space space)
{
	return (target == parent) && (allow_direct == OBS_ALLOW_DIRECT_RENDERING) &&
	       ((parent_flags & OBS_SOURCE_CUSTOM_DRAW) == 0) && ((parent_flags & OBS_SOURCE_ASYNC) == 0) &&
	       ((filter_flags & OBS_SOURCE_SRGB) == (parent_flags & OBS_SOURCE_SRGB) &&
		space == gs_get_color_space());
}

bool obs_source_process_filter_begin(obs_source_t *filter, enum gs_color_format format,
				     enum obs_allow_direct_render allow_direct)
{
	return obs_source_process_filter_begin_with_color_space(filter, format, GS_CS_SRGB, allow_direct);
}

bool obs_source_process_filter_begin_with_color_space(obs_source_t *filter, enum gs_color_format format, enum
						      gs_color_space space, enum obs_allow_direct_render allow_direct)
{
	obs_source_t *target, *parent;
	uint32_t filter_flags, parent_flags;
	int cx, cy;

	if (!obs_ptr_valid(filter, "obs_source_process_filter_begin_with_color_space"))
		return false;

	filter->filter_bypass_active = false;

	target = obs_filter_get_target(filter);
	parent = obs_filter_get_parent(filter);

	if (!target) {
		blog(LOG_INFO, "filter '%s' being processed with no target!", filter->context.name);
		return false;
	}
	if (!parent) {
		blog(LOG_INFO, "filter '%s' being processed with no parent!", filter->context.name);
		return false;
	}

	filter_flags = filter->info.output_flags;
	parent_flags = parent->info.output_flags;
	cx = get_base_width(target);
	cy = get_base_height(target);

	filter->allow_direct = allow_direct;

	/* if the parent does not use any custom effects, and this is the last
	 * filter in the chain for the parent, then render the parent directly
	 * using the filter effect instead of rendering to texture to reduce
	 * the total number of passes */
	if (can_bypass(target, parent, filter_flags, parent_flags, allow_direct, space)) {
		filter->filter_bypass_active = true;
		return true;
	}

	if (!cx || !cy) {
		obs_source_skip_video_filter(filter);
		return false;
	}

	if (filter->filter_texrender && (gs_texrender_get_format(filter->filter_texrender) != format)) {
		gs_texrender_destroy(filter->filter_texrender);
		filter->filter_texrender = NULL;
	}

	if (!filter->filter_texrender) {
		filter->filter_texrender = gs_texrender_create(format, GS_ZS_NONE);
	}

	if (gs_texrender_begin_with_color_space(filter->filter_texrender, cx, cy, space)) {
		gs_blend_state_push();
		gs_blend_function_separate(GS_BLEND_SRCALPHA, GS_BLEND_INVSRCALPHA, GS_BLEND_ONE,
					   GS_BLEND_INVSRCALPHA);

		bool custom_draw = (parent_flags & OBS_SOURCE_CUSTOM_DRAW) != 0;
		bool async = (parent_flags & OBS_SOURCE_ASYNC) != 0;
		struct vec4 clear_color;

		vec4_zero(&clear_color);
		gs_clear(GS_CLEAR_COLOR, &clear_color, 0.0f, 0);
		gs_ortho(0.0f, (float)cx, 0.0f, (float)cy, -100.0f, 100.0f);

		if
		    (target == parent && !custom_draw && !async)
			obs_source_default_render(target);
		else
			obs_source_video_render(target);

		gs_blend_state_pop();
		gs_texrender_end(filter->filter_texrender);
	}

	return true;
}

void obs_source_process_filter_tech_end(obs_source_t *filter, gs_effect_t *effect, uint32_t width, uint32_t height,
					const char *tech_name)
{
	obs_source_t *target, *parent;
	gs_texture_t *texture;
	uint32_t filter_flags;

	if (!filter)
		return;

	const bool filter_bypass_active = filter->filter_bypass_active;
	filter->filter_bypass_active = false;

	target = obs_filter_get_target(filter);
	parent = obs_filter_get_parent(filter);

	if (!target || !parent)
		return;

	filter_flags = filter->info.output_flags;

	const bool previous = gs_set_linear_srgb((filter_flags & OBS_SOURCE_SRGB) != 0);

	const char *tech = tech_name ? tech_name : "Draw";

	if (filter_bypass_active) {
		render_filter_bypass(target, effect, tech);
	} else {
		texture = gs_texrender_get_texture(filter->filter_texrender);
		if (texture) {
			render_filter_tex(texture, effect, width, height, tech);
		}
	}

	gs_set_linear_srgb(previous);
}

void obs_source_process_filter_end(obs_source_t *filter, gs_effect_t *effect, uint32_t width, uint32_t height)
{
	if (!obs_ptr_valid(filter, "obs_source_process_filter_end"))
		return;

	obs_source_process_filter_tech_end(filter, effect, width, height, "Draw");
}

void obs_source_skip_video_filter(obs_source_t *filter)
{
	obs_source_t *target, *parent;
	bool custom_draw, async;
	uint32_t parent_flags;

	if (!obs_ptr_valid(filter, "obs_source_skip_video_filter"))
		return;

	target = obs_filter_get_target(filter);
	parent = obs_filter_get_parent(filter);
	parent_flags = parent->info.output_flags;
	custom_draw = (parent_flags & OBS_SOURCE_CUSTOM_DRAW) != 0;
	async = (parent_flags & OBS_SOURCE_ASYNC) != 0;

	if (target == parent) {
		if (!custom_draw && !async)
			obs_source_default_render(target);
		else if (target->info.video_render)
			obs_source_main_render(target);
		else if (deinterlacing_enabled(target))
			deinterlace_render(target);
		else
			obs_source_render_async_video(target);
	} else {
		obs_source_video_render(target);
	}
}

signal_handler_t *obs_source_get_signal_handler(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_signal_handler") ? source->context.signals : NULL;
}

proc_handler_t *obs_source_get_proc_handler(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_proc_handler") ? source->context.procs : NULL;
}

void obs_source_set_volume(obs_source_t *source, float volume)
{
	if (obs_source_valid(source, "obs_source_set_volume")) {
		struct audio_action action = {.timestamp = os_gettime_ns(), .type = AUDIO_ACTION_VOL, .vol = volume};

		struct calldata data;
		uint8_t stack[128];

		calldata_init_fixed(&data, stack, sizeof(stack));
		calldata_set_ptr(&data, "source", source);
		calldata_set_float(&data, "volume", volume);

		signal_handler_signal(source->context.signals, "volume", &data);
		if (!source->context.private)
			signal_handler_signal(obs->signals, "source_volume", &data);

		volume = (float)calldata_float(&data, "volume");

		pthread_mutex_lock(&source->audio_actions_mutex);
		da_push_back(source->audio_actions, &action);
		pthread_mutex_unlock(&source->audio_actions_mutex);

		source->user_volume = volume;
	}
}

float obs_source_get_volume(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_volume") ? source->user_volume : 0.0f;
}

void obs_source_set_sync_offset(obs_source_t *source, int64_t offset)
{
	if (obs_source_valid(source, "obs_source_set_sync_offset")) {
		struct calldata data;
		uint8_t stack[128];

		calldata_init_fixed(&data, stack, sizeof(stack));
		calldata_set_ptr(&data, "source", source);
		calldata_set_int(&data, "offset", offset);

		signal_handler_signal(source->context.signals, "audio_sync", &data);

		source->sync_offset = calldata_int(&data, "offset");
	}
}

int64_t obs_source_get_sync_offset(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_sync_offset") ?
		source->sync_offset : 0;
}

struct source_enum_data {
	obs_source_enum_proc_t enum_callback;
	void *param;
};

static void enum_source_active_tree_callback(obs_source_t *parent, obs_source_t *child, void *param)
{
	struct source_enum_data *data = param;
	bool is_transition = child->info.type == OBS_SOURCE_TYPE_TRANSITION;

	if (is_transition)
		obs_transition_enum_sources(child, enum_source_active_tree_callback, param);
	if (child->info.enum_active_sources) {
		if (child->context.data) {
			child->info.enum_active_sources(child->context.data, enum_source_active_tree_callback, data);
		}
	}

	data->enum_callback(parent, child, data->param);
}

void obs_source_enum_active_sources(obs_source_t *source, obs_source_enum_proc_t enum_callback, void *param)
{
	bool is_transition;

	if (!data_valid(source, "obs_source_enum_active_sources"))
		return;

	is_transition = source->info.type == OBS_SOURCE_TYPE_TRANSITION;
	if (!is_transition && !source->info.enum_active_sources)
		return;

	source = obs_source_get_ref(source);
	if (!data_valid(source, "obs_source_enum_active_sources"))
		return;

	if (is_transition)
		obs_transition_enum_sources(source, enum_callback, param);
	if (source->info.enum_active_sources)
		source->info.enum_active_sources(source->context.data, enum_callback, param);

	obs_source_release(source);
}

void obs_source_enum_active_tree(obs_source_t *source, obs_source_enum_proc_t enum_callback, void *param)
{
	struct source_enum_data data = {enum_callback, param};
	bool is_transition;

	if (!data_valid(source, "obs_source_enum_active_tree"))
		return;

	is_transition = source->info.type == OBS_SOURCE_TYPE_TRANSITION;
	if (!is_transition && !source->info.enum_active_sources)
		return;

	source = obs_source_get_ref(source);
	if (!data_valid(source, "obs_source_enum_active_tree"))
		return;

	if (source->info.type == OBS_SOURCE_TYPE_TRANSITION)
		obs_transition_enum_sources(source, enum_source_active_tree_callback, &data);
	if (source->info.enum_active_sources)
		source->info.enum_active_sources(source->context.data,
						 enum_source_active_tree_callback, &data);

	obs_source_release(source);
}

static void enum_source_full_tree_callback(obs_source_t *parent, obs_source_t *child, void *param)
{
	struct source_enum_data *data = param;
	bool is_transition = child->info.type == OBS_SOURCE_TYPE_TRANSITION;

	if (is_transition)
		obs_transition_enum_sources(child, enum_source_full_tree_callback, param);
	if (child->info.enum_all_sources) {
		if (child->context.data) {
			child->info.enum_all_sources(child->context.data, enum_source_full_tree_callback, data);
		}
	} else if (child->info.enum_active_sources) {
		if (child->context.data) {
			child->info.enum_active_sources(child->context.data, enum_source_full_tree_callback, data);
		}
	}

	data->enum_callback(parent, child, data->param);
}

void obs_source_enum_full_tree(obs_source_t *source, obs_source_enum_proc_t enum_callback, void *param)
{
	struct source_enum_data data = {enum_callback, param};
	bool is_transition;

	if (!data_valid(source, "obs_source_enum_full_tree"))
		return;

	is_transition = source->info.type == OBS_SOURCE_TYPE_TRANSITION;
	if (!is_transition && !source->info.enum_active_sources)
		return;

	source = obs_source_get_ref(source);
	if (!data_valid(source, "obs_source_enum_full_tree"))
		return;

	if (source->info.type == OBS_SOURCE_TYPE_TRANSITION)
		obs_transition_enum_sources(source, enum_source_full_tree_callback, &data);

	if (source->info.enum_all_sources) {
		source->info.enum_all_sources(source->context.data, enum_source_full_tree_callback, &data);
	} else if (source->info.enum_active_sources) {
		source->info.enum_active_sources(source->context.data, enum_source_full_tree_callback, &data);
	}

	obs_source_release(source);
}

struct descendant_info {
	bool exists;
	obs_source_t *target;
};

static void check_descendant(obs_source_t *parent, obs_source_t *child, void *param)
{
	struct descendant_info *info = param;
	if (child == info->target || parent == info->target)
		info->exists = true;
}

bool obs_source_add_active_child(obs_source_t *parent, obs_source_t *child)
{
	struct descendant_info info = {false, parent};

	if (!obs_ptr_valid(parent, "obs_source_add_active_child"))
		return false;
	if (!obs_ptr_valid(child, "obs_source_add_active_child"))
		return false;
	if (parent == child) {
		blog(LOG_WARNING, "obs_source_add_active_child: parent == child");
		return false;
	}

	obs_source_enum_full_tree(child, check_descendant, &info);
	if (info.exists)
		return false;

	for (int i = 0; i < parent->show_refs; i++) {
		enum view_type type;
		type = (i < parent->activate_refs) ? MAIN_VIEW : AUX_VIEW;
		obs_source_activate(child, type);
	}

	return true;
}

void obs_source_remove_active_child(obs_source_t *parent, obs_source_t *child)
{
	if (!obs_ptr_valid(parent, "obs_source_remove_active_child"))
		return;
	if (!obs_ptr_valid(child, "obs_source_remove_active_child"))
		return;

	for (int i = 0; i < parent->show_refs; i++) {
		enum view_type type;
		type = (i < parent->activate_refs) ? MAIN_VIEW : AUX_VIEW;
		obs_source_deactivate(child, type);
	}
}

void obs_source_save(obs_source_t *source)
{
	if (!data_valid(source, "obs_source_save"))
		return;

	obs_source_dosignal(source, "source_save", "save");

	if (source->info.save)
		source->info.save(source->context.data, source->context.settings);
}

void obs_source_load(obs_source_t *source)
{
	if (!data_valid(source, "obs_source_load"))
		return;
	if (source->info.load)
		source->info.load(source->context.data, source->context.settings);

	obs_source_dosignal(source, "source_load", "load");
}

void obs_source_load2(obs_source_t *source)
{
	if (!data_valid(source, "obs_source_load2"))
		return;

	obs_source_load(source);

	for (size_t i = source->filters.num; i > 0; i--) {
		obs_source_t *filter = source->filters.array[i - 1];
		obs_source_load(filter);
	}
}

bool obs_source_active(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_active") ? source->activate_refs != 0 : false;
}

bool obs_source_showing(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_showing") ?
		source->show_refs != 0 : false;
}

static inline void signal_flags_updated(obs_source_t *source)
{
	struct calldata data;
	uint8_t stack[128];

	calldata_init_fixed(&data, stack, sizeof(stack));
	calldata_set_ptr(&data, "source", source);
	calldata_set_int(&data, "flags", source->flags);

	signal_handler_signal(source->context.signals, "update_flags", &data);
}

void obs_source_set_flags(obs_source_t *source, uint32_t flags)
{
	if (!obs_source_valid(source, "obs_source_set_flags"))
		return;

	if (flags != source->flags) {
		source->flags = flags;
		signal_flags_updated(source);
	}
}

void obs_source_set_default_flags(obs_source_t *source, uint32_t flags)
{
	if (!obs_source_valid(source, "obs_source_set_default_flags"))
		return;

	source->default_flags = flags;
}

uint32_t obs_source_get_flags(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_flags") ? source->flags : 0;
}

void obs_source_set_audio_mixers(obs_source_t *source, uint32_t mixers)
{
	struct calldata data;
	uint8_t stack[128];

	if (!obs_source_valid(source, "obs_source_set_audio_mixers"))
		return;

	if (!source->owns_info_id && (source->info.output_flags & OBS_SOURCE_AUDIO) == 0)
		return;
	if (source->audio_mixers == mixers)
		return;

	calldata_init_fixed(&data, stack, sizeof(stack));
	calldata_set_ptr(&data, "source", source);
	calldata_set_int(&data, "mixers", mixers);

	signal_handler_signal(source->context.signals, "audio_mixers", &data);

	mixers = (uint32_t)calldata_int(&data, "mixers");

	source->audio_mixers = mixers;
}

uint32_t obs_source_get_audio_mixers(const obs_source_t *source)
{
	if (!obs_source_valid(source, "obs_source_get_audio_mixers"))
		return 0;

	if (!source->owns_info_id && (source->info.output_flags & OBS_SOURCE_AUDIO) == 0)
		return 0;

	return source->audio_mixers;
}

void obs_source_draw_set_color_matrix(const struct matrix4 *color_matrix, const struct vec3 *color_range_min,
				      const struct vec3 *color_range_max)
{
	struct vec3 color_range_min_def;
	struct vec3 color_range_max_def;
	vec3_set(&color_range_min_def, 0.0f, 0.0f, 0.0f);
	vec3_set(&color_range_max_def, 1.0f, 1.0f, 1.0f);

	gs_effect_t *effect = gs_get_effect();
	gs_eparam_t *matrix;
	gs_eparam_t *range_min;
	gs_eparam_t *range_max;

	if (!effect) {
		blog(LOG_WARNING, "obs_source_draw_set_color_matrix: no active effect!");
		return;
	}

	if (!obs_ptr_valid(color_matrix, "obs_source_draw_set_color_matrix"))
		return;

	if (!color_range_min)
		color_range_min = &color_range_min_def;
	if (!color_range_max)
		color_range_max = &color_range_max_def;

	matrix = gs_effect_get_param_by_name(effect, "color_matrix");
	range_min = gs_effect_get_param_by_name(effect, "color_range_min");
	range_max = gs_effect_get_param_by_name(effect, "color_range_max");

	gs_effect_set_matrix4(matrix, color_matrix);
	gs_effect_set_val(range_min, color_range_min, sizeof(float) * 3);
	gs_effect_set_val(range_max, color_range_max, sizeof(float) * 3);
}

void obs_source_draw(gs_texture_t *texture, int x, int y, uint32_t cx, uint32_t cy, bool flip)
{
	if (!obs_ptr_valid(texture, "obs_source_draw"))
		return;

	gs_effect_t *effect = gs_get_effect();
	if (!effect) {
		blog(LOG_WARNING, "obs_source_draw: no active effect!");
		return;
	}

	const bool linear_srgb = gs_get_linear_srgb();

	const bool previous = gs_framebuffer_srgb_enabled();
	gs_enable_framebuffer_srgb(linear_srgb);

	gs_eparam_t *image = gs_effect_get_param_by_name(effect, "image");
	if (linear_srgb)
		gs_effect_set_texture_srgb(image, texture);
	else
		gs_effect_set_texture(image, texture);

	const bool change_pos = (x != 0 || y != 0);
	if (change_pos) {
		gs_matrix_push();
		gs_matrix_translate3f((float)x, (float)y, 0.0f);
	}

	gs_draw_sprite(texture, flip ?
	    GS_FLIP_V : 0, cx, cy);

	if (change_pos)
		gs_matrix_pop();

	gs_enable_framebuffer_srgb(previous);
}

void obs_source_inc_showing(obs_source_t *source)
{
	if (obs_source_valid(source, "obs_source_inc_showing"))
		obs_source_activate(source, AUX_VIEW);
}

void obs_source_inc_active(obs_source_t *source)
{
	if (obs_source_valid(source, "obs_source_inc_active"))
		obs_source_activate(source, MAIN_VIEW);
}

void obs_source_dec_showing(obs_source_t *source)
{
	if (obs_source_valid(source, "obs_source_dec_showing"))
		obs_source_deactivate(source, AUX_VIEW);
}

void obs_source_dec_active(obs_source_t *source)
{
	if (obs_source_valid(source, "obs_source_dec_active"))
		obs_source_deactivate(source, MAIN_VIEW);
}

void obs_source_enum_filters(obs_source_t *source, obs_source_enum_proc_t callback, void *param)
{
	if (!obs_source_valid(source, "obs_source_enum_filters"))
		return;
	if (!obs_ptr_valid(callback, "obs_source_enum_filters"))
		return;

	pthread_mutex_lock(&source->filter_mutex);

	for (size_t i = source->filters.num; i > 0; i--) {
		struct obs_source *filter = source->filters.array[i - 1];
		callback(source, filter, param);
	}

	pthread_mutex_unlock(&source->filter_mutex);
}

void obs_source_set_hidden(obs_source_t *source, bool hidden)
{
	source->temp_removed = hidden;
}

bool obs_source_is_hidden(obs_source_t *source)
{
	return source->temp_removed;
}

obs_source_t *obs_source_get_filter_by_name(obs_source_t *source, const char *name)
{
	obs_source_t *filter = NULL;

	if (!obs_source_valid(source, "obs_source_get_filter_by_name"))
		return NULL;
	if (!obs_ptr_valid(name, "obs_source_get_filter_by_name"))
		return NULL;

	pthread_mutex_lock(&source->filter_mutex);

	for (size_t i = 0; i < source->filters.num; i++) {
		struct obs_source *cur_filter = source->filters.array[i];
		if (strcmp(cur_filter->context.name, name) == 0) {
			filter = obs_source_get_ref(cur_filter);
			break;
		}
	}

	pthread_mutex_unlock(&source->filter_mutex);

	return filter;
}

size_t obs_source_filter_count(const obs_source_t *source)
{
	return
	    obs_source_valid(source, "obs_source_filter_count") ? source->filters.num : 0;
}

bool obs_source_enabled(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_enabled") ? source->enabled : false;
}

void obs_source_set_enabled(obs_source_t *source, bool enabled)
{
	struct calldata data;
	uint8_t stack[128];

	if (!obs_source_valid(source, "obs_source_set_enabled"))
		return;

	source->enabled = enabled;

	calldata_init_fixed(&data, stack, sizeof(stack));
	calldata_set_ptr(&data, "source", source);
	calldata_set_bool(&data, "enabled", enabled);

	signal_handler_signal(source->context.signals, "enable", &data);
}

bool obs_source_muted(const obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_muted") ? source->user_muted : false;
}

void obs_source_set_muted(obs_source_t *source, bool muted)
{
	struct calldata data;
	uint8_t stack[128];
	struct audio_action action = {.timestamp = os_gettime_ns(), .type = AUDIO_ACTION_MUTE, .set = muted};

	if (!obs_source_valid(source, "obs_source_set_muted"))
		return;

	source->user_muted = muted;

	calldata_init_fixed(&data, stack, sizeof(stack));
	calldata_set_ptr(&data, "source", source);
	calldata_set_bool(&data, "muted", muted);

	signal_handler_signal(source->context.signals, "mute", &data);

	pthread_mutex_lock(&source->audio_actions_mutex);
	da_push_back(source->audio_actions, &action);
	pthread_mutex_unlock(&source->audio_actions_mutex);
}

static void source_signal_push_to_changed(obs_source_t *source, const char *signal, bool enabled)
{
	struct calldata data;
	uint8_t stack[128];

	calldata_init_fixed(&data, stack, sizeof(stack));
	calldata_set_ptr(&data, "source", source);
	calldata_set_bool(&data, "enabled", enabled);

	signal_handler_signal(source->context.signals, signal, &data);
}

static void source_signal_push_to_delay(obs_source_t *source, const char *signal, uint64_t delay)
{
	struct calldata data;
	uint8_t stack[128];

	calldata_init_fixed(&data, stack, sizeof(stack));
	calldata_set_ptr(&data, "source", source);
	calldata_set_int(&data, "delay", delay);

	signal_handler_signal(source->context.signals, signal, &data);
}

bool obs_source_push_to_mute_enabled(obs_source_t *source)
{
	bool enabled;
	if (!obs_source_valid(source, "obs_source_push_to_mute_enabled"))
		return false;

	pthread_mutex_lock(&source->audio_mutex);
	enabled = source->push_to_mute_enabled;
	pthread_mutex_unlock(&source->audio_mutex);

	return enabled;
}

void obs_source_enable_push_to_mute(obs_source_t *source, bool enabled)
{
	if (!obs_source_valid(source, "obs_source_enable_push_to_mute"))
		return;

	pthread_mutex_lock(&source->audio_mutex);
	bool changed = source->push_to_mute_enabled != enabled;
	if (obs_source_get_output_flags(source) & OBS_SOURCE_AUDIO && changed)
		blog(LOG_INFO, "source '%s' %s push-to-mute", obs_source_get_name(source),
		     enabled ? "enabled" : "disabled");

	source->push_to_mute_enabled = enabled;

	if (changed)
		source_signal_push_to_changed(source, "push_to_mute_changed", enabled);
	pthread_mutex_unlock(&source->audio_mutex);
}

uint64_t obs_source_get_push_to_mute_delay(obs_source_t *source)
{
	uint64_t delay;
	if (!obs_source_valid(source, "obs_source_get_push_to_mute_delay"))
		return 0;

	pthread_mutex_lock(&source->audio_mutex);
	delay = source->push_to_mute_delay;
	pthread_mutex_unlock(&source->audio_mutex);

	return delay;
}

void obs_source_set_push_to_mute_delay(obs_source_t *source, uint64_t delay)
{
	if (!obs_source_valid(source, "obs_source_set_push_to_mute_delay"))
		return;

	pthread_mutex_lock(&source->audio_mutex);
	source->push_to_mute_delay = delay;

	source_signal_push_to_delay(source, "push_to_mute_delay", delay);
	pthread_mutex_unlock(&source->audio_mutex);
}

bool obs_source_push_to_talk_enabled(obs_source_t *source)
{
	bool enabled;
	if (!obs_source_valid(source, "obs_source_push_to_talk_enabled"))
		return false;

	pthread_mutex_lock(&source->audio_mutex);
	enabled = source->push_to_talk_enabled;
	pthread_mutex_unlock(&source->audio_mutex);

	return enabled;
}

void obs_source_enable_push_to_talk(obs_source_t
				    *source, bool enabled)
{
	if (!obs_source_valid(source, "obs_source_enable_push_to_talk"))
		return;

	pthread_mutex_lock(&source->audio_mutex);
	bool changed = source->push_to_talk_enabled != enabled;
	if (obs_source_get_output_flags(source) & OBS_SOURCE_AUDIO && changed)
		blog(LOG_INFO, "source '%s' %s push-to-talk", obs_source_get_name(source),
		     enabled ? "enabled" : "disabled");

	source->push_to_talk_enabled = enabled;

	if (changed)
		source_signal_push_to_changed(source, "push_to_talk_changed", enabled);
	pthread_mutex_unlock(&source->audio_mutex);
}

uint64_t obs_source_get_push_to_talk_delay(obs_source_t *source)
{
	uint64_t delay;
	if (!obs_source_valid(source, "obs_source_get_push_to_talk_delay"))
		return 0;

	pthread_mutex_lock(&source->audio_mutex);
	delay = source->push_to_talk_delay;
	pthread_mutex_unlock(&source->audio_mutex);

	return delay;
}

void obs_source_set_push_to_talk_delay(obs_source_t *source, uint64_t delay)
{
	if (!obs_source_valid(source, "obs_source_set_push_to_talk_delay"))
		return;

	pthread_mutex_lock(&source->audio_mutex);
	source->push_to_talk_delay = delay;

	source_signal_push_to_delay(source, "push_to_talk_delay", delay);
	pthread_mutex_unlock(&source->audio_mutex);
}

void *obs_source_get_type_data(obs_source_t *source)
{
	return obs_source_valid(source, "obs_source_get_type_data") ?
		source->info.type_data : NULL;
}

static float get_source_volume(obs_source_t *source, uint64_t os_time)
{
	if (source->push_to_mute_enabled && source->push_to_mute_pressed)
		source->push_to_mute_stop_time = os_time + source->push_to_mute_delay * 1000000;

	if (source->push_to_talk_enabled && source->push_to_talk_pressed)
		source->push_to_talk_stop_time = os_time + source->push_to_talk_delay * 1000000;

	bool push_to_mute_active = source->push_to_mute_pressed || os_time < source->push_to_mute_stop_time;
	bool push_to_talk_active = source->push_to_talk_pressed || os_time < source->push_to_talk_stop_time;

	bool muted = !source->enabled || source->muted || (source->push_to_mute_enabled && push_to_mute_active) ||
		     (source->push_to_talk_enabled && !push_to_talk_active);

	if (muted || close_float(source->volume, 0.0f, 0.0001f))
		return 0.0f;
	if (close_float(source->volume, 1.0f, 0.0001f))
		return 1.0f;

	return source->volume;
}

static inline void multiply_output_audio(obs_source_t *source, size_t mix, size_t channels, float vol)
{
	register float *out = source->audio_output_buf[mix][0];
	register float *end = out + AUDIO_OUTPUT_FRAMES * channels;

	while (out < end)
		*(out++) *= vol;
}

static inline void multiply_vol_data(obs_source_t *source, size_t mix, size_t channels, float *vol_data)
{
	for (size_t ch = 0; ch < channels; ch++) {
		register float *out = source->audio_output_buf[mix][ch];
		register float *end = out + AUDIO_OUTPUT_FRAMES;
		register float *vol = vol_data;

		while (out < end)
			*(out++) *= *(vol++);
	}
}

static inline void apply_audio_action(obs_source_t *source, const struct audio_action *action)
{
	switch (action->type) {
	case AUDIO_ACTION_VOL:
		source->volume = action->vol;
		break;
	case AUDIO_ACTION_MUTE:
		source->muted = action->set;
		break;
	case AUDIO_ACTION_PTT:
		source->push_to_talk_pressed = action->set;
		break;
	case AUDIO_ACTION_PTM:
		source->push_to_mute_pressed = action->set;
		break;
	}
}

static void apply_audio_actions(obs_source_t *source, size_t channels, size_t sample_rate)
{
	float vol_data[AUDIO_OUTPUT_FRAMES];
	float cur_vol = get_source_volume(source, source->audio_ts);
	size_t frame_num = 0;

	pthread_mutex_lock(&source->audio_actions_mutex);

	for (size_t i = 0; i < source->audio_actions.num; i++) {
		struct audio_action action = source->audio_actions.array[i];
		uint64_t timestamp = action.timestamp;
		size_t new_frame_num;

		if (timestamp < source->audio_ts)
			timestamp = source->audio_ts;

		new_frame_num = conv_time_to_frames(sample_rate, timestamp - source->audio_ts);
		if (new_frame_num >= AUDIO_OUTPUT_FRAMES)
			break;

		da_erase(source->audio_actions, i--);
		apply_audio_action(source, &action);

		if (new_frame_num > frame_num) {
			for (; frame_num < new_frame_num; frame_num++)
				vol_data[frame_num] = cur_vol;
		}

		cur_vol = get_source_volume(source, timestamp);
	}

	for (; frame_num < AUDIO_OUTPUT_FRAMES; frame_num++)
		vol_data[frame_num] = cur_vol;

	pthread_mutex_unlock(&source->audio_actions_mutex);

	for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
		if ((source->audio_mixers & (1 << mix)) != 0)
			multiply_vol_data(source, mix, channels, vol_data);
	}
}

static void apply_audio_volume(obs_source_t *source, uint32_t mixers, size_t channels, size_t sample_rate)
{
	struct audio_action action;
	bool actions_pending;
	float vol;

	pthread_mutex_lock(&source->audio_actions_mutex);

	actions_pending = source->audio_actions.num > 0;
	if (actions_pending)
		action = source->audio_actions.array[0];

	pthread_mutex_unlock(&source->audio_actions_mutex);

	if (actions_pending) {
		uint64_t duration = conv_frames_to_time(sample_rate, AUDIO_OUTPUT_FRAMES);

		if (action.timestamp < (source->audio_ts + duration)) {
			apply_audio_actions(source, channels, sample_rate);
			return;
		}
	}

	vol = get_source_volume(source, source->audio_ts);
	if (vol == 1.0f)
		return;

	if (vol == 0.0f || mixers == 0) {
		memset(source->audio_output_buf[0][0], 0,
		       AUDIO_OUTPUT_FRAMES * sizeof(float) * MAX_AUDIO_CHANNELS * MAX_AUDIO_MIXES);
		return;
	}

	for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
		uint32_t mix_and_val = (1 << mix);
		if ((source->audio_mixers & mix_and_val) != 0 && (mixers & mix_and_val) != 0)
			multiply_output_audio(source, mix, channels, vol);
	}
}

static void custom_audio_render(obs_source_t *source, uint32_t mixers, size_t channels, size_t sample_rate)
{
	struct obs_source_audio_mix audio_data;
	bool success;
	uint64_t ts;

	for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
		for (size_t ch = 0; ch < channels; ch++) {
			audio_data.output[mix].data[ch] = source->audio_output_buf[mix][ch];
		}

		if ((source->audio_mixers & mixers & (1 << mix)) != 0) {
			memset(source->audio_output_buf[mix][0], 0, sizeof(float) * AUDIO_OUTPUT_FRAMES * channels);
		}
	}

	success = source->info.audio_render(source->context.data, &ts, &audio_data, mixers, channels, sample_rate);
	source->audio_ts = success ? ts : 0;
	source->audio_pending = !success;

	if (!success || !source->audio_ts || !mixers)
		return;

	for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) {
		uint32_t mix_bit = 1 << mix;

		if ((mixers & mix_bit) == 0)
			continue;

		if ((source->audio_mixers & mix_bit) == 0) {
			memset(source->audio_output_buf[mix][0], 0, sizeof(float) * AUDIO_OUTPUT_FRAMES * channels);
		}
	}

	apply_audio_volume(source, mixers, channels, sample_rate);
}

static void audio_submix(obs_source_t *source, size_t channels, size_t sample_rate)
{
	struct audio_output_data audio_data;
	struct obs_source_audio audio = {0};
	bool success;
	uint64_t ts;

	for (size_t ch = 0; ch < channels; ch++) {
		audio_data.data[ch] = source->audio_mix_buf[ch];
	}

	memset(source->audio_mix_buf[0], 0, sizeof(float) * AUDIO_OUTPUT_FRAMES * channels);

	success = source->info.audio_mix(source->context.data, &ts, &audio_data, channels, sample_rate);

	if (!success)
		return;

	for (size_t i = 0; i < channels; i++)
		audio.data[i] = (const uint8_t *)audio_data.data[i];

	audio.samples_per_sec = (uint32_t)sample_rate;
	audio.frames = AUDIO_OUTPUT_FRAMES;
	audio.format = AUDIO_FORMAT_FLOAT_PLANAR;
	audio.speakers = (enum speaker_layout)channels;
	audio.timestamp = ts;

	obs_source_output_audio(source, &audio);
} static inline void process_audio_source_tick(obs_source_t *source, uint32_t mixers, size_t channels, size_t sample_rate, size_t size) { bool audio_submix = !!(source->info.output_flags & OBS_SOURCE_SUBMIX); pthread_mutex_lock(&source->audio_buf_mutex); if (source->audio_input_buf[0].size < size) { source->audio_pending = true; pthread_mutex_unlock(&source->audio_buf_mutex); return; } for (size_t ch = 0; ch < channels; ch++) deque_peek_front(&source->audio_input_buf[ch], source->audio_output_buf[0][ch], size); pthread_mutex_unlock(&source->audio_buf_mutex); for (size_t mix = 1; mix < MAX_AUDIO_MIXES; mix++) { uint32_t mix_and_val = (1 << mix); if (audio_submix) { if (mix > 1) break; mixers = 1; mix_and_val = 1; } if ((source->audio_mixers & mix_and_val) == 0 || (mixers & mix_and_val) == 0) { memset(source->audio_output_buf[mix][0], 0, size * channels); continue; } for (size_t ch = 0; ch < channels; ch++) memcpy(source->audio_output_buf[mix][ch], source->audio_output_buf[0][ch], size); } if (audio_submix) { source->audio_pending = false; return; } if ((source->audio_mixers & 1) == 0 || (mixers & 1) == 0) memset(source->audio_output_buf[0][0], 0, size * channels); apply_audio_volume(source, mixers, channels, sample_rate); source->audio_pending = false; } void obs_source_audio_render(obs_source_t *source, uint32_t mixers, size_t channels, size_t sample_rate, size_t size) { if (!source->audio_output_buf[0][0]) { source->audio_pending = true; return; } if (source->info.audio_render) { if (!source->context.data) { source->audio_pending = true; return; } custom_audio_render(source, mixers, channels, sample_rate); return; } if (source->info.audio_mix) { audio_submix(source, channels, sample_rate); } if (!source->audio_ts) { source->audio_pending = true; return; } process_audio_source_tick(source, mixers, channels, sample_rate, size); } bool obs_source_audio_pending(const obs_source_t *source) { if (!obs_source_valid(source, "obs_source_audio_pending")) return true; if 
(obs_source_removed(source)) return true; return (is_composite_source(source) || is_audio_source(source)) ? source->audio_pending : true; } uint64_t obs_source_get_audio_timestamp(const obs_source_t *source) { return obs_source_valid(source, "obs_source_get_audio_timestamp") ? source->audio_ts : 0; } void obs_source_get_audio_mix(const obs_source_t *source, struct obs_source_audio_mix *audio) { if (!obs_source_valid(source, "obs_source_get_audio_mix")) return; if (!obs_ptr_valid(audio, "audio")) return; for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) { for (size_t ch = 0; ch < MAX_AUDIO_CHANNELS; ch++) { audio->output[mix].data[ch] = source->audio_output_buf[mix][ch]; } } } void obs_source_add_audio_pause_callback(obs_source_t *source, signal_callback_t callback, void *param) { if (!obs_source_valid(source, "obs_source_add_audio_pause_callback")) return; signal_handler_t *handler = obs_source_get_signal_handler(source); signal_handler_connect(handler, "media_pause", callback, param); signal_handler_connect(handler, "media_stopped", callback, param); } void obs_source_remove_audio_pause_callback(obs_source_t *source, signal_callback_t callback, void *param) { if (!obs_source_valid(source, "obs_source_remove_audio_pause_callback")) return; signal_handler_t *handler = obs_source_get_signal_handler(source); signal_handler_disconnect(handler, "media_pause", callback, param); signal_handler_disconnect(handler, "media_stopped", callback, param); } void obs_source_add_audio_capture_callback(obs_source_t *source, obs_source_audio_capture_t callback, void *param) { struct audio_cb_info info = {callback, param}; if (!obs_source_valid(source, "obs_source_add_audio_capture_callback")) return; pthread_mutex_lock(&source->audio_cb_mutex); da_push_back(source->audio_cb_list, &info); pthread_mutex_unlock(&source->audio_cb_mutex); } void obs_source_remove_audio_capture_callback(obs_source_t *source, obs_source_audio_capture_t callback, void *param) { struct audio_cb_info info = 
{callback, param}; if (!obs_source_valid(source, "obs_source_remove_audio_capture_callback")) return; pthread_mutex_lock(&source->audio_cb_mutex); da_erase_item(source->audio_cb_list, &info); pthread_mutex_unlock(&source->audio_cb_mutex); } void obs_source_set_monitoring_type(obs_source_t *source, enum obs_monitoring_type type) { struct calldata data; uint8_t stack[128]; bool was_on; bool now_on; if (!obs_source_valid(source, "obs_source_set_monitoring_type")) return; if (source->monitoring_type == type) return; calldata_init_fixed(&data, stack, sizeof(stack)); calldata_set_ptr(&data, "source", source); calldata_set_int(&data, "type", type); signal_handler_signal(source->context.signals, "audio_monitoring", &data); was_on = source->monitoring_type != OBS_MONITORING_TYPE_NONE; now_on = type != OBS_MONITORING_TYPE_NONE; if (was_on != now_on) { if (!was_on) { source->monitor = audio_monitor_create(source); } else { audio_monitor_destroy(source->monitor); source->monitor = NULL; } } source->monitoring_type = type; } enum obs_monitoring_type obs_source_get_monitoring_type(const obs_source_t *source) { return obs_source_valid(source, "obs_source_get_monitoring_type") ? source->monitoring_type : OBS_MONITORING_TYPE_NONE; } void obs_source_set_async_unbuffered(obs_source_t *source, bool unbuffered) { if (!obs_source_valid(source, "obs_source_set_async_unbuffered")) return; source->async_unbuffered = unbuffered; } bool obs_source_async_unbuffered(const obs_source_t *source) { return obs_source_valid(source, "obs_source_async_unbuffered") ? 
source->async_unbuffered : false; } obs_data_t *obs_source_get_private_settings(obs_source_t *source) { if (!obs_ptr_valid(source, "obs_source_get_private_settings")) return NULL; obs_data_addref(source->private_settings); return source->private_settings; } void obs_source_set_async_decoupled(obs_source_t *source, bool decouple) { if (!obs_ptr_valid(source, "obs_source_set_async_decoupled")) return; source->async_decoupled = decouple; if (decouple) { pthread_mutex_lock(&source->audio_buf_mutex); source->timing_set = false; reset_audio_data(source, 0); pthread_mutex_unlock(&source->audio_buf_mutex); } } bool obs_source_async_decoupled(const obs_source_t *source) { return obs_source_valid(source, "obs_source_async_decoupled") ? source->async_decoupled : false; } /* hidden/undocumented export to allow source type redefinition for scripts */ EXPORT void obs_enable_source_type(const char *name, bool enable) { struct obs_source_info *info = get_source_info(name); if (!info) return; if (enable) info->output_flags &= ~OBS_SOURCE_CAP_DISABLED; else info->output_flags |= OBS_SOURCE_CAP_DISABLED; } enum speaker_layout obs_source_get_speaker_layout(obs_source_t *source) { if (!obs_source_valid(source, "obs_source_get_audio_channels")) return SPEAKERS_UNKNOWN; return source->sample_info.speakers; } void obs_source_set_balance_value(obs_source_t *source, float balance) { if (obs_source_valid(source, "obs_source_set_balance_value")) { struct calldata data; uint8_t stack[128]; calldata_init_fixed(&data, stack, sizeof(stack)); calldata_set_ptr(&data, "source", source); calldata_set_float(&data, "balance", balance); signal_handler_signal(source->context.signals, "audio_balance", &data); source->balance = (float)calldata_float(&data, "balance"); } } float obs_source_get_balance_value(const obs_source_t *source) { return obs_source_valid(source, "obs_source_get_balance_value") ? 
source->balance : 0.5f; } void obs_source_set_audio_active(obs_source_t *source, bool active) { if (!obs_source_valid(source, "obs_source_set_audio_active")) return; if (os_atomic_set_bool(&source->audio_active, active) == active) return; if (active) obs_source_dosignal(source, "source_audio_activate", "audio_activate"); else obs_source_dosignal(source, "source_audio_deactivate", "audio_deactivate"); } bool obs_source_audio_active(const obs_source_t *source) { return obs_source_valid(source, "obs_source_audio_active") ? os_atomic_load_bool(&source->audio_active) : false; } uint32_t obs_source_get_last_obs_version(const obs_source_t *source) { return obs_source_valid(source, "obs_source_get_last_obs_version") ? source->last_obs_ver : 0; } enum obs_icon_type obs_source_get_icon_type(const char *id) { const struct obs_source_info *info = get_source_info(id); return (info) ? info->icon_type : OBS_ICON_TYPE_UNKNOWN; } void obs_source_media_play_pause(obs_source_t *source, bool pause) { if (!data_valid(source, "obs_source_media_play_pause")) return; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; if (!source->info.media_play_pause) return; struct media_action action = { .type = MEDIA_ACTION_PLAY_PAUSE, .pause = pause, }; pthread_mutex_lock(&source->media_actions_mutex); da_push_back(source->media_actions, &action); pthread_mutex_unlock(&source->media_actions_mutex); } void obs_source_media_restart(obs_source_t *source) { if (!data_valid(source, "obs_source_media_restart")) return; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; if (!source->info.media_restart) return; struct media_action action = { .type = MEDIA_ACTION_RESTART, }; pthread_mutex_lock(&source->media_actions_mutex); da_push_back(source->media_actions, &action); pthread_mutex_unlock(&source->media_actions_mutex); } void obs_source_media_stop(obs_source_t *source) { if (!data_valid(source, "obs_source_media_stop")) return; if 
((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; if (!source->info.media_stop) return; struct media_action action = { .type = MEDIA_ACTION_STOP, }; pthread_mutex_lock(&source->media_actions_mutex); da_push_back(source->media_actions, &action); pthread_mutex_unlock(&source->media_actions_mutex); } void obs_source_media_next(obs_source_t *source) { if (!data_valid(source, "obs_source_media_next")) return; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; if (!source->info.media_next) return; struct media_action action = { .type = MEDIA_ACTION_NEXT, }; pthread_mutex_lock(&source->media_actions_mutex); da_push_back(source->media_actions, &action); pthread_mutex_unlock(&source->media_actions_mutex); } void obs_source_media_previous(obs_source_t *source) { if (!data_valid(source, "obs_source_media_previous")) return; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; if (!source->info.media_previous) return; struct media_action action = { .type = MEDIA_ACTION_PREVIOUS, }; pthread_mutex_lock(&source->media_actions_mutex); da_push_back(source->media_actions, &action); pthread_mutex_unlock(&source->media_actions_mutex); } int64_t obs_source_media_get_duration(obs_source_t *source) { if (!data_valid(source, "obs_source_media_get_duration")) return 0; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return 0; if (source->info.media_get_duration) return source->info.media_get_duration(source->context.data); else return 0; } int64_t obs_source_media_get_time(obs_source_t *source) { if (!data_valid(source, "obs_source_media_get_time")) return 0; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return 0; if (source->info.media_get_time) return source->info.media_get_time(source->context.data); else return 0; } void obs_source_media_set_time(obs_source_t *source, int64_t ms) { if (!data_valid(source, "obs_source_media_set_time")) return; if 
((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; if (!source->info.media_set_time) return; struct media_action action = { .type = MEDIA_ACTION_SET_TIME, .ms = ms, }; pthread_mutex_lock(&source->media_actions_mutex); da_push_back(source->media_actions, &action); pthread_mutex_unlock(&source->media_actions_mutex); } enum obs_media_state obs_source_media_get_state(obs_source_t *source) { if (!data_valid(source, "obs_source_media_get_state")) return OBS_MEDIA_STATE_NONE; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return OBS_MEDIA_STATE_NONE; if (source->info.media_get_state) return source->info.media_get_state(source->context.data); else return OBS_MEDIA_STATE_NONE; } void obs_source_media_started(obs_source_t *source) { if (!obs_source_valid(source, "obs_source_media_started")) return; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; obs_source_dosignal(source, NULL, "media_started"); } void obs_source_media_ended(obs_source_t *source) { if (!obs_source_valid(source, "obs_source_media_ended")) return; if ((source->info.output_flags & OBS_SOURCE_CONTROLLABLE_MEDIA) == 0) return; obs_source_dosignal(source, NULL, "media_ended"); } obs_data_array_t *obs_source_backup_filters(obs_source_t *source) { if (!obs_source_valid(source, "obs_source_backup_filters")) return NULL; obs_data_array_t *array = obs_data_array_create(); pthread_mutex_lock(&source->filter_mutex); for (size_t i = 0; i < source->filters.num; i++) { struct obs_source *filter = source->filters.array[i]; obs_data_t *data = obs_save_source(filter); obs_data_array_push_back(array, data); obs_data_release(data); } pthread_mutex_unlock(&source->filter_mutex); return array; } void obs_source_restore_filters(obs_source_t *source, obs_data_array_t *array) { if (!obs_source_valid(source, "obs_source_restore_filters")) return; if (!obs_ptr_valid(array, "obs_source_restore_filters")) return; DARRAY(obs_source_t *) cur_filters; 
	DARRAY(obs_source_t *) new_filters;
	obs_source_t *prev = NULL;

	da_init(cur_filters);
	da_init(new_filters);

	pthread_mutex_lock(&source->filter_mutex);

	/* clear filter list */
	da_reserve(cur_filters, source->filters.num);
	da_reserve(new_filters, source->filters.num);
	for (size_t i = 0; i < source->filters.num; i++) {
		obs_source_t *filter = source->filters.array[i];
		da_push_back(cur_filters, &filter);
		filter->filter_parent = NULL;
		filter->filter_target = NULL;
	}

	da_free(source->filters);
	pthread_mutex_unlock(&source->filter_mutex);

	/* add backed up filters */
	size_t count = obs_data_array_count(array);
	for (size_t i = 0; i < count; i++) {
		obs_data_t *data = obs_data_array_item(array, i);
		const char *name = obs_data_get_string(data, "name");
		obs_source_t *filter = NULL;

		/* if backed up filter already exists, don't create */
		for (size_t j = 0; j < cur_filters.num; j++) {
			obs_source_t *cur = cur_filters.array[j];
			const char *cur_name = cur->context.name;
			if (cur_name && strcmp(cur_name, name) == 0) {
				filter = obs_source_get_ref(cur);
				break;
			}
		}

		if (!filter)
			filter = obs_load_source(data);

		/* add filter */
		if (prev)
			prev->filter_target = filter;
		prev = filter;
		filter->filter_parent = source;
		da_push_back(new_filters, &filter);

		obs_data_release(data);
	}

	if (prev)
		prev->filter_target = source;

	pthread_mutex_lock(&source->filter_mutex);
	da_move(source->filters, new_filters);
	pthread_mutex_unlock(&source->filter_mutex);

	/* release filters */
	for (size_t i = 0; i < cur_filters.num; i++) {
		obs_source_t *filter = cur_filters.array[i];
		obs_source_release(filter);
	}

	da_free(cur_filters);
}

uint64_t obs_source_get_last_async_ts(const obs_source_t *source)
{
	return source->async_last_rendered_ts;
}

obs_canvas_t *obs_source_get_canvas(const obs_source_t *source)
{
	return obs_weak_canvas_get_canvas(source->canvas);
}

/* ==== obs-studio-32.1.0-sources/libobs/obs-view.c ==== */
/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

#include "obs.h"
#include "obs-internal.h"

bool obs_view_init(struct obs_view *view, enum view_type type)
{
	if (!view)
		return false;

	pthread_mutex_init_value(&view->channels_mutex);
	if (pthread_mutex_init(&view->channels_mutex, NULL) != 0) {
		blog(LOG_ERROR, "obs_view_init: Failed to create mutex");
		return false;
	}

	view->type = type;
	return true;
}

obs_view_t *obs_view_create(void)
{
	struct obs_view *view = bzalloc(sizeof(struct obs_view));

	if (!obs_view_init(view, AUX_VIEW)) {
		bfree(view);
		view = NULL;
	}

	return view;
}

void obs_view_free(struct obs_view *view)
{
	if (!view)
		return;

	for (size_t i = 0; i < MAX_CHANNELS; i++) {
		struct obs_source *source = view->channels[i];
		if (source) {
			obs_source_deactivate(source, view->type);
			obs_source_release(source);
		}
	}

	memset(view->channels, 0, sizeof(view->channels));
	pthread_mutex_destroy(&view->channels_mutex);
}

void obs_view_destroy(obs_view_t *view)
{
	if (view) {
		obs_view_free(view);
		bfree(view);
	}
}

obs_source_t *obs_view_get_source(obs_view_t *view, uint32_t channel)
{
	obs_source_t *source;
	assert(channel < MAX_CHANNELS);

	if (!view)
		return NULL;
	if (channel >= MAX_CHANNELS)
		return NULL;

	pthread_mutex_lock(&view->channels_mutex);
	source =
obs_source_get_ref(view->channels[channel]); pthread_mutex_unlock(&view->channels_mutex); return source; } void obs_view_set_source(obs_view_t *view, uint32_t channel, obs_source_t *source) { struct obs_source *prev_source; assert(channel < MAX_CHANNELS); if (!view) return; if (channel >= MAX_CHANNELS) return; pthread_mutex_lock(&view->channels_mutex); source = obs_source_get_ref(source); prev_source = view->channels[channel]; view->channels[channel] = source; pthread_mutex_unlock(&view->channels_mutex); if (source) obs_source_activate(source, view->type); if (prev_source) { obs_source_deactivate(prev_source, view->type); obs_source_release(prev_source); } } void obs_view_render(obs_view_t *view) { if (!view) return; pthread_mutex_lock(&view->channels_mutex); for (size_t i = 0; i < MAX_CHANNELS; i++) { struct obs_source *source; source = view->channels[i]; if (source) { if (source->removed) { obs_source_release(source); view->channels[i] = NULL; } else { obs_source_video_render(source); } } } pthread_mutex_unlock(&view->channels_mutex); } video_t *obs_view_add(obs_view_t *view) { if (!obs->data.main_canvas->mix) return NULL; return obs_view_add2(view, &obs->data.main_canvas->mix->ovi); } video_t *obs_view_add2(obs_view_t *view, struct obs_video_info *ovi) { if (!view || !ovi) return NULL; struct obs_core_video_mix *mix = obs_create_video_mix(ovi); if (!mix) { return NULL; } mix->view = view; pthread_mutex_lock(&obs->video.mixes_mutex); da_push_back(obs->video.mixes, &mix); pthread_mutex_unlock(&obs->video.mixes_mutex); return mix->video; } void obs_view_remove(obs_view_t *view) { if (!view) return; pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) { if (obs->video.mixes.array[i]->view == view) obs->video.mixes.array[i]->view = NULL; } pthread_mutex_unlock(&obs->video.mixes_mutex); } void obs_view_enum_video_info(obs_view_t *view, bool (*enum_proc)(void *, struct obs_video_info *), void *param) { 
	pthread_mutex_lock(&obs->video.mixes_mutex);
	for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) {
		struct obs_core_video_mix *mix = obs->video.mixes.array[i];
		if (mix->view != view)
			continue;
		if (!enum_proc(param, &mix->ovi))
			break;
	}
	pthread_mutex_unlock(&obs->video.mixes_mutex);
}

/* ==== obs-studio-32.1.0-sources/libobs/obs-hotkey.h ==== */

/******************************************************************************
    Copyright (C) 2014-2015 by Ruwen Hahn

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ #pragma once #ifdef __cplusplus extern "C" { #endif typedef size_t obs_hotkey_id; typedef size_t obs_hotkey_pair_id; #ifndef SWIG #define OBS_INVALID_HOTKEY_ID (~(obs_hotkey_id)0) #define OBS_INVALID_HOTKEY_PAIR_ID (~(obs_hotkey_pair_id)0) #else const size_t OBS_INVALID_HOTKEY_ID = (size_t)-1; const size_t OBS_INVALID_HOTKEY_PAIR_ID = (size_t)-1; #endif #define XINPUT_MOUSE_LEN 33 enum obs_key { #define OBS_HOTKEY(x) x, #include "obs-hotkeys.h" #undef OBS_HOTKEY OBS_KEY_LAST_VALUE //not an actual key }; typedef enum obs_key obs_key_t; struct obs_key_combination { uint32_t modifiers; obs_key_t key; }; typedef struct obs_key_combination obs_key_combination_t; typedef struct obs_hotkey obs_hotkey_t; typedef struct obs_hotkey_binding obs_hotkey_binding_t; enum obs_hotkey_registerer_type { OBS_HOTKEY_REGISTERER_FRONTEND, OBS_HOTKEY_REGISTERER_SOURCE, OBS_HOTKEY_REGISTERER_OUTPUT, OBS_HOTKEY_REGISTERER_ENCODER, OBS_HOTKEY_REGISTERER_SERVICE, }; typedef enum obs_hotkey_registerer_type obs_hotkey_registerer_t; /* getter functions */ EXPORT obs_hotkey_id obs_hotkey_get_id(const obs_hotkey_t *key); EXPORT const char *obs_hotkey_get_name(const obs_hotkey_t *key); EXPORT const char *obs_hotkey_get_description(const obs_hotkey_t *key); EXPORT obs_hotkey_registerer_t obs_hotkey_get_registerer_type(const obs_hotkey_t *key); EXPORT void *obs_hotkey_get_registerer(const obs_hotkey_t *key); EXPORT obs_hotkey_id obs_hotkey_get_pair_partner_id(const obs_hotkey_t *key); EXPORT obs_key_combination_t obs_hotkey_binding_get_key_combination(obs_hotkey_binding_t *binding); EXPORT obs_hotkey_id obs_hotkey_binding_get_hotkey_id(obs_hotkey_binding_t *binding); EXPORT obs_hotkey_t *obs_hotkey_binding_get_hotkey(obs_hotkey_binding_t *binding); /* setter functions */ EXPORT void obs_hotkey_set_name(obs_hotkey_id id, const char *name); EXPORT void obs_hotkey_set_description(obs_hotkey_id id, const char *desc); EXPORT 
void obs_hotkey_pair_set_names(obs_hotkey_pair_id id, const char *name0, const char *name1); EXPORT void obs_hotkey_pair_set_descriptions(obs_hotkey_pair_id id, const char *desc0, const char *desc1); #ifndef SWIG struct obs_hotkeys_translations { const char *insert; const char *del; const char *home; const char *end; const char *page_up; const char *page_down; const char *num_lock; const char *scroll_lock; const char *caps_lock; const char *backspace; const char *tab; const char *print; const char *pause; const char *left; const char *right; const char *up; const char *down; const char *shift; const char *alt; const char *control; const char *meta; /* windows/super key */ const char *menu; const char *space; const char *numpad_num; /* For example, "Numpad %1" */ const char *numpad_divide; const char *numpad_multiply; const char *numpad_minus; const char *numpad_plus; const char *numpad_decimal; const char *apple_keypad_num; /* For example, "%1 (Keypad)" */ const char *apple_keypad_divide; const char *apple_keypad_multiply; const char *apple_keypad_minus; const char *apple_keypad_plus; const char *apple_keypad_decimal; const char *apple_keypad_equal; const char *mouse_num; /* For example, "Mouse %1" */ const char *escape; }; /* This function is an optional way to provide translations for specific keys * that may not have translations. If the operating system can provide * translations for these keys, it will use the operating system's translation * over these translations. If no translations are specified, it will use * the default English translations for that specific operating system. 
*/ EXPORT void obs_hotkeys_set_translations_s(struct obs_hotkeys_translations *translations, size_t size); #endif #define obs_hotkeys_set_translations(translations) \ obs_hotkeys_set_translations_s(translations, sizeof(struct obs_hotkeys_translations)) EXPORT void obs_hotkeys_set_audio_hotkeys_translations(const char *mute, const char *unmute, const char *push_to_mute, const char *push_to_talk); EXPORT void obs_hotkeys_set_sceneitem_hotkeys_translations(const char *show, const char *hide); /* registering hotkeys (giving hotkeys a name and a function) */ typedef void (*obs_hotkey_func)(void *data, obs_hotkey_id id, obs_hotkey_t *hotkey, bool pressed); EXPORT obs_hotkey_id obs_hotkey_register_frontend(const char *name, const char *description, obs_hotkey_func func, void *data); EXPORT obs_hotkey_id obs_hotkey_register_encoder(obs_encoder_t *encoder, const char *name, const char *description, obs_hotkey_func func, void *data); EXPORT obs_hotkey_id obs_hotkey_register_output(obs_output_t *output, const char *name, const char *description, obs_hotkey_func func, void *data); EXPORT obs_hotkey_id obs_hotkey_register_service(obs_service_t *service, const char *name, const char *description, obs_hotkey_func func, void *data); EXPORT obs_hotkey_id obs_hotkey_register_source(obs_source_t *source, const char *name, const char *description, obs_hotkey_func func, void *data); typedef bool (*obs_hotkey_active_func)(void *data, obs_hotkey_pair_id id, obs_hotkey_t *hotkey, bool pressed); EXPORT obs_hotkey_pair_id obs_hotkey_pair_register_frontend(const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1); EXPORT obs_hotkey_pair_id obs_hotkey_pair_register_encoder(obs_encoder_t *encoder, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1); EXPORT 
obs_hotkey_pair_id obs_hotkey_pair_register_output(obs_output_t *output, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1); EXPORT obs_hotkey_pair_id obs_hotkey_pair_register_service(obs_service_t *service, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1); EXPORT obs_hotkey_pair_id obs_hotkey_pair_register_source(obs_source_t *source, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1); EXPORT void obs_hotkey_unregister(obs_hotkey_id id); EXPORT void obs_hotkey_pair_unregister(obs_hotkey_pair_id id); /* loading hotkeys (associating a hotkey with a physical key and modifiers) */ EXPORT void obs_hotkey_load_bindings(obs_hotkey_id id, obs_key_combination_t *combinations, size_t num); EXPORT void obs_hotkey_load(obs_hotkey_id id, obs_data_array_t *data); EXPORT void obs_hotkeys_load_encoder(obs_encoder_t *encoder, obs_data_t *hotkeys); EXPORT void obs_hotkeys_load_output(obs_output_t *output, obs_data_t *hotkeys); EXPORT void obs_hotkeys_load_service(obs_service_t *service, obs_data_t *hotkeys); EXPORT void obs_hotkeys_load_source(obs_source_t *source, obs_data_t *hotkeys); EXPORT void obs_hotkey_pair_load(obs_hotkey_pair_id id, obs_data_array_t *data0, obs_data_array_t *data1); EXPORT obs_data_array_t *obs_hotkey_save(obs_hotkey_id id); EXPORT void obs_hotkey_pair_save(obs_hotkey_pair_id id, obs_data_array_t **p_data0, obs_data_array_t **p_data1); EXPORT obs_data_t *obs_hotkeys_save_encoder(obs_encoder_t *encoder); EXPORT obs_data_t *obs_hotkeys_save_output(obs_output_t *output); EXPORT obs_data_t *obs_hotkeys_save_service(obs_service_t *service); EXPORT obs_data_t 
*obs_hotkeys_save_source(obs_source_t *source);

/* enumerating hotkeys */
typedef bool (*obs_hotkey_enum_func)(void *data, obs_hotkey_id id, obs_hotkey_t *key);
EXPORT void obs_enum_hotkeys(obs_hotkey_enum_func func, void *data);

/* enumerating bindings */
typedef bool (*obs_hotkey_binding_enum_func)(void *data, size_t idx, obs_hotkey_binding_t *binding);
EXPORT void obs_enum_hotkey_bindings(obs_hotkey_binding_enum_func func, void *data);

/* hotkey event control */
EXPORT void obs_hotkey_inject_event(obs_key_combination_t hotkey, bool pressed);
EXPORT void obs_hotkey_enable_background_press(bool enable);

/* hotkey callback routing (trigger callbacks through e.g. a UI thread) */
typedef void (*obs_hotkey_callback_router_func)(void *data, obs_hotkey_id id, bool pressed);
EXPORT void obs_hotkey_set_callback_routing_func(obs_hotkey_callback_router_func func, void *data);
EXPORT void obs_hotkey_trigger_routed_callback(obs_hotkey_id id, bool pressed);

/* hotkey callbacks won't be processed if callback rerouting is enabled and no
 * router func is set */
EXPORT void obs_hotkey_enable_callback_rerouting(bool enable);

/* misc */
typedef void (*obs_hotkey_atomic_update_func)(void *);
EXPORT void obs_hotkey_update_atomic(obs_hotkey_atomic_update_func func, void *data);

struct dstr;
EXPORT void obs_key_to_str(obs_key_t key, struct dstr *str);
EXPORT void obs_key_combination_to_str(obs_key_combination_t key, struct dstr *str);

EXPORT obs_key_t obs_key_from_virtual_key(int code);
EXPORT int obs_key_to_virtual_key(obs_key_t key);

EXPORT const char *obs_key_to_name(obs_key_t key);
EXPORT obs_key_t obs_key_from_name(const char *name);

static inline bool obs_key_combination_is_empty(obs_key_combination_t combo)
{
	return !combo.modifiers && combo.key == OBS_KEY_NONE;
}

#ifdef __cplusplus
}
#endif

/* ==== obs-studio-32.1.0-sources/libobs/obs-hevc.c ==== */
/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

#include "obs-hevc.h"

#include "obs.h"
#include "obs-nal.h"
#include "util/array-serializer.h"

bool obs_hevc_keyframe(const uint8_t *data, size_t size)
{
	const uint8_t *nal_start, *nal_end;
	const uint8_t *end = data + size;

	nal_start = obs_nal_find_startcode(data, end);
	while (true) {
		while (nal_start < end && !*(nal_start++))
			;

		if (nal_start == end)
			break;

		const uint8_t type = (nal_start[0] & 0x7F) >> 1;

		if (type <= OBS_HEVC_NAL_RSV_IRAP_VCL23)
			return type >= OBS_HEVC_NAL_BLA_W_LP;

		nal_end = obs_nal_find_startcode(nal_start, end);
		nal_start = nal_end;
	}

	return false;
}

static int compute_hevc_keyframe_priority(const uint8_t *nal_start, bool *is_keyframe, int priority)
{
	int new_priority;

	// HEVC contains NAL unit specifier at [6..1] bits of
	// the byte next to the startcode 0x000001
	const int type = (nal_start[0] & 0x7F) >> 1;

	switch (type) {
	case OBS_HEVC_NAL_BLA_W_LP:
	case OBS_HEVC_NAL_BLA_W_RADL:
	case OBS_HEVC_NAL_BLA_N_LP:
	case OBS_HEVC_NAL_IDR_W_RADL:
	case OBS_HEVC_NAL_IDR_N_LP:
	case OBS_HEVC_NAL_CRA_NUT:
	case OBS_HEVC_NAL_RSV_IRAP_VCL22:
	case OBS_HEVC_NAL_RSV_IRAP_VCL23:
		/* intra random access point (IRAP) picture, keyframe and highest priority */
		*is_keyframe = true;
		new_priority =
OBS_NAL_PRIORITY_HIGHEST; break; case OBS_HEVC_NAL_TRAIL_R: case OBS_HEVC_NAL_TSA_R: case OBS_HEVC_NAL_STSA_R: case OBS_HEVC_NAL_RADL_R: case OBS_HEVC_NAL_RASL_R: /* sub-layer reference picture (mainly P-frames), high priority */ new_priority = OBS_NAL_PRIORITY_HIGH; break; case OBS_HEVC_NAL_TRAIL_N: case OBS_HEVC_NAL_TSA_N: case OBS_HEVC_NAL_STSA_N: case OBS_HEVC_NAL_RADL_N: case OBS_HEVC_NAL_RASL_N: /* sub-layer non-reference (SLNR) picture (mainly B-frames), disposable */ new_priority = OBS_NAL_PRIORITY_DISPOSABLE; break; default: new_priority = OBS_NAL_PRIORITY_DISPOSABLE; } return priority > new_priority ? priority : new_priority; } static void serialize_hevc_data(struct serializer *s, const uint8_t *data, size_t size, bool *is_keyframe, int *priority) { const uint8_t *const end = data + size; const uint8_t *nal_start = obs_nal_find_startcode(data, end); while (true) { while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; *priority = compute_hevc_keyframe_priority(nal_start, is_keyframe, *priority); const uint8_t *const nal_end = obs_nal_find_startcode(nal_start, end); const size_t nal_size = nal_end - nal_start; s_wb32(s, (uint32_t)nal_size); s_write(s, nal_start, nal_size); nal_start = nal_end; } } void obs_parse_hevc_packet(struct encoder_packet *hevc_packet, const struct encoder_packet *src) { struct array_output_data output; struct serializer s; long ref = 1; array_output_serializer_init(&s, &output); *hevc_packet = *src; serialize(&s, &ref, sizeof(ref)); serialize_hevc_data(&s, src->data, src->size, &hevc_packet->keyframe, &hevc_packet->priority); hevc_packet->data = output.bytes.array + sizeof(ref); hevc_packet->size = output.bytes.num - sizeof(ref); hevc_packet->drop_priority = hevc_packet->priority; } int obs_parse_hevc_packet_priority(const struct encoder_packet *packet) { int priority = packet->priority; const uint8_t *const data = packet->data; const uint8_t *const end = data + packet->size; const uint8_t *nal_start = 
obs_nal_find_startcode(data, end); while (true) { while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; bool unused; priority = compute_hevc_keyframe_priority(nal_start, &unused, priority); nal_start = obs_nal_find_startcode(nal_start, end); } return priority; } void obs_extract_hevc_headers(const uint8_t *packet, size_t size, uint8_t **new_packet_data, size_t *new_packet_size, uint8_t **header_data, size_t *header_size, uint8_t **sei_data, size_t *sei_size) { DARRAY(uint8_t) new_packet; DARRAY(uint8_t) header; DARRAY(uint8_t) sei; const uint8_t *nal_start, *nal_end, *nal_codestart; const uint8_t *end = packet + size; da_init(new_packet); da_init(header); da_init(sei); nal_start = obs_nal_find_startcode(packet, end); nal_end = NULL; while (nal_end != end) { nal_codestart = nal_start; while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; const uint8_t type = (nal_start[0] & 0x7F) >> 1; nal_end = obs_nal_find_startcode(nal_start, end); if (!nal_end) nal_end = end; if (type == OBS_HEVC_NAL_VPS || type == OBS_HEVC_NAL_SPS || type == OBS_HEVC_NAL_PPS) { da_push_back_array(header, nal_codestart, nal_end - nal_codestart); } else if (type == OBS_HEVC_NAL_SEI_PREFIX || type == OBS_HEVC_NAL_SEI_SUFFIX) { da_push_back_array(sei, nal_codestart, nal_end - nal_codestart); } else { da_push_back_array(new_packet, nal_codestart, nal_end - nal_codestart); } nal_start = nal_end; } *new_packet_data = new_packet.array; *new_packet_size = new_packet.num; *header_data = header.array; *header_size = header.num; *sei_data = sei.array; *sei_size = sei.num; } obs-studio-32.1.0-sources/libobs/obs-canvas.c000644 001751 001751 00000040100 15153330235 021701 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2025 by Dennis Sädtler This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free 
Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "obs.h" #include "obs-internal.h" #include "obs-scene.h" /* The primary canvas has static name/uuid. */ static const char *MAIN_CANVAS_NAME = "Main"; static const char *MAIN_CANVAS_UUID = "6c69626f-6273-4c00-9d88-c5136d61696e"; /* Internal flag to mark a canvas as removed */ static const uint32_t REMOVED = 1u << 31; /*** Signals ***/ static const char *canvas_signals[] = { "void destroy(ptr canvas)", "void remove(ptr canvas)", "void video_reset(ptr canvas)", "void source_add(ptr canvas, ptr source)", "void source_remove(ptr canvas, ptr source)", "void source_rename(ptr source, string new_name, string prev_name)", "void rename(ptr source, string new_name, string prev_name)", "void channel_change(ptr canvas, int channel, in out ptr source, ptr prev_source)", NULL, }; static inline void canvas_dosignal(obs_canvas_t *canvas, const char *signal_obs, const char *signal_source) { struct calldata data; uint8_t stack[128]; calldata_init_fixed(&data, stack, sizeof(stack)); calldata_set_ptr(&data, "canvas", canvas); if (signal_obs) signal_handler_signal(obs->signals, signal_obs, &data); if (signal_source) signal_handler_signal(canvas->context.signals, signal_source, &data); } static inline void canvas_dosignal_source(const char *signal, obs_canvas_t *canvas, obs_source_t *source) { struct calldata data; uint8_t stack[128]; calldata_init_fixed(&data, stack, sizeof(stack)); calldata_set_ptr(&data, "canvas", canvas); calldata_set_ptr(&data, "source", source); 
signal_handler_signal(canvas->context.signals, signal, &data); } /*** Reference Counting ***/ void obs_canvas_release(obs_canvas_t *canvas) { if (!obs && canvas) { blog(LOG_WARNING, "Tried to release a canvas when the OBS core is shut down!"); return; } if (!canvas) return; obs_weak_canvas_t *control = (obs_weak_canvas_t *)canvas->context.control; if (obs_ref_release(&control->ref)) { obs_canvas_destroy(canvas); obs_weak_canvas_release(control); } } void obs_weak_canvas_addref(obs_weak_canvas_t *weak) { if (!weak) return; obs_weak_ref_addref(&weak->ref); } void obs_weak_canvas_release(obs_weak_canvas_t *weak) { if (!weak) return; if (obs_weak_ref_release(&weak->ref)) bfree(weak); } obs_canvas_t *obs_canvas_get_ref(obs_canvas_t *canvas) { if (!canvas) return NULL; return obs_weak_canvas_get_canvas((obs_weak_canvas_t *)canvas->context.control); } obs_weak_canvas_t *obs_canvas_get_weak_canvas(obs_canvas_t *canvas) { if (!canvas) return NULL; obs_weak_canvas_t *weak = (obs_weak_canvas_t *)canvas->context.control; obs_weak_canvas_addref(weak); return weak; } obs_canvas_t *obs_weak_canvas_get_canvas(obs_weak_canvas_t *weak) { if (!weak) return NULL; if (obs_weak_ref_get_ref(&weak->ref)) return weak->canvas; return NULL; } /*** Creation / Destruction ***/ static obs_canvas_t *obs_canvas_create_internal(const char *name, const char *uuid, struct obs_video_info *ovi, uint32_t flags, bool private) { struct obs_canvas *canvas = bzalloc(sizeof(struct obs_canvas)); canvas->flags = flags; if (!obs_context_data_init(&canvas->context, OBS_OBJ_TYPE_CANVAS, NULL, name, uuid, NULL, private)) return NULL; if (!signal_handler_add_array(canvas->context.signals, canvas_signals)) { obs_context_data_free(&canvas->context); bfree(canvas); return NULL; } if (pthread_mutex_init_recursive(&canvas->sources_mutex) != 0) { obs_context_data_free(&canvas->context); bfree(canvas); return NULL; } obs_view_init(&canvas->view, flags & ACTIVATE ? 
MAIN_VIEW : AUX_VIEW); obs_context_init_control(&canvas->context, canvas, (obs_destroy_cb)obs_canvas_destroy); /* A canvas can be created without a mix. */ if (ovi) { canvas->ovi = *ovi; canvas->mix = obs_create_video_mix(ovi); if (canvas->mix) { canvas->mix->view = &canvas->view; canvas->mix->mix_audio = (flags & MIX_AUDIO) != 0; pthread_mutex_lock(&obs->video.mixes_mutex); da_push_back(obs->video.mixes, &canvas->mix); pthread_mutex_unlock(&obs->video.mixes_mutex); } } obs_context_data_insert_uuid(&canvas->context, &obs->data.canvases_mutex, &obs->data.canvases); if (!private) { obs_context_data_insert_name(&canvas->context, &obs->data.canvases_mutex, &obs->data.named_canvases); canvas_dosignal(canvas, "canvas_create", NULL); } blog(LOG_DEBUG, "%scanvas '%s' (%s) created", private ? "private " : "", canvas->context.name, canvas->context.uuid); return canvas; } obs_canvas_t *obs_create_main_canvas(void) { const uint32_t main_flags = MAIN | PROGRAM; return obs_canvas_create_internal(MAIN_CANVAS_NAME, MAIN_CANVAS_UUID, NULL, main_flags, false); } obs_canvas_t *obs_canvas_create(const char *name, struct obs_video_info *ovi, uint32_t flags) { flags &= ~MAIN; /* Prevent user from creating a MAIN canvas. */ return obs_canvas_create_internal(name, NULL, ovi, flags, false); } obs_canvas_t *obs_canvas_create_private(const char *name, struct obs_video_info *ovi, uint32_t flags) { flags &= ~MAIN; /* Prevent user from creating a MAIN canvas. */ return obs_canvas_create_internal(name, NULL, ovi, flags, true); } void obs_canvas_destroy(obs_canvas_t *canvas) { canvas_dosignal(canvas, "canvas_destroy", "destroy"); obs_canvas_clear_mix(canvas); obs_source_t *source = canvas->sources; while (source) { /* Canvases can hold strong refs to scene sources, release them here. 
*/ if (canvas->flags & SCENE_REF && obs_source_is_scene(source)) obs_source_release(source); source = source->context.hh.next; } obs_context_data_remove_uuid(&canvas->context, &obs->data.canvases_mutex, &obs->data.canvases); if (!canvas->context.private) { obs_context_data_remove_name(&canvas->context, &obs->data.canvases_mutex, &obs->data.named_canvases); } blog(LOG_DEBUG, "%scanvas '%s' (%s) destroyed", canvas->context.private ? "private " : "", canvas->context.name, canvas->context.uuid); pthread_mutex_destroy(&canvas->sources_mutex); obs_context_data_free(&canvas->context); obs_view_free(&canvas->view); bfree(canvas); } /*** Saving / Loading ***/ obs_data_t *obs_save_canvas(obs_canvas_t *canvas) { if (canvas->flags & (EPHEMERAL | REMOVED)) return NULL; obs_data_t *canvas_data = obs_data_create(); obs_data_set_string(canvas_data, "name", canvas->context.name); obs_data_set_string(canvas_data, "uuid", canvas->context.uuid); obs_data_set_bool(canvas_data, "private", canvas->context.private); obs_data_set_int(canvas_data, "flags", canvas->flags); return canvas_data; } obs_canvas_t *obs_load_canvas(obs_data_t *data) { const char *name = obs_data_get_string(data, "name"); const char *uuid = obs_data_get_string(data, "uuid"); const bool private = obs_data_get_bool(data, "private"); uint32_t flags = (uint32_t)obs_data_get_int(data, "flags"); flags &= ~MAIN; /* Prevent user from creating a MAIN canvas. 
*/ return obs_canvas_create_internal(name, uuid, NULL, flags, private); } /*** Internal API ***/ /* Free canvas mix (if any) */ void obs_canvas_clear_mix(obs_canvas_t *canvas) { if (!canvas->mix) return; pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0; i < obs->video.mixes.num; i++) { struct obs_core_video_mix *mix = obs->video.mixes.array[i]; if (mix == canvas->mix) { da_erase(obs->video.mixes, i); obs_free_video_mix(mix); break; } } pthread_mutex_unlock(&obs->video.mixes_mutex); canvas->mix = NULL; } /* Clear mixes attached to canvases */ void obs_free_canvas_mixes(void) { pthread_mutex_lock(&obs->data.canvases_mutex); struct obs_context_data *ctx, *tmp; HASH_ITER (hh, (struct obs_context_data *)obs->data.canvases, ctx, tmp) { obs_canvas_t *canvas = (obs_canvas_t *)ctx; obs_canvas_clear_mix(canvas); } pthread_mutex_unlock(&obs->data.canvases_mutex); } bool obs_canvas_reset_video_internal(obs_canvas_t *canvas, struct obs_video_info *ovi) { if (!ovi && !canvas->mix) return true; obs_canvas_clear_mix(canvas); if (ovi) canvas->ovi = *ovi; canvas->mix = obs_create_video_mix(&canvas->ovi); if (canvas->mix) { canvas->mix->view = &canvas->view; canvas->mix->mix_audio = (canvas->flags & MIX_AUDIO) != 0; pthread_mutex_lock(&obs->video.mixes_mutex); da_push_back(obs->video.mixes, &canvas->mix); pthread_mutex_unlock(&obs->video.mixes_mutex); } canvas_dosignal(canvas, "canvas_video_reset", "video_reset"); return !!canvas->mix; } void obs_canvas_insert_source(obs_canvas_t *canvas, obs_source_t *source) { if (canvas->flags & SCENE_REF && obs_source_is_scene(source)) obs_source_get_ref(source); if (source->canvas) obs_canvas_remove_source(source); source->canvas = obs_canvas_get_weak_canvas(canvas); obs_context_data_insert_name(&source->context, &canvas->sources_mutex, &canvas->sources); canvas_dosignal_source("source_add", canvas, source); } static bool remove_groups_items_cb(obs_scene_t *scene, obs_sceneitem_t *item, void *param) { UNUSED_PARAMETER(scene); 
obs_source_t *source = param; if (item->source == source) obs_sceneitem_remove(item); return true; } static bool remove_groups_enum_cb(void *param, obs_source_t *scene_source) { obs_source_t *source = param; obs_scene_t *scene = obs_scene_from_source(scene_source); obs_scene_enum_items(scene, remove_groups_items_cb, source); return true; } void obs_canvas_remove_source(obs_source_t *source) { obs_canvas_t *canvas = obs_weak_canvas_get_canvas(source->canvas); if (canvas) { obs_weak_canvas_release(source->canvas); obs_context_data_remove_name(&source->context, &canvas->sources_mutex, &canvas->sources); canvas_dosignal_source("source_remove", canvas, source); if (canvas->flags & SCENE_REF && obs_source_is_scene(source)) obs_source_release(source); /* If source is a group, also remove it from all other scenes in the old canvas */ if (obs_source_is_group(source)) obs_canvas_enum_scenes(canvas, remove_groups_enum_cb, source); obs_canvas_release(canvas); } source->canvas = NULL; } void obs_canvas_rename_source(obs_source_t *source, const char *name) { obs_canvas_t *canvas = obs_weak_canvas_get_canvas(source->canvas); if (canvas) { struct calldata data; char *prev_name = bstrdup(source->context.name); obs_context_data_setname_ht(&source->context, name, &canvas->sources); calldata_init(&data); calldata_set_ptr(&data, "source", source); calldata_set_string(&data, "new_name", source->context.name); calldata_set_string(&data, "prev_name", prev_name); signal_handler_signal(source->context.signals, "rename", &data); signal_handler_signal(canvas->context.signals, "source_rename", &data); if (canvas->flags & MAIN) signal_handler_signal(obs->signals, "source_rename", &data); calldata_free(&data); bfree(prev_name); obs_canvas_release(canvas); } } /*** Public Canvas Object API ***/ bool obs_canvas_reset_video(obs_canvas_t *canvas, struct obs_video_info *ovi) { if (canvas->flags & MAIN || obs_video_active()) return false; return obs_canvas_reset_video_internal(canvas, ovi); } video_t 
*obs_canvas_get_video(const obs_canvas_t *canvas)
{
	return canvas->mix ? canvas->mix->video : NULL;
}

bool obs_canvas_get_video_info(const obs_canvas_t *canvas, struct obs_video_info *ovi)
{
	if (!obs->video.graphics || !canvas->mix)
		return false;

	*ovi = canvas->ovi;
	return true;
}

signal_handler_t *obs_canvas_get_signal_handler(obs_canvas_t *canvas)
{
	return canvas->context.signals;
}

void obs_canvas_set_channel(obs_canvas_t *canvas, uint32_t channel, obs_source_t *source)
{
	assert(channel < MAX_CHANNELS);

	if (channel >= MAX_CHANNELS)
		return;

	struct obs_view *view = &canvas->view;

	pthread_mutex_lock(&view->channels_mutex);

	source = obs_source_get_ref(source);

	obs_source_t *prev_source = view->channels[channel];
	if (source == prev_source) {
		obs_source_release(source);
		pthread_mutex_unlock(&view->channels_mutex);
		return;
	}

	struct calldata params = {0};
	calldata_set_ptr(&params, "canvas", canvas);
	calldata_set_int(&params, "channel", channel);
	calldata_set_ptr(&params, "prev_source", prev_source);
	calldata_set_ptr(&params, "source", source);
	signal_handler_signal(canvas->context.signals, "channel_change", &params);
	if (canvas->flags & MAIN)
		signal_handler_signal(obs->signals, "channel_change", &params);

	/* For some reason the original implementation allows overriding the source from the callback,
	 * so just in case support that here as well. This isn't used anywhere in OBS itself. */
	calldata_get_ptr(&params, "source", &source);
	view->channels[channel] = source;
	calldata_free(&params);

	pthread_mutex_unlock(&view->channels_mutex);

	if (source)
		obs_source_activate(source, view->type);
	if (prev_source) {
		obs_source_deactivate(prev_source, view->type);
		obs_source_release(prev_source);
	}
}

obs_source_t *obs_canvas_get_channel(obs_canvas_t *canvas, uint32_t channel)
{
	return obs_view_get_source(&canvas->view, channel);
}

obs_scene_t *obs_canvas_scene_create(obs_canvas_t *canvas, const char *name)
{
	struct obs_source *source = obs_source_create_canvas(canvas, "scene", name, NULL, NULL);
	return source->context.data;
}

void obs_canvas_scene_remove(obs_scene_t *scene)
{
	obs_canvas_remove_source(scene->source);
}

void obs_canvas_set_name(obs_canvas_t *canvas, const char *name)
{
	if (!name || !*name)
		return;
	if (canvas->flags & MAIN) /* Do not allow renaming main canvases. */
		return;
	if (strcmp(name, canvas->context.name) == 0)
		return;

	char *prev_name = bstrdup(canvas->context.name);

	if (canvas->context.private)
		obs_context_data_setname(&canvas->context, name);
	else
		obs_context_data_setname_ht(&canvas->context, name, &obs->data.named_canvases);

	struct calldata data;
	calldata_init(&data);
	calldata_set_ptr(&data, "canvas", canvas);
	calldata_set_string(&data, "new_name", canvas->context.name);
	calldata_set_string(&data, "prev_name", prev_name);
	signal_handler_signal(canvas->context.signals, "rename", &data);
	if (!canvas->context.private)
		signal_handler_signal(obs->signals, "canvas_rename", &data);
	calldata_free(&data);
	bfree(prev_name);
}

const char *obs_canvas_get_name(const obs_canvas_t *canvas)
{
	return canvas->context.name;
}

const char *obs_canvas_get_uuid(const obs_canvas_t *canvas)
{
	return canvas->context.uuid;
}

uint32_t obs_canvas_get_flags(const obs_canvas_t *canvas)
{
	return canvas->flags;
}

static bool enum_move_cb(obs_scene_t *scene, obs_sceneitem_t *item, void *param)
{
	UNUSED_PARAMETER(scene);
	obs_canvas_t *dst = param;
	obs_source_t *source =
item->source;
	if (obs_source_is_group(source)) {
		obs_canvas_remove_source(source);
		obs_canvas_insert_source(dst, source);
	}
	return true;
}

void obs_canvas_move_scene(obs_scene_t *scene, obs_canvas_t *dst)
{
	obs_source_t *source = scene->source;
	obs_canvas_remove_source(source);
	obs_canvas_insert_source(dst, source);
	/* Also move all groups within this scene */
	obs_scene_enum_items(scene, enum_move_cb, dst);
}

void obs_canvas_remove(obs_canvas_t *canvas)
{
	/* Do not allow removing the main canvas, or canvases already marked as removed. */
	if (canvas->flags & (REMOVED | MAIN))
		return;

	obs_canvas_t *c = obs_canvas_get_ref(canvas);
	if (c) {
		c->flags |= REMOVED;
		canvas_dosignal(c, "canvas_remove", "remove");
		obs_canvas_release(c);
	}
}

bool obs_canvas_removed(obs_canvas_t *canvas)
{
	return (canvas->flags & REMOVED) != 0;
}

bool obs_canvas_has_video(obs_canvas_t *canvas)
{
	return canvas->mix != NULL;
}

void obs_canvas_render(obs_canvas_t *canvas)
{
	obs_view_render(&canvas->view);
}

obs-studio-32.1.0-sources/libobs/obs-output.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by the Free
    Software Foundation, either version 2 of the License, or (at your option)
    any later version.

    This program is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
    more details.

    You should have received a copy of the GNU General Public License along
    with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ #include #include "util/platform.h" #include "util/util_uint64.h" #include "util/array-serializer.h" #include "graphics/math-extra.h" #include "obs.h" #include "obs-internal.h" #include "obs-av1.h" #include #include #define get_weak(output) ((obs_weak_output_t *)output->context.control) #define RECONNECT_RETRY_MAX_MSEC (15 * 60 * 1000) #define RECONNECT_RETRY_BASE_EXP 1.5f static inline bool active(const struct obs_output *output) { return os_atomic_load_bool(&output->active); } static inline bool reconnecting(const struct obs_output *output) { return os_atomic_load_bool(&output->reconnecting); } static inline bool stopping(const struct obs_output *output) { return os_event_try(output->stopping_event) == EAGAIN; } static inline bool delay_active(const struct obs_output *output) { return os_atomic_load_bool(&output->delay_active); } static inline bool delay_capturing(const struct obs_output *output) { return os_atomic_load_bool(&output->delay_capturing); } static inline bool data_capture_ending(const struct obs_output *output) { return os_atomic_load_bool(&output->end_data_capture_thread_active); } static inline bool flag_encoded(const struct obs_output *output) { return (output->info.flags & OBS_OUTPUT_ENCODED) != 0; } static inline bool log_flag_encoded(const struct obs_output *output, const char *func_name, bool inverse_log) { const char *prefix = inverse_log ? 
"n encoded" : " raw"; bool ret = flag_encoded(output); if ((!inverse_log && !ret) || (inverse_log && ret)) blog(LOG_WARNING, "Output '%s': Tried to use %s on a%s output", output->context.name, func_name, prefix); return ret; } static inline bool flag_video(const struct obs_output *output) { return (output->info.flags & OBS_OUTPUT_VIDEO) != 0; } static inline bool log_flag_video(const struct obs_output *output, const char *func_name) { bool ret = flag_video(output); if (!ret) blog(LOG_WARNING, "Output '%s': Tried to use %s on a non-video output", output->context.name, func_name); return ret; } static inline bool flag_audio(const struct obs_output *output) { return (output->info.flags & OBS_OUTPUT_AUDIO) != 0; } static inline bool log_flag_audio(const struct obs_output *output, const char *func_name) { bool ret = flag_audio(output); if (!ret) blog(LOG_WARNING, "Output '%s': Tried to use %s on a non-audio output", output->context.name, func_name); return ret; } static inline bool flag_service(const struct obs_output *output) { return (output->info.flags & OBS_OUTPUT_SERVICE) != 0; } static inline bool log_flag_service(const struct obs_output *output, const char *func_name) { bool ret = flag_service(output); if (!ret) blog(LOG_WARNING, "Output '%s': Tried to use %s on a non-service output", output->context.name, func_name); return ret; } const struct obs_output_info *find_output(const char *id) { size_t i; for (i = 0; i < obs->output_types.num; i++) if (strcmp(obs->output_types.array[i].id, id) == 0) return obs->output_types.array + i; return NULL; } const char *obs_output_get_display_name(const char *id) { const struct obs_output_info *info = find_output(id); return (info != NULL) ? 
info->get_name(info->type_data) : NULL; } obs_module_t *obs_output_get_module(const char *id) { obs_module_t *module = obs->first_module; while (module) { for (size_t i = 0; i < module->outputs.num; i++) { if (strcmp(module->outputs.array[i], id) == 0) { return module; } } module = module->next; } module = obs->first_disabled_module; while (module) { for (size_t i = 0; i < module->outputs.num; i++) { if (strcmp(module->outputs.array[i], id) == 0) { return module; } } module = module->next; } return NULL; } enum obs_module_load_state obs_output_load_state(const char *id) { obs_module_t *module = obs_output_get_module(id); if (!module) { return OBS_MODULE_MISSING; } return module->load_state; } static const char *output_signals[] = { "void start(ptr output)", "void stop(ptr output, int code)", "void pause(ptr output)", "void unpause(ptr output)", "void starting(ptr output)", "void stopping(ptr output)", "void activate(ptr output)", "void deactivate(ptr output)", "void reconnect(ptr output)", "void reconnect_success(ptr output)", NULL, }; static bool init_output_handlers(struct obs_output *output, const char *name, obs_data_t *settings, obs_data_t *hotkey_data) { if (!obs_context_data_init(&output->context, OBS_OBJ_TYPE_OUTPUT, settings, name, NULL, hotkey_data, false)) return false; signal_handler_add_array(output->context.signals, output_signals); return true; } obs_output_t *obs_output_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data) { const struct obs_output_info *info = find_output(id); struct obs_output *output; int ret; output = bzalloc(sizeof(struct obs_output)); pthread_mutex_init_value(&output->interleaved_mutex); pthread_mutex_init_value(&output->delay_mutex); pthread_mutex_init_value(&output->pause.mutex); pthread_mutex_init_value(&output->pkt_callbacks_mutex); if (pthread_mutex_init(&output->interleaved_mutex, NULL) != 0) goto fail; if (pthread_mutex_init(&output->delay_mutex, NULL) != 0) goto fail; if 
(pthread_mutex_init(&output->pause.mutex, NULL) != 0) goto fail; if (pthread_mutex_init(&output->pkt_callbacks_mutex, NULL) != 0) goto fail; if (os_event_init(&output->stopping_event, OS_EVENT_TYPE_MANUAL) != 0) goto fail; if (!init_output_handlers(output, name, settings, hotkey_data)) goto fail; os_event_signal(output->stopping_event); if (!info) { blog(LOG_ERROR, "Output ID '%s' not found", id); output->info.id = bstrdup(id); output->owns_info_id = true; } else { output->info = *info; } if (!flag_encoded(output)) { output->video = obs_get_video(); output->audio = obs_get_audio(); } if (output->info.get_defaults) output->info.get_defaults(output->context.settings); ret = os_event_init(&output->reconnect_stop_event, OS_EVENT_TYPE_MANUAL); if (ret < 0) goto fail; output->reconnect_retry_sec = 2; output->reconnect_retry_max = 20; output->reconnect_retry_exp = RECONNECT_RETRY_BASE_EXP + (rand_float(0) * 0.05f); output->valid = true; obs_context_init_control(&output->context, output, (obs_destroy_cb)obs_output_destroy); obs_context_data_insert(&output->context, &obs->data.outputs_mutex, &obs->data.first_output); if (info) output->context.data = info->create(output->context.settings, output); if (!output->context.data) blog(LOG_ERROR, "Failed to create output '%s'!", name); blog(LOG_DEBUG, "output '%s' (%s) created", name, id); return output; fail: obs_output_destroy(output); return NULL; } static inline void free_packets(struct obs_output *output) { for (size_t i = 0; i < output->interleaved_packets.num; i++) obs_encoder_packet_release(output->interleaved_packets.array + i); da_free(output->interleaved_packets); } static inline void clear_raw_audio_buffers(obs_output_t *output) { for (size_t i = 0; i < MAX_AUDIO_MIXES; i++) { for (size_t j = 0; j < MAX_AV_PLANES; j++) { deque_free(&output->audio_buffer[i][j]); } } } static void destroy_caption_track(struct caption_track_data **ctrack_ptr) { if (!ctrack_ptr || !*ctrack_ptr) { return; } struct caption_track_data *ctrack 
= *ctrack_ptr; pthread_mutex_destroy(&ctrack->caption_mutex); deque_free(&ctrack->caption_data); bfree(ctrack); *ctrack_ptr = NULL; } void obs_output_destroy(obs_output_t *output) { if (output) { obs_context_data_remove(&output->context); blog(LOG_DEBUG, "output '%s' destroyed", output->context.name); if (output->valid && active(output)) obs_output_actual_stop(output, true, 0); os_event_wait(output->stopping_event); if (data_capture_ending(output)) pthread_join(output->end_data_capture_thread, NULL); if (output->service) output->service->output = NULL; if (output->context.data) output->info.destroy(output->context.data); free_packets(output); for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (output->video_encoders[i]) { obs_encoder_remove_output(output->video_encoders[i], output); obs_encoder_release(output->video_encoders[i]); } if (output->caption_tracks[i]) { destroy_caption_track(&output->caption_tracks[i]); } } for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (output->audio_encoders[i]) { obs_encoder_remove_output(output->audio_encoders[i], output); obs_encoder_release(output->audio_encoders[i]); } } da_free(output->keyframe_group_tracking); for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) da_free(output->encoder_packet_times[i]); da_free(output->pkt_callbacks); clear_raw_audio_buffers(output); os_event_destroy(output->stopping_event); pthread_mutex_destroy(&output->pause.mutex); pthread_mutex_destroy(&output->interleaved_mutex); pthread_mutex_destroy(&output->delay_mutex); pthread_mutex_destroy(&output->pkt_callbacks_mutex); os_event_destroy(output->reconnect_stop_event); obs_context_data_free(&output->context); deque_free(&output->delay_data); if (output->owns_info_id) bfree((void *)output->info.id); if (output->last_error_message) bfree(output->last_error_message); bfree(output); } } const char *obs_output_get_name(const obs_output_t *output) { return obs_output_valid(output, "obs_output_get_name") ? 
output->context.name : NULL; } bool obs_output_actual_start(obs_output_t *output) { bool success = false; os_event_wait(output->stopping_event); output->stop_code = 0; if (output->last_error_message) { bfree(output->last_error_message); output->last_error_message = NULL; } if (output->context.data) success = output->info.start(output->context.data); if (success) { output->starting_drawn_count = obs->video.total_frames; output->starting_lagged_count = obs->video.lagged_frames; } if (os_atomic_load_long(&output->delay_restart_refs)) os_atomic_dec_long(&output->delay_restart_refs); for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { struct caption_track_data *ctrack = output->caption_tracks[i]; if (!ctrack) { continue; } pthread_mutex_lock(&ctrack->caption_mutex); ctrack->caption_timestamp = 0; deque_free(&ctrack->caption_data); deque_init(&ctrack->caption_data); pthread_mutex_unlock(&ctrack->caption_mutex); } return success; } bool obs_output_start(obs_output_t *output) { if (!obs_output_valid(output, "obs_output_start")) return false; if (!output->context.data) return false; if (flag_service(output) && !(obs_service_can_try_to_connect(output->service) && obs_service_initialize(output->service, output))) return false; if (output->delay_sec) { return obs_output_delay_start(output); } else { if (obs_output_actual_start(output)) { do_output_signal(output, "starting"); return true; } return false; } } static inline bool data_active(struct obs_output *output) { return os_atomic_load_bool(&output->data_active); } static void log_frame_info(struct obs_output *output) { struct obs_core_video *video = &obs->video; uint32_t drawn = video->total_frames - output->starting_drawn_count; uint32_t lagged = video->lagged_frames - output->starting_lagged_count; int dropped = obs_output_get_frames_dropped(output); int total = output->total_frames; double percentage_lagged = 0.0f; double percentage_dropped = 0.0f; if (drawn) percentage_lagged = (double)lagged / (double)drawn * 
100.0; if (dropped) percentage_dropped = (double)dropped / (double)total * 100.0; blog(LOG_INFO, "Output '%s': stopping", output->context.name); if (!dropped || !total) blog(LOG_INFO, "Output '%s': Total frames output: %d", output->context.name, total); else blog(LOG_INFO, "Output '%s': Total frames output: %d" " (%d attempted)", output->context.name, total - dropped, total); if (!lagged || !drawn) blog(LOG_INFO, "Output '%s': Total drawn frames: %" PRIu32, output->context.name, drawn); else blog(LOG_INFO, "Output '%s': Total drawn frames: %" PRIu32 " (%" PRIu32 " attempted)", output->context.name, drawn - lagged, drawn); if (drawn && lagged) blog(LOG_INFO, "Output '%s': Number of lagged frames due " "to rendering lag/stalls: %" PRIu32 " (%0.1f%%)", output->context.name, lagged, percentage_lagged); if (total && dropped) blog(LOG_INFO, "Output '%s': Number of dropped frames due " "to insufficient bandwidth/connection stalls: " "%d (%0.1f%%)", output->context.name, dropped, percentage_dropped); } static inline void signal_stop(struct obs_output *output); void obs_output_actual_stop(obs_output_t *output, bool force, uint64_t ts) { bool call_stop = true; bool was_reconnecting = false; if (stopping(output) && !force) return; obs_output_pause(output, false); os_event_reset(output->stopping_event); was_reconnecting = reconnecting(output) && !delay_active(output); if (reconnecting(output)) { os_event_signal(output->reconnect_stop_event); if (output->reconnect_thread_active) pthread_join(output->reconnect_thread, NULL); } if (force) { if (delay_active(output)) { call_stop = delay_capturing(output); os_atomic_set_bool(&output->delay_active, false); os_atomic_set_bool(&output->delay_capturing, false); output->stop_code = OBS_OUTPUT_SUCCESS; obs_output_end_data_capture(output); os_event_signal(output->stopping_event); } else { call_stop = true; } } else { call_stop = true; } if (output->context.data && call_stop) { output->info.stop(output->context.data, ts); } else if 
(was_reconnecting) { output->stop_code = OBS_OUTPUT_SUCCESS; signal_stop(output); os_event_signal(output->stopping_event); } for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { struct caption_track_data *ctrack = output->caption_tracks[i]; if (!ctrack) { continue; } while (ctrack->caption_head) { ctrack->caption_tail = ctrack->caption_head->next; bfree(ctrack->caption_head); ctrack->caption_head = ctrack->caption_tail; } } da_clear(output->keyframe_group_tracking); } void obs_output_stop(obs_output_t *output) { if (!obs_output_valid(output, "obs_output_stop")) return; if (!output->context.data) return; if (!active(output) && !reconnecting(output)) return; if (reconnecting(output)) { obs_output_force_stop(output); return; } if (flag_encoded(output) && output->active_delay_ns) { obs_output_delay_stop(output); } else if (!stopping(output)) { do_output_signal(output, "stopping"); obs_output_actual_stop(output, false, os_gettime_ns()); } } void obs_output_force_stop(obs_output_t *output) { if (!obs_output_valid(output, "obs_output_force_stop")) return; if (!stopping(output)) { output->stop_code = 0; do_output_signal(output, "stopping"); } obs_output_actual_stop(output, true, 0); } bool obs_output_active(const obs_output_t *output) { return (output != NULL) ? (active(output) || reconnecting(output)) : false; } uint32_t obs_output_get_flags(const obs_output_t *output) { return obs_output_valid(output, "obs_output_get_flags") ? output->info.flags : 0; } uint32_t obs_get_output_flags(const char *id) { const struct obs_output_info *info = find_output(id); return info ? info->flags : 0; } static inline obs_data_t *get_defaults(const struct obs_output_info *info) { obs_data_t *settings = obs_data_create(); if (info->get_defaults) info->get_defaults(settings); return settings; } obs_data_t *obs_output_defaults(const char *id) { const struct obs_output_info *info = find_output(id); return (info) ? 
get_defaults(info) : NULL; } obs_properties_t *obs_get_output_properties(const char *id) { const struct obs_output_info *info = find_output(id); if (info && info->get_properties) { obs_data_t *defaults = get_defaults(info); obs_properties_t *properties; properties = info->get_properties(NULL); obs_properties_apply_settings(properties, defaults); obs_data_release(defaults); return properties; } return NULL; } obs_properties_t *obs_output_properties(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_properties")) return NULL; if (output && output->info.get_properties) { obs_properties_t *props; props = output->info.get_properties(output->context.data); obs_properties_apply_settings(props, output->context.settings); return props; } return NULL; } void obs_output_update(obs_output_t *output, obs_data_t *settings) { if (!obs_output_valid(output, "obs_output_update")) return; obs_data_apply(output->context.settings, settings); if (output->info.update) output->info.update(output->context.data, output->context.settings); } obs_data_t *obs_output_get_settings(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_settings")) return NULL; obs_data_addref(output->context.settings); return output->context.settings; } bool obs_output_can_pause(const obs_output_t *output) { return obs_output_valid(output, "obs_output_can_pause") ? 
!!(output->info.flags & OBS_OUTPUT_CAN_PAUSE) : false; } static inline void end_pause(struct pause_data *pause, uint64_t ts) { if (!pause->ts_end) { pause->ts_end = ts; pause->ts_offset += pause->ts_end - pause->ts_start; } } static inline uint64_t get_closest_v_ts(struct pause_data *pause) { uint64_t interval = obs->video.video_frame_interval_ns; uint64_t i2 = interval * 2; uint64_t ts = os_gettime_ns(); return pause->last_video_ts + ((ts - pause->last_video_ts + i2) / interval) * interval; } static inline bool pause_can_start(struct pause_data *pause) { return !pause->ts_start && !pause->ts_end; } static inline bool pause_can_stop(struct pause_data *pause) { return !!pause->ts_start && !pause->ts_end; } static bool get_first_audio_encoder_index(const struct obs_output *output, size_t *index) { if (!index) return false; for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (output->audio_encoders[i]) { *index = i; return true; } } return false; } static bool get_first_video_encoder_index(const struct obs_output *output, size_t *index) { if (!index) return false; for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (output->video_encoders[i]) { *index = i; return true; } } return false; } static bool obs_encoded_output_pause(obs_output_t *output, bool pause) { obs_encoder_t *venc[MAX_OUTPUT_VIDEO_ENCODERS]; obs_encoder_t *aenc[MAX_OUTPUT_AUDIO_ENCODERS]; uint64_t closest_v_ts; bool success = false; for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) venc[i] = output->video_encoders[i]; for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) aenc[i] = output->audio_encoders[i]; for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (venc[i]) { pthread_mutex_lock(&venc[i]->pause.mutex); } } for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (aenc[i]) { pthread_mutex_lock(&aenc[i]->pause.mutex); } } /* ---------------------------- */ size_t first_venc_index; if (!get_first_video_encoder_index(output, &first_venc_index)) goto fail; 
closest_v_ts = get_closest_v_ts(&venc[first_venc_index]->pause); if (pause) { for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (venc[i] && !pause_can_start(&venc[i]->pause)) { goto fail; } } for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (aenc[i] && !pause_can_start(&aenc[i]->pause)) { goto fail; } } for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (venc[i]) { os_atomic_set_bool(&venc[i]->paused, true); venc[i]->pause.ts_start = closest_v_ts; } } for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (aenc[i]) { os_atomic_set_bool(&aenc[i]->paused, true); aenc[i]->pause.ts_start = closest_v_ts; } } } else { for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (venc[i] && !pause_can_stop(&venc[i]->pause)) { goto fail; } } for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (aenc[i] && !pause_can_stop(&aenc[i]->pause)) { goto fail; } } for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (venc[i]) { os_atomic_set_bool(&venc[i]->paused, false); end_pause(&venc[i]->pause, closest_v_ts); } } for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (aenc[i]) { os_atomic_set_bool(&aenc[i]->paused, false); end_pause(&aenc[i]->pause, closest_v_ts); } } } /* ---------------------------- */ success = true; fail: for (size_t i = MAX_OUTPUT_AUDIO_ENCODERS; i > 0; i--) { if (aenc[i - 1]) { pthread_mutex_unlock(&aenc[i - 1]->pause.mutex); } } for (size_t i = MAX_OUTPUT_VIDEO_ENCODERS; i > 0; i--) { if (venc[i - 1]) { pthread_mutex_unlock(&venc[i - 1]->pause.mutex); } } return success; } static bool obs_raw_output_pause(obs_output_t *output, bool pause) { bool success; uint64_t closest_v_ts; pthread_mutex_lock(&output->pause.mutex); closest_v_ts = get_closest_v_ts(&output->pause); if (pause) { success = pause_can_start(&output->pause); if (success) output->pause.ts_start = closest_v_ts; } else { success = pause_can_stop(&output->pause); if (success) end_pause(&output->pause, closest_v_ts); } 
	pthread_mutex_unlock(&output->pause.mutex);

	return success;
}

bool obs_output_pause(obs_output_t *output, bool pause)
{
	bool success;

	if (!obs_output_valid(output, "obs_output_pause"))
		return false;
	if ((output->info.flags & OBS_OUTPUT_CAN_PAUSE) == 0)
		return false;
	if (!os_atomic_load_bool(&output->active))
		return false;
	if (os_atomic_load_bool(&output->paused) == pause)
		return true;

	success = flag_encoded(output) ? obs_encoded_output_pause(output, pause)
				       : obs_raw_output_pause(output, pause);

	if (success) {
		os_atomic_set_bool(&output->paused, pause);
		do_output_signal(output, pause ? "pause" : "unpause");
		blog(LOG_INFO, "output %s %spaused", output->context.name, pause ? "" : "un");
	}

	return success;
}

bool obs_output_paused(const obs_output_t *output)
{
	return obs_output_valid(output, "obs_output_paused") ? os_atomic_load_bool(&output->paused) : false;
}

uint64_t obs_output_get_pause_offset(obs_output_t *output)
{
	uint64_t offset;

	if (!obs_output_valid(output, "obs_output_get_pause_offset"))
		return 0;

	pthread_mutex_lock(&output->pause.mutex);
	offset = output->pause.ts_offset;
	pthread_mutex_unlock(&output->pause.mutex);

	return offset;
}

signal_handler_t *obs_output_get_signal_handler(const obs_output_t *output)
{
	return obs_output_valid(output, "obs_output_get_signal_handler") ? output->context.signals : NULL;
}

proc_handler_t *obs_output_get_proc_handler(const obs_output_t *output)
{
	return obs_output_valid(output, "obs_output_get_proc_handler") ?
output->context.procs : NULL; } void obs_output_set_media(obs_output_t *output, video_t *video, audio_t *audio) { if (!obs_output_valid(output, "obs_output_set_media")) return; if (log_flag_encoded(output, __FUNCTION__, true)) return; if (flag_video(output)) output->video = video; if (flag_audio(output)) output->audio = audio; } video_t *obs_output_video(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_video")) return NULL; if (!flag_encoded(output)) return output->video; obs_encoder_t *vencoder = obs_output_get_video_encoder(output); return obs_encoder_video(vencoder); } audio_t *obs_output_audio(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_audio")) return NULL; if (!flag_encoded(output)) return output->audio; for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (output->audio_encoders[i]) return obs_encoder_audio(output->audio_encoders[i]); } return NULL; } static inline size_t get_first_mixer(const obs_output_t *output) { for (size_t i = 0; i < MAX_AUDIO_MIXES; i++) { if ((((size_t)1 << i) & output->mixer_mask) != 0) { return i; } } return 0; } void obs_output_set_mixer(obs_output_t *output, size_t mixer_idx) { if (!obs_output_valid(output, "obs_output_set_mixer")) return; if (log_flag_encoded(output, __FUNCTION__, true)) return; if (active(output)) return; output->mixer_mask = (size_t)1 << mixer_idx; } size_t obs_output_get_mixer(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_mixer")) return 0; return get_first_mixer(output); } void obs_output_set_mixers(obs_output_t *output, size_t mixers) { if (!obs_output_valid(output, "obs_output_set_mixers")) return; if (log_flag_encoded(output, __FUNCTION__, true)) return; if (active(output)) return; output->mixer_mask = mixers; } size_t obs_output_get_mixers(const obs_output_t *output) { return obs_output_valid(output, "obs_output_get_mixers") ? 
output->mixer_mask : 0; } void obs_output_remove_encoder_internal(struct obs_output *output, struct obs_encoder *encoder) { if (!obs_output_valid(output, "obs_output_remove_encoder_internal")) return; if (encoder->info.type == OBS_ENCODER_VIDEO) { for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { obs_encoder_t *video = output->video_encoders[i]; if (video == encoder) { output->video_encoders[i] = NULL; obs_encoder_release(video); } } } else if (encoder->info.type == OBS_ENCODER_AUDIO) { for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { obs_encoder_t *audio = output->audio_encoders[i]; if (audio == encoder) { output->audio_encoders[i] = NULL; obs_encoder_release(audio); } } } } void obs_output_remove_encoder(struct obs_output *output, struct obs_encoder *encoder) { if (!obs_output_valid(output, "obs_output_remove_encoder")) return; if (active(output)) return; obs_output_remove_encoder_internal(output, encoder); } static struct caption_track_data *create_caption_track() { struct caption_track_data *rval = bzalloc(sizeof(struct caption_track_data)); pthread_mutex_init_value(&rval->caption_mutex); if (pthread_mutex_init(&rval->caption_mutex, NULL) != 0) { bfree(rval); rval = NULL; } return rval; } void obs_output_set_video_encoder2(obs_output_t *output, obs_encoder_t *encoder, size_t idx) { if (!obs_output_valid(output, "obs_output_set_video_encoder2")) return; if (!log_flag_encoded(output, __FUNCTION__, false) || !log_flag_video(output, __FUNCTION__)) return; if (encoder && encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, "obs_output_set_video_encoder: " "encoder passed is not a video encoder"); return; } if (active(output)) { blog(LOG_WARNING, "%s: tried to set video encoder on output \"%s\" " "while the output is still active!", __FUNCTION__, output->context.name); return; } if ((output->info.flags & OBS_OUTPUT_MULTI_TRACK_VIDEO) != 0) { if (idx >= MAX_OUTPUT_VIDEO_ENCODERS) { return; } } else { if (idx > 0) { return; } } if 
(output->video_encoders[idx] == encoder) return; obs_encoder_remove_output(output->video_encoders[idx], output); obs_encoder_release(output->video_encoders[idx]); output->video_encoders[idx] = obs_encoder_get_ref(encoder); obs_encoder_add_output(output->video_encoders[idx], output); destroy_caption_track(&output->caption_tracks[idx]); if (encoder != NULL) { output->caption_tracks[idx] = create_caption_track(); } else { output->caption_tracks[idx] = NULL; } // Set preferred resolution on the default index to preserve old behavior if (idx == 0) { /* set the preferred resolution on the encoder */ if (output->scaled_width && output->scaled_height) obs_encoder_set_scaled_size(output->video_encoders[idx], output->scaled_width, output->scaled_height); } } void obs_output_set_video_encoder(obs_output_t *output, obs_encoder_t *encoder) { if (!obs_output_valid(output, "obs_output_set_video_encoder")) return; obs_output_set_video_encoder2(output, encoder, 0); } void obs_output_set_audio_encoder(obs_output_t *output, obs_encoder_t *encoder, size_t idx) { if (!obs_output_valid(output, "obs_output_set_audio_encoder")) return; if (!log_flag_encoded(output, __FUNCTION__, false) || !log_flag_audio(output, __FUNCTION__)) return; if (encoder && encoder->info.type != OBS_ENCODER_AUDIO) { blog(LOG_WARNING, "obs_output_set_audio_encoder: " "encoder passed is not an audio encoder"); return; } if (active(output)) { blog(LOG_WARNING, "%s: tried to set audio encoder %d on output \"%s\" " "while the output is still active!", __FUNCTION__, (int)idx, output->context.name); return; } if ((output->info.flags & OBS_OUTPUT_MULTI_TRACK_AUDIO) != 0) { if (idx >= MAX_OUTPUT_AUDIO_ENCODERS) { return; } } else { if (idx > 0) { return; } } if (output->audio_encoders[idx] == encoder) return; obs_encoder_remove_output(output->audio_encoders[idx], output); obs_encoder_release(output->audio_encoders[idx]); output->audio_encoders[idx] = obs_encoder_get_ref(encoder); 
obs_encoder_add_output(output->audio_encoders[idx], output); } obs_encoder_t *obs_output_get_video_encoder2(const obs_output_t *output, size_t idx) { if (!obs_output_valid(output, "obs_output_get_video_encoder2")) return NULL; if (idx >= MAX_OUTPUT_VIDEO_ENCODERS) return NULL; return output->video_encoders[idx]; } obs_encoder_t *obs_output_get_video_encoder(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_video_encoder")) return NULL; size_t first_venc_idx; if (get_first_video_encoder_index(output, &first_venc_idx)) return obs_output_get_video_encoder2(output, first_venc_idx); else return NULL; } obs_encoder_t *obs_output_get_audio_encoder(const obs_output_t *output, size_t idx) { if (!obs_output_valid(output, "obs_output_get_audio_encoder")) return NULL; if (idx >= MAX_OUTPUT_AUDIO_ENCODERS) return NULL; return output->audio_encoders[idx]; } void obs_output_set_service(obs_output_t *output, obs_service_t *service) { if (!obs_output_valid(output, "obs_output_set_service")) return; if (!log_flag_service(output, __FUNCTION__) || active(output) || !service || service->active) return; if (service->output) service->output->service = NULL; output->service = service; service->output = output; } obs_service_t *obs_output_get_service(const obs_output_t *output) { return obs_output_valid(output, "obs_output_get_service") ? 
output->service : NULL; } void obs_output_set_reconnect_settings(obs_output_t *output, int retry_count, int retry_sec) { if (!obs_output_valid(output, "obs_output_set_reconnect_settings")) return; output->reconnect_retry_max = retry_count; output->reconnect_retry_sec = retry_sec; } uint64_t obs_output_get_total_bytes(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_total_bytes")) return 0; if (!output->info.get_total_bytes) return 0; if (delay_active(output) && !delay_capturing(output)) return 0; return output->info.get_total_bytes(output->context.data); } int obs_output_get_frames_dropped(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_frames_dropped")) return 0; if (!output->info.get_dropped_frames) return 0; return output->info.get_dropped_frames(output->context.data); } int obs_output_get_total_frames(const obs_output_t *output) { return obs_output_valid(output, "obs_output_get_total_frames") ? output->total_frames : 0; } void obs_output_set_preferred_size2(obs_output_t *output, uint32_t width, uint32_t height, size_t idx) { if (!obs_output_valid(output, "obs_output_set_preferred_size2")) return; if (!log_flag_video(output, __FUNCTION__)) return; if (idx >= MAX_OUTPUT_VIDEO_ENCODERS) return; if (active(output)) { blog(LOG_WARNING, "output '%s': Cannot set the preferred " "resolution while the output is active", obs_output_get_name(output)); return; } // Used for raw video output if (idx == 0) { output->scaled_width = width; output->scaled_height = height; } if (flag_encoded(output)) { if (output->video_encoders[idx]) obs_encoder_set_scaled_size(output->video_encoders[idx], width, height); } } void obs_output_set_preferred_size(obs_output_t *output, uint32_t width, uint32_t height) { if (!obs_output_valid(output, "obs_output_set_preferred_size")) return; if (!log_flag_video(output, __FUNCTION__)) return; obs_output_set_preferred_size2(output, width, height, 0); } uint32_t obs_output_get_width2(const 
obs_output_t *output, size_t idx) { if (!obs_output_valid(output, "obs_output_get_width2")) return 0; if (!log_flag_video(output, __FUNCTION__)) return 0; if (idx >= MAX_OUTPUT_VIDEO_ENCODERS) return 0; if (flag_encoded(output)) { if (output->video_encoders[idx]) return obs_encoder_get_width(output->video_encoders[idx]); else return 0; } else return output->scaled_width != 0 ? output->scaled_width : video_output_get_width(output->video); } uint32_t obs_output_get_width(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_width")) return 0; if (!log_flag_video(output, __FUNCTION__)) return 0; return obs_output_get_width2(output, 0); } uint32_t obs_output_get_height2(const obs_output_t *output, size_t idx) { if (!obs_output_valid(output, "obs_output_get_height2")) return 0; if (!log_flag_video(output, __FUNCTION__)) return 0; if (idx >= MAX_OUTPUT_VIDEO_ENCODERS) return 0; if (flag_encoded(output)) { if (output->video_encoders[idx]) return obs_encoder_get_height(output->video_encoders[idx]); else return 0; } else return output->scaled_height != 0 ? 
output->scaled_height : video_output_get_height(output->video); } uint32_t obs_output_get_height(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_height")) return 0; if (!log_flag_video(output, __FUNCTION__)) return 0; return obs_output_get_height2(output, 0); } void obs_output_set_video_conversion(obs_output_t *output, const struct video_scale_info *conversion) { if (!obs_output_valid(output, "obs_output_set_video_conversion")) return; if (!obs_ptr_valid(conversion, "obs_output_set_video_conversion")) return; if (log_flag_encoded(output, __FUNCTION__, true) || !log_flag_video(output, __FUNCTION__)) return; output->video_conversion = *conversion; output->video_conversion_set = true; } void obs_output_set_audio_conversion(obs_output_t *output, const struct audio_convert_info *conversion) { if (!obs_output_valid(output, "obs_output_set_audio_conversion")) return; if (!obs_ptr_valid(conversion, "obs_output_set_audio_conversion")) return; if (log_flag_encoded(output, __FUNCTION__, true) || !log_flag_audio(output, __FUNCTION__)) return; output->audio_conversion = *conversion; output->audio_conversion_set = true; } static inline bool video_valid(const struct obs_output *output) { if (flag_encoded(output)) { for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (output->video_encoders[i]) { return true; } } return false; } else { return output->video != NULL; } } static inline bool audio_valid(const struct obs_output *output) { if (flag_encoded(output)) { for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { if (output->audio_encoders[i]) { return true; } } return false; } return output->audio != NULL; } static bool can_begin_data_capture(const struct obs_output *output) { if (flag_video(output) && !video_valid(output)) return false; if (flag_audio(output) && !audio_valid(output)) return false; if (flag_service(output) && !output->service) return false; return true; } static inline bool has_scaling(const struct obs_output *output) { 
uint32_t video_width = video_output_get_width(output->video); uint32_t video_height = video_output_get_height(output->video); return output->scaled_width && output->scaled_height && (video_width != output->scaled_width || video_height != output->scaled_height); } const struct video_scale_info *obs_output_get_video_conversion(struct obs_output *output) { if (log_flag_encoded(output, __FUNCTION__, true) || !log_flag_video(output, __FUNCTION__)) return NULL; if (output->video_conversion_set) { if (!output->video_conversion.width) output->video_conversion.width = obs_output_get_width(output); if (!output->video_conversion.height) output->video_conversion.height = obs_output_get_height(output); return &output->video_conversion; } else if (has_scaling(output)) { const struct video_output_info *info = video_output_get_info(output->video); output->video_conversion.format = info->format; output->video_conversion.colorspace = VIDEO_CS_DEFAULT; output->video_conversion.range = VIDEO_RANGE_DEFAULT; output->video_conversion.width = output->scaled_width; output->video_conversion.height = output->scaled_height; return &output->video_conversion; } return NULL; } static inline struct audio_convert_info *get_audio_conversion(struct obs_output *output) { return output->audio_conversion_set ? 
&output->audio_conversion : NULL; } static size_t get_encoder_index(const struct obs_output *output, struct encoder_packet *pkt) { if (pkt->type == OBS_ENCODER_VIDEO) { for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { struct obs_encoder *encoder = output->video_encoders[i]; if (encoder && pkt->encoder == encoder) return i; } } else if (pkt->type == OBS_ENCODER_AUDIO) { for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { struct obs_encoder *encoder = output->audio_encoders[i]; if (encoder && pkt->encoder == encoder) return i; } } assert(false); return 0; } static inline void check_received(struct obs_output *output, struct encoder_packet *out) { if (out->type == OBS_ENCODER_VIDEO) { if (!output->received_video[out->track_idx]) output->received_video[out->track_idx] = true; } else { if (!output->received_audio) output->received_audio = true; } } static inline void apply_interleaved_packet_offset(struct obs_output *output, struct encoder_packet *out, struct encoder_packet_time *packet_time) { int64_t offset; /* audio and video need to start at timestamp 0, and the encoders * may not currently be at 0 when we get data. so, we store the * current dts as offset and subtract that value from the dts/pts * of the output packet. */ offset = (out->type == OBS_ENCODER_VIDEO) ? output->video_offsets[out->track_idx] : output->audio_offsets[out->track_idx]; out->dts -= offset; out->pts -= offset; if (packet_time) packet_time->pts -= offset; /* convert the newly adjusted dts to relative dts time to ensure proper * interleaving. 
if we're using an audio encoder that's already been * started on another output, then the first audio packet may not be * quite perfectly synced up in terms of system time (and there's * nothing we can really do about that), but it will always at least be * within a 23ish millisecond threshold (at least for AAC) */ out->dts_usec = packet_dts_usec(out); } static inline bool has_higher_opposing_ts(struct obs_output *output, struct encoder_packet *packet) { bool has_higher = true; for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { if (!output->video_encoders[i] || (packet->type == OBS_ENCODER_VIDEO && i == packet->track_idx)) continue; has_higher = has_higher && output->highest_video_ts[i] > packet->dts_usec; } return packet->type == OBS_ENCODER_AUDIO ? has_higher : (has_higher && output->highest_audio_ts > packet->dts_usec); } static size_t extract_buffer_from_sei(sei_t *sei, uint8_t **data_out) { if (!sei || !sei->head) { return 0; } /* We should only need to get one payload, because the SEI that was * generated should only have one message, so no need to iterate. If * we did iterate, we would need to generate multiple OBUs. */ sei_message_t *msg = sei_message_head(sei); int payload_size = (int)sei_message_size(msg); uint8_t *payload_data = sei_message_data(msg); *data_out = bmalloc(payload_size); memcpy(*data_out, payload_data, payload_size); return payload_size; } static const uint8_t nal_start[4] = {0, 0, 0, 1}; static bool add_caption(struct obs_output *output, struct encoder_packet *out) { struct encoder_packet backup = *out; sei_t sei; uint8_t *data = NULL; size_t size; long ref = 1; bool avc = false; bool hevc = false; bool av1 = false; /* Instead of exiting early for unsupported codecs, we will continue * processing to allow the freeing of caption data even if the captions * will not be included in the bitstream due to being unimplemented in * the given codec. 
*/ if (strcmp(out->encoder->info.codec, "h264") == 0) { avc = true; } else if (strcmp(out->encoder->info.codec, "av1") == 0) { av1 = true; #ifdef ENABLE_HEVC } else if (strcmp(out->encoder->info.codec, "hevc") == 0) { hevc = true; #endif } DARRAY(uint8_t) out_data; if (out->priority > 1) return false; struct caption_track_data *ctrack = output->caption_tracks[out->track_idx]; if (!ctrack) { blog(LOG_DEBUG, "Caption track for index: %lu has not been initialized", out->track_idx); return false; } #ifdef ENABLE_HEVC uint8_t hevc_nal_header[2]; if (hevc) { size_t nal_header_index_start = 4; // Skip past the annex-b start code if (memcmp(out->data, nal_start + 1, 3) == 0) { nal_header_index_start = 3; } else if (memcmp(out->data, nal_start, 4) == 0) { nal_header_index_start = 4; } else { /* We shouldn't ever see this unless we start getting * packets without annex-b start codes. */ blog(LOG_DEBUG, "Annex-B start code not found. We may not " "generate a valid HEVC NAL unit header " "for our caption"); return false; } /* We will use the same 2 byte NAL unit header for the CC SEI, * but swap the NAL types out. 
*/ hevc_nal_header[0] = out->data[nal_header_index_start]; hevc_nal_header[1] = out->data[nal_header_index_start + 1]; } #endif sei_init(&sei, 0.0); da_init(out_data); da_push_back_array(out_data, (uint8_t *)&ref, sizeof(ref)); da_push_back_array(out_data, out->data, out->size); if (ctrack->caption_data.size > 0) { cea708_t cea708; cea708_init(&cea708, 0); // set up a new popon frame void *caption_buf = bzalloc(3 * sizeof(uint8_t)); while (ctrack->caption_data.size > 0) { deque_pop_front(&ctrack->caption_data, caption_buf, 3 * sizeof(uint8_t)); if ((((uint8_t *)caption_buf)[0] & 0x3) != 0) { // only send cea 608 continue; } uint16_t captionData = ((uint8_t *)caption_buf)[1]; captionData = captionData << 8; captionData += ((uint8_t *)caption_buf)[2]; // padding if (captionData == 0x8080) { continue; } if (captionData == 0) { continue; } if (!eia608_parity_varify(captionData)) { continue; } cea708_add_cc_data(&cea708, 1, ((uint8_t *)caption_buf)[0] & 0x3, captionData); } bfree(caption_buf); sei_message_t *msg = sei_message_new(sei_type_user_data_registered_itu_t_t35, 0, CEA608_MAX_SIZE); msg->size = cea708_render(&cea708, sei_message_data(msg), sei_message_size(msg)); sei_message_append(&sei, msg); } else if (ctrack->caption_head) { caption_frame_t cf; caption_frame_init(&cf); caption_frame_from_text(&cf, &ctrack->caption_head->text[0]); sei_from_caption_frame(&sei, &cf); struct caption_text *next = ctrack->caption_head->next; bfree(ctrack->caption_head); ctrack->caption_head = next; } if (avc || hevc || av1) { if (avc || hevc) { data = bmalloc(sei_render_size(&sei)); size = sei_render(&sei, data); } /* In each of these specs there is an identical structure that * carries caption information. It is named slightly differently * in each one. The metadata_itut_t35 in AV1 or the * user_data_registered_itu_t_t35 in HEVC/AVC. We have an AVC * SEI wrapped version of that here. We will strip it out and * repackage it slightly to fit the different codec carrying * mechanisms. 
A slightly modified SEI for HEVC and a metadata * OBU for AV1. */ if (avc) { /* TODO: SEI should come after AUD/SPS/PPS, * but before any VCL */ da_push_back_array(out_data, nal_start, 4); da_push_back_array(out_data, data, size); #ifdef ENABLE_HEVC } else if (hevc) { /* Only first NAL (VPS/PPS/SPS) should use the 4 byte * start code. SEIs use 3 byte version */ da_push_back_array(out_data, nal_start + 1, 3); /* nal_unit_header( ) { * forbidden_zero_bit f(1) * nal_unit_type u(6) * nuh_layer_id u(6) * nuh_temporal_id_plus1 u(3) * } */ const uint8_t suffix_sei_nal_type = 40; /* The first bit is always 0, so we just need to * save the last bit off the original header and * add the SEI NAL type. */ uint8_t first_byte = (suffix_sei_nal_type << 1) | (0x01 & hevc_nal_header[0]); hevc_nal_header[0] = first_byte; /* The HEVC NAL unit header is 2 byte instead of * one, otherwise everything else is the * same. */ da_push_back_array(out_data, hevc_nal_header, 2); da_push_back_array(out_data, &data[1], size - 1); #endif } else if (av1) { uint8_t *obu_buffer = NULL; size_t obu_buffer_size = 0; size = extract_buffer_from_sei(&sei, &data); metadata_obu(data, size, &obu_buffer, &obu_buffer_size, METADATA_TYPE_ITUT_T35); if (obu_buffer) { da_push_back_array(out_data, obu_buffer, obu_buffer_size); bfree(obu_buffer); } } if (data) { bfree(data); } obs_encoder_packet_release(out); *out = backup; out->data = (uint8_t *)out_data.array + sizeof(ref); out->size = out_data.num - sizeof(ref); } sei_free(&sei); return avc || hevc || av1; } static inline void send_interleaved(struct obs_output *output) { struct encoder_packet out = output->interleaved_packets.array[0]; struct encoder_packet_time ept_local = {0}; bool found_ept = false; da_erase(output->interleaved_packets, 0); if (out.type == OBS_ENCODER_VIDEO) { output->total_frames++; pthread_mutex_lock(&output->caption_tracks[out.track_idx]->caption_mutex); double frame_timestamp = (out.pts * out.timebase_num) / (double)out.timebase_den; 
struct caption_track_data *ctrack = output->caption_tracks[out.track_idx]; if (ctrack->caption_head && ctrack->caption_timestamp <= frame_timestamp) { blog(LOG_DEBUG, "Sending caption: %f \"%s\"", frame_timestamp, &ctrack->caption_head->text[0]); double display_duration = ctrack->caption_head->display_duration; if (add_caption(output, &out)) { ctrack->caption_timestamp = frame_timestamp + display_duration; } } if (ctrack->caption_data.size > 0) { if (ctrack->last_caption_timestamp < frame_timestamp) { ctrack->last_caption_timestamp = frame_timestamp; add_caption(output, &out); } } pthread_mutex_unlock(&ctrack->caption_mutex); /* Iterate the array of encoder packet times to * find a matching PTS entry, and drain the array. * Packet timing currently applies to video only. */ struct encoder_packet_time *ept = NULL; size_t num_ept = output->encoder_packet_times[out.track_idx].num; if (num_ept) { for (size_t i = 0; i < num_ept; i++) { ept = &output->encoder_packet_times[out.track_idx].array[i]; if (ept->pts == out.pts) { ept_local = *ept; da_erase(output->encoder_packet_times[out.track_idx], i); found_ept = true; break; } } if (found_ept == false) { blog(LOG_DEBUG, "%s: Track %lu encoder packet timing for PTS%" PRId64 " not found.", __FUNCTION__, out.track_idx, out.pts); } } else { // encoder_packet_times should not be empty; log if so. blog(LOG_DEBUG, "%s: Track %lu encoder packet timing array empty.", __FUNCTION__, out.track_idx); } } /* Iterate the registered packet callback(s) and invoke * each one. The caption track logic further above should * eventually migrate to the packet callback mechanism. */ pthread_mutex_lock(&output->pkt_callbacks_mutex); for (size_t i = 0; i < output->pkt_callbacks.num; ++i) { struct packet_callback *const callback = &output->pkt_callbacks.array[i]; // Packet interleave request timestamp ept_local.pir = os_gettime_ns(); callback->packet_cb(output, &out, found_ept ? 
&ept_local : NULL, callback->param); } pthread_mutex_unlock(&output->pkt_callbacks_mutex); output->info.encoded_packet(output->context.data, &out); obs_encoder_packet_release(&out); } static inline void set_higher_ts(struct obs_output *output, struct encoder_packet *packet) { if (packet->type == OBS_ENCODER_VIDEO) { if (output->highest_video_ts[packet->track_idx] < packet->dts_usec) output->highest_video_ts[packet->track_idx] = packet->dts_usec; } else { if (output->highest_audio_ts < packet->dts_usec) output->highest_audio_ts = packet->dts_usec; } } static inline struct encoder_packet *find_first_packet_type(struct obs_output *output, enum obs_encoder_type type, size_t audio_idx); static int find_first_packet_type_idx(struct obs_output *output, enum obs_encoder_type type, size_t audio_idx); /* gets the point where audio and video are closest together */ static size_t get_interleaved_start_idx(struct obs_output *output) { int64_t closest_diff = 0x7FFFFFFFFFFFFFFFLL; struct encoder_packet *first_video = find_first_packet_type(output, OBS_ENCODER_VIDEO, 0); size_t video_idx = DARRAY_INVALID; size_t idx = 0; for (size_t i = 0; i < output->interleaved_packets.num; i++) { struct encoder_packet *packet = &output->interleaved_packets.array[i]; int64_t diff; if (packet->type != OBS_ENCODER_AUDIO) { if (packet == first_video) video_idx = i; continue; } diff = llabs(packet->dts_usec - first_video->dts_usec); if (diff < closest_diff) { closest_diff = diff; idx = i; } } idx = video_idx < idx ? video_idx : idx; /* Early AAC/Opus audio packets will be for "priming" the encoder and contain silence, but they should not be * discarded. Set the idx to the first audio packet if closest PTS was <= 0. 
	 */
	size_t first_audio_idx = idx;
	while (output->interleaved_packets.array[first_audio_idx].type != OBS_ENCODER_AUDIO)
		first_audio_idx++;

	if (output->interleaved_packets.array[first_audio_idx].pts <= 0) {
		for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
			int audio_idx = find_first_packet_type_idx(output, OBS_ENCODER_AUDIO, i);
			if (audio_idx >= 0 && (size_t)audio_idx < idx)
				idx = audio_idx;
		}
	}

	return idx;
}

static int64_t get_encoder_duration(struct obs_encoder *encoder)
{
	return (encoder->timebase_num * 1000000LL / encoder->timebase_den) * encoder->framesize;
}

static int prune_premature_packets(struct obs_output *output)
{
	struct encoder_packet *video;
	int video_idx;
	int max_idx;
	int64_t duration_usec, max_audio_duration_usec = 0;
	int64_t max_diff = 0;
	int64_t diff = 0;
	int audio_encoders = 0;

	video_idx = find_first_packet_type_idx(output, OBS_ENCODER_VIDEO, 0);
	if (video_idx == -1)
		return -1;

	max_idx = video_idx;
	video = &output->interleaved_packets.array[video_idx];
	duration_usec = video->timebase_num * 1000000LL / video->timebase_den;

	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		struct encoder_packet *audio;
		int audio_idx;
		int64_t audio_duration_usec = 0;

		if (!output->audio_encoders[i])
			continue;

		audio_encoders++;

		audio_idx = find_first_packet_type_idx(output, OBS_ENCODER_AUDIO, i);
		if (audio_idx == -1) {
			output->received_audio = false;
			return -1;
		}

		audio = &output->interleaved_packets.array[audio_idx];
		if (audio_idx > max_idx)
			max_idx = audio_idx;

		diff = audio->dts_usec - video->dts_usec;
		if (diff > max_diff)
			max_diff = diff;

		audio_duration_usec = get_encoder_duration(output->audio_encoders[i]);
		if (audio_duration_usec > max_audio_duration_usec)
			max_audio_duration_usec = audio_duration_usec;
	}

	/* Once multiple audio encoders are running they are almost always out
	 * of phase by ~Xms.
	 * If users change their video to > 100fps then it
	 * becomes probable that this phase difference will be larger than the
	 * video duration, preventing us from ever finding a synchronization
	 * point due to their larger frame duration. Instead give up on a tight
	 * video sync. */
	if (audio_encoders > 1 && duration_usec < max_audio_duration_usec) {
		duration_usec = max_audio_duration_usec;
	}

	return diff > duration_usec ? max_idx + 1 : 0;
}

#define DEBUG_STARTING_PACKETS 0

static void discard_to_idx(struct obs_output *output, size_t idx)
{
	for (size_t i = 0; i < idx; i++) {
		struct encoder_packet *packet = &output->interleaved_packets.array[i];
#if DEBUG_STARTING_PACKETS == 1
		blog(LOG_DEBUG, "discarding %s packet, dts: %lld, pts: %lld",
		     packet->type == OBS_ENCODER_VIDEO ? "video" : "audio", packet->dts, packet->pts);
#endif
		if (packet->type == OBS_ENCODER_VIDEO) {
			da_pop_front(output->encoder_packet_times[packet->track_idx]);
		}
		obs_encoder_packet_release(packet);
	}

	da_erase_range(output->interleaved_packets, 0, idx);
}

static bool prune_interleaved_packets(struct obs_output *output)
{
	size_t start_idx = 0;
	int prune_start = prune_premature_packets(output);

#if DEBUG_STARTING_PACKETS == 1
	blog(LOG_DEBUG, "--------- Pruning! %d ---------", prune_start);
	for (size_t i = 0; i < output->interleaved_packets.num; i++) {
		struct encoder_packet *packet = &output->interleaved_packets.array[i];
		blog(LOG_DEBUG, "packet: %s %d, ts: %lld, pruned = %s",
		     packet->type == OBS_ENCODER_AUDIO ? "audio" : "video", (int)packet->track_idx, packet->dts_usec,
		     (int)i < prune_start ?
"true" : "false"); } #endif /* prunes the first video packet if it's too far away from audio */ if (prune_start == -1) return false; else if (prune_start != 0) start_idx = (size_t)prune_start; else start_idx = get_interleaved_start_idx(output); if (start_idx) discard_to_idx(output, start_idx); return true; } static int find_first_packet_type_idx(struct obs_output *output, enum obs_encoder_type type, size_t idx) { for (size_t i = 0; i < output->interleaved_packets.num; i++) { struct encoder_packet *packet = &output->interleaved_packets.array[i]; if (packet->type == type && packet->track_idx == idx) return (int)i; } return -1; } static int find_last_packet_type_idx(struct obs_output *output, enum obs_encoder_type type, size_t idx) { for (size_t i = output->interleaved_packets.num; i > 0; i--) { struct encoder_packet *packet = &output->interleaved_packets.array[i - 1]; if (packet->type == type && packet->track_idx == idx) return (int)(i - 1); } return -1; } static inline struct encoder_packet *find_first_packet_type(struct obs_output *output, enum obs_encoder_type type, size_t audio_idx) { int idx = find_first_packet_type_idx(output, type, audio_idx); return (idx != -1) ? &output->interleaved_packets.array[idx] : NULL; } static inline struct encoder_packet *find_last_packet_type(struct obs_output *output, enum obs_encoder_type type, size_t audio_idx) { int idx = find_last_packet_type_idx(output, type, audio_idx); return (idx != -1) ? 
			     &output->interleaved_packets.array[idx] : NULL;
}

static bool get_audio_and_video_packets(struct obs_output *output, struct encoder_packet **video,
					struct encoder_packet **audio)
{
	bool found_video = false;

	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		if (output->video_encoders[i]) {
			video[i] = find_first_packet_type(output, OBS_ENCODER_VIDEO, i);
			if (!video[i]) {
				output->received_video[i] = false;
				return false;
			} else {
				found_video = true;
			}
		}
	}

	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		if (output->audio_encoders[i]) {
			audio[i] = find_first_packet_type(output, OBS_ENCODER_AUDIO, i);
			if (!audio[i]) {
				output->received_audio = false;
				return false;
			}
		}
	}

	return found_video;
}

static bool initialize_interleaved_packets(struct obs_output *output)
{
	struct encoder_packet *video[MAX_OUTPUT_VIDEO_ENCODERS] = {0};
	struct encoder_packet *audio[MAX_OUTPUT_AUDIO_ENCODERS] = {0};
	struct encoder_packet *last_audio[MAX_OUTPUT_AUDIO_ENCODERS] = {0};
	size_t start_idx;
	size_t first_audio_idx;
	size_t first_video_idx;

	if (!get_first_audio_encoder_index(output, &first_audio_idx))
		return false;
	if (!get_first_video_encoder_index(output, &first_video_idx))
		return false;

	if (!get_audio_and_video_packets(output, video, audio))
		return false;

	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		if (output->audio_encoders[i]) {
			last_audio[i] = find_last_packet_type(output, OBS_ENCODER_AUDIO, i);
		}
	}

	/* ensure that there is audio past the first video packet */
	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		if (output->audio_encoders[i]) {
			if (last_audio[i]->dts_usec < video[first_video_idx]->dts_usec) {
				output->received_audio = false;
				return false;
			}
		}
	}

	/* clear out excess starting audio if it hasn't been already */
	start_idx = get_interleaved_start_idx(output);
	if (start_idx) {
		discard_to_idx(output, start_idx);
		if (!get_audio_and_video_packets(output, video, audio))
			return false;
	}

	/* get new offsets */
	for (size_t i = 0; i <
	     MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		if (output->video_encoders[i]) {
			output->video_offsets[i] = video[i]->pts;
		}
	}

	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		if (output->audio_encoders[i] && audio[i]->dts > 0) {
			output->audio_offsets[i] = audio[i]->dts;
		}
	}

#if DEBUG_STARTING_PACKETS == 1
	int64_t v = video[first_video_idx]->dts_usec;
	int64_t a = audio[first_audio_idx]->dts_usec;
	int64_t diff = v - a;

	blog(LOG_DEBUG,
	     "output '%s' offset for video: %lld, audio: %lld, "
	     "diff: %lldms",
	     output->context.name, v, a, diff / 1000LL);
#endif

	/* subtract offsets from highest TS offset variables */
	output->highest_audio_ts -= audio[first_audio_idx]->dts_usec;

	/* apply new offsets to all existing packet DTS/PTS values */
	for (size_t i = 0; i < output->interleaved_packets.num; i++) {
		struct encoder_packet *packet = &output->interleaved_packets.array[i];
		apply_interleaved_packet_offset(output, packet, NULL);
	}

	return true;
}

static inline void insert_interleaved_packet(struct obs_output *output, struct encoder_packet *out)
{
	size_t idx;
	for (idx = 0; idx < output->interleaved_packets.num; idx++) {
		struct encoder_packet *cur_packet;
		cur_packet = output->interleaved_packets.array + idx;

		// sort video packets with same DTS by track index,
		// to prevent the pruning logic from removing additional
		// video tracks
		if (out->dts_usec == cur_packet->dts_usec && out->type == OBS_ENCODER_VIDEO &&
		    cur_packet->type == OBS_ENCODER_VIDEO && out->track_idx > cur_packet->track_idx)
			continue;

		if (out->dts_usec == cur_packet->dts_usec && out->type == OBS_ENCODER_VIDEO) {
			break;
		} else if (out->dts_usec < cur_packet->dts_usec) {
			break;
		}
	}

	da_insert(output->interleaved_packets, idx, out);
}

static void resort_interleaved_packets(struct obs_output *output)
{
	DARRAY(struct encoder_packet) old_array;

	old_array.da = output->interleaved_packets.da;
	memset(&output->interleaved_packets, 0, sizeof(output->interleaved_packets));

	for (size_t i = 0; i < old_array.num; i++) {
		set_higher_ts(output,
			      &old_array.array[i]);
		insert_interleaved_packet(output, &old_array.array[i]);
	}

	da_free(old_array);
}

static void discard_unused_audio_packets(struct obs_output *output, int64_t dts_usec)
{
	size_t idx = 0;

	for (; idx < output->interleaved_packets.num; idx++) {
		struct encoder_packet *p = &output->interleaved_packets.array[idx];
		if (p->dts_usec >= dts_usec)
			break;
	}

	if (idx)
		discard_to_idx(output, idx);
}

static bool purge_encoder_group_keyframe_data(obs_output_t *output, size_t idx)
{
	struct keyframe_group_data *data = &output->keyframe_group_tracking.array[idx];
	uint32_t modified_count = 0;

	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		if (data->seen_on_track[i] != KEYFRAME_TRACK_STATUS_NOT_SEEN)
			modified_count += 1;
	}

	if (modified_count == data->required_tracks) {
		da_erase(output->keyframe_group_tracking, idx);
		return true;
	}

	return false;
}

/* Check whether keyframes are emitted from all grouped encoders, and log
 * if keyframes haven't been emitted from all grouped encoders.
 */
static void check_encoder_group_keyframe_alignment(obs_output_t *output, struct encoder_packet *packet)
{
	size_t idx = 0;
	struct keyframe_group_data insert_data = {0};

	if (!packet->keyframe || packet->type != OBS_ENCODER_VIDEO || !packet->encoder->encoder_group)
		return;

	for (; idx < output->keyframe_group_tracking.num;) {
		struct keyframe_group_data *data = &output->keyframe_group_tracking.array[idx];

		if (data->pts > packet->pts)
			break;

		if (data->group_id != (uintptr_t)packet->encoder->encoder_group) {
			idx += 1;
			continue;
		}

		if (data->pts < packet->pts) {
			if (data->seen_on_track[packet->track_idx] == KEYFRAME_TRACK_STATUS_NOT_SEEN) {
				blog(LOG_WARNING,
				     "obs-output '%s': Missing keyframe with pts %" PRIi64
				     " for encoder '%s' (track: %zu)",
				     obs_output_get_name(output), data->pts, obs_encoder_get_name(packet->encoder),
				     packet->track_idx);
			}
			data->seen_on_track[packet->track_idx] = KEYFRAME_TRACK_STATUS_SKIPPED;
			if (!purge_encoder_group_keyframe_data(output, idx))
				idx += 1;
			continue;
		}

		data->seen_on_track[packet->track_idx] = KEYFRAME_TRACK_STATUS_SEEN;
		purge_encoder_group_keyframe_data(output, idx);
		return;
	}

	insert_data.group_id = (uintptr_t)packet->encoder->encoder_group;
	insert_data.pts = packet->pts;
	insert_data.seen_on_track[packet->track_idx] = KEYFRAME_TRACK_STATUS_SEEN;

	pthread_mutex_lock(&packet->encoder->encoder_group->mutex);
	insert_data.required_tracks = packet->encoder->encoder_group->num_encoders_started;
	pthread_mutex_unlock(&packet->encoder->encoder_group->mutex);

	da_insert(output->keyframe_group_tracking, idx, &insert_data);
}

static void apply_ept_offsets(struct obs_output *output)
{
	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		for (size_t j = 0; j < output->encoder_packet_times[i].num; j++) {
			output->encoder_packet_times[i].array[j].pts -= output->video_offsets[i];
		}
	}
}

static inline size_t count_streamable_frames(struct obs_output *output)
{
	size_t eligible = 0;
	for (size_t idx = 0; idx < output->interleaved_packets.num; idx++) {
		struct encoder_packet *pkt = &output->interleaved_packets.array[idx];
		/* Only count an interleaved packet as streamable if there are packets of the opposing type and of a
		 * higher timestamp in the interleave buffer. This ensures that the timestamps are monotonic. */
		if (!has_higher_opposing_ts(output, pkt))
			break;
		eligible++;
	}
	return eligible;
}

static void interleave_packets(void *data, struct encoder_packet *packet, struct encoder_packet_time *packet_time)
{
	struct obs_output *output = data;
	struct encoder_packet out;
	bool was_started;
	bool received_video;
	struct encoder_packet_time *output_packet_time = NULL;

	if (!active(output))
		return;

	packet->track_idx = get_encoder_index(output, packet);

	pthread_mutex_lock(&output->interleaved_mutex);

	/* if first video frame is not a keyframe, discard until received */
	if (packet->type == OBS_ENCODER_VIDEO && !output->received_video[packet->track_idx] && !packet->keyframe) {
		discard_unused_audio_packets(output, packet->dts_usec);
		pthread_mutex_unlock(&output->interleaved_mutex);

		if (output->active_delay_ns)
			obs_encoder_packet_release(packet);
		return;
	}

	received_video = true;
	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		if (output->video_encoders[i])
			received_video = received_video && output->received_video[i];
	}

	check_encoder_group_keyframe_alignment(output, packet);

	was_started = output->received_audio && received_video;

	if (output->active_delay_ns)
		out = *packet;
	else
		obs_encoder_packet_create_instance(&out, packet);

	if (packet_time) {
		output_packet_time = da_push_back_new(output->encoder_packet_times[packet->track_idx]);
		*output_packet_time = *packet_time;
	}

	if (was_started)
		apply_interleaved_packet_offset(output, &out, output_packet_time);
	else
		check_received(output, packet);

	insert_interleaved_packet(output, &out);

	received_video = true;
	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		if (output->video_encoders[i])
			received_video = received_video && output->received_video[i];
	}

	/* when both video
	 * and audio have been received, we're ready
	 * to start sending out packets (one at a time) */
	if (output->received_audio && received_video) {
		if (!was_started) {
			if (prune_interleaved_packets(output)) {
				if (initialize_interleaved_packets(output)) {
					resort_interleaved_packets(output);
					apply_ept_offsets(output);
					send_interleaved(output);
				}
			}
		} else {
			set_higher_ts(output, &out);
			size_t streamable = count_streamable_frames(output);
			if (streamable) {
				send_interleaved(output);
				/* If we have more eligible packets queued than we normally should have,
				 * send one additional packet until we're back below the limit. */
				if (--streamable > output->interleaver_max_batch_size)
					send_interleaved(output);
			}
		}
	}

	pthread_mutex_unlock(&output->interleaved_mutex);
}

static void default_encoded_callback(void *param, struct encoder_packet *packet,
				     struct encoder_packet_time *packet_time)
{
	UNUSED_PARAMETER(packet_time);

	struct obs_output *output = param;

	if (data_active(output)) {
		packet->track_idx = get_encoder_index(output, packet);
		output->info.encoded_packet(output->context.data, packet);

		if (packet->type == OBS_ENCODER_VIDEO)
			output->total_frames++;
	}

	if (output->active_delay_ns)
		obs_encoder_packet_release(packet);
}

static void default_raw_video_callback(void *param, struct video_data *frame)
{
	struct obs_output *output = param;
	if (video_pause_check(&output->pause, frame->timestamp))
		return;

	if (data_active(output))
		output->info.raw_video(output->context.data, frame);
	output->total_frames++;
}

static bool prepare_audio(struct obs_output *output, const struct audio_data *old, struct audio_data *new)
{
	if ((output->info.flags & OBS_OUTPUT_VIDEO) == 0) {
		*new = *old;
		return true;
	}

	if (!output->video_start_ts) {
		pthread_mutex_lock(&output->pause.mutex);
		output->video_start_ts = output->pause.last_video_ts;
		pthread_mutex_unlock(&output->pause.mutex);
	}

	if (!output->video_start_ts)
		return false;

	/* ------------------ */

	*new = *old;

	if (old->timestamp < output->video_start_ts) {
		uint64_t
			duration = util_mul_div64(old->frames, 1000000000ULL, output->sample_rate);
		uint64_t end_ts = (old->timestamp + duration);
		uint64_t cutoff;

		if (end_ts <= output->video_start_ts)
			return false;

		cutoff = output->video_start_ts - old->timestamp;
		new->timestamp += cutoff;

		cutoff = util_mul_div64(cutoff, output->sample_rate, 1000000000ULL);

		for (size_t i = 0; i < output->planes; i++)
			new->data[i] += output->audio_size * (uint32_t)cutoff;
		new->frames -= (uint32_t)cutoff;
	}

	return true;
}

static void default_raw_audio_callback(void *param, size_t mix_idx, struct audio_data *in)
{
	struct obs_output *output = param;
	struct audio_data out;
	size_t frame_size_bytes;

	if (!data_active(output))
		return;

	/* -------------- */

	if (!prepare_audio(output, in, &out))
		return;
	if (audio_pause_check(&output->pause, &out, output->sample_rate))
		return;

	if (!output->audio_start_ts) {
		output->audio_start_ts = out.timestamp;
	}

	frame_size_bytes = AUDIO_OUTPUT_FRAMES * output->audio_size;

	for (size_t i = 0; i < output->planes; i++)
		deque_push_back(&output->audio_buffer[mix_idx][i], out.data[i], out.frames * output->audio_size);

	/* -------------- */

	while (output->audio_buffer[mix_idx][0].size > frame_size_bytes) {
		for (size_t i = 0; i < output->planes; i++) {
			deque_pop_front(&output->audio_buffer[mix_idx][i], output->audio_data[i], frame_size_bytes);
			out.data[i] = (uint8_t *)output->audio_data[i];
		}

		out.frames = AUDIO_OUTPUT_FRAMES;
		out.timestamp = output->audio_start_ts +
				audio_frames_to_ns(output->sample_rate, output->total_audio_frames);

		pthread_mutex_lock(&output->pause.mutex);
		out.timestamp += output->pause.ts_offset;
		pthread_mutex_unlock(&output->pause.mutex);

		output->total_audio_frames += AUDIO_OUTPUT_FRAMES;

		if (output->info.raw_audio2)
			output->info.raw_audio2(output->context.data, mix_idx, &out);
		else
			output->info.raw_audio(output->context.data, &out);
	}
}

static inline void start_audio_encoders(struct obs_output *output, encoded_callback_t encoded_callback)
{
	for (size_t i = 0; i <
	     MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		if (output->audio_encoders[i]) {
			obs_encoder_start(output->audio_encoders[i], encoded_callback, output);
		}
	}
}

static inline void start_video_encoders(struct obs_output *output, encoded_callback_t encoded_callback)
{
	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		if (output->video_encoders[i]) {
			obs_encoder_start(output->video_encoders[i], encoded_callback, output);
		}
	}
}

static inline void start_raw_audio(obs_output_t *output)
{
	if (output->info.raw_audio2) {
		for (int idx = 0; idx < MAX_AUDIO_MIXES; idx++) {
			if ((output->mixer_mask & ((size_t)1 << idx)) != 0) {
				audio_output_connect(output->audio, idx, get_audio_conversion(output),
						     default_raw_audio_callback, output);
			}
		}
	} else {
		audio_output_connect(output->audio, get_first_mixer(output), get_audio_conversion(output),
				     default_raw_audio_callback, output);
	}
}

static void reset_packet_data(obs_output_t *output)
{
	output->received_audio = false;
	output->highest_audio_ts = 0;

	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		output->encoder_packet_times[i].num = 0;
	}

	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		output->received_video[i] = false;
		output->video_offsets[i] = 0;
		output->highest_video_ts[i] = INT64_MIN;
	}

	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++)
		output->audio_offsets[i] = 0;

	free_packets(output);
}

static inline bool preserve_active(struct obs_output *output)
{
	return (output->delay_flags & OBS_OUTPUT_DELAY_PRESERVE) != 0;
}

static void hook_data_capture(struct obs_output *output)
{
	encoded_callback_t encoded_callback;
	bool has_video = flag_video(output);
	bool has_audio = flag_audio(output);

	if (flag_encoded(output)) {
		pthread_mutex_lock(&output->interleaved_mutex);
		reset_packet_data(output);
		pthread_mutex_unlock(&output->interleaved_mutex);

		encoded_callback = (has_video && has_audio) ?
					   interleave_packets : default_encoded_callback;

		if (output->delay_sec) {
			output->active_delay_ns = (uint64_t)output->delay_sec * 1000000000ULL;
			output->delay_cur_flags = output->delay_flags;
			output->delay_callback = encoded_callback;
			encoded_callback = process_delay;
			os_atomic_set_bool(&output->delay_active, true);

			blog(LOG_INFO,
			     "Output '%s': %" PRIu32 " second delay "
			     "active, preserve on disconnect is %s",
			     output->context.name, output->delay_sec, preserve_active(output) ? "on" : "off");
		}

		if (has_audio)
			start_audio_encoders(output, encoded_callback);
		if (has_video)
			start_video_encoders(output, encoded_callback);
	} else {
		if (has_video)
			start_raw_video(output->video, obs_output_get_video_conversion(output), 1,
					default_raw_video_callback, output);
		if (has_audio)
			start_raw_audio(output);
	}
}

static inline void signal_start(struct obs_output *output)
{
	do_output_signal(output, "start");
}

static inline void signal_reconnect(struct obs_output *output)
{
	struct calldata params;
	uint8_t stack[128];

	calldata_init_fixed(&params, stack, sizeof(stack));
	calldata_set_int(&params, "timeout_sec", output->reconnect_retry_cur_msec / 1000);
	calldata_set_ptr(&params, "output", output);
	signal_handler_signal(output->context.signals, "reconnect", &params);
}

static inline void signal_reconnect_success(struct obs_output *output)
{
	do_output_signal(output, "reconnect_success");
}

static inline void signal_stop(struct obs_output *output)
{
	struct calldata params;

	calldata_init(&params);

	calldata_set_string(&params, "last_error", obs_output_get_last_error(output));
	calldata_set_int(&params, "code", output->stop_code);
	calldata_set_ptr(&params, "output", output);

	signal_handler_signal(output->context.signals, "stop", &params);
	calldata_free(&params);
}

bool obs_output_can_begin_data_capture(const obs_output_t *output, uint32_t flags)
{
	UNUSED_PARAMETER(flags);

	if (!obs_output_valid(output, "obs_output_can_begin_data_capture"))
		return false;

	if (delay_active(output))
		return true;
	if (active(output))
		return false;

	if
	   (data_capture_ending(output))
		pthread_join(output->end_data_capture_thread, NULL);

	return can_begin_data_capture(output);
}

static inline bool initialize_audio_encoders(obs_output_t *output)
{
	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		obs_encoder_t *audio = output->audio_encoders[i];

		if (audio && !obs_encoder_initialize(audio)) {
			obs_output_set_last_error(output, obs_encoder_get_last_error(audio));
			return false;
		}
	}

	return true;
}

static inline bool initialize_video_encoders(obs_output_t *output)
{
	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		obs_encoder_t *video = output->video_encoders[i];

		if (video && !obs_encoder_initialize(video)) {
			obs_output_set_last_error(output, obs_encoder_get_last_error(video));
			return false;
		}
	}

	return true;
}

static inline void pair_encoders(obs_output_t *output)
{
	size_t first_venc_idx;
	if (!get_first_video_encoder_index(output, &first_venc_idx))
		return;

	struct obs_encoder *video = output->video_encoders[first_venc_idx];

	pthread_mutex_lock(&video->init_mutex);

	if (video->active) {
		pthread_mutex_unlock(&video->init_mutex);
		return;
	}

	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		struct obs_encoder *audio = output->audio_encoders[i];
		if (!audio)
			continue;

		pthread_mutex_lock(&audio->init_mutex);

		if (!audio->active && !audio->paired_encoders.num) {
			obs_weak_encoder_t *weak_audio = obs_encoder_get_weak_encoder(audio);
			obs_weak_encoder_t *weak_video = obs_encoder_get_weak_encoder(video);

			da_push_back(video->paired_encoders, &weak_audio);
			da_push_back(audio->paired_encoders, &weak_video);
		}

		pthread_mutex_unlock(&audio->init_mutex);
	}

	pthread_mutex_unlock(&video->init_mutex);
}

bool obs_output_initialize_encoders(obs_output_t *output, uint32_t flags)
{
	UNUSED_PARAMETER(flags);

	if (!obs_output_valid(output, "obs_output_initialize_encoders"))
		return false;
	if (!log_flag_encoded(output, __FUNCTION__, false))
		return false;
	if (active(output))
		return delay_active(output);

	if (flag_video(output) &&
	    !initialize_video_encoders(output))
		return false;
	if (flag_audio(output) && !initialize_audio_encoders(output))
		return false;

	return true;
}

static bool begin_delayed_capture(obs_output_t *output)
{
	if (delay_capturing(output))
		return false;

	pthread_mutex_lock(&output->interleaved_mutex);
	reset_packet_data(output);
	os_atomic_set_bool(&output->delay_capturing, true);
	pthread_mutex_unlock(&output->interleaved_mutex);

	if (reconnecting(output)) {
		signal_reconnect_success(output);
		os_atomic_set_bool(&output->reconnecting, false);
	} else {
		signal_start(output);
	}

	return true;
}

static void reset_raw_output(obs_output_t *output)
{
	clear_raw_audio_buffers(output);

	if (output->audio) {
		const struct audio_output_info *aoi = audio_output_get_info(output->audio);
		struct audio_convert_info conv = output->audio_conversion;
		struct audio_convert_info info = {
			aoi->samples_per_sec,
			aoi->format,
			aoi->speakers,
		};

		if (output->audio_conversion_set) {
			if (conv.samples_per_sec)
				info.samples_per_sec = conv.samples_per_sec;
			if (conv.format != AUDIO_FORMAT_UNKNOWN)
				info.format = conv.format;
			if (conv.speakers != SPEAKERS_UNKNOWN)
				info.speakers = conv.speakers;
		}

		output->sample_rate = info.samples_per_sec;
		output->planes = get_audio_planes(info.format, info.speakers);
		output->total_audio_frames = 0;
		output->audio_size = get_audio_size(info.format, info.speakers, 1);
	}

	output->audio_start_ts = 0;
	output->video_start_ts = 0;
	pause_reset(&output->pause);
}

static void calculate_batch_size(struct obs_output *output)
{
	struct obs_video_info ovi;
	obs_get_video_info(&ovi);

	DARRAY(uint64_t) intervals;
	da_init(intervals);
	uint64_t largest_interval = 0;

	/* Step 1: Calculate the largest interval between packets of any encoder.
	 */
	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		if (!output->video_encoders[i])
			continue;

		uint32_t den = ovi.fps_den * obs_encoder_get_frame_rate_divisor(output->video_encoders[i]);
		uint64_t encoder_interval = util_mul_div64(1000000000ULL, den, ovi.fps_num);

		da_push_back(intervals, &encoder_interval);
		largest_interval = encoder_interval > largest_interval ? encoder_interval : largest_interval;
	}

	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		if (!output->audio_encoders[i])
			continue;

		uint32_t sample_rate = obs_encoder_get_sample_rate(output->audio_encoders[i]);
		size_t frame_size = obs_encoder_get_frame_size(output->audio_encoders[i]);
		uint64_t encoder_interval = util_mul_div64(1000000000ULL, frame_size, sample_rate);

		da_push_back(intervals, &encoder_interval);
		largest_interval = encoder_interval > largest_interval ? encoder_interval : largest_interval;
	}

	/* Step 2: Calculate how many packets would fit into double that interval given each encoder's packet rate.
	 * The doubling is done to provide some amount of wiggle room as the largest interval may not be evenly
	 * divisible by all smaller ones. For example, 33.3... ms video (30 FPS) and 21.3... ms audio (48 kHz AAC).
	 */
	for (size_t i = 0; i < intervals.num; i++) {
		uint64_t num = (largest_interval * 2) / intervals.array[i];
		output->interleaver_max_batch_size += num;
	}

	blog(LOG_DEBUG, "Maximum interleaver batch size for '%s' calculated to be %zu packets",
	     obs_output_get_name(output), output->interleaver_max_batch_size);

	da_free(intervals);
}

bool obs_output_begin_data_capture(obs_output_t *output, uint32_t flags)
{
	UNUSED_PARAMETER(flags);

	if (!obs_output_valid(output, "obs_output_begin_data_capture"))
		return false;

	if (delay_active(output))
		return begin_delayed_capture(output);
	if (active(output))
		return false;

	output->total_frames = 0;

	if (!flag_encoded(output))
		reset_raw_output(output);

	if (!can_begin_data_capture(output))
		return false;

	if (flag_video(output) && flag_audio(output))
		pair_encoders(output);

	os_atomic_set_bool(&output->data_active, true);
	hook_data_capture(output);
	calculate_batch_size(output);

	if (flag_service(output))
		obs_service_activate(output->service);

	do_output_signal(output, "activate");
	os_atomic_set_bool(&output->active, true);

	if (reconnecting(output)) {
		signal_reconnect_success(output);
		os_atomic_set_bool(&output->reconnecting, false);
	} else if (delay_active(output)) {
		do_output_signal(output, "starting");
	} else {
		signal_start(output);
	}

	return true;
}

static inline void stop_audio_encoders(obs_output_t *output, encoded_callback_t encoded_callback)
{
	for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) {
		obs_encoder_t *audio = output->audio_encoders[i];

		if (audio)
			obs_encoder_stop(audio, encoded_callback, output);
	}
}

static inline void stop_video_encoders(obs_output_t *output, encoded_callback_t encoded_callback)
{
	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		obs_encoder_t *video = output->video_encoders[i];

		if (video)
			obs_encoder_stop(video, encoded_callback, output);
	}
}

static inline void stop_raw_audio(obs_output_t *output)
{
	if (output->info.raw_audio2) {
		for (int idx = 0; idx < MAX_AUDIO_MIXES; idx++) {
			if ((output->mixer_mask &
			     ((size_t)1 << idx)) != 0) {
				audio_output_disconnect(output->audio, idx, default_raw_audio_callback, output);
			}
		}
	} else {
		audio_output_disconnect(output->audio, get_first_mixer(output), default_raw_audio_callback, output);
	}
}

static void *end_data_capture_thread(void *data)
{
	encoded_callback_t encoded_callback;
	obs_output_t *output = data;
	bool has_video = flag_video(output);
	bool has_audio = flag_audio(output);

	if (flag_encoded(output)) {
		if (output->active_delay_ns)
			encoded_callback = process_delay;
		else
			encoded_callback = (has_video && has_audio) ? interleave_packets : default_encoded_callback;

		if (has_video)
			stop_video_encoders(output, encoded_callback);
		if (has_audio)
			stop_audio_encoders(output, encoded_callback);
	} else {
		if (has_video)
			stop_raw_video(output->video, default_raw_video_callback, output);
		if (has_audio)
			stop_raw_audio(output);
	}

	if (flag_service(output))
		obs_service_deactivate(output->service, false);
	if (output->active_delay_ns)
		obs_output_cleanup_delay(output);

	do_output_signal(output, "deactivate");
	os_atomic_set_bool(&output->active, false);
	os_event_signal(output->stopping_event);
	os_atomic_set_bool(&output->end_data_capture_thread_active, false);

	return NULL;
}

static void obs_output_end_data_capture_internal(obs_output_t *output, bool signal)
{
	int ret;

	if (!obs_output_valid(output, "obs_output_end_data_capture"))
		return;

	if (!active(output) || !data_active(output)) {
		if (signal) {
			signal_stop(output);
			output->stop_code = OBS_OUTPUT_SUCCESS;
			os_event_signal(output->stopping_event);
		}
		return;
	}

	if (delay_active(output)) {
		os_atomic_set_bool(&output->delay_capturing, false);

		if (!os_atomic_load_long(&output->delay_restart_refs)) {
			os_atomic_set_bool(&output->delay_active, false);
		} else {
			os_event_signal(output->stopping_event);
			return;
		}
	}

	os_atomic_set_bool(&output->data_active, false);

	if (flag_video(output))
		log_frame_info(output);

	if (data_capture_ending(output))
		pthread_join(output->end_data_capture_thread, NULL);
	os_atomic_set_bool(&output->end_data_capture_thread_active, true);
	ret = pthread_create(&output->end_data_capture_thread, NULL, end_data_capture_thread, output);
	if (ret != 0) {
		blog(LOG_WARNING,
		     "Failed to create end_data_capture_thread "
		     "for output '%s'!",
		     output->context.name);
		end_data_capture_thread(output);
	}

	if (signal) {
		signal_stop(output);
		output->stop_code = OBS_OUTPUT_SUCCESS;
	}
}

void obs_output_end_data_capture(obs_output_t *output)
{
	obs_output_end_data_capture_internal(output, true);
}

static void *reconnect_thread(void *param)
{
	struct obs_output *output = param;

	output->reconnect_thread_active = true;

	if (os_event_timedwait(output->reconnect_stop_event, output->reconnect_retry_cur_msec) == ETIMEDOUT)
		obs_output_actual_start(output);

	if (os_event_try(output->reconnect_stop_event) == EAGAIN)
		pthread_detach(output->reconnect_thread);
	else
		os_atomic_set_bool(&output->reconnecting, false);

	output->reconnect_thread_active = false;
	return NULL;
}

static void output_reconnect(struct obs_output *output)
{
	int ret;

	if (reconnecting(output) && os_event_try(output->reconnect_stop_event) != EAGAIN) {
		os_atomic_set_bool(&output->reconnecting, false);
		return;
	}

	if (!reconnecting(output)) {
		output->reconnect_retry_cur_msec = output->reconnect_retry_sec * 1000;
		output->reconnect_retries = 0;
	}

	if (output->reconnect_retries >= output->reconnect_retry_max) {
		output->stop_code = OBS_OUTPUT_DISCONNECTED;
		os_atomic_set_bool(&output->reconnecting, false);
		if (delay_active(output))
			os_atomic_set_bool(&output->delay_active, false);
		obs_output_end_data_capture(output);
		return;
	}

	if (!reconnecting(output)) {
		os_atomic_set_bool(&output->reconnecting, true);
		os_event_reset(output->reconnect_stop_event);
	}

	if (output->reconnect_retries) {
		output->reconnect_retry_cur_msec =
			(uint32_t)(output->reconnect_retry_cur_msec * output->reconnect_retry_exp);
		if (output->reconnect_retry_cur_msec > RECONNECT_RETRY_MAX_MSEC) {
			output->reconnect_retry_cur_msec = RECONNECT_RETRY_MAX_MSEC;
		}
	}
	output->reconnect_retries++;

	output->stop_code = OBS_OUTPUT_DISCONNECTED;
	ret = pthread_create(&output->reconnect_thread, NULL, &reconnect_thread, output);
	/* pthread_create returns an error number (positive) on failure, so
	 * test against zero rather than for a negative value */
	if (ret != 0) {
		blog(LOG_WARNING, "Failed to create reconnect thread");
		os_atomic_set_bool(&output->reconnecting, false);
	} else {
		blog(LOG_INFO, "Output '%s': Reconnecting in %.02f seconds..", output->context.name,
		     (float)(output->reconnect_retry_cur_msec / 1000.0));
		signal_reconnect(output);
	}
}

static inline bool check_reconnect_cb(obs_output_t *output, int code)
{
	if (!output->reconnect_callback.reconnect_cb)
		return true;

	return output->reconnect_callback.reconnect_cb(output->reconnect_callback.param, output, code);
}

static inline bool can_reconnect(obs_output_t *output, int code)
{
	bool reconnect_active = output->reconnect_retry_max != 0;

	if (reconnect_active && !check_reconnect_cb(output, code))
		return false;

	return (reconnecting(output) && code != OBS_OUTPUT_SUCCESS) ||
	       (reconnect_active && code == OBS_OUTPUT_DISCONNECTED);
}

void obs_output_signal_stop(obs_output_t *output, int code)
{
	if (!obs_output_valid(output, "obs_output_signal_stop"))
		return;

	output->stop_code = code;

	if (can_reconnect(output, code)) {
		if (delay_active(output))
			os_atomic_inc_long(&output->delay_restart_refs);
		obs_output_end_data_capture_internal(output, false);
		output_reconnect(output);
	} else {
		if (delay_active(output))
			os_atomic_set_bool(&output->delay_active, false);
		if (reconnecting(output))
			os_atomic_set_bool(&output->reconnecting, false);
		obs_output_end_data_capture(output);
	}
}

void obs_output_release(obs_output_t *output)
{
	if (!output)
		return;

	obs_weak_output_t *control = get_weak(output);
	if (obs_ref_release(&control->ref)) {
		// The order of operations is important here since
		// get_context_by_name in obs.c relies on weak refs
		// being alive while the context is listed
		obs_output_destroy(output);
		obs_weak_output_release(control);
	}
}

void obs_weak_output_addref(obs_weak_output_t *weak)
{
	if (!weak)
		return;
obs_weak_ref_addref(&weak->ref); } void obs_weak_output_release(obs_weak_output_t *weak) { if (!weak) return; if (obs_weak_ref_release(&weak->ref)) bfree(weak); } obs_output_t *obs_output_get_ref(obs_output_t *output) { if (!output) return NULL; return obs_weak_output_get_output(get_weak(output)); } obs_weak_output_t *obs_output_get_weak_output(obs_output_t *output) { if (!output) return NULL; obs_weak_output_t *weak = get_weak(output); obs_weak_output_addref(weak); return weak; } obs_output_t *obs_weak_output_get_output(obs_weak_output_t *weak) { if (!weak) return NULL; if (obs_weak_ref_get_ref(&weak->ref)) return weak->output; return NULL; } bool obs_weak_output_references_output(obs_weak_output_t *weak, obs_output_t *output) { return weak && output && weak->output == output; } void *obs_output_get_type_data(obs_output_t *output) { return obs_output_valid(output, "obs_output_get_type_data") ? output->info.type_data : NULL; } const char *obs_output_get_id(const obs_output_t *output) { return obs_output_valid(output, "obs_output_get_id") ? 
	output->info.id : NULL;
}

void obs_output_caption(obs_output_t *output, const struct obs_source_cea_708 *captions)
{
	for (int i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		struct caption_track_data *ctrack = output->caption_tracks[i];
		if (!ctrack) {
			continue;
		}
		pthread_mutex_lock(&ctrack->caption_mutex);
		/* use a distinct index here; the inner loop previously
		 * shadowed the outer loop variable 'i' */
		for (size_t j = 0; j < captions->packets; j++) {
			deque_push_back(&ctrack->caption_data, captions->data + (j * 3), 3 * sizeof(uint8_t));
		}
		pthread_mutex_unlock(&ctrack->caption_mutex);
	}
}

static struct caption_text *caption_text_new(const char *text, size_t bytes, struct caption_text *tail,
					     struct caption_text **head, double display_duration)
{
	struct caption_text *next = bzalloc(sizeof(struct caption_text));
	snprintf(&next->text[0], CAPTION_LINE_BYTES + 1, "%.*s", (int)bytes, text);
	next->display_duration = display_duration;

	if (!*head) {
		*head = next;
	} else {
		tail->next = next;
	}
	return next;
}

void obs_output_output_caption_text1(obs_output_t *output, const char *text)
{
	if (!obs_output_valid(output, "obs_output_output_caption_text1"))
		return;

	obs_output_output_caption_text2(output, text, 2.0f);
}

void obs_output_output_caption_text2(obs_output_t *output, const char *text, double display_duration)
{
	if (!obs_output_valid(output, "obs_output_output_caption_text2"))
		return;
	if (!active(output))
		return;

	/* queue the full text; caption_text_new truncates each entry to
	 * CAPTION_LINE_BYTES characters */
	int size = (int)strlen(text);
	blog(LOG_DEBUG, "Caption text: %s", text);

	for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) {
		struct caption_track_data *ctrack = output->caption_tracks[i];
		if (!ctrack) {
			continue;
		}
		pthread_mutex_lock(&ctrack->caption_mutex);
		ctrack->caption_tail =
			caption_text_new(text, size, ctrack->caption_tail, &ctrack->caption_head, display_duration);
		pthread_mutex_unlock(&ctrack->caption_mutex);
	}
}

float obs_output_get_congestion(obs_output_t *output)
{
	if (!obs_output_valid(output, "obs_output_get_congestion"))
		return 0;

	if (output->info.get_congestion) {
		float val =
output->info.get_congestion(output->context.data); if (val < 0.0f) val = 0.0f; else if (val > 1.0f) val = 1.0f; return val; } return 0; } int obs_output_get_connect_time_ms(obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_connect_time_ms")) return -1; if (output->info.get_connect_time_ms) return output->info.get_connect_time_ms(output->context.data); return -1; } const char *obs_output_get_last_error(obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_last_error")) return NULL; if (output->last_error_message) { return output->last_error_message; } else { for (size_t i = 0; i < MAX_OUTPUT_VIDEO_ENCODERS; i++) { obs_encoder_t *vencoder = output->video_encoders[i]; if (vencoder && vencoder->last_error_message) { return vencoder->last_error_message; } } for (size_t i = 0; i < MAX_OUTPUT_AUDIO_ENCODERS; i++) { obs_encoder_t *aencoder = output->audio_encoders[i]; if (aencoder && aencoder->last_error_message) { return aencoder->last_error_message; } } } return NULL; } void obs_output_set_last_error(obs_output_t *output, const char *message) { if (!obs_output_valid(output, "obs_output_set_last_error")) return; if (output->last_error_message) bfree(output->last_error_message); if (message) output->last_error_message = bstrdup(message); else output->last_error_message = NULL; } bool obs_output_reconnecting(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_reconnecting")) return false; return reconnecting(output); } const char *obs_output_get_supported_video_codecs(const obs_output_t *output) { return obs_output_valid(output, __FUNCTION__) ? output->info.encoded_video_codecs : NULL; } const char *obs_output_get_supported_audio_codecs(const obs_output_t *output) { return obs_output_valid(output, __FUNCTION__) ? 
output->info.encoded_audio_codecs : NULL; } const char *obs_output_get_protocols(const obs_output_t *output) { if (!obs_output_valid(output, "obs_output_get_protocols")) return NULL; return flag_service(output) ? output->info.protocols : NULL; } void obs_enum_output_types_with_protocol(const char *protocol, void *data, bool (*enum_cb)(void *data, const char *id)) { if (!obs_is_output_protocol_registered(protocol)) return; size_t protocol_len = strlen(protocol); for (size_t i = 0; i < obs->output_types.num; i++) { if (!(obs->output_types.array[i].flags & OBS_OUTPUT_SERVICE)) continue; const char *substr = obs->output_types.array[i].protocols; while (substr && substr[0] != '\0') { const char *next = strchr(substr, ';'); size_t len = next ? (size_t)(next - substr) : strlen(substr); if (protocol_len == len && strncmp(substr, protocol, len) == 0) { if (!enum_cb(data, obs->output_types.array[i].id)) return; } substr = next ? next + 1 : NULL; } } } const char *obs_get_output_supported_video_codecs(const char *id) { const struct obs_output_info *info = find_output(id); return info ? info->encoded_video_codecs : NULL; } const char *obs_get_output_supported_audio_codecs(const char *id) { const struct obs_output_info *info = find_output(id); return info ? 
info->encoded_audio_codecs : NULL; } void obs_output_add_packet_callback(obs_output_t *output, void (*packet_cb)(obs_output_t *output, struct encoder_packet *pkt, struct encoder_packet_time *pkt_time, void *param), void *param) { struct packet_callback data = {packet_cb, param}; pthread_mutex_lock(&output->pkt_callbacks_mutex); da_insert(output->pkt_callbacks, 0, &data); pthread_mutex_unlock(&output->pkt_callbacks_mutex); } void obs_output_remove_packet_callback(obs_output_t *output, void (*packet_cb)(obs_output_t *output, struct encoder_packet *pkt, struct encoder_packet_time *pkt_time, void *param), void *param) { struct packet_callback data = {packet_cb, param}; pthread_mutex_lock(&output->pkt_callbacks_mutex); da_erase_item(output->pkt_callbacks, &data); pthread_mutex_unlock(&output->pkt_callbacks_mutex); } void obs_output_set_reconnect_callback(obs_output_t *output, bool (*reconnect_cb)(void *data, obs_output_t *output, int code), void *param) { if (!reconnect_cb) { output->reconnect_callback.reconnect_cb = NULL; output->reconnect_callback.param = NULL; } else { output->reconnect_callback.reconnect_cb = reconnect_cb; output->reconnect_callback.param = param; } } obs-studio-32.1.0-sources/libobs/media-io/000755 001751 001751 00000000000 15153330731 021173 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/media-io/frame-rate.h000644 001751 001751 00000001121 15153330235 023361 0ustar00runnerrunner000000 000000 #pragma once #ifdef __cplusplus extern "C" { #endif struct media_frames_per_second { uint32_t numerator; uint32_t denominator; }; static inline double media_frames_per_second_to_frame_interval(struct media_frames_per_second fps) { return (double)fps.denominator / fps.numerator; } static inline double media_frames_per_second_to_fps(struct media_frames_per_second fps) { return (double)fps.numerator / fps.denominator; } static inline bool media_frames_per_second_is_valid(struct media_frames_per_second fps) { return fps.numerator && 
fps.denominator; } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/media-io/media-io-defs.h000644 001751 001751 00000001620 15153330235 023745 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #define MAX_AV_PLANES 8 obs-studio-32.1.0-sources/libobs/media-io/audio-resampler-ffmpeg.c000644 001751 001751 00000014652 15153330235 025701 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/

#include "../util/bmem.h"
#include "audio-resampler.h"
#include "audio-io.h"

/* FFmpeg headers; reconstructed from the APIs used below, since the
 * original angle-bracket include targets were lost */
#include <libavutil/avutil.h>
#include <libavutil/channel_layout.h>
#include <libavutil/mem.h>
#include <libavutil/samplefmt.h>
#include <libswresample/swresample.h>

struct audio_resampler {
	struct SwrContext *context;
	bool opened;

	uint32_t input_freq;
	enum AVSampleFormat input_format;

	uint8_t *output_buffer[MAX_AV_PLANES];
	enum AVSampleFormat output_format;
	int output_size;
	uint32_t output_ch;
	uint32_t output_freq;
	uint32_t output_planes;

#if LIBSWRESAMPLE_VERSION_INT < AV_VERSION_INT(4, 5, 100)
	uint64_t input_layout;
	uint64_t output_layout;
#else
	AVChannelLayout input_ch_layout;
	AVChannelLayout output_ch_layout;
#endif
};

static inline enum AVSampleFormat convert_audio_format(enum audio_format format)
{
	switch (format) {
	case AUDIO_FORMAT_UNKNOWN:
		return AV_SAMPLE_FMT_S16;
	case AUDIO_FORMAT_U8BIT:
		return AV_SAMPLE_FMT_U8;
	case AUDIO_FORMAT_16BIT:
		return AV_SAMPLE_FMT_S16;
	case AUDIO_FORMAT_32BIT:
		return AV_SAMPLE_FMT_S32;
	case AUDIO_FORMAT_FLOAT:
		return AV_SAMPLE_FMT_FLT;
	case AUDIO_FORMAT_U8BIT_PLANAR:
		return AV_SAMPLE_FMT_U8P;
	case AUDIO_FORMAT_16BIT_PLANAR:
		return AV_SAMPLE_FMT_S16P;
	case AUDIO_FORMAT_32BIT_PLANAR:
		return AV_SAMPLE_FMT_S32P;
	case AUDIO_FORMAT_FLOAT_PLANAR:
		return AV_SAMPLE_FMT_FLTP;
	}

	/* shouldn't get here */
	return AV_SAMPLE_FMT_S16;
}

#if LIBSWRESAMPLE_VERSION_INT < AV_VERSION_INT(4, 5, 100)
static inline uint64_t convert_speaker_layout(enum speaker_layout layout)
{
	switch (layout) {
	case SPEAKERS_UNKNOWN:
		return 0;
	case SPEAKERS_MONO:
		return AV_CH_LAYOUT_MONO;
	case SPEAKERS_STEREO:
		return AV_CH_LAYOUT_STEREO;
	case SPEAKERS_2POINT1:
		return AV_CH_LAYOUT_SURROUND;
	case SPEAKERS_4POINT0:
		return AV_CH_LAYOUT_4POINT0;
	case SPEAKERS_4POINT1:
		return AV_CH_LAYOUT_4POINT1;
	case SPEAKERS_5POINT1:
		return AV_CH_LAYOUT_5POINT1_BACK;
	case SPEAKERS_7POINT1:
		return AV_CH_LAYOUT_7POINT1;
	}

	/* shouldn't get here */
	return 0;
}
#endif

audio_resampler_t *audio_resampler_create(const struct resample_info *dst, const struct resample_info *src)
{
struct audio_resampler *rs = bzalloc(sizeof(struct audio_resampler)); int errcode; rs->opened = false; rs->input_freq = src->samples_per_sec; rs->input_format = convert_audio_format(src->format); rs->output_size = 0; rs->output_ch = get_audio_channels(dst->speakers); rs->output_freq = dst->samples_per_sec; rs->output_format = convert_audio_format(dst->format); rs->output_planes = is_audio_planar(dst->format) ? rs->output_ch : 1; #if (LIBSWRESAMPLE_VERSION_INT < AV_VERSION_INT(4, 5, 100)) rs->input_layout = convert_speaker_layout(src->speakers); rs->output_layout = convert_speaker_layout(dst->speakers); rs->context = swr_alloc_set_opts(NULL, rs->output_layout, rs->output_format, dst->samples_per_sec, rs->input_layout, rs->input_format, src->samples_per_sec, 0, NULL); #else int nb_ch = get_audio_channels(src->speakers); av_channel_layout_default(&rs->input_ch_layout, nb_ch); av_channel_layout_default(&rs->output_ch_layout, rs->output_ch); if (src->speakers == SPEAKERS_4POINT1) rs->input_ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_4POINT1; if (dst->speakers == SPEAKERS_4POINT1) rs->output_ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_4POINT1; swr_alloc_set_opts2(&rs->context, &rs->output_ch_layout, rs->output_format, dst->samples_per_sec, &rs->input_ch_layout, rs->input_format, src->samples_per_sec, 0, NULL); #endif if (!rs->context) { blog(LOG_ERROR, "swr_alloc_set_opts failed"); audio_resampler_destroy(rs); return NULL; } #if (LIBSWRESAMPLE_VERSION_INT < AV_VERSION_INT(4, 5, 100)) if (rs->input_layout == AV_CH_LAYOUT_MONO && rs->output_ch > 1) { #else AVChannelLayout test_ch = AV_CHANNEL_LAYOUT_MONO; if (av_channel_layout_compare(&rs->input_ch_layout, &test_ch) == 0 && rs->output_ch > 1) { #endif const double matrix[MAX_AUDIO_CHANNELS][MAX_AUDIO_CHANNELS] = { {1}, {1, 1}, {1, 1, 0}, {1, 1, 1, 1}, {1, 1, 1, 0, 1}, {1, 1, 1, 1, 1, 1}, {1, 1, 1, 0, 1, 1, 1}, {1, 1, 1, 0, 1, 1, 1, 1}, }; if (swr_set_matrix(rs->context, matrix[rs->output_ch - 1], 1) < 0) 
			blog(LOG_DEBUG, "swr_set_matrix failed for mono upmix");
	}

	errcode = swr_init(rs->context);
	if (errcode != 0) {
		/* the failing call is swr_init; the old log message referred
		 * to the long-removed libavresample API */
		blog(LOG_ERROR, "swr_init failed: error code %d", errcode);
		audio_resampler_destroy(rs);
		return NULL;
	}

	return rs;
}

void audio_resampler_destroy(audio_resampler_t *rs)
{
	if (rs) {
		if (rs->context)
			swr_free(&rs->context);
		if (rs->output_buffer[0])
			av_freep(&rs->output_buffer[0]);

		bfree(rs);
	}
}

bool audio_resampler_resample(audio_resampler_t *rs, uint8_t *output[], uint32_t *out_frames, uint64_t *ts_offset,
			      const uint8_t *const input[], uint32_t in_frames)
{
	if (!rs)
		return false;

	struct SwrContext *context = rs->context;
	int ret;

	int64_t delay = swr_get_delay(context, rs->input_freq);
	int estimated = (int)av_rescale_rnd(delay + (int64_t)in_frames, (int64_t)rs->output_freq,
					    (int64_t)rs->input_freq, AV_ROUND_UP);

	*ts_offset = (uint64_t)swr_get_delay(context, 1000000000);

	/* resize the buffer if bigger */
	if (estimated > rs->output_size) {
		if (rs->output_buffer[0])
			av_freep(&rs->output_buffer[0]);

		av_samples_alloc(rs->output_buffer, NULL, rs->output_ch, estimated, rs->output_format, 0);

		rs->output_size = estimated;
	}

	ret = swr_convert(context, rs->output_buffer, rs->output_size, (const uint8_t **)input, in_frames);

	if (ret < 0) {
		blog(LOG_ERROR, "swr_convert failed: %d", ret);
		return false;
	}

	for (uint32_t i = 0; i < rs->output_planes; i++)
		output[i] = rs->output_buffer[i];

	*out_frames = (uint32_t)ret;
	return true;
}

obs-studio-32.1.0-sources/libobs/media-io/media-remux.c

/******************************************************************************
 Copyright (C) 2023 by Ruwen Hahn

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 2 of the License, or
 (at your option) any later version.
 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 GNU General Public License for more details.

 You should have received a copy of the GNU General Public License
 along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "media-remux.h"

#include "../util/base.h"
#include "../util/bmem.h"
#include "../util/platform.h"

/* reconstructed from the APIs used below, since the original
 * angle-bracket include targets were lost */
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>

#include <sys/types.h>
#include <sys/stat.h>

struct media_remux_job {
	int64_t in_size;
	AVFormatContext *ifmt_ctx, *ofmt_ctx;
};

static inline void init_size(media_remux_job_t job, const char *in_filename)
{
#ifdef _MSC_VER
	struct _stat64 st = {0};
	_stat64(in_filename, &st);
#else
	struct stat st = {0};
	stat(in_filename, &st);
#endif
	job->in_size = st.st_size;
}

static inline bool init_input(media_remux_job_t job, const char *in_filename)
{
	int ret = avformat_open_input(&job->ifmt_ctx, in_filename, NULL, NULL);
	if (ret < 0) {
		blog(LOG_ERROR, "media_remux: Could not open input file '%s'", in_filename);
		return false;
	}

	ret = avformat_find_stream_info(job->ifmt_ctx, NULL);
	if (ret < 0) {
		blog(LOG_ERROR, "media_remux: Failed to retrieve input stream information");
		return false;
	}

#ifndef NDEBUG
	av_dump_format(job->ifmt_ctx, 0, in_filename, false);
#endif
	return true;
}

static inline bool init_output(media_remux_job_t job, const char *out_filename)
{
	int ret;

	avformat_alloc_output_context2(&job->ofmt_ctx, NULL, NULL, out_filename);
	if (!job->ofmt_ctx) {
		blog(LOG_ERROR, "media_remux: Could not create output context");
		return false;
	}

	for (unsigned i = 0; i < job->ifmt_ctx->nb_streams; i++) {
		AVStream *in_stream = job->ifmt_ctx->streams[i];
		AVStream *out_stream = avformat_new_stream(job->ofmt_ctx, NULL);
		if (!out_stream) {
			blog(LOG_ERROR, "media_remux: Failed to allocate output stream");
			return false;
		}

		ret = avcodec_parameters_copy(out_stream->codecpar,
in_stream->codecpar); if (ret < 0) { blog(LOG_ERROR, "media_remux: Failed to copy parameters"); return false; } av_dict_copy(&out_stream->metadata, in_stream->metadata, 0); if (in_stream->codecpar->codec_id == AV_CODEC_ID_HEVC && job->ofmt_ctx->oformat->codec_tag && av_codec_get_id(job->ofmt_ctx->oformat->codec_tag, MKTAG('h', 'v', 'c', '1')) == out_stream->codecpar->codec_id) { // Tag HEVC files with industry standard HVC1 tag for wider device compatibility // when HVC1 tag is supported by out stream codec out_stream->codecpar->codec_tag = MKTAG('h', 'v', 'c', '1'); } else { // Otherwise tag 0 to let FFmpeg automatically select the appropriate tag out_stream->codecpar->codec_tag = 0; } if (in_stream->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) { av_channel_layout_default(&out_stream->codecpar->ch_layout, in_stream->codecpar->ch_layout.nb_channels); /* The avutil default channel layout for 5 channels is * 5.0, which OBS does not support. Manually set 5 * channels to 4.1. */ if (in_stream->codecpar->ch_layout.nb_channels == 5) out_stream->codecpar->ch_layout = (AVChannelLayout)AV_CHANNEL_LAYOUT_4POINT1; } } #ifndef NDEBUG av_dump_format(job->ofmt_ctx, 0, out_filename, true); #endif if (!(job->ofmt_ctx->oformat->flags & AVFMT_NOFILE)) { ret = avio_open(&job->ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE); if (ret < 0) { blog(LOG_ERROR, "media_remux: Failed to open output" " file '%s'", out_filename); return false; } } return true; } bool media_remux_job_create(media_remux_job_t *job, const char *in_filename, const char *out_filename) { if (!job) return false; *job = NULL; if (!os_file_exists(in_filename)) return false; if (strcmp(in_filename, out_filename) == 0) return false; *job = (media_remux_job_t)bzalloc(sizeof(struct media_remux_job)); if (!*job) return false; init_size(*job, in_filename); if (!init_input(*job, in_filename)) goto fail; if (!init_output(*job, out_filename)) goto fail; return true; fail: media_remux_job_destroy(*job); return false; } static inline 
void process_packet(AVPacket *pkt, AVStream *in_stream, AVStream *out_stream) { pkt->pts = av_rescale_q_rnd(pkt->pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX); pkt->dts = av_rescale_q_rnd(pkt->dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX); pkt->duration = (int)av_rescale_q(pkt->duration, in_stream->time_base, out_stream->time_base); pkt->pos = -1; } static inline int process_packets(media_remux_job_t job, media_remux_progress_callback callback, void *data) { AVPacket pkt; int ret, throttle = 0; for (;;) { ret = av_read_frame(job->ifmt_ctx, &pkt); if (ret < 0) { if (ret != AVERROR_EOF) blog(LOG_ERROR, "media_remux: Error reading" " packet: %s", av_err2str(ret)); break; } if (callback != NULL && throttle++ > 10) { float progress = pkt.pos / (float)job->in_size * 100.f; if (!callback(data, progress)) break; throttle = 0; } process_packet(&pkt, job->ifmt_ctx->streams[pkt.stream_index], job->ofmt_ctx->streams[pkt.stream_index]); ret = av_interleaved_write_frame(job->ofmt_ctx, &pkt); av_packet_unref(&pkt); if (ret < 0) { blog(LOG_ERROR, "media_remux: Error muxing packet: %s", av_err2str(ret)); /* Treat "Invalid data found when processing input" and * "Invalid argument" as non-fatal */ if (ret == AVERROR_INVALIDDATA || ret == -EINVAL) continue; break; } } return ret; } bool media_remux_job_process(media_remux_job_t job, media_remux_progress_callback callback, void *data) { int ret; bool success = false; if (!job) return success; ret = avformat_write_header(job->ofmt_ctx, NULL); if (ret < 0) { blog(LOG_ERROR, "media_remux: Error opening output file: %s", av_err2str(ret)); return success; } if (callback != NULL) callback(data, 0.f); ret = process_packets(job, callback, data); success = ret >= 0 || ret == AVERROR_EOF; ret = av_write_trailer(job->ofmt_ctx); if (ret < 0) { blog(LOG_ERROR, "media_remux: av_write_trailer: %s", av_err2str(ret)); success = false; } if (callback != NULL) 
callback(data, 100.f); return success; } void media_remux_job_destroy(media_remux_job_t job) { if (!job) return; avformat_close_input(&job->ifmt_ctx); if (job->ofmt_ctx && !(job->ofmt_ctx->oformat->flags & AVFMT_NOFILE)) avio_close(job->ofmt_ctx->pb); avformat_free_context(job->ofmt_ctx); bfree(job); } obs-studio-32.1.0-sources/libobs/media-io/format-conversion.h000644 001751 001751 00000004024 15153330235 025016 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once #include "../util/c99defs.h" #ifdef __cplusplus extern "C" { #endif /* * Functions for converting to and from packed 444 YUV */ EXPORT void compress_uyvx_to_i420(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output[], const uint32_t out_linesize[]); EXPORT void compress_uyvx_to_nv12(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output[], const uint32_t out_linesize[]); EXPORT void convert_uyvx_to_i444(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output[], const uint32_t out_linesize[]); EXPORT void decompress_nv12(const uint8_t *const input[], const uint32_t in_linesize[], uint32_t start_y, uint32_t end_y, uint8_t *output, uint32_t out_linesize); EXPORT void decompress_420(const uint8_t *const input[], const uint32_t in_linesize[], uint32_t start_y, uint32_t end_y, uint8_t *output, uint32_t out_linesize); EXPORT void decompress_422(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output, uint32_t out_linesize, bool leading_lum); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/media-io/video-fourcc.c000644 001751 001751 00000003513 15153330235 023725 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2014 by Ruwen Hahn This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "../util/c99defs.h" #include "video-io.h" #define MAKE_FOURCC(a, b, c, d) ((uint32_t)(((d) << 24) | ((c) << 16) | ((b) << 8) | (a))) enum video_format video_format_from_fourcc(uint32_t fourcc) { switch (fourcc) { case MAKE_FOURCC('U', 'Y', 'V', 'Y'): case MAKE_FOURCC('H', 'D', 'Y', 'C'): case MAKE_FOURCC('U', 'Y', 'N', 'V'): case MAKE_FOURCC('U', 'Y', 'N', 'Y'): case MAKE_FOURCC('u', 'y', 'v', '1'): case MAKE_FOURCC('2', 'v', 'u', 'y'): case MAKE_FOURCC('2', 'V', 'u', 'y'): return VIDEO_FORMAT_UYVY; case MAKE_FOURCC('Y', 'U', 'Y', '2'): case MAKE_FOURCC('Y', '4', '2', '2'): case MAKE_FOURCC('V', '4', '2', '2'): case MAKE_FOURCC('V', 'Y', 'U', 'Y'): case MAKE_FOURCC('Y', 'U', 'N', 'V'): case MAKE_FOURCC('y', 'u', 'v', '2'): case MAKE_FOURCC('y', 'u', 'v', 's'): return VIDEO_FORMAT_YUY2; case MAKE_FOURCC('Y', 'V', 'Y', 'U'): return VIDEO_FORMAT_YVYU; case MAKE_FOURCC('Y', '8', '0', '0'): return VIDEO_FORMAT_Y800; } return VIDEO_FORMAT_NONE; } obs-studio-32.1.0-sources/libobs/media-io/audio-resampler.h000644 001751 001751 00000003041 15153330235 024432 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/

#pragma once

#include "../util/c99defs.h"
#include "audio-io.h"

#ifdef __cplusplus
extern "C" {
#endif

struct audio_resampler;
typedef struct audio_resampler audio_resampler_t;

struct resample_info {
	uint32_t samples_per_sec;
	enum audio_format format;
	enum speaker_layout speakers;
};

EXPORT audio_resampler_t *audio_resampler_create(const struct resample_info *dst, const struct resample_info *src);
EXPORT void audio_resampler_destroy(audio_resampler_t *resampler);

EXPORT bool audio_resampler_resample(audio_resampler_t *resampler, uint8_t *output[], uint32_t *out_frames,
				     uint64_t *ts_offset, const uint8_t *const input[], uint32_t in_frames);

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/media-io/audio-math.h

/******************************************************************************
 Copyright (C) 2023 by Lain Bailey

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 2 of the License, or
 (at your option) any later version.

 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 GNU General Public License for more details.

 You should have received a copy of the GNU General Public License
 along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "../util/c99defs.h"
#include <math.h> /* log10f, powf, INFINITY, isfinite */

#ifdef _MSC_VER
#include <float.h> /* reconstructed; the original include target was lost */
#pragma warning(push)
#pragma warning(disable : 4056)
#pragma warning(disable : 4756)
#endif

static inline float mul_to_db(const float mul)
{
	return (mul == 0.0f) ?
-INFINITY : (20.0f * log10f(mul)); } static inline float db_to_mul(const float db) { return isfinite((double)db) ? powf(10.0f, db / 20.0f) : 0.0f; } #ifdef _MSC_VER #pragma warning(pop) #endif obs-studio-32.1.0-sources/libobs/media-io/video-io.h000644 001751 001751 00000021321 15153330235 023055 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "media-io-defs.h" #include "../util/c99defs.h" #ifdef __cplusplus extern "C" { #endif struct video_frame; /* Base video output component. Use this to create a video output track. 
*/ struct video_output; typedef struct video_output video_t; enum video_format { VIDEO_FORMAT_NONE, /* planar 4:2:0 formats */ VIDEO_FORMAT_I420, /* three-plane */ VIDEO_FORMAT_NV12, /* two-plane, luma and packed chroma */ /* packed 4:2:2 formats */ VIDEO_FORMAT_YVYU, VIDEO_FORMAT_YUY2, /* YUYV */ VIDEO_FORMAT_UYVY, /* packed uncompressed formats */ VIDEO_FORMAT_RGBA, VIDEO_FORMAT_BGRA, VIDEO_FORMAT_BGRX, VIDEO_FORMAT_Y800, /* grayscale */ /* planar 4:4:4 */ VIDEO_FORMAT_I444, /* more packed uncompressed formats */ VIDEO_FORMAT_BGR3, /* planar 4:2:2 */ VIDEO_FORMAT_I422, /* planar 4:2:0 with alpha */ VIDEO_FORMAT_I40A, /* planar 4:2:2 with alpha */ VIDEO_FORMAT_I42A, /* planar 4:4:4 with alpha */ VIDEO_FORMAT_YUVA, /* packed 4:4:4 with alpha */ VIDEO_FORMAT_AYUV, /* planar 4:2:0 format, 10 bpp */ VIDEO_FORMAT_I010, /* three-plane */ VIDEO_FORMAT_P010, /* two-plane, luma and packed chroma */ /* planar 4:2:2 format, 10 bpp */ VIDEO_FORMAT_I210, /* planar 4:4:4 format, 12 bpp */ VIDEO_FORMAT_I412, /* planar 4:4:4:4 format, 12 bpp */ VIDEO_FORMAT_YA2L, /* planar 4:2:2 format, 16 bpp */ VIDEO_FORMAT_P216, /* two-plane, luma and packed chroma */ /* planar 4:4:4 format, 16 bpp */ VIDEO_FORMAT_P416, /* two-plane, luma and packed chroma */ /* packed 4:2:2 format, 10 bpp */ VIDEO_FORMAT_V210, /* packed uncompressed 10-bit format */ VIDEO_FORMAT_R10L, }; enum video_trc { VIDEO_TRC_DEFAULT, VIDEO_TRC_SRGB, VIDEO_TRC_PQ, VIDEO_TRC_HLG, }; enum video_colorspace { VIDEO_CS_DEFAULT, VIDEO_CS_601, VIDEO_CS_709, VIDEO_CS_SRGB, VIDEO_CS_2100_PQ, VIDEO_CS_2100_HLG, }; enum video_range_type { VIDEO_RANGE_DEFAULT, VIDEO_RANGE_PARTIAL, VIDEO_RANGE_FULL, }; struct video_data { uint8_t *data[MAX_AV_PLANES]; uint32_t linesize[MAX_AV_PLANES]; uint64_t timestamp; }; struct video_output_info { const char *name; enum video_format format; uint32_t fps_num; uint32_t fps_den; uint32_t width; uint32_t height; size_t cache_size; enum video_colorspace colorspace; enum video_range_type range; }; 
static inline bool format_is_yuv(enum video_format format) { switch (format) { case VIDEO_FORMAT_I420: case VIDEO_FORMAT_NV12: case VIDEO_FORMAT_I422: case VIDEO_FORMAT_I210: case VIDEO_FORMAT_YVYU: case VIDEO_FORMAT_YUY2: case VIDEO_FORMAT_UYVY: case VIDEO_FORMAT_I444: case VIDEO_FORMAT_I412: case VIDEO_FORMAT_I40A: case VIDEO_FORMAT_I42A: case VIDEO_FORMAT_YUVA: case VIDEO_FORMAT_YA2L: case VIDEO_FORMAT_AYUV: case VIDEO_FORMAT_I010: case VIDEO_FORMAT_P010: case VIDEO_FORMAT_P216: case VIDEO_FORMAT_P416: case VIDEO_FORMAT_V210: return true; case VIDEO_FORMAT_NONE: case VIDEO_FORMAT_RGBA: case VIDEO_FORMAT_BGRA: case VIDEO_FORMAT_BGRX: case VIDEO_FORMAT_Y800: case VIDEO_FORMAT_BGR3: case VIDEO_FORMAT_R10L: return false; } return false; } static inline const char *get_video_format_name(enum video_format format) { switch (format) { case VIDEO_FORMAT_I420: return "I420"; case VIDEO_FORMAT_NV12: return "NV12"; case VIDEO_FORMAT_I422: return "I422"; case VIDEO_FORMAT_I210: return "I210"; case VIDEO_FORMAT_YVYU: return "YVYU"; case VIDEO_FORMAT_YUY2: return "YUY2"; case VIDEO_FORMAT_UYVY: return "UYVY"; case VIDEO_FORMAT_RGBA: return "RGBA"; case VIDEO_FORMAT_BGRA: return "BGRA"; case VIDEO_FORMAT_BGRX: return "BGRX"; case VIDEO_FORMAT_I444: return "I444"; case VIDEO_FORMAT_I412: return "I412"; case VIDEO_FORMAT_Y800: return "Y800"; case VIDEO_FORMAT_BGR3: return "BGR3"; case VIDEO_FORMAT_I40A: return "I40A"; case VIDEO_FORMAT_I42A: return "I42A"; case VIDEO_FORMAT_YUVA: return "YUVA"; case VIDEO_FORMAT_YA2L: return "YA2L"; case VIDEO_FORMAT_AYUV: return "AYUV"; case VIDEO_FORMAT_I010: return "I010"; case VIDEO_FORMAT_P010: return "P010"; case VIDEO_FORMAT_P216: return "P216"; case VIDEO_FORMAT_P416: return "P416"; case VIDEO_FORMAT_V210: return "v210"; case VIDEO_FORMAT_R10L: return "R10l"; case VIDEO_FORMAT_NONE:; } return "None"; } static inline const char *get_video_colorspace_name(enum video_colorspace cs) { switch (cs) { case VIDEO_CS_DEFAULT: case VIDEO_CS_709: 
return "Rec. 709"; case VIDEO_CS_SRGB: return "sRGB"; case VIDEO_CS_601: return "Rec. 601"; case VIDEO_CS_2100_PQ: return "Rec. 2100 (PQ)"; case VIDEO_CS_2100_HLG: return "Rec. 2100 (HLG)"; } return "Unknown"; } static inline enum video_range_type resolve_video_range(enum video_format format, enum video_range_type range) { if (range == VIDEO_RANGE_DEFAULT) { range = format_is_yuv(format) ? VIDEO_RANGE_PARTIAL : VIDEO_RANGE_FULL; } return range; } static inline const char *get_video_range_name(enum video_format format, enum video_range_type range) { range = resolve_video_range(format, range); return range == VIDEO_RANGE_FULL ? "Full" : "Partial"; } enum video_scale_type { VIDEO_SCALE_DEFAULT, VIDEO_SCALE_POINT, VIDEO_SCALE_FAST_BILINEAR, VIDEO_SCALE_BILINEAR, VIDEO_SCALE_BICUBIC, }; struct video_scale_info { enum video_format format; uint32_t width; uint32_t height; enum video_range_type range; enum video_colorspace colorspace; }; EXPORT enum video_format video_format_from_fourcc(uint32_t fourcc); EXPORT bool video_format_get_parameters(enum video_colorspace color_space, enum video_range_type range, float matrix[16], float min_range[3], float max_range[3]); EXPORT bool video_format_get_parameters_for_format(enum video_colorspace color_space, enum video_range_type range, enum video_format format, float matrix[16], float min_range[3], float max_range[3]); #define VIDEO_OUTPUT_SUCCESS 0 #define VIDEO_OUTPUT_INVALIDPARAM -1 #define VIDEO_OUTPUT_FAIL -2 EXPORT int video_output_open(video_t **video, struct video_output_info *info); EXPORT void video_output_close(video_t *video); EXPORT bool video_output_connect(video_t *video, const struct video_scale_info *conversion, void (*callback)(void *param, struct video_data *frame), void *param); EXPORT bool video_output_connect2(video_t *video, const struct video_scale_info *conversion, uint32_t frame_rate_divisor, void (*callback)(void *param, struct video_data *frame), void *param); EXPORT void video_output_disconnect(video_t 
*video, void (*callback)(void *param, struct video_data *frame), void *param); EXPORT bool video_output_disconnect2(video_t *video, void (*callback)(void *param, struct video_data *frame), void *param); EXPORT bool video_output_active(const video_t *video); EXPORT const struct video_output_info *video_output_get_info(const video_t *video); EXPORT bool video_output_lock_frame(video_t *video, struct video_frame *frame, int count, uint64_t timestamp); EXPORT void video_output_unlock_frame(video_t *video); EXPORT uint64_t video_output_get_frame_time(const video_t *video); EXPORT void video_output_stop(video_t *video); EXPORT bool video_output_stopped(video_t *video); EXPORT enum video_format video_output_get_format(const video_t *video); EXPORT uint32_t video_output_get_width(const video_t *video); EXPORT uint32_t video_output_get_height(const video_t *video); EXPORT double video_output_get_frame_rate(const video_t *video); EXPORT uint32_t video_output_get_skipped_frames(const video_t *video); EXPORT uint32_t video_output_get_total_frames(const video_t *video); extern void video_output_inc_texture_encoders(video_t *video); extern void video_output_dec_texture_encoders(video_t *video); extern void video_output_inc_texture_frames(video_t *video); extern void video_output_inc_texture_skipped_frames(video_t *video); extern video_t *video_output_create_with_frame_rate_divisor(video_t *video, uint32_t divisor); extern void video_output_free_frame_rate_divisor(video_t *video); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/media-io/audio-io.c000644 001751 001751 00000027541 15153330235 023055 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any 
later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. ******************************************************************************/ #include #include #include "../util/threading.h" #include "../util/darray.h" #include "../util/deque.h" #include "../util/platform.h" #include "../util/profiler.h" #include "../util/util_uint64.h" #include "audio-io.h" #include "audio-resampler.h" #ifdef _WIN32 #define WIN32_LEAN_AND_MEAN #include <windows.h> #include <avrt.h> #endif extern profiler_name_store_t *obs_get_profiler_name_store(void); /* #define DEBUG_AUDIO */ #define nop() \ do { \ int invalid = 0; \ } while (0) struct audio_input { struct audio_convert_info conversion; audio_resampler_t *resampler; audio_output_callback_t callback; void *param; }; static inline void audio_input_free(struct audio_input *input) { audio_resampler_destroy(input->resampler); } struct audio_mix { DARRAY(struct audio_input) inputs; float buffer[MAX_AUDIO_CHANNELS][AUDIO_OUTPUT_FRAMES]; float buffer_unclamped[MAX_AUDIO_CHANNELS][AUDIO_OUTPUT_FRAMES]; }; struct audio_output { struct audio_output_info info; size_t block_size; size_t channels; size_t planes; pthread_t thread; os_event_t *stop_event; bool initialized; audio_input_callback_t input_cb; void *input_param; pthread_mutex_t input_mutex; struct audio_mix mixes[MAX_AUDIO_MIXES]; }; /* ------------------------------------------------------------------------- */ static bool resample_audio_output(struct audio_input *input, struct audio_data *data) { bool success = true; if (input->resampler) { uint8_t *output[MAX_AV_PLANES]; uint32_t frames; uint64_t offset; memset(output, 0, sizeof(output)); success = audio_resampler_resample(input->resampler, output, &frames,
&offset, (const uint8_t *const *)data->data, data->frames); for (size_t i = 0; i < MAX_AV_PLANES; i++) data->data[i] = output[i]; data->frames = frames; data->timestamp -= offset; } return success; } static inline void do_audio_output(struct audio_output *audio, size_t mix_idx, uint64_t timestamp, uint32_t frames) { struct audio_mix *mix = &audio->mixes[mix_idx]; struct audio_data data; pthread_mutex_lock(&audio->input_mutex); for (size_t i = mix->inputs.num; i > 0; i--) { struct audio_input *input = mix->inputs.array + (i - 1); float(*buf)[AUDIO_OUTPUT_FRAMES] = input->conversion.allow_clipping ? mix->buffer_unclamped : mix->buffer; for (size_t i = 0; i < audio->planes; i++) data.data[i] = (uint8_t *)buf[i]; data.frames = frames; data.timestamp = timestamp; if (resample_audio_output(input, &data)) input->callback(input->param, mix_idx, &data); } pthread_mutex_unlock(&audio->input_mutex); } static inline void clamp_audio_output(struct audio_output *audio, size_t bytes) { size_t float_size = bytes / sizeof(float); for (size_t mix_idx = 0; mix_idx < MAX_AUDIO_MIXES; mix_idx++) { struct audio_mix *mix = &audio->mixes[mix_idx]; /* do not process mixing if a specific mix is inactive */ if (!mix->inputs.num) continue; for (size_t plane = 0; plane < audio->planes; plane++) { float *mix_data = mix->buffer[plane]; float *mix_end = &mix_data[float_size]; /* Unclamped mix is copied directly. */ memcpy(mix->buffer_unclamped[plane], mix_data, bytes); while (mix_data < mix_end) { float val = *mix_data; val = (val == val) ? val : 0.0f; val = (val > 1.0f) ? 1.0f : val; val = (val < -1.0f) ? 
-1.0f : val; *(mix_data++) = val; } } } } static void input_and_output(struct audio_output *audio, uint64_t audio_time, uint64_t prev_time) { size_t bytes = AUDIO_OUTPUT_FRAMES * audio->block_size; struct audio_output_data data[MAX_AUDIO_MIXES]; uint32_t active_mixes = 0; uint64_t new_ts = 0; bool success; memset(data, 0, sizeof(data)); #ifdef DEBUG_AUDIO blog(LOG_DEBUG, "audio_time: %llu, prev_time: %llu, bytes: %lu", audio_time, prev_time, bytes); #endif /* get mixers */ pthread_mutex_lock(&audio->input_mutex); for (size_t i = 0; i < MAX_AUDIO_MIXES; i++) { if (audio->mixes[i].inputs.num) active_mixes |= (1 << i); } pthread_mutex_unlock(&audio->input_mutex); /* clear mix buffers */ for (size_t mix_idx = 0; mix_idx < MAX_AUDIO_MIXES; mix_idx++) { struct audio_mix *mix = &audio->mixes[mix_idx]; memset(mix->buffer, 0, sizeof(mix->buffer)); for (size_t i = 0; i < audio->planes; i++) data[mix_idx].data[i] = mix->buffer[i]; } /* get new audio data */ success = audio->input_cb(audio->input_param, prev_time, audio_time, &new_ts, active_mixes, data); if (!success) return; /* clamps audio data to -1.0..1.0 */ clamp_audio_output(audio, bytes); /* output */ for (size_t i = 0; i < MAX_AUDIO_MIXES; i++) do_audio_output(audio, i, new_ts, AUDIO_OUTPUT_FRAMES); } static void *audio_thread(void *param) { #ifdef _WIN32 DWORD unused = 0; const HANDLE handle = AvSetMmThreadCharacteristics(L"Audio", &unused); #endif struct audio_output *audio = param; size_t rate = audio->info.samples_per_sec; uint64_t samples = 0; uint64_t start_time = os_gettime_ns(); uint64_t prev_time = start_time; os_set_thread_name("audio-io: audio thread"); const char *audio_thread_name = profile_store_name(obs_get_profiler_name_store(), "audio_thread(%s)", audio->info.name); while (os_event_try(audio->stop_event) == EAGAIN) { samples += AUDIO_OUTPUT_FRAMES; uint64_t audio_time = start_time + audio_frames_to_ns(rate, samples); os_sleepto_ns_fast(audio_time); profile_start(audio_thread_name); 
input_and_output(audio, audio_time, prev_time); prev_time = audio_time; profile_end(audio_thread_name); profile_reenable_thread(); } #ifdef _WIN32 if (handle) AvRevertMmThreadCharacteristics(handle); #endif return NULL; } /* ------------------------------------------------------------------------- */ static size_t audio_get_input_idx(const audio_t *audio, size_t mix_idx, audio_output_callback_t callback, void *param) { const struct audio_mix *mix = &audio->mixes[mix_idx]; for (size_t i = 0; i < mix->inputs.num; i++) { struct audio_input *input = mix->inputs.array + i; if (input->callback == callback && input->param == param) return i; } return DARRAY_INVALID; } static inline bool audio_input_init(struct audio_input *input, struct audio_output *audio) { if (input->conversion.format != audio->info.format || input->conversion.samples_per_sec != audio->info.samples_per_sec || input->conversion.speakers != audio->info.speakers) { struct resample_info from = {.format = audio->info.format, .samples_per_sec = audio->info.samples_per_sec, .speakers = audio->info.speakers}; struct resample_info to = {.format = input->conversion.format, .samples_per_sec = input->conversion.samples_per_sec, .speakers = input->conversion.speakers}; input->resampler = audio_resampler_create(&to, &from); if (!input->resampler) { blog(LOG_ERROR, "audio_input_init: Failed to " "create resampler"); return false; } } else { input->resampler = NULL; } return true; } bool audio_output_connect(audio_t *audio, size_t mi, const struct audio_convert_info *conversion, audio_output_callback_t callback, void *param) { bool success = false; if (!audio || mi >= MAX_AUDIO_MIXES) return false; pthread_mutex_lock(&audio->input_mutex); if (audio_get_input_idx(audio, mi, callback, param) == DARRAY_INVALID) { struct audio_mix *mix = &audio->mixes[mi]; struct audio_input input = { .callback = callback, .param = param, }; if (conversion) { input.conversion = *conversion; } else { input.conversion.format = 
audio->info.format; input.conversion.speakers = audio->info.speakers; input.conversion.samples_per_sec = audio->info.samples_per_sec; } if (input.conversion.format == AUDIO_FORMAT_UNKNOWN) input.conversion.format = audio->info.format; if (input.conversion.speakers == SPEAKERS_UNKNOWN) input.conversion.speakers = audio->info.speakers; if (input.conversion.samples_per_sec == 0) input.conversion.samples_per_sec = audio->info.samples_per_sec; success = audio_input_init(&input, audio); if (success) da_push_back(mix->inputs, &input); } pthread_mutex_unlock(&audio->input_mutex); return success; } void audio_output_disconnect(audio_t *audio, size_t mix_idx, audio_output_callback_t callback, void *param) { if (!audio || mix_idx >= MAX_AUDIO_MIXES) return; pthread_mutex_lock(&audio->input_mutex); size_t idx = audio_get_input_idx(audio, mix_idx, callback, param); if (idx != DARRAY_INVALID) { struct audio_mix *mix = &audio->mixes[mix_idx]; audio_input_free(mix->inputs.array + idx); da_erase(mix->inputs, idx); } pthread_mutex_unlock(&audio->input_mutex); } static inline bool valid_audio_params(const struct audio_output_info *info) { return info->format && info->name && info->samples_per_sec > 0 && info->speakers > 0; } int audio_output_open(audio_t **audio, struct audio_output_info *info) { struct audio_output *out; bool planar = is_audio_planar(info->format); if (!valid_audio_params(info)) return AUDIO_OUTPUT_INVALIDPARAM; out = bzalloc(sizeof(struct audio_output)); if (!out) goto fail0; memcpy(&out->info, info, sizeof(struct audio_output_info)); out->channels = get_audio_channels(info->speakers); out->planes = planar ? out->channels : 1; out->input_cb = info->input_callback; out->input_param = info->input_param; out->block_size = (planar ? 
1 : out->channels) * get_audio_bytes_per_channel(info->format); if (pthread_mutex_init_recursive(&out->input_mutex) != 0) goto fail0; if (os_event_init(&out->stop_event, OS_EVENT_TYPE_MANUAL) != 0) goto fail1; if (pthread_create(&out->thread, NULL, audio_thread, out) != 0) goto fail2; out->initialized = true; *audio = out; return AUDIO_OUTPUT_SUCCESS; fail2: os_event_destroy(out->stop_event); fail1: pthread_mutex_destroy(&out->input_mutex); fail0: audio_output_close(out); return AUDIO_OUTPUT_FAIL; } void audio_output_close(audio_t *audio) { void *thread_ret; if (!audio) return; if (audio->initialized) { os_event_signal(audio->stop_event); pthread_join(audio->thread, &thread_ret); os_event_destroy(audio->stop_event); pthread_mutex_destroy(&audio->input_mutex); } for (size_t mix_idx = 0; mix_idx < MAX_AUDIO_MIXES; mix_idx++) { struct audio_mix *mix = &audio->mixes[mix_idx]; for (size_t i = 0; i < mix->inputs.num; i++) audio_input_free(mix->inputs.array + i); da_free(mix->inputs); } bfree(audio); } const struct audio_output_info *audio_output_get_info(const audio_t *audio) { return audio ? 
&audio->info : NULL; } bool audio_output_active(const audio_t *audio) { if (!audio) return false; for (size_t mix_idx = 0; mix_idx < MAX_AUDIO_MIXES; mix_idx++) { const struct audio_mix *mix = &audio->mixes[mix_idx]; if (mix->inputs.num != 0) return true; } return false; } size_t audio_output_get_block_size(const audio_t *audio) { return audio->block_size; } size_t audio_output_get_planes(const audio_t *audio) { return audio->planes; } size_t audio_output_get_channels(const audio_t *audio) { return audio->channels; } uint32_t audio_output_get_sample_rate(const audio_t *audio) { return audio->info.samples_per_sec; } obs-studio-32.1.0-sources/libobs/media-io/video-io.c000644 001751 001751 00000042165 15153330235 023061 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ #include #include #include "../util/bmem.h" #include "../util/platform.h" #include "../util/profiler.h" #include "../util/threading.h" #include "../util/darray.h" #include "../util/util_uint64.h" #include "format-conversion.h" #include "video-io.h" #include "video-frame.h" #include "video-scaler.h" extern profiler_name_store_t *obs_get_profiler_name_store(void); #define MAX_CONVERT_BUFFERS 3 #define MAX_CACHE_SIZE 16 struct cached_frame_info { struct video_data frame; int skipped; int count; }; struct video_input { struct video_scale_info conversion; video_scaler_t *scaler; struct video_frame frame[MAX_CONVERT_BUFFERS]; int cur_frame; // allow outputting at fractions of main composition FPS, // e.g. 60 FPS with frame_rate_divisor = 2 turns into 30 FPS // // a separate counter is used in favor of using remainder calculations // to allow "inputs" started at the same time to start on the same frame // whereas with remainder calculation the frame alignment would depend on // the total frame count at the time the encoder was started uint32_t frame_rate_divisor; uint32_t frame_rate_divisor_counter; void (*callback)(void *param, struct video_data *frame); void *param; }; static inline void video_input_free(struct video_input *input) { for (size_t i = 0; i < MAX_CONVERT_BUFFERS; i++) video_frame_free(&input->frame[i]); video_scaler_destroy(input->scaler); } struct video_output { struct video_output_info info; pthread_t thread; pthread_mutex_t data_mutex; bool stop; os_sem_t *update_semaphore; uint64_t frame_time; volatile long skipped_frames; volatile long total_frames; pthread_mutex_t input_mutex; DARRAY(struct video_input) inputs; size_t available_frames; size_t first_added; size_t last_added; struct cached_frame_info cache[MAX_CACHE_SIZE]; struct video_output *parent; volatile bool raw_active; volatile long gpu_refs; }; /* 
------------------------------------------------------------------------- */ static inline bool scale_video_output(struct video_input *input, struct video_data *data) { bool success = true; if (input->scaler) { struct video_frame *frame; if (++input->cur_frame == MAX_CONVERT_BUFFERS) input->cur_frame = 0; frame = &input->frame[input->cur_frame]; success = video_scaler_scale(input->scaler, frame->data, frame->linesize, (const uint8_t *const *)data->data, data->linesize); if (success) { for (size_t i = 0; i < MAX_AV_PLANES; i++) { data->data[i] = frame->data[i]; data->linesize[i] = frame->linesize[i]; } } else { blog(LOG_WARNING, "video-io: Could not scale frame!"); } } return success; } static inline bool video_output_cur_frame(struct video_output *video) { struct cached_frame_info *frame_info; bool complete; bool skipped; /* -------------------------------- */ pthread_mutex_lock(&video->data_mutex); frame_info = &video->cache[video->first_added]; pthread_mutex_unlock(&video->data_mutex); /* -------------------------------- */ pthread_mutex_lock(&video->input_mutex); for (size_t i = 0; i < video->inputs.num; i++) { struct video_input *input = video->inputs.array + i; struct video_data frame = frame_info->frame; // an explicit counter is used instead of remainder calculation // to allow multiple encoders started at the same time to start on // the same frame uint32_t skip = input->frame_rate_divisor_counter++; if (input->frame_rate_divisor_counter == input->frame_rate_divisor) input->frame_rate_divisor_counter = 0; if (skip) continue; if (scale_video_output(input, &frame)) input->callback(input->param, &frame); } pthread_mutex_unlock(&video->input_mutex); /* -------------------------------- */ pthread_mutex_lock(&video->data_mutex); frame_info->frame.timestamp += video->frame_time; complete = --frame_info->count == 0; skipped = frame_info->skipped > 0; if (complete) { if (++video->first_added == video->info.cache_size) video->first_added = 0; if 
(++video->available_frames == video->info.cache_size) video->last_added = video->first_added; } else if (skipped) { --frame_info->skipped; os_atomic_inc_long(&video->skipped_frames); } pthread_mutex_unlock(&video->data_mutex); /* -------------------------------- */ return complete; } static void *video_thread(void *param) { struct video_output *video = param; os_set_thread_name("video-io: video thread"); const char *video_thread_name = profile_store_name(obs_get_profiler_name_store(), "video_thread(%s)", video->info.name); while (os_sem_wait(video->update_semaphore) == 0) { if (video->stop) break; profile_start(video_thread_name); while (!video->stop && !video_output_cur_frame(video)) { os_atomic_inc_long(&video->total_frames); } os_atomic_inc_long(&video->total_frames); profile_end(video_thread_name); profile_reenable_thread(); } return NULL; } /* ------------------------------------------------------------------------- */ static inline bool valid_video_params(const struct video_output_info *info) { return info->height != 0 && info->width != 0 && info->fps_den != 0 && info->fps_num != 0; } static inline void init_cache(struct video_output *video) { if (video->info.cache_size > MAX_CACHE_SIZE) video->info.cache_size = MAX_CACHE_SIZE; for (size_t i = 0; i < video->info.cache_size; i++) { struct video_frame *frame; frame = (struct video_frame *)&video->cache[i]; video_frame_init(frame, video->info.format, video->info.width, video->info.height); } video->available_frames = video->info.cache_size; } int video_output_open(video_t **video, struct video_output_info *info) { struct video_output *out; if (!valid_video_params(info)) return VIDEO_OUTPUT_INVALIDPARAM; out = bzalloc(sizeof(struct video_output)); if (!out) goto fail0; memcpy(&out->info, info, sizeof(struct video_output_info)); out->frame_time = util_mul_div64(1000000000ULL, info->fps_den, info->fps_num); if (pthread_mutex_init_recursive(&out->data_mutex) != 0) goto fail0; if 
(pthread_mutex_init_recursive(&out->input_mutex) != 0) goto fail1; if (os_sem_init(&out->update_semaphore, 0) != 0) goto fail2; if (pthread_create(&out->thread, NULL, video_thread, out) != 0) goto fail3; init_cache(out); *video = out; return VIDEO_OUTPUT_SUCCESS; fail3: os_sem_destroy(out->update_semaphore); fail2: pthread_mutex_destroy(&out->input_mutex); fail1: pthread_mutex_destroy(&out->data_mutex); fail0: bfree(out); return VIDEO_OUTPUT_FAIL; } void video_output_close(video_t *video) { if (!video) return; video_output_stop(video); pthread_mutex_lock(&video->input_mutex); for (size_t i = 0; i < video->inputs.num; i++) video_input_free(&video->inputs.array[i]); da_free(video->inputs); for (size_t i = 0; i < video->info.cache_size; i++) video_frame_free((struct video_frame *)&video->cache[i]); pthread_mutex_unlock(&video->input_mutex); os_sem_destroy(video->update_semaphore); pthread_mutex_destroy(&video->data_mutex); pthread_mutex_destroy(&video->input_mutex); bfree(video); } static size_t video_get_input_idx(const video_t *video, void (*callback)(void *param, struct video_data *frame), void *param) { for (size_t i = 0; i < video->inputs.num; i++) { struct video_input *input = video->inputs.array + i; if (input->callback == callback && input->param == param) return i; } return DARRAY_INVALID; } static bool match_range(enum video_range_type a, enum video_range_type b) { return (a == VIDEO_RANGE_FULL) == (b == VIDEO_RANGE_FULL); } static enum video_colorspace collapse_space(enum video_colorspace cs) { switch (cs) { case VIDEO_CS_SRGB: cs = VIDEO_CS_709; break; case VIDEO_CS_2100_HLG: cs = VIDEO_CS_2100_PQ; break; default: break; } return cs; } static bool match_space(enum video_colorspace a, enum video_colorspace b) { return (a == VIDEO_CS_DEFAULT) || (b == VIDEO_CS_DEFAULT) || (collapse_space(a) == collapse_space(b)); } static inline bool video_input_init(struct video_input *input, struct video_output *video) { if (input->conversion.width != video->info.width || 
input->conversion.height != video->info.height || input->conversion.format != video->info.format || !match_range(input->conversion.range, video->info.range) || !match_space(input->conversion.colorspace, video->info.colorspace)) { struct video_scale_info from = {.format = video->info.format, .width = video->info.width, .height = video->info.height, .range = video->info.range, .colorspace = video->info.colorspace}; int ret = video_scaler_create(&input->scaler, &input->conversion, &from, VIDEO_SCALE_FAST_BILINEAR); if (ret != VIDEO_SCALER_SUCCESS) { if (ret == VIDEO_SCALER_BAD_CONVERSION) blog(LOG_ERROR, "video_input_init: Bad " "scale conversion type"); else blog(LOG_ERROR, "video_input_init: Failed to " "create scaler"); return false; } for (size_t i = 0; i < MAX_CONVERT_BUFFERS; i++) video_frame_init(&input->frame[i], input->conversion.format, input->conversion.width, input->conversion.height); } return true; } static inline void reset_frames(video_t *video) { os_atomic_set_long(&video->skipped_frames, 0); os_atomic_set_long(&video->total_frames, 0); } static const video_t *get_const_root(const video_t *video) { while (video->parent) video = video->parent; return video; } static video_t *get_root(video_t *video) { while (video->parent) video = video->parent; return video; } bool video_output_connect(video_t *video, const struct video_scale_info *conversion, void (*callback)(void *param, struct video_data *frame), void *param) { return video_output_connect2(video, conversion, 1, callback, param); } bool video_output_connect2(video_t *video, const struct video_scale_info *conversion, uint32_t frame_rate_divisor, void (*callback)(void *param, struct video_data *frame), void *param) { bool success = false; video = get_root(video); if (!video || !callback || frame_rate_divisor == 0) return false; pthread_mutex_lock(&video->input_mutex); if (video_get_input_idx(video, callback, param) == DARRAY_INVALID) { struct video_input input; memset(&input, 0, sizeof(input)); 
input.callback = callback; input.param = param; input.frame_rate_divisor = frame_rate_divisor; if (conversion) { input.conversion = *conversion; } else { input.conversion.format = video->info.format; input.conversion.width = video->info.width; input.conversion.height = video->info.height; input.conversion.range = video->info.range; input.conversion.colorspace = video->info.colorspace; } if (input.conversion.width == 0) input.conversion.width = video->info.width; if (input.conversion.height == 0) input.conversion.height = video->info.height; success = video_input_init(&input, video); if (success) { if (video->inputs.num == 0) { if (!os_atomic_load_long(&video->gpu_refs)) { reset_frames(video); } os_atomic_set_bool(&video->raw_active, true); } da_push_back(video->inputs, &input); } } pthread_mutex_unlock(&video->input_mutex); return success; } static void log_skipped(video_t *video) { long skipped = os_atomic_load_long(&video->skipped_frames); double percentage_skipped = (double)skipped / (double)os_atomic_load_long(&video->total_frames) * 100.0; if (skipped) blog(LOG_INFO, "Video stopped, number of " "skipped frames due " "to encoding lag: " "%ld/%ld (%0.1f%%)", video->skipped_frames, video->total_frames, percentage_skipped); } void video_output_disconnect(video_t *video, void (*callback)(void *param, struct video_data *frame), void *param) { video_output_disconnect2(video, callback, param); } bool video_output_disconnect2(video_t *video, void (*callback)(void *param, struct video_data *frame), void *param) { if (!video || !callback) return false; video = get_root(video); pthread_mutex_lock(&video->input_mutex); size_t idx = video_get_input_idx(video, callback, param); if (idx != DARRAY_INVALID) { video_input_free(video->inputs.array + idx); da_erase(video->inputs, idx); if (video->inputs.num == 0) { os_atomic_set_bool(&video->raw_active, false); if (!os_atomic_load_long(&video->gpu_refs)) { log_skipped(video); } } } pthread_mutex_unlock(&video->input_mutex); return 
idx != DARRAY_INVALID; } bool video_output_active(const video_t *video) { if (!video) return false; return os_atomic_load_bool(&get_const_root(video)->raw_active); } const struct video_output_info *video_output_get_info(const video_t *video) { return video ? &video->info : NULL; } bool video_output_lock_frame(video_t *video, struct video_frame *frame, int count, uint64_t timestamp) { struct cached_frame_info *cfi; bool locked; if (!video) return false; video = get_root(video); pthread_mutex_lock(&video->data_mutex); if (video->available_frames == 0) { video->cache[video->last_added].count += count; video->cache[video->last_added].skipped += count; locked = false; } else { if (video->available_frames != video->info.cache_size) { if (++video->last_added == video->info.cache_size) video->last_added = 0; } cfi = &video->cache[video->last_added]; cfi->frame.timestamp = timestamp; cfi->count = count; cfi->skipped = 0; memcpy(frame, &cfi->frame, sizeof(*frame)); locked = true; } pthread_mutex_unlock(&video->data_mutex); return locked; } void video_output_unlock_frame(video_t *video) { if (!video) return; video = get_root(video); pthread_mutex_lock(&video->data_mutex); video->available_frames--; os_sem_post(video->update_semaphore); pthread_mutex_unlock(&video->data_mutex); } uint64_t video_output_get_frame_time(const video_t *video) { return video ? video->frame_time : 0; } void video_output_stop(video_t *video) { void *thread_ret; if (!video) return; video = get_root(video); if (!video->stop) { video->stop = true; os_sem_post(video->update_semaphore); pthread_join(video->thread, &thread_ret); } } bool video_output_stopped(video_t *video) { if (!video) return true; return get_root(video)->stop; } enum video_format video_output_get_format(const video_t *video) { return video ? get_const_root(video)->info.format : VIDEO_FORMAT_NONE; } uint32_t video_output_get_width(const video_t *video) { return video ? 
get_const_root(video)->info.width : 0; } uint32_t video_output_get_height(const video_t *video) { return video ? get_const_root(video)->info.height : 0; } double video_output_get_frame_rate(const video_t *video) { if (!video) return 0.0; video = get_const_root(video); return (double)video->info.fps_num / (double)video->info.fps_den; } uint32_t video_output_get_skipped_frames(const video_t *video) { return (uint32_t)os_atomic_load_long(&get_const_root(video)->skipped_frames); } uint32_t video_output_get_total_frames(const video_t *video) { return (uint32_t)os_atomic_load_long(&get_const_root(video)->total_frames); } /* Note: These four functions below are a very slight bit of a hack. If the * texture encoder thread is active while the raw encoder thread is active, the * total frame count will just be doubled while they're both active. Which is * fine. What's more important is having a relatively accurate skipped frame * count. */ void video_output_inc_texture_encoders(video_t *video) { video = get_root(video); if (os_atomic_inc_long(&video->gpu_refs) == 1 && !os_atomic_load_bool(&video->raw_active)) { reset_frames(video); } } void video_output_dec_texture_encoders(video_t *video) { video = get_root(video); if (os_atomic_dec_long(&video->gpu_refs) == 0 && !os_atomic_load_bool(&video->raw_active)) { log_skipped(video); } } void video_output_inc_texture_frames(video_t *video) { os_atomic_inc_long(&get_root(video)->total_frames); } void video_output_inc_texture_skipped_frames(video_t *video) { os_atomic_inc_long(&get_root(video)->skipped_frames); } video_t *video_output_create_with_frame_rate_divisor(video_t *video, uint32_t divisor) { // `divisor == 1` would result in the same frame rate, // resulting in an unnecessary additional video output if (!video || divisor == 0 || divisor == 1) return NULL; video_t *new_video = bzalloc(sizeof(video_t)); memcpy(new_video, video, sizeof(*new_video)); new_video->parent = video; new_video->info.fps_den *= divisor; return new_video; 
}

void video_output_free_frame_rate_divisor(video_t *video)
{
	if (video && video->parent)
		bfree(video);
}
obs-studio-32.1.0-sources/libobs/media-io/format-conversion.c000644 001751 001751 00000026614 15153330235 025022 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "format-conversion.h"
#include "../util/sse-intrin.h"

/* ...surprisingly, if I don't use a macro to force inlining, it causes the
 * CPU usage to boost by a tremendous amount in debug builds.
*/ #define get_m128_32_0(val) (*((uint32_t *)&val)) #define get_m128_32_1(val) (*(((uint32_t *)&val) + 1)) #define pack_shift(lum_plane, lum_pos0, lum_pos1, line1, line2, mask, sh) \ do { \ __m128i pack_val = _mm_packs_epi32(_mm_srli_si128(_mm_and_si128(line1, mask), sh), \ _mm_srli_si128(_mm_and_si128(line2, mask), sh)); \ pack_val = _mm_packus_epi16(pack_val, pack_val); \ \ *(uint32_t *)(lum_plane + lum_pos0) = get_m128_32_0(pack_val); \ *(uint32_t *)(lum_plane + lum_pos1) = get_m128_32_1(pack_val); \ } while (false) #define pack_val(lum_plane, lum_pos0, lum_pos1, line1, line2, mask) \ do { \ __m128i pack_val = _mm_packs_epi32(_mm_and_si128(line1, mask), _mm_and_si128(line2, mask)); \ pack_val = _mm_packus_epi16(pack_val, pack_val); \ \ *(uint32_t *)(lum_plane + lum_pos0) = get_m128_32_0(pack_val); \ *(uint32_t *)(lum_plane + lum_pos1) = get_m128_32_1(pack_val); \ } while (false) #define pack_ch_1plane(uv_plane, chroma_pos, line1, line2, uv_mask) \ do { \ __m128i add_val = _mm_add_epi64(_mm_and_si128(line1, uv_mask), _mm_and_si128(line2, uv_mask)); \ __m128i avg_val = _mm_add_epi64(add_val, _mm_shuffle_epi32(add_val, _MM_SHUFFLE(2, 3, 0, 1))); \ avg_val = _mm_srai_epi16(avg_val, 2); \ avg_val = _mm_shuffle_epi32(avg_val, _MM_SHUFFLE(3, 1, 2, 0)); \ avg_val = _mm_packus_epi16(avg_val, avg_val); \ \ *(uint32_t *)(uv_plane + chroma_pos) = get_m128_32_0(avg_val); \ } while (false) #define pack_ch_2plane(u_plane, v_plane, chroma_pos, line1, line2, uv_mask) \ do { \ uint32_t packed_vals; \ \ __m128i add_val = _mm_add_epi64(_mm_and_si128(line1, uv_mask), _mm_and_si128(line2, uv_mask)); \ __m128i avg_val = _mm_add_epi64(add_val, _mm_shuffle_epi32(add_val, _MM_SHUFFLE(2, 3, 0, 1))); \ avg_val = _mm_srai_epi16(avg_val, 2); \ avg_val = _mm_shuffle_epi32(avg_val, _MM_SHUFFLE(3, 1, 2, 0)); \ avg_val = _mm_shufflelo_epi16(avg_val, _MM_SHUFFLE(3, 1, 2, 0)); \ avg_val = _mm_packus_epi16(avg_val, avg_val); \ \ packed_vals = get_m128_32_0(avg_val); \ \ *(uint16_t *)(u_plane + 
chroma_pos) = (uint16_t)(packed_vals); \ *(uint16_t *)(v_plane + chroma_pos) = (uint16_t)(packed_vals >> 16); \ } while (false) static FORCE_INLINE uint32_t min_uint32(uint32_t a, uint32_t b) { return a < b ? a : b; } void compress_uyvx_to_i420(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output[], const uint32_t out_linesize[]) { uint8_t *lum_plane = output[0]; uint8_t *u_plane = output[1]; uint8_t *v_plane = output[2]; uint32_t width = min_uint32(in_linesize, out_linesize[0]); uint32_t y; __m128i lum_mask = _mm_set1_epi32(0x0000FF00); __m128i uv_mask = _mm_set1_epi16(0x00FF); for (y = start_y; y < end_y; y += 2) { uint32_t y_pos = y * in_linesize; uint32_t chroma_y_pos = (y >> 1) * out_linesize[1]; uint32_t lum_y_pos = y * out_linesize[0]; uint32_t x; for (x = 0; x < width; x += 4) { const uint8_t *img = input + y_pos + x * 4; uint32_t lum_pos0 = lum_y_pos + x; uint32_t lum_pos1 = lum_pos0 + out_linesize[0]; __m128i line1 = _mm_load_si128((const __m128i *)img); __m128i line2 = _mm_load_si128((const __m128i *)(img + in_linesize)); pack_shift(lum_plane, lum_pos0, lum_pos1, line1, line2, lum_mask, 1); pack_ch_2plane(u_plane, v_plane, chroma_y_pos + (x >> 1), line1, line2, uv_mask); } } } void compress_uyvx_to_nv12(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output[], const uint32_t out_linesize[]) { uint8_t *lum_plane = output[0]; uint8_t *chroma_plane = output[1]; uint32_t width = min_uint32(in_linesize, out_linesize[0]); uint32_t y; __m128i lum_mask = _mm_set1_epi32(0x0000FF00); __m128i uv_mask = _mm_set1_epi16(0x00FF); for (y = start_y; y < end_y; y += 2) { uint32_t y_pos = y * in_linesize; uint32_t chroma_y_pos = (y >> 1) * out_linesize[1]; uint32_t lum_y_pos = y * out_linesize[0]; uint32_t x; for (x = 0; x < width; x += 4) { const uint8_t *img = input + y_pos + x * 4; uint32_t lum_pos0 = lum_y_pos + x; uint32_t lum_pos1 = lum_pos0 + out_linesize[0]; __m128i line1 = 
_mm_load_si128((const __m128i *)img); __m128i line2 = _mm_load_si128((const __m128i *)(img + in_linesize)); pack_shift(lum_plane, lum_pos0, lum_pos1, line1, line2, lum_mask, 1); pack_ch_1plane(chroma_plane, chroma_y_pos + x, line1, line2, uv_mask); } } } void convert_uyvx_to_i444(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output[], const uint32_t out_linesize[]) { uint8_t *lum_plane = output[0]; uint8_t *u_plane = output[1]; uint8_t *v_plane = output[2]; uint32_t width = min_uint32(in_linesize, out_linesize[0]); uint32_t y; __m128i lum_mask = _mm_set1_epi32(0x0000FF00); __m128i u_mask = _mm_set1_epi32(0x000000FF); __m128i v_mask = _mm_set1_epi32(0x00FF0000); for (y = start_y; y < end_y; y += 2) { uint32_t y_pos = y * in_linesize; uint32_t lum_y_pos = y * out_linesize[0]; uint32_t x; for (x = 0; x < width; x += 4) { const uint8_t *img = input + y_pos + x * 4; uint32_t lum_pos0 = lum_y_pos + x; uint32_t lum_pos1 = lum_pos0 + out_linesize[0]; __m128i line1 = _mm_load_si128((const __m128i *)img); __m128i line2 = _mm_load_si128((const __m128i *)(img + in_linesize)); pack_shift(lum_plane, lum_pos0, lum_pos1, line1, line2, lum_mask, 1); pack_val(u_plane, lum_pos0, lum_pos1, line1, line2, u_mask); pack_shift(v_plane, lum_pos0, lum_pos1, line1, line2, v_mask, 2); } } } void decompress_420(const uint8_t *const input[], const uint32_t in_linesize[], uint32_t start_y, uint32_t end_y, uint8_t *output, uint32_t out_linesize) { uint32_t start_y_d2 = start_y / 2; uint32_t width_d2 = in_linesize[0] / 2; uint32_t height_d2 = end_y / 2; uint32_t y; for (y = start_y_d2; y < height_d2; y++) { const uint8_t *chroma0 = input[1] + y * in_linesize[1]; const uint8_t *chroma1 = input[2] + y * in_linesize[2]; register const uint8_t *lum0, *lum1; register uint32_t *output0, *output1; uint32_t x; lum0 = input[0] + y * 2 * in_linesize[0]; lum1 = lum0 + in_linesize[0]; output0 = (uint32_t *)(output + y * 2 * out_linesize); output1 = (uint32_t 
*)((uint8_t *)output0 + out_linesize); for (x = 0; x < width_d2; x++) { uint32_t out; out = (*(chroma0++) << 8) | *(chroma1++); *(output0++) = (*(lum0++) << 16) | out; *(output0++) = (*(lum0++) << 16) | out; *(output1++) = (*(lum1++) << 16) | out; *(output1++) = (*(lum1++) << 16) | out; } } } void decompress_nv12(const uint8_t *const input[], const uint32_t in_linesize[], uint32_t start_y, uint32_t end_y, uint8_t *output, uint32_t out_linesize) { uint32_t start_y_d2 = start_y / 2; uint32_t width_d2 = min_uint32(in_linesize[0], out_linesize) / 2; uint32_t height_d2 = end_y / 2; uint32_t y; for (y = start_y_d2; y < height_d2; y++) { const uint16_t *chroma; register const uint8_t *lum0, *lum1; register uint32_t *output0, *output1; uint32_t x; chroma = (const uint16_t *)(input[1] + y * in_linesize[1]); lum0 = input[0] + y * 2 * in_linesize[0]; lum1 = lum0 + in_linesize[0]; output0 = (uint32_t *)(output + y * 2 * out_linesize); output1 = (uint32_t *)((uint8_t *)output0 + out_linesize); for (x = 0; x < width_d2; x++) { uint32_t out = *(chroma++) << 8; *(output0++) = *(lum0++) | out; *(output0++) = *(lum0++) | out; *(output1++) = *(lum1++) | out; *(output1++) = *(lum1++) | out; } } } void decompress_422(const uint8_t *input, uint32_t in_linesize, uint32_t start_y, uint32_t end_y, uint8_t *output, uint32_t out_linesize, bool leading_lum) { uint32_t width_d2 = min_uint32(in_linesize, out_linesize) / 2; uint32_t y; register const uint32_t *input32; register const uint32_t *input32_end; register uint32_t *output32; if (leading_lum) { for (y = start_y; y < end_y; y++) { input32 = (const uint32_t *)(input + y * in_linesize); input32_end = input32 + width_d2; output32 = (uint32_t *)(output + y * out_linesize); while (input32 < input32_end) { register uint32_t dw = *input32; output32[0] = dw; dw &= 0xFFFFFF00; dw |= (uint8_t)(dw >> 16); output32[1] = dw; output32 += 2; input32++; } } } else { for (y = start_y; y < end_y; y++) { input32 = (const uint32_t *)(input + y * 
in_linesize); input32_end = input32 + width_d2; output32 = (uint32_t *)(output + y * out_linesize); while (input32 < input32_end) { register uint32_t dw = *input32; output32[0] = dw; dw &= 0xFFFF00FF; dw |= (dw >> 16) & 0xFF00; output32[1] = dw; output32 += 2; input32++; } } } } obs-studio-32.1.0-sources/libobs/media-io/video-scaler.h000644 001751 001751 00000003053 15153330235 023721 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
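The packing macros and `compress_uyvx_to_i420` above do with SSE what the following scalar sketch does one pixel at a time. This is not libobs code — `uyvx_to_i420_scalar` is a hypothetical reference helper. The per-pixel byte order (U at byte 0, Y at byte 1, V at byte 2, X at byte 3) follows from the masks used above (`lum_mask = 0x0000FF00`, `u_mask = 0x000000FF`, `v_mask = 0x00FF0000`), and like `_mm_srai_epi16(avg, 2)` the 2x2 chroma average truncates rather than rounds. Even dimensions and tightly packed output planes are assumed.

```c
#include <stdint.h>

/* Scalar reference for UYVX -> planar I420: copy every luma byte,
 * average each 2x2 block of U and V down to one chroma sample. */
static void uyvx_to_i420_scalar(const uint8_t *in, uint32_t width, uint32_t height,
				uint8_t *y_plane, uint8_t *u_plane, uint8_t *v_plane)
{
	for (uint32_t y = 0; y < height; y += 2) {
		for (uint32_t x = 0; x < width; x += 2) {
			uint32_t sum_u = 0, sum_v = 0;

			for (uint32_t dy = 0; dy < 2; dy++) {
				for (uint32_t dx = 0; dx < 2; dx++) {
					const uint8_t *px = in + ((y + dy) * width + (x + dx)) * 4;
					y_plane[(y + dy) * width + (x + dx)] = px[1];
					sum_u += px[0];
					sum_v += px[2];
				}
			}

			/* truncating divide by 4, matching the SSE shift */
			u_plane[(y / 2) * (width / 2) + x / 2] = (uint8_t)(sum_u / 4);
			v_plane[(y / 2) * (width / 2) + x / 2] = (uint8_t)(sum_v / 4);
		}
	}
}
```

The SSE versions process four pixels per iteration and two rows at once, but the data movement is the same: one luma byte per pixel, one chroma pair per 2x2 block.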
******************************************************************************/

#pragma once

#include "../util/c99defs.h"
#include "video-io.h"

#ifdef __cplusplus
extern "C" {
#endif

struct video_scaler;
typedef struct video_scaler video_scaler_t;

#define VIDEO_SCALER_SUCCESS 0
#define VIDEO_SCALER_BAD_CONVERSION -1
#define VIDEO_SCALER_FAILED -2

EXPORT int video_scaler_create(video_scaler_t **scaler, const struct video_scale_info *dst,
			       const struct video_scale_info *src, enum video_scale_type type);
EXPORT void video_scaler_destroy(video_scaler_t *scaler);

EXPORT bool video_scaler_scale(video_scaler_t *scaler, uint8_t *output[], const uint32_t out_linesize[],
			       const uint8_t *const input[], const uint32_t in_linesize[]);

#ifdef __cplusplus
}
#endif
obs-studio-32.1.0-sources/libobs/media-io/video-frame.c000644 001751 001751 00000017105 15153330235 023540 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <string.h>

#include "video-frame.h"

#define HALF(size) ((size + 1) / 2)
#define ALIGN(size, alignment) *size = (*size + alignment - 1) & (~(alignment - 1));

static inline void align_size(size_t *size, size_t alignment)
{
	ALIGN(size, alignment);
}

static inline void align_uint32(uint32_t *size, size_t alignment)
{
	ALIGN(size, (uint32_t)alignment);
}

/* assumes already-zeroed array */
void video_frame_get_linesizes(uint32_t linesize[MAX_AV_PLANES], enum video_format format, uint32_t width)
{
	switch (format) {
	default:
	case VIDEO_FORMAT_NONE:
		break;

	case VIDEO_FORMAT_BGR3:
		/* one plane: triple width */
		linesize[0] = width * 3;
		break;

	case VIDEO_FORMAT_RGBA:
		/* one plane: quadruple width */
	case VIDEO_FORMAT_BGRA:
	case VIDEO_FORMAT_BGRX:
	case VIDEO_FORMAT_AYUV:
	case VIDEO_FORMAT_R10L:
		linesize[0] = width * 4;
		break;

	case VIDEO_FORMAT_P416:
		/* two planes: double width, quadruple width */
		linesize[0] = width * 2;
		linesize[1] = width * 4;
		break;

	case VIDEO_FORMAT_I420:
		/* three planes: full width, half width, half width */
	case VIDEO_FORMAT_I422:
		linesize[0] = width;
		linesize[1] = HALF(width);
		linesize[2] = HALF(width);
		break;

	case VIDEO_FORMAT_I210:
		/* three planes: double width, full width, full width */
	case VIDEO_FORMAT_I010:
		linesize[0] = width * 2;
		linesize[1] = width;
		linesize[2] = width;
		break;

	case VIDEO_FORMAT_I40A:
		/* four planes: full width, half width, half width, full width */
	case VIDEO_FORMAT_I42A:
		linesize[0] = width;
		linesize[1] = HALF(width);
		linesize[2] = HALF(width);
		linesize[3] = width;
		break;

	case VIDEO_FORMAT_YVYU:
		/* one plane: double width */
	case VIDEO_FORMAT_YUY2:
	case VIDEO_FORMAT_UYVY:
		linesize[0] = width * 2;
		break;

	case VIDEO_FORMAT_P010:
		/* two planes: all double width */
	case VIDEO_FORMAT_P216:
		linesize[0] = width * 2;
		linesize[1] = width * 2;
		break;

	case VIDEO_FORMAT_I412:
		/* three planes: all double width */
		linesize[0] = width * 2;
		linesize[1] = width * 2;
linesize[2] = width * 2; break; case VIDEO_FORMAT_YA2L: /* four planes: all double width */ linesize[0] = width * 2; linesize[1] = width * 2; linesize[2] = width * 2; linesize[3] = width * 2; break; case VIDEO_FORMAT_Y800: /* one plane: full width */ linesize[0] = width; break; case VIDEO_FORMAT_NV12: /* two planes: all full width */ linesize[0] = width; linesize[1] = width; break; case VIDEO_FORMAT_I444: /* three planes: all full width */ linesize[0] = width; linesize[1] = width; linesize[2] = width; break; case VIDEO_FORMAT_YUVA: /* four planes: all full width */ linesize[0] = width; linesize[1] = width; linesize[2] = width; linesize[3] = width; break; case VIDEO_FORMAT_V210: { /* one plane: bruh (Little Endian Compressed) */ align_uint32(&width, 48); linesize[0] = ((width + 5) / 6) * 16; break; } } } void video_frame_get_plane_heights(uint32_t heights[MAX_AV_PLANES], enum video_format format, uint32_t height) { switch (format) { default: case VIDEO_FORMAT_NONE: return; case VIDEO_FORMAT_I420: /* three planes: full height, half height, half height */ case VIDEO_FORMAT_I010: heights[0] = height; heights[1] = HALF(height); heights[2] = HALF(height); break; case VIDEO_FORMAT_NV12: /* two planes: full height, half height */ case VIDEO_FORMAT_P010: heights[0] = height; heights[1] = HALF(height); break; case VIDEO_FORMAT_Y800: /* one plane: full height */ case VIDEO_FORMAT_YVYU: case VIDEO_FORMAT_YUY2: case VIDEO_FORMAT_UYVY: case VIDEO_FORMAT_RGBA: case VIDEO_FORMAT_BGRA: case VIDEO_FORMAT_BGRX: case VIDEO_FORMAT_BGR3: case VIDEO_FORMAT_AYUV: case VIDEO_FORMAT_V210: case VIDEO_FORMAT_R10L: heights[0] = height; break; case VIDEO_FORMAT_I444: /* three planes: all full height */ case VIDEO_FORMAT_I422: case VIDEO_FORMAT_I210: case VIDEO_FORMAT_I412: heights[0] = height; heights[1] = height; heights[2] = height; break; case VIDEO_FORMAT_I40A: /* four planes: full height, half height, half height, full height */ heights[0] = height; heights[1] = HALF(height); heights[2] = 
HALF(height); heights[3] = height; break; case VIDEO_FORMAT_I42A: /* four planes: all full height */ case VIDEO_FORMAT_YUVA: case VIDEO_FORMAT_YA2L: heights[0] = height; heights[1] = height; heights[2] = height; heights[3] = height; break; case VIDEO_FORMAT_P216: /* two planes: all full height */ case VIDEO_FORMAT_P416: heights[0] = height; heights[1] = height; break; } } void video_frame_init(struct video_frame *frame, enum video_format format, uint32_t width, uint32_t height) { size_t size = 0; uint32_t linesizes[MAX_AV_PLANES]; uint32_t heights[MAX_AV_PLANES]; size_t offsets[MAX_AV_PLANES]; int alignment = base_get_alignment(); if (!frame) return; memset(frame, 0, sizeof(struct video_frame)); memset(linesizes, 0, sizeof(linesizes)); memset(heights, 0, sizeof(heights)); memset(offsets, 0, sizeof(offsets)); /* determine linesizes for each plane */ video_frame_get_linesizes(linesizes, format, width); /* determine line count for each plane */ video_frame_get_plane_heights(heights, format, height); /* calculate total buffer required */ for (uint32_t i = 0; i < MAX_AV_PLANES; i++) { if (!linesizes[i] || !heights[i]) continue; size_t plane_size = (size_t)linesizes[i] * (size_t)heights[i]; align_size(&plane_size, alignment); size += plane_size; offsets[i] = size; } /* allocate memory */ frame->data[0] = bmalloc(size); frame->linesize[0] = linesizes[0]; /* apply plane data pointers according to offsets */ for (uint32_t i = 1; i < MAX_AV_PLANES; i++) { if (!linesizes[i] || !heights[i]) continue; frame->data[i] = frame->data[0] + offsets[i - 1]; frame->linesize[i] = linesizes[i]; } } void video_frame_copy(struct video_frame *dst, const struct video_frame *src, enum video_format format, uint32_t cy) { uint32_t heights[MAX_AV_PLANES]; memset(heights, 0, sizeof(heights)); /* determine line count for each plane */ video_frame_get_plane_heights(heights, format, cy); /* copy each plane */ for (uint32_t i = 0; i < MAX_AV_PLANES; i++) { if (!heights[i]) continue; if 
(src->linesize[i] == dst->linesize[i]) { memcpy(dst->data[i], src->data[i], src->linesize[i] * heights[i]); } else { /* linesizes which do not match must be copied line-by-line */ size_t src_linesize = src->linesize[i]; size_t dst_linesize = dst->linesize[i]; /* determine how much we can write (frames with different line sizes require more )*/ size_t linesize = src_linesize < dst_linesize ? src_linesize : dst_linesize; for (uint32_t y = 0; y < heights[i]; y++) { uint8_t *src_pos = src->data[i] + (src_linesize * y); uint8_t *dst_pos = dst->data[i] + (dst_linesize * y); memcpy(dst_pos, src_pos, linesize); } } } } obs-studio-32.1.0-sources/libobs/media-io/video-scaler-ffmpeg.c000644 001751 001751 00000016661 15153330235 025167 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
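`video_frame_init` above sizes a single allocation by summing each plane's `linesize * height` rounded up to the allocator alignment, then points every plane at its offset within that buffer. The following self-contained sketch mirrors that arithmetic for I420; `i420_plane_layout` is a hypothetical helper name, and it records each plane's start offset directly, whereas the original stores running end offsets and indexes them with `i - 1` — the resulting layout is the same.

```c
#include <stdint.h>
#include <stddef.h>

#define HALF(size) (((size) + 1) / 2)

/* Compute the packed single-buffer layout for an I420 frame: three
 * planes (full-res luma, two quarter-res chroma), each plane's size
 * rounded up to `alignment`. Writes plane start offsets and returns
 * the total buffer size in bytes. */
static size_t i420_plane_layout(uint32_t width, uint32_t height, size_t alignment, size_t offsets[3])
{
	const uint32_t linesizes[3] = {width, HALF(width), HALF(width)};
	const uint32_t heights[3] = {height, HALF(height), HALF(height)};
	size_t size = 0;

	for (int i = 0; i < 3; i++) {
		offsets[i] = size;
		size_t plane = (size_t)linesizes[i] * heights[i];
		/* round the plane size up to the allocation alignment */
		plane = (plane + alignment - 1) & ~(alignment - 1);
		size += plane;
	}
	return size;
}
```

For 1280x720 with 32-byte alignment this gives offsets 0, 921600, and 1152000 and a total of 1382400 bytes; odd dimensions round each chroma plane up via `HALF`, and the per-plane alignment keeps every `data[i]` pointer aligned within the single `bmalloc` block.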
******************************************************************************/

#include "../util/bmem.h"
#include "video-scaler.h"

#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>

struct video_scaler {
	struct SwsContext *swscale;
	int src_height;

	int dst_heights[4];
	uint8_t *dst_pointers[4];
	int dst_linesizes[4];
};

static inline enum AVPixelFormat get_ffmpeg_video_format(enum video_format format)
{
	switch (format) {
	case VIDEO_FORMAT_I420:
		return AV_PIX_FMT_YUV420P;
	case VIDEO_FORMAT_NV12:
		return AV_PIX_FMT_NV12;
	case VIDEO_FORMAT_YUY2:
		return AV_PIX_FMT_YUYV422;
	case VIDEO_FORMAT_UYVY:
		return AV_PIX_FMT_UYVY422;
	case VIDEO_FORMAT_YVYU:
		return AV_PIX_FMT_YVYU422;
	case VIDEO_FORMAT_RGBA:
		return AV_PIX_FMT_RGBA;
	case VIDEO_FORMAT_BGRA:
		return AV_PIX_FMT_BGRA;
	case VIDEO_FORMAT_BGRX:
		return AV_PIX_FMT_BGRA;
	case VIDEO_FORMAT_Y800:
		return AV_PIX_FMT_GRAY8;
	case VIDEO_FORMAT_I444:
		return AV_PIX_FMT_YUV444P;
	case VIDEO_FORMAT_I412:
		return AV_PIX_FMT_YUV444P12LE;
	case VIDEO_FORMAT_BGR3:
		return AV_PIX_FMT_BGR24;
	case VIDEO_FORMAT_I422:
		return AV_PIX_FMT_YUV422P;
	case VIDEO_FORMAT_I210:
		return AV_PIX_FMT_YUV422P10LE;
	case VIDEO_FORMAT_I40A:
		return AV_PIX_FMT_YUVA420P;
	case VIDEO_FORMAT_I42A:
		return AV_PIX_FMT_YUVA422P;
	case VIDEO_FORMAT_YUVA:
		return AV_PIX_FMT_YUVA444P;
	case VIDEO_FORMAT_YA2L:
		return AV_PIX_FMT_YUVA444P12LE;
	case VIDEO_FORMAT_I010:
		return AV_PIX_FMT_YUV420P10LE;
	case VIDEO_FORMAT_P010:
		return AV_PIX_FMT_P010LE;
	case VIDEO_FORMAT_P216:
		return AV_PIX_FMT_P216LE;
	case VIDEO_FORMAT_P416:
		return AV_PIX_FMT_P416LE;
	case VIDEO_FORMAT_NONE:
	case VIDEO_FORMAT_AYUV:
	default:
		return AV_PIX_FMT_NONE;
	}
}

static inline int get_ffmpeg_scale_type(enum video_scale_type type)
{
	switch (type) {
	case VIDEO_SCALE_DEFAULT:
		return SWS_FAST_BILINEAR;
	case VIDEO_SCALE_POINT:
		return SWS_POINT;
	case VIDEO_SCALE_FAST_BILINEAR:
		return SWS_FAST_BILINEAR;
	case VIDEO_SCALE_BILINEAR:
		return SWS_BILINEAR | SWS_AREA;
	case VIDEO_SCALE_BICUBIC:
		return SWS_BICUBIC;
	}

	return SWS_POINT;
}

static inline const int
*get_ffmpeg_coeffs(enum video_colorspace cs) { int colorspace = SWS_CS_ITU709; switch (cs) { case VIDEO_CS_DEFAULT: case VIDEO_CS_709: case VIDEO_CS_SRGB: default: colorspace = SWS_CS_ITU709; break; case VIDEO_CS_601: colorspace = SWS_CS_ITU601; break; case VIDEO_CS_2100_PQ: case VIDEO_CS_2100_HLG: colorspace = SWS_CS_BT2020; } return sws_getCoefficients(colorspace); } static inline int get_ffmpeg_range_type(enum video_range_type type) { switch (type) { case VIDEO_RANGE_DEFAULT: return 0; case VIDEO_RANGE_PARTIAL: return 0; case VIDEO_RANGE_FULL: return 1; } return 0; } #define FIXED_1_0 (1 << 16) int video_scaler_create(video_scaler_t **scaler_out, const struct video_scale_info *dst, const struct video_scale_info *src, enum video_scale_type type) { enum AVPixelFormat format_src = get_ffmpeg_video_format(src->format); enum AVPixelFormat format_dst = get_ffmpeg_video_format(dst->format); int scale_type = get_ffmpeg_scale_type(type); const int *coeff_src = get_ffmpeg_coeffs(src->colorspace); const int *coeff_dst = get_ffmpeg_coeffs(dst->colorspace); int range_src = get_ffmpeg_range_type(src->range); int range_dst = get_ffmpeg_range_type(dst->range); struct video_scaler *scaler; int ret; if (!scaler_out) return VIDEO_SCALER_FAILED; if (format_src == AV_PIX_FMT_NONE || format_dst == AV_PIX_FMT_NONE) return VIDEO_SCALER_BAD_CONVERSION; scaler = bzalloc(sizeof(struct video_scaler)); scaler->src_height = src->height; const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(format_dst); bool has_plane[4] = {0}; for (size_t i = 0; i < 4; i++) has_plane[desc->comp[i].plane] = 1; scaler->dst_heights[0] = dst->height; for (size_t i = 1; i < 4; ++i) { if (has_plane[i]) { const int s = (i == 1 || i == 2) ? 
desc->log2_chroma_h : 0; scaler->dst_heights[i] = dst->height >> s; } } ret = av_image_alloc(scaler->dst_pointers, scaler->dst_linesizes, dst->width, dst->height, format_dst, 32); if (ret < 0) { blog(LOG_WARNING, "video_scaler_create: av_image_alloc failed: %d", ret); goto fail; } scaler->swscale = sws_alloc_context(); if (!scaler->swscale) { blog(LOG_ERROR, "video_scaler_create: Could not create " "swscale"); goto fail; } av_opt_set_int(scaler->swscale, "sws_flags", scale_type, 0); av_opt_set_int(scaler->swscale, "srcw", src->width, 0); av_opt_set_int(scaler->swscale, "srch", src->height, 0); av_opt_set_int(scaler->swscale, "dstw", dst->width, 0); av_opt_set_int(scaler->swscale, "dsth", dst->height, 0); av_opt_set_int(scaler->swscale, "src_format", format_src, 0); av_opt_set_int(scaler->swscale, "dst_format", format_dst, 0); av_opt_set_int(scaler->swscale, "src_range", range_src, 0); av_opt_set_int(scaler->swscale, "dst_range", range_dst, 0); if (sws_init_context(scaler->swscale, NULL, NULL) < 0) { blog(LOG_ERROR, "video_scaler_create: sws_init_context failed"); goto fail; } ret = sws_setColorspaceDetails(scaler->swscale, coeff_src, range_src, coeff_dst, range_dst, 0, FIXED_1_0, FIXED_1_0); if (ret < 0) { blog(LOG_DEBUG, "video_scaler_create: " "sws_setColorspaceDetails failed, ignoring"); } *scaler_out = scaler; return VIDEO_SCALER_SUCCESS; fail: video_scaler_destroy(scaler); return VIDEO_SCALER_FAILED; } void video_scaler_destroy(video_scaler_t *scaler) { if (scaler) { sws_freeContext(scaler->swscale); if (scaler->dst_pointers[0]) av_freep(scaler->dst_pointers); bfree(scaler); } } bool video_scaler_scale(video_scaler_t *scaler, uint8_t *output[], const uint32_t out_linesize[], const uint8_t *const input[], const uint32_t in_linesize[]) { if (!scaler) return false; int ret = sws_scale(scaler->swscale, input, (const int *)in_linesize, 0, scaler->src_height, scaler->dst_pointers, scaler->dst_linesizes); if (ret <= 0) { blog(LOG_ERROR, "video_scaler_scale: sws_scale 
failed: %d", ret); return false; } for (size_t plane = 0; plane < 4; ++plane) { if (!scaler->dst_pointers[plane]) continue; const size_t scaled_linesize = scaler->dst_linesizes[plane]; const size_t plane_linesize = out_linesize[plane]; uint8_t *dst = output[plane]; const uint8_t *src = scaler->dst_pointers[plane]; const size_t height = scaler->dst_heights[plane]; if (scaled_linesize == plane_linesize) { memcpy(dst, src, scaled_linesize * height); } else { size_t linesize = scaled_linesize; if (linesize > plane_linesize) linesize = plane_linesize; for (size_t y = 0; y < height; y++) { memcpy(dst, src, linesize); dst += plane_linesize; src += scaled_linesize; } } } return true; } obs-studio-32.1.0-sources/libobs/media-io/video-frame.h000644 001751 001751 00000003574 15153330235 023552 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once #include "../util/bmem.h" #include "video-io.h" #ifdef __cplusplus extern "C" { #endif struct video_frame { uint8_t *data[MAX_AV_PLANES]; uint32_t linesize[MAX_AV_PLANES]; }; EXPORT void video_frame_init(struct video_frame *frame, enum video_format format, uint32_t width, uint32_t height); static inline void video_frame_free(struct video_frame *frame) { if (frame) { bfree(frame->data[0]); memset(frame, 0, sizeof(struct video_frame)); } } static inline struct video_frame *video_frame_create(enum video_format format, uint32_t width, uint32_t height) { struct video_frame *frame; frame = (struct video_frame *)bzalloc(sizeof(struct video_frame)); video_frame_init(frame, format, width, height); return frame; } static inline void video_frame_destroy(struct video_frame *frame) { if (frame) { bfree(frame->data[0]); bfree(frame); } } EXPORT void video_frame_copy(struct video_frame *dst, const struct video_frame *src, enum video_format format, uint32_t height); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/media-io/audio-io.h000644 001751 001751 00000014312 15153330235 023052 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/

#pragma once

#include "media-io-defs.h"
#include "../util/c99defs.h"
#include "../util/util_uint64.h"

#ifdef __cplusplus
extern "C" {
#endif

#define MAX_AUDIO_MIXES 6
#define MAX_AUDIO_CHANNELS 8
#define MAX_DEVICE_INPUT_CHANNELS 64

#define AUDIO_OUTPUT_FRAMES 1024

#define TOTAL_AUDIO_SIZE (MAX_AUDIO_MIXES * MAX_AUDIO_CHANNELS * AUDIO_OUTPUT_FRAMES * sizeof(float))

/*
 * Base audio output component.  Use this to create an audio output track
 * for the media.
 */

struct audio_output;
typedef struct audio_output audio_t;

enum audio_format {
    AUDIO_FORMAT_UNKNOWN,

    AUDIO_FORMAT_U8BIT,
    AUDIO_FORMAT_16BIT,
    AUDIO_FORMAT_32BIT,
    AUDIO_FORMAT_FLOAT,

    AUDIO_FORMAT_U8BIT_PLANAR,
    AUDIO_FORMAT_16BIT_PLANAR,
    AUDIO_FORMAT_32BIT_PLANAR,
    AUDIO_FORMAT_FLOAT_PLANAR,
};

/**
 * The speaker layout describes where the speakers are located in the room.
 * For OBS it dictates:
 *  * how many channels are available and
 *  * which channels are used for which speakers.
 *
 * Standard channel layouts were retrieved from the ffmpeg documentation at:
 * https://trac.ffmpeg.org/wiki/AudioChannelManipulation
 */
enum speaker_layout {
    SPEAKERS_UNKNOWN,     /**< Unknown setting, fallback is stereo. */
    SPEAKERS_MONO,        /**< Channels: MONO */
    SPEAKERS_STEREO,      /**< Channels: FL, FR */
    SPEAKERS_2POINT1,     /**< Channels: FL, FR, LFE */
    SPEAKERS_4POINT0,     /**< Channels: FL, FR, FC, RC */
    SPEAKERS_4POINT1,     /**< Channels: FL, FR, FC, LFE, RC */
    SPEAKERS_5POINT1,     /**< Channels: FL, FR, FC, LFE, RL, RR */
    SPEAKERS_7POINT1 = 8, /**< Channels: FL, FR, FC, LFE, RL, RR, SL, SR */
};

struct audio_data {
    uint8_t *data[MAX_AV_PLANES];
    uint32_t frames;
    uint64_t timestamp;
};

struct audio_output_data {
    float *data[MAX_AUDIO_CHANNELS];
};

typedef bool (*audio_input_callback_t)(void *param, uint64_t start_ts, uint64_t end_ts, uint64_t *new_ts,
                                       uint32_t active_mixers, struct audio_output_data *mixes);

struct audio_output_info {
    const char *name;

    uint32_t samples_per_sec;
    enum audio_format format;
    enum speaker_layout speakers;

    audio_input_callback_t input_callback;
    void *input_param;
};

struct audio_convert_info {
    uint32_t samples_per_sec;
    enum audio_format format;
    enum speaker_layout speakers;
    bool allow_clipping;
};

static inline uint32_t get_audio_channels(enum speaker_layout speakers)
{
    switch (speakers) {
    case SPEAKERS_MONO:
        return 1;
    case SPEAKERS_STEREO:
        return 2;
    case SPEAKERS_2POINT1:
        return 3;
    case SPEAKERS_4POINT0:
        return 4;
    case SPEAKERS_4POINT1:
        return 5;
    case SPEAKERS_5POINT1:
        return 6;
    case SPEAKERS_7POINT1:
        return 8;
    case SPEAKERS_UNKNOWN:
        return 0;
    }

    return 0;
}

static inline size_t get_audio_bytes_per_channel(enum audio_format format)
{
    switch (format) {
    case AUDIO_FORMAT_U8BIT:
    case AUDIO_FORMAT_U8BIT_PLANAR:
        return 1;

    case AUDIO_FORMAT_16BIT:
    case AUDIO_FORMAT_16BIT_PLANAR:
        return 2;

    case AUDIO_FORMAT_FLOAT:
    case AUDIO_FORMAT_FLOAT_PLANAR:
    case AUDIO_FORMAT_32BIT:
    case AUDIO_FORMAT_32BIT_PLANAR:
        return 4;

    case AUDIO_FORMAT_UNKNOWN:
        return 0;
    }

    return 0;
}

static inline bool is_audio_planar(enum audio_format format)
{
    switch (format) {
    case AUDIO_FORMAT_U8BIT:
    case AUDIO_FORMAT_16BIT:
    case AUDIO_FORMAT_32BIT:
    case AUDIO_FORMAT_FLOAT:
        return false;

    case AUDIO_FORMAT_U8BIT_PLANAR:
    case AUDIO_FORMAT_FLOAT_PLANAR:
    case AUDIO_FORMAT_16BIT_PLANAR:
    case AUDIO_FORMAT_32BIT_PLANAR:
        return true;

    case AUDIO_FORMAT_UNKNOWN:
        return false;
    }

    return false;
}

static inline size_t get_audio_planes(enum audio_format format, enum speaker_layout speakers)
{
    return (is_audio_planar(format) ? get_audio_channels(speakers) : 1);
}

static inline size_t get_audio_size(enum audio_format format, enum speaker_layout speakers, uint32_t frames)
{
    bool planar = is_audio_planar(format);

    return (planar ? 1 : get_audio_channels(speakers)) * get_audio_bytes_per_channel(format) * frames;
}

static inline size_t get_total_audio_size(enum audio_format format, enum speaker_layout speakers, uint32_t frames)
{
    return get_audio_channels(speakers) * get_audio_bytes_per_channel(format) * frames;
}

static inline uint64_t audio_frames_to_ns(size_t sample_rate, uint64_t frames)
{
    return util_mul_div64(frames, 1000000000ULL, sample_rate);
}

static inline uint64_t ns_to_audio_frames(size_t sample_rate, uint64_t ns)
{
    return util_mul_div64(ns, sample_rate, 1000000000ULL);
}

#define AUDIO_OUTPUT_SUCCESS 0
#define AUDIO_OUTPUT_INVALIDPARAM -1
#define AUDIO_OUTPUT_FAIL -2

EXPORT int audio_output_open(audio_t **audio, struct audio_output_info *info);
EXPORT void audio_output_close(audio_t *audio);

typedef void (*audio_output_callback_t)(void *param, size_t mix_idx, struct audio_data *data);

EXPORT bool audio_output_connect(audio_t *audio, size_t mix_idx, const struct audio_convert_info *conversion,
                                 audio_output_callback_t callback, void *param);
EXPORT void audio_output_disconnect(audio_t *audio, size_t mix_idx, audio_output_callback_t callback, void *param);

EXPORT bool audio_output_active(const audio_t *audio);

EXPORT size_t audio_output_get_block_size(const audio_t *audio);
EXPORT size_t audio_output_get_planes(const audio_t *audio);
EXPORT size_t audio_output_get_channels(const audio_t *audio);
EXPORT uint32_t audio_output_get_sample_rate(const audio_t *audio);
EXPORT const struct audio_output_info *audio_output_get_info(const audio_t *audio);

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/media-io/video-matrices.c

/******************************************************************************
 Copyright (C) 2014 by Ruwen Hahn

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 2 of the License, or
 (at your option) any later version.

 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 GNU General Public License for more details.

 You should have received a copy of the GNU General Public License
 along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "video-io.h"
#include "../util/bmem.h"
#include "../graphics/matrix3.h"
#include <assert.h>

//#define LOG_MATRICES

static struct {
    float range_min[3];
    float range_max[3];
    float black_levels[2][3];

    float float_range_min[3];
    float float_range_max[3];
} bpp_info[9];

static struct {
    enum video_colorspace const color_space;
    float const Kb, Kr;
    float matrix[OBS_COUNTOF(bpp_info)][2][16];
} format_info[] = {
    {
        VIDEO_CS_601,
        0.114f,
        0.299f,
    },
    {
        VIDEO_CS_709,
        0.0722f,
        0.2126f,
    },
    {
        VIDEO_CS_2100_PQ,
        0.0593f,
        0.2627f,
    },
};

#define NUM_FORMATS (sizeof(format_info) / sizeof(format_info[0]))

#ifdef LOG_MATRICES
static void log_matrix(float const matrix[16])
{
    blog(LOG_DEBUG,
         "\n% f, % f, % f, % f"
         "\n% f, % f, % f, % f"
         "\n% f, % f, % f, % f"
         "\n% f, % f, % f, % f",
         matrix[0], matrix[1], matrix[2], matrix[3], matrix[4], matrix[5], matrix[6], matrix[7], matrix[8],
         matrix[9], matrix[10], matrix[11], matrix[12], matrix[13], matrix[14], matrix[15]);
}
#endif

static
void initialize_matrix(float const Kb, float const Kr, float bit_range_max, float const range_min[3], float const range_max[3], float const black_levels[3], float matrix[16]) { struct matrix3 color_matrix; float const yvals = range_max[0] - range_min[0]; float const uvals = (range_max[1] - range_min[1]) / 2.f; float const vvals = (range_max[2] - range_min[2]) / 2.f; float const yscale = bit_range_max / yvals; float const uscale = bit_range_max / uvals; float const vscale = bit_range_max / vvals; float const Kg = (1.f - Kb - Kr); vec3_set(&color_matrix.x, yscale, 0.f, vscale * (1.f - Kr)); vec3_set(&color_matrix.y, yscale, uscale * (Kb - 1.f) * Kb / Kg, vscale * (Kr - 1.f) * Kr / Kg); vec3_set(&color_matrix.z, yscale, uscale * (1.f - Kb), 0.f); struct vec3 offsets, multiplied; vec3_set(&offsets, -black_levels[0] / bit_range_max, -black_levels[1] / bit_range_max, -black_levels[2] / bit_range_max); vec3_rotate(&multiplied, &offsets, &color_matrix); matrix[0] = color_matrix.x.x; matrix[1] = color_matrix.x.y; matrix[2] = color_matrix.x.z; matrix[3] = multiplied.x; matrix[4] = color_matrix.y.x; matrix[5] = color_matrix.y.y; matrix[6] = color_matrix.y.z; matrix[7] = multiplied.y; matrix[8] = color_matrix.z.x; matrix[9] = color_matrix.z.y; matrix[10] = color_matrix.z.z; matrix[11] = multiplied.z; matrix[12] = matrix[13] = matrix[14] = 0.; matrix[15] = 1.; #ifdef LOG_MATRICES log_matrix(matrix); #endif } static void initialize_matrices() { static const float full_range_min3[] = {0, 0, 0}; float min_value = 16.f; float max_luma = 235.f; float max_chroma = 240.f; float range = 256.f; for (uint32_t bpp = 8; bpp <= 16; ++bpp) { const uint32_t bpp_index = bpp - 8; bpp_info[bpp_index].range_min[0] = min_value; bpp_info[bpp_index].range_min[1] = min_value; bpp_info[bpp_index].range_min[2] = min_value; bpp_info[bpp_index].range_max[0] = max_luma; bpp_info[bpp_index].range_max[1] = max_chroma; bpp_info[bpp_index].range_max[2] = max_chroma; const float mid_chroma = 0.5f * (min_value 
+ max_chroma); bpp_info[bpp_index].black_levels[0][0] = min_value; bpp_info[bpp_index].black_levels[0][1] = mid_chroma; bpp_info[bpp_index].black_levels[0][2] = mid_chroma; bpp_info[bpp_index].black_levels[1][0] = 0.f; bpp_info[bpp_index].black_levels[1][1] = mid_chroma; bpp_info[bpp_index].black_levels[1][2] = mid_chroma; const float range_max = range - 1.f; bpp_info[bpp_index].float_range_min[0] = min_value / range_max; bpp_info[bpp_index].float_range_min[1] = min_value / range_max; bpp_info[bpp_index].float_range_min[2] = min_value / range_max; bpp_info[bpp_index].float_range_max[0] = max_luma / range_max; bpp_info[bpp_index].float_range_max[1] = max_chroma / range_max; bpp_info[bpp_index].float_range_max[2] = max_chroma / range_max; for (size_t i = 0; i < NUM_FORMATS; i++) { float full_range_max3[] = {range_max, range_max, range_max}; initialize_matrix(format_info[i].Kb, format_info[i].Kr, range_max, full_range_min3, full_range_max3, bpp_info[bpp_index].black_levels[1], format_info[i].matrix[bpp_index][1]); initialize_matrix(format_info[i].Kb, format_info[i].Kr, range_max, bpp_info[bpp_index].range_min, bpp_info[bpp_index].range_max, bpp_info[bpp_index].black_levels[0], format_info[i].matrix[bpp_index][0]); } min_value *= 2.f; max_luma *= 2.f; max_chroma *= 2.f; range *= 2.f; } } static bool matrices_initialized = false; static const float full_min[3] = {0.0f, 0.0f, 0.0f}; static const float full_max[3] = {1.0f, 1.0f, 1.0f}; static bool video_format_get_parameters_for_bpc(enum video_colorspace color_space, enum video_range_type range, float matrix[16], float range_min[3], float range_max[3], uint32_t bpc) { if (!matrices_initialized) { initialize_matrices(); matrices_initialized = true; } if ((color_space == VIDEO_CS_DEFAULT) || (color_space == VIDEO_CS_SRGB)) color_space = VIDEO_CS_709; else if (color_space == VIDEO_CS_2100_HLG) color_space = VIDEO_CS_2100_PQ; if (bpc < 8) bpc = 8; if (bpc > 16) bpc = 16; const uint32_t bpc_index = bpc - 8; assert(bpc_index < 
OBS_COUNTOF(bpp_info));

    bool success = false;
    for (size_t i = 0; i < NUM_FORMATS; i++) {
        success = format_info[i].color_space == color_space;
        if (success) {
            const bool full_range = range == VIDEO_RANGE_FULL;
            memcpy(matrix, format_info[i].matrix[bpc_index][full_range], sizeof(float) * 16);
            if (range_min) {
                const float *src_range_min = full_range ? full_min : bpp_info[bpc_index].float_range_min;
                memcpy(range_min, src_range_min, sizeof(float) * 3);
            }
            if (range_max) {
                const float *src_range_max = full_range ? full_max : bpp_info[bpc_index].float_range_max;
                memcpy(range_max, src_range_max, sizeof(float) * 3);
            }
            break;
        }
    }

    return success;
}

bool video_format_get_parameters(enum video_colorspace color_space, enum video_range_type range, float matrix[16],
                                 float range_min[3], float range_max[3])
{
    uint32_t bpc = (color_space == VIDEO_CS_2100_PQ || color_space == VIDEO_CS_2100_HLG) ? 10 : 8;
    return video_format_get_parameters_for_bpc(color_space, range, matrix, range_min, range_max, bpc);
}

bool video_format_get_parameters_for_format(enum video_colorspace color_space, enum video_range_type range,
                                            enum video_format format, float matrix[16], float range_min[3],
                                            float range_max[3])
{
    uint32_t bpc;
    switch (format) {
    case VIDEO_FORMAT_I010:
    case VIDEO_FORMAT_P010:
    case VIDEO_FORMAT_I210:
    case VIDEO_FORMAT_V210:
    case VIDEO_FORMAT_R10L:
        bpc = 10;
        break;
    case VIDEO_FORMAT_I412:
    case VIDEO_FORMAT_YA2L:
        bpc = 12;
        break;
    case VIDEO_FORMAT_P216:
    case VIDEO_FORMAT_P416:
        bpc = 16;
        break;
    default:
        bpc = 8;
        break;
    }
    return video_format_get_parameters_for_bpc(color_space, range, matrix, range_min, range_max, bpc);
}

obs-studio-32.1.0-sources/libobs/media-io/media-remux.h

/******************************************************************************
 Copyright (C) 2014 by Ruwen Hahn

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 2 of the License, or
 (at your option) any later version.

 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 GNU General Public License for more details.

 You should have received a copy of the GNU General Public License
 along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "../util/c99defs.h"

#pragma once

struct media_remux_job;
typedef struct media_remux_job *media_remux_job_t;

typedef bool (media_remux_progress_callback)(void *data, float percent);

#ifdef __cplusplus
extern "C" {
#endif

EXPORT bool media_remux_job_create(media_remux_job_t *job, const char *in_filename, const char *out_filename);
EXPORT bool media_remux_job_process(media_remux_job_t job, media_remux_progress_callback callback, void *data);
EXPORT void media_remux_job_destroy(media_remux_job_t job);

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/pkgconfig/libobs.pc.in

prefix=@CMAKE_INSTALL_PREFIX@
exec_prefix=${prefix}
libdir=@CMAKE_INSTALL_FULL_LIBDIR@
includedir=@CMAKE_INSTALL_FULL_INCLUDEDIR@/obs

Name: libobs
Description: OBS Studio Library
Version: @OBS_VERSION_CANONICAL@
Cflags: -I${includedir} @_TARGET_DEFINITIONS@ @_TARGET_OPTIONS@ @_LINKED_DEFINITIONS@ @_LINKED_OPTIONS@
Libs: -L${libdir} -lobs @_LINKED_LIBRARIES@

obs-studio-32.1.0-sources/libobs/graphics/input.h
/******************************************************************************
 Copyright (C) 2023 by Lain Bailey

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 2 of the License, or
 (at your option) any later version.

 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 GNU General Public License for more details.

 You should have received a copy of the GNU General Public License
 along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

/* TODO: incomplete/may not be necessary */

#ifdef __cplusplus
extern "C" {
#endif

/* wrapped opaque data types */

struct input_subsystem;
typedef struct input_subsystem input_t;

EXPORT int input_getbuttonstate(input_t *input, uint32_t button);

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/graphics/effect-parser.h

/******************************************************************************
 Copyright (C) 2023 by Lain Bailey

 This program is free software: you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation, either version 2 of the License, or
 (at your option) any later version.

 This program is distributed in the hope that it will be useful,
 but WITHOUT ANY WARRANTY; without even the implied warranty of
 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 GNU General Public License for more details.

 You should have received a copy of the GNU General Public License
 along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #pragma once #include "../util/darray.h" #include "../util/cf-parser.h" #include "graphics.h" #include "shader-parser.h" #ifdef __cplusplus extern "C" { #endif struct dstr; typedef DARRAY(struct ep_param) ep_param_array_t; typedef DARRAY(struct ep_var) ep_var_array_t; /* * The effect parser takes an effect file and converts it into individual * shaders for each technique's pass. It automatically writes all dependent * structures/functions/parameters to the shader and builds shader text for * each shader component of each pass. */ /* ------------------------------------------------------------------------- */ /* effect parser var data */ enum ep_var_type { EP_VAR_NONE, EP_VAR_IN = EP_VAR_NONE, EP_VAR_INOUT, EP_VAR_OUT, EP_VAR_UNIFORM }; struct ep_var { char *type, *name, *mapping; enum ep_var_type var_type; }; static inline void ep_var_init(struct ep_var *epv) { memset(epv, 0, sizeof(struct ep_var)); } static inline void ep_var_free(struct ep_var *epv) { bfree(epv->type); bfree(epv->name); bfree(epv->mapping); } /* ------------------------------------------------------------------------- */ /* effect parser param data */ struct ep_param { char *type, *name; DARRAY(uint8_t) default_val; DARRAY(char *) properties; struct gs_effect_param *param; bool is_const, is_property, is_uniform, is_texture, written; int writeorder, array_count; ep_param_array_t annotations; }; static inline void ep_param_init(struct ep_param *epp, char *type, char *name, bool is_property, bool is_const, bool is_uniform) { epp->type = type; epp->name = name; epp->is_property = is_property; epp->is_const = is_const; epp->is_uniform = is_uniform; epp->is_texture = (astrcmp_n(epp->type, "texture", 7) == 0); epp->written = false; epp->writeorder = false; epp->array_count = 0; da_init(epp->default_val); da_init(epp->properties); da_init(epp->annotations); } static inline void ep_param_free(struct ep_param *epp) { 
bfree(epp->type); bfree(epp->name); da_free(epp->default_val); da_free(epp->properties); for (size_t i = 0; i < epp->annotations.num; i++) ep_param_free(epp->annotations.array + i); da_free(epp->annotations); } /* ------------------------------------------------------------------------- */ /* effect parser struct data */ struct ep_struct { char *name; ep_var_array_t vars; /* struct ep_var */ bool written; }; static inline bool ep_struct_mapped(struct ep_struct *eps) { if (eps->vars.num > 0) return eps->vars.array[0].mapping != NULL; return false; } static inline void ep_struct_init(struct ep_struct *eps) { memset(eps, 0, sizeof(struct ep_struct)); } static inline void ep_struct_free(struct ep_struct *eps) { size_t i; bfree(eps->name); for (i = 0; i < eps->vars.num; i++) ep_var_free(eps->vars.array + i); da_free(eps->vars); } /* ------------------------------------------------------------------------- */ /* effect parser sampler data */ struct ep_sampler { char *name; DARRAY(char *) states; DARRAY(char *) values; bool written; }; static inline void ep_sampler_init(struct ep_sampler *eps) { memset(eps, 0, sizeof(struct ep_sampler)); } static inline void ep_sampler_free(struct ep_sampler *eps) { size_t i; for (i = 0; i < eps->states.num; i++) bfree(eps->states.array[i]); for (i = 0; i < eps->values.num; i++) bfree(eps->values.array[i]); bfree(eps->name); da_free(eps->states); da_free(eps->values); } /* ------------------------------------------------------------------------- */ /* effect parser pass data */ struct ep_pass { char *name; cf_token_array_t vertex_program; cf_token_array_t fragment_program; struct gs_effect_pass *pass; }; static inline void ep_pass_init(struct ep_pass *epp) { memset(epp, 0, sizeof(struct ep_pass)); } static inline void ep_pass_free(struct ep_pass *epp) { bfree(epp->name); da_free(epp->vertex_program); da_free(epp->fragment_program); } /* ------------------------------------------------------------------------- */ /* effect parser technique 
data */ struct ep_technique { char *name; DARRAY(struct ep_pass) passes; /* struct ep_pass */ }; static inline void ep_technique_init(struct ep_technique *ept) { memset(ept, 0, sizeof(struct ep_technique)); } static inline void ep_technique_free(struct ep_technique *ept) { size_t i; for (i = 0; i < ept->passes.num; i++) ep_pass_free(ept->passes.array + i); bfree(ept->name); da_free(ept->passes); } /* ------------------------------------------------------------------------- */ /* effect parser function data */ struct ep_func { char *name, *ret_type, *mapping; struct dstr contents; ep_var_array_t param_vars; DARRAY(char *) func_deps; DARRAY(char *) struct_deps; DARRAY(char *) param_deps; DARRAY(char *) sampler_deps; bool written; }; static inline void ep_func_init(struct ep_func *epf, char *ret_type, char *name) { memset(epf, 0, sizeof(struct ep_func)); epf->name = name; epf->ret_type = ret_type; } static inline void ep_func_free(struct ep_func *epf) { size_t i; for (i = 0; i < epf->param_vars.num; i++) ep_var_free(epf->param_vars.array + i); bfree(epf->name); bfree(epf->ret_type); bfree(epf->mapping); dstr_free(&epf->contents); da_free(epf->param_vars); da_free(epf->func_deps); da_free(epf->struct_deps); da_free(epf->param_deps); da_free(epf->sampler_deps); } /* ------------------------------------------------------------------------- */ struct effect_parser { gs_effect_t *effect; ep_param_array_t params; DARRAY(struct ep_struct) structs; DARRAY(struct ep_func) funcs; DARRAY(struct ep_sampler) samplers; DARRAY(struct ep_technique) techniques; /* internal vars */ DARRAY(struct cf_lexer) files; cf_token_array_t tokens; struct gs_effect_pass *cur_pass; struct cf_parser cfp; }; static inline void ep_init(struct effect_parser *ep) { da_init(ep->params); da_init(ep->structs); da_init(ep->funcs); da_init(ep->samplers); da_init(ep->techniques); da_init(ep->files); da_init(ep->tokens); ep->cur_pass = NULL; cf_parser_init(&ep->cfp); } extern void ep_free(struct effect_parser 
*ep); extern bool ep_parse(struct effect_parser *ep, gs_effect_t *effect, const char *effect_string, const char *file); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/matrix3.h000644 001751 001751 00000006204 15153330235 023050 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "vec3.h" #include "axisang.h" /* 3x4 Matrix */ #ifdef __cplusplus extern "C" { #endif struct matrix4; struct matrix3 { struct vec3 x; struct vec3 y; struct vec3 z; struct vec3 t; }; static inline void matrix3_copy(struct matrix3 *dst, const struct matrix3 *m) { vec3_copy(&dst->x, &m->x); vec3_copy(&dst->y, &m->y); vec3_copy(&dst->z, &m->z); vec3_copy(&dst->t, &m->t); } static inline void matrix3_identity(struct matrix3 *dst) { vec3_zero(&dst->x); vec3_zero(&dst->y); vec3_zero(&dst->z); vec3_zero(&dst->t); dst->x.x = dst->y.y = dst->z.z = 1.0f; } EXPORT void matrix3_from_quat(struct matrix3 *dst, const struct quat *q); EXPORT void matrix3_from_axisang(struct matrix3 *dst, const struct axisang *aa); EXPORT void matrix3_from_matrix4(struct matrix3 *dst, const struct matrix4 *m); EXPORT void matrix3_mul(struct matrix3 *dst, const struct matrix3 *m1, const struct matrix3 *m2); static inline void 
matrix3_translate(struct matrix3 *dst, const struct matrix3 *m, const struct vec3 *v) { vec3_sub(&dst->t, &m->t, v); } EXPORT void matrix3_rotate(struct matrix3 *dst, const struct matrix3 *m, const struct quat *q); EXPORT void matrix3_rotate_aa(struct matrix3 *dst, const struct matrix3 *m, const struct axisang *aa); EXPORT void matrix3_scale(struct matrix3 *dst, const struct matrix3 *m, const struct vec3 *v); EXPORT void matrix3_transpose(struct matrix3 *dst, const struct matrix3 *m); EXPORT void matrix3_inv(struct matrix3 *dst, const struct matrix3 *m); EXPORT void matrix3_mirror(struct matrix3 *dst, const struct matrix3 *m, const struct plane *p); EXPORT void matrix3_mirrorv(struct matrix3 *dst, const struct matrix3 *m, const struct vec3 *v); static inline void matrix3_translate3f(struct matrix3 *dst, const struct matrix3 *m, float x, float y, float z) { struct vec3 v; vec3_set(&v, x, y, z); matrix3_translate(dst, m, &v); } static inline void matrix3_rotate_aa4f(struct matrix3 *dst, const struct matrix3 *m, float x, float y, float z, float rot) { struct axisang aa; axisang_set(&aa, x, y, z, rot); matrix3_rotate_aa(dst, m, &aa); } static inline void matrix3_scale3f(struct matrix3 *dst, const struct matrix3 *m, float x, float y, float z) { struct vec3 v; vec3_set(&v, x, y, z); matrix3_scale(dst, m, &v); } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/graphics.c000644 001751 001751 00000234531 15153330235 023262 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include #include "../util/base.h" #include "../util/bmem.h" #include "../util/platform.h" #include "graphics-internal.h" #include "vec2.h" #include "vec3.h" #include "quat.h" #include "axisang.h" #include "effect-parser.h" #include "effect.h" #ifdef near #undef near #endif #ifdef far #undef far #endif static THREAD_LOCAL graphics_t *thread_graphics = NULL; static inline bool gs_obj_valid(const void *obj, const char *f, const char *name) { if (!obj) { blog(LOG_DEBUG, "%s: Null '%s' parameter", f, name); return false; } return true; } static inline bool gs_valid(const char *f) { if (!thread_graphics) { blog(LOG_DEBUG, "%s: called while not in a graphics context", f); return false; } return true; } #define ptr_valid(ptr, func) gs_obj_valid(ptr, func, #ptr) #define gs_valid_p(func, param1) (gs_valid(func) && ptr_valid(param1, func)) #define gs_valid_p2(func, param1, param2) (gs_valid(func) && ptr_valid(param1, func) && ptr_valid(param2, func)) #define gs_valid_p3(func, param1, param2, param3) \ (gs_valid(func) && ptr_valid(param1, func) && ptr_valid(param2, func) && ptr_valid(param3, func)) #define IMMEDIATE_COUNT 512 void gs_enum_adapters(bool (*callback)(void *param, const char *name, uint32_t id), void *param) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_enum_adapters", callback)) return; if (graphics->exports.device_enum_adapters) { if (graphics->exports.device_enum_adapters(graphics->device, callback, param)) { return; } } /* If the subsystem does not currently support device enumeration of * adapters or fails to enumerate adapters, 
just set it to one adapter * named "Default" */ callback(param, "Default", 0); } extern void gs_init_image_deps(void); extern void gs_free_image_deps(void); bool load_graphics_imports(struct gs_exports *exports, void *module, const char *module_name); static bool graphics_init_immediate_vb(struct graphics_subsystem *graphics) { struct gs_vb_data *vbd; vbd = gs_vbdata_create(); vbd->num = IMMEDIATE_COUNT; vbd->points = bmalloc(sizeof(struct vec3) * IMMEDIATE_COUNT); vbd->normals = bmalloc(sizeof(struct vec3) * IMMEDIATE_COUNT); vbd->colors = bmalloc(sizeof(uint32_t) * IMMEDIATE_COUNT); vbd->num_tex = 1; vbd->tvarray = bmalloc(sizeof(struct gs_tvertarray)); vbd->tvarray[0].width = 2; vbd->tvarray[0].array = bmalloc(sizeof(struct vec2) * IMMEDIATE_COUNT); graphics->immediate_vertbuffer = graphics->exports.device_vertexbuffer_create(graphics->device, vbd, GS_DYNAMIC); if (!graphics->immediate_vertbuffer) return false; return true; } static bool graphics_init_sprite_vbs(struct graphics_subsystem *graphics) { struct gs_vb_data *vbd; vbd = gs_vbdata_create(); vbd->num = 4; vbd->points = bzalloc(sizeof(struct vec3) * 4); vbd->num_tex = 1; vbd->tvarray = bzalloc(sizeof(struct gs_tvertarray)); vbd->tvarray[0].width = 2; vbd->tvarray[0].array = bzalloc(sizeof(struct vec2) * 4); vbd->points[1].x = 1.0f; vbd->points[2].y = 1.0f; vbd->points[3].x = 1.0f; vbd->points[3].y = 1.0f; struct vec2 *uvs = vbd->tvarray[0].array; uvs[1].x = 1.0f; uvs[2].y = 1.0f; uvs[3].x = 1.0f; uvs[3].y = 1.0f; graphics->sprite_buffer = gs_vertexbuffer_create(vbd, GS_DUP_BUFFER); if (!graphics->sprite_buffer) return false; graphics->subregion_buffer = gs_vertexbuffer_create(vbd, GS_DUP_BUFFER | GS_DYNAMIC); if (!graphics->subregion_buffer) return false; uvs[0].y = 1.0f; uvs[1].y = 1.0f; uvs[2].y = 0.0f; uvs[3].y = 0.0f; graphics->flipped_sprite_buffer = gs_vertexbuffer_create(vbd, 0); if (!graphics->flipped_sprite_buffer) return false; return true; } static bool graphics_init(struct graphics_subsystem 
*graphics) { struct matrix4 top_mat; matrix4_identity(&top_mat); da_push_back(graphics->matrix_stack, &top_mat); graphics->exports.device_enter_context(graphics->device); thread_graphics = graphics; if (!graphics_init_immediate_vb(graphics)) return false; if (!graphics_init_sprite_vbs(graphics)) return false; if (pthread_mutex_init(&graphics->mutex, NULL) != 0) return false; if (pthread_mutex_init(&graphics->effect_mutex, NULL) != 0) return false; graphics->exports.device_blend_function_separate(graphics->device, GS_BLEND_SRCALPHA, GS_BLEND_INVSRCALPHA, GS_BLEND_ONE, GS_BLEND_INVSRCALPHA); graphics->cur_blend_state.enabled = true; graphics->cur_blend_state.src_c = GS_BLEND_SRCALPHA; graphics->cur_blend_state.dest_c = GS_BLEND_INVSRCALPHA; graphics->cur_blend_state.src_a = GS_BLEND_ONE; graphics->cur_blend_state.dest_a = GS_BLEND_INVSRCALPHA; graphics->cur_blend_state.op = GS_BLEND_OP_ADD; graphics->exports.device_blend_op(graphics->device, graphics->cur_blend_state.op); graphics->exports.device_leave_context(graphics->device); gs_init_image_deps(); thread_graphics = NULL; return true; } int gs_create(graphics_t **pgraphics, const char *module, uint32_t adapter) { int errcode = GS_ERROR_FAIL; graphics_t *graphics = bzalloc(sizeof(struct graphics_subsystem)); pthread_mutex_init_value(&graphics->mutex); pthread_mutex_init_value(&graphics->effect_mutex); graphics->module = os_dlopen(module); if (!graphics->module) { errcode = GS_ERROR_MODULE_NOT_FOUND; goto error; } if (!load_graphics_imports(&graphics->exports, graphics->module, module)) goto error; errcode = graphics->exports.device_create(&graphics->device, adapter); if (errcode != GS_SUCCESS) goto error; if (!graphics_init(graphics)) { errcode = GS_ERROR_FAIL; goto error; } *pgraphics = graphics; return errcode; error: gs_destroy(graphics); return errcode; } extern void gs_effect_actually_destroy(gs_effect_t *effect); void gs_destroy(graphics_t *graphics) { if (!ptr_valid(graphics, "gs_destroy")) return; while 
(thread_graphics)
		gs_leave_context();

	if (graphics->device) {
		struct gs_effect *effect = graphics->first_effect;

		thread_graphics = graphics;
		graphics->exports.device_enter_context(graphics->device);

		while (effect) {
			struct gs_effect *next = effect->next;
			gs_effect_actually_destroy(effect);
			effect = next;
		}

		graphics->exports.gs_vertexbuffer_destroy(graphics->subregion_buffer);
		graphics->exports.gs_vertexbuffer_destroy(graphics->flipped_sprite_buffer);
		graphics->exports.gs_vertexbuffer_destroy(graphics->sprite_buffer);
		graphics->exports.gs_vertexbuffer_destroy(graphics->immediate_vertbuffer);
		graphics->exports.device_destroy(graphics->device);

		thread_graphics = NULL;
	}

	pthread_mutex_destroy(&graphics->mutex);
	pthread_mutex_destroy(&graphics->effect_mutex);
	da_free(graphics->matrix_stack);
	da_free(graphics->viewport_stack);
	da_free(graphics->blend_state_stack);
	if (graphics->module)
		os_dlclose(graphics->module);
	bfree(graphics);

	gs_free_image_deps();
}

void gs_enter_context(graphics_t *graphics)
{
	if (!ptr_valid(graphics, "gs_enter_context"))
		return;

	bool is_current = thread_graphics == graphics;
	if (thread_graphics && !is_current) {
		while (thread_graphics)
			gs_leave_context();
	}

	if (!is_current) {
		pthread_mutex_lock(&graphics->mutex);
		graphics->exports.device_enter_context(graphics->device);
		thread_graphics = graphics;
	}

	os_atomic_inc_long(&graphics->ref);
}

void gs_leave_context(void)
{
	if (gs_valid("gs_leave_context")) {
		if (!os_atomic_dec_long(&thread_graphics->ref)) {
			graphics_t *graphics = thread_graphics;

			graphics->exports.device_leave_context(graphics->device);
			pthread_mutex_unlock(&graphics->mutex);
			thread_graphics = NULL;
		}
	}
}

graphics_t *gs_get_context(void)
{
	return thread_graphics;
}

void *gs_get_device_obj(void)
{
	if (!gs_valid("gs_get_device_obj"))
		return NULL;

	return thread_graphics->exports.device_get_device_obj(thread_graphics->device);
}

const char *gs_get_device_name(void)
{
	return gs_valid("gs_get_device_name") ? thread_graphics->exports.device_get_name() : NULL;
}

const char *gs_get_driver_version(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_driver_version"))
		return NULL;

	if (graphics->exports.gpu_get_driver_version)
		return (graphics->exports.gpu_get_driver_version());
	else
		return NULL;
}

const char *gs_get_renderer(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_renderer"))
		return NULL;

	if (graphics->exports.gpu_get_renderer)
		return (graphics->exports.gpu_get_renderer());
	else
		return NULL;
}

uint64_t gs_get_gpu_dmem(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_gpu_dmem"))
		return 0;

	if (graphics->exports.gpu_get_dmem)
		return (graphics->exports.gpu_get_dmem());
	else
		return 0;
}

uint64_t gs_get_gpu_smem(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_gpu_smem"))
		return 0;

	if (graphics->exports.gpu_get_smem)
		return (graphics->exports.gpu_get_smem());
	else
		return 0;
}

int gs_get_device_type(void)
{
	return gs_valid("gs_get_device_type") ? thread_graphics->exports.device_get_type() : -1;
}

static inline struct matrix4 *top_matrix(graphics_t *graphics)
{
	return graphics->matrix_stack.array + graphics->cur_matrix;
}

void gs_matrix_push(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_matrix_push"))
		return;

	struct matrix4 mat, *top_mat = top_matrix(graphics);

	memcpy(&mat, top_mat, sizeof(struct matrix4));
	da_push_back(graphics->matrix_stack, &mat);
	graphics->cur_matrix++;
}

void gs_matrix_pop(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_matrix_pop"))
		return;

	if (graphics->cur_matrix == 0) {
		blog(LOG_ERROR, "Tried to pop last matrix on stack");
		return;
	}

	da_erase(graphics->matrix_stack, graphics->cur_matrix);
	graphics->cur_matrix--;
}

void gs_matrix_identity(void)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_identity"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_identity(top_mat);
}

void gs_matrix_transpose(void)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_transpose"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_transpose(top_mat, top_mat);
}

void gs_matrix_set(const struct matrix4 *matrix)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_set"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_copy(top_mat, matrix);
}

void gs_matrix_get(struct matrix4 *dst)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_get"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_copy(dst, top_mat);
}

void gs_matrix_mul(const struct matrix4 *matrix)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_mul"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_mul(top_mat, matrix, top_mat);
}

void gs_matrix_rotquat(const struct quat *rot)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_rotquat"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_rotate_i(top_mat, rot, top_mat);
}

void gs_matrix_rotaa(const struct axisang *rot)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_rotaa"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_rotate_aa_i(top_mat, rot, top_mat);
}

void gs_matrix_translate(const struct vec3 *pos)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_translate"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_translate3v_i(top_mat, pos, top_mat);
}

void gs_matrix_scale(const struct vec3 *scale)
{
	struct matrix4 *top_mat;

	if (!gs_valid("gs_matrix_scale"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat)
		matrix4_scale_i(top_mat, scale, top_mat);
}

void gs_matrix_rotaa4f(float x, float y, float z, float angle)
{
	struct matrix4 *top_mat;
	struct axisang aa;

	if (!gs_valid("gs_matrix_rotaa4f"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat) {
		axisang_set(&aa, x, y, z, angle);
		matrix4_rotate_aa_i(top_mat, &aa, top_mat);
	}
}

void gs_matrix_translate3f(float x, float y, float z)
{
	struct matrix4 *top_mat;
	struct vec3 p;

	if (!gs_valid("gs_matrix_translate3f"))
		return;

	top_mat = top_matrix(thread_graphics);
	if (top_mat) {
		vec3_set(&p, x, y, z);
		matrix4_translate3v_i(top_mat, &p, top_mat);
	}
}

void gs_matrix_scale3f(float x, float y, float z)
{
	struct matrix4 *top_mat = top_matrix(thread_graphics);
	struct vec3 p;

	if (top_mat) {
		vec3_set(&p, x, y, z);
		matrix4_scale_i(top_mat, &p, top_mat);
	}
}

static inline void reset_immediate_arrays(graphics_t *graphics)
{
	da_init(graphics->verts);
	da_init(graphics->norms);
	da_init(graphics->colors);
	for (size_t i = 0; i < 16; i++)
		da_init(graphics->texverts[i]);
}

void gs_render_start(bool b_new)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_render_start"))
		return;

	graphics->using_immediate = !b_new;
	reset_immediate_arrays(graphics);

	if (b_new) {
		graphics->vbd = gs_vbdata_create();
	} else {
		graphics->vbd = gs_vertexbuffer_get_data(graphics->immediate_vertbuffer);
		memset(graphics->vbd->colors, 0xFF, sizeof(uint32_t) * IMMEDIATE_COUNT);

		graphics->verts.array = graphics->vbd->points;
		graphics->norms.array = graphics->vbd->normals;
		graphics->colors.array = graphics->vbd->colors;
		graphics->texverts[0].array = graphics->vbd->tvarray[0].array;

		graphics->verts.capacity = IMMEDIATE_COUNT;
		graphics->norms.capacity = IMMEDIATE_COUNT;
		graphics->colors.capacity = IMMEDIATE_COUNT;
		graphics->texverts[0].capacity = IMMEDIATE_COUNT;
	}
}

static inline size_t min_size(const size_t a, const size_t b)
{
	return (a < b) ? a : b;
}

void gs_render_stop(enum gs_draw_mode mode)
{
	graphics_t *graphics = thread_graphics;
	size_t i, num;

	if (!gs_valid("gs_render_stop"))
		return;

	num = graphics->verts.num;
	if (!num) {
		if (!graphics->using_immediate) {
			da_free(graphics->verts);
			da_free(graphics->norms);
			da_free(graphics->colors);
			for (i = 0; i < 16; i++)
				da_free(graphics->texverts[i]);
			gs_vbdata_destroy(graphics->vbd);
		}

		return;
	}

	if (graphics->norms.num && (graphics->norms.num != graphics->verts.num)) {
		blog(LOG_ERROR, "gs_render_stop: normal count does "
				"not match vertex count");
		num = min_size(num, graphics->norms.num);
	}

	if (graphics->colors.num && (graphics->colors.num != graphics->verts.num)) {
		blog(LOG_ERROR, "gs_render_stop: color count does "
				"not match vertex count");
		num = min_size(num, graphics->colors.num);
	}

	if (graphics->texverts[0].num && (graphics->texverts[0].num != graphics->verts.num)) {
		blog(LOG_ERROR, "gs_render_stop: texture vertex count does "
				"not match vertex count");
		num = min_size(num, graphics->texverts[0].num);
	}

	if (graphics->using_immediate) {
		gs_vertexbuffer_flush(graphics->immediate_vertbuffer);

		gs_load_vertexbuffer(graphics->immediate_vertbuffer);
		gs_load_indexbuffer(NULL);
		gs_draw(mode, 0, (uint32_t)num);

		reset_immediate_arrays(graphics);
	} else {
		gs_vertbuffer_t *vb = gs_render_save();

		gs_load_vertexbuffer(vb);
		gs_load_indexbuffer(NULL);
		gs_draw(mode, 0, 0);

		gs_vertexbuffer_destroy(vb);
	}

	graphics->vbd = NULL;
}

gs_vertbuffer_t *gs_render_save(void)
{
	graphics_t *graphics = thread_graphics;
	size_t num_tex, i;

	if (!gs_valid("gs_render_save"))
		return NULL;
	if (graphics->using_immediate)
		return NULL;

	if (!graphics->verts.num) {
		gs_vbdata_destroy(graphics->vbd);
		return NULL;
	}

	for (num_tex = 0; num_tex < 16; num_tex++)
		if (!graphics->texverts[num_tex].num)
			break;

	graphics->vbd->points = graphics->verts.array;
	graphics->vbd->normals = graphics->norms.array;
	graphics->vbd->colors = graphics->colors.array;
	graphics->vbd->num = graphics->verts.num;
	graphics->vbd->num_tex = num_tex;

	if (graphics->vbd->num_tex) {
		graphics->vbd->tvarray = bmalloc(sizeof(struct gs_tvertarray) * num_tex);

		for (i = 0; i < num_tex; i++) {
			graphics->vbd->tvarray[i].width = 2;
			graphics->vbd->tvarray[i].array = graphics->texverts[i].array;
		}
	}

	reset_immediate_arrays(graphics);

	return gs_vertexbuffer_create(graphics->vbd, 0);
}

void gs_vertex2f(float x, float y)
{
	struct vec3 v3;
	vec3_set(&v3, x, y, 0.0f);
	gs_vertex3v(&v3);
}

void gs_vertex3f(float x, float y, float z)
{
	struct vec3 v3;
	vec3_set(&v3, x, y, z);
	gs_vertex3v(&v3);
}

void gs_normal3f(float x, float y, float z)
{
	struct vec3 v3;
	vec3_set(&v3, x, y, z);
	gs_normal3v(&v3);
}

static inline bool validvertsize(graphics_t *graphics, size_t num, const char *name)
{
	if (graphics->using_immediate && num == IMMEDIATE_COUNT) {
		blog(LOG_ERROR, "%s: tried to use over %u "
				"for immediate rendering", name, IMMEDIATE_COUNT);
		return false;
	}

	return true;
}

void gs_color(uint32_t color)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_color"))
		return;
	if (!validvertsize(graphics, graphics->colors.num, "gs_color"))
		return;

	da_push_back(graphics->colors, &color);
}

void gs_texcoord(float x, float y, int unit)
{
	struct vec2 v2;
	vec2_set(&v2, x, y);
	gs_texcoord2v(&v2, unit);
}

void gs_vertex2v(const struct vec2 *v)
{
	struct vec3 v3;
	vec3_set(&v3, v->x, v->y, 0.0f);
	gs_vertex3v(&v3);
}

void gs_vertex3v(const struct vec3 *v)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_vertex3v"))
		return;
	if (!validvertsize(graphics, graphics->verts.num, "gs_vertex"))
		return;

	da_push_back(graphics->verts, v);
}

void gs_normal3v(const struct vec3 *v)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_normal3v"))
		return;
	if (!validvertsize(graphics, graphics->norms.num, "gs_normal"))
		return;

	da_push_back(graphics->norms, v);
}

void gs_color4v(const struct vec4 *v)
{
	/* TODO */
	UNUSED_PARAMETER(v);
}

void gs_texcoord2v(const struct vec2 *v, int unit)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_texcoord2v"))
		return;
	if (!validvertsize(graphics, graphics->texverts[unit].num, "gs_texcoord"))
		return;

	da_push_back(graphics->texverts[unit], v);
}

input_t *gs_get_input(void)
{
	/* TODO */
	return NULL;
}

gs_effect_t *gs_get_effect(void)
{
	if (!gs_valid("gs_get_effect"))
		return NULL;

	return thread_graphics ? thread_graphics->cur_effect : NULL;
}

static inline struct gs_effect *find_cached_effect(const char *filename)
{
	struct gs_effect *effect = thread_graphics->first_effect;

	while (effect) {
		if (strcmp(effect->effect_path, filename) == 0)
			break;
		effect = effect->next;
	}

	return effect;
}

gs_effect_t *gs_effect_create_from_file(const char *file, char **error_string)
{
	char *file_string;
	gs_effect_t *effect = NULL;

	if (!gs_valid_p("gs_effect_create_from_file", file))
		return NULL;

	effect = find_cached_effect(file);
	if (effect)
		return effect;

	file_string = os_quick_read_utf8_file(file);
	if (!file_string) {
		blog(LOG_ERROR, "Could not load effect file '%s'", file);
		return NULL;
	}

	effect = gs_effect_create(file_string, file, error_string);
	bfree(file_string);

	return effect;
}

gs_effect_t *gs_effect_create(const char *effect_string, const char *filename, char **error_string)
{
	if (!gs_valid_p("gs_effect_create", effect_string))
		return NULL;

	struct gs_effect *effect = bzalloc(sizeof(struct gs_effect));
	struct effect_parser parser;
	bool success;

	effect->graphics = thread_graphics;
	effect->effect_path = bstrdup(filename);

	ep_init(&parser);
	success = ep_parse(&parser, effect, effect_string, filename);
	if (!success) {
		if (error_string)
			*error_string = error_data_buildstring(&parser.cfp.error_list);
		gs_effect_destroy(effect);
		effect = NULL;
	}

	if (effect) {
		pthread_mutex_lock(&thread_graphics->effect_mutex);

		if (effect->effect_path) {
			effect->cached = true;
			effect->next = thread_graphics->first_effect;
			thread_graphics->first_effect = effect;
		}

		pthread_mutex_unlock(&thread_graphics->effect_mutex);
	}

	ep_free(&parser);
	return effect;
}

gs_shader_t *gs_vertexshader_create_from_file(const char *file, char **error_string)
{
	if (!gs_valid_p("gs_vertexshader_create_from_file", file))
		return NULL;

	char *file_string;
	gs_shader_t *shader = NULL;

	file_string = os_quick_read_utf8_file(file);
	if (!file_string) {
		blog(LOG_ERROR, "Could not load vertex shader file '%s'", file);
		return NULL;
	}

	shader = gs_vertexshader_create(file_string, file, error_string);
	bfree(file_string);

	return shader;
}

gs_shader_t *gs_pixelshader_create_from_file(const char *file, char **error_string)
{
	char *file_string;
	gs_shader_t *shader = NULL;

	if (!gs_valid_p("gs_pixelshader_create_from_file", file))
		return NULL;

	file_string = os_quick_read_utf8_file(file);
	if (!file_string) {
		blog(LOG_ERROR, "Could not load pixel shader file '%s'", file);
		return NULL;
	}

	shader = gs_pixelshader_create(file_string, file, error_string);
	bfree(file_string);

	return shader;
}

gs_texture_t *gs_texture_create_from_file(const char *file)
{
	enum gs_color_format format;
	uint32_t cx;
	uint32_t cy;
	uint8_t *data = gs_create_texture_file_data(file, &format, &cx, &cy);
	gs_texture_t *tex = NULL;

	if (data) {
		tex = gs_texture_create(cx, cy, format, 1, (const uint8_t **)&data, 0);
		bfree(data);
	}

	return tex;
}

static inline void assign_sprite_rect(float *start, float *end, float size, bool flip)
{
	if (!flip) {
		*start = 0.0f;
		*end = size;
	} else {
		*start = size;
		*end = 0.0f;
	}
}

static inline void assign_sprite_uv(float *start, float *end, bool flip)
{
	if (!flip) {
		*start = 0.0f;
		*end = 1.0f;
	} else {
		*start = 1.0f;
		*end = 0.0f;
	}
}

static void build_sprite(struct gs_vb_data *data, float fcx, float fcy, float start_u, float end_u, float start_v,
			 float end_v)
{
	struct vec2 *tvarray = data->tvarray[0].array;

	vec3_zero(data->points);
	vec3_set(data->points + 1, fcx, 0.0f, 0.0f);
	vec3_set(data->points + 2, 0.0f, fcy, 0.0f);
	vec3_set(data->points + 3, fcx, fcy, 0.0f);
	vec2_set(tvarray, start_u, start_v);
	vec2_set(tvarray + 1, end_u, start_v);
	vec2_set(tvarray + 2, start_u, end_v);
	vec2_set(tvarray + 3, end_u, end_v);
}

static inline void build_sprite_norm(struct gs_vb_data *data, float fcx, float fcy, uint32_t flip)
{
	float start_u, end_u;
	float start_v, end_v;

	assign_sprite_uv(&start_u, &end_u, (flip & GS_FLIP_U) != 0);
	assign_sprite_uv(&start_v, &end_v, (flip & GS_FLIP_V) != 0);
	build_sprite(data, fcx, fcy, start_u, end_u, start_v, end_v);
}

static inline void build_subsprite_norm(struct gs_vb_data *data, float fsub_x, float fsub_y, float fsub_cx,
					float fsub_cy, float fcx, float fcy, uint32_t flip)
{
	float start_u, end_u;
	float start_v, end_v;

	if ((flip & GS_FLIP_U) == 0) {
		start_u = fsub_x / fcx;
		end_u = (fsub_x + fsub_cx) / fcx;
	} else {
		start_u = (fsub_x + fsub_cx) / fcx;
		end_u = fsub_x / fcx;
	}

	if ((flip & GS_FLIP_V) == 0) {
		start_v = fsub_y / fcy;
		end_v = (fsub_y + fsub_cy) / fcy;
	} else {
		start_v = (fsub_y + fsub_cy) / fcy;
		end_v = fsub_y / fcy;
	}

	build_sprite(data, fsub_cx, fsub_cy, start_u, end_u, start_v, end_v);
}

static inline void build_sprite_rect(struct gs_vb_data *data, gs_texture_t *tex, float fcx, float fcy, uint32_t flip)
{
	float start_u, end_u;
	float start_v, end_v;
	float width = (float)gs_texture_get_width(tex);
	float height = (float)gs_texture_get_height(tex);

	assign_sprite_rect(&start_u, &end_u, width, (flip & GS_FLIP_U) != 0);
	assign_sprite_rect(&start_v, &end_v, height, (flip & GS_FLIP_V) != 0);
	build_sprite(data, fcx, fcy, start_u, end_u, start_v, end_v);
}

void gs_draw_quadf(gs_texture_t *tex, uint32_t flip, float width, float height)
{
	graphics_t *graphics = thread_graphics;
	float fcx, fcy;
	struct gs_vb_data *data;

	if (tex) {
		if (gs_get_texture_type(tex) != GS_TEXTURE_2D) {
			blog(LOG_ERROR, "A sprite must be a 2D texture");
			return;
		}
	} else {
		if (width == 0.0f || height == 0.0f) {
			blog(LOG_ERROR, "A sprite cannot be drawn without "
					"a width/height");
			return;
		}
	}

	fcx = width != 0.0f ? width : (float)gs_texture_get_width(tex);
	fcy = height != 0.0f ? height : (float)gs_texture_get_height(tex);

	gs_matrix_push();
	gs_matrix_scale3f(fcx, fcy, 1.0f);
	gs_load_indexbuffer(NULL);
	if (tex && gs_texture_is_rect(tex)) {
		data = gs_vertexbuffer_get_data(graphics->subregion_buffer);
		build_sprite_rect(data, tex, 1.0f, 1.0f, flip);
		gs_vertexbuffer_flush(graphics->subregion_buffer);
		gs_load_vertexbuffer(graphics->subregion_buffer);
		gs_draw(GS_TRISTRIP, 0, 0);
	} else {
		gs_load_vertexbuffer(flip ? graphics->flipped_sprite_buffer : graphics->sprite_buffer);
		gs_draw(GS_TRISTRIP, 0, 0);
	}
	gs_matrix_pop();
}

void gs_draw_sprite(gs_texture_t *tex, uint32_t flip, uint32_t width, uint32_t height)
{
	gs_draw_quadf(tex, flip, (float)width, (float)height);
}

void gs_draw_sprite_subregion(gs_texture_t *tex, uint32_t flip, uint32_t sub_x, uint32_t sub_y, uint32_t sub_cx,
			      uint32_t sub_cy)
{
	graphics_t *graphics = thread_graphics;
	uint32_t cx, cy;
	float fcx, fcy;
	struct gs_vb_data *data;

	if (tex) {
		if (gs_get_texture_type(tex) != GS_TEXTURE_2D) {
			blog(LOG_ERROR, "A sprite must be a 2D texture");
			return;
		}
	}

	cx = gs_texture_get_width(tex);
	cy = gs_texture_get_height(tex);

	if (sub_x == 0 && sub_y == 0 && sub_cx == cx && sub_cy == cy) {
		gs_draw_sprite(tex, flip, 0, 0);
		return;
	}

	fcx = (float)cx;
	fcy = (float)cy;

	data = gs_vertexbuffer_get_data(graphics->subregion_buffer);
	build_subsprite_norm(data, (float)sub_x, (float)sub_y, (float)sub_cx, (float)sub_cy, fcx, fcy, flip);
	gs_vertexbuffer_flush(graphics->subregion_buffer);

	gs_load_vertexbuffer(graphics->subregion_buffer);
	gs_load_indexbuffer(NULL);
	gs_draw(GS_TRISTRIP, 0, 0);
}

void gs_draw_cube_backdrop(gs_texture_t *cubetex, const struct quat *rot, float left, float right, float top,
			   float bottom, float znear)
{
	/* TODO */
	UNUSED_PARAMETER(cubetex);
	UNUSED_PARAMETER(rot);
	UNUSED_PARAMETER(left);
	UNUSED_PARAMETER(right);
	UNUSED_PARAMETER(top);
	UNUSED_PARAMETER(bottom);
	UNUSED_PARAMETER(znear);
}

void gs_reset_viewport(void)
{
	uint32_t cx, cy;

	if (!gs_valid("gs_reset_viewport"))
		return;

	gs_get_size(&cx, &cy);
	gs_set_viewport(0, 0, (int)cx, (int)cy);
}

void gs_set_2d_mode(void)
{
	uint32_t cx, cy;

	if (!gs_valid("gs_set_2d_mode"))
		return;

	gs_get_size(&cx, &cy);
	gs_ortho(0.0f, (float)cx, 0.0f, (float)cy, -1.0, -1024.0f);
}

void gs_set_3d_mode(double fovy, double znear, double zvar)
{
	/* TODO */
	UNUSED_PARAMETER(fovy);
	UNUSED_PARAMETER(znear);
	UNUSED_PARAMETER(zvar);
}

void gs_viewport_push(void)
{
	if (!gs_valid("gs_viewport_push"))
		return;

	struct gs_rect *rect = da_push_back_new(thread_graphics->viewport_stack);
	gs_get_viewport(rect);
}

void gs_viewport_pop(void)
{
	struct gs_rect *rect;
	if (!gs_valid("gs_viewport_pop"))
		return;
	if (!thread_graphics->viewport_stack.num)
		return;

	rect = da_end(thread_graphics->viewport_stack);
	gs_set_viewport(rect->x, rect->y, rect->cx, rect->cy);
	da_pop_back(thread_graphics->viewport_stack);
}

void gs_texture_set_image(gs_texture_t *tex, const uint8_t *data, uint32_t linesize, bool flip)
{
	uint8_t *ptr;
	uint32_t linesize_out;
	size_t row_copy;
	size_t height;

	if (!gs_valid_p2("gs_texture_set_image", tex, data))
		return;

	if (!gs_texture_map(tex, &ptr, &linesize_out))
		return;

	row_copy = (linesize < linesize_out) ? linesize : linesize_out;

	height = gs_texture_get_height(tex);

	if (flip) {
		/* copy rows from the last source row upward to flip vertically */
		uint8_t *const end = ptr + height * linesize_out;
		data += (height - 1) * linesize;
		while (ptr < end) {
			memcpy(ptr, data, row_copy);
			ptr += linesize_out;
			data -= linesize;
		}

	} else if (linesize == linesize_out) {
		memcpy(ptr, data, row_copy * height);

	} else {
		/* line sizes differ: copy row by row, honoring each stride */
		uint8_t *const end = ptr + height * linesize_out;
		while (ptr < end) {
			memcpy(ptr, data, row_copy);
			ptr += linesize_out;
			data += linesize;
		}
	}

	gs_texture_unmap(tex);
}

void gs_cubetexture_set_image(gs_texture_t *cubetex, uint32_t side, const void *data, uint32_t linesize, bool invert)
{
	/* TODO */
	UNUSED_PARAMETER(cubetex);
	UNUSED_PARAMETER(side);
	UNUSED_PARAMETER(data);
	UNUSED_PARAMETER(linesize);
	UNUSED_PARAMETER(invert);
}

void gs_perspective(float angle, float aspect, float near, float far)
{
	graphics_t *graphics = thread_graphics;
	float xmin, xmax, ymin, ymax;

	if (!gs_valid("gs_perspective"))
		return;

	ymax = near * tanf(RAD(angle) * 0.5f);
	ymin = -ymax;
	xmin = ymin * aspect;
	xmax = ymax * aspect;

	graphics->exports.device_frustum(graphics->device, xmin, xmax, ymin, ymax, near, far);
}

void gs_blend_state_push(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_blend_state_push"))
		return;

	da_push_back(graphics->blend_state_stack, &graphics->cur_blend_state);
}

void gs_blend_state_pop(void)
{
	graphics_t *graphics = thread_graphics;
	struct blend_state *state;

	if (!gs_valid("gs_blend_state_pop"))
		return;

	state = da_end(graphics->blend_state_stack);
	if (!state)
		return;

	gs_enable_blending(state->enabled);
	gs_blend_function_separate(state->src_c, state->dest_c, state->src_a, state->dest_a);
	gs_blend_op(state->op);

	da_pop_back(graphics->blend_state_stack);
}

void gs_reset_blend_state(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_reset_blend_state"))
		return;

	if (!graphics->cur_blend_state.enabled)
		gs_enable_blending(true);

	if (graphics->cur_blend_state.src_c != GS_BLEND_SRCALPHA ||
	    graphics->cur_blend_state.dest_c != GS_BLEND_INVSRCALPHA ||
	    graphics->cur_blend_state.src_a != GS_BLEND_ONE ||
	    graphics->cur_blend_state.dest_a != GS_BLEND_INVSRCALPHA) {
		gs_blend_function_separate(GS_BLEND_SRCALPHA, GS_BLEND_INVSRCALPHA, GS_BLEND_ONE,
					   GS_BLEND_INVSRCALPHA);
		gs_blend_op(GS_BLEND_OP_ADD);
	}
}

/* ------------------------------------------------------------------------- */

const char *gs_preprocessor_name(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_preprocessor_name"))
		return NULL;

	return graphics->exports.device_preprocessor_name();
}

gs_swapchain_t *gs_swapchain_create(const struct gs_init_data *data)
{
	struct gs_init_data new_data = *data;
	graphics_t *graphics = thread_graphics;

	if (!gs_valid_p("gs_swapchain_create", data))
		return NULL;

	if (new_data.num_backbuffers == 0)
		new_data.num_backbuffers = 1;

	return graphics->exports.device_swapchain_create(graphics->device, &new_data);
}

void gs_resize(uint32_t x, uint32_t y)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_resize"))
		return;

	graphics->exports.device_resize(graphics->device, x, y);
}

void gs_update_color_space(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_update_color_space"))
		return;

	graphics->exports.device_update_color_space(graphics->device);
}

void gs_get_size(uint32_t *x, uint32_t *y)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_size"))
		return;

	graphics->exports.device_get_size(graphics->device, x, y);
}

uint32_t gs_get_width(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_width"))
		return 0;

	return graphics->exports.device_get_width(graphics->device);
}

uint32_t gs_get_height(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_height"))
		return 0;

	return graphics->exports.device_get_height(graphics->device);
}

static inline bool is_pow2(uint32_t size)
{
	return size >= 2 && (size & (size - 1)) == 0;
}

gs_texture_t *gs_texture_create(uint32_t width, uint32_t height, enum gs_color_format color_format, uint32_t levels,
				const uint8_t **data, uint32_t flags)
{
	graphics_t *graphics = thread_graphics;
	bool pow2tex = is_pow2(width) && is_pow2(height);
	bool uses_mipmaps = (flags & GS_BUILD_MIPMAPS || levels != 1);

	if (!gs_valid("gs_texture_create"))
		return NULL;

	if (uses_mipmaps && !pow2tex) {
		blog(LOG_WARNING, "Cannot use mipmaps with a "
				  "non-power-of-two texture. Disabling "
				  "mipmaps for this texture.");
		uses_mipmaps = false;
		flags &= ~GS_BUILD_MIPMAPS;
		levels = 1;
	}

	if (uses_mipmaps && flags & GS_RENDER_TARGET) {
		blog(LOG_WARNING, "Cannot use mipmaps with render targets. "
				  "Disabling mipmaps for this texture.");
		flags &= ~GS_BUILD_MIPMAPS;
		levels = 1;
	}

	return graphics->exports.device_texture_create(graphics->device, width, height, color_format, levels, data,
						       flags);
}

#if defined(__linux__) || defined(__FreeBSD__) || defined(__DragonFly__)
gs_texture_t *gs_texture_create_from_dmabuf(unsigned int width, unsigned int height, uint32_t drm_format,
					    enum gs_color_format color_format, uint32_t n_planes, const int *fds,
					    const uint32_t *strides, const uint32_t *offsets,
					    const uint64_t *modifiers)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_texture_create_from_dmabuf(graphics->device, width, height, drm_format,
								   color_format, n_planes, fds, strides, offsets,
								   modifiers);
}

bool gs_query_dmabuf_capabilities(enum gs_dmabuf_flags *dmabuf_flags, uint32_t **drm_formats, size_t *n_formats)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_query_dmabuf_capabilities(graphics->device, dmabuf_flags, drm_formats,
								  n_formats);
}

bool gs_query_dmabuf_modifiers_for_format(uint32_t drm_format, uint64_t **modifiers, size_t *n_modifiers)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_query_dmabuf_modifiers_for_format(graphics->device, drm_format, modifiers,
									  n_modifiers);
}

gs_texture_t *gs_texture_create_from_pixmap(uint32_t width, uint32_t height, enum gs_color_format color_format,
					    uint32_t target, void *pixmap)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_texture_create_from_pixmap(graphics->device, width, height, color_format,
								   target, pixmap);
}

bool gs_query_sync_capabilities(void)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_query_sync_capabilities(graphics->device);
}

gs_sync_t *gs_sync_create(void)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_sync_create(graphics->device);
}

gs_sync_t *gs_sync_create_from_syncobj_timeline_point(int syncobj_fd, uint64_t timeline_point)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_sync_create_from_syncobj_timeline_point(graphics->device, syncobj_fd,
										timeline_point);
}

void gs_sync_destroy(gs_sync_t *sync)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_sync_destroy(graphics->device, sync);
}

bool gs_sync_export_syncobj_timeline_point(gs_sync_t *sync, int syncobj_fd, uint64_t timeline_point)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_sync_export_syncobj_timeline_point(graphics->device, sync, syncobj_fd,
									   timeline_point);
}

bool gs_sync_signal_syncobj_timeline_point(int syncobj_fd, uint64_t timeline_point)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_sync_signal_syncobj_timeline_point(graphics->device, syncobj_fd,
									   timeline_point);
}

bool gs_sync_wait(gs_sync_t *sync)
{
	graphics_t *graphics = thread_graphics;

	return graphics->exports.device_sync_wait(graphics->device, sync);
}
#endif

gs_texture_t *gs_cubetexture_create(uint32_t size, enum gs_color_format color_format, uint32_t levels,
				    const uint8_t **data, uint32_t flags)
{
	graphics_t *graphics = thread_graphics;
	bool pow2tex = is_pow2(size);
	bool uses_mipmaps = (flags & GS_BUILD_MIPMAPS || levels != 1);

	if (!gs_valid("gs_cubetexture_create"))
		return NULL;

	if (uses_mipmaps && !pow2tex) {
		blog(LOG_WARNING, "Cannot use mipmaps with a "
				  "non-power-of-two texture. Disabling "
				  "mipmaps for this texture.");
		uses_mipmaps = false;
		flags &= ~GS_BUILD_MIPMAPS;
		levels = 1;
	}

	if (uses_mipmaps && flags & GS_RENDER_TARGET) {
		blog(LOG_WARNING, "Cannot use mipmaps with render targets. "
				  "Disabling mipmaps for this texture.");
		flags &= ~GS_BUILD_MIPMAPS;
		levels = 1;
		data = NULL;
	}

	return graphics->exports.device_cubetexture_create(graphics->device, size, color_format, levels, data, flags);
}

gs_texture_t *gs_voltexture_create(uint32_t width, uint32_t height, uint32_t depth, enum gs_color_format color_format,
				   uint32_t levels, const uint8_t **data, uint32_t flags)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_voltexture_create"))
		return NULL;

	return graphics->exports.device_voltexture_create(graphics->device, width, height, depth, color_format,
							  levels, data, flags);
}

gs_zstencil_t *gs_zstencil_create(uint32_t width, uint32_t height, enum gs_zstencil_format format)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_zstencil_create"))
		return NULL;

	return graphics->exports.device_zstencil_create(graphics->device, width, height, format);
}

gs_stagesurf_t *gs_stagesurface_create(uint32_t width, uint32_t height, enum gs_color_format color_format)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_stagesurface_create"))
		return NULL;

	return graphics->exports.device_stagesurface_create(graphics->device, width, height, color_format);
}

gs_samplerstate_t *gs_samplerstate_create(const struct gs_sampler_info *info)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid_p("gs_samplerstate_create", info))
		return NULL;

	return graphics->exports.device_samplerstate_create(graphics->device, info);
}

gs_shader_t *gs_vertexshader_create(const char *shader, const char *file, char **error_string)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid_p("gs_vertexshader_create", shader))
		return NULL;

	return graphics->exports.device_vertexshader_create(graphics->device, shader, file, error_string);
}

gs_shader_t *gs_pixelshader_create(const char *shader, const char *file, char **error_string)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid_p("gs_pixelshader_create", shader))
		return NULL;

	return graphics->exports.device_pixelshader_create(graphics->device, shader, file, error_string);
}

gs_vertbuffer_t *gs_vertexbuffer_create(struct gs_vb_data *data, uint32_t flags)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_vertexbuffer_create"))
		return NULL;

	if (data && data->num && (flags & GS_DUP_BUFFER) != 0) {
		struct gs_vb_data *new_data = gs_vbdata_create();

		new_data->num = data->num;

#define DUP_VAL(val)                                                                        \
	do {                                                                                \
		if (data->val)                                                              \
			new_data->val = bmemdup(data->val, sizeof(*data->val) * data->num); \
	} while (false)

		DUP_VAL(points);
		DUP_VAL(normals);
		DUP_VAL(tangents);
		DUP_VAL(colors);
#undef DUP_VAL

		if (data->tvarray && data->num_tex) {
			new_data->num_tex = data->num_tex;
			new_data->tvarray = bzalloc(sizeof(struct gs_tvertarray) * data->num_tex);

			for (size_t i = 0; i < data->num_tex; i++) {
				struct gs_tvertarray *tv = &data->tvarray[i];
				struct gs_tvertarray *new_tv = &new_data->tvarray[i];
				size_t size = tv->width * sizeof(float);

				new_tv->width = tv->width;
				new_tv->array = bmemdup(tv->array, size * data->num);
			}
		}

		data = new_data;
	}

	return graphics->exports.device_vertexbuffer_create(graphics->device, data, flags);
}

gs_indexbuffer_t *gs_indexbuffer_create(enum gs_index_type type, void *indices, size_t num, uint32_t flags)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_indexbuffer_create"))
		return NULL;

	if (indices && num && (flags & GS_DUP_BUFFER) != 0) {
		size_t size = type == GS_UNSIGNED_SHORT ? 2 : 4;
		indices = bmemdup(indices, size * num);
	}

	return graphics->exports.device_indexbuffer_create(graphics->device, type, indices, num, flags);
}

gs_timer_t *gs_timer_create()
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_timer_create"))
		return NULL;

	return graphics->exports.device_timer_create(graphics->device);
}

gs_timer_range_t *gs_timer_range_create()
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_timer_range_create"))
		return NULL;

	return graphics->exports.device_timer_range_create(graphics->device);
}

enum gs_texture_type gs_get_texture_type(const gs_texture_t *texture)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid_p("gs_get_texture_type", texture))
		return GS_TEXTURE_2D;

	return graphics->exports.device_get_texture_type(texture);
}

void gs_load_vertexbuffer(gs_vertbuffer_t *vertbuffer)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_load_vertexbuffer"))
		return;

	graphics->exports.device_load_vertexbuffer(graphics->device, vertbuffer);
}

void gs_load_indexbuffer(gs_indexbuffer_t *indexbuffer)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_load_indexbuffer"))
		return;

	graphics->exports.device_load_indexbuffer(graphics->device, indexbuffer);
}

void gs_load_texture(gs_texture_t *tex, int unit)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_load_texture"))
		return;

	graphics->exports.device_load_texture(graphics->device, tex, unit);
}

void gs_load_samplerstate(gs_samplerstate_t *samplerstate, int unit)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_load_samplerstate"))
		return;

	graphics->exports.device_load_samplerstate(graphics->device, samplerstate, unit);
}

void gs_load_vertexshader(gs_shader_t *vertshader)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_load_vertexshader"))
		return;

	graphics->exports.device_load_vertexshader(graphics->device, vertshader);
}

void gs_load_pixelshader(gs_shader_t *pixelshader)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_load_pixelshader"))
		return;

	graphics->exports.device_load_pixelshader(graphics->device, pixelshader);
}

void gs_load_default_samplerstate(bool b_3d, int unit)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_load_default_samplerstate"))
		return;

	graphics->exports.device_load_default_samplerstate(graphics->device, b_3d, unit);
}

gs_shader_t *gs_get_vertex_shader(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_vertex_shader"))
		return NULL;

	return graphics->exports.device_get_vertex_shader(graphics->device);
}

gs_shader_t *gs_get_pixel_shader(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_pixel_shader"))
		return NULL;

	return graphics->exports.device_get_pixel_shader(graphics->device);
}

enum gs_color_space gs_get_color_space(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_color_space"))
		return GS_CS_SRGB;

	return graphics->exports.device_get_color_space(graphics->device);
}

gs_texture_t *gs_get_render_target(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_render_target"))
		return NULL;

	return graphics->exports.device_get_render_target(graphics->device);
}

gs_zstencil_t *gs_get_zstencil_target(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_zstencil_target"))
		return NULL;

	return graphics->exports.device_get_zstencil_target(graphics->device);
}

void gs_set_render_target(gs_texture_t *tex, gs_zstencil_t *zstencil)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_set_render_target"))
		return;

	graphics->exports.device_set_render_target(graphics->device, tex, zstencil);
}

void gs_set_render_target_with_color_space(gs_texture_t *tex, gs_zstencil_t *zstencil, enum gs_color_space space)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_set_render_target_with_color_space"))
		return;

	graphics->exports.device_set_render_target_with_color_space(graphics->device, tex, zstencil, space);
}

void gs_set_cube_render_target(gs_texture_t *cubetex, int side, gs_zstencil_t *zstencil)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_set_cube_render_target"))
		return;

	graphics->exports.device_set_cube_render_target(graphics->device, cubetex, side, zstencil);
}

void gs_enable_framebuffer_srgb(bool enable)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_enable_framebuffer_srgb"))
		return;

	graphics->exports.device_enable_framebuffer_srgb(graphics->device, enable);
}

bool gs_framebuffer_srgb_enabled(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_framebuffer_srgb_enabled"))
		return false;

	return graphics->exports.device_framebuffer_srgb_enabled(graphics->device);
}

bool gs_get_linear_srgb(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_get_linear_srgb"))
		return false;

	return graphics->linear_srgb;
}

bool gs_set_linear_srgb(bool linear_srgb)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_set_linear_srgb"))
		return false;

	const bool previous = graphics->linear_srgb;
	graphics->linear_srgb = linear_srgb;
	return previous;
}

void gs_copy_texture(gs_texture_t *dst, gs_texture_t *src)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid_p2("gs_copy_texture", dst, src))
		return;

	graphics->exports.device_copy_texture(graphics->device, dst, src);
}

void gs_copy_texture_region(gs_texture_t *dst, uint32_t dst_x, uint32_t dst_y, gs_texture_t *src, uint32_t src_x,
			    uint32_t src_y, uint32_t src_w, uint32_t src_h)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid_p("gs_copy_texture_region", dst))
		return;

	graphics->exports.device_copy_texture_region(graphics->device, dst, dst_x, dst_y, src, src_x, src_y, src_w,
						     src_h);
}

void gs_stage_texture(gs_stagesurf_t *dst, gs_texture_t *src)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_stage_texture"))
		return;

	graphics->exports.device_stage_texture(graphics->device, dst, src);
}

void gs_begin_frame(void)
{
	graphics_t *graphics = thread_graphics;
	if (!gs_valid("gs_begin_frame"))
		return;
graphics->exports.device_begin_frame(graphics->device); } void gs_begin_scene(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_begin_scene")) return; graphics->exports.device_begin_scene(graphics->device); } void gs_draw(enum gs_draw_mode draw_mode, uint32_t start_vert, uint32_t num_verts) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_draw")) return; graphics->exports.device_draw(graphics->device, draw_mode, start_vert, num_verts); } void gs_end_scene(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_end_scene")) return; graphics->exports.device_end_scene(graphics->device); } void gs_load_swapchain(gs_swapchain_t *swapchain) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_load_swapchain")) return; graphics->exports.device_load_swapchain(graphics->device, swapchain); } void gs_clear(uint32_t clear_flags, const struct vec4 *color, float depth, uint8_t stencil) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_clear")) return; graphics->exports.device_clear(graphics->device, clear_flags, color, depth, stencil); } bool gs_is_present_ready(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_is_present_ready")) return false; return graphics->exports.device_is_present_ready(graphics->device); } void gs_present(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_present")) return; graphics->exports.device_present(graphics->device); } void gs_flush(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_flush")) return; graphics->exports.device_flush(graphics->device); } void gs_set_cull_mode(enum gs_cull_mode mode) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_set_cull_mode")) return; graphics->exports.device_set_cull_mode(graphics->device, mode); } enum gs_cull_mode gs_get_cull_mode(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_get_cull_mode")) return GS_NEITHER; return graphics->exports.device_get_cull_mode(graphics->device); } void 
gs_enable_blending(bool enable) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_enable_blending")) return; graphics->cur_blend_state.enabled = enable; graphics->exports.device_enable_blending(graphics->device, enable); } void gs_enable_depth_test(bool enable) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_enable_depth_test")) return; graphics->exports.device_enable_depth_test(graphics->device, enable); } void gs_enable_stencil_test(bool enable) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_enable_stencil_test")) return; graphics->exports.device_enable_stencil_test(graphics->device, enable); } void gs_enable_stencil_write(bool enable) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_enable_stencil_write")) return; graphics->exports.device_enable_stencil_write(graphics->device, enable); } void gs_enable_color(bool red, bool green, bool blue, bool alpha) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_enable_color")) return; graphics->exports.device_enable_color(graphics->device, red, green, blue, alpha); } void gs_blend_function(enum gs_blend_type src, enum gs_blend_type dest) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_blend_function")) return; graphics->cur_blend_state.src_c = src; graphics->cur_blend_state.dest_c = dest; graphics->cur_blend_state.src_a = src; graphics->cur_blend_state.dest_a = dest; graphics->exports.device_blend_function(graphics->device, src, dest); } void gs_blend_function_separate(enum gs_blend_type src_c, enum gs_blend_type dest_c, enum gs_blend_type src_a, enum gs_blend_type dest_a) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_blend_function_separate")) return; graphics->cur_blend_state.src_c = src_c; graphics->cur_blend_state.dest_c = dest_c; graphics->cur_blend_state.src_a = src_a; graphics->cur_blend_state.dest_a = dest_a; graphics->exports.device_blend_function_separate(graphics->device, src_c, dest_c, src_a, dest_a); } void gs_blend_op(enum 
gs_blend_op_type op) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_blend_op")) return; graphics->cur_blend_state.op = op; graphics->exports.device_blend_op(graphics->device, graphics->cur_blend_state.op); } void gs_depth_function(enum gs_depth_test test) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_depth_function")) return; graphics->exports.device_depth_function(graphics->device, test); } void gs_stencil_function(enum gs_stencil_side side, enum gs_depth_test test) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_stencil_function")) return; graphics->exports.device_stencil_function(graphics->device, side, test); } void gs_stencil_op(enum gs_stencil_side side, enum gs_stencil_op_type fail, enum gs_stencil_op_type zfail, enum gs_stencil_op_type zpass) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_stencil_op")) return; graphics->exports.device_stencil_op(graphics->device, side, fail, zfail, zpass); } void gs_set_viewport(int x, int y, int width, int height) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_set_viewport")) return; graphics->exports.device_set_viewport(graphics->device, x, y, width, height); } void gs_get_viewport(struct gs_rect *rect) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_get_viewport", rect)) return; graphics->exports.device_get_viewport(graphics->device, rect); } void gs_set_scissor_rect(const struct gs_rect *rect) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_set_scissor_rect")) return; graphics->exports.device_set_scissor_rect(graphics->device, rect); } void gs_ortho(float left, float right, float top, float bottom, float znear, float zfar) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_ortho")) return; graphics->exports.device_ortho(graphics->device, left, right, top, bottom, znear, zfar); } void gs_frustum(float left, float right, float top, float bottom, float znear, float zfar) { graphics_t *graphics = thread_graphics; if 
(!gs_valid("gs_frustum")) return; graphics->exports.device_frustum(graphics->device, left, right, top, bottom, znear, zfar); } void gs_projection_push(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_projection_push")) return; graphics->exports.device_projection_push(graphics->device); } void gs_projection_pop(void) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_projection_pop")) return; graphics->exports.device_projection_pop(graphics->device); } void gs_swapchain_destroy(gs_swapchain_t *swapchain) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_swapchain_destroy")) return; if (!swapchain) return; graphics->exports.gs_swapchain_destroy(swapchain); } void gs_shader_destroy(gs_shader_t *shader) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_shader_destroy")) return; if (!shader) return; graphics->exports.gs_shader_destroy(shader); } int gs_shader_get_num_params(const gs_shader_t *shader) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_get_num_params", shader)) return 0; return graphics->exports.gs_shader_get_num_params(shader); } gs_sparam_t *gs_shader_get_param_by_idx(gs_shader_t *shader, uint32_t param) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_get_param_by_idx", shader)) return NULL; return graphics->exports.gs_shader_get_param_by_idx(shader, param); } gs_sparam_t *gs_shader_get_param_by_name(gs_shader_t *shader, const char *name) { graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_get_param_by_name", shader, name)) return NULL; return graphics->exports.gs_shader_get_param_by_name(shader, name); } gs_sparam_t *gs_shader_get_viewproj_matrix(const gs_shader_t *shader) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_get_viewproj_matrix", shader)) return NULL; return graphics->exports.gs_shader_get_viewproj_matrix(shader); } gs_sparam_t *gs_shader_get_world_matrix(const gs_shader_t *shader) { graphics_t *graphics = 
thread_graphics; if (!gs_valid_p("gs_shader_get_world_matrix", shader)) return NULL; return graphics->exports.gs_shader_get_world_matrix(shader); } void gs_shader_get_param_info(const gs_sparam_t *param, struct gs_shader_param_info *info) { graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_get_param_info", param, info)) return; graphics->exports.gs_shader_get_param_info(param, info); } void gs_shader_set_bool(gs_sparam_t *param, bool val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_set_bool", param)) return; graphics->exports.gs_shader_set_bool(param, val); } void gs_shader_set_float(gs_sparam_t *param, float val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_set_float", param)) return; graphics->exports.gs_shader_set_float(param, val); } void gs_shader_set_int(gs_sparam_t *param, int val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_set_int", param)) return; graphics->exports.gs_shader_set_int(param, val); } void gs_shader_set_matrix3(gs_sparam_t *param, const struct matrix3 *val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_set_matrix3", param, val)) return; graphics->exports.gs_shader_set_matrix3(param, val); } void gs_shader_set_matrix4(gs_sparam_t *param, const struct matrix4 *val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_set_matrix4", param, val)) return; graphics->exports.gs_shader_set_matrix4(param, val); } void gs_shader_set_vec2(gs_sparam_t *param, const struct vec2 *val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_set_vec2", param, val)) return; graphics->exports.gs_shader_set_vec2(param, val); } void gs_shader_set_vec3(gs_sparam_t *param, const struct vec3 *val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_set_vec3", param, val)) return; graphics->exports.gs_shader_set_vec3(param, val); } void gs_shader_set_vec4(gs_sparam_t *param, const struct vec4 *val) { 
graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_set_vec4", param, val)) return; graphics->exports.gs_shader_set_vec4(param, val); } void gs_shader_set_texture(gs_sparam_t *param, gs_texture_t *val) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_set_texture", param)) return; graphics->exports.gs_shader_set_texture(param, val); } void gs_shader_set_val(gs_sparam_t *param, const void *val, size_t size) { graphics_t *graphics = thread_graphics; if (!gs_valid_p2("gs_shader_set_val", param, val)) return; graphics->exports.gs_shader_set_val(param, val, size); } void gs_shader_set_default(gs_sparam_t *param) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_set_default", param)) return; graphics->exports.gs_shader_set_default(param); } void gs_shader_set_next_sampler(gs_sparam_t *param, gs_samplerstate_t *sampler) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_shader_set_next_sampler", param)) return; graphics->exports.gs_shader_set_next_sampler(param, sampler); } void gs_texture_destroy(gs_texture_t *tex) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_texture_destroy")) return; if (!tex) return; graphics->exports.gs_texture_destroy(tex); } uint32_t gs_texture_get_width(const gs_texture_t *tex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_get_width", tex)) return 0; return graphics->exports.gs_texture_get_width(tex); } uint32_t gs_texture_get_height(const gs_texture_t *tex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_get_height", tex)) return 0; return graphics->exports.gs_texture_get_height(tex); } enum gs_color_format gs_texture_get_color_format(const gs_texture_t *tex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_get_color_format", tex)) return GS_UNKNOWN; return graphics->exports.gs_texture_get_color_format(tex); } bool gs_texture_map(gs_texture_t *tex, uint8_t **ptr, uint32_t *linesize) { graphics_t 
*graphics = thread_graphics; if (!gs_valid_p3("gs_texture_map", tex, ptr, linesize)) return false; return graphics->exports.gs_texture_map(tex, ptr, linesize); } void gs_texture_unmap(gs_texture_t *tex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_unmap", tex)) return; graphics->exports.gs_texture_unmap(tex); } bool gs_texture_is_rect(const gs_texture_t *tex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_is_rect", tex)) return false; if (graphics->exports.gs_texture_is_rect) return graphics->exports.gs_texture_is_rect(tex); else return false; } void *gs_texture_get_obj(gs_texture_t *tex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_get_obj", tex)) return NULL; return graphics->exports.gs_texture_get_obj(tex); } void gs_cubetexture_destroy(gs_texture_t *cubetex) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_cubetexture_destroy")) return; if (!cubetex) return; graphics->exports.gs_cubetexture_destroy(cubetex); } uint32_t gs_cubetexture_get_size(const gs_texture_t *cubetex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_cubetexture_get_size", cubetex)) return 0; return graphics->exports.gs_cubetexture_get_size(cubetex); } enum gs_color_format gs_cubetexture_get_color_format(const gs_texture_t *cubetex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_cubetexture_get_color_format", cubetex)) return GS_UNKNOWN; return graphics->exports.gs_cubetexture_get_color_format(cubetex); } void gs_voltexture_destroy(gs_texture_t *voltex) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_voltexture_destroy")) return; if (!voltex) return; graphics->exports.gs_voltexture_destroy(voltex); } uint32_t gs_voltexture_get_width(const gs_texture_t *voltex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_voltexture_get_width", voltex)) return 0; return graphics->exports.gs_voltexture_get_width(voltex); } uint32_t gs_voltexture_get_height(const 
gs_texture_t *voltex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_voltexture_get_height", voltex)) return 0; return graphics->exports.gs_voltexture_get_height(voltex); } uint32_t gs_voltexture_get_depth(const gs_texture_t *voltex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_voltexture_get_depth", voltex)) return 0; return graphics->exports.gs_voltexture_get_depth(voltex); } enum gs_color_format gs_voltexture_get_color_format(const gs_texture_t *voltex) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_voltexture_get_color_format", voltex)) return GS_UNKNOWN; return graphics->exports.gs_voltexture_get_color_format(voltex); } void gs_stagesurface_destroy(gs_stagesurf_t *stagesurf) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_stagesurface_destroy")) return; if (!stagesurf) return; graphics->exports.gs_stagesurface_destroy(stagesurf); } uint32_t gs_stagesurface_get_width(const gs_stagesurf_t *stagesurf) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_stagesurface_get_width", stagesurf)) return 0; return graphics->exports.gs_stagesurface_get_width(stagesurf); } uint32_t gs_stagesurface_get_height(const gs_stagesurf_t *stagesurf) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_stagesurface_get_height", stagesurf)) return 0; return graphics->exports.gs_stagesurface_get_height(stagesurf); } enum gs_color_format gs_stagesurface_get_color_format(const gs_stagesurf_t *stagesurf) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_stagesurface_get_color_format", stagesurf)) return GS_UNKNOWN; return graphics->exports.gs_stagesurface_get_color_format(stagesurf); } bool gs_stagesurface_map(gs_stagesurf_t *stagesurf, uint8_t **data, uint32_t *linesize) { graphics_t *graphics = thread_graphics; if (!gs_valid_p3("gs_stagesurface_map", stagesurf, data, linesize)) return 0; return graphics->exports.gs_stagesurface_map(stagesurf, data, linesize); } void 
gs_stagesurface_unmap(gs_stagesurf_t *stagesurf) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_stagesurface_unmap", stagesurf)) return; graphics->exports.gs_stagesurface_unmap(stagesurf); } void gs_zstencil_destroy(gs_zstencil_t *zstencil) { if (!gs_valid("gs_zstencil_destroy")) return; if (!zstencil) return; thread_graphics->exports.gs_zstencil_destroy(zstencil); } void gs_samplerstate_destroy(gs_samplerstate_t *samplerstate) { if (!gs_valid("gs_samplerstate_destroy")) return; if (!samplerstate) return; thread_graphics->exports.gs_samplerstate_destroy(samplerstate); } void gs_vertexbuffer_destroy(gs_vertbuffer_t *vertbuffer) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_vertexbuffer_destroy")) return; if (!vertbuffer) return; graphics->exports.gs_vertexbuffer_destroy(vertbuffer); } void gs_vertexbuffer_flush(gs_vertbuffer_t *vertbuffer) { if (!gs_valid_p("gs_vertexbuffer_flush", vertbuffer)) return; thread_graphics->exports.gs_vertexbuffer_flush(vertbuffer); } void gs_vertexbuffer_flush_direct(gs_vertbuffer_t *vertbuffer, const struct gs_vb_data *data) { if (!gs_valid_p2("gs_vertexbuffer_flush_direct", vertbuffer, data)) return; thread_graphics->exports.gs_vertexbuffer_flush_direct(vertbuffer, data); } struct gs_vb_data *gs_vertexbuffer_get_data(const gs_vertbuffer_t *vertbuffer) { if (!gs_valid_p("gs_vertexbuffer_get_data", vertbuffer)) return NULL; return thread_graphics->exports.gs_vertexbuffer_get_data(vertbuffer); } void gs_indexbuffer_destroy(gs_indexbuffer_t *indexbuffer) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_indexbuffer_destroy")) return; if (!indexbuffer) return; graphics->exports.gs_indexbuffer_destroy(indexbuffer); } void gs_indexbuffer_flush(gs_indexbuffer_t *indexbuffer) { if (!gs_valid_p("gs_indexbuffer_flush", indexbuffer)) return; thread_graphics->exports.gs_indexbuffer_flush(indexbuffer); } void gs_indexbuffer_flush_direct(gs_indexbuffer_t *indexbuffer, const void *data) { if 
(!gs_valid_p2("gs_indexbuffer_flush_direct", indexbuffer, data)) return; thread_graphics->exports.gs_indexbuffer_flush_direct(indexbuffer, data); } void *gs_indexbuffer_get_data(const gs_indexbuffer_t *indexbuffer) { if (!gs_valid_p("gs_indexbuffer_get_data", indexbuffer)) return NULL; return thread_graphics->exports.gs_indexbuffer_get_data(indexbuffer); } size_t gs_indexbuffer_get_num_indices(const gs_indexbuffer_t *indexbuffer) { if (!gs_valid_p("gs_indexbuffer_get_num_indices", indexbuffer)) return 0; return thread_graphics->exports.gs_indexbuffer_get_num_indices(indexbuffer); } enum gs_index_type gs_indexbuffer_get_type(const gs_indexbuffer_t *indexbuffer) { if (!gs_valid_p("gs_indexbuffer_get_type", indexbuffer)) return (enum gs_index_type)0; return thread_graphics->exports.gs_indexbuffer_get_type(indexbuffer); } void gs_timer_destroy(gs_timer_t *timer) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_timer_destroy")) return; if (!timer) return; graphics->exports.gs_timer_destroy(timer); } void gs_timer_begin(gs_timer_t *timer) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_timer_begin")) return; if (!timer) return; graphics->exports.gs_timer_begin(timer); } void gs_timer_end(gs_timer_t *timer) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_timer_end")) return; if (!timer) return; graphics->exports.gs_timer_end(timer); } bool gs_timer_get_data(gs_timer_t *timer, uint64_t *ticks) { if (!gs_valid_p2("gs_timer_get_data", timer, ticks)) return false; return thread_graphics->exports.gs_timer_get_data(timer, ticks); } void gs_timer_range_destroy(gs_timer_range_t *range) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_timer_range_destroy")) return; if (!range) return; graphics->exports.gs_timer_range_destroy(range); } void gs_timer_range_begin(gs_timer_range_t *range) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_timer_range_begin")) return; if (!range) return; 
graphics->exports.gs_timer_range_begin(range); } void gs_timer_range_end(gs_timer_range_t *range) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_timer_range_end")) return; if (!range) return; graphics->exports.gs_timer_range_end(range); } bool gs_timer_range_get_data(gs_timer_range_t *range, bool *disjoint, uint64_t *frequency) { if (!gs_valid_p2("gs_timer_range_get_data", disjoint, frequency)) return false; return thread_graphics->exports.gs_timer_range_get_data(range, disjoint, frequency); } bool gs_nv12_available(void) { if (!gs_valid("gs_nv12_available")) return false; if (!thread_graphics->exports.device_nv12_available) return false; return thread_graphics->exports.device_nv12_available(thread_graphics->device); } bool gs_p010_available(void) { if (!gs_valid("gs_p010_available")) return false; if (!thread_graphics->exports.device_p010_available) return false; return thread_graphics->exports.device_p010_available(thread_graphics->device); } bool gs_is_monitor_hdr(void *monitor) { if (!gs_valid("gs_is_monitor_hdr")) return false; return thread_graphics->exports.device_is_monitor_hdr(thread_graphics->device, monitor); } void gs_debug_marker_begin(const float color[4], const char *markername) { if (!gs_valid("gs_debug_marker_begin")) return; if (!markername) markername = "(null)"; thread_graphics->exports.device_debug_marker_begin(thread_graphics->device, markername, color); } void gs_debug_marker_begin_format(const float color[4], const char *format, ...) 
{ if (!gs_valid("gs_debug_marker_begin")) return; if (format) { char markername[64]; va_list args; va_start(args, format); vsnprintf(markername, sizeof(markername), format, args); va_end(args); thread_graphics->exports.device_debug_marker_begin(thread_graphics->device, markername, color); } else { gs_debug_marker_begin(color, NULL); } } void gs_debug_marker_end(void) { if (!gs_valid("gs_debug_marker_end")) return; thread_graphics->exports.device_debug_marker_end(thread_graphics->device); } bool gs_texture_create_nv12(gs_texture_t **tex_y, gs_texture_t **tex_uv, uint32_t width, uint32_t height, uint32_t flags) { graphics_t *graphics = thread_graphics; bool success = false; if (!gs_valid("gs_texture_create_nv12")) return false; if ((width & 1) == 1 || (height & 1) == 1) { blog(LOG_ERROR, "NV12 textures must have dimensions " "divisible by 2."); return false; } if (graphics->exports.device_texture_create_nv12) { success = graphics->exports.device_texture_create_nv12(graphics->device, tex_y, tex_uv, width, height, flags); if (success) return true; } *tex_y = gs_texture_create(width, height, GS_R8, 1, NULL, flags); *tex_uv = gs_texture_create(width / 2, height / 2, GS_R8G8, 1, NULL, flags); if (!*tex_y || !*tex_uv) { if (*tex_y) gs_texture_destroy(*tex_y); if (*tex_uv) gs_texture_destroy(*tex_uv); *tex_y = NULL; *tex_uv = NULL; return false; } return true; } bool gs_texture_create_p010(gs_texture_t **tex_y, gs_texture_t **tex_uv, uint32_t width, uint32_t height, uint32_t flags) { graphics_t *graphics = thread_graphics; bool success = false; if (!gs_valid("gs_texture_create_p010")) return false; if ((width & 1) == 1 || (height & 1) == 1) { blog(LOG_ERROR, "P010 textures must have dimensions " "divisible by 2."); return false; } if (graphics->exports.device_texture_create_p010) { success = graphics->exports.device_texture_create_p010(graphics->device, tex_y, tex_uv, width, height, flags); if (success) return true; } *tex_y = gs_texture_create(width, height, GS_R16, 1, 
NULL, flags); *tex_uv = gs_texture_create(width / 2, height / 2, GS_RG16, 1, NULL, flags); if (!*tex_y || !*tex_uv) { if (*tex_y) gs_texture_destroy(*tex_y); if (*tex_uv) gs_texture_destroy(*tex_uv); *tex_y = NULL; *tex_uv = NULL; return false; } return true; } uint32_t gs_get_adapter_count(void) { if (!gs_valid("gs_get_adapter_count")) return 0; if (!thread_graphics->exports.gs_get_adapter_count) return 0; return thread_graphics->exports.gs_get_adapter_count(); } #ifdef __APPLE__ /** Platform specific functions */ gs_texture_t *gs_texture_create_from_iosurface(void *iosurf) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_create_from_iosurface", iosurf)) return NULL; if (!graphics->exports.device_texture_create_from_iosurface) return NULL; return graphics->exports.device_texture_create_from_iosurface(graphics->device, iosurf); } bool gs_texture_rebind_iosurface(gs_texture_t *texture, void *iosurf) { graphics_t *graphics = thread_graphics; if (!gs_valid_p("gs_texture_rebind_iosurface", texture)) return false; if (!graphics->exports.gs_texture_rebind_iosurface) return false; return graphics->exports.gs_texture_rebind_iosurface(texture, iosurf); } bool gs_shared_texture_available(void) { if (!gs_valid("gs_shared_texture_available")) return false; return thread_graphics->exports.device_shared_texture_available(); } gs_texture_t *gs_texture_open_shared(uint32_t handle) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_texture_open_shared")) return NULL; if (graphics->exports.device_texture_open_shared) return graphics->exports.device_texture_open_shared(graphics->device, handle); return NULL; } #elif _WIN32 bool gs_gdi_texture_available(void) { if (!gs_valid("gs_gdi_texture_available")) return false; return thread_graphics->exports.device_gdi_texture_available(); } bool gs_shared_texture_available(void) { if (!gs_valid("gs_shared_texture_available")) return false; return thread_graphics->exports.device_shared_texture_available(); } bool 
gs_get_duplicator_monitor_info(int monitor_idx, struct gs_monitor_info *monitor_info) { if (!gs_valid_p("gs_get_duplicator_monitor_info", monitor_info)) return false; if (!thread_graphics->exports.device_get_duplicator_monitor_info) return false; return thread_graphics->exports.device_get_duplicator_monitor_info(thread_graphics->device, monitor_idx, monitor_info); } int gs_duplicator_get_monitor_index(void *monitor) { if (!gs_valid("gs_duplicator_get_monitor_index")) return false; if (!thread_graphics->exports.device_duplicator_get_monitor_index) return false; return thread_graphics->exports.device_duplicator_get_monitor_index(thread_graphics->device, monitor); } gs_duplicator_t *gs_duplicator_create(int monitor_idx) { if (!gs_valid("gs_duplicator_create")) return NULL; if (!thread_graphics->exports.device_duplicator_create) return NULL; return thread_graphics->exports.device_duplicator_create(thread_graphics->device, monitor_idx); } void gs_duplicator_destroy(gs_duplicator_t *duplicator) { if (!gs_valid("gs_duplicator_destroy")) return; if (!duplicator) return; if (!thread_graphics->exports.gs_duplicator_destroy) return; thread_graphics->exports.gs_duplicator_destroy(duplicator); } bool gs_duplicator_update_frame(gs_duplicator_t *duplicator) { if (!gs_valid_p("gs_duplicator_update_frame", duplicator)) return false; if (!thread_graphics->exports.gs_duplicator_update_frame) return false; return thread_graphics->exports.gs_duplicator_update_frame(duplicator); } bool gs_can_adapter_fast_clear(void) { if (!gs_valid("gs_can_adapter_fast_clear")) return false; if (!thread_graphics->exports.device_can_adapter_fast_clear) return false; return thread_graphics->exports.device_can_adapter_fast_clear(thread_graphics->device); } gs_texture_t *gs_duplicator_get_texture(gs_duplicator_t *duplicator) { if (!gs_valid_p("gs_duplicator_get_texture", duplicator)) return NULL; if (!thread_graphics->exports.gs_duplicator_get_texture) return NULL; return 
thread_graphics->exports.gs_duplicator_get_texture(duplicator); } enum gs_color_space gs_duplicator_get_color_space(gs_duplicator_t *duplicator) { if (!gs_valid_p("gs_duplicator_get_color_space", duplicator)) return GS_CS_SRGB; if (!thread_graphics->exports.gs_duplicator_get_color_space) return GS_CS_SRGB; return thread_graphics->exports.gs_duplicator_get_color_space(duplicator); } float gs_duplicator_get_sdr_white_level(gs_duplicator_t *duplicator) { if (!gs_valid_p("gs_duplicator_get_sdr_white_level", duplicator)) return 80.f; if (!thread_graphics->exports.gs_duplicator_get_sdr_white_level) return 80.f; return thread_graphics->exports.gs_duplicator_get_sdr_white_level(duplicator); } /** creates a windows GDI-lockable texture */ gs_texture_t *gs_texture_create_gdi(uint32_t width, uint32_t height) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_texture_create_gdi")) return NULL; if (graphics->exports.device_texture_create_gdi) return graphics->exports.device_texture_create_gdi(graphics->device, width, height); return NULL; } void *gs_texture_get_dc(gs_texture_t *gdi_tex) { if (!gs_valid_p("gs_texture_get_dc", gdi_tex)) return NULL; if (thread_graphics->exports.gs_texture_get_dc) return thread_graphics->exports.gs_texture_get_dc(gdi_tex); return NULL; } void gs_texture_release_dc(gs_texture_t *gdi_tex) { if (!gs_valid_p("gs_texture_release_dc", gdi_tex)) return; if (thread_graphics->exports.gs_texture_release_dc) thread_graphics->exports.gs_texture_release_dc(gdi_tex); } gs_texture_t *gs_texture_open_shared(uint32_t handle) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_texture_open_shared")) return NULL; if (graphics->exports.device_texture_open_shared) return graphics->exports.device_texture_open_shared(graphics->device, handle); return NULL; } gs_texture_t *gs_texture_open_nt_shared(uint32_t handle) { graphics_t *graphics = thread_graphics; if (!gs_valid("gs_texture_open_nt_shared")) return NULL; if
(graphics->exports.device_texture_open_nt_shared)
		return graphics->exports.device_texture_open_nt_shared(graphics->device, handle);
	return NULL;
}

uint32_t gs_texture_get_shared_handle(gs_texture_t *tex)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_texture_get_shared_handle"))
		return GS_INVALID_HANDLE;

	if (graphics->exports.device_texture_get_shared_handle)
		return graphics->exports.device_texture_get_shared_handle(tex);
	return GS_INVALID_HANDLE;
}

gs_texture_t *gs_texture_wrap_obj(void *obj)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_texture_wrap_obj"))
		return NULL;

	if (graphics->exports.device_texture_wrap_obj)
		return graphics->exports.device_texture_wrap_obj(graphics->device, obj);
	return NULL;
}

int gs_texture_acquire_sync(gs_texture_t *tex, uint64_t key, uint32_t ms)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_texture_acquire_sync"))
		return -1;

	if (graphics->exports.device_texture_acquire_sync)
		return graphics->exports.device_texture_acquire_sync(tex, key, ms);
	return -1;
}

int gs_texture_release_sync(gs_texture_t *tex, uint64_t key)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_texture_release_sync"))
		return -1;

	if (graphics->exports.device_texture_release_sync)
		return graphics->exports.device_texture_release_sync(tex, key);
	return -1;
}

gs_stagesurf_t *gs_stagesurface_create_nv12(uint32_t width, uint32_t height)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_stagesurface_create_nv12"))
		return NULL;

	if ((width & 1) == 1 || (height & 1) == 1) {
		blog(LOG_ERROR, "NV12 textures must have dimensions "
				"divisible by 2.");
		return NULL;
	}

	if (graphics->exports.device_stagesurface_create_nv12)
		return graphics->exports.device_stagesurface_create_nv12(graphics->device, width, height);
	return NULL;
}

gs_stagesurf_t *gs_stagesurface_create_p010(uint32_t width, uint32_t height)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_stagesurface_create_p010"))
		return NULL;

	if ((width & 1) == 1 || (height & 1) == 1) {
		blog(LOG_ERROR, "P010 textures must have dimensions "
				"divisible by 2.");
		return NULL;
	}

	if (graphics->exports.device_stagesurface_create_p010)
		return graphics->exports.device_stagesurface_create_p010(graphics->device, width, height);
	return NULL;
}

void gs_register_loss_callbacks(const struct gs_device_loss *callbacks)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_register_loss_callbacks"))
		return;

	if (graphics->exports.device_register_loss_callbacks)
		graphics->exports.device_register_loss_callbacks(graphics->device, callbacks);
}

void gs_unregister_loss_callbacks(void *data)
{
	graphics_t *graphics = thread_graphics;

	if (!gs_valid("gs_unregister_loss_callbacks"))
		return;

	if (graphics->exports.device_unregister_loss_callbacks)
		graphics->exports.device_unregister_loss_callbacks(graphics->device, data);
}

#endif

obs-studio-32.1.0-sources/libobs/graphics/effect-parser.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/

#include
#include
#include "../util/platform.h"
#include "effect-parser.h"
#include "effect.h"

typedef DARRAY(struct dstr) dstr_array_t;

static inline bool ep_parse_param_assign(struct effect_parser *ep, struct ep_param *param);

static enum gs_shader_param_type get_effect_param_type(const char *type)
{
	if (strcmp(type, "float") == 0)
		return GS_SHADER_PARAM_FLOAT;
	else if (strcmp(type, "float2") == 0)
		return GS_SHADER_PARAM_VEC2;
	else if (strcmp(type, "float3") == 0)
		return GS_SHADER_PARAM_VEC3;
	else if (strcmp(type, "float4") == 0)
		return GS_SHADER_PARAM_VEC4;
	else if (strcmp(type, "int2") == 0)
		return GS_SHADER_PARAM_INT2;
	else if (strcmp(type, "int3") == 0)
		return GS_SHADER_PARAM_INT3;
	else if (strcmp(type, "int4") == 0)
		return GS_SHADER_PARAM_INT4;
	else if (astrcmp_n(type, "texture", 7) == 0)
		return GS_SHADER_PARAM_TEXTURE;
	else if (strcmp(type, "float4x4") == 0)
		return GS_SHADER_PARAM_MATRIX4X4;
	else if (strcmp(type, "bool") == 0)
		return GS_SHADER_PARAM_BOOL;
	else if (strcmp(type, "int") == 0)
		return GS_SHADER_PARAM_INT;
	else if (strcmp(type, "string") == 0)
		return GS_SHADER_PARAM_STRING;

	return GS_SHADER_PARAM_UNKNOWN;
}

void ep_free(struct effect_parser *ep)
{
	size_t i;
	for (i = 0; i < ep->params.num; i++)
		ep_param_free(ep->params.array + i);
	for (i = 0; i < ep->structs.num; i++)
		ep_struct_free(ep->structs.array + i);
	for (i = 0; i < ep->funcs.num; i++)
		ep_func_free(ep->funcs.array + i);
	for (i = 0; i < ep->samplers.num; i++)
		ep_sampler_free(ep->samplers.array + i);
	for (i = 0; i < ep->techniques.num; i++)
		ep_technique_free(ep->techniques.array + i);

	ep->cur_pass = NULL;
	cf_parser_free(&ep->cfp);
	da_free(ep->params);
	da_free(ep->structs);
	da_free(ep->funcs);
	da_free(ep->samplers);
	da_free(ep->techniques);
}

static inline struct ep_func *ep_getfunc(struct effect_parser *ep, const char *name)
{
	size_t i;
	for (i = 0; i < ep->funcs.num; i++) {
		if (strcmp(name, ep->funcs.array[i].name)
== 0) return ep->funcs.array + i; } return NULL; } static inline struct ep_struct *ep_getstruct(struct effect_parser *ep, const char *name) { size_t i; for (i = 0; i < ep->structs.num; i++) { if (strcmp(name, ep->structs.array[i].name) == 0) return ep->structs.array + i; } return NULL; } static inline struct ep_sampler *ep_getsampler(struct effect_parser *ep, const char *name) { size_t i; for (i = 0; i < ep->samplers.num; i++) { if (strcmp(name, ep->samplers.array[i].name) == 0) return ep->samplers.array + i; } return NULL; } static inline struct ep_param *ep_getparam(struct effect_parser *ep, const char *name) { size_t i; for (i = 0; i < ep->params.num; i++) { if (strcmp(name, ep->params.array[i].name) == 0) return ep->params.array + i; } return NULL; } static inline struct ep_param *ep_getannotation(struct ep_param *param, const char *name) { size_t i; for (i = 0; i < param->annotations.num; i++) { if (strcmp(name, param->annotations.array[i].name) == 0) return param->annotations.array + i; } return NULL; } static inline struct ep_func *ep_getfunc_strref(struct effect_parser *ep, const struct strref *ref) { size_t i; for (i = 0; i < ep->funcs.num; i++) { if (strref_cmp(ref, ep->funcs.array[i].name) == 0) return ep->funcs.array + i; } return NULL; } static inline struct ep_struct *ep_getstruct_strref(struct effect_parser *ep, const struct strref *ref) { size_t i; for (i = 0; i < ep->structs.num; i++) { if (strref_cmp(ref, ep->structs.array[i].name) == 0) return ep->structs.array + i; } return NULL; } static inline struct ep_sampler *ep_getsampler_strref(struct effect_parser *ep, const struct strref *ref) { size_t i; for (i = 0; i < ep->samplers.num; i++) { if (strref_cmp(ref, ep->samplers.array[i].name) == 0) return ep->samplers.array + i; } return NULL; } static inline struct ep_param *ep_getparam_strref(struct effect_parser *ep, const struct strref *ref) { size_t i; for (i = 0; i < ep->params.num; i++) { if (strref_cmp(ref, ep->params.array[i].name) == 0) return 
ep->params.array + i; } return NULL; } static inline int ep_parse_struct_var(struct effect_parser *ep, struct ep_var *var) { int code; /* -------------------------------------- */ /* variable type */ if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ";")) return PARSE_CONTINUE; if (cf_token_is(&ep->cfp, "}")) return PARSE_BREAK; code = cf_token_is_type(&ep->cfp, CFTOKEN_NAME, "type name", ";"); if (code != PARSE_SUCCESS) return code; cf_copy_token(&ep->cfp, &var->type); /* -------------------------------------- */ /* variable name */ if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ";")) return PARSE_UNEXPECTED_CONTINUE; if (cf_token_is(&ep->cfp, "}")) return PARSE_UNEXPECTED_BREAK; code = cf_token_is_type(&ep->cfp, CFTOKEN_NAME, "variable name", ";"); if (code != PARSE_SUCCESS) return code; cf_copy_token(&ep->cfp, &var->name); /* -------------------------------------- */ /* variable mapping if any (POSITION, TEXCOORD, etc) */ if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ":")) { if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ";")) return PARSE_UNEXPECTED_CONTINUE; if (cf_token_is(&ep->cfp, "}")) return PARSE_UNEXPECTED_BREAK; code = cf_token_is_type(&ep->cfp, CFTOKEN_NAME, "mapping name", ";"); if (code != PARSE_SUCCESS) return code; cf_copy_token(&ep->cfp, &var->mapping); if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; } /* -------------------------------------- */ if (!cf_token_is(&ep->cfp, ";")) { if (!cf_go_to_valid_token(&ep->cfp, ";", "}")) return PARSE_EOF; return PARSE_CONTINUE; } return PARSE_SUCCESS; } static void ep_parse_struct(struct effect_parser *ep) { struct ep_struct eps; ep_struct_init(&eps); if (cf_next_name(&ep->cfp, &eps.name, "name", ";") != PARSE_SUCCESS) goto error; if (cf_next_token_should_be(&ep->cfp, "{", ";", NULL) != PARSE_SUCCESS) goto error; /* get structure variables */ while (true) { bool do_break = 
false; struct ep_var var; ep_var_init(&var); switch (ep_parse_struct_var(ep, &var)) { case PARSE_UNEXPECTED_CONTINUE: cf_adderror_syntax_error(&ep->cfp); /* Falls through. */ case PARSE_CONTINUE: ep_var_free(&var); continue; case PARSE_UNEXPECTED_BREAK: cf_adderror_syntax_error(&ep->cfp); /* Falls through. */ case PARSE_BREAK: ep_var_free(&var); do_break = true; break; case PARSE_EOF: ep_var_free(&var); goto error; } if (do_break) break; da_push_back(eps.vars, &var); } if (cf_next_token_should_be(&ep->cfp, ";", NULL, NULL) != PARSE_SUCCESS) goto error; da_push_back(ep->structs, &eps); return; error: ep_struct_free(&eps); } static inline int ep_parse_param_annotation_var(struct effect_parser *ep, struct ep_param *var) { int code; /* -------------------------------------- */ /* variable type */ if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ";")) return PARSE_CONTINUE; if (cf_token_is(&ep->cfp, ">")) return PARSE_BREAK; code = cf_token_is_type(&ep->cfp, CFTOKEN_NAME, "type name", ";"); if (code != PARSE_SUCCESS) return code; bfree(var->type); cf_copy_token(&ep->cfp, &var->type); /* -------------------------------------- */ /* variable name */ if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ";")) { cf_adderror_expecting(&ep->cfp, "variable name"); return PARSE_UNEXPECTED_CONTINUE; } if (cf_token_is(&ep->cfp, ">")) { cf_adderror_expecting(&ep->cfp, "variable name"); return PARSE_UNEXPECTED_BREAK; } code = cf_token_is_type(&ep->cfp, CFTOKEN_NAME, "variable name", ";"); if (code != PARSE_SUCCESS) return code; bfree(var->name); cf_copy_token(&ep->cfp, &var->name); /* -------------------------------------- */ /* variable mapping if any (POSITION, TEXCOORD, etc) */ if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ":")) { cf_adderror_expecting(&ep->cfp, "= or ;"); return PARSE_UNEXPECTED_BREAK; } else if (cf_token_is(&ep->cfp, ">")) { cf_adderror_expecting(&ep->cfp, "= or ;"); 
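The struct grammar handled by ep_parse_struct and ep_parse_struct_var above is the HLSL-style member list `type name : MAPPING;`, where the mapping after `:` is optional. A typical input, in the shape of the stock OBS effect files (names illustrative):

```
struct VertInOut {
	float4 pos : POSITION;
	float2 uv  : TEXCOORD0;
};
```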
return PARSE_UNEXPECTED_BREAK; } else if (cf_token_is(&ep->cfp, "=")) { if (!ep_parse_param_assign(ep, var)) { cf_adderror_expecting(&ep->cfp, "assignment value"); return PARSE_UNEXPECTED_BREAK; } } /* -------------------------------------- */ if (!cf_token_is(&ep->cfp, ";")) { if (!cf_go_to_valid_token(&ep->cfp, ";", ">")) { cf_adderror_expecting(&ep->cfp, "; or >"); return PARSE_EOF; } return PARSE_CONTINUE; } return PARSE_SUCCESS; } static int ep_parse_annotations(struct effect_parser *ep, ep_param_array_t *annotations) { if (!cf_token_is(&ep->cfp, "<")) { cf_adderror_expecting(&ep->cfp, "<"); goto error; } /* get annotation variables */ while (true) { bool do_break = false; struct ep_param var; ep_param_init(&var, bstrdup(""), bstrdup(""), false, false, false); switch (ep_parse_param_annotation_var(ep, &var)) { case PARSE_UNEXPECTED_CONTINUE: cf_adderror_syntax_error(&ep->cfp); /* Falls through. */ case PARSE_CONTINUE: ep_param_free(&var); continue; case PARSE_UNEXPECTED_BREAK: cf_adderror_syntax_error(&ep->cfp); /* Falls through. 
*/ case PARSE_BREAK: ep_param_free(&var); do_break = true; break; case PARSE_EOF: ep_param_free(&var); goto error; } if (do_break) break; da_push_back(*annotations, &var); } if (!cf_token_is(&ep->cfp, ">")) { cf_adderror_expecting(&ep->cfp, ">"); goto error; } if (!cf_next_valid_token(&ep->cfp)) goto error; return true; error: return false; } static int ep_parse_param_annotations(struct effect_parser *ep, struct ep_param *param) { return ep_parse_annotations(ep, ¶m->annotations); } static inline int ep_parse_pass_command_call(struct effect_parser *ep, cf_token_array_t *call) { struct cf_token end_token; cf_token_clear(&end_token); while (!cf_token_is(&ep->cfp, ";")) { if (cf_token_is(&ep->cfp, "}")) { cf_adderror_expecting(&ep->cfp, ";"); return PARSE_CONTINUE; } da_push_back(*call, ep->cfp.cur_token); if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; } da_push_back(*call, ep->cfp.cur_token); da_push_back(*call, &end_token); return PARSE_SUCCESS; } static int ep_parse_pass_command(struct effect_parser *ep, struct ep_pass *pass) { cf_token_array_t *call; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, "vertex_shader") || cf_token_is(&ep->cfp, "vertex_program")) { call = &pass->vertex_program; } else if (cf_token_is(&ep->cfp, "pixel_shader") || cf_token_is(&ep->cfp, "pixel_program")) { call = &pass->fragment_program; } else { cf_adderror_syntax_error(&ep->cfp); if (!cf_go_to_valid_token(&ep->cfp, ";", "}")) return PARSE_EOF; return PARSE_CONTINUE; } if (cf_next_token_should_be(&ep->cfp, "=", ";", "}") != PARSE_SUCCESS) return PARSE_CONTINUE; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, "compile")) { cf_adderror(&ep->cfp, "compile keyword not necessary", LEX_WARNING, NULL, NULL, NULL); if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; } return ep_parse_pass_command_call(ep, call); } static int ep_parse_pass(struct effect_parser *ep, struct ep_pass *pass) { struct cf_token peek; if 
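ep_parse_annotations and ep_parse_param_annotation_var accept an HLSL-style annotation block between `<` and `>`, each entry being `type name = value;`. The annotation names themselves are opaque to the parser, so the ones below are purely illustrative:

```
uniform float gamma <
	string label = "Gamma";
	float minimum = 0.5;
	float maximum = 2.0;
> = 1.0;
```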
(!cf_token_is(&ep->cfp, "pass")) return PARSE_UNEXPECTED_CONTINUE; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (!cf_token_is(&ep->cfp, "{")) { pass->name = bstrdup_n(ep->cfp.cur_token->str.array, ep->cfp.cur_token->str.len); if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; } if (!cf_peek_valid_token(&ep->cfp, &peek)) return PARSE_EOF; while (strref_cmp(&peek.str, "}") != 0) { int ret = ep_parse_pass_command(ep, pass); if (ret < 0 && ret != PARSE_CONTINUE) return ret; if (!cf_peek_valid_token(&ep->cfp, &peek)) return PARSE_EOF; } /* token is '}' */ cf_next_token(&ep->cfp); return PARSE_SUCCESS; } static void ep_parse_technique(struct effect_parser *ep) { struct ep_technique ept; ep_technique_init(&ept); if (cf_next_name(&ep->cfp, &ept.name, "name", ";") != PARSE_SUCCESS) goto error; if (!cf_next_valid_token(&ep->cfp)) return; if (!cf_token_is(&ep->cfp, "{")) { if (!cf_go_to_token(&ep->cfp, ";", NULL)) { cf_adderror_expecting(&ep->cfp, ";"); return; } cf_adderror_expecting(&ep->cfp, "{"); goto error; } if (!cf_next_valid_token(&ep->cfp)) goto error; while (!cf_token_is(&ep->cfp, "}")) { struct ep_pass pass; ep_pass_init(&pass); switch (ep_parse_pass(ep, &pass)) { case PARSE_UNEXPECTED_CONTINUE: ep_pass_free(&pass); if (!cf_go_to_token(&ep->cfp, "}", NULL)) goto error; continue; case PARSE_EOF: ep_pass_free(&pass); goto error; } da_push_back(ept.passes, &pass); if (!cf_next_valid_token(&ep->cfp)) goto error; } /* pass the current token (which is '}') if we reached here */ cf_next_token(&ep->cfp); da_push_back(ep->techniques, &ept); return; error: cf_next_token(&ep->cfp); ep_technique_free(&ept); } static int ep_parse_sampler_state_item(struct effect_parser *ep, struct ep_sampler *eps) { int ret; char *state = NULL; struct dstr value = {0}; ret = cf_next_name(&ep->cfp, &state, "state name", ";"); if (ret != PARSE_SUCCESS) goto fail; ret = cf_next_token_should_be(&ep->cfp, "=", ";", NULL); if (ret != PARSE_SUCCESS) goto fail; for (;;) { const char 
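ep_parse_technique and ep_parse_pass accept a technique block whose passes bind `vertex_shader`/`pixel_shader` (or the `vertex_program`/`pixel_program` aliases) to entry-point calls, in the style of OBS's bundled effects (entry-point names illustrative):

```
technique Draw
{
	pass
	{
		vertex_shader = VSDefault(vert_in);
		pixel_shader  = PSDrawBare(vert_in);
	}
}
```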
*cur_str; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; cur_str = ep->cfp.cur_token->str.array; if (*cur_str == ';') break; dstr_ncat(&value, cur_str, ep->cfp.cur_token->str.len); } if (value.len) { da_push_back(eps->states, &state); da_push_back(eps->values, &value.array); } return ret; fail: bfree(state); dstr_free(&value); return ret; } static void ep_parse_sampler_state(struct effect_parser *ep) { struct ep_sampler eps; struct cf_token peek; ep_sampler_init(&eps); if (cf_next_name(&ep->cfp, &eps.name, "name", ";") != PARSE_SUCCESS) goto error; if (cf_next_token_should_be(&ep->cfp, "{", ";", NULL) != PARSE_SUCCESS) goto error; if (!cf_peek_valid_token(&ep->cfp, &peek)) goto error; while (strref_cmp(&peek.str, "}") != 0) { int ret = ep_parse_sampler_state_item(ep, &eps); if (ret == PARSE_EOF) goto error; if (!cf_peek_valid_token(&ep->cfp, &peek)) goto error; } if (cf_next_token_should_be(&ep->cfp, "}", ";", NULL) != PARSE_SUCCESS) goto error; if (cf_next_token_should_be(&ep->cfp, ";", NULL, NULL) != PARSE_SUCCESS) goto error; da_push_back(ep->samplers, &eps); return; error: ep_sampler_free(&eps); } static inline int ep_check_for_keyword(struct effect_parser *ep, const char *keyword, bool *val) { bool new_val = cf_token_is(&ep->cfp, keyword); if (new_val) { if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (new_val && *val) cf_adderror(&ep->cfp, "'$1' keyword already specified", LEX_WARNING, keyword, NULL, NULL); *val = new_val; return PARSE_CONTINUE; } return PARSE_SUCCESS; } static inline int ep_parse_func_param(struct effect_parser *ep, struct ep_func *func, struct ep_var *var) { int code; bool var_type_keyword = false; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; code = ep_check_for_keyword(ep, "in", &var_type_keyword); if (code == PARSE_EOF) return PARSE_EOF; else if (var_type_keyword) var->var_type = EP_VAR_IN; if (!var_type_keyword) { code = ep_check_for_keyword(ep, "inout", &var_type_keyword); if (code == PARSE_EOF) return 
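ep_parse_sampler_state collects each `state = value;` pair verbatim into the parallel `states`/`values` arrays; interpreting the state names is left to the graphics backend. A typical block in the style of OBS's bundled effects:

```
sampler_state def_sampler {
	Filter   = Linear;
	AddressU = Clamp;
	AddressV = Clamp;
};
```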
PARSE_EOF; else if (var_type_keyword) var->var_type = EP_VAR_INOUT; } if (!var_type_keyword) { code = ep_check_for_keyword(ep, "out", &var_type_keyword); if (code == PARSE_EOF) return PARSE_EOF; else if (var_type_keyword) var->var_type = EP_VAR_OUT; } if (!var_type_keyword) { code = ep_check_for_keyword(ep, "uniform", &var_type_keyword); if (code == PARSE_EOF) return PARSE_EOF; else if (var_type_keyword) var->var_type = EP_VAR_UNIFORM; } code = cf_get_name(&ep->cfp, &var->type, "type", ")"); if (code != PARSE_SUCCESS) return code; code = cf_next_name(&ep->cfp, &var->name, "name", ")"); if (code != PARSE_SUCCESS) return code; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, ":")) { code = cf_next_name(&ep->cfp, &var->mapping, "mapping specifier", ")"); if (code != PARSE_SUCCESS) return code; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; } if (ep_getstruct(ep, var->type) != NULL) da_push_back(func->struct_deps, &var->type); else if (ep_getsampler(ep, var->type) != NULL) da_push_back(func->sampler_deps, &var->type); return PARSE_SUCCESS; } static bool ep_parse_func_params(struct effect_parser *ep, struct ep_func *func) { struct cf_token peek; int code; cf_token_clear(&peek); if (!cf_peek_valid_token(&ep->cfp, &peek)) return false; if (*peek.str.array == ')') { cf_next_token(&ep->cfp); goto exit; } do { struct ep_var var; ep_var_init(&var); if (!cf_token_is(&ep->cfp, "(") && !cf_token_is(&ep->cfp, ",")) cf_adderror_syntax_error(&ep->cfp); code = ep_parse_func_param(ep, func, &var); if (code != PARSE_SUCCESS) { ep_var_free(&var); if (code == PARSE_CONTINUE) goto exit; else if (code == PARSE_EOF) return false; } da_push_back(func->param_vars, &var); } while (!cf_token_is(&ep->cfp, ")")); exit: return true; } static inline bool ep_process_struct_dep(struct effect_parser *ep, struct ep_func *func) { struct ep_struct *val = ep_getstruct_strref(ep, &ep->cfp.cur_token->str); if (val) da_push_back(func->struct_deps, &val->name); return 
val != NULL; } static inline bool ep_process_func_dep(struct effect_parser *ep, struct ep_func *func) { struct ep_func *val = ep_getfunc_strref(ep, &ep->cfp.cur_token->str); if (val) da_push_back(func->func_deps, &val->name); return val != NULL; } static inline bool ep_process_sampler_dep(struct effect_parser *ep, struct ep_func *func) { struct ep_sampler *val = ep_getsampler_strref(ep, &ep->cfp.cur_token->str); if (val) da_push_back(func->sampler_deps, &val->name); return val != NULL; } static inline bool ep_process_param_dep(struct effect_parser *ep, struct ep_func *func) { struct ep_param *val = ep_getparam_strref(ep, &ep->cfp.cur_token->str); if (val) da_push_back(func->param_deps, &val->name); return val != NULL; } static inline bool ep_parse_func_contents(struct effect_parser *ep, struct ep_func *func) { int braces = 1; dstr_cat_strref(&func->contents, &ep->cfp.cur_token->str); while (braces > 0) { if ((ep->cfp.cur_token++)->type == CFTOKEN_NONE) return false; if (ep->cfp.cur_token->type == CFTOKEN_SPACETAB || ep->cfp.cur_token->type == CFTOKEN_NEWLINE) { } else if (cf_token_is(&ep->cfp, "{")) { braces++; } else if (cf_token_is(&ep->cfp, "}")) { braces--; } else if (ep_process_struct_dep(ep, func) || ep_process_func_dep(ep, func) || ep_process_sampler_dep(ep, func) || ep_process_param_dep(ep, func)) { } dstr_cat_strref(&func->contents, &ep->cfp.cur_token->str); } return true; } static void ep_parse_function(struct effect_parser *ep, char *type, char *name) { struct ep_func func; int code; ep_func_init(&func, type, name); if (ep_getstruct(ep, type)) da_push_back(func.struct_deps, &func.ret_type); if (!ep_parse_func_params(ep, &func)) goto error; if (!cf_next_valid_token(&ep->cfp)) goto error; /* if function is mapped to something, for example COLOR */ if (cf_token_is(&ep->cfp, ":")) { code = cf_next_name(&ep->cfp, &func.mapping, "mapping specifier", "{"); if (code == PARSE_EOF) goto error; else if (code != PARSE_CONTINUE) { if (!cf_next_valid_token(&ep->cfp)) 
goto error; } } if (!cf_token_is(&ep->cfp, "{")) { cf_adderror_expecting(&ep->cfp, "{"); goto error; } if (!ep_parse_func_contents(ep, &func)) goto error; /* it is established that the current token is '}' if we reach this */ cf_next_token(&ep->cfp); da_push_back(ep->funcs, &func); return; error: ep_func_free(&func); } /* parses "array[count]" */ static bool ep_parse_param_array(struct effect_parser *ep, struct ep_param *param) { if (!cf_next_valid_token(&ep->cfp)) return false; if (ep->cfp.cur_token->type != CFTOKEN_NUM || !valid_int_str(ep->cfp.cur_token->str.array, ep->cfp.cur_token->str.len)) return false; param->array_count = (int)strtol(ep->cfp.cur_token->str.array, NULL, 10); if (cf_next_token_should_be(&ep->cfp, "]", ";", NULL) == PARSE_EOF) return false; if (!cf_next_valid_token(&ep->cfp)) return false; return true; } static inline int ep_parse_param_assign_texture(struct effect_parser *ep, struct ep_param *param) { int code; char *str; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; code = cf_token_is_type(&ep->cfp, CFTOKEN_STRING, "texture path string", ";"); if (code != PARSE_SUCCESS) return code; str = cf_literal_to_str(ep->cfp.cur_token->str.array, ep->cfp.cur_token->str.len); if (str) { da_copy_array(param->default_val, str, strlen(str) + 1); bfree(str); } return PARSE_SUCCESS; } static inline int ep_parse_param_assign_string(struct effect_parser *ep, struct ep_param *param) { int code; char *str = NULL; if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; code = cf_token_is_type(&ep->cfp, CFTOKEN_STRING, "string", ";"); if (code != PARSE_SUCCESS) return code; str = cf_literal_to_str(ep->cfp.cur_token->str.array, ep->cfp.cur_token->str.len); if (str) { da_copy_array(param->default_val, str, strlen(str) + 1); bfree(str); } return PARSE_SUCCESS; } static inline int ep_parse_param_assign_intfloat(struct effect_parser *ep, struct ep_param *param, bool is_float) { int code; bool is_negative = false; if (!cf_next_valid_token(&ep->cfp)) return 
PARSE_EOF; if (cf_token_is(&ep->cfp, "-")) { is_negative = true; if (!cf_next_token(&ep->cfp)) return PARSE_EOF; } code = cf_token_is_type(&ep->cfp, CFTOKEN_NUM, "numeric value", ";"); if (code != PARSE_SUCCESS) return code; if (is_float) { float f = (float)os_strtod(ep->cfp.cur_token->str.array); if (is_negative) f = -f; da_push_back_array(param->default_val, (uint8_t *)&f, sizeof(float)); } else { long l = strtol(ep->cfp.cur_token->str.array, NULL, 10); if (is_negative) l = -l; da_push_back_array(param->default_val, (uint8_t *)&l, sizeof(long)); } return PARSE_SUCCESS; } static inline int ep_parse_param_assign_bool(struct effect_parser *ep, struct ep_param *param) { if (!cf_next_valid_token(&ep->cfp)) return PARSE_EOF; if (cf_token_is(&ep->cfp, "true")) { long l = 1; da_push_back_array(param->default_val, (uint8_t *)&l, sizeof(long)); return PARSE_SUCCESS; } else if (cf_token_is(&ep->cfp, "false")) { long l = 0; da_push_back_array(param->default_val, (uint8_t *)&l, sizeof(long)); return PARSE_SUCCESS; } cf_adderror_expecting(&ep->cfp, "true or false"); return PARSE_EOF; } /* * parses assignment for float1, float2, float3, float4, int1, int2, int3, int4, * and any combination for float3x3, float4x4, int3x3, int4x4, etc */ static inline int ep_parse_param_assign_intfloat_array(struct effect_parser *ep, struct ep_param *param, bool is_float) { const char *intfloat_type = param->type + (is_float ? 
5 : 3); int intfloat_count = 0, code, i; /* -------------------------------------------- */ if (intfloat_type[0] < '1' || intfloat_type[0] > '4') cf_adderror(&ep->cfp, "Invalid row count", LEX_ERROR, NULL, NULL, NULL); intfloat_count = intfloat_type[0] - '0'; if (intfloat_type[1] == 'x') { if (intfloat_type[2] < '1' || intfloat_type[2] > '4') cf_adderror(&ep->cfp, "Invalid column count", LEX_ERROR, NULL, NULL, NULL); intfloat_count *= intfloat_type[2] - '0'; } /* -------------------------------------------- */ code = cf_next_token_should_be(&ep->cfp, "{", ";", NULL); if (code != PARSE_SUCCESS) return code; for (i = 0; i < intfloat_count; i++) { char *next = ((i + 1) < intfloat_count) ? "," : "}"; code = ep_parse_param_assign_intfloat(ep, param, is_float); if (code != PARSE_SUCCESS) return code; code = cf_next_token_should_be(&ep->cfp, next, ";", NULL); if (code != PARSE_SUCCESS) return code; } return PARSE_SUCCESS; } static int ep_parse_param_assignment_val(struct effect_parser *ep, struct ep_param *param) { if (param->is_texture) return ep_parse_param_assign_texture(ep, param); else if (strcmp(param->type, "int") == 0) return ep_parse_param_assign_intfloat(ep, param, false); else if (strcmp(param->type, "float") == 0) return ep_parse_param_assign_intfloat(ep, param, true); else if (astrcmp_n(param->type, "int", 3) == 0) return ep_parse_param_assign_intfloat_array(ep, param, false); else if (astrcmp_n(param->type, "float", 5) == 0) return ep_parse_param_assign_intfloat_array(ep, param, true); else if (astrcmp_n(param->type, "string", 6) == 0) return ep_parse_param_assign_string(ep, param); else if (strcmp(param->type, "bool") == 0) return ep_parse_param_assign_bool(ep, param); cf_adderror(&ep->cfp, "Invalid type '$1' used for assignment", LEX_ERROR, param->type, NULL, NULL); return PARSE_CONTINUE; } static inline bool ep_parse_param_assign(struct effect_parser *ep, struct ep_param *param) { if (ep_parse_param_assignment_val(ep, param) != PARSE_SUCCESS) return 
false; if (!cf_next_valid_token(&ep->cfp)) return false; return true; } /* static bool ep_parse_param_property(struct effect_parser *ep, struct ep_param *param) { } */ static void ep_parse_param(struct effect_parser *ep, char *type, char *name, bool is_property, bool is_const, bool is_uniform) { struct ep_param param; ep_param_init(¶m, type, name, is_property, is_const, is_uniform); if (cf_token_is(&ep->cfp, ";")) goto complete; if (cf_token_is(&ep->cfp, "[") && !ep_parse_param_array(ep, ¶m)) goto error; if (cf_token_is(&ep->cfp, "<") && !ep_parse_param_annotations(ep, ¶m)) goto error; if (cf_token_is(&ep->cfp, "=") && !ep_parse_param_assign(ep, ¶m)) goto error; /* if (cf_token_is(&ep->cfp, "<") && !ep_parse_param_property(ep, ¶m)) goto error; */ if (!cf_token_is(&ep->cfp, ";")) goto error; complete: da_push_back(ep->params, ¶m); return; error: ep_param_free(¶m); } static bool ep_get_var_specifiers(struct effect_parser *ep, bool *is_property, bool *is_const, bool *is_uniform) { while (true) { int code; code = ep_check_for_keyword(ep, "property", is_property); if (code == PARSE_EOF) return false; else if (code == PARSE_CONTINUE) continue; code = ep_check_for_keyword(ep, "const", is_const); if (code == PARSE_EOF) return false; else if (code == PARSE_CONTINUE) continue; code = ep_check_for_keyword(ep, "uniform", is_uniform); if (code == PARSE_EOF) return false; else if (code == PARSE_CONTINUE) continue; break; } return true; } static inline void report_invalid_func_keyword(struct effect_parser *ep, const char *name, bool val) { if (val) cf_adderror(&ep->cfp, "'$1' keyword cannot be used with a " "function", LEX_ERROR, name, NULL, NULL); } static void ep_parse_other(struct effect_parser *ep) { bool is_property = false, is_const = false, is_uniform = false; char *type = NULL, *name = NULL; if (!ep_get_var_specifiers(ep, &is_property, &is_const, &is_uniform)) goto error; if (cf_get_name(&ep->cfp, &type, "type", ";") != PARSE_SUCCESS) goto error; if 
(cf_next_name(&ep->cfp, &name, "name", ";") != PARSE_SUCCESS) goto error; if (!cf_next_valid_token(&ep->cfp)) goto error; if (cf_token_is(&ep->cfp, "(")) { report_invalid_func_keyword(ep, "property", is_property); report_invalid_func_keyword(ep, "const", is_const); report_invalid_func_keyword(ep, "uniform", is_uniform); ep_parse_function(ep, type, name); return; } else { ep_parse_param(ep, type, name, is_property, is_const, is_uniform); return; } error: bfree(type); bfree(name); } static bool ep_compile(struct effect_parser *ep); extern const char *gs_preprocessor_name(void); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) static void debug_get_default_value(struct gs_effect_param *param, char *buffer, unsigned long long buf_size) { if (param->default_val.num == 0) { snprintf(buffer, buf_size, "(null)"); return; } switch (param->type) { case GS_SHADER_PARAM_STRING: snprintf(buffer, buf_size, "'%.*s'", param->default_val.num, param->default_val.array); break; case GS_SHADER_PARAM_INT: snprintf(buffer, buf_size, "%ld", *(int *)(param->default_val.array + 0)); break; case GS_SHADER_PARAM_INT2: snprintf(buffer, buf_size, "%ld,%ld", *(int *)(param->default_val.array + 0), *(int *)(param->default_val.array + 4)); break; case GS_SHADER_PARAM_INT3: snprintf(buffer, buf_size, "%ld,%ld,%ld", *(int *)(param->default_val.array + 0), *(int *)(param->default_val.array + 4), *(int *)(param->default_val.array + 8)); break; case GS_SHADER_PARAM_INT4: snprintf(buffer, buf_size, "%ld,%ld,%ld,%ld", *(int *)(param->default_val.array + 0), *(int *)(param->default_val.array + 4), *(int *)(param->default_val.array + 8), *(int *)(param->default_val.array + 12)); break; case GS_SHADER_PARAM_FLOAT: snprintf(buffer, buf_size, "%e", *(float *)(param->default_val.array + 0)); break; case GS_SHADER_PARAM_VEC2: snprintf(buffer, buf_size, "%e,%e", *(float *)(param->default_val.array + 0), *(float *)(param->default_val.array + 4)); break; case GS_SHADER_PARAM_VEC3: snprintf(buffer, buf_size, 
"%e,%e,%e", *(float *)(param->default_val.array + 0), *(float *)(param->default_val.array + 4), *(float *)(param->default_val.array + 8)); break; case GS_SHADER_PARAM_VEC4: snprintf(buffer, buf_size, "%e,%e,%e,%e", *(float *)(param->default_val.array + 0), *(float *)(param->default_val.array + 4), *(float *)(param->default_val.array + 8), *(float *)(param->default_val.array + 12)); break; case GS_SHADER_PARAM_MATRIX4X4: snprintf(buffer, buf_size, "[[%e,%e,%e,%e],[%e,%e,%e,%e]," "[%e,%e,%e,%e],[%e,%e,%e,%e]]", *(float *)(param->default_val.array + 0), *(float *)(param->default_val.array + 4), *(float *)(param->default_val.array + 8), *(float *)(param->default_val.array + 12), *(float *)(param->default_val.array + 16), *(float *)(param->default_val.array + 20), *(float *)(param->default_val.array + 24), *(float *)(param->default_val.array + 28), *(float *)(param->default_val.array + 32), *(float *)(param->default_val.array + 36), *(float *)(param->default_val.array + 40), *(float *)(param->default_val.array + 44), *(float *)(param->default_val.array + 48), *(float *)(param->default_val.array + 52), *(float *)(param->default_val.array + 56), *(float *)(param->default_val.array + 60)); break; case GS_SHADER_PARAM_BOOL: snprintf(buffer, buf_size, "%s", (*param->default_val.array) != 0 ? 
"true\0" : "false\0"); break; case GS_SHADER_PARAM_UNKNOWN: case GS_SHADER_PARAM_TEXTURE: snprintf(buffer, buf_size, ""); break; } } static void debug_param(struct gs_effect_param *param, struct ep_param *param_in, unsigned long long idx, const char *offset) { char _debug_type[4096]; switch (param->type) { case GS_SHADER_PARAM_STRING: snprintf(_debug_type, sizeof(_debug_type), "string"); break; case GS_SHADER_PARAM_INT: snprintf(_debug_type, sizeof(_debug_type), "int"); break; case GS_SHADER_PARAM_INT2: snprintf(_debug_type, sizeof(_debug_type), "int2"); break; case GS_SHADER_PARAM_INT3: snprintf(_debug_type, sizeof(_debug_type), "int3"); break; case GS_SHADER_PARAM_INT4: snprintf(_debug_type, sizeof(_debug_type), "int4"); break; case GS_SHADER_PARAM_FLOAT: snprintf(_debug_type, sizeof(_debug_type), "float"); break; case GS_SHADER_PARAM_VEC2: snprintf(_debug_type, sizeof(_debug_type), "float2"); break; case GS_SHADER_PARAM_VEC3: snprintf(_debug_type, sizeof(_debug_type), "float3"); break; case GS_SHADER_PARAM_VEC4: snprintf(_debug_type, sizeof(_debug_type), "float4"); break; case GS_SHADER_PARAM_MATRIX4X4: snprintf(_debug_type, sizeof(_debug_type), "float4x4"); break; case GS_SHADER_PARAM_BOOL: snprintf(_debug_type, sizeof(_debug_type), "bool"); break; case GS_SHADER_PARAM_UNKNOWN: snprintf(_debug_type, sizeof(_debug_type), "unknown"); break; case GS_SHADER_PARAM_TEXTURE: snprintf(_debug_type, sizeof(_debug_type), "texture"); break; } char _debug_buf[4096]; debug_get_default_value(param, _debug_buf, sizeof(_debug_buf)); if (param->annotations.num > 0) { blog(LOG_DEBUG, "%s[%4lld] %.*s '%s' with value %.*s and %lld annotations:", offset, idx, sizeof(_debug_type), _debug_type, param->name, sizeof(_debug_buf), _debug_buf, param->annotations.num); } else { blog(LOG_DEBUG, "%s[%4lld] %.*s '%s' with value %.*s.", offset, idx, sizeof(_debug_type), _debug_type, param->name, sizeof(_debug_buf), _debug_buf); } } static void debug_param_annotation(struct gs_effect_param 
*param, struct ep_param *param_in, unsigned long long idx, const char *offset) { char _debug_buf[4096]; debug_get_default_value(param, _debug_buf, sizeof(_debug_buf)); blog(LOG_DEBUG, "%s[%4llu] %s '%s' with value %s", offset, idx, param_in->type, param->name, _debug_buf); } static void debug_print_string(const char *offset, const char *str) { // Bypass 4096 limit in def_log_handler. char const *begin = str; unsigned long long line = 1; for (char const *here = begin; here[0] != '\0'; here++) { char const *line_start = begin; unsigned long long len = here - begin; bool is_line = false; if (here[0] == '\r') { is_line = true; if (here[1] == '\n') { here += 1; } begin = here + 1; } else if (here[0] == '\n') { is_line = true; begin = here + 1; } if (is_line) { blog(LOG_DEBUG, "\t\t\t\t[%4llu] %.*s", line, (int)len, line_start); line++; } } if (begin[0] != '\0') { // Final line was not written. blog(LOG_DEBUG, "\t\t\t\t[%4llu] %s", line, begin); } } #endif bool ep_parse(struct effect_parser *ep, gs_effect_t *effect, const char *effect_string, const char *file) { bool success; const char *graphics_preprocessor = gs_preprocessor_name(); if (graphics_preprocessor) { struct cf_def def; cf_def_init(&def); def.name.str.array = graphics_preprocessor; def.name.str.len = strlen(graphics_preprocessor); strref_copy(&def.name.unmerged_str, &def.name.str); cf_preprocessor_add_def(&ep->cfp.pp, &def); } ep->effect = effect; if (!cf_parser_parse(&ep->cfp, effect_string, file)) return false; while (ep->cfp.cur_token && ep->cfp.cur_token->type != CFTOKEN_NONE) { if (cf_token_is(&ep->cfp, ";") || is_whitespace(*ep->cfp.cur_token->str.array)) { /* do nothing */ ep->cfp.cur_token++; } else if (cf_token_is(&ep->cfp, "struct")) { ep_parse_struct(ep); } else if (cf_token_is(&ep->cfp, "technique")) { ep_parse_technique(ep); } else if (cf_token_is(&ep->cfp, "sampler_state")) { ep_parse_sampler_state(ep); } else if (cf_token_is(&ep->cfp, "{")) { /* add error and pass braces */ 
cf_adderror(&ep->cfp, "Unexpected code segment", LEX_ERROR, NULL, NULL, NULL); cf_pass_pair(&ep->cfp, '{', '}'); } else { /* parameters and functions */ ep_parse_other(ep); } } #if defined(_DEBUG) && defined(_DEBUG_SHADERS) blog(LOG_DEBUG, "================================================================================"); blog(LOG_DEBUG, "Effect Parser reformatted shader '%s' to:", file); debug_print_string("\t", ep->cfp.lex.reformatted); #endif success = !error_data_has_errors(&ep->cfp.error_list); if (success) success = ep_compile(ep); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) blog(LOG_DEBUG, "================================================================================"); #endif return success; } /* ------------------------------------------------------------------------- */ static inline void ep_write_param(struct dstr *shader, struct ep_param *param, dstr_array_t *used_params) { if (param->written) return; if (param->is_const) { dstr_cat(shader, "const "); } else if (param->is_uniform) { struct dstr new; dstr_init_copy(&new, param->name); da_push_back(*used_params, &new); dstr_cat(shader, "uniform "); } dstr_cat(shader, param->type); dstr_cat(shader, " "); dstr_cat(shader, param->name); if (param->array_count) dstr_catf(shader, "[%u]", param->array_count); dstr_cat(shader, ";\n"); param->written = true; } static inline void ep_write_func_param_deps(struct effect_parser *ep, struct dstr *shader, struct ep_func *func, dstr_array_t *used_params) { size_t i; for (i = 0; i < func->param_deps.num; i++) { const char *name = func->param_deps.array[i]; struct ep_param *param = ep_getparam(ep, name); ep_write_param(shader, param, used_params); } if (func->param_deps.num) dstr_cat(shader, "\n\n"); } static void ep_write_sampler(struct dstr *shader, struct ep_sampler *sampler) { size_t i; if (sampler->written) return; dstr_cat(shader, "sampler_state "); dstr_cat(shader, sampler->name); dstr_cat(shader, " {"); for (i = 0; i < sampler->values.num; i++) { 
dstr_cat(shader, "\n\t"); dstr_cat(shader, sampler->states.array[i]); dstr_cat(shader, " = "); dstr_cat(shader, sampler->values.array[i]); dstr_cat(shader, ";\n"); } dstr_cat(shader, "\n};\n"); sampler->written = true; } static inline void ep_write_func_sampler_deps(struct effect_parser *ep, struct dstr *shader, struct ep_func *func) { size_t i; for (i = 0; i < func->sampler_deps.num; i++) { const char *name = func->sampler_deps.array[i]; struct ep_sampler *sampler = ep_getsampler(ep, name); ep_write_sampler(shader, sampler); dstr_cat(shader, "\n"); } } static inline void ep_write_var(struct dstr *shader, struct ep_var *var) { if (var->var_type == EP_VAR_INOUT) dstr_cat(shader, "inout "); else if (var->var_type == EP_VAR_OUT) dstr_cat(shader, "out "); else if (var->var_type == EP_VAR_UNIFORM) dstr_cat(shader, "uniform "); // The "in" input modifier is implied by default, so leave it blank // in that case. dstr_cat(shader, var->type); dstr_cat(shader, " "); dstr_cat(shader, var->name); if (var->mapping) { dstr_cat(shader, " : "); dstr_cat(shader, var->mapping); } } static void ep_write_struct(struct dstr *shader, struct ep_struct *st) { size_t i; if (st->written) return; dstr_cat(shader, "struct "); dstr_cat(shader, st->name); dstr_cat(shader, " {"); for (i = 0; i < st->vars.num; i++) { dstr_cat(shader, "\n\t"); ep_write_var(shader, st->vars.array + i); dstr_cat(shader, ";"); } dstr_cat(shader, "\n};\n"); st->written = true; } static inline void ep_write_func_struct_deps(struct effect_parser *ep, struct dstr *shader, struct ep_func *func) { size_t i; for (i = 0; i < func->struct_deps.num; i++) { const char *name = func->struct_deps.array[i]; struct ep_struct *st = ep_getstruct(ep, name); if (!st->written) { ep_write_struct(shader, st); dstr_cat(shader, "\n"); st->written = true; } } } static void ep_write_func(struct effect_parser *ep, struct dstr *shader, struct ep_func *func, dstr_array_t *used_params); static inline void ep_write_func_func_deps(struct 
effect_parser *ep, struct dstr *shader, struct ep_func *func, dstr_array_t *used_params) { size_t i; for (i = 0; i < func->func_deps.num; i++) { const char *name = func->func_deps.array[i]; struct ep_func *func_dep = ep_getfunc(ep, name); if (!func_dep->written) { ep_write_func(ep, shader, func_dep, used_params); dstr_cat(shader, "\n\n"); } } } static void ep_write_func(struct effect_parser *ep, struct dstr *shader, struct ep_func *func, dstr_array_t *used_params) { size_t i; func->written = true; ep_write_func_param_deps(ep, shader, func, used_params); ep_write_func_sampler_deps(ep, shader, func); ep_write_func_struct_deps(ep, shader, func); ep_write_func_func_deps(ep, shader, func, used_params); /* ------------------------------------ */ dstr_cat(shader, func->ret_type); dstr_cat(shader, " "); dstr_cat(shader, func->name); dstr_cat(shader, "("); for (i = 0; i < func->param_vars.num; i++) { struct ep_var *var = func->param_vars.array + i; if (i) dstr_cat(shader, ", "); ep_write_var(shader, var); } dstr_cat(shader, ")\n"); dstr_cat_dstr(shader, &func->contents); dstr_cat(shader, "\n"); } /* writes mapped vars used by the call as parameters for main */ static void ep_write_main_params(struct effect_parser *ep, struct dstr *shader, struct dstr *param_str, struct ep_func *func) { size_t i; bool empty_params = dstr_is_empty(param_str); for (i = 0; i < func->param_vars.num; i++) { struct ep_var *var = func->param_vars.array + i; struct ep_struct *st = NULL; bool mapped = (var->mapping != NULL); if (!mapped) { st = ep_getstruct(ep, var->type); if (st) mapped = ep_struct_mapped(st); } if (mapped) { dstr_cat(shader, var->type); dstr_cat(shader, " "); dstr_cat(shader, var->name); if (!st) { dstr_cat(shader, " : "); dstr_cat(shader, var->mapping); } if (!dstr_is_empty(param_str)) dstr_cat(param_str, ", "); dstr_cat(param_str, var->name); } } if (!empty_params) dstr_cat(param_str, ", "); } static void ep_write_main(struct effect_parser *ep, struct dstr *shader, struct ep_func 
*func, struct dstr *call_str) { struct dstr param_str; struct dstr adjusted_call; dstr_init(&param_str); dstr_init_copy_dstr(&adjusted_call, call_str); dstr_cat(shader, "\n"); dstr_cat(shader, func->ret_type); dstr_cat(shader, " main("); ep_write_main_params(ep, shader, &param_str, func); dstr_cat(shader, ")"); if (func->mapping) { dstr_cat(shader, " : "); dstr_cat(shader, func->mapping); } dstr_cat(shader, "\n{\n\treturn "); dstr_cat_dstr(shader, &adjusted_call); dstr_cat(shader, "\n}\n"); dstr_free(&adjusted_call); dstr_free(&param_str); } static inline void ep_reset_written(struct effect_parser *ep) { size_t i; for (i = 0; i < ep->params.num; i++) ep->params.array[i].written = false; for (i = 0; i < ep->structs.num; i++) ep->structs.array[i].written = false; for (i = 0; i < ep->funcs.num; i++) ep->funcs.array[i].written = false; for (i = 0; i < ep->samplers.num; i++) ep->samplers.array[i].written = false; } static void ep_makeshaderstring(struct effect_parser *ep, struct dstr *shader, cf_token_array_t *shader_call, dstr_array_t *used_params) { struct cf_token *token = shader_call->array; struct cf_token *func_name; struct ep_func *func; struct dstr call_str; dstr_init(&call_str); if (!token) return; while (token->type != CFTOKEN_NONE && is_whitespace(*token->str.array)) token++; if (token->type == CFTOKEN_NONE || strref_cmp(&token->str, "NULL") == 0) return; func_name = token; while (token->type != CFTOKEN_NONE) { struct ep_param *param = ep_getparam_strref(ep, &token->str); if (param) ep_write_param(shader, param, used_params); dstr_cat_strref(&call_str, &token->str); token++; } func = ep_getfunc_strref(ep, &func_name->str); if (!func) return; ep_write_func(ep, shader, func, used_params); ep_write_main(ep, shader, func, &call_str); dstr_free(&call_str); ep_reset_written(ep); } static void ep_compile_annotations(ep_param_array_t *ep_annotations, gs_effect_param_array_t *gsp_annotations, struct effect_parser *ep) { da_resize(*gsp_annotations, ep_annotations->num); size_t i; 
for (i = 0; i < ep_annotations->num; i++) { struct gs_effect_param *param = gsp_annotations->array + i; struct ep_param *param_in = ep_annotations->array + i; param->name = bstrdup(param_in->name); param->section = EFFECT_ANNOTATION; param->effect = ep->effect; da_move(param->default_val, param_in->default_val); param->type = get_effect_param_type(param_in->type); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) debug_param(param, param_in, i, "\t\t"); #endif } } static void ep_compile_param_annotations(struct ep_param *ep_param_input, struct gs_effect_param *gs_effect_input, struct effect_parser *ep) { ep_compile_annotations(&(ep_param_input->annotations), &(gs_effect_input->annotations), ep); } static void ep_compile_param(struct effect_parser *ep, size_t idx) { struct gs_effect_param *param; struct ep_param *param_in; param = ep->effect->params.array + idx; param_in = ep->params.array + idx; param_in->param = param; param->name = bstrdup(param_in->name); param->section = EFFECT_PARAM; param->effect = ep->effect; da_move(param->default_val, param_in->default_val); param->type = get_effect_param_type(param_in->type); if (strcmp(param_in->name, "ViewProj") == 0) ep->effect->view_proj = param; else if (strcmp(param_in->name, "World") == 0) ep->effect->world = param; #if defined(_DEBUG) && defined(_DEBUG_SHADERS) debug_param(param, param_in, idx, "\t"); #endif ep_compile_param_annotations(param_in, param, ep); } static bool ep_compile_pass_shaderparams(struct effect_parser *ep, pass_shaderparam_array_t *pass_params, dstr_array_t *used_params, gs_shader_t *shader) { size_t i; da_resize(*pass_params, used_params->num); for (i = 0; i < pass_params->num; i++) { struct dstr *param_name = used_params->array + i; struct pass_shaderparam *param = pass_params->array + i; param->eparam = gs_effect_get_param_by_name(ep->effect, param_name->array); param->sparam = gs_shader_get_param_by_name(shader, param_name->array); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) 
debug_param(param->eparam, 0, i, "\t\t\t\t"); #endif if (!param->sparam) { blog(LOG_ERROR, "Effect shader parameter not found"); return false; } } return true; } static inline bool ep_compile_pass_shader(struct effect_parser *ep, struct gs_effect_technique *tech, struct gs_effect_pass *pass, struct ep_pass *pass_in, size_t pass_idx, enum gs_shader_type type) { struct dstr shader_str; struct dstr location; dstr_array_t used_params; pass_shaderparam_array_t *pass_params = NULL; gs_shader_t *shader = NULL; bool success = true; char *errors = NULL; dstr_init(&shader_str); da_init(used_params); dstr_init(&location); dstr_copy(&location, ep->cfp.lex.file); if (type == GS_SHADER_VERTEX) dstr_cat(&location, " (Vertex "); else if (type == GS_SHADER_PIXEL) dstr_cat(&location, " (Pixel "); /*else if (type == SHADER_GEOMETRY) dstr_cat(&location, " (Geometry ");*/ assert(pass_idx <= UINT_MAX); dstr_catf(&location, "shader, technique %s, pass %u)", tech->name, (unsigned)pass_idx); if (type == GS_SHADER_VERTEX) { ep_makeshaderstring(ep, &shader_str, &pass_in->vertex_program, &used_params); pass->vertshader = gs_vertexshader_create(shader_str.array, location.array, &errors); shader = pass->vertshader; pass_params = &pass->vertshader_params; } else if (type == GS_SHADER_PIXEL) { ep_makeshaderstring(ep, &shader_str, &pass_in->fragment_program, &used_params); pass->pixelshader = gs_pixelshader_create(shader_str.array, location.array, &errors); shader = pass->pixelshader; pass_params = &pass->pixelshader_params; } if (errors && strlen(errors)) { cf_adderror(&ep->cfp, "Error creating shader: $1", LEX_ERROR, errors, NULL, NULL); } bfree(errors); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) blog(LOG_DEBUG, "\t\t\t%s Shader:", type == GS_SHADER_VERTEX ? 
"Vertex" : "Fragment"); blog(LOG_DEBUG, "\t\t\tCode:"); debug_print_string("\t\t\t\t\t", shader_str.array); blog(LOG_DEBUG, "\t\t\tParameters:"); #endif if (shader) success = ep_compile_pass_shaderparams(ep, pass_params, &used_params, shader); else success = false; dstr_free(&location); dstr_array_free(used_params.array, used_params.num); da_free(used_params); dstr_free(&shader_str); return success; } static bool ep_compile_pass(struct effect_parser *ep, struct gs_effect_technique *tech, struct ep_technique *tech_in, size_t idx) { struct gs_effect_pass *pass; struct ep_pass *pass_in; bool success = true; pass = tech->passes.array + idx; pass_in = tech_in->passes.array + idx; pass->name = bstrdup(pass_in->name); pass->section = EFFECT_PASS; #if defined(_DEBUG) && defined(_DEBUG_SHADERS) blog(LOG_DEBUG, "\t\t[%4lld] Pass '%s':", idx, pass->name); #endif if (!ep_compile_pass_shader(ep, tech, pass, pass_in, idx, GS_SHADER_VERTEX)) { success = false; blog(LOG_ERROR, "Pass (%zu) <%s> missing vertex shader!", idx, pass->name ? pass->name : ""); } if (!ep_compile_pass_shader(ep, tech, pass, pass_in, idx, GS_SHADER_PIXEL)) { success = false; blog(LOG_ERROR, "Pass (%zu) <%s> missing pixel shader!", idx, pass->name ? 
pass->name : ""); } return success; } static inline bool ep_compile_technique(struct effect_parser *ep, size_t idx) { struct gs_effect_technique *tech; struct ep_technique *tech_in; bool success = true; size_t i; tech = ep->effect->techniques.array + idx; tech_in = ep->techniques.array + idx; tech->name = bstrdup(tech_in->name); tech->section = EFFECT_TECHNIQUE; tech->effect = ep->effect; da_resize(tech->passes, tech_in->passes.num); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) blog(LOG_DEBUG, "\t[%4lld] Technique '%s' has %lld passes:", idx, tech->name, tech->passes.num); #endif for (i = 0; i < tech->passes.num; i++) { if (!ep_compile_pass(ep, tech, tech_in, i)) success = false; } return success; } static bool ep_compile(struct effect_parser *ep) { bool success = true; size_t i; assert(ep->effect); da_resize(ep->effect->params, ep->params.num); da_resize(ep->effect->techniques, ep->techniques.num); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) blog(LOG_DEBUG, "Shader has %lld parameters:", ep->params.num); #endif for (i = 0; i < ep->params.num; i++) ep_compile_param(ep, i); #if defined(_DEBUG) && defined(_DEBUG_SHADERS) blog(LOG_DEBUG, "Shader has %lld techniques:", ep->techniques.num); #endif for (i = 0; i < ep->techniques.num; i++) { if (!ep_compile_technique(ep, i)) success = false; } return success; } obs-studio-32.1.0-sources/libobs/graphics/vec3.c000644 001751 001751 00000004646 15153330235 022324 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "vec3.h" #include "vec4.h" #include "quat.h" #include "axisang.h" #include "plane.h" #include "matrix3.h" #include "math-extra.h" void vec3_from_vec4(struct vec3 *dst, const struct vec4 *v) { dst->m = v->m; dst->w = 0.0f; } float vec3_plane_dist(const struct vec3 *v, const struct plane *p) { return vec3_dot(v, &p->dir) - p->dist; } void vec3_rotate(struct vec3 *dst, const struct vec3 *v, const struct matrix3 *m) { struct vec3 temp; vec3_copy(&temp, v); dst->x = vec3_dot(&temp, &m->x); dst->y = vec3_dot(&temp, &m->y); dst->z = vec3_dot(&temp, &m->z); dst->w = 0.0f; } void vec3_transform(struct vec3 *dst, const struct vec3 *v, const struct matrix4 *m) { struct vec4 v4; vec4_from_vec3(&v4, v); vec4_transform(&v4, &v4, m); vec3_from_vec4(dst, &v4); } void vec3_transform3x4(struct vec3 *dst, const struct vec3 *v, const struct matrix3 *m) { struct vec3 temp; vec3_sub(&temp, v, &m->t); dst->x = vec3_dot(&temp, &m->x); dst->y = vec3_dot(&temp, &m->y); dst->z = vec3_dot(&temp, &m->z); dst->w = 0.0f; } void vec3_mirror(struct vec3 *dst, const struct vec3 *v, const struct plane *p) { struct vec3 temp; vec3_mulf(&temp, &p->dir, vec3_plane_dist(v, p) * 2.0f); vec3_sub(dst, v, &temp); } void vec3_mirrorv(struct vec3 *dst, const struct vec3 *v, const struct vec3 *vec) { struct vec3 temp; vec3_mulf(&temp, vec, vec3_dot(v, vec) * 2.0f); vec3_sub(dst, v, &temp); } void vec3_rand(struct vec3 *dst, int positive_only) { dst->x = rand_float(positive_only); dst->y = rand_float(positive_only); dst->z = rand_float(positive_only); dst->w = 0.0f; } obs-studio-32.1.0-sources/libobs/graphics/matrix3.c000644 001751 001751 00000007777 15153330235 023063 0ustar00runnerrunner000000 000000 
/****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include <xmmintrin.h> #include "matrix3.h" #include "matrix4.h" #include "plane.h" #include "quat.h" void matrix3_from_quat(struct matrix3 *dst, const struct quat *q) { float norm = quat_dot(q, q); float s = (norm > 0.0f) ? (2.0f / norm) : 0.0f; float xx = q->x * q->x * s; float yy = q->y * q->y * s; float zz = q->z * q->z * s; float xy = q->x * q->y * s; float xz = q->x * q->z * s; float yz = q->y * q->z * s; float wx = q->w * q->x * s; float wy = q->w * q->y * s; float wz = q->w * q->z * s; vec3_set(&dst->x, 1.0f - (yy + zz), xy + wz, xz - wy); vec3_set(&dst->y, xy - wz, 1.0f - (xx + zz), yz + wx); vec3_set(&dst->z, xz + wy, yz - wx, 1.0f - (xx + yy)); vec3_zero(&dst->t); } void matrix3_from_axisang(struct matrix3 *dst, const struct axisang *aa) { struct quat q; quat_from_axisang(&q, aa); matrix3_from_quat(dst, &q); } void matrix3_from_matrix4(struct matrix3 *dst, const struct matrix4 *m) { dst->x.m = m->x.m; dst->y.m = m->y.m; dst->z.m = m->z.m; dst->t.m = m->t.m; dst->x.w = 0.0f; dst->y.w = 0.0f; dst->z.w = 0.0f; dst->t.w = 0.0f; } void matrix3_mul(struct matrix3 *dst, const struct matrix3 *m1, const struct matrix3 *m2) { if (dst == m2) { struct matrix3 temp; vec3_rotate(&temp.x, &m1->x, m2); vec3_rotate(&temp.y, 
&m1->y, m2); vec3_rotate(&temp.z, &m1->z, m2); vec3_transform3x4(&temp.t, &m1->t, m2); matrix3_copy(dst, &temp); } else { vec3_rotate(&dst->x, &m1->x, m2); vec3_rotate(&dst->y, &m1->y, m2); vec3_rotate(&dst->z, &m1->z, m2); vec3_transform3x4(&dst->t, &m1->t, m2); } } void matrix3_rotate(struct matrix3 *dst, const struct matrix3 *m, const struct quat *q) { struct matrix3 temp; matrix3_from_quat(&temp, q); matrix3_mul(dst, m, &temp); } void matrix3_rotate_aa(struct matrix3 *dst, const struct matrix3 *m, const struct axisang *aa) { struct matrix3 temp; matrix3_from_axisang(&temp, aa); matrix3_mul(dst, m, &temp); } void matrix3_scale(struct matrix3 *dst, const struct matrix3 *m, const struct vec3 *v) { vec3_mul(&dst->x, &m->x, v); vec3_mul(&dst->y, &m->y, v); vec3_mul(&dst->z, &m->z, v); vec3_mul(&dst->t, &m->t, v); } void matrix3_transpose(struct matrix3 *dst, const struct matrix3 *m) { __m128 tmp1, tmp2; vec3_rotate(&dst->t, &m->t, m); vec3_neg(&dst->t, &dst->t); tmp1 = _mm_movelh_ps(m->x.m, m->y.m); tmp2 = _mm_movehl_ps(m->y.m, m->x.m); dst->x.m = _mm_shuffle_ps(tmp1, m->z.m, _MM_SHUFFLE(3, 0, 2, 0)); dst->y.m = _mm_shuffle_ps(tmp1, m->z.m, _MM_SHUFFLE(3, 1, 3, 1)); dst->z.m = _mm_shuffle_ps(tmp2, m->z.m, _MM_SHUFFLE(3, 2, 2, 0)); } void matrix3_inv(struct matrix3 *dst, const struct matrix3 *m) { struct matrix4 m4; matrix4_from_matrix3(&m4, m); matrix4_inv((struct matrix4 *)dst, &m4); dst->t.w = 0.0f; } void matrix3_mirror(struct matrix3 *dst, const struct matrix3 *m, const struct plane *p) { vec3_mirrorv(&dst->x, &m->x, &p->dir); vec3_mirrorv(&dst->y, &m->y, &p->dir); vec3_mirrorv(&dst->z, &m->z, &p->dir); vec3_mirror(&dst->t, &m->t, p); } void matrix3_mirrorv(struct matrix3 *dst, const struct matrix3 *m, const struct vec3 *v) { vec3_mirrorv(&dst->x, &m->x, v); vec3_mirrorv(&dst->y, &m->y, v); vec3_mirrorv(&dst->z, &m->z, v); vec3_mirrorv(&dst->t, &m->t, v); } obs-studio-32.1.0-sources/libobs/graphics/shader-parser.h000644 001751 001751 00000015106 15153330235 
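matrix3_from_quat above normalizes by the quaternion's squared length, so it also handles non-unit quaternions (for a unit quaternion, `s == 2`). A standalone scalar sketch of the same formula (the `q4` struct and `rot_from_quat` helper are illustrative, not part of libobs):

```c
#include <assert.h>
#include <math.h>

struct q4 { float x, y, z, w; };

/* Scalar version of matrix3_from_quat's math; rows m[0]/m[1]/m[2]
 * correspond to dst->x/dst->y/dst->z in libobs. */
static void rot_from_quat(float m[3][3], const struct q4 *q)
{
	float norm = q->x * q->x + q->y * q->y + q->z * q->z + q->w * q->w;
	float s = (norm > 0.0f) ? (2.0f / norm) : 0.0f;
	float xx = q->x * q->x * s, yy = q->y * q->y * s, zz = q->z * q->z * s;
	float xy = q->x * q->y * s, xz = q->x * q->z * s, yz = q->y * q->z * s;
	float wx = q->w * q->x * s, wy = q->w * q->y * s, wz = q->w * q->z * s;
	m[0][0] = 1.0f - (yy + zz); m[0][1] = xy + wz;          m[0][2] = xz - wy;
	m[1][0] = xy - wz;          m[1][1] = 1.0f - (xx + zz); m[1][2] = yz + wx;
	m[2][0] = xz + wy;          m[2][1] = yz - wx;          m[2][2] = 1.0f - (xx + yy);
}
```

Sanity checks: the identity quaternion (0, 0, 0, 1) yields the identity matrix, and a 90-degree rotation about z (x=0, y=0, z=w=√2/2) yields a basis whose first row is (0, 1, 0).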
024222 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "../util/cf-parser.h" #include "graphics.h" #ifdef __cplusplus extern "C" { #endif EXPORT enum gs_shader_param_type get_shader_param_type(const char *type); EXPORT enum gs_sample_filter get_sample_filter(const char *filter); EXPORT enum gs_address_mode get_address_mode(const char *address_mode); /* * Shader Parser * * Parses a shader and extracts data such as shader constants, samplers, * and vertex input information. Also allows the reformatting of shaders for * different libraries. 
This is usually used only by graphics libraries, */ enum shader_var_type { SHADER_VAR_NONE, SHADER_VAR_IN = SHADER_VAR_NONE, SHADER_VAR_INOUT, SHADER_VAR_OUT, SHADER_VAR_UNIFORM, SHADER_VAR_CONST }; struct shader_var { char *type; char *name; char *mapping; enum shader_var_type var_type; int array_count; size_t gl_sampler_id; /* optional: used/parsed by GL */ DARRAY(uint8_t) default_val; }; static inline void shader_var_init(struct shader_var *sv) { memset(sv, 0, sizeof(struct shader_var)); } static inline void shader_var_init_param(struct shader_var *sv, char *type, char *name, bool is_uniform, bool is_const) { if (is_uniform) sv->var_type = SHADER_VAR_UNIFORM; else if (is_const) sv->var_type = SHADER_VAR_CONST; else sv->var_type = SHADER_VAR_NONE; sv->type = type; sv->name = name; sv->mapping = NULL; sv->array_count = 0; sv->gl_sampler_id = (size_t)-1; da_init(sv->default_val); } static inline void shader_var_free(struct shader_var *sv) { bfree(sv->type); bfree(sv->name); bfree(sv->mapping); da_free(sv->default_val); } /* ------------------------------------------------------------------------- */ struct shader_sampler { char *name; DARRAY(char *) states; DARRAY(char *) values; }; static inline void shader_sampler_init(struct shader_sampler *ss) { memset(ss, 0, sizeof(struct shader_sampler)); } static inline void shader_sampler_free(struct shader_sampler *ss) { size_t i; for (i = 0; i < ss->states.num; i++) bfree(ss->states.array[i]); for (i = 0; i < ss->values.num; i++) bfree(ss->values.array[i]); bfree(ss->name); da_free(ss->states); da_free(ss->values); } EXPORT void shader_sampler_convert(struct shader_sampler *ss, struct gs_sampler_info *info); /* ------------------------------------------------------------------------- */ struct shader_struct { char *name; DARRAY(struct shader_var) vars; }; static inline void shader_struct_init(struct shader_struct *ss) { memset(ss, 0, sizeof(struct shader_struct)); } static inline void shader_struct_free(struct 
shader_struct *ss) { size_t i; for (i = 0; i < ss->vars.num; i++) shader_var_free(ss->vars.array + i); bfree(ss->name); da_free(ss->vars); } /* ------------------------------------------------------------------------- */ struct shader_func { char *name; char *return_type; char *mapping; DARRAY(struct shader_var) params; struct cf_token *start, *end; }; static inline void shader_func_init(struct shader_func *sf, char *return_type, char *name) { da_init(sf->params); sf->return_type = return_type; sf->mapping = NULL; sf->name = name; sf->start = NULL; sf->end = NULL; } static inline void shader_func_free(struct shader_func *sf) { size_t i; for (i = 0; i < sf->params.num; i++) shader_var_free(sf->params.array + i); bfree(sf->name); bfree(sf->return_type); bfree(sf->mapping); da_free(sf->params); } /* ------------------------------------------------------------------------- */ struct shader_parser { struct cf_parser cfp; DARRAY(struct shader_var) params; DARRAY(struct shader_struct) structs; DARRAY(struct shader_sampler) samplers; DARRAY(struct shader_func) funcs; }; static inline void shader_parser_init(struct shader_parser *sp) { cf_parser_init(&sp->cfp); da_init(sp->params); da_init(sp->structs); da_init(sp->samplers); da_init(sp->funcs); } static inline void shader_parser_free(struct shader_parser *sp) { size_t i; for (i = 0; i < sp->params.num; i++) shader_var_free(sp->params.array + i); for (i = 0; i < sp->structs.num; i++) shader_struct_free(sp->structs.array + i); for (i = 0; i < sp->samplers.num; i++) shader_sampler_free(sp->samplers.array + i); for (i = 0; i < sp->funcs.num; i++) shader_func_free(sp->funcs.array + i); cf_parser_free(&sp->cfp); da_free(sp->params); da_free(sp->structs); da_free(sp->samplers); da_free(sp->funcs); } EXPORT bool shader_parse(struct shader_parser *sp, const char *shader, const char *file); static inline char *shader_parser_geterrors(struct shader_parser *sp) { return error_data_buildstring(&sp->cfp.error_list); } static inline 
struct shader_var *shader_parser_getparam(struct shader_parser *sp, const char *param_name) { size_t i; for (i = 0; i < sp->params.num; i++) { struct shader_var *param = sp->params.array + i; if (strcmp(param->name, param_name) == 0) return param; } return NULL; } static inline struct shader_struct *shader_parser_getstruct(struct shader_parser *sp, const char *struct_name) { size_t i; for (i = 0; i < sp->structs.num; i++) { struct shader_struct *st = sp->structs.array + i; if (strcmp(st->name, struct_name) == 0) return st; } return NULL; } static inline struct shader_sampler *shader_parser_getsampler(struct shader_parser *sp, const char *sampler_name) { size_t i; for (i = 0; i < sp->samplers.num; i++) { struct shader_sampler *sampler = sp->samplers.array + i; if (strcmp(sampler->name, sampler_name) == 0) return sampler; } return NULL; } static inline struct shader_func *shader_parser_getfunc(struct shader_parser *sp, const char *func_name) { size_t i; for (i = 0; i < sp->funcs.num; i++) { struct shader_func *func = sp->funcs.array + i; if (strcmp(func->name, func_name) == 0) return func; } return NULL; } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/bounds.h000644 001751 001751 00000007715 15153330235 022763 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once #include "math-defs.h" #include "vec3.h" /* * Axis Aligned Bounding Box */ #ifdef __cplusplus extern "C" { #endif #define BOUNDS_OUTSIDE 1 #define BOUNDS_INSIDE 2 #define BOUNDS_PARTIAL 3 struct bounds { struct vec3 min, max; }; static inline void bounds_zero(struct bounds *dst) { vec3_zero(&dst->min); vec3_zero(&dst->max); } static inline void bounds_copy(struct bounds *dst, const struct bounds *b) { vec3_copy(&dst->min, &b->min); vec3_copy(&dst->max, &b->max); } EXPORT void bounds_move(struct bounds *dst, const struct bounds *b, const struct vec3 *v); EXPORT void bounds_scale(struct bounds *dst, const struct bounds *b, const struct vec3 *v); EXPORT void bounds_merge(struct bounds *dst, const struct bounds *b1, const struct bounds *b2); EXPORT void bounds_merge_point(struct bounds *dst, const struct bounds *b, const struct vec3 *v); EXPORT void bounds_get_point(struct vec3 *dst, const struct bounds *b, unsigned int i); EXPORT void bounds_get_center(struct vec3 *dst, const struct bounds *b); /** * Note: transforms as OBB, then converts back to AABB, which can result in * the actual size becoming larger than it originally was. 
*/ EXPORT void bounds_transform(struct bounds *dst, const struct bounds *b, const struct matrix4 *m); EXPORT void bounds_transform3x4(struct bounds *dst, const struct bounds *b, const struct matrix3 *m); EXPORT bool bounds_intersection_ray(const struct bounds *b, const struct vec3 *orig, const struct vec3 *dir, float *t); EXPORT bool bounds_intersection_line(const struct bounds *b, const struct vec3 *p1, const struct vec3 *p2, float *t); EXPORT bool bounds_plane_test(const struct bounds *b, const struct plane *p); EXPORT bool bounds_under_plane(const struct bounds *b, const struct plane *p); static inline bool bounds_inside(const struct bounds *b, const struct bounds *test) { return test->min.x >= b->min.x && test->min.y >= b->min.y && test->min.z >= b->min.z && test->max.x <= b->max.x && test->max.y <= b->max.y && test->max.z <= b->max.z; } static inline bool bounds_vec3_inside(const struct bounds *b, const struct vec3 *v) { return v->x >= (b->min.x - EPSILON) && v->x <= (b->max.x + EPSILON) && v->y >= (b->min.y - EPSILON) && v->y <= (b->max.y + EPSILON) && v->z >= (b->min.z - EPSILON) && v->z <= (b->max.z + EPSILON); } EXPORT bool bounds_intersects(const struct bounds *b, const struct bounds *test, float epsilon); EXPORT bool bounds_intersects_obb(const struct bounds *b, const struct bounds *test, const struct matrix4 *m, float epsilon); EXPORT bool bounds_intersects_obb3x4(const struct bounds *b, const struct bounds *test, const struct matrix3 *m, float epsilon); static inline bool bounds_intersects_ray(const struct bounds *b, const struct vec3 *orig, const struct vec3 *dir) { float t; return bounds_intersection_ray(b, orig, dir, &t); } static inline bool bounds_intersects_line(const struct bounds *b, const struct vec3 *p1, const struct vec3 *p2) { float t; return bounds_intersection_line(b, p1, p2, &t); } EXPORT float bounds_min_dist(const struct bounds *b, const struct plane *p); #ifdef __cplusplus } #endif 
obs-studio-32.1.0-sources/libobs/graphics/vec2.h000644 001751 001751 00000007527 15153330235 022331 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once #include "../util/c99defs.h" #include <math.h> #ifdef __cplusplus extern "C" { #endif struct vec2 { union { struct { float x, y; }; float ptr[2]; }; }; static inline void vec2_zero(struct vec2 *dst) { dst->x = 0.0f; dst->y = 0.0f; } static inline void vec2_set(struct vec2 *dst, float x, float y) { dst->x = x; dst->y = y; } static inline void vec2_copy(struct vec2 *dst, const struct vec2 *v) { dst->x = v->x; dst->y = v->y; } static inline void vec2_add(struct vec2 *dst, const struct vec2 *v1, const struct vec2 *v2) { vec2_set(dst, v1->x + v2->x, v1->y + v2->y); } static inline void vec2_sub(struct vec2 *dst, const struct vec2 *v1, const struct vec2 *v2) { vec2_set(dst, v1->x - v2->x, v1->y - v2->y); } static inline void vec2_mul(struct vec2 *dst, const struct vec2 *v1, const struct vec2 *v2) { vec2_set(dst, v1->x * v2->x, v1->y * v2->y); } static inline void vec2_div(struct vec2 *dst, const struct vec2 *v1, const struct vec2 *v2) { vec2_set(dst, v1->x / v2->x, v1->y / v2->y); } static inline void vec2_addf(struct vec2 *dst, const struct vec2 *v, float f) { vec2_set(dst,
v->x + f, v->y + f); } static inline void vec2_subf(struct vec2 *dst, const struct vec2 *v, float f) { vec2_set(dst, v->x - f, v->y - f); } static inline void vec2_mulf(struct vec2 *dst, const struct vec2 *v, float f) { vec2_set(dst, v->x * f, v->y * f); } static inline void vec2_divf(struct vec2 *dst, const struct vec2 *v, float f) { vec2_set(dst, v->x / f, v->y / f); } static inline void vec2_neg(struct vec2 *dst, const struct vec2 *v) { vec2_set(dst, -v->x, -v->y); } static inline float vec2_dot(const struct vec2 *v1, const struct vec2 *v2) { return v1->x * v2->x + v1->y * v2->y; } static inline float vec2_len(const struct vec2 *v) { return sqrtf(v->x * v->x + v->y * v->y); } static inline float vec2_dist(const struct vec2 *v1, const struct vec2 *v2) { struct vec2 temp; vec2_sub(&temp, v1, v2); return vec2_len(&temp); } static inline void vec2_minf(struct vec2 *dst, const struct vec2 *v, float val) { dst->x = (v->x < val) ? v->x : val; dst->y = (v->y < val) ? v->y : val; } static inline void vec2_min(struct vec2 *dst, const struct vec2 *v, const struct vec2 *min_v) { dst->x = (v->x < min_v->x) ? v->x : min_v->x; dst->y = (v->y < min_v->y) ? v->y : min_v->y; } static inline void vec2_maxf(struct vec2 *dst, const struct vec2 *v, float val) { dst->x = (v->x > val) ? v->x : val; dst->y = (v->y > val) ? v->y : val; } static inline void vec2_max(struct vec2 *dst, const struct vec2 *v, const struct vec2 *max_v) { dst->x = (v->x > max_v->x) ? v->x : max_v->x; dst->y = (v->y > max_v->y) ? 
v->y : max_v->y; } EXPORT void vec2_abs(struct vec2 *dst, const struct vec2 *v); EXPORT void vec2_floor(struct vec2 *dst, const struct vec2 *v); EXPORT void vec2_ceil(struct vec2 *dst, const struct vec2 *v); EXPORT int vec2_close(const struct vec2 *v1, const struct vec2 *v2, float epsilon); EXPORT void vec2_norm(struct vec2 *dst, const struct vec2 *v); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/axisang.c000644 001751 001751 00000002430 15153330235 023103 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #include "axisang.h" #include "quat.h" void axisang_from_quat(struct axisang *dst, const struct quat *q) { float len, leni; len = q->x * q->x + q->y * q->y + q->z * q->z; if (!close_float(len, 0.0f, EPSILON)) { leni = 1.0f / sqrtf(len); dst->x = q->x * leni; dst->y = q->y * leni; dst->z = q->z * leni; dst->w = acosf(q->w) * 2.0f; } else { dst->x = 0.0f; dst->y = 0.0f; dst->z = 0.0f; dst->w = 0.0f; } } obs-studio-32.1.0-sources/libobs/graphics/vec4.c000644 001751 001751 00000002536 15153330235 022321 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #include "vec4.h" #include "vec3.h" #include "matrix4.h" void vec4_from_vec3(struct vec4 *dst, const struct vec3 *v) { dst->m = v->m; dst->w = 1.0f; } void vec4_transform(struct vec4 *dst, const struct vec4 *v, const struct matrix4 *m) { struct vec4 temp; struct matrix4 transpose; matrix4_transpose(&transpose, m); temp.x = vec4_dot(&transpose.x, v); temp.y = vec4_dot(&transpose.y, v); temp.z = vec4_dot(&transpose.z, v); temp.w = vec4_dot(&transpose.t, v); vec4_copy(dst, &temp); } obs-studio-32.1.0-sources/libobs/graphics/image-file.c000644 001751 001751 00000026750 15153330235 023463 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include "image-file.h" #include "../util/base.h" #include "../util/platform.h" #include "../util/dstr.h" #include "vec4.h" #define blog(level, format, ...) \
blog(level, "%s: " format, __FUNCTION__, __VA_ARGS__) static void *bi_def_bitmap_create(int width, int height) { return bmalloc((size_t)4 * width * height); } static void bi_def_bitmap_set_opaque(void *bitmap, bool opaque) { UNUSED_PARAMETER(bitmap); UNUSED_PARAMETER(opaque); } static bool bi_def_bitmap_test_opaque(void *bitmap) { UNUSED_PARAMETER(bitmap); return false; } static unsigned char *bi_def_bitmap_get_buffer(void *bitmap) { return (unsigned char *)bitmap; } static void bi_def_bitmap_destroy(void *bitmap) { bfree(bitmap); } static void bi_def_bitmap_modified(void *bitmap) { UNUSED_PARAMETER(bitmap); } static inline int get_full_decoded_gif_size(gs_image_file_t *image) { return image->gif.width * image->gif.height * 4 * image->gif.frame_count; } static inline void *alloc_mem(gs_image_file_t *image, uint64_t *mem_usage, size_t size) { UNUSED_PARAMETER(image); if (mem_usage) *mem_usage += size; return bzalloc(size); } static bool init_animated_gif(gs_image_file_t *image, const char *path, uint64_t *mem_usage, enum gs_image_alpha_mode alpha_mode) { bool is_animated_gif = true; gif_result result; uint64_t max_size; size_t size, size_read; FILE *file; image->bitmap_callbacks.bitmap_create = bi_def_bitmap_create; image->bitmap_callbacks.bitmap_destroy = bi_def_bitmap_destroy; image->bitmap_callbacks.bitmap_get_buffer = bi_def_bitmap_get_buffer; image->bitmap_callbacks.bitmap_modified = bi_def_bitmap_modified; image->bitmap_callbacks.bitmap_set_opaque = bi_def_bitmap_set_opaque; image->bitmap_callbacks.bitmap_test_opaque = bi_def_bitmap_test_opaque; gif_create(&image->gif, &image->bitmap_callbacks); file = os_fopen(path, "rb"); if (!file) { blog(LOG_WARNING, "Failed to open file '%s'", path); goto fail; } fseek(file, 0, SEEK_END); size = (size_t)os_ftelli64(file); fseek(file, 0, SEEK_SET); image->gif_data = bmalloc(size); size_read = fread(image->gif_data, 1, size, file); if (size_read != size) { blog(LOG_WARNING, "Failed to fully read gif file '%s'.", path); goto 
fail; } do { result = gif_initialise(&image->gif, size, image->gif_data); if (result < 0) { blog(LOG_WARNING, "Failed to initialize gif '%s', " "possible file corruption", path); goto fail; } } while (result != GIF_OK); if (image->gif.width > 4096 || image->gif.height > 4096) { blog(LOG_WARNING, "Bad texture dimensions (%dx%d) in '%s'", image->gif.width, image->gif.height, path); goto fail; } max_size = (uint64_t)image->gif.width * (uint64_t)image->gif.height * (uint64_t)image->gif.frame_count * 4LLU; if ((uint64_t)get_full_decoded_gif_size(image) != max_size) { blog(LOG_WARNING, "Gif '%s' overflowed maximum pointer size", path); goto fail; } image->is_animated_gif = (image->gif.frame_count > 1 && result >= 0); if (image->is_animated_gif) { gif_decode_frame(&image->gif, 0); image->animation_frame_cache = alloc_mem(image, mem_usage, image->gif.frame_count * sizeof(uint8_t *)); image->animation_frame_data = alloc_mem(image, mem_usage, get_full_decoded_gif_size(image)); for (unsigned int i = 0; i < image->gif.frame_count; i++) { if (gif_decode_frame(&image->gif, i) != GIF_OK) blog(LOG_WARNING, "Couldn't decode frame %u " "of '%s'", i, path); } gif_decode_frame(&image->gif, 0); image->cx = (uint32_t)image->gif.width; image->cy = (uint32_t)image->gif.height; image->format = GS_RGBA; if (mem_usage) { *mem_usage += (size_t)4 * image->cx * image->cy; *mem_usage += size; } if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY_SRGB) { gs_premultiply_xyza_srgb_loop(image->gif.frame_image, (size_t)image->cx * image->cy); } else if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY) { gs_premultiply_xyza_loop(image->gif.frame_image, (size_t)image->cx * image->cy); } } else { gif_finalise(&image->gif); bfree(image->gif_data); image->gif_data = NULL; is_animated_gif = false; goto not_animated; } image->loaded = true; fail: if (!image->loaded) gs_image_file_free(image); not_animated: if (file) fclose(file); return is_animated_gif; } static void gs_image_file_init_internal(gs_image_file_t *image, 
const char *file, uint64_t *mem_usage, enum gs_color_space *space, enum gs_image_alpha_mode alpha_mode) { size_t len; if (!image) return; memset(image, 0, sizeof(*image)); if (!file) return; len = strlen(file); if (len > 4 && astrcmpi(file + len - 4, ".gif") == 0) { if (init_animated_gif(image, file, mem_usage, alpha_mode)) { return; } } image->texture_data = gs_create_texture_file_data3(file, alpha_mode, &image->format, &image->cx, &image->cy, space); if (mem_usage) { *mem_usage += image->cx * image->cy * gs_get_format_bpp(image->format) / 8; } image->loaded = !!image->texture_data; if (!image->loaded) { blog(LOG_WARNING, "Failed to load file '%s'", file); gs_image_file_free(image); } } void gs_image_file_init(gs_image_file_t *image, const char *file) { enum gs_color_space unused; gs_image_file_init_internal(image, file, NULL, &unused, GS_IMAGE_ALPHA_STRAIGHT); } void gs_image_file_free(gs_image_file_t *image) { if (!image) return; if (image->loaded) { if (image->is_animated_gif) { gif_finalise(&image->gif); bfree(image->animation_frame_cache); bfree(image->animation_frame_data); } gs_texture_destroy(image->texture); } bfree(image->texture_data); bfree(image->gif_data); memset(image, 0, sizeof(*image)); } void gs_image_file2_init(gs_image_file2_t *if2, const char *file) { enum gs_color_space unused; gs_image_file_init_internal(&if2->image, file, &if2->mem_usage, &unused, GS_IMAGE_ALPHA_STRAIGHT); } void gs_image_file3_init(gs_image_file3_t *if3, const char *file, enum gs_image_alpha_mode alpha_mode) { enum gs_color_space unused; gs_image_file_init_internal(&if3->image2.image, file, &if3->image2.mem_usage, &unused, alpha_mode); if3->alpha_mode = alpha_mode; } void gs_image_file4_init(gs_image_file4_t *if4, const char *file, enum gs_image_alpha_mode alpha_mode) { gs_image_file_init_internal(&if4->image3.image2.image, file, &if4->image3.image2.mem_usage, &if4->space, alpha_mode); if4->image3.alpha_mode = alpha_mode; } void gs_image_file_init_texture(gs_image_file_t 
*image) { if (!image->loaded) return; if (image->is_animated_gif) { image->texture = gs_texture_create(image->cx, image->cy, image->format, 1, (const uint8_t **)&image->gif.frame_image, GS_DYNAMIC); } else { image->texture = gs_texture_create(image->cx, image->cy, image->format, 1, (const uint8_t **)&image->texture_data, 0); bfree(image->texture_data); image->texture_data = NULL; } } static inline uint64_t get_time(gs_image_file_t *image, int i) { uint64_t val = (uint64_t)image->gif.frames[i].frame_delay * 10000000ULL; if (!val) val = 100000000; return val; } static inline int calculate_new_frame(gs_image_file_t *image, uint64_t elapsed_time_ns, int loops) { int new_frame = image->cur_frame; image->cur_time += elapsed_time_ns; for (;;) { uint64_t t = get_time(image, new_frame); if (image->cur_time <= t) break; image->cur_time -= t; if ((unsigned int)++new_frame == image->gif.frame_count) { if (!loops || ++image->cur_loop < loops) { new_frame = 0; } else if (image->cur_loop == loops) { new_frame--; break; } } } return new_frame; } static void decode_new_frame(gs_image_file_t *image, int new_frame, enum gs_image_alpha_mode alpha_mode) { if (!image->animation_frame_cache[new_frame]) { int last_frame; /* if looped, decode frame 0 */ last_frame = (new_frame < image->last_decoded_frame) ? 
0 : image->last_decoded_frame + 1; /* decode missed frames */ for (int i = last_frame; i < new_frame; i++) { if (gif_decode_frame(&image->gif, i) != GIF_OK) return; } /* decode actual desired frame */ if (gif_decode_frame(&image->gif, new_frame) == GIF_OK) { const size_t area = (size_t)image->gif.width * image->gif.height; size_t pos = new_frame * area * 4; image->animation_frame_cache[new_frame] = image->animation_frame_data + pos; if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY_SRGB) { gs_premultiply_xyza_srgb_loop(image->gif.frame_image, area); } else if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY) { gs_premultiply_xyza_loop(image->gif.frame_image, area); } memcpy(image->animation_frame_cache[new_frame], image->gif.frame_image, area * 4); image->last_decoded_frame = new_frame; } } image->cur_frame = new_frame; } static bool gs_image_file_tick_internal(gs_image_file_t *image, uint64_t elapsed_time_ns, enum gs_image_alpha_mode alpha_mode) { int loops; if (!image->is_animated_gif || !image->loaded) return false; loops = image->gif.loop_count; if (loops >= 0xFFFF) loops = 0; if (!loops || image->cur_loop < loops) { int new_frame = calculate_new_frame(image, elapsed_time_ns, loops); if (new_frame != image->cur_frame) { decode_new_frame(image, new_frame, alpha_mode); return true; } } return false; } bool gs_image_file_tick(gs_image_file_t *image, uint64_t elapsed_time_ns) { return gs_image_file_tick_internal(image, elapsed_time_ns, false); } bool gs_image_file2_tick(gs_image_file2_t *if2, uint64_t elapsed_time_ns) { return gs_image_file_tick_internal(&if2->image, elapsed_time_ns, false); } bool gs_image_file3_tick(gs_image_file3_t *if3, uint64_t elapsed_time_ns) { return gs_image_file_tick_internal(&if3->image2.image, elapsed_time_ns, if3->alpha_mode); } bool gs_image_file4_tick(gs_image_file4_t *if4, uint64_t elapsed_time_ns) { return gs_image_file_tick_internal(&if4->image3.image2.image, elapsed_time_ns, if4->image3.alpha_mode); } static void 
gs_image_file_update_texture_internal(gs_image_file_t *image, enum gs_image_alpha_mode alpha_mode) { if (!image->is_animated_gif || !image->loaded) return; if (!image->animation_frame_cache[image->cur_frame]) decode_new_frame(image, image->cur_frame, alpha_mode); gs_texture_set_image(image->texture, image->animation_frame_cache[image->cur_frame], image->gif.width * 4, false); } void gs_image_file_update_texture(gs_image_file_t *image) { gs_image_file_update_texture_internal(image, false); } void gs_image_file2_update_texture(gs_image_file2_t *if2) { gs_image_file_update_texture_internal(&if2->image, false); } void gs_image_file3_update_texture(gs_image_file3_t *if3) { gs_image_file_update_texture_internal(&if3->image2.image, if3->alpha_mode); } void gs_image_file4_update_texture(gs_image_file4_t *if4) { gs_image_file_update_texture_internal(&if4->image3.image2.image, if4->image3.alpha_mode); } obs-studio-32.1.0-sources/libobs/graphics/graphics-ffmpeg.c000644 001751 001751 00000050232 15153330235 024516 0ustar00runnerrunner000000 000000 #include "graphics.h" #include "half.h" #include "srgb.h" #include #include #include #include #include #include #include #ifdef _WIN32 #include #pragma comment(lib, "windowscodecs.lib") #endif struct ffmpeg_image { const char *file; AVFormatContext *fmt_ctx; AVCodecContext *decoder_ctx; int cx, cy; enum AVPixelFormat format; }; static bool ffmpeg_image_open_decoder_context(struct ffmpeg_image *info) { AVFormatContext *const fmt_ctx = info->fmt_ctx; int ret = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, 1, NULL, 0); if (ret < 0) { blog(LOG_WARNING, "Couldn't find video stream in file '%s': %s", info->file, av_err2str(ret)); return false; } AVStream *const stream = fmt_ctx->streams[ret]; AVCodecParameters *const codecpar = stream->codecpar; const AVCodec *const decoder = avcodec_find_decoder(codecpar->codec_id); // fix discarded-qualifiers if (!decoder) { blog(LOG_WARNING, "Failed to find decoder for file '%s'", info->file); 
return false; } AVCodecContext *const decoder_ctx = avcodec_alloc_context3(decoder); avcodec_parameters_to_context(decoder_ctx, codecpar); info->decoder_ctx = decoder_ctx; info->cx = codecpar->width; info->cy = codecpar->height; info->format = codecpar->format; ret = avcodec_open2(decoder_ctx, decoder, NULL); if (ret < 0) { blog(LOG_WARNING, "Failed to open video codec for file '%s': " "%s", info->file, av_err2str(ret)); return false; } return true; } static void ffmpeg_image_free(struct ffmpeg_image *info) { avcodec_free_context(&info->decoder_ctx); avformat_close_input(&info->fmt_ctx); } static bool ffmpeg_image_init(struct ffmpeg_image *info, const char *file) { int ret; if (!file || !*file) return false; memset(info, 0, sizeof(struct ffmpeg_image)); info->file = file; ret = avformat_open_input(&info->fmt_ctx, file, NULL, NULL); if (ret < 0) { blog(LOG_WARNING, "Failed to open file '%s': %s", info->file, av_err2str(ret)); return false; } ret = avformat_find_stream_info(info->fmt_ctx, NULL); if (ret < 0) { blog(LOG_WARNING, "Could not find stream info for file '%s':" " %s", info->file, av_err2str(ret)); goto fail; } if (!ffmpeg_image_open_decoder_context(info)) goto fail; return true; fail: ffmpeg_image_free(info); return false; } #ifdef _MSC_VER #define obs_bswap16(v) _byteswap_ushort(v) #else #define obs_bswap16(v) __builtin_bswap16(v) #endif static void *ffmpeg_image_copy_data_straight(struct ffmpeg_image *info, AVFrame *frame) { const size_t linesize = (size_t)info->cx * 4; const size_t totalsize = info->cy * linesize; void *data = bmalloc(totalsize); const size_t src_linesize = frame->linesize[0]; if (linesize != src_linesize) { const size_t min_line = linesize < src_linesize ? 
linesize : src_linesize; uint8_t *dst = data; const uint8_t *src = frame->data[0]; for (int y = 0; y < info->cy; y++) { memcpy(dst, src, min_line); dst += linesize; src += src_linesize; } } else { memcpy(data, frame->data[0], totalsize); } return data; } static inline size_t get_dst_position(const size_t w, const size_t h, const size_t x, const size_t y, int orient) { size_t res_x = 0; size_t res_y = 0; if (orient == 2) { /* * Orientation 2: Flip X * * 888888 888888 * 88 -> 88 * 8888 -> 8888 * 88 -> 88 * 88 88 * * (0, 0) -> (w, 0) * (0, h) -> (w, h) * (w, 0) -> (0, 0) * (w, h) -> (0, h) * * (w - x, y) */ res_x = w - 1 - x; res_y = y; } else if (orient == 3) { /* * Orientation 3: 180 degree * * 88 888888 * 88 -> 88 * 8888 -> 8888 * 88 -> 88 * 888888 88 * * (0, 0) -> (w, h) * (0, h) -> (w, 0) * (w, 0) -> (0, h) * (w, h) -> (0, 0) * * (w - x, h - y) */ res_x = w - 1 - x; res_y = h - 1 - y; } else if (orient == 4) { /* * Orientation 4: Flip Y * * 88 888888 * 88 -> 88 * 8888 -> 8888 * 88 -> 88 * 888888 88 * * (0, 0) -> (0, h) * (0, h) -> (0, 0) * (w, 0) -> (w, h) * (w, h) -> (w, 0) * * (x, h - y) */ res_x = x; res_y = h - 1 - y; } else if (orient == 5) { /* * Orientation 5: Flip Y + 90 degree CW * * 8888888888 888888 * 88 88 -> 88 * 88 -> 8888 * -> 88 * 88 * * (0, 0) -> (0, 0) * (0, h) -> (w, 0) * (w, 0) -> (0, h) * (w, h) -> (w, h) * * (y, x) */ res_x = y; res_y = x; } else if (orient == 6) { /* * Orientation 6: 90 degree CW * * 88 888888 * 88 88 -> 88 * 8888888888 -> 8888 * -> 88 * 88 * * (0, 0) -> (w, 0) * (0, h) -> (0, 0) * (w, 0) -> (w, h) * (w, h) -> (0, h) * * (w - y, x) */ res_x = w - 1 - y; res_y = x; } else if (orient == 7) { /* * Orientation 7: Flip Y + 90 degree CCW * * 88 888888 * 88 88 -> 88 * 8888888888 -> 8888 * -> 88 * 88 * * (0, 0) -> (w, h) * (0, h) -> (0, h) * (w, 0) -> (w, 0) * (w, h) -> (0, 0) * * (w - y, h - x) */ res_x = w - 1 - y; res_y = h - 1 - x; } else if (orient == 8) { /* * Orientation 8: 90 degree CCW * * 8888888888 888888 * 88 88 -> 88 * 
88 -> 8888 * -> 88 * 88 * * (0, 0) -> (0, h) * (0, h) -> (w, h) * (w, 0) -> (0, 0) * (w, h) -> (w, 0) * * (y, h - x) */ res_x = y; res_y = h - 1 - x; } return (res_x + res_y * w) * 4; } #define TILE_SIZE 16 #define MIN(a, b) (((a) < (b)) ? (a) : (b)) static void *ffmpeg_image_orient(struct ffmpeg_image *info, void *in_data, int orient) { const size_t sx = (size_t)info->cx; const size_t sy = (size_t)info->cy; uint8_t *data = NULL; if (orient == 0 || orient == 1) return in_data; data = bmalloc(sx * 4 * sy); if (orient >= 5 && orient < 9) { info->cx = (int)sy; info->cy = (int)sx; } uint8_t *src = in_data; size_t off_dst; size_t off_src = 0; for (size_t y0 = 0; y0 < sy; y0 += TILE_SIZE) { for (size_t x0 = 0; x0 < sx; x0 += TILE_SIZE) { size_t lim_x = MIN((size_t)sx, x0 + TILE_SIZE); size_t lim_y = MIN((size_t)sy, y0 + TILE_SIZE); for (size_t y = y0; y < lim_y; y++) { for (size_t x = x0; x < lim_x; x++) { off_src = (x + y * sx) * 4; off_dst = get_dst_position(info->cx, info->cy, x, y, orient); memcpy(data + off_dst, src + off_src, 4); } } } } bfree(in_data); return data; } static void *ffmpeg_image_reformat_frame(struct ffmpeg_image *info, AVFrame *frame, enum gs_image_alpha_mode alpha_mode) { struct SwsContext *sws_ctx = NULL; void *data = NULL; int ret = 0; AVDictionary *dict = frame->metadata; AVDictionaryEntry *entry = NULL; int orient = 0; if (dict) { entry = av_dict_get(dict, "Orientation", NULL, AV_DICT_MATCH_CASE); if (entry && entry->value) { orient = atoi(entry->value); } } if (info->format == AV_PIX_FMT_BGR0) { data = ffmpeg_image_copy_data_straight(info, frame); } else if (info->format == AV_PIX_FMT_RGBA || info->format == AV_PIX_FMT_BGRA) { if (alpha_mode == GS_IMAGE_ALPHA_STRAIGHT) { data = ffmpeg_image_copy_data_straight(info, frame); } else { const size_t linesize = (size_t)info->cx * 4; const size_t totalsize = info->cy * linesize; data = bmalloc(totalsize); const size_t src_linesize = frame->linesize[0]; const size_t min_line = linesize < src_linesize 
? linesize : src_linesize; uint8_t *dst = data; const uint8_t *src = frame->data[0]; const size_t row_elements = min_line >> 2; if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY_SRGB) { for (int y = 0; y < info->cy; y++) { gs_premultiply_xyza_srgb_loop_restrict(dst, src, row_elements); dst += linesize; src += src_linesize; } } else if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY) { for (int y = 0; y < info->cy; y++) { gs_premultiply_xyza_loop_restrict(dst, src, row_elements); dst += linesize; src += src_linesize; } } } } else if (info->format == AV_PIX_FMT_RGBA64BE) { const size_t dst_linesize = (size_t)info->cx * 4; data = bmalloc(info->cy * dst_linesize); const size_t src_linesize = frame->linesize[0]; const size_t src_min_line = (dst_linesize * 2) < src_linesize ? (dst_linesize * 2) : src_linesize; const size_t row_elements = src_min_line >> 3; uint8_t *dst = data; const uint8_t *src = frame->data[0]; uint16_t value[4]; float f[4]; if (alpha_mode == GS_IMAGE_ALPHA_STRAIGHT) { for (int y = 0; y < info->cy; y++) { for (size_t x = 0; x < row_elements; ++x) { memcpy(value, src, sizeof(value)); f[0] = (float)obs_bswap16(value[0]) / 65535.0f; f[1] = (float)obs_bswap16(value[1]) / 65535.0f; f[2] = (float)obs_bswap16(value[2]) / 65535.0f; f[3] = (float)obs_bswap16(value[3]) / 65535.0f; gs_float4_to_u8x4(dst, f); dst += sizeof(*dst) * 4; src += sizeof(value); } src += src_linesize - src_min_line; } } else if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY_SRGB) { for (int y = 0; y < info->cy; y++) { for (size_t x = 0; x < row_elements; ++x) { memcpy(value, src, sizeof(value)); f[0] = (float)obs_bswap16(value[0]) / 65535.0f; f[1] = (float)obs_bswap16(value[1]) / 65535.0f; f[2] = (float)obs_bswap16(value[2]) / 65535.0f; f[3] = (float)obs_bswap16(value[3]) / 65535.0f; gs_float3_srgb_nonlinear_to_linear(f); gs_premultiply_float4(f); gs_float3_srgb_linear_to_nonlinear(f); gs_float4_to_u8x4(dst, f); dst += sizeof(*dst) * 4; src += sizeof(value); } src += src_linesize - src_min_line; } } 
else if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY) { for (int y = 0; y < info->cy; y++) { for (size_t x = 0; x < row_elements; ++x) { memcpy(value, src, sizeof(value)); f[0] = (float)obs_bswap16(value[0]) / 65535.0f; f[1] = (float)obs_bswap16(value[1]) / 65535.0f; f[2] = (float)obs_bswap16(value[2]) / 65535.0f; f[3] = (float)obs_bswap16(value[3]) / 65535.0f; gs_premultiply_float4(f); gs_float4_to_u8x4(dst, f); dst += sizeof(*dst) * 4; src += sizeof(value); } src += src_linesize - src_min_line; } } info->format = AV_PIX_FMT_RGBA; } else { static const enum AVPixelFormat format = AV_PIX_FMT_BGRA; sws_ctx = sws_getContext(info->cx, info->cy, info->format, info->cx, info->cy, format, SWS_POINT, NULL, NULL, NULL); if (!sws_ctx) { blog(LOG_WARNING, "Failed to create scale context " "for '%s'", info->file); goto fail; } uint8_t *pointers[4]; int linesizes[4]; ret = av_image_alloc(pointers, linesizes, info->cx, info->cy, format, 32); if (ret < 0) { blog(LOG_WARNING, "av_image_alloc failed for '%s': %s", info->file, av_err2str(ret)); sws_freeContext(sws_ctx); goto fail; } ret = sws_scale(sws_ctx, (const uint8_t *const *)frame->data, frame->linesize, 0, info->cy, pointers, linesizes); sws_freeContext(sws_ctx); if (ret < 0) { blog(LOG_WARNING, "sws_scale failed for '%s': %s", info->file, av_err2str(ret)); av_freep(pointers); goto fail; } const size_t linesize = (size_t)info->cx * 4; data = bmalloc(info->cy * linesize); const uint8_t *src = pointers[0]; uint8_t *dst = data; for (size_t y = 0; y < (size_t)info->cy; y++) { memcpy(dst, src, linesize); dst += linesize; src += linesizes[0]; } av_freep(pointers); if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY_SRGB) { gs_premultiply_xyza_srgb_loop(data, (size_t)info->cx * info->cy); } else if (alpha_mode == GS_IMAGE_ALPHA_PREMULTIPLY) { gs_premultiply_xyza_loop(data, (size_t)info->cx * info->cy); } info->format = format; } data = ffmpeg_image_orient(info, data, orient); fail: return data; } static void *ffmpeg_image_decode(struct 
ffmpeg_image *info, enum gs_image_alpha_mode alpha_mode) { AVPacket packet = {0}; void *data = NULL; AVFrame *frame = av_frame_alloc(); int got_frame = 0; int ret; if (!frame) { blog(LOG_WARNING, "Failed to create frame data for '%s'", info->file); return NULL; } ret = av_read_frame(info->fmt_ctx, &packet); if (ret < 0) { blog(LOG_WARNING, "Failed to read image frame from '%s': %s", info->file, av_err2str(ret)); goto fail; } while (!got_frame) { ret = avcodec_send_packet(info->decoder_ctx, &packet); if (ret == 0) ret = avcodec_receive_frame(info->decoder_ctx, frame); got_frame = (ret == 0); if (ret == AVERROR_EOF || ret == AVERROR(EAGAIN)) ret = 0; if (ret < 0) { blog(LOG_WARNING, "Failed to decode frame for '%s': %s", info->file, av_err2str(ret)); goto fail; } } data = ffmpeg_image_reformat_frame(info, frame, alpha_mode); fail: av_packet_unref(&packet); av_frame_free(&frame); return data; } void gs_init_image_deps(void) {} void gs_free_image_deps(void) {} static inline enum gs_color_format convert_format(enum AVPixelFormat format) { switch (format) { case AV_PIX_FMT_RGBA: return GS_RGBA; case AV_PIX_FMT_BGRA: return GS_BGRA; case AV_PIX_FMT_BGR0: return GS_BGRX; case AV_PIX_FMT_RGBA64BE: return GS_RGBA16; default: return GS_BGRX; } } uint8_t *gs_create_texture_file_data(const char *file, enum gs_color_format *format, uint32_t *cx_out, uint32_t *cy_out) { struct ffmpeg_image image; uint8_t *data = NULL; if (ffmpeg_image_init(&image, file)) { data = ffmpeg_image_decode(&image, GS_IMAGE_ALPHA_STRAIGHT); if (data) { *format = convert_format(image.format); *cx_out = (uint32_t)image.cx; *cy_out = (uint32_t)image.cy; } ffmpeg_image_free(&image); } return data; } #ifdef _WIN32 static float pq_to_linear(float u) { const float common = powf(u, 1.f / 78.84375f); return powf(fabsf(max(common - 0.8359375f, 0.f) / (18.8515625f - 18.6875f * common)), 1.f / 0.1593017578f); } static void convert_pq_to_cccs(const BYTE *intermediate, const UINT intermediate_size, BYTE *bytes) { 
const BYTE *src_cursor = intermediate; const BYTE *src_cursor_end = src_cursor + intermediate_size; BYTE *dst_cursor = bytes; uint32_t rgb10; struct half rgba16[4]; rgba16[3].u = 0x3c00; while (src_cursor < src_cursor_end) { memcpy(&rgb10, src_cursor, sizeof(rgb10)); const float blue = (float)(rgb10 & 0x3ff) / 1023.f; const float green = (float)((rgb10 >> 10) & 0x3ff) / 1023.f; const float red = (float)((rgb10 >> 20) & 0x3ff) / 1023.f; const float red2020 = pq_to_linear(red); const float green2020 = pq_to_linear(green); const float blue2020 = pq_to_linear(blue); const float red709 = 1.6604910021084345f * red2020 - 0.58764113878854951f * green2020 - 0.072849863319884883f * blue2020; const float green709 = -0.12455047452159074f * red2020 + 1.1328998971259603f * green2020 - 0.0083494226043694768f * blue2020; const float blue709 = -0.018150763354905303f * red2020 - 0.10057889800800739f * green2020 + 1.1187296613629127f * blue2020; rgba16[0] = half_from_float(red709 * 125.f); rgba16[1] = half_from_float(green709 * 125.f); rgba16[2] = half_from_float(blue709 * 125.f); memcpy(dst_cursor, &rgba16, sizeof(rgba16)); src_cursor += 4; dst_cursor += 8; } } static void *wic_image_init_internal(const char *file, IWICBitmapFrameDecode *pFrame, enum gs_color_format *format, uint32_t *cx_out, uint32_t *cy_out, enum gs_color_space *space) { BYTE *bytes = NULL; WICPixelFormatGUID pixelFormat; HRESULT hr = pFrame->lpVtbl->GetPixelFormat(pFrame, &pixelFormat); if (SUCCEEDED(hr)) { const bool scrgb = memcmp(&pixelFormat, &GUID_WICPixelFormat64bppRGBAHalf, sizeof(pixelFormat)) == 0; const bool pq10 = memcmp(&pixelFormat, &GUID_WICPixelFormat32bppBGR101010, sizeof(pixelFormat)) == 0; if (scrgb || pq10) { UINT width, height; hr = pFrame->lpVtbl->GetSize(pFrame, &width, &height); if (SUCCEEDED(hr)) { const UINT pitch = 8 * width; const UINT size = pitch * height; bytes = bmalloc(size); if (bytes) { bool success = false; if (pq10) { const UINT intermediate_pitch = 4 * width; const UINT 
intermediate_size = intermediate_pitch * height; BYTE *intermediate = bmalloc(intermediate_size); if (intermediate) { hr = pFrame->lpVtbl->CopyPixels(pFrame, NULL, intermediate_pitch, intermediate_size, intermediate); success = SUCCEEDED(hr); if (success) { convert_pq_to_cccs(intermediate, intermediate_size, bytes); } else { blog(LOG_WARNING, "WIC: Failed to CopyPixels intermediate for file: %s", file); } bfree(intermediate); } else { blog(LOG_WARNING, "WIC: Failed to allocate intermediate for file: %s", file); } } else { hr = pFrame->lpVtbl->CopyPixels(pFrame, NULL, pitch, size, bytes); success = SUCCEEDED(hr); if (!success) { blog(LOG_WARNING, "WIC: Failed to CopyPixels for file: %s", file); } } if (success) { *format = GS_RGBA16F; *cx_out = width; *cy_out = height; *space = GS_CS_709_SCRGB; } else { bfree(bytes); bytes = NULL; } } else { blog(LOG_WARNING, "WIC: Failed to allocate for file: %s", file); } } else { blog(LOG_WARNING, "WIC: Failed to GetSize of frame for file: %s", file); } } else { blog(LOG_WARNING, "WIC: Only handle GUID_WICPixelFormat32bppBGR101010 and GUID_WICPixelFormat64bppRGBAHalf for now"); } } else { blog(LOG_WARNING, "WIC: Failed to GetPixelFormat for file: %s", file); } return bytes; } static void *wic_image_init(const struct ffmpeg_image *info, const char *file, enum gs_color_format *format, uint32_t *cx_out, uint32_t *cy_out, enum gs_color_space *space) { const size_t len = strlen(file); if (len <= 4 || astrcmpi(file + len - 4, ".jxr") != 0) { blog(LOG_WARNING, "WIC: Only handle JXR for WIC images for now"); return NULL; } BYTE *bytes = NULL; wchar_t *file_w = NULL; os_utf8_to_wcs_ptr(file, 0, &file_w); if (file_w) { IWICImagingFactory *pFactory = NULL; HRESULT hr = CoCreateInstance(&CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER, &IID_IWICImagingFactory, &pFactory); if (SUCCEEDED(hr)) { IWICBitmapDecoder *pDecoder = NULL; hr = pFactory->lpVtbl->CreateDecoderFromFilename(pFactory, file_w, NULL, GENERIC_READ,
WICDecodeMetadataCacheOnDemand, &pDecoder); if (SUCCEEDED(hr)) { IWICBitmapFrameDecode *pFrame = NULL; hr = pDecoder->lpVtbl->GetFrame(pDecoder, 0, &pFrame); if (SUCCEEDED(hr)) { bytes = wic_image_init_internal(file, pFrame, format, cx_out, cy_out, space); pFrame->lpVtbl->Release(pFrame); } else { blog(LOG_WARNING, "WIC: Failed to create IWICBitmapFrameDecode from file: %s", file); } pDecoder->lpVtbl->Release(pDecoder); } else { blog(LOG_WARNING, "WIC: Failed to create IWICBitmapDecoder from file: %s", file); } pFactory->lpVtbl->Release(pFactory); } else { blog(LOG_WARNING, "WIC: Failed to create IWICImagingFactory"); } bfree(file_w); } else { blog(LOG_WARNING, "WIC: Failed to widen file name: %s", file); } return bytes; } #endif uint8_t *gs_create_texture_file_data2(const char *file, enum gs_image_alpha_mode alpha_mode, enum gs_color_format *format, uint32_t *cx_out, uint32_t *cy_out) { enum gs_color_space unused; return gs_create_texture_file_data3(file, alpha_mode, format, cx_out, cy_out, &unused); } uint8_t *gs_create_texture_file_data3(const char *file, enum gs_image_alpha_mode alpha_mode, enum gs_color_format *format, uint32_t *cx_out, uint32_t *cy_out, enum gs_color_space *space) { struct ffmpeg_image image; uint8_t *data = NULL; if (ffmpeg_image_init(&image, file)) { data = ffmpeg_image_decode(&image, alpha_mode); if (data) { *format = convert_format(image.format); *cx_out = (uint32_t)image.cx; *cy_out = (uint32_t)image.cy; *space = GS_CS_SRGB; } ffmpeg_image_free(&image); } #ifdef _WIN32 if (data == NULL) { data = wic_image_init(&image, file, format, cx_out, cy_out, space); } #endif return data; } obs-studio-32.1.0-sources/libobs/graphics/quat.c000644 001751 001751 00000012464 15153330235 022433 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public 
License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include "quat.h" #include "vec3.h" #include "matrix3.h" #include "matrix4.h" #include "axisang.h" static inline void quat_vec3(struct vec3 *v, const struct quat *q) { v->m = q->m; v->w = 0.0f; } void quat_mul(struct quat *dst, const struct quat *q1, const struct quat *q2) { struct vec3 q1axis, q2axis; struct vec3 temp1, temp2; quat_vec3(&q1axis, q1); quat_vec3(&q2axis, q2); vec3_mulf(&temp1, &q2axis, q1->w); vec3_mulf(&temp2, &q1axis, q2->w); vec3_add(&temp1, &temp1, &temp2); vec3_cross(&temp2, &q1axis, &q2axis); vec3_add((struct vec3 *)dst, &temp1, &temp2); dst->w = (q1->w * q2->w) - vec3_dot(&q1axis, &q2axis); } void quat_from_axisang(struct quat *dst, const struct axisang *aa) { float halfa = aa->w * 0.5f; float sine = sinf(halfa); dst->x = aa->x * sine; dst->y = aa->y * sine; dst->z = aa->z * sine; dst->w = cosf(halfa); } struct f4x4 { float ptr[4][4]; }; void quat_from_matrix3(struct quat *dst, const struct matrix3 *m) { quat_from_matrix4(dst, (const struct matrix4 *)m); } void quat_from_matrix4(struct quat *dst, const struct matrix4 *m) { float tr = (m->x.x + m->y.y + m->z.z); float inv_half; float four_d; int i, j, k; if (tr > 0.0f) { four_d = sqrtf(tr + 1.0f); dst->w = four_d * 0.5f; inv_half = 0.5f / four_d; dst->x = (m->y.z - m->z.y) * inv_half; dst->y = (m->z.x - m->x.z) * inv_half; dst->z = (m->x.y - m->y.x) * inv_half; } else { struct f4x4 *val = (struct f4x4 *)m; i = (m->x.x > m->y.y) ?
0 : 1; if (m->z.z > val->ptr[i][i]) i = 2; j = (i + 1) % 3; k = (i + 2) % 3; /* ---------------------------------- */ four_d = sqrtf((val->ptr[i][i] - val->ptr[j][j] - val->ptr[k][k]) + 1.0f); dst->ptr[i] = four_d * 0.5f; inv_half = 0.5f / four_d; dst->ptr[j] = (val->ptr[i][j] + val->ptr[j][i]) * inv_half; dst->ptr[k] = (val->ptr[i][k] + val->ptr[k][i]) * inv_half; dst->w = (val->ptr[j][k] - val->ptr[k][j]) * inv_half; } } void quat_get_dir(struct vec3 *dst, const struct quat *q) { struct matrix3 m; matrix3_from_quat(&m, q); vec3_copy(dst, &m.z); } void quat_set_look_dir(struct quat *dst, const struct vec3 *dir) { struct vec3 new_dir; struct quat xz_rot, yz_rot; bool xz_valid; bool yz_valid; struct axisang aa; vec3_norm(&new_dir, dir); vec3_neg(&new_dir, &new_dir); quat_identity(&xz_rot); quat_identity(&yz_rot); xz_valid = close_float(new_dir.x, 0.0f, EPSILON) || close_float(new_dir.z, 0.0f, EPSILON); yz_valid = close_float(new_dir.y, 0.0f, EPSILON); if (xz_valid) { axisang_set(&aa, 0.0f, 1.0f, 0.0f, atan2f(new_dir.x, new_dir.z)); quat_from_axisang(&xz_rot, &aa); } if (yz_valid) { axisang_set(&aa, -1.0f, 0.0f, 0.0f, asinf(new_dir.y)); quat_from_axisang(&yz_rot, &aa); } if (!xz_valid) quat_copy(dst, &yz_rot); else if (!yz_valid) quat_copy(dst, &xz_rot); else quat_mul(dst, &xz_rot, &yz_rot); } void quat_log(struct quat *dst, const struct quat *q) { float angle = acosf(q->w); float sine = sinf(angle); float w = q->w; quat_copy(dst, q); dst->w = 0.0f; if ((fabsf(w) < 1.0f) && (fabsf(sine) >= EPSILON)) { sine = angle / sine; quat_mulf(dst, dst, sine); } } void quat_exp(struct quat *dst, const struct quat *q) { float length = sqrtf(q->x * q->x + q->y * q->y + q->z * q->z); float sine = sinf(length); quat_copy(dst, q); sine = (length > EPSILON) ? 
(sine / length) : 1.0f; quat_mulf(dst, dst, sine); dst->w = cosf(length); } void quat_interpolate(struct quat *dst, const struct quat *q1, const struct quat *q2, float t) { float dot = quat_dot(q1, q2); float anglef = acosf(dot); float sine, sinei, sinet, sineti; struct quat temp; if (anglef >= EPSILON) { sine = sinf(anglef); sinei = 1 / sine; sinet = sinf(anglef * t) * sinei; sineti = sinf(anglef * (1.0f - t)) * sinei; quat_mulf(&temp, q1, sineti); quat_mulf(dst, q2, sinet); quat_add(dst, &temp, dst); } else { quat_sub(&temp, q2, q1); quat_mulf(&temp, &temp, t); quat_add(dst, &temp, q1); } } void quat_get_tangent(struct quat *dst, const struct quat *prev, const struct quat *q, const struct quat *next) { struct quat temp; quat_sub(&temp, q, prev); quat_add(&temp, &temp, next); quat_sub(&temp, &temp, q); quat_mulf(dst, &temp, 0.5f); } void quat_interpolate_cubic(struct quat *dst, const struct quat *q1, const struct quat *q2, const struct quat *m1, const struct quat *m2, float t) { struct quat temp1, temp2; quat_interpolate(&temp1, q1, q2, t); quat_interpolate(&temp2, m1, m2, t); quat_interpolate(dst, &temp1, &temp2, 2.0f * (1.0f - t) * t); } obs-studio-32.1.0-sources/libobs/graphics/libnsgif/000755 001751 001751 00000000000 15153330731 023104 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/graphics/libnsgif/libnsgif.c000644 001751 001751 00000134106 15153330235 025051 0ustar00runnerrunner000000 000000 /* * Copyright 2003 James Bursa * Copyright 2004 John Tytgat * Copyright 2004 Richard Wilson * Copyright 2008 Sean Fox * * This file is part of NetSurf's libnsgif, http://www.netsurf-browser.org/ * Licenced under the MIT License, * http://www.opensource.org/licenses/mit-license.php */ #include <assert.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include "libnsgif.h" #ifdef NDEBUG # define LOG(x) ((void) 0) #else # define LOG(x) do { fprintf(stderr, x), fputc('\n', stderr); } while (0) #endif /* NDEBUG */ /* READING GIF FILES ================= The
functions provided by this file allow for efficient progressive GIF decoding. Whilst the initialisation does not ensure that there is sufficient image data to complete the entire frame, it does ensure that the information provided is valid. Any subsequent attempts to decode an initialised GIF are guaranteed to succeed, and any bytes of the image not present are assumed to be totally transparent. To begin decoding a GIF, the 'gif' structure must be initialised with the 'gif_data' and 'buffer_size' set to their initial values. The 'buffer_position' should initially be 0, and will be internally updated as the decoding commences. The caller should then repeatedly call gif_initialise() with the structure until the function returns 1, or no more data is available. Once the initialisation has begun, the decoder completes the variables 'frame_count' and 'frame_count_partial'. The former being the total number of frames that have been successfully initialised, and the latter being the number of frames that a partial amount of data is available for. This assists the caller in managing the animation whilst decoding is continuing. To decode a frame, the caller must use gif_decode_frame() which updates the current 'frame_image' to reflect the desired frame. The required 'disposal_method' is also updated to reflect how the frame should be plotted. The caller must not assume that the current 'frame_image' will be valid between calls if initialisation is still occurring, and should either always request that the frame is decoded (no processing will occur if the 'decoded_frame' has not been invalidated by initialisation) or perform the check itself. It should be noted that gif_finalise() should always be called, even if no frames were initialised. Additionally, it is the responsibility of the caller to free 'gif_data'. [rjw] - Fri 2nd April 2004 */ /* TO-DO LIST ================= + Plain text and comment extensions could be implemented if there is any interest in doing so. 
*/ /* Maximum colour table size */ #define GIF_MAX_COLOURS 256 /* Internal flag that the colour table needs to be processed */ #define GIF_PROCESS_COLOURS 0xaa000000 /* Internal flag that a frame is invalid/unprocessed */ #define GIF_INVALID_FRAME -1 /* Transparent colour */ #define GIF_TRANSPARENT_COLOUR 0x00 /* GIF Flags */ #define GIF_FRAME_COMBINE 1 #define GIF_FRAME_CLEAR 2 #define GIF_FRAME_RESTORE 3 #define GIF_FRAME_QUIRKS_RESTORE 4 #define GIF_IMAGE_SEPARATOR 0x2c #define GIF_INTERLACE_MASK 0x40 #define GIF_COLOUR_TABLE_MASK 0x80 #define GIF_COLOUR_TABLE_SIZE_MASK 0x07 #define GIF_EXTENSION_INTRODUCER 0x21 #define GIF_EXTENSION_GRAPHIC_CONTROL 0xf9 #define GIF_DISPOSAL_MASK 0x1c #define GIF_TRANSPARENCY_MASK 0x01 #define GIF_EXTENSION_COMMENT 0xfe #define GIF_EXTENSION_PLAIN_TEXT 0x01 #define GIF_EXTENSION_APPLICATION 0xff #define GIF_BLOCK_TERMINATOR 0x00 #define GIF_TRAILER 0x3b /* Internal GIF routines */ static gif_result gif_initialise_sprite(gif_animation *gif, unsigned int width, unsigned int height); static gif_result gif_initialise_frame(gif_animation *gif); static gif_result gif_initialise_frame_extensions(gif_animation *gif, const int frame); static gif_result gif_skip_frame_extensions(gif_animation *gif); static unsigned int gif_interlaced_line(int height, int y); /* Internal LZW routines */ static void gif_init_LZW(gif_animation *gif); static bool gif_next_LZW(gif_animation *gif); static int gif_next_code(gif_animation *gif, int code_size); static const int maskTbl[16] = {0x0000, 0x0001, 0x0003, 0x0007, 0x000f, 0x001f, 0x003f, 0x007f, 0x00ff, 0x01ff, 0x03ff, 0x07ff, 0x0fff, 0x1fff, 0x3fff, 0x7fff}; /** Initialises necessary gif_animation members. 
*/ void gif_create(gif_animation *gif, gif_bitmap_callback_vt *bitmap_callbacks) { memset(gif, 0, sizeof(gif_animation)); gif->bitmap_callbacks = *bitmap_callbacks; gif->decoded_frame = GIF_INVALID_FRAME; } /** Initialises any workspace held by the animation and attempts to decode any information that hasn't already been decoded. If an error occurs, all previously decoded frames are retained. @return GIF_FRAME_DATA_ERROR for GIF frame data error GIF_INSUFFICIENT_FRAME_DATA for insufficient data to process any more frames GIF_INSUFFICIENT_MEMORY for memory error GIF_DATA_ERROR for GIF error GIF_INSUFFICIENT_DATA for insufficient data to do anything GIF_OK for successful decoding GIF_WORKING for successful decoding if more frames are expected */ gif_result gif_initialise(gif_animation *gif, size_t size, unsigned char *data) { unsigned char *gif_data; unsigned int index; gif_result return_value; /* The GIF format is thoroughly documented; a full description * can be found at http://www.w3.org/Graphics/GIF/spec-gif89a.txt */ /* Initialize values */ gif->buffer_size = (unsigned int)size; gif->gif_data = data; /* Check for sufficient data to be a GIF (6-byte header + 7-byte logical screen descriptor) */ if (gif->buffer_size < 13) return GIF_INSUFFICIENT_DATA; /* Get our current processing position */ gif_data = gif->gif_data + gif->buffer_position; /* See if we should initialise the GIF */ if (gif->buffer_position == 0) { /* We want everything to be NULL before we start so we've no chance of freeing bad pointers (paranoia) */ gif->frame_image = NULL; gif->frames = NULL; gif->local_colour_table = NULL; gif->global_colour_table = NULL; /* The caller may have been lazy and not reset any values */ gif->frame_count = 0; gif->frame_count_partial = 0; gif->decoded_frame = GIF_INVALID_FRAME; /* 6-byte GIF file header is: * * +0 3CHARS Signature ('GIF') * +3 3CHARS Version ('87a' or '89a') */ if (strncmp((const char *) gif_data, "GIF", 3) != 0) return GIF_DATA_ERROR; gif_data += 
3; /* Ensure GIF reports version 87a or 89a */ /* if ((strncmp(gif_data, "87a", 3) != 0) && (strncmp(gif_data, "89a", 3) != 0)) LOG(("Unknown GIF format - proceeding anyway")); */ gif_data += 3; /* 7-byte Logical Screen Descriptor is: * * +0 SHORT Logical Screen Width * +2 SHORT Logical Screen Height * +4 CHAR __Packed Fields__ * 1BIT Global Colour Table Flag * 3BITS Colour Resolution * 1BIT Sort Flag * 3BITS Size of Global Colour Table * +5 CHAR Background Colour Index * +6 CHAR Pixel Aspect Ratio */ gif->width = gif_data[0] | (gif_data[1] << 8); gif->height = gif_data[2] | (gif_data[3] << 8); gif->global_colours = (gif_data[4] & GIF_COLOUR_TABLE_MASK); gif->colour_table_size = (2 << (gif_data[4] & GIF_COLOUR_TABLE_SIZE_MASK)); gif->background_index = gif_data[5]; gif->aspect_ratio = gif_data[6]; gif->loop_count = 1; gif_data += 7; /* Some broken GIFs report the size as the screen size they were created in. As such, we detect for the common cases and set the sizes as 0 if they are found which results in the GIF being the maximum size of the frames. */ if (((gif->width == 640) && (gif->height == 480)) || ((gif->width == 640) && (gif->height == 512)) || ((gif->width == 800) && (gif->height == 600)) || ((gif->width == 1024) && (gif->height == 768)) || ((gif->width == 1280) && (gif->height == 1024)) || ((gif->width == 1600) && (gif->height == 1200)) || ((gif->width == 0) || (gif->height == 0)) || ((gif->width > 2048) || (gif->height > 2048))) { gif->width = 1; gif->height = 1; } /* Allocate some data irrespective of whether we've got any colour tables. We always get the maximum size in case a GIF is lying to us. It's far better to give the wrong colours than to trample over some memory somewhere. 
*/ gif->global_colour_table = (unsigned int *)calloc(GIF_MAX_COLOURS, sizeof(int)); gif->local_colour_table = (unsigned int *)calloc(GIF_MAX_COLOURS, sizeof(int)); if ((gif->global_colour_table == NULL) || (gif->local_colour_table == NULL)) { gif_finalise(gif); return GIF_INSUFFICIENT_MEMORY; } /* Set the first colour to a value that will never occur in reality so we know if we've processed it */ gif->global_colour_table[0] = GIF_PROCESS_COLOURS; /* Check if the GIF has no frame data (13-byte header + 1-byte termination block) * Although generally useless, the GIF specification does not expressly prohibit this */ if (gif->buffer_size == 14) { if (gif_data[0] == GIF_TRAILER) return GIF_OK; else return GIF_INSUFFICIENT_DATA; } /* Initialise enough workspace for 4 frames initially */ if ((gif->frames = (gif_frame *)malloc(sizeof(gif_frame))) == NULL) { gif_finalise(gif); return GIF_INSUFFICIENT_MEMORY; } gif->frame_holders = 1; /* Initialise the sprite header */ assert(gif->bitmap_callbacks.bitmap_create); if ((gif->frame_image = gif->bitmap_callbacks.bitmap_create(gif->width, gif->height)) == NULL) { gif_finalise(gif); return GIF_INSUFFICIENT_MEMORY; } /* Remember we've done this now */ gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); } /* Do the colour map if we haven't already. As the top byte is always 0xff or 0x00 depending on the transparency we know if it's been filled in. */ if (gif->global_colour_table[0] == GIF_PROCESS_COLOURS) { /* Check for a global colour map signified by bit 7 */ if (gif->global_colours) { if (gif->buffer_size < (gif->colour_table_size * 3 + 12)) { return GIF_INSUFFICIENT_DATA; } for (index = 0; index < gif->colour_table_size; index++) { /* Gif colour map contents are r,g,b. * * We want to pack them bytewise into the * colour table, such that the red component * is in byte 0 and the alpha component is in * byte 3. 
*/ unsigned char *entry = (unsigned char *) &gif-> global_colour_table[index]; entry[0] = gif_data[0]; /* r */ entry[1] = gif_data[1]; /* g */ entry[2] = gif_data[2]; /* b */ entry[3] = 0xff; /* a */ gif_data += 3; } gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); } else { /* Create a default colour table with the first two colours as black and white */ unsigned int *entry = gif->global_colour_table; entry[0] = 0x00000000; /* Force Alpha channel to opaque */ ((unsigned char *) entry)[3] = 0xff; entry[1] = 0xffffffff; } } /* Repeatedly try to initialise frames */ while ((return_value = gif_initialise_frame(gif)) == GIF_WORKING); /* If there was a memory error tell the caller */ if ((return_value == GIF_INSUFFICIENT_MEMORY) || (return_value == GIF_DATA_ERROR)) return return_value; /* If we didn't have some frames then a GIF_INSUFFICIENT_DATA becomes a GIF_INSUFFICIENT_FRAME_DATA */ if ((return_value == GIF_INSUFFICIENT_DATA) && (gif->frame_count_partial > 0)) return GIF_INSUFFICIENT_FRAME_DATA; /* Return how many we got */ return return_value; } /** Updates the sprite memory size @return GIF_INSUFFICIENT_MEMORY for a memory error GIF_OK for success */ static gif_result gif_initialise_sprite(gif_animation *gif, unsigned int width, unsigned int height) { unsigned int max_width; unsigned int max_height; struct bitmap *buffer; /* Check if we've changed */ if ((width <= gif->width) && (height <= gif->height)) return GIF_OK; /* Get our maximum values */ max_width = (width > gif->width) ? width : gif->width; max_height = (height > gif->height) ? 
height : gif->height; /* Allocate some more memory */ assert(gif->bitmap_callbacks.bitmap_create); if ((buffer = gif->bitmap_callbacks.bitmap_create(max_width, max_height)) == NULL) return GIF_INSUFFICIENT_MEMORY; assert(gif->bitmap_callbacks.bitmap_destroy); gif->bitmap_callbacks.bitmap_destroy(gif->frame_image); gif->frame_image = buffer; gif->width = max_width; gif->height = max_height; /* Invalidate our currently decoded image */ gif->decoded_frame = GIF_INVALID_FRAME; return GIF_OK; } /** Attempts to initialise the next frame @return GIF_INSUFFICIENT_DATA for insufficient data to do anything GIF_FRAME_DATA_ERROR for GIF frame data error GIF_INSUFFICIENT_MEMORY for insufficient memory to process GIF_INSUFFICIENT_FRAME_DATA for insufficient data to complete the frame GIF_DATA_ERROR for GIF error (invalid frame header) GIF_OK for successful decoding GIF_WORKING for successful decoding if more frames are expected */ static gif_result gif_initialise_frame(gif_animation *gif) { int frame; gif_frame *temp_buf; unsigned char *gif_data, *gif_end; int gif_bytes; unsigned int flags = 0; unsigned int width, height, offset_x, offset_y; unsigned int block_size, colour_table_size; bool first_image = true; gif_result return_value; /* Get the frame to decode and our data position */ frame = gif->frame_count; /* Get our buffer position etc. 
*/ gif_data = (unsigned char *)(gif->gif_data + gif->buffer_position); gif_end = (unsigned char *)(gif->gif_data + gif->buffer_size); gif_bytes = (unsigned int)(gif_end - gif_data); /* Check if we've finished */ if ((gif_bytes > 0) && (gif_data[0] == GIF_TRAILER)) return GIF_OK; /* Check if we have enough data * The shortest block of data is a 4-byte comment extension + 1-byte block terminator + 1-byte gif trailer */ if (gif_bytes < 6) return GIF_INSUFFICIENT_DATA; /* We could theoretically get some junk data that gives us millions of frames, so we ensure that we don't have a silly number */ if (frame > 4096) return GIF_FRAME_DATA_ERROR; /* Get some memory to store our pointers in etc. */ if ((int)gif->frame_holders <= frame) { /* Allocate more memory */ if ((temp_buf = (gif_frame *)realloc(gif->frames, (frame + 1) * sizeof(gif_frame))) == NULL) return GIF_INSUFFICIENT_MEMORY; gif->frames = temp_buf; gif->frame_holders = frame + 1; } /* Store our frame pointer. We would do it when allocating except we start off with one frame allocated so we can always use realloc. */ gif->frames[frame].frame_pointer = gif->buffer_position; gif->frames[frame].display = false; gif->frames[frame].virgin = true; gif->frames[frame].disposal_method = 0; gif->frames[frame].transparency = false; gif->frames[frame].frame_delay = 100; gif->frames[frame].redraw_required = false; /* Invalidate any previous decoding we have of this frame */ if (gif->decoded_frame == frame) gif->decoded_frame = GIF_INVALID_FRAME; /* We pretend to initialise the frames, but really we just skip over all the data contained within. This is all basically a cut down version of gif_decode_frame that doesn't have any of the LZW bits in it. 
*/ /* Initialise any extensions */ gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); if ((return_value = gif_initialise_frame_extensions(gif, frame)) != GIF_OK) return return_value; gif_data = (gif->gif_data + gif->buffer_position); gif_bytes = (unsigned int)(gif_end - gif_data); /* Check if we've finished */ if ((gif_bytes = (unsigned int)(gif_end - gif_data)) < 1) return GIF_INSUFFICIENT_FRAME_DATA; else if (gif_data[0] == GIF_TRAILER) { gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); gif->frame_count = frame + 1; return GIF_OK; } /* If we're not done, there should be an image descriptor */ if (gif_data[0] != GIF_IMAGE_SEPARATOR) return GIF_FRAME_DATA_ERROR; /* Do some simple boundary checking */ offset_x = gif_data[1] | (gif_data[2] << 8); offset_y = gif_data[3] | (gif_data[4] << 8); width = gif_data[5] | (gif_data[6] << 8); height = gif_data[7] | (gif_data[8] << 8); /* Set up the redraw characteristics. We have to check for extending the area due to multi-image frames. 
*/ if (!first_image) { if (gif->frames[frame].redraw_x > offset_x) { gif->frames[frame].redraw_width += (gif->frames[frame].redraw_x - offset_x); gif->frames[frame].redraw_x = offset_x; } if (gif->frames[frame].redraw_y > offset_y) { gif->frames[frame].redraw_height += (gif->frames[frame].redraw_y - offset_y); gif->frames[frame].redraw_y = offset_y; } if ((offset_x + width) > (gif->frames[frame].redraw_x + gif->frames[frame].redraw_width)) gif->frames[frame].redraw_width = (offset_x + width) - gif->frames[frame].redraw_x; if ((offset_y + height) > (gif->frames[frame].redraw_y + gif->frames[frame].redraw_height)) gif->frames[frame].redraw_height = (offset_y + height) - gif->frames[frame].redraw_y; } else { first_image = false; gif->frames[frame].redraw_x = offset_x; gif->frames[frame].redraw_y = offset_y; gif->frames[frame].redraw_width = width; gif->frames[frame].redraw_height = height; } /* if we are clearing the background then we need to redraw enough to cover the previous frame too */ gif->frames[frame].redraw_required = ((gif->frames[frame].disposal_method == GIF_FRAME_CLEAR) || (gif->frames[frame].disposal_method == GIF_FRAME_RESTORE)); /* Boundary checking - shouldn't ever happen except with junk data */ if (gif_initialise_sprite(gif, (offset_x + width), (offset_y + height))) return GIF_INSUFFICIENT_MEMORY; /* Decode the flags */ flags = gif_data[9]; colour_table_size = 2 << (flags & GIF_COLOUR_TABLE_SIZE_MASK); /* Move our data onwards and remember we've got a bit of this frame */ gif_data += 10; gif_bytes = (unsigned int)(gif_end - gif_data); gif->frame_count_partial = frame + 1; /* Skip the local colour table */ if (flags & GIF_COLOUR_TABLE_MASK) { gif_data += 3 * colour_table_size; if ((gif_bytes = (unsigned int)(gif_end - gif_data)) < 0) return GIF_INSUFFICIENT_FRAME_DATA; } /* Ensure we have a correct code size */ if (gif_data[0] > GIF_MAX_LZW) return GIF_DATA_ERROR; /* Move our pointer to the actual image data */ gif_data++; if (--gif_bytes < 0) 
return GIF_INSUFFICIENT_FRAME_DATA; /* Repeatedly skip blocks until we get a zero block or run out of data * These blocks of image data are processed later by gif_decode_frame() */ block_size = 0; while (block_size != 1) { block_size = gif_data[0] + 1; /* Check if the frame data runs off the end of the file */ if ((int)(gif_bytes - block_size) < 0) { /* Try to recover by signaling the end of the gif. * Once we get garbage data, there is no logical * way to determine where the next frame is. * It's probably better to partially load the gif * than not at all. */ if (gif_bytes >= 2) { gif_data[0] = 0; gif_data[1] = GIF_TRAILER; gif_bytes = 1; ++gif_data; break; } else return GIF_INSUFFICIENT_FRAME_DATA; } else { gif_bytes -= block_size; gif_data += block_size; } } /* Add the frame and set the display flag */ gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); gif->frame_count = frame + 1; gif->frames[frame].display = true; /* Check if we've finished */ if (gif_bytes < 1) return GIF_INSUFFICIENT_FRAME_DATA; else if (gif_data[0] == GIF_TRAILER) return GIF_OK; return GIF_WORKING; } /** Attempts to initialise the frame's extensions @return GIF_INSUFFICIENT_FRAME_DATA for insufficient data to complete the frame GIF_OK for successful initialisation */ static gif_result gif_initialise_frame_extensions(gif_animation *gif, const int frame) { unsigned char *gif_data, *gif_end; int gif_bytes; unsigned int block_size; /* Get our buffer position etc. 
*/ gif_data = (unsigned char *)(gif->gif_data + gif->buffer_position); gif_end = (unsigned char *)(gif->gif_data + gif->buffer_size); /* Initialise the extensions */ while (gif_data[0] == GIF_EXTENSION_INTRODUCER) { ++gif_data; gif_bytes = (unsigned int)(gif_end - gif_data); /* Switch on extension label */ switch(gif_data[0]) { /* 6-byte Graphic Control Extension is: * * +0 CHAR Graphic Control Label * +1 CHAR Block Size * +2 CHAR __Packed Fields__ * 3BITS Reserved * 3BITS Disposal Method * 1BIT User Input Flag * 1BIT Transparent Color Flag * +3 SHORT Delay Time * +5 CHAR Transparent Color Index */ case GIF_EXTENSION_GRAPHIC_CONTROL: if (gif_bytes < 6) return GIF_INSUFFICIENT_FRAME_DATA; gif->frames[frame].frame_delay = gif_data[3] | (gif_data[4] << 8); if (gif_data[2] & GIF_TRANSPARENCY_MASK) { gif->frames[frame].transparency = true; gif->frames[frame].transparency_index = gif_data[5]; } gif->frames[frame].disposal_method = ((gif_data[2] & GIF_DISPOSAL_MASK) >> 2); /* I have encountered documentation and GIFs in the wild that use * 0x04 to restore the previous frame, rather than the officially * documented 0x03. I believe some (older?) software may even actually * export this way. We handle this as a type of "quirks" mode. */ if (gif->frames[frame].disposal_method == GIF_FRAME_QUIRKS_RESTORE) gif->frames[frame].disposal_method = GIF_FRAME_RESTORE; gif_data += (2 + gif_data[1]); break; /* 14-byte+ Application Extension is: * * +0 CHAR Application Extension Label * +1 CHAR Block Size * +2 8CHARS Application Identifier * +10 3CHARS Appl. 
Authentication Code * +13 1-256 Application Data (Data sub-blocks) */ case GIF_EXTENSION_APPLICATION: if (gif_bytes < 17) return GIF_INSUFFICIENT_FRAME_DATA; if ((gif_data[1] == 0x0b) && (strncmp((const char *) gif_data + 2, "NETSCAPE2.0", 11) == 0) && (gif_data[13] == 0x03) && (gif_data[14] == 0x01)) { gif->loop_count = gif_data[15] | (gif_data[16] << 8); } gif_data += (2 + gif_data[1]); break; /* Move the pointer to the first data sub-block * Skip 1 byte for the extension label */ case GIF_EXTENSION_COMMENT: ++gif_data; break; /* Move the pointer to the first data sub-block * Skip 2 bytes for the extension label and size fields * Skip the extension size itself */ default: gif_data += (2 + gif_data[1]); } /* Repeatedly skip blocks until we get a zero block or run out of data * This data is ignored by this gif decoder */ gif_bytes = (unsigned int)(gif_end - gif_data); block_size = 0; while (gif_data[0] != GIF_BLOCK_TERMINATOR) { block_size = gif_data[0] + 1; if ((gif_bytes -= block_size) < 0) return GIF_INSUFFICIENT_FRAME_DATA; gif_data += block_size; } ++gif_data; } /* Set buffer position and return */ gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); return GIF_OK; } /** Decodes a GIF frame. 
@return GIF_FRAME_DATA_ERROR for GIF frame data error GIF_INSUFFICIENT_FRAME_DATA for insufficient data to complete the frame GIF_DATA_ERROR for GIF error (invalid frame header) GIF_INSUFFICIENT_DATA for insufficient data to do anything GIF_INSUFFICIENT_MEMORY for insufficient memory to process GIF_OK for successful decoding If a frame does not contain any image data, GIF_OK is returned and gif->current_error is set to GIF_FRAME_NO_DISPLAY */ gif_result gif_decode_frame(gif_animation *gif, unsigned int frame) { unsigned int index = 0; unsigned char *gif_data, *gif_end; int gif_bytes; unsigned int width, height, offset_x, offset_y; unsigned int flags, colour_table_size, interlace; unsigned int *colour_table; unsigned int *frame_data = 0; // Set to 0 for no warnings unsigned int *frame_scanline; unsigned int save_buffer_position; gif_result return_value = GIF_OK; unsigned int x, y, decode_y, burst_bytes; int last_undisposed_frame = (frame - 1); register unsigned char colour; /* Ensure we have a frame to decode */ if (frame >= gif->frame_count_partial) return GIF_INSUFFICIENT_DATA; /* Ensure this frame is supposed to be decoded */ if (gif->frames[frame].display == false) { gif->current_error = GIF_FRAME_NO_DISPLAY; return GIF_OK; } if ((!gif->clear_image) && ((int)frame == gif->decoded_frame)) return GIF_OK; /* Get the start of our frame data and the end of the GIF data */ gif_data = gif->gif_data + gif->frames[frame].frame_pointer; gif_end = gif->gif_data + gif->buffer_size; gif_bytes = (unsigned int)(gif_end - gif_data); /* Check if we have enough data * The shortest block of data is a 10-byte image descriptor + 1-byte gif trailer */ if (gif_bytes < 12) return GIF_INSUFFICIENT_FRAME_DATA; /* Save the buffer position */ save_buffer_position = gif->buffer_position; gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); /* Skip any extensions because we all ready processed them */ if ((return_value = gif_skip_frame_extensions(gif)) != GIF_OK) goto 
gif_decode_frame_exit; gif_data = (gif->gif_data + gif->buffer_position); gif_bytes = (unsigned int)(gif_end - gif_data); /* Ensure we have enough data for the 10-byte image descriptor + 1-byte gif trailer */ if (gif_bytes < 12) { return_value = GIF_INSUFFICIENT_FRAME_DATA; goto gif_decode_frame_exit; } /* 10-byte Image Descriptor is: * * +0 CHAR Image Separator (0x2c) * +1 SHORT Image Left Position * +3 SHORT Image Top Position * +5 SHORT Width * +7 SHORT Height * +9 CHAR __Packed Fields__ * 1BIT Local Colour Table Flag * 1BIT Interlace Flag * 1BIT Sort Flag * 2BITS Reserved * 3BITS Size of Local Colour Table */ if (gif_data[0] != GIF_IMAGE_SEPARATOR) { return_value = GIF_DATA_ERROR; goto gif_decode_frame_exit; } offset_x = gif_data[1] | (gif_data[2] << 8); offset_y = gif_data[3] | (gif_data[4] << 8); width = gif_data[5] | (gif_data[6] << 8); height = gif_data[7] | (gif_data[8] << 8); /* Boundary checking - shouldn't ever happen unless the data has been modified since initialisation. */ if ((offset_x + width > gif->width) || (offset_y + height > gif->height)) { return_value = GIF_DATA_ERROR; goto gif_decode_frame_exit; } /* Decode the flags */ flags = gif_data[9]; colour_table_size = 2 << (flags & GIF_COLOUR_TABLE_SIZE_MASK); interlace = flags & GIF_INTERLACE_MASK; /* Move our pointer to the colour table or image data (if no colour table is given) */ gif_data += 10; gif_bytes = (unsigned int)(gif_end - gif_data); /* Set up the colour table */ if (flags & GIF_COLOUR_TABLE_MASK) { if (gif_bytes < (int)(3 * colour_table_size)) { return_value = GIF_INSUFFICIENT_FRAME_DATA; goto gif_decode_frame_exit; } colour_table = gif->local_colour_table; if (!gif->clear_image) { for (index = 0; index < colour_table_size; index++) { /* Gif colour map contents are r,g,b. * * We want to pack them bytewise into the * colour table, such that the red component * is in byte 0 and the alpha component is in * byte 3.
*/ unsigned char *entry = (unsigned char *) &colour_table[index]; entry[0] = gif_data[0]; /* r */ entry[1] = gif_data[1]; /* g */ entry[2] = gif_data[2]; /* b */ entry[3] = 0xff; /* a */ gif_data += 3; } } else { gif_data += 3 * colour_table_size; } gif_bytes = (unsigned int)(gif_end - gif_data); } else { colour_table = gif->global_colour_table; } /* Check if we've finished */ if (gif_bytes < 1) { return_value = GIF_INSUFFICIENT_FRAME_DATA; goto gif_decode_frame_exit; } else if (gif_data[0] == GIF_TRAILER) { return_value = GIF_OK; goto gif_decode_frame_exit; } /* Get the frame data */ assert(gif->bitmap_callbacks.bitmap_get_buffer); frame_data = (void *)gif->bitmap_callbacks.bitmap_get_buffer(gif->frame_image); if (!frame_data) return GIF_INSUFFICIENT_MEMORY; /* If we are clearing the image we just clear, if not decode */ if (!gif->clear_image) { /* Ensure we have enough data for a 1-byte LZW code size + 1-byte gif trailer */ if (gif_bytes < 2) { return_value = GIF_INSUFFICIENT_FRAME_DATA; goto gif_decode_frame_exit; /* If we only have a 1-byte LZW code size + 1-byte gif trailer, we're finished */ } else if ((gif_bytes == 2) && (gif_data[1] == GIF_TRAILER)) { return_value = GIF_OK; goto gif_decode_frame_exit; } /* If the previous frame's disposal method requires we restore the background * colour or this is the first frame, clear the frame data */ if ((frame == 0) || (gif->decoded_frame == GIF_INVALID_FRAME)) { memset((char*)frame_data, GIF_TRANSPARENT_COLOUR, gif->width * gif->height * sizeof(int)); gif->decoded_frame = frame; /* The line below would fill the image with its background color, but because GIFs support * transparency we likely wouldn't want to do that. 
*/ /* memset((char*)frame_data, colour_table[gif->background_index], gif->width * gif->height * sizeof(int)); */ } else if ((frame != 0) && (gif->frames[frame - 1].disposal_method == GIF_FRAME_CLEAR)) { gif->clear_image = true; if ((return_value = gif_decode_frame(gif, (frame - 1))) != GIF_OK) goto gif_decode_frame_exit; gif->clear_image = false; /* If the previous frame's disposal method requires we restore the previous * image, find the last image set to "do not dispose" and get that frame data */ } else if ((frame != 0) && (gif->frames[frame - 1].disposal_method == GIF_FRAME_RESTORE)) { while ((last_undisposed_frame != -1) && (gif->frames[last_undisposed_frame--].disposal_method == GIF_FRAME_RESTORE)) { } /* If we don't find one, clear the frame data */ if (last_undisposed_frame == -1) { /* see notes above on transparency vs. background color */ memset((char*)frame_data, GIF_TRANSPARENT_COLOUR, gif->width * gif->height * sizeof(int)); } else { if ((return_value = gif_decode_frame(gif, last_undisposed_frame)) != GIF_OK) goto gif_decode_frame_exit; /* Get this frame's data */ assert(gif->bitmap_callbacks.bitmap_get_buffer); frame_data = (void *)gif->bitmap_callbacks.bitmap_get_buffer(gif->frame_image); if (!frame_data) return GIF_INSUFFICIENT_MEMORY; } } gif->decoded_frame = frame; /* Initialise the LZW decoding */ gif->set_code_size = gif_data[0]; gif->buffer_position = (unsigned int)(gif_data - gif->gif_data + 1); /* Set our code variables */ gif->code_size = gif->set_code_size + 1; gif->clear_code = (1 << gif->set_code_size); gif->end_code = gif->clear_code + 1; gif->max_code_size = gif->clear_code << 1; gif->max_code = gif->clear_code + 2; gif->curbit = gif->lastbit = 0; gif->last_byte = 2; gif->get_done = false; gif->direct = gif->buf; gif_init_LZW(gif); /* Decompress the data */ for (y = 0; y < height; y++) { if (interlace) decode_y = gif_interlaced_line(height, y) + offset_y; else decode_y = y + offset_y; frame_scanline = frame_data + offset_x + (decode_y * 
gif->width); /* Rather than decoding pixel by pixel, we try to burst out streams of data to remove the need for end-of data checks every pixel. */ x = width; while (x > 0) { burst_bytes = (unsigned int)(gif->stack_pointer - gif->stack); if (burst_bytes > 0) { if (burst_bytes > x) burst_bytes = x; x -= burst_bytes; while (burst_bytes-- > 0) { colour = *--gif->stack_pointer; if (((gif->frames[frame].transparency) && (colour != gif->frames[frame].transparency_index)) || (!gif->frames[frame].transparency)) *frame_scanline = colour_table[colour]; frame_scanline++; } } else { if (!gif_next_LZW(gif)) { /* Unexpected end of frame, try to recover */ if (gif->current_error == GIF_END_OF_FRAME) return_value = GIF_OK; else return_value = gif->current_error; goto gif_decode_frame_exit; } } } } } else { /* Clear our frame */ if (gif->frames[frame].disposal_method == GIF_FRAME_CLEAR) { for (y = 0; y < height; y++) { frame_scanline = frame_data + offset_x + ((offset_y + y) * gif->width); if (gif->frames[frame].transparency) memset(frame_scanline, GIF_TRANSPARENT_COLOUR, width * 4); else memset(frame_scanline, colour_table[gif->background_index], width * 4); } } } gif_decode_frame_exit: /* Check if we should test for optimisation */ if (gif->frames[frame].virgin) { if (gif->bitmap_callbacks.bitmap_test_opaque) gif->frames[frame].opaque = gif->bitmap_callbacks.bitmap_test_opaque(gif->frame_image); else gif->frames[frame].opaque = false; gif->frames[frame].virgin = false; } if (gif->bitmap_callbacks.bitmap_set_opaque) gif->bitmap_callbacks.bitmap_set_opaque(gif->frame_image, gif->frames[frame].opaque); if (gif->bitmap_callbacks.bitmap_modified) gif->bitmap_callbacks.bitmap_modified(gif->frame_image); /* Restore the buffer position */ gif->buffer_position = save_buffer_position; /* Success! 
*/ return return_value; } /** Skips the frame's extensions (which have been previously initialised) @return GIF_INSUFFICIENT_FRAME_DATA for insufficient data to complete the frame GIF_OK for successful decoding */ static gif_result gif_skip_frame_extensions(gif_animation *gif) { unsigned char *gif_data, *gif_end; int gif_bytes; unsigned int block_size; /* Get our buffer position etc. */ gif_data = (unsigned char *)(gif->gif_data + gif->buffer_position); gif_end = (unsigned char *)(gif->gif_data + gif->buffer_size); gif_bytes = (unsigned int)(gif_end - gif_data); /* Skip the extensions */ while (gif_data[0] == GIF_EXTENSION_INTRODUCER) { ++gif_data; /* Switch on extension label */ switch(gif_data[0]) { /* Move the pointer to the first data sub-block * 1 byte for the extension label */ case GIF_EXTENSION_COMMENT: ++gif_data; break; /* Move the pointer to the first data sub-block * 2 bytes for the extension label and size fields * Skip the extension size itself */ default: gif_data += (2 + gif_data[1]); } /* Repeatedly skip blocks until we get a zero block or run out of data * This data is ignored by this gif decoder */ gif_bytes = (unsigned int)(gif_end - gif_data); block_size = 0; while (gif_data[0] != GIF_BLOCK_TERMINATOR) { block_size = gif_data[0] + 1; if ((gif_bytes -= block_size) < 0) return GIF_INSUFFICIENT_FRAME_DATA; gif_data += block_size; } ++gif_data; } /* Set buffer position and return */ gif->buffer_position = (unsigned int)(gif_data - gif->gif_data); return GIF_OK; } static unsigned int gif_interlaced_line(int height, int y) { if ((y << 3) < height) return (y << 3); y -= ((height + 7) >> 3); if ((y << 3) < (height - 4)) return (y << 3) + 4; y -= ((height + 3) >> 3); if ((y << 2) < (height - 2)) return (y << 2) + 2; y -= ((height + 1) >> 2); return (y << 1) + 1; } /* Releases any workspace held by the animation */ void gif_finalise(gif_animation *gif) { /* Release all our memory blocks */ if (gif->frame_image) { 
assert(gif->bitmap_callbacks.bitmap_destroy); gif->bitmap_callbacks.bitmap_destroy(gif->frame_image); } gif->frame_image = NULL; free(gif->frames); gif->frames = NULL; free(gif->local_colour_table); gif->local_colour_table = NULL; free(gif->global_colour_table); gif->global_colour_table = NULL; } /** * Initialise LZW decoding */ void gif_init_LZW(gif_animation *gif) { int i; gif->current_error = 0; if (gif->clear_code >= (1 << GIF_MAX_LZW)) { gif->stack_pointer = gif->stack; gif->current_error = GIF_FRAME_DATA_ERROR; return; } /* initialise our table */ memset(gif->table, 0x00, (1 << GIF_MAX_LZW) * 8); for (i = 0; i < gif->clear_code; ++i) gif->table[1][i] = i; /* update our LZW parameters */ gif->code_size = gif->set_code_size + 1; gif->max_code_size = gif->clear_code << 1; gif->max_code = gif->clear_code + 2; gif->stack_pointer = gif->stack; do { gif->firstcode = gif->oldcode = gif_next_code(gif, gif->code_size); } while (gif->firstcode == gif->clear_code); *gif->stack_pointer++ =gif->firstcode; } static bool gif_next_LZW(gif_animation *gif) { int code, incode; int block_size; int new_code; code = gif_next_code(gif, gif->code_size); if (code < 0) { gif->current_error = code; return false; } else if (code == gif->clear_code) { gif_init_LZW(gif); return true; } else if (code == gif->end_code) { /* skip to the end of our data so multi-image GIFs work */ if (gif->zero_data_block) { gif->current_error = GIF_FRAME_DATA_ERROR; return false; } block_size = 0; while (block_size != 1) { block_size = gif->gif_data[gif->buffer_position] + 1; gif->buffer_position += block_size; } gif->current_error = GIF_FRAME_DATA_ERROR; return false; } incode = code; if (code >= gif->max_code) { if (gif->stack_pointer >= gif->stack + ((1 << GIF_MAX_LZW) * 2)) { gif->current_error = GIF_FRAME_DATA_ERROR; return false; } *gif->stack_pointer++ = gif->firstcode; code = gif->oldcode; } /* The following loop is the most important in the GIF decoding cycle as every * single pixel passes through 
it. * * Note: our gif->stack is always big enough to hold a complete decompressed chunk. */ while (code >= gif->clear_code) { if (gif->stack_pointer >= gif->stack + ((1 << GIF_MAX_LZW) * 2)) { gif->current_error = GIF_FRAME_DATA_ERROR; return false; } *gif->stack_pointer++ = gif->table[1][code]; new_code = gif->table[0][code]; if (new_code < gif->clear_code) { code = new_code; break; } if (gif->stack_pointer >= gif->stack + ((1 << GIF_MAX_LZW) * 2)) { gif->current_error = GIF_FRAME_DATA_ERROR; return false; } *gif->stack_pointer++ = gif->table[1][new_code]; code = gif->table[0][new_code]; if (code == new_code) { gif->current_error = GIF_FRAME_DATA_ERROR; return false; } } if (gif->stack_pointer >= gif->stack + ((1 << GIF_MAX_LZW) * 2)) { gif->current_error = GIF_FRAME_DATA_ERROR; return false; } *gif->stack_pointer++ = gif->firstcode = gif->table[1][code]; if ((code = gif->max_code) < (1 << GIF_MAX_LZW)) { gif->table[0][code] = gif->oldcode; gif->table[1][code] = gif->firstcode; ++gif->max_code; if ((gif->max_code >= gif->max_code_size) && (gif->max_code_size < (1 << GIF_MAX_LZW))) { gif->max_code_size = gif->max_code_size << 1; ++gif->code_size; } } gif->oldcode = incode; return true; } static int gif_next_code(gif_animation *gif, int code_size) { int i, j, end, count, ret; unsigned char *b; (void)code_size; end = gif->curbit + gif->code_size; if (end >= gif->lastbit) { if (gif->get_done) return GIF_END_OF_FRAME; gif->buf[0] = gif->direct[gif->last_byte - 2]; gif->buf[1] = gif->direct[gif->last_byte - 1]; /* get the next block */ gif->direct = gif->gif_data + gif->buffer_position; gif->zero_data_block = ((count = gif->direct[0]) == 0); if ((gif->buffer_position + count) >= gif->buffer_size) return GIF_INSUFFICIENT_FRAME_DATA; if (count == 0) gif->get_done = true; else { gif->direct -= 1; gif->buf[2] = gif->direct[2]; gif->buf[3] = gif->direct[3]; } gif->buffer_position += count + 1; /* update our variables */ gif->last_byte = 2 + count; gif->curbit = (gif->curbit 
- gif->lastbit) + 16; gif->lastbit = (2 + count) << 3; end = gif->curbit + gif->code_size; } i = gif->curbit >> 3; if (i < 2) b = gif->buf; else b = gif->direct; ret = b[i]; j = (end >> 3) - 1; if (i <= j) { ret |= (b[i + 1] << 8); if (i < j) ret |= (b[i + 2] << 16); } ret = (ret >> (gif->curbit % 8)) & maskTbl[gif->code_size]; gif->curbit += gif->code_size; return ret; } obs-studio-32.1.0-sources/libobs/graphics/libnsgif/.clang-format000644 001751 001751 00000000066 15153330235 025460 0ustar00runnerrunner000000 000000 Language: Cpp SortIncludes: false DisableFormat: true obs-studio-32.1.0-sources/libobs/graphics/libnsgif/libnsgif.h000644 001751 001751 00000013606 15153330235 025057 0ustar00runnerrunner000000 000000 /* * Copyright 2004 Richard Wilson * Copyright 2008 Sean Fox * * This file is part of NetSurf's libnsgif, http://www.netsurf-browser.org/ * Licenced under the MIT License, * http://www.opensource.org/licenses/mit-license.php */ /** \file * Progressive animated GIF file decoding (interface). 
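gif_next_code above gathers up to three bytes, shifts down to the current bit offset, and masks with maskTbl[gif->code_size] (which is (1 << code_size) - 1), because LZW codes are packed least-significant-bit first across byte boundaries. The same extraction can be sketched standalone over a flat buffer — names and the simplified bounds handling here are illustrative, not libnsgif's:

```c
#include <stddef.h>

/* Read one code_size-bit LZW code, LSB-first, starting at bit offset
 * *curbit in buf. Advances *curbit on success; returns -1 when the
 * buffer does not contain enough bits. */
static int read_code(const unsigned char *buf, size_t len, int *curbit,
		     int code_size)
{
	int i = *curbit >> 3;                      /* first byte needed */
	int end = (*curbit + code_size - 1) >> 3;  /* last byte needed */
	int ret;

	if ((size_t)end >= len)
		return -1; /* out of data */

	/* gather up to three bytes so the code can straddle boundaries */
	ret = buf[i];
	if (end > i)
		ret |= buf[i + 1] << 8;
	if (end > i + 1)
		ret |= buf[i + 2] << 16;

	ret = (ret >> (*curbit & 7)) & ((1 << code_size) - 1);
	*curbit += code_size;
	return ret;
}
```

For a minimum code size of 2 (clear code 4, end code 5), the two-byte stream {0x44, 0x01} decodes as the 3-bit codes 4, 0, 5 — a clear code, one pixel of index 0, and the end code.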
*/ #ifndef _LIBNSGIF_H_ #define _LIBNSGIF_H_ #include <stdbool.h> #include <inttypes.h> #if defined(__cplusplus) extern "C" { #endif /* Error return values */ typedef enum { GIF_WORKING = 1, GIF_OK = 0, GIF_INSUFFICIENT_FRAME_DATA = -1, GIF_FRAME_DATA_ERROR = -2, GIF_INSUFFICIENT_DATA = -3, GIF_DATA_ERROR = -4, GIF_INSUFFICIENT_MEMORY = -5, GIF_FRAME_NO_DISPLAY = -6, GIF_END_OF_FRAME = -7 } gif_result; /* Maximum LZW bits available */ #define GIF_MAX_LZW 12 /* The GIF frame data */ typedef struct gif_frame { bool display; /**< whether the frame should be displayed/animated */ unsigned int frame_delay; /**< delay (in 100th second intervals) before animating the frame */ /** Internal members are listed below */ unsigned int frame_pointer; /**< offset (in bytes) to the GIF frame data */ bool virgin; /**< whether the frame has previously been used */ bool opaque; /**< whether the frame is totally opaque */ bool redraw_required; /**< whether a forcible screen redraw is required */ unsigned char disposal_method; /**< how the previous frame should be disposed; affects plotting */ bool transparency; /**< whether we acknowledge transparency */ unsigned char transparency_index; /**< the index designating a transparent pixel */ unsigned int redraw_x; /**< x co-ordinate of redraw rectangle */ unsigned int redraw_y; /**< y co-ordinate of redraw rectangle */ unsigned int redraw_width; /**< width of redraw rectangle */ unsigned int redraw_height; /**< height of redraw rectangle */ } gif_frame; /* API for Bitmap callbacks */ typedef void* (*gif_bitmap_cb_create)(int width, int height); typedef void (*gif_bitmap_cb_destroy)(void *bitmap); typedef unsigned char* (*gif_bitmap_cb_get_buffer)(void *bitmap); typedef void (*gif_bitmap_cb_set_opaque)(void *bitmap, bool opaque); typedef bool (*gif_bitmap_cb_test_opaque)(void *bitmap); typedef void (*gif_bitmap_cb_modified)(void *bitmap); /* The Bitmap callbacks function table */ typedef struct gif_bitmap_callback_vt { gif_bitmap_cb_create bitmap_create; /**<
Create a bitmap. */ gif_bitmap_cb_destroy bitmap_destroy; /**< Free a bitmap. */ gif_bitmap_cb_get_buffer bitmap_get_buffer; /**< Return a pointer to the pixel data in a bitmap. */ /** Members below are optional */ gif_bitmap_cb_set_opaque bitmap_set_opaque; /**< Sets whether a bitmap should be plotted opaque. */ gif_bitmap_cb_test_opaque bitmap_test_opaque; /**< Tests whether a bitmap has an opaque alpha channel. */ gif_bitmap_cb_modified bitmap_modified; /**< The bitmap image has changed, so flush any persistent cache. */ } gif_bitmap_callback_vt; /* The GIF animation data */ typedef struct gif_animation { gif_bitmap_callback_vt bitmap_callbacks; /**< callbacks for bitmap functions */ unsigned char *gif_data; /**< pointer to GIF data */ unsigned int width; /**< width of GIF (may increase during decoding) */ unsigned int height; /**< height of GIF (may increase during decoding) */ unsigned int frame_count; /**< number of frames decoded */ unsigned int frame_count_partial; /**< number of frames partially decoded */ gif_frame *frames; /**< decoded frames */ int decoded_frame; /**< current frame decoded to bitmap */ void *frame_image; /**< currently decoded image; stored as bitmap from bitmap_create callback */ int loop_count; /**< number of times to loop animation */ gif_result current_error; /**< current error type, or 0 for none*/ /** Internal members are listed below */ unsigned int buffer_position; /**< current index into GIF data */ unsigned int buffer_size; /**< total number of bytes of GIF data available */ unsigned int frame_holders; /**< current number of frame holders */ unsigned int background_index; /**< index in the colour table for the background colour */ unsigned int aspect_ratio; /**< image aspect ratio (ignored) */ unsigned int colour_table_size; /**< size of colour table (in entries) */ bool global_colours; /**< whether the GIF has a global colour table */ unsigned int *global_colour_table; /**< global colour table */ unsigned int 
*local_colour_table; /**< local colour table */ /* General LZW values. They are NO LONGER shared for all GIFs being decoded BECAUSE THAT IS A TERRIBLE IDEA TO SAVE 10Kb or so per GIF. */ unsigned char buf[4]; unsigned char *direct; int table[2][(1 << GIF_MAX_LZW)]; unsigned char stack[(1 << GIF_MAX_LZW) * 2]; unsigned char *stack_pointer; int code_size, set_code_size; int max_code, max_code_size; int clear_code, end_code; int curbit, lastbit, last_byte; int firstcode, oldcode; bool zero_data_block; bool get_done; /* Whether to clear the decoded image rather than plot */ bool clear_image; } gif_animation; void gif_create(gif_animation *gif, gif_bitmap_callback_vt *bitmap_callbacks); gif_result gif_initialise(gif_animation *gif, size_t size, unsigned char *data); gif_result gif_decode_frame(gif_animation *gif, unsigned int frame); void gif_finalise(gif_animation *gif); #if defined(__cplusplus) }; #endif #endif obs-studio-32.1.0-sources/libobs/graphics/libnsgif/LICENSE.libnsgif000644 001751 001751 00000003045 15153330235 025706 0ustar00runnerrunner000000 000000 libnsgif is licensed under the MIT license. The licensing statement and the full license are reproduced below. 
/* * Copyright 2003 James Bursa * Copyright 2004 John Tytgat * Copyright 2004 Richard Wilson * Copyright 2008 Sean Fox * * This file is part of NetSurf's libnsgif, http://www.netsurf-browser.org/ * Licenced under the MIT License, * http://www.opensource.org/licenses/mit-license.php */ The MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. obs-studio-32.1.0-sources/libobs/graphics/bounds.c000644 001751 001751 00000016545 15153330235 022757 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include <math.h> #include "bounds.h" #include "matrix3.h" #include "matrix4.h" #include "plane.h" void bounds_move(struct bounds *dst, const struct bounds *b, const struct vec3 *v) { vec3_add(&dst->min, &b->min, v); vec3_add(&dst->max, &b->max, v); } void bounds_scale(struct bounds *dst, const struct bounds *b, const struct vec3 *v) { vec3_mul(&dst->min, &b->min, v); vec3_mul(&dst->max, &b->max, v); } void bounds_merge(struct bounds *dst, const struct bounds *b1, const struct bounds *b2) { vec3_min(&dst->min, &b1->min, &b2->min); vec3_max(&dst->max, &b1->max, &b2->max); } void bounds_merge_point(struct bounds *dst, const struct bounds *b, const struct vec3 *v) { vec3_min(&dst->min, &b->min, v); vec3_max(&dst->max, &b->max, v); } void bounds_get_point(struct vec3 *dst, const struct bounds *b, unsigned int i) { if (i > 7) return; /* * Note: * 0 = min.x,min.y,min.z * 1 = min.x,min.y,MAX.z * 2 = min.x,MAX.y,min.z * 3 = min.x,MAX.y,MAX.z * 4 = MAX.x,min.y,min.z * 5 = MAX.x,min.y,MAX.z * 6 = MAX.x,MAX.y,min.z * 7 = MAX.x,MAX.y,MAX.z */ if (i > 3) { dst->x = b->max.x; i -= 4; } else { dst->x = b->min.x; } if (i > 1) { dst->y = b->max.y; i -= 2; } else { dst->y = b->min.y; } dst->z = (i == 1) ?
b->max.z : b->min.z; } void bounds_get_center(struct vec3 *dst, const struct bounds *b) { vec3_sub(dst, &b->max, &b->min); vec3_mulf(dst, dst, 0.5f); vec3_add(dst, dst, &b->min); } void bounds_transform(struct bounds *dst, const struct bounds *b, const struct matrix4 *m) { struct bounds temp; bool b_init = false; int i; memset(&temp, 0, sizeof(temp)); for (i = 0; i < 8; i++) { struct vec3 p; bounds_get_point(&p, b, i); vec3_transform(&p, &p, m); if (!b_init) { vec3_copy(&temp.min, &p); vec3_copy(&temp.max, &p); b_init = true; } else { if (p.x < temp.min.x) temp.min.x = p.x; else if (p.x > temp.max.x) temp.max.x = p.x; if (p.y < temp.min.y) temp.min.y = p.y; else if (p.y > temp.max.y) temp.max.y = p.y; if (p.z < temp.min.z) temp.min.z = p.z; else if (p.z > temp.max.z) temp.max.z = p.z; } } bounds_copy(dst, &temp); } void bounds_transform3x4(struct bounds *dst, const struct bounds *b, const struct matrix3 *m) { struct bounds temp; bool b_init = false; int i; memset(&temp, 0, sizeof(temp)); for (i = 0; i < 8; i++) { struct vec3 p; bounds_get_point(&p, b, i); vec3_transform3x4(&p, &p, m); if (!b_init) { vec3_copy(&temp.min, &p); vec3_copy(&temp.max, &p); b_init = true; } else { if (p.x < temp.min.x) temp.min.x = p.x; else if (p.x > temp.max.x) temp.max.x = p.x; if (p.y < temp.min.y) temp.min.y = p.y; else if (p.y > temp.max.y) temp.max.y = p.y; if (p.z < temp.min.z) temp.min.z = p.z; else if (p.z > temp.max.z) temp.max.z = p.z; } } bounds_copy(dst, &temp); } bool bounds_intersection_ray(const struct bounds *b, const struct vec3 *orig, const struct vec3 *dir, float *t) { float t_max = M_INFINITE; float t_min = -M_INFINITE; struct vec3 center, max_offset, box_offset; int i; bounds_get_center(¢er, b); vec3_sub(&max_offset, &b->max, ¢er); vec3_sub(&box_offset, ¢er, orig); for (i = 0; i < 3; i++) { float e = box_offset.ptr[i]; float f = dir->ptr[i]; if (fabsf(f) > 0.0f) { float fi = 1.0f / f; float t1 = (e + max_offset.ptr[i]) * fi; float t2 = (e - max_offset.ptr[i]) * fi; 
if (t1 > t2) { if (t2 > t_min) t_min = t2; if (t1 < t_max) t_max = t1; } else { if (t1 > t_min) t_min = t1; if (t2 < t_max) t_max = t2; } if (t_min > t_max) return false; if (t_max < 0.0f) return false; } else if ((-e - max_offset.ptr[i]) > 0.0f || (-e + max_offset.ptr[i]) < 0.0f) { return false; } } *t = (t_min > 0.0f) ? t_min : t_max; return true; } bool bounds_intersection_line(const struct bounds *b, const struct vec3 *p1, const struct vec3 *p2, float *t) { struct vec3 dir; float length; vec3_sub(&dir, p2, p1); length = vec3_len(&dir); if (length <= TINY_EPSILON) return false; vec3_mulf(&dir, &dir, 1.0f / length); if (!bounds_intersection_ray(b, p1, &dir, t)) return false; *t /= length; return true; } int bounds_plane_test(const struct bounds *b, const struct plane *p) { struct vec3 vmin, vmax; int i; for (i = 0; i < 3; i++) { if (p->dir.ptr[i] >= 0.0f) { vmin.ptr[i] = b->min.ptr[i]; vmax.ptr[i] = b->max.ptr[i]; } else { vmin.ptr[i] = b->max.ptr[i]; vmax.ptr[i] = b->min.ptr[i]; } } if (vec3_plane_dist(&vmin, p) > 0.0f) return BOUNDS_OUTSIDE; if (vec3_plane_dist(&vmax, p) >= 0.0f) return BOUNDS_PARTIAL; return BOUNDS_INSIDE; } bool bounds_under_plane(const struct bounds *b, const struct plane *p) { struct vec3 vmin; vmin.x = (p->dir.x < 0.0f) ? b->max.x : b->min.x; vmin.y = (p->dir.y < 0.0f) ? b->max.y : b->min.y; vmin.z = (p->dir.z < 0.0f) ?
b->max.z : b->min.z; return (vec3_dot(&vmin, &p->dir) <= p->dist); } bool bounds_intersects(const struct bounds *b, const struct bounds *test, float epsilon) { return ((b->min.x - test->max.x) <= epsilon) && ((test->min.x - b->max.x) <= epsilon) && ((b->min.y - test->max.y) <= epsilon) && ((test->min.y - b->max.y) <= epsilon) && ((b->min.z - test->max.z) <= epsilon) && ((test->min.z - b->max.z) <= epsilon); } bool bounds_intersects_obb(const struct bounds *b, const struct bounds *test, const struct matrix4 *m, float epsilon) { struct bounds b_tr, test_tr; struct matrix4 m_inv; matrix4_inv(&m_inv, m); bounds_transform(&b_tr, b, m); bounds_transform(&test_tr, test, &m_inv); return bounds_intersects(b, &test_tr, epsilon) && bounds_intersects(&b_tr, test, epsilon); } bool bounds_intersects_obb3x4(const struct bounds *b, const struct bounds *test, const struct matrix3 *m, float epsilon) { struct bounds b_tr, test_tr; struct matrix3 m_inv; matrix3_transpose(&m_inv, m); bounds_transform3x4(&b_tr, b, m); bounds_transform3x4(&test_tr, test, &m_inv); return bounds_intersects(b, &test_tr, epsilon) && bounds_intersects(&b_tr, test, epsilon); } static inline float vec3or_offset_len(const struct bounds *b, const struct vec3 *v) { struct vec3 temp1, temp2; vec3_sub(&temp1, &b->max, &b->min); vec3_abs(&temp2, v); return vec3_dot(&temp1, &temp2); } float bounds_min_dist(const struct bounds *b, const struct plane *p) { struct vec3 center; float vec_len = vec3or_offset_len(b, &p->dir) * 0.5f; float center_dist; bounds_get_center(¢er, b); center_dist = vec3_plane_dist(¢er, p); return p->dist + center_dist - vec_len; } obs-studio-32.1.0-sources/libobs/graphics/matrix4.h000644 001751 001751 00000007044 15153330235 023054 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License 
as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once #include "vec3.h" #include "vec4.h" #include "axisang.h" /* 4x4 Matrix */ #ifdef __cplusplus extern "C" { #endif struct matrix3; struct matrix4 { struct vec4 x, y, z, t; }; static inline void matrix4_copy(struct matrix4 *dst, const struct matrix4 *m) { dst->x.m = m->x.m; dst->y.m = m->y.m; dst->z.m = m->z.m; dst->t.m = m->t.m; } static inline void matrix4_identity(struct matrix4 *dst) { vec4_zero(&dst->x); vec4_zero(&dst->y); vec4_zero(&dst->z); vec4_zero(&dst->t); dst->x.x = 1.0f; dst->y.y = 1.0f; dst->z.z = 1.0f; dst->t.w = 1.0f; } EXPORT void matrix4_from_matrix3(struct matrix4 *dst, const struct matrix3 *m); EXPORT void matrix4_from_quat(struct matrix4 *dst, const struct quat *q); EXPORT void matrix4_from_axisang(struct matrix4 *dst, const struct axisang *aa); EXPORT void matrix4_mul(struct matrix4 *dst, const struct matrix4 *m1, const struct matrix4 *m2); EXPORT float matrix4_determinant(const struct matrix4 *m); EXPORT void matrix4_translate3v(struct matrix4 *dst, const struct matrix4 *m, const struct vec3 *v); EXPORT void matrix4_translate4v(struct matrix4 *dst, const struct matrix4 *m, const struct vec4 *v); EXPORT void matrix4_rotate(struct matrix4 *dst, const struct matrix4 *m, const struct quat *q); EXPORT void matrix4_rotate_aa(struct matrix4 *dst, const struct matrix4 *m, const struct axisang *aa); EXPORT void matrix4_scale(struct matrix4 *dst, const struct matrix4 *m, const struct vec3 *v); EXPORT bool
matrix4_inv(struct matrix4 *dst, const struct matrix4 *m); EXPORT void matrix4_transpose(struct matrix4 *dst, const struct matrix4 *m); EXPORT void matrix4_translate3v_i(struct matrix4 *dst, const struct vec3 *v, const struct matrix4 *m); EXPORT void matrix4_translate4v_i(struct matrix4 *dst, const struct vec4 *v, const struct matrix4 *m); EXPORT void matrix4_rotate_i(struct matrix4 *dst, const struct quat *q, const struct matrix4 *m); EXPORT void matrix4_rotate_aa_i(struct matrix4 *dst, const struct axisang *aa, const struct matrix4 *m); EXPORT void matrix4_scale_i(struct matrix4 *dst, const struct vec3 *v, const struct matrix4 *m); static inline void matrix4_translate3f(struct matrix4 *dst, const struct matrix4 *m, float x, float y, float z) { struct vec3 v; vec3_set(&v, x, y, z); matrix4_translate3v(dst, m, &v); } static inline void matrix4_rotate_aa4f(struct matrix4 *dst, const struct matrix4 *m, float x, float y, float z, float rot) { struct axisang aa; axisang_set(&aa, x, y, z, rot); matrix4_rotate_aa(dst, m, &aa); } static inline void matrix4_scale3f(struct matrix4 *dst, const struct matrix4 *m, float x, float y, float z) { struct vec3 v; vec3_set(&v, x, y, z); matrix4_scale(dst, m, &v); } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/effect.c000644 001751 001751 00000030727 15153330235 022717 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "effect.h" #include "graphics-internal.h" #include "vec2.h" #include "vec3.h" #include "vec4.h" void gs_effect_actually_destroy(gs_effect_t *effect) { effect_free(effect); bfree(effect); } void gs_effect_destroy(gs_effect_t *effect) { if (effect) { if (!effect->cached) gs_effect_actually_destroy(effect); } } gs_technique_t *gs_effect_get_technique(const gs_effect_t *effect, const char *name) { if (!effect) return NULL; for (size_t i = 0; i < effect->techniques.num; i++) { struct gs_effect_technique *tech = effect->techniques.array + i; if (strcmp(tech->name, name) == 0) return tech; } return NULL; } gs_technique_t *gs_effect_get_current_technique(const gs_effect_t *effect) { if (!effect) return NULL; return effect->cur_technique; } bool gs_effect_loop(gs_effect_t *effect, const char *name) { if (!effect) { return false; } if (!effect->looping) { gs_technique_t *tech; if (!!gs_get_effect()) { blog(LOG_WARNING, "gs_effect_loop: An effect is " "already active"); return false; } tech = gs_effect_get_technique(effect, name); if (!tech) { blog(LOG_WARNING, "gs_effect_loop: Technique '%s' " "not found.", name); return false; } gs_technique_begin(tech); effect->looping = true; } else { gs_technique_end_pass(effect->cur_technique); } if (!gs_technique_begin_pass(effect->cur_technique, effect->loop_pass++)) { gs_technique_end(effect->cur_technique); effect->looping = false; effect->loop_pass = 0; return false; } return true; } size_t gs_technique_begin(gs_technique_t *tech) { if (!tech) return 0; tech->effect->cur_technique = tech; tech->effect->graphics->cur_effect = tech->effect; return tech->passes.num; } void gs_technique_end(gs_technique_t *tech) { if (!tech) return; struct gs_effect *effect = tech->effect; struct gs_effect_param *params = effect->params.array; size_t i; 
gs_load_vertexshader(NULL); gs_load_pixelshader(NULL); tech->effect->cur_technique = NULL; tech->effect->graphics->cur_effect = NULL; for (i = 0; i < effect->params.num; i++) { struct gs_effect_param *param = params + i; da_resize(param->cur_val, 0); param->changed = false; if (param->next_sampler) param->next_sampler = NULL; } } static inline void reset_params(pass_shaderparam_array_t *shaderparams) { struct pass_shaderparam *params = shaderparams->array; size_t i; for (i = 0; i < shaderparams->num; i++) params[i].eparam->changed = false; } static void upload_shader_params(pass_shaderparam_array_t *pass_params, bool changed_only) { struct pass_shaderparam *params = pass_params->array; size_t i; for (i = 0; i < pass_params->num; i++) { struct pass_shaderparam *param = params + i; struct gs_effect_param *eparam = param->eparam; gs_sparam_t *sparam = param->sparam; if (eparam->next_sampler) gs_shader_set_next_sampler(sparam, eparam->next_sampler); if (changed_only && !eparam->changed) continue; if (!eparam->cur_val.num) { if (eparam->default_val.num) da_copy(eparam->cur_val, eparam->default_val); else continue; } gs_shader_set_val(sparam, eparam->cur_val.array, eparam->cur_val.num); } } static inline void upload_parameters(struct gs_effect *effect, bool changed_only) { pass_shaderparam_array_t *vshader_params, *pshader_params; if (!effect->cur_pass) return; vshader_params = &effect->cur_pass->vertshader_params; pshader_params = &effect->cur_pass->pixelshader_params; upload_shader_params(vshader_params, changed_only); upload_shader_params(pshader_params, changed_only); reset_params(vshader_params); reset_params(pshader_params); } void gs_effect_update_params(gs_effect_t *effect) { if (effect) upload_parameters(effect, true); } bool gs_technique_begin_pass(gs_technique_t *tech, size_t idx) { struct gs_effect_pass *passes; struct gs_effect_pass *cur_pass; if (!tech || idx >= tech->passes.num) return false; passes = tech->passes.array; cur_pass = passes + idx; 
tech->effect->cur_pass = cur_pass; gs_load_vertexshader(cur_pass->vertshader); gs_load_pixelshader(cur_pass->pixelshader); upload_parameters(tech->effect, false); return true; } bool gs_technique_begin_pass_by_name(gs_technique_t *tech, const char *name) { if (!tech) return false; for (size_t i = 0; i < tech->passes.num; i++) { struct gs_effect_pass *pass = tech->passes.array + i; if (strcmp(pass->name, name) == 0) { gs_technique_begin_pass(tech, i); return true; } } return false; } static inline void clear_tex_params(pass_shaderparam_array_t *in_params) { struct pass_shaderparam *params = in_params->array; for (size_t i = 0; i < in_params->num; i++) { struct pass_shaderparam *param = params + i; struct gs_shader_param_info info; gs_shader_get_param_info(param->sparam, &info); if (info.type == GS_SHADER_PARAM_TEXTURE) gs_shader_set_texture(param->sparam, NULL); } } void gs_technique_end_pass(gs_technique_t *tech) { if (!tech) return; struct gs_effect_pass *pass = tech->effect->cur_pass; if (!pass) return; clear_tex_params(&pass->vertshader_params); clear_tex_params(&pass->pixelshader_params); tech->effect->cur_pass = NULL; } size_t gs_effect_get_num_params(const gs_effect_t *effect) { return effect ? effect->params.num : 0; } gs_eparam_t *gs_effect_get_param_by_idx(const gs_effect_t *effect, size_t param) { if (!effect) return NULL; struct gs_effect_param *params = effect->params.array; if (param >= effect->params.num) return NULL; return params + param; } gs_eparam_t *gs_effect_get_param_by_name(const gs_effect_t *effect, const char *name) { if (!effect) return NULL; struct gs_effect_param *params = effect->params.array; for (size_t i = 0; i < effect->params.num; i++) { struct gs_effect_param *param = params + i; if (strcmp(param->name, name) == 0) return param; } return NULL; } size_t gs_param_get_num_annotations(const gs_eparam_t *param) { return param ? 
param->annotations.num : 0;
}

gs_eparam_t *gs_param_get_annotation_by_idx(const gs_eparam_t *param, size_t annotation)
{
	if (!param)
		return NULL;

	struct gs_effect_param *params = param->annotations.array;
	/* reject out-of-range indices (>= keeps the last valid index in bounds) */
	if (annotation >= param->annotations.num)
		return NULL;

	return params + annotation;
}

gs_eparam_t *gs_param_get_annotation_by_name(const gs_eparam_t *param, const char *name)
{
	if (!param)
		return NULL;

	struct gs_effect_param *params = param->annotations.array;
	for (size_t i = 0; i < param->annotations.num; i++) {
		struct gs_effect_param *g_param = params + i;
		if (strcmp(g_param->name, name) == 0)
			return g_param;
	}

	return NULL;
}

gs_epass_t *gs_technique_get_pass_by_idx(const gs_technique_t *technique, size_t pass)
{
	if (!technique)
		return NULL;

	struct gs_effect_pass *passes = technique->passes.array;
	/* reject out-of-range indices (>= keeps the last valid index in bounds) */
	if (pass >= technique->passes.num)
		return NULL;

	return passes + pass;
}

gs_epass_t *gs_technique_get_pass_by_name(const gs_technique_t *technique, const char *name)
{
	if (!technique)
		return NULL;

	struct gs_effect_pass *passes = technique->passes.array;
	for (size_t i = 0; i < technique->passes.num; i++) {
		struct gs_effect_pass *g_pass = passes + i;
		if (strcmp(g_pass->name, name) == 0)
			return g_pass;
	}

	return NULL;
}

gs_eparam_t *gs_effect_get_viewproj_matrix(const gs_effect_t *effect)
{
	return effect ? effect->view_proj : NULL;
}

gs_eparam_t *gs_effect_get_world_matrix(const gs_effect_t *effect)
{
	return effect ?
effect->world : NULL; } void gs_effect_get_param_info(const gs_eparam_t *param, struct gs_effect_param_info *info) { if (!param) return; info->name = param->name; info->type = param->type; } static inline void effect_setval_inline(gs_eparam_t *param, const void *data, size_t size) { bool size_changed; if (!param) { blog(LOG_ERROR, "effect_setval_inline: invalid param"); return; } if (!data) { blog(LOG_ERROR, "effect_setval_inline: invalid data"); return; } size_changed = param->cur_val.num != size; if (size_changed) da_resize(param->cur_val, size); if (size_changed || memcmp(param->cur_val.array, data, size) != 0) { memcpy(param->cur_val.array, data, size); param->changed = true; } } #ifndef min #define min(a, b) (((a) < (b)) ? (a) : (b)) #endif static inline void effect_getval_inline(gs_eparam_t *param, void *data, size_t size) { if (!param) { blog(LOG_ERROR, "effect_getval_inline: invalid param"); return; } if (!data) { blog(LOG_ERROR, "effect_getval_inline: invalid data"); return; } size_t bytes = min(size, param->cur_val.num); memcpy(data, param->cur_val.array, bytes); } static inline void effect_getdefaultval_inline(gs_eparam_t *param, void *data, size_t size) { if (!param) { blog(LOG_ERROR, "effect_getdefaultval_inline: invalid param"); return; } if (!data) { blog(LOG_ERROR, "effect_getdefaultval_inline: invalid data"); return; } size_t bytes = min(size, param->default_val.num); memcpy(data, param->default_val.array, bytes); } void gs_effect_set_bool(gs_eparam_t *param, bool val) { int b_val = (int)val; effect_setval_inline(param, &b_val, sizeof(int)); } void gs_effect_set_float(gs_eparam_t *param, float val) { effect_setval_inline(param, &val, sizeof(float)); } void gs_effect_set_int(gs_eparam_t *param, int val) { effect_setval_inline(param, &val, sizeof(int)); } void gs_effect_set_matrix4(gs_eparam_t *param, const struct matrix4 *val) { effect_setval_inline(param, val, sizeof(struct matrix4)); } void gs_effect_set_vec2(gs_eparam_t *param, const struct vec2 
*val) { effect_setval_inline(param, val, sizeof(struct vec2)); } void gs_effect_set_vec3(gs_eparam_t *param, const struct vec3 *val) { effect_setval_inline(param, val, sizeof(float) * 3); } void gs_effect_set_vec4(gs_eparam_t *param, const struct vec4 *val) { effect_setval_inline(param, val, sizeof(struct vec4)); } void gs_effect_set_color(gs_eparam_t *param, uint32_t argb) { struct vec4 v_color; vec4_from_bgra(&v_color, argb); effect_setval_inline(param, &v_color, sizeof(struct vec4)); } void gs_effect_set_texture(gs_eparam_t *param, gs_texture_t *val) { struct gs_shader_texture shader_tex; shader_tex.tex = val; shader_tex.srgb = false; effect_setval_inline(param, &shader_tex, sizeof(shader_tex)); } void gs_effect_set_texture_srgb(gs_eparam_t *param, gs_texture_t *val) { struct gs_shader_texture shader_tex; shader_tex.tex = val; shader_tex.srgb = true; effect_setval_inline(param, &shader_tex, sizeof(shader_tex)); } void gs_effect_set_val(gs_eparam_t *param, const void *val, size_t size) { effect_setval_inline(param, val, size); } void *gs_effect_get_val(gs_eparam_t *param) { if (!param) { blog(LOG_ERROR, "gs_effect_get_val: invalid param"); return NULL; } size_t size = param->cur_val.num; void *data; if (size) data = (void *)bzalloc(size); else return NULL; effect_getval_inline(param, data, size); return data; } size_t gs_effect_get_val_size(gs_eparam_t *param) { return param ? param->cur_val.num : 0; } void *gs_effect_get_default_val(gs_eparam_t *param) { if (!param) { blog(LOG_ERROR, "gs_effect_get_default_val: invalid param"); return NULL; } size_t size = param->default_val.num; void *data; if (size) data = (void *)bzalloc(size); else return NULL; effect_getdefaultval_inline(param, data, size); return data; } size_t gs_effect_get_default_val_size(gs_eparam_t *param) { return param ? 
param->default_val.num : 0;
}

void gs_effect_set_default(gs_eparam_t *param)
{
	effect_setval_inline(param, param->default_val.array, param->default_val.num);
}

void gs_effect_set_next_sampler(gs_eparam_t *param, gs_samplerstate_t *sampler)
{
	if (!param) {
		blog(LOG_ERROR, "gs_effect_set_next_sampler: invalid param");
		return;
	}

	if (param->type == GS_SHADER_PARAM_TEXTURE)
		param->next_sampler = sampler;
}

obs-studio-32.1.0-sources/libobs/graphics/srgb.h

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see .
******************************************************************************/

#pragma once

#include <math.h>   /* powf */
#include <string.h> /* memcpy */

#ifdef __cplusplus
extern "C" {
#endif

static inline float gs_srgb_nonlinear_to_linear(float u)
{
	return (u <= 0.04045f) ? (u / 12.92f) : powf((u + 0.055f) / 1.055f, 2.4f);
}

static inline float gs_srgb_linear_to_nonlinear(float u)
{
	return (u <= 0.0031308f) ?
(12.92f * u) : ((1.055f * powf(u, 1.0f / 2.4f)) - 0.055f); } static inline float gs_u8_to_float(uint8_t u) { return (float)u / 255.0f; } static inline void gs_u8x4_to_float4(float *f, const uint8_t *u) { f[0] = gs_u8_to_float(u[0]); f[1] = gs_u8_to_float(u[1]); f[2] = gs_u8_to_float(u[2]); f[3] = gs_u8_to_float(u[3]); } static inline uint8_t gs_float_to_u8(float f) { return (uint8_t)(f * 255.0f + 0.5f); } static inline void gs_premultiply_float4(float *f) { f[0] *= f[3]; f[1] *= f[3]; f[2] *= f[3]; } static inline void gs_float3_to_u8x3(uint8_t *u, const float *f) { u[0] = gs_float_to_u8(f[0]); u[1] = gs_float_to_u8(f[1]); u[2] = gs_float_to_u8(f[2]); } static inline void gs_float4_to_u8x4(uint8_t *u, const float *f) { u[0] = gs_float_to_u8(f[0]); u[1] = gs_float_to_u8(f[1]); u[2] = gs_float_to_u8(f[2]); u[3] = gs_float_to_u8(f[3]); } static inline void gs_float3_srgb_nonlinear_to_linear(float *f) { f[0] = gs_srgb_nonlinear_to_linear(f[0]); f[1] = gs_srgb_nonlinear_to_linear(f[1]); f[2] = gs_srgb_nonlinear_to_linear(f[2]); } static inline void gs_float3_srgb_linear_to_nonlinear(float *f) { f[0] = gs_srgb_linear_to_nonlinear(f[0]); f[1] = gs_srgb_linear_to_nonlinear(f[1]); f[2] = gs_srgb_linear_to_nonlinear(f[2]); } static inline void gs_premultiply_xyza(uint8_t *data) { uint8_t u[4]; float f[4]; memcpy(&u, data, sizeof(u)); gs_u8x4_to_float4(f, u); gs_premultiply_float4(f); gs_float3_to_u8x3(u, f); memcpy(data, &u, sizeof(u)); } static inline void gs_premultiply_xyza_srgb(uint8_t *data) { uint8_t u[4]; float f[4]; memcpy(&u, data, sizeof(u)); gs_u8x4_to_float4(f, u); gs_float3_srgb_nonlinear_to_linear(f); gs_premultiply_float4(f); gs_float3_srgb_linear_to_nonlinear(f); gs_float3_to_u8x3(u, f); memcpy(data, &u, sizeof(u)); } static inline void gs_premultiply_xyza_restrict(uint8_t *__restrict dst, const uint8_t *__restrict src) { uint8_t u[4]; float f[4]; memcpy(&u, src, sizeof(u)); gs_u8x4_to_float4(f, u); gs_premultiply_float4(f); gs_float3_to_u8x3(u, f); 
memcpy(dst, &u, sizeof(u));
}

static inline void gs_premultiply_xyza_srgb_restrict(uint8_t *__restrict dst, const uint8_t *__restrict src)
{
	uint8_t u[4];
	float f[4];
	memcpy(&u, src, sizeof(u));
	gs_u8x4_to_float4(f, u);
	gs_float3_srgb_nonlinear_to_linear(f);
	gs_premultiply_float4(f);
	gs_float3_srgb_linear_to_nonlinear(f);
	gs_float3_to_u8x3(u, f);
	memcpy(dst, &u, sizeof(u));
}

static inline void gs_premultiply_xyza_loop(uint8_t *data, size_t texel_count)
{
	for (size_t i = 0; i < texel_count; ++i) {
		gs_premultiply_xyza(data);
		data += 4;
	}
}

static inline void gs_premultiply_xyza_srgb_loop(uint8_t *data, size_t texel_count)
{
	for (size_t i = 0; i < texel_count; ++i) {
		gs_premultiply_xyza_srgb(data);
		data += 4;
	}
}

static inline void gs_premultiply_xyza_loop_restrict(uint8_t *__restrict dst, const uint8_t *__restrict src, size_t texel_count)
{
	for (size_t i = 0; i < texel_count; ++i) {
		gs_premultiply_xyza_restrict(dst, src);
		dst += 4;
		src += 4;
	}
}

static inline void gs_premultiply_xyza_srgb_loop_restrict(uint8_t *__restrict dst, const uint8_t *__restrict src, size_t texel_count)
{
	for (size_t i = 0; i < texel_count; ++i) {
		gs_premultiply_xyza_srgb_restrict(dst, src);
		dst += 4;
		src += 4;
	}
}

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/graphics/graphics-internal.h

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "../util/threading.h" #include "../util/darray.h" #include "graphics.h" #include "matrix3.h" #include "matrix4.h" /* ========================================================================= * * Exports * * ========================================================================= */ struct gs_exports { const char *(*device_get_name)(void); const char *(*gpu_get_driver_version)(void); const char *(*gpu_get_renderer)(void); uint64_t (*gpu_get_dmem)(void); uint64_t (*gpu_get_smem)(void); int (*device_get_type)(void); bool (*device_enum_adapters)(gs_device_t *device, bool (*callback)(void *, const char *, uint32_t), void *); const char *(*device_preprocessor_name)(void); int (*device_create)(gs_device_t **device, uint32_t adapter); void (*device_destroy)(gs_device_t *device); void (*device_enter_context)(gs_device_t *device); void (*device_leave_context)(gs_device_t *device); void *(*device_get_device_obj)(gs_device_t *device); gs_swapchain_t *(*device_swapchain_create)(gs_device_t *device, const struct gs_init_data *data); void (*device_resize)(gs_device_t *device, uint32_t x, uint32_t y); enum gs_color_space (*device_get_color_space)(gs_device_t *device); void (*device_update_color_space)(gs_device_t *device); void (*device_get_size)(const gs_device_t *device, uint32_t *x, uint32_t *y); uint32_t (*device_get_width)(const gs_device_t *device); uint32_t (*device_get_height)(const gs_device_t *device); gs_texture_t *(*device_texture_create)(gs_device_t *device, uint32_t width, uint32_t height, enum gs_color_format color_format, uint32_t levels, const uint8_t **data, uint32_t flags); gs_texture_t *(*device_cubetexture_create)(gs_device_t *device, uint32_t size, enum gs_color_format color_format, uint32_t levels, const uint8_t **data, uint32_t flags); gs_texture_t 
*(*device_voltexture_create)(gs_device_t *device, uint32_t width, uint32_t height, uint32_t depth, enum gs_color_format color_format, uint32_t levels, const uint8_t *const *data, uint32_t flags); gs_zstencil_t *(*device_zstencil_create)(gs_device_t *device, uint32_t width, uint32_t height, enum gs_zstencil_format format); gs_stagesurf_t *(*device_stagesurface_create)(gs_device_t *device, uint32_t width, uint32_t height, enum gs_color_format color_format); gs_samplerstate_t *(*device_samplerstate_create)(gs_device_t *device, const struct gs_sampler_info *info); gs_shader_t *(*device_vertexshader_create)(gs_device_t *device, const char *shader, const char *file, char **error_string); gs_shader_t *(*device_pixelshader_create)(gs_device_t *device, const char *shader, const char *file, char **error_string); gs_vertbuffer_t *(*device_vertexbuffer_create)(gs_device_t *device, struct gs_vb_data *data, uint32_t flags); gs_indexbuffer_t *(*device_indexbuffer_create)(gs_device_t *device, enum gs_index_type type, void *indices, size_t num, uint32_t flags); gs_timer_t *(*device_timer_create)(gs_device_t *device); gs_timer_range_t *(*device_timer_range_create)(gs_device_t *device); enum gs_texture_type (*device_get_texture_type)(const gs_texture_t *texture); void (*device_load_vertexbuffer)(gs_device_t *device, gs_vertbuffer_t *vertbuffer); void (*device_load_indexbuffer)(gs_device_t *device, gs_indexbuffer_t *indexbuffer); void (*device_load_texture)(gs_device_t *device, gs_texture_t *tex, int unit); void (*device_load_samplerstate)(gs_device_t *device, gs_samplerstate_t *samplerstate, int unit); void (*device_load_vertexshader)(gs_device_t *device, gs_shader_t *vertshader); void (*device_load_pixelshader)(gs_device_t *device, gs_shader_t *pixelshader); void (*device_load_default_samplerstate)(gs_device_t *device, bool b_3d, int unit); gs_shader_t *(*device_get_vertex_shader)(const gs_device_t *device); gs_shader_t *(*device_get_pixel_shader)(const gs_device_t *device); 
gs_texture_t *(*device_get_render_target)(const gs_device_t *device); gs_zstencil_t *(*device_get_zstencil_target)(const gs_device_t *device); void (*device_set_render_target)(gs_device_t *device, gs_texture_t *tex, gs_zstencil_t *zstencil); void (*device_set_render_target_with_color_space)(gs_device_t *device, gs_texture_t *tex, gs_zstencil_t *zstencil, enum gs_color_space space); void (*device_set_cube_render_target)(gs_device_t *device, gs_texture_t *cubetex, int side, gs_zstencil_t *zstencil); void (*device_enable_framebuffer_srgb)(gs_device_t *device, bool enable); bool (*device_framebuffer_srgb_enabled)(gs_device_t *device); void (*device_copy_texture)(gs_device_t *device, gs_texture_t *dst, gs_texture_t *src); void (*device_copy_texture_region)(gs_device_t *device, gs_texture_t *dst, uint32_t dst_x, uint32_t dst_y, gs_texture_t *src, uint32_t src_x, uint32_t src_y, uint32_t src_w, uint32_t src_h); void (*device_stage_texture)(gs_device_t *device, gs_stagesurf_t *dst, gs_texture_t *src); void (*device_begin_frame)(gs_device_t *device); void (*device_begin_scene)(gs_device_t *device); void (*device_draw)(gs_device_t *device, enum gs_draw_mode draw_mode, uint32_t start_vert, uint32_t num_verts); void (*device_end_scene)(gs_device_t *device); void (*device_load_swapchain)(gs_device_t *device, gs_swapchain_t *swaphchain); void (*device_clear)(gs_device_t *device, uint32_t clear_flags, const struct vec4 *color, float depth, uint8_t stencil); bool (*device_is_present_ready)(gs_device_t *device); void (*device_present)(gs_device_t *device); void (*device_flush)(gs_device_t *device); void (*device_set_cull_mode)(gs_device_t *device, enum gs_cull_mode mode); enum gs_cull_mode (*device_get_cull_mode)(const gs_device_t *device); void (*device_enable_blending)(gs_device_t *device, bool enable); void (*device_enable_depth_test)(gs_device_t *device, bool enable); void (*device_enable_stencil_test)(gs_device_t *device, bool enable); void 
(*device_enable_stencil_write)(gs_device_t *device, bool enable); void (*device_enable_color)(gs_device_t *device, bool red, bool green, bool blue, bool alpha); void (*device_blend_function)(gs_device_t *device, enum gs_blend_type src, enum gs_blend_type dest); void (*device_blend_function_separate)(gs_device_t *device, enum gs_blend_type src_c, enum gs_blend_type dest_c, enum gs_blend_type src_a, enum gs_blend_type dest_a); void (*device_blend_op)(gs_device_t *device, enum gs_blend_op_type op); void (*device_depth_function)(gs_device_t *device, enum gs_depth_test test); void (*device_stencil_function)(gs_device_t *device, enum gs_stencil_side side, enum gs_depth_test test); void (*device_stencil_op)(gs_device_t *device, enum gs_stencil_side side, enum gs_stencil_op_type fail, enum gs_stencil_op_type zfail, enum gs_stencil_op_type zpass); void (*device_set_viewport)(gs_device_t *device, int x, int y, int width, int height); void (*device_get_viewport)(const gs_device_t *device, struct gs_rect *rect); void (*device_set_scissor_rect)(gs_device_t *device, const struct gs_rect *rect); void (*device_ortho)(gs_device_t *device, float left, float right, float top, float bottom, float znear, float zfar); void (*device_frustum)(gs_device_t *device, float left, float right, float top, float bottom, float znear, float zfar); void (*device_projection_push)(gs_device_t *device); void (*device_projection_pop)(gs_device_t *device); void (*gs_swapchain_destroy)(gs_swapchain_t *swapchain); void (*gs_texture_destroy)(gs_texture_t *tex); uint32_t (*gs_texture_get_width)(const gs_texture_t *tex); uint32_t (*gs_texture_get_height)(const gs_texture_t *tex); enum gs_color_format (*gs_texture_get_color_format)(const gs_texture_t *tex); bool (*gs_texture_map)(gs_texture_t *tex, uint8_t **ptr, uint32_t *linesize); void (*gs_texture_unmap)(gs_texture_t *tex); bool (*gs_texture_is_rect)(const gs_texture_t *tex); void *(*gs_texture_get_obj)(const gs_texture_t *tex); void 
(*gs_cubetexture_destroy)(gs_texture_t *cubetex); uint32_t (*gs_cubetexture_get_size)(const gs_texture_t *cubetex); enum gs_color_format (*gs_cubetexture_get_color_format)(const gs_texture_t *cubetex); void (*gs_voltexture_destroy)(gs_texture_t *voltex); uint32_t (*gs_voltexture_get_width)(const gs_texture_t *voltex); uint32_t (*gs_voltexture_get_height)(const gs_texture_t *voltex); uint32_t (*gs_voltexture_get_depth)(const gs_texture_t *voltex); enum gs_color_format (*gs_voltexture_get_color_format)(const gs_texture_t *voltex); void (*gs_stagesurface_destroy)(gs_stagesurf_t *stagesurf); uint32_t (*gs_stagesurface_get_width)(const gs_stagesurf_t *stagesurf); uint32_t (*gs_stagesurface_get_height)(const gs_stagesurf_t *stagesurf); enum gs_color_format (*gs_stagesurface_get_color_format)(const gs_stagesurf_t *stagesurf); bool (*gs_stagesurface_map)(gs_stagesurf_t *stagesurf, uint8_t **data, uint32_t *linesize); void (*gs_stagesurface_unmap)(gs_stagesurf_t *stagesurf); void (*gs_zstencil_destroy)(gs_zstencil_t *zstencil); void (*gs_samplerstate_destroy)(gs_samplerstate_t *samplerstate); void (*gs_vertexbuffer_destroy)(gs_vertbuffer_t *vertbuffer); void (*gs_vertexbuffer_flush)(gs_vertbuffer_t *vertbuffer); void (*gs_vertexbuffer_flush_direct)(gs_vertbuffer_t *vertbuffer, const struct gs_vb_data *data); struct gs_vb_data *(*gs_vertexbuffer_get_data)(const gs_vertbuffer_t *vertbuffer); void (*gs_indexbuffer_destroy)(gs_indexbuffer_t *indexbuffer); void (*gs_indexbuffer_flush)(gs_indexbuffer_t *indexbuffer); void (*gs_indexbuffer_flush_direct)(gs_indexbuffer_t *indexbuffer, const void *data); void *(*gs_indexbuffer_get_data)(const gs_indexbuffer_t *indexbuffer); size_t (*gs_indexbuffer_get_num_indices)(const gs_indexbuffer_t *indexbuffer); enum gs_index_type (*gs_indexbuffer_get_type)(const gs_indexbuffer_t *indexbuffer); void (*gs_timer_destroy)(gs_timer_t *timer); void (*gs_timer_begin)(gs_timer_t *timer); void (*gs_timer_end)(gs_timer_t *timer); bool 
(*gs_timer_get_data)(gs_timer_t *timer, uint64_t *ticks); void (*gs_timer_range_destroy)(gs_timer_range_t *range); bool (*gs_timer_range_begin)(gs_timer_range_t *range); bool (*gs_timer_range_end)(gs_timer_range_t *range); bool (*gs_timer_range_get_data)(gs_timer_range_t *range, bool *disjoint, uint64_t *frequency); void (*gs_shader_destroy)(gs_shader_t *shader); int (*gs_shader_get_num_params)(const gs_shader_t *shader); gs_sparam_t *(*gs_shader_get_param_by_idx)(gs_shader_t *shader, uint32_t param); gs_sparam_t *(*gs_shader_get_param_by_name)(gs_shader_t *shader, const char *name); gs_sparam_t *(*gs_shader_get_viewproj_matrix)(const gs_shader_t *shader); gs_sparam_t *(*gs_shader_get_world_matrix)(const gs_shader_t *shader); void (*gs_shader_get_param_info)(const gs_sparam_t *param, struct gs_shader_param_info *info); void (*gs_shader_set_bool)(gs_sparam_t *param, bool val); void (*gs_shader_set_float)(gs_sparam_t *param, float val); void (*gs_shader_set_int)(gs_sparam_t *param, int val); void (*gs_shader_set_matrix3)(gs_sparam_t *param, const struct matrix3 *val); void (*gs_shader_set_matrix4)(gs_sparam_t *param, const struct matrix4 *val); void (*gs_shader_set_vec2)(gs_sparam_t *param, const struct vec2 *val); void (*gs_shader_set_vec3)(gs_sparam_t *param, const struct vec3 *val); void (*gs_shader_set_vec4)(gs_sparam_t *param, const struct vec4 *val); void (*gs_shader_set_texture)(gs_sparam_t *param, gs_texture_t *val); void (*gs_shader_set_val)(gs_sparam_t *param, const void *val, size_t size); void (*gs_shader_set_default)(gs_sparam_t *param); void (*gs_shader_set_next_sampler)(gs_sparam_t *param, gs_samplerstate_t *sampler); bool (*device_nv12_available)(gs_device_t *device); bool (*device_p010_available)(gs_device_t *device); bool (*device_texture_create_nv12)(gs_device_t *device, gs_texture_t **tex_y, gs_texture_t **tex_uv, uint32_t width, uint32_t height, uint32_t flags); bool (*device_texture_create_p010)(gs_device_t *device, gs_texture_t **tex_y, 
gs_texture_t **tex_uv, uint32_t width, uint32_t height, uint32_t flags); bool (*device_is_monitor_hdr)(gs_device_t *device, void *monitor); void (*device_debug_marker_begin)(gs_device_t *device, const char *markername, const float color[4]); void (*device_debug_marker_end)(gs_device_t *device); uint32_t (*gs_get_adapter_count)(void); #ifdef __APPLE__ /* OSX/Cocoa specific functions */ gs_texture_t *(*device_texture_create_from_iosurface)(gs_device_t *dev, void *iosurf); gs_texture_t *(*device_texture_open_shared)(gs_device_t *dev, uint32_t handle); bool (*gs_texture_rebind_iosurface)(gs_texture_t *texture, void *iosurf); bool (*device_shared_texture_available)(void); #elif _WIN32 bool (*device_gdi_texture_available)(void); bool (*device_shared_texture_available)(void); bool (*device_get_duplicator_monitor_info)(gs_device_t *device, int monitor_idx, struct gs_monitor_info *monitor_info); int (*device_duplicator_get_monitor_index)(gs_device_t *device, void *monitor); gs_duplicator_t *(*device_duplicator_create)(gs_device_t *device, int monitor_idx); void (*gs_duplicator_destroy)(gs_duplicator_t *duplicator); bool (*gs_duplicator_update_frame)(gs_duplicator_t *duplicator); gs_texture_t *(*gs_duplicator_get_texture)(gs_duplicator_t *duplicator); enum gs_color_space (*gs_duplicator_get_color_space)(gs_duplicator_t *duplicator); float (*gs_duplicator_get_sdr_white_level)(gs_duplicator_t *duplicator); bool (*device_can_adapter_fast_clear)(gs_device_t *device); gs_texture_t *(*device_texture_create_gdi)(gs_device_t *device, uint32_t width, uint32_t height); void *(*gs_texture_get_dc)(gs_texture_t *gdi_tex); void (*gs_texture_release_dc)(gs_texture_t *gdi_tex); gs_texture_t *(*device_texture_open_shared)(gs_device_t *device, uint32_t handle); gs_texture_t *(*device_texture_open_nt_shared)(gs_device_t *device, uint32_t handle); uint32_t (*device_texture_get_shared_handle)(gs_texture_t *tex); gs_texture_t *(*device_texture_wrap_obj)(gs_device_t *device, void *obj); int 
(*device_texture_acquire_sync)(gs_texture_t *tex, uint64_t key, uint32_t ms); int (*device_texture_release_sync)(gs_texture_t *tex, uint64_t key); gs_stagesurf_t *(*device_stagesurface_create_nv12)(gs_device_t *device, uint32_t width, uint32_t height); gs_stagesurf_t *(*device_stagesurface_create_p010)(gs_device_t *device, uint32_t width, uint32_t height); void (*device_register_loss_callbacks)(gs_device_t *device, const struct gs_device_loss *callbacks); void (*device_unregister_loss_callbacks)(gs_device_t *device, void *data); #elif defined(__linux__) || defined(__FreeBSD__) || defined(__DragonFly__) struct gs_texture *(*device_texture_create_from_dmabuf)(gs_device_t *device, unsigned int width, unsigned int height, uint32_t drm_format, enum gs_color_format color_format, uint32_t n_planes, const int *fds, const uint32_t *strides, const uint32_t *offsets, const uint64_t *modifiers); bool (*device_query_dmabuf_capabilities)(gs_device_t *device, enum gs_dmabuf_flags *dmabuf_flags, uint32_t **drm_formats, size_t *n_formats); bool (*device_query_dmabuf_modifiers_for_format)(gs_device_t *device, uint32_t drm_format, uint64_t **modifiers, size_t *n_modifiers); struct gs_texture *(*device_texture_create_from_pixmap)(gs_device_t *device, uint32_t width, uint32_t height, enum gs_color_format color_format, uint32_t target, void *pixmap); bool (*device_query_sync_capabilities)(gs_device_t *device); gs_sync_t *(*device_sync_create)(gs_device_t *device); gs_sync_t *(*device_sync_create_from_syncobj_timeline_point)(gs_device_t *device, int syncobj_fd, uint64_t timeline_point); void (*device_sync_destroy)(gs_device_t *device, gs_sync_t *sync); bool (*device_sync_export_syncobj_timeline_point)(gs_device_t *device, gs_sync_t *sync, int syncobj_fd, uint64_t timeline_point); bool (*device_sync_signal_syncobj_timeline_point)(gs_device_t *device, int syncobj_fd, uint64_t timeline_point); bool (*device_sync_wait)(gs_device_t *device, gs_sync_t *sync); #endif }; /* 
=========================================================================
 *
 * Graphics Subsystem Data
 *
 * ========================================================================= */

struct blend_state {
	bool enabled;
	enum gs_blend_type src_c;
	enum gs_blend_type dest_c;
	enum gs_blend_type src_a;
	enum gs_blend_type dest_a;
	enum gs_blend_op_type op;
};

struct graphics_subsystem {
	void *module;
	gs_device_t *device;
	struct gs_exports exports;

	DARRAY(struct gs_rect) viewport_stack;

	DARRAY(struct matrix4) matrix_stack;
	size_t cur_matrix;

	struct matrix4 projection;
	struct gs_effect *cur_effect;

	gs_vertbuffer_t *sprite_buffer;
	gs_vertbuffer_t *flipped_sprite_buffer;
	gs_vertbuffer_t *subregion_buffer;

	bool using_immediate;
	struct gs_vb_data *vbd;
	gs_vertbuffer_t *immediate_vertbuffer;
	DARRAY(struct vec3) verts;
	DARRAY(struct vec3) norms;
	DARRAY(uint32_t) colors;
	DARRAY(struct vec2) texverts[16];

	pthread_mutex_t effect_mutex;
	struct gs_effect *first_effect;

	pthread_mutex_t mutex;
	volatile long ref;

	struct blend_state cur_blend_state;
	DARRAY(struct blend_state) blend_state_stack;

	bool linear_srgb;
};
obs-studio-32.1.0-sources/libobs/graphics/quat.h000644 001751 001751 00000011272 15153330235 022434 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "../util/c99defs.h"
#include "math-defs.h"
#include "vec3.h"
#include "../util/sse-intrin.h"

/*
 * Quaternion math
 *
 * Generally used to represent rotational data more than anything. Allows
 * for efficient and correct rotational interpolation without suffering from
 * things like gimbal lock.
 */

#ifdef __cplusplus
extern "C" {
#endif

struct matrix3;
struct matrix4;
struct axisang;

struct quat {
	union {
		struct {
			float x, y, z, w;
		};
		float ptr[4];
		__m128 m;
	};
};

static inline void quat_identity(struct quat *q)
{
	q->m = _mm_setzero_ps();
	q->w = 1.0f;
}

static inline void quat_set(struct quat *dst, float x, float y, float z, float w)
{
	dst->m = _mm_set_ps(x, y, z, w);
}

static inline void quat_copy(struct quat *dst, const struct quat *q)
{
	dst->m = q->m;
}

static inline void quat_add(struct quat *dst, const struct quat *q1, const struct quat *q2)
{
	dst->m = _mm_add_ps(q1->m, q2->m);
}

static inline void quat_sub(struct quat *dst, const struct quat *q1, const struct quat *q2)
{
	dst->m = _mm_sub_ps(q1->m, q2->m);
}

EXPORT void quat_mul(struct quat *dst, const struct quat *q1, const struct quat *q2);

static inline void quat_addf(struct quat *dst, const struct quat *q, float f)
{
	dst->m = _mm_add_ps(q->m, _mm_set1_ps(f));
}

static inline void quat_subf(struct quat *dst, const struct quat *q, float f)
{
	dst->m = _mm_sub_ps(q->m, _mm_set1_ps(f));
}

static inline void quat_mulf(struct quat *dst, const struct quat *q, float f)
{
	dst->m = _mm_mul_ps(q->m, _mm_set1_ps(f));
}

static inline void quat_divf(struct quat *dst, const struct quat *q, float f)
{
	dst->m = _mm_div_ps(q->m, _mm_set1_ps(f));
}

static inline float quat_dot(const struct quat *q1, const struct quat *q2)
{
	struct vec3 add;
	__m128 mul = _mm_mul_ps(q1->m, q2->m);
	add.m = _mm_add_ps(_mm_movehl_ps(mul, mul), mul);
	add.m = _mm_add_ps(_mm_shuffle_ps(add.m, add.m, 0x55), add.m);
	return add.x;
}

static inline void quat_inv(struct quat *dst, const struct quat *q)
{
	dst->x = -q->x;
	dst->y = -q->y;
	dst->z = -q->z;
}

static inline void quat_neg(struct quat *dst, const struct quat *q)
{
	dst->x = -q->x;
	dst->y = -q->y;
	dst->z = -q->z;
	dst->w = -q->w;
}

static inline float quat_len(const struct quat *q)
{
	float dot_val = quat_dot(q, q);
	return (dot_val > 0.0f) ? sqrtf(dot_val) : 0.0f;
}

static inline float quat_dist(const struct quat *q1, const struct quat *q2)
{
	struct quat temp;
	float dot_val;

	quat_sub(&temp, q1, q2);
	dot_val = quat_dot(&temp, &temp);
	return (dot_val > 0.0f) ? sqrtf(dot_val) : 0.0f;
}

static inline void quat_norm(struct quat *dst, const struct quat *q)
{
	float dot_val = quat_dot(q, q);
	dst->m = (dot_val > 0.0f) ? _mm_mul_ps(q->m, _mm_set1_ps(1.0f / sqrtf(dot_val)))
				  : _mm_setzero_ps();
}

static inline bool quat_close(const struct quat *q1, const struct quat *q2, float epsilon)
{
	struct quat test;
	quat_sub(&test, q1, q2);
	return test.x < epsilon && test.y < epsilon && test.z < epsilon && test.w < epsilon;
}

EXPORT void quat_from_axisang(struct quat *dst, const struct axisang *aa);
EXPORT void quat_from_matrix3(struct quat *dst, const struct matrix3 *m);
EXPORT void quat_from_matrix4(struct quat *dst, const struct matrix4 *m);

EXPORT void quat_get_dir(struct vec3 *dst, const struct quat *q);
EXPORT void quat_set_look_dir(struct quat *dst, const struct vec3 *dir);

EXPORT void quat_log(struct quat *dst, const struct quat *q);
EXPORT void quat_exp(struct quat *dst, const struct quat *q);

EXPORT void quat_interpolate(struct quat *dst, const struct quat *q1, const struct quat *q2, float t);
EXPORT void quat_get_tangent(struct quat *dst, const struct quat *prev, const struct quat *q,
			     const struct quat *next);
EXPORT void quat_interpolate_cubic(struct quat *dst, const struct quat *q1, const struct quat *q2,
				   const struct quat *m1, const struct quat *m2, float t);

#ifdef __cplusplus
}
#endif
obs-studio-32.1.0-sources/libobs/graphics/texture-render.c000644 001751 001751
00000007547 15153330235 024444 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

/*
 * This is a set of helper functions to more easily render to textures
 * without having to duplicate too much code.
 */

#include
#include "graphics.h"

struct gs_texture_render {
	gs_texture_t *target, *prev_target;
	gs_zstencil_t *zs, *prev_zs;
	enum gs_color_space prev_space;

	uint32_t cx, cy;

	enum gs_color_format format;
	enum gs_zstencil_format zsformat;

	bool rendered;
};

gs_texrender_t *gs_texrender_create(enum gs_color_format format, enum gs_zstencil_format zsformat)
{
	struct gs_texture_render *texrender;
	texrender = bzalloc(sizeof(struct gs_texture_render));
	texrender->format = format;
	texrender->zsformat = zsformat;
	return texrender;
}

void gs_texrender_destroy(gs_texrender_t *texrender)
{
	if (texrender) {
		gs_texture_destroy(texrender->target);
		gs_zstencil_destroy(texrender->zs);
		bfree(texrender);
	}
}

static bool texrender_resetbuffer(gs_texrender_t *texrender, uint32_t cx, uint32_t cy)
{
	if (!texrender)
		return false;

	gs_texture_destroy(texrender->target);
	gs_zstencil_destroy(texrender->zs);

	texrender->target = NULL;
	texrender->zs = NULL;
	texrender->cx = cx;
	texrender->cy = cy;

	texrender->target = gs_texture_create(cx, cy, texrender->format, 1, NULL, GS_RENDER_TARGET);
	if (!texrender->target)
		return false;

	if (texrender->zsformat != GS_ZS_NONE) {
		texrender->zs = gs_zstencil_create(cx, cy, texrender->zsformat);
		if (!texrender->zs) {
			gs_texture_destroy(texrender->target);
			texrender->target = NULL;
			return false;
		}
	}

	return true;
}

bool gs_texrender_begin(gs_texrender_t *texrender, uint32_t cx, uint32_t cy)
{
	return gs_texrender_begin_with_color_space(texrender, cx, cy, GS_CS_SRGB);
}

bool gs_texrender_begin_with_color_space(gs_texrender_t *texrender, uint32_t cx, uint32_t cy,
					 enum gs_color_space space)
{
	if (!texrender || texrender->rendered)
		return false;

	if (!cx || !cy)
		return false;

	if (texrender->cx != cx || texrender->cy != cy)
		if (!texrender_resetbuffer(texrender, cx, cy))
			return false;

	if (!texrender->target)
		return false;

	gs_viewport_push();
	gs_projection_push();
	gs_matrix_push();
	gs_matrix_identity();

	texrender->prev_target = gs_get_render_target();
	texrender->prev_zs = gs_get_zstencil_target();
	texrender->prev_space = gs_get_color_space();
	gs_set_render_target_with_color_space(texrender->target, texrender->zs, space);

	gs_set_viewport(0, 0, texrender->cx, texrender->cy);
	return true;
}

void gs_texrender_end(gs_texrender_t *texrender)
{
	if (!texrender)
		return;

	gs_set_render_target_with_color_space(texrender->prev_target, texrender->prev_zs,
					      texrender->prev_space);

	gs_matrix_pop();
	gs_projection_pop();
	gs_viewport_pop();

	texrender->rendered = true;
}

void gs_texrender_reset(gs_texrender_t *texrender)
{
	if (texrender)
		texrender->rendered = false;
}

gs_texture_t *gs_texrender_get_texture(const gs_texrender_t *texrender)
{
	return texrender ? texrender->target : NULL;
}

enum gs_color_format gs_texrender_get_format(const gs_texrender_t *texrender)
{
	return texrender->format;
}
obs-studio-32.1.0-sources/libobs/graphics/math-defs.h000644 001751 001751 00000002604 15153330235 023331 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "../util/c99defs.h"
#include <math.h>

#ifdef __cplusplus
extern "C" {
#endif

#ifndef M_PI
#define M_PI 3.1415926535897932384626433832795f
#endif

#define RAD(val) ((val) * 0.0174532925199432957692369076848f)
#define DEG(val) ((val) * 57.295779513082320876798154814105f)
#define LARGE_EPSILON 1e-2f
#define EPSILON 1e-4f
#define TINY_EPSILON 1e-5f
#define M_INFINITE 3.4e38f

static inline bool close_float(float f1, float f2, float precision)
{
	return fabsf(f1 - f2) <= precision;
}

#ifdef __cplusplus
}
#endif
obs-studio-32.1.0-sources/libobs/graphics/axisang.h000644 001751 001751 00000003047 15153330235 023115 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either
version 2 of the License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "../util/c99defs.h"

#ifdef __cplusplus
extern "C" {
#endif

struct quat;

struct axisang {
	union {
		struct {
			float x, y, z, w;
		};
		float ptr[4];
	};
};

static inline void axisang_zero(struct axisang *dst)
{
	dst->x = 0.0f;
	dst->y = 0.0f;
	dst->z = 0.0f;
	dst->w = 0.0f;
}

static inline void axisang_copy(struct axisang *dst, struct axisang *aa)
{
	dst->x = aa->x;
	dst->y = aa->y;
	dst->z = aa->z;
	dst->w = aa->w;
}

static inline void axisang_set(struct axisang *dst, float x, float y, float z, float w)
{
	dst->x = x;
	dst->y = y;
	dst->z = z;
	dst->w = w;
}

EXPORT void axisang_from_quat(struct axisang *dst, const struct quat *q);

#ifdef __cplusplus
}
#endif
obs-studio-32.1.0-sources/libobs/graphics/device-exports.h000644 001751 001751 00000024651 15153330235 024430 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #pragma once #include #ifdef __cplusplus extern "C" { #endif EXPORT const char *device_get_name(void); EXPORT const char *gpu_get_driver_version(void); EXPORT const char *gpu_get_renderer(void); EXPORT uint64_t gpu_get_dmem(void); EXPORT uint64_t gpu_get_smem(void); EXPORT int device_get_type(void); EXPORT bool device_enum_adapters(gs_device_t *device, bool (*callback)(void *param, const char *name, uint32_t id), void *param); EXPORT const char *device_preprocessor_name(void); EXPORT int device_create(gs_device_t **device, uint32_t adapter); EXPORT void device_destroy(gs_device_t *device); EXPORT void device_enter_context(gs_device_t *device); EXPORT void device_leave_context(gs_device_t *device); EXPORT void *device_get_device_obj(gs_device_t *device); EXPORT gs_swapchain_t *device_swapchain_create(gs_device_t *device, const struct gs_init_data *data); EXPORT void device_resize(gs_device_t *device, uint32_t x, uint32_t y); EXPORT enum gs_color_space device_get_color_space(gs_device_t *device); EXPORT void device_update_color_space(gs_device_t *device); EXPORT void device_get_size(const gs_device_t *device, uint32_t *x, uint32_t *y); EXPORT uint32_t device_get_width(const gs_device_t *device); EXPORT uint32_t device_get_height(const gs_device_t *device); EXPORT gs_texture_t *device_texture_create(gs_device_t *device, uint32_t width, uint32_t height, enum gs_color_format color_format, uint32_t levels, const uint8_t **data, uint32_t flags); EXPORT gs_texture_t *device_cubetexture_create(gs_device_t *device, uint32_t size, enum gs_color_format color_format, uint32_t levels, const uint8_t **data, uint32_t flags); EXPORT gs_texture_t *device_voltexture_create(gs_device_t *device, uint32_t width, uint32_t height, uint32_t depth, enum gs_color_format color_format, uint32_t levels, const uint8_t *const *data, uint32_t flags); EXPORT gs_zstencil_t *device_zstencil_create(gs_device_t *device, 
uint32_t width, uint32_t height, enum gs_zstencil_format format); EXPORT gs_stagesurf_t *device_stagesurface_create(gs_device_t *device, uint32_t width, uint32_t height, enum gs_color_format color_format); EXPORT gs_samplerstate_t *device_samplerstate_create(gs_device_t *device, const struct gs_sampler_info *info); EXPORT gs_shader_t *device_vertexshader_create(gs_device_t *device, const char *shader, const char *file, char **error_string); EXPORT gs_shader_t *device_pixelshader_create(gs_device_t *device, const char *shader, const char *file, char **error_string); EXPORT gs_vertbuffer_t *device_vertexbuffer_create(gs_device_t *device, struct gs_vb_data *data, uint32_t flags); EXPORT gs_indexbuffer_t *device_indexbuffer_create(gs_device_t *device, enum gs_index_type type, void *indices, size_t num, uint32_t flags); EXPORT gs_timer_t *device_timer_create(gs_device_t *device); EXPORT gs_timer_range_t *device_timer_range_create(gs_device_t *device); EXPORT enum gs_texture_type device_get_texture_type(const gs_texture_t *texture); EXPORT void device_load_vertexbuffer(gs_device_t *device, gs_vertbuffer_t *vertbuffer); EXPORT void device_load_indexbuffer(gs_device_t *device, gs_indexbuffer_t *indexbuffer); EXPORT void device_load_texture(gs_device_t *device, gs_texture_t *tex, int unit); EXPORT void device_load_texture_srgb(gs_device_t *device, gs_texture_t *tex, int unit); EXPORT void device_load_samplerstate(gs_device_t *device, gs_samplerstate_t *samplerstate, int unit); EXPORT void device_load_vertexshader(gs_device_t *device, gs_shader_t *vertshader); EXPORT void device_load_pixelshader(gs_device_t *device, gs_shader_t *pixelshader); EXPORT void device_load_default_samplerstate(gs_device_t *device, bool b_3d, int unit); EXPORT gs_shader_t *device_get_vertex_shader(const gs_device_t *device); EXPORT gs_shader_t *device_get_pixel_shader(const gs_device_t *device); EXPORT gs_texture_t *device_get_render_target(const gs_device_t *device); EXPORT gs_zstencil_t 
*device_get_zstencil_target(const gs_device_t *device); EXPORT void device_set_render_target(gs_device_t *device, gs_texture_t *tex, gs_zstencil_t *zstencil); EXPORT void device_set_render_target_with_color_space(gs_device_t *device, gs_texture_t *tex, gs_zstencil_t *zstencil, enum gs_color_space space); EXPORT void device_set_cube_render_target(gs_device_t *device, gs_texture_t *cubetex, int side, gs_zstencil_t *zstencil); EXPORT void device_enable_framebuffer_srgb(gs_device_t *device, bool enable); EXPORT bool device_framebuffer_srgb_enabled(gs_device_t *device); EXPORT void device_copy_texture(gs_device_t *device, gs_texture_t *dst, gs_texture_t *src); EXPORT void device_copy_texture_region(gs_device_t *device, gs_texture_t *dst, uint32_t dst_x, uint32_t dst_y, gs_texture_t *src, uint32_t src_x, uint32_t src_y, uint32_t src_w, uint32_t src_h); EXPORT void device_stage_texture(gs_device_t *device, gs_stagesurf_t *dst, gs_texture_t *src); EXPORT void device_begin_frame(gs_device_t *device); EXPORT void device_begin_scene(gs_device_t *device); EXPORT void device_draw(gs_device_t *device, enum gs_draw_mode draw_mode, uint32_t start_vert, uint32_t num_verts); EXPORT void device_end_scene(gs_device_t *device); EXPORT void device_load_swapchain(gs_device_t *device, gs_swapchain_t *swapchain); EXPORT void device_clear(gs_device_t *device, uint32_t clear_flags, const struct vec4 *color, float depth, uint8_t stencil); EXPORT bool device_is_present_ready(gs_device_t *device); EXPORT void device_present(gs_device_t *device); EXPORT void device_flush(gs_device_t *device); EXPORT void device_set_cull_mode(gs_device_t *device, enum gs_cull_mode mode); EXPORT enum gs_cull_mode device_get_cull_mode(const gs_device_t *device); EXPORT void device_enable_blending(gs_device_t *device, bool enable); EXPORT void device_enable_depth_test(gs_device_t *device, bool enable); EXPORT void device_enable_stencil_test(gs_device_t *device, bool enable); EXPORT void 
device_enable_stencil_write(gs_device_t *device, bool enable); EXPORT void device_enable_color(gs_device_t *device, bool red, bool green, bool blue, bool alpha); EXPORT void device_blend_function(gs_device_t *device, enum gs_blend_type src, enum gs_blend_type dest); EXPORT void device_blend_function_separate(gs_device_t *device, enum gs_blend_type src_c, enum gs_blend_type dest_c, enum gs_blend_type src_a, enum gs_blend_type dest_a); EXPORT void device_blend_op(gs_device_t *device, enum gs_blend_op_type op); EXPORT void device_depth_function(gs_device_t *device, enum gs_depth_test test); EXPORT void device_stencil_function(gs_device_t *device, enum gs_stencil_side side, enum gs_depth_test test); EXPORT void device_stencil_op(gs_device_t *device, enum gs_stencil_side side, enum gs_stencil_op_type fail, enum gs_stencil_op_type zfail, enum gs_stencil_op_type zpass); EXPORT void device_set_viewport(gs_device_t *device, int x, int y, int width, int height); EXPORT void device_get_viewport(const gs_device_t *device, struct gs_rect *rect); EXPORT void device_set_scissor_rect(gs_device_t *device, const struct gs_rect *rect); EXPORT void device_ortho(gs_device_t *device, float left, float right, float top, float bottom, float znear, float zfar); EXPORT void device_frustum(gs_device_t *device, float left, float right, float top, float bottom, float znear, float zfar); EXPORT void device_projection_push(gs_device_t *device); EXPORT void device_projection_pop(gs_device_t *device); EXPORT void device_debug_marker_begin(gs_device_t *device, const char *markername, const float color[4]); EXPORT void device_debug_marker_end(gs_device_t *device); EXPORT bool device_is_monitor_hdr(gs_device_t *device, void *monitor); EXPORT bool device_shared_texture_available(void); EXPORT bool device_nv12_available(gs_device_t *device); EXPORT bool device_p010_available(gs_device_t *device); #ifdef __APPLE__ EXPORT gs_texture_t *device_texture_create_from_iosurface(gs_device_t *device, void 
*iosurf); EXPORT gs_texture_t *device_texture_open_shared(gs_device_t *device, uint32_t handle); #endif #if defined(__linux__) || defined(__FreeBSD__) || defined(__DragonFly__) EXPORT gs_texture_t *device_texture_create_from_dmabuf(gs_device_t *device, unsigned int width, unsigned int height, uint32_t drm_format, enum gs_color_format color_format, uint32_t n_planes, const int *fds, const uint32_t *strides, const uint32_t *offsets, const uint64_t *modifiers); EXPORT bool device_query_dmabuf_capabilities(gs_device_t *device, enum gs_dmabuf_flags *gs_dmabuf_flags, uint32_t **drm_formats, size_t *n_formats); EXPORT bool device_query_dmabuf_modifiers_for_format(gs_device_t *device, uint32_t drm_format, uint64_t **modifiers, size_t *n_modifiers); EXPORT gs_texture_t *device_texture_create_from_pixmap(gs_device_t *device, uint32_t width, uint32_t height, enum gs_color_format color_format, uint32_t target, void *pixmap); EXPORT bool device_query_sync_capabilities(gs_device_t *device); EXPORT gs_sync_t *device_sync_create(gs_device_t *device); EXPORT gs_sync_t *device_sync_create_from_syncobj_timeline_point(gs_device_t *device, int syncobj_fd, uint64_t timeline_point); EXPORT void device_sync_destroy(gs_device_t *device, gs_sync_t *sync); EXPORT bool device_sync_export_syncobj_timeline_point(gs_device_t *device, gs_sync_t *sync, int syncobj_fd, uint64_t timeline_point); EXPORT bool device_sync_signal_syncobj_timeline_point(gs_device_t *device, int syncobj_fd, uint64_t timeline_point); EXPORT bool device_sync_wait(gs_device_t *device, gs_sync_t *sync); #endif #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/half.h000644 001751 001751 00000006633 15153330235 022401 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free 
Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

/******************************************************************************
    The MIT License (MIT)

    Copyright (c) 2011-2019 Microsoft Corp

    Permission is hereby granted, free of charge, to any person obtaining a
    copy of this software and associated documentation files (the "Software"),
    to deal in the Software without restriction, including without limitation
    the rights to use, copy, modify, merge, publish, distribute, sublicense,
    and/or sell copies of the Software, and to permit persons to whom the
    Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included
    in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
    OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
    THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
    OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
    ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
    OTHER DEALINGS IN THE SOFTWARE.
******************************************************************************/

#pragma once

#include "math-defs.h"

#ifdef __cplusplus
extern "C" {
#endif

struct half {
	uint16_t u;
};

/* adapted from DirectXMath XMConvertFloatToHalf */
static inline struct half half_from_float(float f)
{
	uint32_t Result;

	uint32_t IValue;
	memcpy(&IValue, &f, sizeof(IValue));
	uint32_t Sign = (IValue & 0x80000000U) >> 16U;
	IValue = IValue & 0x7FFFFFFFU; // Hack off the sign

	if (IValue > 0x477FE000U) {
		// The number is too large to be represented as a half. Saturate to infinity.
		if (((IValue & 0x7F800000) == 0x7F800000) && ((IValue & 0x7FFFFF) != 0)) {
			Result = 0x7FFF; // NAN
		} else {
			Result = 0x7C00U; // INF
		}
	} else if (!IValue) {
		Result = 0;
	} else {
		if (IValue < 0x38800000U) {
			// The number is too small to be represented as a normalized half.
			// Convert it to a denormalized value.
			uint32_t Shift = 113U - (IValue >> 23U);
			IValue = (0x800000U | (IValue & 0x7FFFFFU)) >> Shift;
		} else {
			// Rebias the exponent to represent the value as a normalized half.
			IValue += 0xC8000000U;
		}

		Result = ((IValue + 0x0FFFU + ((IValue >> 13U) & 1U)) >> 13U) & 0x7FFFU;
	}

	struct half h;
	h.u = (uint16_t)(Result | Sign);
	return h;
}

static inline struct half half_from_bits(uint16_t u)
{
	struct half h;
	h.u = u;
	return h;
}

#ifdef __cplusplus
}
#endif
obs-studio-32.1.0-sources/libobs/graphics/plane.h000644 001751 001751 00000005470 15153330235 022564 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "math-defs.h"
#include "vec3.h"

#ifdef __cplusplus
extern "C" {
#endif

struct matrix3;
struct matrix4;

struct plane {
	struct vec3 dir;
	float dist;
};

static inline void plane_copy(struct plane *dst, const struct plane *p)
{
	vec3_copy(&dst->dir, &p->dir);
	dst->dist = p->dist;
}

static inline void plane_set(struct plane *dst, const struct vec3 *dir, float dist)
{
	vec3_copy(&dst->dir, dir);
	dst->dist = dist;
}

static inline void plane_setf(struct plane *dst, float a, float b, float c, float d)
{
	vec3_set(&dst->dir, a, b, c);
	dst->dist = d;
}

EXPORT void plane_from_tri(struct plane *dst, const struct vec3 *v1, const struct vec3 *v2,
			   const struct vec3 *v3);

EXPORT void plane_transform(struct plane *dst, const struct plane *p, const struct matrix4 *m);
EXPORT void plane_transform3x4(struct plane *dst, const struct plane *p, const struct matrix3 *m);

EXPORT bool plane_intersection_ray(const struct plane *p, const struct vec3 *orig,
				   const struct vec3 *dir, float *t);
EXPORT bool plane_intersection_line(const struct plane *p, const struct vec3 *v1,
				    const struct vec3 *v2, float *t);

EXPORT bool plane_tri_inside(const struct plane *p, const struct vec3 *v1, const struct vec3 *v2,
			     const struct vec3 *v3, float precision);
EXPORT bool plane_line_inside(const struct plane *p, const struct vec3 *v1, const struct vec3 *v2,
			      float precision);

static inline bool plane_close(const struct plane *p1, const struct plane *p2, float precision)
{
	return vec3_close(&p1->dir, &p2->dir, precision) && close_float(p1->dist, p2->dist, precision);
}

static inline bool plane_coplanar(const struct plane *p1, const struct plane *p2, float precision)
{
	float cos_angle = vec3_dot(&p1->dir, &p2->dir);

	if (close_float(cos_angle, 1.0f, precision))
		return close_float(p1->dist, p2->dist, precision);
	else if (close_float(cos_angle, -1.0f, precision))
		return close_float(-p1->dist, p2->dist, precision);

	return false;
}

#ifdef __cplusplus
}
#endif
obs-studio-32.1.0-sources/libobs/graphics/math-extra.h000644 001751 001751 00000003700 15153330235 023531 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "../util/c99defs.h"

/*
 * A few general math functions that I couldn't really decide where to put.
 *
 * Polar/Cart conversion, torque functions (for smooth movement), percentage,
 * random floats.
 */

#ifdef __cplusplus
extern "C" {
#endif

struct vec2;
struct vec3;

EXPORT void polar_to_cart(struct vec3 *dst, const struct vec3 *v);
EXPORT void cart_to_polar(struct vec3 *dst, const struct vec3 *v);
EXPORT void norm_to_polar(struct vec2 *dst, const struct vec3 *norm);
EXPORT void polar_to_norm(struct vec3 *dst, const struct vec2 *polar);

EXPORT float calc_torquef(float val1, float val2, float torque, float min_adjust, float t);
EXPORT void calc_torque(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2,
			float torque, float min_adjust, float t);

static inline float get_percentage(float start, float end, float mid)
{
	return (mid - start) / (end - start);
}

static inline float get_percentagei(int start, int end, int mid)
{
	return (float)(mid - start) / (float)(end - start);
}

EXPORT float rand_float(int positive_only);

#ifdef __cplusplus
}
#endif
obs-studio-32.1.0-sources/libobs/graphics/vec4.h000644 001751 001751 00000013336 15153330235 022326 0ustar00runnerrunner000000 000000 /******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #pragma once #include "math-defs.h" #include "srgb.h" #include "../util/sse-intrin.h" #ifdef __cplusplus extern "C" { #endif struct vec3; struct matrix4; struct vec4 { union { struct { float x, y, z, w; }; float ptr[4]; __m128 m; }; }; static inline void vec4_zero(struct vec4 *v) { v->m = _mm_setzero_ps(); } static inline void vec4_set(struct vec4 *dst, float x, float y, float z, float w) { dst->m = _mm_set_ps(w, z, y, x); } static inline void vec4_copy(struct vec4 *dst, const struct vec4 *v) { dst->m = v->m; } EXPORT void vec4_from_vec3(struct vec4 *dst, const struct vec3 *v); static inline void vec4_add(struct vec4 *dst, const struct vec4 *v1, const struct vec4 *v2) { dst->m = _mm_add_ps(v1->m, v2->m); } static inline void vec4_sub(struct vec4 *dst, const struct vec4 *v1, const struct vec4 *v2) { dst->m = _mm_sub_ps(v1->m, v2->m); } static inline void vec4_mul(struct vec4 *dst, const struct vec4 *v1, const struct vec4 *v2) { dst->m = _mm_mul_ps(v1->m, v2->m); } static inline void vec4_div(struct vec4 *dst, const struct vec4 *v1, const struct vec4 *v2) { dst->m = _mm_div_ps(v1->m, v2->m); } static inline void vec4_addf(struct vec4 *dst, const struct vec4 *v, float f) { dst->m = _mm_add_ps(v->m, _mm_set1_ps(f)); } static inline void vec4_subf(struct vec4 *dst, const struct vec4 *v, float f) { dst->m = _mm_sub_ps(v->m, _mm_set1_ps(f)); } static inline void vec4_mulf(struct vec4 *dst, const struct vec4 *v, float f) { dst->m = _mm_mul_ps(v->m, _mm_set1_ps(f)); } static inline void vec4_divf(struct vec4 *dst, const struct vec4 *v, float f) { dst->m = _mm_div_ps(v->m, _mm_set1_ps(f)); } static inline float vec4_dot(const struct vec4 *v1, const struct vec4 *v2) { struct vec4 add; __m128 mul = _mm_mul_ps(v1->m, v2->m); add.m = _mm_add_ps(_mm_movehl_ps(mul, mul), mul); add.m = _mm_add_ps(_mm_shuffle_ps(add.m, add.m, 0x55), add.m); return add.x; } static inline void vec4_neg(struct vec4 *dst, 
const struct vec4 *v) { dst->x = -v->x; dst->y = -v->y; dst->z = -v->z; dst->w = -v->w; } static inline float vec4_len(const struct vec4 *v) { float dot_val = vec4_dot(v, v); return (dot_val > 0.0f) ? sqrtf(dot_val) : 0.0f; } static inline float vec4_dist(const struct vec4 *v1, const struct vec4 *v2) { struct vec4 temp; float dot_val; vec4_sub(&temp, v1, v2); dot_val = vec4_dot(&temp, &temp); return (dot_val > 0.0f) ? sqrtf(dot_val) : 0.0f; } static inline void vec4_norm(struct vec4 *dst, const struct vec4 *v) { float dot_val = vec4_dot(v, v); dst->m = (dot_val > 0.0f) ? _mm_mul_ps(v->m, _mm_set1_ps(1.0f / sqrtf(dot_val))) : _mm_setzero_ps(); } static inline int vec4_close(const struct vec4 *v1, const struct vec4 *v2, float epsilon) { struct vec4 test; vec4_sub(&test, v1, v2); return test.x < epsilon && test.y < epsilon && test.z < epsilon && test.w < epsilon; } static inline void vec4_min(struct vec4 *dst, const struct vec4 *v1, const struct vec4 *v2) { dst->m = _mm_min_ps(v1->m, v2->m); } static inline void vec4_minf(struct vec4 *dst, const struct vec4 *v, float f) { dst->m = _mm_min_ps(v->m, _mm_set1_ps(f)); } static inline void vec4_max(struct vec4 *dst, const struct vec4 *v1, const struct vec4 *v2) { dst->m = _mm_max_ps(v1->m, v2->m); } static inline void vec4_maxf(struct vec4 *dst, const struct vec4 *v, float f) { dst->m = _mm_max_ps(v->m, _mm_set1_ps(f)); } static inline void vec4_abs(struct vec4 *dst, const struct vec4 *v) { dst->x = fabsf(v->x); dst->y = fabsf(v->y); dst->z = fabsf(v->z); dst->w = fabsf(v->w); } static inline void vec4_floor(struct vec4 *dst, const struct vec4 *v) { dst->x = floorf(v->x); dst->y = floorf(v->y); dst->z = floorf(v->z); dst->w = floorf(v->w); } static inline void vec4_ceil(struct vec4 *dst, const struct vec4 *v) { dst->x = ceilf(v->x); dst->y = ceilf(v->y); dst->z = ceilf(v->z); dst->w = ceilf(v->w); } static inline uint32_t vec4_to_rgba(const struct vec4 *src) { float f[4]; memcpy(f, src->ptr, sizeof(f)); uint8_t u[4]; 
gs_float4_to_u8x4(u, f); uint32_t val; memcpy(&val, u, sizeof(val)); return val; } static inline uint32_t vec4_to_bgra(const struct vec4 *src) { float f[4]; memcpy(f, src->ptr, sizeof(f)); uint8_t u[4]; gs_float4_to_u8x4(u, f); uint8_t temp = u[0]; u[0] = u[2]; u[2] = temp; uint32_t val; memcpy(&val, u, sizeof(val)); return val; } static inline void vec4_from_rgba(struct vec4 *dst, uint32_t rgba) { uint8_t u[4]; memcpy(u, &rgba, sizeof(u)); gs_u8x4_to_float4(dst->ptr, u); } static inline void vec4_from_bgra(struct vec4 *dst, uint32_t bgra) { uint8_t u[4]; memcpy(u, &bgra, sizeof(u)); uint8_t temp = u[0]; u[0] = u[2]; u[2] = temp; gs_u8x4_to_float4(dst->ptr, u); } static inline void vec4_from_rgba_srgb(struct vec4 *dst, uint32_t rgba) { vec4_from_rgba(dst, rgba); gs_float3_srgb_nonlinear_to_linear(dst->ptr); } EXPORT void vec4_transform(struct vec4 *dst, const struct vec4 *v, const struct matrix4 *m); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/graphics.h /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ #pragma once #include "../util/bmem.h" #include "input.h" #ifdef __APPLE__ #include #endif /* * This is an API-independent graphics subsystem wrapper. * * This allows the use of OpenGL and different Direct3D versions through * one shared interface. */ #ifdef __cplusplus extern "C" { #endif #define GS_MAX_TEXTURES 8 struct vec2; struct vec3; struct vec4; struct quat; struct axisang; struct plane; struct matrix3; struct matrix4; enum gs_draw_mode { GS_POINTS, GS_LINES, GS_LINESTRIP, GS_TRIS, GS_TRISTRIP, }; enum gs_color_format { GS_UNKNOWN, GS_A8, GS_R8, GS_RGBA, GS_BGRX, GS_BGRA, GS_R10G10B10A2, GS_RGBA16, GS_R16, GS_RGBA16F, GS_RGBA32F, GS_RG16F, GS_RG32F, GS_R16F, GS_R32F, GS_DXT1, GS_DXT3, GS_DXT5, GS_R8G8, GS_RGBA_UNORM, GS_BGRX_UNORM, GS_BGRA_UNORM, GS_RG16, }; enum gs_color_space { GS_CS_SRGB, /* SDR */ GS_CS_SRGB_16F, /* High-precision SDR */ GS_CS_709_EXTENDED, /* Canvas, Mac EDR (HDR) */ GS_CS_709_SCRGB, /* 1.0 = 80 nits, Windows/Linux HDR */ }; enum gs_zstencil_format { GS_ZS_NONE, GS_Z16, GS_Z24_S8, GS_Z32F, GS_Z32F_S8X24, }; enum gs_index_type { GS_UNSIGNED_SHORT, GS_UNSIGNED_LONG, }; enum gs_cull_mode { GS_BACK, GS_FRONT, GS_NEITHER, }; enum gs_blend_type { GS_BLEND_ZERO, GS_BLEND_ONE, GS_BLEND_SRCCOLOR, GS_BLEND_INVSRCCOLOR, GS_BLEND_SRCALPHA, GS_BLEND_INVSRCALPHA, GS_BLEND_DSTCOLOR, GS_BLEND_INVDSTCOLOR, GS_BLEND_DSTALPHA, GS_BLEND_INVDSTALPHA, GS_BLEND_SRCALPHASAT, }; enum gs_blend_op_type { GS_BLEND_OP_ADD, GS_BLEND_OP_SUBTRACT, GS_BLEND_OP_REVERSE_SUBTRACT, GS_BLEND_OP_MIN, GS_BLEND_OP_MAX }; enum gs_depth_test { GS_NEVER, GS_LESS, GS_LEQUAL, GS_EQUAL, GS_GEQUAL, GS_GREATER, GS_NOTEQUAL, GS_ALWAYS, }; enum gs_stencil_side { GS_STENCIL_FRONT = 1, GS_STENCIL_BACK, GS_STENCIL_BOTH, }; enum gs_stencil_op_type { GS_KEEP, GS_ZERO, GS_REPLACE, GS_INCR, GS_DECR, GS_INVERT, }; enum gs_cube_sides { GS_POSITIVE_X, GS_NEGATIVE_X, GS_POSITIVE_Y, GS_NEGATIVE_Y, GS_POSITIVE_Z, 
GS_NEGATIVE_Z, }; enum gs_sample_filter { GS_FILTER_POINT, GS_FILTER_LINEAR, GS_FILTER_ANISOTROPIC, GS_FILTER_MIN_MAG_POINT_MIP_LINEAR, GS_FILTER_MIN_POINT_MAG_LINEAR_MIP_POINT, GS_FILTER_MIN_POINT_MAG_MIP_LINEAR, GS_FILTER_MIN_LINEAR_MAG_MIP_POINT, GS_FILTER_MIN_LINEAR_MAG_POINT_MIP_LINEAR, GS_FILTER_MIN_MAG_LINEAR_MIP_POINT, }; enum gs_address_mode { GS_ADDRESS_CLAMP, GS_ADDRESS_WRAP, GS_ADDRESS_MIRROR, GS_ADDRESS_BORDER, GS_ADDRESS_MIRRORONCE, }; enum gs_texture_type { GS_TEXTURE_2D, GS_TEXTURE_3D, GS_TEXTURE_CUBE, }; struct gs_device_loss { void (*device_loss_release)(void *data); void (*device_loss_rebuild)(void *device, void *data); void *data; }; struct gs_monitor_info { int rotation_degrees; long x; long y; long cx; long cy; }; struct gs_tvertarray { size_t width; void *array; }; struct gs_vb_data { size_t num; struct vec3 *points; struct vec3 *normals; struct vec3 *tangents; uint32_t *colors; size_t num_tex; struct gs_tvertarray *tvarray; }; static inline struct gs_vb_data *gs_vbdata_create(void) { return (struct gs_vb_data *)bzalloc(sizeof(struct gs_vb_data)); } static inline void gs_vbdata_destroy(struct gs_vb_data *data) { uint32_t i; if (!data) return; bfree(data->points); bfree(data->normals); bfree(data->tangents); bfree(data->colors); for (i = 0; i < data->num_tex; i++) bfree(data->tvarray[i].array); bfree(data->tvarray); bfree(data); } struct gs_sampler_info { enum gs_sample_filter filter; enum gs_address_mode address_u; enum gs_address_mode address_v; enum gs_address_mode address_w; int max_anisotropy; uint32_t border_color; }; struct gs_display_mode { uint32_t width; uint32_t height; uint32_t bits; uint32_t freq; }; struct gs_rect { int x; int y; int cx; int cy; }; /* wrapped opaque data types */ struct gs_texture; struct gs_stage_surface; struct gs_zstencil_buffer; struct gs_vertex_buffer; struct gs_index_buffer; struct gs_sampler_state; struct gs_shader; struct gs_swap_chain; struct gs_timer; struct gs_texrender; struct gs_shader_param; struct 
gs_effect; struct gs_effect_technique; struct gs_effect_pass; struct gs_effect_param; struct gs_device; struct graphics_subsystem; typedef struct gs_texture gs_texture_t; typedef struct gs_stage_surface gs_stagesurf_t; typedef struct gs_zstencil_buffer gs_zstencil_t; typedef struct gs_vertex_buffer gs_vertbuffer_t; typedef struct gs_index_buffer gs_indexbuffer_t; typedef struct gs_sampler_state gs_samplerstate_t; typedef struct gs_swap_chain gs_swapchain_t; typedef struct gs_timer gs_timer_t; typedef struct gs_timer_range gs_timer_range_t; typedef struct gs_texture_render gs_texrender_t; typedef struct gs_shader gs_shader_t; typedef struct gs_shader_param gs_sparam_t; typedef struct gs_effect gs_effect_t; typedef struct gs_effect_technique gs_technique_t; typedef struct gs_effect_pass gs_epass_t; typedef struct gs_effect_param gs_eparam_t; typedef struct gs_device gs_device_t; typedef void gs_sync_t; typedef struct graphics_subsystem graphics_t; /* --------------------------------------------------- * shader functions * --------------------------------------------------- */ enum gs_shader_param_type { GS_SHADER_PARAM_UNKNOWN, GS_SHADER_PARAM_BOOL, GS_SHADER_PARAM_FLOAT, GS_SHADER_PARAM_INT, GS_SHADER_PARAM_STRING, GS_SHADER_PARAM_VEC2, GS_SHADER_PARAM_VEC3, GS_SHADER_PARAM_VEC4, GS_SHADER_PARAM_INT2, GS_SHADER_PARAM_INT3, GS_SHADER_PARAM_INT4, GS_SHADER_PARAM_MATRIX4X4, GS_SHADER_PARAM_TEXTURE, }; struct gs_shader_texture { gs_texture_t *tex; bool srgb; }; #ifndef SWIG struct gs_shader_param_info { enum gs_shader_param_type type; const char *name; }; enum gs_shader_type { GS_SHADER_VERTEX, GS_SHADER_PIXEL, }; EXPORT void gs_shader_destroy(gs_shader_t *shader); EXPORT int gs_shader_get_num_params(const gs_shader_t *shader); EXPORT gs_sparam_t *gs_shader_get_param_by_idx(gs_shader_t *shader, uint32_t param); EXPORT gs_sparam_t *gs_shader_get_param_by_name(gs_shader_t *shader, const char *name); EXPORT gs_sparam_t *gs_shader_get_viewproj_matrix(const gs_shader_t 
*shader); EXPORT gs_sparam_t *gs_shader_get_world_matrix(const gs_shader_t *shader); EXPORT void gs_shader_get_param_info(const gs_sparam_t *param, struct gs_shader_param_info *info); EXPORT void gs_shader_set_bool(gs_sparam_t *param, bool val); EXPORT void gs_shader_set_float(gs_sparam_t *param, float val); EXPORT void gs_shader_set_int(gs_sparam_t *param, int val); EXPORT void gs_shader_set_matrix3(gs_sparam_t *param, const struct matrix3 *val); EXPORT void gs_shader_set_matrix4(gs_sparam_t *param, const struct matrix4 *val); EXPORT void gs_shader_set_vec2(gs_sparam_t *param, const struct vec2 *val); EXPORT void gs_shader_set_vec3(gs_sparam_t *param, const struct vec3 *val); EXPORT void gs_shader_set_vec4(gs_sparam_t *param, const struct vec4 *val); EXPORT void gs_shader_set_texture(gs_sparam_t *param, gs_texture_t *val); EXPORT void gs_shader_set_val(gs_sparam_t *param, const void *val, size_t size); EXPORT void gs_shader_set_default(gs_sparam_t *param); EXPORT void gs_shader_set_next_sampler(gs_sparam_t *param, gs_samplerstate_t *sampler); #endif /* --------------------------------------------------- * effect functions * --------------------------------------------------- */ /*enum gs_effect_property_type { GS_EFFECT_NONE, GS_EFFECT_BOOL, GS_EFFECT_FLOAT, GS_EFFECT_COLOR, GS_EFFECT_TEXTURE };*/ #ifndef SWIG struct gs_effect_param_info { const char *name; enum gs_shader_param_type type; /* const char *full_name; enum gs_effect_property_type prop_type; float min, max, inc, mul; */ }; #endif EXPORT void gs_effect_destroy(gs_effect_t *effect); EXPORT gs_technique_t *gs_effect_get_technique(const gs_effect_t *effect, const char *name); EXPORT gs_technique_t *gs_effect_get_current_technique(const gs_effect_t *effect); EXPORT size_t gs_technique_begin(gs_technique_t *technique); EXPORT void gs_technique_end(gs_technique_t *technique); EXPORT bool gs_technique_begin_pass(gs_technique_t *technique, size_t pass); EXPORT bool gs_technique_begin_pass_by_name(gs_technique_t 
*technique, const char *name); EXPORT void gs_technique_end_pass(gs_technique_t *technique); EXPORT gs_epass_t *gs_technique_get_pass_by_idx(const gs_technique_t *technique, size_t pass); EXPORT gs_epass_t *gs_technique_get_pass_by_name(const gs_technique_t *technique, const char *name); EXPORT size_t gs_effect_get_num_params(const gs_effect_t *effect); EXPORT gs_eparam_t *gs_effect_get_param_by_idx(const gs_effect_t *effect, size_t param); EXPORT gs_eparam_t *gs_effect_get_param_by_name(const gs_effect_t *effect, const char *name); EXPORT size_t gs_param_get_num_annotations(const gs_eparam_t *param); EXPORT gs_eparam_t *gs_param_get_annotation_by_idx(const gs_eparam_t *param, size_t annotation); EXPORT gs_eparam_t *gs_param_get_annotation_by_name(const gs_eparam_t *param, const char *name); /** Helper function to simplify effect usage. Use with a while loop that * contains drawing functions. Automatically handles techniques, passes, and * unloading. */ EXPORT bool gs_effect_loop(gs_effect_t *effect, const char *name); /** used internally */ EXPORT void gs_effect_update_params(gs_effect_t *effect); EXPORT gs_eparam_t *gs_effect_get_viewproj_matrix(const gs_effect_t *effect); EXPORT gs_eparam_t *gs_effect_get_world_matrix(const gs_effect_t *effect); #ifndef SWIG EXPORT void gs_effect_get_param_info(const gs_eparam_t *param, struct gs_effect_param_info *info); #endif EXPORT void gs_effect_set_bool(gs_eparam_t *param, bool val); EXPORT void gs_effect_set_float(gs_eparam_t *param, float val); EXPORT void gs_effect_set_int(gs_eparam_t *param, int val); EXPORT void gs_effect_set_matrix4(gs_eparam_t *param, const struct matrix4 *val); EXPORT void gs_effect_set_vec2(gs_eparam_t *param, const struct vec2 *val); EXPORT void gs_effect_set_vec3(gs_eparam_t *param, const struct vec3 *val); EXPORT void gs_effect_set_vec4(gs_eparam_t *param, const struct vec4 *val); EXPORT void gs_effect_set_texture(gs_eparam_t *param, gs_texture_t *val); EXPORT void 
gs_effect_set_texture_srgb(gs_eparam_t *param, gs_texture_t *val); EXPORT void gs_effect_set_val(gs_eparam_t *param, const void *val, size_t size); EXPORT void gs_effect_set_default(gs_eparam_t *param); EXPORT size_t gs_effect_get_val_size(gs_eparam_t *param); EXPORT void *gs_effect_get_val(gs_eparam_t *param); EXPORT size_t gs_effect_get_default_val_size(gs_eparam_t *param); EXPORT void *gs_effect_get_default_val(gs_eparam_t *param); EXPORT void gs_effect_set_next_sampler(gs_eparam_t *param, gs_samplerstate_t *sampler); EXPORT void gs_effect_set_color(gs_eparam_t *param, uint32_t argb); /* --------------------------------------------------- * texture render helper functions * --------------------------------------------------- */ EXPORT gs_texrender_t *gs_texrender_create(enum gs_color_format format, enum gs_zstencil_format zsformat); EXPORT void gs_texrender_destroy(gs_texrender_t *texrender); EXPORT bool gs_texrender_begin(gs_texrender_t *texrender, uint32_t cx, uint32_t cy); EXPORT bool gs_texrender_begin_with_color_space(gs_texrender_t *texrender, uint32_t cx, uint32_t cy, enum gs_color_space space); EXPORT void gs_texrender_end(gs_texrender_t *texrender); EXPORT void gs_texrender_reset(gs_texrender_t *texrender); EXPORT gs_texture_t *gs_texrender_get_texture(const gs_texrender_t *texrender); EXPORT enum gs_color_format gs_texrender_get_format(const gs_texrender_t *texrender); /* --------------------------------------------------- * graphics subsystem * --------------------------------------------------- */ #define GS_BUILD_MIPMAPS (1 << 0) #define GS_DYNAMIC (1 << 1) #define GS_RENDER_TARGET (1 << 2) #define GS_GL_DUMMYTEX (1 << 3) /**<< texture with no allocated texture data */ #define GS_DUP_BUFFER \ (1 << 4) /**<< do not pass buffer ownership when * creating a vertex/index buffer */ #define GS_SHARED_TEX (1 << 5) #define GS_SHARED_KM_TEX (1 << 6) /* ---------------- */ /* global functions */ #define GS_SUCCESS 0 #define GS_ERROR_FAIL -1 #define 
GS_ERROR_MODULE_NOT_FOUND -2 #define GS_ERROR_NOT_SUPPORTED -3 struct gs_window { #if defined(_WIN32) void *hwnd; #elif defined(__APPLE__) __unsafe_unretained id view; #elif defined(__linux__) || defined(__FreeBSD__) /* I'm not sure how portable defining id to uint32_t is. */ uint32_t id; void *display; #endif }; struct gs_init_data { struct gs_window window; uint32_t cx, cy; uint32_t num_backbuffers; enum gs_color_format format; enum gs_zstencil_format zsformat; uint32_t adapter; }; #define GS_DEVICE_OPENGL 1 #define GS_DEVICE_DIRECT3D_11 2 #define GS_DEVICE_METAL 3 EXPORT const char *gs_get_device_name(void); EXPORT const char *gs_get_driver_version(void); EXPORT const char *gs_get_renderer(void); EXPORT uint64_t gs_get_gpu_dmem(void); EXPORT uint64_t gs_get_gpu_smem(void); EXPORT int gs_get_device_type(void); EXPORT uint32_t gs_get_adapter_count(void); EXPORT void gs_enum_adapters(bool (*callback)(void *param, const char *name, uint32_t id), void *param); EXPORT int gs_create(graphics_t **graphics, const char *module, uint32_t adapter); EXPORT void gs_destroy(graphics_t *graphics); EXPORT void gs_enter_context(graphics_t *graphics); EXPORT void gs_leave_context(void); EXPORT graphics_t *gs_get_context(void); EXPORT void *gs_get_device_obj(void); EXPORT void gs_matrix_push(void); EXPORT void gs_matrix_pop(void); EXPORT void gs_matrix_identity(void); EXPORT void gs_matrix_transpose(void); EXPORT void gs_matrix_set(const struct matrix4 *matrix); EXPORT void gs_matrix_get(struct matrix4 *dst); EXPORT void gs_matrix_mul(const struct matrix4 *matrix); EXPORT void gs_matrix_rotquat(const struct quat *rot); EXPORT void gs_matrix_rotaa(const struct axisang *rot); EXPORT void gs_matrix_translate(const struct vec3 *pos); EXPORT void gs_matrix_scale(const struct vec3 *scale); EXPORT void gs_matrix_rotaa4f(float x, float y, float z, float angle); EXPORT void gs_matrix_translate3f(float x, float y, float z); EXPORT void gs_matrix_scale3f(float x, float y, float z); EXPORT 
void gs_render_start(bool b_new); EXPORT void gs_render_stop(enum gs_draw_mode mode); EXPORT gs_vertbuffer_t *gs_render_save(void); EXPORT void gs_vertex2f(float x, float y); EXPORT void gs_vertex3f(float x, float y, float z); EXPORT void gs_normal3f(float x, float y, float z); EXPORT void gs_color(uint32_t color); EXPORT void gs_texcoord(float x, float y, int unit); EXPORT void gs_vertex2v(const struct vec2 *v); EXPORT void gs_vertex3v(const struct vec3 *v); EXPORT void gs_normal3v(const struct vec3 *v); EXPORT void gs_color4v(const struct vec4 *v); EXPORT void gs_texcoord2v(const struct vec2 *v, int unit); EXPORT input_t *gs_get_input(void); EXPORT gs_effect_t *gs_get_effect(void); EXPORT gs_effect_t *gs_effect_create_from_file(const char *file, char **error_string); EXPORT gs_effect_t *gs_effect_create(const char *effect_string, const char *filename, char **error_string); EXPORT gs_shader_t *gs_vertexshader_create_from_file(const char *file, char **error_string); EXPORT gs_shader_t *gs_pixelshader_create_from_file(const char *file, char **error_string); enum gs_image_alpha_mode { GS_IMAGE_ALPHA_STRAIGHT, GS_IMAGE_ALPHA_PREMULTIPLY_SRGB, GS_IMAGE_ALPHA_PREMULTIPLY, }; EXPORT gs_texture_t *gs_texture_create_from_file(const char *file); EXPORT uint8_t *gs_create_texture_file_data(const char *file, enum gs_color_format *format, uint32_t *cx, uint32_t *cy); EXPORT uint8_t *gs_create_texture_file_data2(const char *file, enum gs_image_alpha_mode alpha_mode, enum gs_color_format *format, uint32_t *cx, uint32_t *cy); EXPORT uint8_t *gs_create_texture_file_data3(const char *file, enum gs_image_alpha_mode alpha_mode, enum gs_color_format *format, uint32_t *cx, uint32_t *cy, enum gs_color_space *space); #define GS_FLIP_U (1 << 0) #define GS_FLIP_V (1 << 1) /** * Draws a 2D sprite * * If width or height is 0, the width or height of the texture will be used. * The flip value specifies whether the texture should be flipped on the U or V * axis with GS_FLIP_U and GS_FLIP_V. 
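 *
 * Typical usage (sketch, not from this file): inside an active effect pass,
 * e.g. driven by the gs_effect_loop() helper declared above; `effect` and
 * `tex` are assumed to be valid handles owned by the caller:
 *
 *     while (gs_effect_loop(effect, "Draw"))
 *             gs_draw_sprite(tex, 0, 0, 0);
 *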
*/ EXPORT void gs_draw_sprite(gs_texture_t *tex, uint32_t flip, uint32_t width, uint32_t height); EXPORT void gs_draw_quadf(gs_texture_t *tex, uint32_t flip, float width, float height); EXPORT void gs_draw_sprite_subregion(gs_texture_t *tex, uint32_t flip, uint32_t x, uint32_t y, uint32_t cx, uint32_t cy); EXPORT void gs_draw_cube_backdrop(gs_texture_t *cubetex, const struct quat *rot, float left, float right, float top, float bottom, float znear); /** sets the viewport to current swap chain size */ EXPORT void gs_reset_viewport(void); /** sets default screen-sized orthographic mode */ EXPORT void gs_set_2d_mode(void); /** sets default screen-sized perspective mode */ EXPORT void gs_set_3d_mode(double fovy, double znear, double zvar); EXPORT void gs_viewport_push(void); EXPORT void gs_viewport_pop(void); EXPORT void gs_texture_set_image(gs_texture_t *tex, const uint8_t *data, uint32_t linesize, bool invert); EXPORT void gs_cubetexture_set_image(gs_texture_t *cubetex, uint32_t side, const void *data, uint32_t linesize, bool invert); EXPORT void gs_perspective(float fovy, float aspect, float znear, float zfar); EXPORT void gs_blend_state_push(void); EXPORT void gs_blend_state_pop(void); EXPORT void gs_reset_blend_state(void); /* -------------------------- */ /* library-specific functions */ EXPORT gs_swapchain_t *gs_swapchain_create(const struct gs_init_data *data); EXPORT void gs_resize(uint32_t x, uint32_t y); EXPORT void gs_update_color_space(void); EXPORT void gs_get_size(uint32_t *x, uint32_t *y); EXPORT uint32_t gs_get_width(void); EXPORT uint32_t gs_get_height(void); EXPORT gs_texture_t *gs_texture_create(uint32_t width, uint32_t height, enum gs_color_format color_format, uint32_t levels, const uint8_t **data, uint32_t flags); EXPORT gs_texture_t *gs_cubetexture_create(uint32_t size, enum gs_color_format color_format, uint32_t levels, const uint8_t **data, uint32_t flags); EXPORT gs_texture_t *gs_voltexture_create(uint32_t width, uint32_t height, uint32_t 
depth, enum gs_color_format color_format, uint32_t levels, const uint8_t **data, uint32_t flags); EXPORT gs_zstencil_t *gs_zstencil_create(uint32_t width, uint32_t height, enum gs_zstencil_format format); EXPORT gs_stagesurf_t *gs_stagesurface_create(uint32_t width, uint32_t height, enum gs_color_format color_format); EXPORT gs_samplerstate_t *gs_samplerstate_create(const struct gs_sampler_info *info); EXPORT gs_shader_t *gs_vertexshader_create(const char *shader, const char *file, char **error_string); EXPORT gs_shader_t *gs_pixelshader_create(const char *shader, const char *file, char **error_string); EXPORT gs_vertbuffer_t *gs_vertexbuffer_create(struct gs_vb_data *data, uint32_t flags); EXPORT gs_indexbuffer_t *gs_indexbuffer_create(enum gs_index_type type, void *indices, size_t num, uint32_t flags); EXPORT gs_timer_t *gs_timer_create(); EXPORT gs_timer_range_t *gs_timer_range_create(); EXPORT enum gs_texture_type gs_get_texture_type(const gs_texture_t *texture); EXPORT void gs_load_vertexbuffer(gs_vertbuffer_t *vertbuffer); EXPORT void gs_load_indexbuffer(gs_indexbuffer_t *indexbuffer); EXPORT void gs_load_texture(gs_texture_t *tex, int unit); EXPORT void gs_load_samplerstate(gs_samplerstate_t *samplerstate, int unit); EXPORT void gs_load_vertexshader(gs_shader_t *vertshader); EXPORT void gs_load_pixelshader(gs_shader_t *pixelshader); EXPORT void gs_load_default_samplerstate(bool b_3d, int unit); EXPORT gs_shader_t *gs_get_vertex_shader(void); EXPORT gs_shader_t *gs_get_pixel_shader(void); EXPORT enum gs_color_space gs_get_color_space(void); EXPORT gs_texture_t *gs_get_render_target(void); EXPORT gs_zstencil_t *gs_get_zstencil_target(void); EXPORT void gs_set_render_target(gs_texture_t *tex, gs_zstencil_t *zstencil); EXPORT void gs_set_render_target_with_color_space(gs_texture_t *tex, gs_zstencil_t *zstencil, enum gs_color_space space); EXPORT void gs_set_cube_render_target(gs_texture_t *cubetex, int side, gs_zstencil_t *zstencil); EXPORT void 
gs_enable_framebuffer_srgb(bool enable); EXPORT bool gs_framebuffer_srgb_enabled(void); EXPORT bool gs_get_linear_srgb(void); EXPORT bool gs_set_linear_srgb(bool linear_srgb); EXPORT void gs_copy_texture(gs_texture_t *dst, gs_texture_t *src); EXPORT void gs_copy_texture_region(gs_texture_t *dst, uint32_t dst_x, uint32_t dst_y, gs_texture_t *src, uint32_t src_x, uint32_t src_y, uint32_t src_w, uint32_t src_h); EXPORT void gs_stage_texture(gs_stagesurf_t *dst, gs_texture_t *src); EXPORT void gs_begin_frame(void); EXPORT void gs_begin_scene(void); EXPORT void gs_draw(enum gs_draw_mode draw_mode, uint32_t start_vert, uint32_t num_verts); EXPORT void gs_end_scene(void); #define GS_CLEAR_COLOR (1 << 0) #define GS_CLEAR_DEPTH (1 << 1) #define GS_CLEAR_STENCIL (1 << 2) EXPORT void gs_load_swapchain(gs_swapchain_t *swapchain); EXPORT void gs_clear(uint32_t clear_flags, const struct vec4 *color, float depth, uint8_t stencil); EXPORT bool gs_is_present_ready(void); EXPORT void gs_present(void); EXPORT void gs_flush(void); EXPORT void gs_set_cull_mode(enum gs_cull_mode mode); EXPORT enum gs_cull_mode gs_get_cull_mode(void); EXPORT void gs_enable_blending(bool enable); EXPORT void gs_enable_depth_test(bool enable); EXPORT void gs_enable_stencil_test(bool enable); EXPORT void gs_enable_stencil_write(bool enable); EXPORT void gs_enable_color(bool red, bool green, bool blue, bool alpha); EXPORT void gs_blend_function(enum gs_blend_type src, enum gs_blend_type dest); EXPORT void gs_blend_function_separate(enum gs_blend_type src_c, enum gs_blend_type dest_c, enum gs_blend_type src_a, enum gs_blend_type dest_a); EXPORT void gs_blend_op(enum gs_blend_op_type op); EXPORT void gs_depth_function(enum gs_depth_test test); EXPORT void gs_stencil_function(enum gs_stencil_side side, enum gs_depth_test test); EXPORT void gs_stencil_op(enum gs_stencil_side side, enum gs_stencil_op_type fail, enum gs_stencil_op_type zfail, enum gs_stencil_op_type zpass); EXPORT void gs_set_viewport(int x, int 
y, int width, int height); EXPORT void gs_get_viewport(struct gs_rect *rect); EXPORT void gs_set_scissor_rect(const struct gs_rect *rect); EXPORT void gs_ortho(float left, float right, float top, float bottom, float znear, float zfar); EXPORT void gs_frustum(float left, float right, float top, float bottom, float znear, float zfar); EXPORT void gs_projection_push(void); EXPORT void gs_projection_pop(void); EXPORT void gs_swapchain_destroy(gs_swapchain_t *swapchain); EXPORT void gs_texture_destroy(gs_texture_t *tex); EXPORT uint32_t gs_texture_get_width(const gs_texture_t *tex); EXPORT uint32_t gs_texture_get_height(const gs_texture_t *tex); EXPORT enum gs_color_format gs_texture_get_color_format(const gs_texture_t *tex); EXPORT bool gs_texture_map(gs_texture_t *tex, uint8_t **ptr, uint32_t *linesize); EXPORT void gs_texture_unmap(gs_texture_t *tex); /** special-case function (GL only) - specifies whether the texture is a * GL_TEXTURE_RECTANGLE type, which doesn't use normalized texture * coordinates, doesn't support mipmapping, and requires address clamping */ EXPORT bool gs_texture_is_rect(const gs_texture_t *tex); /** * Gets a pointer to the context-specific object associated with the texture. * For example, for GL, this is a GLuint*. For D3D11, ID3D11Texture2D*. 
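 *
 * Example (sketch, GL builds only, assuming `tex` is a valid 2D texture):
 * since the object is a GLuint*, the GL texture name can be read as
 *
 *     GLuint name = *(GLuint *)gs_texture_get_obj(tex);
 *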
*/ EXPORT void *gs_texture_get_obj(gs_texture_t *tex); EXPORT void gs_cubetexture_destroy(gs_texture_t *cubetex); EXPORT uint32_t gs_cubetexture_get_size(const gs_texture_t *cubetex); EXPORT enum gs_color_format gs_cubetexture_get_color_format(const gs_texture_t *cubetex); EXPORT void gs_voltexture_destroy(gs_texture_t *voltex); EXPORT uint32_t gs_voltexture_get_width(const gs_texture_t *voltex); EXPORT uint32_t gs_voltexture_get_height(const gs_texture_t *voltex); EXPORT uint32_t gs_voltexture_get_depth(const gs_texture_t *voltex); EXPORT enum gs_color_format gs_voltexture_get_color_format(const gs_texture_t *voltex); EXPORT void gs_stagesurface_destroy(gs_stagesurf_t *stagesurf); EXPORT uint32_t gs_stagesurface_get_width(const gs_stagesurf_t *stagesurf); EXPORT uint32_t gs_stagesurface_get_height(const gs_stagesurf_t *stagesurf); EXPORT enum gs_color_format gs_stagesurface_get_color_format(const gs_stagesurf_t *stagesurf); EXPORT bool gs_stagesurface_map(gs_stagesurf_t *stagesurf, uint8_t **data, uint32_t *linesize); EXPORT void gs_stagesurface_unmap(gs_stagesurf_t *stagesurf); EXPORT void gs_zstencil_destroy(gs_zstencil_t *zstencil); EXPORT void gs_samplerstate_destroy(gs_samplerstate_t *samplerstate); EXPORT void gs_vertexbuffer_destroy(gs_vertbuffer_t *vertbuffer); EXPORT void gs_vertexbuffer_flush(gs_vertbuffer_t *vertbuffer); EXPORT void gs_vertexbuffer_flush_direct(gs_vertbuffer_t *vertbuffer, const struct gs_vb_data *data); EXPORT struct gs_vb_data *gs_vertexbuffer_get_data(const gs_vertbuffer_t *vertbuffer); EXPORT void gs_indexbuffer_destroy(gs_indexbuffer_t *indexbuffer); EXPORT void gs_indexbuffer_flush(gs_indexbuffer_t *indexbuffer); EXPORT void gs_indexbuffer_flush_direct(gs_indexbuffer_t *indexbuffer, const void *data); EXPORT void *gs_indexbuffer_get_data(const gs_indexbuffer_t *indexbuffer); EXPORT size_t gs_indexbuffer_get_num_indices(const gs_indexbuffer_t *indexbuffer); EXPORT enum gs_index_type gs_indexbuffer_get_type(const gs_indexbuffer_t 
*indexbuffer); EXPORT void gs_timer_destroy(gs_timer_t *timer); EXPORT void gs_timer_begin(gs_timer_t *timer); EXPORT void gs_timer_end(gs_timer_t *timer); EXPORT bool gs_timer_get_data(gs_timer_t *timer, uint64_t *ticks); EXPORT void gs_timer_range_destroy(gs_timer_range_t *timer); EXPORT void gs_timer_range_begin(gs_timer_range_t *range); EXPORT void gs_timer_range_end(gs_timer_range_t *range); EXPORT bool gs_timer_range_get_data(gs_timer_range_t *range, bool *disjoint, uint64_t *frequency); EXPORT bool gs_nv12_available(void); EXPORT bool gs_p010_available(void); EXPORT bool gs_texture_create_nv12(gs_texture_t **tex_y, gs_texture_t **tex_uv, uint32_t width, uint32_t height, uint32_t flags); EXPORT bool gs_texture_create_p010(gs_texture_t **tex_y, gs_texture_t **tex_uv, uint32_t width, uint32_t height, uint32_t flags); EXPORT bool gs_is_monitor_hdr(void *monitor); #define GS_USE_DEBUG_MARKERS 0 #if GS_USE_DEBUG_MARKERS static const float GS_DEBUG_COLOR_DEFAULT[] = {0.5f, 0.5f, 0.5f, 1.0f}; static const float GS_DEBUG_COLOR_RENDER_VIDEO[] = {0.0f, 0.5f, 0.0f, 1.0f}; static const float GS_DEBUG_COLOR_MAIN_TEXTURE[] = {0.0f, 0.25f, 0.0f, 1.0f}; static const float GS_DEBUG_COLOR_DISPLAY[] = {0.0f, 0.5f, 0.5f, 1.0f}; static const float GS_DEBUG_COLOR_SOURCE[] = {0.0f, 0.5f, 5.0f, 1.0f}; static const float GS_DEBUG_COLOR_ITEM[] = {0.5f, 0.0f, 0.0f, 1.0f}; static const float GS_DEBUG_COLOR_ITEM_TEXTURE[] = {0.25f, 0.0f, 0.0f, 1.0f}; static const float GS_DEBUG_COLOR_CONVERT_FORMAT[] = {0.5f, 0.5f, 0.0f, 1.0f}; #define GS_DEBUG_MARKER_BEGIN(color, markername) gs_debug_marker_begin(color, markername) #define GS_DEBUG_MARKER_BEGIN_FORMAT(color, format, ...) gs_debug_marker_begin_format(color, format, __VA_ARGS__) #define GS_DEBUG_MARKER_END() gs_debug_marker_end() #else #define GS_DEBUG_MARKER_BEGIN(color, markername) ((void)0) #define GS_DEBUG_MARKER_BEGIN_FORMAT(color, format, ...) 
((void)0) #define GS_DEBUG_MARKER_END() ((void)0) #endif EXPORT void gs_debug_marker_begin(const float color[4], const char *markername); EXPORT void gs_debug_marker_begin_format(const float color[4], const char *format, ...); EXPORT void gs_debug_marker_end(void); #ifdef __APPLE__ /** platform specific function for creating (GL_TEXTURE_RECTANGLE) textures * from shared surface resources */ EXPORT gs_texture_t *gs_texture_create_from_iosurface(void *iosurf); EXPORT bool gs_texture_rebind_iosurface(gs_texture_t *texture, void *iosurf); EXPORT gs_texture_t *gs_texture_open_shared(uint32_t handle); EXPORT bool gs_shared_texture_available(void); #elif _WIN32 EXPORT bool gs_gdi_texture_available(void); EXPORT bool gs_shared_texture_available(void); struct gs_duplicator; typedef struct gs_duplicator gs_duplicator_t; /** * Gets information about the monitor at the specific index, returns false * when there is no monitor at the specified index */ EXPORT bool gs_get_duplicator_monitor_info(int monitor_idx, struct gs_monitor_info *monitor_info); EXPORT int gs_duplicator_get_monitor_index(void *monitor); /** creates a windows 8+ output duplicator (monitor capture) */ EXPORT gs_duplicator_t *gs_duplicator_create(int monitor_idx); EXPORT void gs_duplicator_destroy(gs_duplicator_t *duplicator); EXPORT bool gs_duplicator_update_frame(gs_duplicator_t *duplicator); EXPORT gs_texture_t *gs_duplicator_get_texture(gs_duplicator_t *duplicator); EXPORT enum gs_color_space gs_duplicator_get_color_space(gs_duplicator_t *duplicator); EXPORT float gs_duplicator_get_sdr_white_level(gs_duplicator_t *duplicator); EXPORT bool gs_can_adapter_fast_clear(void); /** creates a windows GDI-lockable texture */ EXPORT gs_texture_t *gs_texture_create_gdi(uint32_t width, uint32_t height); EXPORT void *gs_texture_get_dc(gs_texture_t *gdi_tex); EXPORT void gs_texture_release_dc(gs_texture_t *gdi_tex); /** creates a windows shared texture from a texture handle */ EXPORT gs_texture_t 
*gs_texture_open_shared(uint32_t handle); EXPORT gs_texture_t *gs_texture_open_nt_shared(uint32_t handle); #define GS_INVALID_HANDLE (uint32_t) - 1 EXPORT uint32_t gs_texture_get_shared_handle(gs_texture_t *tex); EXPORT gs_texture_t *gs_texture_wrap_obj(void *obj); #define GS_WAIT_INFINITE (uint32_t) - 1 /** * acquires a lock on a keyed mutex texture. * returns -1 on generic failure, ETIMEDOUT if timed out */ EXPORT int gs_texture_acquire_sync(gs_texture_t *tex, uint64_t key, uint32_t ms); /** * releases a lock on a keyed mutex texture to another device. * return 0 on success, -1 on error */ EXPORT int gs_texture_release_sync(gs_texture_t *tex, uint64_t key); EXPORT gs_stagesurf_t *gs_stagesurface_create_nv12(uint32_t width, uint32_t height); EXPORT gs_stagesurf_t *gs_stagesurface_create_p010(uint32_t width, uint32_t height); EXPORT void gs_register_loss_callbacks(const struct gs_device_loss *callbacks); EXPORT void gs_unregister_loss_callbacks(void *data); #elif defined(__linux__) || defined(__FreeBSD__) || defined(__DragonFly__) EXPORT gs_texture_t *gs_texture_create_from_dmabuf(unsigned int width, unsigned int height, uint32_t drm_format, enum gs_color_format color_format, uint32_t n_planes, const int *fds, const uint32_t *strides, const uint32_t *offsets, const uint64_t *modifiers); enum gs_dmabuf_flags { GS_DMABUF_FLAG_NONE = 0, GS_DMABUF_FLAG_IMPLICIT_MODIFIERS_SUPPORTED = (1 << 0), }; EXPORT bool gs_query_dmabuf_capabilities(enum gs_dmabuf_flags *dmabuf_flags, uint32_t **drm_formats, size_t *n_formats); EXPORT bool gs_query_dmabuf_modifiers_for_format(uint32_t drm_format, uint64_t **modifiers, size_t *n_modifiers); EXPORT gs_texture_t *gs_texture_create_from_pixmap(uint32_t width, uint32_t height, enum gs_color_format color_format, uint32_t target, void *pixmap); EXPORT bool gs_query_sync_capabilities(void); EXPORT gs_sync_t *gs_sync_create(void); EXPORT gs_sync_t *gs_sync_create_from_syncobj_timeline_point(int syncobj_fd, uint64_t timeline_point); EXPORT 
void gs_sync_destroy(gs_sync_t *sync); EXPORT bool gs_sync_export_syncobj_timeline_point(gs_sync_t *sync, int syncobj_fd, uint64_t timeline_point); EXPORT bool gs_sync_signal_syncobj_timeline_point(int syncobj_fd, uint64_t timeline_point); EXPORT bool gs_sync_wait(gs_sync_t *sync); #endif /* inline functions used by modules */ static inline uint32_t gs_get_format_bpp(enum gs_color_format format) { switch (format) { case GS_DXT1: return 4; case GS_A8: case GS_R8: case GS_DXT3: case GS_DXT5: return 8; case GS_R16: case GS_R16F: case GS_R8G8: return 16; case GS_RGBA: case GS_BGRX: case GS_BGRA: case GS_R10G10B10A2: case GS_RG16F: case GS_R32F: case GS_RGBA_UNORM: case GS_BGRX_UNORM: case GS_BGRA_UNORM: case GS_RG16: return 32; case GS_RGBA16: case GS_RGBA16F: case GS_RG32F: return 64; case GS_RGBA32F: return 128; case GS_UNKNOWN: return 0; } return 0; } static inline bool gs_is_compressed_format(enum gs_color_format format) { return (format == GS_DXT1 || format == GS_DXT3 || format == GS_DXT5); } static inline bool gs_is_srgb_format(enum gs_color_format format) { switch (format) { case GS_RGBA: case GS_BGRX: case GS_BGRA: return true; default: return false; } } static inline enum gs_color_format gs_generalize_format(enum gs_color_format format) { switch (format) { case GS_RGBA_UNORM: return GS_RGBA; case GS_BGRX_UNORM: return GS_BGRX; case GS_BGRA_UNORM: return GS_BGRA; default: return format; } } static inline enum gs_color_format gs_get_format_from_space(enum gs_color_space space) { switch (space) { case GS_CS_SRGB: break; case GS_CS_SRGB_16F: case GS_CS_709_EXTENDED: case GS_CS_709_SCRGB: return GS_RGBA16F; } return GS_RGBA; } static inline uint32_t gs_get_total_levels(uint32_t width, uint32_t height, uint32_t depth) { uint32_t size = width > height ? width : height; size = size > depth ? 
size : depth;

	uint32_t num_levels = 1;

	while (size > 1) {
		size /= 2;
		num_levels++;
	}

	return num_levels;
}

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/graphics/plane.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "../util/c99defs.h"
#include "matrix3.h"
#include "plane.h"

void plane_from_tri(struct plane *dst, const struct vec3 *v1, const struct vec3 *v2, const struct vec3 *v3)
{
	struct vec3 temp;

	vec3_sub(&temp, v2, v1);
	vec3_sub(&dst->dir, v3, v1);
	vec3_cross(&dst->dir, &temp, &dst->dir);
	vec3_norm(&dst->dir, &dst->dir);

	dst->dist = vec3_dot(v1, &dst->dir);
}

void plane_transform(struct plane *dst, const struct plane *p, const struct matrix4 *m)
{
	struct vec3 temp;

	vec3_zero(&temp);
	vec3_transform(&dst->dir, &p->dir, m);
	vec3_norm(&dst->dir, &dst->dir);
	vec3_transform(&temp, &temp, m);
	dst->dist = p->dist - vec3_dot(&dst->dir, &temp);
}

void plane_transform3x4(struct plane *dst, const struct plane *p, const struct matrix3 *m)
{
	struct vec3 temp;

	vec3_transform3x4(&dst->dir, &p->dir, m);
	vec3_norm(&dst->dir, &dst->dir);
	vec3_transform3x4(&temp, &m->t, m);
	dst->dist = p->dist - vec3_dot(&dst->dir, &temp);
}

bool plane_intersection_ray(const struct plane *p, const struct vec3 *orig, const struct vec3 *dir, float *t)
{
	float c = vec3_dot(&p->dir, dir);

	if (fabsf(c) < EPSILON) {
		*t = 0.0f;
		return false;
	} else {
		*t = (p->dist - vec3_dot(&p->dir, orig)) / c;
		return true;
	}
}

bool plane_intersection_line(const struct plane *p, const struct vec3 *v1, const struct vec3 *v2, float *t)
{
	float p1_dist, p2_dist, p1_abs_dist, dist2;
	bool p1_over, p2_over;

	p1_dist = vec3_plane_dist(v1, p);
	p2_dist = vec3_plane_dist(v2, p);

	if (close_float(p1_dist, 0.0f, EPSILON)) {
		if (close_float(p2_dist, 0.0f, EPSILON))
			return false;

		*t = 0.0f;
		return true;
	} else if (close_float(p2_dist, 0.0f, EPSILON)) {
		*t = 1.0f;
		return true;
	}

	p1_over = (p1_dist > 0.0f);
	p2_over = (p2_dist > 0.0f);
	if (p1_over == p2_over)
		return false;

	p1_abs_dist = fabsf(p1_dist);
	dist2 = p1_abs_dist + fabsf(p2_dist);
	if (dist2 < EPSILON)
		return false;

	*t = p1_abs_dist / dist2;
	return true;
}

bool plane_tri_inside(const struct plane *p, const struct vec3 *v1, const struct vec3 *v2, const struct vec3 *v3, float precision)
{
	/* bit 1: part or all is behind the plane */
	/* bit 2: part or all is in front of the plane */
	int sides = 0;

	float d1 = vec3_plane_dist(v1, p);
	float d2 = vec3_plane_dist(v2, p);
	float d3 = vec3_plane_dist(v3, p);

	if (d1 >= precision)
		sides = 2;
	else if (d1 <= -precision)
		sides = 1;

	if (d2 >= precision)
		sides |= 2;
	else if (d2 <= -precision)
		sides |= 1;

	if (d3 >= precision)
		sides |= 2;
	else if (d3 <= -precision)
		sides |= 1;

	return sides;
}

bool plane_line_inside(const struct plane *p, const struct vec3 *v1, const struct vec3 *v2, float precision)
{
	/* bit 1: part or all is behind the plane */
	/* bit 2: part or all is in front of the plane */
	int sides = 0;

	float d1 = vec3_plane_dist(v1, p);
	float d2 = vec3_plane_dist(v2, p);

	if (d1 >= precision)
		sides = 2;
	else if (d1 <= -precision)
		sides = 1;

	if (d2 >= precision)
		sides |= 2;
	else if (d2 <= -precision)
		sides |= 1;

	return sides;
}
obs-studio-32.1.0-sources/libobs/graphics/graphics-imports.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "../util/base.h"
#include "../util/dstr.h"
#include "../util/platform.h"
#include "graphics-internal.h"

#define GRAPHICS_IMPORT(func)                                     \
	do {                                                      \
		exports->func = os_dlsym(module, #func);          \
		if (!exports->func) {                             \
			success = false;                          \
			blog(LOG_ERROR,                           \
			     "Could not load function '%s' from " \
			     "module '%s'",                       \
			     #func, module_name);                 \
		}                                                 \
	} while (false)

#define GRAPHICS_IMPORT_OPTIONAL(func)                   \
	do {                                             \
		exports->func = os_dlsym(module, #func); \
	} while (false)

bool load_graphics_imports(struct gs_exports *exports, void *module, const char *module_name)
{
	bool success = true;

	GRAPHICS_IMPORT(device_get_name);
	GRAPHICS_IMPORT_OPTIONAL(gpu_get_driver_version);
	GRAPHICS_IMPORT_OPTIONAL(gpu_get_renderer);
	GRAPHICS_IMPORT_OPTIONAL(gpu_get_dmem);
	GRAPHICS_IMPORT_OPTIONAL(gpu_get_smem);
	GRAPHICS_IMPORT(device_get_type);
	GRAPHICS_IMPORT_OPTIONAL(device_enum_adapters);
	GRAPHICS_IMPORT(device_preprocessor_name);
	GRAPHICS_IMPORT(device_create);
	GRAPHICS_IMPORT(device_destroy);
	GRAPHICS_IMPORT(device_enter_context);
	GRAPHICS_IMPORT(device_leave_context);
GRAPHICS_IMPORT(device_get_device_obj); GRAPHICS_IMPORT(device_swapchain_create); GRAPHICS_IMPORT(device_resize); GRAPHICS_IMPORT(device_get_color_space); GRAPHICS_IMPORT(device_update_color_space); GRAPHICS_IMPORT(device_get_size); GRAPHICS_IMPORT(device_get_width); GRAPHICS_IMPORT(device_get_height); GRAPHICS_IMPORT(device_texture_create); GRAPHICS_IMPORT(device_cubetexture_create); GRAPHICS_IMPORT(device_voltexture_create); GRAPHICS_IMPORT(device_zstencil_create); GRAPHICS_IMPORT(device_stagesurface_create); GRAPHICS_IMPORT(device_samplerstate_create); GRAPHICS_IMPORT(device_vertexshader_create); GRAPHICS_IMPORT(device_pixelshader_create); GRAPHICS_IMPORT(device_vertexbuffer_create); GRAPHICS_IMPORT(device_indexbuffer_create); GRAPHICS_IMPORT(device_timer_create); GRAPHICS_IMPORT(device_timer_range_create); GRAPHICS_IMPORT(device_get_texture_type); GRAPHICS_IMPORT(device_load_vertexbuffer); GRAPHICS_IMPORT(device_load_indexbuffer); GRAPHICS_IMPORT(device_load_texture); GRAPHICS_IMPORT(device_load_samplerstate); GRAPHICS_IMPORT(device_load_vertexshader); GRAPHICS_IMPORT(device_load_pixelshader); GRAPHICS_IMPORT(device_load_default_samplerstate); GRAPHICS_IMPORT(device_get_vertex_shader); GRAPHICS_IMPORT(device_get_pixel_shader); GRAPHICS_IMPORT(device_get_render_target); GRAPHICS_IMPORT(device_get_zstencil_target); GRAPHICS_IMPORT(device_set_render_target); GRAPHICS_IMPORT(device_set_render_target_with_color_space); GRAPHICS_IMPORT(device_set_cube_render_target); GRAPHICS_IMPORT(device_enable_framebuffer_srgb); GRAPHICS_IMPORT(device_framebuffer_srgb_enabled); GRAPHICS_IMPORT(device_copy_texture_region); GRAPHICS_IMPORT(device_copy_texture); GRAPHICS_IMPORT(device_stage_texture); GRAPHICS_IMPORT(device_begin_frame); GRAPHICS_IMPORT(device_begin_scene); GRAPHICS_IMPORT(device_draw); GRAPHICS_IMPORT(device_load_swapchain); GRAPHICS_IMPORT(device_end_scene); GRAPHICS_IMPORT(device_clear); GRAPHICS_IMPORT(device_is_present_ready); GRAPHICS_IMPORT(device_present); 
GRAPHICS_IMPORT(device_flush); GRAPHICS_IMPORT(device_set_cull_mode); GRAPHICS_IMPORT(device_get_cull_mode); GRAPHICS_IMPORT(device_enable_blending); GRAPHICS_IMPORT(device_enable_depth_test); GRAPHICS_IMPORT(device_enable_stencil_test); GRAPHICS_IMPORT(device_enable_stencil_write); GRAPHICS_IMPORT(device_enable_color); GRAPHICS_IMPORT(device_blend_function); GRAPHICS_IMPORT(device_blend_function_separate); GRAPHICS_IMPORT(device_blend_op); GRAPHICS_IMPORT(device_depth_function); GRAPHICS_IMPORT(device_stencil_function); GRAPHICS_IMPORT(device_stencil_op); GRAPHICS_IMPORT(device_set_viewport); GRAPHICS_IMPORT(device_get_viewport); GRAPHICS_IMPORT(device_set_scissor_rect); GRAPHICS_IMPORT(device_ortho); GRAPHICS_IMPORT(device_frustum); GRAPHICS_IMPORT(device_projection_push); GRAPHICS_IMPORT(device_projection_pop); GRAPHICS_IMPORT(gs_swapchain_destroy); GRAPHICS_IMPORT(gs_texture_destroy); GRAPHICS_IMPORT(gs_texture_get_width); GRAPHICS_IMPORT(gs_texture_get_height); GRAPHICS_IMPORT(gs_texture_get_color_format); GRAPHICS_IMPORT(gs_texture_map); GRAPHICS_IMPORT(gs_texture_unmap); GRAPHICS_IMPORT_OPTIONAL(gs_texture_is_rect); GRAPHICS_IMPORT(gs_texture_get_obj); GRAPHICS_IMPORT(gs_cubetexture_destroy); GRAPHICS_IMPORT(gs_cubetexture_get_size); GRAPHICS_IMPORT(gs_cubetexture_get_color_format); GRAPHICS_IMPORT(gs_voltexture_destroy); GRAPHICS_IMPORT(gs_voltexture_get_width); GRAPHICS_IMPORT(gs_voltexture_get_height); GRAPHICS_IMPORT(gs_voltexture_get_depth); GRAPHICS_IMPORT(gs_voltexture_get_color_format); GRAPHICS_IMPORT(gs_stagesurface_destroy); GRAPHICS_IMPORT(gs_stagesurface_get_width); GRAPHICS_IMPORT(gs_stagesurface_get_height); GRAPHICS_IMPORT(gs_stagesurface_get_color_format); GRAPHICS_IMPORT(gs_stagesurface_map); GRAPHICS_IMPORT(gs_stagesurface_unmap); GRAPHICS_IMPORT(gs_zstencil_destroy); GRAPHICS_IMPORT(gs_samplerstate_destroy); GRAPHICS_IMPORT(gs_vertexbuffer_destroy); GRAPHICS_IMPORT(gs_vertexbuffer_flush); GRAPHICS_IMPORT(gs_vertexbuffer_flush_direct); 
GRAPHICS_IMPORT(gs_vertexbuffer_get_data); GRAPHICS_IMPORT(gs_indexbuffer_destroy); GRAPHICS_IMPORT(gs_indexbuffer_flush); GRAPHICS_IMPORT(gs_indexbuffer_flush_direct); GRAPHICS_IMPORT(gs_indexbuffer_get_data); GRAPHICS_IMPORT(gs_indexbuffer_get_num_indices); GRAPHICS_IMPORT(gs_indexbuffer_get_type); GRAPHICS_IMPORT(gs_timer_destroy); GRAPHICS_IMPORT(gs_timer_begin); GRAPHICS_IMPORT(gs_timer_end); GRAPHICS_IMPORT(gs_timer_get_data); GRAPHICS_IMPORT(gs_timer_range_destroy); GRAPHICS_IMPORT(gs_timer_range_begin); GRAPHICS_IMPORT(gs_timer_range_end); GRAPHICS_IMPORT(gs_timer_range_get_data); GRAPHICS_IMPORT(gs_shader_destroy); GRAPHICS_IMPORT(gs_shader_get_num_params); GRAPHICS_IMPORT(gs_shader_get_param_by_idx); GRAPHICS_IMPORT(gs_shader_get_param_by_name); GRAPHICS_IMPORT(gs_shader_get_viewproj_matrix); GRAPHICS_IMPORT(gs_shader_get_world_matrix); GRAPHICS_IMPORT(gs_shader_get_param_info); GRAPHICS_IMPORT(gs_shader_set_bool); GRAPHICS_IMPORT(gs_shader_set_float); GRAPHICS_IMPORT(gs_shader_set_int); GRAPHICS_IMPORT(gs_shader_set_matrix3); GRAPHICS_IMPORT(gs_shader_set_matrix4); GRAPHICS_IMPORT(gs_shader_set_vec2); GRAPHICS_IMPORT(gs_shader_set_vec3); GRAPHICS_IMPORT(gs_shader_set_vec4); GRAPHICS_IMPORT(gs_shader_set_texture); GRAPHICS_IMPORT(gs_shader_set_val); GRAPHICS_IMPORT(gs_shader_set_default); GRAPHICS_IMPORT(gs_shader_set_next_sampler); GRAPHICS_IMPORT_OPTIONAL(device_nv12_available); GRAPHICS_IMPORT_OPTIONAL(device_p010_available); GRAPHICS_IMPORT_OPTIONAL(device_texture_create_nv12); GRAPHICS_IMPORT_OPTIONAL(device_texture_create_p010); GRAPHICS_IMPORT(device_is_monitor_hdr); GRAPHICS_IMPORT(device_debug_marker_begin); GRAPHICS_IMPORT(device_debug_marker_end); GRAPHICS_IMPORT_OPTIONAL(gs_get_adapter_count); /* OSX/Cocoa specific functions */ #ifdef __APPLE__ GRAPHICS_IMPORT(device_shared_texture_available); GRAPHICS_IMPORT(device_texture_open_shared); GRAPHICS_IMPORT(device_texture_create_from_iosurface); GRAPHICS_IMPORT(gs_texture_rebind_iosurface); /* 
win32 specific functions */ #elif _WIN32 GRAPHICS_IMPORT(device_gdi_texture_available); GRAPHICS_IMPORT(device_shared_texture_available); GRAPHICS_IMPORT_OPTIONAL(device_get_duplicator_monitor_info); GRAPHICS_IMPORT_OPTIONAL(device_duplicator_get_monitor_index); GRAPHICS_IMPORT_OPTIONAL(device_duplicator_create); GRAPHICS_IMPORT_OPTIONAL(gs_duplicator_destroy); GRAPHICS_IMPORT_OPTIONAL(gs_duplicator_update_frame); GRAPHICS_IMPORT_OPTIONAL(gs_duplicator_get_texture); GRAPHICS_IMPORT_OPTIONAL(gs_duplicator_get_color_space); GRAPHICS_IMPORT_OPTIONAL(gs_duplicator_get_sdr_white_level); GRAPHICS_IMPORT_OPTIONAL(device_can_adapter_fast_clear); GRAPHICS_IMPORT_OPTIONAL(device_texture_create_gdi); GRAPHICS_IMPORT_OPTIONAL(gs_texture_get_dc); GRAPHICS_IMPORT_OPTIONAL(gs_texture_release_dc); GRAPHICS_IMPORT_OPTIONAL(device_texture_open_shared); GRAPHICS_IMPORT_OPTIONAL(device_texture_open_nt_shared); GRAPHICS_IMPORT_OPTIONAL(device_texture_get_shared_handle); GRAPHICS_IMPORT_OPTIONAL(device_texture_wrap_obj); GRAPHICS_IMPORT_OPTIONAL(device_texture_acquire_sync); GRAPHICS_IMPORT_OPTIONAL(device_texture_release_sync); GRAPHICS_IMPORT_OPTIONAL(device_stagesurface_create_nv12); GRAPHICS_IMPORT_OPTIONAL(device_stagesurface_create_p010); GRAPHICS_IMPORT_OPTIONAL(device_register_loss_callbacks); GRAPHICS_IMPORT_OPTIONAL(device_unregister_loss_callbacks); #elif defined(__linux__) || defined(__FreeBSD__) || defined(__DragonFly__) GRAPHICS_IMPORT(device_texture_create_from_dmabuf); GRAPHICS_IMPORT(device_query_dmabuf_capabilities); GRAPHICS_IMPORT(device_query_dmabuf_modifiers_for_format); GRAPHICS_IMPORT(device_texture_create_from_pixmap); GRAPHICS_IMPORT(device_query_sync_capabilities); GRAPHICS_IMPORT(device_sync_create); GRAPHICS_IMPORT(device_sync_create_from_syncobj_timeline_point); GRAPHICS_IMPORT(device_sync_destroy); GRAPHICS_IMPORT(device_sync_export_syncobj_timeline_point); GRAPHICS_IMPORT(device_sync_signal_syncobj_timeline_point); GRAPHICS_IMPORT(device_sync_wait); 
#endif

	return success;
}

obs-studio-32.1.0-sources/libobs/graphics/basemath.hpp

#pragma once

/* TODO: C++ math wrappers */

obs-studio-32.1.0-sources/libobs/graphics/shader-parser.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #include "../util/platform.h" #include "shader-parser.h" enum gs_shader_param_type get_shader_param_type(const char *type) { if (strcmp(type, "float") == 0) return GS_SHADER_PARAM_FLOAT; else if (strcmp(type, "float2") == 0) return GS_SHADER_PARAM_VEC2; else if (strcmp(type, "float3") == 0) return GS_SHADER_PARAM_VEC3; else if (strcmp(type, "float4") == 0) return GS_SHADER_PARAM_VEC4; else if (strcmp(type, "int2") == 0) return GS_SHADER_PARAM_INT2; else if (strcmp(type, "int3") == 0) return GS_SHADER_PARAM_INT3; else if (strcmp(type, "int4") == 0) return GS_SHADER_PARAM_INT4; else if (astrcmp_n(type, "texture", 7) == 0) return GS_SHADER_PARAM_TEXTURE; else if (strcmp(type, "float4x4") == 0) return GS_SHADER_PARAM_MATRIX4X4; else if (strcmp(type, "bool") == 0) return GS_SHADER_PARAM_BOOL; else if (strcmp(type, "int") == 0) return GS_SHADER_PARAM_INT; else if (strcmp(type, "string") == 0) return GS_SHADER_PARAM_STRING; return GS_SHADER_PARAM_UNKNOWN; } enum gs_sample_filter get_sample_filter(const char *filter) { if (astrcmpi(filter, "Anisotropy") == 0) return GS_FILTER_ANISOTROPIC; else if (astrcmpi(filter, "Point") == 0 || strcmp(filter, "MIN_MAG_MIP_POINT") == 0) return GS_FILTER_POINT; else if (astrcmpi(filter, "Linear") == 0 || strcmp(filter, "MIN_MAG_MIP_LINEAR") == 0) return GS_FILTER_LINEAR; else if (strcmp(filter, "MIN_MAG_POINT_MIP_LINEAR") == 0) return GS_FILTER_MIN_MAG_POINT_MIP_LINEAR; else if (strcmp(filter, "MIN_POINT_MAG_LINEAR_MIP_POINT") == 0) return GS_FILTER_MIN_POINT_MAG_LINEAR_MIP_POINT; else if (strcmp(filter, "MIN_POINT_MAG_MIP_LINEAR") == 0) return GS_FILTER_MIN_POINT_MAG_MIP_LINEAR; else if (strcmp(filter, "MIN_LINEAR_MAG_MIP_POINT") == 0) return GS_FILTER_MIN_LINEAR_MAG_MIP_POINT; else if (strcmp(filter, "MIN_LINEAR_MAG_POINT_MIP_LINEAR") == 0) return GS_FILTER_MIN_LINEAR_MAG_POINT_MIP_LINEAR; else if (strcmp(filter, "MIN_MAG_LINEAR_MIP_POINT") == 0) return 
GS_FILTER_MIN_MAG_LINEAR_MIP_POINT; return GS_FILTER_LINEAR; } extern enum gs_address_mode get_address_mode(const char *mode) { if (astrcmpi(mode, "Wrap") == 0 || astrcmpi(mode, "Repeat") == 0) return GS_ADDRESS_WRAP; else if (astrcmpi(mode, "Clamp") == 0 || astrcmpi(mode, "None") == 0) return GS_ADDRESS_CLAMP; else if (astrcmpi(mode, "Mirror") == 0) return GS_ADDRESS_MIRROR; else if (astrcmpi(mode, "Border") == 0) return GS_ADDRESS_BORDER; else if (astrcmpi(mode, "MirrorOnce") == 0) return GS_ADDRESS_MIRRORONCE; return GS_ADDRESS_CLAMP; } void shader_sampler_convert(struct shader_sampler *ss, struct gs_sampler_info *info) { size_t i; memset(info, 0, sizeof(struct gs_sampler_info)); info->max_anisotropy = 1; for (i = 0; i < ss->states.num; i++) { const char *state = ss->states.array[i]; const char *value = ss->values.array[i]; if (astrcmpi(state, "Filter") == 0) info->filter = get_sample_filter(value); else if (astrcmpi(state, "AddressU") == 0) info->address_u = get_address_mode(value); else if (astrcmpi(state, "AddressV") == 0) info->address_v = get_address_mode(value); else if (astrcmpi(state, "AddressW") == 0) info->address_w = get_address_mode(value); else if (astrcmpi(state, "MaxAnisotropy") == 0) info->max_anisotropy = (int)strtol(value, NULL, 10); else if (astrcmpi(state, "BorderColor") == 0) info->border_color = strtol(value + 1, NULL, 16); } } /* ------------------------------------------------------------------------- */ static int sp_parse_sampler_state_item(struct shader_parser *sp, struct shader_sampler *ss) { int ret; char *state = NULL, *value = NULL; ret = cf_next_name(&sp->cfp, &state, "state name", ";"); if (ret != PARSE_SUCCESS) goto fail; ret = cf_next_token_should_be(&sp->cfp, "=", ";", NULL); if (ret != PARSE_SUCCESS) goto fail; ret = cf_next_token_copy(&sp->cfp, &value); if (ret != PARSE_SUCCESS) goto fail; ret = cf_next_token_should_be(&sp->cfp, ";", ";", NULL); if (ret != PARSE_SUCCESS) goto fail; da_push_back(ss->states, &state); 
da_push_back(ss->values, &value); return ret; fail: bfree(state); bfree(value); return ret; } static void sp_parse_sampler_state(struct shader_parser *sp) { struct shader_sampler ss; struct cf_token peek; shader_sampler_init(&ss); if (cf_next_name(&sp->cfp, &ss.name, "name", ";") != PARSE_SUCCESS) goto error; if (cf_next_token_should_be(&sp->cfp, "{", ";", NULL) != PARSE_SUCCESS) goto error; if (!cf_peek_valid_token(&sp->cfp, &peek)) goto error; while (strref_cmp(&peek.str, "}") != 0) { int ret = sp_parse_sampler_state_item(sp, &ss); if (ret == PARSE_EOF) goto error; if (!cf_peek_valid_token(&sp->cfp, &peek)) goto error; } if (cf_next_token_should_be(&sp->cfp, "}", ";", NULL) != PARSE_SUCCESS) goto error; if (cf_next_token_should_be(&sp->cfp, ";", NULL, NULL) != PARSE_SUCCESS) goto error; da_push_back(sp->samplers, &ss); return; error: shader_sampler_free(&ss); } static inline int sp_parse_struct_var(struct shader_parser *sp, struct shader_var *var) { int code; /* -------------------------------------- */ /* variable type */ if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; if (cf_token_is(&sp->cfp, ";")) return PARSE_CONTINUE; if (cf_token_is(&sp->cfp, "}")) return PARSE_BREAK; code = cf_token_is_type(&sp->cfp, CFTOKEN_NAME, "type name", ";"); if (code != PARSE_SUCCESS) return code; cf_copy_token(&sp->cfp, &var->type); /* -------------------------------------- */ /* variable name */ if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; if (cf_token_is(&sp->cfp, ";")) return PARSE_UNEXPECTED_CONTINUE; if (cf_token_is(&sp->cfp, "}")) return PARSE_UNEXPECTED_BREAK; code = cf_token_is_type(&sp->cfp, CFTOKEN_NAME, "variable name", ";"); if (code != PARSE_SUCCESS) return code; cf_copy_token(&sp->cfp, &var->name); /* -------------------------------------- */ /* variable mapping if any (POSITION, TEXCOORD, etc) */ if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; if (cf_token_is(&sp->cfp, ":")) { if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; if 
(cf_token_is(&sp->cfp, ";")) return PARSE_UNEXPECTED_CONTINUE; if (cf_token_is(&sp->cfp, "}")) return PARSE_UNEXPECTED_BREAK; code = cf_token_is_type(&sp->cfp, CFTOKEN_NAME, "mapping name", ";"); if (code != PARSE_SUCCESS) return code; cf_copy_token(&sp->cfp, &var->mapping); if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; } /* -------------------------------------- */ if (!cf_token_is(&sp->cfp, ";")) { if (!cf_go_to_valid_token(&sp->cfp, ";", "}")) return PARSE_EOF; return PARSE_CONTINUE; } return PARSE_SUCCESS; } static void sp_parse_struct(struct shader_parser *sp) { struct shader_struct ss; shader_struct_init(&ss); if (cf_next_name(&sp->cfp, &ss.name, "name", ";") != PARSE_SUCCESS) goto error; if (cf_next_token_should_be(&sp->cfp, "{", ";", NULL) != PARSE_SUCCESS) goto error; /* get structure variables */ while (true) { bool do_break = false; struct shader_var var; shader_var_init(&var); switch (sp_parse_struct_var(sp, &var)) { case PARSE_UNEXPECTED_CONTINUE: cf_adderror_syntax_error(&sp->cfp); /* Falls through. */ case PARSE_CONTINUE: shader_var_free(&var); continue; case PARSE_UNEXPECTED_BREAK: cf_adderror_syntax_error(&sp->cfp); /* Falls through. 
*/ case PARSE_BREAK: shader_var_free(&var); do_break = true; break; case PARSE_EOF: shader_var_free(&var); goto error; } if (do_break) break; da_push_back(ss.vars, &var); } if (cf_next_token_should_be(&sp->cfp, ";", NULL, NULL) != PARSE_SUCCESS) goto error; da_push_back(sp->structs, &ss); return; error: shader_struct_free(&ss); } static inline int sp_check_for_keyword(struct shader_parser *sp, const char *keyword, bool *val) { bool new_val = cf_token_is(&sp->cfp, keyword); if (new_val) { if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; if (new_val && *val) cf_adderror(&sp->cfp, "'$1' keyword already specified", LEX_WARNING, keyword, NULL, NULL); *val = new_val; return PARSE_CONTINUE; } return PARSE_SUCCESS; } static inline int sp_parse_func_param(struct shader_parser *sp, struct shader_var *var) { int code; bool var_type_keyword = false; if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; code = sp_check_for_keyword(sp, "in", &var_type_keyword); if (code == PARSE_EOF) return PARSE_EOF; else if (var_type_keyword) var->var_type = SHADER_VAR_IN; if (!var_type_keyword) { code = sp_check_for_keyword(sp, "inout", &var_type_keyword); if (code == PARSE_EOF) return PARSE_EOF; else if (var_type_keyword) var->var_type = SHADER_VAR_INOUT; } if (!var_type_keyword) { code = sp_check_for_keyword(sp, "out", &var_type_keyword); if (code == PARSE_EOF) return PARSE_EOF; else if (var_type_keyword) var->var_type = SHADER_VAR_OUT; } if (!var_type_keyword) { code = sp_check_for_keyword(sp, "uniform", &var_type_keyword); if (code == PARSE_EOF) return PARSE_EOF; else if (var_type_keyword) var->var_type = SHADER_VAR_UNIFORM; } code = cf_get_name(&sp->cfp, &var->type, "type", ")"); if (code != PARSE_SUCCESS) return code; code = cf_next_name(&sp->cfp, &var->name, "name", ")"); if (code != PARSE_SUCCESS) return code; if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; if (cf_token_is(&sp->cfp, ":")) { code = cf_next_name(&sp->cfp, &var->mapping, "mapping specifier", ")"); if (code != 
PARSE_SUCCESS) return code; if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; } return PARSE_SUCCESS; } static bool sp_parse_func_params(struct shader_parser *sp, struct shader_func *func) { struct cf_token peek; int code; cf_token_clear(&peek); if (!cf_peek_valid_token(&sp->cfp, &peek)) return false; if (*peek.str.array == ')') { cf_next_token(&sp->cfp); goto exit; } do { struct shader_var var; shader_var_init(&var); if (!cf_token_is(&sp->cfp, "(") && !cf_token_is(&sp->cfp, ",")) cf_adderror_syntax_error(&sp->cfp); code = sp_parse_func_param(sp, &var); if (code != PARSE_SUCCESS) { shader_var_free(&var); if (code == PARSE_CONTINUE) goto exit; else if (code == PARSE_EOF) return false; } da_push_back(func->params, &var); } while (!cf_token_is(&sp->cfp, ")")); exit: return true; } static void sp_parse_function(struct shader_parser *sp, char *type, char *name) { struct shader_func func; shader_func_init(&func, type, name); if (!sp_parse_func_params(sp, &func)) goto error; if (!cf_next_valid_token(&sp->cfp)) goto error; /* if function is mapped to something, for example COLOR */ if (cf_token_is(&sp->cfp, ":")) { char *mapping = NULL; int errorcode = cf_next_name(&sp->cfp, &mapping, "mapping", "{"); if (errorcode != PARSE_SUCCESS) goto error; func.mapping = mapping; if (!cf_next_valid_token(&sp->cfp)) goto error; } if (!cf_token_is(&sp->cfp, "{")) { cf_adderror_expecting(&sp->cfp, "{"); goto error; } func.start = sp->cfp.cur_token; if (!cf_pass_pair(&sp->cfp, '{', '}')) goto error; /* it is established that the current token is '}' if we reach this */ cf_next_token(&sp->cfp); func.end = sp->cfp.cur_token; da_push_back(sp->funcs, &func); return; error: shader_func_free(&func); } /* parses "array[count]" */ static bool sp_parse_param_array(struct shader_parser *sp, struct shader_var *param) { if (!cf_next_valid_token(&sp->cfp)) return false; if (sp->cfp.cur_token->type != CFTOKEN_NUM || !valid_int_str(sp->cfp.cur_token->str.array, sp->cfp.cur_token->str.len)) return 
false; param->array_count = (int)strtol(sp->cfp.cur_token->str.array, NULL, 10); if (cf_next_token_should_be(&sp->cfp, "]", ";", NULL) == PARSE_EOF) return false; if (!cf_next_valid_token(&sp->cfp)) return false; return true; } static inline int sp_parse_param_assign_intfloat(struct shader_parser *sp, struct shader_var *param, bool is_float) { int code; bool is_negative = false; if (!cf_next_valid_token(&sp->cfp)) return PARSE_EOF; if (cf_token_is(&sp->cfp, "-")) { is_negative = true; if (!cf_next_token(&sp->cfp)) return PARSE_EOF; } code = cf_token_is_type(&sp->cfp, CFTOKEN_NUM, "numeric value", ";"); if (code != PARSE_SUCCESS) return code; if (is_float) { float f = (float)os_strtod(sp->cfp.cur_token->str.array); if (is_negative) f = -f; da_push_back_array(param->default_val, (uint8_t *)&f, sizeof(float)); } else { long l = strtol(sp->cfp.cur_token->str.array, NULL, 10); if (is_negative) l = -l; da_push_back_array(param->default_val, (uint8_t *)&l, sizeof(long)); } return PARSE_SUCCESS; } /* * parses assignment for float1, float2, float3, float4, and any combination * for float3x3, float4x4, etc */ static inline int sp_parse_param_assign_float_array(struct shader_parser *sp, struct shader_var *param) { const char *float_type = param->type + 5; int float_count = 0, code, i; /* -------------------------------------------- */ if (float_type[0] < '1' || float_type[0] > '4') cf_adderror(&sp->cfp, "Invalid row count", LEX_ERROR, NULL, NULL, NULL); float_count = float_type[0] - '0'; if (float_type[1] == 'x') { if (float_type[2] < '1' || float_type[2] > '4') cf_adderror(&sp->cfp, "Invalid column count", LEX_ERROR, NULL, NULL, NULL); float_count *= float_type[2] - '0'; } /* -------------------------------------------- */ code = cf_next_token_should_be(&sp->cfp, "{", ";", NULL); if (code != PARSE_SUCCESS) return code; for (i = 0; i < float_count; i++) { char *next = ((i + 1) < float_count) ? "," : "}"; code = sp_parse_param_assign_intfloat(sp, param, true); if (code != PARSE_SUCCESS) return code; code = cf_next_token_should_be(&sp->cfp, next, ";", NULL); if (code != PARSE_SUCCESS) return code; } return PARSE_SUCCESS; } static int sp_parse_param_assignment_val(struct shader_parser *sp, struct shader_var *param) { if (strcmp(param->type, "int") == 0) return sp_parse_param_assign_intfloat(sp, param, false); else if (strcmp(param->type, "float") == 0) return sp_parse_param_assign_intfloat(sp, param, true); else if (astrcmp_n(param->type, "float", 5) == 0) return sp_parse_param_assign_float_array(sp, param); cf_adderror(&sp->cfp, "Invalid type '$1' used for assignment", LEX_ERROR, param->type, NULL, NULL); return PARSE_CONTINUE; } static inline bool sp_parse_param_assign(struct shader_parser *sp, struct shader_var *param) { if (sp_parse_param_assignment_val(sp, param) != PARSE_SUCCESS) return false; if (!cf_next_valid_token(&sp->cfp)) return false; return true; } static void sp_parse_param(struct shader_parser *sp, char *type, char *name, bool is_const, bool is_uniform) { struct shader_var param; shader_var_init_param(&param, type, name, is_uniform, is_const); if (cf_token_is(&sp->cfp, ";")) goto complete; if (cf_token_is(&sp->cfp, "[") && !sp_parse_param_array(sp, &param)) goto error; if (cf_token_is(&sp->cfp, "=") && !sp_parse_param_assign(sp, &param)) goto error; if (!cf_token_is(&sp->cfp, ";")) goto error; complete: da_push_back(sp->params, &param); return; error: shader_var_free(&param); } static bool sp_get_var_specifiers(struct shader_parser *sp, bool *is_const, bool *is_uniform) { while (true) { int code = sp_check_for_keyword(sp, "const", is_const); if (code == PARSE_EOF) return false; else if (code == PARSE_CONTINUE) continue; code = sp_check_for_keyword(sp, "uniform", is_uniform); if (code == PARSE_EOF) return false; else if (code == PARSE_CONTINUE) continue; break; } return true; } static inline void report_invalid_func_keyword(struct shader_parser *sp, const char
*name, bool val) { if (val) cf_adderror(&sp->cfp, "'$1' keyword cannot be used with a " "function", LEX_ERROR, name, NULL, NULL); } static void sp_parse_other(struct shader_parser *sp) { bool is_const = false, is_uniform = false; char *type = NULL, *name = NULL; if (!sp_get_var_specifiers(sp, &is_const, &is_uniform)) goto error; if (cf_get_name(&sp->cfp, &type, "type", ";") != PARSE_SUCCESS) goto error; if (cf_next_name(&sp->cfp, &name, "name", ";") != PARSE_SUCCESS) goto error; if (!cf_next_valid_token(&sp->cfp)) goto error; if (cf_token_is(&sp->cfp, "(")) { report_invalid_func_keyword(sp, "const", is_const); report_invalid_func_keyword(sp, "uniform", is_uniform); sp_parse_function(sp, type, name); return; } else { sp_parse_param(sp, type, name, is_const, is_uniform); return; } error: bfree(type); bfree(name); } bool shader_parse(struct shader_parser *sp, const char *shader, const char *file) { if (!cf_parser_parse(&sp->cfp, shader, file)) return false; while (sp->cfp.cur_token && sp->cfp.cur_token->type != CFTOKEN_NONE) { if (cf_token_is(&sp->cfp, ";") || is_whitespace(*sp->cfp.cur_token->str.array)) { sp->cfp.cur_token++; } else if (cf_token_is(&sp->cfp, "struct")) { sp_parse_struct(sp); } else if (cf_token_is(&sp->cfp, "sampler_state")) { sp_parse_sampler_state(sp); } else if (cf_token_is(&sp->cfp, "{")) { cf_adderror(&sp->cfp, "Unexpected code segment", LEX_ERROR, NULL, NULL, NULL); cf_pass_pair(&sp->cfp, '{', '}'); } else { /* parameters and functions */ sp_parse_other(sp); } } return !error_data_has_errors(&sp->cfp.error_list); } obs-studio-32.1.0-sources/libobs/graphics/vec3.h000644 001751 001751 00000013526 15153330235 022326 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of 
the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "math-defs.h" #include "vec4.h" #include "../util/sse-intrin.h" #ifdef __cplusplus extern "C" { #endif struct plane; struct matrix3; struct matrix4; struct quat; struct vec3 { union { struct { float x, y, z, w; }; float ptr[4]; __m128 m; }; }; static inline void vec3_zero(struct vec3 *v) { v->m = _mm_setzero_ps(); } static inline void vec3_set(struct vec3 *dst, float x, float y, float z) { dst->m = _mm_set_ps(0.0f, z, y, x); } static inline void vec3_copy(struct vec3 *dst, const struct vec3 *v) { dst->m = v->m; } EXPORT void vec3_from_vec4(struct vec3 *dst, const struct vec4 *v); static inline void vec3_add(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2) { dst->m = _mm_add_ps(v1->m, v2->m); dst->w = 0.0f; } static inline void vec3_sub(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2) { dst->m = _mm_sub_ps(v1->m, v2->m); dst->w = 0.0f; } static inline void vec3_mul(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2) { dst->m = _mm_mul_ps(v1->m, v2->m); } static inline void vec3_div(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2) { dst->m = _mm_div_ps(v1->m, v2->m); dst->w = 0.0f; } static inline void vec3_addf(struct vec3 *dst, const struct vec3 *v, float f) { dst->m = _mm_add_ps(v->m, _mm_set1_ps(f)); dst->w = 0.0f; } static inline void vec3_subf(struct vec3 *dst, const struct vec3 *v, float f) { dst->m = _mm_sub_ps(v->m, _mm_set1_ps(f)); dst->w = 0.0f; } static inline void vec3_mulf(struct vec3 *dst, const struct vec3 *v, 
float f) { dst->m = _mm_mul_ps(v->m, _mm_set1_ps(f)); } static inline void vec3_divf(struct vec3 *dst, const struct vec3 *v, float f) { dst->m = _mm_div_ps(v->m, _mm_set1_ps(f)); dst->w = 0.0f; } static inline float vec3_dot(const struct vec3 *v1, const struct vec3 *v2) { struct vec3 add; __m128 mul = _mm_mul_ps(v1->m, v2->m); add.m = _mm_add_ps(_mm_movehl_ps(mul, mul), mul); add.m = _mm_add_ps(_mm_shuffle_ps(add.m, add.m, 0x55), add.m); return add.x; } static inline void vec3_cross(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2) { __m128 s1v1 = _mm_shuffle_ps(v1->m, v1->m, _MM_SHUFFLE(3, 0, 2, 1)); __m128 s1v2 = _mm_shuffle_ps(v2->m, v2->m, _MM_SHUFFLE(3, 1, 0, 2)); __m128 s2v1 = _mm_shuffle_ps(v1->m, v1->m, _MM_SHUFFLE(3, 1, 0, 2)); __m128 s2v2 = _mm_shuffle_ps(v2->m, v2->m, _MM_SHUFFLE(3, 0, 2, 1)); dst->m = _mm_sub_ps(_mm_mul_ps(s1v1, s1v2), _mm_mul_ps(s2v1, s2v2)); } static inline void vec3_neg(struct vec3 *dst, const struct vec3 *v) { dst->x = -v->x; dst->y = -v->y; dst->z = -v->z; dst->w = 0.0f; } static inline float vec3_len(const struct vec3 *v) { float dot_val = vec3_dot(v, v); return (dot_val > 0.0f) ? sqrtf(dot_val) : 0.0f; } static inline float vec3_dist(const struct vec3 *v1, const struct vec3 *v2) { struct vec3 temp; float dot_val; vec3_sub(&temp, v1, v2); dot_val = vec3_dot(&temp, &temp); return (dot_val > 0.0f) ? sqrtf(dot_val) : 0.0f; } static inline void vec3_norm(struct vec3 *dst, const struct vec3 *v) { float dot_val = vec3_dot(v, v); dst->m = (dot_val > 0.0f) ? 
_mm_mul_ps(v->m, _mm_set1_ps(1.0f / sqrtf(dot_val))) : _mm_setzero_ps(); } static inline bool vec3_close(const struct vec3 *v1, const struct vec3 *v2, float epsilon) { struct vec3 test; vec3_sub(&test, v1, v2); return test.x < epsilon && test.y < epsilon && test.z < epsilon; } static inline void vec3_min(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2) { dst->m = _mm_min_ps(v1->m, v2->m); dst->w = 0.0f; } static inline void vec3_minf(struct vec3 *dst, const struct vec3 *v, float f) { dst->m = _mm_min_ps(v->m, _mm_set1_ps(f)); dst->w = 0.0f; } static inline void vec3_max(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2) { dst->m = _mm_max_ps(v1->m, v2->m); dst->w = 0.0f; } static inline void vec3_maxf(struct vec3 *dst, const struct vec3 *v, float f) { dst->m = _mm_max_ps(v->m, _mm_set1_ps(f)); dst->w = 0.0f; } static inline void vec3_abs(struct vec3 *dst, const struct vec3 *v) { dst->x = fabsf(v->x); dst->y = fabsf(v->y); dst->z = fabsf(v->z); dst->w = 0.0f; } static inline void vec3_floor(struct vec3 *dst, const struct vec3 *v) { dst->x = floorf(v->x); dst->y = floorf(v->y); dst->z = floorf(v->z); dst->w = 0.0f; } static inline void vec3_ceil(struct vec3 *dst, const struct vec3 *v) { dst->x = ceilf(v->x); dst->y = ceilf(v->y); dst->z = ceilf(v->z); dst->w = 0.0f; } EXPORT float vec3_plane_dist(const struct vec3 *v, const struct plane *p); EXPORT void vec3_transform(struct vec3 *dst, const struct vec3 *v, const struct matrix4 *m); EXPORT void vec3_rotate(struct vec3 *dst, const struct vec3 *v, const struct matrix3 *m); EXPORT void vec3_transform3x4(struct vec3 *dst, const struct vec3 *v, const struct matrix3 *m); EXPORT void vec3_mirror(struct vec3 *dst, const struct vec3 *v, const struct plane *p); EXPORT void vec3_mirrorv(struct vec3 *dst, const struct vec3 *v, const struct vec3 *vec); EXPORT void vec3_rand(struct vec3 *dst, int positive_only); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/effect.h000644 001751 
001751 00000011752 15153330235 022721 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "effect-parser.h" #include "graphics.h" #ifdef __cplusplus extern "C" { #endif typedef DARRAY(struct gs_effect_param) gs_effect_param_array_t; typedef DARRAY(struct pass_shaderparam) pass_shaderparam_array_t; /* * Effects introduce a means of bundling together shader text into one * file with shared functions and parameters. This is done because often * shaders must be duplicated when you need to alter minor aspects of the code * that cannot be done via constants. Effects allow developers to easily * switch shaders and set constants that can be used between shaders. * * Effects are built via the effect parser, and shaders are automatically * generated for each technique's pass. 
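The bundling described above can be illustrated with a minimal effect source, held here in a C string. This is a hypothetical sketch — the parameter, function, and technique names are invented — showing only the overall shape the comment describes: a shared uniform, shared entry points, and a technique whose pass pairs a vertex and a pixel shader.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical effect text (names invented for illustration): one
 * shared uniform and one technique whose single pass pairs a vertex
 * and a pixel entry point; the effect parser generates one shader per
 * pass from text like this. */
static const char *example_effect =
	"uniform float4x4 ViewProj;\n"
	"\n"
	"struct VertOut { float4 pos : POSITION; };\n"
	"\n"
	"VertOut VSExample(float4 pos : POSITION)\n"
	"{\n"
	"    VertOut vert_out;\n"
	"    vert_out.pos = mul(float4(pos.xyz, 1.0), ViewProj);\n"
	"    return vert_out;\n"
	"}\n"
	"\n"
	"float4 PSExample(VertOut vert_in) : TARGET\n"
	"{\n"
	"    return float4(1.0, 1.0, 1.0, 1.0);\n"
	"}\n"
	"\n"
	"technique Draw\n"
	"{\n"
	"    pass\n"
	"    {\n"
	"        vertex_shader = VSExample(pos);\n"
	"        pixel_shader = PSExample(vert_in);\n"
	"    }\n"
	"}\n";
```

Both entry points can reference `ViewProj` without duplicating it, which is the sharing the comment above motivates.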
*/ /* ------------------------------------------------------------------------- */ enum effect_section { EFFECT_PARAM, EFFECT_TECHNIQUE, EFFECT_SAMPLER, EFFECT_PASS, EFFECT_ANNOTATION }; /* ------------------------------------------------------------------------- */ struct gs_effect_param { char *name; enum effect_section section; enum gs_shader_param_type type; bool changed; DARRAY(uint8_t) cur_val; DARRAY(uint8_t) default_val; gs_effect_t *effect; gs_samplerstate_t *next_sampler; /*char *full_name; float scroller_min, scroller_max, scroller_inc, scroller_mul;*/ gs_effect_param_array_t annotations; }; static inline void effect_param_init(struct gs_effect_param *param) { memset(param, 0, sizeof(struct gs_effect_param)); da_init(param->annotations); } static inline void effect_param_free(struct gs_effect_param *param) { bfree(param->name); //bfree(param->full_name); da_free(param->cur_val); da_free(param->default_val); size_t i; for (i = 0; i < param->annotations.num; i++) effect_param_free(param->annotations.array + i); da_free(param->annotations); } EXPORT void effect_param_parse_property(gs_eparam_t *param, const char *property); /* ------------------------------------------------------------------------- */ struct pass_shaderparam { struct gs_effect_param *eparam; gs_sparam_t *sparam; }; struct gs_effect_pass { char *name; enum effect_section section; gs_shader_t *vertshader; gs_shader_t *pixelshader; pass_shaderparam_array_t vertshader_params; pass_shaderparam_array_t pixelshader_params; }; static inline void effect_pass_init(struct gs_effect_pass *pass) { memset(pass, 0, sizeof(struct gs_effect_pass)); } static inline void effect_pass_free(struct gs_effect_pass *pass) { bfree(pass->name); da_free(pass->vertshader_params); da_free(pass->pixelshader_params); gs_shader_destroy(pass->vertshader); gs_shader_destroy(pass->pixelshader); } /* ------------------------------------------------------------------------- */ struct gs_effect_technique { char *name; enum 
effect_section section; struct gs_effect *effect; DARRAY(struct gs_effect_pass) passes; }; static inline void effect_technique_init(struct gs_effect_technique *t) { memset(t, 0, sizeof(struct gs_effect_technique)); } static inline void effect_technique_free(struct gs_effect_technique *t) { size_t i; for (i = 0; i < t->passes.num; i++) effect_pass_free(t->passes.array + i); da_free(t->passes); bfree(t->name); } /* ------------------------------------------------------------------------- */ struct gs_effect { bool processing; bool cached; char *effect_path, *effect_dir; gs_effect_param_array_t params; DARRAY(struct gs_effect_technique) techniques; struct gs_effect_technique *cur_technique; struct gs_effect_pass *cur_pass; gs_eparam_t *view_proj, *world, *scale; graphics_t *graphics; struct gs_effect *next; size_t loop_pass; bool looping; }; static inline void effect_init(gs_effect_t *effect) { memset(effect, 0, sizeof(struct gs_effect)); } static inline void effect_free(gs_effect_t *effect) { size_t i; for (i = 0; i < effect->params.num; i++) effect_param_free(effect->params.array + i); for (i = 0; i < effect->techniques.num; i++) effect_technique_free(effect->techniques.array + i); da_free(effect->params); da_free(effect->techniques); bfree(effect->effect_path); bfree(effect->effect_dir); effect->effect_path = NULL; effect->effect_dir = NULL; } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/vec2.c000644 001751 001751 00000003057 15153330235 022316 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. 
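effect_free() above tears the object down leaves-first: each parameter frees its annotation array (recursively, since annotations are themselves parameters), each technique frees its passes, and only then are the containing arrays released. A minimal sketch of that ownership pattern, using plain malloc/free in place of libobs's bfree/da_* helpers (the node type and counter are invented for the demo):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch (not libobs API): a node owning a dynamic array of child
 * nodes, freed depth-first the way effect_param_free() walks its
 * annotations before releasing the array that held them. */
struct node {
	struct node *children;
	size_t num_children;
};

static size_t frees_seen; /* counts nodes fully released */

static void node_free(struct node *n)
{
	for (size_t i = 0; i < n->num_children; i++)
		node_free(&n->children[i]); /* leaves first */
	free(n->children); /* then the array itself */
	n->children = NULL;
	n->num_children = 0;
	frees_seen++;
}

static size_t build_and_free_demo(void)
{
	struct node root = {0};
	root.num_children = 2;
	root.children = calloc(2, sizeof(struct node));
	root.children[1].num_children = 1;
	root.children[1].children = calloc(1, sizeof(struct node));

	frees_seen = 0;
	node_free(&root); /* root + 2 children + 1 grandchild */
	return frees_seen;
}
```

The same order matters in the real code: freeing `params` before walking each parameter's annotations would leak the nested arrays.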
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include <math.h> #include "math-extra.h" #include "math-defs.h" #include "vec2.h" void vec2_abs(struct vec2 *dst, const struct vec2 *v) { vec2_set(dst, fabsf(v->x), fabsf(v->y)); } void vec2_floor(struct vec2 *dst, const struct vec2 *v) { vec2_set(dst, floorf(v->x), floorf(v->y)); } void vec2_ceil(struct vec2 *dst, const struct vec2 *v) { vec2_set(dst, ceilf(v->x), ceilf(v->y)); } int vec2_close(const struct vec2 *v1, const struct vec2 *v2, float epsilon) { return close_float(v1->x, v2->x, epsilon) && close_float(v1->y, v2->y, epsilon); } void vec2_norm(struct vec2 *dst, const struct vec2 *v) { float len = vec2_len(v); if (len > 0.0f) { len = 1.0f / len; vec2_mulf(dst, v, len); } } obs-studio-32.1.0-sources/libobs/graphics/image-file.h000644 001751 001751 00000007105 15153330235 023461 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see .
******************************************************************************/ #pragma once #include "graphics.h" #include "libnsgif/libnsgif.h" #ifdef __cplusplus extern "C" { #endif struct gs_image_file { gs_texture_t *texture; enum gs_color_format format; uint32_t cx; uint32_t cy; bool is_animated_gif; bool frame_updated; bool loaded; gif_animation gif; uint8_t *gif_data; uint8_t **animation_frame_cache; uint8_t *animation_frame_data; uint64_t cur_time; int cur_frame; int cur_loop; int last_decoded_frame; uint8_t *texture_data; gif_bitmap_callback_vt bitmap_callbacks; }; struct gs_image_file2 { struct gs_image_file image; uint64_t mem_usage; }; struct gs_image_file3 { struct gs_image_file2 image2; enum gs_image_alpha_mode alpha_mode; }; struct gs_image_file4 { struct gs_image_file3 image3; enum gs_color_space space; }; typedef struct gs_image_file gs_image_file_t; typedef struct gs_image_file2 gs_image_file2_t; typedef struct gs_image_file3 gs_image_file3_t; typedef struct gs_image_file4 gs_image_file4_t; EXPORT void gs_image_file_init(gs_image_file_t *image, const char *file); EXPORT void gs_image_file_free(gs_image_file_t *image); EXPORT void gs_image_file_init_texture(gs_image_file_t *image); EXPORT bool gs_image_file_tick(gs_image_file_t *image, uint64_t elapsed_time_ns); EXPORT void gs_image_file_update_texture(gs_image_file_t *image); EXPORT void gs_image_file2_init(gs_image_file2_t *if2, const char *file); EXPORT bool gs_image_file2_tick(gs_image_file2_t *if2, uint64_t elapsed_time_ns); EXPORT void gs_image_file2_update_texture(gs_image_file2_t *if2); EXPORT void gs_image_file3_init(gs_image_file3_t *if3, const char *file, enum gs_image_alpha_mode alpha_mode); EXPORT bool gs_image_file3_tick(gs_image_file3_t *if3, uint64_t elapsed_time_ns); EXPORT void gs_image_file3_update_texture(gs_image_file3_t *if3); EXPORT void gs_image_file4_init(gs_image_file4_t *if4, const char *file, enum gs_image_alpha_mode alpha_mode); EXPORT bool 
gs_image_file4_tick(gs_image_file4_t *if4, uint64_t elapsed_time_ns); EXPORT void gs_image_file4_update_texture(gs_image_file4_t *if4); static inline void gs_image_file2_free(gs_image_file2_t *if2) { gs_image_file_free(&if2->image); if2->mem_usage = 0; } static inline void gs_image_file2_init_texture(gs_image_file2_t *if2) { gs_image_file_init_texture(&if2->image); } static inline void gs_image_file3_free(gs_image_file3_t *if3) { gs_image_file2_free(&if3->image2); } static inline void gs_image_file3_init_texture(gs_image_file3_t *if3) { gs_image_file2_init_texture(&if3->image2); } static inline void gs_image_file4_free(gs_image_file4_t *if4) { gs_image_file3_free(&if4->image3); } static inline void gs_image_file4_init_texture(gs_image_file4_t *if4) { gs_image_file3_init_texture(&if4->image3); } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/graphics/math-extra.c000644 001751 001751 00000006422 15153330235 023530 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
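gs_image_file2/3/4 above grow the original struct by embedding the previous version as their first member, so a pointer to the outer struct can be handed to code that only knows the inner one, and the inline wrappers simply forward to the embedded member. A minimal sketch of that layout guarantee, with invented stand-in structs:

```c
#include <assert.h>
#include <stddef.h>

/* Invented stand-ins mirroring the gs_image_file / gs_image_file2
 * nesting: because `base` is the first member, C guarantees it sits
 * at offset 0, so older APIs taking the v1 type keep working on the
 * extended struct. */
struct img_v1 {
	int cx, cy;
};

struct img_v2 {
	struct img_v1 base; /* must stay first */
	unsigned long mem_usage; /* the v2-only extension */
};

static int v1_area(const struct img_v1 *img)
{
	return img->cx * img->cy;
}

static int area_via_v2(void)
{
	struct img_v2 v2 = {{4, 3}, 0};
	/* a v2 object passed where a v1 is expected, via its
	 * first member, as the gs_image_fileN wrappers do */
	return v1_area(&v2.base);
}
```

This is why each new gs_image_fileN revision can add fields without breaking callers compiled against an earlier version of the struct.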
******************************************************************************/ #include <stdlib.h> #include "vec2.h" #include "vec3.h" #include "math-defs.h" #include "math-extra.h" void polar_to_cart(struct vec3 *dst, const struct vec3 *v) { struct vec3 cart; float sinx = cosf(v->x); float sinx_z = v->z * sinx; cart.x = sinx_z * sinf(v->y); cart.z = sinx_z * cosf(v->y); cart.y = v->z * sinf(v->x); vec3_copy(dst, &cart); } void cart_to_polar(struct vec3 *dst, const struct vec3 *v) { struct vec3 polar; polar.z = vec3_len(v); if (close_float(polar.z, 0.0f, EPSILON)) { vec3_zero(&polar); } else { polar.x = asinf(v->y / polar.z); polar.y = atan2f(v->x, v->z); } vec3_copy(dst, &polar); } void norm_to_polar(struct vec2 *dst, const struct vec3 *norm) { dst->x = atan2f(norm->x, norm->z); dst->y = asinf(norm->y); } void polar_to_norm(struct vec3 *dst, const struct vec2 *polar) { float sinx = sinf(polar->x); dst->x = sinx * cosf(polar->y); dst->y = sinx * sinf(polar->y); dst->z = cosf(polar->x); } float calc_torquef(float val1, float val2, float torque, float min_adjust, float t) { float out = val1; float dist; bool over; if (close_float(val1, val2, EPSILON)) return val2; dist = (val2 - val1) * torque; over = dist > 0.0f; if (over) { if (dist < min_adjust) /* prevents from going too slow */ dist = min_adjust; out += dist * t; /* add torque */ if (out > val2) /* clamp if overshoot */ out = val2; } else { if (dist > -min_adjust) dist = -min_adjust; out += dist * t; if (out < val2) out = val2; } return out; } void calc_torque(struct vec3 *dst, const struct vec3 *v1, const struct vec3 *v2, float torque, float min_adjust, float t) { struct vec3 line, dir; float orig_dist, torque_dist, adjust_dist; if (vec3_close(v1, v2, EPSILON)) { vec3_copy(dst, v2); return; } vec3_sub(&line, v2, v1); orig_dist = vec3_len(&line); vec3_mulf(&dir, &line, 1.0f / orig_dist); torque_dist = orig_dist * torque; /* use distance to determine speed */ if (torque_dist < min_adjust) /* prevent from going too slow */
torque_dist = min_adjust; adjust_dist = torque_dist * t; if (adjust_dist <= (orig_dist - LARGE_EPSILON)) { vec3_mulf(dst, &dir, adjust_dist); vec3_add(dst, dst, v1); /* add torque */ } else { vec3_copy(dst, v2); /* clamp if overshoot */ } } float rand_float(int positive_only) { if (positive_only) return (float)((double)rand() / (double)RAND_MAX); else return (float)(((double)rand() / (double)RAND_MAX * 2.0) - 1.0); } obs-studio-32.1.0-sources/libobs/graphics/matrix4.c000644 001751 001751 00000021363 15153330235 023047 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "math-defs.h" #include "matrix4.h" #include "matrix3.h" #include "quat.h" void matrix4_from_matrix3(struct matrix4 *dst, const struct matrix3 *m) { dst->x.m = m->x.m; dst->y.m = m->y.m; dst->z.m = m->z.m; dst->t.m = m->t.m; dst->t.w = 1.0f; } void matrix4_from_quat(struct matrix4 *dst, const struct quat *q) { float norm = quat_dot(q, q); float s = (norm > 0.0f) ? 
(2.0f / norm) : 0.0f; float xx = q->x * q->x * s; float yy = q->y * q->y * s; float zz = q->z * q->z * s; float xy = q->x * q->y * s; float xz = q->x * q->z * s; float yz = q->y * q->z * s; float wx = q->w * q->x * s; float wy = q->w * q->y * s; float wz = q->w * q->z * s; vec4_set(&dst->x, 1.0f - (yy + zz), xy + wz, xz - wy, 0.0f); vec4_set(&dst->y, xy - wz, 1.0f - (xx + zz), yz + wx, 0.0f); vec4_set(&dst->z, xz + wy, yz - wx, 1.0f - (xx + yy), 0.0f); vec4_set(&dst->t, 0.0f, 0.0f, 0.0f, 1.0f); } void matrix4_from_axisang(struct matrix4 *dst, const struct axisang *aa) { struct quat q; quat_from_axisang(&q, aa); matrix4_from_quat(dst, &q); } void matrix4_mul(struct matrix4 *dst, const struct matrix4 *m1, const struct matrix4 *m2) { struct matrix4 transposed; struct matrix4 out; matrix4_transpose(&transposed, m2); out.x.x = vec4_dot(&m1->x, &transposed.x); out.x.y = vec4_dot(&m1->x, &transposed.y); out.x.z = vec4_dot(&m1->x, &transposed.z); out.x.w = vec4_dot(&m1->x, &transposed.t); out.y.x = vec4_dot(&m1->y, &transposed.x); out.y.y = vec4_dot(&m1->y, &transposed.y); out.y.z = vec4_dot(&m1->y, &transposed.z); out.y.w = vec4_dot(&m1->y, &transposed.t); out.z.x = vec4_dot(&m1->z, &transposed.x); out.z.y = vec4_dot(&m1->z, &transposed.y); out.z.z = vec4_dot(&m1->z, &transposed.z); out.z.w = vec4_dot(&m1->z, &transposed.t); out.t.x = vec4_dot(&m1->t, &transposed.x); out.t.y = vec4_dot(&m1->t, &transposed.y); out.t.z = vec4_dot(&m1->t, &transposed.z); out.t.w = vec4_dot(&m1->t, &transposed.t); matrix4_copy(dst, &out); } void matrix4_mul_4x3_only(struct matrix4 *dst, const struct matrix4 *m1, const struct matrix4 *m2) { struct matrix4 transposed; struct vec4 x; struct vec4 y; struct vec4 z; matrix4_transpose(&transposed, m2); x.x = vec4_dot(&m1->x, &transposed.x); x.y = vec4_dot(&m1->x, &transposed.y); x.z = vec4_dot(&m1->x, &transposed.z); x.w = vec4_dot(&m1->x, &transposed.t); y.x = vec4_dot(&m1->y, &transposed.x); y.y = vec4_dot(&m1->y, &transposed.y); y.z = 
vec4_dot(&m1->y, &transposed.z); y.w = vec4_dot(&m1->y, &transposed.t); z.x = vec4_dot(&m1->z, &transposed.x); z.y = vec4_dot(&m1->z, &transposed.y); z.z = vec4_dot(&m1->z, &transposed.z); z.w = vec4_dot(&m1->z, &transposed.t); vec4_copy(&dst->x, &x); vec4_copy(&dst->y, &y); vec4_copy(&dst->z, &z); vec4_copy(&dst->t, &m2->t); } static inline void get_3x3_submatrix(float *dst, const struct matrix4 *m, int i, int j) { const float *mf = (const float *)m; int ti, tj, idst, jdst; for (ti = 0; ti < 4; ti++) { if (ti < i) idst = ti; else if (ti > i) idst = ti - 1; else continue; for (tj = 0; tj < 4; tj++) { if (tj < j) jdst = tj; else if (tj > j) jdst = tj - 1; else continue; dst[(idst * 3) + jdst] = mf[(ti * 4) + tj]; } } } static inline float get_3x3_determinant(const float *m) { return (m[0] * ((m[4] * m[8]) - (m[7] * m[5]))) - (m[1] * ((m[3] * m[8]) - (m[6] * m[5]))) + (m[2] * ((m[3] * m[7]) - (m[6] * m[4]))); } float matrix4_determinant(const struct matrix4 *m) { const float *mf = (const float *)m; float det, result = 0.0f, i = 1.0f; float m3x3[9]; int n; for (n = 0; n < 4; n++, i = -i) { // NOLINT(clang-tidy-cert-flp30-c) get_3x3_submatrix(m3x3, m, 0, n); det = get_3x3_determinant(m3x3); result += mf[n] * det * i; } return result; } void matrix4_translate3v(struct matrix4 *dst, const struct matrix4 *m, const struct vec3 *v) { struct matrix4 temp; vec4_set(&temp.x, 1.0f, 0.0f, 0.0f, 0.0f); vec4_set(&temp.y, 0.0f, 1.0f, 0.0f, 0.0f); vec4_set(&temp.z, 0.0f, 0.0f, 1.0f, 0.0f); vec4_from_vec3(&temp.t, v); matrix4_mul(dst, m, &temp); } void matrix4_translate4v(struct matrix4 *dst, const struct matrix4 *m, const struct vec4 *v) { struct matrix4 temp; vec4_set(&temp.x, 1.0f, 0.0f, 0.0f, 0.0f); vec4_set(&temp.y, 0.0f, 1.0f, 0.0f, 0.0f); vec4_set(&temp.z, 0.0f, 0.0f, 1.0f, 0.0f); vec4_copy(&temp.t, v); matrix4_mul(dst, m, &temp); } void matrix4_rotate(struct matrix4 *dst, const struct matrix4 *m, const struct quat *q) { struct matrix4 temp; matrix4_from_quat(&temp, q); 
matrix4_mul(dst, m, &temp); } void matrix4_rotate_aa(struct matrix4 *dst, const struct matrix4 *m, const struct axisang *aa) { struct matrix4 temp; matrix4_from_axisang(&temp, aa); matrix4_mul(dst, m, &temp); } void matrix4_scale(struct matrix4 *dst, const struct matrix4 *m, const struct vec3 *v) { struct matrix4 temp; vec4_set(&temp.x, v->x, 0.0f, 0.0f, 0.0f); vec4_set(&temp.y, 0.0f, v->y, 0.0f, 0.0f); vec4_set(&temp.z, 0.0f, 0.0f, v->z, 0.0f); vec4_set(&temp.t, 0.0f, 0.0f, 0.0f, 1.0f); matrix4_mul(dst, m, &temp); } void matrix4_translate3v_i(struct matrix4 *dst, const struct vec3 *v, const struct matrix4 *m) { struct matrix4 transposed; struct vec4 v4; struct vec4 t; vec4_from_vec3(&v4, v); matrix4_transpose(&transposed, m); t.x = vec4_dot(&v4, &transposed.x); t.y = vec4_dot(&v4, &transposed.y); t.z = vec4_dot(&v4, &transposed.z); t.w = vec4_dot(&v4, &transposed.t); vec4_copy(&dst->x, &m->x); vec4_copy(&dst->y, &m->y); vec4_copy(&dst->z, &m->z); vec4_copy(&dst->t, &t); } void matrix4_translate4v_i(struct matrix4 *dst, const struct vec4 *v, const struct matrix4 *m) { struct matrix4 transposed; struct vec4 t; matrix4_transpose(&transposed, m); t.x = vec4_dot(v, &transposed.x); t.y = vec4_dot(v, &transposed.y); t.z = vec4_dot(v, &transposed.z); t.w = vec4_dot(v, &transposed.t); vec4_copy(&dst->x, &m->x); vec4_copy(&dst->y, &m->y); vec4_copy(&dst->z, &m->z); vec4_copy(&dst->t, &t); } void matrix4_rotate_i(struct matrix4 *dst, const struct quat *q, const struct matrix4 *m) { struct matrix4 temp; matrix4_from_quat(&temp, q); matrix4_mul_4x3_only(dst, &temp, m); } void matrix4_rotate_aa_i(struct matrix4 *dst, const struct axisang *aa, const struct matrix4 *m) { struct matrix4 temp; matrix4_from_axisang(&temp, aa); matrix4_mul_4x3_only(dst, &temp, m); } void matrix4_scale_i(struct matrix4 *dst, const struct vec3 *v, const struct matrix4 *m) { struct matrix4 temp; vec4_set(&temp.x, v->x, 0.0f, 0.0f, 0.0f); vec4_set(&temp.y, 0.0f, v->y, 0.0f, 0.0f); vec4_set(&temp.z, 0.0f, 
0.0f, v->z, 0.0f); vec4_set(&temp.t, 0.0f, 0.0f, 0.0f, 1.0f); matrix4_mul_4x3_only(dst, &temp, m); } bool matrix4_inv(struct matrix4 *dst, const struct matrix4 *m) { struct vec4 *dstv; float det; float m3x3[9]; int i, j, sign; if (dst == m) { struct matrix4 temp = *m; return matrix4_inv(dst, &temp); } dstv = (struct vec4 *)dst; det = matrix4_determinant(m); if (fabs(det) < 0.0005f) return false; for (i = 0; i < 4; i++) { for (j = 0; j < 4; j++) { sign = 1 - ((i + j) % 2) * 2; get_3x3_submatrix(m3x3, m, i, j); dstv[j].ptr[i] = get_3x3_determinant(m3x3) * (float)sign / det; } } return true; } void matrix4_transpose(struct matrix4 *dst, const struct matrix4 *m) { if (dst == m) { struct matrix4 temp = *m; matrix4_transpose(dst, &temp); return; } #ifdef NO_INTRINSICS dst->x.x = m->x.x; dst->x.y = m->y.x; dst->x.z = m->z.x; dst->x.w = m->t.x; dst->y.x = m->x.y; dst->y.y = m->y.y; dst->y.z = m->z.y; dst->y.w = m->t.y; dst->z.x = m->x.z; dst->z.y = m->y.z; dst->z.z = m->z.z; dst->z.w = m->t.z; dst->t.x = m->x.w; dst->t.y = m->y.w; dst->t.z = m->z.w; dst->t.w = m->t.w; #else __m128 a0 = _mm_unpacklo_ps(m->x.m, m->z.m); __m128 a1 = _mm_unpacklo_ps(m->y.m, m->t.m); __m128 a2 = _mm_unpackhi_ps(m->x.m, m->z.m); __m128 a3 = _mm_unpackhi_ps(m->y.m, m->t.m); dst->x.m = _mm_unpacklo_ps(a0, a1); dst->y.m = _mm_unpackhi_ps(a0, a1); dst->z.m = _mm_unpacklo_ps(a2, a3); dst->t.m = _mm_unpackhi_ps(a2, a3); #endif } obs-studio-32.1.0-sources/libobs/obs-nix-wayland.h000644 001751 001751 00000002001 15153330235 022664 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2019 by Jason Francis This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. 
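matrix4_inv() above builds the classical adjugate: each destination entry is a signed 3x3 minor divided by the full determinant, with `sign = 1 - ((i + j) % 2) * 2` producing the +1/-1 checkerboard. The 3x3 determinant step in isolation, using the same cofactor-expansion formula and row-major float[9] layout as get_3x3_determinant() (the helper names here are local to the sketch):

```c
#include <assert.h>

/* Same cofactor-expansion formula as get_3x3_determinant(), on a
 * row-major float[9], kept standalone so it can be checked against
 * hand-computed values. */
static float det3(const float *m)
{
	return (m[0] * ((m[4] * m[8]) - (m[7] * m[5]))) -
	       (m[1] * ((m[3] * m[8]) - (m[6] * m[5]))) +
	       (m[2] * ((m[3] * m[7]) - (m[6] * m[4])));
}

/* the sign pattern used when filling the adjugate */
static int checkerboard_sign(int i, int j)
{
	return 1 - ((i + j) % 2) * 2;
}

static float det3_identity(void)
{
	const float id[9] = {1, 0, 0, 0, 1, 0, 0, 0, 1};
	return det3(id); /* determinant of I is 1 */
}

static float det3_diag234(void)
{
	const float m[9] = {2, 0, 0, 0, 3, 0, 0, 0, 4};
	return det3(m); /* diagonal matrix: product 2*3*4 */
}
```

With these two pieces, each inverse entry is `dstv[j].ptr[i] = det3(minor) * sign / det`, the transposed (adjugate) indexing visible in matrix4_inv().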
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once #include "obs-nix.h" void obs_nix_wayland_log_info(void); const struct obs_nix_hotkeys_vtable *obs_nix_wayland_get_hotkeys_vtable(void); obs-studio-32.1.0-sources/libobs/callback/000755 001751 001751 00000000000 15153330731 021243 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/callback/decl.c000644 001751 001751 00000013512 15153330235 022317 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #include "../util/cf-parser.h" #include "decl.h" static inline void err_specifier_exists(struct cf_parser *cfp, const char *storage) { cf_adderror(cfp, "'$1' specifier already exists", LEX_ERROR, storage, NULL, NULL); } static inline void err_reserved_name(struct cf_parser *cfp, const char *name) { cf_adderror(cfp, "'$1' is a reserved name", LEX_ERROR, name, NULL, NULL); } static inline void err_existing_name(struct cf_parser *cfp, const char *name) { cf_adderror(cfp, "'$1' already exists", LEX_ERROR, name, NULL, NULL); } static bool is_in_out_specifier(struct cf_parser *cfp, struct strref *name, uint32_t *type) { if (strref_cmp(name, "in") == 0) { if (*type & CALL_PARAM_IN) err_specifier_exists(cfp, "in"); *type |= CALL_PARAM_IN; } else if (strref_cmp(name, "out") == 0) { if (*type & CALL_PARAM_OUT) err_specifier_exists(cfp, "out"); *type |= CALL_PARAM_OUT; } else { return false; } return true; } #define TYPE_OR_STORAGE "type or storage specifier" static bool get_type(struct strref *ref, enum call_param_type *type, bool is_return) { if (strref_cmp(ref, "int") == 0) *type = CALL_PARAM_TYPE_INT; else if (strref_cmp(ref, "float") == 0) *type = CALL_PARAM_TYPE_FLOAT; else if (strref_cmp(ref, "bool") == 0) *type = CALL_PARAM_TYPE_BOOL; else if (strref_cmp(ref, "ptr") == 0) *type = CALL_PARAM_TYPE_PTR; else if (strref_cmp(ref, "string") == 0) *type = CALL_PARAM_TYPE_STRING; else if (is_return && strref_cmp(ref, "void") == 0) *type = CALL_PARAM_TYPE_VOID; else return false; return true; } static bool is_reserved_name(const char *str) { return (strcmp(str, "int") == 0) || (strcmp(str, "float") == 0) || (strcmp(str, "bool") == 0) || (strcmp(str, "ptr") == 0) || (strcmp(str, "string") == 0) || (strcmp(str, "void") == 0) || (strcmp(str, "return") == 0); } static bool name_exists(struct decl_info *decl, const char *name) { for (size_t i = 0; i < decl->params.num; i++) { const char *param_name = decl->params.array[i].name; if (strcmp(name, param_name) == 0) return true; } 
return false; } static int parse_param(struct cf_parser *cfp, struct decl_info *decl) { struct strref ref; int code; struct decl_param param = {0}; /* get storage specifiers */ code = cf_next_name_ref(cfp, &ref, TYPE_OR_STORAGE, ","); if (code != PARSE_SUCCESS) return code; while (is_in_out_specifier(cfp, &ref, &param.flags)) { code = cf_next_name_ref(cfp, &ref, TYPE_OR_STORAGE, ","); if (code != PARSE_SUCCESS) return code; } /* parameters not marked with specifiers are input parameters */ if (param.flags == 0) param.flags = CALL_PARAM_IN; if (!get_type(&ref, &param.type, false)) { cf_adderror_expecting(cfp, "type"); cf_go_to_token(cfp, ",", ")"); return PARSE_CONTINUE; } /* name */ code = cf_next_name(cfp, &param.name, "parameter name", ","); if (code != PARSE_SUCCESS) return code; if (name_exists(decl, param.name)) err_existing_name(cfp, param.name); if (is_reserved_name(param.name)) err_reserved_name(cfp, param.name); da_push_back(decl->params, &param); return PARSE_SUCCESS; } static void parse_params(struct cf_parser *cfp, struct decl_info *decl) { struct cf_token peek; int code; if (!cf_peek_valid_token(cfp, &peek)) return; while (peek.type == CFTOKEN_NAME) { code = parse_param(cfp, decl); if (code == PARSE_EOF) return; if (code != PARSE_CONTINUE && !cf_next_valid_token(cfp)) return; if (cf_token_is(cfp, ")")) break; else if (cf_token_should_be(cfp, ",", ",", NULL) == PARSE_EOF) return; if (!cf_peek_valid_token(cfp, &peek)) return; } if (!cf_token_is(cfp, ")")) cf_next_token_should_be(cfp, ")", NULL, NULL); } static void print_errors(struct cf_parser *cfp, const char *decl_string) { char *errors = error_data_buildstring(&cfp->error_list); if (errors) { blog(LOG_WARNING, "Errors/warnings for '%s':\n\n%s", decl_string, errors); bfree(errors); } } bool parse_decl_string(struct decl_info *decl, const char *decl_string) { struct cf_parser cfp; struct strref ret_type; struct decl_param ret_param = {0}; int code; bool success = false; decl->decl_string = decl_string; ret_param.flags 
= CALL_PARAM_OUT; cf_parser_init(&cfp); if (!cf_parser_parse(&cfp, decl_string, "declaration")) goto fail; code = cf_get_name_ref(&cfp, &ret_type, "return type", NULL); if (code == PARSE_EOF) goto fail; if (!get_type(&ret_type, &ret_param.type, true)) cf_adderror_expecting(&cfp, "return type"); code = cf_next_name(&cfp, &decl->name, "function name", "("); if (code == PARSE_EOF) goto fail; if (is_reserved_name(decl->name)) err_reserved_name(&cfp, decl->name); code = cf_next_token_should_be(&cfp, "(", "(", NULL); if (code == PARSE_EOF) goto fail; parse_params(&cfp, decl); success = true; fail: if (error_data_has_errors(&cfp.error_list)) success = false; if (success && ret_param.type != CALL_PARAM_TYPE_VOID) { ret_param.name = bstrdup("return"); da_push_back(decl->params, &ret_param); } if (!success) decl_info_free(decl); print_errors(&cfp, decl_string); cf_parser_free(&cfp); return success; } obs-studio-32.1.0-sources/libobs/callback/signal.h000644 001751 001751 00000005004 15153330235 022667 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "../util/c99defs.h" #include "calldata.h" #ifdef __cplusplus extern "C" { #endif /* * Signal handler * * This is used to create a signal handler which can broadcast events * to one or more callbacks connected to a signal. */ struct signal_handler; typedef struct signal_handler signal_handler_t; typedef void (*global_signal_callback_t)(void *, const char *, calldata_t *); typedef void (*signal_callback_t)(void *, calldata_t *); EXPORT signal_handler_t *signal_handler_create(void); EXPORT void signal_handler_destroy(signal_handler_t *handler); EXPORT bool signal_handler_add(signal_handler_t *handler, const char *signal_decl); static inline bool signal_handler_add_array(signal_handler_t *handler, const char **signal_decls) { bool success = true; if (!signal_decls) return false; while (*signal_decls) if (!signal_handler_add(handler, *(signal_decls++))) success = false; return success; } EXPORT void signal_handler_connect(signal_handler_t *handler, const char *signal, signal_callback_t callback, void *data); EXPORT void signal_handler_connect_ref(signal_handler_t *handler, const char *signal, signal_callback_t callback, void *data); EXPORT void signal_handler_disconnect(signal_handler_t *handler, const char *signal, signal_callback_t callback, void *data); EXPORT void signal_handler_connect_global(signal_handler_t *handler, global_signal_callback_t callback, void *data); EXPORT void signal_handler_disconnect_global(signal_handler_t *handler, global_signal_callback_t callback, void *data); EXPORT void signal_handler_remove_current(void); EXPORT void signal_handler_signal(signal_handler_t *handler, const char *signal, calldata_t *params); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/callback/proc.c000644 001751 001751 00000006177 15153330235 022364 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby 
granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "../util/darray.h" #include "../util/threading.h" #include "decl.h" #include "proc.h" struct proc_info { struct decl_info func; void *data; proc_handler_proc_t callback; }; static inline void proc_info_free(struct proc_info *pi) { decl_info_free(&pi->func); } struct proc_handler { /* TODO: replace with hash table lookup? */ pthread_mutex_t mutex; DARRAY(struct proc_info) procs; }; static struct proc_info *getproc(proc_handler_t *handler, const char *name) { for (size_t i = 0; i < handler->procs.num; i++) { struct proc_info *info = handler->procs.array + i; if (strcmp(info->func.name, name) == 0) { return info; } } return NULL; } /* ------------------------------------------------------------------------- */ proc_handler_t *proc_handler_create(void) { struct proc_handler *handler = bmalloc(sizeof(struct proc_handler)); if (pthread_mutex_init_recursive(&handler->mutex) != 0) { blog(LOG_ERROR, "Couldn't create proc_handler mutex"); bfree(handler); return NULL; } da_init(handler->procs); return handler; } void proc_handler_destroy(proc_handler_t *handler) { if (!handler) return; for (size_t i = 0; i < handler->procs.num; i++) proc_info_free(handler->procs.array + i); da_free(handler->procs); pthread_mutex_destroy(&handler->mutex); bfree(handler); } void proc_handler_add(proc_handler_t *handler, const char *decl_string, proc_handler_proc_t proc, void *data) { if 
(!handler) return; struct proc_info pi; memset(&pi, 0, sizeof(struct proc_info)); if (!parse_decl_string(&pi.func, decl_string)) { blog(LOG_ERROR, "Function declaration invalid: %s", decl_string); return; } pi.callback = proc; pi.data = data; pthread_mutex_lock(&handler->mutex); struct proc_info *existing = getproc(handler, pi.func.name); if (existing) { blog(LOG_WARNING, "Procedure '%s' already exists", pi.func.name); proc_info_free(&pi); } else { da_push_back(handler->procs, &pi); } pthread_mutex_unlock(&handler->mutex); } bool proc_handler_call(proc_handler_t *handler, const char *name, calldata_t *params) { if (!handler) return false; pthread_mutex_lock(&handler->mutex); struct proc_info *info = getproc(handler, name); struct proc_info info_copy; if (info) info_copy = *info; pthread_mutex_unlock(&handler->mutex); if (!info) return false; info_copy.callback(info_copy.data, params); return true; } obs-studio-32.1.0-sources/libobs/callback/calldata.h000644 001751 001751 00000012225 15153330235 023162 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
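`proc_handler_call` above uses a deliberate locking pattern: it looks up the `proc_info` and copies it while holding the mutex, unlocks, and only then invokes the callback, so the lock is never held across user code. A stripped-down, self-contained sketch of that pattern (hypothetical `tiny_proc_*` names, a single fixed slot instead of a darray, no declaration parsing):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef void (*tiny_proc_fn)(void *data, int *result);

struct tiny_proc_handler {
	pthread_mutex_t mutex;
	const char *name; /* NULL while no procedure is registered */
	tiny_proc_fn proc;
	void *data;
};

static void tiny_proc_add(struct tiny_proc_handler *h, const char *name,
			  tiny_proc_fn proc, void *data)
{
	pthread_mutex_lock(&h->mutex);
	h->name = name;
	h->proc = proc;
	h->data = data;
	pthread_mutex_unlock(&h->mutex);
}

static bool tiny_proc_call(struct tiny_proc_handler *h, const char *name,
			   int *result)
{
	tiny_proc_fn proc = NULL;
	void *data = NULL;

	/* copy the entry under the lock... */
	pthread_mutex_lock(&h->mutex);
	if (h->name && strcmp(h->name, name) == 0) {
		proc = h->proc;
		data = h->data;
	}
	pthread_mutex_unlock(&h->mutex);

	/* ...and invoke it after unlocking, as proc_handler_call does, so
	 * the mutex is not held while arbitrary callback code runs */
	if (!proc)
		return false;
	proc(data, result);
	return true;
}

/* example procedure for the usage below */
static void tiny_double(void *data, int *result)
{
	(void)data;
	*result *= 2;
}
```

Note that libobs additionally makes the mutex recursive (`pthread_mutex_init_recursive`), so a procedure may safely re-enter the handler even through `proc_handler_add`.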
*/ #pragma once #include <string.h> #include "../util/c99defs.h" #include "../util/bmem.h" #ifdef __cplusplus extern "C" { #endif /* * Procedure call data structure * * This is used to store parameters (and return value) sent to/from signals, * procedures, and callbacks. */ enum call_param_type { CALL_PARAM_TYPE_VOID, CALL_PARAM_TYPE_INT, CALL_PARAM_TYPE_FLOAT, CALL_PARAM_TYPE_BOOL, CALL_PARAM_TYPE_PTR, CALL_PARAM_TYPE_STRING }; #define CALL_PARAM_IN (1 << 0) #define CALL_PARAM_OUT (1 << 1) struct calldata { uint8_t *stack; size_t size; /* size of the stack, in bytes */ size_t capacity; /* capacity of the stack, in bytes */ bool fixed; /* fixed size (using call stack) */ }; typedef struct calldata calldata_t; static inline void calldata_init(struct calldata *data) { memset(data, 0, sizeof(struct calldata)); } static inline void calldata_clear(struct calldata *data); static inline void calldata_init_fixed(struct calldata *data, uint8_t *stack, size_t size) { data->stack = stack; data->capacity = size; data->fixed = true; data->size = 0; calldata_clear(data); } static inline void calldata_free(struct calldata *data) { if (!data->fixed) bfree(data->stack); } EXPORT bool calldata_get_data(const calldata_t *data, const char *name, void *out, size_t size); EXPORT void calldata_set_data(calldata_t *data, const char *name, const void *in, size_t new_size); static inline void calldata_clear(struct calldata *data) { if (data->stack) { data->size = sizeof(size_t); memset(data->stack, 0, sizeof(size_t)); } } static inline calldata_t *calldata_create(void) { return (calldata_t *)bzalloc(sizeof(struct calldata)); } static inline void calldata_destroy(calldata_t *cd) { calldata_free(cd); bfree(cd); } /* ------------------------------------------------------------------------- */ /* NOTE: 'get' functions return true only if parameter exists, and is the * same type. They return false otherwise. 
*/ static inline bool calldata_get_int(const calldata_t *data, const char *name, long long *val) { return calldata_get_data(data, name, val, sizeof(*val)); } static inline bool calldata_get_float(const calldata_t *data, const char *name, double *val) { return calldata_get_data(data, name, val, sizeof(*val)); } static inline bool calldata_get_bool(const calldata_t *data, const char *name, bool *val) { return calldata_get_data(data, name, val, sizeof(*val)); } static inline bool calldata_get_ptr(const calldata_t *data, const char *name, void *p_ptr) { return calldata_get_data(data, name, p_ptr, sizeof(p_ptr)); } EXPORT bool calldata_get_string(const calldata_t *data, const char *name, const char **str); /* ------------------------------------------------------------------------- */ /* call if you know your data is valid */ static inline long long calldata_int(const calldata_t *data, const char *name) { long long val = 0; calldata_get_int(data, name, &val); return val; } static inline double calldata_float(const calldata_t *data, const char *name) { double val = 0.0; calldata_get_float(data, name, &val); return val; } static inline bool calldata_bool(const calldata_t *data, const char *name) { bool val = false; calldata_get_bool(data, name, &val); return val; } static inline void *calldata_ptr(const calldata_t *data, const char *name) { void *val = NULL; calldata_get_ptr(data, name, &val); return val; } static inline const char *calldata_string(const calldata_t *data, const char *name) { const char *val = NULL; calldata_get_string(data, name, &val); return val; } /* ------------------------------------------------------------------------- */ static inline void calldata_set_int(calldata_t *data, const char *name, long long val) { calldata_set_data(data, name, &val, sizeof(val)); } static inline void calldata_set_float(calldata_t *data, const char *name, double val) { calldata_set_data(data, name, &val, sizeof(val)); } static inline void calldata_set_bool(calldata_t 
*data, const char *name, bool val) { calldata_set_data(data, name, &val, sizeof(val)); } static inline void calldata_set_ptr(calldata_t *data, const char *name, void *ptr) { calldata_set_data(data, name, &ptr, sizeof(ptr)); } static inline void calldata_set_string(calldata_t *data, const char *name, const char *str) { if (str) calldata_set_data(data, name, str, strlen(str) + 1); else calldata_set_data(data, name, NULL, 0); } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/callback/decl.h000644 001751 001751 00000003101 15153330235 022315 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "calldata.h" #include "../util/darray.h" #ifdef __cplusplus extern "C" { #endif struct decl_param { char *name; enum call_param_type type; uint32_t flags; }; static inline void decl_param_free(struct decl_param *param) { if (param->name) bfree(param->name); memset(param, 0, sizeof(struct decl_param)); } struct decl_info { char *name; const char *decl_string; DARRAY(struct decl_param) params; }; static inline void decl_info_free(struct decl_info *decl) { if (decl) { for (size_t i = 0; i < decl->params.num; i++) decl_param_free(decl->params.array + i); da_free(decl->params); bfree(decl->name); memset(decl, 0, sizeof(struct decl_info)); } } EXPORT bool parse_decl_string(struct decl_info *decl, const char *decl_string); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/callback/proc.h000644 001751 001751 00000003254 15153330235 022362 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "../util/c99defs.h" #include "calldata.h" #ifdef __cplusplus extern "C" { #endif /* * Procedure handler * * This handler is used to allow access to one or more procedures that can be * added and called without having to have direct access to declarations or * procedure callback pointers. 
*/ struct proc_handler; typedef struct proc_handler proc_handler_t; typedef void (*proc_handler_proc_t)(void *, calldata_t *); EXPORT proc_handler_t *proc_handler_create(void); EXPORT void proc_handler_destroy(proc_handler_t *handler); EXPORT void proc_handler_add(proc_handler_t *handler, const char *decl_string, proc_handler_proc_t proc, void *data); /** * Calls a function in a procedure handler. Returns false if the named * procedure is not found. */ EXPORT bool proc_handler_call(proc_handler_t *handler, const char *name, calldata_t *params); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/callback/signal.c000644 001751 001751 00000023131 15153330235 022663 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
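The `signal_handler` API declared in signal.h earlier (and implemented in signal.c below) follows a classic observer pattern with one subtle rule: a callback disconnected while a signal is being dispatched is only flagged for removal and swept after dispatch, so the callback array is never mutated mid-iteration. A minimal self-contained sketch of that pattern (hypothetical `tiny_*` names; no mutexes, no declaration parsing, fixed-size slot array, and swap-removal rather than the order-preserving `da_erase` the real code uses):

```c
#include <stdbool.h>
#include <stddef.h>

typedef void (*tiny_cb_t)(void *data, int *counter);

struct tiny_slot {
	tiny_cb_t cb;
	void *data;
	bool remove;
};

struct tiny_signal {
	struct tiny_slot slots[8];
	size_t num;
	bool signalling;
};

static void tiny_connect(struct tiny_signal *s, tiny_cb_t cb, void *data)
{
	s->slots[s->num++] = (struct tiny_slot){cb, data, false};
}

static void tiny_disconnect(struct tiny_signal *s, tiny_cb_t cb, void *data)
{
	for (size_t i = 0; i < s->num; i++) {
		struct tiny_slot *sl = &s->slots[i];
		if (sl->cb == cb && sl->data == data) {
			if (s->signalling)
				sl->remove = true; /* defer, like libobs */
			else
				s->slots[i] = s->slots[--s->num];
			return;
		}
	}
}

static void tiny_signal_fire(struct tiny_signal *s, int *counter)
{
	s->signalling = true;
	for (size_t i = 0; i < s->num; i++)
		if (!s->slots[i].remove)
			s->slots[i].cb(s->slots[i].data, counter);
	/* sweep deferred removals only after dispatch finishes */
	for (size_t i = s->num; i > 0; i--)
		if (s->slots[i - 1].remove)
			s->slots[i - 1] = s->slots[--s->num];
	s->signalling = false;
}

/* example callback for the usage below */
static void count_cb(void *data, int *counter)
{
	(void)data;
	(*counter)++;
}
```

The real implementation layers per-signal mutexes, `keep_ref` reference counting, and global callbacks on top of this core loop.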
*/ #include "../util/darray.h" #include "../util/threading.h" #include "decl.h" #include "signal.h" struct signal_callback { signal_callback_t callback; void *data; bool remove; bool keep_ref; }; struct signal_info { struct decl_info func; DARRAY(struct signal_callback) callbacks; pthread_mutex_t mutex; bool signalling; struct signal_info *next; }; static inline struct signal_info *signal_info_create(struct decl_info *info) { struct signal_info *si = bmalloc(sizeof(struct signal_info)); si->func = *info; si->next = NULL; si->signalling = false; da_init(si->callbacks); if (pthread_mutex_init_recursive(&si->mutex) != 0) { blog(LOG_ERROR, "Could not create signal"); decl_info_free(&si->func); bfree(si); return NULL; } return si; } static inline void signal_info_destroy(struct signal_info *si) { if (si) { pthread_mutex_destroy(&si->mutex); decl_info_free(&si->func); da_free(si->callbacks); bfree(si); } } static inline size_t signal_get_callback_idx(struct signal_info *si, signal_callback_t callback, void *data) { for (size_t i = 0; i < si->callbacks.num; i++) { struct signal_callback *sc = si->callbacks.array + i; if (sc->callback == callback && sc->data == data) return i; } return DARRAY_INVALID; } struct global_callback_info { global_signal_callback_t callback; void *data; long signaling; bool remove; }; struct signal_handler { struct signal_info *first; pthread_mutex_t mutex; volatile long refs; DARRAY(struct global_callback_info) global_callbacks; pthread_mutex_t global_callbacks_mutex; }; static struct signal_info *getsignal(signal_handler_t *handler, const char *name, struct signal_info **p_last) { struct signal_info *signal, *last = NULL; signal = handler->first; while (signal != NULL) { if (strcmp(signal->func.name, name) == 0) break; last = signal; signal = signal->next; } if (p_last) *p_last = last; return signal; } /* ------------------------------------------------------------------------- */ signal_handler_t *signal_handler_create(void) { struct 
signal_handler *handler = bzalloc(sizeof(struct signal_handler)); handler->first = NULL; handler->refs = 1; if (pthread_mutex_init(&handler->mutex, NULL) != 0) { blog(LOG_ERROR, "Couldn't create signal handler mutex!"); bfree(handler); return NULL; } if (pthread_mutex_init_recursive(&handler->global_callbacks_mutex) != 0) { blog(LOG_ERROR, "Couldn't create signal handler global " "callbacks mutex!"); pthread_mutex_destroy(&handler->mutex); bfree(handler); return NULL; } return handler; } static void signal_handler_actually_destroy(signal_handler_t *handler) { struct signal_info *sig = handler->first; while (sig != NULL) { struct signal_info *next = sig->next; signal_info_destroy(sig); sig = next; } da_free(handler->global_callbacks); pthread_mutex_destroy(&handler->global_callbacks_mutex); pthread_mutex_destroy(&handler->mutex); bfree(handler); } void signal_handler_destroy(signal_handler_t *handler) { if (handler && os_atomic_dec_long(&handler->refs) == 0) { signal_handler_actually_destroy(handler); } } bool signal_handler_add(signal_handler_t *handler, const char *signal_decl) { struct decl_info func = {0}; struct signal_info *sig, *last; bool success = true; if (!parse_decl_string(&func, signal_decl)) { blog(LOG_ERROR, "Signal declaration invalid: %s", signal_decl); return false; } pthread_mutex_lock(&handler->mutex); sig = getsignal(handler, func.name, &last); if (sig) { blog(LOG_WARNING, "Signal declaration '%s' exists", func.name); decl_info_free(&func); success = false; } else { sig = signal_info_create(&func); if (!last) handler->first = sig; else last->next = sig; } pthread_mutex_unlock(&handler->mutex); return success; } static void signal_handler_connect_internal(signal_handler_t *handler, const char *signal, signal_callback_t callback, void *data, bool keep_ref) { struct signal_info *sig, *last; struct signal_callback cb_data = {callback, data, false, keep_ref}; size_t idx; if (!handler) return; pthread_mutex_lock(&handler->mutex); sig = 
getsignal(handler, signal, &last); pthread_mutex_unlock(&handler->mutex); if (!sig) { blog(LOG_WARNING, "signal_handler_connect: " "signal '%s' not found", signal); return; } /* -------------- */ pthread_mutex_lock(&sig->mutex); if (keep_ref) os_atomic_inc_long(&handler->refs); idx = signal_get_callback_idx(sig, callback, data); if (keep_ref || idx == DARRAY_INVALID) da_push_back(sig->callbacks, &cb_data); pthread_mutex_unlock(&sig->mutex); } void signal_handler_connect(signal_handler_t *handler, const char *signal, signal_callback_t callback, void *data) { signal_handler_connect_internal(handler, signal, callback, data, false); } void signal_handler_connect_ref(signal_handler_t *handler, const char *signal, signal_callback_t callback, void *data) { signal_handler_connect_internal(handler, signal, callback, data, true); } static inline struct signal_info *getsignal_locked(signal_handler_t *handler, const char *name) { struct signal_info *sig; if (!handler) return NULL; pthread_mutex_lock(&handler->mutex); sig = getsignal(handler, name, NULL); pthread_mutex_unlock(&handler->mutex); return sig; } void signal_handler_disconnect(signal_handler_t *handler, const char *signal, signal_callback_t callback, void *data) { struct signal_info *sig = getsignal_locked(handler, signal); bool keep_ref = false; size_t idx; if (!sig) return; pthread_mutex_lock(&sig->mutex); idx = signal_get_callback_idx(sig, callback, data); if (idx != DARRAY_INVALID) { if (sig->signalling) { sig->callbacks.array[idx].remove = true; } else { keep_ref = sig->callbacks.array[idx].keep_ref; da_erase(sig->callbacks, idx); } } pthread_mutex_unlock(&sig->mutex); if (keep_ref && os_atomic_dec_long(&handler->refs) == 0) { signal_handler_actually_destroy(handler); } } static THREAD_LOCAL struct signal_callback *current_signal_cb = NULL; static THREAD_LOCAL struct global_callback_info *current_global_cb = NULL; void signal_handler_remove_current(void) { if (current_signal_cb) current_signal_cb->remove = true; 
else if (current_global_cb) current_global_cb->remove = true; } void signal_handler_signal(signal_handler_t *handler, const char *signal, calldata_t *params) { struct signal_info *sig = getsignal_locked(handler, signal); long remove_refs = 0; if (!sig) return; pthread_mutex_lock(&sig->mutex); sig->signalling = true; for (size_t i = 0; i < sig->callbacks.num; i++) { struct signal_callback *cb = sig->callbacks.array + i; if (!cb->remove) { current_signal_cb = cb; cb->callback(cb->data, params); current_signal_cb = NULL; } } for (size_t i = sig->callbacks.num; i > 0; i--) { struct signal_callback *cb = sig->callbacks.array + i - 1; if (cb->remove) { if (cb->keep_ref) remove_refs++; da_erase(sig->callbacks, i - 1); } } sig->signalling = false; pthread_mutex_unlock(&sig->mutex); pthread_mutex_lock(&handler->global_callbacks_mutex); if (handler->global_callbacks.num) { for (size_t i = 0; i < handler->global_callbacks.num; i++) { struct global_callback_info *cb = handler->global_callbacks.array + i; if (!cb->remove) { cb->signaling++; current_global_cb = cb; cb->callback(cb->data, signal, params); current_global_cb = NULL; cb->signaling--; } } for (size_t i = handler->global_callbacks.num; i > 0; i--) { struct global_callback_info *cb = handler->global_callbacks.array + (i - 1); if (cb->remove && !cb->signaling) da_erase(handler->global_callbacks, i - 1); } } pthread_mutex_unlock(&handler->global_callbacks_mutex); if (remove_refs) { os_atomic_set_long(&handler->refs, os_atomic_load_long(&handler->refs) - remove_refs); } } void signal_handler_connect_global(signal_handler_t *handler, global_signal_callback_t callback, void *data) { struct global_callback_info cb_data = {callback, data, 0, false}; size_t idx; if (!handler || !callback) return; pthread_mutex_lock(&handler->global_callbacks_mutex); idx = da_find(handler->global_callbacks, &cb_data, 0); if (idx == DARRAY_INVALID) da_push_back(handler->global_callbacks, &cb_data); 
pthread_mutex_unlock(&handler->global_callbacks_mutex); } void signal_handler_disconnect_global(signal_handler_t *handler, global_signal_callback_t callback, void *data) { struct global_callback_info cb_data = {callback, data, 0, false}; size_t idx; if (!handler || !callback) return; pthread_mutex_lock(&handler->global_callbacks_mutex); idx = da_find(handler->global_callbacks, &cb_data, 0); if (idx != DARRAY_INVALID) { struct global_callback_info *cb = handler->global_callbacks.array + idx; if (cb->signaling) cb->remove = true; else da_erase(handler->global_callbacks, idx); } pthread_mutex_unlock(&handler->global_callbacks_mutex); } obs-studio-32.1.0-sources/libobs/callback/calldata.c000644 001751 001751 00000012650 15153330235 023157 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include <string.h> #include "../util/bmem.h" #include "../util/base.h" #include "calldata.h" /* * Uses a data stack. Probably more complex than it should be, but reduces * fetching. * * Stack format is: * [size_t param1_name_size] * [char[] param1_name] * [size_t param1_data_size] * [uint8_t[] param1_data] * [size_t param2_name_size] * [char[] param2_name] * [size_t param2_data_size] * [uint8_t[] param2_data] * [...] 
* [size_t 0] * * Strings and string sizes always include the null terminator to allow for * direct referencing. */ static inline size_t cd_serialize_size(uint8_t **pos) { size_t size = 0; memcpy(&size, *pos, sizeof(size_t)); *pos += sizeof(size_t); return size; } static inline const char *cd_serialize_string(uint8_t **pos) { size_t size = cd_serialize_size(pos); const char *str = (const char *)*pos; *pos += size; return (size != 0) ? str : NULL; } static bool cd_getparam(const calldata_t *data, const char *name, uint8_t **pos) { size_t name_size; if (!data->size) return false; *pos = data->stack; name_size = cd_serialize_size(pos); while (name_size != 0) { const char *param_name = (const char *)*pos; size_t param_size; *pos += name_size; if (strcmp(param_name, name) == 0) return true; param_size = cd_serialize_size(pos); *pos += param_size; name_size = cd_serialize_size(pos); } *pos -= sizeof(size_t); return false; } static inline void cd_copy_string(uint8_t **pos, const char *str, size_t len) { if (!len) len = strlen(str) + 1; memcpy(*pos, &len, sizeof(size_t)); *pos += sizeof(size_t); memcpy(*pos, str, len); *pos += len; } static inline void cd_copy_data(uint8_t **pos, const void *in, size_t size) { memcpy(*pos, &size, sizeof(size_t)); *pos += sizeof(size_t); if (size) { memcpy(*pos, in, size); *pos += size; } } static inline void cd_set_first_param(calldata_t *data, const char *name, const void *in, size_t size) { uint8_t *pos; size_t capacity; size_t name_len = strlen(name) + 1; capacity = sizeof(size_t) * 3 + name_len + size; data->size = capacity; if (capacity < 128) capacity = 128; data->capacity = capacity; data->stack = bmalloc(capacity); pos = data->stack; cd_copy_string(&pos, name, name_len); cd_copy_data(&pos, in, size); memset(pos, 0, sizeof(size_t)); } static inline bool cd_ensure_capacity(calldata_t *data, uint8_t **pos, size_t new_size) { size_t offset; size_t new_capacity; if (new_size < data->capacity) return true; if (data->fixed) { 
blog(LOG_ERROR, "Tried to go above fixed calldata stack size!"); return false; } offset = *pos - data->stack; new_capacity = data->capacity * 2; if (new_capacity < new_size) new_capacity = new_size; data->stack = brealloc(data->stack, new_capacity); data->capacity = new_capacity; *pos = data->stack + offset; return true; } /* ------------------------------------------------------------------------- */ bool calldata_get_data(const calldata_t *data, const char *name, void *out, size_t size) { uint8_t *pos; size_t data_size; if (!data || !name || !*name) return false; if (!cd_getparam(data, name, &pos)) return false; data_size = cd_serialize_size(&pos); if (data_size != size) return false; memcpy(out, pos, size); return true; } void calldata_set_data(calldata_t *data, const char *name, const void *in, size_t size) { uint8_t *pos = NULL; if (!data || !name || !*name) return; if (!data->fixed && !data->stack) { cd_set_first_param(data, name, in, size); return; } if (cd_getparam(data, name, &pos)) { size_t cur_size; memcpy(&cur_size, pos, sizeof(size_t)); if (cur_size < size) { size_t offset = size - cur_size; size_t bytes = data->size; if (!cd_ensure_capacity(data, &pos, bytes + offset)) return; memmove(pos + offset, pos, bytes - (pos - data->stack)); data->size += offset; } else if (cur_size > size) { size_t offset = cur_size - size; size_t bytes = data->size - offset; memmove(pos, pos + offset, bytes - (pos - data->stack)); data->size -= offset; } cd_copy_data(&pos, in, size); } else { size_t name_len = strlen(name) + 1; size_t offset = name_len + size + sizeof(size_t) * 2; if (!cd_ensure_capacity(data, &pos, data->size + offset)) return; data->size += offset; cd_copy_string(&pos, name, 0); cd_copy_data(&pos, in, size); memset(pos, 0, sizeof(size_t)); } } bool calldata_get_string(const calldata_t *data, const char *name, const char **str) { uint8_t *pos; if (!data || !name || !*name) return false; if (!cd_getparam(data, name, &pos)) return false; *str = 
cd_serialize_string(&pos); return true; } obs-studio-32.1.0-sources/libobs/obs-video-gpu-encode.c /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include "obs-internal.h" #define NBSP "\xC2\xA0" static const char *gpu_encode_frame_name = "gpu_encode_frame"; static void *gpu_encode_thread(void *data) { struct obs_core_video_mix *video = data; uint64_t interval = video_output_get_frame_time(video->video); DARRAY(obs_encoder_t *) encoders; int wait_frames = NUM_ENCODE_TEXTURE_FRAMES_TO_WAIT; da_init(encoders); os_set_thread_name("obs gpu encode thread"); const char *gpu_encode_thread_name = profile_store_name( obs_get_profiler_name_store(), "obs_gpu_encode_thread(%g" NBSP "ms)", interval / 1000000.); profile_register_root(gpu_encode_thread_name, interval); while (os_sem_wait(video->gpu_encode_semaphore) == 0) { struct obs_tex_frame tf; uint64_t timestamp; uint64_t lock_key; uint64_t next_key; size_t lock_count = 0; uint64_t fer_ts = 0; if (os_atomic_load_bool(&video->gpu_encode_stop)) break; if (wait_frames) { wait_frames--; continue; } profile_start(gpu_encode_thread_name); os_event_reset(video->gpu_encode_inactive); /* -------------- */
pthread_mutex_lock(&video->gpu_encoder_mutex); deque_pop_front(&video->gpu_encoder_queue, &tf, sizeof(tf)); timestamp = tf.timestamp; lock_key = tf.lock_key; next_key = tf.lock_key; video_output_inc_texture_frames(video->video); for (size_t i = 0; i < video->gpu_encoders.num; i++) { obs_encoder_t *encoder = obs_encoder_get_ref(video->gpu_encoders.array[i]); if (encoder) da_push_back(encoders, &encoder); } pthread_mutex_unlock(&video->gpu_encoder_mutex); /* -------------- */ for (size_t i = 0; i < encoders.num; i++) { struct encoder_packet pkt = {0}; bool received = false; bool success = false; uint32_t skip = 0; obs_encoder_t *encoder = encoders.array[i]; obs_weak_encoder_t **paired = encoder->paired_encoders.array; size_t num_paired = encoder->paired_encoders.num; pkt.timebase_num = encoder->timebase_num * encoder->frame_rate_divisor; pkt.timebase_den = encoder->timebase_den; pkt.encoder = encoder; if (encoder->encoder_group && !encoder->start_ts) { struct obs_encoder_group *group = encoder->encoder_group; bool ready = false; pthread_mutex_lock(&group->mutex); ready = group->start_timestamp == timestamp; pthread_mutex_unlock(&group->mutex); if (!ready) continue; } if (!encoder->first_received && num_paired) { bool wait_for_audio = false; for (size_t idx = 0; !wait_for_audio && idx < num_paired; idx++) { obs_encoder_t *enc = obs_weak_encoder_get_encoder(paired[idx]); if (!enc) continue; if (!enc->first_received || enc->first_raw_ts > timestamp) { wait_for_audio = true; } obs_encoder_release(enc); } if (wait_for_audio) continue; } if (video_pause_check(&encoder->pause, timestamp)) continue; if (encoder->reconfigure_requested) { encoder->reconfigure_requested = false; encoder->info.update(encoder->context.data, encoder->context.settings); } // an explicit counter is used instead of remainder calculation // to allow multiple encoders started at the same time to start on // the same frame skip = encoder->frame_rate_divisor_counter++; if 
(encoder->frame_rate_divisor_counter == encoder->frame_rate_divisor) encoder->frame_rate_divisor_counter = 0; if (skip) continue; if (!encoder->start_ts) encoder->start_ts = timestamp; if (++lock_count == encoders.num) next_key = 0; else next_key++; /* Get the frame encode request timestamp. This * needs to be read just before the encode request. */ fer_ts = os_gettime_ns(); profile_start(gpu_encode_frame_name); if (encoder->info.encode_texture2) { struct encoder_texture tex = {0}; tex.handle = tf.handle; tex.tex[0] = tf.tex; tex.tex[1] = tf.tex_uv; tex.tex[2] = NULL; success = encoder->info.encode_texture2(encoder->context.data, &tex, encoder->cur_pts, lock_key, &next_key, &pkt, &received); } else { success = encoder->info.encode_texture(encoder->context.data, tf.handle, encoder->cur_pts, lock_key, &next_key, &pkt, &received); } profile_end(gpu_encode_frame_name); /* Generate and enqueue the frame timing metrics, namely * the CTS (composition time), FER (frame encode request), FERC * (frame encode request complete) and current PTS. PTS is used to * associate the frame timing data with the encode packet. 
*/ if (tf.timestamp) { struct encoder_packet_time *ept = da_push_back_new(encoder->encoder_packet_times); // Get the frame encode request complete timestamp if (success) { ept->ferc = os_gettime_ns(); } else { // Encode had error, set ferc to 0 ept->ferc = 0; } ept->pts = encoder->cur_pts; ept->cts = tf.timestamp; ept->fer = fer_ts; } send_off_encoder_packet(encoder, success, received, &pkt); lock_key = next_key; encoder->cur_pts += encoder->timebase_num * encoder->frame_rate_divisor; } /* -------------- */ pthread_mutex_lock(&video->gpu_encoder_mutex); tf.lock_key = next_key; if (--tf.count) { tf.timestamp += interval; deque_push_front(&video->gpu_encoder_queue, &tf, sizeof(tf)); video_output_inc_texture_skipped_frames(video->video); } else { deque_push_back(&video->gpu_encoder_avail_queue, &tf, sizeof(tf)); } pthread_mutex_unlock(&video->gpu_encoder_mutex); /* -------------- */ os_event_signal(video->gpu_encode_inactive); for (size_t i = 0; i < encoders.num; i++) obs_encoder_release(encoders.array[i]); da_resize(encoders, 0); profile_end(gpu_encode_thread_name); profile_reenable_thread(); } da_free(encoders); return NULL; } bool init_gpu_encoding(struct obs_core_video_mix *video) { const struct video_output_info *info = video_output_get_info(video->video); video->gpu_encode_stop = false; deque_reserve(&video->gpu_encoder_avail_queue, NUM_ENCODE_TEXTURES); for (size_t i = 0; i < NUM_ENCODE_TEXTURES; i++) { gs_texture_t *tex; gs_texture_t *tex_uv; if (info->format == VIDEO_FORMAT_P010) { gs_texture_create_p010(&tex, &tex_uv, info->width, info->height, GS_RENDER_TARGET | GS_SHARED_KM_TEX); } else { gs_texture_create_nv12(&tex, &tex_uv, info->width, info->height, GS_RENDER_TARGET | GS_SHARED_KM_TEX); } if (!tex) { return false; } #ifdef _WIN32 uint32_t handle = gs_texture_get_shared_handle(tex); #else uint32_t handle = (uint32_t)-1; #endif struct obs_tex_frame frame = {.tex = tex, .tex_uv = tex_uv, .handle = handle}; deque_push_back(&video->gpu_encoder_avail_queue, 
&frame, sizeof(frame)); } if (os_sem_init(&video->gpu_encode_semaphore, 0) != 0) return false; if (os_event_init(&video->gpu_encode_inactive, OS_EVENT_TYPE_MANUAL) != 0) return false; if (pthread_create(&video->gpu_encode_thread, NULL, gpu_encode_thread, video) != 0) return false; os_event_signal(video->gpu_encode_inactive); video->gpu_encode_thread_initialized = true; return true; } void stop_gpu_encoding_thread(struct obs_core_video_mix *video) { if (video->gpu_encode_thread_initialized) { os_atomic_set_bool(&video->gpu_encode_stop, true); os_sem_post(video->gpu_encode_semaphore); pthread_join(video->gpu_encode_thread, NULL); video->gpu_encode_thread_initialized = false; } } void free_gpu_encoding(struct obs_core_video_mix *video) { if (video->gpu_encode_semaphore) { os_sem_destroy(video->gpu_encode_semaphore); video->gpu_encode_semaphore = NULL; } if (video->gpu_encode_inactive) { os_event_destroy(video->gpu_encode_inactive); video->gpu_encode_inactive = NULL; } #define free_deque(x) \ do { \ while (x.size) { \ struct obs_tex_frame frame; \ deque_pop_front(&x, &frame, sizeof(frame)); \ gs_texture_destroy(frame.tex); \ gs_texture_destroy(frame.tex_uv); \ } \ deque_free(&x); \ } while (false) free_deque(video->gpu_encoder_queue); free_deque(video->gpu_encoder_avail_queue); #undef free_deque } obs-studio-32.1.0-sources/libobs/obs-source.h /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once #include "obs.h" /** * @file * @brief header for modules implementing sources. * * Sources are modules that either feed data to libobs or modify it. */ #ifdef __cplusplus extern "C" { #endif enum obs_source_type { OBS_SOURCE_TYPE_INPUT, OBS_SOURCE_TYPE_FILTER, OBS_SOURCE_TYPE_TRANSITION, OBS_SOURCE_TYPE_SCENE, }; enum obs_balance_type { OBS_BALANCE_TYPE_SINE_LAW, OBS_BALANCE_TYPE_SQUARE_LAW, OBS_BALANCE_TYPE_LINEAR, }; enum obs_icon_type { OBS_ICON_TYPE_UNKNOWN, OBS_ICON_TYPE_IMAGE, OBS_ICON_TYPE_COLOR, OBS_ICON_TYPE_SLIDESHOW, OBS_ICON_TYPE_AUDIO_INPUT, OBS_ICON_TYPE_AUDIO_OUTPUT, OBS_ICON_TYPE_DESKTOP_CAPTURE, OBS_ICON_TYPE_WINDOW_CAPTURE, OBS_ICON_TYPE_GAME_CAPTURE, OBS_ICON_TYPE_CAMERA, OBS_ICON_TYPE_TEXT, OBS_ICON_TYPE_MEDIA, OBS_ICON_TYPE_BROWSER, OBS_ICON_TYPE_CUSTOM, OBS_ICON_TYPE_PROCESS_AUDIO_OUTPUT, }; enum obs_media_state { OBS_MEDIA_STATE_NONE, OBS_MEDIA_STATE_PLAYING, OBS_MEDIA_STATE_OPENING, OBS_MEDIA_STATE_BUFFERING, OBS_MEDIA_STATE_PAUSED, OBS_MEDIA_STATE_STOPPED, OBS_MEDIA_STATE_ENDED, OBS_MEDIA_STATE_ERROR, }; /** * @name Source output flags * * These flags determine what type of data the source outputs and expects. * @{ */ /** * Source has video. * * Unless SOURCE_ASYNC_VIDEO is specified, the source must include the * video_render callback in the source definition structure. */ #define OBS_SOURCE_VIDEO (1 << 0) /** * Source has audio. * * Use the obs_source_output_audio function to pass raw audio data, which will * be automatically converted and uploaded. If used with SOURCE_ASYNC_VIDEO, * audio will automatically be synced up to the video output.
*/ #define OBS_SOURCE_AUDIO (1 << 1) /** Async video flag (use OBS_SOURCE_ASYNC_VIDEO) */ #define OBS_SOURCE_ASYNC (1 << 2) /** * Source passes raw video data via RAM. * * Use the obs_source_output_video function to pass raw video data, which will * be automatically uploaded at the specified timestamp. * * If this flag is specified, it is not necessary to include the video_render * callback. However, if you wish to use that function as well, you must call * obs_source_getframe to get the current frame data, and * obs_source_releaseframe to release the data when complete. */ #define OBS_SOURCE_ASYNC_VIDEO (OBS_SOURCE_ASYNC | OBS_SOURCE_VIDEO) /** * Source uses custom drawing, rather than a default effect. * * If this flag is specified, the video_render callback will pass a NULL * effect, and effect-based filters will not use direct rendering. */ #define OBS_SOURCE_CUSTOM_DRAW (1 << 3) /** * Source supports interaction. * * When this is used, the source will receive interaction events * if they provide the necessary callbacks in the source definition structure. */ #define OBS_SOURCE_INTERACTION (1 << 5) /** * Source composites sub-sources * * When used specifies that the source composites one or more sub-sources. * Sources that render sub-sources must implement the audio_render callback * in order to perform custom mixing of sub-sources. * * This capability flag is always set for transitions. */ #define OBS_SOURCE_COMPOSITE (1 << 6) /** * Source should not be fully duplicated * * When this is used, specifies that the source should not be fully duplicated, * and should prefer to duplicate via holding references rather than full * duplication. */ #define OBS_SOURCE_DO_NOT_DUPLICATE (1 << 7) /** * Source is deprecated and should not be used */ #define OBS_SOURCE_DEPRECATED (1 << 8) /** * Source cannot have its audio monitored * * Specifies that this source may cause a feedback loop if audio is monitored * with a device selected as desktop audio. 
* * This is used primarily with desktop audio capture sources. */ #define OBS_SOURCE_DO_NOT_SELF_MONITOR (1 << 9) /** * Source type is currently disabled and should not be shown to the user */ #define OBS_SOURCE_CAP_DISABLED (1 << 10) /** * Source type is obsolete (has been updated with new defaults/properties/etc) */ #define OBS_SOURCE_CAP_OBSOLETE OBS_SOURCE_CAP_DISABLED /** * Source should enable monitoring by default. Monitoring should be set by the * frontend if this flag is set. */ #define OBS_SOURCE_MONITOR_BY_DEFAULT (1 << 11) /** Used internally for audio submixing */ #define OBS_SOURCE_SUBMIX (1 << 12) /** * Source type can be controlled by media controls */ #define OBS_SOURCE_CONTROLLABLE_MEDIA (1 << 13) /** * Source type provides cea708 data */ #define OBS_SOURCE_CEA_708 (1 << 14) /** * Source understands SRGB rendering */ #define OBS_SOURCE_SRGB (1 << 15) /** * Source type prefers not to have its properties shown on creation * (prefers to rely on defaults first) */ #define OBS_SOURCE_CAP_DONT_SHOW_PROPERTIES (1 << 16) /** * Source requires a canvas to operate */ #define OBS_SOURCE_REQUIRES_CANVAS (1 << 17) /** @} */ typedef void (*obs_source_enum_proc_t)(obs_source_t *parent, obs_source_t *child, void *param); struct obs_source_audio_mix { struct audio_output_data output[MAX_AUDIO_MIXES]; }; /** * Source definition structure */ struct obs_source_info { /* ----------------------------------------------------------------- */ /* Required implementation*/ /** Unique string identifier for the source */ const char *id; /** * Type of source. * * OBS_SOURCE_TYPE_INPUT for input sources, * OBS_SOURCE_TYPE_FILTER for filter sources, and * OBS_SOURCE_TYPE_TRANSITION for transition sources. 
*/ enum obs_source_type type; /** Source output flags */ uint32_t output_flags; /** * Get the translated name of the source type * * @param type_data The type_data variable of this structure * @return The translated name of the source type */ const char *(*get_name)(void *type_data); /** * Creates the source data for the source * * @param settings Settings to initialize the source with * @param source Source that this data is associated with * @return The data associated with this source */ void *(*create)(obs_data_t *settings, obs_source_t *source); /** * Destroys the private data for the source * * Async sources must not call obs_source_output_video after returning * from destroy */ void (*destroy)(void *data); /** Returns the width of the source. Required if this is an input * source and has non-async video */ uint32_t (*get_width)(void *data); /** Returns the height of the source. Required if this is an input * source and has non-async video */ uint32_t (*get_height)(void *data); /* ----------------------------------------------------------------- */ /* Optional implementation */ /** * Gets the default settings for this source * * @param[out] settings Data to assign default settings to * @deprecated Use get_defaults2 if type_data is needed */ void (*get_defaults)(obs_data_t *settings); /** * Gets the property information of this source * * @return The properties data * @deprecated Use get_properties2 if type_data is needed */ obs_properties_t *(*get_properties)(void *data); /** * Updates the settings for this source * * @param data Source data * @param settings New settings for this source */ void (*update)(void *data, obs_data_t *settings); /** Called when the source has been activated in the main view */ void (*activate)(void *data); /** * Called when the source has been deactivated from the main view * (no longer being played/displayed) */ void (*deactivate)(void *data); /** Called when the source is visible */ void (*show)(void *data); /** Called when the 
source is no longer visible */ void (*hide)(void *data); /** * Called each video frame with the time elapsed * * @param data Source data * @param seconds Seconds elapsed since the last frame */ void (*video_tick)(void *data, float seconds); /** * Called when rendering the source with the graphics subsystem. * * If this is an input/transition source, this is called to draw the * source texture with the graphics subsystem using the specified * effect. * * If this is a filter source, it wraps source draw calls (for * example applying a custom effect with custom parameters to a * source). In this case, it's highly recommended to use the * obs_source_process_filter function to automatically handle * effect-based filter processing. However, you can implement custom * draw handling as desired as well. * * If the source output flags do not include SOURCE_CUSTOM_DRAW, all * a source needs to do is set the "image" parameter of the effect to * the desired texture, and then draw. If the output flags include * SOURCE_COLOR_MATRIX, you may optionally set the "color_matrix" * parameter of the effect to a custom 4x4 conversion matrix (by * default it will be set to an YUV->RGB conversion matrix) * * @param data Source data * @param effect Effect to be used with this source. If the source * output flags include SOURCE_CUSTOM_DRAW, this will * be NULL, and the source is expected to process with * an effect manually. */ void (*video_render)(void *data, gs_effect_t *effect); /** * Called to filter raw async video data. * * @note This function is only used with filter sources. * * @param data Filter data * @param frame Video frame to filter * @return New video frame data. This can defer video data to * be drawn later if time is needed for processing */ struct obs_source_frame *(*filter_video)(void *data, struct obs_source_frame *frame); /** * Called to filter raw audio data. * * @note This function is only used with filter sources. 
* * @param data Filter data * @param audio Audio data to filter. * @return Modified or new audio data. You can directly modify * the data passed and return it, or you can defer audio * data for later if time is needed for processing. If * you are returning new data, that data must exist * until the next call to the filter_audio callback or * until the filter is removed/destroyed. */ struct obs_audio_data *(*filter_audio)(void *data, struct obs_audio_data *audio); /** * Called to enumerate all active sources being used within this * source. If the source has children that render audio/video it must * implement this callback. * * @param data Filter data * @param enum_callback Enumeration callback * @param param User data to pass to callback */ void (*enum_active_sources)(void *data, obs_source_enum_proc_t enum_callback, void *param); /** * Called when saving a source. This is a separate function because * sometimes a source needs to know when it is being saved so it * doesn't always have to update the current settings until a certain * point. * * @param data Source data * @param settings Settings */ void (*save)(void *data, obs_data_t *settings); /** * Called when loading a source from saved data. This should be called * after all the loading sources have actually been created because * sometimes there are sources that depend on each other. * * @param data Source data * @param settings Settings */ void (*load)(void *data, obs_data_t *settings); /** * Called when interacting with a source and a mouse-down or mouse-up * occurs. * * @param data Source data * @param event Mouse event properties * @param type Mouse button pushed * @param mouse_up Mouse event type (true if mouse-up) * @param click_count Mouse click count (1 for single click, etc.) */ void (*mouse_click)(void *data, const struct obs_mouse_event *event, int32_t type, bool mouse_up, uint32_t click_count); /** * Called when interacting with a source and a mouse-move occurs. 
* * @param data Source data * @param event Mouse event properties * @param mouse_leave Mouse leave state (true if mouse left source) */ void (*mouse_move)(void *data, const struct obs_mouse_event *event, bool mouse_leave); /** * Called when interacting with a source and a mouse-wheel occurs. * * @param data Source data * @param event Mouse event properties * @param x_delta Movement delta in the horizontal direction * @param y_delta Movement delta in the vertical direction */ void (*mouse_wheel)(void *data, const struct obs_mouse_event *event, int x_delta, int y_delta); /** * Called when interacting with a source and a focus gained/lost event * occurs. * * @param data Source data * @param focus Focus state (true if focus gained) */ void (*focus)(void *data, bool focus); /** * Called when interacting with a source and a key-up or key-down * occurs. * * @param data Source data * @param event Key event properties * @param key_up Key event type (true if key-up) */ void (*key_click)(void *data, const struct obs_key_event *event, bool key_up); /** * Called when the filter is removed from a source * * @param data Filter data * @param source Source that the filter is being removed from */ void (*filter_remove)(void *data, obs_source_t *source); /** * Private data associated with this entry */ void *type_data; /** * If defined, called to free private data on shutdown */ void (*free_type_data)(void *type_data); bool (*audio_render)(void *data, uint64_t *ts_out, struct obs_source_audio_mix *audio_output, uint32_t mixers, size_t channels, size_t sample_rate); /** * Called to enumerate all active and inactive sources being used * within this source. If this callback isn't implemented, * enum_active_sources will be called instead. * * This is typically used if a source can have inactive child sources.
* * @param data Filter data * @param enum_callback Enumeration callback * @param param User data to pass to callback */ void (*enum_all_sources)(void *data, obs_source_enum_proc_t enum_callback, void *param); void (*transition_start)(void *data); void (*transition_stop)(void *data); /** * Gets the default settings for this source * * If get_defaults is also defined both will be called, and the first * call will be to get_defaults, then to get_defaults2. * * @param type_data The type_data variable of this structure * @param[out] settings Data to assign default settings to */ void (*get_defaults2)(void *type_data, obs_data_t *settings); /** * Gets the property information of this source * * @param data Source data * @param type_data The type_data variable of this structure * @return The properties data */ obs_properties_t *(*get_properties2)(void *data, void *type_data); bool (*audio_mix)(void *data, uint64_t *ts_out, struct audio_output_data *audio_output, size_t channels, size_t sample_rate); /** Icon type for the source */ enum obs_icon_type icon_type; /** Media controls */ void (*media_play_pause)(void *data, bool pause); void (*media_restart)(void *data); void (*media_stop)(void *data); void (*media_next)(void *data); void (*media_previous)(void *data); int64_t (*media_get_duration)(void *data); int64_t (*media_get_time)(void *data); void (*media_set_time)(void *data, int64_t milliseconds); enum obs_media_state (*media_get_state)(void *data); /* version-related stuff */ uint32_t version; /* increment if needed to specify a new version */ const char *unversioned_id; /* set internally, don't set manually */ /** Missing files **/ obs_missing_files_t *(*missing_files)(void *data); /** Get color space **/ enum gs_color_space (*video_get_color_space)(void *data, size_t count, const enum gs_color_space *preferred_spaces); /** * Called when the filter is added to a source * * @param data Filter data * @param source Source that the filter is being added to */ void
(*filter_add)(void *data, obs_source_t *source); }; EXPORT void obs_register_source_s(const struct obs_source_info *info, size_t size); /** * Registers a source definition to the current obs context. This should be * used in obs_module_load. * * @param info Pointer to the source definition structure */ #define obs_register_source(info) obs_register_source_s(info, sizeof(struct obs_source_info)) #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-nix.c /****************************************************************************** Copyright (C) 2023 by Lain Bailey Copyright (C) 2014 by Zachary Lund This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #include "obs-internal.h" #include "obs-nix.h" #include "obs-nix-platform.h" #include "obs-nix-x11.h" #include "util/config-file.h" #ifdef ENABLE_WAYLAND #include "obs-nix-wayland.h" #endif #if defined(__FreeBSD__) #define _GNU_SOURCE #endif #include #include #include #if defined(__FreeBSD__) || defined(__OpenBSD__) #include #endif #if !defined(__OpenBSD__) #include #endif #include #include const char *get_module_extension(void) { return ".so"; } #define FLATPAK_PLUGIN_PATH "/app/plugins" static const char *module_bin[] = { "../../obs-plugins/64bit", OBS_INSTALL_PREFIX "/" OBS_PLUGIN_DESTINATION, FLATPAK_PLUGIN_PATH "/" OBS_PLUGIN_DESTINATION, }; static const char *module_data[] = { OBS_DATA_PATH "/obs-plugins/%module%", OBS_INSTALL_DATA_PATH "/obs-plugins/%module%", FLATPAK_PLUGIN_PATH "/share/obs/obs-plugins/%module%", }; static const int module_patterns_size = sizeof(module_bin) / sizeof(module_bin[0]); static const struct obs_nix_hotkeys_vtable *hotkeys_vtable = NULL; void add_default_module_paths(void) { char *module_bin_path = os_get_executable_path_ptr("../" OBS_PLUGIN_PATH); char *module_data_path = os_get_executable_path_ptr("../" OBS_DATA_PATH "/obs-plugins/%module%"); if (module_bin_path && module_data_path) { char *abs_module_bin_path = os_get_abs_path_ptr(module_bin_path); char *abs_module_install_path = os_get_abs_path_ptr(OBS_INSTALL_PREFIX "/" OBS_PLUGIN_DESTINATION); if (abs_module_bin_path && (!abs_module_install_path || strcmp(abs_module_bin_path, abs_module_install_path) != 0)) { obs_add_module_path(module_bin_path, module_data_path); } bfree(abs_module_install_path); bfree(abs_module_bin_path); } bfree(module_bin_path); bfree(module_data_path); for (int i = 0; i < module_patterns_size; i++) { obs_add_module_path(module_bin[i], module_data[i]); } } /* * /usr/local/share/libobs * /usr/share/libobs */ char *find_libobs_data_file(const char *file) { struct dstr output; 
dstr_init(&output); if (check_path(file, OBS_DATA_PATH "/libobs/", &output)) return output.array; char *relative_data_path = os_get_executable_path_ptr("../" OBS_DATA_PATH "/libobs/"); if (relative_data_path) { bool found = check_path(file, relative_data_path, &output); bfree(relative_data_path); if (found) { return output.array; } } if (OBS_INSTALL_PREFIX[0] != 0) { if (check_path(file, OBS_INSTALL_DATA_PATH "/libobs/", &output)) return output.array; } dstr_free(&output); return NULL; } static void log_processor_cores(void) { blog(LOG_INFO, "Physical Cores: %d, Logical Cores: %d", os_get_physical_cores(), os_get_logical_cores()); } #if defined(__linux__) static void log_processor_info(void) { int physical_id = -1; int last_physical_id = -1; char *line = NULL; size_t linecap = 0; FILE *fp; struct dstr proc_name; struct dstr proc_speed; fp = fopen("/proc/cpuinfo", "r"); if (!fp) return; dstr_init(&proc_name); dstr_init(&proc_speed); while (getline(&line, &linecap, fp) != -1) { if (!strncmp(line, "model name", 10)) { char *start = strchr(line, ':'); if (!start || *(++start) == '\0') continue; dstr_copy(&proc_name, start); dstr_resize(&proc_name, proc_name.len - 1); dstr_depad(&proc_name); } if (!strncmp(line, "physical id", 11)) { char *start = strchr(line, ':'); if (!start || *(++start) == '\0') continue; physical_id = atoi(start); } if (!strncmp(line, "cpu MHz", 7)) { char *start = strchr(line, ':'); if (!start || *(++start) == '\0') continue; dstr_copy(&proc_speed, start); dstr_resize(&proc_speed, proc_speed.len - 1); dstr_depad(&proc_speed); } if (*line == '\n' && physical_id != last_physical_id) { last_physical_id = physical_id; blog(LOG_INFO, "CPU Name: %s", proc_name.array); blog(LOG_INFO, "CPU Speed: %sMHz", proc_speed.array); } } fclose(fp); dstr_free(&proc_name); dstr_free(&proc_speed); free(line); } #elif defined(__FreeBSD__) || defined(__OpenBSD__) static void log_processor_speed(void) { #ifndef __OpenBSD__ char *line = NULL; size_t linecap = 0; FILE *fp; 
struct dstr proc_speed; fp = fopen("/var/run/dmesg.boot", "r"); if (!fp) { blog(LOG_INFO, "CPU: Missing /var/run/dmesg.boot !"); return; } dstr_init(&proc_speed); while (getline(&line, &linecap, fp) != -1) { if (!strncmp(line, "CPU: ", 5)) { char *start = strrchr(line, '('); if (!start || *(++start) == '\0') continue; size_t len = strcspn(start, "-"); dstr_ncopy(&proc_speed, start, len); } } blog(LOG_INFO, "CPU Speed: %sMHz", proc_speed.array); fclose(fp); dstr_free(&proc_speed); free(line); #endif } static void log_processor_name(void) { int mib[2]; size_t len; char *proc; mib[0] = CTL_HW; mib[1] = HW_MODEL; sysctl(mib, 2, NULL, &len, NULL, 0); proc = bmalloc(len); if (!proc) return; sysctl(mib, 2, proc, &len, NULL, 0); blog(LOG_INFO, "CPU Name: %s", proc); bfree(proc); } static void log_processor_info(void) { log_processor_name(); log_processor_speed(); } #endif static void log_memory_info(void) { #if defined(__OpenBSD__) int mib[2]; size_t len; int64_t mem; mib[0] = CTL_HW; mib[1] = HW_PHYSMEM64; len = sizeof(mem); if (sysctl(mib, 2, &mem, &len, NULL, 0) >= 0) blog(LOG_INFO, "Physical Memory: %" PRIi64 "MB Total", mem / 1024 / 1024); #else struct sysinfo info; if (sysinfo(&info) < 0) return; blog(LOG_INFO, "Physical Memory: %" PRIu64 "MB Total, %" PRIu64 "MB Free", (uint64_t)info.totalram * info.mem_unit / 1024 / 1024, ((uint64_t)info.freeram + (uint64_t)info.bufferram) * info.mem_unit / 1024 / 1024); #endif } static void log_kernel_version(void) { struct utsname info; if (uname(&info) < 0) return; blog(LOG_INFO, "Kernel Version: %s %s", info.sysname, info.release); } #if defined(__linux__) || defined(__FreeBSD__) static void log_distribution_info(void) { FILE *fp; char *line = NULL; size_t linecap = 0; struct dstr distro; struct dstr version; fp = fopen("/etc/os-release", "r"); if (!fp) { blog(LOG_INFO, "Distribution: Missing /etc/os-release !"); return; } dstr_init_copy(&distro, "Unknown"); dstr_init_copy(&version, "Unknown"); while (getline(&line, &linecap, 
fp) != -1) { if (!strncmp(line, "NAME", 4)) { char *start = strchr(line, '='); if (!start || *(++start) == '\0') continue; dstr_copy(&distro, start); dstr_resize(&distro, distro.len - 1); } if (!strncmp(line, "VERSION_ID", 10)) { char *start = strchr(line, '='); if (!start || *(++start) == '\0') continue; dstr_copy(&version, start); dstr_resize(&version, version.len - 1); } } blog(LOG_INFO, "Distribution: %s %s", distro.array, version.array); fclose(fp); dstr_free(&version); dstr_free(&distro); free(line); } static void log_flatpak_extensions(const char *extensions) { if (!extensions) return; char **exts_list = strlist_split(extensions, ';', false); for (char **ext = exts_list; *ext != NULL; ext++) { // Log the extension name without its commit hash char **name = strlist_split(*ext, '=', false); blog(LOG_INFO, " - %s", *name); strlist_free(name); } strlist_free(exts_list); } static void log_flatpak_info(void) { config_t *fp_info = NULL; if (config_open(&fp_info, "/.flatpak-info", CONFIG_OPEN_EXISTING) != CONFIG_SUCCESS) { blog(LOG_ERROR, "Unable to open .flatpak-info file"); return; } const char *branch = config_get_string(fp_info, "Instance", "branch"); const char *arch = config_get_string(fp_info, "Instance", "arch"); const char *commit = config_get_string(fp_info, "Instance", "app-commit"); const char *runtime = config_get_string(fp_info, "Application", "runtime"); const char *app_exts = config_get_string(fp_info, "Instance", "app-extensions"); const char *runtime_exts = config_get_string(fp_info, "Instance", "runtime-extensions"); const char *fp_version = config_get_string(fp_info, "Instance", "flatpak-version"); blog(LOG_INFO, "Flatpak Branch: %s", branch ? branch : "none"); blog(LOG_INFO, "Flatpak Arch: %s", arch ? arch : "unknown"); blog(LOG_INFO, "Flatpak Commit: %s", commit ? commit : "unknown"); blog(LOG_INFO, "Flatpak Runtime: %s", runtime ? 
runtime : "none"); if (app_exts) { blog(LOG_INFO, "App Extensions:"); log_flatpak_extensions(app_exts); } if (runtime_exts) { blog(LOG_INFO, "Runtime Extensions:"); log_flatpak_extensions(runtime_exts); } blog(LOG_INFO, "Flatpak Framework Version: %s", fp_version ? fp_version : "unknown"); config_close(fp_info); } static void log_desktop_session_info(void) { char *current_desktop = getenv("XDG_CURRENT_DESKTOP"); char *session_desktop = getenv("XDG_SESSION_DESKTOP"); char *session_type = getenv("XDG_SESSION_TYPE"); if (current_desktop && session_desktop) blog(LOG_INFO, "Desktop Environment: %s (%s)", current_desktop, session_desktop); else if (current_desktop || session_desktop) blog(LOG_INFO, "Desktop Environment: %s", current_desktop ? current_desktop : session_desktop); if (session_type) blog(LOG_INFO, "Session Type: %s", session_type); } #endif void log_system_info(void) { #if defined(__linux__) || defined(__FreeBSD__) log_processor_info(); #endif log_processor_cores(); log_memory_info(); log_kernel_version(); #if defined(__linux__) || defined(__FreeBSD__) if (access("/.flatpak-info", F_OK) == 0) log_flatpak_info(); else log_distribution_info(); log_desktop_session_info(); #endif if (obs_get_nix_platform() == OBS_NIX_PLATFORM_X11_EGL) obs_nix_x11_log_info(); } bool obs_hotkeys_platform_init(struct obs_core_hotkeys *hotkeys) { switch (obs_get_nix_platform()) { case OBS_NIX_PLATFORM_X11_EGL: hotkeys_vtable = obs_nix_x11_get_hotkeys_vtable(); break; #ifdef ENABLE_WAYLAND case OBS_NIX_PLATFORM_WAYLAND: hotkeys_vtable = obs_nix_wayland_get_hotkeys_vtable(); break; #endif default: break; } return hotkeys_vtable->init(hotkeys); } void obs_hotkeys_platform_free(struct obs_core_hotkeys *hotkeys) { hotkeys_vtable->free(hotkeys); hotkeys_vtable = NULL; } bool obs_hotkeys_platform_is_pressed(obs_hotkeys_platform_t *context, obs_key_t key) { return hotkeys_vtable->is_pressed(context, key); } void obs_key_to_str(obs_key_t key, struct dstr *dstr) { return 
	hotkeys_vtable->key_to_str(key, dstr);
}

obs_key_t obs_key_from_virtual_key(int sym)
{
	return hotkeys_vtable->key_from_virtual_key(sym);
}

int obs_key_to_virtual_key(obs_key_t key)
{
	return hotkeys_vtable->key_to_virtual_key(key);
}

static inline void add_combo_key(obs_key_t key, struct dstr *str)
{
	struct dstr key_str = {0};

	obs_key_to_str(key, &key_str);

	if (!dstr_is_empty(&key_str)) {
		if (!dstr_is_empty(str)) {
			dstr_cat(str, " + ");
		}
		dstr_cat_dstr(str, &key_str);
	}

	dstr_free(&key_str);
}

void obs_key_combination_to_str(obs_key_combination_t combination, struct dstr *str)
{
	if ((combination.modifiers & INTERACT_CONTROL_KEY) != 0) {
		add_combo_key(OBS_KEY_CONTROL, str);
	}
	if ((combination.modifiers & INTERACT_COMMAND_KEY) != 0) {
		add_combo_key(OBS_KEY_META, str);
	}
	if ((combination.modifiers & INTERACT_ALT_KEY) != 0) {
		add_combo_key(OBS_KEY_ALT, str);
	}
	if ((combination.modifiers & INTERACT_SHIFT_KEY) != 0) {
		add_combo_key(OBS_KEY_SHIFT, str);
	}
	if (combination.key != OBS_KEY_NONE) {
		add_combo_key(combination.key, str);
	}
}

---- obs-studio-32.1.0-sources/libobs/obs-hevc.h ----

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#pragma once

#include "util/c99defs.h"

#ifdef __cplusplus
extern "C" {
#endif

struct encoder_packet;

enum {
	OBS_HEVC_NAL_TRAIL_N = 0,
	OBS_HEVC_NAL_TRAIL_R = 1,
	OBS_HEVC_NAL_TSA_N = 2,
	OBS_HEVC_NAL_TSA_R = 3,
	OBS_HEVC_NAL_STSA_N = 4,
	OBS_HEVC_NAL_STSA_R = 5,
	OBS_HEVC_NAL_RADL_N = 6,
	OBS_HEVC_NAL_RADL_R = 7,
	OBS_HEVC_NAL_RASL_N = 8,
	OBS_HEVC_NAL_RASL_R = 9,
	OBS_HEVC_NAL_VCL_N10 = 10,
	OBS_HEVC_NAL_VCL_R11 = 11,
	OBS_HEVC_NAL_VCL_N12 = 12,
	OBS_HEVC_NAL_VCL_R13 = 13,
	OBS_HEVC_NAL_VCL_N14 = 14,
	OBS_HEVC_NAL_VCL_R15 = 15,
	OBS_HEVC_NAL_BLA_W_LP = 16,
	OBS_HEVC_NAL_BLA_W_RADL = 17,
	OBS_HEVC_NAL_BLA_N_LP = 18,
	OBS_HEVC_NAL_IDR_W_RADL = 19,
	OBS_HEVC_NAL_IDR_N_LP = 20,
	OBS_HEVC_NAL_CRA_NUT = 21,
	OBS_HEVC_NAL_RSV_IRAP_VCL22 = 22,
	OBS_HEVC_NAL_RSV_IRAP_VCL23 = 23,
	OBS_HEVC_NAL_RSV_VCL24 = 24,
	OBS_HEVC_NAL_RSV_VCL25 = 25,
	OBS_HEVC_NAL_RSV_VCL26 = 26,
	OBS_HEVC_NAL_RSV_VCL27 = 27,
	OBS_HEVC_NAL_RSV_VCL28 = 28,
	OBS_HEVC_NAL_RSV_VCL29 = 29,
	OBS_HEVC_NAL_RSV_VCL30 = 30,
	OBS_HEVC_NAL_RSV_VCL31 = 31,
	OBS_HEVC_NAL_VPS = 32,
	OBS_HEVC_NAL_SPS = 33,
	OBS_HEVC_NAL_PPS = 34,
	OBS_HEVC_NAL_AUD = 35,
	OBS_HEVC_NAL_EOS_NUT = 36,
	OBS_HEVC_NAL_EOB_NUT = 37,
	OBS_HEVC_NAL_FD_NUT = 38,
	OBS_HEVC_NAL_SEI_PREFIX = 39,
	OBS_HEVC_NAL_SEI_SUFFIX = 40,
};

EXPORT bool obs_hevc_keyframe(const uint8_t *data, size_t size);
EXPORT void obs_parse_hevc_packet(struct encoder_packet *hevc_packet, const struct encoder_packet *src);
EXPORT int obs_parse_hevc_packet_priority(const struct encoder_packet *packet);
EXPORT void obs_extract_hevc_headers(const uint8_t *packet, size_t size, uint8_t **new_packet_data,
				     size_t *new_packet_size, uint8_t **header_data, size_t *header_size,
				     uint8_t **sei_data, size_t *sei_size);

#ifdef __cplusplus
}
#endif

---- obs-studio-32.1.0-sources/libobs/obs-cocoa.m ----
/****************************************************************************** Copyright (C) 2013 by Ruwen Hahn This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "util/dstr.h" #include "obs.h" #include "obs-internal.h" #include #include #include #include #include #include #import // MARK: macOS Bundle Management const char *get_module_extension(void) { return ""; } void add_default_module_paths(void) { NSURL *pluginURL = [[NSBundle mainBundle] builtInPlugInsURL]; NSString *pluginModulePath = [[pluginURL path] stringByAppendingString:@"/%module%.plugin/Contents/MacOS/"]; NSString *pluginDataPath = [[pluginURL path] stringByAppendingString:@"/%module%.plugin/Contents/Resources/"]; obs_add_module_path(pluginModulePath.UTF8String, pluginDataPath.UTF8String); } char *find_libobs_data_file(const char *file) { NSBundle *frameworkBundle = [NSBundle bundleWithIdentifier:@"com.obsproject.libobs"]; NSString *libobsDataPath = [[[frameworkBundle bundleURL] path] stringByAppendingFormat:@"/%@/%s", @"Resources", file]; size_t path_length = strlen(libobsDataPath.UTF8String); char *path = bmalloc(path_length + 1); snprintf(path, (path_length + 1), "%s", libobsDataPath.UTF8String); return path; } // MARK: - macOS Hardware Info Helpers static void log_processor_name(void) { char *name = NULL; size_t size; int ret; ret = sysctlbyname("machdep.cpu.brand_string", NULL, &size, 
NULL, 0); if (ret != 0) return; name = bmalloc(size); ret = sysctlbyname("machdep.cpu.brand_string", name, &size, NULL, 0); if (ret == 0) blog(LOG_INFO, "CPU Name: %s", name); bfree(name); } static void log_processor_speed(void) { size_t size; long long freq; int ret; size = sizeof(freq); ret = sysctlbyname("hw.cpufrequency", &freq, &size, NULL, 0); if (ret == 0) blog(LOG_INFO, "CPU Speed: %lldMHz", freq / 1000000); } static void log_model_name(void) { char *name = NULL; size_t size; int ret; ret = sysctlbyname("hw.model", NULL, &size, NULL, 0); if (ret != 0) return; name = bmalloc(size); ret = sysctlbyname("hw.model", name, &size, NULL, 0); if (ret == 0) blog(LOG_INFO, "Model Identifier: %s", name); bfree(name); } static void log_processor_cores(void) { blog(LOG_INFO, "Physical Cores: %d, Logical Cores: %d", os_get_physical_cores(), os_get_logical_cores()); } static void log_emulation_status(void) { blog(LOG_INFO, "Rosetta translation used: %s", os_get_emulation_status() ? "true" : "false"); } static void log_available_memory(void) { size_t size; long long memory_available; int ret; size = sizeof(memory_available); ret = sysctlbyname("hw.memsize", &memory_available, &size, NULL, 0); if (ret == 0) blog(LOG_INFO, "Physical Memory: %lldMB Total", memory_available / 1024 / 1024); } static void log_os(void) { NSProcessInfo *pi = [NSProcessInfo processInfo]; blog(LOG_INFO, "OS Name: macOS"); blog(LOG_INFO, "OS Version: %s", [[pi operatingSystemVersionString] UTF8String]); } static void log_kernel_version(void) { char kernel_version[1024]; size_t size = sizeof(kernel_version); int ret; ret = sysctlbyname("kern.osrelease", kernel_version, &size, NULL, 0); if (ret == 0) blog(LOG_INFO, "Kernel Version: %s", kernel_version); } void log_system_info(void) { log_processor_name(); log_processor_speed(); log_processor_cores(); log_available_memory(); log_model_name(); log_os(); log_emulation_status(); log_kernel_version(); } // MARK: - Type Conversion Utilities static bool 
dstr_from_cfstring(struct dstr *str, CFStringRef ref) { CFIndex length = CFStringGetLength(ref); CFIndex max_size = CFStringGetMaximumSizeForEncoding(length, kCFStringEncodingUTF8) + 1; assert(max_size > 0); dstr_reserve(str, max_size); if (!CFStringGetCString(ref, str->array, max_size, kCFStringEncodingUTF8)) return false; str->len = strlen(str->array); return true; } // MARK: - Graphics Thread Wrappers void *obs_graphics_thread_autorelease(void *param) { @autoreleasepool { return obs_graphics_thread(param); } } bool obs_graphics_thread_loop_autorelease(struct obs_graphics_context *context) { @autoreleasepool { return obs_graphics_thread_loop(context); } } // MARK: - macOS Hotkey Management typedef struct obs_key_code { int code; bool is_valid; } obs_key_code_t; typedef struct macOS_glyph_desc { UniChar glyph; char *desc; bool is_glyph; bool is_valid; } macOS_glyph_desc_t; typedef struct obs_key_desc { char *desc; bool is_valid; } obs_key_desc_t; static int INVALID_KEY = 0xFF; /* clang-format off */ static const obs_key_code_t virtual_keys[OBS_KEY_LAST_VALUE] = { [OBS_KEY_A] = {.code = kVK_ANSI_A, .is_valid = true}, [OBS_KEY_B] = {.code = kVK_ANSI_B, .is_valid = true}, [OBS_KEY_C] = {.code = kVK_ANSI_C, .is_valid = true}, [OBS_KEY_D] = {.code = kVK_ANSI_D, .is_valid = true}, [OBS_KEY_E] = {.code = kVK_ANSI_E, .is_valid = true}, [OBS_KEY_F] = {.code = kVK_ANSI_F, .is_valid = true}, [OBS_KEY_G] = {.code = kVK_ANSI_G, .is_valid = true}, [OBS_KEY_H] = {.code = kVK_ANSI_H, .is_valid = true}, [OBS_KEY_I] = {.code = kVK_ANSI_I, .is_valid = true}, [OBS_KEY_J] = {.code = kVK_ANSI_J, .is_valid = true}, [OBS_KEY_K] = {.code = kVK_ANSI_K, .is_valid = true}, [OBS_KEY_L] = {.code = kVK_ANSI_L, .is_valid = true}, [OBS_KEY_M] = {.code = kVK_ANSI_M, .is_valid = true}, [OBS_KEY_N] = {.code = kVK_ANSI_N, .is_valid = true}, [OBS_KEY_O] = {.code = kVK_ANSI_O, .is_valid = true}, [OBS_KEY_P] = {.code = kVK_ANSI_P, .is_valid = true}, [OBS_KEY_Q] = {.code = kVK_ANSI_Q, .is_valid = true}, 
[OBS_KEY_R] = {.code = kVK_ANSI_R, .is_valid = true}, [OBS_KEY_S] = {.code = kVK_ANSI_S, .is_valid = true}, [OBS_KEY_T] = {.code = kVK_ANSI_T, .is_valid = true}, [OBS_KEY_U] = {.code = kVK_ANSI_U, .is_valid = true}, [OBS_KEY_V] = {.code = kVK_ANSI_V, .is_valid = true}, [OBS_KEY_W] = {.code = kVK_ANSI_W, .is_valid = true}, [OBS_KEY_X] = {.code = kVK_ANSI_X, .is_valid = true}, [OBS_KEY_Y] = {.code = kVK_ANSI_Y, .is_valid = true}, [OBS_KEY_Z] = {.code = kVK_ANSI_Z, .is_valid = true}, [OBS_KEY_1] = {.code = kVK_ANSI_1, .is_valid = true}, [OBS_KEY_2] = {.code = kVK_ANSI_2, .is_valid = true}, [OBS_KEY_3] = {.code = kVK_ANSI_3, .is_valid = true}, [OBS_KEY_4] = {.code = kVK_ANSI_4, .is_valid = true}, [OBS_KEY_5] = {.code = kVK_ANSI_5, .is_valid = true}, [OBS_KEY_6] = {.code = kVK_ANSI_6, .is_valid = true}, [OBS_KEY_7] = {.code = kVK_ANSI_7, .is_valid = true}, [OBS_KEY_8] = {.code = kVK_ANSI_8, .is_valid = true}, [OBS_KEY_9] = {.code = kVK_ANSI_9, .is_valid = true}, [OBS_KEY_0] = {.code = kVK_ANSI_0, .is_valid = true}, [OBS_KEY_RETURN] = {.code = kVK_Return, .is_valid = true}, [OBS_KEY_ESCAPE] = {.code = kVK_Escape, .is_valid = true}, [OBS_KEY_BACKSPACE] = {.code = kVK_Delete, .is_valid = true}, [OBS_KEY_TAB] = {.code = kVK_Tab, .is_valid = true}, [OBS_KEY_SPACE] = {.code = kVK_Space, .is_valid = true}, [OBS_KEY_MINUS] = {.code = kVK_ANSI_Minus, .is_valid = true}, [OBS_KEY_EQUAL] = {.code = kVK_ANSI_Equal, .is_valid = true}, [OBS_KEY_BRACKETLEFT] = {.code = kVK_ANSI_LeftBracket, .is_valid = true}, [OBS_KEY_BRACKETRIGHT] = {.code = kVK_ANSI_RightBracket, .is_valid = true}, [OBS_KEY_BACKSLASH] = {.code = kVK_ANSI_Backslash, .is_valid = true}, [OBS_KEY_SEMICOLON] = {.code = kVK_ANSI_Semicolon, .is_valid = true}, [OBS_KEY_QUOTE] = {.code = kVK_ANSI_Quote, .is_valid = true}, [OBS_KEY_DEAD_GRAVE] = {.code = kVK_ANSI_Grave, .is_valid = true}, [OBS_KEY_COMMA] = {.code = kVK_ANSI_Comma, .is_valid = true}, [OBS_KEY_PERIOD] = {.code = kVK_ANSI_Period, .is_valid = true}, 
[OBS_KEY_SLASH] = {.code = kVK_ANSI_Slash, .is_valid = true}, [OBS_KEY_CAPSLOCK] = {.code = kVK_CapsLock, .is_valid = true}, [OBS_KEY_SECTION] = {.code = kVK_ISO_Section, .is_valid = true}, [OBS_KEY_F1] = {.code = kVK_F1, .is_valid = true}, [OBS_KEY_F2] = {.code = kVK_F2, .is_valid = true}, [OBS_KEY_F3] = {.code = kVK_F3, .is_valid = true}, [OBS_KEY_F4] = {.code = kVK_F4, .is_valid = true}, [OBS_KEY_F5] = {.code = kVK_F5, .is_valid = true}, [OBS_KEY_F6] = {.code = kVK_F6, .is_valid = true}, [OBS_KEY_F7] = {.code = kVK_F7, .is_valid = true}, [OBS_KEY_F8] = {.code = kVK_F8, .is_valid = true}, [OBS_KEY_F9] = {.code = kVK_F9, .is_valid = true}, [OBS_KEY_F10] = {.code = kVK_F10, .is_valid = true}, [OBS_KEY_F11] = {.code = kVK_F11, .is_valid = true}, [OBS_KEY_F12] = {.code = kVK_F12, .is_valid = true}, [OBS_KEY_HELP] = {.code = kVK_Help, .is_valid = true}, [OBS_KEY_HOME] = {.code = kVK_Home, .is_valid = true}, [OBS_KEY_PAGEUP] = {.code = kVK_PageUp, .is_valid = true}, [OBS_KEY_DELETE] = {.code = kVK_ForwardDelete, .is_valid = true}, [OBS_KEY_END] = {.code = kVK_End, .is_valid = true}, [OBS_KEY_PAGEDOWN] = {.code = kVK_PageDown, .is_valid = true}, [OBS_KEY_RIGHT] = {.code = kVK_RightArrow, .is_valid = true}, [OBS_KEY_LEFT] = {.code = kVK_LeftArrow, .is_valid = true}, [OBS_KEY_DOWN] = {.code = kVK_DownArrow, .is_valid = true}, [OBS_KEY_UP] = {.code = kVK_UpArrow, .is_valid = true}, [OBS_KEY_CLEAR] = {.code = kVK_ANSI_KeypadClear, .is_valid = true}, [OBS_KEY_NUMSLASH] = {.code = kVK_ANSI_KeypadDivide, .is_valid = true}, [OBS_KEY_NUMASTERISK] = {.code = kVK_ANSI_KeypadMultiply, .is_valid = true}, [OBS_KEY_NUMMINUS] = {.code = kVK_ANSI_KeypadMinus, .is_valid = true}, [OBS_KEY_NUMPLUS] = {.code = kVK_ANSI_KeypadPlus, .is_valid = true}, [OBS_KEY_ENTER] = {.code = kVK_ANSI_KeypadEnter, .is_valid = true}, [OBS_KEY_NUM1] = {.code = kVK_ANSI_Keypad1, .is_valid = true}, [OBS_KEY_NUM2] = {.code = kVK_ANSI_Keypad2, .is_valid = true}, [OBS_KEY_NUM3] = {.code = kVK_ANSI_Keypad3, 
.is_valid = true}, [OBS_KEY_NUM4] = {.code = kVK_ANSI_Keypad4, .is_valid = true}, [OBS_KEY_NUM5] = {.code = kVK_ANSI_Keypad5, .is_valid = true}, [OBS_KEY_NUM6] = {.code = kVK_ANSI_Keypad6, .is_valid = true}, [OBS_KEY_NUM7] = {.code = kVK_ANSI_Keypad7, .is_valid = true}, [OBS_KEY_NUM8] = {.code = kVK_ANSI_Keypad8, .is_valid = true}, [OBS_KEY_NUM9] = {.code = kVK_ANSI_Keypad9, .is_valid = true}, [OBS_KEY_NUM0] = {.code = kVK_ANSI_Keypad0, .is_valid = true}, [OBS_KEY_NUMPERIOD] = {.code = kVK_ANSI_KeypadDecimal, .is_valid = true}, [OBS_KEY_NUMEQUAL] = {.code = kVK_ANSI_KeypadEquals, .is_valid = true}, [OBS_KEY_F13] = {.code = kVK_F13, .is_valid = true}, [OBS_KEY_F14] = {.code = kVK_F14, .is_valid = true}, [OBS_KEY_F15] = {.code = kVK_F15, .is_valid = true}, [OBS_KEY_F16] = {.code = kVK_F16, .is_valid = true}, [OBS_KEY_F17] = {.code = kVK_F17, .is_valid = true}, [OBS_KEY_F18] = {.code = kVK_F18, .is_valid = true}, [OBS_KEY_F19] = {.code = kVK_F19, .is_valid = true}, [OBS_KEY_F20] = {.code = kVK_F20, .is_valid = true}, [OBS_KEY_CONTROL] = {.code = kVK_Control, .is_valid = true}, [OBS_KEY_SHIFT] = {.code = kVK_Shift, .is_valid = true}, [OBS_KEY_ALT] = {.code = kVK_Option, .is_valid = true}, [OBS_KEY_META] = {.code = kVK_Command, .is_valid = true}, }; static const obs_key_desc_t key_descriptions[OBS_KEY_LAST_VALUE] = { [OBS_KEY_SPACE] = {.desc = "Space", .is_valid = true}, [OBS_KEY_NUMEQUAL] = {.desc = "= (Keypad)", .is_valid = true}, [OBS_KEY_NUMASTERISK] = {.desc = "* (Keypad)", .is_valid = true}, [OBS_KEY_NUMPLUS] = {.desc = "+ (Keypad)", .is_valid = true}, [OBS_KEY_NUMMINUS] = {.desc = "- (Keypad)", .is_valid = true}, [OBS_KEY_NUMPERIOD] = {.desc = ". 
(Keypad)", .is_valid = true}, [OBS_KEY_NUMSLASH] = {.desc = "/ (Keypad)", .is_valid = true}, [OBS_KEY_NUM0] = {.desc = "0 (Keypad)", .is_valid = true}, [OBS_KEY_NUM1] = {.desc = "1 (Keypad)", .is_valid = true}, [OBS_KEY_NUM2] = {.desc = "2 (Keypad)", .is_valid = true}, [OBS_KEY_NUM3] = {.desc = "3 (Keypad)", .is_valid = true}, [OBS_KEY_NUM4] = {.desc = "4 (Keypad)", .is_valid = true}, [OBS_KEY_NUM5] = {.desc = "5 (Keypad)", .is_valid = true}, [OBS_KEY_NUM6] = {.desc = "6 (Keypad)", .is_valid = true}, [OBS_KEY_NUM7] = {.desc = "7 (Keypad)", .is_valid = true}, [OBS_KEY_NUM8] = {.desc = "8 (Keypad)", .is_valid = true}, [OBS_KEY_NUM9] = {.desc = "9 (Keypad)", .is_valid = true}, [OBS_KEY_MOUSE1] = {.desc = "Mouse 1", .is_valid = true}, [OBS_KEY_MOUSE2] = {.desc = "Mouse 2", .is_valid = true}, [OBS_KEY_MOUSE3] = {.desc = "Mouse 3", .is_valid = true}, [OBS_KEY_MOUSE4] = {.desc = "Mouse 4", .is_valid = true}, [OBS_KEY_MOUSE5] = {.desc = "Mouse 5", .is_valid = true}, [OBS_KEY_MOUSE6] = {.desc = "Mouse 6", .is_valid = true}, [OBS_KEY_MOUSE7] = {.desc = "Mouse 7", .is_valid = true}, [OBS_KEY_MOUSE8] = {.desc = "Mouse 8", .is_valid = true}, [OBS_KEY_MOUSE9] = {.desc = "Mouse 9", .is_valid = true}, [OBS_KEY_MOUSE10] = {.desc = "Mouse 10", .is_valid = true}, [OBS_KEY_MOUSE11] = {.desc = "Mouse 11", .is_valid = true}, [OBS_KEY_MOUSE12] = {.desc = "Mouse 12", .is_valid = true}, [OBS_KEY_MOUSE13] = {.desc = "Mouse 13", .is_valid = true}, [OBS_KEY_MOUSE14] = {.desc = "Mouse 14", .is_valid = true}, [OBS_KEY_MOUSE15] = {.desc = "Mouse 15", .is_valid = true}, [OBS_KEY_MOUSE16] = {.desc = "Mouse 16", .is_valid = true}, [OBS_KEY_MOUSE17] = {.desc = "Mouse 17", .is_valid = true}, [OBS_KEY_MOUSE18] = {.desc = "Mouse 18", .is_valid = true}, [OBS_KEY_MOUSE19] = {.desc = "Mouse 19", .is_valid = true}, [OBS_KEY_MOUSE20] = {.desc = "Mouse 20", .is_valid = true}, [OBS_KEY_MOUSE21] = {.desc = "Mouse 21", .is_valid = true}, [OBS_KEY_MOUSE22] = {.desc = "Mouse 22", .is_valid = true}, 
[OBS_KEY_MOUSE23] = {.desc = "Mouse 23", .is_valid = true}, [OBS_KEY_MOUSE24] = {.desc = "Mouse 24", .is_valid = true}, [OBS_KEY_MOUSE25] = {.desc = "Mouse 25", .is_valid = true}, [OBS_KEY_MOUSE26] = {.desc = "Mouse 26", .is_valid = true}, [OBS_KEY_MOUSE27] = {.desc = "Mouse 27", .is_valid = true}, [OBS_KEY_MOUSE28] = {.desc = "Mouse 28", .is_valid = true}, [OBS_KEY_MOUSE29] = {.desc = "Mouse 29", .is_valid = true}, }; static const macOS_glyph_desc_t key_glyphs[(keyCodeMask >> 8)] = { [kVK_Return] = {.glyph = 0x21A9, .is_glyph = true, .is_valid = true}, [kVK_Escape] = {.glyph = 0x238B, .is_glyph = true, .is_valid = true}, [kVK_Delete] = {.glyph = 0x232B, .is_glyph = true, .is_valid = true}, [kVK_Tab] = {.glyph = 0x21e5, .is_glyph = true, .is_valid = true}, [kVK_CapsLock] = {.glyph = 0x21EA, .is_glyph = true, .is_valid = true}, [kVK_ANSI_KeypadClear] = {.glyph = 0x2327, .is_glyph = true, .is_valid = true}, [kVK_ANSI_KeypadEnter] = {.glyph = 0x2305, .is_glyph = true, .is_valid = true}, [kVK_Help] = {.glyph = 0x003F, .is_glyph = true, .is_valid = true}, [kVK_Home] = {.glyph = 0x2196, .is_glyph = true, .is_valid = true}, [kVK_PageUp] = {.glyph = 0x21de, .is_glyph = true, .is_valid = true}, [kVK_ForwardDelete] = {.glyph = 0x2326, .is_glyph = true, .is_valid = true}, [kVK_End] = {.glyph = 0x2198, .is_glyph = true, .is_valid = true}, [kVK_PageDown] = {.glyph = 0x21df, .is_glyph = true, .is_valid = true}, [kVK_Control] = {.glyph = kControlUnicode, .is_glyph = true, .is_valid = true}, [kVK_Shift] = {.glyph = kShiftUnicode, .is_glyph = true, .is_valid = true}, [kVK_Option] = {.glyph = kOptionUnicode, .is_glyph = true, .is_valid = true}, [kVK_Command] = {.glyph = kCommandUnicode, .is_glyph = true, .is_valid = true}, [kVK_RightControl] = {.glyph = kControlUnicode, .is_glyph = true, .is_valid = true}, [kVK_RightShift] = {.glyph = kShiftUnicode, .is_glyph = true, .is_valid = true}, [kVK_RightOption] = {.glyph = kOptionUnicode, .is_glyph = true, .is_valid = true}, [kVK_F1] = 
{.desc = "F1", .is_valid = true}, [kVK_F2] = {.desc = "F2", .is_valid = true}, [kVK_F3] = {.desc = "F3", .is_valid = true}, [kVK_F4] = {.desc = "F4", .is_valid = true}, [kVK_F5] = {.desc = "F5", .is_valid = true}, [kVK_F6] = {.desc = "F6", .is_valid = true}, [kVK_F7] = {.desc = "F7", .is_valid = true}, [kVK_F8] = {.desc = "F8", .is_valid = true}, [kVK_F9] = {.desc = "F9", .is_valid = true}, [kVK_F10] = {.desc = "F10", .is_valid = true}, [kVK_F11] = {.desc = "F11", .is_valid = true}, [kVK_F12] = {.desc = "F12", .is_valid = true}, [kVK_F13] = {.desc = "F13", .is_valid = true}, [kVK_F14] = {.desc = "F14", .is_valid = true}, [kVK_F15] = {.desc = "F15", .is_valid = true}, [kVK_F16] = {.desc = "F16", .is_valid = true}, [kVK_F17] = {.desc = "F17", .is_valid = true}, [kVK_F18] = {.desc = "F18", .is_valid = true}, [kVK_F19] = {.desc = "F19", .is_valid = true}, [kVK_F20] = {.desc = "F20", .is_valid = true} }; /* clang-format on */ struct obs_hotkeys_platform { volatile long refs; CFMachPortRef eventTap; bool is_key_down[OBS_KEY_LAST_VALUE]; TISInputSourceRef tis; CFDataRef layout_data; UCKeyboardLayout *layout; }; // MARK: macOS Hotkey Implementation #define OBS_COCOA_MODIFIER_SIZE (int) 7 static char string_control[OBS_COCOA_MODIFIER_SIZE]; static char string_option[OBS_COCOA_MODIFIER_SIZE]; static char string_shift[OBS_COCOA_MODIFIER_SIZE]; static char string_command[OBS_COCOA_MODIFIER_SIZE]; static dispatch_once_t onceToken; static void hotkeys_retain(obs_hotkeys_platform_t *platform) { os_atomic_inc_long(&platform->refs); } static void hotkeys_release(obs_hotkeys_platform_t *platform) { if (os_atomic_dec_long(&platform->refs) == -1) { if (platform->tis) { CFRelease(platform->tis); platform->tis = NULL; } if (platform->layout_data) { CFRelease(platform->layout_data); platform->layout_data = NULL; } if (platform->eventTap) { CGEventTapEnable(platform->eventTap, false); CFRelease(platform->eventTap); platform->eventTap = NULL; } bfree(platform); } } static bool 
obs_key_to_localized_string(obs_key_t key, struct dstr *str) { if (key < OBS_KEY_LAST_VALUE && !key_descriptions[key].is_valid) { return false; } dstr_copy(str, obs_get_hotkey_translation(key, key_descriptions[key].desc)); return true; } static bool key_code_to_string(int code, struct dstr *str) { if (code < INVALID_KEY) { macOS_glyph_desc_t glyph = key_glyphs[code]; if (glyph.is_valid && glyph.is_glyph && glyph.glyph > 0) { dstr_from_wcs(str, (wchar_t[]) {glyph.glyph, 0}); } else if (glyph.is_valid && glyph.desc) { dstr_copy(str, glyph.desc); } else { return false; } } return true; } static bool log_layout_name(TISInputSourceRef tis) { struct dstr layout_name = {0}; CFStringRef sid = (CFStringRef) TISGetInputSourceProperty(tis, kTISPropertyInputSourceID); if (!sid) { blog(LOG_ERROR, "hotkeys-cocoa: Unable to get input source ID"); return false; } if (!dstr_from_cfstring(&layout_name, sid)) { blog(LOG_ERROR, "hotkeys-cocoa: Unable to convert input source ID"); dstr_free(&layout_name); return false; } blog(LOG_INFO, "hotkeys-cocoa: Using keyboard layout '%s'", layout_name.array); dstr_free(&layout_name); return true; } // MARK: macOS Hotkey CoreFoundation Callbacks static CGEventRef KeyboardEventProc(CGEventTapProxy proxy __unused, CGEventType type, CGEventRef event, void *userInfo) { obs_hotkeys_platform_t *platform = userInfo; const CGEventFlags flags = CGEventGetFlags(event); platform->is_key_down[OBS_KEY_SHIFT] = !!(flags & kCGEventFlagMaskShift); platform->is_key_down[OBS_KEY_ALT] = !!(flags & kCGEventFlagMaskAlternate); platform->is_key_down[OBS_KEY_META] = !!(flags & kCGEventFlagMaskCommand); platform->is_key_down[OBS_KEY_CONTROL] = !!(flags & kCGEventFlagMaskControl); switch (type) { case kCGEventKeyDown: { const int64_t keycode = CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode); platform->is_key_down[obs_key_from_virtual_key(keycode)] = true; break; } case kCGEventKeyUp: { const int64_t keycode = CGEventGetIntegerValueField(event, 
kCGKeyboardEventKeycode); platform->is_key_down[obs_key_from_virtual_key(keycode)] = false; break; } case kCGEventFlagsChanged: { break; } case kCGEventTapDisabledByTimeout: { blog(LOG_DEBUG, "[hotkeys-cocoa]: Hotkey event tap disabled by timeout. Reenabling..."); CGEventTapEnable(platform->eventTap, true); break; } default: { blog(LOG_WARNING, "[hotkeys-cocoa]: Received unexpected event with code '%d'", type); } } return event; } static void InputMethodChangedProc(CFNotificationCenterRef center __unused, void *observer, CFNotificationName name __unused, const void *object __unused, CFDictionaryRef userInfo __unused) { struct obs_core_hotkeys *hotkeys = observer; obs_hotkeys_platform_t *platform = hotkeys->platform_context; pthread_mutex_lock(&hotkeys->mutex); if (platform->layout_data) { CFRelease(platform->layout_data); } platform->tis = TISCopyCurrentKeyboardLayoutInputSource(); platform->layout_data = (CFDataRef) TISGetInputSourceProperty(platform->tis, kTISPropertyUnicodeKeyLayoutData); if (!platform->layout_data) { blog(LOG_ERROR, "hotkeys-cocoa: Failed to retrieve keyboard layout data"); hotkeys->platform_context = NULL; pthread_mutex_unlock(&hotkeys->mutex); hotkeys_release(platform); return; } CFRetain(platform->layout_data); platform->layout = (UCKeyboardLayout *) CFDataGetBytePtr(platform->layout_data); pthread_mutex_unlock(&hotkeys->mutex); } // MARK: macOS Hotkey API Implementation bool obs_hotkeys_platform_init(struct obs_core_hotkeys *hotkeys) { CFNotificationCenterAddObserver(CFNotificationCenterGetDistributedCenter(), hotkeys, InputMethodChangedProc, kTISNotifySelectedKeyboardInputSourceChanged, NULL, CFNotificationSuspensionBehaviorDeliverImmediately); obs_hotkeys_platform_t *platform = bzalloc(sizeof(obs_hotkeys_platform_t)); const bool has_event_access = CGPreflightListenEventAccess(); if (has_event_access) { platform->eventTap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap, kCGEventTapOptionListenOnly, CGEventMaskBit(kCGEventKeyDown) 
| CGEventMaskBit(kCGEventKeyUp) | CGEventMaskBit(kCGEventFlagsChanged), KeyboardEventProc, platform); if (!platform->eventTap) { blog(LOG_WARNING, "[hotkeys-cocoa]: Couldn't create hotkey event tap."); hotkeys_release(platform); platform = NULL; return false; } CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, platform->eventTap, 0); CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes); CFRelease(source); CGEventTapEnable(platform->eventTap, true); } else { blog(LOG_WARNING, "[hotkeys-cocoa]: No event permissions, could not add global hotkeys."); } platform->tis = TISCopyCurrentKeyboardLayoutInputSource(); platform->layout_data = (CFDataRef) TISGetInputSourceProperty(platform->tis, kTISPropertyUnicodeKeyLayoutData); if (!platform->layout_data) { blog(LOG_ERROR, "hotkeys-cocoa: Failed to retrieve keyboard layout data"); hotkeys_release(platform); platform = NULL; return false; } CFRetain(platform->layout_data); platform->layout = (UCKeyboardLayout *) CFDataGetBytePtr(platform->layout_data); obs_hotkeys_platform_t *currentPlatform; pthread_mutex_lock(&hotkeys->mutex); currentPlatform = hotkeys->platform_context; if (platform && currentPlatform && platform->layout_data == currentPlatform->layout_data) { pthread_mutex_unlock(&hotkeys->mutex); hotkeys_release(platform); return true; } hotkeys->platform_context = platform; log_layout_name(platform->tis); pthread_mutex_unlock(&hotkeys->mutex); calldata_t parameters = {0}; signal_handler_signal(hotkeys->signals, "hotkey_layout_change", ¶meters); if (currentPlatform) { hotkeys_release(currentPlatform); } bool hasPlatformContext = hotkeys->platform_context != NULL; return hasPlatformContext; } void obs_hotkeys_platform_free(struct obs_core_hotkeys *hotkeys) { CFNotificationCenterRemoveEveryObserver(CFNotificationCenterGetDistributedCenter(), hotkeys); hotkeys_release(hotkeys->platform_context); } int obs_key_to_virtual_key(obs_key_t key) { if (virtual_keys[key].is_valid) { 
return virtual_keys[key].code; } else { return INVALID_KEY; } } obs_key_t obs_key_from_virtual_key(int keyCode) { for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) { if (virtual_keys[i].is_valid && virtual_keys[i].code == keyCode) { return (obs_key_t) i; } } return OBS_KEY_NONE; } bool obs_hotkeys_platform_is_pressed(obs_hotkeys_platform_t *platform, obs_key_t key) { if (key >= OBS_KEY_MOUSE1 && key <= OBS_KEY_MOUSE29) { int button = key - 1; NSUInteger buttons = [NSEvent pressedMouseButtons]; if ((buttons & (1 << button)) != 0) { return true; } else { return false; } } if (!platform) { return false; } if (key >= OBS_KEY_LAST_VALUE) { return false; } return platform->is_key_down[key]; } static void unichar_to_utf8(const UniChar *character, char *buffer) { CFStringRef string = CFStringCreateWithCharactersNoCopy(NULL, character, 2, kCFAllocatorNull); if (!string) { blog(LOG_ERROR, "hotkey-cocoa: Unable to create CFStringRef with UniChar character"); return; } if (!CFStringGetCString(string, buffer, OBS_COCOA_MODIFIER_SIZE, kCFStringEncodingUTF8)) { blog(LOG_ERROR, "hotkey-cocoa: Error while populating buffer with glyph %d (0x%x)", character[0], character[0]); } CFRelease(string); } void obs_key_combination_to_str(obs_key_combination_t key, struct dstr *str) { struct dstr keyString = {0}; if (key.key != OBS_KEY_NONE) { obs_key_to_str(key.key, &keyString); } dispatch_once(&onceToken, ^{ const UniChar controlCharacter[] = {kControlUnicode, 0}; const UniChar optionCharacter[] = {kOptionUnicode, 0}; const UniChar shiftCharacter[] = {kShiftUnicode, 0}; const UniChar commandCharacter[] = {kCommandUnicode, 0}; unichar_to_utf8(controlCharacter, string_control); unichar_to_utf8(optionCharacter, string_option); unichar_to_utf8(shiftCharacter, string_shift); unichar_to_utf8(commandCharacter, string_command); }); const char *modifier_control = (key.modifiers & INTERACT_CONTROL_KEY) ? string_control : ""; const char *modifier_option = (key.modifiers & INTERACT_ALT_KEY) ? 
string_option : ""; const char *modifier_shift = (key.modifiers & INTERACT_SHIFT_KEY) ? string_shift : ""; const char *modifier_command = (key.modifiers & INTERACT_COMMAND_KEY) ? string_command : ""; const char *modifier_key = keyString.len ? keyString.array : ""; dstr_printf(str, "%s%s%s%s%s", modifier_control, modifier_option, modifier_shift, modifier_command, modifier_key); dstr_free(&keyString); } void obs_key_to_str(obs_key_t key, struct dstr *str) { const UniCharCount maxLength = 16; UniChar buffer[16]; if (obs_key_to_localized_string(key, str)) { return; } int code = obs_key_to_virtual_key(key); if (key_code_to_string(code, str)) { return; } if (code == INVALID_KEY) { const char *keyName = obs_key_to_name(key); blog(LOG_ERROR, "hotkey-cocoa: Got invalid key while translating '%d' (%s)", key, keyName); dstr_copy(str, keyName); return; } struct obs_hotkeys_platform *platform = NULL; if (obs) { pthread_mutex_lock(&obs->hotkeys.mutex); platform = obs->hotkeys.platform_context; hotkeys_retain(platform); pthread_mutex_unlock(&obs->hotkeys.mutex); } if (!platform) { const char *keyName = obs_key_to_name(key); blog(LOG_ERROR, "hotkey-cocoa: Could not get hotkey platform while translating '%d' (%s)", key, keyName); dstr_copy(str, keyName); return; } UInt32 deadKeyState = 0; UniCharCount length = 0; OSStatus err = UCKeyTranslate(platform->layout, code, kUCKeyActionDown, ((alphaLock >> 8) & 0xFF), LMGetKbdType(), kUCKeyTranslateNoDeadKeysBit, &deadKeyState, maxLength, &length, buffer); if (err == noErr && length <= 0 && deadKeyState) { err = UCKeyTranslate(platform->layout, kVK_Space, kUCKeyActionDown, ((alphaLock >> 8) & 0xFF), LMGetKbdType(), kUCKeyTranslateNoDeadKeysBit, &deadKeyState, maxLength, &length, buffer); } hotkeys_release(platform); if (err != noErr) { const char *keyName = obs_key_to_name(key); blog(LOG_ERROR, "hotkey-cocoa: Error while translating '%d' (0x%x, %s) to string: %d", key, code, keyName, err); dstr_copy(str, keyName); return; } if (length == 
0) { const char *keyName = obs_key_to_name(key); blog(LOG_ERROR, "hotkey-cocoa: Got zero length string while translating '%d' (0x%x, %s) to string", key, code, keyName); dstr_copy(str, keyName); return; } CFStringRef string = CFStringCreateWithCharactersNoCopy(NULL, buffer, length, kCFAllocatorNull); if (!string) { const char *keyName = obs_key_to_name(key); blog(LOG_ERROR, "hotkey-cocoa: Could not create CFStringRef while translating '%d' (0x%x, %s) to string", key, code, keyName); dstr_copy(str, keyName); return; } if (!dstr_from_cfstring(str, string)) { const char *keyName = obs_key_to_name(key); blog(LOG_ERROR, "hotkey-cocoa: Could not translate CFStringRef to CString while translating '%d' (0x%x, %s)", key, code, keyName); CFRelease(string); dstr_copy(str, keyName); return; } CFRelease(string); return; } obs-studio-32.1.0-sources/libobs/CMakeLists.txt000644 001751 001751 00000020155 15153330235 022251 0ustar00runnerrunner000000 000000 cmake_minimum_required(VERSION 3.28...3.30) include(cmake/obs-version.cmake) if(OS_WINDOWS AND NOT OBS_PARENT_ARCHITECTURE STREQUAL CMAKE_VS_PLATFORM_NAME) include(cmake/os-windows.cmake) return() endif() find_package(SIMDe REQUIRED) find_package(Threads REQUIRED) find_package(FFmpeg 6.1 REQUIRED avformat avutil swscale swresample OPTIONAL_COMPONENTS avcodec) find_package(ZLIB REQUIRED) find_package(Uthash REQUIRED) find_package(jansson REQUIRED) if(NOT TARGET OBS::caption) add_subdirectory("${CMAKE_SOURCE_DIR}/deps/libcaption" "${CMAKE_BINARY_DIR}/deps/libcaption") endif() add_library(libobs SHARED) add_library(OBS::libobs ALIAS libobs) target_sources( libobs PRIVATE $<$:obs-hevc.c> $<$:obs-hevc.h> obs-audio-controls.c obs-audio-controls.h obs-audio.c obs-av1.c obs-av1.h obs-avc.c obs-avc.h obs-canvas.c obs-config.h obs-data.c obs-data.h obs-defs.h obs-display.c obs-encoder.c obs-encoder.h obs-ffmpeg-compat.h obs-hotkey-name-map.c obs-hotkey.c obs-hotkey.h obs-hotkeys.h obs-interaction.h obs-internal.h obs-missing-files.c 
obs-missing-files.h obs-module.c obs-module.h obs-nal.c obs-nal.h obs-output-delay.c obs-output.c obs-output.h obs-properties.c obs-properties.h obs-scene.c obs-scene.h obs-service.c obs-service.h obs-source-deinterlace.c obs-source-transition.c obs-source.c obs-source.h obs-video-gpu-encode.c obs-video.c obs-view.c obs.c obs.h obs.hpp ) target_sources( libobs PRIVATE util/array-serializer.c util/array-serializer.h util/base.c util/base.h util/bitstream.c util/bitstream.h util/bmem.c util/bmem.h util/buffered-file-serializer.c util/buffered-file-serializer.h util/c99defs.h util/cf-lexer.c util/cf-lexer.h util/cf-parser.c util/cf-parser.h util/config-file.c util/config-file.h util/crc32.c util/crc32.h util/curl/curl-helper.h util/darray.h util/deque.h util/dstr.c util/dstr.h util/file-serializer.c util/file-serializer.h util/lexer.c util/lexer.h util/pipe.c util/pipe.h util/platform.c util/platform.h util/profiler.c util/profiler.h util/profiler.hpp util/serializer.h util/source-profiler.c util/source-profiler.h util/sse-intrin.h util/task.c util/task.h util/text-lookup.c util/text-lookup.h util/threading.h util/utf8.c util/utf8.h util/uthash.h util/util.hpp util/util_uint128.h util/util_uint64.h ) target_sources( libobs PRIVATE callback/calldata.c callback/calldata.h callback/decl.c callback/decl.h callback/proc.c callback/proc.h callback/signal.c callback/signal.h ) target_sources( libobs PRIVATE media-io/audio-io.c media-io/audio-io.h media-io/audio-math.h media-io/audio-resampler-ffmpeg.c media-io/audio-resampler.h media-io/format-conversion.c media-io/format-conversion.h media-io/frame-rate.h media-io/media-io-defs.h media-io/media-remux.c media-io/media-remux.h media-io/video-fourcc.c media-io/video-frame.c media-io/video-frame.h media-io/video-io.c media-io/video-io.h media-io/video-matrices.c media-io/video-scaler-ffmpeg.c media-io/video-scaler.h ) target_sources( libobs PRIVATE graphics/axisang.c graphics/axisang.h graphics/bounds.c graphics/bounds.h 
graphics/device-exports.h graphics/effect-parser.c graphics/effect-parser.h graphics/effect.c graphics/effect.h graphics/graphics-ffmpeg.c graphics/graphics-imports.c graphics/graphics-internal.h graphics/graphics.c graphics/graphics.h graphics/half.h graphics/image-file.c graphics/image-file.h graphics/input.h graphics/libnsgif/libnsgif.c graphics/libnsgif/libnsgif.h graphics/math-defs.h graphics/math-extra.c graphics/math-extra.h graphics/matrix3.c graphics/matrix3.h graphics/matrix4.c graphics/matrix4.h graphics/plane.c graphics/plane.h graphics/quat.c graphics/quat.h graphics/shader-parser.c graphics/shader-parser.h graphics/srgb.h graphics/texture-render.c graphics/vec2.c graphics/vec2.h graphics/vec3.c graphics/vec3.h graphics/vec4.c graphics/vec4.h ) # Temporarily disables deprecation warnings while obs_data_autoselect_* is deprecated. if(CMAKE_C_COMPILER_ID STREQUAL "MSVC") set_source_files_properties(obs-data.c PROPERTIES COMPILE_OPTIONS "/wd4996") elseif( CMAKE_C_COMPILER_ID STREQUAL "GNU" OR CMAKE_C_COMPILER_ID STREQUAL "Clang" OR CMAKE_C_COMPILER_ID STREQUAL "AppleClang" ) set_source_files_properties(obs-data.c PROPERTIES COMPILE_OPTIONS "-Wno-deprecated-declarations") endif() target_compile_features(libobs PUBLIC cxx_std_17) target_compile_definitions( libobs PRIVATE IS_LIBOBS PUBLIC $:ENABLE_HEVC>> $:SHOW_SUBPROCESSES>> ) target_link_libraries( libobs PRIVATE OBS::caption OBS::libobs-version FFmpeg::avcodec FFmpeg::avformat FFmpeg::avutil FFmpeg::swscale FFmpeg::swresample jansson::jansson Uthash::Uthash ZLIB::ZLIB PUBLIC SIMDe::SIMDe Threads::Threads ) if(OS_WINDOWS) include(cmake/os-windows.cmake) elseif(OS_MACOS) include(cmake/os-macos.cmake) elseif(OS_LINUX) include(cmake/os-linux.cmake) elseif(OS_FREEBSD OR OS_OPENBSD) include(cmake/os-freebsd.cmake) endif() configure_file(obsconfig.h.in "${CMAKE_BINARY_DIR}/config/obsconfig.h" @ONLY) target_include_directories( libobs PUBLIC "$" "$" ) set( public_headers callback/calldata.h callback/decl.h 
callback/proc.h callback/signal.h graphics/axisang.h graphics/bounds.h graphics/effect-parser.h graphics/effect.h graphics/graphics.h graphics/image-file.h graphics/input.h graphics/libnsgif/libnsgif.h graphics/math-defs.h graphics/math-extra.h graphics/matrix3.h graphics/matrix4.h graphics/plane.h graphics/quat.h graphics/shader-parser.h graphics/srgb.h graphics/vec2.h graphics/vec3.h graphics/vec4.h media-io/audio-io.h media-io/audio-math.h media-io/audio-resampler.h media-io/format-conversion.h media-io/frame-rate.h media-io/media-io-defs.h media-io/media-remux.h media-io/video-frame.h media-io/video-io.h media-io/video-scaler.h obs-audio-controls.h obs-avc.h obs-config.h obs-data.h obs-defs.h obs-encoder.h obs-hotkey.h obs-hotkeys.h obs-interaction.h obs-missing-files.h obs-module.h obs-nal.h obs-nix-platform.h obs-output.h obs-properties.h obs-service.h obs-source.h obs.h obs.hpp util/array-serializer.h util/base.h util/bitstream.h util/bmem.h util/c99defs.h util/cf-lexer.h util/cf-parser.h util/config-file.h util/crc32.h util/darray.h util/deque.h util/dstr.h util/dstr.hpp util/file-serializer.h util/lexer.h util/pipe.h util/platform.h util/profiler.h util/profiler.hpp util/serializer.h util/sse-intrin.h util/task.h util/text-lookup.h util/threading-posix.h util/threading.h util/uthash.h util/util.hpp util/util_uint128.h util/util_uint64.h ) if(OS_WINDOWS) list( APPEND public_headers util/threading-windows.h util/windows/ComPtr.hpp util/windows/CoTaskMemPtr.hpp util/windows/device-enum.h util/windows/HRError.hpp util/windows/win-registry.h util/windows/win-version.h util/windows/window-helpers.h util/windows/WinHandle.hpp ) elseif(OS_MACOS) list(APPEND public_headers util/apple/cfstring-utils.h) endif() if(ENABLE_HEVC) list(APPEND public_headers obs-hevc.h) endif() set_property(TARGET libobs APPEND PROPERTY OBS_PUBLIC_HEADERS ${public_headers}) set_target_properties_obs( libobs PROPERTIES FOLDER core VERSION 0 SOVERSION "${OBS_VERSION_MAJOR}" ) 
target_export(libobs) obs-studio-32.1.0-sources/libobs/obs-av1.h000644 001751 001751 00000002325 15153330235 021131 0ustar00runnerrunner000000 000000 // SPDX-FileCopyrightText: 2023 David Rosca // // SPDX-License-Identifier: GPL-2.0-or-later #pragma once #include "util/c99defs.h" #ifdef __cplusplus extern "C" { #endif enum { OBS_OBU_SEQUENCE_HEADER = 1, OBS_OBU_TEMPORAL_DELIMITER = 2, OBS_OBU_FRAME_HEADER = 3, OBS_OBU_TILE_GROUP = 4, OBS_OBU_METADATA = 5, OBS_OBU_FRAME = 6, OBS_OBU_REDUNDANT_FRAME_HEADER = 7, OBS_OBU_TILE_LIST = 8, OBS_OBU_PADDING = 15, }; enum av1_obu_metadata_type { METADATA_TYPE_HDR_CLL = 1, METADATA_TYPE_HDR_MDCV, METADATA_TYPE_SCALABILITY, METADATA_TYPE_ITUT_T35, METADATA_TYPE_TIMECODE, METADATA_TYPE_USER_PRIVATE_6 }; /* Helpers for parsing AV1 OB units. */ EXPORT bool obs_av1_keyframe(const uint8_t *data, size_t size); EXPORT void obs_extract_av1_headers(const uint8_t *packet, size_t size, uint8_t **new_packet_data, size_t *new_packet_size, uint8_t **header_data, size_t *header_size); EXPORT void metadata_obu_itu_t35(const uint8_t *itut_t35_buffer, size_t itut_bufsize, uint8_t **out_buffer, size_t *outbuf_size); EXPORT void metadata_obu(const uint8_t *source_buffer, size_t source_bufsize, uint8_t **out_buffer, size_t *outbuf_size, uint8_t metadata_type); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-avc.h000644 001751 001751 00000003612 15153330235 021213 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "obs-nal.h" #ifdef __cplusplus extern "C" { #endif struct encoder_packet; enum { OBS_NAL_UNKNOWN = 0, OBS_NAL_SLICE = 1, OBS_NAL_SLICE_DPA = 2, OBS_NAL_SLICE_DPB = 3, OBS_NAL_SLICE_DPC = 4, OBS_NAL_SLICE_IDR = 5, OBS_NAL_SEI = 6, OBS_NAL_SPS = 7, OBS_NAL_PPS = 8, OBS_NAL_AUD = 9, OBS_NAL_FILLER = 12, }; /* Helpers for parsing AVC NAL units. */ EXPORT bool obs_avc_keyframe(const uint8_t *data, size_t size); EXPORT const uint8_t *obs_avc_find_startcode(const uint8_t *p, const uint8_t *end); EXPORT void obs_parse_avc_packet(struct encoder_packet *avc_packet, const struct encoder_packet *src); EXPORT int obs_parse_avc_packet_priority(const struct encoder_packet *packet); EXPORT size_t obs_parse_avc_header(uint8_t **header, const uint8_t *data, size_t size); EXPORT void obs_extract_avc_headers(const uint8_t *packet, size_t size, uint8_t **new_packet_data, size_t *new_packet_size, uint8_t **header_data, size_t *header_size, uint8_t **sei_data, size_t *sei_size); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-interaction.h000644 001751 001751 00000003063 15153330235 022761 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "util/c99defs.h" enum obs_interaction_flags { INTERACT_NONE = 0, INTERACT_CAPS_KEY = 1, INTERACT_SHIFT_KEY = 1 << 1, INTERACT_CONTROL_KEY = 1 << 2, INTERACT_ALT_KEY = 1 << 3, INTERACT_MOUSE_LEFT = 1 << 4, INTERACT_MOUSE_MIDDLE = 1 << 5, INTERACT_MOUSE_RIGHT = 1 << 6, INTERACT_COMMAND_KEY = 1 << 7, INTERACT_NUMLOCK_KEY = 1 << 8, INTERACT_IS_KEY_PAD = 1 << 9, INTERACT_IS_LEFT = 1 << 10, INTERACT_IS_RIGHT = 1 << 11, }; enum obs_mouse_button_type { MOUSE_LEFT, MOUSE_MIDDLE, MOUSE_RIGHT, }; struct obs_mouse_event { uint32_t modifiers; int32_t x; int32_t y; }; struct obs_key_event { uint32_t modifiers; char *text; uint32_t native_modifiers; uint32_t native_scancode; uint32_t native_vkey; }; obs-studio-32.1.0-sources/libobs/obs-nix-platform.h000644 001751 001751 00000003107 15153330235 023061 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2019 by Jason Francis This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/

#pragma once

#include "util/c99defs.h"

#ifdef __cplusplus
extern "C" {
#endif

enum obs_nix_platform_type {
	OBS_NIX_PLATFORM_INVALID,
	OBS_NIX_PLATFORM_X11_EGL,
	OBS_NIX_PLATFORM_WAYLAND,
};

/**
 * Sets the Unix platform.
 * @param platform The platform to select.
 */
EXPORT void obs_set_nix_platform(enum obs_nix_platform_type platform);

/**
 * Gets the host platform.
 */
EXPORT enum obs_nix_platform_type obs_get_nix_platform(void);

/**
 * Sets the host platform's display connection.
 * @param display The host display connection.
 */
EXPORT void obs_set_nix_platform_display(void *display);

/**
 * Gets the host platform's display connection.
 */
EXPORT void *obs_get_nix_platform_display(void);

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/obs-scene.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey
                          Philippe Groarke

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #include "util/threading.h" #include "util/util_uint64.h" #include "graphics/math-defs.h" #include "obs-scene.h" #include "obs-internal.h" const struct obs_source_info group_info; static void resize_group(obs_sceneitem_t *group, bool scene_resize); static void resize_scene(obs_scene_t *scene); static void signal_parent(obs_scene_t *parent, const char *name, calldata_t *params); static void get_ungrouped_transform(obs_sceneitem_t *group, obs_sceneitem_t *item, struct vec2 *pos, struct vec2 *scale, float *rot); static inline bool crop_enabled(const struct obs_sceneitem_crop *crop); static inline bool item_texture_enabled(const struct obs_scene_item *item); static void init_hotkeys(obs_scene_t *scene, obs_sceneitem_t *item, const char *name); typedef DARRAY(struct obs_scene_item *) obs_scene_item_ptr_array_t; /* NOTE: For proper mutex lock order (preventing mutual cross-locks), never * lock the graphics mutex inside either of the scene mutexes. * * Another thing that must be done to prevent that cross-lock (and improve * performance), is to not create/release/update sources within the scene * mutexes. * * It's okay to lock the graphics mutex before locking either of the scene * mutexes, but not after. 
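 *
 * An illustrative sketch of the safe acquisition order described above
 * (full_lock()/full_unlock() are the scene helpers defined later in this
 * file; obs_enter_graphics()/obs_leave_graphics() guard the libobs
 * graphics context):
 *
 *     obs_enter_graphics();   // graphics mutex first, if it is needed at all
 *     full_lock(scene);       //   then video_mutex, then audio_mutex
 *     // ...reorder/detach items here, but do not create, release, or
 *     // update sources while the scene mutexes are held...
 *     full_unlock(scene);
 *     obs_leave_graphics();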
 */

static const char *obs_scene_signals[] = {
	"void item_add(ptr scene, ptr item)",
	"void item_remove(ptr scene, ptr item)",
	"void reorder(ptr scene)",
	"void refresh(ptr scene)",
	"void item_visible(ptr scene, ptr item, bool visible)",
	"void item_select(ptr scene, ptr item)",
	"void item_deselect(ptr scene, ptr item)",
	"void item_transform(ptr scene, ptr item)",
	"void item_locked(ptr scene, ptr item, bool locked)",
	NULL,
};

static const struct {
	enum gs_blend_type src_color;
	enum gs_blend_type src_alpha;
	enum gs_blend_type dst_color;
	enum gs_blend_type dst_alpha;
	enum gs_blend_op_type op;
} obs_blend_mode_params[] = {
	/* clang-format off */
	// OBS_BLEND_NORMAL
	{ GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_INVSRCALPHA, GS_BLEND_INVSRCALPHA, GS_BLEND_OP_ADD, },
	// OBS_BLEND_ADDITIVE
	{ GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_OP_ADD, },
	// OBS_BLEND_SUBTRACT
	{ GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_OP_REVERSE_SUBTRACT, },
	// OBS_BLEND_SCREEN
	{ GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_INVSRCCOLOR, GS_BLEND_INVSRCALPHA, GS_BLEND_OP_ADD },
	// OBS_BLEND_MULTIPLY
	{ GS_BLEND_DSTCOLOR, GS_BLEND_DSTALPHA, GS_BLEND_INVSRCALPHA, GS_BLEND_INVSRCALPHA, GS_BLEND_OP_ADD },
	// OBS_BLEND_LIGHTEN
	{ GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_OP_MAX, },
	// OBS_BLEND_DARKEN
	{ GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_ONE, GS_BLEND_OP_MIN, },
	/* clang-format on */
};

static inline void signal_item_remove(struct obs_scene *parent, struct obs_scene_item *item)
{
	struct calldata params;
	uint8_t stack[128];

	calldata_init_fixed(&params, stack, sizeof(stack));
	calldata_set_ptr(&params, "item", item);
	signal_parent(parent, "item_remove", &params);
}

static const char *scene_getname(void *unused)
{
	UNUSED_PARAMETER(unused);
	return "Scene";
}

static const char *group_getname(void *unused)
{
	UNUSED_PARAMETER(unused);
	return "Group";
}

static void *scene_create(obs_data_t *settings, struct obs_source *source)
{
	struct obs_scene *scene =
bzalloc(sizeof(struct obs_scene)); scene->source = source; if (strcmp(source->info.id, group_info.id) == 0) { scene->is_group = true; scene->custom_size = true; scene->cx = 0; scene->cy = 0; } signal_handler_add_array(obs_source_get_signal_handler(source), obs_scene_signals); if (pthread_mutex_init_recursive(&scene->audio_mutex) != 0) { blog(LOG_ERROR, "scene_create: Couldn't initialize audio " "mutex"); goto fail; } if (pthread_mutex_init_recursive(&scene->video_mutex) != 0) { blog(LOG_ERROR, "scene_create: Couldn't initialize video " "mutex"); goto fail; } scene->absolute_coordinates = obs_data_get_bool(obs->data.private_data, "AbsoluteCoordinates"); UNUSED_PARAMETER(settings); return scene; fail: bfree(scene); return NULL; } #define audio_lock(scene) pthread_mutex_lock(&scene->audio_mutex) #define video_lock(scene) pthread_mutex_lock(&scene->video_mutex) #define audio_unlock(scene) pthread_mutex_unlock(&scene->audio_mutex) #define video_unlock(scene) pthread_mutex_unlock(&scene->video_mutex) static inline void full_lock(struct obs_scene *scene) { video_lock(scene); audio_lock(scene); } static inline void full_unlock(struct obs_scene *scene) { audio_unlock(scene); video_unlock(scene); } static void obs_sceneitem_remove_internal(obs_sceneitem_t *item); static void remove_all_items(struct obs_scene *scene) { struct obs_scene_item *item; obs_scene_item_ptr_array_t items; da_init(items); full_lock(scene); item = scene->first_item; while (item) { struct obs_scene_item *del_item = item; item = item->next; obs_sceneitem_remove_internal(del_item); da_push_back(items, &del_item); } full_unlock(scene); for (size_t i = 0; i < items.num; i++) obs_sceneitem_release(items.array[i]); da_free(items); } static void scene_destroy(void *data) { struct obs_scene *scene = data; remove_all_items(scene); pthread_mutex_destroy(&scene->video_mutex); pthread_mutex_destroy(&scene->audio_mutex); bfree(scene); } static inline bool transition_active(obs_source_t *transition) { return 
transition && (transition->transitioning_audio || transition->transitioning_video); } static void scene_enum_sources(void *data, obs_source_enum_proc_t enum_callback, void *param, bool active) { struct obs_scene *scene = data; struct obs_scene_item *item; struct obs_scene_item *next; full_lock(scene); item = scene->first_item; while (item) { next = item->next; obs_sceneitem_addref(item); if (active) { if (item->visible && transition_active(item->show_transition)) enum_callback(scene->source, item->show_transition, param); else if (!item->visible && transition_active(item->hide_transition)) enum_callback(scene->source, item->hide_transition, param); if (os_atomic_load_long(&item->active_refs) > 0) enum_callback(scene->source, item->source, param); } else { if (item->show_transition) enum_callback(scene->source, item->show_transition, param); if (item->hide_transition) enum_callback(scene->source, item->hide_transition, param); enum_callback(scene->source, item->source, param); } obs_sceneitem_release(item); item = next; } full_unlock(scene); } static void scene_enum_active_sources(void *data, obs_source_enum_proc_t enum_callback, void *param) { scene_enum_sources(data, enum_callback, param, true); } static void scene_enum_all_sources(void *data, obs_source_enum_proc_t enum_callback, void *param) { scene_enum_sources(data, enum_callback, param, false); } static inline void detach_sceneitem(struct obs_scene_item *item) { if (item->prev) item->prev->next = item->next; else item->parent->first_item = item->next; if (item->next) item->next->prev = item->prev; item->parent = NULL; } static inline void attach_sceneitem(struct obs_scene *parent, struct obs_scene_item *item, struct obs_scene_item *prev) { item->prev = prev; item->parent = parent; if (prev) { item->next = prev->next; if (prev->next) prev->next->prev = item; prev->next = item; } else { item->next = parent->first_item; if (parent->first_item) parent->first_item->prev = item; parent->first_item = item; } } void 
add_alignment(struct vec2 *v, uint32_t align, int cx, int cy) { if (align & OBS_ALIGN_RIGHT) v->x += (float)cx; else if ((align & OBS_ALIGN_LEFT) == 0) v->x += (float)(cx / 2); if (align & OBS_ALIGN_BOTTOM) v->y += (float)cy; else if ((align & OBS_ALIGN_TOP) == 0) v->y += (float)(cy / 2); } static uint32_t scene_getwidth(void *data); static uint32_t scene_getheight(void *data); static uint32_t canvas_getwidth(obs_weak_canvas_t *weak); static uint32_t canvas_getheight(obs_weak_canvas_t *weak); static inline void get_scene_dimensions(const obs_sceneitem_t *item, float *x, float *y) { obs_scene_t *parent = item->parent; if (!parent || (parent->is_group && !parent->source->canvas)) { *x = (float)obs->data.main_canvas->mix->ovi.base_width; *y = (float)obs->data.main_canvas->mix->ovi.base_height; } else if (parent->is_group) { *x = (float)canvas_getwidth(parent->source->canvas); *y = (float)canvas_getheight(parent->source->canvas); } else { *x = (float)scene_getwidth(parent); *y = (float)scene_getheight(parent); } } /* Rounds absolute pixel values to next .5. */ static inline void nudge_abs_values(struct vec2 *dst, const struct vec2 *v) { dst->x = floorf(v->x * 2.0f + 0.5f) / 2.0f; dst->y = floorf(v->y * 2.0f + 0.5f) / 2.0f; } static inline void pos_from_absolute(struct vec2 *dst, const struct vec2 *v, const obs_sceneitem_t *item) { float x, y; get_scene_dimensions(item, &x, &y); /* Scaled so that height (y) is [-1, 1]. */ dst->x = (2 * v->x - x) / y; dst->y = 2 * v->y / y - 1.0f; } static inline void pos_to_absolute(struct vec2 *dst, const struct vec2 *v, const obs_sceneitem_t *item) { float x, y; get_scene_dimensions(item, &x, &y); dst->x = (v->x * y + x) / 2; dst->y = (v->y * y + y) / 2; /* In order for group resizing to behave properly they need all the precision * they can get, so do not nudge their values. 
*/ if (!item->is_group && !(item->parent && item->parent->is_group)) nudge_abs_values(dst, dst); } static inline void size_from_absolute(struct vec2 *dst, const struct vec2 *v, const obs_sceneitem_t *item) { float x, y; get_scene_dimensions(item, &x, &y); /* The height of the canvas is from [-1, 1], so 2.0f * aspect is the * full width (depending on aspect ratio). */ dst->x = (2 * v->x) / y; dst->y = 2 * v->y / y; } static inline void size_to_absolute(struct vec2 *dst, const struct vec2 *v, const obs_sceneitem_t *item) { float x, y; get_scene_dimensions(item, &x, &y); dst->x = (v->x * y) / 2; dst->y = (v->y * y) / 2; if (!item->is_group && !(item->parent && item->parent->is_group)) nudge_abs_values(dst, dst); } /* Return item's scale value scaled from original to current canvas size. */ static inline void item_canvas_scale(struct vec2 *dst, const obs_sceneitem_t *item) { /* Groups will themselves resize so their items do not need to be * rescaled manually. Nested scenes will use the updated canvas * resolution, so they also don't need manual adjustment. */ if (item->is_group || item->is_scene) { vec2_copy(dst, &item->scale); return; } float x, y; get_scene_dimensions(item, &x, &y); float scale_factor = y / item->scale_ref.y; vec2_mulf(dst, &item->scale, scale_factor); } /* Return scale value scaled to original canvas size. 
*/ static inline void item_relative_scale(struct vec2 *dst, const struct vec2 *v, const obs_sceneitem_t *item) { if (item->is_group || item->is_scene) { vec2_copy(dst, v); return; } float x, y; get_scene_dimensions(item, &x, &y); float scale_factor = item->scale_ref.y / y; vec2_mulf(dst, v, scale_factor); } static inline bool crop_to_bounds(const struct obs_scene_item *item) { return item->crop_to_bounds && (item->bounds_type == OBS_BOUNDS_SCALE_OUTER || item->bounds_type == OBS_BOUNDS_SCALE_TO_HEIGHT || item->bounds_type == OBS_BOUNDS_SCALE_TO_WIDTH); } static void calculate_bounds_data(struct obs_scene_item *item, struct vec2 *origin, struct vec2 *scale, uint32_t *cx, uint32_t *cy) { struct vec2 bounds; if (item->absolute_coordinates) vec2_copy(&bounds, &item->bounds); else size_to_absolute(&bounds, &item->bounds, item); float width = (float)(*cx) * fabsf(scale->x); float height = (float)(*cy) * fabsf(scale->y); float item_aspect = width / height; float bounds_aspect = bounds.x / bounds.y; uint32_t bounds_type = item->bounds_type; float width_diff, height_diff; if (item->bounds_type == OBS_BOUNDS_MAX_ONLY) if (width > bounds.x || height > bounds.y) bounds_type = OBS_BOUNDS_SCALE_INNER; if (bounds_type == OBS_BOUNDS_SCALE_INNER || bounds_type == OBS_BOUNDS_SCALE_OUTER) { bool use_width = (bounds_aspect < item_aspect); float mul; if (item->bounds_type == OBS_BOUNDS_SCALE_OUTER) use_width = !use_width; mul = use_width ? 
bounds.x / width : bounds.y / height; vec2_mulf(scale, scale, mul); } else if (bounds_type == OBS_BOUNDS_SCALE_TO_WIDTH) { vec2_mulf(scale, scale, bounds.x / width); } else if (bounds_type == OBS_BOUNDS_SCALE_TO_HEIGHT) { vec2_mulf(scale, scale, bounds.y / height); } else if (bounds_type == OBS_BOUNDS_STRETCH) { scale->x = copysignf(bounds.x / (float)(*cx), scale->x); scale->y = copysignf(bounds.y / (float)(*cy), scale->y); } width = (float)(*cx) * scale->x; height = (float)(*cy) * scale->y; /* Disregards flip when calculating size diff */ width_diff = bounds.x - fabsf(width); height_diff = bounds.y - fabsf(height); *cx = (uint32_t)bounds.x; *cy = (uint32_t)bounds.y; add_alignment(origin, item->bounds_align, (int)-width_diff, (int)-height_diff); /* Set cropping if enabled and large enough size difference exists */ if (crop_to_bounds(item) && (width_diff < -0.1 || height_diff < -0.1)) { bool crop_width = width_diff < -0.1; bool crop_flipped = crop_width ? width < 0.0f : height < 0.0f; float crop_diff = crop_width ? width_diff : height_diff; float crop_scale = crop_width ? scale->x : scale->y; float crop_origin = crop_width ? origin->x : origin->y; /* Only get alignment for relevant axis */ uint32_t crop_align_mask = crop_width ? 
OBS_ALIGN_LEFT | OBS_ALIGN_RIGHT : OBS_ALIGN_TOP | OBS_ALIGN_BOTTOM; uint32_t crop_align = item->bounds_align & crop_align_mask; /* Cropping values need to scaled to input source */ float overdraw = fabsf(crop_diff / crop_scale); /* tl = top / left, br = bottom / right */ float overdraw_tl; if (crop_align & (OBS_ALIGN_TOP | OBS_ALIGN_LEFT)) overdraw_tl = 0; else if (crop_align & (OBS_ALIGN_BOTTOM | OBS_ALIGN_RIGHT)) overdraw_tl = overdraw; else overdraw_tl = overdraw / 2; float overdraw_br = overdraw - overdraw_tl; int crop_br, crop_tl; if (crop_flipped) { /* Adjust origin for flips */ if (crop_align == OBS_ALIGN_CENTER) crop_origin *= 2; else if (crop_align & (OBS_ALIGN_TOP | OBS_ALIGN_LEFT)) crop_origin -= crop_diff; /* Note that crops are swapped if the axis is flipped */ crop_br = (int)roundf(overdraw_tl); crop_tl = (int)roundf(overdraw_br); } else { crop_origin = 0; crop_br = (int)roundf(overdraw_br); crop_tl = (int)roundf(overdraw_tl); } if (crop_width) { item->bounds_crop.right = crop_br; item->bounds_crop.left = crop_tl; origin->x = crop_origin; } else { item->bounds_crop.bottom = crop_br; item->bounds_crop.top = crop_tl; origin->y = crop_origin; } } /* Makes the item stay in-place in the box if flipped */ origin->x += (width < 0.0f) ? width : 0.0f; origin->y += (height < 0.0f) ? height : 0.0f; } static inline uint32_t calc_cx(const struct obs_scene_item *item, uint32_t width) { uint32_t crop_cx = item->crop.left + item->crop.right + item->bounds_crop.left + item->bounds_crop.right; return (crop_cx > width) ? 2 : (width - crop_cx); } static inline uint32_t calc_cy(const struct obs_scene_item *item, uint32_t height) { uint32_t crop_cy = item->crop.top + item->crop.bottom + item->bounds_crop.top + item->bounds_crop.bottom; return (crop_cy > height) ? 
2 : (height - crop_cy);
}

#ifdef DEBUG_TRANSFORM
static inline void log_matrix(const struct matrix4 *mat, const char *name)
{
	blog(LOG_DEBUG,
	     "Matrix \"%s\":\n"
	     "┏ %9.4f %9.4f %9.4f %9.4f ┓\n"
	     "┃ %9.4f %9.4f %9.4f %9.4f ┃\n"
	     "┃ %9.4f %9.4f %9.4f %9.4f ┃\n"
	     "┗ %9.4f %9.4f %9.4f %9.4f ┛",
	     name, mat->x.x, mat->x.y, mat->x.z, mat->x.w,
	     mat->y.x, mat->y.y, mat->y.z, mat->y.w,
	     mat->z.x, mat->z.y, mat->z.z, mat->z.w,
	     mat->t.x, mat->t.y, mat->t.z, mat->t.w);
}
#endif

static inline void update_nested_scene_crop(struct obs_scene_item *item, uint32_t width, uint32_t height)
{
	if (!item->last_height || !item->last_width)
		return;

	/* Use last size and new size to calculate factor to adjust crop by. */
	float scale_x = (float)width / (float)item->last_width;
	float scale_y = (float)height / (float)item->last_height;

	item->crop.left = (int)((float)item->crop.left * scale_x);
	item->crop.right = (int)((float)item->crop.right * scale_x);
	item->crop.top = (int)((float)item->crop.top * scale_y);
	item->crop.bottom = (int)((float)item->crop.bottom * scale_y);
}

static void update_item_transform(struct obs_scene_item *item, bool update_tex)
{
	uint32_t width;
	uint32_t height;
	uint32_t cx;
	uint32_t cy;
	struct vec2 base_origin;
	struct vec2 origin;
	struct vec2 scale;
	struct vec2 position;
	struct calldata params;
	uint8_t stack[128];

	if (os_atomic_load_long(&item->defer_update) > 0)
		return;

	/* Reset bounds crop */
	memset(&item->bounds_crop, 0, sizeof(item->bounds_crop));

	width = obs_source_get_width(item->source);
	height = obs_source_get_height(item->source);

	/* Adjust crop on nested scenes (if any) */
	if (update_tex && item->is_scene)
		update_nested_scene_crop(item, width, height);

	cx = calc_cx(item, width);
	cy = calc_cy(item, height);

	item->last_width = width;
	item->last_height = height;

	width = cx;
	height = cy;

	vec2_zero(&base_origin);
	vec2_zero(&origin);

	if (!item->absolute_coordinates) {
		item_canvas_scale(&scale, item);
		pos_to_absolute(&position, &item->pos, item);
	} else {
		scale =
item->scale;
		position = item->pos;
	}

	/* ----------------------- */

	if (item->bounds_type != OBS_BOUNDS_NONE) {
		calculate_bounds_data(item, &origin, &scale, &cx, &cy);
	} else {
		cx = (uint32_t)((float)cx * scale.x);
		cy = (uint32_t)((float)cy * scale.y);
	}

	add_alignment(&origin, item->align, (int)cx, (int)cy);

	matrix4_identity(&item->draw_transform);
	matrix4_scale3f(&item->draw_transform, &item->draw_transform, scale.x, scale.y, 1.0f);
	matrix4_translate3f(&item->draw_transform, &item->draw_transform, -origin.x, -origin.y, 0.0f);
	matrix4_rotate_aa4f(&item->draw_transform, &item->draw_transform, 0.0f, 0.0f, 1.0f, RAD(item->rot));
	matrix4_translate3f(&item->draw_transform, &item->draw_transform, position.x, position.y, 0.0f);

#ifdef DEBUG_TRANSFORM
	blog(LOG_DEBUG, "Transform updated for \"%s\":", obs_source_get_name(item->source));
	log_matrix(&item->draw_transform, "draw_transform");
#endif

	item->output_scale = scale;

	/* ----------------------- */

	if (item->bounds_type != OBS_BOUNDS_NONE) {
		if (!item->absolute_coordinates)
			size_to_absolute(&scale, &item->bounds, item);
		else
			vec2_copy(&scale, &item->bounds);
	} else {
		scale.x *= (float)width;
		scale.y *= (float)height;
	}

	item->box_scale = scale;

	add_alignment(&base_origin, item->align, (int)scale.x, (int)scale.y);

	matrix4_identity(&item->box_transform);
	matrix4_scale3f(&item->box_transform, &item->box_transform, scale.x, scale.y, 1.0f);
	matrix4_translate3f(&item->box_transform, &item->box_transform, -base_origin.x, -base_origin.y, 0.0f);
	matrix4_rotate_aa4f(&item->box_transform, &item->box_transform, 0.0f, 0.0f, 1.0f, RAD(item->rot));
	matrix4_translate3f(&item->box_transform, &item->box_transform, position.x, position.y, 0.0f);

#ifdef DEBUG_TRANSFORM
	log_matrix(&item->box_transform, "box_transform");
#endif

	/* ----------------------- */

	calldata_init_fixed(&params, stack, sizeof(stack));
	calldata_set_ptr(&params, "item", item);
	signal_parent(item->parent, "item_transform", &params);

	if (!update_tex)
		return;
os_atomic_set_bool(&item->update_transform, false); } static inline bool source_size_changed(struct obs_scene_item *item) { uint32_t width = obs_source_get_width(item->source); uint32_t height = obs_source_get_height(item->source); return item->last_width != width || item->last_height != height; } static inline bool crop_enabled(const struct obs_sceneitem_crop *crop) { return crop->left || crop->right || crop->top || crop->bottom; } static inline bool scale_filter_enabled(const struct obs_scene_item *item) { return item->scale_filter != OBS_SCALE_DISABLE; } static inline bool default_blending_enabled(const struct obs_scene_item *item) { return item->blend_type == OBS_BLEND_NORMAL; } static inline bool item_is_scene(const struct obs_scene_item *item) { return item->source && item->source->info.type == OBS_SOURCE_TYPE_SCENE; } static inline bool item_texture_enabled(const struct obs_scene_item *item) { return crop_enabled(&item->crop) || crop_enabled(&item->bounds_crop) || scale_filter_enabled(item) || (item->blend_method == OBS_BLEND_METHOD_SRGB_OFF) || !default_blending_enabled(item) || (item_is_scene(item) && !item->is_group); } static void render_item_texture(struct obs_scene_item *item, enum gs_color_space current_space, enum gs_color_space source_space) { gs_texture_t *tex = gs_texrender_get_texture(item->item_render); if (!tex) { return; } GS_DEBUG_MARKER_BEGIN(GS_DEBUG_COLOR_ITEM_TEXTURE, "render_item_texture"); gs_effect_t *effect = obs->video.default_effect; enum obs_scale_type type = item->scale_filter; uint32_t cx = gs_texture_get_width(tex); uint32_t cy = gs_texture_get_height(tex); bool upscale = false; if (type != OBS_SCALE_DISABLE) { if (type == OBS_SCALE_POINT) { gs_eparam_t *image = gs_effect_get_param_by_name(effect, "image"); gs_effect_set_next_sampler(image, obs->video.point_sampler); } else if (!close_float(item->output_scale.x, 1.0f, EPSILON) || !close_float(item->output_scale.y, 1.0f, EPSILON)) { if (item->output_scale.x < 0.5f || 
item->output_scale.y < 0.5f) { effect = obs->video.bilinear_lowres_effect; } else if (type == OBS_SCALE_BICUBIC) { effect = obs->video.bicubic_effect; } else if (type == OBS_SCALE_LANCZOS) { effect = obs->video.lanczos_effect; } else if (type == OBS_SCALE_AREA) { effect = obs->video.area_effect; upscale = (item->output_scale.x >= 1.0f) && (item->output_scale.y >= 1.0f); } gs_eparam_t *const scale_param = gs_effect_get_param_by_name(effect, "base_dimension"); if (scale_param) { struct vec2 base_res = {(float)cx, (float)cy}; gs_effect_set_vec2(scale_param, &base_res); } gs_eparam_t *const scale_i_param = gs_effect_get_param_by_name(effect, "base_dimension_i"); if (scale_i_param) { struct vec2 base_res_i = {1.0f / (float)cx, 1.0f / (float)cy}; gs_effect_set_vec2(scale_i_param, &base_res_i); } } } float multiplier = 1.f; if (current_space == GS_CS_709_SCRGB) { switch (source_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: case GS_CS_709_EXTENDED: multiplier = obs_get_video_sdr_white_level() / 80.f; break; case GS_CS_709_SCRGB: break; } } if (source_space == GS_CS_709_SCRGB) { switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: case GS_CS_709_EXTENDED: multiplier = 80.f / obs_get_video_sdr_white_level(); break; case GS_CS_709_SCRGB: break; } } const char *tech_name = "Draw"; if (upscale) { tech_name = "DrawUpscale"; switch (source_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: if (current_space == GS_CS_709_SCRGB) tech_name = "DrawUpscaleMultiply"; break; case GS_CS_709_EXTENDED: if (current_space == GS_CS_SRGB || current_space == GS_CS_SRGB_16F) tech_name = "DrawUpscaleTonemap"; else if (current_space == GS_CS_709_SCRGB) tech_name = "DrawUpscaleMultiply"; break; case GS_CS_709_SCRGB: if (current_space == GS_CS_SRGB || current_space == GS_CS_SRGB_16F) tech_name = "DrawUpscaleMultiplyTonemap"; else if (current_space == GS_CS_709_EXTENDED) tech_name = "DrawUpscaleMultiply"; break; } } else { switch (source_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: if 
(current_space == GS_CS_709_SCRGB)
				tech_name = "DrawMultiply";
			break;
		case GS_CS_709_EXTENDED:
			if (current_space == GS_CS_SRGB || current_space == GS_CS_SRGB_16F)
				tech_name = "DrawTonemap";
			else if (current_space == GS_CS_709_SCRGB)
				tech_name = "DrawMultiply";
			break;
		case GS_CS_709_SCRGB:
			if (current_space == GS_CS_SRGB || current_space == GS_CS_SRGB_16F)
				tech_name = "DrawMultiplyTonemap";
			else if (current_space == GS_CS_709_EXTENDED)
				tech_name = "DrawMultiply";
			break;
		}
	}

	gs_eparam_t *const multiplier_param = gs_effect_get_param_by_name(effect, "multiplier");
	if (multiplier_param)
		gs_effect_set_float(multiplier_param, multiplier);

	gs_blend_state_push();
	gs_blend_function_separate(obs_blend_mode_params[item->blend_type].src_color,
				   obs_blend_mode_params[item->blend_type].dst_color,
				   obs_blend_mode_params[item->blend_type].src_alpha,
				   obs_blend_mode_params[item->blend_type].dst_alpha);
	gs_blend_op(obs_blend_mode_params[item->blend_type].op);

	while (gs_effect_loop(effect, tech_name))
		obs_source_draw(tex, 0, 0, 0, 0, 0);

	gs_blend_state_pop();

	GS_DEBUG_MARKER_END();
}

static bool are_texcoords_centered(struct matrix4 *m)
{
	static const struct matrix4 identity = {
		{1.0f, 0.0f, 0.0f, 0.0f},
		{0.0f, 1.0f, 0.0f, 0.0f},
		{0.0f, 0.0f, 1.0f, 0.0f},
		{0.0f, 0.0f, 0.0f, 1.0f},
	};
	struct matrix4 copy = identity;
	copy.t.x = floorf(m->t.x);
	copy.t.y = floorf(m->t.y);
	return memcmp(m, &copy, sizeof(*m)) == 0;
}

static inline void render_item(struct obs_scene_item *item)
{
	GS_DEBUG_MARKER_BEGIN_FORMAT(GS_DEBUG_COLOR_ITEM, "Item: %s", obs_source_get_name(item->source));

	const bool use_texrender = item_texture_enabled(item);
	obs_source_t *const source = item->source;
	const enum gs_color_space current_space = gs_get_color_space();
	const enum gs_color_space source_space = obs_source_get_color_space(source, 1, &current_space);
	const enum gs_color_format format = gs_get_format_from_space(source_space);
	if (item->item_render && (!use_texrender || (gs_texrender_get_format(item->item_render) != format))) {
gs_texrender_destroy(item->item_render); item->item_render = NULL; } if (!item->item_render && use_texrender) { item->item_render = gs_texrender_create(format, GS_ZS_NONE); } if (item->item_render) { uint32_t width = obs_source_get_width(item->source); uint32_t height = obs_source_get_height(item->source); if (!width || !height) { goto cleanup; } uint32_t cx = calc_cx(item, width); uint32_t cy = calc_cy(item, height); if (cx && cy && gs_texrender_begin_with_color_space(item->item_render, cx, cy, source_space)) { float cx_scale = (float)width / (float)cx; float cy_scale = (float)height / (float)cy; struct vec4 clear_color; vec4_zero(&clear_color); gs_clear(GS_CLEAR_COLOR, &clear_color, 0.0f, 0); gs_ortho(0.0f, (float)width, 0.0f, (float)height, -100.0f, 100.0f); gs_matrix_scale3f(cx_scale, cy_scale, 1.0f); gs_matrix_translate3f(-(float)(item->crop.left + item->bounds_crop.left), -(float)(item->crop.top + item->bounds_crop.top), 0.0f); if (item->user_visible && transition_active(item->show_transition)) { const int cx = obs_source_get_width(item->source); const int cy = obs_source_get_height(item->source); obs_transition_set_size(item->show_transition, cx, cy); obs_source_video_render(item->show_transition); } else if (!item->user_visible && transition_active(item->hide_transition)) { const int cx = obs_source_get_width(item->source); const int cy = obs_source_get_height(item->source); obs_transition_set_size(item->hide_transition, cx, cy); obs_source_video_render(item->hide_transition); } else { obs_source_set_texcoords_centered(item->source, true); obs_source_video_render(item->source); obs_source_set_texcoords_centered(item->source, false); } gs_texrender_end(item->item_render); } } const bool linear_srgb = !item->item_render || (item->blend_method != OBS_BLEND_METHOD_SRGB_OFF); const bool previous = gs_set_linear_srgb(linear_srgb); gs_matrix_push(); gs_matrix_mul(&item->draw_transform); if (item->item_render) { render_item_texture(item, current_space, 
source_space); } else if (item->user_visible && transition_active(item->show_transition)) { const int cx = obs_source_get_width(item->source); const int cy = obs_source_get_height(item->source); obs_transition_set_size(item->show_transition, cx, cy); obs_source_video_render(item->show_transition); } else if (!item->user_visible && transition_active(item->hide_transition)) { const int cx = obs_source_get_width(item->source); const int cy = obs_source_get_height(item->source); obs_transition_set_size(item->hide_transition, cx, cy); obs_source_video_render(item->hide_transition); } else { const bool centered = are_texcoords_centered(&item->draw_transform); obs_source_set_texcoords_centered(item->source, centered); obs_source_video_render(item->source); obs_source_set_texcoords_centered(item->source, false); } gs_matrix_pop(); gs_set_linear_srgb(previous); cleanup: GS_DEBUG_MARKER_END(); } static void scene_video_tick(void *data, float seconds) { struct obs_scene *scene = data; struct obs_scene_item *item; video_lock(scene); item = scene->first_item; while (item) { if (item->item_render) gs_texrender_reset(item->item_render); item = item->next; } video_unlock(scene); UNUSED_PARAMETER(seconds); } /* assumes video lock */ static void update_transforms_and_prune_sources(obs_scene_t *scene, obs_scene_item_ptr_array_t *remove_items, obs_sceneitem_t *group_sceneitem, bool scene_size_changed) { struct obs_scene_item *item = scene->first_item; bool rebuild_group = group_sceneitem && os_atomic_load_bool(&group_sceneitem->update_group_resize); while (item) { if (obs_source_removed(item->source)) { struct obs_scene_item *del_item = item; item = item->next; obs_sceneitem_remove_internal(del_item); da_push_back(*remove_items, &del_item); rebuild_group = true; continue; } if (item->is_group) { obs_scene_t *group_scene = item->source->context.data; video_lock(group_scene); update_transforms_and_prune_sources(group_scene, remove_items, item, scene_size_changed); 
video_unlock(group_scene); } if (os_atomic_load_bool(&item->update_transform) || source_size_changed(item) || scene_size_changed) { update_item_transform(item, true); rebuild_group = true; } item = item->next; } if (rebuild_group && group_sceneitem) resize_group(group_sceneitem, scene_size_changed); } static inline bool scene_size_changed(obs_scene_t *scene) { uint32_t width = scene_getwidth(scene); uint32_t height = scene_getheight(scene); if (width == scene->last_width && height == scene->last_height) return false; scene->last_width = width; scene->last_height = height; return true; } static void scene_video_render(void *data, gs_effect_t *effect) { obs_scene_item_ptr_array_t remove_items; struct obs_scene *scene = data; struct obs_scene_item *item; da_init(remove_items); video_lock(scene); if (!scene->is_group) { bool size_changed = scene_size_changed(scene); update_transforms_and_prune_sources(scene, &remove_items, NULL, size_changed); } gs_blend_state_push(); gs_reset_blend_state(); item = scene->first_item; while (item) { if (item->user_visible || transition_active(item->hide_transition)) render_item(item); item = item->next; } gs_blend_state_pop(); video_unlock(scene); for (size_t i = 0; i < remove_items.num; i++) obs_sceneitem_release(remove_items.array[i]); da_free(remove_items); UNUSED_PARAMETER(effect); } static void set_visibility(struct obs_scene_item *item, bool vis) { pthread_mutex_lock(&item->actions_mutex); da_resize(item->audio_actions, 0); if (os_atomic_load_long(&item->active_refs) > 0) { if (!vis) obs_source_remove_active_child(item->parent->source, item->source); } else if (vis) { obs_source_add_active_child(item->parent->source, item->source); } os_atomic_set_long(&item->active_refs, vis ? 
1 : 0);

	item->visible = vis;
	item->user_visible = vis;
	pthread_mutex_unlock(&item->actions_mutex);
}

static obs_sceneitem_t *obs_scene_add_internal(obs_scene_t *scene, obs_source_t *source,
					       obs_sceneitem_t *insert_after, int64_t id);

static void scene_load_item(struct obs_scene *scene, obs_data_t *item_data)
{
	const char *name = obs_data_get_string(item_data, "name");
	const char *src_uuid = obs_data_get_string(item_data, "source_uuid");
	obs_source_t *source = NULL;
	const char *scale_filter_str;
	const char *blend_method_str;
	const char *blend_str;
	struct obs_scene_item *item;
	struct calldata params;
	uint8_t stack[128];
	bool visible;
	bool lock;

	if (obs_data_get_bool(item_data, "group_item_backup"))
		return;

	if (src_uuid && strlen(src_uuid) == UUID_STR_LENGTH)
		source = obs_get_source_by_uuid(src_uuid);

	/* Fall back to name if UUID was not found or is not set. */
	if (!source)
		source = obs_get_source_by_name(name);

	if (!source) {
		blog(LOG_WARNING,
		     "[scene_load_item] Source %s not "
		     "found!",
		     name);
		return;
	}

	item = obs_scene_add_internal(scene, source, NULL, obs_data_get_int(item_data, "id"));
	if (!item) {
		blog(LOG_WARNING,
		     "[scene_load_item] Could not add source '%s' "
		     "to scene '%s'!",
		     name, obs_source_get_name(scene->source));
		obs_source_release(source);
		return;
	}

	calldata_init_fixed(&params, stack, sizeof(stack));
	calldata_set_ptr(&params, "scene", scene);
	calldata_set_ptr(&params, "item", item);
	signal_handler_signal(scene->source->context.signals, "item_add", &params);

	item->is_group = strcmp(source->info.id, group_info.id) == 0;

	obs_data_set_default_int(item_data, "align", OBS_ALIGN_TOP | OBS_ALIGN_LEFT);

	item->rot = (float)obs_data_get_double(item_data, "rot");
	item->align = (uint32_t)obs_data_get_int(item_data, "align");
	visible = obs_data_get_bool(item_data, "visible");
	lock = obs_data_get_bool(item_data, "locked");

	if (!item->absolute_coordinates && obs_data_has_user_value(item_data, "pos_rel") &&
	    obs_data_has_user_value(item_data, "scale_rel") &&
obs_data_has_user_value(item_data, "scale_ref")) { obs_data_get_vec2(item_data, "pos_rel", &item->pos); obs_data_get_vec2(item_data, "scale_rel", &item->scale); obs_data_get_vec2(item_data, "scale_ref", &item->scale_ref); } else { obs_data_get_vec2(item_data, "pos", &item->pos); if (!item->absolute_coordinates) pos_from_absolute(&item->pos, &item->pos, item); obs_data_get_vec2(item_data, "scale", &item->scale); } obs_data_release(item->private_settings); item->private_settings = obs_data_get_obj(item_data, "private_settings"); if (!item->private_settings) item->private_settings = obs_data_create(); set_visibility(item, visible); obs_sceneitem_set_locked(item, lock); item->bounds_type = (enum obs_bounds_type)obs_data_get_int(item_data, "bounds_type"); item->bounds_align = (uint32_t)obs_data_get_int(item_data, "bounds_align"); item->crop_to_bounds = obs_data_get_bool(item_data, "bounds_crop"); obs_data_get_vec2(item_data, "bounds", &item->bounds); if (!item->absolute_coordinates && obs_data_has_user_value(item_data, "bounds_rel")) { obs_data_get_vec2(item_data, "bounds_rel", &item->bounds); } else { obs_data_get_vec2(item_data, "bounds", &item->bounds); if (!item->absolute_coordinates) size_from_absolute(&item->bounds, &item->bounds, item); } item->crop.left = (uint32_t)obs_data_get_int(item_data, "crop_left"); item->crop.top = (uint32_t)obs_data_get_int(item_data, "crop_top"); item->crop.right = (uint32_t)obs_data_get_int(item_data, "crop_right"); item->crop.bottom = (uint32_t)obs_data_get_int(item_data, "crop_bottom"); scale_filter_str = obs_data_get_string(item_data, "scale_filter"); item->scale_filter = OBS_SCALE_DISABLE; if (scale_filter_str) { if (astrcmpi(scale_filter_str, "point") == 0) item->scale_filter = OBS_SCALE_POINT; else if (astrcmpi(scale_filter_str, "bilinear") == 0) item->scale_filter = OBS_SCALE_BILINEAR; else if (astrcmpi(scale_filter_str, "bicubic") == 0) item->scale_filter = OBS_SCALE_BICUBIC; else if (astrcmpi(scale_filter_str, "lanczos") == 
0) item->scale_filter = OBS_SCALE_LANCZOS; else if (astrcmpi(scale_filter_str, "area") == 0) item->scale_filter = OBS_SCALE_AREA; } blend_method_str = obs_data_get_string(item_data, "blend_method"); item->blend_method = OBS_BLEND_METHOD_DEFAULT; if (blend_method_str) { if (astrcmpi(blend_method_str, "srgb_off") == 0) item->blend_method = OBS_BLEND_METHOD_SRGB_OFF; } blend_str = obs_data_get_string(item_data, "blend_type"); item->blend_type = OBS_BLEND_NORMAL; if (blend_str) { if (astrcmpi(blend_str, "additive") == 0) item->blend_type = OBS_BLEND_ADDITIVE; else if (astrcmpi(blend_str, "subtract") == 0) item->blend_type = OBS_BLEND_SUBTRACT; else if (astrcmpi(blend_str, "screen") == 0) item->blend_type = OBS_BLEND_SCREEN; else if (astrcmpi(blend_str, "multiply") == 0) item->blend_type = OBS_BLEND_MULTIPLY; else if (astrcmpi(blend_str, "lighten") == 0) item->blend_type = OBS_BLEND_LIGHTEN; else if (astrcmpi(blend_str, "darken") == 0) item->blend_type = OBS_BLEND_DARKEN; } obs_data_t *show_data = obs_data_get_obj(item_data, "show_transition"); if (show_data) { obs_sceneitem_transition_load(item, show_data, true); obs_data_release(show_data); } obs_data_t *hide_data = obs_data_get_obj(item_data, "hide_transition"); if (hide_data) { obs_sceneitem_transition_load(item, hide_data, false); obs_data_release(hide_data); } obs_source_release(source); update_item_transform(item, false); } static void scene_load(void *data, obs_data_t *settings) { struct obs_scene *scene = data; obs_data_array_t *items = obs_data_get_array(settings, "items"); size_t count, i; remove_all_items(scene); if (obs_data_get_bool(settings, "custom_size")) { scene->cx = (uint32_t)obs_data_get_int(settings, "cx"); scene->cy = (uint32_t)obs_data_get_int(settings, "cy"); scene->custom_size = true; } if (obs_data_has_user_value(settings, "id_counter")) scene->id_counter = obs_data_get_int(settings, "id_counter"); scene->absolute_coordinates = obs_data_get_bool(obs->data.private_data, "AbsoluteCoordinates"); 
if (!items) return; count = obs_data_array_count(items); for (i = 0; i < count; i++) { obs_data_t *item_data = obs_data_array_item(items, i); scene_load_item(scene, item_data); obs_data_release(item_data); } obs_data_array_release(items); } static void scene_save(void *data, obs_data_t *settings); static void scene_save_item(obs_data_array_t *array, struct obs_scene_item *item, struct obs_scene_item *backup_group) { obs_data_t *item_data = obs_data_create(); const char *name = obs_source_get_name(item->source); const char *src_uuid = obs_source_get_uuid(item->source); const char *scale_filter; const char *blend_method; const char *blend_type; struct vec2 pos = item->pos; struct vec2 scale = item->scale; float rot = item->rot; if (backup_group) { get_ungrouped_transform(backup_group, item, &pos, &scale, &rot); } obs_data_set_string(item_data, "name", name); obs_data_set_string(item_data, "source_uuid", src_uuid); obs_data_set_bool(item_data, "visible", item->user_visible); obs_data_set_bool(item_data, "locked", item->locked); obs_data_set_double(item_data, "rot", rot); obs_data_set_vec2(item_data, "scale_ref", &item->scale_ref); obs_data_set_int(item_data, "align", (int)item->align); obs_data_set_int(item_data, "bounds_type", (int)item->bounds_type); obs_data_set_int(item_data, "bounds_align", (int)item->bounds_align); obs_data_set_bool(item_data, "bounds_crop", item->crop_to_bounds); obs_data_set_int(item_data, "crop_left", (int)item->crop.left); obs_data_set_int(item_data, "crop_top", (int)item->crop.top); obs_data_set_int(item_data, "crop_right", (int)item->crop.right); obs_data_set_int(item_data, "crop_bottom", (int)item->crop.bottom); obs_data_set_int(item_data, "id", item->id); obs_data_set_bool(item_data, "group_item_backup", !!backup_group); if (!item->absolute_coordinates) { /* For backwards compatibility, also store absolute values. 
*/ struct vec2 tmp_abs; pos_to_absolute(&tmp_abs, &pos, item); obs_data_set_vec2(item_data, "pos", &tmp_abs); obs_data_set_vec2(item_data, "pos_rel", &pos); item_canvas_scale(&tmp_abs, item); obs_data_set_vec2(item_data, "scale", &tmp_abs); obs_data_set_vec2(item_data, "scale_rel", &scale); size_to_absolute(&tmp_abs, &item->bounds, item); obs_data_set_vec2(item_data, "bounds", &tmp_abs); obs_data_set_vec2(item_data, "bounds_rel", &item->bounds); } else { obs_data_set_vec2(item_data, "pos", &pos); obs_data_set_vec2(item_data, "scale", &scale); obs_data_set_vec2(item_data, "bounds", &item->bounds); } if (item->is_group) { obs_scene_t *group_scene = item->source->context.data; obs_sceneitem_t *group_item; /* save group items as part of main scene, but ignored. * causes an automatic ungroup if scene collection file * is loaded in previous versions. */ full_lock(group_scene); group_item = group_scene->first_item; while (group_item) { scene_save_item(array, group_item, item); group_item = group_item->next; } full_unlock(group_scene); } if (item->scale_filter == OBS_SCALE_POINT) scale_filter = "point"; else if (item->scale_filter == OBS_SCALE_BILINEAR) scale_filter = "bilinear"; else if (item->scale_filter == OBS_SCALE_BICUBIC) scale_filter = "bicubic"; else if (item->scale_filter == OBS_SCALE_LANCZOS) scale_filter = "lanczos"; else if (item->scale_filter == OBS_SCALE_AREA) scale_filter = "area"; else scale_filter = "disable"; obs_data_set_string(item_data, "scale_filter", scale_filter); if (item->blend_method == OBS_BLEND_METHOD_SRGB_OFF) blend_method = "srgb_off"; else blend_method = "default"; obs_data_set_string(item_data, "blend_method", blend_method); if (item->blend_type == OBS_BLEND_NORMAL) blend_type = "normal"; else if (item->blend_type == OBS_BLEND_ADDITIVE) blend_type = "additive"; else if (item->blend_type == OBS_BLEND_SUBTRACT) blend_type = "subtract"; else if (item->blend_type == OBS_BLEND_SCREEN) blend_type = "screen"; else if (item->blend_type == 
OBS_BLEND_MULTIPLY) blend_type = "multiply"; else if (item->blend_type == OBS_BLEND_LIGHTEN) blend_type = "lighten"; else if (item->blend_type == OBS_BLEND_DARKEN) blend_type = "darken"; else blend_type = "normal"; obs_data_set_string(item_data, "blend_type", blend_type); obs_data_t *show_data = obs_sceneitem_transition_save(item, true); obs_data_set_obj(item_data, "show_transition", show_data); obs_data_release(show_data); obs_data_t *hide_data = obs_sceneitem_transition_save(item, false); obs_data_set_obj(item_data, "hide_transition", hide_data); obs_data_release(hide_data); obs_data_set_obj(item_data, "private_settings", item->private_settings); obs_data_array_push_back(array, item_data); obs_data_release(item_data); } static void scene_save(void *data, obs_data_t *settings) { struct obs_scene *scene = data; obs_data_array_t *array = obs_data_array_create(); struct obs_scene_item *item; full_lock(scene); item = scene->first_item; while (item) { scene_save_item(array, item, NULL); item = item->next; } obs_data_set_int(settings, "id_counter", scene->id_counter); obs_data_set_bool(settings, "custom_size", scene->custom_size); if (scene->custom_size) { obs_data_set_int(settings, "cx", scene->cx); obs_data_set_int(settings, "cy", scene->cy); } full_unlock(scene); obs_data_set_array(settings, "items", array); obs_data_array_release(array); } static uint32_t canvas_getwidth(obs_weak_canvas_t *weak) { uint32_t width = 0; obs_canvas_t *canvas = obs_weak_canvas_get_canvas(weak); if (canvas) { width = canvas->ovi.base_width; obs_canvas_release(canvas); } return width; } static uint32_t canvas_getheight(obs_weak_canvas_t *weak) { uint32_t height = 0; obs_canvas_t *canvas = obs_weak_canvas_get_canvas(weak); if (canvas) { height = canvas->ovi.base_height; obs_canvas_release(canvas); } return height; } static uint32_t scene_getwidth(void *data) { obs_scene_t *scene = data; if (scene->custom_size) return scene->cx; if (scene->source->canvas) return 
canvas_getwidth(scene->source->canvas); if (obs->data.main_canvas->mix) return obs->data.main_canvas->mix->ovi.base_width; return 0; } static uint32_t scene_getheight(void *data) { obs_scene_t *scene = data; if (scene->custom_size) return scene->cy; if (scene->source->canvas) return canvas_getheight(scene->source->canvas); if (obs->data.main_canvas->mix) return obs->data.main_canvas->mix->ovi.base_height; return 0; } static void apply_scene_item_audio_actions(struct obs_scene_item *item, float *buf, uint64_t ts, size_t sample_rate) { bool cur_visible = item->visible; uint64_t frame_num = 0; size_t deref_count = 0; pthread_mutex_lock(&item->actions_mutex); for (size_t i = 0; i < item->audio_actions.num; i++) { struct item_action action = item->audio_actions.array[i]; uint64_t timestamp = action.timestamp; uint64_t new_frame_num; if (timestamp < ts) timestamp = ts; new_frame_num = util_mul_div64(timestamp - ts, sample_rate, 1000000000ULL); if (ts && new_frame_num >= AUDIO_OUTPUT_FRAMES) break; da_erase(item->audio_actions, i--); item->visible = action.visible; if (!item->visible) deref_count++; if (buf && new_frame_num > frame_num) { for (; frame_num < new_frame_num; frame_num++) buf[frame_num] = cur_visible ? 1.0f : 0.0f; } cur_visible = item->visible; } if (buf) { for (; frame_num < AUDIO_OUTPUT_FRAMES; frame_num++) buf[frame_num] = cur_visible ? 
1.0f : 0.0f; } pthread_mutex_unlock(&item->actions_mutex); while (deref_count--) { if (os_atomic_dec_long(&item->active_refs) == 0) { obs_source_remove_active_child(item->parent->source, item->source); } } } static bool apply_scene_item_volume(struct obs_scene_item *item, float *buf, uint64_t ts, size_t sample_rate) { bool actions_pending; struct item_action action; pthread_mutex_lock(&item->actions_mutex); actions_pending = item->audio_actions.num > 0; if (actions_pending) action = item->audio_actions.array[0]; pthread_mutex_unlock(&item->actions_mutex); if (actions_pending) { uint64_t duration = util_mul_div64(AUDIO_OUTPUT_FRAMES, 1000000000ULL, sample_rate); if (!ts || action.timestamp < (ts + duration)) { apply_scene_item_audio_actions(item, buf, ts, sample_rate); return true; } } return false; } static void process_all_audio_actions(struct obs_scene_item *item, size_t sample_rate) { while (apply_scene_item_volume(item, NULL, 0, sample_rate)) ; } static void mix_audio_with_buf(float *p_out, float *p_in, float *buf_in, size_t pos, size_t count) { register float *out = p_out + pos; register float *buf = buf_in; register float *in = p_in; register float *end = in + count; while (in < end) *(out++) += *(in++) * *(buf++); } static inline void mix_audio(float *p_out, float *p_in, size_t pos, size_t count) { register float *out = p_out + pos; register float *in = p_in; register float *end = in + count; while (in < end) *(out++) += *(in++); } static bool scene_audio_render(void *data, uint64_t *ts_out, struct obs_source_audio_mix *audio_output, uint32_t mixers, size_t channels, size_t sample_rate) { uint64_t timestamp = 0; float buf[AUDIO_OUTPUT_FRAMES]; struct obs_source_audio_mix child_audio; struct obs_scene *scene = data; struct obs_scene_item *item; audio_lock(scene); item = scene->first_item; while (item) { struct obs_source *source; if (item->visible && transition_active(item->show_transition)) source = item->show_transition; else if (!item->visible && 
transition_active(item->hide_transition)) source = item->hide_transition; else source = item->source; if (!obs_source_audio_pending(source) && (item->visible || transition_active(item->hide_transition))) { uint64_t source_ts = obs_source_get_audio_timestamp(source); if (source_ts && (!timestamp || source_ts < timestamp)) timestamp = source_ts; } item = item->next; } if (!timestamp) { /* just process all pending audio actions if no audio playing, * otherwise audio actions will just never be processed */ item = scene->first_item; while (item) { process_all_audio_actions(item, sample_rate); item = item->next; } audio_unlock(scene); return false; } item = scene->first_item; while (item) { uint64_t source_ts; size_t pos; size_t count; bool apply_buf; struct obs_source *source; if (item->visible && transition_active(item->show_transition)) source = item->show_transition; else if (!item->visible && transition_active(item->hide_transition)) source = item->hide_transition; else source = item->source; apply_buf = apply_scene_item_volume(item, buf, timestamp, sample_rate); if (obs_source_audio_pending(source)) { item = item->next; continue; } source_ts = obs_source_get_audio_timestamp(source); if (!source_ts) { item = item->next; continue; } pos = (size_t)ns_to_audio_frames(sample_rate, source_ts - timestamp); if (pos >= AUDIO_OUTPUT_FRAMES) { item = item->next; continue; } count = AUDIO_OUTPUT_FRAMES - pos; if (!apply_buf && !item->visible && !transition_active(item->hide_transition)) { item = item->next; continue; } obs_source_get_audio_mix(source, &child_audio); if (!source->audio_is_duplicated) { for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) { if ((mixers & (1 << mix)) == 0) continue; for (size_t ch = 0; ch < channels; ch++) { float *out = audio_output->output[mix].data[ch]; float *in = child_audio.output[mix].data[ch]; if (apply_buf) mix_audio_with_buf(out, in, buf, pos, count); else mix_audio(out, in, pos, count); } } } item = item->next; } *ts_out = timestamp; 
	audio_unlock(scene);
	return true;
}

enum gs_color_space scene_video_get_color_space(void *data, size_t count, const enum gs_color_space *preferred_spaces)
{
	UNUSED_PARAMETER(data);
	UNUSED_PARAMETER(count);
	UNUSED_PARAMETER(preferred_spaces);

	enum gs_color_space space = GS_CS_SRGB;
	struct obs_video_info ovi;
	if (obs_get_video_info(&ovi)) {
		if (ovi.colorspace == VIDEO_CS_2100_PQ || ovi.colorspace == VIDEO_CS_2100_HLG)
			space = GS_CS_709_EXTENDED;
	}

	return space;
}

const struct obs_source_info scene_info = {
	.id = "scene",
	.type = OBS_SOURCE_TYPE_SCENE,
	.output_flags = OBS_SOURCE_VIDEO | OBS_SOURCE_CUSTOM_DRAW | OBS_SOURCE_COMPOSITE |
			OBS_SOURCE_DO_NOT_DUPLICATE | OBS_SOURCE_SRGB | OBS_SOURCE_REQUIRES_CANVAS,
	.get_name = scene_getname,
	.create = scene_create,
	.destroy = scene_destroy,
	.video_tick = scene_video_tick,
	.video_render = scene_video_render,
	.audio_render = scene_audio_render,
	.get_width = scene_getwidth,
	.get_height = scene_getheight,
	.load = scene_load,
	.save = scene_save,
	.enum_active_sources = scene_enum_active_sources,
	.enum_all_sources = scene_enum_all_sources,
	.video_get_color_space = scene_video_get_color_space,
};

const struct obs_source_info group_info = {
	.id = "group",
	.type = OBS_SOURCE_TYPE_SCENE,
	.output_flags = OBS_SOURCE_VIDEO | OBS_SOURCE_CUSTOM_DRAW | OBS_SOURCE_COMPOSITE | OBS_SOURCE_SRGB |
			OBS_SOURCE_REQUIRES_CANVAS,
	.get_name = group_getname,
	.create = scene_create,
	.destroy = scene_destroy,
	.video_tick = scene_video_tick,
	.video_render = scene_video_render,
	.audio_render = scene_audio_render,
	.get_width = scene_getwidth,
	.get_height = scene_getheight,
	.load = scene_load,
	.save = scene_save,
	.enum_active_sources = scene_enum_active_sources,
	.enum_all_sources = scene_enum_all_sources,
	.video_get_color_space = scene_video_get_color_space,
};

static inline obs_scene_t *create_id(obs_canvas_t *canvas, const char *id, const char *name)
{
	struct obs_source *source = obs_source_create_canvas(canvas, id, name, NULL, NULL);
	return source->context.data;
}

static inline obs_scene_t *create_private_id(const char *id, const char *name)
{
	struct obs_source *source = obs_source_create_private(id, name, NULL);
	return source->context.data;
}

obs_scene_t *obs_scene_create(const char *name)
{
	return create_id(obs->data.main_canvas, "scene", name);
}

obs_scene_t *obs_scene_create_private(const char *name)
{
	return create_private_id("scene", name);
}

static obs_source_t *get_child_at_idx(obs_scene_t *scene, size_t idx)
{
	struct obs_scene_item *item = scene->first_item;

	while (item && idx--)
		item = item->next;
	return item ? item->source : NULL;
}

static inline obs_source_t *dup_child(obs_scene_item_ptr_array_t *old_items, size_t idx, obs_scene_t *new_scene,
				      bool private)
{
	obs_source_t *source;

	source = old_items->array[idx]->source;

	/* if the old item is referenced more than once in the old scene,
	 * make sure they're referenced similarly in the new scene to reduce
	 * load times */
	for (size_t i = 0; i < idx; i++) {
		struct obs_scene_item *item = old_items->array[i];
		if (item->source == source) {
			source = get_child_at_idx(new_scene, i);
			return obs_source_get_ref(source);
		}
	}

	return obs_source_duplicate(source, private ?
	obs_source_get_name(source) : NULL, private);
}

static inline obs_source_t *new_ref(obs_source_t *source)
{
	return obs_source_get_ref(source);
}

static inline void duplicate_item_data(struct obs_scene_item *dst, struct obs_scene_item *src,
				       bool defer_texture_update, bool duplicate_hotkeys)
{
	struct obs_scene *dst_scene = dst->parent;

	if (!src->user_visible)
		set_visibility(dst, false);

	dst->selected = src->selected;
	dst->pos = src->pos;
	dst->rot = src->rot;
	dst->scale = src->scale;
	dst->align = src->align;
	dst->last_width = src->last_width;
	dst->last_height = src->last_height;
	dst->output_scale = src->output_scale;
	dst->scale_filter = src->scale_filter;
	dst->blend_method = src->blend_method;
	dst->blend_type = src->blend_type;
	dst->box_transform = src->box_transform;
	dst->box_scale = src->box_scale;
	dst->draw_transform = src->draw_transform;
	dst->bounds_type = src->bounds_type;
	dst->bounds_align = src->bounds_align;
	dst->bounds = src->bounds;
	dst->crop_to_bounds = src->crop_to_bounds;
	dst->bounds_crop = src->bounds_crop;

	if (src->show_transition) {
		obs_source_t *transition =
			obs_source_duplicate(src->show_transition, obs_source_get_name(src->show_transition), true);
		obs_sceneitem_set_transition(dst, true, transition);
		obs_source_release(transition);
	}
	if (src->hide_transition) {
		obs_source_t *transition =
			obs_source_duplicate(src->hide_transition, obs_source_get_name(src->hide_transition), true);
		obs_sceneitem_set_transition(dst, false, transition);
		obs_source_release(transition);
	}
	dst->show_transition_duration = src->show_transition_duration;
	dst->hide_transition_duration = src->hide_transition_duration;

	if (duplicate_hotkeys && !dst_scene->source->context.private) {
		struct dstr show = {0};
		struct dstr hide = {0};
		obs_data_array_t *data0 = NULL;
		obs_data_array_t *data1 = NULL;

		obs_hotkey_pair_save(src->toggle_visibility, &data0, &data1);
		obs_hotkey_pair_load(dst->toggle_visibility, data0, data1);

		/* Fix scene item ID */
		dstr_printf(&show, "libobs.show_scene_item.%" PRIi64, dst->id);
		dstr_printf(&hide, "libobs.hide_scene_item.%" PRIi64, dst->id);

		obs_hotkey_pair_set_names(dst->toggle_visibility, show.array, hide.array);

		obs_data_array_release(data0);
		obs_data_array_release(data1);
		dstr_free(&show);
		dstr_free(&hide);
	}

	obs_sceneitem_set_crop(dst, &src->crop);
	obs_sceneitem_set_locked(dst, src->locked);

	if (defer_texture_update) {
		os_atomic_set_bool(&dst->update_transform, true);
	}

	obs_data_apply(dst->private_settings, src->private_settings);
}

obs_scene_t *obs_scene_duplicate(obs_scene_t *scene, const char *name, enum obs_scene_duplicate_type type)
{
	bool make_unique = type == OBS_SCENE_DUP_COPY || type == OBS_SCENE_DUP_PRIVATE_COPY;
	bool make_private = type == OBS_SCENE_DUP_PRIVATE_REFS || type == OBS_SCENE_DUP_PRIVATE_COPY;
	obs_scene_item_ptr_array_t items;
	struct obs_scene *new_scene;
	struct obs_scene_item *item;
	struct obs_source *source;

	da_init(items);

	if (!obs_ptr_valid(scene, "obs_scene_duplicate"))
		return NULL;

	/* --------------------------------- */

	full_lock(scene);

	item = scene->first_item;
	while (item) {
		da_push_back(items, &item);
		obs_sceneitem_addref(item);
		item = item->next;
	}

	full_unlock(scene);

	/* --------------------------------- */

	obs_canvas_t *canvas = obs_weak_canvas_get_canvas(scene->source->canvas);
	new_scene = make_private ?
		create_private_id(scene->source->info.id, name) : create_id(canvas, scene->source->info.id, name);
	obs_canvas_release(canvas);

	new_scene->is_group = scene->is_group;
	new_scene->custom_size = scene->custom_size;
	new_scene->cx = scene->cx;
	new_scene->cy = scene->cy;
	new_scene->absolute_coordinates = scene->absolute_coordinates;
	new_scene->last_width = scene->last_width;
	new_scene->last_height = scene->last_height;

	obs_source_copy_filters(new_scene->source, scene->source);

	obs_data_apply(new_scene->source->private_settings, scene->source->private_settings);

	/* never duplicate sub-items for groups */
	if (scene->is_group)
		make_unique = false;

	for (size_t i = 0; i < items.num; i++) {
		item = items.array[i];
		source = make_unique ? dup_child(&items, i, new_scene, make_private) : new_ref(item->source);

		if (source) {
			struct obs_scene_item *new_item = obs_scene_add(new_scene, source);

			if (!new_item) {
				obs_source_release(source);
				continue;
			}

			duplicate_item_data(new_item, item, false, false);

			obs_source_release(source);
		}
	}

	for (size_t i = 0; i < items.num; i++)
		obs_sceneitem_release(items.array[i]);

	if (new_scene->is_group)
		resize_scene(new_scene);

	da_free(items);

	return new_scene;
}

static inline void obs_scene_addref(obs_scene_t *scene)
{
	if (scene)
		obs_source_addref(scene->source);
}

obs_scene_t *obs_scene_get_ref(obs_scene_t *scene)
{
	if (!scene)
		return NULL;
	if (obs_source_get_ref(scene->source) != NULL)
		return scene;
	return NULL;
}

void obs_scene_release(obs_scene_t *scene)
{
	if (scene)
		obs_source_release(scene->source);
}

obs_source_t *obs_scene_get_source(const obs_scene_t *scene)
{
	return scene ? scene->source : NULL;
}

obs_scene_t *obs_scene_from_source(const obs_source_t *source)
{
	if (!source || strcmp(source->info.id, scene_info.id) != 0)
		return NULL;

	return source->context.data;
}

obs_scene_t *obs_group_from_source(const obs_source_t *source)
{
	if (!source || strcmp(source->info.id, group_info.id) != 0)
		return NULL;

	return source->context.data;
}

obs_sceneitem_t *obs_scene_find_source(obs_scene_t *scene, const char *name)
{
	struct obs_scene_item *item;

	if (!scene)
		return NULL;

	full_lock(scene);

	item = scene->first_item;
	while (item) {
		if (strcmp(item->source->context.name, name) == 0)
			break;

		item = item->next;
	}

	full_unlock(scene);

	return item;
}

obs_sceneitem_t *obs_scene_find_source_recursive(obs_scene_t *scene, const char *name)
{
	struct obs_scene_item *item;

	if (!scene)
		return NULL;

	full_lock(scene);

	item = scene->first_item;
	while (item) {
		if (strcmp(item->source->context.name, name) == 0)
			break;

		if (item->is_group) {
			obs_scene_t *group = item->source->context.data;
			obs_sceneitem_t *child = obs_scene_find_source(group, name);
			if (child) {
				item = child;
				break;
			}
		}

		item = item->next;
	}

	full_unlock(scene);

	return item;
}

obs_sceneitem_t *obs_scene_find_sceneitem_by_id(obs_scene_t *scene, int64_t id)
{
	struct obs_scene_item *item;

	if (!scene)
		return NULL;

	full_lock(scene);

	item = scene->first_item;
	while (item) {
		if (item->id == id)
			break;

		item = item->next;
	}

	full_unlock(scene);

	return item;
}

void obs_scene_enum_items(obs_scene_t *scene, bool (*callback)(obs_scene_t *, obs_sceneitem_t *, void *), void *param)
{
	struct obs_scene_item *item;

	if (!scene || !callback)
		return;

	full_lock(scene);

	item = scene->first_item;
	while (item) {
		struct obs_scene_item *next = item->next;

		obs_sceneitem_addref(item);

		if (!callback(scene, item, param)) {
			obs_sceneitem_release(item);
			break;
		}

		obs_sceneitem_release(item);

		item = next;
	}

	full_unlock(scene);
}

static obs_sceneitem_t *sceneitem_get_ref(obs_sceneitem_t *si)
{
	long owners = os_atomic_load_long(&si->ref);
	while (owners > 0) {
		if (os_atomic_compare_exchange_long(&si->ref, &owners, owners + 1)) {
			return si;
		}
	}

	return NULL;
}

static bool hotkey_show_sceneitem(void *data, obs_hotkey_pair_id id, obs_hotkey_t *hotkey, bool pressed)
{
	UNUSED_PARAMETER(id);
	UNUSED_PARAMETER(hotkey);

	obs_sceneitem_t *si = sceneitem_get_ref(data);
	if (pressed && si && !si->user_visible) {
		obs_sceneitem_set_visible(si, true);
		obs_sceneitem_release(si);
		return true;
	}

	obs_sceneitem_release(si);
	return false;
}

static bool hotkey_hide_sceneitem(void *data, obs_hotkey_pair_id id, obs_hotkey_t *hotkey, bool pressed)
{
	UNUSED_PARAMETER(id);
	UNUSED_PARAMETER(hotkey);

	obs_sceneitem_t *si = sceneitem_get_ref(data);
	if (pressed && si && si->user_visible) {
		obs_sceneitem_set_visible(si, false);
		obs_sceneitem_release(si);
		return true;
	}

	obs_sceneitem_release(si);
	return false;
}

static void init_hotkeys(obs_scene_t *scene, obs_sceneitem_t *item, const char *name)
{
	struct obs_data_array *hotkey_array;
	obs_data_t *hotkey_data = scene->source->context.hotkey_data;
	struct dstr show = {0};
	struct dstr hide = {0};
	struct dstr legacy = {0};
	struct dstr show_desc = {0};
	struct dstr hide_desc = {0};

	dstr_printf(&show, "libobs.show_scene_item.%" PRIi64, item->id);
	dstr_printf(&hide, "libobs.hide_scene_item.%" PRIi64, item->id);

	dstr_copy(&show_desc, obs->hotkeys.sceneitem_show);
	dstr_replace(&show_desc, "%1", name);
	dstr_copy(&hide_desc, obs->hotkeys.sceneitem_hide);
	dstr_replace(&hide_desc, "%1", name);

	/* Check if legacy keys exists, migrate if necessary */
	dstr_printf(&legacy, "libobs.show_scene_item.%s", name);
	hotkey_array = obs_data_get_array(hotkey_data, legacy.array);
	if (hotkey_array) {
		obs_data_set_array(hotkey_data, show.array, hotkey_array);
		obs_data_array_release(hotkey_array);
	}

	dstr_printf(&legacy, "libobs.hide_scene_item.%s", name);
	hotkey_array = obs_data_get_array(hotkey_data, legacy.array);
	if (hotkey_array) {
		obs_data_set_array(hotkey_data, hide.array, hotkey_array);
		obs_data_array_release(hotkey_array);
	}

	item->toggle_visibility = obs_hotkey_pair_register_source(scene->source, show.array, show_desc.array,
								  hide.array, hide_desc.array, hotkey_show_sceneitem,
								  hotkey_hide_sceneitem, item, item);

	dstr_free(&show);
	dstr_free(&hide);
	dstr_free(&legacy);
	dstr_free(&show_desc);
	dstr_free(&hide_desc);
}

static void sceneitem_rename_hotkey(const obs_sceneitem_t *scene_item, const char *new_name)
{
	struct dstr show_desc = {0};
	struct dstr hide_desc = {0};

	dstr_copy(&show_desc, obs->hotkeys.sceneitem_show);
	dstr_replace(&show_desc, "%1", new_name);
	dstr_copy(&hide_desc, obs->hotkeys.sceneitem_hide);
	dstr_replace(&hide_desc, "%1", new_name);

	obs_hotkey_pair_set_descriptions(scene_item->toggle_visibility, show_desc.array, hide_desc.array);

	dstr_free(&show_desc);
	dstr_free(&hide_desc);
}

static void sceneitem_renamed(void *param, calldata_t *data)
{
	obs_sceneitem_t *scene_item = param;
	const char *name = calldata_string(data, "new_name");

	sceneitem_rename_hotkey(scene_item, name);
}

static inline bool source_has_audio(obs_source_t *source)
{
	return (source->info.output_flags & (OBS_SOURCE_AUDIO | OBS_SOURCE_COMPOSITE)) != 0;
}

static obs_sceneitem_t *obs_scene_add_internal(obs_scene_t *scene, obs_source_t *source,
					       obs_sceneitem_t *insert_after, int64_t id)
{
	struct obs_scene_item *last;
	struct obs_scene_item *item;
	pthread_mutex_t mutex;
	struct item_action action = {.visible = true, .timestamp = os_gettime_ns()};

	if (!scene)
		return NULL;

	source = obs_source_get_ref(source);
	if (!source) {
		blog(LOG_ERROR, "Tried to add a NULL source to a scene");
		return NULL;
	}

	if (source->removed) {
		blog(LOG_WARNING, "Tried to add a removed source to a scene");
		goto release_source_and_fail;
	}

	if (pthread_mutex_init(&mutex, NULL) != 0) {
		blog(LOG_WARNING, "Failed to create scene item mutex");
		goto release_source_and_fail;
	}

	if (!obs_source_add_active_child(scene->source, source)) {
		blog(LOG_WARNING, "Failed to add source to scene due to "
				  "infinite source recursion");
		pthread_mutex_destroy(&mutex);
		goto release_source_and_fail;
	}

	item = bzalloc(sizeof(struct obs_scene_item));
	item->source = source;
	item->id = id ? id : ++scene->id_counter;
	item->parent = scene;
	item->ref = 1;
	item->align = OBS_ALIGN_TOP | OBS_ALIGN_LEFT;
	item->actions_mutex = mutex;
	item->user_visible = true;
	item->locked = false;
	item->is_group = strcmp(source->info.id, group_info.id) == 0;
	item->is_scene = strcmp(source->info.id, scene_info.id) == 0;
	item->private_settings = obs_data_create();
	item->toggle_visibility = OBS_INVALID_HOTKEY_PAIR_ID;
	item->absolute_coordinates = scene->absolute_coordinates;
	os_atomic_set_long(&item->active_refs, 1);
	vec2_set(&item->scale, 1.0f, 1.0f);
	get_scene_dimensions(item, &item->scale_ref.x, &item->scale_ref.y);
	matrix4_identity(&item->draw_transform);
	matrix4_identity(&item->box_transform);

	/* Ensure initial position is still top-left corner in relative mode. */
	if (!item->absolute_coordinates)
		pos_from_absolute(&item->pos, &item->pos, item);

	if (source_has_audio(source)) {
		item->visible = false;
		da_push_back(item->audio_actions, &action);
	} else {
		item->visible = true;
	}

	full_lock(scene);

	if (insert_after) {
		obs_sceneitem_t *next = insert_after->next;
		if (next)
			next->prev = item;
		item->next = insert_after->next;
		item->prev = insert_after;
		insert_after->next = item;
	} else {
		last = scene->first_item;
		if (!last) {
			scene->first_item = item;
		} else {
			while (last->next)
				last = last->next;

			last->next = item;
			item->prev = last;
		}
	}

	full_unlock(scene);

	if (!scene->source->context.private)
		init_hotkeys(scene, item, obs_source_get_name(source));

	signal_handler_connect(obs_source_get_signal_handler(source), "rename", sceneitem_renamed, item);

	return item;

release_source_and_fail:
	obs_source_release(source);
	return NULL;
}

obs_sceneitem_t *obs_scene_add(obs_scene_t *scene, obs_source_t *source)
{
	obs_sceneitem_t *item = obs_scene_add_internal(scene, source, NULL, 0);
	struct calldata params;
	uint8_t stack[128];

	if (!item)
		return NULL;

	calldata_init_fixed(&params, stack, sizeof(stack));
	calldata_set_ptr(&params, "scene", scene);
	calldata_set_ptr(&params, "item", item);
	signal_handler_signal(scene->source->context.signals, "item_add", &params);
	return item;
}

static void obs_sceneitem_destroy(obs_sceneitem_t *item)
{
	if (item) {
		if (item->item_render) {
			obs_enter_graphics();
			gs_texrender_destroy(item->item_render);
			obs_leave_graphics();
		}
		obs_data_release(item->private_settings);
		obs_hotkey_pair_unregister(item->toggle_visibility);
		pthread_mutex_destroy(&item->actions_mutex);
		signal_handler_disconnect(obs_source_get_signal_handler(item->source), "rename", sceneitem_renamed,
					  item);
		if (item->show_transition)
			obs_source_release(item->show_transition);
		if (item->hide_transition)
			obs_source_release(item->hide_transition);
		if (item->source)
			obs_source_release(item->source);
		da_free(item->audio_actions);
		bfree(item);
	}
}

void obs_sceneitem_addref(obs_sceneitem_t *item)
{
	if (item)
		os_atomic_inc_long(&item->ref);
}

void obs_sceneitem_release(obs_sceneitem_t *item)
{
	if (!item)
		return;

	if (os_atomic_dec_long(&item->ref) == 0)
		obs_sceneitem_destroy(item);
}

static void obs_sceneitem_remove_internal(obs_sceneitem_t *item)
{
	obs_scene_t *parent = item->parent;

	item->removed = true;

	obs_sceneitem_select(item, false);
	set_visibility(item, false);

	detach_sceneitem(item);
	signal_item_remove(parent, item);

	obs_sceneitem_set_transition(item, true, NULL);
	obs_sceneitem_set_transition(item, false, NULL);
}

void obs_sceneitem_remove(obs_sceneitem_t *item)
{
	obs_scene_t *scene;

	if (!item || item->removed)
		return;

	scene = item->parent;

	assert(scene != NULL);
	assert(scene->source != NULL);

	full_lock(scene);
	obs_sceneitem_remove_internal(item);
	full_unlock(scene);

	obs_sceneitem_release(item);
}

void obs_sceneitem_save(obs_sceneitem_t *item, obs_data_array_t *arr)
{
	scene_save_item(arr, item, NULL);
}

void sceneitem_restore(obs_data_t *data, void *vp)
{
	obs_scene_t *scene = (obs_scene_t *)vp;
	scene_load_item(scene, data);
}
void obs_sceneitems_add(obs_scene_t *scene, obs_data_array_t *data)
{
	obs_data_array_enum(data, sceneitem_restore, scene);
}

obs_scene_t *obs_sceneitem_get_scene(const obs_sceneitem_t *item)
{
	return item ? item->parent : NULL;
}

obs_source_t *obs_sceneitem_get_source(const obs_sceneitem_t *item)
{
	return item ? item->source : NULL;
}

static void signal_parent(obs_scene_t *parent, const char *command, calldata_t *params)
{
	calldata_set_ptr(params, "scene", parent);
	signal_handler_signal(parent->source->context.signals, command, params);
}

struct passthrough {
	obs_data_array_t *ids;
	obs_data_array_t *scenes_and_groups;
	bool all_items;
};

bool save_transform_states(obs_scene_t *scene, obs_sceneitem_t *item, void *vp_pass)
{
	struct passthrough *pass = (struct passthrough *)vp_pass;
	if (obs_sceneitem_selected(item) || pass->all_items) {
		obs_data_t *temp = obs_data_create();
		obs_data_array_t *item_ids = (obs_data_array_t *)pass->ids;

		struct obs_transform_info info;
		struct obs_sceneitem_crop crop;
		obs_sceneitem_get_info2(item, &info);
		obs_sceneitem_get_crop(item, &crop);

		struct vec2 pos = info.pos;
		struct vec2 scale = info.scale;
		float rot = info.rot;
		uint32_t alignment = info.alignment;
		uint32_t bounds_type = info.bounds_type;
		uint32_t bounds_alignment = info.bounds_alignment;
		bool crop_to_bounds = info.crop_to_bounds;
		struct vec2 bounds = info.bounds;

		obs_data_set_int(temp, "id", obs_sceneitem_get_id(item));
		obs_data_set_vec2(temp, "pos", &pos);
		obs_data_set_vec2(temp, "scale", &scale);
		obs_data_set_double(temp, "rot", rot);
		obs_data_set_int(temp, "alignment", alignment);
		obs_data_set_int(temp, "bounds_type", bounds_type);
		obs_data_set_vec2(temp, "bounds", &bounds);
		obs_data_set_int(temp, "bounds_alignment", bounds_alignment);
		obs_data_set_bool(temp, "crop_to_bounds", crop_to_bounds);
		obs_data_set_int(temp, "top", crop.top);
		obs_data_set_int(temp, "bottom", crop.bottom);
		obs_data_set_int(temp, "left", crop.left);
		obs_data_set_int(temp, "right", crop.right);
		obs_data_array_push_back(item_ids, temp);
		obs_data_release(temp);
	}

	obs_source_t *item_source = obs_sceneitem_get_source(item);
	if (obs_source_is_group(item_source)) {
		obs_data_t *temp = obs_data_create();
		obs_data_array_t *nids = obs_data_array_create();

		obs_data_set_string(temp, "scene_name", obs_source_get_name(item_source));
		obs_data_set_string(temp, "scene_uuid", obs_source_get_uuid(item_source));
		obs_data_set_bool(temp, "is_group", true);
		obs_data_set_string(temp, "group_parent", obs_source_get_uuid(obs_scene_get_source(scene)));

		struct passthrough npass = {nids, pass->scenes_and_groups, pass->all_items};
		obs_sceneitem_group_enum_items(item, save_transform_states, (void *)&npass);

		obs_data_set_array(temp, "items", nids);
		obs_data_array_push_back(pass->scenes_and_groups, temp);

		obs_data_release(temp);
		obs_data_array_release(nids);
	}

	return true;
}

obs_data_t *obs_scene_save_transform_states(obs_scene_t *scene, bool all_items)
{
	obs_data_t *wrapper = obs_data_create();
	obs_data_array_t *scenes_and_groups = obs_data_array_create();
	obs_data_array_t *item_ids = obs_data_array_create();

	struct passthrough pass = {item_ids, scenes_and_groups, all_items};

	obs_data_t *temp = obs_data_create();
	obs_data_set_string(temp, "scene_name", obs_source_get_name(obs_scene_get_source(scene)));
	obs_data_set_string(temp, "scene_uuid", obs_source_get_uuid(obs_scene_get_source(scene)));
	obs_data_set_bool(temp, "is_group", false);

	obs_scene_enum_items(scene, save_transform_states, (void *)&pass);

	obs_data_set_array(temp, "items", item_ids);
	obs_data_array_push_back(scenes_and_groups, temp);
	obs_data_set_array(wrapper, "scenes_and_groups", scenes_and_groups);

	obs_data_array_release(item_ids);
	obs_data_array_release(scenes_and_groups);
	obs_data_release(temp);

	return wrapper;
}

void load_transform_states(obs_data_t *temp, void *vp_scene)
{
	obs_scene_t *scene = (obs_scene_t *)vp_scene;
	int64_t id = obs_data_get_int(temp, "id");
	obs_sceneitem_t *item =
		obs_scene_find_sceneitem_by_id(scene, id);

	struct obs_transform_info info;
	struct obs_sceneitem_crop crop;
	obs_data_get_vec2(temp, "pos", &info.pos);
	obs_data_get_vec2(temp, "scale", &info.scale);
	info.rot = (float)obs_data_get_double(temp, "rot");
	info.alignment = (uint32_t)obs_data_get_int(temp, "alignment");
	info.bounds_type = (enum obs_bounds_type)obs_data_get_int(temp, "bounds_type");
	info.bounds_alignment = (uint32_t)obs_data_get_int(temp, "bounds_alignment");
	obs_data_get_vec2(temp, "bounds", &info.bounds);
	info.crop_to_bounds = obs_data_get_bool(temp, "crop_to_bounds");

	crop.top = (int)obs_data_get_int(temp, "top");
	crop.bottom = (int)obs_data_get_int(temp, "bottom");
	crop.left = (int)obs_data_get_int(temp, "left");
	crop.right = (int)obs_data_get_int(temp, "right");

	obs_sceneitem_defer_update_begin(item);
	obs_sceneitem_set_info2(item, &info);
	obs_sceneitem_set_crop(item, &crop);
	obs_sceneitem_defer_update_end(item);
}

void iterate_scenes_and_groups_transform_states(obs_data_t *data, void *vp)
{
	obs_data_array_t *items = obs_data_get_array(data, "items");
	obs_source_t *scene_source = obs_get_source_by_uuid(obs_data_get_string(data, "scene_uuid"));
	obs_scene_t *scene = obs_scene_from_source(scene_source);

	if (obs_data_get_bool(data, "is_group")) {
		obs_source_t *parent_source = obs_get_source_by_uuid(obs_data_get_string(data, "group_parent"));
		obs_scene_t *parent = obs_scene_from_source(parent_source);
		obs_sceneitem_t *group = obs_scene_get_group(parent, obs_data_get_string(data, "scene_name"));
		scene = obs_sceneitem_group_get_scene(group);
		obs_source_release(parent_source);
	}

	obs_data_array_enum(items, load_transform_states, (void *)scene);

	UNUSED_PARAMETER(vp);
	obs_data_array_release(items);
	obs_source_release(scene_source);
}

void obs_scene_load_transform_states(const char *data)
{
	obs_data_t *dat = obs_data_create_from_json(data);
	obs_data_array_t *scenes_and_groups = obs_data_get_array(dat, "scenes_and_groups");

	obs_data_array_enum(scenes_and_groups, iterate_scenes_and_groups_transform_states, NULL);

	obs_data_release(dat);
	obs_data_array_release(scenes_and_groups);
}

void obs_sceneitem_select(obs_sceneitem_t *item, bool select)
{
	struct calldata params;
	uint8_t stack[128];
	const char *command = select ? "item_select" : "item_deselect";

	if (!item || item->selected == select || !item->parent)
		return;

	item->selected = select;

	calldata_init_fixed(&params, stack, sizeof(stack));
	calldata_set_ptr(&params, "item", item);

	signal_parent(item->parent, command, &params);
}

bool obs_sceneitem_selected(const obs_sceneitem_t *item)
{
	return item ? item->selected : false;
}

#define do_update_transform(item)                                          \
	do {                                                               \
		if (!item->parent || item->parent->is_group)               \
			os_atomic_set_bool(&item->update_transform, true); \
		else                                                       \
			update_item_transform(item, false);                \
	} while (false)

void obs_sceneitem_set_pos(obs_sceneitem_t *item, const struct vec2 *pos)
{
	if (item) {
		if (!item->absolute_coordinates)
			pos_from_absolute(&item->pos, pos, item);
		else
			vec2_copy(&item->pos, pos);
		do_update_transform(item);
	}
}

void obs_sceneitem_set_rot(obs_sceneitem_t *item, float rot)
{
	if (item) {
		item->rot = rot;
		do_update_transform(item);
	}
}

void obs_sceneitem_set_scale(obs_sceneitem_t *item, const struct vec2 *scale)
{
	if (item) {
		if (!item->absolute_coordinates)
			item_relative_scale(&item->scale, scale, item);
		else
			vec2_copy(&item->scale, scale);
		do_update_transform(item);
	}
}

void obs_sceneitem_set_alignment(obs_sceneitem_t *item, uint32_t alignment)
{
	if (item) {
		item->align = alignment;
		do_update_transform(item);
	}
}

static inline void signal_reorder(struct obs_scene_item *item)
{
	const char *command = NULL;
	struct calldata params;
	uint8_t stack[128];

	command = "reorder";

	calldata_init_fixed(&params, stack, sizeof(stack));
	signal_parent(item->parent, command, &params);
}

static inline void signal_refresh(obs_scene_t *scene)
{
	const char *command = NULL;
	struct calldata params;
	uint8_t stack[128];

	command = "refresh";

	calldata_init_fixed(&params, stack, sizeof(stack));
	signal_parent(scene, command, &params);
}

void obs_sceneitem_set_order(obs_sceneitem_t *item, enum obs_order_movement movement)
{
	if (!item)
		return;

	struct obs_scene_item *next, *prev;
	struct obs_scene *scene = obs_scene_get_ref(item->parent);

	if (!scene)
		return;

	full_lock(scene);

	next = item->next;
	prev = item->prev;

	detach_sceneitem(item);

	if (movement == OBS_ORDER_MOVE_DOWN) {
		attach_sceneitem(scene, item, prev ? prev->prev : NULL);

	} else if (movement == OBS_ORDER_MOVE_UP) {
		attach_sceneitem(scene, item, next ? next : prev);

	} else if (movement == OBS_ORDER_MOVE_TOP) {
		struct obs_scene_item *last = next;
		if (!last) {
			last = prev;
		} else {
			while (last->next)
				last = last->next;
		}

		attach_sceneitem(scene, item, last);

	} else if (movement == OBS_ORDER_MOVE_BOTTOM) {
		attach_sceneitem(scene, item, NULL);
	}

	full_unlock(scene);

	signal_reorder(item);
	obs_scene_release(scene);
}

int obs_sceneitem_get_order_position(obs_sceneitem_t *item)
{
	struct obs_scene *scene = item->parent;
	struct obs_scene_item *next = scene->first_item;

	full_lock(scene);

	int index = 0;
	while (next && next != item) {
		next = next->next;
		++index;
	}

	full_unlock(scene);

	return index;
}

void obs_sceneitem_set_order_position(obs_sceneitem_t *item, int position)
{
	if (!item)
		return;

	struct obs_scene *scene = obs_scene_get_ref(item->parent);

	if (!scene)
		return;

	full_lock(scene);

	detach_sceneitem(item);

	if (!scene->first_item || position == 0) {
		attach_sceneitem(scene, item, NULL);
	} else {
		struct obs_scene_item *next = scene->first_item;
		for (int i = position; i > 1; --i) {
			if (next->next == NULL)
				break;
			next = next->next;
		}
		attach_sceneitem(scene, item, next);
	}

	full_unlock(scene);

	signal_reorder(item);
	obs_scene_release(scene);
}

void obs_sceneitem_set_bounds_type(obs_sceneitem_t *item, enum obs_bounds_type type)
{
	if (item) {
		item->bounds_type = type;
		do_update_transform(item);
	}
}

void obs_sceneitem_set_bounds_alignment(obs_sceneitem_t *item, uint32_t alignment)
{
	if (item) {
		item->bounds_align = alignment;
		do_update_transform(item);
	}
}

void obs_sceneitem_set_bounds_crop(obs_sceneitem_t *item, bool crop)
{
	if (item) {
		item->crop_to_bounds = crop;
		do_update_transform(item);
	}
}

void obs_sceneitem_set_bounds(obs_sceneitem_t *item, const struct vec2 *bounds)
{
	if (item) {
		if (!item->absolute_coordinates)
			size_from_absolute(&item->bounds, bounds, item);
		else
			vec2_copy(&item->bounds, bounds);
		do_update_transform(item);
	}
}

void obs_sceneitem_get_pos(const obs_sceneitem_t *item, struct vec2 *pos)
{
	if (!item)
		return;

	if (!item->absolute_coordinates)
		pos_to_absolute(pos, &item->pos, item);
	else
		vec2_copy(pos, &item->pos);
}

float obs_sceneitem_get_rot(const obs_sceneitem_t *item)
{
	return item ? item->rot : 0.0f;
}

void obs_sceneitem_get_scale(const obs_sceneitem_t *item, struct vec2 *scale)
{
	if (!item)
		return;

	if (!item->absolute_coordinates)
		item_canvas_scale(scale, item);
	else
		vec2_copy(scale, &item->scale);
}

uint32_t obs_sceneitem_get_alignment(const obs_sceneitem_t *item)
{
	return item ? item->align : 0;
}

enum obs_bounds_type obs_sceneitem_get_bounds_type(const obs_sceneitem_t *item)
{
	return item ? item->bounds_type : OBS_BOUNDS_NONE;
}

uint32_t obs_sceneitem_get_bounds_alignment(const obs_sceneitem_t *item)
{
	return item ? item->bounds_align : 0;
}

bool obs_sceneitem_get_bounds_crop(const obs_sceneitem_t *item)
{
	return item ?
	item->crop_to_bounds : false;
}

void obs_sceneitem_get_bounds(const obs_sceneitem_t *item, struct vec2 *bounds)
{
	if (!item)
		return;

	if (!item->absolute_coordinates)
		size_to_absolute(bounds, &item->bounds, item);
	else
		vec2_copy(bounds, &item->bounds);
}

static inline void scene_item_get_info_internal(const obs_sceneitem_t *item, struct obs_transform_info *info)
{
	if (!item->absolute_coordinates) {
		pos_to_absolute(&info->pos, &item->pos, item);
		item_canvas_scale(&info->scale, item);
		size_to_absolute(&info->bounds, &item->bounds, item);
	} else {
		info->pos = item->pos;
		info->scale = item->scale;
		info->bounds = item->bounds;
	}
	info->rot = item->rot;
	info->alignment = item->align;
	info->bounds_type = item->bounds_type;
	info->bounds_alignment = item->bounds_align;
}

void obs_sceneitem_get_info2(const obs_sceneitem_t *item, struct obs_transform_info *info)
{
	if (item && info) {
		scene_item_get_info_internal(item, info);
		info->crop_to_bounds = item->crop_to_bounds;
	}
}

static inline void scene_item_set_info_internal(obs_sceneitem_t *item, const struct obs_transform_info *info)
{
	if (!item->absolute_coordinates) {
		pos_from_absolute(&item->pos, &info->pos, item);
		size_from_absolute(&item->bounds, &info->bounds, item);
		if (isfinite(info->scale.x) && isfinite(info->scale.y)) {
			item_relative_scale(&item->scale, &info->scale, item);
		}
	} else {
		item->pos = info->pos;
		item->bounds = info->bounds;
		if (isfinite(info->scale.x) && isfinite(info->scale.y)) {
			item->scale = info->scale;
		}
	}
	item->rot = info->rot;
	item->align = info->alignment;
	item->bounds_type = info->bounds_type;
	item->bounds_align = info->bounds_alignment;
}

void obs_sceneitem_set_info2(obs_sceneitem_t *item, const struct obs_transform_info *info)
{
	if (item && info) {
		scene_item_set_info_internal(item, info);
		item->crop_to_bounds = info->crop_to_bounds;
		do_update_transform(item);
	}
}

void obs_sceneitem_get_draw_transform(const obs_sceneitem_t *item, struct matrix4 *transform)
{
	if (item)
		matrix4_copy(transform, &item->draw_transform);
}

void obs_sceneitem_get_box_transform(const obs_sceneitem_t *item, struct matrix4 *transform)
{
	if (item)
		matrix4_copy(transform, &item->box_transform);
}

void obs_sceneitem_get_box_scale(const obs_sceneitem_t *item, struct vec2 *scale)
{
	if (item)
		*scale = item->box_scale;
}

bool obs_sceneitem_visible(const obs_sceneitem_t *item)
{
	return item ? item->user_visible : false;
}

static bool group_item_transition(obs_scene_t *scene, obs_sceneitem_t *item, void *param)
{
	if (!param || !item)
		return true;

	const bool visible = *(bool *)param;
	if (obs_sceneitem_visible(item))
		obs_sceneitem_do_transition(item, visible);

	UNUSED_PARAMETER(scene);
	return true;
}

bool obs_sceneitem_set_visible(obs_sceneitem_t *item, bool visible)
{
	struct calldata cd;
	uint8_t stack[256];
	struct item_action action = {.visible = visible, .timestamp = os_gettime_ns()};

	if (!item)
		return false;

	if (item->user_visible == visible)
		return false;

	if (!item->parent)
		return false;

	obs_sceneitem_do_transition(item, visible);
	if (obs_sceneitem_is_group(item))
		obs_sceneitem_group_enum_items(item, group_item_transition, &visible);

	item->user_visible = visible;

	if (visible) {
		if (os_atomic_inc_long(&item->active_refs) == 1) {
			if (!obs_source_add_active_child(item->parent->source, item->source)) {
				os_atomic_dec_long(&item->active_refs);
				return false;
			}
		}
	}

	calldata_init_fixed(&cd, stack, sizeof(stack));
	calldata_set_ptr(&cd, "item", item);
	calldata_set_bool(&cd, "visible", visible);

	signal_parent(item->parent, "item_visible", &cd);

	if (source_has_audio(item->source)) {
		pthread_mutex_lock(&item->actions_mutex);
		da_push_back(item->audio_actions, &action);
		pthread_mutex_unlock(&item->actions_mutex);
	} else {
		set_visibility(item, visible);
	}

	return true;
}

bool obs_sceneitem_locked(const obs_sceneitem_t *item)
{
	return item ?
item->locked : false; } bool obs_sceneitem_set_locked(obs_sceneitem_t *item, bool lock) { struct calldata cd; uint8_t stack[256]; if (!item) return false; if (item->locked == lock) return false; if (!item->parent) return false; item->locked = lock; calldata_init_fixed(&cd, stack, sizeof(stack)); calldata_set_ptr(&cd, "item", item); calldata_set_bool(&cd, "locked", lock); signal_parent(item->parent, "item_locked", &cd); return true; } static bool sceneitems_match(obs_scene_t *scene, obs_sceneitem_t *const *items, size_t size, bool *order_matches) { obs_sceneitem_t *item = scene->first_item; size_t count = 0; while (item) { bool found = false; for (size_t i = 0; i < size; i++) { if (items[i] != item) continue; if (count != i) *order_matches = false; found = true; break; } if (!found) return false; item = item->next; count += 1; } return count == size; } bool obs_scene_reorder_items(obs_scene_t *scene, obs_sceneitem_t *const *item_order, size_t item_order_size) { if (!scene || !item_order_size) return false; scene = obs_scene_get_ref(scene); if (!scene) return false; full_lock(scene); bool order_matches = true; if (!sceneitems_match(scene, item_order, item_order_size, &order_matches) || order_matches) { full_unlock(scene); obs_scene_release(scene); return false; } scene->first_item = item_order[0]; obs_sceneitem_t *prev = NULL; for (size_t i = 0; i < item_order_size; i++) { item_order[i]->prev = prev; item_order[i]->next = NULL; if (prev) prev->next = item_order[i]; prev = item_order[i]; } full_unlock(scene); signal_reorder(scene->first_item); obs_scene_release(scene); return true; } void obs_scene_atomic_update(obs_scene_t *scene, obs_scene_atomic_update_func func, void *data) { if (!scene) return; scene = obs_scene_get_ref(scene); if (!scene) return; full_lock(scene); func(data, scene); full_unlock(scene); obs_scene_release(scene); } static inline bool crop_equal(const struct obs_sceneitem_crop *crop1, const struct obs_sceneitem_crop *crop2) { return crop1->left == 
crop2->left && crop1->right == crop2->right && crop1->top == crop2->top && crop1->bottom == crop2->bottom; } void obs_sceneitem_set_crop(obs_sceneitem_t *item, const struct obs_sceneitem_crop *crop) { if (!obs_ptr_valid(item, "obs_sceneitem_set_crop")) return; if (!obs_ptr_valid(crop, "obs_sceneitem_set_crop")) return; if (crop_equal(crop, &item->crop)) return; memcpy(&item->crop, crop, sizeof(*crop)); if (item->crop.left < 0) item->crop.left = 0; if (item->crop.right < 0) item->crop.right = 0; if (item->crop.top < 0) item->crop.top = 0; if (item->crop.bottom < 0) item->crop.bottom = 0; os_atomic_set_bool(&item->update_transform, true); } void obs_sceneitem_get_crop(const obs_sceneitem_t *item, struct obs_sceneitem_crop *crop) { if (!obs_ptr_valid(item, "obs_sceneitem_get_crop")) return; if (!obs_ptr_valid(crop, "obs_sceneitem_get_crop")) return; memcpy(crop, &item->crop, sizeof(*crop)); } void obs_sceneitem_set_scale_filter(obs_sceneitem_t *item, enum obs_scale_type filter) { if (!obs_ptr_valid(item, "obs_sceneitem_set_scale_filter")) return; item->scale_filter = filter; os_atomic_set_bool(&item->update_transform, true); } enum obs_scale_type obs_sceneitem_get_scale_filter(obs_sceneitem_t *item) { return obs_ptr_valid(item, "obs_sceneitem_get_scale_filter") ? item->scale_filter : OBS_SCALE_DISABLE; } void obs_sceneitem_set_blending_method(obs_sceneitem_t *item, enum obs_blending_method method) { if (!obs_ptr_valid(item, "obs_sceneitem_set_blending_method")) return; item->blend_method = method; } enum obs_blending_method obs_sceneitem_get_blending_method(obs_sceneitem_t *item) { return obs_ptr_valid(item, "obs_sceneitem_get_blending_method") ? 
item->blend_method : OBS_BLEND_METHOD_DEFAULT; } void obs_sceneitem_set_blending_mode(obs_sceneitem_t *item, enum obs_blending_type type) { if (!obs_ptr_valid(item, "obs_sceneitem_set_blending_mode")) return; item->blend_type = type; os_atomic_set_bool(&item->update_transform, true); } enum obs_blending_type obs_sceneitem_get_blending_mode(obs_sceneitem_t *item) { return obs_ptr_valid(item, "obs_sceneitem_get_blending_mode") ? item->blend_type : OBS_BLEND_NORMAL; } void obs_sceneitem_defer_update_begin(obs_sceneitem_t *item) { if (!obs_ptr_valid(item, "obs_sceneitem_defer_update_begin")) return; os_atomic_inc_long(&item->defer_update); } void obs_sceneitem_defer_update_end(obs_sceneitem_t *item) { if (!obs_ptr_valid(item, "obs_sceneitem_defer_update_end")) return; if (os_atomic_dec_long(&item->defer_update) == 0) do_update_transform(item); } void obs_sceneitem_defer_group_resize_begin(obs_sceneitem_t *item) { if (!obs_ptr_valid(item, "obs_sceneitem_defer_group_resize_begin")) return; os_atomic_inc_long(&item->defer_group_resize); } void obs_sceneitem_defer_group_resize_end(obs_sceneitem_t *item) { if (!obs_ptr_valid(item, "obs_sceneitem_defer_group_resize_end")) return; if (os_atomic_dec_long(&item->defer_group_resize) == 0) os_atomic_set_bool(&item->update_group_resize, true); } int64_t obs_sceneitem_get_id(const obs_sceneitem_t *item) { if (!obs_ptr_valid(item, "obs_sceneitem_get_id")) return 0; return item->id; } void obs_sceneitem_set_id(obs_sceneitem_t *item, int64_t id) { item->id = id; } obs_data_t *obs_sceneitem_get_private_settings(obs_sceneitem_t *item) { if (!obs_ptr_valid(item, "obs_sceneitem_get_private_settings")) return NULL; obs_data_addref(item->private_settings); return item->private_settings; } static inline void transform_val(struct vec2 *v2, struct matrix4 *transform) { struct vec3 v; vec3_set(&v, v2->x, v2->y, 0.0f); vec3_transform(&v, &v, transform); v2->x = v.x; v2->y = v.y; } static void get_ungrouped_transform(obs_sceneitem_t *group, 
obs_sceneitem_t *item, struct vec2 *pos, struct vec2 *scale, float *rot) { struct matrix4 transform; struct matrix4 mat; struct vec4 x_base; struct vec2 scale_abs, pos_abs; if (item->absolute_coordinates) { vec2_copy(&scale_abs, scale); vec2_copy(&pos_abs, pos); } else { size_to_absolute(&scale_abs, scale, item); pos_to_absolute(&pos_abs, pos, item); } vec4_set(&x_base, 1.0f, 0.0f, 0.0f, 0.0f); matrix4_copy(&transform, &group->draw_transform); transform_val(&pos_abs, &transform); vec4_set(&transform.t, 0.0f, 0.0f, 0.0f, 1.0f); vec4_set(&mat.x, scale_abs.x, 0.0f, 0.0f, 0.0f); vec4_set(&mat.y, 0.0f, scale_abs.y, 0.0f, 0.0f); vec4_set(&mat.z, 0.0f, 0.0f, 1.0f, 0.0f); vec4_set(&mat.t, 0.0f, 0.0f, 0.0f, 1.0f); matrix4_mul(&mat, &mat, &transform); scale_abs.x = vec4_len(&mat.x) * (scale_abs.x > 0.0f ? 1.0f : -1.0f); scale_abs.y = vec4_len(&mat.y) * (scale_abs.y > 0.0f ? 1.0f : -1.0f); if (item->absolute_coordinates) { vec2_copy(scale, &scale_abs); vec2_copy(pos, &pos_abs); } else { size_from_absolute(scale, &scale_abs, item); pos_from_absolute(pos, &pos_abs, item); } *rot += group->rot; } static void remove_group_transform(obs_sceneitem_t *group, obs_sceneitem_t *item) { obs_scene_t *parent = item->parent; if (!parent || !group) return; get_ungrouped_transform(group, item, &item->pos, &item->scale, &item->rot); update_item_transform(item, false); } static void apply_group_transform(obs_sceneitem_t *item, obs_sceneitem_t *group) { struct matrix4 transform; struct matrix4 mat; struct vec4 x_base; struct vec2 scale_abs, pos_abs; if (item->absolute_coordinates) { vec2_copy(&scale_abs, &item->scale); vec2_copy(&pos_abs, &item->pos); } else { size_to_absolute(&scale_abs, &item->scale, item); pos_to_absolute(&pos_abs, &item->pos, item); } vec4_set(&x_base, 1.0f, 0.0f, 0.0f, 0.0f); matrix4_inv(&transform, &group->draw_transform); transform_val(&pos_abs, &transform); vec4_set(&transform.t, 0.0f, 0.0f, 0.0f, 1.0f); vec4_set(&mat.x, scale_abs.x, 0.0f, 0.0f, 0.0f); vec4_set(&mat.y, 
0.0f, scale_abs.y, 0.0f, 0.0f); vec4_set(&mat.z, 0.0f, 0.0f, 1.0f, 0.0f); vec4_set(&mat.t, 0.0f, 0.0f, 0.0f, 1.0f); matrix4_mul(&mat, &mat, &transform); scale_abs.x = vec4_len(&mat.x) * (scale_abs.x > 0.0f ? 1.0f : -1.0f); scale_abs.y = vec4_len(&mat.y) * (scale_abs.y > 0.0f ? 1.0f : -1.0f); if (item->absolute_coordinates) { vec2_copy(&item->scale, &scale_abs); vec2_copy(&item->pos, &pos_abs); } else { size_from_absolute(&item->scale, &scale_abs, item); pos_from_absolute(&item->pos, &pos_abs, item); } item->rot -= group->rot; update_item_transform(item, false); } static bool resize_scene_base(obs_scene_t *scene, struct vec2 *minv, struct vec2 *maxv, struct vec2 *scale) { vec2_set(minv, M_INFINITE, M_INFINITE); vec2_set(maxv, -M_INFINITE, -M_INFINITE); obs_sceneitem_t *item = scene->first_item; if (!item) { scene->cx = 0; scene->cy = 0; return false; } while (item) { #define get_min_max(x_val, y_val) \ do { \ struct vec3 v; \ vec3_set(&v, x_val, y_val, 0.0f); \ vec3_transform(&v, &v, &item->box_transform); \ if (v.x < minv->x) \ minv->x = v.x; \ if (v.y < minv->y) \ minv->y = v.y; \ if (v.x > maxv->x) \ maxv->x = v.x; \ if (v.y > maxv->y) \ maxv->y = v.y; \ } while (false) get_min_max(0.0f, 0.0f); get_min_max(1.0f, 0.0f); get_min_max(0.0f, 1.0f); get_min_max(1.0f, 1.0f); #undef get_min_max item = item->next; } item = scene->first_item; if (item) { struct vec2 minv_rel; if (!item->absolute_coordinates) size_from_absolute(&minv_rel, minv, item); else vec2_copy(&minv_rel, minv); while (item) { vec2_sub(&item->pos, &item->pos, &minv_rel); update_item_transform(item, false); item = item->next; } } vec2_sub(scale, maxv, minv); scene->cx = (uint32_t)ceilf(scale->x); scene->cy = (uint32_t)ceilf(scale->y); return true; } static void resize_scene(obs_scene_t *scene) { struct vec2 minv; struct vec2 maxv; struct vec2 scale; resize_scene_base(scene, &minv, &maxv, &scale); } /* assumes group scene and parent scene is locked */ static void resize_group(obs_sceneitem_t *group, bool 
scene_resize) { obs_scene_t *scene = group->source->context.data; struct vec2 minv; struct vec2 maxv; struct vec2 scale; if (os_atomic_load_long(&group->defer_group_resize) > 0) return; if (!resize_scene_base(scene, &minv, &maxv, &scale)) return; if (group->bounds_type == OBS_BOUNDS_NONE && !scene_resize) { struct vec2 new_pos; if ((group->align & OBS_ALIGN_LEFT) != 0) new_pos.x = minv.x; else if ((group->align & OBS_ALIGN_RIGHT) != 0) new_pos.x = maxv.x; else new_pos.x = (maxv.x - minv.x) * 0.5f + minv.x; if ((group->align & OBS_ALIGN_TOP) != 0) new_pos.y = minv.y; else if ((group->align & OBS_ALIGN_BOTTOM) != 0) new_pos.y = maxv.y; else new_pos.y = (maxv.y - minv.y) * 0.5f + minv.y; transform_val(&new_pos, &group->draw_transform); if (!group->absolute_coordinates) pos_from_absolute(&new_pos, &new_pos, group); vec2_copy(&group->pos, &new_pos); } os_atomic_set_bool(&group->update_group_resize, false); update_item_transform(group, false); } obs_sceneitem_t *obs_scene_add_group(obs_scene_t *scene, const char *name) { return obs_scene_insert_group(scene, name, NULL, 0); } obs_sceneitem_t *obs_scene_add_group2(obs_scene_t *scene, const char *name, bool signal) { return obs_scene_insert_group2(scene, name, NULL, 0, signal); } obs_sceneitem_t *obs_scene_insert_group(obs_scene_t *scene, const char *name, obs_sceneitem_t **items, size_t count) { if (!scene) return NULL; /* don't allow groups or sub-items of other groups */ for (size_t i = count; i > 0; i--) { obs_sceneitem_t *item = items[i - 1]; if (item->parent != scene || item->is_group) return NULL; } obs_canvas_t *canvas = obs_weak_canvas_get_canvas(scene->source->canvas); obs_scene_t *sub_scene = create_id(canvas, group_info.id, name); obs_canvas_release(canvas); obs_sceneitem_t *last_item = items ? 
items[count - 1] : NULL; obs_sceneitem_t *item = obs_scene_add_internal(scene, sub_scene->source, last_item, 0); if (!items || !count) { obs_scene_release(sub_scene); return item; } /* ------------------------- */ full_lock(scene); full_lock(sub_scene); sub_scene->first_item = items[0]; for (size_t i = count; i > 0; i--) { size_t idx = i - 1; remove_group_transform(item, items[idx]); detach_sceneitem(items[idx]); } for (size_t i = 0; i < count; i++) { size_t idx = i; if (idx != (count - 1)) { size_t next_idx = idx + 1; items[idx]->next = items[next_idx]; items[next_idx]->prev = items[idx]; } else { items[idx]->next = NULL; } items[idx]->parent = sub_scene; apply_group_transform(items[idx], item); } items[0]->prev = NULL; resize_group(item, false); full_unlock(sub_scene); full_unlock(scene); struct calldata params; uint8_t stack[128]; calldata_init_fixed(&params, stack, sizeof(stack)); calldata_set_ptr(&params, "scene", scene); calldata_set_ptr(&params, "item", item); signal_handler_signal(scene->source->context.signals, "item_add", &params); /* ------------------------- */ obs_scene_release(sub_scene); return item; } obs_sceneitem_t *obs_scene_insert_group2(obs_scene_t *scene, const char *name, obs_sceneitem_t **items, size_t count, bool signal) { obs_sceneitem_t *item = obs_scene_insert_group(scene, name, items, count); if (signal && item) signal_refresh(scene); return item; } obs_sceneitem_t *obs_scene_get_group(obs_scene_t *scene, const char *name) { if (!scene || !name || !*name) { return NULL; } obs_sceneitem_t *group = NULL; obs_sceneitem_t *item; full_lock(scene); item = scene->first_item; while (item) { if (item->is_group && item->source->context.name) { if (strcmp(item->source->context.name, name) == 0) { group = item; break; } } item = item->next; } full_unlock(scene); return group; } bool obs_sceneitem_is_group(obs_sceneitem_t *item) { return item && item->is_group; } obs_scene_t *obs_sceneitem_group_get_scene(const obs_sceneitem_t *item) { return (item &&
item->is_group) ? item->source->context.data : NULL; } void obs_sceneitem_group_ungroup(obs_sceneitem_t *item) { if (!item || !item->is_group) return; obs_scene_t *scene = item->parent; obs_scene_t *subscene = item->source->context.data; obs_sceneitem_t *insert_after = item; obs_sceneitem_t *first; obs_sceneitem_t *last; signal_item_remove(scene, item); full_lock(scene); /* ------------------------- */ full_lock(subscene); first = subscene->first_item; last = first; while (last) { obs_sceneitem_t *dst; remove_group_transform(item, last); dst = obs_scene_add_internal(scene, last->source, insert_after, 0); duplicate_item_data(dst, last, true, true); apply_group_transform(last, item); if (!last->next) break; insert_after = dst; last = last->next; } full_unlock(subscene); /* ------------------------- */ detach_sceneitem(item); full_unlock(scene); obs_sceneitem_release(item); } void obs_sceneitem_group_ungroup2(obs_sceneitem_t *item, bool signal) { obs_scene_t *scene = item->parent; obs_sceneitem_group_ungroup(item); if (signal) signal_refresh(scene); } void obs_sceneitem_group_add_item(obs_sceneitem_t *group, obs_sceneitem_t *item) { if (!group || !group->is_group || !item) return; obs_scene_t *scene = group->parent; obs_scene_t *groupscene = group->source->context.data; if (item->parent != scene) return; if (item->parent == groupscene) return; /* ------------------------- */ full_lock(scene); full_lock(groupscene); remove_group_transform(group, item); detach_sceneitem(item); attach_sceneitem(groupscene, item, NULL); apply_group_transform(item, group); resize_group(group, false); full_unlock(groupscene); full_unlock(scene); /* ------------------------- */ signal_refresh(scene); } void obs_sceneitem_group_remove_item(obs_sceneitem_t *group, obs_sceneitem_t *item) { if (!item || !group || !group->is_group) return; obs_scene_t *groupscene = item->parent; obs_scene_t *scene = group->parent; /* ------------------------- */ full_lock(scene); full_lock(groupscene); 
remove_group_transform(group, item); detach_sceneitem(item); attach_sceneitem(scene, item, NULL); resize_group(group, false); full_unlock(groupscene); full_unlock(scene); /* ------------------------- */ signal_refresh(scene); } static void build_current_order_info(obs_scene_t *scene, struct obs_sceneitem_order_info **items_out, size_t *size_out) { DARRAY(struct obs_sceneitem_order_info) items; da_init(items); obs_sceneitem_t *item = scene->first_item; while (item) { struct obs_sceneitem_order_info info = {0}; info.item = item; da_push_back(items, &info); if (item->is_group) { obs_scene_t *sub_scene = item->source->context.data; full_lock(sub_scene); obs_sceneitem_t *sub_item = sub_scene->first_item; while (sub_item) { info.group = item; info.item = sub_item; da_push_back(items, &info); sub_item = sub_item->next; } full_unlock(sub_scene); } item = item->next; } *items_out = items.array; *size_out = items.num; } static bool sceneitems_match2(obs_scene_t *scene, struct obs_sceneitem_order_info *items, size_t size) { struct obs_sceneitem_order_info *cur_items; size_t cur_size; build_current_order_info(scene, &cur_items, &cur_size); if (cur_size != size) { bfree(cur_items); return false; } for (size_t i = 0; i < size; i++) { struct obs_sceneitem_order_info *new = &items[i]; struct obs_sceneitem_order_info *old = &cur_items[i]; if (new->group != old->group || new->item != old->item) { bfree(cur_items); return false; } } bfree(cur_items); return true; } static obs_sceneitem_t *get_sceneitem_parent_group(obs_scene_t *scene, obs_sceneitem_t *group_subitem) { if (group_subitem->is_group) return NULL; obs_sceneitem_t *item = scene->first_item; while (item) { if (item->is_group && item->source->context.data == group_subitem->parent) return item; item = item->next; } return NULL; } static void obs_sceneitem_move_hotkeys(obs_scene_t *parent, obs_sceneitem_t *item) { obs_data_array_t *data0 = NULL; obs_data_array_t *data1 = NULL; obs_hotkey_pair_save(item->toggle_visibility, 
&data0, &data1); obs_hotkey_pair_unregister(item->toggle_visibility); init_hotkeys(parent, item, obs_source_get_name(item->source)); obs_hotkey_pair_load(item->toggle_visibility, data0, data1); obs_data_array_release(data0); obs_data_array_release(data1); } bool obs_scene_reorder_items2(obs_scene_t *scene, struct obs_sceneitem_order_info *item_order, size_t item_order_size) { if (!scene || !item_order_size || !item_order) return false; scene = obs_scene_get_ref(scene); if (!scene) return false; full_lock(scene); if (sceneitems_match2(scene, item_order, item_order_size)) { full_unlock(scene); obs_scene_release(scene); return false; } for (size_t i = 0; i < item_order_size; i++) { struct obs_sceneitem_order_info *info = &item_order[i]; if (!info->item->is_group) { obs_sceneitem_t *group = get_sceneitem_parent_group(scene, info->item); remove_group_transform(group, info->item); } } scene->first_item = item_order[0].item; obs_sceneitem_t *prev = NULL; for (size_t i = 0; i < item_order_size; i++) { struct obs_sceneitem_order_info *info = &item_order[i]; obs_sceneitem_t *item = info->item; if (info->item->is_group) { obs_sceneitem_t *sub_prev = NULL; obs_scene_t *sub_scene = info->item->source->context.data; sub_scene->first_item = NULL; obs_scene_addref(sub_scene); full_lock(sub_scene); for (i++; i < item_order_size; i++) { struct obs_sceneitem_order_info *sub_info = &item_order[i]; obs_sceneitem_t *sub_item = sub_info->item; if (sub_info->group != info->item) { i--; break; } if (!sub_scene->first_item) sub_scene->first_item = sub_item; /* Move hotkeys into group */ obs_sceneitem_move_hotkeys(sub_scene, sub_item); sub_item->prev = sub_prev; sub_item->next = NULL; sub_item->parent = sub_scene; if (sub_prev) sub_prev->next = sub_item; apply_group_transform(sub_info->item, sub_info->group); sub_prev = sub_item; } resize_group(info->item, false); full_unlock(sub_scene); obs_scene_release(sub_scene); } /* Move item hotkeys out of group */ if (item->parent && 
obs_scene_is_group(item->parent)) obs_sceneitem_move_hotkeys(scene, item); item->prev = prev; item->next = NULL; item->parent = scene; if (prev) prev->next = item; prev = item; } full_unlock(scene); signal_reorder(scene->first_item); obs_scene_release(scene); return true; } obs_sceneitem_t *obs_sceneitem_get_group(obs_scene_t *scene, obs_sceneitem_t *group_subitem) { if (!scene || !group_subitem || group_subitem->is_group) return NULL; full_lock(scene); obs_sceneitem_t *group = get_sceneitem_parent_group(scene, group_subitem); full_unlock(scene); return group; } bool obs_source_is_group(const obs_source_t *source) { return source && strcmp(source->info.id, group_info.id) == 0; } bool obs_source_type_is_group(const char *id) { return id && strcmp(id, group_info.id) == 0; } bool obs_source_is_scene(const obs_source_t *source) { return source && strcmp(source->info.id, scene_info.id) == 0; } bool obs_source_type_is_scene(const char *id) { return id && strcmp(id, scene_info.id) == 0; } bool obs_scene_is_group(const obs_scene_t *scene) { return scene ? scene->is_group : false; } void obs_sceneitem_group_enum_items(obs_sceneitem_t *group, bool (*callback)(obs_scene_t *, obs_sceneitem_t *, void *), void *param) { if (!group || !group->is_group) return; obs_scene_t *scene = group->source->context.data; if (scene) obs_scene_enum_items(scene, callback, param); } void obs_sceneitem_force_update_transform(obs_sceneitem_t *item) { if (!item) return; if (os_atomic_set_bool(&item->update_transform, false)) update_item_transform(item, false); } void obs_sceneitem_set_transition(obs_sceneitem_t *item, bool show, obs_source_t *transition) { if (!item) return; obs_source_t **target = show ? &item->show_transition : &item->hide_transition; if (*target) obs_source_release(*target); *target = obs_source_get_ref(transition); } obs_source_t *obs_sceneitem_get_transition(obs_sceneitem_t *item, bool show) { if (!item) return NULL; return show ? 
item->show_transition : item->hide_transition; } void obs_sceneitem_set_transition_duration(obs_sceneitem_t *item, bool show, uint32_t duration_ms) { if (!item) return; if (show) item->show_transition_duration = duration_ms; else item->hide_transition_duration = duration_ms; } uint32_t obs_sceneitem_get_transition_duration(obs_sceneitem_t *item, bool show) { if (!item) return 0; return show ? item->show_transition_duration : item->hide_transition_duration; } void obs_sceneitem_transition_stop(void *data, calldata_t *calldata) { obs_source_t *parent = data; obs_source_t *transition; calldata_get_ptr(calldata, "source", &transition); obs_source_remove_active_child(parent, transition); signal_handler_t *sh = obs_source_get_signal_handler(transition); if (sh) signal_handler_disconnect(sh, "transition_stop", obs_sceneitem_transition_stop, parent); } void obs_sceneitem_do_transition(obs_sceneitem_t *item, bool visible) { if (!item) return; if (transition_active(item->show_transition)) obs_transition_force_stop(item->show_transition); if (transition_active(item->hide_transition)) obs_transition_force_stop(item->hide_transition); obs_source_t *transition = obs_sceneitem_get_transition(item, visible); if (!transition) return; int duration = (int)obs_sceneitem_get_transition_duration(item, visible); const int cx = obs_source_get_width(item->source); const int cy = obs_source_get_height(item->source); obs_transition_set_size(transition, cx, cy); obs_transition_set_alignment(transition, OBS_ALIGN_CENTER); obs_transition_set_scale_type(transition, OBS_TRANSITION_SCALE_ASPECT); if (duration == 0) duration = 300; obs_scene_t *scene = obs_sceneitem_get_scene(item); obs_source_t *parent = obs_scene_get_source(scene); obs_source_add_active_child(parent, transition); signal_handler_t *sh = obs_source_get_signal_handler(transition); if (sh) signal_handler_connect(sh, "transition_stop", obs_sceneitem_transition_stop, parent); if (!visible) { obs_transition_set(transition, 
item->source); obs_transition_start(transition, OBS_TRANSITION_MODE_AUTO, duration, NULL); } else { obs_transition_set(transition, NULL); obs_transition_start(transition, OBS_TRANSITION_MODE_AUTO, duration, item->source); } } void obs_sceneitem_transition_load(struct obs_scene_item *item, obs_data_t *data, bool show) { if (!item || !data) return; const char *id = obs_data_get_string(data, "id"); if (id && strlen(id)) { const char *tn = obs_data_get_string(data, "name"); obs_data_t *s = obs_data_get_obj(data, "transition"); obs_source_t *t = obs_source_create_private(id, tn, s); obs_sceneitem_set_transition(item, show, t); obs_source_release(t); obs_data_release(s); } else { obs_sceneitem_set_transition(item, show, NULL); } obs_sceneitem_set_transition_duration(item, show, (uint32_t)obs_data_get_int(data, "duration")); } obs_data_t *obs_sceneitem_transition_save(struct obs_scene_item *item, bool show) { obs_data_t *data = obs_data_create(); struct obs_source *transition = show ? item->show_transition : item->hide_transition; if (transition) { obs_data_set_string(data, "id", obs_source_get_unversioned_id(transition)); obs_data_set_string(data, "versioned_id", obs_source_get_id(transition)); obs_data_set_string(data, "name", obs_source_get_name(transition)); obs_data_t *s = obs_source_get_settings(transition); obs_data_set_obj(data, "transition", s); obs_data_release(s); } obs_data_set_int(data, "duration", show ? 
item->show_transition_duration : item->hide_transition_duration; return data; } void obs_scene_prune_sources(obs_scene_t *scene) { obs_scene_item_ptr_array_t remove_items; da_init(remove_items); video_lock(scene); update_transforms_and_prune_sources(scene, &remove_items, NULL, false); video_unlock(scene); for (size_t i = 0; i < remove_items.num; i++) obs_sceneitem_release(remove_items.array[i]); da_free(remove_items); }
obs-studio-32.1.0-sources/libobs/obs-nix-x11.c
/****************************************************************************** Copyright (C) 2023 by Lain Bailey Copyright (C) 2014 by Zachary Lund Copyright (C) 2019 by Jason Francis This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ #include "obs-internal.h" #include "obs-nix-platform.h" #include "obs-nix-x11.h" #include <xcb/xcb.h> #if defined(XCB_XINPUT_FOUND) #include <xcb/xinput.h> #endif #include <X11/Xlib.h> #include <X11/Xlib-xcb.h> #include <X11/keysym.h> #include <X11/Sunkeysym.h> #include <X11/XF86keysym.h> void obs_nix_x11_log_info(void) { Display *dpy = obs_get_nix_platform_display(); if (!dpy) { blog(LOG_INFO, "Unable to open X display"); return; } int protocol_version = ProtocolVersion(dpy); int protocol_revision = ProtocolRevision(dpy); int vendor_release = VendorRelease(dpy); const char *vendor_name = ServerVendor(dpy); if (strstr(vendor_name, "X.Org")) { blog(LOG_INFO, "Window System: X%d.%d, Vendor: %s, Version: %d" ".%d.%d", protocol_version, protocol_revision, vendor_name, vendor_release / 10000000, (vendor_release / 100000) % 100, (vendor_release / 1000) % 100); } else { blog(LOG_INFO, "Window System: X%d.%d - vendor string: %s - " "vendor release: %d", protocol_version, protocol_revision, vendor_name, vendor_release); } } /* So here's how Linux works with key mapping: * * First, there's a global key symbol enum (xcb_keysym_t) which has unique * values for all possible symbols keys can have (e.g., '1' and '!' are * different values). * * Then there's a key code (xcb_keycode_t), which is basically an index to the * actual key itself on the keyboard (e.g., '1' and '!' will share the same * value). * * xcb_keysym_t values should be given to libobs, and libobs will translate * them to an obs_key_t; although xcb_keysym_t values can differ ('!' vs '1'), * it will get the obs_key_t value that represents the actual key pressed; in * other words it will be based on the key code rather than the key symbol. * The same applies to checking key press states.
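 *
 * As a concrete illustration of the keysym/keycode split (a hypothetical
 * sketch using the separate xcb-keysyms helper library, which this file does
 * not itself use): XK_1 and XK_exclam are distinct xcb_keysym_t values, but
 * on a US layout both resolve to the same xcb_keycode_t, and that keycode is
 * what hotkey press checks are ultimately keyed on.
 *
 *   xcb_key_symbols_t *syms = xcb_key_symbols_alloc(conn);
 *   xcb_keycode_t *one  = xcb_key_symbols_get_keycode(syms, XK_1);
 *   xcb_keycode_t *excl = xcb_key_symbols_get_keycode(syms, XK_exclam);
 *   // On a US layout, one[0] and excl[0] are the same physical key.
 *   free(one);
 *   free(excl);
 *   xcb_key_symbols_free(syms);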
*/ struct keycode_list { DARRAY(xcb_keycode_t) list; }; struct obs_hotkeys_platform { Display *display; xcb_keysym_t base_keysyms[OBS_KEY_LAST_VALUE]; struct keycode_list keycodes[OBS_KEY_LAST_VALUE]; xcb_keycode_t min_keycode; xcb_keycode_t super_l_code; xcb_keycode_t super_r_code; /* stores a copy of the keysym map for keycodes */ xcb_keysym_t *keysyms; int num_keysyms; int syms_per_code; #if defined(XCB_XINPUT_FOUND) bool pressed[XINPUT_MOUSE_LEN]; bool update[XINPUT_MOUSE_LEN]; bool button_pressed[XINPUT_MOUSE_LEN]; #endif }; #define MOUSE_1 (1 << 16) #define MOUSE_2 (2 << 16) #define MOUSE_3 (3 << 16) #define MOUSE_4 (4 << 16) #define MOUSE_5 (5 << 16) static int get_keysym(obs_key_t key) { switch (key) { case OBS_KEY_RETURN: return XK_Return; case OBS_KEY_ESCAPE: return XK_Escape; case OBS_KEY_TAB: return XK_Tab; case OBS_KEY_BACKSPACE: return XK_BackSpace; case OBS_KEY_INSERT: return XK_Insert; case OBS_KEY_DELETE: return XK_Delete; case OBS_KEY_PAUSE: return XK_Pause; case OBS_KEY_PRINT: return XK_Print; case OBS_KEY_HOME: return XK_Home; case OBS_KEY_END: return XK_End; case OBS_KEY_LEFT: return XK_Left; case OBS_KEY_UP: return XK_Up; case OBS_KEY_RIGHT: return XK_Right; case OBS_KEY_DOWN: return XK_Down; case OBS_KEY_PAGEUP: return XK_Prior; case OBS_KEY_PAGEDOWN: return XK_Next; case OBS_KEY_SHIFT: return XK_Shift_L; case OBS_KEY_CONTROL: return XK_Control_L; case OBS_KEY_ALT: return XK_Alt_L; case OBS_KEY_CAPSLOCK: return XK_Caps_Lock; case OBS_KEY_NUMLOCK: return XK_Num_Lock; case OBS_KEY_SCROLLLOCK: return XK_Scroll_Lock; case OBS_KEY_F1: return XK_F1; case OBS_KEY_F2: return XK_F2; case OBS_KEY_F3: return XK_F3; case OBS_KEY_F4: return XK_F4; case OBS_KEY_F5: return XK_F5; case OBS_KEY_F6: return XK_F6; case OBS_KEY_F7: return XK_F7; case OBS_KEY_F8: return XK_F8; case OBS_KEY_F9: return XK_F9; case OBS_KEY_F10: return XK_F10; case OBS_KEY_F11: return XK_F11; case OBS_KEY_F12: return XK_F12; case OBS_KEY_F13: return XK_F13; case OBS_KEY_F14: return 
XK_F14; case OBS_KEY_F15: return XK_F15; case OBS_KEY_F16: return XK_F16; case OBS_KEY_F17: return XK_F17; case OBS_KEY_F18: return XK_F18; case OBS_KEY_F19: return XK_F19; case OBS_KEY_F20: return XK_F20; case OBS_KEY_F21: return XK_F21; case OBS_KEY_F22: return XK_F22; case OBS_KEY_F23: return XK_F23; case OBS_KEY_F24: return XK_F24; case OBS_KEY_F25: return XK_F25; case OBS_KEY_F26: return XK_F26; case OBS_KEY_F27: return XK_F27; case OBS_KEY_F28: return XK_F28; case OBS_KEY_F29: return XK_F29; case OBS_KEY_F30: return XK_F30; case OBS_KEY_F31: return XK_F31; case OBS_KEY_F32: return XK_F32; case OBS_KEY_F33: return XK_F33; case OBS_KEY_F34: return XK_F34; case OBS_KEY_F35: return XK_F35; case OBS_KEY_MENU: return XK_Menu; case OBS_KEY_HYPER_L: return XK_Hyper_L; case OBS_KEY_HYPER_R: return XK_Hyper_R; case OBS_KEY_HELP: return XK_Help; case OBS_KEY_CANCEL: return XK_Cancel; case OBS_KEY_FIND: return XK_Find; case OBS_KEY_REDO: return XK_Redo; case OBS_KEY_UNDO: return XK_Undo; case OBS_KEY_SPACE: return XK_space; case OBS_KEY_COPY: return XF86XK_Copy; case OBS_KEY_CUT: return XF86XK_Cut; case OBS_KEY_OPEN: return XF86XK_Open; case OBS_KEY_PASTE: return XF86XK_Paste; case OBS_KEY_FRONT: return SunXK_Front; case OBS_KEY_PROPS: return SunXK_Props; case OBS_KEY_EXCLAM: return XK_exclam; case OBS_KEY_QUOTEDBL: return XK_quotedbl; case OBS_KEY_NUMBERSIGN: return XK_numbersign; case OBS_KEY_DOLLAR: return XK_dollar; case OBS_KEY_PERCENT: return XK_percent; case OBS_KEY_AMPERSAND: return XK_ampersand; case OBS_KEY_APOSTROPHE: return XK_apostrophe; case OBS_KEY_PARENLEFT: return XK_parenleft; case OBS_KEY_PARENRIGHT: return XK_parenright; case OBS_KEY_ASTERISK: return XK_asterisk; case OBS_KEY_PLUS: return XK_plus; case OBS_KEY_COMMA: return XK_comma; case OBS_KEY_MINUS: return XK_minus; case OBS_KEY_PERIOD: return XK_period; case OBS_KEY_SLASH: return XK_slash; case OBS_KEY_0: return XK_0; case OBS_KEY_1: return XK_1; case OBS_KEY_2: return XK_2; case OBS_KEY_3: 
return XK_3; case OBS_KEY_4: return XK_4; case OBS_KEY_5: return XK_5; case OBS_KEY_6: return XK_6; case OBS_KEY_7: return XK_7; case OBS_KEY_8: return XK_8; case OBS_KEY_9: return XK_9; case OBS_KEY_NUMEQUAL: return XK_KP_Equal; case OBS_KEY_NUMASTERISK: return XK_KP_Multiply; case OBS_KEY_NUMPLUS: return XK_KP_Add; case OBS_KEY_NUMCOMMA: return XK_KP_Separator; case OBS_KEY_NUMMINUS: return XK_KP_Subtract; case OBS_KEY_NUMPERIOD: return XK_KP_Decimal; case OBS_KEY_NUMSLASH: return XK_KP_Divide; case OBS_KEY_NUM0: return XK_KP_0; case OBS_KEY_NUM1: return XK_KP_1; case OBS_KEY_NUM2: return XK_KP_2; case OBS_KEY_NUM3: return XK_KP_3; case OBS_KEY_NUM4: return XK_KP_4; case OBS_KEY_NUM5: return XK_KP_5; case OBS_KEY_NUM6: return XK_KP_6; case OBS_KEY_NUM7: return XK_KP_7; case OBS_KEY_NUM8: return XK_KP_8; case OBS_KEY_NUM9: return XK_KP_9; case OBS_KEY_COLON: return XK_colon; case OBS_KEY_SEMICOLON: return XK_semicolon; case OBS_KEY_LESS: return XK_less; case OBS_KEY_EQUAL: return XK_equal; case OBS_KEY_GREATER: return XK_greater; case OBS_KEY_QUESTION: return XK_question; case OBS_KEY_AT: return XK_at; case OBS_KEY_A: return XK_A; case OBS_KEY_B: return XK_B; case OBS_KEY_C: return XK_C; case OBS_KEY_D: return XK_D; case OBS_KEY_E: return XK_E; case OBS_KEY_F: return XK_F; case OBS_KEY_G: return XK_G; case OBS_KEY_H: return XK_H; case OBS_KEY_I: return XK_I; case OBS_KEY_J: return XK_J; case OBS_KEY_K: return XK_K; case OBS_KEY_L: return XK_L; case OBS_KEY_M: return XK_M; case OBS_KEY_N: return XK_N; case OBS_KEY_O: return XK_O; case OBS_KEY_P: return XK_P; case OBS_KEY_Q: return XK_Q; case OBS_KEY_R: return XK_R; case OBS_KEY_S: return XK_S; case OBS_KEY_T: return XK_T; case OBS_KEY_U: return XK_U; case OBS_KEY_V: return XK_V; case OBS_KEY_W: return XK_W; case OBS_KEY_X: return XK_X; case OBS_KEY_Y: return XK_Y; case OBS_KEY_Z: return XK_Z; case OBS_KEY_BRACKETLEFT: return XK_bracketleft; case OBS_KEY_BACKSLASH: return XK_backslash; case OBS_KEY_BRACKETRIGHT: 
return XK_bracketright; case OBS_KEY_ASCIICIRCUM: return XK_asciicircum; case OBS_KEY_UNDERSCORE: return XK_underscore; case OBS_KEY_QUOTELEFT: return XK_quoteleft; case OBS_KEY_BRACELEFT: return XK_braceleft; case OBS_KEY_BAR: return XK_bar; case OBS_KEY_BRACERIGHT: return XK_braceright; case OBS_KEY_ASCIITILDE: return XK_grave; case OBS_KEY_NOBREAKSPACE: return XK_nobreakspace; case OBS_KEY_EXCLAMDOWN: return XK_exclamdown; case OBS_KEY_CENT: return XK_cent; case OBS_KEY_STERLING: return XK_sterling; case OBS_KEY_CURRENCY: return XK_currency; case OBS_KEY_YEN: return XK_yen; case OBS_KEY_BROKENBAR: return XK_brokenbar; case OBS_KEY_SECTION: return XK_section; case OBS_KEY_DIAERESIS: return XK_diaeresis; case OBS_KEY_COPYRIGHT: return XK_copyright; case OBS_KEY_ORDFEMININE: return XK_ordfeminine; case OBS_KEY_GUILLEMOTLEFT: return XK_guillemotleft; case OBS_KEY_NOTSIGN: return XK_notsign; case OBS_KEY_HYPHEN: return XK_hyphen; case OBS_KEY_REGISTERED: return XK_registered; case OBS_KEY_MACRON: return XK_macron; case OBS_KEY_DEGREE: return XK_degree; case OBS_KEY_PLUSMINUS: return XK_plusminus; case OBS_KEY_TWOSUPERIOR: return XK_twosuperior; case OBS_KEY_THREESUPERIOR: return XK_threesuperior; case OBS_KEY_ACUTE: return XK_acute; case OBS_KEY_MU: return XK_mu; case OBS_KEY_PARAGRAPH: return XK_paragraph; case OBS_KEY_PERIODCENTERED: return XK_periodcentered; case OBS_KEY_CEDILLA: return XK_cedilla; case OBS_KEY_ONESUPERIOR: return XK_onesuperior; case OBS_KEY_MASCULINE: return XK_masculine; case OBS_KEY_GUILLEMOTRIGHT: return XK_guillemotright; case OBS_KEY_ONEQUARTER: return XK_onequarter; case OBS_KEY_ONEHALF: return XK_onehalf; case OBS_KEY_THREEQUARTERS: return XK_threequarters; case OBS_KEY_QUESTIONDOWN: return XK_questiondown; case OBS_KEY_AGRAVE: return XK_Agrave; case OBS_KEY_AACUTE: return XK_Aacute; case OBS_KEY_ACIRCUMFLEX: return XK_Acircumflex; case OBS_KEY_ATILDE: return XK_Atilde; case OBS_KEY_ADIAERESIS: return XK_Adiaeresis; case OBS_KEY_ARING: 
return XK_Aring; case OBS_KEY_AE: return XK_AE; case OBS_KEY_CCEDILLA: return XK_Ccedilla; case OBS_KEY_EGRAVE: return XK_Egrave; case OBS_KEY_EACUTE: return XK_Eacute; case OBS_KEY_ECIRCUMFLEX: return XK_Ecircumflex; case OBS_KEY_EDIAERESIS: return XK_Ediaeresis; case OBS_KEY_IGRAVE: return XK_Igrave; case OBS_KEY_IACUTE: return XK_Iacute; case OBS_KEY_ICIRCUMFLEX: return XK_Icircumflex; case OBS_KEY_IDIAERESIS: return XK_Idiaeresis; case OBS_KEY_ETH: return XK_ETH; case OBS_KEY_NTILDE: return XK_Ntilde; case OBS_KEY_OGRAVE: return XK_Ograve; case OBS_KEY_OACUTE: return XK_Oacute; case OBS_KEY_OCIRCUMFLEX: return XK_Ocircumflex; case OBS_KEY_ODIAERESIS: return XK_Odiaeresis; case OBS_KEY_MULTIPLY: return XK_multiply; case OBS_KEY_OOBLIQUE: return XK_Ooblique; case OBS_KEY_UGRAVE: return XK_Ugrave; case OBS_KEY_UACUTE: return XK_Uacute; case OBS_KEY_UCIRCUMFLEX: return XK_Ucircumflex; case OBS_KEY_UDIAERESIS: return XK_Udiaeresis; case OBS_KEY_YACUTE: return XK_Yacute; case OBS_KEY_THORN: return XK_Thorn; case OBS_KEY_SSHARP: return XK_ssharp; case OBS_KEY_DIVISION: return XK_division; case OBS_KEY_YDIAERESIS: return XK_Ydiaeresis; case OBS_KEY_MULTI_KEY: return XK_Multi_key; case OBS_KEY_CODEINPUT: return XK_Codeinput; case OBS_KEY_SINGLECANDIDATE: return XK_SingleCandidate; case OBS_KEY_MULTIPLECANDIDATE: return XK_MultipleCandidate; case OBS_KEY_PREVIOUSCANDIDATE: return XK_PreviousCandidate; case OBS_KEY_MODE_SWITCH: return XK_Mode_switch; case OBS_KEY_KANJI: return XK_Kanji; case OBS_KEY_MUHENKAN: return XK_Muhenkan; case OBS_KEY_HENKAN: return XK_Henkan; case OBS_KEY_ROMAJI: return XK_Romaji; case OBS_KEY_HIRAGANA: return XK_Hiragana; case OBS_KEY_KATAKANA: return XK_Katakana; case OBS_KEY_HIRAGANA_KATAKANA: return XK_Hiragana_Katakana; case OBS_KEY_ZENKAKU: return XK_Zenkaku; case OBS_KEY_HANKAKU: return XK_Hankaku; case OBS_KEY_ZENKAKU_HANKAKU: return XK_Zenkaku_Hankaku; case OBS_KEY_TOUROKU: return XK_Touroku; case OBS_KEY_MASSYO: return XK_Massyo; case
OBS_KEY_KANA_LOCK: return XK_Kana_Lock; case OBS_KEY_KANA_SHIFT: return XK_Kana_Shift; case OBS_KEY_EISU_SHIFT: return XK_Eisu_Shift; case OBS_KEY_EISU_TOGGLE: return XK_Eisu_toggle; case OBS_KEY_HANGUL: return XK_Hangul; case OBS_KEY_HANGUL_START: return XK_Hangul_Start; case OBS_KEY_HANGUL_END: return XK_Hangul_End; case OBS_KEY_HANGUL_HANJA: return XK_Hangul_Hanja; case OBS_KEY_HANGUL_JAMO: return XK_Hangul_Jamo; case OBS_KEY_HANGUL_ROMAJA: return XK_Hangul_Romaja; case OBS_KEY_HANGUL_BANJA: return XK_Hangul_Banja; case OBS_KEY_HANGUL_PREHANJA: return XK_Hangul_PreHanja; case OBS_KEY_HANGUL_POSTHANJA: return XK_Hangul_PostHanja; case OBS_KEY_HANGUL_SPECIAL: return XK_Hangul_Special; case OBS_KEY_DEAD_GRAVE: return XK_dead_grave; case OBS_KEY_DEAD_ACUTE: return XK_dead_acute; case OBS_KEY_DEAD_CIRCUMFLEX: return XK_dead_circumflex; case OBS_KEY_DEAD_TILDE: return XK_dead_tilde; case OBS_KEY_DEAD_MACRON: return XK_dead_macron; case OBS_KEY_DEAD_BREVE: return XK_dead_breve; case OBS_KEY_DEAD_ABOVEDOT: return XK_dead_abovedot; case OBS_KEY_DEAD_DIAERESIS: return XK_dead_diaeresis; case OBS_KEY_DEAD_ABOVERING: return XK_dead_abovering; case OBS_KEY_DEAD_DOUBLEACUTE: return XK_dead_doubleacute; case OBS_KEY_DEAD_CARON: return XK_dead_caron; case OBS_KEY_DEAD_CEDILLA: return XK_dead_cedilla; case OBS_KEY_DEAD_OGONEK: return XK_dead_ogonek; case OBS_KEY_DEAD_IOTA: return XK_dead_iota; case OBS_KEY_DEAD_VOICED_SOUND: return XK_dead_voiced_sound; case OBS_KEY_DEAD_SEMIVOICED_SOUND: return XK_dead_semivoiced_sound; case OBS_KEY_DEAD_BELOWDOT: return XK_dead_belowdot; case OBS_KEY_DEAD_HOOK: return XK_dead_hook; case OBS_KEY_DEAD_HORN: return XK_dead_horn; case OBS_KEY_MOUSE1: return MOUSE_1; case OBS_KEY_MOUSE2: return MOUSE_2; case OBS_KEY_MOUSE3: return MOUSE_3; case OBS_KEY_MOUSE4: return MOUSE_4; case OBS_KEY_MOUSE5: return MOUSE_5; case OBS_KEY_VK_MEDIA_PLAY_PAUSE: return XF86XK_AudioPlay; case OBS_KEY_VK_MEDIA_STOP: return XF86XK_AudioStop; case 
OBS_KEY_VK_MEDIA_PREV_TRACK: return XF86XK_AudioPrev; case OBS_KEY_VK_MEDIA_NEXT_TRACK: return XF86XK_AudioNext; case OBS_KEY_VK_VOLUME_MUTE: return XF86XK_AudioMute; case OBS_KEY_VK_VOLUME_DOWN: return XF86XK_AudioLowerVolume; case OBS_KEY_VK_VOLUME_UP: return XF86XK_AudioRaiseVolume; /* TODO: Implement keys for non-US keyboards */ default:; } return 0; } static inline void fill_base_keysyms(struct obs_core_hotkeys *hotkeys) { for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) hotkeys->platform_context->base_keysyms[i] = get_keysym(i); } static obs_key_t key_from_base_keysym(obs_hotkeys_platform_t *context, xcb_keysym_t code) { for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) { if (context->base_keysyms[i] == (xcb_keysym_t)code) { return (obs_key_t)i; } } switch (code) { case XK_Shift_R: return OBS_KEY_SHIFT; case XK_Control_R: return OBS_KEY_CONTROL; case XK_Alt_R: return OBS_KEY_ALT; } return OBS_KEY_NONE; } static inline void add_key(obs_hotkeys_platform_t *context, obs_key_t key, int code) { xcb_keycode_t kc = (xcb_keycode_t)code; da_push_back(context->keycodes[key].list, &kc); if (context->keycodes[key].list.num > 1) { blog(LOG_DEBUG, "found alternate keycode %d for %s " "which already has keycode %d", code, obs_key_to_name(key), (int)context->keycodes[key].list.array[0]); } } static inline bool fill_keycodes(struct obs_core_hotkeys *hotkeys) { obs_hotkeys_platform_t *context = hotkeys->platform_context; xcb_connection_t *connection = XGetXCBConnection(context->display); const struct xcb_setup_t *setup = xcb_get_setup(connection); xcb_get_keyboard_mapping_cookie_t cookie; xcb_get_keyboard_mapping_reply_t *reply; xcb_generic_error_t *error = NULL; int code; int mincode = setup->min_keycode; int maxcode = setup->max_keycode; context->min_keycode = setup->min_keycode; cookie = xcb_get_keyboard_mapping(connection, mincode, maxcode - mincode + 1); reply = xcb_get_keyboard_mapping_reply(connection, cookie, &error); if (error || !reply) { blog(LOG_WARNING,
"xcb_get_keyboard_mapping_reply failed"); goto error1; } const xcb_keysym_t *keysyms = xcb_get_keyboard_mapping_keysyms(reply); int syms_per_code = (int)reply->keysyms_per_keycode; context->num_keysyms = (maxcode - mincode + 1) * syms_per_code; context->syms_per_code = syms_per_code; context->keysyms = bmemdup(keysyms, sizeof(xcb_keysym_t) * context->num_keysyms); for (code = mincode; code <= maxcode; code++) { const xcb_keysym_t *sym; obs_key_t key; sym = &keysyms[(code - mincode) * syms_per_code]; for (int i = 0; i < syms_per_code; i++) { if (!sym[i]) break; if (sym[i] == XK_Super_L) { context->super_l_code = code; break; } else if (sym[i] == XK_Super_R) { context->super_r_code = code; break; } else { key = key_from_base_keysym(context, sym[i]); if (key != OBS_KEY_NONE) { add_key(context, key, code); break; } } } } error1: free(reply); free(error); return error != NULL || reply == NULL; } static xcb_screen_t *default_screen(obs_hotkeys_platform_t *context, xcb_connection_t *connection) { int def_screen_idx = XDefaultScreen(context->display); xcb_screen_iterator_t iter; iter = xcb_setup_roots_iterator(xcb_get_setup(connection)); while (iter.rem) { if (def_screen_idx-- == 0) return iter.data; xcb_screen_next(&iter); } return NULL; } static inline xcb_window_t root_window(obs_hotkeys_platform_t *context, xcb_connection_t *connection) { xcb_screen_t *screen = default_screen(context, connection); if (screen) return screen->root; return 0; } #if defined(XCB_XINPUT_FOUND) static inline void registerMouseEvents(struct obs_core_hotkeys *hotkeys) { obs_hotkeys_platform_t *context = hotkeys->platform_context; xcb_connection_t *connection = XGetXCBConnection(context->display); xcb_window_t window = root_window(context, connection); struct { xcb_input_event_mask_t head; xcb_input_xi_event_mask_t mask; } mask; mask.head.deviceid = XCB_INPUT_DEVICE_ALL_MASTER; mask.head.mask_len = sizeof(mask.mask) / sizeof(uint32_t); mask.mask = XCB_INPUT_XI_EVENT_MASK_RAW_BUTTON_PRESS | 
XCB_INPUT_XI_EVENT_MASK_RAW_BUTTON_RELEASE; xcb_input_xi_select_events(connection, window, 1, &mask.head); xcb_flush(connection); } #endif static bool obs_nix_x11_hotkeys_platform_init(struct obs_core_hotkeys *hotkeys) { // Open a new X11 connection here; this avoids Qt masking events we care about. Display *display = XOpenDisplay(NULL); if (!display) return false; hotkeys->platform_context = bzalloc(sizeof(obs_hotkeys_platform_t)); hotkeys->platform_context->display = display; #if defined(XCB_XINPUT_FOUND) registerMouseEvents(hotkeys); #endif fill_base_keysyms(hotkeys); fill_keycodes(hotkeys); return true; } static void obs_nix_x11_hotkeys_platform_free(struct obs_core_hotkeys *hotkeys) { obs_hotkeys_platform_t *context = hotkeys->platform_context; if (!context) return; for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) da_free(context->keycodes[i].list); bfree(context->keysyms); XCloseDisplay(context->display); bfree(context); hotkeys->platform_context = NULL; } static bool mouse_button_pressed(xcb_connection_t *connection, obs_hotkeys_platform_t *context, obs_key_t key) { bool ret = false; #if defined(XCB_XINPUT_FOUND) memset(context->pressed, 0, XINPUT_MOUSE_LEN); memset(context->update, 0, XINPUT_MOUSE_LEN); xcb_generic_event_t *ev; while ((ev = xcb_poll_for_event(connection))) { if ((ev->response_type & ~0x80) == XCB_GE_GENERIC) { switch (((xcb_ge_event_t *)ev)->event_type) { case XCB_INPUT_RAW_BUTTON_PRESS: { xcb_input_raw_button_press_event_t *mot; mot = (xcb_input_raw_button_press_event_t *)ev; if (mot->detail < XINPUT_MOUSE_LEN) { context->pressed[mot->detail - 1] = true; context->update[mot->detail - 1] = true; } else { blog(LOG_WARNING, "Unsupported button"); } break; } case XCB_INPUT_RAW_BUTTON_RELEASE: { xcb_input_raw_button_release_event_t *mot; mot = (xcb_input_raw_button_release_event_t *)ev; if (mot->detail < XINPUT_MOUSE_LEN) context->update[mot->detail - 1] = true; else blog(LOG_WARNING, "Unsupported button"); break; } default: break; } } free(ev); }
// Mouse 2 for OBS is Right Click and Mouse 3 is Wheel Click. // Mouse Wheel axis clicks (xinput mot->detail 4 5 6 7) are ignored. switch (key) { case OBS_KEY_MOUSE1: ret = context->pressed[0] || context->button_pressed[0]; break; case OBS_KEY_MOUSE2: ret = context->pressed[2] || context->button_pressed[2]; break; case OBS_KEY_MOUSE3: ret = context->pressed[1] || context->button_pressed[1]; break; case OBS_KEY_MOUSE4: ret = context->pressed[7] || context->button_pressed[7]; break; case OBS_KEY_MOUSE5: ret = context->pressed[8] || context->button_pressed[8]; break; case OBS_KEY_MOUSE6: ret = context->pressed[9] || context->button_pressed[9]; break; case OBS_KEY_MOUSE7: ret = context->pressed[10] || context->button_pressed[10]; break; case OBS_KEY_MOUSE8: ret = context->pressed[11] || context->button_pressed[11]; break; case OBS_KEY_MOUSE9: ret = context->pressed[12] || context->button_pressed[12]; break; case OBS_KEY_MOUSE10: ret = context->pressed[13] || context->button_pressed[13]; break; case OBS_KEY_MOUSE11: ret = context->pressed[14] || context->button_pressed[14]; break; case OBS_KEY_MOUSE12: ret = context->pressed[15] || context->button_pressed[15]; break; case OBS_KEY_MOUSE13: ret = context->pressed[16] || context->button_pressed[16]; break; case OBS_KEY_MOUSE14: ret = context->pressed[17] || context->button_pressed[17]; break; case OBS_KEY_MOUSE15: ret = context->pressed[18] || context->button_pressed[18]; break; case OBS_KEY_MOUSE16: ret = context->pressed[19] || context->button_pressed[19]; break; case OBS_KEY_MOUSE17: ret = context->pressed[20] || context->button_pressed[20]; break; case OBS_KEY_MOUSE18: ret = context->pressed[21] || context->button_pressed[21]; break; case OBS_KEY_MOUSE19: ret = context->pressed[22] || context->button_pressed[22]; break; case OBS_KEY_MOUSE20: ret = context->pressed[23] || context->button_pressed[23]; break; case OBS_KEY_MOUSE21: ret = context->pressed[24] || context->button_pressed[24]; break; case OBS_KEY_MOUSE22: ret 
= context->pressed[25] || context->button_pressed[25]; break; case OBS_KEY_MOUSE23: ret = context->pressed[26] || context->button_pressed[26]; break; case OBS_KEY_MOUSE24: ret = context->pressed[27] || context->button_pressed[27]; break; case OBS_KEY_MOUSE25: ret = context->pressed[28] || context->button_pressed[28]; break; case OBS_KEY_MOUSE26: ret = context->pressed[29] || context->button_pressed[29]; break; case OBS_KEY_MOUSE27: ret = context->pressed[30] || context->button_pressed[30]; break; case OBS_KEY_MOUSE28: ret = context->pressed[31] || context->button_pressed[31]; break; case OBS_KEY_MOUSE29: ret = context->pressed[32] || context->button_pressed[32]; break; default: break; } for (int i = 0; i != XINPUT_MOUSE_LEN; i++) if (context->update[i]) context->button_pressed[i] = context->pressed[i]; #else xcb_generic_error_t *error = NULL; xcb_query_pointer_cookie_t qpc; xcb_query_pointer_reply_t *reply; qpc = xcb_query_pointer(connection, root_window(context, connection)); reply = xcb_query_pointer_reply(connection, qpc, &error); if (error) { blog(LOG_WARNING, "xcb_query_pointer_reply failed"); } else { uint16_t buttons = reply->mask; switch (key) { case OBS_KEY_MOUSE1: ret = buttons & XCB_BUTTON_MASK_1; break; case OBS_KEY_MOUSE2: ret = buttons & XCB_BUTTON_MASK_3; break; case OBS_KEY_MOUSE3: ret = buttons & XCB_BUTTON_MASK_2; break; default:; } } free(reply); free(error); #endif return ret; } static inline bool keycode_pressed(xcb_query_keymap_reply_t *reply, xcb_keycode_t code) { return (reply->keys[code / 8] & (1 << (code % 8))) != 0; } static bool key_pressed(xcb_connection_t *connection, obs_hotkeys_platform_t *context, obs_key_t key) { struct keycode_list *codes = &context->keycodes[key]; xcb_generic_error_t *error = NULL; xcb_query_keymap_reply_t *reply; bool pressed = false; reply = xcb_query_keymap_reply(connection, xcb_query_keymap(connection), &error); if (error) { blog(LOG_WARNING, "xcb_query_keymap failed"); } else if (key == OBS_KEY_META) { 
pressed = keycode_pressed(reply, context->super_l_code) || keycode_pressed(reply, context->super_r_code); } else { for (size_t i = 0; i < codes->list.num; i++) { if (keycode_pressed(reply, codes->list.array[i])) { pressed = true; break; } } } free(reply); free(error); return pressed; } static bool obs_nix_x11_hotkeys_platform_is_pressed(obs_hotkeys_platform_t *context, obs_key_t key) { xcb_connection_t *conn = XGetXCBConnection(context->display); if (key >= OBS_KEY_MOUSE1 && key <= OBS_KEY_MOUSE29) { return mouse_button_pressed(conn, context, key); } else { return key_pressed(conn, context, key); } } static bool get_key_translation(struct dstr *dstr, xcb_keycode_t keycode) { xcb_connection_t *connection; char name[128]; connection = XGetXCBConnection(obs->hotkeys.platform_context->display); XKeyEvent event = {0}; event.type = KeyPress; event.display = obs->hotkeys.platform_context->display; event.keycode = keycode; event.root = root_window(obs->hotkeys.platform_context, connection); event.window = event.root; if (keycode) { int len = XLookupString(&event, name, 128, NULL, NULL); if (len) { dstr_ncopy(dstr, name, len); dstr_to_upper(dstr); return true; } } return false; } static void obs_nix_x11_key_to_str(obs_key_t key, struct dstr *dstr) { if (key >= OBS_KEY_MOUSE1 && key <= OBS_KEY_MOUSE29) { if (obs->hotkeys.translations[key]) { dstr_copy(dstr, obs->hotkeys.translations[key]); } else { dstr_printf(dstr, "Mouse %d", (int)(key - OBS_KEY_MOUSE1 + 1)); } return; } if (key >= OBS_KEY_NUM0 && key <= OBS_KEY_NUM9) { if (obs->hotkeys.translations[key]) { dstr_copy(dstr, obs->hotkeys.translations[key]); } else { dstr_printf(dstr, "Numpad %d", (int)(key - OBS_KEY_NUM0)); } return; } #define translate_key(key, def) dstr_copy(dstr, obs_get_hotkey_translation(key, def)) switch (key) { case OBS_KEY_INSERT: return translate_key(key, "Insert"); case OBS_KEY_DELETE: return translate_key(key, "Delete"); case OBS_KEY_HOME: return translate_key(key, "Home"); case OBS_KEY_END: 
return translate_key(key, "End"); case OBS_KEY_PAGEUP: return translate_key(key, "Page Up"); case OBS_KEY_PAGEDOWN: return translate_key(key, "Page Down"); case OBS_KEY_NUMLOCK: return translate_key(key, "Num Lock"); case OBS_KEY_SCROLLLOCK: return translate_key(key, "Scroll Lock"); case OBS_KEY_CAPSLOCK: return translate_key(key, "Caps Lock"); case OBS_KEY_BACKSPACE: return translate_key(key, "Backspace"); case OBS_KEY_TAB: return translate_key(key, "Tab"); case OBS_KEY_PRINT: return translate_key(key, "Print"); case OBS_KEY_PAUSE: return translate_key(key, "Pause"); case OBS_KEY_LEFT: return translate_key(key, "Left"); case OBS_KEY_RIGHT: return translate_key(key, "Right"); case OBS_KEY_UP: return translate_key(key, "Up"); case OBS_KEY_DOWN: return translate_key(key, "Down"); case OBS_KEY_SHIFT: return translate_key(key, "Shift"); case OBS_KEY_ALT: return translate_key(key, "Alt"); case OBS_KEY_CONTROL: return translate_key(key, "Control"); case OBS_KEY_META: return translate_key(key, "Super"); case OBS_KEY_MENU: return translate_key(key, "Menu"); case OBS_KEY_NUMASTERISK: return translate_key(key, "Numpad *"); case OBS_KEY_NUMPLUS: return translate_key(key, "Numpad +"); case OBS_KEY_NUMMINUS: return translate_key(key, "Numpad -"); case OBS_KEY_NUMCOMMA: return translate_key(key, "Numpad ,"); case OBS_KEY_NUMPERIOD: return translate_key(key, "Numpad ."); case OBS_KEY_NUMSLASH: return translate_key(key, "Numpad /"); case OBS_KEY_SPACE: return translate_key(key, "Space"); case OBS_KEY_ESCAPE: return translate_key(key, "Escape"); default:; } if (key >= OBS_KEY_F1 && key <= OBS_KEY_F35) { dstr_printf(dstr, "F%d", (int)(key - OBS_KEY_F1 + 1)); return; } obs_hotkeys_platform_t *context = obs->hotkeys.platform_context; struct keycode_list *keycodes = &context->keycodes[key]; for (size_t i = 0; i < keycodes->list.num; i++) { if (get_key_translation(dstr, keycodes->list.array[i])) { break; } } if (key != OBS_KEY_NONE && dstr_is_empty(dstr)) { dstr_copy(dstr, 
obs_key_to_name(key)); } } static obs_key_t key_from_keycode(obs_hotkeys_platform_t *context, xcb_keycode_t code) { for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) { struct keycode_list *codes = &context->keycodes[i]; for (size_t j = 0; j < codes->list.num; j++) { if (codes->list.array[j] == code) { return (obs_key_t)i; } } } return OBS_KEY_NONE; } static obs_key_t obs_nix_x11_key_from_virtual_key(int sym) { obs_hotkeys_platform_t *context = obs->hotkeys.platform_context; const xcb_keysym_t *keysyms = context->keysyms; int syms_per_code = context->syms_per_code; int num_keysyms = context->num_keysyms; if (sym == 0) return OBS_KEY_NONE; for (int i = 0; i < num_keysyms; i++) { if (keysyms[i] == (xcb_keysym_t)sym) { xcb_keycode_t code = (xcb_keycode_t)(i / syms_per_code); code += context->min_keycode; obs_key_t key = key_from_keycode(context, code); return key; } } return OBS_KEY_NONE; } static int obs_nix_x11_key_to_virtual_key(obs_key_t key) { if (key == OBS_KEY_META) return XK_Super_L; return (int)obs->hotkeys.platform_context->base_keysyms[(int)key]; } static const struct obs_nix_hotkeys_vtable x11_hotkeys_vtable = { .init = obs_nix_x11_hotkeys_platform_init, .free = obs_nix_x11_hotkeys_platform_free, .is_pressed = obs_nix_x11_hotkeys_platform_is_pressed, .key_to_str = obs_nix_x11_key_to_str, .key_from_virtual_key = obs_nix_x11_key_from_virtual_key, .key_to_virtual_key = obs_nix_x11_key_to_virtual_key, }; const struct obs_nix_hotkeys_vtable *obs_nix_x11_get_hotkeys_vtable(void) { return &x11_hotkeys_vtable; } obs-studio-32.1.0-sources/libobs/obs-service.h000644 001751 001751 00000007055 15153330235 022107 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your 
option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once /** * @file * @brief header for modules implementing services. * * Services are modules that implement provider-specific settings for outputs. */ #ifdef __cplusplus extern "C" { #endif struct obs_service_resolution { int cx; int cy; }; /* NOTE: Odd numbers are reserved for custom info from third-party protocols */ enum obs_service_connect_info { OBS_SERVICE_CONNECT_INFO_SERVER_URL = 0, OBS_SERVICE_CONNECT_INFO_STREAM_ID = 2, OBS_SERVICE_CONNECT_INFO_STREAM_KEY = 2, // Alias of OBS_SERVICE_CONNECT_INFO_STREAM_ID OBS_SERVICE_CONNECT_INFO_USERNAME = 4, OBS_SERVICE_CONNECT_INFO_PASSWORD = 6, OBS_SERVICE_CONNECT_INFO_ENCRYPT_PASSPHRASE = 8, OBS_SERVICE_CONNECT_INFO_BEARER_TOKEN = 10, }; struct obs_service_info { /* required */ const char *id; const char *(*get_name)(void *type_data); void *(*create)(obs_data_t *settings, obs_service_t *service); void (*destroy)(void *data); /* optional */ void (*activate)(void *data, obs_data_t *settings); void (*deactivate)(void *data); void (*update)(void *data, obs_data_t *settings); void (*get_defaults)(obs_data_t *settings); obs_properties_t *(*get_properties)(void *data); /** * Called when getting ready to start up an output, before the encoders * and output are initialized * * @param data Internal service data * @param output Output context * @return true to allow the output to start up, * false to prevent output from starting up */ bool (*initialize)(void *data, obs_output_t *output); const char *(*get_url)(void *data); const char *(*get_key)(void *data); const char
*(*get_username)(void *data); const char *(*get_password)(void *data); bool (*deprecated_1)(); void (*apply_encoder_settings)(void *data, obs_data_t *video_encoder_settings, obs_data_t *audio_encoder_settings); void *type_data; void (*free_type_data)(void *type_data); /* TODO: Rename to 'get_preferred_output_type' once an API/ABI break happens */ const char *(*get_output_type)(void *data); void (*get_supported_resolutions)(void *data, struct obs_service_resolution **resolutions, size_t *count); void (*get_max_fps)(void *data, int *fps); void (*get_max_bitrate)(void *data, int *video_bitrate, int *audio_bitrate); const char **(*get_supported_video_codecs)(void *data); const char *(*get_protocol)(void *data); const char **(*get_supported_audio_codecs)(void *data); const char *(*get_connect_info)(void *data, uint32_t type); bool (*can_try_to_connect)(void *data); }; EXPORT void obs_register_service_s(const struct obs_service_info *info, size_t size); #define obs_register_service(info) obs_register_service_s(info, sizeof(struct obs_service_info)) #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs.h000644 001751 001751 00000321406 15153330235 020450 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #pragma once #include "util/c99defs.h" #include "util/bmem.h" #include "util/profiler.h" #include "util/text-lookup.h" #include "graphics/graphics.h" #include "graphics/vec2.h" #include "graphics/vec3.h" #include "media-io/audio-io.h" #include "media-io/video-io.h" #include "callback/signal.h" #include "callback/proc.h" #include "obs-config.h" #include "obs-defs.h" #include "obs-data.h" #include "obs-properties.h" #include "obs-interaction.h" struct matrix4; /* opaque types */ struct obs_context_data; struct obs_display; struct obs_view; struct obs_source; struct obs_scene; struct obs_scene_item; struct obs_output; struct obs_encoder; struct obs_encoder_group; struct obs_service; struct obs_module; struct obs_module_metadata; struct obs_fader; struct obs_volmeter; struct obs_canvas; typedef struct obs_context_data obs_object_t; typedef struct obs_display obs_display_t; typedef struct obs_view obs_view_t; typedef struct obs_source obs_source_t; typedef struct obs_scene obs_scene_t; typedef struct obs_scene_item obs_sceneitem_t; typedef struct obs_output obs_output_t; typedef struct obs_encoder obs_encoder_t; typedef struct obs_encoder_group obs_encoder_group_t; typedef struct obs_service obs_service_t; typedef struct obs_module obs_module_t; typedef struct obs_module_metadata obs_module_metadata_t; typedef struct obs_fader obs_fader_t; typedef struct obs_volmeter obs_volmeter_t; typedef struct obs_canvas obs_canvas_t; typedef struct obs_weak_object obs_weak_object_t; typedef struct obs_weak_source obs_weak_source_t; typedef struct obs_weak_output obs_weak_output_t; typedef struct obs_weak_encoder obs_weak_encoder_t; typedef struct obs_weak_service obs_weak_service_t; typedef struct obs_weak_canvas obs_weak_canvas_t; #include "obs-missing-files.h" #include "obs-source.h" #include "obs-encoder.h" #include "obs-output.h" #include "obs-service.h" #include "obs-audio-controls.h" #include 
"obs-hotkey.h" /** * @file * @brief Main libobs header used by applications. * * @mainpage * * @section intro_sec Introduction * * This document describes the api for libobs to be used by applications as well * as @ref modules_page implementing some kind of functionality. * */ #ifdef __cplusplus extern "C" { #endif /** Used for changing the order of items (for example, filters in a source, * or items in a scene) */ enum obs_order_movement { OBS_ORDER_MOVE_UP, OBS_ORDER_MOVE_DOWN, OBS_ORDER_MOVE_TOP, OBS_ORDER_MOVE_BOTTOM, }; /** * Used with obs_source_process_filter to specify whether the filter should * render the source directly with the specified effect, or whether it should * render it to a texture */ enum obs_allow_direct_render { OBS_NO_DIRECT_RENDERING, OBS_ALLOW_DIRECT_RENDERING, }; enum obs_scale_type { OBS_SCALE_DISABLE, OBS_SCALE_POINT, OBS_SCALE_BICUBIC, OBS_SCALE_BILINEAR, OBS_SCALE_LANCZOS, OBS_SCALE_AREA, }; enum obs_blending_method { OBS_BLEND_METHOD_DEFAULT, OBS_BLEND_METHOD_SRGB_OFF, }; enum obs_blending_type { OBS_BLEND_NORMAL, OBS_BLEND_ADDITIVE, OBS_BLEND_SUBTRACT, OBS_BLEND_SCREEN, OBS_BLEND_MULTIPLY, OBS_BLEND_LIGHTEN, OBS_BLEND_DARKEN, }; /** * Used with scene items to indicate the type of bounds to use for scene items. * Mostly determines how the image will be scaled within those bounds, or * whether to use bounds at all. */ enum obs_bounds_type { OBS_BOUNDS_NONE, /**< no bounds */ OBS_BOUNDS_STRETCH, /**< stretch (ignores base scale) */ OBS_BOUNDS_SCALE_INNER, /**< scales to inner rectangle */ OBS_BOUNDS_SCALE_OUTER, /**< scales to outer rectangle */ OBS_BOUNDS_SCALE_TO_WIDTH, /**< scales to the width */ OBS_BOUNDS_SCALE_TO_HEIGHT, /**< scales to the height */ OBS_BOUNDS_MAX_ONLY, /**< no scaling, maximum size only */ }; /** * Used by libobs to define the state of a plugin/module. 
*/ enum obs_module_load_state { OBS_MODULE_INVALID, OBS_MODULE_ENABLED, OBS_MODULE_MISSING, OBS_MODULE_DISABLED, OBS_MODULE_DISABLED_SAFE, OBS_MODULE_FAILED_TO_OPEN, OBS_MODULE_FAILED_TO_INITIALIZE, }; struct obs_transform_info { struct vec2 pos; float rot; struct vec2 scale; uint32_t alignment; enum obs_bounds_type bounds_type; uint32_t bounds_alignment; struct vec2 bounds; bool crop_to_bounds; }; /** * Video initialization structure */ struct obs_video_info { #ifndef SWIG /** * Graphics module to use (usually "libobs-opengl" or "libobs-d3d11") */ const char *graphics_module; #endif uint32_t fps_num; /**< Output FPS numerator */ uint32_t fps_den; /**< Output FPS denominator */ uint32_t base_width; /**< Base compositing width */ uint32_t base_height; /**< Base compositing height */ uint32_t output_width; /**< Output width */ uint32_t output_height; /**< Output height */ enum video_format output_format; /**< Output format */ /** Video adapter index to use (NOTE: avoid for optimus laptops) */ uint32_t adapter; /** Use shaders to convert to different color formats */ bool gpu_conversion; enum video_colorspace colorspace; /**< YUV type (if YUV) */ enum video_range_type range; /**< YUV range (if YUV) */ enum obs_scale_type scale_type; /**< How to scale if scaling */ }; /** * Audio initialization structure */ struct obs_audio_info { uint32_t samples_per_sec; enum speaker_layout speakers; }; struct obs_audio_info2 { uint32_t samples_per_sec; enum speaker_layout speakers; uint32_t max_buffering_ms; bool fixed_buffering; }; /** * Sent to source filters via the filter_audio callback to allow filtering of * audio data */ struct obs_audio_data { uint8_t *data[MAX_AV_PLANES]; uint32_t frames; uint64_t timestamp; }; /** * Source audio output structure. Used with obs_source_output_audio to output * source audio. Audio is automatically resampled and remixed as necessary. 
*/ struct obs_source_audio { const uint8_t *data[MAX_AV_PLANES]; uint32_t frames; enum speaker_layout speakers; enum audio_format format; uint32_t samples_per_sec; uint64_t timestamp; }; struct obs_source_cea_708 { const uint8_t *data; uint32_t packets; uint64_t timestamp; }; #define OBS_SOURCE_FRAME_LINEAR_ALPHA (1 << 0) /** * Source asynchronous video output structure. Used with * obs_source_output_video to output asynchronous video. Video is buffered as * necessary to play according to timestamps. When used with audio output, * audio is synced to video as it is played. * * If a YUV format is specified, it will be automatically upsampled and * converted to RGB via shader on the graphics processor. * * NOTE: Non-YUV formats will always be treated as full range with this * structure! Use obs_source_frame2 along with obs_source_output_video2 * instead if partial range support is desired for non-YUV video formats. */ struct obs_source_frame { uint8_t *data[MAX_AV_PLANES]; uint32_t linesize[MAX_AV_PLANES]; uint32_t width; uint32_t height; uint64_t timestamp; enum video_format format; float color_matrix[16]; bool full_range; uint16_t max_luminance; float color_range_min[3]; float color_range_max[3]; bool flip; uint8_t flags; uint8_t trc; /* enum video_trc */ /* used internally by libobs */ volatile long refs; bool prev_frame; }; struct obs_source_frame2 { uint8_t *data[MAX_AV_PLANES]; uint32_t linesize[MAX_AV_PLANES]; uint32_t width; uint32_t height; uint64_t timestamp; enum video_format format; enum video_range_type range; float color_matrix[16]; float color_range_min[3]; float color_range_max[3]; bool flip; uint8_t flags; uint8_t trc; /* enum video_trc */ }; /** Access to the argc/argv used to start OBS. What you see is what you get. 
*/ struct obs_cmdline_args { int argc; char **argv; }; /* ------------------------------------------------------------------------- */ /* OBS context */ /** * Find a core libobs data file * @param file Name of the base file * @return A string containing the full path to the file. * Use bfree after use. */ EXPORT char *obs_find_data_file(const char *file); // TODO: Remove after deprecation grace period /** * Add a path to search libobs data files in. * @param path Full path to directory to look in. * The string is copied. */ OBS_DEPRECATED EXPORT void obs_add_data_path(const char *path); // TODO: Remove after deprecation grace period /** * Remove a path from libobs core data paths. * @param path The path to compare to currently set paths. * It does not need to be the same pointer, but * the path string must match an entry fully. * @return Whether or not the path was successfully removed. * If false, the path could not be found. */ OBS_DEPRECATED EXPORT bool obs_remove_data_path(const char *path); /** * Initializes OBS * * @param locale The locale to use for modules * @param module_config_path Path to module config storage directory * (or NULL if none) * @param store The profiler name store for OBS to use or NULL */ EXPORT bool obs_startup(const char *locale, const char *module_config_path, profiler_name_store_t *store); /** Releases all data associated with OBS and terminates the OBS context */ EXPORT void obs_shutdown(void); /** @return true if the main OBS context has been initialized */ EXPORT bool obs_initialized(void); /** @return The current core version */ EXPORT uint32_t obs_get_version(void); /** @return The current core version string */ EXPORT const char *obs_get_version_string(void); /** * Sets things up for calls to obs_get_cmdline_args. Called only once at startup * and safely copies argv/argc from main(). Subsequent calls do nothing.
* * @param argc The count of command line arguments, from main() * @param argv An array of command line arguments, copied from main(), ending * with NULL. */ EXPORT void obs_set_cmdline_args(int argc, const char *const *argv); /** * Get the argc/argv used to start OBS * * @return The command line arguments used for main(). Don't modify this or * you'll mess things up for other callers. */ EXPORT struct obs_cmdline_args obs_get_cmdline_args(void); /** * Sets a new locale to use for modules. This will call obs_module_set_locale * for each module with the new locale. * * @param locale The locale to use for modules */ EXPORT void obs_set_locale(const char *locale); /** @return the current locale */ EXPORT const char *obs_get_locale(void); /** Initialize the Windows-specific crash handler */ #ifdef _WIN32 EXPORT void obs_init_win32_crash_handler(void); #endif /** * Returns the profiler name store (see util/profiler.h) used by OBS, which is * either a name store passed to obs_startup, an internal name store, or NULL * in case obs_initialized() returns false. */ EXPORT profiler_name_store_t *obs_get_profiler_name_store(void); /** * Sets the base video output resolution/FPS/format. * * @note This data cannot be changed if an output is currently active. * @note The graphics module cannot be changed without fully destroying the * OBS context. * * @param ovi Pointer to an obs_video_info structure containing the * specification of the graphics subsystem. * @return OBS_VIDEO_SUCCESS if successful * OBS_VIDEO_NOT_SUPPORTED if the adapter lacks capabilities * OBS_VIDEO_INVALID_PARAM if a parameter is invalid * OBS_VIDEO_CURRENTLY_ACTIVE if video is currently active * OBS_VIDEO_MODULE_NOT_FOUND if the graphics module is not found * OBS_VIDEO_FAIL for generic failure */ EXPORT int obs_reset_video(struct obs_video_info *ovi); /** * Sets base audio output format/channels/samples/etc. * * @note Cannot reset base audio if an output is currently active.
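 *
 * A minimal sketch of resetting audio to 48 kHz stereo:
 * @code
 * struct obs_audio_info oai = {
 *     .samples_per_sec = 48000,
 *     .speakers = SPEAKERS_STEREO,
 * };
 * if (!obs_reset_audio(&oai))
 *     blog(LOG_ERROR, "Failed to reset audio");
 * @endcode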
*/ EXPORT bool obs_reset_audio(const struct obs_audio_info *oai); EXPORT bool obs_reset_audio2(const struct obs_audio_info2 *oai); /** Gets the current video settings, returns false if no video */ EXPORT bool obs_get_video_info(struct obs_video_info *ovi); /** Gets the SDR white level, returns 300.f if no video */ EXPORT float obs_get_video_sdr_white_level(void); /** Gets the HDR nominal peak level, returns 1000.f if no video */ EXPORT float obs_get_video_hdr_nominal_peak_level(void); /** Sets the video levels */ EXPORT void obs_set_video_levels(float sdr_white_level, float hdr_nominal_peak_level); /** Gets the current audio settings, returns false if no audio */ EXPORT bool obs_get_audio_info(struct obs_audio_info *oai); /** * Gets the v2 audio settings that includes buffering information. * Returns false if no audio. */ EXPORT bool obs_get_audio_info2(struct obs_audio_info2 *oai2); /** * Opens a plugin module directly from a specific path. * * If the module already exists then the function will return successful, and * the module parameter will be given the pointer to the existing module. * * This does not initialize the module, it only loads the module image. To * initialize the module, call obs_init_module. * * @param module The pointer to the created module. * @param path Specifies the path to the module library file. If the * extension is not specified, it will use the extension * appropriate to the operating system. * @param data_path Specifies the path to the directory where the module's * data files are stored. * @returns MODULE_SUCCESS if successful * MODULE_ERROR if a generic error occurred * MODULE_FAILED_TO_OPEN if the module failed to open, e.g. 
because it was not found or had missing symbols * MODULE_MISSING_EXPORTS if required exports are missing * MODULE_INCOMPATIBLE_VER if incompatible version */ EXPORT int obs_open_module(obs_module_t **module, const char *path, const char *data_path); EXPORT bool obs_create_disabled_module(obs_module_t **module, const char *path, const char *data_path, enum obs_module_load_state state); /** * Initializes the module, which calls its obs_module_load export. If the * module is already loaded, then this function does nothing and returns * successful. */ EXPORT bool obs_init_module(obs_module_t *module); /** Returns a module based upon its name, or NULL if not found */ EXPORT obs_module_t *obs_get_module(const char *name); /** Returns a module if it is disabled, or NULL if not found in the disabled list */ EXPORT obs_module_t *obs_get_disabled_module(const char *name); /** Gets library of module */ EXPORT void *obs_get_module_lib(obs_module_t *module); /** Returns locale text from a specific module */ EXPORT bool obs_module_get_locale_string(const obs_module_t *mod, const char *lookup_string, const char **translated_string); EXPORT const char *obs_module_get_locale_text(const obs_module_t *mod, const char *text); /** Logs loaded modules */ EXPORT void obs_log_loaded_modules(void); /** Returns the module file name */ EXPORT const char *obs_get_module_file_name(obs_module_t *module); /** Returns the module full name */ EXPORT const char *obs_get_module_name(obs_module_t *module); /** Returns the module author(s) */ EXPORT const char *obs_get_module_author(obs_module_t *module); /** Returns the module description */ EXPORT const char *obs_get_module_description(obs_module_t *module); /** Returns the module binary path */ EXPORT const char *obs_get_module_binary_path(obs_module_t *module); /** Returns the module data path */ EXPORT const char *obs_get_module_data_path(obs_module_t *module); /** Adds a source type id to the module provided sources list */ EXPORT void 
obs_module_add_source(obs_module_t *module, const char *id); /** Adds an output type id to the module provided outputs list */ EXPORT void obs_module_add_output(obs_module_t *module, const char *id); /** Adds an encoder type id to the module provided encoders list */ EXPORT void obs_module_add_encoder(obs_module_t *module, const char *id); /** Adds a service type id to the module provided services list */ EXPORT void obs_module_add_service(obs_module_t *module, const char *id); #ifndef SWIG /** * Adds a module search path to be used with obs_find_modules. If the search * path strings contain %module%, that text will be replaced with the module * name when used. * * @param bin Specifies the module's binary directory search path. * @param data Specifies the module's data directory search path. */ EXPORT void obs_add_module_path(const char *bin, const char *data); /** * Adds a module to the list of modules allowed to load in Safe Mode. * If the list is empty, all modules are allowed. * * @param name Specifies the module's name (filename sans extension). */ EXPORT void obs_add_safe_module(const char *name); /** * Adds a module to the list of core modules (which cannot be disabled). * If the list is empty, all modules are allowed. * * @param name Specifies the module's name (filename sans extension). */ EXPORT void obs_add_core_module(const char *name); /** Automatically loads all modules from module paths (convenience function) */ EXPORT void obs_load_all_modules(void); struct obs_module_failure_info { char **failed_modules; size_t count; }; EXPORT void obs_module_failure_info_free(struct obs_module_failure_info *mfi); EXPORT void obs_load_all_modules2(struct obs_module_failure_info *mfi); /** Notifies modules that module loading has finished. This function should * be called after all modules have been loaded.
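 *
 * A typical startup sequence (sketch, using only functions declared in this
 * header):
 * @code
 * obs_load_all_modules();
 * obs_log_loaded_modules();
 * obs_post_load_modules();
 * @endcode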
*/ EXPORT void obs_post_load_modules(void); struct obs_module_info { const char *bin_path; const char *data_path; }; typedef void (*obs_find_module_callback_t)(void *param, const struct obs_module_info *info); /** Finds all modules within the search paths added by obs_add_module_path. */ EXPORT void obs_find_modules(obs_find_module_callback_t callback, void *param); struct obs_module_info2 { const char *bin_path; const char *data_path; const char *name; }; typedef void (*obs_find_module_callback2_t)(void *param, const struct obs_module_info2 *info); /** Finds all modules within the search paths added by obs_add_module_path. */ EXPORT void obs_find_modules2(obs_find_module_callback2_t callback, void *param); #endif typedef void (*obs_enum_module_callback_t)(void *param, obs_module_t *module); /** Enumerates all loaded modules */ EXPORT void obs_enum_modules(obs_enum_module_callback_t callback, void *param); /** Helper function for using default module locale */ EXPORT lookup_t *obs_module_load_locale(obs_module_t *module, const char *default_locale, const char *locale); /** * Returns the location of a plugin module data file. * * @note Modules should use obs_module_file function defined in obs-module.h * as a more elegant means of getting their files without having to * specify the module parameter. * * @param module The module associated with the file to locate * @param file The file to locate * @return Path string, or NULL if not found. Use bfree to free string. */ EXPORT char *obs_find_module_file(obs_module_t *module, const char *file); /** * Adds a module name to the disabled modules list. * * @param name The name of the module to disable */ EXPORT void obs_add_disabled_module(const char *name); /** * Returns if a module can be disabled. 
* * @param name The name of the module to check * @return Boolean to indicate if module can be disabled */ EXPORT bool obs_get_module_allow_disable(const char *name); /** * Returns the path of a plugin module config file (whether it exists or not) * * @note Modules should use obs_module_config_path function defined in * obs-module.h as a more elegant means of getting their files without * having to specify the module parameter. * * @param module The module associated with the path * @param file The file to get a path to * @return Path string, or NULL if not found. Use bfree to free string. */ EXPORT char *obs_module_get_config_path(obs_module_t *module, const char *file); /** Enumerates all source types (inputs, filters, transitions, etc). */ EXPORT bool obs_enum_source_types(size_t idx, const char **id); /** * Enumerates all available inputs source types. * * Inputs are general source inputs (such as capture sources, device sources, * etc). */ EXPORT bool obs_enum_input_types(size_t idx, const char **id); EXPORT bool obs_enum_input_types2(size_t idx, const char **id, const char **unversioned_id); EXPORT const char *obs_get_latest_input_type_id(const char *unversioned_id); /** * Enumerates all available filter source types. * * Filters are sources that are used to modify the video/audio output of * other sources. */ EXPORT bool obs_enum_filter_types(size_t idx, const char **id); /** * Enumerates all available transition source types. * * Transitions are sources used to transition between two or more other * sources. */ EXPORT bool obs_enum_transition_types(size_t idx, const char **id); /** Enumerates all available output types. */ EXPORT bool obs_enum_output_types(size_t idx, const char **id); /** Enumerates all available encoder types. */ EXPORT bool obs_enum_encoder_types(size_t idx, const char **id); /** Enumerates all available service types. 
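 *
 * All of the obs_enum_*_types functions follow the same index-based pattern
 * and return false when the index is out of range; a sketch:
 * @code
 * const char *id;
 * for (size_t i = 0; obs_enum_service_types(i, &id); i++)
 *     blog(LOG_INFO, "service type: %s", id);
 * @endcode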
*/ EXPORT bool obs_enum_service_types(size_t idx, const char **id); /** Helper function for entering the OBS graphics context */ EXPORT void obs_enter_graphics(void); /** Helper function for leaving the OBS graphics context */ EXPORT void obs_leave_graphics(void); /** Gets the main audio output handler for this OBS context */ EXPORT audio_t *obs_get_audio(void); /** Gets the main video output handler for this OBS context */ EXPORT video_t *obs_get_video(void); /** Returns true if video is active, false otherwise */ EXPORT bool obs_video_active(void); /** Sets the primary output source for a channel. */ EXPORT void obs_set_output_source(uint32_t channel, obs_source_t *source); /** * Gets the primary output source for a channel and increments the reference * counter for that source. Use obs_source_release to release. */ EXPORT obs_source_t *obs_get_output_source(uint32_t channel); /** * Enumerates all input sources * * Callback function returns true to continue enumeration, or false to end * enumeration. * * Use obs_source_get_ref or obs_source_get_weak_source if you want to retain * a reference after obs_enum_sources finishes */ EXPORT void obs_enum_sources(bool (*enum_proc)(void *, obs_source_t *), void *param); /** Enumerates scenes */ EXPORT void obs_enum_scenes(bool (*enum_proc)(void *, obs_source_t *), void *param); /** Enumerates all sources (regardless of type) */ EXPORT void obs_enum_all_sources(bool (*enum_proc)(void *, obs_source_t *), void *param); /** Enumerates outputs */ EXPORT void obs_enum_outputs(bool (*enum_proc)(void *, obs_output_t *), void *param); /** Enumerates encoders */ EXPORT void obs_enum_encoders(bool (*enum_proc)(void *, obs_encoder_t *), void *param); /** Enumerates services */ EXPORT void obs_enum_services(bool (*enum_proc)(void *, obs_service_t *), void *param); /** Enumerates canvases */ EXPORT void obs_enum_canvases(bool (*enum_proc)(void *, obs_canvas_t *), void *param); /** * Gets a source by its name.
* * Increments the source reference counter; use obs_source_release to * release it when complete. */ EXPORT obs_source_t *obs_get_source_by_name(const char *name); /** * Gets a source by its UUID. * * Increments the source reference counter; use obs_source_release to * release it when complete. */ EXPORT obs_source_t *obs_get_source_by_uuid(const char *uuid); /** Get a transition source by its name. */ OBS_DEPRECATED EXPORT obs_source_t *obs_get_transition_by_name(const char *name); /** Get a transition source by its UUID. */ OBS_DEPRECATED EXPORT obs_source_t *obs_get_transition_by_uuid(const char *uuid); /** Gets an output by its name. */ EXPORT obs_output_t *obs_get_output_by_name(const char *name); /** Gets an encoder by its name. */ EXPORT obs_encoder_t *obs_get_encoder_by_name(const char *name); /** Gets a service by its name. */ EXPORT obs_service_t *obs_get_service_by_name(const char *name); /** Get a canvas by its name. */ EXPORT obs_canvas_t *obs_get_canvas_by_name(const char *name); /** Get a canvas by its UUID.
*/ EXPORT obs_canvas_t *obs_get_canvas_by_uuid(const char *uuid); enum obs_base_effect { OBS_EFFECT_DEFAULT, /**< RGB/YUV */ OBS_EFFECT_DEFAULT_RECT, /**< RGB/YUV (using texture_rect) */ OBS_EFFECT_OPAQUE, /**< RGB/YUV (alpha set to 1.0) */ OBS_EFFECT_SOLID, /**< RGB/YUV (solid color only) */ OBS_EFFECT_BICUBIC, /**< Bicubic downscale */ OBS_EFFECT_LANCZOS, /**< Lanczos downscale */ OBS_EFFECT_BILINEAR_LOWRES, /**< Bilinear low resolution downscale */ OBS_EFFECT_PREMULTIPLIED_ALPHA, /**< Premultiplied alpha */ OBS_EFFECT_REPEAT, /**< RGB/YUV (repeating) */ OBS_EFFECT_AREA, /**< Area rescale */ }; /** Returns a commonly used base effect */ EXPORT gs_effect_t *obs_get_base_effect(enum obs_base_effect effect); /** Returns the primary obs signal handler */ EXPORT signal_handler_t *obs_get_signal_handler(void); /** Returns the primary obs procedure handler */ EXPORT proc_handler_t *obs_get_proc_handler(void); /** Renders the last main output texture */ EXPORT void obs_render_main_texture(void); /** Renders the last main output texture ignoring background color */ EXPORT void obs_render_main_texture_src_color_only(void); /** Renders the last canvas output texture */ EXPORT void obs_render_canvas_texture(obs_canvas_t *canvas); /** Renders the last main output texture ignoring background color */ EXPORT void obs_render_canvas_texture_src_color_only(obs_canvas_t *canvas); /** Returns the last main output texture. This can return NULL if the texture * is unavailable. 
*/ EXPORT gs_texture_t *obs_get_main_texture(void); /** Saves a source to settings data */ EXPORT obs_data_t *obs_save_source(obs_source_t *source); /** Loads a source from settings data */ EXPORT obs_source_t *obs_load_source(obs_data_t *data); /** Loads a private source from settings data */ EXPORT obs_source_t *obs_load_private_source(obs_data_t *data); /** Send a save signal to sources */ EXPORT void obs_source_save(obs_source_t *source); /** Send a load signal to sources (soft deprecated; does not load filters) */ EXPORT void obs_source_load(obs_source_t *source); /** Send a load signal to sources */ EXPORT void obs_source_load2(obs_source_t *source); typedef void (*obs_load_source_cb)(void *private_data, obs_source_t *source); /** Loads sources from a data array */ EXPORT void obs_load_sources(obs_data_array_t *array, obs_load_source_cb cb, void *private_data); /** Saves sources to a data array */ EXPORT obs_data_array_t *obs_save_sources(void); typedef bool (*obs_save_source_filter_cb)(void *data, obs_source_t *source); EXPORT obs_data_array_t *obs_save_sources_filtered(obs_save_source_filter_cb cb, void *data); /** Reset source UUIDs. NOTE: this function is only to be used by the UI and * will be removed in a future version! 
*/ EXPORT void obs_reset_source_uuids(void); enum obs_obj_type { OBS_OBJ_TYPE_INVALID, OBS_OBJ_TYPE_SOURCE, OBS_OBJ_TYPE_OUTPUT, OBS_OBJ_TYPE_ENCODER, OBS_OBJ_TYPE_SERVICE, OBS_OBJ_TYPE_CANVAS, }; EXPORT enum obs_obj_type obs_obj_get_type(void *obj); EXPORT const char *obs_obj_get_id(void *obj); EXPORT bool obs_obj_invalid(void *obj); EXPORT void *obs_obj_get_data(void *obj); EXPORT bool obs_obj_is_private(void *obj); typedef bool (*obs_enum_audio_device_cb)(void *data, const char *name, const char *id); EXPORT bool obs_audio_monitoring_available(void); EXPORT void obs_reset_audio_monitoring(void); EXPORT void obs_enum_audio_monitoring_devices(obs_enum_audio_device_cb cb, void *data); EXPORT bool obs_set_audio_monitoring_device(const char *name, const char *id); EXPORT void obs_get_audio_monitoring_device(const char **name, const char **id); EXPORT void obs_add_tick_callback(void (*tick)(void *param, float seconds), void *param); EXPORT void obs_remove_tick_callback(void (*tick)(void *param, float seconds), void *param); EXPORT void obs_add_main_render_callback(void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param); EXPORT void obs_remove_main_render_callback(void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param); EXPORT void obs_add_main_rendered_callback(void (*rendered)(void *param), void *param); EXPORT void obs_remove_main_rendered_callback(void (*rendered)(void *param), void *param); EXPORT void obs_add_raw_video_callback(const struct video_scale_info *conversion, void (*callback)(void *param, struct video_data *frame), void *param); EXPORT void obs_add_raw_video_callback2(const struct video_scale_info *conversion, uint32_t frame_rate_divisor, void (*callback)(void *param, struct video_data *frame), void *param); EXPORT void obs_remove_raw_video_callback(void (*callback)(void *param, struct video_data *frame), void *param); EXPORT void obs_add_raw_audio_callback(size_t mix_idx, const struct audio_convert_info *conversion, 
audio_output_callback_t callback, void *param); EXPORT void obs_remove_raw_audio_callback(size_t mix_idx, audio_output_callback_t callback, void *param); EXPORT uint64_t obs_get_video_frame_time(void); EXPORT double obs_get_active_fps(void); EXPORT uint64_t obs_get_average_frame_time_ns(void); EXPORT uint64_t obs_get_frame_interval_ns(void); EXPORT uint32_t obs_get_total_frames(void); EXPORT uint32_t obs_get_lagged_frames(void); OBS_DEPRECATED EXPORT bool obs_nv12_tex_active(void); OBS_DEPRECATED EXPORT bool obs_p010_tex_active(void); EXPORT void obs_apply_private_data(obs_data_t *settings); EXPORT void obs_set_private_data(obs_data_t *settings); EXPORT obs_data_t *obs_get_private_data(void); typedef void (*obs_task_t)(void *param); enum obs_task_type { OBS_TASK_UI, OBS_TASK_GRAPHICS, OBS_TASK_AUDIO, OBS_TASK_DESTROY, }; EXPORT void obs_queue_task(enum obs_task_type type, obs_task_t task, void *param, bool wait); EXPORT bool obs_in_task_thread(enum obs_task_type type); EXPORT bool obs_wait_for_destroy_queue(void); typedef void (*obs_task_handler_t)(obs_task_t task, void *param, bool wait); EXPORT void obs_set_ui_task_handler(obs_task_handler_t handler); EXPORT obs_object_t *obs_object_get_ref(obs_object_t *object); EXPORT void obs_object_release(obs_object_t *object); EXPORT void obs_weak_object_addref(obs_weak_object_t *weak); EXPORT void obs_weak_object_release(obs_weak_object_t *weak); EXPORT obs_weak_object_t *obs_object_get_weak_object(obs_object_t *object); EXPORT obs_object_t *obs_weak_object_get_object(obs_weak_object_t *weak); EXPORT bool obs_weak_object_expired(obs_weak_object_t *weak); EXPORT bool obs_weak_object_references_object(obs_weak_object_t *weak, obs_object_t *object); /* ------------------------------------------------------------------------- */ /* View context */ /** * Creates a view context. * * A view can be used for things like separate previews, or drawing * sources separately. 
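 *
 * A minimal sketch (hypothetical 'source') of rendering through a private
 * view:
 * @code
 * obs_view_t *view = obs_view_create();
 * obs_view_set_source(view, 0, source);
 * obs_view_render(view); // call from within a draw/graphics callback
 * obs_view_destroy(view);
 * @endcode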
*/ EXPORT obs_view_t *obs_view_create(void); /** Destroys this view context */ EXPORT void obs_view_destroy(obs_view_t *view); /** Sets the source to be used for this view context. */ EXPORT void obs_view_set_source(obs_view_t *view, uint32_t channel, obs_source_t *source); /** Gets the source currently in use for this view context */ EXPORT obs_source_t *obs_view_get_source(obs_view_t *view, uint32_t channel); /** Renders the sources of this view context */ EXPORT void obs_view_render(obs_view_t *view); /** Adds a view to the main render loop, with current obs_get_video_info state */ EXPORT video_t *obs_view_add(obs_view_t *view); /** Adds a view to the main render loop, with custom video settings */ EXPORT video_t *obs_view_add2(obs_view_t *view, struct obs_video_info *ovi); /** Removes a view from the main render loop */ EXPORT void obs_view_remove(obs_view_t *view); /** Enumerate the video info of all mixes using the specified view context */ EXPORT void obs_view_enum_video_info(obs_view_t *view, bool (*enum_proc)(void *, struct obs_video_info *), void *param); /* ------------------------------------------------------------------------- */ /* Display context */ /** * Adds a new window display linked to the main render pipeline. This creates * a new swap chain which updates every frame. * * @param graphics_data The swap chain initialization data. * @param background_color The initial background color of the display. * @return The new display context, or NULL if failed. */ EXPORT obs_display_t *obs_display_create(const struct gs_init_data *graphics_data, uint32_t background_color); /** Destroys a display context */ EXPORT void obs_display_destroy(obs_display_t *display); /** Changes the size of this display */ EXPORT void obs_display_resize(obs_display_t *display, uint32_t cx, uint32_t cy); /** Updates the color space of this display */ EXPORT void obs_display_update_color_space(obs_display_t *display); /** * Adds a draw callback for this display context * * @param display The display context.
* @param draw The draw callback which is called each time a frame * updates. * @param param The user data to be associated with this draw callback. */ EXPORT void obs_display_add_draw_callback(obs_display_t *display, void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param); /** Removes a draw callback for this display context */ EXPORT void obs_display_remove_draw_callback(obs_display_t *display, void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param); EXPORT void obs_display_set_enabled(obs_display_t *display, bool enable); EXPORT bool obs_display_enabled(obs_display_t *display); EXPORT void obs_display_set_background_color(obs_display_t *display, uint32_t color); EXPORT void obs_display_size(obs_display_t *display, uint32_t *width, uint32_t *height); /* ------------------------------------------------------------------------- */ /* Sources */ /** Returns the translated display name of a source */ EXPORT const char *obs_source_get_display_name(const char *id); /** Returns a pointer to the module which provides the source */ EXPORT obs_module_t *obs_source_get_module(const char *id); /** Returns the load state of a source's module given the id */ EXPORT enum obs_module_load_state obs_source_load_state(const char *id); /** * Creates a source of the specified type with the specified settings. * * The "source" context is used for anything related to presenting * or modifying video/audio. Use obs_source_release to release it. */ EXPORT obs_source_t *obs_source_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data); EXPORT obs_source_t *obs_source_create_private(const char *id, const char *name, obs_data_t *settings); /* if source has OBS_SOURCE_DO_NOT_DUPLICATE output flag set, only returns a * reference */ EXPORT obs_source_t *obs_source_duplicate(obs_source_t *source, const char *desired_name, bool create_private); /** * Adds/releases a reference to a source. 
When the last reference is * released, the source is destroyed. */ EXPORT void obs_source_release(obs_source_t *source); EXPORT void obs_weak_source_addref(obs_weak_source_t *weak); EXPORT void obs_weak_source_release(obs_weak_source_t *weak); EXPORT obs_source_t *obs_source_get_ref(obs_source_t *source); EXPORT obs_weak_source_t *obs_source_get_weak_source(obs_source_t *source); EXPORT obs_source_t *obs_weak_source_get_source(obs_weak_source_t *weak); EXPORT bool obs_weak_source_expired(obs_weak_source_t *weak); EXPORT bool obs_weak_source_references_source(obs_weak_source_t *weak, obs_source_t *source); /** Notifies all references that the source should be released */ EXPORT void obs_source_remove(obs_source_t *source); /** Returns true if the source should be released */ EXPORT bool obs_source_removed(const obs_source_t *source); /** The 'hidden' flag is not the same as a sceneitem's visibility. It is a * property that determines whether the source can be found through searches. */ /** Simply sets a 'hidden' flag when the source is still alive but shouldn't be found */ EXPORT void obs_source_set_hidden(obs_source_t *source, bool hidden); /** Returns the current 'hidden' state on the source */ EXPORT bool obs_source_is_hidden(obs_source_t *source); /** Returns capability flags of a source */ EXPORT uint32_t obs_source_get_output_flags(const obs_source_t *source); /** Returns capability flags of a source type */ EXPORT uint32_t obs_get_source_output_flags(const char *id); /** Gets the default settings for a source type */ EXPORT obs_data_t *obs_get_source_defaults(const char *id); /** Returns the property list, if any.
Free with obs_properties_destroy */ EXPORT obs_properties_t *obs_get_source_properties(const char *id); EXPORT obs_missing_files_t *obs_source_get_missing_files(const obs_source_t *source); EXPORT void obs_source_replace_missing_file(obs_missing_file_cb cb, obs_source_t *source, const char *new_path, void *data); /** Returns whether the source has custom properties or not */ EXPORT bool obs_is_source_configurable(const char *id); EXPORT bool obs_source_configurable(const obs_source_t *source); /** * Returns the properties list for a specific existing source. Free with * obs_properties_destroy */ EXPORT obs_properties_t *obs_source_properties(const obs_source_t *source); /** Updates settings for this source */ EXPORT void obs_source_update(obs_source_t *source, obs_data_t *settings); EXPORT void obs_source_reset_settings(obs_source_t *source, obs_data_t *settings); /** Renders a video source. */ EXPORT void obs_source_video_render(obs_source_t *source); /** Gets the width of a source (if it has video) */ EXPORT uint32_t obs_source_get_width(obs_source_t *source); /** Gets the height of a source (if it has video) */ EXPORT uint32_t obs_source_get_height(obs_source_t *source); /** Gets the color space of a source (if it has video) */ EXPORT enum gs_color_space obs_source_get_color_space(obs_source_t *source, size_t count, const enum gs_color_space *preferred_spaces); /** Hints whether or not the source will blend texels */ EXPORT bool obs_source_get_texcoords_centered(obs_source_t *source); /** * If the source is a filter, returns the parent source of the filter. Only * guaranteed to be valid inside of the video_render, filter_audio, * filter_video, and filter_remove callbacks. */ EXPORT obs_source_t *obs_filter_get_parent(const obs_source_t *filter); /** * If the source is a filter, returns the target source of the filter. Only * guaranteed to be valid inside of the video_render, filter_audio, * filter_video, and filter_remove callbacks. 
*/ EXPORT obs_source_t *obs_filter_get_target(const obs_source_t *filter); /** Used to directly render a non-async source without any filter processing */ EXPORT void obs_source_default_render(obs_source_t *source); /** Adds a filter to the source (which is used whenever the source is used) */ EXPORT void obs_source_filter_add(obs_source_t *source, obs_source_t *filter); /** Removes a filter from the source */ EXPORT void obs_source_filter_remove(obs_source_t *source, obs_source_t *filter); /** Modifies the order of a specific filter */ EXPORT void obs_source_filter_set_order(obs_source_t *source, obs_source_t *filter, enum obs_order_movement movement); /** Gets filter index */ EXPORT int obs_source_filter_get_index(obs_source_t *source, obs_source_t *filter); /** Sets filter index */ EXPORT void obs_source_filter_set_index(obs_source_t *source, obs_source_t *filter, size_t index); /** Gets the settings string for a source */ EXPORT obs_data_t *obs_source_get_settings(const obs_source_t *source); /** Gets the name of a source */ EXPORT const char *obs_source_get_name(const obs_source_t *source); /** Sets the name of a source */ EXPORT void obs_source_set_name(obs_source_t *source, const char *name); /** Gets the UUID of a source */ EXPORT const char *obs_source_get_uuid(const obs_source_t *source); /** Gets the source type */ EXPORT enum obs_source_type obs_source_get_type(const obs_source_t *source); /** Gets the source identifier */ EXPORT const char *obs_source_get_id(const obs_source_t *source); EXPORT const char *obs_source_get_unversioned_id(const obs_source_t *source); /** Returns the signal handler for a source */ EXPORT signal_handler_t *obs_source_get_signal_handler(const obs_source_t *source); /** Returns the procedure handler for a source */ EXPORT proc_handler_t *obs_source_get_proc_handler(const obs_source_t *source); /** Sets the user volume for a source that has audio output */ EXPORT void obs_source_set_volume(obs_source_t *source, float volume); 
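/*
 * Example (sketch, 'source' assumed valid): volume is a linear amplitude
 * multiplier, so unity gain is 1.0f and roughly -6 dB is 0.5f:
 *
 *   obs_source_set_volume(source, 0.5f);
 *   float vol = obs_source_get_volume(source);
 */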
/** Gets the user volume for a source that has audio output */ EXPORT float obs_source_get_volume(const obs_source_t *source); /** Gets the speaker layout of a source */ EXPORT enum speaker_layout obs_source_get_speaker_layout(obs_source_t *source); /** Sets the balance value for a stereo audio source */ EXPORT void obs_source_set_balance_value(obs_source_t *source, float balance); /** Gets the balance value for a stereo audio source */ EXPORT float obs_source_get_balance_value(const obs_source_t *source); /** Sets the audio sync offset (in nanoseconds) for a source */ EXPORT void obs_source_set_sync_offset(obs_source_t *source, int64_t offset); /** Gets the audio sync offset (in nanoseconds) for a source */ EXPORT int64_t obs_source_get_sync_offset(const obs_source_t *source); /** Enumerates active child sources used by this source */ EXPORT void obs_source_enum_active_sources(obs_source_t *source, obs_source_enum_proc_t enum_callback, void *param); /** Enumerates the entire active child source tree used by this source */ EXPORT void obs_source_enum_active_tree(obs_source_t *source, obs_source_enum_proc_t enum_callback, void *param); EXPORT void obs_source_enum_full_tree(obs_source_t *source, obs_source_enum_proc_t enum_callback, void *param); /** Returns true if active, false if not */ EXPORT bool obs_source_active(const obs_source_t *source); /** * Returns true if currently displayed somewhere (active or not), false if not */ EXPORT bool obs_source_showing(const obs_source_t *source); /** Unused flag */ #define OBS_SOURCE_FLAG_UNUSED_1 (1 << 0) /** Specifies to force audio to mono */ #define OBS_SOURCE_FLAG_FORCE_MONO (1 << 1) /** * Sets source flags. Note that these are different from the main output * flags. These are generally things that can be set by the source or user, * while the output flags are more used to determine capabilities of a source. */ EXPORT void obs_source_set_flags(obs_source_t *source, uint32_t flags); /** Gets source flags. 
*/ EXPORT uint32_t obs_source_get_flags(const obs_source_t *source); /** * Sets audio mixer flags. These flags are used to specify which mixers * the source's audio should be applied to. */ EXPORT void obs_source_set_audio_mixers(obs_source_t *source, uint32_t mixers); /** Gets audio mixer flags */ EXPORT uint32_t obs_source_get_audio_mixers(const obs_source_t *source); /** * Increments the 'showing' reference counter to indicate that the source is * being shown somewhere. If the reference counter was 0, will call the 'show' * callback. */ EXPORT void obs_source_inc_showing(obs_source_t *source); /** * Increments the 'active' reference counter to indicate that the source is * fully active. If the reference counter was 0, will call the 'activate' * callback. * * Unlike obs_source_inc_showing, this will cause children of this source to be * considered showing as well (currently used by transition previews to make * the stinger transition show correctly). obs_source_inc_showing should * generally be used instead. */ EXPORT void obs_source_inc_active(obs_source_t *source); /** * Decrements the 'showing' reference counter to indicate that the source is * no longer being shown somewhere. If the reference counter is set to 0, * will call the 'hide' callback */ EXPORT void obs_source_dec_showing(obs_source_t *source); /** * Decrements the 'active' reference counter to indicate that the source is no * longer fully active. If the reference counter is set to 0, will call the * 'deactivate' callback * * Unlike obs_source_dec_showing, this will cause children of this source to be * considered not showing as well. obs_source_dec_showing should generally be * used instead. */ EXPORT void obs_source_dec_active(obs_source_t *source); /** Enumerates filters assigned to the source */ EXPORT void obs_source_enum_filters(obs_source_t *source, obs_source_enum_proc_t callback, void *param); /** Gets a filter of a source by its display name. 
*/ EXPORT obs_source_t *obs_source_get_filter_by_name(obs_source_t *source, const char *name); /** Gets the number of filters the source has. */ EXPORT size_t obs_source_filter_count(const obs_source_t *source); EXPORT void obs_source_copy_filters(obs_source_t *dst, obs_source_t *src); EXPORT void obs_source_copy_single_filter(obs_source_t *dst, obs_source_t *filter); EXPORT bool obs_source_enabled(const obs_source_t *source); EXPORT void obs_source_set_enabled(obs_source_t *source, bool enabled); EXPORT bool obs_source_muted(const obs_source_t *source); EXPORT void obs_source_set_muted(obs_source_t *source, bool muted); EXPORT bool obs_source_push_to_mute_enabled(obs_source_t *source); EXPORT void obs_source_enable_push_to_mute(obs_source_t *source, bool enabled); EXPORT uint64_t obs_source_get_push_to_mute_delay(obs_source_t *source); EXPORT void obs_source_set_push_to_mute_delay(obs_source_t *source, uint64_t delay); EXPORT bool obs_source_push_to_talk_enabled(obs_source_t *source); EXPORT void obs_source_enable_push_to_talk(obs_source_t *source, bool enabled); EXPORT uint64_t obs_source_get_push_to_talk_delay(obs_source_t *source); EXPORT void obs_source_set_push_to_talk_delay(obs_source_t *source, uint64_t delay); typedef void (*obs_source_audio_capture_t)(void *param, obs_source_t *source, const struct audio_data *audio_data, bool muted); EXPORT void obs_source_add_audio_pause_callback(obs_source_t *source, signal_callback_t callback, void *param); EXPORT void obs_source_remove_audio_pause_callback(obs_source_t *source, signal_callback_t callback, void *param); EXPORT void obs_source_add_audio_capture_callback(obs_source_t *source, obs_source_audio_capture_t callback, void *param); EXPORT void obs_source_remove_audio_capture_callback(obs_source_t *source, obs_source_audio_capture_t callback, void *param); /** * For an Audio Output Capture source (like 'wasapi_output_capture') used for 'Desktop Audio', this checks whether the * device is also used for 
monitoring. A signal to obs core struct is then emitted to trigger deduplication logic at * the end of an audio tick. */ EXPORT void obs_source_audio_output_capture_device_changed(obs_source_t *source, const char *device_id); typedef void (*obs_source_caption_t)(void *param, obs_source_t *source, const struct obs_source_cea_708 *captions); EXPORT void obs_source_add_caption_callback(obs_source_t *source, obs_source_caption_t callback, void *param); EXPORT void obs_source_remove_caption_callback(obs_source_t *source, obs_source_caption_t callback, void *param); enum obs_deinterlace_mode { OBS_DEINTERLACE_MODE_DISABLE, OBS_DEINTERLACE_MODE_DISCARD, OBS_DEINTERLACE_MODE_RETRO, OBS_DEINTERLACE_MODE_BLEND, OBS_DEINTERLACE_MODE_BLEND_2X, OBS_DEINTERLACE_MODE_LINEAR, OBS_DEINTERLACE_MODE_LINEAR_2X, OBS_DEINTERLACE_MODE_YADIF, OBS_DEINTERLACE_MODE_YADIF_2X, }; enum obs_deinterlace_field_order { OBS_DEINTERLACE_FIELD_ORDER_TOP, OBS_DEINTERLACE_FIELD_ORDER_BOTTOM, }; EXPORT void obs_source_set_deinterlace_mode(obs_source_t *source, enum obs_deinterlace_mode mode); EXPORT enum obs_deinterlace_mode obs_source_get_deinterlace_mode(const obs_source_t *source); EXPORT void obs_source_set_deinterlace_field_order(obs_source_t *source, enum obs_deinterlace_field_order field_order); EXPORT enum obs_deinterlace_field_order obs_source_get_deinterlace_field_order(const obs_source_t *source); enum obs_monitoring_type { OBS_MONITORING_TYPE_NONE, OBS_MONITORING_TYPE_MONITOR_ONLY, OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT, }; EXPORT void obs_source_set_monitoring_type(obs_source_t *source, enum obs_monitoring_type type); EXPORT enum obs_monitoring_type obs_source_get_monitoring_type(const obs_source_t *source); /** Gets private front-end settings data. This data is saved/loaded * automatically. Returns an incremented reference. 
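For example, a front end might stash UI state there (a sketch; the key name "mixer_hidden" is illustrative, since private settings hold arbitrary front-end data):

```c
#include <obs.h>

// Sketch: store a front-end flag in a source's private settings. The
// returned obs_data_t reference is incremented and must be released.
static void set_hidden_flag(obs_source_t *source, bool hidden)
{
	obs_data_t *priv = obs_source_get_private_settings(source);
	obs_data_set_bool(priv, "mixer_hidden", hidden);
	obs_data_release(priv);
}
```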
*/ EXPORT obs_data_t *obs_source_get_private_settings(obs_source_t *item); EXPORT obs_data_array_t *obs_source_backup_filters(obs_source_t *source); EXPORT void obs_source_restore_filters(obs_source_t *source, obs_data_array_t *array); /* ------------------------------------------------------------------------- */ /* Functions used by sources */ EXPORT void *obs_source_get_type_data(obs_source_t *source); /** * Helper function to set the color matrix information when drawing the source. * * @param color_matrix The color matrix. Assigns to the 'color_matrix' * effect variable. * @param color_range_min The minimum color range. Assigns to the * 'color_range_min' effect variable. If NULL, * {0.0f, 0.0f, 0.0f} is used. * @param color_range_max The maximum color range. Assigns to the * 'color_range_max' effect variable. If NULL, * {1.0f, 1.0f, 1.0f} is used. */ EXPORT void obs_source_draw_set_color_matrix(const struct matrix4 *color_matrix, const struct vec3 *color_range_min, const struct vec3 *color_range_max); /** * Helper function to draw sprites for a source (synchronous video). * * @param image The sprite texture to draw. Assigns to the 'image' variable * of the current effect. * @param x X position of the sprite. * @param y Y position of the sprite. * @param cx Width of the sprite. If 0, uses the texture width. * @param cy Height of the sprite. If 0, uses the texture height. * @param flip Specifies whether to flip the image vertically. */ EXPORT void obs_source_draw(gs_texture_t *image, int x, int y, uint32_t cx, uint32_t cy, bool flip); /** * Outputs asynchronous video data. Set to NULL to deactivate the texture * * NOTE: Non-YUV formats will always be treated as full range with this * function! Use obs_source_output_video2 instead if partial range support is * desired for non-YUV video formats. 
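A minimal sketch of feeding async video; the capture path supplying the pixel buffer, dimensions, and lifetime is assumed:

```c
#include <obs.h>
#include <util/platform.h>

// Sketch: push one packed BGRA frame to an async source. Because BGRA is
// not a YUV format, this path treats it as full range; use
// obs_source_output_video2 with obs_source_frame2 for partial range.
static void push_bgra_frame(obs_source_t *source, uint8_t *pixels,
			    uint32_t width, uint32_t height)
{
	struct obs_source_frame frame = {0};

	frame.data[0] = pixels;
	frame.linesize[0] = width * 4; // 4 bytes per BGRA pixel
	frame.width = width;
	frame.height = height;
	frame.format = VIDEO_FORMAT_BGRA;
	frame.timestamp = os_gettime_ns();

	obs_source_output_video(source, &frame);
}
```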
*/ EXPORT void obs_source_output_video(obs_source_t *source, const struct obs_source_frame *frame); EXPORT void obs_source_output_video2(obs_source_t *source, const struct obs_source_frame2 *frame); EXPORT void obs_source_set_async_rotation(obs_source_t *source, long rotation); EXPORT void obs_source_output_cea708(obs_source_t *source, const struct obs_source_cea_708 *captions); /** * Preloads asynchronous video data to allow instantaneous playback * * NOTE: Non-YUV formats will always be treated as full range with this * function! Use obs_source_preload_video2 instead if partial range support is * desired for non-YUV video formats. */ EXPORT void obs_source_preload_video(obs_source_t *source, const struct obs_source_frame *frame); EXPORT void obs_source_preload_video2(obs_source_t *source, const struct obs_source_frame2 *frame); /** Shows any preloaded video data */ EXPORT void obs_source_show_preloaded_video(obs_source_t *source); /** * Sets current async video frame immediately * * NOTE: Non-YUV formats will always be treated as full range with this * function! Use obs_source_preload_video2 instead if partial range support is * desired for non-YUV video formats. */ EXPORT void obs_source_set_video_frame(obs_source_t *source, const struct obs_source_frame *frame); EXPORT void obs_source_set_video_frame2(obs_source_t *source, const struct obs_source_frame2 *frame); /** Outputs audio data (always asynchronous) */ EXPORT void obs_source_output_audio(obs_source_t *source, const struct obs_source_audio *audio); /** Signal an update to any currently used properties via 'update_properties' */ EXPORT void obs_source_update_properties(obs_source_t *source); /** Gets the current async video frame */ EXPORT struct obs_source_frame *obs_source_get_frame(obs_source_t *source); /** Releases the current async video frame */ EXPORT void obs_source_release_frame(obs_source_t *source, struct obs_source_frame *frame); /** * Default RGB filter handler for generic effect filters. 
Processes the * filter chain and renders it to a texture if needed, then the filter is * drawn with obs_source_process_filter_end. * * After calling this, set your parameters for the effect, then call * obs_source_process_filter_end to draw the filter. * * Returns true if filtering should continue, false if the filter is bypassed * for whatever reason. */ EXPORT bool obs_source_process_filter_begin(obs_source_t *filter, enum gs_color_format format, enum obs_allow_direct_render allow_direct); EXPORT bool obs_source_process_filter_begin_with_color_space(obs_source_t *filter, enum gs_color_format format, enum gs_color_space space, enum obs_allow_direct_render allow_direct); /** * Draws the filter. * * First call obs_source_process_filter_begin, then set the effect * parameters, and then call this function to finalize the filter. */ EXPORT void obs_source_process_filter_end(obs_source_t *filter, gs_effect_t *effect, uint32_t width, uint32_t height); /** * Draws the filter with a specific technique. * * First call obs_source_process_filter_begin, then set the effect * parameters, and then call this function to finalize the filter. */ EXPORT void obs_source_process_filter_tech_end(obs_source_t *filter, gs_effect_t *effect, uint32_t width, uint32_t height, const char *tech_name); /** Skips the filter if the filter is invalid and cannot be rendered */ EXPORT void obs_source_skip_video_filter(obs_source_t *filter); /** * Adds an active child source. Must be called by parent sources on child * sources when the child is added and active. This ensures that the source is * properly activated if the parent is active. * * @returns true if source can be added, false if it causes recursion */ EXPORT bool obs_source_add_active_child(obs_source_t *parent, obs_source_t *child); /** * Removes an active child source. Must be called by parent sources on child * sources when the child is removed or inactive. 
This ensures that the source * is properly deactivated if the parent is no longer active. */ EXPORT void obs_source_remove_active_child(obs_source_t *parent, obs_source_t *child); /** Sends a mouse down/up event to a source */ EXPORT void obs_source_send_mouse_click(obs_source_t *source, const struct obs_mouse_event *event, int32_t type, bool mouse_up, uint32_t click_count); /** Sends a mouse move event to a source. */ EXPORT void obs_source_send_mouse_move(obs_source_t *source, const struct obs_mouse_event *event, bool mouse_leave); /** Sends a mouse wheel event to a source */ EXPORT void obs_source_send_mouse_wheel(obs_source_t *source, const struct obs_mouse_event *event, int x_delta, int y_delta); /** Sends a got-focus or lost-focus event to a source */ EXPORT void obs_source_send_focus(obs_source_t *source, bool focus); /** Sends a key up/down event to a source */ EXPORT void obs_source_send_key_click(obs_source_t *source, const struct obs_key_event *event, bool key_up); /** Sets the default source flags. */ EXPORT void obs_source_set_default_flags(obs_source_t *source, uint32_t flags); /** Gets the base width for a source (not taking into account filtering) */ EXPORT uint32_t obs_source_get_base_width(obs_source_t *source); /** Gets the base height for a source (not taking into account filtering) */ EXPORT uint32_t obs_source_get_base_height(obs_source_t *source); EXPORT bool obs_source_audio_pending(const obs_source_t *source); EXPORT uint64_t obs_source_get_audio_timestamp(const obs_source_t *source); EXPORT void obs_source_get_audio_mix(const obs_source_t *source, struct obs_source_audio_mix *audio); EXPORT void obs_source_set_async_unbuffered(obs_source_t *source, bool unbuffered); EXPORT bool obs_source_async_unbuffered(const obs_source_t *source); /** Used to decouple audio from video so that audio doesn't attempt to sync up * with video, i.e. audio acts independently. Only works when in unbuffered * mode. 
*/ EXPORT void obs_source_set_async_decoupled(obs_source_t *source, bool decouple); EXPORT bool obs_source_async_decoupled(const obs_source_t *source); EXPORT void obs_source_set_audio_active(obs_source_t *source, bool show); EXPORT bool obs_source_audio_active(const obs_source_t *source); EXPORT uint32_t obs_source_get_last_obs_version(const obs_source_t *source); /** Media controls */ EXPORT void obs_source_media_play_pause(obs_source_t *source, bool pause); EXPORT void obs_source_media_restart(obs_source_t *source); EXPORT void obs_source_media_stop(obs_source_t *source); EXPORT void obs_source_media_next(obs_source_t *source); EXPORT void obs_source_media_previous(obs_source_t *source); EXPORT int64_t obs_source_media_get_duration(obs_source_t *source); EXPORT int64_t obs_source_media_get_time(obs_source_t *source); EXPORT void obs_source_media_set_time(obs_source_t *source, int64_t ms); EXPORT enum obs_media_state obs_source_media_get_state(obs_source_t *source); EXPORT void obs_source_media_started(obs_source_t *source); EXPORT void obs_source_media_ended(obs_source_t *source); /** Get canvas this source belongs to (reference incremented) */ EXPORT obs_canvas_t *obs_source_get_canvas(const obs_source_t *source); /* ------------------------------------------------------------------------- */ /* Transition-specific functions */ enum obs_transition_target { OBS_TRANSITION_SOURCE_A, OBS_TRANSITION_SOURCE_B, }; EXPORT obs_source_t *obs_transition_get_source(obs_source_t *transition, enum obs_transition_target target); EXPORT void obs_transition_clear(obs_source_t *transition); EXPORT obs_source_t *obs_transition_get_active_source(obs_source_t *transition); enum obs_transition_mode { OBS_TRANSITION_MODE_AUTO, OBS_TRANSITION_MODE_MANUAL, }; EXPORT bool obs_transition_start(obs_source_t *transition, enum obs_transition_mode mode, uint32_t duration_ms, obs_source_t *dest); EXPORT void obs_transition_set(obs_source_t *transition, obs_source_t *source); EXPORT void 
obs_transition_set_manual_time(obs_source_t *transition, float t); EXPORT void obs_transition_set_manual_torque(obs_source_t *transition, float torque, float clamp); enum obs_transition_scale_type { OBS_TRANSITION_SCALE_MAX_ONLY, OBS_TRANSITION_SCALE_ASPECT, OBS_TRANSITION_SCALE_STRETCH, }; EXPORT void obs_transition_set_scale_type(obs_source_t *transition, enum obs_transition_scale_type type); EXPORT enum obs_transition_scale_type obs_transition_get_scale_type(const obs_source_t *transition); EXPORT void obs_transition_set_alignment(obs_source_t *transition, uint32_t alignment); EXPORT uint32_t obs_transition_get_alignment(const obs_source_t *transition); EXPORT void obs_transition_set_size(obs_source_t *transition, uint32_t cx, uint32_t cy); EXPORT void obs_transition_get_size(const obs_source_t *transition, uint32_t *cx, uint32_t *cy); EXPORT bool obs_transition_is_active(obs_source_t *transition); /* functions used by transitions */ /** * Enables fixed transitions (videos or specific types of transitions that * are of fixed duration and linearly interpolated) */ EXPORT void obs_transition_enable_fixed(obs_source_t *transition, bool enable, uint32_t duration_ms); EXPORT bool obs_transition_fixed(obs_source_t *transition); typedef void (*obs_transition_video_render_callback_t)(void *data, gs_texture_t *a, gs_texture_t *b, float t, uint32_t cx, uint32_t cy); typedef float (*obs_transition_audio_mix_callback_t)(void *data, float t); EXPORT float obs_transition_get_time(obs_source_t *transition); EXPORT void obs_transition_force_stop(obs_source_t *transition); EXPORT void obs_transition_video_render(obs_source_t *transition, obs_transition_video_render_callback_t callback); EXPORT void obs_transition_video_render2(obs_source_t *transition, obs_transition_video_render_callback_t callback, gs_texture_t *placeholder_texture); EXPORT enum gs_color_space obs_transition_video_get_color_space(obs_source_t *transition); /** Directly renders its sub-source instead of to 
texture. Returns false if no * longer transitioning */ EXPORT bool obs_transition_video_render_direct(obs_source_t *transition, enum obs_transition_target target); EXPORT bool obs_transition_audio_render(obs_source_t *transition, uint64_t *ts_out, struct obs_source_audio_mix *audio, uint32_t mixers, size_t channels, size_t sample_rate, obs_transition_audio_mix_callback_t mix_a_callback, obs_transition_audio_mix_callback_t mix_b_callback); /* swaps transition sources and textures as an optimization and to reduce * memory usage when switching between transitions */ EXPORT void obs_transition_swap_begin(obs_source_t *tr_dest, obs_source_t *tr_source); EXPORT void obs_transition_swap_end(obs_source_t *tr_dest, obs_source_t *tr_source); /* ------------------------------------------------------------------------- */ /* Scenes */ /** * Creates a scene. * * A scene is a source which is a container of other sources with specific * display orientations. Scenes can also be used like any other source. */ EXPORT obs_scene_t *obs_scene_create(const char *name); EXPORT obs_scene_t *obs_scene_create_private(const char *name); enum obs_scene_duplicate_type { OBS_SCENE_DUP_REFS, /**< Source refs only */ OBS_SCENE_DUP_COPY, /**< Fully duplicate */ OBS_SCENE_DUP_PRIVATE_REFS, /**< Source refs only (as private) */ OBS_SCENE_DUP_PRIVATE_COPY, /**< Fully duplicate (as private) */ }; /** * Duplicates a scene. 
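A sketch of building a scene and then fully duplicating it (source id "color_source" is the stock color source; all names are illustrative):

```c
#include <obs.h>

// Sketch: build a scene, place a source in it, then fully duplicate it.
static obs_scene_t *clone_example_scene(void)
{
	obs_scene_t *scene = obs_scene_create("Example Scene");
	obs_source_t *src =
		obs_source_create("color_source", "Background", NULL, NULL);

	obs_sceneitem_t *item = obs_scene_add(scene, src);
	struct vec2 pos;
	vec2_set(&pos, 100.0f, 100.0f);
	obs_sceneitem_set_pos(item, &pos);
	obs_source_release(src); // the scene item holds its own reference

	// OBS_SCENE_DUP_COPY duplicates the contained sources as well;
	// OBS_SCENE_DUP_REFS would only add references to them.
	obs_scene_t *copy =
		obs_scene_duplicate(scene, "Example Copy", OBS_SCENE_DUP_COPY);
	obs_scene_release(scene);
	return copy;
}
```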
*/ EXPORT obs_scene_t *obs_scene_duplicate(obs_scene_t *scene, const char *name, enum obs_scene_duplicate_type type); EXPORT void obs_scene_release(obs_scene_t *scene); EXPORT obs_scene_t *obs_scene_get_ref(obs_scene_t *scene); /** Gets the scene's source context */ EXPORT obs_source_t *obs_scene_get_source(const obs_scene_t *scene); /** Gets the scene from its source, or NULL if not a scene */ EXPORT obs_scene_t *obs_scene_from_source(const obs_source_t *source); /** Determines whether a source is within a scene */ EXPORT obs_sceneitem_t *obs_scene_find_source(obs_scene_t *scene, const char *name); EXPORT obs_sceneitem_t *obs_scene_find_source_recursive(obs_scene_t *scene, const char *name); EXPORT obs_sceneitem_t *obs_scene_find_sceneitem_by_id(obs_scene_t *scene, int64_t id); /** Gets scene by name, increments the reference */ static inline obs_scene_t *obs_get_scene_by_name(const char *name) { obs_source_t *source = obs_get_source_by_name(name); obs_scene_t *scene = obs_scene_from_source(source); if (!scene) { obs_source_release(source); return NULL; } return scene; } /** Enumerates sources within a scene */ EXPORT void obs_scene_enum_items(obs_scene_t *scene, bool (*callback)(obs_scene_t *, obs_sceneitem_t *, void *), void *param); EXPORT bool obs_scene_reorder_items(obs_scene_t *scene, obs_sceneitem_t *const *item_order, size_t item_order_size); struct obs_sceneitem_order_info { obs_sceneitem_t *group; obs_sceneitem_t *item; }; EXPORT bool obs_scene_reorder_items2(obs_scene_t *scene, struct obs_sceneitem_order_info *item_order, size_t item_order_size); EXPORT bool obs_source_is_scene(const obs_source_t *source); EXPORT bool obs_source_type_is_scene(const char *id); /** Adds/creates a new scene item for a source */ EXPORT obs_sceneitem_t *obs_scene_add(obs_scene_t *scene, obs_source_t *source); typedef void (*obs_scene_atomic_update_func)(void *, obs_scene_t *scene); EXPORT void obs_scene_atomic_update(obs_scene_t *scene, obs_scene_atomic_update_func func, 
void *data); EXPORT void obs_sceneitem_addref(obs_sceneitem_t *item); EXPORT void obs_sceneitem_release(obs_sceneitem_t *item); /** Removes a scene item. */ EXPORT void obs_sceneitem_remove(obs_sceneitem_t *item); /** Adds scene items to a scene from a data array. */ EXPORT void obs_sceneitems_add(obs_scene_t *scene, obs_data_array_t *data); /** Saves a scene item into the data array 'arr' */ EXPORT void obs_sceneitem_save(obs_sceneitem_t *item, obs_data_array_t *arr); /** Sets the ID of a scene item */ EXPORT void obs_sceneitem_set_id(obs_sceneitem_t *sceneitem, int64_t id); /** Saves all the transform states of the scene's scene items */ EXPORT obs_data_t *obs_scene_save_transform_states(obs_scene_t *scene, bool all_items); /** Loads all the transform states of the scene items in that scene */ EXPORT void obs_scene_load_transform_states(const char *state); /** Gets a scene item's order in its scene */ EXPORT int obs_sceneitem_get_order_position(obs_sceneitem_t *item); /** Gets the scene parent associated with the scene item. */ EXPORT obs_scene_t *obs_sceneitem_get_scene(const obs_sceneitem_t *item); /** Gets the source of a scene item. */ EXPORT obs_source_t *obs_sceneitem_get_source(const obs_sceneitem_t *item); /* FIXME: The following functions should be deprecated and replaced with a way * to specify saveable private user data. 
-Lain */ EXPORT void obs_sceneitem_select(obs_sceneitem_t *item, bool select); EXPORT bool obs_sceneitem_selected(const obs_sceneitem_t *item); EXPORT bool obs_sceneitem_locked(const obs_sceneitem_t *item); EXPORT bool obs_sceneitem_set_locked(obs_sceneitem_t *item, bool lock); /* Functions for getting/setting specific orientation of a scene item */ EXPORT void obs_sceneitem_set_pos(obs_sceneitem_t *item, const struct vec2 *pos); EXPORT void obs_sceneitem_set_rot(obs_sceneitem_t *item, float rot_deg); EXPORT void obs_sceneitem_set_scale(obs_sceneitem_t *item, const struct vec2 *scale); EXPORT void obs_sceneitem_set_alignment(obs_sceneitem_t *item, uint32_t alignment); EXPORT void obs_sceneitem_set_order(obs_sceneitem_t *item, enum obs_order_movement movement); EXPORT void obs_sceneitem_set_order_position(obs_sceneitem_t *item, int position); EXPORT void obs_sceneitem_set_bounds_type(obs_sceneitem_t *item, enum obs_bounds_type type); EXPORT void obs_sceneitem_set_bounds_alignment(obs_sceneitem_t *item, uint32_t alignment); EXPORT void obs_sceneitem_set_bounds_crop(obs_sceneitem_t *item, bool crop); EXPORT void obs_sceneitem_set_bounds(obs_sceneitem_t *item, const struct vec2 *bounds); EXPORT int64_t obs_sceneitem_get_id(const obs_sceneitem_t *item); EXPORT void obs_sceneitem_get_pos(const obs_sceneitem_t *item, struct vec2 *pos); EXPORT float obs_sceneitem_get_rot(const obs_sceneitem_t *item); EXPORT void obs_sceneitem_get_scale(const obs_sceneitem_t *item, struct vec2 *scale); EXPORT uint32_t obs_sceneitem_get_alignment(const obs_sceneitem_t *item); EXPORT enum obs_bounds_type obs_sceneitem_get_bounds_type(const obs_sceneitem_t *item); EXPORT uint32_t obs_sceneitem_get_bounds_alignment(const obs_sceneitem_t *item); EXPORT bool obs_sceneitem_get_bounds_crop(const obs_sceneitem_t *item); EXPORT void obs_sceneitem_get_bounds(const obs_sceneitem_t *item, struct vec2 *bounds); EXPORT void obs_sceneitem_get_info2(const obs_sceneitem_t *item, struct obs_transform_info 
*info); EXPORT void obs_sceneitem_set_info2(obs_sceneitem_t *item, const struct obs_transform_info *info); EXPORT void obs_sceneitem_get_draw_transform(const obs_sceneitem_t *item, struct matrix4 *transform); EXPORT void obs_sceneitem_get_box_transform(const obs_sceneitem_t *item, struct matrix4 *transform); EXPORT void obs_sceneitem_get_box_scale(const obs_sceneitem_t *item, struct vec2 *scale); EXPORT bool obs_sceneitem_visible(const obs_sceneitem_t *item); EXPORT bool obs_sceneitem_set_visible(obs_sceneitem_t *item, bool visible); struct obs_sceneitem_crop { int left; int top; int right; int bottom; }; EXPORT void obs_sceneitem_set_crop(obs_sceneitem_t *item, const struct obs_sceneitem_crop *crop); EXPORT void obs_sceneitem_get_crop(const obs_sceneitem_t *item, struct obs_sceneitem_crop *crop); EXPORT void obs_sceneitem_set_scale_filter(obs_sceneitem_t *item, enum obs_scale_type filter); EXPORT enum obs_scale_type obs_sceneitem_get_scale_filter(obs_sceneitem_t *item); EXPORT void obs_sceneitem_set_blending_method(obs_sceneitem_t *item, enum obs_blending_method method); EXPORT enum obs_blending_method obs_sceneitem_get_blending_method(obs_sceneitem_t *item); EXPORT void obs_sceneitem_set_blending_mode(obs_sceneitem_t *item, enum obs_blending_type type); EXPORT enum obs_blending_type obs_sceneitem_get_blending_mode(obs_sceneitem_t *item); EXPORT void obs_sceneitem_force_update_transform(obs_sceneitem_t *item); EXPORT void obs_sceneitem_defer_update_begin(obs_sceneitem_t *item); EXPORT void obs_sceneitem_defer_update_end(obs_sceneitem_t *item); /** Gets private front-end settings data. This data is saved/loaded * automatically. Returns an incremented reference. 
*/ EXPORT obs_data_t *obs_sceneitem_get_private_settings(obs_sceneitem_t *item); EXPORT obs_sceneitem_t *obs_scene_add_group(obs_scene_t *scene, const char *name); EXPORT obs_sceneitem_t *obs_scene_insert_group(obs_scene_t *scene, const char *name, obs_sceneitem_t **items, size_t count); EXPORT obs_sceneitem_t *obs_scene_add_group2(obs_scene_t *scene, const char *name, bool signal); EXPORT obs_sceneitem_t *obs_scene_insert_group2(obs_scene_t *scene, const char *name, obs_sceneitem_t **items, size_t count, bool signal); EXPORT obs_sceneitem_t *obs_scene_get_group(obs_scene_t *scene, const char *name); EXPORT bool obs_sceneitem_is_group(obs_sceneitem_t *item); EXPORT obs_scene_t *obs_sceneitem_group_get_scene(const obs_sceneitem_t *group); EXPORT void obs_sceneitem_group_ungroup(obs_sceneitem_t *group); EXPORT void obs_sceneitem_group_ungroup2(obs_sceneitem_t *group, bool signal); EXPORT void obs_sceneitem_group_add_item(obs_sceneitem_t *group, obs_sceneitem_t *item); EXPORT void obs_sceneitem_group_remove_item(obs_sceneitem_t *group, obs_sceneitem_t *item); EXPORT obs_sceneitem_t *obs_sceneitem_get_group(obs_scene_t *scene, obs_sceneitem_t *item); EXPORT bool obs_source_is_group(const obs_source_t *source); EXPORT bool obs_source_type_is_group(const char *id); EXPORT bool obs_scene_is_group(const obs_scene_t *scene); EXPORT void obs_sceneitem_group_enum_items(obs_sceneitem_t *group, bool (*callback)(obs_scene_t *, obs_sceneitem_t *, void *), void *param); /** Gets the group from its source, or NULL if not a group */ EXPORT obs_scene_t *obs_group_from_source(const obs_source_t *source); static inline obs_scene_t *obs_group_or_scene_from_source(const obs_source_t *source) { obs_scene_t *s = obs_scene_from_source(source); return s ? 
s : obs_group_from_source(source); } EXPORT void obs_sceneitem_defer_group_resize_begin(obs_sceneitem_t *item); EXPORT void obs_sceneitem_defer_group_resize_end(obs_sceneitem_t *item); EXPORT void obs_sceneitem_set_transition(obs_sceneitem_t *item, bool show, obs_source_t *transition); EXPORT obs_source_t *obs_sceneitem_get_transition(obs_sceneitem_t *item, bool show); EXPORT void obs_sceneitem_set_transition_duration(obs_sceneitem_t *item, bool show, uint32_t duration_ms); EXPORT uint32_t obs_sceneitem_get_transition_duration(obs_sceneitem_t *item, bool show); EXPORT void obs_sceneitem_do_transition(obs_sceneitem_t *item, bool visible); EXPORT void obs_sceneitem_transition_load(struct obs_scene_item *item, obs_data_t *data, bool show); EXPORT obs_data_t *obs_sceneitem_transition_save(struct obs_scene_item *item, bool show); EXPORT void obs_scene_prune_sources(obs_scene_t *scene); /* ------------------------------------------------------------------------- */ /* Outputs */ EXPORT const char *obs_output_get_display_name(const char *id); /** Returns a pointer to the module which provides the output */ EXPORT obs_module_t *obs_output_get_module(const char *id); /** Returns the load state of an output's module given the id */ EXPORT enum obs_module_load_state obs_output_load_state(const char *id); /** * Creates an output. * * Outputs allow outputting to a file, to the network, to * DirectShow, or to other custom targets. */ EXPORT obs_output_t *obs_output_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data); /** * Adds/releases a reference to an output. When the last reference is * released, the output is destroyed. 
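For instance (a sketch; "ffmpeg_muxer" is the stock recording output id, and the "path" setting name is assumed from that output's implementation):

```c
#include <obs.h>

// Sketch: create a file output; release with obs_output_release when done.
static obs_output_t *make_recording_output(const char *file_path)
{
	obs_data_t *settings = obs_data_create();
	obs_data_set_string(settings, "path", file_path);

	obs_output_t *output =
		obs_output_create("ffmpeg_muxer", "recording", settings, NULL);
	obs_data_release(settings);
	return output;
}
```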
*/ EXPORT void obs_output_release(obs_output_t *output); EXPORT void obs_weak_output_addref(obs_weak_output_t *weak); EXPORT void obs_weak_output_release(obs_weak_output_t *weak); EXPORT obs_output_t *obs_output_get_ref(obs_output_t *output); EXPORT obs_weak_output_t *obs_output_get_weak_output(obs_output_t *output); EXPORT obs_output_t *obs_weak_output_get_output(obs_weak_output_t *weak); EXPORT bool obs_weak_output_references_output(obs_weak_output_t *weak, obs_output_t *output); EXPORT const char *obs_output_get_name(const obs_output_t *output); /** Starts the output. */ EXPORT bool obs_output_start(obs_output_t *output); /** Stops the output. */ EXPORT void obs_output_stop(obs_output_t *output); /** * On reconnection, start where the output left off. Note however that * this option will consume extra memory to continually increase the delay while * waiting to reconnect. */ #define OBS_OUTPUT_DELAY_PRESERVE (1 << 0) /** * Sets the current output delay, in seconds (if the output supports delay). * * If a delay is currently active, this sets the delay value but does not * affect the active delay; it only takes effect the next time the output is * activated. */ EXPORT void obs_output_set_delay(obs_output_t *output, uint32_t delay_sec, uint32_t flags); /** Gets the currently set delay value, in seconds. */ EXPORT uint32_t obs_output_get_delay(const obs_output_t *output); /** If delay is active, gets the currently active delay value, in seconds. */ EXPORT uint32_t obs_output_get_active_delay(const obs_output_t *output); /** Forces the output to stop. Usually only used with delay. 
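The delay functions above might be combined as in this sketch (a 20-second delay preserved across reconnects):

```c
#include <obs.h>

// Sketch: enable a 20-second delay that is preserved on reconnection.
// The new value takes effect the next time the output is activated.
static void enable_stream_delay(obs_output_t *output)
{
	obs_output_set_delay(output, 20, OBS_OUTPUT_DELAY_PRESERVE);

	// The configured and currently active values can differ while a
	// previously set delay is still active:
	uint32_t configured = obs_output_get_delay(output);
	uint32_t active = obs_output_get_active_delay(output);
	blog(LOG_INFO, "delay: configured=%u active=%u", configured, active);
}
```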
*/ EXPORT void obs_output_force_stop(obs_output_t *output); /** Returns whether the output is active */ EXPORT bool obs_output_active(const obs_output_t *output); /** Returns output capability flags */ EXPORT uint32_t obs_output_get_flags(const obs_output_t *output); /** Returns output capability flags */ EXPORT uint32_t obs_get_output_flags(const char *id); /** Gets the default settings for an output type */ EXPORT obs_data_t *obs_output_defaults(const char *id); /** Returns the property list, if any. Free with obs_properties_destroy */ EXPORT obs_properties_t *obs_get_output_properties(const char *id); /** * Returns the property list of an existing output, if any. Free with * obs_properties_destroy */ EXPORT obs_properties_t *obs_output_properties(const obs_output_t *output); /** Updates the settings for this output context */ EXPORT void obs_output_update(obs_output_t *output, obs_data_t *settings); /** Specifies whether the output can be paused */ EXPORT bool obs_output_can_pause(const obs_output_t *output); /** Pauses the output (if the functionality is allowed by the output) */ EXPORT bool obs_output_pause(obs_output_t *output, bool pause); /** Returns whether output is paused */ EXPORT bool obs_output_paused(const obs_output_t *output); /** Gets the current output settings */ EXPORT obs_data_t *obs_output_get_settings(const obs_output_t *output); /** Returns the signal handler for an output */ EXPORT signal_handler_t *obs_output_get_signal_handler(const obs_output_t *output); /** Returns the procedure handler for an output */ EXPORT proc_handler_t *obs_output_get_proc_handler(const obs_output_t *output); /** * Sets the current audio/video media contexts associated with this output, * required for non-encoded outputs. Can be null.
*/ EXPORT void obs_output_set_media(obs_output_t *output, video_t *video, audio_t *audio); /** Returns the video media context associated with this output */ EXPORT video_t *obs_output_video(const obs_output_t *output); /** Returns the audio media context associated with this output */ EXPORT audio_t *obs_output_audio(const obs_output_t *output); /** Sets the current audio mixer for non-encoded outputs */ EXPORT void obs_output_set_mixer(obs_output_t *output, size_t mixer_idx); /** Gets the current audio mixer for non-encoded outputs */ EXPORT size_t obs_output_get_mixer(const obs_output_t *output); /** Sets the current audio mixes (mask) for a non-encoded multi-track output */ EXPORT void obs_output_set_mixers(obs_output_t *output, size_t mixers); /** Gets the current audio mixes (mask) for a non-encoded multi-track output */ EXPORT size_t obs_output_get_mixers(const obs_output_t *output); /** * Sets the current video encoder associated with this output, * required for encoded outputs */ EXPORT void obs_output_set_video_encoder(obs_output_t *output, obs_encoder_t *encoder); /** * Sets the current video encoder associated with this output, * required for encoded outputs. * * The idx parameter specifies the video encoder index. * Only used with outputs that have multiple video outputs (FFmpeg typically), * otherwise the parameter is ignored. */ EXPORT void obs_output_set_video_encoder2(obs_output_t *output, obs_encoder_t *encoder, size_t idx); /** * Sets the current audio encoder associated with this output, * required for encoded outputs. * * The idx parameter specifies the audio encoder index to set the encoder to. * Only used with outputs that have multiple audio outputs (RTMP typically), * otherwise the parameter is ignored. 
*/ EXPORT void obs_output_set_audio_encoder(obs_output_t *output, obs_encoder_t *encoder, size_t idx); /** Returns the current video encoder associated with this output */ EXPORT obs_encoder_t *obs_output_get_video_encoder(const obs_output_t *output); /** * Returns the current video encoder associated with this output. * * The idx parameter specifies the video encoder index. * Only used with outputs that have multiple video outputs (FFmpeg typically), * otherwise specifying an idx > 0 returns NULL. */ EXPORT obs_encoder_t *obs_output_get_video_encoder2(const obs_output_t *output, size_t idx); /** * Returns the current audio encoder associated with this output. * * The idx parameter specifies the audio encoder index. Only used with * outputs that have multiple audio outputs, otherwise the parameter is * ignored. */ EXPORT obs_encoder_t *obs_output_get_audio_encoder(const obs_output_t *output, size_t idx); /** Sets the current service associated with this output. */ EXPORT void obs_output_set_service(obs_output_t *output, obs_service_t *service); /** Gets the current service associated with this output. */ EXPORT obs_service_t *obs_output_get_service(const obs_output_t *output); /** * Sets the reconnect settings. Set retry_count to 0 to disable reconnecting. */ EXPORT void obs_output_set_reconnect_settings(obs_output_t *output, int retry_count, int retry_sec); EXPORT uint64_t obs_output_get_total_bytes(const obs_output_t *output); EXPORT int obs_output_get_frames_dropped(const obs_output_t *output); EXPORT int obs_output_get_total_frames(const obs_output_t *output); /** * Sets the preferred scaled resolution for this output. Set width and height * to 0 to disable scaling. * * If this output uses an encoder, it will call obs_encoder_set_scaled_size on * the encoder before the stream is started. If the encoder is already active, * then this function will trigger a warning and do nothing.
*/ EXPORT void obs_output_set_preferred_size(obs_output_t *output, uint32_t width, uint32_t height); /** * Sets the preferred scaled resolution for this output. Set width and height * to 0 to disable scaling. * * If this output uses an encoder, it will call obs_encoder_set_scaled_size on * the encoder before the stream is started. If the encoder is already active, * then this function will trigger a warning and do nothing. * * The idx parameter specifies the video encoder index to apply the scaling to. * Only used with outputs that have multiple video outputs (FFmpeg typically), * otherwise the parameter is ignored. */ EXPORT void obs_output_set_preferred_size2(obs_output_t *output, uint32_t width, uint32_t height, size_t idx); /** For video outputs, returns the width of the encoded image */ EXPORT uint32_t obs_output_get_width(const obs_output_t *output); /** * For video outputs, returns the width of the encoded image. * * The idx parameter specifies the video encoder index. * Only used with outputs that have multiple video outputs (FFmpeg typically), * otherwise the parameter is ignored and 0 is returned. */ EXPORT uint32_t obs_output_get_width2(const obs_output_t *output, size_t idx); /** For video outputs, returns the height of the encoded image */ EXPORT uint32_t obs_output_get_height(const obs_output_t *output); /** * For video outputs, returns the height of the encoded image. * * The idx parameter specifies the video encoder index. * Only used with outputs that have multiple video outputs (FFmpeg typically), * otherwise the parameter is ignored and 0 is returned.
*/ EXPORT uint32_t obs_output_get_height2(const obs_output_t *output, size_t idx); EXPORT const char *obs_output_get_id(const obs_output_t *output); EXPORT void obs_output_caption(obs_output_t *output, const struct obs_source_cea_708 *captions); EXPORT void obs_output_output_caption_text1(obs_output_t *output, const char *text); EXPORT void obs_output_output_caption_text2(obs_output_t *output, const char *text, double display_duration); EXPORT float obs_output_get_congestion(obs_output_t *output); EXPORT int obs_output_get_connect_time_ms(obs_output_t *output); EXPORT bool obs_output_reconnecting(const obs_output_t *output); /** Pass a string of the last output error, for UI use */ EXPORT void obs_output_set_last_error(obs_output_t *output, const char *message); EXPORT const char *obs_output_get_last_error(obs_output_t *output); EXPORT const char *obs_output_get_supported_video_codecs(const obs_output_t *output); EXPORT const char *obs_output_get_supported_audio_codecs(const obs_output_t *output); EXPORT const char *obs_output_get_protocols(const obs_output_t *output); EXPORT bool obs_is_output_protocol_registered(const char *protocol); EXPORT bool obs_enum_output_protocols(size_t idx, char **protocol); EXPORT void obs_enum_output_types_with_protocol(const char *protocol, void *data, bool (*enum_cb)(void *data, const char *id)); EXPORT const char *obs_get_output_supported_video_codecs(const char *id); EXPORT const char *obs_get_output_supported_audio_codecs(const char *id); /* Add/remove packet-processing callbacks that are invoked in * send_interleaved(), before forwarding packets to the output service. * This provides a mechanism to perform packet processing outside of * libobs; however, any callback registered with this API should keep * code to a minimum and understand that it runs synchronously with the * calling thread.
*/ EXPORT void obs_output_add_packet_callback(obs_output_t *output, void (*packet_cb)(obs_output_t *output, struct encoder_packet *pkt, struct encoder_packet_time *pkt_time, void *param), void *param); EXPORT void obs_output_remove_packet_callback(obs_output_t *output, void (*packet_cb)(obs_output_t *output, struct encoder_packet *pkt, struct encoder_packet_time *pkt_time, void *param), void *param); /* Sets a callback to be called when the output checks if it should attempt to reconnect. * If the callback returns false, the output will not attempt to reconnect. */ EXPORT void obs_output_set_reconnect_callback(obs_output_t *output, bool (*reconnect_cb)(void *data, obs_output_t *output, int code), void *param); /* ------------------------------------------------------------------------- */ /* Functions used by outputs */ EXPORT void *obs_output_get_type_data(obs_output_t *output); /** Gets the video conversion info. Used only for raw output */ EXPORT const struct video_scale_info *obs_output_get_video_conversion(obs_output_t *output); /** Optionally sets the video conversion info. Used only for raw output */ EXPORT void obs_output_set_video_conversion(obs_output_t *output, const struct video_scale_info *conversion); /** Optionally sets the audio conversion info. Used only for raw output */ EXPORT void obs_output_set_audio_conversion(obs_output_t *output, const struct audio_convert_info *conversion); /** Returns whether data capture can begin */ EXPORT bool obs_output_can_begin_data_capture(const obs_output_t *output, uint32_t flags); /** Initializes encoders (if any) */ EXPORT bool obs_output_initialize_encoders(obs_output_t *output, uint32_t flags); /** * Begins data capture from media/encoders. * * @param output Output context * @return true if successful, false otherwise. 
*/ EXPORT bool obs_output_begin_data_capture(obs_output_t *output, uint32_t flags); /** Ends data capture from media/encoders */ EXPORT void obs_output_end_data_capture(obs_output_t *output); /** * Signals that the output has stopped itself. * * @param output Output context * @param code Error code (or OBS_OUTPUT_SUCCESS if not an error) */ EXPORT void obs_output_signal_stop(obs_output_t *output, int code); EXPORT uint64_t obs_output_get_pause_offset(obs_output_t *output); /* ------------------------------------------------------------------------- */ /* Encoders */ EXPORT const char *obs_encoder_get_display_name(const char *id); /** Returns a pointer to the module which provides the encoder */ EXPORT obs_module_t *obs_encoder_get_module(const char *id); /** Returns the load state of an encoder's module given the id */ EXPORT enum obs_module_load_state obs_encoder_load_state(const char *id); /** * Creates a video encoder context * * @param id Video encoder ID * @param name Name to assign to this context * @param settings Settings * @return The video encoder context, or NULL if failed or not found. */ EXPORT obs_encoder_t *obs_video_encoder_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data); /** * Creates an audio encoder context * * @param id Audio encoder ID * @param name Name to assign to this context * @param settings Settings * @param mixer_idx Index of the mixer to use for this audio encoder * @return The audio encoder context, or NULL if failed or not found. */ EXPORT obs_encoder_t *obs_audio_encoder_create(const char *id, const char *name, obs_data_t *settings, size_t mixer_idx, obs_data_t *hotkey_data); /** * Adds/releases a reference to an encoder. When the last reference is * released, the encoder is destroyed.
*/ EXPORT void obs_encoder_release(obs_encoder_t *encoder); EXPORT void obs_weak_encoder_addref(obs_weak_encoder_t *weak); EXPORT void obs_weak_encoder_release(obs_weak_encoder_t *weak); EXPORT obs_encoder_t *obs_encoder_get_ref(obs_encoder_t *encoder); EXPORT obs_weak_encoder_t *obs_encoder_get_weak_encoder(obs_encoder_t *encoder); EXPORT obs_encoder_t *obs_weak_encoder_get_encoder(obs_weak_encoder_t *weak); EXPORT bool obs_weak_encoder_references_encoder(obs_weak_encoder_t *weak, obs_encoder_t *encoder); EXPORT void obs_encoder_set_name(obs_encoder_t *encoder, const char *name); EXPORT const char *obs_encoder_get_name(const obs_encoder_t *encoder); /** Returns the codec of an encoder by the id */ EXPORT const char *obs_get_encoder_codec(const char *id); /** Returns the type of an encoder by the id */ EXPORT enum obs_encoder_type obs_get_encoder_type(const char *id); /** Returns the codec of the encoder */ EXPORT const char *obs_encoder_get_codec(const obs_encoder_t *encoder); /** Returns the type of an encoder */ EXPORT enum obs_encoder_type obs_encoder_get_type(const obs_encoder_t *encoder); /** * Sets the scaled resolution for a video encoder. Set width and height to 0 * to disable scaling. If the encoder is active, this function will trigger * a warning, and do nothing. */ EXPORT void obs_encoder_set_scaled_size(obs_encoder_t *encoder, uint32_t width, uint32_t height); /** * Enable/disable GPU based scaling for a video encoder. * OBS_SCALE_DISABLE disables GPU based scaling (default), * any other value enables GPU based scaling. If the encoder * is active, this function will trigger a warning, and do nothing. */ EXPORT void obs_encoder_set_gpu_scale_type(obs_encoder_t *encoder, enum obs_scale_type gpu_scale_type); /** * Set frame rate divisor for a video encoder. This allows recording at * a partial frame rate compared to the base frame rate, e.g. 60 FPS with * divisor = 2 will record at 30 FPS, with divisor = 3 at 20, etc. 
* * Can only be called on stopped encoders; changing this on the fly is not supported. */ EXPORT bool obs_encoder_set_frame_rate_divisor(obs_encoder_t *encoder, uint32_t divisor); /** * Adds a region of interest (ROI) for an encoder. This allows prioritizing * quality of regions of the frame. * If regions overlap, regions added earlier take precedence. * * Returns false if the encoder does not support ROI or the region is invalid. */ EXPORT bool obs_encoder_add_roi(obs_encoder_t *encoder, const struct obs_encoder_roi *roi); /** For video encoders, returns true if any ROIs were set */ EXPORT bool obs_encoder_has_roi(const obs_encoder_t *encoder); /** Clear all regions */ EXPORT void obs_encoder_clear_roi(obs_encoder_t *encoder); /** Enumerate regions with callback (reverse order of addition) */ EXPORT void obs_encoder_enum_roi(obs_encoder_t *encoder, void (*enum_proc)(void *, struct obs_encoder_roi *), void *param); /** Get the ROI increment; encoders must rebuild their ROI map if it has changed */ EXPORT uint32_t obs_encoder_get_roi_increment(const obs_encoder_t *encoder); /** For video encoders, returns true if pre-encode scaling is enabled */ EXPORT bool obs_encoder_scaling_enabled(const obs_encoder_t *encoder); /** For video encoders, returns the width of the encoded image */ EXPORT uint32_t obs_encoder_get_width(const obs_encoder_t *encoder); /** For video encoders, returns the height of the encoded image */ EXPORT uint32_t obs_encoder_get_height(const obs_encoder_t *encoder); /** For video encoders, returns whether GPU scaling is enabled */ EXPORT bool obs_encoder_gpu_scaling_enabled(obs_encoder_t *encoder); /** For video encoders, returns GPU scaling type */ EXPORT enum obs_scale_type obs_encoder_get_scale_type(obs_encoder_t *encoder); /** For video encoders, returns the frame rate divisor (default is 1) */ EXPORT uint32_t obs_encoder_get_frame_rate_divisor(const obs_encoder_t *encoder); /** For video encoders, returns the number of frames encoded */ EXPORT uint32_t
obs_encoder_get_encoded_frames(const obs_encoder_t *encoder); /** For audio encoders, returns the sample rate of the audio */ EXPORT uint32_t obs_encoder_get_sample_rate(const obs_encoder_t *encoder); /** For audio encoders, returns the frame size of the audio packet */ EXPORT size_t obs_encoder_get_frame_size(const obs_encoder_t *encoder); /** For audio encoders, returns the mixer index */ EXPORT size_t obs_encoder_get_mixer_index(const obs_encoder_t *encoder); /** For audio encoders, returns the number of samples to skip at the beginning of the stream */ EXPORT uint32_t obs_encoder_get_priming_samples(const obs_encoder_t *encoder); /** * Sets the preferred video format for a video encoder. If the encoder can use * the format specified, it will force a conversion to that format if the * obs output format does not match the preferred format. * * If the format is set to VIDEO_FORMAT_NONE, the encoder will revert to the * default functionality of converting only when absolutely necessary. * * If GPU scaling is enabled, conversion will happen on the GPU. */ EXPORT void obs_encoder_set_preferred_video_format(obs_encoder_t *encoder, enum video_format format); EXPORT enum video_format obs_encoder_get_preferred_video_format(const obs_encoder_t *encoder); /** * Sets the preferred colorspace for an encoder, e.g., to allow simultaneous * SDR and HDR output. * * Only supported when GPU scaling is enabled. */ EXPORT void obs_encoder_set_preferred_color_space(obs_encoder_t *encoder, enum video_colorspace colorspace); EXPORT enum video_colorspace obs_encoder_get_preferred_color_space(const obs_encoder_t *encoder); /** * Sets the preferred range for an encoder. * * Only supported when GPU scaling is enabled.
*/ EXPORT void obs_encoder_set_preferred_range(obs_encoder_t *encoder, enum video_range_type range); EXPORT enum video_range_type obs_encoder_get_preferred_range(const obs_encoder_t *encoder); /** Gets the default settings for an encoder type */ EXPORT obs_data_t *obs_encoder_defaults(const char *id); EXPORT obs_data_t *obs_encoder_get_defaults(const obs_encoder_t *encoder); /** Returns the property list, if any. Free with obs_properties_destroy */ EXPORT obs_properties_t *obs_get_encoder_properties(const char *id); /** * Returns the property list of an existing encoder, if any. Free with * obs_properties_destroy */ EXPORT obs_properties_t *obs_encoder_properties(const obs_encoder_t *encoder); /** * Updates the settings of the encoder context. Usually used for changing * bitrate while active */ EXPORT void obs_encoder_update(obs_encoder_t *encoder, obs_data_t *settings); /** Gets extra data (headers) associated with this context */ EXPORT bool obs_encoder_get_extra_data(const obs_encoder_t *encoder, uint8_t **extra_data, size_t *size); /** Returns the current settings for this encoder */ EXPORT obs_data_t *obs_encoder_get_settings(const obs_encoder_t *encoder); /** Sets the video output context to be used with this encoder */ EXPORT void obs_encoder_set_video(obs_encoder_t *encoder, video_t *video); /** Sets the audio output context to be used with this encoder */ EXPORT void obs_encoder_set_audio(obs_encoder_t *encoder, audio_t *audio); /** * Returns the video output context used with this encoder, or NULL if not * a video context */ EXPORT video_t *obs_encoder_video(const obs_encoder_t *encoder); /** * Returns the parent video output context used with this encoder, or NULL if not * a video context. Used when an FPS divisor is set, where the original video * context would not otherwise be gettable. 
*/ EXPORT video_t *obs_encoder_parent_video(const obs_encoder_t *encoder); /** Returns whether the encoder's video output context supports shared textures for the specified video format. */ EXPORT bool obs_encoder_video_tex_active(const obs_encoder_t *encoder, enum video_format format); /** * Returns the audio output context used with this encoder, or NULL if not * an audio context */ EXPORT audio_t *obs_encoder_audio(const obs_encoder_t *encoder); /** Returns true if encoder is active, false otherwise */ EXPORT bool obs_encoder_active(const obs_encoder_t *encoder); EXPORT void *obs_encoder_get_type_data(obs_encoder_t *encoder); EXPORT const char *obs_encoder_get_id(const obs_encoder_t *encoder); EXPORT uint32_t obs_get_encoder_caps(const char *encoder_id); EXPORT uint32_t obs_encoder_get_caps(const obs_encoder_t *encoder); EXPORT void obs_encoder_packet_ref(struct encoder_packet *dst, struct encoder_packet *src); EXPORT void obs_encoder_packet_release(struct encoder_packet *packet); EXPORT void *obs_encoder_create_rerouted(obs_encoder_t *encoder, const char *reroute_id); /** Returns whether encoder is paused */ EXPORT bool obs_encoder_paused(const obs_encoder_t *encoder); EXPORT const char *obs_encoder_get_last_error(obs_encoder_t *encoder); EXPORT void obs_encoder_set_last_error(obs_encoder_t *encoder, const char *message); EXPORT uint64_t obs_encoder_get_pause_offset(const obs_encoder_t *encoder); /** * Creates an "encoder group", allowing synchronized startup of encoders within * the group. Encoder groups are single-owner, and hold strong references to * encoders within the group. Calling destroy on an active group will not actually * destroy the group until it becomes completely inactive.
*/ EXPORT bool obs_encoder_set_group(obs_encoder_t *encoder, obs_encoder_group_t *group); EXPORT obs_encoder_group_t *obs_encoder_group_create(void); EXPORT void obs_encoder_group_destroy(obs_encoder_group_t *group); /* ------------------------------------------------------------------------- */ /* Stream Services */ EXPORT const char *obs_service_get_display_name(const char *id); /** Returns a pointer to the module which provides the service */ EXPORT obs_module_t *obs_service_get_module(const char *id); /** Returns the load state of a service's module given the id */ EXPORT enum obs_module_load_state obs_service_load_state(const char *id); EXPORT obs_service_t *obs_service_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data); EXPORT obs_service_t *obs_service_create_private(const char *id, const char *name, obs_data_t *settings); /** * Adds/releases a reference to a service. When the last reference is * released, the service is destroyed. */ EXPORT void obs_service_release(obs_service_t *service); EXPORT void obs_weak_service_addref(obs_weak_service_t *weak); EXPORT void obs_weak_service_release(obs_weak_service_t *weak); EXPORT obs_service_t *obs_service_get_ref(obs_service_t *service); EXPORT obs_weak_service_t *obs_service_get_weak_service(obs_service_t *service); EXPORT obs_service_t *obs_weak_service_get_service(obs_weak_service_t *weak); EXPORT bool obs_weak_service_references_service(obs_weak_service_t *weak, obs_service_t *service); EXPORT const char *obs_service_get_name(const obs_service_t *service); /** Gets the default settings for a service */ EXPORT obs_data_t *obs_service_defaults(const char *id); /** Returns the property list, if any. Free with obs_properties_destroy */ EXPORT obs_properties_t *obs_get_service_properties(const char *id); /** * Returns the property list of an existing service context, if any.
Free with * obs_properties_destroy */ EXPORT obs_properties_t *obs_service_properties(const obs_service_t *service); /** Gets the service type */ EXPORT const char *obs_service_get_type(const obs_service_t *service); /** Updates the settings of the service context */ EXPORT void obs_service_update(obs_service_t *service, obs_data_t *settings); /** Returns the current settings for this service */ EXPORT obs_data_t *obs_service_get_settings(const obs_service_t *service); /** * Applies service-specific video encoder settings. * * @param video_encoder_settings Video encoder settings. Optional. * @param audio_encoder_settings Audio encoder settings. Optional. */ EXPORT void obs_service_apply_encoder_settings(obs_service_t *service, obs_data_t *video_encoder_settings, obs_data_t *audio_encoder_settings); EXPORT void *obs_service_get_type_data(obs_service_t *service); EXPORT const char *obs_service_get_id(const obs_service_t *service); EXPORT void obs_service_get_supported_resolutions(const obs_service_t *service, struct obs_service_resolution **resolutions, size_t *count); EXPORT void obs_service_get_max_fps(const obs_service_t *service, int *fps); EXPORT void obs_service_get_max_bitrate(const obs_service_t *service, int *video_bitrate, int *audio_bitrate); EXPORT const char **obs_service_get_supported_video_codecs(const obs_service_t *service); EXPORT const char **obs_service_get_supported_audio_codecs(const obs_service_t *service); /** Returns the protocol for this service context */ EXPORT const char *obs_service_get_protocol(const obs_service_t *service); EXPORT const char *obs_service_get_preferred_output_type(const obs_service_t *service); EXPORT const char *obs_service_get_connect_info(const obs_service_t *service, uint32_t type); EXPORT bool obs_service_can_try_to_connect(const obs_service_t *service); /* ------------------------------------------------------------------------- */ /* Source frame allocation functions */ EXPORT void obs_source_frame_init(struct 
obs_source_frame *frame, enum video_format format, uint32_t width, uint32_t height); static inline void obs_source_frame_free(struct obs_source_frame *frame) { if (frame) { bfree(frame->data[0]); memset(frame, 0, sizeof(*frame)); } } static inline struct obs_source_frame *obs_source_frame_create(enum video_format format, uint32_t width, uint32_t height) { struct obs_source_frame *frame; frame = (struct obs_source_frame *)bzalloc(sizeof(*frame)); obs_source_frame_init(frame, format, width, height); return frame; } static inline void obs_source_frame_destroy(struct obs_source_frame *frame) { if (frame) { bfree(frame->data[0]); bfree(frame); } } EXPORT void obs_source_frame_copy(struct obs_source_frame *dst, const struct obs_source_frame *src); /* ------------------------------------------------------------------------- */ /* Get source icon type */ EXPORT enum obs_icon_type obs_source_get_icon_type(const char *id); /* ------------------------------------------------------------------------- */ /* Canvases */ /* Canvas flags */ enum obs_canvas_flags { MAIN = 1 << 0, // Main canvas created by libobs, cannot be renamed or reset, cannot be set by user ACTIVATE = 1 << 1, // Canvas sources will become active when they are visible MIX_AUDIO = 1 << 2, // Audio from channels in this canvas will be mixed into the audio output SCENE_REF = 1 << 3, // Canvas will hold references for scene sources EPHEMERAL = 1 << 4, // Indicates this canvas is not supposed to be saved /* Presets */ PROGRAM = ACTIVATE | MIX_AUDIO | SCENE_REF, PREVIEW = EPHEMERAL, DEVICE = ACTIVATE | EPHEMERAL, }; /** Get a strong reference to the main OBS canvas */ EXPORT obs_canvas_t *obs_get_main_canvas(void); /** Creates a new canvas */ EXPORT obs_canvas_t *obs_canvas_create(const char *name, struct obs_video_info *ovi, uint32_t flags); /** Creates a new private canvas */ EXPORT obs_canvas_t *obs_canvas_create_private(const char *name, struct obs_video_info *ovi, uint32_t flags); /** Signal that references to 
canvas should be released and mark the canvas as removed. */ EXPORT void obs_canvas_remove(obs_canvas_t *canvas); /** Returns whether a canvas is marked as removed (i.e., should no longer be used). */ EXPORT bool obs_canvas_removed(obs_canvas_t *canvas); /* Canvas properties */ /** Set canvas name */ EXPORT void obs_canvas_set_name(obs_canvas_t *canvas, const char *name); /** Get canvas name */ EXPORT const char *obs_canvas_get_name(const obs_canvas_t *canvas); /** Get canvas UUID */ EXPORT const char *obs_canvas_get_uuid(const obs_canvas_t *canvas); /** Gets flags set on a canvas */ EXPORT uint32_t obs_canvas_get_flags(const obs_canvas_t *canvas); /* Saving/Loading */ /** Saves a canvas to settings data */ EXPORT obs_data_t *obs_save_canvas(obs_canvas_t *source); /** Loads a canvas from settings data */ EXPORT obs_canvas_t *obs_load_canvas(obs_data_t *data); /* Reference counting */ /** Add strong reference */ EXPORT obs_canvas_t *obs_canvas_get_ref(obs_canvas_t *canvas); /** Release strong reference */ EXPORT void obs_canvas_release(obs_canvas_t *canvas); /** Add weak reference */ EXPORT void obs_weak_canvas_addref(obs_weak_canvas_t *weak); /** Release weak reference */ EXPORT void obs_weak_canvas_release(obs_weak_canvas_t *weak); /** Get weak reference from strong reference */ EXPORT obs_weak_canvas_t *obs_canvas_get_weak_canvas(obs_canvas_t *canvas); /** Get strong reference from weak reference */ EXPORT obs_canvas_t *obs_weak_canvas_get_canvas(obs_weak_canvas_t *weak); /** Returns the signal handler for a canvas */ EXPORT signal_handler_t *obs_canvas_get_signal_handler(obs_canvas_t *canvas); /* Channels */ /** Sets the source to be used for this canvas.
*/ EXPORT void obs_canvas_set_channel(obs_canvas_t *canvas, uint32_t channel, obs_source_t *source); /** Gets the source currently in use for this view context */ EXPORT obs_source_t *obs_canvas_get_channel(obs_canvas_t *canvas, uint32_t channel); /* Canvas sources */ /** Create scene attached to a canvas */ EXPORT obs_scene_t *obs_canvas_scene_create(obs_canvas_t *canvas, const char *name); /** Remove a scene from a canvas */ EXPORT void obs_canvas_scene_remove(obs_scene_t *scene); /** Move scene to another canvas, detaching it from the previous one and deduplicating the name if needed */ EXPORT void obs_canvas_move_scene(obs_scene_t *scene, obs_canvas_t *dst); /** Enumerates scenes belonging to a canvas */ EXPORT void obs_canvas_enum_scenes(obs_canvas_t *canvas, bool (*enum_proc)(void *, obs_source_t *), void *param); /** Get a canvas source by name */ EXPORT obs_source_t *obs_canvas_get_source_by_name(obs_canvas_t *canvas, const char *name); /** Get a canvas scene by name */ EXPORT obs_scene_t *obs_canvas_get_scene_by_name(obs_canvas_t *canvas, const char *name); /* Canvas video */ /** Reset a canvas's video mix */ EXPORT bool obs_canvas_reset_video(obs_canvas_t *canvas, struct obs_video_info *ovi); /** Returns true if the canvas video is configured */ EXPORT bool obs_canvas_has_video(obs_canvas_t *canvas); /** Get canvas video output */ EXPORT video_t *obs_canvas_get_video(const obs_canvas_t *canvas); /** Get canvas video info (if it exists) */ EXPORT bool obs_canvas_get_video_info(const obs_canvas_t *canvas, struct obs_video_info *ovi); /** Renders the sources of this canvas's view context */ EXPORT void obs_canvas_render(obs_canvas_t *canvas); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-properties.c000644 001751 001751 00000110035 15153330235 022627 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can
redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include "util/bmem.h" #include "util/darray.h" #include "obs-internal.h" #include "obs-properties.h" static inline void *get_property_data(struct obs_property *prop); /* ------------------------------------------------------------------------- */ struct float_data { double min, max, step; enum obs_number_type type; char *suffix; }; struct int_data { int min, max, step; enum obs_number_type type; char *suffix; }; struct list_item { char *name; bool disabled; union { char *str; long long ll; double d; bool b; }; }; struct path_data { char *filter; char *default_path; enum obs_path_type type; }; struct text_data { enum obs_text_type type; bool monospace; enum obs_text_info_type info_type; bool info_word_wrap; }; struct list_data { DARRAY(struct list_item) items; enum obs_combo_type type; enum obs_combo_format format; }; struct editable_list_data { enum obs_editable_list_type type; char *filter; char *default_path; }; struct button_data { obs_property_clicked_t callback; enum obs_button_type type; char *url; }; struct frame_rate_option { char *name; char *description; }; struct frame_rate_range { struct media_frames_per_second min_time; struct media_frames_per_second max_time; }; struct frame_rate_data { DARRAY(struct frame_rate_option) extra_options; DARRAY(struct frame_rate_range) ranges; }; struct group_data { enum obs_group_type type; obs_properties_t
*content; }; static inline void path_data_free(struct path_data *data) { bfree(data->default_path); if (data->type == OBS_PATH_FILE) bfree(data->filter); } static inline void editable_list_data_free(struct editable_list_data *data) { bfree(data->default_path); bfree(data->filter); } static inline void list_item_free(struct list_data *data, struct list_item *item) { bfree(item->name); if (data->format == OBS_COMBO_FORMAT_STRING) bfree(item->str); } static inline void list_data_free(struct list_data *data) { for (size_t i = 0; i < data->items.num; i++) list_item_free(data, data->items.array + i); da_free(data->items); } static inline void frame_rate_data_options_free(struct frame_rate_data *data) { for (size_t i = 0; i < data->extra_options.num; i++) { struct frame_rate_option *opt = &data->extra_options.array[i]; bfree(opt->name); bfree(opt->description); } da_resize(data->extra_options, 0); } static inline void frame_rate_data_ranges_free(struct frame_rate_data *data) { da_resize(data->ranges, 0); } static inline void frame_rate_data_free(struct frame_rate_data *data) { frame_rate_data_options_free(data); frame_rate_data_ranges_free(data); da_free(data->extra_options); da_free(data->ranges); } static inline void group_data_free(struct group_data *data) { obs_properties_destroy(data->content); } static inline void int_data_free(struct int_data *data) { if (data->suffix) bfree(data->suffix); } static inline void float_data_free(struct float_data *data) { if (data->suffix) bfree(data->suffix); } static inline void button_data_free(struct button_data *data) { if (data->url) bfree(data->url); } struct obs_properties; struct obs_property { char *name; char *desc; char *long_desc; void *priv; enum obs_property_type type; bool visible; bool enabled; struct obs_properties *parent; obs_property_modified_t modified; obs_property_modified2_t modified2; UT_hash_handle hh; }; struct obs_properties { void *param; void (*destroy)(void *param); uint32_t flags; uint32_t groups; 
struct obs_property *properties; struct obs_property *parent; }; obs_properties_t *obs_properties_create(void) { struct obs_properties *props; props = bzalloc(sizeof(struct obs_properties)); return props; } void obs_properties_set_param(obs_properties_t *props, void *param, void (*destroy)(void *param)) { if (!props) return; if (props->param && props->destroy) props->destroy(props->param); props->param = param; props->destroy = destroy; } void obs_properties_set_flags(obs_properties_t *props, uint32_t flags) { if (props) props->flags = flags; } uint32_t obs_properties_get_flags(obs_properties_t *props) { return props ? props->flags : 0; } void *obs_properties_get_param(obs_properties_t *props) { return props ? props->param : NULL; } obs_properties_t *obs_properties_create_param(void *param, void (*destroy)(void *param)) { struct obs_properties *props = obs_properties_create(); obs_properties_set_param(props, param, destroy); return props; } static void obs_property_destroy(struct obs_property *property) { if (property->type == OBS_PROPERTY_LIST) list_data_free(get_property_data(property)); else if (property->type == OBS_PROPERTY_PATH) path_data_free(get_property_data(property)); else if (property->type == OBS_PROPERTY_EDITABLE_LIST) editable_list_data_free(get_property_data(property)); else if (property->type == OBS_PROPERTY_FRAME_RATE) frame_rate_data_free(get_property_data(property)); else if (property->type == OBS_PROPERTY_GROUP) group_data_free(get_property_data(property)); else if (property->type == OBS_PROPERTY_INT) int_data_free(get_property_data(property)); else if (property->type == OBS_PROPERTY_FLOAT) float_data_free(get_property_data(property)); else if (property->type == OBS_PROPERTY_BUTTON) button_data_free(get_property_data(property)); bfree(property->name); bfree(property->desc); bfree(property->long_desc); bfree(property); } void obs_properties_destroy(obs_properties_t *props) { if (props) { struct obs_property *p, *tmp; if (props->destroy && 
props->param) props->destroy(props->param); HASH_ITER (hh, props->properties, p, tmp) { HASH_DEL(props->properties, p); obs_property_destroy(p); } bfree(props); } } obs_property_t *obs_properties_first(obs_properties_t *props) { return (props != NULL) ? props->properties : NULL; } obs_property_t *obs_properties_get(obs_properties_t *props, const char *name) { struct obs_property *property, *tmp; if (!props) return NULL; HASH_FIND_STR(props->properties, name, property); if (property) return property; if (!props->groups) return NULL; /* Recursively check groups as well, if any */ HASH_ITER (hh, props->properties, property, tmp) { if (property->type != OBS_PROPERTY_GROUP) continue; obs_properties_t *group = obs_property_group_content(property); obs_property_t *found = obs_properties_get(group, name); if (found) return found; } return NULL; } obs_properties_t *obs_properties_get_parent(obs_properties_t *props) { return props->parent ? props->parent->parent : NULL; } void obs_properties_remove_by_name(obs_properties_t *props, const char *name) { if (!props) return; struct obs_property *cur, *tmp; HASH_FIND_STR(props->properties, name, cur); if (cur) { HASH_DELETE(hh, props->properties, cur); if (cur->type == OBS_PROPERTY_GROUP) props->groups--; obs_property_destroy(cur); return; } if (!props->groups) return; HASH_ITER (hh, props->properties, cur, tmp) { if (cur->type != OBS_PROPERTY_GROUP) continue; obs_properties_remove_by_name(obs_property_group_content(cur), name); } } typedef DARRAY(struct obs_property *) obs_property_da_t; void obs_properties_apply_settings_internal(obs_properties_t *props, obs_property_da_t *properties_with_callback) { struct obs_property *p = props->properties; while (p) { if (p->type == OBS_PROPERTY_GROUP) { obs_properties_apply_settings_internal(obs_property_group_content(p), properties_with_callback); } if (p->modified || p->modified2) da_push_back((*properties_with_callback), &p); p = p->hh.next; } } void 
obs_properties_apply_settings(obs_properties_t *props, obs_data_t *settings) { if (!props) return; obs_property_da_t properties_with_callback; da_init(properties_with_callback); obs_properties_apply_settings_internal(props, &properties_with_callback); while (properties_with_callback.num > 0) { struct obs_property *p = *(struct obs_property **)da_end(properties_with_callback); if (p->modified) p->modified(props, p, settings); else if (p->modified2) p->modified2(p->priv, props, p, settings); da_pop_back(properties_with_callback); } da_free(properties_with_callback); } /* ------------------------------------------------------------------------- */ static inline size_t get_property_size(enum obs_property_type type) { switch (type) { case OBS_PROPERTY_INVALID: return 0; case OBS_PROPERTY_BOOL: return 0; case OBS_PROPERTY_INT: return sizeof(struct int_data); case OBS_PROPERTY_FLOAT: return sizeof(struct float_data); case OBS_PROPERTY_TEXT: return sizeof(struct text_data); case OBS_PROPERTY_PATH: return sizeof(struct path_data); case OBS_PROPERTY_LIST: return sizeof(struct list_data); case OBS_PROPERTY_COLOR: return 0; case OBS_PROPERTY_BUTTON: return sizeof(struct button_data); case OBS_PROPERTY_FONT: return 0; case OBS_PROPERTY_EDITABLE_LIST: return sizeof(struct editable_list_data); case OBS_PROPERTY_FRAME_RATE: return sizeof(struct frame_rate_data); case OBS_PROPERTY_GROUP: return sizeof(struct group_data); case OBS_PROPERTY_COLOR_ALPHA: return 0; } return 0; } static inline struct obs_property *new_prop(struct obs_properties *props, const char *name, const char *desc, enum obs_property_type type) { size_t data_size = get_property_size(type); struct obs_property *p; p = bzalloc(sizeof(struct obs_property) + data_size); p->parent = props; p->enabled = true; p->visible = true; p->type = type; p->name = bstrdup(name); p->desc = bstrdup(desc); HASH_ADD_STR(props->properties, name, p); return p; } static inline obs_properties_t *get_topmost_parent(obs_properties_t *props) 
{ obs_properties_t *parent = props; obs_properties_t *last_parent = parent; while (parent) { last_parent = parent; parent = obs_properties_get_parent(parent); } return last_parent; } static inline bool contains_prop(struct obs_properties *props, const char *name) { struct obs_property *p, *tmp; HASH_FIND_STR(props->properties, name, p); if (p) { blog(LOG_WARNING, "Property '%s' exists", name); return true; } if (!props->groups) return false; HASH_ITER (hh, props->properties, p, tmp) { if (p->type != OBS_PROPERTY_GROUP) continue; if (contains_prop(obs_property_group_content(p), name)) return true; } return false; } static inline bool has_prop(struct obs_properties *props, const char *name) { return contains_prop(get_topmost_parent(props), name); } static inline void *get_property_data(struct obs_property *prop) { return (uint8_t *)prop + sizeof(struct obs_property); } static inline void *get_type_data(struct obs_property *prop, enum obs_property_type type) { if (!prop || prop->type != type) return NULL; return get_property_data(prop); } obs_property_t *obs_properties_add_bool(obs_properties_t *props, const char *name, const char *desc) { if (!props || has_prop(props, name)) return NULL; return new_prop(props, name, desc, OBS_PROPERTY_BOOL); } static obs_property_t *add_int(obs_properties_t *props, const char *name, const char *desc, int min, int max, int step, enum obs_number_type type) { if (!props || has_prop(props, name)) return NULL; struct obs_property *p = new_prop(props, name, desc, OBS_PROPERTY_INT); struct int_data *data = get_property_data(p); data->min = min; data->max = max; data->step = step; data->type = type; return p; } static obs_property_t *add_flt(obs_properties_t *props, const char *name, const char *desc, double min, double max, double step, enum obs_number_type type) { if (!props || has_prop(props, name)) return NULL; struct obs_property *p = new_prop(props, name, desc, OBS_PROPERTY_FLOAT); struct float_data *data = get_property_data(p); 
data->min = min; data->max = max; data->step = step; data->type = type; return p; } obs_property_t *obs_properties_add_int(obs_properties_t *props, const char *name, const char *desc, int min, int max, int step) { return add_int(props, name, desc, min, max, step, OBS_NUMBER_SCROLLER); } obs_property_t *obs_properties_add_float(obs_properties_t *props, const char *name, const char *desc, double min, double max, double step) { return add_flt(props, name, desc, min, max, step, OBS_NUMBER_SCROLLER); } obs_property_t *obs_properties_add_int_slider(obs_properties_t *props, const char *name, const char *desc, int min, int max, int step) { return add_int(props, name, desc, min, max, step, OBS_NUMBER_SLIDER); } obs_property_t *obs_properties_add_float_slider(obs_properties_t *props, const char *name, const char *desc, double min, double max, double step) { return add_flt(props, name, desc, min, max, step, OBS_NUMBER_SLIDER); } obs_property_t *obs_properties_add_text(obs_properties_t *props, const char *name, const char *desc, enum obs_text_type type) { if (!props || has_prop(props, name)) return NULL; struct obs_property *p = new_prop(props, name, desc, OBS_PROPERTY_TEXT); struct text_data *data = get_property_data(p); data->type = type; data->info_type = OBS_TEXT_INFO_NORMAL; data->info_word_wrap = true; return p; } obs_property_t *obs_properties_add_path(obs_properties_t *props, const char *name, const char *desc, enum obs_path_type type, const char *filter, const char *default_path) { if (!props || has_prop(props, name)) return NULL; struct obs_property *p = new_prop(props, name, desc, OBS_PROPERTY_PATH); struct path_data *data = get_property_data(p); data->type = type; data->default_path = bstrdup(default_path); if (data->type == OBS_PATH_FILE) data->filter = bstrdup(filter); return p; } obs_property_t *obs_properties_add_list(obs_properties_t *props, const char *name, const char *desc, enum obs_combo_type type, enum obs_combo_format format) { if (!props || 
has_prop(props, name)) return NULL; if (type == OBS_COMBO_TYPE_EDITABLE && format != OBS_COMBO_FORMAT_STRING) { blog(LOG_WARNING, "List '%s', error: Editable combo boxes " "must be of the 'string' type", name); return NULL; } struct obs_property *p = new_prop(props, name, desc, OBS_PROPERTY_LIST); struct list_data *data = get_property_data(p); data->format = format; data->type = type; return p; } obs_property_t *obs_properties_add_color(obs_properties_t *props, const char *name, const char *desc) { if (!props || has_prop(props, name)) return NULL; return new_prop(props, name, desc, OBS_PROPERTY_COLOR); } obs_property_t *obs_properties_add_color_alpha(obs_properties_t *props, const char *name, const char *desc) { if (!props || has_prop(props, name)) return NULL; return new_prop(props, name, desc, OBS_PROPERTY_COLOR_ALPHA); } obs_property_t *obs_properties_add_button(obs_properties_t *props, const char *name, const char *text, obs_property_clicked_t callback) { if (!props || has_prop(props, name)) return NULL; struct obs_property *p = new_prop(props, name, text, OBS_PROPERTY_BUTTON); struct button_data *data = get_property_data(p); data->callback = callback; return p; } obs_property_t *obs_properties_add_button2(obs_properties_t *props, const char *name, const char *text, obs_property_clicked_t callback, void *priv) { if (!props || has_prop(props, name)) return NULL; struct obs_property *p = new_prop(props, name, text, OBS_PROPERTY_BUTTON); struct button_data *data = get_property_data(p); data->callback = callback; p->priv = priv; return p; } obs_property_t *obs_properties_add_font(obs_properties_t *props, const char *name, const char *desc) { if (!props || has_prop(props, name)) return NULL; return new_prop(props, name, desc, OBS_PROPERTY_FONT); } obs_property_t *obs_properties_add_editable_list(obs_properties_t *props, const char *name, const char *desc, enum obs_editable_list_type type, const char *filter, const char *default_path) { if (!props || has_prop(props, 
name)) return NULL; struct obs_property *p = new_prop(props, name, desc, OBS_PROPERTY_EDITABLE_LIST); struct editable_list_data *data = get_property_data(p); data->type = type; data->filter = bstrdup(filter); data->default_path = bstrdup(default_path); return p; } obs_property_t *obs_properties_add_frame_rate(obs_properties_t *props, const char *name, const char *desc) { if (!props || has_prop(props, name)) return NULL; struct obs_property *p = new_prop(props, name, desc, OBS_PROPERTY_FRAME_RATE); struct frame_rate_data *data = get_property_data(p); da_init(data->extra_options); da_init(data->ranges); return p; } static bool check_property_group_recursion(obs_properties_t *parent, obs_properties_t *group) { /* Scan the group for the parent. */ obs_property_t *p, *tmp; HASH_ITER (hh, group->properties, p, tmp) { if (p->type != OBS_PROPERTY_GROUP) continue; obs_properties_t *cprops = obs_property_group_content(p); if (cprops == parent) { /* Contains find_props */ return true; } else if (cprops == group) { /* Contains self, shouldn't be possible but * lets verify anyway. */ return true; } if (check_property_group_recursion(parent, cprops)) return true; } return false; } static bool check_property_group_duplicates(obs_properties_t *parent, obs_properties_t *group) { obs_property_t *p, *tmp; HASH_ITER (hh, group->properties, p, tmp) { if (has_prop(parent, p->name)) return true; } return false; } obs_property_t *obs_properties_add_group(obs_properties_t *props, const char *name, const char *desc, enum obs_group_type type, obs_properties_t *group) { if (!props || has_prop(props, name)) return NULL; if (!group) return NULL; /* Prevent recursion. 
*/ if (props == group) return NULL; if (check_property_group_recursion(props, group)) return NULL; /* Prevent duplicate properties */ if (check_property_group_duplicates(props, group)) return NULL; obs_property_t *p = new_prop(props, name, desc, OBS_PROPERTY_GROUP); props->groups++; group->parent = p; struct group_data *data = get_property_data(p); data->type = type; data->content = group; return p; } /* ------------------------------------------------------------------------- */ static inline bool is_combo(struct obs_property *p) { return p->type == OBS_PROPERTY_LIST; } static inline struct list_data *get_list_data(struct obs_property *p) { if (!p || !is_combo(p)) return NULL; return get_property_data(p); } static inline struct list_data *get_list_fmt_data(struct obs_property *p, enum obs_combo_format format) { struct list_data *data = get_list_data(p); return (data && data->format == format) ? data : NULL; } /* ------------------------------------------------------------------------- */ bool obs_property_next(obs_property_t **p) { if (!p || !*p) return false; *p = (*p)->hh.next; return *p != NULL; } void obs_property_set_modified_callback(obs_property_t *p, obs_property_modified_t modified) { if (p) p->modified = modified; } void obs_property_set_modified_callback2(obs_property_t *p, obs_property_modified2_t modified2, void *priv) { if (p) { p->modified2 = modified2; p->priv = priv; } } bool obs_property_modified(obs_property_t *p, obs_data_t *settings) { if (p) { if (p->modified) { obs_properties_t *top = get_topmost_parent(p->parent); return p->modified(top, p, settings); } else if (p->modified2) { obs_properties_t *top = get_topmost_parent(p->parent); return p->modified2(p->priv, top, p, settings); } } return false; } bool obs_property_button_clicked(obs_property_t *p, void *obj) { struct obs_context_data *context = obj; if (p) { struct button_data *data = get_type_data(p, OBS_PROPERTY_BUTTON); if (data && data->callback) { obs_properties_t *top = 
get_topmost_parent(p->parent); if (p->priv) return data->callback(top, p, p->priv); return data->callback(top, p, (context ? context->data : NULL)); } } return false; } void obs_property_set_visible(obs_property_t *p, bool visible) { if (p) p->visible = visible; } void obs_property_set_enabled(obs_property_t *p, bool enabled) { if (p) p->enabled = enabled; } void obs_property_set_description(obs_property_t *p, const char *description) { if (p) { bfree(p->desc); p->desc = description && *description ? bstrdup(description) : NULL; } } void obs_property_set_long_description(obs_property_t *p, const char *long_desc) { if (p) { bfree(p->long_desc); p->long_desc = long_desc && *long_desc ? bstrdup(long_desc) : NULL; } } const char *obs_property_name(obs_property_t *p) { return p ? p->name : NULL; } const char *obs_property_description(obs_property_t *p) { return p ? p->desc : NULL; } const char *obs_property_long_description(obs_property_t *p) { return p ? p->long_desc : NULL; } enum obs_property_type obs_property_get_type(obs_property_t *p) { return p ? p->type : OBS_PROPERTY_INVALID; } bool obs_property_enabled(obs_property_t *p) { return p ? p->enabled : false; } bool obs_property_visible(obs_property_t *p) { return p ? p->visible : false; } int obs_property_int_min(obs_property_t *p) { struct int_data *data = get_type_data(p, OBS_PROPERTY_INT); return data ? data->min : 0; } int obs_property_int_max(obs_property_t *p) { struct int_data *data = get_type_data(p, OBS_PROPERTY_INT); return data ? data->max : 0; } int obs_property_int_step(obs_property_t *p) { struct int_data *data = get_type_data(p, OBS_PROPERTY_INT); return data ? data->step : 0; } enum obs_number_type obs_property_int_type(obs_property_t *p) { struct int_data *data = get_type_data(p, OBS_PROPERTY_INT); return data ? data->type : OBS_NUMBER_SCROLLER; } const char *obs_property_int_suffix(obs_property_t *p) { struct int_data *data = get_type_data(p, OBS_PROPERTY_INT); return data ? 
data->suffix : NULL; } double obs_property_float_min(obs_property_t *p) { struct float_data *data = get_type_data(p, OBS_PROPERTY_FLOAT); return data ? data->min : 0; } double obs_property_float_max(obs_property_t *p) { struct float_data *data = get_type_data(p, OBS_PROPERTY_FLOAT); return data ? data->max : 0; } double obs_property_float_step(obs_property_t *p) { struct float_data *data = get_type_data(p, OBS_PROPERTY_FLOAT); return data ? data->step : 0; } const char *obs_property_float_suffix(obs_property_t *p) { struct float_data *data = get_type_data(p, OBS_PROPERTY_FLOAT); return data ? data->suffix : NULL; } enum obs_number_type obs_property_float_type(obs_property_t *p) { struct float_data *data = get_type_data(p, OBS_PROPERTY_FLOAT); return data ? data->type : OBS_NUMBER_SCROLLER; } enum obs_text_type obs_property_text_type(obs_property_t *p) { struct text_data *data = get_type_data(p, OBS_PROPERTY_TEXT); return data ? data->type : OBS_TEXT_DEFAULT; } bool obs_property_text_monospace(obs_property_t *p) { struct text_data *data = get_type_data(p, OBS_PROPERTY_TEXT); return data ? data->monospace : false; } enum obs_text_info_type obs_property_text_info_type(obs_property_t *p) { struct text_data *data = get_type_data(p, OBS_PROPERTY_TEXT); return data ? data->info_type : OBS_TEXT_INFO_NORMAL; } bool obs_property_text_info_word_wrap(obs_property_t *p) { struct text_data *data = get_type_data(p, OBS_PROPERTY_TEXT); return data ? data->info_word_wrap : true; } enum obs_path_type obs_property_path_type(obs_property_t *p) { struct path_data *data = get_type_data(p, OBS_PROPERTY_PATH); return data ? data->type : OBS_PATH_DIRECTORY; } const char *obs_property_path_filter(obs_property_t *p) { struct path_data *data = get_type_data(p, OBS_PROPERTY_PATH); return data ? data->filter : NULL; } const char *obs_property_path_default_path(obs_property_t *p) { struct path_data *data = get_type_data(p, OBS_PROPERTY_PATH); return data ? 
data->default_path : NULL; } enum obs_combo_type obs_property_list_type(obs_property_t *p) { struct list_data *data = get_list_data(p); return data ? data->type : OBS_COMBO_TYPE_INVALID; } enum obs_combo_format obs_property_list_format(obs_property_t *p) { struct list_data *data = get_list_data(p); return data ? data->format : OBS_COMBO_FORMAT_INVALID; } void obs_property_int_set_limits(obs_property_t *p, int min, int max, int step) { struct int_data *data = get_type_data(p, OBS_PROPERTY_INT); if (!data) return; data->min = min; data->max = max; data->step = step; } void obs_property_float_set_limits(obs_property_t *p, double min, double max, double step) { struct float_data *data = get_type_data(p, OBS_PROPERTY_FLOAT); if (!data) return; data->min = min; data->max = max; data->step = step; } void obs_property_int_set_suffix(obs_property_t *p, const char *suffix) { struct int_data *data = get_type_data(p, OBS_PROPERTY_INT); if (!data) return; bfree(data->suffix); data->suffix = bstrdup(suffix); } void obs_property_float_set_suffix(obs_property_t *p, const char *suffix) { struct float_data *data = get_type_data(p, OBS_PROPERTY_FLOAT); if (!data) return; bfree(data->suffix); data->suffix = bstrdup(suffix); } void obs_property_text_set_monospace(obs_property_t *p, bool monospace) { struct text_data *data = get_type_data(p, OBS_PROPERTY_TEXT); if (!data) return; data->monospace = monospace; } void obs_property_text_set_info_type(obs_property_t *p, enum obs_text_info_type type) { struct text_data *data = get_type_data(p, OBS_PROPERTY_TEXT); if (!data) return; data->info_type = type; } void obs_property_text_set_info_word_wrap(obs_property_t *p, bool word_wrap) { struct text_data *data = get_type_data(p, OBS_PROPERTY_TEXT); if (!data) return; data->info_word_wrap = word_wrap; } void obs_property_button_set_type(obs_property_t *p, enum obs_button_type type) { struct button_data *data = get_type_data(p, OBS_PROPERTY_BUTTON); if (!data) return; data->type = type; } void 
obs_property_button_set_url(obs_property_t *p, char *url) { struct button_data *data = get_type_data(p, OBS_PROPERTY_BUTTON); if (!data) return; data->url = bstrdup(url); } void obs_property_list_clear(obs_property_t *p) { struct list_data *data = get_list_data(p); if (data) list_data_free(data); } static size_t add_item(struct list_data *data, const char *name, const void *val) { struct list_item item = {NULL}; item.name = bstrdup(name); if (data->format == OBS_COMBO_FORMAT_INT) item.ll = *(const long long *)val; else if (data->format == OBS_COMBO_FORMAT_FLOAT) item.d = *(const double *)val; else if (data->format == OBS_COMBO_FORMAT_BOOL) item.b = *(const bool *)val; else item.str = bstrdup(val); return da_push_back(data->items, &item); } static void insert_item(struct list_data *data, size_t idx, const char *name, const void *val) { struct list_item item = {NULL}; item.name = bstrdup(name); if (data->format == OBS_COMBO_FORMAT_INT) item.ll = *(const long long *)val; else if (data->format == OBS_COMBO_FORMAT_FLOAT) item.d = *(const double *)val; else if (data->format == OBS_COMBO_FORMAT_BOOL) item.b = *(const bool *)val; else item.str = bstrdup(val); da_insert(data->items, idx, &item); } size_t obs_property_list_add_string(obs_property_t *p, const char *name, const char *val) { struct list_data *data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_STRING) return add_item(data, name, val); return 0; } size_t obs_property_list_add_int(obs_property_t *p, const char *name, long long val) { struct list_data *data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_INT) return add_item(data, name, &val); return 0; } size_t obs_property_list_add_float(obs_property_t *p, const char *name, double val) { struct list_data *data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_FLOAT) return add_item(data, name, &val); return 0; } size_t obs_property_list_add_bool(obs_property_t *p, const char *name, bool val) { struct list_data 
*data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_BOOL) return add_item(data, name, &val); return 0; } void obs_property_list_insert_string(obs_property_t *p, size_t idx, const char *name, const char *val) { struct list_data *data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_STRING) insert_item(data, idx, name, val); } void obs_property_list_insert_int(obs_property_t *p, size_t idx, const char *name, long long val) { struct list_data *data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_INT) insert_item(data, idx, name, &val); } void obs_property_list_insert_float(obs_property_t *p, size_t idx, const char *name, double val) { struct list_data *data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_FLOAT) insert_item(data, idx, name, &val); } void obs_property_list_insert_bool(obs_property_t *p, size_t idx, const char *name, bool val) { struct list_data *data = get_list_data(p); if (data && data->format == OBS_COMBO_FORMAT_BOOL) insert_item(data, idx, name, &val); } void obs_property_list_item_remove(obs_property_t *p, size_t idx) { struct list_data *data = get_list_data(p); if (data && idx < data->items.num) { list_item_free(data, data->items.array + idx); da_erase(data->items, idx); } } size_t obs_property_list_item_count(obs_property_t *p) { struct list_data *data = get_list_data(p); return data ? data->items.num : 0; } bool obs_property_list_item_disabled(obs_property_t *p, size_t idx) { struct list_data *data = get_list_data(p); return (data && idx < data->items.num) ? data->items.array[idx].disabled : false; } void obs_property_list_item_disable(obs_property_t *p, size_t idx, bool disabled) { struct list_data *data = get_list_data(p); if (!data || idx >= data->items.num) return; data->items.array[idx].disabled = disabled; } const char *obs_property_list_item_name(obs_property_t *p, size_t idx) { struct list_data *data = get_list_data(p); return (data && idx < data->items.num) ? 
data->items.array[idx].name : NULL; }

const char *obs_property_list_item_string(obs_property_t *p, size_t idx)
{
	struct list_data *data = get_list_fmt_data(p, OBS_COMBO_FORMAT_STRING);
	return (data && idx < data->items.num) ? data->items.array[idx].str : NULL;
}

long long obs_property_list_item_int(obs_property_t *p, size_t idx)
{
	struct list_data *data = get_list_fmt_data(p, OBS_COMBO_FORMAT_INT);
	return (data && idx < data->items.num) ? data->items.array[idx].ll : 0;
}

double obs_property_list_item_float(obs_property_t *p, size_t idx)
{
	struct list_data *data = get_list_fmt_data(p, OBS_COMBO_FORMAT_FLOAT);
	return (data && idx < data->items.num) ? data->items.array[idx].d : 0.0;
}

bool obs_property_list_item_bool(obs_property_t *p, size_t idx)
{
	struct list_data *data = get_list_fmt_data(p, OBS_COMBO_FORMAT_BOOL);
	/* read the bool member of the union, not the double */
	return (data && idx < data->items.num) ? data->items.array[idx].b : false;
}

enum obs_editable_list_type obs_property_editable_list_type(obs_property_t *p)
{
	struct editable_list_data *data = get_type_data(p, OBS_PROPERTY_EDITABLE_LIST);
	return data ? data->type : OBS_EDITABLE_LIST_TYPE_STRINGS;
}

const char *obs_property_editable_list_filter(obs_property_t *p)
{
	struct editable_list_data *data = get_type_data(p, OBS_PROPERTY_EDITABLE_LIST);
	return data ? data->filter : NULL;
}

const char *obs_property_editable_list_default_path(obs_property_t *p)
{
	struct editable_list_data *data = get_type_data(p, OBS_PROPERTY_EDITABLE_LIST);
	return data ?
data->default_path : NULL; } /* ------------------------------------------------------------------------- */ /* OBS_PROPERTY_FRAME_RATE */ void obs_property_frame_rate_clear(obs_property_t *p) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); if (!data) return; frame_rate_data_options_free(data); frame_rate_data_ranges_free(data); } void obs_property_frame_rate_options_clear(obs_property_t *p) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); if (!data) return; frame_rate_data_options_free(data); } void obs_property_frame_rate_fps_ranges_clear(obs_property_t *p) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); if (!data) return; frame_rate_data_ranges_free(data); } size_t obs_property_frame_rate_option_add(obs_property_t *p, const char *name, const char *description) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); if (!data) return DARRAY_INVALID; struct frame_rate_option *opt = da_push_back_new(data->extra_options); opt->name = bstrdup(name); opt->description = bstrdup(description); return data->extra_options.num - 1; } size_t obs_property_frame_rate_fps_range_add(obs_property_t *p, struct media_frames_per_second min, struct media_frames_per_second max) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); if (!data) return DARRAY_INVALID; struct frame_rate_range *rng = da_push_back_new(data->ranges); rng->min_time = min; rng->max_time = max; return data->ranges.num - 1; } void obs_property_frame_rate_option_insert(obs_property_t *p, size_t idx, const char *name, const char *description) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); if (!data) return; struct frame_rate_option *opt = da_insert_new(data->extra_options, idx); opt->name = bstrdup(name); opt->description = bstrdup(description); } void obs_property_frame_rate_fps_range_insert(obs_property_t *p, size_t idx, struct media_frames_per_second min, 
struct media_frames_per_second max) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); if (!data) return; struct frame_rate_range *rng = da_insert_new(data->ranges, idx); rng->min_time = min; rng->max_time = max; } size_t obs_property_frame_rate_options_count(obs_property_t *p) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); return data ? data->extra_options.num : 0; } const char *obs_property_frame_rate_option_name(obs_property_t *p, size_t idx) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); return data && data->extra_options.num > idx ? data->extra_options.array[idx].name : NULL; } const char *obs_property_frame_rate_option_description(obs_property_t *p, size_t idx) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); return data && data->extra_options.num > idx ? data->extra_options.array[idx].description : NULL; } size_t obs_property_frame_rate_fps_ranges_count(obs_property_t *p) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); return data ? data->ranges.num : 0; } struct media_frames_per_second obs_property_frame_rate_fps_range_min(obs_property_t *p, size_t idx) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); return data && data->ranges.num > idx ? data->ranges.array[idx].min_time : (struct media_frames_per_second){0}; } struct media_frames_per_second obs_property_frame_rate_fps_range_max(obs_property_t *p, size_t idx) { struct frame_rate_data *data = get_type_data(p, OBS_PROPERTY_FRAME_RATE); return data && data->ranges.num > idx ? data->ranges.array[idx].max_time : (struct media_frames_per_second){0}; } enum obs_group_type obs_property_group_type(obs_property_t *p) { struct group_data *data = get_type_data(p, OBS_PROPERTY_GROUP); return data ? 
data->type : OBS_COMBO_INVALID;
}

obs_properties_t *obs_property_group_content(obs_property_t *p)
{
	struct group_data *data = get_type_data(p, OBS_PROPERTY_GROUP);
	return data ? data->content : NULL;
}

enum obs_button_type obs_property_button_type(obs_property_t *p)
{
	struct button_data *data = get_type_data(p, OBS_PROPERTY_BUTTON);
	return data ? data->type : OBS_BUTTON_DEFAULT;
}

const char *obs_property_button_url(obs_property_t *p)
{
	struct button_data *data = get_type_data(p, OBS_PROPERTY_BUTTON);
	return data ? data->url : "";
}

obs-studio-32.1.0-sources/libobs/obs-hotkey.c

/******************************************************************************
    Copyright (C) 2014-2015 by Ruwen Hahn

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <inttypes.h>

#include "obs-internal.h"

/* Since ids are just sequential size_t integers, we don't really need a
 * hash function to get an even distribution across buckets.
 * (Realistically this should never wrap, who has 4.29 billion hotkeys?!)
*/ #undef HASH_FUNCTION #define HASH_FUNCTION(s, len, hashv) (hashv) = *s % UINT_MAX /* Custom definitions to make adding/looking up size_t integers easier */ #define HASH_ADD_HKEY(head, idfield, add) HASH_ADD(hh, head, idfield, sizeof(size_t), add) #define HASH_FIND_HKEY(head, id, out) HASH_FIND(hh, head, &(id), sizeof(size_t), out) static inline bool lock(void) { if (!obs) return false; pthread_mutex_lock(&obs->hotkeys.mutex); return true; } static inline void unlock(void) { pthread_mutex_unlock(&obs->hotkeys.mutex); } obs_hotkey_id obs_hotkey_get_id(const obs_hotkey_t *key) { return key->id; } const char *obs_hotkey_get_name(const obs_hotkey_t *key) { return key->name; } const char *obs_hotkey_get_description(const obs_hotkey_t *key) { return key->description; } obs_hotkey_registerer_t obs_hotkey_get_registerer_type(const obs_hotkey_t *key) { return key->registerer_type; } void *obs_hotkey_get_registerer(const obs_hotkey_t *key) { return key->registerer; } obs_hotkey_id obs_hotkey_get_pair_partner_id(const obs_hotkey_t *key) { return key->pair_partner_id; } obs_key_combination_t obs_hotkey_binding_get_key_combination(obs_hotkey_binding_t *binding) { return binding->key; } obs_hotkey_id obs_hotkey_binding_get_hotkey_id(obs_hotkey_binding_t *binding) { return binding->hotkey_id; } obs_hotkey_t *obs_hotkey_binding_get_hotkey(obs_hotkey_binding_t *binding) { return binding->hotkey; } void obs_hotkey_set_name(obs_hotkey_id id, const char *name) { obs_hotkey_t *hotkey; HASH_FIND_HKEY(obs->hotkeys.hotkeys, id, hotkey); if (!hotkey) return; bfree(hotkey->name); hotkey->name = bstrdup(name); } void obs_hotkey_set_description(obs_hotkey_id id, const char *desc) { obs_hotkey_t *hotkey; HASH_FIND_HKEY(obs->hotkeys.hotkeys, id, hotkey); if (!hotkey) return; bfree(hotkey->description); hotkey->description = bstrdup(desc); } void obs_hotkey_pair_set_names(obs_hotkey_pair_id id, const char *name0, const char *name1) { obs_hotkey_pair_t *pair; 
HASH_FIND_HKEY(obs->hotkeys.hotkey_pairs, id, pair); if (!pair) return; obs_hotkey_set_name(pair->id[0], name0); obs_hotkey_set_name(pair->id[1], name1); } void obs_hotkey_pair_set_descriptions(obs_hotkey_pair_id id, const char *desc0, const char *desc1) { obs_hotkey_pair_t *pair; HASH_FIND_HKEY(obs->hotkeys.hotkey_pairs, id, pair); if (!pair) return; obs_hotkey_set_description(pair->id[0], desc0); obs_hotkey_set_description(pair->id[1], desc1); } static void hotkey_signal(const char *signal, obs_hotkey_t *hotkey) { calldata_t data; calldata_init(&data); calldata_set_ptr(&data, "key", hotkey); signal_handler_signal(obs->hotkeys.signals, signal, &data); calldata_free(&data); } static inline void load_bindings(obs_hotkey_t *hotkey, obs_data_array_t *data); static inline void context_add_hotkey(struct obs_context_data *context, obs_hotkey_id id) { da_push_back(context->hotkeys, &id); } static inline obs_hotkey_id obs_hotkey_register_internal(obs_hotkey_registerer_t type, void *registerer, struct obs_context_data *context, const char *name, const char *description, obs_hotkey_func func, void *data) { if ((obs->hotkeys.next_id + 1) == OBS_INVALID_HOTKEY_ID) blog(LOG_WARNING, "obs-hotkey: Available hotkey ids exhausted"); obs_hotkey_id result = obs->hotkeys.next_id++; obs_hotkey_t *hotkey = bzalloc(sizeof(obs_hotkey_t)); hotkey->id = result; hotkey->name = bstrdup(name); hotkey->description = bstrdup(description); hotkey->func = func; hotkey->data = data; hotkey->registerer_type = type; hotkey->registerer = registerer; hotkey->pair_partner_id = OBS_INVALID_HOTKEY_PAIR_ID; HASH_ADD_HKEY(obs->hotkeys.hotkeys, id, hotkey); if (context) { obs_data_array_t *data = obs_data_get_array(context->hotkey_data, name); load_bindings(hotkey, data); obs_data_array_release(data); context_add_hotkey(context, result); } hotkey_signal("hotkey_register", hotkey); return result; } obs_hotkey_id obs_hotkey_register_frontend(const char *name, const char *description, obs_hotkey_func func, void 
*data) { if (!lock()) return OBS_INVALID_HOTKEY_ID; obs_hotkey_id id = obs_hotkey_register_internal(OBS_HOTKEY_REGISTERER_FRONTEND, NULL, NULL, name, description, func, data); unlock(); return id; } obs_hotkey_id obs_hotkey_register_encoder(obs_encoder_t *encoder, const char *name, const char *description, obs_hotkey_func func, void *data) { if (!encoder || !lock()) return OBS_INVALID_HOTKEY_ID; obs_hotkey_id id = obs_hotkey_register_internal(OBS_HOTKEY_REGISTERER_ENCODER, obs_encoder_get_weak_encoder(encoder), &encoder->context, name, description, func, data); unlock(); return id; } obs_hotkey_id obs_hotkey_register_output(obs_output_t *output, const char *name, const char *description, obs_hotkey_func func, void *data) { if (!output || !lock()) return OBS_INVALID_HOTKEY_ID; obs_hotkey_id id = obs_hotkey_register_internal(OBS_HOTKEY_REGISTERER_OUTPUT, obs_output_get_weak_output(output), &output->context, name, description, func, data); unlock(); return id; } obs_hotkey_id obs_hotkey_register_service(obs_service_t *service, const char *name, const char *description, obs_hotkey_func func, void *data) { if (!service || !lock()) return OBS_INVALID_HOTKEY_ID; obs_hotkey_id id = obs_hotkey_register_internal(OBS_HOTKEY_REGISTERER_SERVICE, obs_service_get_weak_service(service), &service->context, name, description, func, data); unlock(); return id; } obs_hotkey_id obs_hotkey_register_source(obs_source_t *source, const char *name, const char *description, obs_hotkey_func func, void *data) { if (!source || source->context.private || !lock()) return OBS_INVALID_HOTKEY_ID; obs_hotkey_id id = obs_hotkey_register_internal(OBS_HOTKEY_REGISTERER_SOURCE, obs_source_get_weak_source(source), &source->context, name, description, func, data); unlock(); return id; } static obs_hotkey_pair_t *create_hotkey_pair(struct obs_context_data *context, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1) { if ((obs->hotkeys.next_pair_id + 1) == 
OBS_INVALID_HOTKEY_PAIR_ID) blog(LOG_WARNING, "obs-hotkey: Available hotkey pair ids " "exhausted"); obs_hotkey_pair_t *pair = bzalloc(sizeof(obs_hotkey_pair_t)); pair->pair_id = obs->hotkeys.next_pair_id++; pair->func[0] = func0; pair->func[1] = func1; pair->id[0] = OBS_INVALID_HOTKEY_ID; pair->id[1] = OBS_INVALID_HOTKEY_ID; pair->data[0] = data0; pair->data[1] = data1; HASH_ADD_HKEY(obs->hotkeys.hotkey_pairs, pair_id, pair); if (context) da_push_back(context->hotkey_pairs, &pair->pair_id); return pair; } static void obs_hotkey_pair_first_func(void *data, obs_hotkey_id id, obs_hotkey_t *hotkey, bool pressed) { UNUSED_PARAMETER(id); obs_hotkey_pair_t *pair = data; if (pair->pressed1) return; if (pair->pressed0 && !pressed) pair->pressed0 = false; else if (pair->func[0](pair->data[0], pair->pair_id, hotkey, pressed)) pair->pressed0 = pressed; } static void obs_hotkey_pair_second_func(void *data, obs_hotkey_id id, obs_hotkey_t *hotkey, bool pressed) { UNUSED_PARAMETER(id); obs_hotkey_pair_t *pair = data; if (pair->pressed0) return; if (pair->pressed1 && !pressed) pair->pressed1 = false; else if (pair->func[1](pair->data[1], pair->pair_id, hotkey, pressed)) pair->pressed1 = pressed; } static obs_hotkey_pair_id register_hotkey_pair_internal(obs_hotkey_registerer_t type, void *registerer, void *(*weak_ref)(void *), struct obs_context_data *context, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1) { if (!lock()) return OBS_INVALID_HOTKEY_PAIR_ID; obs_hotkey_pair_t *pair = create_hotkey_pair(context, func0, func1, data0, data1); pair->id[0] = obs_hotkey_register_internal(type, weak_ref(registerer), context, name0, description0, obs_hotkey_pair_first_func, pair); pair->id[1] = obs_hotkey_register_internal(type, weak_ref(registerer), context, name1, description1, obs_hotkey_pair_second_func, pair); obs_hotkey_t *hotkey_1, *hotkey_2; 
HASH_FIND_HKEY(obs->hotkeys.hotkeys, pair->id[0], hotkey_1); HASH_FIND_HKEY(obs->hotkeys.hotkeys, pair->id[1], hotkey_2); if (hotkey_1) hotkey_1->pair_partner_id = pair->id[1]; if (hotkey_2) hotkey_2->pair_partner_id = pair->id[0]; obs_hotkey_pair_id id = pair->pair_id; unlock(); return id; } static inline void *obs_id_(void *id_) { return id_; } obs_hotkey_pair_id obs_hotkey_pair_register_frontend(const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1) { return register_hotkey_pair_internal(OBS_HOTKEY_REGISTERER_FRONTEND, NULL, obs_id_, NULL, name0, description0, name1, description1, func0, func1, data0, data1); } static inline void *weak_encoder_ref(void *ref) { return obs_encoder_get_weak_encoder(ref); } obs_hotkey_pair_id obs_hotkey_pair_register_encoder(obs_encoder_t *encoder, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1) { if (!encoder) return OBS_INVALID_HOTKEY_PAIR_ID; return register_hotkey_pair_internal(OBS_HOTKEY_REGISTERER_ENCODER, encoder, weak_encoder_ref, &encoder->context, name0, description0, name1, description1, func0, func1, data0, data1); } static inline void *weak_output_ref(void *ref) { return obs_output_get_weak_output(ref); } obs_hotkey_pair_id obs_hotkey_pair_register_output(obs_output_t *output, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1) { if (!output) return OBS_INVALID_HOTKEY_PAIR_ID; return register_hotkey_pair_internal(OBS_HOTKEY_REGISTERER_OUTPUT, output, weak_output_ref, &output->context, name0, description0, name1, description1, func0, func1, data0, data1); } static inline void *weak_service_ref(void *ref) { return obs_service_get_weak_service(ref); } 
obs_hotkey_pair_id obs_hotkey_pair_register_service(obs_service_t *service, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1) { if (!service) return OBS_INVALID_HOTKEY_PAIR_ID; return register_hotkey_pair_internal(OBS_HOTKEY_REGISTERER_SERVICE, service, weak_service_ref, &service->context, name0, description0, name1, description1, func0, func1, data0, data1); } static inline void *weak_source_ref(void *ref) { return obs_source_get_weak_source(ref); } obs_hotkey_pair_id obs_hotkey_pair_register_source(obs_source_t *source, const char *name0, const char *description0, const char *name1, const char *description1, obs_hotkey_active_func func0, obs_hotkey_active_func func1, void *data0, void *data1) { if (!source) return OBS_INVALID_HOTKEY_PAIR_ID; return register_hotkey_pair_internal(OBS_HOTKEY_REGISTERER_SOURCE, source, weak_source_ref, &source->context, name0, description0, name1, description1, func0, func1, data0, data1); } typedef bool (*obs_hotkey_binding_internal_enum_func)(void *data, size_t idx, obs_hotkey_binding_t *binding); static inline void enum_bindings(obs_hotkey_binding_internal_enum_func func, void *data) { const size_t num = obs->hotkeys.bindings.num; obs_hotkey_binding_t *array = obs->hotkeys.bindings.array; for (size_t i = 0; i < num; i++) { if (!func(data, i, &array[i])) break; } } typedef bool (*obs_hotkey_internal_enum_func)(void *data, obs_hotkey_t *hotkey); static inline void enum_context_hotkeys(struct obs_context_data *context, obs_hotkey_internal_enum_func func, void *data) { const size_t num = context->hotkeys.num; const obs_hotkey_id *array = context->hotkeys.array; obs_hotkey_t *hotkey; for (size_t i = 0; i < num; i++) { HASH_FIND_HKEY(obs->hotkeys.hotkeys, array[i], hotkey); if (!hotkey) continue; if (!func(data, hotkey)) break; } } static inline void load_modifier(uint32_t *modifiers, obs_data_t *data, const char 
*name, uint32_t flag) { if (obs_data_get_bool(data, name)) *modifiers |= flag; } static inline void create_binding(obs_hotkey_t *hotkey, obs_key_combination_t combo) { obs_hotkey_binding_t *binding = da_push_back_new(obs->hotkeys.bindings); if (!binding) return; binding->key = combo; binding->hotkey_id = hotkey->id; binding->hotkey = hotkey; } static inline void load_binding(obs_hotkey_t *hotkey, obs_data_t *data) { if (!hotkey || !data) return; obs_key_combination_t combo = {0}; uint32_t *modifiers = &combo.modifiers; load_modifier(modifiers, data, "shift", INTERACT_SHIFT_KEY); load_modifier(modifiers, data, "control", INTERACT_CONTROL_KEY); load_modifier(modifiers, data, "alt", INTERACT_ALT_KEY); load_modifier(modifiers, data, "command", INTERACT_COMMAND_KEY); combo.key = obs_key_from_name(obs_data_get_string(data, "key")); if (!modifiers && (combo.key == OBS_KEY_NONE || combo.key >= OBS_KEY_LAST_VALUE)) return; create_binding(hotkey, combo); } static inline void load_bindings(obs_hotkey_t *hotkey, obs_data_array_t *data) { const size_t count = obs_data_array_count(data); for (size_t i = 0; i < count; i++) { obs_data_t *item = obs_data_array_item(data, i); load_binding(hotkey, item); obs_data_release(item); } if (count) hotkey_signal("hotkey_bindings_changed", hotkey); } static inline bool remove_bindings(obs_hotkey_id id); void obs_hotkey_load_bindings(obs_hotkey_id id, obs_key_combination_t *combinations, size_t num) { if (!lock()) return; obs_hotkey_t *hotkey; HASH_FIND_HKEY(obs->hotkeys.hotkeys, id, hotkey); if (hotkey) { bool changed = remove_bindings(id); for (size_t i = 0; i < num; i++) create_binding(hotkey, combinations[i]); if (num || changed) hotkey_signal("hotkey_bindings_changed", hotkey); } unlock(); } void obs_hotkey_load(obs_hotkey_id id, obs_data_array_t *data) { if (!lock()) return; obs_hotkey_t *hotkey; HASH_FIND_HKEY(obs->hotkeys.hotkeys, id, hotkey); if (hotkey) { remove_bindings(id); load_bindings(hotkey, data); } unlock(); } static inline 
bool enum_load_bindings(void *data, obs_hotkey_t *hotkey) { obs_data_array_t *hotkey_data = obs_data_get_array(data, hotkey->name); if (!hotkey_data) return true; load_bindings(hotkey, hotkey_data); obs_data_array_release(hotkey_data); return true; } void obs_hotkeys_load_encoder(obs_encoder_t *encoder, obs_data_t *hotkeys) { if (!encoder || !hotkeys) return; if (!lock()) return; enum_context_hotkeys(&encoder->context, enum_load_bindings, hotkeys); unlock(); } void obs_hotkeys_load_output(obs_output_t *output, obs_data_t *hotkeys) { if (!output || !hotkeys) return; if (!lock()) return; enum_context_hotkeys(&output->context, enum_load_bindings, hotkeys); unlock(); } void obs_hotkeys_load_service(obs_service_t *service, obs_data_t *hotkeys) { if (!service || !hotkeys) return; if (!lock()) return; enum_context_hotkeys(&service->context, enum_load_bindings, hotkeys); unlock(); } void obs_hotkeys_load_source(obs_source_t *source, obs_data_t *hotkeys) { if (!source || !hotkeys) return; if (!lock()) return; enum_context_hotkeys(&source->context, enum_load_bindings, hotkeys); unlock(); } void obs_hotkey_pair_load(obs_hotkey_pair_id id, obs_data_array_t *data0, obs_data_array_t *data1) { if ((!data0 && !data1) || !lock()) return; obs_hotkey_pair_t *pair; HASH_FIND_HKEY(obs->hotkeys.hotkey_pairs, id, pair); if (!pair) goto unlock; obs_hotkey_t *p1, *p2; HASH_FIND_HKEY(obs->hotkeys.hotkeys, pair->id[0], p1); HASH_FIND_HKEY(obs->hotkeys.hotkeys, pair->id[1], p2); if (p1) { remove_bindings(pair->id[0]); load_bindings(p1, data0); } if (p2) { remove_bindings(pair->id[1]); load_bindings(p2, data1); } unlock: unlock(); } static inline void save_modifier(uint32_t modifiers, obs_data_t *data, const char *name, uint32_t flag) { if ((modifiers & flag) == flag) obs_data_set_bool(data, name, true); } struct save_bindings_helper_t { obs_data_array_t *array; obs_hotkey_t *hotkey; }; static inline bool save_bindings_helper(void *data, size_t idx, obs_hotkey_binding_t *binding) { 
UNUSED_PARAMETER(idx); struct save_bindings_helper_t *h = data; if (h->hotkey->id != binding->hotkey_id) return true; obs_data_t *hotkey = obs_data_create(); uint32_t modifiers = binding->key.modifiers; save_modifier(modifiers, hotkey, "shift", INTERACT_SHIFT_KEY); save_modifier(modifiers, hotkey, "control", INTERACT_CONTROL_KEY); save_modifier(modifiers, hotkey, "alt", INTERACT_ALT_KEY); save_modifier(modifiers, hotkey, "command", INTERACT_COMMAND_KEY); obs_data_set_string(hotkey, "key", obs_key_to_name(binding->key.key)); obs_data_array_push_back(h->array, hotkey); obs_data_release(hotkey); return true; } static inline obs_data_array_t *save_hotkey(obs_hotkey_t *hotkey) { obs_data_array_t *data = obs_data_array_create(); struct save_bindings_helper_t arg = {data, hotkey}; enum_bindings(save_bindings_helper, &arg); return data; } obs_data_array_t *obs_hotkey_save(obs_hotkey_id id) { obs_data_array_t *result = NULL; if (!lock()) return result; obs_hotkey_t *hotkey; HASH_FIND_HKEY(obs->hotkeys.hotkeys, id, hotkey); if (hotkey) result = save_hotkey(hotkey); unlock(); return result; } void obs_hotkey_pair_save(obs_hotkey_pair_id id, obs_data_array_t **p_data0, obs_data_array_t **p_data1) { if ((!p_data0 && !p_data1) || !lock()) return; obs_hotkey_pair_t *pair; HASH_FIND_HKEY(obs->hotkeys.hotkey_pairs, id, pair); if (!pair) goto unlock; obs_hotkey_t *hotkey; if (p_data0) { HASH_FIND_HKEY(obs->hotkeys.hotkeys, pair->id[0], hotkey); if (hotkey) *p_data0 = save_hotkey(hotkey); } if (p_data1) { HASH_FIND_HKEY(obs->hotkeys.hotkeys, pair->id[1], hotkey); if (hotkey) *p_data1 = save_hotkey(hotkey); } unlock: unlock(); } static inline bool enum_save_hotkey(void *data, obs_hotkey_t *hotkey) { obs_data_array_t *hotkey_data = save_hotkey(hotkey); obs_data_set_array(data, hotkey->name, hotkey_data); obs_data_array_release(hotkey_data); return true; } static inline obs_data_t *save_context_hotkeys(struct obs_context_data *context) { if (!context->hotkeys.num) return NULL; 
obs_data_t *result = obs_data_create(); enum_context_hotkeys(context, enum_save_hotkey, result); return result; } obs_data_t *obs_hotkeys_save_encoder(obs_encoder_t *encoder) { obs_data_t *result = NULL; if (!lock()) return result; result = save_context_hotkeys(&encoder->context); unlock(); return result; } obs_data_t *obs_hotkeys_save_output(obs_output_t *output) { obs_data_t *result = NULL; if (!lock()) return result; result = save_context_hotkeys(&output->context); unlock(); return result; } obs_data_t *obs_hotkeys_save_service(obs_service_t *service) { obs_data_t *result = NULL; if (!lock()) return result; result = save_context_hotkeys(&service->context); unlock(); return result; } obs_data_t *obs_hotkeys_save_source(obs_source_t *source) { obs_data_t *result = NULL; if (!lock()) return result; result = save_context_hotkeys(&source->context); unlock(); return result; } struct binding_find_data { obs_hotkey_id id; size_t *idx; bool found; }; static inline bool binding_finder(void *data, size_t idx, obs_hotkey_binding_t *binding) { struct binding_find_data *find = data; if (binding->hotkey_id != find->id) return true; *find->idx = idx; find->found = true; return false; } static inline bool find_binding(obs_hotkey_id id, size_t *idx) { struct binding_find_data data = {id, idx, false}; enum_bindings(binding_finder, &data); return data.found; } static inline void release_pressed_binding(obs_hotkey_binding_t *binding); static inline bool remove_bindings(obs_hotkey_id id) { bool removed = false; size_t idx; while (find_binding(id, &idx)) { obs_hotkey_binding_t *binding = &obs->hotkeys.bindings.array[idx]; if (binding->pressed) release_pressed_binding(binding); da_erase(obs->hotkeys.bindings, idx); removed = true; } return removed; } static void release_registerer(obs_hotkey_t *hotkey) { switch (hotkey->registerer_type) { case OBS_HOTKEY_REGISTERER_FRONTEND: break; case OBS_HOTKEY_REGISTERER_ENCODER: obs_weak_encoder_release(hotkey->registerer); break; case 
OBS_HOTKEY_REGISTERER_OUTPUT: obs_weak_output_release(hotkey->registerer); break; case OBS_HOTKEY_REGISTERER_SERVICE: obs_weak_service_release(hotkey->registerer); break; case OBS_HOTKEY_REGISTERER_SOURCE: obs_weak_source_release(hotkey->registerer); break; } hotkey->registerer = NULL; } static inline void unregister_hotkey(obs_hotkey_id id) { if (id >= obs->hotkeys.next_id) return; obs_hotkey_t *hotkey; HASH_FIND_HKEY(obs->hotkeys.hotkeys, id, hotkey); if (!hotkey) return; HASH_DEL(obs->hotkeys.hotkeys, hotkey); hotkey_signal("hotkey_unregister", hotkey); release_registerer(hotkey); if (hotkey->registerer_type == OBS_HOTKEY_REGISTERER_SOURCE) obs_weak_source_release(hotkey->registerer); bfree(hotkey->name); bfree(hotkey->description); bfree(hotkey); remove_bindings(id); } static inline void unregister_hotkey_pair(obs_hotkey_pair_id id) { if (id >= obs->hotkeys.next_pair_id) return; obs_hotkey_pair_t *pair; HASH_FIND_HKEY(obs->hotkeys.hotkey_pairs, id, pair); if (!pair) return; unregister_hotkey(pair->id[0]); unregister_hotkey(pair->id[1]); HASH_DEL(obs->hotkeys.hotkey_pairs, pair); bfree(pair); } void obs_hotkey_unregister(obs_hotkey_id id) { if (!lock()) return; unregister_hotkey(id); unlock(); } void obs_hotkey_pair_unregister(obs_hotkey_pair_id id) { if (!lock()) return; unregister_hotkey_pair(id); unlock(); } static void context_release_hotkeys(struct obs_context_data *context) { if (!context->hotkeys.num) goto cleanup; for (size_t i = 0; i < context->hotkeys.num; i++) unregister_hotkey(context->hotkeys.array[i]); cleanup: da_free(context->hotkeys); } static void context_release_hotkey_pairs(struct obs_context_data *context) { if (!context->hotkey_pairs.num) goto cleanup; for (size_t i = 0; i < context->hotkey_pairs.num; i++) unregister_hotkey_pair(context->hotkey_pairs.array[i]); cleanup: da_free(context->hotkey_pairs); } void obs_hotkeys_context_release(struct obs_context_data *context) { if (!lock()) return; context_release_hotkeys(context); 
context_release_hotkey_pairs(context); obs_data_release(context->hotkey_data); unlock(); } void obs_hotkeys_free(void) { obs_hotkey_t *hotkey, *tmp; HASH_ITER (hh, obs->hotkeys.hotkeys, hotkey, tmp) { HASH_DEL(obs->hotkeys.hotkeys, hotkey); bfree(hotkey->name); bfree(hotkey->description); release_registerer(hotkey); bfree(hotkey); } obs_hotkey_pair_t *pair, *tmp2; HASH_ITER (hh, obs->hotkeys.hotkey_pairs, pair, tmp2) { HASH_DEL(obs->hotkeys.hotkey_pairs, pair); bfree(pair); } da_free(obs->hotkeys.bindings); for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) { if (obs->hotkeys.translations[i]) { bfree(obs->hotkeys.translations[i]); obs->hotkeys.translations[i] = NULL; } } } void obs_enum_hotkeys(obs_hotkey_enum_func func, void *data) { if (!lock()) return; obs_hotkey_t *hk, *tmp; HASH_ITER (hh, obs->hotkeys.hotkeys, hk, tmp) { if (!func(data, hk->id, hk)) break; } unlock(); } void obs_enum_hotkey_bindings(obs_hotkey_binding_enum_func func, void *data) { if (!lock()) return; enum_bindings(func, data); unlock(); } static inline bool modifiers_match(obs_hotkey_binding_t *binding, uint32_t modifiers_, bool strict_modifiers) { uint32_t modifiers = binding->key.modifiers; if (!strict_modifiers) return (modifiers & modifiers_) == modifiers; else return modifiers == modifiers_; } static inline bool is_pressed(obs_key_t key) { return obs_hotkeys_platform_is_pressed(obs->hotkeys.platform_context, key); } static inline void press_released_binding(obs_hotkey_binding_t *binding) { binding->pressed = true; obs_hotkey_t *hotkey = binding->hotkey; if (hotkey->pressed++) return; if (!obs->hotkeys.reroute_hotkeys) hotkey->func(hotkey->data, hotkey->id, hotkey, true); else if (obs->hotkeys.router_func) obs->hotkeys.router_func(obs->hotkeys.router_func_data, hotkey->id, true); } static inline void release_pressed_binding(obs_hotkey_binding_t *binding) { binding->pressed = false; obs_hotkey_t *hotkey = binding->hotkey; if (--hotkey->pressed) return; if (!obs->hotkeys.reroute_hotkeys) 
hotkey->func(hotkey->data, hotkey->id, hotkey, false); else if (obs->hotkeys.router_func) obs->hotkeys.router_func(obs->hotkeys.router_func_data, hotkey->id, false); } static inline void handle_binding(obs_hotkey_binding_t *binding, uint32_t modifiers, bool no_press, bool strict_modifiers, bool *pressed) { bool modifiers_match_ = modifiers_match(binding, modifiers, strict_modifiers); bool modifiers_only = binding->key.key == OBS_KEY_NONE; if (!strict_modifiers && !binding->key.modifiers) binding->modifiers_match = true; if (modifiers_only) pressed = &modifiers_only; if (!binding->key.modifiers && modifiers_only) goto reset; if ((!binding->modifiers_match && !modifiers_only) || !modifiers_match_) goto reset; if ((pressed && !*pressed) || (!pressed && !is_pressed(binding->key.key))) goto reset; if (binding->pressed || no_press) return; press_released_binding(binding); return; reset: binding->modifiers_match = modifiers_match_; if (!binding->pressed) return; release_pressed_binding(binding); } struct obs_hotkey_internal_inject { obs_key_combination_t hotkey; bool pressed; bool strict_modifiers; }; static inline bool inject_hotkey(void *data, size_t idx, obs_hotkey_binding_t *binding) { UNUSED_PARAMETER(idx); struct obs_hotkey_internal_inject *event = data; if (modifiers_match(binding, event->hotkey.modifiers, event->strict_modifiers)) { bool pressed = binding->key.key == event->hotkey.key && event->pressed; if (binding->key.key == OBS_KEY_NONE) pressed = true; if (pressed) { binding->modifiers_match = true; if (!binding->pressed) press_released_binding(binding); } } return true; } void obs_hotkey_inject_event(obs_key_combination_t hotkey, bool pressed) { if (!lock()) return; struct obs_hotkey_internal_inject event = { {hotkey.modifiers, hotkey.key}, pressed, obs->hotkeys.strict_modifiers, }; enum_bindings(inject_hotkey, &event); unlock(); } void obs_hotkey_enable_background_press(bool enable) { if (!lock()) return; obs->hotkeys.thread_disable_press = !enable; 
unlock();
}

struct obs_query_hotkeys_helper {
	uint32_t modifiers;
	bool no_press;
	bool strict_modifiers;
};

static inline bool query_hotkey(void *data, size_t idx, obs_hotkey_binding_t *binding)
{
	UNUSED_PARAMETER(idx);

	struct obs_query_hotkeys_helper *param = (struct obs_query_hotkeys_helper *)data;
	handle_binding(binding, param->modifiers, param->no_press, param->strict_modifiers, NULL);

	return true;
}

static inline void query_hotkeys()
{
	uint32_t modifiers = 0;
	if (is_pressed(OBS_KEY_SHIFT))
		modifiers |= INTERACT_SHIFT_KEY;
	if (is_pressed(OBS_KEY_CONTROL))
		modifiers |= INTERACT_CONTROL_KEY;
	if (is_pressed(OBS_KEY_ALT))
		modifiers |= INTERACT_ALT_KEY;
	if (is_pressed(OBS_KEY_META))
		modifiers |= INTERACT_COMMAND_KEY;

	struct obs_query_hotkeys_helper param = {
		modifiers,
		obs->hotkeys.thread_disable_press,
		obs->hotkeys.strict_modifiers,
	};
	enum_bindings(query_hotkey, &param);
}

#define NBSP "\xC2\xA0"

void *obs_hotkey_thread(void *arg)
{
	UNUSED_PARAMETER(arg);

	os_set_thread_name("libobs: hotkey thread");

	const char *hotkey_thread_name =
		profile_store_name(obs_get_profiler_name_store(), "obs_hotkey_thread(%g" NBSP "ms)", 25.);
	profile_register_root(hotkey_thread_name, (uint64_t)25000000);

	while (os_event_timedwait(obs->hotkeys.stop_event, 25) == ETIMEDOUT) {
		if (!lock())
			continue;

		profile_start(hotkey_thread_name);
		query_hotkeys();
		profile_end(hotkey_thread_name);
		unlock();

		profile_reenable_thread();
	}
	return NULL;
}

void obs_hotkey_trigger_routed_callback(obs_hotkey_id id, bool pressed)
{
	if (!lock())
		return;
	if (!obs->hotkeys.reroute_hotkeys)
		goto unlock;

	obs_hotkey_t *hotkey;
	HASH_FIND_HKEY(obs->hotkeys.hotkeys, id, hotkey);
	if (!hotkey)
		goto unlock;

	hotkey->func(hotkey->data, id, hotkey, pressed);

unlock:
	unlock();
}

void obs_hotkey_set_callback_routing_func(obs_hotkey_callback_router_func func, void *data)
{
	if (!lock())
		return;

	obs->hotkeys.router_func = func;
	obs->hotkeys.router_func_data = data;
	unlock();
}

void obs_hotkey_enable_callback_rerouting(bool enable)
{
	if
(!lock()) return; obs->hotkeys.reroute_hotkeys = enable; unlock(); } static void obs_set_key_translation(obs_key_t key, const char *translation) { bfree(obs->hotkeys.translations[key]); obs->hotkeys.translations[key] = NULL; if (translation) obs->hotkeys.translations[key] = bstrdup(translation); } void obs_hotkeys_set_translations_s(struct obs_hotkeys_translations *translations, size_t size) { #define ADD_TRANSLATION(key_name, var_name) \ if (t.var_name) \ obs_set_key_translation(key_name, t.var_name); struct obs_hotkeys_translations t = {0}; struct dstr numpad = {0}; struct dstr mouse = {0}; struct dstr button = {0}; if (!translations) { return; } memcpy(&t, translations, (size < sizeof(t)) ? size : sizeof(t)); ADD_TRANSLATION(OBS_KEY_INSERT, insert); ADD_TRANSLATION(OBS_KEY_DELETE, del); ADD_TRANSLATION(OBS_KEY_HOME, home); ADD_TRANSLATION(OBS_KEY_END, end); ADD_TRANSLATION(OBS_KEY_PAGEUP, page_up); ADD_TRANSLATION(OBS_KEY_PAGEDOWN, page_down); ADD_TRANSLATION(OBS_KEY_NUMLOCK, num_lock); ADD_TRANSLATION(OBS_KEY_SCROLLLOCK, scroll_lock); ADD_TRANSLATION(OBS_KEY_CAPSLOCK, caps_lock); ADD_TRANSLATION(OBS_KEY_BACKSPACE, backspace); ADD_TRANSLATION(OBS_KEY_TAB, tab); ADD_TRANSLATION(OBS_KEY_PRINT, print); ADD_TRANSLATION(OBS_KEY_PAUSE, pause); ADD_TRANSLATION(OBS_KEY_SHIFT, shift); ADD_TRANSLATION(OBS_KEY_ALT, alt); ADD_TRANSLATION(OBS_KEY_CONTROL, control); ADD_TRANSLATION(OBS_KEY_META, meta); ADD_TRANSLATION(OBS_KEY_MENU, menu); ADD_TRANSLATION(OBS_KEY_SPACE, space); ADD_TRANSLATION(OBS_KEY_ESCAPE, escape); #ifdef __APPLE__ const char *numpad_str = t.apple_keypad_num; ADD_TRANSLATION(OBS_KEY_NUMSLASH, apple_keypad_divide); ADD_TRANSLATION(OBS_KEY_NUMASTERISK, apple_keypad_multiply); ADD_TRANSLATION(OBS_KEY_NUMMINUS, apple_keypad_minus); ADD_TRANSLATION(OBS_KEY_NUMPLUS, apple_keypad_plus); ADD_TRANSLATION(OBS_KEY_NUMPERIOD, apple_keypad_decimal); ADD_TRANSLATION(OBS_KEY_NUMEQUAL, apple_keypad_equal); #else const char *numpad_str = t.numpad_num; 
ADD_TRANSLATION(OBS_KEY_NUMSLASH, numpad_divide); ADD_TRANSLATION(OBS_KEY_NUMASTERISK, numpad_multiply); ADD_TRANSLATION(OBS_KEY_NUMMINUS, numpad_minus); ADD_TRANSLATION(OBS_KEY_NUMPLUS, numpad_plus); ADD_TRANSLATION(OBS_KEY_NUMPERIOD, numpad_decimal); #endif if (numpad_str) { dstr_copy(&numpad, numpad_str); dstr_depad(&numpad); if (dstr_find(&numpad, "%1") == NULL) { dstr_cat(&numpad, " %1"); } #define ADD_NUMPAD_NUM(idx) \ dstr_copy_dstr(&button, &numpad); \ dstr_replace(&button, "%1", #idx); \ obs_set_key_translation(OBS_KEY_NUM##idx, button.array) ADD_NUMPAD_NUM(0); ADD_NUMPAD_NUM(1); ADD_NUMPAD_NUM(2); ADD_NUMPAD_NUM(3); ADD_NUMPAD_NUM(4); ADD_NUMPAD_NUM(5); ADD_NUMPAD_NUM(6); ADD_NUMPAD_NUM(7); ADD_NUMPAD_NUM(8); ADD_NUMPAD_NUM(9); } if (t.mouse_num) { dstr_copy(&mouse, t.mouse_num); dstr_depad(&mouse); if (dstr_find(&mouse, "%1") == NULL) { dstr_cat(&mouse, " %1"); } #define ADD_MOUSE_NUM(idx) \ dstr_copy_dstr(&button, &mouse); \ dstr_replace(&button, "%1", #idx); \ obs_set_key_translation(OBS_KEY_MOUSE##idx, button.array) ADD_MOUSE_NUM(1); ADD_MOUSE_NUM(2); ADD_MOUSE_NUM(3); ADD_MOUSE_NUM(4); ADD_MOUSE_NUM(5); ADD_MOUSE_NUM(6); ADD_MOUSE_NUM(7); ADD_MOUSE_NUM(8); ADD_MOUSE_NUM(9); ADD_MOUSE_NUM(10); ADD_MOUSE_NUM(11); ADD_MOUSE_NUM(12); ADD_MOUSE_NUM(13); ADD_MOUSE_NUM(14); ADD_MOUSE_NUM(15); ADD_MOUSE_NUM(16); ADD_MOUSE_NUM(17); ADD_MOUSE_NUM(18); ADD_MOUSE_NUM(19); ADD_MOUSE_NUM(20); ADD_MOUSE_NUM(21); ADD_MOUSE_NUM(22); ADD_MOUSE_NUM(23); ADD_MOUSE_NUM(24); ADD_MOUSE_NUM(25); ADD_MOUSE_NUM(26); ADD_MOUSE_NUM(27); ADD_MOUSE_NUM(28); ADD_MOUSE_NUM(29); } dstr_free(&numpad); dstr_free(&mouse); dstr_free(&button); } const char *obs_get_hotkey_translation(obs_key_t key, const char *def) { if (key == OBS_KEY_NONE) { return NULL; } return obs->hotkeys.translations[key] ? 
obs->hotkeys.translations[key] : def; } void obs_hotkey_update_atomic(obs_hotkey_atomic_update_func func, void *data) { if (!lock()) return; func(data); unlock(); } void obs_hotkeys_set_audio_hotkeys_translations(const char *mute, const char *unmute, const char *push_to_mute, const char *push_to_talk) { #define SET_T(n) \ bfree(obs->hotkeys.n); \ obs->hotkeys.n = bstrdup(n) SET_T(mute); SET_T(unmute); SET_T(push_to_mute); SET_T(push_to_talk); #undef SET_T } void obs_hotkeys_set_sceneitem_hotkeys_translations(const char *show, const char *hide) { #define SET_T(n) \ bfree(obs->hotkeys.sceneitem_##n); \ obs->hotkeys.sceneitem_##n = bstrdup(n) SET_T(show); SET_T(hide); #undef SET_T } obs-studio-32.1.0-sources/libobs/obs-output.h000644 001751 001751 00000005632 15153330235 022006 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/ #pragma once #ifdef __cplusplus extern "C" { #endif /* obs_output_info.flags definitions */ #define OBS_OUTPUT_VIDEO (1 << 0) #define OBS_OUTPUT_AUDIO (1 << 1) #define OBS_OUTPUT_AV (OBS_OUTPUT_VIDEO | OBS_OUTPUT_AUDIO) #define OBS_OUTPUT_ENCODED (1 << 2) #define OBS_OUTPUT_SERVICE (1 << 3) #define OBS_OUTPUT_MULTI_TRACK (1 << 4) #define OBS_OUTPUT_CAN_PAUSE (1 << 5) #define OBS_OUTPUT_MULTI_TRACK_AUDIO OBS_OUTPUT_MULTI_TRACK #define OBS_OUTPUT_MULTI_TRACK_VIDEO (1 << 6) #define OBS_OUTPUT_MULTI_TRACK_AV (OBS_OUTPUT_MULTI_TRACK_AUDIO | OBS_OUTPUT_MULTI_TRACK_VIDEO) #define MAX_OUTPUT_AUDIO_ENCODERS 6 #define MAX_OUTPUT_VIDEO_ENCODERS 10 struct encoder_packet; struct obs_output_info { /* required */ const char *id; uint32_t flags; const char *(*get_name)(void *type_data); void *(*create)(obs_data_t *settings, obs_output_t *output); void (*destroy)(void *data); bool (*start)(void *data); void (*stop)(void *data, uint64_t ts); void (*raw_video)(void *data, struct video_data *frame); void (*raw_audio)(void *data, struct audio_data *frames); void (*encoded_packet)(void *data, struct encoder_packet *packet); /* optional */ void (*update)(void *data, obs_data_t *settings); void (*get_defaults)(obs_data_t *settings); obs_properties_t *(*get_properties)(void *data); void (*unused1)(void *data); uint64_t (*get_total_bytes)(void *data); int (*get_dropped_frames)(void *data); void *type_data; void (*free_type_data)(void *type_data); float (*get_congestion)(void *data); int (*get_connect_time_ms)(void *data); /* only used with encoded outputs, separated with semicolon */ const char *encoded_video_codecs; const char *encoded_audio_codecs; /* raw audio callback for multi track outputs */ void (*raw_audio2)(void *data, size_t idx, struct audio_data *frames); /* required if OBS_OUTPUT_SERVICE */ const char *protocols; }; EXPORT void obs_register_output_s(const struct obs_output_info *info, size_t size); 
#define obs_register_output(info) obs_register_output_s(info, sizeof(struct obs_output_info)) #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-av1.c000644 001751 001751 00000013524 15153330235 021127 0ustar00runnerrunner000000 000000 // SPDX-FileCopyrightText: 2023 David Rosca // // SPDX-License-Identifier: GPL-2.0-or-later #include "obs-av1.h" #include "obs.h" static inline uint64_t leb128(const uint8_t *buf, size_t size, size_t *len) { uint64_t value = 0; uint8_t leb128_byte; *len = 0; for (int i = 0; i < 8; i++) { if (size-- < 1) break; (*len)++; leb128_byte = buf[i]; /* widen before shifting: i * 7 can exceed the width of int */ value |= (uint64_t)(leb128_byte & 0x7f) << (i * 7); if (!(leb128_byte & 0x80)) break; } return value; } static inline unsigned int get_bits(uint8_t val, unsigned int n, unsigned int count) { return (val >> (8 - n - count)) & ((1 << (count - 1)) * 2 - 1); } static void parse_obu_header(const uint8_t *buf, size_t size, size_t *obu_start, size_t *obu_size, int *obu_type) { int extension_flag, has_size_field; size_t size_len = 0; *obu_start = 0; *obu_size = 0; *obu_type = 0; if (size < 1) return; *obu_type = get_bits(*buf, 1, 4); extension_flag = get_bits(*buf, 5, 1); has_size_field = get_bits(*buf, 6, 1); if (extension_flag) (*obu_start)++; (*obu_start)++; if (has_size_field) *obu_size = (size_t)leb128(buf + *obu_start, size - *obu_start, &size_len); else *obu_size = size - 1; *obu_start += size_len; } // Pass in a static buffer of at least 10 bytes, the maximum size of a leb128. static inline void encode_uleb128(uint64_t val, uint8_t *out_buf, size_t *len_out) { size_t num_bytes = 0; uint8_t b = val & 0x7f; val >>= 7; while (val > 0) { out_buf[num_bytes] = b | 0x80; ++num_bytes; b = val & 0x7f; val >>= 7; } out_buf[num_bytes] = b; ++num_bytes; *len_out = num_bytes; } /* metadata_obu_itu_t35() is a public symbol. Maintain the function * and make it call the more general metadata_obu() function.
*/ void metadata_obu_itu_t35(const uint8_t *itut_t35_buffer, size_t itut_bufsize, uint8_t **out_buffer, size_t *outbuf_size) { metadata_obu(itut_t35_buffer, itut_bufsize, out_buffer, outbuf_size, METADATA_TYPE_ITUT_T35); } // Create an OBU to carry AV1 metadata types, including captions and user private data void metadata_obu(const uint8_t *source_buffer, size_t source_bufsize, uint8_t **out_buffer, size_t *outbuf_size, uint8_t metadata_type) { /* From the AV1 spec: 5.3.2 OBU Header Syntax * ------------- * obu_forbidden_bit (1) * obu_type (4) // In this case OBS_OBU_METADATA * obu_extension_flag (1) * obu_has_size_field (1) // Must be set, size of OBU is variable * obu_reserved_1bit (1) * if(obu_extension_flag == 1) * // skip, because we aren't setting this */ uint8_t obu_header_byte = (OBS_OBU_METADATA << 3) | (1 << 1); /* From the AV1 spec: 5.3.1 General OBU Syntax * if (obu_has_size_field) * obu_size leb128() * else * obu_size = sz - 1 - obu_extension_flag * * // Skipping portions unrelated to this OBU type * * if (obu_type == OBU_METADATA) * metadata_obu() * 5.8.1 General metadata OBU Syntax * // leb128(metadata_type) should always be 1 byte, +1 for trailing bits * metadata_type leb128() * 5.8.2 Metadata ITUT T35 syntax * if (metadata_type == METADATA_TYPE_ITUT_T35) * // add ITUT T35 payload * 5.8.1 General metadata OBU Syntax * // trailing bits will always be 0x80 because * // everything in here is byte aligned * trailing_bits( obu_size * 8 - payloadBits ) */ int64_t size_field = 1 + source_bufsize + 1; uint8_t size_buf[10]; size_t size_buf_size = 0; encode_uleb128(size_field, size_buf, &size_buf_size); // header + obu_size + metadata_type + metadata_payload + trailing_bits *outbuf_size = 1 + size_buf_size + 1 + source_bufsize + 1; *out_buffer = bzalloc(*outbuf_size); size_t offset = 0; (*out_buffer)[0] = obu_header_byte; ++offset; memcpy((*out_buffer) + offset, size_buf, size_buf_size); offset += size_buf_size; (*out_buffer)[offset] = metadata_type; ++offset;
memcpy((*out_buffer) + offset, source_buffer, source_bufsize); offset += source_bufsize; /* From AV1 spec: 6.2.1 General OBU semantics * ... Trailing bits are always present, unless the OBU consists of only * the header. Trailing bits achieve byte alignment when the payload of * an OBU is not byte aligned. The trailing bits may also be used for * additional byte padding, and if used are taken into account in the * sz value. In all cases, the pattern used for the trailing bits * guarantees that all OBUs (except header-only OBUs) end with the same * pattern: one bit set to one, optionally followed by zeros. */ (*out_buffer)[offset] = 0x80; } bool obs_av1_keyframe(const uint8_t *data, size_t size) { const uint8_t *start = data, *end = data + size; while (start < end) { size_t obu_start, obu_size; int obu_type; parse_obu_header(start, end - start, &obu_start, &obu_size, &obu_type); if (obu_size) { if (obu_type == OBS_OBU_FRAME || obu_type == OBS_OBU_FRAME_HEADER) { uint8_t val = *(start + obu_start); if (!get_bits(val, 0, 1)) // show_existing_frame return get_bits(val, 1, 2) == 0; // frame_type return false; } } start += obu_start + obu_size; } return false; } void obs_extract_av1_headers(const uint8_t *packet, size_t size, uint8_t **new_packet_data, size_t *new_packet_size, uint8_t **header_data, size_t *header_size) { DARRAY(uint8_t) new_packet; DARRAY(uint8_t) header; const uint8_t *start = packet, *end = packet + size; da_init(new_packet); da_init(header); while (start < end) { size_t obu_start, obu_size; int obu_type; parse_obu_header(start, end - start, &obu_start, &obu_size, &obu_type); if (obu_type == OBS_OBU_METADATA || obu_type == OBS_OBU_SEQUENCE_HEADER) { da_push_back_array(header, start, obu_start + obu_size); } da_push_back_array(new_packet, start, obu_start + obu_size); start += obu_start + obu_size; } *new_packet_data = new_packet.array; *new_packet_size = new_packet.num; *header_data = header.array; *header_size = header.num; }
obs-studio-32.1.0-sources/libobs/obs-module.c000644 001751 001751 00000103453 15153330235 021726 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include "util/platform.h" #include "util/dstr.h" #include "obs-defs.h" #include "obs-internal.h" #include "obs-module.h" extern const char *get_module_extension(void); obs_module_t *loadingModule = NULL; static inline int req_func_not_found(const char *name, const char *path) { blog(LOG_DEBUG, "Required module function '%s' in module '%s' not " "found, loading of module failed", name, path); return MODULE_MISSING_EXPORTS; } static int load_module_exports(struct obs_module *mod, const char *path) { mod->load = os_dlsym(mod->module, "obs_module_load"); if (!mod->load) return req_func_not_found("obs_module_load", path); mod->set_pointer = os_dlsym(mod->module, "obs_module_set_pointer"); if (!mod->set_pointer) return req_func_not_found("obs_module_set_pointer", path); mod->ver = os_dlsym(mod->module, "obs_module_ver"); if (!mod->ver) return req_func_not_found("obs_module_ver", path); /* optional exports */ mod->unload = os_dlsym(mod->module, "obs_module_unload"); mod->post_load = os_dlsym(mod->module, "obs_module_post_load"); mod->set_locale = os_dlsym(mod->module,
"obs_module_set_locale"); mod->free_locale = os_dlsym(mod->module, "obs_module_free_locale"); mod->name = os_dlsym(mod->module, "obs_module_name"); mod->description = os_dlsym(mod->module, "obs_module_description"); mod->author = os_dlsym(mod->module, "obs_module_author"); mod->get_string = os_dlsym(mod->module, "obs_module_get_string"); return MODULE_SUCCESS; } bool obs_module_get_locale_string(const obs_module_t *mod, const char *lookup_string, const char **translated_string) { if (mod->get_string) { return mod->get_string(lookup_string, translated_string); } return false; } const char *obs_module_get_locale_text(const obs_module_t *mod, const char *text) { const char *str = text; obs_module_get_locale_string(mod, text, &str); return str; } static inline char *get_module_name(const char *file) { static size_t ext_len = 0; struct dstr name = {0}; if (ext_len == 0) { const char *ext = get_module_extension(); ext_len = strlen(ext); } dstr_copy(&name, file); dstr_resize(&name, name.len - ext_len); return name.array; } #ifdef _WIN32 extern void reset_win32_symbol_paths(void); #endif int obs_module_load_metadata(struct obs_module *mod) { struct obs_module_metadata *md = NULL; /* Check if the metadata file exists */ struct dstr path = {0}; dstr_copy(&path, mod->data_path); if (!dstr_is_empty(&path) && dstr_end(&path) != '/') { dstr_cat_ch(&path, '/'); } dstr_cat(&path, "manifest.json"); if (os_file_exists(path.array)) { /* If we find a metadata file, allocate a new metadata. 
*/ md = bmalloc(sizeof(obs_module_metadata_t)); obs_data_t *metadata = obs_data_create_from_json_file(path.array); md->display_name = bstrdup(obs_data_get_string(metadata, "display_name")); md->id = bstrdup(obs_data_get_string(metadata, "id")); md->version = bstrdup(obs_data_get_string(metadata, "version")); md->os_arch = bstrdup(obs_data_get_string(metadata, "os_arch")); md->name = bstrdup(obs_data_get_string(metadata, "name")); md->description = bstrdup(obs_data_get_string(metadata, "description")); md->long_description = bstrdup(obs_data_get_string(metadata, "long_description")); obs_data_t *urls = obs_data_get_obj(metadata, "urls"); md->repository_url = bstrdup(obs_data_get_string(urls, "repository")); md->website_url = bstrdup(obs_data_get_string(urls, "website")); md->support_url = bstrdup(obs_data_get_string(urls, "support")); obs_data_release(urls); md->has_banner = obs_data_get_bool(metadata, "has_banner"); md->has_icon = obs_data_get_bool(metadata, "has_icon"); obs_data_release(metadata); } dstr_free(&path); mod->metadata = md; return MODULE_SUCCESS; } int obs_open_module(obs_module_t **module, const char *path, const char *data_path) { struct obs_module mod = {0}; int errorcode; if (!module || !path || !obs) return MODULE_ERROR; #ifdef __APPLE__ /* HACK: Do not load obsolete obs-browser build on macOS; the * obs-browser plugin used to live in the Application Support * directory. */ if (astrstri(path, "Library/Application Support/obs-studio") != NULL && astrstri(path, "obs-browser") != NULL) { blog(LOG_WARNING, "Ignoring old obs-browser.so version"); return MODULE_HARDCODED_SKIP; } #endif blog(LOG_DEBUG, "---------------------------------"); mod.module = os_dlopen(path); if (!mod.module) { blog(LOG_WARNING, "Module '%s' not loaded", path); return MODULE_FAILED_TO_OPEN; } errorcode = load_module_exports(&mod, path); if (errorcode != MODULE_SUCCESS) return errorcode; /* Reject plugins compiled with a newer libobs. Patch version (lower 16-bit) is ignored. 
*/ uint32_t ver = mod.ver ? mod.ver() & 0xFFFF0000 : 0; if (ver > LIBOBS_API_VER) { blog(LOG_WARNING, "Module '%s' compiled with newer libobs %d.%d", path, (ver >> 24) & 0xFF, (ver >> 16) & 0xFF); return MODULE_INCOMPATIBLE_VER; } mod.bin_path = bstrdup(path); mod.file = strrchr(mod.bin_path, '/'); mod.file = (!mod.file) ? mod.bin_path : (mod.file + 1); mod.mod_name = get_module_name(mod.file); mod.data_path = bstrdup(data_path); mod.next = obs->first_module; mod.load_state = OBS_MODULE_ENABLED; da_init(mod.sources); da_init(mod.outputs); da_init(mod.encoders); da_init(mod.services); if (mod.file) { blog(LOG_DEBUG, "Loading module: %s", mod.file); } obs_module_load_metadata(&mod); *module = bmemdup(&mod, sizeof(mod)); obs->first_module = (*module); mod.set_pointer(*module); if (mod.set_locale) mod.set_locale(obs->locale); return MODULE_SUCCESS; } bool obs_create_disabled_module(obs_module_t **module, const char *path, const char *data_path, enum obs_module_load_state state) { struct obs_module mod = {0}; mod.bin_path = bstrdup(path); mod.file = strrchr(mod.bin_path, '/'); mod.file = (!mod.file) ? 
mod.bin_path : (mod.file + 1); mod.mod_name = get_module_name(mod.file); mod.data_path = bstrdup(data_path); mod.next = obs->first_disabled_module; mod.load_state = state; da_init(mod.sources); da_init(mod.outputs); da_init(mod.encoders); da_init(mod.services); obs_module_load_metadata(&mod); *module = bmemdup(&mod, sizeof(mod)); obs->first_disabled_module = (*module); return true; } bool obs_init_module(obs_module_t *module) { if (!module || !obs) return false; if (module->loaded) return true; const char *profile_name = profile_store_name(obs_get_profiler_name_store(), "obs_init_module(%s)", module->file); profile_start(profile_name); loadingModule = module; module->loaded = module->load(); loadingModule = NULL; if (!module->loaded) blog(LOG_WARNING, "Failed to initialize module '%s'", module->file); profile_end(profile_name); return module->loaded; } void obs_log_loaded_modules(void) { blog(LOG_INFO, " Loaded Modules:"); for (obs_module_t *mod = obs->first_module; !!mod; mod = mod->next) blog(LOG_INFO, " %s", mod->file); } const char *obs_get_module_file_name(obs_module_t *module) { return module ? module->file : NULL; } const char *obs_get_module_name(obs_module_t *module) { if (module && module->metadata && module->metadata->display_name) { return module->metadata->display_name; } return (module && module->name) ? module->name() : NULL; } const char *obs_get_module_author(obs_module_t *module) { return (module && module->author) ? module->author() : NULL; } const char *obs_get_module_description(obs_module_t *module) { return (module && module->description) ? module->description() : NULL; } const char *obs_get_module_binary_path(obs_module_t *module) { return module ? module->bin_path : NULL; } const char *obs_get_module_data_path(obs_module_t *module) { return module ? module->data_path : NULL; } const char *obs_get_module_id(obs_module_t *module) { return module && module->metadata ? 
module->metadata->id : NULL; } const char *obs_get_module_version(obs_module_t *module) { return module && module->metadata ? module->metadata->version : NULL; } void obs_module_add_source(obs_module_t *module, const char *id) { char *source_id = bstrdup(id); if (module) { da_push_back(module->sources, &source_id); } } void obs_module_add_output(obs_module_t *module, const char *id) { char *output_id = bstrdup(id); if (module) { da_push_back(module->outputs, &output_id); } } void obs_module_add_encoder(obs_module_t *module, const char *id) { char *encoder_id = bstrdup(id); if (module) { da_push_back(module->encoders, &encoder_id); } } void obs_module_add_service(obs_module_t *module, const char *id) { char *service_id = bstrdup(id); if (module) { da_push_back(module->services, &service_id); } } obs_module_t *obs_get_module(const char *name) { obs_module_t *module = obs->first_module; while (module) { if (strcmp(module->mod_name, name) == 0) { return module; } module = module->next; } return NULL; } obs_module_t *obs_get_disabled_module(const char *name) { obs_module_t *module = obs->first_disabled_module; while (module) { if (strcmp(module->mod_name, name) == 0) { return module; } module = module->next; } return NULL; } void *obs_get_module_lib(obs_module_t *module) { return module ? 
module->module : NULL; } char *obs_find_module_file(obs_module_t *module, const char *file) { struct dstr output = {0}; if (!file) file = ""; if (!module) return NULL; dstr_copy(&output, module->data_path); if (!dstr_is_empty(&output) && dstr_end(&output) != '/' && *file) dstr_cat_ch(&output, '/'); dstr_cat(&output, file); if (!os_file_exists(output.array)) dstr_free(&output); return output.array; } char *obs_module_get_config_path(obs_module_t *module, const char *file) { struct dstr output = {0}; dstr_copy(&output, obs->module_config_path); if (!dstr_is_empty(&output) && dstr_end(&output) != '/') dstr_cat_ch(&output, '/'); dstr_cat(&output, module->mod_name); dstr_cat_ch(&output, '/'); dstr_cat(&output, file); return output.array; } void obs_add_module_path(const char *bin, const char *data) { struct obs_module_path omp; if (!obs || !bin || !data) return; omp.bin = bstrdup(bin); omp.data = bstrdup(data); da_push_back(obs->module_paths, &omp); } void obs_add_safe_module(const char *name) { if (!obs || !name) return; char *item = bstrdup(name); da_push_back(obs->safe_modules, &item); } void obs_add_core_module(const char *name) { if (!obs || !name) return; char *item = bstrdup(name); da_push_back(obs->core_modules, &item); } void obs_add_disabled_module(const char *name) { if (!obs || !name) return; char *item = bstrdup(name); da_push_back(obs->disabled_modules, &item); } extern void get_plugin_info(const char *path, bool *is_obs_plugin); struct fail_info { struct dstr fail_modules; size_t fail_count; }; static bool is_safe_module(const char *name) { if (!obs->safe_modules.num) return true; for (size_t i = 0; i < obs->safe_modules.num; i++) { if (strcmp(name, obs->safe_modules.array[i]) == 0) return true; } return false; } static bool is_core_module(const char *name) { for (size_t i = 0; i < obs->core_modules.num; i++) { if (strcmp(name, obs->core_modules.array[i]) == 0) return true; } return false; } static bool is_disabled_module(const char *name) { if 
(obs->disabled_modules.num == 0) return false; for (size_t i = 0; i < obs->disabled_modules.num; i++) { if (strcmp(name, obs->disabled_modules.array[i]) == 0) return true; } return false; } bool obs_get_module_allow_disable(const char *name) { return !is_core_module(name); } static void load_all_callback(void *param, const struct obs_module_info2 *info) { struct fail_info *fail_info = param; obs_module_t *module; obs_module_t *disabled_module; bool is_obs_plugin; get_plugin_info(info->bin_path, &is_obs_plugin); if (!is_obs_plugin) { blog(LOG_WARNING, "Skipping module '%s', not an OBS plugin", info->bin_path); return; } if (!is_safe_module(info->name)) { obs_create_disabled_module(&disabled_module, info->bin_path, info->data_path, OBS_MODULE_DISABLED_SAFE); blog(LOG_WARNING, "Skipping module '%s', not on safe list", info->name); return; } if (is_disabled_module(info->name)) { obs_create_disabled_module(&disabled_module, info->bin_path, info->data_path, OBS_MODULE_DISABLED); blog(LOG_WARNING, "Skipping module '%s', is disabled", info->name); return; } int code = obs_open_module(&module, info->bin_path, info->data_path); switch (code) { case MODULE_MISSING_EXPORTS: blog(LOG_DEBUG, "Failed to load module file '%s', not an OBS plugin", info->bin_path); return; case MODULE_FAILED_TO_OPEN: blog(LOG_DEBUG, "Failed to load module file '%s', module failed to open", info->bin_path); obs_create_disabled_module(&disabled_module, info->bin_path, info->data_path, OBS_MODULE_FAILED_TO_OPEN); goto load_failure; case MODULE_ERROR: blog(LOG_DEBUG, "Failed to load module file '%s' (unknown error)", info->bin_path); goto load_failure; case MODULE_INCOMPATIBLE_VER: blog(LOG_DEBUG, "Failed to load module file '%s', incompatible version", info->bin_path); obs_create_disabled_module(&disabled_module, info->bin_path, info->data_path, OBS_MODULE_FAILED_TO_OPEN); goto load_failure; case MODULE_HARDCODED_SKIP: return; } if (!obs_init_module(module)) { free_module(module); 
obs_create_disabled_module(&disabled_module, info->bin_path, info->data_path, OBS_MODULE_FAILED_TO_INITIALIZE); } UNUSED_PARAMETER(param); return; load_failure: if (fail_info) { dstr_cat(&fail_info->fail_modules, info->name); dstr_cat(&fail_info->fail_modules, ";"); fail_info->fail_count++; } } static const char *obs_load_all_modules_name = "obs_load_all_modules"; #ifdef _WIN32 static const char *reset_win32_symbol_paths_name = "reset_win32_symbol_paths"; #endif void obs_load_all_modules(void) { profile_start(obs_load_all_modules_name); obs_find_modules2(load_all_callback, NULL); #ifdef _WIN32 profile_start(reset_win32_symbol_paths_name); reset_win32_symbol_paths(); profile_end(reset_win32_symbol_paths_name); #endif profile_end(obs_load_all_modules_name); } static const char *obs_load_all_modules2_name = "obs_load_all_modules2"; void obs_load_all_modules2(struct obs_module_failure_info *mfi) { struct fail_info fail_info = {0}; memset(mfi, 0, sizeof(*mfi)); profile_start(obs_load_all_modules2_name); obs_find_modules2(load_all_callback, &fail_info); #ifdef _WIN32 profile_start(reset_win32_symbol_paths_name); reset_win32_symbol_paths(); profile_end(reset_win32_symbol_paths_name); #endif profile_end(obs_load_all_modules2_name); mfi->count = fail_info.fail_count; mfi->failed_modules = strlist_split(fail_info.fail_modules.array, ';', false); dstr_free(&fail_info.fail_modules); } void obs_module_failure_info_free(struct obs_module_failure_info *mfi) { if (mfi->failed_modules) { bfree(mfi->failed_modules); mfi->failed_modules = NULL; } } void obs_post_load_modules(void) { for (obs_module_t *mod = obs->first_module; !!mod; mod = mod->next) if (mod->post_load) mod->post_load(); } static inline void make_data_dir(struct dstr *parsed_data_dir, const char *data_dir, const char *name) { dstr_copy(parsed_data_dir, data_dir); dstr_replace(parsed_data_dir, "%module%", name); if (dstr_end(parsed_data_dir) == '/') dstr_resize(parsed_data_dir, parsed_data_dir->len - 1); } static char 
*make_data_directory(const char *module_name, const char *data_dir) { struct dstr parsed_data_dir = {0}; bool found = false; make_data_dir(&parsed_data_dir, data_dir, module_name); found = os_file_exists(parsed_data_dir.array); if (!found && astrcmpi_n(module_name, "lib", 3) == 0) make_data_dir(&parsed_data_dir, data_dir, module_name + 3); return parsed_data_dir.array; } static bool parse_binary_from_directory(struct dstr *parsed_bin_path, const char *bin_path, const char *file) { struct dstr directory = {0}; bool found = true; dstr_copy(&directory, bin_path); dstr_replace(&directory, "%module%", file); if (dstr_end(&directory) != '/') dstr_cat_ch(&directory, '/'); dstr_copy_dstr(parsed_bin_path, &directory); dstr_cat(parsed_bin_path, file); #ifdef __APPLE__ if (!os_file_exists(parsed_bin_path->array)) { dstr_cat(parsed_bin_path, ".so"); } #else dstr_cat(parsed_bin_path, get_module_extension()); #endif if (!os_file_exists(parsed_bin_path->array)) { /* Legacy fallback: Check for plugin with .so suffix*/ dstr_cat(parsed_bin_path, ".so"); /* if the file doesn't exist, check with 'lib' prefix */ dstr_copy_dstr(parsed_bin_path, &directory); dstr_cat(parsed_bin_path, "lib"); dstr_cat(parsed_bin_path, file); dstr_cat(parsed_bin_path, get_module_extension()); /* if neither exist, don't include this as a library */ if (!os_file_exists(parsed_bin_path->array)) { dstr_free(parsed_bin_path); found = false; } } dstr_free(&directory); return found; } static void process_found_module(struct obs_module_path *omp, const char *path, bool directory, obs_find_module_callback2_t callback, void *param) { struct obs_module_info2 info; struct dstr name = {0}; struct dstr parsed_bin_path = {0}; const char *file; char *parsed_data_dir; bool bin_found = true; file = strrchr(path, '/'); file = file ? 
(file + 1) : path; if (strcmp(file, ".") == 0 || strcmp(file, "..") == 0) return; dstr_copy(&name, file); char *ext = strrchr(name.array, '.'); if (ext) dstr_resize(&name, ext - name.array); if (!directory) { dstr_copy(&parsed_bin_path, path); } else { bin_found = parse_binary_from_directory(&parsed_bin_path, omp->bin, name.array); } parsed_data_dir = make_data_directory(name.array, omp->data); if (parsed_data_dir && bin_found) { info.bin_path = parsed_bin_path.array; info.data_path = parsed_data_dir; info.name = name.array; callback(param, &info); } bfree(parsed_data_dir); dstr_free(&name); dstr_free(&parsed_bin_path); } static void find_modules_in_path(struct obs_module_path *omp, obs_find_module_callback2_t callback, void *param) { struct dstr search_path = {0}; char *module_start; bool search_directories = false; os_glob_t *gi; dstr_copy(&search_path, omp->bin); module_start = strstr(search_path.array, "%module%"); if (module_start) { dstr_resize(&search_path, module_start - search_path.array); search_directories = true; } if (!dstr_is_empty(&search_path) && dstr_end(&search_path) != '/') dstr_cat_ch(&search_path, '/'); dstr_cat_ch(&search_path, '*'); if (!search_directories) dstr_cat(&search_path, get_module_extension()); if (os_glob(search_path.array, 0, &gi) == 0) { for (size_t i = 0; i < gi->gl_pathc; i++) { if (search_directories == gi->gl_pathv[i].directory) process_found_module(omp, gi->gl_pathv[i].path, search_directories, callback, param); } os_globfree(gi); } dstr_free(&search_path); } void obs_find_modules2(obs_find_module_callback2_t callback, void *param) { if (!obs) return; for (size_t i = 0; i < obs->module_paths.num; i++) { struct obs_module_path *omp = obs->module_paths.array + i; find_modules_in_path(omp, callback, param); } } void obs_find_modules(obs_find_module_callback_t callback, void *param) { /* the structure is ABI compatible so we can just cast the callback */ obs_find_modules2((obs_find_module_callback2_t)callback, param); } void 
obs_enum_modules(obs_enum_module_callback_t callback, void *param) { struct obs_module *module; if (!obs) return; module = obs->first_module; while (module) { callback(param, module); module = module->next; } } void free_module(struct obs_module *mod) { if (!mod) return; if (mod->module) { if (mod->free_locale) mod->free_locale(); if (mod->loaded && mod->unload) mod->unload(); /* there is no real reason to close the dynamic libraries, * and sometimes this can cause issues. */ /* os_dlclose(mod->module); */ } /* Is this module an active / loaded module, or a disabled module? */ if (mod->load_state == OBS_MODULE_ENABLED) { for (obs_module_t *m = obs->first_module; !!m; m = m->next) { if (m->next == mod) { m->next = mod->next; break; } } if (obs->first_module == mod) obs->first_module = mod->next; } else { for (obs_module_t *m = obs->first_disabled_module; !!m; m = m->next) { if (m->next == mod) { m->next = mod->next; break; } } if (obs->first_disabled_module == mod) obs->first_disabled_module = mod->next; } bfree(mod->mod_name); bfree(mod->bin_path); bfree(mod->data_path); for (size_t i = 0; i < mod->sources.num; i++) { bfree(mod->sources.array[i]); } da_free(mod->sources); for (size_t i = 0; i < mod->outputs.num; i++) { bfree(mod->outputs.array[i]); } da_free(mod->outputs); for (size_t i = 0; i < mod->encoders.num; i++) { bfree(mod->encoders.array[i]); } da_free(mod->encoders); for (size_t i = 0; i < mod->services.num; i++) { bfree(mod->services.array[i]); } da_free(mod->services); if (mod->metadata) { free_module_metadata(mod->metadata); bfree(mod->metadata); } bfree(mod); } lookup_t *obs_module_load_locale(obs_module_t *module, const char *default_locale, const char *locale) { struct dstr str = {0}; lookup_t *lookup = NULL; if (!module || !default_locale || !locale) { blog(LOG_WARNING, "obs_module_load_locale: Invalid parameters"); return NULL; } dstr_copy(&str, "locale/"); dstr_cat(&str, default_locale); dstr_cat(&str, ".ini"); char *file = 
obs_find_module_file(module, str.array); if (file) lookup = text_lookup_create(file); bfree(file); if (!lookup) { blog(LOG_WARNING, "Failed to load '%s' text for module: '%s'", default_locale, module->file); goto cleanup; } if (astrcmpi(locale, default_locale) == 0) goto cleanup; dstr_copy(&str, "/locale/"); dstr_cat(&str, locale); dstr_cat(&str, ".ini"); file = obs_find_module_file(module, str.array); if (!text_lookup_add(lookup, file)) blog(LOG_WARNING, "Failed to load '%s' text for module: '%s'", locale, module->file); bfree(file); cleanup: dstr_free(&str); return lookup; } #define REGISTER_OBS_DEF(size_var, structure, dest, info) \ do { \ struct structure data = {0}; \ if (!size_var) { \ blog(LOG_ERROR, "Tried to register " #structure " outside of obs_module_load"); \ return; \ } \ \ if (size_var > sizeof(data)) { \ blog(LOG_ERROR, \ "Tried to register " #structure " with size %llu which is more " \ "than libobs currently supports " \ "(%llu)", \ (long long unsigned)size_var, (long long unsigned)sizeof(data)); \ goto error; \ } \ \ memcpy(&data, info, size_var); \ da_push_back(dest, &data); \ } while (false) #define HAS_VAL(type, info, val) ((offsetof(type, val) + sizeof(info->val) <= size) && info->val) #define CHECK_REQUIRED_VAL(type, info, val, func) \ do { \ if (!HAS_VAL(type, info, val)) { \ blog(LOG_ERROR, \ "Required value '" #val "' for " \ "'%s' not found. " #func " failed.", \ info->id); \ goto error; \ } \ } while (false) #define CHECK_REQUIRED_VAL_EITHER(type, info, val1, val2, func) \ do { \ if (!HAS_VAL(type, info, val1) && !HAS_VAL(type, info, val2)) { \ blog(LOG_ERROR, \ "Neither '" #val1 "' nor '" #val2 "' " \ "for '%s' found. " #func " failed.", \ info->id); \ goto error; \ } \ } while (false) #define HANDLE_ERROR(size_var, structure, info) \ do { \ struct structure data = {0}; \ if (!size_var) \ return; \ \ memcpy(&data, info, sizeof(data) < size_var ? 
sizeof(data) : size_var); \ \ if (data.type_data && data.free_type_data) \ data.free_type_data(data.type_data); \ } while (false) #define source_warn(format, ...) blog(LOG_WARNING, "obs_register_source: " format, ##__VA_ARGS__) #define output_warn(format, ...) blog(LOG_WARNING, "obs_register_output: " format, ##__VA_ARGS__) #define encoder_warn(format, ...) blog(LOG_WARNING, "obs_register_encoder: " format, ##__VA_ARGS__) #define service_warn(format, ...) blog(LOG_WARNING, "obs_register_service: " format, ##__VA_ARGS__) void obs_register_source_s(const struct obs_source_info *info, size_t size) { struct obs_source_info data = {0}; obs_source_info_array_t *array = NULL; if (info->type == OBS_SOURCE_TYPE_INPUT) { array = &obs->input_types; } else if (info->type == OBS_SOURCE_TYPE_FILTER) { array = &obs->filter_types; } else if (info->type == OBS_SOURCE_TYPE_TRANSITION) { array = &obs->transition_types; } else if (info->type != OBS_SOURCE_TYPE_SCENE) { source_warn("Tried to register unknown source type: %u", info->type); goto error; } if (get_source_info2(info->id, info->version)) { source_warn("Source '%s' already exists! " "Duplicate library?", info->id); goto error; } if (size > sizeof(data)) { source_warn("Tried to register obs_source_info with size " "%llu which is more than libobs currently " "supports (%llu)", (long long unsigned)size, (long long unsigned)sizeof(data)); goto error; } /* NOTE: The assignment of data.module must occur before memcpy! 
*/ if (loadingModule) { char *source_id = bstrdup(info->id); da_push_back(loadingModule->sources, &source_id); } memcpy(&data, info, size); /* mark audio-only filters as an async filter categorically */ if (data.type == OBS_SOURCE_TYPE_FILTER) { if ((data.output_flags & OBS_SOURCE_VIDEO) == 0) data.output_flags |= OBS_SOURCE_ASYNC; } if (data.type == OBS_SOURCE_TYPE_TRANSITION) { if (data.get_width) source_warn("get_width ignored registering " "transition '%s'", data.id); if (data.get_height) source_warn("get_height ignored registering " "transition '%s'", data.id); data.output_flags |= OBS_SOURCE_COMPOSITE | OBS_SOURCE_VIDEO | OBS_SOURCE_CUSTOM_DRAW; } if ((data.output_flags & OBS_SOURCE_COMPOSITE) != 0) { if ((data.output_flags & OBS_SOURCE_AUDIO) != 0) { source_warn("Source '%s': Composite sources " "cannot be audio sources", info->id); goto error; } if ((data.output_flags & OBS_SOURCE_ASYNC) != 0) { source_warn("Source '%s': Composite sources " "cannot be async sources", info->id); goto error; } } #define CHECK_REQUIRED_VAL_(info, val, func) CHECK_REQUIRED_VAL(struct obs_source_info, info, val, func) CHECK_REQUIRED_VAL_(info, get_name, obs_register_source); if (info->type != OBS_SOURCE_TYPE_FILTER && info->type != OBS_SOURCE_TYPE_TRANSITION && (info->output_flags & OBS_SOURCE_VIDEO) != 0 && (info->output_flags & OBS_SOURCE_ASYNC) == 0) { CHECK_REQUIRED_VAL_(info, get_width, obs_register_source); CHECK_REQUIRED_VAL_(info, get_height, obs_register_source); } if ((data.output_flags & OBS_SOURCE_COMPOSITE) != 0) { CHECK_REQUIRED_VAL_(info, audio_render, obs_register_source); } #undef CHECK_REQUIRED_VAL_ /* version-related stuff */ data.unversioned_id = data.id; if (data.version) { struct dstr versioned_id = {0}; dstr_printf(&versioned_id, "%s_v%d", data.id, (int)data.version); data.id = versioned_id.array; } else { data.id = bstrdup(data.id); } if (array) da_push_back(*array, &data); da_push_back(obs->source_types, &data); return; error: HANDLE_ERROR(size, 
obs_source_info, info); } void obs_register_output_s(const struct obs_output_info *info, size_t size) { if (find_output(info->id)) { output_warn("Output id '%s' already exists! " "Duplicate library?", info->id); goto error; } #define CHECK_REQUIRED_VAL_(info, val, func) CHECK_REQUIRED_VAL(struct obs_output_info, info, val, func) CHECK_REQUIRED_VAL_(info, get_name, obs_register_output); CHECK_REQUIRED_VAL_(info, create, obs_register_output); CHECK_REQUIRED_VAL_(info, destroy, obs_register_output); CHECK_REQUIRED_VAL_(info, start, obs_register_output); CHECK_REQUIRED_VAL_(info, stop, obs_register_output); if (info->flags & OBS_OUTPUT_SERVICE) CHECK_REQUIRED_VAL_(info, protocols, obs_register_output); if (info->flags & OBS_OUTPUT_ENCODED) { CHECK_REQUIRED_VAL_(info, encoded_packet, obs_register_output); } else { if (info->flags & OBS_OUTPUT_VIDEO) CHECK_REQUIRED_VAL_(info, raw_video, obs_register_output); if (info->flags & OBS_OUTPUT_AUDIO) { if (info->flags & OBS_OUTPUT_MULTI_TRACK) { CHECK_REQUIRED_VAL_(info, raw_audio2, obs_register_output); } else { CHECK_REQUIRED_VAL_(info, raw_audio, obs_register_output); } } } #undef CHECK_REQUIRED_VAL_ REGISTER_OBS_DEF(size, obs_output_info, obs->output_types, info); if (info->flags & OBS_OUTPUT_SERVICE) { char **protocols = strlist_split(info->protocols, ';', false); for (char **protocol = protocols; *protocol; ++protocol) { bool skip = false; for (size_t i = 0; i < obs->data.protocols.num; i++) { if (strcmp(*protocol, obs->data.protocols.array[i]) == 0) skip = true; } if (skip) continue; char *new_prtcl = bstrdup(*protocol); da_push_back(obs->data.protocols, &new_prtcl); } strlist_free(protocols); } if (loadingModule) { char *output_id = bstrdup(info->id); da_push_back(loadingModule->outputs, &output_id); } return; error: HANDLE_ERROR(size, obs_output_info, info); } void obs_register_encoder_s(const struct obs_encoder_info *info, size_t size) { if (find_encoder(info->id)) { encoder_warn("Encoder id '%s' already exists! 
" "Duplicate library?", info->id); goto error; } if (((info->caps & OBS_ENCODER_CAP_PASS_TEXTURE) != 0 && info->caps & OBS_ENCODER_CAP_SCALING) != 0) { encoder_warn("Texture encoders cannot self-scale. Encoder id '%s' not registered.", info->id); goto error; } #define CHECK_REQUIRED_VAL_(info, val, func) CHECK_REQUIRED_VAL(struct obs_encoder_info, info, val, func) CHECK_REQUIRED_VAL_(info, get_name, obs_register_encoder); CHECK_REQUIRED_VAL_(info, create, obs_register_encoder); CHECK_REQUIRED_VAL_(info, destroy, obs_register_encoder); if ((info->caps & OBS_ENCODER_CAP_PASS_TEXTURE) != 0) CHECK_REQUIRED_VAL_EITHER(struct obs_encoder_info, info, encode_texture, encode_texture2, obs_register_encoder); else CHECK_REQUIRED_VAL_(info, encode, obs_register_encoder); if (info->type == OBS_ENCODER_AUDIO) CHECK_REQUIRED_VAL_(info, get_frame_size, obs_register_encoder); #undef CHECK_REQUIRED_VAL_ REGISTER_OBS_DEF(size, obs_encoder_info, obs->encoder_types, info); if (loadingModule) { char *encoder_id = bstrdup(info->id); da_push_back(loadingModule->encoders, &encoder_id); } return; error: HANDLE_ERROR(size, obs_encoder_info, info); } void obs_register_service_s(const struct obs_service_info *info, size_t size) { if (find_service(info->id)) { service_warn("Service id '%s' already exists! 
" "Duplicate library?", info->id); goto error; } #define CHECK_REQUIRED_VAL_(info, val, func) CHECK_REQUIRED_VAL(struct obs_service_info, info, val, func) CHECK_REQUIRED_VAL_(info, get_name, obs_register_service); CHECK_REQUIRED_VAL_(info, create, obs_register_service); CHECK_REQUIRED_VAL_(info, destroy, obs_register_service); CHECK_REQUIRED_VAL_(info, get_protocol, obs_register_service); #undef CHECK_REQUIRED_VAL_ REGISTER_OBS_DEF(size, obs_service_info, obs->service_types, info); if (loadingModule) { char *service_id = bstrdup(info->id); da_push_back(loadingModule->services, &service_id); } return; error: HANDLE_ERROR(size, obs_service_info, info); } obs-studio-32.1.0-sources/libobs/obs-video.c000644 001751 001751 00000110174 15153330235 021545 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #include <stdlib.h> #include <string.h> #include "obs.h" #include "obs-internal.h" #include "graphics/vec4.h" #include "media-io/format-conversion.h" #include "media-io/video-frame.h" #ifdef _WIN32 #define WIN32_LEAN_AND_MEAN #include <windows.h> #endif static uint64_t tick_sources(uint64_t cur_time, uint64_t last_time) { struct obs_core_data *data = &obs->data; struct obs_source *source; uint64_t delta_time; float seconds; if (!last_time) last_time = cur_time - obs->video.video_frame_interval_ns; delta_time = cur_time - last_time; seconds = (float)((double)delta_time / 1000000000.0); /* ------------------------------------- */ /* call tick callbacks */ pthread_mutex_lock(&data->draw_callbacks_mutex); for (size_t i = data->tick_callbacks.num; i > 0; i--) { struct tick_callback *callback; callback = data->tick_callbacks.array + (i - 1); callback->tick(callback->param, seconds); } pthread_mutex_unlock(&data->draw_callbacks_mutex); /* ------------------------------------- */ /* get an array of all sources to tick */ da_clear(data->sources_to_tick); pthread_mutex_lock(&data->sources_mutex); source = data->sources; while (source) { obs_source_t *s = obs_source_removed(source) ?
NULL : obs_source_get_ref(source); if (s) da_push_back(data->sources_to_tick, &s); source = (struct obs_source *)source->context.hh_uuid.next; } pthread_mutex_unlock(&data->sources_mutex); /* ------------------------------------- */ /* call the tick function of each source */ for (size_t i = 0; i < data->sources_to_tick.num; i++) { obs_source_t *s = data->sources_to_tick.array[i]; if (!obs_source_removed(s)) { const uint64_t start = source_profiler_source_tick_start(); obs_source_video_tick(s, seconds); source_profiler_source_tick_end(s, start); } obs_source_release(s); } return cur_time; } /* in obs-display.c */ extern void render_display(struct obs_display *display); static inline void render_displays(void) { struct obs_display *display; if (!obs->data.valid) return; gs_enter_context(obs->video.graphics); /* render extra displays/swaps */ pthread_mutex_lock(&obs->data.displays_mutex); display = obs->data.first_display; while (display) { render_display(display); display = display->next; } pthread_mutex_unlock(&obs->data.displays_mutex); gs_leave_context(); } static inline void set_render_size(uint32_t width, uint32_t height) { gs_enable_depth_test(false); gs_set_cull_mode(GS_NEITHER); gs_ortho(0.0f, (float)width, 0.0f, (float)height, -100.0f, 100.0f); gs_set_viewport(0, 0, width, height); } static inline void unmap_last_surface(struct obs_core_video_mix *video) { for (int c = 0; c < NUM_CHANNELS; ++c) { if (video->mapped_surfaces[c]) { gs_stagesurface_unmap(video->mapped_surfaces[c]); video->mapped_surfaces[c] = NULL; } } } static inline bool can_reuse_mix_texture(const struct obs_core_video_mix *mix, size_t *idx) { for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) { const struct obs_core_video_mix *other = obs->video.mixes.array[i]; if (other == mix) break; if (other->view != mix->view) continue; if (other->render_space != mix->render_space) continue; if (other->ovi.base_width != mix->ovi.base_width || other->ovi.base_height != mix->ovi.base_height) 
continue; if (!other->texture_rendered) continue; *idx = i; return true; } return false; } static inline void draw_mix_texture(const size_t mix_idx) { gs_texture_t *tex = obs->video.mixes.array[mix_idx]->render_texture; gs_effect_t *effect = obs_get_base_effect(OBS_EFFECT_DEFAULT); gs_eparam_t *param = gs_effect_get_param_by_name(effect, "image"); gs_effect_set_texture_srgb(param, tex); gs_enable_framebuffer_srgb(true); while (gs_effect_loop(effect, "Draw")) gs_draw_sprite(tex, 0, 0, 0); gs_enable_framebuffer_srgb(false); } static const char *render_main_texture_name = "render_main_texture"; static inline void render_main_texture(struct obs_core_video_mix *video) { uint32_t base_width = video->ovi.base_width; uint32_t base_height = video->ovi.base_height; profile_start(render_main_texture_name); GS_DEBUG_MARKER_BEGIN(GS_DEBUG_COLOR_MAIN_TEXTURE, render_main_texture_name); struct vec4 clear_color; vec4_set(&clear_color, 0.0f, 0.0f, 0.0f, 0.0f); gs_set_render_target_with_color_space(video->render_texture, NULL, video->render_space); gs_clear(GS_CLEAR_COLOR, &clear_color, 1.0f, 0); set_render_size(base_width, base_height); pthread_mutex_lock(&obs->data.draw_callbacks_mutex); for (size_t i = obs->data.draw_callbacks.num; i > 0; i--) { struct draw_callback *const callback = obs->data.draw_callbacks.array + (i - 1); callback->draw(callback->param, base_width, base_height); } pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); /* In some cases we can reuse a previous mix's texture and save re-rendering everything */ size_t reuse_idx; if (can_reuse_mix_texture(video, &reuse_idx)) draw_mix_texture(reuse_idx); else obs_view_render(video->view); video->texture_rendered = true; pthread_mutex_lock(&obs->data.draw_callbacks_mutex); for (size_t i = 0; i < obs->data.rendered_callbacks.num; ++i) { struct rendered_callback *const callback = &obs->data.rendered_callbacks.array[i]; callback->rendered(callback->param); } pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); 
GS_DEBUG_MARKER_END(); profile_end(render_main_texture_name); } static inline gs_effect_t *get_scale_effect_internal(struct obs_core_video_mix *mix) { struct obs_core_video *video = &obs->video; const struct video_output_info *info = video_output_get_info(mix->video); /* if the dimension is under half the size of the original image, * bicubic/lanczos can't sample enough pixels to create an accurate * image, so use the bilinear low resolution effect instead */ if (info->width < (mix->ovi.base_width / 2) && info->height < (mix->ovi.base_height / 2)) { return video->bilinear_lowres_effect; } switch (mix->ovi.scale_type) { case OBS_SCALE_BILINEAR: return video->default_effect; case OBS_SCALE_LANCZOS: return video->lanczos_effect; case OBS_SCALE_AREA: return video->area_effect; case OBS_SCALE_BICUBIC: default:; } return video->bicubic_effect; } static inline bool resolution_close(struct obs_core_video_mix *mix, uint32_t width, uint32_t height) { long width_cmp = (long)mix->ovi.base_width - (long)width; long height_cmp = (long)mix->ovi.base_height - (long)height; return labs(width_cmp) <= 16 && labs(height_cmp) <= 16; } static inline gs_effect_t *get_scale_effect(struct obs_core_video_mix *mix, uint32_t width, uint32_t height) { struct obs_core_video *video = &obs->video; if (resolution_close(mix, width, height)) { return video->default_effect; } else { /* if the scale method couldn't be loaded, use either bicubic * or bilinear by default */ gs_effect_t *effect = get_scale_effect_internal(mix); if (!effect) effect = !!video->bicubic_effect ? 
video->bicubic_effect : video->default_effect; return effect; } } static const char *render_output_texture_name = "render_output_texture"; static inline gs_texture_t *render_output_texture(struct obs_core_video_mix *mix) { struct obs_video_info *const ovi = &mix->ovi; gs_texture_t *texture = mix->render_texture; gs_texture_t *target = mix->output_texture; const uint32_t width = gs_texture_get_width(target); const uint32_t height = gs_texture_get_height(target); if ((width == ovi->base_width) && (height == ovi->base_height)) return texture; profile_start(render_output_texture_name); gs_effect_t *effect = get_scale_effect(mix, width, height); gs_technique_t *tech = gs_effect_get_technique(effect, "Draw"); gs_eparam_t *image = gs_effect_get_param_by_name(effect, "image"); gs_eparam_t *bres = gs_effect_get_param_by_name(effect, "base_dimension"); gs_eparam_t *bres_i = gs_effect_get_param_by_name(effect, "base_dimension_i"); size_t passes, i; gs_set_render_target(target, NULL); set_render_size(width, height); if (bres) { struct vec2 base; vec2_set(&base, (float)mix->ovi.base_width, (float)mix->ovi.base_height); gs_effect_set_vec2(bres, &base); } if (bres_i) { struct vec2 base_i; vec2_set(&base_i, 1.0f / (float)mix->ovi.base_width, 1.0f / (float)mix->ovi.base_height); gs_effect_set_vec2(bres_i, &base_i); } gs_effect_set_texture_srgb(image, texture); gs_enable_framebuffer_srgb(true); gs_enable_blending(false); passes = gs_technique_begin(tech); for (i = 0; i < passes; i++) { gs_technique_begin_pass(tech, i); gs_draw_sprite(texture, 0, width, height); gs_technique_end_pass(tech); } gs_technique_end(tech); gs_enable_blending(true); gs_enable_framebuffer_srgb(false); profile_end(render_output_texture_name); return target; } static void render_convert_plane(gs_effect_t *effect, gs_texture_t *target, const char *tech_name) { gs_technique_t *tech = gs_effect_get_technique(effect, tech_name); const uint32_t width = gs_texture_get_width(target); const uint32_t height = 
gs_texture_get_height(target); gs_set_render_target(target, NULL); set_render_size(width, height); size_t passes = gs_technique_begin(tech); for (size_t i = 0; i < passes; i++) { gs_technique_begin_pass(tech, i); gs_draw(GS_TRIS, 0, 3); gs_technique_end_pass(tech); } gs_technique_end(tech); } static const char *render_convert_texture_name = "render_convert_texture"; static void render_convert_texture(struct obs_core_video_mix *video, gs_texture_t *const *const convert_textures, gs_texture_t *texture) { profile_start(render_convert_texture_name); gs_effect_t *effect = obs->video.conversion_effect; gs_eparam_t *color_vec0 = gs_effect_get_param_by_name(effect, "color_vec0"); gs_eparam_t *color_vec1 = gs_effect_get_param_by_name(effect, "color_vec1"); gs_eparam_t *color_vec2 = gs_effect_get_param_by_name(effect, "color_vec2"); gs_eparam_t *image = gs_effect_get_param_by_name(effect, "image"); gs_eparam_t *width_i = gs_effect_get_param_by_name(effect, "width_i"); gs_eparam_t *height_i = gs_effect_get_param_by_name(effect, "height_i"); gs_eparam_t *sdr_white_nits_over_maximum = gs_effect_get_param_by_name(effect, "sdr_white_nits_over_maximum"); gs_eparam_t *hdr_lw = gs_effect_get_param_by_name(effect, "hdr_lw"); struct vec4 vec0, vec1, vec2; vec4_set(&vec0, video->color_matrix[4], video->color_matrix[5], video->color_matrix[6], video->color_matrix[7]); vec4_set(&vec1, video->color_matrix[0], video->color_matrix[1], video->color_matrix[2], video->color_matrix[3]); vec4_set(&vec2, video->color_matrix[8], video->color_matrix[9], video->color_matrix[10], video->color_matrix[11]); gs_enable_blending(false); if (convert_textures[0]) { const float hdr_nominal_peak_level = obs->video.hdr_nominal_peak_level; const float multiplier = obs_get_video_sdr_white_level() / 10000.f; gs_effect_set_texture(image, texture); gs_effect_set_vec4(color_vec0, &vec0); gs_effect_set_float(sdr_white_nits_over_maximum, multiplier); gs_effect_set_float(hdr_lw, hdr_nominal_peak_level); 
render_convert_plane(effect, convert_textures[0], video->conversion_techs[0]); if (convert_textures[1]) { gs_effect_set_texture(image, texture); gs_effect_set_vec4(color_vec1, &vec1); if (!convert_textures[2]) gs_effect_set_vec4(color_vec2, &vec2); gs_effect_set_float(width_i, video->conversion_width_i); gs_effect_set_float(height_i, video->conversion_height_i); gs_effect_set_float(sdr_white_nits_over_maximum, multiplier); gs_effect_set_float(hdr_lw, hdr_nominal_peak_level); render_convert_plane(effect, convert_textures[1], video->conversion_techs[1]); if (convert_textures[2]) { gs_effect_set_texture(image, texture); gs_effect_set_vec4(color_vec2, &vec2); gs_effect_set_float(width_i, video->conversion_width_i); gs_effect_set_float(height_i, video->conversion_height_i); gs_effect_set_float(sdr_white_nits_over_maximum, multiplier); gs_effect_set_float(hdr_lw, hdr_nominal_peak_level); render_convert_plane(effect, convert_textures[2], video->conversion_techs[2]); } } } gs_enable_blending(true); video->texture_converted = true; profile_end(render_convert_texture_name); } static const char *stage_output_texture_name = "stage_output_texture"; static inline void stage_output_texture(struct obs_core_video_mix *video, int cur_texture, gs_texture_t *const *const convert_textures, gs_texture_t *output_texture, gs_stagesurf_t *const *const copy_surfaces, size_t channel_count) { profile_start(stage_output_texture_name); unmap_last_surface(video); if (!video->gpu_conversion) { gs_stagesurf_t *copy = copy_surfaces[0]; if (copy) gs_stage_texture(copy, output_texture); video->active_copy_surfaces[cur_texture][0] = copy; for (size_t i = 1; i < NUM_CHANNELS; ++i) video->active_copy_surfaces[cur_texture][i] = NULL; video->textures_copied[cur_texture] = true; } else if (video->texture_converted) { for (size_t i = 0; i < channel_count; i++) { gs_stagesurf_t *copy = copy_surfaces[i]; if (copy) gs_stage_texture(copy, convert_textures[i]); video->active_copy_surfaces[cur_texture][i] = copy; 
} for (size_t i = channel_count; i < NUM_CHANNELS; ++i) video->active_copy_surfaces[cur_texture][i] = NULL; video->textures_copied[cur_texture] = true; } profile_end(stage_output_texture_name); } static inline bool queue_frame(struct obs_core_video_mix *video, bool raw_active, struct obs_vframe_info *vframe_info) { bool duplicate = !video->gpu_encoder_avail_queue.size || (video->gpu_encoder_queue.size && vframe_info->count > 1); if (duplicate) { struct obs_tex_frame *tf = deque_data(&video->gpu_encoder_queue, video->gpu_encoder_queue.size - sizeof(*tf)); /* texture-based encoding is stopping */ if (!tf) { return false; } tf->count++; os_sem_post(video->gpu_encode_semaphore); goto finish; } struct obs_tex_frame tf; deque_pop_front(&video->gpu_encoder_avail_queue, &tf, sizeof(tf)); if (tf.released) { #ifdef _WIN32 gs_texture_acquire_sync(tf.tex, tf.lock_key, GS_WAIT_INFINITE); #endif tf.released = false; } /* the vframe_info->count > 1 case causing a copy can only happen if by * some chance the very first frame has to be duplicated for whatever * reason. otherwise, it goes to the 'duplicate' case above, which * will ensure better performance. */ if (raw_active || vframe_info->count > 1) { gs_copy_texture(tf.tex, video->convert_textures_encode[0]); #ifndef _WIN32 /* Y and UV textures are views of the same texture on D3D, and * gs_copy_texture will copy all views of the underlying * texture. On other platforms, these are two distinct textures * that must be copied separately. 
*/ gs_copy_texture(tf.tex_uv, video->convert_textures_encode[1]); #endif } else { gs_texture_t *tex = video->convert_textures_encode[0]; gs_texture_t *tex_uv = video->convert_textures_encode[1]; video->convert_textures_encode[0] = tf.tex; video->convert_textures_encode[1] = tf.tex_uv; tf.tex = tex; tf.tex_uv = tex_uv; } tf.count = 1; tf.timestamp = vframe_info->timestamp; tf.released = true; #ifdef _WIN32 tf.handle = gs_texture_get_shared_handle(tf.tex); gs_texture_release_sync(tf.tex, ++tf.lock_key); #endif deque_push_back(&video->gpu_encoder_queue, &tf, sizeof(tf)); os_sem_post(video->gpu_encode_semaphore); finish: return --vframe_info->count; } extern void full_stop(struct obs_encoder *encoder); static inline void encode_gpu(struct obs_core_video_mix *video, bool raw_active, struct obs_vframe_info *vframe_info) { while (queue_frame(video, raw_active, vframe_info)) ; } static const char *output_gpu_encoders_name = "output_gpu_encoders"; static void output_gpu_encoders(struct obs_core_video_mix *video, bool raw_active) { profile_start(output_gpu_encoders_name); if (!video->texture_converted) goto end; if (!video->vframe_info_buffer_gpu.size) goto end; struct obs_vframe_info vframe_info; deque_pop_front(&video->vframe_info_buffer_gpu, &vframe_info, sizeof(vframe_info)); pthread_mutex_lock(&video->gpu_encoder_mutex); encode_gpu(video, raw_active, &vframe_info); pthread_mutex_unlock(&video->gpu_encoder_mutex); end: profile_end(output_gpu_encoders_name); } static inline void render_video(struct obs_core_video_mix *video, bool raw_active, const bool gpu_active, int cur_texture) { gs_begin_scene(); gs_enable_depth_test(false); gs_set_cull_mode(GS_NEITHER); render_main_texture(video); if (raw_active || gpu_active) { gs_texture_t *const *convert_textures = video->convert_textures; gs_stagesurf_t *const *copy_surfaces = video->copy_surfaces[cur_texture]; size_t channel_count = NUM_CHANNELS; gs_texture_t *output_texture = render_output_texture(video); if (gpu_active) { 
convert_textures = video->convert_textures_encode; #ifdef _WIN32 copy_surfaces = video->copy_surfaces_encode; channel_count = 1; #endif gs_flush(); } if (video->gpu_conversion) { render_convert_texture(video, convert_textures, output_texture); } if (gpu_active) { gs_flush(); output_gpu_encoders(video, raw_active); } if (raw_active) { stage_output_texture(video, cur_texture, convert_textures, output_texture, copy_surfaces, channel_count); } } gs_set_render_target(NULL, NULL); gs_enable_blending(true); gs_end_scene(); } static inline bool download_frame(struct obs_core_video_mix *video, int prev_texture, struct video_data *frame) { if (!video->textures_copied[prev_texture]) return false; for (int channel = 0; channel < NUM_CHANNELS; ++channel) { gs_stagesurf_t *surface = video->active_copy_surfaces[prev_texture][channel]; if (surface) { if (!gs_stagesurface_map(surface, &frame->data[channel], &frame->linesize[channel])) return false; video->mapped_surfaces[channel] = surface; } } return true; } static const uint8_t *set_gpu_converted_plane(uint32_t width, uint32_t height, uint32_t linesize_input, uint32_t linesize_output, const uint8_t *in, uint8_t *out) { if ((width == linesize_input) && (width == linesize_output)) { size_t total = (size_t)width * (size_t)height; memcpy(out, in, total); in += total; } else { for (size_t y = 0; y < height; y++) { memcpy(out, in, width); out += linesize_output; in += linesize_input; } } return in; } static void set_gpu_converted_data(struct video_frame *output, const struct video_data *input, const struct video_output_info *info) { switch (info->format) { case VIDEO_FORMAT_I420: { const uint32_t width = info->width; const uint32_t height = info->height; set_gpu_converted_plane(width, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); const uint32_t width_d2 = width / 2; const uint32_t height_d2 = height / 2; set_gpu_converted_plane(width_d2, height_d2, input->linesize[1], output->linesize[1], 
input->data[1], output->data[1]); set_gpu_converted_plane(width_d2, height_d2, input->linesize[2], output->linesize[2], input->data[2], output->data[2]); break; } case VIDEO_FORMAT_NV12: { const uint32_t width = info->width; const uint32_t height = info->height; const uint32_t height_d2 = height / 2; if (input->linesize[1]) { set_gpu_converted_plane(width, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); set_gpu_converted_plane(width, height_d2, input->linesize[1], output->linesize[1], input->data[1], output->data[1]); } else { const uint8_t *const in_uv = set_gpu_converted_plane(width, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); set_gpu_converted_plane(width, height_d2, input->linesize[0], output->linesize[1], in_uv, output->data[1]); } break; } case VIDEO_FORMAT_I444: { const uint32_t width = info->width; const uint32_t height = info->height; set_gpu_converted_plane(width, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); set_gpu_converted_plane(width, height, input->linesize[1], output->linesize[1], input->data[1], output->data[1]); set_gpu_converted_plane(width, height, input->linesize[2], output->linesize[2], input->data[2], output->data[2]); break; } case VIDEO_FORMAT_I010: { const uint32_t width = info->width; const uint32_t height = info->height; set_gpu_converted_plane(width * 2, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); const uint32_t height_d2 = height / 2; set_gpu_converted_plane(width, height_d2, input->linesize[1], output->linesize[1], input->data[1], output->data[1]); set_gpu_converted_plane(width, height_d2, input->linesize[2], output->linesize[2], input->data[2], output->data[2]); break; } case VIDEO_FORMAT_P010: { const uint32_t width_x2 = info->width * 2; const uint32_t height = info->height; const uint32_t height_d2 = height / 2; if (input->linesize[1]) { set_gpu_converted_plane(width_x2, height, 
input->linesize[0], output->linesize[0], input->data[0], output->data[0]); set_gpu_converted_plane(width_x2, height_d2, input->linesize[1], output->linesize[1], input->data[1], output->data[1]); } else { const uint8_t *const in_uv = set_gpu_converted_plane(width_x2, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); set_gpu_converted_plane(width_x2, height_d2, input->linesize[0], output->linesize[1], in_uv, output->data[1]); } break; } case VIDEO_FORMAT_P216: { const uint32_t width_x2 = info->width * 2; const uint32_t height = info->height; set_gpu_converted_plane(width_x2, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); set_gpu_converted_plane(width_x2, height, input->linesize[1], output->linesize[1], input->data[1], output->data[1]); break; } case VIDEO_FORMAT_P416: { const uint32_t height = info->height; set_gpu_converted_plane(info->width * 2, height, input->linesize[0], output->linesize[0], input->data[0], output->data[0]); set_gpu_converted_plane(info->width * 4, height, input->linesize[1], output->linesize[1], input->data[1], output->data[1]); break; } case VIDEO_FORMAT_NONE: case VIDEO_FORMAT_YVYU: case VIDEO_FORMAT_YUY2: case VIDEO_FORMAT_UYVY: case VIDEO_FORMAT_RGBA: case VIDEO_FORMAT_BGRA: case VIDEO_FORMAT_BGRX: case VIDEO_FORMAT_Y800: case VIDEO_FORMAT_BGR3: case VIDEO_FORMAT_I412: case VIDEO_FORMAT_I422: case VIDEO_FORMAT_I210: case VIDEO_FORMAT_I40A: case VIDEO_FORMAT_I42A: case VIDEO_FORMAT_YUVA: case VIDEO_FORMAT_YA2L: case VIDEO_FORMAT_AYUV: case VIDEO_FORMAT_V210: case VIDEO_FORMAT_R10L: /* unimplemented */ ; } } static inline void copy_rgbx_frame(struct video_frame *output, const struct video_data *input, const struct video_output_info *info) { uint8_t *in_ptr = input->data[0]; uint8_t *out_ptr = output->data[0]; /* if the line sizes match, do a single copy */ if (input->linesize[0] == output->linesize[0]) { memcpy(out_ptr, in_ptr, (size_t)input->linesize[0] * 
(size_t)info->height); } else { const size_t copy_size = (size_t)info->width * 4; for (size_t y = 0; y < info->height; y++) { memcpy(out_ptr, in_ptr, copy_size); in_ptr += input->linesize[0]; out_ptr += output->linesize[0]; } } } static inline void output_video_data(struct obs_core_video_mix *video, struct video_data *input_frame, int count) { const struct video_output_info *info; struct video_frame output_frame; bool locked; info = video_output_get_info(video->video); locked = video_output_lock_frame(video->video, &output_frame, count, input_frame->timestamp); if (locked) { if (video->gpu_conversion) { set_gpu_converted_data(&output_frame, input_frame, info); } else { copy_rgbx_frame(&output_frame, input_frame, info); } video_output_unlock_frame(video->video); } } void add_ready_encoder_group(obs_encoder_t *encoder) { obs_weak_encoder_t *weak = obs_encoder_get_weak_encoder(encoder); pthread_mutex_lock(&obs->video.encoder_group_mutex); da_push_back(obs->video.ready_encoder_groups, &weak); pthread_mutex_unlock(&obs->video.encoder_group_mutex); } static inline void video_sleep(struct obs_core_video *video, uint64_t *p_time, uint64_t interval_ns) { struct obs_vframe_info vframe_info; uint64_t cur_time = *p_time; uint64_t t = cur_time + interval_ns; int count; if (os_sleepto_ns(t)) { *p_time = t; count = 1; } else { const uint64_t udiff = os_gettime_ns() - cur_time; int64_t diff; memcpy(&diff, &udiff, sizeof(diff)); const uint64_t clamped_diff = (diff > (int64_t)interval_ns) ? 
(uint64_t)diff : interval_ns; count = (int)(clamped_diff / interval_ns); *p_time = cur_time + interval_ns * count; } video->total_frames += count; video->lagged_frames += count - 1; vframe_info.timestamp = cur_time; vframe_info.count = count; pthread_mutex_lock(&video->encoder_group_mutex); for (size_t i = 0; i < video->ready_encoder_groups.num; i++) { obs_encoder_t *encoder = obs_weak_encoder_get_encoder(video->ready_encoder_groups.array[i]); obs_weak_encoder_release(video->ready_encoder_groups.array[i]); if (!encoder) continue; if (encoder->encoder_group) { struct obs_encoder_group *group = encoder->encoder_group; pthread_mutex_lock(&group->mutex); if (group->num_encoders_started >= group->encoders.num && !group->start_timestamp) group->start_timestamp = *p_time; pthread_mutex_unlock(&group->mutex); } obs_encoder_release(encoder); } da_clear(video->ready_encoder_groups); pthread_mutex_unlock(&video->encoder_group_mutex); pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) { struct obs_core_video_mix *video = obs->video.mixes.array[i]; bool raw_active = video->raw_was_active; bool gpu_active = video->gpu_was_active; if (raw_active) deque_push_back(&video->vframe_info_buffer, &vframe_info, sizeof(vframe_info)); if (gpu_active) deque_push_back(&video->vframe_info_buffer_gpu, &vframe_info, sizeof(vframe_info)); } pthread_mutex_unlock(&obs->video.mixes_mutex); } static const char *output_frame_gs_context_name = "gs_context(video->graphics)"; static const char *output_frame_render_video_name = "render_video"; static const char *output_frame_download_frame_name = "download_frame"; static const char *output_frame_gs_flush_name = "gs_flush"; static const char *output_frame_output_video_data_name = "output_video_data"; static inline void output_frame(struct obs_core_video_mix *video) { const bool raw_active = video->raw_was_active; const bool gpu_active = video->gpu_was_active; int cur_texture = video->cur_texture; int 
prev_texture = cur_texture == 0 ? NUM_TEXTURES - 1 : cur_texture - 1; struct video_data frame; bool frame_ready = 0; memset(&frame, 0, sizeof(struct video_data)); profile_start(output_frame_gs_context_name); gs_enter_context(obs->video.graphics); profile_start(output_frame_render_video_name); GS_DEBUG_MARKER_BEGIN(GS_DEBUG_COLOR_RENDER_VIDEO, output_frame_render_video_name); render_video(video, raw_active, gpu_active, cur_texture); GS_DEBUG_MARKER_END(); profile_end(output_frame_render_video_name); if (raw_active) { profile_start(output_frame_download_frame_name); frame_ready = download_frame(video, prev_texture, &frame); profile_end(output_frame_download_frame_name); } profile_start(output_frame_gs_flush_name); gs_flush(); profile_end(output_frame_gs_flush_name); gs_leave_context(); profile_end(output_frame_gs_context_name); if (raw_active && frame_ready) { struct obs_vframe_info vframe_info; deque_pop_front(&video->vframe_info_buffer, &vframe_info, sizeof(vframe_info)); frame.timestamp = vframe_info.timestamp; profile_start(output_frame_output_video_data_name); output_video_data(video, &frame, vframe_info.count); profile_end(output_frame_output_video_data_name); } if (++video->cur_texture == NUM_TEXTURES) video->cur_texture = 0; } static inline void output_frames(void) { pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) { struct obs_core_video_mix *mix = obs->video.mixes.array[i]; if (mix->view) { output_frame(mix); } else { obs->video.mixes.array[i] = NULL; obs_free_video_mix(mix); da_erase(obs->video.mixes, i); i--; num--; } } pthread_mutex_unlock(&obs->video.mixes_mutex); } #define NBSP "\xC2\xA0" static void clear_base_frame_data(struct obs_core_video_mix *video) { video->texture_rendered = false; video->texture_converted = false; deque_free(&video->vframe_info_buffer); video->cur_texture = 0; } static void clear_raw_frame_data(struct obs_core_video_mix *video) { memset(video->textures_copied, 0, 
sizeof(video->textures_copied)); deque_free(&video->vframe_info_buffer); } static void clear_gpu_frame_data(struct obs_core_video_mix *video) { deque_free(&video->vframe_info_buffer_gpu); } extern THREAD_LOCAL bool is_graphics_thread; static void execute_graphics_tasks(void) { struct obs_core_video *video = &obs->video; bool tasks_remaining = true; while (tasks_remaining) { pthread_mutex_lock(&video->task_mutex); if (video->tasks.size) { struct obs_task_info info; deque_pop_front(&video->tasks, &info, sizeof(info)); info.task(info.param); } tasks_remaining = !!video->tasks.size; pthread_mutex_unlock(&video->task_mutex); } } #ifdef _WIN32 struct winrt_exports { void (*winrt_initialize)(); void (*winrt_uninitialize)(); struct winrt_disaptcher *(*winrt_dispatcher_init)(); void (*winrt_dispatcher_free)(struct winrt_disaptcher *dispatcher); void (*winrt_capture_thread_start)(); void (*winrt_capture_thread_stop)(); }; #define WINRT_IMPORT(func) \ do { \ exports->func = os_dlsym(module, #func); \ if (!exports->func) { \ success = false; \ blog(LOG_ERROR, \ "Could not load function '%s' from " \ "module '%s'", \ #func, module_name); \ } \ } while (false) static bool load_winrt_imports(struct winrt_exports *exports, void *module, const char *module_name) { bool success = true; WINRT_IMPORT(winrt_initialize); WINRT_IMPORT(winrt_uninitialize); WINRT_IMPORT(winrt_dispatcher_init); WINRT_IMPORT(winrt_dispatcher_free); WINRT_IMPORT(winrt_capture_thread_start); WINRT_IMPORT(winrt_capture_thread_stop); return success; } struct winrt_state { bool loaded; void *winrt_module; struct winrt_exports exports; struct winrt_disaptcher *dispatcher; }; static void init_winrt_state(struct winrt_state *winrt) { static const char *const module_name = "libobs-winrt"; winrt->winrt_module = os_dlopen(module_name); winrt->loaded = winrt->winrt_module && load_winrt_imports(&winrt->exports, winrt->winrt_module, module_name); winrt->dispatcher = NULL; if (winrt->loaded) { 
winrt->exports.winrt_initialize(); winrt->dispatcher = winrt->exports.winrt_dispatcher_init(); gs_enter_context(obs->video.graphics); winrt->exports.winrt_capture_thread_start(); gs_leave_context(); } } static void uninit_winrt_state(struct winrt_state *winrt) { if (winrt->winrt_module) { if (winrt->loaded) { winrt->exports.winrt_capture_thread_stop(); if (winrt->dispatcher) winrt->exports.winrt_dispatcher_free(winrt->dispatcher); winrt->exports.winrt_uninitialize(); } os_dlclose(winrt->winrt_module); } } #endif // #ifdef _WIN32 static const char *tick_sources_name = "tick_sources"; static const char *render_displays_name = "render_displays"; static const char *output_frame_name = "output_frame"; static inline void update_active_state(struct obs_core_video_mix *video) { const bool raw_was_active = video->raw_was_active; const bool gpu_was_active = video->gpu_was_active; const bool was_active = video->was_active; bool raw_active = os_atomic_load_long(&video->raw_active) > 0; const bool gpu_active = os_atomic_load_long(&video->gpu_encoder_active) > 0; const bool active = raw_active || gpu_active; if (!was_active && active) clear_base_frame_data(video); if (!raw_was_active && raw_active) clear_raw_frame_data(video); if (!gpu_was_active && gpu_active) clear_gpu_frame_data(video); video->gpu_was_active = gpu_active; video->raw_was_active = raw_active; video->was_active = active; } static inline void update_active_states(void) { pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) update_active_state(obs->video.mixes.array[i]); pthread_mutex_unlock(&obs->video.mixes_mutex); } static inline bool stop_requested(void) { bool success = true; pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) if (!video_output_stopped(obs->video.mixes.array[i]->video)) success = false; pthread_mutex_unlock(&obs->video.mixes_mutex); return success; } bool obs_graphics_thread_loop(struct 
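The `WINRT_IMPORT` macro above is a common pattern for optional dynamic modules: stringize each symbol name, look it up, and record failure rather than aborting so the caller can degrade gracefully. A minimal self-contained sketch of the same pattern follows; `fake_dlsym` stands in for `os_dlsym` and is purely illustrative, and the `exports` struct here is hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical export table, analogous to struct winrt_exports above. */
struct exports {
	void (*init)(void);
	void (*shutdown)(void);
};

static void fake_init(void) {}

/* Stand-in for os_dlsym(): resolves "init" but not "shutdown". */
static void *fake_dlsym(const char *name)
{
	if (strcmp(name, "init") == 0)
		return (void *)fake_init;
	return NULL; /* "shutdown" is deliberately missing */
}

/* Same shape as WINRT_IMPORT: stringize the member name, look it up,
 * and flip `success` to false on failure instead of bailing out. */
#define IMPORT(ex, func)                                   \
	do {                                               \
		*(void **)&(ex)->func = fake_dlsym(#func); \
		if (!(ex)->func)                           \
			success = false;                   \
	} while (false)

static bool load_exports(struct exports *ex)
{
	bool success = true;
	IMPORT(ex, init);
	IMPORT(ex, shutdown);
	return success;
}
```

The `*(void **)&(ex)->func` cast is the usual POSIX `dlsym` idiom for storing an object pointer into a function-pointer slot; like the original, the loader reports which symbol failed rather than leaving the caller guessing.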
obs_graphics_context *context) { uint64_t frame_start = os_gettime_ns(); uint64_t frame_time_ns; update_active_states(); profile_start(context->video_thread_name); source_profiler_frame_begin(); gs_enter_context(obs->video.graphics); gs_begin_frame(); gs_leave_context(); profile_start(tick_sources_name); context->last_time = tick_sources(obs->video.video_time, context->last_time); profile_end(tick_sources_name); #ifdef _WIN32 MSG msg; while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { TranslateMessage(&msg); DispatchMessage(&msg); } #endif source_profiler_render_begin(); profile_start(output_frame_name); output_frames(); profile_end(output_frame_name); profile_start(render_displays_name); render_displays(); profile_end(render_displays_name); source_profiler_render_end(); execute_graphics_tasks(); frame_time_ns = os_gettime_ns() - frame_start; source_profiler_frame_collect(); profile_end(context->video_thread_name); profile_reenable_thread(); video_sleep(&obs->video, &obs->video.video_time, context->interval); context->frame_time_total_ns += frame_time_ns; context->fps_total_ns += (obs->video.video_time - context->last_time); context->fps_total_frames++; if (context->fps_total_ns >= 1000000000ULL) { obs->video.video_fps = (double)context->fps_total_frames / ((double)context->fps_total_ns / 1000000000.0); obs->video.video_avg_frame_time_ns = context->frame_time_total_ns / (uint64_t)context->fps_total_frames; context->frame_time_total_ns = 0; context->fps_total_ns = 0; context->fps_total_frames = 0; } return !stop_requested(); } void *obs_graphics_thread(void *param) { #ifdef _WIN32 struct winrt_state winrt; init_winrt_state(&winrt); #endif // #ifdef _WIN32 is_graphics_thread = true; const uint64_t interval = obs->video.video_frame_interval_ns; obs->video.video_time = os_gettime_ns(); os_set_thread_name("libobs: graphics thread"); const char *video_thread_name = profile_store_name(obs_get_profiler_name_store(), "obs_graphics_thread(%g" NBSP "ms)", interval / 
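The rolling FPS update in `obs_graphics_thread_loop` can be reduced to one expression: once roughly a second of intervals has accumulated (`fps_total_ns >= 1000000000ULL`), `video_fps` is frames divided by elapsed seconds, and the counters reset for the next window. A minimal sketch of that computation:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the video_fps computation above: frames per elapsed second,
 * with the nanosecond total converted to seconds in double precision. */
static double compute_fps(uint64_t total_frames, uint64_t total_ns)
{
	return (double)total_frames / ((double)total_ns / 1000000000.0);
}
```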
1000000.); profile_register_root(video_thread_name, interval); srand((unsigned int)time(NULL)); struct obs_graphics_context context; context.interval = interval; context.frame_time_total_ns = 0; context.fps_total_ns = 0; context.fps_total_frames = 0; context.last_time = 0; context.video_thread_name = video_thread_name; #ifdef __APPLE__ while (obs_graphics_thread_loop_autorelease(&context)) #else while (obs_graphics_thread_loop(&context)) #endif ; #ifdef _WIN32 uninit_winrt_state(&winrt); #endif UNUSED_PARAMETER(param); return NULL; } obs-studio-32.1.0-sources/libobs/obs-missing-files.h000644 001751 001751 00000004362 15153330235 023216 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2019 by Dillon Pentz This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once #include "util/c99defs.h" #ifdef __cplusplus extern "C" { #endif typedef void (*obs_missing_file_cb)(void *src, const char *new_path, void *data); struct obs_missing_file; struct obs_missing_files; typedef struct obs_missing_file obs_missing_file_t; typedef struct obs_missing_files obs_missing_files_t; enum obs_missing_file_src { OBS_MISSING_FILE_SOURCE, OBS_MISSING_FILE_SCRIPT }; EXPORT obs_missing_files_t *obs_missing_files_create(); EXPORT obs_missing_file_t *obs_missing_file_create(const char *path, obs_missing_file_cb callback, int src_type, void *src, void *data); EXPORT void obs_missing_files_add_file(obs_missing_files_t *files, obs_missing_file_t *file); EXPORT size_t obs_missing_files_count(obs_missing_files_t *files); EXPORT obs_missing_file_t *obs_missing_files_get_file(obs_missing_files_t *files, int idx); EXPORT void obs_missing_files_destroy(obs_missing_files_t *files); EXPORT void obs_missing_files_append(obs_missing_files_t *dst, obs_missing_files_t *src); EXPORT void obs_missing_file_issue_callback(obs_missing_file_t *file, const char *new_path); EXPORT const char *obs_missing_file_get_path(obs_missing_file_t *file); EXPORT const char *obs_missing_file_get_source_name(obs_missing_file_t *file); EXPORT void obs_missing_file_release(obs_missing_file_t *file); EXPORT void obs_missing_file_destroy(obs_missing_file_t *file); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-avc.c000644 001751 001751 00000020474 15153330235 021213 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "obs-avc.h" #include "obs.h" #include "obs-nal.h" #include "util/array-serializer.h" #include "util/bitstream.h" bool obs_avc_keyframe(const uint8_t *data, size_t size) { const uint8_t *nal_start, *nal_end; const uint8_t *end = data + size; nal_start = obs_nal_find_startcode(data, end); while (true) { while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; const uint8_t type = nal_start[0] & 0x1F; if (type == OBS_NAL_SLICE_IDR || type == OBS_NAL_SLICE) return type == OBS_NAL_SLICE_IDR; nal_end = obs_nal_find_startcode(nal_start, end); nal_start = nal_end; } return false; } const uint8_t *obs_avc_find_startcode(const uint8_t *p, const uint8_t *end) { return obs_nal_find_startcode(p, end); } static int compute_avc_keyframe_priority(const uint8_t *nal_start, bool *is_keyframe, int priority) { const int type = nal_start[0] & 0x1F; if (type == OBS_NAL_SLICE_IDR) *is_keyframe = true; const int new_priority = nal_start[0] >> 5; if (priority < new_priority) priority = new_priority; return priority; } static void serialize_avc_data(struct serializer *s, const uint8_t *data, size_t size, bool *is_keyframe, int *priority) { const uint8_t *const end = data + size; const uint8_t *nal_start = obs_nal_find_startcode(data, end); while (true) { while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; *priority = compute_avc_keyframe_priority(nal_start, is_keyframe, *priority); const uint8_t *const nal_end = obs_nal_find_startcode(nal_start, end); const size_t nal_size = nal_end - nal_start; s_wb32(s, (uint32_t)nal_size); 
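The NAL iteration pattern used throughout `obs-avc.c` (find a start code, skip the zero bytes and the `0x01`, then read the NAL header byte) can be shown with a simplified scanner. This sketch only handles 3-byte `00 00 01` start codes and returns a pointer just past the code, a simplification of `obs_nal_find_startcode` plus the skip loop in the callers; `find_startcode` and `nal_type` are illustrative helpers, not the libobs API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Return a pointer just past the next 00 00 01 start code, or `end`
 * if no start code remains in the buffer. */
static const uint8_t *find_startcode(const uint8_t *p, const uint8_t *end)
{
	while (end - p >= 3) {
		if (p[0] == 0 && p[1] == 0 && p[2] == 1)
			return p + 3;
		p++;
	}
	return end;
}

/* The NAL unit type is the low 5 bits of the first byte after the
 * start code; 5 is an IDR slice, 1 a non-IDR slice (same values the
 * OBS_NAL_SLICE_IDR / OBS_NAL_SLICE checks above rely on). */
static int nal_type(const uint8_t *nal)
{
	return nal[0] & 0x1F;
}
```

`obs_avc_keyframe` is exactly this walk: it stops at the first slice NAL and reports whether it was an IDR.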
s_write(s, nal_start, nal_size); nal_start = nal_end; } } void obs_parse_avc_packet(struct encoder_packet *avc_packet, const struct encoder_packet *src) { struct array_output_data output; struct serializer s; long ref = 1; array_output_serializer_init(&s, &output); *avc_packet = *src; serialize(&s, &ref, sizeof(ref)); serialize_avc_data(&s, src->data, src->size, &avc_packet->keyframe, &avc_packet->priority); avc_packet->data = output.bytes.array + sizeof(ref); avc_packet->size = output.bytes.num - sizeof(ref); avc_packet->drop_priority = avc_packet->priority; } int obs_parse_avc_packet_priority(const struct encoder_packet *packet) { int priority = packet->priority; const uint8_t *const data = packet->data; const uint8_t *const end = data + packet->size; const uint8_t *nal_start = obs_nal_find_startcode(data, end); while (true) { while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; bool unused; priority = compute_avc_keyframe_priority(nal_start, &unused, priority); nal_start = obs_nal_find_startcode(nal_start, end); } return priority; } static inline bool has_start_code(const uint8_t *data) { if (data[0] != 0 || data[1] != 0) return false; return data[2] == 1 || (data[2] == 0 && data[3] == 1); } static void get_sps_pps(const uint8_t *data, size_t size, const uint8_t **sps, size_t *sps_size, const uint8_t **pps, size_t *pps_size) { const uint8_t *nal_start, *nal_end; const uint8_t *end = data + size; int type; nal_start = obs_nal_find_startcode(data, end); while (true) { while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; nal_end = obs_nal_find_startcode(nal_start, end); type = nal_start[0] & 0x1F; if (type == OBS_NAL_SPS) { *sps = nal_start; *sps_size = nal_end - nal_start; } else if (type == OBS_NAL_PPS) { *pps = nal_start; *pps_size = nal_end - nal_start; } nal_start = nal_end; } } static inline uint8_t get_ue_golomb(struct bitstream_reader *gb) { int i = 0; while (i < 32 && !bitstream_reader_read_bits(gb, 1)) i++; return 
bitstream_reader_read_bits(gb, i) + (1 << i) - 1; } static void get_sps_high_params(const uint8_t *sps, size_t size, uint8_t *chroma_format_idc, uint8_t *bit_depth_luma, uint8_t *bit_depth_chroma) { struct bitstream_reader gb; /* Extract RBSP */ uint8_t *rbsp = bzalloc(size); size_t i = 0; size_t rbsp_size = 0; while (i + 2 < size) { if (sps[i] == 0 && sps[i + 1] == 0 && sps[i + 2] == 3) { rbsp[rbsp_size++] = sps[i++]; rbsp[rbsp_size++] = sps[i++]; // skip emulation_prevention_three_byte i++; } else { rbsp[rbsp_size++] = sps[i++]; } } while (i < size) rbsp[rbsp_size++] = sps[i++]; /* Read relevant information from SPS */ bitstream_reader_init(&gb, rbsp, rbsp_size); // skip a whole bunch of stuff we don't care about bitstream_reader_read_bits(&gb, 24); // profile, constraint flags, level get_ue_golomb(&gb); // id *chroma_format_idc = get_ue_golomb(&gb); // skip separate_colour_plane_flag if (*chroma_format_idc == 3) bitstream_reader_read_bits(&gb, 1); *bit_depth_luma = get_ue_golomb(&gb); *bit_depth_chroma = get_ue_golomb(&gb); bfree(rbsp); } size_t obs_parse_avc_header(uint8_t **header, const uint8_t *data, size_t size) { struct array_output_data output; struct serializer s; const uint8_t *sps = NULL, *pps = NULL; size_t sps_size = 0, pps_size = 0; array_output_serializer_init(&s, &output); if (size <= 6) return 0; if (!has_start_code(data)) { *header = bmemdup(data, size); return size; } get_sps_pps(data, size, &sps, &sps_size, &pps, &pps_size); if (!sps || !pps || sps_size < 4) return 0; s_w8(&s, 0x01); s_write(&s, sps + 1, 3); s_w8(&s, 0xff); s_w8(&s, 0xe1); s_wb16(&s, (uint16_t)sps_size); s_write(&s, sps, sps_size); s_w8(&s, 0x01); s_wb16(&s, (uint16_t)pps_size); s_write(&s, pps, pps_size); uint8_t profile_idc = sps[1]; /* Additional data required for high, high10, high422, high444 profiles. * See ISO/IEC 14496-15 Section 5.3.3.1.2. 
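The `get_ue_golomb` helper above decodes unsigned Exp-Golomb values (`ue(v)` in the H.264 syntax): count leading zero bits, read that many further bits, and the value is `read_bits(n) + 2^n - 1`. A self-contained sketch using a hypothetical `bit_reader` (not the libobs `bitstream_reader` API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal MSB-first bit reader for illustration. */
struct bit_reader {
	const uint8_t *data;
	size_t pos; /* bit position from the start of `data` */
};

static unsigned read_bit(struct bit_reader *br)
{
	unsigned bit = (br->data[br->pos >> 3] >> (7 - (br->pos & 7))) & 1;
	br->pos++;
	return bit;
}

/* ue(v): n leading zeros, a 1 bit, then n suffix bits;
 * decoded value is suffix + 2^n - 1. */
static unsigned read_ue(struct bit_reader *br)
{
	int zeros = 0;
	while (!read_bit(br))
		zeros++;
	unsigned val = 0;
	for (int i = 0; i < zeros; i++)
		val = (val << 1) | read_bit(br);
	return val + (1u << zeros) - 1;
}
```

So the bit strings `1`, `010`, `011` decode to 0, 1, 2; packing them into one byte gives `1 010 011 0` = `0xA6`. Note the RBSP extraction in `get_sps_high_params` must run first, since `00 00 03` emulation-prevention bytes would otherwise corrupt the bitstream.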
*/ if (profile_idc == 100 || profile_idc == 110 || profile_idc == 122 || profile_idc == 244) { uint8_t chroma_format_idc, bit_depth_luma, bit_depth_chroma; get_sps_high_params(sps + 1, sps_size - 1, &chroma_format_idc, &bit_depth_luma, &bit_depth_chroma); // reserved + chroma_format s_w8(&s, 0xfc | chroma_format_idc); // reserved + bit_depth_luma_minus8 s_w8(&s, 0xf8 | bit_depth_luma); // reserved + bit_depth_chroma_minus8 s_w8(&s, 0xf8 | bit_depth_chroma); // numOfSequenceParameterSetExt s_w8(&s, 0); } *header = output.bytes.array; return output.bytes.num; } void obs_extract_avc_headers(const uint8_t *packet, size_t size, uint8_t **new_packet_data, size_t *new_packet_size, uint8_t **header_data, size_t *header_size, uint8_t **sei_data, size_t *sei_size) { DARRAY(uint8_t) new_packet; DARRAY(uint8_t) header; DARRAY(uint8_t) sei; const uint8_t *nal_start, *nal_end, *nal_codestart; const uint8_t *end = packet + size; da_init(new_packet); da_init(header); da_init(sei); nal_start = obs_nal_find_startcode(packet, end); nal_end = NULL; while (nal_end != end) { nal_codestart = nal_start; while (nal_start < end && !*(nal_start++)) ; if (nal_start == end) break; const uint8_t type = nal_start[0] & 0x1F; nal_end = obs_nal_find_startcode(nal_start, end); if (!nal_end) nal_end = end; if (type == OBS_NAL_SPS || type == OBS_NAL_PPS) { da_push_back_array(header, nal_codestart, nal_end - nal_codestart); } else if (type == OBS_NAL_SEI) { da_push_back_array(sei, nal_codestart, nal_end - nal_codestart); } else { da_push_back_array(new_packet, nal_codestart, nal_end - nal_codestart); } nal_start = nal_end; } *new_packet_data = new_packet.array; *new_packet_size = new_packet.num; *header_data = header.array; *header_size = header.num; *sei_data = sei.array; *sei_size = sei.num; } obs-studio-32.1.0-sources/libobs/audio-monitoring/000755 001751 001751 00000000000 15153330731 022773 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/audio-monitoring/null/000755 001751 
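The byte sequence `obs_parse_avc_header` emits is the `AVCDecoderConfigurationRecord` (avcC) from ISO/IEC 14496-15. A sketch of the basic-profile layout, without the high-profile chroma/bit-depth trailer handled above; `write_avcc` is a hypothetical helper that assumes `out` is large enough and `sps` has at least 4 bytes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the avcC record the code above serializes: version byte,
 * profile/compat/level copied from the SPS, length-size and SPS/PPS
 * counts, then the length-prefixed parameter sets themselves. */
static size_t write_avcc(uint8_t *out, const uint8_t *sps, size_t sps_size,
			 const uint8_t *pps, size_t pps_size)
{
	size_t n = 0;
	out[n++] = 0x01;   /* configurationVersion */
	out[n++] = sps[1]; /* AVCProfileIndication */
	out[n++] = sps[2]; /* profile_compatibility */
	out[n++] = sps[3]; /* AVCLevelIndication */
	out[n++] = 0xff;   /* reserved + lengthSizeMinusOne (4-byte lengths) */
	out[n++] = 0xe1;   /* reserved + numOfSequenceParameterSets (1) */
	out[n++] = (uint8_t)(sps_size >> 8);
	out[n++] = (uint8_t)sps_size;
	memcpy(out + n, sps, sps_size);
	n += sps_size;
	out[n++] = 0x01;   /* numOfPictureParameterSets */
	out[n++] = (uint8_t)(pps_size >> 8);
	out[n++] = (uint8_t)pps_size;
	memcpy(out + n, pps, pps_size);
	n += pps_size;
	return n;
}
```

This matches the `s_w8`/`s_wb16`/`s_write` sequence above: `0x01`, three bytes copied from `sps + 1`, `0xff`, `0xe1`, then the length-prefixed SPS and PPS.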
001751 00000000000 15153330731 023745 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/audio-monitoring/null/null-audio-monitoring.c000644 001751 001751 00000001171 15153330235 030344 0ustar00runnerrunner000000 000000 #include bool obs_audio_monitoring_available(void) { return false; } void obs_enum_audio_monitoring_devices(obs_enum_audio_device_cb cb, void *data) { UNUSED_PARAMETER(cb); UNUSED_PARAMETER(data); } struct audio_monitor *audio_monitor_create(obs_source_t *source) { UNUSED_PARAMETER(source); return NULL; } void audio_monitor_reset(struct audio_monitor *monitor) { UNUSED_PARAMETER(monitor); } void audio_monitor_destroy(struct audio_monitor *monitor) { UNUSED_PARAMETER(monitor); } bool devices_match(const char *id1, const char *id2) { UNUSED_PARAMETER(id1); UNUSED_PARAMETER(id2); return false; } obs-studio-32.1.0-sources/libobs/audio-monitoring/osx/000755 001751 001751 00000000000 15153330731 023604 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/audio-monitoring/osx/coreaudio-enum-devices.c000644 001751 001751 00000013025 15153330235 030304 0ustar00runnerrunner000000 000000 #include #include #include "../../obs-internal.h" #include "../../util/dstr.h" #include "../../util/apple/cfstring-utils.h" #include "mac-helpers.h" static bool obs_enum_audio_monitoring_device(obs_enum_audio_device_cb cb, void *data, AudioDeviceID id, bool allow_inputs) { UInt32 size = 0; CFStringRef cf_name = NULL; CFStringRef cf_uid = NULL; char *name = NULL; char *uid = NULL; OSStatus stat; bool cont = true; AudioObjectPropertyAddress addr = {kAudioDevicePropertyStreams, kAudioDevicePropertyScopeOutput, kAudioObjectPropertyElementMain}; /* Check if the device is capable of audio output. 
*/ AudioObjectGetPropertyDataSize(id, &addr, 0, NULL, &size); if (!allow_inputs && !size) return true; size = sizeof(CFStringRef); addr.mSelector = kAudioDevicePropertyDeviceUID; stat = AudioObjectGetPropertyData(id, &addr, 0, NULL, &size, &cf_uid); if (!success(stat, "get audio device UID")) goto fail; addr.mSelector = kAudioDevicePropertyDeviceNameCFString; stat = AudioObjectGetPropertyData(id, &addr, 0, NULL, &size, &cf_name); if (!success(stat, "get audio device name")) goto fail; name = cfstr_copy_cstr(cf_name, kCFStringEncodingUTF8); if (!name) { blog(LOG_WARNING, "%s: failed to convert name", __FUNCTION__); goto fail; } uid = cfstr_copy_cstr(cf_uid, kCFStringEncodingUTF8); if (!uid) { blog(LOG_WARNING, "%s: failed to convert uid", __FUNCTION__); goto fail; } cont = cb(data, name, uid); fail: bfree(name); bfree(uid); if (cf_name) CFRelease(cf_name); if (cf_uid) CFRelease(cf_uid); return cont; } static void enum_audio_devices(obs_enum_audio_device_cb cb, void *data, bool allow_inputs) { AudioObjectPropertyAddress addr = {kAudioHardwarePropertyDevices, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMain}; UInt32 size = 0; UInt32 count; OSStatus stat; AudioDeviceID *ids; stat = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size); if (!success(stat, "get data size")) return; ids = malloc(size); count = size / sizeof(AudioDeviceID); stat = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, ids); if (success(stat, "get data")) { for (UInt32 i = 0; i < count; i++) { if (!obs_enum_audio_monitoring_device(cb, data, ids[i], allow_inputs)) break; } } free(ids); } void obs_enum_audio_monitoring_devices(obs_enum_audio_device_cb cb, void *data) { enum_audio_devices(cb, data, false); } static bool alloc_default_id(void *data, const char *name, const char *id) { char **p_id = data; UNUSED_PARAMETER(name); *p_id = bstrdup(id); return false; } static void get_default_id(char **p_id) { AudioObjectPropertyAddress 
addr = {kAudioHardwarePropertyDefaultOutputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMain}; if (*p_id) return; OSStatus stat; AudioDeviceID id = 0; UInt32 size = sizeof(id); stat = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &id); if (success(stat, "AudioObjectGetPropertyData")) obs_enum_audio_monitoring_device(alloc_default_id, p_id, id, true); if (!*p_id) *p_id = bzalloc(1); } struct device_name_info { const char *id; char *name; }; static bool enum_device_name(void *data, const char *name, const char *id) { struct device_name_info *info = data; if (strcmp(info->id, id) == 0) { info->name = bstrdup(name); return false; } return true; } bool devices_match(const char *id1, const char *id2) { struct device_name_info info = {0}; char *default_id = NULL; char *name1 = NULL; char *name2 = NULL; bool match; if (!id1 || !id2) return false; if (strcmp(id1, "default") == 0) { get_default_id(&default_id); id1 = default_id; } if (strcmp(id2, "default") == 0) { get_default_id(&default_id); id2 = default_id; } info.id = id1; enum_audio_devices(enum_device_name, &info, true); name1 = info.name; info.name = NULL; info.id = id2; enum_audio_devices(enum_device_name, &info, true); name2 = info.name; match = name1 && name2 && strcmp(name1, name2) == 0; bfree(default_id); bfree(name1); bfree(name2); return match; } static inline bool device_is_input(const char *device) { return astrstri(device, "soundflower") == NULL && astrstri(device, "wavtap") == NULL && astrstri(device, "soundsiphon") == NULL && astrstri(device, "ishowu") == NULL && astrstri(device, "blackhole") == NULL && astrstri(device, "loopback") == NULL && astrstri(device, "groundcontrol") == NULL && astrstri(device, "vbcable") == NULL; } static bool find_loopback_cb(void *param, const char *name, const char *id) { UNUSED_PARAMETER(name); char **p_id = param; if (!device_is_input(id)) { *p_id = bstrdup(id); return false; } return true; } void 
get_desktop_default_id(char **p_id) { if (*p_id) return; AudioObjectPropertyAddress addr = {kAudioHardwarePropertyDefaultSystemOutputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMain}; AudioDeviceID id = 0; UInt32 size = sizeof(id); OSStatus stat = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, &id); if (success(stat, "AudioObjectGetPropertyData")) { /* Try system default output first */ obs_enum_audio_monitoring_device(alloc_default_id, p_id, id, false); /* If not a loopback, try to find a virtual (non-input) device instead */ if (*p_id && device_is_input(*p_id)) { bfree(*p_id); *p_id = NULL; enum_audio_devices(find_loopback_cb, p_id, false); } } if (!*p_id) *p_id = bzalloc(1); } obs-studio-32.1.0-sources/libobs/audio-monitoring/osx/mac-helpers.h000644 001751 001751 00000000423 15153330235 026153 0ustar00runnerrunner000000 000000 #pragma once static bool success_(OSStatus stat, const char *func, const char *call) { if (stat != noErr) { blog(LOG_WARNING, "%s: %s failed: %d", func, call, (int)stat); return false; } return true; } #define success(stat, call) success_(stat, __FUNCTION__, call) obs-studio-32.1.0-sources/libobs/audio-monitoring/osx/coreaudio-output.c000644 001751 001751 00000020754 15153330235 027267 0ustar00runnerrunner000000 000000 #include #include #include #include #include "../../media-io/audio-resampler.h" #include "../../util/deque.h" #include "../../util/threading.h" #include "../../util/platform.h" #include "../../obs-internal.h" #include "../../util/darray.h" #include "mac-helpers.h" struct audio_monitor { obs_source_t *source; AudioQueueRef queue; AudioQueueBufferRef buffers[3]; pthread_mutex_t mutex; struct deque empty_buffers; struct deque new_data; audio_resampler_t *resampler; size_t buffer_size; size_t wait_size; uint32_t channels; volatile bool active; bool paused; bool ignore; }; static inline bool fill_buffer(struct audio_monitor *monitor) { AudioQueueBufferRef buf; OSStatus stat; if 
(monitor->new_data.size < monitor->buffer_size) { return false; } deque_pop_front(&monitor->empty_buffers, &buf, sizeof(buf)); deque_pop_front(&monitor->new_data, buf->mAudioData, monitor->buffer_size); buf->mAudioDataByteSize = (UInt32)monitor->buffer_size; stat = AudioQueueEnqueueBuffer(monitor->queue, buf, 0, NULL); if (!success(stat, "AudioQueueEnqueueBuffer")) { blog(LOG_WARNING, "%s: %s", __FUNCTION__, "Failed to enqueue buffer"); AudioQueueStop(monitor->queue, false); } return true; } static void on_audio_pause(void *data, calldata_t *calldata) { UNUSED_PARAMETER(calldata); struct audio_monitor *monitor = data; pthread_mutex_lock(&monitor->mutex); deque_free(&monitor->new_data); pthread_mutex_unlock(&monitor->mutex); } static void on_audio_playback(void *param, obs_source_t *source, const struct audio_data *audio_data, bool muted) { struct audio_monitor *monitor = param; float vol = source->user_volume; uint32_t bytes; if (!os_atomic_load_bool(&monitor->active)) { return; } if (os_atomic_load_long(&source->activate_refs) == 0) { return; } uint8_t *resample_data[MAX_AV_PLANES]; uint32_t resample_frames; uint64_t ts_offset; bool success; success = audio_resampler_resample(monitor->resampler, resample_data, &resample_frames, &ts_offset, (const uint8_t *const *)audio_data->data, (uint32_t)audio_data->frames); if (!success) { return; } bytes = sizeof(float) * monitor->channels * resample_frames; if (muted) { memset(resample_data[0], 0, bytes); } else { /* apply volume */ if (!close_float(vol, 1.0f, EPSILON)) { register float *cur = (float *)resample_data[0]; register float *end = cur + resample_frames * monitor->channels; while (cur < end) *(cur++) *= vol; } } pthread_mutex_lock(&monitor->mutex); deque_push_back(&monitor->new_data, resample_data[0], bytes); if (monitor->new_data.size >= monitor->wait_size) { monitor->wait_size = 0; while (monitor->empty_buffers.size > 0) { if (!fill_buffer(monitor)) { break; } } if (monitor->paused) { 
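The volume scaling inside `on_audio_playback` above is a straight in-place multiply over the interleaved float samples, skipped entirely when the volume is within epsilon of 1.0 (the `close_float` check). The loop in isolation, as a hedged sketch:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the in-place volume loop above: scale every interleaved
 * float sample (frames * channels of them) by the source volume. */
static void apply_volume(float *samples, size_t frames, size_t channels,
			 float vol)
{
	float *cur = samples;
	float *end = samples + frames * channels;
	while (cur < end)
		*(cur++) *= vol;
}
```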
AudioQueueStart(monitor->queue, NULL); monitor->paused = false; } } pthread_mutex_unlock(&monitor->mutex); } static void buffer_audio(void *data, AudioQueueRef aq, AudioQueueBufferRef buf) { struct audio_monitor *monitor = data; pthread_mutex_lock(&monitor->mutex); deque_push_back(&monitor->empty_buffers, &buf, sizeof(buf)); while (monitor->empty_buffers.size > 0) { if (!fill_buffer(monitor)) { break; } } if (monitor->empty_buffers.size == sizeof(buf) * 3) { monitor->paused = true; monitor->wait_size = monitor->buffer_size * 3; AudioQueuePause(monitor->queue); } pthread_mutex_unlock(&monitor->mutex); UNUSED_PARAMETER(aq); } extern bool devices_match(const char *id1, const char *id2); static bool audio_monitor_init(struct audio_monitor *monitor, obs_source_t *source) { const struct audio_output_info *info = audio_output_get_info(obs->audio.audio); uint32_t channels = get_audio_channels(info->speakers); OSStatus stat; AudioStreamBasicDescription desc = {.mSampleRate = (Float64)info->samples_per_sec, .mFormatID = kAudioFormatLinearPCM, .mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked, .mBytesPerPacket = sizeof(float) * channels, .mFramesPerPacket = 1, .mBytesPerFrame = sizeof(float) * channels, .mChannelsPerFrame = channels, .mBitsPerChannel = sizeof(float) * 8}; monitor->source = source; monitor->channels = channels; monitor->buffer_size = channels * sizeof(float) * info->samples_per_sec / 100 * 3; monitor->wait_size = monitor->buffer_size * 3; pthread_mutex_init_value(&monitor->mutex); const char *uid = obs->audio.monitoring_device_id; if (!uid || !*uid) { return false; } if (source->info.output_flags & OBS_SOURCE_DO_NOT_SELF_MONITOR) { obs_data_t *s = obs_source_get_settings(source); const char *s_dev_id = obs_data_get_string(s, "device_id"); bool match = devices_match(s_dev_id, uid); obs_data_release(s); if (match) { monitor->ignore = true; return true; } } stat = AudioQueueNewOutput(&desc, buffer_audio, monitor, NULL, NULL, 0, &monitor->queue); 
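The buffer sizing in `audio_monitor_init` above packs 30 ms of interleaved float audio into each AudioQueue buffer: `samples_per_sec / 100` is 10 ms of frames, tripled, times bytes per frame. With `wait_size` set to three buffers, playback waits for about 90 ms of queued audio before starting. A sketch of the arithmetic:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the monitor buffer sizing above: bytes for 30 ms of
 * interleaved float samples at the given channel count and rate. */
static size_t monitor_buffer_size(uint32_t channels, uint32_t samples_per_sec)
{
	return (size_t)channels * sizeof(float) * samples_per_sec / 100 * 3;
}
```

At stereo 48 kHz this is 2 * 4 * 480 * 3 = 11520 bytes per buffer, which is also the threshold `fill_buffer` checks before dequeuing data.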
if (!success(stat, "AudioStreamBasicDescription")) { return false; } if (strcmp(uid, "default") != 0) { CFStringRef cf_uid = CFStringCreateWithBytes(NULL, (const UInt8 *)uid, strlen(uid), kCFStringEncodingUTF8, false); stat = AudioQueueSetProperty(monitor->queue, kAudioQueueProperty_CurrentDevice, &cf_uid, sizeof(cf_uid)); CFRelease(cf_uid); if (!success(stat, "set current device")) { return false; } } stat = AudioQueueSetParameter(monitor->queue, kAudioQueueParam_Volume, 1.0); if (!success(stat, "set volume")) { return false; } for (size_t i = 0; i < 3; i++) { stat = AudioQueueAllocateBuffer(monitor->queue, (UInt32)monitor->buffer_size, &monitor->buffers[i]); if (!success(stat, "allocation of buffer")) { return false; } deque_push_back(&monitor->empty_buffers, &monitor->buffers[i], sizeof(monitor->buffers[i])); } if (pthread_mutex_init(&monitor->mutex, NULL) != 0) { blog(LOG_WARNING, "%s: %s", __FUNCTION__, "Failed to init mutex"); return false; } struct resample_info from = {.samples_per_sec = info->samples_per_sec, .speakers = info->speakers, .format = AUDIO_FORMAT_FLOAT_PLANAR}; struct resample_info to = {.samples_per_sec = info->samples_per_sec, .speakers = info->speakers, .format = AUDIO_FORMAT_FLOAT}; monitor->resampler = audio_resampler_create(&to, &from); if (!monitor->resampler) { blog(LOG_WARNING, "%s: %s", __FUNCTION__, "Failed to create resampler"); return false; } stat = AudioQueueStart(monitor->queue, NULL); if (!success(stat, "start")) { return false; } monitor->active = true; return true; } static void audio_monitor_free(struct audio_monitor *monitor) { if (monitor->source) { obs_source_remove_audio_capture_callback(monitor->source, on_audio_playback, monitor); obs_source_remove_audio_pause_callback(monitor->source, on_audio_pause, monitor); } if (monitor->active) { AudioQueueStop(monitor->queue, true); } for (size_t i = 0; i < 3; i++) { if (monitor->buffers[i]) { AudioQueueFreeBuffer(monitor->queue, monitor->buffers[i]); } } if (monitor->queue) { 
AudioQueueDispose(monitor->queue, true); } audio_resampler_destroy(monitor->resampler); deque_free(&monitor->empty_buffers); deque_free(&monitor->new_data); pthread_mutex_destroy(&monitor->mutex); } static void audio_monitor_init_final(struct audio_monitor *monitor) { if (monitor->ignore) return; obs_source_add_audio_capture_callback(monitor->source, on_audio_playback, monitor); obs_source_add_audio_pause_callback(monitor->source, on_audio_pause, monitor); } struct audio_monitor *audio_monitor_create(obs_source_t *source) { struct audio_monitor *monitor = bzalloc(sizeof(*monitor)); if (!audio_monitor_init(monitor, source)) { goto fail; } pthread_mutex_lock(&obs->audio.monitoring_mutex); da_push_back(obs->audio.monitors, &monitor); pthread_mutex_unlock(&obs->audio.monitoring_mutex); audio_monitor_init_final(monitor); return monitor; fail: audio_monitor_free(monitor); bfree(monitor); return NULL; } void audio_monitor_reset(struct audio_monitor *monitor) { bool success; obs_source_t *source = monitor->source; audio_monitor_free(monitor); memset(monitor, 0, sizeof(*monitor)); success = audio_monitor_init(monitor, source); if (success) audio_monitor_init_final(monitor); } void audio_monitor_destroy(struct audio_monitor *monitor) { if (monitor) { audio_monitor_free(monitor); pthread_mutex_lock(&obs->audio.monitoring_mutex); da_erase_item(obs->audio.monitors, &monitor); pthread_mutex_unlock(&obs->audio.monitoring_mutex); bfree(monitor); } } obs-studio-32.1.0-sources/libobs/audio-monitoring/osx/coreaudio-monitoring-available.c000644 001751 001751 00000000135 15153330235 032021 0ustar00runnerrunner000000 000000 #include "../../obs-internal.h" bool obs_audio_monitoring_available(void) { return true; } obs-studio-32.1.0-sources/libobs/audio-monitoring/win32/000755 001751 001751 00000000000 15153330731 023735 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/audio-monitoring/win32/wasapi-enum-devices.c000644 001751 001751 00000007215 15153330235 027753 
#include "../../obs-internal.h"
#include "wasapi-output.h"

#include <propsys.h>

#ifdef __MINGW32__

#ifdef DEFINE_PROPERTYKEY
#undef DEFINE_PROPERTYKEY
#endif
#define DEFINE_PROPERTYKEY(id, a, b, c, d, e, f, g, h, i, j, k, l) \
	const PROPERTYKEY id = {{a, b, c, {d, e, f, g, h, i, j, k}}, l};

DEFINE_PROPERTYKEY(PKEY_Device_FriendlyName, 0xa45c254e, 0xdf1c, 0x4efd, 0x80, 0x20, 0x67, 0xd1, 0x46, 0xa8, 0x50, 0xe0, 14);

#else

#include <functiondiscoverykeys_devpkey.h>

#endif

static bool get_device_info(obs_enum_audio_device_cb cb, void *data, IMMDeviceCollection *collection, UINT idx)
{
	IPropertyStore *store = NULL;
	IMMDevice *device = NULL;
	PROPVARIANT name_var;
	char utf8_name[512];
	WCHAR *w_id = NULL;
	char utf8_id[512];
	bool cont = true;
	HRESULT hr;

	hr = collection->lpVtbl->Item(collection, idx, &device);
	if (FAILED(hr)) {
		goto fail;
	}

	hr = device->lpVtbl->GetId(device, &w_id);
	if (FAILED(hr)) {
		goto fail;
	}

	hr = device->lpVtbl->OpenPropertyStore(device, STGM_READ, &store);
	if (FAILED(hr)) {
		goto fail;
	}

	PropVariantInit(&name_var);
	hr = store->lpVtbl->GetValue(store, &PKEY_Device_FriendlyName, &name_var);
	if (FAILED(hr)) {
		goto fail;
	}

	os_wcs_to_utf8(w_id, 0, utf8_id, 512);
	os_wcs_to_utf8(name_var.pwszVal, 0, utf8_name, 512);

	cont = cb(data, utf8_name, utf8_id);
	PropVariantClear(&name_var);

fail:
	safe_release(store);
	safe_release(device);
	if (w_id)
		CoTaskMemFree(w_id);
	return cont;
}

void obs_enum_audio_monitoring_devices(obs_enum_audio_device_cb cb, void *data)
{
	IMMDeviceEnumerator *enumerator = NULL;
	IMMDeviceCollection *collection = NULL;
	UINT count;
	HRESULT hr;

	hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, &IID_IMMDeviceEnumerator, &enumerator);
	if (FAILED(hr)) {
		goto fail;
	}

	hr = enumerator->lpVtbl->EnumAudioEndpoints(enumerator, eRender, DEVICE_STATE_ACTIVE, &collection);
	if (FAILED(hr)) {
		goto fail;
	}

	hr = collection->lpVtbl->GetCount(collection, &count);
	if (FAILED(hr)) {
		goto fail;
	}

	for (UINT i = 0; i < count; i++) {
		if
(!get_device_info(cb, data, collection, i)) { break; } } fail: safe_release(enumerator); safe_release(collection); } static void get_default_id(char **p_id) { IMMDeviceEnumerator *immde = NULL; IMMDevice *device = NULL; WCHAR *w_id = NULL; HRESULT hr; if (*p_id) return; hr = CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, &IID_IMMDeviceEnumerator, &immde); if (FAILED(hr)) { goto fail; } hr = immde->lpVtbl->GetDefaultAudioEndpoint(immde, eRender, eConsole, &device); if (FAILED(hr)) { goto fail; } hr = device->lpVtbl->GetId(device, &w_id); if (FAILED(hr)) { goto fail; } os_wcs_to_utf8_ptr(w_id, 0, p_id); fail: if (!*p_id) *p_id = bzalloc(1); if (immde) immde->lpVtbl->Release(immde); if (device) device->lpVtbl->Release(device); if (w_id) CoTaskMemFree(w_id); } bool devices_match(const char *id1, const char *id2) { char *default_id = NULL; bool match; if (!id1 || !id2) return false; if (strcmp(id1, "default") == 0) { get_default_id(&default_id); id1 = default_id; } if (strcmp(id2, "default") == 0) { get_default_id(&default_id); id2 = default_id; } match = strcmp(id1, id2) == 0; bfree(default_id); return match; } obs-studio-32.1.0-sources/libobs/audio-monitoring/win32/wasapi-output.c000644 001751 001751 00000031423 15153330235 026725 0ustar00runnerrunner000000 000000 #include "../../media-io/audio-resampler.h" #include "../../util/deque.h" #include "../../util/platform.h" #include "../../util/darray.h" #include "../../util/util_uint64.h" #include "../../obs-internal.h" #include "wasapi-output.h" #define ACTUALLY_DEFINE_GUID(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8) \ EXTERN_C const GUID DECLSPEC_SELECTANY name = {l, w1, w2, {b1, b2, b3, b4, b5, b6, b7, b8}} #define do_log(level, format, ...) \ blog(level, "[audio monitoring: '%s'] " format, obs_source_get_name(monitor->source), ##__VA_ARGS__) #define warn(format, ...) do_log(LOG_WARNING, format, ##__VA_ARGS__) #define info(format, ...) 
do_log(LOG_INFO, format, ##__VA_ARGS__) #define debug(format, ...) do_log(LOG_DEBUG, format, ##__VA_ARGS__) ACTUALLY_DEFINE_GUID(CLSID_MMDeviceEnumerator, 0xBCDE0395, 0xE52F, 0x467C, 0x8E, 0x3D, 0xC4, 0x57, 0x92, 0x91, 0x69, 0x2E); ACTUALLY_DEFINE_GUID(IID_IMMDeviceEnumerator, 0xA95664D2, 0x9614, 0x4F35, 0xA7, 0x46, 0xDE, 0x8D, 0xB6, 0x36, 0x17, 0xE6); ACTUALLY_DEFINE_GUID(IID_IAudioClient, 0x1CB9AD4C, 0xDBFA, 0x4C32, 0xB1, 0x78, 0xC2, 0xF5, 0x68, 0xA7, 0x03, 0xB2); ACTUALLY_DEFINE_GUID(IID_IAudioRenderClient, 0xF294ACFC, 0x3146, 0x4483, 0xA7, 0xBF, 0xAD, 0xDC, 0xA7, 0xC2, 0x60, 0xE2); struct audio_monitor { obs_source_t *source; IAudioClient *client; IAudioRenderClient *render; uint64_t last_recv_time; uint64_t prev_video_ts; uint64_t time_since_prev; audio_resampler_t *resampler; uint32_t sample_rate; uint32_t channels; bool source_has_video; bool ignore; int64_t lowest_audio_offset; struct deque delay_buffer; uint32_t delay_size; DARRAY(float) buf; SRWLOCK playback_mutex; }; /* #define DEBUG_AUDIO */ static bool process_audio_delay(struct audio_monitor *monitor, float **data, uint32_t *frames, uint64_t ts, uint32_t pad) { obs_source_t *s = monitor->source; uint64_t last_frame_ts = s->last_frame_ts; uint64_t cur_time = os_gettime_ns(); uint64_t front_ts; uint64_t cur_ts; int64_t diff; uint32_t blocksize = monitor->channels * sizeof(float); /* cut off audio if long-since leftover audio in delay buffer */ if (cur_time - monitor->last_recv_time > 1000000000) deque_free(&monitor->delay_buffer); monitor->last_recv_time = cur_time; ts += monitor->source->sync_offset; deque_push_back(&monitor->delay_buffer, &ts, sizeof(ts)); deque_push_back(&monitor->delay_buffer, frames, sizeof(*frames)); deque_push_back(&monitor->delay_buffer, *data, *frames * blocksize); if (!monitor->prev_video_ts) { monitor->prev_video_ts = last_frame_ts; } else if (monitor->prev_video_ts == last_frame_ts) { monitor->time_since_prev += util_mul_div64(*frames, 1000000000ULL, monitor->sample_rate); } 
else { monitor->time_since_prev = 0; } while (monitor->delay_buffer.size != 0) { size_t size; bool bad_diff; deque_peek_front(&monitor->delay_buffer, &cur_ts, sizeof(ts)); front_ts = cur_ts - util_mul_div64(pad, 1000000000ULL, monitor->sample_rate); diff = (int64_t)front_ts - (int64_t)last_frame_ts; bad_diff = !last_frame_ts || llabs(diff) > 5000000000 || monitor->time_since_prev > 100000000ULL; /* delay audio if rushing */ if (!bad_diff && diff > 75000000) { #ifdef DEBUG_AUDIO blog(LOG_INFO, "audio rushing, cutting audio, " "diff: %lld, delay buffer size: %lu, " "v: %llu: a: %llu", diff, (int)monitor->delay_buffer.size, last_frame_ts, front_ts); #endif return false; } deque_pop_front(&monitor->delay_buffer, NULL, sizeof(ts)); deque_pop_front(&monitor->delay_buffer, frames, sizeof(*frames)); size = *frames * blocksize; da_resize(monitor->buf, size); deque_pop_front(&monitor->delay_buffer, monitor->buf.array, size); /* cut audio if dragging */ if (!bad_diff && diff < -75000000 && monitor->delay_buffer.size > 0) { #ifdef DEBUG_AUDIO blog(LOG_INFO, "audio dragging, cutting audio, " "diff: %lld, delay buffer size: %lu, " "v: %llu: a: %llu", diff, (int)monitor->delay_buffer.size, last_frame_ts, front_ts); #endif continue; } *data = monitor->buf.array; return true; } return false; } static enum speaker_layout convert_speaker_layout(DWORD layout, WORD channels) { switch (layout) { case KSAUDIO_SPEAKER_2POINT1: return SPEAKERS_2POINT1; case KSAUDIO_SPEAKER_SURROUND: return SPEAKERS_4POINT0; case KSAUDIO_SPEAKER_4POINT1: return SPEAKERS_4POINT1; case KSAUDIO_SPEAKER_5POINT1: return SPEAKERS_5POINT1; case KSAUDIO_SPEAKER_7POINT1: return SPEAKERS_7POINT1; } return (enum speaker_layout)channels; } static bool audio_monitor_init_wasapi(struct audio_monitor *monitor) { bool success = false; IMMDeviceEnumerator *immde = NULL; WAVEFORMATEX *wfex = NULL; UINT32 frames; HRESULT hr; /* ------------------------------------------ * * Init device */ hr = 
CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL, &IID_IMMDeviceEnumerator, (void **)&immde); if (FAILED(hr)) { warn("%s: Failed to create IMMDeviceEnumerator: %08lX", __FUNCTION__, hr); return false; } IMMDevice *device = NULL; const char *const id = obs->audio.monitoring_device_id; if (strcmp(id, "default") == 0) { hr = immde->lpVtbl->GetDefaultAudioEndpoint(immde, eRender, eConsole, &device); } else { wchar_t w_id[512]; os_utf8_to_wcs(id, 0, w_id, 512); hr = immde->lpVtbl->GetDevice(immde, w_id, &device); } if (FAILED(hr)) { warn("%s: Failed to get device: %08lX", __FUNCTION__, hr); goto fail; } /* ------------------------------------------ * * Init client */ hr = device->lpVtbl->Activate(device, &IID_IAudioClient, CLSCTX_ALL, NULL, (void **)&monitor->client); device->lpVtbl->Release(device); if (FAILED(hr)) { warn("%s: Failed to activate device: %08lX", __FUNCTION__, hr); goto fail; } hr = monitor->client->lpVtbl->GetMixFormat(monitor->client, &wfex); if (FAILED(hr)) { warn("%s: Failed to get mix format: %08lX", __FUNCTION__, hr); goto fail; } hr = monitor->client->lpVtbl->Initialize(monitor->client, AUDCLNT_SHAREMODE_SHARED, 0, 10000000, 0, wfex, NULL); if (FAILED(hr)) { warn("%s: Failed to initialize: %08lX", __FUNCTION__, hr); goto fail; } /* ------------------------------------------ * * Init resampler */ const struct audio_output_info *info = audio_output_get_info(obs->audio.audio); WAVEFORMATEXTENSIBLE *ext = (WAVEFORMATEXTENSIBLE *)wfex; struct resample_info from; struct resample_info to; from.samples_per_sec = info->samples_per_sec; from.speakers = info->speakers; from.format = AUDIO_FORMAT_FLOAT_PLANAR; to.samples_per_sec = (uint32_t)wfex->nSamplesPerSec; to.speakers = convert_speaker_layout(ext->dwChannelMask, wfex->nChannels); to.format = AUDIO_FORMAT_FLOAT; monitor->sample_rate = (uint32_t)wfex->nSamplesPerSec; monitor->channels = wfex->nChannels; monitor->resampler = audio_resampler_create(&to, &from); if (!monitor->resampler) { goto 
fail; } /* ------------------------------------------ * * Init client */ hr = monitor->client->lpVtbl->GetBufferSize(monitor->client, &frames); if (FAILED(hr)) { warn("%s: Failed to get buffer size: %08lX", __FUNCTION__, hr); goto fail; } hr = monitor->client->lpVtbl->GetService(monitor->client, &IID_IAudioRenderClient, (void **)&monitor->render); if (FAILED(hr)) { warn("%s: Failed to get IAudioRenderClient: %08lX", __FUNCTION__, hr); goto fail; } hr = monitor->client->lpVtbl->Start(monitor->client); if (FAILED(hr)) { warn("%s: Failed to start audio: %08lX", __FUNCTION__, hr); goto fail; } success = true; fail: safe_release(immde); if (wfex) CoTaskMemFree(wfex); return success; } static void audio_monitor_free_for_reconnect(struct audio_monitor *monitor) { if (monitor->client) monitor->client->lpVtbl->Stop(monitor->client); if (monitor->render) { monitor->render->lpVtbl->Release(monitor->render); monitor->render = NULL; } if (monitor->client) { monitor->client->lpVtbl->Stop(monitor->client); monitor->client->lpVtbl->Release(monitor->client); monitor->client = NULL; } audio_resampler_destroy(monitor->resampler); monitor->resampler = NULL; deque_free(&monitor->delay_buffer); da_free(monitor->buf); } static void on_audio_playback(void *param, obs_source_t *source, const struct audio_data *audio_data, bool muted) { struct audio_monitor *monitor = param; uint8_t *resample_data[MAX_AV_PLANES]; float vol = source->user_volume; uint32_t resample_frames; uint64_t ts_offset; bool success; BYTE *output; if (!TryAcquireSRWLockExclusive(&monitor->playback_mutex)) { return; } if (os_atomic_load_long(&source->activate_refs) == 0) { goto unlock; } if (!monitor->client && !audio_monitor_init_wasapi(monitor)) { goto free_for_reconnect; } success = audio_resampler_resample(monitor->resampler, resample_data, &resample_frames, &ts_offset, (const uint8_t *const *)audio_data->data, (uint32_t)audio_data->frames); if (!success) { goto unlock; } UINT32 pad = 0; HRESULT hr = 
monitor->client->lpVtbl->GetCurrentPadding(monitor->client, &pad); if (FAILED(hr)) { goto free_for_reconnect; } bool decouple_audio = source->async_unbuffered && source->async_decoupled; if (monitor->source_has_video && !decouple_audio) { uint64_t ts = audio_data->timestamp - ts_offset; if (!process_audio_delay(monitor, (float **)(&resample_data[0]), &resample_frames, ts, pad)) { goto unlock; } } IAudioRenderClient *const render = monitor->render; hr = render->lpVtbl->GetBuffer(render, resample_frames, &output); if (FAILED(hr)) { goto free_for_reconnect; } if (!muted) { /* apply volume */ if (!close_float(vol, 1.0f, EPSILON)) { register float *cur = (float *)resample_data[0]; register float *end = cur + resample_frames * monitor->channels; while (cur < end) *(cur++) *= vol; } memcpy(output, resample_data[0], resample_frames * monitor->channels * sizeof(float)); } hr = render->lpVtbl->ReleaseBuffer(render, resample_frames, muted ? AUDCLNT_BUFFERFLAGS_SILENT : 0); if (FAILED(hr)) { goto free_for_reconnect; } goto unlock; free_for_reconnect: audio_monitor_free_for_reconnect(monitor); unlock: ReleaseSRWLockExclusive(&monitor->playback_mutex); } static inline void audio_monitor_free(struct audio_monitor *monitor) { if (monitor->ignore) return; if (monitor->source) { obs_source_remove_audio_capture_callback(monitor->source, on_audio_playback, monitor); } if (monitor->client) monitor->client->lpVtbl->Stop(monitor->client); safe_release(monitor->client); safe_release(monitor->render); audio_resampler_destroy(monitor->resampler); deque_free(&monitor->delay_buffer); da_free(monitor->buf); } extern bool devices_match(const char *id1, const char *id2); static bool audio_monitor_init(struct audio_monitor *monitor, obs_source_t *source) { monitor->source = source; const char *id = obs->audio.monitoring_device_id; if (!id) { warn("%s: No device ID set", __FUNCTION__); return false; } if (source->info.output_flags & OBS_SOURCE_DO_NOT_SELF_MONITOR) { obs_data_t *s = 
obs_source_get_settings(source); const char *s_dev_id = obs_data_get_string(s, "device_id"); bool match = devices_match(s_dev_id, id); obs_data_release(s); if (match) { monitor->ignore = true; return true; } } InitializeSRWLock(&monitor->playback_mutex); return audio_monitor_init_wasapi(monitor); } static void audio_monitor_init_final(struct audio_monitor *monitor) { if (monitor->ignore) return; monitor->source_has_video = (monitor->source->info.output_flags & OBS_SOURCE_VIDEO) != 0; obs_source_add_audio_capture_callback(monitor->source, on_audio_playback, monitor); } struct audio_monitor *audio_monitor_create(obs_source_t *source) { struct audio_monitor monitor = {0}; struct audio_monitor *out; if (!audio_monitor_init(&monitor, source)) { goto fail; } out = bmemdup(&monitor, sizeof(monitor)); pthread_mutex_lock(&obs->audio.monitoring_mutex); da_push_back(obs->audio.monitors, &out); pthread_mutex_unlock(&obs->audio.monitoring_mutex); audio_monitor_init_final(out); return out; fail: audio_monitor_free(&monitor); return NULL; } void audio_monitor_reset(struct audio_monitor *monitor) { struct audio_monitor new_monitor = {0}; bool success; AcquireSRWLockExclusive(&monitor->playback_mutex); success = audio_monitor_init(&new_monitor, monitor->source); ReleaseSRWLockExclusive(&monitor->playback_mutex); if (success) { obs_source_t *source = monitor->source; audio_monitor_free(monitor); *monitor = new_monitor; audio_monitor_init_final(monitor); } else { audio_monitor_free(&new_monitor); } } void audio_monitor_destroy(struct audio_monitor *monitor) { if (monitor) { audio_monitor_free(monitor); pthread_mutex_lock(&obs->audio.monitoring_mutex); da_erase_item(obs->audio.monitors, &monitor); pthread_mutex_unlock(&obs->audio.monitoring_mutex); bfree(monitor); } } obs-studio-32.1.0-sources/libobs/audio-monitoring/win32/wasapi-output.h000644 001751 001751 00000001205 15153330235 026725 0ustar00runnerrunner000000 000000 #pragma once #include #include #include #ifndef 
KSAUDIO_SPEAKER_2POINT1
#define KSAUDIO_SPEAKER_2POINT1 (KSAUDIO_SPEAKER_STEREO | SPEAKER_LOW_FREQUENCY)
#endif

#define KSAUDIO_SPEAKER_SURROUND_AVUTIL (KSAUDIO_SPEAKER_STEREO | SPEAKER_FRONT_CENTER)

#ifndef KSAUDIO_SPEAKER_4POINT1
#define KSAUDIO_SPEAKER_4POINT1 (KSAUDIO_SPEAKER_SURROUND | SPEAKER_LOW_FREQUENCY)
#endif

#define safe_release(ptr)                          \
	do {                                       \
		if (ptr) {                         \
			ptr->lpVtbl->Release(ptr); \
		}                                  \
	} while (false)

obs-studio-32.1.0-sources/libobs/audio-monitoring/win32/wasapi-monitoring-available.c

#include "../../obs-internal.h"

bool obs_audio_monitoring_available(void)
{
	return true;
}

obs-studio-32.1.0-sources/libobs/audio-monitoring/pulse/pulseaudio-wrapper.c

/*
Copyright (C) 2014 by Leonhard Oelke
Copyright (C) 2017 by Fabio Madia

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/ #include #include #include #include #include "pulseaudio-wrapper.h" /* global data */ static uint_fast32_t pulseaudio_refs = 0; static pthread_mutex_t pulseaudio_mutex = PTHREAD_MUTEX_INITIALIZER; static pa_threaded_mainloop *pulseaudio_mainloop = NULL; static pa_context *pulseaudio_context = NULL; static void pulseaudio_default_devices(pa_context *c, const pa_server_info *i, void *userdata) { UNUSED_PARAMETER(c); struct pulseaudio_default_output *d = (struct pulseaudio_default_output *)userdata; d->default_sink_name = bstrdup(i->default_sink_name); pulseaudio_signal(0); } void get_default_id(char **id) { pulseaudio_init(); struct pulseaudio_default_output *pdo = bzalloc(sizeof(struct pulseaudio_default_output)); pulseaudio_get_server_info((pa_server_info_cb_t)pulseaudio_default_devices, (void *)pdo); if (!pdo->default_sink_name || !*pdo->default_sink_name) { *id = bzalloc(1); } else { *id = bzalloc(strlen(pdo->default_sink_name) + 9); strcat(*id, pdo->default_sink_name); bfree(pdo->default_sink_name); } bfree(pdo); pulseaudio_unref(); } /** * Checks whether a sound source (id1) is the .monitor device for the * selected monitoring output (id2). 
*/ bool devices_match(const char *id1, const char *id2) { bool match; char *name_default = NULL; char *name1 = NULL; char *name2 = NULL; if (!id1 || !id2) return false; if (strcmp(id1, "default") == 0) { get_default_id(&name_default); name1 = bzalloc(strlen(name_default) + 9); strcat(name1, name_default); strcat(name1, ".monitor"); } else { name1 = bstrdup(id1); } if (strcmp(id2, "default") == 0) { if (!name_default) get_default_id(&name_default); name2 = bzalloc(strlen(name_default) + 9); strcat(name2, name_default); strcat(name2, ".monitor"); } else { name2 = bzalloc(strlen(id2) + 9); strcat(name2, id2); strcat(name2, ".monitor"); } match = strcmp(name1, name2) == 0; bfree(name_default); bfree(name1); bfree(name2); return match; } /** * context status change callback * * @todo this is currently a noop, we want to reconnect here if the connection * is lost ... */ static void pulseaudio_context_state_changed(pa_context *c, void *userdata) { UNUSED_PARAMETER(userdata); UNUSED_PARAMETER(c); pulseaudio_signal(0); } /** * get the default properties */ static pa_proplist *pulseaudio_properties() { pa_proplist *p = pa_proplist_new(); pa_proplist_sets(p, PA_PROP_APPLICATION_NAME, "OBS"); pa_proplist_sets(p, PA_PROP_APPLICATION_ICON_NAME, "obs"); pa_proplist_sets(p, PA_PROP_MEDIA_ROLE, "production"); return p; } /** * Initialize the pulse audio context with properties and callback */ static void pulseaudio_init_context() { pulseaudio_lock(); pa_proplist *p = pulseaudio_properties(); pulseaudio_context = pa_context_new_with_proplist(pa_threaded_mainloop_get_api(pulseaudio_mainloop), "OBS-Monitor", p); pa_context_set_state_callback(pulseaudio_context, pulseaudio_context_state_changed, NULL); pa_context_connect(pulseaudio_context, NULL, PA_CONTEXT_NOAUTOSPAWN, NULL); pa_proplist_free(p); pulseaudio_unlock(); } /** * wait for context to be ready */ static int_fast32_t pulseaudio_context_ready() { pulseaudio_lock(); if 
(!PA_CONTEXT_IS_GOOD(pa_context_get_state(pulseaudio_context))) { pulseaudio_unlock(); return -1; } while (pa_context_get_state(pulseaudio_context) != PA_CONTEXT_READY) pulseaudio_wait(); pulseaudio_unlock(); return 0; } int_fast32_t pulseaudio_init() { pthread_mutex_lock(&pulseaudio_mutex); if (pulseaudio_refs == 0) { pulseaudio_mainloop = pa_threaded_mainloop_new(); pa_threaded_mainloop_start(pulseaudio_mainloop); pulseaudio_init_context(); } pulseaudio_refs++; pthread_mutex_unlock(&pulseaudio_mutex); return 0; } void pulseaudio_unref() { pthread_mutex_lock(&pulseaudio_mutex); if (--pulseaudio_refs == 0) { pulseaudio_lock(); if (pulseaudio_context != NULL) { pa_context_disconnect(pulseaudio_context); pa_context_unref(pulseaudio_context); pulseaudio_context = NULL; } pulseaudio_unlock(); if (pulseaudio_mainloop != NULL) { pa_threaded_mainloop_stop(pulseaudio_mainloop); pa_threaded_mainloop_free(pulseaudio_mainloop); pulseaudio_mainloop = NULL; } } pthread_mutex_unlock(&pulseaudio_mutex); } void pulseaudio_lock() { pa_threaded_mainloop_lock(pulseaudio_mainloop); } void pulseaudio_unlock() { pa_threaded_mainloop_unlock(pulseaudio_mainloop); } void pulseaudio_wait() { pa_threaded_mainloop_wait(pulseaudio_mainloop); } void pulseaudio_signal(int wait_for_accept) { pa_threaded_mainloop_signal(pulseaudio_mainloop, wait_for_accept); } void pulseaudio_accept() { pa_threaded_mainloop_accept(pulseaudio_mainloop); } int_fast32_t pulseaudio_get_source_info_list(pa_source_info_cb_t cb, void *userdata) { if (pulseaudio_context_ready() < 0) return -1; pulseaudio_lock(); pa_operation *op = pa_context_get_source_info_list(pulseaudio_context, cb, userdata); if (!op) { pulseaudio_unlock(); return -1; } while (pa_operation_get_state(op) == PA_OPERATION_RUNNING) pulseaudio_wait(); pa_operation_unref(op); pulseaudio_unlock(); return 0; } int_fast32_t pulseaudio_get_source_info(pa_source_info_cb_t cb, const char *name, void *userdata) { if (pulseaudio_context_ready() < 0) return -1; 
pulseaudio_lock(); pa_operation *op = pa_context_get_source_info_by_name(pulseaudio_context, name, cb, userdata); if (!op) { pulseaudio_unlock(); return -1; } while (pa_operation_get_state(op) == PA_OPERATION_RUNNING) pulseaudio_wait(); pa_operation_unref(op); pulseaudio_unlock(); return 0; } int_fast32_t pulseaudio_get_sink_info_list(pa_sink_info_cb_t cb, void *userdata) { if (pulseaudio_context_ready() < 0) return -1; pulseaudio_lock(); pa_operation *op = pa_context_get_sink_info_list(pulseaudio_context, cb, userdata); if (!op) { pulseaudio_unlock(); return -1; } while (pa_operation_get_state(op) == PA_OPERATION_RUNNING) pulseaudio_wait(); pa_operation_unref(op); pulseaudio_unlock(); return 0; } int_fast32_t pulseaudio_get_sink_info(pa_sink_info_cb_t cb, const char *name, void *userdata) { if (pulseaudio_context_ready() < 0) return -1; pulseaudio_lock(); pa_operation *op = pa_context_get_sink_info_by_name(pulseaudio_context, name, cb, userdata); if (!op) { pulseaudio_unlock(); return -1; } while (pa_operation_get_state(op) == PA_OPERATION_RUNNING) pulseaudio_wait(); pa_operation_unref(op); pulseaudio_unlock(); return 0; } int_fast32_t pulseaudio_get_server_info(pa_server_info_cb_t cb, void *userdata) { if (pulseaudio_context_ready() < 0) return -1; pulseaudio_lock(); pa_operation *op = pa_context_get_server_info(pulseaudio_context, cb, userdata); if (!op) { pulseaudio_unlock(); return -1; } while (pa_operation_get_state(op) == PA_OPERATION_RUNNING) pulseaudio_wait(); pa_operation_unref(op); pulseaudio_unlock(); return 0; } pa_stream *pulseaudio_stream_new(const char *name, const pa_sample_spec *ss, const pa_channel_map *map) { if (pulseaudio_context_ready() < 0) return NULL; pulseaudio_lock(); pa_proplist *p = pulseaudio_properties(); pa_stream *s = pa_stream_new_with_proplist(pulseaudio_context, name, ss, map, p); pa_proplist_free(p); pulseaudio_unlock(); return s; } int_fast32_t pulseaudio_connect_playback(pa_stream *s, const char *name, const pa_buffer_attr 
*attr, pa_stream_flags_t flags) { if (pulseaudio_context_ready() < 0) return -1; size_t dev_len = strlen(name); char *device = bzalloc(dev_len + 1); memcpy(device, name, dev_len); pulseaudio_lock(); int_fast32_t ret = pa_stream_connect_playback(s, device, attr, flags, NULL, NULL); pulseaudio_unlock(); bfree(device); return ret; } void pulseaudio_write_callback(pa_stream *p, pa_stream_request_cb_t cb, void *userdata) { if (pulseaudio_context_ready() < 0) return; pulseaudio_lock(); pa_stream_set_write_callback(p, cb, userdata); pulseaudio_unlock(); } void pulseaudio_set_underflow_callback(pa_stream *p, pa_stream_notify_cb_t cb, void *userdata) { if (pulseaudio_context_ready() < 0) return; pulseaudio_lock(); pa_stream_set_underflow_callback(p, cb, userdata); pulseaudio_unlock(); } obs-studio-32.1.0-sources/libobs/audio-monitoring/pulse/pulseaudio-enum-devices.c000644 001751 001751 00000001316 15153330235 031023 0ustar00runnerrunner000000 000000 #include #include "pulseaudio-wrapper.h" static void pulseaudio_output_info(pa_context *c, const pa_sink_info *i, int eol, void *userdata) { UNUSED_PARAMETER(c); if (eol != 0) goto skip; struct enum_cb *ecb = (struct enum_cb *)userdata; if (ecb->cont) ecb->cont = ecb->cb(ecb->data, i->description, i->name); skip: pulseaudio_signal(0); } void obs_enum_audio_monitoring_devices(obs_enum_audio_device_cb cb, void *data) { struct enum_cb *ecb = bzalloc(sizeof(struct enum_cb)); ecb->cb = cb; ecb->data = data; ecb->cont = 1; pulseaudio_init(); pa_sink_info_cb_t pa_cb = pulseaudio_output_info; pulseaudio_get_sink_info_list(pa_cb, (void *)ecb); pulseaudio_unref(); bfree(ecb); } obs-studio-32.1.0-sources/libobs/audio-monitoring/pulse/pulseaudio-monitoring-available.c000644 001751 001751 00000000127 15153330235 032541 0ustar00runnerrunner000000 000000 #include bool obs_audio_monitoring_available(void) { return true; } obs-studio-32.1.0-sources/libobs/audio-monitoring/pulse/pulseaudio-wrapper.h000644 001751 001751 00000014024 15153330235 
030124 0ustar00runnerrunner000000 000000 /* Copyright (C) 2014 by Leonhard Oelke Copyright (C) 2017 by Fabio Madia This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include #include #include #pragma once struct pulseaudio_default_output { char *default_sink_name; }; struct enum_cb { obs_enum_audio_device_cb cb; void *data; int cont; }; void get_default_id(char **id); bool devices_match(const char *id1, const char *id2); /** * Initialize the pulseaudio mainloop and increase the reference count */ int_fast32_t pulseaudio_init(); /** * Unreference the pulseaudio mainloop, when the reference count reaches * zero the mainloop will automatically be destroyed */ void pulseaudio_unref(); /** * Lock the mainloop * * In order to allow for multiple threads to use the same mainloop pulseaudio * provides it's own locking mechanism. This function should be called before * using any pulseaudio function that is in any way related to the mainloop or * context. * * @note use of this function may cause deadlocks * * @warning do not use with pulseaudio_ wrapper functions */ void pulseaudio_lock(); /** * Unlock the mainloop * * @see pulseaudio_lock() */ void pulseaudio_unlock(); /** * Wait for events to happen * * This function should be called when waiting for an event to happen. 
*/ void pulseaudio_wait(); /** * Wait for accept signal from calling thread * * This function tells the pulseaudio mainloop whether the data provided to * the callback should be retained until the calling thread executes * pulseaudio_accept() * * If wait_for_accept is 0 the function returns and the data is freed. */ void pulseaudio_signal(int wait_for_accept); /** * Signal the waiting callback to return * * This function is used in conjunction with pulseaudio_signal() */ void pulseaudio_accept(); /** * Request source information * * The function will block until the operation was executed and the mainloop * called the provided callback function. * * @return negative on error * * @note The function will block until the server context is ready. * * @warning call without active locks */ int_fast32_t pulseaudio_get_source_info_list(pa_source_info_cb_t cb, void *userdata); /** * Request source information from a specific source * * The function will block until the operation was executed and the mainloop * called the provided callback function. * * @param cb pointer to the callback function * @param name the source name to get information for * @param userdata pointer to userdata the callback will be called with * * @return negative on error * * @note The function will block until the server context is ready. * * @warning call without active locks */ int_fast32_t pulseaudio_get_source_info(pa_source_info_cb_t cb, const char *name, void *userdata); /** * Request sink information * * The function will block until the operation was executed and the mainloop * called the provided callback function. * * @return negative on error * * @note The function will block until the server context is ready. 
* * @warning call without active locks */ int_fast32_t pulseaudio_get_sink_info_list(pa_sink_info_cb_t cb, void *userdata); /** * Request sink information from a specific sink * * The function will block until the operation was executed and the mainloop * called the provided callback function. * * @param cb pointer to the callback function * @param name the sink name to get information for * @param userdata pointer to userdata the callback will be called with * * @return negative on error * * @note The function will block until the server context is ready. * * @warning call without active locks */ int_fast32_t pulseaudio_get_sink_info(pa_sink_info_cb_t cb, const char *name, void *userdata); /** * Request server information * * The function will block until the operation was executed and the mainloop * called the provided callback function. * * @return negative on error * * @note The function will block until the server context is ready. * * @warning call without active locks */ int_fast32_t pulseaudio_get_server_info(pa_server_info_cb_t cb, void *userdata); /** * Create a new stream with the default properties * * @note The function will block until the server context is ready. * * @warning call without active locks */ pa_stream *pulseaudio_stream_new(const char *name, const pa_sample_spec *ss, const pa_channel_map *map); /** * Connect to a pulseaudio playback stream * * @param s pa_stream to connect to. NULL for default * @param attr pa_buffer_attr * @param name Device name. NULL for default device * @param flags pa_stream_flags_t * @return negative on error */ int_fast32_t pulseaudio_connect_playback(pa_stream *s, const char *name, const pa_buffer_attr *attr, pa_stream_flags_t flags); /** * Sets a callback function for when data can be written to the stream * * @param p pa_stream to connect to. 
 NULL for default * @param cb pa_stream_request_cb_t * @param userdata pointer to userdata the callback will be called with */ void pulseaudio_write_callback(pa_stream *p, pa_stream_request_cb_t cb, void *userdata); /** * Sets a callback function for when an underflow happens * * @param p pa_stream to connect to. NULL for default * @param cb pa_stream_notify_cb_t * @param userdata pointer to userdata the callback will be called with */ void pulseaudio_set_underflow_callback(pa_stream *p, pa_stream_notify_cb_t cb, void *userdata); /* ======== obs-studio-32.1.0-sources/libobs/audio-monitoring/pulse/pulseaudio-output.c ======== */ #include "obs-internal.h" #include "pulseaudio-wrapper.h" #define PULSE_DATA(voidptr) struct audio_monitor *data = voidptr; #define blog(level, msg, ...) blog(level, "pulse-am: " msg, ##__VA_ARGS__) struct audio_monitor { obs_source_t *source; pa_stream *stream; char *device; pa_buffer_attr attr; enum speaker_layout speakers; pa_sample_format_t format; uint_fast32_t samples_per_sec; uint_fast32_t bytes_per_frame; uint_fast8_t channels; uint_fast32_t packets; uint_fast64_t frames; struct deque new_data; audio_resampler_t *resampler; bool ignore; pthread_mutex_t playback_mutex; }; static enum speaker_layout pulseaudio_channels_to_obs_speakers(uint_fast32_t channels) { switch (channels) { case 0: return SPEAKERS_UNKNOWN; case 1: return SPEAKERS_MONO; case 2: return SPEAKERS_STEREO; case 3: return SPEAKERS_2POINT1; case 4: return SPEAKERS_4POINT0; case 5: return SPEAKERS_4POINT1; case 6: return SPEAKERS_5POINT1; case 8: return SPEAKERS_7POINT1; default: return SPEAKERS_UNKNOWN; } } static enum audio_format pulseaudio_to_obs_audio_format(pa_sample_format_t format) { switch (format) { case PA_SAMPLE_U8: return AUDIO_FORMAT_U8BIT; case PA_SAMPLE_S16LE: return AUDIO_FORMAT_16BIT; case PA_SAMPLE_S32LE: return AUDIO_FORMAT_32BIT; case PA_SAMPLE_FLOAT32LE: return AUDIO_FORMAT_FLOAT; default: 
return AUDIO_FORMAT_UNKNOWN; } } static pa_channel_map pulseaudio_channel_map(enum speaker_layout layout) { pa_channel_map ret; ret.map[0] = PA_CHANNEL_POSITION_FRONT_LEFT; ret.map[1] = PA_CHANNEL_POSITION_FRONT_RIGHT; ret.map[2] = PA_CHANNEL_POSITION_FRONT_CENTER; ret.map[3] = PA_CHANNEL_POSITION_LFE; ret.map[4] = PA_CHANNEL_POSITION_REAR_LEFT; ret.map[5] = PA_CHANNEL_POSITION_REAR_RIGHT; ret.map[6] = PA_CHANNEL_POSITION_SIDE_LEFT; ret.map[7] = PA_CHANNEL_POSITION_SIDE_RIGHT; switch (layout) { case SPEAKERS_MONO: ret.channels = 1; ret.map[0] = PA_CHANNEL_POSITION_MONO; break; case SPEAKERS_STEREO: ret.channels = 2; break; case SPEAKERS_2POINT1: ret.channels = 3; ret.map[2] = PA_CHANNEL_POSITION_LFE; break; case SPEAKERS_4POINT0: ret.channels = 4; ret.map[3] = PA_CHANNEL_POSITION_REAR_CENTER; break; case SPEAKERS_4POINT1: ret.channels = 5; ret.map[4] = PA_CHANNEL_POSITION_REAR_CENTER; break; case SPEAKERS_5POINT1: ret.channels = 6; break; case SPEAKERS_7POINT1: ret.channels = 8; break; case SPEAKERS_UNKNOWN: default: ret.channels = 0; break; } return ret; } static void process_byte(void *p, size_t frames, size_t channels, float vol) { register uint8_t *cur = (uint8_t *)p; register uint8_t *end = cur + frames * channels; for (; cur < end; cur++) *cur = ((int)*cur - 128) * vol + 128; } static void process_s16(void *p, size_t frames, size_t channels, float vol) { register int16_t *cur = (int16_t *)p; register int16_t *end = cur + frames * channels; while (cur < end) *(cur++) *= vol; } static void process_s32(void *p, size_t frames, size_t channels, float vol) { register int32_t *cur = (int32_t *)p; register int32_t *end = cur + frames * channels; while (cur < end) *(cur++) *= vol; } static void process_float(void *p, size_t frames, size_t channels, float vol) { register float *cur = (float *)p; register float *end = cur + frames * channels; while (cur < end) *(cur++) *= vol; } void process_volume(const struct audio_monitor *monitor, float vol, uint8_t *const 
*resample_data, uint32_t resample_frames) { switch (monitor->format) { case PA_SAMPLE_U8: process_byte(resample_data[0], resample_frames, monitor->channels, vol); break; case PA_SAMPLE_S16LE: process_s16(resample_data[0], resample_frames, monitor->channels, vol); break; case PA_SAMPLE_S32LE: process_s32(resample_data[0], resample_frames, monitor->channels, vol); break; case PA_SAMPLE_FLOAT32LE: process_float(resample_data[0], resample_frames, monitor->channels, vol); break; default: // just ignore break; } } static void do_stream_write(void *param) { PULSE_DATA(param); uint8_t *buffer = NULL; pulseaudio_lock(); pthread_mutex_lock(&data->playback_mutex); // If we have grown a large buffer internally, grow the pulse buffer to match so we can write our data out. if (data->new_data.size > data->attr.tlength * 2) { data->attr.fragsize = (uint32_t)-1; data->attr.maxlength = (uint32_t)-1; data->attr.prebuf = (uint32_t)-1; data->attr.minreq = (uint32_t)-1; data->attr.tlength = data->new_data.size; pa_stream_set_buffer_attr(data->stream, &data->attr, NULL, NULL); } // Buffer up enough data before we start playing. if (pa_stream_is_corked(data->stream)) { if (data->new_data.size >= data->attr.tlength) { pa_stream_cork(data->stream, 0, NULL, NULL); } else { goto finish; } } while (data->new_data.size > 0) { size_t bytesToFill = data->new_data.size; if (pa_stream_begin_write(data->stream, (void **)&buffer, &bytesToFill)) goto finish; // PA may request we submit more or less data than we have. // Wait for more data if we cannot perform a full write. 
if (bytesToFill > data->new_data.size) { pa_stream_cancel_write(data->stream); goto finish; } deque_pop_front(&data->new_data, buffer, bytesToFill); pa_stream_write(data->stream, buffer, bytesToFill, NULL, 0LL, PA_SEEK_RELATIVE); } finish: pthread_mutex_unlock(&data->playback_mutex); pulseaudio_unlock(); } static void on_audio_playback(void *param, obs_source_t *source, const struct audio_data *audio_data, bool muted) { struct audio_monitor *monitor = param; float vol = source->user_volume; size_t bytes; uint8_t *resample_data[MAX_AV_PLANES]; uint32_t resample_frames; uint64_t ts_offset; bool success; if (pthread_mutex_trylock(&monitor->playback_mutex) != 0) return; if (os_atomic_load_long(&source->activate_refs) == 0) goto unlock; success = audio_resampler_resample(monitor->resampler, resample_data, &resample_frames, &ts_offset, (const uint8_t *const *)audio_data->data, (uint32_t)audio_data->frames); if (!success) goto unlock; bytes = monitor->bytes_per_frame * resample_frames; if (muted) { memset(resample_data[0], 0, bytes); } else { if (!close_float(vol, 1.0f, EPSILON)) { process_volume(monitor, vol, resample_data, resample_frames); } } deque_push_back(&monitor->new_data, resample_data[0], bytes); monitor->packets++; monitor->frames += resample_frames; unlock: pthread_mutex_unlock(&monitor->playback_mutex); do_stream_write(param); } static void pulseaudio_server_info(pa_context *c, const pa_server_info *i, void *userdata) { UNUSED_PARAMETER(c); UNUSED_PARAMETER(userdata); blog(LOG_INFO, "Server name: '%s %s'", i->server_name, i->server_version); pulseaudio_signal(0); } static void pulseaudio_sink_info(pa_context *c, const pa_sink_info *i, int eol, void *userdata) { UNUSED_PARAMETER(c); PULSE_DATA(userdata); // An error occurred if (eol < 0) { data->format = PA_SAMPLE_INVALID; goto skip; } // Terminating call for multi instance callbacks if (eol > 0) goto skip; blog(LOG_INFO, "Audio format: %s, %" PRIu32 " Hz, %" PRIu8 " channels", 
pa_sample_format_to_string(i->sample_spec.format), i->sample_spec.rate, i->sample_spec.channels); pa_sample_format_t format = i->sample_spec.format; if (pulseaudio_to_obs_audio_format(format) == AUDIO_FORMAT_UNKNOWN) { format = PA_SAMPLE_FLOAT32LE; blog(LOG_INFO, "Sample format %s not supported by OBS, " "using %s instead for recording", pa_sample_format_to_string(i->sample_spec.format), pa_sample_format_to_string(format)); } uint8_t channels = i->sample_spec.channels; if (pulseaudio_channels_to_obs_speakers(channels) == SPEAKERS_UNKNOWN) { channels = 2; blog(LOG_INFO, "%" PRIu8 " channels not supported by OBS, " "using %" PRIu8 " instead for recording", i->sample_spec.channels, channels); } data->format = format; data->samples_per_sec = i->sample_spec.rate; data->channels = channels; skip: pulseaudio_signal(0); } static void pulseaudio_stop_playback(struct audio_monitor *monitor) { if (monitor->stream) { /* Stop the stream */ pulseaudio_lock(); pa_stream_disconnect(monitor->stream); pulseaudio_unlock(); /* Remove the callbacks, to ensure we no longer try to do anything * with this stream object */ pulseaudio_write_callback(monitor->stream, NULL, NULL); /* Unreference the stream and drop it. PA will free it when it can. 
 */ pulseaudio_lock(); pa_stream_unref(monitor->stream); pulseaudio_unlock(); monitor->stream = NULL; } blog(LOG_INFO, "Stopped Monitoring in '%s'", monitor->device); blog(LOG_INFO, "Got %" PRIuFAST32 " packets with %" PRIuFAST64 " frames", monitor->packets, monitor->frames); monitor->packets = 0; monitor->frames = 0; } static bool audio_monitor_init(struct audio_monitor *monitor, obs_source_t *source) { pthread_mutex_init_value(&monitor->playback_mutex); monitor->source = source; const char *id = obs->audio.monitoring_device_id; if (!id) return false; if (source->info.output_flags & OBS_SOURCE_DO_NOT_SELF_MONITOR) { obs_data_t *s = obs_source_get_settings(source); const char *s_dev_id = obs_data_get_string(s, "device_id"); bool match = devices_match(s_dev_id, id); obs_data_release(s); if (match) { monitor->ignore = true; blog(LOG_INFO, "Prevented feedback-loop in '%s'", s_dev_id); return true; } } pulseaudio_init(); if (strcmp(id, "default") == 0) get_default_id(&monitor->device); else monitor->device = bstrdup(id); if (!monitor->device) return false; if (pulseaudio_get_server_info(pulseaudio_server_info, (void *)monitor) < 0) { blog(LOG_ERROR, "Unable to get server info!"); return false; } if (pulseaudio_get_sink_info(pulseaudio_sink_info, monitor->device, (void *)monitor) < 0) { blog(LOG_ERROR, "Unable to get sink info!"); return false; } if (monitor->format == PA_SAMPLE_INVALID) { blog(LOG_ERROR, "An error occurred while getting the sink info!"); return false; } pa_sample_spec spec; spec.format = monitor->format; spec.rate = (uint32_t)monitor->samples_per_sec; spec.channels = monitor->channels; if (!pa_sample_spec_valid(&spec)) { blog(LOG_ERROR, "Sample spec is not valid"); return false; } const struct audio_output_info *info = audio_output_get_info(obs->audio.audio); struct resample_info from = {.samples_per_sec = info->samples_per_sec, .speakers = info->speakers, .format = AUDIO_FORMAT_FLOAT_PLANAR}; struct resample_info to = {.samples_per_sec = 
(uint32_t)monitor->samples_per_sec, .speakers = pulseaudio_channels_to_obs_speakers(monitor->channels), .format = pulseaudio_to_obs_audio_format(monitor->format)}; monitor->resampler = audio_resampler_create(&to, &from); if (!monitor->resampler) { blog(LOG_WARNING, "%s: %s", __FUNCTION__, "Failed to create resampler"); return false; } monitor->speakers = pulseaudio_channels_to_obs_speakers(spec.channels); monitor->bytes_per_frame = pa_frame_size(&spec); pa_channel_map channel_map = pulseaudio_channel_map(monitor->speakers); monitor->stream = pulseaudio_stream_new(obs_source_get_name(monitor->source), &spec, &channel_map); if (!monitor->stream) { blog(LOG_ERROR, "Unable to create stream"); return false; } monitor->attr.fragsize = (uint32_t)-1; monitor->attr.maxlength = (uint32_t)-1; monitor->attr.minreq = (uint32_t)-1; monitor->attr.prebuf = (uint32_t)-1; monitor->attr.tlength = pa_usec_to_bytes(25000, &spec); pa_stream_flags_t flags = PA_STREAM_INTERPOLATE_TIMING | PA_STREAM_AUTO_TIMING_UPDATE | PA_STREAM_START_CORKED; int_fast32_t ret = pulseaudio_connect_playback(monitor->stream, monitor->device, &monitor->attr, flags); if (ret < 0) { pulseaudio_stop_playback(monitor); blog(LOG_ERROR, "Unable to connect to stream"); return false; } blog(LOG_INFO, "Started Monitoring in '%s'", monitor->device); return true; } static void audio_monitor_init_final(struct audio_monitor *monitor) { if (monitor->ignore) return; obs_source_add_audio_capture_callback(monitor->source, on_audio_playback, monitor); } static inline void audio_monitor_free(struct audio_monitor *monitor) { if (monitor->ignore) return; if (monitor->source) obs_source_remove_audio_capture_callback(monitor->source, on_audio_playback, monitor); audio_resampler_destroy(monitor->resampler); deque_free(&monitor->new_data); if (monitor->stream) pulseaudio_stop_playback(monitor); pulseaudio_unref(); bfree(monitor->device); } struct audio_monitor *audio_monitor_create(obs_source_t *source) { struct audio_monitor monitor 
= {0}; struct audio_monitor *out; if (!audio_monitor_init(&monitor, source)) goto fail; out = bmemdup(&monitor, sizeof(monitor)); pthread_mutex_lock(&obs->audio.monitoring_mutex); da_push_back(obs->audio.monitors, &out); pthread_mutex_unlock(&obs->audio.monitoring_mutex); audio_monitor_init_final(out); return out; fail: audio_monitor_free(&monitor); return NULL; } void audio_monitor_reset(struct audio_monitor *monitor) { struct audio_monitor new_monitor = {0}; bool success; audio_monitor_free(monitor); pthread_mutex_lock(&monitor->playback_mutex); success = audio_monitor_init(&new_monitor, monitor->source); pthread_mutex_unlock(&monitor->playback_mutex); if (success) { *monitor = new_monitor; audio_monitor_init_final(monitor); } else { audio_monitor_free(&new_monitor); } } void audio_monitor_destroy(struct audio_monitor *monitor) { if (monitor) { audio_monitor_free(monitor); pthread_mutex_lock(&obs->audio.monitoring_mutex); da_erase_item(obs->audio.monitors, &monitor); pthread_mutex_unlock(&obs->audio.monitoring_mutex); bfree(monitor); } } /* ======== obs-studio-32.1.0-sources/libobs/obs.hpp ======== */ /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. 
******************************************************************************/ /* Useful C++ classes and bindings for base obs data */ #pragma once #include "obs.h" /* RAII wrappers */ template class OBSRefAutoRelease; template class OBSRef; template class OBSSafeRef; using OBSObject = OBSSafeRef; using OBSSource = OBSSafeRef; using OBSScene = OBSSafeRef; using OBSSceneItem = OBSRef; using OBSData = OBSRef; using OBSDataArray = OBSRef; using OBSOutput = OBSSafeRef; using OBSEncoder = OBSSafeRef; using OBSService = OBSSafeRef; using OBSCanvas = OBSSafeRef; using OBSWeakObject = OBSRef; using OBSWeakSource = OBSRef; using OBSWeakOutput = OBSRef; using OBSWeakEncoder = OBSRef; using OBSWeakService = OBSRef; using OBSWeakCanvas = OBSRef; #define OBS_AUTORELEASE using OBSObjectAutoRelease = OBSRefAutoRelease; using OBSSourceAutoRelease = OBSRefAutoRelease; using OBSSceneAutoRelease = OBSRefAutoRelease; using OBSSceneItemAutoRelease = OBSRefAutoRelease; using OBSDataAutoRelease = OBSRefAutoRelease; using OBSDataArrayAutoRelease = OBSRefAutoRelease; using OBSOutputAutoRelease = OBSRefAutoRelease; using OBSEncoderAutoRelease = OBSRefAutoRelease; using OBSServiceAutoRelease = OBSRefAutoRelease; using OBSCanvasAutoRelease = OBSRefAutoRelease; using OBSWeakObjectAutoRelease = OBSRefAutoRelease; using OBSWeakSourceAutoRelease = OBSRefAutoRelease; using OBSWeakOutputAutoRelease = OBSRefAutoRelease; using OBSWeakEncoderAutoRelease = OBSRefAutoRelease; using OBSWeakServiceAutoRelease = OBSRefAutoRelease; using OBSWeakCanvasAutoRelease = OBSRefAutoRelease; template class OBSRefAutoRelease { protected: T val; public: inline OBSRefAutoRelease() : val(nullptr) {} inline OBSRefAutoRelease(T val_) : val(val_) {} OBSRefAutoRelease(const OBSRefAutoRelease &ref) = delete; inline OBSRefAutoRelease(OBSRefAutoRelease &&ref) : val(ref.val) { ref.val = nullptr; } inline ~OBSRefAutoRelease() { release(val); } inline operator T() const { return val; } inline T Get() const { return val; } inline 
bool operator==(T p) const { return val == p; } inline bool operator!=(T p) const { return val != p; } inline OBSRefAutoRelease &operator=(OBSRefAutoRelease &&ref) { if (this != &ref) { release(val); val = ref.val; ref.val = nullptr; } return *this; } inline OBSRefAutoRelease &operator=(T new_val) { release(val); val = new_val; return *this; } }; template class OBSRef : public OBSRefAutoRelease { inline OBSRef &Replace(T valIn) { addref(valIn); release(this->val); this->val = valIn; return *this; } struct TakeOwnership {}; inline OBSRef(T val_, TakeOwnership) : OBSRefAutoRelease::OBSRefAutoRelease(val_) {} public: inline OBSRef() : OBSRefAutoRelease::OBSRefAutoRelease(nullptr) {} inline OBSRef(const OBSRef &ref) : OBSRefAutoRelease::OBSRefAutoRelease(ref.val) { addref(this->val); } inline OBSRef(T val_) : OBSRefAutoRelease::OBSRefAutoRelease(val_) { addref(this->val); } inline OBSRef &operator=(const OBSRef &ref) { return Replace(ref.val); } inline OBSRef &operator=(T valIn) { return Replace(valIn); } friend OBSWeakObject OBSGetWeakRef(obs_object_t *object); friend OBSWeakSource OBSGetWeakRef(obs_source_t *source); friend OBSWeakOutput OBSGetWeakRef(obs_output_t *output); friend OBSWeakEncoder OBSGetWeakRef(obs_encoder_t *encoder); friend OBSWeakService OBSGetWeakRef(obs_service_t *service); friend OBSWeakCanvas OBSGetWeakRef(obs_canvas_t *canvas); }; template class OBSSafeRef : public OBSRefAutoRelease { inline OBSSafeRef &Replace(T valIn) { T newVal = getref(valIn); release(this->val); this->val = newVal; return *this; } struct TakeOwnership {}; inline OBSSafeRef(T val_, TakeOwnership) : OBSRefAutoRelease::OBSRefAutoRelease(val_) {} public: inline OBSSafeRef() : OBSRefAutoRelease::OBSRefAutoRelease(nullptr) {} inline OBSSafeRef(const OBSSafeRef &ref) : OBSRefAutoRelease::OBSRefAutoRelease(ref.val) { this->val = getref(ref.val); } inline OBSSafeRef(T val_) : OBSRefAutoRelease::OBSRefAutoRelease(val_) { this->val = getref(this->val); } inline OBSSafeRef 
&operator=(const OBSSafeRef &ref) { return Replace(ref.val); } inline OBSSafeRef &operator=(T valIn) { return Replace(valIn); } friend OBSObject OBSGetStrongRef(obs_weak_object_t *weak); friend OBSSource OBSGetStrongRef(obs_weak_source_t *weak); friend OBSOutput OBSGetStrongRef(obs_weak_output_t *weak); friend OBSEncoder OBSGetStrongRef(obs_weak_encoder_t *weak); friend OBSService OBSGetStrongRef(obs_weak_service_t *weak); friend OBSCanvas OBSGetStrongRef(obs_weak_canvas_t *weak); }; inline OBSObject OBSGetStrongRef(obs_weak_object_t *weak) { return {obs_weak_object_get_object(weak), OBSObject::TakeOwnership()}; } inline OBSWeakObject OBSGetWeakRef(obs_object_t *object) { return {obs_object_get_weak_object(object), OBSWeakObject::TakeOwnership()}; } inline OBSSource OBSGetStrongRef(obs_weak_source_t *weak) { return {obs_weak_source_get_source(weak), OBSSource::TakeOwnership()}; } inline OBSWeakSource OBSGetWeakRef(obs_source_t *source) { return {obs_source_get_weak_source(source), OBSWeakSource::TakeOwnership()}; } inline OBSOutput OBSGetStrongRef(obs_weak_output_t *weak) { return {obs_weak_output_get_output(weak), OBSOutput::TakeOwnership()}; } inline OBSWeakOutput OBSGetWeakRef(obs_output_t *output) { return {obs_output_get_weak_output(output), OBSWeakOutput::TakeOwnership()}; } inline OBSEncoder OBSGetStrongRef(obs_weak_encoder_t *weak) { return {obs_weak_encoder_get_encoder(weak), OBSEncoder::TakeOwnership()}; } inline OBSWeakEncoder OBSGetWeakRef(obs_encoder_t *encoder) { return {obs_encoder_get_weak_encoder(encoder), OBSWeakEncoder::TakeOwnership()}; } inline OBSService OBSGetStrongRef(obs_weak_service_t *weak) { return {obs_weak_service_get_service(weak), OBSService::TakeOwnership()}; } inline OBSWeakService OBSGetWeakRef(obs_service_t *service) { return {obs_service_get_weak_service(service), OBSWeakService::TakeOwnership()}; } inline OBSCanvas OBSGetStrongRef(obs_weak_canvas_t *canvas) { return {obs_weak_canvas_get_canvas(canvas), 
OBSCanvas::TakeOwnership()}; } inline OBSWeakCanvas OBSGetWeakRef(obs_canvas_t *canvas) { return {obs_canvas_get_weak_canvas(canvas), OBSWeakCanvas::TakeOwnership()}; } /* objects that are not meant to be instanced */ template class OBSPtr { T obj; public: inline OBSPtr() : obj(nullptr) {} inline OBSPtr(T obj_) : obj(obj_) {} inline OBSPtr(const OBSPtr &) = delete; inline OBSPtr(OBSPtr &&other) : obj(other.obj) { other.obj = nullptr; } inline ~OBSPtr() { destroy(obj); } inline OBSPtr &operator=(T obj_) { if (obj_ != obj) destroy(obj); obj = obj_; return *this; } inline OBSPtr &operator=(const OBSPtr &) = delete; inline OBSPtr &operator=(OBSPtr &&other) { if (obj) destroy(obj); obj = other.obj; other.obj = nullptr; return *this; } inline operator T() const { return obj; } inline bool operator==(T p) const { return obj == p; } inline bool operator!=(T p) const { return obj != p; } }; using OBSDisplay = OBSPtr; using OBSView = OBSPtr; using OBSFader = OBSPtr; using OBSVolMeter = OBSPtr; /* signal handler connection */ class OBSSignal { signal_handler_t *handler; const char *signal; signal_callback_t callback; void *param; public: inline OBSSignal() : handler(nullptr), signal(nullptr), callback(nullptr), param(nullptr) {} inline OBSSignal(signal_handler_t *handler_, const char *signal_, signal_callback_t callback_, void *param_) : handler(handler_), signal(signal_), callback(callback_), param(param_) { signal_handler_connect_ref(handler, signal, callback, param); } inline void Disconnect() { signal_handler_disconnect(handler, signal, callback, param); handler = nullptr; signal = nullptr; callback = nullptr; param = nullptr; } inline ~OBSSignal() { Disconnect(); } inline void Connect(signal_handler_t *handler_, const char *signal_, signal_callback_t callback_, void *param_) { Disconnect(); handler = handler_; signal = signal_; callback = callback_; param = param_; signal_handler_connect_ref(handler, signal, callback, param); } OBSSignal(const OBSSignal &) = delete; 
OBSSignal(OBSSignal &&other) noexcept : handler(other.handler), signal(other.signal), callback(other.callback), param(other.param) { other.handler = nullptr; other.signal = nullptr; other.callback = nullptr; other.param = nullptr; } OBSSignal &operator=(const OBSSignal &) = delete; OBSSignal &operator=(OBSSignal &&other) noexcept { Disconnect(); handler = other.handler; signal = other.signal; callback = other.callback; param = other.param; other.handler = nullptr; other.signal = nullptr; other.callback = nullptr; other.param = nullptr; return *this; } }; /* ======== obs-studio-32.1.0-sources/libobs/obs-data.h ======== */ /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once #include "util/c99defs.h" #include "media-io/frame-rate.h" #ifdef __cplusplus extern "C" { #endif struct vec2; struct vec3; struct vec4; struct quat; /* * OBS data settings storage * * This is used for retrieving or setting the data settings for things such * as sources, encoders, etc. This is designed for JSON serialization. 
*/ struct obs_data; struct obs_data_item; struct obs_data_array; typedef struct obs_data obs_data_t; typedef struct obs_data_item obs_data_item_t; typedef struct obs_data_array obs_data_array_t; enum obs_data_type { OBS_DATA_NULL, OBS_DATA_STRING, OBS_DATA_NUMBER, OBS_DATA_BOOLEAN, OBS_DATA_OBJECT, OBS_DATA_ARRAY }; enum obs_data_number_type { OBS_DATA_NUM_INVALID, OBS_DATA_NUM_INT, OBS_DATA_NUM_DOUBLE }; /* ------------------------------------------------------------------------- */ /* Main usage functions */ EXPORT obs_data_t *obs_data_create(); EXPORT obs_data_t *obs_data_create_from_json(const char *json_string); EXPORT obs_data_t *obs_data_create_from_json_file(const char *json_file); EXPORT obs_data_t *obs_data_create_from_json_file_safe(const char *json_file, const char *backup_ext); EXPORT void obs_data_addref(obs_data_t *data); EXPORT void obs_data_release(obs_data_t *data); EXPORT const char *obs_data_get_json(obs_data_t *data); EXPORT const char *obs_data_get_json_with_defaults(obs_data_t *data); EXPORT const char *obs_data_get_json_pretty(obs_data_t *data); EXPORT const char *obs_data_get_json_pretty_with_defaults(obs_data_t *data); EXPORT const char *obs_data_get_last_json(obs_data_t *data); EXPORT bool obs_data_save_json(obs_data_t *data, const char *file); EXPORT bool obs_data_save_json_safe(obs_data_t *data, const char *file, const char *temp_ext, const char *backup_ext); EXPORT bool obs_data_save_json_pretty_safe(obs_data_t *data, const char *file, const char *temp_ext, const char *backup_ext); EXPORT void obs_data_apply(obs_data_t *target, obs_data_t *apply_data); EXPORT void obs_data_erase(obs_data_t *data, const char *name); EXPORT void obs_data_clear(obs_data_t *data); /* Set functions */ EXPORT void obs_data_set_string(obs_data_t *data, const char *name, const char *val); EXPORT void obs_data_set_int(obs_data_t *data, const char *name, long long val); EXPORT void obs_data_set_double(obs_data_t *data, const char *name, double val); EXPORT void 
obs_data_set_bool(obs_data_t *data, const char *name, bool val); EXPORT void obs_data_set_obj(obs_data_t *data, const char *name, obs_data_t *obj); EXPORT void obs_data_set_array(obs_data_t *data, const char *name, obs_data_array_t *array); /* * Creates an obs_data_t * filled with all default values. */ EXPORT obs_data_t *obs_data_get_defaults(obs_data_t *data); /* * Default value functions. */ EXPORT void obs_data_set_default_string(obs_data_t *data, const char *name, const char *val); EXPORT void obs_data_set_default_int(obs_data_t *data, const char *name, long long val); EXPORT void obs_data_set_default_double(obs_data_t *data, const char *name, double val); EXPORT void obs_data_set_default_bool(obs_data_t *data, const char *name, bool val); EXPORT void obs_data_set_default_obj(obs_data_t *data, const char *name, obs_data_t *obj); EXPORT void obs_data_set_default_array(obs_data_t *data, const char *name, obs_data_array_t *arr); /* * Application overrides * Use these to communicate the actual values of settings in case the user * settings aren't appropriate */ OBS_DEPRECATED EXPORT void obs_data_set_autoselect_string(obs_data_t *data, const char *name, const char *val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_int(obs_data_t *data, const char *name, long long val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_double(obs_data_t *data, const char *name, double val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_bool(obs_data_t *data, const char *name, bool val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_obj(obs_data_t *data, const char *name, obs_data_t *obj); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_array(obs_data_t *data, const char *name, obs_data_array_t *arr); /* * Get functions */ EXPORT const char *obs_data_get_string(obs_data_t *data, const char *name); EXPORT long long obs_data_get_int(obs_data_t *data, const char *name); EXPORT double obs_data_get_double(obs_data_t *data, const char *name); EXPORT bool 
obs_data_get_bool(obs_data_t *data, const char *name); EXPORT obs_data_t *obs_data_get_obj(obs_data_t *data, const char *name); EXPORT obs_data_array_t *obs_data_get_array(obs_data_t *data, const char *name); EXPORT const char *obs_data_get_default_string(obs_data_t *data, const char *name); EXPORT long long obs_data_get_default_int(obs_data_t *data, const char *name); EXPORT double obs_data_get_default_double(obs_data_t *data, const char *name); EXPORT bool obs_data_get_default_bool(obs_data_t *data, const char *name); EXPORT obs_data_t *obs_data_get_default_obj(obs_data_t *data, const char *name); EXPORT obs_data_array_t *obs_data_get_default_array(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT const char *obs_data_get_autoselect_string(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT long long obs_data_get_autoselect_int(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT double obs_data_get_autoselect_double(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT bool obs_data_get_autoselect_bool(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT obs_data_t *obs_data_get_autoselect_obj(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT obs_data_array_t *obs_data_get_autoselect_array(obs_data_t *data, const char *name); /* Array functions */ EXPORT obs_data_array_t *obs_data_array_create(); EXPORT void obs_data_array_addref(obs_data_array_t *array); EXPORT void obs_data_array_release(obs_data_array_t *array); EXPORT size_t obs_data_array_count(obs_data_array_t *array); EXPORT obs_data_t *obs_data_array_item(obs_data_array_t *array, size_t idx); EXPORT size_t obs_data_array_push_back(obs_data_array_t *array, obs_data_t *obj); EXPORT void obs_data_array_insert(obs_data_array_t *array, size_t idx, obs_data_t *obj); EXPORT void obs_data_array_push_back_array(obs_data_array_t *array, obs_data_array_t *array2); EXPORT void obs_data_array_erase(obs_data_array_t *array, size_t idx); EXPORT void 
obs_data_array_enum(obs_data_array_t *array, void (*cb)(obs_data_t *data, void *param), void *param); /* ------------------------------------------------------------------------- */ /* Item status inspection */ EXPORT bool obs_data_has_user_value(obs_data_t *data, const char *name); EXPORT bool obs_data_has_default_value(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT bool obs_data_has_autoselect_value(obs_data_t *data, const char *name); EXPORT bool obs_data_item_has_user_value(obs_data_item_t *data); EXPORT bool obs_data_item_has_default_value(obs_data_item_t *data); OBS_DEPRECATED EXPORT bool obs_data_item_has_autoselect_value(obs_data_item_t *data); /* ------------------------------------------------------------------------- */ /* Clearing data values */ EXPORT void obs_data_unset_user_value(obs_data_t *data, const char *name); EXPORT void obs_data_unset_default_value(obs_data_t *data, const char *name); OBS_DEPRECATED EXPORT void obs_data_unset_autoselect_value(obs_data_t *data, const char *name); EXPORT void obs_data_item_unset_user_value(obs_data_item_t *data); EXPORT void obs_data_item_unset_default_value(obs_data_item_t *data); OBS_DEPRECATED EXPORT void obs_data_item_unset_autoselect_value(obs_data_item_t *data); /* ------------------------------------------------------------------------- */ /* Item iteration */ EXPORT obs_data_item_t *obs_data_first(obs_data_t *data); EXPORT obs_data_item_t *obs_data_item_byname(obs_data_t *data, const char *name); EXPORT bool obs_data_item_next(obs_data_item_t **item); EXPORT void obs_data_item_release(obs_data_item_t **item); EXPORT void obs_data_item_remove(obs_data_item_t **item); /* Gets Item type */ EXPORT enum obs_data_type obs_data_item_gettype(obs_data_item_t *item); EXPORT enum obs_data_number_type obs_data_item_numtype(obs_data_item_t *item); EXPORT const char *obs_data_item_get_name(obs_data_item_t *item); /* Item set functions */ EXPORT void obs_data_item_set_string(obs_data_item_t **item, const 
char *val); EXPORT void obs_data_item_set_int(obs_data_item_t **item, long long val); EXPORT void obs_data_item_set_double(obs_data_item_t **item, double val); EXPORT void obs_data_item_set_bool(obs_data_item_t **item, bool val); EXPORT void obs_data_item_set_obj(obs_data_item_t **item, obs_data_t *val); EXPORT void obs_data_item_set_array(obs_data_item_t **item, obs_data_array_t *val); EXPORT void obs_data_item_set_default_string(obs_data_item_t **item, const char *val); EXPORT void obs_data_item_set_default_int(obs_data_item_t **item, long long val); EXPORT void obs_data_item_set_default_double(obs_data_item_t **item, double val); EXPORT void obs_data_item_set_default_bool(obs_data_item_t **item, bool val); EXPORT void obs_data_item_set_default_obj(obs_data_item_t **item, obs_data_t *val); EXPORT void obs_data_item_set_default_array(obs_data_item_t **item, obs_data_array_t *val); OBS_DEPRECATED EXPORT void obs_data_item_set_autoselect_string(obs_data_item_t **item, const char *val); OBS_DEPRECATED EXPORT void obs_data_item_set_autoselect_int(obs_data_item_t **item, long long val); OBS_DEPRECATED EXPORT void obs_data_item_set_autoselect_double(obs_data_item_t **item, double val); OBS_DEPRECATED EXPORT void obs_data_item_set_autoselect_bool(obs_data_item_t **item, bool val); OBS_DEPRECATED EXPORT void obs_data_item_set_autoselect_obj(obs_data_item_t **item, obs_data_t *val); OBS_DEPRECATED EXPORT void obs_data_item_set_autoselect_array(obs_data_item_t **item, obs_data_array_t *val); /* Item get functions */ EXPORT const char *obs_data_item_get_string(obs_data_item_t *item); EXPORT long long obs_data_item_get_int(obs_data_item_t *item); EXPORT double obs_data_item_get_double(obs_data_item_t *item); EXPORT bool obs_data_item_get_bool(obs_data_item_t *item); EXPORT obs_data_t *obs_data_item_get_obj(obs_data_item_t *item); EXPORT obs_data_array_t *obs_data_item_get_array(obs_data_item_t *item); EXPORT const char *obs_data_item_get_default_string(obs_data_item_t *item); 
EXPORT long long obs_data_item_get_default_int(obs_data_item_t *item); EXPORT double obs_data_item_get_default_double(obs_data_item_t *item); EXPORT bool obs_data_item_get_default_bool(obs_data_item_t *item); EXPORT obs_data_t *obs_data_item_get_default_obj(obs_data_item_t *item); EXPORT obs_data_array_t *obs_data_item_get_default_array(obs_data_item_t *item); OBS_DEPRECATED EXPORT const char *obs_data_item_get_autoselect_string(obs_data_item_t *item); OBS_DEPRECATED EXPORT long long obs_data_item_get_autoselect_int(obs_data_item_t *item); OBS_DEPRECATED EXPORT double obs_data_item_get_autoselect_double(obs_data_item_t *item); OBS_DEPRECATED EXPORT bool obs_data_item_get_autoselect_bool(obs_data_item_t *item); OBS_DEPRECATED EXPORT obs_data_t *obs_data_item_get_autoselect_obj(obs_data_item_t *item); OBS_DEPRECATED EXPORT obs_data_array_t *obs_data_item_get_autoselect_array(obs_data_item_t *item); /* ------------------------------------------------------------------------- */ /* Helper functions for certain structures */ EXPORT void obs_data_set_vec2(obs_data_t *data, const char *name, const struct vec2 *val); EXPORT void obs_data_set_vec3(obs_data_t *data, const char *name, const struct vec3 *val); EXPORT void obs_data_set_vec4(obs_data_t *data, const char *name, const struct vec4 *val); EXPORT void obs_data_set_quat(obs_data_t *data, const char *name, const struct quat *val); EXPORT void obs_data_set_default_vec2(obs_data_t *data, const char *name, const struct vec2 *val); EXPORT void obs_data_set_default_vec3(obs_data_t *data, const char *name, const struct vec3 *val); EXPORT void obs_data_set_default_vec4(obs_data_t *data, const char *name, const struct vec4 *val); EXPORT void obs_data_set_default_quat(obs_data_t *data, const char *name, const struct quat *val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_vec2(obs_data_t *data, const char *name, const struct vec2 *val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_vec3(obs_data_t *data, const char 
*name, const struct vec3 *val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_vec4(obs_data_t *data, const char *name, const struct vec4 *val); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_quat(obs_data_t *data, const char *name, const struct quat *val); EXPORT void obs_data_get_vec2(obs_data_t *data, const char *name, struct vec2 *val); EXPORT void obs_data_get_vec3(obs_data_t *data, const char *name, struct vec3 *val); EXPORT void obs_data_get_vec4(obs_data_t *data, const char *name, struct vec4 *val); EXPORT void obs_data_get_quat(obs_data_t *data, const char *name, struct quat *val); EXPORT void obs_data_get_default_vec2(obs_data_t *data, const char *name, struct vec2 *val); EXPORT void obs_data_get_default_vec3(obs_data_t *data, const char *name, struct vec3 *val); EXPORT void obs_data_get_default_vec4(obs_data_t *data, const char *name, struct vec4 *val); EXPORT void obs_data_get_default_quat(obs_data_t *data, const char *name, struct quat *val); OBS_DEPRECATED EXPORT void obs_data_get_autoselect_vec2(obs_data_t *data, const char *name, struct vec2 *val); OBS_DEPRECATED EXPORT void obs_data_get_autoselect_vec3(obs_data_t *data, const char *name, struct vec3 *val); OBS_DEPRECATED EXPORT void obs_data_get_autoselect_vec4(obs_data_t *data, const char *name, struct vec4 *val); OBS_DEPRECATED EXPORT void obs_data_get_autoselect_quat(obs_data_t *data, const char *name, struct quat *val); /* ------------------------------------------------------------------------- */ /* Helper functions for media_frames_per_second/OBS_PROPERTY_FRAME_RATE */ EXPORT void obs_data_set_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second fps, const char *option); EXPORT void obs_data_set_default_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second fps, const char *option); OBS_DEPRECATED EXPORT void obs_data_set_autoselect_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second fps, 
const char *option);
EXPORT bool obs_data_get_frames_per_second(obs_data_t *data, const char *name,
	struct media_frames_per_second *fps, const char **option);
EXPORT bool obs_data_get_default_frames_per_second(obs_data_t *data, const char *name,
	struct media_frames_per_second *fps, const char **option);
OBS_DEPRECATED EXPORT bool obs_data_get_autoselect_frames_per_second(obs_data_t *data, const char *name,
	struct media_frames_per_second *fps, const char **option);

EXPORT void obs_data_item_set_frames_per_second(obs_data_item_t **item,
	struct media_frames_per_second fps, const char *option);
EXPORT void obs_data_item_set_default_frames_per_second(obs_data_item_t **item,
	struct media_frames_per_second fps, const char *option);
OBS_DEPRECATED EXPORT void obs_data_item_set_autoselect_frames_per_second(obs_data_item_t **item,
	struct media_frames_per_second fps, const char *option);

EXPORT bool obs_data_item_get_frames_per_second(obs_data_item_t *item,
	struct media_frames_per_second *fps, const char **option);
EXPORT bool obs_data_item_get_default_frames_per_second(obs_data_item_t *item,
	struct media_frames_per_second *fps, const char **option);
OBS_DEPRECATED EXPORT bool obs_data_item_get_autoselect_frames_per_second(obs_data_item_t *item,
	struct media_frames_per_second *fps, const char **option);

/* ------------------------------------------------------------------------- */
/* OBS-specific functions */

static inline obs_data_t *obs_data_newref(obs_data_t *data)
{
	if (data)
		obs_data_addref(data);
	else
		data = obs_data_create();

	return data;
}

#ifdef __cplusplus
}
#endif

obs-studio-32.1.0-sources/libobs/obs-output-delay.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation,
    either version 2 of the License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <inttypes.h>

#include "obs-internal.h"

static inline bool delay_active(const struct obs_output *output)
{
	return os_atomic_load_bool(&output->delay_active);
}

static inline bool delay_capturing(const struct obs_output *output)
{
	return os_atomic_load_bool(&output->delay_capturing);
}

static inline bool flag_encoded(const struct obs_output *output)
{
	return (output->info.flags & OBS_OUTPUT_ENCODED) != 0;
}

static inline bool log_flag_encoded(const struct obs_output *output, const char *func_name, bool inverse_log)
{
	const char *prefix = inverse_log ?
"n encoded" : " raw"; bool ret = flag_encoded(output); if ((!inverse_log && !ret) || (inverse_log && ret)) blog(LOG_WARNING, "Output '%s': Tried to use %s on a%s output", output->context.name, func_name, prefix); return ret; } static inline void push_packet(struct obs_output *output, struct encoder_packet *packet, struct encoder_packet_time *packet_time, uint64_t t) { struct delay_data dd; dd.msg = DELAY_MSG_PACKET; dd.ts = t; dd.packet_time_valid = packet_time != NULL; if (packet_time != NULL) dd.packet_time = *packet_time; obs_encoder_packet_create_instance(&dd.packet, packet); pthread_mutex_lock(&output->delay_mutex); deque_push_back(&output->delay_data, &dd, sizeof(dd)); pthread_mutex_unlock(&output->delay_mutex); } static inline void process_delay_data(struct obs_output *output, struct delay_data *dd) { switch (dd->msg) { case DELAY_MSG_PACKET: if (!delay_active(output) || !delay_capturing(output)) obs_encoder_packet_release(&dd->packet); else output->delay_callback(output, &dd->packet, dd->packet_time_valid ? 
&dd->packet_time : NULL); break; case DELAY_MSG_START: obs_output_actual_start(output); break; case DELAY_MSG_STOP: obs_output_actual_stop(output, false, dd->ts); break; } } void obs_output_cleanup_delay(obs_output_t *output) { struct delay_data dd; while (output->delay_data.size) { deque_pop_front(&output->delay_data, &dd, sizeof(dd)); if (dd.msg == DELAY_MSG_PACKET) { obs_encoder_packet_release(&dd.packet); } } output->active_delay_ns = 0; os_atomic_set_long(&output->delay_restart_refs, 0); } static inline bool pop_packet(struct obs_output *output, uint64_t t) { uint64_t elapsed_time; struct delay_data dd; bool popped = false; bool preserve; /* ------------------------------------------------ */ preserve = (output->delay_cur_flags & OBS_OUTPUT_DELAY_PRESERVE) != 0; pthread_mutex_lock(&output->delay_mutex); if (output->delay_data.size) { deque_peek_front(&output->delay_data, &dd, sizeof(dd)); elapsed_time = (t - dd.ts); if (preserve && output->reconnecting) { output->active_delay_ns = elapsed_time; } else if (elapsed_time > output->active_delay_ns) { deque_pop_front(&output->delay_data, NULL, sizeof(dd)); popped = true; } } pthread_mutex_unlock(&output->delay_mutex); /* ------------------------------------------------ */ if (popped) process_delay_data(output, &dd); return popped; } void process_delay(void *data, struct encoder_packet *packet, struct encoder_packet_time *packet_time) { struct obs_output *output = data; uint64_t t = os_gettime_ns(); push_packet(output, packet, packet_time, t); while (pop_packet(output, t)) ; } bool obs_output_delay_start(obs_output_t *output) { struct delay_data dd = { .msg = DELAY_MSG_START, .ts = os_gettime_ns(), }; if (!delay_active(output)) { bool can_begin = obs_output_can_begin_data_capture(output, 0); if (!can_begin) return false; if (!obs_output_initialize_encoders(output, 0)) return false; } pthread_mutex_lock(&output->delay_mutex); deque_push_back(&output->delay_data, &dd, sizeof(dd)); 
	pthread_mutex_unlock(&output->delay_mutex);

	os_atomic_inc_long(&output->delay_restart_refs);

	if (delay_active(output)) {
		do_output_signal(output, "starting");
		return true;
	}

	if (!obs_output_begin_data_capture(output, 0)) {
		obs_output_cleanup_delay(output);
		return false;
	}

	return true;
}

void obs_output_delay_stop(obs_output_t *output)
{
	struct delay_data dd = {
		.msg = DELAY_MSG_STOP,
		.ts = os_gettime_ns(),
	};

	pthread_mutex_lock(&output->delay_mutex);
	deque_push_back(&output->delay_data, &dd, sizeof(dd));
	pthread_mutex_unlock(&output->delay_mutex);

	do_output_signal(output, "stopping");
}

void obs_output_set_delay(obs_output_t *output, uint32_t delay_sec, uint32_t flags)
{
	if (!obs_output_valid(output, "obs_output_set_delay"))
		return;

	if (!log_flag_encoded(output, __FUNCTION__, false))
		return;

	output->delay_sec = delay_sec;
	output->delay_flags = flags;
}

uint32_t obs_output_get_delay(const obs_output_t *output)
{
	return obs_output_valid(output, "obs_output_get_delay") ? output->delay_sec : 0;
}

uint32_t obs_output_get_active_delay(const obs_output_t *output)
{
	return obs_output_valid(output, "obs_output_get_active_delay")
		       ? (uint32_t)(output->active_delay_ns / 1000000000ULL)
		       : 0;
}

obs-studio-32.1.0-sources/libobs/obs.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.
    If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include <inttypes.h>

#include "graphics/matrix4.h"
#include "callback/calldata.h"

#include "obs.h"
#include "obs-internal.h"

struct obs_core *obs = NULL;

static THREAD_LOCAL bool is_ui_thread = false;

extern void add_default_module_paths(void);
extern char *find_libobs_data_file(const char *file);

static inline void make_video_info(struct video_output_info *vi, struct obs_video_info *ovi)
{
	vi->name = "video";
	vi->format = ovi->output_format;
	vi->fps_num = ovi->fps_num;
	vi->fps_den = ovi->fps_den;
	vi->width = ovi->output_width;
	vi->height = ovi->output_height;
	vi->range = ovi->range;
	vi->colorspace = ovi->colorspace;
	vi->cache_size = 6;
}

static inline void calc_gpu_conversion_sizes(struct obs_core_video_mix *video)
{
	const struct video_output_info *info = video_output_get_info(video->video);

	video->conversion_needed = false;
	video->conversion_techs[0] = NULL;
	video->conversion_techs[1] = NULL;
	video->conversion_techs[2] = NULL;
	video->conversion_width_i = 0.f;
	video->conversion_height_i = 0.f;

	switch (info->format) {
	case VIDEO_FORMAT_I420:
		video->conversion_needed = true;
		video->conversion_techs[0] = "Planar_Y";
		video->conversion_techs[1] = "Planar_U_Left";
		video->conversion_techs[2] = "Planar_V_Left";
		video->conversion_width_i = 1.f / (float)info->width;
		break;
	case VIDEO_FORMAT_NV12:
		video->conversion_needed = true;
		video->conversion_techs[0] = "NV12_Y";
		video->conversion_techs[1] = "NV12_UV";
		video->conversion_width_i = 1.f / (float)info->width;
		break;
	case VIDEO_FORMAT_I444:
		video->conversion_needed = true;
		video->conversion_techs[0] = "Planar_Y";
		video->conversion_techs[1] = "Planar_U";
		video->conversion_techs[2] = "Planar_V";
		break;
	case VIDEO_FORMAT_I010:
		video->conversion_needed = true;
		video->conversion_width_i = 1.f / (float)info->width;
		video->conversion_height_i = 1.f / (float)info->height;
		if (info->colorspace == VIDEO_CS_2100_PQ) {
			video->conversion_techs[0] =
"I010_PQ_Y"; video->conversion_techs[1] = "I010_PQ_U"; video->conversion_techs[2] = "I010_PQ_V"; } else if (info->colorspace == VIDEO_CS_2100_HLG) { video->conversion_techs[0] = "I010_HLG_Y"; video->conversion_techs[1] = "I010_HLG_U"; video->conversion_techs[2] = "I010_HLG_V"; } else { video->conversion_techs[0] = "I010_SRGB_Y"; video->conversion_techs[1] = "I010_SRGB_U"; video->conversion_techs[2] = "I010_SRGB_V"; } break; case VIDEO_FORMAT_P010: video->conversion_needed = true; video->conversion_width_i = 1.f / (float)info->width; video->conversion_height_i = 1.f / (float)info->height; if (info->colorspace == VIDEO_CS_2100_PQ) { video->conversion_techs[0] = "P010_PQ_Y"; video->conversion_techs[1] = "P010_PQ_UV"; } else if (info->colorspace == VIDEO_CS_2100_HLG) { video->conversion_techs[0] = "P010_HLG_Y"; video->conversion_techs[1] = "P010_HLG_UV"; } else { video->conversion_techs[0] = "P010_SRGB_Y"; video->conversion_techs[1] = "P010_SRGB_UV"; } break; case VIDEO_FORMAT_P216: video->conversion_needed = true; video->conversion_width_i = 1.f / (float)info->width; video->conversion_height_i = 1.f / (float)info->height; if (info->colorspace == VIDEO_CS_2100_PQ) { video->conversion_techs[0] = "P216_PQ_Y"; video->conversion_techs[1] = "P216_PQ_UV"; } else if (info->colorspace == VIDEO_CS_2100_HLG) { video->conversion_techs[0] = "P216_HLG_Y"; video->conversion_techs[1] = "P216_HLG_UV"; } else { video->conversion_techs[0] = "P216_SRGB_Y"; video->conversion_techs[1] = "P216_SRGB_UV"; } break; case VIDEO_FORMAT_P416: video->conversion_needed = true; video->conversion_width_i = 1.f / (float)info->width; video->conversion_height_i = 1.f / (float)info->height; if (info->colorspace == VIDEO_CS_2100_PQ) { video->conversion_techs[0] = "P416_PQ_Y"; video->conversion_techs[1] = "P416_PQ_UV"; } else if (info->colorspace == VIDEO_CS_2100_HLG) { video->conversion_techs[0] = "P416_HLG_Y"; video->conversion_techs[1] = "P416_HLG_UV"; } else { video->conversion_techs[0] = "P416_SRGB_Y"; 
video->conversion_techs[1] = "P416_SRGB_UV"; } break; default: break; } } static bool obs_init_gpu_conversion(struct obs_core_video_mix *video) { const struct video_output_info *info = video_output_get_info(video->video); calc_gpu_conversion_sizes(video); video->using_nv12_tex = info->format == VIDEO_FORMAT_NV12 ? gs_nv12_available() : false; video->using_p010_tex = info->format == VIDEO_FORMAT_P010 ? gs_p010_available() : false; if (!video->conversion_needed) { blog(LOG_INFO, "GPU conversion not available for format: %u", (unsigned int)info->format); video->gpu_conversion = false; video->using_nv12_tex = false; video->using_p010_tex = false; blog(LOG_INFO, "NV12 texture support not available"); return true; } if (video->using_nv12_tex) blog(LOG_INFO, "NV12 texture support enabled"); else blog(LOG_INFO, "NV12 texture support not available"); if (video->using_p010_tex) blog(LOG_INFO, "P010 texture support enabled"); else blog(LOG_INFO, "P010 texture support not available"); video->convert_textures[0] = NULL; video->convert_textures[1] = NULL; video->convert_textures[2] = NULL; video->convert_textures_encode[0] = NULL; video->convert_textures_encode[1] = NULL; video->convert_textures_encode[2] = NULL; if (video->using_nv12_tex) { if (!gs_texture_create_nv12(&video->convert_textures_encode[0], &video->convert_textures_encode[1], info->width, info->height, GS_RENDER_TARGET | GS_SHARED_KM_TEX)) { return false; } } else if (video->using_p010_tex) { if (!gs_texture_create_p010(&video->convert_textures_encode[0], &video->convert_textures_encode[1], info->width, info->height, GS_RENDER_TARGET | GS_SHARED_KM_TEX)) { return false; } } bool success = true; switch (info->format) { case VIDEO_FORMAT_I420: video->convert_textures[0] = gs_texture_create(info->width, info->height, GS_R8, 1, NULL, GS_RENDER_TARGET); video->convert_textures[1] = gs_texture_create(info->width / 2, info->height / 2, GS_R8, 1, NULL, GS_RENDER_TARGET); video->convert_textures[2] = 
gs_texture_create(info->width / 2, info->height / 2, GS_R8, 1, NULL, GS_RENDER_TARGET); if (!video->convert_textures[0] || !video->convert_textures[1] || !video->convert_textures[2]) success = false; break; case VIDEO_FORMAT_NV12: video->convert_textures[0] = gs_texture_create(info->width, info->height, GS_R8, 1, NULL, GS_RENDER_TARGET); video->convert_textures[1] = gs_texture_create(info->width / 2, info->height / 2, GS_R8G8, 1, NULL, GS_RENDER_TARGET); if (!video->convert_textures[0] || !video->convert_textures[1]) success = false; break; case VIDEO_FORMAT_I444: video->convert_textures[0] = gs_texture_create(info->width, info->height, GS_R8, 1, NULL, GS_RENDER_TARGET); video->convert_textures[1] = gs_texture_create(info->width, info->height, GS_R8, 1, NULL, GS_RENDER_TARGET); video->convert_textures[2] = gs_texture_create(info->width, info->height, GS_R8, 1, NULL, GS_RENDER_TARGET); if (!video->convert_textures[0] || !video->convert_textures[1] || !video->convert_textures[2]) success = false; break; case VIDEO_FORMAT_I010: video->convert_textures[0] = gs_texture_create(info->width, info->height, GS_R16, 1, NULL, GS_RENDER_TARGET); video->convert_textures[1] = gs_texture_create(info->width / 2, info->height / 2, GS_R16, 1, NULL, GS_RENDER_TARGET); video->convert_textures[2] = gs_texture_create(info->width / 2, info->height / 2, GS_R16, 1, NULL, GS_RENDER_TARGET); if (!video->convert_textures[0] || !video->convert_textures[1] || !video->convert_textures[2]) success = false; break; case VIDEO_FORMAT_P010: video->convert_textures[0] = gs_texture_create(info->width, info->height, GS_R16, 1, NULL, GS_RENDER_TARGET); video->convert_textures[1] = gs_texture_create(info->width / 2, info->height / 2, GS_RG16, 1, NULL, GS_RENDER_TARGET); if (!video->convert_textures[0] || !video->convert_textures[1]) success = false; break; case VIDEO_FORMAT_P216: video->convert_textures[0] = gs_texture_create(info->width, info->height, GS_R16, 1, NULL, GS_RENDER_TARGET); 
video->convert_textures[1] = gs_texture_create(info->width / 2, info->height, GS_RG16, 1, NULL, GS_RENDER_TARGET); if (!video->convert_textures[0] || !video->convert_textures[1]) success = false; break; case VIDEO_FORMAT_P416: video->convert_textures[0] = gs_texture_create(info->width, info->height, GS_R16, 1, NULL, GS_RENDER_TARGET); video->convert_textures[1] = gs_texture_create(info->width, info->height, GS_RG16, 1, NULL, GS_RENDER_TARGET); if (!video->convert_textures[0] || !video->convert_textures[1]) success = false; break; default: break; } if (!success) { for (size_t c = 0; c < NUM_CHANNELS; c++) { if (video->convert_textures[c]) { gs_texture_destroy(video->convert_textures[c]); video->convert_textures[c] = NULL; } if (video->convert_textures_encode[c]) { gs_texture_destroy(video->convert_textures_encode[c]); video->convert_textures_encode[c] = NULL; } } } return success; } static bool obs_init_gpu_copy_surfaces(struct obs_core_video_mix *video, size_t i) { const struct video_output_info *info = video_output_get_info(video->video); switch (info->format) { case VIDEO_FORMAT_I420: video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, GS_R8); if (!video->copy_surfaces[i][0]) return false; video->copy_surfaces[i][1] = gs_stagesurface_create(info->width / 2, info->height / 2, GS_R8); if (!video->copy_surfaces[i][1]) return false; video->copy_surfaces[i][2] = gs_stagesurface_create(info->width / 2, info->height / 2, GS_R8); if (!video->copy_surfaces[i][2]) return false; break; case VIDEO_FORMAT_NV12: video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, GS_R8); if (!video->copy_surfaces[i][0]) return false; video->copy_surfaces[i][1] = gs_stagesurface_create(info->width / 2, info->height / 2, GS_R8G8); if (!video->copy_surfaces[i][1]) return false; break; case VIDEO_FORMAT_I444: video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, GS_R8); if (!video->copy_surfaces[i][0]) return false; 
video->copy_surfaces[i][1] = gs_stagesurface_create(info->width, info->height, GS_R8); if (!video->copy_surfaces[i][1]) return false; video->copy_surfaces[i][2] = gs_stagesurface_create(info->width, info->height, GS_R8); if (!video->copy_surfaces[i][2]) return false; break; case VIDEO_FORMAT_I010: video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, GS_R16); if (!video->copy_surfaces[i][0]) return false; video->copy_surfaces[i][1] = gs_stagesurface_create(info->width / 2, info->height / 2, GS_R16); if (!video->copy_surfaces[i][1]) return false; video->copy_surfaces[i][2] = gs_stagesurface_create(info->width / 2, info->height / 2, GS_R16); if (!video->copy_surfaces[i][2]) return false; break; case VIDEO_FORMAT_P010: video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, GS_R16); if (!video->copy_surfaces[i][0]) return false; video->copy_surfaces[i][1] = gs_stagesurface_create(info->width / 2, info->height / 2, GS_RG16); if (!video->copy_surfaces[i][1]) return false; break; case VIDEO_FORMAT_P216: video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, GS_R16); if (!video->copy_surfaces[i][0]) return false; video->copy_surfaces[i][1] = gs_stagesurface_create(info->width / 2, info->height, GS_RG16); if (!video->copy_surfaces[i][1]) return false; break; case VIDEO_FORMAT_P416: video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, GS_R16); if (!video->copy_surfaces[i][0]) return false; video->copy_surfaces[i][1] = gs_stagesurface_create(info->width, info->height, GS_RG16); if (!video->copy_surfaces[i][1]) return false; break; default: break; } return true; } static bool obs_init_textures(struct obs_core_video_mix *video) { const struct video_output_info *info = video_output_get_info(video->video); bool success = true; enum gs_color_format format = GS_BGRA; switch (info->format) { case VIDEO_FORMAT_I010: case VIDEO_FORMAT_P010: case VIDEO_FORMAT_I210: case VIDEO_FORMAT_I412: 
case VIDEO_FORMAT_YA2L: case VIDEO_FORMAT_P216: case VIDEO_FORMAT_P416: format = GS_RGBA16F; break; default: break; } for (size_t i = 0; i < NUM_TEXTURES; i++) { #ifdef _WIN32 if (video->using_nv12_tex) { video->copy_surfaces_encode[i] = gs_stagesurface_create_nv12(info->width, info->height); if (!video->copy_surfaces_encode[i]) { success = false; break; } } else if (video->using_p010_tex) { video->copy_surfaces_encode[i] = gs_stagesurface_create_p010(info->width, info->height); if (!video->copy_surfaces_encode[i]) { success = false; break; } } #endif if (video->gpu_conversion) { if (!obs_init_gpu_copy_surfaces(video, i)) { success = false; break; } } else { video->copy_surfaces[i][0] = gs_stagesurface_create(info->width, info->height, format); if (!video->copy_surfaces[i][0]) { success = false; break; } } } enum gs_color_space space = GS_CS_SRGB; switch (info->colorspace) { case VIDEO_CS_2100_PQ: case VIDEO_CS_2100_HLG: space = GS_CS_709_EXTENDED; break; default: switch (info->format) { case VIDEO_FORMAT_I010: case VIDEO_FORMAT_P010: case VIDEO_FORMAT_P216: case VIDEO_FORMAT_P416: space = GS_CS_SRGB_16F; break; default: space = GS_CS_SRGB; break; } break; } video->render_texture = gs_texture_create(video->ovi.base_width, video->ovi.base_height, format, 1, NULL, GS_RENDER_TARGET); if (!video->render_texture) success = false; video->output_texture = gs_texture_create(info->width, info->height, format, 1, NULL, GS_RENDER_TARGET); if (!video->output_texture) success = false; if (success) { video->render_space = space; } else { for (size_t i = 0; i < NUM_TEXTURES; i++) { for (size_t c = 0; c < NUM_CHANNELS; c++) { if (video->copy_surfaces[i][c]) { gs_stagesurface_destroy(video->copy_surfaces[i][c]); video->copy_surfaces[i][c] = NULL; } } #ifdef _WIN32 if (video->copy_surfaces_encode[i]) { gs_stagesurface_destroy(video->copy_surfaces_encode[i]); video->copy_surfaces_encode[i] = NULL; } #endif } if (video->render_texture) { gs_texture_destroy(video->render_texture); 
video->render_texture = NULL; } if (video->output_texture) { gs_texture_destroy(video->output_texture); video->output_texture = NULL; } } return success; } gs_effect_t *obs_load_effect(gs_effect_t **effect, const char *file) { if (!*effect) { char *filename = obs_find_data_file(file); *effect = gs_effect_create_from_file(filename, NULL); bfree(filename); } return *effect; } static const char *shader_comp_name = "shader compilation"; static const char *obs_init_graphics_name = "obs_init_graphics"; static int obs_init_graphics(struct obs_video_info *ovi) { struct obs_core_video *video = &obs->video; uint8_t transparent_tex_data[2 * 2 * 4] = {0}; const uint8_t *transparent_tex = transparent_tex_data; struct gs_sampler_info point_sampler = {0}; bool success = true; int errorcode; profile_start(obs_init_graphics_name); errorcode = gs_create(&video->graphics, ovi->graphics_module, ovi->adapter); if (errorcode != GS_SUCCESS) { profile_end(obs_init_graphics_name); switch (errorcode) { case GS_ERROR_MODULE_NOT_FOUND: return OBS_VIDEO_MODULE_NOT_FOUND; case GS_ERROR_NOT_SUPPORTED: return OBS_VIDEO_NOT_SUPPORTED; default: return OBS_VIDEO_FAIL; } } profile_start(shader_comp_name); gs_enter_context(video->graphics); char *filename = obs_find_data_file("default.effect"); video->default_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); if (gs_get_device_type() == GS_DEVICE_OPENGL) { filename = obs_find_data_file("default_rect.effect"); video->default_rect_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); } filename = obs_find_data_file("opaque.effect"); video->opaque_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = obs_find_data_file("solid.effect"); video->solid_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = obs_find_data_file("repeat.effect"); video->repeat_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = 
obs_find_data_file("format_conversion.effect"); video->conversion_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = obs_find_data_file("bicubic_scale.effect"); video->bicubic_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = obs_find_data_file("lanczos_scale.effect"); video->lanczos_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = obs_find_data_file("area.effect"); video->area_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = obs_find_data_file("bilinear_lowres_scale.effect"); video->bilinear_lowres_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); filename = obs_find_data_file("premultiplied_alpha.effect"); video->premultiplied_alpha_effect = gs_effect_create_from_file(filename, NULL); bfree(filename); point_sampler.max_anisotropy = 1; video->point_sampler = gs_samplerstate_create(&point_sampler); obs->video.transparent_texture = gs_texture_create(2, 2, GS_RGBA, 1, &transparent_tex, 0); if (!video->default_effect) success = false; if (gs_get_device_type() == GS_DEVICE_OPENGL) { if (!video->default_rect_effect) success = false; } if (!video->opaque_effect) success = false; if (!video->solid_effect) success = false; if (!video->conversion_effect) success = false; if (!video->premultiplied_alpha_effect) success = false; if (!video->transparent_texture) success = false; if (!video->point_sampler) success = false; gs_leave_context(); profile_end(shader_comp_name); profile_end(obs_init_graphics_name); return success ? 
OBS_VIDEO_SUCCESS : OBS_VIDEO_FAIL; } static inline void set_video_matrix(struct obs_core_video_mix *video, struct video_output_info *info) { struct matrix4 mat; struct vec4 r_row; if (format_is_yuv(info->format)) { video_format_get_parameters_for_format(info->colorspace, info->range, info->format, (float *)&mat, NULL, NULL); matrix4_inv(&mat, &mat); /* swap R and G */ r_row = mat.x; mat.x = mat.y; mat.y = r_row; } else { matrix4_identity(&mat); } memcpy(video->color_matrix, &mat, sizeof(float) * 16); } static int obs_init_video_mix(struct obs_video_info *ovi, struct obs_core_video_mix *video) { struct video_output_info vi; pthread_mutex_init_value(&video->gpu_encoder_mutex); make_video_info(&vi, ovi); video->ovi = *ovi; /* main view graphics thread drives all frame output, * so share FPS settings for aux views */ pthread_mutex_lock(&obs->video.mixes_mutex); size_t num = obs->video.mixes.num; if (num && obs->data.main_canvas->mix) { struct obs_video_info main_ovi = obs->data.main_canvas->mix->ovi; video->ovi.fps_num = main_ovi.fps_num; video->ovi.fps_den = main_ovi.fps_den; } pthread_mutex_unlock(&obs->video.mixes_mutex); video->gpu_conversion = ovi->gpu_conversion; video->gpu_was_active = false; video->raw_was_active = false; video->was_active = false; set_video_matrix(video, &vi); int errorcode = video_output_open(&video->video, &vi); if (errorcode != VIDEO_OUTPUT_SUCCESS) { if (errorcode == VIDEO_OUTPUT_INVALIDPARAM) { blog(LOG_ERROR, "Invalid video parameters specified"); return OBS_VIDEO_INVALID_PARAM; } else { blog(LOG_ERROR, "Could not open video output"); } return OBS_VIDEO_FAIL; } if (pthread_mutex_init(&video->gpu_encoder_mutex, NULL) < 0) return OBS_VIDEO_FAIL; gs_enter_context(obs->video.graphics); if (video->gpu_conversion && !obs_init_gpu_conversion(video)) return OBS_VIDEO_FAIL; if (!obs_init_textures(video)) return OBS_VIDEO_FAIL; gs_leave_context(); return OBS_VIDEO_SUCCESS; } struct obs_core_video_mix *obs_create_video_mix(struct obs_video_info 
*ovi) { struct obs_core_video_mix *video = bzalloc(sizeof(struct obs_core_video_mix)); if (obs_init_video_mix(ovi, video) != OBS_VIDEO_SUCCESS) { bfree(video); video = NULL; } return video; } static bool restore_canvases(void) { bool success = true; pthread_mutex_lock(&obs->data.canvases_mutex); struct obs_context_data *ctx, *tmp; HASH_ITER (hh, (struct obs_context_data *)obs->data.canvases, ctx, tmp) { obs_canvas_t *canvas = (obs_canvas_t *)ctx; if (canvas->flags & MAIN) continue; if (!obs_canvas_reset_video_internal(canvas, NULL)) { blog(LOG_ERROR, "Failed restoring video mix for canvas '%s'", canvas->context.name); success = false; } } pthread_mutex_unlock(&obs->data.canvases_mutex); return success; } static int obs_init_video(struct obs_video_info *ovi) { struct obs_core_video *video = &obs->video; video->video_frame_interval_ns = util_mul_div64(1000000000ULL, ovi->fps_den, ovi->fps_num); video->video_half_frame_interval_ns = util_mul_div64(500000000ULL, ovi->fps_den, ovi->fps_num); if (pthread_mutex_init(&video->task_mutex, NULL) < 0) return OBS_VIDEO_FAIL; if (pthread_mutex_init(&video->encoder_group_mutex, NULL) < 0) return OBS_VIDEO_FAIL; if (pthread_mutex_init(&video->mixes_mutex, NULL) < 0) return OBS_VIDEO_FAIL; /* Reset main canvas mix first so it remains first in the rendering order. */ if (!obs_canvas_reset_video_internal(obs->data.main_canvas, ovi)) return OBS_VIDEO_FAIL; /* Reset mixes for remaining canvases using their existing video info. 
*/ if (!restore_canvases()) return OBS_VIDEO_FAIL; int errorcode; #ifdef __APPLE__ pthread_attr_t attr; pthread_attr_init(&attr); pthread_attr_set_qos_class_np(&attr, QOS_CLASS_USER_INTERACTIVE, 0); errorcode = pthread_create(&video->video_thread, &attr, obs_graphics_thread_autorelease, obs); #else errorcode = pthread_create(&video->video_thread, NULL, obs_graphics_thread, obs); #endif if (errorcode != 0) return OBS_VIDEO_FAIL; video->thread_initialized = true; calldata_t parameters = {0}; signal_handler_signal(obs->signals, "video_reset", &parameters); return OBS_VIDEO_SUCCESS; } static void stop_video(void) { pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) video_output_stop(obs->video.mixes.array[i]->video); pthread_mutex_unlock(&obs->video.mixes_mutex); struct obs_core_video *video = &obs->video; void *thread_retval; if (video->thread_initialized) { pthread_join(video->video_thread, &thread_retval); video->thread_initialized = false; } } static void obs_free_render_textures(struct obs_core_video_mix *video) { if (!obs->video.graphics) return; gs_enter_context(obs->video.graphics); for (size_t c = 0; c < NUM_CHANNELS; c++) { if (video->mapped_surfaces[c]) { gs_stagesurface_unmap(video->mapped_surfaces[c]); video->mapped_surfaces[c] = NULL; } } for (size_t i = 0; i < NUM_TEXTURES; i++) { for (size_t c = 0; c < NUM_CHANNELS; c++) { if (video->copy_surfaces[i][c]) { gs_stagesurface_destroy(video->copy_surfaces[i][c]); video->copy_surfaces[i][c] = NULL; } video->active_copy_surfaces[i][c] = NULL; } #ifdef _WIN32 if (video->copy_surfaces_encode[i]) { gs_stagesurface_destroy(video->copy_surfaces_encode[i]); video->copy_surfaces_encode[i] = NULL; } #endif } gs_texture_destroy(video->render_texture); for (size_t c = 0; c < NUM_CHANNELS; c++) { if (video->convert_textures[c]) { gs_texture_destroy(video->convert_textures[c]); video->convert_textures[c] = NULL; } if (video->convert_textures_encode[c]) { 
gs_texture_destroy(video->convert_textures_encode[c]); video->convert_textures_encode[c] = NULL; } } gs_texture_destroy(video->output_texture); video->render_texture = NULL; video->output_texture = NULL; gs_leave_context(); } void obs_free_video_mix(struct obs_core_video_mix *video) { if (video->video) { video_output_close(video->video); video->video = NULL; obs_free_render_textures(video); deque_free(&video->vframe_info_buffer); deque_free(&video->vframe_info_buffer_gpu); video->texture_rendered = false; memset(video->textures_copied, 0, sizeof(video->textures_copied)); video->texture_converted = false; pthread_mutex_destroy(&video->gpu_encoder_mutex); pthread_mutex_init_value(&video->gpu_encoder_mutex); da_free(video->gpu_encoders); video->gpu_encoder_active = 0; video->cur_texture = 0; } bfree(video); } static void obs_free_video(void) { pthread_mutex_lock(&obs->video.mixes_mutex); size_t num_views = 0; for (size_t i = 0; i < obs->video.mixes.num; i++) { struct obs_core_video_mix *video = obs->video.mixes.array[i]; if (video && video->view) num_views++; obs_free_video_mix(video); obs->video.mixes.array[i] = NULL; } da_free(obs->video.mixes); if (num_views > 0) blog(LOG_WARNING, "Number of remaining views: %zu", num_views); pthread_mutex_unlock(&obs->video.mixes_mutex); pthread_mutex_destroy(&obs->video.mixes_mutex); pthread_mutex_init_value(&obs->video.mixes_mutex); for (size_t i = 0; i < obs->video.ready_encoder_groups.num; i++) { obs_weak_encoder_release(obs->video.ready_encoder_groups.array[i]); } da_free(obs->video.ready_encoder_groups); pthread_mutex_destroy(&obs->video.encoder_group_mutex); pthread_mutex_init_value(&obs->video.encoder_group_mutex); pthread_mutex_destroy(&obs->video.task_mutex); pthread_mutex_init_value(&obs->video.task_mutex); deque_free(&obs->video.tasks); } static void obs_free_graphics(void) { struct obs_core_video *video = &obs->video; if (video->graphics) { gs_enter_context(video->graphics); 
gs_texture_destroy(video->transparent_texture); gs_samplerstate_destroy(video->point_sampler); gs_effect_destroy(video->default_effect); gs_effect_destroy(video->default_rect_effect); gs_effect_destroy(video->opaque_effect); gs_effect_destroy(video->solid_effect); gs_effect_destroy(video->conversion_effect); gs_effect_destroy(video->bicubic_effect); gs_effect_destroy(video->repeat_effect); gs_effect_destroy(video->lanczos_effect); gs_effect_destroy(video->area_effect); gs_effect_destroy(video->bilinear_lowres_effect); video->default_effect = NULL; gs_leave_context(); gs_destroy(video->graphics); video->graphics = NULL; } } void set_monitoring_duplication_source(void *param) { obs_source_t *src = param; struct obs_core_audio *audio = &obs->audio; audio->monitoring_duplicating_source = src; } static void apply_monitoring_deduplication(void *ignored, calldata_t *cd) { UNUSED_PARAMETER(ignored); obs_source_t *src = calldata_ptr(cd, "source"); obs_queue_task(OBS_TASK_AUDIO, set_monitoring_duplication_source, src, false); } static void set_audio_thread(void *unused); static bool obs_init_audio(struct audio_output_info *ai) { struct obs_core_audio *audio = &obs->audio; int errorcode; pthread_mutex_init_value(&audio->monitoring_mutex); if (pthread_mutex_init_recursive(&audio->monitoring_mutex) != 0) return false; if (pthread_mutex_init(&audio->task_mutex, NULL) != 0) return false; struct obs_task_info audio_init = {.task = set_audio_thread}; deque_push_back(&audio->tasks, &audio_init, sizeof(audio_init)); audio->monitoring_device_name = bstrdup("Default"); audio->monitoring_device_id = bstrdup("default"); audio->monitoring_duplicating_source = NULL; signal_handler_add(obs->signals, "void deduplication_changed(ptr source)"); signal_handler_connect(obs->signals, "deduplication_changed", apply_monitoring_deduplication, NULL); errorcode = audio_output_open(&audio->audio, ai); if (errorcode == AUDIO_OUTPUT_SUCCESS) return true; else if (errorcode == AUDIO_OUTPUT_INVALIDPARAM) 
blog(LOG_ERROR, "Invalid audio parameters specified"); else blog(LOG_ERROR, "Could not open audio output"); return false; } static void stop_audio(void) { struct obs_core_audio *audio = &obs->audio; if (audio->audio) { audio_output_close(audio->audio); audio->audio = NULL; } } static void obs_free_audio(void) { struct obs_core_audio *audio = &obs->audio; if (audio->audio) audio_output_close(audio->audio); deque_free(&audio->buffered_timestamps); da_free(audio->render_order); da_free(audio->root_nodes); da_free(audio->monitors); bfree(audio->monitoring_device_name); bfree(audio->monitoring_device_id); deque_free(&audio->tasks); pthread_mutex_destroy(&audio->task_mutex); pthread_mutex_destroy(&audio->monitoring_mutex); memset(audio, 0, sizeof(struct obs_core_audio)); } static bool obs_init_data(void) { struct obs_core_data *data = &obs->data; assert(data != NULL); pthread_mutex_init_value(&obs->data.displays_mutex); pthread_mutex_init_value(&obs->data.draw_callbacks_mutex); if (pthread_mutex_init_recursive(&data->sources_mutex) != 0) goto fail; if (pthread_mutex_init_recursive(&data->audio_sources_mutex) != 0) goto fail; if (pthread_mutex_init_recursive(&data->displays_mutex) != 0) goto fail; if (pthread_mutex_init_recursive(&data->outputs_mutex) != 0) goto fail; if (pthread_mutex_init_recursive(&data->encoders_mutex) != 0) goto fail; if (pthread_mutex_init_recursive(&data->services_mutex) != 0) goto fail; if (pthread_mutex_init_recursive(&obs->data.draw_callbacks_mutex) != 0) goto fail; if (pthread_mutex_init_recursive(&obs->data.canvases_mutex) != 0) goto fail; data->sources = NULL; data->public_sources = NULL; data->canvases = NULL; data->named_canvases = NULL; data->private_data = obs_data_create(); data->valid = true; fail: return data->valid; } void obs_main_view_free(struct obs_view *view) { if (!view) return; for (size_t i = 0; i < MAX_CHANNELS; i++) obs_source_release(view->channels[i]); memset(view->channels, 0, sizeof(view->channels)); 
pthread_mutex_destroy(&view->channels_mutex); } #define FREE_OBS_HASH_TABLE(handle, table, type) \ do { \ struct obs_context_data *ctx, *tmp; \ int unfreed = 0; \ HASH_ITER (handle, *(struct obs_context_data **)table, ctx, tmp) { \ obs_##type##_destroy((obs_##type##_t *)ctx); \ unfreed++; \ } \ if (unfreed) \ blog(LOG_INFO, "\t%d " #type "(s) were remaining", unfreed); \ } while (false) #define FREE_OBS_LINKED_LIST(type) \ do { \ int unfreed = 0; \ while (data->first_##type) { \ obs_##type##_destroy(data->first_##type); \ unfreed++; \ } \ if (unfreed) \ blog(LOG_INFO, "\t%d " #type "(s) were remaining", unfreed); \ } while (false) static void obs_free_data(void) { struct obs_core_data *data = &obs->data; data->valid = false; blog(LOG_INFO, "Freeing OBS context data"); /* Free main canvas */ obs_canvas_release(data->main_canvas); FREE_OBS_LINKED_LIST(output); FREE_OBS_LINKED_LIST(encoder); FREE_OBS_LINKED_LIST(display); FREE_OBS_LINKED_LIST(service); FREE_OBS_HASH_TABLE(hh, &data->public_sources, source); FREE_OBS_HASH_TABLE(hh_uuid, &data->sources, source); FREE_OBS_HASH_TABLE(hh, &data->named_canvases, canvas); FREE_OBS_HASH_TABLE(hh_uuid, &data->canvases, canvas); os_task_queue_wait(obs->destruction_task_thread); pthread_mutex_destroy(&data->sources_mutex); pthread_mutex_destroy(&data->audio_sources_mutex); pthread_mutex_destroy(&data->displays_mutex); pthread_mutex_destroy(&data->outputs_mutex); pthread_mutex_destroy(&data->encoders_mutex); pthread_mutex_destroy(&data->services_mutex); pthread_mutex_destroy(&data->draw_callbacks_mutex); pthread_mutex_destroy(&data->canvases_mutex); da_free(data->draw_callbacks); da_free(data->rendered_callbacks); da_free(data->tick_callbacks); obs_data_release(data->private_data); for (size_t i = 0; i < data->protocols.num; i++) bfree(data->protocols.array[i]); da_free(data->protocols); da_free(data->sources_to_tick); } static const char *obs_signals[] = { "void source_create(ptr source)", "void source_create_canvas(ptr source, 
ptr canvas)", "void source_destroy(ptr source)", "void source_remove(ptr source)", "void source_update(ptr source)", "void source_save(ptr source)", "void source_load(ptr source)", "void source_activate(ptr source)", "void source_deactivate(ptr source)", "void source_show(ptr source)", "void source_hide(ptr source)", "void source_audio_activate(ptr source)", "void source_audio_deactivate(ptr source)", "void source_filter_add(ptr source, ptr filter)", "void source_filter_remove(ptr source, ptr filter)", "void source_rename(ptr source, string new_name, string prev_name)", "void source_volume(ptr source, in out float volume)", "void source_volume_level(ptr source, float level, float magnitude, float peak)", "void source_transition_start(ptr source)", "void source_transition_video_stop(ptr source)", "void source_transition_stop(ptr source)", "void channel_change(int channel, in out ptr source, ptr prev_source)", "void hotkey_layout_change()", "void hotkey_register(ptr hotkey)", "void hotkey_unregister(ptr hotkey)", "void hotkey_bindings_changed(ptr hotkey)", "void canvas_create(ptr canvas)", "void canvas_remove(ptr canvas)", "void canvas_destroy(ptr canvas)", "void canvas_video_reset(ptr canvas)", "void canvas_rename(ptr canvas, string new_name, string prev_name)", "void video_reset()", NULL, }; static inline bool obs_init_handlers(void) { obs->signals = signal_handler_create(); if (!obs->signals) return false; obs->procs = proc_handler_create(); if (!obs->procs) return false; return signal_handler_add_array(obs->signals, obs_signals); } static pthread_once_t obs_pthread_once_init_token = PTHREAD_ONCE_INIT; static inline bool obs_init_hotkeys(void) { struct obs_core_hotkeys *hotkeys = &obs->hotkeys; bool success = false; assert(hotkeys != NULL); hotkeys->hotkeys = NULL; hotkeys->hotkey_pairs = NULL; hotkeys->signals = obs->signals; hotkeys->name_map_init_token = obs_pthread_once_init_token; hotkeys->mute = bstrdup("Mute"); hotkeys->unmute = bstrdup("Unmute"); 
hotkeys->push_to_mute = bstrdup("Push-to-mute"); hotkeys->push_to_talk = bstrdup("Push-to-talk"); hotkeys->sceneitem_show = bstrdup("Show '%1'"); hotkeys->sceneitem_hide = bstrdup("Hide '%1'"); if (!obs_hotkeys_platform_init(hotkeys)) return false; if (pthread_mutex_init_recursive(&hotkeys->mutex) != 0) goto fail; if (os_event_init(&hotkeys->stop_event, OS_EVENT_TYPE_MANUAL) != 0) goto fail; if (pthread_create(&hotkeys->hotkey_thread, NULL, obs_hotkey_thread, NULL)) goto fail; hotkeys->strict_modifiers = true; hotkeys->hotkey_thread_initialized = true; success = true; fail: return success; } static inline void stop_hotkeys(void) { struct obs_core_hotkeys *hotkeys = &obs->hotkeys; void *thread_ret; if (hotkeys->hotkey_thread_initialized) { os_event_signal(hotkeys->stop_event); pthread_join(hotkeys->hotkey_thread, &thread_ret); hotkeys->hotkey_thread_initialized = false; } os_event_destroy(hotkeys->stop_event); obs_hotkeys_free(); } static inline void obs_free_hotkeys(void) { struct obs_core_hotkeys *hotkeys = &obs->hotkeys; bfree(hotkeys->mute); bfree(hotkeys->unmute); bfree(hotkeys->push_to_mute); bfree(hotkeys->push_to_talk); bfree(hotkeys->sceneitem_show); bfree(hotkeys->sceneitem_hide); obs_hotkey_name_map_free(); obs_hotkeys_platform_free(hotkeys); pthread_mutex_destroy(&hotkeys->mutex); } extern const struct obs_source_info scene_info; extern const struct obs_source_info group_info; static const char *submix_name(void *unused) { UNUSED_PARAMETER(unused); return "Audio line (internal use only)"; } const struct obs_source_info audio_line_info = { .id = "audio_line", .type = OBS_SOURCE_TYPE_INPUT, .output_flags = OBS_SOURCE_AUDIO | OBS_SOURCE_CAP_DISABLED | OBS_SOURCE_SUBMIX, .get_name = submix_name, }; extern void log_system_info(void); static bool obs_init(const char *locale, const char *module_config_path, profiler_name_store_t *store) { obs = bzalloc(sizeof(struct obs_core)); pthread_mutex_init_value(&obs->audio.monitoring_mutex); 
pthread_mutex_init_value(&obs->audio.task_mutex); pthread_mutex_init_value(&obs->video.task_mutex); pthread_mutex_init_value(&obs->video.encoder_group_mutex); pthread_mutex_init_value(&obs->video.mixes_mutex); obs->name_store_owned = !store; obs->name_store = store ? store : profiler_name_store_create(); if (!obs->name_store) { blog(LOG_ERROR, "Couldn't create profiler name store"); return false; } log_system_info(); if (!obs_init_data()) return false; if (!obs_init_handlers()) return false; if (!obs_init_hotkeys()) return false; /* Create persistent main canvas. */ obs->data.main_canvas = obs_create_main_canvas(); if (!obs->data.main_canvas) return false; obs->destruction_task_thread = os_task_queue_create(); if (!obs->destruction_task_thread) return false; if (module_config_path) obs->module_config_path = bstrdup(module_config_path); obs->locale = bstrdup(locale); obs_register_source(&scene_info); obs_register_source(&group_info); obs_register_source(&audio_line_info); add_default_module_paths(); return true; } #ifdef _WIN32 extern bool initialize_com(void); extern void uninitialize_com(void); static bool com_initialized = false; #endif /* Separate from actual context initialization * since this can be set before startup and persist * after shutdown. 
*/ static DARRAY(struct dstr) core_module_paths = {0}; char *obs_find_data_file(const char *file) { struct dstr path = {0}; char *result = find_libobs_data_file(file); if (result) return result; for (size_t i = 0; i < core_module_paths.num; ++i) { if (check_path(file, core_module_paths.array[i].array, &path)) return path.array; } blog(LOG_ERROR, "Failed to find file '%s' in libobs data directory", file); dstr_free(&path); return NULL; } void obs_add_data_path(const char *path) { struct dstr *new_path = da_push_back_new(core_module_paths); dstr_init_copy(new_path, path); } bool obs_remove_data_path(const char *path) { for (size_t i = 0; i < core_module_paths.num; ++i) { int result = dstr_cmp(&core_module_paths.array[i], path); if (result == 0) { dstr_free(&core_module_paths.array[i]); da_erase(core_module_paths, i); return true; } } return false; } static const char *obs_startup_name = "obs_startup"; bool obs_startup(const char *locale, const char *module_config_path, profiler_name_store_t *store) { bool success; profile_start(obs_startup_name); if (obs) { blog(LOG_WARNING, "Tried to call obs_startup more than once"); return false; } #ifdef _WIN32 com_initialized = initialize_com(); #endif success = obs_init(locale, module_config_path, store); profile_end(obs_startup_name); if (!success) obs_shutdown(); return success; } static struct obs_cmdline_args cmdline_args = {0, NULL}; void obs_set_cmdline_args(int argc, const char *const *argv) { char *data; size_t len; int i; /* Once argc is set (non-zero) we shouldn't call again */ if (cmdline_args.argc) return; cmdline_args.argc = argc; /* Safely copy over argv */ len = 0; for (i = 0; i < argc; i++) len += strlen(argv[i]) + 1; cmdline_args.argv = bmalloc(sizeof(char *) * (argc + 1) + len); data = (char *)cmdline_args.argv + sizeof(char *) * (argc + 1); for (i = 0; i < argc; i++) { cmdline_args.argv[i] = data; len = strlen(argv[i]) + 1; memcpy(data, argv[i], len); data += len; } cmdline_args.argv[argc] = NULL; } struct 
obs_cmdline_args obs_get_cmdline_args(void) { return cmdline_args; } void obs_shutdown(void) { struct obs_module *module; obs_wait_for_destroy_queue(); for (size_t i = 0; i < obs->source_types.num; i++) { struct obs_source_info *item = &obs->source_types.array[i]; if (item->type_data && item->free_type_data) item->free_type_data(item->type_data); if (item->id) bfree((void *)item->id); } da_free(obs->source_types); #define FREE_REGISTERED_TYPES(structure, list) \ do { \ for (size_t i = 0; i < list.num; i++) { \ struct structure *item = &list.array[i]; \ if (item->type_data && item->free_type_data) \ item->free_type_data(item->type_data); \ } \ da_free(list); \ } while (false) FREE_REGISTERED_TYPES(obs_output_info, obs->output_types); FREE_REGISTERED_TYPES(obs_encoder_info, obs->encoder_types); FREE_REGISTERED_TYPES(obs_service_info, obs->service_types); #undef FREE_REGISTERED_TYPES da_free(obs->input_types); da_free(obs->filter_types); da_free(obs->transition_types); stop_video(); stop_audio(); stop_hotkeys(); module = obs->first_module; while (module) { struct obs_module *next = module->next; free_module(module); module = next; } obs->first_module = NULL; module = obs->first_disabled_module; while (module) { struct obs_module *next = module->next; free_module(module); module = next; } obs->first_disabled_module = NULL; obs_free_data(); obs_free_audio(); obs_free_video(); os_task_queue_destroy(obs->destruction_task_thread); obs_free_hotkeys(); obs_free_graphics(); proc_handler_destroy(obs->procs); signal_handler_destroy(obs->signals); obs->procs = NULL; obs->signals = NULL; for (size_t i = 0; i < obs->module_paths.num; i++) { free_module_path(obs->module_paths.array + i); } da_free(obs->module_paths); for (size_t i = 0; i < obs->safe_modules.num; i++) { bfree(obs->safe_modules.array[i]); } da_free(obs->safe_modules); for (size_t i = 0; i < obs->disabled_modules.num; i++) { bfree(obs->disabled_modules.array[i]); } da_free(obs->disabled_modules); for (size_t i = 0; i 
< obs->core_modules.num; i++) { bfree(obs->core_modules.array[i]); } da_free(obs->core_modules); if (obs->name_store_owned) profiler_name_store_free(obs->name_store); bfree(obs->module_config_path); bfree(obs->locale); bfree(obs); obs = NULL; bfree(cmdline_args.argv); #ifdef _WIN32 if (com_initialized) uninitialize_com(); #endif } bool obs_initialized(void) { return obs != NULL; } uint32_t obs_get_version(void) { return LIBOBS_API_VER; } const char *obs_get_version_string(void) { return OBS_VERSION; } void obs_set_locale(const char *locale) { struct obs_module *module; if (obs->locale) bfree(obs->locale); obs->locale = bstrdup(locale); module = obs->first_module; while (module) { if (module->set_locale) module->set_locale(locale); module = module->next; } } const char *obs_get_locale(void) { return obs->locale; } #define OBS_SIZE_MIN 2 #define OBS_SIZE_MAX (32 * 1024) static inline bool size_valid(uint32_t width, uint32_t height) { return (width >= OBS_SIZE_MIN && height >= OBS_SIZE_MIN && width <= OBS_SIZE_MAX && height <= OBS_SIZE_MAX); } int obs_reset_video(struct obs_video_info *ovi) { if (!obs) return OBS_VIDEO_FAIL; /* don't allow changing of video settings if active. 
*/ if (obs_video_active()) return OBS_VIDEO_CURRENTLY_ACTIVE; if (!size_valid(ovi->output_width, ovi->output_height) || !size_valid(ovi->base_width, ovi->base_height)) return OBS_VIDEO_INVALID_PARAM; stop_video(); obs_free_canvas_mixes(); obs_free_video(); /* align to multiple-of-two and SSE alignment sizes */ ovi->output_width &= 0xFFFFFFFC; ovi->output_height &= 0xFFFFFFFE; if (!obs->video.graphics) { int errorcode = obs_init_graphics(ovi); if (errorcode != OBS_VIDEO_SUCCESS) { obs_free_graphics(); return errorcode; } } const char *scale_type_name = ""; switch (ovi->scale_type) { case OBS_SCALE_DISABLE: scale_type_name = "Disabled"; break; case OBS_SCALE_POINT: scale_type_name = "Point"; break; case OBS_SCALE_BICUBIC: scale_type_name = "Bicubic"; break; case OBS_SCALE_BILINEAR: scale_type_name = "Bilinear"; break; case OBS_SCALE_LANCZOS: scale_type_name = "Lanczos"; break; case OBS_SCALE_AREA: scale_type_name = "Area"; break; } bool yuv = format_is_yuv(ovi->output_format); const char *yuv_format = get_video_colorspace_name(ovi->colorspace); const char *yuv_range = get_video_range_name(ovi->output_format, ovi->range); blog(LOG_INFO, "---------------------------------"); blog(LOG_INFO, "video settings reset:\n" "\tbase resolution: %dx%d\n" "\toutput resolution: %dx%d\n" "\tdownscale filter: %s\n" "\tfps: %d/%d\n" "\tformat: %s\n" "\tYUV mode: %s%s%s", ovi->base_width, ovi->base_height, ovi->output_width, ovi->output_height, scale_type_name, ovi->fps_num, ovi->fps_den, get_video_format_name(ovi->output_format), yuv ? yuv_format : "None", yuv ? "/" : "", yuv ? yuv_range : ""); source_profiler_reset_video(ovi); return obs_init_video(ovi); } #ifndef SEC_TO_MSEC #define SEC_TO_MSEC 1000 #endif bool obs_reset_audio2(const struct obs_audio_info2 *oai) { struct obs_core_audio *audio = &obs->audio; struct audio_output_info ai; /* don't allow changing of audio settings if active. 
*/ if (!obs || (audio->audio && audio_output_active(audio->audio))) return false; obs_free_audio(); if (!oai) return true; if (oai->max_buffering_ms) { uint32_t max_frames = oai->max_buffering_ms * oai->samples_per_sec / SEC_TO_MSEC; max_frames += (AUDIO_OUTPUT_FRAMES - 1); audio->max_buffering_ticks = max_frames / AUDIO_OUTPUT_FRAMES; } else { audio->max_buffering_ticks = 45; } audio->fixed_buffer = oai->fixed_buffering; int max_buffering_ms = audio->max_buffering_ticks * AUDIO_OUTPUT_FRAMES * SEC_TO_MSEC / (int)oai->samples_per_sec; ai.name = "Audio"; ai.samples_per_sec = oai->samples_per_sec; ai.format = AUDIO_FORMAT_FLOAT_PLANAR; ai.speakers = oai->speakers; ai.input_callback = audio_callback; blog(LOG_INFO, "---------------------------------"); blog(LOG_INFO, "audio settings reset:\n" "\tsamples per sec: %d\n" "\tspeakers: %d\n" "\tmax buffering: %d milliseconds\n" "\tbuffering type: %s", (int)ai.samples_per_sec, (int)ai.speakers, max_buffering_ms, oai->fixed_buffering ? "fixed" : "dynamically increasing"); return obs_init_audio(&ai); } bool obs_reset_audio(const struct obs_audio_info *oai) { struct obs_audio_info2 oai2 = { .samples_per_sec = oai->samples_per_sec, .speakers = oai->speakers, }; return obs_reset_audio2(&oai2); } bool obs_get_video_info(struct obs_video_info *ovi) { if (!obs->video.graphics || !obs->data.main_canvas->mix) return false; *ovi = obs->data.main_canvas->mix->ovi; return true; } float obs_get_video_sdr_white_level(void) { struct obs_core_video *video = &obs->video; return video->graphics ? video->sdr_white_level : 300.f; } float obs_get_video_hdr_nominal_peak_level(void) { struct obs_core_video *video = &obs->video; return video->graphics ? 
video->hdr_nominal_peak_level : 1000.f; } void obs_set_video_levels(float sdr_white_level, float hdr_nominal_peak_level) { struct obs_core_video *video = &obs->video; assert(video->graphics); video->sdr_white_level = sdr_white_level; video->hdr_nominal_peak_level = hdr_nominal_peak_level; } bool obs_get_audio_info(struct obs_audio_info *oai) { struct obs_core_audio *audio = &obs->audio; const struct audio_output_info *info; if (!oai || !audio->audio) return false; info = audio_output_get_info(audio->audio); oai->samples_per_sec = info->samples_per_sec; oai->speakers = info->speakers; return true; } bool obs_get_audio_info2(struct obs_audio_info2 *oai2) { struct obs_core_audio *audio = &obs->audio; struct obs_audio_info oai; if (!obs_get_audio_info(&oai) || !oai2 || !audio->audio) { return false; } else { oai2->samples_per_sec = oai.samples_per_sec; oai2->speakers = oai.speakers; oai2->fixed_buffering = audio->fixed_buffer; oai2->max_buffering_ms = audio->max_buffering_ticks * AUDIO_OUTPUT_FRAMES * SEC_TO_MSEC / (int)oai2->samples_per_sec; return true; } } bool obs_enum_source_types(size_t idx, const char **id) { if (idx >= obs->source_types.num) return false; *id = obs->source_types.array[idx].id; return true; } bool obs_enum_input_types(size_t idx, const char **id) { if (idx >= obs->input_types.num) return false; *id = obs->input_types.array[idx].id; return true; } bool obs_enum_input_types2(size_t idx, const char **id, const char **unversioned_id) { if (idx >= obs->input_types.num) return false; if (id) *id = obs->input_types.array[idx].id; if (unversioned_id) *unversioned_id = obs->input_types.array[idx].unversioned_id; return true; } const char *obs_get_latest_input_type_id(const char *unversioned_id) { struct obs_source_info *latest = NULL; int version = -1; if (!unversioned_id) return NULL; for (size_t i = 0; i < obs->source_types.num; i++) { struct obs_source_info *info = &obs->source_types.array[i]; if (strcmp(info->unversioned_id, unversioned_id) == 0 && 
(int)info->version > version) { latest = info; version = info->version; } } assert(!!latest); if (!latest) return NULL; return latest->id; } bool obs_enum_filter_types(size_t idx, const char **id) { if (idx >= obs->filter_types.num) return false; *id = obs->filter_types.array[idx].id; return true; } bool obs_enum_transition_types(size_t idx, const char **id) { if (idx >= obs->transition_types.num) return false; *id = obs->transition_types.array[idx].id; return true; } bool obs_enum_output_types(size_t idx, const char **id) { if (idx >= obs->output_types.num) return false; *id = obs->output_types.array[idx].id; return true; } bool obs_enum_encoder_types(size_t idx, const char **id) { if (idx >= obs->encoder_types.num) return false; *id = obs->encoder_types.array[idx].id; return true; } bool obs_enum_service_types(size_t idx, const char **id) { if (idx >= obs->service_types.num) return false; *id = obs->service_types.array[idx].id; return true; } void obs_enter_graphics(void) { if (obs->video.graphics) gs_enter_context(obs->video.graphics); } void obs_leave_graphics(void) { if (obs->video.graphics) gs_leave_context(); } audio_t *obs_get_audio(void) { return obs->audio.audio; } video_t *obs_get_video(void) { return obs->data.main_canvas->mix->video; } obs_source_t *obs_get_output_source(uint32_t channel) { return obs_canvas_get_channel(obs->data.main_canvas, channel); } void obs_set_output_source(uint32_t channel, obs_source_t *source) { obs_canvas_set_channel(obs->data.main_canvas, channel, source); } void obs_enum_sources(bool (*enum_proc)(void *, obs_source_t *), void *param) { obs_source_t *source; pthread_mutex_lock(&obs->data.sources_mutex); source = obs->data.sources; while (source) { obs_source_t *s = obs_source_get_ref(source); if (s) { if (!s->context.private) { if (s->info.type == OBS_SOURCE_TYPE_INPUT && !enum_proc(param, s)) { obs_source_release(s); break; } else if (strcmp(s->info.id, group_info.id) == 0 && !enum_proc(param, s)) { obs_source_release(s); 
break; } } obs_source_release(s); } source = (obs_source_t *)source->context.hh_uuid.next; } pthread_mutex_unlock(&obs->data.sources_mutex); } void obs_canvas_enum_scenes(obs_canvas_t *canvas, bool (*enum_proc)(void *, obs_source_t *), void *param) { obs_source_t *source; pthread_mutex_lock(&canvas->sources_mutex); source = canvas->sources; while (source) { obs_source_t *s = obs_source_get_ref(source); if (s) { if (source->info.type == OBS_SOURCE_TYPE_SCENE && !enum_proc(param, s)) { obs_source_release(s); break; } obs_source_release(s); } source = (obs_source_t *)source->context.hh.next; } pthread_mutex_unlock(&canvas->sources_mutex); } void obs_enum_scenes(bool (*enum_proc)(void *, obs_source_t *), void *param) { obs_canvas_enum_scenes(obs->data.main_canvas, enum_proc, param); } static inline void obs_enum(void *pstart, pthread_mutex_t *mutex, void *proc, void *param) { struct obs_context_data **start = pstart, *context; bool (*enum_proc)(void *, void *) = proc; assert(start); assert(mutex); assert(enum_proc); pthread_mutex_lock(mutex); context = *start; while (context) { if (!enum_proc(param, context)) break; context = context->next; } pthread_mutex_unlock(mutex); } static inline void obs_enum_uuid(void *pstart, pthread_mutex_t *mutex, void *proc, void *param) { struct obs_context_data **start = pstart, *context, *tmp; bool (*enum_proc)(void *, void *) = proc; assert(start); assert(mutex); assert(enum_proc); pthread_mutex_lock(mutex); HASH_ITER (hh_uuid, *start, context, tmp) { if (!enum_proc(param, context)) break; } pthread_mutex_unlock(mutex); } void obs_enum_all_sources(bool (*enum_proc)(void *, obs_source_t *), void *param) { obs_enum_uuid(&obs->data.sources, &obs->data.sources_mutex, enum_proc, param); } void obs_enum_outputs(bool (*enum_proc)(void *, obs_output_t *), void *param) { obs_enum(&obs->data.first_output, &obs->data.outputs_mutex, enum_proc, param); } void obs_enum_encoders(bool (*enum_proc)(void *, obs_encoder_t *), void *param) { 
obs_enum(&obs->data.first_encoder, &obs->data.encoders_mutex, enum_proc, param); } void obs_enum_services(bool (*enum_proc)(void *, obs_service_t *), void *param) { obs_enum(&obs->data.first_service, &obs->data.services_mutex, enum_proc, param); } void obs_enum_canvases(bool (*enum_proc)(void *, obs_canvas_t *), void *param) { struct obs_context_data *start = (struct obs_context_data *)obs->data.named_canvases; struct obs_context_data *context, *tmp; pthread_mutex_lock(&obs->data.canvases_mutex); HASH_ITER (hh, start, context, tmp) { obs_canvas_t *canvas = (obs_canvas_t *)context; if (!enum_proc(param, canvas)) break; } pthread_mutex_unlock(&obs->data.canvases_mutex); } static inline void *get_context_by_name(void *vfirst, const char *name, pthread_mutex_t *mutex, void *(*addref)(void *)) { struct obs_context_data **first = vfirst; struct obs_context_data *context; pthread_mutex_lock(mutex); /* If context list head has a hash table, look the name up in there */ if (*first && (*first)->hh.tbl) { HASH_FIND_STR(*first, name, context); } else { context = *first; while (context) { if (!context->private && strcmp(context->name, name) == 0) { break; } context = context->next; } } if (context) addref(context); pthread_mutex_unlock(mutex); return context; } static void *get_context_by_uuid(void *ptable, const char *uuid, pthread_mutex_t *mutex, void *(*addref)(void *)) { struct obs_context_data **ht = ptable; struct obs_context_data *context; pthread_mutex_lock(mutex); HASH_FIND_UUID(*ht, uuid, context); if (context) addref(context); pthread_mutex_unlock(mutex); return context; } static inline void *obs_source_addref_safe_(void *ref) { return obs_source_get_ref(ref); } static inline void *obs_output_addref_safe_(void *ref) { return obs_output_get_ref(ref); } static inline void *obs_encoder_addref_safe_(void *ref) { return obs_encoder_get_ref(ref); } static inline void *obs_service_addref_safe_(void *ref) { return obs_service_get_ref(ref); } static inline void 
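/* get_context_by_name() takes its reference while still holding the list
 * mutex, so the object cannot be destroyed between the lookup and the
 * return. A hedged sketch of that contract using a plain refcount and a
 * linear scan (the hash-table fast path is omitted); ctx2 and
 * ctx_get_by_name are illustrative names, not libobs API. */

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>

struct ctx2 {
	const char *name;
	int refs;
	struct ctx2 *next;
};

static struct ctx2 *ctx_get_by_name(struct ctx2 *first, pthread_mutex_t *mutex,
				    const char *name)
{
	struct ctx2 *found = NULL;

	pthread_mutex_lock(mutex);
	for (struct ctx2 *c = first; c; c = c->next) {
		if (strcmp(c->name, name) == 0) {
			c->refs++; /* addref before dropping the lock */
			found = c;
			break;
		}
	}
	pthread_mutex_unlock(mutex);

	return found; /* caller owns one reference, or NULL */
}
```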
*obs_canvas_addref_safe_(void *ref) { return obs_canvas_get_ref(ref); } obs_source_t *obs_get_source_by_name(const char *name) { obs_source_t *source = get_context_by_name(&obs->data.public_sources, name, &obs->data.sources_mutex, obs_source_addref_safe_); /* For backwards compat: Also look up source name in main canvas's scenes list. */ if (!source) { source = get_context_by_name(&obs->data.main_canvas->sources, name, &obs->data.main_canvas->sources_mutex, obs_source_addref_safe_); } return source; } obs_source_t *obs_get_source_by_uuid(const char *uuid) { return get_context_by_uuid(&obs->data.sources, uuid, &obs->data.sources_mutex, obs_source_addref_safe_); } obs_canvas_t *obs_get_canvas_by_name(const char *name) { return get_context_by_name(&obs->data.named_canvases, name, &obs->data.canvases_mutex, obs_canvas_addref_safe_); } obs_canvas_t *obs_get_canvas_by_uuid(const char *uuid) { return get_context_by_uuid(&obs->data.canvases, uuid, &obs->data.canvases_mutex, obs_canvas_addref_safe_); } obs_source_t *obs_canvas_get_source_by_name(obs_canvas_t *canvas, const char *name) { return get_context_by_name(&canvas->sources, name, &canvas->sources_mutex, obs_source_addref_safe_); } obs_scene_t *obs_canvas_get_scene_by_name(obs_canvas_t *canvas, const char *name) { obs_source_t *source = obs_canvas_get_source_by_name(canvas, name); obs_scene_t *scene = obs_scene_from_source(source); if (!scene) { obs_source_release(source); return NULL; } return scene; } obs_source_t *obs_get_transition_by_name(const char *name) { struct obs_source **first = &obs->data.sources; struct obs_source *source; pthread_mutex_lock(&obs->data.sources_mutex); /* Transitions are private but can be found via this method, so we * can't look them up by name in the public_sources hash table. 
*/ source = *first; while (source) { if (source->info.type == OBS_SOURCE_TYPE_TRANSITION && strcmp(source->context.name, name) == 0) { source = obs_source_addref_safe_(source); break; } source = (void *)source->context.hh_uuid.next; } pthread_mutex_unlock(&obs->data.sources_mutex); return source; } obs_source_t *obs_get_transition_by_uuid(const char *uuid) { obs_source_t *source = obs_get_source_by_uuid(uuid); if (source && source->info.type == OBS_SOURCE_TYPE_TRANSITION) return source; else if (source) obs_source_release(source); return NULL; } obs_output_t *obs_get_output_by_name(const char *name) { return get_context_by_name(&obs->data.first_output, name, &obs->data.outputs_mutex, obs_output_addref_safe_); } obs_encoder_t *obs_get_encoder_by_name(const char *name) { return get_context_by_name(&obs->data.first_encoder, name, &obs->data.encoders_mutex, obs_encoder_addref_safe_); } obs_service_t *obs_get_service_by_name(const char *name) { return get_context_by_name(&obs->data.first_service, name, &obs->data.services_mutex, obs_service_addref_safe_); } gs_effect_t *obs_get_base_effect(enum obs_base_effect effect) { switch (effect) { case OBS_EFFECT_DEFAULT: return obs->video.default_effect; case OBS_EFFECT_DEFAULT_RECT: return obs->video.default_rect_effect; case OBS_EFFECT_OPAQUE: return obs->video.opaque_effect; case OBS_EFFECT_SOLID: return obs->video.solid_effect; case OBS_EFFECT_REPEAT: return obs->video.repeat_effect; case OBS_EFFECT_BICUBIC: return obs->video.bicubic_effect; case OBS_EFFECT_LANCZOS: return obs->video.lanczos_effect; case OBS_EFFECT_AREA: return obs->video.area_effect; case OBS_EFFECT_BILINEAR_LOWRES: return obs->video.bilinear_lowres_effect; case OBS_EFFECT_PREMULTIPLIED_ALPHA: return obs->video.premultiplied_alpha_effect; } return NULL; } signal_handler_t *obs_get_signal_handler(void) { return obs->signals; } proc_handler_t *obs_get_proc_handler(void) { return obs->procs; } static void obs_render_canvas_texture_internal(obs_canvas_t 
*canvas, enum gs_blend_type src_c, enum gs_blend_type dest_c, enum gs_blend_type src_a, enum gs_blend_type dest_a) { struct obs_core_video_mix *video; gs_texture_t *tex; gs_effect_t *effect; gs_eparam_t *param; video = canvas->mix; if (!video || !video->texture_rendered) return; const enum gs_color_space source_space = video->render_space; const enum gs_color_space current_space = gs_get_color_space(); const char *tech_name = "Draw"; float multiplier = 1.f; switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: if (source_space == GS_CS_709_EXTENDED) tech_name = "DrawTonemap"; break; case GS_CS_709_SCRGB: tech_name = "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.f; break; case GS_CS_709_EXTENDED: break; } const bool previous = gs_framebuffer_srgb_enabled(); gs_enable_framebuffer_srgb(true); tex = video->render_texture; effect = obs_get_base_effect(OBS_EFFECT_DEFAULT); param = gs_effect_get_param_by_name(effect, "image"); gs_effect_set_texture_srgb(param, tex); param = gs_effect_get_param_by_name(effect, "multiplier"); gs_effect_set_float(param, multiplier); gs_blend_state_push(); gs_blend_function_separate(src_c, dest_c, src_a, dest_a); while (gs_effect_loop(effect, tech_name)) gs_draw_sprite(tex, 0, 0, 0); gs_blend_state_pop(); gs_enable_framebuffer_srgb(previous); } void obs_render_main_texture(void) { obs_render_canvas_texture_internal(obs->data.main_canvas, GS_BLEND_ONE, GS_BLEND_INVSRCALPHA, GS_BLEND_ONE, GS_BLEND_INVSRCALPHA); } void obs_render_main_texture_src_color_only(void) { obs_render_canvas_texture_internal(obs->data.main_canvas, GS_BLEND_ONE, GS_BLEND_ZERO, GS_BLEND_ONE, GS_BLEND_INVSRCALPHA); } void obs_render_canvas_texture(obs_canvas_t *canvas) { obs_render_canvas_texture_internal(canvas, GS_BLEND_ONE, GS_BLEND_INVSRCALPHA, GS_BLEND_ONE, GS_BLEND_INVSRCALPHA); } void obs_render_canvas_texture_src_color_only(obs_canvas_t *canvas) { obs_render_canvas_texture_internal(canvas, GS_BLEND_ONE, GS_BLEND_ZERO, GS_BLEND_ONE, 
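/* In the GS_CS_709_SCRGB branch the "DrawMultiply" technique scales by the
 * SDR white level divided by 80, because scRGB pins the value 1.0 to
 * 80 nits. A sketch of that arithmetic; scrgb_multiplier is an
 * illustrative name, not libobs API. */

```c
/* e.g. a 300-nit SDR white level maps to a multiplier of 3.75 in scRGB */
static float scrgb_multiplier(float sdr_white_level_nits)
{
	return sdr_white_level_nits / 80.0f;
}
```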
GS_BLEND_INVSRCALPHA); } gs_texture_t *obs_get_main_texture(void) { struct obs_core_video_mix *video; video = obs->data.main_canvas->mix; if (!video->texture_rendered) return NULL; return video->render_texture; } static obs_source_t *obs_load_source_type(obs_data_t *source_data, bool is_private) { obs_data_array_t *filters = obs_data_get_array(source_data, "filters"); obs_source_t *source; const char *name = obs_data_get_string(source_data, "name"); const char *uuid = obs_data_get_string(source_data, "uuid"); const char *id = obs_data_get_string(source_data, "id"); const char *v_id = obs_data_get_string(source_data, "versioned_id"); obs_data_t *settings = obs_data_get_obj(source_data, "settings"); obs_data_t *hotkeys = obs_data_get_obj(source_data, "hotkeys"); obs_canvas_t *canvas = NULL; double volume; double balance; int64_t sync; uint32_t prev_ver; uint32_t caps; uint32_t flags; uint32_t mixers; int di_order; int di_mode; int monitoring_type; prev_ver = (uint32_t)obs_data_get_int(source_data, "prev_ver"); if (!*v_id) v_id = id; if (obs_source_type_is_scene(id) || obs_source_type_is_group(id)) { const char *canvas_uuid = obs_data_get_string(source_data, "canvas_uuid"); canvas = obs_get_canvas_by_uuid(canvas_uuid); /* Fall back to main canvas if canvas cannot be found. 
*/ if (!canvas) { canvas = obs_canvas_get_ref(obs->data.main_canvas); } } source = obs_source_create_set_last_ver(canvas, v_id, name, uuid, settings, hotkeys, prev_ver, is_private); if (source->owns_info_id) { bfree((void *)source->info.unversioned_id); source->info.unversioned_id = bstrdup(id); } obs_canvas_release(canvas); obs_data_release(hotkeys); caps = obs_source_get_output_flags(source); obs_data_set_default_double(source_data, "volume", 1.0); volume = obs_data_get_double(source_data, "volume"); obs_source_set_volume(source, (float)volume); obs_data_set_default_double(source_data, "balance", 0.5); balance = obs_data_get_double(source_data, "balance"); obs_source_set_balance_value(source, (float)balance); sync = obs_data_get_int(source_data, "sync"); obs_source_set_sync_offset(source, sync); obs_data_set_default_int(source_data, "mixers", 0x3F); mixers = (uint32_t)obs_data_get_int(source_data, "mixers"); obs_source_set_audio_mixers(source, mixers); obs_data_set_default_int(source_data, "flags", source->default_flags); flags = (uint32_t)obs_data_get_int(source_data, "flags"); obs_source_set_flags(source, flags); obs_data_set_default_bool(source_data, "enabled", true); obs_source_set_enabled(source, obs_data_get_bool(source_data, "enabled")); obs_data_set_default_bool(source_data, "muted", false); obs_source_set_muted(source, obs_data_get_bool(source_data, "muted")); obs_data_set_default_bool(source_data, "push-to-mute", false); obs_source_enable_push_to_mute(source, obs_data_get_bool(source_data, "push-to-mute")); obs_data_set_default_int(source_data, "push-to-mute-delay", 0); obs_source_set_push_to_mute_delay(source, obs_data_get_int(source_data, "push-to-mute-delay")); obs_data_set_default_bool(source_data, "push-to-talk", false); obs_source_enable_push_to_talk(source, obs_data_get_bool(source_data, "push-to-talk")); obs_data_set_default_int(source_data, "push-to-talk-delay", 0); obs_source_set_push_to_talk_delay(source, obs_data_get_int(source_data, 
"push-to-talk-delay")); di_mode = (int)obs_data_get_int(source_data, "deinterlace_mode"); obs_source_set_deinterlace_mode(source, (enum obs_deinterlace_mode)di_mode); di_order = (int)obs_data_get_int(source_data, "deinterlace_field_order"); obs_source_set_deinterlace_field_order(source, (enum obs_deinterlace_field_order)di_order); monitoring_type = (int)obs_data_get_int(source_data, "monitoring_type"); if (prev_ver < MAKE_SEMANTIC_VERSION(23, 2, 2)) { if ((caps & OBS_SOURCE_MONITOR_BY_DEFAULT) != 0) { /* updates older sources to enable monitoring * automatically if they added monitoring by default in * version 24 */ monitoring_type = OBS_MONITORING_TYPE_MONITOR_ONLY; obs_source_set_audio_mixers(source, 0x3F); } } obs_source_set_monitoring_type(source, (enum obs_monitoring_type)monitoring_type); obs_data_release(source->private_settings); source->private_settings = obs_data_get_obj(source_data, "private_settings"); if (!source->private_settings) source->private_settings = obs_data_create(); if (filters) { size_t count = obs_data_array_count(filters); for (size_t i = 0; i < count; i++) { obs_data_t *filter_data = obs_data_array_item(filters, i); obs_source_t *filter = obs_load_source_type(filter_data, true); if (filter) { obs_source_filter_add(source, filter); obs_source_release(filter); } obs_data_release(filter_data); } obs_data_array_release(filters); } obs_data_release(settings); return source; } obs_source_t *obs_load_source(obs_data_t *source_data) { return obs_load_source_type(source_data, false); } obs_source_t *obs_load_private_source(obs_data_t *source_data) { return obs_load_source_type(source_data, true); } void obs_load_sources(obs_data_array_t *array, obs_load_source_cb cb, void *private_data) { DARRAY(obs_source_t *) sources; size_t count; size_t i; da_init(sources); count = obs_data_array_count(array); da_reserve(sources, count); for (i = 0; i < count; i++) { obs_data_t *source_data = obs_data_array_item(array, i); obs_source_t *source = 
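/* obs_load_source_type() gates data migrations on the packed "prev_ver"
 * field, compared with MAKE_SEMANTIC_VERSION (major in the top byte, minor
 * in the next, patch in the low 16 bits). The sketch below mirrors that
 * layout for the pre-23.2.2 monitoring migration; SEMVER and
 * needs_monitor_migration are illustrative names, not the libobs macros. */

```c
#include <stdbool.h>
#include <stdint.h>

#define SEMVER(major, minor, patch) \
	(((uint32_t)(major) << 24) | ((uint32_t)(minor) << 16) | (uint32_t)(patch))

/* True when a source saved before 23.2.2 carries the
 * OBS_SOURCE_MONITOR_BY_DEFAULT capability and should therefore have
 * monitoring switched on automatically during load. */
static bool needs_monitor_migration(uint32_t prev_ver, bool monitor_by_default)
{
	return prev_ver < SEMVER(23, 2, 2) && monitor_by_default;
}
```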
obs_load_source(source_data); da_push_back(sources, &source); obs_data_release(source_data); } /* tell sources that we want to load */ for (i = 0; i < sources.num; i++) { obs_source_t *source = sources.array[i]; obs_data_t *source_data = obs_data_array_item(array, i); if (source) { if (source->info.type == OBS_SOURCE_TYPE_TRANSITION) obs_transition_load(source, source_data); obs_source_load2(source); if (cb) cb(private_data, source); } obs_data_release(source_data); } for (i = 0; i < sources.num; i++) obs_source_release(sources.array[i]); da_free(sources); } obs_data_t *obs_save_source(obs_source_t *source) { obs_data_array_t *filters = obs_data_array_create(); obs_data_t *source_data = obs_data_create(); obs_data_t *settings = obs_source_get_settings(source); obs_data_t *hotkey_data = source->context.hotkey_data; obs_data_t *hotkeys; float volume = obs_source_get_volume(source); float balance = obs_source_get_balance_value(source); uint32_t mixers = obs_source_get_audio_mixers(source); int64_t sync = obs_source_get_sync_offset(source); uint32_t flags = obs_source_get_flags(source); const char *name = obs_source_get_name(source); const char *uuid = obs_source_get_uuid(source); const char *id = source->info.unversioned_id; const char *v_id = source->info.id; bool enabled = obs_source_enabled(source); bool muted = obs_source_muted(source); bool push_to_mute = obs_source_push_to_mute_enabled(source); uint64_t ptm_delay = obs_source_get_push_to_mute_delay(source); bool push_to_talk = obs_source_push_to_talk_enabled(source); uint64_t ptt_delay = obs_source_get_push_to_talk_delay(source); int m_type = (int)obs_source_get_monitoring_type(source); int di_mode = (int)obs_source_get_deinterlace_mode(source); int di_order = (int)obs_source_get_deinterlace_field_order(source); obs_canvas_t *canvas = obs_source_get_canvas(source); DARRAY(obs_source_t *) filters_copy; obs_source_save(source); hotkeys = obs_hotkeys_save_source(source); if (hotkeys) { 
obs_data_release(hotkey_data); source->context.hotkey_data = hotkeys; hotkey_data = hotkeys; } obs_data_set_int(source_data, "prev_ver", LIBOBS_API_VER); obs_data_set_string(source_data, "name", name); obs_data_set_string(source_data, "uuid", uuid); obs_data_set_string(source_data, "id", id); obs_data_set_string(source_data, "versioned_id", v_id); obs_data_set_obj(source_data, "settings", settings); obs_data_set_int(source_data, "mixers", mixers); obs_data_set_int(source_data, "sync", sync); obs_data_set_int(source_data, "flags", flags); obs_data_set_double(source_data, "volume", volume); obs_data_set_double(source_data, "balance", balance); obs_data_set_bool(source_data, "enabled", enabled); obs_data_set_bool(source_data, "muted", muted); obs_data_set_bool(source_data, "push-to-mute", push_to_mute); obs_data_set_int(source_data, "push-to-mute-delay", ptm_delay); obs_data_set_bool(source_data, "push-to-talk", push_to_talk); obs_data_set_int(source_data, "push-to-talk-delay", ptt_delay); obs_data_set_obj(source_data, "hotkeys", hotkey_data); obs_data_set_int(source_data, "deinterlace_mode", di_mode); obs_data_set_int(source_data, "deinterlace_field_order", di_order); obs_data_set_int(source_data, "monitoring_type", m_type); if (canvas) { obs_data_set_string(source_data, "canvas_uuid", obs_canvas_get_uuid(canvas)); obs_canvas_release(canvas); } obs_data_set_obj(source_data, "private_settings", source->private_settings); if (source->info.type == OBS_SOURCE_TYPE_TRANSITION) obs_transition_save(source, source_data); pthread_mutex_lock(&source->filter_mutex); da_init(filters_copy); da_reserve(filters_copy, source->filters.num); for (size_t i = 0; i < source->filters.num; i++) { obs_source_t *filter = obs_source_get_ref(source->filters.array[i]); if (filter) da_push_back(filters_copy, &filter); } pthread_mutex_unlock(&source->filter_mutex); if (filters_copy.num) { for (size_t i = filters_copy.num; i > 0; i--) { obs_source_t *filter = filters_copy.array[i - 1]; obs_data_t 
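/* obs_save_source() snapshots the filter list while filter_mutex is held
 * (taking a reference to each entry), then serializes outside the lock,
 * walking the copy back-to-front so the load path re-adds filters in the
 * original order. A sketch of that snapshot-then-iterate shape with ints
 * standing in for filter pointers; snapshot_reversed is an illustrative
 * helper, not libobs API. */

```c
#include <pthread.h>
#include <stddef.h>
#include <string.h>

/* Assumes num <= 16 for brevity; capacity checks omitted. */
static void snapshot_reversed(pthread_mutex_t *mutex, const int *filters,
			      size_t num, int *out)
{
	int copy[16];

	/* copy under the lock so the list cannot change mid-snapshot */
	pthread_mutex_lock(mutex);
	memcpy(copy, filters, num * sizeof(*filters));
	pthread_mutex_unlock(mutex);

	/* iterate the private copy outside the lock, back to front */
	for (size_t i = num; i > 0; i--)
		out[num - i] = copy[i - 1];
}
```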
*filter_data = obs_save_source(filter); obs_data_array_push_back(filters, filter_data); obs_data_release(filter_data); obs_source_release(filter); } obs_data_set_array(source_data, "filters", filters); } da_free(filters_copy); obs_data_release(settings); obs_data_array_release(filters); return source_data; } obs_data_array_t *obs_save_sources_filtered(obs_save_source_filter_cb cb, void *data_) { struct obs_core_data *data = &obs->data; obs_data_array_t *array; obs_source_t *source; array = obs_data_array_create(); pthread_mutex_lock(&data->sources_mutex); source = data->sources; while (source) { if ((source->info.type != OBS_SOURCE_TYPE_FILTER) != 0 && !source->removed && !source->temp_removed && !source->context.private && cb(data_, source)) { obs_data_t *source_data = obs_save_source(source); obs_data_array_push_back(array, source_data); obs_data_release(source_data); } source = (obs_source_t *)source->context.hh_uuid.next; } pthread_mutex_unlock(&data->sources_mutex); return array; } static bool save_source_filter(void *data, obs_source_t *source) { UNUSED_PARAMETER(data); UNUSED_PARAMETER(source); return true; } obs_data_array_t *obs_save_sources(void) { return obs_save_sources_filtered(save_source_filter, NULL); } void obs_reset_source_uuids() { pthread_mutex_lock(&obs->data.sources_mutex); /* Move all sources to a new hash table */ struct obs_context_data *ht = (struct obs_context_data *)obs->data.sources; struct obs_context_data *new_ht = NULL; struct obs_context_data *ctx, *tmp; HASH_ITER (hh_uuid, ht, ctx, tmp) { HASH_DELETE(hh_uuid, ht, ctx); bfree((void *)ctx->uuid); ctx->uuid = os_generate_uuid(); HASH_ADD_UUID(new_ht, uuid, ctx); } /* The old table will be automatically freed once the last element has * been removed, so we can simply overwrite the pointer. 
*/ obs->data.sources = (struct obs_source *)new_ht; pthread_mutex_unlock(&obs->data.sources_mutex); } /* ensures that names are never blank */ static inline char *dup_name(const char *name, bool private) { if (private && !name) return NULL; if (!name || !*name) { struct dstr unnamed = {0}; dstr_printf(&unnamed, "__unnamed%04lld", obs->data.unnamed_index++); return unnamed.array; } else { return bstrdup(name); } } static inline bool obs_context_data_init_wrap(struct obs_context_data *context, enum obs_obj_type type, obs_data_t *settings, const char *name, const char *uuid, obs_data_t *hotkey_data, bool private) { assert(context); memset(context, 0, sizeof(*context)); context->private = private; context->type = type; pthread_mutex_init_value(&context->rename_cache_mutex); if (pthread_mutex_init(&context->rename_cache_mutex, NULL) < 0) return false; context->signals = signal_handler_create(); if (!context->signals) return false; context->procs = proc_handler_create(); if (!context->procs) return false; if (uuid && strlen(uuid) == UUID_STR_LENGTH) context->uuid = bstrdup(uuid); /* Only automatically generate UUIDs for sources */ else if (type == OBS_OBJ_TYPE_SOURCE || type == OBS_OBJ_TYPE_CANVAS) context->uuid = os_generate_uuid(); context->name = dup_name(name, private); context->settings = obs_data_newref(settings); context->hotkey_data = obs_data_newref(hotkey_data); return true; } bool obs_context_data_init(struct obs_context_data *context, enum obs_obj_type type, obs_data_t *settings, const char *name, const char *uuid, obs_data_t *hotkey_data, bool private) { if (obs_context_data_init_wrap(context, type, settings, name, uuid, hotkey_data, private)) { return true; } else { obs_context_data_free(context); return false; } } void obs_context_data_free(struct obs_context_data *context) { obs_hotkeys_context_release(context); signal_handler_destroy(context->signals); proc_handler_destroy(context->procs); obs_data_release(context->settings); 
obs_context_data_remove(context); pthread_mutex_destroy(&context->rename_cache_mutex); bfree(context->name); bfree((void *)context->uuid); for (size_t i = 0; i < context->rename_cache.num; i++) bfree(context->rename_cache.array[i]); da_free(context->rename_cache); memset(context, 0, sizeof(*context)); } void obs_context_init_control(struct obs_context_data *context, void *object, obs_destroy_cb destroy) { context->control = bzalloc(sizeof(obs_weak_object_t)); context->control->object = object; context->destroy = destroy; } void obs_context_data_insert(struct obs_context_data *context, pthread_mutex_t *mutex, void *pfirst) { struct obs_context_data **first = pfirst; assert(context); assert(mutex); assert(first); context->mutex = mutex; pthread_mutex_lock(mutex); context->prev_next = first; context->next = *first; *first = context; if (context->next) context->next->prev_next = &context->next; pthread_mutex_unlock(mutex); } static inline char *obs_context_deduplicate_name(void *phash, const char *name) { struct obs_context_data *head = phash; struct obs_context_data *item = NULL; HASH_FIND_STR(head, name, item); if (!item) return NULL; struct dstr new_name = {0}; int suffix = 2; while (item) { dstr_printf(&new_name, "%s %d", name, suffix++); HASH_FIND_STR(head, new_name.array, item); } return new_name.array; } void obs_context_data_insert_name(struct obs_context_data *context, pthread_mutex_t *mutex, void *pfirst) { struct obs_context_data **first = pfirst; char *new_name; assert(context); assert(mutex); assert(first); context->mutex = mutex; pthread_mutex_lock(mutex); /* Ensure name is not a duplicate. */ new_name = obs_context_deduplicate_name(*first, context->name); if (new_name) { blog(LOG_WARNING, "Attempted to insert context with duplicate name \"%s\"!" " Name has been changed to \"%s\"", context->name, new_name); /* Since this happens before the context creation finishes, * do not bother to add it to the rename cache. 
*/ bfree(context->name); context->name = new_name; } HASH_ADD_STR(*first, name, context); pthread_mutex_unlock(mutex); } void obs_context_data_insert_uuid(struct obs_context_data *context, pthread_mutex_t *mutex, void *pfirst_uuid) { struct obs_context_data **first_uuid = pfirst_uuid; struct obs_context_data *item = NULL; assert(context); assert(mutex); assert(first_uuid); context->mutex = mutex; pthread_mutex_lock(mutex); /* Ensure UUID is not a duplicate. * This should only ever happen if a scene collection file has been * manually edited and an entry has been duplicated without removing * or regenerating the UUID. */ HASH_FIND_UUID(*first_uuid, context->uuid, item); if (item) { blog(LOG_WARNING, "Attempted to insert context with duplicate UUID \"%s\"!", context->uuid); /* It is practically impossible for the new UUID to be a * duplicate, so don't bother checking again. */ bfree((void *)context->uuid); context->uuid = os_generate_uuid(); } HASH_ADD_UUID(*first_uuid, uuid, context); pthread_mutex_unlock(mutex); } void obs_context_data_remove(struct obs_context_data *context) { if (context && context->prev_next) { pthread_mutex_lock(context->mutex); *context->prev_next = context->next; if (context->next) context->next->prev_next = context->prev_next; context->prev_next = NULL; pthread_mutex_unlock(context->mutex); } } void obs_context_data_remove_name(struct obs_context_data *context, pthread_mutex_t *mutex, void *phead) { struct obs_context_data **head = phead; assert(head); if (!context) return; pthread_mutex_lock(mutex); HASH_DELETE(hh, *head, context); pthread_mutex_unlock(mutex); } void obs_context_data_remove_uuid(struct obs_context_data *context, pthread_mutex_t *mutex, void *puuid_head) { struct obs_context_data **uuid_head = puuid_head; assert(uuid_head); if (!context || !context->uuid || !uuid_head) return; pthread_mutex_lock(mutex); HASH_DELETE(hh_uuid, *uuid_head, context); pthread_mutex_unlock(mutex); } void obs_context_wait(struct obs_context_data 
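/* obs_context_deduplicate_name() tries "name 2", "name 3", ... until the
 * result no longer collides. A self-contained sketch of the same loop; a
 * plain string array stands in for the uthash table, and name_taken and
 * deduplicate_name are illustrative names, not libobs API. */

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static bool name_taken(char **names, size_t num, const char *name)
{
	for (size_t i = 0; i < num; i++)
		if (strcmp(names[i], name) == 0)
			return true;
	return false;
}

/* Returns NULL when `name` is already unique, else a malloc'd new name. */
static char *deduplicate_name(char **names, size_t num, const char *name)
{
	if (!name_taken(names, num, name))
		return NULL;

	char *new_name = NULL;
	int suffix = 2;
	do {
		free(new_name);
		size_t len = strlen(name) + 16; /* room for " %d" + NUL */
		new_name = malloc(len);
		snprintf(new_name, len, "%s %d", name, suffix++);
	} while (name_taken(names, num, new_name));

	return new_name;
}
```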
*context) { pthread_mutex_lock(context->mutex); pthread_mutex_unlock(context->mutex); } void obs_context_data_setname(struct obs_context_data *context, const char *name) { pthread_mutex_lock(&context->rename_cache_mutex); if (context->name) da_push_back(context->rename_cache, &context->name); context->name = dup_name(name, context->private); pthread_mutex_unlock(&context->rename_cache_mutex); } void obs_context_data_setname_ht(struct obs_context_data *context, const char *name, void *phead) { struct obs_context_data **head = phead; char *new_name; pthread_mutex_lock(context->mutex); pthread_mutex_lock(&context->rename_cache_mutex); HASH_DEL(*head, context); if (context->name) da_push_back(context->rename_cache, &context->name); /* Ensure new name is not a duplicate. */ new_name = obs_context_deduplicate_name(*head, name); if (new_name) { blog(LOG_WARNING, "Attempted to rename context to duplicate name \"%s\"!" " New name has been changed to \"%s\"", context->name, new_name); context->name = new_name; } else { context->name = dup_name(name, context->private); } HASH_ADD_STR(*head, name, context); pthread_mutex_unlock(&context->rename_cache_mutex); pthread_mutex_unlock(context->mutex); } profiler_name_store_t *obs_get_profiler_name_store(void) { return obs->name_store; } uint64_t obs_get_video_frame_time(void) { return obs->video.video_time; } double obs_get_active_fps(void) { return obs->video.video_fps; } uint64_t obs_get_average_frame_time_ns(void) { return obs->video.video_avg_frame_time_ns; } uint64_t obs_get_frame_interval_ns(void) { return obs->video.video_frame_interval_ns; } enum obs_obj_type obs_obj_get_type(void *obj) { struct obs_context_data *context = obj; return context ? 
context->type : OBS_OBJ_TYPE_INVALID; } const char *obs_obj_get_id(void *obj) { struct obs_context_data *context = obj; if (!context) return NULL; switch (context->type) { case OBS_OBJ_TYPE_SOURCE: return ((obs_source_t *)obj)->info.id; case OBS_OBJ_TYPE_OUTPUT: return ((obs_output_t *)obj)->info.id; case OBS_OBJ_TYPE_ENCODER: return ((obs_encoder_t *)obj)->info.id; case OBS_OBJ_TYPE_SERVICE: return ((obs_service_t *)obj)->info.id; default:; } return NULL; } bool obs_obj_invalid(void *obj) { struct obs_context_data *context = obj; if (!context) return true; return !context->data; } void *obs_obj_get_data(void *obj) { struct obs_context_data *context = obj; if (!context) return NULL; return context->data; } bool obs_obj_is_private(void *obj) { struct obs_context_data *context = obj; if (!context) return false; return context->private; } void obs_reset_audio_monitoring(void) { if (!obs_audio_monitoring_available()) return; pthread_mutex_lock(&obs->audio.monitoring_mutex); for (size_t i = 0; i < obs->audio.monitors.num; i++) { struct audio_monitor *monitor = obs->audio.monitors.array[i]; audio_monitor_reset(monitor); } pthread_mutex_unlock(&obs->audio.monitoring_mutex); } static bool check_all_aoc_sources(void *param, obs_source_t *src) { UNUSED_PARAMETER(param); if (src->info.output_flags & OBS_SOURCE_DO_NOT_SELF_MONITOR) { obs_data_t *settings = obs_source_get_settings(src); const char *device_id = obs_data_get_string(settings, "device_id"); obs_source_audio_output_capture_device_changed(src, device_id); obs_data_release(settings); } return true; } bool obs_set_audio_monitoring_device(const char *name, const char *id) { if (!name || !id || !*name || !*id) return false; if (!obs_audio_monitoring_available()) return false; pthread_mutex_lock(&obs->audio.monitoring_mutex); if (strcmp(id, obs->audio.monitoring_device_id) == 0) { pthread_mutex_unlock(&obs->audio.monitoring_mutex); return true; } bfree(obs->audio.monitoring_device_name); 
bfree(obs->audio.monitoring_device_id); obs->audio.monitoring_device_name = bstrdup(name); obs->audio.monitoring_device_id = bstrdup(id); pthread_mutex_unlock(&obs->audio.monitoring_mutex); obs_reset_audio_monitoring(); /* Check all Audio Output Capture sources for monitoring duplication. */ obs_enum_sources(check_all_aoc_sources, NULL); return true; } void obs_get_audio_monitoring_device(const char **name, const char **id) { if (name) *name = obs->audio.monitoring_device_name; if (id) *id = obs->audio.monitoring_device_id; } void obs_add_tick_callback(void (*tick)(void *param, float seconds), void *param) { struct tick_callback data = {tick, param}; pthread_mutex_lock(&obs->data.draw_callbacks_mutex); da_insert(obs->data.tick_callbacks, 0, &data); pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); } void obs_remove_tick_callback(void (*tick)(void *param, float seconds), void *param) { struct tick_callback data = {tick, param}; pthread_mutex_lock(&obs->data.draw_callbacks_mutex); da_erase_item(obs->data.tick_callbacks, &data); pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); } void obs_add_main_render_callback(void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param) { struct draw_callback data = {draw, param}; pthread_mutex_lock(&obs->data.draw_callbacks_mutex); da_insert(obs->data.draw_callbacks, 0, &data); pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); } void obs_remove_main_render_callback(void (*draw)(void *param, uint32_t cx, uint32_t cy), void *param) { struct draw_callback data = {draw, param}; pthread_mutex_lock(&obs->data.draw_callbacks_mutex); da_erase_item(obs->data.draw_callbacks, &data); pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); } void obs_add_main_rendered_callback(void (*rendered)(void *param), void *param) { struct rendered_callback data = {rendered, param}; pthread_mutex_lock(&obs->data.draw_callbacks_mutex); da_insert(obs->data.rendered_callbacks, 0, &data); 
pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); } void obs_remove_main_rendered_callback(void (*rendered)(void *param), void *param) { struct rendered_callback data = {rendered, param}; pthread_mutex_lock(&obs->data.draw_callbacks_mutex); da_erase_item(obs->data.rendered_callbacks, &data); pthread_mutex_unlock(&obs->data.draw_callbacks_mutex); } uint32_t obs_get_total_frames(void) { return obs->video.total_frames; } uint32_t obs_get_lagged_frames(void) { return obs->video.lagged_frames; } struct obs_core_video_mix *get_mix_for_video(video_t *v) { struct obs_core_video_mix *result = NULL; pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) { struct obs_core_video_mix *mix = obs->video.mixes.array[i]; if (v == mix->video) { result = mix; break; } } pthread_mutex_unlock(&obs->video.mixes_mutex); return result; } void start_raw_video(video_t *v, const struct video_scale_info *conversion, uint32_t frame_rate_divisor, void (*callback)(void *param, struct video_data *frame), void *param) { struct obs_core_video_mix *video = get_mix_for_video(v); // TODO: Make affected outputs use views/canvasses, and revert this later. // https://github.com/obsproject/obs-studio/pull/12379 // https://github.com/obsproject/obs-studio/issues/12366 if (video_output_connect2(v, conversion, frame_rate_divisor, callback, param) && video) os_atomic_inc_long(&video->raw_active); } void stop_raw_video(video_t *v, void (*callback)(void *param, struct video_data *frame), void *param) { struct obs_core_video_mix *video = get_mix_for_video(v); // TODO: Make affected outputs use views/canvasses, and revert this later. 
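/* The tick/draw/rendered callback lists above store {function, param}
 * pairs: registration inserts at index 0 (so the most recently added
 * callback fires first) and removal erases the first pair matching both
 * fields. A fixed-capacity sketch of that behavior; tick_list, tick_add,
 * tick_remove and tick_fire are illustrative names, not libobs API. */

```c
#include <stddef.h>
#include <string.h>

struct tick_cb {
	void (*tick)(void *param, float seconds);
	void *param;
};

struct tick_list {
	struct tick_cb cbs[8]; /* capacity checks omitted for brevity */
	size_t num;
};

/* Insert at the front, like da_insert(..., 0, &data) in the code above. */
static void tick_add(struct tick_list *l, void (*tick)(void *, float), void *param)
{
	memmove(&l->cbs[1], &l->cbs[0], l->num * sizeof(l->cbs[0]));
	l->cbs[0].tick = tick;
	l->cbs[0].param = param;
	l->num++;
}

/* Remove by matching both function and param, like da_erase_item(). */
static void tick_remove(struct tick_list *l, void (*tick)(void *, float), void *param)
{
	for (size_t i = 0; i < l->num; i++) {
		if (l->cbs[i].tick == tick && l->cbs[i].param == param) {
			memmove(&l->cbs[i], &l->cbs[i + 1],
				(l->num - i - 1) * sizeof(l->cbs[0]));
			l->num--;
			return;
		}
	}
}

static void tick_fire(struct tick_list *l, float seconds)
{
	for (size_t i = 0; i < l->num; i++)
		l->cbs[i].tick(l->cbs[i].param, seconds);
}

/* Example callback: bump a per-registration counter. */
static void count_tick(void *param, float seconds)
{
	(void)seconds;
	++*(int *)param;
}
```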
// https://github.com/obsproject/obs-studio/pull/12379 // https://github.com/obsproject/obs-studio/issues/12366 if (video_output_disconnect2(v, callback, param) && video) os_atomic_dec_long(&video->raw_active); } void obs_add_raw_video_callback(const struct video_scale_info *conversion, void (*callback)(void *param, struct video_data *frame), void *param) { obs_add_raw_video_callback2(conversion, 1, callback, param); } void obs_add_raw_video_callback2(const struct video_scale_info *conversion, uint32_t frame_rate_divisor, void (*callback)(void *param, struct video_data *frame), void *param) { struct obs_core_video_mix *video = obs->data.main_canvas->mix; start_raw_video(video->video, conversion, frame_rate_divisor, callback, param); } void obs_remove_raw_video_callback(void (*callback)(void *param, struct video_data *frame), void *param) { struct obs_core_video_mix *video = obs->data.main_canvas->mix; stop_raw_video(video->video, callback, param); } void obs_add_raw_audio_callback(size_t mix_idx, const struct audio_convert_info *conversion, audio_output_callback_t callback, void *param) { struct obs_core_audio *audio = &obs->audio; audio_output_connect(audio->audio, mix_idx, conversion, callback, param); } void obs_remove_raw_audio_callback(size_t mix_idx, audio_output_callback_t callback, void *param) { struct obs_core_audio *audio = &obs->audio; audio_output_disconnect(audio->audio, mix_idx, callback, param); } void obs_apply_private_data(obs_data_t *settings) { if (!settings) return; obs_data_apply(obs->data.private_data, settings); } void obs_set_private_data(obs_data_t *settings) { obs_data_clear(obs->data.private_data); if (settings) obs_data_apply(obs->data.private_data, settings); } obs_data_t *obs_get_private_data(void) { obs_data_t *private_data = obs->data.private_data; obs_data_addref(private_data); return private_data; } extern bool init_gpu_encoding(struct obs_core_video_mix *video); extern void stop_gpu_encoding_thread(struct obs_core_video_mix 
*video); extern void free_gpu_encoding(struct obs_core_video_mix *video); bool start_gpu_encode(obs_encoder_t *encoder) { struct obs_core_video_mix *video = get_mix_for_video(encoder->media); bool success = true; obs_enter_graphics(); pthread_mutex_lock(&video->gpu_encoder_mutex); if (!video->gpu_encoders.num) success = init_gpu_encoding(video); if (success) da_push_back(video->gpu_encoders, &encoder); else free_gpu_encoding(video); pthread_mutex_unlock(&video->gpu_encoder_mutex); obs_leave_graphics(); if (success) { os_atomic_inc_long(&video->gpu_encoder_active); video_output_inc_texture_encoders(video->video); } return success; } void stop_gpu_encode(obs_encoder_t *encoder) { struct obs_core_video_mix *video = get_mix_for_video(encoder->media); bool call_free = false; os_atomic_dec_long(&video->gpu_encoder_active); video_output_dec_texture_encoders(video->video); pthread_mutex_lock(&video->gpu_encoder_mutex); da_erase_item(video->gpu_encoders, &encoder); if (!video->gpu_encoders.num) call_free = true; pthread_mutex_unlock(&video->gpu_encoder_mutex); os_event_wait(video->gpu_encode_inactive); if (call_free) { stop_gpu_encoding_thread(video); obs_enter_graphics(); pthread_mutex_lock(&video->gpu_encoder_mutex); free_gpu_encoding(video); pthread_mutex_unlock(&video->gpu_encoder_mutex); obs_leave_graphics(); } } bool obs_video_active(void) { bool result = false; pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0, num = obs->video.mixes.num; i < num; i++) { struct obs_core_video_mix *video = obs->video.mixes.array[i]; if (os_atomic_load_long(&video->raw_active) > 0 || os_atomic_load_long(&video->gpu_encoder_active) > 0) { result = true; break; } } pthread_mutex_unlock(&obs->video.mixes_mutex); return result; } bool obs_nv12_tex_active(void) { struct obs_core_video_mix *video = obs->data.main_canvas->mix; return video->using_nv12_tex; } bool obs_p010_tex_active(void) { struct obs_core_video_mix *video = obs->data.main_canvas->mix; return 
video->using_p010_tex; } /* ------------------------------------------------------------------------- */ /* task stuff */ struct task_wait_info { obs_task_t task; void *param; os_event_t *event; }; static void task_wait_callback(void *param) { struct task_wait_info *info = param; if (info->task) info->task(info->param); os_event_signal(info->event); } THREAD_LOCAL bool is_graphics_thread = false; THREAD_LOCAL bool is_audio_thread = false; static void set_audio_thread(void *unused) { is_audio_thread = true; UNUSED_PARAMETER(unused); } bool obs_in_task_thread(enum obs_task_type type) { if (type == OBS_TASK_GRAPHICS) return is_graphics_thread; else if (type == OBS_TASK_AUDIO) return is_audio_thread; else if (type == OBS_TASK_UI) return is_ui_thread; else if (type == OBS_TASK_DESTROY) return os_task_queue_inside(obs->destruction_task_thread); assert(false); return false; } void obs_queue_task(enum obs_task_type type, obs_task_t task, void *param, bool wait) { if (type == OBS_TASK_UI) { if (obs->ui_task_handler) { obs->ui_task_handler(task, param, wait); } else { blog(LOG_ERROR, "UI task could not be queued, " "there's no UI task handler!"); } } else { if (obs_in_task_thread(type)) { task(param); } else if (wait) { struct task_wait_info info = { .task = task, .param = param, }; os_event_init(&info.event, OS_EVENT_TYPE_MANUAL); obs_queue_task(type, task_wait_callback, &info, false); os_event_wait(info.event); os_event_destroy(info.event); } else if (type == OBS_TASK_GRAPHICS) { struct obs_core_video *video = &obs->video; struct obs_task_info info = {task, param}; pthread_mutex_lock(&video->task_mutex); deque_push_back(&video->tasks, &info, sizeof(info)); pthread_mutex_unlock(&video->task_mutex); } else if (type == OBS_TASK_AUDIO) { struct obs_core_audio *audio = &obs->audio; struct obs_task_info info = {task, param}; pthread_mutex_lock(&audio->task_mutex); deque_push_back(&audio->tasks, &info, sizeof(info)); pthread_mutex_unlock(&audio->task_mutex); } else if (type == 
OBS_TASK_DESTROY) { os_task_t os_task = (os_task_t)task; os_task_queue_queue_task(obs->destruction_task_thread, os_task, param); } } } bool obs_wait_for_destroy_queue(void) { struct task_wait_info info = {0}; if (!obs->video.thread_initialized || !obs->audio.audio) return false; /* allow video and audio threads time to release objects */ os_event_init(&info.event, OS_EVENT_TYPE_AUTO); obs_queue_task(OBS_TASK_GRAPHICS, task_wait_callback, &info, false); os_event_wait(info.event); obs_queue_task(OBS_TASK_AUDIO, task_wait_callback, &info, false); os_event_wait(info.event); os_event_destroy(info.event); /* wait for destroy task queue */ return os_task_queue_wait(obs->destruction_task_thread); } static void set_ui_thread(void *unused) { is_ui_thread = true; UNUSED_PARAMETER(unused); } void obs_set_ui_task_handler(obs_task_handler_t handler) { obs->ui_task_handler = handler; obs_queue_task(OBS_TASK_UI, set_ui_thread, NULL, false); } obs_object_t *obs_object_get_ref(obs_object_t *object) { if (!object) return NULL; return obs_weak_object_get_object(object->control); } void obs_object_release(obs_object_t *object) { if (!obs) { blog(LOG_WARNING, "Tried to release an object when the OBS " "core is shut down!"); return; } if (!object) return; obs_weak_object_t *control = object->control; if (obs_ref_release(&control->ref)) { object->destroy(object); obs_weak_object_release(control); } } void obs_weak_object_addref(obs_weak_object_t *weak) { if (!weak) return; obs_weak_ref_addref(&weak->ref); } void obs_weak_object_release(obs_weak_object_t *weak) { if (!weak) return; if (obs_weak_ref_release(&weak->ref)) bfree(weak); } obs_weak_object_t *obs_object_get_weak_object(obs_object_t *object) { if (!object) return NULL; obs_weak_object_t *weak = object->control; obs_weak_object_addref(weak); return weak; } obs_object_t *obs_weak_object_get_object(obs_weak_object_t *weak) { if (!weak) return NULL; if (obs_weak_ref_get_ref(&weak->ref)) return weak->object; return NULL; } bool 
obs_weak_object_expired(obs_weak_object_t *weak) { return weak ? obs_weak_ref_expired(&weak->ref) : true; } bool obs_weak_object_references_object(obs_weak_object_t *weak, obs_object_t *object) { return weak && object && weak->object == object; } bool obs_is_output_protocol_registered(const char *protocol) { for (size_t i = 0; i < obs->data.protocols.num; i++) { if (strcmp(protocol, obs->data.protocols.array[i]) == 0) return true; } return false; } bool obs_enum_output_protocols(size_t idx, char **protocol) { if (idx >= obs->data.protocols.num) return false; *protocol = obs->data.protocols.array[idx]; return true; } obs_canvas_t *obs_get_main_canvas(void) { return obs_canvas_get_ref(obs->data.main_canvas); } obs-studio-32.1.0-sources/libobs/data/000755 001751 001751 00000000000 15153330731 020420 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/data/default_rect.effect000644 001751 001751 00000002717 15153330235 024245 0ustar00runnerrunner000000 000000 #include "color.effect" uniform float4x4 ViewProj; uniform texture_rect image; sampler_state def_sampler { Filter = Linear; AddressU = Clamp; AddressV = Clamp; }; struct VertInOut { float4 pos : POSITION; float2 uv : TEXCOORD0; }; VertInOut VSDefault(VertInOut vert_in) { VertInOut vert_out; vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj); vert_out.uv = vert_in.uv; return vert_out; } float4 PSDrawBare(VertInOut vert_in) : TARGET { return image.Sample(def_sampler, vert_in.uv); } float4 PSDrawD65P3(VertInOut vert_in) : TARGET { float4 rgba = image.Sample(def_sampler, vert_in.uv); rgba.rgb = srgb_nonlinear_to_linear(rgba.rgb); rgba.rgb = d65p3_to_rec709(rgba.rgb); return rgba; } float4 PSDrawOpaque(VertInOut vert_in) : TARGET { return float4(image.Sample(def_sampler, vert_in.uv).rgb, 1.0); } float4 PSDrawSrgbDecompress(VertInOut vert_in) : TARGET { float4 rgba = image.Sample(def_sampler, vert_in.uv); rgba.rgb = srgb_nonlinear_to_linear(rgba.rgb); return rgba; } technique Draw { pass { 
	vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawBare(vert_in); } }
technique DrawD65P3 { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawD65P3(vert_in); } }
technique DrawOpaque { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawOpaque(vert_in); } }
technique DrawSrgbDecompress { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawSrgbDecompress(vert_in); } }

==> obs-studio-32.1.0-sources/libobs/data/deinterlace_yadif.effect <==

/*
 * Copyright (c) 2023 Ruwen Hahn
 *                    John R. Bradley
 *                    Lain Bailey
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#include "deinterlace_base.effect"

TECHNIQUE(PSYadifMode0RGBA, PSYadifMode0RGBA_multiply, PSYadifMode0RGBA_tonemap,
	  PSYadifMode0RGBA_multiply_tonemap);

==> obs-studio-32.1.0-sources/libobs/data/lanczos_scale.effect <==

/*
 * lanczos sharper
 * note - this shader is adapted from the GPL bsnes shader, very good stuff
 * there.
 */

#include "color.effect"

uniform float4x4 ViewProj;
uniform texture2d image;
uniform float2 base_dimension;
uniform float2 base_dimension_i;
uniform float undistort_factor = 1.0;
uniform float multiplier;

sampler_state textureSampler {
	AddressU = Clamp;
	AddressV = Clamp;
	Filter   = Linear;
};

struct VertData {
	float4 pos : POSITION;
	float2 uv  : TEXCOORD0;
};

struct VertOut {
	float2 uv  : TEXCOORD0;
	float4 pos : POSITION;
};

struct FragData {
	float2 uv : TEXCOORD0;
};

VertOut VSDefault(VertData v_in)
{
	VertOut vert_out;
	vert_out.uv = v_in.uv * base_dimension;
	vert_out.pos = mul(float4(v_in.pos.xyz, 1.0), ViewProj);
	return vert_out;
}

float weight(float x)
{
	float x_pi = x * 3.141592654;
	return 3.0 * sin(x_pi) * sin(x_pi * (1.0 / 3.0)) / (x_pi * x_pi);
}

void weight6(float f_neg, out float3 tap012, out float3 tap345)
{
	tap012 = float3(weight(f_neg - 2.0), weight(f_neg - 1.0),
			min(1.0, weight(f_neg))); // Replace NaN with 1.0.
	tap345 = float3(weight(f_neg + 1.0), weight(f_neg + 2.0), weight(f_neg + 3.0));

	// Normalize weights
	float sum = tap012.x + tap012.y + tap012.z + tap345.x + tap345.y + tap345.z;
	float sum_i = 1.0 / sum;
	tap012 = tap012 * sum_i;
	tap345 = tap345 * sum_i;
}

float AspectUndistortX(float x, float a)
{
	// The higher the power, the longer the linear part will be.
	return (1.0 - a) * (x * x * x * x * x) + a * x;
}

float AspectUndistortU(float u)
{
	// Normalize texture coord to -1.0 to 1.0 range, and back.
	return AspectUndistortX((u - 0.5) * 2.0, undistort_factor) * 0.5 + 0.5;
}

float2 undistort_coord(float xpos, float ypos)
{
	return float2(AspectUndistortU(xpos), ypos);
}

float4 undistort_pixel(float xpos, float ypos)
{
	return image.Sample(textureSampler, undistort_coord(xpos, ypos));
}

float4 undistort_line(float3 xpos012, float3 xpos345, float ypos, float3 rowtap012, float3 rowtap345)
{
	return undistort_pixel(xpos012.x, ypos) * rowtap012.x +
	       undistort_pixel(xpos012.y, ypos) * rowtap012.y +
	       undistort_pixel(xpos012.z, ypos) * rowtap012.z +
	       undistort_pixel(xpos345.x, ypos) * rowtap345.x +
	       undistort_pixel(xpos345.y, ypos) * rowtap345.y +
	       undistort_pixel(xpos345.z, ypos) * rowtap345.z;
}

float4 DrawLanczos(FragData f_in, bool undistort)
{
	float2 pos = f_in.uv;
	float2 pos2 = floor(pos - 0.5) + 0.5;
	float2 f_neg = pos2 - pos;

	float3 rowtap012, rowtap345;
	weight6(f_neg.x, rowtap012, rowtap345);

	float3 coltap012, coltap345;
	weight6(f_neg.y, coltap012, coltap345);

	float2 uv2 = pos2 * base_dimension_i;
	float2 uv1 = uv2 - base_dimension_i;
	float2 uv0 = uv1 - base_dimension_i;
	float2 uv3 = uv2 + base_dimension_i;
	float2 uv4 = uv3 + base_dimension_i;
	float2 uv5 = uv4 + base_dimension_i;

	if (undistort) {
		float3 xpos012 = float3(uv0.x, uv1.x, uv2.x);
		float3 xpos345 = float3(uv3.x, uv4.x, uv5.x);
		return undistort_line(xpos012, xpos345, uv0.y, rowtap012, rowtap345) * coltap012.x +
		       undistort_line(xpos012, xpos345, uv1.y, rowtap012, rowtap345) * coltap012.y +
		       undistort_line(xpos012, xpos345, uv2.y, rowtap012, rowtap345) * coltap012.z +
		       undistort_line(xpos012, xpos345, uv3.y, rowtap012, rowtap345) * coltap345.x +
		       undistort_line(xpos012, xpos345, uv4.y, rowtap012, rowtap345) * coltap345.y +
		       undistort_line(xpos012, xpos345, uv5.y, rowtap012, rowtap345) * coltap345.z;
	}

	float u_weight_sum = rowtap012.z + rowtap345.x;
	float u_middle_offset = rowtap345.x * base_dimension_i.x / u_weight_sum;
	float u_middle = uv2.x + u_middle_offset;

	float v_weight_sum = coltap012.z + coltap345.x;
	float v_middle_offset = coltap345.x * base_dimension_i.y / v_weight_sum;
	float v_middle = uv2.y + v_middle_offset;

	float2 coord_limit = base_dimension - 0.5;
	float2 coord0_f = max(uv0 * base_dimension, 0.5);
	float2 coord1_f = max(uv1 * base_dimension, 0.5);
	float2 coord4_f = min(uv4 * base_dimension, coord_limit);
	float2 coord5_f = min(uv5 * base_dimension, coord_limit);

	int2 coord0 = int2(coord0_f);
	int2 coord1 = int2(coord1_f);
	int2 coord4 = int2(coord4_f);
	int2 coord5 = int2(coord5_f);

	float4 row0 = image.Load(int3(coord0, 0)) * rowtap012.x;
	row0 += image.Load(int3(coord1.x, coord0.y, 0)) * rowtap012.y;
	row0 += image.Sample(textureSampler, float2(u_middle, uv0.y)) * u_weight_sum;
	row0 += image.Load(int3(coord4.x, coord0.y, 0)) * rowtap345.y;
	row0 += image.Load(int3(coord5.x, coord0.y, 0)) * rowtap345.z;
	float4 total = row0 * coltap012.x;

	float4 row1 = image.Load(int3(coord0.x, coord1.y, 0)) * rowtap012.x;
	row1 += image.Load(int3(coord1.x, coord1.y, 0)) * rowtap012.y;
	row1 += image.Sample(textureSampler, float2(u_middle, uv1.y)) * u_weight_sum;
	row1 += image.Load(int3(coord4.x, coord1.y, 0)) * rowtap345.y;
	row1 += image.Load(int3(coord5.x, coord1.y, 0)) * rowtap345.z;
	total += row1 * coltap012.y;

	float4 row23 = image.Sample(textureSampler, float2(uv0.x, v_middle)) * rowtap012.x;
	row23 += image.Sample(textureSampler, float2(uv1.x, v_middle)) * rowtap012.y;
	row23 += image.Sample(textureSampler, float2(u_middle, v_middle)) * u_weight_sum;
	row23 += image.Sample(textureSampler, float2(uv4.x, v_middle)) * rowtap345.y;
	row23 += image.Sample(textureSampler, float2(uv5.x, v_middle)) * rowtap345.z;
	total += row23 * v_weight_sum;

	float4 row4 = image.Load(int3(coord0.x, coord4.y, 0)) * rowtap012.x;
	row4 += image.Load(int3(coord1.x, coord4.y, 0)) * rowtap012.y;
	row4 += image.Sample(textureSampler, float2(u_middle, uv4.y)) * u_weight_sum;
	row4 += image.Load(int3(coord4.x, coord4.y, 0)) * rowtap345.y;
	row4 += image.Load(int3(coord5.x, coord4.y, 0)) * rowtap345.z;
	total += row4 * coltap345.y;

	float4 row5 = image.Load(int3(coord0.x, coord5.y, 0)) * rowtap012.x;
	row5 += image.Load(int3(coord1.x, coord5.y, 0)) * rowtap012.y;
	row5 += image.Sample(textureSampler, float2(u_middle, uv5.y)) * u_weight_sum;
	row5 += image.Load(int3(coord4.x, coord5.y, 0)) * rowtap345.y;
	row5 += image.Load(int3(coord5, 0)) * rowtap345.z;
	total += row5 * coltap345.z;

	return total;
}

float4 PSDrawLanczosRGBA(FragData f_in, bool undistort) : TARGET
{
	return DrawLanczos(f_in, undistort);
}

float4 PSDrawLanczosRGBAMultiply(FragData f_in, bool undistort) : TARGET
{
	float4 rgba = DrawLanczos(f_in, undistort);
	rgba.rgb *= multiplier;
	return rgba;
}

float4 PSDrawLanczosRGBATonemap(FragData f_in, bool undistort) : TARGET
{
	float4 rgba = DrawLanczos(f_in, undistort);
	rgba.rgb = rec709_to_rec2020(rgba.rgb);
	rgba.rgb = reinhard(rgba.rgb);
	rgba.rgb = rec2020_to_rec709(rgba.rgb);
	return rgba;
}

float4 PSDrawLanczosRGBAMultiplyTonemap(FragData f_in, bool undistort) : TARGET
{
	float4 rgba = DrawLanczos(f_in, undistort);
	rgba.rgb *= multiplier;
	rgba.rgb = rec709_to_rec2020(rgba.rgb);
	rgba.rgb = reinhard(rgba.rgb);
	rgba.rgb = rec2020_to_rec709(rgba.rgb);
	return rgba;
}

technique Draw { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBA(f_in, false); } }
technique DrawMultiply { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBAMultiply(f_in, false); } }
technique DrawTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBATonemap(f_in, false); } }
technique DrawMultiplyTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBAMultiplyTonemap(f_in, false); } }
technique DrawUndistort { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBA(f_in, true); } }
technique DrawUndistortMultiply { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBAMultiply(f_in, true); } }
technique DrawUndistortTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBATonemap(f_in, true); } }
technique DrawUndistortMultiplyTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLanczosRGBAMultiplyTonemap(f_in, true); } }

==> obs-studio-32.1.0-sources/libobs/data/deinterlace_yadif_2x.effect <==

/*
 * Copyright (c) 2023 Ruwen Hahn
 *                    John R. Bradley
 *                    Lain Bailey
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#include "deinterlace_base.effect"

TECHNIQUE(PSYadifMode0RGBA_2x, PSYadifMode0RGBA_2x_multiply, PSYadifMode0RGBA_2x_tonemap,
	  PSYadifMode0RGBA_2x_multiply_tonemap);

==> obs-studio-32.1.0-sources/libobs/data/opaque.effect <==

#include "color.effect"

uniform float4x4 ViewProj;
uniform texture2d image;
uniform float multiplier;

sampler_state def_sampler {
	Filter   = Linear;
	AddressU = Clamp;
	AddressV = Clamp;
};

struct VertInOut {
	float4 pos : POSITION;
	float2 uv  : TEXCOORD0;
};

VertInOut VSDefault(VertInOut vert_in)
{
	VertInOut vert_out;
	vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj);
	vert_out.uv = vert_in.uv;
	return vert_out;
}

float4 PSDraw(VertInOut vert_in) : TARGET
{
	return float4(image.Sample(def_sampler, vert_in.uv).rgb, 1.0);
}

float4 PSDrawSrgbDecompress(VertInOut vert_in) : TARGET
{
	float3 rgb = image.Sample(def_sampler, vert_in.uv).rgb;
	rgb = srgb_nonlinear_to_linear(rgb);
	return float4(rgb, 1.0);
}

float4 PSDrawSrgbDecompressMultiply(VertInOut vert_in) : TARGET
{
	float3 rgb = image.Sample(def_sampler, vert_in.uv).rgb;
	rgb = srgb_nonlinear_to_linear(rgb);
	rgb *= multiplier;
	return float4(rgb, 1.0);
}

float4 PSDrawMultiply(VertInOut vert_in) : TARGET
{
	float3 rgb = image.Sample(def_sampler, vert_in.uv).rgb;
	rgb *= multiplier;
	return float4(rgb, 1.0);
}

float4 PSDrawTonemap(VertInOut vert_in) : TARGET
{
	float3 rgb = image.Sample(def_sampler, vert_in.uv).rgb;
	rgb = rec709_to_rec2020(rgb);
	rgb = reinhard(rgb);
	rgb = rec2020_to_rec709(rgb);
	return float4(rgb, 1.0);
}

float4 PSDrawMultiplyTonemap(VertInOut vert_in) : TARGET
{
	float3 rgb = image.Sample(def_sampler, vert_in.uv).rgb;
	rgb *= multiplier;
	rgb = rec709_to_rec2020(rgb);
	rgb = reinhard(rgb);
	rgb = rec2020_to_rec709(rgb);
	return float4(rgb, 1.0);
}

float4 PSDrawPQ(VertInOut vert_in) : TARGET
{
	float3 rgb = image.Sample(def_sampler, vert_in.uv).rgb;
	rgb = st2084_to_linear(rgb) * multiplier;
	rgb =
	rec2020_to_rec709(rgb);
	return float4(rgb, 1.0);
}

float4 PSDrawTonemapPQ(VertInOut vert_in) : TARGET
{
	float3 rgb = image.Sample(def_sampler, vert_in.uv).rgb;
	rgb = st2084_to_linear(rgb) * multiplier;
	rgb = reinhard(rgb);
	rgb = rec2020_to_rec709(rgb);
	return float4(rgb, 1.0);
}

technique Draw { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDraw(vert_in); } }
technique DrawSrgbDecompress { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawSrgbDecompress(vert_in); } }
technique DrawSrgbDecompressMultiply { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawSrgbDecompressMultiply(vert_in); } }
technique DrawMultiply { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawMultiply(vert_in); } }
technique DrawTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawTonemap(vert_in); } }
technique DrawMultiplyTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawMultiplyTonemap(vert_in); } }
technique DrawPQ { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawPQ(vert_in); } }
technique DrawTonemapPQ { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawTonemapPQ(vert_in); } }

==> obs-studio-32.1.0-sources/libobs/data/premultiplied_alpha.effect <==

uniform float4x4 ViewProj;
uniform texture2d image;

sampler_state def_sampler {
	Filter   = Linear;
	AddressU = Clamp;
	AddressV = Clamp;
};

struct VertInOut {
	float4 pos : POSITION;
	float2 uv  : TEXCOORD0;
};

VertInOut VSDefault(VertInOut vert_in)
{
	VertInOut vert_out;
	vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj);
	vert_out.uv = vert_in.uv;
	return vert_out;
}

float4 PSDraw(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	if (rgba.a > 0.0)
		rgba.rgb /= rgba.a;
	return saturate(rgba);
}

technique Draw { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDraw(vert_in); } }
==> obs-studio-32.1.0-sources/libobs/data/default.effect <==

#include "color.effect"

uniform float4x4 ViewProj;
uniform texture2d image;
uniform float multiplier;

sampler_state def_sampler {
	Filter   = Linear;
	AddressU = Clamp;
	AddressV = Clamp;
};

struct VertInOut {
	float4 pos : POSITION;
	float2 uv  : TEXCOORD0;
};

VertInOut VSDefault(VertInOut vert_in)
{
	VertInOut vert_out;
	vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj);
	vert_out.uv = vert_in.uv;
	return vert_out;
}

float4 PSDrawBare(VertInOut vert_in) : TARGET
{
	return image.Sample(def_sampler, vert_in.uv);
}

float4 PSDrawAlphaDivide(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb *= (rgba.a > 0.) ? (1. / rgba.a) : 0.;
	return rgba;
}

float4 PSDrawAlphaDivideTonemap(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb *= (rgba.a > 0.) ? (1. / rgba.a) : 0.;
	rgba.rgb = rec709_to_rec2020(rgba.rgb);
	rgba.rgb = reinhard(rgba.rgb);
	rgba.rgb = rec2020_to_rec709(rgba.rgb);
	return rgba;
}

float4 PSDrawAlphaDivideR10L(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb *= (rgba.a > 0.) ? (multiplier / rgba.a) : 0.;
	rgba.rgb = rec709_to_rec2020(rgba.rgb);
	rgba.rgb = linear_to_st2084(rgba.rgb);
	uint3 rgb1023 = uint3(mad(rgba.rgb, 876., 64.5));
	uint b = (rgb1023.b & 0x3Fu) << 2;
	uint g = ((rgb1023.b & 0x3C0u) >> 6) | ((rgb1023.g & 0xFu) << 4);
	uint r = ((rgb1023.g & 0x3F0u) >> 4) | ((rgb1023.r & 0x3u) << 6);
	uint a = ((rgb1023.r & 0x3FCu) >> 2);
	return float4(uint4(r, g, b, a)) / 255.;
}

float4 PSDrawNonlinearAlpha(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = srgb_linear_to_nonlinear(rgba.rgb);
	rgba.rgb *= rgba.a;
	rgba.rgb = srgb_nonlinear_to_linear(rgba.rgb);
	return rgba;
}

float4 PSDrawNonlinearAlphaMultiply(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = srgb_linear_to_nonlinear(rgba.rgb);
	rgba.rgb *= rgba.a;
	rgba.rgb = srgb_nonlinear_to_linear(rgba.rgb);
	rgba.rgb *= multiplier;
	return rgba;
}

float4 PSDrawSrgbDecompress(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = srgb_nonlinear_to_linear(rgba.rgb);
	return rgba;
}

float4 PSDrawSrgbDecompressMultiply(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = srgb_nonlinear_to_linear(rgba.rgb);
	rgba.rgb *= multiplier;
	return rgba;
}

float4 PSDrawMultiply(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb *= multiplier;
	return rgba;
}

float4 PSDrawTonemap(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = rec709_to_rec2020(rgba.rgb);
	rgba.rgb = reinhard(rgba.rgb);
	rgba.rgb = rec2020_to_rec709(rgba.rgb);
	return rgba;
}

float4 PSDrawMultiplyTonemap(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb *= multiplier;
	rgba.rgb = rec709_to_rec2020(rgba.rgb);
	rgba.rgb = reinhard(rgba.rgb);
	rgba.rgb = rec2020_to_rec709(rgba.rgb);
	return rgba;
}

float4 PSDrawPQ(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = st2084_to_linear(rgba.rgb) * multiplier;
	rgba.rgb = rec2020_to_rec709(rgba.rgb);
	return rgba;
}

float4 PSDrawTonemapPQ(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = st2084_to_linear(rgba.rgb) * multiplier;
	rgba.rgb = reinhard(rgba.rgb);
	rgba.rgb = rec2020_to_rec709(rgba.rgb);
	return rgba;
}

float4 PSDrawD65P3(VertInOut vert_in) : TARGET
{
	float4 rgba = image.Sample(def_sampler, vert_in.uv);
	rgba.rgb = srgb_nonlinear_to_linear(rgba.rgb);
	rgba.rgb = d65p3_to_rec709(rgba.rgb);
	return rgba;
}

technique Draw { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawBare(vert_in); } }
technique DrawAlphaDivide { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAlphaDivide(vert_in); } }
technique DrawAlphaDivideTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAlphaDivideTonemap(vert_in); } }
technique DrawAlphaDivideR10L { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAlphaDivideR10L(vert_in); } }
technique DrawNonlinearAlpha { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawNonlinearAlpha(vert_in); } }
technique DrawNonlinearAlphaMultiply { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawNonlinearAlphaMultiply(vert_in); } }
technique DrawSrgbDecompress { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawSrgbDecompress(vert_in); } }
technique DrawSrgbDecompressMultiply { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawSrgbDecompressMultiply(vert_in); } }
technique DrawMultiply { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawMultiply(vert_in); } }
technique DrawTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawTonemap(vert_in); } }
technique DrawMultiplyTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawMultiplyTonemap(vert_in); } }
technique DrawPQ { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawPQ(vert_in); } }
technique DrawTonemapPQ { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawTonemapPQ(vert_in); } }
technique DrawD65P3 { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawD65P3(vert_in); } }

==> obs-studio-32.1.0-sources/libobs/data/format_conversion.effect <==

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "color.effect"

uniform float width;
uniform float height;
uniform float width_i;
uniform float height_i;
uniform float width_d2;
uniform float height_d2;
uniform float width_x2_i;
uniform float height_x2_i;
uniform float maximum_over_sdr_white_nits;
uniform float sdr_white_nits_over_maximum;
uniform float hlg_exponent;
uniform float hdr_lw;
uniform float hdr_lmax;

uniform float4 color_vec0;
uniform float4 color_vec1;
uniform float4 color_vec2;
uniform float3 color_range_min = {0.0, 0.0, 0.0};
uniform float3 color_range_max = {1.0, 1.0, 1.0};

uniform texture2d image;
uniform texture2d image1;
uniform texture2d image2;
uniform texture2d image3;

sampler_state def_sampler {
	Filter   = Linear;
	AddressU = Clamp;
	AddressV = Clamp;
};

struct FragPos {
	float4 pos : POSITION;
};

struct VertTexPos {
	float2 uv  : TEXCOORD0;
	float4 pos : POSITION;
};

struct VertTexTexPos {
	float4 uvuv : TEXCOORD0;
	float4 pos  : POSITION;
};

struct VertTexPosWide {
	float3 uuv : TEXCOORD0;
	float4 pos : POSITION;
};

struct VertTexPosWideWide {
	float4 uuvv : TEXCOORD0;
	float4 pos  : POSITION;
};

struct FragTex {
	float2 uv : TEXCOORD0;
};

struct FragTexTex {
	float4 uvuv : TEXCOORD0;
};

struct FragTexWide {
	float3 uuv : TEXCOORD0;
};

struct FragTexWideWide {
	float4 uuvv : TEXCOORD0;
};

FragPos VSPos(uint id : VERTEXID)
{
	float idHigh = float(id >> 1);
	float idLow = float(id & uint(1));
	float x = idHigh * 4.0 - 1.0;
	float y = idLow * 4.0 - 1.0;
	FragPos vert_out;
	vert_out.pos = float4(x, y, 0.0, 1.0);
	return vert_out;
}

VertTexPosWide VSTexPos_Left(uint id : VERTEXID)
{
	float idHigh = float(id >> 1);
	float idLow = float(id & uint(1));
	float x = idHigh * 4.0 - 1.0;
	float y = idLow * 4.0 - 1.0;
	float u_right = idHigh * 2.0;
	float u_left = u_right - width_i;
	float v = obs_glsl_compile ? (idLow * 2.0) : (1.0 - idLow * 2.0);
	VertTexPosWide vert_out;
	vert_out.uuv = float3(u_left, u_right, v);
	vert_out.pos = float4(x, y, 0.0, 1.0);
	return vert_out;
}

VertTexPosWideWide VSTexPos_TopLeft(uint id : VERTEXID)
{
	float idHigh = float(id >> 1);
	float idLow = float(id & uint(1));
	float x = idHigh * 4.0 - 1.0;
	float y = idLow * 4.0 - 1.0;
	float u_right = idHigh * 2.0;
	float u_left = u_right - width_i;
	float v_bottom;
	float v_top;
	if (obs_glsl_compile) {
		v_bottom = idLow * 2.0;
		v_top = v_bottom + height_i;
	} else {
		v_bottom = 1.0 - idLow * 2.0;
		v_top = v_bottom - height_i;
	}
	VertTexPosWideWide vert_out;
	vert_out.uuvv = float4(u_left, u_right, v_top, v_bottom);
	vert_out.pos = float4(x, y, 0.0, 1.0);
	return vert_out;
}

VertTexTexPos VSPacked422Left_Reverse(uint id : VERTEXID)
{
	float idHigh = float(id >> 1);
	float idLow = float(id & uint(1));
	float x = idHigh * 4. - 1.;
	float y = idLow * 4. - 1.;
	float u = idHigh * 2.;
	float v = idLow * 2.;
	v = obs_glsl_compile ? v : (1. - v);
	VertTexTexPos vert_out;
	vert_out.uvuv = float4(width_d2 * u, height * v, u + width_x2_i, v);
	vert_out.pos = float4(x, y, 0., 1.);
	return vert_out;
}

VertTexPos VS420Left_Reverse(uint id : VERTEXID)
{
	float idHigh = float(id >> 1);
	float idLow = float(id & uint(1));
	float x = idHigh * 4. - 1.;
	float y = idLow * 4. - 1.;
	float u = idHigh * 2. + width_x2_i;
	float v = idLow * 2.;
	v = obs_glsl_compile ? v : (1. - v);
	VertTexPos vert_out;
	vert_out.uv = float2(u, v);
	vert_out.pos = float4(x, y, 0., 1.);
	return vert_out;
}

VertTexPos VS420TopLeft_Reverse(uint id : VERTEXID)
{
	float idHigh = float(id >> 1);
	float idLow = float(id & uint(1));
	float x = idHigh * 4. - 1.;
	float y = idLow * 4. - 1.;
	float u = idHigh * 2. + width_x2_i;
	float v = idLow * 2. - height_x2_i;
	v = obs_glsl_compile ? v : (1. - v);
	VertTexPos vert_out;
	vert_out.uv = float2(u, v);
	vert_out.pos = float4(x, y, 0., 1.);
	return vert_out;
}

VertTexPos VS422Left_Reverse(uint id : VERTEXID)
{
	float idHigh = float(id >> 1);
	float idLow = float(id & uint(1));
	float x = idHigh * 4.0 - 1.0;
	float y = idLow * 4.0 - 1.0;
	float u = idHigh * 2.0 + width_x2_i;
	float v = obs_glsl_compile ? (idLow * 2.0) : (1.0 - idLow * 2.0);
	VertTexPos vert_out;
	vert_out.uv = float2(u, v);
	vert_out.pos = float4(x, y, 0.0, 1.0);
	return vert_out;
}

float PS_Y(FragPos frag_in) : TARGET
{
	float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb;
	float y = dot(color_vec0.xyz, rgb) + color_vec0.w;
	return y;
}

float PS_P010_PQ_Y_709_2020(FragPos frag_in) : TARGET
{
	float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum;
	rgb = rec709_to_rec2020(rgb);
	rgb = linear_to_st2084(rgb);
	float y = dot(color_vec0.xyz, rgb) + color_vec0.w;
	y = floor(saturate(y) * 1023. + 0.5) * (64. / 65535.);
	return y;
}

float PS_P010_HLG_Y_709_2020(FragPos frag_in) : TARGET
{
	float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum;
	rgb = rec709_to_rec2020(rgb);
	rgb = linear_to_hlg(rgb, hdr_lw);
	float y = dot(color_vec0.xyz, rgb) + color_vec0.w;
	y = floor(saturate(y) * 1023. + 0.5) * (64. / 65535.);
	return y;
}

float PS_P010_SRGB_Y(FragPos frag_in) : TARGET
{
	float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb;
	rgb = srgb_linear_to_nonlinear(rgb);
	float y = dot(color_vec0.xyz, rgb) + color_vec0.w;
	y = floor(saturate(y) * 1023. + 0.5) * (64.
/ 65535.); return y; } float PS_P216_PQ_Y_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y; } float PS_P216_HLG_Y_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y; } float PS_P216_SRGB_Y(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb; rgb = srgb_linear_to_nonlinear(rgb); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y; } float PS_P416_PQ_Y_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y; } float PS_P416_HLG_Y_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y; } float PS_P416_SRGB_Y(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb; rgb = srgb_linear_to_nonlinear(rgb); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y; } float PS_I010_PQ_Y_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y * (1023. 
/ 65535.); } float PS_I010_HLG_Y_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y * (1023. / 65535.); } float PS_I010_SRGB_Y(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb; rgb = srgb_linear_to_nonlinear(rgb); float y = dot(color_vec0.xyz, rgb) + color_vec0.w; return y * (1023. / 65535.); } float2 PS_UV_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * 0.5; float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; return float2(u, v); } float2 PS_P010_PQ_UV_709_2020_WideWide(FragTexWideWide frag_in) : TARGET { float3 rgb_topleft = image.Sample(def_sampler, frag_in.uuvv.xz).rgb; float3 rgb_topright = image.Sample(def_sampler, frag_in.uuvv.yz).rgb; float3 rgb_bottomleft = image.Sample(def_sampler, frag_in.uuvv.xw).rgb; float3 rgb_bottomright = image.Sample(def_sampler, frag_in.uuvv.yw).rgb; float3 rgb = (rgb_topleft + rgb_topright + rgb_bottomleft + rgb_bottomright) * (0.25 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); uv = floor(saturate(uv) * 1023. + 0.5) * (64. 
/ 65535.); return uv; } float2 PS_P010_HLG_UV_709_2020_WideWide(FragTexWideWide frag_in) : TARGET { float3 rgb_topleft = image.Sample(def_sampler, frag_in.uuvv.xz).rgb; float3 rgb_topright = image.Sample(def_sampler, frag_in.uuvv.yz).rgb; float3 rgb_bottomleft = image.Sample(def_sampler, frag_in.uuvv.xw).rgb; float3 rgb_bottomright = image.Sample(def_sampler, frag_in.uuvv.yw).rgb; float3 rgb = (rgb_topleft + rgb_topright + rgb_bottomleft + rgb_bottomright) * (0.25 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); uv = floor(saturate(uv) * 1023. + 0.5) * (64. / 65535.); return uv; } float2 PS_P010_SRGB_UV_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * 0.5; rgb = srgb_linear_to_nonlinear(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); uv = floor(saturate(uv) * 1023. + 0.5) * (64. 
/ 65535.); return uv; } float2 PS_P216_PQ_UV_709_2020_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * (0.5 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); return uv; } float2 PS_P216_HLG_UV_709_2020_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * (0.5 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); return uv; } float2 PS_P216_SRGB_UV_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * 0.5; rgb = srgb_linear_to_nonlinear(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); return uv; } float2 PS_P416_PQ_UV_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); return uv; } float2 PS_P416_HLG_UV_709_2020(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb * sdr_white_nits_over_maximum; rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv =
float2(u, v); return uv; } float2 PS_P416_SRGB_UV(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb; rgb = srgb_linear_to_nonlinear(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; float2 uv = float2(u, v); return uv; } float PS_U(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb; float u = dot(color_vec1.xyz, rgb) + color_vec1.w; return u; } float PS_V(FragPos frag_in) : TARGET { float3 rgb = image.Load(int3(frag_in.pos.xy, 0)).rgb; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; return v; } float PS_U_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * 0.5; float u = dot(color_vec1.xyz, rgb) + color_vec1.w; return u; } float PS_V_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * 0.5; float v = dot(color_vec2.xyz, rgb) + color_vec2.w; return v; } float PS_I010_PQ_U_709_2020_WideWide(FragTexWideWide frag_in) : TARGET { float3 rgb_topleft = image.Sample(def_sampler, frag_in.uuvv.xz).rgb; float3 rgb_topright = image.Sample(def_sampler, frag_in.uuvv.yz).rgb; float3 rgb_bottomleft = image.Sample(def_sampler, frag_in.uuvv.xw).rgb; float3 rgb_bottomright = image.Sample(def_sampler, frag_in.uuvv.yw).rgb; float3 rgb = (rgb_topleft + rgb_topright + rgb_bottomleft + rgb_bottomright) * (0.25 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; return u * (1023. 
/ 65535.); } float PS_I010_HLG_U_709_2020_WideWide(FragTexWideWide frag_in) : TARGET { float3 rgb_topleft = image.Sample(def_sampler, frag_in.uuvv.xz).rgb; float3 rgb_topright = image.Sample(def_sampler, frag_in.uuvv.yz).rgb; float3 rgb_bottomleft = image.Sample(def_sampler, frag_in.uuvv.xw).rgb; float3 rgb_bottomright = image.Sample(def_sampler, frag_in.uuvv.yw).rgb; float3 rgb = (rgb_topleft + rgb_topright + rgb_bottomleft + rgb_bottomright) * (0.25 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; return u * (1023. / 65535.); } float PS_I010_SRGB_U_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * 0.5; rgb = srgb_linear_to_nonlinear(rgb); float u = dot(color_vec1.xyz, rgb) + color_vec1.w; return u * (1023. / 65535.); } float PS_I010_PQ_V_709_2020_WideWide(FragTexWideWide frag_in) : TARGET { float3 rgb_topleft = image.Sample(def_sampler, frag_in.uuvv.xz).rgb; float3 rgb_topright = image.Sample(def_sampler, frag_in.uuvv.yz).rgb; float3 rgb_bottomleft = image.Sample(def_sampler, frag_in.uuvv.xw).rgb; float3 rgb_bottomright = image.Sample(def_sampler, frag_in.uuvv.yw).rgb; float3 rgb = (rgb_topleft + rgb_topright + rgb_bottomleft + rgb_bottomright) * (0.25 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_st2084(rgb); float v = dot(color_vec2.xyz, rgb) + color_vec2.w; return v * (1023. 
/ 65535.); } float PS_I010_HLG_V_709_2020_WideWide(FragTexWideWide frag_in) : TARGET { float3 rgb_topleft = image.Sample(def_sampler, frag_in.uuvv.xz).rgb; float3 rgb_topright = image.Sample(def_sampler, frag_in.uuvv.yz).rgb; float3 rgb_bottomleft = image.Sample(def_sampler, frag_in.uuvv.xw).rgb; float3 rgb_bottomright = image.Sample(def_sampler, frag_in.uuvv.yw).rgb; float3 rgb = (rgb_topleft + rgb_topright + rgb_bottomleft + rgb_bottomright) * (0.25 * sdr_white_nits_over_maximum); rgb = rec709_to_rec2020(rgb); rgb = linear_to_hlg(rgb, hdr_lw); float v = dot(color_vec2.xyz, rgb) + color_vec2.w; return v * (1023. / 65535.); } float PS_I010_SRGB_V_Wide(FragTexWide frag_in) : TARGET { float3 rgb_left = image.Sample(def_sampler, frag_in.uuv.xz).rgb; float3 rgb_right = image.Sample(def_sampler, frag_in.uuv.yz).rgb; float3 rgb = (rgb_left + rgb_right) * 0.5; rgb = srgb_linear_to_nonlinear(rgb); float v = dot(color_vec2.xyz, rgb) + color_vec2.w; return v * (1023. / 65535.); } float3 YUV_to_RGB(float3 yuv) { yuv = clamp(yuv, color_range_min, color_range_max); float r = dot(color_vec0.xyz, yuv) + color_vec0.w; float g = dot(color_vec1.xyz, yuv) + color_vec1.w; float b = dot(color_vec2.xyz, yuv) + color_vec2.w; return float3(r, g, b); } float3 PSUYVY_Reverse(FragTexTex frag_in) : TARGET { float2 y01 = image.Load(int3(frag_in.uvuv.xy, 0)).yw; float2 cbcr = image.Sample(def_sampler, frag_in.uvuv.zw, 0).zx; float leftover = frac(frag_in.uvuv.x); float y = (leftover < 0.5) ? y01.x : y01.y; float3 yuv = float3(y, cbcr); float3 rgb = YUV_to_RGB(yuv); return rgb; } float3 PSYUY2_Reverse(FragTexTex frag_in) : TARGET { float2 y01 = image.Load(int3(frag_in.uvuv.xy, 0)).zx; float2 cbcr = image.Sample(def_sampler, frag_in.uvuv.zw, 0).yw; float leftover = frac(frag_in.uvuv.x); float y = (leftover < 0.5) ? 
y01.x : y01.y; float3 yuv = float3(y, cbcr); float3 rgb = YUV_to_RGB(yuv); return rgb; } float4 PSYUY2_PQ_Reverse(FragTexTex frag_in) : TARGET { float2 y01 = image.Load(int3(frag_in.uvuv.xy, 0)).zx; float2 cbcr = image.Sample(def_sampler, frag_in.uvuv.zw, 0).yw; float leftover = frac(frag_in.uvuv.x); float y = (leftover < 0.5) ? y01.x : y01.y; float3 yuv = float3(y, cbcr); float3 pq = YUV_to_RGB(yuv); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSYUY2_HLG_Reverse(FragTexTex frag_in) : TARGET { float2 y01 = image.Load(int3(frag_in.uvuv.xy, 0)).zx; float2 cbcr = image.Sample(def_sampler, frag_in.uvuv.zw, 0).yw; float leftover = frac(frag_in.uvuv.x); float y = (leftover < 0.5) ? y01.x : y01.y; float3 yuv = float3(y, cbcr); float3 hlg = YUV_to_RGB(yuv); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float3 PSYVYU_Reverse(FragTexTex frag_in) : TARGET { float2 y01 = image.Load(int3(frag_in.uvuv.xy, 0)).zx; float2 cbcr = image.Sample(def_sampler, frag_in.uvuv.zw, 0).wy; float leftover = frac(frag_in.uvuv.x); float y = (leftover < 0.5) ? 
y01.x : y01.y; float3 yuv = float3(y, cbcr); float3 rgb = YUV_to_RGB(yuv); return rgb; } float3 PSPlanar420_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); float3 rgb = YUV_to_RGB(yuv); return rgb; } float4 PSPlanar420_PQ_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); float3 pq = YUV_to_RGB(yuv); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSPlanar420_HLG_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); float3 hlg = YUV_to_RGB(yuv); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSPlanar420A_Reverse(VertTexPos frag_in) : TARGET { int3 xy0_luma = int3(frag_in.pos.xy, 0); float y = image.Load(xy0_luma).x; float alpha = image3.Load(xy0_luma).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); float4 rgba = float4(YUV_to_RGB(yuv), alpha); return rgba; } float3 PSPlanar422_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); float3 rgb = YUV_to_RGB(yuv); return rgb; } float4 PSPlanar422_10LE_Reverse(VertTexPos frag_in) : TARGET { float y = 
image.Load(int3(frag_in.pos.xy, 0)).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); yuv *= 65535. / 1023.; float3 rgb = YUV_to_RGB(yuv); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, 1.); } float4 PSPlanar422_10LE_PQ_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); yuv *= 65535. / 1023.; float3 pq = YUV_to_RGB(yuv); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSPlanar422_10LE_HLG_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); yuv *= 65535. 
/ 1023.; float3 hlg = YUV_to_RGB(yuv); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSPlanar422A_Reverse(VertTexPos frag_in) : TARGET { int3 xy0_luma = int3(frag_in.pos.xy, 0); float y = image.Load(xy0_luma).x; float alpha = image3.Load(xy0_luma).x; float cb = image1.Sample(def_sampler, frag_in.uv).x; float cr = image2.Sample(def_sampler, frag_in.uv).x; float3 yuv = float3(y, cb, cr); float4 rgba = float4(YUV_to_RGB(yuv), alpha); return rgba; } float3 PSPlanar444_Reverse(FragPos frag_in) : TARGET { int3 xy0 = int3(frag_in.pos.xy, 0); float y = image.Load(xy0).x; float cb = image1.Load(xy0).x; float cr = image2.Load(xy0).x; float3 yuv = float3(y, cb, cr); float3 rgb = YUV_to_RGB(yuv); return rgb; } float4 PSPlanar444_12LE_Reverse(FragPos frag_in) : TARGET { int3 xy0 = int3(frag_in.pos.xy, 0); float y = image.Load(xy0).x; float cb = image1.Load(xy0).x; float cr = image2.Load(xy0).x; float3 yuv = float3(y, cb, cr); yuv *= 65535. / 4095.; float3 rgb = YUV_to_RGB(yuv); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, 1.); } float4 PSPlanar444_12LE_PQ_Reverse(FragPos frag_in) : TARGET { int3 xy0 = int3(frag_in.pos.xy, 0); float y = image.Load(xy0).x; float cb = image1.Load(xy0).x; float cr = image2.Load(xy0).x; float3 yuv = float3(y, cb, cr); yuv *= 65535. / 4095; float3 pq = YUV_to_RGB(yuv); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSPlanar444_12LE_HLG_Reverse(FragPos frag_in) : TARGET { int3 xy0 = int3(frag_in.pos.xy, 0); float y = image.Load(xy0).x; float cb = image1.Load(xy0).x; float cr = image2.Load(xy0).x; float3 yuv = float3(y, cb, cr); yuv *= 65535. 
/ 4095; float3 hlg = YUV_to_RGB(yuv); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSPlanar444A_Reverse(FragPos frag_in) : TARGET { int3 xy0 = int3(frag_in.pos.xy, 0); float y = image.Load(xy0).x; float cb = image1.Load(xy0).x; float cr = image2.Load(xy0).x; float alpha = image3.Load(xy0).x; float3 yuv = float3(y, cb, cr); float4 rgba = float4(YUV_to_RGB(yuv), alpha); return rgba; } float4 PSPlanar444A_12LE_Reverse(FragPos frag_in) : TARGET { int3 xy0 = int3(frag_in.pos.xy, 0); float y = image.Load(xy0).x; float cb = image1.Load(xy0).x; float cr = image2.Load(xy0).x; float alpha = image3.Load(xy0).x * 16.; float3 yuv = float3(y, cb, cr); yuv *= 65535. / 4095.; float3 rgb = YUV_to_RGB(yuv); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, alpha); } float4 PSAYUV_Reverse(FragPos frag_in) : TARGET { float4 yuva = image.Load(int3(frag_in.pos.xy, 0)); float4 rgba = float4(YUV_to_RGB(yuva.xyz), yuva.a); return rgba; } float3 PSNV12_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float2 cbcr = image1.Sample(def_sampler, frag_in.uv).xy; float3 yuv = float3(y, cbcr); float3 rgb = YUV_to_RGB(yuv); return rgb; } float4 PSNV12_PQ_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float2 cbcr = image1.Sample(def_sampler, frag_in.uv).xy; float3 yuv = float3(y, cbcr); float3 pq = YUV_to_RGB(yuv); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSNV12_HLG_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float2 cbcr = image1.Sample(def_sampler, frag_in.uv).xy; float3 yuv = float3(y, cbcr); float3 hlg = YUV_to_RGB(yuv); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = 
rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSI010_SRGB_Reverse(VertTexPos frag_in) : TARGET { float ratio = 65535. / 1023.; float y = image.Load(int3(frag_in.pos.xy, 0)).x * ratio; float cb = image1.Sample(def_sampler, frag_in.uv).x * ratio; float cr = image2.Sample(def_sampler, frag_in.uv).x * ratio; float3 yuv = float3(y, cb, cr); float3 rgb = YUV_to_RGB(yuv); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, 1.); } float4 PSI010_PQ_2020_709_Reverse(VertTexPos frag_in) : TARGET { float ratio = 65535. / 1023.; float y = image.Load(int3(frag_in.pos.xy, 0)).x * ratio; float cb = image1.Sample(def_sampler, frag_in.uv).x * ratio; float cr = image2.Sample(def_sampler, frag_in.uv).x * ratio; float3 yuv = float3(y, cb, cr); float3 pq = YUV_to_RGB(yuv); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSI010_HLG_2020_709_Reverse(VertTexPos frag_in) : TARGET { float ratio = 65535. / 1023.; float y = image.Load(int3(frag_in.pos.xy, 0)).x * ratio; float cb = image1.Sample(def_sampler, frag_in.uv).x * ratio; float cr = image2.Sample(def_sampler, frag_in.uv).x * ratio; float3 yuv = float3(y, cb, cr); float3 hlg = YUV_to_RGB(yuv); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSP010_SRGB_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float2 cbcr = image1.Sample(def_sampler, frag_in.uv).xy; float3 yuv_65535 = floor(float3(y, cbcr) * 65535. 
+ 0.5); float3 yuv_1023 = floor(yuv_65535 * 0.015625); float3 yuv = yuv_1023 / 1023.; float3 rgb = YUV_to_RGB(yuv); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, 1.); } float4 PSP010_PQ_2020_709_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float2 cbcr = image1.Sample(def_sampler, frag_in.uv).xy; float3 yuv_65535 = floor(float3(y, cbcr) * 65535. + 0.5); float3 yuv_1023 = floor(yuv_65535 * 0.015625); float3 yuv = yuv_1023 / 1023.; float3 pq = YUV_to_RGB(yuv); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSP010_HLG_2020_709_Reverse(VertTexPos frag_in) : TARGET { float y = image.Load(int3(frag_in.pos.xy, 0)).x; float2 cbcr = image1.Sample(def_sampler, frag_in.uv).xy; float3 yuv_65535 = floor(float3(y, cbcr) * 65535. + 0.5); float3 yuv_1023 = floor(yuv_65535 * 0.015625); float3 yuv = yuv_1023 / 1023.; float3 hlg = YUV_to_RGB(yuv); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float3 compute_v210_reverse(float2 pos) { uint x = uint(pos.x); uint packed_x = x % 6u; uint base_x = x / 6u * 4u; float y, cb, cr; if (packed_x == 0u) { float3 word0_rgb = image.Load(int3(base_x, pos.y, 0)).rgb; y = word0_rgb.y; cb = word0_rgb.x; cr = word0_rgb.z; } else if (packed_x == 1u) { float2 word0_rb = image.Load(int3(base_x, pos.y, 0)).rb; float2 word1_rg = image.Load(int3(base_x + 1u, pos.y, 0)).rg; y = word1_rg.x; cb = (word0_rb.x + word1_rg.y) * 0.5; cr = (word0_rb.y + image.Load(int3(base_x + 2u, pos.y, 0)).r) * 0.5; } else if (packed_x == 2u) { float2 word1_gb = image.Load(int3(base_x + 1u, pos.y, 0)).gb; y = word1_gb.y; cb = word1_gb.x; cr = image.Load(int3(base_x + 2u, pos.y, 0)).r; } else if (packed_x == 3u) { float2 word2_rb = image.Load(int3(base_x + 2u, pos.y, 0)).rb; y = image.Load(int3(base_x + 2u, pos.y, 
0)).g; cb = (image.Load(int3(base_x + 1u, pos.y, 0)).g + word2_rb.y) * 0.5; cr = (word2_rb.x + image.Load(int3(base_x + 3u, pos.y, 0)).g) * 0.5; } else if (packed_x == 4u) { float2 word3_rg = image.Load(int3(base_x + 3u, pos.y, 0)).rg; y = word3_rg.x; cb = image.Load(int3(base_x + 2u, pos.y, 0)).b; cr = word3_rg.y; } else { float2 word3_gb = image.Load(int3(base_x + 3u, pos.y, 0)).gb; y = word3_gb.y; cb = image.Load(int3(base_x + 2u, pos.y, 0)).b; cr = word3_gb.x; uint base_x_4 = base_x + 4u; if ((pos.x + 1.) < width) { float2 word4_rb = image.Load(int3(base_x_4, pos.y, 0)).rb; cb = (cb + word4_rb.x) * 0.5; cr = (cr + word4_rb.y) * 0.5; } } float3 yuv_65535 = floor(float3(y, cb, cr) * 65535. + 0.5); float3 yuv_1023 = floor(yuv_65535 * 0.015625); float3 yuv = yuv_1023 / 1023.; float3 rgb = YUV_to_RGB(yuv); return rgb; } float4 PSV210_SRGB_Reverse(FragPos frag_in) : TARGET { float3 rgb = compute_v210_reverse(frag_in.pos.xy); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, 1.); } float4 PSV210_PQ_2020_709_Reverse(FragPos frag_in) : TARGET { float3 pq = compute_v210_reverse(frag_in.pos.xy); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSV210_HLG_2020_709_Reverse(FragPos frag_in) : TARGET { float3 hlg = compute_v210_reverse(frag_in.pos.xy); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float3 PSY800_Limited(FragPos frag_in) : TARGET { float limited = image.Load(int3(frag_in.pos.xy, 0)).x; float full = (255.0 / 219.0) * limited - (16.0 / 219.0); return float3(full, full, full); } float3 PSY800_Full(FragPos frag_in) : TARGET { float3 full = image.Load(int3(frag_in.pos.xy, 0)).xxx; return full; } float4 PSRGB_Limited(FragPos frag_in) : TARGET { float4 rgba = image.Load(int3(frag_in.pos.xy, 0)); rgba.rgb = (255.0 / 219.0) * rgba.rgb - (16.0 /
219.0); return rgba; } float3 PSBGR3_Limited(FragPos frag_in) : TARGET { float x = frag_in.pos.x * 3.0; float y = frag_in.pos.y; float b = image.Load(int3(x - 1.0, y, 0)).x; float g = image.Load(int3(x, y, 0)).x; float r = image.Load(int3(x + 1.0, y, 0)).x; float3 rgb = float3(r, g, b); rgb = (255.0 / 219.0) * rgb - (16.0 / 219.0); return rgb; } float3 PSBGR3_Full(FragPos frag_in) : TARGET { float x = frag_in.pos.x * 3.0; float y = frag_in.pos.y; float b = image.Load(int3(x - 1.0, y, 0)).x; float g = image.Load(int3(x, y, 0)).x; float r = image.Load(int3(x + 1.0, y, 0)).x; float3 rgb = float3(r, g, b); return rgb; } float3 compute_r10l_reverse(float2 pos, bool limited) { float4 xyzw = image.Load(int3(pos, 0)).bgra; uint4 xyzw255 = uint4(mad(xyzw, 255., 0.5)); uint r = ((xyzw255.z & 0xC0u) >> 6) | (xyzw255.w << 2); uint g = ((xyzw255.y & 0xF0u) >> 4) | ((xyzw255.z & 0x3Fu) << 4); uint b = (xyzw255.x >> 2) | ((xyzw255.y & 0xFu) << 6); float3 rgb = float3(uint3(r, g, b)); if (limited) { rgb = mad(rgb, 1. / 876., -16.
/ 219.); } else { rgb /= 1023.; } return rgb; } float4 PSR10L_SRGB_Full_Reverse(FragPos frag_in) : TARGET { float3 rgb = compute_r10l_reverse(frag_in.pos.xy, false); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, 1.); } float4 PSR10L_PQ_2020_709_Full_Reverse(FragPos frag_in) : TARGET { float3 pq = compute_r10l_reverse(frag_in.pos.xy, false); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSR10L_HLG_2020_709_Full_Reverse(FragPos frag_in) : TARGET { float3 hlg = compute_r10l_reverse(frag_in.pos.xy, false); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSR10L_SRGB_Limited_Reverse(FragPos frag_in) : TARGET { float3 rgb = compute_r10l_reverse(frag_in.pos.xy, true); rgb = srgb_nonlinear_to_linear(rgb); return float4(rgb, 1.); } float4 PSR10L_PQ_2020_709_Limited_Reverse(FragPos frag_in) : TARGET { float3 pq = compute_r10l_reverse(frag_in.pos.xy, true); float3 hdr2020 = st2084_to_linear_eetf(pq, hdr_lw, hdr_lmax) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } float4 PSR10L_HLG_2020_709_Limited_Reverse(FragPos frag_in) : TARGET { float3 hlg = compute_r10l_reverse(frag_in.pos.xy, true); float3 hdr2020 = hlg_to_linear(hlg, hlg_exponent) * maximum_over_sdr_white_nits; float3 rgb = rec2020_to_rec709(hdr2020); return float4(rgb, 1.); } technique Planar_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_Y(frag_in); } } technique Planar_U { pass { vertex_shader = VSPos(id); pixel_shader = PS_U(frag_in); } } technique Planar_V { pass { vertex_shader = VSPos(id); pixel_shader = PS_V(frag_in); } } technique Planar_U_Left { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_U_Wide(frag_in); } } technique Planar_V_Left { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = 
PS_V_Wide(frag_in); } } technique NV12_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_Y(frag_in); } } technique NV12_UV { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_UV_Wide(frag_in); } } technique I010_PQ_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_I010_PQ_Y_709_2020(frag_in); } } technique I010_HLG_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_I010_HLG_Y_709_2020(frag_in); } } technique I010_SRGB_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_I010_SRGB_Y(frag_in); } } technique I010_PQ_U { pass { vertex_shader = VSTexPos_TopLeft(id); pixel_shader = PS_I010_PQ_U_709_2020_WideWide(frag_in); } } technique I010_HLG_U { pass { vertex_shader = VSTexPos_TopLeft(id); pixel_shader = PS_I010_HLG_U_709_2020_WideWide(frag_in); } } technique I010_SRGB_U { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_I010_SRGB_U_Wide(frag_in); } } technique I010_PQ_V { pass { vertex_shader = VSTexPos_TopLeft(id); pixel_shader = PS_I010_PQ_V_709_2020_WideWide(frag_in); } } technique I010_HLG_V { pass { vertex_shader = VSTexPos_TopLeft(id); pixel_shader = PS_I010_HLG_V_709_2020_WideWide(frag_in); } } technique I010_SRGB_V { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_I010_SRGB_V_Wide(frag_in); } } technique P010_PQ_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P010_PQ_Y_709_2020(frag_in); } } technique P010_HLG_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P010_HLG_Y_709_2020(frag_in); } } technique P010_SRGB_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P010_SRGB_Y(frag_in); } } technique P010_PQ_UV { pass { vertex_shader = VSTexPos_TopLeft(id); pixel_shader = PS_P010_PQ_UV_709_2020_WideWide(frag_in); } } technique P010_HLG_UV { pass { vertex_shader = VSTexPos_TopLeft(id); pixel_shader = PS_P010_HLG_UV_709_2020_WideWide(frag_in); } } technique P010_SRGB_UV { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_P010_SRGB_UV_Wide(frag_in); } } technique P216_PQ_Y { pass { 
vertex_shader = VSPos(id); pixel_shader = PS_P216_PQ_Y_709_2020(frag_in); } } technique P216_HLG_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P216_HLG_Y_709_2020(frag_in); } } technique P216_SRGB_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P216_SRGB_Y(frag_in); } } technique P216_PQ_UV { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_P216_PQ_UV_709_2020_Wide(frag_in); } } technique P216_HLG_UV { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_P216_HLG_UV_709_2020_Wide(frag_in); } } technique P216_SRGB_UV { pass { vertex_shader = VSTexPos_Left(id); pixel_shader = PS_P216_SRGB_UV_Wide(frag_in); } } technique P416_PQ_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P416_PQ_Y_709_2020(frag_in); } } technique P416_HLG_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P416_HLG_Y_709_2020(frag_in); } } technique P416_SRGB_Y { pass { vertex_shader = VSPos(id); pixel_shader = PS_P416_SRGB_Y(frag_in); } } technique P416_PQ_UV { pass { vertex_shader = VSPos(id); pixel_shader = PS_P416_PQ_UV_709_2020(frag_in); } } technique P416_HLG_UV { pass { vertex_shader = VSPos(id); pixel_shader = PS_P416_HLG_UV_709_2020(frag_in); } } technique P416_SRGB_UV { pass { vertex_shader = VSPos(id); pixel_shader = PS_P416_SRGB_UV(frag_in); } } technique UYVY_Reverse { pass { vertex_shader = VSPacked422Left_Reverse(id); pixel_shader = PSUYVY_Reverse(frag_in); } } technique YUY2_Reverse { pass { vertex_shader = VSPacked422Left_Reverse(id); pixel_shader = PSYUY2_Reverse(frag_in); } } technique YUY2_PQ_Reverse { pass { vertex_shader = VSPacked422Left_Reverse(id); pixel_shader = PSYUY2_PQ_Reverse(frag_in); } } technique YUY2_HLG_Reverse { pass { vertex_shader = VSPacked422Left_Reverse(id); pixel_shader = PSYUY2_HLG_Reverse(frag_in); } } technique YVYU_Reverse { pass { vertex_shader = VSPacked422Left_Reverse(id); pixel_shader = PSYVYU_Reverse(frag_in); } } technique I420_Reverse { pass { vertex_shader = VS420Left_Reverse(id); pixel_shader = 
PSPlanar420_Reverse(frag_in); } } technique I420_PQ_Reverse { pass { vertex_shader = VS420TopLeft_Reverse(id); pixel_shader = PSPlanar420_PQ_Reverse(frag_in); } } technique I420_HLG_Reverse { pass { vertex_shader = VS420TopLeft_Reverse(id); pixel_shader = PSPlanar420_HLG_Reverse(frag_in); } } technique I40A_Reverse { pass { vertex_shader = VS420Left_Reverse(id); pixel_shader = PSPlanar420A_Reverse(frag_in); } } technique I422_Reverse { pass { vertex_shader = VS422Left_Reverse(id); pixel_shader = PSPlanar422_Reverse(frag_in); } } technique I210_Reverse { pass { vertex_shader = VS422Left_Reverse(id); pixel_shader = PSPlanar422_10LE_Reverse(frag_in); } } technique I210_PQ_Reverse { pass { vertex_shader = VS422Left_Reverse(id); pixel_shader = PSPlanar422_10LE_PQ_Reverse(frag_in); } } technique I210_HLG_Reverse { pass { vertex_shader = VS422Left_Reverse(id); pixel_shader = PSPlanar422_10LE_HLG_Reverse(frag_in); } } technique I42A_Reverse { pass { vertex_shader = VS422Left_Reverse(id); pixel_shader = PSPlanar422A_Reverse(frag_in); } } technique I444_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSPlanar444_Reverse(frag_in); } } technique I412_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSPlanar444_12LE_Reverse(frag_in); } } technique I412_PQ_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSPlanar444_12LE_PQ_Reverse(frag_in); } } technique I412_HLG_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSPlanar444_12LE_HLG_Reverse(frag_in); } } technique YUVA_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSPlanar444A_Reverse(frag_in); } } technique YA2L_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSPlanar444A_12LE_Reverse(frag_in); } } technique AYUV_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSAYUV_Reverse(frag_in); } } technique NV12_Reverse { pass { vertex_shader = VS420Left_Reverse(id); pixel_shader = PSNV12_Reverse(frag_in); } } technique NV12_PQ_Reverse { pass { vertex_shader = 
VS420TopLeft_Reverse(id); pixel_shader = PSNV12_PQ_Reverse(frag_in); } } technique NV12_HLG_Reverse { pass { vertex_shader = VS420TopLeft_Reverse(id); pixel_shader = PSNV12_HLG_Reverse(frag_in); } } technique I010_SRGB_Reverse { pass { vertex_shader = VS420Left_Reverse(id); pixel_shader = PSI010_SRGB_Reverse(frag_in); } } technique I010_PQ_2020_709_Reverse { pass { vertex_shader = VS420TopLeft_Reverse(id); pixel_shader = PSI010_PQ_2020_709_Reverse(frag_in); } } technique I010_HLG_2020_709_Reverse { pass { vertex_shader = VS420TopLeft_Reverse(id); pixel_shader = PSI010_HLG_2020_709_Reverse(frag_in); } } technique P010_SRGB_Reverse { pass { vertex_shader = VS420Left_Reverse(id); pixel_shader = PSP010_SRGB_Reverse(frag_in); } } technique P010_PQ_2020_709_Reverse { pass { vertex_shader = VS420TopLeft_Reverse(id); pixel_shader = PSP010_PQ_2020_709_Reverse(frag_in); } } technique P010_HLG_2020_709_Reverse { pass { vertex_shader = VS420TopLeft_Reverse(id); pixel_shader = PSP010_HLG_2020_709_Reverse(frag_in); } } technique V210_SRGB_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSV210_SRGB_Reverse(frag_in); } } technique V210_PQ_2020_709_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSV210_PQ_2020_709_Reverse(frag_in); } } technique V210_HLG_2020_709_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSV210_HLG_2020_709_Reverse(frag_in); } } technique Y800_Limited { pass { vertex_shader = VSPos(id); pixel_shader = PSY800_Limited(frag_in); } } technique Y800_Full { pass { vertex_shader = VSPos(id); pixel_shader = PSY800_Full(frag_in); } } technique RGB_Limited { pass { vertex_shader = VSPos(id); pixel_shader = PSRGB_Limited(frag_in); } } technique BGR3_Limited { pass { vertex_shader = VSPos(id); pixel_shader = PSBGR3_Limited(frag_in); } } technique BGR3_Full { pass { vertex_shader = VSPos(id); pixel_shader = PSBGR3_Full(frag_in); } } technique R10L_SRGB_Full_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = 
PSR10L_SRGB_Full_Reverse(frag_in); } } technique R10L_PQ_2020_709_Full_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSR10L_PQ_2020_709_Full_Reverse(frag_in); } } technique R10L_HLG_2020_709_Full_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSR10L_HLG_2020_709_Full_Reverse(frag_in); } } technique R10L_SRGB_Limited_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSR10L_SRGB_Limited_Reverse(frag_in); } } technique R10L_PQ_2020_709_Limited_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSR10L_PQ_2020_709_Limited_Reverse(frag_in); } } technique R10L_HLG_2020_709_Limited_Reverse { pass { vertex_shader = VSPos(id); pixel_shader = PSR10L_HLG_2020_709_Limited_Reverse(frag_in); } } obs-studio-32.1.0-sources/libobs/data/repeat.effect000644 001751 001751 00000001161 15153330235 023054 0ustar00runnerrunner000000 000000 uniform float4x4 ViewProj; uniform texture2d image; uniform float2 scale; sampler_state def_sampler { Filter = Linear; AddressU = Repeat; AddressV = Repeat; }; struct VertInOut { float4 pos : POSITION; float2 uv : TEXCOORD0; }; VertInOut VSDefault(VertInOut vert_in) { VertInOut vert_out; vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj); vert_out.uv = vert_in.uv * scale; return vert_out; } float4 PSDrawBare(VertInOut vert_in) : TARGET { return image.Sample(def_sampler, vert_in.uv); } technique Draw { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawBare(vert_in); } } obs-studio-32.1.0-sources/libobs/data/deinterlace_linear.effect000644 001751 001751 00000002022 15153330235 025402 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * John R. Bradley * Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. 
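A note on the repeat.effect file above: its sampler declares AddressU = Repeat and AddressV = Repeat, so any UV produced by vert_in.uv * scale outside [0, 1) wraps around and the image tiles. A minimal Python sketch of that wrap rule (the wrap helper is hypothetical, for illustration only):

```python
import math

# Hypothetical model of the Repeat address mode used by repeat.effect:
# a texture coordinate keeps only its fractional part, so scaling UVs
# in the vertex shader tiles the image `scale` times.
def wrap(u):
    return u - math.floor(u)

print(wrap(1.25))   # 0.25: one full tile plus a quarter
print(wrap(-0.25))  # 0.75: negative coords wrap in from the far edge
```

This is what the GPU sampler does per coordinate before filtering; Clamp mode (used by the scaling effects below) pins the coordinate to [0, 1] instead.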
* * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "deinterlace_base.effect" TECHNIQUE(PSLinearRGBA, PSLinearRGBA_multiply, PSLinearRGBA_tonemap, PSLinearRGBA_multiply_tonemap); obs-studio-32.1.0-sources/libobs/data/bilinear_lowres_scale.effect000644 001751 001751 00000005230 15153330235 026124 0ustar00runnerrunner000000 000000 /* * bilinear low res scaling, samples 8 pixels of a larger image to scale to a * low resolution image below half size */ #include "color.effect" uniform float4x4 ViewProj; uniform texture2d image; uniform float multiplier; sampler_state textureSampler { Filter = Linear; AddressU = Clamp; AddressV = Clamp; }; struct VertData { float4 pos : POSITION; float2 uv : TEXCOORD0; }; VertData VSDefault(VertData v_in) { VertData vert_out; vert_out.pos = mul(float4(v_in.pos.xyz, 1.0), ViewProj); vert_out.uv = v_in.uv; return vert_out; } float4 pixel(float2 uv) { return image.Sample(textureSampler, uv); } float4 DrawLowresBilinear(VertData f_in) { float2 uv = f_in.uv; float2 stepxy = float2(ddx(uv.x), ddy(uv.y)); float2 stepxy1 = stepxy * 0.0625; float2 stepxy3 = stepxy * 0.1875; float2 stepxy5 = stepxy * 0.3125; float2 stepxy7 = stepxy * 0.4375; // Simulate Direct3D 8-sample pattern float4 out_color; out_color = pixel(uv + float2( stepxy1.x, -stepxy3.y)); out_color += pixel(uv + float2(-stepxy1.x, stepxy3.y)); out_color += pixel(uv + float2( stepxy5.x, stepxy1.y)); out_color += pixel(uv + float2(-stepxy3.x, -stepxy5.y)); out_color += pixel(uv + float2(-stepxy5.x, stepxy5.y)); out_color += pixel(uv 
+ float2(-stepxy7.x, -stepxy1.y)); out_color += pixel(uv + float2( stepxy3.x, stepxy7.y)); out_color += pixel(uv + float2( stepxy7.x, -stepxy7.y)); return out_color * 0.125; } float4 PSDrawLowresBilinearRGBA(VertData f_in) : TARGET { return DrawLowresBilinear(f_in); } float4 PSDrawLowresBilinearRGBAMultiply(VertData f_in) : TARGET { float4 rgba = DrawLowresBilinear(f_in); rgba.rgb *= multiplier; return rgba; } float4 PSDrawLowresBilinearRGBATonemap(VertData f_in) : TARGET { float4 rgba = DrawLowresBilinear(f_in); rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } float4 PSDrawLowresBilinearRGBAMultiplyTonemap(VertData f_in) : TARGET { float4 rgba = DrawLowresBilinear(f_in); rgba.rgb *= multiplier; rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } technique Draw { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLowresBilinearRGBA(f_in); } } technique DrawMultiply { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLowresBilinearRGBAMultiply(f_in); } } technique DrawTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLowresBilinearRGBATonemap(f_in); } } technique DrawMultiplyTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawLowresBilinearRGBAMultiplyTonemap(f_in); } } obs-studio-32.1.0-sources/libobs/data/bicubic_scale.effect000644 001751 001751 00000013127 15153330235 024350 0ustar00runnerrunner000000 000000 /* * bicubic sharper (better for downscaling) * note - this shader is adapted from the GPL bsnes shader, very good stuff * there. 
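The weight4() function below evaluates the four taps of a B = 0, C = 0.75 cubic filter at fractional phase x. For any x in [0, 1] the taps form a partition of unity, which is why the resampler preserves overall brightness. A hedged Python transcription (a sketch for checking the algebra, not part of the shader):

```python
# Illustrative Python transcription of this shader's weight4() taps
# (B = 0, C = 0.75 cubic). For any phase x in [0, 1] the four weights
# sum to exactly 1, so resampling preserves average brightness.
def weight4(x):
    return (
        ((-0.75 * x + 1.5) * x - 0.75) * x,
        (1.25 * x - 2.25) * x * x + 1.0,
        ((-1.25 * x + 1.5) * x + 0.75) * x,
        (0.75 * x - 0.75) * x * x,
    )

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(sum(weight4(x)) - 1.0) < 1e-12
```

Collecting the cubic, quadratic, and linear coefficients across the four polynomials gives 0 in each case and a constant term of 1, confirming the partition-of-unity property the shader relies on.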
*/ #include "color.effect" uniform float4x4 ViewProj; uniform texture2d image; uniform float2 base_dimension; uniform float2 base_dimension_i; uniform float undistort_factor = 1.0; uniform float multiplier; sampler_state textureSampler { Filter = Linear; AddressU = Clamp; AddressV = Clamp; }; struct VertData { float4 pos : POSITION; float2 uv : TEXCOORD0; }; struct VertOut { float2 uv : TEXCOORD0; float4 pos : POSITION; }; struct FragData { float2 uv : TEXCOORD0; }; VertOut VSDefault(VertData v_in) { VertOut vert_out; vert_out.uv = v_in.uv * base_dimension; vert_out.pos = mul(float4(v_in.pos.xyz, 1.0), ViewProj); return vert_out; } float4 weight4(float x) { /* Sharper version. May look better in some cases. B=0, C=0.75 */ return float4( ((-0.75 * x + 1.5) * x - 0.75) * x, (1.25 * x - 2.25) * x * x + 1.0, ((-1.25 * x + 1.5) * x + 0.75) * x, (0.75 * x - 0.75) * x * x); } float AspectUndistortX(float x, float a) { // The higher the power, the longer the linear part will be. return (1.0 - a) * (x * x * x * x * x) + a * x; } float AspectUndistortU(float u) { // Normalize texture coord to -1.0 to 1.0 range, and back. 
return AspectUndistortX((u - 0.5) * 2.0, undistort_factor) * 0.5 + 0.5; } float2 undistort_coord(float xpos, float ypos) { return float2(AspectUndistortU(xpos), ypos); } float4 undistort_pixel(float xpos, float ypos) { return image.Sample(textureSampler, undistort_coord(xpos, ypos)); } float4 undistort_line(float4 xpos, float ypos, float4 rowtaps) { return undistort_pixel(xpos.x, ypos) * rowtaps.x + undistort_pixel(xpos.y, ypos) * rowtaps.y + undistort_pixel(xpos.z, ypos) * rowtaps.z + undistort_pixel(xpos.w, ypos) * rowtaps.w; } float4 DrawBicubic(FragData f_in, bool undistort) { float2 pos = f_in.uv; float2 pos1 = floor(pos - 0.5) + 0.5; float2 f = pos - pos1; float4 rowtaps = weight4(f.x); float4 coltaps = weight4(f.y); float2 uv1 = pos1 * base_dimension_i; float2 uv0 = uv1 - base_dimension_i; float2 uv2 = uv1 + base_dimension_i; float2 uv3 = uv2 + base_dimension_i; if (undistort) { float4 xpos = float4(uv0.x, uv1.x, uv2.x, uv3.x); return undistort_line(xpos, uv0.y, rowtaps) * coltaps.x + undistort_line(xpos, uv1.y, rowtaps) * coltaps.y + undistort_line(xpos, uv2.y, rowtaps) * coltaps.z + undistort_line(xpos, uv3.y, rowtaps) * coltaps.w; } float u_weight_sum = rowtaps.y + rowtaps.z; float u_middle_offset = rowtaps.z * base_dimension_i.x / u_weight_sum; float u_middle = uv1.x + u_middle_offset; float v_weight_sum = coltaps.y + coltaps.z; float v_middle_offset = coltaps.z * base_dimension_i.y / v_weight_sum; float v_middle = uv1.y + v_middle_offset; int2 coord_top_left = int2(max(uv0 * base_dimension, 0.5)); int2 coord_bottom_right = int2(min(uv3 * base_dimension, base_dimension - 0.5)); float4 top = image.Load(int3(coord_top_left, 0)) * rowtaps.x; top += image.Sample(textureSampler, float2(u_middle, uv0.y)) * u_weight_sum; top += image.Load(int3(coord_bottom_right.x, coord_top_left.y, 0)) * rowtaps.w; float4 total = top * coltaps.x; float4 middle = image.Sample(textureSampler, float2(uv0.x, v_middle)) * rowtaps.x; middle += image.Sample(textureSampler, 
float2(u_middle, v_middle)) * u_weight_sum; middle += image.Sample(textureSampler, float2(uv3.x, v_middle)) * rowtaps.w; total += middle * v_weight_sum; float4 bottom = image.Load(int3(coord_top_left.x, coord_bottom_right.y, 0)) * rowtaps.x; bottom += image.Sample(textureSampler, float2(u_middle, uv3.y)) * u_weight_sum; bottom += image.Load(int3(coord_bottom_right, 0)) * rowtaps.w; total += bottom * coltaps.w; return total; } float4 PSDrawBicubicRGBA(FragData f_in, bool undistort) : TARGET { return DrawBicubic(f_in, undistort); } float4 PSDrawBicubicRGBAMultiply(FragData f_in, bool undistort) : TARGET { float4 rgba = DrawBicubic(f_in, undistort); rgba.rgb *= multiplier; return rgba; } float4 PSDrawBicubicRGBATonemap(FragData f_in, bool undistort) : TARGET { float4 rgba = DrawBicubic(f_in, undistort); rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } float4 PSDrawBicubicRGBAMultiplyTonemap(FragData f_in, bool undistort) : TARGET { float4 rgba = DrawBicubic(f_in, undistort); rgba.rgb *= multiplier; rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } technique Draw { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBA(f_in, false); } } technique DrawMultiply { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBAMultiply(f_in, false); } } technique DrawTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBATonemap(f_in, false); } } technique DrawMultiplyTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBAMultiplyTonemap(f_in, false); } } technique DrawUndistort { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBA(f_in, true); } } technique DrawUndistortMultiply { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBAMultiply(f_in, true); } } technique DrawUndistortTonemap { pass { 
vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBATonemap(f_in, true); } } technique DrawUndistortMultiplyTonemap { pass { vertex_shader = VSDefault(v_in); pixel_shader = PSDrawBicubicRGBAMultiplyTonemap(f_in, true); } } obs-studio-32.1.0-sources/libobs/data/area.effect000644 001751 001751 00000013017 15153330235 022507 0ustar00runnerrunner000000 000000 #include "color.effect" uniform float4x4 ViewProj; uniform float2 base_dimension; uniform float2 base_dimension_i; uniform texture2d image; uniform float multiplier; sampler_state textureSampler { Filter = Linear; AddressU = Clamp; AddressV = Clamp; }; struct VertData { float4 pos : POSITION; float2 uv : TEXCOORD0; }; struct VertInOut { float2 uv : TEXCOORD0; float4 pos : POSITION; }; struct FragData { float2 uv : TEXCOORD0; }; VertInOut VSDefault(VertData vert_in) { VertInOut vert_out; vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj); vert_out.uv = vert_in.uv; return vert_out; } float4 DrawArea(FragData frag_in) { float2 uv = frag_in.uv; float2 uv_delta = float2(ddx(uv.x), ddy(uv.y)); // Handle potential OpenGL flip. 
if (obs_glsl_compile) uv_delta.y = abs(uv_delta.y); float2 uv_min = uv - 0.5 * uv_delta; float2 uv_max = uv_min + uv_delta; float2 load_index_begin = floor(uv_min * base_dimension); float2 load_index_end = ceil(uv_max * base_dimension); float2 target_dimension = 1.0 / uv_delta; float2 target_pos = uv * target_dimension; float2 target_pos_min = target_pos - 0.5; float2 target_pos_max = target_pos + 0.5; float2 scale = base_dimension_i * target_dimension; float4 total_color = float4(0.0, 0.0, 0.0, 0.0); float load_index_y = load_index_begin.y; do { float source_y_min = load_index_y * scale.y; float source_y_max = source_y_min + scale.y; float y_min = max(source_y_min, target_pos_min.y); float y_max = min(source_y_max, target_pos_max.y); float height = y_max - y_min; float load_index_x = load_index_begin.x; do { float source_x_min = load_index_x * scale.x; float source_x_max = source_x_min + scale.x; float x_min = max(source_x_min, target_pos_min.x); float x_max = min(source_x_max, target_pos_max.x); float width = x_max - x_min; float area = width * height; float4 color = image.Load(int3(load_index_x, load_index_y, 0)); total_color += area * color; ++load_index_x; } while (load_index_x < load_index_end.x); ++load_index_y; } while (load_index_y < load_index_end.y); return total_color; } float4 PSDrawAreaRGBA(FragData frag_in) : TARGET { return DrawArea(frag_in); } float4 PSDrawAreaRGBAMultiply(FragData frag_in) : TARGET { float4 rgba = DrawArea(frag_in); rgba.rgb *= multiplier; return rgba; } float4 PSDrawAreaRGBATonemap(FragData frag_in) : TARGET { float4 rgba = DrawArea(frag_in); rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } float4 PSDrawAreaRGBAMultiplyTonemap(FragData frag_in) : TARGET { float4 rgba = DrawArea(frag_in); rgba.rgb *= multiplier; rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } float4 
DrawAreaUpscale(FragData frag_in) { float2 uv = frag_in.uv; float2 uv_delta = float2(ddx(uv.x), ddy(uv.y)); // Handle potential OpenGL flip. if (obs_glsl_compile) uv_delta.y = abs(uv_delta.y); float2 uv_min = uv - 0.5 * uv_delta; float2 uv_max = uv_min + uv_delta; float2 load_index_first = floor(uv_min * base_dimension); float2 load_index_last = ceil(uv_max * base_dimension) - 1.0; if (load_index_first.x < load_index_last.x) { float uv_boundary_x = load_index_last.x * base_dimension_i.x; uv.x = ((uv.x - uv_boundary_x) / uv_delta.x) * base_dimension_i.x + uv_boundary_x; } else uv.x = (load_index_first.x + 0.5) * base_dimension_i.x; if (load_index_first.y < load_index_last.y) { float uv_boundary_y = load_index_last.y * base_dimension_i.y; uv.y = ((uv.y - uv_boundary_y) / uv_delta.y) * base_dimension_i.y + uv_boundary_y; } else uv.y = (load_index_first.y + 0.5) * base_dimension_i.y; return image.Sample(textureSampler, uv); } float4 PSDrawAreaRGBAUpscale(FragData frag_in) : TARGET { return DrawAreaUpscale(frag_in); } float4 PSDrawAreaRGBAUpscaleMultiply(FragData frag_in) : TARGET { float4 rgba = DrawAreaUpscale(frag_in); rgba.rgb *= multiplier; return rgba; } float4 PSDrawAreaRGBAUpscaleTonemap(FragData frag_in) : TARGET { float4 rgba = DrawAreaUpscale(frag_in); rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } float4 PSDrawAreaRGBAUpscaleMultiplyTonemap(FragData frag_in) : TARGET { float4 rgba = DrawAreaUpscale(frag_in); rgba.rgb *= multiplier; rgba.rgb = rec709_to_rec2020(rgba.rgb); rgba.rgb = reinhard(rgba.rgb); rgba.rgb = rec2020_to_rec709(rgba.rgb); return rgba; } technique Draw { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAreaRGBA(frag_in); } } technique DrawMultiply { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAreaRGBAMultiply(frag_in); } } technique DrawTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = 
PSDrawAreaRGBATonemap(frag_in); } } technique DrawMultiplyTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAreaRGBAMultiplyTonemap(frag_in); } } technique DrawUpscale { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAreaRGBAUpscale(frag_in); } } technique DrawUpscaleMultiply { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAreaRGBAUpscaleMultiply(frag_in); } } technique DrawUpscaleTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAreaRGBAUpscaleTonemap(frag_in); } } technique DrawUpscaleMultiplyTonemap { pass { vertex_shader = VSDefault(vert_in); pixel_shader = PSDrawAreaRGBAUpscaleMultiplyTonemap(frag_in); } } obs-studio-32.1.0-sources/libobs/data/deinterlace_linear_2x.effect000644 001751 001751 00000002036 15153330235 026020 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * John R. Bradley * Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "deinterlace_base.effect" TECHNIQUE(PSLinearRGBA_2x, PSLinearRGBA_2x_multiply, PSLinearRGBA_2x_tonemap, PSLinearRGBA_2x_multiply_tonemap); obs-studio-32.1.0-sources/libobs/data/deinterlace_discard.effect000644 001751 001751 00000002026 15153330235 025545 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * John R. 
Bradley * Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "deinterlace_base.effect" TECHNIQUE(PSDiscardRGBA, PSDiscardRGBA_multiply, PSDiscardRGBA_tonemap, PSDiscardRGBA_multiply_tonemap); obs-studio-32.1.0-sources/libobs/data/deinterlace_blend_2x.effect000644 001751 001751 00000002032 15153330235 025626 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * John R. Bradley * Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #include "deinterlace_base.effect" TECHNIQUE(PSBlendRGBA_2x, PSBlendRGBA_2x_multiply, PSBlendRGBA_2x_tonemap, PSBlendRGBA_2x_multiply_tonemap); obs-studio-32.1.0-sources/libobs/data/color.effect000644 001751 001751 00000012457 15153330235 022724 0ustar00runnerrunner000000 000000 float srgb_linear_to_nonlinear_channel(float u) { return (u <= 0.0031308) ? (12.92 * u) : ((1.055 * pow(u, 1. / 2.4)) - 0.055); } float3 srgb_linear_to_nonlinear(float3 v) { return float3(srgb_linear_to_nonlinear_channel(v.r), srgb_linear_to_nonlinear_channel(v.g), srgb_linear_to_nonlinear_channel(v.b)); } float srgb_nonlinear_to_linear_channel(float u) { return (u <= 0.04045) ? (u / 12.92) : pow(mad(u, 1. / 1.055, .055 / 1.055), 2.4); } float3 srgb_nonlinear_to_linear(float3 v) { return float3(srgb_nonlinear_to_linear_channel(v.r), srgb_nonlinear_to_linear_channel(v.g), srgb_nonlinear_to_linear_channel(v.b)); } float3 rec709_to_rec2020(float3 v) { float r = dot(v, float3(0.62740389593469903, 0.32928303837788370, 0.043313065687417225)); float g = dot(v, float3(0.069097289358232075, 0.91954039507545871, 0.011362315566309178)); float b = dot(v, float3(0.016391438875150280, 0.088013307877225749, 0.89559525324762401)); return float3(r, g, b); } float3 d65p3_to_rec709(float3 v) { float r = dot(v, float3(1.2249401762805598, -0.22494017628055996, 0.)); float g = dot(v, float3(-0.042056954709688163, 1.0420569547096881, 0.)); float b = dot(v, float3(-0.019637554590334432, -0.078636045550631889, 1.0982736001409663)); return float3(r, g, b); } float3 rec2020_to_rec709(float3 v) { float r = dot(v, float3(1.6604910021084345, -0.58764113878854951, -0.072849863319884883)); float g = dot(v, float3(-0.12455047452159074, 1.1328998971259603, -0.0083494226043694768)); float b = dot(v, float3(-0.018150763354905303, -0.10057889800800739, 1.1187296613629127)); return float3(r, g, b); } float3 reinhard(float3 rgb) { rgb /= rgb + float3(1., 1., 1.); rgb = saturate(rgb); rgb = pow(rgb, float3(1. / 2.4, 1. 
/ 2.4, 1. / 2.4)); rgb = srgb_nonlinear_to_linear(rgb); return rgb; } float linear_to_st2084_channel(float x) { float c = pow(abs(x), 0.1593017578); return pow((0.8359375 + 18.8515625 * c) / (1. + 18.6875 * c), 78.84375); } float3 linear_to_st2084(float3 rgb) { return float3(linear_to_st2084_channel(rgb.r), linear_to_st2084_channel(rgb.g), linear_to_st2084_channel(rgb.b)); } float st2084_to_linear_channel(float u) { float c = pow(abs(u), 1. / 78.84375); return pow(abs(max(c - 0.8359375, 0.) / (18.8515625 - 18.6875 * c)), 1. / 0.1593017578); } float3 st2084_to_linear(float3 rgb) { return float3(st2084_to_linear_channel(rgb.r), st2084_to_linear_channel(rgb.g), st2084_to_linear_channel(rgb.b)); } float eetf_0_Lmax(float maxRGB1_pq, float Lw, float Lmax) { float Lw_pq = linear_to_st2084_channel(Lw / 10000.); float E1 = saturate(maxRGB1_pq / Lw_pq); // Ensure normalization in case Lw is a lie float maxLum = linear_to_st2084_channel(Lmax / 10000.) / Lw_pq; float KS = (1.5 * maxLum) - 0.5; float E2 = E1; if (E1 > KS) { float T = (E1 - KS) / (1. - KS); float Tsquared = T * T; float Tcubed = Tsquared * T; float P = (2. * Tcubed - 3. * Tsquared + 1.) * KS + (Tcubed - 2. * Tsquared + T) * (1. - KS) + (-2. * Tcubed + 3. 
* Tsquared) * maxLum; E2 = P; } float E3 = E2; float E4 = E3 * Lw_pq; return E4; } float3 maxRGB_eetf_internal(float3 rgb_linear, float maxRGB1_linear, float maxRGB1_pq, float Lw, float Lmax) { float maxRGB2_pq = eetf_0_Lmax(maxRGB1_pq, Lw, Lmax); float maxRGB2_linear = st2084_to_linear_channel(maxRGB2_pq); // avoid divide-by-zero possibility maxRGB1_linear = max(6.10352e-5, maxRGB1_linear); rgb_linear *= maxRGB2_linear / maxRGB1_linear; return rgb_linear; } float3 maxRGB_eetf_pq_to_linear(float3 rgb_pq, float Lw, float Lmax) { float3 rgb_linear = st2084_to_linear(rgb_pq); float maxRGB1_linear = max(max(rgb_linear.r, rgb_linear.g), rgb_linear.b); float maxRGB1_pq = max(max(rgb_pq.r, rgb_pq.g), rgb_pq.b); return maxRGB_eetf_internal(rgb_linear, maxRGB1_linear, maxRGB1_pq, Lw, Lmax); } float3 maxRGB_eetf_linear_to_linear(float3 rgb_linear, float Lw, float Lmax) { float maxRGB1_linear = max(max(rgb_linear.r, rgb_linear.g), rgb_linear.b); float maxRGB1_pq = linear_to_st2084_channel(maxRGB1_linear); return maxRGB_eetf_internal(rgb_linear, maxRGB1_linear, maxRGB1_pq, Lw, Lmax); } float3 st2084_to_linear_eetf(float3 rgb, float Lw, float Lmax) { return (Lw > Lmax) ? maxRGB_eetf_pq_to_linear(rgb, Lw, Lmax) : st2084_to_linear(rgb); } float linear_to_hlg_channel(float u) { float ln2_i = 1. / log(2.); float m = 0.17883277 / ln2_i; return (u <= (1. / 12.)) ? sqrt(3. * u) : ((log2((12. * u) - 0.28466892) * m) + 0.55991073); } float3 linear_to_hlg(float3 rgb, float Lw) { rgb = saturate(rgb); if (Lw > 1000.) { rgb = maxRGB_eetf_linear_to_linear(rgb, Lw, 1000.); rgb *= 10000. / Lw; } else { rgb *= 10.; } float Yd = dot(rgb, float3(0.2627, 0.678, 0.0593)); // avoid inf from pow(0., negative) by using smallest positive normal number Yd = max(6.10352e-5, Yd); rgb *= pow(Yd, -1. / 6.); return float3(linear_to_hlg_channel(rgb.r), linear_to_hlg_channel(rgb.g), linear_to_hlg_channel(rgb.b)); } float hlg_to_linear_channel(float u) { float ln2_i = 1. 
/ log(2.); float m = ln2_i / 0.17883277; float a = -ln2_i * 0.55991073 / 0.17883277; return (u <= 0.5) ? ((u * u) / 3.) : ((exp2(u * m + a) + 0.28466892) / 12.); } float3 hlg_to_linear(float3 v, float exponent) { float3 rgb = float3(hlg_to_linear_channel(v.r), hlg_to_linear_channel(v.g), hlg_to_linear_channel(v.b)); float Ys = dot(rgb, float3(0.2627, 0.678, 0.0593)); rgb *= pow(Ys, exponent); return rgb; } obs-studio-32.1.0-sources/libobs/data/deinterlace_blend.effect000644 001751 001751 00000002016 15153330235 025217 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * John R. Bradley * Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "deinterlace_base.effect" TECHNIQUE(PSBlendRGBA, PSBlendRGBA_multiply, PSBlendRGBA_tonemap, PSBlendRGBA_multiply_tonemap); obs-studio-32.1.0-sources/libobs/data/deinterlace_base.effect000644 001751 001751 00000021021 15153330235 025042 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * John R. Bradley * Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. 
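The color.effect file above defines the SMPTE ST 2084 (PQ) transfer pair; linear_to_st2084_channel and st2084_to_linear_channel are exact algebraic inverses. A hedged Python mirror using the same constants (helper names are mine, not from the shader; signals are normalized so 1.0 corresponds to 10000 nits):

```python
# Hypothetical Python mirror of linear_to_st2084_channel and
# st2084_to_linear_channel from color.effect, using the same ST 2084
# constants. 1.0 in linear light corresponds to 10000 nits.
M1, M2 = 0.1593017578, 78.84375
C1, C2, C3 = 0.8359375, 18.8515625, 18.6875

def linear_to_st2084(x):
    c = abs(x) ** M1
    return ((C1 + C2 * c) / (1.0 + C3 * c)) ** M2

def st2084_to_linear(u):
    c = abs(u) ** (1.0 / M2)
    return (max(c - C1, 0.0) / (C2 - C3 * c)) ** (1.0 / M1)

# The two curves invert each other across the HDR signal range.
for nits in (0.1, 100.0, 1000.0, 10000.0):
    x = nits / 10000.0
    assert abs(st2084_to_linear(linear_to_st2084(x)) - x) <= 1e-9 * x
```

Note that C1 + C2 == 1.0 + C3 == 19.6875, so a full-scale linear 1.0 encodes to a PQ signal of exactly 1.0, matching the clamp-free behavior of the shader version.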
* * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "color.effect" uniform float4x4 ViewProj; uniform texture2d image; uniform float multiplier; uniform texture2d previous_image; uniform float2 dimensions; uniform int field_order; uniform bool frame2; sampler_state textureSampler { Filter = Linear; AddressU = Clamp; AddressV = Clamp; }; struct VertData { float4 pos : POSITION; float2 uv : TEXCOORD0; }; int3 select(int2 texel, int x, int y) { return int3(texel + int2(x, y), 0); } float4 load_at_prev(int2 texel, int x, int y) { return previous_image.Load(select(texel, x, y)); } float4 load_at_image(int2 texel, int x, int y) { return image.Load(select(texel, x, y)); } float4 load_at(int2 texel, int x, int y, int field) { if(field == 0) return load_at_image(texel, x, y); else return load_at_prev(texel, x, y); } #define YADIF_UPDATE(c, level) \ if(score.c < spatial_score.c) \ { \ spatial_score.c = score.c; \ spatial_pred.c = (load_at(texel, level, -1, field) + load_at(texel, -level, 1, field)).c / 2; \ #define YADIF_CHECK_ONE(level, c) \ { \ float4 score = abs(load_at(texel, -1 + level, 1, field) - load_at(texel, -1 - level, -1, field)) + \ abs(load_at(texel, level, 1, field) - load_at(texel, -level, -1, field)) + \ abs(load_at(texel, 1 + level, 1, field) - load_at(texel, 1 - level, -1, field)); \ YADIF_UPDATE(c, level) } \ } #define YADIF_CHECK(level) \ { \ float4 score = abs(load_at(texel, -1 + level, 1, field) - load_at(texel, -1 - level, -1, field)) + \ abs(load_at(texel, level, 1, field) - load_at(texel, -level, 
-1, field)) + \ abs(load_at(texel, 1 + level, 1, field) - load_at(texel, 1 - level, -1, field)); \ YADIF_UPDATE(r, level) YADIF_CHECK_ONE(level * 2, r) } \ YADIF_UPDATE(g, level) YADIF_CHECK_ONE(level * 2, g) } \ YADIF_UPDATE(b, level) YADIF_CHECK_ONE(level * 2, b) } \ YADIF_UPDATE(a, level) YADIF_CHECK_ONE(level * 2, a) } \ } float4 texel_at_yadif(int2 texel, int field, bool mode0) { if((texel.y % 2) == field) return load_at(texel, 0, 0, field); #define YADIF_AVG(x_off, y_off) ((load_at_prev(texel, x_off, y_off) + load_at_image(texel, x_off, y_off))/2) float4 c = load_at(texel, 0, 1, field), d = YADIF_AVG(0, 0), e = load_at(texel, 0, -1, field); float4 temporal_diff0 = (abs(load_at_prev(texel, 0, 0) - load_at_image(texel, 0, 0))) / 2, temporal_diff1 = (abs(load_at_prev(texel, 0, 1) - c) + abs(load_at_prev(texel, 0, -1) - e)) / 2, temporal_diff2 = (abs(load_at_image(texel, 0, 1) - c) + abs(load_at_image(texel, 0, -1) - e)) / 2, diff = max(temporal_diff0, max(temporal_diff1, temporal_diff2)); float4 spatial_pred = (c + e) / 2, spatial_score = abs(load_at(texel, -1, 1, field) - load_at(texel, -1, -1, field)) + abs(c - e) + abs(load_at(texel, 1, 1, field) - load_at(texel, 1, -1, field)) - 1; YADIF_CHECK(-1) YADIF_CHECK(1) if (mode0) { float4 b = YADIF_AVG(0, 2), f = YADIF_AVG(0, -2); float4 max_ = max(d - e, max(d - c, min(b - c, f - e))), min_ = min(d - e, min(d - c, max(b - c, f - e))); diff = max(diff, max(min_, -max_)); } else { diff = max(diff, max(min(d - e, d - c), -max(d - e, d - c))); } #define YADIF_SPATIAL(c) \ { \ if(spatial_pred.c > d.c + diff.c) \ spatial_pred.c = d.c + diff.c; \ else if(spatial_pred.c < d.c - diff.c) \ spatial_pred.c = d.c - diff.c; \ } YADIF_SPATIAL(r) YADIF_SPATIAL(g) YADIF_SPATIAL(b) YADIF_SPATIAL(a) return spatial_pred; } float4 texel_at_yadif_2x(int2 texel, int field, bool mode0) { field = frame2 ? 
(1 - field) : field; return texel_at_yadif(texel, field, mode0); } float4 texel_at_discard(int2 texel, int field) { texel.y = texel.y / 2 * 2; return load_at_image(texel, 0, field); } float4 texel_at_discard_2x(int2 texel, int field) { field = frame2 ? field : (1 - field); return texel_at_discard(texel, field); } float4 texel_at_blend(int2 texel, int field) { return (load_at_image(texel, 0, 0) + load_at_image(texel, 0, 1)) / 2; } float4 texel_at_blend_2x(int2 texel, int field) { if (!frame2) return (load_at_image(texel, 0, 0) + load_at_prev(texel, 0, 1)) / 2; else return (load_at_image(texel, 0, 0) + load_at_image(texel, 0, 1)) / 2; } float4 texel_at_linear(int2 texel, int field) { if ((texel.y % 2) == field) return load_at_image(texel, 0, 0); return (load_at_image(texel, 0, -1) + load_at_image(texel, 0, 1)) / 2; } float4 texel_at_linear_2x(int2 texel, int field) { field = frame2 ? field : (1 - field); return texel_at_linear(texel, field); } float4 texel_at_yadif_discard(int2 texel, int field) { return (texel_at_yadif(texel, field, true) + texel_at_discard(texel, field)) / 2; } float4 texel_at_yadif_discard_2x(int2 texel, int field) { field = frame2 ? 
(1 - field) : field; return (texel_at_yadif(texel, field, true) + texel_at_discard(texel, field)) / 2; } int2 pixel_uv(float2 uv) { return int2(uv * dimensions); } float4 PSYadifMode0RGBA(VertData v_in) : TARGET { return texel_at_yadif(pixel_uv(v_in.uv), field_order, true); } float4 PSYadifMode0RGBA_2x(VertData v_in) : TARGET { return texel_at_yadif_2x(pixel_uv(v_in.uv), field_order, true); } float4 PSYadifMode2RGBA(VertData v_in) : TARGET { return texel_at_yadif(pixel_uv(v_in.uv), field_order, false); } float4 PSYadifMode2RGBA_2x(VertData v_in) : TARGET { return texel_at_yadif_2x(pixel_uv(v_in.uv), field_order, false); } float4 PSYadifDiscardRGBA(VertData v_in) : TARGET { return texel_at_yadif_discard(pixel_uv(v_in.uv), field_order); } float4 PSYadifDiscardRGBA_2x(VertData v_in) : TARGET { return texel_at_yadif_discard_2x(pixel_uv(v_in.uv), field_order); } float4 PSLinearRGBA(VertData v_in) : TARGET { return texel_at_linear(pixel_uv(v_in.uv), field_order); } float4 PSLinearRGBA_2x(VertData v_in) : TARGET { return texel_at_linear_2x(pixel_uv(v_in.uv), field_order); } float4 PSDiscardRGBA(VertData v_in) : TARGET { return texel_at_discard(pixel_uv(v_in.uv), field_order); } float4 PSDiscardRGBA_2x(VertData v_in) : TARGET { return texel_at_discard_2x(pixel_uv(v_in.uv), field_order); } float4 PSBlendRGBA(VertData v_in) : TARGET { return texel_at_blend(pixel_uv(v_in.uv), field_order); } float4 PSBlendRGBA_2x(VertData v_in) : TARGET { return texel_at_blend_2x(pixel_uv(v_in.uv), field_order); } VertData VSDefault(VertData v_in) { VertData vert_out; vert_out.pos = mul(float4(v_in.pos.xyz, 1.0), ViewProj); vert_out.uv = v_in.uv; return vert_out; } #define TECHNIQUE(rgba_ps, rgba_ps_multiply, rgba_ps_tonemap, rgba_ps_multiply_tonemap) \ float4 rgba_ps_multiply(VertData v_in) : TARGET \ { \ float4 rgba = rgba_ps(v_in); \ rgba.rgb *= multiplier; \ return rgba; \ } \ float4 rgba_ps_tonemap(VertData v_in) : TARGET \ { \ float4 rgba = rgba_ps(v_in); \ rgba.rgb = 
rec709_to_rec2020(rgba.rgb); \ rgba.rgb = reinhard(rgba.rgb); \ rgba.rgb = rec2020_to_rec709(rgba.rgb); \ return rgba; \ } \ float4 rgba_ps_multiply_tonemap(VertData v_in) : TARGET \ { \ float4 rgba = rgba_ps(v_in); \ rgba.rgb *= multiplier; \ rgba.rgb = rec709_to_rec2020(rgba.rgb); \ rgba.rgb = reinhard(rgba.rgb); \ rgba.rgb = rec2020_to_rec709(rgba.rgb); \ return rgba; \ } \ technique Draw \ { \ pass \ { \ vertex_shader = VSDefault(v_in); \ pixel_shader = rgba_ps(v_in); \ } \ } \ technique DrawMultiply \ { \ pass \ { \ vertex_shader = VSDefault(v_in); \ pixel_shader = rgba_ps_multiply(v_in); \ } \ } \ technique DrawTonemap \ { \ pass \ { \ vertex_shader = VSDefault(v_in); \ pixel_shader = rgba_ps_tonemap(v_in); \ } \ } \ technique DrawMultiplyTonemap \ { \ pass \ { \ vertex_shader = VSDefault(v_in); \ pixel_shader = rgba_ps_multiply_tonemap(v_in); \ } \ } obs-studio-32.1.0-sources/libobs/data/solid.effect000644 001751 001751 00000002770 15153330235 022715 0ustar00runnerrunner000000 000000 uniform float4x4 ViewProj; uniform float4 color = {1.0, 1.0, 1.0, 1.0}; uniform float4 randomvals1; uniform float4 randomvals2; uniform float4 randomvals3; struct SolidVertInOut { float4 pos : POSITION; }; SolidVertInOut VSSolid(SolidVertInOut vert_in) { SolidVertInOut vert_out; vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj); return vert_out; } float4 PSSolid(SolidVertInOut vert_in) : TARGET { return color; } float rand(float4 pos, float4 rand_vals) { return 0.5 + 0.5 * frac(sin(dot(pos.xy, float2(rand_vals.x, rand_vals.y))) * rand_vals.z); } float4 PSRandom(SolidVertInOut vert_in) : TARGET { return float4(rand(vert_in.pos, randomvals1), rand(vert_in.pos, randomvals2), rand(vert_in.pos, randomvals3), 1.0); } struct SolidColoredVertInOut { float4 pos : POSITION; float4 color : COLOR; }; SolidColoredVertInOut VSSolidColored(SolidColoredVertInOut vert_in) { SolidColoredVertInOut vert_out; vert_out.pos = mul(float4(vert_in.pos.xyz, 1.0), ViewProj); vert_out.color = 
vert_in.color; return vert_out; } float4 PSSolidColored(SolidColoredVertInOut vert_in) : TARGET { return vert_in.color * color; } technique Solid { pass { vertex_shader = VSSolid(vert_in); pixel_shader = PSSolid(vert_in); } } technique SolidColored { pass { vertex_shader = VSSolidColored(vert_in); pixel_shader = PSSolidColored(vert_in); } } technique Random { pass { vertex_shader = VSSolid(vert_in); pixel_shader = PSRandom(vert_in); } } obs-studio-32.1.0-sources/libobs/data/deinterlace_discard_2x.effect000644 001751 001751 00000002042 15153330235 026154 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * John R. Bradley * Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "deinterlace_base.effect" TECHNIQUE(PSDiscardRGBA_2x, PSDiscardRGBA_2x_multiply, PSDiscardRGBA_2x_tonemap, PSDiscardRGBA_2x_multiply_tonemap); obs-studio-32.1.0-sources/libobs/obs-audio-controls.c000644 001751 001751 00000056120 15153330235 023401 0ustar00runnerrunner000000 000000 /* Copyright (C) 2014 by Leonhard Oelke This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . */ #include #include "util/sse-intrin.h" #include "util/threading.h" #include "util/bmem.h" #include "media-io/audio-math.h" #include "obs.h" #include "obs-internal.h" #include "obs-audio-controls.h" /* These are pointless warnings generated not by our code, but by a standard * library macro, INFINITY */ #ifdef _MSC_VER #pragma warning(disable : 4056) #pragma warning(disable : 4756) #endif #define CLAMP(x, min, max) ((x) < min ? min : ((x) > max ? max : (x))) struct fader_cb { obs_fader_changed_t callback; void *param; }; struct obs_fader { pthread_mutex_t mutex; obs_fader_conversion_t def_to_db; obs_fader_conversion_t db_to_def; obs_source_t *source; enum obs_fader_type type; float max_db; float min_db; float cur_db; bool ignore_next_signal; pthread_mutex_t callback_mutex; DARRAY(struct fader_cb) callbacks; }; struct meter_cb { obs_volmeter_updated_t callback; void *param; }; struct obs_volmeter { pthread_mutex_t mutex; obs_source_t *source; enum obs_fader_type type; float cur_db; pthread_mutex_t callback_mutex; DARRAY(struct meter_cb) callbacks; enum obs_peak_meter_type peak_meter_type; unsigned int update_ms; float prev_samples[MAX_AUDIO_CHANNELS][4]; float magnitude[MAX_AUDIO_CHANNELS]; float peak[MAX_AUDIO_CHANNELS]; }; static float cubic_def_to_db(const float def) { if (def == 1.0f) return 0.0f; else if (def <= 0.0f) return -INFINITY; return mul_to_db(def * def * def); } static float cubic_db_to_def(const float db) { if (db == 0.0f) return 1.0f; else if (db == -INFINITY) return 0.0f; return cbrtf(db_to_mul(db)); } static float iec_def_to_db(const float def) { if (def == 1.0f) return 0.0f; else if (def <= 0.0f) return -INFINITY; 
float db; if (def >= 0.75f) db = (def - 1.0f) / 0.25f * 9.0f; else if (def >= 0.5f) db = (def - 0.75f) / 0.25f * 11.0f - 9.0f; else if (def >= 0.3f) db = (def - 0.5f) / 0.2f * 10.0f - 20.0f; else if (def >= 0.15f) db = (def - 0.3f) / 0.15f * 10.0f - 30.0f; else if (def >= 0.075f) db = (def - 0.15f) / 0.075f * 10.0f - 40.0f; else if (def >= 0.025f) db = (def - 0.075f) / 0.05f * 10.0f - 50.0f; else if (def >= 0.001f) db = (def - 0.025f) / 0.025f * 90.0f - 60.0f; else db = -INFINITY; return db; } static float iec_db_to_def(const float db) { if (db == 0.0f) return 1.0f; else if (db == -INFINITY) return 0.0f; float def; if (db >= -9.0f) def = (db + 9.0f) / 9.0f * 0.25f + 0.75f; else if (db >= -20.0f) def = (db + 20.0f) / 11.0f * 0.25f + 0.5f; else if (db >= -30.0f) def = (db + 30.0f) / 10.0f * 0.2f + 0.3f; else if (db >= -40.0f) def = (db + 40.0f) / 10.0f * 0.15f + 0.15f; else if (db >= -50.0f) def = (db + 50.0f) / 10.0f * 0.075f + 0.075f; else if (db >= -60.0f) def = (db + 60.0f) / 10.0f * 0.05f + 0.025f; else if (db >= -114.0f) def = (db + 150.0f) / 90.0f * 0.025f; else def = 0.0f; return def; } #define LOG_OFFSET_DB 6.0f #define LOG_RANGE_DB 96.0f /* equals -log10f(LOG_OFFSET_DB) */ #define LOG_OFFSET_VAL -0.77815125038364363f /* equals -log10f(-LOG_RANGE_DB + LOG_OFFSET_DB) */ #define LOG_RANGE_VAL -2.00860017176191756f static float log_def_to_db(const float def) { if (def >= 1.0f) return 0.0f; else if (def <= 0.0f) return -INFINITY; return -(LOG_RANGE_DB + LOG_OFFSET_DB) * powf((LOG_RANGE_DB + LOG_OFFSET_DB) / LOG_OFFSET_DB, -def) + LOG_OFFSET_DB; } static float log_db_to_def(const float db) { if (db >= 0.0f) return 1.0f; else if (db <= -96.0f) return 0.0f; return (-log10f(-db + LOG_OFFSET_DB) - LOG_RANGE_VAL) / (LOG_OFFSET_VAL - LOG_RANGE_VAL); } static void signal_volume_changed(struct obs_fader *fader, const float db) { pthread_mutex_lock(&fader->callback_mutex); for (size_t i = fader->callbacks.num; i > 0; i--) { struct fader_cb cb = fader->callbacks.array[i - 
1]; cb.callback(cb.param, db); } pthread_mutex_unlock(&fader->callback_mutex); } static void signal_levels_updated(struct obs_volmeter *volmeter, const float magnitude[MAX_AUDIO_CHANNELS], const float peak[MAX_AUDIO_CHANNELS], const float input_peak[MAX_AUDIO_CHANNELS]) { pthread_mutex_lock(&volmeter->callback_mutex); for (size_t i = volmeter->callbacks.num; i > 0; i--) { struct meter_cb cb = volmeter->callbacks.array[i - 1]; cb.callback(cb.param, magnitude, peak, input_peak); } pthread_mutex_unlock(&volmeter->callback_mutex); } static void fader_source_volume_changed(void *vptr, calldata_t *calldata) { struct obs_fader *fader = (struct obs_fader *)vptr; pthread_mutex_lock(&fader->mutex); if (fader->ignore_next_signal) { fader->ignore_next_signal = false; pthread_mutex_unlock(&fader->mutex); return; } const float mul = (float)calldata_float(calldata, "volume"); const float db = mul_to_db(mul); fader->cur_db = db; pthread_mutex_unlock(&fader->mutex); signal_volume_changed(fader, db); } static void volmeter_source_volume_changed(void *vptr, calldata_t *calldata) { struct obs_volmeter *volmeter = (struct obs_volmeter *)vptr; pthread_mutex_lock(&volmeter->mutex); float mul = (float)calldata_float(calldata, "volume"); volmeter->cur_db = mul_to_db(mul); pthread_mutex_unlock(&volmeter->mutex); } static void fader_source_destroyed(void *vptr, calldata_t *calldata) { UNUSED_PARAMETER(calldata); struct obs_fader *fader = (struct obs_fader *)vptr; obs_fader_detach_source(fader); } static void volmeter_source_destroyed(void *vptr, calldata_t *calldata) { UNUSED_PARAMETER(calldata); struct obs_volmeter *volmeter = (struct obs_volmeter *)vptr; obs_volmeter_detach_source(volmeter); } static int get_nr_channels_from_audio_data(const struct audio_data *data) { int nr_channels = 0; for (int i = 0; i < MAX_AV_PLANES; i++) { if (data->data[i]) nr_channels++; } return CLAMP(nr_channels, 0, MAX_AUDIO_CHANNELS); } /* msb(h, g, f, e) lsb(d, c, b, a) --> msb(h, h, g, f) lsb(e, d, c, b) */ 
#define SHIFT_RIGHT_2PS(msb, lsb) \ { \ __m128 tmp = _mm_shuffle_ps(lsb, msb, _MM_SHUFFLE(0, 0, 3, 3)); \ lsb = _mm_shuffle_ps(lsb, tmp, _MM_SHUFFLE(2, 1, 2, 1)); \ msb = _mm_shuffle_ps(msb, msb, _MM_SHUFFLE(3, 3, 2, 1)); \ } /* x(d, c, b, a) --> (|d|, |c|, |b|, |a|) */ #define abs_ps(v) _mm_andnot_ps(_mm_set1_ps(-0.f), v) /* Take cross product of a vector with a matrix resulting in vector. */ #define VECTOR_MATRIX_CROSS_PS(out, v, m0, m1, m2, m3) \ { \ out = _mm_mul_ps(v, m0); \ __m128 mul1 = _mm_mul_ps(v, m1); \ __m128 mul2 = _mm_mul_ps(v, m2); \ __m128 mul3 = _mm_mul_ps(v, m3); \ \ _MM_TRANSPOSE4_PS(out, mul1, mul2, mul3); \ \ out = _mm_add_ps(out, mul1); \ out = _mm_add_ps(out, mul2); \ out = _mm_add_ps(out, mul3); \ } /* x4(d, c, b, a) --> max(a, b, c, d) */ #define hmax_ps(r, x4) \ do { \ float x4_mem[4]; \ _mm_storeu_ps(x4_mem, x4); \ r = x4_mem[0]; \ r = fmaxf(r, x4_mem[1]); \ r = fmaxf(r, x4_mem[2]); \ r = fmaxf(r, x4_mem[3]); \ } while (false) /* Calculate the true peak over a set of samples. * The algorithm implements 5x oversampling by using Whittaker-Shannon * interpolation over four samples. * * The four samples have location t=-1.5, -0.5, +0.5, +1.5 * The oversamples are taken at locations t=-0.3, -0.1, +0.1, +0.3 * * @param previous_samples Last 4 samples from the previous iteration. * @param samples The samples to find the peak in. * @param nr_samples Number of sets of 4 samples. * @returns 5 times oversampled true-peak from the set of samples. */ static float get_true_peak(__m128 previous_samples, const float *samples, size_t nr_samples) { /* These are normalized-sinc parameters for interpolating over sample * points which are located at x-coords: -1.5, -0.5, +0.5, +1.5. * And oversample points at x-coords: -0.3, -0.1, 0.1, 0.3. 
*/ const __m128 m3 = _mm_set_ps(-0.155915f, 0.935489f, 0.233872f, -0.103943f); const __m128 m1 = _mm_set_ps(-0.216236f, 0.756827f, 0.504551f, -0.189207f); const __m128 p1 = _mm_set_ps(-0.189207f, 0.504551f, 0.756827f, -0.216236f); const __m128 p3 = _mm_set_ps(-0.103943f, 0.233872f, 0.935489f, -0.155915f); __m128 work = previous_samples; __m128 peak = previous_samples; for (size_t i = 0; (i + 3) < nr_samples; i += 4) { __m128 new_work = _mm_load_ps(&samples[i]); __m128 intrp_samples; /* Include the actual sample values in the peak. */ __m128 abs_new_work = abs_ps(new_work); peak = _mm_max_ps(peak, abs_new_work); /* Shift in the next point. */ SHIFT_RIGHT_2PS(new_work, work); VECTOR_MATRIX_CROSS_PS(intrp_samples, work, m3, m1, p1, p3); peak = _mm_max_ps(peak, abs_ps(intrp_samples)); SHIFT_RIGHT_2PS(new_work, work); VECTOR_MATRIX_CROSS_PS(intrp_samples, work, m3, m1, p1, p3); peak = _mm_max_ps(peak, abs_ps(intrp_samples)); SHIFT_RIGHT_2PS(new_work, work); VECTOR_MATRIX_CROSS_PS(intrp_samples, work, m3, m1, p1, p3); peak = _mm_max_ps(peak, abs_ps(intrp_samples)); SHIFT_RIGHT_2PS(new_work, work); VECTOR_MATRIX_CROSS_PS(intrp_samples, work, m3, m1, p1, p3); peak = _mm_max_ps(peak, abs_ps(intrp_samples)); } float r; hmax_ps(r, peak); return r; } /* points contain the first four samples to calculate the sinc interpolation * over. They will have come from a previous iteration. */ static float get_sample_peak(__m128 previous_samples, const float *samples, size_t nr_samples) { __m128 peak = previous_samples; for (size_t i = 0; (i + 3) < nr_samples; i += 4) { __m128 new_work = _mm_load_ps(&samples[i]); peak = _mm_max_ps(peak, abs_ps(new_work)); } float r; hmax_ps(r, peak); return r; } static void volmeter_process_peak_last_samples(obs_volmeter_t *volmeter, int channel_nr, float *samples, size_t nr_samples) { /* Take the last 4 samples that need to be used for the next peak * calculation. If there are less than 4 samples in total the new * samples shift out the old samples. 
*/ switch (nr_samples) { case 0: break; case 1: volmeter->prev_samples[channel_nr][0] = volmeter->prev_samples[channel_nr][1]; volmeter->prev_samples[channel_nr][1] = volmeter->prev_samples[channel_nr][2]; volmeter->prev_samples[channel_nr][2] = volmeter->prev_samples[channel_nr][3]; volmeter->prev_samples[channel_nr][3] = samples[nr_samples - 1]; break; case 2: volmeter->prev_samples[channel_nr][0] = volmeter->prev_samples[channel_nr][2]; volmeter->prev_samples[channel_nr][1] = volmeter->prev_samples[channel_nr][3]; volmeter->prev_samples[channel_nr][2] = samples[nr_samples - 2]; volmeter->prev_samples[channel_nr][3] = samples[nr_samples - 1]; break; case 3: volmeter->prev_samples[channel_nr][0] = volmeter->prev_samples[channel_nr][3]; volmeter->prev_samples[channel_nr][1] = samples[nr_samples - 3]; volmeter->prev_samples[channel_nr][2] = samples[nr_samples - 2]; volmeter->prev_samples[channel_nr][3] = samples[nr_samples - 1]; break; default: volmeter->prev_samples[channel_nr][0] = samples[nr_samples - 4]; volmeter->prev_samples[channel_nr][1] = samples[nr_samples - 3]; volmeter->prev_samples[channel_nr][2] = samples[nr_samples - 2]; volmeter->prev_samples[channel_nr][3] = samples[nr_samples - 1]; } } static void volmeter_process_peak(obs_volmeter_t *volmeter, const struct audio_data *data, int nr_channels) { int nr_samples = data->frames; int channel_nr = 0; for (int plane_nr = 0; channel_nr < nr_channels; plane_nr++) { float *samples = (float *)data->data[plane_nr]; if (!samples) { continue; } if (((uintptr_t)samples & 0xf) > 0) { printf("Audio plane %i is not aligned %p skipping " "peak volume measurement.\n", plane_nr, samples); volmeter->peak[channel_nr] = 1.0; channel_nr++; continue; } /* volmeter->prev_samples may not be aligned to 16 bytes; * use unaligned load. 
*/ __m128 previous_samples = _mm_loadu_ps(volmeter->prev_samples[channel_nr]); float peak; switch (volmeter->peak_meter_type) { case TRUE_PEAK_METER: peak = get_true_peak(previous_samples, samples, nr_samples); break; case SAMPLE_PEAK_METER: default: peak = get_sample_peak(previous_samples, samples, nr_samples); break; } volmeter_process_peak_last_samples(volmeter, channel_nr, samples, nr_samples); volmeter->peak[channel_nr] = peak; channel_nr++; } /* Clear the peak of the channels that have not been handled. */ for (; channel_nr < MAX_AUDIO_CHANNELS; channel_nr++) { volmeter->peak[channel_nr] = 0.0; } } static void volmeter_process_magnitude(obs_volmeter_t *volmeter, const struct audio_data *data, int nr_channels) { size_t nr_samples = data->frames; int channel_nr = 0; for (int plane_nr = 0; channel_nr < nr_channels; plane_nr++) { float *samples = (float *)data->data[plane_nr]; if (!samples) { continue; } float sum = 0.0; for (size_t i = 0; i < nr_samples; i++) { float sample = samples[i]; sum += sample * sample; } volmeter->magnitude[channel_nr] = sqrtf(sum / nr_samples); channel_nr++; } } static void volmeter_process_audio_data(obs_volmeter_t *volmeter, const struct audio_data *data) { int nr_channels = get_nr_channels_from_audio_data(data); volmeter_process_peak(volmeter, data, nr_channels); volmeter_process_magnitude(volmeter, data, nr_channels); } static void volmeter_source_data_received(void *vptr, obs_source_t *source, const struct audio_data *data, bool muted) { struct obs_volmeter *volmeter = (struct obs_volmeter *)vptr; float mul; float magnitude[MAX_AUDIO_CHANNELS]; float peak[MAX_AUDIO_CHANNELS]; float input_peak[MAX_AUDIO_CHANNELS]; pthread_mutex_lock(&volmeter->mutex); volmeter_process_audio_data(volmeter, data); // Adjust magnitude/peak based on the volume level set by the user. // And convert to dB. mul = muted && !obs_source_muted(source) ? 
0.0f : db_to_mul(volmeter->cur_db); for (int channel_nr = 0; channel_nr < MAX_AUDIO_CHANNELS; channel_nr++) { magnitude[channel_nr] = mul_to_db(volmeter->magnitude[channel_nr] * mul); peak[channel_nr] = mul_to_db(volmeter->peak[channel_nr] * mul); /* The input-peak is NOT adjusted with volume, so that the user * can check the input-gain. */ input_peak[channel_nr] = mul_to_db(volmeter->peak[channel_nr]); } pthread_mutex_unlock(&volmeter->mutex); signal_levels_updated(volmeter, magnitude, peak, input_peak); } obs_fader_t *obs_fader_create(enum obs_fader_type type) { struct obs_fader *fader = bzalloc(sizeof(struct obs_fader)); if (!fader) return NULL; pthread_mutex_init_value(&fader->mutex); pthread_mutex_init_value(&fader->callback_mutex); if (pthread_mutex_init(&fader->mutex, NULL) != 0) goto fail; if (pthread_mutex_init(&fader->callback_mutex, NULL) != 0) goto fail; switch (type) { case OBS_FADER_CUBIC: fader->def_to_db = cubic_def_to_db; fader->db_to_def = cubic_db_to_def; fader->max_db = 0.0f; fader->min_db = -INFINITY; break; case OBS_FADER_IEC: fader->def_to_db = iec_def_to_db; fader->db_to_def = iec_db_to_def; fader->max_db = 0.0f; fader->min_db = -INFINITY; break; case OBS_FADER_LOG: fader->def_to_db = log_def_to_db; fader->db_to_def = log_db_to_def; fader->max_db = 0.0f; fader->min_db = -96.0f; break; default: goto fail; break; } fader->type = type; return fader; fail: obs_fader_destroy(fader); return NULL; } void obs_fader_destroy(obs_fader_t *fader) { if (!fader) return; obs_fader_detach_source(fader); da_free(fader->callbacks); pthread_mutex_destroy(&fader->callback_mutex); pthread_mutex_destroy(&fader->mutex); bfree(fader); } bool obs_fader_set_db(obs_fader_t *fader, const float db) { if (!fader) return false; pthread_mutex_lock(&fader->mutex); bool clamped = false; fader->cur_db = db; if (fader->cur_db > fader->max_db) { fader->cur_db = fader->max_db; clamped = true; } if (fader->cur_db < fader->min_db) { fader->cur_db = -INFINITY; clamped = true; } 
fader->ignore_next_signal = true; obs_source_t *src = fader->source; const float mul = db_to_mul(fader->cur_db); pthread_mutex_unlock(&fader->mutex); if (src) obs_source_set_volume(src, mul); return !clamped; } float obs_fader_get_db(obs_fader_t *fader) { if (!fader) return 0.0f; pthread_mutex_lock(&fader->mutex); const float db = fader->cur_db; pthread_mutex_unlock(&fader->mutex); return db; } bool obs_fader_set_deflection(obs_fader_t *fader, const float def) { if (!fader) return false; return obs_fader_set_db(fader, fader->def_to_db(def)); } float obs_fader_get_deflection(obs_fader_t *fader) { if (!fader) return 0.0f; pthread_mutex_lock(&fader->mutex); const float def = fader->db_to_def(fader->cur_db); pthread_mutex_unlock(&fader->mutex); return def; } bool obs_fader_set_mul(obs_fader_t *fader, const float mul) { if (!fader) return false; return obs_fader_set_db(fader, mul_to_db(mul)); } float obs_fader_get_mul(obs_fader_t *fader) { if (!fader) return 0.0f; pthread_mutex_lock(&fader->mutex); const float mul = db_to_mul(fader->cur_db); pthread_mutex_unlock(&fader->mutex); return mul; } bool obs_fader_attach_source(obs_fader_t *fader, obs_source_t *source) { signal_handler_t *sh; float vol; if (!fader || !source) return false; obs_fader_detach_source(fader); sh = obs_source_get_signal_handler(source); signal_handler_connect(sh, "volume", fader_source_volume_changed, fader); signal_handler_connect(sh, "destroy", fader_source_destroyed, fader); vol = obs_source_get_volume(source); pthread_mutex_lock(&fader->mutex); fader->source = source; fader->cur_db = mul_to_db(vol); pthread_mutex_unlock(&fader->mutex); return true; } void obs_fader_detach_source(obs_fader_t *fader) { signal_handler_t *sh; obs_source_t *source; if (!fader) return; pthread_mutex_lock(&fader->mutex); source = fader->source; fader->source = NULL; pthread_mutex_unlock(&fader->mutex); if (!source) return; sh = obs_source_get_signal_handler(source); signal_handler_disconnect(sh, "volume", 
fader_source_volume_changed, fader); signal_handler_disconnect(sh, "destroy", fader_source_destroyed, fader); } void obs_fader_add_callback(obs_fader_t *fader, obs_fader_changed_t callback, void *param) { struct fader_cb cb = {callback, param}; if (!obs_ptr_valid(fader, "obs_fader_add_callback")) return; pthread_mutex_lock(&fader->callback_mutex); da_push_back(fader->callbacks, &cb); pthread_mutex_unlock(&fader->callback_mutex); } void obs_fader_remove_callback(obs_fader_t *fader, obs_fader_changed_t callback, void *param) { struct fader_cb cb = {callback, param}; if (!obs_ptr_valid(fader, "obs_fader_remove_callback")) return; pthread_mutex_lock(&fader->callback_mutex); da_erase_item(fader->callbacks, &cb); pthread_mutex_unlock(&fader->callback_mutex); } obs_volmeter_t *obs_volmeter_create(enum obs_fader_type type) { struct obs_volmeter *volmeter = bzalloc(sizeof(struct obs_volmeter)); if (!volmeter) return NULL; pthread_mutex_init_value(&volmeter->mutex); pthread_mutex_init_value(&volmeter->callback_mutex); if (pthread_mutex_init(&volmeter->mutex, NULL) != 0) goto fail; if (pthread_mutex_init(&volmeter->callback_mutex, NULL) != 0) goto fail; volmeter->type = type; return volmeter; fail: obs_volmeter_destroy(volmeter); return NULL; } void obs_volmeter_destroy(obs_volmeter_t *volmeter) { if (!volmeter) return; obs_volmeter_detach_source(volmeter); da_free(volmeter->callbacks); pthread_mutex_destroy(&volmeter->callback_mutex); pthread_mutex_destroy(&volmeter->mutex); bfree(volmeter); } bool obs_volmeter_attach_source(obs_volmeter_t *volmeter, obs_source_t *source) { signal_handler_t *sh; float vol; if (!volmeter || !source) return false; obs_volmeter_detach_source(volmeter); sh = obs_source_get_signal_handler(source); signal_handler_connect(sh, "volume", volmeter_source_volume_changed, volmeter); signal_handler_connect(sh, "destroy", volmeter_source_destroyed, volmeter); obs_source_add_audio_capture_callback(source, volmeter_source_data_received, volmeter); vol = 
obs_source_get_volume(source); pthread_mutex_lock(&volmeter->mutex); volmeter->source = source; volmeter->cur_db = mul_to_db(vol); pthread_mutex_unlock(&volmeter->mutex); return true; } void obs_volmeter_detach_source(obs_volmeter_t *volmeter) { signal_handler_t *sh; obs_source_t *source; if (!volmeter) return; pthread_mutex_lock(&volmeter->mutex); source = volmeter->source; volmeter->source = NULL; pthread_mutex_unlock(&volmeter->mutex); if (!source) return; sh = obs_source_get_signal_handler(source); signal_handler_disconnect(sh, "volume", volmeter_source_volume_changed, volmeter); signal_handler_disconnect(sh, "destroy", volmeter_source_destroyed, volmeter); obs_source_remove_audio_capture_callback(source, volmeter_source_data_received, volmeter); } void obs_volmeter_set_peak_meter_type(obs_volmeter_t *volmeter, enum obs_peak_meter_type peak_meter_type) { pthread_mutex_lock(&volmeter->mutex); volmeter->peak_meter_type = peak_meter_type; pthread_mutex_unlock(&volmeter->mutex); } int obs_volmeter_get_nr_channels(obs_volmeter_t *volmeter) { int source_nr_audio_channels; int obs_nr_audio_channels; if (volmeter->source) { source_nr_audio_channels = get_audio_channels(volmeter->source->sample_info.speakers); } else { source_nr_audio_channels = 0; } struct obs_audio_info audio_info; if (obs_get_audio_info(&audio_info)) { obs_nr_audio_channels = get_audio_channels(audio_info.speakers); } else { obs_nr_audio_channels = 2; } return CLAMP(source_nr_audio_channels, 0, obs_nr_audio_channels); } void obs_volmeter_add_callback(obs_volmeter_t *volmeter, obs_volmeter_updated_t callback, void *param) { struct meter_cb cb = {callback, param}; if (!obs_ptr_valid(volmeter, "obs_volmeter_add_callback")) return; pthread_mutex_lock(&volmeter->callback_mutex); da_push_back(volmeter->callbacks, &cb); pthread_mutex_unlock(&volmeter->callback_mutex); } void obs_volmeter_remove_callback(obs_volmeter_t *volmeter, obs_volmeter_updated_t callback, void *param) { struct meter_cb cb = {callback, 
param};

	if (!obs_ptr_valid(volmeter, "obs_volmeter_remove_callback"))
		return;

	pthread_mutex_lock(&volmeter->callback_mutex);
	da_erase_item(volmeter->callbacks, &cb);
	pthread_mutex_unlock(&volmeter->callback_mutex);
}

float obs_mul_to_db(float mul)
{
	return mul_to_db(mul);
}

float obs_db_to_mul(float db)
{
	return db_to_mul(db);
}

obs_fader_conversion_t obs_fader_db_to_def(obs_fader_t *fader)
{
	return fader->db_to_def;
}

obs-studio-32.1.0-sources/libobs/obs-data.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.
******************************************************************************/

#include "util/bmem.h"
#include "util/threading.h"
#include "util/dstr.h"
#include "util/darray.h"
#include "util/platform.h"
#include "util/uthash.h"
#include "graphics/vec2.h"
#include "graphics/vec3.h"
#include "graphics/vec4.h"
#include "graphics/quat.h"
#include "obs-data.h"
#include <jansson.h>

struct obs_data_item {
	volatile long ref;
	const char *name;
	struct obs_data *parent;
	UT_hash_handle hh;
	enum obs_data_type type;
	size_t name_len;
	size_t data_len;
	size_t data_size;
	size_t default_len;
	size_t default_size;
	size_t autoselect_size;
	size_t capacity;
};

struct obs_data {
	volatile long ref;
	char *json;
	struct obs_data_item *items;
};

struct obs_data_array {
	volatile long ref;
	DARRAY(obs_data_t *) objects;
};

struct obs_data_number {
	enum obs_data_number_type type;
	union {
		long long int_val;
		double double_val;
	};
};

/* ------------------------------------------------------------------------- */
/* Item structure, designed to be one allocation only */

static inline size_t get_align_size(size_t size)
{
	const size_t alignment = base_get_alignment();
	return (size + alignment - 1) & ~(alignment - 1);
}

/* ensures data after the name has alignment (in case of SSE) */
static inline size_t get_name_align_size(const char *name)
{
	size_t name_size = strlen(name) + 1;
	size_t alignment = base_get_alignment();
	size_t total_size;

	total_size = sizeof(struct obs_data_item) + (name_size + alignment - 1);
	total_size &= ~(alignment - 1);

	return total_size - sizeof(struct obs_data_item);
}

static inline char *get_item_name(struct obs_data_item *item)
{
	return (char *)item + sizeof(struct obs_data_item);
}

static inline void *get_data_ptr(obs_data_item_t *item)
{
	return (uint8_t *)get_item_name(item) + item->name_len;
}

static inline void *get_item_data(struct obs_data_item *item)
{
	if (!item->data_size && !item->default_size && !item->autoselect_size)
		return NULL;
	return get_data_ptr(item);
}

static inline void
*get_default_data_ptr(obs_data_item_t *item) { return (uint8_t *)get_data_ptr(item) + item->data_len; } static inline void *get_item_default_data(struct obs_data_item *item) { return item->default_size ? get_default_data_ptr(item) : NULL; } static inline void *get_autoselect_data_ptr(obs_data_item_t *item) { return (uint8_t *)get_default_data_ptr(item) + item->default_len; } static inline void *get_item_autoselect_data(struct obs_data_item *item) { return item->autoselect_size ? get_autoselect_data_ptr(item) : NULL; } static inline size_t obs_data_item_total_size(struct obs_data_item *item) { return sizeof(struct obs_data_item) + item->name_len + item->data_len + item->default_len + item->autoselect_size; } static inline obs_data_t *get_item_obj(struct obs_data_item *item) { if (!item) return NULL; obs_data_t **data = get_item_data(item); if (!data) return NULL; return *data; } static inline obs_data_t *get_item_default_obj(struct obs_data_item *item) { if (!item || !item->default_size) return NULL; return *(obs_data_t **)get_default_data_ptr(item); } static inline obs_data_t *get_item_autoselect_obj(struct obs_data_item *item) { if (!item || !item->autoselect_size) return NULL; return *(obs_data_t **)get_autoselect_data_ptr(item); } static inline obs_data_array_t *get_item_array(struct obs_data_item *item) { obs_data_array_t **array; if (!item) return NULL; array = (obs_data_array_t **)get_item_data(item); return array ? 
*array : NULL; } static inline obs_data_array_t *get_item_default_array(struct obs_data_item *item) { if (!item || !item->default_size) return NULL; return *(obs_data_array_t **)get_default_data_ptr(item); } static inline obs_data_array_t *get_item_autoselect_array(struct obs_data_item *item) { if (!item || !item->autoselect_size) return NULL; return *(obs_data_array_t **)get_autoselect_data_ptr(item); } static inline void item_data_release(struct obs_data_item *item) { if (!obs_data_item_has_user_value(item)) return; if (item->type == OBS_DATA_OBJECT) { obs_data_t *obj = get_item_obj(item); obs_data_release(obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t *array = get_item_array(item); obs_data_array_release(array); } } static inline void item_default_data_release(struct obs_data_item *item) { if (item->type == OBS_DATA_OBJECT) { obs_data_t *obj = get_item_default_obj(item); obs_data_release(obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t *array = get_item_default_array(item); obs_data_array_release(array); } } static inline void item_autoselect_data_release(struct obs_data_item *item) { if (item->type == OBS_DATA_OBJECT) { obs_data_t *obj = get_item_autoselect_obj(item); obs_data_release(obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t *array = get_item_autoselect_array(item); obs_data_array_release(array); } } static inline void item_data_addref(struct obs_data_item *item) { if (item->type == OBS_DATA_OBJECT) { obs_data_t *obj = get_item_obj(item); obs_data_addref(obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t *array = get_item_array(item); obs_data_array_addref(array); } } static inline void item_default_data_addref(struct obs_data_item *item) { if (!item->data_size) return; if (item->type == OBS_DATA_OBJECT) { obs_data_t *obj = get_item_default_obj(item); obs_data_addref(obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t *array = get_item_default_array(item); 
obs_data_array_addref(array); } } static inline void item_autoselect_data_addref(struct obs_data_item *item) { if (item->type == OBS_DATA_OBJECT) { obs_data_t *obj = get_item_autoselect_obj(item); obs_data_addref(obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t *array = get_item_autoselect_array(item); obs_data_array_addref(array); } } static struct obs_data_item *obs_data_item_create(const char *name, const void *data, size_t size, enum obs_data_type type, bool default_data, bool autoselect_data) { struct obs_data_item *item; size_t name_size, total_size; if (!name || !data) return NULL; name_size = get_name_align_size(name); total_size = name_size + sizeof(struct obs_data_item) + size; item = bzalloc(total_size); item->capacity = total_size; item->type = type; item->name_len = name_size; item->ref = 1; if (default_data) { item->default_len = size; item->default_size = size; } else if (autoselect_data) { item->autoselect_size = size; } else { item->data_len = size; item->data_size = size; } char *name_ptr = get_item_name(item); item->name = name_ptr; strcpy(name_ptr, name); memcpy(get_item_data(item), data, size); item_data_addref(item); return item; } static inline void obs_data_item_detach(struct obs_data_item *item) { if (item->parent) { HASH_DEL(item->parent->items, item); item->parent = NULL; } } static inline void obs_data_item_reattach(struct obs_data *parent, struct obs_data_item *item) { if (parent) { HASH_ADD_STR(parent->items, name, item); item->parent = parent; } } static struct obs_data_item *obs_data_item_ensure_capacity(struct obs_data_item *item) { size_t new_size = obs_data_item_total_size(item); struct obs_data_item *new_item; if (item->capacity >= new_size) return item; struct obs_data *parent = item->parent; obs_data_item_detach(item); new_item = brealloc(item, new_size); new_item->capacity = new_size; new_item->name = get_item_name(new_item); obs_data_item_reattach(parent, new_item); return new_item; } static inline void 
obs_data_item_destroy(struct obs_data_item *item) { if (item->parent) HASH_DEL(item->parent->items, item); item_data_release(item); item_default_data_release(item); item_autoselect_data_release(item); obs_data_item_detach(item); bfree(item); } static inline void move_data(obs_data_item_t *old_item, void *old_data, obs_data_item_t *item, void *data, size_t len) { ptrdiff_t old_offset = (uint8_t *)old_data - (uint8_t *)old_item; ptrdiff_t new_offset = (uint8_t *)data - (uint8_t *)item; if (!old_data) return; memmove((uint8_t *)item + new_offset, (uint8_t *)item + old_offset, len); } static inline void obs_data_item_setdata(struct obs_data_item **p_item, const void *data, size_t size, enum obs_data_type type) { if (!p_item || !*p_item) return; struct obs_data_item *item = *p_item; ptrdiff_t old_default_data_pos = (uint8_t *)get_default_data_ptr(item) - (uint8_t *)item; item_data_release(item); item->data_size = size; item->type = type; item->data_len = (item->default_size || item->autoselect_size) ? get_align_size(size) : size; item = obs_data_item_ensure_capacity(item); if (item->default_size || item->autoselect_size) memmove(get_default_data_ptr(item), (uint8_t *)item + old_default_data_pos, item->default_len + item->autoselect_size); if (size) { memcpy(get_item_data(item), data, size); item_data_addref(item); } *p_item = item; } static inline void obs_data_item_set_default_data(struct obs_data_item **p_item, const void *data, size_t size, enum obs_data_type type) { if (!p_item || !*p_item) return; struct obs_data_item *item = *p_item; void *old_autoselect_data = get_autoselect_data_ptr(item); item_default_data_release(item); item->type = type; item->default_size = size; item->default_len = item->autoselect_size ? get_align_size(size) : size; item->data_len = item->data_size ? 
get_align_size(item->data_size) : 0; item = obs_data_item_ensure_capacity(item); if (item->autoselect_size) move_data(*p_item, old_autoselect_data, item, get_autoselect_data_ptr(item), item->autoselect_size); if (size) { memcpy(get_item_default_data(item), data, size); item_default_data_addref(item); } *p_item = item; } static inline void obs_data_item_set_autoselect_data(struct obs_data_item **p_item, const void *data, size_t size, enum obs_data_type type) { if (!p_item || !*p_item) return; struct obs_data_item *item = *p_item; item_autoselect_data_release(item); item->autoselect_size = size; item->type = type; item->data_len = item->data_size ? get_align_size(item->data_size) : 0; item->default_len = item->default_size ? get_align_size(item->default_size) : 0; item = obs_data_item_ensure_capacity(item); if (size) { memcpy(get_item_autoselect_data(item), data, size); item_autoselect_data_addref(item); } *p_item = item; } /* ------------------------------------------------------------------------- */ static void obs_data_add_json_item(obs_data_t *data, const char *key, json_t *json); static inline void obs_data_add_json_object_data(obs_data_t *data, json_t *jobj) { const char *item_key; json_t *jitem; json_object_foreach (jobj, item_key, jitem) { obs_data_add_json_item(data, item_key, jitem); } } static inline void obs_data_add_json_object(obs_data_t *data, const char *key, json_t *jobj) { obs_data_t *sub_obj = obs_data_create(); obs_data_add_json_object_data(sub_obj, jobj); obs_data_set_obj(data, key, sub_obj); obs_data_release(sub_obj); } static void obs_data_add_json_array(obs_data_t *data, const char *key, json_t *jarray) { obs_data_array_t *array = obs_data_array_create(); size_t idx; json_t *jitem; json_array_foreach (jarray, idx, jitem) { obs_data_t *item; if (!json_is_object(jitem)) continue; item = obs_data_create(); obs_data_add_json_object_data(item, jitem); obs_data_array_push_back(array, item); obs_data_release(item); } obs_data_set_array(data, key, 
array); obs_data_array_release(array); } static inline void obs_data_add_json_null(obs_data_t *data, const char *key) { obs_data_set_obj(data, key, NULL); } static void obs_data_add_json_item(obs_data_t *data, const char *key, json_t *json) { if (json_is_object(json)) obs_data_add_json_object(data, key, json); else if (json_is_array(json)) obs_data_add_json_array(data, key, json); else if (json_is_string(json)) obs_data_set_string(data, key, json_string_value(json)); else if (json_is_integer(json)) obs_data_set_int(data, key, json_integer_value(json)); else if (json_is_real(json)) obs_data_set_double(data, key, json_real_value(json)); else if (json_is_true(json)) obs_data_set_bool(data, key, true); else if (json_is_false(json)) obs_data_set_bool(data, key, false); else if (json_is_null(json)) obs_data_add_json_null(data, key); } /* ------------------------------------------------------------------------- */ static inline void set_json_string(json_t *json, const char *name, obs_data_item_t *item) { const char *val = obs_data_item_get_string(item); json_object_set_new(json, name, json_string(val)); } static inline void set_json_number(json_t *json, const char *name, obs_data_item_t *item) { enum obs_data_number_type type = obs_data_item_numtype(item); if (type == OBS_DATA_NUM_INT) { long long val = obs_data_item_get_int(item); json_object_set_new(json, name, json_integer(val)); } else { double val = obs_data_item_get_double(item); json_object_set_new(json, name, json_real(val)); } } static inline void set_json_bool(json_t *json, const char *name, obs_data_item_t *item) { bool val = obs_data_item_get_bool(item); json_object_set_new(json, name, val ? 
json_true() : json_false()); } static json_t *obs_data_to_json(obs_data_t *data, bool with_defaults); static inline void set_json_obj(json_t *json, const char *name, obs_data_item_t *item, bool with_defaults) { obs_data_t *obj = obs_data_item_get_obj(item); json_object_set_new(json, name, obs_data_to_json(obj, with_defaults)); obs_data_release(obj); } static inline void set_json_array(json_t *json, const char *name, obs_data_item_t *item, bool with_defaults) { json_t *jarray = json_array(); obs_data_array_t *array = obs_data_item_get_array(item); size_t count = obs_data_array_count(array); for (size_t idx = 0; idx < count; idx++) { obs_data_t *sub_item = obs_data_array_item(array, idx); json_t *jitem = obs_data_to_json(sub_item, with_defaults); json_array_append_new(jarray, jitem); obs_data_release(sub_item); } json_object_set_new(json, name, jarray); obs_data_array_release(array); } static json_t *obs_data_to_json(obs_data_t *data, bool with_defaults) { if (!data) return json_null(); json_t *json = json_object(); obs_data_item_t *item = NULL; obs_data_item_t *temp = NULL; HASH_ITER (hh, data->items, item, temp) { enum obs_data_type type = obs_data_item_gettype(item); const char *name = get_item_name(item); if (!with_defaults && !obs_data_item_has_user_value(item)) continue; if (type == OBS_DATA_STRING) set_json_string(json, name, item); else if (type == OBS_DATA_NUMBER) set_json_number(json, name, item); else if (type == OBS_DATA_BOOLEAN) set_json_bool(json, name, item); else if (type == OBS_DATA_OBJECT) set_json_obj(json, name, item, with_defaults); else if (type == OBS_DATA_ARRAY) set_json_array(json, name, item, with_defaults); } return json; } /* ------------------------------------------------------------------------- */ obs_data_t *obs_data_create() { struct obs_data *data = bzalloc(sizeof(struct obs_data)); data->ref = 1; return data; } obs_data_t *obs_data_create_from_json(const char *json_string) { obs_data_t *data = obs_data_create(); json_error_t error; 
json_t *root = json_loads(json_string, JSON_REJECT_DUPLICATES, &error); if (root) { obs_data_add_json_object_data(data, root); json_decref(root); } else { blog(LOG_ERROR, "obs-data.c: [obs_data_create_from_json] " "Failed reading json string (%d): %s", error.line, error.text); obs_data_release(data); data = NULL; } return data; } obs_data_t *obs_data_create_from_json_file(const char *json_file) { char *file_data = os_quick_read_utf8_file(json_file); obs_data_t *data = NULL; if (file_data) { data = obs_data_create_from_json(file_data); bfree(file_data); } return data; } obs_data_t *obs_data_create_from_json_file_safe(const char *json_file, const char *backup_ext) { obs_data_t *file_data = obs_data_create_from_json_file(json_file); if (!file_data && backup_ext && *backup_ext) { struct dstr backup_file = {0}; dstr_copy(&backup_file, json_file); if (*backup_ext != '.') dstr_cat(&backup_file, "."); dstr_cat(&backup_file, backup_ext); if (os_file_exists(backup_file.array)) { blog(LOG_WARNING, "obs-data.c: " "[obs_data_create_from_json_file_safe] " "attempting backup file"); /* delete current file if corrupt to prevent it from * being backed up again */ os_rename(backup_file.array, json_file); file_data = obs_data_create_from_json_file(json_file); } dstr_free(&backup_file); } return file_data; } void obs_data_addref(obs_data_t *data) { if (data) os_atomic_inc_long(&data->ref); } static inline void obs_data_destroy(struct obs_data *data) { struct obs_data_item *item, *temp; HASH_ITER (hh, data->items, item, temp) { obs_data_item_detach(item); obs_data_item_release(&item); } /* NOTE: don't use bfree for json text, allocated by json */ free(data->json); bfree(data); } void obs_data_release(obs_data_t *data) { if (!data) return; if (os_atomic_dec_long(&data->ref) == 0) obs_data_destroy(data); } static const char *obs_data_get_json_internal(obs_data_t *data, bool pretty, bool with_defaults) { if (!data) return NULL; size_t flags = JSON_PRESERVE_ORDER; if (pretty) flags |= 
JSON_INDENT(4); else flags |= JSON_COMPACT; /* NOTE: don't use libobs bfree for json text */ free(data->json); data->json = NULL; json_t *root = obs_data_to_json(data, with_defaults); data->json = json_dumps(root, flags); json_decref(root); return data->json; } const char *obs_data_get_json(obs_data_t *data) { return obs_data_get_json_internal(data, false, false); } const char *obs_data_get_json_with_defaults(obs_data_t *data) { return obs_data_get_json_internal(data, false, true); } const char *obs_data_get_json_pretty(obs_data_t *data) { return obs_data_get_json_internal(data, true, false); } const char *obs_data_get_json_pretty_with_defaults(obs_data_t *data) { return obs_data_get_json_internal(data, true, true); } const char *obs_data_get_last_json(obs_data_t *data) { return data ? data->json : NULL; } bool obs_data_save_json(obs_data_t *data, const char *file) { const char *json = obs_data_get_json(data); if (json && *json) { return os_quick_write_utf8_file(file, json, strlen(json), false); } return false; } bool obs_data_save_json_safe(obs_data_t *data, const char *file, const char *temp_ext, const char *backup_ext) { const char *json = obs_data_get_json(data); if (json && *json) { return os_quick_write_utf8_file_safe(file, json, strlen(json), false, temp_ext, backup_ext); } return false; } bool obs_data_save_json_pretty_safe(obs_data_t *data, const char *file, const char *temp_ext, const char *backup_ext) { const char *json = obs_data_get_json_pretty(data); if (json && *json) { return os_quick_write_utf8_file_safe(file, json, strlen(json), false, temp_ext, backup_ext); } return false; } static void get_defaults_array_cb(obs_data_t *data, void *vp) { obs_data_array_t *defs = (obs_data_array_t *)vp; obs_data_t *obs_defaults = obs_data_get_defaults(data); obs_data_array_push_back(defs, obs_defaults); obs_data_release(obs_defaults); } obs_data_t *obs_data_get_defaults(obs_data_t *data) { obs_data_t *defaults = obs_data_create(); if (!data) return defaults; 
struct obs_data_item *item, *temp; HASH_ITER (hh, data->items, item, temp) { const char *name = get_item_name(item); switch (item->type) { case OBS_DATA_NULL: break; case OBS_DATA_STRING: { const char *str = obs_data_get_default_string(data, name); obs_data_set_string(defaults, name, str); break; } case OBS_DATA_NUMBER: { switch (obs_data_item_numtype(item)) { case OBS_DATA_NUM_DOUBLE: { double val = obs_data_get_default_double(data, name); obs_data_set_double(defaults, name, val); break; } case OBS_DATA_NUM_INT: { long long val = obs_data_get_default_int(data, name); obs_data_set_int(defaults, name, val); break; } case OBS_DATA_NUM_INVALID: break; } break; } case OBS_DATA_BOOLEAN: { bool val = obs_data_get_default_bool(data, name); obs_data_set_bool(defaults, name, val); break; } case OBS_DATA_OBJECT: { obs_data_t *val = obs_data_get_default_obj(data, name); obs_data_t *defs = obs_data_get_defaults(val); obs_data_set_obj(defaults, name, defs); obs_data_release(defs); obs_data_release(val); break; } case OBS_DATA_ARRAY: { obs_data_array_t *arr = obs_data_get_default_array(data, name); obs_data_array_t *defs = obs_data_array_create(); obs_data_array_enum(arr, get_defaults_array_cb, (void *)defs); obs_data_set_array(defaults, name, defs); obs_data_array_release(defs); obs_data_array_release(arr); break; } } } return defaults; } static struct obs_data_item *get_item(struct obs_data *data, const char *name) { if (!data) return NULL; struct obs_data_item *item; HASH_FIND_STR(data->items, name, item); return item; } static void set_item_data(struct obs_data *data, struct obs_data_item **item, const char *name, const void *ptr, size_t size, enum obs_data_type type, bool default_data, bool autoselect_data) { obs_data_item_t *new_item = NULL; if ((!item || !*item) && data) { new_item = obs_data_item_create(name, ptr, size, type, default_data, autoselect_data); new_item->parent = data; HASH_ADD_STR(data->items, name, new_item); } else if (default_data) { 
obs_data_item_set_default_data(item, ptr, size, type); } else if (autoselect_data) { obs_data_item_set_autoselect_data(item, ptr, size, type); } else { obs_data_item_setdata(item, ptr, size, type); } } static inline void set_item(struct obs_data *data, obs_data_item_t **item, const char *name, const void *ptr, size_t size, enum obs_data_type type) { obs_data_item_t *actual_item = NULL; if (!data && !item) return; if (!item) { actual_item = get_item(data, name); item = &actual_item; } set_item_data(data, item, name, ptr, size, type, false, false); } static inline void set_item_def(struct obs_data *data, obs_data_item_t **item, const char *name, const void *ptr, size_t size, enum obs_data_type type) { obs_data_item_t *actual_item = NULL; if (!data && !item) return; if (!item) { actual_item = get_item(data, name); item = &actual_item; } if (*item && (*item)->type != type) return; set_item_data(data, item, name, ptr, size, type, true, false); } static inline void set_item_auto(struct obs_data *data, obs_data_item_t **item, const char *name, const void *ptr, size_t size, enum obs_data_type type) { obs_data_item_t *actual_item = NULL; if (!data && !item) return; if (!item) { actual_item = get_item(data, name); item = &actual_item; } set_item_data(data, item, name, ptr, size, type, false, true); } static void copy_obj(struct obs_data *data, const char *name, struct obs_data *obj, void (*callback)(obs_data_t *, const char *, obs_data_t *)) { if (obj) { obs_data_t *new_obj = obs_data_create(); obs_data_apply(new_obj, obj); callback(data, name, new_obj); obs_data_release(new_obj); } } static void copy_array(struct obs_data *data, const char *name, struct obs_data_array *array, void (*callback)(obs_data_t *, const char *, obs_data_array_t *)) { if (array) { obs_data_array_t *new_array = obs_data_array_create(); da_reserve(new_array->objects, array->objects.num); for (size_t i = 0; i < array->objects.num; i++) { obs_data_t *new_obj = obs_data_create(); obs_data_t *obj = 
array->objects.array[i]; obs_data_apply(new_obj, obj); obs_data_array_push_back(new_array, new_obj); obs_data_release(new_obj); } callback(data, name, new_array); obs_data_array_release(new_array); } } static inline void copy_item(struct obs_data *data, struct obs_data_item *item) { const char *name = get_item_name(item); void *ptr = get_item_data(item); if (item->type == OBS_DATA_OBJECT) { obs_data_t **obj = item->data_size ? ptr : NULL; if (obj) copy_obj(data, name, *obj, obs_data_set_obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t **array = item->data_size ? ptr : NULL; if (array) copy_array(data, name, *array, obs_data_set_array); } else { if (item->data_size) set_item(data, NULL, name, ptr, item->data_size, item->type); } } void obs_data_apply(obs_data_t *target, obs_data_t *apply_data) { if (!target || !apply_data || target == apply_data) return; struct obs_data_item *item, *temp; HASH_ITER (hh, apply_data->items, item, temp) { copy_item(target, item); } } void obs_data_erase(obs_data_t *data, const char *name) { struct obs_data_item *item = get_item(data, name); if (item) { obs_data_item_detach(item); obs_data_item_release(&item); } } static inline void clear_item(struct obs_data_item *item) { void *ptr = get_item_data(item); size_t size; if (item->data_len) { if (item->type == OBS_DATA_OBJECT) { obs_data_t **obj = item->data_size ? ptr : NULL; if (obj && *obj) obs_data_release(*obj); } else if (item->type == OBS_DATA_ARRAY) { obs_data_array_t **array = item->data_size ? 
ptr : NULL; if (array && *array) obs_data_array_release(*array); } size = item->default_len + item->autoselect_size; if (size) memmove(ptr, (uint8_t *)ptr + item->data_len, size); item->data_size = 0; item->data_len = 0; } } void obs_data_clear(obs_data_t *target) { if (!target) return; struct obs_data_item *item, *temp; HASH_ITER (hh, target->items, item, temp) { clear_item(item); } } typedef void (*set_item_t)(obs_data_t *, obs_data_item_t **, const char *, const void *, size_t, enum obs_data_type); static inline void obs_set_string(obs_data_t *data, obs_data_item_t **item, const char *name, const char *val, set_item_t set_item_) { if (!val) val = ""; set_item_(data, item, name, val, strlen(val) + 1, OBS_DATA_STRING); } static inline void obs_set_int(obs_data_t *data, obs_data_item_t **item, const char *name, long long val, set_item_t set_item_) { struct obs_data_number num; num.type = OBS_DATA_NUM_INT; num.int_val = val; set_item_(data, item, name, &num, sizeof(struct obs_data_number), OBS_DATA_NUMBER); } static inline void obs_set_double(obs_data_t *data, obs_data_item_t **item, const char *name, double val, set_item_t set_item_) { struct obs_data_number num; num.type = OBS_DATA_NUM_DOUBLE; num.double_val = val; set_item_(data, item, name, &num, sizeof(struct obs_data_number), OBS_DATA_NUMBER); } static inline void obs_set_bool(obs_data_t *data, obs_data_item_t **item, const char *name, bool val, set_item_t set_item_) { set_item_(data, item, name, &val, sizeof(bool), OBS_DATA_BOOLEAN); } static inline void obs_set_obj(obs_data_t *data, obs_data_item_t **item, const char *name, obs_data_t *obj, set_item_t set_item_) { set_item_(data, item, name, &obj, sizeof(obs_data_t *), OBS_DATA_OBJECT); } static inline void obs_set_array(obs_data_t *data, obs_data_item_t **item, const char *name, obs_data_array_t *array, set_item_t set_item_) { set_item_(data, item, name, &array, sizeof(obs_data_t *), OBS_DATA_ARRAY); } static inline void obs_take_obj(obs_data_t *data, 
obs_data_item_t **item, const char *name, obs_data_t *obj, set_item_t set_item_) { obs_set_obj(data, item, name, obj, set_item_); obs_data_release(obj); } void obs_data_set_string(obs_data_t *data, const char *name, const char *val) { obs_set_string(data, NULL, name, val, set_item); } void obs_data_set_int(obs_data_t *data, const char *name, long long val) { obs_set_int(data, NULL, name, val, set_item); } void obs_data_set_double(obs_data_t *data, const char *name, double val) { obs_set_double(data, NULL, name, val, set_item); } void obs_data_set_bool(obs_data_t *data, const char *name, bool val) { obs_set_bool(data, NULL, name, val, set_item); } void obs_data_set_obj(obs_data_t *data, const char *name, obs_data_t *obj) { obs_set_obj(data, NULL, name, obj, set_item); } void obs_data_set_array(obs_data_t *data, const char *name, obs_data_array_t *array) { obs_set_array(data, NULL, name, array, set_item); } void obs_data_set_default_string(obs_data_t *data, const char *name, const char *val) { obs_set_string(data, NULL, name, val, set_item_def); } void obs_data_set_default_int(obs_data_t *data, const char *name, long long val) { obs_set_int(data, NULL, name, val, set_item_def); } void obs_data_set_default_double(obs_data_t *data, const char *name, double val) { obs_set_double(data, NULL, name, val, set_item_def); } void obs_data_set_default_bool(obs_data_t *data, const char *name, bool val) { obs_set_bool(data, NULL, name, val, set_item_def); } void obs_data_set_default_obj(obs_data_t *data, const char *name, obs_data_t *obj) { obs_set_obj(data, NULL, name, obj, set_item_def); } void obs_data_set_default_array(obs_data_t *data, const char *name, obs_data_array_t *arr) { obs_set_array(data, NULL, name, arr, set_item_def); } void obs_data_set_autoselect_string(obs_data_t *data, const char *name, const char *val) { obs_set_string(data, NULL, name, val, set_item_auto); } void obs_data_set_autoselect_int(obs_data_t *data, const char *name, long long val) { 
obs_set_int(data, NULL, name, val, set_item_auto); } void obs_data_set_autoselect_double(obs_data_t *data, const char *name, double val) { obs_set_double(data, NULL, name, val, set_item_auto); } void obs_data_set_autoselect_bool(obs_data_t *data, const char *name, bool val) { obs_set_bool(data, NULL, name, val, set_item_auto); } void obs_data_set_autoselect_obj(obs_data_t *data, const char *name, obs_data_t *obj) { obs_set_obj(data, NULL, name, obj, set_item_auto); } void obs_data_set_autoselect_array(obs_data_t *data, const char *name, obs_data_array_t *arr) { obs_set_array(data, NULL, name, arr, set_item_auto); } const char *obs_data_get_string(obs_data_t *data, const char *name) { return obs_data_item_get_string(get_item(data, name)); } long long obs_data_get_int(obs_data_t *data, const char *name) { return obs_data_item_get_int(get_item(data, name)); } double obs_data_get_double(obs_data_t *data, const char *name) { return obs_data_item_get_double(get_item(data, name)); } bool obs_data_get_bool(obs_data_t *data, const char *name) { return obs_data_item_get_bool(get_item(data, name)); } obs_data_t *obs_data_get_obj(obs_data_t *data, const char *name) { return obs_data_item_get_obj(get_item(data, name)); } obs_data_array_t *obs_data_get_array(obs_data_t *data, const char *name) { return obs_data_item_get_array(get_item(data, name)); } const char *obs_data_get_default_string(obs_data_t *data, const char *name) { return obs_data_item_get_default_string(get_item(data, name)); } long long obs_data_get_default_int(obs_data_t *data, const char *name) { return obs_data_item_get_default_int(get_item(data, name)); } double obs_data_get_default_double(obs_data_t *data, const char *name) { return obs_data_item_get_default_double(get_item(data, name)); } bool obs_data_get_default_bool(obs_data_t *data, const char *name) { return obs_data_item_get_default_bool(get_item(data, name)); } obs_data_t *obs_data_get_default_obj(obs_data_t *data, const char *name) { return 
obs_data_item_get_default_obj(get_item(data, name)); } obs_data_array_t *obs_data_get_default_array(obs_data_t *data, const char *name) { return obs_data_item_get_default_array(get_item(data, name)); } const char *obs_data_get_autoselect_string(obs_data_t *data, const char *name) { return obs_data_item_get_autoselect_string(get_item(data, name)); } long long obs_data_get_autoselect_int(obs_data_t *data, const char *name) { return obs_data_item_get_autoselect_int(get_item(data, name)); } double obs_data_get_autoselect_double(obs_data_t *data, const char *name) { return obs_data_item_get_autoselect_double(get_item(data, name)); } bool obs_data_get_autoselect_bool(obs_data_t *data, const char *name) { return obs_data_item_get_autoselect_bool(get_item(data, name)); } obs_data_t *obs_data_get_autoselect_obj(obs_data_t *data, const char *name) { return obs_data_item_get_autoselect_obj(get_item(data, name)); } obs_data_array_t *obs_data_get_autoselect_array(obs_data_t *data, const char *name) { return obs_data_item_get_autoselect_array(get_item(data, name)); } obs_data_array_t *obs_data_array_create() { struct obs_data_array *array = bzalloc(sizeof(struct obs_data_array)); array->ref = 1; return array; } void obs_data_array_addref(obs_data_array_t *array) { if (array) os_atomic_inc_long(&array->ref); } static inline void obs_data_array_destroy(obs_data_array_t *array) { if (array) { for (size_t i = 0; i < array->objects.num; i++) obs_data_release(array->objects.array[i]); da_free(array->objects); bfree(array); } } void obs_data_array_release(obs_data_array_t *array) { if (!array) return; if (os_atomic_dec_long(&array->ref) == 0) obs_data_array_destroy(array); } size_t obs_data_array_count(obs_data_array_t *array) { return array ? array->objects.num : 0; } obs_data_t *obs_data_array_item(obs_data_array_t *array, size_t idx) { obs_data_t *data; if (!array) return NULL; data = (idx < array->objects.num) ? 
array->objects.array[idx] : NULL; if (data) os_atomic_inc_long(&data->ref); return data; } size_t obs_data_array_push_back(obs_data_array_t *array, obs_data_t *obj) { if (!array || !obj) return 0; os_atomic_inc_long(&obj->ref); return da_push_back(array->objects, &obj); } void obs_data_array_insert(obs_data_array_t *array, size_t idx, obs_data_t *obj) { if (!array || !obj) return; os_atomic_inc_long(&obj->ref); da_insert(array->objects, idx, &obj); } void obs_data_array_push_back_array(obs_data_array_t *array, obs_data_array_t *array2) { if (!array || !array2) return; for (size_t i = 0; i < array2->objects.num; i++) { obs_data_t *obj = array2->objects.array[i]; obs_data_addref(obj); } da_push_back_da(array->objects, array2->objects); } void obs_data_array_erase(obs_data_array_t *array, size_t idx) { if (array) { obs_data_release(array->objects.array[idx]); da_erase(array->objects, idx); } } void obs_data_array_enum(obs_data_array_t *array, void (*cb)(obs_data_t *data, void *param), void *param) { if (array && cb) { for (size_t i = 0; i < array->objects.num; i++) { cb(array->objects.array[i], param); } } } /* ------------------------------------------------------------------------- */ /* Item status inspection */ bool obs_data_has_user_value(obs_data_t *data, const char *name) { return data && obs_data_item_has_user_value(get_item(data, name)); } bool obs_data_has_default_value(obs_data_t *data, const char *name) { return data && obs_data_item_has_default_value(get_item(data, name)); } bool obs_data_has_autoselect_value(obs_data_t *data, const char *name) { return data && obs_data_item_has_autoselect_value(get_item(data, name)); } bool obs_data_item_has_user_value(obs_data_item_t *item) { return item && item->data_size; } bool obs_data_item_has_default_value(obs_data_item_t *item) { return item && item->default_size; } bool obs_data_item_has_autoselect_value(obs_data_item_t *item) { return item && item->autoselect_size; } /* 
------------------------------------------------------------------------- */ /* Clearing data values */ void obs_data_unset_user_value(obs_data_t *data, const char *name) { obs_data_item_unset_user_value(get_item(data, name)); } void obs_data_unset_default_value(obs_data_t *data, const char *name) { obs_data_item_unset_default_value(get_item(data, name)); } void obs_data_unset_autoselect_value(obs_data_t *data, const char *name) { obs_data_item_unset_autoselect_value(get_item(data, name)); } void obs_data_item_unset_user_value(obs_data_item_t *item) { if (!item || !item->data_size) return; void *old_non_user_data = get_default_data_ptr(item); item_data_release(item); item->data_size = 0; item->data_len = 0; if (item->default_size || item->autoselect_size) move_data(item, old_non_user_data, item, get_default_data_ptr(item), item->default_len + item->autoselect_size); } void obs_data_item_unset_default_value(obs_data_item_t *item) { if (!item || !item->default_size) return; void *old_autoselect_data = get_autoselect_data_ptr(item); item_default_data_release(item); item->default_size = 0; item->default_len = 0; if (item->autoselect_size) move_data(item, old_autoselect_data, item, get_autoselect_data_ptr(item), item->autoselect_size); } void obs_data_item_unset_autoselect_value(obs_data_item_t *item) { if (!item || !item->autoselect_size) return; item_autoselect_data_release(item); item->autoselect_size = 0; } /* ------------------------------------------------------------------------- */ /* Item iteration */ obs_data_item_t *obs_data_first(obs_data_t *data) { if (!data) return NULL; if (data->items) os_atomic_inc_long(&data->items->ref); return data->items; } obs_data_item_t *obs_data_item_byname(obs_data_t *data, const char *name) { if (!data) return NULL; struct obs_data_item *item = get_item(data, name); if (item) os_atomic_inc_long(&item->ref); return item; } bool obs_data_item_next(obs_data_item_t **item) { if (item && *item) { obs_data_item_t *next = 
(*item)->hh.next; obs_data_item_release(item); *item = next; if (next) { os_atomic_inc_long(&next->ref); return true; } } return false; } void obs_data_item_release(obs_data_item_t **item) { if (item && *item) { long ref = os_atomic_dec_long(&(*item)->ref); if (!ref) { obs_data_item_destroy(*item); *item = NULL; } } } void obs_data_item_remove(obs_data_item_t **item) { if (item && *item) { obs_data_item_detach(*item); obs_data_item_release(item); } } enum obs_data_type obs_data_item_gettype(obs_data_item_t *item) { return item ? item->type : OBS_DATA_NULL; } enum obs_data_number_type obs_data_item_numtype(obs_data_item_t *item) { struct obs_data_number *num; if (!item || item->type != OBS_DATA_NUMBER) return OBS_DATA_NUM_INVALID; num = get_item_data(item); if (!num) return OBS_DATA_NUM_INVALID; return num->type; } const char *obs_data_item_get_name(obs_data_item_t *item) { if (!item) return NULL; return item->name; } void obs_data_item_set_string(obs_data_item_t **item, const char *val) { obs_set_string(NULL, item, NULL, val, set_item); } void obs_data_item_set_int(obs_data_item_t **item, long long val) { obs_set_int(NULL, item, NULL, val, set_item); } void obs_data_item_set_double(obs_data_item_t **item, double val) { obs_set_double(NULL, item, NULL, val, set_item); } void obs_data_item_set_bool(obs_data_item_t **item, bool val) { obs_set_bool(NULL, item, NULL, val, set_item); } void obs_data_item_set_obj(obs_data_item_t **item, obs_data_t *val) { obs_set_obj(NULL, item, NULL, val, set_item); } void obs_data_item_set_array(obs_data_item_t **item, obs_data_array_t *val) { obs_set_array(NULL, item, NULL, val, set_item); } void obs_data_item_set_default_string(obs_data_item_t **item, const char *val) { obs_set_string(NULL, item, NULL, val, set_item_def); } void obs_data_item_set_default_int(obs_data_item_t **item, long long val) { obs_set_int(NULL, item, NULL, val, set_item_def); } void obs_data_item_set_default_double(obs_data_item_t **item, double val) { 
obs_set_double(NULL, item, NULL, val, set_item_def); } void obs_data_item_set_default_bool(obs_data_item_t **item, bool val) { obs_set_bool(NULL, item, NULL, val, set_item_def); } void obs_data_item_set_default_obj(obs_data_item_t **item, obs_data_t *val) { obs_set_obj(NULL, item, NULL, val, set_item_def); } void obs_data_item_set_default_array(obs_data_item_t **item, obs_data_array_t *val) { obs_set_array(NULL, item, NULL, val, set_item_def); } void obs_data_item_set_autoselect_string(obs_data_item_t **item, const char *val) { obs_set_string(NULL, item, NULL, val, set_item_auto); } void obs_data_item_set_autoselect_int(obs_data_item_t **item, long long val) { obs_set_int(NULL, item, NULL, val, set_item_auto); } void obs_data_item_set_autoselect_double(obs_data_item_t **item, double val) { obs_set_double(NULL, item, NULL, val, set_item_auto); } void obs_data_item_set_autoselect_bool(obs_data_item_t **item, bool val) { obs_set_bool(NULL, item, NULL, val, set_item_auto); } void obs_data_item_set_autoselect_obj(obs_data_item_t **item, obs_data_t *val) { obs_set_obj(NULL, item, NULL, val, set_item_auto); } void obs_data_item_set_autoselect_array(obs_data_item_t **item, obs_data_array_t *val) { obs_set_array(NULL, item, NULL, val, set_item_auto); } static inline bool item_valid(struct obs_data_item *item, enum obs_data_type type) { return item && item->type == type; } typedef void *(*get_data_t)(obs_data_item_t *); static inline const char *data_item_get_string(obs_data_item_t *item, get_data_t get_data) { const char *str; return item_valid(item, OBS_DATA_STRING) && (str = get_data(item)) ? str : ""; } static inline long long item_int(struct obs_data_item *item, get_data_t get_data) { struct obs_data_number *num; if (item && (num = get_data(item))) { return (num->type == OBS_DATA_NUM_INT) ? 
num->int_val : (long long)num->double_val; } return 0; } static inline long long data_item_get_int(obs_data_item_t *item, get_data_t get_data) { return item_int(item_valid(item, OBS_DATA_NUMBER) ? item : NULL, get_data); } static inline double item_double(struct obs_data_item *item, get_data_t get_data) { struct obs_data_number *num; if (item && (num = get_data(item))) { return (num->type == OBS_DATA_NUM_INT) ? (double)num->int_val : num->double_val; } return 0.0; } static inline double data_item_get_double(obs_data_item_t *item, get_data_t get_data) { return item_double(item_valid(item, OBS_DATA_NUMBER) ? item : NULL, get_data); } static inline bool data_item_get_bool(obs_data_item_t *item, get_data_t get_data) { bool *data; return item_valid(item, OBS_DATA_BOOLEAN) && (data = get_data(item)) ? *data : false; } typedef obs_data_t *(*get_obj_t)(obs_data_item_t *); static inline obs_data_t *data_item_get_obj(obs_data_item_t *item, get_obj_t get_obj) { obs_data_t *obj = item_valid(item, OBS_DATA_OBJECT) ? get_obj(item) : NULL; if (obj) os_atomic_inc_long(&obj->ref); return obj; } typedef obs_data_array_t *(*get_array_t)(obs_data_item_t *); static inline obs_data_array_t *data_item_get_array(obs_data_item_t *item, get_array_t get_array) { obs_data_array_t *array = item_valid(item, OBS_DATA_ARRAY) ? 
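/* item_int/item_double above coerce between the two OBS_DATA_NUMBER representations: an
 * integer-backed number is widened to double, and a double-backed number is truncated toward
 * zero by the (long long) cast. A self-contained sketch of that coercion (hypothetical
 * data_number struct mirroring struct obs_data_number): */

```c
/* Hypothetical mirror of struct obs_data_number's tagged union. */
enum num_type { NUM_INT, NUM_DOUBLE };

struct data_number {
	enum num_type type;
	union {
		long long int_val;
		double double_val;
	};
};

/* Mirrors item_int(): doubles are truncated toward zero by the cast. */
static long long number_as_int(const struct data_number *n)
{
	return n->type == NUM_INT ? n->int_val : (long long)n->double_val;
}

/* Mirrors item_double(): integers are widened to double. */
static double number_as_double(const struct data_number *n)
{
	return n->type == NUM_INT ? (double)n->int_val : n->double_val;
}
```

/* The truncating cast means obs_data_get_int() on a stored 2.75 yields 2, not 3. */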
get_array(item) : NULL; if (array) os_atomic_inc_long(&array->ref); return array; } const char *obs_data_item_get_string(obs_data_item_t *item) { return data_item_get_string(item, get_item_data); } long long obs_data_item_get_int(obs_data_item_t *item) { return data_item_get_int(item, get_item_data); } double obs_data_item_get_double(obs_data_item_t *item) { return data_item_get_double(item, get_item_data); } bool obs_data_item_get_bool(obs_data_item_t *item) { return data_item_get_bool(item, get_item_data); } obs_data_t *obs_data_item_get_obj(obs_data_item_t *item) { return data_item_get_obj(item, get_item_obj); } obs_data_array_t *obs_data_item_get_array(obs_data_item_t *item) { return data_item_get_array(item, get_item_array); } const char *obs_data_item_get_default_string(obs_data_item_t *item) { return data_item_get_string(item, get_item_default_data); } long long obs_data_item_get_default_int(obs_data_item_t *item) { return data_item_get_int(item, get_item_default_data); } double obs_data_item_get_default_double(obs_data_item_t *item) { return data_item_get_double(item, get_item_default_data); } bool obs_data_item_get_default_bool(obs_data_item_t *item) { return data_item_get_bool(item, get_item_default_data); } obs_data_t *obs_data_item_get_default_obj(obs_data_item_t *item) { return data_item_get_obj(item, get_item_default_obj); } obs_data_array_t *obs_data_item_get_default_array(obs_data_item_t *item) { return data_item_get_array(item, get_item_default_array); } const char *obs_data_item_get_autoselect_string(obs_data_item_t *item) { return data_item_get_string(item, get_item_autoselect_data); } long long obs_data_item_get_autoselect_int(obs_data_item_t *item) { return data_item_get_int(item, get_item_autoselect_data); } double obs_data_item_get_autoselect_double(obs_data_item_t *item) { return data_item_get_double(item, get_item_autoselect_data); } bool obs_data_item_get_autoselect_bool(obs_data_item_t *item) { return data_item_get_bool(item, 
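/* Each public getter family above (user value, default, autoselect) reuses one generic body
 * parameterized by a function pointer (get_data_t) that selects which storage slot to read.
 * A self-contained sketch of that dispatch, with a hypothetical slot_item holding three slots
 * side by side the way obs_data items do: */

```c
#include <stddef.h>

/* Hypothetical item with three storage slots, mirroring how obs_data
 * items keep user, default, and autoselect values side by side. */
struct slot_item {
	const int *user;
	const int *def;
	const int *autosel;
};

typedef const int *(*get_slot_t)(const struct slot_item *);

static const int *get_user(const struct slot_item *it) { return it->user; }
static const int *get_def(const struct slot_item *it) { return it->def; }
static const int *get_autosel(const struct slot_item *it) { return it->autosel; }

/* One generic body, like data_item_get_int(item, get_data):
 * the slot selector decides which value family is read. */
static int slot_value(const struct slot_item *it, get_slot_t get_slot)
{
	const int *v = get_slot(it);
	return v ? *v : 0; /* a missing slot falls back to 0, like the real getters */
}
```

/* This keeps the type-checking and fallback logic in one place instead of
 * duplicating it across eighteen nearly identical getters. */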
get_item_autoselect_data); } obs_data_t *obs_data_item_get_autoselect_obj(obs_data_item_t *item) { return data_item_get_obj(item, get_item_autoselect_obj); } obs_data_array_t *obs_data_item_get_autoselect_array(obs_data_item_t *item) { return data_item_get_array(item, get_item_autoselect_array); } /* ------------------------------------------------------------------------- */ /* Helper functions for certain structures */ typedef void (*set_obj_t)(obs_data_t *, const char *, obs_data_t *); static inline void set_vec2(obs_data_t *data, const char *name, const struct vec2 *val, set_obj_t set_obj) { obs_data_t *obj = obs_data_create(); obs_data_set_double(obj, "x", val->x); obs_data_set_double(obj, "y", val->y); set_obj(data, name, obj); obs_data_release(obj); } static inline void set_vec3(obs_data_t *data, const char *name, const struct vec3 *val, set_obj_t set_obj) { obs_data_t *obj = obs_data_create(); obs_data_set_double(obj, "x", val->x); obs_data_set_double(obj, "y", val->y); obs_data_set_double(obj, "z", val->z); set_obj(data, name, obj); obs_data_release(obj); } static inline void set_vec4(obs_data_t *data, const char *name, const struct vec4 *val, set_obj_t set_obj) { obs_data_t *obj = obs_data_create(); obs_data_set_double(obj, "x", val->x); obs_data_set_double(obj, "y", val->y); obs_data_set_double(obj, "z", val->z); obs_data_set_double(obj, "w", val->w); set_obj(data, name, obj); obs_data_release(obj); } static inline void set_quat(obs_data_t *data, const char *name, const struct quat *val, set_obj_t set_obj) { obs_data_t *obj = obs_data_create(); obs_data_set_double(obj, "x", val->x); obs_data_set_double(obj, "y", val->y); obs_data_set_double(obj, "z", val->z); obs_data_set_double(obj, "w", val->w); set_obj(data, name, obj); obs_data_release(obj); } void obs_data_set_vec2(obs_data_t *data, const char *name, const struct vec2 *val) { set_vec2(data, name, val, obs_data_set_obj); } void obs_data_set_vec3(obs_data_t *data, const char *name, const struct vec3 
*val) { set_vec3(data, name, val, obs_data_set_obj); } void obs_data_set_vec4(obs_data_t *data, const char *name, const struct vec4 *val) { set_vec4(data, name, val, obs_data_set_obj); } void obs_data_set_quat(obs_data_t *data, const char *name, const struct quat *val) { set_quat(data, name, val, obs_data_set_obj); } void obs_data_set_default_vec2(obs_data_t *data, const char *name, const struct vec2 *val) { set_vec2(data, name, val, obs_data_set_default_obj); } void obs_data_set_default_vec3(obs_data_t *data, const char *name, const struct vec3 *val) { set_vec3(data, name, val, obs_data_set_default_obj); } void obs_data_set_default_vec4(obs_data_t *data, const char *name, const struct vec4 *val) { set_vec4(data, name, val, obs_data_set_default_obj); } void obs_data_set_default_quat(obs_data_t *data, const char *name, const struct quat *val) { set_quat(data, name, val, obs_data_set_default_obj); } void obs_data_set_autoselect_vec2(obs_data_t *data, const char *name, const struct vec2 *val) { set_vec2(data, name, val, obs_data_set_autoselect_obj); } void obs_data_set_autoselect_vec3(obs_data_t *data, const char *name, const struct vec3 *val) { set_vec3(data, name, val, obs_data_set_autoselect_obj); } void obs_data_set_autoselect_vec4(obs_data_t *data, const char *name, const struct vec4 *val) { set_vec4(data, name, val, obs_data_set_autoselect_obj); } void obs_data_set_autoselect_quat(obs_data_t *data, const char *name, const struct quat *val) { set_quat(data, name, val, obs_data_set_autoselect_obj); } static inline void get_vec2(obs_data_t *obj, struct vec2 *val) { if (!obj) return; val->x = (float)obs_data_get_double(obj, "x"); val->y = (float)obs_data_get_double(obj, "y"); obs_data_release(obj); } static inline void get_vec3(obs_data_t *obj, struct vec3 *val) { if (!obj) return; val->x = (float)obs_data_get_double(obj, "x"); val->y = (float)obs_data_get_double(obj, "y"); val->z = (float)obs_data_get_double(obj, "z"); obs_data_release(obj); } static inline void 
get_vec4(obs_data_t *obj, struct vec4 *val) { if (!obj) return; val->x = (float)obs_data_get_double(obj, "x"); val->y = (float)obs_data_get_double(obj, "y"); val->z = (float)obs_data_get_double(obj, "z"); val->w = (float)obs_data_get_double(obj, "w"); obs_data_release(obj); } static inline void get_quat(obs_data_t *obj, struct quat *val) { if (!obj) return; val->x = (float)obs_data_get_double(obj, "x"); val->y = (float)obs_data_get_double(obj, "y"); val->z = (float)obs_data_get_double(obj, "z"); val->w = (float)obs_data_get_double(obj, "w"); obs_data_release(obj); } void obs_data_get_vec2(obs_data_t *data, const char *name, struct vec2 *val) { get_vec2(obs_data_get_obj(data, name), val); } void obs_data_get_vec3(obs_data_t *data, const char *name, struct vec3 *val) { get_vec3(obs_data_get_obj(data, name), val); } void obs_data_get_vec4(obs_data_t *data, const char *name, struct vec4 *val) { get_vec4(obs_data_get_obj(data, name), val); } void obs_data_get_quat(obs_data_t *data, const char *name, struct quat *val) { get_quat(obs_data_get_obj(data, name), val); } void obs_data_get_default_vec2(obs_data_t *data, const char *name, struct vec2 *val) { get_vec2(obs_data_get_default_obj(data, name), val); } void obs_data_get_default_vec3(obs_data_t *data, const char *name, struct vec3 *val) { get_vec3(obs_data_get_default_obj(data, name), val); } void obs_data_get_default_vec4(obs_data_t *data, const char *name, struct vec4 *val) { get_vec4(obs_data_get_default_obj(data, name), val); } void obs_data_get_default_quat(obs_data_t *data, const char *name, struct quat *val) { get_quat(obs_data_get_default_obj(data, name), val); } void obs_data_get_autoselect_vec2(obs_data_t *data, const char *name, struct vec2 *val) { get_vec2(obs_data_get_autoselect_obj(data, name), val); } void obs_data_get_autoselect_vec3(obs_data_t *data, const char *name, struct vec3 *val) { get_vec3(obs_data_get_autoselect_obj(data, name), val); } void obs_data_get_autoselect_vec4(obs_data_t *data, const 
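/* The set_vec*/get_vec* helpers above round-trip float components through doubles: set stores
 * each component with obs_data_set_double (an exact widening), and get reads it back with a
 * (float) cast (an exact narrowing, since the double came from a float). A self-contained
 * sketch of that round trip, with a plain struct of doubles standing in for the obs_data_t
 * object (vec3f/vec3_obj are hypothetical names): */

```c
/* Hypothetical mirrors of struct vec3 and its serialized form holding
 * doubles, the way obs_data stores each component via set_double. */
struct vec3f { float x, y, z; };
struct vec3_obj { double x, y, z; };

static struct vec3_obj vec3_serialize(const struct vec3f *v)
{
	/* widening float -> double is exact */
	struct vec3_obj o = {v->x, v->y, v->z};
	return o;
}

static struct vec3f vec3_deserialize(const struct vec3_obj *o)
{
	/* the (float) casts mirror get_vec3()'s narrowing reads */
	struct vec3f v = {(float)o->x, (float)o->y, (float)o->z};
	return v;
}
```

/* Because both conversions are exact for values that started as floats,
 * serializing and deserializing a vec3 loses no precision. */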
char *name, struct vec4 *val) { get_vec4(obs_data_get_autoselect_obj(data, name), val); } void obs_data_get_autoselect_quat(obs_data_t *data, const char *name, struct quat *val) { get_quat(obs_data_get_autoselect_obj(data, name), val); } /* ------------------------------------------------------------------------- */ /* Helper functions for media_frames_per_seconds */ static inline obs_data_t *make_frames_per_second(struct media_frames_per_second fps, const char *option) { obs_data_t *obj = obs_data_create(); if (!option) { obs_data_set_int(obj, "numerator", fps.numerator); obs_data_set_int(obj, "denominator", fps.denominator); } else { obs_data_set_string(obj, "option", option); } return obj; } void obs_data_set_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second fps, const char *option) { obs_take_obj(data, NULL, name, make_frames_per_second(fps, option), set_item); } void obs_data_set_default_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second fps, const char *option) { obs_take_obj(data, NULL, name, make_frames_per_second(fps, option), set_item_def); } void obs_data_set_autoselect_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second fps, const char *option) { obs_take_obj(data, NULL, name, make_frames_per_second(fps, option), set_item_auto); } static inline bool get_option(obs_data_t *data, const char **option) { if (!option) return false; struct obs_data_item *opt = obs_data_item_byname(data, "option"); if (!opt) return false; *option = obs_data_item_get_string(opt); obs_data_item_release(&opt); obs_data_release(data); return true; } #define CLAMP(x, min, max) ((x) < min ? min : ((x) > max ? 
max : (x))) static inline bool get_frames_per_second(obs_data_t *data, struct media_frames_per_second *fps, const char **option) { if (!data) return false; if (get_option(data, option)) return true; if (!fps) goto free; struct obs_data_item *num = obs_data_item_byname(data, "numerator"); struct obs_data_item *den = obs_data_item_byname(data, "denominator"); if (!num || !den) { obs_data_item_release(&num); obs_data_item_release(&den); goto free; } long long num_ll = obs_data_item_get_int(num); long long den_ll = obs_data_item_get_int(den); fps->numerator = (uint32_t)CLAMP(num_ll, 0, (long long)UINT32_MAX); fps->denominator = (uint32_t)CLAMP(den_ll, 0, (long long)UINT32_MAX); obs_data_item_release(&num); obs_data_item_release(&den); obs_data_release(data); return media_frames_per_second_is_valid(*fps); free: obs_data_release(data); return false; } bool obs_data_get_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second *fps, const char **option) { return get_frames_per_second(obs_data_get_obj(data, name), fps, option); } bool obs_data_get_default_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second *fps, const char **option) { return get_frames_per_second(obs_data_get_default_obj(data, name), fps, option); } bool obs_data_get_autoselect_frames_per_second(obs_data_t *data, const char *name, struct media_frames_per_second *fps, const char **option) { return get_frames_per_second(obs_data_get_autoselect_obj(data, name), fps, option); } void obs_data_item_set_frames_per_second(obs_data_item_t **item, struct media_frames_per_second fps, const char *option) { obs_take_obj(NULL, item, NULL, make_frames_per_second(fps, option), set_item); } void obs_data_item_set_default_frames_per_second(obs_data_item_t **item, struct media_frames_per_second fps, const char *option) { obs_take_obj(NULL, item, NULL, make_frames_per_second(fps, option), set_item_def); } void 
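/* The CLAMP macro above pins the 64-bit integers read from "numerator"/"denominator" into the
 * uint32_t range before the narrowing cast in get_frames_per_second(), so negative or oversized
 * values saturate instead of wrapping. A self-contained sketch (clamp_to_u32 is a hypothetical
 * helper name; the macro here additionally parenthesizes min/max, as good macro hygiene): */

```c
#include <stdint.h>

/* Same shape as the CLAMP macro above, with min/max parenthesized. */
#define CLAMP(x, min, max) ((x) < (min) ? (min) : ((x) > (max) ? (max) : (x)))

/* Mirrors the numerator/denominator narrowing in get_frames_per_second():
 * clamp first, then cast, so out-of-range inputs saturate instead of wrapping. */
static uint32_t clamp_to_u32(long long v)
{
	return (uint32_t)CLAMP(v, 0, (long long)UINT32_MAX);
}
```

/* Without the clamp, a stored -1 would cast to 4294967295 and could pass
 * media_frames_per_second_is_valid() with a nonsense rate. */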
obs_data_item_set_autoselect_frames_per_second(obs_data_item_t **item, struct media_frames_per_second fps, const char *option) { obs_take_obj(NULL, item, NULL, make_frames_per_second(fps, option), set_item_auto); } bool obs_data_item_get_frames_per_second(obs_data_item_t *item, struct media_frames_per_second *fps, const char **option) { return get_frames_per_second(obs_data_item_get_obj(item), fps, option); } bool obs_data_item_get_default_frames_per_second(obs_data_item_t *item, struct media_frames_per_second *fps, const char **option) { return get_frames_per_second(obs_data_item_get_default_obj(item), fps, option); } bool obs_data_item_get_autoselect_frames_per_second(obs_data_item_t *item, struct media_frames_per_second *fps, const char **option) { return get_frames_per_second(obs_data_item_get_autoselect_obj(item), fps, option); } obs-studio-32.1.0-sources/libobs/obs-audio.c /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. 
******************************************************************************/ #include <inttypes.h> #include "obs-internal.h" #include "util/util_uint64.h" struct ts_info { uint64_t start; uint64_t end; }; #define DEBUG_AUDIO 0 #define DEBUG_LAGGED_AUDIO 0 static void push_audio_tree(obs_source_t *parent, obs_source_t *source, void *p) { struct obs_core_audio *audio = p; if (da_find(audio->render_order, &source, 0) == DARRAY_INVALID) { obs_source_t *s = obs_source_get_ref(source); if (s) { da_push_back(audio->render_order, &s); s->audio_is_duplicated = false; } } UNUSED_PARAMETER(parent); } static inline bool is_individual_audio_source(obs_source_t *source) { return source->info.type == OBS_SOURCE_TYPE_INPUT && (source->info.output_flags & OBS_SOURCE_AUDIO) && !(source->info.output_flags & OBS_SOURCE_COMPOSITE); } /* * This version of push_audio_tree checks whether any source is an Audio Output Capture source ('Desktop Audio': * 'wasapi_output_capture' on Windows, 'pulse_output_capture' on Linux, 'coreaudio_output_capture' on macOS) and * whether the corresponding device is the monitoring device. If so, it sets the core audio bool * 'prevent_monitoring_duplication' to true, which silences all monitored sources (unless the Audio Output Capture * source is muted). * It also detects sources that appear several times in the audio tree. Such sources are tagged so that they are not * mixed inside scenes and transitions, and are instead mixed directly as root_nodes.
*/ static void push_audio_tree2(obs_source_t *parent, obs_source_t *source, void *p) { if (obs_source_removed(source)) return; struct obs_core_audio *audio = p; size_t idx = da_find(audio->render_order, &source, 0); if (idx == DARRAY_INVALID) { /* First time we see this source → add to render order */ obs_source_t *s = obs_source_get_ref(source); if (s) { da_push_back(audio->render_order, &s); s->audio_is_duplicated = false; } } else { /* Source already present in tree → mark as duplicated if applicable */ obs_source_t *s = audio->render_order.array[idx]; if (is_individual_audio_source(s) && !s->audio_is_duplicated) { da_push_back(audio->root_nodes, &source); s->audio_is_duplicated = true; } } UNUSED_PARAMETER(parent); } static inline size_t convert_time_to_frames(size_t sample_rate, uint64_t t) { return (size_t)util_mul_div64(t, sample_rate, 1000000000ULL); } static inline void mix_audio(struct audio_output_data *mixes, obs_source_t *source, size_t channels, size_t sample_rate, struct ts_info *ts) { size_t total_floats = AUDIO_OUTPUT_FRAMES; size_t start_point = 0; if (source->audio_ts < ts->start || ts->end <= source->audio_ts) return; if (source->audio_ts != ts->start) { start_point = convert_time_to_frames(sample_rate, source->audio_ts - ts->start); if (start_point == AUDIO_OUTPUT_FRAMES) return; total_floats -= start_point; } for (size_t mix_idx = 0; mix_idx < MAX_AUDIO_MIXES; mix_idx++) { for (size_t ch = 0; ch < channels; ch++) { register float *mix = mixes[mix_idx].data[ch]; register float *aud = source->audio_output_buf[mix_idx][ch]; register float *end; mix += start_point; end = aud + total_floats; while (aud < end) *(mix++) += *(aud++); } } } static bool ignore_audio(obs_source_t *source, size_t channels, size_t sample_rate, uint64_t start_ts) { size_t num_floats = source->audio_input_buf[0].size / sizeof(float); const char *name = obs_source_get_name(source); if (!source->audio_ts && num_floats) { #if DEBUG_LAGGED_AUDIO == 1 blog(LOG_DEBUG, "[src: %s] 
no timestamp, but audio available?", name); #endif for (size_t ch = 0; ch < channels; ch++) deque_pop_front(&source->audio_input_buf[ch], NULL, source->audio_input_buf[0].size); source->last_audio_input_buf_size = 0; return false; } if (num_floats) { /* round up the number of samples to drop */ size_t drop = (size_t)util_mul_div64(start_ts - source->audio_ts - 1, sample_rate, 1000000000ULL) + 1; if (drop > num_floats) drop = num_floats; #if DEBUG_LAGGED_AUDIO == 1 blog(LOG_DEBUG, "[src: %s] ignored %" PRIu64 "/%" PRIu64 " samples", name, (uint64_t)drop, (uint64_t)num_floats); #endif for (size_t ch = 0; ch < channels; ch++) deque_pop_front(&source->audio_input_buf[ch], NULL, drop * sizeof(float)); source->last_audio_input_buf_size = 0; source->audio_ts += util_mul_div64(drop, 1000000000ULL, sample_rate); blog(LOG_DEBUG, "[src: %s] ts lag after ignoring: %" PRIu64, name, start_ts - source->audio_ts); /* rounding error, adjust */ if (source->audio_ts == (start_ts - 1)) source->audio_ts = start_ts; /* source is back in sync */ if (source->audio_ts >= start_ts) return true; } else { #if DEBUG_LAGGED_AUDIO == 1 blog(LOG_DEBUG, "[src: %s] no samples to ignore! ts = %" PRIu64, name, source->audio_ts); #endif } if (!source->audio_pending || num_floats) { blog(LOG_WARNING, "Source %s audio is lagging (over by %.02f ms) " "at max audio buffering. 
Restarting source audio.", name, (start_ts - source->audio_ts) / 1000000.); } source->audio_pending = true; source->audio_ts = 0; /* tell the timestamp adjustment code in source_output_audio_data to * reset everything, and hopefully fix the timestamps */ source->timing_set = false; return false; } static bool discard_if_stopped(obs_source_t *source, size_t channels) { size_t last_size; size_t size; last_size = source->last_audio_input_buf_size; size = source->audio_input_buf[0].size; if (!size) return false; /* if perpetually pending data, it means the audio has stopped, * so clear the audio data */ if (last_size == size) { if (!source->pending_stop) { source->pending_stop = true; #if DEBUG_AUDIO == 1 blog(LOG_DEBUG, "doing pending stop trick: '%s'", source->context.name); #endif return false; } for (size_t ch = 0; ch < channels; ch++) deque_pop_front(&source->audio_input_buf[ch], NULL, source->audio_input_buf[ch].size); source->pending_stop = false; source->audio_ts = 0; source->last_audio_input_buf_size = 0; #if DEBUG_AUDIO == 1 blog(LOG_DEBUG, "source audio data appears to have " "stopped, clearing"); #endif return true; } else { source->last_audio_input_buf_size = size; return false; } } #define MAX_AUDIO_SIZE (AUDIO_OUTPUT_FRAMES * sizeof(float)) static inline void discard_audio(struct obs_core_audio *audio, obs_source_t *source, size_t channels, size_t sample_rate, struct ts_info *ts) { size_t total_floats = AUDIO_OUTPUT_FRAMES; size_t size; /* debug assert only */ UNUSED_PARAMETER(audio); #if DEBUG_AUDIO == 1 bool is_audio_source = source->info.output_flags & OBS_SOURCE_AUDIO; #endif if (source->info.audio_render) { source->audio_ts = 0; return; } if (ts->end <= source->audio_ts) { #if DEBUG_AUDIO == 1 blog(LOG_DEBUG, "can't discard, source " "timestamp (%" PRIu64 ") >= " "end timestamp (%" PRIu64 ")", source->audio_ts, ts->end); #endif return; } if (source->audio_ts < (ts->start - 1)) { if (source->audio_pending && source->audio_input_buf[0].size < 
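/* discard_if_stopped() above implements the "pending stop trick": a nonzero input buffer whose
 * size is unchanged between two consecutive audio ticks is taken to mean the source stopped
 * producing audio, and only then is the buffer flushed. A self-contained sketch of that
 * two-tick state machine (stop_detect/should_flush are hypothetical names): */

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical reduction of the per-source state discard_if_stopped() uses. */
struct stop_detect {
	size_t last_size;
	bool pending_stop;
};

/* Returns true when the buffer should be flushed: the size must be nonzero
 * and unchanged for two consecutive ticks, which is taken to mean the
 * source stopped producing audio. */
static bool should_flush(struct stop_detect *s, size_t cur_size)
{
	if (!cur_size)
		return false;
	if (s->last_size == cur_size) {
		if (!s->pending_stop) {
			s->pending_stop = true; /* first unchanged tick: just mark */
			return false;
		}
		s->pending_stop = false;
		return true; /* second unchanged tick: flush */
	}
	s->last_size = cur_size; /* still filling: remember and keep waiting */
	return false;
}
```

/* Requiring two unchanged ticks avoids flushing a source that merely
 * delivered its data between two checks. */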
MAX_AUDIO_SIZE && discard_if_stopped(source, channels)) return; #if DEBUG_AUDIO == 1 if (is_audio_source) { blog(LOG_DEBUG, "can't discard, source " "timestamp (%" PRIu64 ") < " "start timestamp (%" PRIu64 ")", source->audio_ts, ts->start); } /* ignore_audio should have already run and marked this source * pending, unless we *just* added buffering */ assert(audio->total_buffering_ticks < audio->max_buffering_ticks || source->audio_pending || !source->audio_ts || audio->buffering_wait_ticks); #endif return; } if (source->audio_ts != ts->start && source->audio_ts != (ts->start - 1)) { size_t start_point = convert_time_to_frames(sample_rate, source->audio_ts - ts->start); if (start_point == AUDIO_OUTPUT_FRAMES) { #if DEBUG_AUDIO == 1 if (is_audio_source) blog(LOG_DEBUG, "can't discard, start point is " "at audio frame count"); #endif return; } total_floats -= start_point; } size = total_floats * sizeof(float); if (source->audio_input_buf[0].size < size) { if (discard_if_stopped(source, channels)) return; #if DEBUG_AUDIO == 1 if (is_audio_source) blog(LOG_DEBUG, "can't discard, data still pending"); #endif source->audio_ts = ts->end; return; } for (size_t ch = 0; ch < channels; ch++) deque_pop_front(&source->audio_input_buf[ch], NULL, size); source->last_audio_input_buf_size = 0; #if DEBUG_AUDIO == 1 if (is_audio_source) blog(LOG_DEBUG, "audio discarded, new ts: %" PRIu64, ts->end); #endif source->pending_stop = false; source->audio_ts = ts->end; } static inline bool audio_buffering_maxed(struct obs_core_audio *audio) { return audio->total_buffering_ticks == audio->max_buffering_ticks; } static void set_fixed_audio_buffering(struct obs_core_audio *audio, size_t sample_rate, struct ts_info *ts) { struct ts_info new_ts; size_t total_ms; int ticks; if (audio_buffering_maxed(audio)) return; if (!audio->buffering_wait_ticks) audio->buffered_ts = ts->start; ticks = audio->max_buffering_ticks - audio->total_buffering_ticks; audio->total_buffering_ticks += ticks; total_ms = 
audio->total_buffering_ticks * AUDIO_OUTPUT_FRAMES * 1000 / sample_rate; blog(LOG_INFO, "Enabling fixed audio buffering, total " "audio buffering is now %d milliseconds", (int)total_ms); new_ts.start = audio->buffered_ts - audio_frames_to_ns(sample_rate, audio->buffering_wait_ticks * AUDIO_OUTPUT_FRAMES); while (ticks--) { const uint64_t cur_ticks = ++audio->buffering_wait_ticks; new_ts.end = new_ts.start; new_ts.start = audio->buffered_ts - audio_frames_to_ns(sample_rate, cur_ticks * AUDIO_OUTPUT_FRAMES); #if DEBUG_AUDIO == 1 blog(LOG_DEBUG, "add buffered ts: %" PRIu64 "-%" PRIu64, new_ts.start, new_ts.end); #endif deque_push_front(&audio->buffered_timestamps, &new_ts, sizeof(new_ts)); } *ts = new_ts; } static void add_audio_buffering(struct obs_core_audio *audio, size_t sample_rate, struct ts_info *ts, uint64_t min_ts, const char *buffering_name) { struct ts_info new_ts; uint64_t offset; uint64_t frames; size_t total_ms; size_t ms; int ticks; if (audio_buffering_maxed(audio)) return; if (!audio->buffering_wait_ticks) audio->buffered_ts = ts->start; offset = ts->start - min_ts; frames = ns_to_audio_frames(sample_rate, offset); ticks = (int)((frames + AUDIO_OUTPUT_FRAMES - 1) / AUDIO_OUTPUT_FRAMES); audio->total_buffering_ticks += ticks; if (audio->total_buffering_ticks >= audio->max_buffering_ticks) { ticks -= audio->total_buffering_ticks - audio->max_buffering_ticks; audio->total_buffering_ticks = audio->max_buffering_ticks; blog(LOG_WARNING, "Max audio buffering reached!"); } ms = ticks * AUDIO_OUTPUT_FRAMES * 1000 / sample_rate; total_ms = audio->total_buffering_ticks * AUDIO_OUTPUT_FRAMES * 1000 / sample_rate; blog(LOG_INFO, "adding %d milliseconds of audio buffering, total " "audio buffering is now %d milliseconds" " (source: %s)\n", (int)ms, (int)total_ms, buffering_name); #if DEBUG_AUDIO == 1 blog(LOG_DEBUG, "min_ts (%" PRIu64 ") < start timestamp " "(%" PRIu64 ")", min_ts, ts->start); blog(LOG_DEBUG, "old buffered ts: %" PRIu64 "-%" PRIu64, ts->start, 
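/* add_audio_buffering() above converts the timestamp offset between the lagging source and the
 * current mix window into audio frames, rounds up to whole ticks of AUDIO_OUTPUT_FRAMES, and
 * reports the result in milliseconds. A self-contained sketch of that arithmetic (assumes
 * AUDIO_OUTPUT_FRAMES is 1024 as in libobs; mul_div64 uses the GCC/Clang __int128 extension in
 * the spirit of util_mul_div64's overflow-safe multiply-divide): */

```c
#include <stddef.h>
#include <stdint.h>

/* AUDIO_OUTPUT_FRAMES is 1024 in libobs; treated as an assumption here. */
#define FRAMES_PER_TICK 1024

/* Overflow-safe val * num / den, like util_mul_div64 (GCC/Clang __int128). */
static uint64_t mul_div64(uint64_t val, uint64_t num, uint64_t den)
{
	return (uint64_t)((unsigned __int128)val * num / den);
}

/* Mirrors add_audio_buffering(): ns offset -> frames -> whole ticks,
 * with any partial tick rounded up. */
static int offset_to_ticks(uint64_t offset_ns, size_t sample_rate)
{
	uint64_t frames = mul_div64(offset_ns, sample_rate, 1000000000ULL);
	return (int)((frames + FRAMES_PER_TICK - 1) / FRAMES_PER_TICK);
}

/* Mirrors the total_ms computation: ticks -> frames -> milliseconds. */
static size_t ticks_to_ms(int ticks, size_t sample_rate)
{
	return (size_t)ticks * FRAMES_PER_TICK * 1000 / sample_rate;
}
```

/* e.g. a 64 ms lag at 48 kHz is 3072 frames, i.e. exactly 3 ticks of 1024
 * frames, which reports back as 64 ms of added buffering. */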
ts->end); #endif new_ts.start = audio->buffered_ts - audio_frames_to_ns(sample_rate, audio->buffering_wait_ticks * AUDIO_OUTPUT_FRAMES); while (ticks--) { const uint64_t cur_ticks = ++audio->buffering_wait_ticks; new_ts.end = new_ts.start; new_ts.start = audio->buffered_ts - audio_frames_to_ns(sample_rate, cur_ticks * AUDIO_OUTPUT_FRAMES); #if DEBUG_AUDIO == 1 blog(LOG_DEBUG, "add buffered ts: %" PRIu64 "-%" PRIu64, new_ts.start, new_ts.end); #endif deque_push_front(&audio->buffered_timestamps, &new_ts, sizeof(new_ts)); } *ts = new_ts; } static bool audio_buffer_insufficient(struct obs_source *source, size_t sample_rate, uint64_t min_ts) { size_t total_floats = AUDIO_OUTPUT_FRAMES; size_t size; if (source->info.audio_render || source->audio_pending || !source->audio_ts) { return false; } if (source->audio_ts != min_ts && source->audio_ts != (min_ts - 1)) { size_t start_point = convert_time_to_frames(sample_rate, source->audio_ts - min_ts); if (start_point >= AUDIO_OUTPUT_FRAMES) return false; total_floats -= start_point; } size = total_floats * sizeof(float); if (source->audio_input_buf[0].size < size) { source->audio_pending = true; return true; } return false; } static inline const char *find_min_ts(struct obs_core_data *data, uint64_t *min_ts) { obs_source_t *buffering_source = NULL; struct obs_source *source = data->first_audio_source; while (source) { if (!source->audio_pending && source->audio_ts && source->audio_ts < *min_ts) { *min_ts = source->audio_ts; buffering_source = source; } source = (struct obs_source *)source->next_audio_source; } return buffering_source ? 
obs_source_get_name(buffering_source) : NULL; } static inline bool mark_invalid_sources(struct obs_core_data *data, size_t sample_rate, uint64_t min_ts) { bool recalculate = false; struct obs_source *source = data->first_audio_source; while (source) { recalculate |= audio_buffer_insufficient(source, sample_rate, min_ts); source = (struct obs_source *)source->next_audio_source; } return recalculate; } static inline const char *calc_min_ts(struct obs_core_data *data, size_t sample_rate, uint64_t *min_ts) { const char *buffering_name = find_min_ts(data, min_ts); if (mark_invalid_sources(data, sample_rate, *min_ts)) buffering_name = find_min_ts(data, min_ts); return buffering_name; } static inline void release_audio_sources(struct obs_core_audio *audio) { for (size_t i = 0; i < audio->render_order.num; i++) obs_source_release(audio->render_order.array[i]); } static inline void execute_audio_tasks(void) { struct obs_core_audio *audio = &obs->audio; bool tasks_remaining = true; while (tasks_remaining) { pthread_mutex_lock(&audio->task_mutex); if (audio->tasks.size) { struct obs_task_info info; deque_pop_front(&audio->tasks, &info, sizeof(info)); info.task(info.param); } tasks_remaining = !!audio->tasks.size; pthread_mutex_unlock(&audio->task_mutex); } } /* In case monitoring and an 'Audio Output Capture' source have the same device, one silences all the monitored * sources unless the 'Audio Output Capture' is muted. 
*/ static inline bool should_silence_monitored_source(obs_source_t *source, struct obs_core_audio *audio) { obs_source_t *dup_src = audio->monitoring_duplicating_source; if (!dup_src || !obs_source_active(dup_src)) return false; bool fader_muted = close_float(audio->monitoring_duplicating_source->volume, 0.0f, 0.0001f); bool output_capture_unmuted = !audio->monitoring_duplicating_source->muted && !fader_muted; if (output_capture_unmuted) { if (source->monitoring_type == OBS_MONITORING_TYPE_MONITOR_AND_OUTPUT && source != audio->monitoring_duplicating_source) { return true; } } return false; } static inline void clear_audio_output_buf(obs_source_t *source, struct obs_core_audio *audio) { if (!audio->monitoring_duplicating_source) return; uint32_t aoc_mixers = audio->monitoring_duplicating_source->audio_mixers; uint32_t source_mixers = source->audio_mixers; for (size_t mix = 0; mix < MAX_AUDIO_MIXES; mix++) { uint32_t mix_and_val = (1 << mix); if ((aoc_mixers & mix_and_val) && (source_mixers & mix_and_val)) { for (size_t ch = 0; ch < MAX_AUDIO_CHANNELS; ch++) { float *buf = source->audio_output_buf[mix][ch]; if (buf) memset(buf, 0, AUDIO_OUTPUT_FRAMES * sizeof(float)); } } } } bool audio_callback(void *param, uint64_t start_ts_in, uint64_t end_ts_in, uint64_t *out_ts, uint32_t mixers, struct audio_output_data *mixes) { struct obs_core_data *data = &obs->data; struct obs_core_audio *audio = &obs->audio; struct obs_source *source; size_t sample_rate = audio_output_get_sample_rate(audio->audio); size_t channels = audio_output_get_channels(audio->audio); struct ts_info ts = {start_ts_in, end_ts_in}; size_t audio_size; uint64_t min_ts; da_resize(audio->render_order, 0); da_resize(audio->root_nodes, 0); deque_push_back(&audio->buffered_timestamps, &ts, sizeof(ts)); deque_peek_front(&audio->buffered_timestamps, &ts, sizeof(ts)); min_ts = ts.start; audio_size = AUDIO_OUTPUT_FRAMES * sizeof(float); #if DEBUG_AUDIO == 1 blog(LOG_DEBUG, "ts %llu-%llu", ts.start, ts.end); #endif 
/* ------------------------------------------------ */ /* build audio render order */ pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t j = 0; j < obs->video.mixes.num; j++) { struct obs_view *view = obs->video.mixes.array[j]->view; if (!view) continue; pthread_mutex_lock(&view->channels_mutex); /* NOTE: these are source channels, not audio channels */ for (uint32_t i = 0; i < MAX_CHANNELS; i++) { obs_source_t *source = view->channels[i]; if (!source) continue; if (!obs_source_active(source)) continue; if (obs_source_removed(source)) continue; /* first, add top - level sources as root_nodes */ if (obs->video.mixes.array[j]->mix_audio) da_push_back(audio->root_nodes, &source); /* Build audio tree, tag duplicate individual sources */ obs_source_enum_active_tree(source, push_audio_tree2, audio); /* add top - level sources to audio tree */ push_audio_tree(NULL, source, audio); } pthread_mutex_unlock(&view->channels_mutex); } pthread_mutex_unlock(&obs->video.mixes_mutex); pthread_mutex_lock(&data->audio_sources_mutex); source = data->first_audio_source; while (source) { if (!obs_source_removed(source)) { push_audio_tree(NULL, source, audio); } source = (struct obs_source *)source->next_audio_source; } pthread_mutex_unlock(&data->audio_sources_mutex); /* ------------------------------------------------ */ /* render audio data */ for (size_t i = 0; i < audio->render_order.num; i++) { obs_source_t *source = audio->render_order.array[i]; obs_source_audio_render(source, mixers, channels, sample_rate, audio_size); if (should_silence_monitored_source(source, audio)) clear_audio_output_buf(source, audio); /* if a source has gone backward in time and we can no * longer buffer, drop some or all of its audio */ if (audio_buffering_maxed(audio) && source->audio_ts != 0 && source->audio_ts < ts.start) { if (source->info.audio_render) { blog(LOG_DEBUG, "render audio source %s timestamp has " "gone backwards", obs_source_get_name(source)); /* just avoid further damage */ 
source->audio_pending = true; #if DEBUG_AUDIO == 1 /* this should really be fixed */ assert(false); #endif } else { pthread_mutex_lock(&source->audio_buf_mutex); bool rerender = ignore_audio(source, channels, sample_rate, ts.start); pthread_mutex_unlock(&source->audio_buf_mutex); /* if we (potentially) recovered, re-render */ if (rerender) obs_source_audio_render(source, mixers, channels, sample_rate, audio_size); } } } /* ------------------------------------------------ */ /* get minimum audio timestamp */ pthread_mutex_lock(&data->audio_sources_mutex); const char *buffering_name = calc_min_ts(data, sample_rate, &min_ts); pthread_mutex_unlock(&data->audio_sources_mutex); /* ------------------------------------------------ */ /* if a source has gone backward in time, buffer */ if (audio->fixed_buffer) { if (!audio_buffering_maxed(audio)) { set_fixed_audio_buffering(audio, sample_rate, &ts); } } else if (min_ts < ts.start) { add_audio_buffering(audio, sample_rate, &ts, min_ts, buffering_name); } /* ------------------------------------------------ */ /* mix audio */ if (!audio->buffering_wait_ticks) { for (size_t i = 0; i < audio->root_nodes.num; i++) { obs_source_t *source = audio->root_nodes.array[i]; if (source->audio_pending) continue; pthread_mutex_lock(&source->audio_buf_mutex); if (source->audio_output_buf[0][0] && source->audio_ts) mix_audio(mixes, source, channels, sample_rate, &ts); pthread_mutex_unlock(&source->audio_buf_mutex); } } /* ------------------------------------------------ */ /* discard audio */ pthread_mutex_lock(&data->audio_sources_mutex); source = data->first_audio_source; while (source) { pthread_mutex_lock(&source->audio_buf_mutex); discard_audio(audio, source, channels, sample_rate, &ts); pthread_mutex_unlock(&source->audio_buf_mutex); source = (struct obs_source *)source->next_audio_source; } pthread_mutex_unlock(&data->audio_sources_mutex); /* ------------------------------------------------ */ /* release audio sources */ 
release_audio_sources(audio); deque_pop_front(&audio->buffered_timestamps, NULL, sizeof(ts)); *out_ts = ts.start; if (audio->buffering_wait_ticks) { audio->buffering_wait_ticks--; return false; } execute_audio_tasks(); UNUSED_PARAMETER(param); return true; } obs-studio-32.1.0-sources/libobs/obs-nix.h000644 001751 001751 00000002520 15153330235 021235 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2020 by Georges Basile Stavracas Neto This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once #ifdef __cplusplus extern "C" { #endif #include "obs-internal.h" struct obs_nix_hotkeys_vtable { bool (*init)(struct obs_core_hotkeys *hotkeys); void (*free)(struct obs_core_hotkeys *hotkeys); bool (*is_pressed)(obs_hotkeys_platform_t *context, obs_key_t key); void (*key_to_str)(obs_key_t key, struct dstr *dstr); obs_key_t (*key_from_virtual_key)(int sym); int (*key_to_virtual_key)(obs_key_t key); }; #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-defs.h000644 001751 001751 00000003611 15153330235 021362 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once /** Maximum number of source channels for output and per display */ #define MAX_CHANNELS 64 #define OBS_ALIGN_CENTER (0) #define OBS_ALIGN_LEFT (1 << 0) #define OBS_ALIGN_RIGHT (1 << 1) #define OBS_ALIGN_TOP (1 << 2) #define OBS_ALIGN_BOTTOM (1 << 3) #define MODULE_SUCCESS 0 #define MODULE_ERROR -1 #define MODULE_FAILED_TO_OPEN -2 #define MODULE_FILE_NOT_FOUND MODULE_FAILED_TO_OPEN /* DEPRECATED! 
*/ #define MODULE_MISSING_EXPORTS -3 #define MODULE_INCOMPATIBLE_VER -4 #define MODULE_HARDCODED_SKIP -5 #define OBS_OUTPUT_SUCCESS 0 #define OBS_OUTPUT_BAD_PATH -1 #define OBS_OUTPUT_CONNECT_FAILED -2 #define OBS_OUTPUT_INVALID_STREAM -3 #define OBS_OUTPUT_ERROR -4 #define OBS_OUTPUT_DISCONNECTED -5 #define OBS_OUTPUT_UNSUPPORTED -6 #define OBS_OUTPUT_NO_SPACE -7 #define OBS_OUTPUT_ENCODE_ERROR -8 #define OBS_OUTPUT_HDR_DISABLED -9 #define OBS_VIDEO_SUCCESS 0 #define OBS_VIDEO_FAIL -1 #define OBS_VIDEO_NOT_SUPPORTED -2 #define OBS_VIDEO_INVALID_PARAM -3 #define OBS_VIDEO_CURRENTLY_ACTIVE -4 #define OBS_VIDEO_MODULE_NOT_FOUND -5 obs-studio-32.1.0-sources/libobs/obs-service.c000644 001751 001751 00000030730 15153330235 022076 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #include "obs-internal.h" #define get_weak(service) ((obs_weak_service_t *)service->context.control) const struct obs_service_info *find_service(const char *id) { size_t i; for (i = 0; i < obs->service_types.num; i++) if (strcmp(obs->service_types.array[i].id, id) == 0) return obs->service_types.array + i; return NULL; } const char *obs_service_get_display_name(const char *id) { const struct obs_service_info *info = find_service(id); return (info != NULL) ? info->get_name(info->type_data) : NULL; } obs_module_t *obs_service_get_module(const char *id) { obs_module_t *module = obs->first_module; while (module) { for (size_t i = 0; i < module->services.num; i++) { if (strcmp(module->services.array[i], id) == 0) { return module; } } module = module->next; } module = obs->first_disabled_module; while (module) { for (size_t i = 0; i < module->services.num; i++) { if (strcmp(module->services.array[i], id) == 0) { return module; } } module = module->next; } return NULL; } enum obs_module_load_state obs_service_load_state(const char *id) { obs_module_t *module = obs_service_get_module(id); if (!module) { return OBS_MODULE_MISSING; } return module->load_state; } static obs_service_t *obs_service_create_internal(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data, bool private) { const struct obs_service_info *info = find_service(id); struct obs_service *service; if (!info) { blog(LOG_ERROR, "Service '%s' not found", id); return NULL; } service = bzalloc(sizeof(struct obs_service)); if (!obs_context_data_init(&service->context, OBS_OBJ_TYPE_SERVICE, settings, name, NULL, hotkey_data, private)) { bfree(service); return NULL; } service->info = *info; service->context.data = service->info.create(service->context.settings, service); if (!service->context.data) blog(LOG_ERROR, "Failed to create service '%s'!", name); obs_context_init_control(&service->context, service, 
(obs_destroy_cb)obs_service_destroy); obs_context_data_insert(&service->context, &obs->data.services_mutex, &obs->data.first_service); blog(LOG_DEBUG, "service '%s' (%s) created", name, id); return service; } obs_service_t *obs_service_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data) { return obs_service_create_internal(id, name, settings, hotkey_data, false); } obs_service_t *obs_service_create_private(const char *id, const char *name, obs_data_t *settings) { return obs_service_create_internal(id, name, settings, NULL, true); } static void actually_destroy_service(struct obs_service *service) { if (service->context.data) service->info.destroy(service->context.data); if (service->output) service->output->service = NULL; blog(LOG_DEBUG, "service '%s' destroyed", service->context.name); obs_context_data_free(&service->context); if (service->owns_info_id) bfree((void *)service->info.id); bfree(service); } void obs_service_destroy(obs_service_t *service) { if (service) { obs_context_data_remove(&service->context); service->destroy = true; /* do NOT destroy the service until the service is no * longer in use */ if (!service->active) actually_destroy_service(service); } } const char *obs_service_get_name(const obs_service_t *service) { return obs_service_valid(service, "obs_service_get_name") ? service->context.name : NULL; } static inline obs_data_t *get_defaults(const struct obs_service_info *info) { obs_data_t *settings = obs_data_create(); if (info->get_defaults) info->get_defaults(settings); return settings; } obs_data_t *obs_service_defaults(const char *id) { const struct obs_service_info *info = find_service(id); return (info) ? 
get_defaults(info) : NULL; } obs_properties_t *obs_get_service_properties(const char *id) { const struct obs_service_info *info = find_service(id); if (info && info->get_properties) { obs_data_t *defaults = get_defaults(info); obs_properties_t *properties; properties = info->get_properties(NULL); obs_properties_apply_settings(properties, defaults); obs_data_release(defaults); return properties; } return NULL; } obs_properties_t *obs_service_properties(const obs_service_t *service) { if (!obs_service_valid(service, "obs_service_properties")) return NULL; if (service->info.get_properties) { obs_properties_t *props; props = service->info.get_properties(service->context.data); obs_properties_apply_settings(props, service->context.settings); return props; } return NULL; } const char *obs_service_get_type(const obs_service_t *service) { return obs_service_valid(service, "obs_service_get_type") ? service->info.id : NULL; } void obs_service_update(obs_service_t *service, obs_data_t *settings) { if (!obs_service_valid(service, "obs_service_update")) return; obs_data_apply(service->context.settings, settings); if (service->info.update) service->info.update(service->context.data, service->context.settings); } obs_data_t *obs_service_get_settings(const obs_service_t *service) { if (!obs_service_valid(service, "obs_service_get_settings")) return NULL; obs_data_addref(service->context.settings); return service->context.settings; } signal_handler_t *obs_service_get_signal_handler(const obs_service_t *service) { return obs_service_valid(service, "obs_service_get_signal_handler") ? service->context.signals : NULL; } proc_handler_t *obs_service_get_proc_handler(const obs_service_t *service) { return obs_service_valid(service, "obs_service_get_proc_handler") ? 
service->context.procs : NULL; } void obs_service_activate(struct obs_service *service) { if (!obs_service_valid(service, "obs_service_activate")) return; if (!service->output) { blog(LOG_WARNING, "obs_service_activate: service '%s' " "is not assigned to an output", obs_service_get_name(service)); return; } if (service->active) return; if (service->info.activate) service->info.activate(service->context.data, service->context.settings); service->active = true; } void obs_service_deactivate(struct obs_service *service, bool remove) { if (!obs_service_valid(service, "obs_service_deactivate")) return; if (!service->output) { blog(LOG_WARNING, "obs_service_deactivate: service '%s' " "is not assigned to an output", obs_service_get_name(service)); return; } if (!service->active) return; if (service->info.deactivate) service->info.deactivate(service->context.data); service->active = false; if (service->destroy) actually_destroy_service(service); else if (remove) service->output = NULL; } bool obs_service_initialize(struct obs_service *service, struct obs_output *output) { if (!obs_service_valid(service, "obs_service_initialize")) return false; if (!obs_output_valid(output, "obs_service_initialize")) return false; if (service->info.initialize) return service->info.initialize(service->context.data, output); return true; } void obs_service_apply_encoder_settings(obs_service_t *service, obs_data_t *video_encoder_settings, obs_data_t *audio_encoder_settings) { if (!obs_service_valid(service, "obs_service_apply_encoder_settings")) return; if (!service->info.apply_encoder_settings) return; if (video_encoder_settings || audio_encoder_settings) service->info.apply_encoder_settings(service->context.data, video_encoder_settings, audio_encoder_settings); } void obs_service_release(obs_service_t *service) { if (!service) return; obs_weak_service_t *control = get_weak(service); if (obs_ref_release(&control->ref)) { // The order of operations is important here since //
get_context_by_name in obs.c relies on weak refs // being alive while the context is listed obs_service_destroy(service); obs_weak_service_release(control); } } void obs_weak_service_addref(obs_weak_service_t *weak) { if (!weak) return; obs_weak_ref_addref(&weak->ref); } void obs_weak_service_release(obs_weak_service_t *weak) { if (!weak) return; if (obs_weak_ref_release(&weak->ref)) bfree(weak); } obs_service_t *obs_service_get_ref(obs_service_t *service) { if (!service) return NULL; return obs_weak_service_get_service(get_weak(service)); } obs_weak_service_t *obs_service_get_weak_service(obs_service_t *service) { if (!service) return NULL; obs_weak_service_t *weak = get_weak(service); obs_weak_service_addref(weak); return weak; } obs_service_t *obs_weak_service_get_service(obs_weak_service_t *weak) { if (!weak) return NULL; if (obs_weak_ref_get_ref(&weak->ref)) return weak->service; return NULL; } bool obs_weak_service_references_service(obs_weak_service_t *weak, obs_service_t *service) { return weak && service && weak->service == service; } void *obs_service_get_type_data(obs_service_t *service) { return obs_service_valid(service, "obs_service_get_type_data") ? service->info.type_data : NULL; } const char *obs_service_get_id(const obs_service_t *service) { return obs_service_valid(service, "obs_service_get_id") ? 
service->info.id : NULL; } void obs_service_get_supported_resolutions(const obs_service_t *service, struct obs_service_resolution **resolutions, size_t *count) { if (!obs_service_valid(service, "obs_service_supported_resolutions")) return; if (!obs_ptr_valid(resolutions, "obs_service_supported_resolutions")) return; if (!obs_ptr_valid(count, "obs_service_supported_resolutions")) return; *resolutions = NULL; *count = 0; if (service->info.get_supported_resolutions) service->info.get_supported_resolutions(service->context.data, resolutions, count); } void obs_service_get_max_fps(const obs_service_t *service, int *fps) { if (!obs_service_valid(service, "obs_service_get_max_fps")) return; if (!obs_ptr_valid(fps, "obs_service_get_max_fps")) return; *fps = 0; if (service->info.get_max_fps) service->info.get_max_fps(service->context.data, fps); } void obs_service_get_max_bitrate(const obs_service_t *service, int *video_bitrate, int *audio_bitrate) { if (video_bitrate) *video_bitrate = 0; if (audio_bitrate) *audio_bitrate = 0; if (!obs_service_valid(service, "obs_service_get_max_bitrate")) return; if (service->info.get_max_bitrate) service->info.get_max_bitrate(service->context.data, video_bitrate, audio_bitrate); } const char **obs_service_get_supported_video_codecs(const obs_service_t *service) { if (service->info.get_supported_video_codecs) return service->info.get_supported_video_codecs(service->context.data); return NULL; } const char **obs_service_get_supported_audio_codecs(const obs_service_t *service) { if (service->info.get_supported_audio_codecs) return service->info.get_supported_audio_codecs(service->context.data); return NULL; } const char *obs_service_get_protocol(const obs_service_t *service) { if (!obs_service_valid(service, "obs_service_get_protocol")) return NULL; return service->info.get_protocol(service->context.data); } const char *obs_service_get_preferred_output_type(const obs_service_t *service) { if (!obs_service_valid(service, 
"obs_service_get_preferred_output_type")) return NULL; if (service->info.get_output_type) return service->info.get_output_type(service->context.data); return NULL; } const char *obs_service_get_connect_info(const obs_service_t *service, uint32_t type) { if (!obs_service_valid(service, "obs_service_get_info")) return NULL; if (!service->info.get_connect_info) return NULL; return service->info.get_connect_info(service->context.data, type); } bool obs_service_can_try_to_connect(const obs_service_t *service) { if (!obs_service_valid(service, "obs_service_can_connect")) return false; if (!service->info.can_try_to_connect) return true; return service->info.can_try_to_connect(service->context.data); } obs-studio-32.1.0-sources/libobs/obs-nix-wayland.c000644 001751 001751 00000122202 15153330235 022665 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2019 by Jason Francis This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "obs-internal.h" #include "obs-nix-platform.h" #include "obs-nix-wayland.h" #include #include #include #include #include // X11 only supports 256 scancodes, most keyboards dont have 256 keys so this should be reasonable. #define MAX_KEYCODES 256 // X11 keymaps only have 4 shift levels, im not sure xkbcommon supports a way to shift the state into a higher level anyway. 
#define MAX_SHIFT_LEVELS 4 struct obs_hotkeys_platform { struct wl_display *display; struct wl_seat *seat; struct wl_keyboard *keyboard; struct xkb_context *xkb_context; struct xkb_keymap *xkb_keymap; struct xkb_state *xkb_state; xkb_keysym_t key_to_sym[MAX_SHIFT_LEVELS][MAX_KEYCODES]; xkb_keysym_t obs_to_key[OBS_KEY_LAST_VALUE]; uint32_t current_layout; }; static obs_key_t obs_nix_wayland_key_from_virtual_key(int sym); static void load_keymap_data(struct xkb_keymap *keymap, xkb_keycode_t key, void *data) { obs_hotkeys_platform_t *plat = (obs_hotkeys_platform_t *)data; if (key >= MAX_KEYCODES) return; const xkb_keysym_t *syms; for (int level = 0; level < MAX_SHIFT_LEVELS; level++) { int nsyms = xkb_keymap_key_get_syms_by_level(keymap, key, plat->current_layout, level, &syms); if (nsyms < 1) continue; obs_key_t obs_key = obs_nix_wayland_key_from_virtual_key(syms[0]); // This avoids ambiguity where multiple scancodes produce the same symbols. // e.g. LSGT and Shift+AB08 produce `<` on default US layout. 
if (!plat->obs_to_key[obs_key]) plat->obs_to_key[obs_key] = key; plat->key_to_sym[level][key] = syms[0]; } } static void rebuild_keymap_data(obs_hotkeys_platform_t *plat) { memset(plat->key_to_sym, 0, sizeof(xkb_keysym_t) * MAX_SHIFT_LEVELS * MAX_KEYCODES); memset(plat->obs_to_key, 0, sizeof(xkb_keysym_t) * OBS_KEY_LAST_VALUE); xkb_keymap_key_for_each(plat->xkb_keymap, load_keymap_data, plat); } static void platform_keyboard_keymap(void *data, struct wl_keyboard *keyboard, uint32_t format, int32_t fd, uint32_t size) { UNUSED_PARAMETER(keyboard); UNUSED_PARAMETER(format); obs_hotkeys_platform_t *plat = (obs_hotkeys_platform_t *)data; char *keymap_shm = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0); if (keymap_shm == MAP_FAILED) { close(fd); return; } struct xkb_keymap *xkb_keymap = xkb_keymap_new_from_string( plat->xkb_context, keymap_shm, XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS); munmap(keymap_shm, size); close(fd); // cleanup old keymap and state xkb_keymap_unref(plat->xkb_keymap); xkb_state_unref(plat->xkb_state); plat->xkb_keymap = xkb_keymap; plat->xkb_state = xkb_state_new(xkb_keymap); rebuild_keymap_data(plat); } static void platform_keyboard_modifiers(void *data, struct wl_keyboard *keyboard, uint32_t serial, uint32_t mods_depressed, uint32_t mods_latched, uint32_t mods_locked, uint32_t group) { UNUSED_PARAMETER(keyboard); UNUSED_PARAMETER(serial); obs_hotkeys_platform_t *plat = (obs_hotkeys_platform_t *)data; xkb_state_update_mask(plat->xkb_state, mods_depressed, mods_latched, mods_locked, 0, 0, group); if (plat->current_layout != group) { plat->current_layout = group; rebuild_keymap_data(plat); } } static void platform_keyboard_key(void *data, struct wl_keyboard *keyboard, uint32_t serial, uint32_t time, uint32_t key, uint32_t state) { UNUSED_PARAMETER(data); UNUSED_PARAMETER(keyboard); UNUSED_PARAMETER(serial); UNUSED_PARAMETER(time); UNUSED_PARAMETER(key); UNUSED_PARAMETER(state); // We have access to the keyboard input here, but behave 
like other // platforms and let Qt inform us of key events through the platform // callbacks. } static void platform_keyboard_enter(void *data, struct wl_keyboard *keyboard, uint32_t serial, struct wl_surface *surface, struct wl_array *keys) { UNUSED_PARAMETER(data); UNUSED_PARAMETER(keyboard); UNUSED_PARAMETER(serial); UNUSED_PARAMETER(surface); UNUSED_PARAMETER(keys); // Nothing to do here. } static void platform_keyboard_leave(void *data, struct wl_keyboard *keyboard, uint32_t serial, struct wl_surface *surface) { UNUSED_PARAMETER(data); UNUSED_PARAMETER(keyboard); UNUSED_PARAMETER(serial); UNUSED_PARAMETER(surface); // Nothing to do. } static void platform_keyboard_repeat_info(void *data, struct wl_keyboard *keyboard, int32_t rate, int32_t delay) { UNUSED_PARAMETER(data); UNUSED_PARAMETER(keyboard); UNUSED_PARAMETER(rate); UNUSED_PARAMETER(delay); // Nothing to do. } const struct wl_keyboard_listener keyboard_listener = { .keymap = platform_keyboard_keymap, .enter = platform_keyboard_enter, .leave = platform_keyboard_leave, .key = platform_keyboard_key, .modifiers = platform_keyboard_modifiers, .repeat_info = platform_keyboard_repeat_info, }; static void platform_seat_capabilities(void *data, struct wl_seat *seat, uint32_t capabilities) { UNUSED_PARAMETER(seat); obs_hotkeys_platform_t *plat = (obs_hotkeys_platform_t *)data; bool kb_present = capabilities & WL_SEAT_CAPABILITY_KEYBOARD; if (kb_present && plat->keyboard == NULL) { plat->keyboard = wl_seat_get_keyboard(plat->seat); wl_keyboard_add_listener(plat->keyboard, &keyboard_listener, plat); } else if (!kb_present && plat->keyboard != NULL) { wl_keyboard_release(plat->keyboard); plat->keyboard = NULL; } } static void platform_seat_name(void *data, struct wl_seat *seat, const char *name) { UNUSED_PARAMETER(data); UNUSED_PARAMETER(seat); UNUSED_PARAMETER(name); // Nothing to do. 
} const struct wl_seat_listener seat_listener = { .capabilities = platform_seat_capabilities, .name = platform_seat_name, }; static void platform_registry_handler(void *data, struct wl_registry *registry, uint32_t id, const char *interface, uint32_t version) { obs_hotkeys_platform_t *plat = (obs_hotkeys_platform_t *)data; if (strcmp(interface, wl_seat_interface.name) == 0) { if (version < 4) { blog(LOG_WARNING, "[wayland] hotkeys disabled, compositor is too old"); return; } // Only negotiate up to version 7, the current wl_seat at time of writing. plat->seat = wl_registry_bind(registry, id, &wl_seat_interface, version <= 7 ? version : 7); wl_seat_add_listener(plat->seat, &seat_listener, plat); } } static void platform_registry_remover(void *data, struct wl_registry *registry, uint32_t id) { UNUSED_PARAMETER(data); UNUSED_PARAMETER(registry); UNUSED_PARAMETER(id); // Nothing to do. } const struct wl_registry_listener registry_listener = { .global = platform_registry_handler, .global_remove = platform_registry_remover, }; void obs_nix_wayland_log_info(void) { struct wl_display *display = obs_get_nix_platform_display(); if (display == NULL) { blog(LOG_INFO, "Unable to connect to Wayland server"); return; } //TODO: query some information about the wayland server if possible blog(LOG_INFO, "Connected to Wayland server"); } static bool obs_nix_wayland_hotkeys_platform_init(struct obs_core_hotkeys *hotkeys) { struct wl_display *display = obs_get_nix_platform_display(); hotkeys->platform_context = bzalloc(sizeof(obs_hotkeys_platform_t)); hotkeys->platform_context->display = display; hotkeys->platform_context->xkb_context = xkb_context_new(XKB_CONTEXT_NO_FLAGS); struct wl_registry *registry = wl_display_get_registry(display); wl_registry_add_listener(registry, ®istry_listener, hotkeys->platform_context); wl_display_roundtrip(display); return true; } static void obs_nix_wayland_hotkeys_platform_free(struct obs_core_hotkeys *hotkeys) { obs_hotkeys_platform_t *plat = 
hotkeys->platform_context; xkb_context_unref(plat->xkb_context); xkb_keymap_unref(plat->xkb_keymap); xkb_state_unref(plat->xkb_state); bfree(plat); } static bool obs_nix_wayland_hotkeys_platform_is_pressed(obs_hotkeys_platform_t *context, obs_key_t key) { UNUSED_PARAMETER(context); UNUSED_PARAMETER(key); // This function is only used by the hotkey thread for capturing out of // focus hotkey triggers. Since wayland never delivers key events when out // of focus we leave this blank intentionally. return false; } static void obs_nix_wayland_key_to_str(obs_key_t key, struct dstr *dstr) { if (key >= OBS_KEY_MOUSE1 && key <= OBS_KEY_MOUSE29) { if (obs->hotkeys.translations[key]) { dstr_copy(dstr, obs->hotkeys.translations[key]); } else { dstr_printf(dstr, "Mouse %d", (int)(key - OBS_KEY_MOUSE1 + 1)); } return; } if (key >= OBS_KEY_NUM0 && key <= OBS_KEY_NUM9) { if (obs->hotkeys.translations[key]) { dstr_copy(dstr, obs->hotkeys.translations[key]); } else { dstr_printf(dstr, "Numpad %d", (int)(key - OBS_KEY_NUM0)); } return; } #define translate_key(key, def) dstr_copy(dstr, obs_get_hotkey_translation(key, def)) switch (key) { case OBS_KEY_INSERT: return translate_key(key, "Insert"); case OBS_KEY_DELETE: return translate_key(key, "Delete"); case OBS_KEY_HOME: return translate_key(key, "Home"); case OBS_KEY_END: return translate_key(key, "End"); case OBS_KEY_PAGEUP: return translate_key(key, "Page Up"); case OBS_KEY_PAGEDOWN: return translate_key(key, "Page Down"); case OBS_KEY_NUMLOCK: return translate_key(key, "Num Lock"); case OBS_KEY_SCROLLLOCK: return translate_key(key, "Scroll Lock"); case OBS_KEY_CAPSLOCK: return translate_key(key, "Caps Lock"); case OBS_KEY_BACKSPACE: return translate_key(key, "Backspace"); case OBS_KEY_TAB: return translate_key(key, "Tab"); case OBS_KEY_PRINT: return translate_key(key, "Print"); case OBS_KEY_PAUSE: return translate_key(key, "Pause"); case OBS_KEY_LEFT: return translate_key(key, "Left"); case OBS_KEY_RIGHT: return translate_key(key, 
"Right"); case OBS_KEY_UP: return translate_key(key, "Up"); case OBS_KEY_DOWN: return translate_key(key, "Down"); case OBS_KEY_SHIFT: return translate_key(key, "Shift"); case OBS_KEY_ALT: return translate_key(key, "Alt"); case OBS_KEY_CONTROL: return translate_key(key, "Control"); case OBS_KEY_META: return translate_key(key, "Super"); case OBS_KEY_MENU: return translate_key(key, "Menu"); case OBS_KEY_NUMASTERISK: return translate_key(key, "Numpad *"); case OBS_KEY_NUMPLUS: return translate_key(key, "Numpad +"); case OBS_KEY_NUMMINUS: return translate_key(key, "Numpad -"); case OBS_KEY_NUMCOMMA: return translate_key(key, "Numpad ,"); case OBS_KEY_NUMPERIOD: return translate_key(key, "Numpad ."); case OBS_KEY_NUMSLASH: return translate_key(key, "Numpad /"); case OBS_KEY_SPACE: return translate_key(key, "Space"); case OBS_KEY_ESCAPE: return translate_key(key, "Escape"); default:; } if (key >= OBS_KEY_F1 && key <= OBS_KEY_F35) { dstr_printf(dstr, "F%d", (int)(key - OBS_KEY_F1 + 1)); return; } obs_hotkeys_platform_t *plat = obs->hotkeys.platform_context; // Translate the obs key back down to shift level 1 and then back to obs key. xkb_keycode_t keycode = plat->obs_to_key[key]; xkb_keysym_t base_sym = plat->key_to_sym[0][keycode]; if (base_sym != 0) { char buf[16] = {0}; if (xkb_keysym_to_utf8(base_sym, buf, 15)) { // Normally obs uses capital letters but we are shift level 1 (lower case). 
dstr_copy(dstr, buf); } } if (key != OBS_KEY_NONE && dstr_is_empty(dstr)) { dstr_copy(dstr, obs_key_to_name(key)); } } static obs_key_t obs_nix_wayland_key_from_virtual_key(int sym) { switch (sym) { case XKB_KEY_0: return OBS_KEY_0; case XKB_KEY_1: return OBS_KEY_1; case XKB_KEY_2: return OBS_KEY_2; case XKB_KEY_3: return OBS_KEY_3; case XKB_KEY_4: return OBS_KEY_4; case XKB_KEY_5: return OBS_KEY_5; case XKB_KEY_6: return OBS_KEY_6; case XKB_KEY_7: return OBS_KEY_7; case XKB_KEY_8: return OBS_KEY_8; case XKB_KEY_9: return OBS_KEY_9; case XKB_KEY_A: return OBS_KEY_A; case XKB_KEY_a: return OBS_KEY_A; case XKB_KEY_Aacute: return OBS_KEY_AACUTE; case XKB_KEY_aacute: return OBS_KEY_AACUTE; case XKB_KEY_Acircumflex: return OBS_KEY_ACIRCUMFLEX; case XKB_KEY_acircumflex: return OBS_KEY_ACIRCUMFLEX; case XKB_KEY_acute: return OBS_KEY_ACUTE; case XKB_KEY_Adiaeresis: return OBS_KEY_ADIAERESIS; case XKB_KEY_adiaeresis: return OBS_KEY_ADIAERESIS; case XKB_KEY_AE: return OBS_KEY_AE; case XKB_KEY_ae: return OBS_KEY_AE; case XKB_KEY_Agrave: return OBS_KEY_AGRAVE; case XKB_KEY_agrave: return OBS_KEY_AGRAVE; case XKB_KEY_ampersand: return OBS_KEY_AMPERSAND; case XKB_KEY_apostrophe: return OBS_KEY_APOSTROPHE; case XKB_KEY_Aring: return OBS_KEY_ARING; case XKB_KEY_aring: return OBS_KEY_ARING; case XKB_KEY_asciicircum: return OBS_KEY_ASCIICIRCUM; case XKB_KEY_asciitilde: return OBS_KEY_ASCIITILDE; case XKB_KEY_asterisk: return OBS_KEY_ASTERISK; case XKB_KEY_at: return OBS_KEY_AT; case XKB_KEY_Atilde: return OBS_KEY_ATILDE; case XKB_KEY_atilde: return OBS_KEY_ATILDE; case XKB_KEY_B: return OBS_KEY_B; case XKB_KEY_b: return OBS_KEY_B; case XKB_KEY_backslash: return OBS_KEY_BACKSLASH; case XKB_KEY_BackSpace: return OBS_KEY_BACKSPACE; case XKB_KEY_BackTab: return OBS_KEY_BACKTAB; case XKB_KEY_bar: return OBS_KEY_BAR; case XKB_KEY_braceleft: return OBS_KEY_BRACELEFT; case XKB_KEY_braceright: return OBS_KEY_BRACERIGHT; case XKB_KEY_bracketleft: return OBS_KEY_BRACKETLEFT; case 
XKB_KEY_bracketright: return OBS_KEY_BRACKETRIGHT; case XKB_KEY_brokenbar: return OBS_KEY_BROKENBAR; case XKB_KEY_C: return OBS_KEY_C; case XKB_KEY_c: return OBS_KEY_C; case XKB_KEY_Cancel: return OBS_KEY_CANCEL; case XKB_KEY_Ccedilla: return OBS_KEY_CCEDILLA; case XKB_KEY_ccedilla: return OBS_KEY_CCEDILLA; case XKB_KEY_cedilla: return OBS_KEY_CEDILLA; case XKB_KEY_cent: return OBS_KEY_CENT; case XKB_KEY_Clear: return OBS_KEY_CLEAR; case XKB_KEY_Codeinput: return OBS_KEY_CODEINPUT; case XKB_KEY_colon: return OBS_KEY_COLON; case XKB_KEY_comma: return OBS_KEY_COMMA; case XKB_KEY_copyright: return OBS_KEY_COPYRIGHT; case XKB_KEY_currency: return OBS_KEY_CURRENCY; case XKB_KEY_D: return OBS_KEY_D; case XKB_KEY_d: return OBS_KEY_D; case XKB_KEY_dead_abovedot: return OBS_KEY_DEAD_ABOVEDOT; case XKB_KEY_dead_abovering: return OBS_KEY_DEAD_ABOVERING; case XKB_KEY_dead_acute: return OBS_KEY_DEAD_ACUTE; case XKB_KEY_dead_belowdot: return OBS_KEY_DEAD_BELOWDOT; case XKB_KEY_dead_breve: return OBS_KEY_DEAD_BREVE; case XKB_KEY_dead_caron: return OBS_KEY_DEAD_CARON; case XKB_KEY_dead_cedilla: return OBS_KEY_DEAD_CEDILLA; case XKB_KEY_dead_circumflex: return OBS_KEY_DEAD_CIRCUMFLEX; case XKB_KEY_dead_diaeresis: return OBS_KEY_DEAD_DIAERESIS; case XKB_KEY_dead_doubleacute: return OBS_KEY_DEAD_DOUBLEACUTE; case XKB_KEY_dead_grave: return OBS_KEY_DEAD_GRAVE; case XKB_KEY_dead_hook: return OBS_KEY_DEAD_HOOK; case XKB_KEY_dead_horn: return OBS_KEY_DEAD_HORN; case XKB_KEY_dead_iota: return OBS_KEY_DEAD_IOTA; case XKB_KEY_dead_macron: return OBS_KEY_DEAD_MACRON; case XKB_KEY_dead_ogonek: return OBS_KEY_DEAD_OGONEK; case XKB_KEY_dead_semivoiced_sound: return OBS_KEY_DEAD_SEMIVOICED_SOUND; case XKB_KEY_dead_tilde: return OBS_KEY_DEAD_TILDE; case XKB_KEY_dead_voiced_sound: return OBS_KEY_DEAD_VOICED_SOUND; case XKB_KEY_degree: return OBS_KEY_DEGREE; case XKB_KEY_Delete: return OBS_KEY_DELETE; case XKB_KEY_diaeresis: return OBS_KEY_DIAERESIS; case XKB_KEY_division: return OBS_KEY_DIVISION; 
case XKB_KEY_dollar: return OBS_KEY_DOLLAR; case XKB_KEY_Down: return OBS_KEY_DOWN; case XKB_KEY_E: return OBS_KEY_E; case XKB_KEY_e: return OBS_KEY_E; case XKB_KEY_Eacute: return OBS_KEY_EACUTE; case XKB_KEY_eacute: return OBS_KEY_EACUTE; case XKB_KEY_Ecircumflex: return OBS_KEY_ECIRCUMFLEX; case XKB_KEY_ecircumflex: return OBS_KEY_ECIRCUMFLEX; case XKB_KEY_Ediaeresis: return OBS_KEY_EDIAERESIS; case XKB_KEY_ediaeresis: return OBS_KEY_EDIAERESIS; case XKB_KEY_Egrave: return OBS_KEY_EGRAVE; case XKB_KEY_egrave: return OBS_KEY_EGRAVE; case XKB_KEY_Eisu_Shift: return OBS_KEY_EISU_SHIFT; case XKB_KEY_Eisu_toggle: return OBS_KEY_EISU_TOGGLE; case XKB_KEY_End: return OBS_KEY_END; case XKB_KEY_equal: return OBS_KEY_EQUAL; case XKB_KEY_Escape: return OBS_KEY_ESCAPE; case XKB_KEY_Eth: return OBS_KEY_ETH; case XKB_KEY_eth: return OBS_KEY_ETH; case XKB_KEY_exclam: return OBS_KEY_EXCLAM; case XKB_KEY_exclamdown: return OBS_KEY_EXCLAMDOWN; case XKB_KEY_Execute: return OBS_KEY_EXECUTE; case XKB_KEY_F: return OBS_KEY_F; case XKB_KEY_f: return OBS_KEY_F; case XKB_KEY_F1: return OBS_KEY_F1; case XKB_KEY_F10: return OBS_KEY_F10; case XKB_KEY_F11: return OBS_KEY_F11; case XKB_KEY_F12: return OBS_KEY_F12; case XKB_KEY_F13: return OBS_KEY_F13; case XKB_KEY_F14: return OBS_KEY_F14; case XKB_KEY_F15: return OBS_KEY_F15; case XKB_KEY_F16: return OBS_KEY_F16; case XKB_KEY_F17: return OBS_KEY_F17; case XKB_KEY_F18: return OBS_KEY_F18; case XKB_KEY_F19: return OBS_KEY_F19; case XKB_KEY_F2: return OBS_KEY_F2; case XKB_KEY_F20: return OBS_KEY_F20; case XKB_KEY_F21: return OBS_KEY_F21; case XKB_KEY_F22: return OBS_KEY_F22; case XKB_KEY_F23: return OBS_KEY_F23; case XKB_KEY_F24: return OBS_KEY_F24; case XKB_KEY_F25: return OBS_KEY_F25; case XKB_KEY_F26: return OBS_KEY_F26; case XKB_KEY_F27: return OBS_KEY_F27; case XKB_KEY_F28: return OBS_KEY_F28; case XKB_KEY_F29: return OBS_KEY_F29; case XKB_KEY_F3: return OBS_KEY_F3; case XKB_KEY_F30: return OBS_KEY_F30; case XKB_KEY_F31: return OBS_KEY_F31; 
case XKB_KEY_F32: return OBS_KEY_F32; case XKB_KEY_F33: return OBS_KEY_F33; case XKB_KEY_F34: return OBS_KEY_F34; case XKB_KEY_F35: return OBS_KEY_F35; case XKB_KEY_F4: return OBS_KEY_F4; case XKB_KEY_F5: return OBS_KEY_F5; case XKB_KEY_F6: return OBS_KEY_F6; case XKB_KEY_F7: return OBS_KEY_F7; case XKB_KEY_F8: return OBS_KEY_F8; case XKB_KEY_F9: return OBS_KEY_F9; case XKB_KEY_Find: return OBS_KEY_FIND; case XKB_KEY_G: return OBS_KEY_G; case XKB_KEY_g: return OBS_KEY_G; case XKB_KEY_greater: return OBS_KEY_GREATER; case XKB_KEY_guillemotleft: return OBS_KEY_GUILLEMOTLEFT; case XKB_KEY_guillemotright: return OBS_KEY_GUILLEMOTRIGHT; case XKB_KEY_H: return OBS_KEY_H; case XKB_KEY_h: return OBS_KEY_H; case XKB_KEY_Hangul: return OBS_KEY_HANGUL; case XKB_KEY_Hangul_Banja: return OBS_KEY_HANGUL_BANJA; case XKB_KEY_Hangul_End: return OBS_KEY_HANGUL_END; case XKB_KEY_Hangul_Hanja: return OBS_KEY_HANGUL_HANJA; case XKB_KEY_Hangul_Jamo: return OBS_KEY_HANGUL_JAMO; case XKB_KEY_Hangul_Jeonja: return OBS_KEY_HANGUL_JEONJA; case XKB_KEY_Hangul_PostHanja: return OBS_KEY_HANGUL_POSTHANJA; case XKB_KEY_Hangul_PreHanja: return OBS_KEY_HANGUL_PREHANJA; case XKB_KEY_Hangul_Romaja: return OBS_KEY_HANGUL_ROMAJA; case XKB_KEY_Hangul_Special: return OBS_KEY_HANGUL_SPECIAL; case XKB_KEY_Hangul_Start: return OBS_KEY_HANGUL_START; case XKB_KEY_Hankaku: return OBS_KEY_HANKAKU; case XKB_KEY_Help: return OBS_KEY_HELP; case XKB_KEY_Henkan: return OBS_KEY_HENKAN; case XKB_KEY_Hiragana: return OBS_KEY_HIRAGANA; case XKB_KEY_Hiragana_Katakana: return OBS_KEY_HIRAGANA_KATAKANA; case XKB_KEY_Home: return OBS_KEY_HOME; case XKB_KEY_Hyper_L: return OBS_KEY_HYPER_L; case XKB_KEY_Hyper_R: return OBS_KEY_HYPER_R; case XKB_KEY_hyphen: return OBS_KEY_HYPHEN; case XKB_KEY_I: return OBS_KEY_I; case XKB_KEY_i: return OBS_KEY_I; case XKB_KEY_Iacute: return OBS_KEY_IACUTE; case XKB_KEY_iacute: return OBS_KEY_IACUTE; case XKB_KEY_Icircumflex: return OBS_KEY_ICIRCUMFLEX; case XKB_KEY_icircumflex: return 
OBS_KEY_ICIRCUMFLEX; case XKB_KEY_Idiaeresis: return OBS_KEY_IDIAERESIS; case XKB_KEY_idiaeresis: return OBS_KEY_IDIAERESIS; case XKB_KEY_Igrave: return OBS_KEY_IGRAVE; case XKB_KEY_igrave: return OBS_KEY_IGRAVE; case XKB_KEY_Insert: return OBS_KEY_INSERT; case XKB_KEY_J: return OBS_KEY_J; case XKB_KEY_j: return OBS_KEY_J; case XKB_KEY_K: return OBS_KEY_K; case XKB_KEY_k: return OBS_KEY_K; case XKB_KEY_Kana_Lock: return OBS_KEY_KANA_LOCK; case XKB_KEY_Kana_Shift: return OBS_KEY_KANA_SHIFT; case XKB_KEY_Kanji: return OBS_KEY_KANJI; case XKB_KEY_Katakana: return OBS_KEY_KATAKANA; case XKB_KEY_L: return OBS_KEY_L; case XKB_KEY_l: return OBS_KEY_L; case XKB_KEY_Left: return OBS_KEY_LEFT; case XKB_KEY_less: return OBS_KEY_LESS; case XKB_KEY_M: return OBS_KEY_M; case XKB_KEY_m: return OBS_KEY_M; case XKB_KEY_macron: return OBS_KEY_MACRON; case XKB_KEY_masculine: return OBS_KEY_MASCULINE; case XKB_KEY_Massyo: return OBS_KEY_MASSYO; case XKB_KEY_Menu: return OBS_KEY_MENU; case XKB_KEY_minus: return OBS_KEY_MINUS; case XKB_KEY_Mode_switch: return OBS_KEY_MODE_SWITCH; case XKB_KEY_mu: return OBS_KEY_MU; case XKB_KEY_Muhenkan: return OBS_KEY_MUHENKAN; case XKB_KEY_MultipleCandidate: return OBS_KEY_MULTIPLECANDIDATE; case XKB_KEY_multiply: return OBS_KEY_MULTIPLY; case XKB_KEY_Multi_key: return OBS_KEY_MULTI_KEY; case XKB_KEY_N: return OBS_KEY_N; case XKB_KEY_n: return OBS_KEY_N; case XKB_KEY_nobreakspace: return OBS_KEY_NOBREAKSPACE; case XKB_KEY_notsign: return OBS_KEY_NOTSIGN; case XKB_KEY_Ntilde: return OBS_KEY_NTILDE; case XKB_KEY_ntilde: return OBS_KEY_NTILDE; case XKB_KEY_numbersign: return OBS_KEY_NUMBERSIGN; case XKB_KEY_O: return OBS_KEY_O; case XKB_KEY_o: return OBS_KEY_O; case XKB_KEY_Oacute: return OBS_KEY_OACUTE; case XKB_KEY_oacute: return OBS_KEY_OACUTE; case XKB_KEY_Ocircumflex: return OBS_KEY_OCIRCUMFLEX; case XKB_KEY_ocircumflex: return OBS_KEY_OCIRCUMFLEX; case XKB_KEY_Odiaeresis: return OBS_KEY_ODIAERESIS; case XKB_KEY_odiaeresis: return 
OBS_KEY_ODIAERESIS; case XKB_KEY_Ograve: return OBS_KEY_OGRAVE; case XKB_KEY_ograve: return OBS_KEY_OGRAVE; case XKB_KEY_onehalf: return OBS_KEY_ONEHALF; case XKB_KEY_onequarter: return OBS_KEY_ONEQUARTER; case XKB_KEY_onesuperior: return OBS_KEY_ONESUPERIOR; case XKB_KEY_Ooblique: return OBS_KEY_OOBLIQUE; case XKB_KEY_ooblique: return OBS_KEY_OOBLIQUE; case XKB_KEY_ordfeminine: return OBS_KEY_ORDFEMININE; case XKB_KEY_Otilde: return OBS_KEY_OTILDE; case XKB_KEY_otilde: return OBS_KEY_OTILDE; case XKB_KEY_P: return OBS_KEY_P; case XKB_KEY_p: return OBS_KEY_P; case XKB_KEY_paragraph: return OBS_KEY_PARAGRAPH; case XKB_KEY_parenleft: return OBS_KEY_PARENLEFT; case XKB_KEY_parenright: return OBS_KEY_PARENRIGHT; case XKB_KEY_Pause: return OBS_KEY_PAUSE; case XKB_KEY_percent: return OBS_KEY_PERCENT; case XKB_KEY_period: return OBS_KEY_PERIOD; case XKB_KEY_periodcentered: return OBS_KEY_PERIODCENTERED; case XKB_KEY_plus: return OBS_KEY_PLUS; case XKB_KEY_plusminus: return OBS_KEY_PLUSMINUS; case XKB_KEY_PreviousCandidate: return OBS_KEY_PREVIOUSCANDIDATE; case XKB_KEY_Print: return OBS_KEY_PRINT; case XKB_KEY_Q: return OBS_KEY_Q; case XKB_KEY_q: return OBS_KEY_Q; case XKB_KEY_question: return OBS_KEY_QUESTION; case XKB_KEY_questiondown: return OBS_KEY_QUESTIONDOWN; case XKB_KEY_quotedbl: return OBS_KEY_QUOTEDBL; case XKB_KEY_quoteleft: return OBS_KEY_QUOTELEFT; case XKB_KEY_R: return OBS_KEY_R; case XKB_KEY_r: return OBS_KEY_R; case XKB_KEY_Redo: return OBS_KEY_REDO; case XKB_KEY_registered: return OBS_KEY_REGISTERED; case XKB_KEY_Return: return OBS_KEY_RETURN; case XKB_KEY_Right: return OBS_KEY_RIGHT; case XKB_KEY_Romaji: return OBS_KEY_ROMAJI; case XKB_KEY_S: return OBS_KEY_S; case XKB_KEY_s: return OBS_KEY_S; case XKB_KEY_section: return OBS_KEY_SECTION; case XKB_KEY_Select: return OBS_KEY_SELECT; case XKB_KEY_semicolon: return OBS_KEY_SEMICOLON; case XKB_KEY_SingleCandidate: return OBS_KEY_SINGLECANDIDATE; case XKB_KEY_slash: return OBS_KEY_SLASH; case XKB_KEY_space: 
return OBS_KEY_SPACE; case XKB_KEY_ssharp: return OBS_KEY_SSHARP; case XKB_KEY_sterling: return OBS_KEY_STERLING; case XKB_KEY_T: return OBS_KEY_T; case XKB_KEY_t: return OBS_KEY_T; case XKB_KEY_Tab: return OBS_KEY_TAB; case XKB_KEY_Thorn: return OBS_KEY_THORN; case XKB_KEY_thorn: return OBS_KEY_THORN; case XKB_KEY_threequarters: return OBS_KEY_THREEQUARTERS; case XKB_KEY_threesuperior: return OBS_KEY_THREESUPERIOR; case XKB_KEY_Touroku: return OBS_KEY_TOUROKU; case XKB_KEY_twosuperior: return OBS_KEY_TWOSUPERIOR; case XKB_KEY_U: return OBS_KEY_U; case XKB_KEY_u: return OBS_KEY_U; case XKB_KEY_Uacute: return OBS_KEY_UACUTE; case XKB_KEY_uacute: return OBS_KEY_UACUTE; case XKB_KEY_Ucircumflex: return OBS_KEY_UCIRCUMFLEX; case XKB_KEY_ucircumflex: return OBS_KEY_UCIRCUMFLEX; case XKB_KEY_Udiaeresis: return OBS_KEY_UDIAERESIS; case XKB_KEY_udiaeresis: return OBS_KEY_UDIAERESIS; case XKB_KEY_Ugrave: return OBS_KEY_UGRAVE; case XKB_KEY_ugrave: return OBS_KEY_UGRAVE; case XKB_KEY_underscore: return OBS_KEY_UNDERSCORE; case XKB_KEY_Undo: return OBS_KEY_UNDO; case XKB_KEY_Up: return OBS_KEY_UP; case XKB_KEY_V: return OBS_KEY_V; case XKB_KEY_v: return OBS_KEY_V; case XKB_KEY_W: return OBS_KEY_W; case XKB_KEY_w: return OBS_KEY_W; case XKB_KEY_X: return OBS_KEY_X; case XKB_KEY_x: return OBS_KEY_X; case XKB_KEY_Y: return OBS_KEY_Y; case XKB_KEY_y: return OBS_KEY_Y; case XKB_KEY_Yacute: return OBS_KEY_YACUTE; case XKB_KEY_yacute: return OBS_KEY_YACUTE; case XKB_KEY_Ydiaeresis: return OBS_KEY_YDIAERESIS; case XKB_KEY_ydiaeresis: return OBS_KEY_YDIAERESIS; case XKB_KEY_yen: return OBS_KEY_YEN; case XKB_KEY_Z: return OBS_KEY_Z; case XKB_KEY_z: return OBS_KEY_Z; case XKB_KEY_Zenkaku: return OBS_KEY_ZENKAKU; case XKB_KEY_Zenkaku_Hankaku: return OBS_KEY_ZENKAKU_HANKAKU; case XKB_KEY_Page_Up: return OBS_KEY_PAGEUP; case XKB_KEY_Page_Down: return OBS_KEY_PAGEDOWN; case XKB_KEY_KP_Equal: return OBS_KEY_NUMEQUAL; case XKB_KEY_KP_Multiply: return OBS_KEY_NUMASTERISK; case XKB_KEY_KP_Add: 
return OBS_KEY_NUMPLUS; case XKB_KEY_KP_Separator: return OBS_KEY_NUMCOMMA; case XKB_KEY_KP_Subtract: return OBS_KEY_NUMMINUS; case XKB_KEY_KP_Decimal: return OBS_KEY_NUMPERIOD; case XKB_KEY_KP_Divide: return OBS_KEY_NUMSLASH; case XKB_KEY_KP_Enter: return OBS_KEY_ENTER; case XKB_KEY_KP_0: return OBS_KEY_NUM0; case XKB_KEY_KP_1: return OBS_KEY_NUM1; case XKB_KEY_KP_2: return OBS_KEY_NUM2; case XKB_KEY_KP_3: return OBS_KEY_NUM3; case XKB_KEY_KP_4: return OBS_KEY_NUM4; case XKB_KEY_KP_5: return OBS_KEY_NUM5; case XKB_KEY_KP_6: return OBS_KEY_NUM6; case XKB_KEY_KP_7: return OBS_KEY_NUM7; case XKB_KEY_KP_8: return OBS_KEY_NUM8; case XKB_KEY_KP_9: return OBS_KEY_NUM9; case XKB_KEY_XF86AudioPlay: return OBS_KEY_VK_MEDIA_PLAY_PAUSE; case XKB_KEY_XF86AudioStop: return OBS_KEY_VK_MEDIA_STOP; case XKB_KEY_XF86AudioPrev: return OBS_KEY_VK_MEDIA_PREV_TRACK; case XKB_KEY_XF86AudioNext: return OBS_KEY_VK_MEDIA_NEXT_TRACK; case XKB_KEY_XF86AudioMute: return OBS_KEY_VK_VOLUME_MUTE; case XKB_KEY_XF86AudioRaiseVolume: return OBS_KEY_VK_VOLUME_UP; case XKB_KEY_XF86AudioLowerVolume: return OBS_KEY_VK_VOLUME_DOWN; } return OBS_KEY_NONE; } static int obs_nix_wayland_key_to_virtual_key(obs_key_t key) { switch (key) { case OBS_KEY_0: return XKB_KEY_0; case OBS_KEY_1: return XKB_KEY_1; case OBS_KEY_2: return XKB_KEY_2; case OBS_KEY_3: return XKB_KEY_3; case OBS_KEY_4: return XKB_KEY_4; case OBS_KEY_5: return XKB_KEY_5; case OBS_KEY_6: return XKB_KEY_6; case OBS_KEY_7: return XKB_KEY_7; case OBS_KEY_8: return XKB_KEY_8; case OBS_KEY_9: return XKB_KEY_9; case OBS_KEY_A: return XKB_KEY_A; case OBS_KEY_AACUTE: return XKB_KEY_Aacute; case OBS_KEY_ACIRCUMFLEX: return XKB_KEY_Acircumflex; case OBS_KEY_ACUTE: return XKB_KEY_acute; case OBS_KEY_ADIAERESIS: return XKB_KEY_Adiaeresis; case OBS_KEY_AE: return XKB_KEY_AE; case OBS_KEY_AGRAVE: return XKB_KEY_Agrave; case OBS_KEY_AMPERSAND: return XKB_KEY_ampersand; case OBS_KEY_APOSTROPHE: return XKB_KEY_apostrophe; case OBS_KEY_ARING: return 
XKB_KEY_Aring; case OBS_KEY_ASCIICIRCUM: return XKB_KEY_asciicircum; case OBS_KEY_ASCIITILDE: return XKB_KEY_asciitilde; case OBS_KEY_ASTERISK: return XKB_KEY_asterisk; case OBS_KEY_AT: return XKB_KEY_at; case OBS_KEY_ATILDE: return XKB_KEY_Atilde; case OBS_KEY_B: return XKB_KEY_B; case OBS_KEY_BACKSLASH: return XKB_KEY_backslash; case OBS_KEY_BACKSPACE: return XKB_KEY_BackSpace; case OBS_KEY_BACKTAB: return XKB_KEY_BackTab; case OBS_KEY_BAR: return XKB_KEY_bar; case OBS_KEY_BRACELEFT: return XKB_KEY_braceleft; case OBS_KEY_BRACERIGHT: return XKB_KEY_braceright; case OBS_KEY_BRACKETLEFT: return XKB_KEY_bracketleft; case OBS_KEY_BRACKETRIGHT: return XKB_KEY_bracketright; case OBS_KEY_BROKENBAR: return XKB_KEY_brokenbar; case OBS_KEY_C: return XKB_KEY_C; case OBS_KEY_CANCEL: return XKB_KEY_Cancel; case OBS_KEY_CCEDILLA: return XKB_KEY_Ccedilla; case OBS_KEY_CEDILLA: return XKB_KEY_cedilla; case OBS_KEY_CENT: return XKB_KEY_cent; case OBS_KEY_CLEAR: return XKB_KEY_Clear; case OBS_KEY_CODEINPUT: return XKB_KEY_Codeinput; case OBS_KEY_COLON: return XKB_KEY_colon; case OBS_KEY_COMMA: return XKB_KEY_comma; case OBS_KEY_COPYRIGHT: return XKB_KEY_copyright; case OBS_KEY_CURRENCY: return XKB_KEY_currency; case OBS_KEY_D: return XKB_KEY_D; case OBS_KEY_DEAD_ABOVEDOT: return XKB_KEY_dead_abovedot; case OBS_KEY_DEAD_ABOVERING: return XKB_KEY_dead_abovering; case OBS_KEY_DEAD_ACUTE: return XKB_KEY_dead_acute; case OBS_KEY_DEAD_BELOWDOT: return XKB_KEY_dead_belowdot; case OBS_KEY_DEAD_BREVE: return XKB_KEY_dead_breve; case OBS_KEY_DEAD_CARON: return XKB_KEY_dead_caron; case OBS_KEY_DEAD_CEDILLA: return XKB_KEY_dead_cedilla; case OBS_KEY_DEAD_CIRCUMFLEX: return XKB_KEY_dead_circumflex; case OBS_KEY_DEAD_DIAERESIS: return XKB_KEY_dead_diaeresis; case OBS_KEY_DEAD_DOUBLEACUTE: return XKB_KEY_dead_doubleacute; case OBS_KEY_DEAD_GRAVE: return XKB_KEY_dead_grave; case OBS_KEY_DEAD_HOOK: return XKB_KEY_dead_hook; case OBS_KEY_DEAD_HORN: return XKB_KEY_dead_horn; case OBS_KEY_DEAD_IOTA: 
return XKB_KEY_dead_iota; case OBS_KEY_DEAD_MACRON: return XKB_KEY_dead_macron; case OBS_KEY_DEAD_OGONEK: return XKB_KEY_dead_ogonek; case OBS_KEY_DEAD_SEMIVOICED_SOUND: return XKB_KEY_dead_semivoiced_sound; case OBS_KEY_DEAD_TILDE: return XKB_KEY_dead_tilde; case OBS_KEY_DEAD_VOICED_SOUND: return XKB_KEY_dead_voiced_sound; case OBS_KEY_DEGREE: return XKB_KEY_degree; case OBS_KEY_DELETE: return XKB_KEY_Delete; case OBS_KEY_DIAERESIS: return XKB_KEY_diaeresis; case OBS_KEY_DIVISION: return XKB_KEY_division; case OBS_KEY_DOLLAR: return XKB_KEY_dollar; case OBS_KEY_DOWN: return XKB_KEY_Down; case OBS_KEY_E: return XKB_KEY_E; case OBS_KEY_EACUTE: return XKB_KEY_Eacute; case OBS_KEY_ECIRCUMFLEX: return XKB_KEY_Ecircumflex; case OBS_KEY_EDIAERESIS: return XKB_KEY_Ediaeresis; case OBS_KEY_EGRAVE: return XKB_KEY_Egrave; case OBS_KEY_EISU_SHIFT: return XKB_KEY_Eisu_Shift; case OBS_KEY_EISU_TOGGLE: return XKB_KEY_Eisu_toggle; case OBS_KEY_END: return XKB_KEY_End; case OBS_KEY_EQUAL: return XKB_KEY_equal; case OBS_KEY_ESCAPE: return XKB_KEY_Escape; case OBS_KEY_ETH: return XKB_KEY_ETH; case OBS_KEY_EXCLAM: return XKB_KEY_exclam; case OBS_KEY_EXCLAMDOWN: return XKB_KEY_exclamdown; case OBS_KEY_EXECUTE: return XKB_KEY_Execute; case OBS_KEY_F: return XKB_KEY_F; case OBS_KEY_F1: return XKB_KEY_F1; case OBS_KEY_F10: return XKB_KEY_F10; case OBS_KEY_F11: return XKB_KEY_F11; case OBS_KEY_F12: return XKB_KEY_F12; case OBS_KEY_F13: return XKB_KEY_F13; case OBS_KEY_F14: return XKB_KEY_F14; case OBS_KEY_F15: return XKB_KEY_F15; case OBS_KEY_F16: return XKB_KEY_F16; case OBS_KEY_F17: return XKB_KEY_F17; case OBS_KEY_F18: return XKB_KEY_F18; case OBS_KEY_F19: return XKB_KEY_F19; case OBS_KEY_F2: return XKB_KEY_F2; case OBS_KEY_F20: return XKB_KEY_F20; case OBS_KEY_F21: return XKB_KEY_F21; case OBS_KEY_F22: return XKB_KEY_F22; case OBS_KEY_F23: return XKB_KEY_F23; case OBS_KEY_F24: return XKB_KEY_F24; case OBS_KEY_F25: return XKB_KEY_F25; case OBS_KEY_F26: return XKB_KEY_F26; case 
OBS_KEY_F27: return XKB_KEY_F27; case OBS_KEY_F28: return XKB_KEY_F28; case OBS_KEY_F29: return XKB_KEY_F29; case OBS_KEY_F3: return XKB_KEY_F3; case OBS_KEY_F30: return XKB_KEY_F30; case OBS_KEY_F31: return XKB_KEY_F31; case OBS_KEY_F32: return XKB_KEY_F32; case OBS_KEY_F33: return XKB_KEY_F33; case OBS_KEY_F34: return XKB_KEY_F34; case OBS_KEY_F35: return XKB_KEY_F35; case OBS_KEY_F4: return XKB_KEY_F4; case OBS_KEY_F5: return XKB_KEY_F5; case OBS_KEY_F6: return XKB_KEY_F6; case OBS_KEY_F7: return XKB_KEY_F7; case OBS_KEY_F8: return XKB_KEY_F8; case OBS_KEY_F9: return XKB_KEY_F9; case OBS_KEY_FIND: return XKB_KEY_Find; case OBS_KEY_G: return XKB_KEY_G; case OBS_KEY_GREATER: return XKB_KEY_greater; case OBS_KEY_GUILLEMOTLEFT: return XKB_KEY_guillemotleft; case OBS_KEY_GUILLEMOTRIGHT: return XKB_KEY_guillemotright; case OBS_KEY_H: return XKB_KEY_H; case OBS_KEY_HANGUL: return XKB_KEY_Hangul; case OBS_KEY_HANGUL_BANJA: return XKB_KEY_Hangul_Banja; case OBS_KEY_HANGUL_END: return XKB_KEY_Hangul_End; case OBS_KEY_HANGUL_HANJA: return XKB_KEY_Hangul_Hanja; case OBS_KEY_HANGUL_JAMO: return XKB_KEY_Hangul_Jamo; case OBS_KEY_HANGUL_JEONJA: return XKB_KEY_Hangul_Jeonja; case OBS_KEY_HANGUL_POSTHANJA: return XKB_KEY_Hangul_PostHanja; case OBS_KEY_HANGUL_PREHANJA: return XKB_KEY_Hangul_PreHanja; case OBS_KEY_HANGUL_ROMAJA: return XKB_KEY_Hangul_Romaja; case OBS_KEY_HANGUL_SPECIAL: return XKB_KEY_Hangul_Special; case OBS_KEY_HANGUL_START: return XKB_KEY_Hangul_Start; case OBS_KEY_HANKAKU: return XKB_KEY_Hankaku; case OBS_KEY_HELP: return XKB_KEY_Help; case OBS_KEY_HENKAN: return XKB_KEY_Henkan; case OBS_KEY_HIRAGANA: return XKB_KEY_Hiragana; case OBS_KEY_HIRAGANA_KATAKANA: return XKB_KEY_Hiragana_Katakana; case OBS_KEY_HOME: return XKB_KEY_Home; case OBS_KEY_HYPER_L: return XKB_KEY_Hyper_L; case OBS_KEY_HYPER_R: return XKB_KEY_Hyper_R; case OBS_KEY_HYPHEN: return XKB_KEY_hyphen; case OBS_KEY_I: return XKB_KEY_I; case OBS_KEY_IACUTE: return XKB_KEY_Iacute; case 
OBS_KEY_ICIRCUMFLEX: return XKB_KEY_Icircumflex; case OBS_KEY_IDIAERESIS: return XKB_KEY_Idiaeresis; case OBS_KEY_IGRAVE: return XKB_KEY_Igrave; case OBS_KEY_INSERT: return XKB_KEY_Insert; case OBS_KEY_J: return XKB_KEY_J; case OBS_KEY_K: return XKB_KEY_K; case OBS_KEY_KANA_LOCK: return XKB_KEY_Kana_Lock; case OBS_KEY_KANA_SHIFT: return XKB_KEY_Kana_Shift; case OBS_KEY_KANJI: return XKB_KEY_Kanji; case OBS_KEY_KATAKANA: return XKB_KEY_Katakana; case OBS_KEY_L: return XKB_KEY_L; case OBS_KEY_LEFT: return XKB_KEY_Left; case OBS_KEY_LESS: return XKB_KEY_less; case OBS_KEY_M: return XKB_KEY_M; case OBS_KEY_MACRON: return XKB_KEY_macron; case OBS_KEY_MASCULINE: return XKB_KEY_masculine; case OBS_KEY_MASSYO: return XKB_KEY_Massyo; case OBS_KEY_MENU: return XKB_KEY_Menu; case OBS_KEY_MINUS: return XKB_KEY_minus; case OBS_KEY_MODE_SWITCH: return XKB_KEY_Mode_switch; case OBS_KEY_MU: return XKB_KEY_mu; case OBS_KEY_MUHENKAN: return XKB_KEY_Muhenkan; case OBS_KEY_MULTI_KEY: return XKB_KEY_Multi_key; case OBS_KEY_MULTIPLECANDIDATE: return XKB_KEY_MultipleCandidate; case OBS_KEY_MULTIPLY: return XKB_KEY_multiply; case OBS_KEY_N: return XKB_KEY_N; case OBS_KEY_NOBREAKSPACE: return XKB_KEY_nobreakspace; case OBS_KEY_NOTSIGN: return XKB_KEY_notsign; case OBS_KEY_NTILDE: return XKB_KEY_Ntilde; case OBS_KEY_NUMBERSIGN: return XKB_KEY_numbersign; case OBS_KEY_O: return XKB_KEY_O; case OBS_KEY_OACUTE: return XKB_KEY_Oacute; case OBS_KEY_OCIRCUMFLEX: return XKB_KEY_Ocircumflex; case OBS_KEY_ODIAERESIS: return XKB_KEY_Odiaeresis; case OBS_KEY_OGRAVE: return XKB_KEY_Ograve; case OBS_KEY_ONEHALF: return XKB_KEY_onehalf; case OBS_KEY_ONEQUARTER: return XKB_KEY_onequarter; case OBS_KEY_ONESUPERIOR: return XKB_KEY_onesuperior; case OBS_KEY_OOBLIQUE: return XKB_KEY_Ooblique; case OBS_KEY_ORDFEMININE: return XKB_KEY_ordfeminine; case OBS_KEY_OTILDE: return XKB_KEY_Otilde; case OBS_KEY_P: return XKB_KEY_P; case OBS_KEY_PARAGRAPH: return XKB_KEY_paragraph; case OBS_KEY_PARENLEFT: return 
XKB_KEY_parenleft; case OBS_KEY_PARENRIGHT: return XKB_KEY_parenright; case OBS_KEY_PAUSE: return XKB_KEY_Pause; case OBS_KEY_PERCENT: return XKB_KEY_percent; case OBS_KEY_PERIOD: return XKB_KEY_period; case OBS_KEY_PERIODCENTERED: return XKB_KEY_periodcentered; case OBS_KEY_PLUS: return XKB_KEY_plus; case OBS_KEY_PLUSMINUS: return XKB_KEY_plusminus; case OBS_KEY_PREVIOUSCANDIDATE: return XKB_KEY_PreviousCandidate; case OBS_KEY_PRINT: return XKB_KEY_Print; case OBS_KEY_Q: return XKB_KEY_Q; case OBS_KEY_QUESTION: return XKB_KEY_question; case OBS_KEY_QUESTIONDOWN: return XKB_KEY_questiondown; case OBS_KEY_QUOTEDBL: return XKB_KEY_quotedbl; case OBS_KEY_QUOTELEFT: return XKB_KEY_quoteleft; case OBS_KEY_R: return XKB_KEY_R; case OBS_KEY_REDO: return XKB_KEY_Redo; case OBS_KEY_REGISTERED: return XKB_KEY_registered; case OBS_KEY_RETURN: return XKB_KEY_Return; case OBS_KEY_RIGHT: return XKB_KEY_Right; case OBS_KEY_ROMAJI: return XKB_KEY_Romaji; case OBS_KEY_S: return XKB_KEY_S; case OBS_KEY_SECTION: return XKB_KEY_section; case OBS_KEY_SELECT: return XKB_KEY_Select; case OBS_KEY_SEMICOLON: return XKB_KEY_semicolon; case OBS_KEY_SINGLECANDIDATE: return XKB_KEY_SingleCandidate; case OBS_KEY_SLASH: return XKB_KEY_slash; case OBS_KEY_SPACE: return XKB_KEY_space; case OBS_KEY_SSHARP: return XKB_KEY_ssharp; case OBS_KEY_STERLING: return XKB_KEY_sterling; case OBS_KEY_T: return XKB_KEY_T; case OBS_KEY_TAB: return XKB_KEY_Tab; case OBS_KEY_THORN: return XKB_KEY_THORN; case OBS_KEY_THREEQUARTERS: return XKB_KEY_threequarters; case OBS_KEY_THREESUPERIOR: return XKB_KEY_threesuperior; case OBS_KEY_TOUROKU: return XKB_KEY_Touroku; case OBS_KEY_TWOSUPERIOR: return XKB_KEY_twosuperior; case OBS_KEY_U: return XKB_KEY_U; case OBS_KEY_UACUTE: return XKB_KEY_Uacute; case OBS_KEY_UCIRCUMFLEX: return XKB_KEY_Ucircumflex; case OBS_KEY_UDIAERESIS: return XKB_KEY_Udiaeresis; case OBS_KEY_UGRAVE: return XKB_KEY_Ugrave; case OBS_KEY_UNDERSCORE: return XKB_KEY_underscore; case OBS_KEY_UNDO: 
return XKB_KEY_Undo; case OBS_KEY_UP: return XKB_KEY_Up; case OBS_KEY_V: return XKB_KEY_V; case OBS_KEY_W: return XKB_KEY_W; case OBS_KEY_X: return XKB_KEY_X; case OBS_KEY_Y: return XKB_KEY_Y; case OBS_KEY_YACUTE: return XKB_KEY_Yacute; case OBS_KEY_YDIAERESIS: return XKB_KEY_Ydiaeresis; case OBS_KEY_YEN: return XKB_KEY_yen; case OBS_KEY_Z: return XKB_KEY_Z; case OBS_KEY_ZENKAKU: return XKB_KEY_Zenkaku; case OBS_KEY_ZENKAKU_HANKAKU: return XKB_KEY_Zenkaku_Hankaku; case OBS_KEY_PAGEUP: return XKB_KEY_Page_Up; case OBS_KEY_PAGEDOWN: return XKB_KEY_Page_Down; case OBS_KEY_NUMEQUAL: return XKB_KEY_KP_Equal; case OBS_KEY_NUMASTERISK: return XKB_KEY_KP_Multiply; case OBS_KEY_NUMPLUS: return XKB_KEY_KP_Add; case OBS_KEY_NUMCOMMA: return XKB_KEY_KP_Separator; case OBS_KEY_NUMMINUS: return XKB_KEY_KP_Subtract; case OBS_KEY_NUMPERIOD: return XKB_KEY_KP_Decimal; case OBS_KEY_NUMSLASH: return XKB_KEY_KP_Divide; case OBS_KEY_ENTER: return XKB_KEY_KP_Enter; case OBS_KEY_NUM0: return XKB_KEY_KP_0; case OBS_KEY_NUM1: return XKB_KEY_KP_1; case OBS_KEY_NUM2: return XKB_KEY_KP_2; case OBS_KEY_NUM3: return XKB_KEY_KP_3; case OBS_KEY_NUM4: return XKB_KEY_KP_4; case OBS_KEY_NUM5: return XKB_KEY_KP_5; case OBS_KEY_NUM6: return XKB_KEY_KP_6; case OBS_KEY_NUM7: return XKB_KEY_KP_7; case OBS_KEY_NUM8: return XKB_KEY_KP_8; case OBS_KEY_NUM9: return XKB_KEY_KP_9; case OBS_KEY_VK_MEDIA_PLAY_PAUSE: return XKB_KEY_XF86AudioPlay; case OBS_KEY_VK_MEDIA_STOP: return XKB_KEY_XF86AudioStop; case OBS_KEY_VK_MEDIA_PREV_TRACK: return XKB_KEY_XF86AudioPrev; case OBS_KEY_VK_MEDIA_NEXT_TRACK: return XKB_KEY_XF86AudioNext; case OBS_KEY_VK_VOLUME_MUTE: return XKB_KEY_XF86AudioMute; case OBS_KEY_VK_VOLUME_DOWN: return XKB_KEY_XF86AudioLowerVolume; case OBS_KEY_VK_VOLUME_UP: return XKB_KEY_XF86AudioRaiseVolume; default: break; } return 0; } static const struct obs_nix_hotkeys_vtable wayland_hotkeys_vtable = { .init = obs_nix_wayland_hotkeys_platform_init, .free = obs_nix_wayland_hotkeys_platform_free, 
.is_pressed = obs_nix_wayland_hotkeys_platform_is_pressed, .key_to_str = obs_nix_wayland_key_to_str, .key_from_virtual_key = obs_nix_wayland_key_from_virtual_key, .key_to_virtual_key = obs_nix_wayland_key_to_virtual_key, }; const struct obs_nix_hotkeys_vtable *obs_nix_wayland_get_hotkeys_vtable(void) { return &wayland_hotkeys_vtable; } obs-studio-32.1.0-sources/libobs/obs-nix-platform.c000644 001751 001751 00000002653 15153330235 023061 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2019 by Jason Francis This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. 
******************************************************************************/

#include "obs-nix-platform.h"

#include <assert.h>

static enum obs_nix_platform_type obs_nix_platform = OBS_NIX_PLATFORM_X11_EGL;
static void *obs_nix_platform_display = NULL;

void obs_set_nix_platform(enum obs_nix_platform_type platform)
{
	assert(platform != OBS_NIX_PLATFORM_INVALID);
	obs_nix_platform = platform;
}

enum obs_nix_platform_type obs_get_nix_platform(void)
{
	return obs_nix_platform;
}

void obs_set_nix_platform_display(void *display)
{
	obs_nix_platform_display = display;
}

void *obs_get_nix_platform_display(void)
{
	return obs_nix_platform_display;
}

obs-studio-32.1.0-sources/libobs/obs-missing-files.c

/******************************************************************************
    Copyright (C) 2019 by Dillon Pentz

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ #include "util/threading.h" #include "util/dstr.h" #include "obs-missing-files.h" #include "obs.h" struct obs_missing_file { volatile long ref; char *file_path; obs_missing_file_cb callback; int src_type; void *src; char *src_name; void *data; }; struct obs_missing_files { DARRAY(struct obs_missing_file *) files; }; obs_missing_files_t *obs_missing_files_create() { struct obs_missing_files *files = bzalloc(sizeof(struct obs_missing_files)); return files; } void obs_missing_files_destroy(obs_missing_files_t *files) { for (size_t i = 0; i < files->files.num; i++) { obs_missing_file_release(files->files.array[i]); } da_free(files->files); bfree(files); } void obs_missing_files_add_file(obs_missing_files_t *files, obs_missing_file_t *file) { da_insert(files->files, files->files.num, &file); } size_t obs_missing_files_count(obs_missing_files_t *files) { return files->files.num; } obs_missing_file_t *obs_missing_files_get_file(obs_missing_files_t *files, int idx) { return files->files.array[idx]; } void obs_missing_files_append(obs_missing_files_t *dst, obs_missing_files_t *src) { for (size_t i = 0; i < src->files.num; i++) { obs_missing_file_t *file = src->files.array[i]; obs_missing_files_add_file(dst, file); os_atomic_inc_long(&file->ref); } } obs_missing_file_t *obs_missing_file_create(const char *path, obs_missing_file_cb callback, int src_type, void *src, void *data) { struct obs_missing_file *file = bzalloc(sizeof(struct obs_missing_file)); file->file_path = bstrdup(path); file->callback = callback; file->src_type = src_type; file->src = src; file->data = data; file->ref = 1; switch (src_type) { case OBS_MISSING_FILE_SOURCE: file->src_name = bstrdup(obs_source_get_name(src)); break; case OBS_MISSING_FILE_SCRIPT: break; } return file; } void obs_missing_file_release(obs_missing_file_t *file) { if (!file) return; if (os_atomic_dec_long(&file->ref) == 0) obs_missing_file_destroy(file); } 
void obs_missing_file_destroy(obs_missing_file_t *file)
{
	switch (file->src_type) {
	case OBS_MISSING_FILE_SOURCE:
		bfree(file->src_name);
		break;
	case OBS_MISSING_FILE_SCRIPT:
		break;
	}

	bfree(file->file_path);
	bfree(file);
}

void obs_missing_file_issue_callback(obs_missing_file_t *file, const char *new_path)
{
	switch (file->src_type) {
	case OBS_MISSING_FILE_SOURCE:
		obs_source_replace_missing_file(file->callback, (obs_source_t *)file->src, new_path, file->data);
		break;
	case OBS_MISSING_FILE_SCRIPT:
		break;
	}
}

const char *obs_missing_file_get_path(obs_missing_file_t *file)
{
	return file->file_path;
}

const char *obs_missing_file_get_source_name(obs_missing_file_t *file)
{
	return file->src_name;
}

obs-studio-32.1.0-sources/libobs/obs-source-transition.c

/******************************************************************************
    Copyright (C) 2023 by Lain Bailey

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.
******************************************************************************/ #include "obs-internal.h" #include "util/util_uint64.h" #include "graphics/math-extra.h" #define lock_transition(transition) pthread_mutex_lock(&transition->transition_mutex); #define unlock_transition(transition) pthread_mutex_unlock(&transition->transition_mutex); #define trylock_textures(transition) pthread_mutex_trylock(&transition->transition_tex_mutex) #define lock_textures(transition) pthread_mutex_lock(&transition->transition_tex_mutex) #define unlock_textures(transition) pthread_mutex_unlock(&transition->transition_tex_mutex) static inline bool transition_valid(const obs_source_t *transition, const char *func) { if (!obs_ptr_valid(transition, func)) return false; else if (transition->info.type != OBS_SOURCE_TYPE_TRANSITION) return false; return true; } bool obs_transition_init(obs_source_t *transition) { pthread_mutex_init_value(&transition->transition_mutex); pthread_mutex_init_value(&transition->transition_tex_mutex); if (pthread_mutex_init(&transition->transition_mutex, NULL) != 0) return false; if (pthread_mutex_init(&transition->transition_tex_mutex, NULL) != 0) return false; transition->transition_alignment = OBS_ALIGN_LEFT | OBS_ALIGN_TOP; transition->transition_texrender[0] = gs_texrender_create(GS_RGBA, GS_ZS_NONE); transition->transition_texrender[1] = gs_texrender_create(GS_RGBA, GS_ZS_NONE); transition->transition_source_active[0] = true; return transition->transition_texrender[0] != NULL && transition->transition_texrender[1] != NULL; } void obs_transition_free(obs_source_t *transition) { pthread_mutex_destroy(&transition->transition_mutex); pthread_mutex_destroy(&transition->transition_tex_mutex); gs_enter_context(obs->video.graphics); gs_texrender_destroy(transition->transition_texrender[0]); gs_texrender_destroy(transition->transition_texrender[1]); gs_leave_context(); } void obs_transition_clear(obs_source_t *transition) { obs_source_t *s[2]; bool active[2]; if 
(!transition_valid(transition, "obs_transition_clear")) return; lock_transition(transition); for (size_t i = 0; i < 2; i++) { s[i] = transition->transition_sources[i]; active[i] = transition->transition_source_active[i]; transition->transition_sources[i] = NULL; transition->transition_source_active[i] = false; } transition->transitioning_video = false; transition->transitioning_audio = false; unlock_transition(transition); for (size_t i = 0; i < 2; i++) { if (s[i] && active[i]) obs_source_remove_active_child(transition, s[i]); obs_source_release(s[i]); } } void add_alignment(struct vec2 *v, uint32_t align, int cx, int cy); static inline uint32_t get_cx(obs_source_t *tr) { return tr->transition_cx ? tr->transition_cx : tr->transition_actual_cx; } static inline uint32_t get_cy(obs_source_t *tr) { return tr->transition_cy ? tr->transition_cy : tr->transition_actual_cy; } static void recalculate_transition_matrix(obs_source_t *tr, size_t idx) { obs_source_t *child; struct matrix4 mat; struct vec2 pos; struct vec2 scale; float tr_cx = (float)get_cx(tr); float tr_cy = (float)get_cy(tr); float source_cx; float source_cy; float tr_aspect = tr_cx / tr_cy; float source_aspect; enum obs_transition_scale_type scale_type = tr->transition_scale_type; lock_transition(tr); child = tr->transition_sources[idx]; if (!child) { unlock_transition(tr); return; } source_cx = (float)obs_source_get_width(child); source_cy = (float)obs_source_get_height(child); unlock_transition(tr); if (source_cx == 0.0f || source_cy == 0.0f) return; source_aspect = source_cx / source_cy; if (scale_type == OBS_TRANSITION_SCALE_MAX_ONLY) { if (source_cx > tr_cx || source_cy > tr_cy) { scale_type = OBS_TRANSITION_SCALE_ASPECT; } else { scale.x = 1.0f; scale.y = 1.0f; } } if (scale_type == OBS_TRANSITION_SCALE_ASPECT) { bool use_width = tr_aspect < source_aspect; scale.x = scale.y = use_width ? 
tr_cx / source_cx : tr_cy / source_cy; } else if (scale_type == OBS_TRANSITION_SCALE_STRETCH) { scale.x = tr_cx / source_cx; scale.y = tr_cy / source_cy; } source_cx *= scale.x; source_cy *= scale.y; vec2_zero(&pos); add_alignment(&pos, tr->transition_alignment, (int)(tr_cx - source_cx), (int)(tr_cy - source_cy)); matrix4_identity(&mat); matrix4_scale3f(&mat, &mat, scale.x, scale.y, 1.0f); matrix4_translate3f(&mat, &mat, pos.x, pos.y, 0.0f); matrix4_copy(&tr->transition_matrices[idx], &mat); } static inline void recalculate_transition_matrices(obs_source_t *transition) { recalculate_transition_matrix(transition, 0); recalculate_transition_matrix(transition, 1); } static void recalculate_transition_size(obs_source_t *transition) { uint32_t cx = 0, cy = 0; obs_source_t *child; lock_transition(transition); for (size_t i = 0; i < 2; i++) { child = transition->transition_sources[i]; if (child) { uint32_t new_cx = obs_source_get_width(child); uint32_t new_cy = obs_source_get_height(child); if (new_cx > cx) cx = new_cx; if (new_cy > cy) cy = new_cy; } } unlock_transition(transition); transition->transition_actual_cx = cx; transition->transition_actual_cy = cy; } void obs_transition_tick(obs_source_t *transition, float t) { recalculate_transition_size(transition); recalculate_transition_matrices(transition); if (transition->transition_mode == OBS_TRANSITION_MODE_MANUAL) { if (transition->transition_manual_torque == 0.0f) { transition->transition_manual_val = transition->transition_manual_target; } else { transition->transition_manual_val = calc_torquef(transition->transition_manual_val, transition->transition_manual_target, transition->transition_manual_torque, transition->transition_manual_clamp, t); } } if (trylock_textures(transition) == 0) { gs_texrender_reset(transition->transition_texrender[0]); gs_texrender_reset(transition->transition_texrender[1]); unlock_textures(transition); } } static void set_source(obs_source_t *transition, enum obs_transition_target target, 
obs_source_t *new_child, bool (*callback)(obs_source_t *t, size_t idx, obs_source_t *c)) { size_t idx = (size_t)target; obs_source_t *old_child; bool add_success = true; bool already_active; if (new_child) new_child = obs_source_get_ref(new_child); lock_transition(transition); old_child = transition->transition_sources[idx]; if (new_child == old_child) { unlock_transition(transition); obs_source_release(new_child); return; } already_active = transition->transition_source_active[idx]; if (already_active) { if (new_child) add_success = obs_source_add_active_child(transition, new_child); if (old_child && add_success) obs_source_remove_active_child(transition, old_child); } if (callback && add_success) add_success = callback(transition, idx, new_child); transition->transition_sources[idx] = add_success ? new_child : NULL; unlock_transition(transition); if (add_success) { if (transition->transition_cx == 0 || transition->transition_cy == 0) { recalculate_transition_size(transition); recalculate_transition_matrices(transition); } } else { obs_source_release(new_child); } obs_source_release(old_child); } obs_source_t *obs_transition_get_source(obs_source_t *transition, enum obs_transition_target target) { size_t idx = (size_t)target; obs_source_t *ret; if (!transition_valid(transition, "obs_transition_get_source")) return NULL; lock_transition(transition); ret = transition->transition_sources[idx]; ret = obs_source_get_ref(ret); unlock_transition(transition); return ret; } obs_source_t *obs_transition_get_active_source(obs_source_t *transition) { obs_source_t *ret; if (!transition_valid(transition, "obs_transition_get_source")) return NULL; lock_transition(transition); if (transition->transitioning_audio || transition->transitioning_video) ret = transition->transition_sources[1]; else ret = transition->transition_sources[0]; ret = obs_source_get_ref(ret); unlock_transition(transition); return ret; } static bool activate_transition(obs_source_t *transition, size_t idx, 
obs_source_t *child) { if (!transition->transition_source_active[idx]) { if (!obs_source_add_active_child(transition, child)) return false; transition->transition_source_active[idx] = true; } transition->transitioning_video = true; transition->transitioning_audio = true; return true; } static inline bool transition_active(obs_source_t *transition) { return transition->transitioning_audio || transition->transitioning_video; } bool obs_transition_is_active(obs_source_t *transition) { return transition_active(transition); } bool obs_transition_start(obs_source_t *transition, enum obs_transition_mode mode, uint32_t duration_ms, obs_source_t *dest) { bool active; bool same_as_source; bool same_as_dest; bool same_mode; if (!transition_valid(transition, "obs_transition_start")) return false; if (transition_active(transition)) { obs_transition_set(transition, transition->transition_sources[1]); } lock_transition(transition); same_as_source = dest == transition->transition_sources[0]; same_as_dest = dest == transition->transition_sources[1]; same_mode = mode == transition->transition_mode; active = transition_active(transition); unlock_transition(transition); if (same_as_source && !active) return false; if (active && mode == OBS_TRANSITION_MODE_MANUAL && same_mode && same_as_dest) return true; lock_transition(transition); transition->transition_mode = mode; transition->transition_manual_val = 0.0f; transition->transition_manual_target = 0.0f; unlock_transition(transition); if (transition->info.transition_start) transition->info.transition_start(transition->context.data); if (transition->transition_use_fixed_duration) duration_ms = transition->transition_fixed_duration; if (!active || (!same_as_dest && !same_as_source)) { transition->transition_start_time = os_gettime_ns(); transition->transition_duration = (uint64_t)duration_ms * 1000000ULL; } set_source(transition, OBS_TRANSITION_SOURCE_B, dest, activate_transition); if (dest == NULL && same_as_dest && !same_as_source) { 
transition->transitioning_video = true; transition->transitioning_audio = true; } obs_source_dosignal(transition, "source_transition_start", "transition_start"); recalculate_transition_size(transition); recalculate_transition_matrices(transition); return true; } void obs_transition_set_manual_torque(obs_source_t *transition, float torque, float clamp) { lock_transition(transition); transition->transition_manual_torque = torque; transition->transition_manual_clamp = clamp; unlock_transition(transition); } void obs_transition_set_manual_time(obs_source_t *transition, float t) { enum obs_transition_mode mode; lock_transition(transition); transition->transition_manual_target = t; mode = transition->transition_mode; unlock_transition(transition); if (mode == OBS_TRANSITION_MODE_MANUAL && t == 0.0f) { obs_transition_set(transition, transition->transition_sources[0]); } } void obs_transition_set(obs_source_t *transition, obs_source_t *source) { obs_source_t *s[2]; bool active[2]; if (!transition_valid(transition, "obs_transition_set")) return; source = obs_source_get_ref(source); if (transition_active(transition)) { obs_source_dosignal(transition, "source_transition_stop", "transition_stop"); } lock_transition(transition); for (size_t i = 0; i < 2; i++) { s[i] = transition->transition_sources[i]; active[i] = transition->transition_source_active[i]; transition->transition_sources[i] = NULL; transition->transition_source_active[i] = false; } transition->transition_source_active[0] = true; transition->transition_sources[0] = source; transition->transitioning_video = false; transition->transitioning_audio = false; transition->transition_manual_val = 0.0f; transition->transition_manual_target = 0.0f; unlock_transition(transition); for (size_t i = 0; i < 2; i++) { if (s[i] && active[i]) obs_source_remove_active_child(transition, s[i]); obs_source_release(s[i]); } if (source) obs_source_add_active_child(transition, source); } static float calc_time(obs_source_t *transition, 
uint64_t ts) { if (transition->transition_mode == OBS_TRANSITION_MODE_MANUAL) return transition->transition_manual_val; uint64_t end; if (ts <= transition->transition_start_time) return 0.0f; end = transition->transition_duration; ts -= transition->transition_start_time; if (ts >= end || end == 0) return 1.0f; return (float)((long double)ts / (long double)end); } static inline float get_video_time(obs_source_t *transition) { uint64_t ts = obs->video.video_time; return calc_time(transition, ts); } float obs_transition_get_time(obs_source_t *transition) { return get_video_time(transition); } static inline gs_texture_t *get_texture(obs_source_t *transition, enum obs_transition_target target) { size_t idx = (size_t)target; return gs_texrender_get_texture(transition->transition_texrender[idx]); } void obs_transition_set_scale_type(obs_source_t *transition, enum obs_transition_scale_type type) { if (!transition_valid(transition, "obs_transition_set_scale_type")) return; transition->transition_scale_type = type; } enum obs_transition_scale_type obs_transition_get_scale_type(const obs_source_t *transition) { return transition_valid(transition, "obs_transition_get_scale_type") ? transition->transition_scale_type : OBS_TRANSITION_SCALE_MAX_ONLY; } void obs_transition_set_alignment(obs_source_t *transition, uint32_t alignment) { if (!transition_valid(transition, "obs_transition_set_alignment")) return; transition->transition_alignment = alignment; } uint32_t obs_transition_get_alignment(const obs_source_t *transition) { return transition_valid(transition, "obs_transition_get_alignment") ? 
transition->transition_alignment : 0; } void obs_transition_set_size(obs_source_t *transition, uint32_t cx, uint32_t cy) { if (!transition_valid(transition, "obs_transition_set_size")) return; transition->transition_cx = cx; transition->transition_cy = cy; } void obs_transition_get_size(const obs_source_t *transition, uint32_t *cx, uint32_t *cy) { if (!transition_valid(transition, "obs_transition_set_size")) { *cx = 0; *cy = 0; return; } *cx = transition->transition_cx; *cy = transition->transition_cy; } void obs_transition_save(obs_source_t *tr, obs_data_t *data) { obs_source_t *child; lock_transition(tr); child = transition_active(tr) ? tr->transition_sources[1] : tr->transition_sources[0]; obs_data_set_string(data, "transition_source_a", child ? child->context.name : ""); obs_data_set_int(data, "transition_alignment", tr->transition_alignment); obs_data_set_int(data, "transition_mode", (int64_t)tr->transition_mode); obs_data_set_int(data, "transition_scale_type", (int64_t)tr->transition_scale_type); obs_data_set_int(data, "transition_cx", tr->transition_cx); obs_data_set_int(data, "transition_cy", tr->transition_cy); unlock_transition(tr); } void obs_transition_load(obs_source_t *tr, obs_data_t *data) { const char *name = obs_data_get_string(data, "transition_source_a"); int64_t alignment = obs_data_get_int(data, "transition_alignment"); int64_t mode = obs_data_get_int(data, "transition_mode"); int64_t scale_type = obs_data_get_int(data, "transition_scale_type"); int64_t cx = obs_data_get_int(data, "transition_cx"); int64_t cy = obs_data_get_int(data, "transition_cy"); obs_source_t *source = NULL; if (name) { source = obs_get_source_by_name(name); if (source) { if (!obs_source_add_active_child(tr, source)) { blog(LOG_WARNING, "Cannot set transition '%s' " "to source '%s' due to " "infinite recursion", tr->context.name, name); obs_source_release(source); source = NULL; } } else { blog(LOG_WARNING, "Failed to find source '%s' for " "transition '%s'", name, 
tr->context.name); } } lock_transition(tr); tr->transition_sources[0] = source; tr->transition_source_active[0] = true; tr->transition_alignment = (uint32_t)alignment; tr->transition_mode = (enum obs_transition_mode)mode; tr->transition_scale_type = (enum obs_transition_scale_type)scale_type; tr->transition_cx = (uint32_t)cx; tr->transition_cy = (uint32_t)cy; unlock_transition(tr); recalculate_transition_size(tr); recalculate_transition_matrices(tr); } struct transition_state { obs_source_t *s[2]; bool transitioning_video; bool transitioning_audio; }; static inline void copy_transition_state(obs_source_t *transition, struct transition_state *state) { state->s[0] = obs_source_get_ref(transition->transition_sources[0]); state->s[1] = obs_source_get_ref(transition->transition_sources[1]); state->transitioning_video = transition->transitioning_video; state->transitioning_audio = transition->transitioning_audio; } void obs_transition_enum_sources(obs_source_t *transition, obs_source_enum_proc_t cb, void *param) { lock_transition(transition); for (size_t i = 0; i < 2; i++) { if (transition->transition_sources[i]) cb(transition, transition->transition_sources[i], param); } unlock_transition(transition); } static inline void render_child(obs_source_t *transition, obs_source_t *child, size_t idx, enum gs_color_space space) { uint32_t cx = get_cx(transition); uint32_t cy = get_cy(transition); struct vec4 blank; if (!child) return; enum gs_color_format format = gs_get_format_from_space(space); if (gs_texrender_get_format(transition->transition_texrender[idx]) != format) { gs_texrender_destroy(transition->transition_texrender[idx]); transition->transition_texrender[idx] = gs_texrender_create(format, GS_ZS_NONE); } if (gs_texrender_begin_with_color_space(transition->transition_texrender[idx], cx, cy, space)) { vec4_zero(&blank); gs_clear(GS_CLEAR_COLOR, &blank, 0.0f, 0); gs_ortho(0.0f, (float)cx, 0.0f, (float)cy, -100.0f, 100.0f); gs_matrix_push(); 
gs_matrix_mul(&transition->transition_matrices[idx]); obs_source_video_render(child); gs_matrix_pop(); gs_texrender_end(transition->transition_texrender[idx]); } } static void obs_transition_stop(obs_source_t *transition) { obs_source_t *old_child = transition->transition_sources[0]; if (old_child && transition->transition_source_active[0]) obs_source_remove_active_child(transition, old_child); obs_source_release(old_child); transition->transition_source_active[0] = true; transition->transition_source_active[1] = false; transition->transition_sources[0] = transition->transition_sources[1]; transition->transition_sources[1] = NULL; } static inline void handle_stop(obs_source_t *transition) { if (transition->info.transition_stop) transition->info.transition_stop(transition->context.data); obs_source_dosignal(transition, "source_transition_stop", "transition_stop"); } void obs_transition_force_stop(obs_source_t *transition) { handle_stop(transition); } void obs_transition_video_render(obs_source_t *transition, obs_transition_video_render_callback_t callback) { obs_transition_video_render2(transition, callback, obs->video.transparent_texture); } void obs_transition_video_render2(obs_source_t *transition, obs_transition_video_render_callback_t callback, gs_texture_t *placeholder_texture) { struct transition_state state; struct matrix4 matrices[2]; bool locked = false; bool stopped = false; bool video_stopped = false; float t; if (!transition_valid(transition, "obs_transition_video_render")) return; t = get_video_time(transition); lock_transition(transition); if (t >= 1.0f && transition->transitioning_video) { transition->transitioning_video = false; video_stopped = true; if (!transition->transitioning_audio) { obs_transition_stop(transition); stopped = true; } } copy_transition_state(transition, &state); matrices[0] = transition->transition_matrices[0]; matrices[1] = transition->transition_matrices[1]; unlock_transition(transition); if (state.transitioning_video) locked 
= trylock_textures(transition) == 0; if (state.transitioning_video && locked && callback) { gs_texture_t *tex[2]; uint32_t cx; uint32_t cy; const enum gs_color_space current_space = gs_get_color_space(); const enum gs_color_space source_space = obs_source_get_color_space(transition, 1, &current_space); for (size_t i = 0; i < 2; i++) { if (state.s[i]) { render_child(transition, state.s[i], i, source_space); tex[i] = get_texture(transition, i); if (!tex[i]) tex[i] = placeholder_texture; } else { tex[i] = placeholder_texture; } } cx = get_cx(transition); cy = get_cy(transition); if (cx && cy) { gs_blend_state_push(); gs_blend_function(GS_BLEND_ONE, GS_BLEND_INVSRCALPHA); callback(transition->context.data, tex[0], tex[1], t, cx, cy); gs_blend_state_pop(); } } else if (state.transitioning_audio) { if (state.s[1]) { gs_matrix_push(); gs_matrix_mul(&matrices[1]); obs_source_video_render(state.s[1]); gs_matrix_pop(); } } else { if (state.s[0]) { gs_matrix_push(); gs_matrix_mul(&matrices[0]); obs_source_video_render(state.s[0]); gs_matrix_pop(); } } if (locked) unlock_textures(transition); obs_source_release(state.s[0]); obs_source_release(state.s[1]); if (video_stopped) obs_source_dosignal(transition, "source_transition_video_stop", "transition_video_stop"); if (stopped) handle_stop(transition); } static enum gs_color_space mix_spaces(enum gs_color_space a, enum gs_color_space b) { if ((a == GS_CS_709_EXTENDED) || (a == GS_CS_709_SCRGB) || (b == GS_CS_709_EXTENDED) || (b == GS_CS_709_SCRGB)) return GS_CS_709_EXTENDED; if ((a == GS_CS_SRGB_16F) || (b == GS_CS_SRGB_16F)) return GS_CS_SRGB_16F; return GS_CS_SRGB; } enum gs_color_space obs_transition_video_get_color_space(obs_source_t *transition) { obs_source_t *source0 = transition->transition_sources[0]; obs_source_t *source1 = transition->transition_sources[1]; const enum gs_color_space preferred_spaces[] = { GS_CS_SRGB, GS_CS_SRGB_16F, GS_CS_709_EXTENDED, }; enum gs_color_space space = GS_CS_SRGB; if (source0) { space =
mix_spaces(space, obs_source_get_color_space(source0, OBS_COUNTOF(preferred_spaces), preferred_spaces)); } if (source1) { space = mix_spaces(space, obs_source_get_color_space(source1, OBS_COUNTOF(preferred_spaces), preferred_spaces)); } return space; } bool obs_transition_video_render_direct(obs_source_t *transition, enum obs_transition_target target) { struct transition_state state; struct matrix4 matrices[2]; bool stopped = false; bool video_stopped = false; bool render_b = target == OBS_TRANSITION_SOURCE_B; bool transitioning; float t; if (!transition_valid(transition, "obs_transition_video_render")) return false; t = get_video_time(transition); lock_transition(transition); if (t >= 1.0f && transition->transitioning_video) { transition->transitioning_video = false; video_stopped = true; if (!obs_source_active(transition)) transition->transitioning_audio = false; if (!transition->transitioning_audio) { obs_transition_stop(transition); stopped = true; } } copy_transition_state(transition, &state); transitioning = state.transitioning_audio || state.transitioning_video; matrices[0] = transition->transition_matrices[0]; matrices[1] = transition->transition_matrices[1]; unlock_transition(transition); int idx = (transitioning && render_b) ? 
1 : 0; if (state.s[idx]) { gs_matrix_push(); gs_matrix_mul(&matrices[idx]); obs_source_video_render(state.s[idx]); gs_matrix_pop(); } obs_source_release(state.s[0]); obs_source_release(state.s[1]); if (video_stopped) obs_source_dosignal(transition, "source_transition_video_stop", "transition_video_stop"); if (stopped) handle_stop(transition); return transitioning; } static inline float get_sample_time(obs_source_t *transition, size_t sample_rate, size_t sample, uint64_t ts) { uint64_t sample_ts_offset = util_mul_div64(sample, 1000000000ULL, sample_rate); uint64_t i_ts = ts + sample_ts_offset; return calc_time(transition, i_ts); } static inline void mix_child(obs_source_t *transition, float *out, float *in, size_t count, size_t sample_rate, uint64_t ts, obs_transition_audio_mix_callback_t mix) { void *context_data = transition->context.data; for (size_t i = 0; i < count; i++) { float t = get_sample_time(transition, sample_rate, i, ts); out[i] += in[i] * mix(context_data, t); } } static void process_audio(obs_source_t *transition, obs_source_t *child, struct obs_source_audio_mix *audio, uint64_t min_ts, uint32_t mixers, size_t channels, size_t sample_rate, obs_transition_audio_mix_callback_t mix) { bool valid = child && !child->audio_pending && child->audio_ts && !child->audio_is_duplicated; struct obs_source_audio_mix child_audio; uint64_t ts; size_t pos; if (!valid) return; ts = child->audio_ts; obs_source_get_audio_mix(child, &child_audio); pos = (size_t)ns_to_audio_frames(sample_rate, ts - min_ts); if (pos > AUDIO_OUTPUT_FRAMES) return; for (size_t mix_idx = 0; mix_idx < MAX_AUDIO_MIXES; mix_idx++) { struct audio_output_data *output = &audio->output[mix_idx]; struct audio_output_data *input = &child_audio.output[mix_idx]; if ((mixers & (1 << mix_idx)) == 0) continue; for (size_t ch = 0; ch < channels; ch++) { float *out = output->data[ch]; float *in = input->data[ch]; mix_child(transition, out + pos, in, AUDIO_OUTPUT_FRAMES - pos, sample_rate, ts, mix); } } } 
static inline uint64_t calc_min_ts(obs_source_t *sources[2]) { uint64_t min_ts = 0; for (size_t i = 0; i < 2; i++) { if (sources[i] && !sources[i]->audio_pending && sources[i]->audio_ts) { if (!min_ts || sources[i]->audio_ts < min_ts) min_ts = sources[i]->audio_ts; } } return min_ts; } static inline bool stop_audio(obs_source_t *transition) { transition->transitioning_audio = false; if (!transition->transitioning_video) { obs_transition_stop(transition); return true; } return false; } bool obs_transition_audio_render(obs_source_t *transition, uint64_t *ts_out, struct obs_source_audio_mix *audio, uint32_t mixers, size_t channels, size_t sample_rate, obs_transition_audio_mix_callback_t mix_a, obs_transition_audio_mix_callback_t mix_b) { obs_source_t *sources[2]; struct transition_state state = {0}; bool stopped = false; uint64_t min_ts; float t; if (!transition_valid(transition, "obs_transition_audio_render")) return false; lock_transition(transition); sources[0] = transition->transition_sources[0]; sources[1] = transition->transition_sources[1]; if (sources[0] && obs_source_removed(sources[0])) sources[0] = NULL; if (sources[1] && obs_source_removed(sources[1])) sources[1] = NULL; min_ts = calc_min_ts(sources); if (min_ts) { t = calc_time(transition, min_ts); if (t >= 1.0f && transition->transitioning_audio) stopped = stop_audio(transition); sources[0] = transition->transition_sources[0]; sources[1] = transition->transition_sources[1]; min_ts = calc_min_ts(sources); if (min_ts) copy_transition_state(transition, &state); } else if (!transition->transitioning_video && transition->transitioning_audio) { stopped = stop_audio(transition); } unlock_transition(transition); if (min_ts) { if (state.transitioning_audio) { if (state.s[0]) process_audio(transition, state.s[0], audio, min_ts, mixers, channels, sample_rate, mix_a); if (state.s[1]) process_audio(transition, state.s[1], audio, min_ts, mixers, channels, sample_rate, mix_b); } else if (state.s[0]) { 
memcpy(audio->output[0].data[0], state.s[0]->audio_output_buf[0][0], TOTAL_AUDIO_SIZE); } obs_source_release(state.s[0]); obs_source_release(state.s[1]); } if (stopped) handle_stop(transition); *ts_out = min_ts; return !!min_ts; } void obs_transition_enable_fixed(obs_source_t *transition, bool enable, uint32_t duration) { if (!transition_valid(transition, "obs_transition_enable_fixed")) return; transition->transition_use_fixed_duration = enable; transition->transition_fixed_duration = duration; } bool obs_transition_fixed(obs_source_t *transition) { return transition_valid(transition, "obs_transition_fixed") ? transition->transition_use_fixed_duration : false; } static inline obs_source_t *copy_source_state(obs_source_t *tr_dest, obs_source_t *tr_source, size_t idx) { obs_source_t *old_child = tr_dest->transition_sources[idx]; obs_source_t *new_child = obs_source_get_ref(tr_source->transition_sources[idx]); bool active = tr_source->transition_source_active[idx]; if (old_child && tr_dest->transition_source_active[idx]) obs_source_remove_active_child(tr_dest, old_child); tr_dest->transition_sources[idx] = new_child; tr_dest->transition_source_active[idx] = active; if (active && new_child) obs_source_add_active_child(tr_dest, new_child); return old_child; } void obs_transition_swap_begin(obs_source_t *tr_dest, obs_source_t *tr_source) { obs_source_t *old_children[2]; if (tr_dest == tr_source) return; lock_textures(tr_source); lock_textures(tr_dest); lock_transition(tr_source); lock_transition(tr_dest); for (size_t i = 0; i < 2; i++) old_children[i] = copy_source_state(tr_dest, tr_source, i); unlock_transition(tr_dest); unlock_transition(tr_source); for (size_t i = 0; i < 2; i++) obs_source_release(old_children[i]); } void obs_transition_swap_end(obs_source_t *tr_dest, obs_source_t *tr_source) { if (tr_dest == tr_source) return; obs_transition_clear(tr_source); for (size_t i = 0; i < 2; i++) { gs_texrender_t *dest = tr_dest->transition_texrender[i]; gs_texrender_t 
*source = tr_source->transition_texrender[i]; tr_dest->transition_texrender[i] = source; tr_source->transition_texrender[i] = dest; } unlock_textures(tr_dest); unlock_textures(tr_source); } obs-studio-32.1.0-sources/libobs/obs-properties.h000644 001751 001751 00000036144 15153330235 022644 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "util/c99defs.h" #include "obs-data.h" #include "media-io/frame-rate.h" /** * @file * @brief libobs header for the properties system used in libobs * * @page properties Properties * @brief Platform and Toolkit independent settings implementation * * @section prop_overview_sec Overview * * libobs uses a property system which lets for example sources specify * settings that can be displayed to the user by the UI. 
* */ #ifdef __cplusplus extern "C" { #endif /** Only update when the user presses OK or Apply */ #define OBS_PROPERTIES_DEFER_UPDATE (1 << 0) enum obs_property_type { OBS_PROPERTY_INVALID, OBS_PROPERTY_BOOL, OBS_PROPERTY_INT, OBS_PROPERTY_FLOAT, OBS_PROPERTY_TEXT, OBS_PROPERTY_PATH, OBS_PROPERTY_LIST, OBS_PROPERTY_COLOR, OBS_PROPERTY_BUTTON, OBS_PROPERTY_FONT, OBS_PROPERTY_EDITABLE_LIST, OBS_PROPERTY_FRAME_RATE, OBS_PROPERTY_GROUP, OBS_PROPERTY_COLOR_ALPHA, }; enum obs_combo_format { OBS_COMBO_FORMAT_INVALID, OBS_COMBO_FORMAT_INT, OBS_COMBO_FORMAT_FLOAT, OBS_COMBO_FORMAT_STRING, OBS_COMBO_FORMAT_BOOL, }; enum obs_combo_type { OBS_COMBO_TYPE_INVALID, OBS_COMBO_TYPE_EDITABLE, OBS_COMBO_TYPE_LIST, OBS_COMBO_TYPE_RADIO, }; enum obs_editable_list_type { OBS_EDITABLE_LIST_TYPE_STRINGS, OBS_EDITABLE_LIST_TYPE_FILES, OBS_EDITABLE_LIST_TYPE_FILES_AND_URLS, }; enum obs_path_type { OBS_PATH_FILE, OBS_PATH_FILE_SAVE, OBS_PATH_DIRECTORY, }; enum obs_text_type { OBS_TEXT_DEFAULT, OBS_TEXT_PASSWORD, OBS_TEXT_MULTILINE, OBS_TEXT_INFO, }; enum obs_text_info_type { OBS_TEXT_INFO_NORMAL, OBS_TEXT_INFO_WARNING, OBS_TEXT_INFO_ERROR, }; enum obs_number_type { OBS_NUMBER_SCROLLER, OBS_NUMBER_SLIDER, }; enum obs_group_type { OBS_COMBO_INVALID, OBS_GROUP_NORMAL, OBS_GROUP_CHECKABLE, }; enum obs_button_type { OBS_BUTTON_DEFAULT, OBS_BUTTON_URL, }; #define OBS_FONT_BOLD (1 << 0) #define OBS_FONT_ITALIC (1 << 1) #define OBS_FONT_UNDERLINE (1 << 2) #define OBS_FONT_STRIKEOUT (1 << 3) struct obs_properties; struct obs_property; typedef struct obs_properties obs_properties_t; typedef struct obs_property obs_property_t; /* ------------------------------------------------------------------------- */ EXPORT obs_properties_t *obs_properties_create(void); EXPORT obs_properties_t *obs_properties_create_param(void *param, void (*destroy)(void *param)); EXPORT void obs_properties_destroy(obs_properties_t *props); EXPORT void obs_properties_set_flags(obs_properties_t *props, uint32_t flags); EXPORT 
uint32_t obs_properties_get_flags(obs_properties_t *props); EXPORT void obs_properties_set_param(obs_properties_t *props, void *param, void (*destroy)(void *param)); EXPORT void *obs_properties_get_param(obs_properties_t *props); EXPORT obs_property_t *obs_properties_first(obs_properties_t *props); EXPORT obs_property_t *obs_properties_get(obs_properties_t *props, const char *property); EXPORT obs_properties_t *obs_properties_get_parent(obs_properties_t *props); /** Remove a property from a properties list. * * Removes a property from a properties list. Only valid in either * get_properties or modified_callback(2). modified_callback(2) must return * true so that all UI properties are rebuilt and returning false is undefined * behavior. * * @param props Properties to remove from. * @param property Name of the property to remove. */ EXPORT void obs_properties_remove_by_name(obs_properties_t *props, const char *property); /** * Applies settings to the properties by calling all the necessary * modification callbacks */ EXPORT void obs_properties_apply_settings(obs_properties_t *props, obs_data_t *settings); /* ------------------------------------------------------------------------- */ /** * Callback for when a button property is clicked. If the properties * need to be refreshed due to changes to the property layout, return true, * otherwise return false. 
*/ typedef bool (*obs_property_clicked_t)(obs_properties_t *props, obs_property_t *property, void *data); EXPORT obs_property_t *obs_properties_add_bool(obs_properties_t *props, const char *name, const char *description); EXPORT obs_property_t *obs_properties_add_int(obs_properties_t *props, const char *name, const char *description, int min, int max, int step); EXPORT obs_property_t *obs_properties_add_float(obs_properties_t *props, const char *name, const char *description, double min, double max, double step); EXPORT obs_property_t *obs_properties_add_int_slider(obs_properties_t *props, const char *name, const char *description, int min, int max, int step); EXPORT obs_property_t *obs_properties_add_float_slider(obs_properties_t *props, const char *name, const char *description, double min, double max, double step); EXPORT obs_property_t *obs_properties_add_text(obs_properties_t *props, const char *name, const char *description, enum obs_text_type type); /** * Adds a 'path' property. Can be a directory or a file. * * If target is a file path, the filters should be this format, separated by * double semicolons, and extensions separated by space: * "Example types 1 and 2 (*.ex1 *.ex2);;Example type 3 (*.ex3)" * * @param props Properties object * @param name Settings name * @param description Description (display name) of the property * @param type Type of path (directory or file) * @param filter If type is a file path, then describes the file filter * that the user can browse. Items are separated via * double semicolons. If multiple file types in a * filter, separate with space. 
*/ EXPORT obs_property_t *obs_properties_add_path(obs_properties_t *props, const char *name, const char *description, enum obs_path_type type, const char *filter, const char *default_path); EXPORT obs_property_t *obs_properties_add_list(obs_properties_t *props, const char *name, const char *description, enum obs_combo_type type, enum obs_combo_format format); EXPORT obs_property_t *obs_properties_add_color(obs_properties_t *props, const char *name, const char *description); EXPORT obs_property_t *obs_properties_add_color_alpha(obs_properties_t *props, const char *name, const char *description); EXPORT obs_property_t *obs_properties_add_button(obs_properties_t *props, const char *name, const char *text, obs_property_clicked_t callback); EXPORT obs_property_t *obs_properties_add_button2(obs_properties_t *props, const char *name, const char *text, obs_property_clicked_t callback, void *priv); /** * Adds a font selection property. * * A font is an obs_data sub-object which contains the following items: * face: face name string * style: style name string * size: size integer * flags: font flags integer (OBS_FONT_* defined above) */ EXPORT obs_property_t *obs_properties_add_font(obs_properties_t *props, const char *name, const char *description); EXPORT obs_property_t *obs_properties_add_editable_list(obs_properties_t *props, const char *name, const char *description, enum obs_editable_list_type type, const char *filter, const char *default_path); EXPORT obs_property_t *obs_properties_add_frame_rate(obs_properties_t *props, const char *name, const char *description); EXPORT obs_property_t *obs_properties_add_group(obs_properties_t *props, const char *name, const char *description, enum obs_group_type type, obs_properties_t *group); /* ------------------------------------------------------------------------- */ /** * Optional callback for when a property is modified. 
If the properties * need to be refreshed due to changes to the property layout, return true, * otherwise return false. */ typedef bool (*obs_property_modified_t)(obs_properties_t *props, obs_property_t *property, obs_data_t *settings); typedef bool (*obs_property_modified2_t)(void *priv, obs_properties_t *props, obs_property_t *property, obs_data_t *settings); EXPORT void obs_property_set_modified_callback(obs_property_t *p, obs_property_modified_t modified); EXPORT void obs_property_set_modified_callback2(obs_property_t *p, obs_property_modified2_t modified, void *priv); EXPORT bool obs_property_modified(obs_property_t *p, obs_data_t *settings); EXPORT bool obs_property_button_clicked(obs_property_t *p, void *obj); EXPORT void obs_property_set_visible(obs_property_t *p, bool visible); EXPORT void obs_property_set_enabled(obs_property_t *p, bool enabled); EXPORT void obs_property_set_description(obs_property_t *p, const char *description); EXPORT void obs_property_set_long_description(obs_property_t *p, const char *long_description); EXPORT const char *obs_property_name(obs_property_t *p); EXPORT const char *obs_property_description(obs_property_t *p); EXPORT const char *obs_property_long_description(obs_property_t *p); EXPORT enum obs_property_type obs_property_get_type(obs_property_t *p); EXPORT bool obs_property_enabled(obs_property_t *p); EXPORT bool obs_property_visible(obs_property_t *p); EXPORT bool obs_property_next(obs_property_t **p); EXPORT int obs_property_int_min(obs_property_t *p); EXPORT int obs_property_int_max(obs_property_t *p); EXPORT int obs_property_int_step(obs_property_t *p); EXPORT enum obs_number_type obs_property_int_type(obs_property_t *p); EXPORT const char *obs_property_int_suffix(obs_property_t *p); EXPORT double obs_property_float_min(obs_property_t *p); EXPORT double obs_property_float_max(obs_property_t *p); EXPORT double obs_property_float_step(obs_property_t *p); EXPORT enum obs_number_type obs_property_float_type(obs_property_t 
*p); EXPORT const char *obs_property_float_suffix(obs_property_t *p); EXPORT enum obs_text_type obs_property_text_type(obs_property_t *p); EXPORT bool obs_property_text_monospace(obs_property_t *p); EXPORT enum obs_text_info_type obs_property_text_info_type(obs_property_t *p); EXPORT bool obs_property_text_info_word_wrap(obs_property_t *p); EXPORT enum obs_path_type obs_property_path_type(obs_property_t *p); EXPORT const char *obs_property_path_filter(obs_property_t *p); EXPORT const char *obs_property_path_default_path(obs_property_t *p); EXPORT enum obs_combo_type obs_property_list_type(obs_property_t *p); EXPORT enum obs_combo_format obs_property_list_format(obs_property_t *p); EXPORT void obs_property_int_set_limits(obs_property_t *p, int min, int max, int step); EXPORT void obs_property_float_set_limits(obs_property_t *p, double min, double max, double step); EXPORT void obs_property_int_set_suffix(obs_property_t *p, const char *suffix); EXPORT void obs_property_float_set_suffix(obs_property_t *p, const char *suffix); EXPORT void obs_property_text_set_monospace(obs_property_t *p, bool monospace); EXPORT void obs_property_text_set_info_type(obs_property_t *p, enum obs_text_info_type type); EXPORT void obs_property_text_set_info_word_wrap(obs_property_t *p, bool word_wrap); EXPORT void obs_property_button_set_type(obs_property_t *p, enum obs_button_type type); EXPORT void obs_property_button_set_url(obs_property_t *p, char *url); EXPORT void obs_property_list_clear(obs_property_t *p); EXPORT size_t obs_property_list_add_string(obs_property_t *p, const char *name, const char *val); EXPORT size_t obs_property_list_add_int(obs_property_t *p, const char *name, long long val); EXPORT size_t obs_property_list_add_float(obs_property_t *p, const char *name, double val); EXPORT size_t obs_property_list_add_bool(obs_property_t *p, const char *name, bool val); EXPORT void obs_property_list_insert_string(obs_property_t *p, size_t idx, const char *name, const char *val); 
EXPORT void obs_property_list_insert_int(obs_property_t *p, size_t idx, const char *name, long long val); EXPORT void obs_property_list_insert_float(obs_property_t *p, size_t idx, const char *name, double val); EXPORT void obs_property_list_insert_bool(obs_property_t *p, size_t idx, const char *name, bool val); EXPORT void obs_property_list_item_disable(obs_property_t *p, size_t idx, bool disabled); EXPORT bool obs_property_list_item_disabled(obs_property_t *p, size_t idx); EXPORT void obs_property_list_item_remove(obs_property_t *p, size_t idx); EXPORT size_t obs_property_list_item_count(obs_property_t *p); EXPORT const char *obs_property_list_item_name(obs_property_t *p, size_t idx); EXPORT const char *obs_property_list_item_string(obs_property_t *p, size_t idx); EXPORT long long obs_property_list_item_int(obs_property_t *p, size_t idx); EXPORT double obs_property_list_item_float(obs_property_t *p, size_t idx); EXPORT bool obs_property_list_item_bool(obs_property_t *p, size_t idx); EXPORT enum obs_editable_list_type obs_property_editable_list_type(obs_property_t *p); EXPORT const char *obs_property_editable_list_filter(obs_property_t *p); EXPORT const char *obs_property_editable_list_default_path(obs_property_t *p); EXPORT void obs_property_frame_rate_clear(obs_property_t *p); EXPORT void obs_property_frame_rate_options_clear(obs_property_t *p); EXPORT void obs_property_frame_rate_fps_ranges_clear(obs_property_t *p); EXPORT size_t obs_property_frame_rate_option_add(obs_property_t *p, const char *name, const char *description); EXPORT size_t obs_property_frame_rate_fps_range_add(obs_property_t *p, struct media_frames_per_second min, struct media_frames_per_second max); EXPORT void obs_property_frame_rate_option_insert(obs_property_t *p, size_t idx, const char *name, const char *description); EXPORT void obs_property_frame_rate_fps_range_insert(obs_property_t *p, size_t idx, struct media_frames_per_second min, struct media_frames_per_second max); EXPORT size_t 
obs_property_frame_rate_options_count(obs_property_t *p); EXPORT const char *obs_property_frame_rate_option_name(obs_property_t *p, size_t idx); EXPORT const char *obs_property_frame_rate_option_description(obs_property_t *p, size_t idx); EXPORT size_t obs_property_frame_rate_fps_ranges_count(obs_property_t *p); EXPORT struct media_frames_per_second obs_property_frame_rate_fps_range_min(obs_property_t *p, size_t idx); EXPORT struct media_frames_per_second obs_property_frame_rate_fps_range_max(obs_property_t *p, size_t idx); EXPORT enum obs_group_type obs_property_group_type(obs_property_t *p); EXPORT obs_properties_t *obs_property_group_content(obs_property_t *p); EXPORT enum obs_button_type obs_property_button_type(obs_property_t *p); EXPORT const char *obs_property_button_url(obs_property_t *p); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obsversion.h000644 001751 001751 00000000167 15153330235 022054 0ustar00runnerrunner000000 000000 #pragma once extern const char *OBS_VERSION; extern const char *OBS_VERSION_CANONICAL; extern const char *OBS_COMMIT; obs-studio-32.1.0-sources/libobs/obs-windows.c000644 001751 001751 00000066524 15153330235 022142 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/

#include "util/windows/win-registry.h"
#include "util/windows/win-version.h"
#include "util/platform.h"
#include "util/dstr.h"
#include "obs.h"
#include "obs-internal.h"

#include <windows.h>
#include <wscapi.h>
#include <iwscapi.h>

static uint32_t win_ver = 0;
static uint32_t win_build = 0;

const char *get_module_extension(void)
{
	return ".dll";
}

static const char *module_bin[] = {"../../obs-plugins/64bit"};
static const char *module_data[] = {"../../data/obs-plugins/%module%"};

static const int module_patterns_size = sizeof(module_bin) / sizeof(module_bin[0]);

void add_default_module_paths(void)
{
	for (int i = 0; i < module_patterns_size; i++)
		obs_add_module_path(module_bin[i], module_data[i]);
}

/* on windows, points to [base directory]/data/libobs */
char *find_libobs_data_file(const char *file)
{
	struct dstr path;
	dstr_init(&path);

	if (check_path(file, "../../data/libobs/", &path))
		return path.array;

	dstr_free(&path);
	return NULL;
}

static void log_processor_info(void)
{
	HKEY key;
	wchar_t data[1024];
	char *str = NULL;
	DWORD size, speed;
	LSTATUS status;

	memset(data, 0, sizeof(data));

	status = RegOpenKeyW(HKEY_LOCAL_MACHINE,
			     L"HARDWARE\\DESCRIPTION\\System\\CentralProcessor\\0", &key);
	if (status != ERROR_SUCCESS)
		return;

	size = sizeof(data);
	status = RegQueryValueExW(key, L"ProcessorNameString", NULL, NULL, (LPBYTE)data, &size);
	if (status == ERROR_SUCCESS) {
		os_wcs_to_utf8_ptr(data, 0, &str);
		blog(LOG_INFO, "CPU Name: %s", str);
		bfree(str);
	}

	size = sizeof(speed);
	status = RegQueryValueExW(key, L"~MHz", NULL, NULL, (LPBYTE)&speed, &size);
	if (status == ERROR_SUCCESS)
		blog(LOG_INFO, "CPU Speed: %ldMHz", speed);

	RegCloseKey(key);
}

static void log_processor_cores(void)
{
	blog(LOG_INFO, "Physical Cores: %d, Logical Cores: %d", os_get_physical_cores(),
	     os_get_logical_cores());
}

static void log_emulation_status(void)
{
	if (os_get_emulation_status()) {
		blog(LOG_WARNING, "Windows ARM64: Running with x64 emulation");
	}
}
static void log_available_memory(void) { MEMORYSTATUSEX ms; ms.dwLength = sizeof(ms); GlobalMemoryStatusEx(&ms); #ifdef _WIN64 const char *note = ""; #else const char *note = " (NOTE: 32bit programs cannot use more than 3gb)"; #endif blog(LOG_INFO, "Physical Memory: %luMB Total, %luMB Free%s", (DWORD)(ms.ullTotalPhys / 1048576), (DWORD)(ms.ullAvailPhys / 1048576), note); } static void log_lenovo_vantage(void) { SC_HANDLE manager = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT); if (!manager) return; SC_HANDLE service = OpenService(manager, L"FBNetFilter", SERVICE_QUERY_STATUS); if (service) { blog(LOG_WARNING, "Lenovo Vantage / Legion Edge is installed. The \"Network Boost\" " "feature must be disabled when streaming with OBS."); CloseServiceHandle(service); } CloseServiceHandle(manager); } static void log_conflicting_software(void) { log_lenovo_vantage(); } extern const char *get_win_release_id(); static void log_windows_version(void) { struct win_version_info ver; get_win_ver(&ver); const char *release_id = get_win_release_id(); bool b64 = is_64_bit_windows(); const char *windows_bitness = b64 ? "64" : "32"; bool arm64 = is_arm64_windows(); const char *arm64_windows = arm64 ? "ARM " : ""; blog(LOG_INFO, "Windows Version: %d.%d Build %d (release: %s; revision: %d; %s%s-bit)", ver.major, ver.minor, ver.build, release_id, ver.revis, arm64_windows, windows_bitness); } static void log_admin_status(void) { SID_IDENTIFIER_AUTHORITY auth = SECURITY_NT_AUTHORITY; PSID admin_group; BOOL success; success = AllocateAndInitializeSid(&auth, 2, SECURITY_BUILTIN_DOMAIN_RID, DOMAIN_ALIAS_RID_ADMINS, 0, 0, 0, 0, 0, 0, &admin_group); if (success) { if (!CheckTokenMembership(NULL, admin_group, &success)) success = false; FreeSid(admin_group); } blog(LOG_INFO, "Running as administrator: %s", success ? 
"true" : "false"); } #define WIN10_GAME_BAR_REG_KEY L"Software\\Microsoft\\Windows\\CurrentVersion\\GameDVR" #define WIN10_GAME_DVR_POLICY_REG_KEY L"SOFTWARE\\Policies\\Microsoft\\Windows\\GameDVR" #define WIN10_GAME_DVR_REG_KEY L"System\\GameConfigStore" #define WIN10_GAME_MODE_REG_KEY L"Software\\Microsoft\\GameBar" static void log_gaming_features(void) { if (win_ver < 0xA00) return; struct reg_dword game_bar_enabled; struct reg_dword game_dvr_allowed; struct reg_dword game_dvr_enabled; struct reg_dword game_dvr_bg_recording; struct reg_dword game_mode_enabled; get_reg_dword(HKEY_CURRENT_USER, WIN10_GAME_BAR_REG_KEY, L"AppCaptureEnabled", &game_bar_enabled); get_reg_dword(HKEY_CURRENT_USER, WIN10_GAME_DVR_POLICY_REG_KEY, L"AllowGameDVR", &game_dvr_allowed); get_reg_dword(HKEY_CURRENT_USER, WIN10_GAME_DVR_REG_KEY, L"GameDVR_Enabled", &game_dvr_enabled); get_reg_dword(HKEY_CURRENT_USER, WIN10_GAME_BAR_REG_KEY, L"HistoricalCaptureEnabled", &game_dvr_bg_recording); get_reg_dword(HKEY_CURRENT_USER, WIN10_GAME_MODE_REG_KEY, L"AutoGameModeEnabled", &game_mode_enabled); if (game_mode_enabled.status != ERROR_SUCCESS) { get_reg_dword(HKEY_CURRENT_USER, WIN10_GAME_MODE_REG_KEY, L"AllowAutoGameMode", &game_mode_enabled); } blog(LOG_INFO, "Windows 10/11 Gaming Features:"); if (game_bar_enabled.status == ERROR_SUCCESS) { blog(LOG_INFO, "\tGame Bar: %s", (bool)game_bar_enabled.return_value ? "On" : "Off"); } if (game_dvr_allowed.status == ERROR_SUCCESS) { blog(LOG_INFO, "\tGame DVR Allowed: %s", (bool)game_dvr_allowed.return_value ? "Yes" : "No"); } if (game_dvr_enabled.status == ERROR_SUCCESS) { blog(LOG_INFO, "\tGame DVR: %s", (bool)game_dvr_enabled.return_value ? "On" : "Off"); } if (game_dvr_bg_recording.status == ERROR_SUCCESS) { blog(LOG_INFO, "\tGame DVR Background Recording: %s", (bool)game_dvr_bg_recording.return_value ? "On" : "Off"); } if (game_mode_enabled.status == ERROR_SUCCESS) { blog(LOG_INFO, "\tGame Mode: %s", (bool)game_mode_enabled.return_value ? 
"On" : "Off"); } else if (win_build >= 19042) { // On by default in newer Windows 10 builds (no registry key set) blog(LOG_INFO, "\tGame Mode: Probably On (no reg key set)"); } } static const char *get_str_for_state(int state) { switch (state) { case WSC_SECURITY_PRODUCT_STATE_ON: return "enabled"; case WSC_SECURITY_PRODUCT_STATE_OFF: return "disabled"; case WSC_SECURITY_PRODUCT_STATE_SNOOZED: return "temporarily disabled"; case WSC_SECURITY_PRODUCT_STATE_EXPIRED: return "expired"; default: return "unknown"; } } static const char *get_str_for_type(int type) { switch (type) { case WSC_SECURITY_PROVIDER_ANTIVIRUS: return "AV"; case WSC_SECURITY_PROVIDER_FIREWALL: return "FW"; case WSC_SECURITY_PROVIDER_ANTISPYWARE: return "ASW"; default: return "unknown"; } } static void log_security_products_by_type(IWSCProductList *prod_list, int type) { HRESULT hr; LONG count = 0; IWscProduct *prod; BSTR name; WSC_SECURITY_PRODUCT_STATE prod_state; hr = prod_list->lpVtbl->Initialize(prod_list, type); if (FAILED(hr)) return; hr = prod_list->lpVtbl->get_Count(prod_list, &count); if (FAILED(hr)) { prod_list->lpVtbl->Release(prod_list); return; } for (int i = 0; i < count; i++) { hr = prod_list->lpVtbl->get_Item(prod_list, i, &prod); if (FAILED(hr)) continue; hr = prod->lpVtbl->get_ProductName(prod, &name); if (FAILED(hr)) continue; hr = prod->lpVtbl->get_ProductState(prod, &prod_state); if (FAILED(hr)) { SysFreeString(name); continue; } char *product_name; os_wcs_to_utf8_ptr(name, 0, &product_name); blog(LOG_INFO, "\t%s: %s (%s)", product_name, get_str_for_state(prod_state), get_str_for_type(type)); bfree(product_name); SysFreeString(name); prod->lpVtbl->Release(prod); } prod_list->lpVtbl->Release(prod_list); } static void log_security_products(void) { IWSCProductList *prod_list = NULL; HMODULE h_wsc; HRESULT hr; /* We load the DLL rather than import wcsapi.lib because the clsid / * iid only exists on Windows 8 or higher. 
*/ h_wsc = LoadLibraryW(L"wscapi.dll"); if (!h_wsc) return; const CLSID *prod_list_clsid = (const CLSID *)GetProcAddress(h_wsc, "CLSID_WSCProductList"); const IID *prod_list_iid = (const IID *)GetProcAddress(h_wsc, "IID_IWSCProductList"); if (prod_list_clsid && prod_list_iid) { blog(LOG_INFO, "Sec. Software Status:"); hr = CoCreateInstance(prod_list_clsid, NULL, CLSCTX_INPROC_SERVER, prod_list_iid, &prod_list); if (!FAILED(hr)) { log_security_products_by_type(prod_list, WSC_SECURITY_PROVIDER_ANTIVIRUS); } hr = CoCreateInstance(prod_list_clsid, NULL, CLSCTX_INPROC_SERVER, prod_list_iid, &prod_list); if (!FAILED(hr)) { log_security_products_by_type(prod_list, WSC_SECURITY_PROVIDER_FIREWALL); } hr = CoCreateInstance(prod_list_clsid, NULL, CLSCTX_INPROC_SERVER, prod_list_iid, &prod_list); if (!FAILED(hr)) { log_security_products_by_type(prod_list, WSC_SECURITY_PROVIDER_ANTISPYWARE); } } FreeLibrary(h_wsc); } void log_system_info(void) { struct win_version_info ver; get_win_ver(&ver); win_ver = (ver.major << 8) | ver.minor; win_build = ver.build; log_processor_info(); log_processor_cores(); log_available_memory(); log_windows_version(); log_emulation_status(); log_admin_status(); log_gaming_features(); log_security_products(); log_conflicting_software(); } struct obs_hotkeys_platform { int vk_codes[OBS_KEY_LAST_VALUE]; }; static int get_virtual_key(obs_key_t key) { switch (key) { case OBS_KEY_RETURN: return VK_RETURN; case OBS_KEY_ESCAPE: return VK_ESCAPE; case OBS_KEY_TAB: return VK_TAB; case OBS_KEY_BACKTAB: return VK_OEM_BACKTAB; case OBS_KEY_BACKSPACE: return VK_BACK; case OBS_KEY_INSERT: return VK_INSERT; case OBS_KEY_DELETE: return VK_DELETE; case OBS_KEY_PAUSE: return VK_PAUSE; case OBS_KEY_PRINT: return VK_SNAPSHOT; case OBS_KEY_CLEAR: return VK_CLEAR; case OBS_KEY_HOME: return VK_HOME; case OBS_KEY_END: return VK_END; case OBS_KEY_LEFT: return VK_LEFT; case OBS_KEY_UP: return VK_UP; case OBS_KEY_RIGHT: return VK_RIGHT; case OBS_KEY_DOWN: return VK_DOWN; case 
OBS_KEY_PAGEUP: return VK_PRIOR; case OBS_KEY_PAGEDOWN: return VK_NEXT; case OBS_KEY_SHIFT: return VK_SHIFT; case OBS_KEY_CONTROL: return VK_CONTROL; case OBS_KEY_ALT: return VK_MENU; case OBS_KEY_CAPSLOCK: return VK_CAPITAL; case OBS_KEY_NUMLOCK: return VK_NUMLOCK; case OBS_KEY_SCROLLLOCK: return VK_SCROLL; case OBS_KEY_F1: return VK_F1; case OBS_KEY_F2: return VK_F2; case OBS_KEY_F3: return VK_F3; case OBS_KEY_F4: return VK_F4; case OBS_KEY_F5: return VK_F5; case OBS_KEY_F6: return VK_F6; case OBS_KEY_F7: return VK_F7; case OBS_KEY_F8: return VK_F8; case OBS_KEY_F9: return VK_F9; case OBS_KEY_F10: return VK_F10; case OBS_KEY_F11: return VK_F11; case OBS_KEY_F12: return VK_F12; case OBS_KEY_F13: return VK_F13; case OBS_KEY_F14: return VK_F14; case OBS_KEY_F15: return VK_F15; case OBS_KEY_F16: return VK_F16; case OBS_KEY_F17: return VK_F17; case OBS_KEY_F18: return VK_F18; case OBS_KEY_F19: return VK_F19; case OBS_KEY_F20: return VK_F20; case OBS_KEY_F21: return VK_F21; case OBS_KEY_F22: return VK_F22; case OBS_KEY_F23: return VK_F23; case OBS_KEY_F24: return VK_F24; case OBS_KEY_SPACE: return VK_SPACE; case OBS_KEY_APOSTROPHE: return VK_OEM_7; case OBS_KEY_PLUS: return VK_OEM_PLUS; case OBS_KEY_COMMA: return VK_OEM_COMMA; case OBS_KEY_MINUS: return VK_OEM_MINUS; case OBS_KEY_PERIOD: return VK_OEM_PERIOD; case OBS_KEY_SLASH: return VK_OEM_2; case OBS_KEY_0: return '0'; case OBS_KEY_1: return '1'; case OBS_KEY_2: return '2'; case OBS_KEY_3: return '3'; case OBS_KEY_4: return '4'; case OBS_KEY_5: return '5'; case OBS_KEY_6: return '6'; case OBS_KEY_7: return '7'; case OBS_KEY_8: return '8'; case OBS_KEY_9: return '9'; case OBS_KEY_NUMASTERISK: return VK_MULTIPLY; case OBS_KEY_NUMPLUS: return VK_ADD; case OBS_KEY_NUMMINUS: return VK_SUBTRACT; case OBS_KEY_NUMPERIOD: return VK_DECIMAL; case OBS_KEY_NUMSLASH: return VK_DIVIDE; case OBS_KEY_NUM0: return VK_NUMPAD0; case OBS_KEY_NUM1: return VK_NUMPAD1; case OBS_KEY_NUM2: return VK_NUMPAD2; case OBS_KEY_NUM3: return 
VK_NUMPAD3; case OBS_KEY_NUM4: return VK_NUMPAD4; case OBS_KEY_NUM5: return VK_NUMPAD5; case OBS_KEY_NUM6: return VK_NUMPAD6; case OBS_KEY_NUM7: return VK_NUMPAD7; case OBS_KEY_NUM8: return VK_NUMPAD8; case OBS_KEY_NUM9: return VK_NUMPAD9; case OBS_KEY_SEMICOLON: return VK_OEM_1; case OBS_KEY_A: return 'A'; case OBS_KEY_B: return 'B'; case OBS_KEY_C: return 'C'; case OBS_KEY_D: return 'D'; case OBS_KEY_E: return 'E'; case OBS_KEY_F: return 'F'; case OBS_KEY_G: return 'G'; case OBS_KEY_H: return 'H'; case OBS_KEY_I: return 'I'; case OBS_KEY_J: return 'J'; case OBS_KEY_K: return 'K'; case OBS_KEY_L: return 'L'; case OBS_KEY_M: return 'M'; case OBS_KEY_N: return 'N'; case OBS_KEY_O: return 'O'; case OBS_KEY_P: return 'P'; case OBS_KEY_Q: return 'Q'; case OBS_KEY_R: return 'R'; case OBS_KEY_S: return 'S'; case OBS_KEY_T: return 'T'; case OBS_KEY_U: return 'U'; case OBS_KEY_V: return 'V'; case OBS_KEY_W: return 'W'; case OBS_KEY_X: return 'X'; case OBS_KEY_Y: return 'Y'; case OBS_KEY_Z: return 'Z'; case OBS_KEY_BRACKETLEFT: return VK_OEM_4; case OBS_KEY_BACKSLASH: return VK_OEM_5; case OBS_KEY_BRACKETRIGHT: return VK_OEM_6; case OBS_KEY_ASCIITILDE: return VK_OEM_3; case OBS_KEY_HENKAN: return VK_CONVERT; case OBS_KEY_MUHENKAN: return VK_NONCONVERT; case OBS_KEY_KANJI: return VK_KANJI; case OBS_KEY_TOUROKU: return VK_OEM_FJ_TOUROKU; case OBS_KEY_MASSYO: return VK_OEM_FJ_MASSHOU; case OBS_KEY_HANGUL: return VK_HANGUL; case OBS_KEY_BACKSLASH_RT102: return VK_OEM_102; case OBS_KEY_MOUSE1: return VK_LBUTTON; case OBS_KEY_MOUSE2: return VK_RBUTTON; case OBS_KEY_MOUSE3: return VK_MBUTTON; case OBS_KEY_MOUSE4: return VK_XBUTTON1; case OBS_KEY_MOUSE5: return VK_XBUTTON2; case OBS_KEY_VK_CANCEL: return VK_CANCEL; case OBS_KEY_0x07: return 0x07; case OBS_KEY_0x0A: return 0x0A; case OBS_KEY_0x0B: return 0x0B; case OBS_KEY_0x0E: return 0x0E; case OBS_KEY_0x0F: return 0x0F; case OBS_KEY_0x16: return 0x16; case OBS_KEY_VK_JUNJA: return VK_JUNJA; case OBS_KEY_VK_FINAL: return VK_FINAL; 
case OBS_KEY_0x1A: return 0x1A; case OBS_KEY_VK_ACCEPT: return VK_ACCEPT; case OBS_KEY_VK_MODECHANGE: return VK_MODECHANGE; case OBS_KEY_VK_SELECT: return VK_SELECT; case OBS_KEY_VK_PRINT: return VK_PRINT; case OBS_KEY_VK_EXECUTE: return VK_EXECUTE; case OBS_KEY_VK_HELP: return VK_HELP; case OBS_KEY_0x30: return 0x30; case OBS_KEY_0x31: return 0x31; case OBS_KEY_0x32: return 0x32; case OBS_KEY_0x33: return 0x33; case OBS_KEY_0x34: return 0x34; case OBS_KEY_0x35: return 0x35; case OBS_KEY_0x36: return 0x36; case OBS_KEY_0x37: return 0x37; case OBS_KEY_0x38: return 0x38; case OBS_KEY_0x39: return 0x39; case OBS_KEY_0x3A: return 0x3A; case OBS_KEY_0x3B: return 0x3B; case OBS_KEY_0x3C: return 0x3C; case OBS_KEY_0x3D: return 0x3D; case OBS_KEY_0x3E: return 0x3E; case OBS_KEY_0x3F: return 0x3F; case OBS_KEY_0x40: return 0x40; case OBS_KEY_0x41: return 0x41; case OBS_KEY_0x42: return 0x42; case OBS_KEY_0x43: return 0x43; case OBS_KEY_0x44: return 0x44; case OBS_KEY_0x45: return 0x45; case OBS_KEY_0x46: return 0x46; case OBS_KEY_0x47: return 0x47; case OBS_KEY_0x48: return 0x48; case OBS_KEY_0x49: return 0x49; case OBS_KEY_0x4A: return 0x4A; case OBS_KEY_0x4B: return 0x4B; case OBS_KEY_0x4C: return 0x4C; case OBS_KEY_0x4D: return 0x4D; case OBS_KEY_0x4E: return 0x4E; case OBS_KEY_0x4F: return 0x4F; case OBS_KEY_0x50: return 0x50; case OBS_KEY_0x51: return 0x51; case OBS_KEY_0x52: return 0x52; case OBS_KEY_0x53: return 0x53; case OBS_KEY_0x54: return 0x54; case OBS_KEY_0x55: return 0x55; case OBS_KEY_0x56: return 0x56; case OBS_KEY_0x57: return 0x57; case OBS_KEY_0x58: return 0x58; case OBS_KEY_0x59: return 0x59; case OBS_KEY_0x5A: return 0x5A; case OBS_KEY_VK_LWIN: return VK_LWIN; case OBS_KEY_VK_RWIN: return VK_RWIN; case OBS_KEY_VK_APPS: return VK_APPS; case OBS_KEY_0x5E: return 0x5E; case OBS_KEY_VK_SLEEP: return VK_SLEEP; case OBS_KEY_VK_SEPARATOR: return VK_SEPARATOR; case OBS_KEY_0x88: return 0x88; case OBS_KEY_0x89: return 0x89; case OBS_KEY_0x8A: return 0x8A; case 
OBS_KEY_0x8B: return 0x8B; case OBS_KEY_0x8C: return 0x8C; case OBS_KEY_0x8D: return 0x8D; case OBS_KEY_0x8E: return 0x8E; case OBS_KEY_0x8F: return 0x8F; case OBS_KEY_VK_OEM_FJ_JISHO: return VK_OEM_FJ_JISHO; case OBS_KEY_VK_OEM_FJ_LOYA: return VK_OEM_FJ_LOYA; case OBS_KEY_VK_OEM_FJ_ROYA: return VK_OEM_FJ_ROYA; case OBS_KEY_0x97: return 0x97; case OBS_KEY_0x98: return 0x98; case OBS_KEY_0x99: return 0x99; case OBS_KEY_0x9A: return 0x9A; case OBS_KEY_0x9B: return 0x9B; case OBS_KEY_0x9C: return 0x9C; case OBS_KEY_0x9D: return 0x9D; case OBS_KEY_0x9E: return 0x9E; case OBS_KEY_0x9F: return 0x9F; case OBS_KEY_VK_LSHIFT: return VK_LSHIFT; case OBS_KEY_VK_RSHIFT: return VK_RSHIFT; case OBS_KEY_VK_LCONTROL: return VK_LCONTROL; case OBS_KEY_VK_RCONTROL: return VK_RCONTROL; case OBS_KEY_VK_LMENU: return VK_LMENU; case OBS_KEY_VK_RMENU: return VK_RMENU; case OBS_KEY_VK_BROWSER_BACK: return VK_BROWSER_BACK; case OBS_KEY_VK_BROWSER_FORWARD: return VK_BROWSER_FORWARD; case OBS_KEY_VK_BROWSER_REFRESH: return VK_BROWSER_REFRESH; case OBS_KEY_VK_BROWSER_STOP: return VK_BROWSER_STOP; case OBS_KEY_VK_BROWSER_SEARCH: return VK_BROWSER_SEARCH; case OBS_KEY_VK_BROWSER_FAVORITES: return VK_BROWSER_FAVORITES; case OBS_KEY_VK_BROWSER_HOME: return VK_BROWSER_HOME; case OBS_KEY_VK_VOLUME_MUTE: return VK_VOLUME_MUTE; case OBS_KEY_VK_VOLUME_DOWN: return VK_VOLUME_DOWN; case OBS_KEY_VK_VOLUME_UP: return VK_VOLUME_UP; case OBS_KEY_VK_MEDIA_NEXT_TRACK: return VK_MEDIA_NEXT_TRACK; case OBS_KEY_VK_MEDIA_PREV_TRACK: return VK_MEDIA_PREV_TRACK; case OBS_KEY_VK_MEDIA_STOP: return VK_MEDIA_STOP; case OBS_KEY_VK_MEDIA_PLAY_PAUSE: return VK_MEDIA_PLAY_PAUSE; case OBS_KEY_VK_LAUNCH_MAIL: return VK_LAUNCH_MAIL; case OBS_KEY_VK_LAUNCH_MEDIA_SELECT: return VK_LAUNCH_MEDIA_SELECT; case OBS_KEY_VK_LAUNCH_APP1: return VK_LAUNCH_APP1; case OBS_KEY_VK_LAUNCH_APP2: return VK_LAUNCH_APP2; case OBS_KEY_0xB8: return 0xB8; case OBS_KEY_0xB9: return 0xB9; case OBS_KEY_0xC1: return 0xC1; case OBS_KEY_0xC2: return 
0xC2; case OBS_KEY_0xC3: return 0xC3; case OBS_KEY_0xC4: return 0xC4; case OBS_KEY_0xC5: return 0xC5; case OBS_KEY_0xC6: return 0xC6; case OBS_KEY_0xC7: return 0xC7; case OBS_KEY_0xC8: return 0xC8; case OBS_KEY_0xC9: return 0xC9; case OBS_KEY_0xCA: return 0xCA; case OBS_KEY_0xCB: return 0xCB; case OBS_KEY_0xCC: return 0xCC; case OBS_KEY_0xCD: return 0xCD; case OBS_KEY_0xCE: return 0xCE; case OBS_KEY_0xCF: return 0xCF; case OBS_KEY_0xD0: return 0xD0; case OBS_KEY_0xD1: return 0xD1; case OBS_KEY_0xD2: return 0xD2; case OBS_KEY_0xD3: return 0xD3; case OBS_KEY_0xD4: return 0xD4; case OBS_KEY_0xD5: return 0xD5; case OBS_KEY_0xD6: return 0xD6; case OBS_KEY_0xD7: return 0xD7; case OBS_KEY_0xD8: return 0xD8; case OBS_KEY_0xD9: return 0xD9; case OBS_KEY_0xDA: return 0xDA; case OBS_KEY_VK_OEM_8: return VK_OEM_8; case OBS_KEY_0xE0: return 0xE0; case OBS_KEY_VK_OEM_AX: return VK_OEM_AX; case OBS_KEY_VK_ICO_HELP: return VK_ICO_HELP; case OBS_KEY_VK_ICO_00: return VK_ICO_00; case OBS_KEY_VK_PROCESSKEY: return VK_PROCESSKEY; case OBS_KEY_VK_ICO_CLEAR: return VK_ICO_CLEAR; case OBS_KEY_VK_PACKET: return VK_PACKET; case OBS_KEY_0xE8: return 0xE8; case OBS_KEY_VK_OEM_RESET: return VK_OEM_RESET; case OBS_KEY_VK_OEM_JUMP: return VK_OEM_JUMP; case OBS_KEY_VK_OEM_PA1: return VK_OEM_PA1; case OBS_KEY_VK_OEM_PA2: return VK_OEM_PA2; case OBS_KEY_VK_OEM_PA3: return VK_OEM_PA3; case OBS_KEY_VK_OEM_WSCTRL: return VK_OEM_WSCTRL; case OBS_KEY_VK_OEM_CUSEL: return VK_OEM_CUSEL; case OBS_KEY_VK_OEM_ATTN: return VK_OEM_ATTN; case OBS_KEY_VK_OEM_FINISH: return VK_OEM_FINISH; case OBS_KEY_VK_OEM_COPY: return VK_OEM_COPY; case OBS_KEY_VK_OEM_AUTO: return VK_OEM_AUTO; case OBS_KEY_VK_OEM_ENLW: return VK_OEM_ENLW; case OBS_KEY_VK_ATTN: return VK_ATTN; case OBS_KEY_VK_CRSEL: return VK_CRSEL; case OBS_KEY_VK_EXSEL: return VK_EXSEL; case OBS_KEY_VK_EREOF: return VK_EREOF; case OBS_KEY_VK_PLAY: return VK_PLAY; case OBS_KEY_VK_ZOOM: return VK_ZOOM; case OBS_KEY_VK_NONAME: return VK_NONAME; case 
OBS_KEY_VK_PA1: return VK_PA1; case OBS_KEY_VK_OEM_CLEAR: return VK_OEM_CLEAR; /* TODO: Implement keys for non-US keyboards */ default:; } return 0; } bool obs_hotkeys_platform_init(struct obs_core_hotkeys *hotkeys) { hotkeys->platform_context = bzalloc(sizeof(obs_hotkeys_platform_t)); for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) hotkeys->platform_context->vk_codes[i] = get_virtual_key(i); return true; } void obs_hotkeys_platform_free(struct obs_core_hotkeys *hotkeys) { bfree(hotkeys->platform_context); hotkeys->platform_context = NULL; } static bool vk_down(DWORD vk) { short state = GetAsyncKeyState(vk); bool down = (state & 0x8000) != 0; return down; } bool obs_hotkeys_platform_is_pressed(obs_hotkeys_platform_t *context, obs_key_t key) { if (key == OBS_KEY_META) { return vk_down(VK_LWIN) || vk_down(VK_RWIN); } UNUSED_PARAMETER(context); return vk_down(obs_key_to_virtual_key(key)); } void obs_key_to_str(obs_key_t key, struct dstr *str) { wchar_t name[128] = L""; UINT scan_code; int vk; if (key == OBS_KEY_NONE) { return; } else if (key >= OBS_KEY_F13 && key <= OBS_KEY_F24) { dstr_printf(str, "F%d", (int)(key - OBS_KEY_F13 + 13)); return; } else if (key >= OBS_KEY_MOUSE1 && key <= OBS_KEY_MOUSE29) { if (obs->hotkeys.translations[key]) { dstr_copy(str, obs->hotkeys.translations[key]); } else { dstr_printf(str, "Mouse %d", (int)(key - OBS_KEY_MOUSE1 + 1)); } return; } if (key == OBS_KEY_PAUSE) { dstr_copy(str, obs_get_hotkey_translation(key, "Pause")); return; } else if (key == OBS_KEY_META) { dstr_copy(str, obs_get_hotkey_translation(key, "Windows")); return; } vk = obs_key_to_virtual_key(key); scan_code = MapVirtualKey(vk, 0) << 16; switch (vk) { case VK_HOME: case VK_END: case VK_LEFT: case VK_UP: case VK_RIGHT: case VK_DOWN: case VK_PRIOR: case VK_NEXT: case VK_INSERT: case VK_DELETE: case VK_NUMLOCK: scan_code |= 0x01000000; } if ((key < OBS_KEY_VK_CANCEL || key > OBS_KEY_VK_OEM_CLEAR) && scan_code != 0 && GetKeyNameTextW(scan_code, name, 128) != 0) { 
dstr_from_wcs(str, name); } else if (key != OBS_KEY_NONE) { dstr_copy(str, obs_key_to_name(key)); } } obs_key_t obs_key_from_virtual_key(int code) { obs_hotkeys_platform_t *platform = obs->hotkeys.platform_context; for (size_t i = 0; i < OBS_KEY_LAST_VALUE; i++) { if (platform->vk_codes[i] == code) { return (obs_key_t)i; } } return OBS_KEY_NONE; } int obs_key_to_virtual_key(obs_key_t key) { if (key == OBS_KEY_META) return VK_LWIN; return obs->hotkeys.platform_context->vk_codes[(int)key]; } static inline void add_combo_key(obs_key_t key, struct dstr *str) { struct dstr key_str = {0}; obs_key_to_str(key, &key_str); if (!dstr_is_empty(&key_str)) { if (!dstr_is_empty(str)) { dstr_cat(str, " + "); } dstr_cat_dstr(str, &key_str); } dstr_free(&key_str); } void obs_key_combination_to_str(obs_key_combination_t combination, struct dstr *str) { if ((combination.modifiers & INTERACT_CONTROL_KEY) != 0) { add_combo_key(OBS_KEY_CONTROL, str); } if ((combination.modifiers & INTERACT_COMMAND_KEY) != 0) { add_combo_key(OBS_KEY_META, str); } if ((combination.modifiers & INTERACT_ALT_KEY) != 0) { add_combo_key(OBS_KEY_ALT, str); } if ((combination.modifiers & INTERACT_SHIFT_KEY) != 0) { add_combo_key(OBS_KEY_SHIFT, str); } if (combination.key != OBS_KEY_NONE) { add_combo_key(combination.key, str); } } bool sym_initialize_called = false; void reset_win32_symbol_paths(void) { static BOOL(WINAPI * sym_initialize_w)(HANDLE, const wchar_t *, BOOL); static BOOL(WINAPI * sym_set_search_path_w)(HANDLE, const wchar_t *); static bool funcs_initialized = false; static bool initialize_success = false; struct obs_module *module = obs->first_module; struct dstr path_str = {0}; DARRAY(char *) paths; wchar_t *path_str_w = NULL; char *abspath; da_init(paths); if (!funcs_initialized) { HMODULE mod; funcs_initialized = true; mod = LoadLibraryW(L"DbgHelp"); if (!mod) return; sym_initialize_w = (void *)GetProcAddress(mod, "SymInitializeW"); sym_set_search_path_w = (void *)GetProcAddress(mod, 
"SymSetSearchPathW"); if (!sym_initialize_w || !sym_set_search_path_w) { FreeLibrary(mod); return; } initialize_success = true; // Leaks 'mod' once. } if (!initialize_success) return; abspath = os_get_abs_path_ptr("."); if (abspath) da_push_back(paths, &abspath); while (module) { bool found = false; struct dstr path = {0}; char *path_end; dstr_copy(&path, module->bin_path); dstr_replace(&path, "/", "\\"); path_end = strrchr(path.array, '\\'); if (!path_end) { module = module->next; dstr_free(&path); continue; } *path_end = 0; abspath = os_get_abs_path_ptr(path.array); if (abspath) { for (size_t i = 0; i < paths.num; i++) { const char *existing_path = paths.array[i]; if (astrcmpi(abspath, existing_path) == 0) { found = true; break; } } if (!found) { da_push_back(paths, &abspath); } else { bfree(abspath); } } dstr_free(&path); module = module->next; } for (size_t i = 0; i < paths.num; i++) { const char *path = paths.array[i]; if (path && *path) { if (i != 0) dstr_cat(&path_str, ";"); dstr_cat(&path_str, paths.array[i]); } } if (path_str.array) { os_utf8_to_wcs_ptr(path_str.array, path_str.len, &path_str_w); if (path_str_w) { if (!sym_initialize_called) { sym_initialize_w(GetCurrentProcess(), path_str_w, false); sym_initialize_called = true; } else { sym_set_search_path_w(GetCurrentProcess(), path_str_w); } bfree(path_str_w); } } for (size_t i = 0; i < paths.num; i++) bfree(paths.array[i]); dstr_free(&path_str); da_free(paths); } extern void initialize_crash_handler(void); void obs_init_win32_crash_handler(void) { initialize_crash_handler(); } bool initialize_com(void) { const HRESULT hr = CoInitializeEx(0, COINIT_APARTMENTTHREADED); const bool success = SUCCEEDED(hr); if (!success) blog(LOG_ERROR, "CoInitializeEx failed: 0x%08X", hr); return success; } void uninitialize_com(void) { CoUninitialize(); } obs-studio-32.1.0-sources/libobs/obs-nal.c000644 001751 001751 00000004006 15153330235 021205 0ustar00runnerrunner000000 000000 
/****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "obs-nal.h" /* NOTE: I noticed that FFmpeg does some unusual special handling of certain * scenarios that I was unaware of, so instead of just searching for {0, 0, 1} * we'll just use the code from FFmpeg - http://www.ffmpeg.org/ */ static const uint8_t *ff_avc_find_startcode_internal(const uint8_t *p, const uint8_t *end) { const uint8_t *a = p + 4 - ((intptr_t)p & 3); for (end -= 3; p < a && p < end; p++) { if (p[0] == 0 && p[1] == 0 && p[2] == 1) return p; } for (end -= 3; p < end; p += 4) { uint32_t x = *(const uint32_t *)p; if ((x - 0x01010101) & (~x) & 0x80808080) { if (p[1] == 0) { if (p[0] == 0 && p[2] == 1) return p; if (p[2] == 0 && p[3] == 1) return p + 1; } if (p[3] == 0) { if (p[2] == 0 && p[4] == 1) return p + 2; if (p[4] == 0 && p[5] == 1) return p + 3; } } } for (end += 3; p < end; p++) { if (p[0] == 0 && p[1] == 0 && p[2] == 1) return p; } return end + 3; } const uint8_t *obs_nal_find_startcode(const uint8_t *p, const uint8_t *end) { const uint8_t *out = ff_avc_find_startcode_internal(p, end); if (p < out && out < end && !out[-1]) out--; return out; } obs-studio-32.1.0-sources/libobs/obs-ffmpeg-compat.h000644 001751 001751 00000001127 15153330235 023166 0ustar00runnerrunner000000 
000000 #pragma once #include /* LIBAVCODEC_VERSION_CHECK checks for the right version of libav and FFmpeg * a is the major version * b and c the minor and micro versions of libav * d and e the minor and micro versions of FFmpeg */ #define LIBAVCODEC_VERSION_CHECK(a, b, c, d, e) \ ((LIBAVCODEC_VERSION_MICRO < 100 && LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(a, b, c)) || \ (LIBAVCODEC_VERSION_MICRO >= 100 && LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(a, d, e))) #define INPUT_BUFFER_PADDING_SIZE AV_INPUT_BUFFER_PADDING_SIZE obs-studio-32.1.0-sources/libobs/cmake/000755 001751 001751 00000000000 15153330731 020567 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/cmake/os-linux.cmake000644 001751 001751 00000005341 15153330235 023351 0ustar00runnerrunner000000 000000 find_package(LibUUID REQUIRED) find_package(X11 REQUIRED) find_package(X11_XCB REQUIRED) find_package(XCB REQUIRED XCB OPTIONAL_COMPONENTS XINPUT) find_package(Gio) target_sources( libobs PRIVATE obs-nix-platform.c obs-nix-platform.h obs-nix-x11.c obs-nix.c util/pipe-posix.c util/platform-nix.c util/threading-posix.c util/threading-posix.h ) target_compile_definitions( libobs PRIVATE OBS_INSTALL_PREFIX="${OBS_INSTALL_PREFIX}" $<$:ENABLE_DARRAY_TYPE_TEST> $<$:ENABLE_DARRAY_TYPE_TEST> ) if(CMAKE_C_COMPILER_ID STREQUAL GNU) # * Silence type-limits warning in line 292 of libobs/utils/utf8.c if(CMAKE_C_COMPILER_VERSION VERSION_GREATER_EQUAL 12.3.0) target_compile_options(libobs PRIVATE -Wno-error=type-limits) endif() endif() set(CMAKE_M_LIBS "") include(CheckCSourceCompiles) set(LIBM_TEST_SOURCE "#include\nfloat f; int main(){sqrt(f);return 0;}") check_c_source_compiles("${LIBM_TEST_SOURCE}" HAVE_MATH_IN_STD_LIB) set(UUID_TEST_SOURCE "#include\nint main(){return 0;}") check_c_source_compiles("${UUID_TEST_SOURCE}" HAVE_UUID_HEADER) if(NOT HAVE_UUID_HEADER) message(FATAL_ERROR "Required system header not found.") endif() target_link_libraries( libobs PRIVATE X11::X11 X11::XCB XCB::XCB 
LibUUID::LibUUID ${CMAKE_DL_LIBS} $<$>:m> $<$:XCB::XINPUT> ) if(ENABLE_PULSEAUDIO) find_package(PulseAudio REQUIRED) target_sources( libobs PRIVATE audio-monitoring/pulse/pulseaudio-enum-devices.c audio-monitoring/pulse/pulseaudio-monitoring-available.c audio-monitoring/pulse/pulseaudio-output.c audio-monitoring/pulse/pulseaudio-wrapper.c audio-monitoring/pulse/pulseaudio-wrapper.h ) target_link_libraries(libobs PRIVATE PulseAudio::PulseAudio) target_enable_feature(libobs "PulseAudio audio monitoring (Linux)") else() target_sources(libobs PRIVATE audio-monitoring/null/null-audio-monitoring.c) target_disable_feature(libobs "PulseAudio audio monitoring (Linux)") endif() if(TARGET gio::gio) target_sources(libobs PRIVATE util/platform-nix-dbus.c util/platform-nix-portal.c) target_link_libraries(libobs PRIVATE gio::gio) endif() if(ENABLE_WAYLAND) find_package(Wayland REQUIRED Client) find_package(Xkbcommon REQUIRED) target_sources(libobs PRIVATE obs-nix-wayland.c) target_link_libraries(libobs PRIVATE Wayland::Client xkbcommon::xkbcommon) target_enable_feature(libobs "Wayland compositor support (Linux)") else() target_disable_feature(libobs "Wayland compositor support (Linux)") endif() set_target_properties(libobs PROPERTIES OUTPUT_NAME obs) obs-studio-32.1.0-sources/libobs/cmake/obs-version.cmake000644 001751 001751 00000000564 15153330235 024043 0ustar00runnerrunner000000 000000 add_library(libobs-version OBJECT) add_library(OBS::libobs-version ALIAS libobs-version) configure_file(obsversion.c.in obsversion.c @ONLY) target_sources(libobs-version PRIVATE obsversion.c PUBLIC obsversion.h) target_include_directories(libobs-version PUBLIC "$") set_property(TARGET libobs-version PROPERTY FOLDER core) obs-studio-32.1.0-sources/libobs/cmake/os-windows.cmake000644 001751 001751 00000005056 15153330235 023707 0ustar00runnerrunner000000 000000 if(NOT TARGET OBS::obfuscate) add_library(obs-obfuscate INTERFACE) add_library(OBS::obfuscate ALIAS obs-obfuscate) 
target_sources(obs-obfuscate INTERFACE util/windows/obfuscate.c util/windows/obfuscate.h) target_include_directories(obs-obfuscate INTERFACE "${CMAKE_CURRENT_SOURCE_DIR}") endif() if(NOT TARGET OBS::comutils) add_library(obs-comutils INTERFACE) add_library(OBS::COMutils ALIAS obs-comutils) target_sources(obs-comutils INTERFACE util/windows/ComPtr.hpp) target_include_directories(obs-comutils INTERFACE "${CMAKE_CURRENT_SOURCE_DIR}") endif() if(NOT TARGET OBS::winhandle) add_library(obs-winhandle INTERFACE) add_library(OBS::winhandle ALIAS obs-winhandle) target_sources(obs-winhandle INTERFACE util/windows/WinHandle.hpp) target_include_directories(obs-winhandle INTERFACE "${CMAKE_CURRENT_SOURCE_DIR}") endif() if(NOT TARGET OBS::threading-windows) add_library(obs-threading-windows INTERFACE) add_library(OBS::threading-windows ALIAS obs-threading-windows) target_sources(obs-threading-windows INTERFACE util/threading-windows.h) target_include_directories(obs-threading-windows INTERFACE "${CMAKE_CURRENT_SOURCE_DIR}") endif() if(NOT TARGET OBS::w32-pthreads) add_subdirectory("${CMAKE_SOURCE_DIR}/deps/w32-pthreads" "${CMAKE_BINARY_DIR}/deps/w32-pthreads") endif() if(NOT OBS_PARENT_ARCHITECTURE STREQUAL CMAKE_VS_PLATFORM_NAME) return() endif() configure_file(cmake/windows/obs-module.rc.in libobs.rc) target_sources( libobs PRIVATE audio-monitoring/win32/wasapi-enum-devices.c audio-monitoring/win32/wasapi-monitoring-available.c audio-monitoring/win32/wasapi-output.c audio-monitoring/win32/wasapi-output.h libobs.rc obs-win-crash-handler.c obs-windows.c util/pipe-windows.c util/platform-windows.c util/threading-windows.c util/threading-windows.h util/windows/CoTaskMemPtr.hpp util/windows/device-enum.c util/windows/device-enum.h util/windows/HRError.hpp util/windows/win-registry.h util/windows/win-version.h util/windows/window-helpers.c util/windows/window-helpers.h ) target_compile_options(libobs PRIVATE $<$:/EHc->) set_source_files_properties( obs-win-crash-handler.c PROPERTIES 
COMPILE_DEFINITIONS OBS_VERSION="${OBS_VERSION_CANONICAL}" ) target_link_libraries( libobs PRIVATE Avrt Dwmapi Dxgi winmm Rpcrt4 OBS::obfuscate OBS::winhandle OBS::COMutils PUBLIC OBS::w32-pthreads ) target_link_options(libobs PRIVATE /IGNORE:4098 /SAFESEH:NO) set_target_properties(libobs PROPERTIES PREFIX "" OUTPUT_NAME "obs") obs-studio-32.1.0-sources/libobs/cmake/libobsConfig.cmake.in000644 001751 001751 00000000702 15153330235 024574 0ustar00runnerrunner000000 000000 @PACKAGE_INIT@ include(CMakeFindDependencyMacro) list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/finders") if(MSVC) find_dependency(w32-pthreads REQUIRED) endif() find_dependency(SIMDe REQUIRED) find_dependency(Threads REQUIRED) include("${CMAKE_CURRENT_LIST_DIR}/@TARGETS_EXPORT_NAME@.cmake") check_required_components("@PROJECT_NAME@") set_property(TARGET OBS::libobs APPEND PROPERTY INTERFACE_COMPILE_DEFINITIONS HAVE_OBSCONFIG_H) obs-studio-32.1.0-sources/libobs/cmake/os-freebsd.cmake000644 001751 001751 00000005107 15153330235 023624 0ustar00runnerrunner000000 000000 find_package(LibUUID REQUIRED) find_package(X11 REQUIRED) find_package(X11_XCB REQUIRED) find_package(XCB REQUIRED XCB OPTIONAL_COMPONENTS XINPUT) find_package(Gio) find_package(Sysinfo REQUIRED) set(CMAKE_M_LIBS "") include(CheckCSourceCompiles) set(LIBM_TEST_SOURCE "#include\nfloat f; int main(){sqrt(f);return 0;}") check_c_source_compiles("${LIBM_TEST_SOURCE}" HAVE_MATH_IN_STD_LIB) set(CMAKE_REQUIRED_INCLUDES "/usr/local/include") set(UUID_TEST_SOURCE "#include\nint main(){return 0;}") check_c_source_compiles("${UUID_TEST_SOURCE}" HAVE_UUID_HEADER) if(NOT HAVE_UUID_HEADER) message(FATAL_ERROR "Required system header not found.") endif() target_sources( libobs PRIVATE obs-nix-platform.c obs-nix-platform.h obs-nix-x11.c obs-nix.c util/pipe-posix.c util/platform-nix.c util/threading-posix.c util/threading-posix.h ) target_compile_definitions( libobs PRIVATE OBS_INSTALL_PREFIX="${OBS_INSTALL_PREFIX}" 
$<$:ENABLE_DARRAY_TYPE_TEST> $<$:ENABLE_DARRAY_TYPE_TEST> ) target_link_libraries( libobs PRIVATE X11::XCB XCB::XCB LibUUID::LibUUID Sysinfo::Sysinfo ${CMAKE_DL_LIBS} $<$>:m> $<$:XCB::XINPUT> ) if(ENABLE_PULSEAUDIO) find_package(PulseAudio REQUIRED) target_sources( libobs PRIVATE audio-monitoring/pulse/pulseaudio-enum-devices.c audio-monitoring/pulse/pulseaudio-monitoring-available.c audio-monitoring/pulse/pulseaudio-output.c audio-monitoring/pulse/pulseaudio-wrapper.c audio-monitoring/pulse/pulseaudio-wrapper.h ) target_link_libraries(libobs PRIVATE PulseAudio::PulseAudio) target_enable_feature(libobs "PulseAudio audio monitoring (FreeBSD)") else() target_sources(libobs PRIVATE audio-monitoring/null/null-audio-monitoring.c) target_disable_feature(libobs "PulseAudio audio monitoring (FreeBSD)") endif() if(TARGET gio::gio) target_sources(libobs PRIVATE util/platform-nix-dbus.c util/platform-nix-portal.c) target_link_libraries(libobs PRIVATE gio::gio) endif() if(ENABLE_WAYLAND) find_package(Wayland REQUIRED Client) find_package(Xkbcommon REQUIRED) target_sources(libobs PRIVATE obs-nix-wayland.c) target_link_libraries(libobs PRIVATE Wayland::Client xkbcommon::xkbcommon) target_enable_feature(libobs "Wayland compositor support (FreeBSD)") else() target_disable_feature(libobs "Wayland compositor support (FreeBSD)") endif() set_target_properties(libobs PROPERTIES OUTPUT_NAME obs) obs-studio-32.1.0-sources/libobs/cmake/os-macos.cmake000644 001751 001751 00000002052 15153330235 023310 0ustar00runnerrunner000000 000000 target_link_libraries( libobs PRIVATE "$" "$" "$" "$" "$" "$" "$" ) target_sources( libobs PRIVATE audio-monitoring/osx/coreaudio-enum-devices.c audio-monitoring/osx/coreaudio-monitoring-available.c audio-monitoring/osx/coreaudio-output.c audio-monitoring/osx/mac-helpers.h obs-cocoa.m util/apple/cfstring-utils.h util/pipe-posix.c util/platform-cocoa.m util/platform-nix.c util/threading-posix.c util/threading-posix.h ) target_compile_options(libobs PUBLIC 
"$<$>:-Wno-strict-prototypes;-Wno-shorten-64-to-32>") set_property(SOURCE obs-cocoa.m util/platform-cocoa.m PROPERTY COMPILE_OPTIONS -fobjc-arc) set_property(TARGET libobs PROPERTY FRAMEWORK TRUE) obs-studio-32.1.0-sources/libobs/cmake/linux/000755 001751 001751 00000000000 15153330731 021726 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/cmake/linux/libobs.pc.in000644 001751 001751 00000000571 15153330235 024133 0ustar00runnerrunner000000 000000 prefix=@CMAKE_INSTALL_PREFIX@ exec_prefix=${prefix} libdir=${prefix}/lib includedir=${prefix}/include Name: libobs Description: OBS Studio core compositor library Version: @OBS_VERSION_CANONICAL@ Requires: Libs: -L${libdir} -lobs Libs.private: -pthread -lm Cflags: -I${includedir} -std=gnu@CMAKE_C_STANDARD@ -fPIC -fvisibility=hidden -fopenmp-simd -Werror -DHAVE_OBSCONFIG_H obs-studio-32.1.0-sources/libobs/cmake/macos/000755 001751 001751 00000000000 15153330731 021671 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/cmake/macos/entitlements.plist000644 001751 001751 00000000435 15153330235 025462 0ustar00runnerrunner000000 000000 com.apple.security.cs.disable-library-validation obs-studio-32.1.0-sources/libobs/cmake/windows/000755 001751 001751 00000000000 15153330731 022261 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/cmake/windows/obs-module.rc.in000644 001751 001751 00000001255 15153330235 025264 0ustar00runnerrunner000000 000000 1 VERSIONINFO FILEVERSION ${OBS_VERSION_MAJOR},${OBS_VERSION_MINOR},${OBS_VERSION_PATCH},0 BEGIN BLOCK "StringFileInfo" BEGIN BLOCK "040904B0" BEGIN VALUE "CompanyName", "${OBS_COMPANY_NAME}" VALUE "FileDescription", "OBS Library" VALUE "FileVersion", "${OBS_VERSION_CANONICAL}" VALUE "ProductName", "${OBS_PRODUCT_NAME}" VALUE "ProductVersion", "${OBS_VERSION_CANONICAL}" VALUE "Comments", "${OBS_COMMENTS}" VALUE "LegalCopyright", "${OBS_LEGAL_COPYRIGHT}" VALUE "InternalName", "libobs" VALUE "OriginalFilename", "libobs" END END BLOCK 
"VarFileInfo" BEGIN VALUE "Translation", 0x0409, 0x04B0 END END obs-studio-32.1.0-sources/libobs/obs-config.h000644 001751 001751 00000003221 15153330235 021703 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once /* * LIBOBS_API_VER is returned by module_version in each module. * * Libobs uses semantic versioning. See http://semver.org/ for more * information. 
*/ /* * Increment if major breaking API changes */ #define LIBOBS_API_MAJOR_VER 32 /* * Increment if backward-compatible additions * * Reset to zero each major version */ #define LIBOBS_API_MINOR_VER 1 /* * Increment if backward-compatible bug fix * * Reset to zero each major or minor version */ #define LIBOBS_API_PATCH_VER 0 #define MAKE_SEMANTIC_VERSION(major, minor, patch) ((major << 24) | (minor << 16) | patch) #define LIBOBS_API_VER MAKE_SEMANTIC_VERSION(LIBOBS_API_MAJOR_VER, LIBOBS_API_MINOR_VER, LIBOBS_API_PATCH_VER) #include "obsconfig.h" #define OBS_INSTALL_DATA_PATH OBS_INSTALL_PREFIX "/" OBS_DATA_PATH obs-studio-32.1.0-sources/libobs/obs-nal.h000644 001751 001751 00000002257 15153330235 021220 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once #include "util/c99defs.h" #ifdef __cplusplus extern "C" { #endif enum { OBS_NAL_PRIORITY_DISPOSABLE = 0, OBS_NAL_PRIORITY_LOW = 1, OBS_NAL_PRIORITY_HIGH = 2, OBS_NAL_PRIORITY_HIGHEST = 3, }; EXPORT const uint8_t *obs_nal_find_startcode(const uint8_t *p, const uint8_t *end); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-encoder.h000644 001751 001751 00000026623 15153330235 022070 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once /** * @file * @brief header for modules implementing encoders. * * Encoders are modules that implement some codec that can be used by libobs * to process output data. 
*/ #ifdef __cplusplus extern "C" { #endif struct obs_encoder; typedef struct obs_encoder obs_encoder_t; #define OBS_ENCODER_CAP_DEPRECATED (1 << 0) #define OBS_ENCODER_CAP_PASS_TEXTURE (1 << 1) #define OBS_ENCODER_CAP_DYN_BITRATE (1 << 2) #define OBS_ENCODER_CAP_INTERNAL (1 << 3) #define OBS_ENCODER_CAP_ROI (1 << 4) #define OBS_ENCODER_CAP_SCALING (1 << 5) /** Specifies the encoder type */ enum obs_encoder_type { OBS_ENCODER_AUDIO, /**< The encoder provides an audio codec */ OBS_ENCODER_VIDEO /**< The encoder provides a video codec */ }; /* encoder_packet_time is used for timestamping events associated * with each video frame. This is useful for deriving absolute * timestamps (i.e. wall-clock based formats) and measuring latency. * * For each frame, there are four events of interest, described in * the encoder_packet_time struct, namely cts, fer, ferc, and pir. * The timebase of these four events is os_gettime_ns(), which provides * very high resolution timestamping, and the ability to convert the * timing to any other time format. * * Each frame follows a timeline in the following temporal order: * CTS, FER, FERC, PIR * * PTS is the integer-based monotonically increasing value that is used * to associate an encoder_packet_time entry with a specific encoder_packet. */ struct encoder_packet_time { /* PTS used to associate uncompressed frames with encoded packets. */ int64_t pts; /* Composition timestamp is when the frame was rendered, * captured via os_gettime_ns(). */ uint64_t cts; /* FERC (Frame Encode Request) is when the frame was * submitted to the encoder for encoding via the encode * callback (e.g. encode_texture2()), captured via os_gettime_ns(). */ uint64_t fer; /* FERC (Frame Encode Request Complete) is when * the associated FER event completed. 
If the encode * is synchronous with the call, this means FERC - FER * measures the actual encode time, otherwise if the * encode is asynchronous, it measures the pipeline * delay between encode request and encode complete. * FERC is also captured via os_gettime_ns(). */ uint64_t ferc; /* PIR (Packet Interleave Request) is when the encoded packet * is interleaved with the stream. PIR is captured via * os_gettime_ns(). The difference between PIR and CTS gives * the total latency between frame rendering * and packet interleaving. */ uint64_t pir; }; /** Encoder output packet */ struct encoder_packet { uint8_t *data; /**< Packet data */ size_t size; /**< Packet size */ int64_t pts; /**< Presentation timestamp */ int64_t dts; /**< Decode timestamp */ int32_t timebase_num; /**< Timebase numerator */ int32_t timebase_den; /**< Timebase denominator */ enum obs_encoder_type type; /**< Encoder type */ bool keyframe; /**< Is a keyframe */ /* ---------------------------------------------------------------- */ /* Internal video variables (will be parsed automatically) */ /* DTS in microseconds */ int64_t dts_usec; /* System DTS in microseconds */ int64_t sys_dts_usec; /** * Packet priority * * This is generally used by video encoders to specify the priority * of the packet. */ int priority; /** * Dropped packet priority * * If this packet needs to be dropped, the next packet must be of this * priority or higher to continue transmission. 
*/ int drop_priority; /** Audio track index (used with outputs) */ size_t track_idx; /** Encoder from which the track originated from */ obs_encoder_t *encoder; }; /** Encoder input frame */ struct encoder_frame { /** Data for the frame/audio */ uint8_t *data[MAX_AV_PLANES]; /** size of each plane */ uint32_t linesize[MAX_AV_PLANES]; /** Number of frames (audio only) */ uint32_t frames; /** Presentation timestamp */ int64_t pts; }; /** Encoder region of interest */ struct obs_encoder_roi { /* The rectangle edges of the region are specified as number of pixels * from the input video's top and left edges (i.e. row/column 0). */ uint32_t top; uint32_t bottom; uint32_t left; uint32_t right; /* Priority is specified as a float value between -1 and 1. * These are converted to encoder-specific values by the encoder. * Values above 0 tell the encoder to increase quality for that region, * values below tell it to worsen it. * Not all encoders support negative values and they may be ignored. */ float priority; }; struct gs_texture; /** Encoder input texture */ struct encoder_texture { /** Shared texture handle, only set on Windows */ uint32_t handle; /** Textures, length determined by format */ struct gs_texture *tex[4]; }; /** * Encoder interface * * Encoders have a limited usage with OBS. You are not generally supposed to * implement every encoder out there. Generally, these are limited or specific * encoders for h264/aac for streaming and recording. It doesn't have to be * *just* h264 or aac of course, but generally those are the expected encoders. * * That being said, other encoders will be kept in mind for future use. 
*/ struct obs_encoder_info { /* ----------------------------------------------------------------- */ /* Required implementation */ /** Specifies the named identifier of this encoder */ const char *id; /** Specifies the encoder type (video or audio) */ enum obs_encoder_type type; /** Specifies the codec */ const char *codec; /** * Gets the full translated name of this encoder * * @param type_data The type_data variable of this structure * @return Translated name of the encoder */ const char *(*get_name)(void *type_data); /** * Creates the encoder with the specified settings * * @param settings Settings for the encoder * @param encoder OBS encoder context * @return Data associated with this encoder context, or * NULL if initialization failed. */ void *(*create)(obs_data_t *settings, obs_encoder_t *encoder); /** * Destroys the encoder data * * @param data Data associated with this encoder context */ void (*destroy)(void *data); /** * Encodes frame(s) and outputs encoded packets as they become * available. * * @param data Data associated with this encoder * context * @param[in] frame Raw audio/video data to encode * @param[out] packet Encoder packet output, if any * @param[out] received_packet Set to true if a packet was received, * false otherwise * @return true if successful, false otherwise. 
*/ bool (*encode)(void *data, struct encoder_frame *frame, struct encoder_packet *packet, bool *received_packet); /** Audio encoder only: Returns the frame size for this encoder */ size_t (*get_frame_size)(void *data); /* ----------------------------------------------------------------- */ /* Optional implementation */ /** * Gets the default settings for this encoder * * @param[out] settings Data to assign default settings to */ void (*get_defaults)(obs_data_t *settings); /** * Gets the property information of this encoder * * @return The properties data */ obs_properties_t *(*get_properties)(void *data); /** * Updates the settings for this encoder (usually used for things like * changing bitrate while active) * * @param data Data associated with this encoder context * @param settings New settings for this encoder * @return true if successful, false otherwise */ bool (*update)(void *data, obs_data_t *settings); /** * Returns extra data associated with this encoder (usually header) * * @param data Data associated with this encoder context * @param[out] extra_data Pointer to receive the extra data * @param[out] size Pointer to receive the size of the extra * data * @return true if extra data available, false * otherwise */ bool (*get_extra_data)(void *data, uint8_t **extra_data, size_t *size); /** * Gets the SEI data, if any * * @param data Data associated with this encoder context * @param[out] sei_data Pointer to receive the SEI data * @param[out] size Pointer to receive the SEI data size * @return true if SEI data available, false otherwise */ bool (*get_sei_data)(void *data, uint8_t **sei_data, size_t *size); /** * Returns desired audio format and sample information * * @param data Data associated with this encoder context * @param[in/out] info Audio format information */ void (*get_audio_info)(void *data, struct audio_convert_info *info); /** * Returns desired video format information * * @param data Data associated with this encoder context * @param[in/out] 
info Video format information */ void (*get_video_info)(void *data, struct video_scale_info *info); void *type_data; void (*free_type_data)(void *type_data); uint32_t caps; /** * Gets the default settings for this encoder * * If get_defaults is also defined, both will be called: get_defaults * first, then get_defaults2. * * @param[out] settings Data to assign default settings to * @param[in] type_data The type_data variable of this structure */ void (*get_defaults2)(obs_data_t *settings, void *type_data); /** * Gets the property information of this encoder * * @param[in] data Pointer from create (or null) * @param[in] type_data The type_data variable of this structure * @return The properties data */ obs_properties_t *(*get_properties2)(void *data, void *type_data); bool (*encode_texture)(void *data, uint32_t handle, int64_t pts, uint64_t lock_key, uint64_t *next_key, struct encoder_packet *packet, bool *received_packet); bool (*encode_texture2)(void *data, struct encoder_texture *texture, int64_t pts, uint64_t lock_key, uint64_t *next_key, struct encoder_packet *packet, bool *received_packet); /** Audio encoder only: Returns padding, in samples, that must be skipped at the start of the stream. */ uint32_t (*get_priming_samples)(void *data); }; EXPORT void obs_register_encoder_s(const struct obs_encoder_info *info, size_t size); /** * Register an encoder definition to the current obs context. This should be * used in obs_module_load. * * @param info Pointer to the encoder definition structure. 
*/ #define obs_register_encoder(info) obs_register_encoder_s(info, sizeof(struct obs_encoder_info)) #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/obs-hotkeys.h000644 001751 001751 00000043440 15153330235 022133 0ustar00runnerrunner000000 000000 OBS_HOTKEY(OBS_KEY_NONE) OBS_HOTKEY(OBS_KEY_RETURN) OBS_HOTKEY(OBS_KEY_ENTER) OBS_HOTKEY(OBS_KEY_ESCAPE) OBS_HOTKEY(OBS_KEY_TAB) OBS_HOTKEY(OBS_KEY_BACKTAB) OBS_HOTKEY(OBS_KEY_BACKSPACE) OBS_HOTKEY(OBS_KEY_INSERT) OBS_HOTKEY(OBS_KEY_DELETE) OBS_HOTKEY(OBS_KEY_PAUSE) OBS_HOTKEY(OBS_KEY_PRINT) OBS_HOTKEY(OBS_KEY_SYSREQ) OBS_HOTKEY(OBS_KEY_CLEAR) OBS_HOTKEY(OBS_KEY_HOME) OBS_HOTKEY(OBS_KEY_END) OBS_HOTKEY(OBS_KEY_LEFT) OBS_HOTKEY(OBS_KEY_UP) OBS_HOTKEY(OBS_KEY_RIGHT) OBS_HOTKEY(OBS_KEY_DOWN) OBS_HOTKEY(OBS_KEY_PAGEUP) OBS_HOTKEY(OBS_KEY_PAGEDOWN) OBS_HOTKEY(OBS_KEY_SHIFT) OBS_HOTKEY(OBS_KEY_CONTROL) OBS_HOTKEY(OBS_KEY_META) OBS_HOTKEY(OBS_KEY_ALT) OBS_HOTKEY(OBS_KEY_ALTGR) OBS_HOTKEY(OBS_KEY_CAPSLOCK) OBS_HOTKEY(OBS_KEY_NUMLOCK) OBS_HOTKEY(OBS_KEY_SCROLLLOCK) OBS_HOTKEY(OBS_KEY_F1) OBS_HOTKEY(OBS_KEY_F2) OBS_HOTKEY(OBS_KEY_F3) OBS_HOTKEY(OBS_KEY_F4) OBS_HOTKEY(OBS_KEY_F5) OBS_HOTKEY(OBS_KEY_F6) OBS_HOTKEY(OBS_KEY_F7) OBS_HOTKEY(OBS_KEY_F8) OBS_HOTKEY(OBS_KEY_F9) OBS_HOTKEY(OBS_KEY_F10) OBS_HOTKEY(OBS_KEY_F11) OBS_HOTKEY(OBS_KEY_F12) OBS_HOTKEY(OBS_KEY_F13) OBS_HOTKEY(OBS_KEY_F14) OBS_HOTKEY(OBS_KEY_F15) OBS_HOTKEY(OBS_KEY_F16) OBS_HOTKEY(OBS_KEY_F17) OBS_HOTKEY(OBS_KEY_F18) OBS_HOTKEY(OBS_KEY_F19) OBS_HOTKEY(OBS_KEY_F20) OBS_HOTKEY(OBS_KEY_F21) OBS_HOTKEY(OBS_KEY_F22) OBS_HOTKEY(OBS_KEY_F23) OBS_HOTKEY(OBS_KEY_F24) OBS_HOTKEY(OBS_KEY_F25) OBS_HOTKEY(OBS_KEY_F26) OBS_HOTKEY(OBS_KEY_F27) OBS_HOTKEY(OBS_KEY_F28) OBS_HOTKEY(OBS_KEY_F29) OBS_HOTKEY(OBS_KEY_F30) OBS_HOTKEY(OBS_KEY_F31) OBS_HOTKEY(OBS_KEY_F32) OBS_HOTKEY(OBS_KEY_F33) OBS_HOTKEY(OBS_KEY_F34) OBS_HOTKEY(OBS_KEY_F35) OBS_HOTKEY(OBS_KEY_MENU) OBS_HOTKEY(OBS_KEY_HYPER_L) OBS_HOTKEY(OBS_KEY_HYPER_R) OBS_HOTKEY(OBS_KEY_HELP) OBS_HOTKEY(OBS_KEY_DIRECTION_L) 
OBS_HOTKEY(OBS_KEY_DIRECTION_R) OBS_HOTKEY(OBS_KEY_SPACE) OBS_HOTKEY(OBS_KEY_EXCLAM) OBS_HOTKEY(OBS_KEY_QUOTEDBL) OBS_HOTKEY(OBS_KEY_NUMBERSIGN) OBS_HOTKEY(OBS_KEY_DOLLAR) OBS_HOTKEY(OBS_KEY_PERCENT) OBS_HOTKEY(OBS_KEY_AMPERSAND) OBS_HOTKEY(OBS_KEY_APOSTROPHE) OBS_HOTKEY(OBS_KEY_PARENLEFT) OBS_HOTKEY(OBS_KEY_PARENRIGHT) OBS_HOTKEY(OBS_KEY_ASTERISK) OBS_HOTKEY(OBS_KEY_PLUS) OBS_HOTKEY(OBS_KEY_COMMA) OBS_HOTKEY(OBS_KEY_MINUS) OBS_HOTKEY(OBS_KEY_PERIOD) OBS_HOTKEY(OBS_KEY_SLASH) OBS_HOTKEY(OBS_KEY_0) OBS_HOTKEY(OBS_KEY_1) OBS_HOTKEY(OBS_KEY_2) OBS_HOTKEY(OBS_KEY_3) OBS_HOTKEY(OBS_KEY_4) OBS_HOTKEY(OBS_KEY_5) OBS_HOTKEY(OBS_KEY_6) OBS_HOTKEY(OBS_KEY_7) OBS_HOTKEY(OBS_KEY_8) OBS_HOTKEY(OBS_KEY_9) OBS_HOTKEY(OBS_KEY_NUMEQUAL) OBS_HOTKEY(OBS_KEY_NUMASTERISK) OBS_HOTKEY(OBS_KEY_NUMPLUS) OBS_HOTKEY(OBS_KEY_NUMCOMMA) OBS_HOTKEY(OBS_KEY_NUMMINUS) OBS_HOTKEY(OBS_KEY_NUMPERIOD) OBS_HOTKEY(OBS_KEY_NUMSLASH) OBS_HOTKEY(OBS_KEY_NUM0) OBS_HOTKEY(OBS_KEY_NUM1) OBS_HOTKEY(OBS_KEY_NUM2) OBS_HOTKEY(OBS_KEY_NUM3) OBS_HOTKEY(OBS_KEY_NUM4) OBS_HOTKEY(OBS_KEY_NUM5) OBS_HOTKEY(OBS_KEY_NUM6) OBS_HOTKEY(OBS_KEY_NUM7) OBS_HOTKEY(OBS_KEY_NUM8) OBS_HOTKEY(OBS_KEY_NUM9) OBS_HOTKEY(OBS_KEY_COLON) OBS_HOTKEY(OBS_KEY_SEMICOLON) OBS_HOTKEY(OBS_KEY_QUOTE) OBS_HOTKEY(OBS_KEY_LESS) OBS_HOTKEY(OBS_KEY_EQUAL) OBS_HOTKEY(OBS_KEY_GREATER) OBS_HOTKEY(OBS_KEY_QUESTION) OBS_HOTKEY(OBS_KEY_AT) OBS_HOTKEY(OBS_KEY_A) OBS_HOTKEY(OBS_KEY_B) OBS_HOTKEY(OBS_KEY_C) OBS_HOTKEY(OBS_KEY_D) OBS_HOTKEY(OBS_KEY_E) OBS_HOTKEY(OBS_KEY_F) OBS_HOTKEY(OBS_KEY_G) OBS_HOTKEY(OBS_KEY_H) OBS_HOTKEY(OBS_KEY_I) OBS_HOTKEY(OBS_KEY_J) OBS_HOTKEY(OBS_KEY_K) OBS_HOTKEY(OBS_KEY_L) OBS_HOTKEY(OBS_KEY_M) OBS_HOTKEY(OBS_KEY_N) OBS_HOTKEY(OBS_KEY_O) OBS_HOTKEY(OBS_KEY_P) OBS_HOTKEY(OBS_KEY_Q) OBS_HOTKEY(OBS_KEY_R) OBS_HOTKEY(OBS_KEY_S) OBS_HOTKEY(OBS_KEY_T) OBS_HOTKEY(OBS_KEY_U) OBS_HOTKEY(OBS_KEY_V) OBS_HOTKEY(OBS_KEY_W) OBS_HOTKEY(OBS_KEY_X) OBS_HOTKEY(OBS_KEY_Y) OBS_HOTKEY(OBS_KEY_Z) OBS_HOTKEY(OBS_KEY_BRACKETLEFT) 
OBS_HOTKEY(OBS_KEY_BACKSLASH) OBS_HOTKEY(OBS_KEY_BRACKETRIGHT) OBS_HOTKEY(OBS_KEY_ASCIICIRCUM) OBS_HOTKEY(OBS_KEY_UNDERSCORE) OBS_HOTKEY(OBS_KEY_QUOTELEFT) OBS_HOTKEY(OBS_KEY_BRACELEFT) OBS_HOTKEY(OBS_KEY_BAR) OBS_HOTKEY(OBS_KEY_BRACERIGHT) OBS_HOTKEY(OBS_KEY_ASCIITILDE) OBS_HOTKEY(OBS_KEY_NOBREAKSPACE) OBS_HOTKEY(OBS_KEY_EXCLAMDOWN) OBS_HOTKEY(OBS_KEY_CENT) OBS_HOTKEY(OBS_KEY_STERLING) OBS_HOTKEY(OBS_KEY_CURRENCY) OBS_HOTKEY(OBS_KEY_YEN) OBS_HOTKEY(OBS_KEY_BROKENBAR) OBS_HOTKEY(OBS_KEY_SECTION) OBS_HOTKEY(OBS_KEY_DIAERESIS) OBS_HOTKEY(OBS_KEY_COPYRIGHT) OBS_HOTKEY(OBS_KEY_ORDFEMININE) OBS_HOTKEY(OBS_KEY_GUILLEMOTLEFT) OBS_HOTKEY(OBS_KEY_NOTSIGN) OBS_HOTKEY(OBS_KEY_HYPHEN) OBS_HOTKEY(OBS_KEY_REGISTERED) OBS_HOTKEY(OBS_KEY_MACRON) OBS_HOTKEY(OBS_KEY_DEGREE) OBS_HOTKEY(OBS_KEY_PLUSMINUS) OBS_HOTKEY(OBS_KEY_TWOSUPERIOR) OBS_HOTKEY(OBS_KEY_THREESUPERIOR) OBS_HOTKEY(OBS_KEY_ACUTE) OBS_HOTKEY(OBS_KEY_MU) OBS_HOTKEY(OBS_KEY_PARAGRAPH) OBS_HOTKEY(OBS_KEY_PERIODCENTERED) OBS_HOTKEY(OBS_KEY_CEDILLA) OBS_HOTKEY(OBS_KEY_ONESUPERIOR) OBS_HOTKEY(OBS_KEY_MASCULINE) OBS_HOTKEY(OBS_KEY_GUILLEMOTRIGHT) OBS_HOTKEY(OBS_KEY_ONEQUARTER) OBS_HOTKEY(OBS_KEY_ONEHALF) OBS_HOTKEY(OBS_KEY_THREEQUARTERS) OBS_HOTKEY(OBS_KEY_QUESTIONDOWN) OBS_HOTKEY(OBS_KEY_AGRAVE) OBS_HOTKEY(OBS_KEY_AACUTE) OBS_HOTKEY(OBS_KEY_ACIRCUMFLEX) OBS_HOTKEY(OBS_KEY_ATILDE) OBS_HOTKEY(OBS_KEY_ADIAERESIS) OBS_HOTKEY(OBS_KEY_ARING) OBS_HOTKEY(OBS_KEY_AE) OBS_HOTKEY(OBS_KEY_CCEDILLA) OBS_HOTKEY(OBS_KEY_EGRAVE) OBS_HOTKEY(OBS_KEY_EACUTE) OBS_HOTKEY(OBS_KEY_ECIRCUMFLEX) OBS_HOTKEY(OBS_KEY_EDIAERESIS) OBS_HOTKEY(OBS_KEY_IGRAVE) OBS_HOTKEY(OBS_KEY_IACUTE) OBS_HOTKEY(OBS_KEY_ICIRCUMFLEX) OBS_HOTKEY(OBS_KEY_IDIAERESIS) OBS_HOTKEY(OBS_KEY_ETH) OBS_HOTKEY(OBS_KEY_NTILDE) OBS_HOTKEY(OBS_KEY_OGRAVE) OBS_HOTKEY(OBS_KEY_OACUTE) OBS_HOTKEY(OBS_KEY_OCIRCUMFLEX) OBS_HOTKEY(OBS_KEY_OTILDE) OBS_HOTKEY(OBS_KEY_ODIAERESIS) OBS_HOTKEY(OBS_KEY_MULTIPLY) OBS_HOTKEY(OBS_KEY_OOBLIQUE) OBS_HOTKEY(OBS_KEY_UGRAVE) OBS_HOTKEY(OBS_KEY_UACUTE) 
OBS_HOTKEY(OBS_KEY_UCIRCUMFLEX) OBS_HOTKEY(OBS_KEY_UDIAERESIS) OBS_HOTKEY(OBS_KEY_YACUTE) OBS_HOTKEY(OBS_KEY_THORN) OBS_HOTKEY(OBS_KEY_SSHARP) OBS_HOTKEY(OBS_KEY_DIVISION) OBS_HOTKEY(OBS_KEY_YDIAERESIS) OBS_HOTKEY(OBS_KEY_MULTI_KEY) OBS_HOTKEY(OBS_KEY_CODEINPUT) OBS_HOTKEY(OBS_KEY_SINGLECANDIDATE) OBS_HOTKEY(OBS_KEY_MULTIPLECANDIDATE) OBS_HOTKEY(OBS_KEY_PREVIOUSCANDIDATE) OBS_HOTKEY(OBS_KEY_MODE_SWITCH) OBS_HOTKEY(OBS_KEY_KANJI) OBS_HOTKEY(OBS_KEY_MUHENKAN) OBS_HOTKEY(OBS_KEY_HENKAN) OBS_HOTKEY(OBS_KEY_ROMAJI) OBS_HOTKEY(OBS_KEY_HIRAGANA) OBS_HOTKEY(OBS_KEY_KATAKANA) OBS_HOTKEY(OBS_KEY_HIRAGANA_KATAKANA) OBS_HOTKEY(OBS_KEY_ZENKAKU) OBS_HOTKEY(OBS_KEY_HANKAKU) OBS_HOTKEY(OBS_KEY_ZENKAKU_HANKAKU) OBS_HOTKEY(OBS_KEY_TOUROKU) OBS_HOTKEY(OBS_KEY_MASSYO) OBS_HOTKEY(OBS_KEY_KANA_LOCK) OBS_HOTKEY(OBS_KEY_KANA_SHIFT) OBS_HOTKEY(OBS_KEY_EISU_SHIFT) OBS_HOTKEY(OBS_KEY_EISU_TOGGLE) OBS_HOTKEY(OBS_KEY_HANGUL) OBS_HOTKEY(OBS_KEY_HANGUL_START) OBS_HOTKEY(OBS_KEY_HANGUL_END) OBS_HOTKEY(OBS_KEY_HANGUL_HANJA) OBS_HOTKEY(OBS_KEY_HANGUL_JAMO) OBS_HOTKEY(OBS_KEY_HANGUL_ROMAJA) OBS_HOTKEY(OBS_KEY_HANGUL_JEONJA) OBS_HOTKEY(OBS_KEY_HANGUL_BANJA) OBS_HOTKEY(OBS_KEY_HANGUL_PREHANJA) OBS_HOTKEY(OBS_KEY_HANGUL_POSTHANJA) OBS_HOTKEY(OBS_KEY_HANGUL_SPECIAL) OBS_HOTKEY(OBS_KEY_DEAD_GRAVE) OBS_HOTKEY(OBS_KEY_DEAD_ACUTE) OBS_HOTKEY(OBS_KEY_DEAD_CIRCUMFLEX) OBS_HOTKEY(OBS_KEY_DEAD_TILDE) OBS_HOTKEY(OBS_KEY_DEAD_MACRON) OBS_HOTKEY(OBS_KEY_DEAD_BREVE) OBS_HOTKEY(OBS_KEY_DEAD_ABOVEDOT) OBS_HOTKEY(OBS_KEY_DEAD_DIAERESIS) OBS_HOTKEY(OBS_KEY_DEAD_ABOVERING) OBS_HOTKEY(OBS_KEY_DEAD_DOUBLEACUTE) OBS_HOTKEY(OBS_KEY_DEAD_CARON) OBS_HOTKEY(OBS_KEY_DEAD_CEDILLA) OBS_HOTKEY(OBS_KEY_DEAD_OGONEK) OBS_HOTKEY(OBS_KEY_DEAD_IOTA) OBS_HOTKEY(OBS_KEY_DEAD_VOICED_SOUND) OBS_HOTKEY(OBS_KEY_DEAD_SEMIVOICED_SOUND) OBS_HOTKEY(OBS_KEY_DEAD_BELOWDOT) OBS_HOTKEY(OBS_KEY_DEAD_HOOK) OBS_HOTKEY(OBS_KEY_DEAD_HORN) OBS_HOTKEY(OBS_KEY_BACK) OBS_HOTKEY(OBS_KEY_FORWARD) OBS_HOTKEY(OBS_KEY_STOP) OBS_HOTKEY(OBS_KEY_REFRESH) 
OBS_HOTKEY(OBS_KEY_VOLUMEDOWN) OBS_HOTKEY(OBS_KEY_VOLUMEMUTE) OBS_HOTKEY(OBS_KEY_VOLUMEUP) OBS_HOTKEY(OBS_KEY_BASSBOOST) OBS_HOTKEY(OBS_KEY_BASSUP) OBS_HOTKEY(OBS_KEY_BASSDOWN) OBS_HOTKEY(OBS_KEY_TREBLEUP) OBS_HOTKEY(OBS_KEY_TREBLEDOWN) OBS_HOTKEY(OBS_KEY_MEDIAPLAY) OBS_HOTKEY(OBS_KEY_MEDIASTOP) OBS_HOTKEY(OBS_KEY_MEDIAPREVIOUS) OBS_HOTKEY(OBS_KEY_MEDIANEXT) OBS_HOTKEY(OBS_KEY_MEDIARECORD) OBS_HOTKEY(OBS_KEY_MEDIAPAUSE) OBS_HOTKEY(OBS_KEY_MEDIATOGGLEPLAYPAUSE) OBS_HOTKEY(OBS_KEY_HOMEPAGE) OBS_HOTKEY(OBS_KEY_FAVORITES) OBS_HOTKEY(OBS_KEY_SEARCH) OBS_HOTKEY(OBS_KEY_STANDBY) OBS_HOTKEY(OBS_KEY_OPENURL) OBS_HOTKEY(OBS_KEY_LAUNCHMAIL) OBS_HOTKEY(OBS_KEY_LAUNCHMEDIA) OBS_HOTKEY(OBS_KEY_LAUNCH0) OBS_HOTKEY(OBS_KEY_LAUNCH1) OBS_HOTKEY(OBS_KEY_LAUNCH2) OBS_HOTKEY(OBS_KEY_LAUNCH3) OBS_HOTKEY(OBS_KEY_LAUNCH4) OBS_HOTKEY(OBS_KEY_LAUNCH5) OBS_HOTKEY(OBS_KEY_LAUNCH6) OBS_HOTKEY(OBS_KEY_LAUNCH7) OBS_HOTKEY(OBS_KEY_LAUNCH8) OBS_HOTKEY(OBS_KEY_LAUNCH9) OBS_HOTKEY(OBS_KEY_LAUNCHA) OBS_HOTKEY(OBS_KEY_LAUNCHB) OBS_HOTKEY(OBS_KEY_LAUNCHC) OBS_HOTKEY(OBS_KEY_LAUNCHD) OBS_HOTKEY(OBS_KEY_LAUNCHE) OBS_HOTKEY(OBS_KEY_LAUNCHF) OBS_HOTKEY(OBS_KEY_LAUNCHG) OBS_HOTKEY(OBS_KEY_LAUNCHH) OBS_HOTKEY(OBS_KEY_MONBRIGHTNESSUP) OBS_HOTKEY(OBS_KEY_MONBRIGHTNESSDOWN) OBS_HOTKEY(OBS_KEY_KEYBOARDLIGHTONOFF) OBS_HOTKEY(OBS_KEY_KEYBOARDBRIGHTNESSUP) OBS_HOTKEY(OBS_KEY_KEYBOARDBRIGHTNESSDOWN) OBS_HOTKEY(OBS_KEY_POWEROFF) OBS_HOTKEY(OBS_KEY_WAKEUP) OBS_HOTKEY(OBS_KEY_EJECT) OBS_HOTKEY(OBS_KEY_SCREENSAVER) OBS_HOTKEY(OBS_KEY_WWW) OBS_HOTKEY(OBS_KEY_MEMO) OBS_HOTKEY(OBS_KEY_LIGHTBULB) OBS_HOTKEY(OBS_KEY_SHOP) OBS_HOTKEY(OBS_KEY_HISTORY) OBS_HOTKEY(OBS_KEY_ADDFAVORITE) OBS_HOTKEY(OBS_KEY_HOTLINKS) OBS_HOTKEY(OBS_KEY_BRIGHTNESSADJUST) OBS_HOTKEY(OBS_KEY_FINANCE) OBS_HOTKEY(OBS_KEY_COMMUNITY) OBS_HOTKEY(OBS_KEY_AUDIOREWIND) OBS_HOTKEY(OBS_KEY_BACKFORWARD) OBS_HOTKEY(OBS_KEY_APPLICATIONLEFT) OBS_HOTKEY(OBS_KEY_APPLICATIONRIGHT) OBS_HOTKEY(OBS_KEY_BOOK) OBS_HOTKEY(OBS_KEY_CD) OBS_HOTKEY(OBS_KEY_CALCULATOR) 
OBS_HOTKEY(OBS_KEY_TODOLIST) OBS_HOTKEY(OBS_KEY_CLEARGRAB) OBS_HOTKEY(OBS_KEY_CLOSE) OBS_HOTKEY(OBS_KEY_COPY) OBS_HOTKEY(OBS_KEY_CUT) OBS_HOTKEY(OBS_KEY_DISPLAY) OBS_HOTKEY(OBS_KEY_DOS) OBS_HOTKEY(OBS_KEY_DOCUMENTS) OBS_HOTKEY(OBS_KEY_EXCEL) OBS_HOTKEY(OBS_KEY_EXPLORER) OBS_HOTKEY(OBS_KEY_GAME) OBS_HOTKEY(OBS_KEY_GO) OBS_HOTKEY(OBS_KEY_ITOUCH) OBS_HOTKEY(OBS_KEY_LOGOFF) OBS_HOTKEY(OBS_KEY_MARKET) OBS_HOTKEY(OBS_KEY_MEETING) OBS_HOTKEY(OBS_KEY_MENUKB) OBS_HOTKEY(OBS_KEY_MENUPB) OBS_HOTKEY(OBS_KEY_MYSITES) OBS_HOTKEY(OBS_KEY_NEWS) OBS_HOTKEY(OBS_KEY_OFFICEHOME) OBS_HOTKEY(OBS_KEY_OPTION) OBS_HOTKEY(OBS_KEY_PASTE) OBS_HOTKEY(OBS_KEY_PHONE) OBS_HOTKEY(OBS_KEY_CALENDAR) OBS_HOTKEY(OBS_KEY_REPLY) OBS_HOTKEY(OBS_KEY_RELOAD) OBS_HOTKEY(OBS_KEY_ROTATEWINDOWS) OBS_HOTKEY(OBS_KEY_ROTATIONPB) OBS_HOTKEY(OBS_KEY_ROTATIONKB) OBS_HOTKEY(OBS_KEY_SAVE) OBS_HOTKEY(OBS_KEY_SEND) OBS_HOTKEY(OBS_KEY_SPELL) OBS_HOTKEY(OBS_KEY_SPLITSCREEN) OBS_HOTKEY(OBS_KEY_SUPPORT) OBS_HOTKEY(OBS_KEY_TASKPANE) OBS_HOTKEY(OBS_KEY_TERMINAL) OBS_HOTKEY(OBS_KEY_TOOLS) OBS_HOTKEY(OBS_KEY_TRAVEL) OBS_HOTKEY(OBS_KEY_VIDEO) OBS_HOTKEY(OBS_KEY_WORD) OBS_HOTKEY(OBS_KEY_XFER) OBS_HOTKEY(OBS_KEY_ZOOMIN) OBS_HOTKEY(OBS_KEY_ZOOMOUT) OBS_HOTKEY(OBS_KEY_AWAY) OBS_HOTKEY(OBS_KEY_MESSENGER) OBS_HOTKEY(OBS_KEY_WEBCAM) OBS_HOTKEY(OBS_KEY_MAILFORWARD) OBS_HOTKEY(OBS_KEY_PICTURES) OBS_HOTKEY(OBS_KEY_MUSIC) OBS_HOTKEY(OBS_KEY_BATTERY) OBS_HOTKEY(OBS_KEY_BLUETOOTH) OBS_HOTKEY(OBS_KEY_WLAN) OBS_HOTKEY(OBS_KEY_UWB) OBS_HOTKEY(OBS_KEY_AUDIOFORWARD) OBS_HOTKEY(OBS_KEY_AUDIOREPEAT) OBS_HOTKEY(OBS_KEY_AUDIORANDOMPLAY) OBS_HOTKEY(OBS_KEY_SUBTITLE) OBS_HOTKEY(OBS_KEY_AUDIOCYCLETRACK) OBS_HOTKEY(OBS_KEY_TIME) OBS_HOTKEY(OBS_KEY_HIBERNATE) OBS_HOTKEY(OBS_KEY_VIEW) OBS_HOTKEY(OBS_KEY_TOPMENU) OBS_HOTKEY(OBS_KEY_POWERDOWN) OBS_HOTKEY(OBS_KEY_SUSPEND) OBS_HOTKEY(OBS_KEY_CONTRASTADJUST) OBS_HOTKEY(OBS_KEY_MEDIALAST) OBS_HOTKEY(OBS_KEY_CALL) OBS_HOTKEY(OBS_KEY_CAMERA) OBS_HOTKEY(OBS_KEY_CAMERAFOCUS) OBS_HOTKEY(OBS_KEY_CONTEXT1) 
OBS_HOTKEY(OBS_KEY_CONTEXT2) OBS_HOTKEY(OBS_KEY_CONTEXT3) OBS_HOTKEY(OBS_KEY_CONTEXT4) OBS_HOTKEY(OBS_KEY_FLIP) OBS_HOTKEY(OBS_KEY_HANGUP) OBS_HOTKEY(OBS_KEY_NO) OBS_HOTKEY(OBS_KEY_SELECT) OBS_HOTKEY(OBS_KEY_YES) OBS_HOTKEY(OBS_KEY_TOGGLECALLHANGUP) OBS_HOTKEY(OBS_KEY_VOICEDIAL) OBS_HOTKEY(OBS_KEY_LASTNUMBERREDIAL) OBS_HOTKEY(OBS_KEY_EXECUTE) OBS_HOTKEY(OBS_KEY_PRINTER) OBS_HOTKEY(OBS_KEY_PLAY) OBS_HOTKEY(OBS_KEY_SLEEP) OBS_HOTKEY(OBS_KEY_ZOOM) OBS_HOTKEY(OBS_KEY_CANCEL) #ifndef OBS_MOUSE_BUTTON #define OBS_MOUSE_BUTTON(x) OBS_HOTKEY(x) #define OBS_MOUSE_BUTTON_DEFAULT 1 #endif OBS_MOUSE_BUTTON(OBS_KEY_MOUSE1) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE2) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE3) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE4) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE5) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE6) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE7) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE8) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE9) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE10) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE11) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE12) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE13) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE14) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE15) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE16) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE17) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE18) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE19) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE20) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE21) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE22) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE23) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE24) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE25) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE26) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE27) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE28) OBS_MOUSE_BUTTON(OBS_KEY_MOUSE29) #ifdef OBS_MOUSE_BUTTON_DEFAULT #undef OBS_MOUSE_BUTTON #undef OBS_MOUSE_BUTTON_DEFAULT #endif OBS_HOTKEY(OBS_KEY_BACKSLASH_RT102) OBS_HOTKEY(OBS_KEY_OPEN) OBS_HOTKEY(OBS_KEY_FIND) OBS_HOTKEY(OBS_KEY_REDO) OBS_HOTKEY(OBS_KEY_UNDO) OBS_HOTKEY(OBS_KEY_FRONT) OBS_HOTKEY(OBS_KEY_PROPS) OBS_HOTKEY(OBS_KEY_VK_CANCEL) OBS_HOTKEY(OBS_KEY_0x07) OBS_HOTKEY(OBS_KEY_0x0A) OBS_HOTKEY(OBS_KEY_0x0B) OBS_HOTKEY(OBS_KEY_0x0E) OBS_HOTKEY(OBS_KEY_0x0F) 
OBS_HOTKEY(OBS_KEY_0x16) OBS_HOTKEY(OBS_KEY_VK_JUNJA) OBS_HOTKEY(OBS_KEY_VK_FINAL) OBS_HOTKEY(OBS_KEY_0x1A) OBS_HOTKEY(OBS_KEY_VK_ACCEPT) OBS_HOTKEY(OBS_KEY_VK_MODECHANGE) OBS_HOTKEY(OBS_KEY_VK_SELECT) OBS_HOTKEY(OBS_KEY_VK_PRINT) OBS_HOTKEY(OBS_KEY_VK_EXECUTE) OBS_HOTKEY(OBS_KEY_VK_HELP) OBS_HOTKEY(OBS_KEY_0x30) OBS_HOTKEY(OBS_KEY_0x31) OBS_HOTKEY(OBS_KEY_0x32) OBS_HOTKEY(OBS_KEY_0x33) OBS_HOTKEY(OBS_KEY_0x34) OBS_HOTKEY(OBS_KEY_0x35) OBS_HOTKEY(OBS_KEY_0x36) OBS_HOTKEY(OBS_KEY_0x37) OBS_HOTKEY(OBS_KEY_0x38) OBS_HOTKEY(OBS_KEY_0x39) OBS_HOTKEY(OBS_KEY_0x3A) OBS_HOTKEY(OBS_KEY_0x3B) OBS_HOTKEY(OBS_KEY_0x3C) OBS_HOTKEY(OBS_KEY_0x3D) OBS_HOTKEY(OBS_KEY_0x3E) OBS_HOTKEY(OBS_KEY_0x3F) OBS_HOTKEY(OBS_KEY_0x40) OBS_HOTKEY(OBS_KEY_0x41) OBS_HOTKEY(OBS_KEY_0x42) OBS_HOTKEY(OBS_KEY_0x43) OBS_HOTKEY(OBS_KEY_0x44) OBS_HOTKEY(OBS_KEY_0x45) OBS_HOTKEY(OBS_KEY_0x46) OBS_HOTKEY(OBS_KEY_0x47) OBS_HOTKEY(OBS_KEY_0x48) OBS_HOTKEY(OBS_KEY_0x49) OBS_HOTKEY(OBS_KEY_0x4A) OBS_HOTKEY(OBS_KEY_0x4B) OBS_HOTKEY(OBS_KEY_0x4C) OBS_HOTKEY(OBS_KEY_0x4D) OBS_HOTKEY(OBS_KEY_0x4E) OBS_HOTKEY(OBS_KEY_0x4F) OBS_HOTKEY(OBS_KEY_0x50) OBS_HOTKEY(OBS_KEY_0x51) OBS_HOTKEY(OBS_KEY_0x52) OBS_HOTKEY(OBS_KEY_0x53) OBS_HOTKEY(OBS_KEY_0x54) OBS_HOTKEY(OBS_KEY_0x55) OBS_HOTKEY(OBS_KEY_0x56) OBS_HOTKEY(OBS_KEY_0x57) OBS_HOTKEY(OBS_KEY_0x58) OBS_HOTKEY(OBS_KEY_0x59) OBS_HOTKEY(OBS_KEY_0x5A) OBS_HOTKEY(OBS_KEY_VK_LWIN) OBS_HOTKEY(OBS_KEY_VK_RWIN) OBS_HOTKEY(OBS_KEY_VK_APPS) OBS_HOTKEY(OBS_KEY_0x5E) OBS_HOTKEY(OBS_KEY_VK_SLEEP) OBS_HOTKEY(OBS_KEY_VK_SEPARATOR) OBS_HOTKEY(OBS_KEY_0x88) OBS_HOTKEY(OBS_KEY_0x89) OBS_HOTKEY(OBS_KEY_0x8A) OBS_HOTKEY(OBS_KEY_0x8B) OBS_HOTKEY(OBS_KEY_0x8C) OBS_HOTKEY(OBS_KEY_0x8D) OBS_HOTKEY(OBS_KEY_0x8E) OBS_HOTKEY(OBS_KEY_0x8F) OBS_HOTKEY(OBS_KEY_VK_OEM_FJ_JISHO) OBS_HOTKEY(OBS_KEY_VK_OEM_FJ_LOYA) OBS_HOTKEY(OBS_KEY_VK_OEM_FJ_ROYA) OBS_HOTKEY(OBS_KEY_0x97) OBS_HOTKEY(OBS_KEY_0x98) OBS_HOTKEY(OBS_KEY_0x99) OBS_HOTKEY(OBS_KEY_0x9A) OBS_HOTKEY(OBS_KEY_0x9B) OBS_HOTKEY(OBS_KEY_0x9C) 
OBS_HOTKEY(OBS_KEY_0x9D) OBS_HOTKEY(OBS_KEY_0x9E) OBS_HOTKEY(OBS_KEY_0x9F) OBS_HOTKEY(OBS_KEY_VK_LSHIFT) OBS_HOTKEY(OBS_KEY_VK_RSHIFT) OBS_HOTKEY(OBS_KEY_VK_LCONTROL) OBS_HOTKEY(OBS_KEY_VK_RCONTROL) OBS_HOTKEY(OBS_KEY_VK_LMENU) OBS_HOTKEY(OBS_KEY_VK_RMENU) OBS_HOTKEY(OBS_KEY_VK_BROWSER_BACK) OBS_HOTKEY(OBS_KEY_VK_BROWSER_FORWARD) OBS_HOTKEY(OBS_KEY_VK_BROWSER_REFRESH) OBS_HOTKEY(OBS_KEY_VK_BROWSER_STOP) OBS_HOTKEY(OBS_KEY_VK_BROWSER_SEARCH) OBS_HOTKEY(OBS_KEY_VK_BROWSER_FAVORITES) OBS_HOTKEY(OBS_KEY_VK_BROWSER_HOME) OBS_HOTKEY(OBS_KEY_VK_VOLUME_MUTE) OBS_HOTKEY(OBS_KEY_VK_VOLUME_DOWN) OBS_HOTKEY(OBS_KEY_VK_VOLUME_UP) OBS_HOTKEY(OBS_KEY_VK_MEDIA_NEXT_TRACK) OBS_HOTKEY(OBS_KEY_VK_MEDIA_PREV_TRACK) OBS_HOTKEY(OBS_KEY_VK_MEDIA_STOP) OBS_HOTKEY(OBS_KEY_VK_MEDIA_PLAY_PAUSE) OBS_HOTKEY(OBS_KEY_VK_LAUNCH_MAIL) OBS_HOTKEY(OBS_KEY_VK_LAUNCH_MEDIA_SELECT) OBS_HOTKEY(OBS_KEY_VK_LAUNCH_APP1) OBS_HOTKEY(OBS_KEY_VK_LAUNCH_APP2) OBS_HOTKEY(OBS_KEY_0xB8) OBS_HOTKEY(OBS_KEY_0xB9) OBS_HOTKEY(OBS_KEY_0xC1) OBS_HOTKEY(OBS_KEY_0xC2) OBS_HOTKEY(OBS_KEY_0xC3) OBS_HOTKEY(OBS_KEY_0xC4) OBS_HOTKEY(OBS_KEY_0xC5) OBS_HOTKEY(OBS_KEY_0xC6) OBS_HOTKEY(OBS_KEY_0xC7) OBS_HOTKEY(OBS_KEY_0xC8) OBS_HOTKEY(OBS_KEY_0xC9) OBS_HOTKEY(OBS_KEY_0xCA) OBS_HOTKEY(OBS_KEY_0xCB) OBS_HOTKEY(OBS_KEY_0xCC) OBS_HOTKEY(OBS_KEY_0xCD) OBS_HOTKEY(OBS_KEY_0xCE) OBS_HOTKEY(OBS_KEY_0xCF) OBS_HOTKEY(OBS_KEY_0xD0) OBS_HOTKEY(OBS_KEY_0xD1) OBS_HOTKEY(OBS_KEY_0xD2) OBS_HOTKEY(OBS_KEY_0xD3) OBS_HOTKEY(OBS_KEY_0xD4) OBS_HOTKEY(OBS_KEY_0xD5) OBS_HOTKEY(OBS_KEY_0xD6) OBS_HOTKEY(OBS_KEY_0xD7) OBS_HOTKEY(OBS_KEY_0xD8) OBS_HOTKEY(OBS_KEY_0xD9) OBS_HOTKEY(OBS_KEY_0xDA) OBS_HOTKEY(OBS_KEY_VK_OEM_8) OBS_HOTKEY(OBS_KEY_0xE0) OBS_HOTKEY(OBS_KEY_VK_OEM_AX) OBS_HOTKEY(OBS_KEY_VK_ICO_HELP) OBS_HOTKEY(OBS_KEY_VK_ICO_00) OBS_HOTKEY(OBS_KEY_VK_PROCESSKEY) OBS_HOTKEY(OBS_KEY_VK_ICO_CLEAR) OBS_HOTKEY(OBS_KEY_VK_PACKET) OBS_HOTKEY(OBS_KEY_0xE8) OBS_HOTKEY(OBS_KEY_VK_OEM_RESET) OBS_HOTKEY(OBS_KEY_VK_OEM_JUMP) OBS_HOTKEY(OBS_KEY_VK_OEM_PA1) 
OBS_HOTKEY(OBS_KEY_VK_OEM_PA2) OBS_HOTKEY(OBS_KEY_VK_OEM_PA3) OBS_HOTKEY(OBS_KEY_VK_OEM_WSCTRL) OBS_HOTKEY(OBS_KEY_VK_OEM_CUSEL) OBS_HOTKEY(OBS_KEY_VK_OEM_ATTN) OBS_HOTKEY(OBS_KEY_VK_OEM_FINISH) OBS_HOTKEY(OBS_KEY_VK_OEM_COPY) OBS_HOTKEY(OBS_KEY_VK_OEM_AUTO) OBS_HOTKEY(OBS_KEY_VK_OEM_ENLW) OBS_HOTKEY(OBS_KEY_VK_ATTN) OBS_HOTKEY(OBS_KEY_VK_CRSEL) OBS_HOTKEY(OBS_KEY_VK_EXSEL) OBS_HOTKEY(OBS_KEY_VK_EREOF) OBS_HOTKEY(OBS_KEY_VK_PLAY) OBS_HOTKEY(OBS_KEY_VK_ZOOM) OBS_HOTKEY(OBS_KEY_VK_NONAME) OBS_HOTKEY(OBS_KEY_VK_PA1) OBS_HOTKEY(OBS_KEY_VK_OEM_CLEAR) obs-studio-32.1.0-sources/libobs/obs-internal.h000644 001751 001751 00000125213 15153330235 022260 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once #include "util/c99defs.h" #include "util/darray.h" #include "util/deque.h" #include "util/dstr.h" #include "util/threading.h" #include "util/platform.h" #include "util/profiler.h" #include "util/task.h" #include "util/uthash.h" #include "util/array-serializer.h" #include "callback/signal.h" #include "callback/proc.h" #include "graphics/graphics.h" #include "graphics/matrix4.h" #include "media-io/audio-resampler.h" #include "media-io/video-io.h" #include "media-io/audio-io.h" #include "obs.h" #include #include /* Custom helpers for the UUID hash table */ #define HASH_FIND_UUID(head, uuid, out) HASH_FIND(hh_uuid, head, uuid, UUID_STR_LENGTH, out) #define HASH_ADD_UUID(head, uuid_field, add) HASH_ADD(hh_uuid, head, uuid_field[0], UUID_STR_LENGTH, add) #define NUM_TEXTURES 2 #define NUM_CHANNELS 3 #define MICROSECOND_DEN 1000000 #define NUM_ENCODE_TEXTURES 10 #define NUM_ENCODE_TEXTURE_FRAMES_TO_WAIT 1 static inline int64_t packet_dts_usec(struct encoder_packet *packet) { return packet->dts * MICROSECOND_DEN / packet->timebase_den; } struct tick_callback { void (*tick)(void *param, float seconds); void *param; }; struct draw_callback { void (*draw)(void *param, uint32_t cx, uint32_t cy); void *param; }; struct rendered_callback { void (*rendered)(void *param); void *param; }; struct packet_callback { void (*packet_cb)(obs_output_t *output, struct encoder_packet *pkt, struct encoder_packet_time *pkt_time, void *param); void *param; }; struct reconnect_callback { bool (*reconnect_cb)(void *data, obs_output_t *output, int code); void *param; }; /* ------------------------------------------------------------------------- */ /* validity checks */ static inline bool obs_object_valid(const void *obj, const char *f, const char *t) { if (!obj) { blog(LOG_DEBUG, "%s: Null '%s' parameter", f, t); return false; } return true; } #define obs_ptr_valid(ptr, func) obs_object_valid(ptr, func, 
#ptr) #define obs_source_valid obs_ptr_valid #define obs_output_valid obs_ptr_valid #define obs_encoder_valid obs_ptr_valid #define obs_service_valid obs_ptr_valid /* ------------------------------------------------------------------------- */ /* modules */ struct obs_module { char *mod_name; const char *file; char *bin_path; char *data_path; void *module; bool loaded; enum obs_module_load_state load_state; bool (*load)(void); void (*unload)(void); void (*post_load)(void); void (*set_locale)(const char *locale); bool (*get_string)(const char *lookup_string, const char **translated_string); void (*free_locale)(void); uint32_t (*ver)(void); void (*set_pointer)(obs_module_t *module); const char *(*name)(void); const char *(*description)(void); const char *(*author)(void); struct obs_module_metadata *metadata; struct obs_module *next; DARRAY(char *) sources; DARRAY(char *) outputs; DARRAY(char *) encoders; DARRAY(char *) services; }; struct obs_disabled_module { char *mod_name; enum obs_module_load_state load_state; struct obs_module_metadata *metadata; struct obs_disabled_module *next; DARRAY(char *) sources; DARRAY(char *) outputs; DARRAY(char *) encoders; DARRAY(char *) services; }; extern void free_module(struct obs_module *mod); struct obs_module_path { char *bin; char *data; }; static inline void free_module_path(struct obs_module_path *omp) { if (omp) { bfree(omp->bin); bfree(omp->data); } } struct obs_module_metadata { char *display_name; char *version; char *id; char *os_arch; char *description; char *long_description; bool has_icon; bool has_banner; char *repository_url; char *support_url; char *website_url; char *name; }; static inline void free_module_metadata(struct obs_module_metadata *omi) { if (omi) { bfree(omi->display_name); bfree(omi->version); bfree(omi->id); bfree(omi->os_arch); bfree(omi->description); bfree(omi->long_description); bfree(omi->repository_url); bfree(omi->support_url); bfree(omi->website_url); bfree(omi->name); } } static inline 
bool check_path(const char *data, const char *path, struct dstr *output) { dstr_copy(output, path); dstr_cat(output, data); return os_file_exists(output->array); } /* ------------------------------------------------------------------------- */ /* hotkeys */ struct obs_hotkey { obs_hotkey_id id; char *name; char *description; obs_hotkey_func func; void *data; int pressed; obs_hotkey_registerer_t registerer_type; void *registerer; obs_hotkey_id pair_partner_id; UT_hash_handle hh; }; struct obs_hotkey_pair { obs_hotkey_pair_id pair_id; obs_hotkey_id id[2]; obs_hotkey_active_func func[2]; bool pressed0; bool pressed1; void *data[2]; UT_hash_handle hh; }; typedef struct obs_hotkey_pair obs_hotkey_pair_t; typedef struct obs_hotkeys_platform obs_hotkeys_platform_t; void *obs_hotkey_thread(void *param); struct obs_core_hotkeys; bool obs_hotkeys_platform_init(struct obs_core_hotkeys *hotkeys); void obs_hotkeys_platform_free(struct obs_core_hotkeys *hotkeys); bool obs_hotkeys_platform_is_pressed(obs_hotkeys_platform_t *context, obs_key_t key); const char *obs_get_hotkey_translation(obs_key_t key, const char *def); struct obs_context_data; void obs_hotkeys_context_release(struct obs_context_data *context); void obs_hotkeys_free(void); struct obs_hotkey_binding { obs_key_combination_t key; bool pressed; bool modifiers_match; obs_hotkey_id hotkey_id; obs_hotkey_t *hotkey; }; struct obs_hotkey_name_map_item; void obs_hotkey_name_map_free(void); /* ------------------------------------------------------------------------- */ /* views */ enum view_type { INVALID_VIEW, MAIN_VIEW, AUX_VIEW, }; struct obs_view { pthread_mutex_t channels_mutex; obs_source_t *channels[MAX_CHANNELS]; enum view_type type; }; extern bool obs_view_init(struct obs_view *view, enum view_type type); extern void obs_view_free(struct obs_view *view); /* ------------------------------------------------------------------------- */ /* displays */ struct obs_display { bool update_color_space; bool enabled; uint32_t 
cx, cy; uint32_t next_cx, next_cy; uint32_t background_color; gs_swapchain_t *swap; pthread_mutex_t draw_callbacks_mutex; pthread_mutex_t draw_info_mutex; DARRAY(struct draw_callback) draw_callbacks; bool use_clear_workaround; struct obs_display *next; struct obs_display **prev_next; }; extern bool obs_display_init(struct obs_display *display, const struct gs_init_data *graphics_data); extern void obs_display_free(struct obs_display *display); /* ------------------------------------------------------------------------- */ /* core */ struct obs_vframe_info { uint64_t timestamp; int count; }; struct obs_tex_frame { gs_texture_t *tex; gs_texture_t *tex_uv; uint32_t handle; uint64_t timestamp; uint64_t lock_key; int count; bool released; }; struct obs_task_info { obs_task_t task; void *param; }; struct obs_core_video_mix { struct obs_view *view; gs_stagesurf_t *active_copy_surfaces[NUM_TEXTURES][NUM_CHANNELS]; gs_stagesurf_t *copy_surfaces[NUM_TEXTURES][NUM_CHANNELS]; gs_texture_t *convert_textures[NUM_CHANNELS]; gs_texture_t *convert_textures_encode[NUM_CHANNELS]; #ifdef _WIN32 gs_stagesurf_t *copy_surfaces_encode[NUM_TEXTURES]; #endif gs_texture_t *render_texture; gs_texture_t *output_texture; enum gs_color_space render_space; bool texture_rendered; bool textures_copied[NUM_TEXTURES]; bool texture_converted; bool using_nv12_tex; bool using_p010_tex; struct deque vframe_info_buffer; struct deque vframe_info_buffer_gpu; gs_stagesurf_t *mapped_surfaces[NUM_CHANNELS]; int cur_texture; volatile long raw_active; volatile long gpu_encoder_active; bool gpu_was_active; bool raw_was_active; bool was_active; pthread_mutex_t gpu_encoder_mutex; struct deque gpu_encoder_queue; struct deque gpu_encoder_avail_queue; DARRAY(obs_encoder_t *) gpu_encoders; os_sem_t *gpu_encode_semaphore; os_event_t *gpu_encode_inactive; pthread_t gpu_encode_thread; bool gpu_encode_thread_initialized; volatile bool gpu_encode_stop; video_t *video; struct obs_video_info ovi; bool gpu_conversion; const 
char *conversion_techs[NUM_CHANNELS]; bool conversion_needed; float conversion_width_i; float conversion_height_i; float color_matrix[16]; bool encoder_only_mix; long encoder_refs; bool mix_audio; }; extern struct obs_core_video_mix *obs_create_video_mix(struct obs_video_info *ovi); extern void obs_free_video_mix(struct obs_core_video_mix *video); struct obs_core_video { graphics_t *graphics; gs_effect_t *default_effect; gs_effect_t *default_rect_effect; gs_effect_t *opaque_effect; gs_effect_t *solid_effect; gs_effect_t *repeat_effect; gs_effect_t *conversion_effect; gs_effect_t *bicubic_effect; gs_effect_t *lanczos_effect; gs_effect_t *area_effect; gs_effect_t *bilinear_lowres_effect; gs_effect_t *premultiplied_alpha_effect; gs_samplerstate_t *point_sampler; uint64_t video_time; uint64_t video_frame_interval_ns; uint64_t video_half_frame_interval_ns; uint64_t video_avg_frame_time_ns; double video_fps; pthread_t video_thread; uint32_t total_frames; uint32_t lagged_frames; bool thread_initialized; gs_texture_t *transparent_texture; gs_effect_t *deinterlace_discard_effect; gs_effect_t *deinterlace_discard_2x_effect; gs_effect_t *deinterlace_linear_effect; gs_effect_t *deinterlace_linear_2x_effect; gs_effect_t *deinterlace_blend_effect; gs_effect_t *deinterlace_blend_2x_effect; gs_effect_t *deinterlace_yadif_effect; gs_effect_t *deinterlace_yadif_2x_effect; float sdr_white_level; float hdr_nominal_peak_level; pthread_mutex_t task_mutex; struct deque tasks; pthread_mutex_t encoder_group_mutex; DARRAY(obs_weak_encoder_t *) ready_encoder_groups; pthread_mutex_t mixes_mutex; DARRAY(struct obs_core_video_mix *) mixes; }; extern void add_ready_encoder_group(obs_encoder_t *encoder); struct audio_monitor; struct obs_core_audio { audio_t *audio; DARRAY(struct obs_source *) render_order; DARRAY(struct obs_source *) root_nodes; uint64_t buffered_ts; struct deque buffered_timestamps; uint64_t buffering_wait_ticks; int total_buffering_ticks; int max_buffering_ticks; bool 
fixed_buffer; pthread_mutex_t monitoring_mutex; DARRAY(struct audio_monitor *) monitors; char *monitoring_device_name; char *monitoring_device_id; pthread_mutex_t task_mutex; struct deque tasks; struct obs_source *monitoring_duplicating_source; }; /* user sources, output channels, and displays */ struct obs_core_data { /* Hash tables (uthash) */ struct obs_source *sources; /* Lookup by UUID (hh_uuid) */ struct obs_source *public_sources; /* Lookup by name (hh) */ struct obs_canvas *canvases; /* Lookup by UUID (hh_uuid) */ struct obs_canvas *named_canvases; /* Lookup by name (hh) */ /* Linked lists */ struct obs_source *first_audio_source; struct obs_display *first_display; struct obs_output *first_output; struct obs_encoder *first_encoder; struct obs_service *first_service; pthread_mutex_t sources_mutex; pthread_mutex_t displays_mutex; pthread_mutex_t outputs_mutex; pthread_mutex_t encoders_mutex; pthread_mutex_t services_mutex; pthread_mutex_t audio_sources_mutex; pthread_mutex_t draw_callbacks_mutex; pthread_mutex_t canvases_mutex; DARRAY(struct draw_callback) draw_callbacks; DARRAY(struct rendered_callback) rendered_callbacks; DARRAY(struct tick_callback) tick_callbacks; /* Main canvas, guaranteed to exist for the lifetime of the program */ struct obs_canvas *main_canvas; long long unnamed_index; obs_data_t *private_data; volatile bool valid; DARRAY(char *) protocols; DARRAY(obs_source_t *) sources_to_tick; }; /* user hotkeys */ struct obs_core_hotkeys { pthread_mutex_t mutex; obs_hotkey_t *hotkeys; obs_hotkey_id next_id; obs_hotkey_pair_t *hotkey_pairs; obs_hotkey_pair_id next_pair_id; pthread_t hotkey_thread; bool hotkey_thread_initialized; os_event_t *stop_event; bool thread_disable_press; bool strict_modifiers; bool reroute_hotkeys; DARRAY(obs_hotkey_binding_t) bindings; obs_hotkey_callback_router_func router_func; void *router_func_data; obs_hotkeys_platform_t *platform_context; pthread_once_t name_map_init_token; struct obs_hotkey_name_map_item *name_map; 
signal_handler_t *signals; char *translations[OBS_KEY_LAST_VALUE]; char *mute; char *unmute; char *push_to_mute; char *push_to_talk; char *sceneitem_show; char *sceneitem_hide; }; typedef DARRAY(struct obs_source_info) obs_source_info_array_t; struct obs_core { struct obs_module *first_module; struct obs_module *first_disabled_module; DARRAY(struct obs_module_path) module_paths; DARRAY(char *) safe_modules; DARRAY(char *) disabled_modules; DARRAY(char *) core_modules; obs_source_info_array_t source_types; obs_source_info_array_t input_types; obs_source_info_array_t filter_types; obs_source_info_array_t transition_types; DARRAY(struct obs_output_info) output_types; DARRAY(struct obs_encoder_info) encoder_types; DARRAY(struct obs_service_info) service_types; signal_handler_t *signals; proc_handler_t *procs; char *locale; char *module_config_path; bool name_store_owned; profiler_name_store_t *name_store; /* segmented into multiple sub-structures to keep things a bit more * clean and organized */ struct obs_core_video video; struct obs_core_audio audio; struct obs_core_data data; struct obs_core_hotkeys hotkeys; os_task_queue_t *destruction_task_thread; obs_task_handler_t ui_task_handler; }; extern struct obs_core *obs; struct obs_graphics_context { uint64_t last_time; uint64_t interval; uint64_t frame_time_total_ns; uint64_t fps_total_ns; uint32_t fps_total_frames; const char *video_thread_name; }; extern void *obs_graphics_thread(void *param); extern bool obs_graphics_thread_loop(struct obs_graphics_context *context); #ifdef __APPLE__ extern void *obs_graphics_thread_autorelease(void *param); extern bool obs_graphics_thread_loop_autorelease(struct obs_graphics_context *context); #endif extern gs_effect_t *obs_load_effect(gs_effect_t **effect, const char *file); extern bool audio_callback(void *param, uint64_t start_ts_in, uint64_t end_ts_in, uint64_t *out_ts, uint32_t mixers, struct audio_output_data *mixes); extern struct obs_core_video_mix 
*get_mix_for_video(video_t *video); extern void start_raw_video(video_t *video, const struct video_scale_info *conversion, uint32_t frame_rate_divisor, void (*callback)(void *param, struct video_data *frame), void *param); extern void stop_raw_video(video_t *video, void (*callback)(void *param, struct video_data *frame), void *param); /* ------------------------------------------------------------------------- */ /* obs shared context data */ struct obs_weak_ref { volatile long refs; volatile long weak_refs; }; struct obs_weak_object { struct obs_weak_ref ref; struct obs_context_data *object; }; typedef void (*obs_destroy_cb)(void *obj); struct obs_context_data { char *name; const char *uuid; void *data; obs_data_t *settings; signal_handler_t *signals; proc_handler_t *procs; enum obs_obj_type type; struct obs_weak_object *control; obs_destroy_cb destroy; DARRAY(obs_hotkey_id) hotkeys; DARRAY(obs_hotkey_pair_id) hotkey_pairs; obs_data_t *hotkey_data; DARRAY(char *) rename_cache; pthread_mutex_t rename_cache_mutex; pthread_mutex_t *mutex; struct obs_context_data *next; struct obs_context_data **prev_next; UT_hash_handle hh; UT_hash_handle hh_uuid; bool private; }; extern bool obs_context_data_init(struct obs_context_data *context, enum obs_obj_type type, obs_data_t *settings, const char *name, const char *uuid, obs_data_t *hotkey_data, bool private); extern void obs_context_init_control(struct obs_context_data *context, void *object, obs_destroy_cb destroy); extern void obs_context_data_free(struct obs_context_data *context); extern void obs_context_data_insert(struct obs_context_data *context, pthread_mutex_t *mutex, void *first); extern void obs_context_data_insert_name(struct obs_context_data *context, pthread_mutex_t *mutex, void *first); extern void obs_context_data_insert_uuid(struct obs_context_data *context, pthread_mutex_t *mutex, void *first_uuid); extern void obs_context_data_remove(struct obs_context_data *context); extern void 
obs_context_data_remove_name(struct obs_context_data *context, pthread_mutex_t *mutex, void *phead); extern void obs_context_data_remove_uuid(struct obs_context_data *context, pthread_mutex_t *mutex, void *puuid_head); extern void obs_context_wait(struct obs_context_data *context); extern void obs_context_data_setname(struct obs_context_data *context, const char *name); extern void obs_context_data_setname_ht(struct obs_context_data *context, const char *name, void *phead); /* ------------------------------------------------------------------------- */ /* ref-counting */ static inline void obs_ref_addref(struct obs_weak_ref *ref) { os_atomic_inc_long(&ref->refs); } static inline bool obs_ref_release(struct obs_weak_ref *ref) { return os_atomic_dec_long(&ref->refs) == -1; } static inline void obs_weak_ref_addref(struct obs_weak_ref *ref) { os_atomic_inc_long(&ref->weak_refs); } static inline bool obs_weak_ref_release(struct obs_weak_ref *ref) { return os_atomic_dec_long(&ref->weak_refs) == -1; } static inline bool obs_weak_ref_get_ref(struct obs_weak_ref *ref) { long owners = os_atomic_load_long(&ref->refs); while (owners > -1) { if (os_atomic_compare_exchange_long(&ref->refs, &owners, owners + 1)) { return true; } } return false; } static inline bool obs_weak_ref_expired(struct obs_weak_ref *ref) { long owners = os_atomic_load_long(&ref->refs); return owners < 0; } /* ------------------------------------------------------------------------- */ /* canvases */ struct obs_weak_canvas { struct obs_weak_ref ref; struct obs_canvas *canvas; }; struct obs_canvas { struct obs_context_data context; /* obs_canvas_flags */ uint32_t flags; /* Video info for this canvas, FPS ignored */ struct obs_video_info ovi; /* Hash table containing scenes (and groups) associated with this canvas */ struct obs_source *sources; pthread_mutex_t sources_mutex; /* For now, canvas objects mainly act as a proxy for the existing view and video mix objects, * though this may change in the future. 
*/ struct obs_view view; struct obs_core_video_mix *mix; }; extern obs_canvas_t *obs_create_main_canvas(void); extern void obs_canvas_destroy(obs_canvas_t *canvas); extern void obs_canvas_clear_mix(obs_canvas_t *canvas); extern void obs_free_canvas_mixes(void); extern bool obs_canvas_reset_video_internal(obs_canvas_t *canvas, struct obs_video_info *ovi); extern void obs_canvas_insert_source(obs_canvas_t *canvas, obs_source_t *source); extern void obs_canvas_remove_source(obs_source_t *source); extern void obs_canvas_rename_source(obs_source_t *source, const char *name); /* ------------------------------------------------------------------------- */ /* sources */ struct async_frame { struct obs_source_frame *frame; long unused_count; bool used; }; enum audio_action_type { AUDIO_ACTION_VOL, AUDIO_ACTION_MUTE, AUDIO_ACTION_PTT, AUDIO_ACTION_PTM, }; struct audio_action { uint64_t timestamp; enum audio_action_type type; union { float vol; bool set; }; }; struct obs_weak_source { struct obs_weak_ref ref; struct obs_source *source; }; struct audio_cb_info { obs_source_audio_capture_t callback; void *param; }; struct caption_cb_info { obs_source_caption_t callback; void *param; }; enum media_action_type { MEDIA_ACTION_NONE, MEDIA_ACTION_PLAY_PAUSE, MEDIA_ACTION_RESTART, MEDIA_ACTION_STOP, MEDIA_ACTION_NEXT, MEDIA_ACTION_PREVIOUS, MEDIA_ACTION_SET_TIME, }; struct media_action { enum media_action_type type; union { bool pause; int64_t ms; }; }; struct obs_source { struct obs_context_data context; struct obs_source_info info; /* general exposed flags that can be set for the source */ uint32_t flags; uint32_t default_flags; uint32_t last_obs_ver; /* indicates ownership of the info.id buffer */ bool owns_info_id; /* signals to call the source update in the video thread */ long defer_update_count; /* ensures show/hide are only called once */ volatile long show_refs; /* ensures activate/deactivate are only called once */ volatile long activate_refs; /* source is in the process of 
being destroyed */ volatile long destroying; /* used to indicate that the source has been removed and all * references to it should be released (not exactly how I would prefer * to handle things but it's the best option) */ bool removed; /* used to indicate if the source should show up when queried for user ui */ bool temp_removed; bool active; bool showing; /* used to temporarily disable sources if needed */ bool enabled; /* hint to allow sources to render more quickly */ bool texcoords_centered; /* timing (if video is present, is based upon video) */ volatile bool timing_set; volatile uint64_t timing_adjust; uint64_t resample_offset; uint64_t next_audio_ts_min; uint64_t next_audio_sys_ts_min; uint64_t last_frame_ts; uint64_t last_sys_timestamp; bool async_rendered; /* audio */ bool audio_failed; bool audio_pending; bool pending_stop; bool audio_active; bool user_muted; bool muted; struct obs_source *next_audio_source; struct obs_source **prev_next_audio_source; uint64_t audio_ts; struct deque audio_input_buf[MAX_AUDIO_CHANNELS]; size_t last_audio_input_buf_size; DARRAY(struct audio_action) audio_actions; float *audio_output_buf[MAX_AUDIO_MIXES][MAX_AUDIO_CHANNELS]; float *audio_mix_buf[MAX_AUDIO_CHANNELS]; struct resample_info sample_info; audio_resampler_t *resampler; pthread_mutex_t audio_actions_mutex; pthread_mutex_t audio_buf_mutex; pthread_mutex_t audio_mutex; pthread_mutex_t audio_cb_mutex; DARRAY(struct audio_cb_info) audio_cb_list; struct obs_audio_data audio_data; size_t audio_storage_size; uint32_t audio_mixers; float user_volume; float volume; int64_t sync_offset; int64_t last_sync_offset; float balance; /* audio_is_duplicated: tracks whether a source appears multiple times in the audio tree during this tick */ bool audio_is_duplicated; /* async video data */ gs_texture_t *async_textures[MAX_AV_PLANES]; gs_texrender_t *async_texrender; struct obs_source_frame *cur_async_frame; bool async_gpu_conversion; enum video_format async_format; bool 
async_full_range; uint8_t async_trc; enum video_format async_cache_format; bool async_cache_full_range; uint8_t async_cache_trc; enum gs_color_format async_texture_formats[MAX_AV_PLANES]; int async_channel_count; long async_rotation; bool async_flip; bool async_linear_alpha; bool async_active; bool async_update_texture; bool async_unbuffered; bool async_decoupled; struct obs_source_frame *async_preload_frame; DARRAY(struct async_frame) async_cache; DARRAY(struct obs_source_frame *) async_frames; pthread_mutex_t async_mutex; uint32_t async_width; uint32_t async_height; uint32_t async_cache_width; uint32_t async_cache_height; uint32_t async_convert_width[MAX_AV_PLANES]; uint32_t async_convert_height[MAX_AV_PLANES]; uint64_t async_last_rendered_ts; pthread_mutex_t caption_cb_mutex; DARRAY(struct caption_cb_info) caption_cb_list; /* async video deinterlacing */ uint64_t deinterlace_offset; uint64_t deinterlace_frame_ts; gs_effect_t *deinterlace_effect; struct obs_source_frame *prev_async_frame; gs_texture_t *async_prev_textures[MAX_AV_PLANES]; gs_texrender_t *async_prev_texrender; uint32_t deinterlace_half_duration; enum obs_deinterlace_mode deinterlace_mode; bool deinterlace_top_first; bool deinterlace_rendered; /* filters */ struct obs_source *filter_parent; struct obs_source *filter_target; DARRAY(struct obs_source *) filters; pthread_mutex_t filter_mutex; gs_texrender_t *filter_texrender; enum obs_allow_direct_render allow_direct; bool rendering_filter; bool filter_bypass_active; /* sources specific hotkeys */ obs_hotkey_pair_id mute_unmute_key; obs_hotkey_id push_to_mute_key; obs_hotkey_id push_to_talk_key; bool push_to_mute_enabled; bool push_to_mute_pressed; bool user_push_to_mute_pressed; bool push_to_talk_enabled; bool push_to_talk_pressed; bool user_push_to_talk_pressed; uint64_t push_to_mute_delay; uint64_t push_to_mute_stop_time; uint64_t push_to_talk_delay; uint64_t push_to_talk_stop_time; /* transitions */ uint64_t transition_start_time; uint64_t 
transition_duration; pthread_mutex_t transition_tex_mutex; gs_texrender_t *transition_texrender[2]; pthread_mutex_t transition_mutex; obs_source_t *transition_sources[2]; float transition_manual_clamp; float transition_manual_torque; float transition_manual_target; float transition_manual_val; bool transitioning_video; bool transitioning_audio; bool transition_source_active[2]; uint32_t transition_alignment; uint32_t transition_actual_cx; uint32_t transition_actual_cy; uint32_t transition_cx; uint32_t transition_cy; uint32_t transition_fixed_duration; bool transition_use_fixed_duration; enum obs_transition_mode transition_mode; enum obs_transition_scale_type transition_scale_type; struct matrix4 transition_matrices[2]; /* color space */ gs_texrender_t *color_space_texrender; /* audio monitoring */ struct audio_monitor *monitor; enum obs_monitoring_type monitoring_type; /* media action queue */ DARRAY(struct media_action) media_actions; pthread_mutex_t media_actions_mutex; /* private data */ obs_data_t *private_settings; /* canvas this source belongs to (only used for scenes) */ obs_weak_canvas_t *canvas; }; extern struct obs_source_info *get_source_info(const char *id); extern struct obs_source_info *get_source_info2(const char *unversioned_id, uint32_t ver); extern bool obs_source_init_context(struct obs_source *source, obs_data_t *settings, const char *name, const char *uuid, obs_data_t *hotkey_data, bool private); extern bool obs_transition_init(obs_source_t *transition); extern void obs_transition_free(obs_source_t *transition); extern void obs_transition_tick(obs_source_t *transition, float t); extern void obs_transition_enum_sources(obs_source_t *transition, obs_source_enum_proc_t enum_callback, void *param); extern void obs_transition_save(obs_source_t *source, obs_data_t *data); extern void obs_transition_load(obs_source_t *source, obs_data_t *data); struct audio_monitor *audio_monitor_create(obs_source_t *source); void audio_monitor_reset(struct 
audio_monitor *monitor); extern void audio_monitor_destroy(struct audio_monitor *monitor); extern obs_source_t *obs_source_create_canvas(obs_canvas_t *canvas, const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data); extern obs_source_t *obs_source_create_set_last_ver(obs_canvas_t *canvas, const char *id, const char *name, const char *uuid, obs_data_t *settings, obs_data_t *hotkey_data, uint32_t last_obs_ver, bool is_private); extern void obs_source_destroy(struct obs_source *source); extern void obs_source_addref(obs_source_t *source); static inline void obs_source_dosignal(struct obs_source *source, const char *signal_obs, const char *signal_source) { struct calldata data; uint8_t stack[128]; calldata_init_fixed(&data, stack, sizeof(stack)); calldata_set_ptr(&data, "source", source); if (signal_obs && !source->context.private) signal_handler_signal(obs->signals, signal_obs, &data); if (signal_source) signal_handler_signal(source->context.signals, signal_source, &data); } static inline void obs_source_dosignal_canvas(struct obs_source *source, struct obs_canvas *canvas, const char *signal_obs, const char *signal_source) { struct calldata data; uint8_t stack[128]; calldata_init_fixed(&data, stack, sizeof(stack)); calldata_set_ptr(&data, "source", source); calldata_set_ptr(&data, "canvas", canvas); if (signal_obs && !source->context.private) signal_handler_signal(obs->signals, signal_obs, &data); if (signal_source) signal_handler_signal(source->context.signals, signal_source, &data); } /* maximum timestamp variance in nanoseconds */ #define MAX_TS_VAR 2000000000ULL static inline bool frame_out_of_bounds(const obs_source_t *source, uint64_t ts) { if (ts < source->last_frame_ts) return ((source->last_frame_ts - ts) > MAX_TS_VAR); else return ((ts - source->last_frame_ts) > MAX_TS_VAR); } static inline enum gs_color_format convert_video_format(enum video_format format, enum video_trc trc) { switch (trc) { case VIDEO_TRC_PQ: case VIDEO_TRC_HLG: 
return GS_RGBA16F; default: switch (format) { case VIDEO_FORMAT_RGBA: return GS_RGBA; case VIDEO_FORMAT_BGRA: case VIDEO_FORMAT_I40A: case VIDEO_FORMAT_I42A: case VIDEO_FORMAT_YUVA: case VIDEO_FORMAT_AYUV: return GS_BGRA; case VIDEO_FORMAT_I010: case VIDEO_FORMAT_P010: case VIDEO_FORMAT_I210: case VIDEO_FORMAT_I412: case VIDEO_FORMAT_YA2L: case VIDEO_FORMAT_P216: case VIDEO_FORMAT_P416: case VIDEO_FORMAT_V210: case VIDEO_FORMAT_R10L: return GS_RGBA16F; default: return GS_BGRX; } } } static inline enum gs_color_space convert_video_space(enum video_format format, enum video_trc trc) { enum gs_color_space space = GS_CS_SRGB; if (convert_video_format(format, trc) == GS_RGBA16F) { switch (trc) { case VIDEO_TRC_DEFAULT: case VIDEO_TRC_SRGB: space = GS_CS_SRGB_16F; break; case VIDEO_TRC_PQ: case VIDEO_TRC_HLG: space = GS_CS_709_EXTENDED; } } return space; } extern void obs_source_set_texcoords_centered(obs_source_t *source, bool centered); extern void obs_source_activate(obs_source_t *source, enum view_type type); extern void obs_source_deactivate(obs_source_t *source, enum view_type type); extern void obs_source_video_tick(obs_source_t *source, float seconds); extern float obs_source_get_target_volume(obs_source_t *source, obs_source_t *target); extern uint64_t obs_source_get_last_async_ts(const obs_source_t *source); extern void obs_source_audio_render(obs_source_t *source, uint32_t mixers, size_t channels, size_t sample_rate, size_t size); extern void add_alignment(struct vec2 *v, uint32_t align, int cx, int cy); extern struct obs_source_frame *filter_async_video(obs_source_t *source, struct obs_source_frame *in); extern bool update_async_texture(struct obs_source *source, const struct obs_source_frame *frame, gs_texture_t *tex, gs_texrender_t *texrender); extern bool update_async_textures(struct obs_source *source, const struct obs_source_frame *frame, gs_texture_t *tex[MAX_AV_PLANES], gs_texrender_t *texrender); extern bool set_async_texture_size(struct obs_source 
*source, const struct obs_source_frame *frame); extern void remove_async_frame(obs_source_t *source, struct obs_source_frame *frame); extern void set_deinterlace_texture_size(obs_source_t *source); extern void deinterlace_process_last_frame(obs_source_t *source, uint64_t sys_time); extern void deinterlace_update_async_video(obs_source_t *source); extern void deinterlace_render(obs_source_t *s); /* ------------------------------------------------------------------------- */ /* outputs */ enum delay_msg { DELAY_MSG_PACKET, DELAY_MSG_START, DELAY_MSG_STOP, }; struct delay_data { enum delay_msg msg; uint64_t ts; struct encoder_packet packet; bool packet_time_valid; struct encoder_packet_time packet_time; }; typedef void (*encoded_callback_t)(void *data, struct encoder_packet *packet, struct encoder_packet_time *frame_time); struct obs_weak_output { struct obs_weak_ref ref; struct obs_output *output; }; #define CAPTION_LINE_CHARS (32) #define CAPTION_LINE_BYTES (4 * CAPTION_LINE_CHARS) struct caption_text { char text[CAPTION_LINE_BYTES + 1]; double display_duration; struct caption_text *next; }; struct caption_track_data { struct caption_text *caption_head; struct caption_text *caption_tail; pthread_mutex_t caption_mutex; double caption_timestamp; double last_caption_timestamp; struct deque caption_data; }; struct pause_data { pthread_mutex_t mutex; uint64_t last_video_ts; uint64_t ts_start; uint64_t ts_end; uint64_t ts_offset; }; extern bool video_pause_check(struct pause_data *pause, uint64_t timestamp); extern bool audio_pause_check(struct pause_data *pause, struct audio_data *data, size_t sample_rate); extern void pause_reset(struct pause_data *pause); enum keyframe_group_track_status { KEYFRAME_TRACK_STATUS_NOT_SEEN = 0, KEYFRAME_TRACK_STATUS_SEEN = 1, KEYFRAME_TRACK_STATUS_SKIPPED = 2, }; struct keyframe_group_data { uintptr_t group_id; int64_t pts; uint32_t required_tracks; enum keyframe_group_track_status seen_on_track[MAX_OUTPUT_VIDEO_ENCODERS]; }; struct 
obs_output { struct obs_context_data context; struct obs_output_info info; /* indicates ownership of the info.id buffer */ bool owns_info_id; bool received_video[MAX_OUTPUT_VIDEO_ENCODERS]; DARRAY(struct keyframe_group_data) keyframe_group_tracking; bool received_audio; volatile bool data_active; volatile bool end_data_capture_thread_active; int64_t video_offsets[MAX_OUTPUT_VIDEO_ENCODERS]; int64_t audio_offsets[MAX_OUTPUT_AUDIO_ENCODERS]; int64_t highest_audio_ts; int64_t highest_video_ts[MAX_OUTPUT_VIDEO_ENCODERS]; pthread_t end_data_capture_thread; os_event_t *stopping_event; pthread_mutex_t interleaved_mutex; DARRAY(struct encoder_packet) interleaved_packets; size_t interleaver_max_batch_size; int stop_code; int reconnect_retry_sec; int reconnect_retry_max; int reconnect_retries; uint32_t reconnect_retry_cur_msec; float reconnect_retry_exp; pthread_t reconnect_thread; os_event_t *reconnect_stop_event; volatile bool reconnecting; volatile bool reconnect_thread_active; uint32_t starting_drawn_count; uint32_t starting_lagged_count; int total_frames; volatile bool active; volatile bool paused; video_t *video; audio_t *audio; obs_encoder_t *video_encoders[MAX_OUTPUT_VIDEO_ENCODERS]; obs_encoder_t *audio_encoders[MAX_OUTPUT_AUDIO_ENCODERS]; obs_service_t *service; size_t mixer_mask; struct pause_data pause; struct deque audio_buffer[MAX_AUDIO_MIXES][MAX_AV_PLANES]; uint64_t audio_start_ts; uint64_t video_start_ts; size_t audio_size; size_t planes; size_t sample_rate; size_t total_audio_frames; uint32_t scaled_width; uint32_t scaled_height; bool video_conversion_set; bool audio_conversion_set; struct video_scale_info video_conversion; struct audio_convert_info audio_conversion; // captions are output per track struct caption_track_data *caption_tracks[MAX_OUTPUT_VIDEO_ENCODERS]; DARRAY(struct encoder_packet_time) encoder_packet_times[MAX_OUTPUT_VIDEO_ENCODERS]; /* Packet callbacks */ pthread_mutex_t pkt_callbacks_mutex; DARRAY(struct packet_callback) pkt_callbacks; 
struct reconnect_callback reconnect_callback; bool valid; uint64_t active_delay_ns; encoded_callback_t delay_callback; struct deque delay_data; /* struct delay_data */ pthread_mutex_t delay_mutex; uint32_t delay_sec; uint32_t delay_flags; uint32_t delay_cur_flags; volatile long delay_restart_refs; volatile bool delay_active; volatile bool delay_capturing; char *last_error_message; float audio_data[MAX_AUDIO_CHANNELS][AUDIO_OUTPUT_FRAMES]; }; static inline void do_output_signal(struct obs_output *output, const char *signal) { struct calldata params = {0}; calldata_set_ptr(&params, "output", output); signal_handler_signal(output->context.signals, signal, &params); calldata_free(&params); } extern void process_delay(void *data, struct encoder_packet *packet, struct encoder_packet_time *packet_time); extern void obs_output_cleanup_delay(obs_output_t *output); extern bool obs_output_delay_start(obs_output_t *output); extern void obs_output_delay_stop(obs_output_t *output); extern bool obs_output_actual_start(obs_output_t *output); extern void obs_output_actual_stop(obs_output_t *output, bool force, uint64_t ts); extern const struct obs_output_info *find_output(const char *id); extern void obs_output_remove_encoder(struct obs_output *output, struct obs_encoder *encoder); extern void obs_encoder_packet_create_instance(struct encoder_packet *dst, const struct encoder_packet *src); void obs_output_destroy(obs_output_t *output); /* ------------------------------------------------------------------------- */ /* encoders */ struct obs_weak_encoder { struct obs_weak_ref ref; struct obs_encoder *encoder; }; struct encoder_callback { bool sent_first_packet; encoded_callback_t new_packet; void *param; }; struct obs_encoder_group { pthread_mutex_t mutex; /* allows group to be destroyed even if some encoders are active */ bool destroy_on_stop; /* holds strong references to all encoders */ DARRAY(struct obs_encoder *) encoders; uint32_t num_encoders_started; uint64_t start_timestamp; }; struct
obs_encoder { struct obs_context_data context; struct obs_encoder_info info; /* allows re-routing to another encoder */ struct obs_encoder_info orig_info; pthread_mutex_t init_mutex; uint32_t samplerate; size_t planes; size_t blocksize; size_t framesize; size_t framesize_bytes; size_t mixer_idx; /* OBS_SCALE_DISABLE indicates GPU scaling is disabled */ enum obs_scale_type gpu_scale_type; uint32_t scaled_width; uint32_t scaled_height; enum video_format preferred_format; enum video_colorspace preferred_space; enum video_range_type preferred_range; volatile bool active; volatile bool paused; bool initialized; /* indicates ownership of the info.id buffer */ bool owns_info_id; uint32_t timebase_num; uint32_t timebase_den; // allow outputting at fractions of main composition FPS, // e.g. 60 FPS with frame_rate_divisor = 1 turns into 30 FPS // // a separate counter is used in favor of using remainder calculations // to allow "inputs" started at the same time to start on the same frame // whereas with remainder calculation the frame alignment would depend on // the total frame count at the time the encoder was started uint32_t frame_rate_divisor; uint32_t frame_rate_divisor_counter; // only used for GPU encoders video_t *fps_override; // Number of frames successfully encoded uint32_t encoded_frames; /* Regions of interest to prioritize during encoding */ pthread_mutex_t roi_mutex; DARRAY(struct obs_encoder_roi) roi; uint32_t roi_increment; int64_t cur_pts; struct deque audio_input_buffer[MAX_AV_PLANES]; uint8_t *audio_output_buffer[MAX_AV_PLANES]; /* if a video encoder is paired with an audio encoder, make it start * up at the specific timestamp. 
if this is the audio encoder, * it waits until it's ready to sync up with video */ bool first_received; DARRAY(struct obs_weak_encoder *) paired_encoders; int64_t offset_usec; uint64_t first_raw_ts; uint64_t start_ts; /* track encoders that are part of a gop-aligned multi track group */ struct obs_encoder_group *encoder_group; pthread_mutex_t outputs_mutex; DARRAY(obs_output_t *) outputs; /* stores the video/audio media output pointer. video_t *or audio_t **/ void *media; /* Stores the original video if GPU scaling is enabled and `media` can be overwritten. */ video_t *original_video; pthread_mutex_t callbacks_mutex; DARRAY(struct encoder_callback) callbacks; DARRAY(struct encoder_packet_time) encoder_packet_times; struct pause_data pause; const char *profile_encoder_encode_name; char *last_error_message; /* reconfigure encoder at next possible opportunity */ bool reconfigure_requested; }; extern struct obs_encoder_info *find_encoder(const char *id); extern bool obs_encoder_initialize(obs_encoder_t *encoder); extern void obs_encoder_shutdown(obs_encoder_t *encoder); extern void obs_encoder_start(obs_encoder_t *encoder, encoded_callback_t new_packet, void *param); extern void obs_encoder_stop(obs_encoder_t *encoder, encoded_callback_t new_packet, void *param); extern void obs_encoder_add_output(struct obs_encoder *encoder, struct obs_output *output); extern void obs_encoder_remove_output(struct obs_encoder *encoder, struct obs_output *output); extern bool start_gpu_encode(obs_encoder_t *encoder); extern void stop_gpu_encode(obs_encoder_t *encoder); extern bool do_encode(struct obs_encoder *encoder, struct encoder_frame *frame, const uint64_t *frame_cts); extern void send_off_encoder_packet(obs_encoder_t *encoder, bool success, bool received, struct encoder_packet *pkt); void obs_encoder_destroy(obs_encoder_t *encoder); /* ------------------------------------------------------------------------- */ /* services */ struct obs_weak_service { struct obs_weak_ref ref; 
struct obs_service *service; }; struct obs_service { struct obs_context_data context; struct obs_service_info info; /* indicates ownership of the info.id buffer */ bool owns_info_id; bool active; bool destroy; struct obs_output *output; }; extern const struct obs_service_info *find_service(const char *id); extern void obs_service_activate(struct obs_service *service); extern void obs_service_deactivate(struct obs_service *service, bool remove); extern bool obs_service_initialize(struct obs_service *service, struct obs_output *output); void obs_service_destroy(obs_service_t *service); void obs_output_remove_encoder_internal(struct obs_output *output, struct obs_encoder *encoder); /** Internal Source Profiler functions **/ /* Start of frame in graphics loop */ extern void source_profiler_frame_begin(void); /* Process data collected during frame */ extern void source_profiler_frame_collect(void); /* Start/end of outputs being rendered (GPU timer begin/end) */ extern void source_profiler_render_begin(void); extern void source_profiler_render_end(void); /* Reset settings, buffers, and GPU timers when video settings change */ extern void source_profiler_reset_video(struct obs_video_info *ovi); /* Signal that source received an async frame */ extern void source_profiler_async_frame_received(obs_source_t *source); /* Get timestamp for start of tick */ extern uint64_t source_profiler_source_tick_start(void); /* Submit start timestamp for source */ extern void source_profiler_source_tick_end(obs_source_t *source, uint64_t start); /* Obtain GPU timer and start timestamp for render start of a source. 
*/ extern uint64_t source_profiler_source_render_begin(gs_timer_t **timer); /* Submit start timestamp and GPU timer after rendering source */ extern void source_profiler_source_render_end(obs_source_t *source, uint64_t start, gs_timer_t *timer); /* Remove source from profiler hashmaps */ extern void source_profiler_remove_source(obs_source_t *source);

/* ==== obs-studio-32.1.0-sources/libobs/util/pipe-posix.c ==== */

/* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/

#include <errno.h>
#include <fcntl.h>
#include <spawn.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#include "bmem.h"
#include "pipe.h"

extern char **environ; struct os_process_pipe { bool read_pipe; int pid; FILE *file; FILE *err_file; }; os_process_pipe_t *os_process_pipe_create_internal(const char *bin, char **argv, const char *type) { struct os_process_pipe process_pipe = {0}; struct os_process_pipe *out; posix_spawn_file_actions_t file_actions; if (!bin || !argv || !type) { return NULL; } process_pipe.read_pipe = *type == 'r'; int mainfds[2] = {0}; int errfds[2] = {0}; if (pipe(mainfds) != 0) { return NULL; } if (pipe(errfds) != 0) { close(mainfds[0]); close(mainfds[1]); return NULL; } if (posix_spawn_file_actions_init(&file_actions) != 0) { close(mainfds[0]); close(mainfds[1]); close(errfds[0]); close(errfds[1]); return NULL; } fcntl(mainfds[0], F_SETFD, FD_CLOEXEC); fcntl(mainfds[1], F_SETFD, FD_CLOEXEC); fcntl(errfds[0], F_SETFD, FD_CLOEXEC); fcntl(errfds[1], F_SETFD, FD_CLOEXEC); if (process_pipe.read_pipe) { posix_spawn_file_actions_addclose(&file_actions, mainfds[0]); if (mainfds[1] != STDOUT_FILENO) { posix_spawn_file_actions_adddup2(&file_actions, mainfds[1], STDOUT_FILENO); posix_spawn_file_actions_addclose(&file_actions, mainfds[1]); } } else { posix_spawn_file_actions_addclose(&file_actions, mainfds[1]); if (mainfds[0] != STDIN_FILENO) { posix_spawn_file_actions_adddup2(&file_actions, mainfds[0], STDIN_FILENO); posix_spawn_file_actions_addclose(&file_actions, mainfds[0]); } } posix_spawn_file_actions_addclose(&file_actions, errfds[0]); if (errfds[1] != STDERR_FILENO) { posix_spawn_file_actions_adddup2(&file_actions, errfds[1], STDERR_FILENO); posix_spawn_file_actions_addclose(&file_actions, errfds[1]); } int pid; int ret = posix_spawn(&pid, bin, &file_actions, NULL, (char *const *)argv, environ); posix_spawn_file_actions_destroy(&file_actions); if (ret != 0) { close(mainfds[0]); close(mainfds[1]); close(errfds[0]); close(errfds[1]); return NULL; } close(errfds[1]); process_pipe.err_file =
fdopen(errfds[0], "r"); if (process_pipe.read_pipe) { close(mainfds[1]); process_pipe.file = fdopen(mainfds[0], "r"); } else { close(mainfds[0]); process_pipe.file = fdopen(mainfds[1], "w"); } process_pipe.pid = pid; out = bmalloc(sizeof(os_process_pipe_t)); *out = process_pipe; return out; } os_process_pipe_t *os_process_pipe_create(const char *cmd_line, const char *type) { if (!cmd_line) return NULL; char *argv[4] = {"sh", "-c", (char *)cmd_line, NULL}; return os_process_pipe_create_internal("/bin/sh", argv, type); } os_process_pipe_t *os_process_pipe_create2(const os_process_args_t *args, const char *type) { char **argv = os_process_args_get_argv(args); return os_process_pipe_create_internal(argv[0], argv, type); } int os_process_pipe_destroy(os_process_pipe_t *pp) { int ret = 0; if (pp) { int status; fclose(pp->file); pp->file = NULL; fclose(pp->err_file); pp->err_file = NULL; do { ret = waitpid(pp->pid, &status, 0); } while (ret == -1 && errno == EINTR); if (WIFEXITED(status)) ret = (int)(char)WEXITSTATUS(status); bfree(pp); } return ret; } size_t os_process_pipe_read(os_process_pipe_t *pp, uint8_t *data, size_t len) { if (!pp) { return 0; } if (!pp->read_pipe) { return 0; } return fread(data, 1, len, pp->file); } size_t os_process_pipe_read_err(os_process_pipe_t *pp, uint8_t *data, size_t len) { if (!pp) { return 0; } return fread(data, 1, len, pp->err_file); } size_t os_process_pipe_write(os_process_pipe_t *pp, const uint8_t *data, size_t len) { if (!pp) { return 0; } if (pp->read_pipe) { return 0; } size_t written = 0; while (written < len) { size_t ret = fwrite(data + written, 1, len - written, pp->file); if (!ret) return written; written += ret; } return written; }

/* ==== obs-studio-32.1.0-sources/libobs/util/dstr.h ==== */

/* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted,
provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */

#pragma once

#include <stdarg.h>
#include <string.h>

#include "c99defs.h"
#include "bmem.h"

/* * Dynamic string * * Helper struct/functions for dynamically sizing string buffers. */ #ifdef __cplusplus extern "C" { #endif struct strref; struct dstr { char *array; size_t len; /* number of characters, excluding null terminator */ size_t capacity; }; #ifndef _MSC_VER #define PRINTFATTR(f, a) __attribute__((__format__(__printf__, f, a))) #else #define PRINTFATTR(f, a) #endif EXPORT int astrcmpi(const char *str1, const char *str2); EXPORT int wstrcmpi(const wchar_t *str1, const wchar_t *str2); EXPORT int astrcmp_n(const char *str1, const char *str2, size_t n); EXPORT int wstrcmp_n(const wchar_t *str1, const wchar_t *str2, size_t n); EXPORT int astrcmpi_n(const char *str1, const char *str2, size_t n); EXPORT int wstrcmpi_n(const wchar_t *str1, const wchar_t *str2, size_t n); EXPORT char *astrstri(const char *str, const char *find); EXPORT wchar_t *wstrstri(const wchar_t *str, const wchar_t *find); EXPORT char *strdepad(char *str); EXPORT wchar_t *wcsdepad(wchar_t *str); EXPORT char **strlist_split(const char *str, char split_ch, bool include_empty); EXPORT void strlist_free(char **strlist); static inline void dstr_init(struct dstr *dst); static inline void dstr_init_move(struct dstr *dst, struct dstr *src); static inline void dstr_init_move_array(struct dstr *dst, char *str); static inline void
dstr_init_copy(struct dstr *dst, const char *src); static inline void dstr_init_copy_dstr(struct dstr *dst, const struct dstr *src); EXPORT void dstr_init_copy_strref(struct dstr *dst, const struct strref *src); static inline void dstr_free(struct dstr *dst); static inline void dstr_array_free(struct dstr *array, const size_t count); static inline void dstr_move(struct dstr *dst, struct dstr *src); static inline void dstr_move_array(struct dstr *dst, char *str); EXPORT void dstr_copy(struct dstr *dst, const char *array); static inline void dstr_copy_dstr(struct dstr *dst, const struct dstr *src); EXPORT void dstr_copy_strref(struct dstr *dst, const struct strref *src); EXPORT void dstr_ncopy(struct dstr *dst, const char *array, const size_t len); EXPORT void dstr_ncopy_dstr(struct dstr *dst, const struct dstr *src, const size_t len); static inline void dstr_resize(struct dstr *dst, const size_t num); static inline void dstr_reserve(struct dstr *dst, const size_t num); static inline bool dstr_is_empty(const struct dstr *str); static inline void dstr_cat(struct dstr *dst, const char *array); EXPORT void dstr_cat_dstr(struct dstr *dst, const struct dstr *str); EXPORT void dstr_cat_strref(struct dstr *dst, const struct strref *str); static inline void dstr_cat_ch(struct dstr *dst, char ch); EXPORT void dstr_ncat(struct dstr *dst, const char *array, const size_t len); EXPORT void dstr_ncat_dstr(struct dstr *dst, const struct dstr *str, const size_t len); EXPORT void dstr_insert(struct dstr *dst, const size_t idx, const char *array); EXPORT void dstr_insert_dstr(struct dstr *dst, const size_t idx, const struct dstr *str); EXPORT void dstr_insert_ch(struct dstr *dst, const size_t idx, const char ch); EXPORT void dstr_remove(struct dstr *dst, const size_t idx, const size_t count); PRINTFATTR(2, 3) EXPORT void dstr_printf(struct dstr *dst, const char *format, ...); PRINTFATTR(2, 3) EXPORT void dstr_catf(struct dstr *dst, const char *format, ...); EXPORT void 
dstr_vprintf(struct dstr *dst, const char *format, va_list args); EXPORT void dstr_vcatf(struct dstr *dst, const char *format, va_list args); EXPORT void dstr_safe_printf(struct dstr *dst, const char *format, const char *val1, const char *val2, const char *val3, const char *val4); static inline const char *dstr_find_i(const struct dstr *str, const char *find); static inline const char *dstr_find(const struct dstr *str, const char *find); EXPORT void dstr_replace(struct dstr *str, const char *find, const char *replace); static inline int dstr_cmp(const struct dstr *str1, const char *str2); static inline int dstr_cmpi(const struct dstr *str1, const char *str2); static inline int dstr_ncmp(const struct dstr *str1, const char *str2, const size_t n); static inline int dstr_ncmpi(const struct dstr *str1, const char *str2, const size_t n); EXPORT void dstr_depad(struct dstr *dst); EXPORT void dstr_left(struct dstr *dst, const struct dstr *str, const size_t pos); EXPORT void dstr_mid(struct dstr *dst, const struct dstr *str, const size_t start, const size_t count); EXPORT void dstr_right(struct dstr *dst, const struct dstr *str, const size_t pos); static inline char dstr_end(const struct dstr *str); EXPORT void dstr_from_mbs(struct dstr *dst, const char *mbstr); EXPORT char *dstr_to_mbs(const struct dstr *str); EXPORT void dstr_from_wcs(struct dstr *dst, const wchar_t *wstr); EXPORT wchar_t *dstr_to_wcs(const struct dstr *str); EXPORT void dstr_to_upper(struct dstr *str); EXPORT void dstr_to_lower(struct dstr *str); #undef PRINTFATTR /* ------------------------------------------------------------------------- */ static inline void dstr_init(struct dstr *dst) { dst->array = NULL; dst->len = 0; dst->capacity = 0; } static inline void dstr_init_move_array(struct dstr *dst, char *str) { dst->array = str; dst->len = (!str) ? 
0 : strlen(str); dst->capacity = dst->len + 1; } static inline void dstr_init_move(struct dstr *dst, struct dstr *src) { *dst = *src; dstr_init(src); } static inline void dstr_init_copy(struct dstr *dst, const char *str) { dstr_init(dst); dstr_copy(dst, str); } static inline void dstr_init_copy_dstr(struct dstr *dst, const struct dstr *src) { dstr_init(dst); dstr_copy_dstr(dst, src); } static inline void dstr_free(struct dstr *dst) { bfree(dst->array); dst->array = NULL; dst->len = 0; dst->capacity = 0; } static inline void dstr_array_free(struct dstr *array, const size_t count) { size_t i; for (i = 0; i < count; i++) dstr_free(array + i); } static inline void dstr_move_array(struct dstr *dst, char *str) { dstr_free(dst); dst->array = str; dst->len = (!str) ? 0 : strlen(str); dst->capacity = dst->len + 1; } static inline void dstr_move(struct dstr *dst, struct dstr *src) { dstr_free(dst); dstr_init_move(dst, src); } static inline void dstr_ensure_capacity(struct dstr *dst, const size_t new_size) { size_t new_cap; if (new_size <= dst->capacity) return; new_cap = (!dst->capacity) ? 
new_size : dst->capacity * 2; if (new_size > new_cap) new_cap = new_size; dst->array = (char *)brealloc(dst->array, new_cap); dst->capacity = new_cap; } static inline void dstr_copy_dstr(struct dstr *dst, const struct dstr *src) { dstr_free(dst); if (src->len) { dstr_ensure_capacity(dst, src->len + 1); memcpy(dst->array, src->array, src->len + 1); dst->len = src->len; } } static inline void dstr_reserve(struct dstr *dst, const size_t capacity) { if (capacity == 0 || capacity <= dst->len) return; dst->array = (char *)brealloc(dst->array, capacity); dst->capacity = capacity; } static inline void dstr_resize(struct dstr *dst, const size_t num) { if (!num) { dstr_free(dst); return; } dstr_ensure_capacity(dst, num + 1); dst->array[num] = 0; dst->len = num; } static inline bool dstr_is_empty(const struct dstr *str) { if (!str->array || !str->len) return true; if (!*str->array) return true; return false; } static inline void dstr_cat(struct dstr *dst, const char *array) { size_t len; if (!array || !*array) return; len = strlen(array); dstr_ncat(dst, array, len); } static inline void dstr_cat_ch(struct dstr *dst, char ch) { dstr_ensure_capacity(dst, ++dst->len + 1); dst->array[dst->len - 1] = ch; dst->array[dst->len] = 0; } static inline const char *dstr_find_i(const struct dstr *str, const char *find) { return astrstri(str->array, find); } static inline const char *dstr_find(const struct dstr *str, const char *find) { return strstr(str->array, find); } static inline int dstr_cmp(const struct dstr *str1, const char *str2) { const char *s1 = str1->array ? str1->array : ""; const char *s2 = str2 ? 
str2 : ""; return strcmp(s1, s2); } static inline int dstr_cmpi(const struct dstr *str1, const char *str2) { return astrcmpi(str1->array, str2); } static inline int dstr_ncmp(const struct dstr *str1, const char *str2, const size_t n) { return astrcmp_n(str1->array, str2, n); } static inline int dstr_ncmpi(const struct dstr *str1, const char *str2, const size_t n) { return astrcmpi_n(str1->array, str2, n); } static inline char dstr_end(const struct dstr *str) { if (dstr_is_empty(str)) return 0; return str->array[str->len - 1]; } #ifdef __cplusplus } #endif

/* ==== obs-studio-32.1.0-sources/libobs/util/apple/cfstring-utils.h ==== */

#pragma once #include "../c99defs.h" #include "../dstr.h" #ifdef __cplusplus extern "C" { #endif EXPORT char *cfstr_copy_cstr(CFStringRef cfstr, CFStringEncoding cfstr_enc); EXPORT bool cfstr_copy_dstr(CFStringRef cfstr, CFStringEncoding cfstr_enc, struct dstr *str); #ifdef __cplusplus } #endif

/* ==== obs-studio-32.1.0-sources/libobs/util/platform.c ==== */

/* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */

#define _FILE_OFFSET_BITS 64

#include <errno.h>
#include <locale.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#include "c99defs.h"
#include "platform.h"
#include "bmem.h"
#include "utf8.h"
#include "dstr.h"
#include "obs.h"
#include "threading.h"

FILE *os_wfopen(const wchar_t *path, const char *mode) { FILE *file = NULL; if (path) { #ifdef _MSC_VER wchar_t *wcs_mode; os_utf8_to_wcs_ptr(mode, 0, &wcs_mode); file = _wfopen(path, wcs_mode); bfree(wcs_mode); #else char *mbs_path; os_wcs_to_utf8_ptr(path, 0, &mbs_path); file = fopen(mbs_path, mode); bfree(mbs_path); #endif } return file; } FILE *os_fopen(const char *path, const char *mode) { #ifdef _WIN32 wchar_t *wpath = NULL; FILE *file = NULL; if (path) { os_utf8_to_wcs_ptr(path, 0, &wpath); file = os_wfopen(wpath, mode); bfree(wpath); } return file; #else return path ?
fopen(path, mode) : NULL; #endif } int64_t os_fgetsize(FILE *file) { int64_t cur_offset = os_ftelli64(file); int64_t size; int errval = 0; if (fseek(file, 0, SEEK_END) == -1) return -1; size = os_ftelli64(file); if (size == -1) errval = errno; if (os_fseeki64(file, cur_offset, SEEK_SET) != 0 && errval != 0) errno = errval; return size; } #ifdef _WIN32 int os_stat(const char *file, struct stat *st) { if (file) { wchar_t w_file[512]; size_t size = os_utf8_to_wcs(file, 0, w_file, sizeof(w_file)); if (size > 0) { struct _stat st_w32; int ret = _wstat(w_file, &st_w32); if (ret == 0) { st->st_dev = st_w32.st_dev; st->st_ino = st_w32.st_ino; st->st_mode = st_w32.st_mode; st->st_nlink = st_w32.st_nlink; st->st_uid = st_w32.st_uid; st->st_gid = st_w32.st_gid; st->st_rdev = st_w32.st_rdev; st->st_size = st_w32.st_size; st->st_atime = st_w32.st_atime; st->st_mtime = st_w32.st_mtime; st->st_ctime = st_w32.st_ctime; } return ret; } } return -1; } #endif int os_fseeki64(FILE *file, int64_t offset, int origin) { #ifdef _MSC_VER return _fseeki64(file, offset, origin); #else return fseeko(file, offset, origin); #endif } int64_t os_ftelli64(FILE *file) { #ifdef _MSC_VER return _ftelli64(file); #else return ftello(file); #endif } size_t os_fread_mbs(FILE *file, char **pstr) { size_t size = 0; size_t len = 0; fseek(file, 0, SEEK_END); size = (size_t)os_ftelli64(file); *pstr = NULL; if (size > 0) { char *mbstr = bmalloc(size + 1); fseek(file, 0, SEEK_SET); size = fread(mbstr, 1, size, file); if (size == 0) { bfree(mbstr); return 0; } mbstr[size] = 0; len = os_mbs_to_utf8_ptr(mbstr, size, pstr); bfree(mbstr); } return len; } size_t os_fread_utf8(FILE *file, char **pstr) { size_t size = 0; size_t len = 0; *pstr = NULL; fseek(file, 0, SEEK_END); size = (size_t)os_ftelli64(file); if (size > 0) { char bom[3]; char *utf8str; off_t offset; bom[0] = 0; bom[1] = 0; bom[2] = 0; /* remove the ghastly BOM if present */ fseek(file, 0, SEEK_SET); size_t size_read = fread(bom, 1, 3, file); 
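/* The BOM check in os_fread_utf8() above compares the first three bytes
 * against "\xEF\xBB\xBF" and offsets all subsequent reads past it. A minimal
 * standalone sketch of that check follows; the helper name skip_utf8_bom is
 * illustrative and not a libobs function. */

```c
#include <stddef.h>
#include <string.h>

/* Illustrative helper (not part of libobs): return how many bytes of a
 * UTF-8 byte order mark to skip at the start of `buf`, mirroring the
 * astrcmp_n(bom, "\xEF\xBB\xBF", 3) check in os_fread_utf8(). */
static size_t skip_utf8_bom(const char *buf, size_t len)
{
	return (len >= 3 && memcmp(buf, "\xEF\xBB\xBF", 3) == 0) ? 3 : 0;
}
```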
(void)size_read; offset = (astrcmp_n(bom, "\xEF\xBB\xBF", 3) == 0) ? 3 : 0; size -= offset; if (size == 0) return 0; utf8str = bmalloc(size + 1); fseek(file, offset, SEEK_SET); size = fread(utf8str, 1, size, file); if (size == 0) { bfree(utf8str); return 0; } utf8str[size] = 0; *pstr = utf8str; } return len; } char *os_quick_read_mbs_file(const char *path) { FILE *f = os_fopen(path, "rb"); char *file_string = NULL; if (!f) return NULL; os_fread_mbs(f, &file_string); fclose(f); return file_string; } char *os_quick_read_utf8_file(const char *path) { FILE *f = os_fopen(path, "rb"); char *file_string = NULL; if (!f) return NULL; os_fread_utf8(f, &file_string); fclose(f); return file_string; } bool os_quick_write_mbs_file(const char *path, const char *str, size_t len) { FILE *f = os_fopen(path, "wb"); char *mbs = NULL; size_t mbs_len = 0; if (!f) return false; mbs_len = os_utf8_to_mbs_ptr(str, len, &mbs); if (mbs_len) fwrite(mbs, 1, mbs_len, f); bfree(mbs); fflush(f); fclose(f); return true; } bool os_quick_write_utf8_file(const char *path, const char *str, size_t len, bool marker) { FILE *f = os_fopen(path, "wb"); if (!f) return false; if (marker) { if (fwrite("\xEF\xBB\xBF", 3, 1, f) != 1) { fclose(f); return false; } } if (len) { if (fwrite(str, len, 1, f) != 1) { fclose(f); return false; } } fflush(f); fclose(f); return true; } bool os_quick_write_utf8_file_safe(const char *path, const char *str, size_t len, bool marker, const char *temp_ext, const char *backup_ext) { struct dstr backup_path = {0}; struct dstr temp_path = {0}; bool success = false; if (!temp_ext || !*temp_ext) { blog(LOG_ERROR, "os_quick_write_utf8_file_safe: invalid " "temporary extension specified"); return false; } dstr_copy(&temp_path, path); if (*temp_ext != '.') dstr_cat(&temp_path, "."); dstr_cat(&temp_path, temp_ext); if (!os_quick_write_utf8_file(temp_path.array, str, len, marker)) { blog(LOG_ERROR, "os_quick_write_utf8_file_safe: failed to " "write to %s", temp_path.array); goto cleanup; } 
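/* os_quick_write_utf8_file_safe() above implements the classic safe-save
 * pattern: write the full payload to a temporary file first, then atomically
 * move it over the destination, so a crash mid-write never leaves a truncated
 * file at `path`. A minimal standalone sketch of the same idea follows;
 * safe_write and the file paths are illustrative, and the plain POSIX
 * rename() here stands in for the backup-aware os_safe_replace(). */

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative sketch (not libobs API): write `data` to `tmp_path`, then
 * atomically replace `path` with it. rename() replaces the destination
 * atomically on POSIX, so `path` is always either the old or new content. */
static bool safe_write(const char *path, const char *tmp_path, const char *data, size_t len)
{
	FILE *f = fopen(tmp_path, "wb");
	if (!f)
		return false;
	bool ok = fwrite(data, 1, len, f) == len;
	ok = fclose(f) == 0 && ok;
	return ok && rename(tmp_path, path) == 0;
}
```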
if (backup_ext && *backup_ext) { dstr_copy(&backup_path, path); if (*backup_ext != '.') dstr_cat(&backup_path, "."); dstr_cat(&backup_path, backup_ext); } if (os_safe_replace(path, temp_path.array, backup_path.array) == 0) success = true; cleanup: dstr_free(&backup_path); dstr_free(&temp_path); return success; } int64_t os_get_file_size(const char *path) { FILE *f = os_fopen(path, "rb"); if (!f) return -1; int64_t sz = os_fgetsize(f); fclose(f); return sz; } size_t os_mbs_to_wcs(const char *str, size_t len, wchar_t *dst, size_t dst_size) { UNUSED_PARAMETER(len); size_t out_len; if (!str) return 0; out_len = dst ? (dst_size - 1) : mbstowcs(NULL, str, 0); if (dst) { if (!dst_size) return 0; if (out_len) out_len = mbstowcs(dst, str, out_len + 1); dst[out_len] = 0; } return out_len; } size_t os_utf8_to_wcs(const char *str, size_t len, wchar_t *dst, size_t dst_size) { size_t in_len; size_t out_len; if (!str) return 0; in_len = len ? len : strlen(str); out_len = dst ? (dst_size - 1) : utf8_to_wchar(str, in_len, NULL, 0, 0); if (dst) { if (!dst_size) return 0; if (out_len) out_len = utf8_to_wchar(str, in_len, dst, out_len + 1, 0); dst[out_len] = 0; } return out_len; } size_t os_wcs_to_mbs(const wchar_t *str, size_t len, char *dst, size_t dst_size) { UNUSED_PARAMETER(len); size_t out_len; if (!str) return 0; out_len = dst ? (dst_size - 1) : wcstombs(NULL, str, 0); if (dst) { if (!dst_size) return 0; if (out_len) out_len = wcstombs(dst, str, out_len + 1); dst[out_len] = 0; } return out_len; } size_t os_wcs_to_utf8(const wchar_t *str, size_t len, char *dst, size_t dst_size) { size_t in_len; size_t out_len; if (!str) return 0; in_len = (len != 0) ? len : wcslen(str); out_len = dst ? 
(dst_size - 1) : wchar_to_utf8(str, in_len, NULL, 0, 0); if (dst) { if (!dst_size) return 0; if (out_len) out_len = wchar_to_utf8(str, in_len, dst, out_len, 0); dst[out_len] = 0; } return out_len; } size_t os_mbs_to_wcs_ptr(const char *str, size_t len, wchar_t **pstr) { if (str) { size_t out_len = os_mbs_to_wcs(str, len, NULL, 0); *pstr = bmalloc((out_len + 1) * sizeof(wchar_t)); return os_mbs_to_wcs(str, len, *pstr, out_len + 1); } else { *pstr = NULL; return 0; } } size_t os_utf8_to_wcs_ptr(const char *str, size_t len, wchar_t **pstr) { if (str) { size_t out_len = os_utf8_to_wcs(str, len, NULL, 0); *pstr = bmalloc((out_len + 1) * sizeof(wchar_t)); return os_utf8_to_wcs(str, len, *pstr, out_len + 1); } else { *pstr = NULL; return 0; } } size_t os_wcs_to_mbs_ptr(const wchar_t *str, size_t len, char **pstr) { if (str) { size_t out_len = os_wcs_to_mbs(str, len, NULL, 0); *pstr = bmalloc((out_len + 1) * sizeof(char)); return os_wcs_to_mbs(str, len, *pstr, out_len + 1); } else { *pstr = NULL; return 0; } } size_t os_wcs_to_utf8_ptr(const wchar_t *str, size_t len, char **pstr) { if (str) { size_t out_len = os_wcs_to_utf8(str, len, NULL, 0); *pstr = bmalloc((out_len + 1) * sizeof(char)); return os_wcs_to_utf8(str, len, *pstr, out_len + 1); } else { *pstr = NULL; return 0; } } size_t os_utf8_to_mbs_ptr(const char *str, size_t len, char **pstr) { char *dst = NULL; size_t out_len = 0; if (str) { wchar_t *wstr = NULL; size_t wlen = os_utf8_to_wcs_ptr(str, len, &wstr); out_len = os_wcs_to_mbs_ptr(wstr, wlen, &dst); bfree(wstr); } *pstr = dst; return out_len; } size_t os_mbs_to_utf8_ptr(const char *str, size_t len, char **pstr) { char *dst = NULL; size_t out_len = 0; if (str) { wchar_t *wstr = NULL; size_t wlen = os_mbs_to_wcs_ptr(str, len, &wstr); out_len = os_wcs_to_utf8_ptr(wstr, wlen, &dst); bfree(wstr); } *pstr = dst; return out_len; } /* locale independent double conversion from jansson, credit goes to them */ static inline void to_locale(char *str) { const char *point; 
char *pos; point = localeconv()->decimal_point; if (*point == '.') { /* No conversion needed */ return; } pos = strchr(str, '.'); if (pos) *pos = *point; } static inline void from_locale(char *buffer) { const char *point; char *pos; point = localeconv()->decimal_point; if (*point == '.') { /* No conversion needed */ return; } pos = strchr(buffer, *point); if (pos) *pos = '.'; } double os_strtod(const char *str) { char buf[64]; strncpy(buf, str, sizeof(buf) - 1); buf[sizeof(buf) - 1] = 0; to_locale(buf); return strtod(buf, NULL); } int os_dtostr(double value, char *dst, size_t size) { int ret; char *start, *end; size_t length; ret = snprintf(dst, size, "%.17g", value); if (ret < 0) return -1; length = (size_t)ret; if (length >= size) return -1; from_locale(dst); /* Make sure there's a dot or 'e' in the output. Otherwise a real is converted to an integer when decoding */ if (strchr(dst, '.') == NULL && strchr(dst, 'e') == NULL) { if (length + 3 >= size) { /* No space to append ".0" */ return -1; } dst[length] = '.'; dst[length + 1] = '0'; dst[length + 2] = '\0'; length += 2; } /* Remove leading '+' from positive exponent. 
Also remove leading zeros from exponents (added by some printf() implementations) */ start = strchr(dst, 'e'); if (start) { start++; end = start + 1; if (*start == '-') start++; while (*end == '0') end++; if (end != start) { memmove(start, end, length - (size_t)(end - dst)); length -= (size_t)(end - start); } } return (int)length; } static int recursive_mkdir(char *path) { char *last_slash; int ret; ret = os_mkdir(path); if (ret != MKDIR_ERROR) return ret; last_slash = strrchr(path, '/'); if (!last_slash) return MKDIR_ERROR; *last_slash = 0; ret = recursive_mkdir(path); *last_slash = '/'; if (ret == MKDIR_ERROR) return MKDIR_ERROR; ret = os_mkdir(path); return ret; } int os_mkdirs(const char *dir) { struct dstr dir_str; int ret; dstr_init_copy(&dir_str, dir); dstr_replace(&dir_str, "\\", "/"); ret = recursive_mkdir(dir_str.array); dstr_free(&dir_str); return ret; } const char *os_get_path_extension(const char *path) { for (size_t pos = strlen(path); pos > 0; pos--) { switch (path[pos - 1]) { case '.': return path + pos - 1; case '/': case '\\': return NULL; } } return NULL; } static inline bool valid_string(const char *str) { while (str && *str) { if (*(str++) != ' ') return true; } return false; } static void replace_text(struct dstr *str, size_t pos, size_t len, const char *new_text) { struct dstr front = {0}; struct dstr back = {0}; dstr_left(&front, str, pos); dstr_right(&back, str, pos + len); dstr_copy_dstr(str, &front); dstr_cat(str, new_text); dstr_cat_dstr(str, &back); dstr_free(&front); dstr_free(&back); } static void erase_ch(struct dstr *str, size_t pos) { struct dstr new_str = {0}; dstr_left(&new_str, str, pos); dstr_cat(&new_str, str->array + pos + 1); dstr_free(str); *str = new_str; } char *os_generate_formatted_filename(const char *extension, bool space, const char *format) { time_t now = time(0); struct tm *cur_time; cur_time = localtime(&now); struct obs_video_info ovi; obs_get_video_info(&ovi); const size_t spec_count = 23; static const char 
*spec[][2] = { {"%CCYY", "%Y"}, {"%YY", "%y"}, {"%MM", "%m"}, {"%DD", "%d"}, {"%hh", "%H"}, {"%mm", "%M"}, {"%ss", "%S"}, {"%%", "%%"}, {"%a", ""}, {"%A", ""}, {"%b", ""}, {"%B", ""}, {"%d", ""}, {"%H", ""}, {"%I", ""}, {"%m", ""}, {"%M", ""}, {"%p", ""}, {"%S", ""}, {"%y", ""}, {"%Y", ""}, {"%z", ""}, {"%Z", ""}, }; char convert[128] = {0}; struct dstr sf; struct dstr c = {0}; size_t pos = 0; dstr_init_copy(&sf, format); while (pos < sf.len) { const char *cmp = sf.array + pos; for (size_t i = 0; i < spec_count && !convert[0]; i++) { size_t len = strlen(spec[i][0]); if (astrcmp_n(cmp, spec[i][0], len) == 0) { if (strlen(spec[i][1])) strftime(convert, sizeof(convert), spec[i][1], cur_time); else strftime(convert, sizeof(convert), spec[i][0], cur_time); dstr_copy(&c, convert); if (c.len && valid_string(c.array)) replace_text(&sf, pos, len, convert); } } if (!convert[0]) { if (astrcmp_n(cmp, "%FPS", 4) == 0) { if (ovi.fps_den <= 1) { snprintf(convert, sizeof(convert), "%u", ovi.fps_num); } else { const double obsFPS = (double)ovi.fps_num / (double)ovi.fps_den; snprintf(convert, sizeof(convert), "%.2f", obsFPS); } replace_text(&sf, pos, 4, convert); } else if (astrcmp_n(cmp, "%CRES", 5) == 0) { snprintf(convert, sizeof(convert), "%ux%u", ovi.base_width, ovi.base_height); replace_text(&sf, pos, 5, convert); } else if (astrcmp_n(cmp, "%ORES", 5) == 0) { snprintf(convert, sizeof(convert), "%ux%u", ovi.output_width, ovi.output_height); replace_text(&sf, pos, 5, convert); } else if (astrcmp_n(cmp, "%VF", 3) == 0) { strcpy(convert, get_video_format_name(ovi.output_format)); replace_text(&sf, pos, 3, convert); } else if (astrcmp_n(cmp, "%s", 2) == 0) { snprintf(convert, sizeof(convert), "%" PRId64, (int64_t)now); replace_text(&sf, pos, 2, convert); } } if (convert[0]) { pos += strlen(convert); convert[0] = 0; } else if (!convert[0] && sf.array[pos] == '%') { erase_ch(&sf, pos); } else { pos++; } } if (!space) dstr_replace(&sf, " ", "_"); if (extension && *extension) { 
dstr_cat_ch(&sf, '.'); dstr_cat(&sf, extension); } dstr_free(&c); if (sf.len > 255) dstr_mid(&sf, &sf, 0, 255); return sf.array; } static struct { struct timespec ts; bool ts_valid; uint64_t timestamp; } timespec_offset = {0}; static void init_timespec_offset(void) { timespec_offset.ts_valid = timespec_get(&timespec_offset.ts, TIME_UTC) == TIME_UTC; timespec_offset.timestamp = os_gettime_ns(); } struct timespec *os_nstime_to_timespec(uint64_t timestamp, struct timespec *storage) { static pthread_once_t once = PTHREAD_ONCE_INIT; pthread_once(&once, init_timespec_offset); if (!storage || !timespec_offset.ts_valid) { return NULL; } *storage = timespec_offset.ts; static const int64_t nsecs_per_sec = 1000000000; int64_t nsecs = 0; int64_t secs = 0; if (timestamp >= timespec_offset.timestamp) { uint64_t offset = timestamp - timespec_offset.timestamp; nsecs = storage->tv_nsec + offset % nsecs_per_sec; secs = storage->tv_sec + offset / nsecs_per_sec; } else { uint64_t offset = timespec_offset.timestamp - timestamp; int64_t nsec_offset = offset % nsecs_per_sec; int64_t sec_offset = offset / nsecs_per_sec; int64_t tv_nsec = storage->tv_nsec; if (nsec_offset > tv_nsec) { storage->tv_sec -= 1; tv_nsec += nsecs_per_sec; } nsecs = tv_nsec - nsec_offset; secs = storage->tv_sec - sec_offset; } if (nsecs > nsecs_per_sec) { nsecs -= nsecs_per_sec; secs += 1; } storage->tv_nsec = (long)nsecs; storage->tv_sec = (time_t)secs; return storage; }

/* ==== obs-studio-32.1.0-sources/libobs/util/dstr.c ==== */

/* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */

#include <ctype.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wchar.h>
#include <wctype.h>

#include "c99defs.h"
#include "dstr.h"
#include "darray.h"
#include "bmem.h"
#include "utf8.h"
#include "lexer.h"
#include "platform.h"

static const char *astrblank = ""; static const wchar_t *wstrblank = L""; int astrcmpi(const char *str1, const char *str2) { if (!str1) str1 = astrblank; if (!str2) str2 = astrblank; do { char ch1 = (char)toupper(*str1); char ch2 = (char)toupper(*str2); if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; } while (*str1++ && *str2++); return 0; } int wstrcmpi(const wchar_t *str1, const wchar_t *str2) { if (!str1) str1 = wstrblank; if (!str2) str2 = wstrblank; do { wchar_t ch1 = (wchar_t)towupper(*str1); wchar_t ch2 = (wchar_t)towupper(*str2); if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; } while (*str1++ && *str2++); return 0; } int astrcmp_n(const char *str1, const char *str2, size_t n) { if (!n) return 0; if (!str1) str1 = astrblank; if (!str2) str2 = astrblank; do { char ch1 = *str1; char ch2 = *str2; if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; } while (*str1++ && *str2++ && --n); return 0; } int wstrcmp_n(const wchar_t *str1, const wchar_t *str2, size_t n) { if (!n) return 0; if (!str1) str1 = wstrblank; if (!str2) str2 = wstrblank; do { wchar_t ch1 = *str1; wchar_t ch2 = *str2; if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; } while (*str1++ && *str2++ && --n); return 0; } int astrcmpi_n(const char *str1, const char *str2, size_t n) { if (!n) return 0; if (!str1) str1 = astrblank; if (!str2) str2 = astrblank; do { char ch1 = (char)toupper(*str1); char ch2 = (char)toupper(*str2); if (ch1 <
ch2) return -1; else if (ch1 > ch2) return 1; } while (*str1++ && *str2++ && --n); return 0; } int wstrcmpi_n(const wchar_t *str1, const wchar_t *str2, size_t n) { if (!n) return 0; if (!str1) str1 = wstrblank; if (!str2) str2 = wstrblank; do { wchar_t ch1 = (wchar_t)towupper(*str1); wchar_t ch2 = (wchar_t)towupper(*str2); if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; } while (*str1++ && *str2++ && --n); return 0; } char *astrstri(const char *str, const char *find) { size_t len; if (!str || !find) return NULL; len = strlen(find); do { if (astrcmpi_n(str, find, len) == 0) return (char *)str; } while (*str++); return NULL; } wchar_t *wstrstri(const wchar_t *str, const wchar_t *find) { size_t len; if (!str || !find) return NULL; len = wcslen(find); do { if (wstrcmpi_n(str, find, len) == 0) return (wchar_t *)str; } while (*str++); return NULL; } static inline bool is_padding(int ch) { return ch == ' ' || ch == '\t' || ch == '\n' || ch == '\r'; } char *strdepad(char *str) { char *temp; size_t len; if (!str) return str; if (!*str) return str; temp = str; /* remove preceding spaces/tabs */ while (is_padding(*temp)) ++temp; len = strlen(temp); if (temp != str) memmove(str, temp, len + 1); if (len) { temp = str + (len - 1); while (is_padding(*temp)) *(temp--) = 0; } return str; } wchar_t *wcsdepad(wchar_t *str) { wchar_t *temp; size_t len; if (!str) return str; if (!*str) return str; temp = str; /* remove preceding spaces/tabs */ while (is_padding(*temp)) ++temp; len = wcslen(temp); if (temp != str) memmove(str, temp, (len + 1) * sizeof(wchar_t)); if (len) { temp = str + (len - 1); while (is_padding(*temp)) *(temp--) = 0; } return str; } char **strlist_split(const char *str, char split_ch, bool include_empty) { const char *cur_str = str; const char *next_str; char *out = NULL; size_t count = 0; size_t total_size = 0; if (str) { char **table; char *offset; size_t cur_idx = 0; size_t cur_pos = 0; next_str = strchr(str, split_ch); while (next_str) { size_t size = 
next_str - cur_str; if (size || include_empty) { ++count; total_size += size + 1; } cur_str = next_str + 1; next_str = strchr(cur_str, split_ch); } if (*cur_str || include_empty) { ++count; total_size += strlen(cur_str) + 1; } /* ------------------ */ cur_pos = (count + 1) * sizeof(char *); total_size += cur_pos; out = bmalloc(total_size); offset = out + cur_pos; table = (char **)out; /* ------------------ */ next_str = strchr(str, split_ch); cur_str = str; while (next_str) { size_t size = next_str - cur_str; if (size || include_empty) { table[cur_idx++] = offset; strncpy(offset, cur_str, size); offset[size] = 0; offset += size + 1; } cur_str = next_str + 1; next_str = strchr(cur_str, split_ch); } if (*cur_str || include_empty) { table[cur_idx++] = offset; strcpy(offset, cur_str); } table[cur_idx] = NULL; } return (char **)out; } void strlist_free(char **strlist) { bfree(strlist); } void dstr_init_copy_strref(struct dstr *dst, const struct strref *src) { dstr_init(dst); dstr_copy_strref(dst, src); } void dstr_copy(struct dstr *dst, const char *array) { size_t len; if (!array || !*array) { dstr_free(dst); return; } len = strlen(array); dstr_ensure_capacity(dst, len + 1); memcpy(dst->array, array, len + 1); dst->len = len; } void dstr_copy_strref(struct dstr *dst, const struct strref *src) { if (dst->array) dstr_free(dst); dstr_ncopy(dst, src->array, src->len); } static inline size_t size_min(size_t a, size_t b) { return (a < b) ? 
a : b; } void dstr_ncopy(struct dstr *dst, const char *array, const size_t len) { if (dst->array) dstr_free(dst); if (!len) return; dst->array = bmemdup(array, len + 1); dst->len = len; dst->capacity = len + 1; dst->array[len] = 0; } void dstr_ncopy_dstr(struct dstr *dst, const struct dstr *str, const size_t len) { size_t newlen; if (dst->array) dstr_free(dst); if (!len) return; newlen = size_min(len, str->len); dst->array = bmemdup(str->array, newlen + 1); dst->len = newlen; dst->capacity = newlen + 1; dst->array[newlen] = 0; } void dstr_cat_dstr(struct dstr *dst, const struct dstr *str) { size_t new_len; if (!str->len) return; new_len = dst->len + str->len; dstr_ensure_capacity(dst, new_len + 1); memcpy(dst->array + dst->len, str->array, str->len + 1); dst->len = new_len; } void dstr_cat_strref(struct dstr *dst, const struct strref *str) { dstr_ncat(dst, str->array, str->len); } void dstr_ncat(struct dstr *dst, const char *array, const size_t len) { size_t new_len; if (!array || !*array || !len) return; new_len = dst->len + len; dstr_ensure_capacity(dst, new_len + 1); memcpy(dst->array + dst->len, array, len); dst->len = new_len; dst->array[new_len] = 0; } void dstr_ncat_dstr(struct dstr *dst, const struct dstr *str, const size_t len) { size_t new_len, in_len; if (!str->array || !*str->array || !len) return; in_len = size_min(len, str->len); new_len = dst->len + in_len; dstr_ensure_capacity(dst, new_len + 1); memcpy(dst->array + dst->len, str->array, in_len); dst->len = new_len; dst->array[new_len] = 0; } void dstr_insert(struct dstr *dst, const size_t idx, const char *array) { size_t new_len, len; if (!array || !*array) return; if (idx == dst->len) { dstr_cat(dst, array); return; } len = strlen(array); new_len = dst->len + len; dstr_ensure_capacity(dst, new_len + 1); memmove(dst->array + idx + len, dst->array + idx, dst->len - idx + 1); memcpy(dst->array + idx, array, len); dst->len = new_len; } void dstr_insert_dstr(struct dstr *dst, const size_t idx, const 
struct dstr *str) { size_t new_len; if (!str->len) return; if (idx == dst->len) { dstr_cat_dstr(dst, str); return; } new_len = dst->len + str->len; dstr_ensure_capacity(dst, (new_len + 1)); memmove(dst->array + idx + str->len, dst->array + idx, dst->len - idx + 1); memcpy(dst->array + idx, str->array, str->len); dst->len = new_len; } void dstr_insert_ch(struct dstr *dst, const size_t idx, const char ch) { if (idx == dst->len) { dstr_cat_ch(dst, ch); return; } dstr_ensure_capacity(dst, (++dst->len + 1)); memmove(dst->array + idx + 1, dst->array + idx, dst->len - idx + 1); dst->array[idx] = ch; } void dstr_remove(struct dstr *dst, const size_t idx, const size_t count) { size_t end; if (!count) return; if (count == dst->len) { dstr_free(dst); return; } end = idx + count; if (end == dst->len) dst->array[idx] = 0; else memmove(dst->array + idx, dst->array + end, dst->len - end + 1); dst->len -= count; } void dstr_printf(struct dstr *dst, const char *format, ...) { va_list args; va_start(args, format); dstr_vprintf(dst, format, args); va_end(args); } void dstr_catf(struct dstr *dst, const char *format, ...) { va_list args; va_start(args, format); dstr_vcatf(dst, format, args); va_end(args); } void dstr_vprintf(struct dstr *dst, const char *format, va_list args) { va_list args_cp; va_copy(args_cp, args); int len = vsnprintf(NULL, 0, format, args_cp); va_end(args_cp); if (len < 0) len = 4095; dstr_ensure_capacity(dst, ((size_t)len) + 1); len = vsnprintf(dst->array, ((size_t)len) + 1, format, args); if (!*dst->array) { dstr_free(dst); return; } dst->len = len < 0 ? 
strlen(dst->array) : (size_t)len; } void dstr_vcatf(struct dstr *dst, const char *format, va_list args) { va_list args_cp; va_copy(args_cp, args); int len = vsnprintf(NULL, 0, format, args_cp); va_end(args_cp); if (len < 0) len = 4095; dstr_ensure_capacity(dst, dst->len + ((size_t)len) + 1); len = vsnprintf(dst->array + dst->len, ((size_t)len) + 1, format, args); if (!*dst->array) { dstr_free(dst); return; } dst->len += len < 0 ? strlen(dst->array + dst->len) : (size_t)len; } void dstr_safe_printf(struct dstr *dst, const char *format, const char *val1, const char *val2, const char *val3, const char *val4) { dstr_copy(dst, format); if (val1) dstr_replace(dst, "$1", val1); if (val2) dstr_replace(dst, "$2", val2); if (val3) dstr_replace(dst, "$3", val3); if (val4) dstr_replace(dst, "$4", val4); } void dstr_replace(struct dstr *str, const char *find, const char *replace) { size_t find_len, replace_len; char *temp; if (dstr_is_empty(str)) return; if (!replace) replace = ""; find_len = strlen(find); replace_len = strlen(replace); temp = str->array; if (replace_len < find_len) { unsigned long count = 0; while ((temp = strstr(temp, find)) != NULL) { char *end = temp + find_len; size_t end_len = strlen(end); if (end_len) { memmove(temp + replace_len, end, end_len + 1); if (replace_len) memcpy(temp, replace, replace_len); } else { strcpy(temp, replace); } temp += replace_len; ++count; } if (count) str->len += (replace_len - find_len) * count; } else if (replace_len > find_len) { unsigned long count = 0; while ((temp = strstr(temp, find)) != NULL) { temp += find_len; ++count; } if (!count) return; str->len += (replace_len - find_len) * count; dstr_ensure_capacity(str, str->len + 1); temp = str->array; while ((temp = strstr(temp, find)) != NULL) { char *end = temp + find_len; size_t end_len = strlen(end); if (end_len) { memmove(temp + replace_len, end, end_len + 1); memcpy(temp, replace, replace_len); } else { strcpy(temp, replace); } temp += replace_len; } } else { while 
((temp = strstr(temp, find)) != NULL) { memcpy(temp, replace, replace_len); temp += replace_len; } } } void dstr_depad(struct dstr *str) { if (str->array) { str->array = strdepad(str->array); if (*str->array) str->len = strlen(str->array); else dstr_free(str); } } void dstr_left(struct dstr *dst, const struct dstr *str, const size_t pos) { dstr_resize(dst, pos); if (dst != str) memcpy(dst->array, str->array, pos); } void dstr_mid(struct dstr *dst, const struct dstr *str, const size_t start, const size_t count) { struct dstr temp; dstr_init(&temp); dstr_copy_dstr(&temp, str); dstr_ncopy(dst, temp.array + start, count); dstr_free(&temp); } void dstr_right(struct dstr *dst, const struct dstr *str, const size_t pos) { struct dstr temp; dstr_init(&temp); dstr_ncopy(&temp, str->array + pos, str->len - pos); dstr_copy_dstr(dst, &temp); dstr_free(&temp); } void dstr_from_mbs(struct dstr *dst, const char *mbstr) { dstr_free(dst); dst->len = os_mbs_to_utf8_ptr(mbstr, 0, &dst->array); } char *dstr_to_mbs(const struct dstr *str) { char *dst; os_mbs_to_utf8_ptr(str->array, str->len, &dst); return dst; } wchar_t *dstr_to_wcs(const struct dstr *str) { wchar_t *dst; os_utf8_to_wcs_ptr(str->array, str->len, &dst); return dst; } void dstr_from_wcs(struct dstr *dst, const wchar_t *wstr) { size_t len = wchar_to_utf8(wstr, 0, NULL, 0, 0); if (len) { dstr_resize(dst, len); wchar_to_utf8(wstr, 0, dst->array, len + 1, 0); } else { dstr_free(dst); } } void dstr_to_upper(struct dstr *str) { wchar_t *wstr; wchar_t *temp; if (dstr_is_empty(str)) return; wstr = dstr_to_wcs(str); temp = wstr; if (!wstr) return; while (*temp) { *temp = (wchar_t)towupper(*temp); temp++; } dstr_from_wcs(str, wstr); bfree(wstr); } void dstr_to_lower(struct dstr *str) { wchar_t *wstr; wchar_t *temp; if (dstr_is_empty(str)) return; wstr = dstr_to_wcs(str); temp = wstr; if (!wstr) return; while (*temp) { *temp = (wchar_t)towlower(*temp); temp++; } dstr_from_wcs(str, wstr); bfree(wstr); } 
obs-studio-32.1.0-sources/libobs/util/bmem.h000644 001751 001751 00000003744 15153330235 021564 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "c99defs.h" #include "base.h" #include #include #ifdef __cplusplus extern "C" { #endif struct base_allocator { void *(*malloc)(size_t); void *(*realloc)(void *, size_t); void (*free)(void *); }; EXPORT void *bmalloc(size_t size); EXPORT void *brealloc(void *ptr, size_t size); EXPORT void bfree(void *ptr); EXPORT int base_get_alignment(void); EXPORT long bnum_allocs(void); EXPORT void *bmemdup(const void *ptr, size_t size); static inline void *bzalloc(size_t size) { void *mem = bmalloc(size); if (mem) memset(mem, 0, size); return mem; } static inline char *bstrdup_n(const char *str, size_t n) { char *dup; if (!str) return NULL; dup = (char *)bmemdup(str, n + 1); dup[n] = 0; return dup; } static inline wchar_t *bwstrdup_n(const wchar_t *str, size_t n) { wchar_t *dup; if (!str) return NULL; dup = (wchar_t *)bmemdup(str, (n + 1) * sizeof(wchar_t)); dup[n] = 0; return dup; } static inline char *bstrdup(const char *str) { if (!str) return NULL; return bstrdup_n(str, strlen(str)); } static inline wchar_t *bwstrdup(const wchar_t *str) { if (!str) return NULL; return bwstrdup_n(str, 
wcslen(str)); } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/serializer.h000644 001751 001751 00000007301 15153330235 023006 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "c99defs.h" /* * General programmable serialization functions. 
(A shared interface to * various reading/writing to/from different inputs/outputs) */ #ifdef __cplusplus extern "C" { #endif enum serialize_seek_type { SERIALIZE_SEEK_START, SERIALIZE_SEEK_CURRENT, SERIALIZE_SEEK_END }; struct serializer { void *data; size_t (*read)(void *, void *, size_t); size_t (*write)(void *, const void *, size_t); int64_t (*seek)(void *, int64_t, enum serialize_seek_type); int64_t (*get_pos)(void *); }; static inline size_t s_read(struct serializer *s, void *data, size_t size) { if (s && s->read && data && size) return s->read(s->data, (void *)data, size); return 0; } static inline size_t s_write(struct serializer *s, const void *data, size_t size) { if (s && s->write && data && size) return s->write(s->data, (void *)data, size); return 0; } static inline size_t serialize(struct serializer *s, void *data, size_t len) { if (s) { if (s->write) return s->write(s->data, data, len); else if (s->read) return s->read(s->data, data, len); } return 0; } static inline int64_t serializer_seek(struct serializer *s, int64_t offset, enum serialize_seek_type seek_type) { if (s && s->seek) return s->seek(s->data, offset, seek_type); return -1; } static inline int64_t serializer_get_pos(struct serializer *s) { if (s && s->get_pos) return s->get_pos(s->data); return -1; } /* formatted this to be similar to the AVIO layout that ffmpeg uses */ static inline void s_w8(struct serializer *s, uint8_t u8) { s_write(s, &u8, sizeof(uint8_t)); } static inline void s_wl16(struct serializer *s, uint16_t u16) { s_w8(s, (uint8_t)u16); s_w8(s, u16 >> 8); } static inline void s_wl24(struct serializer *s, uint32_t u24) { s_w8(s, (uint8_t)u24); s_wl16(s, (uint16_t)(u24 >> 8)); } static inline void s_wl32(struct serializer *s, uint32_t u32) { s_wl16(s, (uint16_t)u32); s_wl16(s, (uint16_t)(u32 >> 16)); } static inline void s_wl64(struct serializer *s, uint64_t u64) { s_wl32(s, (uint32_t)u64); s_wl32(s, (uint32_t)(u64 >> 32)); } static inline void s_wlf(struct serializer *s, float 
f) { s_wl32(s, *(uint32_t *)&f); } static inline void s_wld(struct serializer *s, double d) { s_wl64(s, *(uint64_t *)&d); } static inline void s_wb16(struct serializer *s, uint16_t u16) { s_w8(s, u16 >> 8); s_w8(s, (uint8_t)u16); } static inline void s_wb24(struct serializer *s, uint32_t u24) { s_wb16(s, (uint16_t)(u24 >> 8)); s_w8(s, (uint8_t)u24); } static inline void s_wb32(struct serializer *s, uint32_t u32) { s_wb16(s, (uint16_t)(u32 >> 16)); s_wb16(s, (uint16_t)u32); } static inline void s_wb64(struct serializer *s, uint64_t u64) { s_wb32(s, (uint32_t)(u64 >> 32)); s_wb32(s, (uint32_t)u64); } static inline void s_wbf(struct serializer *s, float f) { s_wb32(s, *(uint32_t *)&f); } static inline void s_wbd(struct serializer *s, double d) { s_wb64(s, *(uint64_t *)&d); } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/buffered-file-serializer.h000644 001751 001751 00000002251 15153330235 025502 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2024 Dennis Sädtler * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "serializer.h" #ifdef __cplusplus extern "C" { #endif EXPORT bool buffered_file_serializer_init_defaults(struct serializer *s, const char *path); EXPORT bool buffered_file_serializer_init(struct serializer *s, const char *path, size_t max_bufsize, size_t chunk_size); EXPORT void buffered_file_serializer_free(struct serializer *s); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/dstr.hpp000644 001751 001751 00000002616 15153330235 022155 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "dstr.h" class DStr { dstr str; DStr(DStr const &) = delete; DStr &operator=(DStr const &) = delete; public: inline DStr() { dstr_init(&str); } inline DStr(DStr &&other) : DStr() { dstr_move(&str, &other.str); } inline DStr &operator=(DStr &&other) { dstr_move(&str, &other.str); return *this; } inline ~DStr() { dstr_free(&str); } inline operator dstr *() { return &str; } inline operator const dstr *() const { return &str; } inline operator char *() { return str.array; } inline operator const char *() const { return str.array; } inline dstr *operator->() { return &str; } }; obs-studio-32.1.0-sources/libobs/util/array-serializer.h000644 001751 001751 00000002326 15153330235 024124 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "serializer.h" #include "darray.h" #ifdef __cplusplus extern "C" { #endif struct array_output_data { DARRAY(uint8_t) bytes; size_t cur_pos; }; EXPORT void array_output_serializer_init(struct serializer *s, struct array_output_data *data); EXPORT void array_output_serializer_free(struct array_output_data *data); EXPORT void array_output_serializer_reset(struct array_output_data *data); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/file-serializer.c000644 001751 001751 00000010012 15153330235 023707 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #include "dstr.h" #include "file-serializer.h" #include "platform.h" static size_t file_input_read(void *file, void *data, size_t size) { return fread(data, 1, size, file); } static int64_t file_input_seek(void *file, int64_t offset, enum serialize_seek_type seek_type) { int origin = SEEK_SET; switch (seek_type) { case SERIALIZE_SEEK_START: origin = SEEK_SET; break; case SERIALIZE_SEEK_CURRENT: origin = SEEK_CUR; break; case SERIALIZE_SEEK_END: origin = SEEK_END; break; } if (os_fseeki64(file, offset, origin) == -1) return -1; return os_ftelli64(file); } static int64_t file_input_get_pos(void *file) { return os_ftelli64(file); } bool file_input_serializer_init(struct serializer *s, const char *path) { s->data = os_fopen(path, "rb"); if (!s->data) return false; s->read = file_input_read; s->write = NULL; s->seek = file_input_seek; s->get_pos = file_input_get_pos; return true; } void file_input_serializer_free(struct serializer *s) { if (s->data) fclose(s->data); } /* ------------------------------------------------------------------------- */ struct file_output_data { FILE *file; char *temp_name; char *file_name; }; static size_t file_output_write(void *sdata, const void *data, size_t size) { struct file_output_data *out = sdata; return fwrite(data, 1, size, out->file); } static int64_t file_output_seek(void *sdata, int64_t offset, enum serialize_seek_type seek_type) { struct file_output_data *out = sdata; int origin = SEEK_SET; switch (seek_type) { case SERIALIZE_SEEK_START: origin = SEEK_SET; break; case SERIALIZE_SEEK_CURRENT: origin = SEEK_CUR; break; case SERIALIZE_SEEK_END: origin = SEEK_END; break; } if (os_fseeki64(out->file, offset, origin) == -1) return -1; return os_ftelli64(out->file); } static int64_t file_output_get_pos(void *sdata) { struct file_output_data *out = sdata; return os_ftelli64(out->file); } bool file_output_serializer_init(struct serializer *s, const char *path) { FILE *file = os_fopen(path, "wb"); struct file_output_data *out; if 
(!file) return false; out = bzalloc(sizeof(*out)); out->file = file; s->data = out; s->read = NULL; s->write = file_output_write; s->seek = file_output_seek; s->get_pos = file_output_get_pos; return true; } bool file_output_serializer_init_safe(struct serializer *s, const char *path, const char *temp_ext) { struct dstr temp_name = {0}; struct file_output_data *out; FILE *file; if (!temp_ext || !*temp_ext) return false; dstr_copy(&temp_name, path); if (*temp_ext != '.') dstr_cat_ch(&temp_name, '.'); dstr_cat(&temp_name, temp_ext); file = os_fopen(temp_name.array, "wb"); if (!file) { dstr_free(&temp_name); return false; } out = bzalloc(sizeof(*out)); out->file_name = bstrdup(path); out->temp_name = temp_name.array; out->file = file; s->data = out; s->read = NULL; s->write = file_output_write; s->seek = file_output_seek; s->get_pos = file_output_get_pos; return true; } void file_output_serializer_free(struct serializer *s) { struct file_output_data *out = s->data; if (out) { fclose(out->file); if (out->temp_name) { os_unlink(out->file_name); os_rename(out->temp_name, out->file_name); } bfree(out->file_name); bfree(out->temp_name); bfree(out); } } obs-studio-32.1.0-sources/libobs/util/threading-windows.h000644 001751 001751 00000007206 15153330235 024276 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include #include #if !defined(_M_IX86) && !defined(_M_X64) && !defined(_M_ARM) && !defined(_M_ARM64) #error Processor not supported #endif static inline long os_atomic_inc_long(volatile long *val) { return _InterlockedIncrement(val); } static inline long os_atomic_dec_long(volatile long *val) { return _InterlockedDecrement(val); } static inline void os_atomic_store_long(volatile long *ptr, long val) { #if defined(_M_ARM64) _ReadWriteBarrier(); __stlr32((volatile unsigned *)ptr, val); _ReadWriteBarrier(); #elif defined(_M_ARM) __dmb(_ARM_BARRIER_ISH); __iso_volatile_store32((volatile __int32 *)ptr, val); __dmb(_ARM_BARRIER_ISH); #else _InterlockedExchange(ptr, val); #endif } static inline long os_atomic_set_long(volatile long *ptr, long val) { return _InterlockedExchange(ptr, val); } static inline long os_atomic_exchange_long(volatile long *ptr, long val) { return os_atomic_set_long(ptr, val); } static inline long os_atomic_load_long(const volatile long *ptr) { #if defined(_M_ARM64) const long val = __ldar32((volatile unsigned *)ptr); #else const long val = __iso_volatile_load32((const volatile __int32 *)ptr); #endif #if defined(_M_ARM) __dmb(_ARM_BARRIER_ISH); #else _ReadWriteBarrier(); #endif return val; } static inline bool os_atomic_compare_swap_long(volatile long *val, long old_val, long new_val) { return _InterlockedCompareExchange(val, new_val, old_val) == old_val; } static inline bool os_atomic_compare_exchange_long(volatile long *val, long *old_ptr, long new_val) { const long old_val = *old_ptr; const long previous = _InterlockedCompareExchange(val, new_val, old_val); *old_ptr = previous; return previous == old_val; } 
static inline void os_atomic_store_bool(volatile bool *ptr, bool val) { #if defined(_M_ARM64) _ReadWriteBarrier(); __stlr8((volatile unsigned char *)ptr, val); _ReadWriteBarrier(); #elif defined(_M_ARM) __dmb(_ARM_BARRIER_ISH); __iso_volatile_store8((volatile char *)ptr, val); __dmb(_ARM_BARRIER_ISH); #else _InterlockedExchange8((volatile char *)ptr, (char)val); #endif } static inline bool os_atomic_set_bool(volatile bool *ptr, bool val) { const char c = _InterlockedExchange8((volatile char *)ptr, (char)val); bool b; /* Avoid unnecessary char to bool conversion. Value known 0 or 1. */ memcpy(&b, &c, sizeof(b)); return b; } static inline bool os_atomic_exchange_bool(volatile bool *ptr, bool val) { return os_atomic_set_bool(ptr, val); } static inline bool os_atomic_load_bool(const volatile bool *ptr) { bool b; #if defined(_M_ARM64) const unsigned char c = __ldar8((volatile unsigned char *)ptr); #else const char c = __iso_volatile_load8((const volatile char *)ptr); #endif #if defined(_M_ARM) __dmb(_ARM_BARRIER_ISH); #else _ReadWriteBarrier(); #endif /* Avoid unnecessary char to bool conversion. Value known 0 or 1. */ memcpy(&b, &c, sizeof(b)); return b; } obs-studio-32.1.0-sources/libobs/util/uthash.h000644 001751 001751 00000002331 15153330235 022127 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Dennis Sädtler This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #pragma once /* * This file (re)defines various uthash settings for use in libobs */ #include /* Use OBS allocator */ #undef uthash_malloc #undef uthash_free #define uthash_malloc(sz) bmalloc(sz) #define uthash_free(ptr, sz) bfree(ptr) /* Use SFH (Super Fast Hash) function instead of JEN */ #undef HASH_FUNCTION #define HASH_FUNCTION HASH_SFH obs-studio-32.1.0-sources/libobs/util/threading-windows.c000644 001751 001751 00000010023 15153330235 024260 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #include "bmem.h" #include "threading.h" #include "util/platform.h" #define WIN32_LEAN_AND_MEAN #include #ifdef __MINGW32__ #include #ifndef TRYLEVEL_NONE #ifndef __MINGW64__ #define NO_SEH_MINGW #endif #ifndef __try #define __try #endif #ifndef __except #define __except (x) if (0) #endif #endif #endif int os_event_init(os_event_t **event, enum os_event_type type) { HANDLE handle; handle = CreateEvent(NULL, (type == OS_EVENT_TYPE_MANUAL), FALSE, NULL); if (!handle) return -1; *event = (os_event_t *)handle; return 0; } void os_event_destroy(os_event_t *event) { if (event) CloseHandle((HANDLE)event); } int os_event_wait(os_event_t *event) { DWORD code; if (!event) return EINVAL; code = WaitForSingleObject((HANDLE)event, INFINITE); if (code != WAIT_OBJECT_0) return EINVAL; return 0; } int os_event_timedwait(os_event_t *event, unsigned long milliseconds) { DWORD code; if (!event) return EINVAL; code = WaitForSingleObject((HANDLE)event, milliseconds); if (code == WAIT_TIMEOUT) return ETIMEDOUT; else if (code != WAIT_OBJECT_0) return EINVAL; return 0; } int os_event_try(os_event_t *event) { DWORD code; if (!event) return EINVAL; code = WaitForSingleObject((HANDLE)event, 0); if (code == WAIT_TIMEOUT) return EAGAIN; else if (code != WAIT_OBJECT_0) return EINVAL; return 0; } int os_event_signal(os_event_t *event) { if (!event) return EINVAL; if (!SetEvent((HANDLE)event)) return EINVAL; return 0; } void os_event_reset(os_event_t *event) { if (!event) return; ResetEvent((HANDLE)event); } int os_sem_init(os_sem_t **sem, int value) { HANDLE handle = CreateSemaphore(NULL, (LONG)value, 0x7FFFFFFF, NULL); if (!handle) return -1; *sem = (os_sem_t *)handle; return 0; } void os_sem_destroy(os_sem_t *sem) { if (sem) CloseHandle((HANDLE)sem); } int os_sem_post(os_sem_t *sem) { if (!sem) return -1; return ReleaseSemaphore((HANDLE)sem, 1, NULL) ? 
0 : -1; } int os_sem_wait(os_sem_t *sem) { DWORD ret; if (!sem) return -1; ret = WaitForSingleObject((HANDLE)sem, INFINITE); return (ret == WAIT_OBJECT_0) ? 0 : -1; } #define VC_EXCEPTION 0x406D1388 #pragma pack(push, 8) struct vs_threadname_info { DWORD type; /* 0x1000 */ const char *name; DWORD thread_id; DWORD flags; }; #pragma pack(pop) #define THREADNAME_INFO_SIZE (sizeof(struct vs_threadname_info) / sizeof(ULONG_PTR)) void os_set_thread_name(const char *name) { #ifdef __MINGW32__ UNUSED_PARAMETER(name); #else struct vs_threadname_info info; info.type = 0x1000; info.name = name; info.thread_id = GetCurrentThreadId(); info.flags = 0; #ifdef NO_SEH_MINGW __try1(EXCEPTION_EXECUTE_HANDLER) { #else __try { #endif RaiseException(VC_EXCEPTION, 0, THREADNAME_INFO_SIZE, (ULONG_PTR *)&info); #ifdef NO_SEH_MINGW } __except1 { #else } __except (EXCEPTION_EXECUTE_HANDLER) { #endif } #endif const HMODULE hModule = LoadLibrary(L"KernelBase.dll"); if (hModule) { typedef HRESULT(WINAPI * set_thread_description_t)(HANDLE, PCWSTR); const set_thread_description_t std = (set_thread_description_t)GetProcAddress(hModule, "SetThreadDescription"); if (std) { wchar_t *wname; os_utf8_to_wcs_ptr(name, 0, &wname); std(GetCurrentThread(), wname); bfree(wname); } FreeLibrary(hModule); } } obs-studio-32.1.0-sources/libobs/util/cf-lexer.c000644 001751 001751 00000075457 15153330235 022356 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#include <ctype.h>
#include <stdlib.h>

#include "platform.h"
#include "cf-lexer.h"

static inline void cf_convert_from_escape_literal(char **p_dst, const char **p_src)
{
	char *dst = *p_dst;
	const char *src = *p_src;

	switch (*(src++)) {
	case '\'':
		*(dst++) = '\'';
		break;
	case '\"':
		*(dst++) = '\"';
		break;
	case '\?':
		*(dst++) = '\?';
		break;
	case '\\':
		*(dst++) = '\\';
		break;
	case '0':
		*(dst++) = '\0';
		break;
	case 'a':
		*(dst++) = '\a';
		break;
	case 'b':
		*(dst++) = '\b';
		break;
	case 'f':
		*(dst++) = '\f';
		break;
	case 'n':
		*(dst++) = '\n';
		break;
	case 'r':
		*(dst++) = '\r';
		break;
	case 't':
		*(dst++) = '\t';
		break;
	case 'v':
		*(dst++) = '\v';
		break;

	/* hex */
	case 'X':
	case 'x':
		*(dst++) = (char)strtoul(src, NULL, 16);
		src += 2;
		break;

	/* oct */
	default:
		if (isdigit(*src)) {
			*(dst++) = (char)strtoul(src, NULL, 8);
			src += 3;
		}
		/* case 'u': case 'U': */
	}

	*p_dst = dst;
	*p_src = src;
}

char *cf_literal_to_str(const char *literal, size_t count)
{
	const char *temp_src;
	char *str, *temp_dst;

	if (!count)
		count = strlen(literal);
	if (count < 2)
		return NULL;

	if (literal[0] != literal[count - 1])
		return NULL;
	if (literal[0] != '\"' && literal[0] != '\'')
		return NULL;

	/* strip leading and trailing quote characters */
	str = bzalloc(--count);
	temp_src = literal + 1;
	temp_dst = str;

	while (*temp_src && --count > 0) {
		if (*temp_src == '\\') {
			temp_src++;
			cf_convert_from_escape_literal(&temp_dst, &temp_src);
		} else {
			*(temp_dst++) = *(temp_src++);
		}
	}

	*temp_dst = 0;
	return str;
}

static bool cf_is_token_break(struct base_token *start_token, const struct base_token *token)
{
	switch (start_token->type) {
	case BASETOKEN_ALPHA:
		if (token->type == BASETOKEN_OTHER || token->type ==
BASETOKEN_WHITESPACE) return true; break; case BASETOKEN_DIGIT: if (token->type == BASETOKEN_WHITESPACE || (token->type == BASETOKEN_OTHER && *token->text.array != '.')) return true; break; case BASETOKEN_WHITESPACE: /* lump all non-newline whitespace together when possible */ if (is_space_or_tab(*start_token->text.array) && is_space_or_tab(*token->text.array)) break; return true; case BASETOKEN_OTHER: if (*start_token->text.array == '.' && token->type == BASETOKEN_DIGIT) { start_token->type = BASETOKEN_DIGIT; break; } /* Falls through. */ case BASETOKEN_NONE: return true; } return false; } static inline bool cf_is_splice(const char *array) { return (*array == '\\' && is_newline(array[1])); } static inline void cf_pass_any_splices(const char **parray) { while (cf_is_splice(*parray)) *parray += 1 + newline_size((*parray) + 1); } static inline bool cf_is_comment(const char *array) { const char *offset = array; if (*offset++ == '/') { cf_pass_any_splices(&offset); return (*offset == '*' || *offset == '/'); } return false; } static bool cf_lexer_process_comment(struct cf_lexer *lex, struct cf_token *out_token) { const char *offset; if (!cf_is_comment(out_token->unmerged_str.array)) return false; offset = lex->base_lexer.offset; cf_pass_any_splices(&offset); strcpy(lex->write_offset++, " "); out_token->str.len = 1; if (*offset == '/') { while (*++offset && !is_newline(*offset)) cf_pass_any_splices(&offset); } else if (*offset == '*') { bool was_star = false; lex->unexpected_eof = true; while (*++offset) { cf_pass_any_splices(&offset); if (was_star && *offset == '/') { offset++; lex->unexpected_eof = false; break; } else { was_star = (*offset == '*'); } } } out_token->unmerged_str.len += (size_t)(offset - out_token->unmerged_str.array); out_token->type = CFTOKEN_SPACETAB; lex->base_lexer.offset = offset; return true; } static inline void cf_lexer_write_strref(struct cf_lexer *lex, const struct strref *ref) { strncpy(lex->write_offset, ref->array, ref->len); 
lex->write_offset[ref->len] = 0; lex->write_offset += ref->len; } static bool cf_lexer_is_include(struct cf_lexer *lex) { bool found_include_import = false; bool found_preprocessor = false; size_t i; for (i = lex->tokens.num; i > 0; i--) { struct cf_token *token = lex->tokens.array + (i - 1); if (is_space_or_tab(*token->str.array)) continue; if (!found_include_import) { if (strref_cmp(&token->str, "include") != 0 && strref_cmp(&token->str, "import") != 0) break; found_include_import = true; } else if (!found_preprocessor) { if (*token->str.array != '#') break; found_preprocessor = true; } else { return is_newline(*token->str.array); } } /* if starting line */ return found_preprocessor && found_include_import; } static void cf_lexer_getstrtoken(struct cf_lexer *lex, struct cf_token *out_token, char delimiter, bool allow_escaped_delimiters) { const char *offset = lex->base_lexer.offset; bool escaped = false; out_token->unmerged_str.len++; out_token->str.len++; cf_lexer_write_strref(lex, &out_token->unmerged_str); while (*offset) { cf_pass_any_splices(&offset); if (*offset == delimiter) { if (!escaped) { *lex->write_offset++ = *offset; out_token->str.len++; offset++; break; } } else if (is_newline(*offset)) { break; } *lex->write_offset++ = *offset; out_token->str.len++; escaped = (allow_escaped_delimiters && *offset == '\\'); offset++; } *lex->write_offset = 0; out_token->unmerged_str.len += (size_t)(offset - out_token->unmerged_str.array); out_token->type = CFTOKEN_STRING; lex->base_lexer.offset = offset; } static bool cf_lexer_process_string(struct cf_lexer *lex, struct cf_token *out_token) { char ch = *out_token->unmerged_str.array; if (ch == '<' && cf_lexer_is_include(lex)) { cf_lexer_getstrtoken(lex, out_token, '>', false); return true; } else if (ch == '"' || ch == '\'') { cf_lexer_getstrtoken(lex, out_token, ch, !cf_lexer_is_include(lex)); return true; } return false; } static inline enum cf_token_type cf_get_token_type(const struct cf_token *token, const 
struct base_token *start_token) { switch (start_token->type) { case BASETOKEN_ALPHA: return CFTOKEN_NAME; case BASETOKEN_DIGIT: return CFTOKEN_NUM; case BASETOKEN_WHITESPACE: if (is_newline(*token->str.array)) return CFTOKEN_NEWLINE; else return CFTOKEN_SPACETAB; case BASETOKEN_NONE: case BASETOKEN_OTHER: break; } return CFTOKEN_OTHER; } static bool cf_lexer_nexttoken(struct cf_lexer *lex, struct cf_token *out_token) { struct base_token token, start_token; bool wrote_data = false; base_token_clear(&token); base_token_clear(&start_token); cf_token_clear(out_token); while (lexer_getbasetoken(&lex->base_lexer, &token, PARSE_WHITESPACE)) { /* reclassify underscore as alpha for alnum tokens */ if (*token.text.array == '_') token.type = BASETOKEN_ALPHA; /* ignore escaped newlines to merge spliced lines */ if (cf_is_splice(token.text.array)) { lex->base_lexer.offset += newline_size(token.text.array + 1); continue; } if (!wrote_data) { out_token->unmerged_str.array = token.text.array; out_token->str.array = lex->write_offset; /* if comment then output a space */ if (cf_lexer_process_comment(lex, out_token)) return true; /* process string tokens if any */ if (cf_lexer_process_string(lex, out_token)) return true; base_token_copy(&start_token, &token); wrote_data = true; } else if (cf_is_token_break(&start_token, &token)) { lex->base_lexer.offset -= token.text.len; break; } /* write token to CF lexer to account for splicing/comments */ cf_lexer_write_strref(lex, &token.text); out_token->str.len += token.text.len; } if (wrote_data) { out_token->unmerged_str.len = (size_t)(lex->base_lexer.offset - out_token->unmerged_str.array); out_token->type = cf_get_token_type(out_token, &start_token); } return wrote_data; } void cf_lexer_init(struct cf_lexer *lex) { lexer_init(&lex->base_lexer); da_init(lex->tokens); lex->file = NULL; lex->reformatted = NULL; lex->write_offset = NULL; lex->unexpected_eof = false; } void cf_lexer_free(struct cf_lexer *lex) { bfree(lex->file); 
bfree(lex->reformatted); lexer_free(&lex->base_lexer); da_free(lex->tokens); lex->file = NULL; lex->reformatted = NULL; lex->write_offset = NULL; lex->unexpected_eof = false; } bool cf_lexer_lex(struct cf_lexer *lex, const char *str, const char *file) { struct cf_token token; struct cf_token *last_token = NULL; cf_lexer_free(lex); if (!str || !*str) return false; if (file) lex->file = bstrdup(file); lexer_start(&lex->base_lexer, str); cf_token_clear(&token); lex->reformatted = bmalloc(strlen(str) + 1); lex->reformatted[0] = 0; lex->write_offset = lex->reformatted; while (cf_lexer_nexttoken(lex, &token)) { if (last_token && is_space_or_tab(*last_token->str.array) && is_space_or_tab(*token.str.array)) { cf_token_add(last_token, &token); continue; } token.lex = lex; last_token = da_push_back_new(lex->tokens); memcpy(last_token, &token, sizeof(struct cf_token)); } cf_token_clear(&token); token.str.array = lex->write_offset; token.unmerged_str.array = lex->base_lexer.offset; token.lex = lex; da_push_back(lex->tokens, &token); return !lex->unexpected_eof; } /* ------------------------------------------------------------------------- */ struct macro_param { struct cf_token name; cf_token_array_t tokens; }; static inline void macro_param_init(struct macro_param *param) { cf_token_clear(&param->name); da_init(param->tokens); } static inline void macro_param_free(struct macro_param *param) { cf_token_clear(&param->name); da_free(param->tokens); } /* ------------------------------------------------------------------------- */ struct macro_params { DARRAY(struct macro_param) params; }; static inline void macro_params_init(struct macro_params *params) { da_init(params->params); } static inline void macro_params_free(struct macro_params *params) { size_t i; for (i = 0; i < params->params.num; i++) macro_param_free(params->params.array + i); da_free(params->params); } static inline struct macro_param *get_macro_param(const struct macro_params *params, const struct strref *name) { size_t
i; if (!params) return NULL; for (i = 0; i < params->params.num; i++) { struct macro_param *param = params->params.array + i; if (strref_cmp_strref(&param->name.str, name) == 0) return param; } return NULL; } /* ------------------------------------------------------------------------- */ static bool cf_preprocessor(struct cf_preprocessor *pp, bool if_block, struct cf_token **p_cur_token); static void cf_preprocess_tokens(struct cf_preprocessor *pp, bool if_block, struct cf_token **p_cur_token); static inline bool go_to_newline(struct cf_token **p_cur_token) { struct cf_token *cur_token = *p_cur_token; while (cur_token->type != CFTOKEN_NEWLINE && cur_token->type != CFTOKEN_NONE) cur_token++; *p_cur_token = cur_token; return cur_token->type != CFTOKEN_NONE; } static inline bool next_token(struct cf_token **p_cur_token, bool preprocessor) { struct cf_token *cur_token = *p_cur_token; if (cur_token->type != CFTOKEN_NONE) cur_token++; /* if preprocessor, stop at newline */ while (cur_token->type == CFTOKEN_SPACETAB || (!preprocessor && cur_token->type == CFTOKEN_NEWLINE)) cur_token++; *p_cur_token = cur_token; return cur_token->type != CFTOKEN_NONE; } static inline void cf_gettokenoffset(struct cf_preprocessor *pp, const struct cf_token *token, uint32_t *row, uint32_t *col) { lexer_getstroffset(&pp->lex->base_lexer, token->unmerged_str.array, row, col); } static void cf_addew(struct cf_preprocessor *pp, const struct cf_token *token, const char *message, int error_level, const char *val1, const char *val2, const char *val3) { uint32_t row, col; cf_gettokenoffset(pp, token, &row, &col); if (!val1 && !val2 && !val3) { error_data_add(pp->ed, token->lex->file, row, col, message, error_level); } else { struct dstr formatted; dstr_init(&formatted); dstr_safe_printf(&formatted, message, val1, val2, val3, NULL); error_data_add(pp->ed, token->lex->file, row, col, formatted.array, error_level); dstr_free(&formatted); } } static inline void cf_adderror(struct cf_preprocessor *pp, const
struct cf_token *token, const char *error, const char *val1, const char *val2, const char *val3) { cf_addew(pp, token, error, LEX_ERROR, val1, val2, val3); } static inline void cf_addwarning(struct cf_preprocessor *pp, const struct cf_token *token, const char *warning, const char *val1, const char *val2, const char *val3) { cf_addew(pp, token, warning, LEX_WARNING, val1, val2, val3); } static inline void cf_adderror_expecting(struct cf_preprocessor *pp, const struct cf_token *token, const char *expecting) { cf_adderror(pp, token, "Expected $1", expecting, NULL, NULL); } static inline void cf_adderror_expected_newline(struct cf_preprocessor *pp, const struct cf_token *token) { cf_adderror(pp, token, "Unexpected token after preprocessor, expected " "newline", NULL, NULL, NULL); } static inline void cf_adderror_unexpected_endif_eof(struct cf_preprocessor *pp, const struct cf_token *token) { cf_adderror(pp, token, "Unexpected end of file before #endif", NULL, NULL, NULL); } static inline void cf_adderror_unexpected_eof(struct cf_preprocessor *pp, const struct cf_token *token) { cf_adderror(pp, token, "Unexpected end of file", NULL, NULL, NULL); } static inline void insert_path(struct cf_preprocessor *pp, struct dstr *str_file) { const char *file; const char *slash; if (pp && pp->lex && pp->lex->file) { file = pp->lex->file; slash = strrchr(file, '/'); if (slash) { struct dstr path = {0}; dstr_ncopy(&path, file, slash - file + 1); dstr_insert_dstr(str_file, 0, &path); dstr_free(&path); } } } static void cf_include_file(struct cf_preprocessor *pp, const struct cf_token *file_token) { struct cf_lexer new_lex; struct dstr str_file; FILE *file; char *file_data; struct cf_token *tokens; size_t i; dstr_init(&str_file); dstr_copy_strref(&str_file, &file_token->str); dstr_mid(&str_file, &str_file, 1, str_file.len - 2); insert_path(pp, &str_file); /* if dependency already exists, run preprocessor on it */ for (i = 0; i < pp->dependencies.num; i++) { struct cf_lexer *dep = 
pp->dependencies.array + i; if (strcmp(dep->file, str_file.array) == 0) { tokens = cf_lexer_get_tokens(dep); cf_preprocess_tokens(pp, false, &tokens); goto exit; } } file = os_fopen(str_file.array, "rb"); if (!file) { cf_adderror(pp, file_token, "Could not open file '$1'", file_token->str.array, NULL, NULL); goto exit; } os_fread_utf8(file, &file_data); fclose(file); cf_lexer_init(&new_lex); cf_lexer_lex(&new_lex, file_data, str_file.array); tokens = cf_lexer_get_tokens(&new_lex); cf_preprocess_tokens(pp, false, &tokens); bfree(file_data); da_push_back(pp->dependencies, &new_lex); exit: dstr_free(&str_file); } static inline bool is_sys_include(struct strref *ref) { return ref->len >= 2 && ref->array[0] == '<' && ref->array[ref->len - 1] == '>'; } static inline bool is_loc_include(struct strref *ref) { return ref->len >= 2 && ref->array[0] == '"' && ref->array[ref->len - 1] == '"'; } static void cf_preprocess_include(struct cf_preprocessor *pp, struct cf_token **p_cur_token) { struct cf_token *cur_token = *p_cur_token; if (pp->ignore_state) { go_to_newline(p_cur_token); return; } next_token(&cur_token, true); if (cur_token->type != CFTOKEN_STRING) { cf_adderror_expecting(pp, cur_token, "string"); go_to_newline(&cur_token); goto exit; } if (is_sys_include(&cur_token->str)) { /* TODO */ } else if (is_loc_include(&cur_token->str)) { if (!pp->ignore_state) cf_include_file(pp, cur_token); } else { cf_adderror(pp, cur_token, "Invalid or incomplete string", NULL, NULL, NULL); go_to_newline(&cur_token); goto exit; } cur_token++; exit: *p_cur_token = cur_token; } static bool cf_preprocess_macro_params(struct cf_preprocessor *pp, struct cf_def *def, struct cf_token **p_cur_token) { struct cf_token *cur_token = *p_cur_token; bool success = false; def->macro = true; do { next_token(&cur_token, true); if (cur_token->type != CFTOKEN_NAME) { cf_adderror_expecting(pp, cur_token, "identifier"); go_to_newline(&cur_token); goto exit; } cf_def_addparam(def, cur_token); 
next_token(&cur_token, true); if (cur_token->type != CFTOKEN_OTHER || (*cur_token->str.array != ',' && *cur_token->str.array != ')')) { cf_adderror_expecting(pp, cur_token, "',' or ')'"); go_to_newline(&cur_token); goto exit; } } while (*cur_token->str.array != ')'); /* ended properly, now go to first define token (or newline) */ next_token(&cur_token, true); success = true; exit: *p_cur_token = cur_token; return success; } #define INVALID_INDEX ((size_t)-1) static inline size_t cf_preprocess_get_def_idx(struct cf_preprocessor *pp, const struct strref *def_name) { struct cf_def *array = pp->defines.array; size_t i; for (i = 0; i < pp->defines.num; i++) { struct cf_def *cur_def = array + i; if (strref_cmp_strref(&cur_def->name.str, def_name) == 0) return i; } return INVALID_INDEX; } static inline struct cf_def *cf_preprocess_get_def(struct cf_preprocessor *pp, const struct strref *def_name) { size_t idx = cf_preprocess_get_def_idx(pp, def_name); if (idx == INVALID_INDEX) return NULL; return pp->defines.array + idx; } static char space_filler[2] = " "; static inline void append_space(struct cf_preprocessor *pp, cf_token_array_t *tokens, const struct cf_token *base) { struct cf_token token; strref_set(&token.str, space_filler, 1); token.type = CFTOKEN_SPACETAB; if (base) { token.lex = base->lex; strref_copy(&token.unmerged_str, &base->unmerged_str); } else { token.lex = pp->lex; strref_copy(&token.unmerged_str, &token.str); } da_push_back(*tokens, &token); } static inline void append_end_token(cf_token_array_t *tokens) { struct cf_token end; cf_token_clear(&end); da_push_back(*tokens, &end); } static void cf_preprocess_define(struct cf_preprocessor *pp, struct cf_token **p_cur_token) { struct cf_token *cur_token = *p_cur_token; struct cf_def def; if (pp->ignore_state) { go_to_newline(p_cur_token); return; } cf_def_init(&def); next_token(&cur_token, true); if (cur_token->type != CFTOKEN_NAME) { cf_adderror_expecting(pp, cur_token, "identifier"); 
go_to_newline(&cur_token); goto exit; } append_space(pp, &def.tokens, NULL); cf_token_copy(&def.name, cur_token); if (!next_token(&cur_token, true)) goto complete; /* process macro */ if (*cur_token->str.array == '(') { if (!cf_preprocess_macro_params(pp, &def, &cur_token)) goto error; } while (cur_token->type != CFTOKEN_NEWLINE && cur_token->type != CFTOKEN_NONE) cf_def_addtoken(&def, cur_token++); complete: append_end_token(&def.tokens); append_space(pp, &def.tokens, NULL); da_push_back(pp->defines, &def); goto exit; error: cf_def_free(&def); exit: *p_cur_token = cur_token; } static inline void cf_preprocess_remove_def_strref(struct cf_preprocessor *pp, const struct strref *ref) { size_t def_idx = cf_preprocess_get_def_idx(pp, ref); if (def_idx != INVALID_INDEX) { struct cf_def *array = pp->defines.array; cf_def_free(array + def_idx); da_erase(pp->defines, def_idx); } } static void cf_preprocess_undef(struct cf_preprocessor *pp, struct cf_token **p_cur_token) { struct cf_token *cur_token = *p_cur_token; if (pp->ignore_state) { go_to_newline(p_cur_token); return; } next_token(&cur_token, true); if (cur_token->type != CFTOKEN_NAME) { cf_adderror_expecting(pp, cur_token, "identifier"); go_to_newline(&cur_token); goto exit; } cf_preprocess_remove_def_strref(pp, &cur_token->str); cur_token++; exit: *p_cur_token = cur_token; } /* Processes an #ifdef/#ifndef/#if/#else/#elif sub block recursively */ static inline bool cf_preprocess_subblock(struct cf_preprocessor *pp, bool ignore, struct cf_token **p_cur_token) { bool eof; if (!next_token(p_cur_token, true)) return false; if (!pp->ignore_state) { pp->ignore_state = ignore; cf_preprocess_tokens(pp, true, p_cur_token); pp->ignore_state = false; } else { cf_preprocess_tokens(pp, true, p_cur_token); } eof = ((*p_cur_token)->type == CFTOKEN_NONE); if (eof) cf_adderror_unexpected_endif_eof(pp, *p_cur_token); return !eof; } static void cf_preprocess_ifdef(struct cf_preprocessor *pp, bool ifnot, struct cf_token **p_cur_token) { 
struct cf_token *cur_token = *p_cur_token; struct cf_def *def; bool is_true; next_token(&cur_token, true); if (cur_token->type != CFTOKEN_NAME) { cf_adderror_expecting(pp, cur_token, "identifier"); go_to_newline(&cur_token); goto exit; } def = cf_preprocess_get_def(pp, &cur_token->str); is_true = (def == NULL) == ifnot; if (!cf_preprocess_subblock(pp, !is_true, &cur_token)) goto exit; if (strref_cmp(&cur_token->str, "else") == 0) { if (!cf_preprocess_subblock(pp, is_true, &cur_token)) goto exit; /*} else if (strref_cmp(&cur_token->str, "elif") == 0) {*/ } cur_token++; exit: *p_cur_token = cur_token; } static bool cf_preprocessor(struct cf_preprocessor *pp, bool if_block, struct cf_token **p_cur_token) { struct cf_token *cur_token = *p_cur_token; if (strref_cmp(&cur_token->str, "include") == 0) { cf_preprocess_include(pp, p_cur_token); } else if (strref_cmp(&cur_token->str, "define") == 0) { cf_preprocess_define(pp, p_cur_token); } else if (strref_cmp(&cur_token->str, "undef") == 0) { cf_preprocess_undef(pp, p_cur_token); } else if (strref_cmp(&cur_token->str, "ifdef") == 0) { cf_preprocess_ifdef(pp, false, p_cur_token); } else if (strref_cmp(&cur_token->str, "ifndef") == 0) { cf_preprocess_ifdef(pp, true, p_cur_token); /*} else if (strref_cmp(&cur_token->str, "if") == 0) { TODO;*/ } else if (strref_cmp(&cur_token->str, "else") == 0 || /*strref_cmp(&cur_token->str, "elif") == 0 ||*/ strref_cmp(&cur_token->str, "endif") == 0) { if (!if_block) { struct dstr name; dstr_init_copy_strref(&name, &cur_token->str); cf_adderror(pp, cur_token, "#$1 outside of " "#if/#ifdef/#ifndef block", name.array, NULL, NULL); dstr_free(&name); (*p_cur_token)++; return true; } return false; } else if (cur_token->type != CFTOKEN_NEWLINE && cur_token->type != CFTOKEN_NONE) { /* * TODO: language-specific preprocessor stuff should be sent to * handler of some sort */ (*p_cur_token)++; } return true; } static void cf_preprocess_addtoken(struct cf_preprocessor *pp, cf_token_array_t *dst, struct 
cf_token **p_cur_token, const struct cf_token *base, const struct macro_params *params); /* * collects tokens for a macro parameter * * note that it is important to make sure that any usage of function calls * within a macro parameter is preserved, example MACRO(func(1, 2), 3), do not * let it stop on the comma at "1," */ static void cf_preprocess_save_macro_param(struct cf_preprocessor *pp, struct cf_token **p_cur_token, struct macro_param *param, const struct cf_token *base, const struct macro_params *cur_params) { struct cf_token *cur_token = *p_cur_token; int brace_count = 0; append_space(pp, &param->tokens, base); while (cur_token->type != CFTOKEN_NONE) { if (*cur_token->str.array == '(') { brace_count++; } else if (*cur_token->str.array == ')') { if (brace_count) brace_count--; else break; } else if (*cur_token->str.array == ',') { if (!brace_count) break; } cf_preprocess_addtoken(pp, &param->tokens, &cur_token, base, cur_params); } if (cur_token->type == CFTOKEN_NONE) cf_adderror_unexpected_eof(pp, cur_token); append_space(pp, &param->tokens, base); append_end_token(&param->tokens); *p_cur_token = cur_token; } static inline bool param_is_whitespace(const struct macro_param *param) { struct cf_token *array = param->tokens.array; size_t i; for (i = 0; i < param->tokens.num; i++) if (array[i].type != CFTOKEN_NONE && array[i].type != CFTOKEN_SPACETAB && array[i].type != CFTOKEN_NEWLINE) return false; return true; } /* collects parameter tokens of a used macro and stores them for the unwrap */ static void cf_preprocess_save_macro_params(struct cf_preprocessor *pp, struct cf_token **p_cur_token, const struct cf_def *def, const struct cf_token *base, const struct macro_params *cur_params, struct macro_params *dst) { struct cf_token *cur_token = *p_cur_token; size_t count = 0; next_token(&cur_token, false); if (cur_token->type != CFTOKEN_OTHER || *cur_token->str.array != '(') { cf_adderror_expecting(pp, cur_token, "'('"); goto exit; } do { struct macro_param param;
macro_param_init(&param); cur_token++; count++; cf_preprocess_save_macro_param(pp, &cur_token, &param, base, cur_params); if (cur_token->type != CFTOKEN_OTHER || (*cur_token->str.array != ',' && *cur_token->str.array != ')')) { macro_param_free(&param); cf_adderror_expecting(pp, cur_token, "',' or ')'"); goto exit; } if (param_is_whitespace(&param)) { /* if 0-param macro, ignore first entry */ if (count == 1 && !def->params.num && *cur_token->str.array == ')') { macro_param_free(&param); break; } } if (count <= def->params.num) { cf_token_copy(&param.name, cf_def_getparam(def, count - 1)); da_push_back(dst->params, &param); } else { macro_param_free(&param); } } while (*cur_token->str.array != ')'); if (count != def->params.num) cf_adderror(pp, cur_token, "Mismatching number of macro parameters", NULL, NULL, NULL); exit: *p_cur_token = cur_token; } static inline void cf_preprocess_unwrap_param(struct cf_preprocessor *pp, cf_token_array_t *dst, struct cf_token **p_cur_token, const struct cf_token *base, const struct macro_param *param) { struct cf_token *cur_token = *p_cur_token; struct cf_token *cur_param_token = param->tokens.array; while (cur_param_token->type != CFTOKEN_NONE) cf_preprocess_addtoken(pp, dst, &cur_param_token, base, NULL); cur_token++; *p_cur_token = cur_token; } static inline void cf_preprocess_unwrap_define(struct cf_preprocessor *pp, cf_token_array_t *dst, struct cf_token **p_cur_token, const struct cf_token *base, const struct cf_def *def, const struct macro_params *cur_params) { struct cf_token *cur_token = *p_cur_token; struct macro_params new_params; struct cf_token *cur_def_token = def->tokens.array; macro_params_init(&new_params); if (def->macro) cf_preprocess_save_macro_params(pp, &cur_token, def, base, cur_params, &new_params); while (cur_def_token->type != CFTOKEN_NONE) cf_preprocess_addtoken(pp, dst, &cur_def_token, base, &new_params); macro_params_free(&new_params); cur_token++; *p_cur_token = cur_token; } static void cf_preprocess_addtoken(struct cf_preprocessor
*pp, cf_token_array_t *dst, struct cf_token **p_cur_token, const struct cf_token *base, const struct macro_params *params) { struct cf_token *cur_token = *p_cur_token; if (pp->ignore_state) goto ignore; if (!base) base = cur_token; if (cur_token->type == CFTOKEN_NAME) { struct cf_def *def; struct macro_param *param; param = get_macro_param(params, &cur_token->str); if (param) { cf_preprocess_unwrap_param(pp, dst, &cur_token, base, param); goto exit; } def = cf_preprocess_get_def(pp, &cur_token->str); if (def) { cf_preprocess_unwrap_define(pp, dst, &cur_token, base, def, params); goto exit; } } da_push_back(*dst, cur_token); ignore: cur_token++; exit: *p_cur_token = cur_token; } static void cf_preprocess_tokens(struct cf_preprocessor *pp, bool if_block, struct cf_token **p_cur_token) { bool newline = true; bool preprocessor_line = if_block; struct cf_token *cur_token = *p_cur_token; while (cur_token->type != CFTOKEN_NONE) { if (cur_token->type != CFTOKEN_SPACETAB && cur_token->type != CFTOKEN_NEWLINE) { if (preprocessor_line) { cf_adderror_expected_newline(pp, cur_token); if (!go_to_newline(&cur_token)) break; } if (newline && *cur_token->str.array == '#') { next_token(&cur_token, true); preprocessor_line = true; if (!cf_preprocessor(pp, if_block, &cur_token)) break; continue; } newline = false; } if (cur_token->type == CFTOKEN_NEWLINE) { newline = true; preprocessor_line = false; } else if (cur_token->type == CFTOKEN_NONE) { break; } cf_preprocess_addtoken(pp, &pp->tokens, &cur_token, NULL, NULL); } *p_cur_token = cur_token; } void cf_preprocessor_init(struct cf_preprocessor *pp) { da_init(pp->defines); da_init(pp->sys_include_dirs); da_init(pp->dependencies); da_init(pp->tokens); pp->lex = NULL; pp->ed = NULL; pp->ignore_state = false; } void cf_preprocessor_free(struct cf_preprocessor *pp) { struct cf_lexer *dependencies = pp->dependencies.array; char **sys_include_dirs = pp->sys_include_dirs.array; struct cf_def *defs = pp->defines.array; size_t i; for (i = 0; i 
< pp->defines.num; i++) cf_def_free(defs + i); for (i = 0; i < pp->sys_include_dirs.num; i++) bfree(sys_include_dirs[i]); for (i = 0; i < pp->dependencies.num; i++) cf_lexer_free(dependencies + i); da_free(pp->defines); da_free(pp->sys_include_dirs); da_free(pp->dependencies); da_free(pp->tokens); pp->lex = NULL; pp->ed = NULL; pp->ignore_state = false; } bool cf_preprocess(struct cf_preprocessor *pp, struct cf_lexer *lex, struct error_data *ed) { struct cf_token *token = cf_lexer_get_tokens(lex); if (!token) return false; pp->ed = ed; pp->lex = lex; cf_preprocess_tokens(pp, false, &token); da_push_back(pp->tokens, token); return !lex->unexpected_eof; } void cf_preprocessor_add_def(struct cf_preprocessor *pp, struct cf_def *def) { struct cf_def *existing = cf_preprocess_get_def(pp, &def->name.str); if (existing) { struct dstr name; dstr_init_copy_strref(&name, &def->name.str); cf_addwarning(pp, &def->name, "Token $1 already defined", name.array, NULL, NULL); cf_addwarning(pp, &existing->name, "Previous definition of $1 is here", name.array, NULL, NULL); cf_def_free(existing); memcpy(existing, def, sizeof(struct cf_def)); } else { da_push_back(pp->defines, def); } } void cf_preprocessor_remove_def(struct cf_preprocessor *pp, const char *def_name) { struct strref ref; ref.array = def_name; ref.len = strlen(def_name); cf_preprocess_remove_def_strref(pp, &ref); } obs-studio-32.1.0-sources/libobs/util/pipe.h000644 001751 001751 00000004073 15153330235 021575 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "c99defs.h" #ifdef __cplusplus extern "C" { #endif struct os_process_pipe; typedef struct os_process_pipe os_process_pipe_t; struct os_process_args; typedef struct os_process_args os_process_args_t; EXPORT os_process_pipe_t *os_process_pipe_create(const char *cmd_line, const char *type); EXPORT os_process_pipe_t *os_process_pipe_create2(const os_process_args_t *args, const char *type); EXPORT int os_process_pipe_destroy(os_process_pipe_t *pp); EXPORT size_t os_process_pipe_read(os_process_pipe_t *pp, uint8_t *data, size_t len); EXPORT size_t os_process_pipe_read_err(os_process_pipe_t *pp, uint8_t *data, size_t len); EXPORT size_t os_process_pipe_write(os_process_pipe_t *pp, const uint8_t *data, size_t len); EXPORT struct os_process_args *os_process_args_create(const char *executable); EXPORT void os_process_args_add_arg(struct os_process_args *args, const char *arg); #ifndef _MSC_VER __attribute__((__format__(__printf__, 2, 3))) #endif EXPORT void os_process_args_add_argf(struct os_process_args *args, const char *format, ...); EXPORT char **os_process_args_get_argv(const struct os_process_args *args); EXPORT size_t os_process_args_get_argc(struct os_process_args *args); EXPORT void os_process_args_destroy(struct os_process_args *args); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/platform-windows.c000644 001751 001751 00000072740 15153330235 024155 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission 
notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include #include #include #include #include #include #include #include #include "base.h" #include "platform.h" #include "darray.h" #include "dstr.h" #include "util_uint64.h" #include "windows/win-registry.h" #include "windows/win-version.h" #include "../../deps/w32-pthreads/pthread.h" #define MAX_SZ_LEN 256 static bool have_clockfreq = false; static LARGE_INTEGER clock_freq; static uint32_t winver = 0; static char win_release_id[MAX_SZ_LEN] = "unavailable"; static inline uint64_t get_clockfreq(void) { if (!have_clockfreq) { QueryPerformanceFrequency(&clock_freq); have_clockfreq = true; } return clock_freq.QuadPart; } static inline uint32_t get_winver(void) { if (!winver) { struct win_version_info ver; get_win_ver(&ver); winver = (ver.major << 8) | ver.minor; } return winver; } void *os_dlopen(const char *path) { struct dstr dll_name; wchar_t *wpath; wchar_t *wpath_slash; HMODULE h_library = NULL; if (!path) return NULL; dstr_init_copy(&dll_name, path); dstr_replace(&dll_name, "\\", "/"); if (!dstr_find(&dll_name, ".dll")) dstr_cat(&dll_name, ".dll"); os_utf8_to_wcs_ptr(dll_name.array, 0, &wpath); dstr_free(&dll_name); /* to make module dependency issues easier to deal with, allow * dynamically loaded libraries on windows to search for dependent * libraries that are within the library's own directory */ wpath_slash = wcsrchr(wpath, L'/'); if (wpath_slash) { *wpath_slash = 0; SetDllDirectoryW(wpath); *wpath_slash = L'/'; } h_library = 
LoadLibraryW(wpath); bfree(wpath); if (wpath_slash) SetDllDirectoryW(NULL); if (!h_library) { DWORD error = GetLastError(); /* don't print error for libraries that aren't meant to be * dynamically linked */ if (error == ERROR_PROC_NOT_FOUND) return NULL; char *message = NULL; FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS | FORMAT_MESSAGE_ALLOCATE_BUFFER, NULL, error, MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US), (LPSTR)&message, 0, NULL); blog(LOG_INFO, "LoadLibrary failed for '%s': %s (%lu)", path, message, error); if (message) LocalFree(message); } return h_library; } void *os_dlsym(void *module, const char *func) { void *handle; handle = (void *)GetProcAddress(module, func); return handle; } void os_dlclose(void *module) { FreeLibrary(module); } static bool has_obs_export(VOID *base, PIMAGE_NT_HEADERS nt_headers) { __try { PIMAGE_DATA_DIRECTORY data_dir; data_dir = &nt_headers->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT]; if (data_dir->Size == 0) return false; PIMAGE_SECTION_HEADER section, last_section; section = IMAGE_FIRST_SECTION(nt_headers); last_section = section; /* find the section that contains the export directory */ int i; for (i = 0; i < nt_headers->FileHeader.NumberOfSections; i++) { if (section->VirtualAddress <= data_dir->VirtualAddress) { last_section = section; section++; continue; } else { break; } } /* double check in case we exited early */ if (last_section->VirtualAddress > data_dir->VirtualAddress || section->VirtualAddress <= data_dir->VirtualAddress) return false; section = last_section; /* get a pointer to the export directory */ PIMAGE_EXPORT_DIRECTORY export; export = (PIMAGE_EXPORT_DIRECTORY)((byte *)base + data_dir->VirtualAddress - section->VirtualAddress + section->PointerToRawData); if (export->NumberOfNames == 0) return false; /* get a pointer to the export directory names */ DWORD *names_ptr; names_ptr = (DWORD *)((byte *)base + export->AddressOfNames - section->VirtualAddress + 
section->PointerToRawData); /* iterate through each name and see if its an obs plugin */ CHAR *name; size_t j; for (j = 0; j < export->NumberOfNames; j++) { name = (CHAR *)base + names_ptr[j] - section->VirtualAddress + section->PointerToRawData; if (!strcmp(name, "obs_module_load")) { return true; } } } __except (EXCEPTION_EXECUTE_HANDLER) { /* we failed somehow, for compatibility let's assume it * was a valid plugin and let the loader deal with it */ return true; } return false; } void get_plugin_info(const char *path, bool *is_obs_plugin) { struct dstr dll_name; wchar_t *wpath; HANDLE hFile = INVALID_HANDLE_VALUE; HANDLE hFileMapping = NULL; VOID *base = NULL; PIMAGE_DOS_HEADER dos_header; PIMAGE_NT_HEADERS nt_headers; *is_obs_plugin = false; if (!path) return; dstr_init_copy(&dll_name, path); dstr_replace(&dll_name, "\\", "/"); if (!dstr_find(&dll_name, ".dll")) dstr_cat(&dll_name, ".dll"); os_utf8_to_wcs_ptr(dll_name.array, 0, &wpath); dstr_free(&dll_name); hFile = CreateFileW(wpath, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0); bfree(wpath); if (hFile == INVALID_HANDLE_VALUE) goto cleanup; hFileMapping = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL); if (hFileMapping == NULL) goto cleanup; base = MapViewOfFile(hFileMapping, FILE_MAP_READ, 0, 0, 0); if (!base) goto cleanup; /* all mapped file i/o must be prepared to handle exceptions */ __try { dos_header = (PIMAGE_DOS_HEADER)base; if (dos_header->e_magic != IMAGE_DOS_SIGNATURE) goto cleanup; nt_headers = (PIMAGE_NT_HEADERS)((byte *)dos_header + dos_header->e_lfanew); if (nt_headers->Signature != IMAGE_NT_SIGNATURE) goto cleanup; *is_obs_plugin = has_obs_export(base, nt_headers); } __except (EXCEPTION_EXECUTE_HANDLER) { /* we failed somehow, for compatibility let's assume it * was a valid plugin and let the loader deal with it */ *is_obs_plugin = true; goto cleanup; } cleanup: if (base) UnmapViewOfFile(base); if (hFileMapping != NULL) CloseHandle(hFileMapping); if 
(hFile != INVALID_HANDLE_VALUE) CloseHandle(hFile); } bool os_is_obs_plugin(const char *path) { bool is_obs_plugin; get_plugin_info(path, &is_obs_plugin); return is_obs_plugin; } union time_data { FILETIME ft; unsigned long long val; }; struct os_cpu_usage_info { union time_data last_time, last_sys_time, last_user_time; DWORD core_count; }; os_cpu_usage_info_t *os_cpu_usage_info_start(void) { struct os_cpu_usage_info *info = bzalloc(sizeof(*info)); SYSTEM_INFO si; FILETIME dummy; GetSystemInfo(&si); GetSystemTimeAsFileTime(&info->last_time.ft); GetProcessTimes(GetCurrentProcess(), &dummy, &dummy, &info->last_sys_time.ft, &info->last_user_time.ft); info->core_count = si.dwNumberOfProcessors; return info; } double os_cpu_usage_info_query(os_cpu_usage_info_t *info) { union time_data cur_time, cur_sys_time, cur_user_time; FILETIME dummy; double percent; if (!info) return 0.0; GetSystemTimeAsFileTime(&cur_time.ft); GetProcessTimes(GetCurrentProcess(), &dummy, &dummy, &cur_sys_time.ft, &cur_user_time.ft); percent = (double)(cur_sys_time.val - info->last_sys_time.val + (cur_user_time.val - info->last_user_time.val)); percent /= (double)(cur_time.val - info->last_time.val); percent /= (double)info->core_count; info->last_time.val = cur_time.val; info->last_sys_time.val = cur_sys_time.val; info->last_user_time.val = cur_user_time.val; return percent * 100.0; } void os_cpu_usage_info_destroy(os_cpu_usage_info_t *info) { if (info) bfree(info); } bool os_sleepto_ns(uint64_t time_target) { const uint64_t freq = get_clockfreq(); const LONGLONG count_target = util_mul_div64(time_target, freq, 1000000000); LARGE_INTEGER count; QueryPerformanceCounter(&count); const bool stall = count.QuadPart < count_target; if (stall) { const DWORD milliseconds = (DWORD)(((count_target - count.QuadPart) * 1000.0) / freq); if (milliseconds > 1) Sleep(milliseconds - 1); for (;;) { QueryPerformanceCounter(&count); if (count.QuadPart >= count_target) break; YieldProcessor(); } } return stall; } bool 
os_sleepto_ns_fast(uint64_t time_target) { uint64_t current = os_gettime_ns(); if (time_target < current) return false; do { uint64_t remain_ms = (time_target - current) / 1000000; if (!remain_ms) remain_ms = 1; Sleep((DWORD)remain_ms); current = os_gettime_ns(); } while (time_target > current); return true; } void os_sleep_ms(uint32_t duration) { /* windows 8+ appears to have decreased sleep precision */ if (get_winver() >= 0x0602 && duration > 0) duration--; Sleep(duration); } uint64_t os_gettime_ns(void) { LARGE_INTEGER current_time; QueryPerformanceCounter(&current_time); return util_mul_div64(current_time.QuadPart, 1000000000, get_clockfreq()); } /* returns [folder]\[name] on windows */ static int os_get_path_internal(char *dst, size_t size, const char *name, int folder) { wchar_t path_utf16[MAX_PATH]; SHGetFolderPathW(NULL, folder, NULL, SHGFP_TYPE_CURRENT, path_utf16); if (os_wcs_to_utf8(path_utf16, 0, dst, size) != 0) { if (!name || !*name) { return (int)strlen(dst); } if (strcat_s(dst, size, "\\") == 0) { if (strcat_s(dst, size, name) == 0) { return (int)strlen(dst); } } } return -1; } static char *os_get_path_ptr_internal(const char *name, int folder) { char *ptr; wchar_t path_utf16[MAX_PATH]; struct dstr path; SHGetFolderPathW(NULL, folder, NULL, SHGFP_TYPE_CURRENT, path_utf16); os_wcs_to_utf8_ptr(path_utf16, 0, &ptr); dstr_init_move_array(&path, ptr); dstr_cat(&path, "\\"); dstr_cat(&path, name); return path.array; } int os_get_config_path(char *dst, size_t size, const char *name) { return os_get_path_internal(dst, size, name, CSIDL_APPDATA); } char *os_get_config_path_ptr(const char *name) { return os_get_path_ptr_internal(name, CSIDL_APPDATA); } int os_get_program_data_path(char *dst, size_t size, const char *name) { return os_get_path_internal(dst, size, name, CSIDL_COMMON_APPDATA); } char *os_get_program_data_path_ptr(const char *name) { return os_get_path_ptr_internal(name, CSIDL_COMMON_APPDATA); } char *os_get_executable_path_ptr(const char *name) { 
char *ptr; char *slash; wchar_t path_utf16[MAX_PATH]; struct dstr path; GetModuleFileNameW(NULL, path_utf16, MAX_PATH); os_wcs_to_utf8_ptr(path_utf16, 0, &ptr); dstr_init_move_array(&path, ptr); dstr_replace(&path, "\\", "/"); slash = strrchr(path.array, '/'); if (slash) { size_t len = slash - path.array + 1; dstr_resize(&path, len); } if (name && *name) { dstr_cat(&path, name); } return path.array; } bool os_file_exists(const char *path) { WIN32_FIND_DATAW wfd; HANDLE hFind; wchar_t *path_utf16; if (!os_utf8_to_wcs_ptr(path, 0, &path_utf16)) return false; hFind = FindFirstFileW(path_utf16, &wfd); if (hFind != INVALID_HANDLE_VALUE) FindClose(hFind); bfree(path_utf16); return hFind != INVALID_HANDLE_VALUE; } size_t os_get_abs_path(const char *path, char *abspath, size_t size) { wchar_t wpath[MAX_PATH]; wchar_t wabspath[MAX_PATH]; size_t out_len = 0; size_t len; if (!abspath) return 0; len = os_utf8_to_wcs(path, 0, wpath, MAX_PATH); if (!len) return 0; if (_wfullpath(wabspath, wpath, MAX_PATH) != NULL) out_len = os_wcs_to_utf8(wabspath, 0, abspath, size); return out_len; } char *os_get_abs_path_ptr(const char *path) { char *ptr = bmalloc(MAX_PATH); if (!os_get_abs_path(path, ptr, MAX_PATH)) { bfree(ptr); ptr = NULL; } return ptr; } struct os_dir { HANDLE handle; WIN32_FIND_DATA wfd; bool first; struct os_dirent out; }; os_dir_t *os_opendir(const char *path) { struct dstr path_str = {0}; struct os_dir *dir = NULL; WIN32_FIND_DATA wfd; HANDLE handle; wchar_t *w_path; dstr_copy(&path_str, path); dstr_cat(&path_str, "/*.*"); if (os_utf8_to_wcs_ptr(path_str.array, path_str.len, &w_path) > 0) { handle = FindFirstFileW(w_path, &wfd); if (handle != INVALID_HANDLE_VALUE) { dir = bzalloc(sizeof(struct os_dir)); dir->handle = handle; dir->first = true; dir->wfd = wfd; } bfree(w_path); } dstr_free(&path_str); return dir; } static inline bool is_dir(WIN32_FIND_DATA *wfd) { return !!(wfd->dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY); } struct os_dirent *os_readdir(os_dir_t *dir) { 
if (!dir) return NULL; if (dir->first) { dir->first = false; } else { if (!FindNextFileW(dir->handle, &dir->wfd)) return NULL; } os_wcs_to_utf8(dir->wfd.cFileName, 0, dir->out.d_name, sizeof(dir->out.d_name)); dir->out.directory = is_dir(&dir->wfd); return &dir->out; } void os_closedir(os_dir_t *dir) { if (dir) { FindClose(dir->handle); bfree(dir); } } int64_t os_get_free_space(const char *path) { ULARGE_INTEGER remainingSpace; char abs_path[512]; wchar_t w_abs_path[512]; if (os_get_abs_path(path, abs_path, 512) > 0) { if (os_utf8_to_wcs(abs_path, 0, w_abs_path, 512) > 0) { BOOL success = GetDiskFreeSpaceExW(w_abs_path, (PULARGE_INTEGER)&remainingSpace, NULL, NULL); if (success) return (int64_t)remainingSpace.QuadPart; } } return -1; } static void make_globent(struct os_globent *ent, WIN32_FIND_DATA *wfd, const char *pattern) { struct dstr name = {0}; struct dstr path = {0}; char *slash; dstr_from_wcs(&name, wfd->cFileName); dstr_copy(&path, pattern); if (path.array) { slash = strrchr(path.array, '/'); if (slash) dstr_resize(&path, slash + 1 - path.array); else dstr_free(&path); } dstr_cat_dstr(&path, &name); ent->path = path.array; ent->directory = is_dir(wfd); dstr_free(&name); } int os_glob(const char *pattern, int flags, os_glob_t **pglob) { DARRAY(struct os_globent) files; HANDLE handle; WIN32_FIND_DATA wfd; int ret = -1; wchar_t *w_path; da_init(files); if (os_utf8_to_wcs_ptr(pattern, 0, &w_path) > 0) { handle = FindFirstFileW(w_path, &wfd); if (handle != INVALID_HANDLE_VALUE) { do { struct os_globent ent = {0}; make_globent(&ent, &wfd, pattern); if (ent.path) da_push_back(files, &ent); } while (FindNextFile(handle, &wfd)); FindClose(handle); *pglob = bmalloc(sizeof(**pglob)); (*pglob)->gl_pathc = files.num; (*pglob)->gl_pathv = files.array; ret = 0; } bfree(w_path); } if (ret != 0) *pglob = NULL; UNUSED_PARAMETER(flags); return ret; } void os_globfree(os_glob_t *pglob) { if (pglob) { for (size_t i = 0; i < pglob->gl_pathc; i++) 
bfree(pglob->gl_pathv[i].path); bfree(pglob->gl_pathv); bfree(pglob); } } int os_unlink(const char *path) { wchar_t *w_path; bool success; os_utf8_to_wcs_ptr(path, 0, &w_path); if (!w_path) return -1; success = !!DeleteFileW(w_path); bfree(w_path); return success ? 0 : -1; } int os_rmdir(const char *path) { wchar_t *w_path; bool success; os_utf8_to_wcs_ptr(path, 0, &w_path); if (!w_path) return -1; success = !!RemoveDirectoryW(w_path); bfree(w_path); return success ? 0 : -1; } int os_mkdir(const char *path) { wchar_t *path_utf16; BOOL success; if (!os_utf8_to_wcs_ptr(path, 0, &path_utf16)) return MKDIR_ERROR; success = CreateDirectory(path_utf16, NULL); bfree(path_utf16); if (!success) return (GetLastError() == ERROR_ALREADY_EXISTS) ? MKDIR_EXISTS : MKDIR_ERROR; return MKDIR_SUCCESS; } int os_rename(const char *old_path, const char *new_path) { wchar_t *old_path_utf16 = NULL; wchar_t *new_path_utf16 = NULL; int code = -1; if (!os_utf8_to_wcs_ptr(old_path, 0, &old_path_utf16)) { return -1; } if (!os_utf8_to_wcs_ptr(new_path, 0, &new_path_utf16)) { goto error; } code = MoveFileExW(old_path_utf16, new_path_utf16, MOVEFILE_REPLACE_EXISTING) ? 0 : -1; error: bfree(old_path_utf16); bfree(new_path_utf16); return code; } int os_safe_replace(const char *target, const char *from, const char *backup) { wchar_t *wtarget = NULL; wchar_t *wfrom = NULL; wchar_t *wbackup = NULL; int code = -1; if (!target || !from) return -1; if (!os_utf8_to_wcs_ptr(target, 0, &wtarget)) return -1; if (!os_utf8_to_wcs_ptr(from, 0, &wfrom)) goto fail; if (backup && !os_utf8_to_wcs_ptr(backup, 0, &wbackup)) goto fail; if (ReplaceFileW(wtarget, wfrom, wbackup, 0, NULL, NULL)) { code = 0; } else if (GetLastError() == ERROR_FILE_NOT_FOUND) { code = MoveFileExW(wfrom, wtarget, MOVEFILE_REPLACE_EXISTING) ? 
0 : -1; } fail: bfree(wtarget); bfree(wfrom); bfree(wbackup); return code; } BOOL WINAPI DllMain(HINSTANCE hinst_dll, DWORD reason, LPVOID reserved) { switch (reason) { case DLL_PROCESS_ATTACH: timeBeginPeriod(1); #ifdef PTW32_STATIC_LIB pthread_win32_process_attach_np(); #endif break; case DLL_PROCESS_DETACH: timeEndPeriod(1); #ifdef PTW32_STATIC_LIB pthread_win32_process_detach_np(); #endif break; case DLL_THREAD_ATTACH: #ifdef PTW32_STATIC_LIB pthread_win32_thread_attach_np(); #endif break; case DLL_THREAD_DETACH: #ifdef PTW32_STATIC_LIB pthread_win32_thread_detach_np(); #endif break; } UNUSED_PARAMETER(hinst_dll); UNUSED_PARAMETER(reserved); return true; } os_performance_token_t *os_request_high_performance(const char *reason) { UNUSED_PARAMETER(reason); return NULL; } void os_end_high_performance(os_performance_token_t *token) { UNUSED_PARAMETER(token); } int os_copyfile(const char *file_in, const char *file_out) { wchar_t *file_in_utf16 = NULL; wchar_t *file_out_utf16 = NULL; int code = -1; if (!os_utf8_to_wcs_ptr(file_in, 0, &file_in_utf16)) { return -1; } if (!os_utf8_to_wcs_ptr(file_out, 0, &file_out_utf16)) { goto error; } code = CopyFileW(file_in_utf16, file_out_utf16, true) ? 0 : -1; error: bfree(file_in_utf16); bfree(file_out_utf16); return code; } char *os_getcwd(char *path, size_t size) { wchar_t *path_w; DWORD len; len = GetCurrentDirectoryW(0, NULL); if (!len) return NULL; path_w = bmalloc(((size_t)len + 1) * sizeof(wchar_t)); GetCurrentDirectoryW(len + 1, path_w); os_wcs_to_utf8(path_w, (size_t)len, path, size); bfree(path_w); return path; } int os_chdir(const char *path) { wchar_t *path_w = NULL; size_t size; int ret; size = os_utf8_to_wcs_ptr(path, 0, &path_w); if (!path_w) return -1; ret = SetCurrentDirectoryW(path_w) ? 
0 : -1; bfree(path_w); return ret; } typedef DWORD(WINAPI *get_file_version_info_size_w_t)(LPCWSTR module, LPDWORD unused); typedef BOOL(WINAPI *get_file_version_info_w_t)(LPCWSTR module, DWORD unused, DWORD len, LPVOID data); typedef BOOL(WINAPI *ver_query_value_w_t)(LPVOID data, LPCWSTR subblock, LPVOID *buf, PUINT sizeout); static get_file_version_info_size_w_t get_file_version_info_size = NULL; static get_file_version_info_w_t get_file_version_info = NULL; static ver_query_value_w_t ver_query_value = NULL; static bool ver_initialized = false; static bool ver_initialize_success = false; static bool initialize_version_functions(void) { HMODULE ver = GetModuleHandleW(L"version"); ver_initialized = true; if (!ver) { ver = LoadLibraryW(L"version"); if (!ver) { blog(LOG_ERROR, "Failed to load windows " "version library"); return false; } } get_file_version_info_size = (get_file_version_info_size_w_t)GetProcAddress(ver, "GetFileVersionInfoSizeW"); get_file_version_info = (get_file_version_info_w_t)GetProcAddress(ver, "GetFileVersionInfoW"); ver_query_value = (ver_query_value_w_t)GetProcAddress(ver, "VerQueryValueW"); if (!get_file_version_info_size || !get_file_version_info || !ver_query_value) { blog(LOG_ERROR, "Failed to load windows version " "functions"); return false; } ver_initialize_success = true; return true; } bool get_dll_ver(const wchar_t *lib, struct win_version_info *ver_info) { VS_FIXEDFILEINFO *info = NULL; UINT len = 0; BOOL success; LPVOID data; DWORD size; char utf8_lib[512]; if (!ver_initialized && !initialize_version_functions()) return false; if (!ver_initialize_success) return false; os_wcs_to_utf8(lib, 0, utf8_lib, sizeof(utf8_lib)); size = get_file_version_info_size(lib, NULL); if (!size) { blog(LOG_ERROR, "Failed to get %s version info size", utf8_lib); return false; } data = bmalloc(size); if (!get_file_version_info(lib, 0, size, data)) { blog(LOG_ERROR, "Failed to get %s version info", utf8_lib); bfree(data); return false; } success = 
ver_query_value(data, L"\\", (LPVOID *)&info, &len); if (!success || !info || !len) { blog(LOG_ERROR, "Failed to get %s version info value", utf8_lib); bfree(data); return false; } ver_info->major = (int)HIWORD(info->dwFileVersionMS); ver_info->minor = (int)LOWORD(info->dwFileVersionMS); ver_info->build = (int)HIWORD(info->dwFileVersionLS); ver_info->revis = (int)LOWORD(info->dwFileVersionLS); bfree(data); return true; } bool is_64_bit_windows(void) { #if defined(_WIN64) return true; #elif defined(_WIN32) BOOL b64 = false; return IsWow64Process(GetCurrentProcess(), &b64) && b64; #endif } bool is_arm64_windows(void) { #if defined(_M_ARM64) || defined(_M_ARM64EC) return true; #else USHORT processMachine; USHORT nativeMachine; bool result = IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine); return (result && (nativeMachine == IMAGE_FILE_MACHINE_ARM64)); #endif } bool os_get_emulation_status(void) { #if defined(_M_ARM64) || defined(_M_ARM64EC) return false; #else return is_arm64_windows(); #endif } void get_reg_dword(HKEY hkey, LPCWSTR sub_key, LPCWSTR value_name, struct reg_dword *info) { struct reg_dword reg = {0}; HKEY key; LSTATUS status; status = RegOpenKeyEx(hkey, sub_key, 0, KEY_READ, &key); if (status != ERROR_SUCCESS) { info->status = status; info->size = 0; info->return_value = 0; return; } reg.size = sizeof(reg.return_value); reg.status = RegQueryValueExW(key, value_name, NULL, NULL, (LPBYTE)&reg.return_value, &reg.size); RegCloseKey(key); *info = reg; } #define WINVER_REG_KEY L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion" static inline void rtl_get_ver(struct win_version_info *ver) { HMODULE ntdll = GetModuleHandleW(L"ntdll"); if (!ntdll) return; NTSTATUS(WINAPI * get_ver) (RTL_OSVERSIONINFOEXW *) = (void *)GetProcAddress(ntdll, "RtlGetVersion"); if (!get_ver) { return; } RTL_OSVERSIONINFOEXW osver = {0}; osver.dwOSVersionInfoSize = sizeof(osver); NTSTATUS s = get_ver(&osver); if (s < 0) { return; } ver->major = osver.dwMajorVersion; 
ver->minor = osver.dwMinorVersion; ver->build = osver.dwBuildNumber; ver->revis = 0; } static inline bool get_reg_sz(HKEY key, const wchar_t *val, wchar_t *buf, DWORD size) { const LSTATUS status = RegGetValueW(key, NULL, val, RRF_RT_REG_SZ, NULL, buf, &size); return status == ERROR_SUCCESS; } static inline void get_reg_ver(struct win_version_info *ver) { HKEY key; DWORD size, dw_val; LSTATUS status; wchar_t str[MAX_SZ_LEN]; status = RegOpenKeyW(HKEY_LOCAL_MACHINE, WINVER_REG_KEY, &key); if (status != ERROR_SUCCESS) return; size = sizeof(dw_val); status = RegQueryValueExW(key, L"CurrentMajorVersionNumber", NULL, NULL, (LPBYTE)&dw_val, &size); if (status == ERROR_SUCCESS) ver->major = (int)dw_val; status = RegQueryValueExW(key, L"CurrentMinorVersionNumber", NULL, NULL, (LPBYTE)&dw_val, &size); if (status == ERROR_SUCCESS) ver->minor = (int)dw_val; status = RegQueryValueExW(key, L"UBR", NULL, NULL, (LPBYTE)&dw_val, &size); if (status == ERROR_SUCCESS) ver->revis = (int)dw_val; if (get_reg_sz(key, L"CurrentBuildNumber", str, sizeof(str))) { ver->build = wcstol(str, NULL, 10); } const wchar_t *release_key = ver->build > 19041 ? 
L"DisplayVersion" : L"ReleaseId"; if (get_reg_sz(key, release_key, str, sizeof(str))) { os_wcs_to_utf8(str, 0, win_release_id, MAX_SZ_LEN); } RegCloseKey(key); } static inline bool version_higher(struct win_version_info *cur, struct win_version_info *new) { if (new->major > cur->major) { return true; } if (new->major == cur->major) { if (new->minor > cur->minor) { return true; } if (new->minor == cur->minor) { if (new->build > cur->build) { return true; } if (new->build == cur->build) { return new->revis > cur->revis; } } } return false; } static inline void use_higher_ver(struct win_version_info *cur, struct win_version_info *new) { if (version_higher(cur, new)) *cur = *new; } void get_win_ver(struct win_version_info *info) { static struct win_version_info ver = {0}; static bool got_version = false; if (!info) return; if (!got_version) { struct win_version_info reg_ver = {0}; struct win_version_info rtl_ver = {0}; struct win_version_info nto_ver = {0}; get_reg_ver(®_ver); rtl_get_ver(&rtl_ver); get_dll_ver(L"ntoskrnl.exe", &nto_ver); ver = reg_ver; use_higher_ver(&ver, &rtl_ver); use_higher_ver(&ver, &nto_ver); got_version = true; } *info = ver; } const char *get_win_release_id(void) { return win_release_id; } uint32_t get_win_ver_int(void) { return get_winver(); } struct os_inhibit_info { bool active; }; os_inhibit_t *os_inhibit_sleep_create(const char *reason) { UNUSED_PARAMETER(reason); return bzalloc(sizeof(struct os_inhibit_info)); } bool os_inhibit_sleep_set_active(os_inhibit_t *info, bool active) { if (!info) return false; if (info->active == active) return false; if (active) { SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED | ES_AWAYMODE_REQUIRED | ES_DISPLAY_REQUIRED); } else { SetThreadExecutionState(ES_CONTINUOUS); } info->active = active; return true; } void os_inhibit_sleep_destroy(os_inhibit_t *info) { if (info) { os_inhibit_sleep_set_active(info, false); bfree(info); } } void os_breakpoint(void) { __debugbreak(); } void os_oom(void) { 
#ifdef DEBUG __debugbreak(); #else RaiseException(ERROR_OUTOFMEMORY, EXCEPTION_NONCONTINUABLE, 0, NULL); #endif } DWORD num_logical_cores(ULONG_PTR mask) { DWORD left_shift = sizeof(ULONG_PTR) * 8 - 1; DWORD bit_set_count = 0; ULONG_PTR bit_test = (ULONG_PTR)1 << left_shift; for (DWORD i = 0; i <= left_shift; ++i) { bit_set_count += ((mask & bit_test) ? 1 : 0); bit_test /= 2; } return bit_set_count; } static int physical_cores = 0; static int logical_cores = 0; static bool core_count_initialized = false; static void os_get_cores_internal(void) { PSYSTEM_LOGICAL_PROCESSOR_INFORMATION info = NULL, temp = NULL; DWORD len = 0; if (core_count_initialized) return; core_count_initialized = true; GetLogicalProcessorInformation(info, &len); if (GetLastError() != ERROR_INSUFFICIENT_BUFFER) return; info = malloc(len); if (info) { if (GetLogicalProcessorInformation(info, &len)) { DWORD num = len / sizeof(*info); temp = info; for (DWORD i = 0; i < num; i++) { if (temp->Relationship == RelationProcessorCore) { ULONG_PTR mask = temp->ProcessorMask; physical_cores++; logical_cores += num_logical_cores(mask); } temp++; } } free(info); } } int os_get_physical_cores(void) { if (!core_count_initialized) os_get_cores_internal(); return physical_cores; } int os_get_logical_cores(void) { if (!core_count_initialized) os_get_cores_internal(); return logical_cores; } static inline bool os_get_sys_memory_usage_internal(MEMORYSTATUSEX *msex) { if (!GlobalMemoryStatusEx(msex)) return false; return true; } uint64_t os_get_sys_free_size(void) { MEMORYSTATUSEX msex = {sizeof(MEMORYSTATUSEX)}; if (!os_get_sys_memory_usage_internal(&msex)) return 0; return msex.ullAvailPhys; } static uint64_t total_memory = 0; static bool total_memory_initialized = false; static void os_get_sys_total_size_internal() { total_memory_initialized = true; MEMORYSTATUSEX msex = {sizeof(MEMORYSTATUSEX)}; if (!os_get_sys_memory_usage_internal(&msex)) return; total_memory = msex.ullTotalPhys; } uint64_t 
os_get_sys_total_size(void) { if (!total_memory_initialized) os_get_sys_total_size_internal(); return total_memory; } static inline bool os_get_proc_memory_usage_internal(PROCESS_MEMORY_COUNTERS *pmc) { if (!GetProcessMemoryInfo(GetCurrentProcess(), pmc, sizeof(*pmc))) return false; return true; } bool os_get_proc_memory_usage(os_proc_memory_usage_t *usage) { PROCESS_MEMORY_COUNTERS pmc = {sizeof(PROCESS_MEMORY_COUNTERS)}; if (!os_get_proc_memory_usage_internal(&pmc)) return false; usage->resident_size = pmc.WorkingSetSize; usage->virtual_size = pmc.PagefileUsage; return true; } uint64_t os_get_proc_resident_size(void) { PROCESS_MEMORY_COUNTERS pmc = {sizeof(PROCESS_MEMORY_COUNTERS)}; if (!os_get_proc_memory_usage_internal(&pmc)) return 0; return pmc.WorkingSetSize; } uint64_t os_get_proc_virtual_size(void) { PROCESS_MEMORY_COUNTERS pmc = {sizeof(PROCESS_MEMORY_COUNTERS)}; if (!os_get_proc_memory_usage_internal(&pmc)) return 0; return pmc.PagefileUsage; } uint64_t os_get_free_disk_space(const char *dir) { wchar_t *wdir = NULL; os_utf8_to_wcs_ptr(dir, 0, &wdir); if (!wdir) return 0; ULARGE_INTEGER free; bool success = !!GetDiskFreeSpaceExW(wdir, &free, NULL, NULL); bfree(wdir); return success ? 
free.QuadPart : 0; } char *os_generate_uuid(void) { UUID uuid; RPC_STATUS res = UuidCreate(&uuid); if (res != RPC_S_OK && res != RPC_S_UUID_LOCAL_ONLY) bcrash("Failed to get UUID, RPC_STATUS: %l", res); struct dstr uuid_str = {0}; dstr_printf(&uuid_str, "%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x", uuid.Data1, uuid.Data2, uuid.Data3, uuid.Data4[0], uuid.Data4[1], uuid.Data4[2], uuid.Data4[3], uuid.Data4[4], uuid.Data4[5], uuid.Data4[6], uuid.Data4[7]); return uuid_str.array; } obs-studio-32.1.0-sources/libobs/util/bitstream.c000644 001751 001751 00000001740 15153330235 022623 0ustar00runnerrunner000000 000000 #include "bitstream.h" #include #include void bitstream_reader_init(struct bitstream_reader *r, uint8_t *data, size_t len) { memset(r, 0, sizeof(struct bitstream_reader)); r->buf = data; r->subPos = 0x80; r->len = len; } uint8_t bitstream_reader_read_bit(struct bitstream_reader *r) { if (r->pos >= r->len) return 0; uint8_t bit = (*(r->buf + r->pos) & r->subPos) == r->subPos ? 1 : 0; r->subPos >>= 0x1; if (r->subPos == 0) { r->subPos = 0x80; r->pos++; } return bit; } uint8_t bitstream_reader_read_bits(struct bitstream_reader *r, int bits) { uint8_t res = 0; for (int i = 1; i <= bits; i++) { res <<= 1; res |= bitstream_reader_read_bit(r); } return res; } uint8_t bitstream_reader_r8(struct bitstream_reader *r) { return bitstream_reader_read_bits(r, 8); } uint16_t bitstream_reader_r16(struct bitstream_reader *r) { uint8_t b = bitstream_reader_read_bits(r, 8); return ((uint16_t)b << 8) | bitstream_reader_read_bits(r, 8); } obs-studio-32.1.0-sources/libobs/util/cf-parser.c000644 001751 001751 00000003554 15153330235 022520 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. 
* * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "cf-parser.h" void cf_adderror(struct cf_parser *p, const char *error, int level, const char *val1, const char *val2, const char *val3) { uint32_t row, col; lexer_getstroffset(&p->cur_token->lex->base_lexer, p->cur_token->unmerged_str.array, &row, &col); if (!val1 && !val2 && !val3) { error_data_add(&p->error_list, p->cur_token->lex->file, row, col, error, level); } else { struct dstr formatted; dstr_init(&formatted); dstr_safe_printf(&formatted, error, val1, val2, val3, NULL); error_data_add(&p->error_list, p->cur_token->lex->file, row, col, formatted.array, level); dstr_free(&formatted); } } bool cf_pass_pair(struct cf_parser *p, char in, char out) { if (p->cur_token->type != CFTOKEN_OTHER || *p->cur_token->str.array != in) return p->cur_token->type != CFTOKEN_NONE; p->cur_token++; while (p->cur_token->type != CFTOKEN_NONE) { if (*p->cur_token->str.array == in) { if (!cf_pass_pair(p, in, out)) break; continue; } else if (*p->cur_token->str.array == out) { p->cur_token++; return true; } p->cur_token++; } return false; } obs-studio-32.1.0-sources/libobs/util/utf8.h000644 001751 001751 00000002251 15153330235 021522 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2007 Alexey Vatchenko * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. 
* * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once /* * utf8: implementation of UTF-8 charset encoding (RFC3629). */ #ifdef __cplusplus extern "C" { #endif #define UTF8_IGNORE_ERROR 0x01 #define UTF8_SKIP_BOM 0x02 size_t utf8_to_wchar(const char *in, size_t insize, wchar_t *out, size_t outsize, int flags); size_t wchar_to_utf8(const wchar_t *in, size_t insize, char *out, size_t outsize, int flags); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/util.hpp000644 001751 001751 00000007007 15153330235 022155 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
 */

/* Useful C++ classes/bindings for util data and pointers */

#pragma once

#include <string.h>
#include <stdarg.h>
#include <utility>

#include "bmem.h"
#include "config-file.h"
#include "text-lookup.h"

/* RAII wrappers */

template<typename T> class BPtr {
	T *ptr;

	BPtr(BPtr const &) = delete;

	BPtr &operator=(BPtr const &) = delete;

public:
	inline BPtr(T *p = nullptr) : ptr(p) {}
	inline BPtr(BPtr &&other) { *this = std::move(other); }
	inline ~BPtr() { bfree(ptr); }

	inline T *operator=(T *p)
	{
		bfree(ptr);
		ptr = p;
		return p;
	}

	inline BPtr &operator=(BPtr &&other)
	{
		ptr = other.ptr;
		other.ptr = nullptr;
		return *this;
	}

	inline operator T *() { return ptr; }
	inline T **operator&()
	{
		bfree(ptr);
		ptr = nullptr;
		return &ptr;
	}

	inline bool operator!() { return ptr == NULL; }
	inline bool operator==(T p) { return ptr == p; }
	inline bool operator!=(T p) { return ptr != p; }

	inline T *Get() const { return ptr; }
};

class ConfigFile {
	config_t *config;

	ConfigFile(ConfigFile const &) = delete;
	ConfigFile &operator=(ConfigFile const &) = delete;

public:
	inline ConfigFile() : config(NULL) {}
	inline ConfigFile(ConfigFile &&other) noexcept : config(other.config) { other.config = nullptr; }
	inline ~ConfigFile() { config_close(config); }

	inline bool Create(const char *file)
	{
		Close();
		config = config_create(file);
		return config != NULL;
	}

	inline void Swap(ConfigFile &other)
	{
		config_t *newConfig = other.config;
		other.config = config;
		config = newConfig;
	}

	inline int OpenString(const char *str)
	{
		Close();
		return config_open_string(&config, str);
	}

	inline int Open(const char *file, config_open_type openType)
	{
		Close();
		return config_open(&config, file, openType);
	}

	inline int Save() { return config_save(config); }

	inline int SaveSafe(const char *temp_ext, const char *backup_ext = nullptr)
	{
		return config_save_safe(config, temp_ext, backup_ext);
	}

	inline void Close()
	{
		config_close(config);
		config = NULL;
	}

	inline operator config_t *() const { return config; }
};

class TextLookup {
	lookup_t *lookup;

	TextLookup(TextLookup const &) =
delete; TextLookup &operator=(TextLookup const &) = delete; public: inline TextLookup(lookup_t *lookup = nullptr) : lookup(lookup) {} inline TextLookup(TextLookup &&other) noexcept : lookup(other.lookup) { other.lookup = nullptr; } inline ~TextLookup() { text_lookup_destroy(lookup); } inline TextLookup &operator=(lookup_t *val) { text_lookup_destroy(lookup); lookup = val; return *this; } inline operator lookup_t *() const { return lookup; } inline const char *GetString(const char *lookupVal) const { const char *out; if (!text_lookup_getstr(lookup, lookupVal, &out)) return lookupVal; return out; } }; obs-studio-32.1.0-sources/libobs/util/threading-posix.h000644 001751 001751 00000004570 15153330235 023747 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
 */

#pragma once

static inline long os_atomic_inc_long(volatile long *val)
{
	return __atomic_add_fetch(val, 1, __ATOMIC_SEQ_CST);
}

static inline long os_atomic_dec_long(volatile long *val)
{
	return __atomic_sub_fetch(val, 1, __ATOMIC_SEQ_CST);
}

static inline void os_atomic_store_long(volatile long *ptr, long val)
{
	__atomic_store_n(ptr, val, __ATOMIC_SEQ_CST);
}

static inline long os_atomic_set_long(volatile long *ptr, long val)
{
	return __atomic_exchange_n(ptr, val, __ATOMIC_SEQ_CST);
}

static inline long os_atomic_exchange_long(volatile long *ptr, long val)
{
	return os_atomic_set_long(ptr, val);
}

static inline long os_atomic_load_long(const volatile long *ptr)
{
	return __atomic_load_n(ptr, __ATOMIC_SEQ_CST);
}

static inline bool os_atomic_compare_swap_long(volatile long *val, long old_val, long new_val)
{
	return __atomic_compare_exchange_n(val, &old_val, new_val, false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

static inline bool os_atomic_compare_exchange_long(volatile long *val, long *old_val, long new_val)
{
	return __atomic_compare_exchange_n(val, old_val, new_val, false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

static inline void os_atomic_store_bool(volatile bool *ptr, bool val)
{
	__atomic_store_n(ptr, val, __ATOMIC_SEQ_CST);
}

static inline bool os_atomic_set_bool(volatile bool *ptr, bool val)
{
	return __atomic_exchange_n(ptr, val, __ATOMIC_SEQ_CST);
}

static inline bool os_atomic_exchange_bool(volatile bool *ptr, bool val)
{
	return os_atomic_set_bool(ptr, val);
}

static inline bool os_atomic_load_bool(const volatile bool *ptr)
{
	return __atomic_load_n(ptr, __ATOMIC_SEQ_CST);
}
obs-studio-32.1.0-sources/libobs/util/pipe-windows.c000644 001751 001751 00000013644 15153330235 023264 0ustar00runnerrunner000000 000000 /*
 * Copyright (c) 2023 Lain Bailey
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#define WIN32_LEAN_AND_MEAN
#include <windows.h>

#include "platform.h"
#include "bmem.h"
#include "dstr.h"
#include "pipe.h"

struct os_process_pipe {
	bool read_pipe;
	HANDLE handle;
	HANDLE handle_err;
	HANDLE process;
};

static bool create_pipe(HANDLE *input, HANDLE *output)
{
	SECURITY_ATTRIBUTES sa = {0};

	sa.nLength = sizeof(sa);
	sa.bInheritHandle = true;

	if (!CreatePipe(input, output, &sa, 0)) {
		return false;
	}

	return true;
}

static inline bool create_process(const char *cmd_line, HANDLE stdin_handle, HANDLE stdout_handle, HANDLE stderr_handle, HANDLE *process)
{
	PROCESS_INFORMATION pi = {0};
	wchar_t *cmd_line_w = NULL;
	STARTUPINFOW si = {0};
	bool success = false;

	si.cb = sizeof(si);
	si.dwFlags = STARTF_USESTDHANDLES | STARTF_FORCEOFFFEEDBACK;
	si.hStdInput = stdin_handle;
	si.hStdOutput = stdout_handle;
	si.hStdError = stderr_handle;

	DWORD flags = 0;
#ifndef SHOW_SUBPROCESSES
	flags = CREATE_NO_WINDOW;
#endif

	os_utf8_to_wcs_ptr(cmd_line, 0, &cmd_line_w);
	if (cmd_line_w) {
		success = !!CreateProcessW(NULL, cmd_line_w, NULL, NULL, true, flags, NULL, NULL, &si, &pi);

		if (success) {
			*process = pi.hProcess;
			CloseHandle(pi.hThread);
		} else {
			// Not logging the full command line is intentional
			// as it may contain stream keys etc.
blog(LOG_ERROR, "CreateProcessW failed: %lu", GetLastError()); } bfree(cmd_line_w); } return success; } os_process_pipe_t *os_process_pipe_create(const char *cmd_line, const char *type) { os_process_pipe_t *pp = NULL; bool read_pipe; HANDLE process; HANDLE output; HANDLE err_input, err_output; HANDLE input; bool success; if (!cmd_line || !type) { return NULL; } if (*type != 'r' && *type != 'w') { return NULL; } if (!create_pipe(&input, &output)) { return NULL; } if (!create_pipe(&err_input, &err_output)) { return NULL; } read_pipe = *type == 'r'; success = !!SetHandleInformation(read_pipe ? input : output, HANDLE_FLAG_INHERIT, false); if (!success) { goto error; } success = !!SetHandleInformation(err_input, HANDLE_FLAG_INHERIT, false); if (!success) { goto error; } success = create_process(cmd_line, read_pipe ? NULL : input, read_pipe ? output : NULL, err_output, &process); if (!success) { goto error; } pp = bmalloc(sizeof(*pp)); pp->handle = read_pipe ? input : output; pp->read_pipe = read_pipe; pp->process = process; pp->handle_err = err_input; CloseHandle(read_pipe ? output : input); CloseHandle(err_output); return pp; error: CloseHandle(output); CloseHandle(input); return NULL; } static inline void add_backslashes(struct dstr *str, size_t count) { while (count--) dstr_cat_ch(str, '\\'); } os_process_pipe_t *os_process_pipe_create2(const os_process_args_t *args, const char *type) { struct dstr cmd_line = {0}; /* Convert list to command line as Windows does not have any API that * allows us to just pass argc/argv. */ char **argv = os_process_args_get_argv(args); /* Based on Python subprocess module implementation. 
*/ while (*argv) { size_t bs_count = 0; const char *arg = *argv; bool needs_quotes = strlen(arg) == 0 || strstr(arg, " ") != NULL || strstr(arg, "\t") != NULL; if (cmd_line.len) dstr_cat_ch(&cmd_line, ' '); if (needs_quotes) dstr_cat_ch(&cmd_line, '"'); while (*arg) { if (*arg == '\\') { bs_count++; } else if (*arg == '"') { add_backslashes(&cmd_line, bs_count * 2); dstr_cat(&cmd_line, "\\\""); bs_count = 0; } else { if (bs_count) { add_backslashes(&cmd_line, bs_count); bs_count = 0; } dstr_cat_ch(&cmd_line, *arg); } arg++; } if (bs_count) add_backslashes(&cmd_line, bs_count); if (needs_quotes) { add_backslashes(&cmd_line, bs_count); dstr_cat_ch(&cmd_line, '"'); } argv++; } os_process_pipe_t *ret = os_process_pipe_create(cmd_line.array, type); dstr_free(&cmd_line); return ret; } int os_process_pipe_destroy(os_process_pipe_t *pp) { int ret = 0; if (pp) { DWORD code; CloseHandle(pp->handle); CloseHandle(pp->handle_err); WaitForSingleObject(pp->process, INFINITE); if (GetExitCodeProcess(pp->process, &code)) ret = (int)code; CloseHandle(pp->process); bfree(pp); } return ret; } size_t os_process_pipe_read(os_process_pipe_t *pp, uint8_t *data, size_t len) { DWORD bytes_read; bool success; if (!pp) { return 0; } if (!pp->read_pipe) { return 0; } success = !!ReadFile(pp->handle, data, (DWORD)len, &bytes_read, NULL); if (success && bytes_read) { return bytes_read; } return 0; } size_t os_process_pipe_read_err(os_process_pipe_t *pp, uint8_t *data, size_t len) { DWORD bytes_read; bool success; if (!pp || !pp->handle_err) { return 0; } success = !!ReadFile(pp->handle_err, data, (DWORD)len, &bytes_read, NULL); if (success && bytes_read) { return bytes_read; } else bytes_read = GetLastError(); return 0; } size_t os_process_pipe_write(os_process_pipe_t *pp, const uint8_t *data, size_t len) { DWORD bytes_written; bool success; if (!pp) { return 0; } if (pp->read_pipe) { return 0; } success = !!WriteFile(pp->handle, data, (DWORD)len, &bytes_written, NULL); if (success && 
bytes_written) { return bytes_written; } return 0; } obs-studio-32.1.0-sources/libobs/util/source-profiler.h000644 001751 001751 00000004267 15153330235 023765 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Dennis Sädtler This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "obs.h" #ifdef __cplusplus extern "C" { #endif typedef struct profiler_result { /* Tick times in ns */ uint64_t tick_avg; uint64_t tick_max; /* Average and max render times for CPU and GPU in ns */ uint64_t render_avg; uint64_t render_max; uint64_t render_gpu_avg; uint64_t render_gpu_max; /* Average of the sum of all render passes in a frame in ns * (a source can be rendered more than once per frame). 
*/ uint64_t render_sum; uint64_t render_gpu_sum; /* FPS of submitted async input */ double async_input; /* Actually rendered async frames */ double async_rendered; /* Best and worst frame times of input/output in ns */ uint64_t async_input_best; uint64_t async_input_worst; uint64_t async_rendered_best; uint64_t async_rendered_worst; } profiler_result_t; /* Enable/disable profiler (applied on next frame) */ EXPORT void source_profiler_enable(bool enable); /* Enable/disable GPU profiling (applied on next frame) */ EXPORT void source_profiler_gpu_enable(bool enable); /* Get latest profiling results for source (must be freed by user) */ EXPORT profiler_result_t *source_profiler_get_result(obs_source_t *source); /* Update existing profiler results object for source */ EXPORT bool source_profiler_fill_result(obs_source_t *source, profiler_result_t *result); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/bmem.c000644 001751 001751 00000006770 15153330235 021561 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
 */

#include <stdlib.h>
#include <string.h>

#include "base.h"
#include "bmem.h"
#include "platform.h"
#include "threading.h"

/*
 * NOTE: totally jacked the mem alignment trick from ffmpeg, credit to them:
 *   http://www.ffmpeg.org/
 */

#define ALIGNMENT 32

/*
 * Attention, intrepid adventurers, exploring the depths of the libobs code!
 *
 * There used to be a TODO comment here saying that we should use memalign on
 * non-Windows platforms. However, since *nix/POSIX systems do not provide an
 * aligned realloc(), this is currently not (easily) achievable.
 * So while the use of posix_memalign()/memalign() would be a fairly trivial
 * change, it would also ruin our memory alignment for some reallocated memory
 * on those platforms.
 */

#if defined(_WIN32)
#define ALIGNED_MALLOC 1
#else
#define ALIGNMENT_HACK 1
#endif

static void *a_malloc(size_t size)
{
#ifdef ALIGNED_MALLOC
	return _aligned_malloc(size, ALIGNMENT);
#elif ALIGNMENT_HACK
	void *ptr = NULL;
	long diff;

	ptr = malloc(size + ALIGNMENT);
	if (ptr) {
		diff = ((~(long)ptr) & (ALIGNMENT - 1)) + 1;
		ptr = (char *)ptr + diff;
		((char *)ptr)[-1] = (char)diff;
	}

	return ptr;
#else
	return malloc(size);
#endif
}

static void *a_realloc(void *ptr, size_t size)
{
#ifdef ALIGNED_MALLOC
	return _aligned_realloc(ptr, size, ALIGNMENT);
#elif ALIGNMENT_HACK
	long diff;

	if (!ptr)
		return a_malloc(size);
	diff = ((char *)ptr)[-1];
	ptr = realloc((char *)ptr - diff, size + diff);
	if (ptr)
		ptr = (char *)ptr + diff;
	return ptr;
#else
	return realloc(ptr, size);
#endif
}

static void a_free(void *ptr)
{
#ifdef ALIGNED_MALLOC
	_aligned_free(ptr);
#elif ALIGNMENT_HACK
	if (ptr)
		free((char *)ptr - ((char *)ptr)[-1]);
#else
	free(ptr);
#endif
}

static long num_allocs = 0;

void *bmalloc(size_t size)
{
	if (!size) {
		os_breakpoint();
		bcrash("bmalloc: Allocating 0 bytes is broken behavior, please fix your code!");
	}

	void *ptr = a_malloc(size);
	if (!ptr) {
		os_oom();
		bcrash("Out of memory while trying to allocate %lu bytes", (unsigned long)size);
	}

	os_atomic_inc_long(&num_allocs);
	return ptr;
} void *brealloc(void *ptr, size_t size) { if (!ptr) os_atomic_inc_long(&num_allocs); if (!size) { os_breakpoint(); bcrash("brealloc: Allocating 0 bytes is broken behavior, please fix your code!"); } ptr = a_realloc(ptr, size); if (!ptr) { os_oom(); bcrash("Out of memory while trying to allocate %lu bytes", (unsigned long)size); } return ptr; } void bfree(void *ptr) { if (ptr) { os_atomic_dec_long(&num_allocs); a_free(ptr); } } long bnum_allocs(void) { return num_allocs; } int base_get_alignment(void) { return ALIGNMENT; } void *bmemdup(const void *ptr, size_t size) { void *out = bmalloc(size); if (size) memcpy(out, ptr, size); return out; } obs-studio-32.1.0-sources/libobs/util/profiler.hpp000644 001751 001751 00000002126 15153330235 023017 0ustar00runnerrunner000000 000000 #pragma once #include "profiler.h" struct ScopeProfiler { const char *name; bool enabled = true; ScopeProfiler(const char *name) : name(name) { profile_start(name); } ~ScopeProfiler() { Stop(); } ScopeProfiler(const ScopeProfiler &) = delete; ScopeProfiler(ScopeProfiler &&other) : name(other.name), enabled(other.enabled) { other.enabled = false; } ScopeProfiler &operator=(const ScopeProfiler &) = delete; ScopeProfiler &operator=(ScopeProfiler &&other) = delete; void Stop() { if (!enabled) return; profile_end(name); enabled = false; } }; #ifndef NO_PROFILER_MACROS #define ScopeProfiler_NameConcatImpl(x, y) x##y #define ScopeProfiler_NameConcat(x, y) ScopeProfiler_NameConcatImpl(x, y) #ifdef __COUNTER__ #define ScopeProfiler_Name(x) ScopeProfiler_NameConcat(x, __COUNTER__) #else #define ScopeProfiler_Name(x) ScopeProfiler_NameConcat(x, __LINE__) #endif #define ProfileScope(x) \ ScopeProfiler ScopeProfiler_Name(SCOPE_PROFILE) \ { \ x \ } #endif obs-studio-32.1.0-sources/libobs/util/file-serializer.h000644 001751 001751 00000002420 15153330235 023720 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * 
purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "serializer.h" #ifdef __cplusplus extern "C" { #endif EXPORT bool file_input_serializer_init(struct serializer *s, const char *path); EXPORT void file_input_serializer_free(struct serializer *s); EXPORT bool file_output_serializer_init(struct serializer *s, const char *path); EXPORT bool file_output_serializer_init_safe(struct serializer *s, const char *path, const char *temp_ext); EXPORT void file_output_serializer_free(struct serializer *s); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/utf8.c000644 001751 001751 00000021014 15153330235 021513 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2007 Alexey Vatchenko * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
 * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#include <wchar.h>

#include "utf8.h"

#ifdef _WIN32
#include <windows.h>

#include "c99defs.h"

static inline bool has_utf8_bom(const char *in_char)
{
	uint8_t *in = (uint8_t *)in_char;
	return (in && in[0] == 0xef && in[1] == 0xbb && in[2] == 0xbf);
}

size_t utf8_to_wchar(const char *in, size_t insize, wchar_t *out, size_t outsize, int flags)
{
	int i_insize = (int)insize;
	int ret;

	if (i_insize == 0)
		i_insize = (int)strlen(in);

	/* prevent bom from being used in the string */
	if (has_utf8_bom(in)) {
		if (i_insize >= 3) {
			in += 3;
			i_insize -= 3;
		}
	}

	ret = MultiByteToWideChar(CP_UTF8, 0, in, i_insize, out, (int)outsize);

	UNUSED_PARAMETER(flags);
	return (ret > 0) ? (size_t)ret : 0;
}

size_t wchar_to_utf8(const wchar_t *in, size_t insize, char *out, size_t outsize, int flags)
{
	int i_insize = (int)insize;
	int ret;

	if (i_insize == 0)
		i_insize = (int)wcslen(in);

	ret = WideCharToMultiByte(CP_UTF8, 0, in, i_insize, out, (int)outsize, NULL, NULL);

	UNUSED_PARAMETER(flags);
	return (ret > 0) ? (size_t)ret : 0;
}

#else

#define _NXT 0x80
#define _SEQ2 0xc0
#define _SEQ3 0xe0
#define _SEQ4 0xf0
#define _SEQ5 0xf8
#define _SEQ6 0xfc

#define _BOM 0xfeff

static int wchar_forbidden(wchar_t sym);
static int utf8_forbidden(unsigned char octet);

static int wchar_forbidden(wchar_t sym)
{
	/* Surrogate pairs */
	if (sym >= 0xd800 && sym <= 0xdfff)
		return -1;

	return 0;
}

static int utf8_forbidden(unsigned char octet)
{
	switch (octet) {
	case 0xc0:
	case 0xc1:
	case 0xf5:
	case 0xff:
		return -1;
	}

	return 0;
}

/*
 * DESCRIPTION
 *	This function translates UTF-8 string into UCS-4 string (all symbols
 *	will be in local machine byte order).
 *
 *	It takes the following arguments:
 *	in	- input UTF-8 string.
It can be null-terminated. * insize - size of input string in bytes. If insize is 0, * function continues until a null terminator is reached. * out - result buffer for UCS-4 string. If out is NULL, * function returns size of result buffer. * outsize - size of out buffer in wide characters. * * RETURN VALUES * The function returns size of result buffer (in wide characters). * Zero is returned in case of error. * * CAVEATS * 1. If UTF-8 string contains zero symbols, they will be translated * as regular symbols. * 2. If UTF8_IGNORE_ERROR or UTF8_SKIP_BOM flag is set, sizes may vary * when `out' is NULL and not NULL. It's because of special UTF-8 * sequences which may result in forbidden (by RFC3629) UNICODE * characters. So, the caller must check return value every time and * not prepare buffer in advance (\0 terminate) but after calling this * function. */ size_t utf8_to_wchar(const char *in, size_t insize, wchar_t *out, size_t outsize, int flags) { unsigned char *p, *lim; wchar_t *wlim, high; size_t n, total, i, n_bits; if (in == NULL || (outsize == 0 && out != NULL)) return 0; total = 0; p = (unsigned char *)in; lim = (insize != 0) ? (p + insize) : (unsigned char *)-1; wlim = out == NULL ? NULL : out + outsize; for (; p < lim; p += n) { if (!*p && insize == 0) break; if (utf8_forbidden(*p) != 0 && (flags & UTF8_IGNORE_ERROR) == 0) return 0; /* * Get number of bytes for one wide character. */ n = 1; /* default: 1 byte. Used when skipping bytes. */ if ((*p & 0x80) == 0) high = (wchar_t)*p; else if ((*p & 0xe0) == _SEQ2) { n = 2; high = (wchar_t)(*p & 0x1f); } else if ((*p & 0xf0) == _SEQ3) { n = 3; high = (wchar_t)(*p & 0x0f); } else if ((*p & 0xf8) == _SEQ4) { n = 4; high = (wchar_t)(*p & 0x07); } else if ((*p & 0xfc) == _SEQ5) { n = 5; high = (wchar_t)(*p & 0x03); } else if ((*p & 0xfe) == _SEQ6) { n = 6; high = (wchar_t)(*p & 0x01); } else { if ((flags & UTF8_IGNORE_ERROR) == 0) return 0; continue; } /* does the sequence header tell us truth about length? 
*/ if ((size_t)(lim - p) <= n - 1) { if ((flags & UTF8_IGNORE_ERROR) == 0) return 0; n = 1; continue; /* skip */ } /* * Validate sequence. * All symbols must have higher bits set to 10xxxxxx */ if (n > 1) { for (i = 1; i < n; i++) { if ((p[i] & 0xc0) != _NXT) break; } if (i != n) { if ((flags & UTF8_IGNORE_ERROR) == 0) return 0; n = 1; continue; /* skip */ } } total++; if (out == NULL) continue; if (out >= wlim) return 0; /* no space left */ *out = 0; n_bits = 0; for (i = 1; i < n; i++) { *out |= (wchar_t)(p[n - i] & 0x3f) << n_bits; n_bits += 6; /* 6 low bits in every byte */ } *out |= high << n_bits; if (wchar_forbidden(*out) != 0) { if ((flags & UTF8_IGNORE_ERROR) == 0) return 0; /* forbidden character */ else { total--; out--; } } else if (*out == _BOM && (flags & UTF8_SKIP_BOM) != 0) { total--; out--; } out++; } return total; } /* * DESCRIPTION * This function translates UCS-4 symbols (given in local machine * byte order) into UTF-8 string. * * It takes the following arguments: * in - input unicode string. It can be null-terminated. * insize - size of input string in wide characters. If insize is 0, * function continues until a null terminator is reaches. * out - result buffer for utf8 string. If out is NULL, * function returns size of result buffer. * outsize - size of result buffer. * * RETURN VALUES * The function returns size of result buffer (in bytes). Zero is returned * in case of error. * * CAVEATS * If UCS-4 string contains zero symbols, they will be translated * as regular symbols. */ size_t wchar_to_utf8(const wchar_t *in, size_t insize, char *out, size_t outsize, int flags) { wchar_t *w, *wlim, ch = 0; unsigned char *p, *lim, *oc; size_t total, n; if (in == NULL || (outsize == 0 && out != NULL)) return 0; w = (wchar_t *)in; wlim = (insize != 0) ? (w + insize) : (wchar_t *)-1; p = (unsigned char *)out; lim = out == NULL ? 
NULL : p + outsize; total = 0; for (; w < wlim; w++) { if (!*w && insize == 0) break; if (wchar_forbidden(*w) != 0) { if ((flags & UTF8_IGNORE_ERROR) == 0) return 0; else continue; } if (*w == _BOM && (flags & UTF8_SKIP_BOM) != 0) continue; if (*w < 0) { if ((flags & UTF8_IGNORE_ERROR) == 0) return 0; continue; } else if (*w <= 0x0000007f) n = 1; else if (*w <= 0x000007ff) n = 2; else if (*w <= 0x0000ffff) n = 3; else if (*w <= 0x001fffff) n = 4; else if (*w <= 0x03ffffff) n = 5; else /* if (*w <= 0x7fffffff) */ n = 6; total += n; if (out == NULL) continue; if ((size_t)(lim - p) <= n - 1) return 0; /* no space left */ ch = *w; oc = (unsigned char *)&ch; switch (n) { case 1: *p = oc[0]; break; case 2: p[1] = _NXT | (oc[0] & 0x3f); p[0] = _SEQ2 | (oc[0] >> 6) | ((oc[1] & 0x07) << 2); break; case 3: p[2] = _NXT | (oc[0] & 0x3f); p[1] = _NXT | (oc[0] >> 6) | ((oc[1] & 0x0f) << 2); p[0] = _SEQ3 | ((oc[1] & 0xf0) >> 4); break; case 4: p[3] = _NXT | (oc[0] & 0x3f); p[2] = _NXT | (oc[0] >> 6) | ((oc[1] & 0x0f) << 2); p[1] = _NXT | ((oc[1] & 0xf0) >> 4) | ((oc[2] & 0x03) << 4); p[0] = _SEQ4 | ((oc[2] & 0x1f) >> 2); break; case 5: p[4] = _NXT | (oc[0] & 0x3f); p[3] = _NXT | (oc[0] >> 6) | ((oc[1] & 0x0f) << 2); p[2] = _NXT | ((oc[1] & 0xf0) >> 4) | ((oc[2] & 0x03) << 4); p[1] = _NXT | (oc[2] >> 2); p[0] = _SEQ5 | (oc[3] & 0x03); break; case 6: p[5] = _NXT | (oc[0] & 0x3f); p[4] = _NXT | (oc[0] >> 6) | ((oc[1] & 0x0f) << 2); p[3] = _NXT | (oc[1] >> 4) | ((oc[2] & 0x03) << 4); p[2] = _NXT | (oc[2] >> 2); p[1] = _NXT | (oc[3] & 0x3f); p[0] = _SEQ6 | ((oc[3] & 0x40) >> 6); break; } /* * NOTE: do not check here for forbidden UTF-8 characters. * They cannot appear here because we do proper conversion. 
 */
		p += n;
	}

	return total;
}
#endif
obs-studio-32.1.0-sources/libobs/util/base.c000644 001751 001751 00000005310 15153330235 021540 0ustar00runnerrunner000000 000000 /*
 * Copyright (c) 2023 Lain Bailey
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#include <stdio.h>
#include <stdlib.h>

#include "c99defs.h"
#include "base.h"

static int crashing = 0;
static void *log_param = NULL;
static void *crash_param = NULL;

static void def_log_handler(int log_level, const char *format, va_list args, void *param)
{
	char out[8192];
	vsnprintf(out, sizeof(out), format, args);

	switch (log_level) {
	case LOG_DEBUG:
		fprintf(stdout, "debug: %s\n", out);
		fflush(stdout);
		break;
	case LOG_INFO:
		fprintf(stdout, "info: %s\n", out);
		fflush(stdout);
		break;
	case LOG_WARNING:
		fprintf(stdout, "warning: %s\n", out);
		fflush(stdout);
		break;
	case LOG_ERROR:
		fprintf(stderr, "error: %s\n", out);
		fflush(stderr);
	}

	UNUSED_PARAMETER(param);
}

OBS_NORETURN static void def_crash_handler(const char *format, va_list args, void *param)
{
	vfprintf(stderr, format, args);
	exit(0);

	UNUSED_PARAMETER(param);
}

static log_handler_t log_handler = def_log_handler;
static void (*crash_handler)(const char *, va_list, void *) = def_crash_handler;

void base_get_log_handler(log_handler_t *handler, void **param)
{
	if (handler)
		*handler = log_handler;
	if (param)
		*param = log_param;
}
void base_set_log_handler(log_handler_t handler, void *param) { if (!handler) handler = def_log_handler; log_param = param; log_handler = handler; } void base_set_crash_handler(void (*handler)(const char *, va_list, void *), void *param) { crash_param = param; crash_handler = handler; } OBS_NORETURN void bcrash(const char *format, ...) { va_list args; if (crashing) { fputs("Crashed in the crash handler", stderr); exit(2); } crashing = 1; va_start(args, format); crash_handler(format, args, crash_param); va_end(args); exit(0); } void blogva(int log_level, const char *format, va_list args) { log_handler(log_level, format, args, log_param); } void blog(int log_level, const char *format, ...) { va_list args; va_start(args, format); blogva(log_level, format, args); va_end(args); } obs-studio-32.1.0-sources/libobs/util/lexer.h000644 001751 001751 00000014631 15153330235 021760 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "c99defs.h" #include "dstr.h" #include "darray.h" #ifdef __cplusplus extern "C" { #endif /* ------------------------------------------------------------------------- */ /* string reference (string segment within an already existing array) */ struct strref { const char *array; size_t len; }; static inline void strref_clear(struct strref *dst) { dst->array = NULL; dst->len = 0; } static inline void strref_set(struct strref *dst, const char *array, size_t len) { dst->array = array; dst->len = len; } static inline void strref_copy(struct strref *dst, const struct strref *src) { dst->array = src->array; dst->len = src->len; } static inline void strref_add(struct strref *dst, const struct strref *t) { if (!dst->array) strref_copy(dst, t); else dst->len += t->len; } static inline bool strref_is_empty(const struct strref *str) { return !str || !str->array || !str->len || !*str->array; } EXPORT int strref_cmp(const struct strref *str1, const char *str2); EXPORT int strref_cmpi(const struct strref *str1, const char *str2); EXPORT int strref_cmp_strref(const struct strref *str1, const struct strref *str2); EXPORT int strref_cmpi_strref(const struct strref *str1, const struct strref *str2); /* ------------------------------------------------------------------------- */ EXPORT bool valid_int_str(const char *str, size_t n); EXPORT bool valid_float_str(const char *str, size_t n); static inline bool valid_int_strref(const struct strref *str) { return valid_int_str(str->array, str->len); } static inline bool valid_float_strref(const struct strref *str) { return valid_float_str(str->array, str->len); } static inline bool is_whitespace(char ch) { return ch == ' ' || ch == '\r' || ch == '\t' || ch == '\n'; } static inline bool is_newline(char ch) { return ch == '\r' || ch == '\n'; } static inline bool is_space_or_tab(const char ch) { return ch == ' ' || ch == '\t'; } static inline bool is_newline_pair(char ch1, char ch2) { return (ch1 == '\r' && ch2 == '\n') 
|| (ch1 == '\n' && ch2 == '\r'); } static inline int newline_size(const char *array) { if (strncmp(array, "\r\n", 2) == 0 || strncmp(array, "\n\r", 2) == 0) return 2; else if (*array == '\r' || *array == '\n') return 1; return 0; } /* ------------------------------------------------------------------------- */ /* * A "base" token is one of four things: * 1.) A sequence of alpha characters * 2.) A sequence of numeric characters * 3.) A single whitespace character if whitespace is not ignored * 4.) A single character that does not fall into the above 3 categories */ enum base_token_type { BASETOKEN_NONE, BASETOKEN_ALPHA, BASETOKEN_DIGIT, BASETOKEN_WHITESPACE, BASETOKEN_OTHER, }; struct base_token { struct strref text; enum base_token_type type; bool passed_whitespace; }; static inline void base_token_clear(struct base_token *t) { memset(t, 0, sizeof(struct base_token)); } static inline void base_token_copy(struct base_token *dst, struct base_token *src) { memcpy(dst, src, sizeof(struct base_token)); } /* ------------------------------------------------------------------------- */ #define LEX_ERROR 0 #define LEX_WARNING 1 struct error_item { char *error; const char *file; uint32_t row, column; int level; }; static inline void error_item_init(struct error_item *ei) { memset(ei, 0, sizeof(struct error_item)); } static inline void error_item_free(struct error_item *ei) { bfree(ei->error); error_item_init(ei); } static inline void error_item_array_free(struct error_item *array, size_t num) { size_t i; for (i = 0; i < num; i++) error_item_free(array + i); } /* ------------------------------------------------------------------------- */ struct error_data { DARRAY(struct error_item) errors; }; static inline void error_data_init(struct error_data *data) { da_init(data->errors); } static inline void error_data_free(struct error_data *data) { error_item_array_free(data->errors.array, data->errors.num); da_free(data->errors); } static inline const struct error_item 
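// Typical error_data flow using the API above; the file name, position and
// message are illustrative placeholders:
//
//   struct error_data ed;
//   error_data_init(&ed);
//   error_data_add(&ed, "example.effect", 12, 4, "unexpected token", LEX_ERROR);
//   if (error_data_has_errors(&ed)) {
//       char *text = error_data_buildstring(&ed);
//       // report text, then release it with bfree(text);
//   }
//   error_data_free(&ed);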
*error_data_item(struct error_data *ed, size_t idx) { return ed->errors.array + idx; } EXPORT char *error_data_buildstring(struct error_data *ed); EXPORT void error_data_add(struct error_data *ed, const char *file, uint32_t row, uint32_t column, const char *msg, int level); static inline size_t error_data_type_count(struct error_data *ed, int type) { size_t count = 0, i; for (i = 0; i < ed->errors.num; i++) { if (ed->errors.array[i].level == type) count++; } return count; } static inline bool error_data_has_errors(struct error_data *ed) { size_t i; for (i = 0; i < ed->errors.num; i++) if (ed->errors.array[i].level == LEX_ERROR) return true; return false; } /* ------------------------------------------------------------------------- */ struct lexer { char *text; const char *offset; }; static inline void lexer_init(struct lexer *lex) { memset(lex, 0, sizeof(struct lexer)); } static inline void lexer_free(struct lexer *lex) { bfree(lex->text); lexer_init(lex); } static inline void lexer_start(struct lexer *lex, const char *text) { lexer_free(lex); lex->text = bstrdup(text); lex->offset = lex->text; } static inline void lexer_start_move(struct lexer *lex, char *text) { lexer_free(lex); lex->text = text; lex->offset = lex->text; } static inline void lexer_reset(struct lexer *lex) { lex->offset = lex->text; } enum ignore_whitespace { PARSE_WHITESPACE, IGNORE_WHITESPACE }; EXPORT bool lexer_getbasetoken(struct lexer *lex, struct base_token *t, enum ignore_whitespace iws); EXPORT void lexer_getstroffset(const struct lexer *lex, const char *str, uint32_t *row, uint32_t *col); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/threading-posix.c000644 001751 001751 00000012444 15153330235 023741 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear 
in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #if defined(__APPLE__) || defined(__MINGW32__) #include <sys/time.h> #endif #ifdef __APPLE__ #include <mach/mach.h> #include <mach/semaphore.h> #include <mach/task.h> #else #define _GNU_SOURCE #include <semaphore.h> #endif #if defined(__FreeBSD__) #include <pthread_np.h> #endif #include "bmem.h" #include "threading.h" struct os_event_data { pthread_mutex_t mutex; pthread_cond_t cond; volatile bool signalled; bool manual; }; int os_event_init(os_event_t **event, enum os_event_type type) { int code = 0; struct os_event_data *data = bzalloc(sizeof(struct os_event_data)); if ((code = pthread_mutex_init(&data->mutex, NULL)) < 0) { bfree(data); return code; } if ((code = pthread_cond_init(&data->cond, NULL)) < 0) { pthread_mutex_destroy(&data->mutex); bfree(data); return code; } data->manual = (type == OS_EVENT_TYPE_MANUAL); data->signalled = false; *event = data; return 0; } void os_event_destroy(os_event_t *event) { if (event) { pthread_mutex_destroy(&event->mutex); pthread_cond_destroy(&event->cond); bfree(event); } } int os_event_wait(os_event_t *event) { int code = 0; pthread_mutex_lock(&event->mutex); while (!event->signalled) { code = pthread_cond_wait(&event->cond, &event->mutex); if (code != 0) break; } if (code == 0) { if (!event->manual) event->signalled = false; } pthread_mutex_unlock(&event->mutex); return code; } static inline void add_ms_to_ts(struct timespec *ts, unsigned long milliseconds) { ts->tv_sec += milliseconds / 1000; ts->tv_nsec += (milliseconds % 1000) * 1000000; if (ts->tv_nsec >= 1000000000) { ts->tv_sec += 1;
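// Worked example of the carry above: for ts = {tv_sec = 1, tv_nsec = 999999999}
// and milliseconds = 1, tv_nsec becomes 1000999999, which normalizes to
// ts = {tv_sec = 2, tv_nsec = 999999}.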
ts->tv_nsec -= 1000000000; } } int os_event_timedwait(os_event_t *event, unsigned long milliseconds) { int code = 0; pthread_mutex_lock(&event->mutex); while (!event->signalled) { struct timespec ts; #if defined(__APPLE__) || defined(__MINGW32__) struct timeval tv; gettimeofday(&tv, NULL); ts.tv_sec = tv.tv_sec; ts.tv_nsec = tv.tv_usec * 1000; #else clock_gettime(CLOCK_REALTIME, &ts); #endif add_ms_to_ts(&ts, milliseconds); code = pthread_cond_timedwait(&event->cond, &event->mutex, &ts); if (code != 0) break; } if (code == 0) { if (!event->manual) event->signalled = false; } pthread_mutex_unlock(&event->mutex); return code; } int os_event_try(os_event_t *event) { int ret = EAGAIN; pthread_mutex_lock(&event->mutex); if (event->signalled) { if (!event->manual) event->signalled = false; ret = 0; } pthread_mutex_unlock(&event->mutex); return ret; } int os_event_signal(os_event_t *event) { int code = 0; pthread_mutex_lock(&event->mutex); code = pthread_cond_signal(&event->cond); event->signalled = true; pthread_mutex_unlock(&event->mutex); return code; } void os_event_reset(os_event_t *event) { pthread_mutex_lock(&event->mutex); event->signalled = false; pthread_mutex_unlock(&event->mutex); } #ifdef __APPLE__ struct os_sem_data { semaphore_t sem; task_t task; }; int os_sem_init(os_sem_t **sem, int value) { semaphore_t new_sem; task_t task = mach_task_self(); if (semaphore_create(task, &new_sem, 0, value) != KERN_SUCCESS) return -1; *sem = bzalloc(sizeof(struct os_sem_data)); if (!*sem) return -2; (*sem)->sem = new_sem; (*sem)->task = task; return 0; } void os_sem_destroy(os_sem_t *sem) { if (sem) { semaphore_destroy(sem->task, sem->sem); bfree(sem); } } int os_sem_post(os_sem_t *sem) { if (!sem) return -1; return (semaphore_signal(sem->sem) == KERN_SUCCESS) ? 0 : -1; } int os_sem_wait(os_sem_t *sem) { if (!sem) return -1; return (semaphore_wait(sem->sem) == KERN_SUCCESS) ? 
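// Usage sketch for the os_sem wrappers declared in threading.h; the call
// sites are illustrative:
//
//   os_sem_t *sem;
//   if (os_sem_init(&sem, 0) == 0) {
//       os_sem_post(sem);   // producer side: release one waiter
//       os_sem_wait(sem);   // consumer side: block until posted
//       os_sem_destroy(sem);
//   }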
0 : -1; } #else struct os_sem_data { sem_t sem; }; int os_sem_init(os_sem_t **sem, int value) { sem_t new_sem; int ret = sem_init(&new_sem, 0, value); if (ret != 0) return ret; *sem = bzalloc(sizeof(struct os_sem_data)); (*sem)->sem = new_sem; return 0; } void os_sem_destroy(os_sem_t *sem) { if (sem) { sem_destroy(&sem->sem); bfree(sem); } } int os_sem_post(os_sem_t *sem) { if (!sem) return -1; return sem_post(&sem->sem); } int os_sem_wait(os_sem_t *sem) { if (!sem) return -1; return sem_wait(&sem->sem); } #endif void os_set_thread_name(const char *name) { #if defined(__APPLE__) pthread_setname_np(name); #elif defined(__FreeBSD__) pthread_set_name_np(pthread_self(), name); #elif defined(__GLIBC__) && !defined(__MINGW32__) if (strlen(name) <= 15) { pthread_setname_np(pthread_self(), name); } else { char *thread_name = bstrdup_n(name, 15); pthread_setname_np(pthread_self(), thread_name); bfree(thread_name); } #endif } obs-studio-32.1.0-sources/libobs/util/profiler.h000644 001751 001751 00000007226 15153330235 022465 0ustar00runnerrunner000000 000000 #pragma once #include "base.h" #include "darray.h" #ifdef __cplusplus extern "C" { #endif typedef struct profiler_snapshot profiler_snapshot_t; typedef struct profiler_snapshot_entry profiler_snapshot_entry_t; typedef struct profiler_time_entry profiler_time_entry_t; /* ------------------------------------------------------------------------- */ /* Profiling */ EXPORT void profile_register_root(const char *name, uint64_t expected_time_between_calls); EXPORT void profile_start(const char *name); EXPORT void profile_end(const char *name); EXPORT void profile_reenable_thread(void); /* ------------------------------------------------------------------------- */ /* Profiler control */ EXPORT void profiler_start(void); EXPORT void profiler_stop(void); EXPORT void profiler_print(profiler_snapshot_t *snap); EXPORT void profiler_print_time_between_calls(profiler_snapshot_t *snap); EXPORT void profiler_free(void); /* 
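// Usage sketch for the profiling API above; the root name is illustrative,
// and the expected-interval value assumes nanosecond units:
//
//   profile_register_root("frame_tick", 16666667);  // ~60 Hz interval
//   profile_start("frame_tick");
//   // ... per-frame work ...
//   profile_end("frame_tick");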
------------------------------------------------------------------------- */ /* Profiler name storage */ typedef struct profiler_name_store profiler_name_store_t; EXPORT profiler_name_store_t *profiler_name_store_create(void); EXPORT void profiler_name_store_free(profiler_name_store_t *store); #ifndef _MSC_VER #define PRINTFATTR(f, a) __attribute__((__format__(__printf__, f, a))) #else #define PRINTFATTR(f, a) #endif PRINTFATTR(2, 3) EXPORT const char *profile_store_name(profiler_name_store_t *store, const char *format, ...); #undef PRINTFATTR /* ------------------------------------------------------------------------- */ /* Profiler data access */ struct profiler_time_entry { uint64_t time_delta; uint64_t count; }; typedef DARRAY(profiler_time_entry_t) profiler_time_entries_t; typedef bool (*profiler_entry_enum_func)(void *context, profiler_snapshot_entry_t *entry); EXPORT profiler_snapshot_t *profile_snapshot_create(void); EXPORT void profile_snapshot_free(profiler_snapshot_t *snap); EXPORT bool profiler_snapshot_dump_csv(const profiler_snapshot_t *snap, const char *filename); EXPORT bool profiler_snapshot_dump_csv_gz(const profiler_snapshot_t *snap, const char *filename); EXPORT size_t profiler_snapshot_num_roots(profiler_snapshot_t *snap); EXPORT void profiler_snapshot_enumerate_roots(profiler_snapshot_t *snap, profiler_entry_enum_func func, void *context); typedef bool (*profiler_name_filter_func)(void *data, const char *name, bool *remove); EXPORT void profiler_snapshot_filter_roots(profiler_snapshot_t *snap, profiler_name_filter_func func, void *data); EXPORT size_t profiler_snapshot_num_children(profiler_snapshot_entry_t *entry); EXPORT void profiler_snapshot_enumerate_children(profiler_snapshot_entry_t *entry, profiler_entry_enum_func func, void *context); EXPORT const char *profiler_snapshot_entry_name(profiler_snapshot_entry_t *entry); EXPORT profiler_time_entries_t *profiler_snapshot_entry_times(profiler_snapshot_entry_t *entry); EXPORT uint64_t 
profiler_snapshot_entry_min_time(profiler_snapshot_entry_t *entry); EXPORT uint64_t profiler_snapshot_entry_max_time(profiler_snapshot_entry_t *entry); EXPORT uint64_t profiler_snapshot_entry_overall_count(profiler_snapshot_entry_t *entry); EXPORT profiler_time_entries_t *profiler_snapshot_entry_times_between_calls(profiler_snapshot_entry_t *entry); EXPORT uint64_t profiler_snapshot_entry_expected_time_between_calls(profiler_snapshot_entry_t *entry); EXPORT uint64_t profiler_snapshot_entry_min_time_between_calls(profiler_snapshot_entry_t *entry); EXPORT uint64_t profiler_snapshot_entry_max_time_between_calls(profiler_snapshot_entry_t *entry); EXPORT uint64_t profiler_snapshot_entry_overall_between_calls_count(profiler_snapshot_entry_t *entry); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/cf-lexer.h000644 001751 001751 00000013101 15153330235 022335 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "lexer.h" #ifdef __cplusplus extern "C" { #endif EXPORT char *cf_literal_to_str(const char *literal, size_t count); /* ------------------------------------------------------------------------- */ /* * A C-family lexer token is defined as: * 1.) A generic 'name' token. (abc123_def456) * 2.) 
A numeric sequence (usually starting with a number) * 3.) A sequence of generic whitespace defined as spaces and tabs * 4.) A newline * 5.) A string or character sequence (surrounded by single or double quotes) * 6.) A single character of a type not specified above */ enum cf_token_type { CFTOKEN_NONE, CFTOKEN_NAME, CFTOKEN_NUM, CFTOKEN_SPACETAB, CFTOKEN_NEWLINE, CFTOKEN_STRING, CFTOKEN_OTHER }; struct cf_token { const struct cf_lexer *lex; struct strref str; struct strref unmerged_str; enum cf_token_type type; }; typedef DARRAY(struct cf_token) cf_token_array_t; static inline void cf_token_clear(struct cf_token *t) { memset(t, 0, sizeof(struct cf_token)); } static inline void cf_token_copy(struct cf_token *dst, const struct cf_token *src) { memcpy(dst, src, sizeof(struct cf_token)); } static inline void cf_token_add(struct cf_token *dst, const struct cf_token *add) { strref_add(&dst->str, &add->str); strref_add(&dst->unmerged_str, &add->unmerged_str); } /* ------------------------------------------------------------------------- */ /* * The c-family lexer is a base lexer for generating a list of string * reference tokens to be used with c-style languages. * * This base lexer is meant to be used as a stepping stone for an actual * language lexer/parser. * * It reformats the text in the two following ways: * 1.) Spliced lines (escaped newlines) are merged * 2.) 
All comments are converted to a single space */ struct cf_lexer { char *file; struct lexer base_lexer; char *reformatted, *write_offset; cf_token_array_t tokens; bool unexpected_eof; /* unexpected multi-line comment eof */ }; EXPORT void cf_lexer_init(struct cf_lexer *lex); EXPORT void cf_lexer_free(struct cf_lexer *lex); static inline struct cf_token *cf_lexer_get_tokens(struct cf_lexer *lex) { return lex->tokens.array; } EXPORT bool cf_lexer_lex(struct cf_lexer *lex, const char *str, const char *file); /* ------------------------------------------------------------------------- */ /* c-family preprocessor definition */ struct cf_def { struct cf_token name; cf_token_array_t params; cf_token_array_t tokens; bool macro; }; static inline void cf_def_init(struct cf_def *cfd) { cf_token_clear(&cfd->name); da_init(cfd->params); da_init(cfd->tokens); cfd->macro = false; } static inline void cf_def_addparam(struct cf_def *cfd, struct cf_token *param) { da_push_back(cfd->params, param); } static inline void cf_def_addtoken(struct cf_def *cfd, struct cf_token *token) { da_push_back(cfd->tokens, token); } static inline struct cf_token *cf_def_getparam(const struct cf_def *cfd, size_t idx) { return cfd->params.array + idx; } static inline void cf_def_free(struct cf_def *cfd) { cf_token_clear(&cfd->name); da_free(cfd->params); da_free(cfd->tokens); } /* ------------------------------------------------------------------------- */ /* * C-family preprocessor * * This preprocessor allows for standard c-style preprocessor directives * to be applied to source text, such as: * * + #include * + #define/#undef * + #ifdef/#ifndef/#if/#elif/#else/#endif * * Still left to implement (TODO): * + #if/#elif * + "defined" preprocessor keyword * + system includes * + variadic macros * + custom callbacks (for things like pragma) * + option to exclude features such as #import, variadic macros, and other * features for certain language implementations * + macro parameter string operator # * + 
macro parameter token concatenation operator ## * + restricted macros */ struct cf_preprocessor { struct cf_lexer *lex; struct error_data *ed; DARRAY(struct cf_def) defines; DARRAY(char *) sys_include_dirs; DARRAY(struct cf_lexer) dependencies; cf_token_array_t tokens; bool ignore_state; }; EXPORT void cf_preprocessor_init(struct cf_preprocessor *pp); EXPORT void cf_preprocessor_free(struct cf_preprocessor *pp); EXPORT bool cf_preprocess(struct cf_preprocessor *pp, struct cf_lexer *lex, struct error_data *ed); static inline void cf_preprocessor_add_sys_include_dir(struct cf_preprocessor *pp, const char *include_dir) { char *str = bstrdup(include_dir); if (include_dir) da_push_back(pp->sys_include_dirs, &str); } EXPORT void cf_preprocessor_add_def(struct cf_preprocessor *pp, struct cf_def *def); EXPORT void cf_preprocessor_remove_def(struct cf_preprocessor *pp, const char *def_name); static inline struct cf_token *cf_preprocessor_get_tokens(struct cf_preprocessor *pp) { return pp->tokens.array; } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/util_uint128.h000644 001751 001751 00000004762 15153330235 023114 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once struct util_uint128 { union { uint32_t i32[4]; struct { uint64_t low; uint64_t high; }; }; }; typedef struct util_uint128 util_uint128_t; static inline util_uint128_t util_add128(util_uint128_t a, util_uint128_t b) { util_uint128_t out; uint64_t val; val = (a.low & 0xFFFFFFFFULL) + (b.low & 0xFFFFFFFFULL); out.i32[0] = (uint32_t)(val & 0xFFFFFFFFULL); val >>= 32; val += (a.low >> 32) + (b.low >> 32); out.i32[1] = (uint32_t)val; val >>= 32; val += (a.high & 0xFFFFFFFFULL) + (b.high & 0xFFFFFFFFULL); out.i32[2] = (uint32_t)(val & 0xFFFFFFFFULL); val >>= 32; val += (a.high >> 32) + (b.high >> 32); out.i32[3] = (uint32_t)val; return out; } static inline util_uint128_t util_lshift64_internal_32(uint64_t a) { util_uint128_t val; val.low = a << 32; val.high = a >> 32; return val; } static inline util_uint128_t util_lshift64_internal_64(uint64_t a) { util_uint128_t val; val.low = 0; val.high = a; return val; } static inline util_uint128_t util_mul64_64(uint64_t a, uint64_t b) { util_uint128_t out; uint64_t m; m = (a & 0xFFFFFFFFULL) * (b & 0xFFFFFFFFULL); out.low = m; out.high = 0; m = (a >> 32) * (b & 0xFFFFFFFFULL); out = util_add128(out, util_lshift64_internal_32(m)); m = (a & 0xFFFFFFFFULL) * (b >> 32); out = util_add128(out, util_lshift64_internal_32(m)); m = (a >> 32) * (b >> 32); out = util_add128(out, util_lshift64_internal_64(m)); return out; } static inline util_uint128_t util_div128_32(util_uint128_t a, uint32_t b) { util_uint128_t out; uint64_t val = 0; for (int i = 3; i >= 0; i--) { val = (val << 32) | a.i32[i]; if (val < b) { out.i32[i] = 0; continue; } out.i32[i] = (uint32_t)(val / b); val = val % b; } return out; } obs-studio-32.1.0-sources/libobs/util/sse-intrin.h000644 001751 001751 00000002502 15153330235 022726 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2019 by Peter Geis This program is free software: you can redistribute it and/or modify it under the 
terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #pragma once #include "c99defs.h" #if (defined(_MSC_VER) || defined(__MINGW32__)) && ((defined(_M_X64) && !defined(_M_ARM64EC)) || defined(_M_IX86)) #include <emmintrin.h> #else #ifndef WIN32_LEAN_AND_MEAN #define WIN32_LEAN_AND_MEAN #endif #if defined(_MSC_VER) && defined(__cplusplus) #include <intrin.h> #endif #if defined(__APPLE__) #include #endif #define SIMDE_ENABLE_NATIVE_ALIASES PRAGMA_WARN_PUSH #include <simde/x86/sse2.h> PRAGMA_WARN_POP #endif obs-studio-32.1.0-sources/libobs/util/bitstream.h000644 001751 001751 00000001177 15153330235 022634 0ustar00runnerrunner000000 000000 #pragma once #include "c99defs.h" /* * General programmable serialization functions.
(A shared interface to * various reading/writing to/from different inputs/outputs) */ #ifdef __cplusplus extern "C" { #endif struct bitstream_reader { uint8_t pos; uint8_t subPos; uint8_t *buf; size_t len; }; EXPORT void bitstream_reader_init(struct bitstream_reader *r, uint8_t *data, size_t len); EXPORT uint8_t bitstream_reader_read_bits(struct bitstream_reader *r, int bits); EXPORT uint8_t bitstream_reader_r8(struct bitstream_reader *r); EXPORT uint16_t bitstream_reader_r16(struct bitstream_reader *r); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/text-lookup.h000644 001751 001751 00000002707 15153330235 023135 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once /* * Text Lookup interface * * Used for storing and looking up localized strings. Stores localization * strings in a hashmap to efficiently look up associated strings via a * unique string identifier name. 
*/ #include "c99defs.h" #ifdef __cplusplus extern "C" { #endif /* opaque typedef */ struct text_lookup; typedef struct text_lookup lookup_t; /* functions */ EXPORT lookup_t *text_lookup_create(const char *path); EXPORT bool text_lookup_add(lookup_t *lookup, const char *path); EXPORT void text_lookup_destroy(lookup_t *lookup); EXPORT bool text_lookup_getstr(lookup_t *lookup, const char *lookup_val, const char **out); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/c99defs.h000644 001751 001751 00000005254 15153330235 022110 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once /* * Contains hacks for getting some C99 stuff working in VC, things like * bool, stdint */ #define UNUSED_PARAMETER(param) (void)param #ifdef _MSC_VER #define _OBS_DEPRECATED __declspec(deprecated) #define OBS_NORETURN __declspec(noreturn) #define FORCE_INLINE __forceinline #else #define _OBS_DEPRECATED __attribute__((deprecated)) #define OBS_NORETURN __attribute__((noreturn)) #define FORCE_INLINE inline __attribute__((always_inline)) #endif #if defined(SWIG_TYPE_TABLE) #define OBS_DEPRECATED #else #define OBS_DEPRECATED _OBS_DEPRECATED #endif #if defined(IS_LIBOBS) #define OBS_EXTERNAL_DEPRECATED #else #define OBS_EXTERNAL_DEPRECATED OBS_DEPRECATED #endif #ifdef _MSC_VER #define EXPORT __declspec(dllexport) #else #define EXPORT __attribute__((visibility("default"))) #endif #ifdef _MSC_VER #define PRAGMA_WARN_PUSH _Pragma("warning(push)") #define PRAGMA_WARN_POP _Pragma("warning(pop)") #define PRAGMA_WARN_DEPRECATION _Pragma("warning(disable: 4996)") #define PRAGMA_DISABLE_DEPRECATION _Pragma("warning(disable: 4996)") #elif defined(__clang__) #define PRAGMA_WARN_PUSH _Pragma("clang diagnostic push") #define PRAGMA_WARN_POP _Pragma("clang diagnostic pop") #define PRAGMA_WARN_DEPRECATION _Pragma("clang diagnostic warning \"-Wdeprecated-declarations\"") #define PRAGMA_DISABLE_DEPRECATION _Pragma("clang diagnostic ignored \"-Wdeprecated-declarations\"") #elif defined(__GNUC__) #define PRAGMA_WARN_PUSH _Pragma("GCC diagnostic push") #define PRAGMA_WARN_POP _Pragma("GCC diagnostic pop") #define PRAGMA_WARN_DEPRECATION _Pragma("GCC diagnostic warning \"-Wdeprecated-declarations\"") #define PRAGMA_DISABLE_DEPRECATION _Pragma("GCC diagnostic ignored \"-Wdeprecated-declarations\"") #else #define PRAGMA_WARN_PUSH #define PRAGMA_WARN_POP #define PRAGMA_WARN_DEPRECATION #define PRAGMA_DISABLE_DEPRECATION #endif #include <stdbool.h> #include <stdint.h> #include <stdlib.h> #include <stddef.h> obs-studio-32.1.0-sources/libobs/util/crc32.h000644 001751 001751 00000001713 15153330235 021552
0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "c99defs.h" #ifdef __cplusplus extern "C" { #endif EXPORT uint32_t calc_crc32(uint32_t crc, const void *buf, size_t size); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/pipe.c000644 001751 001751 00000004246 15153330235 021572 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Dennis Sädtler * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #include "pipe.h" #include "darray.h" #include "dstr.h" struct os_process_args { DARRAY(char *) arguments; }; struct os_process_args *os_process_args_create(const char *executable) { struct os_process_args *args = bzalloc(sizeof(struct os_process_args)); char *str = bstrdup(executable); da_push_back(args->arguments, &str); /* Last item in argv must be NULL. */ char *terminator = NULL; da_push_back(args->arguments, &terminator); return args; } void os_process_args_add_arg(struct os_process_args *args, const char *arg) { char *str = bstrdup(arg); /* Insert before NULL list terminator. */ da_insert(args->arguments, args->arguments.num - 1, &str); } void os_process_args_add_argf(struct os_process_args *args, const char *format, ...) { va_list va_args; struct dstr tmp = {0}; va_start(va_args, format); dstr_vprintf(&tmp, format, va_args); da_insert(args->arguments, args->arguments.num - 1, &tmp.array); va_end(va_args); } size_t os_process_args_get_argc(struct os_process_args *args) { /* Do not count terminating NULL. 
*/ return args->arguments.num - 1; } char **os_process_args_get_argv(const struct os_process_args *args) { return args->arguments.array; } void os_process_args_destroy(struct os_process_args *args) { if (!args) return; for (size_t idx = 0; idx < args->arguments.num; idx++) bfree(args->arguments.array[idx]); da_free(args->arguments); bfree(args); } obs-studio-32.1.0-sources/libobs/util/profiler.c000644 001751 001751 00000066720 15153330235 022464 0ustar00runnerrunner000000 000000 #include <inttypes.h> #include "profiler.h" #include "darray.h" #include "dstr.h" #include "platform.h" #include "threading.h" #include <math.h> #include <zlib.h> //#define TRACK_OVERHEAD struct profiler_snapshot { DARRAY(profiler_snapshot_entry_t) roots; }; struct profiler_snapshot_entry { const char *name; profiler_time_entries_t times; uint64_t min_time; uint64_t max_time; uint64_t overall_count; profiler_time_entries_t times_between_calls; uint64_t expected_time_between_calls; uint64_t min_time_between_calls; uint64_t max_time_between_calls; uint64_t overall_between_calls_count; DARRAY(profiler_snapshot_entry_t) children; }; typedef struct profiler_time_entry profiler_time_entry; typedef struct profile_call profile_call; struct profile_call { const char *name; #ifdef TRACK_OVERHEAD uint64_t overhead_start; #endif uint64_t start_time; uint64_t end_time; #ifdef TRACK_OVERHEAD uint64_t overhead_end; #endif uint64_t expected_time_between_calls; DARRAY(profile_call) children; profile_call *parent; }; typedef struct profile_times_table_entry profile_times_table_entry; struct profile_times_table_entry { size_t probes; profiler_time_entry entry; }; typedef struct profile_times_table profile_times_table; struct profile_times_table { size_t size; size_t occupied; size_t max_probe_count; profile_times_table_entry *entries; size_t old_start_index; size_t old_occupied; profile_times_table_entry *old_entries; }; typedef struct profile_entry profile_entry; struct profile_entry { const char *name; profile_times_table times; #ifdef
TRACK_OVERHEAD profile_times_table overhead; #endif uint64_t expected_time_between_calls; profile_times_table times_between_calls; DARRAY(profile_entry) children; }; typedef struct profile_root_entry profile_root_entry; struct profile_root_entry { pthread_mutex_t *mutex; const char *name; profile_entry *entry; profile_call *prev_call; }; static inline uint64_t diff_ns_to_usec(uint64_t prev, uint64_t next) { return (next - prev + 500) / 1000; } static inline void update_max_probes(profile_times_table *map, size_t val) { map->max_probe_count = map->max_probe_count < val ? val : map->max_probe_count; } static void migrate_old_entries(profile_times_table *map, bool limit_items); static void grow_hashmap(profile_times_table *map, uint64_t usec, uint64_t count); static void add_hashmap_entry(profile_times_table *map, uint64_t usec, uint64_t count) { size_t probes = 1; size_t start = usec % map->size; for (;; probes += 1) { size_t idx = (start + probes) % map->size; profile_times_table_entry *entry = &map->entries[idx]; if (!entry->probes) { entry->probes = probes; entry->entry.time_delta = usec; entry->entry.count = count; map->occupied += 1; update_max_probes(map, probes); return; } if (entry->entry.time_delta == usec) { entry->entry.count += count; return; } if (entry->probes >= probes) continue; if (map->occupied / (double)map->size > 0.7) { grow_hashmap(map, usec, count); return; } size_t old_probes = entry->probes; uint64_t old_count = entry->entry.count; uint64_t old_usec = entry->entry.time_delta; entry->probes = probes; entry->entry.count = count; entry->entry.time_delta = usec; update_max_probes(map, probes); probes = old_probes; count = old_count; usec = old_usec; start = usec % map->size; } } static void init_hashmap(profile_times_table *map, size_t size) { map->size = size; map->occupied = 0; map->max_probe_count = 0; map->entries = bzalloc(sizeof(profile_times_table_entry) * size); map->old_start_index = 0; map->old_occupied = 0; map->old_entries = NULL; } 
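The table that `init_hashmap` and `add_hashmap_entry` manage above is an open-addressed hash map keyed on the microsecond time delta, where re-observing the same delta just increments a per-slot call count. A minimal, self-contained sketch of that idea (hypothetical `sketch_*` names; fixed table size, and without the real code's probe-count displacement, 70% load check, and `grow_hashmap` migration) might look like:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified sketch of profile_times_table: open addressing keyed
 * on a microsecond delta; colliding slots are probed linearly and a
 * repeated key accumulates a count instead of storing a new entry. */
#define SKETCH_SIZE 64

struct sketch_entry {
	size_t probes;       /* 0 means the slot is empty */
	uint64_t time_delta; /* key: duration in microseconds */
	uint64_t count;      /* value: calls observed with this duration */
};

struct sketch_table {
	struct sketch_entry entries[SKETCH_SIZE];
};

/* Record one observation of `usec`; returns the slot used, or -1 if full. */
static int sketch_add(struct sketch_table *t, uint64_t usec)
{
	size_t start = usec % SKETCH_SIZE;
	for (size_t probes = 1; probes <= SKETCH_SIZE; probes++) {
		size_t idx = (start + probes) % SKETCH_SIZE;
		struct sketch_entry *e = &t->entries[idx];
		if (!e->probes) { /* empty slot: claim it */
			e->probes = probes;
			e->time_delta = usec;
			e->count = 1;
			return (int)idx;
		}
		if (e->time_delta == usec) { /* same delta: bump count */
			e->count += 1;
			return (int)idx;
		}
	}
	return -1;
}

/* Look up how many times `usec` was recorded (0 if never). */
static uint64_t sketch_count(const struct sketch_table *t, uint64_t usec)
{
	size_t start = usec % SKETCH_SIZE;
	for (size_t probes = 1; probes <= SKETCH_SIZE; probes++) {
		const struct sketch_entry *e =
			&t->entries[(start + probes) % SKETCH_SIZE];
		if (!e->probes)
			return 0;
		if (e->time_delta == usec)
			return e->count;
	}
	return 0;
}
```

The real table additionally stores each slot's probe distance so richer entries can displace poorer ones, and once occupancy crosses 70% it reallocates and migrates old entries incrementally (`migrate_old_entries`) to bound the cost of any single insertion.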
static void migrate_old_entries(profile_times_table *map, bool limit_items) { if (!map->old_entries) return; if (!map->old_occupied) { bfree(map->old_entries); map->old_entries = NULL; return; } for (size_t i = 0; !limit_items || i < 8; i++, map->old_start_index++) { if (!map->old_occupied) return; profile_times_table_entry *entry = &map->old_entries[map->old_start_index]; if (!entry->probes) continue; add_hashmap_entry(map, entry->entry.time_delta, entry->entry.count); map->old_occupied -= 1; } } static void grow_hashmap(profile_times_table *map, uint64_t usec, uint64_t count) { migrate_old_entries(map, false); size_t old_size = map->size; size_t old_occupied = map->occupied; profile_times_table_entry *entries = map->entries; init_hashmap(map, (old_size * 2 < 16) ? 16 : (old_size * 2)); map->old_occupied = old_occupied; map->old_entries = entries; add_hashmap_entry(map, usec, count); } static profile_entry *init_entry(profile_entry *entry, const char *name) { entry->name = name; init_hashmap(&entry->times, 1); #ifdef TRACK_OVERHEAD init_hashmap(&entry->overhead, 1); #endif entry->expected_time_between_calls = 0; init_hashmap(&entry->times_between_calls, 1); return entry; } static profile_entry *get_child(profile_entry *parent, const char *name) { const size_t num = parent->children.num; for (size_t i = 0; i < num; i++) { profile_entry *child = &parent->children.array[i]; if (child->name == name) return child; } return init_entry(da_push_back_new(parent->children), name); } static void merge_call(profile_entry *entry, profile_call *call, profile_call *prev_call) { const size_t num = call->children.num; for (size_t i = 0; i < num; i++) { profile_call *child = &call->children.array[i]; merge_call(get_child(entry, child->name), child, NULL); } if (entry->expected_time_between_calls != 0 && prev_call) { migrate_old_entries(&entry->times_between_calls, true); uint64_t usec = diff_ns_to_usec(prev_call->start_time, call->start_time); 
add_hashmap_entry(&entry->times_between_calls, usec, 1); } migrate_old_entries(&entry->times, true); uint64_t usec = diff_ns_to_usec(call->start_time, call->end_time); add_hashmap_entry(&entry->times, usec, 1); #ifdef TRACK_OVERHEAD migrate_old_entries(&entry->overhead, true); usec = diff_ns_to_usec(call->overhead_start, call->start_time); usec += diff_ns_to_usec(call->end_time, call->overhead_end); add_hashmap_entry(&entry->overhead, usec, 1); #endif } static bool enabled = false; static pthread_mutex_t root_mutex = PTHREAD_MUTEX_INITIALIZER; static DARRAY(profile_root_entry) root_entries; static THREAD_LOCAL profile_call *thread_context = NULL; static THREAD_LOCAL bool thread_enabled = true; void profiler_start(void) { pthread_mutex_lock(&root_mutex); enabled = true; pthread_mutex_unlock(&root_mutex); } void profiler_stop(void) { pthread_mutex_lock(&root_mutex); enabled = false; pthread_mutex_unlock(&root_mutex); } void profile_reenable_thread(void) { if (thread_enabled) return; pthread_mutex_lock(&root_mutex); thread_enabled = enabled; pthread_mutex_unlock(&root_mutex); } static bool lock_root(void) { pthread_mutex_lock(&root_mutex); if (!enabled) { pthread_mutex_unlock(&root_mutex); thread_enabled = false; return false; } return true; } static profile_root_entry *get_root_entry(const char *name) { profile_root_entry *r_entry = NULL; for (size_t i = 0; i < root_entries.num; i++) { if (root_entries.array[i].name == name) { r_entry = &root_entries.array[i]; break; } } if (!r_entry) { r_entry = da_push_back_new(root_entries); r_entry->mutex = bmalloc(sizeof(pthread_mutex_t)); pthread_mutex_init(r_entry->mutex, NULL); r_entry->name = name; r_entry->entry = bzalloc(sizeof(profile_entry)); init_entry(r_entry->entry, name); } return r_entry; } void profile_register_root(const char *name, uint64_t expected_time_between_calls) { if (!lock_root()) return; get_root_entry(name)->entry->expected_time_between_calls = (expected_time_between_calls + 500) / 1000; 
pthread_mutex_unlock(&root_mutex); } static void free_call_context(profile_call *context); static void merge_context(profile_call *context) { pthread_mutex_t *mutex = NULL; profile_entry *entry = NULL; profile_call *prev_call = NULL; if (!lock_root()) { free_call_context(context); return; } profile_root_entry *r_entry = get_root_entry(context->name); mutex = r_entry->mutex; entry = r_entry->entry; prev_call = r_entry->prev_call; r_entry->prev_call = context; pthread_mutex_lock(mutex); pthread_mutex_unlock(&root_mutex); merge_call(entry, context, prev_call); pthread_mutex_unlock(mutex); free_call_context(prev_call); } void profile_start(const char *name) { if (!thread_enabled) return; profile_call new_call = { .name = name, #ifdef TRACK_OVERHEAD .overhead_start = os_gettime_ns(), #endif .parent = thread_context, }; profile_call *call = NULL; if (new_call.parent) { size_t idx = da_push_back(new_call.parent->children, &new_call); call = &new_call.parent->children.array[idx]; } else { call = bmalloc(sizeof(profile_call)); memcpy(call, &new_call, sizeof(profile_call)); } thread_context = call; call->start_time = os_gettime_ns(); } void profile_end(const char *name) { uint64_t end = os_gettime_ns(); if (!thread_enabled) return; profile_call *call = thread_context; if (!call) { blog(LOG_ERROR, "Called profile end with no active profile"); return; } if (!call->name) call->name = name; if (call->name != name) { blog(LOG_ERROR, "Called profile end with mismatching name: " "start(\"%s\"[%p]) <-> end(\"%s\"[%p])", call->name, call->name, name, name); profile_call *parent = call->parent; while (parent && parent->parent && parent->name != name) parent = parent->parent; if (!parent || parent->name != name) return; while (call->name != name) { profile_end(call->name); call = call->parent; } } thread_context = call->parent; call->end_time = end; #ifdef TRACK_OVERHEAD call->overhead_end = os_gettime_ns(); #endif if (call->parent) return; merge_context(call); } static int 
profiler_time_entry_compare(const void *first, const void *second) { int64_t diff = ((profiler_time_entry *)second)->time_delta - ((profiler_time_entry *)first)->time_delta; return diff < 0 ? -1 : (diff > 0 ? 1 : 0); } static uint64_t copy_map_to_array(profile_times_table *map, profiler_time_entries_t *entry_buffer, uint64_t *min_, uint64_t *max_) { migrate_old_entries(map, false); da_reserve(*entry_buffer, map->occupied); da_resize(*entry_buffer, 0); uint64_t min__ = ~(uint64_t)0; uint64_t max__ = 0; uint64_t calls = 0; for (size_t i = 0; i < map->size; i++) { if (!map->entries[i].probes) continue; profiler_time_entry *entry = &map->entries[i].entry; da_push_back(*entry_buffer, entry); calls += entry->count; min__ = (min__ < entry->time_delta) ? min__ : entry->time_delta; max__ = (max__ > entry->time_delta) ? max__ : entry->time_delta; } if (min_) *min_ = min__; if (max_) *max_ = max__; return calls; } typedef void (*profile_entry_print_func)(profiler_snapshot_entry_t *entry, struct dstr *indent_buffer, struct dstr *output_buffer, unsigned indent, uint64_t active, uint64_t parent_calls); /* UTF-8 characters */ #define VPIPE_RIGHT " \xe2\x94\xa3" #define VPIPE " \xe2\x94\x83" #define DOWN_RIGHT " \xe2\x94\x97" static void make_indent_string(struct dstr *indent_buffer, unsigned indent, uint64_t active) { indent_buffer->len = 0; if (!indent) { dstr_cat_ch(indent_buffer, 0); return; } for (size_t i = 0; i < indent; i++) { const char *fragment = ""; bool last = i + 1 == indent; if (active & ((uint64_t)1 << i)) fragment = last ? VPIPE_RIGHT : VPIPE; else fragment = last ? 
DOWN_RIGHT : " "; dstr_cat(indent_buffer, fragment); } } static void gather_stats(uint64_t expected_time_between_calls, profiler_time_entries_t *entries, uint64_t calls, uint64_t *percentile99, uint64_t *median, double *percent_within_bounds) { if (!entries->num) { *percentile99 = 0; *median = 0; *percent_within_bounds = 0.; return; } /*if (entry_buffer->num > 2) blog(LOG_INFO, "buffer-size %lu, overall count %llu\n" "map-size %lu, occupied %lu, probes %lu", entry_buffer->num, calls, map->size, map->occupied, map->max_probe_count);*/ uint64_t accu = 0; for (size_t i = 0; i < entries->num; i++) { uint64_t old_accu = accu; accu += entries->array[i].count; if (old_accu < calls * 0.01 && accu >= calls * 0.01) *percentile99 = entries->array[i].time_delta; else if (old_accu < calls * 0.5 && accu >= calls * 0.5) { *median = entries->array[i].time_delta; break; } } *percent_within_bounds = 0.; if (!expected_time_between_calls) return; accu = 0; for (size_t i = 0; i < entries->num; i++) { profiler_time_entry *entry = &entries->array[i]; if (entry->time_delta < expected_time_between_calls) break; accu += entry->count; } *percent_within_bounds = (1. 
- (double)accu / calls) * 100; } #define G_MS "g\xC2\xA0ms" static void profile_print_entry(profiler_snapshot_entry_t *entry, struct dstr *indent_buffer, struct dstr *output_buffer, unsigned indent, uint64_t active, uint64_t parent_calls) { uint64_t calls = entry->overall_count; uint64_t min_ = entry->min_time; uint64_t max_ = entry->max_time; uint64_t percentile99 = 0; uint64_t median = 0; double percent_within_bounds = 0.; gather_stats(entry->expected_time_between_calls, &entry->times, calls, &percentile99, &median, &percent_within_bounds); make_indent_string(indent_buffer, indent, active); if (min_ == max_) { dstr_printf(output_buffer, "%s%s: %" G_MS, indent_buffer->array, entry->name, min_ / 1000.); } else { dstr_printf(output_buffer, "%s%s: min=%" G_MS ", median=%" G_MS ", " "max=%" G_MS ", 99th percentile=%" G_MS, indent_buffer->array, entry->name, min_ / 1000., median / 1000., max_ / 1000., percentile99 / 1000.); if (entry->expected_time_between_calls) { double expected_ms = entry->expected_time_between_calls / 1000.; dstr_catf(output_buffer, ", %g%% below %" G_MS, percent_within_bounds, expected_ms); } } if (parent_calls && calls != parent_calls) { double calls_per_parent = (double)calls / parent_calls; if (lround(calls_per_parent * 10) != 10) dstr_catf(output_buffer, ", %g calls per parent call", calls_per_parent); } blog(LOG_INFO, "%s", output_buffer->array); active |= (uint64_t)1 << indent; for (size_t i = 0; i < entry->children.num; i++) { if ((i + 1) == entry->children.num) active &= (1 << indent) - 1; profile_print_entry(&entry->children.array[i], indent_buffer, output_buffer, indent + 1, active, calls); } } static void gather_stats_between(profiler_time_entries_t *entries, uint64_t calls, uint64_t lower_bound, uint64_t upper_bound, uint64_t min_, uint64_t max_, uint64_t *median, double *percent, double *lower, double *higher) { *median = 0; *percent = 0.; *lower = 0.; *higher = 0.; if (!entries->num) return; uint64_t accu = 0; for (size_t i = 0; i < 
entries->num; i++) { accu += entries->array[i].count; if (accu < calls * 0.5) continue; *median = entries->array[i].time_delta; break; } bool found_upper_bound = max_ <= upper_bound; bool found_lower_bound = false; if (min_ >= upper_bound) { *higher = 100.; return; } if (found_upper_bound && min_ >= lower_bound) { *percent = 100.; return; } accu = 0; for (size_t i = 0; i < entries->num; i++) { uint64_t delta = entries->array[i].time_delta; if (!found_upper_bound && delta <= upper_bound) { *higher = (double)accu / calls * 100; accu = 0; found_upper_bound = true; } if (!found_lower_bound && delta < lower_bound) { *percent = (double)accu / calls * 100; accu = 0; found_lower_bound = true; } accu += entries->array[i].count; } if (!found_upper_bound) { *higher = 100.; } else if (!found_lower_bound) { *percent = (double)accu / calls * 100; } else { *lower = (double)accu / calls * 100; } } static void profile_print_entry_expected(profiler_snapshot_entry_t *entry, struct dstr *indent_buffer, struct dstr *output_buffer, unsigned indent, uint64_t active, uint64_t parent_calls) { UNUSED_PARAMETER(parent_calls); if (!entry->expected_time_between_calls) return; uint64_t expected_time = entry->expected_time_between_calls; uint64_t min_ = entry->min_time_between_calls; uint64_t max_ = entry->max_time_between_calls; uint64_t median = 0; double percent = 0.; double lower = 0.; double higher = 0.; gather_stats_between(&entry->times_between_calls, entry->overall_between_calls_count, (uint64_t)(expected_time * 0.98), (uint64_t)(expected_time * 1.02 + 0.5), min_, max_, &median, &percent, &lower, &higher); make_indent_string(indent_buffer, indent, active); blog(LOG_INFO, "%s%s: min=%" G_MS ", median=%" G_MS ", max=%" G_MS ", %g%% " "within ±2%% of %" G_MS " (%g%% lower, %g%% higher)", indent_buffer->array, entry->name, min_ / 1000., median / 1000., max_ / 1000., percent, expected_time / 1000., lower, higher); active |= (uint64_t)1 << indent; for (size_t i = 0; i < entry->children.num; 
i++) { if ((i + 1) == entry->children.num) active &= (1 << indent) - 1; profile_print_entry_expected(&entry->children.array[i], indent_buffer, output_buffer, indent + 1, active, 0); } } void profile_print_func(const char *intro, profile_entry_print_func print, profiler_snapshot_t *snap) { struct dstr indent_buffer = {0}; struct dstr output_buffer = {0}; bool free_snapshot = !snap; if (!snap) snap = profile_snapshot_create(); blog(LOG_INFO, "%s", intro); for (size_t i = 0; i < snap->roots.num; i++) { print(&snap->roots.array[i], &indent_buffer, &output_buffer, 0, 0, 0); } blog(LOG_INFO, "================================================="); if (free_snapshot) profile_snapshot_free(snap); dstr_free(&output_buffer); dstr_free(&indent_buffer); } void profiler_print(profiler_snapshot_t *snap) { profile_print_func("== Profiler Results =============================", profile_print_entry, snap); } void profiler_print_time_between_calls(profiler_snapshot_t *snap) { profile_print_func("== Profiler Time Between Calls ==================", profile_print_entry_expected, snap); } static void free_call_children(profile_call *call) { if (!call) return; const size_t num = call->children.num; for (size_t i = 0; i < num; i++) free_call_children(&call->children.array[i]); da_free(call->children); } static void free_call_context(profile_call *context) { free_call_children(context); bfree(context); } static void free_hashmap(profile_times_table *map) { map->size = 0; bfree(map->entries); map->entries = NULL; bfree(map->old_entries); map->old_entries = NULL; } static void free_profile_entry(profile_entry *entry) { for (size_t i = 0; i < entry->children.num; i++) free_profile_entry(&entry->children.array[i]); free_hashmap(&entry->times); #ifdef TRACK_OVERHEAD free_hashmap(&entry->overhead); #endif free_hashmap(&entry->times_between_calls); da_free(entry->children); } void profiler_free(void) { DARRAY(profile_root_entry) old_root_entries = {0}; pthread_mutex_lock(&root_mutex); enabled = 
false; da_move(old_root_entries, root_entries); pthread_mutex_unlock(&root_mutex); for (size_t i = 0; i < old_root_entries.num; i++) { profile_root_entry *entry = &old_root_entries.array[i]; pthread_mutex_lock(entry->mutex); pthread_mutex_unlock(entry->mutex); pthread_mutex_destroy(entry->mutex); bfree(entry->mutex); entry->mutex = NULL; free_call_context(entry->prev_call); free_profile_entry(entry->entry); bfree(entry->entry); } da_free(old_root_entries); pthread_mutex_destroy(&root_mutex); } /* ------------------------------------------------------------------------- */ /* Profiler name storage */ struct profiler_name_store { pthread_mutex_t mutex; DARRAY(char *) names; }; profiler_name_store_t *profiler_name_store_create(void) { profiler_name_store_t *store = bzalloc(sizeof(profiler_name_store_t)); if (pthread_mutex_init(&store->mutex, NULL)) goto error; return store; error: bfree(store); return NULL; } void profiler_name_store_free(profiler_name_store_t *store) { if (!store) return; for (size_t i = 0; i < store->names.num; i++) bfree(store->names.array[i]); da_free(store->names); pthread_mutex_destroy(&store->mutex); bfree(store); } const char *profile_store_name(profiler_name_store_t *store, const char *format, ...) 
{ va_list args; va_start(args, format); struct dstr str = {0}; dstr_vprintf(&str, format, args); va_end(args); const char *result = NULL; pthread_mutex_lock(&store->mutex); size_t idx = da_push_back(store->names, &str.array); result = store->names.array[idx]; pthread_mutex_unlock(&store->mutex); return result; } /* ------------------------------------------------------------------------- */ /* Profiler data access */ static void add_entry_to_snapshot(profile_entry *entry, profiler_snapshot_entry_t *s_entry) { s_entry->name = entry->name; s_entry->overall_count = copy_map_to_array(&entry->times, &s_entry->times, &s_entry->min_time, &s_entry->max_time); if ((s_entry->expected_time_between_calls = entry->expected_time_between_calls)) s_entry->overall_between_calls_count = copy_map_to_array(&entry->times_between_calls, &s_entry->times_between_calls, &s_entry->min_time_between_calls, &s_entry->max_time_between_calls); da_reserve(s_entry->children, entry->children.num); for (size_t i = 0; i < entry->children.num; i++) add_entry_to_snapshot(&entry->children.array[i], da_push_back_new(s_entry->children)); } static void sort_snapshot_entry(profiler_snapshot_entry_t *entry) { qsort(entry->times.array, entry->times.num, sizeof(profiler_time_entry), profiler_time_entry_compare); if (entry->expected_time_between_calls) qsort(entry->times_between_calls.array, entry->times_between_calls.num, sizeof(profiler_time_entry), profiler_time_entry_compare); for (size_t i = 0; i < entry->children.num; i++) sort_snapshot_entry(&entry->children.array[i]); } profiler_snapshot_t *profile_snapshot_create(void) { profiler_snapshot_t *snap = bzalloc(sizeof(profiler_snapshot_t)); pthread_mutex_lock(&root_mutex); da_reserve(snap->roots, root_entries.num); for (size_t i = 0; i < root_entries.num; i++) { pthread_mutex_lock(root_entries.array[i].mutex); add_entry_to_snapshot(root_entries.array[i].entry, da_push_back_new(snap->roots)); pthread_mutex_unlock(root_entries.array[i].mutex); } 
pthread_mutex_unlock(&root_mutex); for (size_t i = 0; i < snap->roots.num; i++) sort_snapshot_entry(&snap->roots.array[i]); return snap; } static void free_snapshot_entry(profiler_snapshot_entry_t *entry) { for (size_t i = 0; i < entry->children.num; i++) free_snapshot_entry(&entry->children.array[i]); da_free(entry->children); da_free(entry->times_between_calls); da_free(entry->times); } void profile_snapshot_free(profiler_snapshot_t *snap) { if (!snap) return; for (size_t i = 0; i < snap->roots.num; i++) free_snapshot_entry(&snap->roots.array[i]); da_free(snap->roots); bfree(snap); } typedef void (*dump_csv_func)(void *data, struct dstr *buffer); static void entry_dump_csv(struct dstr *buffer, const profiler_snapshot_entry_t *parent, const profiler_snapshot_entry_t *entry, dump_csv_func func, void *data) { const char *parent_name = parent ? parent->name : NULL; for (size_t i = 0; i < entry->times.num; i++) { dstr_printf(buffer, "%p,%p,%p,%p,%s,0," "%" PRIu64 ",%" PRIu64 "\n", entry, parent, entry->name, parent_name, entry->name, entry->times.array[i].time_delta, entry->times.array[i].count); func(data, buffer); } for (size_t i = 0; i < entry->times_between_calls.num; i++) { dstr_printf(buffer, "%p,%p,%p,%p,%s," "%" PRIu64 ",%" PRIu64 ",%" PRIu64 "\n", entry, parent, entry->name, parent_name, entry->name, entry->expected_time_between_calls, entry->times_between_calls.array[i].time_delta, entry->times_between_calls.array[i].count); func(data, buffer); } for (size_t i = 0; i < entry->children.num; i++) entry_dump_csv(buffer, entry, &entry->children.array[i], func, data); } static void profiler_snapshot_dump(const profiler_snapshot_t *snap, dump_csv_func func, void *data) { struct dstr buffer = {0}; dstr_init_copy(&buffer, "id,parent_id,name_id,parent_name_id,name," "time_between_calls,time_delta_µs,count\n"); func(data, &buffer); for (size_t i = 0; i < snap->roots.num; i++) entry_dump_csv(&buffer, NULL, &snap->roots.array[i], func, data); dstr_free(&buffer); } 
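`profiler_snapshot_dump` above never touches an output handle itself: it formats one CSV row at a time into a `dstr` and hands each row to a caller-supplied `dump_csv_func`, which is what lets the same formatter feed both the `fwrite` and the `gzwrite` sinks that follow. A self-contained sketch of that writer-callback pattern (hypothetical `csv_row`/`dump_rows`/`sink_to_string` names, plain stdio instead of `dstr`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* The sink receives each finished row; it decides where bytes go. */
typedef void (*row_sink_func)(void *data, const char *row);

struct csv_row {
	const char *name;
	unsigned long time_delta;
	unsigned long count;
};

/* Format a header plus one line per row, pushing each through the sink. */
static void dump_rows(const struct csv_row *rows, size_t num,
		      row_sink_func sink, void *data)
{
	char buffer[128];
	sink(data, "name,time_delta,count\n");
	for (size_t i = 0; i < num; i++) {
		snprintf(buffer, sizeof(buffer), "%s,%lu,%lu\n", rows[i].name,
			 rows[i].time_delta, rows[i].count);
		sink(data, buffer);
	}
}

/* Example sink: append rows to a caller-owned string buffer. */
static void sink_to_string(void *data, const char *row)
{
	strcat((char *)data, row);
}
```

Swapping the sink is then a one-argument change: pass a `FILE *` with an `fwrite`-based sink, a `gzFile` with a `gzwrite`-based one, or (as here) a string buffer for tests, without duplicating any formatting logic.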
static void dump_csv_fwrite(void *data, struct dstr *buffer) { fwrite(buffer->array, 1, buffer->len, data); } bool profiler_snapshot_dump_csv(const profiler_snapshot_t *snap, const char *filename) { FILE *f = os_fopen(filename, "wb+"); if (!f) return false; profiler_snapshot_dump(snap, dump_csv_fwrite, f); fclose(f); return true; } static void dump_csv_gzwrite(void *data, struct dstr *buffer) { gzwrite(data, buffer->array, (unsigned)buffer->len); } bool profiler_snapshot_dump_csv_gz(const profiler_snapshot_t *snap, const char *filename) { gzFile gz; #ifdef _WIN32 wchar_t *filename_w = NULL; os_utf8_to_wcs_ptr(filename, 0, &filename_w); if (!filename_w) return false; gz = gzopen_w(filename_w, "wb"); bfree(filename_w); #else gz = gzopen(filename, "wb"); #endif if (!gz) return false; profiler_snapshot_dump(snap, dump_csv_gzwrite, gz); #ifdef _WIN32 gzclose_w(gz); #else gzclose(gz); #endif return true; } size_t profiler_snapshot_num_roots(profiler_snapshot_t *snap) { return snap ? snap->roots.num : 0; } void profiler_snapshot_enumerate_roots(profiler_snapshot_t *snap, profiler_entry_enum_func func, void *context) { if (!snap) return; for (size_t i = 0; i < snap->roots.num; i++) if (!func(context, &snap->roots.array[i])) break; } void profiler_snapshot_filter_roots(profiler_snapshot_t *snap, profiler_name_filter_func func, void *data) { for (size_t i = 0; i < snap->roots.num;) { bool remove = false; bool res = func(data, snap->roots.array[i].name, &remove); if (remove) { free_snapshot_entry(&snap->roots.array[i]); da_erase(snap->roots, i); } if (!res) break; if (!remove) i += 1; } } size_t profiler_snapshot_num_children(profiler_snapshot_entry_t *entry) { return entry ? 
entry->children.num : 0; } void profiler_snapshot_enumerate_children(profiler_snapshot_entry_t *entry, profiler_entry_enum_func func, void *context) { if (!entry) return; for (size_t i = 0; i < entry->children.num; i++) if (!func(context, &entry->children.array[i])) break; } const char *profiler_snapshot_entry_name(profiler_snapshot_entry_t *entry) { return entry ? entry->name : NULL; } profiler_time_entries_t *profiler_snapshot_entry_times(profiler_snapshot_entry_t *entry) { return entry ? &entry->times : NULL; } uint64_t profiler_snapshot_entry_overall_count(profiler_snapshot_entry_t *entry) { return entry ? entry->overall_count : 0; } uint64_t profiler_snapshot_entry_min_time(profiler_snapshot_entry_t *entry) { return entry ? entry->min_time : 0; } uint64_t profiler_snapshot_entry_max_time(profiler_snapshot_entry_t *entry) { return entry ? entry->max_time : 0; } profiler_time_entries_t *profiler_snapshot_entry_times_between_calls(profiler_snapshot_entry_t *entry) { return entry ? &entry->times_between_calls : NULL; } uint64_t profiler_snapshot_entry_expected_time_between_calls(profiler_snapshot_entry_t *entry) { return entry ? entry->expected_time_between_calls : 0; } uint64_t profiler_snapshot_entry_min_time_between_calls(profiler_snapshot_entry_t *entry) { return entry ? entry->min_time_between_calls : 0; } uint64_t profiler_snapshot_entry_max_time_between_calls(profiler_snapshot_entry_t *entry) { return entry ? entry->max_time_between_calls : 0; } uint64_t profiler_snapshot_entry_overall_between_calls_count(profiler_snapshot_entry_t *entry) { return entry ? 
entry->overall_between_calls_count : 0; } obs-studio-32.1.0-sources/libobs/util/text-lookup.c /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include <stdio.h> #include "dstr.h" #include "text-lookup.h" #include "lexer.h" #include "platform.h" #include "uthash.h" /* ------------------------------------------------------------------------- */ struct text_item { char *lookup, *value; UT_hash_handle hh; }; static inline void text_item_destroy(struct text_item *item) { bfree(item->lookup); bfree(item->value); bfree(item); } /* ------------------------------------------------------------------------- */ struct text_lookup { struct text_item *items; }; static void lookup_getstringtoken(struct lexer *lex, struct strref *token) { const char *temp = lex->offset; bool was_backslash = false; while (*temp != 0 && *temp != '\n') { if (!was_backslash) { if (*temp == '\\') { was_backslash = true; } else if (*temp == '"') { temp++; break; } } else { was_backslash = false; } ++temp; } token->len += (size_t)(temp - lex->offset); if (*token->array == '"') { token->array++; token->len--; if (*(temp - 1) == '"') token->len--; } lex->offset = temp; } static bool lookup_gettoken(struct lexer *lex, struct strref *str) {
struct base_token temp; base_token_clear(&temp); strref_clear(str); while (lexer_getbasetoken(lex, &temp, PARSE_WHITESPACE)) { char ch = *temp.text.array; if (!str->array) { /* comments are designated with a #, and end at LF */ if (ch == '#') { while (*lex->offset != '\n' && *lex->offset != 0) ++lex->offset; } else if (temp.type == BASETOKEN_WHITESPACE) { strref_copy(str, &temp.text); break; } else { strref_copy(str, &temp.text); if (ch == '"') { lookup_getstringtoken(lex, str); break; } else if (ch == '=') { break; } } } else { if (temp.type == BASETOKEN_WHITESPACE || *temp.text.array == '=') { lex->offset -= temp.text.len; break; } if (ch == '#') { lex->offset--; break; } str->len += temp.text.len; } } return (str->len != 0); } static inline bool lookup_goto_nextline(struct lexer *p) { struct strref val; bool success = true; strref_clear(&val); while (true) { if (!lookup_gettoken(p, &val)) { success = false; break; } if (*val.array == '\n') break; } return success; } static char *convert_string(const char *str, size_t len) { struct dstr out; out.array = bstrdup_n(str, len); out.capacity = len + 1; out.len = len; dstr_replace(&out, "\\n", "\n"); dstr_replace(&out, "\\t", "\t"); dstr_replace(&out, "\\r", "\r"); dstr_replace(&out, "\\\"", "\""); return out.array; } static void lookup_addfiledata(struct text_lookup *lookup, const char *file_data) { struct lexer lex; struct strref name, value; lexer_init(&lex); lexer_start(&lex, file_data); strref_clear(&name); strref_clear(&value); while (lookup_gettoken(&lex, &name)) { struct text_item *item; struct text_item *old; bool got_eq = false; if (*name.array == '\n') continue; getval: if (!lookup_gettoken(&lex, &value)) break; if (*value.array == '\n') continue; else if (!got_eq && *value.array == '=') { got_eq = true; goto getval; } item = bzalloc(sizeof(struct text_item)); item->lookup = bstrdup_n(name.array, name.len); item->value = convert_string(value.array, value.len); HASH_REPLACE_STR(lookup->items, lookup, item, 
old); if (old) text_item_destroy(old); if (!lookup_goto_nextline(&lex)) break; } lexer_free(&lex); } static inline bool lookup_getstring(const char *lookup_val, const char **out, struct text_lookup *lookup) { struct text_item *item; if (!lookup->items) return false; HASH_FIND_STR(lookup->items, lookup_val, item); if (!item) return false; *out = item->value; return true; } /* ------------------------------------------------------------------------- */ lookup_t *text_lookup_create(const char *path) { struct text_lookup *lookup = bzalloc(sizeof(struct text_lookup)); if (!text_lookup_add(lookup, path)) { bfree(lookup); lookup = NULL; } return lookup; } bool text_lookup_add(lookup_t *lookup, const char *path) { struct dstr file_str; char *temp = NULL; FILE *file; file = os_fopen(path, "rb"); if (!file) return false; os_fread_utf8(file, &temp); dstr_init_move_array(&file_str, temp); fclose(file); if (!file_str.array) return false; dstr_replace(&file_str, "\r", " "); lookup_addfiledata(lookup, file_str.array); dstr_free(&file_str); return true; } void text_lookup_destroy(lookup_t *lookup) { if (lookup) { struct text_item *item, *tmp; HASH_ITER (hh, lookup->items, item, tmp) { HASH_DELETE(hh, lookup->items, item); text_item_destroy(item); } bfree(lookup); } } bool text_lookup_getstr(lookup_t *lookup, const char *lookup_val, const char **out) { if (lookup) return lookup_getstring(lookup_val, out, lookup); return false; } obs-studio-32.1.0-sources/libobs/util/darray.h /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "c99defs.h" #include <stdlib.h> #include <string.h> #include <assert.h> #include "bmem.h" #ifdef __cplusplus extern "C" { #endif /* * Dynamic array. * * NOTE: Not type-safe when using directly. * Specifying size per call with inline maximizes compiler optimizations * * See DARRAY macro at the bottom of the file for slightly safer usage. */ #define DARRAY_INVALID ((size_t)-1) struct darray { void *array; size_t num; size_t capacity; }; static inline void darray_init(struct darray *dst) { dst->array = NULL; dst->num = 0; dst->capacity = 0; } static inline void darray_free(struct darray *dst) { bfree(dst->array); dst->array = NULL; dst->num = 0; dst->capacity = 0; } static inline size_t darray_alloc_size(const size_t element_size, const struct darray *da) { return element_size * da->num; } static inline void *darray_item(const size_t element_size, const struct darray *da, size_t idx) { return (void *)(((uint8_t *)da->array) + element_size * idx); } static inline void *darray_end(const size_t element_size, const struct darray *da) { if (!da->num) return NULL; return darray_item(element_size, da, da->num - 1); } static inline void darray_reserve(const size_t element_size, struct darray *dst, const size_t capacity) { void *ptr; if (capacity == 0 || capacity <= dst->capacity) return; ptr = bmalloc(element_size * capacity); if (dst->array) { if (dst->num) memcpy(ptr, dst->array, element_size * dst->num); bfree(dst->array); } dst->array = ptr; dst->capacity = capacity; } static inline void darray_ensure_capacity(const size_t element_size, struct darray *dst, const size_t new_size) { size_t new_cap; void *ptr; if (new_size <= dst->capacity) return; new_cap =
(!dst->capacity) ? new_size : dst->capacity * 2; if (new_size > new_cap) new_cap = new_size; ptr = bmalloc(element_size * new_cap); if (dst->array) { if (dst->capacity) memcpy(ptr, dst->array, element_size * dst->capacity); bfree(dst->array); } dst->array = ptr; dst->capacity = new_cap; } static inline void darray_clear(struct darray *dst) { dst->num = 0; } static inline void darray_resize(const size_t element_size, struct darray *dst, const size_t size) { int b_clear; size_t old_num; if (size == dst->num) { return; } else if (size == 0) { dst->num = 0; return; } b_clear = size > dst->num; old_num = dst->num; darray_ensure_capacity(element_size, dst, size); dst->num = size; if (b_clear) memset(darray_item(element_size, dst, old_num), 0, element_size * (dst->num - old_num)); } static inline void darray_copy(const size_t element_size, struct darray *dst, const struct darray *da) { if (da->num == 0) { darray_free(dst); } else { darray_resize(element_size, dst, da->num); memcpy(dst->array, da->array, element_size * da->num); } } static inline void darray_copy_array(const size_t element_size, struct darray *dst, const void *array, const size_t num) { darray_resize(element_size, dst, num); memcpy(dst->array, array, element_size * dst->num); } static inline void darray_move(struct darray *dst, struct darray *src) { darray_free(dst); memcpy(dst, src, sizeof(struct darray)); src->array = NULL; src->capacity = 0; src->num = 0; } static inline size_t darray_find(const size_t element_size, const struct darray *da, const void *item, const size_t idx) { size_t i; assert(idx <= da->num); for (i = idx; i < da->num; i++) { void *compare = darray_item(element_size, da, i); if (memcmp(compare, item, element_size) == 0) return i; } return DARRAY_INVALID; } static inline size_t darray_push_back(const size_t element_size, struct darray *dst, const void *item) { darray_ensure_capacity(element_size, dst, ++dst->num); memcpy(darray_end(element_size, dst), item, element_size); return 
dst->num - 1; } static inline void *darray_push_back_new(const size_t element_size, struct darray *dst) { void *last; darray_ensure_capacity(element_size, dst, ++dst->num); last = darray_end(element_size, dst); memset(last, 0, element_size); return last; } static inline size_t darray_push_back_array(const size_t element_size, struct darray *dst, const void *array, const size_t num) { size_t old_num; if (!dst) return 0; if (!array || !num) return dst->num; old_num = dst->num; darray_resize(element_size, dst, dst->num + num); memcpy(darray_item(element_size, dst, old_num), array, element_size * num); return old_num; } static inline size_t darray_push_back_darray(const size_t element_size, struct darray *dst, const struct darray *da) { return darray_push_back_array(element_size, dst, da->array, da->num); } static inline void darray_insert(const size_t element_size, struct darray *dst, const size_t idx, const void *item) { void *new_item; size_t move_count; assert(idx <= dst->num); if (idx == dst->num) { darray_push_back(element_size, dst, item); return; } move_count = dst->num - idx; darray_ensure_capacity(element_size, dst, ++dst->num); new_item = darray_item(element_size, dst, idx); memmove(darray_item(element_size, dst, idx + 1), new_item, move_count * element_size); memcpy(new_item, item, element_size); } static inline void *darray_insert_new(const size_t element_size, struct darray *dst, const size_t idx) { void *item; size_t move_count; assert(idx <= dst->num); if (idx == dst->num) return darray_push_back_new(element_size, dst); move_count = dst->num - idx; darray_ensure_capacity(element_size, dst, ++dst->num); item = darray_item(element_size, dst, idx); memmove(darray_item(element_size, dst, idx + 1), item, move_count * element_size); memset(item, 0, element_size); return item; } static inline void darray_insert_array(const size_t element_size, struct darray *dst, const size_t idx, const void *array, const size_t num) { size_t old_num; assert(array != NULL); 
assert(num != 0); assert(idx <= dst->num); old_num = dst->num; darray_resize(element_size, dst, dst->num + num); memmove(darray_item(element_size, dst, idx + num), darray_item(element_size, dst, idx), element_size * (old_num - idx)); memcpy(darray_item(element_size, dst, idx), array, element_size * num); } static inline void darray_insert_darray(const size_t element_size, struct darray *dst, const size_t idx, const struct darray *da) { darray_insert_array(element_size, dst, idx, da->array, da->num); } static inline void darray_erase(const size_t element_size, struct darray *dst, const size_t idx) { assert(idx < dst->num); if (idx >= dst->num || !--dst->num) return; memmove(darray_item(element_size, dst, idx), darray_item(element_size, dst, idx + 1), element_size * (dst->num - idx)); } static inline void darray_erase_item(const size_t element_size, struct darray *dst, const void *item) { size_t idx = darray_find(element_size, dst, item, 0); if (idx != DARRAY_INVALID) darray_erase(element_size, dst, idx); } static inline void darray_erase_range(const size_t element_size, struct darray *dst, const size_t start, const size_t end) { size_t count, move_count; assert(start <= dst->num); assert(end <= dst->num); assert(end > start); count = end - start; if (count == 1) { darray_erase(element_size, dst, start); return; } else if (count == dst->num) { dst->num = 0; return; } move_count = dst->num - end; if (move_count) memmove(darray_item(element_size, dst, start), darray_item(element_size, dst, end), move_count * element_size); dst->num -= count; } static inline void darray_pop_front(const size_t element_size, struct darray *dst) { assert(dst->num != 0); if (dst->num) darray_erase(element_size, dst, 0); } static inline void darray_pop_back(const size_t element_size, struct darray *dst) { assert(dst->num != 0); if (dst->num) darray_erase(element_size, dst, dst->num - 1); } static inline void darray_join(const size_t element_size, struct darray *dst, struct darray *da) { 
darray_push_back_darray(element_size, dst, da); darray_free(da); } static inline void darray_split(const size_t element_size, struct darray *dst1, struct darray *dst2, const struct darray *da, const size_t idx) { struct darray temp; assert(da->num >= idx); assert(dst1 != dst2); darray_init(&temp); darray_copy(element_size, &temp, da); darray_free(dst1); darray_free(dst2); if (da->num) { if (idx) darray_copy_array(element_size, dst1, temp.array, temp.num); if (idx < temp.num - 1) darray_copy_array(element_size, dst2, darray_item(element_size, &temp, idx), temp.num - idx); } darray_free(&temp); } static inline void darray_move_item(const size_t element_size, struct darray *dst, const size_t from, const size_t to) { void *temp, *p_from, *p_to; if (from == to) return; temp = malloc(element_size); if (!temp) { bcrash("darray_move_item: out of memory"); return; } p_from = darray_item(element_size, dst, from); p_to = darray_item(element_size, dst, to); memcpy(temp, p_from, element_size); if (to < from) memmove(darray_item(element_size, dst, to + 1), p_to, element_size * (from - to)); else memmove(p_from, darray_item(element_size, dst, from + 1), element_size * (to - from)); memcpy(p_to, temp, element_size); free(temp); } static inline void darray_swap(const size_t element_size, struct darray *dst, const size_t a, const size_t b) { void *temp, *a_ptr, *b_ptr; assert(a < dst->num); assert(b < dst->num); if (a == b) return; temp = malloc(element_size); if (!temp) bcrash("darray_swap: out of memory"); a_ptr = darray_item(element_size, dst, a); b_ptr = darray_item(element_size, dst, b); memcpy(temp, a_ptr, element_size); memcpy(a_ptr, b_ptr, element_size); memcpy(b_ptr, temp, element_size); free(temp); } /* * Defines to make dynamic arrays more type-safe. * Note: Still not 100% type-safe but much better than using darray directly * Makes it a little easier to use as well. 
* * I did -not- want to use a gigantic macro to generate a crapload of * typesafe inline functions per type. It just feels like a mess to me. */ #define DARRAY(type) \ union { \ struct darray da; \ struct { \ type *array; \ size_t num; \ size_t capacity; \ }; \ } #define da_init(v) darray_init(&(v).da) #define da_free(v) darray_free(&(v).da) #define da_alloc_size(v) (sizeof(*(v).array) * (v).num) #define da_end(v) darray_end(sizeof(*(v).array), &(v).da) #define da_reserve(v, capacity) darray_reserve(sizeof(*(v).array), &(v).da, capacity) #define da_resize(v, size) darray_resize(sizeof(*(v).array), &(v).da, size) #define da_clear(v) darray_clear(&(v).da) #define da_copy(dst, src) darray_copy(sizeof(*(dst).array), &(dst).da, &(src).da) #define da_copy_array(dst, src_array, n) darray_copy_array(sizeof(*(dst).array), &(dst).da, src_array, n) #define da_move(dst, src) darray_move(&(dst).da, &(src).da) #ifdef ENABLE_DARRAY_TYPE_TEST #ifdef __cplusplus #define da_type_test(v, item) \ ({ \ if (false) { \ auto _t = (v).array; \ _t = (item); \ (void)_t; \ *(v).array = *(item); \ } \ }) #else #define da_type_test(v, item) \ ({ \ if (false) { \ const typeof(*(v).array) *_t; \ _t = (item); \ (void)_t; \ *(v).array = *(item); \ } \ }) #endif #endif // ENABLE_DARRAY_TYPE_TEST #ifdef ENABLE_DARRAY_TYPE_TEST #define da_find(v, item, idx) \ ({ \ da_type_test(v, item); \ darray_find(sizeof(*(v).array), &(v).da, item, idx); \ }) #else #define da_find(v, item, idx) darray_find(sizeof(*(v).array), &(v).da, item, idx) #endif #ifdef ENABLE_DARRAY_TYPE_TEST #define da_push_back(v, item) \ ({ \ da_type_test(v, item); \ darray_push_back(sizeof(*(v).array), &(v).da, item); \ }) #else #define da_push_back(v, item) darray_push_back(sizeof(*(v).array), &(v).da, item) #endif #ifdef __GNUC__ /* GCC 12 with -O2 generates a warning -Wstringop-overflow in da_push_back_new, * which could be false positive. Extract the macro here to avoid the warning. 
*/ #define da_push_back_new(v) \ ({ \ __typeof__(v) *d = &(v); \ darray_ensure_capacity(sizeof(*d->array), &d->da, ++d->num); \ memset(&d->array[d->num - 1], 0, sizeof(*d->array)); \ &d->array[d->num - 1]; \ }) #else #define da_push_back_new(v) darray_push_back_new(sizeof(*(v).array), &(v).da) #endif #ifdef ENABLE_DARRAY_TYPE_TEST #define da_push_back_array(dst, src_array, n) \ ({ \ da_type_test(dst, src_array); \ darray_push_back_array(sizeof(*(dst).array), &(dst).da, src_array, n); \ }) #else #define da_push_back_array(dst, src_array, n) darray_push_back_array(sizeof(*(dst).array), &(dst).da, src_array, n) #endif #ifdef ENABLE_DARRAY_TYPE_TEST #define da_push_back_da(dst, src) \ ({ \ da_type_test(dst, (src).array); \ darray_push_back_darray(sizeof(*(dst).array), &(dst).da, &(src).da); \ }) #else #define da_push_back_da(dst, src) darray_push_back_darray(sizeof(*(dst).array), &(dst).da, &(src).da) #endif #ifdef ENABLE_DARRAY_TYPE_TEST #define da_insert(v, idx, item) \ ({ \ da_type_test(v, item); \ darray_insert(sizeof(*(v).array), &(v).da, idx, item); \ }) #else #define da_insert(v, idx, item) darray_insert(sizeof(*(v).array), &(v).da, idx, item) #endif #define da_insert_new(v, idx) darray_insert_new(sizeof(*(v).array), &(v).da, idx) #ifdef ENABLE_DARRAY_TYPE_TEST #define da_insert_array(dst, idx, src_array, n) \ ({ \ da_type_test(dst, src_array); \ darray_insert_array(sizeof(*(dst).array), &(dst).da, idx, src_array, n); \ }) #else #define da_insert_array(dst, idx, src_array, n) darray_insert_array(sizeof(*(dst).array), &(dst).da, idx, src_array, n) #endif #ifdef ENABLE_DARRAY_TYPE_TEST #define da_insert_da(dst, idx, src) \ ({ \ da_type_test(dst, (src).array); \ darray_insert_darray(sizeof(*(dst).array), &(dst).da, idx, &(src).da); \ }) #else #define da_insert_da(dst, idx, src) darray_insert_darray(sizeof(*(dst).array), &(dst).da, idx, &(src).da) #endif #define da_erase(dst, idx) darray_erase(sizeof(*(dst).array), &(dst).da, idx) #ifdef ENABLE_DARRAY_TYPE_TEST 
#define da_erase_item(dst, item) \ ({ \ da_type_test(dst, item); \ darray_erase_item(sizeof(*(dst).array), &(dst).da, item); \ }) #else #define da_erase_item(dst, item) darray_erase_item(sizeof(*(dst).array), &(dst).da, item) #endif #define da_erase_range(dst, from, to) darray_erase_range(sizeof(*(dst).array), &(dst).da, from, to) #define da_pop_front(dst) darray_pop_front(sizeof(*(dst).array), &(dst).da); #define da_pop_back(dst) darray_pop_back(sizeof(*(dst).array), &(dst).da); #define da_join(dst, src) darray_join(sizeof(*(dst).array), &(dst).da, &(src).da) #define da_split(dst1, dst2, src, idx) darray_split(sizeof(*(src).array), &(dst1).da, &(dst2).da, &(src).da, idx) #define da_move_item(v, from, to) darray_move_item(sizeof(*(v).array), &(v).da, from, to) #define da_swap(v, idx1, idx2) darray_swap(sizeof(*(v).array), &(v).da, idx1, idx2) #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/deque.h000644 001751 001751 00000016772 15153330235 021754 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "c99defs.h" #include <stdlib.h> #include <string.h> #include <assert.h> #include "bmem.h" #ifdef __cplusplus extern "C" { #endif /* Double-ended Queue */ struct deque { void *data; size_t size; size_t start_pos; size_t end_pos; size_t capacity; }; static inline void deque_init(struct deque *dq) { memset(dq, 0, sizeof(struct deque)); } static inline void deque_free(struct deque *dq) { bfree(dq->data); memset(dq, 0, sizeof(struct deque)); } static inline void deque_reorder_data(struct deque *dq, size_t new_capacity) { size_t difference; uint8_t *data; if (!dq->size || !dq->start_pos || dq->end_pos > dq->start_pos) return; difference = new_capacity - dq->capacity; data = (uint8_t *)dq->data + dq->start_pos; memmove(data + difference, data, dq->capacity - dq->start_pos); dq->start_pos += difference; } static inline void deque_ensure_capacity(struct deque *dq) { size_t new_capacity; if (dq->size <= dq->capacity) return; new_capacity = dq->capacity * 2; if (dq->size > new_capacity) new_capacity = dq->size; dq->data = brealloc(dq->data, new_capacity); deque_reorder_data(dq, new_capacity); dq->capacity = new_capacity; } static inline void deque_reserve(struct deque *dq, size_t capacity) { if (capacity <= dq->capacity) return; dq->data = brealloc(dq->data, capacity); deque_reorder_data(dq, capacity); dq->capacity = capacity; } static inline void deque_upsize(struct deque *dq, size_t size) { size_t add_size = size - dq->size; size_t new_end_pos = dq->end_pos + add_size; if (size <= dq->size) return; dq->size = size; deque_ensure_capacity(dq); if (new_end_pos > dq->capacity) { size_t back_size = dq->capacity - dq->end_pos; size_t loop_size = add_size - back_size; if (back_size) memset((uint8_t *)dq->data + dq->end_pos, 0, back_size); memset(dq->data, 0, loop_size); new_end_pos -= dq->capacity; } else { memset((uint8_t *)dq->data + dq->end_pos, 0, add_size); } dq->end_pos = new_end_pos; } /** Overwrites data at a specific point in the buffer (relative).
*/ static inline void deque_place(struct deque *dq, size_t position, const void *data, size_t size) { size_t end_point = position + size; size_t data_end_pos; if (end_point > dq->size) deque_upsize(dq, end_point); position += dq->start_pos; if (position >= dq->capacity) position -= dq->capacity; data_end_pos = position + size; if (data_end_pos > dq->capacity) { size_t back_size = data_end_pos - dq->capacity; size_t loop_size = size - back_size; memcpy((uint8_t *)dq->data + position, data, loop_size); memcpy(dq->data, (uint8_t *)data + loop_size, back_size); } else { memcpy((uint8_t *)dq->data + position, data, size); } } static inline void deque_push_back(struct deque *dq, const void *data, size_t size) { size_t new_end_pos = dq->end_pos + size; dq->size += size; deque_ensure_capacity(dq); if (new_end_pos > dq->capacity) { size_t back_size = dq->capacity - dq->end_pos; size_t loop_size = size - back_size; if (back_size) memcpy((uint8_t *)dq->data + dq->end_pos, data, back_size); memcpy(dq->data, (uint8_t *)data + back_size, loop_size); new_end_pos -= dq->capacity; } else { memcpy((uint8_t *)dq->data + dq->end_pos, data, size); } dq->end_pos = new_end_pos; } static inline void deque_push_front(struct deque *dq, const void *data, size_t size) { dq->size += size; deque_ensure_capacity(dq); if (dq->size == size) { dq->start_pos = 0; dq->end_pos = size; memcpy((uint8_t *)dq->data, data, size); } else if (dq->start_pos < size) { size_t back_size = size - dq->start_pos; if (dq->start_pos) memcpy(dq->data, (uint8_t *)data + back_size, dq->start_pos); dq->start_pos = dq->capacity - back_size; memcpy((uint8_t *)dq->data + dq->start_pos, data, back_size); } else { dq->start_pos -= size; memcpy((uint8_t *)dq->data + dq->start_pos, data, size); } } static inline void deque_push_back_zero(struct deque *dq, size_t size) { size_t new_end_pos = dq->end_pos + size; dq->size += size; deque_ensure_capacity(dq); if (new_end_pos > dq->capacity) { size_t back_size = dq->capacity - 
dq->end_pos; size_t loop_size = size - back_size; if (back_size) memset((uint8_t *)dq->data + dq->end_pos, 0, back_size); memset(dq->data, 0, loop_size); new_end_pos -= dq->capacity; } else { memset((uint8_t *)dq->data + dq->end_pos, 0, size); } dq->end_pos = new_end_pos; } static inline void deque_push_front_zero(struct deque *dq, size_t size) { dq->size += size; deque_ensure_capacity(dq); if (dq->size == size) { dq->start_pos = 0; dq->end_pos = size; memset((uint8_t *)dq->data, 0, size); } else if (dq->start_pos < size) { size_t back_size = size - dq->start_pos; if (dq->start_pos) memset(dq->data, 0, dq->start_pos); dq->start_pos = dq->capacity - back_size; memset((uint8_t *)dq->data + dq->start_pos, 0, back_size); } else { dq->start_pos -= size; memset((uint8_t *)dq->data + dq->start_pos, 0, size); } } static inline void deque_peek_front(struct deque *dq, void *data, size_t size) { assert(size <= dq->size); if (data) { size_t start_size = dq->capacity - dq->start_pos; if (start_size < size) { memcpy(data, (uint8_t *)dq->data + dq->start_pos, start_size); memcpy((uint8_t *)data + start_size, dq->data, size - start_size); } else { memcpy(data, (uint8_t *)dq->data + dq->start_pos, size); } } } static inline void deque_peek_back(struct deque *dq, void *data, size_t size) { assert(size <= dq->size); if (data) { size_t back_size = (dq->end_pos ? 
dq->end_pos : dq->capacity); if (back_size < size) { size_t front_size = size - back_size; size_t new_end_pos = dq->capacity - front_size; memcpy((uint8_t *)data + (size - back_size), dq->data, back_size); memcpy(data, (uint8_t *)dq->data + new_end_pos, front_size); } else { memcpy(data, (uint8_t *)dq->data + dq->end_pos - size, size); } } } static inline void deque_pop_front(struct deque *dq, void *data, size_t size) { deque_peek_front(dq, data, size); dq->size -= size; if (!dq->size) { dq->start_pos = dq->end_pos = 0; return; } dq->start_pos += size; if (dq->start_pos >= dq->capacity) dq->start_pos -= dq->capacity; } static inline void deque_pop_back(struct deque *dq, void *data, size_t size) { deque_peek_back(dq, data, size); dq->size -= size; if (!dq->size) { dq->start_pos = dq->end_pos = 0; return; } if (dq->end_pos <= size) dq->end_pos = dq->capacity - (size - dq->end_pos); else dq->end_pos -= size; } static inline void *deque_data(struct deque *dq, size_t idx) { uint8_t *ptr = (uint8_t *)dq->data; size_t offset = dq->start_pos + idx; if (idx >= dq->size) return NULL; if (offset >= dq->capacity) offset -= dq->capacity; return ptr + offset; } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/platform-cocoa.m000644 001751 001751 00000034372 15153330235 023560 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Ruwen Hahn * Lain Bailey * Marvin Scholz * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "base.h" #include "platform.h" #include "dstr.h" #include #include #include #include #include #include #include #include #include #include #include #import #include "apple/cfstring-utils.h" uint64_t os_gettime_ns(void) { return clock_gettime_nsec_np(CLOCK_UPTIME_RAW); } /* gets the location [domain mask]/Library/Application Support/[name] */ static int os_get_path_internal(char *dst, size_t size, const char *name, NSSearchPathDomainMask domainMask) { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory, domainMask, YES); if ([paths count] == 0) bcrash("Could not get home directory (platform-cocoa)"); NSString *application_support = paths[0]; const char *base_path = [application_support UTF8String]; if (!name || !*name) return snprintf(dst, size, "%s", base_path); else return snprintf(dst, size, "%s/%s", base_path, name); } static char *os_get_path_ptr_internal(const char *name, NSSearchPathDomainMask domainMask) { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory, domainMask, YES); if ([paths count] == 0) bcrash("Could not get home directory (platform-cocoa)"); NSString *application_support = paths[0]; NSUInteger len = [application_support lengthOfBytesUsingEncoding:NSUTF8StringEncoding]; char *path_ptr = bmalloc(len + 1); path_ptr[len] = 0; memcpy(path_ptr, [application_support UTF8String], len); struct dstr path; dstr_init_move_array(&path, path_ptr); dstr_cat(&path, "/"); dstr_cat(&path, name); return path.array; } int os_get_config_path(char *dst, size_t size, const char *name) { return os_get_path_internal(dst, size, name, NSUserDomainMask); } char 
*os_get_config_path_ptr(const char *name) { return os_get_path_ptr_internal(name, NSUserDomainMask); } int os_get_program_data_path(char *dst, size_t size, const char *name) { return os_get_path_internal(dst, size, name, NSLocalDomainMask); } char *os_get_program_data_path_ptr(const char *name) { return os_get_path_ptr_internal(name, NSLocalDomainMask); } char *os_get_executable_path_ptr(const char *name) { char exe[PATH_MAX]; char abs_path[PATH_MAX]; uint32_t size = sizeof(exe); struct dstr path; char *slash; if (_NSGetExecutablePath(exe, &size) != 0) { return NULL; } if (!realpath(exe, abs_path)) { return NULL; } dstr_init_copy(&path, abs_path); slash = strrchr(path.array, '/'); if (slash) { size_t len = slash - path.array + 1; dstr_resize(&path, len); } if (name && *name) { dstr_cat(&path, name); } return path.array; } struct os_cpu_usage_info { int64_t last_cpu_time; int64_t last_sys_time; int core_count; }; static inline void add_time_value(time_value_t *dst, time_value_t *a, time_value_t *b) { dst->microseconds = a->microseconds + b->microseconds; dst->seconds = a->seconds + b->seconds; if (dst->microseconds >= 1000000) { dst->seconds += dst->microseconds / 1000000; dst->microseconds %= 1000000; } } static bool get_time_info(int64_t *cpu_time, int64_t *sys_time) { mach_port_t task = mach_task_self(); struct task_thread_times_info thread_data; struct task_basic_info_64 task_data; mach_msg_type_number_t count; kern_return_t kern_ret; time_value_t cur_time; *cpu_time = 0; *sys_time = 0; count = TASK_THREAD_TIMES_INFO_COUNT; kern_ret = task_info(task, TASK_THREAD_TIMES_INFO, (task_info_t) &thread_data, &count); if (kern_ret != KERN_SUCCESS) return false; count = TASK_BASIC_INFO_64_COUNT; kern_ret = task_info(task, TASK_BASIC_INFO_64, (task_info_t) &task_data, &count); if (kern_ret != KERN_SUCCESS) return false; add_time_value(&cur_time, &thread_data.user_time, &thread_data.system_time); add_time_value(&cur_time, &cur_time, &task_data.user_time); 
add_time_value(&cur_time, &cur_time, &task_data.system_time); *cpu_time = os_gettime_ns() / 1000; *sys_time = cur_time.seconds * 1000000 + cur_time.microseconds; return true; } os_cpu_usage_info_t *os_cpu_usage_info_start(void) { struct os_cpu_usage_info *info = bmalloc(sizeof(*info)); if (!get_time_info(&info->last_cpu_time, &info->last_sys_time)) { bfree(info); return NULL; } info->core_count = (int) sysconf(_SC_NPROCESSORS_ONLN); return info; } double os_cpu_usage_info_query(os_cpu_usage_info_t *info) { int64_t sys_time, cpu_time; int64_t sys_time_delta, cpu_time_delta; if (!info || !get_time_info(&cpu_time, &sys_time)) return 0.0; sys_time_delta = sys_time - info->last_sys_time; cpu_time_delta = cpu_time - info->last_cpu_time; if (cpu_time_delta == 0) return 0.0; info->last_sys_time = sys_time; info->last_cpu_time = cpu_time; return (double) sys_time_delta * 100.0 / (double) cpu_time_delta / (double) info->core_count; } void os_cpu_usage_info_destroy(os_cpu_usage_info_t *info) { if (info) bfree(info); } os_performance_token_t *os_request_high_performance(const char *reason) { @autoreleasepool { NSProcessInfo *processInfo = NSProcessInfo.processInfo; id activity = [processInfo beginActivityWithOptions:NSActivityUserInitiated reason:@(reason ? 
reason : "")]; return CFBridgingRetain(activity); } } void os_end_high_performance(os_performance_token_t *token) { @autoreleasepool { NSProcessInfo *processInfo = NSProcessInfo.processInfo; [processInfo endActivity:CFBridgingRelease(token)]; } } struct os_inhibit_info { CFStringRef reason; IOPMAssertionID sleep_id; IOPMAssertionID user_id; bool active; }; os_inhibit_t *os_inhibit_sleep_create(const char *reason) { struct os_inhibit_info *info = bzalloc(sizeof(*info)); if (reason) info->reason = CFStringCreateWithCString(kCFAllocatorDefault, reason, kCFStringEncodingUTF8); else info->reason = CFStringCreateCopy(kCFAllocatorDefault, CFSTR("")); return info; } bool os_inhibit_sleep_set_active(os_inhibit_t *info, bool active) { IOReturn success; if (!info) return false; if (info->active == active) return false; if (active) { IOPMAssertionDeclareUserActivity(info->reason, kIOPMUserActiveLocal, &info->user_id); success = IOPMAssertionCreateWithName(kIOPMAssertionTypeNoDisplaySleep, kIOPMAssertionLevelOn, info->reason, &info->sleep_id); if (success != kIOReturnSuccess) { blog(LOG_WARNING, "Failed to disable sleep"); return false; } } else { IOPMAssertionRelease(info->sleep_id); } info->active = active; return true; } void os_inhibit_sleep_destroy(os_inhibit_t *info) { if (info) { os_inhibit_sleep_set_active(info, false); CFRelease(info->reason); bfree(info); } } static int physical_cores = 0; static int logical_cores = 0; static bool core_count_initialized = false; bool os_get_emulation_status(void) { #ifdef __aarch64__ return false; #else int rosettaTranslated = 0; size_t size = sizeof(rosettaTranslated); if (sysctlbyname("sysctl.proc_translated", &rosettaTranslated, &size, NULL, 0) == -1) return false; return rosettaTranslated == 1; #endif } static void os_get_cores_internal(void) { if (core_count_initialized) return; core_count_initialized = true; size_t size; int ret; size = sizeof(physical_cores); ret = sysctlbyname("machdep.cpu.core_count", &physical_cores, &size,
NULL, 0); if (ret != 0) return; ret = sysctlbyname("machdep.cpu.thread_count", &logical_cores, &size, NULL, 0); } int os_get_physical_cores(void) { if (!core_count_initialized) os_get_cores_internal(); return physical_cores; } int os_get_logical_cores(void) { if (!core_count_initialized) os_get_cores_internal(); return logical_cores; } static inline bool os_get_sys_memory_usage_internal(vm_statistics_t vmstat) { mach_msg_type_number_t out_count = HOST_VM_INFO_COUNT; if (host_statistics(mach_host_self(), HOST_VM_INFO, (host_info_t) vmstat, &out_count) != KERN_SUCCESS) return false; return true; } uint64_t os_get_sys_free_size(void) { vm_statistics_data_t vmstat = {}; if (!os_get_sys_memory_usage_internal(&vmstat)) return 0; return vmstat.free_count * vm_page_size; } int64_t os_get_free_space(const char *path) { if (path) { NSURL *fileURL = [NSURL fileURLWithPath:@(path)]; NSArray *availableCapacityKeys = @[ NSURLVolumeAvailableCapacityKey, NSURLVolumeAvailableCapacityForImportantUsageKey, NSURLVolumeAvailableCapacityForOpportunisticUsageKey ]; NSDictionary *values = [fileURL resourceValuesForKeys:availableCapacityKeys error:nil]; NSNumber *availableImportantSpace = values[NSURLVolumeAvailableCapacityForImportantUsageKey]; NSNumber *availableSpace = values[NSURLVolumeAvailableCapacityKey]; if (availableImportantSpace.longValue > 0) { return availableImportantSpace.longValue; } else { return availableSpace.longValue; } } return 0; } uint64_t os_get_free_disk_space(const char *dir) { int64_t free_space = os_get_free_space(dir); return (uint64_t) free_space; } static uint64_t total_memory = 0; static bool total_memory_initialized = false; static void os_get_sys_total_size_internal() { total_memory_initialized = true; size_t size; int ret; size = sizeof(total_memory); ret = sysctlbyname("hw.memsize", &total_memory, &size, NULL, 0); } uint64_t os_get_sys_total_size(void) { if (!total_memory_initialized) os_get_sys_total_size_internal(); return total_memory; } static 
inline bool os_get_proc_memory_usage_internal(mach_task_basic_info_data_t *taskinfo) { const task_flavor_t flavor = MACH_TASK_BASIC_INFO; mach_msg_type_number_t out_count = MACH_TASK_BASIC_INFO_COUNT; if (task_info(mach_task_self(), flavor, (task_info_t) taskinfo, &out_count) != KERN_SUCCESS) return false; return true; } bool os_get_proc_memory_usage(os_proc_memory_usage_t *usage) { mach_task_basic_info_data_t taskinfo = {}; if (!os_get_proc_memory_usage_internal(&taskinfo)) return false; usage->resident_size = taskinfo.resident_size; usage->virtual_size = taskinfo.virtual_size; return true; } uint64_t os_get_proc_resident_size(void) { mach_task_basic_info_data_t taskinfo = {}; if (!os_get_proc_memory_usage_internal(&taskinfo)) return 0; return taskinfo.resident_size; } uint64_t os_get_proc_virtual_size(void) { mach_task_basic_info_data_t taskinfo = {}; if (!os_get_proc_memory_usage_internal(&taskinfo)) return 0; return taskinfo.virtual_size; } /* Obtains a copy of the contents of a CFString in specified encoding. * Returns char* (must be bfree'd by caller) or NULL on failure. 
*/ char *cfstr_copy_cstr(CFStringRef cfstring, CFStringEncoding cfstring_encoding) { if (!cfstring) return NULL; // Try the quick way to obtain the buffer const char *tmp_buffer = CFStringGetCStringPtr(cfstring, cfstring_encoding); if (tmp_buffer != NULL) return bstrdup(tmp_buffer); // The quick way did not work, try the more expensive one CFIndex length = CFStringGetLength(cfstring); CFIndex max_size = CFStringGetMaximumSizeForEncoding(length, cfstring_encoding); // If result would exceed LONG_MAX, kCFNotFound is returned if (max_size == kCFNotFound) return NULL; // Account for the null terminator max_size++; char *buffer = bmalloc(max_size); if (buffer == NULL) { return NULL; } // Copy CFString in requested encoding to buffer Boolean success = CFStringGetCString(cfstring, buffer, max_size, cfstring_encoding); if (!success) { bfree(buffer); buffer = NULL; } return buffer; } /* Copies the contents of a CFString in specified encoding to a given dstr. * Returns true on success or false on failure. * In case of failure, the dstr capacity but not size is changed. 
*/ bool cfstr_copy_dstr(CFStringRef cfstring, CFStringEncoding cfstring_encoding, struct dstr *str) { if (!cfstring) return false; // Try the quick way to obtain the buffer const char *tmp_buffer = CFStringGetCStringPtr(cfstring, cfstring_encoding); if (tmp_buffer != NULL) { dstr_copy(str, tmp_buffer); return true; } // The quick way did not work, try the more expensive one CFIndex length = CFStringGetLength(cfstring); CFIndex max_size = CFStringGetMaximumSizeForEncoding(length, cfstring_encoding); // If result would exceed LONG_MAX, kCFNotFound is returned if (max_size == kCFNotFound) return false; // Account for the null terminator max_size++; dstr_ensure_capacity(str, max_size); // Copy CFString in requested encoding to dstr buffer Boolean success = CFStringGetCString(cfstring, str->array, max_size, cfstring_encoding); if (success) dstr_resize(str, max_size); return (bool) success; } obs-studio-32.1.0-sources/libobs/util/threading.h000644 001751 001751 00000005367 15153330235 022614 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once /* * Allows posix thread usage on windows as well as other operating systems. * Use this header if you want to make your code more platform independent.
* * Also provides a custom platform-independent "event" handler via * pthread conditional waits. */ #include "c99defs.h" #ifndef _MSC_VER #include <errno.h> #endif #include <pthread.h> #ifdef __cplusplus extern "C" { #endif #ifdef _WIN32 #include "threading-windows.h" #else #include "threading-posix.h" #endif /* this may seem strange, but you can't use it unless it's an initializer */ static inline void pthread_mutex_init_value(pthread_mutex_t *mutex) { pthread_mutex_t init_val = PTHREAD_MUTEX_INITIALIZER; if (!mutex) return; *mutex = init_val; } static inline int pthread_mutex_init_recursive(pthread_mutex_t *mutex) { pthread_mutexattr_t attr; int ret = pthread_mutexattr_init(&attr); if (ret == 0) { ret = pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); if (ret == 0) { ret = pthread_mutex_init(mutex, &attr); } pthread_mutexattr_destroy(&attr); } return ret; } enum os_event_type { OS_EVENT_TYPE_AUTO, OS_EVENT_TYPE_MANUAL, }; struct os_event_data; struct os_sem_data; typedef struct os_event_data os_event_t; typedef struct os_sem_data os_sem_t; EXPORT int os_event_init(os_event_t **event, enum os_event_type type); EXPORT void os_event_destroy(os_event_t *event); EXPORT int os_event_wait(os_event_t *event); EXPORT int os_event_timedwait(os_event_t *event, unsigned long milliseconds); EXPORT int os_event_try(os_event_t *event); EXPORT int os_event_signal(os_event_t *event); EXPORT void os_event_reset(os_event_t *event); EXPORT int os_sem_init(os_sem_t **sem, int value); EXPORT void os_sem_destroy(os_sem_t *sem); EXPORT int os_sem_post(os_sem_t *sem); EXPORT int os_sem_wait(os_sem_t *sem); EXPORT void os_set_thread_name(const char *name); #ifdef _MSC_VER #define THREAD_LOCAL __declspec(thread) #else #define THREAD_LOCAL __thread #endif #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/source-profiler.c000644 001751 001751 00000037011 15153330235 023751 0ustar00runnerrunner000000 000000 /******************************************************************************
Copyright (C) 2023 by Dennis Sädtler This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>. ******************************************************************************/ #include "source-profiler.h" #include "darray.h" #include "obs-internal.h" #include "platform.h" #include "threading.h" #include "uthash.h" struct frame_sample { uint64_t tick; DARRAY(uint64_t) render_cpu; DARRAY(gs_timer_t *) render_timers; }; /* Buffer frame data collection to give GPU time to finish rendering. * Set to the same as the rendering buffer (NUM_TEXTURES) */ #define FRAME_BUFFER_SIZE NUM_TEXTURES struct source_samples { /* the pointer address of the source is the hashtable key */ uintptr_t key; uint8_t frame_idx; struct frame_sample *frames[FRAME_BUFFER_SIZE]; UT_hash_handle hh; }; /* Basic fixed-size circular buffer to hold most recent N uint64_t values * (older items will be overwritten).
*/ struct ucirclebuf { size_t idx; size_t capacity; size_t num; uint64_t *array; }; struct profiler_entry { /* the pointer address of the source is the hashtable key */ uintptr_t key; /* Tick times for last N frames */ struct ucirclebuf tick; /* Time of first render pass in a frame, for last N frames */ struct ucirclebuf render_cpu; struct ucirclebuf render_gpu; /* Sum of all render passes in a frame, for last N frames */ struct ucirclebuf render_cpu_sum; struct ucirclebuf render_gpu_sum; /* Timestamps of last N async frame submissions */ struct ucirclebuf async_frame_ts; /* Timestamps of last N async frames rendered */ struct ucirclebuf async_rendered_ts; UT_hash_handle hh; }; /* Hashmaps */ struct source_samples *hm_samples = NULL; struct profiler_entry *hm_entries = NULL; /* GPU timer ranges (only required for DirectX) */ static uint8_t timer_idx = 0; static gs_timer_range_t *timer_ranges[FRAME_BUFFER_SIZE] = {0}; static uint64_t profiler_samples = 0; /* Sources can be rendered more than once per frame, to avoid reallocating * memory in the majority of cases, reserve at least two. 
*/ static const size_t render_times_reservation = 2; pthread_rwlock_t hm_rwlock = PTHREAD_RWLOCK_INITIALIZER; static bool enabled = false; static bool gpu_enabled = false; /* These can be set from other threads, mark them volatile */ static volatile bool enable_next = false; static volatile bool gpu_enable_next = false; void ucirclebuf_init(struct ucirclebuf *buf, size_t capacity) { if (!capacity) return; memset(buf, 0, sizeof(struct ucirclebuf)); buf->capacity = capacity; buf->array = bmalloc(sizeof(uint64_t) * capacity); } void ucirclebuf_free(struct ucirclebuf *buf) { bfree(buf->array); memset(buf, 0, sizeof(struct ucirclebuf)); } void ucirclebuf_push(struct ucirclebuf *buf, uint64_t val) { if (buf->num == buf->capacity) { buf->idx %= buf->capacity; buf->array[buf->idx++] = val; return; } buf->array[buf->idx++] = val; buf->num++; } static struct frame_sample *frame_sample_create(void) { struct frame_sample *smp = bzalloc(sizeof(struct frame_sample)); da_reserve(smp->render_cpu, render_times_reservation); da_reserve(smp->render_timers, render_times_reservation); return smp; } static void frame_sample_destroy(struct frame_sample *sample) { if (sample->render_timers.num) { gs_enter_context(obs->video.graphics); for (size_t i = 0; i < sample->render_timers.num; i++) gs_timer_destroy(sample->render_timers.array[i]); gs_leave_context(); } da_free(sample->render_cpu); da_free(sample->render_timers); bfree(sample); } struct source_samples *source_samples_create(const uintptr_t key) { struct source_samples *smps = bzalloc(sizeof(struct source_samples)); smps->key = key; for (size_t i = 0; i < FRAME_BUFFER_SIZE; i++) smps->frames[i] = frame_sample_create(); return smps; } static void source_samples_destroy(struct source_samples *sample) { for (size_t i = 0; i < FRAME_BUFFER_SIZE; i++) frame_sample_destroy(sample->frames[i]); bfree(sample); } static struct profiler_entry *entry_create(const uintptr_t key) { struct profiler_entry *ent = bzalloc(sizeof(struct 
profiler_entry)); ent->key = key; ucirclebuf_init(&ent->tick, profiler_samples); ucirclebuf_init(&ent->render_cpu, profiler_samples); ucirclebuf_init(&ent->render_gpu, profiler_samples); ucirclebuf_init(&ent->render_cpu_sum, profiler_samples); ucirclebuf_init(&ent->render_gpu_sum, profiler_samples); ucirclebuf_init(&ent->async_frame_ts, profiler_samples); ucirclebuf_init(&ent->async_rendered_ts, profiler_samples); return ent; } static void entry_destroy(struct profiler_entry *entry) { ucirclebuf_free(&entry->tick); ucirclebuf_free(&entry->render_cpu); ucirclebuf_free(&entry->render_gpu); ucirclebuf_free(&entry->render_cpu_sum); ucirclebuf_free(&entry->render_gpu_sum); ucirclebuf_free(&entry->async_frame_ts); ucirclebuf_free(&entry->async_rendered_ts); bfree(entry); } static void reset_gpu_timers(void) { gs_enter_context(obs->video.graphics); for (int i = 0; i < FRAME_BUFFER_SIZE; i++) { if (timer_ranges[i]) { gs_timer_range_destroy(timer_ranges[i]); timer_ranges[i] = NULL; } } gs_leave_context(); } static void profiler_shutdown(void) { struct source_samples *smp, *tmp; HASH_ITER (hh, hm_samples, smp, tmp) { HASH_DEL(hm_samples, smp); source_samples_destroy(smp); } pthread_rwlock_wrlock(&hm_rwlock); struct profiler_entry *ent, *etmp; HASH_ITER (hh, hm_entries, ent, etmp) { HASH_DEL(hm_entries, ent); entry_destroy(ent); } pthread_rwlock_unlock(&hm_rwlock); reset_gpu_timers(); } void source_profiler_enable(bool enable) { enable_next = enable; } void source_profiler_gpu_enable(bool enable) { gpu_enable_next = enable && enable_next; } void source_profiler_reset_video(struct obs_video_info *ovi) { double fps = ceil((double)ovi->fps_num / (double)ovi->fps_den); profiler_samples = (uint64_t)(fps * 5); /* This is fine because the video thread won't be running at this point */ profiler_shutdown(); } void source_profiler_render_begin(void) { if (!gpu_enabled) return; gs_enter_context(obs->video.graphics); if (!timer_ranges[timer_idx]) timer_ranges[timer_idx] = 
gs_timer_range_create(); gs_timer_range_begin(timer_ranges[timer_idx]); gs_leave_context(); } void source_profiler_render_end(void) { if (!gpu_enabled || !timer_ranges[timer_idx]) return; gs_enter_context(obs->video.graphics); gs_timer_range_end(timer_ranges[timer_idx]); gs_leave_context(); } void source_profiler_frame_begin(void) { if (!enabled && enable_next) enabled = true; if (!gpu_enabled && enabled && gpu_enable_next) { gpu_enabled = true; } else if (gpu_enabled) { /* Advance timer idx if gpu enabled */ timer_idx = (timer_idx + 1) % FRAME_BUFFER_SIZE; } } static inline bool is_async_video_source(const struct obs_source *source) { return (source->info.output_flags & OBS_SOURCE_ASYNC_VIDEO) == OBS_SOURCE_ASYNC_VIDEO; } static const char *source_profiler_frame_collect_name = "source_profiler_frame_collect"; void source_profiler_frame_collect(void) { if (!enabled) return; profile_start(source_profiler_frame_collect_name); bool gpu_disjoint = false; bool gpu_ready = false; uint64_t freq = 0; if (gpu_enabled) { uint8_t timer_range_idx = (timer_idx + 1) % FRAME_BUFFER_SIZE; if (timer_ranges[timer_range_idx]) { gpu_ready = true; gs_enter_context(obs->video.graphics); gs_timer_range_get_data(timer_ranges[timer_range_idx], &gpu_disjoint, &freq); } if (gpu_disjoint) { blog(LOG_WARNING, "GPU Timers were disjoint, discarding samples."); } } pthread_rwlock_wrlock(&hm_rwlock); struct source_samples *smps = hm_samples; while (smps) { /* processing is delayed by FRAME_BUFFER_SIZE - 1 frames */ uint8_t frame_idx = (smps->frame_idx + 1) % FRAME_BUFFER_SIZE; struct frame_sample *smp = smps->frames[frame_idx]; if (!smp->tick) { /* No data yet */ smps = smps->hh.next; continue; } struct profiler_entry *ent; HASH_FIND_PTR(hm_entries, &smps->key, ent); if (!ent) { ent = entry_create(smps->key); HASH_ADD_PTR(hm_entries, key, ent); } ucirclebuf_push(&ent->tick, smp->tick); if (smp->render_cpu.num) { uint64_t sum = 0; for (size_t idx = 0; idx < smp->render_cpu.num; idx++) { sum += 
smp->render_cpu.array[idx]; } ucirclebuf_push(&ent->render_cpu, smp->render_cpu.array[0]); ucirclebuf_push(&ent->render_cpu_sum, sum); da_clear(smp->render_cpu); } else { ucirclebuf_push(&ent->render_cpu, 0); ucirclebuf_push(&ent->render_cpu_sum, 0); } /* Note that we still check this even if GPU profiling has been * disabled to destroy leftover timers. */ if (smp->render_timers.num) { uint64_t sum = 0, first = 0, ticks = 0; for (size_t i = 0; i < smp->render_timers.num; i++) { gs_timer_t *timer = smp->render_timers.array[i]; if (gpu_ready && !gpu_disjoint && gs_timer_get_data(timer, &ticks)) { /* Convert ticks to ns */ sum += util_mul_div64(ticks, 1000000000ULL, freq); if (!first) first = sum; } gs_timer_destroy(timer); } if (first) { ucirclebuf_push(&ent->render_gpu, first); ucirclebuf_push(&ent->render_gpu_sum, sum); } da_clear(smp->render_timers); } else { ucirclebuf_push(&ent->render_gpu, 0); ucirclebuf_push(&ent->render_gpu_sum, 0); } const obs_source_t *src = *(const obs_source_t **)smps->hh.key; if (is_async_video_source(src)) { uint64_t ts = obs_source_get_last_async_ts(src); ucirclebuf_push(&ent->async_rendered_ts, ts); } smps = smps->hh.next; } pthread_rwlock_unlock(&hm_rwlock); if (gpu_enabled && gpu_ready) gs_leave_context(); /* Apply updated states for next frame */ if (!enable_next) { enabled = gpu_enabled = false; profiler_shutdown(); } else if (!gpu_enable_next) { gpu_enabled = false; reset_gpu_timers(); } profile_end(source_profiler_frame_collect_name); } void source_profiler_async_frame_received(obs_source_t *source) { if (!enabled) return; uint64_t ts = os_gettime_ns(); pthread_rwlock_wrlock(&hm_rwlock); struct profiler_entry *ent; HASH_FIND_PTR(hm_entries, &source, ent); if (ent) ucirclebuf_push(&ent->async_frame_ts, ts); pthread_rwlock_unlock(&hm_rwlock); } uint64_t source_profiler_source_tick_start(void) { if (!enabled) return 0; return os_gettime_ns(); } void source_profiler_source_tick_end(obs_source_t *source, uint64_t start) { if 
(!enabled) return; const uint64_t delta = os_gettime_ns() - start; struct source_samples *smp = NULL; HASH_FIND_PTR(hm_samples, &source, smp); if (!smp) { smp = source_samples_create((uintptr_t)source); HASH_ADD_PTR(hm_samples, key, smp); } else { /* Advance index here since tick happens first and only once * at the start of each frame. */ smp->frame_idx = (smp->frame_idx + 1) % FRAME_BUFFER_SIZE; } smp->frames[smp->frame_idx]->tick = delta; } uint64_t source_profiler_source_render_begin(gs_timer_t **timer) { if (!enabled) return 0; if (gpu_enabled) { *timer = gs_timer_create(); gs_timer_begin(*timer); } else { *timer = NULL; } return os_gettime_ns(); } void source_profiler_source_render_end(obs_source_t *source, uint64_t start, gs_timer_t *timer) { if (!enabled) return; if (timer) gs_timer_end(timer); const uint64_t delta = os_gettime_ns() - start; struct source_samples *smp; HASH_FIND_PTR(hm_samples, &source, smp); if (smp) { da_push_back(smp->frames[smp->frame_idx]->render_cpu, &delta); if (timer) { da_push_back(smp->frames[smp->frame_idx]->render_timers, &timer); } } else if (timer) { gs_timer_destroy(timer); } } static void task_delete_source(void *key) { struct source_samples *smp; HASH_FIND_PTR(hm_samples, &key, smp); if (smp) { HASH_DEL(hm_samples, smp); source_samples_destroy(smp); } /* Take the write lock since the entries table may be modified */ pthread_rwlock_wrlock(&hm_rwlock); struct profiler_entry *ent = NULL; HASH_FIND_PTR(hm_entries, &key, ent); if (ent) { HASH_DEL(hm_entries, ent); entry_destroy(ent); } pthread_rwlock_unlock(&hm_rwlock); } void source_profiler_remove_source(obs_source_t *source) { if (!enabled) return; /* Schedule deletion task on graphics thread */ obs_queue_task(OBS_TASK_GRAPHICS, task_delete_source, source, false); } static inline void calculate_tick(struct profiler_entry *ent, struct profiler_result *result) { size_t idx = 0; uint64_t sum = 0; for (; idx < ent->tick.num; idx++) { const uint64_t delta = ent->tick.array[idx]; if (delta > result->tick_max) result->tick_max = delta; sum +=
delta; } if (idx) result->tick_avg = sum / idx; } static inline void calculate_render(struct profiler_entry *ent, struct profiler_result *result) { size_t idx; uint64_t sum = 0, sum_sum = 0; for (idx = 0; idx < ent->render_cpu.num; idx++) { const uint64_t delta = ent->render_cpu.array[idx]; if (delta > result->render_max) result->render_max = delta; sum += delta; sum_sum += ent->render_cpu_sum.array[idx]; } if (idx) { result->render_avg = sum / idx; result->render_sum = sum_sum / idx; } if (!gpu_enabled) return; sum = sum_sum = 0; for (idx = 0; idx < ent->render_gpu.num; idx++) { const uint64_t delta = ent->render_gpu.array[idx]; if (delta > result->render_gpu_max) result->render_gpu_max = delta; sum += delta; sum_sum += ent->render_gpu_sum.array[idx]; } if (idx) { result->render_gpu_avg = sum / idx; result->render_gpu_sum = sum_sum / idx; } } static inline void calculate_fps(const struct ucirclebuf *frames, double *avg, uint64_t *best, uint64_t *worst) { uint64_t deltas = 0, delta_sum = 0, best_delta = 0, worst_delta = 0; for (size_t idx = 0; idx < frames->num; idx++) { const uint64_t ts = frames->array[idx]; if (!ts) break; size_t prev_idx = idx ? 
idx - 1 : frames->num - 1; const uint64_t prev_ts = frames->array[prev_idx]; if (!prev_ts || prev_ts >= ts) continue; uint64_t delta = (ts - prev_ts); if (delta < best_delta || !best_delta) best_delta = delta; if (delta > worst_delta) worst_delta = delta; delta_sum += delta; deltas++; } if (deltas && delta_sum) { *avg = 1.0E9 / ((double)delta_sum / (double)deltas); *best = best_delta; *worst = worst_delta; } } bool source_profiler_fill_result(obs_source_t *source, struct profiler_result *result) { if (!enabled || !result) return false; memset(result, 0, sizeof(struct profiler_result)); pthread_rwlock_rdlock(&hm_rwlock); struct profiler_entry *ent = NULL; HASH_FIND_PTR(hm_entries, &source, ent); if (ent) { calculate_tick(ent, result); calculate_render(ent, result); if (is_async_video_source(source)) { calculate_fps(&ent->async_frame_ts, &result->async_input, &result->async_input_best, &result->async_input_worst); calculate_fps(&ent->async_rendered_ts, &result->async_rendered, &result->async_rendered_best, &result->async_rendered_worst); } } pthread_rwlock_unlock(&hm_rwlock); return !!ent; } profiler_result_t *source_profiler_get_result(obs_source_t *source) { profiler_result_t *ret = bmalloc(sizeof(profiler_result_t)); if (!source_profiler_fill_result(source, ret)) { bfree(ret); return NULL; } return ret; } obs-studio-32.1.0-sources/libobs/util/config-file.h000644 001751 001751 00000011400 15153330235 023012 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "c99defs.h" /* * Generic ini-style config file functions * * NOTE: It is highly recommended to use the default value functions (bottom of * the file) before reading any variables from config files. */ #ifdef __cplusplus extern "C" { #endif struct config_data; typedef struct config_data config_t; #define CONFIG_SUCCESS 0 #define CONFIG_FILENOTFOUND -1 #define CONFIG_ERROR -2 enum config_open_type { CONFIG_OPEN_EXISTING, CONFIG_OPEN_ALWAYS, }; EXPORT config_t *config_create(const char *file); EXPORT int config_open(config_t **config, const char *file, enum config_open_type open_type); EXPORT int config_open_string(config_t **config, const char *str); EXPORT int config_save(config_t *config); EXPORT int config_save_safe(config_t *config, const char *temp_ext, const char *backup_ext); EXPORT void config_close(config_t *config); EXPORT size_t config_num_sections(config_t *config); EXPORT const char *config_get_section(config_t *config, size_t idx); EXPORT void config_set_string(config_t *config, const char *section, const char *name, const char *value); EXPORT void config_set_int(config_t *config, const char *section, const char *name, int64_t value); EXPORT void config_set_uint(config_t *config, const char *section, const char *name, uint64_t value); EXPORT void config_set_bool(config_t *config, const char *section, const char *name, bool value); EXPORT void config_set_double(config_t *config, const char *section, const char *name, double value); EXPORT const char *config_get_string(config_t *config, const char *section, const char *name); EXPORT int64_t config_get_int(config_t *config, const char *section, const char 
*name); EXPORT uint64_t config_get_uint(config_t *config, const char *section, const char *name); EXPORT bool config_get_bool(config_t *config, const char *section, const char *name); EXPORT double config_get_double(config_t *config, const char *section, const char *name); EXPORT bool config_remove_value(config_t *config, const char *section, const char *name); /* * DEFAULT VALUES * * The following functions are used to set what values will return if they do * not exist. Call these functions *once* for each known value before reading * any of them anywhere else. * * These do *not* actually set any values, they only set what values will be * returned for config_get_* if the specified variable does not exist. * * You can initialize the defaults programmatically using config_set_default_* * functions (recommended for most cases), or you can initialize it via a file * with config_open_defaults. */ EXPORT int config_open_defaults(config_t *config, const char *file); EXPORT void config_set_default_string(config_t *config, const char *section, const char *name, const char *value); EXPORT void config_set_default_int(config_t *config, const char *section, const char *name, int64_t value); EXPORT void config_set_default_uint(config_t *config, const char *section, const char *name, uint64_t value); EXPORT void config_set_default_bool(config_t *config, const char *section, const char *name, bool value); EXPORT void config_set_default_double(config_t *config, const char *section, const char *name, double value); /* These functions allow you to get the current default values rather than get * the actual values. 
Probably almost never really needed */ EXPORT const char *config_get_default_string(config_t *config, const char *section, const char *name); EXPORT int64_t config_get_default_int(config_t *config, const char *section, const char *name); EXPORT uint64_t config_get_default_uint(config_t *config, const char *section, const char *name); EXPORT bool config_get_default_bool(config_t *config, const char *section, const char *name); EXPORT double config_get_default_double(config_t *config, const char *section, const char *name); EXPORT bool config_has_user_value(config_t *config, const char *section, const char *name); EXPORT bool config_has_default_value(config_t *config, const char *section, const char *name); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/crc32.c000644 001751 001751 00000010230 15153330235 021537 0ustar00runnerrunner000000 000000 /* * Copyright (c) 1986 Gary S. Brown * 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "crc32.h" /* CRC32 code derived from work by Gary S. Brown. 
*/ static uint32_t crc32_tab[] = { 0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91, 0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7, 0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, 0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59, 0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924, 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433, 0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f, 0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683, 0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1, 0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 
0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777, 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9, 0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d}; uint32_t calc_crc32(uint32_t crc, const void *buf, size_t size) { const uint8_t *p; p = buf; crc = crc ^ ~0UL; while (size--) crc = crc32_tab[(crc ^ *p++) & 0xFF] ^ (crc >> 8); return crc ^ ~0UL; } obs-studio-32.1.0-sources/libobs/util/platform-nix-dbus.c000644 001751 001751 00000011005 15153330235 024177 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include <assert.h> #include <gio/gio.h> #include "bmem.h" /* NOTE: This is basically just the VLC implementation from its d-bus power * management inhibition code. Credit is theirs for this. */ enum service_type { FREEDESKTOP_SS, /* freedesktop screensaver (KDE >= 4, GNOME >= 3.10) */ FREEDESKTOP_PM, /* freedesktop power management (KDE, gnome <= 2.26) */ MATE_SM, /* MATE (>= 1.0) session manager */ GNOME_SM, /* GNOME 2.26 - 3.4 session manager */ }; struct service_info { const char *name; const char *path; const char *interface; const char *uninhibit; }; static const struct service_info services[] = { [FREEDESKTOP_SS] = { .name = "org.freedesktop.ScreenSaver", .path = "/ScreenSaver", .interface = "org.freedesktop.ScreenSaver", .uninhibit = "UnInhibit", }, [FREEDESKTOP_PM] = { .name = "org.freedesktop.PowerManagement.Inhibit", .path = "/org/freedesktop/PowerManagement", .interface = "org.freedesktop.PowerManagement.Inhibit", .uninhibit = "UnInhibit", }, [MATE_SM] = { .name = "org.mate.SessionManager", .path = "/org/mate/SessionManager", .interface = "org.mate.SessionManager", .uninhibit = "Uninhibit", }, [GNOME_SM] = { .name = "org.gnome.SessionManager", .path = "/org/gnome/SessionManager", .interface = "org.gnome.SessionManager", .uninhibit = "Uninhibit", }, }; static const size_t num_services = (sizeof(services) / sizeof(struct service_info)); struct dbus_sleep_info { const struct service_info *service; GDBusConnection *c; uint32_t cookie; enum service_type type; }; void dbus_sleep_info_destroy(struct dbus_sleep_info *info) { if (info) { g_clear_object(&info->c); bfree(info); } } struct dbus_sleep_info *dbus_sleep_info_create(void) { struct dbus_sleep_info
*info = bzalloc(sizeof(*info)); g_autoptr(GError) error = NULL; info->c = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error); if (!info->c) { blog(LOG_ERROR, "Could not create dbus connection: %s", error->message); bfree(info); return NULL; } for (size_t i = 0; i < num_services; i++) { const struct service_info *service = &services[i]; g_autoptr(GVariant) reply = NULL; if (!service->name) continue; reply = g_dbus_connection_call_sync(info->c, "org.freedesktop.DBus", "/org/freedesktop/DBus", "org.freedesktop.DBus", "GetNameOwner", g_variant_new("(s)", service->name), NULL, G_DBUS_CALL_FLAGS_NO_AUTO_START, -1, NULL, NULL); if (reply != NULL) { blog(LOG_DEBUG, "Found dbus service: %s", service->name); info->service = service; info->type = (enum service_type)i; return info; } } dbus_sleep_info_destroy(info); return NULL; } void dbus_inhibit_sleep(struct dbus_sleep_info *info, const char *reason, bool active) { g_autoptr(GVariant) reply = NULL; g_autoptr(GError) error = NULL; const char *method; GVariant *params; if (active == !!info->cookie) return; method = active ? 
"Inhibit" : info->service->uninhibit; if (active) { const char *program = "libobs"; uint32_t flags = 0xC; uint32_t xid = 0; assert(info->cookie == 0); switch (info->type) { case MATE_SM: case GNOME_SM: params = g_variant_new("(s@usu)", program, g_variant_new_uint32(xid), reason, flags); break; default: params = g_variant_new("(ss)", program, reason); } } else { assert(info->cookie != 0); params = g_variant_new("(u)", info->cookie); } reply = g_dbus_connection_call_sync(info->c, info->service->name, info->service->path, info->service->interface, method, params, NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error); if (error != NULL) { blog(LOG_ERROR, "Failed to call %s: %s", method, error->message); return; } if (active) g_variant_get(reply, "(u)", &info->cookie); else info->cookie = 0; } obs-studio-32.1.0-sources/libobs/util/platform.h000644 001751 001751 00000016142 15153330235 022464 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include #include #include #include "c99defs.h" /* * Platform-independent functions for Accessing files, encoding, DLLs, * sleep, timer, and timing. 
*/ #ifdef __cplusplus extern "C" { #endif EXPORT FILE *os_wfopen(const wchar_t *path, const char *mode); EXPORT FILE *os_fopen(const char *path, const char *mode); EXPORT int64_t os_fgetsize(FILE *file); #ifdef _WIN32 EXPORT int os_stat(const char *file, struct stat *st); #else #define os_stat stat #endif EXPORT int os_fseeki64(FILE *file, int64_t offset, int origin); EXPORT int64_t os_ftelli64(FILE *file); EXPORT size_t os_fread_mbs(FILE *file, char **pstr); EXPORT size_t os_fread_utf8(FILE *file, char **pstr); /* functions purely for convenience */ EXPORT char *os_quick_read_utf8_file(const char *path); EXPORT bool os_quick_write_utf8_file(const char *path, const char *str, size_t len, bool marker); EXPORT bool os_quick_write_utf8_file_safe(const char *path, const char *str, size_t len, bool marker, const char *temp_ext, const char *backup_ext); EXPORT char *os_quick_read_mbs_file(const char *path); EXPORT bool os_quick_write_mbs_file(const char *path, const char *str, size_t len); EXPORT int64_t os_get_file_size(const char *path); EXPORT int64_t os_get_free_space(const char *path); EXPORT size_t os_mbs_to_wcs(const char *str, size_t str_len, wchar_t *dst, size_t dst_size); EXPORT size_t os_utf8_to_wcs(const char *str, size_t len, wchar_t *dst, size_t dst_size); EXPORT size_t os_wcs_to_mbs(const wchar_t *str, size_t len, char *dst, size_t dst_size); EXPORT size_t os_wcs_to_utf8(const wchar_t *str, size_t len, char *dst, size_t dst_size); EXPORT size_t os_mbs_to_wcs_ptr(const char *str, size_t len, wchar_t **pstr); EXPORT size_t os_utf8_to_wcs_ptr(const char *str, size_t len, wchar_t **pstr); EXPORT size_t os_wcs_to_mbs_ptr(const wchar_t *str, size_t len, char **pstr); EXPORT size_t os_wcs_to_utf8_ptr(const wchar_t *str, size_t len, char **pstr); EXPORT size_t os_utf8_to_mbs_ptr(const char *str, size_t len, char **pstr); EXPORT size_t os_mbs_to_utf8_ptr(const char *str, size_t len, char **pstr); EXPORT double os_strtod(const char *str); EXPORT int os_dtostr(double 
value, char *dst, size_t size); EXPORT void *os_dlopen(const char *path); EXPORT void *os_dlsym(void *module, const char *func); EXPORT void os_dlclose(void *module); EXPORT bool os_is_obs_plugin(const char *path); struct os_cpu_usage_info; typedef struct os_cpu_usage_info os_cpu_usage_info_t; EXPORT os_cpu_usage_info_t *os_cpu_usage_info_start(void); EXPORT double os_cpu_usage_info_query(os_cpu_usage_info_t *info); EXPORT void os_cpu_usage_info_destroy(os_cpu_usage_info_t *info); typedef const void os_performance_token_t; EXPORT os_performance_token_t *os_request_high_performance(const char *reason); EXPORT void os_end_high_performance(os_performance_token_t *); /** * Sleeps to a specific time (in nanoseconds). Doesn't have to be super * accurate in terms of actual slept time because the target time is ensured. * Returns false if already at or past target time. */ EXPORT bool os_sleepto_ns(uint64_t time_target); EXPORT bool os_sleepto_ns_fast(uint64_t time_target); EXPORT void os_sleep_ms(uint32_t duration); EXPORT uint64_t os_gettime_ns(void); EXPORT int os_get_config_path(char *dst, size_t size, const char *name); EXPORT char *os_get_config_path_ptr(const char *name); EXPORT int os_get_program_data_path(char *dst, size_t size, const char *name); EXPORT char *os_get_program_data_path_ptr(const char *name); EXPORT char *os_get_executable_path_ptr(const char *name); EXPORT bool os_file_exists(const char *path); EXPORT size_t os_get_abs_path(const char *path, char *abspath, size_t size); EXPORT char *os_get_abs_path_ptr(const char *path); EXPORT const char *os_get_path_extension(const char *path); EXPORT bool os_get_emulation_status(void); struct os_dir; typedef struct os_dir os_dir_t; struct os_dirent { char d_name[256]; bool directory; }; EXPORT os_dir_t *os_opendir(const char *path); EXPORT struct os_dirent *os_readdir(os_dir_t *dir); EXPORT void os_closedir(os_dir_t *dir); struct os_globent { char *path; bool directory; }; struct os_glob_info { size_t gl_pathc; 
struct os_globent *gl_pathv; }; typedef struct os_glob_info os_glob_t; /* currently no flags available */ EXPORT int os_glob(const char *pattern, int flags, os_glob_t **pglob); EXPORT void os_globfree(os_glob_t *pglob); EXPORT int os_unlink(const char *path); EXPORT int os_rmdir(const char *path); EXPORT char *os_getcwd(char *path, size_t size); EXPORT int os_chdir(const char *path); EXPORT uint64_t os_get_free_disk_space(const char *dir); #define MKDIR_EXISTS 1 #define MKDIR_SUCCESS 0 #define MKDIR_ERROR -1 EXPORT int os_mkdir(const char *path); EXPORT int os_mkdirs(const char *path); EXPORT int os_rename(const char *old_path, const char *new_path); EXPORT int os_copyfile(const char *file_in, const char *file_out); EXPORT int os_safe_replace(const char *target_path, const char *from_path, const char *backup_path); EXPORT char *os_generate_formatted_filename(const char *extension, bool space, const char *format); struct os_inhibit_info; typedef struct os_inhibit_info os_inhibit_t; EXPORT os_inhibit_t *os_inhibit_sleep_create(const char *reason); EXPORT bool os_inhibit_sleep_set_active(os_inhibit_t *info, bool active); EXPORT void os_inhibit_sleep_destroy(os_inhibit_t *info); EXPORT void os_breakpoint(void); EXPORT void os_oom(void); EXPORT int os_get_physical_cores(void); EXPORT int os_get_logical_cores(void); EXPORT uint64_t os_get_sys_free_size(void); EXPORT uint64_t os_get_sys_total_size(void); struct os_proc_memory_usage { uint64_t resident_size; uint64_t virtual_size; }; typedef struct os_proc_memory_usage os_proc_memory_usage_t; EXPORT bool os_get_proc_memory_usage(os_proc_memory_usage_t *usage); EXPORT uint64_t os_get_proc_resident_size(void); EXPORT uint64_t os_get_proc_virtual_size(void); #define UUID_STR_LENGTH 36 EXPORT char *os_generate_uuid(void); EXPORT struct timespec *os_nstime_to_timespec(uint64_t timestamp, struct timespec *storage); /* clang-format off */ #ifdef __APPLE__ # define ARCH_BITS 64 #else # ifdef _WIN32 # ifdef _WIN64 # define 
ARCH_BITS 64 # else # define ARCH_BITS 32 # endif # else # ifdef __LP64__ # define ARCH_BITS 64 # else # define ARCH_BITS 32 # endif # endif #endif /* clang-format on */ #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/lexer.c000644 001751 001751 00000014133 15153330235 021750 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include #include "lexer.h" static const char *astrblank = ""; int strref_cmp(const struct strref *str1, const char *str2) { size_t i = 0; if (strref_is_empty(str1)) return (!str2 || !*str2) ? 0 : -1; if (!str2) str2 = astrblank; do { char ch1, ch2; ch1 = (i < str1->len) ? str1->array[i] : 0; ch2 = *str2; if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; } while (i++ < str1->len && *str2++); return 0; } int strref_cmpi(const struct strref *str1, const char *str2) { size_t i = 0; if (strref_is_empty(str1)) return (!str2 || !*str2) ? 0 : -1; if (!str2) str2 = astrblank; do { char ch1, ch2; ch1 = (i < str1->len) ? 
(char)toupper(str1->array[i]) : 0; ch2 = (char)toupper(*str2); if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; } while (i++ < str1->len && *str2++); return 0; } int strref_cmp_strref(const struct strref *str1, const struct strref *str2) { size_t i = 0; if (strref_is_empty(str1)) return strref_is_empty(str2) ? 0 : -1; if (strref_is_empty(str2)) return -1; do { char ch1, ch2; ch1 = (i < str1->len) ? str1->array[i] : 0; ch2 = (i < str2->len) ? str2->array[i] : 0; if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; i++; } while (i <= str1->len && i <= str2->len); return 0; } int strref_cmpi_strref(const struct strref *str1, const struct strref *str2) { size_t i = 0; if (strref_is_empty(str1)) return strref_is_empty(str2) ? 0 : -1; if (strref_is_empty(str2)) return -1; do { char ch1, ch2; ch1 = (i < str1->len) ? (char)toupper(str1->array[i]) : 0; ch2 = (i < str2->len) ? (char)toupper(str2->array[i]) : 0; if (ch1 < ch2) return -1; else if (ch1 > ch2) return 1; i++; } while (i <= str1->len && i <= str2->len); return 0; } /* ------------------------------------------------------------------------- */ bool valid_int_str(const char *str, size_t n) { bool found_num = false; if (!str) return false; if (!*str) return false; if (!n) n = strlen(str); if (*str == '-' || *str == '+') ++str; do { if (*str > '9' || *str < '0') return false; found_num = true; } while (*++str && --n); return found_num; } bool valid_float_str(const char *str, size_t n) { bool found_num = false; bool found_exp = false; bool found_dec = false; if (!str) return false; if (!*str) return false; if (!n) n = strlen(str); if (*str == '-' || *str == '+') ++str; do { if (*str == '.') { if (found_dec || found_exp || !found_num) return false; found_dec = true; } else if (*str == 'e') { if (found_exp || !found_num) return false; found_exp = true; found_num = false; } else if (*str == '-' || *str == '+') { /* a sign is only valid immediately after 'e', before any exponent digits */ if (!found_exp || found_num) return false; } else if (*str > '9' || *str < '0') { return false; }
else { found_num = true; } } while (*++str && --n); return found_num; } /* ------------------------------------------------------------------------- */ void error_data_add(struct error_data *data, const char *file, uint32_t row, uint32_t column, const char *msg, int level) { struct error_item item; if (!data) return; item.file = file; item.row = row; item.column = column; item.level = level; item.error = bstrdup(msg); da_push_back(data->errors, &item); } char *error_data_buildstring(struct error_data *ed) { struct dstr str; struct error_item *items = ed->errors.array; size_t i; dstr_init(&str); for (i = 0; i < ed->errors.num; i++) { struct error_item *item = items + i; dstr_catf(&str, "%s (%u, %u): %s\n", item->file, item->row, item->column, item->error); } return str.array; } /* ------------------------------------------------------------------------- */ static inline enum base_token_type get_char_token_type(const char ch) { if (is_whitespace(ch)) return BASETOKEN_WHITESPACE; else if (ch >= '0' && ch <= '9') return BASETOKEN_DIGIT; else if ((ch >= 'a' && ch <= 'z') || (ch >= 'A' && ch <= 'Z')) return BASETOKEN_ALPHA; return BASETOKEN_OTHER; } bool lexer_getbasetoken(struct lexer *lex, struct base_token *token, enum ignore_whitespace iws) { const char *offset = lex->offset; const char *token_start = NULL; enum base_token_type type = BASETOKEN_NONE; bool ignore_whitespace = (iws == IGNORE_WHITESPACE); if (!offset) return false; while (*offset != 0) { char ch = *(offset++); enum base_token_type new_type = get_char_token_type(ch); if (type == BASETOKEN_NONE) { if (new_type == BASETOKEN_WHITESPACE && ignore_whitespace) continue; token_start = offset - 1; type = new_type; if (type != BASETOKEN_DIGIT && type != BASETOKEN_ALPHA) { if (is_newline(ch) && is_newline_pair(ch, *offset)) { offset++; } break; } } else if (type != new_type) { offset--; break; } } lex->offset = offset; if (token_start && offset > token_start) { strref_set(&token->text, token_start, offset - 
token_start); token->type = type; return true; } return false; } void lexer_getstroffset(const struct lexer *lex, const char *str, uint32_t *row, uint32_t *col) { uint32_t cur_col = 1, cur_row = 1; const char *text = lex->text; if (!str) return; while (text < str) { if (is_newline(*text)) { text += newline_size(text) - 1; cur_col = 1; cur_row++; } else { cur_col++; } text++; } *row = cur_row; *col = cur_col; } obs-studio-32.1.0-sources/libobs/util/task.c000644 001751 001751 00000006352 15153330235 021577 0ustar00runnerrunner000000 000000 #include "task.h" #include "bmem.h" #include "threading.h" #include "deque.h" struct os_task_queue { pthread_t thread; os_sem_t *sem; long id; bool waiting; bool tasks_processed; os_event_t *wait_event; pthread_mutex_t mutex; struct deque tasks; }; struct os_task_info { os_task_t task; void *param; }; static THREAD_LOCAL bool exit_thread = false; static THREAD_LOCAL long thread_id = 0; static volatile long thread_id_counter = 1; static void *tiny_tubular_task_thread(void *param); os_task_queue_t *os_task_queue_create(void) { struct os_task_queue *tq = bzalloc(sizeof(*tq)); tq->id = os_atomic_inc_long(&thread_id_counter); if (pthread_mutex_init(&tq->mutex, NULL) != 0) goto fail1; if (os_sem_init(&tq->sem, 0) != 0) goto fail2; if (os_event_init(&tq->wait_event, OS_EVENT_TYPE_AUTO) != 0) goto fail3; if (pthread_create(&tq->thread, NULL, tiny_tubular_task_thread, tq) != 0) goto fail4; return tq; fail4: os_event_destroy(tq->wait_event); fail3: os_sem_destroy(tq->sem); fail2: pthread_mutex_destroy(&tq->mutex); fail1: bfree(tq); return NULL; } bool os_task_queue_queue_task(os_task_queue_t *tq, os_task_t task, void *param) { struct os_task_info ti = { task, param, }; if (!tq) return false; pthread_mutex_lock(&tq->mutex); deque_push_back(&tq->tasks, &ti, sizeof(ti)); pthread_mutex_unlock(&tq->mutex); os_sem_post(tq->sem); return true; } static void wait_for_thread(void *data) { os_task_queue_t *tq = data; os_event_signal(tq->wait_event); } 
static void stop_thread(void *unused) { exit_thread = true; UNUSED_PARAMETER(unused); } void os_task_queue_destroy(os_task_queue_t *tq) { if (!tq) return; os_task_queue_queue_task(tq, stop_thread, NULL); pthread_join(tq->thread, NULL); os_event_destroy(tq->wait_event); os_sem_destroy(tq->sem); pthread_mutex_destroy(&tq->mutex); deque_free(&tq->tasks); bfree(tq); } bool os_task_queue_wait(os_task_queue_t *tq) { if (!tq) return false; struct os_task_info ti = { wait_for_thread, tq, }; pthread_mutex_lock(&tq->mutex); tq->waiting = true; tq->tasks_processed = false; deque_push_back(&tq->tasks, &ti, sizeof(ti)); pthread_mutex_unlock(&tq->mutex); os_sem_post(tq->sem); os_event_wait(tq->wait_event); pthread_mutex_lock(&tq->mutex); bool tasks_processed = tq->tasks_processed; pthread_mutex_unlock(&tq->mutex); return tasks_processed; } bool os_task_queue_inside(os_task_queue_t *tq) { return tq->id == thread_id; } static void *tiny_tubular_task_thread(void *param) { struct os_task_queue *tq = param; thread_id = tq->id; os_set_thread_name(__FUNCTION__); while (!exit_thread && os_sem_wait(tq->sem) == 0) { struct os_task_info ti; pthread_mutex_lock(&tq->mutex); deque_pop_front(&tq->tasks, &ti, sizeof(ti)); if (tq->tasks.size && ti.task == wait_for_thread) { deque_push_back(&tq->tasks, &ti, sizeof(ti)); deque_pop_front(&tq->tasks, &ti, sizeof(ti)); } if (tq->tasks.size && ti.task == stop_thread) { deque_push_back(&tq->tasks, &ti, sizeof(ti)); deque_pop_front(&tq->tasks, &ti, sizeof(ti)); } if (tq->waiting) { if (ti.task == wait_for_thread) { tq->waiting = false; } else { tq->tasks_processed = true; } } pthread_mutex_unlock(&tq->mutex); ti.task(ti.param); } return NULL; } obs-studio-32.1.0-sources/libobs/util/base.h000644 001751 001751 00000005023 15153330235 021546 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * 
copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include #include "c99defs.h" /* * Just contains logging/crash related stuff */ #ifdef __cplusplus extern "C" { #endif #define STRINGIFY(x) #x #define STRINGIFY_(x) STRINGIFY(x) #define S__LINE__ STRINGIFY_(__LINE__) #define INT_CUR_LINE __LINE__ #define FILE_LINE __FILE__ " (" S__LINE__ "): " #define OBS_COUNTOF(x) (sizeof(x) / sizeof(x[0])) enum { /** * Use if there's a problem that can potentially affect the program, * but isn't enough to require termination of the program. * * Use in creation functions and core subsystem functions. Places that * should definitely not fail. */ LOG_ERROR = 100, /** * Use if a problem occurs that doesn't affect the program and is * recoverable. * * Use in places where failure isn't entirely unexpected, and can * be handled safely. */ LOG_WARNING = 200, /** * Informative message to be displayed in the log. */ LOG_INFO = 300, /** * Debug message to be used mostly by developers. 
*/ LOG_DEBUG = 400 }; typedef void (*log_handler_t)(int lvl, const char *msg, va_list args, void *p); EXPORT void base_get_log_handler(log_handler_t *handler, void **param); EXPORT void base_set_log_handler(log_handler_t handler, void *param); EXPORT void base_set_crash_handler(void (*handler)(const char *, va_list, void *), void *param); EXPORT void blogva(int log_level, const char *format, va_list args); #if !defined(_MSC_VER) && !defined(SWIG) #define PRINTFATTR(f, a) __attribute__((__format__(__printf__, f, a))) #else #define PRINTFATTR(f, a) #endif PRINTFATTR(2, 3) EXPORT void blog(int log_level, const char *format, ...); PRINTFATTR(1, 2) #ifndef SWIG OBS_NORETURN #endif EXPORT void bcrash(const char *format, ...); #undef PRINTFATTR #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/task.h000644 001751 001751 00000001007 15153330235 021574 0ustar00runnerrunner000000 000000 #pragma once #include "c99defs.h" #ifdef __cplusplus extern "C" { #endif struct os_task_queue; typedef struct os_task_queue os_task_queue_t; typedef void (*os_task_t)(void *param); EXPORT os_task_queue_t *os_task_queue_create(void); EXPORT bool os_task_queue_queue_task(os_task_queue_t *tt, os_task_t task, void *param); EXPORT void os_task_queue_destroy(os_task_queue_t *tt); EXPORT bool os_task_queue_wait(os_task_queue_t *tt); EXPORT bool os_task_queue_inside(os_task_queue_t *tt); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/cf-parser.h000644 001751 001751 00000015747 15153330235 022534 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include "cf-lexer.h" /* * C-family parser * * Handles preprocessing/lexing/errors when parsing a file, and provides a * set of parsing functions to be able to go through all the resulting tokens * more easily. */ #ifdef __cplusplus extern "C" { #endif #define PARSE_SUCCESS 0 #define PARSE_CONTINUE -1 #define PARSE_BREAK -2 #define PARSE_UNEXPECTED_CONTINUE -3 #define PARSE_UNEXPECTED_BREAK -4 #define PARSE_EOF -5 struct cf_parser { struct cf_lexer lex; struct cf_preprocessor pp; struct error_data error_list; struct cf_token *cur_token; }; static inline void cf_parser_init(struct cf_parser *parser) { cf_lexer_init(&parser->lex); cf_preprocessor_init(&parser->pp); error_data_init(&parser->error_list); parser->cur_token = NULL; } static inline void cf_parser_free(struct cf_parser *parser) { cf_lexer_free(&parser->lex); cf_preprocessor_free(&parser->pp); error_data_free(&parser->error_list); parser->cur_token = NULL; } static inline bool cf_parser_parse(struct cf_parser *parser, const char *str, const char *file) { if (!cf_lexer_lex(&parser->lex, str, file)) return false; if (!cf_preprocess(&parser->pp, &parser->lex, &parser->error_list)) return false; parser->cur_token = cf_preprocessor_get_tokens(&parser->pp); return true; } EXPORT void cf_adderror(struct cf_parser *parser, const char *error, int level, const char *val1, const char *val2, const char *val3); static inline void cf_adderror_expecting(struct cf_parser *p, const char *expected) { cf_adderror(p, "Expected '$1'", LEX_ERROR, expected, NULL, NULL); } static inline void cf_adderror_unexpected_eof(struct cf_parser *p) { cf_adderror(p, "Unexpected EOF", LEX_ERROR, NULL, 
NULL, NULL); } static inline void cf_adderror_syntax_error(struct cf_parser *p) { cf_adderror(p, "Syntax error", LEX_ERROR, NULL, NULL, NULL); } static inline bool cf_next_token(struct cf_parser *p) { if (p->cur_token->type != CFTOKEN_SPACETAB && p->cur_token->type != CFTOKEN_NEWLINE && p->cur_token->type != CFTOKEN_NONE) p->cur_token++; while (p->cur_token->type == CFTOKEN_SPACETAB || p->cur_token->type == CFTOKEN_NEWLINE) p->cur_token++; return p->cur_token->type != CFTOKEN_NONE; } static inline bool cf_next_valid_token(struct cf_parser *p) { if (!cf_next_token(p)) { cf_adderror_unexpected_eof(p); return false; } return true; } EXPORT bool cf_pass_pair(struct cf_parser *p, char in, char out); static inline bool cf_go_to_token(struct cf_parser *p, const char *str1, const char *str2) { while (cf_next_token(p)) { if (strref_cmp(&p->cur_token->str, str1) == 0) { return true; } else if (str2 && strref_cmp(&p->cur_token->str, str2) == 0) { return true; } else if (*p->cur_token->str.array == '{') { if (!cf_pass_pair(p, '{', '}')) break; } } return false; } static inline bool cf_go_to_valid_token(struct cf_parser *p, const char *str1, const char *str2) { if (!cf_go_to_token(p, str1, str2)) { cf_adderror_unexpected_eof(p); return false; } return true; } static inline bool cf_go_to_token_type(struct cf_parser *p, enum cf_token_type type) { while (p->cur_token->type != CFTOKEN_NONE && p->cur_token->type != type) p->cur_token++; return p->cur_token->type != CFTOKEN_NONE; } static inline int cf_token_should_be(struct cf_parser *p, const char *str, const char *goto1, const char *goto2) { if (strref_cmp(&p->cur_token->str, str) == 0) return PARSE_SUCCESS; if (goto1) { if (!cf_go_to_token(p, goto1, goto2)) return PARSE_EOF; } cf_adderror_expecting(p, str); return PARSE_CONTINUE; } static inline int cf_next_token_should_be(struct cf_parser *p, const char *str, const char *goto1, const char *goto2) { if (!cf_next_token(p)) { cf_adderror_unexpected_eof(p); return PARSE_EOF; } else 
if (strref_cmp(&p->cur_token->str, str) == 0) { return PARSE_SUCCESS; } if (goto1) { if (!cf_go_to_token(p, goto1, goto2)) return PARSE_EOF; } cf_adderror_expecting(p, str); return PARSE_CONTINUE; } static inline bool cf_peek_token(struct cf_parser *p, struct cf_token *peek) { struct cf_token *cur_token = p->cur_token; bool success = cf_next_token(p); *peek = *p->cur_token; p->cur_token = cur_token; return success; } static inline bool cf_peek_valid_token(struct cf_parser *p, struct cf_token *peek) { bool success = cf_peek_token(p, peek); if (!success) cf_adderror_unexpected_eof(p); return success; } static inline bool cf_token_is(struct cf_parser *p, const char *val) { return strref_cmp(&p->cur_token->str, val) == 0; } static inline int cf_token_is_type(struct cf_parser *p, enum cf_token_type type, const char *type_expected, const char *goto_token) { if (p->cur_token->type != type) { cf_adderror_expecting(p, type_expected); if (goto_token) { if (!cf_go_to_valid_token(p, goto_token, NULL)) return PARSE_EOF; } return PARSE_CONTINUE; } return PARSE_SUCCESS; } static inline void cf_copy_token(struct cf_parser *p, char **dst) { *dst = bstrdup_n(p->cur_token->str.array, p->cur_token->str.len); } static inline int cf_get_name(struct cf_parser *p, char **dst, const char *name, const char *goto_token) { int errcode; errcode = cf_token_is_type(p, CFTOKEN_NAME, name, goto_token); if (errcode != PARSE_SUCCESS) return errcode; *dst = bstrdup_n(p->cur_token->str.array, p->cur_token->str.len); return PARSE_SUCCESS; } static inline int cf_next_name(struct cf_parser *p, char **dst, const char *name, const char *goto_token) { if (!cf_next_valid_token(p)) return PARSE_EOF; return cf_get_name(p, dst, name, goto_token); } static inline int cf_next_token_copy(struct cf_parser *p, char **dst) { if (!cf_next_valid_token(p)) return PARSE_EOF; cf_copy_token(p, dst); return PARSE_SUCCESS; } static inline int cf_get_name_ref(struct cf_parser *p, struct strref *dst, const char *name, const 
char *goto_token) { int errcode; errcode = cf_token_is_type(p, CFTOKEN_NAME, name, goto_token); if (errcode != PARSE_SUCCESS) return errcode; strref_copy(dst, &p->cur_token->str); return PARSE_SUCCESS; } static inline int cf_next_name_ref(struct cf_parser *p, struct strref *dst, const char *name, const char *goto_token) { if (!cf_next_valid_token(p)) return PARSE_EOF; return cf_get_name_ref(p, dst, name, goto_token); } #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/platform-nix.c000644 001751 001751 00000055772 15153330235 023267 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "obsconfig.h" #if !defined(__APPLE__) #include #include #include #if defined(__FreeBSD__) || defined(__OpenBSD__) #include #include #include #include #include #include #if defined(__FreeBSD__) #include #endif #else #include #endif #if !defined(__OpenBSD__) #include #endif #include #endif #include "darray.h" #include "dstr.h" #include "platform.h" #include "threading.h" void *os_dlopen(const char *path) { struct dstr dylib_name; if (!path) return NULL; dstr_init_copy(&dylib_name, path); #ifdef __APPLE__ if (!dstr_find(&dylib_name, ".framework") && !dstr_find(&dylib_name, ".plugin") && !dstr_find(&dylib_name, ".dylib") && !dstr_find(&dylib_name, ".so")) #else if (!dstr_find(&dylib_name, ".so")) #endif dstr_cat(&dylib_name, ".so"); #ifdef __APPLE__ int dlopen_flags = RTLD_NOW | RTLD_FIRST; if (dstr_find(&dylib_name, "Python")) { dlopen_flags = dlopen_flags | RTLD_GLOBAL; } else { dlopen_flags = dlopen_flags | RTLD_LOCAL; } void *res = dlopen(dylib_name.array, dlopen_flags); #else void *res = dlopen(dylib_name.array, RTLD_NOW); #endif if (!res) blog(LOG_ERROR, "os_dlopen(%s->%s): %s\n", path, dylib_name.array, dlerror()); dstr_free(&dylib_name); return res; } void *os_dlsym(void *module, const char *func) { return dlsym(module, func); } void os_dlclose(void *module) { if (module) dlclose(module); } void get_plugin_info(const char *path, bool *is_obs_plugin) { *is_obs_plugin = true; UNUSED_PARAMETER(path); } bool os_is_obs_plugin(const char *path) { UNUSED_PARAMETER(path); /* not necessary on this platform */ return true; } #if !defined(__APPLE__) struct os_cpu_usage_info { clock_t last_cpu_time, last_sys_time, last_user_time; int core_count; }; os_cpu_usage_info_t *os_cpu_usage_info_start(void) { struct os_cpu_usage_info *info = bmalloc(sizeof(*info)); struct tms time_sample; info->last_cpu_time = times(&time_sample); 
info->last_sys_time = time_sample.tms_stime; info->last_user_time = time_sample.tms_utime; info->core_count = sysconf(_SC_NPROCESSORS_ONLN); return info; } double os_cpu_usage_info_query(os_cpu_usage_info_t *info) { struct tms time_sample; clock_t cur_cpu_time; double percent; if (!info) return 0.0; cur_cpu_time = times(&time_sample); if (cur_cpu_time <= info->last_cpu_time || time_sample.tms_stime < info->last_sys_time || time_sample.tms_utime < info->last_user_time) return 0.0; percent = (double)(time_sample.tms_stime - info->last_sys_time + (time_sample.tms_utime - info->last_user_time)); percent /= (double)(cur_cpu_time - info->last_cpu_time); percent /= (double)info->core_count; info->last_cpu_time = cur_cpu_time; info->last_sys_time = time_sample.tms_stime; info->last_user_time = time_sample.tms_utime; return percent * 100.0; } void os_cpu_usage_info_destroy(os_cpu_usage_info_t *info) { if (info) bfree(info); } #endif bool os_sleepto_ns(uint64_t time_target) { uint64_t current = os_gettime_ns(); if (time_target < current) return false; time_target -= current; struct timespec req, remain; memset(&req, 0, sizeof(req)); memset(&remain, 0, sizeof(remain)); req.tv_sec = time_target / 1000000000; req.tv_nsec = time_target % 1000000000; while (nanosleep(&req, &remain)) { req = remain; memset(&remain, 0, sizeof(remain)); } return true; } bool os_sleepto_ns_fast(uint64_t time_target) { uint64_t current = os_gettime_ns(); if (time_target < current) return false; do { uint64_t remain_us = (time_target - current + 999) / 1000; useconds_t us = remain_us >= 1000000 ? 
999999 : remain_us; usleep(us); current = os_gettime_ns(); } while (time_target > current); return true; } void os_sleep_ms(uint32_t duration) { usleep(duration * 1000); } #if !defined(__APPLE__) uint64_t os_gettime_ns(void) { struct timespec ts; clock_gettime(CLOCK_MONOTONIC, &ts); return ((uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec); } /* should return $HOME/.config/[name] as default */ int os_get_config_path(char *dst, size_t size, const char *name) { char *xdg_ptr = getenv("XDG_CONFIG_HOME"); // If XDG_CONFIG_HOME is unset, // we use the default $HOME/.config/[name] instead if (xdg_ptr == NULL) { char *home_ptr = getenv("HOME"); if (home_ptr == NULL) bcrash("Could not get $HOME\n"); if (!name || !*name) { return snprintf(dst, size, "%s/.config", home_ptr); } else { return snprintf(dst, size, "%s/.config/%s", home_ptr, name); } } else { if (!name || !*name) return snprintf(dst, size, "%s", xdg_ptr); else return snprintf(dst, size, "%s/%s", xdg_ptr, name); } } /* should return $HOME/.config/[name] as default */ char *os_get_config_path_ptr(const char *name) { struct dstr path; char *xdg_ptr = getenv("XDG_CONFIG_HOME"); /* If XDG_CONFIG_HOME is unset, * we use the default $HOME/.config/[name] instead */ if (xdg_ptr == NULL) { char *home_ptr = getenv("HOME"); if (home_ptr == NULL) bcrash("Could not get $HOME\n"); dstr_init_copy(&path, home_ptr); dstr_cat(&path, "/.config/"); dstr_cat(&path, name); } else { dstr_init_copy(&path, xdg_ptr); dstr_cat(&path, "/"); dstr_cat(&path, name); } return path.array; } int os_get_program_data_path(char *dst, size_t size, const char *name) { return snprintf(dst, size, "/usr/local/share/%s", !!name ? name : ""); } char *os_get_program_data_path_ptr(const char *name) { size_t len = snprintf(NULL, 0, "/usr/local/share/%s", !!name ? name : ""); char *str = bmalloc(len + 1); snprintf(str, len + 1, "/usr/local/share/%s", !!name ? 
name : ""); str[len] = 0; return str; } #if defined(__OpenBSD__) // a bit modified version of https://stackoverflow.com/a/31495527 ssize_t os_openbsd_get_executable_path(char *epath) { int mib[4]; char **argv; size_t len; const char *comm; int ok = 0; mib[0] = CTL_KERN; mib[1] = KERN_PROC_ARGS; mib[2] = getpid(); mib[3] = KERN_PROC_ARGV; if (sysctl(mib, 4, NULL, &len, NULL, 0) < 0) abort(); if (!(argv = malloc(len))) abort(); if (sysctl(mib, 4, argv, &len, NULL, 0) < 0) abort(); comm = argv[0]; if (*comm == '/' || *comm == '.') { if (realpath(comm, epath)) ok = 1; } else { char *sp; char *xpath = strdup(getenv("PATH")); char *path = strtok_r(xpath, ":", &sp); struct stat st; if (!xpath) abort(); while (path) { snprintf(epath, PATH_MAX, "%s/%s", path, comm); if (!stat(epath, &st) && (st.st_mode & S_IXUSR)) { ok = 1; break; } path = strtok_r(NULL, ":", &sp); } free(xpath); } free(argv); return ok ? (ssize_t)strlen(epath) : -1; } #endif char *os_get_executable_path_ptr(const char *name) { char exe[PATH_MAX]; #if defined(__FreeBSD__) || defined(__DragonFly__) int sysctlname[4] = {CTL_KERN, KERN_PROC, KERN_PROC_PATHNAME, -1}; size_t pathlen = PATH_MAX; ssize_t count; if (sysctl(sysctlname, nitems(sysctlname), exe, &pathlen, NULL, 0) == -1) { blog(LOG_ERROR, "sysctl(KERN_PROC_PATHNAME) failed, errno %d", errno); return NULL; } count = pathlen; #elif defined(__OpenBSD__) ssize_t count = os_openbsd_get_executable_path(exe); #else ssize_t count = readlink("/proc/self/exe", exe, PATH_MAX - 1); if (count >= 0) { exe[count] = '\0'; } #endif const char *path_out = NULL; struct dstr path; if (count == -1) { return NULL; } path_out = dirname(exe); if (!path_out) { return NULL; } dstr_init_copy(&path, path_out); dstr_cat(&path, "/"); if (name && *name) { dstr_cat(&path, name); } return path.array; } bool os_get_emulation_status(void) { return false; } #endif bool os_file_exists(const char *path) { return access(path, F_OK) == 0; } size_t os_get_abs_path(const char *path, char 
*abspath, size_t size) { size_t min_size = size < PATH_MAX ? size : PATH_MAX; char newpath[PATH_MAX]; int ret; if (!abspath) return 0; if (!realpath(path, newpath)) return 0; ret = snprintf(abspath, min_size, "%s", newpath); return ret >= 0 ? ret : 0; } char *os_get_abs_path_ptr(const char *path) { char *ptr = bmalloc(512); if (!os_get_abs_path(path, ptr, 512)) { bfree(ptr); ptr = NULL; } return ptr; } struct os_dir { const char *path; DIR *dir; struct dirent *cur_dirent; struct os_dirent out; }; os_dir_t *os_opendir(const char *path) { struct os_dir *dir; DIR *dir_val; dir_val = opendir(path); if (!dir_val) return NULL; dir = bzalloc(sizeof(struct os_dir)); dir->dir = dir_val; dir->path = path; return dir; } static inline bool is_dir(const char *path) { struct stat stat_info; if (stat(path, &stat_info) == 0) return !!S_ISDIR(stat_info.st_mode); blog(LOG_DEBUG, "is_dir: stat for %s failed, errno: %d", path, errno); return false; } struct os_dirent *os_readdir(os_dir_t *dir) { struct dstr file_path = {0}; if (!dir) return NULL; dir->cur_dirent = readdir(dir->dir); if (!dir->cur_dirent) return NULL; const size_t length = strlen(dir->cur_dirent->d_name); if (sizeof(dir->out.d_name) <= length) return NULL; memcpy(dir->out.d_name, dir->cur_dirent->d_name, length + 1); dstr_copy(&file_path, dir->path); dstr_cat(&file_path, "/"); dstr_cat(&file_path, dir->out.d_name); dir->out.directory = is_dir(file_path.array); dstr_free(&file_path); return &dir->out; } void os_closedir(os_dir_t *dir) { if (dir) { closedir(dir->dir); bfree(dir); } } #ifndef __APPLE__ int64_t os_get_free_space(const char *path) { struct statvfs info; int64_t ret = (int64_t)statvfs(path, &info); if (ret == 0) ret = (int64_t)info.f_bsize * (int64_t)info.f_bfree; return ret; } #endif struct posix_glob_info { struct os_glob_info base; glob_t gl; }; int os_glob(const char *pattern, int flags, os_glob_t **pglob) { struct posix_glob_info pgi; int ret = glob(pattern, 0, NULL, &pgi.gl); if (ret == 0) { 
DARRAY(struct os_globent) list; da_init(list); for (size_t i = 0; i < pgi.gl.gl_pathc; i++) { struct os_globent ent = {0}; ent.path = pgi.gl.gl_pathv[i]; ent.directory = is_dir(ent.path); da_push_back(list, &ent); } pgi.base.gl_pathc = list.num; pgi.base.gl_pathv = list.array; *pglob = bmemdup(&pgi, sizeof(pgi)); } else { *pglob = NULL; } UNUSED_PARAMETER(flags); return ret; } void os_globfree(os_glob_t *pglob) { if (pglob) { struct posix_glob_info *pgi = (struct posix_glob_info *)pglob; globfree(&pgi->gl); bfree(pgi->base.gl_pathv); bfree(pgi); } } int os_unlink(const char *path) { return unlink(path); } int os_rmdir(const char *path) { return rmdir(path); } int os_mkdir(const char *path) { if (mkdir(path, 0755) == 0) return MKDIR_SUCCESS; return (errno == EEXIST) ? MKDIR_EXISTS : MKDIR_ERROR; } int os_rename(const char *old_path, const char *new_path) { return rename(old_path, new_path); } int os_safe_replace(const char *target, const char *from, const char *backup) { if (backup && os_file_exists(target) && rename(target, backup) != 0) return -1; return rename(from, target); } #if !defined(__APPLE__) os_performance_token_t *os_request_high_performance(const char *reason) { UNUSED_PARAMETER(reason); return NULL; } void os_end_high_performance(os_performance_token_t *token) { UNUSED_PARAMETER(token); } #endif int os_copyfile(const char *file_path_in, const char *file_path_out) { FILE *file_out = NULL; FILE *file_in = NULL; uint8_t data[4096]; int ret = -1; size_t size; if (os_file_exists(file_path_out)) return -1; file_in = fopen(file_path_in, "rb"); if (!file_in) return -1; file_out = fopen(file_path_out, "ab+"); if (!file_out) goto error; do { size = fread(data, 1, sizeof(data), file_in); if (size) size = fwrite(data, 1, size, file_out); } while (size == sizeof(data)); ret = feof(file_in) ? 
0 : -1; error: if (file_out) fclose(file_out); fclose(file_in); return ret; } char *os_getcwd(char *path, size_t size) { return getcwd(path, size); } int os_chdir(const char *path) { return chdir(path); } #if !defined(__APPLE__) #if defined(GIO_FOUND) struct dbus_sleep_info; struct portal_inhibit_info; extern struct dbus_sleep_info *dbus_sleep_info_create(void); extern void dbus_inhibit_sleep(struct dbus_sleep_info *dbus, const char *sleep, bool active); extern void dbus_sleep_info_destroy(struct dbus_sleep_info *dbus); extern struct portal_inhibit_info *portal_inhibit_info_create(void); extern void portal_inhibit(struct portal_inhibit_info *portal, const char *reason, bool active); extern void portal_inhibit_info_destroy(struct portal_inhibit_info *portal); #endif struct os_inhibit_info { #if defined(GIO_FOUND) struct dbus_sleep_info *dbus; struct portal_inhibit_info *portal; #endif pthread_t screensaver_thread; os_event_t *stop_event; char *reason; posix_spawnattr_t attr; bool active; }; os_inhibit_t *os_inhibit_sleep_create(const char *reason) { struct os_inhibit_info *info = bzalloc(sizeof(*info)); sigset_t set; #if defined(GIO_FOUND) info->portal = portal_inhibit_info_create(); if (!info->portal) { /* In a Flatpak, only the portal can be used for inhibition. 
		 */
		if (access("/.flatpak-info", F_OK) == 0) {
			bfree(info);
			return NULL;
		}

		info->dbus = dbus_sleep_info_create();
	}

	if (info->portal || info->dbus) {
		info->reason = bstrdup(reason);
		return info;
	}
#endif

	os_event_init(&info->stop_event, OS_EVENT_TYPE_AUTO);
	posix_spawnattr_init(&info->attr);

	sigemptyset(&set);
	posix_spawnattr_setsigmask(&info->attr, &set);
	sigaddset(&set, SIGPIPE);
	posix_spawnattr_setsigdefault(&info->attr, &set);
	posix_spawnattr_setflags(&info->attr, POSIX_SPAWN_SETSIGDEF | POSIX_SPAWN_SETSIGMASK);

	info->reason = bstrdup(reason);
	return info;
}

extern char **environ;

static void reset_screensaver(os_inhibit_t *info)
{
	char *argv[3] = {(char *)"xdg-screensaver", (char *)"reset", NULL};
	pid_t pid;

	int err = posix_spawnp(&pid, "xdg-screensaver", NULL, &info->attr, argv, environ);
	if (err == 0) {
		int status;
		while (waitpid(pid, &status, 0) == -1)
			;
	} else {
		blog(LOG_WARNING, "Failed to create xdg-screensaver: %d", err);
	}
}

static void *screensaver_thread(void *param)
{
	os_inhibit_t *info = param;

	while (os_event_timedwait(info->stop_event, 30000) == ETIMEDOUT)
		reset_screensaver(info);

	return NULL;
}

bool os_inhibit_sleep_set_active(os_inhibit_t *info, bool active)
{
	int ret;

	if (!info)
		return false;
	if (info->active == active)
		return false;

#if defined(GIO_FOUND)
	if (info->portal)
		portal_inhibit(info->portal, info->reason, active);
	if (info->dbus)
		dbus_inhibit_sleep(info->dbus, info->reason, active);

	if (info->portal || info->dbus) {
		info->active = active;
		return true;
	}
#endif

	if (!info->stop_event)
		return true;

	if (active) {
		ret = pthread_create(&info->screensaver_thread, NULL, &screensaver_thread, info);
		/* pthread_create returns 0 on success or a positive errno
		 * value on failure; it never returns a negative value */
		if (ret != 0) {
			blog(LOG_ERROR, "Failed to create screensaver "
					"inhibitor thread");
			return false;
		}
	} else {
		os_event_signal(info->stop_event);
		pthread_join(info->screensaver_thread, NULL);
	}

	info->active = active;
	return true;
}

void os_inhibit_sleep_destroy(os_inhibit_t *info)
{
	if (info) {
		os_inhibit_sleep_set_active(info, false);
#if
defined(GIO_FOUND) if (info->portal) { portal_inhibit_info_destroy(info->portal); } else if (info->dbus) { dbus_sleep_info_destroy(info->dbus); } else { os_event_destroy(info->stop_event); posix_spawnattr_destroy(&info->attr); } #else os_event_destroy(info->stop_event); posix_spawnattr_destroy(&info->attr); #endif bfree(info->reason); bfree(info); } } #endif void os_breakpoint() { raise(SIGTRAP); } void os_oom() { raise(SIGTRAP); } #ifndef __APPLE__ static int physical_cores = 0; static int logical_cores = 0; static bool core_count_initialized = false; static void os_get_cores_internal(void) { if (core_count_initialized) return; core_count_initialized = true; logical_cores = sysconf(_SC_NPROCESSORS_ONLN); #if defined(__linux__) int physical_id = -1; int last_physical_id = -1; int core_count = 0; char *line = NULL; size_t linecap = 0; FILE *fp; struct dstr proc_phys_id; struct dstr proc_phys_ids; fp = fopen("/proc/cpuinfo", "r"); if (!fp) return; dstr_init(&proc_phys_id); dstr_init(&proc_phys_ids); while (getline(&line, &linecap, fp) != -1) { if (!strncmp(line, "physical id", 11)) { char *start = strchr(line, ':'); if (!start || *(++start) == '\0') continue; physical_id = atoi(start); dstr_free(&proc_phys_id); dstr_init(&proc_phys_id); dstr_catf(&proc_phys_id, "%d", physical_id); } if (!strncmp(line, "cpu cores", 9)) { char *start = strchr(line, ':'); if (!start || *(++start) == '\0') continue; if (dstr_is_empty(&proc_phys_ids) || (!dstr_is_empty(&proc_phys_ids) && !dstr_find(&proc_phys_ids, proc_phys_id.array))) { dstr_cat_dstr(&proc_phys_ids, &proc_phys_id); dstr_cat(&proc_phys_ids, " "); core_count += atoi(start); } } if (*line == '\n' && physical_id != last_physical_id) { last_physical_id = physical_id; } } if (core_count == 0) physical_cores = logical_cores; else physical_cores = core_count; fclose(fp); dstr_free(&proc_phys_ids); dstr_free(&proc_phys_id); free(line); #elif defined(__FreeBSD__) char *text = os_quick_read_utf8_file("/var/run/dmesg.boot"); char 
*core_count = text; int packages = 0; int cores = 0; struct dstr proc_packages; struct dstr proc_cores; dstr_init(&proc_packages); dstr_init(&proc_cores); if (!text || !*text) { physical_cores = logical_cores; return; } core_count = strstr(core_count, "\nFreeBSD/SMP: "); if (!core_count) goto FreeBSD_cores_cleanup; core_count++; core_count = strstr(core_count, "\nFreeBSD/SMP: "); if (!core_count) goto FreeBSD_cores_cleanup; core_count = strstr(core_count, ": "); core_count += 2; size_t len = strcspn(core_count, " "); dstr_ncopy(&proc_packages, core_count, len); core_count = strstr(core_count, "package(s) x "); if (!core_count) goto FreeBSD_cores_cleanup; core_count += 13; len = strcspn(core_count, " "); dstr_ncopy(&proc_cores, core_count, len); FreeBSD_cores_cleanup: if (!dstr_is_empty(&proc_packages)) packages = atoi(proc_packages.array); if (!dstr_is_empty(&proc_cores)) cores = atoi(proc_cores.array); if (packages == 0) physical_cores = logical_cores; else if (cores == 0) physical_cores = packages; else physical_cores = packages * cores; dstr_free(&proc_cores); dstr_free(&proc_packages); bfree(text); #else physical_cores = logical_cores; #endif } int os_get_physical_cores(void) { if (!core_count_initialized) os_get_cores_internal(); return physical_cores; } int os_get_logical_cores(void) { if (!core_count_initialized) os_get_cores_internal(); return logical_cores; } #ifdef __FreeBSD__ uint64_t os_get_sys_free_size(void) { uint64_t mem_free = 0; size_t length = sizeof(mem_free); if (sysctlbyname("vm.stats.vm.v_free_count", &mem_free, &length, NULL, 0) < 0) return 0; return mem_free; } static inline bool os_get_proc_memory_usage_internal(struct kinfo_proc *kinfo) { int mib[] = {CTL_KERN, KERN_PROC, KERN_PROC_PID, getpid()}; size_t length = sizeof(*kinfo); if (sysctl(mib, sizeof(mib) / sizeof(mib[0]), kinfo, &length, NULL, 0) < 0) return false; return true; } bool os_get_proc_memory_usage(os_proc_memory_usage_t *usage) { struct kinfo_proc kinfo; if 
(!os_get_proc_memory_usage_internal(&kinfo)) return false; usage->resident_size = (uint64_t)kinfo.ki_rssize * sysconf(_SC_PAGESIZE); usage->virtual_size = (uint64_t)kinfo.ki_size; return true; } uint64_t os_get_proc_resident_size(void) { struct kinfo_proc kinfo; if (!os_get_proc_memory_usage_internal(&kinfo)) return 0; return (uint64_t)kinfo.ki_rssize * sysconf(_SC_PAGESIZE); } uint64_t os_get_proc_virtual_size(void) { struct kinfo_proc kinfo; if (!os_get_proc_memory_usage_internal(&kinfo)) return 0; return (uint64_t)kinfo.ki_size; } #else typedef struct { unsigned long virtual_size; unsigned long resident_size; unsigned long share_pages; unsigned long text; unsigned long library; unsigned long data; unsigned long dirty_pages; } statm_t; static inline bool os_get_proc_memory_usage_internal(statm_t *statm) { const char *statm_path = "/proc/self/statm"; FILE *f = fopen(statm_path, "r"); if (!f) return false; if (fscanf(f, "%lu %lu %lu %lu %lu %lu %lu", &statm->virtual_size, &statm->resident_size, &statm->share_pages, &statm->text, &statm->library, &statm->data, &statm->dirty_pages) != 7) { fclose(f); return false; } fclose(f); return true; } bool os_get_proc_memory_usage(os_proc_memory_usage_t *usage) { statm_t statm = {}; if (!os_get_proc_memory_usage_internal(&statm)) return false; usage->resident_size = (uint64_t)statm.resident_size * sysconf(_SC_PAGESIZE); usage->virtual_size = statm.virtual_size; return true; } uint64_t os_get_proc_resident_size(void) { statm_t statm = {}; if (!os_get_proc_memory_usage_internal(&statm)) return 0; return (uint64_t)statm.resident_size * sysconf(_SC_PAGESIZE); } uint64_t os_get_proc_virtual_size(void) { statm_t statm = {}; if (!os_get_proc_memory_usage_internal(&statm)) return 0; return (uint64_t)statm.virtual_size; } uint64_t os_get_sys_free_size(void) { uint64_t free_memory = 0; #ifndef __OpenBSD__ struct sysinfo info; if (sysinfo(&info) < 0) return 0; free_memory = ((uint64_t)info.freeram + (uint64_t)info.bufferram) * 
		      info.mem_unit;
#endif
	return free_memory;
}
#endif

static uint64_t total_memory = 0;
static bool total_memory_initialized = false;

static void os_get_sys_total_size_internal()
{
	total_memory_initialized = true;

#ifndef __OpenBSD__
	struct sysinfo info;
	if (sysinfo(&info) < 0)
		return;

	total_memory = (uint64_t)info.totalram * info.mem_unit;
#endif
}

uint64_t os_get_sys_total_size(void)
{
	if (!total_memory_initialized)
		os_get_sys_total_size_internal();

	return total_memory;
}
#endif

#ifndef __APPLE__
uint64_t os_get_free_disk_space(const char *dir)
{
	struct statvfs info;
	if (statvfs(dir, &info) != 0)
		return 0;

	return (uint64_t)info.f_frsize * (uint64_t)info.f_bavail;
}
#endif

char *os_generate_uuid(void)
{
	uuid_t uuid;
	/* 36-character UUID plus NUL terminator */
	char *out = bmalloc(37);
	uuid_generate(uuid);
	uuid_unparse_lower(uuid, out);
	return out;
}
obs-studio-32.1.0-sources/libobs/util/platform-nix-portal.c000644 001751 001751 00000015005 15153330235 024547 0ustar00runnerrunner000000 000000 /*
 * Copyright (c) 2023 Lain Bailey
 *                    2021 Georges Basile Stavracas Neto
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

/* The angle-bracket include targets were lost in extraction; gio/gio.h
 * (GDBus/GLib APIs) and stdlib.h (rand) are required by this file */
#include <gio/gio.h>
#include <stdlib.h>

#include "bmem.h"
#include "dstr.h"

#define PORTAL_NAME "org.freedesktop.portal.Desktop"
#define PORTAL_PATH "/org/freedesktop/portal/desktop"
#define INHIBIT_PORTAL_IFACE "org.freedesktop.portal.Inhibit"

struct portal_inhibit_info {
	GDBusConnection *c;
	GCancellable *cancellable;
	unsigned int signal_id;
	char *sender_name;
	char *request_path;
	bool active;
};

static void new_request(struct portal_inhibit_info *info, char **out_token, char **out_path)
{
	struct dstr token;
	struct dstr path;
	uint32_t id;

	id = rand();

	dstr_init(&token);
	dstr_printf(&token, "obs_inhibit_portal%u", id);
	*out_token = token.array;

	dstr_init(&path);
	dstr_printf(&path, "/org/freedesktop/portal/desktop/request/%s/%s", info->sender_name, token.array);
	*out_path = path.array;
}

static inline void unsubscribe_from_request(struct portal_inhibit_info *info)
{
	if (info->signal_id > 0) {
		g_dbus_connection_signal_unsubscribe(info->c, info->signal_id);
		info->signal_id = 0;
	}
}

static inline void remove_inhibit_data(struct portal_inhibit_info *info)
{
	g_clear_pointer(&info->request_path, bfree);
	info->active = false;
}

static void response_received(GDBusConnection *bus, const char *sender_name, const char *object_path,
			      const char *interface_name, const char *signal_name, GVariant *parameters, gpointer data)
{
	UNUSED_PARAMETER(bus);
	UNUSED_PARAMETER(sender_name);
	UNUSED_PARAMETER(object_path);
	UNUSED_PARAMETER(interface_name);
	UNUSED_PARAMETER(signal_name);

	struct portal_inhibit_info *info = data;
	g_autoptr(GVariant) ret = NULL;
	uint32_t response;

	g_variant_get(parameters, "(u@a{sv})", &response, &ret);

	if (response != 0) {
		if (response == 1)
			blog(LOG_WARNING, "Inhibit denied by user");
		remove_inhibit_data(info);
	}

	unsubscribe_from_request(info);
}

static void inhibited_cb(GObject *source_object, GAsyncResult *result, gpointer user_data)
{
	UNUSED_PARAMETER(source_object);

	struct portal_inhibit_info *info = user_data;
	g_autoptr(GVariant) reply = NULL;
	g_autoptr(GError) error = NULL;
reply = g_dbus_connection_call_finish(info->c, result, &error); if (error != NULL) { if (!g_error_matches(error, G_IO_ERROR, G_IO_ERROR_CANCELLED)) blog(LOG_ERROR, "Failed to inhibit: %s", error->message); unsubscribe_from_request(info); remove_inhibit_data(info); } g_clear_object(&info->cancellable); } static void do_inhibit(struct portal_inhibit_info *info, const char *reason) { GVariantBuilder options; uint32_t flags = 0xC; char *token; info->active = true; new_request(info, &token, &info->request_path); info->signal_id = g_dbus_connection_signal_subscribe(info->c, PORTAL_NAME, "org.freedesktop.portal.Request", "Response", info->request_path, NULL, G_DBUS_SIGNAL_FLAGS_NO_MATCH_RULE, response_received, info, NULL); g_variant_builder_init(&options, G_VARIANT_TYPE_VARDICT); g_variant_builder_add(&options, "{sv}", "handle_token", g_variant_new_string(token)); g_variant_builder_add(&options, "{sv}", "reason", g_variant_new_string(reason)); bfree(token); info->cancellable = g_cancellable_new(); g_dbus_connection_call(info->c, PORTAL_NAME, PORTAL_PATH, INHIBIT_PORTAL_IFACE, "Inhibit", g_variant_new("(sua{sv})", "", flags, &options), NULL, G_DBUS_CALL_FLAGS_NONE, -1, info->cancellable, inhibited_cb, info); } static void uninhibited_cb(GObject *source_object, GAsyncResult *result, gpointer user_data) { UNUSED_PARAMETER(source_object); struct portal_inhibit_info *info = user_data; g_autoptr(GVariant) reply = NULL; g_autoptr(GError) error = NULL; reply = g_dbus_connection_call_finish(info->c, result, &error); if (error) blog(LOG_WARNING, "Error uninhibiting: %s", error->message); } static void do_uninhibit(struct portal_inhibit_info *info) { if (info->cancellable) { /* If uninhibit is called before the inhibit call is finished, * cancel it instead. 
*/ g_cancellable_cancel(info->cancellable); g_clear_object(&info->cancellable); } else { g_dbus_connection_call(info->c, PORTAL_NAME, info->request_path, "org.freedesktop.portal.Request", "Close", g_variant_new("()"), G_VARIANT_TYPE_UNIT, G_DBUS_CALL_FLAGS_NONE, -1, NULL, uninhibited_cb, info); } remove_inhibit_data(info); } void portal_inhibit_info_destroy(struct portal_inhibit_info *info) { if (info) { g_cancellable_cancel(info->cancellable); unsubscribe_from_request(info); remove_inhibit_data(info); g_clear_pointer(&info->sender_name, bfree); g_clear_object(&info->cancellable); g_clear_object(&info->c); bfree(info); } } struct portal_inhibit_info *portal_inhibit_info_create(void) { struct portal_inhibit_info *info = bzalloc(sizeof(*info)); g_autoptr(GVariant) reply = NULL; g_autoptr(GError) error = NULL; char *aux; info->c = g_bus_get_sync(G_BUS_TYPE_SESSION, NULL, &error); if (!info->c) { blog(LOG_ERROR, "Could not create dbus connection: %s", error->message); bfree(info); return NULL; } info->sender_name = bstrdup(g_dbus_connection_get_unique_name(info->c) + 1); while ((aux = strstr(info->sender_name, ".")) != NULL) *aux = '_'; reply = g_dbus_connection_call_sync(info->c, PORTAL_NAME, PORTAL_PATH, "org.freedesktop.DBus.Properties", "Get", g_variant_new("(ss)", INHIBIT_PORTAL_IFACE, "version"), G_VARIANT_TYPE("(v)"), G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL); if (reply != NULL) { blog(LOG_DEBUG, "Found portal inhibitor"); return info; } portal_inhibit_info_destroy(info); return NULL; } void portal_inhibit(struct portal_inhibit_info *info, const char *reason, bool active) { if (active == info->active) return; if (active) do_inhibit(info, reason); else do_uninhibit(info); } obs-studio-32.1.0-sources/libobs/util/array-serializer.c000644 001751 001751 00000004776 15153330235 024132 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby 
granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #include "darray.h" #include "array-serializer.h" static size_t array_output_write(void *param, const void *data, size_t size) { struct array_output_data *output = param; if (output->cur_pos < output->bytes.num) { size_t new_size = output->cur_pos + size; if (new_size > output->bytes.num) { darray_ensure_capacity(sizeof(uint8_t), &output->bytes.da, new_size); output->bytes.num = new_size; } memcpy(output->bytes.array + output->cur_pos, data, size); output->cur_pos += size; } else { da_push_back_array(output->bytes, (uint8_t *)data, size); output->cur_pos += size; } return size; } static int64_t array_output_get_pos(void *param) { struct array_output_data *data = param; return (int64_t)data->bytes.num; } static int64_t array_output_seek(void *param, int64_t offset, enum serialize_seek_type seek_type) { struct array_output_data *output = param; size_t new_pos = 0; switch (seek_type) { case SERIALIZE_SEEK_START: new_pos = offset; break; case SERIALIZE_SEEK_CURRENT: new_pos = output->cur_pos + offset; break; case SERIALIZE_SEEK_END: new_pos = output->bytes.num - offset; break; } if (new_pos > output->bytes.num) return -1; output->cur_pos = new_pos; return (int64_t)new_pos; } void array_output_serializer_init(struct serializer *s, struct array_output_data *data) { memset(s, 0, sizeof(struct serializer)); memset(data, 0, sizeof(struct array_output_data)); s->data = data; 
s->write = array_output_write; s->get_pos = array_output_get_pos; s->seek = array_output_seek; } void array_output_serializer_free(struct array_output_data *data) { da_free(data->bytes); } void array_output_serializer_reset(struct array_output_data *data) { da_clear(data->bytes); data->cur_pos = 0; } obs-studio-32.1.0-sources/libobs/util/windows/000755 001751 001751 00000000000 15153330731 022156 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/util/windows/win-version.h000644 001751 001751 00000003103 15153330235 024603 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include "../c99defs.h" #ifdef __cplusplus extern "C" { #endif struct win_version_info { int major; int minor; int build; int revis; }; static inline int win_version_compare(const struct win_version_info *dst, const struct win_version_info *src) { if (dst->major > src->major) return 1; if (dst->major == src->major) { if (dst->minor > src->minor) return 1; if (dst->minor == src->minor) { if (dst->build > src->build) return 1; if (dst->build == src->build) return 0; } } return -1; } EXPORT bool is_64_bit_windows(void); EXPORT bool is_arm64_windows(void); EXPORT bool get_dll_ver(const wchar_t *lib, struct win_version_info *info); EXPORT void get_win_ver(struct win_version_info *info); EXPORT uint32_t get_win_ver_int(void); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/windows/CoTaskMemPtr.hpp000644 001751 001751 00000002504 15153330235 025200 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once template class CoTaskMemPtr { T *ptr; inline void Clear() { if (ptr) CoTaskMemFree(ptr); } public: inline CoTaskMemPtr() : ptr(NULL) {} inline CoTaskMemPtr(T *ptr_) : ptr(ptr_) {} inline ~CoTaskMemPtr() { Clear(); } inline operator T *() const { return ptr; } inline T *operator->() const { return ptr; } inline const T *Get() const { return ptr; } inline CoTaskMemPtr &operator=(T *val) { Clear(); ptr = val; return *this; } inline T **operator&() { Clear(); ptr = NULL; return &ptr; } }; obs-studio-32.1.0-sources/libobs/util/windows/window-helpers.c000644 001751 001751 00000031171 15153330235 025273 0ustar00runnerrunner000000 000000 #include "window-helpers.h" #include #include #include static inline void encode_dstr(struct dstr *str) { dstr_replace(str, "#", "#22"); dstr_replace(str, ":", "#3A"); } static inline char *decode_str(const char *src) { struct dstr str = {0}; dstr_copy(&str, src); dstr_replace(&str, "#3A", ":"); dstr_replace(&str, "#22", "#"); return str.array; } void ms_build_window_strings(const char *str, char **class, char **title, char **exe) { char **strlist; *class = NULL; *title = NULL; *exe = NULL; if (!str) { return; } strlist = strlist_split(str, ':', true); if (strlist && strlist[0] && strlist[1] && strlist[2]) { *title = decode_str(strlist[0]); *class = decode_str(strlist[1]); *exe = decode_str(strlist[2]); } strlist_free(strlist); } static void insert_preserved_val(obs_property_t *p, const char *val, size_t idx) { char *window_class = NULL; char *title = NULL; char *executable = NULL; struct dstr desc = {0}; ms_build_window_strings(val, &window_class, &title, &executable); dstr_printf(&desc, "[%s]: %s", executable, title); obs_property_list_insert_string(p, idx, desc.array, val); obs_property_list_item_disable(p, idx, true); dstr_free(&desc); bfree(window_class); bfree(title); bfree(executable); } bool ms_check_window_property_setting(obs_properties_t *ppts, obs_property_t *p, obs_data_t *settings, const char *val, size_t idx) 
{ const char *cur_val; bool match = false; size_t i = 0; cur_val = obs_data_get_string(settings, val); if (!cur_val) { return false; } for (;;) { const char *val = obs_property_list_item_string(p, i++); if (!val) break; if (strcmp(val, cur_val) == 0) { match = true; break; } } if (cur_val && *cur_val && !match) { insert_preserved_val(p, cur_val, idx); return true; } UNUSED_PARAMETER(ppts); return false; } static HMODULE kernel32(void) { static HMODULE kernel32_handle = NULL; if (!kernel32_handle) kernel32_handle = GetModuleHandleA("kernel32"); return kernel32_handle; } static inline HANDLE open_process(DWORD desired_access, bool inherit_handle, DWORD process_id) { typedef HANDLE(WINAPI * PFN_OpenProcess)(DWORD, BOOL, DWORD); static PFN_OpenProcess open_process_proc = NULL; if (!open_process_proc) open_process_proc = (PFN_OpenProcess)ms_get_obfuscated_func(kernel32(), "B}caZyah`~q", 0x2D5BEBAF6DDULL); return open_process_proc(desired_access, inherit_handle, process_id); } bool ms_get_window_exe(struct dstr *name, HWND window) { wchar_t wname[MAX_PATH]; struct dstr temp = {0}; bool success = false; HANDLE process = NULL; char *slash; DWORD id; GetWindowThreadProcessId(window, &id); if (id == GetCurrentProcessId()) return false; process = open_process(PROCESS_QUERY_LIMITED_INFORMATION, false, id); if (!process) goto fail; if (!GetProcessImageFileNameW(process, wname, MAX_PATH)) goto fail; dstr_from_wcs(&temp, wname); slash = strrchr(temp.array, '\\'); if (!slash) goto fail; dstr_copy(name, slash + 1); success = true; fail: if (!success) dstr_copy(name, "unknown"); dstr_free(&temp); CloseHandle(process); return true; } void ms_get_window_title(struct dstr *name, HWND hwnd) { int len; len = GetWindowTextLengthW(hwnd); if (!len) return; if (len > 1024) { wchar_t *temp; temp = malloc(sizeof(wchar_t) * (len + 1)); if (!temp) return; if (GetWindowTextW(hwnd, temp, len + 1)) dstr_from_wcs(name, temp); free(temp); } else { wchar_t temp[1024 + 1]; if (GetWindowTextW(hwnd, 
temp, len + 1)) dstr_from_wcs(name, temp); } } void ms_get_window_class(struct dstr *class, HWND hwnd) { wchar_t temp[256]; temp[0] = 0; if (GetClassNameW(hwnd, temp, sizeof(temp) / sizeof(wchar_t))) dstr_from_wcs(class, temp); } /* not capturable or internal windows, exact executable names */ static const char *internal_microsoft_exes_exact[] = { "startmenuexperiencehost.exe", "applicationframehost.exe", "peopleexperiencehost.exe", "shellexperiencehost.exe", "microsoft.notes.exe", "systemsettings.exe", "textinputhost.exe", "searchapp.exe", "video.ui.exe", "searchui.exe", "lockapp.exe", "cortana.exe", "gamebar.exe", "tabtip.exe", "time.exe", NULL, }; /* partial matches start from the beginning of the executable name */ static const char *internal_microsoft_exes_partial[] = { "windowsinternal", NULL, }; static bool is_microsoft_internal_window_exe(const char *exe) { if (!exe) return false; for (const char **vals = internal_microsoft_exes_exact; *vals; vals++) { if (astrcmpi(exe, *vals) == 0) return true; } for (const char **vals = internal_microsoft_exes_partial; *vals; vals++) { if (astrcmpi_n(exe, *vals, strlen(*vals)) == 0) return true; } return false; } static void add_window(obs_property_t *p, HWND hwnd, add_window_cb callback) { struct dstr class = {0}; struct dstr title = {0}; struct dstr exe = {0}; struct dstr encoded = {0}; struct dstr desc = {0}; if (!ms_get_window_exe(&exe, hwnd)) return; if (is_microsoft_internal_window_exe(exe.array)) { dstr_free(&exe); return; } ms_get_window_title(&title, hwnd); if (dstr_cmp(&exe, "explorer.exe") == 0 && dstr_is_empty(&title)) { dstr_free(&exe); dstr_free(&title); return; } ms_get_window_class(&class, hwnd); if (callback && !callback(title.array, class.array, exe.array)) { dstr_free(&title); dstr_free(&class); dstr_free(&exe); return; } dstr_printf(&desc, "[%s]: %s", exe.array, title.array); encode_dstr(&title); encode_dstr(&class); encode_dstr(&exe); dstr_cat_dstr(&encoded, &title); dstr_cat(&encoded, ":"); 
dstr_cat_dstr(&encoded, &class); dstr_cat(&encoded, ":"); dstr_cat_dstr(&encoded, &exe); obs_property_list_add_string(p, desc.array, encoded.array); dstr_free(&encoded); dstr_free(&desc); dstr_free(&class); dstr_free(&title); dstr_free(&exe); } static inline bool IsWindowCloaked(HWND window) { DWORD cloaked; HRESULT hr = DwmGetWindowAttribute(window, DWMWA_CLOAKED, &cloaked, sizeof(cloaked)); return SUCCEEDED(hr) && cloaked; } static bool check_window_valid(HWND window, enum window_search_mode mode) { DWORD styles, ex_styles; RECT rect; if (!IsWindowVisible(window) || (mode == EXCLUDE_MINIMIZED && (IsIconic(window) || IsWindowCloaked(window)))) return false; GetClientRect(window, &rect); styles = (DWORD)GetWindowLongPtr(window, GWL_STYLE); ex_styles = (DWORD)GetWindowLongPtr(window, GWL_EXSTYLE); if (ex_styles & WS_EX_TOOLWINDOW) return false; if (styles & WS_CHILD) return false; if (mode == EXCLUDE_MINIMIZED && (rect.bottom == 0 || rect.right == 0)) return false; return true; } bool ms_is_uwp_window(HWND hwnd) { wchar_t name[256]; name[0] = 0; if (!GetClassNameW(hwnd, name, sizeof(name) / sizeof(wchar_t))) return false; return wcscmp(name, L"ApplicationFrameWindow") == 0 || wcscmp(name, L"WinUIDesktopWin32WindowClass") == 0; } HWND ms_get_uwp_actual_window(HWND parent) { DWORD parent_id = 0; HWND child; GetWindowThreadProcessId(parent, &parent_id); child = FindWindowEx(parent, NULL, NULL, NULL); while (child) { DWORD child_id = 0; GetWindowThreadProcessId(child, &child_id); if (child_id != parent_id) return child; child = FindWindowEx(parent, child, NULL, NULL); } return NULL; } static HWND next_window(HWND window, enum window_search_mode mode, HWND *parent, bool use_findwindowex) { if (*parent) { window = *parent; *parent = NULL; } while (true) { if (use_findwindowex) window = FindWindowEx(GetDesktopWindow(), window, NULL, NULL); else window = GetNextWindow(window, GW_HWNDNEXT); if (!window || check_window_valid(window, mode)) break; } if 
(ms_is_uwp_window(window)) { HWND child = ms_get_uwp_actual_window(window); if (child) { *parent = window; return child; } } return window; } static HWND first_window(enum window_search_mode mode, HWND *parent, bool *use_findwindowex) { HWND window = FindWindowEx(GetDesktopWindow(), NULL, NULL, NULL); if (!window) { *use_findwindowex = false; window = GetWindow(GetDesktopWindow(), GW_CHILD); } else { *use_findwindowex = true; } *parent = NULL; if (!check_window_valid(window, mode)) { window = next_window(window, mode, parent, *use_findwindowex); if (!window && *use_findwindowex) { *use_findwindowex = false; window = GetWindow(GetDesktopWindow(), GW_CHILD); if (!check_window_valid(window, mode)) window = next_window(window, mode, parent, *use_findwindowex); } } if (ms_is_uwp_window(window)) { HWND child = ms_get_uwp_actual_window(window); if (child) { *parent = window; return child; } } return window; } void ms_fill_window_list(obs_property_t *p, enum window_search_mode mode, add_window_cb callback) { HWND parent; bool use_findwindowex = false; HWND window = first_window(mode, &parent, &use_findwindowex); while (window) { add_window(p, window, callback); window = next_window(window, mode, &parent, use_findwindowex); } } static int window_rating(HWND window, enum window_priority priority, const char *class, const char *title, const char *exe, bool uwp_window, bool generic_class) { struct dstr cur_class = {0}; struct dstr cur_title = {0}; struct dstr cur_exe = {0}; int val = 0x7FFFFFFF; if (!ms_get_window_exe(&cur_exe, window)) return 0x7FFFFFFF; ms_get_window_title(&cur_title, window); ms_get_window_class(&cur_class, window); bool class_matches = dstr_cmpi(&cur_class, class) == 0; bool exe_matches = dstr_cmpi(&cur_exe, exe) == 0; int title_val = abs(dstr_cmpi(&cur_title, title)); if (generic_class && (priority == WINDOW_PRIORITY_CLASS)) priority = WINDOW_PRIORITY_TITLE; /* always match by name with UWP windows */ if (uwp_window) { if (priority == WINDOW_PRIORITY_EXE 
&& !exe_matches) val = 0x7FFFFFFF; else val = title_val == 0 ? 0 : 0x7FFFFFFF; } else if (priority == WINDOW_PRIORITY_CLASS) { val = class_matches ? title_val : 0x7FFFFFFF; if (val != 0x7FFFFFFF && !exe_matches) val += 0x1000; } else if (priority == WINDOW_PRIORITY_TITLE) { val = title_val == 0 ? 0 : 0x7FFFFFFF; } else if (priority == WINDOW_PRIORITY_EXE) { val = exe_matches ? title_val : 0x7FFFFFFF; } dstr_free(&cur_class); dstr_free(&cur_title); dstr_free(&cur_exe); return val; } static const char *generic_class_substrings[] = { "Chrome", "SDL_app", NULL, }; static bool is_generic_class(const char *current_class) { const char **class = generic_class_substrings; while (*class) { if (astrstri(current_class, *class) != NULL) { return true; } class ++; } return false; } static bool is_uwp_class(const char *window_class) { return strcmp(window_class, "Windows.UI.Core.CoreWindow") == 0 || strcmp(window_class, "WinUIDesktopWin32WindowClass") == 0; } HWND ms_find_window(enum window_search_mode mode, enum window_priority priority, const char *class, const char *title, const char *exe) { HWND parent; bool use_findwindowex = false; HWND window = first_window(mode, &parent, &use_findwindowex); HWND best_window = NULL; int best_rating = 0x7FFFFFFF; if (!class) return NULL; const bool uwp_window = is_uwp_class(class); const bool generic_class = is_generic_class(class); while (window) { int rating = window_rating(window, priority, class, title, exe, uwp_window, generic_class); if (rating < best_rating) { best_rating = rating; best_window = window; if (rating == 0) break; } window = next_window(window, mode, &parent, use_findwindowex); } return best_window; } struct top_level_enum_data { enum window_search_mode mode; enum window_priority priority; const char *class; const char *title; const char *exe; bool uwp_window; bool generic_class; HWND best_window; int best_rating; }; BOOL CALLBACK enum_windows_proc(HWND window, LPARAM lParam) { struct top_level_enum_data *data = (struct 
top_level_enum_data *)lParam; if (!check_window_valid(window, data->mode)) return TRUE; if (IsWindowCloaked(window)) return TRUE; const int rating = window_rating(window, data->priority, data->class, data->title, data->exe, data->uwp_window, data->generic_class); if (rating < data->best_rating) { data->best_rating = rating; data->best_window = window; } return rating > 0; } HWND ms_find_window_top_level(enum window_search_mode mode, enum window_priority priority, const char *class, const char *title, const char *exe) { if (!class) return NULL; struct top_level_enum_data data; data.mode = mode; data.priority = priority; data.class = class; data.title = title; data.exe = exe; data.uwp_window = is_uwp_class(class); data.generic_class = is_generic_class(class); data.best_window = NULL; data.best_rating = 0x7FFFFFFF; EnumWindows(enum_windows_proc, (LPARAM)&data); return data.best_window; } obs-studio-32.1.0-sources/libobs/util/windows/WinHandle.hpp000644 001751 001751 00000003611 15153330235 024540 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once class WinHandle { HANDLE handle = INVALID_HANDLE_VALUE; inline void Clear() { if (handle && handle != INVALID_HANDLE_VALUE) CloseHandle(handle); } public: inline WinHandle() {} inline WinHandle(HANDLE handle_) : handle(handle_) {} inline ~WinHandle() { Clear(); } inline operator HANDLE() const { return handle; } inline WinHandle &operator=(HANDLE handle_) { if (handle_ != handle) { Clear(); handle = handle_; } return *this; } inline HANDLE *operator&() { return &handle; } inline bool Valid() const { return handle && handle != INVALID_HANDLE_VALUE; } }; class WinModule { HMODULE handle = NULL; inline void Clear() { if (handle) FreeLibrary(handle); } public: inline WinModule() {} inline WinModule(HMODULE handle_) : handle(handle_) {} inline ~WinModule() { Clear(); } inline operator HMODULE() const { return handle; } inline WinModule &operator=(HMODULE handle_) { if (handle_ != handle) { Clear(); handle = handle_; } return *this; } inline HMODULE *operator&() { return &handle; } inline bool Valid() const { return handle != NULL; } }; obs-studio-32.1.0-sources/libobs/util/windows/win-registry.h000644 001751 001751 00000002174 15153330235 024775 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * Copyright (c) 2017 Ryan Foster * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
*/ #pragma once #include <windows.h> #include "../c99defs.h" #ifdef __cplusplus extern "C" { #endif struct reg_dword { LSTATUS status; DWORD size; DWORD return_value; }; EXPORT void get_reg_dword(HKEY hkey, LPCWSTR sub_key, LPCWSTR value_name, struct reg_dword *info); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/windows/window-helpers.h000644 001751 001751 00000002762 15153330235 025304 0ustar00runnerrunner000000 000000 #pragma once /* for obs_property_t, HWND and struct dstr */ #include <obs.h> #include <windows.h> #include "../dstr.h" #include "../c99defs.h" #ifdef __cplusplus extern "C" { #endif enum window_priority { WINDOW_PRIORITY_CLASS, WINDOW_PRIORITY_TITLE, WINDOW_PRIORITY_EXE, }; enum window_search_mode { INCLUDE_MINIMIZED, EXCLUDE_MINIMIZED, }; EXPORT bool ms_get_window_exe(struct dstr *name, HWND window); EXPORT void ms_get_window_title(struct dstr *name, HWND hwnd); EXPORT void ms_get_window_class(struct dstr *window_class, HWND hwnd); EXPORT bool ms_is_uwp_window(HWND hwnd); EXPORT HWND ms_get_uwp_actual_window(HWND parent); typedef bool (*add_window_cb)(const char *title, const char *window_class, const char *exe); EXPORT void ms_fill_window_list(obs_property_t *p, enum window_search_mode mode, add_window_cb callback); EXPORT void ms_build_window_strings(const char *str, char **window_class, char **title, char **exe); EXPORT bool ms_check_window_property_setting(obs_properties_t *ppts, obs_property_t *p, obs_data_t *settings, const char *val, size_t idx); EXPORT HWND ms_find_window(enum window_search_mode mode, enum window_priority priority, const char *window_class, const char *title, const char *exe); EXPORT HWND ms_find_window_top_level(enum window_search_mode mode, enum window_priority priority, const char *window_class, const char *title, const char *exe); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/windows/device-enum.h000644 001751 001751 00000000407 15153330235 024530 0ustar00runnerrunner000000 000000
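window-helpers.h above declares ms_find_window, whose candidate scoring is implemented by window_rating in window-helpers.c earlier in this archive. As a platform-independent illustration of that priority scheme (plain C strings instead of dstr, no HWNDs; the names rating and icmp are mine, not libobs API):

```c
#include <assert.h>
#include <ctype.h>
#include <limits.h>
#include <stdlib.h>

enum priority { PRIO_CLASS, PRIO_TITLE, PRIO_EXE };

/* case-insensitive compare, standing in for dstr_cmpi/astrcmpi */
static int icmp(const char *a, const char *b)
{
	while (*a && *b && tolower((unsigned char)*a) == tolower((unsigned char)*b)) {
		a++;
		b++;
	}
	return tolower((unsigned char)*a) - tolower((unsigned char)*b);
}

/* lower is better; INT_MAX plays the role of 0x7FFFFFFF ("no match") above */
static int rating(enum priority prio, const char *cur_class, const char *cur_title,
		  const char *cur_exe, const char *want_class, const char *want_title,
		  const char *want_exe)
{
	int title_val = abs(icmp(cur_title, want_title));
	int val = INT_MAX;

	if (prio == PRIO_CLASS) {
		/* class must match; a wrong exe is tolerated but penalized */
		val = icmp(cur_class, want_class) == 0 ? title_val : INT_MAX;
		if (val != INT_MAX && icmp(cur_exe, want_exe) != 0)
			val += 0x1000;
	} else if (prio == PRIO_TITLE) {
		/* title priority demands an exact title */
		val = title_val == 0 ? 0 : INT_MAX;
	} else if (prio == PRIO_EXE) {
		/* exe must match; the title difference breaks ties */
		val = icmp(cur_exe, want_exe) == 0 ? title_val : INT_MAX;
	}
	return val;
}
```

A rating of 0 is a perfect match, which is why ms_find_window stops enumerating as soon as it sees one.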
#pragma once #include "../c99defs.h" #ifdef __cplusplus extern "C" { #endif typedef bool (*device_luid_cb)(void *param, uint32_t idx, uint64_t luid); EXPORT void enum_graphics_device_luids(device_luid_cb device_luid, void *param); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/windows/ComPtr.hpp000644 001751 001751 00000006371 15153330235 024101 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #ifdef _WIN32 #include <unknwn.h> /* IUnknown, used by ComQIPtr below */ #endif /* Oh no I have my own com pointer class, the world is ending, how dare you * write your own!
*/ template<typename T> class ComPtr { protected: T *ptr; inline void Kill() { if (ptr) ptr->Release(); } inline void Replace(T *p) { if (ptr != p) { if (p) p->AddRef(); if (ptr) ptr->Release(); ptr = p; } } public: inline ComPtr() : ptr(nullptr) {} inline ComPtr(T *p) : ptr(p) { if (ptr) ptr->AddRef(); } inline ComPtr(const ComPtr<T> &c) : ptr(c.ptr) { if (ptr) ptr->AddRef(); } inline ComPtr(ComPtr<T> &&c) noexcept : ptr(c.ptr) { c.ptr = nullptr; } template<typename U> inline ComPtr(ComPtr<U> &&c) noexcept : ptr(c.Detach()) {} inline ~ComPtr() { Kill(); } inline void Clear() { if (ptr) { ptr->Release(); ptr = nullptr; } } inline ComPtr &operator=(T *p) { Replace(p); return *this; } inline ComPtr &operator=(const ComPtr<T> &c) { Replace(c.ptr); return *this; } inline ComPtr &operator=(ComPtr<T> &&c) noexcept { if (&ptr != &c.ptr) { Kill(); ptr = c.ptr; c.ptr = nullptr; } return *this; } template<typename U> inline ComPtr &operator=(ComPtr<U> &&c) noexcept { Kill(); ptr = c.Detach(); return *this; } inline T *Detach() { T *out = ptr; ptr = nullptr; return out; } inline void CopyTo(T **out) { if (out) { if (ptr) ptr->AddRef(); *out = ptr; } } inline ULONG Release() { ULONG ref; if (!ptr) return 0; ref = ptr->Release(); ptr = nullptr; return ref; } inline T **Assign() { Clear(); return &ptr; } inline void Set(T *p) { Kill(); ptr = p; } inline T *Get() const { return ptr; } inline T **operator&() { return Assign(); } inline operator T *() const { return ptr; } inline T *operator->() const { return ptr; } inline bool operator==(T *p) const { return ptr == p; } inline bool operator!=(T *p) const { return ptr != p; } inline bool operator!() const { return !ptr; } }; #ifdef _WIN32 template<typename T> class ComQIPtr : public ComPtr<T> { public: inline ComQIPtr(IUnknown *unk) { this->ptr = nullptr; unk->QueryInterface(__uuidof(T), (void **)&this->ptr); } template<typename U> inline ComQIPtr(const ComPtr<U> &c) { this->ptr = nullptr; c->QueryInterface(__uuidof(T), (void **)&this->ptr); } inline ComPtr<T> &operator=(IUnknown *unk) { ComPtr<T>::Clear();
unk->QueryInterface(__uuidof(T), (void **)&this->ptr); return *this; } }; #endif obs-studio-32.1.0-sources/libobs/util/windows/HRError.hpp000644 001751 001751 00000001636 15153330235 024217 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once struct HRError { const char *str; HRESULT hr; inline HRError(const char *str, HRESULT hr) : str(str), hr(hr) {} }; obs-studio-32.1.0-sources/libobs/util/windows/device-enum.c000644 001751 001751 00000001240 15153330235 024527 0ustar00runnerrunner000000 000000 #include "device-enum.h" #include "../dstr.h" #include <dxgi.h> void enum_graphics_device_luids(device_luid_cb device_luid, void *param) { IDXGIFactory1 *factory; IDXGIAdapter1 *adapter; HRESULT hr; hr = CreateDXGIFactory1(&IID_IDXGIFactory1, (void **)&factory); if (FAILED(hr)) return; for (UINT i = 0; factory->lpVtbl->EnumAdapters1(factory, i, &adapter) == S_OK; i++) { DXGI_ADAPTER_DESC desc; hr = adapter->lpVtbl->GetDesc(adapter, &desc); adapter->lpVtbl->Release(adapter); if (FAILED(hr)) continue; uint64_t luid64 = *(uint64_t *)&desc.AdapterLuid; if (!device_luid(param, i, luid64)) break; } factory->lpVtbl->Release(factory); } obs-studio-32.1.0-sources/libobs/util/windows/obfuscate.h000644 001751 001751 00000000472 15153330235 024304
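The obfuscate.h/obfuscate.c pair that follows hides A/V-triggering import names (such as OpenProcess, looked up through ms_get_obfuscated_func in window-helpers.c above) by XORing each character with alternating low/high nibbles of a 64-bit key. Since XOR is self-inverse, the same routine both encodes and decodes; a self-contained sketch (xor_nibbles and decode_openprocess are my names; the string/key pair is the one used in window-helpers.c):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LOWER_HALFBYTE(x) ((x) & 0xF)
#define UPPER_HALFBYTE(x) (((x) >> 4) & 0xF)

/* Same transform as deobfuscate_str: char i is XORed with one of the 16
 * nibbles of the key, cycling; even i uses the low nibble and odd i the high
 * nibble of key byte i/2. The key is read through a byte pointer, so the
 * mapping assumes a little-endian key layout, as on Windows x86/x64. */
static void xor_nibbles(char *str, uint64_t val)
{
	uint8_t *dec_val = (uint8_t *)&val;
	int i = 0;

	while (*str != 0) {
		int pos = i / 2;
		bool bottom = (i % 2) == 0;
		uint8_t *ch = (uint8_t *)str;

		*ch ^= bottom ? LOWER_HALFBYTE(dec_val[pos]) : UPPER_HALFBYTE(dec_val[pos]);

		if (++i == sizeof(uint64_t) * 2)
			i = 0;
		str++;
	}
}

/* decodes the obfuscated OpenProcess name seen in window-helpers.c */
static const char *decode_openprocess(void)
{
	static char buf[] = "B}caZyah`~q";
	static bool done = false;

	if (!done) {
		xor_nibbles(buf, 0x2D5BEBAF6DDULL);
		done = true;
	}
	return buf;
}
```

Running xor_nibbles twice with the same key restores the original string, which is why obfuscate.c needs only the one helper.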
0ustar00runnerrunner000000 000000 #pragma once #include "../c99defs.h" #include <windows.h> #ifdef __cplusplus extern "C" { #endif /* this is a workaround to A/Vs going crazy whenever certain functions (such as * OpenProcess) are used */ void *ms_get_obfuscated_func(HMODULE module, const char *str, uint64_t val); #ifdef __cplusplus } #endif obs-studio-32.1.0-sources/libobs/util/windows/obfuscate.c000644 001751 001751 00000001447 15153330235 024302 0ustar00runnerrunner000000 000000 #ifdef _MSC_VER #pragma warning(disable : 4152) /* casting func ptr to void */ #endif #include <windows.h> #include <string.h> #include "obfuscate.h" #define LOWER_HALFBYTE(x) ((x) & 0xF) #define UPPER_HALFBYTE(x) (((x) >> 4) & 0xF) static void deobfuscate_str(char *str, uint64_t val) { uint8_t *dec_val = (uint8_t *)&val; int i = 0; while (*str != 0) { int pos = i / 2; bool bottom = (i % 2) == 0; uint8_t *ch = (uint8_t *)str; uint8_t xor = bottom ? LOWER_HALFBYTE(dec_val[pos]) : UPPER_HALFBYTE(dec_val[pos]); *ch ^= xor; if (++i == sizeof(uint64_t) * 2) i = 0; str++; } } void *ms_get_obfuscated_func(HMODULE module, const char *str, uint64_t val) { char new_name[128]; strcpy(new_name, str); deobfuscate_str(new_name, val); return GetProcAddress(module, new_name); } obs-studio-32.1.0-sources/libobs/util/curl/000755 001751 001751 00000000000 15153330731 021431 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/libobs/util/curl/curl-helper.h000644 001751 001751 00000002314 15153330235 024023 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS.
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #pragma once #include <curl/curl.h> #if defined(_WIN32) && LIBCURL_VERSION_NUM >= 0x072c00 #ifdef CURLSSLOPT_REVOKE_BEST_EFFORT #define CURL_OBS_REVOKE_SETTING CURLSSLOPT_REVOKE_BEST_EFFORT #else #define CURL_OBS_REVOKE_SETTING CURLSSLOPT_NO_REVOKE #endif #define curl_obs_set_revoke_setting(handle) curl_easy_setopt(handle, CURLOPT_SSL_OPTIONS, CURL_OBS_REVOKE_SETTING) #else #define curl_obs_set_revoke_setting(handle) #endif obs-studio-32.1.0-sources/libobs/util/util_uint64.h000644 001751 001751 00000002360 15153330235 023023 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2020 Hans Petter Selasky * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/ #pragma once #if defined(_MSC_VER) && defined(_M_X64) #include <intrin.h> /* _umul128/_udiv128 */ #endif static inline uint64_t util_mul_div64(uint64_t num, uint64_t mul, uint64_t div) { #if defined(_MSC_VER) && defined(_M_X64) && (_MSC_VER >= 1920) unsigned __int64 high; const unsigned __int64 low = _umul128(num, mul, &high); unsigned __int64 rem; return _udiv128(high, low, div, &rem); #else const uint64_t rem = num % div; return (num / div) * mul + (rem * mul) / div; #endif } obs-studio-32.1.0-sources/libobs/util/config-file.c000644 001751 001751 00000042747 15153330235 023027 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/ #include <inttypes.h> #include <stdio.h> #include <stdlib.h> #include "config-file.h" #include "threading.h" #include "platform.h" #include "base.h" #include "bmem.h" #include "lexer.h" #include "dstr.h" #include "uthash.h" struct config_item { char *name; char *value; UT_hash_handle hh; }; static inline void config_item_free(struct config_item *item) { bfree(item->name); bfree(item->value); bfree(item); } struct config_section { char *name; struct config_item *items; UT_hash_handle hh; }; static inline void config_section_free(struct config_section *section) { struct config_item *item; struct config_item *temp; HASH_ITER (hh, section->items, item, temp) { HASH_DELETE(hh, section->items, item); config_item_free(item); } bfree(section->name); bfree(section); } struct config_data { char *file; struct config_section *sections; struct config_section *defaults; pthread_mutex_t mutex; }; config_t *config_create(const char *file) { struct config_data *config; FILE *f; f = os_fopen(file, "wb"); if (!f) return NULL; fclose(f); config = bzalloc(sizeof(struct config_data)); if (pthread_mutex_init_recursive(&config->mutex) != 0) { bfree(config); return NULL; } config->file = bstrdup(file); return config; } static bool config_parse_string(struct lexer *lex, struct strref *ref, char end) { bool success = end != 0; struct base_token token; base_token_clear(&token); while (lexer_getbasetoken(lex, &token, PARSE_WHITESPACE)) { if (end) { if (*token.text.array == end) { success = true; break; } else if (is_newline(*token.text.array)) { success = false; break; } } else { if (is_newline(*token.text.array)) { success = true; break; } } strref_add(ref, &token.text); } //remove_ref_whitespace(ref); return success; } static void unescape(struct dstr *str) { char *read = str->array; char *write = str->array; for (; *read; read++, write++) { char cur = *read; if (cur == '\\') { char next = read[1]; if (next == '\\') { read++; } else if (next == 'r') { cur = '\r'; read++; } else if (next == 'n') { cur = '\n'; read++; } }
if (read != write) *write = cur; } if (read != write) *write = '\0'; } static void config_add_item(struct config_item **items, struct strref *name, struct strref *value) { struct config_item *item; struct dstr item_value; item = bzalloc(sizeof(struct config_item)); item->name = bstrdup_n(name->array, name->len); if (!strref_is_empty(value)) { dstr_init_copy_strref(&item_value, value); unescape(&item_value); item->value = bstrdup_n(item_value.array, item_value.len); dstr_free(&item_value); } else { item->value = bzalloc(1); } HASH_ADD_STR(*items, name, item); } static void config_parse_section(struct config_section *section, struct lexer *lex) { struct base_token token; while (lexer_getbasetoken(lex, &token, PARSE_WHITESPACE)) { struct strref name, value; while (token.type == BASETOKEN_WHITESPACE) { if (!lexer_getbasetoken(lex, &token, PARSE_WHITESPACE)) return; } if (token.type == BASETOKEN_OTHER) { if (*token.text.array == '#') { do { if (!lexer_getbasetoken(lex, &token, PARSE_WHITESPACE)) return; } while (!is_newline(*token.text.array)); continue; } else if (*token.text.array == '[') { lex->offset--; return; } } strref_copy(&name, &token.text); if (!config_parse_string(lex, &name, '=')) continue; strref_clear(&value); config_parse_string(lex, &value, 0); config_add_item(&section->items, &name, &value); } } static void parse_config_data(struct config_section **sections, struct lexer *lex) { struct strref section_name; struct base_token token; base_token_clear(&token); while (lexer_getbasetoken(lex, &token, PARSE_WHITESPACE)) { struct config_section *section; while (token.type == BASETOKEN_WHITESPACE) { if (!lexer_getbasetoken(lex, &token, PARSE_WHITESPACE)) return; } if (*token.text.array != '[') { while (!is_newline(*token.text.array)) { if (!lexer_getbasetoken(lex, &token, PARSE_WHITESPACE)) return; } continue; } strref_clear(&section_name); config_parse_string(lex, &section_name, ']'); if (!section_name.len) return; section = bzalloc(sizeof(struct config_section));
section->name = bstrdup_n(section_name.array, section_name.len); config_parse_section(section, lex); HASH_ADD_STR(*sections, name, section); } } static int config_parse_file(struct config_section **sections, const char *file, bool always_open) { char *file_data; struct lexer lex; FILE *f; f = os_fopen(file, "rb"); if (always_open && !f) f = os_fopen(file, "w+"); if (!f) return CONFIG_FILENOTFOUND; os_fread_utf8(f, &file_data); fclose(f); if (!file_data) return CONFIG_SUCCESS; lexer_init(&lex); lexer_start_move(&lex, file_data); parse_config_data(sections, &lex); lexer_free(&lex); return CONFIG_SUCCESS; } int config_open(config_t **config, const char *file, enum config_open_type open_type) { int errorcode; bool always_open = open_type == CONFIG_OPEN_ALWAYS; if (!config) return CONFIG_ERROR; *config = bzalloc(sizeof(struct config_data)); if (!*config) return CONFIG_ERROR; if (pthread_mutex_init_recursive(&(*config)->mutex) != 0) { bfree(*config); return CONFIG_ERROR; } (*config)->file = bstrdup(file); errorcode = config_parse_file(&(*config)->sections, file, always_open); if (errorcode != CONFIG_SUCCESS) { config_close(*config); *config = NULL; } return errorcode; } int config_open_string(config_t **config, const char *str) { struct lexer lex; if (!config) return CONFIG_ERROR; *config = bzalloc(sizeof(struct config_data)); if (!*config) return CONFIG_ERROR; if (pthread_mutex_init_recursive(&(*config)->mutex) != 0) { bfree(*config); return CONFIG_ERROR; } (*config)->file = NULL; lexer_init(&lex); lexer_start(&lex, str); parse_config_data(&(*config)->sections, &lex); lexer_free(&lex); return CONFIG_SUCCESS; } int config_open_defaults(config_t *config, const char *file) { if (!config) return CONFIG_ERROR; return config_parse_file(&config->defaults, file, false); } int config_save(config_t *config) { FILE *f; struct dstr str, tmp; int ret = CONFIG_ERROR; if (!config) return CONFIG_ERROR; if (!config->file) return CONFIG_ERROR; dstr_init(&str); dstr_init(&tmp); 
pthread_mutex_lock(&config->mutex); f = os_fopen(config->file, "wb"); if (!f) { pthread_mutex_unlock(&config->mutex); return CONFIG_FILENOTFOUND; } struct config_section *section, *stmp; struct config_item *item, *itmp; int idx = 0; HASH_ITER (hh, config->sections, section, stmp) { if (idx++) dstr_cat(&str, "\n"); dstr_cat(&str, "["); dstr_cat(&str, section->name); dstr_cat(&str, "]\n"); HASH_ITER (hh, section->items, item, itmp) { dstr_copy(&tmp, item->value ? item->value : ""); dstr_replace(&tmp, "\\", "\\\\"); dstr_replace(&tmp, "\r", "\\r"); dstr_replace(&tmp, "\n", "\\n"); dstr_cat(&str, item->name); dstr_cat(&str, "="); dstr_cat(&str, tmp.array); dstr_cat(&str, "\n"); } } #ifdef _WIN32 if (fwrite("\xEF\xBB\xBF", 3, 1, f) != 1) goto cleanup; #endif if (fwrite(str.array, str.len, 1, f) != 1) goto cleanup; ret = CONFIG_SUCCESS; cleanup: fclose(f); pthread_mutex_unlock(&config->mutex); dstr_free(&tmp); dstr_free(&str); return ret; } int config_save_safe(config_t *config, const char *temp_ext, const char *backup_ext) { struct dstr temp_file = {0}; struct dstr backup_file = {0}; char *file = config->file; int ret; if (!temp_ext || !*temp_ext) { blog(LOG_ERROR, "config_save_safe: invalid " "temporary extension specified"); return CONFIG_ERROR; } pthread_mutex_lock(&config->mutex); dstr_copy(&temp_file, config->file); if (*temp_ext != '.') dstr_cat(&temp_file, "."); dstr_cat(&temp_file, temp_ext); config->file = temp_file.array; ret = config_save(config); config->file = file; if (ret != CONFIG_SUCCESS) { blog(LOG_ERROR, "config_save_safe: failed to " "write to %s", temp_file.array); goto cleanup; } if (backup_ext && *backup_ext) { dstr_copy(&backup_file, config->file); if (*backup_ext != '.') dstr_cat(&backup_file, "."); dstr_cat(&backup_file, backup_ext); } if (os_safe_replace(file, temp_file.array, backup_file.array) != 0) ret = CONFIG_ERROR; cleanup: pthread_mutex_unlock(&config->mutex); dstr_free(&temp_file); dstr_free(&backup_file); return ret; } void 
config_close(config_t *config) { struct config_section *section, *temp; if (!config) return; HASH_ITER (hh, config->sections, section, temp) { HASH_DELETE(hh, config->sections, section); config_section_free(section); } HASH_ITER (hh, config->defaults, section, temp) { HASH_DELETE(hh, config->defaults, section); config_section_free(section); } bfree(config->file); pthread_mutex_destroy(&config->mutex); bfree(config); } size_t config_num_sections(config_t *config) { return HASH_CNT(hh, config->sections); } const char *config_get_section(config_t *config, size_t idx) { struct config_section *section; struct config_section *temp; const char *name = NULL; size_t ctr = 0; pthread_mutex_lock(&config->mutex); if (idx >= config_num_sections(config)) goto unlock; HASH_ITER (hh, config->sections, section, temp) { if (idx == ctr++) { name = section->name; break; } } unlock: pthread_mutex_unlock(&config->mutex); return name; } static const struct config_item *config_find_item(const struct config_section *sections, const char *section, const char *name) { struct config_section *sec; struct config_item *res; HASH_FIND_STR(sections, section, sec); if (!sec) return NULL; HASH_FIND_STR(sec->items, name, res); return res; } static void config_set_item(config_t *config, struct config_section **sections, const char *section, const char *name, char *value) { struct config_section *sec; struct config_item *item; pthread_mutex_lock(&config->mutex); HASH_FIND_STR(*sections, section, sec); if (!sec) { sec = bzalloc(sizeof(struct config_section)); sec->name = bstrdup(section); HASH_ADD_STR(*sections, name, sec); } HASH_FIND_STR(sec->items, name, item); if (!item) { item = bzalloc(sizeof(struct config_item)); item->name = bstrdup(name); item->value = value; HASH_ADD_STR(sec->items, name, item); } else { bfree(item->value); item->value = value; } pthread_mutex_unlock(&config->mutex); } static void config_set_item_default(config_t *config, const char *section, const char *name, char *value) { 
config_set_item(config, &config->defaults, section, name, value); if (!config_has_user_value(config, section, name)) config_set_item(config, &config->sections, section, name, bstrdup(value)); } void config_set_string(config_t *config, const char *section, const char *name, const char *value) { if (!value) value = ""; config_set_item(config, &config->sections, section, name, bstrdup(value)); } void config_set_int(config_t *config, const char *section, const char *name, int64_t value) { struct dstr str; dstr_init(&str); dstr_printf(&str, "%" PRId64, value); config_set_item(config, &config->sections, section, name, str.array); } void config_set_uint(config_t *config, const char *section, const char *name, uint64_t value) { struct dstr str; dstr_init(&str); dstr_printf(&str, "%" PRIu64, value); config_set_item(config, &config->sections, section, name, str.array); } void config_set_bool(config_t *config, const char *section, const char *name, bool value) { char *str = bstrdup(value ? "true" : "false"); config_set_item(config, &config->sections, section, name, str); } void config_set_double(config_t *config, const char *section, const char *name, double value) { char *str = bzalloc(64); os_dtostr(value, str, 64); config_set_item(config, &config->sections, section, name, str); } void config_set_default_string(config_t *config, const char *section, const char *name, const char *value) { if (!value) value = ""; config_set_item_default(config, section, name, bstrdup(value)); } void config_set_default_int(config_t *config, const char *section, const char *name, int64_t value) { struct dstr str; dstr_init(&str); dstr_printf(&str, "%" PRId64, value); config_set_item_default(config, section, name, str.array); } void config_set_default_uint(config_t *config, const char *section, const char *name, uint64_t value) { struct dstr str; dstr_init(&str); dstr_printf(&str, "%" PRIu64, value); config_set_item_default(config, section, name, str.array); } void 
config_set_default_bool(config_t *config, const char *section, const char *name, bool value) { char *str = bstrdup(value ? "true" : "false"); config_set_item_default(config, section, name, str); } void config_set_default_double(config_t *config, const char *section, const char *name, double value) { struct dstr str; dstr_init(&str); dstr_printf(&str, "%g", value); config_set_item_default(config, section, name, str.array); } const char *config_get_string(config_t *config, const char *section, const char *name) { const struct config_item *item; const char *value = NULL; pthread_mutex_lock(&config->mutex); item = config_find_item(config->sections, section, name); if (!item) item = config_find_item(config->defaults, section, name); if (item) value = item->value; pthread_mutex_unlock(&config->mutex); return value; } static inline int64_t str_to_int64(const char *str) { if (!str || !*str) return 0; if (str[0] == '0' && str[1] == 'x') return strtoll(str + 2, NULL, 16); else return strtoll(str, NULL, 10); } static inline uint64_t str_to_uint64(const char *str) { if (!str || !*str) return 0; if (str[0] == '0' && str[1] == 'x') return strtoull(str + 2, NULL, 16); else return strtoull(str, NULL, 10); } int64_t config_get_int(config_t *config, const char *section, const char *name) { const char *value = config_get_string(config, section, name); if (value) return str_to_int64(value); return 0; } uint64_t config_get_uint(config_t *config, const char *section, const char *name) { const char *value = config_get_string(config, section, name); if (value) return str_to_uint64(value); return 0; } bool config_get_bool(config_t *config, const char *section, const char *name) { const char *value = config_get_string(config, section, name); if (value) return astrcmpi(value, "true") == 0 || !!str_to_uint64(value); return false; } double config_get_double(config_t *config, const char *section, const char *name) { const char *value = config_get_string(config, section, name); if (value) return 
os_strtod(value); return 0.0; } bool config_remove_value(config_t *config, const char *section, const char *name) { struct config_section *sec; struct config_item *item; bool success = false; pthread_mutex_lock(&config->mutex); HASH_FIND_STR(config->sections, section, sec); if (sec) { HASH_FIND_STR(sec->items, name, item); if (item) { HASH_DELETE(hh, sec->items, item); config_item_free(item); success = true; } } pthread_mutex_unlock(&config->mutex); return success; } const char *config_get_default_string(config_t *config, const char *section, const char *name) { const struct config_item *item; const char *value = NULL; pthread_mutex_lock(&config->mutex); item = config_find_item(config->defaults, section, name); if (item) value = item->value; pthread_mutex_unlock(&config->mutex); return value; } int64_t config_get_default_int(config_t *config, const char *section, const char *name) { const char *value = config_get_default_string(config, section, name); if (value) return str_to_int64(value); return 0; } uint64_t config_get_default_uint(config_t *config, const char *section, const char *name) { const char *value = config_get_default_string(config, section, name); if (value) return str_to_uint64(value); return 0; } bool config_get_default_bool(config_t *config, const char *section, const char *name) { const char *value = config_get_default_string(config, section, name); if (value) return astrcmpi(value, "true") == 0 || !!str_to_uint64(value); return false; } double config_get_default_double(config_t *config, const char *section, const char *name) { const char *value = config_get_default_string(config, section, name); if (value) return os_strtod(value); return 0.0; } bool config_has_user_value(config_t *config, const char *section, const char *name) { bool success; pthread_mutex_lock(&config->mutex); success = config_find_item(config->sections, section, name) != NULL; pthread_mutex_unlock(&config->mutex); return success; } bool config_has_default_value(config_t *config, 
const char *section, const char *name) { bool success; pthread_mutex_lock(&config->mutex); success = config_find_item(config->defaults, section, name) != NULL; pthread_mutex_unlock(&config->mutex); return success; }
obs-studio-32.1.0-sources/libobs/util/buffered-file-serializer.c
/* * Copyright (c) 2024 Dennis Sädtler * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/ #include "buffered-file-serializer.h" #include <stdio.h> #include "platform.h" #include "threading.h" #include "deque.h" #include "dstr.h" static const size_t DEFAULT_BUF_SIZE = 256ULL * 1048576ULL; // 256 MiB static const size_t DEFAULT_CHUNK_SIZE = 1048576; // 1 MiB /* ========================================================================== */ /* Buffered writer based on ffmpeg-mux implementation */ struct io_header { uint64_t seek_offset; uint64_t data_length; }; struct io_buffer { bool active; bool shutdown_requested; bool output_error; os_event_t *buffer_space_available_event; os_event_t *new_data_available_event; pthread_t io_thread; pthread_mutex_t data_mutex; FILE *output_file; struct deque data; uint64_t next_pos; size_t buffer_size; size_t chunk_size; }; struct file_output_data { struct dstr filename; struct io_buffer io; }; static void *io_thread(void *opaque) { struct file_output_data *out = opaque; os_set_thread_name("buffered writer i/o thread"); // Chunk collects the writes into a larger batch size_t chunk_used = 0; size_t chunk_size = out->io.chunk_size; unsigned char *chunk = bmalloc(chunk_size); if (!chunk) { os_atomic_set_bool(&out->io.output_error, true); fprintf(stderr, "Error allocating memory for output\n"); goto error; } bool shutting_down; bool want_seek = false; bool force_flush_chunk = false; // current_seek_position is a virtual position updated as we read from // the buffer, if it becomes discontinuous due to a seek request we // flush the chunk. next_seek_position is the actual offset we should // seek to when we write the chunk. uint64_t current_seek_position = 0; uint64_t next_seek_position; for (;;) { // Wait for data to be written to the buffer os_event_wait(out->io.new_data_available_event); // Loop to write in chunk_size chunks for (;;) { pthread_mutex_lock(&out->io.data_mutex); shutting_down = os_atomic_load_bool(&out->io.shutdown_requested); // Fetch as many writes as possible from the deque // and fill up our local chunk.
This may involve // seeking, so take care of that as well. for (;;) { size_t available = out->io.data.size; // Buffer is empty (now) or was already empty (we got // woken up to exit) if (!available) break; // Get seek offset and data size struct io_header header; deque_peek_front(&out->io.data, &header, sizeof(header)); // Do we need to seek? if (header.seek_offset != current_seek_position) { // If there's already part of a chunk pending, // flush it at the current offset. Similarly, // if we already plan to seek, then seek. if (chunk_used || want_seek) { force_flush_chunk = true; break; } // Mark that we need to seek and where to want_seek = true; next_seek_position = header.seek_offset; // Update our virtual position current_seek_position = header.seek_offset; } // Make sure there's enough room for the data, if // not then force a flush if (header.data_length + chunk_used > chunk_size) { force_flush_chunk = true; break; } // Remove header that we already read deque_pop_front(&out->io.data, NULL, sizeof(header)); // Copy from the buffer to our local chunk deque_pop_front(&out->io.data, chunk + chunk_used, header.data_length); // Update offsets chunk_used += header.data_length; current_seek_position += header.data_length; } // Signal that there is more room in the buffer os_event_signal(out->io.buffer_space_available_event); // Try to avoid lots of small writes unless this was the final // data left in the buffer. The buffer might be entirely empty // if we were woken up to exit. if (!force_flush_chunk && (!chunk_used || (chunk_used < 65536 && !shutting_down))) { os_event_reset(out->io.new_data_available_event); pthread_mutex_unlock(&out->io.data_mutex); break; } pthread_mutex_unlock(&out->io.data_mutex); // Seek if we need to if (want_seek) { os_fseeki64(out->io.output_file, next_seek_position, SEEK_SET); // Update the next virtual position, making sure to take // into account the size of the chunk we're about to write. 
current_seek_position = next_seek_position + chunk_used; want_seek = false; // If we did a seek but do not have any data left to write // return to the start of the loop. if (!chunk_used) { force_flush_chunk = false; continue; } } // Write the current chunk to the output file size_t bytes_written = fwrite(chunk, 1, chunk_used, out->io.output_file); if (bytes_written != chunk_used) { blog(LOG_ERROR, "Error writing to '%s': %s (%zu != %zu)\n", out->filename.array, strerror(errno), bytes_written, chunk_used); os_atomic_set_bool(&out->io.output_error, true); goto error; } chunk_used = 0; force_flush_chunk = false; } // If this was the last chunk, time to exit if (shutting_down) break; } error: if (chunk) bfree(chunk); fclose(out->io.output_file); return NULL; } /* ========================================================================== */ /* Serializer Implementation */ static int64_t file_output_seek(void *opaque, int64_t offset, enum serialize_seek_type seek_type) { struct file_output_data *out = opaque; // If the output thread failed, signal that back up the stack if (os_atomic_load_bool(&out->io.output_error)) return -1; // Update where the next write should go pthread_mutex_lock(&out->io.data_mutex); switch (seek_type) { case SERIALIZE_SEEK_START: out->io.next_pos = offset; break; case SERIALIZE_SEEK_CURRENT: out->io.next_pos += offset; break; case SERIALIZE_SEEK_END: out->io.next_pos -= offset; break; } pthread_mutex_unlock(&out->io.data_mutex); return (int64_t)out->io.next_pos; } #ifndef _WIN32 static inline size_t max(size_t a, size_t b) { return a > b ? a : b; } static inline size_t min(size_t a, size_t b) { return a < b ? 
a : b; } #endif static size_t file_output_write(void *opaque, const void *buf, size_t buf_size) { struct file_output_data *out = opaque; if (!buf_size) return 0; // Split writes into at chunks that are at most chunk_size bytes uintptr_t ptr = (uintptr_t)buf; size_t remaining = buf_size; while (remaining) { if (os_atomic_load_bool(&out->io.output_error)) return 0; pthread_mutex_lock(&out->io.data_mutex); size_t next_chunk_size = min(remaining, out->io.chunk_size); // Avoid unbounded growth of the deque, cap to buffer_size size_t cap = max(out->io.data.capacity, out->io.buffer_size); size_t free_space = cap - out->io.data.size; if (free_space < next_chunk_size + sizeof(struct io_header)) { blog(LOG_DEBUG, "Waiting for I/O thread..."); // No space, wait for the I/O thread to make space os_event_reset(out->io.buffer_space_available_event); pthread_mutex_unlock(&out->io.data_mutex); os_event_wait(out->io.buffer_space_available_event); continue; } // Calculate how many chunks we can fit into the buffer size_t num_chunks = free_space / (next_chunk_size + sizeof(struct io_header)); while (remaining && num_chunks--) { struct io_header header = { .data_length = next_chunk_size, .seek_offset = out->io.next_pos, }; // Copy the data into the buffer deque_push_back(&out->io.data, &header, sizeof(header)); deque_push_back(&out->io.data, (const void *)ptr, next_chunk_size); // Advance the next write position out->io.next_pos += next_chunk_size; // Update remainder and advance data pointer remaining -= next_chunk_size; ptr += next_chunk_size; next_chunk_size = min(remaining, out->io.chunk_size); } // Tell the I/O thread that there's new data to be written os_event_signal(out->io.new_data_available_event); pthread_mutex_unlock(&out->io.data_mutex); } return buf_size - remaining; } static int64_t file_output_get_pos(void *opaque) { struct file_output_data *out = opaque; // If thread failed return -1 if (os_atomic_load_bool(&out->io.output_error)) return -1; return 
(int64_t)out->io.next_pos; } bool buffered_file_serializer_init_defaults(struct serializer *s, const char *path) { return buffered_file_serializer_init(s, path, 0, 0); } bool buffered_file_serializer_init(struct serializer *s, const char *path, size_t max_bufsize, size_t chunk_size) { struct file_output_data *out; out = bzalloc(sizeof(*out)); dstr_init_copy(&out->filename, path); out->io.output_file = os_fopen(path, "wb"); if (!out->io.output_file) { dstr_free(&out->filename); bfree(out); return false; } out->io.buffer_size = max_bufsize ? max_bufsize : DEFAULT_BUF_SIZE; out->io.chunk_size = chunk_size ? chunk_size : DEFAULT_CHUNK_SIZE; // Start at 1MB, this can grow up to max_bufsize depending // on how fast data is going in and out. deque_reserve(&out->io.data, 1048576); pthread_mutex_init(&out->io.data_mutex, NULL); os_event_init(&out->io.buffer_space_available_event, OS_EVENT_TYPE_AUTO); os_event_init(&out->io.new_data_available_event, OS_EVENT_TYPE_AUTO); pthread_create(&out->io.io_thread, NULL, io_thread, out); out->io.active = true; s->data = out; s->read = NULL; s->write = file_output_write; s->seek = file_output_seek; s->get_pos = file_output_get_pos; return true; } void buffered_file_serializer_free(struct serializer *s) { struct file_output_data *out = s->data; if (!out) return; if (out->io.active) { os_atomic_set_bool(&out->io.shutdown_requested, true); // Wakes up the I/O thread and waits for it to finish pthread_mutex_lock(&out->io.data_mutex); os_event_signal(out->io.new_data_available_event); pthread_mutex_unlock(&out->io.data_mutex); pthread_join(out->io.io_thread, NULL); os_event_destroy(out->io.new_data_available_event); os_event_destroy(out->io.buffer_space_available_event); pthread_mutex_destroy(&out->io.data_mutex); blog(LOG_DEBUG, "Final buffer capacity: %zu KiB", out->io.data.capacity / 1024); deque_free(&out->io.data); } dstr_free(&out->filename); bfree(out); }
obs-studio-32.1.0-sources/libobs/obs-encoder.c
/****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>. ******************************************************************************/ #include "obs.h" #include "obs-internal.h" #include "util/util_uint64.h" #define encoder_active(encoder) os_atomic_load_bool(&encoder->active) #define set_encoder_active(encoder, val) os_atomic_set_bool(&encoder->active, val) #define get_weak(encoder) ((obs_weak_encoder_t *)encoder->context.control) static void encoder_set_video(obs_encoder_t *encoder, video_t *video); struct obs_encoder_info *find_encoder(const char *id) { for (size_t i = 0; i < obs->encoder_types.num; i++) { struct obs_encoder_info *info = obs->encoder_types.array + i; if (strcmp(info->id, id) == 0) return info; } return NULL; } const char *obs_encoder_get_display_name(const char *id) { struct obs_encoder_info *ei = find_encoder(id); return ei ?
ei->get_name(ei->type_data) : NULL; } obs_module_t *obs_encoder_get_module(const char *id) { obs_module_t *module = obs->first_module; while (module) { for (size_t i = 0; i < module->encoders.num; i++) { if (strcmp(module->encoders.array[i], id) == 0) { return module; } } module = module->next; } module = obs->first_disabled_module; while (module) { for (size_t i = 0; i < module->encoders.num; i++) { if (strcmp(module->encoders.array[i], id) == 0) { return module; } } module = module->next; } return NULL; } enum obs_module_load_state obs_encoder_load_state(const char *id) { obs_module_t *module = obs_encoder_get_module(id); if (!module) { return OBS_MODULE_MISSING; } return module->load_state; } static bool init_encoder(struct obs_encoder *encoder, const char *name, obs_data_t *settings, obs_data_t *hotkey_data) { pthread_mutex_init_value(&encoder->init_mutex); pthread_mutex_init_value(&encoder->callbacks_mutex); pthread_mutex_init_value(&encoder->outputs_mutex); pthread_mutex_init_value(&encoder->pause.mutex); pthread_mutex_init_value(&encoder->roi_mutex); if (!obs_context_data_init(&encoder->context, OBS_OBJ_TYPE_ENCODER, settings, name, NULL, hotkey_data, false)) return false; if (pthread_mutex_init_recursive(&encoder->init_mutex) != 0) return false; if (pthread_mutex_init_recursive(&encoder->callbacks_mutex) != 0) return false; if (pthread_mutex_init(&encoder->outputs_mutex, NULL) != 0) return false; if (pthread_mutex_init(&encoder->pause.mutex, NULL) != 0) return false; if (pthread_mutex_init(&encoder->roi_mutex, NULL) != 0) return false; if (encoder->orig_info.get_defaults) { encoder->orig_info.get_defaults(encoder->context.settings); } if (encoder->orig_info.get_defaults2) { encoder->orig_info.get_defaults2(encoder->context.settings, encoder->orig_info.type_data); } return true; } static struct obs_encoder *create_encoder(const char *id, enum obs_encoder_type type, const char *name, obs_data_t *settings, size_t mixer_idx, obs_data_t *hotkey_data) { struct 
obs_encoder *encoder; struct obs_encoder_info *ei = find_encoder(id); bool success; if (ei && ei->type != type) return NULL; encoder = bzalloc(sizeof(struct obs_encoder)); encoder->mixer_idx = mixer_idx; if (!ei) { blog(LOG_ERROR, "Encoder ID '%s' not found", id); encoder->info.id = bstrdup(id); encoder->info.type = type; encoder->owns_info_id = true; encoder->orig_info = encoder->info; } else { encoder->info = *ei; encoder->orig_info = *ei; } success = init_encoder(encoder, name, settings, hotkey_data); if (!success) { blog(LOG_ERROR, "creating encoder '%s' (%s) failed", name, id); obs_encoder_destroy(encoder); return NULL; } obs_context_init_control(&encoder->context, encoder, (obs_destroy_cb)obs_encoder_destroy); obs_context_data_insert(&encoder->context, &obs->data.encoders_mutex, &obs->data.first_encoder); if (type == OBS_ENCODER_VIDEO) { encoder->frame_rate_divisor = 1; } blog(LOG_DEBUG, "encoder '%s' (%s) created", name, id); if (ei && ei->caps & OBS_ENCODER_CAP_DEPRECATED) { blog(LOG_WARNING, "Encoder ID '%s' is deprecated and may be removed in a future version.", id); } return encoder; } obs_encoder_t *obs_video_encoder_create(const char *id, const char *name, obs_data_t *settings, obs_data_t *hotkey_data) { if (!name || !id) return NULL; return create_encoder(id, OBS_ENCODER_VIDEO, name, settings, 0, hotkey_data); } obs_encoder_t *obs_audio_encoder_create(const char *id, const char *name, obs_data_t *settings, size_t mixer_idx, obs_data_t *hotkey_data) { if (!name || !id) return NULL; return create_encoder(id, OBS_ENCODER_AUDIO, name, settings, mixer_idx, hotkey_data); } static void receive_video(void *param, struct video_data *frame); static void receive_audio(void *param, size_t mix_idx, struct audio_data *data); static inline void get_audio_info(const struct obs_encoder *encoder, struct audio_convert_info *info) { const struct audio_output_info *aoi; aoi = audio_output_get_info(encoder->media); if (info->format == AUDIO_FORMAT_UNKNOWN) info->format = 
aoi->format; if (!info->samples_per_sec) info->samples_per_sec = aoi->samples_per_sec; if (info->speakers == SPEAKERS_UNKNOWN) info->speakers = aoi->speakers; if (encoder->info.get_audio_info) encoder->info.get_audio_info(encoder->context.data, info); } static inline void get_video_info(struct obs_encoder *encoder, struct video_scale_info *info) { const struct video_output_info *voi; voi = video_output_get_info(encoder->media); info->format = voi->format; info->colorspace = voi->colorspace; info->range = voi->range; info->width = obs_encoder_get_width(encoder); info->height = obs_encoder_get_height(encoder); if (encoder->info.get_video_info) encoder->info.get_video_info(encoder->context.data, info); /** * Prevent video output from performing an actual scale. If GPU scaling is * enabled, the voi will contain the scaled size. Therefore, GPU scaling * takes priority over self-scaling functionality. */ if ((encoder->info.caps & OBS_ENCODER_CAP_SCALING) != 0) { info->width = voi->width; info->height = voi->height; } } static inline bool gpu_encode_available(const struct obs_encoder *encoder) { struct obs_core_video_mix *video = get_mix_for_video(encoder->media); if (!video) return false; return (encoder->info.caps & OBS_ENCODER_CAP_PASS_TEXTURE) != 0 && (video->using_p010_tex || video->using_nv12_tex); } /** * GPU based rescaling is currently implemented via core video mixes, * i.e. 
a core mix with matching width/height/format/colorspace/range * will be created if it doesn't exist already to generate encoder * input */ static void maybe_set_up_gpu_rescale(struct obs_encoder *encoder) { struct obs_core_video_mix *mix, *current_mix; bool create_mix = true; struct obs_video_info ovi; const struct video_output_info *info; uint32_t width; uint32_t height; enum video_format format; enum video_colorspace space; enum video_range_type range; if (!encoder->media) return; if (encoder->gpu_scale_type == OBS_SCALE_DISABLE) return; if (!encoder->scaled_height && !encoder->scaled_width && encoder->preferred_format == VIDEO_FORMAT_NONE && encoder->preferred_space == VIDEO_CS_DEFAULT && encoder->preferred_range == VIDEO_RANGE_DEFAULT) return; info = video_output_get_info(encoder->media); width = encoder->scaled_width ? encoder->scaled_width : info->width; height = encoder->scaled_height ? encoder->scaled_height : info->height; format = encoder->preferred_format != VIDEO_FORMAT_NONE ? encoder->preferred_format : info->format; space = encoder->preferred_space != VIDEO_CS_DEFAULT ? encoder->preferred_space : info->colorspace; range = encoder->preferred_range != VIDEO_RANGE_DEFAULT ? encoder->preferred_range : info->range; current_mix = get_mix_for_video(encoder->media); if (!current_mix) return; /* Store original video_t so it can be restored if scaling is disabled. 
*/ if (!current_mix->encoder_only_mix) encoder->original_video = encoder->media; pthread_mutex_lock(&obs->video.mixes_mutex); for (size_t i = 0; i < obs->video.mixes.num; i++) { struct obs_core_video_mix *current = obs->video.mixes.array[i]; const struct video_output_info *voi = video_output_get_info(current->video); if (current_mix->view != current->view) continue; if (current->ovi.scale_type != encoder->gpu_scale_type) continue; if (voi->width != width || voi->height != height) continue; if (voi->format != format || voi->colorspace != space || voi->range != range) continue; current->encoder_refs += 1; obs_encoder_set_video(encoder, current->video); create_mix = false; break; } pthread_mutex_unlock(&obs->video.mixes_mutex); if (!create_mix) return; ovi = current_mix->ovi; ovi.output_format = format; ovi.colorspace = space; ovi.range = range; ovi.output_height = height; ovi.output_width = width; ovi.scale_type = encoder->gpu_scale_type; ovi.gpu_conversion = true; mix = obs_create_video_mix(&ovi); if (!mix) return; mix->encoder_only_mix = true; mix->encoder_refs = 1; mix->view = current_mix->view; pthread_mutex_lock(&obs->video.mixes_mutex); // double check that nobody else added a matching mix while we've created our mix for (size_t i = 0; i < obs->video.mixes.num; i++) { struct obs_core_video_mix *current = obs->video.mixes.array[i]; const struct video_output_info *voi = video_output_get_info(current->video); if (current->view != current_mix->view) continue; if (current->ovi.scale_type != encoder->gpu_scale_type) continue; if (voi->width != width || voi->height != height) continue; if (voi->format != format || voi->colorspace != space || voi->range != range) continue; obs_encoder_set_video(encoder, current->video); create_mix = false; break; } if (!create_mix) { obs_free_video_mix(mix); } else { da_push_back(obs->video.mixes, &mix); obs_encoder_set_video(encoder, mix->video); } pthread_mutex_unlock(&obs->video.mixes_mutex); } static void add_connection(struct 
obs_encoder *encoder) { if (encoder->info.type == OBS_ENCODER_AUDIO) { struct audio_convert_info audio_info = {0}; get_audio_info(encoder, &audio_info); audio_output_connect(encoder->media, encoder->mixer_idx, &audio_info, receive_audio, encoder); } else { struct video_scale_info info = {0}; get_video_info(encoder, &info); if (gpu_encode_available(encoder)) { start_gpu_encode(encoder); } else { start_raw_video(encoder->media, &info, encoder->frame_rate_divisor, receive_video, encoder); } } if (encoder->encoder_group) { pthread_mutex_lock(&encoder->encoder_group->mutex); encoder->encoder_group->num_encoders_started += 1; bool ready = encoder->encoder_group->num_encoders_started >= encoder->encoder_group->encoders.num; pthread_mutex_unlock(&encoder->encoder_group->mutex); if (ready) add_ready_encoder_group(encoder); } set_encoder_active(encoder, true); } void obs_encoder_group_actually_destroy(obs_encoder_group_t *group); static void remove_connection(struct obs_encoder *encoder, bool shutdown) { if (encoder->info.type == OBS_ENCODER_AUDIO) { audio_output_disconnect(encoder->media, encoder->mixer_idx, receive_audio, encoder); } else { if (gpu_encode_available(encoder)) { stop_gpu_encode(encoder); } else { stop_raw_video(encoder->media, receive_video, encoder); } } if (encoder->encoder_group) { pthread_mutex_lock(&encoder->encoder_group->mutex); if (--encoder->encoder_group->num_encoders_started == 0) encoder->encoder_group->start_timestamp = 0; pthread_mutex_unlock(&encoder->encoder_group->mutex); } /* obs_encoder_shutdown locks init_mutex, so don't call it on encode * errors, otherwise you can get a deadlock with outputs when they end * data capture, which will lock init_mutex and the video callback * mutex in the reverse order. 
instead, call shutdown before starting * up again */ if (shutdown) obs_encoder_shutdown(encoder); encoder->initialized = false; set_encoder_active(encoder, false); } static inline void free_audio_buffers(struct obs_encoder *encoder) { for (size_t i = 0; i < MAX_AV_PLANES; i++) { deque_free(&encoder->audio_input_buffer[i]); bfree(encoder->audio_output_buffer[i]); encoder->audio_output_buffer[i] = NULL; } } void obs_encoder_destroy(obs_encoder_t *encoder) { if (encoder) { pthread_mutex_lock(&encoder->outputs_mutex); for (size_t i = 0; i < encoder->outputs.num; i++) { struct obs_output *output = encoder->outputs.array[i]; // This happens while the output is still "active", so // remove without checking active obs_output_remove_encoder_internal(output, encoder); } da_free(encoder->outputs); pthread_mutex_unlock(&encoder->outputs_mutex); blog(LOG_DEBUG, "encoder '%s' destroyed", encoder->context.name); obs_encoder_set_group(encoder, NULL); free_audio_buffers(encoder); if (encoder->context.data) encoder->info.destroy(encoder->context.data); da_free(encoder->callbacks); da_free(encoder->roi); da_free(encoder->encoder_packet_times); pthread_mutex_destroy(&encoder->init_mutex); pthread_mutex_destroy(&encoder->callbacks_mutex); pthread_mutex_destroy(&encoder->outputs_mutex); pthread_mutex_destroy(&encoder->pause.mutex); pthread_mutex_destroy(&encoder->roi_mutex); obs_context_data_free(&encoder->context); if (encoder->owns_info_id) bfree((void *)encoder->info.id); if (encoder->last_error_message) bfree(encoder->last_error_message); if (encoder->fps_override) video_output_free_frame_rate_divisor(encoder->fps_override); bfree(encoder); } } const char *obs_encoder_get_name(const obs_encoder_t *encoder) { return obs_encoder_valid(encoder, "obs_encoder_get_name") ? 
encoder->context.name : NULL; } void obs_encoder_set_name(obs_encoder_t *encoder, const char *name) { if (!obs_encoder_valid(encoder, "obs_encoder_set_name")) return; if (name && *name && strcmp(name, encoder->context.name) != 0) obs_context_data_setname(&encoder->context, name); } static inline obs_data_t *get_defaults(const struct obs_encoder_info *info) { obs_data_t *settings = obs_data_create(); if (info->get_defaults) { info->get_defaults(settings); } if (info->get_defaults2) { info->get_defaults2(settings, info->type_data); } return settings; } obs_data_t *obs_encoder_defaults(const char *id) { const struct obs_encoder_info *info = find_encoder(id); return (info) ? get_defaults(info) : NULL; } obs_data_t *obs_encoder_get_defaults(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_defaults")) return NULL; return get_defaults(&encoder->info); } obs_properties_t *obs_get_encoder_properties(const char *id) { const struct obs_encoder_info *ei = find_encoder(id); if (ei && (ei->get_properties || ei->get_properties2)) { obs_data_t *defaults = get_defaults(ei); obs_properties_t *properties = NULL; if (ei->get_properties2) { properties = ei->get_properties2(NULL, ei->type_data); } else if (ei->get_properties) { properties = ei->get_properties(NULL); } obs_properties_apply_settings(properties, defaults); obs_data_release(defaults); return properties; } return NULL; } obs_properties_t *obs_encoder_properties(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_properties")) return NULL; if (encoder->orig_info.get_properties2) { obs_properties_t *props; props = encoder->orig_info.get_properties2(encoder->context.data, encoder->orig_info.type_data); obs_properties_apply_settings(props, encoder->context.settings); return props; } else if (encoder->orig_info.get_properties) { obs_properties_t *props; props = encoder->orig_info.get_properties(encoder->context.data); obs_properties_apply_settings(props, 
encoder->context.settings);
		return props;
	}

	return NULL;
}

void obs_encoder_update(obs_encoder_t *encoder, obs_data_t *settings)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_update"))
		return;

	obs_data_apply(encoder->context.settings, settings);

	// Encoder isn't initialized yet, only apply changes to settings
	if (!encoder->context.data)
		return;

	// Encoder doesn't support updates
	if (!encoder->info.update)
		return;

	// If the encoder is active we defer the update as it may not be
	// reentrant. Setting reconfigure_requested to true makes the changes
	// apply at the next possible moment in the encoder / GPU encoder
	// thread.
	if (encoder_active(encoder)) {
		encoder->reconfigure_requested = true;
	} else {
		encoder->info.update(encoder->context.data, encoder->context.settings);
	}
}

bool obs_encoder_get_extra_data(const obs_encoder_t *encoder, uint8_t **extra_data, size_t *size)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_get_extra_data"))
		return false;

	if (encoder->info.get_extra_data && encoder->context.data)
		return encoder->info.get_extra_data(encoder->context.data, extra_data, size);

	return false;
}

obs_data_t *obs_encoder_get_settings(const obs_encoder_t *encoder)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_get_settings"))
		return NULL;

	obs_data_addref(encoder->context.settings);
	return encoder->context.settings;
}

static inline void reset_audio_buffers(struct obs_encoder *encoder)
{
	free_audio_buffers(encoder);

	for (size_t i = 0; i < encoder->planes; i++)
		encoder->audio_output_buffer[i] = bmalloc(encoder->framesize_bytes);
}

static void initialize_audio_encoder(struct obs_encoder *encoder)
{
	struct audio_convert_info info = {0};
	get_audio_info(encoder, &info);

	encoder->samplerate = info.samples_per_sec;
	encoder->planes = get_audio_planes(info.format, info.speakers);
	encoder->blocksize = get_audio_size(info.format, info.speakers, 1);
	encoder->framesize = encoder->info.get_frame_size(encoder->context.data);

	encoder->framesize_bytes = encoder->blocksize * encoder->framesize;
	reset_audio_buffers(encoder);
}

static THREAD_LOCAL bool can_reroute = false;

static inline bool obs_encoder_initialize_internal(obs_encoder_t *encoder)
{
	if (!encoder->media) {
		blog(LOG_ERROR, "obs_encoder_initialize_internal: encoder '%s' has no media set",
		     encoder->context.name);
		return false;
	}

	if (encoder_active(encoder))
		return true;
	if (encoder->initialized)
		return true;

	obs_encoder_shutdown(encoder);
	maybe_set_up_gpu_rescale(encoder);

	if (encoder->orig_info.create) {
		can_reroute = true;
		encoder->info = encoder->orig_info;
		encoder->context.data = encoder->orig_info.create(encoder->context.settings, encoder);
		can_reroute = false;
	}
	if (!encoder->context.data)
		return false;

	if (encoder->orig_info.type == OBS_ENCODER_AUDIO)
		initialize_audio_encoder(encoder);

	encoder->initialized = true;
	return true;
}

void *obs_encoder_create_rerouted(obs_encoder_t *encoder, const char *reroute_id)
{
	if (!obs_ptr_valid(encoder, "obs_encoder_reroute"))
		return NULL;
	if (!obs_ptr_valid(reroute_id, "obs_encoder_reroute"))
		return NULL;
	if (!can_reroute)
		return NULL;

	const struct obs_encoder_info *ei = find_encoder(reroute_id);
	if (ei) {
		if (ei->type != encoder->orig_info.type || astrcmpi(ei->codec, encoder->orig_info.codec) != 0) {
			return NULL;
		}

		encoder->info = *ei;
		return encoder->info.create(encoder->context.settings, encoder);
	}

	return NULL;
}

bool obs_encoder_initialize(obs_encoder_t *encoder)
{
	bool success;

	if (!encoder)
		return false;

	pthread_mutex_lock(&encoder->init_mutex);
	success = obs_encoder_initialize_internal(encoder);
	pthread_mutex_unlock(&encoder->init_mutex);

	return success;
}

/**
 * free video mix if it's an encoder only video mix
 * see `maybe_set_up_gpu_rescale`
 */
static void maybe_clear_encoder_core_video_mix(obs_encoder_t *encoder)
{
	pthread_mutex_lock(&obs->video.mixes_mutex);
	for (size_t i = 0; i < obs->video.mixes.num; i++) {
		struct obs_core_video_mix *mix = obs->video.mixes.array[i];
		if (mix->video != encoder->media)
			continue;
		if
(!mix->encoder_only_mix) break; encoder_set_video(encoder, encoder->original_video); mix->encoder_refs -= 1; if (mix->encoder_refs == 0) { da_erase(obs->video.mixes, i); obs_free_video_mix(mix); } } pthread_mutex_unlock(&obs->video.mixes_mutex); } void obs_encoder_shutdown(obs_encoder_t *encoder) { pthread_mutex_lock(&encoder->init_mutex); if (encoder->context.data) { encoder->info.destroy(encoder->context.data); encoder->context.data = NULL; encoder->first_received = false; encoder->offset_usec = 0; encoder->start_ts = 0; encoder->frame_rate_divisor_counter = 0; maybe_clear_encoder_core_video_mix(encoder); for (size_t i = 0; i < encoder->paired_encoders.num; i++) { obs_weak_encoder_release(encoder->paired_encoders.array[i]); } da_free(encoder->paired_encoders); } obs_encoder_set_last_error(encoder, NULL); pthread_mutex_unlock(&encoder->init_mutex); } static inline size_t get_callback_idx(const struct obs_encoder *encoder, encoded_callback_t new_packet, void *param) { for (size_t i = 0; i < encoder->callbacks.num; i++) { struct encoder_callback *cb = encoder->callbacks.array + i; if (cb->new_packet == new_packet && cb->param == param) return i; } return DARRAY_INVALID; } void pause_reset(struct pause_data *pause) { pthread_mutex_lock(&pause->mutex); pause->last_video_ts = 0; pause->ts_start = 0; pause->ts_end = 0; pause->ts_offset = 0; pthread_mutex_unlock(&pause->mutex); } static inline void obs_encoder_start_internal(obs_encoder_t *encoder, encoded_callback_t new_packet, void *param) { struct encoder_callback cb = {false, new_packet, param}; bool first = false; if (!encoder->context.data || !encoder->media) return; pthread_mutex_lock(&encoder->callbacks_mutex); first = (encoder->callbacks.num == 0); size_t idx = get_callback_idx(encoder, new_packet, param); if (idx == DARRAY_INVALID) da_push_back(encoder->callbacks, &cb); pthread_mutex_unlock(&encoder->callbacks_mutex); if (first) { os_atomic_set_bool(&encoder->paused, false); pause_reset(&encoder->pause); 
encoder->cur_pts = 0; add_connection(encoder); } } void obs_encoder_start(obs_encoder_t *encoder, encoded_callback_t new_packet, void *param) { if (!obs_encoder_valid(encoder, "obs_encoder_start")) return; if (!obs_ptr_valid(new_packet, "obs_encoder_start")) return; pthread_mutex_lock(&encoder->init_mutex); obs_encoder_start_internal(encoder, new_packet, param); pthread_mutex_unlock(&encoder->init_mutex); } void obs_encoder_stop(obs_encoder_t *encoder, encoded_callback_t new_packet, void *param) { bool last = false; size_t idx; if (!obs_encoder_valid(encoder, "obs_encoder_stop")) return; if (!obs_ptr_valid(new_packet, "obs_encoder_stop")) return; pthread_mutex_lock(&encoder->init_mutex); pthread_mutex_lock(&encoder->callbacks_mutex); idx = get_callback_idx(encoder, new_packet, param); if (idx != DARRAY_INVALID) { da_erase(encoder->callbacks, idx); last = (encoder->callbacks.num == 0); } pthread_mutex_unlock(&encoder->callbacks_mutex); encoder->encoder_packet_times.num = 0; if (last) { remove_connection(encoder, true); pthread_mutex_unlock(&encoder->init_mutex); struct obs_encoder_group *group = encoder->encoder_group; /* Destroying the group all the way back here prevents a race * where destruction of the group can prematurely destroy the * encoder within internal functions. This is the point where it * is safe to destroy the group, even if the encoder is then * also destroyed. */ if (group) { pthread_mutex_lock(&group->mutex); if (group->destroy_on_stop && group->num_encoders_started == 0) obs_encoder_group_actually_destroy(group); else pthread_mutex_unlock(&group->mutex); } /* init_mutex already unlocked */ return; } pthread_mutex_unlock(&encoder->init_mutex); } const char *obs_encoder_get_codec(const obs_encoder_t *encoder) { return obs_encoder_valid(encoder, "obs_encoder_get_codec") ? encoder->info.codec : NULL; } const char *obs_get_encoder_codec(const char *id) { struct obs_encoder_info *info = find_encoder(id); return info ? 
info->codec : NULL;
}

enum obs_encoder_type obs_encoder_get_type(const obs_encoder_t *encoder)
{
	return obs_encoder_valid(encoder, "obs_encoder_get_type") ? encoder->info.type : OBS_ENCODER_AUDIO;
}

enum obs_encoder_type obs_get_encoder_type(const char *id)
{
	struct obs_encoder_info *info = find_encoder(id);
	return info ? info->type : OBS_ENCODER_AUDIO;
}

uint32_t obs_encoder_get_encoded_frames(const obs_encoder_t *encoder)
{
	return obs_encoder_valid(encoder, "obs_encoder_get_encoded_frames") ? encoder->encoded_frames : 0;
}

void obs_encoder_set_scaled_size(obs_encoder_t *encoder, uint32_t width, uint32_t height)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_set_scaled_size"))
		return;

	if (encoder->info.type != OBS_ENCODER_VIDEO) {
		blog(LOG_WARNING,
		     "obs_encoder_set_scaled_size: "
		     "encoder '%s' is not a video encoder",
		     obs_encoder_get_name(encoder));
		return;
	}

	if (encoder_active(encoder)) {
		blog(LOG_WARNING,
		     "encoder '%s': Cannot set the scaled "
		     "resolution while the encoder is active",
		     obs_encoder_get_name(encoder));
		return;
	}

	if (encoder->initialized) {
		blog(LOG_WARNING,
		     "encoder '%s': Cannot set the scaled resolution "
		     "after the encoder has been initialized",
		     obs_encoder_get_name(encoder));
		return;
	}

	const struct video_output_info *voi;
	voi = video_output_get_info(encoder->media);
	if (voi && voi->width == width && voi->height == height) {
		blog(LOG_WARNING,
		     "encoder '%s': Scaled resolution "
		     "matches output resolution, scaling "
		     "disabled",
		     obs_encoder_get_name(encoder));
		encoder->scaled_width = encoder->scaled_height = 0;
		return;
	}

	encoder->scaled_width = width;
	encoder->scaled_height = height;
}

void obs_encoder_set_gpu_scale_type(obs_encoder_t *encoder, enum obs_scale_type gpu_scale_type)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_set_gpu_scale_type"))
		return;

	if (encoder->info.type != OBS_ENCODER_VIDEO) {
		blog(LOG_WARNING,
		     "obs_encoder_set_gpu_scale_type: "
		     "encoder '%s' is not a video encoder",
		     obs_encoder_get_name(encoder));
		return;
	}

	if
(encoder_active(encoder)) { blog(LOG_WARNING, "encoder '%s': Cannot enable GPU scaling " "while the encoder is active", obs_encoder_get_name(encoder)); return; } if (encoder->initialized) { blog(LOG_WARNING, "encoder '%s': Cannot enable GPU scaling " "after the encoder has been initialized", obs_encoder_get_name(encoder)); return; } encoder->gpu_scale_type = gpu_scale_type; } bool obs_encoder_set_frame_rate_divisor(obs_encoder_t *encoder, uint32_t frame_rate_divisor) { if (!obs_encoder_valid(encoder, "obs_encoder_set_frame_rate_divisor")) return false; if (encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, "obs_encoder_set_frame_rate_divisor: " "encoder '%s' is not a video encoder", obs_encoder_get_name(encoder)); return false; } if (encoder_active(encoder)) { blog(LOG_WARNING, "encoder '%s': Cannot set frame rate divisor " "while the encoder is active", obs_encoder_get_name(encoder)); return false; } if (encoder->initialized) { blog(LOG_WARNING, "encoder '%s': Cannot set frame rate divisor " "after the encoder has been initialized", obs_encoder_get_name(encoder)); return false; } if (frame_rate_divisor == 0) { blog(LOG_WARNING, "encoder '%s': Cannot set frame " "rate divisor to 0", obs_encoder_get_name(encoder)); return false; } encoder->frame_rate_divisor = frame_rate_divisor; if (encoder->fps_override) { video_output_free_frame_rate_divisor(encoder->fps_override); encoder->fps_override = NULL; } if (encoder->media) { encoder->fps_override = video_output_create_with_frame_rate_divisor(encoder->media, encoder->frame_rate_divisor); } return true; } bool obs_encoder_scaling_enabled(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_scaling_enabled")) return false; return encoder->scaled_width || encoder->scaled_height; } uint32_t obs_encoder_get_width(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_get_width")) return 0; if (encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, 
"obs_encoder_get_width: " "encoder '%s' is not a video encoder", obs_encoder_get_name(encoder)); return 0; } if (!encoder->media) return 0; return encoder->scaled_width != 0 ? encoder->scaled_width : video_output_get_width(encoder->media); } uint32_t obs_encoder_get_height(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_get_height")) return 0; if (encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, "obs_encoder_get_height: " "encoder '%s' is not a video encoder", obs_encoder_get_name(encoder)); return 0; } if (!encoder->media) return 0; return encoder->scaled_height != 0 ? encoder->scaled_height : video_output_get_height(encoder->media); } bool obs_encoder_gpu_scaling_enabled(obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_gpu_scaling_enabled")) return 0; if (encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, "obs_encoder_gpu_scaling_enabled: " "encoder '%s' is not a video encoder", obs_encoder_get_name(encoder)); return 0; } return encoder->gpu_scale_type != OBS_SCALE_DISABLE; } enum obs_scale_type obs_encoder_get_scale_type(obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_get_scale_type")) return 0; if (encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, "obs_encoder_get_scale_type: " "encoder '%s' is not a video encoder", obs_encoder_get_name(encoder)); return 0; } return encoder->gpu_scale_type; } uint32_t obs_encoder_get_frame_rate_divisor(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_set_frame_rate_divisor")) return 0; if (encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, "obs_encoder_set_frame_rate_divisor: " "encoder '%s' is not a video encoder", obs_encoder_get_name(encoder)); return 0; } return encoder->frame_rate_divisor; } uint32_t obs_encoder_get_sample_rate(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_get_sample_rate")) return 0; if (encoder->info.type != OBS_ENCODER_AUDIO) { 
blog(LOG_WARNING, "obs_encoder_get_sample_rate: " "encoder '%s' is not an audio encoder", obs_encoder_get_name(encoder)); return 0; } if (!encoder->media) return 0; return encoder->samplerate != 0 ? encoder->samplerate : audio_output_get_sample_rate(encoder->media); } size_t obs_encoder_get_frame_size(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_get_frame_size")) return 0; if (encoder->info.type != OBS_ENCODER_AUDIO) { blog(LOG_WARNING, "obs_encoder_get_frame_size: " "encoder '%s' is not an audio encoder", obs_encoder_get_name(encoder)); return 0; } return encoder->framesize; } size_t obs_encoder_get_mixer_index(const obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_get_mixer_index")) return 0; if (encoder->info.type != OBS_ENCODER_AUDIO) { blog(LOG_WARNING, "obs_encoder_get_mixer_index: " "encoder '%s' is not an audio encoder", obs_encoder_get_name(encoder)); return 0; } return encoder->mixer_idx; } void obs_encoder_set_video(obs_encoder_t *encoder, video_t *video) { if (!obs_encoder_valid(encoder, "obs_encoder_set_video")) return; if (encoder->info.type != OBS_ENCODER_VIDEO) { blog(LOG_WARNING, "obs_encoder_set_video: " "encoder '%s' is not a video encoder", obs_encoder_get_name(encoder)); return; } if (encoder_active(encoder)) { blog(LOG_WARNING, "encoder '%s': Cannot apply a new video_t " "object while the encoder is active", obs_encoder_get_name(encoder)); return; } if (encoder->initialized) { blog(LOG_WARNING, "encoder '%s': Cannot apply a new video_t object " "after the encoder has been initialized", obs_encoder_get_name(encoder)); return; } encoder_set_video(encoder, video); } static void encoder_set_video(obs_encoder_t *encoder, video_t *video) { const struct video_output_info *voi; if (encoder->fps_override) { video_output_free_frame_rate_divisor(encoder->fps_override); encoder->fps_override = NULL; } if (video) { voi = video_output_get_info(video); encoder->media = video; encoder->timebase_num = 
voi->fps_den;
		encoder->timebase_den = voi->fps_num;

		if (encoder->frame_rate_divisor) {
			encoder->fps_override =
				video_output_create_with_frame_rate_divisor(video, encoder->frame_rate_divisor);
		}
	} else {
		encoder->media = NULL;
		encoder->timebase_num = 0;
		encoder->timebase_den = 0;
	}
}

void obs_encoder_set_audio(obs_encoder_t *encoder, audio_t *audio)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_set_audio"))
		return;

	if (encoder->info.type != OBS_ENCODER_AUDIO) {
		blog(LOG_WARNING,
		     "obs_encoder_set_audio: "
		     "encoder '%s' is not an audio encoder",
		     obs_encoder_get_name(encoder));
		return;
	}

	if (encoder_active(encoder)) {
		blog(LOG_WARNING,
		     "encoder '%s': Cannot apply a new audio_t "
		     "object while the encoder is active",
		     obs_encoder_get_name(encoder));
		return;
	}

	if (audio) {
		encoder->media = audio;
		encoder->timebase_num = 1;
		encoder->timebase_den = audio_output_get_sample_rate(audio);
	} else {
		encoder->media = NULL;
		encoder->timebase_num = 0;
		encoder->timebase_den = 0;
	}
}

video_t *obs_encoder_video(const obs_encoder_t *encoder)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_video"))
		return NULL;

	if (encoder->info.type != OBS_ENCODER_VIDEO) {
		blog(LOG_WARNING,
		     "obs_encoder_video: "
		     "encoder '%s' is not a video encoder",
		     obs_encoder_get_name(encoder));
		return NULL;
	}

	return encoder->fps_override ?
encoder->fps_override : encoder->media;
}

video_t *obs_encoder_parent_video(const obs_encoder_t *encoder)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_parent_video"))
		return NULL;

	if (encoder->info.type != OBS_ENCODER_VIDEO) {
		blog(LOG_WARNING,
		     "obs_encoder_parent_video: "
		     "encoder '%s' is not a video encoder",
		     obs_encoder_get_name(encoder));
		return NULL;
	}

	return encoder->media;
}

audio_t *obs_encoder_audio(const obs_encoder_t *encoder)
{
	if (!obs_encoder_valid(encoder, "obs_encoder_audio"))
		return NULL;

	if (encoder->info.type != OBS_ENCODER_AUDIO) {
		blog(LOG_WARNING,
		     "obs_encoder_audio: "
		     "encoder '%s' is not an audio encoder",
		     obs_encoder_get_name(encoder));
		return NULL;
	}

	return encoder->media;
}

bool obs_encoder_active(const obs_encoder_t *encoder)
{
	return obs_encoder_valid(encoder, "obs_encoder_active") ? encoder_active(encoder) : false;
}

static inline bool get_sei(const struct obs_encoder *encoder, uint8_t **sei, size_t *size)
{
	if (encoder->info.get_sei_data)
		return encoder->info.get_sei_data(encoder->context.data, sei, size);
	return false;
}

static void send_first_video_packet(struct obs_encoder *encoder, struct encoder_callback *cb,
				    struct encoder_packet *packet, struct encoder_packet_time *packet_time)
{
	struct encoder_packet first_packet;
	DARRAY(uint8_t) data;
	uint8_t *sei;
	size_t size;

	/* always wait for first keyframe */
	if (!packet->keyframe)
		return;

	da_init(data);

	if (!get_sei(encoder, &sei, &size) || !sei || !size) {
		cb->new_packet(cb->param, packet, packet_time);
		cb->sent_first_packet = true;
		return;
	}

	da_push_back_array(data, sei, size);
	da_push_back_array(data, packet->data, packet->size);

	first_packet = *packet;
	first_packet.data = data.array;
	first_packet.size = data.num;

	cb->new_packet(cb->param, &first_packet, packet_time);
	cb->sent_first_packet = true;

	da_free(data);
}

static const char *send_packet_name = "send_packet";
static inline void send_packet(struct obs_encoder *encoder, struct encoder_callback *cb, struct encoder_packet
*packet, struct encoder_packet_time *packet_time) { profile_start(send_packet_name); /* include SEI in first video packet */ if (encoder->info.type == OBS_ENCODER_VIDEO && !cb->sent_first_packet) send_first_video_packet(encoder, cb, packet, packet_time); else cb->new_packet(cb->param, packet, packet_time); profile_end(send_packet_name); } void full_stop(struct obs_encoder *encoder) { if (encoder) { pthread_mutex_lock(&encoder->outputs_mutex); for (size_t i = 0; i < encoder->outputs.num; i++) { struct obs_output *output = encoder->outputs.array[i]; obs_output_force_stop(output); pthread_mutex_lock(&output->interleaved_mutex); output->info.encoded_packet(output->context.data, NULL); pthread_mutex_unlock(&output->interleaved_mutex); } pthread_mutex_unlock(&encoder->outputs_mutex); pthread_mutex_lock(&encoder->callbacks_mutex); da_free(encoder->callbacks); pthread_mutex_unlock(&encoder->callbacks_mutex); remove_connection(encoder, false); } } void send_off_encoder_packet(obs_encoder_t *encoder, bool success, bool received, struct encoder_packet *pkt) { if (!success) { blog(LOG_ERROR, "Error encoding with encoder '%s'", encoder->context.name); full_stop(encoder); return; } if (received) { if (!encoder->first_received) { encoder->offset_usec = packet_dts_usec(pkt); encoder->first_received = true; } /* we use system time here to ensure sync with other encoders, * you do not want to use relative timestamps here */ pkt->dts_usec = encoder->start_ts / 1000 + packet_dts_usec(pkt) - encoder->offset_usec; pkt->sys_dts_usec = pkt->dts_usec; pthread_mutex_lock(&encoder->pause.mutex); pkt->sys_dts_usec += encoder->pause.ts_offset / 1000; pthread_mutex_unlock(&encoder->pause.mutex); /* Find the encoder packet timing entry in the encoder * timing array with the corresponding PTS value, then remove * the entry from the array to ensure it doesn't continuously fill. 
*/ struct encoder_packet_time ept_local; struct encoder_packet_time *ept = NULL; bool found_ept = false; if (pkt->type == OBS_ENCODER_VIDEO) { for (size_t i = encoder->encoder_packet_times.num; i > 0; i--) { ept = &encoder->encoder_packet_times.array[i - 1]; if (ept->pts == pkt->pts) { ept_local = *ept; da_erase(encoder->encoder_packet_times, i - 1); found_ept = true; break; } } if (!found_ept) blog(LOG_DEBUG, "%s: Encoder packet timing for PTS %" PRId64 " not found", __FUNCTION__, pkt->pts); } pthread_mutex_lock(&encoder->callbacks_mutex); for (size_t i = encoder->callbacks.num; i > 0; i--) { struct encoder_callback *cb; cb = encoder->callbacks.array + (i - 1); send_packet(encoder, cb, pkt, found_ept ? &ept_local : NULL); } pthread_mutex_unlock(&encoder->callbacks_mutex); // Count number of video frames successfully encoded if (pkt->type == OBS_ENCODER_VIDEO) encoder->encoded_frames++; } } static const char *do_encode_name = "do_encode"; bool do_encode(struct obs_encoder *encoder, struct encoder_frame *frame, const uint64_t *frame_cts) { profile_start(do_encode_name); if (!encoder->profile_encoder_encode_name) encoder->profile_encoder_encode_name = profile_store_name(obs_get_profiler_name_store(), "encode(%s)", encoder->context.name); struct encoder_packet pkt = {0}; bool received = false; bool success; uint64_t fer_ts = 0; if (encoder->reconfigure_requested) { encoder->reconfigure_requested = false; encoder->info.update(encoder->context.data, encoder->context.settings); } pkt.timebase_num = encoder->timebase_num * encoder->frame_rate_divisor; pkt.timebase_den = encoder->timebase_den; pkt.encoder = encoder; /* Get the frame encode request timestamp. This * needs to be read just before the encode request. 
*/ fer_ts = os_gettime_ns(); profile_start(encoder->profile_encoder_encode_name); success = encoder->info.encode(encoder->context.data, frame, &pkt, &received); profile_end(encoder->profile_encoder_encode_name); /* Generate and enqueue the frame timing metrics, namely * the CTS (composition time), FER (frame encode request), FERC * (frame encode request complete) and current PTS. PTS is used to * associate the frame timing data with the encode packet. */ if (frame_cts) { struct encoder_packet_time *ept = da_push_back_new(encoder->encoder_packet_times); // Get the frame encode request complete timestamp if (success) { ept->ferc = os_gettime_ns(); } else { // Encode had error, set ferc to 0 ept->ferc = 0; } ept->pts = frame->pts; ept->cts = *frame_cts; ept->fer = fer_ts; } send_off_encoder_packet(encoder, success, received, &pkt); profile_end(do_encode_name); return success; } static inline bool video_pause_check_internal(struct pause_data *pause, uint64_t ts) { pause->last_video_ts = ts; if (!pause->ts_start) { return false; } if (ts == pause->ts_end) { pause->ts_start = 0; pause->ts_end = 0; } else if (ts >= pause->ts_start) { return true; } return false; } bool video_pause_check(struct pause_data *pause, uint64_t timestamp) { bool ignore_frame; pthread_mutex_lock(&pause->mutex); ignore_frame = video_pause_check_internal(pause, timestamp); pthread_mutex_unlock(&pause->mutex); return ignore_frame; } static const char *receive_video_name = "receive_video"; static void receive_video(void *param, struct video_data *frame) { profile_start(receive_video_name); struct obs_encoder *encoder = param; struct encoder_frame enc_frame; if (encoder->encoder_group && !encoder->start_ts) { struct obs_encoder_group *group = encoder->encoder_group; bool ready = false; pthread_mutex_lock(&group->mutex); ready = group->start_timestamp == frame->timestamp; pthread_mutex_unlock(&group->mutex); if (!ready) goto wait_for_audio; } if (!encoder->first_received && 
encoder->paired_encoders.num) { for (size_t i = 0; i < encoder->paired_encoders.num; i++) { obs_encoder_t *paired = obs_weak_encoder_get_encoder(encoder->paired_encoders.array[i]); if (!paired) continue; if (!paired->first_received || paired->first_raw_ts > frame->timestamp) { obs_encoder_release(paired); goto wait_for_audio; } obs_encoder_release(paired); } } if (video_pause_check(&encoder->pause, frame->timestamp)) goto wait_for_audio; memset(&enc_frame, 0, sizeof(struct encoder_frame)); for (size_t i = 0; i < MAX_AV_PLANES; i++) { enc_frame.data[i] = frame->data[i]; enc_frame.linesize[i] = frame->linesize[i]; } if (!encoder->start_ts) encoder->start_ts = frame->timestamp; enc_frame.frames = 1; enc_frame.pts = encoder->cur_pts; if (do_encode(encoder, &enc_frame, &frame->timestamp)) encoder->cur_pts += encoder->timebase_num * encoder->frame_rate_divisor; wait_for_audio: profile_end(receive_video_name); } static void clear_audio(struct obs_encoder *encoder) { for (size_t i = 0; i < encoder->planes; i++) deque_free(&encoder->audio_input_buffer[i]); } static inline void push_back_audio(struct obs_encoder *encoder, struct audio_data *data, size_t size, size_t offset_size) { if (offset_size >= size) return; size -= offset_size; /* push in to the circular buffer */ for (size_t i = 0; i < encoder->planes; i++) deque_push_back(&encoder->audio_input_buffer[i], data->data[i] + offset_size, size); } static inline size_t calc_offset_size(struct obs_encoder *encoder, uint64_t v_start_ts, uint64_t a_start_ts) { uint64_t offset = v_start_ts - a_start_ts; offset = util_mul_div64(offset, encoder->samplerate, 1000000000ULL); return (size_t)offset * encoder->blocksize; } static void start_from_buffer(struct obs_encoder *encoder, uint64_t v_start_ts) { size_t size = encoder->audio_input_buffer[0].size; struct audio_data audio = {0}; size_t offset_size = 0; for (size_t i = 0; i < MAX_AV_PLANES; i++) { audio.data[i] = encoder->audio_input_buffer[i].data; 
memset(&encoder->audio_input_buffer[i], 0, sizeof(struct deque)); } if (encoder->first_raw_ts < v_start_ts) offset_size = calc_offset_size(encoder, v_start_ts, encoder->first_raw_ts); push_back_audio(encoder, &audio, size, offset_size); for (size_t i = 0; i < MAX_AV_PLANES; i++) bfree(audio.data[i]); } static const char *buffer_audio_name = "buffer_audio"; static bool buffer_audio(struct obs_encoder *encoder, struct audio_data *data) { profile_start(buffer_audio_name); size_t size = data->frames * encoder->blocksize; size_t offset_size = 0; bool success = true; struct obs_encoder *paired_encoder = NULL; /* Audio encoders can only be paired to one video encoder */ if (encoder->paired_encoders.num) { paired_encoder = obs_weak_encoder_get_encoder(encoder->paired_encoders.array[0]); } if (!encoder->start_ts && paired_encoder) { uint64_t end_ts = data->timestamp; uint64_t v_start_ts = paired_encoder->start_ts; /* no video yet, so don't start audio */ if (!v_start_ts) { success = false; goto fail; } /* audio starting point still not synced with video starting * point, so don't start audio */ end_ts += util_mul_div64(data->frames, 1000000000ULL, encoder->samplerate); if (end_ts <= v_start_ts) { success = false; goto fail; } /* ready to start audio, truncate if necessary */ if (data->timestamp < v_start_ts) offset_size = calc_offset_size(encoder, v_start_ts, data->timestamp); if (data->timestamp <= v_start_ts) clear_audio(encoder); encoder->start_ts = v_start_ts; /* use currently buffered audio instead */ if (v_start_ts < data->timestamp) { start_from_buffer(encoder, v_start_ts); } } else if (!encoder->start_ts && !paired_encoder) { encoder->start_ts = data->timestamp; } fail: push_back_audio(encoder, data, size, offset_size); obs_encoder_release(paired_encoder); profile_end(buffer_audio_name); return success; } static bool send_audio_data(struct obs_encoder *encoder) { struct encoder_frame enc_frame; memset(&enc_frame, 0, sizeof(struct encoder_frame)); for (size_t i = 0; 
i < encoder->planes; i++) { deque_pop_front(&encoder->audio_input_buffer[i], encoder->audio_output_buffer[i], encoder->framesize_bytes); enc_frame.data[i] = encoder->audio_output_buffer[i]; enc_frame.linesize[i] = (uint32_t)encoder->framesize_bytes; } enc_frame.frames = (uint32_t)encoder->framesize; enc_frame.pts = encoder->cur_pts; if (!do_encode(encoder, &enc_frame, NULL)) return false; encoder->cur_pts += encoder->framesize; return true; } static void pause_audio(struct pause_data *pause, struct audio_data *data, size_t sample_rate) { uint64_t cutoff_frames = pause->ts_start - data->timestamp; cutoff_frames = ns_to_audio_frames(sample_rate, cutoff_frames); data->frames = (uint32_t)cutoff_frames; } static void unpause_audio(struct pause_data *pause, struct audio_data *data, size_t sample_rate) { uint64_t cutoff_frames = pause->ts_end - data->timestamp; cutoff_frames = ns_to_audio_frames(sample_rate, cutoff_frames); for (size_t i = 0; i < MAX_AV_PLANES; i++) { if (!data->data[i]) break; data->data[i] += cutoff_frames * sizeof(float); } data->timestamp = pause->ts_start; data->frames = data->frames - (uint32_t)cutoff_frames; pause->ts_start = 0; pause->ts_end = 0; } static inline bool audio_pause_check_internal(struct pause_data *pause, struct audio_data *data, size_t sample_rate) { uint64_t end_ts; if (!pause->ts_start) { return false; } end_ts = data->timestamp + audio_frames_to_ns(sample_rate, data->frames); if (pause->ts_start >= data->timestamp) { if (pause->ts_start <= end_ts) { pause_audio(pause, data, sample_rate); return !data->frames; } } else { if (pause->ts_end >= data->timestamp && pause->ts_end <= end_ts) { unpause_audio(pause, data, sample_rate); return !data->frames; } return true; } return false; } bool audio_pause_check(struct pause_data *pause, struct audio_data *data, size_t sample_rate) { bool ignore_audio; pthread_mutex_lock(&pause->mutex); ignore_audio = audio_pause_check_internal(pause, data, sample_rate); data->timestamp -= 
pause->ts_offset; pthread_mutex_unlock(&pause->mutex); return ignore_audio; } static const char *receive_audio_name = "receive_audio"; static void receive_audio(void *param, size_t mix_idx, struct audio_data *in) { profile_start(receive_audio_name); struct obs_encoder *encoder = param; struct audio_data audio = *in; if (!encoder->first_received) { encoder->first_raw_ts = audio.timestamp; encoder->first_received = true; clear_audio(encoder); } if (audio_pause_check(&encoder->pause, &audio, encoder->samplerate)) goto end; if (!buffer_audio(encoder, &audio)) goto end; while (encoder->audio_input_buffer[0].size >= encoder->framesize_bytes) { if (!send_audio_data(encoder)) { break; } } UNUSED_PARAMETER(mix_idx); end: profile_end(receive_audio_name); } void obs_encoder_add_output(struct obs_encoder *encoder, struct obs_output *output) { if (!encoder || !output) return; pthread_mutex_lock(&encoder->outputs_mutex); da_push_back(encoder->outputs, &output); pthread_mutex_unlock(&encoder->outputs_mutex); } void obs_encoder_remove_output(struct obs_encoder *encoder, struct obs_output *output) { if (!encoder || !output) return; pthread_mutex_lock(&encoder->outputs_mutex); da_erase_item(encoder->outputs, &output); pthread_mutex_unlock(&encoder->outputs_mutex); } void obs_encoder_packet_create_instance(struct encoder_packet *dst, const struct encoder_packet *src) { long *p_refs; *dst = *src; p_refs = bmalloc(src->size + sizeof(long)); dst->data = (void *)(p_refs + 1); *p_refs = 1; memcpy(dst->data, src->data, src->size); } void obs_encoder_packet_ref(struct encoder_packet *dst, struct encoder_packet *src) { if (!src) return; if (src->data) { long *p_refs = ((long *)src->data) - 1; os_atomic_inc_long(p_refs); } *dst = *src; } void obs_encoder_packet_release(struct encoder_packet *pkt) { if (!pkt) return; if (pkt->data) { long *p_refs = ((long *)pkt->data) - 1; if (os_atomic_dec_long(p_refs) == 0) bfree(p_refs); } memset(pkt, 0, sizeof(struct encoder_packet)); } void 
obs_encoder_set_preferred_video_format(obs_encoder_t *encoder, enum video_format format) { if (!encoder || encoder->info.type != OBS_ENCODER_VIDEO) return; encoder->preferred_format = format; } enum video_format obs_encoder_get_preferred_video_format(const obs_encoder_t *encoder) { if (!encoder || encoder->info.type != OBS_ENCODER_VIDEO) return VIDEO_FORMAT_NONE; return encoder->preferred_format; } void obs_encoder_set_preferred_color_space(obs_encoder_t *encoder, enum video_colorspace colorspace) { if (!encoder || encoder->info.type != OBS_ENCODER_VIDEO) return; encoder->preferred_space = colorspace; } enum video_colorspace obs_encoder_get_preferred_color_space(const obs_encoder_t *encoder) { if (!encoder || encoder->info.type != OBS_ENCODER_VIDEO) return VIDEO_CS_DEFAULT; return encoder->preferred_space; } void obs_encoder_set_preferred_range(obs_encoder_t *encoder, enum video_range_type range) { if (!encoder || encoder->info.type != OBS_ENCODER_VIDEO) return; encoder->preferred_range = range; } enum video_range_type obs_encoder_get_preferred_range(const obs_encoder_t *encoder) { if (!encoder || encoder->info.type != OBS_ENCODER_VIDEO) return VIDEO_RANGE_DEFAULT; return encoder->preferred_range; } void obs_encoder_release(obs_encoder_t *encoder) { if (!encoder) return; obs_weak_encoder_t *control = get_weak(encoder); if (obs_ref_release(&control->ref)) { // The order of operations is important here since // get_context_by_name in obs.c relies on weak refs // being alive while the context is listed obs_encoder_destroy(encoder); obs_weak_encoder_release(control); } } void obs_weak_encoder_addref(obs_weak_encoder_t *weak) { if (!weak) return; obs_weak_ref_addref(&weak->ref); } void obs_weak_encoder_release(obs_weak_encoder_t *weak) { if (!weak) return; if (obs_weak_ref_release(&weak->ref)) bfree(weak); } obs_encoder_t *obs_encoder_get_ref(obs_encoder_t *encoder) { if (!encoder) return NULL; return obs_weak_encoder_get_encoder(get_weak(encoder)); } obs_weak_encoder_t 
*obs_encoder_get_weak_encoder(obs_encoder_t *encoder) { if (!encoder) return NULL; obs_weak_encoder_t *weak = get_weak(encoder); obs_weak_encoder_addref(weak); return weak; } obs_encoder_t *obs_weak_encoder_get_encoder(obs_weak_encoder_t *weak) { if (!weak) return NULL; if (obs_weak_ref_get_ref(&weak->ref)) return weak->encoder; return NULL; } bool obs_weak_encoder_references_encoder(obs_weak_encoder_t *weak, obs_encoder_t *encoder) { return weak && encoder && weak->encoder == encoder; } void *obs_encoder_get_type_data(obs_encoder_t *encoder) { return obs_encoder_valid(encoder, "obs_encoder_get_type_data") ? encoder->orig_info.type_data : NULL; } const char *obs_encoder_get_id(const obs_encoder_t *encoder) { return obs_encoder_valid(encoder, "obs_encoder_get_id") ? encoder->orig_info.id : NULL; } uint32_t obs_get_encoder_caps(const char *encoder_id) { struct obs_encoder_info *info = find_encoder(encoder_id); return info ? info->caps : 0; } uint32_t obs_encoder_get_caps(const obs_encoder_t *encoder) { return obs_encoder_valid(encoder, "obs_encoder_get_caps") ? encoder->orig_info.caps : 0; } bool obs_encoder_paused(const obs_encoder_t *encoder) { return obs_encoder_valid(encoder, "obs_encoder_paused") ? os_atomic_load_bool(&encoder->paused) : false; } const char *obs_encoder_get_last_error(obs_encoder_t *encoder) { if (!obs_encoder_valid(encoder, "obs_encoder_get_last_error")) return NULL; return encoder->last_error_message; } void obs_encoder_set_last_error(obs_encoder_t *encoder, const char *message) { if (!obs_encoder_valid(encoder, "obs_encoder_set_last_error")) return; if (encoder->last_error_message) bfree(encoder->last_error_message); if (message) encoder->last_error_message = bstrdup(message); else encoder->last_error_message = NULL; } uint64_t obs_encoder_get_pause_offset(const obs_encoder_t *encoder) { return encoder ? 
encoder->pause.ts_offset : 0; } bool obs_encoder_has_roi(const obs_encoder_t *encoder) { return encoder->roi.num > 0; } bool obs_encoder_add_roi(obs_encoder_t *encoder, const struct obs_encoder_roi *roi) { if (!roi) return false; if (!(encoder->info.caps & OBS_ENCODER_CAP_ROI)) return false; /* Area smaller than the smallest possible block (16x16) */ if (roi->bottom - roi->top < 16 || roi->right - roi->left < 16) return false; /* Other invalid ROIs */ if (roi->priority < -1.0f || roi->priority > 1.0f) return false; pthread_mutex_lock(&encoder->roi_mutex); da_push_back(encoder->roi, roi); encoder->roi_increment++; pthread_mutex_unlock(&encoder->roi_mutex); return true; } void obs_encoder_clear_roi(obs_encoder_t *encoder) { if (!encoder->roi.num) return; pthread_mutex_lock(&encoder->roi_mutex); da_clear(encoder->roi); encoder->roi_increment++; pthread_mutex_unlock(&encoder->roi_mutex); } void obs_encoder_enum_roi(obs_encoder_t *encoder, void (*enum_proc)(void *, struct obs_encoder_roi *), void *param) { float scale_x = 0; float scale_y = 0; /* Scale ROI passed to callback to output size */ if (encoder->scaled_height && encoder->scaled_width) { const uint32_t width = video_output_get_width(encoder->media); const uint32_t height = video_output_get_height(encoder->media); if (!width || !height) return; scale_x = (float)encoder->scaled_width / (float)width; scale_y = (float)encoder->scaled_height / (float)height; } pthread_mutex_lock(&encoder->roi_mutex); size_t idx = encoder->roi.num; while (idx) { struct obs_encoder_roi *roi = &encoder->roi.array[--idx]; if (scale_x > 0 && scale_y > 0) { struct obs_encoder_roi scaled_roi = { .top = (uint32_t)((float)roi->top * scale_y), .bottom = (uint32_t)((float)roi->bottom * scale_y), .left = (uint32_t)((float)roi->left * scale_x), .right = (uint32_t)((float)roi->right * scale_x), .priority = roi->priority, }; enum_proc(param, &scaled_roi); } else { enum_proc(param, roi); } } pthread_mutex_unlock(&encoder->roi_mutex); } uint32_t 
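`obs_encoder_enum_roi` above rescales each region from source resolution to the encoder's scaled output before invoking the callback. The arithmetic can be checked in isolation (the struct and helper below are a hypothetical standalone sketch, not the libobs types):

```c
#include <assert.h>
#include <stdint.h>

struct roi_rect {
	uint32_t top, bottom, left, right;
	float priority; /* -1.0 (deprioritize) .. 1.0 (prioritize) */
};

/* Scale a ROI defined against the source size to the output size,
 * deriving scale factors the same way obs_encoder_enum_roi does:
 * scaled_dim / source_dim, applied per axis with float math and
 * truncating casts back to integer pixel coordinates. */
static struct roi_rect scale_roi(struct roi_rect r, uint32_t src_w,
				 uint32_t src_h, uint32_t out_w, uint32_t out_h)
{
	const float sx = (float)out_w / (float)src_w;
	const float sy = (float)out_h / (float)src_h;
	struct roi_rect out = {
		.top = (uint32_t)((float)r.top * sy),
		.bottom = (uint32_t)((float)r.bottom * sy),
		.left = (uint32_t)((float)r.left * sx),
		.right = (uint32_t)((float)r.right * sx),
		.priority = r.priority, /* priority is not scaled */
	};
	return out;
}
```

Note the original also rejects regions smaller than a 16x16 block before they ever reach this path, since that is the smallest unit ROI-capable encoders can address.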
obs_encoder_get_roi_increment(const obs_encoder_t *encoder) { return encoder->roi_increment; } bool obs_encoder_set_group(obs_encoder_t *encoder, obs_encoder_group_t *group) { if (!obs_encoder_valid(encoder, "obs_encoder_set_group")) return false; if (obs_encoder_active(encoder)) { blog(LOG_ERROR, "obs_encoder_set_group: encoder '%s' is already active", obs_encoder_get_name(encoder)); return false; } if (encoder->encoder_group) { struct obs_encoder_group *old_group = encoder->encoder_group; pthread_mutex_lock(&old_group->mutex); if (old_group->num_encoders_started) { pthread_mutex_unlock(&old_group->mutex); blog(LOG_ERROR, "obs_encoder_set_group: encoder '%s' existing group has started encoders", obs_encoder_get_name(encoder)); return false; } da_erase_item(old_group->encoders, &encoder); obs_encoder_release(encoder); pthread_mutex_unlock(&old_group->mutex); } if (!group) return true; pthread_mutex_lock(&group->mutex); if (group->num_encoders_started) { pthread_mutex_unlock(&group->mutex); blog(LOG_ERROR, "obs_encoder_set_group: specified group has started encoders"); return false; } obs_encoder_t *ref = obs_encoder_get_ref(encoder); if (!ref) { pthread_mutex_unlock(&group->mutex); return false; } da_push_back(group->encoders, &ref); encoder->encoder_group = group; pthread_mutex_unlock(&group->mutex); return true; } obs_encoder_group_t *obs_encoder_group_create() { struct obs_encoder_group *group = bzalloc(sizeof(struct obs_encoder_group)); pthread_mutex_init_value(&group->mutex); if (pthread_mutex_init(&group->mutex, NULL) != 0) { bfree(group); return NULL; } return group; } void obs_encoder_group_actually_destroy(obs_encoder_group_t *group) { for (size_t i = 0; i < group->encoders.num; i++) { struct obs_encoder *encoder = group->encoders.array[i]; encoder->encoder_group = NULL; obs_encoder_release(encoder); } da_free(group->encoders); pthread_mutex_unlock(&group->mutex); pthread_mutex_destroy(&group->mutex); bfree(group); } void 
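`obs_encoder_group_create` above shows a defensive allocation pattern used throughout libobs: zero-allocate, initialize the mutex, and unwind on failure so callers only ever observe a fully-initialized object or NULL. A generic sketch of just that pattern (hypothetical `group` type, not the libobs struct):

```c
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

struct group {
	pthread_mutex_t mutex;
	size_t num_started;
};

/* Return a fully-initialized object or NULL -- never a half-built one.
 * If pthread_mutex_init fails, the partially built object is freed
 * before returning, exactly as obs_encoder_group_create does. */
static struct group *group_create(void)
{
	struct group *g = calloc(1, sizeof(*g));
	if (!g)
		return NULL;
	if (pthread_mutex_init(&g->mutex, NULL) != 0) {
		free(g);
		return NULL;
	}
	return g;
}

static void group_destroy(struct group *g)
{
	if (!g)
		return;
	pthread_mutex_destroy(&g->mutex);
	free(g);
}
```

The payoff is in the destroy path: because creation can never leave a half-initialized mutex behind, `group_destroy` may unconditionally call `pthread_mutex_destroy` on any non-NULL pointer it receives.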
obs_encoder_group_destroy(obs_encoder_group_t *group) { if (!group) return; pthread_mutex_lock(&group->mutex); if (group->num_encoders_started) { group->destroy_on_stop = true; pthread_mutex_unlock(&group->mutex); return; } obs_encoder_group_actually_destroy(group); } bool obs_encoder_video_tex_active(const obs_encoder_t *encoder, enum video_format format) { struct obs_core_video_mix *mix = get_mix_for_video(encoder->media); if (format == VIDEO_FORMAT_NV12) return mix->using_nv12_tex; if (format == VIDEO_FORMAT_P010) return mix->using_p010_tex; return false; } uint32_t obs_encoder_get_priming_samples(const obs_encoder_t *encoder) { if (encoder->info.get_priming_samples) { return encoder->info.get_priming_samples(encoder->context.data); } return 0; } obs-studio-32.1.0-sources/libobs/obs-module.h000644 001751 001751 00000020323 15153330235 021725 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "obs.h" #ifdef __cplusplus #define MODULE_EXPORT extern "C" EXPORT #define MODULE_EXTERN extern "C" #else #define MODULE_EXPORT EXPORT #define MODULE_EXTERN extern #endif /** * @file * @brief This file is used by modules for module declaration and module * exports. 
* * @page modules_page Modules * @brief Modules or plugins are libraries that can be loaded by libobs and * subsequently interact with it. * * @section modules_overview_sec Overview * * Modules can provide a wide range of functionality to libobs, they for example * can feed captured audio or video to libobs, or interface with an encoder to * provide some codec to libobs. * * @section modules_basic_sec Creating a basic module * * In order to create a module for libobs you will need to build a shared * library that implements a basic interface for libobs to interact with. * The following code would create a simple source plugin without localization: * @code #include OBS_DECLARE_MODULE() extern struct obs_source_info my_source; bool obs_module_load(void) { obs_register_source(&my_source); return true; } @endcode * * If you want to enable localization, you will need to also use the * @ref OBS_MODULE_USE_DEFAULT_LOCALE() macro. * * Other module types: * - @ref obs_register_encoder() * - @ref obs_register_service() * - @ref obs_register_output() * */ /** Required: Declares a libobs module. */ #define OBS_DECLARE_MODULE() \ static obs_module_t *obs_module_pointer; \ MODULE_EXPORT void obs_module_set_pointer(obs_module_t *module); \ void obs_module_set_pointer(obs_module_t *module) \ { \ obs_module_pointer = module; \ } \ obs_module_t *obs_current_module(void) \ { \ return obs_module_pointer; \ } \ MODULE_EXPORT uint32_t obs_module_ver(void); \ uint32_t obs_module_ver(void) \ { \ return LIBOBS_API_VER; \ } /** * Required: Called when the module is loaded. Use this function to load all * the sources/encoders/outputs/services for your module, or anything else that * may need loading. * * @return Return true to continue loading the module, otherwise * false to indicate failure and unload the module */ MODULE_EXPORT bool obs_module_load(void); /** Optional: Called when the module is unloaded. 
*/ MODULE_EXPORT void obs_module_unload(void); /** Optional: Called when all modules have finished loading */ MODULE_EXPORT void obs_module_post_load(void); /** Called to set the current locale data for the module. */ MODULE_EXPORT void obs_module_set_locale(const char *locale); /** Called to free the current locale data for the module. */ MODULE_EXPORT void obs_module_free_locale(void); /** Optional: Use this macro in a module to use default locale handling. */ #define OBS_MODULE_USE_DEFAULT_LOCALE(module_name, default_locale) \ lookup_t *obs_module_lookup = NULL; \ const char *obs_module_text(const char *val) \ { \ const char *out = val; \ text_lookup_getstr(obs_module_lookup, val, &out); \ return out; \ } \ bool obs_module_get_string(const char *val, const char **out) \ { \ return text_lookup_getstr(obs_module_lookup, val, out); \ } \ void obs_module_set_locale(const char *locale) \ { \ if (obs_module_lookup) \ text_lookup_destroy(obs_module_lookup); \ obs_module_lookup = obs_module_load_locale(obs_current_module(), default_locale, locale); \ } \ void obs_module_free_locale(void) \ { \ text_lookup_destroy(obs_module_lookup); \ obs_module_lookup = NULL; \ } /** Helper function for looking up locale if default locale handler was used */ MODULE_EXTERN const char *obs_module_text(const char *lookup_string); /** Helper function for looking up locale if default locale handler was used, * returns true if text found, otherwise false */ MODULE_EXPORT bool obs_module_get_string(const char *lookup_string, const char **translated_string); /** Helper function that returns the current module */ MODULE_EXTERN obs_module_t *obs_current_module(void); /** * Returns the location to a module data file associated with the current * module. Free with bfree when complete. 
Equivalent to: * obs_find_module_file(obs_current_module(), file); */ #define obs_module_file(file) obs_find_module_file(obs_current_module(), file) /** * Returns the location to a module config file associated with the current * module. Free with bfree when complete. Will return NULL if configuration * directory is not set. Equivalent to: * obs_module_get_config_path(obs_current_module(), file); */ #define obs_module_config_path(file) obs_module_get_config_path(obs_current_module(), file) /** * Optional: Declares the author(s) of the module * * @param name Author name(s) */ #define OBS_MODULE_AUTHOR(name) \ MODULE_EXPORT const char *obs_module_author(void); \ const char *obs_module_author(void) \ { \ return name; \ } /** Optional: Returns the full name of the module */ MODULE_EXPORT const char *obs_module_name(void); /** Optional: Returns a description of the module */ MODULE_EXPORT const char *obs_module_description(void); /** Returns the module's unique ID, or NULL if it doesn't have one */ MODULE_EXPORT const char *obs_get_module_id(obs_module_t *module); /** Returns the module's semver version number or NULL if it doesn't have one */ MODULE_EXPORT const char *obs_get_module_version(obs_module_t *module); obs-studio-32.1.0-sources/libobs/obs-source-deinterlace.c000644 001751 001751 00000035525 15153330235 024222 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2023 by Lain Bailey This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #include "obs-internal.h" static bool ready_deinterlace_frames(obs_source_t *source, uint64_t sys_time) { struct obs_source_frame *next_frame = source->async_frames.array[0]; struct obs_source_frame *prev_frame = NULL; struct obs_source_frame *frame = NULL; uint64_t sys_offset = sys_time - source->last_sys_timestamp; uint64_t frame_time = next_frame->timestamp; uint64_t frame_offset = 0; size_t idx = 1; if (source->async_unbuffered) { while (source->async_frames.num > 2) { da_erase(source->async_frames, 0); remove_async_frame(source, next_frame); next_frame = source->async_frames.array[0]; } if (source->async_frames.num == 2) { bool prev_frame = true; if (source->async_unbuffered && source->deinterlace_offset) { const uint64_t timestamp = source->async_frames.array[0]->timestamp; const uint64_t after_timestamp = source->async_frames.array[1]->timestamp; const uint64_t duration = after_timestamp - timestamp; const uint64_t frame_end = timestamp + source->deinterlace_offset + duration; if (sys_time < frame_end) { // Don't skip ahead prematurely. prev_frame = false; source->deinterlace_frame_ts = timestamp - duration; } } source->async_frames.array[0]->prev_frame = prev_frame; } source->deinterlace_offset = 0; source->last_frame_ts = next_frame->timestamp; return true; } /* account for timestamp invalidation */ if (frame_out_of_bounds(source, frame_time)) { source->last_frame_ts = next_frame->timestamp; source->deinterlace_offset = 0; return true; } else { frame_offset = frame_time - source->last_frame_ts; source->last_frame_ts += sys_offset; } while (source->last_frame_ts > next_frame->timestamp) { /* this tries to reduce the needless frame duplication, also * helps smooth out async rendering to frame boundaries. 
In * other words, tries to keep the framerate as smooth as * possible */ if ((source->last_frame_ts - next_frame->timestamp) < 2000000) break; if (prev_frame) { da_erase(source->async_frames, 0); remove_async_frame(source, prev_frame); } if (source->async_frames.num <= 2) { bool exit = true; if (prev_frame) { prev_frame->prev_frame = true; } else if (!frame && source->async_frames.num == 2) { exit = false; } if (exit) { source->deinterlace_offset = 0; return true; } } if (frame) idx = 2; else idx = 1; prev_frame = frame; frame = next_frame; next_frame = source->async_frames.array[idx]; /* more timestamp checking and compensating */ if ((next_frame->timestamp - frame_time) > MAX_TS_VAR) { source->last_frame_ts = next_frame->timestamp - frame_offset; source->deinterlace_offset = 0; } frame_time = next_frame->timestamp; frame_offset = frame_time - source->last_frame_ts; } if (prev_frame) prev_frame->prev_frame = true; return frame != NULL; } static inline bool first_frame(obs_source_t *s) { if (s->last_frame_ts) return false; if (s->async_frames.num >= 2) s->async_frames.array[0]->prev_frame = true; return true; } static inline uint64_t uint64_diff(uint64_t ts1, uint64_t ts2) { return (ts1 < ts2) ? (ts2 - ts1) : (ts1 - ts2); } #define TWOX_TOLERANCE 1000000 #define TS_JUMP_THRESHOLD 70000000ULL static inline void deinterlace_get_closest_frames(obs_source_t *s, uint64_t sys_time) { uint64_t half_interval; if (s->async_unbuffered && s->deinterlace_offset) { // Want to keep frame if it has not elapsed. const uint64_t frame_end = s->deinterlace_frame_ts + s->deinterlace_offset + ((uint64_t)s->deinterlace_half_duration * 2) - TWOX_TOLERANCE; if (sys_time < frame_end) { // Process new frames if we think time jumped. 
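All of the timestamp bookkeeping in this file runs on unsigned 64-bit nanosecond values, so the `uint64_diff` helper computes an absolute difference without risking unsigned underflow. That helper, plus the half-frame-interval drift check it feeds in `deinterlace_get_closest_frames`, can be sketched standalone (the `should_reanchor` wrapper is a hypothetical name for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Absolute difference of two unsigned timestamps without underflow:
 * (a - b) would wrap to a huge value when a < b, so branch first. */
static uint64_t u64_diff(uint64_t a, uint64_t b)
{
	return (a < b) ? (b - a) : (a - b);
}

/* Drift check in the spirit of deinterlace_get_closest_frames: keep the
 * established offset for small jitter, and only re-anchor when it has
 * drifted by more than half a frame interval. */
static bool should_reanchor(uint64_t old_offset, uint64_t new_offset,
			    uint64_t half_interval_ns)
{
	return u64_diff(old_offset, new_offset) > half_interval_ns;
}
```

Tolerating drift below half a frame interval keeps frame pacing stable against scheduler jitter, while a larger discrepancy is treated as a genuine timing change worth re-anchoring to.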
const uint64_t diff = frame_end - sys_time; if (diff < TS_JUMP_THRESHOLD) { return; } } } if (!s->async_frames.num) return; half_interval = obs->video.video_half_frame_interval_ns; if (first_frame(s) || ready_deinterlace_frames(s, sys_time)) { uint64_t offset; s->prev_async_frame = NULL; s->cur_async_frame = s->async_frames.array[0]; da_erase(s->async_frames, 0); if ((s->async_frames.num > 0) && s->cur_async_frame->prev_frame) { s->prev_async_frame = s->cur_async_frame; s->cur_async_frame = s->async_frames.array[0]; da_erase(s->async_frames, 0); s->deinterlace_half_duration = (uint32_t)((s->cur_async_frame->timestamp - s->prev_async_frame->timestamp) / 2); } else { s->deinterlace_half_duration = (uint32_t)((s->cur_async_frame->timestamp - s->deinterlace_frame_ts) / 2); } if (!s->last_frame_ts) s->last_frame_ts = s->cur_async_frame->timestamp; s->deinterlace_frame_ts = s->cur_async_frame->timestamp; offset = obs->video.video_time - s->deinterlace_frame_ts; if (!s->deinterlace_offset) { s->deinterlace_offset = offset; } else { uint64_t offset_diff = uint64_diff(s->deinterlace_offset, offset); if (offset_diff > half_interval) s->deinterlace_offset = offset; } } } void deinterlace_process_last_frame(obs_source_t *s, uint64_t sys_time) { if (s->prev_async_frame) { remove_async_frame(s, s->prev_async_frame); s->prev_async_frame = NULL; } if (s->cur_async_frame) { remove_async_frame(s, s->cur_async_frame); s->cur_async_frame = NULL; } deinterlace_get_closest_frames(s, sys_time); } void set_deinterlace_texture_size(obs_source_t *source) { const enum gs_color_format format = convert_video_format(source->async_format, source->async_trc); if (source->async_gpu_conversion) { source->async_prev_texrender = gs_texrender_create(format, GS_ZS_NONE); for (int c = 0; c < source->async_channel_count; c++) source->async_prev_textures[c] = gs_texture_create(source->async_convert_width[c], source->async_convert_height[c], source->async_texture_formats[c], 1, NULL, GS_DYNAMIC); } else { 
source->async_prev_textures[0] = gs_texture_create(source->async_width, source->async_height, format, 1, NULL, GS_DYNAMIC); } } void deinterlace_update_async_video(obs_source_t *source) { if (source->deinterlace_rendered) return; source->deinterlace_rendered = true; pthread_mutex_lock(&source->async_mutex); const bool updated = source->cur_async_frame != NULL; struct obs_source_frame *frame = source->prev_async_frame; source->prev_async_frame = NULL; pthread_mutex_unlock(&source->async_mutex); if (frame) { os_atomic_inc_long(&frame->refs); if (set_async_texture_size(source, frame)) { update_async_textures(source, frame, source->async_prev_textures, source->async_prev_texrender); } obs_source_release_frame(source, frame); } else if (updated) { /* swap cur/prev if no previous texture */ for (size_t c = 0; c < MAX_AV_PLANES; c++) { gs_texture_t *prev_tex = source->async_prev_textures[c]; source->async_prev_textures[c] = source->async_textures[c]; source->async_textures[c] = prev_tex; } if (source->async_texrender) { gs_texrender_t *prev = source->async_prev_texrender; source->async_prev_texrender = source->async_texrender; source->async_texrender = prev; } } } static inline gs_effect_t *get_effect(enum obs_deinterlace_mode mode) { switch (mode) { case OBS_DEINTERLACE_MODE_DISABLE: return NULL; case OBS_DEINTERLACE_MODE_DISCARD: return obs_load_effect(&obs->video.deinterlace_discard_effect, "deinterlace_discard.effect"); case OBS_DEINTERLACE_MODE_RETRO: return obs_load_effect(&obs->video.deinterlace_discard_2x_effect, "deinterlace_discard_2x.effect"); case OBS_DEINTERLACE_MODE_BLEND: return obs_load_effect(&obs->video.deinterlace_blend_effect, "deinterlace_blend.effect"); case OBS_DEINTERLACE_MODE_BLEND_2X: return obs_load_effect(&obs->video.deinterlace_blend_2x_effect, "deinterlace_blend_2x.effect"); case OBS_DEINTERLACE_MODE_LINEAR: return obs_load_effect(&obs->video.deinterlace_linear_effect, "deinterlace_linear.effect"); case OBS_DEINTERLACE_MODE_LINEAR_2X: return 
obs_load_effect(&obs->video.deinterlace_linear_2x_effect, "deinterlace_linear_2x.effect"); case OBS_DEINTERLACE_MODE_YADIF: return obs_load_effect(&obs->video.deinterlace_yadif_effect, "deinterlace_yadif.effect"); case OBS_DEINTERLACE_MODE_YADIF_2X: return obs_load_effect(&obs->video.deinterlace_yadif_2x_effect, "deinterlace_yadif_2x.effect"); } return NULL; } static bool deinterlace_linear_required(enum obs_deinterlace_mode mode) { switch (mode) { case OBS_DEINTERLACE_MODE_DISABLE: case OBS_DEINTERLACE_MODE_DISCARD: case OBS_DEINTERLACE_MODE_RETRO: return false; case OBS_DEINTERLACE_MODE_BLEND: case OBS_DEINTERLACE_MODE_BLEND_2X: case OBS_DEINTERLACE_MODE_LINEAR: case OBS_DEINTERLACE_MODE_LINEAR_2X: case OBS_DEINTERLACE_MODE_YADIF: case OBS_DEINTERLACE_MODE_YADIF_2X: return true; } return false; } void deinterlace_render(obs_source_t *s) { gs_effect_t *effect = s->deinterlace_effect; gs_eparam_t *image = gs_effect_get_param_by_name(effect, "image"); gs_eparam_t *prev = gs_effect_get_param_by_name(effect, "previous_image"); gs_eparam_t *multiplier_param = gs_effect_get_param_by_name(effect, "multiplier"); gs_eparam_t *field = gs_effect_get_param_by_name(effect, "field_order"); gs_eparam_t *frame2 = gs_effect_get_param_by_name(effect, "frame2"); gs_eparam_t *dimensions = gs_effect_get_param_by_name(effect, "dimensions"); struct vec2 size = {(float)s->async_width, (float)s->async_height}; gs_texture_t *cur_tex = s->async_texrender ? gs_texrender_get_texture(s->async_texrender) : s->async_textures[0]; gs_texture_t *prev_tex = s->async_prev_texrender ? 
gs_texrender_get_texture(s->async_prev_texrender) : s->async_prev_textures[0]; if (!cur_tex || !prev_tex || !s->async_width || !s->async_height) return; const enum gs_color_space source_space = convert_video_space(s->async_format, s->async_trc); const bool linear_srgb = (source_space != GS_CS_SRGB) || gs_get_linear_srgb() || deinterlace_linear_required(s->deinterlace_mode); const enum gs_color_space current_space = gs_get_color_space(); const char *tech_name = "Draw"; float multiplier = 1.0; switch (source_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: if (current_space == GS_CS_709_SCRGB) { tech_name = "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.0f; } break; case GS_CS_709_EXTENDED: switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: tech_name = "DrawTonemap"; break; case GS_CS_709_SCRGB: tech_name = "DrawMultiply"; multiplier = obs_get_video_sdr_white_level() / 80.0f; break; case GS_CS_709_EXTENDED: break; } break; case GS_CS_709_SCRGB: switch (current_space) { case GS_CS_SRGB: case GS_CS_SRGB_16F: tech_name = "DrawMultiplyTonemap"; multiplier = 80.0f / obs_get_video_sdr_white_level(); break; case GS_CS_709_EXTENDED: tech_name = "DrawMultiply"; multiplier = 80.0f / obs_get_video_sdr_white_level(); break; case GS_CS_709_SCRGB: break; } } const bool previous = gs_framebuffer_srgb_enabled(); gs_enable_framebuffer_srgb(linear_srgb); if (linear_srgb) { gs_effect_set_texture_srgb(image, cur_tex); gs_effect_set_texture_srgb(prev, prev_tex); } else { gs_effect_set_texture(image, cur_tex); gs_effect_set_texture(prev, prev_tex); } gs_effect_set_float(multiplier_param, multiplier); gs_effect_set_int(field, s->deinterlace_top_first); gs_effect_set_vec2(dimensions, &size); const uint64_t frame2_ts = s->deinterlace_frame_ts + s->deinterlace_offset + s->deinterlace_half_duration - TWOX_TOLERANCE; gs_effect_set_bool(frame2, obs->video.video_time >= frame2_ts); while (gs_effect_loop(effect, tech_name)) gs_draw_sprite(NULL, s->async_flip ? 
GS_FLIP_V : 0, s->async_width, s->async_height); gs_enable_framebuffer_srgb(previous); } static void enable_deinterlacing(obs_source_t *source, enum obs_deinterlace_mode mode) { obs_enter_graphics(); if (source->async_format != VIDEO_FORMAT_NONE && source->async_width != 0 && source->async_height != 0) set_deinterlace_texture_size(source); source->deinterlace_mode = mode; source->deinterlace_effect = get_effect(mode); pthread_mutex_lock(&source->async_mutex); if (source->prev_async_frame) { remove_async_frame(source, source->prev_async_frame); source->prev_async_frame = NULL; } pthread_mutex_unlock(&source->async_mutex); obs_leave_graphics(); } static void disable_deinterlacing(obs_source_t *source) { obs_enter_graphics(); gs_texture_destroy(source->async_prev_textures[0]); gs_texture_destroy(source->async_prev_textures[1]); gs_texture_destroy(source->async_prev_textures[2]); gs_texrender_destroy(source->async_prev_texrender); source->deinterlace_mode = OBS_DEINTERLACE_MODE_DISABLE; source->async_prev_textures[0] = NULL; source->async_prev_textures[1] = NULL; source->async_prev_textures[2] = NULL; source->async_prev_texrender = NULL; obs_leave_graphics(); } void obs_source_set_deinterlace_mode(obs_source_t *source, enum obs_deinterlace_mode mode) { if (!obs_source_valid(source, "obs_source_set_deinterlace_mode")) return; if (source->deinterlace_mode == mode) return; if (source->deinterlace_mode == OBS_DEINTERLACE_MODE_DISABLE) { enable_deinterlacing(source, mode); } else if (mode == OBS_DEINTERLACE_MODE_DISABLE) { disable_deinterlacing(source); } else { obs_enter_graphics(); source->deinterlace_mode = mode; source->deinterlace_effect = get_effect(mode); obs_leave_graphics(); } } enum obs_deinterlace_mode obs_source_get_deinterlace_mode(const obs_source_t *source) { return obs_source_valid(source, "obs_source_set_deinterlace_mode") ? 
source->deinterlace_mode : OBS_DEINTERLACE_MODE_DISABLE; } void obs_source_set_deinterlace_field_order(obs_source_t *source, enum obs_deinterlace_field_order field_order) { if (!obs_source_valid(source, "obs_source_set_deinterlace_field_order")) return; source->deinterlace_top_first = field_order == OBS_DEINTERLACE_FIELD_ORDER_TOP; } enum obs_deinterlace_field_order obs_source_get_deinterlace_field_order(const obs_source_t *source) { if (!obs_source_valid(source, "obs_source_set_deinterlace_field_order")) return OBS_DEINTERLACE_FIELD_ORDER_TOP; return source->deinterlace_top_first ? OBS_DEINTERLACE_FIELD_ORDER_TOP : OBS_DEINTERLACE_FIELD_ORDER_BOTTOM; } obs-studio-32.1.0-sources/libobs/obs-hotkey-name-map.c000644 001751 001751 00000006025 15153330235 023432 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2014 by Ruwen Hahn This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . 
******************************************************************************/ #include #include #include #include #include "obs-internal.h" struct obs_hotkey_name_map; typedef struct obs_hotkey_name_map_item obs_hotkey_name_map_item_t; struct obs_hotkey_name_map_item { char *key; int val; UT_hash_handle hh; }; static void obs_hotkey_name_map_insert(obs_hotkey_name_map_item_t **hmap, const char *key, int v) { if (!hmap || !key) return; obs_hotkey_name_map_item_t *t; HASH_FIND_STR(*hmap, key, t); if (t) return; t = bzalloc(sizeof(obs_hotkey_name_map_item_t)); t->key = bstrdup(key); t->val = v; HASH_ADD_STR(*hmap, key, t); } static bool obs_hotkey_name_map_lookup(obs_hotkey_name_map_item_t *hmap, const char *key, int *v) { if (!hmap || !key) return false; obs_hotkey_name_map_item_t *elem; HASH_FIND_STR(hmap, key, elem); if (elem) { *v = elem->val; return true; } return false; } static const char *obs_key_names[] = { #define OBS_HOTKEY(x) #x, #include "obs-hotkeys.h" #undef OBS_HOTKEY }; const char *obs_key_to_name(obs_key_t key) { if (key >= OBS_KEY_LAST_VALUE) { blog(LOG_ERROR, "obs-hotkey.c: queried unknown key " "with code %d", (int)key); return ""; } return obs_key_names[key]; } static obs_key_t obs_key_from_name_fallback(const char *name) { #define OBS_HOTKEY(x) \ if (strcmp(#x, name) == 0) \ return x; #include "obs-hotkeys.h" #undef OBS_HOTKEY return OBS_KEY_NONE; } static void init_name_map(void) { #define OBS_HOTKEY(x) obs_hotkey_name_map_insert(&obs->hotkeys.name_map, #x, x); #include "obs-hotkeys.h" #undef OBS_HOTKEY } obs_key_t obs_key_from_name(const char *name) { if (!obs) return obs_key_from_name_fallback(name); if (pthread_once(&obs->hotkeys.name_map_init_token, init_name_map)) return obs_key_from_name_fallback(name); int v = 0; if (obs_hotkey_name_map_lookup(obs->hotkeys.name_map, name, &v)) return v; return OBS_KEY_NONE; } void obs_hotkey_name_map_free(void) { if (!obs || !obs->hotkeys.name_map) return; obs_hotkey_name_map_item_t *root = 
obs->hotkeys.name_map; obs_hotkey_name_map_item_t *n, *tmp; HASH_ITER (hh, root, n, tmp) { HASH_DEL(root, n); bfree(n->key); bfree(n); } } obs-studio-32.1.0-sources/libobs/obs-nix-x11.h000644 001751 001751 00000001767 15153330235 021660 0ustar00runnerrunner000000 000000 /****************************************************************************** Copyright (C) 2019 by Jason Francis This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . ******************************************************************************/ #pragma once #include "obs-nix.h" void obs_nix_x11_log_info(void); const struct obs_nix_hotkeys_vtable *obs_nix_x11_get_hotkeys_vtable(void); obs-studio-32.1.0-sources/libobs/obsversion.c.in000644 001751 001751 00000000254 15153330235 022451 0ustar00runnerrunner000000 000000 #include const char *OBS_VERSION = "@OBS_VERSION@"; const char *OBS_VERSION_CANONICAL = "@OBS_VERSION_CANONICAL@"; const char *OBS_COMMIT = "@OBS_COMMIT@"; obs-studio-32.1.0-sources/.editorconfig000644 001751 001751 00000002326 15153330235 020714 0ustar00runnerrunner000000 000000 # EditorConfig is awesome: http://EditorConfig.org # Since OBS follows the Linux kernel coding style I have started this file to # help with automatically setting various text editors to those requirements. # top-most EditorConfig file root = true # Unix-style newlines with a newline ending every file. 
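The hotkey name map above relies on insert-if-absent semantics: `obs_hotkey_name_map_insert` checks for an existing key and returns without modifying the table, so the first registration of a key name wins. The same contract can be sketched without uthash, using a plain linked list so the example stays self-contained (names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct node {
	char *key;
	int val;
	struct node *next;
};

static char *dup_str(const char *s)
{
	char *out = malloc(strlen(s) + 1);
	return out ? strcpy(out, s) : NULL;
}

/* Insert-if-absent: a duplicate key leaves the map untouched, matching
 * the HASH_FIND_STR-then-return guard in obs_hotkey_name_map_insert. */
static void map_insert(struct node **head, const char *key, int val)
{
	for (struct node *n = *head; n; n = n->next)
		if (strcmp(n->key, key) == 0)
			return; /* first insertion wins */
	struct node *n = malloc(sizeof(*n));
	n->key = dup_str(key);
	n->val = val;
	n->next = *head;
	*head = n;
}

/* Write the value through the out-pointer only on a hit, as the
 * original lookup does. */
static bool map_lookup(struct node *head, const char *key, int *val)
{
	for (struct node *n = head; n; n = n->next) {
		if (strcmp(n->key, key) == 0) {
			*val = n->val;
			return true;
		}
	}
	return false;
}
```

The linked list is O(n) per operation where uthash gives amortized O(1); the sketch only demonstrates the insert/lookup contract, not the performance characteristics the real map depends on for per-keystroke lookups.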
[*]
insert_final_newline = true
trim_trailing_whitespace = true
charset = utf-8
indent_style = tab
indent_size = 8

# As per notr1ch, for 3rd party code that's a part of obs-outputs.
[plugins/obs-outputs/librtmp/*.{cpp,c,h}]
indent_style = space
indent_size = 4

[CMakeLists.txt]
indent_style = space
indent_size = 2

[**/CMakeLists.txt]
indent_style = space
indent_size = 2

[cmake/**/*.cmake]
indent_style = space
indent_size = 2

[plugins/{rtmp-services,win-capture}/data/**/*.json]
indent_style = space
indent_size = 4

[*.qss]
indent_style = space
indent_size = 4

[build-aux/**/*.json]
indent_style = space
indent_size = 4

[*.py]
indent_style = space
indent_size = 4

[*.yaml]
indent_style = space
indent_size = 2

[{*.zsh,.*.zsh,build-aux/.functions/*,.github/scripts/utils.zsh/*}]
indent_style = space
indent_size = 2

[*.ui]
indent_style = space
indent_size = 1

[{*.obt,*.oha,*.ovt}]
indent_style = space
indent_size = 4
obs-studio-32.1.0-sources/deps/000755 001751 001751 00000000000 15153330731 017170 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/000755 001751 001751 00000000000 15153330731 022367 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/CMakeLists.txt000644 001751 001751 00000003456 15153330235 025136 0ustar00runnerrunner000000 000000 add_library(libdshowcapture INTERFACE) add_library(OBS::libdshowcapture ALIAS libdshowcapture) target_sources( libdshowcapture INTERFACE src/dshowcapture.hpp src/source/capture-filter.cpp src/source/capture-filter.hpp src/source/device-vendor.cpp src/source/device.cpp src/source/device.hpp src/source/dshow-base.cpp src/source/dshow-base.hpp src/source/dshow-demux.cpp src/source/dshow-demux.hpp src/source/dshow-device-defs.hpp src/source/dshow-encoded-device.cpp src/source/dshow-enum.cpp src/source/dshow-enum.hpp src/source/dshow-formats.cpp src/source/dshow-formats.hpp src/source/dshow-media-type.cpp src/source/dshow-media-type.hpp src/source/dshowcapture.cpp
src/source/dshowencode.cpp src/source/encoder.cpp src/source/encoder.hpp src/source/external/IVideoCaptureFilter.h src/source/log.cpp src/source/log.hpp src/source/output-filter.cpp src/source/output-filter.hpp src/external/capture-device-support/Library/EGAVResult.cpp src/external/capture-device-support/Library/ElgatoUVCDevice.cpp src/external/capture-device-support/Library/win/EGAVHIDImplementation.cpp src/external/capture-device-support/SampleCode/DriverInterface.cpp) target_include_directories( libdshowcapture INTERFACE "${CMAKE_CURRENT_SOURCE_DIR}/src" "${CMAKE_CURRENT_SOURCE_DIR}/src/external/capture-device-support/Library") target_compile_definitions(libdshowcapture INTERFACE _UP_WINDOWS=1) target_compile_options(libdshowcapture INTERFACE /wd4005 /wd4018) obs-studio-32.1.0-sources/deps/libdshowcapture/src/000755 001751 001751 00000000000 15153330731 023156 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/COPYING000644 001751 001751 00000063642 15153330240 024217 0ustar00runnerrunner000000 000000 GNU LESSER GENERAL PUBLIC LICENSE Version 2.1, February 1999 Copyright (C) 1991, 1999 Free Software Foundation, Inc. 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. [This is the first released version of the Lesser GPL. It also counts as the successor of the GNU Library Public License, version 2, hence the version number 2.1.] Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This license, the Lesser General Public License, applies to some specially designated software packages--typically libraries--of the Free Software Foundation and other authors who decide to use it. 
You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below. When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things. To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it. For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights. We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library. To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others. 
Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder. Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license. Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs. When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library. We call this license the "Lesser" General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances. For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. 
In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License. In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system. Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library. The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, whereas the latter must be combined with the library in order to run. GNU LESSER GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called "this License"). Each licensee is addressed as "you". A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables. The "Library", below, refers to any such software library or work which has been distributed under these terms. 
A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".) "Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library. Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does. 1. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) The modified work must itself be a software library. 
b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change. c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License. d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful. (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library. 
In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices. Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy. This option is useful when you wish to copy part of the code of the Library into a program that is not a library. 4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange. If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code. 5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". 
Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License. However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables. When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law. If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.) Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself. 6. As an exception to the Sections above, you may also combine or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications. 
You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things: a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.) b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with. c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution. d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place. 
e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy. For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute. 7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above. b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. 
However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it. 10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties with this License. 11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library. 
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 13. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. 
If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation. 14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 
END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Libraries If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License). To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker. , 1 April 1990 Ty Coon, President of Vice That's all there is to it! 
obs-studio-32.1.0-sources/deps/libdshowcapture/src/tests/000755 001751 001751 00000000000 15153330240 024313 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/README000644 001751 001751 00000000654 15153330240 024036 0ustar00runnerrunner000000 000000 libdshowcapture This library was created as a means to simplify the process of using DirectShow to capture video and/or audio devices, such as webcams, capture devices (internal, USB 2.0, USB 3.0), microphones, auxiliary sound inputs, etc. The biggest goal of this project is to eventually support as many devices as possible, as well as add more interesting features later on for improving performance. obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/000755 001751 001751 00000000000 15153330731 025000 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/.clang-format000644 001751 001751 00000000066 15153330240 027350 0ustar00runnerrunner000000 000000 Language: Cpp SortIncludes: false DisableFormat: true obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/000755 001751 001751 00000000000 15153330731 031412 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/readme.md000644 001751 001751 00000003164 15153330240 033170 0ustar00runnerrunner000000 000000 HID API for Elgato UVC devices ============================== The folder `Library` contains cross-platform code to access device-specific features of selected Elgato video products. The CMake project in this folder builds a small console app for testing. Modify `selectedDeviceID` in `SampleCode/main.cpp` to select the correct device type. 
Supported platforms ------------------- * Windows (10 or higher) * macOS Supported devices ----------------- * HD60 S+ * HD60 X Supported features ----------------- * Switch on-device HDR tonemapping on/off * Read HDMI HDR status packet (for HDR detection) Limitations ----------- The library was written for macOS and Windows. However, the sample project has only been built with Visual Studio 2019 and tested on Windows so far. -------------------------------------------------------------------------------- Driver API for Elgato devices ============================= For non-UVC devices, device properties can be accessed via a custom driver property set (`IKsPropertySet`). Sample code is provided; see `SampleCode/DriverInterface.h/.cpp`. Supported platforms ------------------- * Windows (10 or higher) Supported devices ----------------- * 4K60 Pro MK.2 * 4K60 S+ Supported features ------------------ * Switch on-device HDR tonemapping on/off (4K60 Pro MK.2 only) * Set video compression (4K60 S+ only) * Read HDMI HDR status packet (for HDR detection) #### 4K60 Pro MK.2 To receive HDR, the format on the DirectShow filter pin must be set to P010. #### 4K60 S+ The 4K60 S+ always produces compressed video output. For HDR, the encoder format must be set to HEVC via the driver interface. obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/Library/000755 001751 001751 00000000000 15153330731 033016 5ustar00runnerrunner000000 000000 deps/libdshowcapture/src/external/capture-device-support/Library/ElgatoUVCDevice.h000644 001751 001751 00000005453 15153330240 036023 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022-23 Corsair Memory, Inc. 
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/

#pragma once

#include <memory>
#include <mutex>
#include <vector>

#include "EGAVResult.h"
#include "EGAVDevice.h"
#include "HDMIInfoFramesAPI.h"

#ifdef _MSC_VER
#include "win/EGAVHIDImplementation.h"
#else
#include "mac/EGAVHIDImplementation.h"
#endif

// Supported devices
inline const EGAVDeviceID deviceIDHD60SPlus (EGAVBusType::USB, 0x0FD9, 0x006A); //!< HD60 S+
inline const EGAVDeviceID deviceIDHD60X     (EGAVBusType::USB, 0x0FD9, 0x0082); //!< HD60 X
inline const EGAVDeviceID deviceIDHD60XRev2 (EGAVBusType::USB, 0x0FD9, 0x008A); //!< HD60 X Rev. 2
// EXTEND_DEVICES

//! @return Device IDs of supported Elgato UVC devices
std::vector<EGAVDeviceID> GetElgatoUVCDeviceIDs();

//! @return true for new devices with new USB chipset
bool IsNewDeviceType(const EGAVDeviceID& inDeviceID);

//==============================================================================
// # Class ElgatoUVCDevice
//==============================================================================

class ElgatoUVCDevice
{
public:
	ElgatoUVCDevice(std::shared_ptr<EGAVHIDImplementation> hid, bool isNewDeviceType);

	//! @brief Works with HD60 S+, HD60 X or newer
	void SetHDRTonemappingEnabled(bool inEnable);
	//! @brief Works with HD60 S+, HD60 X or newer
	EGAVResult GetHDMIHDRStatusPacket(HDMI_GENERIC_INFOFRAME& outFrame);
	//! @brief Works with HD60 S+, HD60 X or newer
	EGAVResult IsVideoHDR(bool& outIsHDR);

private:
	EGAVResult WriteI2cData(uint8_t inI2CAddress, uint8_t inRegister, uint8_t* inData, uint8_t inLength);
	EGAVResult ReadI2cData(uint8_t inI2CAddress, uint8_t inRegister, uint8_t* outData, uint8_t inLength);

	bool mNewDeviceType = false; //!< true: HD60 X and newer devices, false: HD60 S+
	std::shared_ptr<EGAVHIDImplementation> mHIDImpl;
	std::recursive_mutex mHIDMutex;
};

obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/Library/ElgatoUVCDevice.cpp:

/* MIT License Copyright (c) 2022-23 Corsair Memory, Inc. (license text as at top of file) */

#include "ElgatoUVCDevice.h"

#define WORKAROUND_HD60_S_PLUS_PAYLOAD_SIZE 1 //!< Workaround HD60 S+ firmware issue: invalid payload length (seen with HDR and SPD info frames)

//==============================================================================
// # Elgato HID interface for UVC devices
//==============================================================================

enum class I2CAddress
{
	MCU = 0x55,
};

//! @brief I2C registers for MCU (I2C address 0x55).
enum class MCU_I2C_REGISTER
{
	GET_HDR_PACKET      = 0x09, //!< HDR capable devices (HD60 S+, HD60 X)
	XET_HDR_TONEMAPPING = 0x0A, //!< HDR capable devices (HD60 S+, HD60 X): Enable hardware tonemapping; param 0/1 (uint8_t)
};

//==============================================================================
// ## HID interface - I2C
//==============================================================================

//! @brief HID report case for new device type.
enum class REPORT_CASE_NEW
{
	REPORT_IIC_WRITE = 6,
	REPORT_IIC_READ  = 7
};

//! @brief HID report IDs for new device type. Can also be queried via HidP_GetValueCaps()
enum class HID_REPORT_ID_NEW
{
	I2C_READ  = 5,
	I2C_WRITE = 6
};

//! @brief HID report IDs for original device type. Can also be queried via HidP_GetValueCaps()
enum class HID_REPORT_ID
{
	I2C_READ_SET_ID = 9,
	I2C_READ_GET_ID = 10,
	I2C_WRITE_ID    = 11
};

const int I2C_BUFFER_HEADER_SIZE    = 4;
const int MAX_COMM_READ_BUFFER_SIZE = 32;

//==============================================================================
// # Helpers
//==============================================================================

std::vector<EGAVDeviceID> GetElgatoUVCDeviceIDs()
{
	std::vector<EGAVDeviceID> devices{ deviceIDHD60SPlus, deviceIDHD60X, deviceIDHD60XRev2 }; // EXTEND_DEVICES
	return devices;
}

//! @return true for new USB chipset
bool IsNewDeviceType(const EGAVDeviceID& inDeviceID)
{
	return (inDeviceID != deviceIDHD60SPlus);
}

//==============================================================================
// # Class ElgatoUVCDevice
//==============================================================================

ElgatoUVCDevice::ElgatoUVCDevice(std::shared_ptr<EGAVHIDImplementation> hid, bool isNewDeviceType)
	: mHIDImpl(hid), mNewDeviceType(isNewDeviceType)
{}

EGAVResult ElgatoUVCDevice::ReadI2cData(uint8_t inI2CAddress, uint8_t inRegister, uint8_t* outData, uint8_t inLength)
{
	EGAVResult_CheckPointer(outData);
	EGAVResult_CheckPointer(mHIDImpl);
	EPL_ASSERT_BREAK(inLength <= MAX_COMM_READ_BUFFER_SIZE);

	const std::lock_guard<std::recursive_mutex> lock(mHIDMutex);

	EGAVResult res = EGAVResult::ErrUnknown;
	if (mNewDeviceType) {
		const uint8_t writeLen = 1 /* +1 for byte register address */, readLen = inLength,
			      reportLen = 4 + writeLen + sizeof(readLen);
		std::vector<uint8_t> outputMessage{ reportLen, (uint8_t)REPORT_CASE_NEW::REPORT_IIC_READ, inI2CAddress, writeLen, inRegister, readLen };
		EPL_ASSERT_BREAK(reportLen == outputMessage.size());
		res = mHIDImpl->WriteHID(outputMessage, (int)HID_REPORT_ID_NEW::I2C_WRITE);
		if (res.Failed())
			error_printf("WriteHID() FAILED for I2C address 0x%02x, register 0x%02x", inI2CAddress, inRegister);
		else {
			std::vector<uint8_t> inputMessage;
			const int inputReportLength = 0xFF | ((int)REPORT_CASE_NEW::REPORT_IIC_READ << 8); // report case is coded into report length
			res = mHIDImpl->ReadHID(inputMessage, (int)HID_REPORT_ID_NEW::I2C_READ, inputReportLength);
			if (res.Failed())
				error_printf("ReadHID() FAILED for I2C address 0x%02x, register 0x%02x", inI2CAddress, inRegister);
			else {
				int dataLen = std::min((int)inLength, (int)(inputMessage.size() - 1));
				memcpy(outData, inputMessage.data() + 1, dataLen);
			}
		}
	} else {
		std::vector<uint8_t> outputMessage{ inI2CAddress, inRegister, inLength };
		res = mHIDImpl->WriteHID(outputMessage, (int)HID_REPORT_ID::I2C_READ_SET_ID);
		if (res.Failed())
			error_printf("WriteHID() FAILED for I2C address 0x%02x, register 0x%02x", inI2CAddress, inRegister);
		else {
			std::vector<uint8_t> inputMessage(I2C_BUFFER_HEADER_SIZE + MAX_COMM_READ_BUFFER_SIZE);
			res = mHIDImpl->ReadHID(inputMessage, (int)HID_REPORT_ID::I2C_READ_GET_ID);
			if (res.Failed())
				error_printf("ReadHID() FAILED for I2C address 0x%02x, register 0x%02x", inI2CAddress, inRegister);
			else
				memcpy(outData, inputMessage.data(), inLength);
		}
	}
	EPL_ASSERT_BREAK(res.Succeeded());
	return res;
}

EGAVResult ElgatoUVCDevice::WriteI2cData(uint8_t inI2CAddress, uint8_t inRegister, uint8_t* inData, uint8_t inLength)
{
	EGAVResult_CheckPointer(inData);
	EGAVResult_CheckPointer(mHIDImpl);

	const std::lock_guard<std::recursive_mutex> lock(mHIDMutex);

	EGAVResult res = EGAVResult::ErrUnknown;
	if (mNewDeviceType) {
		const uint8_t writeLen = 1 + inLength /* +1 for byte register address */, reportLen = 4 + writeLen;
		std::vector<uint8_t> outputMessage{ reportLen, (uint8_t)REPORT_CASE_NEW::REPORT_IIC_WRITE, inI2CAddress, writeLen, inRegister };
		for (int i = 0; i < inLength; i++)
			outputMessage.push_back(inData[i]);
		EPL_ASSERT_BREAK(reportLen == outputMessage.size());
		res = mHIDImpl->WriteHID(outputMessage, (int)HID_REPORT_ID_NEW::I2C_WRITE);
	} else {
		std::vector<uint8_t> outputMessage{ inI2CAddress, inRegister, inLength };
		for (int i = 0; i < inLength; i++)
			outputMessage.push_back(inData[i]);
		res = mHIDImpl->WriteHID(outputMessage, (int)HID_REPORT_ID::I2C_WRITE_ID);
	}
	if (res.Failed())
		error_printf("WriteHID() FAILED for I2C address 0x%02x, register 0x%02x", inI2CAddress, inRegister);
	return res;
}

void ElgatoUVCDevice::SetHDRTonemappingEnabled(bool inValue)
{
	const std::lock_guard<std::recursive_mutex> lock(mHIDMutex);
	uint8_t buffer = inValue ? 1 : 0;
	WriteI2cData((uint8_t)I2CAddress::MCU, (uint8_t)MCU_I2C_REGISTER::XET_HDR_TONEMAPPING, &buffer, sizeof(buffer));
}

EGAVResult ElgatoUVCDevice::GetHDMIHDRStatusPacket(HDMI_GENERIC_INFOFRAME& outFrame)
{
	const std::lock_guard<std::recursive_mutex> lock(mHIDMutex);

	const size_t bufSize = mNewDeviceType ? 32 : 33;
	uint8_t* buffer = new uint8_t[bufSize];
	EGAVResult res = ReadI2cData((uint8_t)I2CAddress::MCU, (uint8_t)MCU_I2C_REGISTER::GET_HDR_PACKET, buffer, (uint8_t)bufSize);
	if (res.Succeeded()) {
		size_t size = (bufSize < sizeof(outFrame)) ? bufSize : sizeof(outFrame);
		memcpy(&outFrame, mNewDeviceType ? buffer : buffer + 1, size);
#if WORKAROUND_HD60_S_PLUS_PAYLOAD_SIZE
		// Workaround HD60 S+ firmware issue: invalid payload length (seen with HDR and SPD info frames)
		// Also with HD60 X FW 22.03.24 (MCU: 22.03.16)
		if (outFrame.header.bPayloadLength > HDMI_MAX_INFOFRAME_PAYLOAD) {
			if (HDMI_INFOFRAME_TYPE_DR == outFrame.header.bfType) {
				int diff = outFrame.header.bPayloadLength - sizeof(outFrame.plDR1);
				outFrame.header.bPayloadLength = sizeof(outFrame.plDR1);
				outFrame.bChecksum += diff;
			}
		}
#endif
	}
	delete[] buffer;
	return res;
}

EGAVResult ElgatoUVCDevice::IsVideoHDR(bool& outIsHDR)
{
	// Try to read HDR meta data
	HDMI_GENERIC_INFOFRAME frame{}, emptyFrame{};
	memset(&emptyFrame, 0, sizeof(emptyFrame));
	EGAVResult res = GetHDMIHDRStatusPacket(frame);
	if (res.Succeeded()) {
		bool isInfoFrameValid = HDMI_IsInfoFrameValid(&frame);
		res = isInfoFrameValid ? EGAVResult::Ok : EGAVResult::ErrUnknown;
		if (isInfoFrameValid) {
			// Check type in header and EOTF flag in payload
			if (HDMI_INFOFRAME_TYPE_DR == frame.header.bfType && HDMI_DR_EOTF_SDRGAMMA != frame.plDR1.bfEOTF) {
				outIsHDR = true;
			} else if (HDMI_INFOFRAME_TYPE_DR == frame.header.bfType && HDMI_DR_EOTF_SDRGAMMA == frame.plDR1.bfEOTF) {
				outIsHDR = false; // we get here with HD60 X (22.03.24 (MCU: 22.03.16))
			} else if (0 /*HDMI_INFOFRAME_TYPE_RESERVED*/ == frame.header.bfType && (0 == memcmp(&frame, &emptyFrame, sizeof(emptyFrame)))) {
				// all empty (seen with HD60 S+ when HDR is not active)
				outIsHDR = false;
			} else if (HDMI_INFOFRAME_TYPE_DR != frame.header.bfType) {
				warning_printf("HDMI Metadata: Wrong header type: %d", frame.header.bfType);
				res = EGAVResult::ErrNotFound;
			}
		} else
			warning_printf("HDMI Metadata: HDMI_IsInfoFrameValid() returned error (checksum)!");
	} else
		warning_printf("HDMI Metadata: GetHDMIHDRStatusPacket() failed!");
	return res;
}

obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/Library/EGAVResult.h:

/* MIT License Copyright (c) 2022 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/

//==============================================================================
/** @file  EGAVResult.h
    @brief Definition of error and success codes
**/
//==============================================================================

#pragma once

//------------------------------------------------------------------------------
// Includes
//------------------------------------------------------------------------------
#include <string>

#if _UP_WINDOWS
// FMB NOTE: For some strange reason <winsock2.h> must be included before <windows.h>.
// This is only necessary because contents of <winsock2.h> are required in other code files.
// However, if <winsock2.h> is included somewhere it must be guaranteed that <windows.h> was never included before.
#define NOMINMAX
#include <winsock2.h> // for sockaddr_in
#include <windows.h>
#endif

//#include "EGAVFeatures.h" // for EGAV_API

#define EVH_AVOID_UNNEEDED_COPY_CONSTRUCTORS (!_UP_WINDOWS)
//!< 0 - explicit copy constructors: required for .NET frontends connected via SWIG e.g. 4KCU and EVH Test App
//!< 1 - to avoid "warning: definition of implicit copy constructor for different classes e.g.
//!  'EGAVAudioSampleType' is deprecated because it has a user-declared copy assignment operator [-Wdeprecated-copy]"

//==============================================================================
// EGAVResultCustomType
//==============================================================================
enum class EGAVResultCustomType
{
	None,
	Hresult,     //!< HRESULT (Windows)
	WinError,    //!< System error code (Windows)
	MainConcept, //!< MainConcept error codes (BS_OK etc.)
	Device,      //!< Device Error Codes
	Mac          //!< Errors from macOS
};

enum class EGAVResultCustomTypeDevice
{
	None,
	SpeedInsufficient,
	ResultUnexpected,
};

//==============================================================================
// EGAVResult
//==============================================================================
typedef int EGAVResultCode;

//! Error description
class /*EGAV_API*/ EGAVResult
{
public:
	//------------------------------------------------------------------------------
	// Constants
	//------------------------------------------------------------------------------
	// EXTEND_EGAVResultCode: add new error codes here and in GetResultCodeString function in cpp file
	static const EGAVResultCode ErrInvalidOperation   = -300; //!< Execution of the operation would lead to an error/invalid state
	static const EGAVResultCode ErrUnknownUnit        = -200; //!< Can't instantiate desired EGAV unit (source, sink, input, output, etc.)
	static const EGAVResultCode ErrDeviceInUse        = -108; //!< Device is in use by another application (from EVHALResultCode)
	static const EGAVResultCode ErrInvalidPath        = -101; //!< File error: specified path is not valid
	static const EGAVResultCode ErrCouldNotOpenFile   = -100; //!< File error: Could not open file
	static const EGAVResultCode ErrResultPending      = -19;  //!< Hardware busy, try again later
	static const EGAVResultCode ErrResourceNotAvail   = -18;  //!< Resource not available
	static const EGAVResultCode ErrOutOfRange         = -17;  //!< Out of range
	static const EGAVResultCode ErrTimeOut            = -16;  //!< Operation timed out
	static const EGAVResultCode ErrNotSupported       = -15;  //!< Operation not supported; used with firmware update
	static const EGAVResultCode ErrConversionFailed   = -14;  //!< Conversion failed
	static const EGAVResultCode ErrNotFound           = -13;  //!< Not found
	static const EGAVResultCode ErrNoData             = -12;  //!< No data
	static const EGAVResultCode ErrVideoScaler        = -11;  //!< Video scaler error
	static const EGAVResultCode ErrEncoder            = -10;  //!< Encoder error
	static const EGAVResultCode ErrInvalidFormat      = -9;   //!< Invalid format
	static const EGAVResultCode ErrInvalidParameter   = -8;   //!< Invalid parameter
	static const EGAVResultCode ErrInvalidState       = -7;   //!< Invalid state (e.g. when trying to process data while a unit is deinitialized)
	static const EGAVResultCode ErrInsufficientMemory = -6;   //!< Out of memory
	static const EGAVResultCode ErrNotInitialized     = -5;   //!< Not initialized
	static const EGAVResultCode ErrInvalidCast        = -4;   //!< Cast operation failed
	static const EGAVResultCode ErrNotImplemented     = -3;   //!< Not implemented
	static const EGAVResultCode ErrNullPointer        = -2;   //!< Null pointer
	static const EGAVResultCode ErrUnknown            = -1;   //!< General failure
	static const EGAVResultCode ErrCustom             = 0;    //!< Custom error code: error code is in mCustomResultCode
	static const EGAVResultCode Ok                    = 1;    //!< Success
	static const EGAVResultCode OkNoDataChanged       = 2;    //!< Success: No data were changed (similar to HRESULT value S_FALSE)
	static const EGAVResultCode OkFileNotFound        = 3;    //!< Success: File was not found, but that is a valid state.
	static const EGAVResultCode OkButIncomplete       = 4;    //!< Success: Operation didn't fail, but had some internal uncritical errors (similar to S_FALSE on Windows)

	//------------------------------------------------------------------------------
	// Construction
	//------------------------------------------------------------------------------
	//! Constructor
	EGAVResult() {}
	EGAVResult(EGAVResultCode inResultCode);
	EGAVResult(EGAVResultCustomType inCustomResultType, int64_t inCustomResultCode);

	//------------------------------------------------------------------------------
	// Initialization
	//------------------------------------------------------------------------------
#if _UP_WINDOWS
	//! Init with HRESULT (EGAVResultCustomType::Hresult)
	void InitWithHresult(HRESULT hr);
	//! Init with Windows error code (EGAVResultCustomType::WinError)
	void InitWithWinError(LONG err);
#endif

	//------------------------------------------------------------------------------
	// Helpers
	//------------------------------------------------------------------------------
	bool Succeeded() const;
	bool Failed() const { return !Succeeded(); }

	EGAVResultCode GetResultCode() const { return mResultCode; }
	EGAVResultCustomType GetCustomResultType() const { return mCustomResultType; }
	int64_t GetCustomResultCode() const { return mCustomResultCode; }

	void operator=(const EGAVResultCode inResultCode);
	bool operator==(const EGAVResult inResult) const;
	bool operator!=(const EGAVResult inResult) const;
	bool operator==(const EGAVResultCode inResultCode) const;
	bool operator!=(const EGAVResultCode inResultCode) const;

	//------------------------------------------------------------------------------
	// Members
	//------------------------------------------------------------------------------
	// Common
	EGAVResultCode mResultCode = ErrCustom;

	// Custom error codes (platform error codes or error codes from other APIs)
	EGAVResultCustomType mCustomResultType = EGAVResultCustomType::None;
	int64_t mCustomResultCode = 0;

private:
	std::string mMessage;
};

//==============================================================================
// # Macros
//==============================================================================

#define EGAVResult_CheckPointer(_p_) \
	{ \
		if (!_p_) \
			return EGAVResult::ErrNullPointer; \
	}

#define EGAVResult_CheckCondition(_cond_) \
	{ \
		if (false == (_cond_)) \
			return EGAVResult::ErrUnknown; \
	}

#ifndef EGAV_OVERRIDE_DEBUG_MACROS
inline void dummy() {}
#define EPL_ASSERT_BREAK(...) dummy()
#define error_printf(...) dummy()
#define warning_printf(...) dummy()
#define info_printf(...) dummy()
#endif

obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/Library/HDMIInfoFramesAPI.h:

/* MIT License Copyright (c) 2022 Corsair Memory, Inc. (license text as at top of file)
*/

#pragma once

#include <stdint.h>
#include <string>

#pragma pack(push, 1)

//====================================================================================
// # VIDEO IDENTIFICATION CODES (VIC)
//====================================================================================

// Video ID Code, see CEA-861-E chapter 4.1, table 4
// Video ID Code, see CEA-861-G chapter 4.1, table 3
typedef struct _HDMI_VIC_DESCRIPTOR
{
	uint8_t bID;         //!< Video ID Code
	int     iWidth;      //!< width in pixels
	int     iHeight;     //!< height in pixels
	int     iFieldRate;  //!< field refresh rate in Hz
	bool    bInterlaced; //!< interlaced
	short   iAspectX;    //!< picture aspect ratio H
	short   iAspectY;    //!< picture aspect ratio V
} HDMI_VIC_DESCRIPTOR;

// a field refresh value of 24Hz means either 24.00Hz or 23.98Hz
// a field refresh value of 30Hz means either 30.00Hz or 29.97Hz
// a field refresh value of 48Hz means either 48.00Hz or 47.95Hz
// a field refresh value of 60Hz means either 60.00Hz or 59.94Hz
// a field refresh value of 120Hz means either 120.00Hz or 119.88Hz
// a field refresh value of 240Hz means either 240.00Hz or 239.76Hz

#define HDMI_VIC_TABLE_SIZE 220
extern const HDMI_VIC_DESCRIPTOR g_HDMI_VIC_TABLE[HDMI_VIC_TABLE_SIZE];

//====================================================================================
// # INFO FRAME TYPES
//====================================================================================

// see CEA-861-E chapter 6, table 6
// see CEA-861-G chapter 6, table 5
#define HDMI_INFOFRAME_TYPE_RESERVED 0x00 // reserved
#define HDMI_INFOFRAME_TYPE_VS       0x01 // Vendor Specific
#define HDMI_INFOFRAME_TYPE_AVI      0x02 // Auxiliary Video Information
#define HDMI_INFOFRAME_TYPE_SPD      0x03 // Source Product Description
#define HDMI_INFOFRAME_TYPE_A        0x04 // Audio
#define HDMI_INFOFRAME_TYPE_MS       0x05 // MPEG Source
#define HDMI_INFOFRAME_TYPE_VBI      0x06 // NTSC VBI
#define HDMI_INFOFRAME_TYPE_DR       0x07 // Dynamic Range and Mastering

#define HDMI_INFOFRAME_TYPE_MIN HDMI_INFOFRAME_TYPE_VS
#define HDMI_INFOFRAME_TYPE_MAX HDMI_INFOFRAME_TYPE_DR

//====================================================================================
// # INFO FRAME HEADER
//====================================================================================

// see CEA-861-G Annex D.1
#define HDMI_MAX_INFOFRAME_SIZE    31 // 3 bytes header + 1 byte checksum + 27 bytes payload
#define HDMI_MAX_INFOFRAME_PAYLOAD 27

// see CEA-861-E chapter 6
typedef struct _HDMI_INFOFRAMEHEADER
{
	uint8_t bfType : 7;       // InfoFrame Type Code (see HDMI_INFOFRAME_TYPE_*)
	uint8_t bfPacketType : 1; // the HDMI Packet Type is 0x80+InfoframeType for HDMI InfoFrame Packets
	uint8_t bfVersion : 7;    // InfoFrame Version Number, starting with 1
	uint8_t bfChangeBit : 1;  // InfoFrame Change Bit, VS Infoframe only
	uint8_t bPayloadLength;   // Size of InfoFrame payload, not including Type, Version, Length
} HDMI_INFOFRAMEHEADER;

//====================================================================================
// # VENDOR SPECIFIC INFO FRAME
//====================================================================================

// see CEA-861-E chapter 6.1, table 7
// see CEA-861-G chapter 6.1, table 6
// type code is 0x01, version is 0x01, size is vendor specific
typedef struct _HDMI_VS1_PAYLOAD
{
	uint8_t IEEERegistrationID[3];                                       // IEEE OUI
	uint8_t bVendorSpecificPayload[HDMI_MAX_INFOFRAME_SIZE - 3 - 3 - 1]; // 24 bytes
} HDMI_VS1_PAYLOAD;

//------------------------------------------------------------------------------------
// see CEA-861-G chapter 6.1, table 7
// type code is 0x01, version is 0x02, size is vendor specific
// VS infoframe version 2 uses bit 7 of the version number as the ChangeBit
typedef struct _HDMI_VS2_PAYLOAD
{
	uint8_t IEEERegistrationID[3];                                       // IEEE OUI
	uint8_t bVendorSpecificPayload[HDMI_MAX_INFOFRAME_SIZE - 3 - 3 - 1]; // 24 bytes
} HDMI_VS2_PAYLOAD;

//====================================================================================
// # AUXILIARY VIDEO INFORMATION INFOFRAME
//====================================================================================

// Scan Information, see CEA-861-E chapter 6.4, table 10
#define HDMI_AVI_S_NODATA    0x00 // no data
#define HDMI_AVI_S_OVERSCAN  0x01 // composed for overscan
#define HDMI_AVI_S_UNDERSCAN 0x02 // composed for underscan

// Bar Data Present, see CEA-861-E chapter 6.4, table 10
#define HDMI_AVI_B_NODATA 0x00 // no data
#define HDMI_AVI_B_V      0x01 // vertical bar info present
#define HDMI_AVI_B_H      0x02 // horizontal bar info present
#define HDMI_AVI_B_VH     0x03 // vertical and horizontal bar info present

// Active Format Information Present, see CEA-861-E chapter 6.4, table 10
#define HDMI_AVI_A_NONE    0x00 // no information
#define HDMI_AVI_A_PRESENT 0x01 // information present

// RGB or YCbCr, see CEA-861-E chapter 6.4, table 10
// RGB or YCbCr, see CEA-861-G chapter 6.4, table 10
#define HDMI_AVI_Y_RGB      0x00 // RGB
#define HDMI_AVI_Y_YCBCR422 0x01 // YCbCr 4:2:2
#define HDMI_AVI_Y_YCBCR444 0x02 // YCbCr 4:4:4
#define HDMI_AVI_Y_YCBCR420 0x03 // YCbCr 4:2:0
#define HDMI_AVI_Y_IDO      0x07 // IDO-Defined

// Active Portion Aspect Ratio, see CEA-861-E chapter 6.4, table 11
#define HDMI_AVI_R_SAME  0x08 // same as coded frame aspect ratio
#define HDMI_AVI_R_4TO3  0x09 // 4:3 (center)
#define HDMI_AVI_R_16TO9 0x0A // 16:9 (center)
#define HDMI_AVI_R_14TO9 0x0B // 14:9 (center)

// Coded Frame Aspect Ratio, see CEA-861-E chapter 6.4, table 11
#define HDMI_AVI_M_NODATA 0x00 // no data
#define HDMI_AVI_M_4TO3   0x01 // 4:3
#define HDMI_AVI_M_16TO9  0x02 // 16:9

// Colorimetry, see CEA-861-E chapter 6.4, table 11
#define HDMI_AVI_C_NODATA    0x00 // no data
#define HDMI_AVI_C_SMTPE170M 0x01 // SMTPE 170M
#define HDMI_AVI_C_ITUR709   0x02 // ITU-R 709
#define HDMI_AVI_C_EXTENDED  0x03 // extended colorimetry information valid

// Non-Uniform Picture Scaling, see CEA-861-E chapter 6.4, table 13
#define HDMI_AVI_SC_NO 0x00 // no known scaling
#define HDMI_AVI_SC_H  0x01 // picture has been scaled horizontally
#define HDMI_AVI_SC_V  0x02 // picture has been scaled vertically
#define HDMI_AVI_SC_HV 0x03 // picture has been scaled horizontally and vertically

// RGB Quantization Range, see CEA-861-E chapter 6.4, table 13
#define HDMI_AVI_Q_DEFAULT 0x00 // default, depends on video format
#define HDMI_AVI_Q_LIMITED 0x01 // limited range
#define HDMI_AVI_Q_FULL    0x02 // full range

// Extended Colorimetry, see CEA-861-E chapter 6.4, table 13
// Extended Colorimetry, see CEA-861-G chapter 6.4, table 13
#define HDMI_AVI_EC_XVYCC601    0x00 // xvYCC 601
#define HDMI_AVI_EC_XVYCC709    0x01 // xvYCC 709
#define HDMI_AVI_EC_SYCC601     0x02 // sYCC 601
#define HDMI_AVI_EC_ADOBEYCC601 0x03 // Adobe YCC 601
#define HDMI_AVI_EC_ADOBERGB    0x04 // Adobe RGB
#define HDMI_AVI_EC_BT2020C     0x05 // ITU BT2020 YcCbcCrc
#define HDMI_AVI_EC_BT2020      0x06 // ITU BT2020 RGB or YCbCr
#define HDMI_AVI_EC_EXTENDED    0x07 // extended information, see HDMI_AVI_ACE_*

// IT Content, see CEA-861-E chapter 6.4, table 13
#define HDMI_AVI_ITC_NODATA 0x00 // no data
#define HDMI_AVI_ITC_VALID  0x01 // IT Content, CN is valid

// Pixel Repetition Factor, see CEA-861-E chapter 6.4, table 15
#define HDMI_AVI_PR_NONE 0x00 // pixels are not repeated, i.e. only sent once in total
#define HDMI_AVI_PR_1    0x01 // pixels are repeated once, i.e. sent twice in total
#define HDMI_AVI_PR_2    0x02 // pixels are repeated twice, i.e. sent three times in total
#define HDMI_AVI_PR_3    0x03 // pixels are repeated three times, i.e. sent 4 times in total
#define HDMI_AVI_PR_4    0x04 // pixels are repeated 4 times, i.e. sent 5 times in total
#define HDMI_AVI_PR_5    0x05 // pixels are repeated 5 times, i.e. sent 6 times in total
#define HDMI_AVI_PR_6    0x06 // pixels are repeated 6 times, i.e. sent 7 times in total
#define HDMI_AVI_PR_7    0x07 // pixels are repeated 7 times, i.e. sent 8 times in total
#define HDMI_AVI_PR_8    0x08 // pixels are repeated 8 times, i.e. sent 9 times in total
#define HDMI_AVI_PR_9    0x09 // pixels are repeated 9 times, i.e. sent 10 times in total

// IT Content Type, see CEA-861-E chapter 6.4, table 16
#define HDMI_AVI_CN_GRAPHICS 0x00 // graphics
#define HDMI_AVI_CN_PHOTO    0x01 // photo
#define HDMI_AVI_CN_CINEMA   0x02 // cinema
#define HDMI_AVI_CN_GAME     0x03 // game

// YCC Quantization Range, see CEA-861-E chapter 6.4, table 17
#define HDMI_AVI_YQ_LIMITED 0x00 // limited range
#define HDMI_AVI_YQ_FULL    0x01 // full range

// Additional Colorimetry Extension, see CEA-861-G chapter 6.4, table 25
#define HDMI_AVI_ACE_DCIP3D65 0x00 // DCI-P3 R'G'B' (D65)
#define HDMI_AVI_ACE_DCIP3TH  0x01 // DCI-P3 R'G'B' (Theater)

//------------------------------------------------------------------------------------
// see CEA-861-E chapter 6.3, table 8
// type code is 0x02, version is 0x01, size is 13
// ATTENTION: this AVI version is obsolete and should not be used!
typedef struct _HDMI_AVI1_PAYLOAD
{
	// data byte 1, see CEA-861-E chapter 6.4, table 10
	uint8_t bfScanInformation : 2;                // S0, S1
	uint8_t bfBarDataPresent : 2;                 // B0, B1
	uint8_t bfActiveFormatInformationPresent : 1; // A0
	uint8_t bfRGBorYCbCr : 2;                     // Y0, Y1
	uint8_t bfFutureUse1 : 1;                     // reserved, zero
	// data byte 2, see CEA-861-E chapter 6.4, table 11
	uint8_t bfActivePortionAspectRatio : 4;       // R0, R1, R2, R3
	uint8_t bfCodedFrameAspectRatio : 2;          // M0, M1
	uint8_t bfColorimetry : 2;                    // C0, C1
	// data byte 3, see CEA-861-E chapter 6.4, table 13
	uint8_t bfNonUniformPictureScaling : 2;       // SC0, SC1
	uint8_t bfFutureUse3 : 6;                     // reserved, zero
	uint8_t bfFutureUse4 = 0;                     // reserved, zero
	uint8_t bfFutureUse5 = 0;                     // reserved, zero
	uint16_t wLineNumberOfEndOfTopBar;            // ETB
	uint16_t wLineNumberOfStartOfBottomBar;       // SBB
	uint16_t wPixelNumberOfEndOfLeftBar;          // ELB
	uint16_t wPixelNumberOfStartOfRightBar;       // SRB
} HDMI_AVI1_PAYLOAD;

//------------------------------------------------------------------------------------
// see CEA-861-E chapter 6.4, table 9
// see CEA-861-E chapter 6.4, table 8
// type code is 0x02, version is 0x02, size is 13
typedef struct _HDMI_AVI2_PAYLOAD
{
	// data byte 1, see CEA-861-E chapter 6.4, table 10
	uint8_t bfScanInformation : 2;                // S0, S1
	uint8_t bfBarDataPresent : 2;                 // B0, B1
	uint8_t bfActiveFormatInformationPresent : 1; // A0
	uint8_t bfRGBorYCbCr : 2;                     // Y0, Y1
	uint8_t bfFutureUse1 : 1;                     // reserved, zero
	// data byte 2, see CEA-861-E chapter 6.4, table 11
	uint8_t bfActivePortionAspectRatio : 4;       // R0, R1, R2, R3
	uint8_t bfCodedFrameAspectRatio : 2;          // M0, M1
	uint8_t bfColorimetry : 2;                    // C0, C1
	// data byte 3, see CEA-861-E chapter 6.4, table 13
	uint8_t bfNonUniformPictureScaling : 2;       // SC0, SC1
	uint8_t bfRGBQuantizationRange : 2;           // Q0, Q1
	uint8_t bfExtendedColorimetry : 3;            // EC0, EC1, EC2
	uint8_t bfITContent : 1;                      // ITC
	// data byte 4, see CEA-861-E chapter 4.1, table 4
	uint8_t bfVIC : 7;                            // VIC
	uint8_t bfFutureUse4 : 1;                     // reserved, zero
	// data byte 5, see CEA-861-E chapter 6.4, tables 15, 16, 17
	uint8_t bfPixelRepetitionFactor : 4;          // PR0, PR1, PR2, PR3
	uint8_t bfITContentType : 2;                  // CN0, CN1
	uint8_t bfYCCQuantizationRange : 2;           // YQ0, YQ1
	uint16_t wLineNumberOfEndOfTopBar;            // ETB
	uint16_t wLineNumberOfStartOfBottomBar;       // SBB
	uint16_t wPixelNumberOfEndOfLeftBar;          // ELB
	uint16_t wPixelNumberOfStartOfRightBar;       // SRB
} HDMI_AVI2_PAYLOAD;

//------------------------------------------------------------------------------------
// see CEA-861-E chapter 6.4, table 8
// type code is 0x02, version is 0x03, size is 13
typedef struct _HDMI_AVI3_PAYLOAD
{
	// data byte 1, see CEA-861-G chapter 6.4, table 10
	uint8_t bfScanInformation : 2;                // S0, S1
	uint8_t bfBarDataPresent : 2;                 // B0, B1
	uint8_t bfActiveFormatInformationPresent : 1; // A0
	uint8_t bfRGBorYCbCr : 3;                     // Y0, Y1, Y2
	// data byte 2, see CEA-861-E chapter 6.4, table 11
	uint8_t bfActivePortionAspectRatio : 4;       // R0, R1, R2, R3
	uint8_t bfCodedFrameAspectRatio : 2;          // M0, M1
	uint8_t bfColorimetry : 2;                    // C0, C1
	// data byte 3, see CEA-861-E chapter 6.4, table 13
	uint8_t bfNonUniformPictureScaling : 2;       // SC0, SC1
	uint8_t bfRGBQuantizationRange : 2;           // Q0, Q1
	uint8_t bfExtendedColorimetry : 3;            // EC0, EC1, EC2
	uint8_t bfITContent : 1;                      // ITC
	// data byte 4, see CEA-861-E chapter 4.1, table 4
	uint8_t bVIC;                                 // VIC
	// data byte 5, see CEA-861-E chapter 6.4, tables 15, 16, 17
	uint8_t bfPixelRepetitionFactor : 4;          // PR0, PR1, PR2, PR3
	uint8_t bfITContentType : 2;                  // CN0, CN1
	uint8_t bfYCCQuantizationRange : 2;           // YQ0, YQ1
	uint16_t wLineNumberOfEndOfTopBar;            // ETB
	uint16_t wLineNumberOfStartOfBottomBar;       // SBB
	uint16_t wPixelNumberOfEndOfLeftBar;          // ELB
	uint16_t wPixelNumberOfStartOfRightBar;       // SRB
} HDMI_AVI3_PAYLOAD;

//------------------------------------------------------------------------------------
// see CEA-861-E chapter 6.4, table 9
// type code is 0x02, version is 0x04, size is 14
typedef struct _HDMI_AVI4_PAYLOAD
{
	// data byte 1, see CEA-861-G chapter 6.4, table 10
	uint8_t bfScanInformation : 2;                // S0, S1
	uint8_t bfBarDataPresent : 2;                 // B0, B1
	uint8_t bfActiveFormatInformationPresent : 1; // A0
	uint8_t bfRGBorYCbCr : 3;                     // Y0, Y1, Y2
	// data byte 2, see CEA-861-E chapter 6.4, table 11
	uint8_t bfActivePortionAspectRatio : 4;       // R0, R1, R2, R3
	uint8_t bfCodedFrameAspectRatio : 2;          // M0, M1
	uint8_t bfColorimetry : 2;                    // C0, C1
	// data byte 3, see CEA-861-E chapter 6.4, table 13
	uint8_t bfNonUniformPictureScaling : 2;       // SC0, SC1
	uint8_t bfRGBQuantizationRange : 2;           // Q0, Q1
	uint8_t bfExtendedColorimetry : 3;            // EC0, EC1, EC2
	uint8_t bfITContent : 1;                      // ITC
	// data byte 4, see CEA-861-E chapter 4.1, table 4
	uint8_t bVIC;                                 // VIC
	// data byte 5, see CEA-861-E chapter 6.4, tables 15, 16, 17
	uint8_t bfPixelRepetitionFactor : 4;          // PR0, PR1, PR2, PR3
	uint8_t bfITContentType : 2;                  // CN0, CN1
	uint8_t bfYCCQuantizationRange : 2;           // YQ0, YQ1
	uint16_t wLineNumberOfEndOfTopBar;            // ETB
	uint16_t wLineNumberOfStartOfBottomBar;       // SBB
	uint16_t wPixelNumberOfEndOfLeftBar;          // ELB
	uint16_t wPixelNumberOfStartOfRightBar;       // SRB
	// data byte 14, see CEA-861-G chapter 6.4, tables 15, 16, 17
	uint8_t bfReserved14 : 4;                     // reserved, zero
	uint8_t bfAdditionalColorimetry : 4;          // ACE
} HDMI_AVI4_PAYLOAD;

//------------------------------------------------------------------------------------
#define HDMI_ERROR   -1
#define HDMI_UNKNOWN  0

#define HDMI_FORMAT_RGB      1 // RGB
#define HDMI_FORMAT_YCBCR420 2 // YCbCr 4:2:0
#define HDMI_FORMAT_YCBCR422 3 // YCbCr 4:2:2
#define HDMI_FORMAT_YCBCR444 4 // YCbCr 4:4:4

#define HDMI_COLOR_ADOBERGB     1
#define HDMI_COLOR_BT2020       2
#define HDMI_COLOR_DCIP3D65     3
#define HDMI_COLOR_DCIP3TH      4
#define HDMI_COLOR_SMPTE170M    5
#define HDMI_COLOR_BT709        6
#define HDMI_COLOR_XVYCC601     7
#define HDMI_COLOR_XVYCC709     8
#define HDMI_COLOR_SYCC601      9
#define HDMI_COLOR_ADOBEYCC601 10
#define HDMI_COLOR_BT2020C     11

//====================================================================================
// # SOURCE PRODUCT DESCRIPTION INFOFRAME (SPD)
//====================================================================================

// Source Information, see CEA-861-E chapter 6.5, table 22
#define HDMI_SPD_SI_UNKNOWN 0x00
#define HDMI_SPD_SI_STB     0x01
#define HDMI_SPD_SI_DVD     0x02
#define HDMI_SPD_SI_DVHS    0x03
#define HDMI_SPD_SI_DVR     0x04
#define HDMI_SPD_SI_DVC     0x05
#define HDMI_SPD_SI_DSC     0x06
#define HDMI_SPD_SI_VCD     0x07
#define HDMI_SPD_SI_GAME    0x08
#define HDMI_SPD_SI_PC      0x09
#define HDMI_SPD_SI_BD      0x0A
#define HDMI_SPD_SI_SACD    0x0B
#define HDMI_SPD_SI_HDDVD   0x0C
#define HDMI_SPD_SI_PMP     0x0D

const char* HDMI_SPD_ToString(uint8_t inByte);

//! @brief Maps abbreviated names to user-friendly names (e.g. MSFT --> Microsoft)
//! @param inManufacturer manufacturer string as contained in HDMI info packet
std::string HDMI_SPD_MapManufacturerString(const std::string& inManufacturer);

//------------------------------------------------------------------------------------
// see CEA-861-E chapter 6.5, table 21
// type code is 0x03, version is 1, size is 25
typedef struct _HDMI_SPD1_PAYLOAD
{
	uint8_t bVendorName[8];
	uint8_t bProductDescription[16];
	uint8_t bSourceInformation;
} HDMI_SPD1_PAYLOAD;

//====================================================================================
// AUDIO INFOFRAME
//====================================================================================

// Audio Channel Count, see CEA-861-E chapter 6.6.1, table 24
#define HDMI_A_CC_STREAM 0x00 // see stream header

// Audio Coding Type, see CEA-861-E chapter 6.6.1, table 24
#define HDMI_A_CT_STREAM 0x00 // see stream header
#define HDMI_A_CT_PCM    0x01 // PCM
#define HDMI_A_CT_AC3    0x02 // AC-3
#define HDMI_A_CT_MPEG1  0x03 // MPEG-1
#define HDMI_A_CT_MP3    0x04 // MP3
#define HDMI_A_CT_MPEG2  0x05 // MPEG-2
#define HDMI_A_CT_AACLC  0x06 // AAC-LC
#define HDMI_A_CT_DTS    0x07 // DTS
#define HDMI_A_CT_ATRAC  0x08 // ATRAC
#define HDMI_A_CT_DSD    0x09 // DSD
#define HDMI_A_CT_EAC3   0x0A // E-AC-3
#define HDMI_A_CT_DTSHD  0x0B // DTS-HD
#define HDMI_A_CT_MLP    0x0C // MLP
#define HDMI_A_CT_DST    0x0D // DST
#define HDMI_A_CT_WMAPRO 0x0E // WMA Pro
#define HDMI_A_CT_CXT    0x0F // see Audio Coding Extension Type (CXT)

// Sample Size, see CEA-861-E chapter 6.6.1, table 25
#define HDMI_A_SS_STREAM 0x00 // see stream header
#define HDMI_A_SS_16BIT  0x01 // 16bit
#define HDMI_A_SS_20BIT  0x02 // 20bit
#define HDMI_A_SS_24BIT  0x03 // 24bit

// Sample Frequency, see CEA-861-E chapter 6.6.1, table 25
#define HDMI_A_SF_STREAM 0x00 // see stream header
#define HDMI_A_SF_32000  0x01 // 32kHz
#define HDMI_A_SF_44100  0x02 // 44.1kHz
#define HDMI_A_SF_48000  0x03 // 48kHz
#define HDMI_A_SF_88200  0x04 // 88.2kHz
#define HDMI_A_SF_96000  0x05 // 96kHz
#define
HDMI_A_SF_176400 0x06 // 176.4kHz #define HDMI_A_SF_192000 0x07 // 192kHz // Audio Coding Extension Type, see CEA-861-E chapter 6.6.1, table 26 #define HDMI_A_CXT_CT 0x00 // see Audio Coding Type (CT) #define HDMI_A_CXT_HEAAC 0x01 // HE-AAC #define HDMI_A_CXT_HEAAC2 0x02 // HE-AAC v2 #define HDMI_A_CXT_MPEGSURROUND 0x03 // MPEG Surround // Down-mix Inhibit, see CEA-861-E chapter 6.6.2, table 30 #define HDMI_A_DM_PERMITTED 0x00 // down mix permitted #define HDMI_A_DM_PROHIBITED 0x01 // down mix prohibited // LFE Playback Level, see CEA-861-E chapter 6.6.2, table 31 #define HDMI_A_LFEPBL_UNKNOWN 0x00 // unknown #define HDMI_A_LFEPBL_0DB 0x01 // 0 dB #define HDMI_A_LFEPBL_10DB 0x02 // +10 dB //------------------------------------------------------------------------------------ // see CEA-861-E chapter 6.6, table 23 // type code is 0x04, version is 1, size is 10 typedef struct _HDMI_A1_PAYLOAD { // data byte 1, see CEA-861-E chapter 6.6.1, table 24 uint8_t bfChannelCount : 3; // CC0, CC1, CC2 (channel count - 1) uint8_t bfReserved1 : 1; // reserved, zero uint8_t bfAudioCodingType : 4; // CT0, CT1, CT2, CT3 // data byte 2, see CEA-861-E chapter 6.6.1, table 25 uint8_t bfSampleSize : 2; // SS0, SS1 uint8_t bfSampleFrequency : 3; // SF0, SF1, SF2 uint8_t bfReserved2 : 3; // reserved, zero // data byte 3, see CEA-861-E chapter 6.6.1, table 26 uint8_t bfAudioCodingExtensionType : 5; // CXT0, CXT1, CXT2, CXT3, CXT4 uint8_t bfReserved3 : 3; // reserved, zero // data byte 4, see CEA-861-E chapter 6.6.2, table 28 uint8_t bChannelAllocation; // CA (channel to speaker) // data byte 5, see CEA-861-E chapter 6.6.2, tables 29, 30, 31 uint8_t bfLFEPlaybackLevel : 2; // LFEPBL0, LFEPBL1 uint8_t bfReserved5 : 1; // reserved, zero uint8_t bfLevelShiftValue : 4; // LSV0, LSV1 (dB) uint8_t bfDownMixInhibitFlag : 1; // DM_INH uint8_t bReserved6 = 0; // reserved, or Speaker Mask, or Channel Index uint8_t bReserved7 = 0; // reserved, or Speaker Mask, or Channel Index uint8_t bReserved8 = 0; 
// reserved, or Speaker Mask, or Channel Index uint8_t bReserved9 = 0; // reserved, or Speaker Mask, or Channel Index uint8_t bReserved10 = 0; // reserved, zero } HDMI_A1_PAYLOAD; //==================================================================================== // MPEG SOURCE INFOFRAME //==================================================================================== // MPEG Frame, see CEA-861-E chapter 6.7, table 33 #define HDMI_MS_MF_UNKNOWN 0x00 // unknown #define HDMI_MS_MF_I 0x01 // I-Frame #define HDMI_MS_MF_P 0x02 // P-Frame #define HDMI_MS_MF_B 0x03 // B-Frame // Field Repeat, see CEA-861-E chapter 6.7, table 33 #define HDMI_MS_FR_NEW 0x00 // new field #define HDMI_MS_FR_REPEATED 0x01 // repeated field //------------------------------------------------------------------------------------ // see CEA-861-E chapter 6.7, table 32 // type code is 0x05, version is 1, size is 10 // ATTENTION: it is recommended not to use this info frame typedef struct _HDMI_MS1_PAYLOAD { uint32_t dwMPEGBitRate; // MPEG bit rate in Hz // data byte 5, see CEA-861-E chapter 6.7, table 33 uint8_t bfMPEGFrame : 2; // MF0, MF1 uint8_t bfReserved5a : 2; // reserved, zero uint8_t bfFieldRepeat : 1; // FR0 uint8_t bfReserved5b : 3; // reserved, zero uint8_t bReserved6 = 0; // reserved uint8_t bReserved7 = 0; // reserved uint8_t bReserved8 = 0; // reserved uint8_t bReserved9 = 0; // reserved uint8_t bReserved10 = 0; // reserved } HDMI_MS1_PAYLOAD; //==================================================================================== // NTSC VBI INFOFRAME //==================================================================================== // see CEA-861-E chapter 6.8, table 34 // type code is 0x06, version is 1, size depends typedef struct _HDMI_VBI1_PAYLOAD { uint8_t bPESDataField[HDMI_MAX_INFOFRAME_SIZE - 3 - 1]; // PES data field, limited to max 27 bytes } HDMI_VBI1_PAYLOAD; //==================================================================================== // # DYNAMIC 
RANGE AND MASTERING INFOFRAME //==================================================================================== /* Example for valid DR info frame: 87 (type) | Header 01 (version) | 1A (length) | 8D (checksum) 02 00 FA 00 AE 02 85 00 | Payload 29 00 A3 02 5C 01 40 01 | 51 01 DB 05 00 00 DB 05 | 1F 03 | */ //------------------------------------------------------------------------------------ // EOTF, see CEA-861.3-A chapter 3.2, table 3 #define HDMI_DR_EOTF_SDRGAMMA 0x00 // traditional gamma, SDR #define HDMI_DR_EOTF_HDRGAMMA 0x01 // traditional gamma, HDR #define HDMI_DR_EOTF_ST2084 0x02 // ST2084 PQ #define HDMI_DR_EOTF_HLG 0x03 // BT2100 HLG // Metadata, see CEA-861.3-A chapter 3.2, table 4 #define HDMI_DR_MD_STATIC 0x00 // static metadata type 1 //------------------------------------------------------------------------------------ // used for static metadata, see CEA-861.3-A chapter 3.2.1 typedef struct _HDMI_XY { uint16_t X; // encoded in units of 0.00002 uint16_t Y; // encoded in units of 0.00002 } HDMI_XY; //------------------------------------------------------------------------------------ //! HDR Meta Data //! type code is 0x07, version is 1, size depends (30 for static metadata type 1) //! 
@sa CEA-861.3-A chapter 3.2, table 2 typedef struct _HDMI_DR1_PAYLOAD { // data byte 1, see CEA-861.3-A chapter 3.2, table 3 uint8_t bfEOTF : 3; // EOTF uint8_t bfReserved1 : 5; // reserved // data byte 2, see CEA-861.3-A chapter 3.2, table 4 uint8_t bfMetadataID : 3; // static metadata descriptor ID uint8_t bfReserved2 : 5; // reserved // data bytes 3-22, for static metadata type 1, see CEA-861.3-A chapter 3.2.1, table 5 HDMI_XY xyDisplayPrimaries[3]; // chromaticity of red or green or blue (ST2086) HDMI_XY xyWhitePoint; // white point (ST2086) uint16_t wMaxDisplayLuminance; // maximum display mastering luminance (ST2086), nit uint16_t wMinDisplayLuminance; // minimum display mastering luminance (ST2086), 0.0001 nit uint16_t wMaxCLL; // maximum content light level, nit uint16_t wMaxFALL; // maximum frame-average light level, nit } HDMI_DR1_PAYLOAD; //==================================================================================== // GENERIC INFOFRAME TYPE //==================================================================================== typedef struct _HDMI_GENERIC_INFOFRAME { HDMI_INFOFRAMEHEADER header; // type, version, length uint8_t bChecksum; // the sum of all bytes in the info frame must be zero union { // generic byte array, to address the InfoFrame by index uint8_t bPayload[HDMI_MAX_INFOFRAME_SIZE - 3 - 1] = { 0 }; // specific infoframes HDMI_VS1_PAYLOAD plVS1; HDMI_VS2_PAYLOAD plVS2; HDMI_AVI1_PAYLOAD plAVI1; HDMI_AVI2_PAYLOAD plAVI2; HDMI_AVI3_PAYLOAD plAVI3; HDMI_AVI4_PAYLOAD plAVI4; HDMI_SPD1_PAYLOAD plSPD1; HDMI_A1_PAYLOAD plA1; HDMI_MS1_PAYLOAD plMS1; HDMI_VBI1_PAYLOAD plVBI1; HDMI_DR1_PAYLOAD plDR1; }; } HDMI_GENERIC_INFOFRAME; #pragma pack(pop) //==================================================================================== // # FUNCTIONS //==================================================================================== // Verifies the checksum of the HDMI Info Frame inline bool HDMI_IsInfoFrameValid(const 
_HDMI_GENERIC_INFOFRAME* pInfoFrame) { if (pInfoFrame == NULL) return false; unsigned char* data = (unsigned char*)pInfoFrame; int size = sizeof(HDMI_INFOFRAMEHEADER) + 1 + pInfoFrame->header.bPayloadLength; unsigned char checksum = 0; while (size-- != 0) checksum += *data++; return (checksum == 0); } obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/Library/EGAVHID.h000644 001751 001751 00000005537 15153330240 034243 0ustar00runnerrunner000000 000000 /* MIT License Copyright (c) 2022 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ //============================================================================== /** @file EGAVHID.h @brief HID specific types and constants. **/ //============================================================================== #pragma once #include #include "EGAVResult.h" #include "EGAVDevice.h" // required for EGAVDeviceID const int kHidDefaultReportID = 0; //! 
Dummy report ID //============================================================================== // # Interfaces (HID) //============================================================================== //! @brief HID Interface for cross-platform support. (EVH-442) class EGAVHIDInterface { public: virtual ~EGAVHIDInterface() { } virtual EGAVResult InitHIDInterface(const EGAVDeviceID& inDeviceID) = 0; virtual EGAVResult DeinitHIDInterface() = 0; //! @brief Reads a HID response message from the OS. //! @param outMessage will contain the resulting message. Its length will be adjusted automatically. //! @param inReportID The report ID; always 0 (kHidDefaultReportID) for Facecam (Penna) //! @param inReadBufferSize size of the buffer passed to the HID read routine. If 0, mHIDCaps->InputReportByteLength is used. virtual EGAVResult ReadHID(std::vector<uint8_t>& outMessage, int inReportID, int inReadBufferSize = 0) = 0; //! @brief Writes the specified message. //! The implementation should construct a HID report to send to the hardware //! @param inMessage the message to send (NOT the report!) //! @param inReportID The report ID; always 0 (kHidDefaultReportID) for Facecam (Penna). The Penna-specific message tag is in the first byte of the message. virtual EGAVResult WriteHID(const std::vector<uint8_t>& inMessage, int inReportID) = 0; }; //! @brief Platform specific factory method std::shared_ptr<EGAVHIDInterface> CreateEGAVHIDInterface(); deps/libdshowcapture/src/external/capture-device-support/Library/EGAVDevice.h000644 001751 001751 00000005355 15153330240 034755 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022 Corsair Memory, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ #pragma once #include #include #include //============================================================================== // # Class EGAVDeviceID //============================================================================== enum class EGAVBusType { Unknown, USB, PCI }; struct EGAVDeviceID { EGAVDeviceID() {} //! @brief EGAVDeviceID constructor //! @param inBusType USB or PCIe //! @param inVendorID USB vendor ID or PCIe subsystem vendor ID //! @param inProductID USB product ID or PCIe subsystem device ID //! 
@param inLocationID Location ID (macOS only) EGAVDeviceID(EGAVBusType inBusType, uint16_t inVendorID, uint16_t inProductID, uint32_t inLocationID = 0) : busType(inBusType), vendorID(inVendorID), productID(inProductID), locationID(inLocationID) { } EGAVBusType busType = EGAVBusType::Unknown; uint16_t vendorID = 0; //!< USB vendor ID or PCI subvendor ID uint16_t productID = 0; //!< USB product ID or PCI subdevice ID uint32_t locationID = 0; //!< USB location ID (macOS only) bool Equals(const EGAVDeviceID& inDeviceID, bool inIgnoreLocation) const { if (this->busType != inDeviceID.busType) return false; if (this->vendorID != inDeviceID.vendorID) return false; if (this->productID != inDeviceID.productID) return false; if (this->locationID != inDeviceID.locationID && !inIgnoreLocation) return false; return true; } bool operator == (const EGAVDeviceID& inDeviceID) const { return Equals(inDeviceID, false); } bool operator != (const EGAVDeviceID& inDeviceID) const { return !Equals(inDeviceID, false); } std::string toString(); }; deps/libdshowcapture/src/external/capture-device-support/Library/EGAVResult.cpp000644 001751 001751 00000007572 15153330240 035372 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ //============================================================================== /** @file EGAVResult.cpp @brief Definition of error and success codes **/ //============================================================================== #include "EGAVResult.h" #if _UP_WINDOWS #include // for HRESULT #endif //------------------------------------------------------------------------------ // Constructor / Destructor //------------------------------------------------------------------------------ EGAVResult::EGAVResult(EGAVResultCode inResultCode) { mResultCode = inResultCode; } EGAVResult::EGAVResult(EGAVResultCustomType inCustomResultType, int64_t inCustomResultCode) { mResultCode = ErrCustom; mCustomResultType = inCustomResultType; mCustomResultCode = inCustomResultCode; } #if _UP_WINDOWS void EGAVResult::InitWithHresult(HRESULT hr) { mResultCode = ErrCustom; mCustomResultType = EGAVResultCustomType::Hresult; mCustomResultCode = hr; } void EGAVResult::InitWithWinError(LONG err) { mResultCode = ErrCustom; mCustomResultType = EGAVResultCustomType::WinError; mCustomResultCode = err; } #endif //------------------------------------------------------------------------------ // Helpers //------------------------------------------------------------------------------ bool EGAVResult::Succeeded() const { if (mResultCode == ErrCustom) { switch (mCustomResultType) { #if _UP_WINDOWS case EGAVResultCustomType::Hresult: return SUCCEEDED(mCustomResultCode); case EGAVResultCustomType::WinError: return (mCustomResultCode == 
ERROR_SUCCESS) ? true : false; #elif _UP_MAC case EGAVResultCustomType::Mac: return (mCustomResultCode == 0) ? true : false; #endif case EGAVResultCustomType::MainConcept: case EGAVResultCustomType::Device: return (mCustomResultCode == 0) ? true : false; default: return false; } } else { return (int32_t)mResultCode > 0 ? true : false; } } //------------------------------------------------------------------------------ // Operators //------------------------------------------------------------------------------ void EGAVResult::operator=(const EGAVResultCode inResultCode) { mCustomResultType = EGAVResultCustomType::None; mResultCode = inResultCode; } bool EGAVResult::operator==(const EGAVResult inResult) const { if (mResultCode != inResult.mResultCode ) return false; if (mCustomResultType != inResult.mCustomResultType) return false; if (mCustomResultCode != inResult.mCustomResultCode) return false; return true; } bool EGAVResult::operator!=(const EGAVResult inResult) const { return !(*this == inResult); } bool EGAVResult::operator==(const EGAVResultCode inResultCode) const { return mResultCode == inResultCode; } bool EGAVResult::operator!=(const EGAVResultCode inResultCode) const { return mResultCode != inResultCode; } obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/Library/mac/000755 001751 001751 00000000000 15153330731 033556 5ustar00runnerrunner000000 000000 deps/libdshowcapture/src/external/capture-device-support/Library/mac/EGAVHIDImplementation.cpp000644 001751 001751 00000022313 15153330240 040154 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022 Corsair Memory, Inc. 
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/ //============================================================================== /** @file EGAVHIDImplementation.cpp @brief macOS implementation of EGAVHIDInterface **/ //============================================================================== #include "EGAVHIDImplementation.h" #include <IOKit/hid/IOHIDManager.h> #include <IOKit/IOKitLib.h> #include <thread> //------------------------------------------------------------------------------ // PREPROCESSOR SWITCHES //------------------------------------------------------------------------------ std::shared_ptr<EGAVHIDInterface> CreateEGAVHIDInterface() { return std::make_shared<EGAVHID>(); } //============================================================================== // # HID Device Enumeration //============================================================================== static int LocationIDOfHIDDevice(IOHIDDeviceRef hidRef) { int locationID = 0; // this block is just here to see if we can find the corresponding device CFTypeRef uniqueID = IOHIDDeviceGetProperty(hidRef, CFSTR(kIOHIDUniqueIDKey)); if (CFGetTypeID(uniqueID) == CFNumberGetTypeID()) { uint64_t uniqueID64 = 0; CFNumberGetValue((CFNumberRef)uniqueID, kCFNumberLongLongType, &uniqueID64); //NSLog(@"uniqueID %lld or 0x%16llx", uniqueID64, uniqueID64); CFMutableDictionaryRef matchingDict = IORegistryEntryIDMatching(uniqueID64); // next call consumes a reference to matchingDict io_service_t matchedService = IOServiceGetMatchingService(kIOMasterPortDefault, matchingDict); if (matchedService != 0) { CFNumberRef loc = (CFNumberRef)IORegistryEntrySearchCFProperty(matchedService, kIOServicePlane, CFSTR("locationID"), kCFAllocatorDefault, kIORegistryIterateRecursively | kIORegistryIterateParents); if (loc != NULL) { CFNumberGetValue(loc, kCFNumberIntType, &locationID); CFRelease(loc); } } } return locationID; } static void HIDDeviceMatchingCallback(void* inContext, IOReturn /*inResult*/, void* /*inSender*/, IOHIDDeviceRef inIOHIDDeviceRef) { reinterpret_cast<EGAVHID*>(inContext)->DeviceAdded(inIOHIDDeviceRef); } static void
HIDDeviceRemovalCallback(void* inContext, IOReturn /*inResult*/, void* /*inSender*/, IOHIDDeviceRef inIOHIDDeviceRef) { reinterpret_cast<EGAVHID*>(inContext)->DeviceRemoved(inIOHIDDeviceRef); } //============================================================================== // # Class EGAVHID //============================================================================== EGAVHID::EGAVHID() { } void EGAVHID::DeviceAdded(IOHIDDeviceRef deviceRef) { if (mLocationID == 0) // we don't care about a specific locationID { mHIDDevice = deviceRef; info_printf("## DeviceAdded()"); } else if (LocationIDOfHIDDevice(deviceRef) == mLocationID) { mHIDDevice = deviceRef; info_printf("## DeviceAdded(): Location ID %d", mLocationID); } } void EGAVHID::DeviceRemoved(IOHIDDeviceRef deviceRef) { if (mHIDDevice == deviceRef) { info_printf("## DeviceRemoved()"); mHIDDevice = 0; } } //============================================================================== // ## HID interface //============================================================================== EGAVResult EGAVHID::InitHIDInterface(const EGAVDeviceID& inDeviceID, EGAVUnitPtr /*inOwner*/, bool /* inIgnoreDevicePathCheck = false */) { dbgFunctionI(); EGAVResult res = EGAVResult::Ok; mLocationID = inDeviceID.locationID; if (!mWorkerCreated) // program dies if thread is assigned when it isn't already null { mWorkerCreated = true; mWorker = std::thread([this, inDeviceID] { mRunLoop = CFRunLoopGetCurrent(); IOHIDManagerRef manager = IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDManagerOptionNone); IOHIDManagerRegisterDeviceMatchingCallback(manager, HIDDeviceMatchingCallback, this); IOHIDManagerRegisterDeviceRemovalCallback(manager, HIDDeviceRemovalCallback, this); CFMutableDictionaryRef matchingDict = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); CFNumberRef vendor = CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &inDeviceID.vendorID); CFNumberRef product =
CFNumberCreate(kCFAllocatorDefault, kCFNumberShortType, &inDeviceID.productID); CFDictionaryAddValue(matchingDict, CFSTR(kIOHIDVendorIDKey), vendor); CFDictionaryAddValue(matchingDict, CFSTR(kIOHIDProductIDKey), product); IOHIDManagerSetDeviceMatching(manager, matchingDict); IOHIDManagerScheduleWithRunLoop(manager, mRunLoop, kCFRunLoopDefaultMode); IOReturn ret = IOHIDManagerOpen(manager, kIOHIDOptionsTypeNone); EPL_ASSERT_BREAK(ret == kIOReturnSuccess); CFRunLoopRun(); IOHIDManagerClose(manager, kIOHIDOptionsTypeNone); CFRelease(manager); }); } // Wait for device found int64_t startTimeMsec = EplTime_GetMonotonicMilliseconds(); const int64_t kHidDiscoveryTimeoutMsec = 1500; while (!mHIDDevice && EplTime_GetMonotonicMilliseconds() - startTimeMsec < kHidDiscoveryTimeoutMsec) { std::this_thread::sleep_for(std::chrono::milliseconds(100)); } if (mHIDDevice) { // Query input report size //! @todo do this only once CFIndex reportSize = 0; CFTypeRef number = IOHIDDeviceGetProperty(mHIDDevice, CFSTR(kIOHIDMaxInputReportSizeKey)); CFNumberGetValue((CFNumberRef)number, kCFNumberCFIndexType, &reportSize); mInputReportSize = (int)reportSize; number = IOHIDDeviceGetProperty(mHIDDevice, CFSTR(kIOHIDMaxOutputReportSizeKey)); CFNumberGetValue((CFNumberRef)number, kCFNumberCFIndexType, &reportSize); mOutputReportSize = (int)reportSize; } return mHIDDevice ? EGAVResult::Ok : EGAVResult::ErrNotFound; } EGAVResult EGAVHID::DeinitHIDInterface() { dbgFunctionI(); mHIDDevice = nullptr; if (mRunLoop != nullptr) { CFRunLoopStop(mRunLoop); mWorker.join(); mRunLoop = nullptr; } mWorkerCreated = false; return EGAVResult::Ok; } EGAVResult EGAVHID::ReadHID(std::vector<uint8_t>& outMessage, int inReportId, int inReadBufferSize /*= 0*/) { EGAVResult_CheckPointer(mHIDDevice); EGAVResult res = EGAVResult::Ok; std::vector<uint8_t> report(mInputReportSize); // from the hardware, no zero prepended report[0] = inReportId; int usedSize = 0; { CFIndex bufferSize = inReadBufferSize > 0 ?
inReadBufferSize : report.size(); IOReturn err = IOHIDDeviceGetReport(mHIDDevice, kIOHIDReportTypeInput, inReportId, &report[0], &bufferSize); if (err == noErr) usedSize = (int)bufferSize; else error_printf("IOHIDDeviceGetReport() failed with IOReturn %d (0x%08X)", err, err); res = (err == noErr) ? EGAVResult::Ok : EGAVResult::ErrUnknown; } if (res.Succeeded()) { outMessage.clear(); // Facecam: calling code expects the report ID (0) in front of the report (Facecam) //! @todo check if this behavior is really necessary. Check on Windows also. if (inReportId == kHidDefaultReportID) outMessage.push_back(inReportId); for (int i = 0; i < usedSize; i++) outMessage.push_back(report[i]); } return res; } //! If device only has one report ID, it is zero (kHidDefaultReportID) EGAVResult EGAVHID::WriteHID(const std::vector<uint8_t>& inMessage, int inReportID) { // dbgFunctionI(); EGAVResult_CheckPointer(mHIDDevice); std::vector<uint8_t> report; // From Device Class Definition for Human Interface Devices (HID) Version 1.11 // If a device has multiple report structures, all data transfers start with a 1-byte identifier prefix that indicates which report structure // applies to the transfer. This allows the class driver to distinguish incoming pointer data from keyboard data by examining the transfer prefix. if (inReportID != kHidDefaultReportID) report.push_back(inReportID); for (auto m : inMessage) report.push_back(m); report.resize(mInputReportSize, 0); // pad report with zeros, ensure it is always the right length IOReturn err = IOHIDDeviceSetReport(mHIDDevice, kIOHIDReportTypeOutput, inReportID, &report[0], report.size()); // 0xE00002D6 - kIOReturnTimeout // 0xE00002EB - kIOReturnAborted if (err != noErr) error_printf("IOHIDDeviceSetReport() failed with IOReturn %d (0x%08X)", err, err); EGAVResult res = (err == noErr) ?
EGAVResult::Ok : EGAVResult::ErrUnknown; return res; } deps/libdshowcapture/src/external/capture-device-support/Library/mac/EGAVHIDImplementation.h000644 001751 001751 00000005423 15153330240 037624 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/ //============================================================================== /** @file EGAVHIDImplementation.h @brief macOS implementation of EGAVHIDInterface **/ //============================================================================== #pragma once #include <atomic> #include <thread> // macOS #include <IOKit/hid/IOHIDManager.h> #include "EGAVEngine/EGAVHID.h" class HIDTransport; class EGAVHID : public EGAVHIDInterface { public: EGAVHID(); void DeviceAdded(IOHIDDeviceRef deviceRef); void DeviceRemoved(IOHIDDeviceRef deviceRef); //----------------------------------------------------------------------------- // ## EGAVHIDInterface implementation //----------------------------------------------------------------------------- virtual EGAVResult InitHIDInterface(const EGAVDeviceID& inDeviceID, EGAVUnitPtr inOwner, bool inIgnoreDevicePathCheck = false) override; virtual EGAVResult DeinitHIDInterface() override; //! @brief Reads a HID response message from the OS. //! @param outMessage will contain the resulting message. Its length will be adjusted automatically. virtual EGAVResult ReadHID(std::vector<uint8_t>& outMessage, int inReportID, int inReadBufferSize = 0) override; //! @brief Writes a HID report (with ID 0) containing inMessage to the OS. //! @param inMessage the report contents, not including the report ID.
	virtual EGAVResult WriteHID(const std::vector<uint8_t>& inMessage, int inReportID) override;

private:
	int mLocationID = 0;
	int mInputReportSize = 0, mOutputReportSize = 0;
	std::atomic<IOHIDDeviceRef> mHIDDevice = nullptr;
	std::atomic<CFRunLoopRef> mRunLoop = nullptr;
	std::thread mWorker; //!< background worker for HID device discovery
	bool mWorkerCreated = false;
};
obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/Library/win/000755 001751 001751 00000000000 15153330731 033613 5ustar00runnerrunner000000 000000 deps/libdshowcapture/src/external/capture-device-support/Library/win/EGAVHIDImplementation.cpp000644 001751 001751 00000015422 15153330240 040214 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
//==============================================================================
/**
	@file	EGAVHIDImplementation.cpp
	@brief	Windows implementation of EGAVHIDInterface
**/
//==============================================================================

// https://docs.microsoft.com/en-us/windows-hardware/drivers/hid/introduction-to-hid-concepts

#include "EGAVHIDImplementation.h"

#include <sstream> // for ostringstream

// Windows headers; for HID interface
#include
#pragma comment(lib, "hid.lib")
#include
#include
#pragma comment(lib, "setupapi.lib")

#ifndef SAFE_CLOSE_HANDLE
#define SAFE_CLOSE_HANDLE(_handle_)                                        \
	{                                                                  \
		if ((_handle_ != 0) && (_handle_ != INVALID_HANDLE_VALUE)) \
		{                                                          \
			CloseHandle(_handle_);                             \
			_handle_ = INVALID_HANDLE_VALUE;                   \
		}                                                          \
	}
#endif

std::string GetHIDDevicePath(int inIndex)
{
	std::string devicePath;

	GUID guid;
	HidD_GetHidGuid(&guid);

	HDEVINFO DeviceInfo = SetupDiGetClassDevs(&guid, NULL, NULL, (DIGCF_PRESENT | DIGCF_DEVICEINTERFACE));

	SP_DEVICE_INTERFACE_DATA DeviceInterface;
	DeviceInterface.cbSize = sizeof(SP_DEVICE_INTERFACE_DATA);

	if (!SetupDiEnumDeviceInterfaces(DeviceInfo, NULL, &guid, inIndex, &DeviceInterface)) {
		SetupDiDestroyDeviceInfoList(DeviceInfo);
		return "";
	}

	unsigned long size = 0;
	SetupDiGetDeviceInterfaceDetail(DeviceInfo, &DeviceInterface, NULL, 0, &size, 0);

	PSP_INTERFACE_DEVICE_DETAIL_DATA pDeviceDetail = (PSP_INTERFACE_DEVICE_DETAIL_DATA)malloc(size);
	if (pDeviceDetail) {
		pDeviceDetail->cbSize = sizeof(SP_INTERFACE_DEVICE_DETAIL_DATA);
		if (SetupDiGetDeviceInterfaceDetail(DeviceInfo, &DeviceInterface, pDeviceDetail, size, &size, NULL))
			devicePath = CT2A(pDeviceDetail->DevicePath);
		free(pDeviceDetail);
	}

	SetupDiDestroyDeviceInfoList(DeviceInfo);
	return devicePath;
}

std::shared_ptr<EGAVHIDInterface> CreateEGAVHIDInterface()
{
	return std::make_shared<EGAVHID>();
}

//==============================================================================
// ## Class EGAVHID
//==============================================================================

EGAVHID::EGAVHID() :
	mHIDCaps(std::make_unique<_HIDP_CAPS>())
{
}

EGAVResult EGAVHID::InitHIDInterface(const EGAVDeviceID& inDeviceID)
{
	EGAVResult res = EGAVResult::ErrNotFound;
	DWORD index = 0;
	std::string path;
	while ((path = GetHIDDevicePath(index++)) != "") {
		HANDLE hHidDevice = CreateFileA(path.c_str(), GENERIC_READ | GENERIC_WRITE,
						FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL);
		if (hHidDevice == INVALID_HANDLE_VALUE)
			continue;

		bool isCorrectHidDevice = false;
		HIDD_ATTRIBUTES attr;
		if (HidD_GetAttributes(hHidDevice, &attr))
			isCorrectHidDevice = (attr.VendorID == inDeviceID.vendorID && attr.ProductID == inDeviceID.productID);

		if (isCorrectHidDevice) {
			mHIDHandle = hHidDevice;
			{
				PHIDP_PREPARSED_DATA p;
				HidD_GetPreparsedData(mHIDHandle, &p);
				HIDP_CAPS c;
				HidP_GetCaps(p, &c);
				*mHIDCaps = c;
				HidD_FreePreparsedData(p);
			}
			res = EGAVResult::Ok;
			break;
		}
		CloseHandle(hHidDevice);
	}
	return res;
}

EGAVResult EGAVHID::DeinitHIDInterface()
{
	SAFE_CLOSE_HANDLE(mHIDHandle);
	return EGAVResult::Ok;
}

//! This reads the report from the hardware
//! See Facecam (Penna) sample code in https://elgato.atlassian.net/browse/EVH-493
EGAVResult EGAVHID::ReadHID(std::vector<uint8_t>& outMessage, int inReportID, int inReadBufferSize/* = 0*/)
{
	if (outMessage.size() >= mHIDCaps->InputReportByteLength)
		return EGAVResult::ErrInvalidParameter;

	std::vector<uint8_t> inputReport(mHIDCaps->InputReportByteLength);
	inputReport[0] = (uint8_t)inReportID;
	if (inReadBufferSize > 0)
		inputReport.resize(inReadBufferSize); // Required for Cam Link PD575 (EVH-1418)

	BOOL success = HidD_GetInputReport(mHIDHandle, &inputReport[0], (ULONG)inputReport.size());
	EGAVResult res = success ?
		EGAVResult::Ok : EGAVResult::ErrInvalidOperation;
	// 121 - ERROR_SEM_TIMEOUT
	// 31  - ERROR_GEN_FAILURE - for invalid report ID
	// 87  - ERROR_INVALID_PARAMETER - if (buffer size != caps.InputReportByteLength)
	if (FALSE == success) {
		// error_printf("HidD_GetInputReport() for report ID %d FAILED with %d", inReportID, GetLastError());
	} else {
		outMessage.assign(inputReport.begin(), inputReport.end());
	}
	return res;
}

// This only prepends a zero byte (the report ID) to the message, pads it out to the size of an output
// report and sends it to the hardware.
EGAVResult EGAVHID::WriteHID(const std::vector<uint8_t>& inMessage, int inReportID)
{
	if (!mHIDCaps)
		return EGAVResult::ErrInvalidState;
	if (inMessage.size() > mHIDCaps->OutputReportByteLength - 1)
		return EGAVResult::ErrInvalidParameter;

	std::vector<uint8_t> outputReport(mHIDCaps->OutputReportByteLength, 0);
	outputReport[0] = (uint8_t)inReportID;
	memcpy(&outputReport[1], &inMessage[0], inMessage.size());

	// If the top-level collection includes report IDs, the caller must set the first byte
	// of the ReportBuffer parameter to a non-zero report ID.
	BOOL success = HidD_SetOutputReport(mHIDHandle, &outputReport[0], (ULONG)outputReport.size());
	EGAVResult res = success ? EGAVResult::Ok : EGAVResult::ErrInvalidOperation;
	// 1167 - ERROR_DEVICE_NOT_CONNECTED
	// 87   - ERROR_INVALID_PARAMETER - if (buffer size != caps.OutputReportByteLength)
	if (FALSE == success) {
		// error_printf("#### HID: HidD_SetOutputReport() FAILED with %d", GetLastError());
	}
	return res;
}
deps/libdshowcapture/src/external/capture-device-support/Library/win/EGAVHIDImplementation.h000644 001751 001751 00000004760 15153330240 037664 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022 Corsair Memory, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ //============================================================================== /** @file EGAVHIDImplementation.h @brief Windows implementation of EGAVHIDInterface **/ //============================================================================== #pragma once #include #include "EGAVHID.h" struct _HIDP_CAPS; class EGAVHID : public EGAVHIDInterface { public: EGAVHID(); //----------------------------------------------------------------------------- // ## EGAVHIDInterface implementation //----------------------------------------------------------------------------- //! @param inIgnoreDevicePathCheck true for Cyclops because HID has a different DeviceID virtual EGAVResult InitHIDInterface(const EGAVDeviceID& inDeviceID) override; virtual EGAVResult DeinitHIDInterface() override; //! @brief Reads a HID response message from the OS. //! @param outMessage will contain the resulting message. Its length will be adjusted automatically. 
	virtual EGAVResult ReadHID(std::vector<uint8_t>& outMessage, int inReportID, int inReadBufferSize = 0) override;

	//! @brief Writes a HID report (with ID 0) containing inMessage to the OS.
	//! @param inMessage the report contents, not including the report ID.
	virtual EGAVResult WriteHID(const std::vector<uint8_t>& inMessage, int inReportID) override;

	HANDLE GetHIDHandle() { return mHIDHandle; }

private:
	HANDLE mHIDHandle = nullptr;
	std::unique_ptr<_HIDP_CAPS> mHIDCaps;
};
obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/LICENSE000644 001751 001751 00000002065 15153330240 032415 0ustar00runnerrunner000000 000000 MIT License Copyright (c) 2022 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/CMakeLists.txt000644 001751 001751 00000004232 15153330240 034146 0ustar00runnerrunner000000 000000 # MIT License # # Copyright (c) 2022 Corsair Memory, Inc.
# # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. # CMakeList.txt : CMake project for EGAVHIDSample, include source and define # project specific logic here. # cmake_minimum_required (VERSION 3.8) project ("EGAVHIDSample") set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_STANDARD_REQUIRED ON) set(FRAMEWORK_FOLDER "Library") if(WIN32) set(PLATFORM_FOLDER "win") set(PLATFORM_SOURCES SampleCode/DriverInterface.cpp ) elseif(APPLE) set(PLATFORM_FOLDER "mac") set(PLATFORM_SOURCES) endif() # Add source to this project's executable. 
add_executable (EGAVHIDSample ${PLATFORM_SOURCES} "${FRAMEWORK_FOLDER}/EGAVResult.cpp" "${FRAMEWORK_FOLDER}/ElgatoUVCDevice.cpp" "${FRAMEWORK_FOLDER}/${PLATFORM_FOLDER}/EGAVHIDImplementation.cpp" "SampleCode/DriverInterface.cpp" "SampleCode/main.cpp" ) target_include_directories(EGAVHIDSample PRIVATE ${FRAMEWORK_FOLDER}) target_compile_definitions(EGAVHIDSample PUBLIC EGAV_API) if(WIN32) target_compile_definitions(EGAVHIDSample PUBLIC _UP_WINDOWS=1) elseif(APPLE) target_compile_definitions(EGAVHIDSample PUBLIC _UP_MAC=1) endif()obs-studio-32.1.0-sources/deps/libdshowcapture/src/external/capture-device-support/SampleCode/000755 001751 001751 00000000000 15153330731 033426 5ustar00runnerrunner000000 000000 deps/libdshowcapture/src/external/capture-device-support/SampleCode/main.cpp000644 001751 001751 00000006146 15153330240 035001 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022-23 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/

#include
#include
#include
#include

#include "ElgatoUVCDevice.h"

//==============================================================================
// # Constants
//==============================================================================

const EGAVDeviceID& selectedDeviceID = deviceIDHD60SPlus; // Select your device here

//==============================================================================
// # main()
//==============================================================================

int main()
{
	std::cout << "========================================" << std::endl;
	std::cout << " Sample: HDR Tonemapping" << std::endl;
	std::cout << "========================================" << std::endl;
	std::cout << std::endl;

	std::shared_ptr<EGAVHIDInterface> hid = std::make_shared<EGAVHID>();
	EGAVResult res = hid->InitHIDInterface(selectedDeviceID);
	if (res.Failed()) {
		std::cout << "InitHIDInterface() failed. Do you have the correct device connected?" << std::endl << std::endl;
		std::this_thread::sleep_for(std::chrono::milliseconds(2000));
	} else {
		ElgatoUVCDevice device(hid, IsNewDeviceType(selectedDeviceID));

		HDMI_GENERIC_INFOFRAME frame{};
		memset(&frame, 0, sizeof(frame));
		res = device.GetHDMIHDRStatusPacket(frame);
		if (res.Succeeded()) {
			bool isHDR = false;
			res = device.IsVideoHDR(isHDR);
			std::cout << "Video is " << (isHDR ?
"HDR" : "SDR") << std::endl; if (res.Succeeded() && isHDR) { std::cout << "Disable HDR tonemapping" << std::endl; device.SetHDRTonemappingEnabled(false); #if 1 // TEST: TOGGLE TONEMAPPINING for (int i = 0; i < 2; i++) { std::this_thread::sleep_for(std::chrono::milliseconds(2000)); std::cout << "Enable HDR tonemapping" << std::endl; device.SetHDRTonemappingEnabled(true); std::this_thread::sleep_for(std::chrono::milliseconds(2000)); std::cout << "Disable HDR tonemapping" << std::endl; device.SetHDRTonemappingEnabled(false); } #endif } } hid->DeinitHIDInterface(); } return 0; } deps/libdshowcapture/src/external/capture-device-support/SampleCode/DriverInterface.h000644 001751 001751 00000004773 15153330240 036602 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022-23 Corsair Memory, Inc. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
*/
//==============================================================================
/**
	@file	DriverInterface.h
	@brief	EGAVDeviceProperties class declaration. Support for 4K60 Pro MK.2 and 4K60 S+
**/
//==============================================================================

#include
#include

#include "HDMIInfoFramesAPI.h"

//! @brief Device properties for Elgato's non-UVC devices
class EGAVDeviceProperties {
public:
	enum class DeviceType {
		None = 0,
		GC4K60ProMK2, //!< 4K60 Pro MK.2: PCI\VEN_12AB&DEV_0710&SUBSYS_000E1CFA
		GC4K60SPlus   //!< 4K60 S+: USB\VID_0FD9&PID_0068 or USB\VID_0FD9&PID_0075
	};

	//! @brief
	//! @param inKsPropertySet Interface for driver property set.
	//!        Can be queried from the DirectShow filter via IBaseFilter::QueryInterface()
	//! @param inDeviceType
	EGAVDeviceProperties(IKsPropertySet* inKsPropertySet, DeviceType inDeviceType);

	//! @brief 4K60 S+ only
	//! @param inHEVC 1 - HEVC, 0 - H.264
	HRESULT SetEncoderType(bool inHEVC);

	//! @brief 4K60 Pro MK.2 only
	//! @param inEnable 1 - enable tone mapping, 0 - disable HDR tonemapping
	HRESULT SetHDRTonemapping(bool inEnable);

	HRESULT IsVideoHDR(bool& outIsHDR);
	HRESULT GetHDMIHDRStatusPacket(uint8_t* outBuffer, int inBufferSize);

private:
	DeviceType mDeviceType = DeviceType::None;
	GUID mCustomPropertySetGUID = GUID_NULL;
	CComPtr<IKsPropertySet> mICustomPropertySet;
};
deps/libdshowcapture/src/external/capture-device-support/SampleCode/DriverInterface.cpp000644 001751 001751 00000012247 15153330240 037130 0ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/* MIT License Copyright (c) 2022 Corsair Memory, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ //============================================================================== /** @file DriverInterface.cpp @brief EGAVDeviceProperties class implementation Support for 4K60 Pro MK.2 and 4K60 S+ **/ //============================================================================== #include "HDMIInfoFramesAPI.h" #include "DriverInterface.h" #ifndef EGAV_OVERRIDE_DEBUG_MACROS inline void dummy() {} #define warning_printf(...) dummy() #ifndef HR_CHKRET_POINTER #define HR_CHKRET_POINTER(_ptr_) if (!(_ptr_)) return E_POINTER; #endif #endif //! Property IDs for IKsPropertySet enum class DriverProperty { XET_ENCODER_VIDEO_FORMAT = 400, //!< 4K60 S+ encoder format. uint32_t parameter: 0 - H.264 and 1 - HEVC GET_HDMI_HDR_PACKET_00_15 = 720, //!< HDMI HDR status packet - part 1 GET_HDMI_HDR_PACKET_16_31 = 721, //!< HDMI HDR status packet - part 2 XET_HDMI_HDR_TO_SDR = 722 //!< 4K60 Pro MK.2 Set HDR tonemapping. 
	                        //!< uint32_t parameter: 1 - on / 0 - off
};

static const int HDMI_PACKET_SIZE = 32;

EGAVDeviceProperties::EGAVDeviceProperties(IKsPropertySet* inKsPropertySet, DeviceType inDeviceType)
	: mICustomPropertySet(inKsPropertySet), mDeviceType(inDeviceType)
{
	switch (inDeviceType) {
	case EGAVDeviceProperties::DeviceType::GC4K60ProMK2:
		mCustomPropertySetGUID = { 0xD1E5209F, 0x68FD, 0x4529, 0xBE, 0xE0, 0x5E, 0x7A, 0x1F, 0x47, 0x92, 0x26 };
		break;
	case EGAVDeviceProperties::DeviceType::GC4K60SPlus:
		mCustomPropertySetGUID = { 0xD1E5209F, 0x68FD, 0x4529, 0xBE, 0xE0, 0x5E, 0x7A, 0x1F, 0x47, 0x92, 0x24 };
		break;
	default:
		break;
	}
}

HRESULT EGAVDeviceProperties::SetEncoderType(bool inHEVC)
{
	if (mDeviceType != DeviceType::GC4K60SPlus)
		return E_FAIL;
	HR_CHKRET_POINTER(mICustomPropertySet);

	uint32_t param = inHEVC ? 1 : 0;
	HRESULT hr = mICustomPropertySet->Set(mCustomPropertySetGUID, (DWORD)DriverProperty::XET_ENCODER_VIDEO_FORMAT,
					      nullptr, 0, &param, sizeof(param));
	return hr;
}

HRESULT EGAVDeviceProperties::SetHDRTonemapping(bool inEnable)
{
	if (mDeviceType != DeviceType::GC4K60ProMK2)
		return E_FAIL;
	HR_CHKRET_POINTER(mICustomPropertySet);

	uint32_t param = inEnable ?
		1 : 0;
	HRESULT hr = mICustomPropertySet->Set(mCustomPropertySetGUID, (DWORD)DriverProperty::XET_HDMI_HDR_TO_SDR,
					      nullptr, 0, &param, sizeof(param));
	return hr;
}

HRESULT EGAVDeviceProperties::GetHDMIHDRStatusPacket(uint8_t *outBuffer, int inBufferSize)
{
	HR_CHKRET_POINTER(outBuffer);
	if (inBufferSize < HDMI_PACKET_SIZE)
		return E_INVALIDARG;
	HR_CHKRET_POINTER(mICustomPropertySet);

	DWORD dwRet = 0;
	HRESULT hr = mICustomPropertySet->Get(mCustomPropertySetGUID, (DWORD)DriverProperty::GET_HDMI_HDR_PACKET_00_15,
					      nullptr, 0, &outBuffer[0], 16, &dwRet);
	if (SUCCEEDED(hr))
		hr = mICustomPropertySet->Get(mCustomPropertySetGUID, (DWORD)DriverProperty::GET_HDMI_HDR_PACKET_16_31,
					      nullptr, 0, &outBuffer[16], 16, &dwRet);
	return hr;
}

HRESULT EGAVDeviceProperties::IsVideoHDR(bool& outIsHDR)
{
	outIsHDR = false;

	// Try to read HDR metadata
	static const uint8_t emptyBuffer[HDMI_PACKET_SIZE] = { 0 };
	uint8_t buffer[HDMI_PACKET_SIZE] = { 0 };
	HRESULT hr = GetHDMIHDRStatusPacket(buffer, sizeof(buffer));
	if (SUCCEEDED(hr)) {
		HDMI_GENERIC_INFOFRAME* frame = (HDMI_GENERIC_INFOFRAME*)(&buffer[0]);
		hr = (true == HDMI_IsInfoFrameValid(frame)) ?
			S_OK : E_FAIL;
		if (SUCCEEDED(hr)) {
			// Check type in header and EOTF flag in payload
			if (HDMI_INFOFRAME_TYPE_DR == frame->header.bfType && HDMI_DR_EOTF_SDRGAMMA != frame->plDR1.bfEOTF) {
				outIsHDR = true;
			} else if (HDMI_INFOFRAME_TYPE_DR == frame->header.bfType && HDMI_DR_EOTF_SDRGAMMA == frame->plDR1.bfEOTF) {
				outIsHDR = false;
			} else if (0 /*HDMI_INFOFRAME_TYPE_RESERVED */ == frame->header.bfType && (0 == memcmp(buffer, emptyBuffer, sizeof(buffer)))) {
				outIsHDR = false;
			} else if (HDMI_INFOFRAME_TYPE_DR != frame->header.bfType) {
				warning_printf("HDMI Metadata: Wrong header type: %d", frame->header.bfType);
				hr = E_FAIL;
			}
		} else
			warning_printf("HDMI Metadata: HDMI_IsInfoFrameValid() returned error (checksum)!");
	} else
		warning_printf("HDMI Metadata: GetHDMIHDRStatusPacket() failed!");
	return hr;
}
obs-studio-32.1.0-sources/deps/libdshowcapture/src/CMakeLists.txt000644 001751 001751 00000010020 15153330240 025702 0ustar00runnerrunner000000 000000 cmake_minimum_required(VERSION 2.8.12)

project(libdshowcapture)

set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_CURRENT_SOURCE_DIR}/cmake/Modules/")

option(BUILD_SHARED_LIBS "Build shared library" ON)

find_package(CXX11 REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CXX11_FLAGS}")

if(${CMAKE_C_COMPILER_ID} MATCHES "Clang" OR ${CMAKE_CXX_COMPILER_ID} MATCHES "Clang")
	set(CMAKE_COMPILER_IS_CLANG TRUE)
endif()

if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_GNUCXX OR CMAKE_COMPILER_IS_CLANG)
	set(CMAKE_CXX_FLAGS "-Wall -Wextra -Wno-unused-function -Werror-implicit-function-declaration -Wno-missing-field-initializers ${CMAKE_CXX_FLAGS} -fno-strict-aliasing")
	set(CMAKE_C_FLAGS "-Wall -Wextra -Wno-unused-function -Werror-implicit-function-declaration -Wno-missing-braces -Wno-missing-field-initializers ${CMAKE_C_FLAGS} -std=gnu99 -fno-strict-aliasing")

	option(USE_LIBC++ "Use libc++ instead of libstdc++" ${APPLE})
	if(USE_LIBC++)
		set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++")
	endif()
elseif(MSVC)
if(CMAKE_CXX_FLAGS MATCHES "/W[0-4]") string(REGEX REPLACE "/W[0-4]" "/W4" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}") else() set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /W4") endif() # Disable pointless constant condition warnings set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4127 /wd4201") endif() if(WIN32) add_definitions(-DUNICODE -D_UNICODE) if(BUILD_SHARED_LIBS) add_definitions(-DDSHOWCAPTURE_EXPORTS) endif() endif() if(MSVC) set(CMAKE_C_FLAGS_DEBUG "/DDEBUG=1 /D_DEBUG=1 ${CMAKE_C_FLAGS_DEBUG}") set(CMAKE_CXX_FLAGS_DEBUG "/DDEBUG=1 /D_DEBUG=1 ${CMAKE_C_FLAGS_DEBUG}") if(NOT CMAKE_SIZEOF_VOID_P EQUAL 8) set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /SAFESEH:NO") set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /SAFESEH:NO") set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /SAFESEH:NO") endif() else() if(MINGW) set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -D_WIN32_WINNT=0x0600 -DWINVER=0x0600") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_WIN32_WINNT=0x0600 -DWINVER=0x0600") endif() set(CMAKE_C_FLAGS_DEBUG "-DDEBUG=1 -D_DEBUG=1 ${CMAKE_C_FLAGS_DEBUG}") set(CMAKE_CXX_FLAGS_DEBUG "-DDEBUG=1 -D_DEBUG=1 ${CMAKE_CXX_FLAGS_DEBUG}") endif() if(MINGW) include(CheckSymbolExists) check_symbol_exists(MINGW_HAS_SECURE_API "_mingw.h" HAVE_MINGW_HAS_SECURE_API) if(NOT HAVE_MINGW_HAS_SECURE_API) message(FATAL_ERROR "mingw must be compiled with --enable-secure-api") endif() endif() set(libdshowcapture_SOURCES external/capture-device-support/Library/EGAVResult.cpp external/capture-device-support/Library/ElgatoUVCDevice.cpp external/capture-device-support/Library/win/EGAVHIDImplementation.cpp external/capture-device-support/SampleCode/DriverInterface.cpp source/capture-filter.cpp source/output-filter.cpp source/dshowcapture.cpp source/dshowencode.cpp source/device.cpp source/device-vendor.cpp source/encoder.cpp source/dshow-base.cpp source/dshow-demux.cpp source/dshow-enum.cpp source/dshow-formats.cpp source/dshow-media-type.cpp source/dshow-encoded-device.cpp 
source/log.cpp) set(libdshowcapture_HEADERS dshowcapture.hpp source/external/IVideoCaptureFilter.h source/capture-filter.hpp source/output-filter.hpp source/device.hpp source/encoder.hpp source/dshow-base.hpp source/dshow-demux.hpp source/dshow-device-defs.hpp source/dshow-enum.hpp source/dshow-formats.hpp source/dshow-media-type.hpp source/log.hpp) add_library(libdshowcapture ${libdshowcapture_SOURCES} ${libdshowcapture_HEADERS}) target_include_directories( libdshowcapture PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/external/capture-device-support/Library) target_compile_definitions(libdshowcapture PRIVATE _UP_WINDOWS=1) target_link_libraries(libdshowcapture PRIVATE setupapi strmiids ksuser winmm wmcodecdspuuid) obs-studio-32.1.0-sources/deps/libdshowcapture/src/dshowcapture.hpp000644 001751 001751 00000014167 15153330240 026403 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#pragma once

#include <string>
#include <vector>
#include <functional>

#ifdef DSHOWCAPTURE_EXPORTS
#define DSHOWCAPTURE_EXPORT __declspec(dllexport)
#else
#define DSHOWCAPTURE_EXPORT
#endif

#define DSHOWCAPTURE_VERSION_MAJOR 0
#define DSHOWCAPTURE_VERSION_MINOR 10
#define DSHOWCAPTURE_VERSION_PATCH 0

#define MAKE_DSHOWCAPTURE_VERSION(major, minor, patch) \
	((major << 24) | (minor << 16) | (patch))

#define DSHOWCAPTURE_VERSION                                  \
	MAKE_DSHOWCAPTURE_VERSION(DSHOWCAPTURE_VERSION_MAJOR, \
				  DSHOWCAPTURE_VERSION_MINOR, \
				  DSHOWCAPTURE_VERSION_PATCH)

#define DSHOW_MAX_PLANES 8

namespace DShow {

/* internal forward */
struct HDevice;
struct HVideoEncoder;

struct VideoConfig;
struct AudioConfig;

typedef std::function VideoProc;
typedef std::function AudioProc;
typedef std::function ReactivateProc;

enum class InitGraph {
	False,
	True,
};

/** DirectShow configuration dialog type */
enum class DialogType {
	ConfigVideo,
	ConfigAudio,
	ConfigCrossbar,
	ConfigCrossbar2,
};

enum class VideoFormat {
	Any,
	Unknown,

	/* raw formats */
	ARGB = 100,
	XRGB,
	RGB24,

	/* planar YUV formats */
	I420 = 200,
	NV12,
	YV12,
	Y800,
	P010,

	/* packed YUV formats */
	YVYU = 300,
	YUY2,
	UYVY,
	HDYC,

	/* encoded formats */
	MJPEG = 400,
	H264,
	HEVC,
};

enum class AudioFormat {
	Any,
	Unknown,

	/* raw formats */
	Wave16bit = 100,
	WaveFloat,

	/* encoded formats */
	AAC = 200,
	AC3,
	MPGA, /* MPEG 1 */
};

enum class AudioMode {
	Capture,
	DirectSound,
	WaveOut,
};

enum class Result {
	Success,
	InUse,
	Error,
};

struct VideoInfo {
	int minCX, minCY;
	int maxCX, maxCY;
	int granularityCX, granularityCY;
	long long minInterval, maxInterval;
	VideoFormat format;
};

struct AudioInfo {
	int minChannels, maxChannels;
	int channelsGranularity;
	int minSampleRate, maxSampleRate;
	int sampleRateGranularity;
	AudioFormat format;
};

struct DeviceId {
	std::wstring name;
	std::wstring path;
};

struct VideoDevice : DeviceId {
	bool audioAttached = false;
	bool separateAudioFilter = false;
	std::vector<VideoInfo> caps;
};

struct AudioDevice : DeviceId {
	std::vector<AudioInfo> caps;
};

struct Config : DeviceId {
	/** Use the device's desired default config */
	bool useDefaultConfig = true;
};

struct VideoConfig : Config {
	VideoProc callback;
	ReactivateProc reactivateCallback;

	/** Desired width/height of video. */
	int cx = 0, cy_abs = 0;

	/** Whether or not cy was negative. */
	bool cy_flip = false;

	/** Desired frame interval (in 100-nanosecond units) */
	long long frameInterval = 0;

	/** Internal video format. */
	VideoFormat internalFormat = VideoFormat::Any;

	/** Desired video format. */
	VideoFormat format = VideoFormat::Any;
};

struct AudioConfig : Config {
	AudioProc callback;

	/**
	 * Use the audio attached to the video device
	 *
	 * (name/path member variables will be ignored)
	 */
	bool useVideoDevice = false;

	/** Use separate filter for audio */
	bool useSeparateAudioFilter = false;

	/** Desired sample rate */
	int sampleRate = 0;

	/** Desired channels */
	int channels = 0;

	/** Desired audio format */
	AudioFormat format = AudioFormat::Any;

	/** Audio playback mode */
	AudioMode mode = AudioMode::Capture;

	/** Desired buffer */
	int buffer = 0;
};

class DSHOWCAPTURE_EXPORT Device {
	HDevice *context;

public:
	Device(InitGraph initialize = InitGraph::False);
	~Device();

	bool Valid() const;

	bool ResetGraph();
	void ShutdownGraph();

	bool SetVideoConfig(VideoConfig *config);
	bool SetAudioConfig(AudioConfig *config);

	/**
	 * Connects all the configured filters together.
	 *
	 * Call SetVideoConfig and/or SetAudioConfig before using.
	 */
	bool ConnectFilters();

	Result Start();
	void Stop();

	bool GetVideoConfig(VideoConfig &config) const;
	bool GetAudioConfig(AudioConfig &config) const;
	bool GetVideoDeviceId(DeviceId &id) const;
	bool GetAudioDeviceId(DeviceId &id) const;

	/**
	 * Opens a DirectShow dialog associated with this device
	 *
	 * @param type The dialog type
	 */
	void OpenDialog(void *hwnd, DialogType type) const;

	static bool EnumVideoDevices(std::vector<VideoDevice> &devices);
	static bool EnumAudioDevices(std::vector<AudioDevice> &devices);
};

struct VideoEncoderConfig : DeviceId {
	int fpsNumerator;
	int fpsDenominator;
	int bitrate;
	int keyframeInterval;
	int cx;
	int cy;
};

struct EncoderPacket {
	unsigned char *data;
	size_t size;
	long long pts;
	long long dts;
};

class VideoEncoder {
	HVideoEncoder *context;

public:
	VideoEncoder();
	~VideoEncoder();

	bool Valid() const;
	bool Active() const;

	bool ResetGraph();

	bool SetConfig(VideoEncoderConfig &config);
	bool GetConfig(VideoEncoderConfig &config) const;

	bool Encode(unsigned char *data[DSHOW_MAX_PLANES],
		    size_t linesize[DSHOW_MAX_PLANES], long long timestampStart,
		    long long timestampEnd, EncoderPacket &packet,
		    bool &new_packet);

	static bool EnumEncoders(std::vector &encoders);
};

enum class LogType {
	Error,
	Warning,
	Info,
	Debug,
};

typedef void (*LogCallback)(LogType type, const wchar_t *msg, void *param);

DSHOWCAPTURE_EXPORT void SetLogCallback(LogCallback callback, void *param);

};
obs-studio-32.1.0-sources/deps/libdshowcapture/src/vs/000755 001751 001751 00000000000 15153330731 023606 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/vs/2013/000755 001751 001751 00000000000 15153330731 024173 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/vs/2013/dshowcapture.sln000644 001751 001751 00000002351 15153330240 027415 0ustar00runnerrunner000000 000000  Microsoft Visual Studio Solution File, Format Version 11.00 # Visual Studio 2010 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "dshowcapture",
"dshowcapture\dshowcapture.vcxproj", "{FFF52519-38BB-4155-851D-4209EC67ACA7}" EndProject Global GlobalSection(SolutionConfigurationPlatforms) = preSolution Debug|Win32 = Debug|Win32 Debug|x64 = Debug|x64 Release|Win32 = Release|Win32 Release|x64 = Release|x64 EndGlobalSection GlobalSection(ProjectConfigurationPlatforms) = postSolution {FFF52519-38BB-4155-851D-4209EC67ACA7}.Debug|Win32.ActiveCfg = Debug|Win32 {FFF52519-38BB-4155-851D-4209EC67ACA7}.Debug|Win32.Build.0 = Debug|Win32 {FFF52519-38BB-4155-851D-4209EC67ACA7}.Debug|x64.ActiveCfg = Debug|x64 {FFF52519-38BB-4155-851D-4209EC67ACA7}.Debug|x64.Build.0 = Debug|x64 {FFF52519-38BB-4155-851D-4209EC67ACA7}.Release|Win32.ActiveCfg = Release|Win32 {FFF52519-38BB-4155-851D-4209EC67ACA7}.Release|Win32.Build.0 = Release|Win32 {FFF52519-38BB-4155-851D-4209EC67ACA7}.Release|x64.ActiveCfg = Release|x64 {FFF52519-38BB-4155-851D-4209EC67ACA7}.Release|x64.Build.0 = Release|x64 EndGlobalSection GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE EndGlobalSection EndGlobal obs-studio-32.1.0-sources/deps/libdshowcapture/src/vs/2013/dshowcapture/000755 001751 001751 00000000000 15153330731 026703 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/vs/2013/dshowcapture/dshowcapture.vcxproj000644 001751 001751 00000022107 15153330240 033025 0ustar00runnerrunner000000 000000  Debug Win32 Debug x64 Release Win32 Release x64 {FFF52519-38BB-4155-851D-4209EC67ACA7} Win32Proj dshowcapture DynamicLibrary true Unicode v120 DynamicLibrary true Unicode v120 DynamicLibrary false true Unicode v120 DynamicLibrary false true Unicode v120 true true false false Level4 Disabled WIN32;_DEBUG;_WINDOWS;_USRDLL;DSHOWCAPTURE_EXPORTS;%(PreprocessorDefinitions) Windows true strmiids.lib;%(AdditionalDependencies) Level4 Disabled WIN32;_DEBUG;_WINDOWS;_USRDLL;DSHOWCAPTURE_EXPORTS;%(PreprocessorDefinitions) Windows true strmiids.lib;%(AdditionalDependencies) Level4 MaxSpeed true true 
WIN32;NDEBUG;_WINDOWS;_USRDLL;DSHOWCAPTURE_EXPORTS;%(PreprocessorDefinitions) Windows true true true strmiids.lib;%(AdditionalDependencies) Level4 MaxSpeed true true WIN32;NDEBUG;_WINDOWS;_USRDLL;DSHOWCAPTURE_EXPORTS;%(PreprocessorDefinitions) Windows true true true strmiids.lib;%(AdditionalDependencies) obs-studio-32.1.0-sources/deps/libdshowcapture/src/vs/2013/dshowcapture/dshowcapture.vcxproj.filters000644 001751 001751 00000007600 15153330240 034475 0ustar00runnerrunner000000 000000  {4FC737F1-C7A5-4376-A066-2A32D752A2FF} cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx {93995380-89BD-4b04-88EB-625FBE52EBFB} h;hpp;hxx;hm;inl;inc;xsd {67DA6AB6-F800-4c08-8B7A-83BB121AAD01} rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe;resx;tiff;tif;png;wav;mfcribbon-ms Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Source Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files Header Files obs-studio-32.1.0-sources/deps/libdshowcapture/src/cmake/000755 001751 001751 00000000000 15153330731 024236 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/cmake/Modules/000755 001751 001751 00000000000 15153330731 025646 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/cmake/Modules/FindCXX11.cmake000644 001751 001751 00000003260 15153330240 030251 0ustar00runnerrunner000000 000000 # - Finds if the compiler has C++11 support # This module can be used to detect compiler flags for using C++11, and checks # a small subset of the language. 
#
# The following variables are set:
#   CXX11_FLAGS - flags to add to the CXX compiler for C++11 support
#   CXX11_FOUND - true if the compiler supports C++11
#

if(CXX11_FLAGS)
	set(CXX11_FOUND TRUE)
	return()
endif()

include(CheckCXXSourceCompiles)

if(MSVC)
	set(CXX11_FLAG_CANDIDATES " ")
else()
	set(CXX11_FLAG_CANDIDATES
		#gcc
		"-std=gnu++11" "-std=gnu++0x"
		#Gnu and Intel Linux
		"-std=c++11" "-std=c++0x"
		#Microsoft Visual Studio, and everything that automatically accepts C++11
		" "
		#Intel windows
		"/Qstd=c++11" "/Qstd=c++0x"
	)
endif()

set(CXX11_TEST_SOURCE
"
int main()
{
	int n[] = {4,7,6,1,2};
	int r;
	auto f = [&](int j) { r = j; };
	for (auto i : n)
		f(i);
	return 0;
}
")

foreach(FLAG ${CXX11_FLAG_CANDIDATES})
	set(SAFE_CMAKE_REQUIRED_FLAGS "${CMAKE_REQUIRED_FLAGS}")
	set(CMAKE_REQUIRED_FLAGS "${FLAG}")
	unset(CXX11_FLAG_DETECTED CACHE)
	message(STATUS "Try C++11 flag = [${FLAG}]")
	check_cxx_source_compiles("${CXX11_TEST_SOURCE}" CXX11_FLAG_DETECTED)
	set(CMAKE_REQUIRED_FLAGS "${SAFE_CMAKE_REQUIRED_FLAGS}")
	if(CXX11_FLAG_DETECTED)
		set(CXX11_FLAGS_INTERNAL "${FLAG}")
		break()
	endif(CXX11_FLAG_DETECTED)
endforeach(FLAG ${CXX11_FLAG_CANDIDATES})

set(CXX11_FLAGS "${CXX11_FLAGS_INTERNAL}" CACHE STRING "C++11 Flags")

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(CXX11 DEFAULT_MSG CXX11_FLAGS)
mark_as_advanced(CXX11_FLAGS)
obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/000755 001751 001751 00000000000 15153330731 024456 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-encoded-device.cpp000644 001751 001751 00000015656 15153330240 031152 0ustar00runnerrunner000000 000000 /*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#include "dshow-base.hpp"
#include "dshow-media-type.hpp"
#include "dshow-formats.hpp"
#include "dshow-demux.hpp"
#include "capture-filter.hpp"
#include "device.hpp"
#include "log.hpp"

namespace DShow {

static inline bool CreateFilters(IBaseFilter *filter, IBaseFilter **crossbar,
				 IBaseFilter **encoder, IBaseFilter **demuxer)
{
	ComPtr<IPin> inputPin;
	ComPtr<IPin> outputPin;
	REGPINMEDIUM inMedium;
	REGPINMEDIUM outMedium;
	bool hasOutMedium;
	HRESULT hr;

	if (!GetPinByName(filter, PINDIR_INPUT, nullptr, &inputPin)) {
		Warning(L"Encoded Device: Failed to get input pin");
		return false;
	}

	if (!GetPinByName(filter, PINDIR_OUTPUT, nullptr, &outputPin)) {
		Warning(L"Encoded Device: Failed to get output pin");
		return false;
	}

	if (!GetPinMedium(inputPin, inMedium)) {
		Warning(L"Encoded Device: Failed to get input pin medium");
		return false;
	}

	hasOutMedium = GetPinMedium(outputPin, outMedium);

	if (!GetFilterByMedium(AM_KSCATEGORY_CROSSBAR, inMedium, crossbar)) {
		Warning(L"Encoded Device: Failed to get crossbar filter");
		return false;
	}

	/* perfectly okay if there's no encoder filter, some don't have them */
	if (hasOutMedium)
		GetFilterByMedium(KSCATEGORY_ENCODER, outMedium, encoder);

	hr = CoCreateInstance(CLSID_MPEG2Demultiplexer, nullptr,
			      CLSCTX_INPROC_SERVER, IID_IBaseFilter,
			      (void **)demuxer);
	if (FAILED(hr)) {
		WarningHR(L"Encoded Device: Failed to create demuxer", hr);
		return false;
	}

	return true;
}

static inline bool ConnectEncodedFilters(IGraphBuilder *graph,
					 IBaseFilter *filter,
					 IBaseFilter *crossbar,
					 IBaseFilter *encoder,
					 IBaseFilter *demuxer)
{
	if (!DirectConnectFilters(graph, crossbar, filter)) {
		Warning(L"Encoded Device: Failed to connect crossbar to "
			L"device");
		return false;
	}

	if (!!encoder) {
		if (!DirectConnectFilters(graph, filter, encoder)) {
			Warning(L"Encoded Device: Failed to connect device to "
				L"encoder");
			return false;
		}

		if (!DirectConnectFilters(graph, encoder, demuxer)) {
			Warning(L"Encoded Device: Failed to connect encoder to "
				L"demuxer");
			return false;
		}
	} else {
		if (!DirectConnectFilters(graph, filter, demuxer)) {
			Warning(L"Encoded Device: Failed to connect device to "
				L"demuxer");
			return false;
		}
	}

	return true;
}

static inline bool MapPacketIDs(IBaseFilter *demuxer, ULONG video, ULONG audio)
{
	ComPtr<IPin> videoPin, audioPin;
	HRESULT hr;

	if (!GetPinByName(demuxer, PINDIR_OUTPUT, DEMUX_VIDEO_PIN, &videoPin)) {
		Warning(L"Encoded Device: Could not get video pin from "
			L"demuxer");
		return false;
	}

	if (!GetPinByName(demuxer, PINDIR_OUTPUT, DEMUX_AUDIO_PIN, &audioPin)) {
		Warning(L"Encoded Device: Could not get audio pin from "
			L"demuxer");
		return false;
	}

	hr = MapPinToPacketID(videoPin, video);
	if (FAILED(hr)) {
		WarningHR(L"Encoded Device: Failed to map demuxer video pin "
			  L"packet ID",
			  hr);
		return false;
	}

	hr = MapPinToPacketID(audioPin, audio);
	if (FAILED(hr)) {
		WarningHR(L"Encoded Device: Failed to map demuxer audio pin "
			  L"packet ID",
			  hr);
		return false;
	}

	return true;
}

/*
 * rocket-specific workaround code. I have no idea what any of these numbers
 * are except the GUID which was obvious. All I know is calling
 * IKsPropertySet::Set turns on/off some sort of 'mode' on the device.
 * I discovered this merely by chance while monitoring API usage in other
 * programs because I could not figure out how the hell to get this thing
 * to turn on.
 */
static const GUID RocketEncoderGUID = {
	0x99100000,
	0xa330,
	0x11e1,
	{0xa3, 0x80, 0x99, 0x10, 0x68, 0x64, 0x00, 0x00}};

struct RocketPropStruct {
	DWORD dwSize;
	DWORD unknown1;
	DWORD unknown2;
	DWORD unknown3;
	DWORD code;
	DWORD unknown4;
	BOOL enabled;
};

struct RocketInstStruct {
	DWORD code;
	DWORD unknown1;
};

bool SetRocketEnabled(IBaseFilter *encoder, bool enable)
{
	static const ULONG rocketEnableId = 0x9910E001;
	static const DWORD rocketEnableCode = 0x38384001;

	RocketInstStruct rocketInstance = {};
	RocketPropStruct rocketProperty = {};

	ComQIPtr<IKsPropertySet> propertySet(encoder);
	if (!propertySet)
		return false;

	rocketProperty.dwSize = sizeof(rocketProperty);
	rocketProperty.code = rocketEnableCode;
	rocketProperty.enabled = enable;
	rocketInstance.code = rocketEnableCode;

	HRESULT hr = propertySet->Set(RocketEncoderGUID, rocketEnableId,
				      &rocketInstance, sizeof(rocketInstance),
				      &rocketProperty, sizeof(rocketProperty));
	return SUCCEEDED(hr);
}

bool HDevice::SetupEncodedVideoCapture(IBaseFilter *filter,
				       VideoConfig &config,
				       const EncodedDevice &info)
{
	ComPtr<IBaseFilter> crossbar;
	ComPtr<IBaseFilter> encoder;
	ComPtr<IBaseFilter> demuxer;
	MediaType mtVideo;
	MediaType mtAudio;

	if (!CreateFilters(filter, &crossbar, &encoder, &demuxer))
		return false;

	if (!CreateDemuxVideoPin(demuxer, mtVideo, info.width, info.height,
				 info.frameInterval, info.videoFormat))
		return false;

	if (!CreateDemuxAudioPin(demuxer, mtAudio, info.samplesPerSec, 16, 2,
				 info.audioFormat))
		return false;

	config.cx = info.width;
	config.cy_abs = labs(info.height);
	config.cy_flip = info.height < 0;
	config.frameInterval = info.frameInterval;
	config.format = info.videoFormat;
	config.internalFormat = info.videoFormat;

	PinCaptureInfo pci;
	pci.callback = [this](IMediaSample *s) { Receive(true, s); };
	pci.expectedMajorType = mtVideo->majortype;
	pci.expectedSubType = mtVideo->subtype;

	videoCapture = new CaptureFilter(pci);
	videoFilter = demuxer;

	if (!!encoder && config.name.find(L"IT9910") != std::string::npos) {
		rocketEncoder = encoder;
		if
(!SetRocketEnabled(rocketEncoder, true)) return false; } graph->AddFilter(crossbar, L"Crossbar"); graph->AddFilter(filter, L"Device"); graph->AddFilter(demuxer, L"Demuxer"); graph->AddFilter(videoCapture, L"Capture Filter"); if (!!encoder) graph->AddFilter(encoder, L"Encoder"); bool success = ConnectEncodedFilters(graph, filter, crossbar, encoder, demuxer); if (success) success = MapPacketIDs(demuxer, info.videoPacketID, info.audioPacketID); encodedDevice = success; return success; } }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/CoTaskMemPtr.hpp000644 001751 001751 00000002426 15153330240 027477 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
 */

#pragma once

template<typename T> class CoTaskMemPtr {
	T *ptr;

	inline void Clear()
	{
		if (ptr)
			CoTaskMemFree(ptr);
	}

public:
	inline CoTaskMemPtr() : ptr(NULL) {}
	inline CoTaskMemPtr(T *ptr_) : ptr(ptr_) {}
	inline ~CoTaskMemPtr() { Clear(); }

	inline operator T *() const { return ptr; }
	inline T *operator->() const { return ptr; }

	inline CoTaskMemPtr &operator=(T *val)
	{
		Clear();
		ptr = val;
		return *this;
	}

	inline T **operator&()
	{
		Clear();
		ptr = NULL;
		return &ptr;
	}
};
obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshowcapture.cpp000644 001751 001751 00000016447 15153330240 027701 0ustar00runnerrunner000000 000000 /*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "../dshowcapture.hpp" #include "dshow-base.hpp" #include "dshow-enum.hpp" #include "device.hpp" #include "dshow-device-defs.hpp" #include "log.hpp" #include namespace DShow { Device::Device(InitGraph initialize) : context(new HDevice) { if (initialize == InitGraph::True) context->CreateGraph(); } Device::~Device() { delete context; } bool Device::Valid() const { return context->initialized; } bool Device::ResetGraph() { /* cheap and easy way to clear all the filters */ delete context; context = new HDevice; return context->CreateGraph(); } void Device::ShutdownGraph() { delete context; context = new HDevice; } bool Device::SetVideoConfig(VideoConfig *config) { return context->SetVideoConfig(config); } bool Device::SetAudioConfig(AudioConfig *config) { return context->SetAudioConfig(config); } bool Device::ConnectFilters() { return context->ConnectFilters(); } Result Device::Start() { return context->Start(); } void Device::Stop() { context->Stop(); } bool Device::GetVideoConfig(VideoConfig &config) const { if (context->videoCapture == NULL) return false; config = context->videoConfig; return true; } bool Device::GetAudioConfig(AudioConfig &config) const { if (context->audioCapture == NULL) return false; config = context->audioConfig; return true; } bool Device::GetVideoDeviceId(DeviceId &id) const { if (context->videoCapture == NULL) return false; id = context->videoConfig; return true; } bool Device::GetAudioDeviceId(DeviceId &id) const { if (context->audioCapture == NULL) return false; id = context->audioConfig; return true; } static void OpenPropertyPages(HWND hwnd, IUnknown *propertyObject) { if (!propertyObject) return; ComQIPtr pages(propertyObject); CAUUID cauuid; if (pages != NULL) { if (SUCCEEDED(pages->GetPages(&cauuid)) && 
cauuid.cElems) { OleCreatePropertyFrame(hwnd, 0, 0, NULL, 1, (LPUNKNOWN *)&propertyObject, cauuid.cElems, cauuid.pElems, 0, 0, NULL); CoTaskMemFree(cauuid.pElems); } } } void Device::OpenDialog(void *hwnd, DialogType type) const { ComPtr ptr; HRESULT hr; if (type == DialogType::ConfigVideo) { ptr = context->videoFilter; } else if (type == DialogType::ConfigCrossbar || type == DialogType::ConfigCrossbar2) { hr = context->builder->FindInterface(NULL, NULL, context->videoFilter, IID_IAMCrossbar, (void **)&ptr); if (FAILED(hr)) { WarningHR(L"Failed to find crossbar", hr); return; } if (ptr != NULL && type == DialogType::ConfigCrossbar2) { ComQIPtr xbar(ptr); ComQIPtr filter(xbar); hr = context->builder->FindInterface( &LOOK_UPSTREAM_ONLY, NULL, filter, IID_IAMCrossbar, (void **)&ptr); if (FAILED(hr)) { WarningHR(L"Failed to find crossbar2", hr); return; } } } else if (type == DialogType::ConfigAudio) { ptr = context->audioFilter; } if (!ptr) { Warning(L"Could not find filter to open dialog type: %d with", (int)type); return; } OpenPropertyPages((HWND)hwnd, ptr); } static void EnumEncodedVideo(std::vector &devices, const wchar_t *deviceName, const wchar_t *devicePath, const EncodedDevice &info) { VideoDevice device; VideoInfo caps; device.name = deviceName; device.path = devicePath; device.audioAttached = true; device.separateAudioFilter = false; caps.minCX = caps.maxCX = info.width; caps.minCY = caps.maxCY = info.height; caps.granularityCX = caps.granularityCY = 1; caps.minInterval = caps.maxInterval = info.frameInterval; caps.format = info.videoFormat; device.caps.push_back(caps); devices.push_back(device); } static void EnumExceptionVideoDevice(std::vector &devices, IBaseFilter *filter, const wchar_t *deviceName, const wchar_t *devicePath) { ComPtr pin; if (GetPinByName(filter, PINDIR_OUTPUT, L"656", &pin)) EnumEncodedVideo(devices, deviceName, devicePath, HD_PVR2); else if (GetPinByName(filter, PINDIR_OUTPUT, L"TS Out", &pin)) EnumEncodedVideo(devices, deviceName, 
devicePath, Roxio); } static bool EnumVideoDevice(std::vector &devices, IBaseFilter *filter, const wchar_t *deviceName, const wchar_t *devicePath) { ComPtr pin; ComPtr audioPin; ComPtr audioFilter; VideoDevice info; if (wcsstr(deviceName, L"C875") != nullptr || wcsstr(deviceName, L"Prif Streambox") != nullptr || wcsstr(deviceName, L"C835") != nullptr) { EnumEncodedVideo(devices, deviceName, devicePath, AV_LGP); return true; } else if (wcsstr(deviceName, L"Hauppauge HD PVR Capture") != nullptr) { EnumEncodedVideo(devices, deviceName, devicePath, HD_PVR1); return true; } bool success = GetFilterPin(filter, MEDIATYPE_Video, PIN_CATEGORY_CAPTURE, PINDIR_OUTPUT, &pin); /* if this device has no standard capture pin, see if it's an * encoded device, and get its information if so (all encoded devices * are exception devices pretty much) */ if (!success) { EnumExceptionVideoDevice(devices, filter, deviceName, devicePath); return true; } if (!EnumVideoCaps(pin, info.caps)) return true; info.audioAttached = GetFilterPin(filter, MEDIATYPE_Audio, PIN_CATEGORY_CAPTURE, PINDIR_OUTPUT, &audioPin); // Fallback: Find a corresponding audio filter for the same device if (!info.audioAttached) { info.separateAudioFilter = GetDeviceAudioFilter(devicePath, &audioFilter); info.audioAttached = info.separateAudioFilter; } info.name = deviceName; if (devicePath) info.path = devicePath; devices.push_back(info); return true; } bool Device::EnumVideoDevices(std::vector &devices) { devices.clear(); return EnumDevices(CLSID_VideoInputDeviceCategory, EnumDeviceCallback(EnumVideoDevice), &devices); } static bool EnumAudioDevice(vector &devices, IBaseFilter *filter, const wchar_t *deviceName, const wchar_t *devicePath) { ComPtr pin; AudioDevice info; bool success = GetFilterPin(filter, MEDIATYPE_Audio, PIN_CATEGORY_CAPTURE, PINDIR_OUTPUT, &pin); if (!success) return true; if (!EnumAudioCaps(pin, info.caps)) return true; info.name = deviceName; if (devicePath) info.path = devicePath; 
devices.push_back(info); return true; } bool Device::EnumAudioDevices(vector &devices) { devices.clear(); return EnumDevices(CLSID_AudioInputDeviceCategory, EnumDeviceCallback(EnumAudioDevice), &devices); } }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-base.cpp000644 001751 001751 00000057552 15153330240 027227 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2014 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "dshow-base.hpp" #include "dshow-enum.hpp" #include "log.hpp" #include #include #include #include // for DRV_QUERYDEVICEINTERFACE #include // for SetupDixxx #include // for CM_xxx #include // for std::transform using namespace std; namespace DShow { bool CreateFilterGraph(IGraphBuilder **pgraph, ICaptureGraphBuilder2 **pbuilder, IMediaControl **pcontrol) { ComPtr graph; ComPtr builder; ComPtr control; HRESULT hr; hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_IFilterGraph, (void **)&graph); if (FAILED(hr)) { ErrorHR(L"Failed to create IGraphBuilder", hr); return false; } hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2, (void **)&builder); if (FAILED(hr)) { ErrorHR(L"Failed to create ICaptureGraphBuilder2", hr); return false; } hr = 
builder->SetFiltergraph(graph); if (FAILED(hr)) { ErrorHR(L"Failed to set filter graph", hr); return false; } hr = graph->QueryInterface(IID_IMediaControl, (void **)&control); if (FAILED(hr)) { ErrorHR(L"Failed to create IMediaControl", hr); return false; } *pgraph = graph.Detach(); *pbuilder = builder.Detach(); *pcontrol = control.Detach(); return true; } void LogFilters(IGraphBuilder *graph) { ComPtr filterEnum; ComPtr filter; HRESULT hr; hr = graph->EnumFilters(&filterEnum); if (FAILED(hr)) return; Debug(L"Loaded filters:"); while (filterEnum->Next(1, &filter, NULL) == S_OK) { FILTER_INFO filterInfo; hr = filter->QueryFilterInfo(&filterInfo); if (SUCCEEDED(hr)) { if (filterInfo.pGraph) filterInfo.pGraph->Release(); Debug(L"\t%s", filterInfo.achName); } } } struct DeviceFilterCallbackInfo { ComPtr filter; const wchar_t *name; const wchar_t *path; }; static bool GetDeviceCallback(DeviceFilterCallbackInfo &info, IBaseFilter *filter, const wchar_t *name, const wchar_t *path) { if (info.name && *info.name && wcscmp(name, info.name) != 0) return true; info.filter = filter; /* continue if path does not match */ if (!path || !info.path || wcscmp(path, info.path) != 0) return true; return false; } bool GetDeviceFilter(const IID &type, const wchar_t *name, const wchar_t *path, IBaseFilter **out) { DeviceFilterCallbackInfo info; info.name = name; info.path = path; if (!EnumDevices(type, EnumDeviceCallback(GetDeviceCallback), &info)) return false; if (info.filter != NULL) { *out = info.filter.Detach(); return true; } return false; } /* checks to see if a pin's config caps have a specific media type */ static bool PinConfigHasMajorType(IPin *pin, const GUID &type) { HRESULT hr; ComPtr config; int count, size; hr = pin->QueryInterface(IID_IAMStreamConfig, (void **)&config); if (FAILED(hr)) return false; hr = config->GetNumberOfCapabilities(&count, &size); if (FAILED(hr)) return false; vector caps; caps.resize(size); for (int i = 0; i < count; i++) { MediaTypePtr mt; if 
(SUCCEEDED(config->GetStreamCaps(i, &mt, caps.data()))) if (mt->majortype == type) return true; } return false; } /* checks to see if a pin has a certain major media type */ static bool PinHasMajorType(IPin *pin, const GUID &type) { HRESULT hr; MediaTypePtr mt; ComPtr mediaEnum; /* first, check the config caps. */ if (PinConfigHasMajorType(pin, type)) return true; /* then let's check the media type for the pin */ if (FAILED(pin->EnumMediaTypes(&mediaEnum))) return false; ULONG curVal; hr = mediaEnum->Next(1, &mt, &curVal); if (hr != S_OK) return false; return mt->majortype == type; } static inline bool PinIsDirection(IPin *pin, PIN_DIRECTION dir) { if (!pin) return false; PIN_DIRECTION pinDir; return SUCCEEDED(pin->QueryDirection(&pinDir)) && pinDir == dir; } static HRESULT GetPinCategory(IPin *pin, GUID &category) { if (!pin) return E_POINTER; ComQIPtr propertySet(pin); DWORD size; if (propertySet == NULL) return E_NOINTERFACE; return propertySet->Get(AMPROPSETID_Pin, AMPROPERTY_PIN_CATEGORY, NULL, 0, &category, sizeof(GUID), &size); } static inline bool PinIsCategory(IPin *pin, const GUID &category) { if (!pin) return false; GUID pinCategory; HRESULT hr = GetPinCategory(pin, pinCategory); /* if the pin has no category interface, chances are we created it */ if (FAILED(hr)) return (hr == E_NOINTERFACE); return category == pinCategory; } static inline bool PinNameIs(IPin *pin, const wchar_t *name) { if (!pin) return false; if (!name) return true; PIN_INFO pinInfo; if (FAILED(pin->QueryPinInfo(&pinInfo))) return false; if (pinInfo.pFilter) pinInfo.pFilter->Release(); return wcscmp(name, pinInfo.achName) == 0; } static inline bool PinMatches(IPin *pin, const GUID &type, const GUID &category, PIN_DIRECTION &dir) { if (!PinHasMajorType(pin, type)) return false; if (!PinIsDirection(pin, dir)) return false; if (!PinIsCategory(pin, category)) return false; return true; } bool GetFilterPin(IBaseFilter *filter, const GUID &type, const GUID &category, PIN_DIRECTION dir, IPin 
**pin) { ComPtr curPin; ComPtr pinsEnum; ULONG num; if (!filter) return false; if (FAILED(filter->EnumPins(&pinsEnum))) return false; while (pinsEnum->Next(1, &curPin, &num) == S_OK) { if (PinMatches(curPin, type, category, dir)) { *pin = curPin; (*pin)->AddRef(); return true; } } return false; } bool GetPinByName(IBaseFilter *filter, PIN_DIRECTION dir, const wchar_t *name, IPin **pin) { ComPtr curPin; ComPtr pinsEnum; ULONG num; if (!filter) return false; if (FAILED(filter->EnumPins(&pinsEnum))) return false; while (pinsEnum->Next(1, &curPin, &num) == S_OK) { if (PinIsDirection(curPin, dir) && PinNameIs(curPin, name)) { *pin = curPin.Detach(); return true; } } return false; } bool GetPinByMedium(IBaseFilter *filter, REGPINMEDIUM &medium, IPin **pin) { ComPtr curPin; ComPtr pinsEnum; ULONG num; if (!filter) return false; if (FAILED(filter->EnumPins(&pinsEnum))) return false; while (pinsEnum->Next(1, &curPin, &num) == S_OK) { REGPINMEDIUM curMedium; if (GetPinMedium(curPin, curMedium) && memcmp(&medium, &curMedium, sizeof(medium)) == 0) { *pin = curPin.Detach(); return true; } } return false; } static bool GetFilterByMediumFromMoniker(IMoniker *moniker, REGPINMEDIUM &medium, IBaseFilter **filter) { ComPtr curFilter; HRESULT hr; hr = moniker->BindToObject(nullptr, nullptr, IID_IBaseFilter, (void **)&curFilter); if (SUCCEEDED(hr)) { ComPtr pin; if (GetPinByMedium(curFilter, medium, &pin)) { *filter = curFilter.Detach(); return true; } } else { WarningHR(L"GetFilterByMediumFromMoniker: BindToObject failed", hr); } return false; } bool GetFilterByMedium(const CLSID &id, REGPINMEDIUM &medium, IBaseFilter **filter) { ComPtr deviceEnum; ComPtr enumMoniker; ComPtr moniker; DWORD count = 0; HRESULT hr; hr = CoCreateInstance(CLSID_SystemDeviceEnum, nullptr, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (void **)&deviceEnum); if (FAILED(hr)) { WarningHR(L"GetFilterByMedium: Failed to create device enum", hr); return false; } hr = deviceEnum->CreateClassEnumerator(id, &enumMoniker, 
0); if (hr != S_OK) { WarningHR(L"GetFilterByMedium: Failed to create enum moniker", hr); return false; } enumMoniker->Reset(); while (enumMoniker->Next(1, &moniker, &count) == S_OK) { if (GetFilterByMediumFromMoniker(moniker, medium, filter)) return true; } return false; } bool GetPinMedium(IPin *pin, REGPINMEDIUM &medium) { ComQIPtr ksPin(pin); CoTaskMemPtr items; if (!ksPin) return false; if (FAILED(ksPin->KsQueryMediums(&items))) return false; REGPINMEDIUM *curMed = reinterpret_cast(items + 1); for (ULONG i = 0; i < items->Count; i++, curMed++) { if (!IsEqualGUID(curMed->clsMedium, GUID_NULL) && !IsEqualGUID(curMed->clsMedium, KSMEDIUMSETID_Standard)) { medium = *curMed; return true; } } memset(&medium, 0, sizeof(medium)); return false; } static inline bool PinIsConnected(IPin *pin) { ComPtr connectedPin; return SUCCEEDED(pin->ConnectedTo(&connectedPin)); } static bool DirectConnectOutputPin(IFilterGraph *graph, IPin *pin, IBaseFilter *filterIn) { ComPtr curPin; ComPtr pinsEnum; ULONG num; if (!graph || !filterIn || !pin) return false; if (FAILED(filterIn->EnumPins(&pinsEnum))) return false; while (pinsEnum->Next(1, &curPin, &num) == S_OK) { if (PinIsDirection(curPin, PINDIR_INPUT) && !PinIsConnected(curPin)) { if (graph->ConnectDirect(pin, curPin, nullptr) == S_OK) return true; } } return false; } bool DirectConnectFilters(IFilterGraph *graph, IBaseFilter *filterOut, IBaseFilter *filterIn) { ComPtr curPin; ComPtr pinsEnum; ULONG num; bool connected = false; if (!graph || !filterOut || !filterIn) return false; if (FAILED(filterOut->EnumPins(&pinsEnum))) return false; while (pinsEnum->Next(1, &curPin, &num) == S_OK) { if (PinIsDirection(curPin, PINDIR_OUTPUT) && !PinIsConnected(curPin)) { if (DirectConnectOutputPin(graph, curPin, filterIn)) connected = true; } } return connected; } HRESULT MapPinToPacketID(IPin *pin, ULONG packetID) { ComQIPtr pidMap(pin); if (!pidMap) return E_NOINTERFACE; return pidMap->MapPID(1, &packetID, MEDIA_ELEMENTARY_STREAM); } wstring 
ConvertHRToEnglish(HRESULT hr) { LPWSTR buffer = NULL; wstring str; FormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_IGNORE_INSERTS, NULL, hr, MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US), (LPTSTR)&buffer, 0, NULL); if (buffer) { str = buffer; LocalFree(buffer); } return str; } static HRESULT DevicePathToDeviceInstancePath(const wchar_t *devicePath, wchar_t *devInstPath, int size) { /* Sanity checks */ if (!devicePath) return E_POINTER; if (!devInstPath) return E_POINTER; /* Convert to uppercase STL string */ wstring parseDevicePath = devicePath; for (wchar_t &c : parseDevicePath) c = (wchar_t)toupper(c); /* Find start position ('\\?\' or '\?\') */ wstring startToken = L"\\\\?\\"; size_t start = parseDevicePath.find(startToken, 0); if (start == string::npos) { startToken = L"\\??\\"; start = parseDevicePath.find(startToken, 0); if (start == string::npos) return E_FAIL; } parseDevicePath = parseDevicePath.substr( startToken.size(), parseDevicePath.size() - startToken.size()); /* Find end position (last occurrence of '#') */ wchar_t endToken = '#'; size_t end = parseDevicePath.find_last_of(endToken, parseDevicePath.size()); if (end == string::npos) return E_FAIL; parseDevicePath = parseDevicePath.substr(0, end); /* Replace '#' by '\' */ std::replace(parseDevicePath.begin(), parseDevicePath.end(), L'#', L'\\'); /* Set output parameter */ StringCchCopyW(devInstPath, size, parseDevicePath.c_str()); return S_OK; } static HRESULT GetParentDeviceInstancePath(const wchar_t *devInstPath, wchar_t *parentDevInstPath, int size) { /* Init return value */ HRESULT hr = E_FAIL; /* Get device info */ HDEVINFO hDevInfo = SetupDiCreateDeviceInfoList(nullptr, NULL); if (NULL != hDevInfo) { SP_DEVINFO_DATA did; did.cbSize = sizeof(SP_DEVINFO_DATA); BOOL success = SetupDiOpenDeviceInfo(hDevInfo, devInstPath, NULL, 0, &did); if (success) { /* Get parent device */ DEVINST devParent; CONFIGRET ret = CM_Get_Parent(&devParent, did.DevInst, 0); if 
(CR_SUCCESS == ret) {
				/* Get parent device instance path */
				ret = CM_Get_Device_ID(devParent, parentDevInstPath, size, 0);
				if (CR_SUCCESS == ret)
					hr = S_OK;
			}

			/* Cleanup */
			SetupDiDeleteDeviceInfo(hDevInfo, &did);
		}

		/* Cleanup */
		SetupDiDestroyDeviceInfoList(hDevInfo);
	}

	return hr;
}

static bool IsSameInstPath(const wchar_t *audDevPath, const wchar_t *vidDevInstPath)
{
	/* Get audio device instance path */
	wchar_t audDevInstPath[512];
	HRESULT hr = DevicePathToDeviceInstancePath(audDevPath, audDevInstPath,
						    _ARRAYSIZE(audDevInstPath));

	/* Compare audio and video device instance path */
	if (FAILED(hr))
		return false;

	return wcscmp(audDevInstPath, vidDevInstPath) == 0;
}

static HRESULT GetAudioCaptureParentDeviceInstancePath(IMoniker *audioCapture,
						       wchar_t *parentDevInstPath, int size)
{
	/* Sanity checks */
	if (!audioCapture)
		return E_POINTER;

	/* Bind to property bag */
	ComPtr<IPropertyBag> propertyBag;
	HRESULT hr = audioCapture->BindToStorage(0, 0, IID_IPropertyBag, (void **)&propertyBag);
	if (SUCCEEDED(hr)) {
		/* Init variant */
		VARIANT var;
		VariantInit(&var);

		/* Get "WaveInId" */
		hr = propertyBag->Read(L"WaveInId", &var, nullptr);
		if (SUCCEEDED(hr) && var.vt == VT_I4) {
			/* Get device path */
			wchar_t devicePath[512];
			MMRESULT res = waveInMessage((HWAVEIN)var.iVal, DRV_QUERYDEVICEINTERFACE,
						     (DWORD_PTR)devicePath, sizeof(devicePath));
			if (res == MMSYSERR_NOERROR) {
				/* Get device instance path */
				wchar_t devInstPath[512];
				hr = DevicePathToDeviceInstancePath(devicePath, devInstPath,
								    _ARRAYSIZE(devInstPath));

				/* Get parent */
				if (SUCCEEDED(hr))
					hr = GetParentDeviceInstancePath(
						devInstPath, parentDevInstPath, size);
			}
		}

		/* Cleanup */
		VariantClear(&var);
	}

	return hr;
}

static bool IsMonikerSameParentInstPath(IMoniker *moniker, const wchar_t *vidDevInstPath)
{
	/* Get video parent device instance path */
	wchar_t vidParentDevInstPath[512];
	HRESULT hr = GetParentDeviceInstancePath(vidDevInstPath, vidParentDevInstPath,
						 _ARRAYSIZE(vidParentDevInstPath));
	if (FAILED(hr))
		return false;

	/* Get
audio parent device instance path */
	wchar_t audParentDevInstPath[512];
	hr = GetAudioCaptureParentDeviceInstancePath(moniker, audParentDevInstPath,
						     _ARRAYSIZE(audParentDevInstPath));
	if (FAILED(hr))
		return false;

	/* Compare audio and video parent device instance path */
	return wcscmp(audParentDevInstPath, vidParentDevInstPath) == 0;
}

#define VEN_ID_SIZE 4

static inline bool MatchingStartToken(const wstring &path, const wstring &start_token)
{
	return path.find(start_token) == 0 && path.size() >= start_token.size() + VEN_ID_SIZE;
}

static bool IsUncoupledDevice(const wchar_t *vidDevInstPath)
{
	/* Sanity checks */
	if (!vidDevInstPath)
		return false;

	const wstring path = vidDevInstPath;

	/* USB */
	const wstring usbToken = L"USB\\VID_";
	const wstring usbVidIdWhitelist[] = {
		L"0FD9", /* elgato */
		L"3842", /* evga */
		L"0B05", /* asus */
		L"07CA", /* avermedia */
		L"048D", /* digitnow/pengo */
		L"04B4", /* mokose */
		L"0557", /* aten */
		L"1164", /* startek/kapchr */
		L"1532", /* razer */
		L"1BCF", /* mypin/treaslin/mirabox */
		L"1E4E", /* pengo/cloneralliance */
		L"1E71", /* nzxt */
		L"2040", /* hauppauge */
		L"2935", /* magewell */
		L"298F", /* genki */
		L"2B77", /* epiphan */
		L"32ED", /* ezcap */
		L"534D", /* brand-less/pacoxi/ucec */
		L"EBA4", /* zasluke */
	};

	if (MatchingStartToken(path, usbToken)) {
		/* Get USB vendor ID */
		const wstring vid = path.substr(usbToken.size(), VEN_ID_SIZE);
		for (const wstring &whitelistId : usbVidIdWhitelist) {
			if (vid == whitelistId) {
				return true;
			}
		}
	}

	/* PCI */
	const wstring pciVenToken = L"PCI\\VEN_";
	const wstring pciSubsysToken = L"SUBSYS_";
	const wstring pciVenIdWhitelist[] = {
		L"1CD7", /* magewell */
		L"8888", /* acasis */
		L"1461", /* avermedia */
	};
	const wstring pciSubsysIdWhitelist[] = {
		L"1CFA", /* elgato */
	};

	if (MatchingStartToken(path, pciVenToken)) {
		/* Get PCI vendor ID */
		const wstring vid = path.substr(pciVenToken.size(), VEN_ID_SIZE);
		for (const wstring &whitelistId : pciVenIdWhitelist) {
			if (vid == whitelistId) {
				return true;
			}
		}

		size_t subsysPos =
path.find(pciSubsysToken);
		size_t subsysIdPos = subsysPos + pciSubsysToken.size() + VEN_ID_SIZE;
		size_t expectedSize = subsysIdPos + VEN_ID_SIZE;
		if (subsysPos != string::npos && path.size() >= expectedSize) {
			/* Get PCI subsystem vendor ID */
			const wstring ssid = path.substr(subsysIdPos, VEN_ID_SIZE);
			for (const wstring &whitelistId : pciSubsysIdWhitelist) {
				if (ssid == whitelistId) {
					return true;
				}
			}
		}
	}

	return false;
}

static HRESULT ReadProperty(IMoniker *moniker, const wchar_t *property, wchar_t *value, int size)
{
	/* Sanity checks */
	if (!moniker)
		return E_POINTER;
	if (!property)
		return E_POINTER;
	if (!value)
		return E_POINTER;

	/* Increment reference count */
	moniker->AddRef();

	/* Bind to property bag */
	ComPtr<IPropertyBag> propertyBag;
	HRESULT hr = moniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&propertyBag);
	if (SUCCEEDED(hr)) {
		/* Initialize variant */
		VARIANT var;
		VariantInit(&var);

		/* Read property */
		hr = propertyBag->Read(property, &var, nullptr);
		if (SUCCEEDED(hr))
			StringCchCopyW(value, size, var.bstrVal);

		/* Cleanup */
		VariantClear(&var);
	}

	/* Decrement reference count */
	moniker->Release();

	return hr;
}

static HRESULT GetFriendlyName(REFCLSID deviceClass, const wchar_t *devPath, wchar_t *name,
			       int nameSize)
{
	/* Sanity checks */
	if (!devPath)
		return E_POINTER;
	if (!name)
		return E_POINTER;

	/* Create device enumerator */
	ComPtr<ICreateDevEnum> createDevEnum;
	HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
				      IID_ICreateDevEnum, (void **)&createDevEnum);

	/* Enumerate filters */
	ComPtr<IEnumMoniker> enumMoniker;
	if (SUCCEEDED(hr)) {
		/* returns S_FALSE if no devices are installed */
		hr = createDevEnum->CreateClassEnumerator(deviceClass, &enumMoniker, 0);
		if (!enumMoniker)
			hr = E_FAIL;
	}

	/* Cycle through the enumeration */
	if (SUCCEEDED(hr)) {
		ULONG fetched = 0;
		ComPtr<IMoniker> moniker;
		enumMoniker->Reset();
		while (enumMoniker->Next(1, &moniker, &fetched) == S_OK) {
			/* Get device path from moniker */
			wchar_t monikerDevPath[512];
			hr = ReadProperty(moniker,
L"DevicePath", monikerDevPath, _ARRAYSIZE(monikerDevPath));

			/* Find desired filter */
			if (wcscmp(devPath, monikerDevPath) == 0) {
				/* Get friendly name */
				hr = ReadProperty(moniker, L"FriendlyName", name, nameSize);
				return hr;
			}
		}
	}

	return E_FAIL;
}

static bool MatchFriendlyNames(const wchar_t *vidName, const wchar_t *audName)
{
	/* Sanity checks */
	if (!vidName)
		return false;
	if (!audName)
		return false;

	/* Convert strings to lower case */
	wstring strVidName = vidName;
	for (wchar_t &c : strVidName)
		c = (wchar_t)tolower(c);
	wstring strAudName = audName;
	for (wchar_t &c : strAudName)
		c = (wchar_t)tolower(c);

	/* Remove 'video' from friendly name */
	size_t posVid;
	const wstring searchVid[] = {L"(video) ", L"(video)", L"video ",
				     L"video",   L"hdmi",    L" / multiview"};
	for (int i = 0; i < _ARRAYSIZE(searchVid); i++) {
		const wstring &search = searchVid[i];
		while ((posVid = strVidName.find(search)) != std::string::npos) {
			strVidName.replace(posVid, search.length(), L"");
		}
	}

	/* Remove 'audio' from friendly name */
	size_t posAud;
	const wstring searchAud[] = {L"(audio) ", L"(audio)", L"audio ", L"audio"};
	for (int i = 0; i < _ARRAYSIZE(searchAud); i++) {
		const wstring &search = searchAud[i];
		while ((posAud = strAudName.find(search)) != std::string::npos) {
			strAudName.replace(posAud, search.length(), L"");
		}
	}

	return strVidName == strAudName;
}

static bool GetDeviceAudioFilterInternal(REFCLSID deviceClass, const wchar_t *vidDevPath,
					 IBaseFilter **audioCaptureFilter,
					 bool matchFilterName = false)
{
	/* Get video device instance path */
	wchar_t vidDevInstPath[512];
	HRESULT hr = DevicePathToDeviceInstancePath(vidDevPath, vidDevInstPath,
						    _ARRAYSIZE(vidDevInstPath));
	if (FAILED(hr))
		return false;

#if 1 /* Only enabled for certain whitelisted devices for now */
	if (!IsUncoupledDevice(vidDevInstPath))
		return false;
#endif

	/* Get friendly name */
	wchar_t vidName[512];
	if (matchFilterName) {
		hr = GetFriendlyName(CLSID_VideoInputDeviceCategory, vidDevPath, vidName,
				     _ARRAYSIZE(vidName));
		if
(FAILED(hr))
			return false;
	}

	/* Create device enumerator */
	ComPtr<ICreateDevEnum> createDevEnum;
	if (SUCCEEDED(hr))
		hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
				      IID_ICreateDevEnum, (void **)&createDevEnum);

	/* Enumerate filters */
	ComPtr<IEnumMoniker> enumMoniker;
	if (SUCCEEDED(hr)) {
		/* returns S_FALSE if no devices are installed */
		hr = createDevEnum->CreateClassEnumerator(deviceClass, &enumMoniker, 0);
		if (!enumMoniker)
			hr = E_FAIL;
	}

	/* Cycle through the enumeration */
	if (SUCCEEDED(hr)) {
		ULONG fetched = 0;
		ComPtr<IMoniker> moniker;
		enumMoniker->Reset();
		while (enumMoniker->Next(1, &moniker, &fetched) == S_OK) {
			bool samePath = false;

			/* Get device path */
			wchar_t audDevPath[512];
			hr = ReadProperty(moniker, L"DevicePath", audDevPath,
					  _ARRAYSIZE(audDevPath));
			if (SUCCEEDED(hr)) {
				/* Skip if it is the video device */
				if (wcscmp(audDevPath, vidDevPath) == 0)
					continue;

				samePath = IsSameInstPath(audDevPath, vidDevInstPath);
			} else {
				samePath = IsMonikerSameParentInstPath(moniker, vidDevInstPath);
			}

			/* Get audio capture filter */
			if (samePath) {
				/* Match video and audio filter names */
				bool isSameFilterName = false;
				if (matchFilterName) {
					wchar_t audName[512];
					hr = ReadProperty(moniker, L"FriendlyName", audName,
							  _ARRAYSIZE(audName));
					if (SUCCEEDED(hr)) {
						isSameFilterName =
							MatchFriendlyNames(vidName, audName);
					}
				}

				if (!matchFilterName || isSameFilterName) {
					hr = moniker->BindToObject(0, 0, IID_IBaseFilter,
								   (void **)audioCaptureFilter);
					if (SUCCEEDED(hr))
						return true;
				}
			}
		}
	}

	return false;
}

bool GetDeviceAudioFilter(const wchar_t *vidDevPath, IBaseFilter **audioCaptureFilter)
{
	/* Search in "Audio capture sources" and match filter name */
	bool success = GetDeviceAudioFilterInternal(CLSID_AudioInputDeviceCategory, vidDevPath,
						    audioCaptureFilter, true);

	/* Search in "WDM Streaming Capture Devices" and match filter name */
	if (!success)
		success = GetDeviceAudioFilterInternal(KSCATEGORY_CAPTURE, vidDevPath,
						       audioCaptureFilter, true);

	/* Search in "Audio capture sources" */
	if
(!success)
		success = GetDeviceAudioFilterInternal(CLSID_AudioInputDeviceCategory, vidDevPath,
						       audioCaptureFilter);

	/* Search in "WDM Streaming Capture Devices" */
	if (!success)
		success = GetDeviceAudioFilterInternal(KSCATEGORY_CAPTURE, vidDevPath,
						       audioCaptureFilter);

	return success;
}
}; /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/encoder.hpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#pragma once

#include "../dshowcapture.hpp"
#include "output-filter.hpp"
#include "capture-filter.hpp"

#include <string.h>
#include <vector>
#include <deque>
#include <mutex>

using namespace std;

struct EncodedData {
	vector<unsigned char> data;

	inline EncodedData() = default;
	inline EncodedData(unsigned char *data_, size_t size)
	{
		data.resize(size);
		memcpy(data.data(), data_, size);
	}
};

namespace DShow {

struct HVideoEncoder {
	ComPtr<IGraphBuilder> graph;
	ComPtr<ICaptureGraphBuilder2> builder;
	ComPtr<IMediaControl> control;
	ComPtr<IBaseFilter> encoder;
	ComPtr<IBaseFilter> device;
	ComPtr<OutputFilter> output;
	ComPtr<CaptureFilter> capture;

	VideoEncoderConfig config;

	mutex packetMutex;
	deque<EncodedData> packets;
	EncodedData curPacket;
	deque<long long> ptsVals;

	bool initialized = false;
	bool active = false;

	HVideoEncoder();
	~HVideoEncoder();

	bool SetupCrossbar();
	void Receive(IMediaSample *s);

	bool ConnectFilters();
	bool SetupEncoder(IBaseFilter *filter);

	bool SetConfig(VideoEncoderConfig &config);
	bool Encode(unsigned char *frame[DSHOW_MAX_PLANES],
		    size_t linesize[DSHOW_MAX_PLANES], long long timestampStart,
		    long long timestampEnd, EncoderPacket &packet, bool &new_packet);
};
};

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-enum.hpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#pragma once

#include "../dshowcapture.hpp"
#include "dshow-base.hpp"
#include "dshow-media-type.hpp"

#include <vector>

using namespace std;

namespace DShow {

bool GetClosestVideoMediaType(IBaseFilter *filter, VideoConfig &config, MediaType &mt);
bool GetClosestAudioMediaType(IBaseFilter *filter, AudioConfig &config, MediaType &mt);

bool EnumVideoCaps(IPin *pin, vector<VideoInfo> &caps);
bool EnumAudioCaps(IPin *pin, vector<AudioInfo> &caps);

typedef bool (*EnumDeviceCallback)(void *param, IBaseFilter *filter,
				   const wchar_t *deviceName, const wchar_t *devicePath);

bool EnumDevices(const GUID &type, EnumDeviceCallback callback, void *param);
}; /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/device.hpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#pragma once

#include "../dshowcapture.hpp"
#include "capture-filter.hpp"

#include <string>
#include <vector>

using namespace std;

namespace DShow {

struct EncodedData {
	long long lastStartTime = 0;
	long long lastStopTime = 0;
	vector<unsigned char> bytes;
};

struct EncodedDevice {
	VideoFormat videoFormat;
	ULONG videoPacketID;
	long width;
	long height;
	long long frameInterval;

	AudioFormat audioFormat;
	ULONG audioPacketID;
	DWORD samplesPerSec;
};

struct HDevice {
	ComPtr<IGraphBuilder> graph;
	ComPtr<ICaptureGraphBuilder2> builder;
	ComPtr<IMediaControl> control;

	ComPtr<IBaseFilter> videoFilter;
	ComPtr<IBaseFilter> audioFilter;
	ComPtr<CaptureFilter> videoCapture;
	ComPtr<CaptureFilter> audioCapture;
	ComPtr<OutputFilter> audioOutput;
	ComPtr<IBaseFilter> rocketEncoder;

	MediaType videoMediaType;
	MediaType audioMediaType;
	VideoConfig videoConfig;
	AudioConfig audioConfig;

	bool encodedDevice = false;
	bool rotatableDevice = false;
	bool deviceHdrSignal = false;
	bool reactivatePending = false;
	bool initialized;
	bool active;

	EncodedData encodedVideo;
	EncodedData encodedAudio;

	HDevice();
	~HDevice();

	void ConvertVideoSettings();
	void ConvertAudioSettings();

	bool EnsureInitialized(const wchar_t *func);
	bool EnsureActive(const wchar_t *func);
	bool EnsureInactive(const wchar_t *func);

	inline void SendToCallback(bool video, unsigned char *data, size_t size,
				   long long startTime, long long stopTime, long rotation);

	void Receive(bool video, IMediaSample *sample);

	bool SetupEncodedVideoCapture(IBaseFilter *filter, VideoConfig &config,
				      const EncodedDevice &info);

	bool SetupExceptionVideoCapture(IBaseFilter *filter, VideoConfig &config);
	bool SetupExceptionAudioCapture(IPin *pin);

	bool SetupVideoCapture(IBaseFilter *filter, VideoConfig &config);
	bool SetupAudioCapture(IBaseFilter *filter, AudioConfig &config);
	bool SetupAudioOutput(IBaseFilter *filter, AudioConfig &config);

	bool SetVideoConfig(VideoConfig *config);
	bool
SetAudioConfig(AudioConfig *config);

	bool CreateGraph();
	bool FindCrossbar(IBaseFilter *filter, IBaseFilter **crossbar);

	bool ConnectPins(const GUID &category, const GUID &type, IBaseFilter *filter,
			 IBaseFilter *capture);
	bool RenderFilters(const GUID &category, const GUID &type, IBaseFilter *filter,
			   IBaseFilter *capture);

	void SetAudioBuffering(int bufferingMs);

	bool ConnectFilters();
	void DisconnectFilters();

	Result Start();
	void Stop();
};
}; /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/external/.clang-format

Language: Cpp
SortIncludes: false
DisableFormat: true

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/external/IVideoCaptureFilter.h

//=============================================================================
//! @file IVideoCaptureFilter.h
//! @bc -----------------------------------------------------------------------
//! @ec @brief Interface declaration for Elgato Video Capture Filter
//! @author F.M.Birth, T.Schnitzler
//! @date 01-Oct-12 FMB - Creation
//! @date 08-Apr-13 TS - Added IVideoCaptureFilter2
//! @date 14-Nov-13 TS - Added IVideoCaptureFilter3
//! @date 21-Jul-14 TS - Added IVideoCaptureFilter4
//! @date 12-Aug-14 TS - Added IVideoCaptureFilter5
//! @date 28-Aug-14 FDj - MIT license added
//! @note Supports Elgato Game Capture HD
//! @bc -----------------------------------------------------------------------
//! @ec @par Copyright
//! @n (c) 2012-14, Elgato Systems. All Rights Reserved.
//! @n
//! @n The MIT License (MIT)
//! @n
//! @n Permission is hereby granted, free of charge, to any person obtaining a copy
//!
@n of this software and associated documentation files (the "Software"), to deal
//! @n in the Software without restriction, including without limitation the rights
//! @n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
//! @n copies of the Software, and to permit persons to whom the Software is
//! @n furnished to do so, subject to the following conditions:
//! @n
//! @n The above copyright notice and this permission notice shall be included in all
//! @n copies or substantial portions of the Software.
//! @n
//! @n THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
//! @n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
//! @n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
//! @n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
//! @n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
//! @n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
//! @n SOFTWARE.
//!
@n
//=============================================================================

#pragma once

/*=============================================================================
// FILTER INTERFACE
=============================================================================*/

#define VIDEO_CAPTURE_FILTER_NAME "Elgato Game Capture HD"
#define VIDEO_CAPTURE_FILTER_NAME_L L"Elgato Game Capture HD"

// {39F50F4C-99E1-464a-B6F9-D605B4FB5918}
DEFINE_GUID(CLSID_ElgatoVideoCaptureFilter,
	    0x39f50f4c, 0x99e1, 0x464a, 0xb6, 0xf9, 0xd6, 0x5, 0xb4, 0xfb, 0x59, 0x18);

// {39F50F4C-99E1-464a-B6F9-D605B4FB5919}
DEFINE_GUID(CLSID_ElgatoVideoCaptureFilterProperties,
	    0x39f50f4c, 0x99e1, 0x464a, 0xb6, 0xf9, 0xd6, 0x5, 0xb4, 0xfb, 0x59, 0x19);

// {39F50F4C-99E1-464a-B6F9-D605B4FB5920}
DEFINE_GUID(IID_IElgatoVideoCaptureFilter,
	    0x39f50f4c, 0x99e1, 0x464a, 0xb6, 0xf9, 0xd6, 0x5, 0xb4, 0xfb, 0x59, 0x20);

// {585B2914-252E-49bd-B730-7B4C40F4D4E5}
DEFINE_GUID(IID_IElgatoVideoCaptureFilter2,
	    0x585b2914, 0x252e, 0x49bd, 0xb7, 0x30, 0x7b, 0x4c, 0x40, 0xf4, 0xd4, 0xe5);

// {CC415EB7-B1C7-428c-9E5E-D9747DB4BE76}
DEFINE_GUID(IID_IElgatoVideoCaptureFilter3,
	    0xcc415eb7, 0xb1c7, 0x428c, 0x9e, 0x5e, 0xd9, 0x74, 0x7d, 0xb4, 0xbe, 0x76);

// {197992FF-ED65-47CB-8032-D287AB40B33F}
DEFINE_GUID(IID_IElgatoVideoCaptureFilter4,
	    0x197992ff, 0xed65, 0x47cb, 0x80, 0x32, 0xd2, 0x87, 0xab, 0x40, 0xb3, 0x3f);

// {7E6E9E9E-4062-4364-99B1-15C2F662B502}
DEFINE_GUID(IID_IElgatoVideoCaptureFilter5,
	    0x7e6e9e9e, 0x4062, 0x4364, 0x99, 0xb1, 0x15, 0xc2, 0xf6, 0x62, 0xb5, 0x2);

/*=============================================================================
// IElgatoVideoCaptureFilter
=============================================================================*/
//!
Interface
DECLARE_INTERFACE_(IElgatoVideoCaptureFilter, IUnknown)
{
};

/*=============================================================================
// IElgatoVideoCaptureFilter2
=============================================================================*/
//! Video Capture device type
enum VIDEO_CAPTURE_FILTER_DEVICE_TYPE {
	VIDEO_CAPTURE_FILTER_DEVICE_TYPE_INVALID = 0,           //!< Invalid
	VIDEO_CAPTURE_FILTER_DEVICE_TYPE_GAME_CAPTURE_HD = 2,   //!< Game Capture HD (VID: 0x0fd9 PID: 0x0044, 0x004e, 0x0051)
	VIDEO_CAPTURE_FILTER_DEVICE_TYPE_GAME_CAPTURE_HD60 = 8, //!< Game Capture HD60 (VID: 0x0fd9 PID: 0x005c)
	NUM_VIDEO_CAPTURE_FILTER_DEVICE_TYPE
};

//! Input device
enum VIDEO_CAPTURE_FILTER_INPUT_DEVICE {
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_INVALID = 0,      //!< Invalid
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_XBOX360 = 1,      //!< Microsoft Xbox 360
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_PLAYSTATION3 = 2, //!< Sony PlayStation 3
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_IPAD = 3,         //!< Apple iPad
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_IPOD_IPHONE = 4,  //!< Apple iPod or iPhone
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_WII = 5,          //!< Nintendo Wii
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_OTHER = 6,        //!< Other
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_WII_U = 7,        //!< Nintendo Wii U
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_XBOX_ONE = 8,     //!< Microsoft Xbox One
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE_PLAYSTATION4 = 9, //!< Sony PlayStation 4
};

//! Video inputs
enum VIDEO_CAPTURE_FILTER_VIDEO_INPUT {
	VIDEO_CAPTURE_FILTER_VIDEO_INPUT_INVALID = 0,   //!< Invalid
	VIDEO_CAPTURE_FILTER_VIDEO_INPUT_COMPOSITE = 1, //!< Composite
	VIDEO_CAPTURE_FILTER_VIDEO_INPUT_SVIDEO = 2,    //!< S-Video
	VIDEO_CAPTURE_FILTER_VIDEO_INPUT_COMPONENT = 3, //!< Component
	VIDEO_CAPTURE_FILTER_VIDEO_INPUT_HDMI = 4,      //!< HDMI
};

//!
Video encoder profile
enum VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE {
	VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE_INVALID = 0x00000000, //!< Invalid
	VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE_240 = 0x00000001,     //!< 320x240
	VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE_360 = 0x00000002,     //!< 480x360
	VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE_480 = 0x00000004,     //!< 640x480
	VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE_720 = 0x00000008,     //!< 1280x720
	VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE_1080 = 0x00000010,    //!< 1920x1080
};

//! Color range
enum VIDEO_CAPTURE_FILTER_COLOR_RANGE {
	VIDEO_CAPTURE_FILTER_COLOR_RANGE_INVALID = 0, //!< Invalid
	VIDEO_CAPTURE_FILTER_COLOR_RANGE_FULL = 1,    //!< 0-255
	VIDEO_CAPTURE_FILTER_COLOR_RANGE_LIMITED = 2, //!< 16-235
	VIDEO_CAPTURE_FILTER_COLOR_RANGE_SHOOT = 3,   //!<
};

//! Settings
struct VIDEO_CAPTURE_FILTER_SETTINGS {
	TCHAR deviceName[256];                           //!< Device name (get only)
	VIDEO_CAPTURE_FILTER_INPUT_DEVICE inputDevice;   //!< Input device (e.g. Xbox360)
	VIDEO_CAPTURE_FILTER_VIDEO_INPUT videoInput;     //!< Video input (e.g. HDMI)
	VIDEO_CAPTURE_FILTER_VID_ENC_PROFILE profile;    //!< Video encoder profile (maximum resolution)
	BOOL useAnalogAudioInput;                        //!< for HDMI with analog audio input
	VIDEO_CAPTURE_FILTER_COLOR_RANGE hdmiColorRange; //!< HDMI color range
	int brightness;                                  //!< Brightness (0-10000)
	int contrast;                                    //!< Contrast (0-10000)
	int saturation;                                  //!< Saturation (0-10000)
	int hue;                                         //!< Hue (0-10000)
	int analogAudioGain;                             //!< Analog audio gain (-60 - 12 dB)
	int digitalAudioGain;                            //!< Digital audio gain (-60 - 12 dB)
	BOOL preserveInputFormat;                        //!< Input Format will be preserved (e.g. do not convert interlaced to progressive)
	BOOL stretchStandardDefinitionInput;             //!< Stretch SD input to 16:9
};

//!
Interface
DECLARE_INTERFACE_(IElgatoVideoCaptureFilter2, IElgatoVideoCaptureFilter)
{
	// Get current settings
	STDMETHOD(GetSettings)(THIS_ VIDEO_CAPTURE_FILTER_SETTINGS *pSettings) PURE;

	// Set settings
	STDMETHOD(SetSettings)(THIS_ const VIDEO_CAPTURE_FILTER_SETTINGS *pcSettings) PURE;
};

/*=============================================================================
// IElgatoVideoCaptureFilter3
=============================================================================*/
//! Interface
DECLARE_INTERFACE_(IElgatoVideoCaptureFilter3, IElgatoVideoCaptureFilter2)
{
	//! Get A/V delay in milli-seconds (approximate delay between input signal and DirectShow
	//! filter output)
	STDMETHOD(GetDelayMs)(THIS_ int* pnDelayMs) PURE;
};

/*=============================================================================
// IElgatoVideoCaptureFilter4
=============================================================================*/
//! Messages
enum VIDEO_CAPTURE_FILTER_NOTIFICATION {
	//! Description: Delay of the device has changed. Call GetDelayMs() to get the new delay.
	VIDEO_CAPTURE_FILTER_NOTIFICATION_DEVICE_DELAY_CHANGED = 110, //!< Data: none

	//! Description: Output format has changed. Update your signal path accordingly.
	VIDEO_CAPTURE_FILTER_NOTIFICATION_CAPTURE_OUTPUT_FORMAT_CHANGED = 305, //!< Data: none
};

//! Custom event that can be received by IMediaEvent::GetEvent. If SetNotificationCallback() was not set this method is used to send notifications.
//! lEventCode = VIDEO_CAPTURE_FILTER_EVENT
//! lParam1 = VIDEO_CAPTURE_FILTER_NOTIFICATION
//! lParam2 = reserved for future use (e.g. notifications with additional data)
#define VIDEO_CAPTURE_FILTER_EVENT EC_USER + 0x0FD9

//! Message callback
typedef void (CALLBACK* PFN_VIDEO_CAPTURE_FILTER_NOTIFICATION_CALLBACK)(VIDEO_CAPTURE_FILTER_NOTIFICATION nMessage, void* pData, int nSize, void* pContext);

//! Interface
DECLARE_INTERFACE_(IElgatoVideoCaptureFilter4, IElgatoVideoCaptureFilter3)
{
	//!
Check device is present
	STDMETHOD(GetDevicePresent)(THIS_ BOOL* pfDevicePresent) PURE;

	//! Get current device type
	STDMETHOD(GetDeviceType)(THIS_ VIDEO_CAPTURE_FILTER_DEVICE_TYPE* pnDeviceType) PURE;

	//! Set callback to receive notifications
	STDMETHOD(SetNotificationCallback)(THIS_ PFN_VIDEO_CAPTURE_FILTER_NOTIFICATION_CALLBACK pCallback, void* pContext) PURE;
};

/*=============================================================================
// IElgatoVideoCaptureFilter5
=============================================================================*/
//! Extended Settings
struct VIDEO_CAPTURE_FILTER_SETTINGS_EX {
	VIDEO_CAPTURE_FILTER_SETTINGS Settings;
	BOOL enableFullFrameRate; //!< Enable full frame rate (50/60 fps)
	BYTE reserved[20 * 1024];
};

DECLARE_INTERFACE_(IElgatoVideoCaptureFilter5, IElgatoVideoCaptureFilter4)
{
	//! Get current settings
	STDMETHOD(GetSettingsEx)(THIS_ VIDEO_CAPTURE_FILTER_SETTINGS_EX *pSettings) PURE;

	//! Set settings
	STDMETHOD(SetSettingsEx)(THIS_ const VIDEO_CAPTURE_FILTER_SETTINGS_EX *pcSettings) PURE;
};

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-media-type.hpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#pragma once

#include "dshow-base.hpp"

namespace DShow {

HRESULT CopyMediaType(AM_MEDIA_TYPE *pmtTarget, const AM_MEDIA_TYPE *pmtSource);
void FreeMediaType(AM_MEDIA_TYPE &mt);

BITMAPINFOHEADER *GetBitmapInfoHeader(AM_MEDIA_TYPE &mt);
const BITMAPINFOHEADER *GetBitmapInfoHeader(const AM_MEDIA_TYPE &mt);

class MediaTypePtr;

class MediaType {
	friend class MediaTypePtr;
	AM_MEDIA_TYPE type;

public:
	inline MediaType() { memset(&type, 0, sizeof(type)); }
	inline MediaType(const MediaType &mt) { CopyMediaType(&type, &mt.type); }
	inline MediaType(const AM_MEDIA_TYPE &type_) { CopyMediaType(&type, &type_); }
	inline ~MediaType() { FreeMediaType(type); }

	inline operator AM_MEDIA_TYPE *() { return &type; }
	inline operator AM_MEDIA_TYPE &() { return type; }
	inline operator const AM_MEDIA_TYPE *() const { return &type; }
	inline operator const AM_MEDIA_TYPE &() const { return type; }

	inline AM_MEDIA_TYPE *Ptr() { return &type; }
	inline AM_MEDIA_TYPE *operator->() { return &type; }
	inline const AM_MEDIA_TYPE *operator->() const { return &type; }

	inline AM_MEDIA_TYPE *Duplicate() const
	{
		AM_MEDIA_TYPE *ptr = (AM_MEDIA_TYPE *)CoTaskMemAlloc(sizeof(*ptr));
		if (ptr) {
			memset(ptr, 0, sizeof(*ptr));
			CopyMediaType(ptr, &type);
		}
		return ptr;
	}

	inline bool operator==(const AM_MEDIA_TYPE *pMT) const { return pMT == &type; }

	inline void operator=(const MediaType &mt)
	{
		FreeMediaType(type);
		CopyMediaType(&type, &mt.type);
	}

	inline void operator=(const AM_MEDIA_TYPE *pMT)
	{
		FreeMediaType(type);
		CopyMediaType(&type, pMT);
	}

	inline void operator=(const AM_MEDIA_TYPE &type_)
	{
		FreeMediaType(type);
		CopyMediaType(&type, &type_);
	}

	template<typename T> inline T *AllocFormat()
	{
		if (type.pbFormat) {
			CoTaskMemFree(type.pbFormat);
			type.pbFormat = nullptr;
			type.cbFormat = 0;
		}

		type.pbFormat =
(PBYTE)CoTaskMemAlloc(sizeof(T));
		type.cbFormat = sizeof(T);
		memset(type.pbFormat, 0, sizeof(T));
		return (T *)type.pbFormat;
	}
};

class MediaTypePtr {
	friend class MediaType;
	AM_MEDIA_TYPE *ptr;

public:
	inline void Clear()
	{
		if (ptr) {
			FreeMediaType(*ptr);
			CoTaskMemFree(ptr);
			ptr = nullptr;
		}
	}

	inline MediaTypePtr() : ptr(nullptr) {}
	inline MediaTypePtr(AM_MEDIA_TYPE *ptr_) : ptr(ptr_) {}
	inline ~MediaTypePtr() { Clear(); }

	inline AM_MEDIA_TYPE **operator&()
	{
		Clear();
		return &ptr;
	}

	inline AM_MEDIA_TYPE *operator->() const { return ptr; }
	inline operator AM_MEDIA_TYPE *() const { return ptr; }

	inline void operator=(AM_MEDIA_TYPE *ptr_)
	{
		Clear();
		ptr = ptr_;
	}

	inline bool operator==(const AM_MEDIA_TYPE *ptr_) const { return ptr == ptr_; }
};
}; /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/capture-filter.hpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#pragma once

#include "dshow-base.hpp"
#include "dshow-media-type.hpp"
#include "../dshowcapture.hpp"

namespace DShow {

class CaptureFilter;
class CaptureSource;

typedef void (*CaptureCallback)(void *param, IMediaSample *sample);

struct PinCaptureInfo {
	std::function<void(IMediaSample *sample)> callback;
	GUID expectedMajorType{};
	GUID expectedSubType{};
};

class CapturePin : public IPin, public IMemInputPin {
	friend class CaptureEnumMediaTypes;

	volatile long refCount;

	PinCaptureInfo captureInfo;
	ComPtr<IPin> connectedPin;
	CaptureFilter *filter;
	MediaType connectedMediaType;
	volatile bool flushing = false;

	bool IsValidMediaType(const AM_MEDIA_TYPE *pmt) const;

public:
	CapturePin(CaptureFilter *filter, const PinCaptureInfo &info);
	virtual ~CapturePin();

	STDMETHODIMP QueryInterface(REFIID riid, void **ppv);
	STDMETHODIMP_(ULONG) AddRef();
	STDMETHODIMP_(ULONG) Release();

	// IPin methods
	STDMETHODIMP Connect(IPin *pReceivePin, const AM_MEDIA_TYPE *pmt);
	STDMETHODIMP ReceiveConnection(IPin *connector, const AM_MEDIA_TYPE *pmt);
	STDMETHODIMP Disconnect();
	STDMETHODIMP ConnectedTo(IPin **pPin);
	STDMETHODIMP ConnectionMediaType(AM_MEDIA_TYPE *pmt);
	STDMETHODIMP QueryPinInfo(PIN_INFO *pInfo);
	STDMETHODIMP QueryDirection(PIN_DIRECTION *pPinDir);
	STDMETHODIMP QueryId(LPWSTR *lpId);
	STDMETHODIMP QueryAccept(const AM_MEDIA_TYPE *pmt);
	STDMETHODIMP EnumMediaTypes(IEnumMediaTypes **ppEnum);
	STDMETHODIMP QueryInternalConnections(IPin **apPin, ULONG *nPin);
	STDMETHODIMP EndOfStream();
	STDMETHODIMP BeginFlush();
	STDMETHODIMP EndFlush();
	STDMETHODIMP NewSegment(REFERENCE_TIME tStart, REFERENCE_TIME tStop, double dRate);

	// IMemInputPin methods
	STDMETHODIMP GetAllocator(IMemAllocator **ppAllocator);
	STDMETHODIMP NotifyAllocator(IMemAllocator *pAllocator, BOOL bReadOnly);
	STDMETHODIMP
GetAllocatorRequirements(ALLOCATOR_PROPERTIES *pProps); STDMETHODIMP Receive(IMediaSample *pSample); STDMETHODIMP ReceiveMultiple(IMediaSample **pSamples, long nSamples, long *nSamplesProcessed); STDMETHODIMP ReceiveCanBlock(); }; class CaptureFilter : public IBaseFilter { friend class CapturePin; volatile long refCount; FILTER_STATE state; ComPtr graph; ComPtr pin; ComPtr misc; public: CaptureFilter(const PinCaptureInfo &info); virtual ~CaptureFilter(); // IUnknown methods STDMETHODIMP QueryInterface(REFIID riid, void **ppv); STDMETHODIMP_(ULONG) AddRef(); STDMETHODIMP_(ULONG) Release(); // IPersist method STDMETHODIMP GetClassID(CLSID *pClsID); // IMediaFilter methods STDMETHODIMP GetState(DWORD dwMSecs, FILTER_STATE *State); STDMETHODIMP SetSyncSource(IReferenceClock *pClock); STDMETHODIMP GetSyncSource(IReferenceClock **pClock); STDMETHODIMP Stop(); STDMETHODIMP Pause(); STDMETHODIMP Run(REFERENCE_TIME tStart); // IBaseFilter methods STDMETHODIMP EnumPins(IEnumPins **ppEnum); STDMETHODIMP FindPin(LPCWSTR Id, IPin **ppPin); STDMETHODIMP QueryFilterInfo(FILTER_INFO *pInfo); STDMETHODIMP JoinFilterGraph(IFilterGraph *pGraph, LPCWSTR pName); STDMETHODIMP QueryVendorInfo(LPWSTR *pVendorInfo); inline CapturePin *GetPin() const { return (CapturePin *)pin; } }; class CaptureEnumPins : public IEnumPins { volatile long refCount = 1; ComPtr filter; UINT curPin; public: CaptureEnumPins(CaptureFilter *filter, CaptureEnumPins *pEnum); virtual ~CaptureEnumPins(); // IUnknown STDMETHODIMP QueryInterface(REFIID riid, void **ppv); STDMETHODIMP_(ULONG) AddRef(); STDMETHODIMP_(ULONG) Release(); // IEnumPins STDMETHODIMP Next(ULONG cPins, IPin **ppPins, ULONG *pcFetched); STDMETHODIMP Skip(ULONG cPins); STDMETHODIMP Reset(); STDMETHODIMP Clone(IEnumPins **ppEnum); }; class CaptureEnumMediaTypes : public IEnumMediaTypes { volatile long refCount = 1; ComPtr pin; UINT curMT = 0; public: CaptureEnumMediaTypes(CapturePin *pin); virtual ~CaptureEnumMediaTypes(); // IUnknown STDMETHODIMP 
QueryInterface(REFIID riid, void **ppv); STDMETHODIMP_(ULONG) AddRef(); STDMETHODIMP_(ULONG) Release(); // IEnumMediaTypes STDMETHODIMP Next(ULONG cMediaTypes, AM_MEDIA_TYPE **ppMediaTypes, ULONG *pcFetched); STDMETHODIMP Skip(ULONG cMediaTypes); STDMETHODIMP Reset(); STDMETHODIMP Clone(IEnumMediaTypes **ppEnum); }; }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-demux.cpp000644 001751 001751 00000010242 15153330240 027420 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#include "dshow-demux.hpp"
#include "dshow-formats.hpp"
#include "log.hpp"

namespace DShow {

static inline DWORD VideoFormatToFourCC(VideoFormat format)
{
	if (format == VideoFormat::H264)
		return MAKEFOURCC('H', '2', '6', '4');
	return 0;
}

static inline const GUID &VideoFormatToSubType(VideoFormat format)
{
	if (format == VideoFormat::H264)
		return MEDIASUBTYPE_H264;
	return GUID_NULL;
}

bool CreateDemuxVideoPin(IBaseFilter *demuxFilter, MediaType &mt, long width,
			 long height, long long frameTime, VideoFormat format)
{
	ComQIPtr<IMpeg2Demultiplexer> demuxer(demuxFilter);
	if (!demuxer) {
		Warning(L"CreateDemuxVideoPin: Failed to get "
			L"IMpeg2Demultiplexer from filter");
		return false;
	}

	ComPtr<IPin> pin;
	HRESULT hr;

	VIDEOINFOHEADER *vih = mt.AllocFormat<VIDEOINFOHEADER>();
	vih->bmiHeader.biSize = sizeof(vih->bmiHeader);
	vih->bmiHeader.biWidth = width;
	vih->bmiHeader.biHeight = height;
	vih->bmiHeader.biCompression = VideoFormatToFourCC(format);
	vih->rcSource.right = width;
	vih->rcSource.bottom = height;
	vih->AvgTimePerFrame = frameTime;

	if (!vih->bmiHeader.biCompression) {
		Warning(L"CreateDemuxVideoPin: Invalid video format");
		return false;
	}

	mt->majortype = MEDIATYPE_Video;
	mt->subtype = VideoFormatToSubType(format);
	mt->formattype = FORMAT_VideoInfo;
	mt->bTemporalCompression = true;

	wchar_t *name = (wchar_t *)CoTaskMemAlloc(sizeof(DEMUX_VIDEO_PIN));
	memcpy(name, DEMUX_VIDEO_PIN, sizeof(DEMUX_VIDEO_PIN));

	hr = demuxer->CreateOutputPin(mt, name, &pin);
	if (FAILED(hr)) {
		WarningHR(L"CreateDemuxVideoPin: Failed to create video pin "
			  L"on demuxer",
			  hr);
		return false;
	}

	return true;
}

static inline WORD AudioFormatToFormatTag(AudioFormat format)
{
	if (format == AudioFormat::AAC)
		return WAVE_FORMAT_RAW_AAC1;
	else if (format == AudioFormat::AC3)
		return WAVE_FORMAT_DVM;
	else if (format == AudioFormat::MPGA)
		return WAVE_FORMAT_MPEG;
	return 0;
}

static inline const GUID &AudioFormatToSubType(AudioFormat format)
{
	if (format == AudioFormat::AAC)
		return MEDIASUBTYPE_RAW_AAC1;
	else if (format == AudioFormat::AC3)
		return MEDIASUBTYPE_DVM;
	else if (format == AudioFormat::MPGA)
		return MEDIASUBTYPE_MPEG1AudioPayload;
	return GUID_NULL;
}

bool CreateDemuxAudioPin(IBaseFilter *demuxFilter, MediaType &mt,
			 DWORD samplesPerSec, WORD bitsPerSample,
			 WORD channels, AudioFormat format)
{
	ComQIPtr<IMpeg2Demultiplexer> demuxer(demuxFilter);
	if (!demuxer) {
		Warning(L"CreateDemuxAudioPin: Failed to get "
			L"IMpeg2Demultiplexer from filter");
		return false;
	}

	ComPtr<IPin> pin;
	HRESULT hr;

	WAVEFORMATEX *wfex = mt.AllocFormat<WAVEFORMATEX>();
	wfex->wFormatTag = AudioFormatToFormatTag(format);
	wfex->nChannels = channels;
	wfex->nSamplesPerSec = samplesPerSec;
	wfex->wBitsPerSample = bitsPerSample;

	if (!wfex->wFormatTag) {
		Warning(L"CreateDemuxAudioPin: Invalid audio format");
		return false;
	}

	mt->majortype = MEDIATYPE_Audio;
	mt->subtype = AudioFormatToSubType(format);
	mt->formattype = FORMAT_WaveFormatEx;
	mt->bTemporalCompression = true;

	wchar_t *name = (wchar_t *)CoTaskMemAlloc(sizeof(DEMUX_AUDIO_PIN));
	memcpy(name, DEMUX_AUDIO_PIN, sizeof(DEMUX_AUDIO_PIN));

	hr = demuxer->CreateOutputPin(mt, name, &pin);
	if (FAILED(hr)) {
		WarningHR(L"CreateDemuxAudioPin: Failed to create audio pin "
			  L"on demuxer",
			  hr);
		return false;
	}

	return true;
}
}; /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshowencode.cpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#include "../dshowcapture.hpp"
#include "dshow-base.hpp"
#include "dshow-enum.hpp"
#include "encoder.hpp"
#include "log.hpp"

namespace DShow {

VideoEncoder::VideoEncoder() : context(new HVideoEncoder) {}

VideoEncoder::~VideoEncoder()
{
	delete context;
}

bool VideoEncoder::Valid() const
{
	return context->initialized;
}

bool VideoEncoder::Active() const
{
	return context->active;
}

bool VideoEncoder::ResetGraph()
{
	delete context;
	context = new HVideoEncoder;
	return context->initialized;
}

bool VideoEncoder::SetConfig(VideoEncoderConfig &config)
{
	if (context->active) {
		delete context;
		context = new HVideoEncoder;
	}

	return context->SetConfig(config);
}

bool VideoEncoder::GetConfig(VideoEncoderConfig &config) const
{
	if (context->encoder == nullptr)
		return false;

	config = context->config;
	return true;
}

bool VideoEncoder::Encode(unsigned char *data[DSHOW_MAX_PLANES],
			  size_t linesize[DSHOW_MAX_PLANES],
			  long long timestampStart, long long timestampEnd,
			  EncoderPacket &packet, bool &new_packet)
{
	if (context->encoder == nullptr)
		return false;

	return context->Encode(data, linesize, timestampStart, timestampEnd,
			       packet, new_packet);
}

static bool EnumVideoEncoder(vector<DeviceId> &encoders, IBaseFilter *encoder,
			     const wchar_t *deviceName,
			     const wchar_t *devicePath)
{
	DeviceId id;

	bool validDevice = wcsstr(deviceName, L"C985") ||
			   wcsstr(deviceName, L"C353");
	if (!validDevice)
		return true;

	id.name = deviceName;
	id.path = devicePath;
	encoders.push_back(id);

	(void)encoder;
	return true;
}

bool VideoEncoder::EnumEncoders(vector<DeviceId> &encoders)
{
	encoders.clear();
	return EnumDevices(KSCATEGORY_ENCODER,
			   EnumDeviceCallback(EnumVideoEncoder), &encoders);
}
};

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/device.cpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#include "device.hpp"
#include "dshow-device-defs.hpp"
#include "dshow-media-type.hpp"
#include "dshow-formats.hpp"
#include "dshow-enum.hpp"
#include "log.hpp"

#define ROCKET_WAIT_TIME_MS 5000

namespace DShow {

/* device-vendor.cpp API */
extern bool IsVendorVideoHDR(IKsPropertySet *propertySet);
extern void SetVendorVideoFormat(IKsPropertySet *propertySet,
				 bool hevcTrueAvcFalse);
extern void SetVendorTonemapperUsage(IBaseFilter *filter, bool enable);

bool SetRocketEnabled(IBaseFilter *encoder, bool enable);

HDevice::HDevice() : initialized(false), active(false) {}

HDevice::~HDevice()
{
	if (active)
		Stop();

	DisconnectFilters();

	/*
	 * the sleeps for the rocket are required.  It seems that you cannot
	 * simply start/stop the stream right away after/before you enable or
	 * disable the rocket.  If you start it too fast after enabling, it
	 * won't return any data.
	 * If you try to turn off the rocket too quickly after stopping,
	 * then it'll be perpetually stuck on, and then you'll have to
	 * unplug/replug the device to get it working again.
	 */
	if (!!rocketEncoder) {
		Sleep(ROCKET_WAIT_TIME_MS);
		SetRocketEnabled(rocketEncoder, false);
	}
}

bool HDevice::EnsureInitialized(const wchar_t *func)
{
	if (!initialized) {
		Error(L"%s: context not initialized", func);
		return false;
	}

	return true;
}

bool HDevice::EnsureActive(const wchar_t *func)
{
	if (!active) {
		Error(L"%s: cannot be used while inactive", func);
		return false;
	}

	return true;
}

bool HDevice::EnsureInactive(const wchar_t *func)
{
	if (active) {
		Error(L"%s: cannot be used while active", func);
		return false;
	}

	return true;
}

inline void HDevice::SendToCallback(bool video, unsigned char *data,
				    size_t size, long long startTime,
				    long long stopTime, long rotation)
{
	if (!size)
		return;

	if (video)
		videoConfig.callback(videoConfig, data, size, startTime,
				     stopTime, rotation);
	else
		audioConfig.callback(audioConfig, data, size, startTime,
				     stopTime);
}

void HDevice::Receive(bool isVideo, IMediaSample *sample)
{
	BYTE *ptr;
	MediaTypePtr mt;
	long roll = 0;
	bool encoded = isVideo ? ((int)videoConfig.format >= 400)
			       : ((int)audioConfig.format >= 200);

	if (!sample)
		return;

	if (isVideo ?
	    !videoConfig.callback : !audioConfig.callback)
		return;
	if (reactivatePending)
		return;

	/* auto-rotation for devices such as streamcam */
	if (isVideo && rotatableDevice) {
		ComQIPtr<IAMCameraControl> cc(videoFilter);
		if (cc) {
			long ccf = 0;
			cc->Get(CameraControl_Roll, &roll, &ccf);
		}
	}

	if (isVideo && videoConfig.reactivateCallback) {
		ComQIPtr<IKsPropertySet> propertySet(videoFilter);
		if (propertySet) {
			const bool hdr = IsVendorVideoHDR(propertySet);
			if (deviceHdrSignal != hdr) {
				deviceHdrSignal = hdr;
#ifdef ENABLE_HEVC
				SetVendorVideoFormat(propertySet, hdr);
#endif
				videoConfig.reactivateCallback();
				reactivatePending = true;
				return;
			}
		}
	}

	if (sample->GetMediaType(&mt) == S_OK) {
		if (isVideo) {
			videoMediaType = mt;
			ConvertVideoSettings();
		} else {
			audioMediaType = mt;
			ConvertAudioSettings();
		}
	}

	int size = sample->GetActualDataLength();
	if (!size)
		return;

	if (FAILED(sample->GetPointer(&ptr)))
		return;

	long long startTime, stopTime;
	bool hasTime = SUCCEEDED(sample->GetTime(&startTime, &stopTime));

	if (encoded) {
		EncodedData &data = isVideo ?
			encodedVideo : encodedAudio;

		/* packets that have time are the first packet in a group of
		 * segments */
		if (hasTime) {
			SendToCallback(isVideo, data.bytes.data(),
				       data.bytes.size(), data.lastStartTime,
				       data.lastStopTime, roll);

			data.bytes.resize(0);
			data.lastStartTime = startTime;
			data.lastStopTime = stopTime;
		}

		data.bytes.insert(data.bytes.end(), (unsigned char *)ptr,
				  (unsigned char *)ptr + size);

	} else if (hasTime) {
		SendToCallback(isVideo, ptr, size, startTime, stopTime, roll);
	}
}

void HDevice::ConvertVideoSettings()
{
	VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)videoMediaType->pbFormat;
	BITMAPINFOHEADER *bmih = GetBitmapInfoHeader(videoMediaType);

	if (bmih) {
		Debug(L"Video media type changed");

		videoConfig.cx = bmih->biWidth;
		videoConfig.cy_abs = labs(bmih->biHeight);
		videoConfig.cy_flip = bmih->biHeight < 0;
		videoConfig.frameInterval = vih->AvgTimePerFrame;

		bool same = videoConfig.internalFormat == videoConfig.format;
		GetMediaTypeVFormat(videoMediaType,
				    videoConfig.internalFormat);

		if (same)
			videoConfig.format = videoConfig.internalFormat;
	}
}

void HDevice::ConvertAudioSettings()
{
	WAVEFORMATEX *wfex =
		reinterpret_cast<WAVEFORMATEX *>(audioMediaType->pbFormat);

	Debug(L"Audio media type changed");

	audioConfig.sampleRate = wfex->nSamplesPerSec;
	audioConfig.channels = wfex->nChannels;

	if (wfex->wFormatTag == WAVE_FORMAT_RAW_AAC1)
		audioConfig.format = AudioFormat::AAC;
	else if (wfex->wFormatTag == WAVE_FORMAT_DVM)
		audioConfig.format = AudioFormat::AC3;
	else if (wfex->wFormatTag == WAVE_FORMAT_MPEG)
		audioConfig.format = AudioFormat::MPGA;
	else if (wfex->wBitsPerSample == 16)
		audioConfig.format = AudioFormat::Wave16bit;
	else if (wfex->wBitsPerSample == 32)
		audioConfig.format = AudioFormat::WaveFloat;
	else
		audioConfig.format = AudioFormat::Unknown;
}

#define HD_PVR1_NAME L"Hauppauge HD PVR Capture"

bool HDevice::SetupExceptionVideoCapture(IBaseFilter *filter,
					 VideoConfig &config)
{
	ComPtr<IPin> pin;

	if (GetPinByName(filter, PINDIR_OUTPUT, L"656", &pin))
		return
			SetupEncodedVideoCapture(filter, config, HD_PVR2);

	else if (GetPinByName(filter, PINDIR_OUTPUT, L"TS Out", &pin))
		return SetupEncodedVideoCapture(filter, config, Roxio);

	return false;
}

static bool GetPinMediaType(IPin *pin, MediaType &mt)
{
	ComPtr<IEnumMediaTypes> mediaTypes;

	if (SUCCEEDED(pin->EnumMediaTypes(&mediaTypes))) {
		MediaTypePtr curMT;
		ULONG count = 0;

		while (mediaTypes->Next(1, &curMT, &count) == S_OK) {
			if (curMT->formattype == FORMAT_VideoInfo) {
				mt = curMT;
				return true;
			}
		}
	}

	return false;
}

bool HDevice::SetupVideoCapture(IBaseFilter *filter, VideoConfig &config)
{
	ComPtr<IPin> pin;
	HRESULT hr;
	bool success;

	if (config.name.find(L"C875") != std::string::npos ||
	    config.name.find(L"Prif Streambox") != std::string::npos ||
	    config.name.find(L"C835") != std::string::npos)
		return SetupEncodedVideoCapture(filter, config, AV_LGP);

	else if (config.name.find(L"IT9910") != std::string::npos)
		return SetupEncodedVideoCapture(filter, config,
						HD_PVR_Rocket);

	else if (config.name.find(HD_PVR1_NAME) != std::string::npos)
		return SetupEncodedVideoCapture(filter, config, HD_PVR1);

	rotatableDevice =
		videoConfig.name.find(L"StreamCam") != std::string::npos;

	success = GetFilterPin(filter, MEDIATYPE_Video, PIN_CATEGORY_CAPTURE,
			       PINDIR_OUTPUT, &pin);
	if (!success) {
		if (SetupExceptionVideoCapture(filter, config)) {
			return true;
		} else {
			Error(L"Could not get video pin");
			return false;
		}
	}

	ComQIPtr<IAMStreamConfig> pinConfig(pin);
	if (pinConfig == NULL) {
		Error(L"Could not get IAMStreamConfig for device");
		return false;
	}

	if (config.useDefaultConfig) {
		MediaTypePtr defaultMT;

		hr = pinConfig->GetFormat(&defaultMT);
		if (hr == E_NOTIMPL) {
			if (!GetPinMediaType(pin, videoMediaType)) {
				Error(L"Couldn't get pin media type");
				return false;
			}
		} else if (FAILED(hr)) {
			ErrorHR(L"Could not get default format for video",
				hr);
			return false;
		} else {
			videoMediaType = defaultMT;
		}

		ConvertVideoSettings();

		config.format = config.internalFormat = VideoFormat::Any;
	}

	if (!GetClosestVideoMediaType(filter, config, videoMediaType)) {
		Error(L"Could not get closest video media type");
		return false;
	}

	hr = pinConfig->SetFormat(videoMediaType);
	if (FAILED(hr) && hr != E_NOTIMPL) {
		ErrorHR(L"Could not set video format", hr);
		return false;
	}

	ConvertVideoSettings();

	PinCaptureInfo info;
	info.callback = [this](IMediaSample *s) { Receive(true, s); };
	info.expectedMajorType = videoMediaType->majortype;

	/* attempt to force intermediary filters for these types */
	if (videoConfig.format == VideoFormat::XRGB)
		info.expectedSubType = MEDIASUBTYPE_RGB32;
	else if (videoConfig.format == VideoFormat::ARGB)
		info.expectedSubType = MEDIASUBTYPE_ARGB32;
	else if (videoConfig.format == VideoFormat::RGB24)
		info.expectedSubType = MEDIASUBTYPE_RGB24;
	else if (videoConfig.format == VideoFormat::YVYU)
		info.expectedSubType = MEDIASUBTYPE_YVYU;
	else if (videoConfig.format == VideoFormat::YUY2)
		info.expectedSubType = MEDIASUBTYPE_YUY2;
	else if (videoConfig.format == VideoFormat::UYVY)
		info.expectedSubType = MEDIASUBTYPE_UYVY;
	else
		info.expectedSubType = videoMediaType->subtype;

	videoCapture = new CaptureFilter(info);
	videoFilter = filter;

	graph->AddFilter(videoCapture, L"Video Capture Filter");
	graph->AddFilter(videoFilter, L"Video Filter");
	return true;
}

bool HDevice::SetVideoConfig(VideoConfig *config)
{
	ComPtr<IBaseFilter> filter;

	if (!EnsureInitialized(L"SetVideoConfig") ||
	    !EnsureInactive(L"SetVideoConfig"))
		return false;

	videoMediaType = NULL;
	graph->RemoveFilter(videoFilter);
	graph->RemoveFilter(videoCapture);
	videoFilter.Release();
	videoCapture.Release();

	if (!config)
		return true;

	if (config->name.empty() && config->path.empty()) {
		Error(L"No video device name or path specified");
		return false;
	}

	bool success = GetDeviceFilter(CLSID_VideoInputDeviceCategory,
				       config->name.c_str(),
				       config->path.c_str(), &filter);
	if (!success) {
		Error(L"Video device '%s': %s not found",
		      config->name.c_str(), config->path.c_str());
		return false;
	}

	if (filter == NULL) {
		Error(L"Could not get video filter");
		return false;
	}

	deviceHdrSignal = false;
	reactivatePending = false;

	ComPtr<IKsPropertySet> propertySet = ComQIPtr<IKsPropertySet>(filter);
	if (propertySet) {
		const bool hdr = IsVendorVideoHDR(propertySet);
#ifdef ENABLE_HEVC
		SetVendorVideoFormat(propertySet, hdr);
#endif
		deviceHdrSignal = hdr;
	}

	videoConfig = *config;

	if (!SetupVideoCapture(filter, videoConfig))
		return false;

	*config = videoConfig;
	return true;
}

bool HDevice::SetupExceptionAudioCapture(IPin *pin)
{
	ComPtr<IEnumMediaTypes> enumMediaTypes;
	ULONG count = 0;
	HRESULT hr;
	MediaTypePtr mt;

	hr = pin->EnumMediaTypes(&enumMediaTypes);
	if (FAILED(hr)) {
		WarningHR(L"SetupExceptionAudioCapture: pin->EnumMediaTypes "
			  L"failed",
			  hr);
		return false;
	}

	enumMediaTypes->Reset();

	if (enumMediaTypes->Next(1, &mt, &count) == S_OK &&
	    mt->formattype == FORMAT_WaveFormatEx) {
		audioMediaType = mt;
		return true;
	}

	return false;
}

static bool is24BitAudio(AM_MEDIA_TYPE *mt)
{
	if (mt->formattype == FORMAT_WaveFormatEx) {
		WAVEFORMATEX *wfex = (WAVEFORMATEX *)mt->pbFormat;
		return wfex->wBitsPerSample == 24;
	}

	return false;
}

bool HDevice::SetupAudioCapture(IBaseFilter *filter, AudioConfig &config)
{
	ComPtr<IPin> pin;
	MediaTypePtr defaultMT;
	bool success;
	HRESULT hr;

	success = GetFilterPin(filter, MEDIATYPE_Audio, PIN_CATEGORY_CAPTURE,
			       PINDIR_OUTPUT, &pin);
	if (!success) {
		Error(L"Could not get audio pin");
		return false;
	}

	ComQIPtr<IAMStreamConfig> pinConfig(pin);

	if (config.useDefaultConfig) {
		MediaTypePtr defaultMT;

		if (pinConfig &&
		    SUCCEEDED(pinConfig->GetFormat(&defaultMT))) {
			if (is24BitAudio(defaultMT)) {
				WAVEFORMATEX *wfex =
					(WAVEFORMATEX *)defaultMT->pbFormat;
				config.sampleRate = wfex->nSamplesPerSec;
				config.channels = wfex->nChannels;
				config.format = AudioFormat::Wave16bit;
				config.useDefaultConfig = false;
			} else {
				audioMediaType = defaultMT;
			}
		} else {
			if (!SetupExceptionAudioCapture(pin)) {
				Error(L"Could not get default format for "
				      L"audio pin");
				return false;
			}
		}
	}

	if (!config.useDefaultConfig) {
		if (!GetClosestAudioMediaType(filter, config,
					      audioMediaType)) {
			Error(L"Could not get closest audio media type");
			return false;
		}
	}
	if (!!pinConfig) {
		hr = pinConfig->SetFormat(audioMediaType);
		if (FAILED(hr) && hr != E_NOTIMPL) {
			Error(L"Could not set audio format");
			return false;
		}
	}

	ConvertAudioSettings();

	PinCaptureInfo info;
	info.callback = [this](IMediaSample *s) { Receive(false, s); };
	info.expectedMajorType = audioMediaType->majortype;
	info.expectedSubType = audioMediaType->subtype;

	audioCapture = new CaptureFilter(info);
	audioFilter = filter;
	audioConfig = config;

	graph->AddFilter(audioCapture, L"Audio Capture Filter");
	if (!config.useVideoDevice)
		graph->AddFilter(audioFilter, L"Audio Filter");
	return true;
}

bool HDevice::SetupAudioOutput(IBaseFilter *filter, AudioConfig &config)
{
	ComPtr<IBaseFilter> outputFilter;
	const CLSID *clsID;
	HRESULT hr;

	if (config.mode == AudioMode::WaveOut) {
		clsID = &CLSID_AudioRender;
	} else {
		clsID = &CLSID_DSoundRender;
	}

	hr = CoCreateInstance(*clsID, nullptr, CLSCTX_INPROC_SERVER,
			      IID_IBaseFilter, (void **)&outputFilter);
	if (FAILED(hr)) {
		ErrorHR(L"Failed to create audio sound output filter", hr);
		return false;
	}

	audioFilter = filter;
	audioOutput = std::move(outputFilter);

	graph->AddFilter(audioOutput, L"Audio Output Filter");
	if (!config.useVideoDevice)
		graph->AddFilter(audioFilter, L"Audio Filter");
	return true;
}

bool HDevice::SetAudioConfig(AudioConfig *config)
{
	ComPtr<IBaseFilter> filter;

	if (!EnsureInitialized(L"SetAudioConfig") ||
	    !EnsureInactive(L"SetAudioConfig"))
		return false;

	if (!audioConfig.useVideoDevice)
		graph->RemoveFilter(audioFilter);
	graph->RemoveFilter(audioCapture);
	graph->RemoveFilter(audioOutput);
	audioFilter.Release();
	audioCapture.Release();
	audioOutput.Release();
	audioMediaType = NULL;

	if (!config)
		return true;

	if (!config->useVideoDevice && !config->useSeparateAudioFilter &&
	    config->name.empty() && config->path.empty()) {
		Error(L"No audio device name or path specified");
		return false;
	}

	if (config->useVideoDevice) {
		if (videoFilter == NULL) {
			Error(L"Tried to use video device's built-in audio, "
			      L"but no video device is present");
			return false;
		}
		filter = videoFilter;

	} else if (config->useSeparateAudioFilter) {
		bool success = GetDeviceAudioFilter(videoConfig.path.c_str(),
						    &filter);
		if (!success) {
			Error(L"Corresponding audio device for '%s' not found",
			      videoConfig.path.c_str());
			return false;
		}

	} else {
		bool success = GetDeviceFilter(CLSID_AudioInputDeviceCategory,
					       config->name.c_str(),
					       config->path.c_str(), &filter);
		if (!success) {
			Error(L"Audio device '%s': %s not found",
			      config->name.c_str(), config->path.c_str());
			return false;
		}
	}

	if (filter == NULL)
		return false;

	audioConfig = *config;

	if (config->mode == AudioMode::Capture) {
		if (!SetupAudioCapture(filter, audioConfig))
			return false;

		*config = audioConfig;
		return true;
	}

	return SetupAudioOutput(filter, audioConfig);
}

bool HDevice::CreateGraph()
{
	if (initialized) {
		Warning(L"Graph already created");
		return false;
	}

	if (!CreateFilterGraph(&graph, &builder, &control))
		return false;

	initialized = true;
	return true;
}

bool HDevice::FindCrossbar(IBaseFilter *filter, IBaseFilter **crossbar)
{
	ComPtr<IPin> pin;
	REGPINMEDIUM medium;
	HRESULT hr;

	hr = builder->FindInterface(NULL, NULL, filter, IID_IAMCrossbar,
				    (void **)crossbar);
	if (SUCCEEDED(hr))
		return true;

	if (!GetPinByName(filter, PINDIR_INPUT, nullptr, &pin))
		return false;
	if (!GetPinMedium(pin, medium))
		return false;
	if (!GetFilterByMedium(AM_KSCATEGORY_CROSSBAR, medium, crossbar))
		return false;

	graph->AddFilter(*crossbar, L"Crossbar Filter");
	return true;
}

bool HDevice::ConnectPins(const GUID &category, const GUID &type,
			  IBaseFilter *filter, IBaseFilter *capture)
{
	HRESULT hr;
	ComPtr<IBaseFilter> crossbar;
	ComPtr<IPin> filterPin;
	ComPtr<IPin> capturePin;
	bool connectCrossbar = !encodedDevice && type == MEDIATYPE_Video;

	if (!EnsureInitialized(L"HDevice::ConnectPins") ||
	    !EnsureInactive(L"HDevice::ConnectPins"))
		return false;

	if (connectCrossbar && FindCrossbar(filter, &crossbar)) {
		if (!DirectConnectFilters(graph, crossbar, filter)) {
			Warning(L"HDevice::ConnectPins: Failed to connect "
				L"crossbar");
			return false;
		}
	}

	if
	   (!GetFilterPin(filter, type, category, PINDIR_OUTPUT,
			  &filterPin)) {
		Error(L"HDevice::ConnectPins: Failed to find pin");
		return false;
	}

	if (!GetPinByName(capture, PINDIR_INPUT, nullptr, &capturePin)) {
		Error(L"HDevice::ConnectPins: Failed to find capture pin");
		return false;
	}

	hr = graph->ConnectDirect(filterPin, capturePin, nullptr);
	if (FAILED(hr)) {
		WarningHR(L"HDevice::ConnectPins: failed to connect pins",
			  hr);
		return false;
	}

	return true;
}

bool HDevice::RenderFilters(const GUID &category, const GUID &type,
			    IBaseFilter *filter, IBaseFilter *capture)
{
	HRESULT hr;

	if (!EnsureInitialized(L"HDevice::RenderFilters") ||
	    !EnsureInactive(L"HDevice::RenderFilters"))
		return false;

	hr = builder->RenderStream(&category, &type, filter, NULL, capture);
	if (FAILED(hr)) {
		WarningHR(L"HDevice::ConnectFilters: RenderStream failed",
			  hr);
		return false;
	}

	return true;
}

void HDevice::SetAudioBuffering(int bufferingMs)
{
	ComPtr<IPin> pin;
	bool success = GetFilterPin(audioFilter, MEDIATYPE_Audio,
				    PIN_CATEGORY_CAPTURE, PINDIR_OUTPUT,
				    &pin);
	if (!success)
		return;

	ComQIPtr<IAMStreamConfig> config(pin);
	if (!config)
		return;

	ComQIPtr<IAMBufferNegotiation> neg(pin);
	if (!neg)
		return;

	MediaTypePtr mt;
	if (FAILED(config->GetFormat(&mt)))
		return;
	if (mt->formattype != FORMAT_WaveFormatEx)
		return;
	if (mt->cbFormat != sizeof(WAVEFORMATEX))
		return;

	WAVEFORMATEX *wfex = (WAVEFORMATEX *)mt->pbFormat;

	ALLOCATOR_PROPERTIES props;
	props.cBuffers = -1;
	props.cbBuffer = wfex->nAvgBytesPerSec * bufferingMs / 1000;
	props.cbAlign = -1;
	props.cbPrefix = -1;

	HRESULT hr = neg->SuggestAllocatorProperties(&props);
	if (FAILED(hr))
		WarningHR(L"Could not set allocator properties on audio "
			  L"capture pin",
			  hr);
}

bool HDevice::ConnectFilters()
{
	bool success = true;

	if (!EnsureInitialized(L"ConnectFilters") ||
	    !EnsureInactive(L"ConnectFilters"))
		return false;

	if (videoCapture != NULL) {
		/* use hardware tonemapper for narrow format (SDR), not wide
		 * (HDR) */
		const bool enable_tonemapper =
			videoConfig.format != VideoFormat::P010;
		SetVendorTonemapperUsage(videoFilter, enable_tonemapper);

		success = ConnectPins(PIN_CATEGORY_CAPTURE, MEDIATYPE_Video,
				      videoFilter, videoCapture);
		if (!success) {
			success = RenderFilters(PIN_CATEGORY_CAPTURE,
						MEDIATYPE_Video, videoFilter,
						videoCapture);
		}
	}

	if ((audioCapture || audioOutput) && success) {
		IBaseFilter *filter = (audioCapture != nullptr)
					      ? audioCapture.Get()
					      : audioOutput.Get();

		/* Stream engine has a bug where it will break if you try to
		 * set different audio buffering, so don't use audio buffering
		 * if using the stream engine's audio */
		bool streamEngine =
			audioConfig.useVideoDevice &&
			(videoConfig.name.find(L"Stream Engine") !=
			 std::string::npos);

		if (!streamEngine && audioCapture != nullptr)
			SetAudioBuffering(audioConfig.buffer
						  ? audioConfig.buffer
						  : 10);

		success = ConnectPins(PIN_CATEGORY_CAPTURE, MEDIATYPE_Audio,
				      audioFilter, filter);
		if (!success) {
			success = RenderFilters(PIN_CATEGORY_CAPTURE,
						MEDIATYPE_Audio, audioFilter,
						filter);
		}
	}

	if (success)
		LogFilters(graph);

	return success;
}

void HDevice::DisconnectFilters()
{
	ComPtr<IEnumFilters> filterEnum;
	HRESULT hr;

	if (!graph)
		return;

	hr = graph->EnumFilters(&filterEnum);
	if (FAILED(hr))
		return;

	ComPtr<IBaseFilter> filter;
	while (filterEnum->Next(1, &filter, nullptr) == S_OK) {
		graph->RemoveFilter(filter);
		filterEnum->Reset();
	}
}

Result HDevice::Start()
{
	HRESULT hr;

	if (!EnsureInitialized(L"Start") || !EnsureInactive(L"Start"))
		return Result::Error;

	if (!!rocketEncoder)
		Sleep(ROCKET_WAIT_TIME_MS);

	hr = control->Run();
	if (FAILED(hr)) {
		if (hr == (HRESULT)0x8007001F) {
			WarningHR(L"Run failed, device already in use", hr);
			return Result::InUse;
		} else {
			WarningHR(L"Run failed", hr);
			return Result::Error;
		}
	}

	active = true;
	return Result::Success;
}

void HDevice::Stop()
{
	if (active) {
		control->Stop();
		active = false;
	}
}
} /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/device-vendor.cpp

/*
 * Copyright (C)
 * 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#include <windows.h>
#include "../external/capture-device-support/SampleCode/DriverInterface.h"
#include "device.hpp"
#include "log.hpp"
#include <ksproxy.h>
#include <vidcap.h>

namespace DShow {

bool IsVendorVideoHDR(IKsPropertySet *propertySet)
{
	EGAVDeviceProperties properties(
		propertySet, EGAVDeviceProperties::DeviceType::GC4K60SPlus);
	bool isHDR = false;
	return SUCCEEDED(properties.IsVideoHDR(isHDR)) ? isHDR : false;
}

void SetVendorVideoFormat(IKsPropertySet *propertySet, bool hevcTrueAvcFalse)
{
	EGAVDeviceProperties properties(
		propertySet, EGAVDeviceProperties::DeviceType::GC4K60SPlus);
	const HRESULT hr = properties.SetEncoderType(hevcTrueAvcFalse);
	if (SUCCEEDED(hr)) {
		Info(L"Elgato GC4K60SPlus encoder type=%ls",
		     hevcTrueAvcFalse ? L"HEVC" : L"AVC");
	}
}

static void SetTonemapperAvermedia(IKsPropertySet *propertySet, bool enable)
{
	typedef struct _KSPROPERTY_AVER_HW_HDR2SDR {
		KSPROPERTY Property;
		DWORD Enable;
	} KSPROPERTY_AVER_HW_HDR2SDR, *PKSPROPERTY_AVER_HW_HDR2SDR;

	static constexpr GUID KSPROPSETID_AVER_HDR_PROPERTY = {
		0x8A80D56F,
		0xFAC5,
		0x4692,
		{0xA4, 0x16, 0xCF, 0x20, 0xD4, 0xA1, 0x8F, 0x47},
	};

	KSPROPERTY_AVER_HW_HDR2SDR data{};
	data.Enable = enable;

	const HRESULT hr = propertySet->Set(
		KSPROPSETID_AVER_HDR_PROPERTY, 2, &data.Enable,
		sizeof(data) - sizeof(data.Property), &data, sizeof(data));
	if (SUCCEEDED(hr))
		Info(L"AVerMedia tonemapper enable=%lu", data.Enable);
}

/* Special thanks to AVerMedia development team */
static bool FindExtensionUnitNodeID(DWORD *pnNode, IBaseFilter *spCapture)
{
	bool bFindNode = false;

	if (!spCapture)
		return bFindNode;

	ComPtr<IKsTopologyInfo> spKsTopologyInfo = NULL;
	HRESULT hr =
		spCapture->QueryInterface(IID_PPV_ARGS(&spKsTopologyInfo));
	if (spKsTopologyInfo == NULL || FAILED(hr))
		return bFindNode;

	DWORD nNodeNum = 0;
	hr = spKsTopologyInfo->get_NumNodes(&nNodeNum);
	if (FAILED(hr) || nNodeNum <= 0)
		return bFindNode;

	GUID guidNodeType;
	for (DWORD i = 0; i < nNodeNum; i++) {
		spKsTopologyInfo->get_NodeType(i, &guidNodeType);
		if (IsEqualGUID(guidNodeType, KSNODETYPE_DEV_SPECIFIC)) {
			*pnNode = i;
			bFindNode = true;
		}
	}

	return bFindNode;
}

static bool SetTonemapperAvermedia2(IBaseFilter *filter, bool enable)
{
	static constexpr GUID GUID_GC553 = {
		0xC835261B,
		0xFF1C,
		0x4C9A,
		{0xB2, 0xF7, 0x93, 0xC9, 0x1F, 0xCF, 0xBE, 0x77},
	};
	static constexpr int nId = 11;

	ComPtr<IKsControl> spKsControl;
	HRESULT hr = filter->QueryInterface(IID_PPV_ARGS(&spKsControl));
	if (spKsControl == NULL || FAILED(hr))
		return false;

	KSP_NODE ExtensionProp{};
	if (!FindExtensionUnitNodeID(&ExtensionProp.NodeId, filter))
		return false;

	ExtensionProp.Property.Set = GUID_GC553;
	ExtensionProp.Property.Id = nId;
	ExtensionProp.Property.Flags = KSPROPERTY_TYPE_GET |
				       KSPROPERTY_TYPE_TOPOLOGY;

	char pData[20];
	ULONG ulBytesReturned;
	hr = spKsControl->KsProperty(&ExtensionProp.Property,
				     sizeof(ExtensionProp), pData,
				     sizeof(pData), &ulBytesReturned);
	if (FAILED(hr) || (ulBytesReturned < 18))
		return false;

	pData[15] = 0x02;
	pData[17] = enable;

	ExtensionProp.Property.Flags = KSPROPERTY_TYPE_SET |
				       KSPROPERTY_TYPE_TOPOLOGY;
	hr = spKsControl->KsProperty(&ExtensionProp.Property,
				     sizeof(ExtensionProp), pData,
				     sizeof(pData), &ulBytesReturned);

	const bool succeeded = SUCCEEDED(hr);
	if (succeeded)
		Info(L"AVerMedia GC553 tonemapper enable=%d", (int)enable);
	return succeeded;
}

static void SetTonemapperElgato(IKsPropertySet *propertySet, bool enable)
{
	EGAVDeviceProperties properties(
		propertySet, EGAVDeviceProperties::DeviceType::GC4K60ProMK2);
	const HRESULT hr = properties.SetHDRTonemapping(enable);
	if (SUCCEEDED(hr)) {
		Info(L"Elgato GC4K60ProMK2 tonemapper enable=%d",
		     (int)enable);
	} else {
		for (const EGAVDeviceID &deviceID :
		     GetElgatoUVCDeviceIDs()) {
			std::shared_ptr<EGAVHIDInterface> hid =
				CreateEGAVHIDInterface();
			if (hid->InitHIDInterface(deviceID).Succeeded()) {
				ElgatoUVCDevice device(
					hid, IsNewDeviceType(deviceID));
				device.SetHDRTonemappingEnabled(enable);
				Info(L"Elgato UVC device (PID = 0x%04X) tonemapper enable=%d",
				     deviceID.productID, (int)enable);
				hid->DeinitHIDInterface();
			}
		}
	}
}

void SetVendorTonemapperUsage(IBaseFilter *filter, bool enable)
{
	if (filter) {
		ComPtr<IKsPropertySet> propertySet =
			ComQIPtr<IKsPropertySet>(filter);
		if (propertySet) {
			SetTonemapperAvermedia(propertySet, enable);
			SetTonemapperAvermedia2(filter, enable);
			SetTonemapperElgato(propertySet, enable);
		}
	}
}
} /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/encoder.cpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option)
any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "encoder.hpp" #include "log.hpp" #include "avermedia-encode.h" namespace DShow { HVideoEncoder::HVideoEncoder() { initialized = CreateFilterGraph(&graph, &builder, &control); } HVideoEncoder::~HVideoEncoder() { ComPtr filterEnum; IBaseFilter *filter; HRESULT hr; if (!initialized) return; if (active) control->Stop(); /* seems like you have to manually release the entire graph otherwise * the encoder device might not end up releasing properly */ hr = graph->EnumFilters(&filterEnum); if (hr == S_OK) { while (filterEnum->Next(1, &filter, nullptr) == S_OK) { graph->RemoveFilter(filter); filterEnum->Reset(); filter->Release(); } } } bool HVideoEncoder::ConnectFilters() { ComPtr deviceIn; ComPtr deviceOut; ComPtr encoderIn; ComPtr encoderOut; bool success; HRESULT hr; success = GetPinByName(device, PINDIR_INPUT, L"YUV In", &deviceIn); if (!success) { Warning(L"Failed to get YUV In pin"); return false; } success = GetPinByName(device, PINDIR_OUTPUT, L"Virtual Video Out", &deviceOut); if (!success) { Warning(L"Failed to get Virtual Video Out pin"); return false; } success = GetPinByName(encoder, PINDIR_INPUT, L"Virtual Video In", &encoderIn); if (!success) { Warning(L"Failed to get encoder input pin"); return false; } success = GetPinByName(encoder, PINDIR_OUTPUT, nullptr, &encoderOut); if (!success) { Warning(L"Failed to get encoder output pin"); return false; } hr = graph->ConnectDirect(output->GetPin(), deviceIn, nullptr); if (FAILED(hr)) { WarningHR(L"Failed to connect output to 
device", hr); return false; } hr = graph->ConnectDirect(deviceOut, encoderIn, nullptr); if (FAILED(hr)) { WarningHR(L"Failed to connect device to encoder", hr); return false; } hr = graph->ConnectDirect(encoderOut, capture->GetPin(), nullptr); if (FAILED(hr)) { WarningHR(L"Failed to connect encoder to capture", hr); return false; } return true; } static bool GetPinFirstMediaType(IPin *pin, AM_MEDIA_TYPE **mt) { ComPtr mediaEnum; HRESULT hr; ULONG fetched; hr = pin->EnumMediaTypes(&mediaEnum); if (FAILED(hr)) { Warning(L"Failed to get pin media type enum"); return false; } if (mediaEnum->Next(1, mt, &fetched) != S_OK) { Warning(L"Failed to get pin media type"); return false; } return true; } bool HVideoEncoder::SetupCrossbar() { ComPtr crossbar; ComPtr pin; REGPINMEDIUM medium; /* C353 has no crossbar */ if (config.name.find(L"C353") != std::string::npos) return true; if (!GetPinByName(device, PINDIR_INPUT, L"Analog Video In", &pin)) { Warning(L"Failed to get Analog Video In pin"); return false; } if (!GetPinMedium(pin, medium)) { Warning(L"Failed to get Analog Video In pin medium"); return false; } if (!GetFilterByMedium(AM_KSCATEGORY_CROSSBAR, medium, &crossbar)) { Warning(L"Failed to get crossbar filter"); return false; } graph->AddFilter(crossbar, L"Crossbar Filter"); if (!DirectConnectFilters(graph, crossbar, device)) { Warning(L"Failed to connect crossbar to device"); return false; } return true; } bool HVideoEncoder::SetupEncoder(IBaseFilter *filter) { ComPtr deviceFilter; ComPtr inputPin; ComPtr outputPin; REGPINMEDIUM medium; MediaTypePtr mtRaw; MediaTypePtr mtEncoded; if (!GetPinByName(filter, PINDIR_INPUT, nullptr, &inputPin)) { Warning(L"Could not get encoder input pin"); return false; } if (!GetPinByName(filter, PINDIR_OUTPUT, nullptr, &outputPin)) { Warning(L"Could not get encoder output pin"); return false; } if (!GetPinMedium(inputPin, medium)) { Warning(L"Could not get input pin medium"); return false; } inputPin.Release(); if 
(!GetFilterByMedium(CLSID_VideoInputDeviceCategory, medium, &deviceFilter)) {
	Warning(L"Could not get device filter from medium");
	return false;
}
if (!GetPinByName(deviceFilter, PINDIR_INPUT, L"YUV In", &inputPin)) {
	Warning(L"Could not get device YUV pin");
	return false;
}
if (!GetPinFirstMediaType(inputPin, &mtRaw)) {
	Warning(L"Could not get YUV pin media type");
	return false;
}
if (!GetPinFirstMediaType(outputPin, &mtEncoded)) {
	Warning(L"Could not get encoder output pin media type");
	return false;
}

PinCaptureInfo captureInfo;
captureInfo.callback = [this](IMediaSample *s) { Receive(s); };
captureInfo.expectedMajorType = mtEncoded->majortype;
captureInfo.expectedSubType = mtEncoded->subtype;

long long frameTime;
frameTime = config.fpsDenominator;
frameTime *= 10000000;
frameTime /= config.fpsNumerator;

encoder = filter;
device = std::move(deviceFilter);
capture = new CaptureFilter(captureInfo);
output = new OutputFilter(VideoFormat::YV12, config.cx, config.cy, frameTime);

graph->AddFilter(output, nullptr);
graph->AddFilter(device, L"Device Filter");
graph->AddFilter(encoder, L"Encoder Filter");
graph->AddFilter(capture, nullptr);
return true;
}

static inline void Clamp(ULONG &val, ULONG minVal, ULONG maxVal)
{
	if (val < minVal)
		val = minVal;
	else if (val > maxVal)
		val = maxVal;
}

HRESULT SetAVMEncoderSetting(IKsPropertySet *propertySet, ULONG setting,
			     ULONG param1, ULONG param2)
{
	/* clamp before filling in the parameter block; clamping after the
	 * values have been copied would have no effect */
	if (setting == AVER_PARAMETER_ENCODE_FRAME_RATE) {
		Clamp(param1, 15, 60);
	} else if (setting == AVER_PARAMETER_ENCODE_BIT_RATE) {
		Clamp(param1, 1000, 60000);
	} else if (setting == AVER_PARAMETER_CURRENT_RESOLUTION) {
		Clamp(param1, 1, 30);
	}

	AVER_PARAMETERS params = {};
	params.ulIndex = setting;
	params.ulParam1 = param1;
	params.ulParam2 = param2;

	return propertySet->Set(AVER_HW_ENCODE_PROPERTY,
				PROPERTY_HW_ENCODE_PARAMETER, &params,
				sizeof(params), &params, sizeof(params));
}

bool SetAvermediaEncoderConfig(IBaseFilter *encoder, VideoEncoderConfig &config)
{
	HRESULT hr;
	ComQIPtr<IKsPropertySet>
propertySet(encoder); if (!propertySet) { Warning(L"Could not get IKsPropertySet for encoder"); return false; } double fps = double(config.fpsNumerator) / double(config.fpsDenominator); hr = SetAVMEncoderSetting(propertySet, AVER_PARAMETER_ENCODE_FRAME_RATE, ULONG(fps), 0); if (FAILED(hr)) { WarningHR(L"Failed to set Avermedia encoder FPS", hr); return false; } hr = SetAVMEncoderSetting(propertySet, AVER_PARAMETER_ENCODE_BIT_RATE, ULONG(config.bitrate), 0); if (FAILED(hr)) { WarningHR(L"Failed to set Avermedia encoder bitrate", hr); return false; } hr = SetAVMEncoderSetting(propertySet, AVER_PARAMETER_CURRENT_RESOLUTION, ULONG(config.cx), ULONG(config.cy)); if (FAILED(hr)) { WarningHR(L"Failed to set Avermedia encoder current res", hr); return false; } hr = SetAVMEncoderSetting(propertySet, AVER_PARAMETER_ENCODE_RESOLUTION, ULONG(config.cx), ULONG(config.cy)); if (FAILED(hr)) { WarningHR(L"Failed to set Avermedia encoder res", hr); return false; } hr = SetAVMEncoderSetting(propertySet, AVER_PARAMETER_ENCODE_GOP, ULONG(config.keyframeInterval), 0); if (FAILED(hr)) { WarningHR(L"Failed to set Avermedia encoder GOP", hr); return false; } return true; } bool HVideoEncoder::SetConfig(VideoEncoderConfig &config) { ComPtr filter; ComPtr crossbar; if (config.name.empty() && config.path.empty()) { Warning(L"No video encoder name or path specified"); return false; } bool success = GetDeviceFilter(KSCATEGORY_ENCODER, config.name.c_str(), config.path.c_str(), &filter); if (!success) { Warning(L"Video encoder '%s': %s not found", config.name.c_str(), config.path.c_str()); return false; } if (!filter) { Warning(L"Could not get encoder filter"); return false; } this->config = config; if (!SetupEncoder(filter)) { Warning(L"Failed to set up encoder"); return false; } if (!SetupCrossbar()) { Warning(L"Failed to set up crossbar"); return false; } if (!SetAvermediaEncoderConfig(device, config)) { Warning(L"Failed to set Avermedia encoder settings"); return false; } if 
(!ConnectFilters()) { Warning(L"Failed to connect encoder filters"); return false; } LogFilters(graph); HRESULT hr = control->Run(); if (FAILED(hr)) { WarningHR(L"Run failed", hr); return false; } active = true; return true; } void HVideoEncoder::Receive(IMediaSample *s) { BYTE *data; size_t size; if (FAILED(s->GetPointer(&data))) return; size = (size_t)s->GetActualDataLength(); if (!size) return; packetMutex.lock(); packets.emplace_back(data, size); packetMutex.unlock(); } bool HVideoEncoder::Encode(unsigned char *data[DSHOW_MAX_PLANES], size_t linesize[DSHOW_MAX_PLANES], long long timestampStart, long long timestampEnd, EncoderPacket &packet, bool &new_packet) { new_packet = false; if (!active) return false; output->Send(data, linesize, timestampStart, timestampEnd); ptsVals.push_back(timestampStart); packetMutex.lock(); if (packets.size() > 0) { curPacket = move(packets.front()); long long ptsOut = ptsVals[0]; packets.pop_front(); ptsVals.pop_front(); packet.data = curPacket.data.data(); packet.size = curPacket.data.size(); packet.pts = ptsOut; packet.dts = ptsOut; new_packet = true; } packetMutex.unlock(); return true; } }; obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-device-defs.hpp000644 001751 001751 00000004415 15153330240 030466 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #pragma once namespace DShow { #define COMMON_ENCODED_CX 720 #define COMMON_ENCODED_CY 480 #define COMMON_ENCODED_INTERVAL (10010000000LL / 60000LL) #define COMMON_ENCODED_VFORMAT VideoFormat::H264 #define COMMON_ENCODED_SAMPLERATE 48000 static const EncodedDevice HD_PVR1 = {COMMON_ENCODED_VFORMAT, 0x1011UL, COMMON_ENCODED_CX, COMMON_ENCODED_CY, COMMON_ENCODED_INTERVAL, AudioFormat::AC3, 0x1100UL, COMMON_ENCODED_SAMPLERATE}; static const EncodedDevice HD_PVR2 = {COMMON_ENCODED_VFORMAT, 0x1011UL, COMMON_ENCODED_CX, COMMON_ENCODED_CY, COMMON_ENCODED_INTERVAL, AudioFormat::AAC, 0x1100UL, COMMON_ENCODED_SAMPLERATE}; static const EncodedDevice Roxio = { COMMON_ENCODED_VFORMAT, 0x1011UL, COMMON_ENCODED_CX, COMMON_ENCODED_CY, COMMON_ENCODED_INTERVAL, AudioFormat::AAC, 0x010FUL, COMMON_ENCODED_SAMPLERATE, }; static const EncodedDevice HD_PVR_Rocket = {COMMON_ENCODED_VFORMAT, 0x07D1UL, COMMON_ENCODED_CX, COMMON_ENCODED_CY, COMMON_ENCODED_INTERVAL, AudioFormat::AAC, 0x07D2UL, COMMON_ENCODED_SAMPLERATE}; static const EncodedDevice AV_LGP = {COMMON_ENCODED_VFORMAT, 68, COMMON_ENCODED_CX, COMMON_ENCODED_CY, COMMON_ENCODED_INTERVAL, AudioFormat::AAC, 69, COMMON_ENCODED_SAMPLERATE}; }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/ComPtr.hpp000644 001751 001751 00000005541 15153330240 026373 0ustar00runnerrunner000000 000000 /* * Copyright (c) 2023 Lain Bailey * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. 
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

#pragma once

/* Oh no I have my own com pointer class, the world is ending, how dare you
 * write your own! */

template<typename T> class ComPtr {
protected:
	T *ptr;

	inline void Kill()
	{
		if (ptr)
			ptr->Release();
	}

	inline void Replace(T *p)
	{
		if (ptr != p) {
			if (p)
				p->AddRef();
			if (ptr)
				ptr->Release();
			ptr = p;
		}
	}

public:
	inline ComPtr() : ptr(nullptr) {}
	inline ComPtr(T *p) : ptr(p)
	{
		if (ptr)
			ptr->AddRef();
	}
	inline ComPtr(const ComPtr &c) : ptr(c.ptr)
	{
		if (ptr)
			ptr->AddRef();
	}
	inline ComPtr(ComPtr &&c) noexcept : ptr(c.ptr) { c.ptr = nullptr; }
	inline ~ComPtr() { Kill(); }

	inline void Clear()
	{
		if (ptr) {
			ptr->Release();
			ptr = nullptr;
		}
	}

	inline ComPtr &operator=(T *p)
	{
		Replace(p);
		return *this;
	}
	inline ComPtr &operator=(const ComPtr &c)
	{
		Replace(c.ptr);
		return *this;
	}
	inline ComPtr &operator=(ComPtr &&c) noexcept
	{
		if (&ptr != &c.ptr) {
			Kill();
			ptr = c.ptr;
			c.ptr = nullptr;
		}
		return *this;
	}

	inline T *Detach()
	{
		T *out = ptr;
		ptr = nullptr;
		return out;
	}
	inline void CopyTo(T **out)
	{
		if (out) {
			if (ptr)
				ptr->AddRef();
			*out = ptr;
		}
	}
	inline ULONG Release()
	{
		ULONG ref;
		if (!ptr)
			return 0;
		ref = ptr->Release();
		ptr = nullptr;
		return ref;
	}

	inline T **Assign()
	{
		Clear();
		return &ptr;
	}
	inline void Set(T *p)
	{
		Kill();
		ptr = p;
	}

	inline T *Get() const { return ptr; }
	inline T **operator&() { return Assign(); }

	inline operator T *() const { return ptr; }
	inline T *operator->() const { return ptr; }

	inline bool operator==(T *p) const { return ptr == p; }
	inline bool
operator!=(T *p) const { return ptr != p; }
	inline bool operator!() const { return !ptr; }
};

template<typename T> class ComQIPtr : public ComPtr<T> {
public:
	inline ComQIPtr(IUnknown *unk)
	{
		this->ptr = nullptr;
		unk->QueryInterface(__uuidof(T), (void **)&this->ptr);
	}

	inline ComPtr<T> &operator=(IUnknown *unk)
	{
		ComPtr<T>::Clear();
		unk->QueryInterface(__uuidof(T), (void **)&this->ptr);
		return *this;
	}
};
obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-base.hpp000644 001751 001751 00000004634 15153330240 027225 0ustar00runnerrunner000000 000000 /*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #pragma once #define WIN32_MEAN_AND_LEAN #define __STREAMS__ #include #include #include #include #include #include #include #include "ComPtr.hpp" #include "CoTaskMemPtr.hpp" #include using namespace std; #define DSHOW_UNUSED(param) (void)param; namespace DShow { bool CreateFilterGraph(IGraphBuilder **graph, ICaptureGraphBuilder2 **builder, IMediaControl **control); void LogFilters(IGraphBuilder *graph); bool GetDeviceFilter(const IID &type, const wchar_t *name, const wchar_t *path, IBaseFilter **filter); bool GetFilterPin(IBaseFilter *filter, const GUID &type, const GUID &category, PIN_DIRECTION dir, IPin **pin); bool GetPinByName(IBaseFilter *filter, PIN_DIRECTION dir, const wchar_t *name, IPin **pin); bool GetPinByMedium(IBaseFilter *filter, REGPINMEDIUM &medium, IPin **pin); bool GetFilterByMedium(const CLSID &id, REGPINMEDIUM &medium, IBaseFilter **filter); bool GetPinMedium(IPin *pin, REGPINMEDIUM &medium); bool DirectConnectFilters(IFilterGraph *graph, IBaseFilter *filterOut, IBaseFilter *filterIn); /** * This maps a created demuxer pin to a packet ID for the mux stream. Note * that this needs to be called after the device filters are connected to the * demux filter. 
*/ HRESULT MapPinToPacketID(IPin *pin, ULONG packetID); wstring ConvertHRToEnglish(HRESULT hr); /** * Get audio filter for the same device as the given video device path */ bool GetDeviceAudioFilter(const wchar_t *videoDevicePath, IBaseFilter **audioCaptureFilter); }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-formats.cpp000644 001751 001751 00000017247 15153330240 027765 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "dshow-formats.hpp" #include "dshow-media-type.hpp" #ifndef __MINGW32__ const GUID MEDIASUBTYPE_RAW_AAC1 = {0x000000FF, 0x0000, 0x0010, {0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}}; const GUID MEDIASUBTYPE_I420 = {0x30323449, 0x0000, 0x0010, {0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}}; const GUID MEDIASUBTYPE_DVM = {0x00002000, 0x0000, 0x0010, {0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}}; #endif const GUID MEDIASUBTYPE_Y800 = {0x30303859, 0x0000, 0x0010, {0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}}; #ifdef ENABLE_HEVC const GUID MEDIASUBTYPE_HEVC = {0x43564548, 0x0000, 0x0010, {0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}}; #endif namespace DShow { DWORD VFormatToFourCC(VideoFormat format) { switch (format) { /* raw formats */ case 
VideoFormat::ARGB: return MAKEFOURCC('A', 'R', 'G', 'B'); case VideoFormat::XRGB: return MAKEFOURCC('R', 'G', 'B', '4'); /* planar YUV formats */ case VideoFormat::I420: return MAKEFOURCC('I', '4', '2', '0'); case VideoFormat::NV12: return MAKEFOURCC('N', 'V', '1', '2'); case VideoFormat::YV12: return MAKEFOURCC('Y', 'V', '1', '2'); case VideoFormat::Y800: return MAKEFOURCC('Y', '8', '0', '0'); case VideoFormat::P010: return MAKEFOURCC('P', '0', '1', '0'); /* packed YUV formats */ case VideoFormat::YVYU: return MAKEFOURCC('Y', 'V', 'Y', 'U'); case VideoFormat::YUY2: return MAKEFOURCC('Y', 'U', 'Y', '2'); case VideoFormat::UYVY: return MAKEFOURCC('U', 'Y', 'V', 'Y'); case VideoFormat::HDYC: return MAKEFOURCC('H', 'D', 'Y', 'C'); /* encoded formats */ case VideoFormat::MJPEG: return MAKEFOURCC('M', 'J', 'P', 'G'); case VideoFormat::H264: return MAKEFOURCC('H', '2', '6', '4'); #ifdef ENABLE_HEVC case VideoFormat::HEVC: return MAKEFOURCC('H', 'E', 'V', 'C'); #endif default: return 0; } } GUID VFormatToSubType(VideoFormat format) { switch (format) { /* raw formats */ case VideoFormat::ARGB: return MEDIASUBTYPE_ARGB32; case VideoFormat::XRGB: return MEDIASUBTYPE_RGB32; /* planar YUV formats */ case VideoFormat::I420: return MEDIASUBTYPE_I420; case VideoFormat::NV12: return MEDIASUBTYPE_NV12; case VideoFormat::YV12: return MEDIASUBTYPE_YV12; case VideoFormat::Y800: return MEDIASUBTYPE_Y800; case VideoFormat::P010: return MEDIASUBTYPE_P010; /* packed YUV formats */ case VideoFormat::YVYU: return MEDIASUBTYPE_YVYU; case VideoFormat::YUY2: return MEDIASUBTYPE_YUY2; case VideoFormat::UYVY: return MEDIASUBTYPE_UYVY; /* encoded formats */ case VideoFormat::MJPEG: return MEDIASUBTYPE_MJPG; case VideoFormat::H264: return MEDIASUBTYPE_H264; #ifdef ENABLE_HEVC case VideoFormat::HEVC: return MEDIASUBTYPE_HEVC; #endif default: return GUID(); } } WORD VFormatBits(VideoFormat format) { switch (format) { /* raw formats */ case VideoFormat::ARGB: case VideoFormat::XRGB: return 32; /* 
planar YUV formats */ case VideoFormat::I420: case VideoFormat::NV12: case VideoFormat::YV12: return 12; case VideoFormat::Y800: return 8; /* packed YUV formats */ case VideoFormat::YVYU: case VideoFormat::YUY2: case VideoFormat::UYVY: return 16; default: return 0; } } WORD VFormatPlanes(VideoFormat format) { switch (format) { /* raw formats */ case VideoFormat::ARGB: case VideoFormat::XRGB: return 1; /* planar YUV formats */ case VideoFormat::I420: return 3; case VideoFormat::NV12: case VideoFormat::YV12: return 2; case VideoFormat::Y800: return 1; /* packed YUV formats */ case VideoFormat::YVYU: case VideoFormat::YUY2: case VideoFormat::UYVY: return 1; default: return 0; } } static bool GetFourCCVFormat(DWORD fourCC, VideoFormat &format) { switch (fourCC) { /* raw formats */ case MAKEFOURCC('R', 'G', 'B', '2'): format = VideoFormat::XRGB; break; case MAKEFOURCC('R', 'G', 'B', '4'): format = VideoFormat::XRGB; break; case MAKEFOURCC('A', 'R', 'G', 'B'): format = VideoFormat::ARGB; break; /* planar YUV formats */ case MAKEFOURCC('I', '4', '2', '0'): case MAKEFOURCC('I', 'Y', 'U', 'V'): format = VideoFormat::I420; break; case MAKEFOURCC('Y', 'V', '1', '2'): format = VideoFormat::YV12; break; case MAKEFOURCC('N', 'V', '1', '2'): format = VideoFormat::NV12; break; case MAKEFOURCC('Y', '8', '0', '0'): format = VideoFormat::Y800; break; case MAKEFOURCC('P', '0', '1', '0'): format = VideoFormat::P010; break; /* packed YUV formats */ case MAKEFOURCC('Y', 'V', 'Y', 'U'): format = VideoFormat::YVYU; break; case MAKEFOURCC('Y', 'U', 'Y', '2'): format = VideoFormat::YUY2; break; case MAKEFOURCC('U', 'Y', 'V', 'Y'): format = VideoFormat::UYVY; break; case MAKEFOURCC('H', 'D', 'Y', 'C'): format = VideoFormat::HDYC; break; /* compressed formats */ case MAKEFOURCC('H', '2', '6', '4'): format = VideoFormat::H264; break; #ifdef ENABLE_HEVC case MAKEFOURCC('H', 'E', 'V', 'C'): format = VideoFormat::HEVC; break; #endif /* compressed formats that can automatically create intermediary 
* filters for decompression */ case MAKEFOURCC('M', 'J', 'P', 'G'): format = VideoFormat::MJPEG; break; default: return false; } return true; } bool GetMediaTypeVFormat(const AM_MEDIA_TYPE &mt, VideoFormat &format) { if (mt.majortype != MEDIATYPE_Video) return false; const BITMAPINFOHEADER *bmih = GetBitmapInfoHeader(mt); format = VideoFormat::Unknown; /* raw formats */ if (mt.subtype == MEDIASUBTYPE_RGB24) format = VideoFormat::XRGB; else if (mt.subtype == MEDIASUBTYPE_RGB32) format = VideoFormat::XRGB; else if (mt.subtype == MEDIASUBTYPE_ARGB32) format = VideoFormat::ARGB; /* planar YUV formats */ else if (mt.subtype == MEDIASUBTYPE_I420) format = VideoFormat::I420; else if (mt.subtype == MEDIASUBTYPE_IYUV) format = VideoFormat::I420; else if (mt.subtype == MEDIASUBTYPE_YV12) format = VideoFormat::YV12; else if (mt.subtype == MEDIASUBTYPE_NV12) format = VideoFormat::NV12; else if (mt.subtype == MEDIASUBTYPE_Y800) format = VideoFormat::Y800; else if (mt.subtype == MEDIASUBTYPE_P010) format = VideoFormat::P010; /* packed YUV formats */ else if (mt.subtype == MEDIASUBTYPE_YVYU) format = VideoFormat::YVYU; else if (mt.subtype == MEDIASUBTYPE_YUY2) format = VideoFormat::YUY2; else if (mt.subtype == MEDIASUBTYPE_UYVY) format = VideoFormat::UYVY; /* compressed formats */ else if (mt.subtype == MEDIASUBTYPE_H264) format = VideoFormat::H264; #ifdef ENABLE_HEVC else if (mt.subtype == MEDIASUBTYPE_HEVC) format = VideoFormat::HEVC; #endif /* compressed formats that can automatically create intermediary * filters for decompression */ else if (mt.subtype == MEDIASUBTYPE_MJPG) format = VideoFormat::MJPEG; /* no valid types, check fourcc value instead */ else return bmih ? 
GetFourCCVFormat(bmih->biCompression, format) : false; return true; } }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-media-type.cpp000644 001751 001751 00000004674 15153330240 030350 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "dshow-media-type.hpp" namespace DShow { HRESULT CopyMediaType(AM_MEDIA_TYPE *pmtTarget, const AM_MEDIA_TYPE *pmtSource) { if (!pmtSource || !pmtTarget) return S_FALSE; *pmtTarget = *pmtSource; if (pmtSource->cbFormat && pmtSource->pbFormat) { pmtTarget->pbFormat = (PBYTE)CoTaskMemAlloc(pmtSource->cbFormat); if (pmtTarget->pbFormat == nullptr) { pmtTarget->cbFormat = 0; return E_OUTOFMEMORY; } else { memcpy(pmtTarget->pbFormat, pmtSource->pbFormat, pmtTarget->cbFormat); } } if (pmtTarget->pUnk != nullptr) pmtTarget->pUnk->AddRef(); return S_OK; } void FreeMediaType(AM_MEDIA_TYPE &mt) { if (mt.cbFormat != 0) { CoTaskMemFree((LPVOID)mt.pbFormat); mt.cbFormat = 0; mt.pbFormat = nullptr; } if (mt.pUnk) { mt.pUnk->Release(); mt.pUnk = nullptr; } } BITMAPINFOHEADER *GetBitmapInfoHeader(AM_MEDIA_TYPE &mt) { if (mt.formattype == FORMAT_VideoInfo) { VIDEOINFOHEADER *vih; vih = reinterpret_cast(mt.pbFormat); return &vih->bmiHeader; } else if (mt.formattype == 
FORMAT_VideoInfo2) { VIDEOINFOHEADER2 *vih; vih = reinterpret_cast(mt.pbFormat); return &vih->bmiHeader; } return NULL; } const BITMAPINFOHEADER *GetBitmapInfoHeader(const AM_MEDIA_TYPE &mt) { if (mt.formattype == FORMAT_VideoInfo) { const VIDEOINFOHEADER *vih; vih = reinterpret_cast(mt.pbFormat); return &vih->bmiHeader; } else if (mt.formattype == FORMAT_VideoInfo2) { const VIDEOINFOHEADER2 *vih; vih = reinterpret_cast(mt.pbFormat); return &vih->bmiHeader; } return NULL; } }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-enum.cpp000644 001751 001751 00000035127 15153330240 027253 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include #include #include "dshow-enum.hpp" #include "dshow-formats.hpp" #include "log.hpp" #undef DEFINE_GUID #define DEFINE_GUID(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8) \ EXTERN_C const GUID DECLSPEC_SELECTANY name = { \ l, w1, w2, {b1, b2, b3, b4, b5, b6, b7, b8}} #include "external/IVideoCaptureFilter.h" namespace DShow { using namespace std; typedef bool (*EnumCapsCallback)(void *param, const AM_MEDIA_TYPE &mt, const BYTE *data); static void EnumElgatoCaps(IPin *pin, EnumCapsCallback callback, void *param) { ComPtr mediaTypes; if (SUCCEEDED(pin->EnumMediaTypes(&mediaTypes))) { MediaTypePtr mt; ULONG count = 0; while (mediaTypes->Next(1, &mt, &count) == S_OK) { if (!callback(param, *mt, nullptr)) break; } } } static bool EnumPinCaps(IPin *pin, EnumCapsCallback callback, void *param) { HRESULT hr; ComQIPtr config(pin); int count, size; if (config == NULL) return false; hr = config->GetNumberOfCapabilities(&count, &size); if (SUCCEEDED(hr)) { vector caps; caps.resize(size); for (int i = 0; i < count; i++) { MediaTypePtr mt; hr = config->GetStreamCaps(i, &mt, caps.data()); if (SUCCEEDED(hr)) if (!callback(param, *mt, caps.data())) break; } } else if (hr == E_NOTIMPL) { EnumElgatoCaps(pin, callback, param); } else { return false; } return true; } /* Note: DEVICE_VideoInfo is not to be confused with Device::VideoInfo */ static bool Get_FORMAT_VideoInfo_Data(VideoInfo &info, const AM_MEDIA_TYPE &mt, const BYTE *data) { const VIDEO_STREAM_CONFIG_CAPS *vscc; const VIDEOINFOHEADER *viHeader; const BITMAPINFOHEADER *bmiHeader; VideoFormat format; vscc = reinterpret_cast(data); viHeader = reinterpret_cast(mt.pbFormat); bmiHeader = &viHeader->bmiHeader; if (!GetMediaTypeVFormat(mt, format)) return false; info.format = format; if (vscc) { 
		info.minInterval = vscc->MinFrameInterval;
		info.maxInterval = vscc->MaxFrameInterval;
		info.minCX = vscc->MinOutputSize.cx;
		info.minCY = vscc->MinOutputSize.cy;
		info.maxCX = vscc->MaxOutputSize.cx;
		info.maxCY = vscc->MaxOutputSize.cy;

		if (!info.minCX || !info.minCY || !info.maxCX || !info.maxCY) {
			info.minCX = info.maxCX = bmiHeader->biWidth;
			info.minCY = info.maxCY = bmiHeader->biHeight;
		}

		info.granularityCX = max(vscc->OutputGranularityX, 1);
		info.granularityCY = max(vscc->OutputGranularityY, 1);
	} else {
		info.minInterval = info.maxInterval = 10010000000LL / 60000LL;
		info.minCX = info.maxCX = bmiHeader->biWidth;
		info.minCY = info.maxCY = bmiHeader->biHeight;
		info.granularityCX = 1;
		info.granularityCY = 1;
	}

	return true;
}

static bool Get_FORMAT_WaveFormatEx_Data(AudioInfo &info,
					 const AM_MEDIA_TYPE &mt,
					 const BYTE *data)
{
	const AUDIO_STREAM_CONFIG_CAPS *ascc;
	const WAVEFORMATEX *wfex;

	ascc = reinterpret_cast<const AUDIO_STREAM_CONFIG_CAPS *>(data);
	wfex = reinterpret_cast<const WAVEFORMATEX *>(mt.pbFormat);

	if (!wfex || !ascc) {
		return false;
	}

	switch (wfex->wBitsPerSample) {
	case 16:
		info.format = AudioFormat::Wave16bit;
		break;
	case 32:
		info.format = AudioFormat::WaveFloat;
		break;
	}

	info.minChannels = ascc->MinimumChannels;
	info.maxChannels = ascc->MaximumChannels;
	info.channelsGranularity = ascc->ChannelsGranularity;
	info.minSampleRate = ascc->MinimumSampleFrequency;
	info.maxSampleRate = ascc->MaximumSampleFrequency;
	info.sampleRateGranularity = ascc->SampleFrequencyGranularity;
	return true;
}

struct ClosestVideoData {
	VideoConfig &config;
	MediaType &mt;
	long long bestVal;
	bool found;

	ClosestVideoData &operator=(ClosestVideoData const &) = delete;
	ClosestVideoData &operator=(ClosestVideoData &&) = delete;

	inline ClosestVideoData(VideoConfig &config, MediaType &mt)
		: config(config), mt(mt), bestVal(0), found(false)
	{
	}
};

static inline void ClampToGranularity(LONG &val, int minVal, int granularity)
{
	val -= ((val - minVal) % granularity);
}

static inline int GetFormatRating(VideoFormat format)
{
	if (format >= VideoFormat::I420 && format < VideoFormat::YVYU)
		return 0;
	else if (format >= VideoFormat::YVYU && format < VideoFormat::MJPEG)
		return 5;
	else if (format == VideoFormat::MJPEG)
		return 10;

	return 15;
}

static bool ClosestVideoMTCallback(ClosestVideoData &data,
				   const AM_MEDIA_TYPE &mt, const BYTE *capData)
{
	VideoInfo info;

	if (mt.formattype == FORMAT_VideoInfo) {
		if (!Get_FORMAT_VideoInfo_Data(info, mt, capData))
			return true;
	} else {
		return true;
	}

	MediaType copiedMT = mt;
	VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)copiedMT->pbFormat;
	BITMAPINFOHEADER *bmih = GetBitmapInfoHeader(copiedMT);

	if (data.config.internalFormat != VideoFormat::Any &&
	    data.config.internalFormat != info.format)
		return true;

	int xVal = 0;
	int yVal = 0;
	int formatVal = 0;
	long long frameVal = 0;

	if (data.config.cx < info.minCX)
		xVal = info.minCX - data.config.cx;
	else if (data.config.cx > info.maxCX)
		xVal = data.config.cx - info.maxCX;

	const int absMinCY = abs(info.minCY);
	const int absMaxCY = abs(info.maxCY);

	if (data.config.cy_abs < absMinCY)
		yVal = absMinCY - data.config.cy_abs;
	else if (data.config.cy_abs > absMaxCY)
		yVal = data.config.cy_abs - absMaxCY;

	const long long frameInterval = data.config.frameInterval;

	if (frameInterval < info.minInterval)
		frameVal = info.minInterval - frameInterval;
	else if (frameInterval > info.maxInterval)
		frameVal = frameInterval - info.maxInterval;

	formatVal = GetFormatRating(info.format);

	long long totalVal = frameVal + yVal + xVal + formatVal;

	if (!data.found || data.bestVal > totalVal) {
		if (xVal == 0) {
			bmih->biWidth = data.config.cx;
			ClampToGranularity(bmih->biWidth, info.minCX,
					   info.granularityCX);
		}
		if (yVal == 0) {
			LONG cy_abs_clamp = data.config.cy_abs;
			ClampToGranularity(cy_abs_clamp, info.minCY,
					   info.granularityCY);
			bmih->biHeight = data.config.cy_flip ? -cy_abs_clamp
							     : cy_abs_clamp;
		}
		if (frameVal == 0) {
			// Close enough. Fixes GV-USB2 29.97 FPS setting.
			if (abs(vih->AvgTimePerFrame - frameInterval) > 1)
				vih->AvgTimePerFrame = frameInterval;
		}

		data.found = true;
		data.bestVal = totalVal;
		data.mt = copiedMT;

		if (totalVal == 0)
			return false;
	}

	return true;
}

bool GetClosestVideoMediaType(IBaseFilter *filter, VideoConfig &config,
			      MediaType &mt)
{
	ComPtr<IPin> pin;
	ClosestVideoData data(config, mt);
	bool success;

	success = GetFilterPin(filter, MEDIATYPE_Video, PIN_CATEGORY_CAPTURE,
			       PINDIR_OUTPUT, &pin);
	if (!success || pin == NULL) {
		Error(L"GetClosestVideoMediaType: Could not get pin");
		return false;
	}

	success = EnumPinCaps(pin, EnumCapsCallback(ClosestVideoMTCallback),
			      &data);
	if (!success) {
		Error(L"GetClosestVideoMediaType: Could not enumerate caps");
		return false;
	}

	return data.found;
}

struct ClosestAudioData {
	AudioConfig &config;
	MediaType &mt;
	int bestVal;
	bool found;

	ClosestAudioData &operator=(ClosestAudioData const &) = delete;
	ClosestAudioData &operator=(ClosestAudioData &&) = delete;

	inline ClosestAudioData(AudioConfig &config, MediaType &mt)
		: config(config), mt(mt), bestVal(0), found(false)
	{
	}
};

static bool ClosestAudioMTCallback(ClosestAudioData &data,
				   const AM_MEDIA_TYPE &mt, const BYTE *capData)
{
	AudioInfo info = {};

	if (mt.formattype == FORMAT_WaveFormatEx) {
		if (!Get_FORMAT_WaveFormatEx_Data(info, mt, capData))
			return false;
	} else {
		return true;
	}

	MediaType copiedMT = mt;
	WAVEFORMATEX *wfex = (WAVEFORMATEX *)copiedMT->pbFormat;

	if (data.config.format != AudioFormat::Any &&
	    data.config.format != info.format)
		return true;

	int sampleRateVal = 0;
	int channelsVal = 0;

	/* values inside [min, max] cost nothing; values outside the range
	 * are penalized by their distance from it */
	if (data.config.sampleRate < info.minSampleRate)
		sampleRateVal = info.minSampleRate - data.config.sampleRate;
	else if (data.config.sampleRate > info.maxSampleRate)
		sampleRateVal = data.config.sampleRate - info.maxSampleRate;

	if (data.config.channels < info.minChannels)
		channelsVal = info.minChannels - data.config.channels;
	else if (data.config.channels > info.maxChannels)
		channelsVal = data.config.channels - info.maxChannels;

	int totalVal = sampleRateVal + channelsVal;

	if (!data.found || data.bestVal > totalVal) {
		if (channelsVal == 0) {
			LONG channels = data.config.channels;
			ClampToGranularity(channels, info.minChannels,
					   info.channelsGranularity);
			wfex->nChannels = (WORD)channels;
			wfex->nBlockAlign =
				wfex->wBitsPerSample * wfex->nChannels / 8;
		}

		if (sampleRateVal == 0) {
			wfex->nSamplesPerSec = data.config.sampleRate;
			ClampToGranularity((LONG &)wfex->nSamplesPerSec,
					   info.minSampleRate,
					   info.sampleRateGranularity);
		}

		wfex->nAvgBytesPerSec =
			wfex->nSamplesPerSec * wfex->nBlockAlign;

		data.mt = copiedMT;
		data.found = true;
		data.bestVal = totalVal;

		if (totalVal == 0)
			return false;
	}

	return true;
}

bool GetClosestAudioMediaType(IBaseFilter *filter, AudioConfig &config,
			      MediaType &mt)
{
	ComPtr<IPin> pin;
	ClosestAudioData data(config, mt);
	bool success;

	success = GetFilterPin(filter, MEDIATYPE_Audio, PIN_CATEGORY_CAPTURE,
			       PINDIR_OUTPUT, &pin);
	if (!success || pin == NULL) {
		Error(L"GetClosestAudioMediaType: Could not get pin");
		return false;
	}

	success = EnumPinCaps(pin, EnumCapsCallback(ClosestAudioMTCallback),
			      &data);
	if (!success) {
		Error(L"GetClosestAudioMediaType: Could not enumerate caps");
		return false;
	}

	return data.found;
}

static bool EnumVideoCap(vector<VideoInfo> &caps, const AM_MEDIA_TYPE &mt,
			 const BYTE *data)
{
	VideoInfo info;

	if (mt.formattype == FORMAT_VideoInfo)
		if (Get_FORMAT_VideoInfo_Data(info, mt, data))
			caps.push_back(info);

	return true;
}

bool EnumVideoCaps(IPin *pin, vector<VideoInfo> &caps)
{
	return EnumPinCaps(pin, EnumCapsCallback(EnumVideoCap), &caps);
}

static bool EnumAudioCap(vector<AudioInfo> &caps, const AM_MEDIA_TYPE &mt,
			 const BYTE *data)
{
	AudioInfo info;
	if (mt.formattype == FORMAT_WaveFormatEx) {
		if (Get_FORMAT_WaveFormatEx_Data(info, mt, data))
			caps.push_back(info);
	}

	return true;
}

bool EnumAudioCaps(IPin *pin, vector<AudioInfo> &caps)
{
	return EnumPinCaps(pin, EnumCapsCallback(EnumAudioCap), &caps);
}

static bool decklinkVideoPresent = false;

static bool EnumDevice(const GUID &type, IMoniker *deviceInfo,
		       EnumDeviceCallback callback, void *param)
{
	ComPtr<IPropertyBag> propertyData;
	ComPtr<IBaseFilter> filter;
	HRESULT hr;

	hr = deviceInfo->BindToStorage(0, 0, IID_IPropertyBag,
				       (void **)&propertyData);
	if (FAILED(hr))
		return true;

	VARIANT deviceName, devicePath;
	deviceName.vt = VT_BSTR;
	devicePath.vt = VT_BSTR;
	devicePath.bstrVal = NULL;

	hr = propertyData->Read(L"FriendlyName", &deviceName, NULL);
	if (FAILED(hr))
		return true;

	/* workaround to a crash in decklink drivers; if no decklink device
	 * is plugged in to the system, it will still try to enumerate the
	 * decklink audio device, but will crash when trying to bind it to
	 * a filter due to a bug in the drivers */
	if (deviceName.bstrVal && type == CLSID_AudioInputDeviceCategory &&
	    wcsstr(deviceName.bstrVal, L"Decklink") != nullptr &&
	    !decklinkVideoPresent) {
		return true;
	}

	propertyData->Read(L"DevicePath", &devicePath, NULL);

	hr = deviceInfo->BindToObject(NULL, 0, IID_IBaseFilter,
				      (void **)&filter);
	if (SUCCEEDED(hr)) {
		if (!callback(param, filter, deviceName.bstrVal,
			      devicePath.bstrVal))
			return false;
	}

	return true;
}

static bool EnumExceptionVideoDevices(EnumDeviceCallback callback, void *param)
{
	ComPtr<IBaseFilter> filter;
	HRESULT hr;

	hr = CoCreateInstance(CLSID_ElgatoVideoCaptureFilter, nullptr,
			      CLSCTX_INPROC_SERVER, IID_IBaseFilter,
			      (void **)&filter);
	if (SUCCEEDED(hr)) {
		if (!callback(param, filter, L"Elgato Game Capture HD",
			      L"__elgato"))
			return false;
	}

	return true;
}

static recursive_mutex enumMutex;

static bool CheckForDLCallback(void *unused, IBaseFilter *filter,
			       const wchar_t *deviceName,
			       const wchar_t *devicePath)
{
	if (wcsstr(deviceName, L"Decklink") != nullptr) {
		decklinkVideoPresent = true;
		return false;
	}

	DSHOW_UNUSED(unused);
	DSHOW_UNUSED(filter);
	DSHOW_UNUSED(devicePath);
	return true;
}

static void CheckForDecklinkVideo()
{
	decklinkVideoPresent = false;
	EnumDevices(CLSID_VideoInputDeviceCategory, CheckForDLCallback,
		    nullptr);
}

bool EnumDevices(const GUID &type, EnumDeviceCallback callback, void *param)
{
	lock_guard<recursive_mutex> lock(enumMutex);
	ComPtr<ICreateDevEnum> deviceEnum;
	ComPtr<IEnumMoniker> enumMoniker;
	ComPtr<IMoniker> deviceInfo;
	HRESULT hr;
	DWORD count = 0;

	if (type == CLSID_AudioInputDeviceCategory) {
		CheckForDecklinkVideo();
	}

	hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL,
			      CLSCTX_INPROC_SERVER, IID_ICreateDevEnum,
			      (void **)&deviceEnum);
	if (FAILED(hr)) {
		WarningHR(L"EnumDevices: Could not create ICreateDevEnum", hr);
		return false;
	}

	hr = deviceEnum->CreateClassEnumerator(type, &enumMoniker, 0);
	if (FAILED(hr)) {
		WarningHR(L"EnumDevices: CreateClassEnumerator failed", hr);
		return false;
	}

	if (hr == S_OK) {
		while (enumMoniker->Next(1, &deviceInfo, &count) == S_OK) {
			if (!EnumDevice(type, deviceInfo, callback, param))
				return true;
		}
	}

	if (type == CLSID_VideoInputDeviceCategory)
		if (!EnumExceptionVideoDevices(callback, param))
			return true;

	return true;
}

}; /* namespace DShow */

obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/log.hpp

/*
 * Copyright (C) 2023 Lain Bailey
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #pragma once #define WIN32_LEAN_AND_MEAN #include "windows.h" namespace DShow { void Error(const wchar_t *format, ...); void Warning(const wchar_t *format, ...); void Info(const wchar_t *format, ...); void Debug(const wchar_t *format, ...); void ErrorHR(const wchar_t *str, HRESULT hr); void WarningHR(const wchar_t *str, HRESULT hr); void InfoHR(const wchar_t *str, HRESULT hr); void DebugHR(const wchar_t *str, HRESULT hr); }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/output-filter.hpp000644 001751 001751 00000016767 15153330240 030026 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #pragma once #include "dshow-base.hpp" #include "dshow-media-type.hpp" #include "../dshowcapture.hpp" namespace DShow { class OutputFilter; class OutputPin : public IPin, public IAMStreamConfig, public IKsPropertySet { friend class OutputEnumMediaTypes; friend class OutputFilter; volatile long refCount; std::vector mtList; MediaType mt; VideoFormat curVFormat; long long curInterval = 0; int curCX = 0; int curCY = 0; bool setSampleMediaType = false; ComPtr connectedPin; OutputFilter *filter; volatile bool flushing = false; ComPtr allocator; ComPtr sample; size_t bufSize; bool IsValidMediaType(const AM_MEDIA_TYPE *pmt) const; bool AllocateBuffers(IPin *target, bool connecting = false); public: OutputPin(OutputFilter *filter); OutputPin(OutputFilter *filter, VideoFormat format, int cx, int cy, long long interval); virtual ~OutputPin(); STDMETHODIMP QueryInterface(REFIID riid, void **ppv); STDMETHODIMP_(ULONG) AddRef(); STDMETHODIMP_(ULONG) Release(); // IPin methods STDMETHODIMP Connect(IPin *pReceivePin, const AM_MEDIA_TYPE *pmt); STDMETHODIMP ReceiveConnection(IPin *connector, const AM_MEDIA_TYPE *pmt); STDMETHODIMP Disconnect(); STDMETHODIMP ConnectedTo(IPin **pPin); STDMETHODIMP ConnectionMediaType(AM_MEDIA_TYPE *pmt); STDMETHODIMP QueryPinInfo(PIN_INFO *pInfo); STDMETHODIMP QueryDirection(PIN_DIRECTION *pPinDir); STDMETHODIMP QueryId(LPWSTR *lpId); STDMETHODIMP QueryAccept(const AM_MEDIA_TYPE *pmt); STDMETHODIMP EnumMediaTypes(IEnumMediaTypes **ppEnum); STDMETHODIMP QueryInternalConnections(IPin **apPin, ULONG *nPin); STDMETHODIMP EndOfStream(); STDMETHODIMP BeginFlush(); STDMETHODIMP EndFlush(); STDMETHODIMP NewSegment(REFERENCE_TIME tStart, REFERENCE_TIME tStop, double dRate); // IAMStreamConfig methods STDMETHODIMP GetFormat(AM_MEDIA_TYPE 
**ppmt) override; STDMETHODIMP GetNumberOfCapabilities(int *piCount, int *piSize) override; STDMETHODIMP GetStreamCaps(int iIndex, AM_MEDIA_TYPE **ppmt, BYTE *pSCC) override; STDMETHODIMP SetFormat(AM_MEDIA_TYPE *pmt) override; // IKsPropertySet methods STDMETHODIMP Set(REFGUID guidPropSet, DWORD dwID, void *pInstanceData, DWORD cbInstanceData, void *pPropData, DWORD cbPropData) override; STDMETHODIMP Get(REFGUID guidPropSet, DWORD dwPropID, void *pInstanceData, DWORD cbInstanceData, void *pPropData, DWORD cbPropData, DWORD *pcbReturned) override; STDMETHODIMP QuerySupported(REFGUID guidPropSet, DWORD dwPropID, DWORD *pTypeSupport) override; // Other methods inline bool ReallocateBuffers() { return !!connectedPin ? AllocateBuffers(connectedPin) : false; } inline VideoFormat GetVideoFormat() const { return curVFormat; } inline int GetCX() const { return curCX; } inline int GetCY() const { return curCY; } inline long long GetInterval() const { return curInterval; } void AddVideoFormat(VideoFormat format, int cx, int cy, long long interval); bool SetVideoFormat(VideoFormat format, int cx, int cy, long long interval); void Send(unsigned char *data[DSHOW_MAX_PLANES], size_t linesize[DSHOW_MAX_PLANES], long long timestampStart, long long timestampEnd); bool LockSampleData(unsigned char **ptr); void UnlockSampleData(long long timestampStart, long long timestampEnd); void Stop(); }; class OutputFilter : public IBaseFilter { friend class OutputPin; volatile long refCount; FILTER_STATE state; IFilterGraph *graph; ComPtr pin; ComPtr misc; protected: ComPtr clock; public: OutputFilter(); OutputFilter(VideoFormat format, int cx, int cy, long long interval); virtual ~OutputFilter(); // IUnknown methods STDMETHODIMP QueryInterface(REFIID riid, void **ppv); STDMETHODIMP_(ULONG) AddRef(); STDMETHODIMP_(ULONG) Release(); // IPersist method STDMETHODIMP GetClassID(CLSID *pClsID); // IMediaFilter methods STDMETHODIMP GetState(DWORD dwMSecs, FILTER_STATE *State); STDMETHODIMP 
SetSyncSource(IReferenceClock *pClock); STDMETHODIMP GetSyncSource(IReferenceClock **pClock); STDMETHODIMP Stop(); STDMETHODIMP Pause(); STDMETHODIMP Run(REFERENCE_TIME tStart); // IBaseFilter methods STDMETHODIMP EnumPins(IEnumPins **ppEnum); STDMETHODIMP FindPin(LPCWSTR Id, IPin **ppPin); STDMETHODIMP QueryFilterInfo(FILTER_INFO *pInfo); STDMETHODIMP JoinFilterGraph(IFilterGraph *pGraph, LPCWSTR pName); STDMETHODIMP QueryVendorInfo(LPWSTR *pVendorInfo); virtual const wchar_t *FilterName() const; inline OutputPin *GetPin() const { return (OutputPin *)pin; } inline bool ReallocateBuffers() { return pin->ReallocateBuffers(); } inline VideoFormat GetVideoFormat() const { return pin->GetVideoFormat(); } inline int GetCX() const { return pin->GetCX(); } inline int GetCY() const { return pin->GetCY(); } inline long long GetInterval() const { return pin->GetInterval(); } inline void AddVideoFormat(VideoFormat format, int cx, int cy, long long interval) { pin->AddVideoFormat(format, cx, cy, interval); } inline bool SetVideoFormat(VideoFormat format, int cx, int cy, long long interval) { return pin->SetVideoFormat(format, cx, cy, interval); } inline void Send(unsigned char *data[DSHOW_MAX_PLANES], size_t linesize[DSHOW_MAX_PLANES], long long timestampStart, long long timestampEnd) { pin->Send(data, linesize, timestampStart, timestampEnd); } inline bool LockSampleData(unsigned char **ptr) { return pin->LockSampleData(ptr); } inline void UnlockSampleData(long long timestampStart, long long timestampEnd) { pin->UnlockSampleData(timestampStart, timestampEnd); } }; class OutputEnumPins : public IEnumPins { volatile long refCount = 1; ComPtr filter; UINT curPin; public: OutputEnumPins(OutputFilter *filter, OutputEnumPins *pEnum); virtual ~OutputEnumPins(); // IUnknown STDMETHODIMP QueryInterface(REFIID riid, void **ppv); STDMETHODIMP_(ULONG) AddRef(); STDMETHODIMP_(ULONG) Release(); // IEnumPins STDMETHODIMP Next(ULONG cPins, IPin **ppPins, ULONG *pcFetched); STDMETHODIMP 
Skip(ULONG cPins); STDMETHODIMP Reset(); STDMETHODIMP Clone(IEnumPins **ppEnum); }; class OutputEnumMediaTypes : public IEnumMediaTypes { volatile long refCount = 1; ComPtr pin; UINT curMT = 0; public: OutputEnumMediaTypes(OutputPin *pin); virtual ~OutputEnumMediaTypes(); // IUnknown STDMETHODIMP QueryInterface(REFIID riid, void **ppv); STDMETHODIMP_(ULONG) AddRef(); STDMETHODIMP_(ULONG) Release(); // IEnumMediaTypes STDMETHODIMP Next(ULONG cMediaTypes, AM_MEDIA_TYPE **ppMediaTypes, ULONG *pcFetched); STDMETHODIMP Skip(ULONG cMediaTypes); STDMETHODIMP Reset(); STDMETHODIMP Clone(IEnumMediaTypes **ppEnum); }; }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-formats.hpp000644 001751 001751 00000002304 15153330240 027756 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #pragma once #include "../dshowcapture.hpp" #include "dshow-base.hpp" #include #include namespace DShow { DWORD VFormatToFourCC(VideoFormat format); WORD VFormatBits(VideoFormat format); WORD VFormatPlanes(VideoFormat format); GUID VFormatToSubType(VideoFormat format); bool GetMediaTypeVFormat(const AM_MEDIA_TYPE &mt, VideoFormat &format); }; /*namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/output-filter.cpp000644 001751 001751 00000050053 15153330240 030003 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
 * USA
 */

#include "output-filter.hpp"
#include "dshow-formats.hpp"
#include "log.hpp"

#include <strsafe.h>

namespace DShow {

#if 0
#define PrintFunc(x) Debug(x)
#else
#define PrintFunc(x)
#endif

#define FILTER_NAME L"Output Filter"
#define VIDEO_PIN_NAME L"Video Output"
#define AUDIO_PIN_NAME L"Audio Output"

OutputPin::OutputPin(OutputFilter *filter_) : refCount(0), filter(filter_) {}

OutputPin::OutputPin(OutputFilter *filter_, VideoFormat format, int cx, int cy,
		     long long interval)
	: OutputPin(filter_)
{
	curCX = cx;
	curCY = cy;
	curInterval = interval;
	AddVideoFormat(format, cx, cy, interval);
	SetVideoFormat(format, cx, cy, interval);
}

OutputPin::~OutputPin() {}

STDMETHODIMP OutputPin::QueryInterface(REFIID riid, void **ppv)
{
	if (riid == IID_IUnknown) {
		AddRef();
		*ppv = this;
	} else if (riid == IID_IPin) {
		AddRef();
		*ppv = (IPin *)this;
	} else if (riid == IID_IAMStreamConfig) {
		AddRef();
		*ppv = (IAMStreamConfig *)this;
		return S_OK;
	} else if (riid == IID_IKsPropertySet) {
		AddRef();
		*ppv = (IKsPropertySet *)this;
		return S_OK;
	} else {
		*ppv = nullptr;
		return E_NOINTERFACE;
	}

	return NOERROR;
}

STDMETHODIMP_(ULONG) OutputPin::AddRef()
{
	return (ULONG)InterlockedIncrement(&refCount);
}

STDMETHODIMP_(ULONG) OutputPin::Release()
{
	long newRefs = InterlockedDecrement(&refCount);
	if (!newRefs) {
		delete this;
		return 0;
	}

	return (ULONG)newRefs;
}

// IPin methods
STDMETHODIMP OutputPin::Connect(IPin *pReceivePin, const AM_MEDIA_TYPE *pmt)
{
	HRESULT hr;

	PrintFunc(L"OutputPin::Connect");

	if (filter->state == State_Running)
		return VFW_E_NOT_STOPPED;
	if (connectedPin)
		return VFW_E_ALREADY_CONNECTED;

	hr = pReceivePin->ReceiveConnection(this, mt);
	if (FAILED(hr)) {
#if 0 /* debug code to test caps on fail
*/ ComPtr enumMT; pReceivePin->EnumMediaTypes(&enumMT); if (enumMT) { MediaTypePtr mt; ULONG count = 0; while (enumMT->Next(1, &mt, &count) == S_OK) { int test = 0; test = 1; } } #endif return E_FAIL; } if (!AllocateBuffers(pReceivePin, true)) { return E_FAIL; } connectedPin = pReceivePin; DSHOW_UNUSED(pmt); return S_OK; } STDMETHODIMP OutputPin::ReceiveConnection(IPin *pConnector, const AM_MEDIA_TYPE *pmt) { PrintFunc(L"OutputPin::ReceiveConnection"); DSHOW_UNUSED(pConnector); DSHOW_UNUSED(pmt); return S_OK; } STDMETHODIMP OutputPin::Disconnect() { PrintFunc(L"OutputPin::Disconnect"); if (!connectedPin) return S_FALSE; if (!!allocator) { allocator->Decommit(); allocator.Clear(); } connectedPin = nullptr; return S_OK; } STDMETHODIMP OutputPin::ConnectedTo(IPin **pPin) { PrintFunc(L"OutputPin::ConnectedTo"); if (!connectedPin) { *pPin = nullptr; return VFW_E_NOT_CONNECTED; } IPin *pin = connectedPin; pin->AddRef(); *pPin = pin; return S_OK; } STDMETHODIMP OutputPin::ConnectionMediaType(AM_MEDIA_TYPE *pmt) { PrintFunc(L"OutputPin::ConnectionMediaType"); if (!connectedPin) return VFW_E_NOT_CONNECTED; return CopyMediaType(pmt, mt); } STDMETHODIMP OutputPin::QueryPinInfo(PIN_INFO *pInfo) { PrintFunc(L"OutputPin::QueryPinInfo"); pInfo->pFilter = filter; if (filter) { IBaseFilter *ptr = filter; ptr->AddRef(); } if (mt->majortype == MEDIATYPE_Video) memcpy(pInfo->achName, VIDEO_PIN_NAME, sizeof(VIDEO_PIN_NAME)); else memcpy(pInfo->achName, AUDIO_PIN_NAME, sizeof(AUDIO_PIN_NAME)); pInfo->dir = PINDIR_OUTPUT; return NOERROR; } STDMETHODIMP OutputPin::QueryDirection(PIN_DIRECTION *pPinDir) { *pPinDir = PINDIR_OUTPUT; return NOERROR; } #define OUTPUT_PIN_NAME L"Output Pin" STDMETHODIMP OutputPin::QueryId(LPWSTR *lpId) { wchar_t *str = (wchar_t *)CoTaskMemAlloc(sizeof(OUTPUT_PIN_NAME)); memcpy(str, OUTPUT_PIN_NAME, sizeof(OUTPUT_PIN_NAME)); *lpId = str; return S_OK; } STDMETHODIMP OutputPin::QueryAccept(const AM_MEDIA_TYPE *) { PrintFunc(L"OutputPin::QueryAccept"); return S_OK; 
}

STDMETHODIMP OutputPin::EnumMediaTypes(IEnumMediaTypes **ppEnum)
{
	PrintFunc(L"OutputPin::EnumMediaTypes");

	*ppEnum = new OutputEnumMediaTypes(this);
	if (!*ppEnum)
		return E_OUTOFMEMORY;

	return NOERROR;
}

STDMETHODIMP OutputPin::QueryInternalConnections(IPin **apPin, ULONG *nPin)
{
	PrintFunc(L"OutputPin::QueryInternalConnections");

	DSHOW_UNUSED(apPin);
	DSHOW_UNUSED(nPin);
	return E_NOTIMPL;
}

STDMETHODIMP OutputPin::EndOfStream()
{
	PrintFunc(L"OutputPin::EndOfStream");
	return S_OK;
}

STDMETHODIMP OutputPin::BeginFlush()
{
	PrintFunc(L"OutputPin::BeginFlush");
	flushing = true;
	return S_OK;
}

STDMETHODIMP OutputPin::EndFlush()
{
	PrintFunc(L"OutputPin::EndFlush");
	flushing = false;
	return S_OK;
}

STDMETHODIMP OutputPin::NewSegment(REFERENCE_TIME, REFERENCE_TIME, double)
{
	PrintFunc(L"OutputPin::NewSegment");
	return S_OK;
}

STDMETHODIMP OutputPin::GetFormat(AM_MEDIA_TYPE **ppmt)
{
	PrintFunc(L"OutputPin::GetFormat");

	if (!ppmt) {
		return E_POINTER;
	}

	*ppmt = mt.Duplicate();
	return S_OK;
}

STDMETHODIMP OutputPin::GetNumberOfCapabilities(int *piCount, int *piSize)
{
	PrintFunc(L"OutputPin::GetNumberOfCapabilities");

	if (!piCount || !piSize) {
		return E_POINTER;
	}

	*piCount = (int)mtList.size();
	*piSize = sizeof(VIDEO_STREAM_CONFIG_CAPS);
	return S_OK;
}

STDMETHODIMP OutputPin::GetStreamCaps(int iIndex, AM_MEDIA_TYPE **ppmt,
				      BYTE *pSCC)
{
	PrintFunc(L"OutputPin::GetStreamCaps");

	int count = (int)mtList.size();

	if (!ppmt || !pSCC) {
		return E_POINTER;
	}
	if (iIndex > (count - 1)) {
		return S_FALSE;
	}
	if (iIndex < 0) {
		return E_INVALIDARG;
	}

	AM_MEDIA_TYPE *pmt = mtList[iIndex].Duplicate();
	VIDEOINFOHEADER *vih =
		reinterpret_cast<VIDEOINFOHEADER *>(pmt->pbFormat);

	VIDEO_STREAM_CONFIG_CAPS caps = {};
	caps.guid = FORMAT_VideoInfo;
	caps.MinFrameInterval = vih->AvgTimePerFrame;
	caps.MaxFrameInterval = vih->AvgTimePerFrame;
	caps.MinOutputSize.cx = vih->bmiHeader.biWidth;
	caps.MinOutputSize.cy = vih->bmiHeader.biHeight;
	caps.MaxOutputSize = caps.MinOutputSize;
	caps.InputSize = caps.MinOutputSize;
caps.MinCroppingSize = caps.MinOutputSize; caps.MaxCroppingSize = caps.MinOutputSize; caps.CropGranularityX = vih->bmiHeader.biWidth; caps.CropGranularityY = vih->bmiHeader.biHeight; caps.MinBitsPerSecond = vih->dwBitRate; caps.MaxBitsPerSecond = caps.MinBitsPerSecond; *ppmt = pmt; memcpy(pSCC, &caps, sizeof(caps)); return S_OK; } STDMETHODIMP OutputPin::SetFormat(AM_MEDIA_TYPE *pmt) { PrintFunc(L"OutputPin::SetFormat"); if (pmt == nullptr) return VFW_E_INVALIDMEDIATYPE; mt = pmt; GetMediaTypeVFormat(mt, curVFormat); VIDEOINFOHEADER *vih = reinterpret_cast(mt->pbFormat); curCX = vih->bmiHeader.biWidth; curCY = vih->bmiHeader.biHeight; curInterval = vih->AvgTimePerFrame; return S_OK; } STDMETHODIMP OutputPin::Set(REFGUID, DWORD, void *, DWORD, void *, DWORD) { PrintFunc(L"OutputPin::Set"); return E_NOTIMPL; } STDMETHODIMP OutputPin::Get(REFGUID guidPropSet, DWORD dwPropID, void *, DWORD, void *pPropData, DWORD cbPropData, DWORD *pcbReturned) { PrintFunc(L"OutputPin::Get"); if (guidPropSet != AMPROPSETID_Pin) return E_PROP_SET_UNSUPPORTED; if (dwPropID != AMPROPERTY_PIN_CATEGORY) return E_PROP_ID_UNSUPPORTED; if (pPropData == NULL && pcbReturned == NULL) return E_POINTER; if (pcbReturned) *pcbReturned = sizeof(GUID); if (pPropData == NULL) return S_OK; if (cbPropData < sizeof(GUID)) return E_UNEXPECTED; *(GUID *)pPropData = PIN_CATEGORY_CAPTURE; return S_OK; } STDMETHODIMP OutputPin::QuerySupported(REFGUID guidPropSet, DWORD dwPropID, DWORD *pTypeSupport) { PrintFunc(L"OutputPin::QuerySupported"); if (guidPropSet != AMPROPSETID_Pin) return E_PROP_SET_UNSUPPORTED; if (dwPropID != AMPROPERTY_PIN_CATEGORY) return E_PROP_ID_UNSUPPORTED; if (pTypeSupport) *pTypeSupport = KSPROPERTY_SUPPORT_GET; return S_OK; } bool OutputPin::AllocateBuffers(IPin *target, bool connecting) { HRESULT hr; ComQIPtr memInput(target); if (!memInput) return false; if (!!allocator) { allocator->Decommit(); } hr = memInput->GetAllocator(&allocator); if (hr == VFW_E_NO_ALLOCATOR) hr = 
CoCreateInstance(CLSID_MemoryAllocator, NULL, CLSCTX_INPROC_SERVER, __uuidof(IMemAllocator), (void **)&allocator); if (FAILED(hr)) return false; VIDEOINFOHEADER *vih = reinterpret_cast(mt->pbFormat); int cx = vih->bmiHeader.biWidth; int cy = vih->bmiHeader.biHeight; WORD bits = VFormatBits(curVFormat); bufSize = cx * cy * bits / 8; ALLOCATOR_PROPERTIES props; hr = memInput->GetAllocatorRequirements(&props); if (hr == E_NOTIMPL) { props.cBuffers = 4; props.cbAlign = 32; props.cbPrefix = 0; } else if (FAILED(hr)) { return false; } props.cbBuffer = (long)bufSize; ALLOCATOR_PROPERTIES actual; hr = allocator->SetProperties(&props, &actual); if (FAILED(hr)) return false; if (!connecting && FAILED(allocator->Commit())) { return false; } memInput->NotifyAllocator(allocator, false); return true; } static MediaType CreateMediaType(VideoFormat format, int cx, int cy, long long interval) { MediaType mt; WORD bits = VFormatBits(format); DWORD size = cx * cy * bits / 8; uint64_t rate = (uint64_t)size * 10000000ULL / (uint64_t)interval * 8ULL; VIDEOINFOHEADER *vih = mt.AllocFormat(); vih->bmiHeader.biSize = sizeof(vih->bmiHeader); vih->bmiHeader.biWidth = cx; vih->bmiHeader.biHeight = cy; vih->bmiHeader.biPlanes = VFormatPlanes(format); vih->bmiHeader.biBitCount = bits; vih->bmiHeader.biSizeImage = size; vih->bmiHeader.biCompression = VFormatToFourCC(format); vih->rcSource.right = cx; vih->rcSource.bottom = cy; vih->rcTarget = vih->rcSource; vih->dwBitRate = (DWORD)rate; vih->AvgTimePerFrame = interval; mt->majortype = MEDIATYPE_Video; mt->subtype = VFormatToSubType(format); mt->formattype = FORMAT_VideoInfo; mt->bFixedSizeSamples = true; mt->lSampleSize = size; return mt; } void OutputPin::AddVideoFormat(VideoFormat format, int cx, int cy, long long interval) { MediaType newMT = CreateMediaType(format, cx, cy, interval); mtList.push_back(newMT); } bool OutputPin::SetVideoFormat(VideoFormat format, int cx, int cy, long long interval) { mt = CreateMediaType(format, cx, cy, 
interval); if (curCX != cx || curCY != cy || curInterval != interval || curVFormat != format) { curVFormat = format; curCX = cx; curCY = cy; curInterval = interval; if (!!connectedPin) { setSampleMediaType = true; return ReallocateBuffers(); } } return true; } bool OutputPin::LockSampleData(unsigned char **ptr) { if (!connectedPin) return false; ComQIPtr memInput(connectedPin); HRESULT hr; if (!memInput || !allocator) return false; hr = allocator->GetBuffer(&sample, nullptr, nullptr, 0); if (FAILED(hr)) return false; if (FAILED(sample->SetActualDataLength((long)bufSize))) return false; if (FAILED(sample->SetDiscontinuity(false))) return false; if (FAILED(sample->SetPreroll(false))) return false; if (FAILED(sample->SetSyncPoint(true))) return false; if (FAILED(sample->GetPointer(ptr))) return false; if (setSampleMediaType) { sample->SetMediaType(mt); setSampleMediaType = false; } return true; } void OutputPin::Send(unsigned char *data[DSHOW_MAX_PLANES], size_t linesize[DSHOW_MAX_PLANES], long long timestampStart, long long timestampEnd) { BYTE *ptr; if (!LockSampleData(&ptr)) return; size_t total = 0; for (size_t i = 0; i < DSHOW_MAX_PLANES; i++) { if (!linesize[i]) break; memcpy(ptr + total, data[i], linesize[i]); total += linesize[i]; } UnlockSampleData(timestampStart, timestampEnd); } void OutputPin::UnlockSampleData(long long timestampStart, long long timestampEnd) { if (!connectedPin) return; ComQIPtr memInput(connectedPin); REFERENCE_TIME startTime = timestampStart; REFERENCE_TIME endTime = timestampEnd; sample->SetMediaTime(&startTime, &endTime); sample->SetTime(&startTime, &endTime); memInput->Receive(sample); sample.Clear(); } void OutputPin::Stop() { if (!!connectedPin) { connectedPin->BeginFlush(); connectedPin->EndFlush(); } } // ============================================================================ class SourceMiscFlags : public IAMFilterMiscFlags { volatile long refCount = 0; public: inline SourceMiscFlags() {} virtual ~SourceMiscFlags() {} 
STDMETHODIMP QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown) { AddRef(); *ppv = this; } else { *ppv = nullptr; return E_NOINTERFACE; } return NOERROR; } STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return refCount; } STDMETHODIMP_(ULONG) GetMiscFlags() { return AM_FILTER_MISC_FLAGS_IS_SOURCE; } }; OutputFilter::OutputFilter() : refCount(0), state(State_Stopped), graph(nullptr), pin(new OutputPin(this)), misc(new SourceMiscFlags) { } OutputFilter::OutputFilter(VideoFormat format, int cx, int cy, long long interval) : refCount(0), state(State_Stopped), graph(nullptr), pin(new OutputPin(this, format, cx, cy, interval)), misc(new SourceMiscFlags) { } OutputFilter::~OutputFilter() {} // IUnknown methods STDMETHODIMP OutputFilter::QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown) { AddRef(); *ppv = this; } else if (riid == IID_IPersist) { AddRef(); *ppv = (IPersist *)this; } else if (riid == IID_IMediaFilter) { AddRef(); *ppv = (IMediaFilter *)this; } else if (riid == IID_IBaseFilter) { AddRef(); *ppv = (IBaseFilter *)this; } else if (riid == IID_IAMFilterMiscFlags) { misc.CopyTo((IAMFilterMiscFlags **)ppv); } else { *ppv = nullptr; return E_NOINTERFACE; } return NOERROR; } STDMETHODIMP_(ULONG) OutputFilter::AddRef() { return InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) OutputFilter::Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return refCount; } // IPersist method STDMETHODIMP OutputFilter::GetClassID(CLSID *pClsID) { DSHOW_UNUSED(pClsID); return E_NOTIMPL; } // IMediaFilter methods STDMETHODIMP OutputFilter::GetState(DWORD dwMSecs, FILTER_STATE *State) { PrintFunc(L"OutputFilter::GetState"); *State = state; DSHOW_UNUSED(dwMSecs); return S_OK; } STDMETHODIMP OutputFilter::SetSyncSource(IReferenceClock *pClock) { clock = pClock; return S_OK; } STDMETHODIMP 
OutputFilter::GetSyncSource(IReferenceClock **pClock) { *pClock = clock.Get(); if (*pClock) { (*pClock)->AddRef(); } return NOERROR; } STDMETHODIMP OutputFilter::Stop() { PrintFunc(L"OutputFilter::Stop"); if (state != State_Stopped) { pin->Stop(); } state = State_Stopped; return S_OK; } STDMETHODIMP OutputFilter::Pause() { PrintFunc(L"OutputFilter::Pause"); OutputPin *pin = GetPin(); if (!!pin->allocator && state == State_Stopped) { pin->allocator->Commit(); } state = State_Paused; return S_OK; } STDMETHODIMP OutputFilter::Run(REFERENCE_TIME tStart) { PrintFunc(L"OutputFilter::Run"); state = State_Running; DSHOW_UNUSED(tStart); return S_OK; } // IBaseFilter methods STDMETHODIMP OutputFilter::EnumPins(IEnumPins **ppEnum) { PrintFunc(L"OutputFilter::EnumPins"); *ppEnum = new OutputEnumPins(this, nullptr); return (*ppEnum == nullptr) ? E_OUTOFMEMORY : NOERROR; } STDMETHODIMP OutputFilter::FindPin(LPCWSTR Id, IPin **ppPin) { PrintFunc(L"OutputFilter::FindPin"); if (Id == nullptr || ppPin == nullptr) return E_POINTER; if (lstrcmpW(Id, OUTPUT_PIN_NAME) == 0) { *ppPin = pin; pin->AddRef(); return S_OK; } else { *ppPin = nullptr; return VFW_E_NOT_FOUND; } } STDMETHODIMP OutputFilter::QueryFilterInfo(FILTER_INFO *pInfo) { PrintFunc(L"OutputFilter::QueryFilterInfo"); StringCbCopyW(pInfo->achName, sizeof(pInfo->achName), FilterName()); pInfo->pGraph = graph; if (graph) { IFilterGraph *graph_ptr = graph; graph_ptr->AddRef(); } return NOERROR; } STDMETHODIMP OutputFilter::JoinFilterGraph(IFilterGraph *pGraph, LPCWSTR pName) { DSHOW_UNUSED(pName); graph = pGraph; return NOERROR; } STDMETHODIMP OutputFilter::QueryVendorInfo(LPWSTR *pVendorInfo) { DSHOW_UNUSED(pVendorInfo); return E_NOTIMPL; } const wchar_t *OutputFilter::FilterName() const { return FILTER_NAME; } // ============================================================================ OutputEnumPins::OutputEnumPins(OutputFilter *filter_, OutputEnumPins *pEnum) : filter(filter_) { curPin = (pEnum != nullptr) ? 
pEnum->curPin : 0; } OutputEnumPins::~OutputEnumPins() {} // IUnknown STDMETHODIMP OutputEnumPins::QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown || riid == IID_IEnumPins) { AddRef(); *ppv = (IEnumPins *)this; return NOERROR; } else { *ppv = nullptr; return E_NOINTERFACE; } } STDMETHODIMP_(ULONG) OutputEnumPins::AddRef() { return (ULONG)InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) OutputEnumPins::Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return (ULONG)refCount; } // IEnumPins STDMETHODIMP OutputEnumPins::Next(ULONG cPins, IPin **ppPins, ULONG *pcFetched) { UINT nFetched = 0; if (curPin == 0 && cPins > 0) { IPin *pPin = filter->GetPin(); *ppPins = pPin; pPin->AddRef(); nFetched = 1; curPin++; } if (pcFetched) *pcFetched = nFetched; return (nFetched == cPins) ? S_OK : S_FALSE; } STDMETHODIMP OutputEnumPins::Skip(ULONG cPins) { return ((curPin += cPins) > 1) ? S_FALSE : S_OK; } STDMETHODIMP OutputEnumPins::Reset() { curPin = 0; return S_OK; } STDMETHODIMP OutputEnumPins::Clone(IEnumPins **ppEnum) { *ppEnum = new OutputEnumPins(filter, this); return (*ppEnum == nullptr) ? 
E_OUTOFMEMORY : NOERROR; } // ============================================================================ OutputEnumMediaTypes::OutputEnumMediaTypes(OutputPin *pin_) : pin(pin_) {} OutputEnumMediaTypes::~OutputEnumMediaTypes() {} STDMETHODIMP OutputEnumMediaTypes::QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown || riid == IID_IEnumMediaTypes) { AddRef(); *ppv = this; return NOERROR; } else { *ppv = nullptr; return E_NOINTERFACE; } } STDMETHODIMP_(ULONG) OutputEnumMediaTypes::AddRef() { return (ULONG)InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) OutputEnumMediaTypes::Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return (ULONG)refCount; } // IEnumMediaTypes STDMETHODIMP OutputEnumMediaTypes::Next(ULONG cMediaTypes, AM_MEDIA_TYPE **ppMediaTypes, ULONG *pcFetched) { PrintFunc(L"OutputEnumMediaTypes::Next"); UINT total = (UINT)pin->mtList.size(); UINT nFetched = 0; for (ULONG i = 0; i < cMediaTypes && curMT < total; i++) { *(ppMediaTypes++) = pin->mtList[curMT++].Duplicate(); nFetched++; } if (pcFetched) *pcFetched = nFetched; return (nFetched == cMediaTypes) ? S_OK : S_FALSE; } STDMETHODIMP OutputEnumMediaTypes::Skip(ULONG cMediaTypes) { PrintFunc(L"OutputEnumMediaTypes::Skip"); UINT total = (UINT)pin->mtList.size(); return ((curMT += cMediaTypes) > total) ? S_FALSE : S_OK; } STDMETHODIMP OutputEnumMediaTypes::Reset() { PrintFunc(L"OutputEnumMediaTypes::Reset"); curMT = 0; return S_OK; } STDMETHODIMP OutputEnumMediaTypes::Clone(IEnumMediaTypes **ppEnum) { *ppEnum = new OutputEnumMediaTypes(pin); return (*ppEnum == nullptr) ? 
E_OUTOFMEMORY : NOERROR; } }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/log.cpp000644 001751 001751 00000004421 15153330240 025737 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "dshow-base.hpp" #include "log.hpp" #include "../dshowcapture.hpp" namespace DShow { void *logParam = NULL; static LogCallback logCallback = NULL; void SetLogCallback(LogCallback callback, void *param) { logCallback = callback; logParam = param; } static void Log(LogType type, const wchar_t *format, va_list args) { wchar_t str[4096]; vswprintf_s(str, 4096, format, args); if (logCallback) logCallback(type, str, logParam); } void Error(const wchar_t *format, ...) { va_list args; va_start(args, format); Log(LogType::Error, format, args); va_end(args); } void Warning(const wchar_t *format, ...) { va_list args; va_start(args, format); Log(LogType::Warning, format, args); va_end(args); } void Info(const wchar_t *format, ...) { va_list args; va_start(args, format); Log(LogType::Info, format, args); va_end(args); } void Debug(const wchar_t *format, ...) 
{ va_list args; va_start(args, format); Log(LogType::Debug, format, args); va_end(args); } void ErrorHR(const wchar_t *str, HRESULT hr) { Error(L"%s (0x%08lX): %s", str, hr, ConvertHRToEnglish(hr).c_str()); } void WarningHR(const wchar_t *str, HRESULT hr) { Warning(L"%s (0x%08lX): %s", str, hr, ConvertHRToEnglish(hr).c_str()); } void InfoHR(const wchar_t *str, HRESULT hr) { Info(L"%s (0x%08lX): %s", str, hr, ConvertHRToEnglish(hr).c_str()); } void DebugHR(const wchar_t *str, HRESULT hr) { Debug(L"%s (0x%08lX): %s", str, hr, ConvertHRToEnglish(hr).c_str()); } }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/avermedia-encode.h000644 001751 001751 00000003340 15153330240 030012 0ustar00runnerrunner000000 000000 /* * aver_prophwencode.h -- This header file is provided to 3rd-party applications * to use the HW encode function of AVerMedia. * * Copyright (C) 2014 AVerMedia TECHNOLOGIES, Inc. * * Authors: Morris Pan, AVerMedia TECHNOLOGIES, Inc. * * This content is released under the MIT License * (http://opensource.org/licenses/MIT).
* */ #pragma once static const GUID AVER_HW_ENCODE_PROPERTY = {0x1bd55918, 0xbaf5, 0x4781, {0x8d, 0x76, 0xe0, 0xa0, 0xa5, 0xe1, 0xd2, 0xb8}}; enum { // @brief PropertySet Enumeration // param AVER_PARAMETERS PROPERTY_HW_ENCODE_PARAMETER = 0 }; enum { // property to set/get the encode frame rate // ulParam1 = Frames per second AVER_PARAMETER_ENCODE_FRAME_RATE = 0, // property to set/get the encode bit rate // ulParam1 = Bitrate (kb/s) AVER_PARAMETER_ENCODE_BIT_RATE = 1, // property to get the output resolution // ulParam1 = Resolution width // ulParam2 = Resolution height AVER_PARAMETER_CURRENT_RESOLUTION = 2, // property to set the output resolution // ulParam1 = Resolution width // ulParam2 = Resolution height AVER_PARAMETER_ENCODE_RESOLUTION = 3, // property to set/get the encode GOP // ulParam1 = GOP length AVER_PARAMETER_ENCODE_GOP = 4, // property to insert an I frame to the encoded stream AVER_PARAMETER_INSERT_I_FRAME = 6 }; struct AVER_PARAMETERS { // @brief Use the PROPERTY_PARAMETER Property to Get or Set the // Device Parameter. // // param ulIndex Parameter Index (AVER_PARAMETER_*) // param ulParam1 Parameter 1 (if any) // param ulParam2 Parameter 2 (if any) // param ulParam3 Parameter 3 (if any) ULONG ulIndex; ULONG ulParam1; ULONG ulParam2; ULONG ulParam3; }; obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/dshow-demux.hpp000644 001751 001751 00000002510 15153330240 027424 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #pragma once #include "../dshowcapture.hpp" #include "dshow-base.hpp" #include "dshow-media-type.hpp" namespace DShow { #define DEMUX_VIDEO_PIN L"Demuxer Video Pin" #define DEMUX_AUDIO_PIN L"Demuxer Audio Pin" bool CreateDemuxVideoPin(IBaseFilter *demuxFilter, MediaType &mt, long width, long height, long long frameTime, VideoFormat format); bool CreateDemuxAudioPin(IBaseFilter *demuxFilter, MediaType &mt, DWORD samplesPerSec, WORD bitsPerSample, WORD channels, AudioFormat format); }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libdshowcapture/src/source/capture-filter.cpp000644 001751 001751 00000033227 15153330240 030112 0ustar00runnerrunner000000 000000 /* * Copyright (C) 2023 Lain Bailey * * This library is free software; you can redistribute it and/or * modify it under the terms of the GNU Lesser General Public * License as published by the Free Software Foundation; either * version 2.1 of the License, or (at your option) any later version. * * This library is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * Lesser General Public License for more details. 
* * You should have received a copy of the GNU Lesser General Public * License along with this library; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 * USA */ #include "capture-filter.hpp" #include "log.hpp" namespace DShow { #if 0 #define PrintFunc(x) Debug(x) #else #define PrintFunc(x) #endif #define FILTER_NAME L"Capture Filter" #define VIDEO_PIN_NAME L"Video Capture" #define AUDIO_PIN_NAME L"Audio Capture" CapturePin::CapturePin(CaptureFilter *filter_, const PinCaptureInfo &info) : refCount(0), captureInfo(info), filter(filter_) { connectedMediaType->majortype = info.expectedMajorType; } CapturePin::~CapturePin() {} STDMETHODIMP CapturePin::QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown) { AddRef(); *ppv = this; } else if (riid == IID_IPin) { AddRef(); *ppv = (IPin *)this; } else if (riid == IID_IMemInputPin) { AddRef(); *ppv = (IMemInputPin *)this; } else { *ppv = nullptr; return E_NOINTERFACE; } return NOERROR; } STDMETHODIMP_(ULONG) CapturePin::AddRef() { return (ULONG)InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) CapturePin::Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return (ULONG)refCount; } // IPin methods STDMETHODIMP CapturePin::Connect(IPin *pReceivePin, const AM_MEDIA_TYPE *pmt) { PrintFunc(L"CapturePin::Connect"); if (filter->state == State_Running) return VFW_E_NOT_STOPPED; if (connectedPin) return VFW_E_ALREADY_CONNECTED; if (!pmt) return S_OK; if (pmt->majortype != GUID_NULL && pmt->majortype != captureInfo.expectedMajorType) return S_FALSE; if (pmt->majortype == captureInfo.expectedMajorType && !IsValidMediaType(pmt)) return S_FALSE; DSHOW_UNUSED(pReceivePin); return S_OK; } STDMETHODIMP CapturePin::ReceiveConnection(IPin *pConnector, const AM_MEDIA_TYPE *pmt) { PrintFunc(L"CapturePin::ReceiveConnection"); if (filter->state != State_Stopped) return VFW_E_NOT_STOPPED; if (!pConnector || !pmt) return E_POINTER; if 
(connectedPin) return VFW_E_ALREADY_CONNECTED; if (QueryAccept(pmt) != S_OK) return VFW_E_TYPE_NOT_ACCEPTED; connectedPin = pConnector; connectedMediaType = pmt; return S_OK; } STDMETHODIMP CapturePin::Disconnect() { PrintFunc(L"CapturePin::Disconnect"); if (!connectedPin) return S_FALSE; connectedPin = nullptr; return S_OK; } STDMETHODIMP CapturePin::ConnectedTo(IPin **pPin) { PrintFunc(L"CapturePin::ConnectedTo"); if (!connectedPin) return VFW_E_NOT_CONNECTED; IPin *pin = connectedPin; pin->AddRef(); *pPin = pin; return S_OK; } STDMETHODIMP CapturePin::ConnectionMediaType(AM_MEDIA_TYPE *pmt) { PrintFunc(L"CapturePin::ConnectionMediaType"); if (!connectedPin) return VFW_E_NOT_CONNECTED; return CopyMediaType(pmt, connectedMediaType); } STDMETHODIMP CapturePin::QueryPinInfo(PIN_INFO *pInfo) { PrintFunc(L"CapturePin::QueryPinInfo"); pInfo->pFilter = filter; if (filter) { IBaseFilter *ptr = filter; ptr->AddRef(); } if (captureInfo.expectedMajorType == MEDIATYPE_Video) memcpy(pInfo->achName, VIDEO_PIN_NAME, sizeof(VIDEO_PIN_NAME)); else memcpy(pInfo->achName, AUDIO_PIN_NAME, sizeof(AUDIO_PIN_NAME)); pInfo->dir = PINDIR_INPUT; return NOERROR; } STDMETHODIMP CapturePin::QueryDirection(PIN_DIRECTION *pPinDir) { *pPinDir = PINDIR_INPUT; return NOERROR; } #define CAPTURE_PIN_NAME L"Capture Pin" STDMETHODIMP CapturePin::QueryId(LPWSTR *lpId) { wchar_t *str = (wchar_t *)CoTaskMemAlloc(sizeof(CAPTURE_PIN_NAME)); memcpy(str, CAPTURE_PIN_NAME, sizeof(CAPTURE_PIN_NAME)); *lpId = str; return S_OK; } STDMETHODIMP CapturePin::QueryAccept(const AM_MEDIA_TYPE *pmt) { PrintFunc(L"CapturePin::QueryAccept"); if (pmt->majortype != captureInfo.expectedMajorType) return S_FALSE; if (!IsValidMediaType(pmt)) return S_FALSE; if (connectedPin) connectedMediaType = pmt; return S_OK; } STDMETHODIMP CapturePin::EnumMediaTypes(IEnumMediaTypes **ppEnum) { PrintFunc(L"CapturePin::EnumMediaTypes"); *ppEnum = new CaptureEnumMediaTypes(this); if (!*ppEnum) return E_OUTOFMEMORY; return NOERROR; } 
STDMETHODIMP CapturePin::QueryInternalConnections(IPin **apPin, ULONG *nPin) { PrintFunc(L"CapturePin::QueryInternalConnections"); DSHOW_UNUSED(apPin); DSHOW_UNUSED(nPin); return E_NOTIMPL; } STDMETHODIMP CapturePin::EndOfStream() { PrintFunc(L"CapturePin::EndOfStream"); return S_OK; } STDMETHODIMP CapturePin::BeginFlush() { PrintFunc(L"CapturePin::BeginFlush"); flushing = true; return S_OK; } STDMETHODIMP CapturePin::EndFlush() { PrintFunc(L"CapturePin::EndFlush"); flushing = false; return S_OK; } STDMETHODIMP CapturePin::NewSegment(REFERENCE_TIME tStart, REFERENCE_TIME tStop, double dRate) { PrintFunc(L"CapturePin::NewSegment"); DSHOW_UNUSED(tStart); DSHOW_UNUSED(tStop); DSHOW_UNUSED(dRate); return S_OK; } // IMemInputPin methods STDMETHODIMP CapturePin::GetAllocator(IMemAllocator **ppAllocator) { PrintFunc(L"CapturePin::GetAllocator"); DSHOW_UNUSED(ppAllocator); return VFW_E_NO_ALLOCATOR; } STDMETHODIMP CapturePin::NotifyAllocator(IMemAllocator *pAllocator, BOOL bReadOnly) { PrintFunc(L"CapturePin::NotifyAllocator"); DSHOW_UNUSED(pAllocator); DSHOW_UNUSED(bReadOnly); return S_OK; } STDMETHODIMP CapturePin::GetAllocatorRequirements(ALLOCATOR_PROPERTIES *pProps) { PrintFunc(L"CapturePin::GetAllocatorRequirements"); DSHOW_UNUSED(pProps); return E_NOTIMPL; } STDMETHODIMP CapturePin::Receive(IMediaSample *pSample) { PrintFunc(L"CapturePin::Receive"); if (flushing) return S_FALSE; if (pSample) captureInfo.callback(pSample); return S_OK; } STDMETHODIMP CapturePin::ReceiveMultiple(IMediaSample **pSamples, long nSamples, long *nSamplesProcessed) { PrintFunc(L"CapturePin::ReceiveMultiple"); if (flushing) return S_FALSE; for (long i = 0; i < nSamples; i++) if (pSamples[i]) captureInfo.callback(pSamples[i]); *nSamplesProcessed = nSamples; return S_OK; } STDMETHODIMP CapturePin::ReceiveCanBlock() { return S_FALSE; } bool CapturePin::IsValidMediaType(const AM_MEDIA_TYPE *pmt) const { if (pmt->pbFormat) { if (pmt->subtype != captureInfo.expectedSubType || pmt->majortype != 
captureInfo.expectedMajorType) return false; if (captureInfo.expectedMajorType == MEDIATYPE_Video) { const BITMAPINFOHEADER *bih = GetBitmapInfoHeader(*pmt); if (!bih || bih->biHeight == 0 || bih->biWidth == 0) return false; } } return true; } // ============================================================================ class MiscFlagsHandler : public IAMFilterMiscFlags { volatile long refCount = 0; public: inline MiscFlagsHandler() {} virtual ~MiscFlagsHandler() {} STDMETHODIMP QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown) { AddRef(); *ppv = this; } else { *ppv = nullptr; return E_NOINTERFACE; } return NOERROR; } STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return refCount; } STDMETHODIMP_(ULONG) GetMiscFlags() { return AM_FILTER_MISC_FLAGS_IS_RENDERER; } }; CaptureFilter::CaptureFilter(const PinCaptureInfo &info) : refCount(0), state(State_Stopped), pin(new CapturePin(this, info)), misc(new MiscFlagsHandler) { } CaptureFilter::~CaptureFilter() {} // IUnknown methods STDMETHODIMP CaptureFilter::QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown) { AddRef(); *ppv = this; } else if (riid == IID_IPersist) { AddRef(); *ppv = (IPersist *)this; } else if (riid == IID_IMediaFilter) { AddRef(); *ppv = (IMediaFilter *)this; } else if (riid == IID_IBaseFilter) { AddRef(); *ppv = (IBaseFilter *)this; } else if (riid == IID_IAMFilterMiscFlags) { misc.CopyTo((IAMFilterMiscFlags **)ppv); } else { *ppv = nullptr; return E_NOINTERFACE; } return NOERROR; } STDMETHODIMP_(ULONG) CaptureFilter::AddRef() { return InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) CaptureFilter::Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return refCount; } // IPersist method STDMETHODIMP CaptureFilter::GetClassID(CLSID *pClsID) { DSHOW_UNUSED(pClsID); return E_NOTIMPL; } // IMediaFilter methods 
STDMETHODIMP CaptureFilter::GetState(DWORD dwMSecs, FILTER_STATE *State) { PrintFunc(L"CaptureFilter::GetState"); *State = state; DSHOW_UNUSED(dwMSecs); return S_OK; } STDMETHODIMP CaptureFilter::SetSyncSource(IReferenceClock *pClock) { DSHOW_UNUSED(pClock); return S_OK; } STDMETHODIMP CaptureFilter::GetSyncSource(IReferenceClock **pClock) { *pClock = nullptr; return NOERROR; } STDMETHODIMP CaptureFilter::Stop() { PrintFunc(L"CaptureFilter::Stop"); state = State_Stopped; return S_OK; } STDMETHODIMP CaptureFilter::Pause() { PrintFunc(L"CaptureFilter::Pause"); state = State_Paused; return S_OK; } STDMETHODIMP CaptureFilter::Run(REFERENCE_TIME tStart) { PrintFunc(L"CaptureFilter::Run"); state = State_Running; DSHOW_UNUSED(tStart); return S_OK; } // IBaseFilter methods STDMETHODIMP CaptureFilter::EnumPins(IEnumPins **ppEnum) { PrintFunc(L"CaptureFilter::EnumPins"); *ppEnum = new CaptureEnumPins(this, nullptr); return (*ppEnum == nullptr) ? E_OUTOFMEMORY : NOERROR; } STDMETHODIMP CaptureFilter::FindPin(LPCWSTR Id, IPin **ppPin) { PrintFunc(L"CaptureFilter::FindPin"); if (Id == nullptr || ppPin == nullptr) return E_POINTER; if (lstrcmpW(Id, CAPTURE_PIN_NAME) == 0) { *ppPin = pin; pin->AddRef(); return S_OK; } else { *ppPin = nullptr; return VFW_E_NOT_FOUND; } } STDMETHODIMP CaptureFilter::QueryFilterInfo(FILTER_INFO *pInfo) { PrintFunc(L"CaptureFilter::QueryFilterInfo"); memcpy(pInfo->achName, FILTER_NAME, sizeof(FILTER_NAME)); pInfo->pGraph = graph; if (graph) { IFilterGraph *graph_ptr = graph; graph_ptr->AddRef(); } return NOERROR; } STDMETHODIMP CaptureFilter::JoinFilterGraph(IFilterGraph *pGraph, LPCWSTR pName) { DSHOW_UNUSED(pName); graph = pGraph; return NOERROR; } STDMETHODIMP CaptureFilter::QueryVendorInfo(LPWSTR *pVendorInfo) { DSHOW_UNUSED(pVendorInfo); return E_NOTIMPL; } // ============================================================================ CaptureEnumPins::CaptureEnumPins(CaptureFilter *filter_, CaptureEnumPins *pEnum) : filter(filter_) { curPin = 
(pEnum != nullptr) ? pEnum->curPin : 0; } CaptureEnumPins::~CaptureEnumPins() {} // IUnknown STDMETHODIMP CaptureEnumPins::QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown || riid == IID_IEnumPins) { AddRef(); *ppv = (IEnumPins *)this; return NOERROR; } else { *ppv = nullptr; return E_NOINTERFACE; } } STDMETHODIMP_(ULONG) CaptureEnumPins::AddRef() { return (ULONG)InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) CaptureEnumPins::Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return (ULONG)refCount; } // IEnumPins STDMETHODIMP CaptureEnumPins::Next(ULONG cPins, IPin **ppPins, ULONG *pcFetched) { UINT nFetched = 0; if (curPin == 0 && cPins > 0) { IPin *pPin = filter->GetPin(); *ppPins = pPin; pPin->AddRef(); nFetched = 1; curPin++; } if (pcFetched) *pcFetched = nFetched; return (nFetched == cPins) ? S_OK : S_FALSE; } STDMETHODIMP CaptureEnumPins::Skip(ULONG cPins) { return ((curPin += cPins) > 1) ? S_FALSE : S_OK; } STDMETHODIMP CaptureEnumPins::Reset() { curPin = 0; return S_OK; } STDMETHODIMP CaptureEnumPins::Clone(IEnumPins **ppEnum) { *ppEnum = new CaptureEnumPins(filter, this); return (*ppEnum == nullptr) ? 
E_OUTOFMEMORY : NOERROR; } // ============================================================================ CaptureEnumMediaTypes::CaptureEnumMediaTypes(CapturePin *pin_) : pin(pin_) {} CaptureEnumMediaTypes::~CaptureEnumMediaTypes() {} STDMETHODIMP CaptureEnumMediaTypes::QueryInterface(REFIID riid, void **ppv) { if (riid == IID_IUnknown || riid == IID_IEnumMediaTypes) { AddRef(); *ppv = this; return NOERROR; } else { *ppv = nullptr; return E_NOINTERFACE; } } STDMETHODIMP_(ULONG) CaptureEnumMediaTypes::AddRef() { return (ULONG)InterlockedIncrement(&refCount); } STDMETHODIMP_(ULONG) CaptureEnumMediaTypes::Release() { if (!InterlockedDecrement(&refCount)) { delete this; return 0; } return (ULONG)refCount; } // IEnumMediaTypes STDMETHODIMP CaptureEnumMediaTypes::Next(ULONG cMediaTypes, AM_MEDIA_TYPE **ppMediaTypes, ULONG *pcFetched) { PrintFunc(L"CaptureEnumMediaTypes::Next"); UINT nFetched = 0; if (curMT == 0 && cMediaTypes > 0) { *ppMediaTypes = pin->connectedMediaType.Duplicate(); nFetched = 1; curMT++; } if (pcFetched) *pcFetched = nFetched; return (nFetched == cMediaTypes) ? S_OK : S_FALSE; } STDMETHODIMP CaptureEnumMediaTypes::Skip(ULONG cMediaTypes) { PrintFunc(L"CaptureEnumMediaTypes::Skip"); return ((curMT += cMediaTypes) > 1) ? S_FALSE : S_OK; } STDMETHODIMP CaptureEnumMediaTypes::Reset() { PrintFunc(L"CaptureEnumMediaTypes::Reset"); curMT = 0; return S_OK; } STDMETHODIMP CaptureEnumMediaTypes::Clone(IEnumMediaTypes **ppEnum) { *ppEnum = new CaptureEnumMediaTypes(pin); return (*ppEnum == nullptr) ? 
E_OUTOFMEMORY : NOERROR; } }; /* namespace DShow */ obs-studio-32.1.0-sources/deps/libcaption/000755 001751 001751 00000000000 15153330731 021314 5ustar00runnerrunner000000 000000 obs-studio-32.1.0-sources/deps/libcaption/.clang-format000644 001751 001751 00000000066 15153330235 023670 0ustar00runnerrunner000000 000000 Language: Cpp SortIncludes: false DisableFormat: true obs-studio-32.1.0-sources/deps/libcaption/README.md000644 001751 001751 00000005616 15153330235 022602 0ustar00runnerrunner000000 000000 # version v0.8

Matthew Szatmary m3u8@twitch.tv / matt@szatmary.org

# libcaption

libcaption is a library written in C to aid in creating and parsing closed caption data, open-sourced under the MIT license for use within community-developed broadcast tools. To maintain consistency across platforms, libcaption aims to implement a subset of EIA-608 and CEA-708 as supported by the Apple iOS platform.

608 support is currently limited to encoding and decoding the necessary control and preamble codes, as well as support for the Basic North American, Special North American and Extended Western European character sets. 708 support is limited to encoding the 608 data in an NTSC field 1 user data type structure.

In addition, utility functions to create h.264 SEI (Supplemental Enhancement Information) NALUs (Network Abstraction Layer Units) for inclusion into an h.264 elementary stream are provided. H.264 utility functions are limited to wrapping the 708 payload into an SEI NALU. This is accomplished by prepending the 708 payload with 3 bytes (nal_unit_type = 6, payloadType = 4, and PayloadSize = variable), and appending a stop bit encoded into a full byte (0x80). In addition, if the 708 payload contains an emulated start code (a three-byte sequence equaling 0,0,1), an emulation prevention byte (3) is inserted. Functions to reverse this operation are also provided.
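The emulation-prevention step described above can be sketched as follows. This is a minimal illustration, not libcaption's actual implementation, and `sei_escape` is a hypothetical helper name rather than part of the library's public API:

```c
#include <stddef.h>
#include <stdint.h>

/* Copy an SEI payload into `out`, inserting an emulation prevention byte
 * (0x03) whenever two consecutive zero bytes would otherwise be followed
 * by a byte <= 0x03 (which would emulate a start-code prefix such as
 * 0,0,1). `out` must hold at least size * 3 / 2 bytes in the worst case.
 * Returns the number of bytes written. */
static size_t sei_escape(uint8_t *out, const uint8_t *in, size_t size)
{
    size_t written = 0;
    size_t zeros = 0;

    for (size_t i = 0; i < size; ++i) {
        if (zeros >= 2 && in[i] <= 0x03) {
            out[written++] = 0x03; /* emulation prevention byte */
            zeros = 0;
        }

        zeros = (in[i] == 0x00) ? zeros + 1 : 0;
        out[written++] = in[i];
    }

    return written;
}
```

For example, the payload bytes 0,0,1 are written out as 0,0,3,1; decoding reverses the operation by dropping any 0x03 that follows two zero bytes.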
## Characters

|   |   |   |   |   |   |   |   |   |   |   |   |   |   |   |   |   |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|BNA| |!|"|#|$|%|&|’|(|)|á|+|,|-|.|/|
|BNA|0|1|2|3|4|5|6|7|8|9|:|;|<|=|>|?|
|BNA|@|A|B|C|D|E|F|G|H|I|J|K|L|M|N|O|
|BNA|P|Q|R|S|T|U|V|W|X|Y|Z|[|é|]|í|ó|
|BNA|ú|a|b|c|d|e|f|g|h|i|j|k|l|m|n|o|
|BNA|p|q|r|s|t|u|v|w|x|y|z|ç|÷|Ñ|ñ|█|
|SNA|®|°|½|¿|™|¢|£|♪|à| |è|â|ê|î|ô|û|
|WES|Á|É|Ó|Ú|Ü|ü|‘|¡|*|'|—|©|℠|•|“|”|
|WEF|À|Â|Ç|È|Ê|Ë|ë|Î|Ï|ï|Ô|Ù|ù|Û|«|»|
|WEP|Ã|ã|Í|Ì|ì|Ò|ò|Õ|õ|{|}|\|^|_|||~|
|WEG|Ä|ä|Ö|ö|ß|¥|¤|¦|Å|å|Ø|ø|┌|┐|└|┘|

* BNA = Basic North American character set
* SNA = Special North American character set
* WES = Extended Western European character set : Extended Spanish/Miscellaneous
* WEF = Extended Western European character set : Extended French
* WEP = Extended Western European character set : Portuguese
* WEG = Extended Western European character set : German/Danish

## Limitations

Current B-frame support for caption creation is minimal. libcaption ensures no re-ordering of captions is required on playback.

## Build Directions

# macOS/Linux

Install build dependencies (git, cmake, a compiler such as Xcode, gcc or clang, and optionally re2c and ffmpeg)

* run `cmake . && make`
* or to compile without re2c `cmake -DENABLE_RE2C=OFF . && make`
* finally `sudo make install` to install

# Windows

I have never tested libcaption on Windows. It is written in pure C with no dependencies, so there is no reason it would not work.

obs-studio-32.1.0-sources/deps/libcaption/LICENSE.txt000644 001751 001751 00000002146 15153330235 023141 0ustar00runnerrunner000000 000000 The MIT License Copyright 2016-2017 Twitch Interactive, Inc. or its affiliates. All Rights Reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. obs-studio-32.1.0-sources/deps/libcaption/Doxyfile.in000644 001751 001751 00000315367 15153330235 023445 0ustar00runnerrunner000000 000000 # Doxyfile 1.8.11 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a double hash (##) is considered a comment and is placed in # front of the TAG it is preceding. # # All text after a single hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists, items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (\" \"). #--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the config file # that follow. 
The default is UTF-8 which is also the encoding used for all text # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv # for the list of possible encodings. # The default value is: UTF-8. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by # double-quotes, unless you are using Doxywizard) that should identify the # project for which the documentation is generated. This name is used in the # title of most generated pages and in a few other places. # The default value is: My Project. PROJECT_NAME = "libcaption" # The PROJECT_NUMBER tag can be used to enter a project or revision number. This # could be handy for archiving the generated documentation or if some version # control system is used. PROJECT_NUMBER = # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a # quick idea about the purpose of the project. Keep the description short. PROJECT_BRIEF = # With the PROJECT_LOGO tag one can specify a logo or an icon that is included # in the documentation. The maximum height of the logo should not exceed 55 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy # the logo to the output directory. PROJECT_LOGO = # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path # into which the generated documentation will be written. If a relative path is # entered, it will be relative to the location where doxygen was started. If # left blank the current directory will be used. OUTPUT_DIRECTORY = ./docs # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub- # directories (in 2 levels) under the output directory of each output format and # will distribute the generated files over these directories. 
Enabling this # option can be useful when feeding doxygen a huge amount of source files, where # putting all generated files in the same directory would otherwise causes # performance problems for the file system. # The default value is: NO. CREATE_SUBDIRS = YES # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII # characters to appear in the names of generated files. If set to NO, non-ASCII # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode # U+3044. # The default value is: NO. ALLOW_UNICODE_NAMES = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, # Ukrainian and Vietnamese. # The default value is: English. OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member # descriptions after the members that are listed in the file and class # documentation (similar to Javadoc). Set to NO to disable this. # The default value is: YES. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief # description of a member or function before the detailed description # # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. # The default value is: YES. 
REPEAT_BRIEF           = YES

# This tag implements a quasi-intelligent brief description abbreviator that is
# used to form the text in various listings. Each string in this list, if found
# as the leading text of the brief description, will be stripped from the text
# and the result, after processing the whole list, is used as the annotated
# text. Otherwise, the brief description is used as-is. If left blank, the
# following values are used ($name is automatically replaced with the name of
# the entity): The $name class, The $name widget, The $name file, is, provides,
# specifies, contains, represents, a, an and the.

ABBREVIATE_BRIEF       =

# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
# doxygen will generate a detailed section even if there is only a brief
# description.
# The default value is: NO.

ALWAYS_DETAILED_SEC    = NO

# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
# inherited members of a class in the documentation of that class as if those
# members were ordinary class members. Constructors, destructors and assignment
# operators of the base classes will not be shown.
# The default value is: NO.

INLINE_INHERITED_MEMB  = NO

# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
# before a file's name in the file list and in the header files. If set to NO
# the shortest path that makes the file name unique will be used.
# The default value is: YES.

FULL_PATH_NAMES        = YES

# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
# Stripping is only done if one of the specified strings matches the left-hand
# part of the path. The tag can be used to show relative paths in the file list.
# If left blank the directory from which doxygen is run is used as the path to
# strip.
#
# Note that you can specify absolute paths here, but also relative paths, which
# will be relative from the directory where doxygen is started.
# This tag requires that the tag FULL_PATH_NAMES is set to YES.
STRIP_FROM_PATH        =

# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
# path mentioned in the documentation of a class, which tells the reader which
# header file to include in order to use a class. If left blank only the name of
# the header file containing the class definition is used. Otherwise one should
# specify the list of include paths that are normally passed to the compiler
# using the -I flag.

STRIP_FROM_INC_PATH    =

# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
# less readable) file names. This can be useful if your file system doesn't
# support long names like on DOS, Mac, or CD-ROM.
# The default value is: NO.

SHORT_NAMES            = NO

# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
# first line (until the first dot) of a Javadoc-style comment as the brief
# description. If set to NO, the Javadoc-style will behave just like regular Qt-
# style comments (thus requiring an explicit @brief command for a brief
# description.)
# The default value is: NO.

JAVADOC_AUTOBRIEF      = NO

# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
# line (until the first dot) of a Qt-style comment as the brief description. If
# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
# requiring an explicit \brief command for a brief description.)
# The default value is: NO.

QT_AUTOBRIEF           = NO

# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
# a brief description. This used to be the default behavior. The new default is
# to treat a multi-line C++ comment block as a detailed description. Set this
# tag to YES if you prefer the old behavior instead.
#
# Note that setting this tag to YES also means that Rational Rose comments are
# not recognized any more.
# The default value is: NO.
MULTILINE_CPP_IS_BRIEF = NO

# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
# documentation from any documented member that it re-implements.
# The default value is: YES.

INHERIT_DOCS           = YES

# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
# page for each member. If set to NO, the documentation of a member will be part
# of the file/class/namespace that contains it.
# The default value is: NO.

SEPARATE_MEMBER_PAGES  = NO

# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
# uses this value to replace tabs by spaces in code fragments.
# Minimum value: 1, maximum value: 16, default value: 4.

TAB_SIZE               = 4

# This tag can be used to specify a number of aliases that act as commands in
# the documentation. An alias has the form:
# name=value
# For example adding
# "sideeffect=@par Side Effects:\n"
# will allow you to put the command \sideeffect (or @sideeffect) in the
# documentation, which will result in a user-defined paragraph with heading
# "Side Effects:". You can put \n's in the value part of an alias to insert
# newlines.

ALIASES                =

# This tag can be used to specify a number of word-keyword mappings (TCL only).
# A mapping has the form "name=value". For example adding "class=itcl::class"
# will allow you to use the command class in the itcl::class meaning.

TCL_SUBST              =

# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
# instance, some of the names that are used will be different. The list of all
# members will be omitted, etc.
# The default value is: NO.

OPTIMIZE_OUTPUT_FOR_C  = NO

# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
# Python sources only. Doxygen will then generate output that is more tailored
# for that language. For instance, namespaces will be presented as packages,
# qualified scopes will look different, etc.
# The default value is: NO.

OPTIMIZE_OUTPUT_JAVA   = NO

# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
# sources. Doxygen will then generate output that is tailored for Fortran.
# The default value is: NO.

OPTIMIZE_FOR_FORTRAN   = NO

# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
# sources. Doxygen will then generate output that is tailored for VHDL.
# The default value is: NO.

OPTIMIZE_OUTPUT_VHDL   = NO

# Doxygen selects the parser to use depending on the extension of the files it
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension, and
# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
# Fortran. In the latter case the parser tries to guess whether the code is
# fixed or free formatted code, this is the default for Fortran type files),
# VHDL. For instance to make doxygen treat .inc files as Fortran files (default
# is PHP), and .f files as C (default is Fortran), use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.

EXTENSION_MAPPING      =

# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
# documentation. See http://daringfireball.net/projects/markdown/ for details.
# The output of markdown processing is further processed by doxygen, so you can
# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
# case of backward compatibility issues.
# The default value is: YES.
MARKDOWN_SUPPORT       = YES

# When enabled doxygen tries to link words that correspond to documented
# classes, or namespaces to their corresponding documentation. Such a link can
# be prevented in individual cases by putting a % sign in front of the word or
# globally by setting AUTOLINK_SUPPORT to NO.
# The default value is: YES.

AUTOLINK_SUPPORT       = YES

# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
# to include (a tag file for) the STL sources as input, then you should set this
# tag to YES in order to let doxygen match function declarations and
# definitions whose arguments contain STL classes (e.g. func(std::string);
# versus func(std::string) {}). This also makes the inheritance and
# collaboration diagrams that involve STL classes more complete and accurate.
# The default value is: NO.

BUILTIN_STL_SUPPORT    = NO

# If you use Microsoft's C++/CLI language, you should set this option to YES to
# enable parsing support.
# The default value is: NO.

CPP_CLI_SUPPORT        = NO

# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
# will parse them like normal C++ but will assume all classes use public instead
# of private inheritance when no explicit protection keyword is present.
# The default value is: NO.

SIP_SUPPORT            = NO

# For Microsoft's IDL there are propget and propput attributes to indicate
# getter and setter methods for a property. Setting this option to YES will make
# doxygen replace the get and set methods by a property in the documentation.
# This will only work if the methods are indeed getting or setting a simple
# type. If this is not the case, or you want to show the methods anyway, you
# should set this option to NO.
# The default value is: YES.
IDL_PROPERTY_SUPPORT   = YES

# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
# tag is set to YES then doxygen will reuse the documentation of the first
# member in the group (if any) for the other members of the group. By default
# all members of a group must be documented explicitly.
# The default value is: NO.

DISTRIBUTE_GROUP_DOC   = NO

# If one adds a struct or class to a group and this option is enabled, then also
# any nested class or struct is added to the same group. By default this option
# is disabled and one has to add nested compounds explicitly via \ingroup.
# The default value is: NO.

GROUP_NESTED_COMPOUNDS = NO

# Set the SUBGROUPING tag to YES to allow class member groups of the same type
# (for instance a group of public functions) to be put as a subgroup of that
# type (e.g. under the Public Functions section). Set it to NO to prevent
# subgrouping. Alternatively, this can be done per class using the
# \nosubgrouping command.
# The default value is: YES.

SUBGROUPING            = YES

# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
# are shown inside the group in which they are included (e.g. using \ingroup)
# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
# and RTF).
#
# Note that this feature does not work in combination with
# SEPARATE_MEMBER_PAGES.
# The default value is: NO.

INLINE_GROUPED_CLASSES = NO

# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
# with only public data fields or simple typedef fields will be shown inline in
# the documentation of the scope in which they are defined (i.e. file,
# namespace, or group documentation), provided this scope is documented. If set
# to NO, structs, classes, and unions are shown on a separate page (for HTML and
# Man pages) or section (for LaTeX and RTF).
# The default value is: NO.
INLINE_SIMPLE_STRUCTS  = NO

# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
# enum is documented as struct, union, or enum with the name of the typedef. So
# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
# with name TypeT. When disabled the typedef will appear as a member of a file,
# namespace, or class. And the struct will be named TypeS. This can typically be
# useful for C code in case the coding convention dictates that all compound
# types are typedef'ed and only the typedef is referenced, never the tag name.
# The default value is: NO.

TYPEDEF_HIDES_STRUCT   = NO

# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
# cache is used to resolve symbols given their name and scope. Since this can be
# an expensive process and often the same symbol appears multiple times in the
# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
# doxygen will become slower. If the cache is too large, memory is wasted. The
# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
# symbols. At the end of a run doxygen will report the cache usage and suggest
# the optimal cache size from a speed point of view.
# Minimum value: 0, maximum value: 9, default value: 0.

LOOKUP_CACHE_SIZE      = 0

#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------

# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
# documentation are documented, even if no documentation was available. Private
# class members and static file members will be hidden unless the
# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
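# As a quick sanity check of the cache-size formula above,
# 2^(16+LOOKUP_CACHE_SIZE) over the valid range 0..9 works out as follows
# (a Python sketch, not doxygen code):

```python
# Cache capacity in symbols for each valid LOOKUP_CACHE_SIZE value,
# per the formula 2^(16 + LOOKUP_CACHE_SIZE) quoted in the comment above.
for n in range(10):
    print(f"LOOKUP_CACHE_SIZE = {n} -> {2 ** (16 + n)} symbols")
```

# So the default of 0 keeps the 65536-symbol cache, while the maximum of 9
# allows 2^25 = 33554432 symbols.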
# Note: This will also disable the warnings about undocumented members that are
# normally produced when WARNINGS is set to YES.
# The default value is: NO.

EXTRACT_ALL            = YES

# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
# be included in the documentation.
# The default value is: NO.

EXTRACT_PRIVATE        = NO

# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
# scope will be included in the documentation.
# The default value is: NO.

EXTRACT_PACKAGE        = NO

# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
# included in the documentation.
# The default value is: NO.

EXTRACT_STATIC         = YES

# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
# locally in source files will be included in the documentation. If set to NO,
# only classes defined in header files are included. Does not have any effect
# for Java sources.
# The default value is: YES.

EXTRACT_LOCAL_CLASSES  = YES

# This flag is only useful for Objective-C code. If set to YES, local methods,
# which are defined in the implementation section but not in the interface are
# included in the documentation. If set to NO, only methods in the interface are
# included.
# The default value is: NO.

EXTRACT_LOCAL_METHODS  = NO

# If this flag is set to YES, the members of anonymous namespaces will be
# extracted and appear in the documentation as a namespace called
# 'anonymous_namespace{file}', where file will be replaced with the base name of
# the file that contains the anonymous namespace. By default anonymous namespace
# are hidden.
# The default value is: NO.

EXTRACT_ANON_NSPACES   = NO

# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
# undocumented members inside documented classes or files. If set to NO these
# members will be included in the various overviews, but no documentation
# section is generated. This option has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.
HIDE_UNDOC_MEMBERS     = NO

# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
# undocumented classes that are normally visible in the class hierarchy. If set
# to NO, these classes will be included in the various overviews. This option
# has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.

HIDE_UNDOC_CLASSES     = NO

# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
# (class|struct|union) declarations. If set to NO, these declarations will be
# included in the documentation.
# The default value is: NO.

HIDE_FRIEND_COMPOUNDS  = NO

# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
# documentation blocks found inside the body of a function. If set to NO, these
# blocks will be appended to the function's detailed documentation block.
# The default value is: NO.

HIDE_IN_BODY_DOCS      = NO

# The INTERNAL_DOCS tag determines if documentation that is typed after a
# \internal command is included. If the tag is set to NO then the documentation
# will be excluded. Set it to YES to include the internal documentation.
# The default value is: NO.

INTERNAL_DOCS          = NO

# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters. If set to YES, upper-case letters are also
# allowed. This is useful if you have classes or files whose names only differ
# in case and if your file system supports case sensitive file names. Windows
# and Mac users are advised to set this option to NO.
# The default value is: system dependent.

CASE_SENSE_NAMES       = NO

# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
# their full class and namespace scopes in the documentation. If set to YES, the
# scope will be hidden.
# The default value is: NO.

HIDE_SCOPE_NAMES       = NO

# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
# append additional text to a page's title, such as Class Reference.
# If set to YES the compound reference will be hidden.
# The default value is: NO.

HIDE_COMPOUND_REFERENCE= NO

# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
# the files that are included by a file in the documentation of that file.
# The default value is: YES.

SHOW_INCLUDE_FILES     = YES

# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
# grouped member an include statement to the documentation, telling the reader
# which file to include in order to use the member.
# The default value is: NO.

SHOW_GROUPED_MEMB_INC  = NO

# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
# files with double quotes in the documentation rather than with sharp brackets.
# The default value is: NO.

FORCE_LOCAL_INCLUDES   = NO

# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
# documentation for inline members.
# The default value is: YES.

INLINE_INFO            = YES

# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
# (detailed) documentation of file and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order.
# The default value is: YES.

SORT_MEMBER_DOCS       = YES

# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
# descriptions of file, namespace and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order. Note that
# this will also influence the order of the classes in the class list.
# The default value is: NO.

SORT_BRIEF_DOCS        = NO

# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
# (brief and detailed) documentation of class members so that constructors and
# destructors are listed first. If set to NO the constructors will appear in the
# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
# member documentation.
# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
# detailed member documentation.
# The default value is: NO.

SORT_MEMBERS_CTORS_1ST = NO

# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
# of group names into alphabetical order. If set to NO the group names will
# appear in their defined order.
# The default value is: NO.

SORT_GROUP_NAMES       = NO

# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
# fully-qualified names, including namespaces. If set to NO, the class list will
# be sorted only by class name, not including the namespace part.
# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
# Note: This option applies only to the class list, not to the alphabetical
# list.
# The default value is: NO.

SORT_BY_SCOPE_NAME     = NO

# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
# type resolution of all parameters of a function it will reject a match between
# the prototype and the implementation of a member function even if there is
# only one candidate or it is obvious which candidate to choose by doing a
# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
# accept a match between prototype and implementation in such cases.
# The default value is: NO.

STRICT_PROTO_MATCHING  = NO

# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
# list. This list is created by putting \todo commands in the documentation.
# The default value is: YES.

GENERATE_TODOLIST      = YES

# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
# list. This list is created by putting \test commands in the documentation.
# The default value is: YES.

GENERATE_TESTLIST      = YES

# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
# list. This list is created by putting \bug commands in the documentation.
# The default value is: YES.
GENERATE_BUGLIST       = YES

# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
# the deprecated list. This list is created by putting \deprecated commands in
# the documentation.
# The default value is: YES.

GENERATE_DEPRECATEDLIST= YES

# The ENABLED_SECTIONS tag can be used to enable conditional documentation
# sections, marked by \if ... \endif and \cond ... \endcond blocks.

ENABLED_SECTIONS       =

# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
# initial value of a variable or macro / define can have for it to appear in the
# documentation. If the initializer consists of more lines than specified here
# it will be hidden. Use a value of 0 to hide initializers completely. The
# appearance of the value of individual variables and macros / defines can be
# controlled using \showinitializer or \hideinitializer command in the
# documentation regardless of this setting.
# Minimum value: 0, maximum value: 10000, default value: 30.

MAX_INITIALIZER_LINES  = 30

# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
# the bottom of the documentation of classes and structs. If set to YES, the
# list will mention the files that were used to generate the documentation.
# The default value is: YES.

SHOW_USED_FILES        = YES

# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
# will remove the Files entry from the Quick Index and from the Folder Tree View
# (if specified).
# The default value is: YES.

SHOW_FILES             = YES

# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
# page. This will remove the Namespaces entry from the Quick Index and from the
# Folder Tree View (if specified).
# The default value is: YES.

SHOW_NAMESPACES        = YES

# The FILE_VERSION_FILTER tag can be used to specify a program or script that
# doxygen should invoke to get the current version for each file (typically from
# the version control system).
# Doxygen will invoke the program by executing (via popen()) the command
# <command> <input-file>, where <command> is the value of the
# FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
# provided by doxygen. Whatever the program writes to standard output is used as
# the file version. For an example see the documentation.

FILE_VERSION_FILTER    =

# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
# by doxygen. The layout file controls the global structure of the generated
# output files in an output format independent way. To create the layout file
# that represents doxygen's defaults, run doxygen with the -l option. You can
# optionally specify a file name after the option, if omitted DoxygenLayout.xml
# will be used as the name of the layout file.
#
# Note that if you run doxygen from a directory containing a file called
# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
# tag is left empty.

LAYOUT_FILE            =

# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
# the reference definitions. This must be a list of .bib files. The .bib
# extension is automatically appended if omitted. This requires the bibtex tool
# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
# For LaTeX the style of the bibliography can be controlled using
# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
# search path. See also \cite for info how to create references.

CITE_BIB_FILES         =

#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------

# The QUIET tag can be used to turn on/off the messages that are generated to
# standard output by doxygen. If QUIET is set to YES this implies that the
# messages are off.
# The default value is: NO.
QUIET                  = NO

# The WARNINGS tag can be used to turn on/off the warning messages that are
# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
# this implies that the warnings are on.
#
# Tip: Turn warnings on while writing the documentation.
# The default value is: YES.

WARNINGS               = YES

# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
# will automatically be disabled.
# The default value is: YES.

WARN_IF_UNDOCUMENTED   = YES

# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
# potential errors in the documentation, such as not documenting some parameters
# in a documented function, or documenting parameters that don't exist or using
# markup commands wrongly.
# The default value is: YES.

WARN_IF_DOC_ERROR      = YES

# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
# are documented, but have no documentation for their parameters or return
# value. If set to NO, doxygen will only warn about wrong or incomplete
# parameter documentation, but not about the absence of documentation.
# The default value is: NO.

WARN_NO_PARAMDOC       = NO

# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
# a warning is encountered.
# The default value is: NO.

WARN_AS_ERROR          = NO

# The WARN_FORMAT tag determines the format of the warning messages that doxygen
# can produce. The string should contain the $file, $line, and $text tags, which
# will be replaced by the file and line number from which the warning originated
# and the warning text. Optionally the format may contain $version, which will
# be replaced by the version of the file (if it could be obtained via
# FILE_VERSION_FILTER)
# The default value is: $file:$line: $text.

WARN_FORMAT            = "$file:$line: $text"

# The WARN_LOGFILE tag can be used to specify a file to which warning and error
# messages should be written.
# If left blank the output is written to standard error (stderr).

WARN_LOGFILE           =

#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------

# The INPUT tag is used to specify the files and/or directories that contain
# documented source files. You may enter file names like myfile.cpp or
# directories like /usr/src/myproject. Separate the files or directories with
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.

INPUT                  = @CMAKE_CURRENT_SOURCE_DIR@/include @CMAKE_CURRENT_SOURCE_DIR@/examples

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see: http://www.gnu.org/software/libiconv) for the list of
# possible encodings.
# The default value is: UTF-8.

INPUT_ENCODING         = UTF-8

# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# read by doxygen.
#
# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f, *.for, *.tcl,
# *.vhd, *.vhdl, *.ucf, *.qsf, *.as and *.js.

FILE_PATTERNS          = *.h *.c *.re2c

# The RECURSIVE tag can be used to specify whether or not subdirectories should
# be searched for input files as well.
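# The @CMAKE_CURRENT_SOURCE_DIR@ placeholders in INPUT above indicate that this
# file is a template expanded at build time. A minimal sketch of how such
# substitution is typically wired up in CMake (the template file name here is
# an assumption, not taken from this project's build scripts):

```
# CMakeLists.txt (sketch): expand @VAR@ placeholders in the Doxyfile template
configure_file(Doxyfile.in ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile @ONLY)
```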
# The default value is: NO.

RECURSIVE              = YES

# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.
#
# Note that relative paths are relative to the directory from which doxygen is
# run.

EXCLUDE                =

# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
# from the input.
# The default value is: NO.

EXCLUDE_SYMLINKS       = NO

# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories.
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*

EXCLUDE_PATTERNS       =

# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
# output. The symbol name can be a fully qualified name, a word, or if the
# wildcard * is used, a substring. Examples: ANamespace, AClass,
# AClass::ANamespace, ANamespace::*Test
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*

EXCLUDE_SYMBOLS        =

# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH           =

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.
EXAMPLE_PATTERNS       =

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands
# irrespective of the value of the RECURSIVE tag.
# The default value is: NO.

EXAMPLE_RECURSIVE      = NO

# The IMAGE_PATH tag can be used to specify one or more files or directories
# that contain images that are to be included in the documentation (see the
# \image command).

IMAGE_PATH             =

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command:
#
# <filter> <input-file>
#
# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
# name of an input file. Doxygen will then use the output that the filter
# program writes to standard output. If FILTER_PATTERNS is specified, this tag
# will be ignored.
#
# Note that the filter must not add or remove lines; it is applied before the
# code is scanned, but not when the output code is generated. If lines are added
# or removed, the anchors will not be placed correctly.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.

INPUT_FILTER           =

# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
# filter if there is a match. The filters are a list of the form: pattern=filter
# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
# patterns match the file name, INPUT_FILTER is applied.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.
FILTER_PATTERNS        =

# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will also be used to filter the input files that are used for
# producing the source files to browse (i.e. when SOURCE_BROWSER is set to
# YES).
# The default value is: NO.

FILTER_SOURCE_FILES    = NO

# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
# it is also possible to disable source filtering for a specific pattern using
# *.ext= (so without naming a filter).
# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.

FILTER_SOURCE_PATTERNS =

# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
# is part of the input, its contents will be placed on the main page
# (index.html). This can be useful if you have a project on for instance GitHub
# and want to reuse the introduction page also for the doxygen output.

USE_MDFILE_AS_MAINPAGE =

#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------

# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
# generated. Documented entities will be cross-referenced with these sources.
#
# Note: To get rid of all source code in the generated output, make sure that
# also VERBATIM_HEADERS is set to NO.
# The default value is: NO.

SOURCE_BROWSER         = NO

# Setting the INLINE_SOURCES tag to YES will include the body of functions,
# classes and enums directly into the documentation.
# The default value is: NO.

INLINE_SOURCES         = NO

# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
# special comment blocks from generated source code fragments. Normal C, C++
# and Fortran comments will always remain visible.
# The default value is: YES.
STRIP_CODE_COMMENTS    = YES

# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# function all documented functions referencing it will be listed.
# The default value is: NO.

REFERENCED_BY_RELATION = NO

# If the REFERENCES_RELATION tag is set to YES then for each documented
# function all documented entities called/used by that function will be listed.
# The default value is: NO.

REFERENCES_RELATION    = NO

# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
# to YES then the hyperlinks from functions in REFERENCES_RELATION and
# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they
# will link to the documentation.
# The default value is: YES.

REFERENCES_LINK_SOURCE = YES

# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
# source code will show a tooltip with additional information such as
# prototype, brief description and links to the definition and documentation.
# Since this will make the HTML file larger and loading of large files a bit
# slower, you can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

SOURCE_TOOLTIPS        = YES

# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see http://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
#
# The result: instead of the source browser generated by doxygen, the links to
# source code will now point to the output of htags.
# The default value is: NO.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

USE_HTAGS              = NO

# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
# verbatim copy of the header file for each class for which an include is
# specified. Set to NO to disable this.
# See also: Section \class.
# The default value is: YES.

VERBATIM_HEADERS       = YES

#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------

# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
# compounds will be generated. Enable this if the project contains a lot of
# classes, structs, unions or interfaces.
# The default value is: YES.

ALPHABETICAL_INDEX     = YES

# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
# which the alphabetical index list will be split.
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

COLS_IN_ALPHA_INDEX    = 5

# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be
# ignored while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

IGNORE_PREFIX          =

#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------

# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
# The default value is: YES.
GENERATE_HTML          = YES

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front
# of it.
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_OUTPUT            = html

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for
# each generated HTML page (for example: .htm, .php, .asp).
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FILE_EXTENSION    = .html

# The HTML_HEADER tag can be used to specify a user-defined HTML header file
# for each generated HTML page. If the tag is left blank doxygen will generate
# a standard header.
#
# To get valid HTML the header file must include any scripts and style sheets
# that doxygen needs, which is dependent on the configuration options used
# (e.g. the setting GENERATE_TREEVIEW). It is highly recommended to start with
# a default header using
#   doxygen -w html new_header.html new_footer.html new_stylesheet.css
#   YourConfigFile
# and then modify the file new_header.html. See also section "Doxygen usage"
# for information on how to generate the default header that doxygen normally
# uses.
# Note: The header is subject to change so you typically have to regenerate the
# default header when upgrading to a newer version of doxygen. For a
# description of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_HEADER            =

# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for
# each generated HTML page. If the tag is left blank doxygen will generate a
# standard footer. See HTML_HEADER for more information on how to generate a
# default footer and what special commands can be used inside the footer. See
# also section "Doxygen usage" for information on how to generate the default
# footer that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FOOTER            =

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
# the HTML output. If left blank doxygen will generate a default style sheet.
# See also section "Doxygen usage" for information on how to generate the style
# sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
# it is more robust and this tag (HTML_STYLESHEET) will in the future become
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_STYLESHEET        =

# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style aspects.
# This is preferred over using HTML_STYLESHEET since it does not replace the
# standard style sheet and is therefore more robust against future updates.
# Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the
# last style sheet in the list overrules the setting of the previous ones in
# the list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_STYLESHEET  =

# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
# that these files will be copied to the base HTML output directory. Use the
# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
# files. In the HTML_STYLESHEET file, use the file name only. Also note that
# the files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.
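# As an illustration only (the file names below are hypothetical, not part of
# the upstream file), extra assets referenced from a custom header could be
# shipped with:
#
#   HTML_EXTRA_FILES = doc/logo.png doc/extra.js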
HTML_EXTRA_FILES       =

# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a colorwheel, see
# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
# purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_HUE    = 220

# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
# in the HTML output. For a value of 0 the output will use grayscales only. A
# value of 255 will produce the most vivid colors.
# Minimum value: 0, maximum value: 255, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_SAT    = 100

# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
# luminance component of the colors in the HTML output. Values below 100
# gradually make the output lighter, whereas values above 100 make the output
# darker. The value divided by 100 is the actual gamma applied, so 80
# represents a gamma of 0.8, the value 220 represents a gamma of 2.2, and 100
# does not change the gamma.
# Minimum value: 40, maximum value: 240, default value: 80.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_GAMMA  = 80

# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated
# HTML page will contain the date and time when the page was generated. Setting
# this to YES can help to show when doxygen was last run and thus if the
# documentation is up to date.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_TIMESTAMP         = NO

# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_SECTIONS  = NO

# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
# shown in the various tree structured indices initially; the user can expand
# and collapse entries dynamically later on. Doxygen will expand the tree to
# such a level that at most the specified number of entries are visible (unless
# a fully collapsed tree already exceeds this amount). So setting the number of
# entries 1 will produce a full collapsed tree by default. 0 is a special value
# representing an infinite number of entries and will result in a full expanded
# tree by default.
# Minimum value: 0, maximum value: 9999, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_INDEX_NUM_ENTRIES = 100

# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated
# development environment (see: http://developer.apple.com/tools/xcode/),
# introduced with OSX 10.5 (Leopard). To create a documentation set, doxygen
# will generate a Makefile in the HTML output directory. Running make will
# produce the docset in that directory and running make install will install
# the docset in ~/Library/Developer/Shared/Documentation/DocSets so that Xcode
# will find it at startup. See
# http://developer.apple.com/tools/creatingdocsetswithdoxygen.html for more
# information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_DOCSET        = NO

# This tag determines the name of the docset feed. A documentation feed
# provides an umbrella under which multiple documentation sets from a single
# provider (such as a company or product suite) can be grouped.
# The default value is: Doxygen generated docs.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_FEEDNAME        = "Doxygen generated docs"

# This tag specifies a string that should uniquely identify the documentation
# set bundle. This should be a reverse domain-name style string, e.g.
# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_BUNDLE_ID       = org.doxygen.Project

# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
# the documentation publisher. This should be a reverse domain-name style
# string, e.g. com.mycompany.MyDocSet.documentation.
# The default value is: org.doxygen.Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_ID    = org.doxygen.Publisher

# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
# The default value is: Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_NAME  = Publisher

# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help
# Workshop (see:
# http://www.microsoft.com/en-us/download/details.aspx?id=21138) on Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
# files are now used as the Windows 98 help format, and will replace the old
# Windows help format (.hlp) on all Windows platforms in the future. Compressed
# HTML files also contain an index, a table of contents, and you can search for
# words in the documentation. The HTML workshop also contains a viewer for
# compressed HTML files.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
GENERATE_HTMLHELP      = NO

# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_FILE               =

# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe). If non-empty,
# doxygen will try to run the HTML help compiler on the generated index.hhp.
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

HHC_LOCATION           =

# The GENERATE_CHI flag controls if a separate .chi index file is generated
# (YES) or that it should be included in the master .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

GENERATE_CHI           = NO

# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_INDEX_ENCODING     =

# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

BINARY_TOC             = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members to
# the table of contents of the HTML help documentation and to the tree view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

TOC_EXPAND             = NO

# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
# (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_QHP           = NO

# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
# the file name of the resulting .qch file. The path specified is relative to
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.

QCH_FILE               =

# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_NAMESPACE          = org.doxygen.Project

# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project /
# Virtual Folders (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_VIRTUAL_FOLDER     = doc

# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_NAME   =

# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project /
# Custom Filters (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_ATTRS  =

# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_SECT_FILTER_ATTRS  =

# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHG_LOCATION           =

# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated, together with the HTML files, they form an Eclipse help plugin. To
# install this plugin and make it available under the help contents menu in
# Eclipse, the contents of the directory containing the HTML and XML files
# needs to be copied into the plugins directory of eclipse. The name of the
# directory within the plugins directory should be the same as the
# ECLIPSE_DOC_ID value. After copying Eclipse needs to be restarted before the
# help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_ECLIPSEHELP   = NO

# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.

ECLIPSE_DOC_ID         = org.doxygen.Project

# If you want full control over the layout of the generated HTML pages it might
# be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at
# top of each HTML page. A value of NO enables the index and the value YES
# disables it. Since the tabs in the index contain the same information as the
# navigation tree, you can set this option to YES if you also set
# GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
DISABLE_INDEX          = NO

# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the
# tag value is set to YES, a side panel will be generated containing a
# tree-like index structure (just like the one that is generated for HTML
# Help). For this to work a browser that supports JavaScript, DHTML, CSS and
# frames is required (i.e. any modern browser). Windows users are probably
# better off using the HTML help feature. Via custom style sheets (see
# HTML_EXTRA_STYLESHEET) one can further fine-tune the look of the index. As an
# example, the default style sheet generated by doxygen has an example that
# shows how to put an image at the root of the tree instead of the
# PROJECT_NAME. Since the tree basically has the same information as the tab
# index, you could consider setting DISABLE_INDEX to YES when enabling this
# option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW      = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values
# that doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from
# appearing in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE   = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH         = 250

# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.
EXT_LINKS_IN_WINDOW    = NO

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_FONTSIZE       = 10

# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly for IE 6.0, but are supported on all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files
# in the HTML output directory before the changes have effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_TRANSPARENT    = YES

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# http://www.mathjax.org) which uses client side Javascript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output.
# When enabled you may also need to install MathJax separately and configure
# the path to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX            = NO

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.
MATHJAX_FORMAT         = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script. For instance, if the mathjax directory
# is located at the same level as the HTML output directory, then
# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest

# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_EXTENSIONS     =

# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax
# site (see: http://docs.mathjax.org/en/latest/output.html) for more details.
# For an example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_CODEFILE       =

# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the javascript based search engine can be slow, then
# enabling SERVER_BASED_SEARCH may provide a better solution.
# It is possible to search using the keyboard; to jump to the search box use
# <access key> + S (what the <access key> is depends on the OS and browser, but
# it is typically <CTRL>, <ALT>/