percona-toolkit-3.1/

percona-toolkit-3.1/.env

GOLANG_DOCKERHUB_TAG=1.10-stretch
TEST_MONGODB_ADMIN_USERNAME=admin
TEST_MONGODB_ADMIN_PASSWORD=admin123456
TEST_MONGODB_USERNAME=test
TEST_MONGODB_PASSWORD=123456
TEST_MONGODB_S1_RS=rs1
TEST_MONGODB_STANDALONE_PORT=27017
TEST_MONGODB_S1_PRIMARY_PORT=17001
TEST_MONGODB_S1_SECONDARY1_PORT=17002
TEST_MONGODB_S1_SECONDARY2_PORT=17003
TEST_MONGODB_S2_RS=rs2
TEST_MONGODB_S2_PRIMARY_PORT=17004
TEST_MONGODB_S2_SECONDARY1_PORT=17005
TEST_MONGODB_S2_SECONDARY2_PORT=17006
TEST_MONGODB_S3_RS=rs3
TEST_MONGODB_S3_PRIMARY_PORT=17021
TEST_MONGODB_S3_SECONDARY1_PORT=17022
TEST_MONGODB_S3_SECONDARY2_PORT=17023
TEST_MONGODB_CONFIGSVR_RS=csReplSet
TEST_MONGODB_CONFIGSVR1_PORT=17007
TEST_MONGODB_CONFIGSVR2_PORT=17008
TEST_MONGODB_CONFIGSVR3_PORT=17009
TEST_MONGODB_MONGOS_PORT=17000
TEST_PSMDB_VERSION=4.0
TEST_MONGODB_FLAVOR=mongo

percona-toolkit-3.1/.travis.yml

language: go
go:
  - 1.9.x
  - 1.10.x
  - 1.12.x
services:
  - docker
env:
  global:
    - DOCKER_COMPOSE_VERSION: 1.8.0
    - TEST_MONGODB_FLAVOR: mongo
    - TEST_PSMDB_VERSION: 4.0
    - TEST_MONGODB_ADMIN_USERNAME: admin
    - TEST_MONGODB_ADMIN_PASSWORD: admin123456
    - TEST_MONGODB_USERNAME: test
    - TEST_MONGODB_PASSWORD: 123456
    - TEST_MONGODB_STANDALONE_PORT: 27017
    - TEST_MONGODB_MONGOS_PORT: 17000
    - TEST_MONGODB_S1_RS: rs1
    - TEST_MONGODB_S1_PRIMARY_PORT: 17001
    - TEST_MONGODB_S1_SECONDARY1_PORT: 17002
    - TEST_MONGODB_S1_SECONDARY2_PORT: 17003
    - TEST_MONGODB_S2_RS: rs2
    - TEST_MONGODB_S2_PRIMARY_PORT: 17004
    - TEST_MONGODB_S2_SECONDARY1_PORT: 17005
    - TEST_MONGODB_S2_SECONDARY2_PORT: 17006
    - TEST_MONGODB_CONFIGSVR_RS: csReplSet
    - TEST_MONGODB_CONFIGSVR1_PORT: 17007
    - TEST_MONGODB_CONFIGSVR2_PORT: 17008
    - TEST_MONGODB_CONFIGSVR3_PORT: 17009
    - TEST_MONGODB_S3_RS: rs3
    - TEST_MONGODB_S3_PRIMARY_PORT: 17021
    - TEST_MONGODB_S3_SECONDARY1_PORT: 17022
    - TEST_MONGODB_S3_SECONDARY2_PORT: 17023
    - MINIO_ENDPOINT: http://localhost:9000/
    - MINIO_ACCESS_KEY_ID: example00000
    - MINIO_SECRET_ACCESS_KEY: secret00000
  matrix:
    - MONGODB_IMAGE=mongo:3.0
    - MONGODB_IMAGE=mongo:3.2
    - MONGODB_IMAGE=mongo:3.4
    - MONGODB_IMAGE=percona/percona-server-mongodb:3.0
    - MONGODB_IMAGE=percona/percona-server-mongodb:3.2
    - MONGODB_IMAGE=percona/percona-server-mongodb:3.4
before_install:
  - sudo apt-get update
  - sudo apt-get install -o Dpkg::Options::="--force-confold" --force-yes -y docker-ce
  - docker-compose --version
install:
  - go get -u github.com/golang/dep/cmd/dep
before_script:
  # log versions
  - docker --version
  - docker-compose --version
  # run docker containers
  - docker-compose up -d init
  # we need at least one document in test db
  - dep ensure
script:
  - go test -timeout 20m ./src/...
allow_failures:
  - tip
notifications:
  email: false
  slack:
    on_success: change
    on_failure: change
    rooms:
      secure: E5ZRDFtbVmQCo2SLCdvecpaRIZPC35+0srkyA9jVq0BJpvVY6pm4OQceAugy/g5cd6c2reTN9oNSjNF62BKpoJxPuIuu8/JdlvUMMxgxnGkCC1R6hAddbapvIe4EXlybLPGy8kAG7OkYVpGHtWwN3U5MfF7/tGeqL2y8C3fCDZA=

percona-toolkit-3.1/CONTRIBUTE.md

# Contributing guide

## How Can I Contribute?

## Reporting Bugs

Before creating bug reports, please check this list as you might find out that you don't need to create one.
When you create a bug report, please include as many details as possible. You can use this template to structure the information.

### Before Submitting A Bug Report

- Ensure you have carefully read the documentation. Percona Toolkit is a mature project with many settings that cover a wide range of options.
- Search for existing bugs in Launchpad to see if the problem has already been reported. If it has, add a comment to the existing issue instead of opening a new one.

### How Do I Submit A (Good) Bug Report?

- Explain the problem and include additional details to help others reproduce it:
  - Use a clear and descriptive title for the issue to identify the problem.
  - Describe the exact steps which reproduce the problem, including as many details as possible. Provide examples of the command you used and include context information like language, OS and database versions. Describe the obtained results and the expected results and, if possible, provide examples.

## Submitting fixes

### Create an Issue

If you find a bug, the first step is to create an issue. Whatever the problem is, you're likely not the only one experiencing it. Others will find your issue helpful, and other developers might help you find the cause and discuss the best solution for it.

#### Tips for creating an issue

- Check if there are any existing issues for your problem. By doing this, we can avoid duplicating efforts, since the issue might have already been reported and, if not, you might find useful information on older issues related to the same problem.
- Be clear about what your problem is: which program were you using, what was the expected result and what is the result you are getting. Detail how someone else can reproduce the problem, including examples.
- Include system details like language version, OS, database details or special configurations, etc.
- Paste the error output or logs in your issue or in a Gist.

### Pull Requests

If you fixed a bug or added a new feature – awesome! Open a pull request with the code! Be sure you've read any documents on contributing, understand the license and have signed a Contributor Licence Agreement (CLA) if required. Once you've submitted a pull request, the maintainers can compare your branch to the existing one and decide whether or not to incorporate (merge) your changes.

### Tips for creating a pull request

- Fork the repository and clone it locally. Connect your local clone to the original 'upstream' repository by adding it as a remote, and pull in changes from 'upstream' often so that you stay up to date and merge conflicts are less likely when you submit your pull request. A sketch of this workflow appears after this list.
- Create a branch for your code. Usually it is a good practice to name the branch after the issue ID, like issue-12345.
- Be clear about the problem you fixed or the feature you added. Include explanations and code references to help the maintainers understand what you did.
- Add useful comments to the code to help others understand it.
- Write tests. This is an important step. Run your changes against existing tests and create new ones when needed. Whether tests exist or not, make sure your changes don't break the existing project.
- Contribute in the style of the project to the best of your abilities. This may mean using indents, semicolons, or comments differently than you would in your own repository, but it makes it easier for the maintainer to merge, and for others to understand and maintain in the future.
- Keep your changes as small as possible and solve only what's reported in the issue. Mixing fixes might be confusing to others and makes testing harder.
- Be as explicit as possible. Avoid using special/internal language variables like $_. Use a variable name that clearly represents the value it holds.
- Write good commit messages. A commit message like 'Misc bugfixes' or 'More code added' does not help others understand what the change is about.
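The commands below are only a rough sketch of that fork-and-branch workflow: the issue number, branch name, and commit message are made-up placeholders, and the upstream URL is the same repository address used in the README.

```
# One-time setup: add the official repository as the 'upstream' remote
cd ${HOME}/perldev/percona-toolkit
git remote add upstream https://github.com/percona/percona-toolkit.git

# Before starting new work, pull in upstream changes so merge conflicts stay small
git fetch upstream

# Create a branch named after the issue you are working on (hypothetical ID)
git checkout -b issue-12345

# ...edit code, run the tests...

# Commit with a message that describes the actual change, then push your branch
git commit -am "Fix option sanity check in pt-stalk (issue-12345)"
git push origin issue-12345
```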
### Open Pull Requests

Once you've opened a pull request, a discussion will start around your proposed changes. Other contributors and users may chime in, but ultimately the decision is made by the maintainers. You may be asked to make some changes to your pull request; if so, add more commits to your branch and push them – they'll automatically go into the existing pull request.

# Licensing

Along with the pull request, include a message indicating that the submitted code is your own creation and that it can be distributed under the BSD licence.

# Setting up the development environment

#### Setting up the source code

To start, fork the Percona Toolkit repo to be able to submit pull requests and clone it locally:

```
mkdir ${HOME}/perldev
git clone https://github.com/<your username>/percona-toolkit.git ${HOME}/perldev/percona-toolkit
```

For testing, we are going to need MySQL with slaves. For that, we already have scripts in the sandbox directory, but first we need to download the MySQL binaries. Please download the Linux Generic tar file for your distribution from [https://www.percona.com/downloads/Percona-Server-5.6/](https://www.percona.com/downloads/Percona-Server-5.6/).

### Set up MySQL sandbox

In this example, we are going to download Percona Server 5.6.32. Since I am using Ubuntu, according to the documentation [here](https://www.percona.com/doc/percona-server/5.6/installation.html#installing-percona-server-from-a-binary-tarball), I am going to need this tar file: [Percona-Server-5.6.32-rel78.1-Linux.x86_64.ssl100.tar.gz](https://www.percona.com/downloads/Percona-Server-5.6/Percona-Server-5.6.32-78.1/binary/tarball/Percona-Server-5.6.32-rel78.1-Linux.x86_64.ssl100.tar.gz).

```
mkdir -p ${HOME}/mysql/percona-server-5.6.32
```

```
wget https://www.percona.com/downloads/Percona-Server-5.6/Percona-Server-5.6.32-78.1/binary/tarball/Percona-Server-5.6.32-rel78.1-Linux.x86_64.ssl100.tar.gz
```

```
tar xvzf Percona-Server-5.6.32-rel78.1-Linux.x86_64.ssl100.tar.gz --strip 1 -C ${HOME}/mysql/percona-server-5.6.32
```

### Set up environment variables

We need these environment variables to start the MySQL sandbox and to run the tests. It is probably a good idea to add them to your `.bashrc` file.
``` export PERCONA_TOOLKIT_BRANCH=${HOME}/perldev/percona-toolkit export PERL5LIB=${HOME}/perldev/percona-toolkit/lib export PERCONA_TOOLKIT_SANDBOX=${HOME}/mysql/percona-server-5.6.32 ``` ### Starting the sandbox ``` cd ${HOME}/perldev/percona-toolkit ``` ``` sandbox/test-env start ``` To stop the MySQL sandbox: `sandbox/test-env stop` To enable TokuDB (only available in Percona Server 5.7+), run: ``` ENABLE_TOKUDB=1 sandbox/test-env start ``` ### Running tests ``` cd ${HOME}/perldev/percona-toolkit ``` Run all tests for a particular program (pt-stalk in this example): ``` prove -v t/pt-stalk/ ``` or run a specific test: ``` prove -v t/pt-stalk/option_sanity.t ``` percona-toolkit-3.1/CONTRIBUTING.md000664 001750 001750 00000000340 13535723557 020102 0ustar00jenkinsjenkins000000 000000 To get started, sign the Individual Contributor License Agreement or sign the Corporate Contributor License Agreement percona-toolkit-3.1/COPYING000664 001750 001750 00000043254 13535723557 016717 0ustar00jenkinsjenkins000000 000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. 
We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. 
If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. 
You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. 
If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. 
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

    Yoyodyne, Inc., hereby disclaims all copyright interest in the program
    `Gnomovision' (which makes passes at compilers) written by James Hacker.

    <signature of Ty Coon>, 1 April 1989
    Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.

percona-toolkit-3.1/INSTALL

Installing Percona Toolkit
==========================

System Requirements
-------------------

Most tools require:

  * Perl v5.8 or newer
  * Bash v3 or newer
  * Core Perl modules like Time::HiRes

Tools that connect to MySQL require:

  * Perl modules DBI and DBD::mysql
  * MySQL 5.0 or newer

Percona Toolkit is only tested on UNIX systems, primarily Debian and Red Hat derivatives; other operating systems are not supported.

Tools that connect to MySQL may work with MySQL v4.1, but this is not tested or supported.

Quick Install
-------------

  perl Makefile.PL
  make
  make test
  make install

Detailed Install
----------------

Extract the tarball and cd to the resulting directory:

  tar zxvf percona-toolkit-<version>.tar.gz
  cd percona-toolkit-<version>

Generate the Makefile, which will check Perl module dependencies and so forth:

  perl Makefile.PL

Build the tools' man pages and prep for test and install:

  make

Test that the tools can run:

  make test

All tests should pass. If not, then your system may be missing a Perl module required by a tool. The tests are not comprehensive; they only test that the tools can be executed by Perl and Bash.

Finally, install all tools and their man pages:

  make install

On most systems, the tools are installed in /usr/local/bin.
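As a quick sanity check after installation (a suggested step, not part of the original procedure), you can ask one of the installed tools for its version and open the suite's man page; this assumes /usr/local/bin is on your PATH:

  pt-align --version
  man percona-toolkit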
Installation Options
--------------------

To install to a directory other than your system's default, such as your home directory, generate the Makefile with a prefix:

  perl Makefile.PL PREFIX=${HOME}

percona-toolkit-3.1/MANIFEST

Changelog
CONTRIBUTE.md
CONTRIBUTING.md
COPYING
docker-compose.yml
Gopkg.lock
Gopkg.toml
INSTALL
Makefile.PL
MANIFEST
README.md
bin/pt-align
bin/pt-archiver
bin/pt-config-diff
bin/pt-deadlock-logger
bin/pt-diskstats
bin/pt-duplicate-key-checker
bin/pt-fifo-split
bin/pt-find
bin/pt-fingerprint
bin/pt-fk-error-logger
bin/pt-heartbeat
bin/pt-index-usage
bin/pt-ioprofile
bin/pt-kill
bin/pt-mext
bin/pt-mysql-summary
bin/pt-online-schema-change
bin/pt-pmp
bin/pt-query-digest
bin/pt-show-grants
bin/pt-sift
bin/pt-slave-delay
bin/pt-slave-find
bin/pt-slave-restart
bin/pt-stalk
bin/pt-summary
bin/pt-table-checksum
bin/pt-table-sync
bin/pt-table-usage
bin/pt-upgrade
bin/pt-variable-advisor
bin/pt-visual-explain
docs/percona-toolkit.pod

percona-toolkit-3.1/README.md

# Percona Toolkit

[![CLA assistant](https://cla-assistant.percona.com/readme/badge/percona/percona-toolkit)](https://cla-assistant.percona.com/percona/percona-toolkit)

*Percona Toolkit* is a collection of advanced command-line tools used by [Percona](http://www.percona.com/) support staff to perform a variety of MySQL and system tasks that are too difficult or complex to perform manually.

These tools are ideal alternatives to private or "one-off" scripts because they are professionally developed, formally tested, and fully documented. They are also fully self-contained, so installation is quick and easy and no libraries are installed.

Percona Toolkit is developed and supported by Percona Inc. For more information and other free, open-source software developed by Percona, visit [http://www.percona.com/software/](http://www.percona.com/software/).

## Installing

To install all tools, run:

```
perl Makefile.PL
make
make test
make install
```

You probably need to be root to `make install`. On most systems, the tools are installed in /usr/local/bin. See the INSTALL file for more information.

## Documentation

Run `man percona-toolkit` to see a list of installed tools, then `man tool` to read the embedded documentation for a specific tool. You can also read the documentation online at [http://www.percona.com/software/percona-toolkit/](http://www.percona.com/software/percona-toolkit/).

## Version 3

Starting from version 3, there are new tools for MongoDB. These tools are written in Go, so in order to compile these programs, this repo must be cloned into the GOPATH directory. Example:

```
mkdir ${HOME}/go
export GOPATH=${HOME}/go
mkdir -p ${HOME}/go/src/github.com/percona
cd ${HOME}/go/src/github.com/percona
git clone https://github.com/percona/percona-toolkit.git
cd percona-toolkit/src/go
make
```

percona-toolkit-3.1/bin/

percona-toolkit-3.1/bin/pt-align

#!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file.
Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( OptionParser )); } # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 
0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_align; use strict; use warnings FATAL => 'all'; use List::Util qw( max ); sub main { local *ARGV; # In the extremely rare case that this is run as a module, # not resetting ARGV (the filehandle) could cause problems. 
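# Overview of the steps below: parse command-line options with OptionParser,
# slurp all input lines, keep only lines whose word count equals the most
# common word count, then build a printf format with one conversion per
# column (left-aligned %-Ns for columns that look non-numeric in the second
# surviving line, right-aligned %Ns otherwise) and reprint every line.
#
# A minimal sketch of that width/alignment decision, using two made-up rows
# (the data below is illustrative only, not produced by the tool):
#
#   my @rows = ( [qw(NAME ROWS)], [qw(long_db_name 100)] );
#   # column 0: widest value is 12 chars, row 2 is non-numeric -> "%-12s"
#   # column 1: widest value is 4 chars,  row 2 is numeric     -> "%4s"
#   my $fmt = "%-12s %4s\n";
#   printf $fmt, @$_ for @rows;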
@ARGV = @_; # set global ARGV for this package my $o = OptionParser->new(); $o->get_specs(); $o->get_opts(); $o->usage_or_errors(); # Read all lines my @lines; my %word_count; while ( <> ) { my $line = $_; my @words = $line =~ m/(\S+)/g; push @lines, \@words; $word_count{ scalar @words }++; } # Find max number of words per line my @wc = reverse sort { $word_count{$a}<=>$word_count{$b} } keys %word_count; my $m_words = $wc[0]; # Filter out non-conformists @lines = grep { scalar @$_ == $m_words } @lines; die "I need at least 2 lines" unless @lines > 1; # Find the widths and alignments of each column my @fmt; foreach my $i ( 0 .. $m_words-1 ) { my $m_len = max(map { length($_->[$i]) } @lines); my $code = $lines[1]->[$i] =~ m/[^0-9.-]/ ? "%-${m_len}s" : "%${m_len}s"; push @fmt, $code; } my $fmt = join(' ', @fmt) . "\n"; # Print! foreach my $l ( @lines ) { printf $fmt, @$l; } } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-align - Align output from other tools to columns. =head1 SYNOPSIS Usage: pt-align [FILES] pt-align aligns output from other tools to columns. If no FILES are specified, STDIN is read. If a tool prints the following output, DATABASE TABLE ROWS foo bar 100 long_db_name table 1 another long_name 500 then pt-align reprints the output as, DATABASE TABLE ROWS foo bar 100 long_db_name table 1 another long_name 500 =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-align reads lines and splits them into words. It counts how many words each line has, and if there is one number that predominates, it assumes this is the number of words in each line. Then it discards all lines that don't have that many words, and looks at the 2nd line that does. It assumes this is the first non-header line. Based on whether each word looks numeric or not, it decides on column alignment. Finally, it goes through and decides how wide each column should be, and then prints them out. This is useful for things like aligning the output of vmstat or iostat so it is easier to read. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --help Show help and exit. =item --version Show version and exit. =back =head1 ENVIRONMENT This tool does not use any environment variables. =head1 SYSTEM REQUIREMENTS You need Perl, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) 
=back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz, Brian Fraser, and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2010-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-align 3.1.0 =cut percona-toolkit-3.1/bin/pt-archiver000775 001750 001750 00001020523 13535723560 020573 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo OptionParser TableParser DSNParser VersionParser Quoter TableNibbler Daemon MasterSlave FlowControlWaiter Cxn HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. 
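# Note on fat-packing: the BEGIN block near the top of this file seeds %INC
# with this file's own path for every embedded package, so a later
# "require Module" is treated as already satisfied and the copies bundled
# below are used instead of on-disk modules. A minimal sketch of the same
# idea (My::Example is an invented name, not one of the bundled packages):
#
#   BEGIN { $INC{'My/Example.pm'} = __FILE__; }
#   { package My::Example; sub hello { return 'hi' } 1; }
#   require My::Example;              # no file is read; %INC already has it
#   print My::Example::hello(), "\n"; # prints "hi"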
# ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . "::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. 
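# The Lmo::Utils helpers above work directly on the symbol table: _glob_for
# returns the typeglob for a fully qualified name and _install_coderef
# assigns a code reference to it, which is how Lmo turns generated accessors
# into ordinary named subs. A minimal sketch of that technique (Demo::greet
# is an invented example, not part of the toolkit):
#
#   no strict 'refs';
#   *{'Demo::greet'} = sub { return "hello, $_[0]" };
#   print Demo::greet('world'), "\n";   # prints "hello, world"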
# ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq . qq . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. 
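# Lmo is a small Moose-like object system (forked from Mo) that provides
# extends, has, with and override for the packages bundled in this file.
# A minimal sketch of how a class is declared with it; Counter is an
# invented example class, not one shipped in this tool:
#
#   package Counter;
#   use Lmo;
#   has count => ( is => 'rw', isa => 'Int', default => sub { 0 } );
#   sub inc { my ($self) = @_; $self->count( $self->count + 1 ); }
#   1;
#
#   # elsewhere: new() is inherited from Lmo::Object via the implicit extends
#   my $c = Counter->new();
#   $c->inc();   # count is now 1; the isa => 'Int' constraint is checked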
use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. 
Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
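# For reference: an OPTIONS entry in the tool's POD such as
#
#   =item --charset
#
#   short form: -A; type: string
#
# is converted by _pod_to_specs()/_parse_attribs() into the Getopt::Long
# spec "charset|A=s", and the code below splits that spec back into the
# long name, optional short name and value type. (--charset is only an
# illustration here; the available options come from this tool's own POD.)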
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # TableParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TableParser.pm # t/lib/TableParser.t # See https://launchpad.net/percona-toolkit for more information. 
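# The TableParser package below turns SHOW CREATE TABLE output into a
# hashref describing columns, keys, engine and charset. A minimal usage
# sketch, assuming an already connected DBI handle $dbh and an existing
# table (the database and table names are placeholders):
#
#   my $tp  = TableParser->new( Quoter => Quoter->new() );
#   my $ddl = $tp->get_create_table( $dbh, 'test', 'some_table' );
#   my $tbl = $tp->parse( $ddl );
#   print join(', ', @{ $tbl->{cols} }), "\n";   # column names
#   print $tbl->{engine}, "\n";                  # e.g. InnoDB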
# ########################################################################### { package TableParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 
1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . $where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . 
$q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . 
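# check_table() verifies that a table exists via SHOW TABLES ... LIKE.
# get_keys() parses every KEY clause out of the DDL and, for InnoDB tables,
# also reports which index acts as the clustered key (the PRIMARY key, or
# the first unique index with no nullable columns); get_fks() does the same
# for FOREIGN KEY constraints.  Illustrative only, reusing $ddl and $tbl
# from the sketch above:
#
#   my ($keys, $clustered) = $tp->get_keys($ddl, {}, $tbl->{is_nullable});
#   my $fks                = $tp->get_fks($ddl, { database => 'test' });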
$q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? ($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . 
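# DSNParser understands the key=value[,key=value...] DSN syntax used by the
# Percona Toolkit programs.  A minimal sketch (the option list here is an
# abbreviated example; real tools build it from the DSN OPTIONS section of
# their POD):
#
#   my $dp  = DSNParser->new(opts => [
#      { key => 'h', desc => 'Host', dsn => 'host', copy => 1 },
#      { key => 'P', desc => 'Port', dsn => 'port', copy => 1 },
#      { key => 'u', desc => 'User', dsn => undef,  copy => 1 },
#   ]);
#   my $dsn = $dp->parse('h=127.0.0.1,P=3306');
#   # $dsn->{h} is '127.0.0.1' and $dsn->{P} is 3306; a bare word in the
#   # DSN string is interpreted as the h (host) key.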
"please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? ';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . 
" OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . ": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? 
$dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # VersionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionParser.pm # t/lib/VersionParser.t # See https://launchpad.net/percona-toolkit for more information. 
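# VersionParser (defined below) builds a comparable version object either
# from a version string or directly from a database handle, and overloads
# "", cmp and <=> so versions can be compared directly.  A small sketch
# (the version strings and values are examples):
#
#   my $version = VersionParser->new($dbh);        # or ->new('5.7.26-29')
#   if ( $version > '5.6' ) { ... }
#   print $version->normalized_version(), "\n";    # e.g. 50726
#   print $version->comment('SELECT 1'), "\n";     # /*!50726 SELECT 1 */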
# ########################################################################### { package VersionParser; use Lmo; use Scalar::Util qw(blessed); use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use overload ( '""' => "version", '<=>' => "cmp", 'cmp' => "cmp", fallback => 1, ); use Carp (); our $VERSION = 0.01; has major => ( is => 'ro', isa => 'Int', required => 1, ); has [qw( minor revision )] => ( is => 'ro', isa => 'Num', ); has flavor => ( is => 'ro', isa => 'Str', default => sub { 'Unknown' }, ); has innodb_version => ( is => 'ro', isa => 'Str', default => sub { 'NO' }, ); sub series { my $self = shift; return $self->_join_version($self->major, $self->minor); } sub version { my $self = shift; return $self->_join_version($self->major, $self->minor, $self->revision); } sub is_in { my ($self, $target) = @_; return $self eq $target; } sub _join_version { my ($self, @parts) = @_; return join ".", map { my $c = $_; $c =~ s/^0\./0/; $c } grep defined, @parts; } sub _split_version { my ($self, $str) = @_; my @version_parts = map { s/^0(?=\d)/0./; $_ } $str =~ m/(\d+)/g; return @version_parts[0..2]; } sub normalized_version { my ( $self ) = @_; my $result = sprintf('%d%02d%02d', map { $_ || 0 } $self->major, $self->minor, $self->revision); PTDEBUG && _d($self->version, 'normalizes to', $result); return $result; } sub comment { my ( $self, $cmd ) = @_; my $v = $self->normalized_version(); return "/*!$v $cmd */" } my @methods = qw(major minor revision); sub cmp { my ($left, $right) = @_; my $right_obj = (blessed($right) && $right->isa(ref($left))) ? $right : ref($left)->new($right); my $retval = 0; for my $m ( @methods ) { last unless defined($left->$m) && defined($right_obj->$m); $retval = $left->$m <=> $right_obj->$m; last if $retval; } return $retval; } sub BUILDARGS { my $self = shift; if ( @_ == 1 ) { my %args; if ( blessed($_[0]) && $_[0]->can("selectrow_hashref") ) { PTDEBUG && _d("VersionParser got a dbh, trying to get the version"); my $dbh = $_[0]; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $query = eval { $dbh->selectall_arrayref(q/SHOW VARIABLES LIKE 'version%'/, { Slice => {} }) }; if ( $query ) { $query = { map { $_->{variable_name} => $_->{value} } @$query }; @args{@methods} = $self->_split_version($query->{version}); $args{flavor} = delete $query->{version_comment} if $query->{version_comment}; } elsif ( eval { ($query) = $dbh->selectrow_array(q/SELECT VERSION()/) } ) { @args{@methods} = $self->_split_version($query); } else { Carp::confess("Couldn't get the version from the dbh while " . "creating a VersionParser object: $@"); } $args{innodb_version} = eval { $self->_innodb_version($dbh) }; } elsif ( !ref($_[0]) ) { @args{@methods} = $self->_split_version($_[0]); } for my $method (@methods) { delete $args{$method} unless defined $args{$method}; } @_ = %args if %args; } return $self->SUPER::BUILDARGS(@_); } sub _innodb_version { my ( $self, $dbh ) = @_; return unless $dbh; my $innodb_version = "NO"; my ($innodb) = grep { $_->{engine} =~ m/InnoDB/i } map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{ $dbh->selectall_arrayref("SHOW ENGINES", {Slice=>{}}) }; if ( $innodb ) { PTDEBUG && _d("InnoDB support:", $innodb->{support}); if ( $innodb->{support} =~ m/YES|DEFAULT/i ) { my $vars = $dbh->selectrow_hashref( "SHOW VARIABLES LIKE 'innodb_version'"); $innodb_version = !$vars ? 
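# _innodb_version() inspects SHOW ENGINES and, when InnoDB support is YES
# or DEFAULT, SHOW VARIABLES LIKE 'innodb_version'; it reports "BUILTIN"
# for old servers without that variable, or the support value (for example
# DISABLED) when InnoDB is not enabled.  Callers normally read the result
# through the innodb_version attribute; illustrative only:
#
#   my $iv = VersionParser->new($dbh)->innodb_version();   # e.g. "5.7.26"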
"BUILTIN" : ($vars->{Value} || $vars->{value}); } else { $innodb_version = $innodb->{support}; # probably DISABLED or NO } } PTDEBUG && _d("InnoDB version:", $innodb_version); return $innodb_version; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End VersionParser package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . '`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 
'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(TableParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args }; return bless $self, $class; } sub generate_asc_stmt { my ( $self, %args ) = @_; my @required_args = qw(tbl_struct index); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($tbl_struct, $index) = @args{@required_args}; my @cols = $args{cols} ? @{$args{cols}} : @{$tbl_struct->{cols}}; my $q = $self->{Quoter}; die "Index '$index' does not exist in table" unless exists $tbl_struct->{keys}->{$index}; PTDEBUG && _d('Will ascend index', $index); my @asc_cols = @{$tbl_struct->{keys}->{$index}->{cols}}; if ( $args{asc_first} ) { PTDEBUG && _d('Ascending only first column'); @asc_cols = $asc_cols[0]; } elsif ( my $n = $args{n_index_cols} ) { $n = scalar @asc_cols if $n > @asc_cols; PTDEBUG && _d('Ascending only first', $n, 'columns'); @asc_cols = @asc_cols[0..($n-1)]; } PTDEBUG && _d('Will ascend columns', join(', ', @asc_cols)); my @asc_slice; my %col_posn = do { my $i = 0; map { $_ => $i++ } @cols }; foreach my $col ( @asc_cols ) { if ( !exists $col_posn{$col} ) { push @cols, $col; $col_posn{$col} = $#cols; } push @asc_slice, $col_posn{$col}; } PTDEBUG && _d('Will ascend, in ordinal position:', join(', ', @asc_slice)); my $asc_stmt = { cols => \@cols, index => $index, where => '', slice => [], scols => [], }; if ( @asc_slice ) { my $cmp_where; foreach my $cmp ( qw(< <= >= >) ) { $cmp_where = $self->generate_cmp_where( type => $cmp, slice => \@asc_slice, cols => \@cols, quoter => $q, is_nullable => $tbl_struct->{is_nullable}, ); $asc_stmt->{boundaries}->{$cmp} = $cmp_where->{where}; } my $cmp = $args{asc_only} ? '>' : '>='; $asc_stmt->{where} = $asc_stmt->{boundaries}->{$cmp}; $asc_stmt->{slice} = $cmp_where->{slice}; $asc_stmt->{scols} = $cmp_where->{scols}; } return $asc_stmt; } sub generate_cmp_where { my ( $self, %args ) = @_; foreach my $arg ( qw(type slice cols is_nullable) ) { die "I need a $arg arg" unless defined $args{$arg}; } my @slice = @{$args{slice}}; my @cols = @{$args{cols}}; my $is_nullable = $args{is_nullable}; my $type = $args{type}; my $q = $self->{Quoter}; (my $cmp = $type) =~ s/=//; my @r_slice; # Resulting slice columns, by ordinal my @r_scols; # Ditto, by name my @clauses; foreach my $i ( 0 .. $#slice ) { my @clause; foreach my $j ( 0 .. $i - 1 ) { my $ord = $slice[$j]; my $col = $cols[$ord]; my $quo = $q->quote($col); if ( $is_nullable->{$col} ) { push @clause, "((? IS NULL AND $quo IS NULL) OR ($quo = ?))"; push @r_slice, $ord, $ord; push @r_scols, $col, $col; } else { push @clause, "$quo = ?"; push @r_slice, $ord; push @r_scols, $col; } } my $ord = $slice[$i]; my $col = $cols[$ord]; my $quo = $q->quote($col); my $end = $i == $#slice; # Last clause of the whole group. if ( $is_nullable->{$col} ) { if ( $type =~ m/=/ && $end ) { push @clause, "(? IS NULL OR $quo $type ?)"; } elsif ( $type =~ m/>/ ) { push @clause, "((? IS NULL AND $quo IS NOT NULL) OR ($quo $cmp ?))"; } else { # If $type =~ m/ \@r_slice, scols => \@r_scols, where => $result, }; return $where; } sub generate_del_stmt { my ( $self, %args ) = @_; my $tbl = $args{tbl_struct}; my @cols = $args{cols} ? 
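# generate_asc_stmt() builds the WHERE boundary clauses used to walk a table
# in index order, and generate_cmp_where() produces one comparison (<, <=,
# >= or >) with NULL-safe handling for nullable columns.  A hedged sketch,
# assuming $tp is a TableParser and $tbl is a TableParser::parse() structure
# with a PRIMARY key:
#
#   my $tn  = TableNibbler->new(TableParser => $tp, Quoter => Quoter->new());
#   my $asc = $tn->generate_asc_stmt(tbl_struct => $tbl, index => 'PRIMARY');
#   # $asc->{where} is the ">=" boundary (">" with asc_only);
#   # $asc->{boundaries} holds all four, and $asc->{slice}/{scols} say
#   # which column values to bind when ascending.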
@{$args{cols}} : (); my $tp = $self->{TableParser}; my $q = $self->{Quoter}; my @del_cols; my @del_slice; my $index = $tp->find_best_index($tbl, $args{index}); die "Cannot find an ascendable index in table" unless $index; if ( $index && $tbl->{keys}->{$index}->{is_unique}) { @del_cols = @{$tbl->{keys}->{$index}->{cols}}; } else { @del_cols = @{$tbl->{cols}}; } PTDEBUG && _d('Columns needed for DELETE:', join(', ', @del_cols)); my %col_posn = do { my $i = 0; map { $_ => $i++ } @cols }; foreach my $col ( @del_cols ) { if ( !exists $col_posn{$col} ) { push @cols, $col; $col_posn{$col} = $#cols; } push @del_slice, $col_posn{$col}; } PTDEBUG && _d('Ordinals needed for DELETE:', join(', ', @del_slice)); my $del_stmt = { cols => \@cols, index => $index, where => '', slice => [], scols => [], }; my @clauses; foreach my $i ( 0 .. $#del_slice ) { my $ord = $del_slice[$i]; my $col = $cols[$ord]; my $quo = $q->quote($col); if ( $tbl->{is_nullable}->{$col} ) { push @clauses, "((? IS NULL AND $quo IS NULL) OR ($quo = ?))"; push @{$del_stmt->{slice}}, $ord, $ord; push @{$del_stmt->{scols}}, $col, $col; } else { push @clauses, "$quo = ?"; push @{$del_stmt->{slice}}, $ord; push @{$del_stmt->{scols}}, $col; } } $del_stmt->{where} = '(' . join(' AND ', @clauses) . ')'; return $del_stmt; } sub generate_ins_stmt { my ( $self, %args ) = @_; foreach my $arg ( qw(ins_tbl sel_cols) ) { die "I need a $arg argument" unless $args{$arg}; } my $ins_tbl = $args{ins_tbl}; my @sel_cols = @{$args{sel_cols}}; die "You didn't specify any SELECT columns" unless @sel_cols; my @ins_cols; my @ins_slice; for my $i ( 0..$#sel_cols ) { next unless $ins_tbl->{is_col}->{$sel_cols[$i]}; push @ins_cols, $sel_cols[$i]; push @ins_slice, $i; } return { cols => \@ins_cols, slice => \@ins_slice, }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableNibbler package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? 
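# Daemon puts the tool into the background: daemonize() detaches with
# fork() plus setsid(), chdirs to /, redirects STDIN/STDOUT/STDERR (to the
# file named by --log, or /dev/null when attached to a terminal), writes the
# PID file named by --pid, and removes it again when the owning process
# exits.  A hedged sketch, assuming $o is the tool's OptionParser; the
# --daemonize flag is an example, individual tools differ:
#
#   my $daemon = Daemon->new(o => $o);
#   $daemon->daemonize() if $o->get('daemonize');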
$o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # MasterSlave package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MasterSlave.pm # t/lib/MasterSlave.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package MasterSlave; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub check_recursion_method { my ($methods) = @_; if ( @$methods != 1 ) { if ( grep({ !m/processlist|hosts/i } @$methods) && $methods->[0] !~ /^dsn=/i ) { die "Invalid combination of recursion methods: " . join(", ", map { defined($_) ? $_ : 'undef' } @$methods) . ". " . "Only hosts and processlist may be combined.\n" } } else { my ($method) = @$methods; die "Invalid recursion method: " . 
( $method || 'undef' ) unless $method && $method =~ m/^(?:processlist$|hosts$|none$|cluster$|dsn=)/i; } } sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser DSNParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, replication_thread => {}, }; return bless $self, $class; } sub get_slaves { my ($self, %args) = @_; my @required_args = qw(make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($make_cxn) = @args{@required_args}; my $slaves = []; my $dp = $self->{DSNParser}; my $methods = $self->_resolve_recursion_methods($args{dsn}); return $slaves unless @$methods; if ( grep { m/processlist|hosts/i } @$methods ) { my @required_args = qw(dbh dsn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $dsn) = @args{@required_args}; my $o = $self->{OptionParser}; $self->recurse_to_slaves( { dbh => $dbh, dsn => $dsn, slave_user => $o->got('slave-user') ? $o->get('slave-user') : '', slave_password => $o->got('slave-password') ? $o->get('slave-password') : '', callback => sub { my ( $dsn, $dbh, $level, $parent ) = @_; return unless $level; PTDEBUG && _d('Found slave:', $dp->as_string($dsn)); my $slave_dsn = $dsn; if ($o->got('slave-user')) { $slave_dsn->{u} = $o->get('slave-user'); PTDEBUG && _d("Using slave user ".$o->get('slave-user')." on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($o->got('slave-password')) { $slave_dsn->{p} = $o->get('slave-password'); PTDEBUG && _d("Slave password set"); } push @$slaves, $make_cxn->(dsn => $slave_dsn, dbh => $dbh); return; }, } ); } elsif ( $methods->[0] =~ m/^dsn=/i ) { (my $dsn_table_dsn = join ",", @$methods) =~ s/^dsn=//i; $slaves = $self->get_cxn_from_dsn_table( %args, dsn_table_dsn => $dsn_table_dsn, ); } elsif ( $methods->[0] =~ m/none/i ) { PTDEBUG && _d('Not getting to slaves'); } else { die "Unexpected recursion methods: @$methods"; } return $slaves; } sub _resolve_recursion_methods { my ($self, $dsn) = @_; my $o = $self->{OptionParser}; if ( $o->got('recursion-method') ) { return $o->get('recursion-method'); } elsif ( $dsn && ($dsn->{P} || 3306) != 3306 ) { PTDEBUG && _d('Port number is non-standard; using only hosts method'); return [qw(hosts)]; } else { return $o->get('recursion-method'); } } sub recurse_to_slaves { my ( $self, $args, $level ) = @_; $level ||= 0; my $dp = $self->{DSNParser}; my $recurse = $args->{recurse} || $self->{OptionParser}->get('recurse'); my $dsn = $args->{dsn}; my $slave_user = $args->{slave_user} || ''; my $slave_password = $args->{slave_password} || ''; my $methods = $self->_resolve_recursion_methods($dsn); PTDEBUG && _d('Recursion methods:', @$methods); if ( lc($methods->[0]) eq 'none' ) { PTDEBUG && _d('Not recursing to slaves'); return; } my $slave_dsn = $dsn; if ($slave_user) { $slave_dsn->{u} = $slave_user; PTDEBUG && _d("Using slave user $slave_user on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($slave_password) { $slave_dsn->{p} = $slave_password; PTDEBUG && _d("Slave password set"); } my $dbh; eval { $dbh = $args->{dbh} || $dp->get_dbh( $dp->get_cxn_params($slave_dsn), { AutoCommit => 1 }); PTDEBUG && _d('Connected to', $dp->as_string($slave_dsn)); }; if ( $EVAL_ERROR ) { print STDERR "Cannot connect to ", $dp->as_string($slave_dsn), "\n" or die "Cannot print: $OS_ERROR"; return; } my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); my ($id) = $dbh->selectrow_array($sql); PTDEBUG && _d('Working on server ID', $id); my 
$master_thinks_i_am = $dsn->{server_id}; if ( !defined $id || ( defined $master_thinks_i_am && $master_thinks_i_am != $id ) || $args->{server_ids_seen}->{$id}++ ) { PTDEBUG && _d('Server ID seen, or not what master said'); if ( $args->{skip_callback} ) { $args->{skip_callback}->($dsn, $dbh, $level, $args->{parent}); } return; } $args->{callback}->($dsn, $dbh, $level, $args->{parent}); if ( !defined $recurse || $level < $recurse ) { my @slaves = grep { !$_->{master_id} || $_->{master_id} == $id } # Only my slaves. $self->find_slave_hosts($dp, $dbh, $dsn, $methods); foreach my $slave ( @slaves ) { PTDEBUG && _d('Recursing from', $dp->as_string($dsn), 'to', $dp->as_string($slave)); $self->recurse_to_slaves( { %$args, dsn => $slave, dbh => undef, parent => $dsn, slave_user => $slave_user, $slave_password => $slave_password }, $level + 1 ); } } } sub find_slave_hosts { my ( $self, $dsn_parser, $dbh, $dsn, $methods ) = @_; PTDEBUG && _d('Looking for slaves on', $dsn_parser->as_string($dsn), 'using methods', @$methods); my @slaves; METHOD: foreach my $method ( @$methods ) { my $find_slaves = "_find_slaves_by_$method"; PTDEBUG && _d('Finding slaves with', $find_slaves); @slaves = $self->$find_slaves($dsn_parser, $dbh, $dsn); last METHOD if @slaves; } PTDEBUG && _d('Found', scalar(@slaves), 'slaves'); return @slaves; } sub _find_slaves_by_processlist { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves = map { my $slave = $dsn_parser->parse("h=$_", $dsn); $slave->{source} = 'processlist'; $slave; } grep { $_ } map { my ( $host ) = $_->{host} =~ m/^([^:]+):/; $host ||= $_->{host}; if ( $host eq 'localhost' ) { $host = '127.0.0.1'; # Replication never uses sockets. } $host; } $self->get_connected_slaves($dbh); return @slaves; } sub _find_slaves_by_hosts { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves; my $sql = 'SHOW SLAVE HOSTS'; PTDEBUG && _d($dbh, $sql); @slaves = @{$dbh->selectall_arrayref($sql, { Slice => {} })}; if ( @slaves ) { PTDEBUG && _d('Found some SHOW SLAVE HOSTS info'); @slaves = map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; my $spec = "h=$hash{host},P=$hash{port}" . ( $hash{user} ? ",u=$hash{user}" : '') . ( $hash{password} ? ",p=$hash{password}" : ''); my $dsn = $dsn_parser->parse($spec, $dsn); $dsn->{server_id} = $hash{server_id}; $dsn->{master_id} = $hash{master_id}; $dsn->{source} = 'hosts'; $dsn; } @slaves; } return @slaves; } sub get_connected_slaves { my ( $self, $dbh ) = @_; my $show = "SHOW GRANTS FOR "; my $user = 'CURRENT_USER()'; my $sql = $show . $user; PTDEBUG && _d($dbh, $sql); my $proc; eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; if ( $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/no such grant defined for user/ ) { PTDEBUG && _d('Retrying SHOW GRANTS without host; error:', $EVAL_ERROR); ($user) = split('@', $user); $sql = $show . 
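# Slaves are discovered either from SHOW PROCESSLIST (looking for Binlog
# Dump threads and resolving their hosts) or from SHOW SLAVE HOSTS, in the
# order given by the recursion methods; the first method that returns
# anything wins.  get_connected_slaves() also checks, via SHOW GRANTS, that
# the current user has the PROCESS privilege before reading the
# processlist.  A hedged sketch, assuming $ms is a MasterSlave object and
# $dbh/$dsn point at the master:
#
#   my @slaves = $ms->find_slave_hosts($dp, $dbh, $dsn,
#                                      [qw(processlist hosts)]);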
$user; PTDEBUG && _d($sql); eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; } die "Failed to $sql: $EVAL_ERROR" if $EVAL_ERROR; } if ( !$proc ) { die "You do not have the PROCESS privilege"; } $sql = 'SHOW FULL PROCESSLIST'; PTDEBUG && _d($dbh, $sql); grep { $_->{command} =~ m/Binlog Dump/i } map { # Lowercase the column names my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{$dbh->selectall_arrayref($sql, { Slice => {} })}; } sub is_master_of { my ( $self, $master, $slave ) = @_; my $master_status = $self->get_master_status($master) or die "The server specified as a master is not a master"; my $slave_status = $self->get_slave_status($slave) or die "The server specified as a slave is not a slave"; my @connected = $self->get_connected_slaves($master) or die "The server specified as a master has no connected slaves"; my (undef, $port) = $master->selectrow_array("SHOW VARIABLES LIKE 'port'"); if ( $port != $slave_status->{master_port} ) { die "The slave is connected to $slave_status->{master_port} " . "but the master's port is $port"; } if ( !grep { $slave_status->{master_user} eq $_->{user} } @connected ) { die "I don't see any slave I/O thread connected with user " . $slave_status->{master_user}; } if ( ($slave_status->{slave_io_state} || '') eq 'Waiting for master to send event' ) { my ( $master_log_name, $master_log_num ) = $master_status->{file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; my ( $slave_log_name, $slave_log_num ) = $slave_status->{master_log_file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; if ( $master_log_name ne $slave_log_name || abs($master_log_num - $slave_log_num) > 1 ) { die "The slave thinks it is reading from " . "$slave_status->{master_log_file}, but the " . "master is writing to $master_status->{file}"; } } return 1; } sub get_master_dsn { my ( $self, $dbh, $dsn, $dsn_parser ) = @_; my $master = $self->get_slave_status($dbh) or return undef; my $spec = "h=$master->{master_host},P=$master->{master_port}"; return $dsn_parser->parse($spec, $dsn); } sub get_slave_status { my ( $self, $dbh ) = @_; if ( !$self->{not_a_slave}->{$dbh} ) { my $sth = $self->{sths}->{$dbh}->{SLAVE_STATUS} ||= $dbh->prepare('SHOW SLAVE STATUS'); PTDEBUG && _d($dbh, 'SHOW SLAVE STATUS'); $sth->execute(); my ($sss_rows) = $sth->fetchall_arrayref({}); # Show Slave Status rows my $ss; if ( $sss_rows && @$sss_rows ) { if (scalar @$sss_rows > 1) { if (!$self->{channel}) { die 'This server returned more than one row for SHOW SLAVE STATUS but "channel" was not specified on the command line'; } for my $row (@$sss_rows) { $row = { map { lc($_) => $row->{$_} } keys %$row }; # lowercase the keys if ($row->{channel_name} eq $self->{channel}) { $ss = $row; last; } } } else { if ($sss_rows->[0]->{channel_name} && $sss_rows->[0]->{channel_name} ne $self->{channel}) { die 'This server is using replication channels but "channel" was not specified on the command line'; } else { $ss = $sss_rows->[0]; } } if ( $ss && %$ss ) { $ss = { map { lc($_) => $ss->{$_} } keys %$ss }; # lowercase the keys return $ss; } } PTDEBUG && _d('This server returns nothing for SHOW SLAVE STATUS'); $self->{not_a_slave}->{$dbh}++; } } sub get_master_status { my ( $self, $dbh ) = @_; if ( $self->{not_a_master}->{$dbh} ) { PTDEBUG && _d('Server on dbh', $dbh, 'is not a master'); return; } my $sth = $self->{sths}->{$dbh}->{MASTER_STATUS} ||= $dbh->prepare('SHOW MASTER STATUS'); PTDEBUG && _d($dbh, 'SHOW MASTER STATUS'); $sth->execute(); my ($ms) = @{$sth->fetchall_arrayref({})}; PTDEBUG && _d( 
$ms ? map { "$_=" . (defined $ms->{$_} ? $ms->{$_} : '') } keys %$ms : ''); if ( !$ms || scalar keys %$ms < 2 ) { PTDEBUG && _d('Server on dbh', $dbh, 'does not seem to be a master'); $self->{not_a_master}->{$dbh}++; } return { map { lc($_) => $ms->{$_} } keys %$ms }; # lowercase the keys } sub wait_for_master { my ( $self, %args ) = @_; my @required_args = qw(master_status slave_dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($master_status, $slave_dbh) = @args{@required_args}; my $timeout = $args{timeout} || 60; my $result; my $waited; if ( $master_status ) { my $slave_status; eval { $slave_status = $self->get_slave_status($slave_dbh); }; if ($EVAL_ERROR) { return { result => undef, waited => 0, error =>'Wait for master: this is a multi-master slave but "channel" was not specified on the command line', }; } my $server_version = VersionParser->new($slave_dbh); my $channel_sql = $server_version > '5.6' && $self->{channel} ? ", '$self->{channel}'" : ''; my $sql = "SELECT MASTER_POS_WAIT('$master_status->{file}', $master_status->{position}, $timeout $channel_sql)"; PTDEBUG && _d($slave_dbh, $sql); my $start = time; ($result) = $slave_dbh->selectrow_array($sql); $waited = time - $start; PTDEBUG && _d('Result of waiting:', $result); PTDEBUG && _d("Waited", $waited, "seconds"); } else { PTDEBUG && _d('Not waiting: this server is not a master'); } return { result => $result, waited => $waited, }; } sub stop_slave { my ( $self, $dbh ) = @_; my $sth = $self->{sths}->{$dbh}->{STOP_SLAVE} ||= $dbh->prepare('STOP SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } sub start_slave { my ( $self, $dbh, $pos ) = @_; if ( $pos ) { my $sql = "START SLAVE UNTIL MASTER_LOG_FILE='$pos->{file}', " . "MASTER_LOG_POS=$pos->{position}"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } else { my $sth = $self->{sths}->{$dbh}->{START_SLAVE} ||= $dbh->prepare('START SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } } sub catchup_to_master { my ( $self, $slave, $master, $timeout ) = @_; $self->stop_slave($master); $self->stop_slave($slave); my $slave_status = $self->get_slave_status($slave); my $slave_pos = $self->repl_posn($slave_status); my $master_status = $self->get_master_status($master); my $master_pos = $self->repl_posn($master_status); PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); my $result; if ( $self->pos_cmp($slave_pos, $master_pos) < 0 ) { PTDEBUG && _d('Waiting for slave to catch up to master'); $self->start_slave($slave, $master_pos); $result = $self->wait_for_master( master_status => $master_status, slave_dbh => $slave, timeout => $timeout, master_status => $master_status ); if ($result->{error}) { die $result->{error}; } if ( !defined $result->{result} ) { $slave_status = $self->get_slave_status($slave); if ( !$self->slave_is_running($slave_status) ) { PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); $slave_pos = $self->repl_posn($slave_status); if ( $self->pos_cmp($slave_pos, $master_pos) != 0 ) { die "MASTER_POS_WAIT() returned NULL but slave has not " . 
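# wait_for_master() runs MASTER_POS_WAIT() on the slave until it reaches
# the master's binlog coordinates (appending a channel name on 5.6+ when
# replication channels are used), and catchup_to_master() combines it with
# STOP SLAVE / START SLAVE UNTIL.  A hedged sketch, assuming $master_dbh
# and $slave_dbh are connected handles:
#
#   my $ms     = MasterSlave->new(OptionParser => $o, DSNParser => $dp,
#                                 Quoter => Quoter->new());
#   my $status = $ms->get_master_status($master_dbh);
#   my $res    = $ms->wait_for_master(master_status => $status,
#                                     slave_dbh     => $slave_dbh,
#                                     timeout       => 60);
#   # $res->{result} is what MASTER_POS_WAIT() returned and $res->{waited}
#   # is the number of seconds spent waiting.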
"caught up to master"; } PTDEBUG && _d('Slave is caught up to master and stopped'); } else { die "Slave has not caught up to master and it is still running"; } } } else { PTDEBUG && _d("Slave is already caught up to master"); } return $result; } sub catchup_to_same_pos { my ( $self, $s1_dbh, $s2_dbh ) = @_; $self->stop_slave($s1_dbh); $self->stop_slave($s2_dbh); my $s1_status = $self->get_slave_status($s1_dbh); my $s2_status = $self->get_slave_status($s2_dbh); my $s1_pos = $self->repl_posn($s1_status); my $s2_pos = $self->repl_posn($s2_status); if ( $self->pos_cmp($s1_pos, $s2_pos) < 0 ) { $self->start_slave($s1_dbh, $s2_pos); } elsif ( $self->pos_cmp($s2_pos, $s1_pos) < 0 ) { $self->start_slave($s2_dbh, $s1_pos); } $s1_status = $self->get_slave_status($s1_dbh); $s2_status = $self->get_slave_status($s2_dbh); $s1_pos = $self->repl_posn($s1_status); $s2_pos = $self->repl_posn($s2_status); if ( $self->slave_is_running($s1_status) || $self->slave_is_running($s2_status) || $self->pos_cmp($s1_pos, $s2_pos) != 0) { die "The servers aren't both stopped at the same position"; } } sub slave_is_running { my ( $self, $slave_status ) = @_; return ($slave_status->{slave_sql_running} || 'No') eq 'Yes'; } sub has_slave_updates { my ( $self, $dbh ) = @_; my $sql = q{SHOW VARIABLES LIKE 'log_slave_updates'}; PTDEBUG && _d($dbh, $sql); my ($name, $value) = $dbh->selectrow_array($sql); return $value && $value =~ m/^(1|ON)$/; } sub repl_posn { my ( $self, $status ) = @_; if ( exists $status->{file} && exists $status->{position} ) { return { file => $status->{file}, position => $status->{position}, }; } else { return { file => $status->{relay_master_log_file}, position => $status->{exec_master_log_pos}, }; } } sub get_slave_lag { my ( $self, $dbh ) = @_; my $stat = $self->get_slave_status($dbh); return unless $stat; # server is not a slave return $stat->{seconds_behind_master}; } sub pos_cmp { my ( $self, $a, $b ) = @_; return $self->pos_to_string($a) cmp $self->pos_to_string($b); } sub short_host { my ( $self, $dsn ) = @_; my ($host, $port); if ( $dsn->{master_host} ) { $host = $dsn->{master_host}; $port = $dsn->{master_port}; } else { $host = $dsn->{h}; $port = $dsn->{P}; } return ($host || '[default]') . ( ($port || 3306) == 3306 ? '' : ":$port" ); } sub is_replication_thread { my ( $self, $query, %args ) = @_; return unless $query; my $type = lc($args{type} || 'all'); die "Invalid type: $type" unless $type =~ m/^binlog_dump|slave_io|slave_sql|all$/i; my $match = 0; if ( $type =~ m/binlog_dump|all/i ) { $match = 1 if ($query->{Command} || $query->{command} || '') eq "Binlog Dump"; } if ( !$match ) { if ( ($query->{User} || $query->{user} || '') eq "system user" ) { PTDEBUG && _d("Slave replication thread"); if ( $type ne 'all' ) { my $state = $query->{State} || $query->{state} || ''; if ( $state =~ m/^init|end$/ ) { PTDEBUG && _d("Special state:", $state); $match = 1; } else { my ($slave_sql) = $state =~ m/ ^(Waiting\sfor\sthe\snext\sevent |Reading\sevent\sfrom\sthe\srelay\slog |Has\sread\sall\srelay\slog;\swaiting |Making\stemp\sfile |Waiting\sfor\sslave\smutex\son\sexit)/xi; $match = $type eq 'slave_sql' && $slave_sql ? 1 : $type eq 'slave_io' && !$slave_sql ? 
1 : 0; } } else { $match = 1; } } else { PTDEBUG && _d('Not system user'); } if ( !defined $args{check_known_ids} || $args{check_known_ids} ) { my $id = $query->{Id} || $query->{id}; if ( $match ) { $self->{replication_thread}->{$id} = 1; } else { if ( $self->{replication_thread}->{$id} ) { PTDEBUG && _d("Thread ID is a known replication thread ID"); $match = 1; } } } } PTDEBUG && _d('Matches', $type, 'replication thread:', ($match ? 'yes' : 'no'), '; match:', $match); return $match; } sub get_replication_filters { my ( $self, %args ) = @_; my @required_args = qw(dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh) = @args{@required_args}; my %filters = (); my $status = $self->get_master_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( binlog_do_db binlog_ignore_db ); } $status = $self->get_slave_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( replicate_do_db replicate_ignore_db replicate_do_table replicate_ignore_table replicate_wild_do_table replicate_wild_ignore_table ); my $sql = "SHOW VARIABLES LIKE 'slave_skip_errors'"; PTDEBUG && _d($dbh, $sql); my $row = $dbh->selectrow_arrayref($sql); $filters{slave_skip_errors} = $row->[1] if $row->[1] && $row->[1] ne 'OFF'; } return \%filters; } sub pos_to_string { my ( $self, $pos ) = @_; my $fmt = '%s/%020d'; return sprintf($fmt, @{$pos}{qw(file position)}); } sub reset_known_replication_threads { my ( $self ) = @_; $self->{replication_thread} = {}; return; } sub get_cxn_from_dsn_table { my ($self, %args) = @_; my @required_args = qw(dsn_table_dsn make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn_table_dsn, $make_cxn) = @args{@required_args}; PTDEBUG && _d('DSN table DSN:', $dsn_table_dsn); my $dp = $self->{DSNParser}; my $q = $self->{Quoter}; my $dsn = $dp->parse($dsn_table_dsn); my $dsn_table; if ( $dsn->{D} && $dsn->{t} ) { $dsn_table = $q->quote($dsn->{D}, $dsn->{t}); } elsif ( $dsn->{t} && $dsn->{t} =~ m/\./ ) { $dsn_table = $q->quote($q->split_unquote($dsn->{t})); } else { die "DSN table DSN does not specify a database (D) " . "or a database-qualified table (t)"; } my $dsn_tbl_cxn = $make_cxn->(dsn => $dsn); my $dbh = $dsn_tbl_cxn->connect(); my $sql = "SELECT dsn FROM $dsn_table ORDER BY id"; PTDEBUG && _d($sql); my $dsn_strings = $dbh->selectcol_arrayref($sql); my @cxn; if ( $dsn_strings ) { foreach my $dsn_string ( @$dsn_strings ) { PTDEBUG && _d('DSN from DSN table:', $dsn_string); push @cxn, $make_cxn->(dsn_string => $dsn_string); } } return \@cxn; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MasterSlave package # ########################################################################### # ########################################################################### # FlowControlWaiter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/FlowControlWaiter.pm # t/lib/FlowControlWaiter.t # See https://launchpad.net/percona-toolkit for more information. 
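# FlowControlWaiter (below) throttles work on Percona XtraDB Cluster nodes:
# it samples wsrep_flow_control_paused_ns and sleeps while the fraction of
# time spent paused since the last check exceeds the max_flow_ctl
# percentage.  A hedged sketch, assuming $node_dbh is a handle to a PXC
# node:
#
#   my $fc_waiter = FlowControlWaiter->new(
#      node         => $node_dbh,
#      max_flow_ctl => 50,              # wait while paused more than 50%
#      oktorun      => sub { 1 },
#      sleep        => sub { sleep 1 },
#   );
#   $fc_waiter->wait();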
# ########################################################################### { package FlowControlWaiter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::HiRes qw(sleep time); use Data::Dumper; sub new { my ( $class, %args ) = @_; my @required_args = qw(oktorun node sleep max_flow_ctl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my $self = { %args }; $self->{last_time} = time(); my (undef, $last_fc_ns) = $self->{node}->selectrow_array('SHOW STATUS LIKE "wsrep_flow_control_paused_ns"'); $self->{last_fc_secs} = $last_fc_ns/1000_000_000; return bless $self, $class; } sub wait { my ( $self, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $pr = $args{Progress}; my $oktorun = $self->{oktorun}; my $sleep = $self->{sleep}; my $node = $self->{node}; my $max_avg = $self->{max_flow_ctl}/100; my $too_much_fc = 1; my $pr_callback; if ( $pr ) { $pr_callback = sub { print STDERR "Pausing because PXC Flow Control is active\n"; return; }; $pr->set_callback($pr_callback); } while ( $oktorun->() && $too_much_fc ) { my $current_time = time(); my (undef, $current_fc_ns) = $node->selectrow_array('SHOW STATUS LIKE "wsrep_flow_control_paused_ns"'); my $current_fc_secs = $current_fc_ns/1000_000_000; my $current_avg = ($current_fc_secs - $self->{last_fc_secs}) / ($current_time - $self->{last_time}); if ( $current_avg > $max_avg ) { if ( $pr ) { $pr->update(sub { return 0; }); } PTDEBUG && _d('Calling sleep callback'); if ( $self->{simple_progress} ) { print STDERR "Waiting for Flow Control to abate\n"; } $sleep->(); } else { $too_much_fc = 0; } $self->{last_time} = $current_time; $self->{last_fc_secs} = $current_fc_secs; } PTDEBUG && _d('Flow Control is Ok'); return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End FlowControlWaiter package # ########################################################################### # ########################################################################### # Cxn package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Cxn.pm # t/lib/Cxn.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Cxn; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Scalar::Util qw(blessed); use constant { PTDEBUG => $ENV{PTDEBUG} || 0, PERCONA_TOOLKIT_TEST_USE_DSN_NAMES => $ENV{PERCONA_TOOLKIT_TEST_USE_DSN_NAMES} || 0, }; sub new { my ( $class, %args ) = @_; my @required_args = qw(DSNParser OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($dp, $o) = @args{@required_args}; my $dsn_defaults = $dp->parse_options($o); my $prev_dsn = $args{prev_dsn}; my $dsn = $args{dsn}; if ( !$dsn ) { $args{dsn_string} ||= 'h=' . 
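# Cxn wraps a DSN together with its database handle: connect() (re)opens
# the connection, prompting for a password when --ask-pass is given, and
# name() prefers the server's reported hostname over the DSN string.  A
# hedged sketch, assuming $dp and $o are the tool's DSNParser and
# OptionParser (the DSN string is an example):
#
#   my $cxn = Cxn->new(DSNParser => $dp, OptionParser => $o,
#                      dsn_string => 'h=127.0.0.1,P=3306');
#   my $dbh = $cxn->connect();
#   print $cxn->name(), "\n";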
($dsn_defaults->{h} || 'localhost'); $dsn = $dp->parse( $args{dsn_string}, $prev_dsn, $dsn_defaults); } elsif ( $prev_dsn ) { $dsn = $dp->copy($prev_dsn, $dsn); } my $dsn_name = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; my $self = { dsn => $dsn, dbh => $args{dbh}, dsn_name => $dsn_name, hostname => '', set => $args{set}, NAME_lc => defined($args{NAME_lc}) ? $args{NAME_lc} : 1, dbh_set => 0, ask_pass => $o->get('ask-pass'), DSNParser => $dp, is_cluster_node => undef, parent => $args{parent}, }; return bless $self, $class; } sub connect { my ( $self, %opts ) = @_; my $dsn = $opts{dsn} || $self->{dsn}; my $dp = $self->{DSNParser}; my $dbh = $self->{dbh}; if ( !$dbh || !$dbh->ping() ) { if ( $self->{ask_pass} && !$self->{asked_for_pass} && !defined $dsn->{p} ) { $dsn->{p} = OptionParser::prompt_noecho("Enter MySQL password: "); $self->{asked_for_pass} = 1; } $dbh = $dp->get_dbh( $dp->get_cxn_params($dsn), { AutoCommit => 1, %opts, }, ); } $dbh = $self->set_dbh($dbh); if ( $opts{dsn} ) { $self->{dsn} = $dsn; $self->{dsn_name} = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; } PTDEBUG && _d($dbh, 'Connected dbh to', $self->{hostname},$self->{dsn_name}); return $dbh; } sub set_dbh { my ($self, $dbh) = @_; if ( $self->{dbh} && $self->{dbh} == $dbh && $self->{dbh_set} ) { PTDEBUG && _d($dbh, 'Already set dbh'); return $dbh; } PTDEBUG && _d($dbh, 'Setting dbh'); $dbh->{FetchHashKeyName} = 'NAME_lc' if $self->{NAME_lc}; my $sql = 'SELECT @@server_id /*!50038 , @@hostname*/'; PTDEBUG && _d($dbh, $sql); my ($server_id, $hostname) = $dbh->selectrow_array($sql); PTDEBUG && _d($dbh, 'hostname:', $hostname, $server_id); if ( $hostname ) { $self->{hostname} = $hostname; } if ( $self->{parent} ) { PTDEBUG && _d($dbh, 'Setting InactiveDestroy=1 in parent'); $dbh->{InactiveDestroy} = 1; } if ( my $set = $self->{set}) { $set->($dbh); } $self->{dbh} = $dbh; $self->{dbh_set} = 1; return $dbh; } sub lost_connection { my ($self, $e) = @_; return 0 unless $e; return $e =~ m/MySQL server has gone away/ || $e =~ m/Lost connection to MySQL server/ || $e =~ m/Server shutdown in progress/; } sub dbh { my ($self) = @_; return $self->{dbh}; } sub dsn { my ($self) = @_; return $self->{dsn}; } sub name { my ($self) = @_; return $self->{dsn_name} if PERCONA_TOOLKIT_TEST_USE_DSN_NAMES; return $self->{hostname} || $self->{dsn_name} || 'unknown host'; } sub description { my ($self) = @_; return sprintf("%s -> %s:%s", $self->name(), $self->{dsn}->{h}, $self->{dsn}->{P} || 'socket'); } sub get_id { my ($self, $cxn) = @_; $cxn ||= $self; my $unique_id; if ($cxn->is_cluster_node()) { # for cluster we concatenate various variables to maximize id 'uniqueness' across versions my $sql = q{SHOW STATUS LIKE 'wsrep\_local\_index'}; my (undef, $wsrep_local_index) = $cxn->dbh->selectrow_array($sql); PTDEBUG && _d("Got cluster wsrep_local_index: ",$wsrep_local_index); $unique_id = $wsrep_local_index."|"; foreach my $val ('server\_id', 'wsrep\_sst\_receive\_address', 'wsrep\_node\_name', 'wsrep\_node\_address') { my $sql = "SHOW VARIABLES LIKE '$val'"; PTDEBUG && _d($cxn->name, $sql); my (undef, $val) = $cxn->dbh->selectrow_array($sql); $unique_id .= "|$val"; } } else { my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); $unique_id = $cxn->dbh->selectrow_array($sql); } PTDEBUG && _d("Generated unique id for cluster:", $unique_id); return $unique_id; } sub is_cluster_node { my ($self, $cxn) = @_; $cxn ||= $self; my $sql = "SHOW VARIABLES LIKE 'wsrep\_on'"; my $dbh; if 
($cxn->isa('DBI::db')) { $dbh = $cxn; PTDEBUG && _d($sql); #don't invoke name() if it's not a Cxn! } else { $dbh = $cxn->dbh(); PTDEBUG && _d($cxn->name, $sql); } my $row = $dbh->selectrow_arrayref($sql); return $row && $row->[1] && ($row->[1] eq 'ON' || $row->[1] eq '1') ? 1 : 0; } sub remove_duplicate_cxns { my ($self, %args) = @_; my @cxns = @{$args{cxns}}; my $seen_ids = $args{seen_ids} || {}; PTDEBUG && _d("Removing duplicates from ", join(" ", map { $_->name } @cxns)); my @trimmed_cxns; for my $cxn ( @cxns ) { my $id = $cxn->get_id(); PTDEBUG && _d('Server ID for ', $cxn->name, ': ', $id); if ( ! $seen_ids->{$id}++ ) { push @trimmed_cxns, $cxn } else { PTDEBUG && _d("Removing ", $cxn->name, ", ID ", $id, ", because we've already seen it"); } } return \@trimmed_cxns; } sub DESTROY { my ($self) = @_; PTDEBUG && _d('Destroying cxn'); if ( $self->{parent} ) { PTDEBUG && _d($self->{dbh}, 'Not disconnecting dbh in parent'); } elsif ( $self->{dbh} && blessed($self->{dbh}) && $self->{dbh}->can("disconnect") ) { PTDEBUG && _d($self->{dbh}, 'Disconnecting dbh on', $self->{hostname}, $self->{dsn_name}); $self->{dbh}->disconnect(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Cxn package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? 
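# Host header value: drop the port when it is the scheme's default
# (80 for http, 443 for https); otherwise send "host:port".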
$host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? $1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for 
$host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? 
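# Read the response body in chunks of at most BUFSIZE (32 KiB), handing each
# chunk to the caller's callback until content-length bytes have been read.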
BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! = 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or 
return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! %$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. 
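# A minimal calling sketch (mirroring the invocation made later in this file;
# the $dbh/$dsn values below are placeholders, not real connections):
#
#   VersionCheck::version_check(
#       force     => 0,
#       instances => [ { dbh => $dbh, dsn => $dsn } ],
#   );
#
# The check is skipped when running from a .bzr/.git source checkout (unless
# forced), when IO::Socket::SSL is unavailable, or for instances already
# checked within the last 24 hours (see version_check_time_limit).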
# ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . $file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
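# (If PTDEBUG_VERSION_CHECK is set, the tool warns and exits with status 255
# right after the version check; this appears to be a debug/test-only hook.)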
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
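# The client payload is newline-separated records of the form
# "<instance-or-general id>;<item name>;<version>", one per reported program.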
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_archiver; use utf8; use English qw(-no_match_vars); use List::Util qw(max); use IO::File; use sigtrap qw(handler finish untrapped normal-signals); use Time::HiRes qw(gettimeofday sleep time); use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Quotekeys = 0; use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; # Global variables; as few as possible. my $oktorun = 1; my $txn_cnt = 0; my $cnt = 0; my $can_retry = 1; my $archive_fh; my $get_sth; my ( $OUT_OF_RETRIES, $ROLLED_BACK, $ALL_IS_WELL ) = ( 0, -1, 1 ); my ( $src, $dst ); my $pxc_version = '0'; my $fields_separated_by = "\t"; my $optionally_enclosed_by; # Holds the arguments for the $sth's bind variables, so it can be re-tried # easily. my @beginning_of_txn; my $q = new Quoter; sub main { local @ARGV = @_; # set global ARGV for this package # Reset global vars else tests, which run this tool as a module, # may encounter weird results. 
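# (Tests call this entry point directly, e.g.
#   pt_archiver::main(qw(--source h=127.0.0.1,D=test,t=t1 --where 1=1 --purge));
# so every one of those globals has to be put back to its initial state here.)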
$oktorun = 1; $txn_cnt = 0; $cnt = 0; $can_retry = 1; $archive_fh = undef; $get_sth = undef; ($src, $dst) = (undef, undef); @beginning_of_txn = (); undef *trace; ($OUT_OF_RETRIES, $ROLLED_BACK, $ALL_IS_WELL ) = (0, -1, 1); # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); # Frequently used options. $src = $o->get('source'); $dst = $o->get('dest'); my $sentinel = $o->get('sentinel'); my $bulk_del = $o->get('bulk-delete'); my $commit_each = $o->get('commit-each'); my $limit = $o->get('limit'); my $archive_file = $o->get('file'); my $txnsize = $o->get('txn-size'); my $quiet = $o->get('quiet'); my $got_charset = $o->get('charset'); # First things first: if --stop was given, create the sentinel file. if ( $o->get('stop') ) { my $sentinel_fh = IO::File->new($sentinel, ">>") or die "Cannot open $sentinel: $OS_ERROR\n"; print $sentinel_fh "Remove this file to permit pt-archiver to run\n" or die "Cannot write to $sentinel: $OS_ERROR\n"; close $sentinel_fh or die "Cannot close $sentinel: $OS_ERROR\n"; print STDOUT "Successfully created file $sentinel\n" unless $quiet; return 0; } # Generate a filename with sprintf-like formatting codes. if ( $archive_file ) { my @time = localtime(); my %fmt = ( d => sprintf('%02d', $time[3]), H => sprintf('%02d', $time[2]), i => sprintf('%02d', $time[1]), m => sprintf('%02d', $time[4] + 1), s => sprintf('%02d', $time[0]), Y => $time[5] + 1900, D => $src && $src->{D} ? $src->{D} : '', t => $src && $src->{t} ? $src->{t} : '', ); $archive_file =~ s/%([dHimsYDt])/$fmt{$1}/g; } if ( !$o->got('help') ) { $o->save_error("--source DSN requires a 't' (table) part") unless $src->{t}; if ( $dst ) { # Ensure --source and --dest don't point to the same place my $same = 1; foreach my $arg ( qw(h P D t S) ) { if ( defined $src->{$arg} && defined $dst->{$arg} && $src->{$arg} ne $dst->{$arg} ) { $same = 0; last; } } if ( $same ) { $o->save_error("--source and --dest refer to the same table"); } } if ( $o->get('bulk-insert') ) { $o->save_error("--bulk-insert is meaningless without a destination") unless $dst; $bulk_del = 1; # VERY IMPORTANT for safety. } if ( $bulk_del && $limit < 2 ) { $o->save_error("--bulk-delete is meaningless with --limit 1"); } if ( $o->got('purge') && $o->got('no-delete') ) { $o->save_error("--purge and --no-delete are mutually exclusive"); } } if ( $bulk_del || $o->get('bulk-insert') ) { $o->set('commit-each', 1); } $o->usage_or_errors(); # ######################################################################## # If --pid, check it first since we'll die if it already exits. # ######################################################################## my $daemon; if ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. Keep $daemon # in the the scope of main() because when it's destroyed it automatically # removes the PID file. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # ######################################################################## # Set up statistics. 
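# With --statistics, trace() times and counts each named operation (the cost
# of the gettimeofday() call itself is subtracted); without it, trace() just
# runs the supplied callback with no bookkeeping.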
# ######################################################################## my %statistics = (); my $stat_start; if ( $o->get('statistics') ) { my $start = gettimeofday(); my $obs_cost = gettimeofday() - $start; # cost of observation *trace = sub { my ( $thing, $sub ) = @_; my $start = gettimeofday(); $sub->(); $statistics{$thing . '_time'} += (gettimeofday() - $start - $obs_cost); ++$statistics{$thing . '_count'}; $stat_start ||= $start; } } else { # Generate a version that doesn't do any timing *trace = sub { my ( $thing, $sub ) = @_; $sub->(); } } # ######################################################################## # Inspect DB servers and tables. # ######################################################################## my $tp = new TableParser(Quoter => $q); foreach my $table ( grep { $_ } ($src, $dst) ) { my $ac = !$txnsize && !$commit_each; if ( !defined $table->{p} && $o->get('ask-pass') ) { $table->{p} = OptionParser::prompt_noecho("Enter password: "); } my $dbh = $dp->get_dbh( $dp->get_cxn_params($table), { AutoCommit => $ac }); PTDEBUG && _d('Inspecting table on', $dp->as_string($table)); # Set options that can enable removing data on the master # and archiving it on the slaves. if ( $table->{a} ) { $dbh->do("USE $table->{a}"); } if ( $table->{b} ) { $dbh->do("SET SQL_LOG_BIN=0"); } my ($dbh_version) = $dbh->selectrow_array("SELECT version()"); #if ($dbh_version =~ m/^(\d+\.\d+)\.\d+.*/ && $1 ge '8.0' && !$o->get('charset')) { if ($dbh_version =~ m/^(\d+\.\d+)\.\d+.*/ && $1 ge '8.0') { PTDEBUG && _d("MySQL 8.0+ detected and charset was not specified.\n Setting character_set_client = utf8mb4 and --charset=utf8"); $dbh->do('/*!40101 SET character_set_connection = utf8mb4 */;'); $o->set('charset', 'utf8'); } $table->{dbh} = $dbh; $table->{irot} = get_irot($dbh); $can_retry = $can_retry && !$table->{irot}; $table->{db_tbl} = $q->quote( map { $_ =~ s/(^`|`$)//g; $_; } grep { $_ } ( $table->{D}, $table->{t} ) ); # Create objects for archivable and dependency handling, BEFORE getting # the tbl structure (because the object might do some setup, including # creating the table to be archived). if ( $table->{m} ) { eval "require $table->{m}"; die $EVAL_ERROR if $EVAL_ERROR; trace('plugin_start', sub { $table->{plugin} = $table->{m}->new( dbh => $table->{dbh}, db => $table->{D}, tbl => $table->{t}, OptionParser => $o, DSNParser => $dp, Quoter => $q, ); }); } $table->{info} = $tp->parse( $tp->get_create_table( $dbh, $table->{D}, $table->{t} )); if ( $o->get('check-charset') ) { my $sql = 'SELECT CONCAT(/*!40100 @@session.character_set_connection, */ "")'; PTDEBUG && _d($sql); my ($dbh_charset) = $table->{dbh}->selectrow_array($sql); if ( ($dbh_charset || "") ne ($table->{info}->{charset} || "") && !($dbh_charset eq "utf8mb4" && ($table->{info}->{charset} || "") eq ("utf8")) ) { $src->{dbh}->disconnect() if $src && $src->{dbh}; $dst->{dbh}->disconnect() if $dst && $dst->{dbh}; die "Character set mismatch: " . ($src && $table eq $src ? "--source " : "--dest ") . "DSN uses " . ($dbh_charset || "") . ", table uses " . ($table->{info}->{charset} || "") . ". You can disable this check by specifying " . "--no-check-charset.\n"; } } } if ( $o->get('primary-key-only') && !exists $src->{info}->{keys}->{PRIMARY} ) { $src->{dbh}->disconnect(); $dst->{dbh}->disconnect() if $dst && $dst->{dbh}; die "--primary-key-only was specified by the --source table " . 
"$src->{db_tbl} does not have a PRIMARY KEY"; } if ( $dst && $o->get('check-columns') ) { my @not_in_src = grep { !$src->{info}->{is_col}->{$_} } @{$dst->{info}->{cols}}; if ( @not_in_src ) { $src->{dbh}->disconnect(); $dst->{dbh}->disconnect() if $dst && $dst->{dbh}; die "The following columns exist in --dest but not --source: " . join(', ', @not_in_src) . "\n"; } my @not_in_dst = grep { !$dst->{info}->{is_col}->{$_} } @{$src->{info}->{cols}}; if ( @not_in_dst ) { $src->{dbh}->disconnect(); $dst->{dbh}->disconnect() if $dst && $dst->{dbh}; die "The following columns exist in --source but not --dest: " . join(', ', @not_in_dst) . "\n"; } } # ######################################################################## # Get lag dbh. # ######################################################################## my @lag_dbh; my $ms; if ( $o->get('check-slave-lag') ) { my $dsn_defaults = $dp->parse_options($o); my $lag_slaves_dsn = $o->get('check-slave-lag'); $ms = new MasterSlave( OptionParser => $o, DSNParser => $dp, Quoter => $q, channel => $o->get('channel'), ); # we get each slave's connection handler (and its id, for debug and reporting) for my $slave (@$lag_slaves_dsn) { my $dsn = $dp->parse($slave, $dsn_defaults); my $lag_dbh = $dp->get_dbh($dp->get_cxn_params($dsn), { AutoCommit => 1 }); my $lag_id = $ms->short_host($dsn); push @lag_dbh , {'dbh' => $lag_dbh, 'id' => $lag_id} } } # ####################################################################### # Check if it's a cluster and if so get version # Create FlowControlWaiter object if max-flow-ctl was specified and # PXC version supports it # ####################################################################### my $flow_ctl; if ( $src && $src->{dbh} && Cxn::is_cluster_node($src->{dbh}) ) { $pxc_version = VersionParser->new($src->{'dbh'}); if ( $o->got('max-flow-ctl') ) { if ( $pxc_version < '5.6' ) { die "Option '--max-flow-ctl' is only available for PXC version 5.6 " . "or higher." } else { $flow_ctl = new FlowControlWaiter( node => $src->{'dbh'}, max_flow_ctl => $o->get('max-flow-ctl'), oktorun => sub { return $oktorun }, sleep => sub { sleep($o->get('check-interval')) }, simple_progress => $o->got('progress') ? 1 : 0, ); } } } if ( $src && $src->{dbh} && !Cxn::is_cluster_node($src->{dbh}) && $o->got('max-flow-ctl') ) { die "Option '--max-flow-ctl' is for use with PXC clusters." } # ######################################################################## # Set up general plugin. # ######################################################################## my $plugin; if ( $o->get('plugin') ) { eval "require " . $o->get('plugin'); die $EVAL_ERROR if $EVAL_ERROR; $plugin = $o->get('plugin')->new( src => $src, dst => $dst, opts => $o, ); } # ######################################################################## # Design SQL statements. # ######################################################################## my $dbh = $src->{dbh}; my $nibbler = new TableNibbler( TableParser => $tp, Quoter => $q, ); my ($first_sql, $next_sql, $del_sql, $ins_sql); my ($sel_stmt, $ins_stmt, $del_stmt); my (@asc_slice, @sel_slice, @del_slice, @bulkdel_slice, @ins_slice); my @sel_cols = $o->get('columns') ? @{$o->get('columns')} # Explicit : $o->get('primary-key-only') ? @{$src->{info}->{keys}->{PRIMARY}->{cols}} : @{$src->{info}->{cols}}; # All PTDEBUG && _d("sel cols: ", @sel_cols); $del_stmt = $nibbler->generate_del_stmt( tbl_struct => $src->{info}, cols => \@sel_cols, index => $o->get('primary-key-only') ? 
'PRIMARY' : $src->{i}, ); @del_slice = @{$del_stmt->{slice}}; # Generate statement for ascending index, if desired if ( !$o->get('no-ascend') ) { $sel_stmt = $nibbler->generate_asc_stmt( tbl_struct => $src->{info}, cols => $del_stmt->{cols}, index => $del_stmt->{index}, asc_first => $o->get('ascend-first'), # A plugin might prevent rows in the source from being deleted # when doing single delete, but it cannot prevent rows from # being deleted when doing a bulk delete. asc_only => $o->get('no-delete') ? 1 : $src->{m} ? ($o->get('bulk-delete') ? 0 : 1) : 0, ) } else { $sel_stmt = { cols => $del_stmt->{cols}, index => undef, where => '1=1', slice => [], # No-ascend = no bind variables in the WHERE clause. scols => [], # No-ascend = no bind variables in the WHERE clause. }; } @asc_slice = @{$sel_stmt->{slice}}; @sel_slice = 0..$#sel_cols; $first_sql = 'SELECT' . ( $o->get('high-priority-select') ? ' HIGH_PRIORITY' : '' ) . ' /*!40001 SQL_NO_CACHE */ ' . join(',', map { $q->quote($_) } @{$sel_stmt->{cols}} ) . " FROM $src->{db_tbl}" . ( $sel_stmt->{index} ? ((VersionParser->new($dbh) >= '4.0.9' ? " FORCE" : " USE") . " INDEX(`$sel_stmt->{index}`)") : '') . " WHERE (".$o->get('where').")"; if ( $o->get('safe-auto-increment') && $sel_stmt->{index} && scalar(@{$src->{info}->{keys}->{$sel_stmt->{index}}->{cols}}) == 1 && $src->{info}->{is_autoinc}->{ $src->{info}->{keys}->{$sel_stmt->{index}}->{cols}->[0] } ) { my $col = $q->quote($sel_stmt->{scols}->[0]); my ($val) = $dbh->selectrow_array("SELECT MAX($col) FROM $src->{db_tbl}"); $first_sql .= " AND ($col < " . $q->quote_val($val) . ")"; } $next_sql = $first_sql; if ( !$o->get('no-ascend') ) { $next_sql .= " AND $sel_stmt->{where}"; } # Obtain index cols so we can order them when ascending # this ensures returned sets are disjoint when ran on partitioned tables # issue 1376561 my $index_cols; if ( $sel_stmt->{index} && $src->{info}->{keys}->{$sel_stmt->{index}}->{cols} ) { $index_cols = join(",",map { "`$_`" } @{$src->{info}->{keys}->{$sel_stmt->{index}}->{cols}}); } foreach my $thing ( $first_sql, $next_sql ) { $thing .= " ORDER BY $index_cols" if $index_cols; $thing .= " LIMIT $limit"; if ( $o->get('for-update') ) { $thing .= ' FOR UPDATE'; } elsif ( $o->get('share-lock') ) { $thing .= ' LOCK IN SHARE MODE'; } } PTDEBUG && _d("Index for DELETE:", $del_stmt->{index}); if ( !$bulk_del ) { # The LIMIT might be 1 here, because even though a SELECT can return # many rows, an INSERT only does one at a time. It would not be safe to # iterate over a SELECT that was LIMIT-ed to 500 rows, read and INSERT # one, and then delete with a LIMIT of 500. Only one row would be written # to the file; only one would be INSERT-ed at the destination. But # LIMIT 1 is actually only needed when the index is not unique # (http://code.google.com/p/maatkit/issues/detail?id=1166). $del_sql = 'DELETE' . ($o->get('low-priority-delete') ? ' LOW_PRIORITY' : '') . ($o->get('quick-delete') ? ' QUICK' : '') . " FROM $src->{db_tbl} WHERE $del_stmt->{where}"; if ( $src->{info}->{keys}->{$del_stmt->{index}}->{is_unique} ) { PTDEBUG && _d("DELETE index is unique; LIMIT 1 is not needed"); } else { PTDEBUG && _d("Adding LIMIT 1 to DELETE because DELETE index " . "is not unique"); $del_sql .= " LIMIT 1"; } } else { # Unless, of course, it's a bulk DELETE, in which case the 500 rows have # already been INSERT-ed. my $asc_stmt = $nibbler->generate_asc_stmt( tbl_struct => $src->{info}, cols => $del_stmt->{cols}, index => $del_stmt->{index}, asc_first => 0, ); $del_sql = 'DELETE' . 
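# Bulk DELETE: one statement removes the whole chunk, bounded by the first
# and last archived rows and re-qualified by the user's --where clause
# (and by LIMIT when --bulk-delete-limit is in effect).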
($o->get('low-priority-delete') ? ' LOW_PRIORITY' : '') . ($o->get('quick-delete') ? ' QUICK' : '') . " FROM $src->{db_tbl} WHERE (" . $asc_stmt->{boundaries}->{'>='} . ') AND (' . $asc_stmt->{boundaries}->{'<='} # Unlike the row-at-a-time DELETE, this one must include the user's # specified WHERE clause and an appropriate LIMIT clause. . ") AND (".$o->get('where').")" . ($o->get('bulk-delete-limit') ? " LIMIT $limit" : ""); @bulkdel_slice = @{$asc_stmt->{slice}}; } if ( $dst ) { $ins_stmt = $nibbler->generate_ins_stmt( ins_tbl => $dst->{info}, sel_cols => \@sel_cols, ); PTDEBUG && _d("inst stmt: ", Dumper($ins_stmt)); @ins_slice = @{$ins_stmt->{slice}}; if ( $o->get('bulk-insert') ) { $ins_sql = 'LOAD DATA' . ($o->get('low-priority-insert') ? ' LOW_PRIORITY' : '') . ' LOCAL INFILE ?' . ($o->get('replace') ? ' REPLACE' : '') . ($o->get('ignore') ? ' IGNORE' : '') . " INTO TABLE $dst->{db_tbl}" . ($got_charset ? "CHARACTER SET $got_charset" : "") . "(" . join(",", map { $q->quote($_) } @{$ins_stmt->{cols}} ) . ")"; } else { $ins_sql = ($o->get('replace') ? 'REPLACE' : 'INSERT') . ($o->get('low-priority-insert') ? ' LOW_PRIORITY' : '') . ($o->get('delayed-insert') ? ' DELAYED' : '') . ($o->get('ignore') ? ' IGNORE' : '') . " INTO $dst->{db_tbl}(" . join(",", map { $q->quote($_) } @{$ins_stmt->{cols}} ) . ") VALUES (" . join(",", map { "?" } @{$ins_stmt->{cols}} ) . ")"; } } else { $ins_sql = ''; } if ( PTDEBUG ) { _d("get first sql:", $first_sql); _d("get next sql:", $next_sql); _d("del row sql:", $del_sql); _d("ins row sql:", $ins_sql); } if ( $o->get('dry-run') ) { if ( !$quiet ) { print join("\n", grep { $_ } ($archive_file || ''), $first_sql, $next_sql, ($o->get('no-delete') ? '' : $del_sql), $ins_sql) , "\n"; } $src->{dbh}->disconnect(); $dst->{dbh}->disconnect() if $dst && $dst->{dbh}; return 0; } my $get_first = $dbh->prepare($first_sql); my $get_next = $dbh->prepare($next_sql); my $del_row = $dbh->prepare($del_sql); my $ins_row = $dst->{dbh}->prepare($ins_sql) if $dst; # Different $dbh! # ######################################################################## # Set MySQL options. # ######################################################################## if ( $o->get('skip-foreign-key-checks') ) { $src->{dbh}->do("/*!40014 SET FOREIGN_KEY_CHECKS=0 */"); if ( $dst ) { $dst->{dbh}->do("/*!40014 SET FOREIGN_KEY_CHECKS=0 */"); } } # ######################################################################## # Set up the plugins # ######################################################################## foreach my $table ( $dst, $src ) { next unless $table && $table->{plugin}; trace ('before_begin', sub { $table->{plugin}->before_begin( cols => \@sel_cols, allcols => $sel_stmt->{cols}, ); }); } # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ { dbh => $src->{dbh}, dsn => $src->{dsn} }, ( $dst ? { dbh => $dst->{dbh}, dsn => $dst->{dsn} } : () ), ], ); } # ######################################################################## # Start archiving. 
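# Main loop: rows are fetched in chunks of --limit; each archivable row is
# written to --file and/or inserted into --dest before being deleted from
# --source, with commits governed by --txn-size or --commit-each.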
# ######################################################################## my $start = time(); my $end = $start + ($o->get('run-time') || 0); # When to exit my $now = $start; my $last_select_time; # for --sleep-coef my $retries = $o->get('retries'); printf("%-19s %7s %7s\n", 'TIME', 'ELAPSED', 'COUNT') if $o->get('progress') && !$quiet; printf("%19s %7d %7d\n", ts($now), $now - $start, $cnt) if $o->get('progress') && !$quiet; $get_sth = $get_first; # Later it may be assigned $get_next trace('select', sub { my $select_start = time; $get_sth->execute; $last_select_time = time - $select_start; $statistics{SELECT} += $get_sth->rows; }); my $row = $get_sth->fetchrow_arrayref(); PTDEBUG && _d("First row: ", Dumper($row), 'rows:', $get_sth->rows); if ( !$row ) { $get_sth->finish; $src->{dbh}->disconnect(); $dst->{dbh}->disconnect() if $dst && $dst->{dbh}; return 0; } my $charset = $got_charset || ''; if ($charset eq 'utf8') { $charset = ":$charset"; } elsif ($charset) { eval { require Encode } or (PTDEBUG && _d("Couldn't load Encode: ", $EVAL_ERROR, "Going to try using the charset ", "passed in without checking it.")); # No need to punish a user if they did their # homework and passed in an official charset, # rather than an alias. $charset = ":encoding(" . (defined &Encode::resolve_alias ? Encode::resolve_alias($charset) || $charset : $charset) . ")"; } if ( $charset eq ':utf8' && $DBD::mysql::VERSION lt '4' && ( $archive_file || $o->get('bulk-insert') ) ) { my $plural = ''; my $files = $archive_file ? '--file' : ''; if ( $o->get('bulk-insert') ) { if ($files) { $plural = 's'; $files .= $files ? ' and ' : ''; } $files .= '--bulk-insert' } warn "Setting binmode :raw instead of :utf8 on $files file$plural " . "because DBD::mysql 3.0007 has a bug with UTF-8. " . "Verify the $files file$plural, as the bug may lead to " . "data being double-encoded. Update DBD::mysql to avoid " . "this warning."; $charset = ":raw"; } # Open the file and print the header to it. if ( $archive_file ) { if ($o->got('output-format') && $o->get('output-format') ne 'dump' && $o->get('output-format') ne 'csv') { warn "Invalid output format:". $o->get('format'); warn "Using default 'dump' format"; } elsif ($o->get('output-format') || '' eq 'csv') { $fields_separated_by = ", "; $optionally_enclosed_by = '"'; } my $need_hdr = $o->get('header') && !-f $archive_file; $archive_fh = IO::File->new($archive_file, ">>$charset") or die "Cannot open $charset $archive_file: $OS_ERROR\n"; binmode STDOUT, ":utf8"; binmode $archive_fh, ":utf8"; $archive_fh->autoflush(1) unless $o->get('buffer'); if ( $need_hdr ) { print { $archive_fh } '', escape(\@sel_cols, $fields_separated_by, $optionally_enclosed_by), "\n" or die "Cannot write to $archive_file: $OS_ERROR\n"; } } # Open the bulk insert file, which doesn't get any header info. my $bulkins_file; if ( $o->get('bulk-insert') ) { require File::Temp; $bulkins_file = File::Temp->new( SUFFIX => 'pt-archiver' ) or die "Cannot open temp file: $OS_ERROR\n"; binmode($bulkins_file, $charset) or die "Cannot set $charset as an encoding for the bulk-insert " . "file: $OS_ERROR"; } # This row is the first row fetched from each 'chunk'. 
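# Note: fetchrow_arrayref() reuses the same array reference on every fetch,
# so $first_row must be a copy in order to survive until the bulk operations.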
my $first_row = [ @$row ]; my $csv_row; my $flow_ctl_count = 0; my $lag_count = 0; my $bulk_count = 0; ROW: while ( # Quit if: $row # There is no data && $retries >= 0 # or retries are exceeded && (!$o->get('run-time') || $now < $end) # or time is exceeded && !-f $sentinel # or the sentinel is set && $oktorun # or instructed to quit ) { my $lastrow = $row; if ( !$src->{plugin} || trace('is_archivable', sub { $src->{plugin}->is_archivable(row => $row) }) ) { # Do the archiving. Write to the file first since, like the file, # MyISAM and other tables cannot be rolled back etc. If there is a # problem, hopefully the data has at least made it to the file. my $escaped_row; if ( $archive_fh || $bulkins_file ) { $escaped_row = escape([@{$row}[@sel_slice]], $fields_separated_by, $optionally_enclosed_by); } if ( $archive_fh ) { trace('print_file', sub { print $archive_fh $escaped_row, "\n" or die "Cannot write to $archive_file: $OS_ERROR\n"; }); } # ################################################################### # This code is for the row-at-a-time archiving functionality. # ################################################################### # INSERT must come first, to be as safe as possible. if ( $dst && !$bulkins_file ) { my $ins_sth; # Let plugin change which sth is used for the INSERT. if ( $dst->{plugin} ) { trace('before_insert', sub { $dst->{plugin}->before_insert(row => $row); }); trace('custom_sth', sub { $ins_sth = $dst->{plugin}->custom_sth( row => $row, sql => $ins_sql); }); } $ins_sth ||= $ins_row; # Default to the sth decided before. my $success = do_with_retries($o, 'inserting', sub { my $ins_cnt = $ins_sth->execute(@{$row}[@ins_slice]); PTDEBUG && _d('Inserted', $ins_cnt, 'rows'); $statistics{INSERT} += $ins_sth->rows; }); if ( $success == $OUT_OF_RETRIES ) { $retries = -1; last ROW; } elsif ( $success == $ROLLED_BACK ) { --$retries; next ROW; } } if ( !$bulk_del ) { # DELETE comes after INSERT for safety. if ( $src->{plugin} ) { trace('before_delete', sub { $src->{plugin}->before_delete(row => $row); }); } if ( !$o->get('no-delete') ) { my $success = do_with_retries($o, 'deleting', sub { $del_row->execute(@{$row}[@del_slice]); PTDEBUG && _d('Deleted', $del_row->rows, 'rows'); $statistics{DELETE} += $del_row->rows; }); if ( $success == $OUT_OF_RETRIES ) { $retries = -1; last ROW; } elsif ( $success == $ROLLED_BACK ) { --$retries; next ROW; } } } # ################################################################### # This code is for the bulk archiving functionality. # ################################################################### if ( $bulkins_file ) { trace('print_bulkfile', sub { print $bulkins_file $escaped_row, "\n" or die "Cannot write to bulk file: $OS_ERROR\n"; }); } } # row is archivable $now = time(); ++$cnt; ++$txn_cnt; $retries = $o->get('retries'); # Possibly flush the file and commit the insert and delete. commit($o) unless $commit_each; # Report on progress. if ( !$quiet && $o->get('progress') && $cnt % $o->get('progress') == 0 ) { printf("%19s %7d %7d\n", ts($now), $now - $start, $cnt); } # Get the next row in this chunk. # First time through this loop $get_sth is set to $get_first. # For non-bulk operations this means that rows ($row) are archived # one-by-one in in the code block above ("row is archivable"). For # bulk operations, the 2nd to 2nd-to-last rows are ignored and # only the first row ($first_row) and the last row ($last_row) of # this chunk are used to do bulk INSERT or DELETE on the range of # rows between first and last. 
After the bulk ops, $first_row and # $last_row are reset to the next chunk. if ( $get_sth->{Active} ) { # Fetch until exhausted $row = $get_sth->fetchrow_arrayref(); } if ( !$row ) { PTDEBUG && _d('No more rows in this chunk; doing bulk operations'); # ################################################################### # This code is for the bulk archiving functionality. # ################################################################### if ( $bulkins_file ) { $bulkins_file->close() or die "Cannot close bulk insert file: $OS_ERROR\n"; my $ins_sth; # Let plugin change which sth is used for the INSERT. if ( $dst->{plugin} ) { trace('before_bulk_insert', sub { $dst->{plugin}->before_bulk_insert( first_row => $first_row, last_row => $lastrow, filename => $bulkins_file->filename(), ); }); trace('custom_sth', sub { $ins_sth = $dst->{plugin}->custom_sth_bulk( first_row => $first_row, last_row => $lastrow, filename => $bulkins_file->filename(), sql => $ins_sql, ); }); } $ins_sth ||= $ins_row; # Default to the sth decided before. my $success = do_with_retries($o, 'bulk_inserting', sub { $ins_sth->execute($bulkins_file->filename()); $src->{dbh}->do("SELECT 'pt-archiver keepalive'") if $src; PTDEBUG && _d('Bulk inserted', $del_row->rows, 'rows'); $statistics{INSERT} += $ins_sth->rows; }); if ( $success != $ALL_IS_WELL ) { $retries = -1; last ROW; # unlike other places, don't do 'next' } } if ( $bulk_del ) { if ( $src->{plugin} ) { trace('before_bulk_delete', sub { $src->{plugin}->before_bulk_delete( first_row => $first_row, last_row => $lastrow, ); }); } if ( !$o->get('no-delete') ) { my $success = do_with_retries($o, 'bulk_deleting', sub { $del_row->execute( @{$first_row}[@bulkdel_slice], @{$lastrow}[@bulkdel_slice], ); PTDEBUG && _d('Bulk deleted', $del_row->rows, 'rows'); $statistics{DELETE} += $del_row->rows; }); if ( $success != $ALL_IS_WELL ) { $retries = -1; last ROW; # unlike other places, don't do 'next' } } } # ################################################################### # This code is for normal operation AND bulk operation. # ################################################################### commit($o, 1) if $commit_each; $get_sth = $get_next; # Sleep between fetching the next chunk of rows. if( my $sleep_time = $o->get('sleep') ) { $sleep_time = $last_select_time * $o->get('sleep-coef') if $o->get('sleep-coef'); PTDEBUG && _d('Sleeping', $sleep_time); trace('sleep', sub { sleep($sleep_time); }); } PTDEBUG && _d('Fetching rows in next chunk'); trace('select', sub { my $select_start = time; $get_sth->execute(@{$lastrow}[@asc_slice]); $last_select_time = time - $select_start; PTDEBUG && _d('Fetched', $get_sth->rows, 'rows'); $statistics{SELECT} += $get_sth->rows; }); # Reset $first_row to the first row of this new chunk. @beginning_of_txn = @{$lastrow}[@asc_slice] unless $txn_cnt; $row = $get_sth->fetchrow_arrayref(); $first_row = $row ? [ @$row ] : undef; if ( $o->get('bulk-insert') ) { $bulkins_file = File::Temp->new( SUFFIX => 'pt-archiver' ) or die "Cannot open temp file: $OS_ERROR\n"; binmode($bulkins_file, $charset) or die "Cannot set $charset as an encoding for the bulk-insert " . "file: $OS_ERROR"; } } # no next row (do bulk operations) else { # keep alive every 100 rows saved to file # https://bugs.launchpad.net/percona-toolkit/+bug/1452895 if ( $bulk_count++ % 100 == 0 ) { $src->{dbh}->do("SELECT 'pt-archiver keepalive'") if $src; } PTDEBUG && _d('Got another row in this chunk'); } # Check slave lag and wait if slave is too far behind. 
# Do this check every 100 rows if (@lag_dbh && $lag_count++ % 100 == 0 ) { foreach my $lag_server (@lag_dbh) { my $lag_dbh = $lag_server->{'dbh'}; my $id = $lag_server->{'id'}; if ( $lag_dbh ) { my $lag = $ms->get_slave_lag($lag_dbh); while ( !defined $lag || $lag > $o->get('max-lag') ) { PTDEBUG && _d("Sleeping: slave lag for server '$id' is", $lag); if ($o->got('progress')) { _d("Sleeping: slave lag for server '$id' is", $lag); } sleep($o->get('check-interval')); $lag = $ms->get_slave_lag($lag_dbh); $src->{dbh}->do("SELECT 'pt-archiver keepalive'") if $src; $dst->{dbh}->do("SELECT 'pt-archiver keepalive'") if $dst; } } } } # if it's a cluster, check for flow control every 100 rows if ( $flow_ctl && $flow_ctl_count++ % 100 == 0) { $flow_ctl->wait(); } } # ROW PTDEBUG && _d('Done fetching rows'); # Transactions might still be open, etc commit($o, $txnsize || $commit_each); if ( $archive_file && $archive_fh ) { close $archive_fh or die "Cannot close $archive_file: $OS_ERROR\n"; } if ( !$quiet && $o->get('progress') ) { printf("%19s %7d %7d\n", ts($now), $now - $start, $cnt); } # Tear down the plugins. foreach my $table ( $dst, $src ) { next unless $table && $table->{plugin}; trace('after_finish', sub { $table->{plugin}->after_finish(); }); } # Run ANALYZE or OPTIMIZE. if ( $oktorun && ($o->get('analyze') || $o->get('optimize')) ) { my $action = $o->get('analyze') || $o->get('optimize'); my $maint = ($o->get('analyze') ? 'ANALYZE' : 'OPTIMIZE') . ($o->get('local') ? ' /*!40101 NO_WRITE_TO_BINLOG*/' : ''); if ( $action =~ m/s/i ) { trace($maint, sub { $src->{dbh}->do("$maint TABLE $src->{db_tbl}"); }); } if ( $action =~ m/d/i && $dst ) { trace($maint, sub { $dst->{dbh}->do("$maint TABLE $dst->{db_tbl}"); }); } } # ######################################################################## # Print statistics # ######################################################################## if ( $plugin ) { $plugin->statistics(\%statistics, $stat_start); } if ( !$quiet && $o->get('statistics') ) { my $stat_stop = gettimeofday(); my $stat_total = $stat_stop - $stat_start; my $total2 = 0; my $maxlen = 0; my %summary; printf("Started at %s, ended at %s\n", ts($stat_start), ts($stat_stop)); print("Source: ", $dp->as_string($src), "\n"); print("Dest: ", $dp->as_string($dst), "\n") if $dst; print(join("\n", map { "$_ " . ($statistics{$_} || 0) } qw(SELECT INSERT DELETE)), "\n"); foreach my $thing ( grep { m/_(count|time)/ } keys %statistics ) { my ( $action, $type ) = $thing =~ m/^(.*?)_(count|time)$/; $summary{$action}->{$type} = $statistics{$thing}; $summary{$action}->{action} = $action; $maxlen = max($maxlen, length($action)); # Just in case I get only one type of statistic for a given action (in # case there was a crash or CTRL-C or something). $summary{$action}->{time} ||= 0; $summary{$action}->{count} ||= 0; } printf("%-${maxlen}s \%10s %10s %10s\n", qw(Action Count Time Pct)); my $fmt = "%-${maxlen}s \%10d %10.4f %10.2f\n"; foreach my $stat ( reverse sort { $a->{time} <=> $b->{time} } values %summary ) { my $pct = $stat->{time} / $stat_total * 100; printf($fmt, @{$stat}{qw(action count time)}, $pct); $total2 += $stat->{time}; } printf($fmt, 'other', 0, $stat_total - $total2, ($stat_total - $total2) / $stat_total * 100); } # Optionally print the reason for exiting. Do this even if --quiet is # specified. 
if ( $o->get('why-quit') ) { if ( $retries < 0 ) { print "Exiting because retries exceeded.\n"; } elsif ( $o->get('run-time') && $now >= $end ) { print "Exiting because time exceeded.\n"; } elsif ( -f $sentinel ) { print "Exiting because sentinel file $sentinel exists.\n"; } elsif ( $o->get('statistics') ) { print "Exiting because there are no more rows.\n"; } } $get_sth->finish() if $get_sth; $src->{dbh}->disconnect(); $dst->{dbh}->disconnect() if $dst && $dst->{dbh}; return 0; } # ############################################################################ # Subroutines. # ############################################################################ # Catches signals so pt-archiver can exit gracefully. sub finish { my ($signal) = @_; print STDERR "Exiting on SIG$signal.\n"; $oktorun = 0; } # Accesses globals, but I wanted the code in one place. sub commit { my ( $o, $force ) = @_; my $txnsize = $o->get('txn-size'); if ( $force || ($txnsize && $txn_cnt && $cnt % $txnsize == 0) ) { if ( $o->get('buffer') && $archive_fh ) { my $archive_file = $o->get('file'); trace('flush', sub { $archive_fh->flush or die "Cannot flush $archive_file: $OS_ERROR\n"; }); } if ( $dst ) { trace('commit', sub { $dst->{dbh}->commit; }); } trace('commit', sub { $src->{dbh}->commit; }); $txn_cnt = 0; } } # Repeatedly retries the code until retries runs out, a really bad error # happens, or it succeeds. This sub uses lots of global variables; I only wrote # it to factor out some repeated code. sub do_with_retries { my ( $o, $doing, $code ) = @_; my $retries = $o->get('retries'); my $txnsize = $o->get('txn-size'); my $success = $OUT_OF_RETRIES; RETRY: while ( !$success && $retries >= 0 ) { eval { trace($doing, $code); $success = $ALL_IS_WELL; }; if ( $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/Lock wait timeout exceeded|Deadlock found/ ) { if ( # More than one row per txn ( ($txnsize && $txnsize > 1) || ($o->get('commit-each') && $o->get('limit') > 1) ) # Not first row && $txn_cnt # And it's not retry-able && (!$can_retry || $EVAL_ERROR =~ m/Deadlock/) ) { # The txn, which is more than 1 statement, was rolled back. last RETRY; } else { # Only one statement had trouble, and the rest of the txn was # not rolled back. The statement can be retried. --$retries; } } else { die $EVAL_ERROR; } } } if ( $success != $ALL_IS_WELL ) { # Must throw away everything and start the transaction over. if ( $retries >= 0 ) { warn "Deadlock or non-retryable lock wait while $doing; " . "rolling back $txn_cnt rows.\n"; $success = $ROLLED_BACK; } else { warn "Exhausted retries while $doing; rolling back $txn_cnt rows.\n"; $success = $OUT_OF_RETRIES; } $get_sth->finish; trace('rollback', sub { $dst->{dbh}->rollback; }); trace('rollback', sub { $src->{dbh}->rollback; }); # I wish: $archive_fh->rollback trace('select', sub { $get_sth->execute(@beginning_of_txn); }); $cnt -= $txn_cnt; $txn_cnt = 0; } return $success; } # Formats a row the same way SELECT INTO OUTFILE does by default. This is # described in the LOAD DATA INFILE section of the MySQL manual, # http://dev.mysql.com/doc/refman/5.0/en/load-data.html sub escape { my ($row, $fields_separated_by, $optionally_enclosed_by) = @_; $fields_separated_by ||= "\t"; $optionally_enclosed_by ||= ''; return join($fields_separated_by, map { s/([\t\n\\])/\\$1/g if defined $_; # Escape tabs etc my $s = defined $_ ? 
$_ : '\N'; # NULL = \N # var & ~var will return 0 only for numbers if ($s !~ /^[0-9,.E]+$/ && $optionally_enclosed_by eq '"') { $s =~ s/([^\\])"/$1\\"/g; $s = $optionally_enclosed_by."$s".$optionally_enclosed_by; } # $_ =~ s/([^\\])"/$1\\"/g if ($_ !~ /^[0-9,.E]+$/ && $optionally_enclosed_by eq '"'); # $_ = $optionally_enclosed_by && ($_ & ~$_) ? $optionally_enclosed_by."$_".$optionally_enclosed_by : $_; chomp $s; $s; } @$row); } sub ts { my ( $time ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = localtime($time); $mon += 1; $year += 1900; return sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); } sub get_irot { my ( $dbh ) = @_; return 1 unless VersionParser->new($dbh) >= '5.0.13'; my $rows = $dbh->selectall_arrayref( "show variables like 'innodb_rollback_on_timeout'", { Slice => {} }); return 0 unless $rows; return @$rows && $rows->[0]->{Value} ne 'OFF'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation. # ############################################################################ =pod =head1 NAME pt-archiver - Archive rows from a MySQL table into another table or a file. =head1 SYNOPSIS Usage: pt-archiver [OPTIONS] --source DSN --where WHERE pt-archiver nibbles records from a MySQL table. The --source and --dest arguments use DSN syntax; if COPY is yes, --dest defaults to the key's value from --source. Examples: Archive all rows from oltp_server to olap_server and to a file: pt-archiver --source h=oltp_server,D=test,t=tbl --dest h=olap_server \ --file '/var/log/archive/%Y-%m-%d-%D.%t' \ --where "1=1" --limit 1000 --commit-each Purge (delete) orphan rows from child table: pt-archiver --source h=host,D=db,t=child --purge \ --where 'NOT EXISTS(SELECT * FROM parent WHERE col=child.col)' =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-archiver is the tool I use to archive tables as described in L. The goal is a low-impact, forward-only job to nibble old data out of the table without impacting OLTP queries much. You can insert the data into another table, which need not be on the same server. You can also write it to a file in a format suitable for LOAD DATA INFILE. Or you can do neither, in which case it's just an incremental DELETE. pt-archiver is extensible via a plugin mechanism. You can inject your own code to add advanced archiving logic that could be useful for archiving dependent data, applying complex business rules, or building a data warehouse during the archiving process. You need to choose values carefully for some options. The most important are L<"--limit">, L<"--retries">, and L<"--txn-size">. 
The strategy is to find the first row(s), then scan some index forward-only to find more rows efficiently. Each subsequent query should not scan the entire table; it should seek into the index, then scan until it finds more archivable rows. Specifying the index with the 'i' part of the L<"--source"> argument can be crucial for this; use L<"--dry-run"> to examine the generated queries and be sure to EXPLAIN them to see if they are efficient (most of the time you probably want to scan the PRIMARY key, which is the default). Even better, examine the difference in the Handler status counters before and after running the query, and make sure it is not scanning the whole table every query. You can disable the seek-then-scan optimizations partially or wholly with L<"--no-ascend"> and L<"--ascend-first">. Sometimes this may be more efficient for multi-column keys. Be aware that pt-archiver is built to start at the beginning of the index it chooses and scan it forward-only. This might result in long table scans if you're trying to nibble from the end of the table by an index other than the one it prefers. See L<"--source"> and read the documentation on the C part if this applies to you. =head1 Percona XtraDB Cluster pt-archiver works with Percona XtraDB Cluster (PXC) 5.5.28-23.7 and newer, but there are three limitations you should consider before archiving on a cluster: =over =item Error on commit pt-archiver does not check for error when it commits transactions. Commits on PXC can fail, but the tool does not yet check for or retry the transaction when this happens. If it happens, the tool will die. =item MyISAM tables Archiving MyISAM tables works, but MyISAM support in PXC is still experimental at the time of this release. There are several known bugs with PXC, MyISAM tables, and C columns. Therefore, you must ensure that archiving will not directly or indirectly result in the use of default C values for a MyISAM table. For example, this happens with L<"--dest"> if L<"--columns"> is used and the C column is not included. The tool does not check for this! =item Non-cluster options Certain options may or may not work. For example, if a cluster node is not also a slave, then L<"--check-slave-lag"> does not work. And since PXC tables are usually InnoDB, but InnoDB doesn't support C, then L<"--delayed-insert"> does not work. Other options may also not work, but the tool does not check them, therefore you should test archiving on a test cluster before archiving on your real cluster. =back =head1 OUTPUT If you specify L<"--progress">, the output is a header row, plus status output at intervals. Each row in the status output lists the current date and time, how many seconds pt-archiver has been running, and how many rows it has archived. If you specify L<"--statistics">, C outputs timing and other information to help you identify which part of your archiving process takes the most time. =head1 ERROR-HANDLING pt-archiver tries to catch signals and exit gracefully; for example, if you send it SIGTERM (Ctrl-C on UNIX-ish systems), it will catch the signal, print a message about the signal, and exit fairly normally. It will not execute L<"--analyze"> or L<"--optimize">, because these may take a long time to finish. It will run all other code normally, including calling after_finish() on any plugins (see L<"EXTENDING">). In other words, a signal, if caught, will break out of the main archiving loop and skip optimize/analyze. =head1 OPTIONS Specify at least one of L<"--dest">, L<"--file">, or L<"--purge">. 
L<"--ignore"> and L<"--replace"> are mutually exclusive. L<"--txn-size"> and L<"--commit-each"> are mutually exclusive. L<"--low-priority-insert"> and L<"--delayed-insert"> are mutually exclusive. L<"--share-lock"> and L<"--for-update"> are mutually exclusive. L<"--analyze"> and L<"--optimize"> are mutually exclusive. L<"--no-ascend"> and L<"--no-delete"> are mutually exclusive. DSN values in L<"--dest"> default to values from L<"--source"> if COPY is yes. =over =item --analyze type: string Run ANALYZE TABLE afterwards on L<"--source"> and/or L<"--dest">. Runs ANALYZE TABLE after finishing. The argument is an arbitrary string. If it contains the letter 's', the source will be analyzed. If it contains 'd', the destination will be analyzed. You can specify either or both. For example, the following will analyze both: --analyze=ds See L for details on ANALYZE TABLE. =item --ascend-first Ascend only first column of index. If you do want to use the ascending index optimization (see L<"--no-ascend">), but do not want to incur the overhead of ascending a large multi-column index, you can use this option to tell pt-archiver to ascend only the leftmost column of the index. This can provide a significant performance boost over not ascending the index at all, while avoiding the cost of ascending the whole index. See L<"EXTENDING"> for a discussion of how this interacts with plugins. =item --ask-pass Prompt for a password when connecting to MySQL. =item --buffer Buffer output to L<"--file"> and flush at commit. Disables autoflushing to L<"--file"> and flushes L<"--file"> to disk only when a transaction commits. This typically means the file is block-flushed by the operating system, so there may be some implicit flushes to disk between commits as well. The default is to flush L<"--file"> to disk after every row. The danger is that a crash might cause lost data. The performance increase I have seen from using L<"--buffer"> is around 5 to 15 percent. Your mileage may vary. =item --bulk-delete Delete each chunk with a single statement (implies L<"--commit-each">). Delete each chunk of rows in bulk with a single C statement. The statement deletes every row between the first and last row of the chunk, inclusive. It implies L<"--commit-each">, since it would be a bad idea to C rows one at a time and commit them before the bulk C. The normal method is to delete every row by its primary key. Bulk deletes might be a lot faster. B if you have a complex C clause. This option completely defers all C processing until the chunk of rows is finished. If you have a plugin on the source, its C method will not be called. Instead, its C method is called later. B: if you have a plugin on the source that sometimes doesn't return true from C, you should use this option only if you understand what it does. If the plugin instructs C not to archive a row, it will still be deleted by the bulk delete! =item --[no]bulk-delete-limit default: yes Add L<"--limit"> to L<"--bulk-delete"> statement. This is an advanced option and you should not disable it unless you know what you are doing and why! By default, L<"--bulk-delete"> appends a L<"--limit"> clause to the bulk delete SQL statement. In certain cases, this clause can be omitted by specifying C<--no-bulk-delete-limit>. L<"--limit"> must still be specified. =item --bulk-insert Insert each chunk with LOAD DATA INFILE (implies L<"--bulk-delete"> L<"--commit-each">). Insert each chunk of rows with C. This may be much faster than inserting a row at a time with C statements. 
It is implemented by creating a temporary file for each chunk of rows, and writing the rows to this file instead of inserting them. When the chunk is finished, it uploads the rows. To protect the safety of your data, this option forces bulk deletes to be used. It would be unsafe to delete each row as it is found, before inserting the rows into the destination first. Forcing bulk deletes guarantees that the deletion waits until the insertion is successful. The L<"--low-priority-insert">, L<"--replace">, and L<"--ignore"> options work with this option, but L<"--delayed-insert"> does not. If C throws an error in the lines of C, refer to the documentation for the C DSN option. =item --channel type: string Channel name used when connected to a server using replication channels. Suppose you have two masters, master_a at port 12345, master_b at port 1236 and a slave connected to both masters using channels chan_master_a and chan_master_b. If you want to run pt-archiver to syncronize the slave against master_a, pt-archiver won't be able to determine what's the correct master since SHOW SLAVE STATUS will return 2 rows. In this case, you can use --channel=chan_master_a to specify the channel name to use in the SHOW SLAVE STATUS command. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. Note that only charsets as known by MySQL are recognized; So for example, "UTF8" will work, but "UTF-8" will not. See also L<"--[no]check-charset">. =item --[no]check-charset default: yes Ensure connection and table character sets are the same. Disabling this check may cause text to be erroneously converted from one character set to another (usually from utf8 to latin1) which may cause data loss or mojibake. Disabling this check may be useful or necessary when character set conversions are intended. =item --[no]check-columns default: yes Ensure L<"--source"> and L<"--dest"> have same columns. Enabled by default; causes pt-archiver to check that the source and destination tables have the same columns. It does not check column order, data type, etc. It just checks that all columns in the source exist in the destination and vice versa. If there are any differences, pt-archiver will exit with an error. To disable this check, specify --no-check-columns. =item --check-interval type: time; default: 1s If L<"--check-slave-lag"> is given, this defines how long the tool pauses each time it discovers that a slave is lagging. This check is performed every 100 rows. =item --check-slave-lag type: string; repeatable: yes Pause archiving until the specified DSN's slave lag is less than L<"--max-lag">. This option can be specified multiple times for checking more than one slave. =item --columns short form: -c; type: array Comma-separated list of columns to archive. Specify a comma-separated list of columns to fetch, write to the file, and insert into the destination table. If specified, pt-archiver ignores other columns unless it needs to add them to the C queries so they seek into the index where the previous query ended, then scan along it, rather than scanning from the beginning of the table every time. This is enabled by default because it is generally a good strategy for repeated accesses. 
Large, multiple-column indexes may cause the WHERE clause to be complex enough that this could actually be less efficient. Consider for example a four-column PRIMARY KEY on (a, b, c, d). The WHERE clause to start where the last query ended is as follows: WHERE (a > ?) OR (a = ? AND b > ?) OR (a = ? AND b = ? AND c > ?) OR (a = ? AND b = ? AND c = ? AND d >= ?) Populating the placeholders with values uses memory and CPU, adds network traffic and parsing overhead, and may make the query harder for MySQL to optimize. A four-column key isn't a big deal, but a ten-column key in which every column allows C might be. Ascending the index might not be necessary if you know you are simply removing rows from the beginning of the table in chunks, but not leaving any holes, so starting at the beginning of the table is actually the most efficient thing to do. See also L<"--ascend-first">. See L<"EXTENDING"> for a discussion of how this interacts with plugins. =item --no-delete Do not delete archived rows. Causes C not to delete rows after processing them. This disallows L<"--no-ascend">, because enabling them both would cause an infinite loop. If there is a plugin on the source DSN, its C method is called anyway, even though C will not execute the delete. See L<"EXTENDING"> for more on plugins. =item --optimize type: string Run OPTIMIZE TABLE afterwards on L<"--source"> and/or L<"--dest">. Runs OPTIMIZE TABLE after finishing. See L<"--analyze"> for the option syntax and L for details on OPTIMIZE TABLE. =item --output-format type: string Used with L<"--file"> to specify the output format. Valid formats are: dump: MySQL dump format using tabs as field separator (default) csv : Dump rows using ',' as separator and optionally enclosing fields by '"'. This format is equivalent to FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --plugin type: string Perl module name to use as a generic plugin. Specify the Perl module name of a general-purpose plugin. It is currently used only for statistics (see L<"--statistics">) and must have C and a C method. The C $src, dst => $dst, opts => $o )> method gets the source and destination DSNs, and their database connections, just like the connection-specific plugins do. It also gets an OptionParser object (C<$o>) for accessing command-line options (example: C<$o->get('purge');>). The C method gets a hashref of the statistics collected by the archiving job, and the time the whole job started. =item --port short form: -P; type: int Port number to use for connection. =item --primary-key-only Primary key columns only. A shortcut for specifying L<"--columns"> with the primary key columns. This is an efficiency if you just want to purge rows; it avoids fetching the entire row, when only the primary key columns are needed for C statements. See also L<"--purge">. =item --progress type: int Print progress information every X rows. Prints current time, elapsed time, and rows archived every X rows. 
=item --purge Purge instead of archiving; allows omitting L<"--file"> and L<"--dest">. Allows archiving without a L<"--file"> or L<"--dest"> argument, which is effectively a purge since the rows are just deleted. If you just want to purge rows, consider specifying the table's primary key columns with L<"--primary-key-only">. This will prevent fetching all columns from the server for no reason. =item --quick-delete Adds the QUICK modifier to DELETE statements. See L for details. As stated in the documentation, in some cases it may be faster to use DELETE QUICK followed by OPTIMIZE TABLE. You can use L<"--optimize"> for this. =item --quiet short form: -q Do not print any output, such as for L<"--statistics">. Suppresses normal output, including the output of L<"--statistics">, but doesn't suppress the output from L<"--why-quit">. =item --replace Causes INSERTs into L<"--dest"> to be written as REPLACE. =item --retries type: int; default: 1 Number of retries per timeout or deadlock. Specifies the number of times pt-archiver should retry when there is an InnoDB lock wait timeout or deadlock. When retries are exhausted, pt-archiver will exit with an error. Consider carefully what you want to happen when you are archiving between a mixture of transactional and non-transactional storage engines. The INSERT to L<"--dest"> and DELETE from L<"--source"> are on separate connections, so they do not actually participate in the same transaction even if they're on the same server. However, pt-archiver implements simple distributed transactions in code, so commits and rollbacks should happen as desired across the two connections. At this time I have not written any code to handle errors with transactional storage engines other than InnoDB. Request that feature if you need it. =item --run-time type: time Time to run before exiting. Optional suffix s=seconds, m=minutes, h=hours, d=days; if no suffix, s is used. =item --[no]safe-auto-increment default: yes Do not archive row with max AUTO_INCREMENT. Adds an extra WHERE clause to prevent pt-archiver from removing the newest row when ascending a single-column AUTO_INCREMENT key. This guards against re-using AUTO_INCREMENT values if the server restarts, and is enabled by default. The extra WHERE clause contains the maximum value of the auto-increment column as of the beginning of the archive or purge job. If new rows are inserted while pt-archiver is running, it will not see them. =item --sentinel type: string; default: /tmp/pt-archiver-sentinel Exit if this file exists. The presence of the file specified by L<"--sentinel"> will cause pt-archiver to stop archiving and exit. The default is /tmp/pt-archiver-sentinel. You might find this handy to stop cron jobs gracefully if necessary. See also L<"--stop">. =item --slave-user type: string Sets the user to be used to connect to the slaves. This parameter allows you to have a different user with less privileges on the slaves but that user must exist on all slaves. =item --slave-password type: string Sets the password to be used to connect to the slaves. It can be used with --slave-user and the password for the user must be the same on all slaves. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the default value of C<10000>. 
The tool prints a warning and continues if a variable cannot be set. =item --share-lock Adds the LOCK IN SHARE MODE modifier to SELECT statements. See L. =item --skip-foreign-key-checks Disables foreign key checks with SET FOREIGN_KEY_CHECKS=0. =item --sleep type: int Sleep time between fetches. Specifies how long to sleep between SELECT statements. Default is not to sleep at all. Transactions are NOT committed, and the L<"--file"> file is NOT flushed, before sleeping. See L<"--txn-size"> to control that. If L<"--commit-each"> is specified, committing and flushing happens before sleeping. =item --sleep-coef type: float Calculate L<"--sleep"> as a multiple of the last SELECT time. If this option is specified, pt-archiver will sleep for the query time of the last SELECT multiplied by the specified coefficient. This is a slightly more sophisticated way to throttle the SELECTs: sleep a varying amount of time between each SELECT, depending on how long the SELECTs are taking. =item --socket short form: -S; type: string Socket file to use for connection. =item --source type: DSN DSN specifying the table to archive from (required). This argument is a DSN. See L for the syntax. Most options control how pt-archiver connects to MySQL, but there are some extended DSN options in this tool's syntax. The D, t, and i options select a table to archive: --source h=my_server,D=my_database,t=my_tbl The a option specifies the database to set as the connection's default with USE. If the b option is true, it disables binary logging with SQL_LOG_BIN. The m option specifies pluggable actions, which an external Perl module can provide. The only required part is the table; other parts may be read from various places in the environment (such as options files). The 'i' part deserves special mention. This tells pt-archiver which index it should scan to archive. This appears in a FORCE INDEX or USE INDEX hint in the SELECT statements used to fetch archivable rows. If you don't specify anything, pt-archiver will auto-discover a good index, preferring a C if one exists. In my experience this usually works well, so most of the time you can probably just omit the 'i' part. The index is used to optimize repeated accesses to the table; pt-archiver remembers the last row it retrieves from each SELECT statement, and uses it to construct a WHERE clause, using the columns in the specified index, that should allow MySQL to start the next SELECT where the last one ended, rather than potentially scanning from the beginning of the table with each successive SELECT. If you are using external plugins, please see L<"EXTENDING"> for a discussion of how they interact with ascending indexes. The 'a' and 'b' options allow you to control how statements flow through the binary log. If you specify the 'b' option, binary logging will be disabled on the specified connection. If you specify the 'a' option, the connection will C the specified database, which you can use to prevent slaves from executing the binary log events with C<--replicate-ignore-db> options. These two options can be used as different methods to achieve the same goal: archive data off the master, but leave it on the slave. For example, you can run a purge job on the master and prevent it from happening on the slave using your method of choice. B: Using a default options file (F) DSN option that defines a socket for L<"--source"> causes pt-archiver to connect to L<"--dest"> using that socket unless another socket for L<"--dest"> is specified. 
This means that pt-archiver may incorrectly connect to L<"--source"> when it is meant to connect to L<"--dest">. For example: --source F=host1.cnf,D=db,t=tbl --dest h=host2 When pt-archiver connects to L<"--dest">, host2, it will connect via the L<"--source">, host1, socket defined in host1.cnf. =item --statistics Collect and print timing statistics. Causes pt-archiver to collect timing statistics about what it does. These statistics are available to the plugin specified by L<"--plugin"> Unless you specify L<"--quiet">, C prints the statistics when it exits. The statistics look like this: Started at 2008-07-18T07:18:53, ended at 2008-07-18T07:18:53 Source: D=db,t=table SELECT 4 INSERT 4 DELETE 4 Action Count Time Pct commit 10 0.1079 88.27 select 5 0.0047 3.87 deleting 4 0.0028 2.29 inserting 4 0.0028 2.28 other 0 0.0040 3.29 The first two (or three) lines show times and the source and destination tables. The next three lines show how many rows were fetched, inserted, and deleted. The remaining lines show counts and timing. The columns are the action, the total number of times that action was timed, the total time it took, and the percent of the program's total runtime. The rows are sorted in order of descending total time. The last row is the rest of the time not explicitly attributed to anything. Actions will vary depending on command-line options. If L<"--why-quit"> is given, its behavior is changed slightly. This option causes it to print the reason for exiting even when it's just because there are no more rows. This option requires the standard Time::HiRes module, which is part of core Perl on reasonably new Perl releases. =item --stop Stop running instances by creating the sentinel file. Causes pt-archiver to create the sentinel file specified by L<"--sentinel"> and exit. This should have the effect of stopping all running instances which are watching the same sentinel file. =item --txn-size type: int; default: 1 Number of rows per transaction. Specifies the size, in number of rows, of each transaction. Zero disables transactions altogether. After pt-archiver processes this many rows, it commits both the L<"--source"> and the L<"--dest"> if given, and flushes the file given by L<"--file">. This parameter is critical to performance. If you are archiving from a live server, which for example is doing heavy OLTP work, you need to choose a good balance between transaction size and commit overhead. Larger transactions create the possibility of more lock contention and deadlocks, but smaller transactions cause more frequent commit overhead, which can be significant. To give an idea, on a small test set I worked with while writing pt-archiver, a value of 500 caused archiving to take about 2 seconds per 1000 rows on an otherwise quiet MySQL instance on my desktop machine, archiving to disk and to another table. Disabling transactions with a value of zero, which turns on autocommit, dropped performance to 38 seconds per thousand rows. If you are not archiving from or to a transactional storage engine, you may want to disable transactions so pt-archiver doesn't try to commit. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. 
First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =item --where type: string WHERE clause to limit which rows to archive (required). Specifies a WHERE clause to limit which rows are archived. Do not include the word WHERE. You may need to quote the argument to prevent your shell from interpreting it. For example: --where 'ts < current_date - interval 90 day' For safety, L<"--where"> is required. If you do not require a WHERE clause, use L<"--where"> 1=1. =item --why-quit Print reason for exiting unless rows exhausted. Causes pt-archiver to print a message if it exits for any reason other than running out of rows to archive. This can be useful if you have a cron job with L<"--run-time"> specified, for example, and you want to be sure pt-archiver is finishing before running out of time. If L<"--statistics"> is given, the behavior is changed slightly. It will print the reason for exiting even when it's just because there are no more rows. This output prints even if L<"--quiet"> is given. That's so you can put C in a C job and get an email if there's an abnormal exit. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * a copy: no Database to USE when executing queries. =item * A dsn: charset; copy: yes Default character set. =item * b copy: no If true, disable binlog with SQL_LOG_BIN. =item * D dsn: database; copy: yes Database that contains the table. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * i copy: yes Index to use. =item * L copy: yes Explicitly enable LOAD DATA LOCAL INFILE. For some reason, some vendors compile libmysql without the --enable-local-infile option, which disables the statement. This can lead to weird situations, like the server allowing LOCAL INFILE, but the client throwing exceptions if it's used. However, as long as the server allows LOAD DATA, clients can easily re-enable it; See L and L. This option does exactly that. Although we've not found a case where turning this option leads to errors or differing behavior, to be on the safe side, this option is not on by default. =item * m copy: no Plugin module name. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. 
=item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * t copy: yes Table to archive from/to. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 EXTENDING pt-archiver is extensible by plugging in external Perl modules to handle some logic and/or actions. You can specify a module for both the L<"--source"> and the L<"--dest">, with the 'm' part of the specification. For example: --source D=test,t=test1,m=My::Module1 --dest m=My::Module2,t=test2 This will cause pt-archiver to load the My::Module1 and My::Module2 packages, create instances of them, and then make calls to them during the archiving process. You can also specify a plugin with L<"--plugin">. The module must provide this interface: =over =item new(dbh => $dbh, db => $db_name, tbl => $tbl_name) The plugin's constructor is passed a reference to the database handle, the database name, and table name. The plugin is created just after pt-archiver opens the connection, and before it examines the table given in the arguments. This gives the plugin a chance to create and populate temporary tables, or do other setup work. =item before_begin(cols => \@cols, allcols => \@allcols) This method is called just before pt-archiver begins iterating through rows and archiving them, but after it does all other setup work (examining table structures, designing SQL queries, and so on). This is the only time pt-archiver tells the plugin column names for the rows it will pass the plugin while archiving. The C argument is the column names the user requested to be archived, either by default or by the L<"--columns"> option. The C argument is the list of column names for every row pt-archiver will fetch from the source table. It may fetch more columns than the user requested, because it needs some columns for its own use. When subsequent plugin functions receive a row, it is the full row containing all the extra columns, if any, added to the end. =item is_archivable(row => \@row) This method is called for each row to determine whether it is archivable. This applies only to L<"--source">. The argument is the row itself, as an arrayref. If the method returns true, the row will be archived; otherwise it will be skipped. Skipping a row adds complications for non-unique indexes. Normally pt-archiver uses a WHERE clause designed to target the last processed row as the place to start the scan for the next SELECT statement. If you have skipped the row by returning false from is_archivable(), pt-archiver could get into an infinite loop because the row still exists. Therefore, when you specify a plugin for the L<"--source"> argument, pt-archiver will change its WHERE clause slightly. Instead of starting at "greater than or equal to" the last processed row, it will start "strictly greater than." This will work fine on unique indexes such as primary keys, but it may skip rows (leave holes) on non-unique indexes or when ascending only the first column of an index. C will change the clause in the same way if you specify L<"--no-delete">, because again an infinite loop is possible. If you specify the L<"--bulk-delete"> option and return false from this method, C may not do what you want. The row won't be archived, but it will be deleted, since bulk deletes operate on ranges of rows and don't know which rows the plugin selected to keep. 
If you specify the L<"--bulk-insert"> option, this method's return value will influence whether the row is written to the temporary file for the bulk insert, so bulk inserts will work as expected. However, bulk inserts require bulk deletes. =item before_delete(row => \@row) This method is called for each row just before it is deleted. This applies only to L<"--source">. This is a good place for you to handle dependencies, such as deleting things that are foreign-keyed to the row you are about to delete. You could also use this to recursively archive all dependent tables. This plugin method is called even if L<"--no-delete"> is given, but not if L<"--bulk-delete"> is given. =item before_bulk_delete(first_row => \@row, last_row => \@row) This method is called just before a bulk delete is executed. It is similar to the C method, except its arguments are the first and last row of the range to be deleted. It is called even if L<"--no-delete"> is given. =item before_insert(row => \@row) This method is called for each row just before it is inserted. This applies only to L<"--dest">. You could use this to insert the row into multiple tables, perhaps with an ON DUPLICATE KEY UPDATE clause to build summary tables in a data warehouse. This method is not called if L<"--bulk-insert"> is given. =item before_bulk_insert(first_row => \@row, last_row => \@row, filename => bulk_insert_filename) This method is called just before a bulk insert is executed. It is similar to the C method, except its arguments are the first and last row of the range to be deleted. =item custom_sth(row => \@row, sql => $sql) This method is called just before inserting the row, but after L<"before_insert()">. It allows the plugin to specify different C statement if desired. The return value (if any) should be a DBI statement handle. The C parameter is the SQL text used to prepare the default C statement. This method is not called if you specify L<"--bulk-insert">. If no value is returned, the default C statement handle is used. This method applies only to the plugin specified for L<"--dest">, so if your plugin isn't doing what you expect, check that you've specified it for the destination and not the source. =item custom_sth_bulk(first_row => \@row, last_row => \@row, sql => $sql, filename => $bulk_insert_filename) If you've specified L<"--bulk-insert">, this method is called just before the bulk insert, but after L<"before_bulk_insert()">, and the arguments are different. This method's return value etc is similar to the L<"custom_sth()"> method. =item after_finish() This method is called after pt-archiver exits the archiving loop, commits all database handles, closes L<"--file">, and prints the final statistics, but before pt-archiver runs ANALYZE or OPTIMIZE (see L<"--analyze"> and L<"--optimize">). =back If you specify a plugin for both L<"--source"> and L<"--dest">, pt-archiver constructs, calls before_begin(), and calls after_finish() on the two plugins in the order L<"--source">, L<"--dest">. pt-archiver assumes it controls transactions, and that the plugin will NOT commit or roll back the database handle. The database handle passed to the plugin's constructor is the same handle pt-archiver uses itself. Remember that L<"--source"> and L<"--dest"> are separate handles. 
A sample module might look like this: package My::Module; sub new { my ( $class, %args ) = @_; return bless(\%args, $class); } sub before_begin { my ( $self, %args ) = @_; # Save column names for later $self->{cols} = $args{cols}; } sub is_archivable { my ( $self, %args ) = @_; # Do some advanced logic with $args{row} return 1; } sub before_delete {} # Take no action sub before_insert {} # Take no action sub custom_sth {} # Take no action sub after_finish {} # Take no action 1; =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-archiver ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz =head1 ACKNOWLEDGMENTS Andrew O'Brien =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2007-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-archiver 3.1.0 =cut percona-toolkit-3.1/bin/pt-config-diff000775 001750 001750 00000515437 13535723560 021157 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. 
Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo OptionParser DSNParser Cxn Daemon TextResultSetParser MySQLConfig MySQLConfigComparer ReportFormatter HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . "::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! 
exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq . qq . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! 
defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. 
Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
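      # The sub-parser built here reads only the "* X" items of the tool's
      # DSN OPTIONS POD section (hence item => '\* (.)' below); the specs it
      # produces are handed to DSNParser->new() a few lines further down, so
      # the DSN keys (h, P, u, ...) come from the POD rather than being
      # hard-coded here.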
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Cxn package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Cxn.pm # t/lib/Cxn.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Cxn; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Scalar::Util qw(blessed); use constant { PTDEBUG => $ENV{PTDEBUG} || 0, PERCONA_TOOLKIT_TEST_USE_DSN_NAMES => $ENV{PERCONA_TOOLKIT_TEST_USE_DSN_NAMES} || 0, }; sub new { my ( $class, %args ) = @_; my @required_args = qw(DSNParser OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($dp, $o) = @args{@required_args}; my $dsn_defaults = $dp->parse_options($o); my $prev_dsn = $args{prev_dsn}; my $dsn = $args{dsn}; if ( !$dsn ) { $args{dsn_string} ||= 'h=' . ($dsn_defaults->{h} || 'localhost'); $dsn = $dp->parse( $args{dsn_string}, $prev_dsn, $dsn_defaults); } elsif ( $prev_dsn ) { $dsn = $dp->copy($prev_dsn, $dsn); } my $dsn_name = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; my $self = { dsn => $dsn, dbh => $args{dbh}, dsn_name => $dsn_name, hostname => '', set => $args{set}, NAME_lc => defined($args{NAME_lc}) ? $args{NAME_lc} : 1, dbh_set => 0, ask_pass => $o->get('ask-pass'), DSNParser => $dp, is_cluster_node => undef, parent => $args{parent}, }; return bless $self, $class; } sub connect { my ( $self, %opts ) = @_; my $dsn = $opts{dsn} || $self->{dsn}; my $dp = $self->{DSNParser}; my $dbh = $self->{dbh}; if ( !$dbh || !$dbh->ping() ) { if ( $self->{ask_pass} && !$self->{asked_for_pass} && !defined $dsn->{p} ) { $dsn->{p} = OptionParser::prompt_noecho("Enter MySQL password: "); $self->{asked_for_pass} = 1; } $dbh = $dp->get_dbh( $dp->get_cxn_params($dsn), { AutoCommit => 1, %opts, }, ); } $dbh = $self->set_dbh($dbh); if ( $opts{dsn} ) { $self->{dsn} = $dsn; $self->{dsn_name} = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; } PTDEBUG && _d($dbh, 'Connected dbh to', $self->{hostname},$self->{dsn_name}); return $dbh; } sub set_dbh { my ($self, $dbh) = @_; if ( $self->{dbh} && $self->{dbh} == $dbh && $self->{dbh_set} ) { PTDEBUG && _d($dbh, 'Already set dbh'); return $dbh; } PTDEBUG && _d($dbh, 'Setting dbh'); $dbh->{FetchHashKeyName} = 'NAME_lc' if $self->{NAME_lc}; my $sql = 'SELECT @@server_id /*!50038 , @@hostname*/'; PTDEBUG && _d($dbh, $sql); my ($server_id, $hostname) = $dbh->selectrow_array($sql); PTDEBUG && _d($dbh, 'hostname:', $hostname, $server_id); if ( $hostname ) { $self->{hostname} = $hostname; } if ( $self->{parent} ) { PTDEBUG && _d($dbh, 'Setting InactiveDestroy=1 in parent'); $dbh->{InactiveDestroy} = 1; } if ( my $set = $self->{set}) { $set->($dbh); } $self->{dbh} = $dbh; $self->{dbh_set} = 1; return $dbh; } sub lost_connection { my ($self, $e) = @_; return 0 unless $e; return $e =~ m/MySQL server has gone away/ || $e =~ m/Lost connection to MySQL server/ || $e =~ m/Server shutdown in progress/; } sub dbh { my ($self) = @_; return $self->{dbh}; } sub dsn { my ($self) = @_; return $self->{dsn}; } sub name { my ($self) = @_; return $self->{dsn_name} if PERCONA_TOOLKIT_TEST_USE_DSN_NAMES; return $self->{hostname} || $self->{dsn_name} || 'unknown host'; } sub description { my ($self) = @_; return sprintf("%s -> %s:%s", $self->name(), $self->{dsn}->{h}, $self->{dsn}->{P} || 'socket'); } sub get_id { my ($self, $cxn) = @_; $cxn ||= $self; my $unique_id; if ($cxn->is_cluster_node()) { # for cluster we concatenate various variables to maximize id 'uniqueness' across versions my $sql = q{SHOW STATUS LIKE 'wsrep\_local\_index'}; my (undef, $wsrep_local_index) = $cxn->dbh->selectrow_array($sql); PTDEBUG && 
_d("Got cluster wsrep_local_index: ",$wsrep_local_index); $unique_id = $wsrep_local_index."|"; foreach my $val ('server\_id', 'wsrep\_sst\_receive\_address', 'wsrep\_node\_name', 'wsrep\_node\_address') { my $sql = "SHOW VARIABLES LIKE '$val'"; PTDEBUG && _d($cxn->name, $sql); my (undef, $val) = $cxn->dbh->selectrow_array($sql); $unique_id .= "|$val"; } } else { my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); $unique_id = $cxn->dbh->selectrow_array($sql); } PTDEBUG && _d("Generated unique id for cluster:", $unique_id); return $unique_id; } sub is_cluster_node { my ($self, $cxn) = @_; $cxn ||= $self; my $sql = "SHOW VARIABLES LIKE 'wsrep\_on'"; my $dbh; if ($cxn->isa('DBI::db')) { $dbh = $cxn; PTDEBUG && _d($sql); #don't invoke name() if it's not a Cxn! } else { $dbh = $cxn->dbh(); PTDEBUG && _d($cxn->name, $sql); } my $row = $dbh->selectrow_arrayref($sql); return $row && $row->[1] && ($row->[1] eq 'ON' || $row->[1] eq '1') ? 1 : 0; } sub remove_duplicate_cxns { my ($self, %args) = @_; my @cxns = @{$args{cxns}}; my $seen_ids = $args{seen_ids} || {}; PTDEBUG && _d("Removing duplicates from ", join(" ", map { $_->name } @cxns)); my @trimmed_cxns; for my $cxn ( @cxns ) { my $id = $cxn->get_id(); PTDEBUG && _d('Server ID for ', $cxn->name, ': ', $id); if ( ! $seen_ids->{$id}++ ) { push @trimmed_cxns, $cxn } else { PTDEBUG && _d("Removing ", $cxn->name, ", ID ", $id, ", because we've already seen it"); } } return \@trimmed_cxns; } sub DESTROY { my ($self) = @_; PTDEBUG && _d('Destroying cxn'); if ( $self->{parent} ) { PTDEBUG && _d($self->{dbh}, 'Not disconnecting dbh in parent'); } elsif ( $self->{dbh} && blessed($self->{dbh}) && $self->{dbh}->can("disconnect") ) { PTDEBUG && _d($self->{dbh}, 'Disconnecting dbh on', $self->{hostname}, $self->{dsn_name}); $self->{dbh}->disconnect(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Cxn package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? 
$o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # TextResultSetParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TextResultSetParser.pm # t/lib/TextResultSetParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package TextResultSetParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my %value_for = ( 'NULL' => undef, # DBI::selectall_arrayref() does this ($args{value_for} ? 
%{$args{value_for}} : ()), ); my $self = { %args, value_for => \%value_for, }; return bless $self, $class; } sub _parse_tabular { my ( $text, @cols ) = @_; my %row; my @vals = $text =~ m/\| +([^\|]*?)(?= +\|)/msg; return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub _parse_tab_sep { my ( $text, @cols ) = @_; my %row; my @vals = split(/\t/, $text); return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub parse_vertical_row { my ( $self, $text ) = @_; my %row = $text =~ m/^\s*(\w+):(?: ([^\n]*))?/msg; if ( $self->{NAME_lc} ) { my %lc_row = map { my $key = lc $_; $key => $row{$_}; } keys %row; return \%lc_row; } else { return \%row; } } sub parse { my ( $self, $text ) = @_; my $result_set; if ( $text =~ m/^\+---/m ) { # standard "tabular" output PTDEBUG && _d('Result set text is standard tabular'); my $line_pattern = qr/^(\| .*)[\r\n]+/m; $result_set = $self->parse_horizontal_row($text, $line_pattern, \&_parse_tabular); } elsif ( $text =~ m/^\w+\t\w+/m ) { # tab-separated PTDEBUG && _d('Result set text is tab-separated'); my $line_pattern = qr/^(.*?\t.*)[\r\n]+/m; $result_set = $self->parse_horizontal_row($text, $line_pattern, \&_parse_tab_sep); } elsif ( $text =~ m/\*\*\* \d+\. row/ ) { # "vertical" output PTDEBUG && _d('Result set text is vertical (\G)'); foreach my $row ( split_vertical_rows($text) ) { push @$result_set, $self->parse_vertical_row($row); } } else { my $text_sample = substr $text, 0, 300; my $remaining = length $text > 300 ? (length $text) - 300 : 0; chomp $text_sample; die "Cannot determine if text is tabular, tab-separated or vertical:\n" . "$text_sample\n" . ($remaining ? "(not showing last $remaining bytes of text)\n" : ""); } if ( $self->{value_for} ) { foreach my $result_set ( @$result_set ) { foreach my $key ( keys %$result_set ) { next unless defined $result_set->{$key}; $result_set->{$key} = $self->{value_for}->{ $result_set->{$key} } if exists $self->{value_for}->{ $result_set->{$key} }; } } } return $result_set; } sub parse_horizontal_row { my ( $self, $text, $line_pattern, $sub ) = @_; my @result_sets = (); my @cols = (); foreach my $line ( $text =~ m/$line_pattern/g ) { my ( $row, $cols ) = $sub->($line, @cols); if ( $row ) { push @result_sets, $row; } else { @cols = map { $self->{NAME_lc} ? lc $_ : $_ } @$cols; } } return \@result_sets; } sub split_vertical_rows { my ( $text ) = @_; my $ROW_HEADER = '\*{3,} \d+\. row \*{3,}'; my @rows = $text =~ m/($ROW_HEADER.*?)(?=$ROW_HEADER|\z)/omgs; return @rows; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TextResultSetParser package # ########################################################################### # ########################################################################### # MySQLConfig package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MySQLConfig.pm # t/lib/MySQLConfig.t # See https://launchpad.net/percona-toolkit for more information. 
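# NOTE (illustrative example, not shipped code): TextResultSetParser accepts
# the three common textual result-set layouts (tabular, tab-separated, and
# vertical \G output) and returns an arrayref of row hashrefs.  Assuming the
# sample text below:
#
#   my $trp  = TextResultSetParser->new();
#   my $text = join "\n",
#       '*************************** 1. row ***************************',
#       'Variable_name: max_connections',
#       '        Value: 151';
#   my $rows = $trp->parse($text);
#   # $rows->[0] is { Variable_name => 'max_connections', Value => '151' }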
# ########################################################################### { package MySQLConfig; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; my %can_be_duplicate = ( replicate_wild_do_table => 1, replicate_wild_ignore_table => 1, replicate_rewrite_db => 1, replicate_ignore_table => 1, replicate_ignore_db => 1, replicate_do_table => 1, replicate_do_db => 1, ); sub new { my ( $class, %args ) = @_; my @requires_one_of = qw(file output result_set dbh); my $required_arg = grep { $args{$_} } @requires_one_of; if ( !$required_arg ) { die "I need a " . join(', ', @requires_one_of[0..$#requires_one_of-1]) . " or " . $requires_one_of[-1] . " argument"; } if ( $required_arg > 1 ) { die "Specify only one " . join(', ', @requires_one_of[0..$#requires_one_of-1]) . " or " . $requires_one_of[-1] . " argument"; } if ( $args{file} || $args{output} ) { die "I need a TextResultSetParser argument" unless $args{TextResultSetParser}; } if ( $args{file} ) { $args{output} = _slurp_file($args{file}); } my %config_data = _parse_config(%args); my $self = { %args, %config_data, }; return bless $self, $class; } sub _parse_config { my ( %args ) = @_; my %config_data; if ( $args{output} ) { %config_data = _parse_config_output(%args); } elsif ( my $rows = $args{result_set} ) { $config_data{format} = $args{format} || 'show_variables'; $config_data{vars} = { map { @$_ } @$rows }; } elsif ( my $dbh = $args{dbh} ) { $config_data{format} = $args{format} || 'show_variables'; my $sql = "SHOW /*!40103 GLOBAL*/ VARIABLES"; PTDEBUG && _d($dbh, $sql); my $rows = $dbh->selectall_arrayref($sql); $config_data{vars} = { map { @$_ } @$rows }; $config_data{mysql_version} = _get_version($dbh); } else { die "Unknown config source"; } handle_special_vars(\%config_data); return %config_data; } sub handle_special_vars { my ($config_data) = @_; if ( $config_data->{vars}->{wsrep_provider_options} ) { my $vars = $config_data->{vars}; my $dupes = $config_data->{duplicate_vars}; for my $wpo ( $vars->{wsrep_provider_options}, @{$dupes->{wsrep_provider_options} || [] } ) { my %opts = $wpo =~ /(\S+)\s*=\s*(\S*)(?:;|;?$)/g; while ( my ($var, $val) = each %opts ) { $val =~ s/;$//; if ( exists $vars->{$var} ) { push @{$dupes->{$var} ||= []}, $val; } $vars->{$var} = $val; } } delete $vars->{wsrep_provider_options}; } return; } sub _parse_config_output { my ( %args ) = @_; my @required_args = qw(output TextResultSetParser); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($output) = @args{@required_args}; PTDEBUG && _d("Parsing config output"); my $format = $args{format} || detect_config_output_format(%args); if ( !$format ) { die "Cannot auto-detect the MySQL config format"; } my $vars; # variables hashref my $dupes; # duplicate vars hashref my $opt_files; # option files arrayref if ( $format eq 'show_variables' ) { $vars = parse_show_variables(%args); } elsif ( $format eq 'mysqld' ) { ($vars, $opt_files) = parse_mysqld(%args); } elsif ( $format eq 'my_print_defaults' ) { ($vars, $dupes) = parse_my_print_defaults(%args); } elsif ( $format eq 'option_file' ) { ($vars, $dupes) = parse_option_file(%args); } else { die "Invalid MySQL config format: $format"; } die "Failed to parse MySQL config" unless $vars && keys %$vars; if ( $format ne 'show_variables' ) { _mimic_show_variables( %args, format => $format, vars => $vars, ); } return ( format => $format, vars => $vars, option_files => $opt_files, duplicate_vars => $dupes, ); } sub 
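# NOTE (illustrative example, not shipped code): handle_special_vars() above
# flattens the Galera wsrep_provider_options string into individual variables
# so each provider option can be compared on its own.  A before/after sketch:
#
#   before:  wsrep_provider_options => 'gcache.size = 128M; pc.weight = 1;'
#   after:   'gcache.size' => '128M',  'pc.weight' => '1'
#            (wsrep_provider_options itself is removed from the vars hash, and
#             repeated options are recorded under duplicate_vars)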
detect_config_output_format { my ( %args ) = @_; my @required_args = qw(output); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($output) = @args{@required_args}; my $format; if ( $output =~ m/\|\s+\w+\s+\|\s+.+?\|/ || $output =~ m/\*+ \d/ || $output =~ m/Variable_name:\s+\w+/ || $output =~ m/Variable_name\s+Value$/m ) { PTDEBUG && _d('show variables format'); $format = 'show_variables'; } elsif ( $output =~ m/Starts the MySQL database server/ || $output =~ m/Default options are read from / || $output =~ m/^help\s+TRUE /m ) { PTDEBUG && _d('mysqld format'); $format = 'mysqld'; } elsif ( $output =~ m/^--\w+/m ) { PTDEBUG && _d('my_print_defaults format'); $format = 'my_print_defaults'; } elsif ( $output =~ m/^\s*\[[a-zA-Z]+\]\s*$/m ) { PTDEBUG && _d('option file format'); $format = 'option_file', } return $format; } sub parse_show_variables { my ( %args ) = @_; my @required_args = qw(output TextResultSetParser); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($output, $trp) = @args{@required_args}; my %config = map { $_->{Variable_name} => $_->{Value} } @{ $trp->parse($output) }; return \%config; } sub parse_mysqld { my ( %args ) = @_; my @required_args = qw(output); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($output) = @args{@required_args}; my @opt_files; if ( $output =~ m/^Default options are read.+\n/mg ) { my ($opt_files) = $output =~ m/\G^(.+)\n/m; my %seen; my @opt_files = grep { !$seen{$_} } split(' ', $opt_files); PTDEBUG && _d('Option files:', @opt_files); } else { PTDEBUG && _d("mysqld help output doesn't list option files"); } if ( $output !~ m/^-+ -+$(.+?)(?:\n\n.+)?\z/sm ) { PTDEBUG && _d("mysqld help output doesn't list vars and vals"); return; } my $varvals = $1; my ($config, undef) = _parse_varvals( qr/^(\S+)(.*)$/, $varvals, ); return $config, \@opt_files; } sub parse_my_print_defaults { my ( %args ) = @_; my @required_args = qw(output); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($output) = @args{@required_args}; my ($config, $dupes) = _parse_varvals( qr/^--([^=]+)(?:=(.*))?$/, $output, ); return $config, $dupes; } sub parse_option_file { my ( %args ) = @_; my @required_args = qw(output); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($output) = @args{@required_args}; my ($mysqld_section) = $output =~ m/\[mysqld\](.+?)(?:^\s*\[\w+\]|\Z)/xms; die "Failed to parse the [mysqld] section" unless $mysqld_section; my ($config, $dupes) = _parse_varvals( qr/^([^=]+)(?:=(.*))?$/, $mysqld_section, ); return $config, $dupes; } sub _preprocess_varvals { my ($re, $to_parse) = @_; my %vars; LINE: foreach my $line ( split /\n/, $to_parse ) { next LINE if $line =~ m/^\s*$/; # no empty lines next LINE if $line =~ /^\s*[#;]/; # no # or ; comment lines if ( $line !~ $re ) { PTDEBUG && _d("Line <", $line, "> didn't match $re"); next LINE; } my ($var, $val) = ($1, $2); $var =~ tr/-/_/; $var =~ s/\s*#.*$//; if ( !defined $val ) { $val = ''; } for my $item ($var, $val) { $item =~ s/^\s+//; $item =~ s/\s+$//; } push @{$vars{$var} ||= []}, $val } return \%vars; } sub _parse_varvals { my ( $vars ) = _preprocess_varvals(@_); my %config; my %duplicates; while ( my ($var, $vals) = each %$vars ) { my $val = _process_val( pop @$vals ); if ( @$vals && !$can_be_duplicate{$var} ) { PTDEBUG && _d("Duplicate var:", $var); foreach my $current_val ( map { _process_val($_) } @$vals ) { 
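# NOTE (illustrative example, not shipped code): the my_print_defaults format
# is parsed with the qr/^--([^=]+)(?:=(.*))?$/ pattern above; when a setting
# repeats, the last value wins and earlier values are reported as duplicates,
# except for the replication filter options that may legitimately repeat.
# Assuming this input:
#
#   my ($config, $dupes) = MySQLConfig::parse_my_print_defaults(
#       output => "--max_connections=100\n--max_connections=200\n",
#   );
#   # $config->{max_connections} is '200'
#   # $dupes->{max_connections}  is [ '100' ]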
push @{$duplicates{$var} ||= []}, $current_val; } } PTDEBUG && _d("Var:", $var, "val:", $val); $config{$var} = $val; } return \%config, \%duplicates; } my $quote_re = qr/ \A # Start of value (['"]) # Opening quote (.*) # Value \1 # Closing quote \s*(?:\#.*)? # End of line comment [\n\r]*\z # End of value /x; sub _process_val { my ($val) = @_; if ( $val =~ $quote_re ) { $val = $2; } else { $val =~ s/\s*#.*//; } if ( my ($num, $factor) = $val =~ m/^(\d+)([KMGT])b?$/i ) { my %factor_for = ( k => 1_024, m => 1_048_576, g => 1_073_741_824, t => 1_099_511_627_776, ); $val = $num * $factor_for{lc $factor}; } elsif ( $val =~ m/No default/ ) { $val = ''; } return $val; } sub _mimic_show_variables { my ( %args ) = @_; my @required_args = qw(vars format); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($vars, $format) = @args{@required_args}; foreach my $var ( keys %$vars ) { if ( $vars->{$var} eq '' ) { if ( $format eq 'mysqld' ) { if ( $var ne 'log_error' && $var =~ m/^(?:log|skip|ignore)/ ) { $vars->{$var} = 'OFF'; } } else { $vars->{$var} = 'ON'; } } } return; } sub _slurp_file { my ( $file ) = @_; die "I need a file argument" unless $file; PTDEBUG && _d("Reading", $file); open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; return $contents; } sub _get_version { my ( $dbh ) = @_; return unless $dbh; my $version = $dbh->selectrow_arrayref('SELECT VERSION()')->[0]; $version =~ s/(\d\.\d{1,2}.\d{1,2})/$1/; PTDEBUG && _d('MySQL version', $version); return $version; } sub has { my ( $self, $var ) = @_; return exists $self->{vars}->{$var}; } sub value_of { my ( $self, $var ) = @_; return unless $var; return $self->{vars}->{$var}; } sub variables { my ( $self, %args ) = @_; return $self->{vars}; } sub duplicate_variables { my ( $self ) = @_; return $self->{duplicate_vars}; } sub option_files { my ( $self ) = @_; return $self->{option_files}; } sub mysql_version { my ( $self ) = @_; return $self->{mysql_version}; } sub format { my ( $self ) = @_; return $self->{format}; } sub is_active { my ( $self ) = @_; return $self->{dbh} ? 1 : 0; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MySQLConfig package # ########################################################################### # ########################################################################### # MySQLConfigComparer package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MySQLConfigComparer.pm # t/lib/MySQLConfigComparer.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package MySQLConfigComparer; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; my %alt_val_for = ( ON => 1, YES => 1, TRUE => 1, OFF => 0, NO => 0, FALSE => 0, ); sub new { my ( $class, %args ) = @_; my %ignore_vars = ( date_format => 1, datetime_format => 1, ft_stopword_file => 1, timestamp => 1, time_format => 1, ($args{ignore_variables} ? map { $_ => 1 } @{$args{ignore_variables}} : ()), ); my %is_numeric = ( long_query_time => 1, ($args{numeric_variables} ? 
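# NOTE (illustrative example, not shipped code): _process_val() in MySQLConfig
# above normalizes option values before they are stored, so size suffixes and
# quoting do not produce false differences.  For example:
#
#   MySQLConfig::_process_val('16M');            # 16777216
#   MySQLConfig::_process_val('"512"  # note');  # '512' (quotes and trailing
#                                                #  comment stripped)
#   MySQLConfig::_process_val('No default');     # ''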
map { $_ => 1 } @{$args{numeric_variables}} : ()), ); my %value_is_optional = ( log_error => 1, log_isam => 1, secure_file_priv => 1, ($args{optional_value_variables} ? map { $_ => 1 } @{$args{optional_value_variables}} : ()), ); my %any_value_is_true = ( log => 1, log_bin => 1, log_slow_queries => 1, ($args{any_value_is_true_variables} ? map { $_ => 1 } @{$args{any_value_is_true_variables}} : ()), ); my %base_path = ( character_sets_dir => 'basedir', datadir => 'basedir', general_log_file => 'datadir', language => 'basedir', log_error => 'datadir', pid_file => 'datadir', plugin_dir => 'basedir', slow_query_log_file => 'datadir', socket => 'datadir', ($args{base_paths} ? map { $_ => 1 } @{$args{base_paths}} : ()), ); my $self = { ignore_vars => \%ignore_vars, is_numeric => \%is_numeric, value_is_optional => \%value_is_optional, any_value_is_true => \%any_value_is_true, base_path => \%base_path, ignore_case => exists $args{ignore_case} ? $args{ignore_case} : 1, }; return bless $self, $class; } sub diff { my ( $self, %args ) = @_; my @required_args = qw(configs); foreach my $arg( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($configs) = @args{@required_args}; if ( @$configs < 2 ) { PTDEBUG && _d("Less than two MySQLConfig objects; nothing to compare"); return; } my $base_path = $self->{base_path}; my $is_numeric = $self->{is_numeric}; my $any_value_is_true = $self->{any_value_is_true}; my $value_is_optional = $self->{value_is_optional}; my $config0 = $configs->[0]; my $last_config = @$configs - 1; my $vars = $self->_get_shared_vars(%args); my $ignore_case = $self->{ignore_case}; my $diffs; VARIABLE: foreach my $var ( @$vars ) { my $is_dir = $var =~ m/dir$/ || $var eq 'language'; my $val0 = $self->_normalize_value( # config0 value value => $config0->value_of($var), is_directory => $is_dir, base_path => $config0->value_of($base_path->{$var}) || "", ); eval { CONFIG: foreach my $configN ( @$configs[1..$last_config] ) { my $valN = $self->_normalize_value( # configN value value => $configN->value_of($var), is_directory => $is_dir, base_path => $configN->value_of($base_path->{$var}) || "", ); if ( $is_numeric->{$var} ) { next CONFIG if $val0 == $valN; } else { next CONFIG if $ignore_case ? lc($val0) eq lc($valN) : $val0 eq $valN; if ( $config0->format() ne $configN->format() ) { if ( $any_value_is_true->{$var} ) { next CONFIG if $val0 && $valN; } if ( $value_is_optional->{$var} ) { next CONFIG if (!$val0 && $valN) || ($val0 && !$valN); } } } PTDEBUG && _d("Different", $var, "values:", $val0, $valN); $diffs->{$var} = [ map { $_->value_of($var) } @$configs ]; last CONFIG; } # CONFIG }; if ( $EVAL_ERROR ) { my $vals = join(', ', map { my $val = $_->value_of($var); defined $val ? $val : 'undef' } @$configs); warn "Comparing $var values ($vals) caused an error: $EVAL_ERROR"; } } # VARIABLE return $diffs; } sub missing { my ( $self, %args ) = @_; my @required_args = qw(configs); foreach my $arg( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($configs) = @args{@required_args}; if ( @$configs < 2 ) { PTDEBUG && _d("Less than two MySQLConfig objects; nothing to compare"); return; } my %vars = map { $_ => 1 } map { keys %{$_->variables()} } @$configs; my $missing; foreach my $var ( keys %vars ) { my $n_configs_having_var = grep { $_->has($var) } @$configs; if ( $n_configs_having_var < @$configs ) { $missing->{$var} = [ map { $_->has($var) ? 
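# NOTE (illustrative sketch, not shipped code) of how the comparer is driven;
# the real call is in pt_config_diff::main() near the end of this file, and
# $cfg1 and $cfg2 are assumed to be MySQLConfig objects:
#
#   my $cmp   = MySQLConfigComparer->new();
#   my $diffs = $cmp->diff( configs => [ $cfg1, $cfg2 ] );
#   # e.g. $diffs->{max_connections} = [ '151', '500' ]
#   my $missing = $cmp->missing( configs => [ $cfg1, $cfg2 ] );
#   # e.g. $missing->{gtid_mode} = [ 1, 0 ]  (present in $cfg1, absent in $cfg2)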
1 : 0 } @$configs ]; } } return $missing; } sub _normalize_value { my ( $self, %args ) = @_; my ($val, $is_dir, $base_path) = @args{qw(value is_directory base_path)}; $val = defined $val ? $val : ''; $val = $alt_val_for{$val} if exists $alt_val_for{$val}; if ( $val ) { if ( $is_dir ) { $val .= '/' unless $val =~ m/\/$/; } if ( $base_path && $val !~ m/^\// ) { $val =~ s/^\.?(.+)/$base_path\/$1/; # prepend base path $val =~ s/\/{2,}/\//g; # make redundant // single / } } return $val; } sub _get_shared_vars { my ( $self, %args ) = @_; my ($configs) = @args{qw(configs)}; my $ignore_vars = $self->{ignore_vars}; my $config0 = $configs->[0]; my $last_config = @$configs - 1; my @vars = grep { !$ignore_vars->{$_} } map { my $config = $_; my $vars = $config->variables(); grep { $config0->has($_); } keys %$vars; } @$configs[1..$last_config]; return \@vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MySQLConfigComparer package # ########################################################################### # ########################################################################### # ReportFormatter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/ReportFormatter.pm # t/lib/ReportFormatter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package ReportFormatter; use Lmo; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(min max); use POSIX qw(ceil); eval { require Term::ReadKey }; my $have_term = $EVAL_ERROR ? 0 : 1; has underline_header => ( is => 'ro', isa => 'Bool', default => sub { 1 }, ); has line_prefix => ( is => 'ro', isa => 'Str', default => sub { '# ' }, ); has line_width => ( is => 'ro', isa => 'Int', default => sub { 78 }, ); has column_spacing => ( is => 'ro', isa => 'Str', default => sub { ' ' }, ); has extend_right => ( is => 'ro', isa => 'Bool', default => sub { '' }, ); has truncate_line_mark => ( is => 'ro', isa => 'Str', default => sub { '...' }, ); has column_errors => ( is => 'ro', isa => 'Str', default => sub { 'warn' }, ); has truncate_header_side => ( is => 'ro', isa => 'Str', default => sub { 'left' }, ); has strip_whitespace => ( is => 'ro', isa => 'Bool', default => sub { 1 }, ); has title => ( is => 'rw', isa => 'Str', predicate => 'has_title', ); has n_cols => ( is => 'rw', isa => 'Int', default => sub { 0 }, init_arg => undef, ); has cols => ( is => 'ro', isa => 'ArrayRef', init_arg => undef, default => sub { [] }, clearer => 'clear_cols', ); has lines => ( is => 'ro', isa => 'ArrayRef', init_arg => undef, default => sub { [] }, clearer => 'clear_lines', ); has truncate_headers => ( is => 'rw', isa => 'Bool', default => sub { undef }, init_arg => undef, clearer => 'clear_truncate_headers', ); sub BUILDARGS { my $class = shift; my $args = $class->SUPER::BUILDARGS(@_); if ( ($args->{line_width} || '') eq 'auto' ) { die "Cannot auto-detect line width because the Term::ReadKey module " . 
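# NOTE (illustrative example, not shipped code): _normalize_value() above is
# what lets equivalent settings written differently compare as equal:
# ON/YES/TRUE and OFF/NO/FALSE map to 1 and 0, directories get a trailing
# slash, and relative paths are prefixed with the matching basedir/datadir.
# For example, with $cmp a MySQLConfigComparer:
#
#   $cmp->_normalize_value( value => 'OFF' );                # 0
#   $cmp->_normalize_value( value        => 'data',
#                           is_directory => 1,
#                           base_path    => '/var/lib/mysql' );
#   # returns '/var/lib/mysql/data/'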
"is not installed" unless $have_term; ($args->{line_width}) = GetTerminalSize(); PTDEBUG && _d('Line width:', $args->{line_width}); } return $args; } sub set_columns { my ( $self, @cols ) = @_; my $min_hdr_wid = 0; # check that header fits on line my $used_width = 0; my @auto_width_cols; for my $i ( 0..$#cols ) { my $col = $cols[$i]; my $col_name = $col->{name}; my $col_len = length $col_name; die "Column does not have a name" unless defined $col_name; if ( $col->{width} ) { $col->{width_pct} = ceil(($col->{width} * 100) / $self->line_width()); PTDEBUG && _d('col:', $col_name, 'width:', $col->{width}, 'chars =', $col->{width_pct}, '%'); } if ( $col->{width_pct} ) { $used_width += $col->{width_pct}; } else { PTDEBUG && _d('Auto width col:', $col_name); $col->{auto_width} = 1; push @auto_width_cols, $i; } $col->{truncate} = 1 unless defined $col->{truncate}; $col->{truncate_mark} = '...' unless defined $col->{truncate_mark}; $col->{truncate_side} ||= 'right'; $col->{undef_value} = '' unless defined $col->{undef_value}; $col->{min_val} = 0; $col->{max_val} = 0; $min_hdr_wid += $col_len; $col->{header_width} = $col_len; $col->{right_most} = 1 if $i == $#cols; push @{$self->cols}, $col; } $self->n_cols( scalar @cols ); if ( ($used_width || 0) > 100 ) { die "Total width_pct for all columns is >100%"; } if ( @auto_width_cols ) { my $wid_per_col = int((100 - $used_width) / scalar @auto_width_cols); PTDEBUG && _d('Line width left:', (100-$used_width), '%;', 'each auto width col:', $wid_per_col, '%'); map { $self->cols->[$_]->{width_pct} = $wid_per_col } @auto_width_cols; } $min_hdr_wid += ($self->n_cols() - 1) * length $self->column_spacing(); PTDEBUG && _d('min header width:', $min_hdr_wid); if ( $min_hdr_wid > $self->line_width() ) { PTDEBUG && _d('Will truncate headers because min header width', $min_hdr_wid, '> line width', $self->line_width()); $self->truncate_headers(1); } return; } sub add_line { my ( $self, @vals ) = @_; my $n_vals = scalar @vals; if ( $n_vals != $self->n_cols() ) { $self->_column_error("Number of values $n_vals does not match " . "number of columns " . $self->n_cols()); } for my $i ( 0..($n_vals-1) ) { my $col = $self->cols->[$i]; my $val = defined $vals[$i] ? $vals[$i] : $col->{undef_value}; if ( $self->strip_whitespace() ) { $val =~ s/^\s+//g; $val =~ s/\s+$//; $vals[$i] = $val; } my $width = length $val; $col->{min_val} = min($width, ($col->{min_val} || $width)); $col->{max_val} = max($width, ($col->{max_val} || $width)); } push @{$self->lines}, \@vals; return; } sub get_report { my ( $self, %args ) = @_; $self->_calculate_column_widths(); if ( $self->truncate_headers() ) { $self->_truncate_headers(); } $self->_truncate_line_values(%args); my @col_fmts = $self->_make_column_formats(); my $fmt = $self->line_prefix() . join($self->column_spacing(), @col_fmts); PTDEBUG && _d('Format:', $fmt); (my $hdr_fmt = $fmt) =~ s/%([^-])/%-$1/g; my @lines; push @lines, $self->line_prefix() . $self->title() if $self->has_title(); push @lines, $self->_truncate_line( sprintf($hdr_fmt, map { $_->{name} } @{$self->cols}), strip => 1, mark => '', ); if ( $self->underline_header() ) { my @underlines = map { '=' x $_->{print_width} } @{$self->cols}; push @lines, $self->_truncate_line( sprintf($fmt, map { $_ || '' } @underlines), mark => '', ); } push @lines, map { my $vals = $_; my $i = 0; my @vals = map { my $val = defined $_ ? 
$_ : $self->cols->[$i++]->{undef_value}; $val = '' if !defined $val; $val =~ s/\n/ /g; $val; } @$vals; my $line = sprintf($fmt, @vals); if ( $self->extend_right() ) { $line; } else { $self->_truncate_line($line); } } @{$self->lines}; $self->clear_cols(); $self->clear_lines(); $self->clear_truncate_headers(); return join("\n", @lines) . "\n"; } sub truncate_value { my ( $self, $col, $val, $width, $side ) = @_; return $val if length $val <= $width; return $val if $col->{right_most} && $self->extend_right(); $side ||= $col->{truncate_side}; my $mark = $col->{truncate_mark}; if ( $side eq 'right' ) { $val = substr($val, 0, $width - length $mark); $val .= $mark; } elsif ( $side eq 'left') { $val = $mark . substr($val, -1 * $width + length $mark); } else { PTDEBUG && _d("I don't know how to", $side, "truncate values"); } return $val; } sub _calculate_column_widths { my ( $self ) = @_; my $extra_space = 0; foreach my $col ( @{$self->cols} ) { my $print_width = int($self->line_width() * ($col->{width_pct} / 100)); PTDEBUG && _d('col:', $col->{name}, 'width pct:', $col->{width_pct}, 'char width:', $print_width, 'min val:', $col->{min_val}, 'max val:', $col->{max_val}); if ( $col->{auto_width} ) { if ( $col->{min_val} && $print_width < $col->{min_val} ) { PTDEBUG && _d('Increased to min val width:', $col->{min_val}); $print_width = $col->{min_val}; } elsif ( $col->{max_val} && $print_width > $col->{max_val} ) { PTDEBUG && _d('Reduced to max val width:', $col->{max_val}); $extra_space += $print_width - $col->{max_val}; $print_width = $col->{max_val}; } } $col->{print_width} = $print_width; PTDEBUG && _d('print width:', $col->{print_width}); } PTDEBUG && _d('Extra space:', $extra_space); while ( $extra_space-- ) { foreach my $col ( @{$self->cols} ) { if ( $col->{auto_width} && ( $col->{print_width} < $col->{max_val} || $col->{print_width} < $col->{header_width}) ) { $col->{print_width}++; } } } return; } sub _truncate_headers { my ( $self, $col ) = @_; my $side = $self->truncate_header_side(); foreach my $col ( @{$self->cols} ) { my $col_name = $col->{name}; my $print_width = $col->{print_width}; next if length $col_name <= $print_width; $col->{name} = $self->truncate_value($col, $col_name, $print_width, $side); PTDEBUG && _d('Truncated hdr', $col_name, 'to', $col->{name}, 'max width:', $print_width); } return; } sub _truncate_line_values { my ( $self, %args ) = @_; my $n_vals = $self->n_cols() - 1; foreach my $vals ( @{$self->lines} ) { for my $i ( 0..$n_vals ) { my $col = $self->cols->[$i]; my $val = defined $vals->[$i] ? $vals->[$i] : $col->{undef_value}; my $width = length $val; if ( $col->{print_width} && $width > $col->{print_width} ) { if ( !$col->{truncate} ) { $self->_column_error("Value '$val' is too wide for column " . $col->{name}); } my $callback = $args{truncate_callback}; my $print_width = $col->{print_width}; $val = $callback ? $callback->($col, $val, $print_width) : $self->truncate_value($col, $val, $print_width); PTDEBUG && _d('Truncated val', $vals->[$i], 'to', $val, '; max width:', $print_width); $vals->[$i] = $val; } } } return; } sub _make_column_formats { my ( $self ) = @_; my @col_fmts; my $n_cols = $self->n_cols() - 1; for my $i ( 0..$n_cols ) { my $col = $self->cols->[$i]; my $width = $col->{right_most} && !$col->{right_justify} ? '' : $col->{print_width}; my $col_fmt = '%' . ($col->{right_justify} ? '' : '-') . $width . 's'; push @col_fmts, $col_fmt; } return @col_fmts; } sub _truncate_line { my ( $self, $line, %args ) = @_; my $mark = defined $args{mark} ? 
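# NOTE (illustrative sketch, not shipped code) of the ReportFormatter API as
# pt_config_diff::main() uses it later in this file; the column names and
# values are assumptions:
#
#   my $report = ReportFormatter->new( line_prefix => '', line_width => 78 );
#   $report->set_columns(
#       { name => 'Variable', width => 25 },
#       { name => 'node1' },
#       { name => 'node2' },
#   );
#   $report->add_line( 'max_connections', '151', '500' );
#   $report->title('1 config difference');
#   print $report->get_report();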
$args{mark} : $self->truncate_line_mark(); if ( $line ) { $line =~ s/\s+$// if $args{strip}; my $len = length($line); if ( $len > $self->line_width() ) { $line = substr($line, 0, $self->line_width() - length $mark); $line .= $mark if $mark; } } return $line; } sub _column_error { my ( $self, $err ) = @_; my $msg = "Column error: $err"; $self->column_errors() eq 'die' ? die $msg : warn $msg; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End ReportFormatter package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? 
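# NOTE (illustrative sketch, not shipped code): HTTP::Micro is a tiny bundled
# HTTP client used only by the version check below.  Typical use, mirroring
# VersionCheck::pingback():
#
#   my $ua  = HTTP::Micro->new( timeout => 3 );
#   my $res = $ua->request( 'GET', 'https://v.percona.com' );
#   if ( $res->{success} ) {
#       print $res->{content};
#   }
#   else {
#       warn "HTTP status $res->{status}: $res->{reason}\n";
#   }
#
# (https URLs additionally require IO::Socket::SSL to be installed.)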
$host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? $1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for 
$host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? 
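# NOTE (illustrative sketch, not shipped code): every blocking read/write
# above is guarded by can_read()/can_write(), which wrap the classic
# 4-argument select() timeout pattern implemented in _do_timeout() just below:
# build a bit vector for the socket's file descriptor and retry select() on
# EINTR until the remaining timeout runs out.  The core of that pattern, with
# $socket and $timeout assumed:
#
#   vec( my $fdset = '', fileno($socket), 1 ) = 1;
#   my $n_ready = select( $fdset, undef, undef, $timeout );  # wait until readable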
BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! = 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or 
return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! %$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . $file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
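# NOTE (illustrative example, not shipped code): encode_client_response()
# above emits one "<id>;<item>;<version>" line per collected fact; MySQL
# versions are keyed by the per-instance id while everything else uses the
# general uuid from get_uuid(), roughly:
#
#   <general_id>;<item>;<version>              # e.g. OS or Perl version
#   <mysql_instance_id>;MySQL;<mysql version>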
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_config_diff; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub main { local @ARGV = @_; # set global ARGV for this package # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); if ( !$o->get('help') ) { if ( @ARGV < 1 ) { $o->save_error("Specify at least one file or DSN on the command line"); } } $o->usage_or_errors(); # ######################################################################### # Make common modules. 
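# (MySQLConfigComparer below applies the --ignore-variables and
# --[no]ignore-case options while diffing; TextResultSetParser is used to
# parse tabular SHOW VARIABLES-style output given in file CONFIGs.)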
# ######################################################################### my $trp = new TextResultSetParser(); my $config_cmp = new MySQLConfigComparer( ignore_variables => $o->get('ignore-variables'), ignore_case => $o->get('ignore-case'), ); my %common_modules = ( DSNParser => $dp, OptionParser => $o, MySQLConfigComparer => $config_cmp, TextResultSetParser => $trp, ); # ######################################################################### # Make MySQLConfig objs for each FILE|DSN. # ######################################################################### my $dsn_defaults = $dp->parse_options($o); my $last_dsn; my @configs; # MySQLConfig objects my @config_names; # Human-readable names for those ^ objs my @cxn; foreach my $config_src ( @ARGV ) { if ( -f $config_src ) { PTDEBUG && _d('Config source', $config_src, 'is a file'); push @configs, new MySQLConfig( file => $config_src, %common_modules, ); push @config_names, $config_src; # filename } else { PTDEBUG && _d('Config source', $config_src, 'is a DSN'); my $cxn = new Cxn( dsn_string => $config_src, prev_dsn => $last_dsn, DSNParser => $dp, OptionParser => $o, ); $cxn->connect(); $last_dsn = $cxn->dsn(); push @configs, new MySQLConfig( dbh => $cxn->dbh(), dsn => $cxn->dsn(), %common_modules, ); push @config_names, $cxn->name(); push @cxn, $cxn; } } # ######################################################################## # Daemonize now that everything is setup and ready to work. # ######################################################################## my $daemon; if ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ map({ +{ dbh => $_->dbh, dsn => $_->dsn } } @cxn) ], ); } # ######################################################################### # Diff the given configs. # ######################################################################### my $report; my $truncate_callback; if ( $o->get('report') ) { $report = new ReportFormatter( line_prefix => '', line_width => $o->get('report-width'), ); $report->set_columns( { name => 'Variable', width=>25, }, map { { name => $_ } } @config_names, ); # This is difficult. Ideally, we want to know which var this # val applies to (i.e. first column, same row). But that's # not how ReportFormatter works. Plus, even if we truncate a # path on the left side, that might be where the difference is. # So there's no easy solution here. # $truncate_callback = sub { # }; } PTDEBUG && _d("Comparing", scalar @configs, "configs"); my $diffs = $config_cmp->diff(configs=>\@configs); my $n_diffs = scalar keys %$diffs; PTDEBUG && _d($n_diffs, "differences found:", Dumper($diffs)); if ( $n_diffs ) { if ( $o->get('report') ) { foreach my $var ( sort keys %$diffs ) { $report->add_line($var, @{$diffs->{$var}}); } $report->title( "$n_diffs config difference" . ($n_diffs > 1 ? 's' : '')); print $report->get_report(); } return 1; } # No differences. 
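# main() returns 1 above when differences were found, and 0 below when the
# configs all match; that return value becomes the tool's exit status via
# exit main(@ARGV) (see EXIT STATUS in the documentation).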
return 0; } # ########################################################################## # Subroutines # ########################################################################## sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-config-diff - Diff MySQL configuration files and server variables. =head1 SYNOPSIS Usage: pt-config-diff [OPTIONS] CONFIG CONFIG [CONFIG...] pt-config-diff diffs MySQL configuration files and server variables. CONFIG can be a filename or a DSN. At least two CONFIG sources must be given. Like standard Unix diff, there is no output if there are no differences. Diff host1 config from SHOW VARIABLES against host2: pt-config-diff h=host1 h=host2 Diff config from [mysqld] section in my.cnf against host1 config: pt-config-diff /etc/my.cnf h=host1 Diff the [mysqld] section of two option files: pt-config-diff /etc/my-small.cnf /etc/my-large.cnf =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-config-diff diffs MySQL configurations by examining the values of server system variables from two or more CONFIG sources specified on the command line. A CONFIG source can be a DSN or a filename containing the output of C<mysqld --help --verbose>, C<my_print_defaults>, C<SHOW VARIABLES>, or an option file (e.g. my.cnf). For each DSN CONFIG, pt-config-diff connects to MySQL and gets variables and values by executing C<SHOW VARIABLES>. This is an "active config" because it shows what server values MySQL is actively (currently) running with. Only variables that all CONFIG sources have are compared because if a variable is not present then we cannot know or safely guess its value. For example, if you compare an option file (e.g. my.cnf) to an active config (i.e. SHOW VARIABLES from a DSN CONFIG), the option file will probably only have a few variables, whereas the active config has every variable. Only values of the variables present in both configs are compared. Option file and DSN configs provide the best results. =head1 OUTPUT There is no output when there are no differences. When there are differences, pt-config-diff prints a report to STDOUT that looks similar to the following: 2 config differences Variable my.master.cnf my.slave.cnf ========================= =============== =============== datadir /tmp/12345/data /tmp/12346/data port 12345 12346 Comparing MySQL variables is difficult because there are many variations and subtleties across the many versions and distributions of MySQL.
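For example, the same setting is often spelled differently in an option file than in SHOW VARIABLES (C<1M> versus C<1048576>, C<ON> versus C<1>), so a naive string comparison would report spurious differences. The following sketch is an illustration only, not the tool's own code; it shows the kind of value normalization a comparer has to attempt:

  my %suffix = (k => 1024, m => 1024**2, g => 1024**3);
  sub normalize_value {
     my ($val) = @_;
     return '' unless defined $val;
     $val = lc $val;
     return 1 if $val eq 'on'  || $val eq 'yes' || $val eq 'true';
     return 0 if $val eq 'off' || $val eq 'no'  || $val eq 'false';
     if ( my ($num, $unit) = $val =~ m/^(\d+)([kmg])$/ ) {
        return $num * $suffix{$unit};   # "1m" -> 1048576
     }
     return $val;
  }
  # normalize_value('1M') == normalize_value('1048576')   # no difference
  # normalize_value('ON') == normalize_value('1')         # no difference
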
When a comparison fails, the tool prints a warning to STDERR, such as the following: Comparing log_error values (mysqld.log, /tmp/12345/data/mysqld.log) caused an error: Argument "/tmp/12345/data/mysqld.log" isn't numeric in numeric eq (==) at ./pt-config-diff line 2311. Please report these warnings so the comparison functions can be improved. =head1 EXIT STATUS pt-config-diff exits with a zero exit status when there are no differences, and 1 if there are. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --database short form: -D; type: string Connect to this database. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --[no]ignore-case default: yes Compare the variables case-insensitively. =item --ignore-variables type: array Ignore, do not compare, these variables. =item --password short form: -p; type: string Password to use for connection. =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --[no]report default: yes Print the MySQL config diff report to STDOUT. If you just want to check if the given configs are different or not by examining the tool's exit status, then specify C<--no-report> to suppress the report. =item --report-width type: int; default: 78 Truncate report lines to this many characters. Since some variable values can be long, or when comparing multiple configs, it may help to increase the report width so values are not truncated beyond readability. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C<variable=value> pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the default value of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --user short form: -u; type: string MySQL user if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features.
First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-config-diff ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. 
=head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-config-diff 3.1.0 =cut percona-toolkit-3.1/bin/pt-deadlock-logger000775 001750 001750 00000510052 13535723560 022013 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit OptionParser Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo VersionParser Quoter DSNParser Cxn Daemon HTTP::Micro VersionCheck Runtime )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. 
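# (OptionParser builds the tool's command-line interface from the tool's own
# POD: get_specs() reads the =head1 OPTIONS and =head1 DSN OPTIONS sections
# of this file and turns each =item into a Getopt::Long option spec.)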
The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
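# (This nested parser reads the =head1 DSN OPTIONS section of the POD,
#  where each =item is a single-character key such as h, P, u, or S.)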
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
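# For example, a POD entry like "--charset / short form: -A / type: string"
# arrives here with spec "charset|A=s": $long becomes "charset", $short
# becomes "A", and the type letter extracted below is "s".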
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
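# (A value such as '10m' has just been converted to 600 seconds above;
#  prepending $prefix preserves an optional leading + or - sign.)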
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . 
"::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! 
exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq . qq . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! 
defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # VersionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionParser.pm # t/lib/VersionParser.t # See https://launchpad.net/percona-toolkit for more information. 
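# A minimal, illustrative usage sketch for the package below (not part of the
# original module; it assumes a connected DBI handle in $dbh obtained elsewhere).
# VersionParser objects stringify to "major.minor.revision" and overload
# comparison, so callers can gate behavior on the server version:
#
#   my $vp = VersionParser->new($dbh);        # or VersionParser->new('5.7.26')
#   if ( $vp >= '5.6' ) { ... }               # overloaded cmp/<=> with fallback
#   my $hint = $vp->comment('SQL_NO_CACHE');  # e.g. "/*!50726 SQL_NO_CACHE */"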
# ########################################################################### { package VersionParser; use Lmo; use Scalar::Util qw(blessed); use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use overload ( '""' => "version", '<=>' => "cmp", 'cmp' => "cmp", fallback => 1, ); use Carp (); our $VERSION = 0.01; has major => ( is => 'ro', isa => 'Int', required => 1, ); has [qw( minor revision )] => ( is => 'ro', isa => 'Num', ); has flavor => ( is => 'ro', isa => 'Str', default => sub { 'Unknown' }, ); has innodb_version => ( is => 'ro', isa => 'Str', default => sub { 'NO' }, ); sub series { my $self = shift; return $self->_join_version($self->major, $self->minor); } sub version { my $self = shift; return $self->_join_version($self->major, $self->minor, $self->revision); } sub is_in { my ($self, $target) = @_; return $self eq $target; } sub _join_version { my ($self, @parts) = @_; return join ".", map { my $c = $_; $c =~ s/^0\./0/; $c } grep defined, @parts; } sub _split_version { my ($self, $str) = @_; my @version_parts = map { s/^0(?=\d)/0./; $_ } $str =~ m/(\d+)/g; return @version_parts[0..2]; } sub normalized_version { my ( $self ) = @_; my $result = sprintf('%d%02d%02d', map { $_ || 0 } $self->major, $self->minor, $self->revision); PTDEBUG && _d($self->version, 'normalizes to', $result); return $result; } sub comment { my ( $self, $cmd ) = @_; my $v = $self->normalized_version(); return "/*!$v $cmd */" } my @methods = qw(major minor revision); sub cmp { my ($left, $right) = @_; my $right_obj = (blessed($right) && $right->isa(ref($left))) ? $right : ref($left)->new($right); my $retval = 0; for my $m ( @methods ) { last unless defined($left->$m) && defined($right_obj->$m); $retval = $left->$m <=> $right_obj->$m; last if $retval; } return $retval; } sub BUILDARGS { my $self = shift; if ( @_ == 1 ) { my %args; if ( blessed($_[0]) && $_[0]->can("selectrow_hashref") ) { PTDEBUG && _d("VersionParser got a dbh, trying to get the version"); my $dbh = $_[0]; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $query = eval { $dbh->selectall_arrayref(q/SHOW VARIABLES LIKE 'version%'/, { Slice => {} }) }; if ( $query ) { $query = { map { $_->{variable_name} => $_->{value} } @$query }; @args{@methods} = $self->_split_version($query->{version}); $args{flavor} = delete $query->{version_comment} if $query->{version_comment}; } elsif ( eval { ($query) = $dbh->selectrow_array(q/SELECT VERSION()/) } ) { @args{@methods} = $self->_split_version($query); } else { Carp::confess("Couldn't get the version from the dbh while " . "creating a VersionParser object: $@"); } $args{innodb_version} = eval { $self->_innodb_version($dbh) }; } elsif ( !ref($_[0]) ) { @args{@methods} = $self->_split_version($_[0]); } for my $method (@methods) { delete $args{$method} unless defined $args{$method}; } @_ = %args if %args; } return $self->SUPER::BUILDARGS(@_); } sub _innodb_version { my ( $self, $dbh ) = @_; return unless $dbh; my $innodb_version = "NO"; my ($innodb) = grep { $_->{engine} =~ m/InnoDB/i } map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{ $dbh->selectall_arrayref("SHOW ENGINES", {Slice=>{}}) }; if ( $innodb ) { PTDEBUG && _d("InnoDB support:", $innodb->{support}); if ( $innodb->{support} =~ m/YES|DEFAULT/i ) { my $vars = $dbh->selectrow_hashref( "SHOW VARIABLES LIKE 'innodb_version'"); $innodb_version = !$vars ? 
"BUILTIN" : ($vars->{Value} || $vars->{value}); } else { $innodb_version = $innodb->{support}; # probably DISABLED or NO } } PTDEBUG && _d("InnoDB version:", $innodb_version); return $innodb_version; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End VersionParser package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . '`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Cxn package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Cxn.pm # t/lib/Cxn.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Cxn; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Scalar::Util qw(blessed); use constant { PTDEBUG => $ENV{PTDEBUG} || 0, PERCONA_TOOLKIT_TEST_USE_DSN_NAMES => $ENV{PERCONA_TOOLKIT_TEST_USE_DSN_NAMES} || 0, }; sub new { my ( $class, %args ) = @_; my @required_args = qw(DSNParser OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($dp, $o) = @args{@required_args}; my $dsn_defaults = $dp->parse_options($o); my $prev_dsn = $args{prev_dsn}; my $dsn = $args{dsn}; if ( !$dsn ) { $args{dsn_string} ||= 'h=' . ($dsn_defaults->{h} || 'localhost'); $dsn = $dp->parse( $args{dsn_string}, $prev_dsn, $dsn_defaults); } elsif ( $prev_dsn ) { $dsn = $dp->copy($prev_dsn, $dsn); } my $dsn_name = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; my $self = { dsn => $dsn, dbh => $args{dbh}, dsn_name => $dsn_name, hostname => '', set => $args{set}, NAME_lc => defined($args{NAME_lc}) ? $args{NAME_lc} : 1, dbh_set => 0, ask_pass => $o->get('ask-pass'), DSNParser => $dp, is_cluster_node => undef, parent => $args{parent}, }; return bless $self, $class; } sub connect { my ( $self, %opts ) = @_; my $dsn = $opts{dsn} || $self->{dsn}; my $dp = $self->{DSNParser}; my $dbh = $self->{dbh}; if ( !$dbh || !$dbh->ping() ) { if ( $self->{ask_pass} && !$self->{asked_for_pass} && !defined $dsn->{p} ) { $dsn->{p} = OptionParser::prompt_noecho("Enter MySQL password: "); $self->{asked_for_pass} = 1; } $dbh = $dp->get_dbh( $dp->get_cxn_params($dsn), { AutoCommit => 1, %opts, }, ); } $dbh = $self->set_dbh($dbh); if ( $opts{dsn} ) { $self->{dsn} = $dsn; $self->{dsn_name} = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; } PTDEBUG && _d($dbh, 'Connected dbh to', $self->{hostname},$self->{dsn_name}); return $dbh; } sub set_dbh { my ($self, $dbh) = @_; if ( $self->{dbh} && $self->{dbh} == $dbh && $self->{dbh_set} ) { PTDEBUG && _d($dbh, 'Already set dbh'); return $dbh; } PTDEBUG && _d($dbh, 'Setting dbh'); $dbh->{FetchHashKeyName} = 'NAME_lc' if $self->{NAME_lc}; my $sql = 'SELECT @@server_id /*!50038 , @@hostname*/'; PTDEBUG && _d($dbh, $sql); my ($server_id, $hostname) = $dbh->selectrow_array($sql); PTDEBUG && _d($dbh, 'hostname:', $hostname, $server_id); if ( $hostname ) { $self->{hostname} = $hostname; } if ( $self->{parent} ) { PTDEBUG && _d($dbh, 'Setting InactiveDestroy=1 in parent'); $dbh->{InactiveDestroy} = 1; } if ( my $set = $self->{set}) { $set->($dbh); } $self->{dbh} = $dbh; $self->{dbh_set} = 1; return $dbh; } sub lost_connection { my ($self, $e) = @_; return 0 unless $e; return $e =~ m/MySQL server has gone away/ || $e =~ m/Lost connection to MySQL server/ || $e =~ m/Server shutdown in progress/; } sub dbh { my ($self) = @_; return $self->{dbh}; } sub dsn { my ($self) = @_; return $self->{dsn}; } sub name { my ($self) = @_; return $self->{dsn_name} if PERCONA_TOOLKIT_TEST_USE_DSN_NAMES; return $self->{hostname} || $self->{dsn_name} || 'unknown host'; } sub description { my ($self) = @_; return sprintf("%s -> %s:%s", $self->name(), $self->{dsn}->{h}, $self->{dsn}->{P} || 'socket'); } sub get_id { my ($self, $cxn) = @_; $cxn ||= $self; my $unique_id; if ($cxn->is_cluster_node()) { # for cluster we concatenate various variables to maximize id 'uniqueness' across versions my $sql = q{SHOW STATUS LIKE 'wsrep\_local\_index'}; my (undef, $wsrep_local_index) = $cxn->dbh->selectrow_array($sql); PTDEBUG && 
_d("Got cluster wsrep_local_index: ",$wsrep_local_index); $unique_id = $wsrep_local_index."|"; foreach my $val ('server\_id', 'wsrep\_sst\_receive\_address', 'wsrep\_node\_name', 'wsrep\_node\_address') { my $sql = "SHOW VARIABLES LIKE '$val'"; PTDEBUG && _d($cxn->name, $sql); my (undef, $val) = $cxn->dbh->selectrow_array($sql); $unique_id .= "|$val"; } } else { my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); $unique_id = $cxn->dbh->selectrow_array($sql); } PTDEBUG && _d("Generated unique id for cluster:", $unique_id); return $unique_id; } sub is_cluster_node { my ($self, $cxn) = @_; $cxn ||= $self; my $sql = "SHOW VARIABLES LIKE 'wsrep\_on'"; my $dbh; if ($cxn->isa('DBI::db')) { $dbh = $cxn; PTDEBUG && _d($sql); #don't invoke name() if it's not a Cxn! } else { $dbh = $cxn->dbh(); PTDEBUG && _d($cxn->name, $sql); } my $row = $dbh->selectrow_arrayref($sql); return $row && $row->[1] && ($row->[1] eq 'ON' || $row->[1] eq '1') ? 1 : 0; } sub remove_duplicate_cxns { my ($self, %args) = @_; my @cxns = @{$args{cxns}}; my $seen_ids = $args{seen_ids} || {}; PTDEBUG && _d("Removing duplicates from ", join(" ", map { $_->name } @cxns)); my @trimmed_cxns; for my $cxn ( @cxns ) { my $id = $cxn->get_id(); PTDEBUG && _d('Server ID for ', $cxn->name, ': ', $id); if ( ! $seen_ids->{$id}++ ) { push @trimmed_cxns, $cxn } else { PTDEBUG && _d("Removing ", $cxn->name, ", ID ", $id, ", because we've already seen it"); } } return \@trimmed_cxns; } sub DESTROY { my ($self) = @_; PTDEBUG && _d('Destroying cxn'); if ( $self->{parent} ) { PTDEBUG && _d($self->{dbh}, 'Not disconnecting dbh in parent'); } elsif ( $self->{dbh} && blessed($self->{dbh}) && $self->{dbh}->can("disconnect") ) { PTDEBUG && _d($self->{dbh}, 'Disconnecting dbh on', $self->{hostname}, $self->{dsn_name}); $self->{dbh}->disconnect(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Cxn package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? 
$o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? 
$host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? $1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for 
$host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? 
BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! = 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or 
return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! %$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. 
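# A minimal, illustrative usage sketch for the package below (not part of the
# original module; tools normally reach this code through --version-check, and
# $dbh and $dsn are assumed to come from an established Cxn):
#
#   VersionCheck::version_check(
#      instances => [ { dbh => $dbh, dsn => $dsn } ],
#   );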
# ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . $file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # Runtime package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Runtime.pm # t/lib/Runtime.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Runtime; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(now); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless exists $args{$arg}; } my $run_time = $args{run_time}; if ( defined $run_time ) { die "run_time must be > 0" if $run_time <= 0; } my $now = $args{now}; die "now must be a callback" unless ref $now eq 'CODE'; my $self = { run_time => $run_time, now => $now, start_time => undef, end_time => undef, time_left => undef, stop => 0, }; return bless $self, $class; } sub time_left { my ( $self, %args ) = @_; if ( $self->{stop} ) { PTDEBUG && _d("No time left because stop was called"); return 0; } my $now = $self->{now}->(%args); PTDEBUG && _d("Current time:", $now); if ( !defined $self->{start_time} ) { $self->{start_time} = $now; } return unless defined $now; my $run_time = $self->{run_time}; return unless defined $run_time; if ( !$self->{end_time} ) { $self->{end_time} = $now + $run_time; PTDEBUG && _d("End time:", $self->{end_time}); } $self->{time_left} = $self->{end_time} - $now; PTDEBUG && _d("Time left:", $self->{time_left}); return $self->{time_left}; } sub have_time { my ( $self, %args ) = @_; my $time_left = $self->time_left(%args); return 1 if !defined $time_left; # run forever return $time_left <= 0 ? 0 : 1; # <=0s means run time has elapsed } sub time_elapsed { my ( $self, %args ) = @_; my $start_time = $self->{start_time}; return 0 unless $start_time; my $now = $self->{now}->(%args); PTDEBUG && _d("Current time:", $now); my $time_elapsed = $now - $start_time; PTDEBUG && _d("Time elapsed:", $time_elapsed); if ( $time_elapsed < 0 ) { warn "Current time $now is earlier than start time $start_time"; } return $time_elapsed; } sub reset { my ( $self ) = @_; $self->{start_time} = undef; $self->{end_time} = undef; $self->{time_left} = undef; $self->{stop} = 0; PTDEBUG && _d("Reset run time"); return; } sub stop { my ( $self ) = @_; $self->{stop} = 1; return; } sub start { my ( $self ) = @_; $self->{stop} = 0; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Runtime package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. 
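# As a rough illustration of the "runnable module" idiom described above (a
# minimal sketch only; the package name below is hypothetical and not part of
# this tool): the file keeps all logic in subs and guards its entry point with
# a caller() check, so it can be require'd for testing or executed directly.
#
#   package my_tool;                         # hypothetical package name
#   sub main { local @ARGV = @_; ...; return 0 }
#   if ( !caller ) { exit main(@ARGV); }     # run only when executed directly
#   1;                                       # true value when loaded as a module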
# ########################################################################### package pt_deadlock_logger; use English qw(-no_match_vars); use List::Util qw(max); use Socket qw(inet_aton); use Time::HiRes qw(sleep); use File::Temp qw(tempfile); use File::Spec; use sigtrap 'handler', \&sig_int, 'normal-signals'; use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; # Some common patterns and variables my $d = qr/(\d+)/; # Digit my $t = qr/((?:\d+ \d+)|(?:[A-Fa-f0-9]+))/; # Transaction ID my $i = qr/((?:\d{1,3}\.){3}\d+)/; # IP address my $n = qr/([^`\s]+)/; # MySQL object name my $u = qr/(\S+)/; # Username. This is somewhat wrong, but # usernames with spaces are rare enough. my $s = qr/((?:\d{6}|\d{4}-\d\d-\d\d) .\d:\d\d:\d\d)(?: [xA-Fa-f0-9]+)?/; # InnoDB timestamp # A thread's proc_info can be at least 98 different things I've found in the # source. Fortunately, most of them begin with a gerunded verb. These are # the ones that don't. my %is_proc_info = ( 'After create' => 1, 'Execution of init_command' => 1, 'FULLTEXT initialization' => 1, 'Reopen tables' => 1, 'Repair done' => 1, 'Repair with keycache' => 1, 'System lock' => 1, 'Table lock' => 1, 'Thread initialized' => 1, 'User lock' => 1, 'copy to tmp table' => 1, 'discard_or_import_tablespace' => 1, 'end' => 1, 'got handler lock' => 1, 'got old table' => 1, 'init' => 1, 'key cache' => 1, 'locks' => 1, 'malloc' => 1, 'query end' => 1, 'rename result table' => 1, 'rename' => 1, 'setup' => 1, 'statistics' => 1, 'status' => 1, 'table cache' => 1, 'update' => 1, ); my $oktorun = 1; my $exit_status = 0; sub main { local @ARGV = @_; # set global ARGV for this package $oktorun = 1; $exit_status = 0; # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); my $src = Cxn->new( dsn_string => shift @ARGV, parent => $o->get('daemonize'), DSNParser => $dp, OptionParser => $o, ); my $dst; if ( my $dst_dsn = $o->get('dest') ) { # set time_zone = SYSTEM , addresses https://bugs.launchpad.net/percona-toolkit/+bug/1295667 my $set_tz = sub { my ($dbh) = @_; my $sql = "SET time_zone=SYSTEM /* pt-deadlock-logger */"; eval { PTDEBUG && _d($dbh, $sql); $dbh->do($sql); }; if ( $EVAL_ERROR ) { die "Failed to $sql: $EVAL_ERROR\n"; } }; $dst = Cxn->new( dsn => $dst_dsn, prev_dsn => ($src ? $src->dsn : undef), parent => $o->get('daemonize'), DSNParser => $dp, OptionParser => $o, set => $set_tz, ); } if ( !$o->get('help') ) { if ( !$src ) { $o->save_error('No DSN was specified.'); } if ( $dst && !$dst->dsn->{D} ) { $o->save_error("--dest requires a 'D' (database) part."); } if ( $dst && !$dst->dsn->{t} ) { $o->save_error("--dest requires a 't' (table) part."); } } $o->usage_or_errors(); # ######################################################################## # Connect to MySQL and set up the --dest, if any. # ######################################################################## my $q = new Quoter(); $src->connect(); my @cols = @{ $o->get('columns') }; my $ins_sth; my $ins_sql; if ( $dst ) { $dst->connect(AutoCommit => 0); my $db_tbl = $q->join_quote($dst->dsn->{D}, $dst->dsn->{t}); my $cols = join(',', map { $q->quote($_) } @cols); my $parms = join(',', map { '?' } @cols); $ins_sql = "INSERT IGNORE INTO $db_tbl ($cols) VALUES ($parms) " . 
"/* pt-deadlock-logger */"; PTDEBUG && _d($ins_sql); $ins_sth = $dst->dbh->prepare($ins_sql); if ( $o->get('create-dest-table') ) { my $sql = $o->read_para_after(__FILE__, qr/MAGIC_dest_table/); $sql =~ s/deadlocks/IF NOT EXISTS $db_tbl/; PTDEBUG && _d($sql); $dst->dbh->do($sql); } } # ######################################################################## # Daemonize only after (potentially) asking for passwords for --ask-pass. # ######################################################################## my $daemon; if ( $o->get('daemonize') ) { $daemon = new Daemon(o=>$o); $daemon->daemonize(); PTDEBUG && _d('I am a daemon now'); } elsif ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # If we daemonized, the parent has already exited and we're the child. # We shared a copy of every Cxn with the parent, and the parent's copies # were destroyed but the dbhs were not disconnected because the parent # attrib was true. Now, as the child, set it false so the dbhs will be # disconnected when our Cxn copies are destroyed. If we didn't daemonize, # then we're not really a parent (since we have no children), so set it # false to auto-disconnect the dbhs when our Cxns are destroyed. $src->{parent} = 0; $dst->{parent} = 0 if $dst; # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ { dbh => $src->dbh, dsn => $src->dsn }, ($dst ? { dbh => $dst->dbh, dsn => $dst->dsn } : ()) ], ); } # ######################################################################## # Set upt the --clear-deadlocks table. # ######################################################################## my $clear_deadlocks_table_def; my $clear_deadlocks_table = $o->get('clear-deadlocks'); if ( $clear_deadlocks_table ) { $clear_deadlocks_table_def = $o->read_para_after(__FILE__, qr/MAGIC_clear_deadlocks/); if ( VersionParser->new($src->dbh) < '4.1.2') { $clear_deadlocks_table_def =~ s/ENGINE=/TYPE=/; } $clear_deadlocks_table_def =~ s/percona_schema.clear_deadlocks/$clear_deadlocks_table/; PTDEBUG && _d('--clear-deadlocks table:', $clear_deadlocks_table_def); } # ######################################################################## # Start looking for and logging deadlocks. # ######################################################################## my $sep = $o->get('tab') ? "\t" : ' '; my $last_fingerprint = ''; my $parse_deadlocks_options = { 'server' => $src->dsn->{h} || $src->{hostname}, 'numeric-ip' => $o->got('numeric-ip'), }; my $run_time = Runtime->new( run_time => $o->get('run-time'), now => sub { return time }, ); my $interval = $o->get('interval'); my $iters = $o->get('iterations'); PTDEBUG && _d('iterations:', $iters, 'interval:', $interval); ITERATION: while ( $oktorun && $run_time->have_time() && (!defined $iters || $iters--) ) { my %txns; my $fingerprint; eval { my $sql = "SHOW /*!40100 ENGINE*/ INNODB STATUS " . 
"/* pt-deadlock-logger */"; my $text = $src->dbh->selectrow_hashref($sql)->{status}; %txns = %{parse_deadlocks($text, $parse_deadlocks_options)}; $fingerprint = fingerprint(\%txns); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d('Error getting InnoDB status:', $e); if ( $src->lost_connection($e) ) { eval { $src->connect() }; if ( $EVAL_ERROR ) { warn "Lost connection to " . $src->name . ". Will try " . "to reconnect in the next iteration.\n"; } else { PTDEBUG && _d('Reconnected to MySQL'); redo ITERATION; } } else { warn "Error getting SHOW ENGINE INNODB STATUS: $EVAL_ERROR"; $exit_status |= 1; } } else { if ( $fingerprint ne $last_fingerprint ) { PTDEBUG && _d('New deadlock'); if ( $ins_sth ) { eval { PTDEBUG && _d('Saving deadlock to --dest'); foreach my $txn ( sort { $a->{thread} <=> $b->{thread} } values %txns ) { $ins_sth->execute(@{$txn}{@cols}); } $dst->dbh->commit(); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d('Error saving to --dest:', $e); if ( $dst->lost_connection($e) ) { eval { $ins_sth->finish() if $ins_sth; $dst->dbh->disconnect() if $dst->dbh; $dst->connect(AutoCommit => 0); $ins_sth = $dst->dbh->prepare($ins_sql); }; if ( $EVAL_ERROR ) { warn "Lost connection to " . $dst->name . ". Will try " . "to reconnect in the next iteration.\n"; } else { PTDEBUG && _d('Reconnected to MySQL (--dest)'); redo ITERATION; } } else { warn "Error saving to --dest: $EVAL_ERROR"; $exit_status |= 1; } } } if ( !$o->get('quiet') ) { print join($sep, @cols), "\n"; foreach my $txn ( sort { $a->{thread} <=> $b->{thread} } values %txns ) { $txn->{query} =~ s/\s+/ /g; print join($sep, map { $txn->{$_} } @cols), "\n"; } } } else { PTDEBUG && _d('Same deadlock, not printing'); } $last_fingerprint = $fingerprint; if ( $clear_deadlocks_table ) { clear_deadlocks( dsn => $src->dsn, table => $clear_deadlocks_table, table_def => $clear_deadlocks_table_def, DSNParser => $dp, ); } } # Sleep if there's an --iteration left. if ( !defined $iters || $iters ) { PTDEBUG && _d('Sleeping', $interval, 'seconds'); sleep $interval; } } PTDEBUG && _d('Done running, exiting', $exit_status); return $exit_status; } # ############################################################################ # Subroutines # ############################################################################ sub parse_deadlocks { my ( $text, $args ) = @_; $args ||= {}; # Pull out the deadlock section my $dl_text; my @matches = $text =~ m#\n(---+)\n([A-Z /]+)\n\1\n(.*?)(?=\n(---+)\n[A-Z /]+\n\4\n|$)#gs; while ( my ( $start, $name, $text, $end ) = splice(@matches, 0, 4) ) { next unless $name eq 'LATEST DETECTED DEADLOCK'; $dl_text = $text; last; } return {} unless $dl_text; my @sections = $dl_text =~ m{ ^\*{3}\s([^\n]*) # *** (1) WAITING FOR THIS... (.*?) # Followed by anything, non-greedy (?=(?:^\*{3})|\z) # Followed by another three-stars or EOF }gmsx; # Loop through each section. There are no assumptions about how many # there are, who holds and wants what locks, and who gets rolled back. my %txns; while ( my ($header, $body) = splice(@sections, 0, 2) ) { my ( $txn_id, $what ) = $header =~ m/^\($d\) (.*):$/m; next unless $txn_id; $txns{$txn_id} ||= { id => $txn_id }; my $hash = $txns{$txn_id}; if ( $what eq 'TRANSACTION' ) { @{$hash}{qw(txn_time)} = $body =~ m/ACTIVE $d sec/; # Parsing the line that begins 'MySQL thread id' is complicated. # The only thing always in the line is the thread and query id. # See function innobase_mysql_print_thd in InnoDB source file # sql/ha_innodb.cc. 
my ( $thread_line ) = $body =~ m/^(MySQL thread id .*)$/m; my ($mysql_thread_id, $query_id, $hostname, $ip, $user, $query_status); if ( $thread_line ) { # These parts can always be gotten. ( $mysql_thread_id, $query_id ) = $thread_line =~ m/^MySQL thread id $d,.* query id $d/m; # If it's a master/slave thread, "Has (read|sent) all" may be the # thread's proc_info. In these cases, there won't be any # host/ip/user info. ( $query_status ) = $thread_line =~ m/(Has (?:read|sent) all .*$)/m; if ( defined($query_status) ) { $user = 'system user'; } # The query id might be the last thing in the line. elsif ( $thread_line =~ m/query id \d+ / ) { # The IP address is the only non-word thing left, so it's # the most useful marker for where I have to start guessing. ( $hostname, $ip ) = $thread_line =~ m/query id \d+(?: ([A-Za-z]\S+))? $i/m; if ( defined $ip ) { ($user, $query_status) = $thread_line =~ m/$ip $u(?: (.*))?$/; } else { # OK, there wasn't an IP address. # There might not be ANYTHING except the query status. ( $query_status ) = $thread_line =~ m/query id \d+ (.*)$/; if ( $query_status !~ m/^\w+ing/ && !exists($is_proc_info{$query_status}) ) { # The remaining tokens are, in order: hostname, user, # query_status. # It's basically impossible to know which is which. ( $hostname, $user, $query_status ) = $thread_line =~ m/query id \d+(?: ([A-Za-z]\S+))?(?: $u(?: (.*))?)?$/m; } else { $user = 'system user'; } } } } my ( $query_text ) = $body =~ m/\nMySQL thread id .*\n((?s).*)/; $query_text =~ s/\s+$//; $query_text =~ s/\s+/ /g; @{$hash}{qw(thread hostname ip user query)} = ($mysql_thread_id, $hostname, $ip, $user, $query_text); foreach my $key ( keys %$hash ) { if ( !defined $hash->{$key} ) { $hash->{$key} = ''; } } } else { # Prefer information about locks waited-for over locks-held. if ( $what eq 'WAITING FOR THIS LOCK TO BE GRANTED' || !$hash->{lock_type} ) { $hash->{wait_hold} = $what eq 'WAITING FOR THIS LOCK TO BE GRANTED' ? 'w' : 'h'; @{$hash}{ qw(lock_type idx db tbl txn_id lock_mode) } = $body =~ m{^(RECORD|TABLE) LOCKS? (?:space id \d+ page no \d+ n bits \d+ index `?$n`? of )?table `$n(?:/|`\.`)$n`.*?trx id $t lock.mode (\S+)}m; if ( $hash->{txn_id} ) { my ( $high, $low ) = $hash->{txn_id} =~ m/^(\d+) (\d+)$/; $hash->{txn_id} = $high ? ( $low + ($high << 32) ) : $low; } } } # Ensure all values are defined map { $hash->{$_} = 0 unless defined $hash->{$_} } qw(thread txn_id txn_time); map { $hash->{$_} = '' unless defined $hash->{$_} } qw(user hostname db tbl idx lock_type lock_mode query); } # Extract some miscellaneous data from the deadlock. my ( $ts ) = $dl_text =~ m/^$s$/m; if ( !$ts ) { # https://bugs.launchpad.net/percona-toolkit/+bug/1195034 # 130624 17:39:24TOO DEEP OR LONG SEARCH IN THE LOCK TABLE ... ($ts) = $dl_text =~ m/^${s}TOO DEEP/m; } my ( $year, $mon, $day, $hour, $min, $sec ) = $ts =~ m/^((?:\d\d)?\d\d)-?(\d\d)-?(\d\d) +(\d+):(\d+):(\d+)$/; if ( length($year) == 2 ) { $year += 2000; } $ts = sprintf('%02d-%02d-%02dT%02d:%02d:%02d', $year, $mon, $day, $hour, $min, $sec); my ( $victim ) = $dl_text =~ m/^\*\*\* WE ROLL BACK TRANSACTION \((\d+)\)$/m; $victim ||= 0; # Stick the misc data into the transactions. foreach my $txn ( values %txns ) { $txn->{victim} = $txn->{id} == $victim ? 
1 : 0; $txn->{ts} = $ts; $txn->{server} = $args->{server} || ''; $txn->{ip} = inet_aton($txn->{ip}) if $args->{'numeric-ip'}; } return \%txns; } sub clear_deadlocks { my (%args) = @_; my @required_args = qw(dsn table table_def DSNParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $dsn = $args{dsn}; my $table = $args{table}; my $table_def = $args{table_def}; my $dp = $args{DSNParser}; PTDEBUG && _d('Clearing deadlocks with table', $table, $table_def); my $parent_dbh = $dp->get_dbh($dp->get_cxn_params($dsn), { AutoCommit=>0 }); $parent_dbh->{InactiveDestroy} = 1; # because of forking # Create the deadlocks table. PTDEBUG && _d($table_def); $parent_dbh->do($table_def); # Get a lock on it. my $sql = "INSERT INTO $table (a) VALUES (1) " . "/* pt-deadlock-logger clear deadlocks parent */"; PTDEBUG && _d($sql); $parent_dbh->do($sql); my ($sync_fh, $sync_file) = tempfile( 'pt-deadlock-logger-clear-deadlocks.XXXXXXX', DIR => File::Spec->tmpdir(), ); PTDEBUG && _d('Sync file:', $sync_file); close $sync_fh; unlink $sync_file; # Fork a child to try to take a lock on the table. my $pid = fork(); if ( defined($pid) && $pid == 0 ) { # I am the child PTDEBUG && _d('Clear deadlocks child', $PID); my $child_dbh = $dp->get_dbh($dp->get_cxn_params($dsn), {AutoCommit=>0}); my $sql = "SELECT * FROM $table FOR UPDATE " . "/* pt-deadlock-logger clear deadlocks child */"; PTDEBUG && _d($sql); open my $fh, '>', $sync_file or die "Error creating $sync_file: $OS_ERROR"; close $fh; PTDEBUG && _d('Clear deadlocks child ready (child)'); eval { $child_dbh->do($sql); }; # Should block against parent. PTDEBUG && _d($EVAL_ERROR); # Parent inserted value 0. $child_dbh->commit(); $child_dbh->disconnect(); exit; } elsif ( !defined($pid) ) { die "Failed to fork for --clear-deadlocks: " . ($OS_ERROR || ''); } # Wait up to 10s for the child to connect and become ready. for ( 1..40 ) { last if -f $sync_file; PTDEBUG && _d('Waiting for the clear deadlocks child'); sleep 0.25; } PTDEBUG && _d('Clear deadlocks child ready (parent)'); sleep 0.25; # wait for child to exec its SELECT statement # Make the child deadlock. $sql = "INSERT INTO $table (a) VALUES (0) " . "/* pt-deadlock-logger clear deadlocks parent */"; PTDEBUG && _d($sql); eval { $parent_dbh->do($sql); }; PTDEBUG && _d($EVAL_ERROR); # Reap the child. waitpid($pid, 0); # Drop the table. $sql = "DROP TABLE IF EXISTS $table"; PTDEBUG && _d($sql); $parent_dbh->do($sql); $parent_dbh->disconnect(); unlink $sync_file; return; } sub fingerprint { my ( $txns ) = @_; my $fingerprint = ''; foreach my $txn ( sort { $a->{thread} <=> $b->{thread} } values %$txns ) { $fingerprint = $fingerprint . join('', map { $txn->{$_} } qw(server ts thread) ); } PTDEBUG && _d('Deadlock fingerprint:', $fingerprint); return $fingerprint; } sub sig_int { my ( $signal ) = @_; $oktorun = 0; print STDERR "# Caught SIG$signal. Use 'kill -ABRT $PID' if " . "the tool does not exit normally in a few seconds.\n"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. 
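# Because this file is also a module (note the caller() check above), its
# parser can be exercised on its own. A minimal sketch, assuming the file has
# already been loaded with require and that $status_text holds the output of
# SHOW ENGINE INNODB STATUS:
#
#   my $txns = pt_deadlock_logger::parse_deadlocks(
#      $status_text,
#      { server => 'host1', 'numeric-ip' => 0 },   # same options main() builds
#   );
#   for my $txn ( values %$txns ) {
#      printf "thread %s query %s\n", $txn->{thread}, $txn->{query};
#   }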
# ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-deadlock-logger - Log MySQL deadlocks. =head1 SYNOPSIS Usage: pt-deadlock-logger [OPTIONS] DSN pt-deadlock-logger logs information about MySQL deadlocks on the given DSN. Information is printed to C<STDOUT>, and it can also be saved to a table by specifying L<"--dest">. The tool runs forever unless L<"--run-time"> or L<"--iterations"> is specified. Print deadlocks on host1: pt-deadlock-logger h=host1 Print deadlocks on host1 once then exit: pt-deadlock-logger h=host1 --iterations 1 Save deadlocks on host1 to percona_schema.deadlocks on host2: pt-deadlock-logger h=host1 --dest h=host2,D=percona_schema,t=deadlocks =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-deadlock-logger prints information about MySQL deadlocks by polling and parsing C<SHOW ENGINE INNODB STATUS>. When a new deadlock occurs, it's printed to C<STDOUT> and, if specified, saved to L<"--dest">. Only new deadlocks are printed. A fingerprint for each deadlock is created using the deadlock's server, ts, and thread values (even if these columns are not specified by L<"--columns">). A deadlock is printed if its fingerprint is different than the last deadlock's fingerprint. The L<"--dest"> statement uses C<INSERT IGNORE> to eliminate duplicate deadlocks, so every deadlock is saved for every L<"--iterations">. =head1 OUTPUT New deadlocks are printed to C<STDOUT>, unless L<"--quiet"> is specified. Errors and warnings are printed to C<STDERR>. See also L<"--columns"> and L<"--tab">. =head1 INNODB CAVEATS AND DETAILS InnoDB's output is hard to parse and sometimes there's no way to do it right. Sometimes not all information (for example, username or IP address) is included in the deadlock information. In this case there's nothing for the tool to put in those columns. It may also be the case that the deadlock output is so long (because there were a lot of locks) that the whole thing is truncated. Though there are usually two transactions involved in a deadlock, there are more locks than that; at a minimum, one more lock than transactions is necessary to create a cycle in the waits-for graph. pt-deadlock-logger prints the transactions (always two in the InnoDB output, even when there are more transactions in the waits-for graph than that) and fills in locks. It prefers waited-for over held when choosing lock information to output, but you can figure out the rest with a moment's thought. If you see one wait-for and one held lock, you're looking at the same lock, so of course you'd prefer to see both wait-for locks and get more information. If the two waited-for locks are not on the same table, more than two transactions were involved in the deadlock. Finally, keep in mind that, because usernames with spaces are not quoted by InnoDB, the tool will generally misreport the second word of these usernames as the hostname. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL.
=item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --clear-deadlocks type: string Use this table to create a small deadlock. This usually has the effect of clearing out a huge deadlock, which otherwise consumes the entire output of C. The table must not exist. pt-deadlock-logger will create it with the following structure: =for comment ignore-pt-internal-value MAGIC_clear_deadlocks CREATE TABLE percona_schema.clear_deadlocks ( a INT PRIMARY KEY ) ENGINE=InnoDB After creating the table and causing a small deadlock, the tool will drop the table again. =item --columns type: Array; default: server, ts, thread, txn_id, txn_time, user, hostname, ip, db, tbl, idx, lock_type, lock_mode, wait_hold, victim, query The columns are: =over =item server The (source) server on which the deadlock occurred. This might be useful if you're tracking deadlocks on many servers. =item ts The date and time of the last detected deadlock. =item thread The MySQL thread number, which is the same as the connection ID in SHOW FULL PROCESSLIST. =item txn_id The InnoDB transaction ID, which InnoDB expresses as two unsigned integers. I have multiplied them out to be one number. =item txn_time How long the transaction was active when the deadlock happened. =item user The connection's database username. =item hostname The connection's host. =item ip The connection's IP address. If you specify L<"--numeric-ip">, this is converted to an unsigned integer. =item db The database in which the deadlock occurred. =item tbl The table on which the deadlock occurred. =item idx The index on which the deadlock occurred. =item lock_type The lock type the transaction held on the lock that caused the deadlock. =item lock_mode The lock mode of the lock that caused the deadlock. =item wait_hold Whether the transaction was waiting for the lock or holding the lock. Usually you will see the two waited-for locks. =item victim Whether the transaction was selected as the deadlock victim and rolled back. =item query The query that caused the deadlock. =back =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --create-dest-table Create the table specified by L<"--dest">. Normally the L<"--dest"> table is expected to exist already. This option causes pt-deadlock-logger to create the table automatically using the suggested table structure. =item --daemonize Fork to the background and detach from the shell. POSIX operating systems only. =item --database short form: -D; type: string Connect to this database. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --dest type: DSN DSN for where to store deadlocks; specify at least a database (D) and table (t). Missing values are filled in with the same values from the source host, so you can usually omit most parts of this argument if you're storing deadlocks on the same server on which they happen. 
The following table structure is suggested if you want to store all the information pt-deadlock-logger can extract about deadlocks: =for comment ignore-pt-internal-value MAGIC_dest_table CREATE TABLE deadlocks ( server char(20) NOT NULL, ts timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, thread int unsigned NOT NULL, txn_id bigint unsigned NOT NULL, txn_time smallint unsigned NOT NULL, user char(16) NOT NULL, hostname char(20) NOT NULL, ip char(15) NOT NULL, -- alternatively, ip int unsigned NOT NULL db char(64) NOT NULL, tbl char(64) NOT NULL, idx char(64) NOT NULL, lock_type char(16) NOT NULL, lock_mode char(1) NOT NULL, wait_hold char(1) NOT NULL, victim tinyint unsigned NOT NULL, query text NOT NULL, PRIMARY KEY (server,ts,thread) ) ENGINE=InnoDB If you use L<"--columns">, you can omit whichever columns you don't want to store. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --interval type: time; default: 30 How often to check for deadlocks. If no L<"--run-time"> is specified, pt-deadlock-logger runs forever, checking for deadlocks at every interval. See also L<"--run-time">. =item --iterations type: int How many times to check for deadlocks. By default, this option is undefined which means an infinite number of iterations. The tool always exits for L<"--run-time">, regardless of the value specified for this option. For example, the tool will exit after 1 minute with C<--run-time 1m --iterations 4 --interval 30> because 4 iterations at 30 second intervals would take 2 minutes, longer than the 1 minute run-time. =item --log type: string Print all output to this file when daemonized. =item --numeric-ip Express IP addresses as integers. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --quiet Do not print deadlocks; only print errors and warnings to C<STDERR>. =item --run-time type: time How long to run before exiting. By default pt-deadlock-logger runs forever, checking for deadlocks every L<"--interval"> seconds. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C<variable=value> pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the default value of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --tab Use tabs to separate columns instead of spaces. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features.
First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * t Table in which to store deadlock information. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-deadlock-logger ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. 
Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2007-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-deadlock-logger 3.1.0 =cut percona-toolkit-3.1/bin/pt-diskstats000775 001750 001750 00000505062 13535723560 021006 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit OptionParser Transformers ReadKeyMini Diskstats DiskstatsGroupByAll DiskstatsGroupByDisk DiskstatsGroupBySample DiskstatsMenu HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 
0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Transformers package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Transformers.pm # t/lib/Transformers.t # See https://launchpad.net/percona-toolkit for more information. 
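# Quick usage sketch for the helpers below (illustrative values only, assuming
# the default arguments; not part of the upstream module's interface docs):
#
#   Transformers::micro_t(0.000025);          # '25us'
#   Transformers::secs_to_time(3661);         # '01:01:01'
#   Transformers::shorten(1048576);           # '1.00M' (divisor defaults to 1024)
#   Transformers::any_unix_timestamp('10m');  # unix time 600 seconds ago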
# ########################################################################### { package Transformers; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timegm timelocal); use Digest::MD5 qw(md5_hex); use B qw(); BEGIN { require Exporter; our @ISA = qw(Exporter); our %EXPORT_TAGS = (); our @EXPORT = (); our @EXPORT_OK = qw( micro_t percentage_of secs_to_time time_to_secs shorten ts parse_timestamp unix_timestamp any_unix_timestamp make_checksum crc32 encode_json ); } our $mysql_ts = qr/(\d\d)(\d\d)(\d\d) +(\d+):(\d+):(\d+)(\.\d+)?/; our $proper_ts = qr/(\d\d\d\d)-(\d\d)-(\d\d)[T ](\d\d):(\d\d):(\d\d)(\.\d+)?/; our $n_ts = qr/(\d{1,5})([shmd]?)/; # Limit \d{1,5} because \d{6} looks sub micro_t { my ( $t, %args ) = @_; my $p_ms = defined $args{p_ms} ? $args{p_ms} : 0; # precision for ms vals my $p_s = defined $args{p_s} ? $args{p_s} : 0; # precision for s vals my $f; $t = 0 if $t < 0; $t = sprintf('%.17f', $t) if $t =~ /e/; $t =~ s/\.(\d{1,6})\d*/\.$1/; if ($t > 0 && $t <= 0.000999) { $f = ($t * 1000000) . 'us'; } elsif ($t >= 0.001000 && $t <= 0.999999) { $f = sprintf("%.${p_ms}f", $t * 1000); $f = ($f * 1) . 'ms'; # * 1 to remove insignificant zeros } elsif ($t >= 1) { $f = sprintf("%.${p_s}f", $t); $f = ($f * 1) . 's'; # * 1 to remove insignificant zeros } else { $f = 0; # $t should = 0 at this point } return $f; } sub percentage_of { my ( $is, $of, %args ) = @_; my $p = $args{p} || 0; # float precision my $fmt = $p ? "%.${p}f" : "%d"; return sprintf $fmt, ($is * 100) / ($of ||= 1); } sub secs_to_time { my ( $secs, $fmt ) = @_; $secs ||= 0; return '00:00' unless $secs; $fmt ||= $secs >= 86_400 ? 'd' : $secs >= 3_600 ? 'h' : 'm'; return $fmt eq 'd' ? sprintf( "%d+%02d:%02d:%02d", int($secs / 86_400), int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : $fmt eq 'h' ? sprintf( "%02d:%02d:%02d", int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : sprintf( "%02d:%02d", int(($secs % 3_600) / 60), $secs % 60); } sub time_to_secs { my ( $val, $default_suffix ) = @_; die "I need a val argument" unless defined $val; my $t = 0; my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; $suffix = $suffix || $default_suffix || 's'; if ( $suffix =~ m/[smhd]/ ) { $t = $suffix eq 's' ? $num * 1 # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $t *= -1 if $prefix && $prefix eq '-'; } else { die "Invalid suffix for $val: $suffix"; } return $t; } sub shorten { my ( $num, %args ) = @_; my $p = defined $args{p} ? $args{p} : 2; # float precision my $d = defined $args{d} ? $args{d} : 1_024; # divisor my $n = 0; my @units = ('', qw(k M G T P E Z Y)); while ( $num >= $d && $n < @units - 1 ) { $num /= $d; ++$n; } return sprintf( $num =~ m/\./ || $n ? '%1$.'.$p.'f%2$s' : '%1$d', $num, $units[$n]); } sub ts { my ( $time, $gmt ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = $gmt ? gmtime($time) : localtime($time); $mon += 1; $year += 1900; my $val = sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); if ( my ($us) = $time =~ m/(\.\d+)$/ ) { $us = sprintf("%.6f", $us); $us =~ s/^0\././; $val .= $us; } return $val; } sub parse_timestamp { my ( $val ) = @_; if ( my($y, $m, $d, $h, $i, $s, $f) = $val =~ m/^$mysql_ts$/ ) { return sprintf "%d-%02d-%02d %02d:%02d:" . (defined $f ? '%09.6f' : '%02d'), $y + 2000, $m, $d, $h, $i, (defined $f ? 
$s + $f : $s); } elsif ( $val =~ m/^$proper_ts$/ ) { return $val; } return $val; } sub unix_timestamp { my ( $val, $gmt ) = @_; if ( my($y, $m, $d, $h, $i, $s, $us) = $val =~ m/^$proper_ts$/ ) { $val = $gmt ? timegm($s, $i, $h, $d, $m - 1, $y) : timelocal($s, $i, $h, $d, $m - 1, $y); if ( defined $us ) { $us = sprintf('%.6f', $us); $us =~ s/^0\././; $val .= $us; } } return $val; } sub any_unix_timestamp { my ( $val, $callback ) = @_; if ( my ($n, $suffix) = $val =~ m/^$n_ts$/ ) { $n = $suffix eq 's' ? $n # Seconds : $suffix eq 'm' ? $n * 60 # Minutes : $suffix eq 'h' ? $n * 3600 # Hours : $suffix eq 'd' ? $n * 86400 # Days : $n; # default: Seconds PTDEBUG && _d('ts is now - N[shmd]:', $n); return time - $n; } elsif ( $val =~ m/^\d{9,}/ ) { PTDEBUG && _d('ts is already a unix timestamp'); return $val; } elsif ( my ($ymd, $hms) = $val =~ m/^(\d{6})(?:\s+(\d+:\d+:\d+))?/ ) { PTDEBUG && _d('ts is MySQL slow log timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp(parse_timestamp($val)); } elsif ( ($ymd, $hms) = $val =~ m/^(\d{4}-\d\d-\d\d)(?:[T ](\d+:\d+:\d+))?/) { PTDEBUG && _d('ts is properly formatted timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp($val); } else { PTDEBUG && _d('ts is MySQL expression'); return $callback->($val) if $callback && ref $callback eq 'CODE'; } PTDEBUG && _d('Unknown ts type:', $val); return; } sub make_checksum { my ( $val ) = @_; my $checksum = uc substr(md5_hex($val), -16); PTDEBUG && _d($checksum, 'checksum for', $val); return $checksum; } sub crc32 { my ( $string ) = @_; return unless $string; my $poly = 0xEDB88320; my $crc = 0xFFFFFFFF; foreach my $char ( split(//, $string) ) { my $comp = ($crc ^ ord($char)) & 0xFF; for ( 1 .. 8 ) { $comp = $comp & 1 ? $poly ^ ($comp >> 1) : $comp >> 1; } $crc = (($crc >> 8) & 0x00FFFFFF) ^ $comp; } return $crc ^ 0xFFFFFFFF; } my $got_json = eval { require JSON }; sub encode_json { return JSON::encode_json(@_) if $got_json; my ( $data ) = @_; return (object_to_json($data) || ''); } sub object_to_json { my ($obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return hash_to_json($obj); } elsif($type eq 'ARRAY'){ return array_to_json($obj); } else { return value_to_json($obj); } } sub hash_to_json { my ($obj) = @_; my @res; for my $k ( sort { $a cmp $b } keys %$obj ) { push @res, string_to_json( $k ) . ":" . ( object_to_json( $obj->{$k} ) || value_to_json( $obj->{$k} ) ); } return '{' . ( @res ? join( ",", @res ) : '' ) . '}'; } sub array_to_json { my ($obj) = @_; my @res; for my $v (@$obj) { push @res, object_to_json($v) || value_to_json($v); } return '[' . ( @res ? join( ",", @res ) : '' ) . ']'; } sub value_to_json { my ($value) = @_; return 'null' if(!defined $value); my $b_obj = B::svref_2object(\$value); # for round trip problem my $flags = $b_obj->FLAGS; return $value # as is if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? my $type = ref($value); if( !$type ) { return string_to_json($value); } else { return 'null'; } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g; $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; utf8::upgrade($arg); utf8::encode($arg); return '"' . $arg . '"'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Transformers package # ########################################################################### # ########################################################################### # ReadKeyMini package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/ReadKeyMini.pm # t/lib/ReadKeyMini.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { package ReadKeyMini; BEGIN { $INC{"ReadKeyMini.pm"} ||= 1 } use warnings; use strict; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw( :termios_h ); use Fcntl qw( F_SETFL F_GETFL ); use base qw( Exporter ); BEGIN { our @EXPORT_OK = qw( GetTerminalSize ReadMode ); *ReadMode = *Term::ReadKey::ReadMode = \&_ReadMode; *GetTerminalSize = *Term::ReadKey::GetTerminalSize = \&_GetTerminalSize; } my %modes = ( original => 0, restore => 0, normal => 1, noecho => 2, cbreak => 3, raw => 4, 'ultra-raw' => 5, ); { my $fd_stdin = fileno(STDIN); my $flags; unless ( $PerconaTest::DONT_RESTORE_STDIN ) { $flags = fcntl(STDIN, F_GETFL, 0) or warn "Error getting STDIN flags with fcntl: $OS_ERROR"; } my $term = POSIX::Termios->new(); $term->getattr($fd_stdin); my $oterm = $term->getlflag(); my $echo = ECHO | ECHOK | ICANON; my $noecho = $oterm & ~$echo; sub _ReadMode { my $mode = $modes{ $_[0] }; if ( $mode == $modes{normal} ) { cooked(); } elsif ( $mode == $modes{cbreak} || $mode == $modes{noecho} ) { cbreak( $mode == $modes{noecho} ? $noecho : $oterm ); } else { die("ReadMore('$_[0]') not supported"); } } sub cbreak { my ($lflag) = $_[0] || $noecho; $term->setlflag($lflag); $term->setcc( VTIME, 1 ); $term->setattr( $fd_stdin, TCSANOW ); } sub cooked { $term->setlflag($oterm); $term->setcc( VTIME, 0 ); $term->setattr( $fd_stdin, TCSANOW ); if ( !$PerconaTest::DONT_RESTORE_STDIN ) { fcntl(STDIN, F_SETFL, int($flags)) or warn "Error restoring STDIN flags with fcntl: $OS_ERROR"; } } END { cooked() } } sub readkey { my $key = ''; cbreak(); sysread(STDIN, $key, 1); my $timeout = 0.1; if ( $key eq "\033" ) { my $x = ''; STDIN->blocking(0); sysread(STDIN, $x, 2); STDIN->blocking(1); $key .= $x; redo if $key =~ /\[[0-2](?:[0-9];)?$/ } cooked(); return $key; } BEGIN { eval { no warnings; local $^W; require 'sys/ioctl.ph' }; if ( !defined &TIOCGWINSZ ) { *TIOCGWINSZ = sub () { $^O eq 'linux' ? 0x005413 : $^O eq 'solaris' ? 
0x005468 : 0x40087468; }; } } sub _GetTerminalSize { if ( @_ ) { die "My::Term::ReadKey doesn't implement GetTerminalSize with arguments"; } my $cols = $ENV{COLUMNS} || 80; my $rows = $ENV{LINES} || 24; if ( open( TTY, "+<", "/dev/tty" ) ) { # Got a tty my $winsize = ''; if ( ioctl( TTY, &TIOCGWINSZ, $winsize ) ) { ( $rows, $cols, my ( $xpixel, $ypixel ) ) = unpack( 'S4', $winsize ); return ( $cols, $rows, $xpixel, $ypixel ); } } if ( $rows = `tput lines 2>/dev/null` ) { chomp($rows); chomp($cols = `tput cols`); } elsif ( my $stty = `stty -a 2>/dev/null` ) { ($rows, $cols) = $stty =~ /([0-9]+) rows; ([0-9]+) columns;/; } else { ($cols, $rows) = @ENV{qw( COLUMNS LINES )}; $cols ||= 80; $rows ||= 24; } return ( $cols, $rows ); } } 1; } # ########################################################################### # End ReadKeyMini package # ########################################################################### # ########################################################################### # Diskstats package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Diskstats.pm # t/lib/Diskstats.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Diskstats; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use IO::Handle; use List::Util qw( max first ); use ReadKeyMini qw( GetTerminalSize ); my $max_lines; BEGIN { (undef, $max_lines) = GetTerminalSize(); $max_lines ||= 24; $Diskstats::printed_lines = $max_lines; } my $diskstat_colno_for; BEGIN { $diskstat_colno_for = { MAJOR => 0, MINOR => 1, DEVICE => 2, READS => 3, READS_MERGED => 4, READ_SECTORS => 5, MS_SPENT_READING => 6, WRITES => 7, WRITES_MERGED => 8, WRITTEN_SECTORS => 9, MS_SPENT_WRITING => 10, IOS_IN_PROGRESS => 11, MS_SPENT_DOING_IO => 12, MS_WEIGHTED => 13, READ_KBS => 14, WRITTEN_KBS => 15, IOS_REQUESTED => 16, IOS_IN_BYTES => 17, SUM_IOS_IN_PROGRESS => 18, }; require constant; constant->import($diskstat_colno_for); } sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($o) = @args{@required_args}; my $columns = $o->get('columns-regex'); my $devices = $o->get('devices-regex'); my $headers = $o->get('headers'); my $self = { filename => '/proc/diskstats', block_size => 512, show_inactive => $o->get('show-inactive'), sample_time => $o->get('sample-time') || 0, automatic_headers => $headers->{'scroll'}, space_samples => $headers->{'group'}, show_timestamps => $o->get('show-timestamps'), columns_regex => qr/$columns/, devices_regex => $devices ? 
qr/$devices/ : undef, interactive => 0, force_header => 1, %args, delta_cols => [ # Calc deltas for these cols, must be uppercase qw( READS READS_MERGED READ_SECTORS MS_SPENT_READING WRITES WRITES_MERGED WRITTEN_SECTORS MS_SPENT_WRITING READ_KBS WRITTEN_KBS MS_SPENT_DOING_IO MS_WEIGHTED READ_KBS WRITTEN_KBS IOS_REQUESTED IOS_IN_BYTES IOS_IN_PROGRESS ) ], _stats_for => {}, _ordered_devs => [], _active_devices => {}, _ts => {}, _first_stats_for => {}, _nochange_skips => [], _length_ts_column => 5, _save_curr_as_prev => 1, }; if ( $self->{show_timestamps} ) { $self->{_length_ts_column} = 8; } $Diskstats::last_was_header = 0; return bless $self, $class; } sub first_ts_line { my ($self) = @_; return $self->{_ts}->{first}->{line}; } sub set_first_ts_line { my ($self, $new_val) = @_; return $self->{_ts}->{first}->{line} = $new_val; } sub prev_ts_line { my ($self) = @_; return $self->{_ts}->{prev}->{line}; } sub set_prev_ts_line { my ($self, $new_val) = @_; return $self->{_ts}->{prev}->{line} = $new_val; } sub curr_ts_line { my ($self) = @_; return $self->{_ts}->{curr}->{line}; } sub set_curr_ts_line { my ($self, $new_val) = @_; return $self->{_ts}->{curr}->{line} = $new_val; } sub show_line_between_samples { my ($self) = @_; return $self->{space_samples}; } sub set_show_line_between_samples { my ($self, $new_val) = @_; return $self->{space_samples} = $new_val; } sub show_timestamps { my ($self) = @_; return $self->{show_timestamps}; } sub set_show_timestamps { my ($self, $new_val) = @_; return $self->{show_timestamps} = $new_val; } sub active_device { my ( $self, $dev ) = @_; return $self->{_active_devices}->{$dev}; } sub set_active_device { my ($self, $dev, $val) = @_; return $self->{_active_devices}->{$dev} = $val; } sub clear_active_devices { my ( $self ) = @_; return $self->{_active_devices} = {}; } sub automatic_headers { my ($self) = @_; return $self->{automatic_headers}; } sub set_automatic_headers { my ($self, $new_val) = @_; return $self->{automatic_headers} = $new_val; } sub curr_ts { my ($self) = @_; return $self->{_ts}->{curr}->{ts} || 0; } sub set_curr_ts { my ($self, $val) = @_; $self->{_ts}->{curr}->{ts} = $val || 0; } sub prev_ts { my ($self) = @_; return $self->{_ts}->{prev}->{ts} || 0; } sub set_prev_ts { my ($self, $val) = @_; $self->{_ts}->{prev}->{ts} = $val || 0; } sub first_ts { my ($self) = @_; return $self->{_ts}->{first}->{ts} || 0; } sub set_first_ts { my ($self, $val) = @_; $self->{_ts}->{first}->{ts} = $val || 0; } sub show_inactive { my ($self) = @_; return $self->{show_inactive}; } sub set_show_inactive { my ($self, $new_val) = @_; $self->{show_inactive} = $new_val; } sub sample_time { my ($self) = @_; return $self->{sample_time}; } sub set_sample_time { my ($self, $new_val) = @_; if (defined($new_val)) { $self->{sample_time} = $new_val; } } sub interactive { my ($self) = @_; return $self->{interactive}; } sub set_interactive { my ($self, $new_val) = @_; if (defined($new_val)) { $self->{interactive} = $new_val; } } sub columns_regex { my ( $self ) = @_; return $self->{columns_regex}; } sub set_columns_regex { my ( $self, $new_re ) = @_; return $self->{columns_regex} = $new_re; } sub devices_regex { my ( $self ) = @_; return $self->{devices_regex}; } sub set_devices_regex { my ( $self, $new_re ) = @_; return $self->{devices_regex} = $new_re; } sub filename { my ( $self ) = @_; return $self->{filename}; } sub set_filename { my ( $self, $new_filename ) = @_; if ( $new_filename ) { return $self->{filename} = $new_filename; } } sub block_size { my ( $self ) = @_; return 
$self->{block_size}; } sub ordered_devs { my ( $self, $replacement_list ) = @_; if ( $replacement_list ) { $self->{_ordered_devs} = $replacement_list; } return @{ $self->{_ordered_devs} }; } sub add_ordered_dev { my ( $self, $new_dev ) = @_; if ( !$self->{_seen_devs}->{$new_dev}++ ) { push @{ $self->{_ordered_devs} }, $new_dev; } return; } sub force_header { my ($self) = @_; return $self->{force_header}; } sub set_force_header { my ($self, $new_val) = @_; return $self->{force_header} = $new_val; } sub clear_state { my ($self, %args) = @_; $self->set_force_header(1); $self->clear_curr_stats(); if ( $args{force} || !$self->interactive() ) { $self->clear_first_stats(); $self->clear_prev_stats(); } $self->clear_ts(); $self->clear_ordered_devs(); } sub clear_ts { my ($self) = @_; undef($_->{ts}) for @{ $self->{_ts} }{ qw( curr prev first ) }; } sub clear_ordered_devs { my ($self) = @_; $self->{_seen_devs} = {}; $self->ordered_devs( [] ); } sub _clear_stats_common { my ( $self, $key, @args ) = @_; if (@args) { for my $dev (@args) { $self->{$key}->{$dev} = {}; } } else { $self->{$key} = {}; } } sub clear_curr_stats { my ( $self, @args ) = @_; if ( $self->has_stats() ) { $self->_save_curr_as_prev(); } $self->_clear_stats_common( "_stats_for", @args ); } sub clear_prev_stats { my ( $self, @args ) = @_; $self->_clear_stats_common( "_prev_stats_for", @args ); } sub clear_first_stats { my ( $self, @args ) = @_; $self->_clear_stats_common( "_first_stats_for", @args ); } sub stats_for { my ( $self, $dev ) = @_; $self->{_stats_for} ||= {}; if ($dev) { return $self->{_stats_for}->{$dev}; } return $self->{_stats_for}; } sub prev_stats_for { my ( $self, $dev ) = @_; $self->{_prev_stats_for} ||= {}; if ($dev) { return $self->{_prev_stats_for}->{$dev}; } return $self->{_prev_stats_for}; } sub first_stats_for { my ( $self, $dev ) = @_; $self->{_first_stats_for} ||= {}; if ($dev) { return $self->{_first_stats_for}->{$dev}; } return $self->{_first_stats_for}; } sub has_stats { my ($self) = @_; my $stats = $self->stats_for; for my $key ( keys %$stats ) { return 1 if $stats->{$key} && @{ $stats->{$key} } } return; } sub _save_curr_as_prev { my ( $self, $curr ) = @_; if ( $self->{_save_curr_as_prev} ) { $self->{_prev_stats_for} = $curr; for my $dev (keys %$curr) { $self->{_prev_stats_for}->{$dev}->[SUM_IOS_IN_PROGRESS] += $curr->{$dev}->[IOS_IN_PROGRESS]; } $self->set_prev_ts($self->curr_ts()); } return; } sub _save_curr_as_first { my ($self, $curr) = @_; if ( !%{$self->{_first_stats_for}} ) { $self->{_first_stats_for} = { map { $_ => [@{$curr->{$_}}] } keys %$curr }; $self->set_first_ts($self->curr_ts()); } } sub trim { my ($c) = @_; $c =~ s/^\s+//; $c =~ s/\s+$//; return $c; } sub col_ok { my ( $self, $column ) = @_; my $regex = $self->columns_regex(); return ($column =~ $regex) || (trim($column) =~ $regex); } our @columns_in_order = ( [ " rd_s" => "%7.1f", "reads_sec", ], [ "rd_avkb" => "%7.1f", "avg_read_sz", ], [ "rd_mb_s" => "%7.1f", "mbytes_read_sec", ], [ "rd_mrg" => "%5.0f%%", "read_merge_pct", ], [ "rd_cnc" => "%6.1f", "read_conc", ], [ " rd_rt" => "%7.1f", "read_rtime", ], [ " wr_s" => "%7.1f", "writes_sec", ], [ "wr_avkb" => "%7.1f", "avg_write_sz", ], [ "wr_mb_s" => "%7.1f", "mbytes_written_sec", ], [ "wr_mrg" => "%5.0f%%", "write_merge_pct", ], [ "wr_cnc" => "%6.1f", "write_conc", ], [ " wr_rt" => "%7.1f", "write_rtime", ], [ "busy" => "%3.0f%%", "busy", ], [ "in_prg" => "%6d", "in_progress", ], [ " io_s" => "%7.1f", "s_spent_doing_io", ], [ " qtime" => "%6.1f", "qtime", ], [ "stime" => "%5.1f", 
"stime", ], ); { my %format_for = ( map { ( $_->[0] => $_->[1] ) } @columns_in_order, ); sub _format_for { my ( $self, $col ) = @_; return $format_for{$col}; } } { my %column_to_key = ( map { ( $_->[0] => $_->[2] ) } @columns_in_order, ); sub _column_to_key { my ( $self, $col ) = @_; return $column_to_key{$col}; } } sub design_print_formats { my ( $self, %args ) = @_; my ( $dev_length, $columns ) = @args{qw( max_device_length columns )}; $dev_length ||= max 6, map length, $self->ordered_devs(); my ( $header, $format ); $header = $format = qq{%+*s %-${dev_length}s }; if ( !$columns ) { @$columns = grep { $self->col_ok($_) } map { $_->[0] } @columns_in_order; } elsif ( !ref($columns) || ref($columns) ne ref([]) ) { die "The columns argument to design_print_formats should be an arrayref"; } $header .= join " ", @$columns; $format .= join " ", map $self->_format_for($_), @$columns; return ( $header, $format, $columns ); } sub parse_diskstats_line { my ( $self, $line, $block_size ) = @_; # linux kernel source => Documentation/iostats.txt # 2.6+ => 14 fields # 4.18+ => 18 fields my @dev_stats = split ' ', $line; return unless @dev_stats == 14 or @dev_stats == 18; my $read_bytes = $dev_stats[READ_SECTORS] * $block_size; my $written_bytes = $dev_stats[WRITTEN_SECTORS] * $block_size; $dev_stats[READ_KBS] = $read_bytes / 1024; $dev_stats[WRITTEN_KBS] = $written_bytes / 1024; $dev_stats[IOS_IN_BYTES] = $read_bytes + $written_bytes; $dev_stats[IOS_REQUESTED] = $dev_stats[READS] + $dev_stats[WRITES] + $dev_stats[READS_MERGED] +$dev_stats[WRITES_MERGED]; return $dev_stats[DEVICE], \@dev_stats; } sub parse_from { my ( $self, %args ) = @_; my $lines_read; if ($args{filehandle}) { $lines_read = $self->_parse_from_filehandle( @args{qw( filehandle sample_callback )} ); } elsif ( $args{data} ) { open( my $fh, "<", ref($args{data}) ? 
$args{data} : \$args{data} ) or die "Couldn't parse data: $OS_ERROR"; $lines_read = $self->_parse_from_filehandle( $fh, $args{sample_callback} ); close $fh or warn "Cannot close: $OS_ERROR"; } else { my $filename = $args{filename} || $self->filename(); open my $fh, "<", $filename or die "Cannot parse $filename: $OS_ERROR"; $lines_read = $self->_parse_from_filehandle( $fh, $args{sample_callback} ); close $fh or warn "Cannot close: $OS_ERROR"; } return $lines_read; } sub _parse_from_filehandle { my ( $self, $filehandle, $sample_callback ) = @_; return $self->_parse_and_load_diskstats( $filehandle, $sample_callback ); } sub _parse_and_load_diskstats { my ( $self, $fh, $sample_callback ) = @_; my $block_size = $self->block_size(); my $current_ts = 0; my $new_cur = {}; my $last_ts_line; while ( my $line = <$fh> ) { if ( my ( $dev, $dev_stats ) = $self->parse_diskstats_line($line, $block_size) ) { $new_cur->{$dev} = $dev_stats; $self->add_ordered_dev($dev); } elsif ( my ($new_ts) = $line =~ /^TS\s+([0-9]+(?:\.[0-9]+)?)/ ) { PTDEBUG && _d("Timestamp:", $line); if ( $current_ts && %$new_cur ) { $self->_handle_ts_line($current_ts, $new_cur, $line, $sample_callback); $new_cur = {}; } $current_ts = $new_ts; $last_ts_line = $line; } else { PTDEBUG && _d("Ignoring unknown diskstats line:", $line); } } if ( $current_ts && %{$new_cur} ) { $self->_handle_ts_line($current_ts, $new_cur, $last_ts_line, $sample_callback); $new_cur = {}; } return $INPUT_LINE_NUMBER; } sub _handle_ts_line { my ($self, $current_ts, $new_cur, $line, $sample_callback) = @_; $self->set_first_ts_line( $line ) unless $self->first_ts_line(); $self->set_prev_ts_line( $self->curr_ts_line() ); $self->set_curr_ts_line( $line ); $self->_save_curr_as_prev( $self->stats_for() ); $self->{_stats_for} = $new_cur; $self->set_curr_ts($current_ts); $self->_save_curr_as_first( $new_cur ); if ($sample_callback) { $self->$sample_callback($current_ts); } return; } sub _calc_read_stats { my ( $self, %args ) = @_; my @required_args = qw( delta_for elapsed devs_in_group ); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($delta_for, $elapsed, $devs_in_group) = @args{ @required_args }; my %read_stats = ( reads_sec => $delta_for->{reads} / $elapsed, read_requests => $delta_for->{reads_merged} + $delta_for->{reads}, mbytes_read_sec => $delta_for->{read_kbs} / $elapsed / 1024, read_conc => $delta_for->{ms_spent_reading} / $elapsed / 1000 / $devs_in_group, ); if ( $delta_for->{reads} > 0 ) { $read_stats{read_rtime} = $delta_for->{ms_spent_reading} / $read_stats{read_requests}; $read_stats{avg_read_sz} = $delta_for->{read_kbs} / $delta_for->{reads}; } else { $read_stats{read_rtime} = 0; $read_stats{avg_read_sz} = 0; } $read_stats{read_merge_pct} = $read_stats{read_requests} > 0 ? 
100 * $delta_for->{reads_merged} / $read_stats{read_requests} : 0; return %read_stats; } sub _calc_write_stats { my ( $self, %args ) = @_; my @required_args = qw( delta_for elapsed devs_in_group ); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($delta_for, $elapsed, $devs_in_group) = @args{ @required_args }; my %write_stats = ( writes_sec => $delta_for->{writes} / $elapsed, write_requests => $delta_for->{writes_merged} + $delta_for->{writes}, mbytes_written_sec => $delta_for->{written_kbs} / $elapsed / 1024, write_conc => $delta_for->{ms_spent_writing} / $elapsed / 1000 / $devs_in_group, ); if ( $delta_for->{writes} > 0 ) { $write_stats{write_rtime} = $delta_for->{ms_spent_writing} / $write_stats{write_requests}; $write_stats{avg_write_sz} = $delta_for->{written_kbs} / $delta_for->{writes}; } else { $write_stats{write_rtime} = 0; $write_stats{avg_write_sz} = 0; } $write_stats{write_merge_pct} = $write_stats{write_requests} > 0 ? 100 * $delta_for->{writes_merged} / $write_stats{write_requests} : 0; return %write_stats; } sub _calc_misc_stats { my ( $self, %args ) = @_; my @required_args = qw( delta_for elapsed devs_in_group stats ); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($delta_for, $elapsed, $devs_in_group, $stats) = @args{ @required_args }; my %extra_stats; $extra_stats{busy} = 100 * $delta_for->{ms_spent_doing_io} / ( 1000 * $elapsed * $devs_in_group ); # Highlighting failure: / my $number_of_ios = $delta_for->{ios_requested}; # sum(delta[field1, 2, 5, 6]) my $total_ms_spent_on_io = $delta_for->{ms_spent_reading} + $delta_for->{ms_spent_writing}; if ( $number_of_ios ) { my $average_ios = $number_of_ios + $delta_for->{ios_in_progress}; if ( $average_ios ) { $extra_stats{qtime} = $delta_for->{ms_weighted} / $average_ios - $delta_for->{ms_spent_doing_io} / $number_of_ios; } else { PTDEBUG && _d("IOS_IN_PROGRESS is [", $delta_for->{ios_in_progress}, "], and the number of ios is [", $number_of_ios, "], going to use 0 as qtime."); $extra_stats{qtime} = 0; } $extra_stats{stime} = $delta_for->{ms_spent_doing_io} / $number_of_ios; } else { $extra_stats{qtime} = 0; $extra_stats{stime} = 0; } $extra_stats{s_spent_doing_io} = $stats->{reads_sec} + $stats->{writes_sec}; $extra_stats{line_ts} = $self->compute_line_ts( first_ts => $self->first_ts(), curr_ts => $self->curr_ts(), ); return %extra_stats; } sub _calc_delta_for { my ( $self, $curr, $against ) = @_; my %deltas; foreach my $col ( @{$self->{delta_cols}} ) { my $colno = $diskstat_colno_for->{$col}; $deltas{lc $col} = ($curr->[$colno] || 0) - ($against->[$colno] || 0); } return \%deltas; } sub _print_device_if { my ($self, $dev ) = @_; my $dev_re = $self->devices_regex(); if ( $dev_re ) { $self->_mark_if_active($dev); return $dev if $dev =~ $dev_re; } else { if ( $self->active_device($dev) ) { return $dev; } elsif ( $self->show_inactive() ) { $self->_mark_if_active($dev); return $dev; } else { return $dev if $self->_mark_if_active($dev); } } push @{$self->{_nochange_skips}}, $dev; return; } sub _mark_if_active { my ($self, $dev) = @_; return $dev if $self->active_device($dev); my $curr = $self->stats_for($dev); my $first = $self->first_stats_for($dev); return unless $curr && $first; if ( first { $curr->[$_] != $first->[$_] } READS..IOS_IN_BYTES ) { $self->set_active_device($dev, 1); return $dev; } return; } sub _calc_stats_for_deltas { my ( $self, $elapsed ) = @_; my @end_stats; my @devices = $self->ordered_devs(); my $devs_in_group = 
$self->compute_devs_in_group(); foreach my $dev ( grep { $self->_print_device_if($_) } @devices ) { my $curr = $self->stats_for($dev); my $against = $self->delta_against($dev); next unless $curr && $against; my $delta_for = $self->_calc_delta_for( $curr, $against ); my $in_progress = $curr->[IOS_IN_PROGRESS]; my $tot_in_progress = $against->[SUM_IOS_IN_PROGRESS] || 0; my %stats = ( $self->_calc_read_stats( delta_for => $delta_for, elapsed => $elapsed, devs_in_group => $devs_in_group, ), $self->_calc_write_stats( delta_for => $delta_for, elapsed => $elapsed, devs_in_group => $devs_in_group, ), in_progress => $self->compute_in_progress( $in_progress, $tot_in_progress ), ); my %extras = $self->_calc_misc_stats( delta_for => $delta_for, elapsed => $elapsed, devs_in_group => $devs_in_group, stats => \%stats, ); @stats{ keys %extras } = values %extras; $stats{dev} = $dev; push @end_stats, \%stats; } if ( @{$self->{_nochange_skips}} ) { my $devs = join ", ", @{$self->{_nochange_skips}}; PTDEBUG && _d("Skipping [$devs], haven't changed from the first sample"); $self->{_nochange_skips} = []; } return @end_stats; } sub _calc_deltas { my ( $self ) = @_; my $elapsed = $self->curr_ts() - $self->delta_against_ts(); die "Time between samples should be > 0, is [$elapsed]" if $elapsed <= 0; return $self->_calc_stats_for_deltas($elapsed); } sub force_print_header { my ($self, @args) = @_; my $orig = $self->force_header(); $self->set_force_header(1); $self->print_header(@args); $self->set_force_header($orig); return; } sub print_header { my ($self, $header, @args) = @_; if ( $self->force_header() ) { printf $header . "\n", $self->{_length_ts_column}, @args; $Diskstats::printed_lines--; $Diskstats::printed_lines ||= $max_lines; $Diskstats::last_was_header = 1; } return; } sub print_rows { my ($self, $format, $cols, $stat) = @_; printf $format . "\n", $self->{_length_ts_column}, @{ $stat }{ qw( line_ts dev ), @$cols }; $Diskstats::printed_lines--; $Diskstats::last_was_header = 0; } sub print_deltas { my ( $self, %args ) = @_; my ( $header, $format, $cols ) = $self->design_print_formats( max_device_length => $args{max_device_length}, columns => $args{columns}, ); return unless $self->delta_against_ts(); @$cols = map { $self->_column_to_key($_) } @$cols; my $header_method = $args{header_callback} || "print_header"; my $rows_method = $args{rows_callback} || "print_rows"; my @stats = $self->_calc_deltas(); $Diskstats::printed_lines = $max_lines unless defined $Diskstats::printed_lines; if ( $self->{space_samples} && @stats && @stats > 1 && !$Diskstats::last_was_header ) { print "\n"; $Diskstats::printed_lines--; } if ( $self->automatic_headers() && $Diskstats::printed_lines <= @stats ) { $self->force_print_header( $header, "#ts", "device" ); } else { $self->$header_method( $header, "#ts", "device" ); } foreach my $stat ( @stats ) { $self->$rows_method( $format, $cols, $stat ); } $Diskstats::printed_lines = $max_lines if $Diskstats::printed_lines <= 0; } sub compute_line_ts { my ( $self, %args ) = @_; my $line_ts; if ( $self->show_timestamps() ) { $line_ts = $self->ts_line_for_timestamp(); if ( $line_ts && $line_ts =~ /([0-9]{2}:[0-9]{2}:[0-9]{2})/ ) { $line_ts = $1; } else { $line_ts = scalar localtime($args{curr_ts}); $line_ts =~ s/.*(\d\d:\d\d:\d\d).*/$1/; } } else { $line_ts = sprintf( "%5.1f", $args{first_ts} > 0 ? 
$args{curr_ts} - $args{first_ts} : 0 ); } return $line_ts; } sub compute_in_progress { my ( $self, $in_progress, $tot_in_progress ) = @_; return $in_progress; } sub compute_devs_in_group { return 1; } sub ts_line_for_timestamp { die 'You must override ts_line_for_timestamp() in a subclass'; } sub delta_against { die 'You must override delta_against() in a subclass'; } sub delta_against_ts { die 'You must override delta_against_ts() in a subclass'; } sub group_by { die 'You must override group_by() in a subclass'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Diskstats package # ########################################################################### # ########################################################################### # DiskstatsGroupByAll package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DiskstatsGroupByAll.pm # t/lib/DiskstatsGroupByAll.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DiskstatsGroupByAll; use warnings; use strict; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use base qw( Diskstats ); sub group_by { my ($self, %args) = @_; $self->clear_state() unless $self->interactive(); $self->parse_from( filehandle => $args{filehandle}, filename => $args{filename}, data => $args{data}, sample_callback => sub { $self->print_deltas( header_callback => $args{header_callback} || sub { my ($self, @args) = @_; $self->print_header(@args); $self->set_force_header(undef); }, rows_callback => $args{rows_callback}, ); }, ); return; } sub delta_against { my ($self, $dev) = @_; return $self->prev_stats_for($dev); } sub ts_line_for_timestamp { my ($self) = @_; return $self->prev_ts_line(); } sub delta_against_ts { my ($self) = @_; return $self->prev_ts(); } sub compute_line_ts { my ($self, %args) = @_; if ( $self->interactive() ) { $args{first_ts} = $self->prev_ts(); } return $self->SUPER::compute_line_ts(%args); } 1; } # ########################################################################### # End DiskstatsGroupByAll package # ########################################################################### # ########################################################################### # DiskstatsGroupByDisk package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DiskstatsGroupByDisk.pm # t/lib/DiskstatsGroupByDisk.t # See https://launchpad.net/percona-toolkit for more information. 
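# Grouping-by-disk notes: delta_against() returns the *first* sample's stats,
# so every reported value is the cumulative change since the first snapshot;
# compute_line_ts() labels rows "{N}" (iterations seen minus one) unless
# --show-timestamps is given; and in non-interactive use nothing is printed
# until at least two samples have been parsed.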
# ########################################################################### { package DiskstatsGroupByDisk; use warnings; use strict; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use base qw( Diskstats ); use POSIX qw( ceil ); sub new { my ($class, %args) = @_; my $self = $class->SUPER::new(%args); $self->{_iterations} = 0; return $self; } sub group_by { my ($self, %args) = @_; my @optional_args = qw( header_callback rows_callback ); my ($header_callback, $rows_callback) = $args{ @optional_args }; $self->clear_state() unless $self->interactive(); my $original_offset = ($args{filehandle} || ref($args{data})) ? tell($args{filehandle} || $args{data}) : undef; my $lines_read = $self->parse_from( sample_callback => sub { my ($self, $ts) = @_; if ( $self->has_stats() ) { $self->{_iterations}++; if ($self->interactive() && $self->{_iterations} >= 2) { my $elapsed = ( $self->curr_ts() || 0 ) - ( $self->first_ts() || 0 ); if ( $ts > 0 && ceil($elapsed) >= $self->sample_time() ) { $self->print_deltas( header_callback => sub { my ($self, @args) = @_; if ( $self->force_header() ) { my $method = $args{header_callback} || "print_header"; $self->$method(@args); } $self->set_force_header(undef); }, rows_callback => $args{rows_callback}, ); return; } } } }, filehandle => $args{filehandle}, filename => $args{filename}, data => $args{data}, ); if ($self->interactive()) { return $lines_read; } return if $self->{_iterations} < 2; $self->print_deltas( header_callback => $args{header_callback}, rows_callback => $args{rows_callback}, ); $self->clear_state(); return $lines_read; } sub clear_state { my ($self, @args) = @_; my $orig_print_h = $self->{force_header}; $self->{_iterations} = 0; $self->SUPER::clear_state(@args); $self->{force_header} = $orig_print_h; } sub compute_line_ts { my ($self, %args) = @_; if ( $self->show_timestamps() ) { return $self->SUPER::compute_line_ts(%args); } else { return "{" . ($self->{_iterations} - 1) . "}"; } } sub delta_against { my ($self, $dev) = @_; return $self->first_stats_for($dev); } sub ts_line_for_timestamp { my ($self) = @_; return $self->prev_ts_line(); } sub delta_against_ts { my ($self) = @_; return $self->first_ts(); } sub compute_in_progress { my ($self, $in_progress, $tot_in_progress) = @_; return $tot_in_progress / ($self->{_iterations} - 1); } 1; } # ########################################################################### # End DiskstatsGroupByDisk package # ########################################################################### # ########################################################################### # DiskstatsGroupBySample package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DiskstatsGroupBySample.pm # t/lib/DiskstatsGroupBySample.t # See https://launchpad.net/percona-toolkit for more information. 
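# Grouping-by-sample notes: _calc_stats_for_deltas() sums the per-device
# deltas into a single row per sample interval, compute_dev() prints "{N}"
# (the number of devices in the group) or the lone device name, and deltas
# are taken against the previously *printed* sample, because
# _save_curr_as_prev() is only re-enabled after a row has actually been
# emitted (or on the very first iteration).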
# ########################################################################### { package DiskstatsGroupBySample; use warnings; use strict; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use base qw( Diskstats ); use POSIX qw( ceil ); sub new { my ( $class, %args ) = @_; my $self = $class->SUPER::new(%args); $self->{_iterations} = 0; $self->{_save_curr_as_prev} = 0; return $self; } sub group_by { my ( $self, %args ) = @_; my @optional_args = qw( header_callback rows_callback ); my ( $header_callback, $rows_callback ) = $args{ @optional_args }; $self->clear_state() unless $self->interactive(); $self->parse_from( sample_callback => $self->can("_sample_callback"), filehandle => $args{filehandle}, filename => $args{filename}, data => $args{data}, ); return; } sub _sample_callback { my ( $self, $ts, %args ) = @_; my $printed_a_line = 0; if ( $self->has_stats() ) { $self->{_iterations}++; } my $elapsed = ($self->curr_ts() || 0) - ($self->prev_ts() || 0); if ( $ts > 0 && ceil($elapsed) >= $self->sample_time() ) { $self->print_deltas( max_device_length => 6, header_callback => sub { my ( $self, $header, @args ) = @_; if ( $self->force_header() ) { my $method = $args{header_callback} || "print_header"; $self->$method( $header, @args ); $self->set_force_header(undef); } }, rows_callback => sub { my ( $self, $format, $cols, $stat ) = @_; my $method = $args{rows_callback} || "print_rows"; $self->$method( $format, $cols, $stat ); $printed_a_line = 1; } ); } if ( $self->{_iterations} == 1 || $printed_a_line == 1 ) { $self->{_save_curr_as_prev} = 1; $self->_save_curr_as_prev( $self->stats_for() ); $self->set_prev_ts_line( $self->curr_ts_line() ); $self->{_save_curr_as_prev} = 0; } return; } sub delta_against { my ( $self, $dev ) = @_; return $self->prev_stats_for($dev); } sub ts_line_for_timestamp { my ($self) = @_; return $self->prev_ts_line(); } sub delta_against_ts { my ( $self ) = @_; return $self->prev_ts(); } sub clear_state { my ( $self, @args ) = @_; $self->{_iterations} = 0; $self->{_save_curr_as_prev} = 0; $self->SUPER::clear_state(@args); } sub compute_devs_in_group { my ($self) = @_; my $stats = $self->stats_for(); return scalar grep { $stats->{$_} && $self->_print_device_if($_) } $self->ordered_devs; } sub compute_dev { my ( $self, $devs ) = @_; $devs ||= $self->compute_devs_in_group(); return "{" . $devs . 
"}" if $devs > 1; return (grep { $self->_print_device_if($_) } $self->ordered_devs())[0]; } sub _calc_stats_for_deltas { my ( $self, $elapsed ) = @_; my $delta_for; foreach my $dev ( grep { $self->_print_device_if($_) } $self->ordered_devs() ) { my $curr = $self->stats_for($dev); my $against = $self->delta_against($dev); next unless $curr && $against; my $delta = $self->_calc_delta_for( $curr, $against ); $delta->{ios_in_progress} = $curr->[Diskstats::IOS_IN_PROGRESS]; while ( my ( $k, $v ) = each %$delta ) { $delta_for->{$k} += $v; } } return unless $delta_for && %{$delta_for}; my $in_progress = $delta_for->{ios_in_progress}; my $tot_in_progress = 0; my $devs_in_group = $self->compute_devs_in_group() || 1; my %stats = ( $self->_calc_read_stats( delta_for => $delta_for, elapsed => $elapsed, devs_in_group => $devs_in_group, ), $self->_calc_write_stats( delta_for => $delta_for, elapsed => $elapsed, devs_in_group => $devs_in_group, ), in_progress => $self->compute_in_progress( $in_progress, $tot_in_progress ), ); my %extras = $self->_calc_misc_stats( delta_for => $delta_for, elapsed => $elapsed, devs_in_group => $devs_in_group, stats => \%stats, ); @stats{ keys %extras } = values %extras; $stats{dev} = $self->compute_dev( $devs_in_group ); $self->{_first_time_magic} = undef; if ( @{$self->{_nochange_skips}} ) { my $devs = join ", ", @{$self->{_nochange_skips}}; PTDEBUG && _d("Skipping [$devs], haven't changed from the first sample"); $self->{_nochange_skips} = []; } return \%stats; } sub compute_line_ts { my ($self, %args) = @_; if ( $self->show_timestamps() ) { @args{ qw( first_ts curr_ts ) } = @args{ qw( curr_ts first_ts ) } } return $self->SUPER::compute_line_ts(%args); } 1; } # ########################################################################### # End DiskstatsGroupBySample package # ########################################################################### # ########################################################################### # DiskstatsMenu package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DiskstatsMenu.pm # t/lib/DiskstatsMenu.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DiskstatsMenu; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw( fmod :sys_wait_h ); use IO::Handle; use IO::Select; use Time::HiRes qw( gettimeofday ); use Scalar::Util qw( looks_like_number blessed ); use ReadKeyMini qw( ReadMode ); use Transformers qw( ts ); require DiskstatsGroupByAll; require DiskstatsGroupByDisk; require DiskstatsGroupBySample; my %actions = ( 'A' => \&group_by, 'D' => \&group_by, 'S' => \&group_by, 'i' => \&hide_inactive_disks, 'z' => get_new_value_for( "sample_time", "Enter a new interval between samples in seconds: " ), 'c' => get_new_regex_for( "columns_regex", "Enter a column pattern: " ), '/' => get_new_regex_for( "devices_regex", "Enter a disk/device pattern: " ), 'q' => sub { return 'last' }, 'p' => sub { print "Paused - press any key to continue\n"; pause(@_); return; }, ' ' => \&print_header, "\n" => \&print_header, '?' 
=> \&help, ); my %input_to_object = ( D => "DiskstatsGroupByDisk", A => "DiskstatsGroupByAll", S => "DiskstatsGroupBySample", ); sub new { return bless {}, shift; } sub run_interactive { my ($self, %args) = @_; my @required_args = qw(OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($o) = @args{@required_args}; $o->{opts}->{current_group_by_obj}->{value} = undef; my ($tmp_fh, $filename, $child_pid, $child_fh); if ( $filename = $args{filename} ) { if ( ref $filename ) { $tmp_fh = $filename; undef $args{filename}; } else { open $tmp_fh, "<", $filename or die "Cannot open $filename: $OS_ERROR"; } } else { $filename = $o->get('save-samples'); if ( $filename ) { unlink $filename; open my $tmp_fh, "+>", $filename or die "Cannot open $filename: $OS_ERROR"; } $child_pid = open $child_fh, "-|"; die "Cannot fork: $OS_ERROR" unless defined $child_pid; if ( !$child_pid ) { STDOUT->autoflush(1); local $PROGRAM_NAME = "$PROGRAM_NAME (data-gathering daemon)"; close $tmp_fh if $tmp_fh; PTDEBUG && _d("Child is [$PROGRAM_NAME] in ps aux and similar"); gather_samples( gather_while => sub { getppid() }, samples_to_gather => $o->get('iterations'), filename => $filename, sample_interval => $o->get('interval'), ); if ( $filename ) { unlink $filename unless $o->get('save-samples'); } exit(0); } else { PTDEBUG && _d("Forked, child is", $child_pid); $tmp_fh = $child_fh; $tmp_fh->blocking(0); Time::HiRes::sleep(0.5); } } PTDEBUG && _d( $filename ? ("Using file", $filename) : "Not using a file to store samples"); local $SIG{CHLD} = 'IGNORE'; local $SIG{PIPE} = 'IGNORE'; STDOUT->autoflush; STDIN->blocking(0); my $sel = IO::Select->new(\*STDIN); my $group_by = $o->get('group-by') || 'disk'; my $class = $group_by =~ m/disk/i ? 'DiskstatsGroupByDisk' : $group_by =~ m/sample/i ? 'DiskstatsGroupBySample' : $group_by =~ m/all/i ? 'DiskstatsGroupByAll' : die "Invalid --group-by: $group_by"; $o->set("current_group_by_obj", $class->new( OptionParser => $o, interactive => 1 ) ); my $header_callback = $o->get("current_group_by_obj") ->can("print_header"); my $redraw = 0; if ( $args{filename} ) { PTDEBUG && _d("Passed a file from the command line,", "rendering from scratch before looping"); $redraw = 1; group_by( header_callback => $header_callback, select_obj => $sel, OptionParser => $o, filehandle => $tmp_fh, input => substr(ucfirst($group_by), 0, 1), redraw_all => $redraw, ); if ( !-t STDOUT && !tied *STDIN ) { PTDEBUG && _d("Not connected to a tty and not in testing. 
Quitting"); return 0 } } ReadKeyMini::cbreak(); my $run = 1; MAIN_LOOP: while ($run) { my $refresh_interval = $o->get('interval'); my $time = scalar Time::HiRes::gettimeofday(); my $sleep = ($refresh_interval - fmod( $time, $refresh_interval ))+0.5; if ( my $input = read_command_timeout( $sel, $sleep ) ) { if ($actions{$input}) { PTDEBUG && _d("Got [$input] and have an action for it"); my $ret = $actions{$input}->( select_obj => $sel, OptionParser => $o, input => $input, filehandle => $tmp_fh, redraw_all => $redraw, ) || ''; last MAIN_LOOP if $ret eq 'last'; if ( $args{filename} && !grep { $input eq $_ } qw( A S D ), ' ', "\n" ) { PTDEBUG && _d("Got a file from the command line, redrawing", "from the beginning after getting an option"); my $obj = $o->get("current_group_by_obj"); $obj->clear_state( force => 1 ); local $obj->{force_header} = 1; group_by( redraw_all => 1, select_obj => $sel, OptionParser => $o, input => substr(ref($obj), 16, 1), filehandle => $tmp_fh, ); } } } $o->get("current_group_by_obj") ->group_by( filehandle => $tmp_fh ); if ( eof $tmp_fh ) { $tmp_fh->clearerr; } if ( !$args{filename} && $o->get('iterations') && waitpid($child_pid, WNOHANG) != 0 ) { PTDEBUG && _d("Child quit as expected after", $o->get("iterations"), "iterations. Quitting."); $run = 0; } } ReadKeyMini::cooked(); if ( $child_pid && !$args{filename} && !defined $o->get('iterations') && kill 0, $child_pid ) { kill 9, $child_pid; waitpid $child_pid, 0; } return 0; # Exit status } sub read_command_timeout { my ($sel, $timeout) = @_; if ( $sel->can_read( $timeout ) ) { return scalar ; } return; } sub gather_samples { my (%args) = @_; my $samples = 0; my $sample_interval = $args{sample_interval}; my @fhs; if ( my $filename = $args{filename} ) { open my $fh, ">>", $filename or die "Cannot open $filename for appending: $OS_ERROR"; push @fhs, $fh; } STDOUT->autoflush(1); push @fhs, \*STDOUT; for my $fh ( @fhs ) { $fh->autoflush(1); } { my $time = scalar(Time::HiRes::gettimeofday()); my $sleep = $sample_interval - fmod( $time, $sample_interval); PTDEBUG && _d("Child: Starting at [$time] " . ($sleep < ($sample_interval * 0.2) ? '' : 'not ') . 
"going to sleep"); Time::HiRes::sleep($sleep) if $sleep < ($sample_interval * 0.2); open my $diskstats_fh, "<", "/proc/diskstats" or die "Cannot open /proc/diskstats: $OS_ERROR"; my @to_print = timestamp(); push @to_print, <$diskstats_fh>; for my $fh ( @fhs ) { print { $fh } @to_print; } close $diskstats_fh or die $OS_ERROR; } GATHER_DATA: while ( $args{gather_while}->() ) { my $time_of_day = scalar(Time::HiRes::gettimeofday()); my $sleep = $sample_interval - fmod( $time_of_day, $sample_interval ); Time::HiRes::sleep($sleep); open my $diskstats_fh, "<", "/proc/diskstats" or die "Cannot open /proc/diskstats: $OS_ERROR"; my @to_print = timestamp(); push @to_print, <$diskstats_fh>; for my $fh ( @fhs ) { print { $fh } @to_print; } close $diskstats_fh or die $OS_ERROR; $samples++; if ( defined($args{samples_to_gather}) && $samples >= $args{samples_to_gather} ) { last GATHER_DATA; } } pop @fhs; # STDOUT for my $fh ( @fhs ) { close $fh or die $OS_ERROR; } return; } sub print_header { my (%args) = @_; my @required_args = qw( OptionParser ); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($o) = @args{@required_args}; my $obj = $o->get("current_group_by_obj"); my ($header) = $obj->design_print_formats(); return $obj->force_print_header($header, "#ts", "device"); } sub group_by { my (%args) = @_; my @required_args = qw( OptionParser input ); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($o, $input) = @args{@required_args}; my $old_obj = $o->get("current_group_by_obj"); if ( ref( $o->get("current_group_by_obj") ) ne $input_to_object{$input} ) { $o->set("current_group_by_obj", undef); my $new_obj = $input_to_object{$input}->new(OptionParser=>$o, interactive => 1); $o->set( "current_group_by_obj", $new_obj ); $new_obj->{_stats_for} = $old_obj->{_stats_for}; $new_obj->set_curr_ts($old_obj->curr_ts()); $new_obj->{_prev_stats_for} = $old_obj->{_prev_stats_for}; $new_obj->set_prev_ts($old_obj->prev_ts()); $new_obj->{_first_stats_for} = $old_obj->{_first_stats_for}; $new_obj->set_first_ts($old_obj->first_ts()); print_header(%args) unless $args{redraw_all}; } for my $obj ( $o->get("current_group_by_obj") ) { if ( $args{redraw_all} ) { seek $args{filehandle}, 0, 0; if ( $obj->isa("DiskstatsGroupBySample") ) { $obj->set_interactive(1); } else { $obj->set_interactive(0); } my $print_header; my $header_callback = $args{header_callback} || sub { my ($self, @args) = @_; $self->print_header(@args) unless $print_header++ }; $obj->group_by( filehandle => $args{filehandle}, header_callback => $header_callback, ); } $obj->set_interactive(1); $obj->set_force_header(0); } } sub help { my (%args) = @_; my $obj = $args{OptionParser}->get("current_group_by_obj"); my $mode = substr ref($obj), 16, 1; my $column_re = $args{OptionParser}->get('columns-regex'); my $device_re = $args{OptionParser}->get('devices-regex'); my $interval = $obj->sample_time() || '(none)'; my $disp_int = $args{OptionParser}->get('interval'); my $inact_disk = $obj->show_inactive() ? 
'no' : 'yes'; for my $re ( $column_re, $device_re ) { $re ||= '(none)'; } print <<"HELP"; You can control this program by key presses: ------------------- Key ------------------- ---- Current Setting ---- A, D, S) Set the group-by mode $mode c) Enter a Perl regex to match column names $column_re /) Enter a Perl regex to match disk names $device_re z) Set the sample size in seconds $interval i) Hide inactive disks $inact_disk p) Pause the program q) Quit the program space) Print headers ------------------- Press any key to continue ----------------------- HELP pause(%args); return; } sub get_blocking_input { my ($message) = @_; STDIN->blocking(1); ReadKeyMini::cooked(); print $message; chomp(my $new_opt = ); ReadKeyMini::cbreak(); STDIN->blocking(0); return $new_opt; } sub hide_inactive_disks { my (%args) = @_; my $obj = $args{OptionParser}->get("current_group_by_obj"); my $new_val = !$obj->show_inactive(); $args{OptionParser}->set('show-inactive', $new_val); $obj->set_show_inactive($new_val); return; } sub get_new_value_for { my ($looking_for, $message) = @_; (my $looking_for_o = $looking_for) =~ tr/_/-/; return sub { my (%args) = @_; my $o = $args{OptionParser}; my $new_interval = get_blocking_input($message) || 0; die "Invalid timeout: $new_interval" unless looks_like_number($new_interval) && ($new_interval = int($new_interval)); my $obj = $o->get("current_group_by_obj"); if ( my $setter = $obj->can("set_$looking_for") ) { $obj->$setter($new_interval); } $o->set($looking_for_o, $new_interval); return $new_interval; }; } sub get_new_regex_for { my ($looking_for, $message) = @_; (my $looking_for_o = $looking_for) =~ tr/_/-/; $looking_for = "set_$looking_for"; return sub { my (%args) = @_; my $o = $args{OptionParser}; my $new_regex = get_blocking_input($message); local $EVAL_ERROR; if ( $new_regex && (my $re = eval { qr/$new_regex/i }) ) { $o->get("current_group_by_obj") ->$looking_for( $re ); $o->set($looking_for_o, $new_regex); } elsif ( !$EVAL_ERROR && !$new_regex ) { my $re; if ( $looking_for =~ /device/ ) { $re = undef; } else { $re = qr/.+/; } $o->get("current_group_by_obj") ->$looking_for( $re ); $o->set($looking_for_o, ''); } else { die "invalid regex specification: $EVAL_ERROR"; } return; }; } sub pause { my (%args) = @_; STDIN->blocking(1); $args{select_obj}->can_read(); STDIN->blocking(0); scalar ; return; } sub timestamp { my ($s, $m) = Time::HiRes::gettimeofday(); return sprintf( "TS %d.%09d %s\n", $s, $m*1000, Transformers::ts( $s ) ); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DiskstatsMenu package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. 
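# HTTP::Micro, below, is a minimal embedded HTTP/1.1 client. Within this tool
# it is used by the VersionCheck package further down to send the GET and
# POST requests of the --version-check feature; https requests additionally
# require IO::Socket::SSL to be installed.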
# ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? $host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? 
$1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for $host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! 
!= EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! 
= 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! 
%$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . 
$file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### { package pt_diskstats; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Percona::Toolkit; sub main { local @ARGV = @_; # set global ARGV for this package # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); # --sample-time only applies to --group-by sample. 
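   # Note: the check below is advisory only; it prints a debug line when
   # PTDEBUG is set and --sample-time is given without --group-by sample,
   # but it does not reject the option.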
if ( PTDEBUG && $o->get('group-by') !~ m/sample/i && $o->get('sample-time') ) { _d("Possibly useless use of --sample-time without --group-by sample"); } if ( !$o->get('help') ) { if ( !$o->get('columns-regex') ) { $o->save_error("A regex pattern for --column-regex must be specified"); } } $o->usage_or_errors(); # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), ); } # ######################################################################## # Interactive mode. Delegate to DiskstatsMenu::run_interactive # ######################################################################## my $diskstats = new DiskstatsMenu(); return $diskstats->run_interactive( OptionParser => $o, filename => $ARGV[0] ); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; } # ############################################################################# # Documentation. # ############################################################################# =pod =head1 NAME pt-diskstats - An interactive I/O monitoring tool for GNU/Linux. =head1 SYNOPSIS Usage: pt-diskstats [OPTIONS] [FILES] pt-diskstats prints disk I/O statistics for GNU/Linux. It is somewhat similar to iostat, but it is interactive and more detailed. It can analyze samples gathered from another machine. =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION The pt-diskstats tool is similar to iostat, but has some advantages. It prints read and write statistics separately, and has more columns. It is menu-driven and interactive, with several different ways to aggregate the data. It integrates well with the L tool. It also does the "right thing" by default, such as hiding disks that are idle. These properties make it very convenient for quickly drilling down into I/O performance and inspecting disk behavior. This program works in two modes. The default is to collect samples of F and print out the formatted statistics at intervals. The other mode is to process a file that contains saved samples of F; there is a shell script later in this documentation that shows how to collect such a file. In both cases, the tool is interactively controlled by keystrokes, so you can redisplay and slice the data flexibly and easily. It loops forever, until you exit with the 'q' key. If you press the '?' key, you will bring up the interactive help menu that shows which keys control the program. When the program is gathering samples of F and refreshing its display, it prints information about the newest sample each time it refreshes. 
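As a concrete illustration of the two modes, a live run that also saves its
samples, and a later replay of that file, might look like this (the file name
is purely illustrative):

  pt-diskstats --interval 5 --save-samples diskstats-samples.txt
  pt-diskstats --group-by disk diskstats-samples.txt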
When it is operating on a file of saved samples, it redraws the entire file's contents every time you change an option. The program doesn't print information about every block device on the system. It hides devices that it has never observed to have any activity. You can enable and disable this by pressing the 'i' key. =head1 OUTPUT In the rest of this documentation, we will try to clarify the distinction between block devices (/dev/sda1, for example), which the kernel presents to the application via a filesystem, versus the (usually) physical device underneath the block device, which could be a disk, a RAID controller, and so on. We will sometimes refer to logical I/O operations, which occur at the block device, versus physical I/Os which are performed on the underlying device. When we refer to the queue, we are speaking of the queue associated with the block device, which holds requests until they're issued to the physical device. The program's output looks like the following sample, which is too wide for this manual page, so we have formatted it as several samples with line breaks: #ts device rd_s rd_avkb rd_mb_s rd_mrg rd_cnc rd_rt {6} sda 0.9 4.2 0.0 0% 0.0 17.9 {6} sdb 0.4 4.0 0.0 0% 0.0 26.1 {6} dm-0 0.0 4.0 0.0 0% 0.0 13.5 {6} dm-1 0.8 4.0 0.0 0% 0.0 16.0 ... wr_s wr_avkb wr_mb_s wr_mrg wr_cnc wr_rt ... 99.7 6.2 0.6 35% 3.7 23.7 ... 14.5 15.8 0.2 75% 0.5 9.2 ... 1.0 4.0 0.0 0% 0.0 2.3 ... 117.7 4.0 0.5 0% 4.1 35.1 ... busy in_prg io_s qtime stime ... 6% 0 100.6 23.3 0.4 ... 4% 0 14.9 8.6 0.6 ... 0% 0 1.1 1.5 1.2 ... 5% 0 118.5 34.5 0.4 The columns are as follows: =over =item #ts This column's contents vary depending on the tool's aggregation mode. In the default mode, when each line contains information about a single disk but possibly aggregates across several samples from that disk, this column shows the number of samples that were included into the line of output, in {curly braces}. In the example shown, each line of output aggregates {10} samples of F. In the "all" group-by mode, this column shows timestamp offsets, relative to the time the tool began aggregating or the timestamp of the previous lines printed, depending on the mode. The output can be confusing to explain, but it's rather intuitive when you see the lines appearing on your screen periodically. Similarly, in "sample" group-by mode, the number indicates the total time span that is grouped into each sample. If you specify L<"--show-timestamps">, this field instead shows the timestamp at which the sample was taken; if multiple timestamps are present in a single line of output, then the first timestamp is used. =item device The device name. If there is more than one device, then instead the number of devices aggregated into the line is shown, in {curly braces}. =item rd_s The average number of reads per second. This is the number of I/O requests that were sent to the underlying device. This usually is a smaller number than the number of logical IO requests made by applications. More requests might have been queued to the block device, but some of them usually are merged before being sent to the disk. This field is computed from the contents of F as follows. See L<"KERNEL DOCUMENTATION"> below for the meaning of the field numbers: delta[field1] / delta[time] =item rd_avkb The average size of the reads, in kilobytes. This field is computed as follows: 2 * delta[field3] / delta[field1] =item rd_mb_s The average number of megabytes read per second. 
Computed as follows: 2 * delta[field3] / delta[time] =item rd_mrg The percentage of read requests that were merged together in the queue scheduler before being sent to the physical device. The field is computed as follows: 100 * delta[field2] / (delta[field2] + delta[field1]) =item rd_cnc The average concurrency of the read operations, as computed by Little's Law. This is the end-to-end concurrency on the block device, not the underlying disk's concurrency. It includes time spent in the queue. The field is computed as follows: delta[field4] / delta[time] / 1000 / devices-in-group =item rd_rt The average response time of the read operations, in milliseconds. This is the end-to-end response time, including time spent in the queue. It is the response time that the application making I/O requests sees, not the response time of the physical disk underlying the block device. It is computed as follows: delta[field4] / (delta[field1] + delta[field2]) =item wr_s, wr_avkb, wr_mb_s, wr_mrg, wr_cnc, wr_rt These columns show write activity, and they match the corresponding columns for read activity. =item busy The fraction of wall-clock time that the device had at least one request in progress; this is what iostat calls %util, and indeed it is utilization, depending on how you define utilization, but that is sometimes ambiguous in common parlance. It may also be called the residence time; the time during which at least one request was resident in the system. It is computed as follows: 100 * delta[field10] / (1000 * delta[time]) This field cannot exceed 100% unless there is a rounding error, but it is a common mistake to think that a device that's busy all the time is saturated. A device such as a RAID volume should support concurrency higher than 1, and solid-state drives can support very high concurrency. Concurrency can grow without bound, and is a more reliable indicator of how loaded the device really is. =item in_prg The number of requests that were in progress. Unlike the read and write concurrencies, which are averages that are generated from reliable numbers, this number is an instantaneous sample, and you can see that it might represent a spike of requests, rather than the true long-term average. If this number is large, it essentially means that the device is heavily loaded. It is computed as follows: field9 =item ios_s The average throughput of the physical device, in I/O operations per second (IOPS). This column shows the total IOPS the underlying device is handling. It is the sum of rd_s and wr_s. =item qtime The average queue time; that is, time a request spends in the device scheduler queue before being sent to the physical device. This is an average over reads and writes. It is computed in a slightly complex way: the average response time seen by the application, minus the average service time (see the description of the next column). This is derived from the queueing theory formula for response time, R = W + S: response time = queue time + service time. This is solved for W, of course, to give W = R - S. The computation follows: delta[field11] / (delta[field1, 2, 5, 6] + delta[field9]) - delta[field10] / delta[field1, 2, 5, 6] See the description for C for more details and cautions. =item stime The average service time; that is, the time elapsed while the physical device processes the request, after the request finishes waiting in the queue. This is an average over reads and writes. It is computed from the queueing theory utilization formula, U = SX, solved for S. 
This means that utilization divided by throughput gives service time: delta[field10] / (delta[field1, 2, 5, 6]) Note, however, that there can be some kernel bugs that cause field 9 in F to become negative, and this can cause field 10 to be wrong, thus making the service time computation not wholly trustworthy. Note that in the above formula we use utilization very specifically. It is a duration, not a percentage. You can compare the stime and qtime columns to see whether the response time for reads and writes is spent in the queue or on the physical device. However, you cannot see the difference between reads and writes. Changing the block device scheduler algorithm might improve queue time greatly. The default algorithm, cfq, is very bad for servers, and should only be used on laptops and workstations that perform tasks such as working with spreadsheets and surfing the Internet. =back If you are used to using iostat, you might wonder where you can find the same information in pt-diskstats. Here are two samples of output from both tools on the same machine at the same time, for F, wrapped to fit: #ts dev rd_s rd_avkb rd_mb_s rd_mrg rd_cnc rd_rt 08:50:10 sda 0.0 0.0 0.0 0% 0.0 0.0 08:50:20 sda 0.4 4.0 0.0 0% 0.0 15.5 08:50:30 sda 2.1 4.4 0.0 0% 0.0 21.1 08:50:40 sda 2.4 4.0 0.0 0% 0.0 15.4 08:50:50 sda 0.1 4.0 0.0 0% 0.0 33.0 wr_s wr_avkb wr_mb_s wr_mrg wr_cnc wr_rt 7.7 25.5 0.2 84% 0.0 0.3 49.6 6.8 0.3 41% 2.4 28.8 210.1 5.6 1.1 28% 7.4 25.2 297.1 5.4 1.6 26% 11.4 28.3 11.9 11.7 0.1 66% 0.2 4.9 busy in_prg io_s qtime stime 1% 0 7.7 0.1 0.2 6% 0 50.0 28.1 0.7 12% 0 212.2 24.8 0.4 16% 0 299.5 27.8 0.4 1% 0 12.0 4.7 0.3 Dev rrqm/s wrqm/s r/s w/s rMB/s wMB/s 08:50:10 sda 0.00 41.40 0.00 7.70 0.00 0.19 08:50:20 sda 0.00 34.70 0.40 49.60 0.00 0.33 08:50:30 sda 0.00 83.30 2.10 210.10 0.01 1.15 08:50:40 sda 0.00 105.10 2.40 297.90 0.01 1.58 08:50:50 sda 0.00 22.50 0.10 11.10 0.00 0.13 avgrq-sz avgqu-sz await svctm %util 51.01 0.02 2.04 1.25 0.96 13.55 2.44 48.76 1.16 5.79 11.15 7.45 35.10 0.55 11.76 10.81 11.40 37.96 0.53 15.97 24.07 0.17 15.60 0.87 0.97 The correspondence between the columns is not one-to-one. In particular: =over =item rrqm/s, wrqm/s These columns in iostat are replaced by rd_mrg and wr_mrg in pt-diskstats. =item avgrq-sz This column is in sectors in iostat, and is a combination of reads and writes. The pt-diskstats output breaks these out separately and shows them in kB. You can derive it via a weighted average of rd_avkb and wr_avkb in pt-diskstats, and then multiply by 2 to get sectors (each sector is 512 bytes). =item avgqu-sz This column really represents concurrency at the block device scheduler. The pt-diskstats output shows concurrency for reads and writes separately: rd_cnc and wr_cnc. =item await This column is the average response time from the beginning to the end of a request to the block device, including queue time and service time, and is not shown in pt-diskstats. Instead, pt-diskstats shows individual response times at the disk level for reads and writes (rd_rt and wr_rt), as well as queue time versus service time for reads and writes in aggregate. =item svctm This column is the average service time at the disk, and is shown as stime in pt-diskstats. =item %util This column is called busy in pt-diskstats. Utilization is usually defined as the portion of time during which there was at least one active request, not as a percentage, which is why we chose to avoid this confusing term. 
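As a small worked example (with invented numbers): if field 10 of
/proc/diskstats grows by 250 milliseconds over a one-second sample, then

  busy = 100 * 250 / (1000 * 1) = 25%

that is, the device had at least one request in flight for a quarter of the
interval, no matter how many requests were in flight at any moment.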
=back =head1 COLLECTING DATA It is straightforward to gather a sample of data for this tool. Files should have this format, with a timestamp line preceding each sample of statistics: TS TS ... et cetera You can simply use pt-diskstats with L<"--save-samples"> to collect this data for you. If you wish to capture samples as part of some other tool, and use pt-diskstats to analyze them, you can include a snippet of shell script such as the following: INTERVAL=1 while true; do sleep=$(date +%s.%N | awk "{print $INTERVAL - (\$1 % $INTERVAL)}") sleep $sleep date +"TS %s.%N %F %T" >> diskstats-samples.txt cat /proc/diskstats >> diskstats-samples.txt done =head1 KERNEL DOCUMENTATION This documentation supplements L on the contents of F. That documentation can sometimes be difficult to understand for those who are not familiar with Linux kernel internals. The contents of F are generated by the C function in the kernel source file F. Here is a sample of F on a recent kernel. 8 1 sda1 426 243 3386 2056 3 0 18 87 0 2135 2142 The fields in this sample are as follows. The first three fields are the major and minor device numbers (8, 1), and the device name (sda1). They are followed by 11 fields of statistics: =over =item 1. The number of reads completed. This is the number of physical reads done by the underlying disk, not the number of reads that applications made from the block device. This means that 426 actual reads have completed successfully to the disk on which F resides. Reads are not counted until they complete. =item 2. The number of reads merged because they were adjacent. In the sample, 243 reads were merged. This means that F actually received 869 logical reads, but sent only 426 physical reads to the underlying physical device. =item 3. The number of sectors read successfully. The 426 physical reads to the disk read 3386 sectors. Sectors are 512 bytes, so a total of about 1.65MB have been read from F. =item 4. The number of milliseconds spent reading. This counts only reads that have completed, not reads that are in progress. It counts the time spent from when requests are placed on the queue until they complete, not the time that the underlying disk spends servicing the requests. That is, it measures the total response time seen by applications, not disk response times. =item 5. Ditto for field 1, but for writes. =item 6. Ditto for field 2, but for writes. =item 7. Ditto for field 3, but for writes. =item 8. Ditto for field 4, but for writes. =item 9. The number of I/Os currently in progress, that is, they've been scheduled by the queue scheduler and issued to the disk (submitted to the underlying disk's queue), but not yet completed. There are bugs in some kernels that cause this number, and thus fields 10 and 11, to be wrong sometimes. =item 10. The total number of milliseconds spent doing I/Os. This is B the total response time seen by the applications; it is the total amount of time during which at least one I/O was in progress. If one I/O is issued at time 100, another comes in at 101, and both of them complete at 102, then this field increments by 2, not 3. =item 11. This field counts the total response time of all I/Os. In contrast to field 10, it counts double when two I/Os overlap. In our previous example, this field would increment by 3, not 2. =back =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --columns-regex type: string; default: . Print columns that match this Perl regex. 
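For example, to display only the read-related columns together with the busy
column, one might use a pattern along these lines (shown purely as an
illustration):

  pt-diskstats --columns-regex 'rd_|busy'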
=head1 OPTIONS

This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details.

=over

=item --columns-regex

type: string; default: .

Print columns that match this Perl regex.

=item --config

type: Array

Read this comma-separated list of config files; if specified, this must be the first option on the command line.

=item --devices-regex

type: string

Print devices that match this Perl regex.

=item --group-by

type: string; default: all

Group-by mode: disk, sample, or all. In B<disk> mode, each line of output shows one disk device, with the statistics computed since the tool started. In B<sample> mode, each line of output shows one sample of statistics, with all disks averaged together. In B<all> mode, each line of output shows one sample and one disk device.

=item --headers

type: Hash; default: group,scroll

If C<group> is present, each sample will be separated by a blank line, unless the sample is only one line. If C<scroll> is present, the tool will print the headers as often as needed to prevent them from scrolling out of view. Note that you can press the space bar, or the enter key, to reprint headers at will.

=item --help

Show help and exit.

=item --interval

type: int; default: 1

When in interactive mode, wait N seconds before printing to the screen. This is also how often the tool samples F</proc/diskstats>.

The tool attempts to gather statistics exactly on even intervals of clock time. That is, if you specify a 5-second interval, it will try to capture samples at 12:00:00, 12:00:05, and so on; it will not gather them at 12:00:01, 12:00:06, and so forth. This can lead to slightly odd delays in some circumstances, because the tool waits one full cycle before printing out the first set of lines. (Unlike iostat and vmstat, pt-diskstats does not start with a line representing the averages since the computer was booted.)

Therefore, the rule has an exception to avoid very long delays. Suppose you specify a 10-second interval, but you start the tool at 12:00:00.01. The tool might wait until 12:00:20 to print its first lines of output, and in the intervening 19.99 seconds it would appear to do nothing. To alleviate this, the tool waits until the next even interval of time to gather, unless more than 20% of that interval remains. This means the tool will never wait more than 120% of the sampling interval to produce output. For example, if you start the tool at 12:00:53 with a 10-second sampling interval, then the first sample will be only 7 seconds long, not 10 seconds. (A small sketch of the interval-alignment arithmetic appears after this option list.)

=item --iterations

type: int

When in interactive mode, stop after N samples. Run forever by default.

=item --sample-time

type: int; default: 1

In --group-by sample mode, include N seconds of samples per group.

=item --save-samples

type: string

File to save diskstats samples in; these can be used for later analysis.

=item --show-inactive

Show inactive devices.

=item --show-timestamps

Show a 'HH:MM:SS' timestamp in the C<#ts> column. If multiple timestamps are aggregated into one line, the first timestamp is shown.

=item --version

Show version and exit.

=item --[no]version-check

default: yes

Check for the latest version of Percona Toolkit, MySQL, and other programs.

This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a.

A secure connection to Percona's Version Check database server is made to perform these checks. Each request is logged by the server, including software version numbers and the unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is made for the first time.

Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, see the Percona Toolkit documentation on the version check feature.

=back
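The following short Perl sketch is illustrative only and is not how pt-diskstats itself is implemented; it shows the basic boundary-alignment arithmetic described under L<"--interval"> (mirroring the awk expression in the shell snippet under L<"COLLECTING DATA">), without the 20% exception:

   #!/usr/bin/env perl
   # Illustrative only: sleep so that the next sample lands on an even
   # multiple of the sampling interval, as described under --interval.
   use strict;
   use warnings;
   use Time::HiRes qw(time sleep);

   my $interval = shift || 10;      # sampling interval in seconds
   my $now      = time();
   # Seconds remaining until the next even multiple of $interval,
   # equivalent to: INTERVAL - (now % INTERVAL)
   my $to_next  = $interval - ($now - int($now / $interval) * $interval);

   printf "sleeping %.2fs to align on a %ds boundary\n", $to_next, $interval;
   sleep($to_next);
   # ... sample /proc/diskstats here, then repeat ...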
=head1 ENVIRONMENT

The environment variable C<PTDEBUG> enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like:

   PTDEBUG=1 pt-diskstats ... > FILE 2>&1

Be careful: debugging output is voluminous and can generate several megabytes of output.

=head1 SYSTEM REQUIREMENTS

This tool requires Perl v5.8.0 or newer and the F</proc> filesystem, unless reading from files.

=head1 BUGS

For a list of known bugs, see the tool's page in the Percona Toolkit documentation. Please report bugs through Percona's bug tracker. Include the following information in your bug report:

=over

=item * Complete command-line used to run the tool

=item * Tool L<"--version">

=item * MySQL version of all servers involved

=item * Output from the tool including STDERR

=item * Input files (log/dump/config files, etc.)

=back

If possible, include debugging output by running the tool with C<PTDEBUG>; see L<"ENVIRONMENT">.

=head1 DOWNLOADING

Visit the Percona Toolkit page on Percona's website to download the latest release of Percona Toolkit. Or, get the latest release from the command line:

   wget percona.com/get/percona-toolkit.tar.gz
   wget percona.com/get/percona-toolkit.rpm
   wget percona.com/get/percona-toolkit.deb

You can also get individual tools from the latest release:

   wget percona.com/get/TOOL

Replace C<TOOL> with the name of any tool.

=head1 AUTHORS

Baron Schwartz, Brian Fraser, and Daniel Nichter

=head1 ABOUT PERCONA TOOLKIT

This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit Percona's website to learn about other free, open-source software from Percona.

=head1 COPYRIGHT, LICENSE, AND WARRANTY

This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2010-2011 Baron Schwartz.

THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.

=head1 VERSION

pt-diskstats 3.1.0

=cut
percona-toolkit-3.1/bin/pt-duplicate-key-checker000775 001750 001750 00000516133 13535723560 023140 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file.
Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit Quoter TableParser DSNParser OptionParser KeySize DuplicateKeyFinder Daemon Schema SchemaIterator HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . 
'`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true return $val if $args{is_float}; $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . 
Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . 
$where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . $q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 
1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . $q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? ($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . 
"\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? ';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . 
" OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . ": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? 
$dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. 
%args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # KeySize package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/KeySize.pm # t/lib/KeySize.t # See https://launchpad.net/percona-toolkit for more information. 
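# A minimal usage sketch (illustrative; the table, index and the $dbh and
# $tbl_struct values are examples, not provided by this module):
#
#   my $ks = KeySize->new();
#   my ($size, $chosen_key) = $ks->get_key_size(
#      name       => 'idx_a',
#      cols       => [qw(a)],
#      tbl_name   => '`test`.`t`',
#      tbl_struct => $tbl_struct,   # parsed table structure (TableParser)
#      dbh        => $dbh,
#   );
#   warn $ks->error() if !defined $size;
#
# The size is estimated as key_len * rows from EXPLAIN on a query that
# forces the index, so it is an approximation, not an exact byte count.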
# ########################################################################### { package KeySize; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my $self = { %args }; return bless $self, $class; } sub get_key_size { my ( $self, %args ) = @_; foreach my $arg ( qw(name cols tbl_name tbl_struct dbh) ) { die "I need a $arg argument" unless $args{$arg}; } my $name = $args{name}; my @cols = @{$args{cols}}; my $dbh = $args{dbh}; $self->{explain} = ''; $self->{query} = ''; $self->{error} = ''; if ( @cols == 0 ) { $self->{error} = "No columns for key $name"; return; } my $key_exists = $self->_key_exists(%args); PTDEBUG && _d('Key', $name, 'exists in', $args{tbl_name}, ':', $key_exists ? 'yes': 'no'); my $sql = 'EXPLAIN SELECT ' . join(', ', @cols) . ' FROM ' . $args{tbl_name} . ($key_exists ? " FORCE INDEX (`$name`)" : '') . ' WHERE '; my @where_cols; foreach my $col ( @cols ) { push @where_cols, "$col=1"; } if ( scalar(@cols) == 1 && !$args{only_eq} ) { push @where_cols, "$cols[0]<>1"; } $sql .= join(' OR ', @where_cols); $self->{query} = $sql; PTDEBUG && _d('sql:', $sql); my $explain; my $sth = $dbh->prepare($sql); eval { $sth->execute(); }; if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; $self->{error} = "Cannot get size of $name key: $EVAL_ERROR"; return; } $explain = $sth->fetchrow_hashref(); $self->{explain} = $explain; my $key_len = $explain->{key_len}; my $rows = $explain->{rows}; my $chosen_key = $explain->{key}; # May differ from $name PTDEBUG && _d('MySQL chose key:', $chosen_key, 'len:', $key_len, 'rows:', $rows); if ( $chosen_key && $key_len eq '0' ) { if ( $args{recurse} ) { $self->{error} = "key_len = 0 in EXPLAIN:\n" . _explain_to_text($explain); return; } else { return $self->get_key_size( %args, only_eq => 1, recurse => 1, ); } } my $key_size = 0; if ( $key_len && $rows ) { if ( $chosen_key =~ m/,/ && $key_len =~ m/,/ ) { $self->{error} = "MySQL chose multiple keys: $chosen_key"; return; } $key_size = $key_len * $rows; } else { $self->{error} = "key_len or rows NULL in EXPLAIN:\n" . _explain_to_text($explain); return; } return $key_size, $chosen_key; } sub query { my ( $self ) = @_; return $self->{query}; } sub explain { my ( $self ) = @_; return _explain_to_text($self->{explain}); } sub error { my ( $self ) = @_; return $self->{error}; } sub _key_exists { my ( $self, %args ) = @_; return exists $args{tbl_struct}->{keys}->{ lc $args{name} } ? 1 : 0; } sub _explain_to_text { my ( $explain ) = @_; return join("\n", map { "$_: ".($explain->{$_} ? $explain->{$_} : 'NULL') } sort keys %$explain ); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End KeySize package # ########################################################################### # ########################################################################### # DuplicateKeyFinder package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DuplicateKeyFinder.pm # t/lib/DuplicateKeyFinder.t # See https://launchpad.net/percona-toolkit for more information. 
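# A minimal usage sketch (illustrative; $keys is the hashref of index
# definitions for one table, as produced by TableParser in the full tool):
#
#   my $dkf   = DuplicateKeyFinder->new();
#   my $dupes = $dkf->get_duplicate_keys(
#      $keys,
#      clustered_key => 'PRIMARY',                 # example
#      clustered     => 1,
#      tbl_info      => { engine => 'InnoDB' },    # example
#      callback      => sub { my ($dupe) = @_; print $dupe->{reason}, "\n"; },
#   );
#
# Each element of @$dupes describes one redundant index: key, cols, ddl,
# duplicate_of, duplicate_of_cols, reason and dupe_type (exact, prefix or
# clustered); get_duplicate_fks() reports duplicate foreign keys the same
# way with dupe_type fk.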
# ########################################################################### { package DuplicateKeyFinder; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my $self = {}; return bless $self, $class; } sub get_duplicate_keys { my ( $self, $keys, %args ) = @_; die "I need a keys argument" unless $keys; my %keys = %$keys; # Copy keys because we remove non-duplicates. my $primary_key; my @unique_keys; my @normal_keys; my @fulltext_keys; my @dupes; KEY: foreach my $keyname ( reverse sort keys %keys ) { my $key = $keys{$keyname}; $key->{real_cols} = [ @{$key->{cols}} ]; $key->{len_cols} = length $key->{colnames}; if ( $key->{name} eq 'PRIMARY' || ($args{clustered_key} && $key->{name} eq $args{clustered_key}) ) { $primary_key = $key; PTDEBUG && _d('primary key:', $key->{name}); next KEY; } my $is_fulltext = $key->{type} eq 'FULLTEXT' ? 1 : 0; if ( $args{ignore_order} || $is_fulltext ) { my $ordered_cols = join(',', sort(split(/,/, $key->{colnames}))); PTDEBUG && _d('Reordered', $key->{name}, 'cols from', $key->{colnames}, 'to', $ordered_cols); $key->{colnames} = $ordered_cols; } my $push_to = $key->{is_unique} ? \@unique_keys : \@normal_keys; if ( !$args{ignore_structure} ) { $push_to = \@fulltext_keys if $is_fulltext; } push @$push_to, $key; } push @normal_keys, $self->unconstrain_keys($primary_key, \@unique_keys); if ( $primary_key ) { PTDEBUG && _d('Comparing PRIMARY KEY to UNIQUE keys'); push @dupes, $self->remove_prefix_duplicates([$primary_key], \@unique_keys, %args); PTDEBUG && _d('Comparing PRIMARY KEY to normal keys'); push @dupes, $self->remove_prefix_duplicates([$primary_key], \@normal_keys, %args); } PTDEBUG && _d('Comparing UNIQUE keys to normal keys'); push @dupes, $self->remove_prefix_duplicates(\@unique_keys, \@normal_keys, %args); PTDEBUG && _d('Comparing normal keys'); push @dupes, $self->remove_prefix_duplicates(\@normal_keys, \@normal_keys, %args); PTDEBUG && _d('Comparing FULLTEXT keys'); push @dupes, $self->remove_prefix_duplicates(\@fulltext_keys, \@fulltext_keys, %args, exact_duplicates => 1); my $clustered_key = $args{clustered_key} ? $keys{$args{clustered_key}} : undef; PTDEBUG && _d('clustered key:', $clustered_key ? 
($clustered_key->{name}, $clustered_key->{colnames}) : 'none'); if ( $clustered_key && $args{clustered} && $args{tbl_info}->{engine} && $args{tbl_info}->{engine} =~ m/InnoDB/i ) { PTDEBUG && _d('Removing UNIQUE dupes of clustered key'); push @dupes, $self->remove_clustered_duplicates($clustered_key, \@unique_keys, %args); PTDEBUG && _d('Removing ordinary dupes of clustered key'); push @dupes, $self->remove_clustered_duplicates($clustered_key, \@normal_keys, %args); } return \@dupes; } sub get_duplicate_fks { my ( $self, $fks, %args ) = @_; die "I need a fks argument" unless $fks; my @fks = (); foreach my $key ( sort keys %$fks ) { push @fks, $fks->{$key}; } my @dupes; foreach my $i ( 0..$#fks - 1 ) { next unless $fks[$i]; foreach my $j ( $i+1..$#fks ) { next unless $fks[$j]; my $i_cols = join(',', sort @{$fks[$i]->{cols}} ); my $j_cols = join(',', sort @{$fks[$j]->{cols}} ); my $i_pcols = join(',', sort @{$fks[$i]->{parent_cols}} ); my $j_pcols = join(',', sort @{$fks[$j]->{parent_cols}} ); if ( $fks[$i]->{parent_tblname} eq $fks[$j]->{parent_tblname} && $i_cols eq $j_cols && $i_pcols eq $j_pcols ) { my $dupe = { key => $fks[$j]->{name}, cols => [ @{$fks[$j]->{cols}} ], ddl => $fks[$j]->{ddl}, duplicate_of => $fks[$i]->{name}, duplicate_of_cols => [ @{$fks[$i]->{cols}} ], duplicate_of_ddl => $fks[$i]->{ddl}, reason => "FOREIGN KEY $fks[$j]->{name} ($fks[$j]->{colnames}) " . "REFERENCES $fks[$j]->{parent_tblname} " . "($fks[$j]->{parent_colnames}) " . 'is a duplicate of ' . "FOREIGN KEY $fks[$i]->{name} ($fks[$i]->{colnames}) " . "REFERENCES $fks[$i]->{parent_tblname} " ."($fks[$i]->{parent_colnames})", dupe_type => 'fk', }; push @dupes, $dupe; delete $fks[$j]; $args{callback}->($dupe, %args) if $args{callback}; } } } return \@dupes; } sub remove_prefix_duplicates { my ( $self, $left_keys, $right_keys, %args ) = @_; my @dupes; my $right_offset; my $last_left_key; my $last_right_key = scalar(@$right_keys) - 1; if ( $right_keys != $left_keys ) { @$left_keys = sort { lc($a->{colnames}) cmp lc($b->{colnames}) } grep { defined $_; } @$left_keys; @$right_keys = sort { lc($a->{colnames}) cmp lc($b->{colnames}) } grep { defined $_; } @$right_keys; $last_left_key = scalar(@$left_keys) - 1; $right_offset = 0; } else { @$left_keys = reverse sort { lc($a->{colnames}) cmp lc($b->{colnames}) } grep { defined $_; } @$left_keys; $last_left_key = scalar(@$left_keys) - 2; $right_offset = 1; } LEFT_KEY: foreach my $left_index ( 0..$last_left_key ) { next LEFT_KEY unless defined $left_keys->[$left_index]; RIGHT_KEY: foreach my $right_index ( $left_index+$right_offset..$last_right_key ) { next RIGHT_KEY unless defined $right_keys->[$right_index]; my $left_name = $left_keys->[$left_index]->{name}; my $left_cols = $left_keys->[$left_index]->{colnames}; my $left_len_cols = $left_keys->[$left_index]->{len_cols}; my $right_name = $right_keys->[$right_index]->{name}; my $right_cols = $right_keys->[$right_index]->{colnames}; my $right_len_cols = $right_keys->[$right_index]->{len_cols}; PTDEBUG && _d('Comparing left', $left_name, '(',$left_cols,')', 'to right', $right_name, '(',$right_cols,')'); if ( substr($left_cols, 0, $right_len_cols) eq substr($right_cols, 0, $right_len_cols) ) { if ( $args{exact_duplicates} && ($right_len_cols<$left_len_cols) ) { PTDEBUG && _d($right_name, 'not exact duplicate of', $left_name); next RIGHT_KEY; } if ( exists $right_keys->[$right_index]->{unique_col} ) { PTDEBUG && _d('Cannot remove', $right_name, 'because is constrains col', $right_keys->[$right_index]->{cols}->[0]); next 
RIGHT_KEY; } PTDEBUG && _d('Remove', $right_name); my $reason; if ( my $type = $right_keys->[$right_index]->{unconstrained} ) { $reason .= "Uniqueness of $right_name ignored because " . $right_keys->[$right_index]->{constraining_key}->{name} . " is a $type constraint\n"; } my $exact_dupe = $right_len_cols < $left_len_cols ? 0 : 1; $reason .= $right_name . ($exact_dupe ? ' is a duplicate of ' : ' is a left-prefix of ') . $left_name; my $dupe = { key => $right_name, cols => $right_keys->[$right_index]->{real_cols}, ddl => $right_keys->[$right_index]->{ddl}, duplicate_of => $left_name, duplicate_of_cols => $left_keys->[$left_index]->{real_cols}, duplicate_of_ddl => $left_keys->[$left_index]->{ddl}, reason => $reason, dupe_type => $exact_dupe ? 'exact' : 'prefix', }; push @dupes, $dupe; delete $right_keys->[$right_index]; $args{callback}->($dupe, %args) if $args{callback}; } else { PTDEBUG && _d($right_name, 'not left-prefix of', $left_name); next RIGHT_KEY; } } # RIGHT_KEY } # LEFT_KEY PTDEBUG && _d('No more keys'); @$left_keys = grep { defined $_; } @$left_keys; @$right_keys = grep { defined $_; } @$right_keys; return @dupes; } sub remove_clustered_duplicates { my ( $self, $ck, $keys, %args ) = @_; die "I need a ck argument" unless $ck; die "I need a keys argument" unless $keys; my $ck_cols = $ck->{colnames}; my @dupes; KEY: for my $i ( 0 .. @$keys - 1 ) { my $key = $keys->[$i]->{colnames}; if ( $key =~ m/$ck_cols$/ ) { PTDEBUG && _d("clustered key dupe:", $keys->[$i]->{name}, $keys->[$i]->{colnames}); my $dupe = { key => $keys->[$i]->{name}, cols => $keys->[$i]->{real_cols}, ddl => $keys->[$i]->{ddl}, duplicate_of => $ck->{name}, duplicate_of_cols => $ck->{real_cols}, duplicate_of_ddl => $ck->{ddl}, reason => "Key $keys->[$i]->{name} ends with a " . 
"prefix of the clustered index", dupe_type => 'clustered', short_key => $self->shorten_clustered_duplicate( $ck_cols, join(',', map { "`$_`" } @{$keys->[$i]->{real_cols}}) ), }; push @dupes, $dupe; delete $keys->[$i]; $args{callback}->($dupe, %args) if $args{callback}; } } PTDEBUG && _d('No more keys'); @$keys = grep { defined $_; } @$keys; return @dupes; } sub shorten_clustered_duplicate { my ( $self, $ck_cols, $dupe_key_cols ) = @_; return $ck_cols if $ck_cols eq $dupe_key_cols; $dupe_key_cols =~ s/$ck_cols$//; $dupe_key_cols =~ s/,+$//; return $dupe_key_cols; } sub unconstrain_keys { my ( $self, $primary_key, $unique_keys ) = @_; die "I need a unique_keys argument" unless $unique_keys; my %unique_cols; my @unique_sets; my %unconstrain; my @unconstrained_keys; PTDEBUG && _d('Unconstraining redundantly unique keys'); UNIQUE_KEY: foreach my $unique_key ( $primary_key, @$unique_keys ) { next unless $unique_key; # primary key may be undefined my $cols = $unique_key->{cols}; if ( @$cols == 1 ) { if ( !exists $unique_cols{$cols->[0]} ) { PTDEBUG && _d($unique_key->{name}, 'defines unique column:', $cols->[0]); $unique_cols{$cols->[0]} = $unique_key; $unique_key->{unique_col} = 1; } else { PTDEBUG && _d($unique_key->{name}, 'redundantly constrains unique column:', $cols->[0]); $unique_key->{exact_dupe} = 1; $unique_key->{constraining_key} = $unique_cols{$cols->[0]}; } } else { local $LIST_SEPARATOR = '-'; PTDEBUG && _d($unique_key->{name}, 'defines unique set:', @$cols); push @unique_sets, { cols => $cols, key => $unique_key }; } } UNIQUE_SET: foreach my $unique_set ( @unique_sets ) { my $n_unique_cols = 0; COL: foreach my $col ( @{$unique_set->{cols}} ) { if ( exists $unique_cols{$col} ) { PTDEBUG && _d('Unique set', $unique_set->{key}->{name}, 'has unique col', $col); last COL if ++$n_unique_cols > 1; $unique_set->{constraining_key} = $unique_cols{$col}; } } if ( $n_unique_cols && $unique_set->{key}->{name} ne 'PRIMARY' ) { PTDEBUG && _d('Will unconstrain unique set', $unique_set->{key}->{name}, 'because it is redundantly constrained by key', $unique_set->{constraining_key}->{name}, '(',$unique_set->{constraining_key}->{colnames},')'); $unconstrain{$unique_set->{key}->{name}} = $unique_set->{constraining_key}; } } for my $i ( 0..(scalar @$unique_keys-1) ) { if ( exists $unconstrain{$unique_keys->[$i]->{name}} ) { PTDEBUG && _d('Unconstraining weak', $unique_keys->[$i]->{name}); $unique_keys->[$i]->{unconstrained} = 'stronger'; $unique_keys->[$i]->{constraining_key} = $unconstrain{$unique_keys->[$i]->{name}}; push @unconstrained_keys, $unique_keys->[$i]; delete $unique_keys->[$i]; } elsif ( $unique_keys->[$i]->{exact_dupe} ) { PTDEBUG && _d('Unconstraining dupe', $unique_keys->[$i]->{name}); $unique_keys->[$i]->{unconstrained} = 'duplicate'; push @unconstrained_keys, $unique_keys->[$i]; delete $unique_keys->[$i]; } } PTDEBUG && _d('No more keys'); return @unconstrained_keys; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DuplicateKeyFinder package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); use Fcntl qw(:DEFAULT); sub new { my ($class, %args) = @_; my $self = { log_file => $args{log_file}, pid_file => $args{pid_file}, daemonize => $args{daemonize}, force_log_file => $args{force_log_file}, parent_exit => $args{parent_exit}, pid_file_owner => 0, }; return bless $self, $class; } sub run { my ($self) = @_; my $daemonize = $self->{daemonize}; my $pid_file = $self->{pid_file}; my $log_file = $self->{log_file}; my $force_log_file = $self->{force_log_file}; my $parent_exit = $self->{parent_exit}; PTDEBUG && _d('Starting daemon'); if ( $pid_file ) { eval { $self->_make_pid_file( pid => $PID, # parent's pid pid_file => $pid_file, ); }; die "$EVAL_ERROR\n" if $EVAL_ERROR; if ( !$daemonize ) { $self->{pid_file_owner} = $PID; # parent's pid } } if ( $daemonize ) { defined (my $child_pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $child_pid ) { PTDEBUG && _d('Forked child', $child_pid); $parent_exit->($child_pid) if $parent_exit; exit 0; } POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; if ( $pid_file ) { $self->_update_pid_file( pid => $PID, # child's pid pid_file => $pid_file, ); $self->{pid_file_owner} = $PID; } } if ( $daemonize || $force_log_file ) { PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $log_file ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $log_file); close STDOUT; open STDOUT, '>>', $log_file or die "Cannot open log file $log_file: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } $OUTPUT_AUTOFLUSH = 1; } PTDEBUG && _d('Daemon running'); return; } sub _make_pid_file { my ($self, %args) = @_; my @required_args = qw(pid pid_file); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid = $args{pid}; my $pid_file = $args{pid_file}; eval { sysopen(PID_FH, $pid_file, O_RDWR|O_CREAT|O_EXCL) or die $OS_ERROR; print PID_FH $PID, "\n"; close PID_FH; }; if ( my $e = $EVAL_ERROR ) { if ( $e =~ m/file exists/i ) { my $old_pid = $self->_check_pid_file( pid_file => $pid_file, pid => $PID, ); if ( $old_pid ) { warn "Overwriting PID file $pid_file because PID $old_pid " . 
"is not running.\n"; } $self->_update_pid_file( pid => $PID, pid_file => $pid_file ); } else { die "Error creating PID file $pid_file: $e\n"; } } return; } sub _check_pid_file { my ($self, %args) = @_; my @required_args = qw(pid_file pid); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid_file = $args{pid_file}; my $pid = $args{pid}; PTDEBUG && _d('Checking if PID in', $pid_file, 'is running'); if ( ! -f $pid_file ) { PTDEBUG && _d('PID file', $pid_file, 'does not exist'); return; } open my $fh, '<', $pid_file or die "Error opening $pid_file: $OS_ERROR"; my $existing_pid = do { local $/; <$fh> }; chomp($existing_pid) if $existing_pid; close $fh or die "Error closing $pid_file: $OS_ERROR"; if ( $existing_pid ) { if ( $existing_pid == $pid ) { warn "The current PID $pid already holds the PID file $pid_file\n"; return; } else { PTDEBUG && _d('Checking if PID', $existing_pid, 'is running'); my $pid_is_alive = kill 0, $existing_pid; if ( $pid_is_alive ) { die "PID file $pid_file exists and PID $existing_pid is running\n"; } } } else { die "PID file $pid_file exists but it is empty. Remove the file " . "if the process is no longer running.\n"; } return $existing_pid; } sub _update_pid_file { my ($self, %args) = @_; my @required_args = qw(pid pid_file); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid = $args{pid}; my $pid_file = $args{pid_file}; open my $fh, '>', $pid_file or die "Cannot open $pid_file: $OS_ERROR"; print { $fh } $pid, "\n" or die "Cannot print to $pid_file: $OS_ERROR"; close $fh or warn "Cannot close $pid_file: $OS_ERROR"; return; } sub remove_pid_file { my ($self, $pid_file) = @_; $pid_file ||= $self->{pid_file}; if ( $pid_file && -f $pid_file ) { unlink $self->{pid_file} or warn "Cannot remove PID file $pid_file: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ($self) = @_; if ( $self->{pid_file_owner} == $PID ) { $self->remove_pid_file(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # Schema package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Schema.pm # t/lib/Schema.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Schema; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, schema => {}, # keyed on db->tbl }; return bless $self, $class; } sub get_schema { my ( $self ) = @_; return $self->{schema}; } sub get_table { my ( $self, $db_name, $tbl_name ) = @_; if ( exists $self->{schema}->{$db_name} && exists $self->{schema}->{$db_name}->{$tbl_name} ) { return $self->{schema}->{$db_name}->{$tbl_name}; } return; } sub add_schema_object { my ( $self, $schema_object ) = @_; die "I need a schema_object argument" unless $schema_object; my ($db, $tbl) = @{$schema_object}{qw(db tbl)}; if ( !$db || !$tbl ) { warn "No database or table for schema object"; return; } my $tbl_struct = $schema_object->{tbl_struct}; if ( !$tbl_struct ) { warn "No table structure for $db.$tbl"; return; } $self->{schema}->{lc $db}->{lc $tbl} = $schema_object; return; } sub find_column { my ( $self, %args ) = @_; my $ignore = $args{ignore}; my $schema = $self->{schema}; my ($col, $tbl, $db); if ( my $col_name = $args{col_name} ) { ($col, $tbl, $db) = reverse map { s/`//g; $_ } split /[.]/, $col_name; PTDEBUG && _d('Column', $col_name, 'has db', $db, 'tbl', $tbl, 'col', $col); } else { ($col, $tbl, $db) = @args{qw(col tbl db)}; } $db = lc($db || ''); $tbl = lc($tbl || ''); $col = lc($col || ''); if ( !$col ) { PTDEBUG && _d('No column specified or parsed'); return; } PTDEBUG && _d('Finding column', $col, 'in', $db, $tbl); if ( $db && !$schema->{$db} ) { PTDEBUG && _d('Database', $db, 'does not exist'); return; } if ( $db && $tbl && !$schema->{$db}->{$tbl} ) { PTDEBUG && _d('Table', $tbl, 'does not exist in database', $db); return; } my @tbls; my @search_dbs = $db ? ($db) : keys %$schema; DATABASE: foreach my $search_db ( @search_dbs ) { my @search_tbls = $tbl ? ($tbl) : keys %{$schema->{$search_db}}; TABLE: foreach my $search_tbl ( @search_tbls ) { next DATABASE unless exists $schema->{$search_db}->{$search_tbl}; if ( $ignore && grep { $_->{db} eq $search_db && $_->{tbl} eq $search_tbl } @$ignore ) { PTDEBUG && _d('Ignoring', $search_db, $search_tbl, $col); next TABLE; } my $tbl = $schema->{$search_db}->{$search_tbl}; if ( $tbl->{tbl_struct}->{is_col}->{$col} ) { PTDEBUG && _d('Column', $col, 'exists in', $tbl->{db}, $tbl->{tbl}); push @tbls, $tbl; } } } return \@tbls; } sub find_table { my ( $self, %args ) = @_; my $ignore = $args{ignore}; my $schema = $self->{schema}; my ($tbl, $db); if ( my $tbl_name = $args{tbl_name} ) { ($tbl, $db) = reverse map { s/`//g; $_ } split /[.]/, $tbl_name; PTDEBUG && _d('Table', $tbl_name, 'has db', $db, 'tbl', $tbl); } else { ($tbl, $db) = @args{qw(tbl db)}; } $db = lc($db || ''); $tbl = lc($tbl || ''); if ( !$tbl ) { PTDEBUG && _d('No table specified or parsed'); return; } PTDEBUG && _d('Finding table', $tbl, 'in', $db); if ( $db && !$schema->{$db} ) { PTDEBUG && _d('Database', $db, 'does not exist'); return; } if ( $db && $tbl && !$schema->{$db}->{$tbl} ) { PTDEBUG && _d('Table', $tbl, 'does not exist in database', $db); return; } my @dbs; my @search_dbs = $db ? 
($db) : keys %$schema; DATABASE: foreach my $search_db ( @search_dbs ) { if ( $ignore && grep { $_->{db} eq $search_db } @$ignore ) { PTDEBUG && _d('Ignoring', $search_db); next DATABASE; } if ( exists $schema->{$search_db}->{$tbl} ) { PTDEBUG && _d('Table', $tbl, 'exists in', $search_db); push @dbs, $search_db; } } return \@dbs; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Schema package # ########################################################################### # ########################################################################### # SchemaIterator package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/SchemaIterator.pm # t/lib/SchemaIterator.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package SchemaIterator; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; my $open_comment = qr{/\*!\d{5} }; my $tbl_name = qr{ CREATE\s+ (?:TEMPORARY\s+)? TABLE\s+ (?:IF NOT EXISTS\s+)? ([^\(]+) }x; sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser TableParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($file_itr, $dbh) = @args{qw(file_itr dbh)}; die "I need either a dbh or file_itr argument" if (!$dbh && !$file_itr) || ($dbh && $file_itr); my %resume; if ( my $table = $args{resume} ) { PTDEBUG && _d('Will resume from or after', $table); my ($db, $tbl) = $args{Quoter}->split_unquote($table); die "Resume table must be database-qualified: $table" unless $db && $tbl; $resume{db} = $db; $resume{tbl} = $tbl; } my $self = { %args, resume => \%resume, filters => _make_filters(%args), }; return bless $self, $class; } sub _make_filters { my ( %args ) = @_; my @required_args = qw(OptionParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($o, $q) = @args{@required_args}; my %filters; my @simple_filters = qw( databases tables engines ignore-databases ignore-tables ignore-engines); FILTER: foreach my $filter ( @simple_filters ) { if ( $o->has($filter) ) { my $objs = $o->get($filter); next FILTER unless $objs && scalar keys %$objs; my $is_table = $filter =~ m/table/ ? 
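# Table filters (--tables, --ignore-tables) may be given bare ("film") or
# database-qualified ("sakila.film"); bare names are stored under the '*'
# pseudo-database so they match in any database. Database and engine
# filters are plain name lookups, and the *-regex variants are compiled
# with qr// below.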
1 : 0; foreach my $obj ( keys %$objs ) { die "Undefined value for --$filter" unless $obj; $obj = lc $obj; if ( $is_table ) { my ($db, $tbl) = $q->split_unquote($obj); $db ||= '*'; PTDEBUG && _d('Filter', $filter, 'value:', $db, $tbl); $filters{$filter}->{$db}->{$tbl} = 1; } else { # database PTDEBUG && _d('Filter', $filter, 'value:', $obj); $filters{$filter}->{$obj} = 1; } } } } my @regex_filters = qw( databases-regex tables-regex ignore-databases-regex ignore-tables-regex); REGEX_FILTER: foreach my $filter ( @regex_filters ) { if ( $o->has($filter) ) { my $pat = $o->get($filter); next REGEX_FILTER unless $pat; $filters{$filter} = qr/$pat/; PTDEBUG && _d('Filter', $filter, 'value:', $filters{$filter}); } } PTDEBUG && _d('Schema object filters:', Dumper(\%filters)); return \%filters; } sub next { my ( $self ) = @_; if ( !$self->{initialized} ) { $self->{initialized} = 1; if ( $self->{resume}->{tbl} ) { if ( !$self->table_is_allowed(@{$self->{resume}}{qw(db tbl)}) ) { PTDEBUG && _d('Will resume after', join('.', @{$self->{resume}}{qw(db tbl)})); $self->{resume}->{after}->{tbl} = 1; } if ( !$self->database_is_allowed($self->{resume}->{db}) ) { PTDEBUG && _d('Will resume after', $self->{resume}->{db}); $self->{resume}->{after}->{db} = 1; } } } my $schema_obj; if ( $self->{file_itr} ) { $schema_obj= $self->_iterate_files(); } else { # dbh $schema_obj= $self->_iterate_dbh(); } if ( $schema_obj ) { if ( my $schema = $self->{Schema} ) { $schema->add_schema_object($schema_obj); } PTDEBUG && _d('Next schema object:', $schema_obj->{db}, $schema_obj->{tbl}); } return $schema_obj; } sub _iterate_files { my ( $self ) = @_; if ( !$self->{fh} ) { my ($fh, $file) = $self->{file_itr}->(); if ( !$fh ) { PTDEBUG && _d('No more files to iterate'); return; } $self->{fh} = $fh; $self->{file} = $file; } my $fh = $self->{fh}; PTDEBUG && _d('Getting next schema object from', $self->{file}); local $INPUT_RECORD_SEPARATOR = ''; CHUNK: while (defined(my $chunk = <$fh>)) { if ($chunk =~ m/Database: (\S+)/) { my $db = $1; # XXX $db =~ s/^`//; # strip leading ` $db =~ s/`$//; # and trailing ` if ( $self->database_is_allowed($db) && $self->_resume_from_database($db) ) { $self->{db} = $db; } } elsif ($self->{db} && $chunk =~ m/CREATE TABLE/) { if ($chunk =~ m/DROP VIEW IF EXISTS/) { PTDEBUG && _d('Table is a VIEW, skipping'); next CHUNK; } my ($tbl) = $chunk =~ m/$tbl_name/; $tbl =~ s/^\s*`//; $tbl =~ s/`\s*$//; if ( $self->_resume_from_table($tbl) && $self->table_is_allowed($self->{db}, $tbl) ) { my ($ddl) = $chunk =~ m/^(?:$open_comment)?(CREATE TABLE.+?;)$/ms; if ( !$ddl ) { warn "Failed to parse CREATE TABLE from\n" . 
$chunk; next CHUNK; } $ddl =~ s/ \*\/;\Z/;/; # remove end of version comment my $tbl_struct = $self->{TableParser}->parse($ddl); if ( $self->engine_is_allowed($tbl_struct->{engine}) ) { return { db => $self->{db}, tbl => $tbl, name => $self->{Quoter}->quote($self->{db}, $tbl), ddl => $ddl, tbl_struct => $tbl_struct, }; } } } } # CHUNK PTDEBUG && _d('No more schema objects in', $self->{file}); close $self->{fh}; $self->{fh} = undef; return $self->_iterate_files(); } sub _iterate_dbh { my ( $self ) = @_; my $q = $self->{Quoter}; my $tp = $self->{TableParser}; my $dbh = $self->{dbh}; PTDEBUG && _d('Getting next schema object from dbh', $dbh); if ( !defined $self->{dbs} ) { my $sql = 'SHOW DATABASES'; PTDEBUG && _d($sql); my @dbs = grep { $self->_resume_from_database($_) && $self->database_is_allowed($_) } @{$dbh->selectcol_arrayref($sql)}; PTDEBUG && _d('Found', scalar @dbs, 'databases'); $self->{dbs} = \@dbs; } DATABASE: while ( $self->{db} || defined(my $db = shift @{$self->{dbs}}) ) { if ( !$self->{db} ) { PTDEBUG && _d('Next database:', $db); $self->{db} = $db; } if ( !$self->{tbls} ) { my $sql = 'SHOW /*!50002 FULL*/ TABLES FROM ' . $q->quote($self->{db}); PTDEBUG && _d($sql); my @tbls = map { $_->[0]; # (tbl, type) } grep { my ($tbl, $type) = @$_; (!$type || ($type ne 'VIEW')) && $self->_resume_from_table($tbl) && $self->table_is_allowed($self->{db}, $tbl); } eval { @{$dbh->selectall_arrayref($sql)}; }; if ($EVAL_ERROR) { warn "Skipping $self->{db}..."; $self->{db} = undef; next; } PTDEBUG && _d('Found', scalar @tbls, 'tables in database',$self->{db}); $self->{tbls} = \@tbls; } TABLE: while ( my $tbl = shift @{$self->{tbls}} ) { my $ddl = eval { $tp->get_create_table($dbh, $self->{db}, $tbl) }; if ( my $e = $EVAL_ERROR ) { my $table_name = "$self->{db}.$tbl"; if ( $e =~ /\QTable '$table_name' doesn't exist/ ) { PTDEBUG && _d("$table_name no longer exists"); } else { warn "Skipping $table_name because SHOW CREATE TABLE failed: $e"; } next TABLE; } my $tbl_struct = $tp->parse($ddl); if ( $self->engine_is_allowed($tbl_struct->{engine}) ) { return { db => $self->{db}, tbl => $tbl, name => $q->quote($self->{db}, $tbl), ddl => $ddl, tbl_struct => $tbl_struct, }; } } PTDEBUG && _d('No more tables in database', $self->{db}); $self->{db} = undef; $self->{tbls} = undef; } # DATABASE PTDEBUG && _d('No more databases'); return; } sub database_is_allowed { my ( $self, $db ) = @_; die "I need a db argument" unless $db; $db = lc $db; my $filter = $self->{filters}; if ( $db =~ m/^(information_schema|performance_schema|lost\+found|percona_schema)$/ ) { PTDEBUG && _d('Database', $db, 'is a system database, ignoring'); return 0; } if ( $self->{filters}->{'ignore-databases'}->{$db} ) { PTDEBUG && _d('Database', $db, 'is in --ignore-databases list'); return 0; } if ( $filter->{'ignore-databases-regex'} && $db =~ $filter->{'ignore-databases-regex'} ) { PTDEBUG && _d('Database', $db, 'matches --ignore-databases-regex'); return 0; } if ( $filter->{'databases'} && !$filter->{'databases'}->{$db} ) { PTDEBUG && _d('Database', $db, 'is not in --databases list, ignoring'); return 0; } if ( $filter->{'databases-regex'} && $db !~ $filter->{'databases-regex'} ) { PTDEBUG && _d('Database', $db, 'does not match --databases-regex, ignoring'); return 0; } return 1; } sub table_is_allowed { my ( $self, $db, $tbl ) = @_; die "I need a db argument" unless $db; die "I need a tbl argument" unless $tbl; $db = lc $db; $tbl = lc $tbl; my $filter = $self->{filters}; return 0 if $db eq 'mysql' && $tbl =~ m/^(?: general_log 
|gtid_executed |innodb_index_stats |innodb_table_stats |slave_master_info |slave_relay_log_info |slave_worker_info |slow_log )$/x; if ( $filter->{'ignore-tables'}->{'*'}->{$tbl} || $filter->{'ignore-tables'}->{$db}->{$tbl}) { PTDEBUG && _d('Table', $tbl, 'is in --ignore-tables list'); return 0; } if ( $filter->{'ignore-tables-regex'} && $tbl =~ $filter->{'ignore-tables-regex'} ) { PTDEBUG && _d('Table', $tbl, 'matches --ignore-tables-regex'); return 0; } if ( $filter->{'tables'} && (!$filter->{'tables'}->{'*'}->{$tbl} && !$filter->{'tables'}->{$db}->{$tbl}) ) { PTDEBUG && _d('Table', $tbl, 'is not in --tables list, ignoring'); return 0; } if ( $filter->{'tables-regex'} && $tbl !~ $filter->{'tables-regex'} ) { PTDEBUG && _d('Table', $tbl, 'does not match --tables-regex, ignoring'); return 0; } if ( $filter->{'tables'} && $filter->{'tables'}->{$tbl} && $filter->{'tables'}->{$tbl} ne '*' && $filter->{'tables'}->{$tbl} ne $db ) { PTDEBUG && _d('Table', $tbl, 'is only allowed in database', $filter->{'tables'}->{$tbl}); return 0; } return 1; } sub engine_is_allowed { my ( $self, $engine ) = @_; if ( !$engine ) { PTDEBUG && _d('No engine specified; allowing the table'); return 1; } $engine = lc $engine; my $filter = $self->{filters}; if ( $filter->{'ignore-engines'}->{$engine} ) { PTDEBUG && _d('Engine', $engine, 'is in --ignore-databases list'); return 0; } if ( $filter->{'engines'} && !$filter->{'engines'}->{$engine} ) { PTDEBUG && _d('Engine', $engine, 'is not in --engines list, ignoring'); return 0; } return 1; } sub _resume_from_database { my ($self, $db) = @_; return 1 unless $self->{resume}->{db}; if ( $db eq $self->{resume}->{db} ) { if ( !$self->{resume}->{after}->{db} ) { PTDEBUG && _d('Resuming from db', $db); delete $self->{resume}->{db}; return 1; } else { PTDEBUG && _d('Resuming after db', $db); delete $self->{resume}->{db}; delete $self->{resume}->{tbl}; } } return 0; } sub _resume_from_table { my ($self, $tbl) = @_; return 1 unless $self->{resume}->{tbl}; if ( $tbl eq $self->{resume}->{tbl} ) { if ( !$self->{resume}->{after}->{tbl} ) { PTDEBUG && _d('Resuming from table', $tbl); delete $self->{resume}->{tbl}; return 1; } else { PTDEBUG && _d('Resuming after table', $tbl); delete $self->{resume}->{tbl}; } } return 0; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End SchemaIterator package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . 
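# A minimal usage sketch (illustrative URL):
#
#   my $ua  = HTTP::Micro->new( timeout => 5 );
#   my $res = $ua->request('GET', 'https://v.percona.com/');
#   if ( $res->{success} ) { print $res->{content}; }
#   else                   { warn "HTTP $res->{status} $res->{reason}\n"; }
#
# The response is a plain hashref with success, status, reason, headers and
# content keys; status 599 indicates an internal exception (for example a
# connection failure) rather than a real server reply.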
($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? $host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? 
$1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for $host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! 
!= EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! 
= 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! 
%$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . 
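# The version-check state file stores one "instance-id,timestamp" line per
# checked instance. It is kept in the first writable directory among
# /etc/percona, /etc/percona-toolkit, /tmp and $HOME (falling back to the
# current directory), and an instance is re-checked only after the 24-hour
# limit returned by version_check_time_limit().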
$file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ############################################################################# # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ############################################################################# package pt_duplicate_key_checker; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use List::Util qw(max); use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; $OUTPUT_AUTOFLUSH = 1; my $max_width = 74; my $hdr_width = $max_width - 2; # for '# ' my $hdr_fmt = "# %-${hdr_width}s\n"; sub main { local @ARGV = @_; # set global ARGV for this package my %summary = ( 'items' => {'Total Indexes' => 0} ); my %seen_tbl; my $q = new Quoter(); my $tp = new TableParser(Quoter => $q); # ####################################################################### # Get configuration information and parse command line options. # ####################################################################### my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); $o->usage_or_errors(); # ######################################################################## # If --pid, check it first since we'll die if it already exits. 
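# The Daemon object created below is not daemonized (daemonize => 0); it is
# used only for PID-file handling.  For example, a second concurrent run
# such as
#   pt-duplicate-key-checker --pid /tmp/pt-dkc.pid h=localhost
# (hypothetical path) should refuse to start while the first run's PID file
# still names a live process.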
# ######################################################################## my $daemon = Daemon->new( daemonize => 0, pid_file => $o->get('pid'), ); $daemon->run(); # ####################################################################### # Connect to MySQL. # ####################################################################### if ( $o->got('ask-pass') ) { $o->set('password', OptionParser::prompt_noecho("Enter password: ")); } my $dsn_defaults = $dp->parse_options($o); my $dsn = @ARGV ? $dp->parse(shift @ARGV, $dsn_defaults) : $dsn_defaults; my $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), { AutoCommit => 1, }); # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ {dbh => $dbh, dsn => $dsn} ], ); } # ####################################################################### # Do the main work. # ####################################################################### my $ks = $o->get('summary') ? new KeySize(q=>$q) : undef; my $dk = new DuplicateKeyFinder(); my %tp_opts = ( ignore_type => $o->get('all-structs'), ignore_order => $o->get('ignore-order'), clustered => $o->get('clustered'), ); my $get_keys = $o->get('key-types') =~ m/k/ ? 1 : 0; my $get_fks = $o->get('key-types') =~ m/f/ ? 1 : 0; my $schema = new Schema(); my $schema_itr = new SchemaIterator( dbh => $dbh, OptionParser => $o, Quoter => $q, TableParser => $tp, Schema => $schema, ); TABLE: while ( my $tbl = $schema_itr->next() ) { eval { $tbl->{engine} = $tbl->{tbl_struct}->{engine}; my ($keys, $clustered_key, $fks); if ( $get_keys ) { ($keys, $clustered_key) = $tp->get_keys($tbl->{ddl}, {}); } if ( $get_fks ) { $fks = $tp->get_fks($tbl->{ddl}, {database => $tbl->{db}}); } if ( ($keys && %$keys) || ($fks && %$fks) ) { if ( $o->got('verbose') ) { print_all_keys($keys, $tbl, \%seen_tbl) if $keys; print_all_keys($fks, $tbl, \%seen_tbl) if $fks; } PTDEBUG && _d('Getting duplicate keys on', $tbl->{db}, $tbl->{tbl}); if ( $keys ) { $dk->get_duplicate_keys( $keys, clustered_key => $clustered_key, tbl_info => $tbl, callback => \&print_duplicate_key, %tp_opts, # get_duplicate_keys() ignores these args but passes them # to the callback: dbh => $dbh, is_fk => 0, o => $o, ks => $ks, tp => $tp, q => $q, seen_tbl => \%seen_tbl, summary => \%summary, ); } if ( $fks ) { $dk->get_duplicate_fks( $fks, tbl_info => $tbl, callback => \&print_duplicate_key, %tp_opts, # get_duplicate_fks() ignores these args but passes them # to the callback: dbh => $dbh, is_fk => 1, o => $o, ks => $ks, tp => $tp, q => $q, seen_tbl => \%seen_tbl, summary => \%summary, ); } # Always count Total Keys so print_key_summary won't die # because %summary is empty. 
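# %summary is also populated by the print_duplicate_key() callback above
# (with --summary it adds 'Total Duplicate Indexes' and 'Size Duplicate
# Indexes'); the 'Total Indexes' counter below grows for every table that
# has any keys or foreign keys, whether or not duplicates were found.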
$summary{items}->{'Total Indexes'} += (scalar keys %$keys) + (scalar keys %$fks) } }; if ( $EVAL_ERROR ) { warn "Error checking $tbl->{db}.$tbl->{tbl}: $EVAL_ERROR"; } } # TABLE print_key_summary(%summary) if $o->get('summary'); return 0; } # ########################################################################## # Subroutines # ########################################################################## sub print_all_keys { my ( $keys, $tbl_info, $seen_tbl ) = @_; return unless $keys; my $db = $tbl_info->{db}; my $tbl = $tbl_info->{tbl}; if ( !$seen_tbl->{"$db$tbl"}++ ) { printf $hdr_fmt, ('#' x $hdr_width); printf $hdr_fmt, "$db.$tbl"; printf $hdr_fmt, ('#' x $hdr_width); } # Print keys sorted by name (easier to test) foreach my $key ( sort {$a->{name} cmp $b->{name}} values %$keys ) { print "\n# $key->{name} ($key->{colnames})"; } print "\n"; return; } sub print_duplicate_key { my ( $dupe, %args ) = @_; return unless $dupe; foreach my $arg ( qw(tbl_info dbh is_fk o ks q tp seen_tbl) ) { die "I need a $arg argument" unless exists $args{$arg}; } PTDEBUG && _d('Printing duplicate key', $dupe->{key}); my $db = $args{tbl_info}->{db}; my $tbl = $args{tbl_info}->{tbl}; my $dbh = $args{dbh}; my $o = $args{o}; my $ks = $args{ks}; my $seen_tbl = $args{seen_tbl}; my $q = $args{q}; my $tp = $args{tp}; my $summary = $args{summary}; my $struct = $tp->parse($args{tbl_info}->{ddl}); if ($dupe->{ddl} =~ /FULLTEXT/) { $summary->{has_fulltext_dupe}++; } if ( !$seen_tbl->{"$db$tbl"}++ ) { printf $hdr_fmt, ('#' x $hdr_width); printf $hdr_fmt, "$db.$tbl"; printf $hdr_fmt, ('#' x $hdr_width); print "\n"; } $dupe->{reason} =~ s/\n/\n# /g; print "# $dupe->{reason}\n"; print "# Key definitions:\n"; print "# " . ($dupe->{ddl} || '') . "\n"; print "# " . ($dupe->{duplicate_of_ddl} || '') . "\n"; print "# Column types:\n"; my %seen; # print each column only once foreach my $col ( @{$dupe->{cols}}, @{$dupe->{duplicate_of_cols}} ) { next if $seen{$col}++; PTDEBUG && _d('col', $col); print "#\t" . lc($struct->{defs}->{lc $col}) . "\n"; } if ( $o->get('sql') ) { if ( $dupe->{dupe_type} ne 'clustered' ) { print "# To remove this duplicate " . ($args{is_fk} ? 'foreign key' : 'index') . ", execute:\n" . 'ALTER TABLE ' . $q->quote($db, $tbl) . ($args{is_fk} ? ' DROP FOREIGN KEY ' : ' DROP INDEX ') . "`$dupe->{key}`;\n"; } else { # Suggest shortening clustered dupes instead of # removing them (issue 295). print "# To shorten this duplicate clustered index, execute:\n" . 'ALTER TABLE '.$q->quote($db, $tbl)." DROP INDEX `$dupe->{key}`, " . "ADD INDEX `$dupe->{key}` ($dupe->{short_key});\n"; } } print "\n"; if ( $o->get('summary') && $summary ) { $summary->{'items'}->{'Total Duplicate Indexes'} += 1; my ($size, $chosen_key) = $ks->get_key_size( name => $dupe->{key}, cols => $dupe->{cols}, tbl_name => $q->quote($db, $tbl), tbl_struct => $struct, dbh => $dbh, ); if ( $args{is_fk} ) { # Foreign keys have no size because they're just constraints. print "# MySQL uses the $chosen_key index for this " . "foreign key constraint\n\n"; } else { $size ||= 0; # Create Size Duplicate Keys summary even if there's no valid keys. $summary->{'items'}->{'Size Duplicate Indexes'} += $size; if ( $size ) { if ( $chosen_key && $chosen_key ne $dupe->{key} ) { # This shouldn't happen. But in case it does, we should know. 
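# Both $size and $chosen_key come from the single get_key_size() call above;
# if the key reported back differs from the duplicate being inspected, the
# size just added to 'Size Duplicate Indexes' presumably reflects that other
# key, so the note below makes the discrepancy visible in the output.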
print "# MySQL chose the $chosen_key index despite FORCE INDEX\n\n"; } } } } return; } sub print_key_summary { my ( %summary ) = @_; my $items = $summary{items}; printf $hdr_fmt, ('#' x $hdr_width); printf $hdr_fmt, 'Summary of indexes'; printf $hdr_fmt, ('#' x $hdr_width); print "\n"; my $max_item = max(map { length($_) } keys %$items); my $line_fmt = "# %-${max_item}s %-s"; foreach my $item ( sort keys %$items ) { printf $line_fmt, $item, $items->{$item}; if ( $item eq 'Size Duplicate Indexes' && $summary{has_fulltext_dupe} ) { print ' (not including FULLTEXT indexes)'; } print "\n"; } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-duplicate-key-checker - Find duplicate indexes and foreign keys on MySQL tables. =head1 SYNOPSIS Usage: pt-duplicate-key-checker [OPTIONS] [DSN] pt-duplicate-key-checker examines MySQL tables for duplicate or redundant indexes and foreign keys. Connection options are read from MySQL option files. pt-duplicate-key-checker --host host1 =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION This program examines the output of SHOW CREATE TABLE on MySQL tables, and if it finds indexes that cover the same columns as another index in the same order, or cover an exact leftmost prefix of another index, it prints out the suspicious indexes. By default, indexes must be of the same type, so a BTREE index is not a duplicate of a FULLTEXT index, even if they have the same columns. You can override this. It also looks for duplicate foreign keys. A duplicate foreign key covers the same columns as another in the same table, and references the same parent table. The output ends with a short summary that includes an estimate of the total size, in bytes, that the duplicate indexes are using. This is calculated by multiplying the index length by the number of rows in their respective tables. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --all-structs Compare indexes with different structs (BTREE, HASH, etc). By default this is disabled, because a BTREE index that covers the same columns as a FULLTEXT index is not really a duplicate, for example. =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. 
Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --[no]clustered default: yes PK columns appended to secondary key is duplicate. Detects when a suffix of a secondary key is a leftmost prefix of the primary key, and treats it as a duplicate key. Only detects this condition on storage engines whose primary keys are clustered (currently InnoDB and solidDB). Clustered storage engines append the primary key columns to the leaf nodes of all secondary keys anyway, so you might consider it redundant to have them appear in the internal nodes as well. Of course, you may also want them in the internal nodes, because just having them at the leaf nodes won't help for some queries. It does help for covering index queries, however. Here's an example of a key that is considered redundant with this option: PRIMARY KEY (`a`) KEY `b` (`b`,`a`) The use of such indexes is rather subtle. For example, suppose you have the following query: SELECT ... WHERE b=1 ORDER BY a; This query will do a filesort if we remove the index on C. But if we shorten the index on C to just C and also remove the ORDER BY, the query should return the same results. The tool suggests shortening duplicate clustered keys by dropping the key and re-adding it without the primary key prefix. The shortened clustered key may still duplicate another key, but the tool cannot currently detect when this happens without being ran a second time to re-check the newly shortened clustered keys. Therefore, if you shorten any duplicate clustered keys, you should run the tool again. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --databases short form: -d; type: hash Check only this comma-separated list of databases. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --engines short form: -e; type: hash Check only tables whose storage engine is in this comma-separated list. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --ignore-databases type: Hash Ignore this comma-separated list of databases. =item --ignore-engines type: Hash Ignore this comma-separated list of storage engines. =item --ignore-order Ignore index order so KEY(a,b) duplicates KEY(b,a). =item --ignore-tables type: Hash Ignore this comma-separated list of tables. Table names may be qualified with the database name. =item --key-types type: string; default: fk Check for duplicate f=foreign keys, k=keys or fk=both. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. 
For example, specifying C<--set-vars wait_timeout=500> overrides the defaultvalue of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --[no]sql default: yes Print DROP KEY statement for each duplicate key. By default an ALTER TABLE DROP KEY statement is printed below each duplicate key so that, if you want to remove the duplicate key, you can copy-paste the statement into MySQL. To disable printing these statements, specify C<--no-sql>. =item --[no]summary default: yes Print summary of indexes at end of output. =item --tables short form: -t; type: hash Check only this comma-separated list of tables. Table names may be qualified with the database name. =item --user short form: -u; type: string User for login if not current user. =item --verbose short form: -v Output all keys and/or foreign keys found, not just redundant ones. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-duplicate-key-checker ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. 
Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2007-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-duplicate-key-checker 3.1.0 =cut percona-toolkit-3.1/bin/pt-fifo-split000775 001750 001750 00000141755 13535723560 021056 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( OptionParser Daemon )); } # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. 
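# OptionParser builds its command-line specs from this file's own POD:
# get_specs() turns the "=head1 OPTIONS" items into Getopt::Long specs,
# defaults and rules, and hands the "=head1 DSN OPTIONS" items to a
# DSNParser; see _pod_to_specs() and _parse_specs() below.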
# ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
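# This nested parser re-reads the same POD file but scans the
# "=head1 DSN OPTIONS" section instead of "=head1 OPTIONS"; the
# single-letter "=item *" keys it finds become the opts passed to
# DSNParser->new() just below.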
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
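# Specs are hashrefs built by _pod_to_specs(); plain strings in the same
# list are rule paragraphs (e.g. "mutually exclusive", "at least one") and
# are handled by the else branch further down.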
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? 
$o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_fifo_split; use English qw(-no_match_vars); use POSIX qw(mkfifo); use IO::File; use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub main { @ARGV = @_; # set global ARGV for this package # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); if ( !$o->get('lines') || $o->get('lines') <= 0 ) { $o->save_error('--lines must be a positive integer'); } $o->usage_or_errors(); # ######################################################################## # If --pid, check it first since we'll die if it already exits. # ######################################################################## my $daemon; if ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. Keep $daemon # in the the scope of main() because when it's destroyed it automatically # removes the PID file. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } my $file = $o->get('fifo'); if ( $o->get('force') && -e $file ) { unlink($file) or die "Can't unlink $file: $OS_ERROR"; } my $fh; if ( $o->get('statistics') ) { printf("%5s %9s %5s %8s %8s\n", qw(chunks lines time overall current)); } # This is for runtime efficiency. 
my $OFFSET = $o->get('offset'); my $LINES = $o->get('lines'); my $chunks = 0; my $start = time(); my $cstart = time(); my $printed = 0; while ( my $line = <> ) { my $lines = $INPUT_LINE_NUMBER; next if $OFFSET && $lines < $OFFSET; if ( $printed == 0 ) { mkfifo($file, 0777) or die "Can't make fifo $file: $OS_ERROR"; $fh = IO::File->new($file, '>') or die "Can't open $file: $OS_ERROR"; $fh->autoflush(1); } print $fh $line or die "Can't print: $OS_ERROR"; $printed++; if ( ($lines % $LINES) == 0 ) { close $fh or die "Can't close: $OS_ERROR"; unlink($file) or die "Can't unlink $file: $OS_ERROR"; $printed = 0; $chunks++; my $end = time(); if ( $o->get('statistics') ) { my $overall = ($end - $start) || 1; my $current = ($end - $cstart) || 1; printf("%5d %9d %5d %5.2f %5.2f\n", $chunks, $lines, ($end - $start), ($lines / $overall), ($LINES / $current)); } $cstart = $end; } } close $fh or die "Can't close: $OS_ERROR" if $fh && $fh->opened; unlink($file) or die "Can't unlink $file: $OS_ERROR" if -e $file; return 0; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-fifo-split - Split files and pipe lines to a fifo without really splitting. =head1 SYNOPSIS Usage: pt-fifo-split [OPTIONS] [FILE] pt-fifo-split splits FILE and pipes lines to a fifo. With no FILE, or when FILE is -, read standard input. Read hugefile.txt in chunks of a million lines without physically splitting it: pt-fifo-split --lines 1000000 hugefile.txt while [ -e /tmp/pt-fifo-split ]; do cat /tmp/pt-fifo-split; done =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-fifo-split lets you read from a file as though it contains only some of the lines in the file. When you read from it again, it contains the next set of lines; when you have gone all the way through it, the file disappears. This works only on Unix-like operating systems. You can specify multiple files on the command line. If you don't specify any, or if you use the special filename C<->, lines are read from standard input. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --fifo type: string; default: /tmp/pt-fifo-split The name of the fifo from which the lines can be read. =item --force Remove the fifo if it exists already, then create it again. =item --help Show help and exit. =item --lines type: int; default: 1000 The number of lines to read in each chunk. 
=item --offset type: int; default: 0 Begin at the Nth line. If the argument is 0, all lines are printed to the fifo. If 1, then beginning at the first line, lines are printed (exactly the same as 0). If 2, the first line is skipped, and the 2nd and subsequent lines are printed to the fifo. =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --statistics Print out statistics between chunks. The statistics are the number of chunks, the number of lines, elapsed time, and lines per second overall and during the last chunk. =item --version Show version and exit. =back =head1 ENVIRONMENT The environment variable C<PTDEBUG> enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-fifo-split ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C<PTDEBUG>; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C<TOOL> with the name of any tool. =head1 AUTHORS Baron Schwartz =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2007-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 
=head1 VERSION pt-fifo-split 3.1.0 =cut percona-toolkit-3.1/bin/pt-find000775 001750 001750 00000450401 13535723560 017711 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit DSNParser OptionParser Quoter TableParser Daemon HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(?<!\\),/; eval { require DBI; }; my $have_dbi = $EVAL_ERROR ? 0 : 1; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(opts) ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { opts => {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
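# Each spec's {spec} string is a Getopt::Long-style description that
# _parse_attribs() builds from the POD attributes: the long option name,
# then '|' plus the short form, '!' if negatable, '+' if cumulative, and
# '=x' where x is the single-letter value type. A sketch with
# hypothetical options (not ones defined by this tool):
#
#   "chunk-size|c=z"   # --chunk-size / -c, takes a size value
#   "quiet|q!"         # --quiet / -q, can be negated ([no]quiet in --help)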
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?<!\\),\s*/, ($val || '')) }; } elsif ( $opt->{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?<!\\),\s*/, ($val || '')) ]; } else { PTDEBUG && _d('Nothing to validate for option', $opt->{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = <STDIN>); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . 
'`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . 
Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . 
$where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . $q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 
1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . $q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. 
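#
# Usage sketch (illustrative, mirroring how main() further below constructs
# this object; it assumes an OptionParser object $o whose tool declares the
# --pid and --log options):
#
#   my $daemon = Daemon->new(o => $o);  # dies if the --pid file exists and
#                                       # the PID it contains is still running
#   $daemon->make_PID_file();           # record $PID without forking
#   # or, for a background tool:
#   # $daemon->daemonize();             # fork, setsid, chdir '/', redirect output
#
# The PID file is removed automatically when the process that created it
# destroys its $daemon object.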
# ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? $o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? 
$host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? $1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for 
$host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? 
BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! = 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or 
return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! %$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. 
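#
# Call pattern (a sketch; main() further below calls it this way when
# --version-check is enabled, with $dbh and $dsn for the current connection):
#
#   VersionCheck::version_check(
#      force     => 0,   # when true, run even from a .git/.bzr checkout
#      instances => [ { dbh => $dbh, dsn => $dsn } ],
#   );
#
# Checks are throttled through a "percona-version-check" file of
# "id,timestamp" lines (one per instance) and repeated at most once every
# 24 hours; the check is skipped silently if IO::Socket::SSL is unavailable.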
# ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . $file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_find; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; $OUTPUT_AUTOFLUSH = 1; # ############################################################################ # Lookup tables and global variables # ############################################################################ my $o; # OptionParser obj my %fmt_for; # Interpolated strings my %time_for; # Holds time constants for mmin, mtime etc my %connections; # Holds a list of thread IDs connected my $server_id; # Holds the server's @@SERVER_ID my $dbh; # This program's $dbh my $exec_dbh; # The $dbh to use for exec and exec-plus my $tp; # Functions to call while evaluating tests. 
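# Each key below names a command-line test (--engine, --rows, --mtime, ...)
# and each value is a coderef that receives one table hashref (usually a
# normalized SHOW TABLE STATUS row plus extras such as Database and
# Table_length) and returns true when the table matches. main() requires
# every enabled test to pass, or any single one when --or is given (see
# any()/all() in the Subroutines section). A hypothetical extra test would
# follow the same shape, e.g.:
#
#   maxdatasize => sub {
#      my ( $table ) = @_;
#      return test_number($table, 'Max_data_length', $o->get('maxdatasize'));
#   },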
my %test_for = ( autoinc => sub { my ( $table ) = @_; return test_number($table, 'Auto_increment', $o->get('autoinc')); }, avgrowlen => sub { my ( $table ) = @_; return test_number($table, 'Avg_row_length', $o->get('avgrowlen')); }, checksum => sub { my ( $table ) = @_; return test_number($table, 'Checksum', $o->get('checksum')); }, cmin => sub { my ( $table ) = @_; return test_date($table, 'Create_time', 'cmin'); }, collation => sub { my ( $table ) = @_; return test_regex($table, 'Collation', $o->get('collation')); }, 'column-name' => sub { my ( $table ) = @_; my $struct = $table->{struct}; return unless $struct; my $test = $o->get('column-name'); if ( $o->get('case-insensitive') ) { $test = "(?i)$test"; } foreach my $col ( @{$struct->{cols}} ) { return 1 if $col =~ m/$test/; } return 0; }, 'column-type' => sub { my ( $table ) = @_; my $struct = $table->{struct}; return unless $struct; my $test = lc($o->get('column-type')); my $type_for = $struct->{type_for}; foreach my $col ( keys %$type_for ) { return 1 if $type_for->{$col} eq $test; } return 0; }, comment => sub { my ( $table ) = @_; return test_regex($table, 'Comment', $o->get('comment')); }, createopts => sub { my ( $table ) = @_; return test_regex($table, 'Create_options', $o->get('createopts')); }, ctime => sub { my ( $table ) = @_; return test_date($table, 'Create_time', 'ctime'); }, datafree => sub { my ( $table ) = @_; return test_number($table, 'Data_free', $o->get('datafree')); }, datasize => sub { my ( $table ) = @_; return test_number($table, 'Data_length', $o->get('datasize')); }, dbregex => sub { my ( $table ) = @_; return test_regex($table, 'Database', $o->get('dbregex')); }, empty => sub { my ( $table ) = @_; return test_number($table, 'Rows', '0'); }, engine => sub { my ( $table ) = @_; return test_regex($table, 'Engine', $o->get('engine')); }, function => sub { my ( $table ) = @_; return unless $table->{stored_code} && $table->{stored_code} eq 'FUNCTION'; my $def = $table->{def}; return unless $def; my $test = $o->get('function'); if ( $o->get('case-insensitive') ) { $test = "(?i)$test"; } return $def =~ m/$test/; }, indexsize => sub { my ( $table ) = @_; return test_number($table, 'Index_length', $o->get('indexsize')); }, kmin => sub { my ( $table ) = @_; return test_date($table, 'Check_time', 'kmin'); }, ktime => sub { my ( $table ) = @_; return test_date($table, 'Check_time', 'ktime'); }, mmin => sub { my ( $table ) = @_; return test_date($table, 'Update_time', 'mmin'); }, mtime => sub { my ( $table ) = @_; return test_date($table, 'Update_time', 'mtime'); }, 'connection-id' => sub { my ( $table ) = @_; my $test = $o->get('case-insensitive') ? "(?i)".$o->get('connection-id') : $o->get('connection-id'); my ( $pid ) = $table->{Name} =~ m/$test/; return $pid && !exists $connections{$pid}; }, procedure => sub { my ( $table ) = @_; return unless $table->{stored_code} && $table->{stored_code} eq 'PROCEDURE'; my $def = $table->{def}; return unless $def; my $test = $o->get('procedure'); if ( $o->get('case-insensitive') ) { $test = "(?i)$test"; } return $def =~ m/$test/; }, rows => sub { my ( $table ) = @_; return test_number($table, 'Rows', $o->get('rows')); }, rowformat => sub { my ( $table ) = @_; return test_regex($table, 'Row_format', $o->get('rowformat')); }, 'server-id' => sub { my ( $table ) = @_; my $test = $o->get('case-insensitive') ? 
"(?i)".$o->get('server-id') : $o->get('server-id'); my ( $sid ) = $table->{Name} =~ m/$test/; return $sid && $sid == $server_id; }, tablesize => sub { my ( $table ) = @_; return test_number($table, 'Table_length', $o->get('tablesize')); }, tblregex => sub { my ( $table ) = @_; return test_regex($table, 'Name', $o->get('tblregex')); }, tblversion => sub { my ( $table ) = @_; return test_number($table, 'Version', $o->get('tblversion')); }, trigger => sub { my ( $table ) = @_; return unless $table->{stored_code} && $table->{stored_code} eq 'TRIGGER'; my $def = $table->{def}; return unless $def; my $test = $o->get('trigger'); if ( $o->get('case-insensitive') ) { $test = "(?i)$test"; } return $def =~ m/$test/; }, 'trigger-table' => sub { my ( $table ) = @_; return unless $table->{stored_code} && $table->{stored_code} eq 'TRIGGER'; my $test = $o->get('trigger-table'); if ( $o->get('case-insensitive') ) { $test = "(?i)$test"; } return $table->{trigger_table} =~ m/$test/; }, view => sub { my ( $table ) = @_; my $view = $table->{view}; return unless $view; my $test = $o->get('view'); if ( $o->get('case-insensitive') ) { $test = "(?i)$test"; } return $view =~ m/$test/; }, ); # Functions to call when doing actions my %action_for = ( print => sub { my ( $table ) = @_; print "$table->{Database}.$table->{Name}\n"; }, exec => sub { my ( $table ) = @_; my $sql = sprintf($fmt_for{exec}->{str}, map { defined $_ ? $_ : '' } @{$table}{@{$fmt_for{exec}->{arg_names}}}); PTDEBUG && _d($sql); $exec_dbh->do($sql); }, printf => sub { my ( $table ) = @_; printf($fmt_for{printf}->{str}, map { defined $_ ? $_ : '' } @{$table}{@{$fmt_for{printf}->{arg_names}}}); }, ); my %arg_for = ( a => 'Auto_increment', A => 'Avg_row_length', c => 'Checksum', C => 'Create_time', D => 'Database', d => 'Data_length', E => 'Engine', F => 'Data_free', f => 'Innodb_free', I => 'Index_length', K => 'Check_time', L => 'Collation', M => 'Max_data_length', N => 'Name', O => 'Comment', P => 'Create_options', R => 'Row_format', S => 'Rows', T => 'Table_length', U => 'Update_time', V => 'Version', ); my @table_struct_tests = qw( column-name column-type view ); my @stored_code_tests = qw( procedure function trigger ); sub main { local @ARGV = @_; # set global ARGV for this package # ######################################################################## # Get configuration information. # ######################################################################## my $q = new Quoter(); $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); # Make sure OptionParser understands that these options are used. # cmin ctime empty kmin ktime mmin mtime exec printf # Ensure there is a capture group. if ( $o->get('connection-id') && $o->get('connection-id') !~ m/\(\\d\+\)/ ) { $o->save_error("--connection-id regex doesn't capture digits with (\\d+)"); } # Ensure there is a capture group. if ( $o->get('server-id') && $o->get('server-id') !~ m/\(\\d\+\)/ ) { $o->save_error("--server-id regex doesn't capture digits with (\\d+)"); } $o->usage_or_errors(); # Interpolate strings for printf and exec. At the same time discover whether # I must use SHOW TABLE STATUS (slower than SHOW TABLES) to fetch data. 
my $showstat = grep { $o->get($_) } qw( autoinc avgrowlen checksum cmin collation comment createopts ctime datasize datafree empty engine indexsize kmin ktime mmin mtime rows rowformat tablesize tblversion); foreach my $thing (qw(exec printf)) { next unless $o->get($thing); my ($str, $arg_names) = interpolate($o->get($thing)); $fmt_for{$thing} = { str => $str, arg_names => $arg_names }; if ( grep { $_ !~ m/^(Database|Name)$/ } @$arg_names ) { $showstat = 1; } } # Discover if we need to parse SHOW CREATE TABLE. my $need_table_struct = grep { $o->got($_); } @table_struct_tests; PTDEBUG && _d('Need table struct:', $need_table_struct); if ( $need_table_struct ) { $tp = new TableParser(Quoter => $q); } # ######################################################################## # If --pid, check it first since we'll die if it already exits. # ######################################################################## my $daemon; if ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. Keep $daemon # in the the scope of main() because when it's destroyed it automatically # removes the PID file. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # ######################################################################## # Get ready to do the main work. # ######################################################################## # Connect to the database. if ( $o->get('ask-pass') ) { $o->set('password', OptionParser::prompt_noecho("Enter password: ")); } my $dsn = $dp->parse_options($o); $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), { AutoCommit => 1 } ); if ( $o->get('exec-dsn') ) { my $exec_dsn = $dp->parse($o->get('exec-dsn'), $dsn); $exec_dbh = $dp->get_dbh($dp->get_cxn_params($exec_dsn), { AutoCommit => 1 }); } else { $exec_dbh = $dbh; } # If no other action was given, the default action is to print. if ( !grep { $o->get($_) } qw( exec exec-plus print printf ) ) { $o->set('print', 1); } # Figure out the time referred to by date/time options. my $basetime; foreach my $option ( grep { defined $o->get($_) } qw(cmin ctime kmin ktime mmin mtime) ) { # Initialize a consistent point in time. $basetime ||= $dbh->selectcol_arrayref( "SELECT " . ($o->get('day-start') ? 'CURRENT_DATE' : 'CURRENT_TIMESTAMP') )->[0]; my ($val) = $o->get($option) =~ m/(\d+)/; my $inter = $option =~ m/min/ ? 'MINUTE' : 'DAY'; my $query = "SELECT DATE_SUB('$basetime', INTERVAL $val $inter)"; $time_for{$option} = $dbh->selectcol_arrayref($query)->[0]; } # Fetch and save a list of processes currently running. if ( $o->get('connection-id') ) { # Ensure I have the PROCESS privilege. my $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref('SHOW GRANTS')}; if ( !$proc ) { die "--connection-id requires the PROCESS privilege for safety.\n"; } } ($server_id) = $dbh->selectrow_array('SELECT @@SERVER_ID'); # Discover if we need to get stored code. Need dbh to do this. my $need_stored_code = grep { $o->got($_); } @stored_code_tests; # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ { dbh => $dbh, dsn => $dsn } ], ); } # ######################################################################## # Go do it. # ######################################################################## my @databases = @ARGV ? 
@ARGV : $o->get('dblike') ? @{$dbh->selectcol_arrayref('SHOW DATABASES LIKE ?', {}, $o->get('dblike'))} : @{$dbh->selectcol_arrayref('SHOW DATABASES')}; my @exec_plus; DATABASE: foreach my $database ( @databases ) { next DATABASE if $database =~ m/^(?:information_schema|lost\+found)$/mi; my $sta = $showstat ? ' STATUS' : 'S'; my $sth = $o->get('tbllike') ? $dbh->prepare("SHOW TABLE$sta FROM `$database` LIKE ?") : $dbh->prepare("SHOW TABLE$sta FROM `$database`"); $sth->execute($o->get('tbllike') || ()); my @tables = @{$sth->fetchall_arrayref({})}; # Must re-fetch every time; there are too many ways things can go wrong # otherwise (for example, the counter wraps over the unsigned int # boundary). if ( $o->get('connection-id') ) { %connections = map { $_ => 1 } @{$dbh->selectcol_arrayref('SHOW FULL PROCESSLIST')}; } # Make results uniform across MySQL versions, and generate additional # properties. foreach my $table ( @tables ) { if ( $showstat ) { my ($ib_free) = $table->{Comment} && $table->{Comment} =~ m/InnoDB free: (\d+) kB/; $table->{Engine} ||= $table->{Type}; $table->{Table_length} = ($table->{Index_length} || 0) + ($table->{Data_length} || 0); $table->{Innodb_free} = $ib_free ? 1_024 * $ib_free : undef; delete $table->{Type}; } else { my ($name) = values %$table; $table = { Name => $name }; } $table->{Database} = $database; if ( $need_table_struct ) { PTDEBUG && _d('Getting table struct for', $database, '.', $table->{Name}); my $ddl = $tp->get_create_table($dbh, $database, $table->{Name}); if ( $ddl =~ m/CREATE TABLE/ ) { my $table_struct; eval { $table_struct = $tp->parse($ddl) }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Failed to parse table:', $EVAL_ERROR); } $table->{struct} = $table_struct; } else { $table->{view} = $ddl; } } } if ( $need_stored_code ) { foreach my $type ( qw(PROCEDURE FUNCTION) ) { my $sql = "SELECT ROUTINE_NAME AS name, " . " ROUTINE_DEFINITION AS definition " . " FROM INFORMATION_SCHEMA.ROUTINES " . " WHERE ROUTINE_SCHEMA = '$database' " . " AND ROUTINE_TYPE = '$type'"; PTDEBUG && _d($sql); my $codes = $dbh->selectall_arrayref($sql); foreach my $code ( @$codes ) { push @tables, { Database => $database, Name => "$type $code->[0]", stored_code => $type, def => $code->[1], }; } } my $sql = "SELECT TRIGGER_NAME AS name, " . " ACTION_STATEMENT AS action, " . " EVENT_OBJECT_TABLE AS `table`, " . " EVENT_MANIPULATION AS type " . " FROM INFORMATION_SCHEMA.TRIGGERS " . " WHERE EVENT_OBJECT_SCHEMA = '$database'"; PTDEBUG && _d($sql); my $trigs = $dbh->selectall_arrayref($sql); foreach my $trig ( @$trigs ) { push @tables, { Database => $database, Name => "$trig->[3] TRIGGER $trig->[0] on $trig->[2]", trigger_table => $trig->[2], stored_code => 'TRIGGER', def => $trig->[1], }; } } # Apply the tests to find the matching tables. @tables = grep { my $table = $_; my @tests = grep { $o->get($_) } keys %test_for; if ( @tests ) { ($o->get('or') ? any($table, @tests) : all($table, @tests)); } else { $table; # No tests == all tables (issue 549). } } @tables; # Quote database and table names if desired. if ( $o->get('quote') ) { foreach my $table ( @tables ) { $table->{Database} = $q->quote($table->{Database}); $table->{Name} = $q->quote($table->{Name}); } } foreach my $table ( @tables ) { my @actions = grep { $o->get($_) } keys %action_for; foreach my $action ( @actions ) { $action_for{$action}->($table); } } push @exec_plus, @tables; } # Handle exec-plus.
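# The db.tbl names collected in @exec_plus across all databases are joined with
# commas and substituted for every %s in the --exec-plus SQL, so one statement
# such as "DROP TABLE %s" runs once against the entire list of matches.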
if ( $o->get('exec-plus') ) { my $table_list = join(', ',map {"$_->{Database}.$_->{Name}"} @exec_plus); (my $sql = $o->get('exec-plus')) =~ s/%s/$table_list/g; $exec_dbh->do($sql); } return 0; } # ############################################################################ # Subroutines # ############################################################################ # One test is true sub any { my ( $table, @tests ) = @_; foreach my $test ( @tests ) { return 1 if $test_for{$test}->($table); } return 0; } # All tests are true sub all { my ( $table, @tests ) = @_; foreach my $test ( @tests ) { return 0 unless $test_for{$test}->($table); } return 1; } # Checks the given property of the given table to see if it passes the test sub test_number { my ( $table, $prop, $test ) = @_; # E.g. --datasize NULL. if ( $test eq 'null' ) { return !defined $table->{$prop}; } my ($num) = $test =~ m/(\d+)/; return defined $table->{$prop} && ( ( $test =~ m/-/ && $table->{$prop} < $num ) || ( $test =~ m/\+/ && $table->{$prop} > $num ) || ( $table->{$prop} == $num )); } # Checks the given property of the given table to see if it passes the test sub test_date { my ( $table, $prop, $test ) = @_; return defined $table->{$prop} && ( ( $o->get($test) =~ m/-/ && $table->{$prop} gt $time_for{$test} ) || ( $o->get($test) =~ m/\+/ && $table->{$prop} lt $time_for{$test} ) || ( $table->{$prop} eq $time_for{$test} )); } # Checks the given property of the given table to see if it passes the test sub test_regex { my ( $table, $prop, $test ) = @_; if ( $o->get('case-insensitive') ) { $test = "(?i)$test"; } return defined $table->{$prop} && $table->{$prop} =~ m/$test/; } # Does string-interpolation and stuff. Returns the string and a list of the # properties that go into the resulting placeholders. sub interpolate { my ( $str ) = @_; my @arg_names; # Replace % directives $str =~ s/%(.)/(exists $arg_for{$1} && push @arg_names, $arg_for{$1} ) ? '\%s' : "$1"/xge; # Get Perl to interpolate escape sequences $str =~ s/(? 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $test =~ m/([+-])?(\d+)([kMG])?/; if ( $factor ) { $num *= $factor_for{$factor}; } return "$pre$num"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-find - Find MySQL tables and execute actions, like GNU find. =head1 SYNOPSIS Usage: pt-find [OPTIONS] [DATABASES] pt-find searches for MySQL tables and executes actions, like GNU find. The default action is to print the database and table name. Find all tables created more than a day ago, which use the MyISAM engine, and print their names: pt-find --ctime +1 --engine MyISAM Find InnoDB tables and convert them to MyISAM: pt-find --engine InnoDB --exec "ALTER TABLE %D.%N ENGINE=MyISAM" Find tables created by a process that no longer exists, following the name_sid_pid naming convention, and remove them. 
pt-find --connection-id '\D_\d+_(\d+)$' --server-id '\D_(\d+)_\d+$' --exec-plus "DROP TABLE %s" Find empty tables in the test and junk databases, and delete them: pt-find --empty junk test --exec-plus "DROP TABLE %s" Find tables more than five gigabytes in total size: pt-find --tablesize +5G Find all tables and print their total data and index size, and sort largest tables first (sort is a different program, by the way). pt-find --printf "%T\t%D.%N\n" | sort -rn As above, but this time, insert the data back into the database for posterity: pt-find --noquote --exec "INSERT INTO sysdata.tblsize(db, tbl, size) VALUES('%D', '%N', %T)" =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-find looks for MySQL tables that pass the tests you specify, and executes the actions you specify. The default action is to print the database and table name to STDOUT. pt-find is simpler than GNU find. It doesn't allow you to specify complicated expressions on the command line. pt-find uses SHOW TABLES when possible, and SHOW TABLE STATUS when needed. =head1 OPTION TYPES There are three types of options: normal options, which determine some behavior or setting; tests, which determine whether a table should be included in the list of tables found; and actions, which do something to the tables pt-find finds. pt-find uses standard Getopt::Long option parsing, so you should use double dashes in front of long option names, unlike GNU find. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. =item --case-insensitive Specifies that all regular expression searches are case-insensitive. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --database short form: -D; type: string Connect to this database. =item --day-start Measure times (for L<"--mmin">, etc) from the beginning of today rather than from the current time. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --or Combine tests with OR, not AND. By default, tests are evaluated as though there were an AND between them. This option switches it to OR. Option parsing is not implemented by pt-find itself, so you cannot specify complicated expressions with parentheses and mixtures of OR and AND. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. 
The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --[no]quote default: yes Quotes MySQL identifier names with MySQL's standard backtick character. Quoting happens after tests are run, and before actions are run. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C<variable=value> pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the default value of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head2 TESTS Most tests check some criterion against a column of SHOW TABLE STATUS output. Numeric arguments can be specified as +n for greater than n, -n for less than n, and n for exactly n. All numeric options can take an optional suffix multiplier of k, M or G (1_024, 1_048_576, and 1_073_741_824 respectively). All patterns are Perl regular expressions (see 'man perlre') unless specified as SQL LIKE patterns. Dates and times are all measured relative to the same instant, when pt-find first asks the database server what time it is. All date and time manipulation is done in SQL, so if you say to find tables modified 5 days ago, that translates to SELECT DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 5 DAY). If you specify L<"--day-start">, of course it's relative to CURRENT_DATE instead. However, table sizes and other metrics are not consistent at an instant in time. It can take some time for MySQL to process all the SHOW queries, and pt-find can't do anything about that. These measurements are as of the time they're taken. If you need some test that's not in this list, file a bug report and I'll enhance pt-find for you. It's really easy. =over =item --autoinc type: string; group: Tests Table's next AUTO_INCREMENT is n. This tests the Auto_increment column.
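For instance, a command like this should list every table whose next AUTO_INCREMENT value is already past two billion, close to the signed INT limit:

  pt-find --autoinc +2000000000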
=item --avgrowlen type: size; group: Tests Table avg row len is n bytes. This tests the Avg_row_length column. The specified size can be "NULL" to test where Avg_row_length IS NULL. =item --checksum type: string; group: Tests Table checksum is n. This tests the Checksum column. =item --cmin type: size; group: Tests Table was created n minutes ago. This tests the Create_time column. =item --collation type: string; group: Tests Table collation matches pattern. This tests the Collation column. =item --column-name type: string; group: Tests A column name in the table matches pattern. =item --column-type type: string; group: Tests A column in the table matches this type (case-insensitive). Examples of types are: varchar, char, int, smallint, bigint, decimal, year, timestamp, text, enum. =item --comment type: string; group: Tests Table comment matches pattern. This tests the Comment column. =item --connection-id type: string; group: Tests Table name has nonexistent MySQL connection ID. This tests the table name for a pattern. The argument to this test must be a Perl regular expression that captures digits like this: (\d+). If the table name matches the pattern, these captured digits are taken to be the MySQL connection ID of some process. If the connection doesn't exist according to SHOW FULL PROCESSLIST, the test returns true. If the connection ID is greater than pt-find's own connection ID, the test returns false for safety. Why would you want to do this? If you use MySQL statement-based replication, you probably know the trouble temporary tables can cause. You might choose to work around this by creating real tables with unique names, instead of temporary tables. One way to do this is to append your connection ID to the end of the table, thusly: scratch_table_12345. This assures the table name is unique and lets you have a way to find which connection it was associated with. And perhaps most importantly, if the connection no longer exists, you can assume the connection died without cleaning up its tables, and this table is a candidate for removal. This is how I manage scratch tables, and that's why I included this test in pt-find. The argument I use to L<"--connection-id"> is "\D_(\d+)$". That finds tables with a series of numbers at the end, preceded by an underscore and some non-number character (the latter criterion prevents me from examining tables with a date at the end, which people tend to do: baron_scratch_2007_05_07 for example). It's better to keep the scratch tables separate of course. If you do this, make sure the user pt-find runs as has the PROCESS privilege! Otherwise it will only see connections from the same user, and might think some tables are ready to remove when they're still in use. For safety, pt-find checks this for you. See also L<"--server-id">. =item --createopts type: string; group: Tests Table create option matches pattern. This tests the Create_options column. =item --ctime type: size; group: Tests Table was created n days ago. This tests the Create_time column. =item --datafree type: size; group: Tests Table has n bytes of free space. This tests the Data_free column. The specified size can be "NULL" to test where Data_free IS NULL. =item --datasize type: size; group: Tests Table data uses n bytes of space. This tests the Data_length column. The specified size can be "NULL" to test where Data_length IS NULL. Note: Starting from MySQL 8.0, empty tables return 0 instead of NULL. =item --dblike type: string; group: Tests Database name matches SQL LIKE pattern. 
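For instance, combined with L<"--empty">, a command like this should list the empty tables in every database whose name begins with test:

  pt-find --dblike 'test%' --empty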
=item --dbregex type: string; group: Tests Database name matches this pattern. =item --empty group: Tests Table has no rows. This tests the Rows column. =item --engine type: string; group: Tests Table storage engine matches this pattern. This tests the Engine column, or in earlier versions of MySQL, the Type column. =item --function type: string; group: Tests Function definition matches pattern. =item --indexsize type: size; group: Tests Table indexes use n bytes of space. This tests the Index_length column. The specified size can be "NULL" to test where Index_length IS NULL. =item --kmin type: size; group: Tests Table was checked n minutes ago. This tests the Check_time column. =item --ktime type: size; group: Tests Table was checked n days ago. This tests the Check_time column. =item --mmin type: size; group: Tests Table was last modified n minutes ago. This tests the Update_time column. =item --mtime type: size; group: Tests Table was last modified n days ago. This tests the Update_time column. =item --procedure type: string; group: Tests Procedure definition matches pattern. =item --rowformat type: string; group: Tests Table row format matches pattern. This tests the Row_format column. =item --rows type: size; group: Tests Table has n rows. This tests the Rows column. The specified size can be "NULL" to test where Rows IS NULL. =item --server-id type: string; group: Tests Table name contains the server ID. If you create temporary tables with the naming convention explained in L<"--connection-id">, but also add the server ID of the server on which the tables are created, then you can use this pattern match to ensure tables are dropped only on the server they're created on. This prevents a table from being accidentally dropped on a slave while it's in use (provided that your server IDs are all unique, which they should be for replication to work). For example, on the master (server ID 22) you create a table called scratch_table_22_12345. If you see this table on the slave (server ID 23), you might think it can be dropped safely if there's no such connection 12345. But if you also force the name to match the server ID with C<--server-id '\D_(\d+)_\d+$'>, the table won't be dropped on the slave. =item --tablesize type: size; group: Tests Table uses n bytes of space. This tests the sum of the Data_length and Index_length columns. =item --tbllike type: string; group: Tests Table name matches SQL LIKE pattern. =item --tblregex type: string; group: Tests Table name matches this pattern. =item --tblversion type: size; group: Tests Table version is n. This tests the Version column. =item --trigger type: string; group: Tests Trigger action statement matches pattern. =item --trigger-table type: string; group: Tests L<"--trigger"> is defined on table matching pattern. =item --view type: string; group: Tests CREATE VIEW matches this pattern. =back =head2 ACTIONS The L<"--exec-plus"> action happens after everything else, but otherwise actions happen in an indeterminate order. If you need determinism, file a bug report and I'll add this feature. =over =item --exec type: string; group: Actions Execute this SQL with each item found. The SQL can contain escapes and formatting directives (see L<"--printf">). =item --exec-dsn type: string; group: Actions Specify a DSN in key-value format to use when executing SQL with L<"--exec"> and L<"--exec-plus">. Any values not specified are inherited from command-line arguments. =item --exec-plus type: string; group: Actions Execute this SQL with all items at once. 
This option is unlike L<"--exec">. There are no escaping or formatting directives; there is only one special placeholder for the list of database and table names, %s. The list of tables found will be joined together with commas and substituted wherever you place %s. You might use this, for example, to drop all the tables you found: DROP TABLE %s This is sort of like GNU find's "-exec command {} +" syntax. Only it's not totally cryptic. And it doesn't require me to write a command-line parser. =item --print group: Actions Print the database and table name, followed by a newline. This is the default action if no other action is specified. =item --printf type: string; group: Actions Print format on the standard output, interpreting '\' escapes and '%' directives. Escapes are backslashed characters, like \n and \t. Perl interprets these, so you can use any escapes Perl knows about. Directives are replaced by %s, and as of this writing, you can't add any special formatting instructions, like field widths or alignment (though I'm musing over ways to do that). Here is a list of the directives. Note that most of them simply come from columns of SHOW TABLE STATUS. If the column is NULL or doesn't exist, you get an empty string in the output. A % character followed by any character not in the following list is discarded (but the other character is printed).

 CHAR DATA SOURCE         NOTES
 ---- ------------------  ------------------------------------------
 a    Auto_increment
 A    Avg_row_length
 c    Checksum
 C    Create_time
 D    Database            The database name in which the table lives
 d    Data_length
 E    Engine              In older versions of MySQL, this is Type
 F    Data_free
 f    Innodb_free         Parsed from the Comment field
 I    Index_length
 K    Check_time
 L    Collation
 M    Max_data_length
 N    Name
 O    Comment
 P    Create_options
 R    Row_format
 S    Rows
 T    Table_length        Data_length+Index_length
 U    Update_time
 V    Version

=back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C<option=value>. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C<PTDEBUG> enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-find ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L.
Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2007-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-find 3.1.0 =cut percona-toolkit-3.1/bin/pt-fingerprint000775 001750 001750 00000203350 13535723560 021317 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( OptionParser QueryParser QueryRewriter )); } # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. 
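# In short: OptionParser builds its option specs by reading the tool's own POD
# (the OPTIONS and DSN OPTIONS sections) and then parses @ARGV and config files
# against those specs.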
# ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
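# Specs arrive as hashrefs built from the POD's =item paragraphs; rules arrive
# as the plain-string paragraphs that precede the option list (mutual
# exclusion, "at least one", defaults-to, and so on), parsed further below.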
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?<!\\),\s*/, ($val || '')) }; } elsif ( $opt->{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?<!\\),\s*/, ($val || '')) ]; } else { PTDEBUG && _d('Nothing to validate for option', $opt->{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " .
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # QueryParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryParser.pm # t/lib/QueryParser.t # See https://launchpad.net/percona-toolkit for more information. 
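# In short: QueryParser extracts table names, aliases, and subqueries from raw
# SQL text; it works purely on the query string and needs no database
# connection.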
# ########################################################################### { package QueryParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; our $tbl_ident = qr/(?:`[^`]+`|\w+)(?:\.(?:`[^`]+`|\w+))?/; our $tbl_regex = qr{ \b(?:FROM|JOIN|(?get_tables($select); } my ($tbl) = $query =~ m/TABLE\s+($tbl_ident)(\s+.*)?/i; PTDEBUG && _d('Matches table:', $tbl); return ($tbl); } $query =~ s/ (?:LOW_PRIORITY|IGNORE|STRAIGHT_JOIN)//ig; if ( $query =~ s/^\s*LOCK TABLES\s+//i ) { PTDEBUG && _d('Special table type: LOCK TABLES'); $query =~ s/\s+(?:READ(?:\s+LOCAL)?|WRITE)\s*//gi; PTDEBUG && _d('Locked tables:', $query); $query = "FROM $query"; } $query =~ s/\\["']//g; # quoted strings $query =~ s/".*?"/?/sg; # quoted strings $query =~ s/'.*?'/?/sg; # quoted strings my @tables; foreach my $tbls ( $query =~ m/$tbl_regex/gio ) { PTDEBUG && _d('Match tables:', $tbls); next if $tbls =~ m/\ASELECT\b/i; foreach my $tbl ( split(',', $tbls) ) { $tbl =~ s/\s*($tbl_ident)(\s+.*)?/$1/gio; if ( $tbl !~ m/[a-zA-Z]/ ) { PTDEBUG && _d('Skipping suspicious table name:', $tbl); next; } push @tables, $tbl; } } return @tables; } sub has_derived_table { my ( $self, $query ) = @_; my $match = $query =~ m/$has_derived/; PTDEBUG && _d($query, 'has ' . ($match ? 'a' : 'no') . ' derived table'); return $match; } sub get_aliases { my ( $self, $query, $list ) = @_; my $result = { DATABASE => {}, TABLE => {}, }; return $result unless $query; $query =~ s/ (?:LOW_PRIORITY|IGNORE|STRAIGHT_JOIN)//ig; $query =~ s/ (?:INNER|OUTER|CROSS|LEFT|RIGHT|NATURAL)//ig; my @tbl_refs; my ($tbl_refs, $from) = $query =~ m{ ( (FROM|INTO|UPDATE)\b\s* # Keyword before table refs .+? # Table refs ) (?:\s+|\z) # If the query does not end with the table (?:WHERE|ORDER|LIMIT|HAVING|SET|VALUES|\z) # Keyword after table refs }ix; if ( $tbl_refs ) { if ( $query =~ m/^(?:INSERT|REPLACE)/i ) { $tbl_refs =~ s/\([^\)]+\)\s*//; } PTDEBUG && _d('tbl refs:', $tbl_refs); my $before_tbl = qr/(?:,|JOIN|\s|$from)+/i; my $after_tbl = qr/(?:,|JOIN|ON|USING|\z)/i; $tbl_refs =~ s/ = /=/g; while ( $tbl_refs =~ m{ $before_tbl\b\s* ( ($tbl_ident) (?:\s+ (?:AS\s+)? (\w+))? ) \s*$after_tbl }xgio ) { my ( $tbl_ref, $db_tbl, $alias ) = ($1, $2, $3); PTDEBUG && _d('Match table:', $tbl_ref); push @tbl_refs, $tbl_ref; $alias = $self->trim_identifier($alias); if ( $tbl_ref =~ m/^AS\s+\w+/i ) { PTDEBUG && _d('Subquery', $tbl_ref); $result->{TABLE}->{$alias} = undef; next; } my ( $db, $tbl ) = $db_tbl =~ m/^(?:(.*?)\.)?(.*)/; $db = $self->trim_identifier($db); $tbl = $self->trim_identifier($tbl); $result->{TABLE}->{$alias || $tbl} = $tbl; $result->{DATABASE}->{$tbl} = $db if $db; } } else { PTDEBUG && _d("No tables ref in", $query); } if ( $list ) { return \@tbl_refs; } else { return $result; } } sub split { my ( $self, $query ) = @_; return unless $query; $query = $self->clean_query($query); PTDEBUG && _d('Splitting', $query); my $verbs = qr{SELECT|INSERT|UPDATE|DELETE|REPLACE|UNION|CREATE}i; my @split_statements = grep { $_ } split(m/\b($verbs\b(?!(?:\s*\()))/io, $query); my @statements; if ( @split_statements == 1 ) { push @statements, $query; } else { for ( my $i = 0; $i <= $#split_statements; $i += 2 ) { push @statements, $split_statements[$i].$split_statements[$i+1]; if ( $statements[-2] && $statements[-2] =~ m/on duplicate key\s+$/i ) { $statements[-2] .= pop @statements; } } } PTDEBUG && _d('statements:', map { $_ ? 
"<$_>" : 'none' } @statements); return @statements; } sub clean_query { my ( $self, $query ) = @_; return unless $query; $query =~ s!/\*.*?\*/! !g; # Remove /* comment blocks */ $query =~ s/^\s+//; # Remove leading spaces $query =~ s/\s+$//; # Remove trailing spaces $query =~ s/\s{2,}/ /g; # Remove extra spaces return $query; } sub split_subquery { my ( $self, $query ) = @_; return unless $query; $query = $self->clean_query($query); $query =~ s/;$//; my @subqueries; my $sqno = 0; # subquery number my $pos = 0; while ( $query =~ m/(\S+)(?:\s+|\Z)/g ) { $pos = pos($query); my $word = $1; PTDEBUG && _d($word, $sqno); if ( $word =~ m/^\(?SELECT\b/i ) { my $start_pos = $pos - length($word) - 1; if ( $start_pos ) { $sqno++; PTDEBUG && _d('Subquery', $sqno, 'starts at', $start_pos); $subqueries[$sqno] = { start_pos => $start_pos, end_pos => 0, len => 0, words => [$word], lp => 1, # left parentheses rp => 0, # right parentheses done => 0, }; } else { PTDEBUG && _d('Main SELECT at pos 0'); } } else { next unless $sqno; # next unless we're in a subquery PTDEBUG && _d('In subquery', $sqno); my $sq = $subqueries[$sqno]; if ( $sq->{done} ) { PTDEBUG && _d('This subquery is done; SQL is for', ($sqno - 1 ? "subquery $sqno" : "the main SELECT")); next; } push @{$sq->{words}}, $word; my $lp = ($word =~ tr/\(//) || 0; my $rp = ($word =~ tr/\)//) || 0; PTDEBUG && _d('parentheses left', $lp, 'right', $rp); if ( ($sq->{lp} + $lp) - ($sq->{rp} + $rp) == 0 ) { my $end_pos = $pos - 1; PTDEBUG && _d('Subquery', $sqno, 'ends at', $end_pos); $sq->{end_pos} = $end_pos; $sq->{len} = $end_pos - $sq->{start_pos}; } } } for my $i ( 1..$#subqueries ) { my $sq = $subqueries[$i]; next unless $sq; $sq->{sql} = join(' ', @{$sq->{words}}); substr $query, $sq->{start_pos} + 1, # +1 for ( $sq->{len} - 1, # -1 for ) "__subquery_$i"; } return $query, map { $_->{sql} } grep { defined $_ } @subqueries; } sub query_type { my ( $self, $query, $qr ) = @_; my ($type, undef) = $qr->distill_verbs($query); my $rw; if ( $type =~ m/^SELECT\b/ ) { $rw = 'read'; } elsif ( $type =~ m/^$data_manip_stmts\b/ || $type =~ m/^$data_def_stmts\b/ ) { $rw = 'write' } return { type => $type, rw => $rw, } } sub get_columns { my ( $self, $query ) = @_; my $cols = []; return $cols unless $query; my $cols_def; if ( $query =~ m/^SELECT/i ) { $query =~ s/ ^SELECT\s+ (?:ALL |DISTINCT |DISTINCTROW |HIGH_PRIORITY |STRAIGHT_JOIN |SQL_SMALL_RESULT |SQL_BIG_RESULT |SQL_BUFFER_RESULT |SQL_CACHE |SQL_NO_CACHE |SQL_CALC_FOUND_ROWS )\s+ /SELECT /xgi; ($cols_def) = $query =~ m/^SELECT\s+(.+?)\s+FROM/i; } elsif ( $query =~ m/^(?:INSERT|REPLACE)/i ) { ($cols_def) = $query =~ m/\(([^\)]+)\)\s*VALUE/i; } PTDEBUG && _d('Columns:', $cols_def); if ( $cols_def ) { @$cols = split(',', $cols_def); map { my $col = $_; $col = s/^\s+//g; $col = s/\s+$//g; $col; } @$cols; } return $cols; } sub parse { my ( $self, $query ) = @_; return unless $query; my $parsed = {}; $query =~ s/\n/ /g; $query = $self->clean_query($query); $parsed->{query} = $query, $parsed->{tables} = $self->get_aliases($query, 1); $parsed->{columns} = $self->get_columns($query); my ($type) = $query =~ m/^(\w+)/; $parsed->{type} = lc $type; $parsed->{sub_queries} = []; return $parsed; } sub extract_tables { my ( $self, %args ) = @_; my $query = $args{query}; my $default_db = $args{default_db}; my $q = $self->{Quoter} || $args{Quoter}; return unless $query; PTDEBUG && _d('Extracting tables'); my @tables; my %seen; foreach my $db_tbl ( $self->get_tables($query) ) { next unless $db_tbl; next if $seen{$db_tbl}++; # 
Unique-ify for issue 337. my ( $db, $tbl ) = $q->split_unquote($db_tbl); push @tables, [ $db || $default_db, $tbl ]; } return @tables; } sub trim_identifier { my ($self, $str) = @_; return unless defined $str; $str =~ s/`//g; $str =~ s/^\s+//; $str =~ s/\s+$//; return $str; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryParser package # ########################################################################### # ########################################################################### # QueryRewriter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryRewriter.pm # t/lib/QueryRewriter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package QueryRewriter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; our $verbs = qr{^SHOW|^FLUSH|^COMMIT|^ROLLBACK|^BEGIN|SELECT|INSERT |UPDATE|DELETE|REPLACE|^SET|UNION|^START|^LOCK}xi; my $quote_re = qr/"(?:(?!(? [^()]+ ) # Non-parens without backtracking | (??{ $bal }) # Group with matching parens )* \) /x; my $olc_re = qr/(?:--|#)[^'"\r\n]*(?=[\r\n]|\Z)/; # One-line comments my $mlc_re = qr#/\*[^!].*?\*/#sm; # But not /*!version */ my $vlc_re = qr#/\*.*?[0-9+].*?\*/#sm; # For SHOW + /*!version */ my $vlc_rf = qr#^(SHOW).*?/\*![0-9+].*?\*/#sm; # Variation for SHOW sub new { my ( $class, %args ) = @_; my $self = { %args }; return bless $self, $class; } sub strip_comments { my ( $self, $query ) = @_; return unless $query; $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; if ( $query =~ m/$vlc_rf/i ) { # contains show + version $query =~ s/$vlc_re//go; } return $query; } sub shorten { my ( $self, $query, $length ) = @_; $query =~ s{ \A( (?:INSERT|REPLACE) (?:\s+LOW_PRIORITY|DELAYED|HIGH_PRIORITY|IGNORE)? (?:\s\w+)*\s+\S+\s+VALUES\s*\(.*?\) ) \s*,\s*\(.*?(ON\s+DUPLICATE|\Z)} {$1 /*... omitted ...*/$2}xsi; return $query unless $query =~ m/IN\s*\(\s*(?!select)/i; my $last_length = 0; my $query_length = length($query); while ( $length > 0 && $query_length > $length && $query_length < ( $last_length || $query_length + 1 ) ) { $last_length = $query_length; $query =~ s{ (\bIN\s*\() # The opening of an IN list ([^\)]+) # Contents of the list, assuming no item contains paren (?=\)) # Close of the list } { $1 . __shorten($2) }gexsi; } return $query; } sub __shorten { my ( $snippet ) = @_; my @vals = split(/,/, $snippet); return $snippet unless @vals > 20; my @keep = splice(@vals, 0, 20); # Remove and save the first 20 items return join(',', @keep) . "/*... omitted " . scalar(@vals) . " items ...*/"; } sub fingerprint { my ( $self, $query ) = @_; $query =~ m#\ASELECT /\*!40001 SQL_NO_CACHE \*/ \* FROM `# # mysqldump query && return 'mysqldump'; $query =~ m#/\*\w+\.\w+:[0-9]/[0-9]\*/# # pt-table-checksum, etc query && return 'percona-toolkit'; $query =~ m/\Aadministrator command: / && return $query; $query =~ m/\A\s*(call\s+\S+)\(/i && return lc($1); # Warning! $1 used, be careful. 
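# (Illustrative effect of the short-circuit matches above: a mysqldump-style
#  "SELECT /*!40001 SQL_NO_CACHE */ * FROM `tbl`" fingerprints to the literal
#  string 'mysqldump', a checksum query injected by pt-table-checksum to
#  'percona-toolkit', and "CALL some_proc(1, 2)" to "call some_proc".
#  Anything else falls through to the generic rewriting that follows.)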
if ( my ($beginning) = $query =~ m/\A((?:INSERT|REPLACE)(?: IGNORE)?\s+INTO.+?VALUES\s*\(.*?\))\s*,\s*\(/is ) { $query = $beginning; # Shorten multi-value INSERT statements ASAP } $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; $query =~ s/\Ause \S+\Z/use ?/i # Abstract the DB in USE && return $query; $query =~ s/\\["']//g; # quoted strings $query =~ s/".*?"/?/sg; # quoted strings $query =~ s/'.*?'/?/sg; # quoted strings if ( $self->{match_md5_checksums} ) { $query =~ s/([._-])[a-f0-9]{32}/$1?/g; } if ( !$self->{match_embedded_numbers} ) { $query =~ s/[0-9+-][0-9a-f.xb+-]*/?/g; } else { $query =~ s/\b[0-9+-][0-9a-f.xb+-]*/?/g; } if ( $self->{match_md5_checksums} ) { $query =~ s/[xb+-]\?/?/g; } else { $query =~ s/[xb.+-]\?/?/g; } $query =~ s/\A\s+//; # Chop off leading whitespace chomp $query; # Kill trailing whitespace $query =~ tr[ \n\t\r\f][ ]s; # Collapse whitespace $query = lc $query; $query =~ s/\bnull\b/?/g; # Get rid of NULLs $query =~ s{ # Collapse IN and VALUES lists \b(in|values?)(?:[\s,]*\([\s?,]*\))+ } {$1(?+)}gx; $query =~ s{ # Collapse UNION \b(select\s.*?)(?:(\sunion(?:\sall)?)\s\1)+ } {$1 /*repeat$2*/}xg; $query =~ s/\blimit \?(?:, ?\?| offset \?)?/limit ?/; # LIMIT if ( $query =~ m/\bORDER BY /gi ) { # Find, anchor on ORDER BY clause 1 while $query =~ s/\G(.+?)\s+ASC/$1/gi && pos $query; } return $query; } sub distill_verbs { my ( $self, $query ) = @_; $query =~ m/\A\s*call\s+(\S+)\(/i && return "CALL $1"; $query =~ m/\A\s*use\s+/ && return "USE"; $query =~ m/\A\s*UNLOCK TABLES/i && return "UNLOCK"; $query =~ m/\A\s*xa\s+(\S+)/i && return "XA_$1"; if ( $query =~ m/\Aadministrator command:/ ) { $query =~ s/administrator command:/ADMIN/; $query = uc $query; return $query; } $query = $self->strip_comments($query); if ( $query =~ m/\A\s*SHOW\s+/i ) { PTDEBUG && _d($query); $query = uc $query; $query =~ s/\s+(?:GLOBAL|SESSION|FULL|STORAGE|ENGINE)\b/ /g; $query =~ s/\s+COUNT[^)]+\)//g; $query =~ s/\s+(?:FOR|FROM|LIKE|WHERE|LIMIT|IN)\b.+//ms; $query =~ s/\A(SHOW(?:\s+\S+){1,2}).*\Z/$1/s; $query =~ s/\s+/ /g; PTDEBUG && _d($query); return $query; } eval $QueryParser::data_def_stmts; eval $QueryParser::tbl_ident; my ( $dds ) = $query =~ /^\s*($QueryParser::data_def_stmts)\b/i; if ( $dds) { my ( $obj ) = $query =~ m/$dds.+(DATABASE|TABLE)\b/i; $obj = uc $obj if $obj; PTDEBUG && _d('Data def statment:', $dds, 'obj:', $obj); my ($db_or_tbl) = $query =~ m/(?:TABLE|DATABASE)\s+($QueryParser::tbl_ident)(\s+.*)?/i; PTDEBUG && _d('Matches db or table:', $db_or_tbl); return uc($dds . ($obj ? " $obj" : '')), $db_or_tbl; } my @verbs = $query =~ m/\b($verbs)\b/gio; @verbs = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } map { uc } @verbs; }; if ( ($verbs[0] || '') eq 'SELECT' && @verbs > 1 ) { PTDEBUG && _d("False-positive verbs after SELECT:", @verbs[1..$#verbs]); my $union = grep { $_ eq 'UNION' } @verbs; @verbs = $union ? 
qw(SELECT UNION) : qw(SELECT); } my $verb_str = join(q{ }, @verbs); return $verb_str; } sub __distill_tables { my ( $self, $query, $table, %args ) = @_; my $qp = $args{QueryParser} || $self->{QueryParser}; die "I need a QueryParser argument" unless $qp; my @tables = map { $_ =~ s/`//g; $_ =~ s/(_?)[0-9]+/$1?/g; $_; } grep { defined $_ } $qp->get_tables($query); push @tables, $table if $table; @tables = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } @tables; }; return @tables; } sub distill { my ( $self, $query, %args ) = @_; if ( $args{generic} ) { my ($cmd, $arg) = $query =~ m/^(\S+)\s+(\S+)/; return '' unless $cmd; $query = (uc $cmd) . ($arg ? " $arg" : ''); } else { my ($verbs, $table) = $self->distill_verbs($query, %args); if ( $verbs && $verbs =~ m/^SHOW/ ) { my %alias_for = qw( SCHEMA DATABASE KEYS INDEX INDEXES INDEX ); map { $verbs =~ s/$_/$alias_for{$_}/ } keys %alias_for; $query = $verbs; } else { my @tables = $self->__distill_tables($query, $table, %args); $query = join(q{ }, $verbs, @tables); } } if ( $args{trf} ) { $query = $args{trf}->($query, %args); } return $query; } sub convert_to_select { my ( $self, $query ) = @_; return unless $query; return if $query =~ m/=\s*\(\s*SELECT /i; $query =~ s{ \A.*? update(?:\s+(?:low_priority|ignore))?\s+(.*?) \s+set\b(.*?) (?:\s*where\b(.*?))? (limit\s*[0-9]+(?:\s*,\s*[0-9]+)?)? \Z } {__update_to_select($1, $2, $3, $4)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ .*?\binto\b(.*?)\(([^\)]+)\)\s* values?\s*(\(.*?\))\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select($1, $2, $3)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ (?:.*?\binto)\b(.*?)\s* set\s+(.*?)\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select_with_set($1, $2)}exsi || $query =~ s{ \A.*? delete\s+(.*?) \bfrom\b(.*) \Z } {__delete_to_select($1, $2)}exsi; $query =~ s/\s*on\s+duplicate\s+key\s+update.*\Z//si; $query =~ s/\A.*?(?=\bSELECT\s*\b)//ism; return $query; } sub convert_select_list { my ( $self, $query ) = @_; $query =~ s{ \A\s*select(.*?)\bfrom\b } {$1 =~ m/\*/ ? "select 1 from" : "select isnull(coalesce($1)) from"}exi; return $query; } sub __delete_to_select { my ( $delete, $join ) = @_; if ( $join =~ m/\bjoin\b/ ) { return "select 1 from $join"; } return "select * from $join"; } sub __insert_to_select { my ( $tbl, $cols, $vals ) = @_; PTDEBUG && _d('Args:', @_); my @cols = split(/,/, $cols); PTDEBUG && _d('Cols:', @cols); $vals =~ s/^\(|\)$//g; # Strip leading/trailing parens my @vals = $vals =~ m/($quote_re|[^,]*${bal}[^,]*|[^,]+)/g; PTDEBUG && _d('Vals:', @vals); if ( @cols == @vals ) { return "select * from $tbl where " . join(' and ', map { "$cols[$_]=$vals[$_]" } (0..$#cols)); } else { return "select * from $tbl limit 1"; } } sub __insert_to_select_with_set { my ( $from, $set ) = @_; $set =~ s/,/ and /g; return "select * from $from where $set "; } sub __update_to_select { my ( $from, $set, $where, $limit ) = @_; return "select $set from $from " . ( $where ? "where $where" : '' ) . ( $limit ? " $limit " : '' ); } sub wrap_in_derived { my ( $self, $query ) = @_; return unless $query; return $query =~ m/\A\s*select/i ? "select 1 from ($query) as x limit 1" : $query; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryRewriter package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_fingerprint; use English qw(-no_match_vars); use Data::Dumper; $Data::Dumper::Indent = 1; $OUTPUT_AUTOFLUSH = 1; use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub main { @ARGV = @_; # set global ARGV for this package # ########################################################################## # Get configuration information. # ########################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); $o->usage_or_errors(); my $qp = new QueryParser(); my $qr = new QueryRewriter( QueryParser => $qp, match_md5_checksums => $o->get('match-md5-checksums'), match_embedded_numbers => $o->get('match-embedded-numbers'), ); if ( $o->got('query') ) { print $qr->fingerprint($o->get('query')), "\n"; } else { local $INPUT_RECORD_SEPARATOR = ";\n"; while ( <> ) { my $query = $_; chomp $query; $query =~ s/^#.+$//mg; $query =~ s/^\s+//; next unless $query =~ m/^\w/; print $qr->fingerprint($query), "\n"; } } } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################# # Documentation. # ############################################################################# =pod =head1 NAME pt-fingerprint - Convert queries into fingerprints. =head1 SYNOPSIS Usage: pt-fingerprint [OPTIONS] [FILES] pt-fingerprint converts queries into fingerprints. With the --query option, converts the option's value into a fingerprint. With no options, treats command-line arguments as FILEs and reads and converts semicolon-separated queries from the FILEs. When FILE is -, it read standard input. Convert a single query: pt-fingerprint --query "select a, b, c from users where id = 500" Convert a file full of queries: pt-fingerprint /path/to/file.txt =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION A query fingerprint is the abstracted form of a query, which makes it possible to group similar queries together. Abstracting a query removes literal values, normalizes whitespace, and so on. 
For example, consider these two queries: SELECT name, password FROM user WHERE id='12823'; select name, password from user where id=5; Both of those queries will fingerprint to select name, password from user where id=? Once the query's fingerprint is known, we can then talk about a query as though it represents all similar queries. Query fingerprinting accommodates a great many special cases, which have proven necessary in the real world. For example, an IN list with 5 literals is really equivalent to one with 4 literals, so lists of literals are collapsed to a single one. If you want to understand more about how and why all of these cases are handled, please review the test cases in the Subversion repository. If you find something that is not fingerprinted properly, please submit a bug report with a reproducible test case. Here is a list of transformations during fingerprinting, which might not be exhaustive: =over =item * Group all SELECT queries from mysqldump together, even if they are against different tables. Ditto for all of pt-table-checksum's checksum queries. =item * Shorten multi-value INSERT statements to a single VALUES() list. =item * Strip comments. =item * Abstract the databases in USE statements, so all USE statements are grouped together. =item * Replace all literals, such as quoted strings. For efficiency, the code that replaces literal numbers is somewhat non-selective, and might replace some things as numbers when they really are not. Hexadecimal literals are also replaced. NULL is treated as a literal. Numbers embedded in identifiers are also replaced, so tables named similarly will be fingerprinted to the same values (e.g. users_2009 and users_2010 will fingerprint identically). =item * Collapse all whitespace into a single space. =item * Lowercase the entire query. =item * Replace all literals inside of IN() and VALUES() lists with a single placeholder, regardless of cardinality. =item * Collapse multiple identical UNION queries into a single one. =back =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --help Show help and exit. =item --match-embedded-numbers Match numbers embedded in words and replace as single values. This option causes the tool to be more careful about matching numbers so that words with numbers, like C are matched and replaced as a single C placeholder. Otherwise the default number matching pattern will replace C as C. This is helpful if database or table names contain numbers. =item --match-md5-checksums Match MD5 checksums and replace as single values. This option causes the tool to be more careful about matching numbers so that MD5 checksums like C are matched and replaced as a single C placeholder. Otherwise, the default number matching pattern will replace C as C. =item --query type: string The query to convert into a fingerprint. =item --version Show version and exit. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-fingerprint ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. 
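A quick way to confirm those Perl dependencies are available is a one-liner
like the following (an illustrative check, not part of the tool itself):

  perl -MDBI -MDBD::mysql -e 'print "dependencies ok\n"'

If either module is missing, Perl reports which one it could not locate.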
=head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-fingerprint 3.1.0 =cut percona-toolkit-3.1/bin/pt-fk-error-logger000775 001750 001750 00000407453 13535723560 022006 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit OptionParser Quoter DSNParser Cxn Daemon Transformers HTTP::Micro VersionCheck Runtime )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. 
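#
# A minimal sketch of the fat-packing trick used in the BEGIN block above
# (illustrative only; Some::Module is a made-up name):
#
#   BEGIN {
#      # Claim Some::Module was already loaded from this very file, so a
#      # later "use Some::Module" will not search @INC for Some/Module.pm.
#      $INC{'Some/Module.pm'} = __FILE__;
#   }
#   { package Some::Module; sub hello { 'hi' } 1; }
#   use Some::Module;              # no disk lookup; the embedded package is used
#   print Some::Module::hello();   # prints "hi"
#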
# ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. 
%args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
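# (For reference: specs arrive here as hashrefs built by _pod_to_specs(),
#  roughly { spec => 'some-option=s', desc => '...', group => 'default',
#  attributes => {...} }, while rules arrive as plain prose strings such as
#  "--foo and --bar are mutually exclusive"; the values shown are illustrative.)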
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . 
'`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? ($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." 
unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? ';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . 
" OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . ": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? 
$dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Cxn package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Cxn.pm # t/lib/Cxn.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Cxn; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Scalar::Util qw(blessed); use constant { PTDEBUG => $ENV{PTDEBUG} || 0, PERCONA_TOOLKIT_TEST_USE_DSN_NAMES => $ENV{PERCONA_TOOLKIT_TEST_USE_DSN_NAMES} || 0, }; sub new { my ( $class, %args ) = @_; my @required_args = qw(DSNParser OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($dp, $o) = @args{@required_args}; my $dsn_defaults = $dp->parse_options($o); my $prev_dsn = $args{prev_dsn}; my $dsn = $args{dsn}; if ( !$dsn ) { $args{dsn_string} ||= 'h=' . ($dsn_defaults->{h} || 'localhost'); $dsn = $dp->parse( $args{dsn_string}, $prev_dsn, $dsn_defaults); } elsif ( $prev_dsn ) { $dsn = $dp->copy($prev_dsn, $dsn); } my $dsn_name = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; my $self = { dsn => $dsn, dbh => $args{dbh}, dsn_name => $dsn_name, hostname => '', set => $args{set}, NAME_lc => defined($args{NAME_lc}) ? 
$args{NAME_lc} : 1, dbh_set => 0, ask_pass => $o->get('ask-pass'), DSNParser => $dp, is_cluster_node => undef, parent => $args{parent}, }; return bless $self, $class; } sub connect { my ( $self, %opts ) = @_; my $dsn = $opts{dsn} || $self->{dsn}; my $dp = $self->{DSNParser}; my $dbh = $self->{dbh}; if ( !$dbh || !$dbh->ping() ) { if ( $self->{ask_pass} && !$self->{asked_for_pass} && !defined $dsn->{p} ) { $dsn->{p} = OptionParser::prompt_noecho("Enter MySQL password: "); $self->{asked_for_pass} = 1; } $dbh = $dp->get_dbh( $dp->get_cxn_params($dsn), { AutoCommit => 1, %opts, }, ); } $dbh = $self->set_dbh($dbh); if ( $opts{dsn} ) { $self->{dsn} = $dsn; $self->{dsn_name} = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; } PTDEBUG && _d($dbh, 'Connected dbh to', $self->{hostname},$self->{dsn_name}); return $dbh; } sub set_dbh { my ($self, $dbh) = @_; if ( $self->{dbh} && $self->{dbh} == $dbh && $self->{dbh_set} ) { PTDEBUG && _d($dbh, 'Already set dbh'); return $dbh; } PTDEBUG && _d($dbh, 'Setting dbh'); $dbh->{FetchHashKeyName} = 'NAME_lc' if $self->{NAME_lc}; my $sql = 'SELECT @@server_id /*!50038 , @@hostname*/'; PTDEBUG && _d($dbh, $sql); my ($server_id, $hostname) = $dbh->selectrow_array($sql); PTDEBUG && _d($dbh, 'hostname:', $hostname, $server_id); if ( $hostname ) { $self->{hostname} = $hostname; } if ( $self->{parent} ) { PTDEBUG && _d($dbh, 'Setting InactiveDestroy=1 in parent'); $dbh->{InactiveDestroy} = 1; } if ( my $set = $self->{set}) { $set->($dbh); } $self->{dbh} = $dbh; $self->{dbh_set} = 1; return $dbh; } sub lost_connection { my ($self, $e) = @_; return 0 unless $e; return $e =~ m/MySQL server has gone away/ || $e =~ m/Lost connection to MySQL server/ || $e =~ m/Server shutdown in progress/; } sub dbh { my ($self) = @_; return $self->{dbh}; } sub dsn { my ($self) = @_; return $self->{dsn}; } sub name { my ($self) = @_; return $self->{dsn_name} if PERCONA_TOOLKIT_TEST_USE_DSN_NAMES; return $self->{hostname} || $self->{dsn_name} || 'unknown host'; } sub description { my ($self) = @_; return sprintf("%s -> %s:%s", $self->name(), $self->{dsn}->{h}, $self->{dsn}->{P} || 'socket'); } sub get_id { my ($self, $cxn) = @_; $cxn ||= $self; my $unique_id; if ($cxn->is_cluster_node()) { # for cluster we concatenate various variables to maximize id 'uniqueness' across versions my $sql = q{SHOW STATUS LIKE 'wsrep\_local\_index'}; my (undef, $wsrep_local_index) = $cxn->dbh->selectrow_array($sql); PTDEBUG && _d("Got cluster wsrep_local_index: ",$wsrep_local_index); $unique_id = $wsrep_local_index."|"; foreach my $val ('server\_id', 'wsrep\_sst\_receive\_address', 'wsrep\_node\_name', 'wsrep\_node\_address') { my $sql = "SHOW VARIABLES LIKE '$val'"; PTDEBUG && _d($cxn->name, $sql); my (undef, $val) = $cxn->dbh->selectrow_array($sql); $unique_id .= "|$val"; } } else { my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); $unique_id = $cxn->dbh->selectrow_array($sql); } PTDEBUG && _d("Generated unique id for cluster:", $unique_id); return $unique_id; } sub is_cluster_node { my ($self, $cxn) = @_; $cxn ||= $self; my $sql = "SHOW VARIABLES LIKE 'wsrep\_on'"; my $dbh; if ($cxn->isa('DBI::db')) { $dbh = $cxn; PTDEBUG && _d($sql); #don't invoke name() if it's not a Cxn! } else { $dbh = $cxn->dbh(); PTDEBUG && _d($cxn->name, $sql); } my $row = $dbh->selectrow_arrayref($sql); return $row && $row->[1] && ($row->[1] eq 'ON' || $row->[1] eq '1') ? 
1 : 0; } sub remove_duplicate_cxns { my ($self, %args) = @_; my @cxns = @{$args{cxns}}; my $seen_ids = $args{seen_ids} || {}; PTDEBUG && _d("Removing duplicates from ", join(" ", map { $_->name } @cxns)); my @trimmed_cxns; for my $cxn ( @cxns ) { my $id = $cxn->get_id(); PTDEBUG && _d('Server ID for ', $cxn->name, ': ', $id); if ( ! $seen_ids->{$id}++ ) { push @trimmed_cxns, $cxn } else { PTDEBUG && _d("Removing ", $cxn->name, ", ID ", $id, ", because we've already seen it"); } } return \@trimmed_cxns; } sub DESTROY { my ($self) = @_; PTDEBUG && _d('Destroying cxn'); if ( $self->{parent} ) { PTDEBUG && _d($self->{dbh}, 'Not disconnecting dbh in parent'); } elsif ( $self->{dbh} && blessed($self->{dbh}) && $self->{dbh}->can("disconnect") ) { PTDEBUG && _d($self->{dbh}, 'Disconnecting dbh on', $self->{hostname}, $self->{dsn_name}); $self->{dbh}->disconnect(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Cxn package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? 
$o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # Transformers package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Transformers.pm # t/lib/Transformers.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Transformers; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timegm timelocal); use Digest::MD5 qw(md5_hex); use B qw(); BEGIN { require Exporter; our @ISA = qw(Exporter); our %EXPORT_TAGS = (); our @EXPORT = (); our @EXPORT_OK = qw( micro_t percentage_of secs_to_time time_to_secs shorten ts parse_timestamp unix_timestamp any_unix_timestamp make_checksum crc32 encode_json ); } our $mysql_ts = qr/(\d\d)(\d\d)(\d\d) +(\d+):(\d+):(\d+)(\.\d+)?/; our $proper_ts = qr/(\d\d\d\d)-(\d\d)-(\d\d)[T ](\d\d):(\d\d):(\d\d)(\.\d+)?/; our $n_ts = qr/(\d{1,5})([shmd]?)/; # Limit \d{1,5} because \d{6} looks sub micro_t { my ( $t, %args ) = @_; my $p_ms = defined $args{p_ms} ? $args{p_ms} : 0; # precision for ms vals my $p_s = defined $args{p_s} ? $args{p_s} : 0; # precision for s vals my $f; $t = 0 if $t < 0; $t = sprintf('%.17f', $t) if $t =~ /e/; $t =~ s/\.(\d{1,6})\d*/\.$1/; if ($t > 0 && $t <= 0.000999) { $f = ($t * 1000000) . 'us'; } elsif ($t >= 0.001000 && $t <= 0.999999) { $f = sprintf("%.${p_ms}f", $t * 1000); $f = ($f * 1) . 'ms'; # * 1 to remove insignificant zeros } elsif ($t >= 1) { $f = sprintf("%.${p_s}f", $t); $f = ($f * 1) . 's'; # * 1 to remove insignificant zeros } else { $f = 0; # $t should = 0 at this point } return $f; } sub percentage_of { my ( $is, $of, %args ) = @_; my $p = $args{p} || 0; # float precision my $fmt = $p ? 
"%.${p}f" : "%d"; return sprintf $fmt, ($is * 100) / ($of ||= 1); } sub secs_to_time { my ( $secs, $fmt ) = @_; $secs ||= 0; return '00:00' unless $secs; $fmt ||= $secs >= 86_400 ? 'd' : $secs >= 3_600 ? 'h' : 'm'; return $fmt eq 'd' ? sprintf( "%d+%02d:%02d:%02d", int($secs / 86_400), int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : $fmt eq 'h' ? sprintf( "%02d:%02d:%02d", int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : sprintf( "%02d:%02d", int(($secs % 3_600) / 60), $secs % 60); } sub time_to_secs { my ( $val, $default_suffix ) = @_; die "I need a val argument" unless defined $val; my $t = 0; my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; $suffix = $suffix || $default_suffix || 's'; if ( $suffix =~ m/[smhd]/ ) { $t = $suffix eq 's' ? $num * 1 # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $t *= -1 if $prefix && $prefix eq '-'; } else { die "Invalid suffix for $val: $suffix"; } return $t; } sub shorten { my ( $num, %args ) = @_; my $p = defined $args{p} ? $args{p} : 2; # float precision my $d = defined $args{d} ? $args{d} : 1_024; # divisor my $n = 0; my @units = ('', qw(k M G T P E Z Y)); while ( $num >= $d && $n < @units - 1 ) { $num /= $d; ++$n; } return sprintf( $num =~ m/\./ || $n ? '%1$.'.$p.'f%2$s' : '%1$d', $num, $units[$n]); } sub ts { my ( $time, $gmt ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = $gmt ? gmtime($time) : localtime($time); $mon += 1; $year += 1900; my $val = sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); if ( my ($us) = $time =~ m/(\.\d+)$/ ) { $us = sprintf("%.6f", $us); $us =~ s/^0\././; $val .= $us; } return $val; } sub parse_timestamp { my ( $val ) = @_; if ( my($y, $m, $d, $h, $i, $s, $f) = $val =~ m/^$mysql_ts$/ ) { return sprintf "%d-%02d-%02d %02d:%02d:" . (defined $f ? '%09.6f' : '%02d'), $y + 2000, $m, $d, $h, $i, (defined $f ? $s + $f : $s); } elsif ( $val =~ m/^$proper_ts$/ ) { return $val; } return $val; } sub unix_timestamp { my ( $val, $gmt ) = @_; if ( my($y, $m, $d, $h, $i, $s, $us) = $val =~ m/^$proper_ts$/ ) { $val = $gmt ? timegm($s, $i, $h, $d, $m - 1, $y) : timelocal($s, $i, $h, $d, $m - 1, $y); if ( defined $us ) { $us = sprintf('%.6f', $us); $us =~ s/^0\././; $val .= $us; } } return $val; } sub any_unix_timestamp { my ( $val, $callback ) = @_; if ( my ($n, $suffix) = $val =~ m/^$n_ts$/ ) { $n = $suffix eq 's' ? $n # Seconds : $suffix eq 'm' ? $n * 60 # Minutes : $suffix eq 'h' ? $n * 3600 # Hours : $suffix eq 'd' ? 
$n * 86400 # Days : $n; # default: Seconds PTDEBUG && _d('ts is now - N[shmd]:', $n); return time - $n; } elsif ( $val =~ m/^\d{9,}/ ) { PTDEBUG && _d('ts is already a unix timestamp'); return $val; } elsif ( my ($ymd, $hms) = $val =~ m/^(\d{6})(?:\s+(\d+:\d+:\d+))?/ ) { PTDEBUG && _d('ts is MySQL slow log timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp(parse_timestamp($val)); } elsif ( ($ymd, $hms) = $val =~ m/^(\d{4}-\d\d-\d\d)(?:[T ](\d+:\d+:\d+))?/) { PTDEBUG && _d('ts is properly formatted timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp($val); } else { PTDEBUG && _d('ts is MySQL expression'); return $callback->($val) if $callback && ref $callback eq 'CODE'; } PTDEBUG && _d('Unknown ts type:', $val); return; } sub make_checksum { my ( $val ) = @_; my $checksum = uc substr(md5_hex($val), -16); PTDEBUG && _d($checksum, 'checksum for', $val); return $checksum; } sub crc32 { my ( $string ) = @_; return unless $string; my $poly = 0xEDB88320; my $crc = 0xFFFFFFFF; foreach my $char ( split(//, $string) ) { my $comp = ($crc ^ ord($char)) & 0xFF; for ( 1 .. 8 ) { $comp = $comp & 1 ? $poly ^ ($comp >> 1) : $comp >> 1; } $crc = (($crc >> 8) & 0x00FFFFFF) ^ $comp; } return $crc ^ 0xFFFFFFFF; } my $got_json = eval { require JSON }; sub encode_json { return JSON::encode_json(@_) if $got_json; my ( $data ) = @_; return (object_to_json($data) || ''); } sub object_to_json { my ($obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return hash_to_json($obj); } elsif($type eq 'ARRAY'){ return array_to_json($obj); } else { return value_to_json($obj); } } sub hash_to_json { my ($obj) = @_; my @res; for my $k ( sort { $a cmp $b } keys %$obj ) { push @res, string_to_json( $k ) . ":" . ( object_to_json( $obj->{$k} ) || value_to_json( $obj->{$k} ) ); } return '{' . ( @res ? join( ",", @res ) : '' ) . '}'; } sub array_to_json { my ($obj) = @_; my @res; for my $v (@$obj) { push @res, object_to_json($v) || value_to_json($v); } return '[' . ( @res ? join( ",", @res ) : '' ) . ']'; } sub value_to_json { my ($value) = @_; return 'null' if(!defined $value); my $b_obj = B::svref_2object(\$value); # for round trip problem my $flags = $b_obj->FLAGS; return $value # as is if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? my $type = ref($value); if( !$type ) { return string_to_json($value); } else { return 'null'; } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g; $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; utf8::upgrade($arg); utf8::encode($arg); return '"' . $arg . '"'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Transformers package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. 
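# A minimal usage sketch of the HTTP::Micro client defined below, shown only
# as an illustration (this mirrors how VersionCheck::pingback() later in this
# file drives it; the URL is the default version-check endpoint used there):
#
#   my $ua       = HTTP::Micro->new( timeout => 3 );
#   my $response = $ua->request('GET', 'https://v.percona.com');
#   if ( $response->{success} ) {
#      print "HTTP $response->{status}\n", ($response->{content} || ''), "\n";
#   }
#   else {
#      warn "Request failed: $response->{status} $response->{reason}\n";
#   }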
# ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? $host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? 
$1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for $host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! 
!= EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! 
= 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! 
%$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . 
$file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # Runtime package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Runtime.pm # t/lib/Runtime.t # See https://launchpad.net/percona-toolkit for more information. 
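# A brief usage sketch of the Runtime package defined below (the one-hour
# budget is illustrative): the 'now' callback supplies the clock, and
# have_time() keeps returning true until the optional run_time is used up,
# or forever when no run_time is given.
#
#   my $rt = Runtime->new(
#      now      => sub { return time },
#      run_time => 3600,                 # seconds; omit to run forever
#   );
#   while ( $rt->have_time() ) {
#      # ... do one unit of work, then sleep, as the main loop below does ...
#      sleep 1;
#   }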
# ########################################################################### { package Runtime; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(now); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless exists $args{$arg}; } my $run_time = $args{run_time}; if ( defined $run_time ) { die "run_time must be > 0" if $run_time <= 0; } my $now = $args{now}; die "now must be a callback" unless ref $now eq 'CODE'; my $self = { run_time => $run_time, now => $now, start_time => undef, end_time => undef, time_left => undef, stop => 0, }; return bless $self, $class; } sub time_left { my ( $self, %args ) = @_; if ( $self->{stop} ) { PTDEBUG && _d("No time left because stop was called"); return 0; } my $now = $self->{now}->(%args); PTDEBUG && _d("Current time:", $now); if ( !defined $self->{start_time} ) { $self->{start_time} = $now; } return unless defined $now; my $run_time = $self->{run_time}; return unless defined $run_time; if ( !$self->{end_time} ) { $self->{end_time} = $now + $run_time; PTDEBUG && _d("End time:", $self->{end_time}); } $self->{time_left} = $self->{end_time} - $now; PTDEBUG && _d("Time left:", $self->{time_left}); return $self->{time_left}; } sub have_time { my ( $self, %args ) = @_; my $time_left = $self->time_left(%args); return 1 if !defined $time_left; # run forever return $time_left <= 0 ? 0 : 1; # <=0s means run time has elapsed } sub time_elapsed { my ( $self, %args ) = @_; my $start_time = $self->{start_time}; return 0 unless $start_time; my $now = $self->{now}->(%args); PTDEBUG && _d("Current time:", $now); my $time_elapsed = $now - $start_time; PTDEBUG && _d("Time elapsed:", $time_elapsed); if ( $time_elapsed < 0 ) { warn "Current time $now is earlier than start time $start_time"; } return $time_elapsed; } sub reset { my ( $self ) = @_; $self->{start_time} = undef; $self->{end_time} = undef; $self->{time_left} = undef; $self->{stop} = 0; PTDEBUG && _d("Reset run time"); return; } sub stop { my ( $self ) = @_; $self->{stop} = 1; return; } sub start { my ( $self ) = @_; $self->{stop} = 0; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Runtime package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. 
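# The "runnable module" pattern mentioned above, reduced to a generic sketch
# (package and sub names here are placeholders, not this tool's code): main()
# runs only when the file is executed directly; when the file is loaded with
# require/do (for example by a test suite), caller() is true and main() is
# skipped, so individual subs can be exercised in isolation.
#
#   package my_runnable_module;
#   sub main {
#      my @args = @_;
#      # ... parse options and do the real work ...
#      return 0;                     # becomes the exit status
#   }
#   if ( !caller ) { exit main(@ARGV); }
#   1;                               # true value so require() succeeds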
# ########################################################################### package pt_fk_error_logger; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use sigtrap 'handler', \&sig_int, 'normal-signals'; use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; Transformers->import(qw(parse_timestamp)); my $oktorun = 1; my $exit_status = 0; sub main { local @ARGV = @_; # set global ARGV for this package $oktorun = 1; $exit_status = 0; # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); my $src; if ( my $src_dsn_string = shift @ARGV ) { $src = Cxn->new( dsn_string => $src_dsn_string, parent => $o->get('daemonize'), DSNParser => $dp, OptionParser => $o, ); } my $dst; if ( my $dst_dsn = $o->get('dest') ) { $dst = Cxn->new( dsn => $dst_dsn, prev_dsn => ($src ? $src->dsn : undef), parent => $o->get('daemonize'), DSNParser => $dp, OptionParser => $o, ); } if ( !$o->get('help') ) { if ( !$src ) { $o->save_error('No DSN was specified.'); } if ( $dst && !$dst->dsn->{D} ) { $o->save_error("--dest requires a 'D' (database) part."); } if ( $dst && !$dst->dsn->{t} ) { $o->save_error("--dest requires a 't' (table) part."); } } $o->usage_or_errors(); # ######################################################################## # Connect to MySQL. # ######################################################################## my $q = Quoter->new(); $src->connect(); my $ins_sth; if ( $dst ) { $dst->connect(); my $db_tbl = $q->join_quote($dst->dsn->{D}, $dst->dsn->{t}); my $sql = "INSERT IGNORE INTO $db_tbl VALUES (?, ?)"; PTDEBUG && _d('--dest INSERT SQL:', $sql); $ins_sth = $dst->dbh->prepare($sql); } # ######################################################################## # Daemonize only after (potentially) asking for passwords for --ask-pass. # ######################################################################## my $daemon; if ( $o->get('daemonize') ) { $daemon = new Daemon(o=>$o); $daemon->daemonize(); PTDEBUG && _d('I am a daemon now'); } elsif ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # If we daemonized, the parent has already exited and we're the child. # We shared a copy of every Cxn with the parent, and the parent's copies # were destroyed but the dbhs were not disconnected because the parent # attrib was true. Now, as the child, set it false so the dbhs will be # disconnected when our Cxn copies are destroyed. If we didn't daemonize, # then we're not really a parent (since we have no children), so set it # false to auto-disconnect the dbhs when our Cxns are destroyed. $src->{parent} = 0; $dst->{parent} = 0 if $dst; # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ { dbh => $src->dbh, dsn => $src->dsn }, ($dst ? { dbh => $dst->dbh, dsn => $dst->dsn } : ()) ], ); } # ######################################################################## # Start finding and logging foreign key errors. 
# ######################################################################## my $run_time = Runtime->new( run_time => $o->get('run-time'), now => sub { return time }, ); my $interval = $o->get('interval'); my $iters = $o->get('iterations'); PTDEBUG && _d('iterations:', $iters, 'interval:', $interval); ITERATION: while ( $oktorun && $run_time->have_time() && (!defined $iters || $iters--) ) { my ($ts, $fk_error); eval { my $sql = "SHOW /*!40100 ENGINE*/ INNODB STATUS " . "/* pt-fk-error-logger */"; PTDEBUG && _d($sql); my $text = $src->dbh->selectrow_hashref($sql)->{status}; ($ts, $fk_error) = get_fk_error($text); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d('Error getting InnoDB status:', $e); if ( $src->lost_connection($e) ) { eval { $src->connect() }; if ( $EVAL_ERROR ) { warn "Lost connection to MySQL. Will try to reconnect " . "in the next iteration.\n"; } else { PTDEBUG && _d('Reconnected to MySQL'); redo ITERATION; } } else { warn "Error parsing SHOW ENGINE INNODB STATUS: $EVAL_ERROR"; $exit_status |= 1; } } else { if ( $ts && $fk_error ) { # Save and/or print the foreign key error. if ( $ins_sth ) { my $fk_ts = parse_timestamp($ts); PTDEBUG && _d('Saving fk error', $ts, $fk_error); eval { $ins_sth->execute($fk_ts, $fk_error); }; if ( $EVAL_ERROR ) { warn $EVAL_ERROR; PTDEBUG && _d($EVAL_ERROR); } } if ( !$o->get('quiet') ) { print "$ts $fk_error\n\n"; } } } # Sleep if there's an --iteration left. if ( !defined $iters || $iters ) { PTDEBUG && _d('Sleeping', $interval, 'seconds'); sleep $interval; } } PTDEBUG && _d('Done running, exiting', $exit_status); return $exit_status; } # ############################################################################ # Subroutines # ############################################################################ sub get_fk_error { my ( $text ) = @_; PTDEBUG && _d($text); # Quick check if text even has a foreign key error. if ( $text !~ m/LATEST FOREIGN KEY ERROR/ ) { PTDEBUG && _d('No fk error'); return; } # InnoDB timestamp my $idb_ts = qr/((?:\d{6}|\d{4}-\d\d-\d\d) .\d:\d\d:\d\d)/; my ($ts, $fke) = $text =~ m/LATEST FOREIGN KEY ERROR.+?$idb_ts\s*(.+?)---/ms; chomp $fke if $fke; PTDEBUG && _d('Latest fk error:', $ts, $fke); return $ts, $fke; } sub sig_int { my ( $signal ) = @_; $oktorun = 0; print STDERR "# Caught SIG$signal. Use 'kill -ABRT $PID' if " . "the tool does not exit normally in a few seconds.\n"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-fk-error-logger - Log MySQL foreign key errors. =head1 SYNOPSIS Usage: pt-fk-error-logger [OPTIONS] [DSN] pt-fk-error-logger logs information about foreign key errors on the given DSN. Information is printed to C<STDOUT>, and it can also be saved to a table by specifying L<"--dest">. The tool runs forever unless L<"--run-time"> or L<"--iterations"> is specified.
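For example, to check for new foreign key errors every 10 seconds and stop after one hour (the host name is illustrative):

  pt-fk-error-logger h=host1 --interval 10 --run-time 1h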
Print foreign key errors on host1: pt-fk-error-logger h=host1 Print foreign key errors on host1 once then exit: pt-fk-error-logger h=host1 --iterations 1 Save foreign key errors on host1 to percona_schema.fke on host2: pt-fk-error-logger h=host1 --dest h=host2,D=percona_schema,t=fke =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Back up your production server and verify the backups =back =head1 DESCRIPTION pt-fk-error-logger prints or saves the foreign key errors text from C<SHOW ENGINE INNODB STATUS>. The errors are not parsed or interpreted in any way. Foreign key errors are uniquely identified by their timestamp. Only new (more recent) errors are printed or saved. By default the tool runs forever, checking every L<"--interval"> seconds for new foreign key errors. Specify L<"--run-time"> and/or L<"--iterations"> to limit how long the tool runs. =head1 OUTPUT The foreign key error text from C<SHOW ENGINE INNODB STATUS> is printed to C<STDOUT>, unless L<"--quiet"> is specified. Errors and warnings are printed to C<STDERR>. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --daemonize Fork to the background and detach from the shell. POSIX operating systems only. =item --database short form: -D; type: string Connect to this database. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --dest type: DSN Save foreign key errors in this table. The DSN must specify a database (D) and table (t). Missing DSN values are inherited from the DSN being monitored, so you can omit most values if you're saving foreign key errors on the same host. The following table is suggested: CREATE TABLE foreign_key_errors ( ts datetime NOT NULL, error text NOT NULL, PRIMARY KEY (ts) ) The only information saved is the timestamp and the foreign key error text. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --interval type: time; default: 30 How often to check for foreign key errors. =item --iterations type: int How many times to check for foreign key errors. By default, this option is undefined which means an infinite number of iterations. The tool always exits when L<"--run-time"> elapses, regardless of the value specified for this option. For example, the tool will exit after 1 minute with C<--run-time 1m --iterations 4 --interval 30> because 4 iterations at 30 second intervals would take 2 minutes, longer than the 1 minute run-time. =item --log type: string Print all output to this file when daemonized. =item --password short form: -p; type: string Password to use when connecting.
If the password contains commas, they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --quiet Do not print foreign key errors; only print errors and warnings to C<STDERR>. =item --run-time type: time How long to run before exiting. By default, the tool runs forever. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C<variable=value> pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the default value of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C<option=value>. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file. =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If the password contains commas, they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * t Table in which to store foreign key errors. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C<PTDEBUG> enables verbose debugging output to STDERR.
To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-fk-error-logger ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-fk-error-logger 3.1.0 =cut percona-toolkit-3.1/bin/pt-heartbeat000775 001750 001750 00000665217 13535723560 020745 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit MasterSlave OptionParser Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo DSNParser Daemon Quoter TableParser Retry Transformers HTTP::Micro VersionCheck VersionParser )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # MasterSlave package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MasterSlave.pm # t/lib/MasterSlave.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package MasterSlave; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub check_recursion_method { my ($methods) = @_; if ( @$methods != 1 ) { if ( grep({ !m/processlist|hosts/i } @$methods) && $methods->[0] !~ /^dsn=/i ) { die "Invalid combination of recursion methods: " . join(", ", map { defined($_) ? $_ : 'undef' } @$methods) . ". " . "Only hosts and processlist may be combined.\n" } } else { my ($method) = @$methods; die "Invalid recursion method: " . ( $method || 'undef' ) unless $method && $method =~ m/^(?:processlist$|hosts$|none$|cluster$|dsn=)/i; } } sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser DSNParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, replication_thread => {}, }; return bless $self, $class; } sub get_slaves { my ($self, %args) = @_; my @required_args = qw(make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($make_cxn) = @args{@required_args}; my $slaves = []; my $dp = $self->{DSNParser}; my $methods = $self->_resolve_recursion_methods($args{dsn}); return $slaves unless @$methods; if ( grep { m/processlist|hosts/i } @$methods ) { my @required_args = qw(dbh dsn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $dsn) = @args{@required_args}; my $o = $self->{OptionParser}; $self->recurse_to_slaves( { dbh => $dbh, dsn => $dsn, slave_user => $o->got('slave-user') ? $o->get('slave-user') : '', slave_password => $o->got('slave-password') ? 
$o->get('slave-password') : '', callback => sub { my ( $dsn, $dbh, $level, $parent ) = @_; return unless $level; PTDEBUG && _d('Found slave:', $dp->as_string($dsn)); my $slave_dsn = $dsn; if ($o->got('slave-user')) { $slave_dsn->{u} = $o->get('slave-user'); PTDEBUG && _d("Using slave user ".$o->get('slave-user')." on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($o->got('slave-password')) { $slave_dsn->{p} = $o->get('slave-password'); PTDEBUG && _d("Slave password set"); } push @$slaves, $make_cxn->(dsn => $slave_dsn, dbh => $dbh); return; }, } ); } elsif ( $methods->[0] =~ m/^dsn=/i ) { (my $dsn_table_dsn = join ",", @$methods) =~ s/^dsn=//i; $slaves = $self->get_cxn_from_dsn_table( %args, dsn_table_dsn => $dsn_table_dsn, ); } elsif ( $methods->[0] =~ m/none/i ) { PTDEBUG && _d('Not getting to slaves'); } else { die "Unexpected recursion methods: @$methods"; } return $slaves; } sub _resolve_recursion_methods { my ($self, $dsn) = @_; my $o = $self->{OptionParser}; if ( $o->got('recursion-method') ) { return $o->get('recursion-method'); } elsif ( $dsn && ($dsn->{P} || 3306) != 3306 ) { PTDEBUG && _d('Port number is non-standard; using only hosts method'); return [qw(hosts)]; } else { return $o->get('recursion-method'); } } sub recurse_to_slaves { my ( $self, $args, $level ) = @_; $level ||= 0; my $dp = $self->{DSNParser}; my $recurse = $args->{recurse} || $self->{OptionParser}->get('recurse'); my $dsn = $args->{dsn}; my $slave_user = $args->{slave_user} || ''; my $slave_password = $args->{slave_password} || ''; my $methods = $self->_resolve_recursion_methods($dsn); PTDEBUG && _d('Recursion methods:', @$methods); if ( lc($methods->[0]) eq 'none' ) { PTDEBUG && _d('Not recursing to slaves'); return; } my $slave_dsn = $dsn; if ($slave_user) { $slave_dsn->{u} = $slave_user; PTDEBUG && _d("Using slave user $slave_user on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($slave_password) { $slave_dsn->{p} = $slave_password; PTDEBUG && _d("Slave password set"); } my $dbh; eval { $dbh = $args->{dbh} || $dp->get_dbh( $dp->get_cxn_params($slave_dsn), { AutoCommit => 1 }); PTDEBUG && _d('Connected to', $dp->as_string($slave_dsn)); }; if ( $EVAL_ERROR ) { print STDERR "Cannot connect to ", $dp->as_string($slave_dsn), "\n" or die "Cannot print: $OS_ERROR"; return; } my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); my ($id) = $dbh->selectrow_array($sql); PTDEBUG && _d('Working on server ID', $id); my $master_thinks_i_am = $dsn->{server_id}; if ( !defined $id || ( defined $master_thinks_i_am && $master_thinks_i_am != $id ) || $args->{server_ids_seen}->{$id}++ ) { PTDEBUG && _d('Server ID seen, or not what master said'); if ( $args->{skip_callback} ) { $args->{skip_callback}->($dsn, $dbh, $level, $args->{parent}); } return; } $args->{callback}->($dsn, $dbh, $level, $args->{parent}); if ( !defined $recurse || $level < $recurse ) { my @slaves = grep { !$_->{master_id} || $_->{master_id} == $id } # Only my slaves. 
$self->find_slave_hosts($dp, $dbh, $dsn, $methods); foreach my $slave ( @slaves ) { PTDEBUG && _d('Recursing from', $dp->as_string($dsn), 'to', $dp->as_string($slave)); $self->recurse_to_slaves( { %$args, dsn => $slave, dbh => undef, parent => $dsn, slave_user => $slave_user, $slave_password => $slave_password }, $level + 1 ); } } } sub find_slave_hosts { my ( $self, $dsn_parser, $dbh, $dsn, $methods ) = @_; PTDEBUG && _d('Looking for slaves on', $dsn_parser->as_string($dsn), 'using methods', @$methods); my @slaves; METHOD: foreach my $method ( @$methods ) { my $find_slaves = "_find_slaves_by_$method"; PTDEBUG && _d('Finding slaves with', $find_slaves); @slaves = $self->$find_slaves($dsn_parser, $dbh, $dsn); last METHOD if @slaves; } PTDEBUG && _d('Found', scalar(@slaves), 'slaves'); return @slaves; } sub _find_slaves_by_processlist { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves = map { my $slave = $dsn_parser->parse("h=$_", $dsn); $slave->{source} = 'processlist'; $slave; } grep { $_ } map { my ( $host ) = $_->{host} =~ m/^([^:]+):/; $host ||= $_->{host}; if ( $host eq 'localhost' ) { $host = '127.0.0.1'; # Replication never uses sockets. } $host; } $self->get_connected_slaves($dbh); return @slaves; } sub _find_slaves_by_hosts { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves; my $sql = 'SHOW SLAVE HOSTS'; PTDEBUG && _d($dbh, $sql); @slaves = @{$dbh->selectall_arrayref($sql, { Slice => {} })}; if ( @slaves ) { PTDEBUG && _d('Found some SHOW SLAVE HOSTS info'); @slaves = map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; my $spec = "h=$hash{host},P=$hash{port}" . ( $hash{user} ? ",u=$hash{user}" : '') . ( $hash{password} ? ",p=$hash{password}" : ''); my $dsn = $dsn_parser->parse($spec, $dsn); $dsn->{server_id} = $hash{server_id}; $dsn->{master_id} = $hash{master_id}; $dsn->{source} = 'hosts'; $dsn; } @slaves; } return @slaves; } sub get_connected_slaves { my ( $self, $dbh ) = @_; my $show = "SHOW GRANTS FOR "; my $user = 'CURRENT_USER()'; my $sql = $show . $user; PTDEBUG && _d($dbh, $sql); my $proc; eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; if ( $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/no such grant defined for user/ ) { PTDEBUG && _d('Retrying SHOW GRANTS without host; error:', $EVAL_ERROR); ($user) = split('@', $user); $sql = $show . $user; PTDEBUG && _d($sql); eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; } die "Failed to $sql: $EVAL_ERROR" if $EVAL_ERROR; } if ( !$proc ) { die "You do not have the PROCESS privilege"; } $sql = 'SHOW FULL PROCESSLIST'; PTDEBUG && _d($dbh, $sql); grep { $_->{command} =~ m/Binlog Dump/i } map { # Lowercase the column names my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{$dbh->selectall_arrayref($sql, { Slice => {} })}; } sub is_master_of { my ( $self, $master, $slave ) = @_; my $master_status = $self->get_master_status($master) or die "The server specified as a master is not a master"; my $slave_status = $self->get_slave_status($slave) or die "The server specified as a slave is not a slave"; my @connected = $self->get_connected_slaves($master) or die "The server specified as a master has no connected slaves"; my (undef, $port) = $master->selectrow_array("SHOW VARIABLES LIKE 'port'"); if ( $port != $slave_status->{master_port} ) { die "The slave is connected to $slave_status->{master_port} " . 
"but the master's port is $port"; } if ( !grep { $slave_status->{master_user} eq $_->{user} } @connected ) { die "I don't see any slave I/O thread connected with user " . $slave_status->{master_user}; } if ( ($slave_status->{slave_io_state} || '') eq 'Waiting for master to send event' ) { my ( $master_log_name, $master_log_num ) = $master_status->{file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; my ( $slave_log_name, $slave_log_num ) = $slave_status->{master_log_file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; if ( $master_log_name ne $slave_log_name || abs($master_log_num - $slave_log_num) > 1 ) { die "The slave thinks it is reading from " . "$slave_status->{master_log_file}, but the " . "master is writing to $master_status->{file}"; } } return 1; } sub get_master_dsn { my ( $self, $dbh, $dsn, $dsn_parser ) = @_; my $master = $self->get_slave_status($dbh) or return undef; my $spec = "h=$master->{master_host},P=$master->{master_port}"; return $dsn_parser->parse($spec, $dsn); } sub get_slave_status { my ( $self, $dbh ) = @_; if ( !$self->{not_a_slave}->{$dbh} ) { my $sth = $self->{sths}->{$dbh}->{SLAVE_STATUS} ||= $dbh->prepare('SHOW SLAVE STATUS'); PTDEBUG && _d($dbh, 'SHOW SLAVE STATUS'); $sth->execute(); my ($sss_rows) = $sth->fetchall_arrayref({}); # Show Slave Status rows my $ss; if ( $sss_rows && @$sss_rows ) { if (scalar @$sss_rows > 1) { if (!$self->{channel}) { die 'This server returned more than one row for SHOW SLAVE STATUS but "channel" was not specified on the command line'; } for my $row (@$sss_rows) { $row = { map { lc($_) => $row->{$_} } keys %$row }; # lowercase the keys if ($row->{channel_name} eq $self->{channel}) { $ss = $row; last; } } } else { if ($sss_rows->[0]->{channel_name} && $sss_rows->[0]->{channel_name} ne $self->{channel}) { die 'This server is using replication channels but "channel" was not specified on the command line'; } else { $ss = $sss_rows->[0]; } } if ( $ss && %$ss ) { $ss = { map { lc($_) => $ss->{$_} } keys %$ss }; # lowercase the keys return $ss; } } PTDEBUG && _d('This server returns nothing for SHOW SLAVE STATUS'); $self->{not_a_slave}->{$dbh}++; } } sub get_master_status { my ( $self, $dbh ) = @_; if ( $self->{not_a_master}->{$dbh} ) { PTDEBUG && _d('Server on dbh', $dbh, 'is not a master'); return; } my $sth = $self->{sths}->{$dbh}->{MASTER_STATUS} ||= $dbh->prepare('SHOW MASTER STATUS'); PTDEBUG && _d($dbh, 'SHOW MASTER STATUS'); $sth->execute(); my ($ms) = @{$sth->fetchall_arrayref({})}; PTDEBUG && _d( $ms ? map { "$_=" . (defined $ms->{$_} ? $ms->{$_} : '') } keys %$ms : ''); if ( !$ms || scalar keys %$ms < 2 ) { PTDEBUG && _d('Server on dbh', $dbh, 'does not seem to be a master'); $self->{not_a_master}->{$dbh}++; } return { map { lc($_) => $ms->{$_} } keys %$ms }; # lowercase the keys } sub wait_for_master { my ( $self, %args ) = @_; my @required_args = qw(master_status slave_dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($master_status, $slave_dbh) = @args{@required_args}; my $timeout = $args{timeout} || 60; my $result; my $waited; if ( $master_status ) { my $slave_status; eval { $slave_status = $self->get_slave_status($slave_dbh); }; if ($EVAL_ERROR) { return { result => undef, waited => 0, error =>'Wait for master: this is a multi-master slave but "channel" was not specified on the command line', }; } my $server_version = VersionParser->new($slave_dbh); my $channel_sql = $server_version > '5.6' && $self->{channel} ? 
", '$self->{channel}'" : ''; my $sql = "SELECT MASTER_POS_WAIT('$master_status->{file}', $master_status->{position}, $timeout $channel_sql)"; PTDEBUG && _d($slave_dbh, $sql); my $start = time; ($result) = $slave_dbh->selectrow_array($sql); $waited = time - $start; PTDEBUG && _d('Result of waiting:', $result); PTDEBUG && _d("Waited", $waited, "seconds"); } else { PTDEBUG && _d('Not waiting: this server is not a master'); } return { result => $result, waited => $waited, }; } sub stop_slave { my ( $self, $dbh ) = @_; my $sth = $self->{sths}->{$dbh}->{STOP_SLAVE} ||= $dbh->prepare('STOP SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } sub start_slave { my ( $self, $dbh, $pos ) = @_; if ( $pos ) { my $sql = "START SLAVE UNTIL MASTER_LOG_FILE='$pos->{file}', " . "MASTER_LOG_POS=$pos->{position}"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } else { my $sth = $self->{sths}->{$dbh}->{START_SLAVE} ||= $dbh->prepare('START SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } } sub catchup_to_master { my ( $self, $slave, $master, $timeout ) = @_; $self->stop_slave($master); $self->stop_slave($slave); my $slave_status = $self->get_slave_status($slave); my $slave_pos = $self->repl_posn($slave_status); my $master_status = $self->get_master_status($master); my $master_pos = $self->repl_posn($master_status); PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); my $result; if ( $self->pos_cmp($slave_pos, $master_pos) < 0 ) { PTDEBUG && _d('Waiting for slave to catch up to master'); $self->start_slave($slave, $master_pos); $result = $self->wait_for_master( master_status => $master_status, slave_dbh => $slave, timeout => $timeout, master_status => $master_status ); if ($result->{error}) { die $result->{error}; } if ( !defined $result->{result} ) { $slave_status = $self->get_slave_status($slave); if ( !$self->slave_is_running($slave_status) ) { PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); $slave_pos = $self->repl_posn($slave_status); if ( $self->pos_cmp($slave_pos, $master_pos) != 0 ) { die "MASTER_POS_WAIT() returned NULL but slave has not " . 
"caught up to master"; } PTDEBUG && _d('Slave is caught up to master and stopped'); } else { die "Slave has not caught up to master and it is still running"; } } } else { PTDEBUG && _d("Slave is already caught up to master"); } return $result; } sub catchup_to_same_pos { my ( $self, $s1_dbh, $s2_dbh ) = @_; $self->stop_slave($s1_dbh); $self->stop_slave($s2_dbh); my $s1_status = $self->get_slave_status($s1_dbh); my $s2_status = $self->get_slave_status($s2_dbh); my $s1_pos = $self->repl_posn($s1_status); my $s2_pos = $self->repl_posn($s2_status); if ( $self->pos_cmp($s1_pos, $s2_pos) < 0 ) { $self->start_slave($s1_dbh, $s2_pos); } elsif ( $self->pos_cmp($s2_pos, $s1_pos) < 0 ) { $self->start_slave($s2_dbh, $s1_pos); } $s1_status = $self->get_slave_status($s1_dbh); $s2_status = $self->get_slave_status($s2_dbh); $s1_pos = $self->repl_posn($s1_status); $s2_pos = $self->repl_posn($s2_status); if ( $self->slave_is_running($s1_status) || $self->slave_is_running($s2_status) || $self->pos_cmp($s1_pos, $s2_pos) != 0) { die "The servers aren't both stopped at the same position"; } } sub slave_is_running { my ( $self, $slave_status ) = @_; return ($slave_status->{slave_sql_running} || 'No') eq 'Yes'; } sub has_slave_updates { my ( $self, $dbh ) = @_; my $sql = q{SHOW VARIABLES LIKE 'log_slave_updates'}; PTDEBUG && _d($dbh, $sql); my ($name, $value) = $dbh->selectrow_array($sql); return $value && $value =~ m/^(1|ON)$/; } sub repl_posn { my ( $self, $status ) = @_; if ( exists $status->{file} && exists $status->{position} ) { return { file => $status->{file}, position => $status->{position}, }; } else { return { file => $status->{relay_master_log_file}, position => $status->{exec_master_log_pos}, }; } } sub get_slave_lag { my ( $self, $dbh ) = @_; my $stat = $self->get_slave_status($dbh); return unless $stat; # server is not a slave return $stat->{seconds_behind_master}; } sub pos_cmp { my ( $self, $a, $b ) = @_; return $self->pos_to_string($a) cmp $self->pos_to_string($b); } sub short_host { my ( $self, $dsn ) = @_; my ($host, $port); if ( $dsn->{master_host} ) { $host = $dsn->{master_host}; $port = $dsn->{master_port}; } else { $host = $dsn->{h}; $port = $dsn->{P}; } return ($host || '[default]') . ( ($port || 3306) == 3306 ? '' : ":$port" ); } sub is_replication_thread { my ( $self, $query, %args ) = @_; return unless $query; my $type = lc($args{type} || 'all'); die "Invalid type: $type" unless $type =~ m/^binlog_dump|slave_io|slave_sql|all$/i; my $match = 0; if ( $type =~ m/binlog_dump|all/i ) { $match = 1 if ($query->{Command} || $query->{command} || '') eq "Binlog Dump"; } if ( !$match ) { if ( ($query->{User} || $query->{user} || '') eq "system user" ) { PTDEBUG && _d("Slave replication thread"); if ( $type ne 'all' ) { my $state = $query->{State} || $query->{state} || ''; if ( $state =~ m/^init|end$/ ) { PTDEBUG && _d("Special state:", $state); $match = 1; } else { my ($slave_sql) = $state =~ m/ ^(Waiting\sfor\sthe\snext\sevent |Reading\sevent\sfrom\sthe\srelay\slog |Has\sread\sall\srelay\slog;\swaiting |Making\stemp\sfile |Waiting\sfor\sslave\smutex\son\sexit)/xi; $match = $type eq 'slave_sql' && $slave_sql ? 1 : $type eq 'slave_io' && !$slave_sql ? 
1 : 0; } } else { $match = 1; } } else { PTDEBUG && _d('Not system user'); } if ( !defined $args{check_known_ids} || $args{check_known_ids} ) { my $id = $query->{Id} || $query->{id}; if ( $match ) { $self->{replication_thread}->{$id} = 1; } else { if ( $self->{replication_thread}->{$id} ) { PTDEBUG && _d("Thread ID is a known replication thread ID"); $match = 1; } } } } PTDEBUG && _d('Matches', $type, 'replication thread:', ($match ? 'yes' : 'no'), '; match:', $match); return $match; } sub get_replication_filters { my ( $self, %args ) = @_; my @required_args = qw(dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh) = @args{@required_args}; my %filters = (); my $status = $self->get_master_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( binlog_do_db binlog_ignore_db ); } $status = $self->get_slave_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( replicate_do_db replicate_ignore_db replicate_do_table replicate_ignore_table replicate_wild_do_table replicate_wild_ignore_table ); my $sql = "SHOW VARIABLES LIKE 'slave_skip_errors'"; PTDEBUG && _d($dbh, $sql); my $row = $dbh->selectrow_arrayref($sql); $filters{slave_skip_errors} = $row->[1] if $row->[1] && $row->[1] ne 'OFF'; } return \%filters; } sub pos_to_string { my ( $self, $pos ) = @_; my $fmt = '%s/%020d'; return sprintf($fmt, @{$pos}{qw(file position)}); } sub reset_known_replication_threads { my ( $self ) = @_; $self->{replication_thread} = {}; return; } sub get_cxn_from_dsn_table { my ($self, %args) = @_; my @required_args = qw(dsn_table_dsn make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn_table_dsn, $make_cxn) = @args{@required_args}; PTDEBUG && _d('DSN table DSN:', $dsn_table_dsn); my $dp = $self->{DSNParser}; my $q = $self->{Quoter}; my $dsn = $dp->parse($dsn_table_dsn); my $dsn_table; if ( $dsn->{D} && $dsn->{t} ) { $dsn_table = $q->quote($dsn->{D}, $dsn->{t}); } elsif ( $dsn->{t} && $dsn->{t} =~ m/\./ ) { $dsn_table = $q->quote($q->split_unquote($dsn->{t})); } else { die "DSN table DSN does not specify a database (D) " . "or a database-qualified table (t)"; } my $dsn_tbl_cxn = $make_cxn->(dsn => $dsn); my $dbh = $dsn_tbl_cxn->connect(); my $sql = "SELECT dsn FROM $dsn_table ORDER BY id"; PTDEBUG && _d($sql); my $dsn_strings = $dbh->selectcol_arrayref($sql); my @cxn; if ( $dsn_strings ) { foreach my $dsn_string ( @$dsn_strings ) { PTDEBUG && _d('DSN from DSN table:', $dsn_string); push @cxn, $make_cxn->(dsn_string => $dsn_string); } } return \@cxn; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MasterSlave package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = <STDIN>); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{long}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . 
"::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! 
exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq<Attribute ($attribute) does not pass the type constraint because: > . qq<Validation failed for '$check_name' with value > . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! 
defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq<Attribute ($attribute) doesn't consume a '$role' role>) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(?<!\\),/; eval { require DBI; }; my $have_dbi = $EVAL_ERROR ? 0 : 1; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(opts) ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { opts => {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? $o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . '`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? 
"$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 
1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . $where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . 
$q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . 
$q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # Retry package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Retry.pm # t/lib/Retry.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Retry; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::HiRes qw(sleep); sub new { my ( $class, %args ) = @_; my $self = { %args, }; return bless $self, $class; } sub retry { my ( $self, %args ) = @_; my @required_args = qw(try fail final_fail); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($try, $fail, $final_fail) = @args{@required_args}; my $wait = $args{wait} || sub { sleep 1; }; my $tries = $args{tries} || 3; my $last_error; my $tryno = 0; TRY: while ( ++$tryno <= $tries ) { PTDEBUG && _d("Try", $tryno, "of", $tries); my $result; eval { $result = $try->(tryno=>$tryno); }; if ( $EVAL_ERROR ) { PTDEBUG && _d("Try code failed:", $EVAL_ERROR); $last_error = $EVAL_ERROR; if ( $tryno < $tries ) { # more retries my $retry = $fail->(tryno=>$tryno, error=>$last_error); last TRY unless $retry; PTDEBUG && _d("Calling wait code"); $wait->(tryno=>$tryno); } } else { PTDEBUG && _d("Try code succeeded"); return $result; } } PTDEBUG && _d('Try code did not succeed'); return $final_fail->(error=>$last_error); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Retry package # ########################################################################### # ########################################################################### # Transformers package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Transformers.pm # t/lib/Transformers.t # See https://launchpad.net/percona-toolkit for more information. 
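# NOTE (added comment, not part of the upstream Transformers module): a few
# illustrative examples of what the exportable helpers below return; values
# shown are approximate and assume default arguments:
#
#   Transformers::secs_to_time(3661);                 # "01:01:01"
#   Transformers::shorten(1048576);                   # "1.00M" (divisor is 1_024)
#   Transformers::micro_t(0.0042);                    # "4ms"
#   Transformers::make_checksum('SELECT 1');          # 16-char uppercase hex digest
#   Transformers::any_unix_timestamp('2019-09-01');   # epoch seconds for that date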
# ########################################################################### { package Transformers; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timegm timelocal); use Digest::MD5 qw(md5_hex); use B qw(); BEGIN { require Exporter; our @ISA = qw(Exporter); our %EXPORT_TAGS = (); our @EXPORT = (); our @EXPORT_OK = qw( micro_t percentage_of secs_to_time time_to_secs shorten ts parse_timestamp unix_timestamp any_unix_timestamp make_checksum crc32 encode_json ); } our $mysql_ts = qr/(\d\d)(\d\d)(\d\d) +(\d+):(\d+):(\d+)(\.\d+)?/; our $proper_ts = qr/(\d\d\d\d)-(\d\d)-(\d\d)[T ](\d\d):(\d\d):(\d\d)(\.\d+)?/; our $n_ts = qr/(\d{1,5})([shmd]?)/; # Limit \d{1,5} because \d{6} looks sub micro_t { my ( $t, %args ) = @_; my $p_ms = defined $args{p_ms} ? $args{p_ms} : 0; # precision for ms vals my $p_s = defined $args{p_s} ? $args{p_s} : 0; # precision for s vals my $f; $t = 0 if $t < 0; $t = sprintf('%.17f', $t) if $t =~ /e/; $t =~ s/\.(\d{1,6})\d*/\.$1/; if ($t > 0 && $t <= 0.000999) { $f = ($t * 1000000) . 'us'; } elsif ($t >= 0.001000 && $t <= 0.999999) { $f = sprintf("%.${p_ms}f", $t * 1000); $f = ($f * 1) . 'ms'; # * 1 to remove insignificant zeros } elsif ($t >= 1) { $f = sprintf("%.${p_s}f", $t); $f = ($f * 1) . 's'; # * 1 to remove insignificant zeros } else { $f = 0; # $t should = 0 at this point } return $f; } sub percentage_of { my ( $is, $of, %args ) = @_; my $p = $args{p} || 0; # float precision my $fmt = $p ? "%.${p}f" : "%d"; return sprintf $fmt, ($is * 100) / ($of ||= 1); } sub secs_to_time { my ( $secs, $fmt ) = @_; $secs ||= 0; return '00:00' unless $secs; $fmt ||= $secs >= 86_400 ? 'd' : $secs >= 3_600 ? 'h' : 'm'; return $fmt eq 'd' ? sprintf( "%d+%02d:%02d:%02d", int($secs / 86_400), int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : $fmt eq 'h' ? sprintf( "%02d:%02d:%02d", int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : sprintf( "%02d:%02d", int(($secs % 3_600) / 60), $secs % 60); } sub time_to_secs { my ( $val, $default_suffix ) = @_; die "I need a val argument" unless defined $val; my $t = 0; my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; $suffix = $suffix || $default_suffix || 's'; if ( $suffix =~ m/[smhd]/ ) { $t = $suffix eq 's' ? $num * 1 # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $t *= -1 if $prefix && $prefix eq '-'; } else { die "Invalid suffix for $val: $suffix"; } return $t; } sub shorten { my ( $num, %args ) = @_; my $p = defined $args{p} ? $args{p} : 2; # float precision my $d = defined $args{d} ? $args{d} : 1_024; # divisor my $n = 0; my @units = ('', qw(k M G T P E Z Y)); while ( $num >= $d && $n < @units - 1 ) { $num /= $d; ++$n; } return sprintf( $num =~ m/\./ || $n ? '%1$.'.$p.'f%2$s' : '%1$d', $num, $units[$n]); } sub ts { my ( $time, $gmt ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = $gmt ? gmtime($time) : localtime($time); $mon += 1; $year += 1900; my $val = sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); if ( my ($us) = $time =~ m/(\.\d+)$/ ) { $us = sprintf("%.6f", $us); $us =~ s/^0\././; $val .= $us; } return $val; } sub parse_timestamp { my ( $val ) = @_; if ( my($y, $m, $d, $h, $i, $s, $f) = $val =~ m/^$mysql_ts$/ ) { return sprintf "%d-%02d-%02d %02d:%02d:" . (defined $f ? '%09.6f' : '%02d'), $y + 2000, $m, $d, $h, $i, (defined $f ? 
$s + $f : $s); } elsif ( $val =~ m/^$proper_ts$/ ) { return $val; } return $val; } sub unix_timestamp { my ( $val, $gmt ) = @_; if ( my($y, $m, $d, $h, $i, $s, $us) = $val =~ m/^$proper_ts$/ ) { $val = $gmt ? timegm($s, $i, $h, $d, $m - 1, $y) : timelocal($s, $i, $h, $d, $m - 1, $y); if ( defined $us ) { $us = sprintf('%.6f', $us); $us =~ s/^0\././; $val .= $us; } } return $val; } sub any_unix_timestamp { my ( $val, $callback ) = @_; if ( my ($n, $suffix) = $val =~ m/^$n_ts$/ ) { $n = $suffix eq 's' ? $n # Seconds : $suffix eq 'm' ? $n * 60 # Minutes : $suffix eq 'h' ? $n * 3600 # Hours : $suffix eq 'd' ? $n * 86400 # Days : $n; # default: Seconds PTDEBUG && _d('ts is now - N[shmd]:', $n); return time - $n; } elsif ( $val =~ m/^\d{9,}/ ) { PTDEBUG && _d('ts is already a unix timestamp'); return $val; } elsif ( my ($ymd, $hms) = $val =~ m/^(\d{6})(?:\s+(\d+:\d+:\d+))?/ ) { PTDEBUG && _d('ts is MySQL slow log timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp(parse_timestamp($val)); } elsif ( ($ymd, $hms) = $val =~ m/^(\d{4}-\d\d-\d\d)(?:[T ](\d+:\d+:\d+))?/) { PTDEBUG && _d('ts is properly formatted timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp($val); } else { PTDEBUG && _d('ts is MySQL expression'); return $callback->($val) if $callback && ref $callback eq 'CODE'; } PTDEBUG && _d('Unknown ts type:', $val); return; } sub make_checksum { my ( $val ) = @_; my $checksum = uc substr(md5_hex($val), -16); PTDEBUG && _d($checksum, 'checksum for', $val); return $checksum; } sub crc32 { my ( $string ) = @_; return unless $string; my $poly = 0xEDB88320; my $crc = 0xFFFFFFFF; foreach my $char ( split(//, $string) ) { my $comp = ($crc ^ ord($char)) & 0xFF; for ( 1 .. 8 ) { $comp = $comp & 1 ? $poly ^ ($comp >> 1) : $comp >> 1; } $crc = (($crc >> 8) & 0x00FFFFFF) ^ $comp; } return $crc ^ 0xFFFFFFFF; } my $got_json = eval { require JSON }; sub encode_json { return JSON::encode_json(@_) if $got_json; my ( $data ) = @_; return (object_to_json($data) || ''); } sub object_to_json { my ($obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return hash_to_json($obj); } elsif($type eq 'ARRAY'){ return array_to_json($obj); } else { return value_to_json($obj); } } sub hash_to_json { my ($obj) = @_; my @res; for my $k ( sort { $a cmp $b } keys %$obj ) { push @res, string_to_json( $k ) . ":" . ( object_to_json( $obj->{$k} ) || value_to_json( $obj->{$k} ) ); } return '{' . ( @res ? join( ",", @res ) : '' ) . '}'; } sub array_to_json { my ($obj) = @_; my @res; for my $v (@$obj) { push @res, object_to_json($v) || value_to_json($v); } return '[' . ( @res ? join( ",", @res ) : '' ) . ']'; } sub value_to_json { my ($value) = @_; return 'null' if(!defined $value); my $b_obj = B::svref_2object(\$value); # for round trip problem my $flags = $b_obj->FLAGS; return $value # as is if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? my $type = ref($value); if( !$type ) { return string_to_json($value); } else { return 'null'; } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g; $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; utf8::upgrade($arg); utf8::encode($arg); return '"' . $arg . '"'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
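# Added illustration (not part of the original module): behaviour of the
# parsing and checksum helpers defined above.
#
#   parse_timestamp('071015 21:43:52')  # "2007-10-15 21:43:52" (MySQL slow-log style)
#   any_unix_timestamp('300')           # time() - 300; a bare N[smhd] means "N ago"
#   any_unix_timestamp('1369329766')    # returned unchanged: already a Unix timestamp
#   make_checksum('db.tbl')             # upper-cased last 16 hex digits of the MD5
#   encode_json({ a => 1 })             # uses the JSON module when available,
#                                       # otherwise the fallback encoder above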
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Transformers package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? 
$host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? $1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for 
$host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? 
BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! = 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or 
return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! %$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. 
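# Added illustration (not part of the original module): a minimal sketch of
# driving the HTTP::Micro client defined above, in the same way
# VersionCheck::pingback() below does.  The URL is illustrative; https
# requests additionally require IO::Socket::SSL to be installed.
#
#   my $ua       = HTTP::Micro->new( timeout => 3 );
#   my $response = $ua->request('GET', 'https://v.percona.com');
#   if ( $response->{success} ) {        # true for a 2xx status
#      print $response->{content};
#   }
#   else {
#      warn "HTTP $response->{status}: $response->{reason}\n";
#   }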
# ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . $file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # VersionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionParser.pm # t/lib/VersionParser.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package VersionParser; use Lmo; use Scalar::Util qw(blessed); use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use overload ( '""' => "version", '<=>' => "cmp", 'cmp' => "cmp", fallback => 1, ); use Carp (); has major => ( is => 'ro', isa => 'Int', required => 1, ); has [qw( minor revision )] => ( is => 'ro', isa => 'Num', ); has flavor => ( is => 'ro', isa => 'Str', default => sub { 'Unknown' }, ); has innodb_version => ( is => 'ro', isa => 'Str', default => sub { 'NO' }, ); sub series { my $self = shift; return $self->_join_version($self->major, $self->minor); } sub version { my $self = shift; return $self->_join_version($self->major, $self->minor, $self->revision); } sub is_in { my ($self, $target) = @_; return $self eq $target; } sub _join_version { my ($self, @parts) = @_; return join ".", map { my $c = $_; $c =~ s/^0\./0/; $c } grep defined, @parts; } sub _split_version { my ($self, $str) = @_; my @version_parts = map { s/^0(?=\d)/0./; $_ } $str =~ m/(\d+)/g; return @version_parts[0..2]; } sub normalized_version { my ( $self ) = @_; my $result = sprintf('%d%02d%02d', map { $_ || 0 } $self->major, $self->minor, $self->revision); PTDEBUG && _d($self->version, 'normalizes to', $result); return $result; } sub comment { my ( $self, $cmd ) = @_; my $v = $self->normalized_version(); return "/*!$v $cmd */" } my @methods = qw(major minor revision); sub cmp { my ($left, $right) = @_; my $right_obj = (blessed($right) && $right->isa(ref($left))) ? $right : ref($left)->new($right); my $retval = 0; for my $m ( @methods ) { last unless defined($left->$m) && defined($right_obj->$m); $retval = $left->$m <=> $right_obj->$m; last if $retval; } return $retval; } sub BUILDARGS { my $self = shift; if ( @_ == 1 ) { my %args; if ( blessed($_[0]) && $_[0]->can("selectrow_hashref") ) { PTDEBUG && _d("VersionParser got a dbh, trying to get the version"); my $dbh = $_[0]; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $query = eval { $dbh->selectall_arrayref(q/SHOW VARIABLES LIKE 'version%'/, { Slice => {} }) }; if ( $query ) { $query = { map { $_->{variable_name} => $_->{value} } @$query }; @args{@methods} = $self->_split_version($query->{version}); $args{flavor} = delete $query->{version_comment} if $query->{version_comment}; } elsif ( eval { ($query) = $dbh->selectrow_array(q/SELECT VERSION()/) } ) { @args{@methods} = $self->_split_version($query); } else { Carp::confess("Couldn't get the version from the dbh while " . "creating a VersionParser object: $@"); } $args{innodb_version} = eval { $self->_innodb_version($dbh) }; } elsif ( !ref($_[0]) ) { @args{@methods} = $self->_split_version($_[0]); } for my $method (@methods) { delete $args{$method} unless defined $args{$method}; } @_ = %args if %args; } return $self->SUPER::BUILDARGS(@_); } sub _innodb_version { my ( $self, $dbh ) = @_; return unless $dbh; my $innodb_version = "NO"; my ($innodb) = grep { $_->{engine} =~ m/InnoDB/i } map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{ $dbh->selectall_arrayref("SHOW ENGINES", {Slice=>{}}) }; if ( $innodb ) { PTDEBUG && _d("InnoDB support:", $innodb->{support}); if ( $innodb->{support} =~ m/YES|DEFAULT/i ) { my $vars = $dbh->selectrow_hashref( "SHOW VARIABLES LIKE 'innodb_version'"); $innodb_version = !$vars ? 
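# Added illustration (not part of the original module): a minimal sketch of
# VersionParser as used elsewhere in the toolkit.  The flavor string shown
# is illustrative.
#
#   my $version = VersionParser->new($dbh);    # or ->new('5.7.26-29')
#   if ( $version >= '5.7' ) { ... }           # overloaded comparison via cmp()
#   $version->flavor();                        # e.g. "Percona Server" (from version_comment)
#   $version->comment('SQL_NO_CACHE');         # "/*!50726 SQL_NO_CACHE */" version hint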
"BUILTIN" : ($vars->{Value} || $vars->{value}); } else { $innodb_version = $innodb->{support}; # probably DISABLED or NO } } PTDEBUG && _d("InnoDB version:", $innodb_version); return $innodb_version; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End VersionParser package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_heartbeat; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use List::Util qw(min max sum); use Time::HiRes qw(gettimeofday time sleep usleep); use IO::File; use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; Transformers->import(qw(ts unix_timestamp)); my @dbhs; # Holds slave DBHs if --recurse my @sths; # Holds [$host, $sth] if --recurse sub main { local @ARGV = @_; # set global ARGV for this package # Reset all global vars between test runs else weird things happen. @dbhs = (); @sths = (); # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser; $dp->prop('dbidriver', $o->get('dbi-driver')); $dp->prop('set-vars', $o->set_vars()); if ( !$o->get('help') ) { my @frames = $o->get('frames') =~ m/(\d+[smhd])/g; if ( @frames ) { my @times; foreach my $frame ( @frames ) { my ($num, $suf) = $frame =~ m/(\d+)([smhd])$/; if ( !$num ) { $o->save_error("Invalid --frames argument"); } else { push @times, $suf eq 's' ? $num # Seconds : $suf eq 'm' ? $num * 60 # Minutes : $suf eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days } } $o->set('frames', \@times); } else { $o->save_error("Invalid --frames argument"); } if ( $o->get('create-table') && !($o->get('database') && $o->get('table'))) { $o->save_error('--create-table requires both --database and --table'); } if ( $o->get('interval') < 0.01 ) { $o->save_error("--interval must be >= 0.01"); } if ( !$o->get('stop') && !$o->get('database') ) { $o->save_error('--database must be specified'); } } eval { MasterSlave::check_recursion_method($o->get('recursion-method')); }; if ( $EVAL_ERROR ) { $o->save_error("Invalid --recursion-method: $EVAL_ERROR") } $o->usage_or_errors(); # ######################################################################## # Make common modules and var for frequently used options. # ######################################################################## my $q = new Quoter(); my $tp = new TableParser(Quoter => $q); my $interval = $o->get('interval'); my $skew = $o->get('update') ? 
0 : $o->get('skew'); my $sentinel = $o->get('sentinel'); my $frames = $o->get('frames'); my $db = $o->get('database'); my $tbl = $o->get('table'); # ######################################################################## # Create --sentinel file if --stop was given, and possibly exit. # ######################################################################## if ( $o->get('stop') ) { PTDEBUG && _d('Creating sentinel file', $sentinel); my $file = IO::File->new($sentinel, ">>") or die "Cannot open $sentinel: $OS_ERROR\n"; print $file "Remove this file to permit pt-heartbeat to run\n" or die "Cannot write to $sentinel: $OS_ERROR\n"; close $file or die "Cannot close $sentinel: $OS_ERROR\n"; print STDOUT "Successfully created file $sentinel\n"; # Exit only if no other action (update, monitor, check) is given. if ( !$o->get('update') && !$o->get('check') && !$o->get('monitor') ) { PTDEBUG && _d("Nothing more to do, quitting"); return 0; } else { # Wait for all other running instances to quit, assuming they have the # same --interval as this invocation. Then remove the file and # continue. PTDEBUG && _d("Waiting for other instances to quit"); sleep $interval ; PTDEBUG && _d("Unlinking", $sentinel); unlink $sentinel or die "Cannot unlink $sentinel: $OS_ERROR"; } } # ######################################################################## # Connect to MySQL. # ######################################################################## if ( $o->get('ask-pass') ) { $o->set('password', OptionParser::prompt_noecho("Enter password: ")); } my $dsn_defaults = $dp->parse_options($o); my $dsn = @ARGV ? $dp->parse(shift @ARGV, $dsn_defaults) : $dsn_defaults; my $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), {AutoCommit=>1}); $dbh->{InactiveDestroy} = 1; # Don't disconnect on fork $dbh->{FetchHashKeyName} = 'NAME_lc'; $dbh->do("USE `$db`"); # ######################################################################## # If --check-read-only option was given and we are in --update mode # we wait until server is writable , or run-time is over, or sentinel file # We also do this check after daemon is up and running, but it is necessary # to check this before attempting to create the table and inserting rows # https://bugs.launchpad.net/percona-toolkit/+bug/1328686 # ####################################################################### if ( $o->get('check-read-only') && $o->get('update') ) { PTDEBUG && _d('Checking if server is read_only'); if ( server_is_readonly($dbh) && PTDEBUG ) { _d('Server is read-only, waiting') } my $start_time = time; my $run_time = $o->get('run-time'); my $interval = $o->get('interval') || 5; my $read_only_interval = $o->get('read-only-interval') || $interval; while (server_is_readonly($dbh)) { PTDEBUG && _d("Sleeping for $read_only_interval seconds"); sleep($read_only_interval); if ( ($run_time && $run_time < time - $start_time) || -f $sentinel ) { return 0; } } } # ######################################################################## # Create the heartbeat table if --create-table was given. # ######################################################################## my $utc = $o->get('utc'); my $now_func = $utc ? 
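# Added note (not part of the original code): the table created below with
# --create-table, and validated further down, keys the heartbeat row on
# server_id and roughly matches the MAGIC_create_heartbeat definition in the
# tool's POD:
#
#   CREATE TABLE heartbeat (
#     ts                    varchar(26) NOT NULL,   -- char type enables hi-res timestamps
#     server_id             int unsigned NOT NULL PRIMARY KEY,
#     file                  varchar(255),           -- optional, from SHOW MASTER STATUS
#     position              bigint unsigned,
#     relay_master_log_file varchar(255),           -- optional, from SHOW SLAVE STATUS
#     exec_master_log_pos   bigint unsigned
#   );
#
# A legacy layout with a single `id` column (always 1) instead of server_id
# is also accepted, which is why the structure checks below handle both.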
'UTC_TIMESTAMP()' : 'NOW()'; my $db_tbl = $q->quote($db, $tbl); my $server_id = $dbh->selectrow_array('SELECT @@server_id'); if ( $o->get('create-table') ) { my $sql = $o->read_para_after(__FILE__, qr/MAGIC_create_heartbeat/); $sql =~ s/heartbeat/IF NOT EXISTS $db_tbl/; PTDEBUG && _d($sql); if ($o->get('create-table-engine')) { my $engine = $o->get('create-table-engine'); $sql =~ s/;$/ ENGINE=$engine;/; PTDEBUG && _d("Using engine $engine"); PTDEBUG && _d($sql); } eval { $dbh->do($sql); }; if ( $EVAL_ERROR ) { die "Error creating heartbeat table:". $EVAL_ERROR; } # Now we insert first row. # Some caveats: # 1) # This may fail if the table already existed and already had this row. # We eval to ignore this possibility. # NOTE: This can break replication though! See: # https://bugs.launchpad.net/percona-toolkit/+bug/1004567 # So --replace should be used in most cases. my $sql_insert_row = ($o->get('replace') ? "REPLACE" : "INSERT") . qq/ INTO $db_tbl (ts, server_id) VALUES ($now_func, $server_id)/; # 2) # RBR (Row Based Replication) converts REPLACE to INSERT if row isn't # present in master. This breakes replication when the row is present in slave. # Other workarounds also fail. # INSERT IGNORE (ignore is not replicated if no error in master) # DELETE then INSERT (DELETE is ignored, INSERT breaks replication) # INSERT ON DUPLICATE UPDATE (converts to simple INSERT) # TRUNCATE gets trough and replicates! So we use that to wipe slave(s). if ($o->get('replace')) { my $sql_truncate = "TRUNCATE TABLE $db_tbl"; PTDEBUG && _d($sql_truncate); eval { $dbh->do($sql_truncate) }; } PTDEBUG && _d($sql_insert_row); eval { $dbh->do($sql_insert_row); }; } # ######################################################################## # Get and check heartbeat table structure. # ######################################################################## my $tbl_def = $dbh->selectrow_arrayref("SHOW CREATE TABLE $db_tbl"); my $tbl_struct = $tp->parse($tbl_def->[1]); die "Heartbeat table $db_tbl does not have a ts column" unless $tbl_struct->{is_col}->{ts}; my $hires_ts = $tbl_struct->{type_for}->{ts} =~ m/char/i ? 1 : 0; PTDEBUG && _d("Hi-res ts:", ($hires_ts ? 'yes' : 'no')); my $id = $tbl_struct->{is_col}->{id}; # legacy table struct die "Heartbeat table $db_tbl does not have a server_id or id column" unless $tbl_struct->{is_col}->{server_id} || $id; # If there's an id column, then we're running in legacy mode. If there's # a server_id column, then we're running in the new mode which supports # multiple --update instances. if ( $tbl_struct->{is_col}->{id} && $tbl_struct->{is_col}->{server_id} ) { die "Heartbeat table $db_tbl cannot have both an id column and " . "a server_id column"; } # pk_col and pk_val are used to identify the heartbeat row to update or # or monitor. my ($pk_col, $pk_val); if ( $id ) { # Legacy mode: update heartbeat row WHERE id=1 and monitor heartbeat # row WHERE id=1. $pk_col = 'id'; $pk_val = '1'; } elsif ( $tbl_struct->{is_col}->{server_id} ) { # Multi-update mode: update heartbeat row WHERE server_id=@@server_id # and monitor heartbeat row WHERE server_id=master_server_id. 
if ( $o->get('update') ) { $pk_col = 'server_id'; $pk_val = $server_id; } else { # monitor or check my $master_server_id = $o->get('master-server-id'); if ( !$master_server_id ) { eval { my $ms = new MasterSlave( OptionParser => $o, DSNParser => $dp, Quoter => $q, ); my $master_dsn = $ms->get_master_dsn($dbh, $dsn, $dp) or die "This server is not a slave"; my $master_dbh = $dp->get_dbh($dp->get_cxn_params($master_dsn), { AutoCommit => 1 }); ($master_server_id) = $master_dbh->selectrow_array('SELECT @@server_id'); $master_dbh->disconnect; }; if ( $EVAL_ERROR ) { PTDEBUG && _d("Error determining master id:", $EVAL_ERROR); } } if ( !$master_server_id ) { die "The --master-server-id option must be specified because " . "the heartbeat table $db_tbl uses the server_id column " . "for --update or --check but the server's master could " . "not be automatically determined.\n" . "Please read the DESCRIPTION section of the pt-heartbeat POD.\n"; } $pk_col = 'server_id'; $pk_val = $master_server_id; } } else { die "Heartbeat table $db_tbl does not have a server_id or id column"; } PTDEBUG && _d('Heartbeat row primary key:', $pk_col, '=', $pk_val); # Check that heartbeat table has at least 1 row unless --replace because # --replace will create the row if it doesn't exist. if ( !$o->get('replace') ) { my $sql = "SELECT 1 FROM $db_tbl WHERE $pk_col='$pk_val' LIMIT 1"; PTDEBUG && _d($sql); my $row = $dbh->selectall_arrayref($sql); if ( scalar @$row == 0 ) { PTDEBUG && _d('No heartbeat row in table'); if ( $o->get('insert-heartbeat-row') ) { my $sql = "INSERT INTO $db_tbl ($pk_col, ts) " . "VALUES ('$pk_val', $now_func)"; PTDEBUG && _d($sql); $dbh->do($sql); } else { if ( $id ) { die "The heartbeat table is empty.\n" . "At least one row must be inserted into the heartbeat " . "table.\nPlease read the DESCRIPTION section of the " . "pt-heartbeat POD.\n"; } else { die "No row found in heartbeat table for server_id $pk_val.\n" . "At least one row must be inserted into the heartbeat " . "table for server_id $pk_val.\nPlease read the " . "DESCRIPTION section of the pt-heartbeat POD.\n"; } } } } # ######################################################################## # Make sth for updating or checking the heartbeat table. # ######################################################################## my ($heartbeat_sql, $heartbeat_sth); my ($get_delay, $update_heartbeat); if ( $o->get('update') ) { my @master_status_cols = grep { $tbl_struct->{is_col}->{$_} } qw(file position); PTDEBUG && _d("Master status columns:", join(', ', @master_status_cols)); my @slave_status_cols = grep { $tbl_struct->{is_col}->{$_} } qw(relay_master_log_file exec_master_log_pos); PTDEBUG && _d("Slave status columns:", join(', ', @slave_status_cols)); # Just a shortcut so I don't have to check both arrays when creating # SQL statement below. my @extra_cols = (@master_status_cols, @slave_status_cols); if ( $o->get('replace') ) { $heartbeat_sql = "REPLACE INTO $db_tbl (ts, $pk_col" . (@extra_cols ? ", " . join(', ', @extra_cols) : '') . ") VALUES (?, '$pk_val'" . (@extra_cols ? ", " . join(', ', map { '?' } @extra_cols) : '') . ")"; } else { $heartbeat_sql = "UPDATE $db_tbl SET ts=?" . (@extra_cols ? ", " . join(', ', map { "$_=?" } @extra_cols) : "") . 
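# Added note (not part of the original code): when --replace is not given,
# the statement assembled here takes the shape
#
#   UPDATE <db>.<tbl> SET ts=?, file=?, position=?, ...
#    WHERE server_id='<this server's @@server_id>'
#
# (the extra columns appear only when the table has them), while --replace
# builds an equivalent REPLACE INTO.  The --monitor/--check SELECT built
# further below reads the same row WHERE server_id='<master server_id>'.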
" WHERE $pk_col='$pk_val'"; } PTDEBUG && _d("UPDATE SQL:", $heartbeat_sql); $heartbeat_sth = $dbh->prepare($heartbeat_sql); my $ro_check = !!$o->get('check-read-only'); $update_heartbeat = sub { my ($sth) = @_; my @vals; return if $ro_check && server_is_readonly($dbh); my $sql; if ( @master_status_cols ) { $sql = "SHOW MASTER STATUS"; PTDEBUG && _d($dbh, $sql); my $row = $dbh->selectrow_hashref($sql); if ( !$row ) { PTDEBUG && _d("No row from", $sql); push @vals, map { undef } @master_status_cols; } else { push @vals, map { $row->{$_} } @master_status_cols; } } if ( @slave_status_cols ) { $sql = "SHOW SLAVE STATUS"; PTDEBUG && _d($dbh, $sql); my $row = $dbh->selectrow_hashref($sql); if ( !$row ) { PTDEBUG && _d("No row from", $sql); push @vals, map { undef } @slave_status_cols; } else { push @vals, map { $row->{$_} } @slave_status_cols; } } my $retry = Retry->new(); $retry->retry( tries => 3, wait => sub { sleep 0.25; return; }, try => sub { $sth->execute(ts(time, $utc), @vals); PTDEBUG && _d($sth->{Statement}); $sth->finish(); }, fail => sub { my (%args) = @_; my $error = $args{error}; if ( $error =~ m/Deadlock found/ ) { return 1; # try again } else { return 0; } }, final_fail => sub { my (%args) = @_; die $args{error}; } ); return; }; } else { # --monitor or --check my $dbi_driver = lc $o->get('dbi-driver'); # UNIX_TIMESTAMP(UTC_TIMESTAMP()) instead of UNIX_TIMESTAMP() alone, # so we make sure that we aren't being fooled by a timezone. # UNIX_TIMESTAMP(ts) replaces unix_timestamp($ts) -- MySQL is the # authority here, so let it calculate everything. $heartbeat_sql = "SELECT " . ($utc ? 'UNIX_TIMESTAMP(ts)' : 'ts') . ($dbi_driver eq 'mysql' ? '/*!50038, @@hostname AS host*/' : '') . ($id ? "" : ", server_id") . " FROM $db_tbl " . "WHERE $pk_col='$pk_val' " . "LIMIT 1"; PTDEBUG && _d("SELECT SQL:", $heartbeat_sql); $heartbeat_sth = $dbh->prepare($heartbeat_sql); $get_delay = sub { my ($sth) = @_; $sth->execute(); PTDEBUG && _d($sth->{Statement}); my ($ts, $hostname, $server_id) = $sth->fetchrow_array(); my $now = time; PTDEBUG && _d("Heartbeat from server", $server_id, "\n", " now:", ts($now, $utc), "\n", " ts:", $ts, "\n", "skew:", $skew); my $delay = $now - unix_timestamp($ts, $utc) - $skew; PTDEBUG && _d('Delay', sprintf('%.6f', $delay), 'on', $hostname); # Because we adjust for skew, if the ts are less than skew seconds # apart (i.e. replication is very fast) then delay will be negative. # So it's effectively 0 seconds of lag. $delay = 0.00 if $delay < 0; $sth->finish(); return ($delay, $hostname, $pk_val); }; # https://bugs.launchpad.net/percona-toolkit/+bug/1163372 # "pt-heartbeat --utc --check always returns 0" if ( $utc ) { my $sql = "SET time_zone='+0:00'"; PTDEBUG && _d($sql); $dbh->do($sql); } } # Do a little check just to make sure the table is there, so there's one last # chance to catch errors before daemonizing. if ( $o->get('update') ) { $update_heartbeat->($heartbeat_sth); } else { $get_delay->($heartbeat_sth); } $heartbeat_sth->finish(); # ######################################################################## # Daemonize only after (potentially) asking for passwords for --ask-pass. # ######################################################################## my $daemon; if ( $o->get('daemonize') ) { $daemon = new Daemon(o=>$o); $daemon->daemonize(); PTDEBUG && _d('I am a daemon now'); } elsif ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. 
$daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # ######################################################################## # --check and exit if --check was given. # ######################################################################## if ( $o->get('check') ) { PTDEBUG && _d('--check and exit'); check_delay( dsn => $dsn, dbh => $dbh, sth => $heartbeat_sth, sql => $heartbeat_sql, get_delay => $get_delay, interval => $interval, skew => $skew, hires_ts => $hires_ts, OptionParser => $o, DSNParser => $dp, ); disconnect($dbh, $heartbeat_sth); return 0; } # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ { dbh => $dbh, dsn => $dsn } ], ); } # ######################################################################## # Setup moving averages for --frames. # ######################################################################## my @samples; my $limit = max(@$frames); # 2.00s [ 0.05s, 0.01s, 0.00s ] my $format = ($hires_ts ? '%1$.2f' : '%1$4d') . "s [ "; my $findex = 2; foreach (@$frames) { $format .= $findex > 2 ? ', ' : ''; $format .= '%'.$findex.'$5.2fs'; $findex++; } $format .= " ]"; $format .= ($o->get('print-master-server-id') ? ' %'.$findex.'$d' : '') . "\n"; # ######################################################################## # Monitor or update the heartbeat table. # ######################################################################## my $end = $o->get('run-time') ? int(time + $o->get('run-time')) : 0; PTDEBUG && _d($end ? ('Will exit at', ts($end)) : 'Running forever'); my $get_next_interval = make_interval_iter($interval, $skew); my $max_successive_errors = $o->get('fail-successive-errors') || 0; my $num_successive_errors = 0; while ( # Stop if... (!$end || int(time) < $end) # runtime exceeded, or && !-f $sentinel # sentinel file created ) { eval { my $next_interval = $get_next_interval->(); # save current time in variable to avoid race condition # https://bugs.launchpad.net/percona-toolkit/+bug/1406390 my $time = time; if ( $time >= $next_interval ) { do { $next_interval = $get_next_interval->() } until $next_interval > $time; PTDEBUG && _d("Missed last interval; next interval:", ts($next_interval)); } sleep $next_interval - $time; PTDEBUG && _d('Woke up at', ts(time)); if ( $o->get('check-read-only') && $o->get('update') ) { my $read_only_interval = $o->get('read-only-interval') || $interval; while (server_is_readonly($dbh)) { PTDEBUG && _d("Server is read only. Sleeping for $read_only_interval seconds..."); sleep($read_only_interval); if ( -f $sentinel ) { return 0; } } } # Connect or reconnect if necessary. if ( !$dbh->ping() ) { $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), { AutoCommit => 1 }); $dbh->{InactiveDestroy} = 1; # Don't disconnect on fork $dbh->{FetchHashKeyName} = 'NAME_lc'; $dbh->do("USE `$db`"); $heartbeat_sth = undef; } if ( $o->get('monitor') ) { $heartbeat_sth ||= $dbh->prepare($heartbeat_sql); my ($delay) = $get_delay->($heartbeat_sth); unshift @samples, $delay; pop @samples if @samples > $limit; # Calculate and print results my @vals = map { my $bound = min($_, scalar(@samples)); sum(@samples[0 .. 
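# Added note (not part of the original code): $delay comes from get_delay()
# above, i.e. now - unix_timestamp(ts) - skew, clamped at 0, so anything
# within the skew of the master's write is reported as no lag.  @samples
# keeps the newest delays first, capped at max(@$frames) entries, and each
# frame's figure computed here is the sum of the newest min(frame, available)
# samples divided by the full frame length.  For example with --interval 1
# --frames 3s,5s and three samples of 2 seconds each, the line printed is
# roughly "2.00s [  2.00s,  1.20s ]" until five samples have accumulated.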
$bound-1]) / $_; } @$frames; my $output = sprintf $format, $delay, @vals, $pk_val; if ( my $file = $o->get('file') ) { open my $file, '>', $file or die "Can't open $file: $OS_ERROR"; print $file $output or die "Can't print to $file: $OS_ERROR"; close $file or die "Can't close $file: $OS_ERROR"; } else { print $output; } } else { # --update mode $heartbeat_sth ||= $dbh->prepare($heartbeat_sql); $update_heartbeat->($heartbeat_sth); } }; if ( $EVAL_ERROR ) { $num_successive_errors = $num_successive_errors + 1; my ( $err ) = $EVAL_ERROR =~ m/^(?:DBI|DBD).*failed: (.*?)\s*at \S+ line .*/; if ( $err ) { warn "$err\n"; } else { die $EVAL_ERROR; } if ($max_successive_errors > 0 && $num_successive_errors >= $max_successive_errors) { die $EVAL_ERROR; } } else { $num_successive_errors = 0; } } disconnect($dbh, $heartbeat_sth); return 0; } # ############################################################################ # Subroutines. # ############################################################################ sub server_is_readonly { my ($dbh) = @_; my ( $is_read_only ) = $dbh->selectrow_array(q{SELECT @@global.read_only}); if ( $is_read_only ) { my ( $privs ) = eval { $dbh->selectrow_array(q{SHOW GRANTS}) }; if ( $privs && $privs =~ /\b(?:ALL|SUPER)\b/ ) { $is_read_only = undef; } } return $is_read_only; } # Check the delay on a single server. Optionally recurse to all its slaves. sub check_delay { my ( %args ) = @_; my @required_args = qw(dsn dbh sth sql get_delay interval skew OptionParser DSNParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn, $dbh, $sth, $sql, $get_delay, $interval, $skew, $o, $dp) = @args{@required_args}; PTDEBUG && _d('Checking slave delay'); # Collect a list of connections to the slaves. if ( $o->get('recurse') ) { PTDEBUG && _d('Recursing to slaves'); my $ms = new MasterSlave( OptionParser => $o, DSNParser => $dp, Quoter => "Quoter", ); $ms->recurse_to_slaves( { dbh => $dbh, dsn => $dsn, callback => sub { my ( $dsn, $dbh, $level ) = @_; push @dbhs, $dbh; PTDEBUG && _d("Found slave", $dp->as_string($dsn)); push @sths, [ $dsn, $dbh->prepare($sql) ]; }, }, ); } else { push @sths, [ $dsn, $sth ]; } my $format_delay = ($args{hires_ts} ? '%1$.2f' : '%1$d') . ($o->get('print-master-server-id') ? ' %2$d' : "") . "\n"; my $format_host = '%1$-20s '.($args{hires_ts} ? '%2$.2f' : '%2$d') . ($o->get('print-master-server-id') ? ' %3$d' : "") . "\n"; # Before hi-res ts, we could check all slaves at one interval, assuming # the checks were fast, i.e. able to be done within one interval. But # now we have intervals up to 0.01 fast and that's too short to check all # slaves. So for each slave we sleep until the next interval. my $get_next_interval = make_interval_iter($interval, $skew); SLAVE: foreach my $thing ( @sths ) { my ( $dsn, $sth ) = @$thing; PTDEBUG && _d('Checking slave', $dp->as_string($dsn)); my $next_interval = $get_next_interval->(); # save current time in variable to avoid race condition # https://bugs.launchpad.net/percona-toolkit/+bug/1406390 my $time = time; if ( $time >= $next_interval ) { do { $next_interval = $get_next_interval->() } until $next_interval > $time; PTDEBUG && _d("Missed last interval; next interval:", ts($next_interval)); } sleep $next_interval - $time; PTDEBUG && _d('Woke up at', ts(time)); my ($delay, $hostname, $master_server_id) = $get_delay->($sth); if ( $o->get('recurse') ) { # Must print not only the delay, but the server's hostname if # available. 
Prefer the hostname from the DSN, then the hostname # from @@hostname, then fall back to Socket or default File. my $host = $dsn->{h} || $hostname || $dsn->{S} || $dsn->{F} || ''; if ( $dsn->{P} && $dsn->{P} ne '3306' ) { $host .= ":$dsn->{P}"; } printf($format_host, $host, $delay, $master_server_id); } else { # Just print the delay. printf($format_delay, $delay, $master_server_id); } } return; } # The interval iterator works by first returning the next whole second. # So if the current time (since epoch) is 5.123, then the next whole second # is 6.0, plus an optional skew. The next interval is 6.0 * the interval. # If the interval is 0.5s, then the next interval is 6.5, plus an optional # skew. Therefore, we always start on a whole second and return when the # next interval is or should be. The caller can then sleep(time-next_interval) # to wake up at that interval. If the caller misses the next interval, # they just call the iterator until the next interval is later then the # current time. sub make_interval_iter { my ( $interval, $skew ) = @_; die "I need an interval argument" unless defined $interval; my ($s) = gettimeofday(); my $start_s = $s + 1; my $i = 0; my $get_next_interval = sub { return $start_s + ($interval * $i++) + $skew; }; return $get_next_interval; } sub disconnect { my ( $dbh, $sth ) = @_; PTDEBUG && _d('Disconnecting'); $sth->finish() if $sth; foreach my $handle ( @sths ) { my $sth = $handle->[1]; $sth->finish() if $sth; } foreach my $handle ( $dbh, @dbhs ) { $handle->disconnect() if $handle; } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation. # ############################################################################ =pod =head1 NAME pt-heartbeat - Monitor MySQL replication delay. =head1 SYNOPSIS Usage: pt-heartbeat [OPTIONS] [DSN] --update|--monitor|--check|--stop pt-heartbeat measures replication lag on a MySQL or PostgreSQL server. You can use it to update a master or monitor a replica. If possible, MySQL connection options are read from your .my.cnf file. Start daemonized process to update test.heartbeat table on master: pt-heartbeat -D test --update -h master-server --daemonize Monitor replication lag on slave: pt-heartbeat -D test --monitor -h slave-server pt-heartbeat -D test --monitor -h slave-server --dbi-driver Pg Check slave lag once and exit (using optional DSN to specify slave host): pt-heartbeat -D test --check h=slave-server =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-heartbeat is a two-part MySQL and PostgreSQL replication delay monitoring system that measures delay by looking at actual replicated data. 
This avoids reliance on the replication mechanism itself, which is unreliable. (For example, C<SHOW SLAVE STATUS> on MySQL). The first part is an L<"--update"> instance of pt-heartbeat that connects to a master and updates a timestamp ("heartbeat record") every L<"--interval"> seconds. Since the heartbeat table may contain records from multiple masters (see L<"MULTI-SLAVE HIERARCHY">), the server's ID (@@server_id) is used to identify records. The second part is a L<"--monitor"> or L<"--check"> instance of pt-heartbeat that connects to a slave, examines the replicated heartbeat record from its immediate master or the specified L<"--master-server-id">, and computes the difference from the current system time. If replication between the slave and the master is delayed or broken, the computed difference will be greater than zero and potentially increase if L<"--monitor"> is specified. You must either manually create the heartbeat table on the master or use L<"--create-table">. See L<"--create-table"> for the proper heartbeat table structure. The C<MEMORY> storage engine is suggested, but not required of course, for MySQL. The heartbeat table must contain a heartbeat row. By default, a heartbeat row is inserted if it doesn't exist. This feature can be disabled with the L<"--[no]insert-heartbeat-row"> option in case the database user does not have INSERT privileges. pt-heartbeat depends only on the heartbeat record being replicated to the slave, so it works regardless of the replication mechanism (built-in replication, a system such as Continuent Tungsten, etc). It works at any depth in the replication hierarchy; for example, it will reliably report how far a slave lags its master's master's master. And if replication is stopped, it will continue to work and report (accurately!) that the slave is falling further and further behind the master. pt-heartbeat has a maximum resolution of 0.01 second. The clocks on the master and slave servers must be closely synchronized via NTP. By default, L<"--update"> checks happen on the edge of the second (e.g. 00:01) and L<"--monitor"> checks happen halfway between seconds (e.g. 00:01.5). As long as the servers' clocks are closely synchronized and replication events are propagating in less than half a second, pt-heartbeat will report zero seconds of delay. pt-heartbeat will try to reconnect if the connection has an error, but will not retry if it can't get a connection when it first starts. The L<"--dbi-driver"> option lets you use pt-heartbeat to monitor PostgreSQL as well. It is reported to work well with Slony-1 replication.

=head1 MULTI-SLAVE HIERARCHY

If the replication hierarchy has multiple slaves which are masters of other slaves, like "master -> slave1 -> slave2", L<"--update"> instances can be run on the slaves as well as the master. The default heartbeat table (see L<"--create-table">) is keyed on the C<server_id> column, so each server will update the row where C<server_id=@@server_id>. For L<"--monitor"> and L<"--check">, if L<"--master-server-id"> is not specified, the tool tries to discover and use the slave's immediate master. If this fails, or if you want to monitor lag from another master, then you can specify the L<"--master-server-id"> to use.
For example, if the replication hierarchy is "master -> slave1 -> slave2" with corresponding server IDs 1, 2 and 3, you can: pt-heartbeat --daemonize -D test --update -h master pt-heartbeat --daemonize -D test --update -h slave1 Then check (or monitor) the replication delay from master to slave2: pt-heartbeat -D test --master-server-id 1 --check slave2 Or check the replication delay from slave1 to slave2: pt-heartbeat -D test --master-server-id 2 --check slave2 Stopping the L<"--update"> instance one slave1 will not affect the instance on master. =head1 MASTER AND SLAVE STATUS The default heartbeat table (see L<"--create-table">) has columns for saving information from C and C. These columns are optional. If any are present, their corresponding information will be saved. =head1 Percona XtraDB Cluster Although pt-heartbeat should work with all supported versions of Percona XtraDB Cluster (PXC), we recommend using 5.5.28-23.7 and newer. If you are setting up heartbeat instances between cluster nodes, keep in mind that, since the speed of the cluster is determined by its slowest node, pt-heartbeat will not report how fast the cluster itself is, but only how fast events are replicating from one node to another. You must specify L<"--master-server-id"> for L<"--monitor"> and L<"--check"> instances. =head1 OPTIONS Specify at least one of L<"--stop">, L<"--update">, L<"--monitor">, or L<"--check">. L<"--update">, L<"--monitor">, and L<"--check"> are mutually exclusive. L<"--daemonize"> and L<"--check"> are mutually exclusive. This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --check Check slave delay once and exit. If you also specify L<"--recurse">, the tool will try to discover slave's of the given slave and check and print their lag, too. The hostname or IP and port for each slave is printed before its delay. L<"--recurse"> only works with MySQL. =item --check-read-only Check if the server has read_only enabled; If it does, the tool skips doing any inserts. See also L<"--read-only-interval"> =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --create-table Create the heartbeat L<"--table"> if it does not exist. This option causes the table specified by L<"--database"> and L<"--table"> to be created with the following MAGIC_create_heartbeat table definition: CREATE TABLE heartbeat ( ts varchar(26) NOT NULL, server_id int unsigned NOT NULL PRIMARY KEY, file varchar(255) DEFAULT NULL, -- SHOW MASTER STATUS position bigint unsigned DEFAULT NULL, -- SHOW MASTER STATUS relay_master_log_file varchar(255) DEFAULT NULL, -- SHOW SLAVE STATUS exec_master_log_pos bigint unsigned DEFAULT NULL -- SHOW SLAVE STATUS ); The heartbeat table requires at least one row. 
If you manually create the heartbeat table, then you must insert a row by doing: INSERT INTO heartbeat (ts, server_id) VALUES (NOW(), N); or if using L<"--utc">: INSERT INTO heartbeat (ts, server_id) VALUES (UTC_TIMESTAMP(), N); where C is the server's ID; do not use @@server_id because it will replicate and slaves will insert their own server ID instead of the master's server ID. This is done automatically by L<"--create-table">. A legacy version of the heartbeat table is still supported: CREATE TABLE heartbeat ( id int NOT NULL PRIMARY KEY, ts datetime NOT NULL ); Legacy tables do not support L<"--update"> instances on each slave of a multi-slave hierarchy like "master -> slave1 -> slave2". To manually insert the one required row into a legacy table: INSERT INTO heartbeat (id, ts) VALUES (1, NOW()); or if using L<"--utc">: INSERT INTO heartbeat (id, ts) VALUES (1, UTC_TIMESTAMP()); The tool automatically detects if the heartbeat table is legacy. See also L<"MULTI-SLAVE HIERARCHY">. =item --create-table-engine type: string Sets the engine to be used for the heartbeat table. The default storage engine is InnoDB as of MySQL 5.5.5. =item --daemonize Fork to the background and detach from the shell. POSIX operating systems only. =item --database short form: -D; type: string The database to use for the connection. =item --dbi-driver default: mysql; type: string Specify a driver for the connection; C and C are supported. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --file type: string Print latest L<"--monitor"> output to this file. When L<"--monitor"> is given, prints output to the specified file instead of to STDOUT. The file is opened, truncated, and closed every interval, so it will only contain the most recent statistics. Useful when L<"--daemonize"> is given. =item --frames type: string; default: 1m,5m,15m Timeframes for averages. Specifies the timeframes over which to calculate moving averages when L<"--monitor"> is given. Specify as a comma-separated list of numbers with suffixes. The suffix can be s for seconds, m for minutes, h for hours, or d for days. The size of the largest frame determines the maximum memory usage, as up to the specified number of per-second samples are kept in memory to calculate the averages. You can specify as many timeframes as you like. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --[no]insert-heartbeat-row default: yes Insert a heartbeat row in the L<"--table"> if one doesn't exist. The heartbeat L<"--table"> requires a heartbeat row, else there's nothing to L<"--update">, L<"--monitor">, or L<"--check">! By default, the tool will insert a heartbeat row if one is not already present. You can disable this feature by specifying C<--no-insert-heartbeat-row> in case the database user does not have INSERT privileges. =item --interval type: float; default: 1.0 How often to update or check the heartbeat L<"--table">. Updates and checks begin on the first whole second then repeat every L<"--interval"> seconds for L<"--update"> and every L<"--interval"> plus L<"--skew"> seconds for L<"--monitor">. For example, if at 00:00.4 an L<"--update"> instance is started at 0.5 second intervals, the first update happens at 00:01.0, the next at 00:01.5, etc. 
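The whole-second alignment can be sketched in a few lines of Perl. This is only an illustration of the same arithmetic used by the make_interval_iter() sub earlier in this file; the interval and skew values are example inputs, not defaults the tool enforces here:

    use Time::HiRes qw(gettimeofday);
    my ($interval, $skew) = (0.5, 0);   # example values, as in the text above
    my ($s) = gettimeofday();           # whole seconds since the epoch
    my $start = $s + 1;                 # wait for the next whole second
    my @ticks = map { $start + $interval * $_ + $skew } 0 .. 2;
    # Started during second 00:00 with a 0.5s interval, the first updates
    # land at 00:01.0, 00:01.5 and 00:02.0.
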
If at 00:10.7 a L<"--monitor"> instance is started at 0.05 second intervals with the default 0.5 second L<"--skew">, then the first check happens at 00:11.5 (00:11.0 + 0.5) which will be L<"--skew"> seconds after the last update which, because the instances are checking at synchronized intervals, happened at 00:11.0. The tool waits for and begins on the first whole second just to make the interval calculations simpler. Therefore, the tool could wait up to 1 second before updating or checking. The minimum (fastest) interval is 0.01, and the maximum precision is two decimal places, so 0.015 will be rounded to 0.02. If a legacy heartbeat table (see L<"--create-table">) is used, then the maximum precision is 1s because the C column is type C. =item --log type: string Print all output to this file when daemonized. =item --master-server-id type: string Calculate delay from this master server ID for L<"--monitor"> or L<"--check">. If not given, pt-heartbeat attempts to connect to the server's master and determine its server id. =item --monitor Monitor slave delay continuously. Specifies that pt-heartbeat should check the slave's delay every second and report to STDOUT (or if L<"--file"> is given, to the file instead). The output is the current delay followed by moving averages over the timeframe given in L<"--frames">. For example, 5s [ 0.25s, 0.05s, 0.02s ] =item --fail-successive-errors type: int If specified, pt-heartbeat will fail after given number of successive DBI errors (failure to connect to server or issue a query). =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --print-master-server-id Print the auto-detected or given L<"--master-server-id">. If L<"--check"> or L<"--monitor"> is specified, specifying this option will print the auto-detected or given L<"--master-server-id"> at the end of each line. =item --read-only-interval type: int When L<"--check-read-only"> is specified, the interval to sleep while the server is found to be read-only. If unspecified, L<"--interval"> is used. =item --recurse type: int Check slaves recursively to this depth in L<"--check"> mode. Try to discover slave servers recursively, to the specified depth. After discovering servers, run the check on each one of them and print the hostname (if possible), followed by the slave delay. This currently works only with MySQL. See L<"--recursion-method">. =item --recursion-method type: array; default: processlist,hosts Preferred recursion method used to find slaves. Possible methods are: METHOD USES =========== ================== processlist SHOW PROCESSLIST hosts SHOW SLAVE HOSTS none Do not find slaves The processlist method is preferred because SHOW SLAVE HOSTS is not reliable. However, the hosts method is required if the server uses a non-standard port (not 3306). Usually pt-heartbeat does the right thing and finds the slaves, but you may give a preferred method and it will be used first. If it doesn't find any slaves, the other methods will be tried. 
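For illustration only (the tool delegates the real work to its MasterSlave module), the two discovery methods boil down to queries like the following sketch; the connection values and variable names are placeholders, not anything the tool requires:

    use DBI;
    my $dbh = DBI->connect('DBI:mysql:;host=master-server', 'user', 'pass',
                           { RaiseError => 1, PrintError => 0 });
    # "hosts" method: slaves appear here only if they set report_host.
    my $hosts = $dbh->selectall_arrayref('SHOW SLAVE HOSTS', { Slice => {} });
    # "processlist" method: a replicating slave holds a Binlog Dump thread open.
    my $procs  = $dbh->selectall_arrayref('SHOW PROCESSLIST', { Slice => {} });
    my @slaves = map  { (split /:/, $_->{Host})[0] }
                 grep { ($_->{Command} || '') =~ m/Binlog Dump/i } @$procs;
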
=item --replace Use C instead of C for --update. When running in L<"--update"> mode, use C instead of C to set the heartbeat table's timestamp. The C statement is a MySQL extension to SQL. This option is useful when you don't know whether the table contains any rows or not. It must be used in conjunction with --update. =item --run-time type: time Time to run before exiting. =item --sentinel type: string; default: /tmp/pt-heartbeat-sentinel Exit if this file exists. =item --slave-user type: string Sets the user to be used to connect to the slaves. This parameter allows you to have a different user with less privileges on the slaves but that user must exist on all slaves. =item --slave-password type: string Sets the password to be used to connect to the slaves. It can be used with --slave-user and the password for the user must be the same on all slaves. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the defaultvalue of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --skew type: float; default: 0.5 How long to delay checks. The default is to delay checks one half second. Since the update happens as soon as possible after the beginning of the second on the master, this allows one half second of replication delay before reporting that the slave lags the master by one second. If your clocks are not completely accurate or there is some other reason you'd like to delay the slave more or less, you can tweak this value. Try setting the C environment variable to see the effect this has. =item --socket short form: -S; type: string Socket file to use for connection. =item --stop Stop running instances by creating the sentinel file. This should have the effect of stopping all running instances which are watching the same sentinel file. If none of L<"--update">, L<"--monitor"> or L<"--check"> is specified, C will exit after creating the file. If one of these is specified, C will wait the interval given by L<"--interval">, then remove the file and continue working. You might find this handy to stop cron jobs gracefully if necessary, or to replace one running instance with another. For example, if you want to stop and restart C every hour (just to make sure that it is restarted every hour, in case of a server crash or some other problem), you could use a C line like this: 0 * * * * pt-heartbeat --update -D test --stop \ --sentinel /tmp/pt-heartbeat-hourly The non-default L<"--sentinel"> will make sure the hourly C job stops only instances previously started with the same options (that is, from the same C job). See also L<"--sentinel">. =item --table type: string; default: heartbeat The table to use for the heartbeat. Don't specify database.table; use L<"--database"> to specify the database. See L<"--create-table">. =item --update Update a master's heartbeat. =item --user short form: -u; type: string User for login if not current user. =item --utc Ignore system time zones and use only UTC. By default pt-heartbeat does not check or adjust for different system or MySQL time zones which can cause the tool to compute the lag incorrectly. Specifying this option is a good idea because it ensures that the tool works correctly regardless of time zones. 
If used, this option must be used for all pt-heartbeat instances: L<"--update">, L<"--monitor">, L<"--check">, etc. You should probably set the option in a L<"--config"> file. Mixing this option with pt-heartbeat instances not using this option will cause false-positive lag readings due to different time zones (unless all your systems are set to use UTC, in which case this option isn't required). =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-heartbeat ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. 
Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Proven Scaling LLC, SixApart Ltd, Baron Schwartz, and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2007-2018 Percona LLC and/or its affiliates, 2006 Proven Scaling LLC and Six Apart Ltd. Feedback and improvements are welcome. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-heartbeat 3.1.0 =cut percona-toolkit-3.1/bin/pt-index-usage000775 001750 001750 00000675565 13535723560 021226 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit DSNParser Quoter OptionParser PodParser QueryParser QueryRewriter SlowLogParser TableParser Transformers Schema SchemaIterator FileIterator ExplainAnalyzer IndexUsage Progress HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. 
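# The embedded copy below provides the helpers shared by the fat-packed
# tools: have_required_args() for validating named arguments, a Dumper()
# wrapper configured for sorted, unquoted keys so debug output is stable,
# and the _d() debug printer, which callers guard with the PTDEBUG constant
# (set from the PTDEBUG environment variable).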
# ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? ($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. 
For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? ';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . 
" OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . ": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? 
$dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . '`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? 
"$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
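         # Each spec hash couples a Getopt::Long-style spec string with the
         # attributes parsed from the POD (type, short form, default, group,
         # and so on).  Below, the long and optional short names are split
         # out, negatable (--[no]foo), cumulative (+) and repeatable forms
         # are flagged, the option's group is recorded, and any documented
         # default value is stored for later.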
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?<!\\),\s*/, ($val || '')) }; } elsif ( $opt->{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?<!\\),\s*/, ($val || '')) ]; } else { PTDEBUG && _d('Nothing to validate for option', $opt->{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = <STDIN>); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # PodParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/PodParser.pm # t/lib/PodParser.t # See https://launchpad.net/percona-toolkit for more information. 
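# ---------------------------------------------------------------------------
# A minimal usage sketch for the OptionParser package defined above, kept
# entirely in comments so the embedded code is unchanged. It follows the
# calling pattern the toolkit's programs typically use; new() and get_specs()
# are defined earlier in this package (not shown here) and are assumed to
# accept the arguments shown.
#
#   my $o = OptionParser->new();
#   $o->get_specs($0);            # read the option specs from this tool's POD
#   $o->get_opts();               # parse @ARGV, --config files, and defaults
#   $o->usage_or_errors($0);      # print --help or accumulated errors, if any
#   my $limit = $o->get('limit'); # value of --limit (long or short name works)
#   if ( $o->got('password') ) {  # was the option given explicitly?
#      my $pass = $o->get('password');
#   }
# ---------------------------------------------------------------------------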
# ########################################################################### { package PodParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; my %parse_items_from = ( 'OPTIONS' => 1, 'DSN OPTIONS' => 1, 'RULES' => 1, ); my %item_pattern_for = ( 'OPTIONS' => qr/--(.*)/, 'DSN OPTIONS' => qr/\* (.)/, 'RULES' => qr/(.*)/, ); my %section_has_rules = ( 'OPTIONS' => 1, 'DSN OPTIONS' => 0, 'RULES' => 0, ); sub new { my ( $class, %args ) = @_; my $self = { current_section => '', current_item => '', items => {}, # keyed off SECTION magic => {}, # keyed off SECTION->magic ident (without MAGIC_) magic_ident => '', # set when next para is a magic para }; return bless $self, $class; } sub get_items { my ( $self, $section ) = @_; return $section ? $self->{items}->{$section} : $self->{items}; } sub get_magic { my ( $self, $section ) = @_; return $section ? $self->{magic}->{$section} : $self->{magic}; } sub parse_from_file { my ( $self, $file ) = @_; return unless $file; PTDEBUG && _d('Parsing POD in', $file); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs my $para; 1 while defined($para = <$fh>) && $para !~ m/^=pod/; die "$file does not contain =pod" unless $para; while ( defined($para = <$fh>) && $para !~ m/^=cut/ ) { if ( $para =~ m/^=(head|item|over|back)/ ) { my ($cmd, $name) = $para =~ m/^=(\w+)(?:\s+(.+))?/; $name ||= ''; PTDEBUG && _d('cmd:', $cmd, 'name:', $name); $self->command($cmd, $name); } elsif ( $parse_items_from{$self->{current_section}} ) { $self->textblock($para); } } close $fh; } sub command { my ( $self, $cmd, $name ) = @_; $name =~ s/\s+\Z//m; # Remove \n and blank line after name. if ( $cmd eq 'head1' ) { PTDEBUG && _d('In section', $name); $self->{current_section} = $name; } elsif ( $cmd eq 'over' ) { if ( $parse_items_from{$name} ) { PTDEBUG && _d('Start items in', $self->{current_section}); $self->{items}->{$self->{current_section}} = {}; } } elsif ( $cmd eq 'item' && $parse_items_from{$self->{current_section}} ) { my $pat = $item_pattern_for{ $self->{current_section} }; my ($item) = $name =~ m/$pat/; if ( $item ) { PTDEBUG && _d($self->{current_section}, 'item:', $item); $self->{items}->{ $self->{current_section} }->{$item} = { desc => '', # every item should have a desc }; $self->{current_item} = $item; } else { warn "Item $name does not match $pat"; } } elsif ( $cmd eq 'back' ) { if ( $parse_items_from{$self->{current_section}} ) { PTDEBUG && _d('End items in', $self->{current_section}); } } else { $self->{current_section} = ''; } return; } sub textblock { my ( $self, $para ) = @_; return unless $self->{current_section} && $self->{current_item}; my $section = $self->{current_section}; my $item = $self->{items}->{$section}->{ $self->{current_item} }; $para =~ s/\s+\Z//; if ( $para =~ m/^[a-z]\w+[:;] / ) { PTDEBUG && _d('Item attributes:', $para); map { my ($attrib, $val) = split(/: /, $_); $item->{$attrib} = defined $val ? $val : 1; } split(/; /, $para); } else { if ( $self->{magic_ident} ) { my ($leading_space) = $para =~ m/^(\s+)/; my $indent = length($leading_space || ''); if ( $indent ) { $para =~ s/^\s{$indent}//mg; $para =~ s/\s+$//; PTDEBUG && _d("MAGIC", $self->{magic_ident}, "para:", $para); $self->{magic}->{$self->{current_section}}->{$self->{magic_ident}} = $para; } else { PTDEBUG && _d("MAGIC", $self->{magic_ident}, "para is not indented; treating as normal para"); } $self->{magic_ident} = ''; # must unset this! 
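# Unsetting magic_ident above ensures that only the first indented paragraph
# after a MAGIC_* marker is captured as that marker's text; subsequent
# paragraphs are treated as ordinary item description again.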
} PTDEBUG && _d('Item desc:', substr($para, 0, 40), length($para) > 40 ? '...' : ''); $para =~ s/\n+/ /g; $item->{desc} .= $para; if ( $para =~ m/MAGIC_(\w+)/ ) { $self->{magic_ident} = $1; # XXX PTDEBUG && _d("MAGIC", $self->{magic_ident}, "follows"); } } return; } sub verbatim { my ( $self, $para ) = @_; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End PodParser package # ########################################################################### # ########################################################################### # QueryParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryParser.pm # t/lib/QueryParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package QueryParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; our $tbl_ident = qr/(?:`[^`]+`|\w+)(?:\.(?:`[^`]+`|\w+))?/; our $tbl_regex = qr{ \b(?:FROM|JOIN|(?<!KEY\s)UPDATE|INTO) \b\s* \(? ($tbl_ident (?: (?:\s+ (?:AS\s+)? \w+)?, \s*$tbl_ident )* ) }xio; our $has_derived = qr{ \b(?:FROM|JOIN|,) \s*\(\s*SELECT }xi; our $data_def_stmts = qr/(?:CREATE|ALTER|TRUNCATE|DROP|RENAME)/i; our $data_manip_stmts = qr/(?:INSERT|UPDATE|DELETE|REPLACE)/i; sub new { my ( $class ) = @_; return bless {}, $class; } sub get_tables { my ( $self, $query ) = @_; return unless $query; PTDEBUG && _d('Getting tables for', $query); my ( $ddl_stmt ) = $query =~ m/^\s*($data_def_stmts)\b/i; if ( $ddl_stmt ) { PTDEBUG && _d('Special table type:', $ddl_stmt); $query =~ s/IF\s+(?:NOT\s+)?EXISTS//i; if ( $query =~ m/$ddl_stmt DATABASE\b/i ) { PTDEBUG && _d('Query alters a database, not a table'); return (); } if ( $ddl_stmt =~ m/CREATE/i && $query =~ m/$ddl_stmt\b.+?\bSELECT\b/i ) { my ($select) = $query =~ m/\b(SELECT\b.+)/is; PTDEBUG && _d('CREATE TABLE ... SELECT:', $select); return $self->get_tables($select); } my ($tbl) = $query =~ m/TABLE\s+($tbl_ident)(\s+.*)?/i; PTDEBUG && _d('Matches table:', $tbl); return ($tbl); } $query =~ s/ (?:LOW_PRIORITY|IGNORE|STRAIGHT_JOIN)//ig; if ( $query =~ s/^\s*LOCK TABLES\s+//i ) { PTDEBUG && _d('Special table type: LOCK TABLES'); $query =~ s/\s+(?:READ(?:\s+LOCAL)?|WRITE)\s*//gi; PTDEBUG && _d('Locked tables:', $query); $query = "FROM $query"; } $query =~ s/\\["']//g; # quoted strings $query =~ s/".*?"/?/sg; # quoted strings $query =~ s/'.*?'/?/sg; # quoted strings my @tables; foreach my $tbls ( $query =~ m/$tbl_regex/gio ) { PTDEBUG && _d('Match tables:', $tbls); next if $tbls =~ m/\ASELECT\b/i; foreach my $tbl ( split(',', $tbls) ) { $tbl =~ s/\s*($tbl_ident)(\s+.*)?/$1/gio; if ( $tbl !~ m/[a-zA-Z]/ ) { PTDEBUG && _d('Skipping suspicious table name:', $tbl); next; } push @tables, $tbl; } } return @tables; } sub has_derived_table { my ( $self, $query ) = @_; my $match = $query =~ m/$has_derived/; PTDEBUG && _d($query, 'has ' . ($match ? 'a' : 'no') . ' derived table'); return $match; } sub get_aliases { my ( $self, $query, $list ) = @_; my $result = { DATABASE => {}, TABLE => {}, }; return $result unless $query; $query =~ s/ (?:LOW_PRIORITY|IGNORE|STRAIGHT_JOIN)//ig; $query =~ s/ (?:INNER|OUTER|CROSS|LEFT|RIGHT|NATURAL)//ig; my @tbl_refs; my ($tbl_refs, $from) = $query =~ m{ ( (FROM|INTO|UPDATE)\b\s* # Keyword before table refs .+? # Table refs ) (?:\s+|\z) # If the query does not end with the table (?:WHERE|ORDER|LIMIT|HAVING|SET|VALUES|\z) # Keyword after table refs }ix; if ( $tbl_refs ) { if ( $query =~ m/^(?:INSERT|REPLACE)/i ) { $tbl_refs =~ s/\([^\)]+\)\s*//; } PTDEBUG && _d('tbl refs:', $tbl_refs); my $before_tbl = qr/(?:,|JOIN|\s|$from)+/i; my $after_tbl = qr/(?:,|JOIN|ON|USING|\z)/i; $tbl_refs =~ s/ = /=/g; while ( $tbl_refs =~ m{ $before_tbl\b\s* ( ($tbl_ident) (?:\s+ (?:AS\s+)? (\w+))? 
) \s*$after_tbl }xgio ) { my ( $tbl_ref, $db_tbl, $alias ) = ($1, $2, $3); PTDEBUG && _d('Match table:', $tbl_ref); push @tbl_refs, $tbl_ref; $alias = $self->trim_identifier($alias); if ( $tbl_ref =~ m/^AS\s+\w+/i ) { PTDEBUG && _d('Subquery', $tbl_ref); $result->{TABLE}->{$alias} = undef; next; } my ( $db, $tbl ) = $db_tbl =~ m/^(?:(.*?)\.)?(.*)/; $db = $self->trim_identifier($db); $tbl = $self->trim_identifier($tbl); $result->{TABLE}->{$alias || $tbl} = $tbl; $result->{DATABASE}->{$tbl} = $db if $db; } } else { PTDEBUG && _d("No tables ref in", $query); } if ( $list ) { return \@tbl_refs; } else { return $result; } } sub split { my ( $self, $query ) = @_; return unless $query; $query = $self->clean_query($query); PTDEBUG && _d('Splitting', $query); my $verbs = qr{SELECT|INSERT|UPDATE|DELETE|REPLACE|UNION|CREATE}i; my @split_statements = grep { $_ } split(m/\b($verbs\b(?!(?:\s*\()))/io, $query); my @statements; if ( @split_statements == 1 ) { push @statements, $query; } else { for ( my $i = 0; $i <= $#split_statements; $i += 2 ) { push @statements, $split_statements[$i].$split_statements[$i+1]; if ( $statements[-2] && $statements[-2] =~ m/on duplicate key\s+$/i ) { $statements[-2] .= pop @statements; } } } PTDEBUG && _d('statements:', map { $_ ? "<$_>" : 'none' } @statements); return @statements; } sub clean_query { my ( $self, $query ) = @_; return unless $query; $query =~ s!/\*.*?\*/! !g; # Remove /* comment blocks */ $query =~ s/^\s+//; # Remove leading spaces $query =~ s/\s+$//; # Remove trailing spaces $query =~ s/\s{2,}/ /g; # Remove extra spaces return $query; } sub split_subquery { my ( $self, $query ) = @_; return unless $query; $query = $self->clean_query($query); $query =~ s/;$//; my @subqueries; my $sqno = 0; # subquery number my $pos = 0; while ( $query =~ m/(\S+)(?:\s+|\Z)/g ) { $pos = pos($query); my $word = $1; PTDEBUG && _d($word, $sqno); if ( $word =~ m/^\(?SELECT\b/i ) { my $start_pos = $pos - length($word) - 1; if ( $start_pos ) { $sqno++; PTDEBUG && _d('Subquery', $sqno, 'starts at', $start_pos); $subqueries[$sqno] = { start_pos => $start_pos, end_pos => 0, len => 0, words => [$word], lp => 1, # left parentheses rp => 0, # right parentheses done => 0, }; } else { PTDEBUG && _d('Main SELECT at pos 0'); } } else { next unless $sqno; # next unless we're in a subquery PTDEBUG && _d('In subquery', $sqno); my $sq = $subqueries[$sqno]; if ( $sq->{done} ) { PTDEBUG && _d('This subquery is done; SQL is for', ($sqno - 1 ? 
"subquery $sqno" : "the main SELECT")); next; } push @{$sq->{words}}, $word; my $lp = ($word =~ tr/\(//) || 0; my $rp = ($word =~ tr/\)//) || 0; PTDEBUG && _d('parentheses left', $lp, 'right', $rp); if ( ($sq->{lp} + $lp) - ($sq->{rp} + $rp) == 0 ) { my $end_pos = $pos - 1; PTDEBUG && _d('Subquery', $sqno, 'ends at', $end_pos); $sq->{end_pos} = $end_pos; $sq->{len} = $end_pos - $sq->{start_pos}; } } } for my $i ( 1..$#subqueries ) { my $sq = $subqueries[$i]; next unless $sq; $sq->{sql} = join(' ', @{$sq->{words}}); substr $query, $sq->{start_pos} + 1, # +1 for ( $sq->{len} - 1, # -1 for ) "__subquery_$i"; } return $query, map { $_->{sql} } grep { defined $_ } @subqueries; } sub query_type { my ( $self, $query, $qr ) = @_; my ($type, undef) = $qr->distill_verbs($query); my $rw; if ( $type =~ m/^SELECT\b/ ) { $rw = 'read'; } elsif ( $type =~ m/^$data_manip_stmts\b/ || $type =~ m/^$data_def_stmts\b/ ) { $rw = 'write' } return { type => $type, rw => $rw, } } sub get_columns { my ( $self, $query ) = @_; my $cols = []; return $cols unless $query; my $cols_def; if ( $query =~ m/^SELECT/i ) { $query =~ s/ ^SELECT\s+ (?:ALL |DISTINCT |DISTINCTROW |HIGH_PRIORITY |STRAIGHT_JOIN |SQL_SMALL_RESULT |SQL_BIG_RESULT |SQL_BUFFER_RESULT |SQL_CACHE |SQL_NO_CACHE |SQL_CALC_FOUND_ROWS )\s+ /SELECT /xgi; ($cols_def) = $query =~ m/^SELECT\s+(.+?)\s+FROM/i; } elsif ( $query =~ m/^(?:INSERT|REPLACE)/i ) { ($cols_def) = $query =~ m/\(([^\)]+)\)\s*VALUE/i; } PTDEBUG && _d('Columns:', $cols_def); if ( $cols_def ) { @$cols = split(',', $cols_def); map { my $col = $_; $col = s/^\s+//g; $col = s/\s+$//g; $col; } @$cols; } return $cols; } sub parse { my ( $self, $query ) = @_; return unless $query; my $parsed = {}; $query =~ s/\n/ /g; $query = $self->clean_query($query); $parsed->{query} = $query, $parsed->{tables} = $self->get_aliases($query, 1); $parsed->{columns} = $self->get_columns($query); my ($type) = $query =~ m/^(\w+)/; $parsed->{type} = lc $type; $parsed->{sub_queries} = []; return $parsed; } sub extract_tables { my ( $self, %args ) = @_; my $query = $args{query}; my $default_db = $args{default_db}; my $q = $self->{Quoter} || $args{Quoter}; return unless $query; PTDEBUG && _d('Extracting tables'); my @tables; my %seen; foreach my $db_tbl ( $self->get_tables($query) ) { next unless $db_tbl; next if $seen{$db_tbl}++; # Unique-ify for issue 337. my ( $db, $tbl ) = $q->split_unquote($db_tbl); push @tables, [ $db || $default_db, $tbl ]; } return @tables; } sub trim_identifier { my ($self, $str) = @_; return unless defined $str; $str =~ s/`//g; $str =~ s/^\s+//; $str =~ s/\s+$//; return $str; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryParser package # ########################################################################### # ########################################################################### # QueryRewriter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryRewriter.pm # t/lib/QueryRewriter.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package QueryRewriter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; our $verbs = qr{^SHOW|^FLUSH|^COMMIT|^ROLLBACK|^BEGIN|SELECT|INSERT |UPDATE|DELETE|REPLACE|^SET|UNION|^START|^LOCK}xi; my $quote_re = qr/"(?:(?!(?<!\\)").)*"|'(?:(?!(?<!\\)').)*'/; my $bal; $bal = qr/ \( (?: (?> [^()]+ ) # Non-parens without backtracking | (??{ $bal }) # Group with matching parens )* \) /x; my $olc_re = qr/(?:--|#)[^'"\r\n]*(?=[\r\n]|\Z)/; # One-line comments my $mlc_re = qr#/\*[^!].*?\*/#sm; # But not /*!version */ my $vlc_re = qr#/\*.*?[0-9]+.*?\*/#sm; # For SHOW + /*!version */ my $vlc_rf = qr#^(?:SHOW).*?/\*![0-9]+(.*?)\*/#sm; # Variation for SHOW sub new { my ( $class, %args ) = @_; my $self = { %args }; return bless $self, $class; } sub strip_comments { my ( $self, $query ) = @_; return unless $query; $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; if ( $query =~ m/$vlc_rf/i ) { # contains show + version my $qualifier = $1 || ''; $query =~ s/$vlc_re/$qualifier/go; } return $query; } sub shorten { my ( $self, $query, $length ) = @_; $query =~ s{ \A( (?:INSERT|REPLACE) (?:\s+LOW_PRIORITY|DELAYED|HIGH_PRIORITY|IGNORE)? (?:\s\w+)*\s+\S+\s+VALUES\s*\(.*?\) ) \s*,\s*\(.*?(ON\s+DUPLICATE|\Z)} {$1 /*... omitted ...*/$2}xsi; return $query unless $query =~ m/IN\s*\(\s*(?!select)/i; my $last_length = 0; my $query_length = length($query); while ( $length > 0 && $query_length > $length && $query_length < ( $last_length || $query_length + 1 ) ) { $last_length = $query_length; $query =~ s{ (\bIN\s*\() # The opening of an IN list ([^\)]+) # Contents of the list, assuming no item contains paren (?=\)) # Close of the list } { $1 . __shorten($2) }gexsi; } return $query; } sub __shorten { my ( $snippet ) = @_; my @vals = split(/,/, $snippet); return $snippet unless @vals > 20; my @keep = splice(@vals, 0, 20); # Remove and save the first 20 items return join(',', @keep) . "/*... omitted " . scalar(@vals) . " items ...*/"; } sub fingerprint { my ( $self, $query ) = @_; $query =~ m#\ASELECT /\*!40001 SQL_NO_CACHE \*/ \* FROM `# # mysqldump query && return 'mysqldump'; $query =~ m#/\*\w+\.\w+:[0-9]/[0-9]\*/# # pt-table-checksum, etc query && return 'percona-toolkit'; $query =~ m/\Aadministrator command: / && return $query; $query =~ m/\A\s*(call\s+\S+)\(/i && return lc($1); # Warning! $1 used, be careful. 
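# The steps below turn the query into its fingerprint: multi-value
# INSERT/REPLACE statements are cut down to their first value list, comments
# are stripped, quoted strings, numbers, booleans and NULLs are collapsed to
# '?', whitespace is collapsed and keywords lowercased, and IN()/VALUES()
# lists, repeated UNION branches, LIMIT arguments and redundant ASC in ORDER
# BY are folded into canonical forms.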
if ( my ($beginning) = $query =~ m/\A((?:INSERT|REPLACE)(?: IGNORE)?\s+INTO.+?VALUES\s*\(.*?\))\s*,\s*\(/is ) { $query = $beginning; # Shorten multi-value INSERT statements ASAP } $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; $query =~ s/\Ause \S+\Z/use ?/i # Abstract the DB in USE && return $query; $query =~ s/\\["']//g; # quoted strings $query =~ s/".*?"/?/sg; # quoted strings $query =~ s/'.*?'/?/sg; # quoted strings $query =~ s/\bfalse\b|\btrue\b/?/isg; # boolean values if ( $self->{match_md5_checksums} ) { $query =~ s/([._-])[a-f0-9]{32}/$1?/g; } if ( !$self->{match_embedded_numbers} ) { $query =~ s/[0-9+-][0-9a-f.xb+-]*/?/g; } else { $query =~ s/\b[0-9+-][0-9a-f.xb+-]*/?/g; } if ( $self->{match_md5_checksums} ) { $query =~ s/[xb+-]\?/?/g; } else { $query =~ s/[xb.+-]\?/?/g; } $query =~ s/\A\s+//; # Chop off leading whitespace chomp $query; # Kill trailing whitespace $query =~ tr[ \n\t\r\f][ ]s; # Collapse whitespace $query = lc $query; $query =~ s/\bnull\b/?/g; # Get rid of NULLs $query =~ s{ # Collapse IN and VALUES lists \b(in|values?)(?:[\s,]*\([\s?,]*\))+ } {$1(?+)}gx; $query =~ s{ # Collapse UNION \b(select\s.*?)(?:(\sunion(?:\sall)?)\s\1)+ } {$1 /*repeat$2*/}xg; $query =~ s/\blimit \?(?:, ?\?| offset \?)?/limit ?/; # LIMIT if ( $query =~ m/\bORDER BY /gi ) { # Find, anchor on ORDER BY clause 1 while $query =~ s/\G(.+?)\s+ASC/$1/gi && pos $query; } return $query; } sub distill_verbs { my ( $self, $query ) = @_; $query =~ m/\A\s*call\s+(\S+)\(/i && return "CALL $1"; $query =~ m/\A\s*use\s+/ && return "USE"; $query =~ m/\A\s*UNLOCK TABLES/i && return "UNLOCK"; $query =~ m/\A\s*xa\s+(\S+)/i && return "XA_$1"; if ( $query =~ m/\A\s*LOAD/i ) { my ($tbl) = $query =~ m/INTO TABLE\s+(\S+)/i; $tbl ||= ''; $tbl =~ s/`//g; return "LOAD DATA $tbl"; } if ( $query =~ m/\Aadministrator command:/ ) { $query =~ s/administrator command:/ADMIN/; $query = uc $query; return $query; } $query = $self->strip_comments($query); if ( $query =~ m/\A\s*SHOW\s+/i ) { PTDEBUG && _d($query); $query = uc $query; $query =~ s/\s+(?:SESSION|FULL|STORAGE|ENGINE)\b/ /g; $query =~ s/\s+COUNT[^)]+\)//g; $query =~ s/\s+(?:FOR|FROM|LIKE|WHERE|LIMIT|IN)\b.+//ms; $query =~ s/\A(SHOW(?:\s+\S+){1,2}).*\Z/$1/s; $query =~ s/\s+/ /g; PTDEBUG && _d($query); return $query; } eval $QueryParser::data_def_stmts; eval $QueryParser::tbl_ident; my ( $dds ) = $query =~ /^\s*($QueryParser::data_def_stmts)\b/i; if ( $dds) { $query =~ s/\s+IF(?:\s+NOT)?\s+EXISTS/ /i; my ( $obj ) = $query =~ m/$dds.+(DATABASE|TABLE)\b/i; $obj = uc $obj if $obj; PTDEBUG && _d('Data def statment:', $dds, 'obj:', $obj); my ($db_or_tbl) = $query =~ m/(?:TABLE|DATABASE)\s+($QueryParser::tbl_ident)(\s+.*)?/i; PTDEBUG && _d('Matches db or table:', $db_or_tbl); return uc($dds . ($obj ? " $obj" : '')), $db_or_tbl; } my @verbs = $query =~ m/\b($verbs)\b/gio; @verbs = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } map { uc } @verbs; }; if ( ($verbs[0] || '') eq 'SELECT' && @verbs > 1 ) { PTDEBUG && _d("False-positive verbs after SELECT:", @verbs[1..$#verbs]); my $union = grep { $_ eq 'UNION' } @verbs; @verbs = $union ? 
qw(SELECT UNION) : qw(SELECT); } my $verb_str = join(q{ }, @verbs); return $verb_str; } sub __distill_tables { my ( $self, $query, $table, %args ) = @_; my $qp = $args{QueryParser} || $self->{QueryParser}; die "I need a QueryParser argument" unless $qp; my @tables = map { $_ =~ s/`//g; $_ =~ s/(_?)[0-9]+/$1?/g; $_; } grep { defined $_ } $qp->get_tables($query); push @tables, $table if $table; @tables = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } @tables; }; return @tables; } sub distill { my ( $self, $query, %args ) = @_; if ( $args{generic} ) { my ($cmd, $arg) = $query =~ m/^(\S+)\s+(\S+)/; return '' unless $cmd; $query = (uc $cmd) . ($arg ? " $arg" : ''); } else { my ($verbs, $table) = $self->distill_verbs($query, %args); if ( $verbs && $verbs =~ m/^SHOW/ ) { my %alias_for = qw( SCHEMA DATABASE KEYS INDEX INDEXES INDEX ); map { $verbs =~ s/$_/$alias_for{$_}/ } keys %alias_for; $query = $verbs; } elsif ( $verbs && $verbs =~ m/^LOAD DATA/ ) { return $verbs; } else { my @tables = $self->__distill_tables($query, $table, %args); $query = join(q{ }, $verbs, @tables); } } if ( $args{trf} ) { $query = $args{trf}->($query, %args); } return $query; } sub convert_to_select { my ( $self, $query ) = @_; return unless $query; return if $query =~ m/=\s*\(\s*SELECT /i; $query =~ s{ \A.*? update(?:\s+(?:low_priority|ignore))?\s+(.*?) \s+set\b(.*?) (?:\s*where\b(.*?))? (limit\s*[0-9]+(?:\s*,\s*[0-9]+)?)? \Z } {__update_to_select($1, $2, $3, $4)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ .*?\binto\b(.*?)\(([^\)]+)\)\s* values?\s*(\(.*?\))\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select($1, $2, $3)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ (?:.*?\binto)\b(.*?)\s* set\s+(.*?)\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select_with_set($1, $2)}exsi || $query =~ s{ \A.*? delete\s+(.*?) \bfrom\b(.*) \Z } {__delete_to_select($1, $2)}exsi; $query =~ s/\s*on\s+duplicate\s+key\s+update.*\Z//si; $query =~ s/\A.*?(?=\bSELECT\s*\b)//ism; return $query; } sub convert_select_list { my ( $self, $query ) = @_; $query =~ s{ \A\s*select(.*?)\bfrom\b } {$1 =~ m/\*/ ? "select 1 from" : "select isnull(coalesce($1)) from"}exi; return $query; } sub __delete_to_select { my ( $delete, $join ) = @_; if ( $join =~ m/\bjoin\b/ ) { return "select 1 from $join"; } return "select * from $join"; } sub __insert_to_select { my ( $tbl, $cols, $vals ) = @_; PTDEBUG && _d('Args:', @_); my @cols = split(/,/, $cols); PTDEBUG && _d('Cols:', @cols); $vals =~ s/^\(|\)$//g; # Strip leading/trailing parens my @vals = $vals =~ m/($quote_re|[^,]*${bal}[^,]*|[^,]+)/g; PTDEBUG && _d('Vals:', @vals); if ( @cols == @vals ) { return "select * from $tbl where " . join(' and ', map { "$cols[$_]=$vals[$_]" } (0..$#cols)); } else { return "select * from $tbl limit 1"; } } sub __insert_to_select_with_set { my ( $from, $set ) = @_; $set =~ s/,/ and /g; return "select * from $from where $set "; } sub __update_to_select { my ( $from, $set, $where, $limit ) = @_; return "select $set from $from " . ( $where ? "where $where" : '' ) . ( $limit ? " $limit " : '' ); } sub wrap_in_derived { my ( $self, $query ) = @_; return unless $query; return $query =~ m/\A\s*select/i ? "select 1 from ($query) as x limit 1" : $query; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryRewriter package # ########################################################################### # ########################################################################### # SlowLogParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/SlowLogParser.pm # t/lib/SlowLogParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package SlowLogParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class ) = @_; my $self = { pending => [], last_event_offset => undef, }; return bless $self, $class; } my $slow_log_ts_line = qr/^# Time: ((?:[0-9: ]{15})|(?:[-0-9: T]{19}))/; my $slow_log_uh_line = qr/# User\@Host: ([^\[]+|\[[^[]+\]).*?@ (\S*) \[(.*)\]\s*(?:Id:\s*(\d+))?/; my $slow_log_hd_line = qr{ ^(?: T[cC][pP]\s[pP]ort:\s+\d+ # case differs on windows/unix | [/A-Z].*mysqld,\sVersion.*(?:started\swith:|embedded\slibrary) | Time\s+Id\s+Command ).*\n }xm; sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(next_event tell); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($next_event, $tell) = @args{@required_args}; my $pending = $self->{pending}; local $INPUT_RECORD_SEPARATOR = ";\n#"; my $trimlen = length($INPUT_RECORD_SEPARATOR); my $pos_in_log = $tell->(); my $stmt; EVENT: while ( defined($stmt = shift @$pending) or defined($stmt = $next_event->()) ) { my @properties = ('cmd', 'Query', 'pos_in_log', $pos_in_log); $self->{last_event_offset} = $pos_in_log; $pos_in_log = $tell->(); if ( $stmt =~ s/$slow_log_hd_line//go ){ # Throw away header lines in log my @chunks = split(/$INPUT_RECORD_SEPARATOR/o, $stmt); if ( @chunks > 1 ) { PTDEBUG && _d("Found multiple chunks"); $stmt = shift @chunks; unshift @$pending, @chunks; } } $stmt = '#' . $stmt unless $stmt =~ m/\A#/; $stmt =~ s/;\n#?\Z//; my ($got_ts, $got_uh, $got_ac, $got_db, $got_set, $got_embed); my $pos = 0; my $len = length($stmt); my $found_arg = 0; LINE: while ( $stmt =~ m/^(.*)$/mg ) { # /g is important, requires scalar match. $pos = pos($stmt); # Be careful not to mess this up! my $line = $1; # Necessary for /g and pos() to work. 
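# Each line of the event is classified below: '# Time:' lines yield the
# timestamp, '# User@Host:' lines yield user/host/ip and thread id, other
# '#' meta lines contribute attribute name/value pairs, 'use' and 'SET'
# lines record session context, and the first line that is none of these
# marks the start of the query text, which runs to the end of the event.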
PTDEBUG && _d($line); if ($line =~ m/^(?:#|use |SET (?:last_insert_id|insert_id|timestamp))/o) { if ( !$got_ts && (my ( $time ) = $line =~ m/$slow_log_ts_line/o)) { PTDEBUG && _d("Got ts", $time); push @properties, 'ts', $time; ++$got_ts; if ( !$got_uh && ( my ( $user, $host, $ip, $thread_id ) = $line =~ m/$slow_log_uh_line/o ) ) { PTDEBUG && _d("Got user, host, ip", $user, $host, $ip); $host ||= $ip; # sometimes host is missing when using skip-name-resolve (LP #issue 1262456) push @properties, 'user', $user, 'host', $host, 'ip', $ip; if ( $thread_id ) { push @properties, 'Thread_id', $thread_id; } ++$got_uh; } } elsif ( !$got_uh && ( my ( $user, $host, $ip, $thread_id ) = $line =~ m/$slow_log_uh_line/o ) ) { PTDEBUG && _d("Got user, host, ip", $user, $host, $ip); $host ||= $ip; # sometimes host is missing when using skip-name-resolve (LP #issue 1262456) push @properties, 'user', $user, 'host', $host, 'ip', $ip; if ( $thread_id ) { push @properties, 'Thread_id', $thread_id; } ++$got_uh; } elsif (!$got_ac && $line =~ m/^# (?:administrator command:.*)$/) { PTDEBUG && _d("Got admin command"); $line =~ s/^#\s+//; # string leading "# ". push @properties, 'cmd', 'Admin', 'arg', $line; push @properties, 'bytes', length($properties[-1]); ++$found_arg; ++$got_ac; } elsif ( $line =~ m/^# +[A-Z][A-Za-z_]+: \S+/ ) { # Make the test cheap! PTDEBUG && _d("Got some line with properties"); if ( $line =~ m/Schema:\s+\w+: / ) { PTDEBUG && _d('Removing empty Schema attrib'); $line =~ s/Schema:\s+//; PTDEBUG && _d($line); } my @temp = $line =~ m/(\w+):\s+(\S+|\Z)/g; push @properties, @temp; } elsif ( !$got_db && (my ( $db ) = $line =~ m/^use ([^;]+)/ ) ) { PTDEBUG && _d("Got a default database:", $db); push @properties, 'db', $db; ++$got_db; } elsif (!$got_set && (my ($setting) = $line =~ m/^SET\s+([^;]*)/)) { PTDEBUG && _d("Got some setting:", $setting); push @properties, split(/,|\s*=\s*/, $setting); ++$got_set; } if ( !$found_arg && $pos == $len ) { PTDEBUG && _d("Did not find arg, looking for special cases"); local $INPUT_RECORD_SEPARATOR = ";\n"; # get next line if ( defined(my $l = $next_event->()) ) { if ( $l =~ /^\s*[A-Z][a-z_]+: / ) { PTDEBUG && _d("Found NULL query before", $l); local $INPUT_RECORD_SEPARATOR = ";\n#"; my $rest_of_event = $next_event->(); push @{$self->{pending}}, $l . $rest_of_event; push @properties, 'cmd', 'Query', 'arg', '/* No query */'; push @properties, 'bytes', 0; $found_arg++; } else { chomp $l; $l =~ s/^\s+//; PTDEBUG && _d("Found admin statement", $l); push @properties, 'cmd', 'Admin', 'arg', $l; push @properties, 'bytes', length($properties[-1]); $found_arg++; } } else { PTDEBUG && _d("I can't figure out what to do with this line"); next EVENT; } } } else { PTDEBUG && _d("Got the query/arg line"); my $arg = substr($stmt, $pos - length($line)); push @properties, 'arg', $arg, 'bytes', length($arg); if ( $args{misc} && $args{misc}->{embed} && ( my ($e) = $arg =~ m/($args{misc}->{embed})/) ) { push @properties, $e =~ m/$args{misc}->{capture}/g; } last LINE; } } PTDEBUG && _d('Properties of event:', Dumper(\@properties)); my $event = { @properties }; if ( !$event->{arg} ) { PTDEBUG && _d('Partial event, no arg'); } else { $self->{last_event_offset} = undef; if ( $args{stats} ) { $args{stats}->{events_read}++; $args{stats}->{events_parsed}++; } } return $event; } # EVENT @$pending = (); $args{oktorun}->(0) if $args{oktorun}; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End SlowLogParser package # ########################################################################### # ########################################################################### # TableParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TableParser.pm # t/lib/TableParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package TableParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . 
Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . 
$where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . $q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 
1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . $q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # Transformers package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Transformers.pm # t/lib/Transformers.t # See https://launchpad.net/percona-toolkit for more information. 
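# ---------------------------------------------------------------------------
# A minimal usage sketch for the TableParser package defined above, kept in
# comments so the embedded code is unchanged. The $dbh handle and the
# database/table names are placeholders; the methods are those shown above.
#
#   my $tp  = TableParser->new(Quoter => Quoter->new());
#   my $ddl = $tp->get_create_table($dbh, 'sakila', 'actor'); # SHOW CREATE TABLE
#   my $tbl = $tp->parse($ddl);
#   # $tbl->{cols} lists the columns, $tbl->{keys} the parsed indexes, and
#   # $tbl->{clustered_key} names the InnoDB clustered index, if any.
# ---------------------------------------------------------------------------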
# ########################################################################### { package Transformers; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timegm timelocal); use Digest::MD5 qw(md5_hex); use B qw(); BEGIN { require Exporter; our @ISA = qw(Exporter); our %EXPORT_TAGS = (); our @EXPORT = (); our @EXPORT_OK = qw( micro_t percentage_of secs_to_time time_to_secs shorten ts parse_timestamp unix_timestamp any_unix_timestamp make_checksum crc32 encode_json ); } our $mysql_ts = qr/(\d\d)(\d\d)(\d\d) +(\d+):(\d+):(\d+)(\.\d+)?/; our $proper_ts = qr/(\d\d\d\d)-(\d\d)-(\d\d)[T ](\d\d):(\d\d):(\d\d)(\.\d+)?/; our $n_ts = qr/(\d{1,5})([shmd]?)/; # Limit \d{1,5} because \d{6} looks sub micro_t { my ( $t, %args ) = @_; my $p_ms = defined $args{p_ms} ? $args{p_ms} : 0; # precision for ms vals my $p_s = defined $args{p_s} ? $args{p_s} : 0; # precision for s vals my $f; $t = 0 if $t < 0; $t = sprintf('%.17f', $t) if $t =~ /e/; $t =~ s/\.(\d{1,6})\d*/\.$1/; if ($t > 0 && $t <= 0.000999) { $f = ($t * 1000000) . 'us'; } elsif ($t >= 0.001000 && $t <= 0.999999) { $f = sprintf("%.${p_ms}f", $t * 1000); $f = ($f * 1) . 'ms'; # * 1 to remove insignificant zeros } elsif ($t >= 1) { $f = sprintf("%.${p_s}f", $t); $f = ($f * 1) . 's'; # * 1 to remove insignificant zeros } else { $f = 0; # $t should = 0 at this point } return $f; } sub percentage_of { my ( $is, $of, %args ) = @_; my $p = $args{p} || 0; # float precision my $fmt = $p ? "%.${p}f" : "%d"; return sprintf $fmt, ($is * 100) / ($of ||= 1); } sub secs_to_time { my ( $secs, $fmt ) = @_; $secs ||= 0; return '00:00' unless $secs; $fmt ||= $secs >= 86_400 ? 'd' : $secs >= 3_600 ? 'h' : 'm'; return $fmt eq 'd' ? sprintf( "%d+%02d:%02d:%02d", int($secs / 86_400), int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : $fmt eq 'h' ? sprintf( "%02d:%02d:%02d", int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : sprintf( "%02d:%02d", int(($secs % 3_600) / 60), $secs % 60); } sub time_to_secs { my ( $val, $default_suffix ) = @_; die "I need a val argument" unless defined $val; my $t = 0; my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; $suffix = $suffix || $default_suffix || 's'; if ( $suffix =~ m/[smhd]/ ) { $t = $suffix eq 's' ? $num * 1 # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $t *= -1 if $prefix && $prefix eq '-'; } else { die "Invalid suffix for $val: $suffix"; } return $t; } sub shorten { my ( $num, %args ) = @_; my $p = defined $args{p} ? $args{p} : 2; # float precision my $d = defined $args{d} ? $args{d} : 1_024; # divisor my $n = 0; my @units = ('', qw(k M G T P E Z Y)); while ( $num >= $d && $n < @units - 1 ) { $num /= $d; ++$n; } return sprintf( $num =~ m/\./ || $n ? '%1$.'.$p.'f%2$s' : '%1$d', $num, $units[$n]); } sub ts { my ( $time, $gmt ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = $gmt ? gmtime($time) : localtime($time); $mon += 1; $year += 1900; my $val = sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); if ( my ($us) = $time =~ m/(\.\d+)$/ ) { $us = sprintf("%.6f", $us); $us =~ s/^0\././; $val .= $us; } return $val; } sub parse_timestamp { my ( $val ) = @_; if ( my($y, $m, $d, $h, $i, $s, $f) = $val =~ m/^$mysql_ts$/ ) { return sprintf "%d-%02d-%02d %02d:%02d:" . (defined $f ? '%09.6f' : '%02d'), $y + 2000, $m, $d, $h, $i, (defined $f ? 
$s + $f : $s); } elsif ( $val =~ m/^$proper_ts$/ ) { return $val; } return $val; } sub unix_timestamp { my ( $val, $gmt ) = @_; if ( my($y, $m, $d, $h, $i, $s, $us) = $val =~ m/^$proper_ts$/ ) { $val = $gmt ? timegm($s, $i, $h, $d, $m - 1, $y) : timelocal($s, $i, $h, $d, $m - 1, $y); if ( defined $us ) { $us = sprintf('%.6f', $us); $us =~ s/^0\././; $val .= $us; } } return $val; } sub any_unix_timestamp { my ( $val, $callback ) = @_; if ( my ($n, $suffix) = $val =~ m/^$n_ts$/ ) { $n = $suffix eq 's' ? $n # Seconds : $suffix eq 'm' ? $n * 60 # Minutes : $suffix eq 'h' ? $n * 3600 # Hours : $suffix eq 'd' ? $n * 86400 # Days : $n; # default: Seconds PTDEBUG && _d('ts is now - N[shmd]:', $n); return time - $n; } elsif ( $val =~ m/^\d{9,}/ ) { PTDEBUG && _d('ts is already a unix timestamp'); return $val; } elsif ( my ($ymd, $hms) = $val =~ m/^(\d{6})(?:\s+(\d+:\d+:\d+))?/ ) { PTDEBUG && _d('ts is MySQL slow log timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp(parse_timestamp($val)); } elsif ( ($ymd, $hms) = $val =~ m/^(\d{4}-\d\d-\d\d)(?:[T ](\d+:\d+:\d+))?/) { PTDEBUG && _d('ts is properly formatted timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp($val); } else { PTDEBUG && _d('ts is MySQL expression'); return $callback->($val) if $callback && ref $callback eq 'CODE'; } PTDEBUG && _d('Unknown ts type:', $val); return; } sub make_checksum { my ( $val ) = @_; my $checksum = uc substr(md5_hex($val), -16); PTDEBUG && _d($checksum, 'checksum for', $val); return $checksum; } sub crc32 { my ( $string ) = @_; return unless $string; my $poly = 0xEDB88320; my $crc = 0xFFFFFFFF; foreach my $char ( split(//, $string) ) { my $comp = ($crc ^ ord($char)) & 0xFF; for ( 1 .. 8 ) { $comp = $comp & 1 ? $poly ^ ($comp >> 1) : $comp >> 1; } $crc = (($crc >> 8) & 0x00FFFFFF) ^ $comp; } return $crc ^ 0xFFFFFFFF; } my $got_json = eval { require JSON }; sub encode_json { return JSON::encode_json(@_) if $got_json; my ( $data ) = @_; return (object_to_json($data) || ''); } sub object_to_json { my ($obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return hash_to_json($obj); } elsif($type eq 'ARRAY'){ return array_to_json($obj); } else { return value_to_json($obj); } } sub hash_to_json { my ($obj) = @_; my @res; for my $k ( sort { $a cmp $b } keys %$obj ) { push @res, string_to_json( $k ) . ":" . ( object_to_json( $obj->{$k} ) || value_to_json( $obj->{$k} ) ); } return '{' . ( @res ? join( ",", @res ) : '' ) . '}'; } sub array_to_json { my ($obj) = @_; my @res; for my $v (@$obj) { push @res, object_to_json($v) || value_to_json($v); } return '[' . ( @res ? join( ",", @res ) : '' ) . ']'; } sub value_to_json { my ($value) = @_; return 'null' if(!defined $value); my $b_obj = B::svref_2object(\$value); # for round trip problem my $flags = $b_obj->FLAGS; return $value # as is if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? my $type = ref($value); if( !$type ) { return string_to_json($value); } else { return 'null'; } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g; $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; utf8::upgrade($arg); utf8::encode($arg); return '"' . $arg . '"'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Transformers package # ########################################################################### # ########################################################################### # Schema package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Schema.pm # t/lib/Schema.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Schema; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, schema => {}, # keyed on db->tbl }; return bless $self, $class; } sub get_schema { my ( $self ) = @_; return $self->{schema}; } sub get_table { my ( $self, $db_name, $tbl_name ) = @_; if ( exists $self->{schema}->{$db_name} && exists $self->{schema}->{$db_name}->{$tbl_name} ) { return $self->{schema}->{$db_name}->{$tbl_name}; } return; } sub add_schema_object { my ( $self, $schema_object ) = @_; die "I need a schema_object argument" unless $schema_object; my ($db, $tbl) = @{$schema_object}{qw(db tbl)}; if ( !$db || !$tbl ) { warn "No database or table for schema object"; return; } my $tbl_struct = $schema_object->{tbl_struct}; if ( !$tbl_struct ) { warn "No table structure for $db.$tbl"; return; } $self->{schema}->{lc $db}->{lc $tbl} = $schema_object; return; } sub find_column { my ( $self, %args ) = @_; my $ignore = $args{ignore}; my $schema = $self->{schema}; my ($col, $tbl, $db); if ( my $col_name = $args{col_name} ) { ($col, $tbl, $db) = reverse map { s/`//g; $_ } split /[.]/, $col_name; PTDEBUG && _d('Column', $col_name, 'has db', $db, 'tbl', $tbl, 'col', $col); } else { ($col, $tbl, $db) = @args{qw(col tbl db)}; } $db = lc($db || ''); $tbl = lc($tbl || ''); $col = lc($col || ''); if ( !$col ) { PTDEBUG && _d('No column specified or parsed'); return; } PTDEBUG && _d('Finding column', $col, 'in', $db, $tbl); if ( $db && !$schema->{$db} ) { PTDEBUG && _d('Database', $db, 'does not exist'); return; } if ( $db && $tbl && !$schema->{$db}->{$tbl} ) { PTDEBUG && _d('Table', $tbl, 'does not exist in database', $db); return; } my @tbls; my @search_dbs = $db ? ($db) : keys %$schema; DATABASE: foreach my $search_db ( @search_dbs ) { my @search_tbls = $tbl ? 
($tbl) : keys %{$schema->{$search_db}}; TABLE: foreach my $search_tbl ( @search_tbls ) { next DATABASE unless exists $schema->{$search_db}->{$search_tbl}; if ( $ignore && grep { $_->{db} eq $search_db && $_->{tbl} eq $search_tbl } @$ignore ) { PTDEBUG && _d('Ignoring', $search_db, $search_tbl, $col); next TABLE; } my $tbl = $schema->{$search_db}->{$search_tbl}; if ( $tbl->{tbl_struct}->{is_col}->{$col} ) { PTDEBUG && _d('Column', $col, 'exists in', $tbl->{db}, $tbl->{tbl}); push @tbls, $tbl; } } } return \@tbls; } sub find_table { my ( $self, %args ) = @_; my $ignore = $args{ignore}; my $schema = $self->{schema}; my ($tbl, $db); if ( my $tbl_name = $args{tbl_name} ) { ($tbl, $db) = reverse map { s/`//g; $_ } split /[.]/, $tbl_name; PTDEBUG && _d('Table', $tbl_name, 'has db', $db, 'tbl', $tbl); } else { ($tbl, $db) = @args{qw(tbl db)}; } $db = lc($db || ''); $tbl = lc($tbl || ''); if ( !$tbl ) { PTDEBUG && _d('No table specified or parsed'); return; } PTDEBUG && _d('Finding table', $tbl, 'in', $db); if ( $db && !$schema->{$db} ) { PTDEBUG && _d('Database', $db, 'does not exist'); return; } if ( $db && $tbl && !$schema->{$db}->{$tbl} ) { PTDEBUG && _d('Table', $tbl, 'does not exist in database', $db); return; } my @dbs; my @search_dbs = $db ? ($db) : keys %$schema; DATABASE: foreach my $search_db ( @search_dbs ) { if ( $ignore && grep { $_->{db} eq $search_db } @$ignore ) { PTDEBUG && _d('Ignoring', $search_db); next DATABASE; } if ( exists $schema->{$search_db}->{$tbl} ) { PTDEBUG && _d('Table', $tbl, 'exists in', $search_db); push @dbs, $search_db; } } return \@dbs; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Schema package # ########################################################################### # ########################################################################### # SchemaIterator package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/SchemaIterator.pm # t/lib/SchemaIterator.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package SchemaIterator; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; my $open_comment = qr{/\*!\d{5} }; my $tbl_name = qr{ CREATE\s+ (?:TEMPORARY\s+)? TABLE\s+ (?:IF NOT EXISTS\s+)? 
([^\(]+) }x; sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser TableParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($file_itr, $dbh) = @args{qw(file_itr dbh)}; die "I need either a dbh or file_itr argument" if (!$dbh && !$file_itr) || ($dbh && $file_itr); my %resume; if ( my $table = $args{resume} ) { PTDEBUG && _d('Will resume from or after', $table); my ($db, $tbl) = $args{Quoter}->split_unquote($table); die "Resume table must be database-qualified: $table" unless $db && $tbl; $resume{db} = $db; $resume{tbl} = $tbl; } my $self = { %args, resume => \%resume, filters => _make_filters(%args), }; return bless $self, $class; } sub _make_filters { my ( %args ) = @_; my @required_args = qw(OptionParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($o, $q) = @args{@required_args}; my %filters; my @simple_filters = qw( databases tables engines ignore-databases ignore-tables ignore-engines); FILTER: foreach my $filter ( @simple_filters ) { if ( $o->has($filter) ) { my $objs = $o->get($filter); next FILTER unless $objs && scalar keys %$objs; my $is_table = $filter =~ m/table/ ? 1 : 0; foreach my $obj ( keys %$objs ) { die "Undefined value for --$filter" unless $obj; $obj = lc $obj; if ( $is_table ) { my ($db, $tbl) = $q->split_unquote($obj); $db ||= '*'; PTDEBUG && _d('Filter', $filter, 'value:', $db, $tbl); $filters{$filter}->{$db}->{$tbl} = 1; } else { # database PTDEBUG && _d('Filter', $filter, 'value:', $obj); $filters{$filter}->{$obj} = 1; } } } } my @regex_filters = qw( databases-regex tables-regex ignore-databases-regex ignore-tables-regex); REGEX_FILTER: foreach my $filter ( @regex_filters ) { if ( $o->has($filter) ) { my $pat = $o->get($filter); next REGEX_FILTER unless $pat; $filters{$filter} = qr/$pat/; PTDEBUG && _d('Filter', $filter, 'value:', $filters{$filter}); } } PTDEBUG && _d('Schema object filters:', Dumper(\%filters)); return \%filters; } sub next { my ( $self ) = @_; if ( !$self->{initialized} ) { $self->{initialized} = 1; if ( $self->{resume}->{tbl} ) { if ( !$self->table_is_allowed(@{$self->{resume}}{qw(db tbl)}) ) { PTDEBUG && _d('Will resume after', join('.', @{$self->{resume}}{qw(db tbl)})); $self->{resume}->{after}->{tbl} = 1; } if ( !$self->database_is_allowed($self->{resume}->{db}) ) { PTDEBUG && _d('Will resume after', $self->{resume}->{db}); $self->{resume}->{after}->{db} = 1; } } } my $schema_obj; if ( $self->{file_itr} ) { $schema_obj= $self->_iterate_files(); } else { # dbh $schema_obj= $self->_iterate_dbh(); } if ( $schema_obj ) { if ( my $schema = $self->{Schema} ) { $schema->add_schema_object($schema_obj); } PTDEBUG && _d('Next schema object:', $schema_obj->{db}, $schema_obj->{tbl}); } return $schema_obj; } sub _iterate_files { my ( $self ) = @_; if ( !$self->{fh} ) { my ($fh, $file) = $self->{file_itr}->(); if ( !$fh ) { PTDEBUG && _d('No more files to iterate'); return; } $self->{fh} = $fh; $self->{file} = $file; } my $fh = $self->{fh}; PTDEBUG && _d('Getting next schema object from', $self->{file}); local $INPUT_RECORD_SEPARATOR = ''; CHUNK: while (defined(my $chunk = <$fh>)) { if ($chunk =~ m/Database: (\S+)/) { my $db = $1; # XXX $db =~ s/^`//; # strip leading ` $db =~ s/`$//; # and trailing ` if ( $self->database_is_allowed($db) && $self->_resume_from_database($db) ) { $self->{db} = $db; } } elsif ($self->{db} && $chunk =~ m/CREATE TABLE/) { if ($chunk =~ m/DROP VIEW IF EXISTS/) { PTDEBUG && _d('Table is a VIEW, 
skipping'); next CHUNK; } my ($tbl) = $chunk =~ m/$tbl_name/; $tbl =~ s/^\s*`//; $tbl =~ s/`\s*$//; if ( $self->_resume_from_table($tbl) && $self->table_is_allowed($self->{db}, $tbl) ) { my ($ddl) = $chunk =~ m/^(?:$open_comment)?(CREATE TABLE.+?;)$/ms; if ( !$ddl ) { warn "Failed to parse CREATE TABLE from\n" . $chunk; next CHUNK; } $ddl =~ s/ \*\/;\Z/;/; # remove end of version comment my $tbl_struct = $self->{TableParser}->parse($ddl); if ( $self->engine_is_allowed($tbl_struct->{engine}) ) { return { db => $self->{db}, tbl => $tbl, name => $self->{Quoter}->quote($self->{db}, $tbl), ddl => $ddl, tbl_struct => $tbl_struct, }; } } } } # CHUNK PTDEBUG && _d('No more schema objects in', $self->{file}); close $self->{fh}; $self->{fh} = undef; return $self->_iterate_files(); } sub _iterate_dbh { my ( $self ) = @_; my $q = $self->{Quoter}; my $tp = $self->{TableParser}; my $dbh = $self->{dbh}; PTDEBUG && _d('Getting next schema object from dbh', $dbh); if ( !defined $self->{dbs} ) { my $sql = 'SHOW DATABASES'; PTDEBUG && _d($sql); my @dbs = grep { $self->_resume_from_database($_) && $self->database_is_allowed($_) } @{$dbh->selectcol_arrayref($sql)}; PTDEBUG && _d('Found', scalar @dbs, 'databases'); $self->{dbs} = \@dbs; } DATABASE: while ( $self->{db} || defined(my $db = shift @{$self->{dbs}}) ) { if ( !$self->{db} ) { PTDEBUG && _d('Next database:', $db); $self->{db} = $db; } if ( !$self->{tbls} ) { my $sql = 'SHOW /*!50002 FULL*/ TABLES FROM ' . $q->quote($self->{db}); PTDEBUG && _d($sql); my @tbls = map { $_->[0]; # (tbl, type) } grep { my ($tbl, $type) = @$_; (!$type || ($type ne 'VIEW')) && $self->_resume_from_table($tbl) && $self->table_is_allowed($self->{db}, $tbl); } eval { @{$dbh->selectall_arrayref($sql)}; }; if ($EVAL_ERROR) { warn "Skipping $self->{db}..."; $self->{db} = undef; next; } PTDEBUG && _d('Found', scalar @tbls, 'tables in database',$self->{db}); $self->{tbls} = \@tbls; } TABLE: while ( my $tbl = shift @{$self->{tbls}} ) { my $ddl = eval { $tp->get_create_table($dbh, $self->{db}, $tbl) }; if ( my $e = $EVAL_ERROR ) { my $table_name = "$self->{db}.$tbl"; if ( $e =~ /\QTable '$table_name' doesn't exist/ ) { PTDEBUG && _d("$table_name no longer exists"); } else { warn "Skipping $table_name because SHOW CREATE TABLE failed: $e"; } next TABLE; } my $tbl_struct = $tp->parse($ddl); if ( $self->engine_is_allowed($tbl_struct->{engine}) ) { return { db => $self->{db}, tbl => $tbl, name => $q->quote($self->{db}, $tbl), ddl => $ddl, tbl_struct => $tbl_struct, }; } } PTDEBUG && _d('No more tables in database', $self->{db}); $self->{db} = undef; $self->{tbls} = undef; } # DATABASE PTDEBUG && _d('No more databases'); return; } sub database_is_allowed { my ( $self, $db ) = @_; die "I need a db argument" unless $db; $db = lc $db; my $filter = $self->{filters}; if ( $db =~ m/^(information_schema|performance_schema|lost\+found|percona_schema)$/ ) { PTDEBUG && _d('Database', $db, 'is a system database, ignoring'); return 0; } if ( $self->{filters}->{'ignore-databases'}->{$db} ) { PTDEBUG && _d('Database', $db, 'is in --ignore-databases list'); return 0; } if ( $filter->{'ignore-databases-regex'} && $db =~ $filter->{'ignore-databases-regex'} ) { PTDEBUG && _d('Database', $db, 'matches --ignore-databases-regex'); return 0; } if ( $filter->{'databases'} && !$filter->{'databases'}->{$db} ) { PTDEBUG && _d('Database', $db, 'is not in --databases list, ignoring'); return 0; } if ( $filter->{'databases-regex'} && $db !~ $filter->{'databases-regex'} ) { PTDEBUG && _d('Database', $db, 'does not match 
--databases-regex, ignoring'); return 0; } return 1; } sub table_is_allowed { my ( $self, $db, $tbl ) = @_; die "I need a db argument" unless $db; die "I need a tbl argument" unless $tbl; $db = lc $db; $tbl = lc $tbl; my $filter = $self->{filters}; return 0 if $db eq 'mysql' && $tbl =~ m/^(?: general_log |gtid_executed |innodb_index_stats |innodb_table_stats |slave_master_info |slave_relay_log_info |slave_worker_info |slow_log )$/x; if ( $filter->{'ignore-tables'}->{'*'}->{$tbl} || $filter->{'ignore-tables'}->{$db}->{$tbl}) { PTDEBUG && _d('Table', $tbl, 'is in --ignore-tables list'); return 0; } if ( $filter->{'ignore-tables-regex'} && $tbl =~ $filter->{'ignore-tables-regex'} ) { PTDEBUG && _d('Table', $tbl, 'matches --ignore-tables-regex'); return 0; } if ( $filter->{'tables'} && (!$filter->{'tables'}->{'*'}->{$tbl} && !$filter->{'tables'}->{$db}->{$tbl}) ) { PTDEBUG && _d('Table', $tbl, 'is not in --tables list, ignoring'); return 0; } if ( $filter->{'tables-regex'} && $tbl !~ $filter->{'tables-regex'} ) { PTDEBUG && _d('Table', $tbl, 'does not match --tables-regex, ignoring'); return 0; } if ( $filter->{'tables'} && $filter->{'tables'}->{$tbl} && $filter->{'tables'}->{$tbl} ne '*' && $filter->{'tables'}->{$tbl} ne $db ) { PTDEBUG && _d('Table', $tbl, 'is only allowed in database', $filter->{'tables'}->{$tbl}); return 0; } return 1; } sub engine_is_allowed { my ( $self, $engine ) = @_; if ( !$engine ) { PTDEBUG && _d('No engine specified; allowing the table'); return 1; } $engine = lc $engine; my $filter = $self->{filters}; if ( $filter->{'ignore-engines'}->{$engine} ) { PTDEBUG && _d('Engine', $engine, 'is in --ignore-databases list'); return 0; } if ( $filter->{'engines'} && !$filter->{'engines'}->{$engine} ) { PTDEBUG && _d('Engine', $engine, 'is not in --engines list, ignoring'); return 0; } return 1; } sub _resume_from_database { my ($self, $db) = @_; return 1 unless $self->{resume}->{db}; if ( $db eq $self->{resume}->{db} ) { if ( !$self->{resume}->{after}->{db} ) { PTDEBUG && _d('Resuming from db', $db); delete $self->{resume}->{db}; return 1; } else { PTDEBUG && _d('Resuming after db', $db); delete $self->{resume}->{db}; delete $self->{resume}->{tbl}; } } return 0; } sub _resume_from_table { my ($self, $tbl) = @_; return 1 unless $self->{resume}->{tbl}; if ( $tbl eq $self->{resume}->{tbl} ) { if ( !$self->{resume}->{after}->{tbl} ) { PTDEBUG && _d('Resuming from table', $tbl); delete $self->{resume}->{tbl}; return 1; } else { PTDEBUG && _d('Resuming after table', $tbl); delete $self->{resume}->{tbl}; } } return 0; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End SchemaIterator package # ########################################################################### # ########################################################################### # FileIterator package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/FileIterator.pm # t/lib/FileIterator.t # See https://launchpad.net/percona-toolkit for more information. 
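# A sketch of how the SchemaIterator package above and the FileIterator
# package below are composed inside this tool; $o, $q and $tp stand for the
# OptionParser, Quoter and TableParser objects constructed elsewhere:
#
#   my $file_itr = FileIterator->new()->get_file_itr(@ARGV);  # yields ($fh, $filename, $size)
#   my $iter     = SchemaIterator->new(
#       file_itr => $file_itr, OptionParser => $o, Quoter => $q, TableParser => $tp);
#   while ( my $obj = $iter->next() ) {
#       # $obj carries db, tbl, name, ddl and tbl_struct keys
#   }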
# ########################################################################### { package FileIterator; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my $self = { %args, }; return bless $self, $class; } sub get_file_itr { my ( $self, @filenames ) = @_; my @final_filenames; FILENAME: foreach my $fn ( @filenames ) { if ( !defined $fn ) { warn "Skipping undefined filename"; next FILENAME; } if ( $fn ne '-' ) { if ( !-e $fn || !-r $fn ) { warn "$fn does not exist or is not readable"; next FILENAME; } } push @final_filenames, $fn; } if ( !@filenames ) { push @final_filenames, '-'; PTDEBUG && _d('Auto-adding "-" to the list of filenames'); } PTDEBUG && _d('Final filenames:', @final_filenames); return sub { while ( @final_filenames ) { my $fn = shift @final_filenames; PTDEBUG && _d('Filename:', $fn); if ( $fn eq '-' ) { # Magical STDIN filename. return (*STDIN, undef, undef); } open my $fh, '<', $fn or warn "Cannot open $fn: $OS_ERROR"; if ( $fh ) { return ( $fh, $fn, -s $fn ); } } return (); # Avoids $f being set to 0 in list context. }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End FileIterator package # ########################################################################### # ########################################################################### # ExplainAnalyzer package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/ExplainAnalyzer.pm # t/lib/ExplainAnalyzer.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package ExplainAnalyzer; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(QueryRewriter QueryParser) ) { die "I need a $arg argument" unless defined $args{$arg}; } my $self = { %args, }; return bless $self, $class; } sub explain_query { my ( $self, %args ) = @_; foreach my $arg ( qw(dbh query) ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($query, $dbh) = @args{qw(query dbh)}; $query = $self->{QueryRewriter}->convert_to_select($query); if ( $query !~ m/^\s*select/i ) { PTDEBUG && _d("Cannot EXPLAIN non-SELECT query:", (length $query <= 100 ? $query : substr($query, 0, 100) . "...")); return; } my $sql = "EXPLAIN $query"; PTDEBUG && _d($dbh, $sql); my $explain = $dbh->selectall_arrayref($sql, { Slice => {} }); PTDEBUG && _d("Result of EXPLAIN:", Dumper($explain)); return $explain; } sub normalize { my ( $self, $explain ) = @_; my @result; # Don't modify the input. foreach my $row ( @$explain ) { $row = { %$row }; # Make a copy -- don't modify the input. foreach my $col ( qw(key possible_keys key_len ref) ) { $row->{$col} = [ split(/,/, $row->{$col} || '') ]; } $row->{Extra} = { map { my $var = $_; if ( my ($key, $vals) = $var =~ m/(Using union)\(([^)]+)\)/ ) { $key => [ split(/,/, $vals) ]; } else { $var => 1; } } split(/; /, $row->{Extra} || '') # Split on semicolons. 
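# Each Extra flag becomes a hash key mapped to 1, except 'Using union(...)',
# whose parenthesized index list is split into an array reference.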
}; push @result, $row; } return \@result; } sub get_alternate_indexes { my ( $self, $keys, $possible_keys ) = @_; my %used = map { $_ => 1 } @$keys; return [ grep { !$used{$_} } @$possible_keys ]; } sub get_index_usage { my ( $self, %args ) = @_; foreach my $arg ( qw(query explain) ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($query, $explain) = @args{qw(query explain)}; my @result; my $lookup = $self->{QueryParser}->get_aliases($query); foreach my $row ( @$explain ) { next if !defined $row->{table} || $row->{table} =~ m/^<(derived|union)\d/; my $table = $lookup->{TABLE}->{$row->{table}} || $row->{table}; my $db = $lookup->{DATABASE}->{$table} || $args{db}; push @result, { db => $db, tbl => $table, idx => $row->{key}, alt => $self->get_alternate_indexes( $row->{key}, $row->{possible_keys}), }; } PTDEBUG && _d("Index usage for", (length $query <= 100 ? $query : substr($query, 0, 100) . "..."), ":", Dumper(\@result)); return \@result; } sub get_usage_for { my ( $self, $checksum, $db ) = @_; die "I need a checksum and db" unless defined $checksum && defined $db; my $usage; if ( exists $self->{usage}->{$db} # Don't auto-vivify && exists $self->{usage}->{$db}->{$checksum} ) { $usage = $self->{usage}->{$db}->{$checksum}; } PTDEBUG && _d("Usage for", (length $checksum <= 100 ? $checksum : substr($checksum, 0, 100) . "..."), "on", $db, ":", Dumper($usage)); return $usage; } sub save_usage_for { my ( $self, $checksum, $db, $usage ) = @_; die "I need a checksum and db" unless defined $checksum && defined $db; $self->{usage}->{$db}->{$checksum} = $usage; } sub fingerprint { my ( $self, %args ) = @_; my @required_args = qw(explain); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($explain) = @args{@required_args}; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End ExplainAnalyzer package # ########################################################################### # ########################################################################### # IndexUsage package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/IndexUsage.pm # t/lib/IndexUsage.t # See https://launchpad.net/percona-toolkit for more information. 
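# Rough flow of the IndexUsage package below (a sketch; 'sakila'/'film' are
# hypothetical names, $keys is the index hashref that TableParser builds, and
# $index_usage is the arrayref returned by ExplainAnalyzer::get_index_usage()):
#
#   my $iu = IndexUsage->new();
#   $iu->add_indexes(db => 'sakila', tbl => 'film', indexes => $keys);
#   $iu->add_table_usage('sakila', 'film');    # bump the table's usage count
#   $iu->add_index_usage(usage => $index_usage, query_id => $query_id);
#   $iu->find_unused_indexes(sub {
#       my ($u) = @_;
#       printf "%s.%s: %d unused indexes\n", $u->{db}, $u->{tbl}, scalar @{$u->{idx}};
#   });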
# ########################################################################### { package IndexUsage; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my $self = { %args, tables_for => {}, # Keyed off db indexes_for => {}, # Keyed off db->tbl queries => {}, # Keyed off query id index_usage => {}, # Keyed off query id->db->tbl alt_index_usage => {}, # Keyed off query id->db->tbl->index }; return bless $self, $class; } sub add_indexes { my ( $self, %args ) = @_; my @required_args = qw(db tbl indexes); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($db, $tbl, $indexes) = @args{@required_args}; $self->{tables_for}->{$db}->{$tbl} = 0; # usage cnt, zero until used $self->{indexes_for}->{$db}->{$tbl} = $indexes; foreach my $index ( keys %$indexes ) { $indexes->{$index}->{cnt} = 0; } return; } sub add_query { my ( $self, %args ) = @_; my @required_args = qw(query_id fingerprint sample); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($query_id, $fingerprint, $sample) = @args{@required_args}; $self->{queries}->{$query_id} = { fingerprint => $fingerprint, sample => $sample, }; return; } sub add_table_usage { my ( $self, $db, $tbl ) = @_; die "I need a db and table" unless defined $db && defined $tbl; ++$self->{tables_for}->{$db}->{$tbl}; return; } sub add_index_usage { my ( $self, %args ) = @_; my @required_args = qw(usage); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($usage) = @args{@required_args}; foreach my $access ( @$usage ) { my ($db, $tbl, $idx, $alt) = @{$access}{qw(db tbl idx alt)}; foreach my $index ( @$idx ) { $self->{indexes_for}->{$db}->{$tbl}->{$index}->{cnt}++; if ( my $query_id = $args{query_id} ) { $self->{index_usage}->{$query_id}->{$db}->{$tbl}->{$index}++; foreach my $alt_index ( @$alt ) { $self->{alt_index_usage}->{$query_id}->{$db}->{$tbl}->{$index}->{$alt_index}++; } } } # INDEX } # ACCESS return; } sub find_unused_indexes { my ( $self, $callback ) = @_; die "I need a callback" unless $callback; PTDEBUG && _d("Finding unused indexes"); DATABASE: foreach my $db ( sort keys %{$self->{indexes_for}} ) { TABLE: foreach my $tbl ( sort keys %{$self->{indexes_for}->{$db}} ) { next TABLE unless $self->{tables_for}->{$db}->{$tbl}; # Skip unused my $indexes = $self->{indexes_for}->{$db}->{$tbl}; my @unused_indexes; foreach my $index ( sort keys %$indexes ) { if ( !$indexes->{$index}->{cnt} ) { # count of times accessed/used push @unused_indexes, $indexes->{$index}; } } if ( @unused_indexes ) { $callback->( { db => $db, tbl => $tbl, idx => \@unused_indexes, } ); } } # TABLE } # DATABASE return; } sub save_results { my ( $self, %args ) = @_; my @required_args = qw(dbh db); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($dbh, $db) = @args{@required_args}; PTDEBUG && _d("Saving results to tables in database", $db); PTDEBUG && _d("Saving index data"); my $insert_index_sth = $dbh->prepare( "INSERT INTO `$db`.`indexes` (db, tbl, idx, cnt) VALUES (?, ?, ?, ?) " . 
"ON DUPLICATE KEY UPDATE cnt = cnt + ?"); foreach my $db ( keys %{$self->{indexes_for}} ) { foreach my $tbl ( keys %{$self->{indexes_for}->{$db}} ) { foreach my $index ( keys %{$self->{indexes_for}->{$db}->{$tbl}} ) { my $cnt = $self->{indexes_for}->{$db}->{$tbl}->{$index}->{cnt}; $insert_index_sth->execute($db, $tbl, $index, $cnt, $cnt); } } } PTDEBUG && _d("Saving table data"); my $insert_tbl_sth = $dbh->prepare( "INSERT INTO `$db`.`tables` (db, tbl, cnt) VALUES (?, ?, ?) " . "ON DUPLICATE KEY UPDATE cnt = cnt + ?"); foreach my $db ( keys %{$self->{tables_for}} ) { foreach my $tbl ( keys %{$self->{tables_for}->{$db}} ) { my $cnt = $self->{tables_for}->{$db}->{$tbl}; $insert_tbl_sth->execute($db, $tbl, $cnt, $cnt); } } PTDEBUG && _d("Save query data"); my $insert_query_sth = $dbh->prepare( "INSERT IGNORE INTO `$db`.`queries` (query_id, fingerprint, sample) " . " VALUES (CONV(?, 16, 10), ?, ?)"); foreach my $query_id ( keys %{$self->{queries}} ) { my $query = $self->{queries}->{$query_id}; $insert_query_sth->execute( $query_id, $query->{fingerprint}, $query->{sample}); } PTDEBUG && _d("Saving index usage data"); my $insert_index_usage_sth = $dbh->prepare( "INSERT INTO `$db`.`index_usage` (query_id, db, tbl, idx, cnt) " . "VALUES (CONV(?, 16, 10), ?, ?, ?, ?) " . "ON DUPLICATE KEY UPDATE cnt = cnt + ?"); foreach my $query_id ( keys %{$self->{index_usage}} ) { foreach my $db ( keys %{$self->{index_usage}->{$query_id}} ) { foreach my $tbl ( keys %{$self->{index_usage}->{$query_id}->{$db}} ) { my $indexes = $self->{index_usage}->{$query_id}->{$db}->{$tbl}; foreach my $index ( keys %$indexes ) { my $cnt = $indexes->{$index}; $insert_index_usage_sth->execute( $query_id, $db, $tbl, $index, $cnt, $cnt); } } } } PTDEBUG && _d("Saving alternate index usage data"); my $insert_index_alt_sth = $dbh->prepare( "INSERT INTO `$db`.`index_alternatives` " . "(query_id, db, tbl, idx, alt_idx, cnt) " . "VALUES (CONV(?, 16, 10), ?, ?, ?, ?, ?) " . "ON DUPLICATE KEY UPDATE cnt = cnt + ?"); foreach my $query_id ( keys %{$self->{alt_index_usage}} ) { foreach my $db ( keys %{$self->{alt_index_usage}->{$query_id}} ) { foreach my $tbl ( keys %{$self->{alt_index_usage}->{$query_id}->{$db}} ) { foreach my $index ( keys %{$self->{alt_index_usage}->{$query_id}->{$db}->{$tbl}} ){ my $alt_indexes = $self->{alt_index_usage}->{$query_id}->{$db}->{$tbl}->{$index}; foreach my $alt_index ( keys %$alt_indexes ) { my $cnt = $alt_indexes->{$alt_index}; $insert_index_alt_sth->execute( $query_id, $db, $tbl, $index, $alt_index, $cnt, $cnt); } } } } } $dbh->commit unless $dbh->{AutoCommit}; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End IndexUsage package # ########################################################################### # ########################################################################### # Progress package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Progress.pm # t/lib/Progress.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Progress; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; foreach my $arg (qw(jobsize)) { die "I need a $arg argument" unless defined $args{$arg}; } if ( (!$args{report} || !$args{interval}) ) { if ( $args{spec} && @{$args{spec}} == 2 ) { @args{qw(report interval)} = @{$args{spec}}; } else { die "I need either report and interval arguments, or a spec"; } } my $name = $args{name} || "Progress"; $args{start} ||= time(); my $self; $self = { last_reported => $args{start}, fraction => 0, # How complete the job is callback => sub { my ($fraction, $elapsed, $remaining) = @_; printf STDERR "$name: %3d%% %s remain\n", $fraction * 100, Transformers::secs_to_time($remaining); }, %args, }; return bless $self, $class; } sub validate_spec { shift @_ if $_[0] eq 'Progress'; # Permit calling as Progress-> or Progress:: my ( $spec ) = @_; if ( @$spec != 2 ) { die "spec array requires a two-part argument\n"; } if ( $spec->[0] !~ m/^(?:percentage|time|iterations)$/ ) { die "spec array's first element must be one of " . "percentage,time,iterations\n"; } if ( $spec->[1] !~ m/^\d+$/ ) { die "spec array's second element must be an integer\n"; } } sub set_callback { my ( $self, $callback ) = @_; $self->{callback} = $callback; } sub start { my ( $self, $start ) = @_; $self->{start} = $self->{last_reported} = $start || time(); $self->{first_report} = 0; } sub update { my ( $self, $callback, %args ) = @_; my $jobsize = $self->{jobsize}; my $now ||= $args{now} || time; $self->{iterations}++; # How many updates have happened; if ( !$self->{first_report} && $args{first_report} ) { $args{first_report}->(); $self->{first_report} = 1; } if ( $self->{report} eq 'time' && $self->{interval} > $now - $self->{last_reported} ) { return; } elsif ( $self->{report} eq 'iterations' && ($self->{iterations} - 1) % $self->{interval} > 0 ) { return; } $self->{last_reported} = $now; my $completed = $callback->(); $self->{updates}++; # How many times we have run the update callback return if $completed > $jobsize; my $fraction = $completed > 0 ? $completed / $jobsize : 0; if ( $self->{report} eq 'percentage' && $self->fraction_modulo($self->{fraction}) >= $self->fraction_modulo($fraction) ) { $self->{fraction} = $fraction; return; } $self->{fraction} = $fraction; my $elapsed = $now - $self->{start}; my $remaining = 0; my $eta = $now; if ( $completed > 0 && $completed <= $jobsize && $elapsed > 0 ) { my $rate = $completed / $elapsed; if ( $rate > 0 ) { $remaining = ($jobsize - $completed) / $rate; $eta = $now + int($remaining); } } $self->{callback}->($fraction, $elapsed, $remaining, $eta, $completed); } sub fraction_modulo { my ( $self, $num ) = @_; $num *= 100; # Convert from fraction to percentage return sprintf('%d', sprintf('%d', $num / $self->{interval}) * $self->{interval}); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Progress package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. 
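# Example use of the minimal HTTP client defined below (the URL is the
# version-check endpoint this tool talks to; https requires IO::Socket::SSL):
#
#   my $ua  = HTTP::Micro->new( timeout => 5 );
#   my $res = $ua->request('GET', 'https://v.percona.com');
#   if ( $res->{success} ) { print $res->{content}; }
#   # $res also carries status, reason and headers keys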
The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? $host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? 
$1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for $host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! 
!= EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! 
= 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! 
%$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . 
$file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check just above main() for the call to main() which actually runs the # program. # ########################################################################### package pt_index_usage; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Data::Dumper; $Data::Dumper::Indent = 1; $OUTPUT_AUTOFLUSH = 1; use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; Transformers->import(qw(make_checksum)); # Global variables. Only really essential variables should be here. my $oktorun = 1; sub main { local @ARGV = @_; # set global ARGV for this package $oktorun = 1; # ########################################################################## # Get configuration information. # ########################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); $o->set('progress', undef) if $o->get('q'); if ( !$o->got('help') ) { if ( $o->get('progress') ) { eval { Progress->validate_spec($o->get('progress')) }; if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; $o->save_error("--progress $EVAL_ERROR"); } } if ( my $dsn = $o->get('save-results-database') ) { if ( !$dsn->{D} ) { $o->save_error("You must specify a D (database) part for the " . 
"--save-results-database DSN"); } } } $o->usage_or_errors(); # ########################################################################## # Open the database connections. If no connection opts (-h, -P, etc.) # are given on the cmd line then parse_options() will return undef, # but get_cxn() required a defined dsn arg so use an empty hashref. # ########################################################################## my ($dsn, $dbh, $si_dbh, $res_dbh); my $res_dsn; my $res_db; eval { $dsn = $dp->parse_options($o) || {}; # dbh for EXPLAIN-ing. $dbh = get_cxn( dsn => $dsn, OptionParser => $o, DSNParser => $dp, ); # dbh for SchemaIterator # http://code.google.com/p/maatkit/issues/detail?id=1140 $si_dbh = get_cxn( dsn => $dsn, OptionParser => $o, DSNParser => $dp, ); # dbh for --save-results-database if ( $res_dsn = $o->get('save-results-database') ) { # To make --create-save-results-database work we have to # temporarily remove the D from the DSN to avoid the error # "DBI connect failed: Unknown database". It's restored # to the DSN after connecting. $res_db = $res_dsn->{D}; $res_dsn->{D} = undef if $o->get('create-save-results-database'); $res_dbh = get_cxn( dsn => $res_dsn, OptionParser => $o, DSNParser => $dp, ); $res_dsn->{D} = $res_db; } }; if ( $EVAL_ERROR ) { # Avoid "Issuing rollback() for database handle being DESTROY'd # without explicit disconnect()" errors. $dbh->disconnect if $dbh; $si_dbh->disconnect if $si_dbh; $res_dbh->disconnect if $res_dbh; die $EVAL_ERROR; } # ########################################################################## # Make common modules. # ########################################################################## my $q = new Quoter(); my $qp = new QueryParser(); my $qr = new QueryRewriter(QueryParser => $qp); my $tp = new TableParser(Quoter => $q); my $parser = new SlowLogParser(); my $fi = new FileIterator(); my $iu = new IndexUsage( QueryRewriter => $qr, ); my $exa = new ExplainAnalyzer( QueryRewriter => $qr, QueryParser => $qp ); my %common_modules = ( OptionParser => $o, DSNParser => $dp, Quoter => $q, QueryParser => $qp, QueryRewriter => $qr, TableParser => $tp, IndexUsage => $iu, ExplainAnalyzer => $exa, ); # ######################################################################## # Ready the save results database and its tables. # ######################################################################## if ( $res_dbh ) { my $db = $o->get('save-results-database')->{D}; # checked earlier # Create the database (if it doesn't already exist). if ( $o->get('create-save-results-database') ) { create_save_results_database( dbh => $res_dbh, db => $db, %common_modules, ); } # Parse the CREATE TABLE defs from the POD. my @tables = get_save_results_tables(%common_modules); # Empty the tables. This is actually done via DROP TABLE IF EXISTS. if ( $o->get('empty-save-results-tables') ) { empty_save_results_tables( dbh => $res_dbh, db => $db, tbls => \@tables, %common_modules, ); } # Create the tables if necessary. If they were dropped above, now # they'll be recreated. If they never existed (e.g. new db), they'll # be recreated. Or if they already exist, each def has "IF NOT EXISTS" # so existing tables will remain untouched. create_save_results_tables( dbh => $res_dbh, db => $db, tbls => \@tables, %common_modules, ); # Create views for the canned/example queries. 
# http://code.google.com/p/maatkit/issues/detail?id=1184 if ( $o->get('create-views') ) { eval { create_views( dbh => $res_dbh, db => $db, %common_modules, ); }; if ( $EVAL_ERROR ) { warn "Failed to create views: $EVAL_ERROR"; } } } # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ { dbh => $dbh, dsn => $dsn }, ($res_dbh ? { dbh => $res_dbh, dsn => $res_dsn } : ()) ], ); } # ######################################################################## # Populate the IndexUsage object with indexes. Also get a list of all # databases and tables before going on to parse the queries. This will be # important when we see a query without any default database, and we have to # guess which database to USE for EXPLAIN-ing it. This code block doesn't # read query logs, it's just inventorying the tables and indexes. # ######################################################################## my $schema = new Schema(); my $schema_itr = new SchemaIterator( dbh => $si_dbh, OptionParser => $o, Quoter => $q, TableParser => $tp, Schema => $schema, ); TALBE: while ( my $tbl = $schema_itr->next() ) { eval { my $ddl = $tbl->{ddl}; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $tp->ansi_to_legacy($ddl); } my ($indexes) = $tp->get_keys($ddl, {}); $iu->add_indexes(%$tbl, indexes=>$indexes); }; if ( $EVAL_ERROR ) { warn $EVAL_ERROR unless $o->get('q'); PTDEBUG && _d($EVAL_ERROR); } } $si_dbh->disconnect(); # ######################################################################## # This keeps track of the $dbh's current DB, so we know when to USE a # different database. # ######################################################################## my $cur_db = $o->get('database') || ''; # ######################################################################## # This keeps track of statements that can't be EXPLAINed for some reason, so # they are not tried again. # ######################################################################## my %err_for = (); # ######################################################################## # This is the main loop over the input filenames. # ######################################################################## my $next_file = $fi->get_file_itr(@ARGV); my ( $fh, $filename, $filesize ) = $next_file->(); FILE: while ( defined $fh ) { # Create a callback to get events from the slow query log file. my $next_event = sub { return <$fh>; }; my $tell = sub { return tell $fh; }; my $event; my $get_event = sub { return $parser->parse_event( event => $event, next_event => $next_event, tell => $tell, oktorun => sub { return 1 }, misc => {}, stats => {}, ); }; # ##################################################################### # Set up a progress reporter. For right now, we just do one per file. # Maybe someday we can do a global progress report? # ##################################################################### my $pr; if ( $o->get('progress') && $filename && -e $filename ) { $pr = new Progress( jobsize => -s $filename, spec => $o->get('progress'), name => $filename, ); } # ##################################################################### # This is the main loop over the queries in the log. 
For each query we # are going to store what we learn about that query's EXPLAIN plan, keyed # off its fingerprint. # ##################################################################### EVENT: while ( $event = $get_event->() ) { my $arg = $event->{arg} or next EVENT; # The arg is the SQL. my $fingerprint = $event->{fingerprint} = $qr->fingerprint($arg); # Skip events that previously had an error. next if $err_for{$fingerprint}; eval { # Checksum the query and get the query's ID. my $chk = make_checksum($arg); my $id = make_checksum($fingerprint); # Do we need to USE a new database before we EXPLAIN the query? my $new_db = $event->{db} || $event->{Schema}; if ( $new_db && $new_db ne $cur_db ) { my $sql = 'USE ' . $q->quote($new_db); PTDEBUG && _d($sql); $dbh->do($sql); $cur_db = $new_db; } # See if we've EXPLAINed this checksum before. If so, just # increment counters with the saved info from $exa. If not, EXPLAIN # and increment counters, then save to $exa. my $access = $exa->get_usage_for($chk, $cur_db); if ( !$access ) { # The query might not be explain-able. If that is so, it will # die, and we want that to happen so it gets blacklisted. We # don't want it to return an error or something like that, and we # don't want to filter it out and skip it in the first place, # because then we will keep burning cycles on it trying to # explain it over and over. my $explain = $exa->explain_query( dbh => $dbh, query => $arg, ); $access = $exa->get_index_usage( query => $arg, db => $cur_db, explain => $exa->normalize($explain), ); $exa->save_usage_for($chk, $cur_db, $access); $iu->add_query( query_id => $id, fingerprint => $fingerprint, sample => $arg, ); } foreach my $row ( @$access ) { $iu->add_table_usage($row->{db}, $row->{tbl}); $iu->add_index_usage( usage => $access, query_id => $id, ); } }; if ( $EVAL_ERROR ) { # Skip statements with this fingerprint in the future (blacklist). $err_for{$fingerprint} = { event => $event, error => $EVAL_ERROR }; # Log the error. PTDEBUG && _d('Problem on query', $event, $EVAL_ERROR); warn $EVAL_ERROR unless $o->get('q'); } $pr->update($tell) if $pr; } # EVENT ( $fh, $filename, $filesize ) = $next_file->(); } # FILE # ######################################################################## # All done! Now print the reports, maybe. # ######################################################################## if ( $res_dbh ) { $iu->save_results( dbh => $res_dbh, db => $res_db, ); } if ( $o->get('report') ) { print_reports( dbh => $dbh, err_for => \%err_for, %common_modules ); } $dbh->disconnect; $res_dbh->disconnect if $res_dbh; return 0; } # End main(). # ############################################################################ # Subroutines. 
# ############################################################################ sub print_reports { my ( %args ) = @_; my $iu = $args{IndexUsage}; my $o = $args{OptionParser}; my @reports = @{$o->get('report-format')}; PTDEBUG && _d("Printing reports"); if ( grep { $_ eq 'drop_unused_indexes' } @reports ) { $iu->find_unused_indexes( sub { my ( $unused ) = @_; print_unused_indexes( unused => $unused, drop => $o->get('drop'), %args, ); } ); } return; } sub print_unused_indexes { my ( %args ) = @_; my @required_args = qw(unused drop Quoter); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($unused, $drop, $q) = @args{@required_args}; my $db_tbl = $q->quote($unused->{db}, $unused->{tbl}); # We must ignore the types that we're not dropping, then group # indexes of the remaining types together and print them together. my (@primary, @unique, @nonunique); foreach my $idx ( @{$unused->{idx}} ) { if ($idx->{name} =~ m/PRIMARY/i ) { push @primary, $idx; } elsif ( $idx->{is_unique} ) { push @unique, $idx; } else { push @nonunique, $idx; } } print_alter_drop_key( db_tbl => $db_tbl, idx => \@primary, type => 'primary key', %args ) if $drop->{primary} || $drop->{all}; print_alter_drop_key( db_tbl => $db_tbl, idx => \@unique, type => 'unique', %args ) if $drop->{unique} || $drop->{all}; print_alter_drop_key( db_tbl => $db_tbl, idx => \@nonunique, type => 'non-unique', %args ) if $drop->{"non-unique"} || $drop->{all}; return; } sub print_alter_drop_key { my ( %args ) = @_; my @required_args = qw(db_tbl idx Quoter); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($db_tbl, $idx, $q) = @args{@required_args}; return unless @$idx; print "\nALTER TABLE $db_tbl " . join(', ', map { "DROP KEY " . $q->quote($_->{name}) } @$idx) . ";" . ($args{type} ? " -- type:$args{type}" : "") . "\n"; return; } sub get_cxn { my ( %args ) = @_; my @required_args = qw(dsn OptionParser DSNParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn, $o, $dp) = @args{@required_args}; if ( $o->get('ask-pass') ) { $dsn->{p} = OptionParser::prompt_noecho("Enter password " . ($args{for} ? "for $args{for}: " : ": ")); } my $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), $args{opts}); PTDEBUG && _d('Connected dbh', $dbh); return $dbh; } sub create_save_results_database { my ( %args ) = @_; my @required_args = qw(dbh db Quoter); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($dbh, $db, $q) = @args{@required_args}; my $sql; $db = $q->quote($db); eval { PTDEBUG && _d("Checking if", $db, "database already exists"); $sql = "USE $db"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); }; if ( $EVAL_ERROR ) { PTDEBUG && _d($db, "does not exist:", $EVAL_ERROR); $sql = "CREATE DATABASE $db"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); # Now USE the newly created db (the first attempt failed obviously). 
$sql = "USE $db"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } else { PTDEBUG && _d($db, "already exists"); } return; } sub get_save_results_tables { my ( %args ) = @_; my @required_args = qw(OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($o) = @args{@required_args}; my $file = $args{file} || __FILE__; PTDEBUG && _d("Getting CREATE TABLE defs from POD"); my @table_defs = qw(indexes tables queries index_usage index_alternatives); my @tables; foreach my $tbl ( @table_defs ) { my $magic = "MAGIC_create_$tbl"; my $sql = $o->read_para_after($file, qr/$magic/); push @tables, { name => $tbl, def => $sql, }; } return @tables; } sub empty_save_results_tables { my ( %args ) = @_; my @required_args = qw(dbh db tbls Quoter); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($dbh, $db, $tbls, $q) = @args{@required_args}; foreach my $tbl ( @$tbls ) { # Nothing is more "empty" than non-existence. The tables # will be recreated later by calling create_save_results_tables(). # Dropping and recreating has an advantage over truncating/deleting: # if the CREATE TABLE def is changed, this will auto-upgrade. my $sql = "DROP TABLE IF EXISTS " . $q->quote($db, $tbl->{name}); PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } return; } sub create_save_results_tables { my ( %args ) = @_; my @required_args = qw(dbh db tbls); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($dbh, $db, $tbls) = @args{@required_args}; foreach my $tbl ( @$tbls ) { my $sql = $tbl->{def}; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } return; } sub create_views { my ( %args ) = @_; my @required_args = qw(dbh); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($dbh) = @args{@required_args}; PTDEBUG && _d("Creating views"); my $pod_parser = new PodParser(); $pod_parser->parse_from_file(__FILE__); my $magic = $pod_parser->get_magic('OPTIONS'); foreach my $ident ( keys %$magic ) { next unless $ident =~ m/^view/; my $sql = "CREATE VIEW `$ident` AS $magic->{$ident}"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################# # Documentation. # ############################################################################# =pod =head1 NAME pt-index-usage - Read queries from a log and analyze how they use indexes. =head1 SYNOPSIS Usage: pt-index-usage [OPTIONS] [FILES] pt-index-usage reads queries from logs and analyzes how they use indexes. Analyze queries in slow.log and print reports: pt-index-usage /path/to/slow.log --host localhost Disable reports and save results to percona database for later analysis: pt-index-usage slow.log --no-report --save-results-database percona =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. 
Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION This tool connects to a MySQL database server, reads through a query log, and uses EXPLAIN to ask MySQL how it will use each query. When it is finished, it prints out a report on indexes that the queries didn't use. The query log needs to be in MySQL's slow query log format. If you need to input a different format, you can use L<pt-query-digest> to translate the formats. If you don't specify a filename, the tool reads from STDIN. The tool runs two stages. In the first stage, the tool takes inventory of all the tables and indexes in your database, so it can compare the existing indexes to those that were actually used by the queries in the log. In the second stage, it runs EXPLAIN on each query in the query log. It uses separate database connections to inventory the tables and run EXPLAIN, so it opens two connections to the database. If a query is not a SELECT, it tries to transform it to a roughly equivalent SELECT query so it can be EXPLAINed. This is not a perfect process, but it is good enough to be useful. The tool skips the EXPLAIN step for queries that are exact duplicates of those seen before. It assumes that the same query will generate the same EXPLAIN plan as it did previously (usually a safe assumption, and generally good for performance), and simply increments the count of times that the indexes were used. However, queries that have the same fingerprint but different checksums will be re-EXPLAINed. Queries that have different literal constants can have different execution plans, and this is important to measure. After EXPLAIN-ing the query, it is necessary to try to map aliases in the query back to the original table names. For example, consider the EXPLAIN plan for the following query: SELECT * FROM tbl1 AS foo; The EXPLAIN output will show access to table C<foo>, and that must be translated back to C<tbl1>. This process involves complex parsing. It is generally very accurate, but there is some chance that it might not work right. If you find cases where it fails, submit a bug report and a reproducible test case. Queries that cannot be EXPLAINed will cause all subsequent queries with the same fingerprint to be blacklisted. This is to reduce the work they cause, and prevent them from continuing to print error messages. However, at least in this stage of the tool's development, it is my opinion that it's not a good idea to preemptively silence these, or prevent them from being EXPLAINed at all. I am looking for lots of feedback on how to improve things like the query parsing. So please submit your test cases based on the errors the tool prints! =head1 OUTPUT After it reads all the events in the log, the tool prints out DROP statements for every index that was not used. It skips indexes for tables that were never accessed by any queries in the log, to avoid false-positive results. If you don't specify L<"--quiet">, the tool also outputs warnings about statements that cannot be EXPLAINed and similar. These go to standard error. Progress reports are enabled by default (see L<"--progress">). These also go to standard error. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. 
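For example, a hypothetical invocation (the host and user shown are only illustrative) that reads a slow log and prompts interactively for the MySQL password instead of passing it on the command line: pt-index-usage /path/to/slow.log --host 127.0.0.1 --user percona --ask-pass 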
=item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --create-save-results-database Create the L<"--save-results-database"> if it does not exist. If the L<"--save-results-database"> already exists and this option is specified, the database is used and the necessary tables are created if they do not already exist. =item --[no]create-views Create views for L<"--save-results-database"> example queries. Several example queries are given for querying the tables in the L<"--save-results-database">. These example queries are, by default, created as views. Specifying C<--no-create-views> prevents these views from being created. =item --database short form: -D; type: string The database to use for the connection. =item --databases short form: -d; type: hash Only get tables and indexes from this comma-separated list of databases. =item --databases-regex type: string Only get tables and indexes from databases whose names match this Perl regex. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --drop type: Hash; default: non-unique Suggest dropping only these types of unused indexes. By default pt-index-usage will only suggest to drop unused secondary indexes, not primary or unique indexes. You can specify which types of unused indexes the tool suggests to drop: primary, unique, non-unique, all. A separate C<ALTER TABLE> statement for each type is printed. So if you specify C<--drop all> and there is a primary key and a non-unique index, the C<ALTER TABLE> for each will be printed on separate lines. =item --empty-save-results-tables Drop and re-create all pre-existing tables in the L<"--save-results-database">. This allows information from previous runs to be removed before the current run. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --ignore-databases type: Hash Ignore this comma-separated list of databases. =item --ignore-databases-regex type: string Ignore databases whose names match this Perl regex. =item --ignore-tables type: Hash Ignore this comma-separated list of table names. Table names may be qualified with the database name. =item --ignore-tables-regex type: string Ignore tables whose names match this Perl regex. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --port short form: -P; type: int Port number to use for connection. =item --progress type: array; default: time,30 Print progress reports to STDERR. The value is a comma-separated list with two parts. The first part can be percentage, time, or iterations; the second part specifies how often an update should be printed, in percentage, seconds, or number of iterations. =item --quiet short form: -q Do not print any warnings. Also disables L<"--progress">. =item --[no]report default: yes Print the reports for L<"--report-format">. 
You may want to disable the reports by specifying C<--no-report> if, for example, you also specify L<"--save-results-database"> and you only want to query the results tables later. =item --report-format type: Array; default: drop_unused_indexes Right now there is only one report: drop_unused_indexes. This report prints SQL statements for dropping any unused indexes. See also L<"--drop">. See also L<"--[no]report">. =item --save-results-database type: DSN Save results to tables in this database. Information about indexes, queries, tables and their usage is stored in several tables in the specified database. The tables are auto-created if they do not exist. If the database doesn't exist, it can be auto-created with L<"--create-save-results-database">. In this case the connection is initially created with no default database, then after the database is created, it is USE'ed. pt-index-usage executes INSERT statements to save the results. Therefore, you should be careful if you use this feature on a production server. It might increase load, or cause trouble if you don't want the server to be written to, or so on. This is a new feature. It may change in future releases. After a run, you can query the usage tables to answer various questions about index usage. The tables have the following CREATE TABLE definitions: MAGIC_create_indexes: CREATE TABLE IF NOT EXISTS indexes ( db VARCHAR(64) NOT NULL, tbl VARCHAR(64) NOT NULL, idx VARCHAR(64) NOT NULL, cnt BIGINT UNSIGNED NOT NULL DEFAULT 0, PRIMARY KEY (db, tbl, idx) ) MAGIC_create_queries: CREATE TABLE IF NOT EXISTS queries ( query_id BIGINT UNSIGNED NOT NULL, fingerprint TEXT NOT NULL, sample TEXT NOT NULL, PRIMARY KEY (query_id) ) MAGIC_create_tables: CREATE TABLE IF NOT EXISTS tables ( db VARCHAR(64) NOT NULL, tbl VARCHAR(64) NOT NULL, cnt BIGINT UNSIGNED NOT NULL DEFAULT 0, PRIMARY KEY (db, tbl) ) MAGIC_create_index_usage: CREATE TABLE IF NOT EXISTS index_usage ( query_id BIGINT UNSIGNED NOT NULL, db VARCHAR(64) NOT NULL, tbl VARCHAR(64) NOT NULL, idx VARCHAR(64) NOT NULL, cnt BIGINT UNSIGNED NOT NULL DEFAULT 1, UNIQUE INDEX (query_id, db, tbl, idx) ) MAGIC_create_index_alternatives: CREATE TABLE IF NOT EXISTS index_alternatives ( query_id BIGINT UNSIGNED NOT NULL, -- This query used db VARCHAR(64) NOT NULL, -- this index, but... tbl VARCHAR(64) NOT NULL, -- idx VARCHAR(64) NOT NULL, -- alt_idx VARCHAR(64) NOT NULL, -- was an alternative cnt BIGINT UNSIGNED NOT NULL DEFAULT 1, UNIQUE INDEX (query_id, db, tbl, idx, alt_idx), INDEX (db, tbl, idx), INDEX (db, tbl, alt_idx) ) The following are some queries you can run against these tables to answer common questions you might have. Each query is also created as a view (with MySQL v5.0 and newer) if C<"--[no]create-views"> is true (it is by default). The view names are the strings after the C prefix. Question: which queries sometimes use different indexes, and what fraction of the time is each index chosen? MAGIC_view_query_uses_several_indexes: SELECT iu.query_id, CONCAT_WS('.', iu.db, iu.tbl, iu.idx) AS idx, variations, iu.cnt, iu.cnt / total_cnt * 100 AS pct FROM index_usage AS iu INNER JOIN ( SELECT query_id, db, tbl, SUM(cnt) AS total_cnt, COUNT(*) AS variations FROM index_usage GROUP BY query_id, db, tbl HAVING COUNT(*) > 1 ) AS qv USING(query_id, db, tbl); Question: which indexes have lots of alternatives, i.e. are chosen instead of other indexes, and for what queries? 
MAGIC_view_index_has_alternates: SELECT CONCAT_WS('.', db, tbl, idx) AS idx_chosen, GROUP_CONCAT(DISTINCT alt_idx) AS alternatives, GROUP_CONCAT(DISTINCT query_id) AS queries, SUM(cnt) AS cnt FROM index_alternatives GROUP BY db, tbl, idx HAVING COUNT(*) > 1; Question: which indexes are considered as alternates for other indexes, and for what queries? MAGIC_view_index_alternates: SELECT CONCAT_WS('.', db, tbl, alt_idx) AS idx_considered, GROUP_CONCAT(DISTINCT idx) AS alternative_to, GROUP_CONCAT(DISTINCT query_id) AS queries, SUM(cnt) AS cnt FROM index_alternatives GROUP BY db, tbl, alt_idx HAVING COUNT(*) > 1; Question: which of those are never chosen by any queries, and are therefore superfluous? MAGIC_view_unused_index_alternates: SELECT CONCAT_WS('.', i.db, i.tbl, i.idx) AS idx, alt.alternative_to, alt.queries, alt.cnt FROM indexes AS i INNER JOIN ( SELECT db, tbl, alt_idx, GROUP_CONCAT(DISTINCT idx) AS alternative_to, GROUP_CONCAT(DISTINCT query_id) AS queries, SUM(cnt) AS cnt FROM index_alternatives GROUP BY db, tbl, alt_idx HAVING COUNT(*) > 1 ) AS alt ON i.db = alt.db AND i.tbl = alt.tbl AND i.idx = alt.alt_idx WHERE i.cnt = 0; Question: given a table, which indexes were used, by how many queries, with how many distinct fingerprints? Were there alternatives? Which indexes were not used? You can edit the following query's SELECT list to also see the query IDs in question. MAGIC_view_index_usage: SELECT i.idx, iu.usage_cnt, iu.usage_total, ia.alt_cnt, ia.alt_total FROM indexes AS i LEFT OUTER JOIN ( SELECT db, tbl, idx, COUNT(*) AS usage_cnt, SUM(cnt) AS usage_total, GROUP_CONCAT(query_id) AS used_by FROM index_usage GROUP BY db, tbl, idx ) AS iu ON i.db=iu.db AND i.tbl=iu.tbl AND i.idx = iu.idx LEFT OUTER JOIN ( SELECT db, tbl, idx, COUNT(*) AS alt_cnt, SUM(cnt) AS alt_total, GROUP_CONCAT(query_id) AS alt_queries FROM index_alternatives GROUP BY db, tbl, idx ) AS ia ON i.db=ia.db AND i.tbl=ia.tbl AND i.idx = ia.idx; Question: which indexes on a given table are vital for at least one query (there is no alternative)? MAGIC_view_required_indexes: SELECT i.db, i.tbl, i.idx, no_alt.queries FROM indexes AS i INNER JOIN ( SELECT iu.db, iu.tbl, iu.idx, GROUP_CONCAT(iu.query_id) AS queries FROM index_usage AS iu LEFT OUTER JOIN index_alternatives AS ia USING(db, tbl, idx) WHERE ia.db IS NULL GROUP BY iu.db, iu.tbl, iu.idx ) AS no_alt ON no_alt.db = i.db AND no_alt.tbl = i.tbl AND no_alt.idx = i.idx ORDER BY i.db, i.tbl, i.idx, no_alt.queries; =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the defaultvalue of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --tables short form: -t; type: hash Only get indexes from this comma-separated list of tables. =item --tables-regex type: string Only get indexes from tables whose names match this Perl regex. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. 
First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Database to connect to. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-index-usage ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. 
Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2010-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-index-usage 3.1.0 =cut percona-toolkit-3.1/bin/pt-ioprofile000775 001750 001750 00000077225 13535723560 020772 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env bash # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. # ########################################################################### # log_warn_die package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/log_warn_die.sh # t/lib/bash/log_warn_die.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u PTFUNCNAME="" PTDEBUG="${PTDEBUG:-""}" EXIT_STATUS=0 ts() { TS=$(date +%F-%T | tr ':-' '_') echo "$TS $*" } info() { [ ${OPT_VERBOSE:-3} -ge 3 ] && ts "$*" } log() { [ ${OPT_VERBOSE:-3} -ge 2 ] && ts "$*" } warn() { [ ${OPT_VERBOSE:-3} -ge 1 ] && ts "$*" >&2 EXIT_STATUS=1 } die() { ts "$*" >&2 EXIT_STATUS=1 exit 1 } _d () { [ "$PTDEBUG" ] && echo "# $PTFUNCNAME: $(ts "$*")" >&2 } # ########################################################################### # End log_warn_die package # ########################################################################### # ########################################################################### # parse_options package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/parse_options.sh # t/lib/bash/parse_options.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u ARGV="" # Non-option args (probably input files) EXT_ARGV="" # Everything after -- (args for an external command) HAVE_EXT_ARGV="" # Got --, everything else is put into EXT_ARGV OPT_ERRS=0 # How many command line option errors OPT_VERSION="" # If --version was specified OPT_HELP="" # If --help was specified OPT_ASK_PASS="" # If --ask-pass was specified PO_DIR="" # Directory with program option spec files usage() { local file="$1" local usage="$(grep '^Usage: ' "$file")" echo $usage echo echo "For more information, 'man $TOOL' or 'perldoc $file'." 
} usage_or_errors() { local file="$1" local version="" if [ "$OPT_VERSION" ]; then version=$(grep '^pt-[^ ]\+ [0-9]' "$file") echo "$version" return 1 fi if [ "$OPT_HELP" ]; then usage "$file" echo echo "Command line options:" echo perl -e ' use strict; use warnings FATAL => qw(all); my $lcol = 20; # Allow this much space for option names. my $rcol = 80 - $lcol; # The terminal is assumed to be 80 chars wide. my $name; while ( <> ) { my $line = $_; chomp $line; if ( $line =~ s/^long:/ --/ ) { $name = $line; } elsif ( $line =~ s/^desc:// ) { $line =~ s/ +$//mg; my @lines = grep { $_ } $line =~ m/(.{0,$rcol})(?:\s+|\Z)/g; if ( length($name) >= $lcol ) { print $name, "\n", (q{ } x $lcol); } else { printf "%-${lcol}s", $name; } print join("\n" . (q{ } x $lcol), @lines); print "\n"; } } ' "$PO_DIR"/* echo echo "Options and values after processing arguments:" echo ( cd "$PO_DIR" for opt in *; do local varname="OPT_$(echo "$opt" | tr a-z- A-Z_)" eval local varvalue=\$$varname if ! grep -q "type:" "$PO_DIR/$opt" >/dev/null; then if [ "$varvalue" -a "$varvalue" = "yes" ]; then varvalue="TRUE" else varvalue="FALSE" fi fi printf -- " --%-30s %s" "$opt" "${varvalue:-(No value)}" echo done ) return 1 fi if [ $OPT_ERRS -gt 0 ]; then echo usage "$file" return 1 fi return 0 } option_error() { local err="$1" OPT_ERRS=$(($OPT_ERRS + 1)) echo "$err" >&2 } parse_options() { local file="$1" shift ARGV="" EXT_ARGV="" HAVE_EXT_ARGV="" OPT_ERRS=0 OPT_VERSION="" OPT_HELP="" OPT_ASK_PASS="" PO_DIR="$PT_TMPDIR/po" if [ ! -d "$PO_DIR" ]; then mkdir "$PO_DIR" if [ $? -ne 0 ]; then echo "Cannot mkdir $PO_DIR" >&2 exit 1 fi fi rm -rf "$PO_DIR"/* if [ $? -ne 0 ]; then echo "Cannot rm -rf $PO_DIR/*" >&2 exit 1 fi _parse_pod "$file" # Parse POD into program option (po) spec files _eval_po # Eval po into existence with default values if [ $# -ge 2 ] && [ "$1" = "--config" ]; then shift # --config local user_config_files="$1" shift # that ^ local IFS="," for user_config_file in $user_config_files; do _parse_config_files "$user_config_file" done else _parse_config_files "/etc/percona-toolkit/percona-toolkit.conf" "/etc/percona-toolkit/$TOOL.conf" if [ "${HOME:-}" ]; then _parse_config_files "$HOME/.percona-toolkit.conf" "$HOME/.$TOOL.conf" fi fi _parse_command_line "${@:-""}" } _parse_pod() { local file="$1" PO_FILE="$file" PO_DIR="$PO_DIR" perl -e ' $/ = ""; my $file = $ENV{PO_FILE}; open my $fh, "<", $file or die "Cannot open $file: $!"; while ( defined(my $para = <$fh>) ) { next unless $para =~ m/^=head1 OPTIONS/; while ( defined(my $para = <$fh>) ) { last if $para =~ m/^=head1/; chomp; if ( $para =~ m/^=item --(\S+)/ ) { my $opt = $1; my $file = "$ENV{PO_DIR}/$opt"; open my $opt_fh, ">", $file or die "Cannot open $file: $!"; print $opt_fh "long:$opt\n"; $para = <$fh>; chomp; if ( $para =~ m/^[a-z ]+:/ ) { map { chomp; my ($attrib, $val) = split(/: /, $_); print $opt_fh "$attrib:$val\n"; } split(/; /, $para); $para = <$fh>; chomp; } my ($desc) = $para =~ m/^([^?.]+)/; print $opt_fh "desc:$desc.\n"; close $opt_fh; } } last; } ' } _eval_po() { local IFS=":" for opt_spec in "$PO_DIR"/*; do local opt="" local default_val="" local neg=0 local size=0 while read key val; do case "$key" in long) opt=$(echo $val | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') ;; default) default_val="$val" ;; "short form") ;; type) [ "$val" = "size" ] && size=1 ;; desc) ;; negatable) if [ "$val" = "yes" ]; then neg=1 fi ;; *) echo "Invalid attribute in $opt_spec: $line" >&2 exit 1 esac done < "$opt_spec" if [ -z "$opt" ]; then echo "No long attribute 
in option spec $opt_spec" >&2 exit 1 fi if [ $neg -eq 1 ]; then if [ -z "$default_val" ] || [ "$default_val" != "yes" ]; then echo "Option $opt_spec is negatable but not default: yes" >&2 exit 1 fi fi if [ $size -eq 1 -a -n "$default_val" ]; then default_val=$(size_to_bytes $default_val) fi eval "OPT_${opt}"="$default_val" done } _parse_config_files() { for config_file in "${@:-""}"; do test -f "$config_file" || continue while read config_opt; do echo "$config_opt" | grep '^[ ]*[^#]' >/dev/null 2>&1 || continue config_opt="$(echo "$config_opt" | sed -e 's/^ *//g' -e 's/ *$//g' -e 's/[ ]*=[ ]*/=/' -e 's/[ ]+#.*$//')" [ "$config_opt" = "" ] && continue echo "$config_opt" | grep -v 'version-check' >/dev/null 2>&1 || continue if ! [ "$HAVE_EXT_ARGV" ]; then config_opt="--$config_opt" fi _parse_command_line "$config_opt" done < "$config_file" HAVE_EXT_ARGV="" # reset for each file done } _parse_command_line() { local opt="" local val="" local next_opt_is_val="" local opt_is_ok="" local opt_is_negated="" local real_opt="" local required_arg="" local spec="" for opt in "${@:-""}"; do if [ "$opt" = "--" -o "$opt" = "----" ]; then HAVE_EXT_ARGV=1 continue fi if [ "$HAVE_EXT_ARGV" ]; then if [ "$EXT_ARGV" ]; then EXT_ARGV="$EXT_ARGV $opt" else EXT_ARGV="$opt" fi continue fi if [ "$next_opt_is_val" ]; then next_opt_is_val="" if [ $# -eq 0 ] || [ $(expr "$opt" : "\-") -eq 1 ]; then option_error "$real_opt requires a $required_arg argument" continue fi val="$opt" opt_is_ok=1 else if [ $(expr "$opt" : "\-") -eq 0 ]; then if [ -z "$ARGV" ]; then ARGV="$opt" else ARGV="$ARGV $opt" fi continue fi real_opt="$opt" if $(echo $opt | grep '^--no[^-]' >/dev/null); then local base_opt=$(echo $opt | sed 's/^--no//') if [ -f "$PT_TMPDIR/po/$base_opt" ]; then opt_is_negated=1 opt="$base_opt" else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi else if $(echo $opt | grep '^--no-' >/dev/null); then opt_is_negated=1 opt=$(echo $opt | sed 's/^--no-//') else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi fi if $(echo $opt | grep '^[a-z-][a-z-]*=' >/dev/null 2>&1); then val="$(echo $opt | awk -F= '{print $2}')" opt="$(echo $opt | awk -F= '{print $1}')" fi if [ -f "$PT_TMPDIR/po/$opt" ]; then spec="$PT_TMPDIR/po/$opt" else spec=$(grep "^short form:-$opt\$" "$PT_TMPDIR"/po/* | cut -d ':' -f 1) if [ -z "$spec" ]; then option_error "Unknown option: $real_opt" continue fi fi required_arg=$(cat "$spec" | awk -F: '/^type:/{print $2}') if [ "$required_arg" ]; then if [ "$val" ]; then opt_is_ok=1 else next_opt_is_val=1 fi else if [ "$val" ]; then option_error "Option $real_opt does not take a value" continue fi if [ "$opt_is_negated" ]; then val="" else val="yes" fi opt_is_ok=1 fi fi if [ "$opt_is_ok" ]; then opt=$(cat "$spec" | grep '^long:' | cut -d':' -f2 | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') if grep "^type:size" "$spec" >/dev/null; then val=$(size_to_bytes $val) fi eval "OPT_$opt"="'$val'" opt="" val="" next_opt_is_val="" opt_is_ok="" opt_is_negated="" real_opt="" required_arg="" spec="" fi done } size_to_bytes() { local size="$1" echo $size | perl -ne '%f=(B=>1, K=>1_024, M=>1_048_576, G=>1_073_741_824, T=>1_099_511_627_776); m/^(\d+)([kMGT])?/i; print $1 * $f{uc($2 || "B")};' } # ########################################################################### # End parse_options package # ########################################################################### # ########################################################################### # tmpdir package # This package is a copy without comments from the 
original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/tmpdir.sh # t/lib/bash/tmpdir.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u PT_TMPDIR="" mk_tmpdir() { local dir="${1:-""}" if [ -n "$dir" ]; then if [ ! -d "$dir" ]; then mkdir "$dir" || die "Cannot make tmpdir $dir" fi PT_TMPDIR="$dir" else local tool="${0##*/}" local pid="$$" PT_TMPDIR=`mktemp -d -t "${tool}.${pid}.XXXXXX"` \ || die "Cannot make secure tmpdir" fi } rm_tmpdir() { if [ -n "$PT_TMPDIR" ] && [ -d "$PT_TMPDIR" ]; then rm -rf "$PT_TMPDIR" fi PT_TMPDIR="" } # ########################################################################### # End tmpdir package # ########################################################################### # ########################################################################### # alt_cmds package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/alt_cmds.sh # t/lib/bash/alt_cmds.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u _seq() { local i="$1" awk "BEGIN { for(i=1; i<=$i; i++) print i; }" } _pidof() { local cmd="$1" if ! pidof "$cmd" 2>/dev/null; then ps -eo pid,ucomm | awk -v comm="$cmd" '$2 == comm { print $1 }' fi } _lsof() { local pid="$1" if ! lsof -p $pid 2>/dev/null; then /bin/ls -l /proc/$pid/fd 2>/dev/null fi } _which() { if [ -x /usr/bin/which ]; then /usr/bin/which "$1" 2>/dev/null | awk '{print $1}' elif which which 1>/dev/null 2>&1; then which "$1" 2>/dev/null | awk '{print $1}' else echo "$1" fi } # ########################################################################### # End alt_cmds package # ########################################################################### # ########################################################################### # Global variables # ########################################################################### TOOL="pt-ioprofile" # ########################################################################### # Subroutines # ########################################################################### # Read the 'lsof' and 'strace' from the file, and convert it into lines: # pid function fd_no size timing filename # The arguments are the files to summarize. tabulate_strace() { cat > $PT_TMPDIR/tabulate_strace.awk < function call if ( \$3 == "<..." ) { funcn = \$4; fd = unfinished[pid "," funcn]; if ( fd > 0 ) { filename = filename_for[fd]; if ( filename != "" ) { if ( funcn ~ /open/ ) { size = 0; } else { size_field = NF - 1; size = \$size_field; } timing = \$NF; gsub(/[<>]/, "", timing); print pid, funcn, fd, size, timing, filename; } } } # The beginning of a function call (not resumed). There are basically # two cases here: the whole call is on one line, and it's unfinished # and ends on a later line. else { funcn = substr(\$3, 1, index(\$3, "(") - 1); if ( funcn ~ wanted_pat ) { # Save the file descriptor and name for lookup later. if ( funcn ~ /open/ ) { filename = substr(\$3, index(\$3, "(") + 2); filename = substr(filename, 1, index(filename, "\\"") - 1); if ( "./" == substr(filename, 1, 2) ) { # Translate relative filenames into absolute ones. 
filename = cwd substr(filename, 2); } fd_field = NF - 1; fd = \$fd_field; filename_for[fd] = filename; } else { fd = substr(\$3, index(\$3, "(") + 1); gsub(/[^0-9].*/, "", fd); } # Save unfinished calls for later if ( \$NF == "...>" ) { unfinished[pid "," funcn] = fd; } # Function calls that are all on one line, not else { filename = filename_for[fd]; if ( filename != "" ) { if ( funcn ~ /open/ ) { size = 0; } else { size_field = NF - 1; size = \$size_field; } timing = \$NF; gsub(/[<>]/, "", timing); print pid, funcn, fd, size, timing, filename; } } } } } } EOF awk -f $PT_TMPDIR/tabulate_strace.awk "$@" } # Takes as input the output from tabulate_strace. Arguments are just a subset # of the overall command-line options, but no validation is needed. The last # command-line option is the filename of the tabulate_strace output. summarize_strace() { local func="$1" local cell="$2" local group_by="$3" local file="$4" cat > "$PT_TMPDIR/summarize_strace.awk" < 0 ) { result /= count[funcn "," thing]; } else { result = 0; } } if ( "$group_by" != "all" ) { output = output sprintf(col_pat, result); } else { printf(col_pat funcn "\\n", result); } } total_result = total_$cell; if ( "$func" == "avg" ) { if ( total_count > 0 ) { total_result /= total_count; } else { total_result = 0; } } printf(col_pat, total_result); if ( "$group_by" != "all" ) { print(output thing); } else { print "TOTAL"; } } } EOF awk -f $PT_TMPDIR/summarize_strace.awk "$file" > $PT_TMPDIR/summarized_samples if [ "$group_by" != "all" ]; then head -n1 $PT_TMPDIR/summarized_samples tail -n +2 $PT_TMPDIR/summarized_samples | sort -rn -k1 else grep TOTAL $PT_TMPDIR/summarized_samples grep -v TOTAL $PT_TMPDIR/summarized_samples | sort -rn -k1 fi } sigtrap() { warn "Caught signal, forcing exit" rm_tmpdir exit $EXIT_STATUS } main() { trap sigtrap HUP INT TERM if [ $# -gt 0 ]; then # Summarize the files the user passed in. tabulate_strace "$@" > $PT_TMPDIR/tabulated_samples else # There's no file to analyze, so we'll make one. if which strace > /dev/null 2>&1; then local samples=${OPT_SAVE_SAMPLES:-"$PT_TMPDIR/samples"} # Get the PID of the process to profile, unless the user # gave us it explicitly with --profile-pid. local proc_pid="$OPT_PROFILE_PID" if [ -z "$proc_pid" ]; then proc_pid=$(_pidof "$OPT_PROFILE_PROCESS" | awk '{print $1; exit;'}) fi date if [ "$proc_pid" ]; then echo "Tracing process ID $proc_pid" _lsof "$proc_pid" > "$samples" 2>&1 if [ "$?" -ne "0" ]; then echo "Error: could not execute lsof, error code $?" EXIT_STATUS=1 return 1 fi strace -T -s 0 -f -p $proc_pid >> "$samples" 2>&1 & if [ "$?" -ne "0" ]; then echo "Error: could not execute strace, error code $?" EXIT_STATUS=1 return 1 fi strace_pid=$! # sleep one second then check to make sure the strace is # actually running sleep 1 ps -p $strace_pid > /dev/null 2>&1 if [ "$?" -ne "0" ]; then echo "Cannot find strace process" >&2 tail "$samples" >&2 EXIT_STATUS=1 return 1 fi # sleep for interval -1, since we did a one second sleep # before checking for the PID of strace if [ $((${OPT_RUN_TIME}-1)) -gt 0 ]; then sleep $((${OPT_RUN_TIME}-1)) fi kill -s 2 $strace_pid sleep 1 kill -s 15 $strace_pid 2>/dev/null # Sometimes strace leaves threads/processes in T status. kill -s 18 $proc_pid # Summarize the output we just generated. 
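      # A sketch of the rows that tabulate_strace produces next (the values here are
      # illustrative only). It emits one row per traced call, in the column order
      # documented above the function:
      #   pid   function  fd_no  size   timing    filename
      #   1833  read      13     4096   0.000029  /var/lib/mysql/ibdata1
      # summarize_strace then groups these rows by --group-by and aggregates the
      # --cell values with --aggregate.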
tabulate_strace "$samples" > $PT_TMPDIR/tabulated_samples else echo "Cannot determine PID of $OPT_PROFILE_PROCESS process" >&2 EXIT_STATUS=1 return 1 fi else echo "strace is not in PATH" >&2 EXIT_STATUS=1 return 1 fi fi summarize_strace \ $OPT_AGGREGATE \ $OPT_CELL \ $OPT_GROUP_BY \ "$PT_TMPDIR/tabulated_samples" } # Execute the program if it was not included from another file. # This makes it possible to include without executing, and thus test. if [ "${0##*/}" = "$TOOL" ] \ || [ "${0##*/}" = "bash" -a "${_:-""}" = "$0" ]; then # Parse command line options. We must do this first so we can # see if --daemonize was specified. mk_tmpdir parse_options "$0" "$@" usage_or_errors "$0" po_status=$? rm_tmpdir if [ $po_status -eq 0 ]; then # Make a secure tmpdir. mk_tmpdir # XXX # TODO: This should be quoted but because the way parse_options() # currently works, it flattens files in $@ (i.e. given on the cmd # line) into the string $ARGV. So if we pass "$ARGV" then other # functions will see 1 file named "file1 file2" instead of "file1" # "file2". main $ARGV # Clean up. rm_tmpdir else [ $OPT_ERRS -gt 0 ] && EXIT_STATUS=1 fi exit $EXIT_STATUS fi # ############################################################################ # Documentation # ############################################################################ :<<'DOCUMENTATION' =pod =head1 NAME pt-ioprofile - Watch process IO and print a table of file and I/O activity. =head1 SYNOPSIS Usage: pt-ioprofile [OPTIONS] [FILE] pt-ioprofile does two things: 1) get lsof+strace for -s seconds, 2) aggregate the result. If you specify a FILE, then step 1) is not performed. =head1 RISKS B<WARNING>: pt-ioprofile freezes the server and may crash the process, or make it perform badly after detaching, or leave it in a sleeping state! Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups pt-ioprofile should be considered an intrusive tool, and should not be used on production servers unless you understand and accept the risks. =back =head1 DESCRIPTION pt-ioprofile uses C<strace> and C<lsof> to watch a process's IO and print out a table of files and I/O activity. By default, it watches the mysqld process for 30 seconds. The output is like: Tue Dec 27 15:33:57 PST 2011 Tracing process ID 1833 total read write lseek ftruncate filename 0.000150 0.000029 0.000068 0.000038 0.000015 /tmp/ibBE5opS You probably need to run this tool as root. pt-ioprofile works by attaching C<strace> to the process using C<ptrace()>, which will make it run very slowly until C<strace> detaches. In addition to freezing the server, there is some risk of the process crashing or performing badly after C<strace> detaches from it, or of C<strace> not detaching cleanly and leaving the process in a sleeping state. As a result, this should be considered an intrusive tool, and should not be used on production servers unless you are comfortable with that. =head1 OPTIONS =over =item --aggregate short form: -a; type: string; default: sum The aggregate function, either C<sum> or C<avg>. If sum, then each cell will contain the sum of the values in it. If avg, then each cell will contain the average of the values in it. =item --cell short form: -c; type: string; default: times The cell contents.
Valid values are: VALUE CELLS CONTAIN ===== ======================= count Count of I/O operations sizes Sizes of I/O operations times I/O operation timing =item --group-by short form: -g; type: string; default: filename The group-by item. Valid values are: VALUE GROUPING ===== ====================================== all Summarize into a single line of output filename One line of output per filename pid One line of output per process ID =item --help Print help and exit. =item --profile-pid short form: -p; type: int The PID to profile, overrides L<"--profile-process">. =item --profile-process short form: -b; type: string; default: mysqld The process name to profile. =item --run-time type: int; default: 30 How long to profile. =item --save-samples type: string Filename to save samples in; these can be used for later analysis. =item --version Print the tool's version and exit. =back =head1 ENVIRONMENT This tool does not use any environment variables. =head1 SYSTEM REQUIREMENTS This tool requires the Bourne shell (F). =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2010-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-ioprofile 3.1.0 =cut DOCUMENTATION percona-toolkit-3.1/bin/pt-kill000775 001750 001750 00000764134 13535723560 017737 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. 
use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit OptionParser Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo DSNParser Daemon Transformers TableParser Processlist TextResultSetParser MasterSlave Quoter QueryRewriter Retry Cxn HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. 
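# Each embedded package below is introduced by a banner like the one above; the
# BEGIN block near the top of this file has already claimed the module's %INC
# slot, so a later require of it resolves to the copy embedded here instead of
# searching @INC. A minimal sketch of that idea (the module name is
# illustrative, not part of the tool):
#
#    BEGIN { $INC{'Acme/Example.pm'} = __FILE__; }
#    require Acme::Example;   # now a no-op: %INC says the file is already loaded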
# ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
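         # For example (borrowed from pt-ioprofile's OPTIONS section earlier in
         # this archive, used here purely as an illustration): a POD item such as
         #
         #    =item --aggregate
         #    short form: -a; type: string; default: sum
         #
         # arrives in this loop as a spec hashref whose {spec} string was built by
         # _parse_attribs(), i.e. "aggregate|a=s", together with its description
         # (with "(default sum)" appended), its group, and its attributes.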
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . 
"::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! 
exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq . qq . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! 
defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? $o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
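# Usage sketch (added; not in the original lib/Daemon.pm): how a tool
# typically drives this class, assuming an OptionParser object $o that
# defines --daemonize, --log and --pid:
#
#   my $daemon = Daemon->new(o => $o);   # dies if the PID file belongs to a live process
#   if ( $o->get('daemonize') ) {
#      $daemon->daemonize();             # fork, setsid, chdir '/', redirect output to --log
#   }
#   elsif ( $o->get('pid') ) {
#      $daemon->make_PID_file();         # foreground run that still writes a PID file
#   }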
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # Transformers package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Transformers.pm # t/lib/Transformers.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Transformers; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timegm timelocal); use Digest::MD5 qw(md5_hex); use B qw(); BEGIN { require Exporter; our @ISA = qw(Exporter); our %EXPORT_TAGS = (); our @EXPORT = (); our @EXPORT_OK = qw( micro_t percentage_of secs_to_time time_to_secs shorten ts parse_timestamp unix_timestamp any_unix_timestamp make_checksum crc32 encode_json ); } our $mysql_ts = qr/(\d\d)(\d\d)(\d\d) +(\d+):(\d+):(\d+)(\.\d+)?/; our $proper_ts = qr/(\d\d\d\d)-(\d\d)-(\d\d)[T ](\d\d):(\d\d):(\d\d)(\.\d+)?/; our $n_ts = qr/(\d{1,5})([shmd]?)/; # Limit \d{1,5} because \d{6} looks sub micro_t { my ( $t, %args ) = @_; my $p_ms = defined $args{p_ms} ? $args{p_ms} : 0; # precision for ms vals my $p_s = defined $args{p_s} ? $args{p_s} : 0; # precision for s vals my $f; $t = 0 if $t < 0; $t = sprintf('%.17f', $t) if $t =~ /e/; $t =~ s/\.(\d{1,6})\d*/\.$1/; if ($t > 0 && $t <= 0.000999) { $f = ($t * 1000000) . 'us'; } elsif ($t >= 0.001000 && $t <= 0.999999) { $f = sprintf("%.${p_ms}f", $t * 1000); $f = ($f * 1) . 'ms'; # * 1 to remove insignificant zeros } elsif ($t >= 1) { $f = sprintf("%.${p_s}f", $t); $f = ($f * 1) . 's'; # * 1 to remove insignificant zeros } else { $f = 0; # $t should = 0 at this point } return $f; } sub percentage_of { my ( $is, $of, %args ) = @_; my $p = $args{p} || 0; # float precision my $fmt = $p ? 
"%.${p}f" : "%d"; return sprintf $fmt, ($is * 100) / ($of ||= 1); } sub secs_to_time { my ( $secs, $fmt ) = @_; $secs ||= 0; return '00:00' unless $secs; $fmt ||= $secs >= 86_400 ? 'd' : $secs >= 3_600 ? 'h' : 'm'; return $fmt eq 'd' ? sprintf( "%d+%02d:%02d:%02d", int($secs / 86_400), int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : $fmt eq 'h' ? sprintf( "%02d:%02d:%02d", int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : sprintf( "%02d:%02d", int(($secs % 3_600) / 60), $secs % 60); } sub time_to_secs { my ( $val, $default_suffix ) = @_; die "I need a val argument" unless defined $val; my $t = 0; my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; $suffix = $suffix || $default_suffix || 's'; if ( $suffix =~ m/[smhd]/ ) { $t = $suffix eq 's' ? $num * 1 # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $t *= -1 if $prefix && $prefix eq '-'; } else { die "Invalid suffix for $val: $suffix"; } return $t; } sub shorten { my ( $num, %args ) = @_; my $p = defined $args{p} ? $args{p} : 2; # float precision my $d = defined $args{d} ? $args{d} : 1_024; # divisor my $n = 0; my @units = ('', qw(k M G T P E Z Y)); while ( $num >= $d && $n < @units - 1 ) { $num /= $d; ++$n; } return sprintf( $num =~ m/\./ || $n ? '%1$.'.$p.'f%2$s' : '%1$d', $num, $units[$n]); } sub ts { my ( $time, $gmt ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = $gmt ? gmtime($time) : localtime($time); $mon += 1; $year += 1900; my $val = sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); if ( my ($us) = $time =~ m/(\.\d+)$/ ) { $us = sprintf("%.6f", $us); $us =~ s/^0\././; $val .= $us; } return $val; } sub parse_timestamp { my ( $val ) = @_; if ( my($y, $m, $d, $h, $i, $s, $f) = $val =~ m/^$mysql_ts$/ ) { return sprintf "%d-%02d-%02d %02d:%02d:" . (defined $f ? '%09.6f' : '%02d'), $y + 2000, $m, $d, $h, $i, (defined $f ? $s + $f : $s); } elsif ( $val =~ m/^$proper_ts$/ ) { return $val; } return $val; } sub unix_timestamp { my ( $val, $gmt ) = @_; if ( my($y, $m, $d, $h, $i, $s, $us) = $val =~ m/^$proper_ts$/ ) { $val = $gmt ? timegm($s, $i, $h, $d, $m - 1, $y) : timelocal($s, $i, $h, $d, $m - 1, $y); if ( defined $us ) { $us = sprintf('%.6f', $us); $us =~ s/^0\././; $val .= $us; } } return $val; } sub any_unix_timestamp { my ( $val, $callback ) = @_; if ( my ($n, $suffix) = $val =~ m/^$n_ts$/ ) { $n = $suffix eq 's' ? $n # Seconds : $suffix eq 'm' ? $n * 60 # Minutes : $suffix eq 'h' ? $n * 3600 # Hours : $suffix eq 'd' ? 
$n * 86400 # Days : $n; # default: Seconds PTDEBUG && _d('ts is now - N[shmd]:', $n); return time - $n; } elsif ( $val =~ m/^\d{9,}/ ) { PTDEBUG && _d('ts is already a unix timestamp'); return $val; } elsif ( my ($ymd, $hms) = $val =~ m/^(\d{6})(?:\s+(\d+:\d+:\d+))?/ ) { PTDEBUG && _d('ts is MySQL slow log timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp(parse_timestamp($val)); } elsif ( ($ymd, $hms) = $val =~ m/^(\d{4}-\d\d-\d\d)(?:[T ](\d+:\d+:\d+))?/) { PTDEBUG && _d('ts is properly formatted timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp($val); } else { PTDEBUG && _d('ts is MySQL expression'); return $callback->($val) if $callback && ref $callback eq 'CODE'; } PTDEBUG && _d('Unknown ts type:', $val); return; } sub make_checksum { my ( $val ) = @_; my $checksum = uc substr(md5_hex($val), -16); PTDEBUG && _d($checksum, 'checksum for', $val); return $checksum; } sub crc32 { my ( $string ) = @_; return unless $string; my $poly = 0xEDB88320; my $crc = 0xFFFFFFFF; foreach my $char ( split(//, $string) ) { my $comp = ($crc ^ ord($char)) & 0xFF; for ( 1 .. 8 ) { $comp = $comp & 1 ? $poly ^ ($comp >> 1) : $comp >> 1; } $crc = (($crc >> 8) & 0x00FFFFFF) ^ $comp; } return $crc ^ 0xFFFFFFFF; } my $got_json = eval { require JSON }; sub encode_json { return JSON::encode_json(@_) if $got_json; my ( $data ) = @_; return (object_to_json($data) || ''); } sub object_to_json { my ($obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return hash_to_json($obj); } elsif($type eq 'ARRAY'){ return array_to_json($obj); } else { return value_to_json($obj); } } sub hash_to_json { my ($obj) = @_; my @res; for my $k ( sort { $a cmp $b } keys %$obj ) { push @res, string_to_json( $k ) . ":" . ( object_to_json( $obj->{$k} ) || value_to_json( $obj->{$k} ) ); } return '{' . ( @res ? join( ",", @res ) : '' ) . '}'; } sub array_to_json { my ($obj) = @_; my @res; for my $v (@$obj) { push @res, object_to_json($v) || value_to_json($v); } return '[' . ( @res ? join( ",", @res ) : '' ) . ']'; } sub value_to_json { my ($value) = @_; return 'null' if(!defined $value); my $b_obj = B::svref_2object(\$value); # for round trip problem my $flags = $b_obj->FLAGS; return $value # as is if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? my $type = ref($value); if( !$type ) { return string_to_json($value); } else { return 'null'; } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g; $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; utf8::upgrade($arg); utf8::encode($arg); return '"' . $arg . '"'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Transformers package # ########################################################################### # ########################################################################### # TableParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TableParser.pm # t/lib/TableParser.t # See https://launchpad.net/percona-toolkit for more information. 
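# Illustration (added): sample return values for the Transformers helpers
# above, as computed by the code as written:
#
#   micro_t(0.000300)          # "300us"
#   micro_t(0.0123)            # "12ms"  (p_ms defaults to 0 decimal places)
#   secs_to_time(3661)         # "01:01:01"
#   shorten(1048576)           # "1.00M" (divisor defaults to 1024)
#   ts(0, 1)                   # "1970-01-01T00:00:00" (GMT)
#   any_unix_timestamp('10m')  # ten minutes ago, as a unix timestamp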
# ########################################################################### { package TableParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 
1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . $where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . 
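# Illustration (added; not in the original lib/TableParser.pm): selected
# fields of the structure parse() above returns for a simple InnoDB table,
# e.g. CREATE TABLE `t` (`id` int NOT NULL AUTO_INCREMENT, `c` varchar(64)
# DEFAULT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB:
#
#   my $tp  = TableParser->new(Quoter => Quoter->new());
#   my $tbl = $tp->parse($tp->get_create_table($dbh, 'test', 't'));
#   # $tbl->{name}          => 't'
#   # $tbl->{cols}          => [ 'id', 'c' ]
#   # $tbl->{is_autoinc}    => { id => 1, c => 0 }
#   # $tbl->{is_nullable}   => { c => 1 }
#   # $tbl->{engine}        => 'InnoDB'
#   # $tbl->{clustered_key} => 'PRIMARY'
#   # $tbl->{keys}{PRIMARY}{cols} => [ 'id' ]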
$q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . 
$q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # Processlist package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Processlist.pm # t/lib/Processlist.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Processlist; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Time::HiRes qw(time usleep); use List::Util qw(max); use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; use constant PTDEBUG => $ENV{PTDEBUG} || 0; use constant { ID => 0, USER => 1, HOST => 2, DB => 3, COMMAND => 4, TIME => 5, STATE => 6, INFO => 7, START => 8, # Calculated start time of statement ($start - TIME) ETIME => 9, # Exec time of SHOW PROCESSLIST (margin of error in START) FSEEN => 10, # First time ever seen PROFILE => 11, # Profile of individual STATE times }; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(MasterSlave) ) { die "I need a $arg argument" unless $args{$arg}; } my $kill_busy_commands = {}; if ($args{kill_busy_commands}) { for my $command (split /,/,$args{kill_busy_commands}) { $command =~ s/^\s+|\s+$//g; $kill_busy_commands->{$command} = 1; } } else { $kill_busy_commands->{Query} = 1; } $args{kill_busy_commands} = $kill_busy_commands; my $self = { %args, polls => 0, last_poll => 0, active_cxn => {}, # keyed off ID event_cache => [], _reasons_for_matching => {}, }; return bless $self, $class; } sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(code); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($code) = @args{@required_args}; if ( @{$self->{event_cache}} ) { PTDEBUG && _d("Returning cached event"); return shift @{$self->{event_cache}}; } if ( $self->{interval} && $self->{polls} ) { PTDEBUG && _d("Sleeping between polls"); usleep($self->{interval}); } PTDEBUG && _d("Polling PROCESSLIST"); my ($time, $etime) = @args{qw(time etime)}; my $start = $etime ? 
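# Usage sketch (added; not in the original lib/Processlist.pm): parse_event()
# is normally polled in a loop with a callback that returns the raw
# SHOW FULL PROCESSLIST rows:
#
#   my $pl = Processlist->new(MasterSlave => $ms, interval => 100_000); # usec between polls
#   while ( 1 ) {
#      my $event = $pl->parse_event(
#         code => sub { return $dbh->selectall_arrayref('SHOW FULL PROCESSLIST'); },
#      );
#      next unless $event;   # events appear when a query ends or is replaced
#      # $event->{arg} is the SQL text, $event->{Query_time} its observed run time
#   }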
0 : time; # don't need start if etime given my $rows = $code->(); if ( !$rows ) { warn "Processlist callback did not return an arrayref"; return; } $time = time unless $time; $etime = $time - $start unless $etime; $self->{polls}++; PTDEBUG && _d('Rows:', ($rows ? scalar @$rows : 0), 'in', $etime, 'seconds'); my $active_cxn = $self->{active_cxn}; my $curr_cxn = {}; my @new_cxn = (); CURRENTLY_ACTIVE_CXN: foreach my $curr ( @$rows ) { $curr_cxn->{$curr->[ID]} = $curr; my $query_start = $time - ($curr->[TIME] || 0); if ( $active_cxn->{$curr->[ID]} ) { PTDEBUG && _d('Checking existing cxn', $curr->[ID]); my $prev = $active_cxn->{$curr->[ID]}; # previous state of cxn my $new_query = 0; my $fudge = ($curr->[TIME] || 0) =~ m/\D/ ? 0.001 : 1; # micro-t? if ( $prev->[INFO] ) { if ( !$curr->[INFO] || $prev->[INFO] ne $curr->[INFO] ) { PTDEBUG && _d('Info is different; new query'); $new_query = 1; } elsif ( defined $curr->[TIME] && $curr->[TIME] < $prev->[TIME] ) { PTDEBUG && _d('Time is less than previous; new query'); $new_query = 1; } elsif ( $curr->[INFO] && defined $curr->[TIME] && $query_start - $etime - $prev->[START] > $fudge) { my $ms = $self->{MasterSlave}; my $is_repl_thread = $ms->is_replication_thread({ Command => $curr->[COMMAND], User => $curr->[USER], State => $curr->[STATE], Id => $curr->[ID]}); if ( $is_repl_thread ) { PTDEBUG && _d(q{Query has restarted but it's a replication thread, ignoring}); } else { PTDEBUG && _d('Query restarted; new query', $query_start, $etime, $prev->[START], $fudge); $new_query = 1; } } if ( $new_query ) { $self->_update_profile($prev, $curr, $time); push @{$self->{event_cache}}, $self->make_event($prev, $time); } } if ( $curr->[INFO] ) { if ( $prev->[INFO] && !$new_query ) { PTDEBUG && _d("Query on cxn", $curr->[ID], "hasn't changed"); $self->_update_profile($prev, $curr, $time); } else { PTDEBUG && _d('Saving new query, state', $curr->[STATE]); push @new_cxn, [ @{$curr}[0..7], # proc info int($query_start), # START $etime, # ETIME $time, # FSEEN { ($curr->[STATE] || "") => 0 }, # PROFILE ]; } } } else { PTDEBUG && _d('New cxn', $curr->[ID]); if ( $curr->[INFO] && defined $curr->[TIME] ) { PTDEBUG && _d('Saving query of new cxn, state', $curr->[STATE]); push @new_cxn, [ @{$curr}[0..7], # proc info int($query_start), # START $etime, # ETIME $time, # FSEEN { ($curr->[STATE] || "") => 0 }, # PROFILE ]; } } } # CURRENTLY_ACTIVE_CXN PREVIOUSLY_ACTIVE_CXN: foreach my $prev ( values %$active_cxn ) { if ( !$curr_cxn->{$prev->[ID]} ) { PTDEBUG && _d('cxn', $prev->[ID], 'ended'); push @{$self->{event_cache}}, $self->make_event($prev, $time); delete $active_cxn->{$prev->[ID]}; } elsif ( ($curr_cxn->{$prev->[ID]}->[COMMAND] || "") eq 'Sleep' || !$curr_cxn->{$prev->[ID]}->[STATE] || !$curr_cxn->{$prev->[ID]}->[INFO] ) { PTDEBUG && _d('cxn', $prev->[ID], 'became idle'); delete $active_cxn->{$prev->[ID]}; } } map { $active_cxn->{$_->[ID]} = $_; } @new_cxn; $self->{last_poll} = $time; my $event = shift @{$self->{event_cache}}; PTDEBUG && _d(scalar @{$self->{event_cache}}, "events in cache"); return $event; } sub make_event { my ( $self, $row, $time ) = @_; my $observed_time = $time - $row->[FSEEN]; my $Query_time = max($row->[TIME], $observed_time); my $event = { id => $row->[ID], db => $row->[DB], user => $row->[USER], host => $row->[HOST], arg => $row->[INFO], bytes => length($row->[INFO]), ts => Transformers::ts($row->[START] + $row->[TIME]), # Query END time Query_time => $Query_time, Lock_time => $row->[PROFILE]->{Locked} || 0, }; PTDEBUG && _d('Properties of 
event:', Dumper($event)); return $event; } sub _get_active_cxn { my ( $self ) = @_; PTDEBUG && _d("Active cxn:", Dumper($self->{active_cxn})); return $self->{active_cxn}; } sub _update_profile { my ( $self, $prev, $curr, $time ) = @_; return unless $prev && $curr; my $time_elapsed = $time - $self->{last_poll}; if ( ($prev->[STATE] || "") eq ($curr->[STATE] || "") ) { PTDEBUG && _d("Query is still in", $curr->[STATE], "state"); $prev->[PROFILE]->{$prev->[STATE] || ""} += $time_elapsed; } else { PTDEBUG && _d("Query changed from state", $prev->[STATE], "to", $curr->[STATE]); my $half_time = ($time_elapsed || 0) / 2; $prev->[PROFILE]->{$prev->[STATE] || ""} += $half_time; $prev->[STATE] = $curr->[STATE]; $prev->[PROFILE]->{$curr->[STATE] || ""} = $half_time; } return; } sub find { my ( $self, $proclist, %find_spec ) = @_; PTDEBUG && _d('find specs:', Dumper(\%find_spec)); my $ms = $self->{MasterSlave}; my @matches; $self->{_reasons_for_matching} = undef; QUERY: foreach my $query ( @$proclist ) { PTDEBUG && _d('Checking query', Dumper($query)); my $matched = 0; if ( !$find_spec{replication_threads} && $ms->is_replication_thread($query) ) { PTDEBUG && _d('Skipping replication thread'); next QUERY; } if ( $find_spec{busy_time} && exists($self->{kill_busy_commands}->{$query->{Command} || ''}) ) { next QUERY unless defined($query->{Time}); if ( $query->{Time} < $find_spec{busy_time} ) { PTDEBUG && _d("Query isn't running long enough"); next QUERY; } my $reason = 'Exceeds busy time'; PTDEBUG && _d($reason); push @{$self->{_reasons_for_matching}->{$query} ||= []}, $reason; $matched++; } if ( $find_spec{idle_time} && ($query->{Command} || '') eq 'Sleep' ) { next QUERY unless defined($query->{Time}); if ( $query->{Time} < $find_spec{idle_time} ) { PTDEBUG && _d("Query isn't idle long enough"); next QUERY; } my $reason = 'Exceeds idle time'; PTDEBUG && _d($reason); push @{$self->{_reasons_for_matching}->{$query} ||= []}, $reason; $matched++; } PROPERTY: foreach my $property ( qw(Id User Host db State Command Info) ) { my $filter = "_find_match_$property"; if ( defined $find_spec{ignore}->{$property} && $self->$filter($query, $find_spec{ignore}->{$property}) ) { PTDEBUG && _d('Query matches ignore', $property, 'spec'); next QUERY; } if ( defined $find_spec{match}->{$property} ) { if ( !$self->$filter($query, $find_spec{match}->{$property}) ) { PTDEBUG && _d('Query does not match', $property, 'spec'); next QUERY; } my $reason = 'Query matches ' . $property . 
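# Illustration (added): the %find_spec hash that find() receives, with
# hypothetical values; $proclist is an arrayref of SHOW FULL PROCESSLIST rows
# as hashrefs (keys Id, User, Host, db, Command, Time, State, Info):
#
#   my @matches = $pl->find($proclist,
#      busy_time           => 60,      # Command in kill_busy_commands and Time >= 60
#      idle_time           => 0,
#      replication_threads => 0,       # 0 = skip replication threads
#      ignore => { User => 'system user' },
#      match  => { Info => '^SELECT' },
#   );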
' spec'; PTDEBUG && _d($reason); push @{$self->{_reasons_for_matching}->{$query} ||= []}, $reason; $matched++; } } if ( $matched || $find_spec{all} ) { PTDEBUG && _d("Query matched one or more specs, adding"); push @matches, $query; next QUERY; } PTDEBUG && _d('Query does not match any specs, ignoring'); } # QUERY return @matches; } sub _find_match_Id { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Id} && $query->{Id} == $property; } sub _find_match_User { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{User} && $query->{User} =~ m/$property/; } sub _find_match_Host { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Host} && $query->{Host} =~ m/$property/; } sub _find_match_db { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{db} && $query->{db} =~ m/$property/; } sub _find_match_State { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{State} && $query->{State} =~ m/$property/; } sub _find_match_Command { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Command} && $query->{Command} =~ m/$property/; } sub _find_match_Info { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Info} && $query->{Info} =~ m/$property/; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Processlist package # ########################################################################### # ########################################################################### # TextResultSetParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TextResultSetParser.pm # t/lib/TextResultSetParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package TextResultSetParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my %value_for = ( 'NULL' => undef, # DBI::selectall_arrayref() does this ($args{value_for} ? 
%{$args{value_for}} : ()), ); my $self = { %args, value_for => \%value_for, }; return bless $self, $class; } sub _parse_tabular { my ( $text, @cols ) = @_; my %row; my @vals = $text =~ m/\| +([^\|]*?)(?= +\|)/msg; return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub _parse_tab_sep { my ( $text, @cols ) = @_; my %row; my @vals = split(/\t/, $text); return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub parse_vertical_row { my ( $self, $text ) = @_; my %row = $text =~ m/^\s*(\w+):(?: ([^\n]*))?/msg; if ( $self->{NAME_lc} ) { my %lc_row = map { my $key = lc $_; $key => $row{$_}; } keys %row; return \%lc_row; } else { return \%row; } } sub parse { my ( $self, $text ) = @_; my $result_set; if ( $text =~ m/^\+---/m ) { # standard "tabular" output PTDEBUG && _d('Result set text is standard tabular'); my $line_pattern = qr/^(\| .*)[\r\n]+/m; $result_set = $self->parse_horizontal_row($text, $line_pattern, \&_parse_tabular); } elsif ( $text =~ m/^\w+\t\w+/m ) { # tab-separated PTDEBUG && _d('Result set text is tab-separated'); my $line_pattern = qr/^(.*?\t.*)[\r\n]+/m; $result_set = $self->parse_horizontal_row($text, $line_pattern, \&_parse_tab_sep); } elsif ( $text =~ m/\*\*\* \d+\. row/ ) { # "vertical" output PTDEBUG && _d('Result set text is vertical (\G)'); foreach my $row ( split_vertical_rows($text) ) { push @$result_set, $self->parse_vertical_row($row); } } else { my $text_sample = substr $text, 0, 300; my $remaining = length $text > 300 ? (length $text) - 300 : 0; chomp $text_sample; die "Cannot determine if text is tabular, tab-separated or vertical:\n" . "$text_sample\n" . ($remaining ? "(not showing last $remaining bytes of text)\n" : ""); } if ( $self->{value_for} ) { foreach my $result_set ( @$result_set ) { foreach my $key ( keys %$result_set ) { next unless defined $result_set->{$key}; $result_set->{$key} = $self->{value_for}->{ $result_set->{$key} } if exists $self->{value_for}->{ $result_set->{$key} }; } } } return $result_set; } sub parse_horizontal_row { my ( $self, $text, $line_pattern, $sub ) = @_; my @result_sets = (); my @cols = (); foreach my $line ( $text =~ m/$line_pattern/g ) { my ( $row, $cols ) = $sub->($line, @cols); if ( $row ) { push @result_sets, $row; } else { @cols = map { $self->{NAME_lc} ? lc $_ : $_ } @$cols; } } return \@result_sets; } sub split_vertical_rows { my ( $text ) = @_; my $ROW_HEADER = '\*{3,} \d+\. row \*{3,}'; my @rows = $text =~ m/($ROW_HEADER.*?)(?=$ROW_HEADER|\z)/omgs; return @rows; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TextResultSetParser package # ########################################################################### # ########################################################################### # MasterSlave package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MasterSlave.pm # t/lib/MasterSlave.t # See https://launchpad.net/percona-toolkit for more information. 
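# Illustration (added; not in the original lib/MasterSlave.pm): the recursion
# methods handled by get_slaves() below, and a sketch of a typical call
# (make_cxn is whatever connection factory the tool uses; Cxn is hypothetical):
#
#   processlist - discover slaves from the master's SHOW PROCESSLIST (Binlog Dump threads)
#   hosts       - discover slaves from SHOW SLAVE HOSTS
#   dsn=...     - read slave DSNs from a DSN table
#   none        - do not discover slaves
#
#   my $ms     = MasterSlave->new(OptionParser => $o, DSNParser => $dp, Quoter => $q);
#   my $slaves = $ms->get_slaves(
#      dbh      => $master_dbh,
#      dsn      => $master_dsn,
#      make_cxn => sub { return Cxn->new(@_, DSNParser => $dp, OptionParser => $o) },
#   );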
# ########################################################################### { package MasterSlave; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub check_recursion_method { my ($methods) = @_; if ( @$methods != 1 ) { if ( grep({ !m/processlist|hosts/i } @$methods) && $methods->[0] !~ /^dsn=/i ) { die "Invalid combination of recursion methods: " . join(", ", map { defined($_) ? $_ : 'undef' } @$methods) . ". " . "Only hosts and processlist may be combined.\n" } } else { my ($method) = @$methods; die "Invalid recursion method: " . ( $method || 'undef' ) unless $method && $method =~ m/^(?:processlist$|hosts$|none$|cluster$|dsn=)/i; } } sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser DSNParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, replication_thread => {}, }; return bless $self, $class; } sub get_slaves { my ($self, %args) = @_; my @required_args = qw(make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($make_cxn) = @args{@required_args}; my $slaves = []; my $dp = $self->{DSNParser}; my $methods = $self->_resolve_recursion_methods($args{dsn}); return $slaves unless @$methods; if ( grep { m/processlist|hosts/i } @$methods ) { my @required_args = qw(dbh dsn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $dsn) = @args{@required_args}; my $o = $self->{OptionParser}; $self->recurse_to_slaves( { dbh => $dbh, dsn => $dsn, slave_user => $o->got('slave-user') ? $o->get('slave-user') : '', slave_password => $o->got('slave-password') ? $o->get('slave-password') : '', callback => sub { my ( $dsn, $dbh, $level, $parent ) = @_; return unless $level; PTDEBUG && _d('Found slave:', $dp->as_string($dsn)); my $slave_dsn = $dsn; if ($o->got('slave-user')) { $slave_dsn->{u} = $o->get('slave-user'); PTDEBUG && _d("Using slave user ".$o->get('slave-user')." 
on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($o->got('slave-password')) { $slave_dsn->{p} = $o->get('slave-password'); PTDEBUG && _d("Slave password set"); } push @$slaves, $make_cxn->(dsn => $slave_dsn, dbh => $dbh); return; }, } ); } elsif ( $methods->[0] =~ m/^dsn=/i ) { (my $dsn_table_dsn = join ",", @$methods) =~ s/^dsn=//i; $slaves = $self->get_cxn_from_dsn_table( %args, dsn_table_dsn => $dsn_table_dsn, ); } elsif ( $methods->[0] =~ m/none/i ) { PTDEBUG && _d('Not getting to slaves'); } else { die "Unexpected recursion methods: @$methods"; } return $slaves; } sub _resolve_recursion_methods { my ($self, $dsn) = @_; my $o = $self->{OptionParser}; if ( $o->got('recursion-method') ) { return $o->get('recursion-method'); } elsif ( $dsn && ($dsn->{P} || 3306) != 3306 ) { PTDEBUG && _d('Port number is non-standard; using only hosts method'); return [qw(hosts)]; } else { return $o->get('recursion-method'); } } sub recurse_to_slaves { my ( $self, $args, $level ) = @_; $level ||= 0; my $dp = $self->{DSNParser}; my $recurse = $args->{recurse} || $self->{OptionParser}->get('recurse'); my $dsn = $args->{dsn}; my $slave_user = $args->{slave_user} || ''; my $slave_password = $args->{slave_password} || ''; my $methods = $self->_resolve_recursion_methods($dsn); PTDEBUG && _d('Recursion methods:', @$methods); if ( lc($methods->[0]) eq 'none' ) { PTDEBUG && _d('Not recursing to slaves'); return; } my $slave_dsn = $dsn; if ($slave_user) { $slave_dsn->{u} = $slave_user; PTDEBUG && _d("Using slave user $slave_user on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($slave_password) { $slave_dsn->{p} = $slave_password; PTDEBUG && _d("Slave password set"); } my $dbh; eval { $dbh = $args->{dbh} || $dp->get_dbh( $dp->get_cxn_params($slave_dsn), { AutoCommit => 1 }); PTDEBUG && _d('Connected to', $dp->as_string($slave_dsn)); }; if ( $EVAL_ERROR ) { print STDERR "Cannot connect to ", $dp->as_string($slave_dsn), "\n" or die "Cannot print: $OS_ERROR"; return; } my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); my ($id) = $dbh->selectrow_array($sql); PTDEBUG && _d('Working on server ID', $id); my $master_thinks_i_am = $dsn->{server_id}; if ( !defined $id || ( defined $master_thinks_i_am && $master_thinks_i_am != $id ) || $args->{server_ids_seen}->{$id}++ ) { PTDEBUG && _d('Server ID seen, or not what master said'); if ( $args->{skip_callback} ) { $args->{skip_callback}->($dsn, $dbh, $level, $args->{parent}); } return; } $args->{callback}->($dsn, $dbh, $level, $args->{parent}); if ( !defined $recurse || $level < $recurse ) { my @slaves = grep { !$_->{master_id} || $_->{master_id} == $id } # Only my slaves. 
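# Note (added): recursion sketch - each discovered slave is passed to
# $args->{callback}->($dsn, $dbh, $level, $parent); recursion continues while
# $level < $recurse (the --recurse option; unlimited when it is not set), and
# loops are avoided by recording each server's @@SERVER_ID in
# $args->{server_ids_seen}.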
$self->find_slave_hosts($dp, $dbh, $dsn, $methods); foreach my $slave ( @slaves ) { PTDEBUG && _d('Recursing from', $dp->as_string($dsn), 'to', $dp->as_string($slave)); $self->recurse_to_slaves( { %$args, dsn => $slave, dbh => undef, parent => $dsn, slave_user => $slave_user, $slave_password => $slave_password }, $level + 1 ); } } } sub find_slave_hosts { my ( $self, $dsn_parser, $dbh, $dsn, $methods ) = @_; PTDEBUG && _d('Looking for slaves on', $dsn_parser->as_string($dsn), 'using methods', @$methods); my @slaves; METHOD: foreach my $method ( @$methods ) { my $find_slaves = "_find_slaves_by_$method"; PTDEBUG && _d('Finding slaves with', $find_slaves); @slaves = $self->$find_slaves($dsn_parser, $dbh, $dsn); last METHOD if @slaves; } PTDEBUG && _d('Found', scalar(@slaves), 'slaves'); return @slaves; } sub _find_slaves_by_processlist { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves = map { my $slave = $dsn_parser->parse("h=$_", $dsn); $slave->{source} = 'processlist'; $slave; } grep { $_ } map { my ( $host ) = $_->{host} =~ m/^([^:]+):/; $host ||= $_->{host}; if ( $host eq 'localhost' ) { $host = '127.0.0.1'; # Replication never uses sockets. } $host; } $self->get_connected_slaves($dbh); return @slaves; } sub _find_slaves_by_hosts { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves; my $sql = 'SHOW SLAVE HOSTS'; PTDEBUG && _d($dbh, $sql); @slaves = @{$dbh->selectall_arrayref($sql, { Slice => {} })}; if ( @slaves ) { PTDEBUG && _d('Found some SHOW SLAVE HOSTS info'); @slaves = map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; my $spec = "h=$hash{host},P=$hash{port}" . ( $hash{user} ? ",u=$hash{user}" : '') . ( $hash{password} ? ",p=$hash{password}" : ''); my $dsn = $dsn_parser->parse($spec, $dsn); $dsn->{server_id} = $hash{server_id}; $dsn->{master_id} = $hash{master_id}; $dsn->{source} = 'hosts'; $dsn; } @slaves; } return @slaves; } sub get_connected_slaves { my ( $self, $dbh ) = @_; my $show = "SHOW GRANTS FOR "; my $user = 'CURRENT_USER()'; my $sql = $show . $user; PTDEBUG && _d($dbh, $sql); my $proc; eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; if ( $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/no such grant defined for user/ ) { PTDEBUG && _d('Retrying SHOW GRANTS without host; error:', $EVAL_ERROR); ($user) = split('@', $user); $sql = $show . $user; PTDEBUG && _d($sql); eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; } die "Failed to $sql: $EVAL_ERROR" if $EVAL_ERROR; } if ( !$proc ) { die "You do not have the PROCESS privilege"; } $sql = 'SHOW FULL PROCESSLIST'; PTDEBUG && _d($dbh, $sql); grep { $_->{command} =~ m/Binlog Dump/i } map { # Lowercase the column names my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{$dbh->selectall_arrayref($sql, { Slice => {} })}; } sub is_master_of { my ( $self, $master, $slave ) = @_; my $master_status = $self->get_master_status($master) or die "The server specified as a master is not a master"; my $slave_status = $self->get_slave_status($slave) or die "The server specified as a slave is not a slave"; my @connected = $self->get_connected_slaves($master) or die "The server specified as a master has no connected slaves"; my (undef, $port) = $master->selectrow_array("SHOW VARIABLES LIKE 'port'"); if ( $port != $slave_status->{master_port} ) { die "The slave is connected to $slave_status->{master_port} " . 
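# Note (added): the two discovery methods above have different requirements:
# _find_slaves_by_processlist() needs the PROCESS privilege on the master
# (enforced in get_connected_slaves()), while _find_slaves_by_hosts() relies
# on SHOW SLAVE HOSTS and therefore only sees slaves that report themselves
# to the master (report_host, and optionally report_port).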
"but the master's port is $port"; } if ( !grep { $slave_status->{master_user} eq $_->{user} } @connected ) { die "I don't see any slave I/O thread connected with user " . $slave_status->{master_user}; } if ( ($slave_status->{slave_io_state} || '') eq 'Waiting for master to send event' ) { my ( $master_log_name, $master_log_num ) = $master_status->{file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; my ( $slave_log_name, $slave_log_num ) = $slave_status->{master_log_file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; if ( $master_log_name ne $slave_log_name || abs($master_log_num - $slave_log_num) > 1 ) { die "The slave thinks it is reading from " . "$slave_status->{master_log_file}, but the " . "master is writing to $master_status->{file}"; } } return 1; } sub get_master_dsn { my ( $self, $dbh, $dsn, $dsn_parser ) = @_; my $master = $self->get_slave_status($dbh) or return undef; my $spec = "h=$master->{master_host},P=$master->{master_port}"; return $dsn_parser->parse($spec, $dsn); } sub get_slave_status { my ( $self, $dbh ) = @_; if ( !$self->{not_a_slave}->{$dbh} ) { my $sth = $self->{sths}->{$dbh}->{SLAVE_STATUS} ||= $dbh->prepare('SHOW SLAVE STATUS'); PTDEBUG && _d($dbh, 'SHOW SLAVE STATUS'); $sth->execute(); my ($sss_rows) = $sth->fetchall_arrayref({}); # Show Slave Status rows my $ss; if ( $sss_rows && @$sss_rows ) { if (scalar @$sss_rows > 1) { if (!$self->{channel}) { die 'This server returned more than one row for SHOW SLAVE STATUS but "channel" was not specified on the command line'; } for my $row (@$sss_rows) { $row = { map { lc($_) => $row->{$_} } keys %$row }; # lowercase the keys if ($row->{channel_name} eq $self->{channel}) { $ss = $row; last; } } } else { if ($sss_rows->[0]->{channel_name} && $sss_rows->[0]->{channel_name} ne $self->{channel}) { die 'This server is using replication channels but "channel" was not specified on the command line'; } else { $ss = $sss_rows->[0]; } } if ( $ss && %$ss ) { $ss = { map { lc($_) => $ss->{$_} } keys %$ss }; # lowercase the keys return $ss; } } PTDEBUG && _d('This server returns nothing for SHOW SLAVE STATUS'); $self->{not_a_slave}->{$dbh}++; } } sub get_master_status { my ( $self, $dbh ) = @_; if ( $self->{not_a_master}->{$dbh} ) { PTDEBUG && _d('Server on dbh', $dbh, 'is not a master'); return; } my $sth = $self->{sths}->{$dbh}->{MASTER_STATUS} ||= $dbh->prepare('SHOW MASTER STATUS'); PTDEBUG && _d($dbh, 'SHOW MASTER STATUS'); $sth->execute(); my ($ms) = @{$sth->fetchall_arrayref({})}; PTDEBUG && _d( $ms ? map { "$_=" . (defined $ms->{$_} ? $ms->{$_} : '') } keys %$ms : ''); if ( !$ms || scalar keys %$ms < 2 ) { PTDEBUG && _d('Server on dbh', $dbh, 'does not seem to be a master'); $self->{not_a_master}->{$dbh}++; } return { map { lc($_) => $ms->{$_} } keys %$ms }; # lowercase the keys } sub wait_for_master { my ( $self, %args ) = @_; my @required_args = qw(master_status slave_dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($master_status, $slave_dbh) = @args{@required_args}; my $timeout = $args{timeout} || 60; my $result; my $waited; if ( $master_status ) { my $slave_status; eval { $slave_status = $self->get_slave_status($slave_dbh); }; if ($EVAL_ERROR) { return { result => undef, waited => 0, error =>'Wait for master: this is a multi-master slave but "channel" was not specified on the command line', }; } my $server_version = VersionParser->new($slave_dbh); my $channel_sql = $server_version > '5.6' && $self->{channel} ? 
", '$self->{channel}'" : ''; my $sql = "SELECT MASTER_POS_WAIT('$master_status->{file}', $master_status->{position}, $timeout $channel_sql)"; PTDEBUG && _d($slave_dbh, $sql); my $start = time; ($result) = $slave_dbh->selectrow_array($sql); $waited = time - $start; PTDEBUG && _d('Result of waiting:', $result); PTDEBUG && _d("Waited", $waited, "seconds"); } else { PTDEBUG && _d('Not waiting: this server is not a master'); } return { result => $result, waited => $waited, }; } sub stop_slave { my ( $self, $dbh ) = @_; my $sth = $self->{sths}->{$dbh}->{STOP_SLAVE} ||= $dbh->prepare('STOP SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } sub start_slave { my ( $self, $dbh, $pos ) = @_; if ( $pos ) { my $sql = "START SLAVE UNTIL MASTER_LOG_FILE='$pos->{file}', " . "MASTER_LOG_POS=$pos->{position}"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } else { my $sth = $self->{sths}->{$dbh}->{START_SLAVE} ||= $dbh->prepare('START SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } } sub catchup_to_master { my ( $self, $slave, $master, $timeout ) = @_; $self->stop_slave($master); $self->stop_slave($slave); my $slave_status = $self->get_slave_status($slave); my $slave_pos = $self->repl_posn($slave_status); my $master_status = $self->get_master_status($master); my $master_pos = $self->repl_posn($master_status); PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); my $result; if ( $self->pos_cmp($slave_pos, $master_pos) < 0 ) { PTDEBUG && _d('Waiting for slave to catch up to master'); $self->start_slave($slave, $master_pos); $result = $self->wait_for_master( master_status => $master_status, slave_dbh => $slave, timeout => $timeout, master_status => $master_status ); if ($result->{error}) { die $result->{error}; } if ( !defined $result->{result} ) { $slave_status = $self->get_slave_status($slave); if ( !$self->slave_is_running($slave_status) ) { PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); $slave_pos = $self->repl_posn($slave_status); if ( $self->pos_cmp($slave_pos, $master_pos) != 0 ) { die "MASTER_POS_WAIT() returned NULL but slave has not " . 
"caught up to master"; } PTDEBUG && _d('Slave is caught up to master and stopped'); } else { die "Slave has not caught up to master and it is still running"; } } } else { PTDEBUG && _d("Slave is already caught up to master"); } return $result; } sub catchup_to_same_pos { my ( $self, $s1_dbh, $s2_dbh ) = @_; $self->stop_slave($s1_dbh); $self->stop_slave($s2_dbh); my $s1_status = $self->get_slave_status($s1_dbh); my $s2_status = $self->get_slave_status($s2_dbh); my $s1_pos = $self->repl_posn($s1_status); my $s2_pos = $self->repl_posn($s2_status); if ( $self->pos_cmp($s1_pos, $s2_pos) < 0 ) { $self->start_slave($s1_dbh, $s2_pos); } elsif ( $self->pos_cmp($s2_pos, $s1_pos) < 0 ) { $self->start_slave($s2_dbh, $s1_pos); } $s1_status = $self->get_slave_status($s1_dbh); $s2_status = $self->get_slave_status($s2_dbh); $s1_pos = $self->repl_posn($s1_status); $s2_pos = $self->repl_posn($s2_status); if ( $self->slave_is_running($s1_status) || $self->slave_is_running($s2_status) || $self->pos_cmp($s1_pos, $s2_pos) != 0) { die "The servers aren't both stopped at the same position"; } } sub slave_is_running { my ( $self, $slave_status ) = @_; return ($slave_status->{slave_sql_running} || 'No') eq 'Yes'; } sub has_slave_updates { my ( $self, $dbh ) = @_; my $sql = q{SHOW VARIABLES LIKE 'log_slave_updates'}; PTDEBUG && _d($dbh, $sql); my ($name, $value) = $dbh->selectrow_array($sql); return $value && $value =~ m/^(1|ON)$/; } sub repl_posn { my ( $self, $status ) = @_; if ( exists $status->{file} && exists $status->{position} ) { return { file => $status->{file}, position => $status->{position}, }; } else { return { file => $status->{relay_master_log_file}, position => $status->{exec_master_log_pos}, }; } } sub get_slave_lag { my ( $self, $dbh ) = @_; my $stat = $self->get_slave_status($dbh); return unless $stat; # server is not a slave return $stat->{seconds_behind_master}; } sub pos_cmp { my ( $self, $a, $b ) = @_; return $self->pos_to_string($a) cmp $self->pos_to_string($b); } sub short_host { my ( $self, $dsn ) = @_; my ($host, $port); if ( $dsn->{master_host} ) { $host = $dsn->{master_host}; $port = $dsn->{master_port}; } else { $host = $dsn->{h}; $port = $dsn->{P}; } return ($host || '[default]') . ( ($port || 3306) == 3306 ? '' : ":$port" ); } sub is_replication_thread { my ( $self, $query, %args ) = @_; return unless $query; my $type = lc($args{type} || 'all'); die "Invalid type: $type" unless $type =~ m/^binlog_dump|slave_io|slave_sql|all$/i; my $match = 0; if ( $type =~ m/binlog_dump|all/i ) { $match = 1 if ($query->{Command} || $query->{command} || '') eq "Binlog Dump"; } if ( !$match ) { if ( ($query->{User} || $query->{user} || '') eq "system user" ) { PTDEBUG && _d("Slave replication thread"); if ( $type ne 'all' ) { my $state = $query->{State} || $query->{state} || ''; if ( $state =~ m/^init|end$/ ) { PTDEBUG && _d("Special state:", $state); $match = 1; } else { my ($slave_sql) = $state =~ m/ ^(Waiting\sfor\sthe\snext\sevent |Reading\sevent\sfrom\sthe\srelay\slog |Has\sread\sall\srelay\slog;\swaiting |Making\stemp\sfile |Waiting\sfor\sslave\smutex\son\sexit)/xi; $match = $type eq 'slave_sql' && $slave_sql ? 1 : $type eq 'slave_io' && !$slave_sql ? 
1 : 0; } } else { $match = 1; } } else { PTDEBUG && _d('Not system user'); } if ( !defined $args{check_known_ids} || $args{check_known_ids} ) { my $id = $query->{Id} || $query->{id}; if ( $match ) { $self->{replication_thread}->{$id} = 1; } else { if ( $self->{replication_thread}->{$id} ) { PTDEBUG && _d("Thread ID is a known replication thread ID"); $match = 1; } } } } PTDEBUG && _d('Matches', $type, 'replication thread:', ($match ? 'yes' : 'no'), '; match:', $match); return $match; } sub get_replication_filters { my ( $self, %args ) = @_; my @required_args = qw(dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh) = @args{@required_args}; my %filters = (); my $status = $self->get_master_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( binlog_do_db binlog_ignore_db ); } $status = $self->get_slave_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( replicate_do_db replicate_ignore_db replicate_do_table replicate_ignore_table replicate_wild_do_table replicate_wild_ignore_table ); my $sql = "SHOW VARIABLES LIKE 'slave_skip_errors'"; PTDEBUG && _d($dbh, $sql); my $row = $dbh->selectrow_arrayref($sql); $filters{slave_skip_errors} = $row->[1] if $row->[1] && $row->[1] ne 'OFF'; } return \%filters; } sub pos_to_string { my ( $self, $pos ) = @_; my $fmt = '%s/%020d'; return sprintf($fmt, @{$pos}{qw(file position)}); } sub reset_known_replication_threads { my ( $self ) = @_; $self->{replication_thread} = {}; return; } sub get_cxn_from_dsn_table { my ($self, %args) = @_; my @required_args = qw(dsn_table_dsn make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn_table_dsn, $make_cxn) = @args{@required_args}; PTDEBUG && _d('DSN table DSN:', $dsn_table_dsn); my $dp = $self->{DSNParser}; my $q = $self->{Quoter}; my $dsn = $dp->parse($dsn_table_dsn); my $dsn_table; if ( $dsn->{D} && $dsn->{t} ) { $dsn_table = $q->quote($dsn->{D}, $dsn->{t}); } elsif ( $dsn->{t} && $dsn->{t} =~ m/\./ ) { $dsn_table = $q->quote($q->split_unquote($dsn->{t})); } else { die "DSN table DSN does not specify a database (D) " . "or a database-qualified table (t)"; } my $dsn_tbl_cxn = $make_cxn->(dsn => $dsn); my $dbh = $dsn_tbl_cxn->connect(); my $sql = "SELECT dsn FROM $dsn_table ORDER BY id"; PTDEBUG && _d($sql); my $dsn_strings = $dbh->selectcol_arrayref($sql); my @cxn; if ( $dsn_strings ) { foreach my $dsn_string ( @$dsn_strings ) { PTDEBUG && _d('DSN from DSN table:', $dsn_string); push @cxn, $make_cxn->(dsn_string => $dsn_string); } } return \@cxn; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MasterSlave package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. 
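# A quick, illustrative reference for the Quoter helpers defined just below
# (inputs and expected results shown as comments):
#
#   my $q = Quoter->new();
#   $q->quote('db', 't`bl');            # `db`.`t``bl`
#   $q->quote_val(undef);               # NULL
#   $q->quote_val("it's");              # 'it\'s'
#   $q->split_unquote('`db`.`tbl`');    # ('db', 'tbl')
#   $q->serialize_list('a,b', 'c');     # 'a\,b,c'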
# ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . '`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; our $verbs = qr{^SHOW|^FLUSH|^COMMIT|^ROLLBACK|^BEGIN|SELECT|INSERT |UPDATE|DELETE|REPLACE|^SET|UNION|^START|^LOCK}xi; my $quote_re = qr/"(?:(?!(? [^()]+ ) # Non-parens without backtracking | (??{ $bal }) # Group with matching parens )* \) /x; my $olc_re = qr/(?:--|#)[^'"\r\n]*(?=[\r\n]|\Z)/; # One-line comments my $mlc_re = qr#/\*[^!].*?\*/#sm; # But not /*!version */ my $vlc_re = qr#/\*.*?[0-9]+.*?\*/#sm; # For SHOW + /*!version */ my $vlc_rf = qr#^(?:SHOW).*?/\*![0-9]+(.*?)\*/#sm; # Variation for SHOW sub new { my ( $class, %args ) = @_; my $self = { %args }; return bless $self, $class; } sub strip_comments { my ( $self, $query ) = @_; return unless $query; $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; if ( $query =~ m/$vlc_rf/i ) { # contains show + version my $qualifier = $1 || ''; $query =~ s/$vlc_re/$qualifier/go; } return $query; } sub shorten { my ( $self, $query, $length ) = @_; $query =~ s{ \A( (?:INSERT|REPLACE) (?:\s+LOW_PRIORITY|DELAYED|HIGH_PRIORITY|IGNORE)? (?:\s\w+)*\s+\S+\s+VALUES\s*\(.*?\) ) \s*,\s*\(.*?(ON\s+DUPLICATE|\Z)} {$1 /*... 
omitted ...*/$2}xsi; return $query unless $query =~ m/IN\s*\(\s*(?!select)/i; my $last_length = 0; my $query_length = length($query); while ( $length > 0 && $query_length > $length && $query_length < ( $last_length || $query_length + 1 ) ) { $last_length = $query_length; $query =~ s{ (\bIN\s*\() # The opening of an IN list ([^\)]+) # Contents of the list, assuming no item contains paren (?=\)) # Close of the list } { $1 . __shorten($2) }gexsi; } return $query; } sub __shorten { my ( $snippet ) = @_; my @vals = split(/,/, $snippet); return $snippet unless @vals > 20; my @keep = splice(@vals, 0, 20); # Remove and save the first 20 items return join(',', @keep) . "/*... omitted " . scalar(@vals) . " items ...*/"; } sub fingerprint { my ( $self, $query ) = @_; $query =~ m#\ASELECT /\*!40001 SQL_NO_CACHE \*/ \* FROM `# # mysqldump query && return 'mysqldump'; $query =~ m#/\*\w+\.\w+:[0-9]/[0-9]\*/# # pt-table-checksum, etc query && return 'percona-toolkit'; $query =~ m/\Aadministrator command: / && return $query; $query =~ m/\A\s*(call\s+\S+)\(/i && return lc($1); # Warning! $1 used, be careful. if ( my ($beginning) = $query =~ m/\A((?:INSERT|REPLACE)(?: IGNORE)?\s+INTO.+?VALUES\s*\(.*?\))\s*,\s*\(/is ) { $query = $beginning; # Shorten multi-value INSERT statements ASAP } $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; $query =~ s/\Ause \S+\Z/use ?/i # Abstract the DB in USE && return $query; $query =~ s/\\["']//g; # quoted strings $query =~ s/".*?"/?/sg; # quoted strings $query =~ s/'.*?'/?/sg; # quoted strings $query =~ s/\bfalse\b|\btrue\b/?/isg; # boolean values if ( $self->{match_md5_checksums} ) { $query =~ s/([._-])[a-f0-9]{32}/$1?/g; } if ( !$self->{match_embedded_numbers} ) { $query =~ s/[0-9+-][0-9a-f.xb+-]*/?/g; } else { $query =~ s/\b[0-9+-][0-9a-f.xb+-]*/?/g; } if ( $self->{match_md5_checksums} ) { $query =~ s/[xb+-]\?/?/g; } else { $query =~ s/[xb.+-]\?/?/g; } $query =~ s/\A\s+//; # Chop off leading whitespace chomp $query; # Kill trailing whitespace $query =~ tr[ \n\t\r\f][ ]s; # Collapse whitespace $query = lc $query; $query =~ s/\bnull\b/?/g; # Get rid of NULLs $query =~ s{ # Collapse IN and VALUES lists \b(in|values?)(?:[\s,]*\([\s?,]*\))+ } {$1(?+)}gx; $query =~ s{ # Collapse UNION \b(select\s.*?)(?:(\sunion(?:\sall)?)\s\1)+ } {$1 /*repeat$2*/}xg; $query =~ s/\blimit \?(?:, ?\?| offset \?)?/limit ?/; # LIMIT if ( $query =~ m/\bORDER BY /gi ) { # Find, anchor on ORDER BY clause 1 while $query =~ s/\G(.+?)\s+ASC/$1/gi && pos $query; } return $query; } sub distill_verbs { my ( $self, $query ) = @_; $query =~ m/\A\s*call\s+(\S+)\(/i && return "CALL $1"; $query =~ m/\A\s*use\s+/ && return "USE"; $query =~ m/\A\s*UNLOCK TABLES/i && return "UNLOCK"; $query =~ m/\A\s*xa\s+(\S+)/i && return "XA_$1"; if ( $query =~ m/\A\s*LOAD/i ) { my ($tbl) = $query =~ m/INTO TABLE\s+(\S+)/i; $tbl ||= ''; $tbl =~ s/`//g; return "LOAD DATA $tbl"; } if ( $query =~ m/\Aadministrator command:/ ) { $query =~ s/administrator command:/ADMIN/; $query = uc $query; return $query; } $query = $self->strip_comments($query); if ( $query =~ m/\A\s*SHOW\s+/i ) { PTDEBUG && _d($query); $query = uc $query; $query =~ s/\s+(?:SESSION|FULL|STORAGE|ENGINE)\b/ /g; $query =~ s/\s+COUNT[^)]+\)//g; $query =~ s/\s+(?:FOR|FROM|LIKE|WHERE|LIMIT|IN)\b.+//ms; $query =~ s/\A(SHOW(?:\s+\S+){1,2}).*\Z/$1/s; $query =~ s/\s+/ /g; PTDEBUG && _d($query); return $query; } eval $QueryParser::data_def_stmts; eval $QueryParser::tbl_ident; my ( $dds ) = $query =~ /^\s*($QueryParser::data_def_stmts)\b/i; if ( $dds) { $query =~ 
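# fingerprint() above abstracts literals so that structurally identical
# queries collapse to a single class; roughly:
#
#   SELECT * FROM users WHERE id = 42 AND name = 'bob'
#     => select * from users where id = ? and name = ?
#   INSERT INTO t VALUES (1), (2), (3)
#     => insert into t values(?+)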
s/\s+IF(?:\s+NOT)?\s+EXISTS/ /i; my ( $obj ) = $query =~ m/$dds.+(DATABASE|TABLE)\b/i; $obj = uc $obj if $obj; PTDEBUG && _d('Data def statment:', $dds, 'obj:', $obj); my ($db_or_tbl) = $query =~ m/(?:TABLE|DATABASE)\s+($QueryParser::tbl_ident)(\s+.*)?/i; PTDEBUG && _d('Matches db or table:', $db_or_tbl); return uc($dds . ($obj ? " $obj" : '')), $db_or_tbl; } my @verbs = $query =~ m/\b($verbs)\b/gio; @verbs = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } map { uc } @verbs; }; if ( ($verbs[0] || '') eq 'SELECT' && @verbs > 1 ) { PTDEBUG && _d("False-positive verbs after SELECT:", @verbs[1..$#verbs]); my $union = grep { $_ eq 'UNION' } @verbs; @verbs = $union ? qw(SELECT UNION) : qw(SELECT); } my $verb_str = join(q{ }, @verbs); return $verb_str; } sub __distill_tables { my ( $self, $query, $table, %args ) = @_; my $qp = $args{QueryParser} || $self->{QueryParser}; die "I need a QueryParser argument" unless $qp; my @tables = map { $_ =~ s/`//g; $_ =~ s/(_?)[0-9]+/$1?/g; $_; } grep { defined $_ } $qp->get_tables($query); push @tables, $table if $table; @tables = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } @tables; }; return @tables; } sub distill { my ( $self, $query, %args ) = @_; if ( $args{generic} ) { my ($cmd, $arg) = $query =~ m/^(\S+)\s+(\S+)/; return '' unless $cmd; $query = (uc $cmd) . ($arg ? " $arg" : ''); } else { my ($verbs, $table) = $self->distill_verbs($query, %args); if ( $verbs && $verbs =~ m/^SHOW/ ) { my %alias_for = qw( SCHEMA DATABASE KEYS INDEX INDEXES INDEX ); map { $verbs =~ s/$_/$alias_for{$_}/ } keys %alias_for; $query = $verbs; } elsif ( $verbs && $verbs =~ m/^LOAD DATA/ ) { return $verbs; } else { my @tables = $self->__distill_tables($query, $table, %args); $query = join(q{ }, $verbs, @tables); } } if ( $args{trf} ) { $query = $args{trf}->($query, %args); } return $query; } sub convert_to_select { my ( $self, $query ) = @_; return unless $query; return if $query =~ m/=\s*\(\s*SELECT /i; $query =~ s{ \A.*? update(?:\s+(?:low_priority|ignore))?\s+(.*?) \s+set\b(.*?) (?:\s*where\b(.*?))? (limit\s*[0-9]+(?:\s*,\s*[0-9]+)?)? \Z } {__update_to_select($1, $2, $3, $4)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ .*?\binto\b(.*?)\(([^\)]+)\)\s* values?\s*(\(.*?\))\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select($1, $2, $3)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ (?:.*?\binto)\b(.*?)\s* set\s+(.*?)\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select_with_set($1, $2)}exsi || $query =~ s{ \A.*? delete\s+(.*?) \bfrom\b(.*) \Z } {__delete_to_select($1, $2)}exsi; $query =~ s/\s*on\s+duplicate\s+key\s+update.*\Z//si; $query =~ s/\A.*?(?=\bSELECT\s*\b)//ism; return $query; } sub convert_select_list { my ( $self, $query ) = @_; $query =~ s{ \A\s*select(.*?)\bfrom\b } {$1 =~ m/\*/ ? "select 1 from" : "select isnull(coalesce($1)) from"}exi; return $query; } sub __delete_to_select { my ( $delete, $join ) = @_; if ( $join =~ m/\bjoin\b/ ) { return "select 1 from $join"; } return "select * from $join"; } sub __insert_to_select { my ( $tbl, $cols, $vals ) = @_; PTDEBUG && _d('Args:', @_); my @cols = split(/,/, $cols); PTDEBUG && _d('Cols:', @cols); $vals =~ s/^\(|\)$//g; # Strip leading/trailing parens my @vals = $vals =~ m/($quote_re|[^,]*${bal}[^,]*|[^,]+)/g; PTDEBUG && _d('Vals:', @vals); if ( @cols == @vals ) { return "select * from $tbl where " . 
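# Illustrative results for distill() and convert_to_select() above, assuming
# the rewriter was built with a QueryParser object ($qr is hypothetical):
#
#   $qr->distill('SELECT c FROM tbl WHERE id = 5');
#       # 'SELECT tbl'
#   $qr->convert_to_select('UPDATE t SET c = 1 WHERE id = 2');
#       # roughly 'select c = 1 from t where id = 2'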
join(' and ', map { "$cols[$_]=$vals[$_]" } (0..$#cols)); } else { return "select * from $tbl limit 1"; } } sub __insert_to_select_with_set { my ( $from, $set ) = @_; $set =~ s/,/ and /g; return "select * from $from where $set "; } sub __update_to_select { my ( $from, $set, $where, $limit ) = @_; return "select $set from $from " . ( $where ? "where $where" : '' ) . ( $limit ? " $limit " : '' ); } sub wrap_in_derived { my ( $self, $query ) = @_; return unless $query; return $query =~ m/\A\s*select/i ? "select 1 from ($query) as x limit 1" : $query; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryRewriter package # ########################################################################### # ########################################################################### # Retry package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Retry.pm # t/lib/Retry.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Retry; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::HiRes qw(sleep); sub new { my ( $class, %args ) = @_; my $self = { %args, }; return bless $self, $class; } sub retry { my ( $self, %args ) = @_; my @required_args = qw(try fail final_fail); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($try, $fail, $final_fail) = @args{@required_args}; my $wait = $args{wait} || sub { sleep 1; }; my $tries = $args{tries} || 3; my $last_error; my $tryno = 0; TRY: while ( ++$tryno <= $tries ) { PTDEBUG && _d("Try", $tryno, "of", $tries); my $result; eval { $result = $try->(tryno=>$tryno); }; if ( $EVAL_ERROR ) { PTDEBUG && _d("Try code failed:", $EVAL_ERROR); $last_error = $EVAL_ERROR; if ( $tryno < $tries ) { # more retries my $retry = $fail->(tryno=>$tryno, error=>$last_error); last TRY unless $retry; PTDEBUG && _d("Calling wait code"); $wait->(tryno=>$tryno); } } else { PTDEBUG && _d("Try code succeeded"); return $result; } } PTDEBUG && _d('Try code did not succeed'); return $final_fail->(error=>$last_error); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Retry package # ########################################################################### # ########################################################################### # Cxn package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Cxn.pm # t/lib/Cxn.t # See https://launchpad.net/percona-toolkit for more information. 
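# The Retry package above implements the try/fail/final_fail loop that
# main() later wraps around SHOW PROCESSLIST, KILL and the --log-dsn insert.
# A minimal sketch (the try callback here is a placeholder):
#
#   Retry->new()->retry(
#      tries      => 3,
#      wait       => sub { sleep 1 },
#      try        => sub { do_the_work() },                   # hypothetical
#      fail       => sub { my (%args) = @_; return 1 },       # 1 = retry
#      final_fail => sub { my (%args) = @_; die $args{error} },
#   );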
# ########################################################################### { package Cxn; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Scalar::Util qw(blessed); use constant { PTDEBUG => $ENV{PTDEBUG} || 0, PERCONA_TOOLKIT_TEST_USE_DSN_NAMES => $ENV{PERCONA_TOOLKIT_TEST_USE_DSN_NAMES} || 0, }; sub new { my ( $class, %args ) = @_; my @required_args = qw(DSNParser OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($dp, $o) = @args{@required_args}; my $dsn_defaults = $dp->parse_options($o); my $prev_dsn = $args{prev_dsn}; my $dsn = $args{dsn}; if ( !$dsn ) { $args{dsn_string} ||= 'h=' . ($dsn_defaults->{h} || 'localhost'); $dsn = $dp->parse( $args{dsn_string}, $prev_dsn, $dsn_defaults); } elsif ( $prev_dsn ) { $dsn = $dp->copy($prev_dsn, $dsn); } my $dsn_name = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; my $self = { dsn => $dsn, dbh => $args{dbh}, dsn_name => $dsn_name, hostname => '', set => $args{set}, NAME_lc => defined($args{NAME_lc}) ? $args{NAME_lc} : 1, dbh_set => 0, ask_pass => $o->get('ask-pass'), DSNParser => $dp, is_cluster_node => undef, parent => $args{parent}, }; return bless $self, $class; } sub connect { my ( $self, %opts ) = @_; my $dsn = $opts{dsn} || $self->{dsn}; my $dp = $self->{DSNParser}; my $dbh = $self->{dbh}; if ( !$dbh || !$dbh->ping() ) { if ( $self->{ask_pass} && !$self->{asked_for_pass} && !defined $dsn->{p} ) { $dsn->{p} = OptionParser::prompt_noecho("Enter MySQL password: "); $self->{asked_for_pass} = 1; } $dbh = $dp->get_dbh( $dp->get_cxn_params($dsn), { AutoCommit => 1, %opts, }, ); } $dbh = $self->set_dbh($dbh); if ( $opts{dsn} ) { $self->{dsn} = $dsn; $self->{dsn_name} = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; } PTDEBUG && _d($dbh, 'Connected dbh to', $self->{hostname},$self->{dsn_name}); return $dbh; } sub set_dbh { my ($self, $dbh) = @_; if ( $self->{dbh} && $self->{dbh} == $dbh && $self->{dbh_set} ) { PTDEBUG && _d($dbh, 'Already set dbh'); return $dbh; } PTDEBUG && _d($dbh, 'Setting dbh'); $dbh->{FetchHashKeyName} = 'NAME_lc' if $self->{NAME_lc}; my $sql = 'SELECT @@server_id /*!50038 , @@hostname*/'; PTDEBUG && _d($dbh, $sql); my ($server_id, $hostname) = $dbh->selectrow_array($sql); PTDEBUG && _d($dbh, 'hostname:', $hostname, $server_id); if ( $hostname ) { $self->{hostname} = $hostname; } if ( $self->{parent} ) { PTDEBUG && _d($dbh, 'Setting InactiveDestroy=1 in parent'); $dbh->{InactiveDestroy} = 1; } if ( my $set = $self->{set}) { $set->($dbh); } $self->{dbh} = $dbh; $self->{dbh_set} = 1; return $dbh; } sub lost_connection { my ($self, $e) = @_; return 0 unless $e; return $e =~ m/MySQL server has gone away/ || $e =~ m/Lost connection to MySQL server/ || $e =~ m/Server shutdown in progress/; } sub dbh { my ($self) = @_; return $self->{dbh}; } sub dsn { my ($self) = @_; return $self->{dsn}; } sub name { my ($self) = @_; return $self->{dsn_name} if PERCONA_TOOLKIT_TEST_USE_DSN_NAMES; return $self->{hostname} || $self->{dsn_name} || 'unknown host'; } sub description { my ($self) = @_; return sprintf("%s -> %s:%s", $self->name(), $self->{dsn}->{h}, $self->{dsn}->{P} || 'socket'); } sub get_id { my ($self, $cxn) = @_; $cxn ||= $self; my $unique_id; if ($cxn->is_cluster_node()) { # for cluster we concatenate various variables to maximize id 'uniqueness' across versions my $sql = q{SHOW STATUS LIKE 'wsrep\_local\_index'}; my (undef, $wsrep_local_index) = $cxn->dbh->selectrow_array($sql); PTDEBUG && 
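# A minimal Cxn sketch, similar to how main() connects later (the DSN string
# here is illustrative):
#
#   my $cxn = Cxn->new(
#      dsn_string   => 'h=127.0.0.1,P=3306',
#      DSNParser    => $dp,
#      OptionParser => $o,
#   );
#   my $dbh = $cxn->connect();   # honors --ask-pass; lowercases column
#                                # names unless NAME_lc => 0 is passed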
_d("Got cluster wsrep_local_index: ",$wsrep_local_index); $unique_id = $wsrep_local_index."|"; foreach my $val ('server\_id', 'wsrep\_sst\_receive\_address', 'wsrep\_node\_name', 'wsrep\_node\_address') { my $sql = "SHOW VARIABLES LIKE '$val'"; PTDEBUG && _d($cxn->name, $sql); my (undef, $val) = $cxn->dbh->selectrow_array($sql); $unique_id .= "|$val"; } } else { my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); $unique_id = $cxn->dbh->selectrow_array($sql); } PTDEBUG && _d("Generated unique id for cluster:", $unique_id); return $unique_id; } sub is_cluster_node { my ($self, $cxn) = @_; $cxn ||= $self; my $sql = "SHOW VARIABLES LIKE 'wsrep\_on'"; my $dbh; if ($cxn->isa('DBI::db')) { $dbh = $cxn; PTDEBUG && _d($sql); #don't invoke name() if it's not a Cxn! } else { $dbh = $cxn->dbh(); PTDEBUG && _d($cxn->name, $sql); } my $row = $dbh->selectrow_arrayref($sql); return $row && $row->[1] && ($row->[1] eq 'ON' || $row->[1] eq '1') ? 1 : 0; } sub remove_duplicate_cxns { my ($self, %args) = @_; my @cxns = @{$args{cxns}}; my $seen_ids = $args{seen_ids} || {}; PTDEBUG && _d("Removing duplicates from ", join(" ", map { $_->name } @cxns)); my @trimmed_cxns; for my $cxn ( @cxns ) { my $id = $cxn->get_id(); PTDEBUG && _d('Server ID for ', $cxn->name, ': ', $id); if ( ! $seen_ids->{$id}++ ) { push @trimmed_cxns, $cxn } else { PTDEBUG && _d("Removing ", $cxn->name, ", ID ", $id, ", because we've already seen it"); } } return \@trimmed_cxns; } sub DESTROY { my ($self) = @_; PTDEBUG && _d('Destroying cxn'); if ( $self->{parent} ) { PTDEBUG && _d($self->{dbh}, 'Not disconnecting dbh in parent'); } elsif ( $self->{dbh} && blessed($self->{dbh}) && $self->{dbh}->can("disconnect") ) { PTDEBUG && _d($self->{dbh}, 'Disconnecting dbh on', $self->{hostname}, $self->{dsn_name}); $self->{dbh}->disconnect(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Cxn package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 
1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? $host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? 
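# HTTP::Micro is a minimal HTTP/1.1 client used only by the VersionCheck
# package further down.  Sketch (the URL is illustrative):
#
#   my $ua  = HTTP::Micro->new(timeout => 3);
#   my $res = $ua->request('GET', 'https://v.percona.com');
#   # then inspect $res->{success}, $res->{status}, $res->{content}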
$1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for $host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! 
!= EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! 
= 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! 
%$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . 
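# version_check_file() returns the check-time file in the first writable
# directory from @vc_dirs, falling back to the current working directory.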
$file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
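# Summary of version_check(): skip when running from a .bzr/.git checkout
# (unless forced), tag each instance with a name and md5 id, drop instances
# checked within the last 24 hours, require IO::Socket::SSL, send the
# inventory to v.percona.com via pingback(), print any advice returned, and
# finally record the new check times.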
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_kill; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use POSIX qw(setsid); use List::Util qw(max); use Digest::MD5 qw(md5_hex); use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; Transformers->import(qw(ts)); use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; $OUTPUT_AUTOFLUSH = 1; my $o; # ######################################################################## # Configuration info. # ######################################################################## sub main { local @ARGV = @_; # set global ARGV for this package # ######################################################################## # Get configuration information. # ######################################################################## $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); if ( !$o->got('busy-time') ) { $o->set('interval', 30) unless $o->got('interval'); } else { $o->set('interval', max(1, $o->get('busy-time') / 2)) unless $o->got('interval'); } # Disable opts that don't make sense when reading proclist # from a file (or STDIN). 
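# (The next block zeroes run-time, interval and ignore-self: a file is read
# once, immediately, and there is no live connection of our own to ignore.)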
if ( $o->get('test-matching') ) { $o->set('run-time', 0); $o->set('interval', 0); $o->set('ignore-self', 0); } # TODO: parse valid values from POD my $victims = lc $o->get('victims'); if ( !grep { $victims eq $_ } qw(oldest all all-but-oldest) ) { $o->save_error("Invalid value for --victims: $victims"); } $o->usage_or_errors(); # ######################################################################## # First things first: if --stop was given, create the sentinel file. # ######################################################################## if ( $o->get('stop') ) { my $sentinel = $o->get('sentinel'); PTDEBUG && _d('Creating sentinel file', $sentinel); open my $fh, '>', $sentinel or die "Cannot open $sentinel: $OS_ERROR\n"; print $fh "Remove this file to permit pt-kill to run.\n" or die "Cannot write to $sentinel: $OS_ERROR\n"; close $fh or die "Cannot close $sentinel: $OS_ERROR\n"; print "Successfully created file $sentinel\n"; return 0; } # ######################################################################## # Create the --filter sub. # ######################################################################## my $filter_sub; if ( my $filter = $o->get('filter') ) { if ( -f $filter && -r $filter ) { PTDEBUG && _d('Reading file', $filter, 'for --filter code'); open my $fh, "<", $filter or die "Cannot open $filter: $OS_ERROR"; $filter = do { local $/ = undef; <$fh> }; close $fh; } else { $filter = "( $filter )"; # issue 565 } my $code = 'sub { my ( $event ) = @_; ' . "$filter && return \$event; };"; PTDEBUG && _d('--filter code:', $code); $filter_sub = eval $code or die "Error compiling --filter code: $code\n$EVAL_ERROR"; } # ######################################################################## # Make input sub that will either get processlist from MySQL or a file. # ######################################################################## my $ms = new MasterSlave( OptionParser => $o, DSNParser => $dp, Quoter => "Quoter", ); my $pl = new Processlist(MasterSlave => $ms, kill_busy_commands => $o->get('kill-busy-commands')); my $qr = new QueryRewriter(); my $cxn; my $dbh; # $cxn->dbh my $get_proclist; # callback to SHOW PROCESSLIST my $proc_sth; my $kill; # callback to KILL my $kill_sth; my $kill_sql; if ( $o->get('rds') ){ $kill_sql = $o->get('kill-query') ? 'CALL mysql.rds_kill_query(?)' : 'CALL mysql.rds_kill(?)'; } else{ $kill_sql = $o->get('kill-query') ? 'KILL QUERY ?' : 'KILL ?'; } my $files; if ( $files = $o->get('test-matching') ) { PTDEBUG && _d('Getting processlist from files:', @$files); my $trp = new TextResultSetParser(); my $fh; $get_proclist = sub { if ( !$fh ) { my $file = shift @$files; die 'No more files' unless $file; if ( $file eq '-' ) { $fh = *STDIN; } else { if ( !open $fh, '<', $file ) { warn "Cannot open $file: $OS_ERROR"; $fh = undef; return; } } } if ( $fh ) { local $INPUT_RECORD_SEPARATOR = ''; my $proclist_text = <$fh>; if ( $proclist_text ) { return $trp->parse($proclist_text); } else { # No more proclists in this file. $fh = undef; } } return; }; } else { PTDEBUG && _d('Getting processlist from MySQL'); $cxn = Cxn->new( dsn_string => shift @ARGV, NAME_lc => 0, parent => $o->get('daemonize'), DSNParser => $dp, OptionParser => $o, ); $dbh = $cxn->connect(); # Make the get_proclist and kill callbacks. Use Retry in case # the connection to MySQL is lost, then the dbh and the sths # will need to be re-initialized. 
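   # Roughly, the pattern used below looks like this (an illustrative sketch
   # only -- the real callbacks are defined right after this comment):
   #
   #   my $result = $retry->retry(
   #      tries      => 1200,                  # e.g. 1,200 tries x 3s waits = about an hour
   #      wait       => sub { sleep 3 },       # pause between attempts
   #      try        => sub { ... run the statement and return its result ... },
   #      fail       => sub { ... reconnect; return 1 to retry, 0 to give up ... },
   #      final_fail => sub { my (%args) = @_; die $args{error} },
   #   );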
my $retry = Retry->new(); $proc_sth = $dbh->prepare('SHOW FULL PROCESSLIST'); $get_proclist = sub { return $retry->retry( # Retry for an hour: 1,200 tries x 3 seconds = 3600s/1hr tries => 1200, wait => sub { sleep 3; }, try => sub { $proc_sth->execute(); return $proc_sth->fetchall_arrayref({}); }, fail => sub { my (%args) = @_; my $error = $args{error}; # The 1st pattern means that MySQL itself died or was stopped. # The 2nd pattern means that our cxn was killed (KILL ). if ( $error =~ m/MySQL server has gone away/ || $error =~ m/Lost connection to MySQL server/ ) { eval { $dbh = $cxn->connect(); $proc_sth = $dbh->prepare('SHOW FULL PROCESSLIST'); msg('Reconnected to ' . $cxn->name()); }; return 1 unless $EVAL_ERROR; # try again } return 0; # call final_fail }, final_fail => sub { my (%args) = @_; die $args{error}; }, ); }; $kill_sth = $dbh->prepare($kill_sql); $kill = sub { my ($id) = @_; PTDEBUG && _d('Killing process', $id); return $retry->retry( tries => 2, try => sub { return $kill_sth->execute($id); }, fail => sub { my (%args) = @_; my $error = $args{error}; # The 1st pattern means that MySQL itself died or was stopped. # The 2nd pattern means that our cxn was killed (KILL ). if ( $error =~ m/MySQL server has gone away/ || $error =~ m/Lost connection to MySQL server/ ) { eval { $dbh = $cxn->connect(); $kill_sth = $dbh->prepare($kill_sql); msg('Reconnected to ' . $cxn->name()); }; return 1 unless $EVAL_ERROR; # try again } return 0; # call final_fail }, final_fail => sub { my (%args) = @_; die $args{error}; }, ); }; } # Set up --log-dsn if specified. my ($log, $log_dsn, $log_sql, $log_sth, $log_cxn); my @processlist_columns = qw( Id User Host db Command Time State Info Time_ms ); if ( $log_dsn = $o->get('log-dsn') ) { my $db = $log_dsn->{D}; my $table = $log_dsn->{t}; die "--log-dsn does not specify a database (D) " . "or a database-qualified table (t)" unless defined $table && defined $db; PTDEBUG && _d('Connecting --log-dsn:', Dumper($log_dsn)); $log_cxn = Cxn->new( dsn => $log_dsn, NAME_lc => 0, DSNParser => $dp, OptionParser => $o, ); my $log_dbh = $log_cxn->connect(); my $log_table = Quoter->quote($db, $table); PTDEBUG && _d('Connected --log-dsn:', Dumper($log_cxn->dsn)); # Create the log-table table if it doesn't exist and --create-log-table # was passed in my $tp = TableParser->new( Quoter => "Quoter" ); if ( !$tp->check_table( dbh => $log_dbh, db => $db, tbl => $table ) ) { if ($o->get('create-log-table') ) { my $sql = $o->read_para_after( __FILE__, qr/MAGIC_create_log_table/); $sql =~ s/kill_log/IF NOT EXISTS $log_table/; PTDEBUG && _d($sql); $log_dbh->do($sql); } else { die "--log-dsn table does not exist. Please create it or specify " . "--create-log-table."; } } # All the columns of the table that we care about my @all_log_columns = ( qw( server_id timestamp reason kill_error ), @processlist_columns ); my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); my ($server_id) = $dbh->selectrow_array($sql); $log_sql = "INSERT INTO $log_table (" . join(", ", @all_log_columns) . ") VALUES(" . join(", ", $server_id, ("?") x (@all_log_columns-1)) . ")"; PTDEBUG && _d($sql); $log_sth = $log_dbh->prepare($log_sql); my $retry = Retry->new(); $log = sub { my (@params) = @_; PTDEBUG && _d('Logging values:', @params); return $retry->retry( tries => 3, wait => sub { sleep 3; }, try => sub { return $log_sth->execute(@params); }, fail => sub { my (%args) = @_; my $error = $args{error}; # The 1st pattern means that MySQL itself died or was stopped. 
# The 2nd pattern means that our cxn was killed (KILL ). if ( $error =~ m/MySQL server has gone away/ || $error =~ m/Lost connection to MySQL server/ ) { eval { $log_dbh = $log_cxn->connect(); $log_sth = $log_dbh->prepare($log_sql); msg('Reconnected to ' . $cxn->name()); }; if ( $EVAL_ERROR ) { warn "Fail code failed: $EVAL_ERROR"; } return 1; # retry } return 0; # call final_fail }, final_fail => sub { my (%args) = @_; die $args{error}; }, ); }; } # ######################################################################## # Daemonize only after (potentially) asking for passwords for --ask-pass. # ######################################################################## my $daemon; if ( $o->get('daemonize') ) { $daemon = new Daemon(o=>$o); $daemon->daemonize(); PTDEBUG && _d('I am a daemon now'); } elsif ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # If we daemonized, the parent has already exited and we're the child. # We shared a copy of every Cxn with the parent, and the parent's copies # were destroyed but the dbhs were not disconnected because the parent # attrib was true. Now, as the child, set it false so the dbhs will be # disconnected when our Cxn copies are destroyed. If we didn't daemonize, # then we're not really a parent (since we have no children), so set it # false to auto-disconnect the dbhs when our Cxns are destroyed. $cxn->{parent} = 0 if $cxn; # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ ($dbh ? { dbh => $dbh, dsn => $cxn->dsn() } : ()) ], ); } # ######################################################################## # Start working. # ######################################################################## msg("$PROGRAM_NAME starting"); msg($dbh ? "Connected to host " . $cxn->name() : "Test matching files @$files"); # Class-based match criteria. my $query_count = $o->get('query-count'); my $each_busy_time = $o->get('each-busy-time'); my $any_busy_time = $o->get('any-busy-time'); my $group_by = $o->get('group-by'); if ( $group_by && $group_by =~ m/id|user|host|db|command|time|state|info/i ) { # Processlist.pm is case-sensitive. It matches Id, Host, db, etc. # So we'll do the same because if we set NAME_lc on the dbh then # we'll break our Processlist obj. $group_by = lc $group_by; $group_by = ucfirst $group_by unless $group_by eq 'db'; } # Per-class match criteria. my %find_spec = ( busy_time => $o->get('busy-time'), idle_time => $o->get('idle-time'), all => $o->get('match-all'), replication_threads => $o->get('replication-threads'), ignore => { Command => $o->get('ignore-command'), db => $o->get('ignore-db'), Host => $o->get('ignore-host'), Id => $o->get('ignore-self') ? $dbh->{mysql_thread_id} : undef, Info => $o->get('ignore-info'), State => $o->get('ignore-state'), User => $o->get('ignore-user'), }, match => { Command => $o->get('match-command'), db => $o->get('match-db'), Host => $o->get('match-host'), Info => $o->get('match-info'), State => $o->get('match-state'), User => $o->get('match-user'), }, ); msg("Find spec: " . 
Dumper(\%find_spec)); my $sentinel = $o->get('sentinel'); my $run_time = $o->get('run-time') || 0; my $start = time(); my $end = $start + $run_time; # When we should exit my $now = $start; if ( $dbh ) { msg("Run-time: " . ($run_time ? "$run_time seconds" : "forever") . " at " . ($o->get('interval') || 0) . " second intervals"); } # We don't care about the executed command, and we don't want # to wait for it, so we ignore dead children so we don't have # to reap them and they won't become zombies. # https://bugs.launchpad.net/percona-toolkit/+bug/919819 if ( $o->get('execute-command') ) { $SIG{CHLD} = 'IGNORE'; } while ( (!$run_time || $now < $end) && !-f $sentinel ) { msg('Checking processlist'); my $proclist; eval { $proclist = $get_proclist->(); }; if ( $EVAL_ERROR ) { last if $EVAL_ERROR =~ m/No more files/; die "Error getting SHOW PROCESSLIST: $EVAL_ERROR"; } # Apply --filter to the processlist events. my $filtered_proclist; if ( $filter_sub && $proclist && @$proclist ) { foreach my $proc ( @$proclist ) { push @$filtered_proclist, $proc if $filter_sub->($proc); } } else { $filtered_proclist = $proclist; } $proclist = $filtered_proclist; my @queries; if ( $proclist ) { # ################################################################## # Group queries into classes. If --group-by wasn't specified # then all queries will be put in the "default" class. # ################################################################## my $query_classes = group_queries( proclist => $proclist, group_by => $group_by, strip_comments => $o->get('strip-comments'), QueryRewriter => $qr, ); # ################################################################## # Find matching queries in each class. # ################################################################## CLASS: foreach my $class ( keys %$query_classes ) { PTDEBUG && _d('Finding matching queries in class', $class); my @matches = $pl->find($query_classes->{$class}, %find_spec); PTDEBUG && _d(scalar @matches, 'queries in class', $class); next CLASS unless scalar @matches; # ############################################################### # Apply class-based filters. # ############################################################### if ( $query_count && @matches < $query_count ) { PTDEBUG && _d('Not enough queries in class', $class, '; has', scalar @matches, 'but needs at least', $query_count); next CLASS; } if ( $each_busy_time ) { foreach my $proc ( @matches ) { if ( ($proc->{Time} || 0) <= $each_busy_time ) { PTDEBUG && _d('This query in class', $class, 'hasn\'t been running long enough:', Dumper($proc)); next CLASS; } } } elsif ( $any_busy_time ) { my $busy_enough = 0; foreach my $proc ( @matches ) { if ( ($proc->{Time} || 0) > $any_busy_time ) { $busy_enough = 1; last; } } if ( !$busy_enough ) { PTDEBUG && _d('No query is busy enough in class', $class); next CLASS; } } # ############################################################### # Select the victims (which of the matching queries to kill). # ############################################################### @matches = reverse sort { ($a->{Time} || 0) <=> ($b->{Time} || 0) } @matches; if ( $victims eq 'oldest' ) { @matches = ($matches[0]); } elsif ( $victims eq 'all-but-oldest' ) { shift @matches; # remove fist/oldest query } elsif ( $victims eq 'all' ) { # Don't do anything. } else { # Shouldn't happen. Option val should be verified earlier. die "I don't know how to kill $victims"; } # ############################################################### # Save matching queries in this class. 
# ############################################################### PTDEBUG && _d(scalar @matches, 'queries to kill in class', $class); push @queries, @matches; } # CLASS msg('Matched ' . scalar @queries . ' queries'); MATCHING_QUERY: foreach my $query ( @queries ) { if ( $o->get('print') ) { printf "# %s %s %d (%s %d sec) %s\n", ts(time), $o->get('kill-query') ? 'KILL QUERY' : 'KILL', $query->{Id}, ($query->{Command} || 'NULL'), $query->{Time}, ($query->{Info} || 'NULL'); } if ( $o->get('query-id') ) { my $fp = $qr->fingerprint($query->{'Info'}); my $chksm = Transformers::make_checksum($fp); print "Query ID: 0x$chksm\n"; } if ( $o->get('execute-command') ) { exec_cmd($o->get('execute-command')); msg('Executed ' . $o->get('execute-command')); } if ( $o->get('kill') || $o->get('kill-query') ) { if ( $o->get('wait-before-kill') ) { msg("Sleeping " . $o->get('wait-before-kill') . " seconds before kill"); sleep $o->get('wait-before-kill'); } local $@; eval { $kill->($query->{Id}) }; if ( $log ) { log_to_table( log => $log, query => $query, proclist => $pl, columns => \@processlist_columns, eval_error => $EVAL_ERROR, ); } if ( $EVAL_ERROR ) { msg("Error killing $query->{Id}: $EVAL_ERROR"); } else { msg("Killed $query->{Id}"); } } } } else { msg('Processlist returned no queries'); } if ( $dbh ) { if ( @queries && $o->get('wait-after-kill') ) { msg("Sleeping " . $o->get('wait-after-kill') . " seconds after killing queries"); sleep $o->get('wait-after-kill'); } else { msg("Sleeping " . $o->get('interval') . " seconds after normal interval"); sleep $o->get('interval'); } } $now = time(); } msg("Sentinel file $sentinel exists") if $sentinel && -f $sentinel; msg("$PROGRAM_NAME ending"); return 0; } # ############################################################################ # Subroutines. # ############################################################################ # Forks and detaches from parent to execute the given command; # does not block parent. sub exec_cmd { my ( $cmd ) = @_; PTDEBUG && _d('exec cmd:', $cmd); return unless $cmd; my $pid = fork(); if ( $pid ) { # parent PTDEBUG && _d('child pid:', $pid); return $pid; } # child POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; my $retval = system($cmd); $retval = $retval >> 8; PTDEBUG && _d('child exit status:', $retval); exit $retval; } sub msg { my ( $msg ) = @_; print '# ', ts(time), " $msg\n" if $o->get('verbose'); PTDEBUG && _d($msg); return; } sub log_to_table { my (%args) = @_; my ($log, $query, $pl, $processlist_columns) = @args{qw( log query proclist columns )}; my $ts = Transformers::ts(time()); my $reasons = join "\n", map { defined($_) ? $_ : "Unkown reason" } @{ $pl->{_reasons_for_matching}->{$query} }; $log->( $ts, $reasons, $args{eval_error}, @{$query}{@$processlist_columns} ); } sub group_queries { my ( %args ) = @_; my ($proclist, $group_by, $qr) = @args{qw(proclist group_by QueryRewriter)}; PTDEBUG && _d("Grouping queries by", $group_by); # If there's proclist then there's nothing to do. If there's no group by # then all the procs in the list are in the same class. return $proclist unless $proclist; return { 'default' => $proclist } unless $group_by; my $query_classes = {}; foreach my $proc ( @$proclist ) { if ( $args{strip_comments} && $proc->{Info} ) { $proc->{Info} = $qr->strip_comments($proc->{Info}); } my $class; if ( $group_by eq 'fingerprint' ) { $class = $proc->{Info} ? $qr->fingerprint($proc->{Info}) : 'NULL'; } else { $class = $proc->{$group_by} ? 
$proc->{$group_by} : 'NULL'; } push @{$query_classes->{$class}}, $proc; } return $query_classes; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-kill - Kill MySQL queries that match certain criteria. =head1 SYNOPSIS Usage: pt-kill [OPTIONS] [DSN] pt-kill kills MySQL connections. pt-kill connects to MySQL and gets queries from SHOW PROCESSLIST if no FILE is given. Else, it reads queries from one or more FILE which contains the output of SHOW PROCESSLIST. If FILE is -, pt-kill reads from STDIN. Kill queries running longer than 60s: pt-kill --busy-time 60 --kill Print, do not kill, queries running longer than 60s: pt-kill --busy-time 60 --print Check for sleeping processes and kill them all every 10s: pt-kill --match-command Sleep --kill --victims all --interval 10 Print all login processes: pt-kill --match-state login --print --victims all See which queries in the processlist right now would match: mysql -e "SHOW PROCESSLIST" > proclist.txt pt-kill --test-matching proclist.txt --busy-time 60 --print =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-kill captures queries from SHOW PROCESSLIST, filters them, and then either kills or prints them. This is also known as a "slow query sniper" in some circles. The idea is to watch for queries that might be consuming too many resources, and kill them. For brevity, we talk about killing queries, but they may just be printed (or some other future action) depending on what options are given. Normally pt-kill connects to MySQL to get queries from SHOW PROCESSLIST. Alternatively, it can read SHOW PROCESSLIST output from files. In this case, pt-kill does not connect to MySQL and L<"--kill"> has no effect. You should use L<"--print"> instead when reading files. The ability to read a file with L<"--test-matching"> allows you to capture SHOW PROCESSLIST and test it later with pt-kill to make sure that your matches kill the proper queries. There are a lot of special rules to follow, such as "don't kill replication threads," so be careful not to kill something important! Two important options to know are L<"--busy-time"> and L<"--victims">. First, whereas most match/filter options match their corresponding value from SHOW PROCESSLIST (e.g. L<"--match-command"> matches a query's Command value), the Time value is matched by L<"--busy-time">. See also L<"--interval">. Second, L<"--victims"> controls which matching queries from each class are killed. By default, the matching query with the highest Time value is killed (the oldest query). 
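For example (an illustrative command only, with placeholder values), the following checks the processlist every 60 seconds (half of the given L<"--busy-time">, since no explicit L<"--interval"> is given) and prints, without killing, the single oldest query that has been executing for more than 120 seconds:

  pt-kill --busy-time 120 --victims oldest --print

C<oldest> is already the default for L<"--victims">; it is spelled out here only for clarity.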
See the next section, L<"GROUP, MATCH AND KILL">, for more details. Usually you need to specify at least one C<--match> option, else no queries will match. Or, you can specify L<"--match-all"> to match all queries that aren't ignored by an C<--ignore> option. =head1 GROUP, MATCH AND KILL Queries pass through several steps to determine which exactly will be killed (or printed--whatever action is specified). Understanding these steps will help you match precisely the queries you want. The first step is grouping queries into classes. The L<"--group-by"> option controls grouping. By default, this option has no value so all queries are grouped into one default class. All types of matching and filtering (the next step) are applied per-class. Therefore, you may need to group queries in order to match/filter some classes but not others. The second step is matching. Matching implies filtering since if a query doesn't match some criteria, it is removed from its class. Matching happens for each class. First, queries are filtered from their class by the various query-match options (see L<"QUERY MATCHES">) like L<"--match-user">. Then, entire classes are filtered by the various class-match options (see L<"CLASS MATCHES">) like L<"--query-count">. The third step is victim selection, that is, which matching queries in each class to kill. This is controlled by the L<"--victims"> option. Although many queries in a class may match, you may only want to kill the oldest query, or all queries, etc. The fourth and final step is to take some action on all matching queries from all classes. The action options (see L<"ACTIONS">) specify which actions will be taken. At this step, there are no more classes, just a single list of queries to kill, print, etc. pt-kill will kill all the queries matching ANY of the specified criteria (logical OR). For example, using: --busy-time 114 --match-command 'Query|Execute' will kill all queries having busy-time > 114 OR whose command is Query or Execute. If you want to kill only the queries where busy-time > 114 AND the command is Query or Execute, you need to use L<"--kill-busy-commands">: --busy-time 114 --kill-busy-commands 'Query|Execute' =head1 OUTPUT If only L<"--kill"> is given, then there is no output. If only L<"--print"> is given, then a timestamped KILL statement is printed for every query that would have been killed, like: # 2009-07-15T15:04:01 KILL 8 (Query 42 sec) SELECT * FROM huge_table The line shows a timestamp, the query's Id (8), its Time (42 sec) and its Info (usually the query SQL). If both L<"--kill"> and L<"--print"> are given, then matching queries are killed and a line for each like the one above is printed. Any command executed by L<"--execute-command"> is responsible for its own output and logging. After being executed, pt-kill has no control or interaction with the command. =head1 OPTIONS Specify at least one of L<"--kill">, L<"--kill-query">, L<"--print">, L<"--execute-command"> or L<"--stop">. L<"--any-busy-time"> and L<"--each-busy-time"> are mutually exclusive. L<"--kill"> and L<"--kill-query"> are mutually exclusive. L<"--daemonize"> and L<"--test-matching"> are mutually exclusive. This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL.
Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --create-log-table Create the L<"--log-dsn"> table if it does not exist. This option causes the table specified by L<"--log-dsn"> to be created with the default structure shown in the documentation for that option. =item --daemonize Fork to the background and detach from the shell. POSIX operating systems only. =item --database short form: -D; type: string The database to use for the connection. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --filter type: string Discard events for which this Perl code doesn't return true. This option is a string of Perl code or a file containing Perl code that gets compiled into a subroutine with one argument: $event. This is a hashref. If the given value is a readable file, then pt-kill reads the entire file and uses its contents as the code. The file should not contain a shebang (#!/usr/bin/perl) line. If the code returns true, the chain of callbacks continues; otherwise it ends. The code is the last statement in the subroutine other than C. The subroutine template is: sub { $event = shift; filter && return $event; } Filters given on the command line are wrapped inside parentheses like like C<( filter )>. For complex, multi-line filters, you must put the code inside a file so it will not be wrapped inside parentheses. Either way, the filter must produce syntactically valid code given the template. For example, an if-else branch given on the command line would not be valid: --filter 'if () { } else { }' # WRONG Since it's given on the command line, the if-else branch would be wrapped inside parentheses which is not syntactically valid. So to accomplish something more complex like this would require putting the code in a file, for example filter.txt: my $event_ok; if (...) { $event_ok=1; } else { $event_ok=0; } $event_ok Then specify C<--filter filter.txt> to read the code from filter.txt. If the filter code won't compile, pt-kill will die with an error. If the filter code does compile, an error may still occur at runtime if the code tries to do something wrong (like pattern match an undefined value). pt-kill does not provide any safeguards so code carefully! It is permissible for the code to have side effects (to alter C<$event>). =item --group-by type: string Apply matches to each class of queries grouped by this SHOW PROCESSLIST column. In addition to the basic columns of SHOW PROCESSLIST (user, host, command, state, etc.), queries can be matched by C which abstracts the SQL query in the C column. By default, queries are not grouped, so matches and actions apply to all queries. Grouping allows matches and actions to apply to classes of similar queries, if any queries in the class match. For example, detecting cache stampedes (see C under L<"--victims"> for an explanation of that term) requires that queries are grouped by the C attribute. This creates classes of identical queries (stripped of comments). So queries C<"SELECT c FROM t WHERE id=1"> and C<"SELECT c FROM t WHERE id=1"> are grouped into the same class, but query c<"SELECT c FROM t WHERE id=3"> is not identical to the first two queries so it is grouped into another class. 
Then when L<"--victims"> C is specified, all but the oldest query in each class is killed for each class of queries that matches the match criteria. =item --help Show help and exit. =item --host short form: -h; type: string; default: localhost Connect to host. =item --interval type: time How often to check for queries to kill. If L<"--busy-time"> is not given, then the default interval is 30 seconds. Else the default is half as often as L<"--busy-time">. If both L<"--interval"> and L<"--busy-time"> are given, then the explicit L<"--interval"> value is used. See also L<"--run-time">. =item --log type: string Print all output to this file when daemonized. =item --log-dsn type: DSN Store each query killed in this DSN. The argument specifies a table to store all killed queries. The DSN passed in must have the databse (D) and table (t) options. The table must have at least the following columns. You can add more columns for your own special purposes, but they won't be used by pt-kill. The following CREATE TABLE definition is also used for L<"--create-log-table">. MAGIC_create_log_table: CREATE TABLE kill_log ( kill_id int(10) unsigned NOT NULL AUTO_INCREMENT, server_id bigint(4) NOT NULL DEFAULT '0', timestamp DATETIME, reason TEXT, kill_error TEXT, Id bigint(4) NOT NULL DEFAULT '0', User varchar(16) NOT NULL DEFAULT '', Host varchar(64) NOT NULL DEFAULT '', db varchar(64) DEFAULT NULL, Command varchar(16) NOT NULL DEFAULT '', Time int(7) NOT NULL DEFAULT '0', State varchar(64) DEFAULT NULL, Info longtext, Time_ms bigint(21) DEFAULT '0', # NOTE, TODO: currently not used PRIMARY KEY (kill_id) ) DEFAULT CHARSET=utf8 =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --query-id Prints an ID of the query that was just killed. This is equivalent to the "ID" output of pt-query-digest. This allows cross-referencing the output of both tools. Example: Query ID 0xE9800998ECF8427E Note that this is a digest (or hash) of the query's "fingerprint", so queries of the same form but with different values will have the same ID. See pt-query-digest for more information. =item --rds Denotes the instance in question is on Amazon RDS. By default pt-kill runs the MySQL command "kill" for L<"--kill"> and "kill query" L<"--kill-query">. On RDS these two commands are not available and are replaced by function calls. This option modifies L<"--kill"> to use "CALL mysql.rds_kill(thread-id)" instead and L<"--kill-query"> to use "CALL mysql.rds_kill_query(thread-id)" =item --run-time type: time How long to run before exiting. By default pt-kill runs forever, or until its process is killed or stopped by the creation of a L<"--sentinel"> file. If this option is specified, pt-kill runs for the specified amount of time and sleeps L<"--interval"> seconds between each check of the PROCESSLIST. =item --sentinel type: string; default: /tmp/pt-kill-sentinel Exit if this file exists. The presence of the file specified by L<"--sentinel"> will cause all running instances of pt-kill to exit. 
You might find this handy to stop cron jobs gracefully if necessary. See also L<"--stop">. =item --slave-user type: string Sets the user to be used to connect to the slaves. This parameter allows you to have a different user with less privileges on the slaves but that user must exist on all slaves. =item --slave-password type: string Sets the password to be used to connect to the slaves. It can be used with --slave-user and the password for the user must be the same on all slaves. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the defaultvalue of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --stop Stop running instances by creating the L<"--sentinel"> file. Causes pt-kill to create the sentinel file specified by L<"--sentinel"> and exit. This should have the effect of stopping all running instances which are watching the same sentinel file. =item --[no]strip-comments default: yes Remove SQL comments from queries in the Info column of the PROCESSLIST. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =item --victims type: string; default: oldest Which of the matching queries in each class will be killed. After classes have been matched/filtered, this option specifies which of the matching queries in each class will be killed (or printed, etc.). The following values are possible: =over =item oldest Only kill the single oldest query. This is to prevent killing queries that aren't really long-running, they're just long-waiting. This sorts matching queries by Time and kills the one with the highest Time value. =item all Kill all queries in the class. =item all-but-oldest Kill all but the oldest query. This is the inverse of the C value. This value can be used to prevent "cache stampedes", the condition where several identical queries are executed and create a backlog while the first query attempts to finish. Since all queries are identical, all but the first query are killed so that it can complete and populate the cache. 
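For illustration only (the thresholds here are placeholders, not recommendations), a cache-stampede guard along these lines could combine this value with fingerprint grouping and a backlog threshold:

  pt-kill --group-by fingerprint --match-all --query-count 10 --victims all-but-oldest --print --kill

This groups queries by fingerprint, considers only classes that have accumulated at least 10 identical queries, and then kills every query in such a class except the oldest one, so that it can finish and populate the cache.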
=back =item --wait-after-kill type: time Wait after killing a query, before looking for more to kill. The purpose of this is to give blocked queries a chance to execute, so we don't kill a query that's blocking a bunch of others, and then kill the others immediately afterwards. =item --wait-before-kill type: time Wait before killing a query. The purpose of this is to give L<"--execute-command"> a chance to see the matching query and gather other MySQL or system information before it's killed. =back =head2 QUERY MATCHES These options filter queries from their classes. If a query does not match, it is removed from its class. The C<--ignore> options take precedence. The matches for command, db, host, etc. correspond to the columns returned by SHOW PROCESSLIST: Command, db, Host, etc. All pattern matches are case-sensitive by default, but they can be made case-insensitive by specifying a regex pattern like C<(?i-xsm:select)>. See also L<"GROUP, MATCH AND KILL">. =over =item --busy-time type: time; group: Query Matches Match queries that have been running for longer than this time. The queries must be in Command=Query status. This matches a query's Time value as reported by SHOW PROCESSLIST. =item --idle-time type: time; group: Query Matches Match queries that have been idle/sleeping for longer than this time. The queries must be in Command=Sleep status. This matches a query's Time value as reported by SHOW PROCESSLIST. =item --ignore-command type: string; group: Query Matches Ignore queries whose Command matches this Perl regex. See L<"--match-command">. =item --ignore-db type: string; group: Query Matches Ignore queries whose db (database) matches this Perl regex. See L<"--match-db">. =item --ignore-host type: string; group: Query Matches Ignore queries whose Host matches this Perl regex. See L<"--match-host">. =item --ignore-info type: string; group: Query Matches Ignore queries whose Info (query) matches this Perl regex. See L<"--match-info">. =item --[no]ignore-self default: yes; group: Query Matches Don't kill pt-kill's own connection. =item --ignore-state type: string; group: Query Matches; default: Locked Ignore queries whose State matches this Perl regex. The default is to keep threads from being killed if they are locked waiting for another thread. See L<"--match-state">. =item --ignore-user type: string; group: Query Matches Ignore queries whose user matches this Perl regex. See L<"--match-user">. =item --match-all group: Query Matches Match all queries that are not ignored. If no ignore options are specified, then every query matches (except replication threads, unless L<"--replication-threads"> is also specified). This option allows you to specify negative matches, i.e. "match every query I..." where the exceptions are defined by specifying various C<--ignore> options. This option is I the same as L<"--victims"> C. This option matches all queries within a class, whereas L<"--victims"> C specifies that all matching queries in a class (however they matched) will be killed. Normally, however, the two are used together because if, for example, you specify L<"--victims"> C, then although all queries may match, only the oldest will be killed. =item --match-command type: string; group: Query Matches Match only queries whose Command matches this Perl regex. Common Command values are: Query Sleep Binlog Dump Connect Delayed insert Execute Fetch Init DB Kill Prepare Processlist Quit Reset stmt Table Dump See L for a full list and description of Command values. 
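As a small illustrative combination of these match options (the user name and idle threshold are placeholders only):

  pt-kill --match-command Sleep --match-user app_ro --idle-time 300 --victims all --kill

This kills every connection owned by C<app_ro> that has been idle for more than five minutes. Remember that these matches are case-sensitive Perl regexes; use a pattern like C<(?i-xsm:sleep)> if you need a case-insensitive match.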
=item --match-db type: string; group: Query Matches Match only queries whose db (database) matches this Perl regex. =item --match-host type: string; group: Query Matches Match only queries whose Host matches this Perl regex. The Host value often time includes the port like "host:port". =item --match-info type: string; group: Query Matches Match only queries whose Info (query) matches this Perl regex. The Info column of the processlist shows the query that is being executed or NULL if no query is being executed. =item --match-state type: string; group: Query Matches Match only queries whose State matches this Perl regex. Common State values are: Locked login copy to tmp table Copying to tmp table Copying to tmp table on disk Creating tmp table executing Reading from net Sending data Sorting for order Sorting result Table lock Updating See L for a full list and description of State values. =item --match-user type: string; group: Query Matches Match only queries whose User matches this Perl regex. =item --replication-threads group: Query Matches Allow matching and killing replication threads. By default, matches do not apply to replication threads; i.e. replication threads are completely ignored. Specifying this option allows matches to match (and potentially kill) replication threads on masters and slaves. =item --test-matching type: array; group: Query Matches Files with processlist snapshots to test matching options against. Since the matching options can be complex, you can save snapshots of processlist in files, then test matching options against queries in those files. This option disables L<"--run-time">, L<"--interval">, and L<"--[no]ignore-self">. =back =head2 CLASS MATCHES These matches apply to entire query classes. Classes are created by specifying the L<"--group-by"> option, else all queries are members of a single, default class. See also L<"GROUP, MATCH AND KILL">. =over =item --any-busy-time type: time; group: Class Matches Match query class if any query has been running for longer than this time. "Longer than" means that if you specify C<10>, for example, the class will only match if there's at least one query that has been running for greater than 10 seconds. See L<"--each-busy-time"> for more details. =item --each-busy-time type: time; group: Class Matches Match query class if each query has been running for longer than this time. "Longer than" means that if you specify C<10>, for example, the class will only match if each and every query has been running for greater than 10 seconds. See also L<"--any-busy-time"> (to match a class if ANY query has been running longer than the specified time) and L<"--busy-time">. =item --query-count type: int; group: Class Matches Match query class if it has at least this many queries. When queries are grouped into classes by specifying L<"--group-by">, this option causes matches to apply only to classes with at least this many queries. If L<"--group-by"> is not specified then this option causes matches to apply only if there are at least this many queries in the entire SHOW PROCESSLIST. =item --verbose short form: -v Print information to STDOUT about what is being done. =back =head2 ACTIONS These actions are taken for every matching query from all classes. The actions are taken in this order: L<"--print">, L<"--execute-command">, L<"--kill">/L<"--kill-query">. This order allows L<"--execute-command"> to see the output of L<"--print"> and the query before L<"--kill">/L<"--kill-query">. 
This may be helpful because pt-kill does not pass any information to L<"--execute-command">. See also L<"GROUP, MATCH AND KILL">. =over =item --execute-command type: string; group: Actions Execute this command when a query matches. After the command is executed, pt-kill has no control over it, so the command is responsible for its own info gathering, logging, interval, etc. The command is executed each time a query matches, so be careful that the command behaves well when multiple instances are run. No information from pt-kill is passed to the command. See also L<"--wait-before-kill">. =item --kill group: Actions Kill the connection for matching queries. This option makes pt-kill kill the connections (a.k.a. processes, threads) that have matching queries. Use L<"--kill-query"> if you only want to kill individual queries and not their connections. Unless L<"--print"> is also given, no other information is printed that shows that pt-kill matched and killed a query. See also L<"--wait-before-kill"> and L<"--wait-after-kill">. =item --kill-busy-commands type: string; default: Query; group: Actions Comma-separated list of commands that will be watched/killed if they run for more than L<"--busy-time"> seconds. Default: C<Query>. By default, L<"--busy-time"> kills only C<Query> commands, but in some cases it is necessary to make L<"--busy-time"> watch and kill other commands. For example, a prepared statement execution command is C<Execute> instead of C<Query>. In this case, specifying C<--kill-busy-commands=Query,Execute> will also kill the prepared statement executions. =item --kill-query group: Actions Kill matching queries. This option makes pt-kill kill matching queries. This requires MySQL 5.0 or newer. Unlike L<"--kill"> which kills the connection for matching queries, this option only kills the query, not its connection. =item --print group: Actions Print a KILL statement for matching queries; does not actually kill queries. If you just want to see which queries match and would be killed without actually killing them, specify L<"--print">. To both kill and print matching queries, specify both L<"--kill"> and L<"--print">. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C<option=value>. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L<percona-toolkit> manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file. =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =item * t Table to log actions in, if passed through --log-dsn. =back =head1 ENVIRONMENT The environment variable C<PTDEBUG> enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-kill ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output.
=head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2009-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-kill 3.1.0 =cut percona-toolkit-3.1/bin/pt-mext000775 001750 001750 00000052631 13535723560 017751 0ustar00jenkinsjenkins000000 000000 #!/bin/sh # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. # ########################################################################### # log_warn_die package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/log_warn_die.sh # t/lib/bash/log_warn_die.sh # See https://launchpad.net/percona-toolkit for more information. 
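# (Illustrative summary, not part of the original package: the helpers below
#  print timestamped messages gated by $OPT_VERBOSE -- info() at level 3,
#  log() at level 2, warn() at level 1 and to STDERR -- while die() always
#  prints to STDERR, sets EXIT_STATUS=1 and exits.)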
# ########################################################################### set -u PTFUNCNAME="" PTDEBUG="${PTDEBUG:-""}" EXIT_STATUS=0 ts() { TS=$(date +%F-%T | tr ':-' '_') echo "$TS $*" } info() { [ ${OPT_VERBOSE:-3} -ge 3 ] && ts "$*" } log() { [ ${OPT_VERBOSE:-3} -ge 2 ] && ts "$*" } warn() { [ ${OPT_VERBOSE:-3} -ge 1 ] && ts "$*" >&2 EXIT_STATUS=1 } die() { ts "$*" >&2 EXIT_STATUS=1 exit 1 } _d () { [ "$PTDEBUG" ] && echo "# $PTFUNCNAME: $(ts "$*")" >&2 } # ########################################################################### # End log_warn_die package # ########################################################################### # ########################################################################### # tmpdir package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/tmpdir.sh # t/lib/bash/tmpdir.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u PT_TMPDIR="" mk_tmpdir() { local dir="${1:-""}" if [ -n "$dir" ]; then if [ ! -d "$dir" ]; then mkdir "$dir" || die "Cannot make tmpdir $dir" fi PT_TMPDIR="$dir" else local tool="${0##*/}" local pid="$$" PT_TMPDIR=`mktemp -d -t "${tool}.${pid}.XXXXXX"` \ || die "Cannot make secure tmpdir" fi } rm_tmpdir() { if [ -n "$PT_TMPDIR" ] && [ -d "$PT_TMPDIR" ]; then rm -rf "$PT_TMPDIR" fi PT_TMPDIR="" } # ########################################################################### # End tmpdir package # ########################################################################### # ########################################################################### # parse_options package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/parse_options.sh # t/lib/bash/parse_options.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u ARGV="" # Non-option args (probably input files) EXT_ARGV="" # Everything after -- (args for an external command) HAVE_EXT_ARGV="" # Got --, everything else is put into EXT_ARGV OPT_ERRS=0 # How many command line option errors OPT_VERSION="" # If --version was specified OPT_HELP="" # If --help was specified OPT_ASK_PASS="" # If --ask-pass was specified PO_DIR="" # Directory with program option spec files usage() { local file="$1" local usage="$(grep '^Usage: ' "$file")" echo $usage echo echo "For more information, 'man $TOOL' or 'perldoc $file'." } usage_or_errors() { local file="$1" local version="" if [ "$OPT_VERSION" ]; then version=$(grep '^pt-[^ ]\+ [0-9]' "$file") echo "$version" return 1 fi if [ "$OPT_HELP" ]; then usage "$file" echo echo "Command line options:" echo perl -e ' use strict; use warnings FATAL => qw(all); my $lcol = 20; # Allow this much space for option names. my $rcol = 80 - $lcol; # The terminal is assumed to be 80 chars wide. my $name; while ( <> ) { my $line = $_; chomp $line; if ( $line =~ s/^long:/ --/ ) { $name = $line; } elsif ( $line =~ s/^desc:// ) { $line =~ s/ +$//mg; my @lines = grep { $_ } $line =~ m/(.{0,$rcol})(?:\s+|\Z)/g; if ( length($name) >= $lcol ) { print $name, "\n", (q{ } x $lcol); } else { printf "%-${lcol}s", $name; } print join("\n" . 
(q{ } x $lcol), @lines); print "\n"; } } ' "$PO_DIR"/* echo echo "Options and values after processing arguments:" echo ( cd "$PO_DIR" for opt in *; do local varname="OPT_$(echo "$opt" | tr a-z- A-Z_)" eval local varvalue=\$$varname if ! grep -q "type:" "$PO_DIR/$opt" >/dev/null; then if [ "$varvalue" -a "$varvalue" = "yes" ]; then varvalue="TRUE" else varvalue="FALSE" fi fi printf -- " --%-30s %s" "$opt" "${varvalue:-(No value)}" echo done ) return 1 fi if [ $OPT_ERRS -gt 0 ]; then echo usage "$file" return 1 fi return 0 } option_error() { local err="$1" OPT_ERRS=$(($OPT_ERRS + 1)) echo "$err" >&2 } parse_options() { local file="$1" shift ARGV="" EXT_ARGV="" HAVE_EXT_ARGV="" OPT_ERRS=0 OPT_VERSION="" OPT_HELP="" OPT_ASK_PASS="" PO_DIR="$PT_TMPDIR/po" if [ ! -d "$PO_DIR" ]; then mkdir "$PO_DIR" if [ $? -ne 0 ]; then echo "Cannot mkdir $PO_DIR" >&2 exit 1 fi fi rm -rf "$PO_DIR"/* if [ $? -ne 0 ]; then echo "Cannot rm -rf $PO_DIR/*" >&2 exit 1 fi _parse_pod "$file" # Parse POD into program option (po) spec files _eval_po # Eval po into existence with default values if [ $# -ge 2 ] && [ "$1" = "--config" ]; then shift # --config local user_config_files="$1" shift # that ^ local IFS="," for user_config_file in $user_config_files; do _parse_config_files "$user_config_file" done else _parse_config_files "/etc/percona-toolkit/percona-toolkit.conf" "/etc/percona-toolkit/$TOOL.conf" if [ "${HOME:-}" ]; then _parse_config_files "$HOME/.percona-toolkit.conf" "$HOME/.$TOOL.conf" fi fi _parse_command_line "${@:-""}" } _parse_pod() { local file="$1" PO_FILE="$file" PO_DIR="$PO_DIR" perl -e ' $/ = ""; my $file = $ENV{PO_FILE}; open my $fh, "<", $file or die "Cannot open $file: $!"; while ( defined(my $para = <$fh>) ) { next unless $para =~ m/^=head1 OPTIONS/; while ( defined(my $para = <$fh>) ) { last if $para =~ m/^=head1/; chomp; if ( $para =~ m/^=item --(\S+)/ ) { my $opt = $1; my $file = "$ENV{PO_DIR}/$opt"; open my $opt_fh, ">", $file or die "Cannot open $file: $!"; print $opt_fh "long:$opt\n"; $para = <$fh>; chomp; if ( $para =~ m/^[a-z ]+:/ ) { map { chomp; my ($attrib, $val) = split(/: /, $_); print $opt_fh "$attrib:$val\n"; } split(/; /, $para); $para = <$fh>; chomp; } my ($desc) = $para =~ m/^([^?.]+)/; print $opt_fh "desc:$desc.\n"; close $opt_fh; } } last; } ' } _eval_po() { local IFS=":" for opt_spec in "$PO_DIR"/*; do local opt="" local default_val="" local neg=0 local size=0 while read key val; do case "$key" in long) opt=$(echo $val | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') ;; default) default_val="$val" ;; "short form") ;; type) [ "$val" = "size" ] && size=1 ;; desc) ;; negatable) if [ "$val" = "yes" ]; then neg=1 fi ;; *) echo "Invalid attribute in $opt_spec: $line" >&2 exit 1 esac done < "$opt_spec" if [ -z "$opt" ]; then echo "No long attribute in option spec $opt_spec" >&2 exit 1 fi if [ $neg -eq 1 ]; then if [ -z "$default_val" ] || [ "$default_val" != "yes" ]; then echo "Option $opt_spec is negatable but not default: yes" >&2 exit 1 fi fi if [ $size -eq 1 -a -n "$default_val" ]; then default_val=$(size_to_bytes $default_val) fi eval "OPT_${opt}"="$default_val" done } _parse_config_files() { for config_file in "${@:-""}"; do test -f "$config_file" || continue while read config_opt; do echo "$config_opt" | grep '^[ ]*[^#]' >/dev/null 2>&1 || continue config_opt="$(echo "$config_opt" | sed -e 's/^ *//g' -e 's/ *$//g' -e 's/[ ]*=[ ]*/=/' -e 's/[ ]+#.*$//')" [ "$config_opt" = "" ] && continue echo "$config_opt" | grep -v 'version-check' >/dev/null 2>&1 || continue if ! 
[ "$HAVE_EXT_ARGV" ]; then config_opt="--$config_opt" fi _parse_command_line "$config_opt" done < "$config_file" HAVE_EXT_ARGV="" # reset for each file done } _parse_command_line() { local opt="" local val="" local next_opt_is_val="" local opt_is_ok="" local opt_is_negated="" local real_opt="" local required_arg="" local spec="" for opt in "${@:-""}"; do if [ "$opt" = "--" -o "$opt" = "----" ]; then HAVE_EXT_ARGV=1 continue fi if [ "$HAVE_EXT_ARGV" ]; then if [ "$EXT_ARGV" ]; then EXT_ARGV="$EXT_ARGV $opt" else EXT_ARGV="$opt" fi continue fi if [ "$next_opt_is_val" ]; then next_opt_is_val="" if [ $# -eq 0 ] || [ $(expr "$opt" : "\-") -eq 1 ]; then option_error "$real_opt requires a $required_arg argument" continue fi val="$opt" opt_is_ok=1 else if [ $(expr "$opt" : "\-") -eq 0 ]; then if [ -z "$ARGV" ]; then ARGV="$opt" else ARGV="$ARGV $opt" fi continue fi real_opt="$opt" if $(echo $opt | grep '^--no[^-]' >/dev/null); then local base_opt=$(echo $opt | sed 's/^--no//') if [ -f "$PT_TMPDIR/po/$base_opt" ]; then opt_is_negated=1 opt="$base_opt" else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi else if $(echo $opt | grep '^--no-' >/dev/null); then opt_is_negated=1 opt=$(echo $opt | sed 's/^--no-//') else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi fi if $(echo $opt | grep '^[a-z-][a-z-]*=' >/dev/null 2>&1); then val="$(echo $opt | awk -F= '{print $2}')" opt="$(echo $opt | awk -F= '{print $1}')" fi if [ -f "$PT_TMPDIR/po/$opt" ]; then spec="$PT_TMPDIR/po/$opt" else spec=$(grep "^short form:-$opt\$" "$PT_TMPDIR"/po/* | cut -d ':' -f 1) if [ -z "$spec" ]; then option_error "Unknown option: $real_opt" continue fi fi required_arg=$(cat "$spec" | awk -F: '/^type:/{print $2}') if [ "$required_arg" ]; then if [ "$val" ]; then opt_is_ok=1 else next_opt_is_val=1 fi else if [ "$val" ]; then option_error "Option $real_opt does not take a value" continue fi if [ "$opt_is_negated" ]; then val="" else val="yes" fi opt_is_ok=1 fi fi if [ "$opt_is_ok" ]; then opt=$(cat "$spec" | grep '^long:' | cut -d':' -f2 | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') if grep "^type:size" "$spec" >/dev/null; then val=$(size_to_bytes $val) fi eval "OPT_$opt"="'$val'" opt="" val="" next_opt_is_val="" opt_is_ok="" opt_is_negated="" real_opt="" required_arg="" spec="" fi done } size_to_bytes() { local size="$1" echo $size | perl -ne '%f=(B=>1, K=>1_024, M=>1_048_576, G=>1_073_741_824, T=>1_099_511_627_776); m/^(\d+)([kMGT])?/i; print $1 * $f{uc($2 || "B")};' } # ########################################################################### # End parse_options package # ########################################################################### # ########################################################################### # alt_cmds package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/alt_cmds.sh # t/lib/bash/alt_cmds.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u _seq() { local i="$1" awk "BEGIN { for(i=1; i<=$i; i++) print i; }" } _pidof() { local cmd="$1" if ! pidof "$cmd" 2>/dev/null; then ps -eo pid,ucomm | awk -v comm="$cmd" '$2 == comm { print $1 }' fi } _lsof() { local pid="$1" if ! 
lsof -p $pid 2>/dev/null; then
      /bin/ls -l /proc/$pid/fd 2>/dev/null
   fi
}

_which() {
   if [ -x /usr/bin/which ]; then
      /usr/bin/which "$1" 2>/dev/null | awk '{print $1}'
   elif which which 1>/dev/null 2>&1; then
      which "$1" 2>/dev/null | awk '{print $1}'
   else
      echo "$1"
   fi
}

# ###########################################################################
# End alt_cmds package
# ###########################################################################

TOOL="pt-mext"

# Parse command line options.
mk_tmpdir
parse_options "$0" "${@:-""}"

if [ -z "$OPT_HELP" -a -z "$OPT_VERSION" ]; then
   if [ -z "$EXT_ARGV" ]; then
      option_error "No COMMAND was given."
   fi
fi

usage_or_errors "$0"
po_status=$?
if [ $po_status -ne 0 ]; then
   [ $OPT_ERRS -gt 0 ] && exit 1
   exit 0
fi

FILE="$PT_TMPDIR/mext_temp_file";
NUM=1;

# Split the output on empty lines and put each sample into a different file;
# eliminate lines that don't have "real" content.
$EXT_ARGV | grep -v "| Rsa_public_key *|.*|$" \
   | sed '/| Rsa/, /END PUBLIC KEY-----/d' \
   | grep -v '^ |' | grep -v '+' | grep -v Variable_name | sed 's/|//g' \
   | while read line; do
      if [ "$line" = "" ]; then
         NUM=$(($NUM + 1))
         echo "" > "$FILE$NUM"
      fi
      echo "$line" >> "$FILE$NUM"
   done

SPEC="%-33s %13d"
AWKS=""

# Count how many files there are and prepare to format the output, but...
NUM=`ls "$FILE"* | wc -l`;

# ... iterate through files 1..(N-2), because the last file is empty and
# we join N to N+1, so we also don't read the last real file.
NUM=$((NUM - 2))

# Join each file with the next file, joining on the first field.  Build a
# printf spec and awk spec at the same time.
for i in `_seq $NUM`; do
   NEXTFILE=$(($i + 1))

   # Sort each file and eliminate empty lines, so 'join' doesn't complain.
   sort -s "$FILE$i" | grep . > "$FILE$i.tmp"
   mv "$FILE$i.tmp" "$FILE$i"
   sort -s "$FILE${NEXTFILE}" | grep . > "$FILE${NEXTFILE}.tmp"
   mv "$FILE${NEXTFILE}.tmp" "$FILE${NEXTFILE}"

   # Join the files together.  This gets slow O(n^2) as we add more files,
   # but this really shouldn't be performance critical.
   join "$FILE$i" "$FILE${NEXTFILE}" | grep . > "$FILE"

   # Find the max length of the [numeric only] values in the file so we know
   # how wide to make the columns.
   MAXLEN=`awk '{print $2}' "$FILE${NEXTFILE}" | grep -v '[^0-9]' | awk '{print length($1)}' | sort -rns | head -n1`
   mv "$FILE" "$FILE${NEXTFILE}"
   SPEC="$SPEC %${MAXLEN}d";

   # The final file will contain lines like:
   #
   #   Bytes_received 100 200 50 300
   #
   # For each such line in awk, $1 is the var name and $2 is the first value
   # of the var, so these are fixed when we build AWKCMD after this loop.
   # When i=1, we're comparing file1 to file2, and the resulting value becomes
   # awk $3.  Hence $i + 2 = $3 below.  Then increment and repeat for
   # subsequent files.
   #
   # With --relative, the $i and awk field numbers are the same, but we print
   # differences $3-$2, $4-$3, $5-$4 from the input line for awk fields $3,
   # $4, and $5 respectively.  Here's a table:
   #
   #   i  awk  Input line fields
   #   == ===  =================
   #   1  $3   $3-$2
   #   2  $4   $4-$3
   #   3  $5   $5-$4
   if [ "$OPT_RELATIVE" ]; then
      AWKS="$AWKS, \$`expr $i + 2` - \$`expr $i + 1`";
   else
      AWKS="$AWKS, \$$(($i + 2))";
   fi
done

# Print output
AWKCMD="printf(\"$SPEC\n\", \$1, \$2$AWKS);";
awk "{$AWKCMD}" "$FILE`expr $NUM + 1`"

# Remove all temporary files and the tmp dir.
rm_tmpdir

exit 0

# ############################################################################
# Documentation
# ############################################################################
:<<'DOCUMENTATION'
=pod

=head1 NAME

pt-mext - Look at many samples of MySQL C<SHOW GLOBAL STATUS> side-by-side.

=head1 SYNOPSIS

Usage: pt-mext [OPTIONS] -- COMMAND

pt-mext columnizes repeated output from a program like mysqladmin extended.

Get output from C<mysqladmin>:

   pt-mext -r -- mysqladmin ext -i10 -c3

Get output from a file:

   pt-mext -r -- cat mysqladmin-output.txt

=head1 RISKS

Percona Toolkit is mature, proven in the real world, and well tested, but all
database tools can pose a risk to the system and the database server.  Before
using this tool, please:

=over

=item * Read the tool's documentation

=item * Review the tool's known L<"BUGS">

=item * Test the tool on a non-production server

=item * Backup your production server and verify the backups

=back

=head1 DESCRIPTION

pt-mext executes the C<COMMAND> you specify, and reads through the result one
line at a time.  It places each line into a temporary file.  When it finds a
blank line, it assumes that a new sample of SHOW GLOBAL STATUS is starting,
and it creates a new temporary file.  At the end of this process, it has a
number of temporary files.  It joins the temporary files together
side-by-side and prints the result.  If the L<"--relative"> option is given,
it first subtracts each sample from the one after it before printing results.

=head1 OPTIONS

=over

=item --help

Show help and exit.

=item --relative

short form: -r

Subtract each column from the previous column.

=item --version

Show version and exit.

=back

=head1 ENVIRONMENT

This tool does not use any environment variables.

=head1 SYSTEM REQUIREMENTS

This tool requires the Bourne shell (F</bin/sh>) and the seq program.

=head1 BUGS

For a list of known bugs, see L. Please report bugs at L.
Include the following information in your bug report:

=over

=item * Complete command-line used to run the tool

=item * Tool L<"--version">

=item * MySQL version of all servers involved

=item * Output from the tool including STDERR

=item * Input files (log/dump/config files, etc.)

=back

If possible, include debugging output by running the tool with C<PTDEBUG>;
see L<"ENVIRONMENT">.

=head1 DOWNLOADING

Visit L to download the latest release of Percona Toolkit.  Or, get the
latest release from the command line:

   wget percona.com/get/percona-toolkit.tar.gz

   wget percona.com/get/percona-toolkit.rpm

   wget percona.com/get/percona-toolkit.deb

You can also get individual tools from the latest release:

   wget percona.com/get/TOOL

Replace C<TOOL> with the name of any tool.

=head1 AUTHORS

Baron Schwartz

=head1 ABOUT PERCONA TOOLKIT

This tool is part of Percona Toolkit, a collection of advanced command-line
tools for MySQL developed by Percona.  Percona Toolkit was forked from two
projects in June, 2011: Maatkit and Aspersa.  Those projects were created by
Baron Schwartz and primarily developed by him and Daniel Nichter.  Visit L
to learn about other free, open-source software from Percona.

=head1 COPYRIGHT, LICENSE, AND WARRANTY

This program is copyright 2011-2018 Percona LLC and/or its affiliates,
2010 Baron Schwartz.

THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, version 2; OR the Perl Artistic License.  On UNIX and similar
systems, you can issue `man perlgpl' or `man perlartistic' to read these
licenses.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-mext 3.1.0 =cut DOCUMENTATION percona-toolkit-3.1/bin/pt-mysql-summary000775 001750 001750 00000323121 13535723560 021627 0ustar00jenkinsjenkins000000 000000 #!/bin/sh # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. set -u # ########################################################################### # log_warn_die package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/log_warn_die.sh # t/lib/bash/log_warn_die.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u PTFUNCNAME="" PTDEBUG="${PTDEBUG:-""}" EXIT_STATUS=0 ts() { TS=$(date +%F-%T | tr ':-' '_') echo "$TS $*" } info() { [ ${OPT_VERBOSE:-3} -ge 3 ] && ts "$*" } log() { [ ${OPT_VERBOSE:-3} -ge 2 ] && ts "$*" } warn() { [ ${OPT_VERBOSE:-3} -ge 1 ] && ts "$*" >&2 EXIT_STATUS=1 } die() { ts "$*" >&2 EXIT_STATUS=1 exit 1 } _d () { [ "$PTDEBUG" ] && echo "# $PTFUNCNAME: $(ts "$*")" >&2 } # ########################################################################### # End log_warn_die package # ########################################################################### # ########################################################################### # parse_options package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/parse_options.sh # t/lib/bash/parse_options.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u ARGV="" # Non-option args (probably input files) EXT_ARGV="" # Everything after -- (args for an external command) HAVE_EXT_ARGV="" # Got --, everything else is put into EXT_ARGV OPT_ERRS=0 # How many command line option errors OPT_VERSION="" # If --version was specified OPT_HELP="" # If --help was specified OPT_ASK_PASS="" # If --ask-pass was specified PO_DIR="" # Directory with program option spec files usage() { local file="$1" local usage="$(grep '^Usage: ' "$file")" echo $usage echo echo "For more information, 'man $TOOL' or 'perldoc $file'." } usage_or_errors() { local file="$1" local version="" if [ "$OPT_VERSION" ]; then version=$(grep '^pt-[^ ]\+ [0-9]' "$file") echo "$version" return 1 fi if [ "$OPT_HELP" ]; then usage "$file" echo echo "Command line options:" echo perl -e ' use strict; use warnings FATAL => qw(all); my $lcol = 20; # Allow this much space for option names. my $rcol = 80 - $lcol; # The terminal is assumed to be 80 chars wide. my $name; while ( <> ) { my $line = $_; chomp $line; if ( $line =~ s/^long:/ --/ ) { $name = $line; } elsif ( $line =~ s/^desc:// ) { $line =~ s/ +$//mg; my @lines = grep { $_ } $line =~ m/(.{0,$rcol})(?:\s+|\Z)/g; if ( length($name) >= $lcol ) { print $name, "\n", (q{ } x $lcol); } else { printf "%-${lcol}s", $name; } print join("\n" . 
(q{ } x $lcol), @lines); print "\n"; } } ' "$PO_DIR"/* echo echo "Options and values after processing arguments:" echo ( cd "$PO_DIR" for opt in *; do local varname="OPT_$(echo "$opt" | tr a-z- A-Z_)" eval local varvalue=\$$varname if ! grep -q "type:" "$PO_DIR/$opt" >/dev/null; then if [ "$varvalue" -a "$varvalue" = "yes" ]; then varvalue="TRUE" else varvalue="FALSE" fi fi printf -- " --%-30s %s" "$opt" "${varvalue:-(No value)}" echo done ) return 1 fi if [ $OPT_ERRS -gt 0 ]; then echo usage "$file" return 1 fi return 0 } option_error() { local err="$1" OPT_ERRS=$(($OPT_ERRS + 1)) echo "$err" >&2 } parse_options() { local file="$1" shift ARGV="" EXT_ARGV="" HAVE_EXT_ARGV="" OPT_ERRS=0 OPT_VERSION="" OPT_HELP="" OPT_ASK_PASS="" PO_DIR="$PT_TMPDIR/po" if [ ! -d "$PO_DIR" ]; then mkdir "$PO_DIR" if [ $? -ne 0 ]; then echo "Cannot mkdir $PO_DIR" >&2 exit 1 fi fi rm -rf "$PO_DIR"/* if [ $? -ne 0 ]; then echo "Cannot rm -rf $PO_DIR/*" >&2 exit 1 fi _parse_pod "$file" # Parse POD into program option (po) spec files _eval_po # Eval po into existence with default values if [ $# -ge 2 ] && [ "$1" = "--config" ]; then shift # --config local user_config_files="$1" shift # that ^ local IFS="," for user_config_file in $user_config_files; do _parse_config_files "$user_config_file" done else _parse_config_files "/etc/percona-toolkit/percona-toolkit.conf" "/etc/percona-toolkit/$TOOL.conf" if [ "${HOME:-}" ]; then _parse_config_files "$HOME/.percona-toolkit.conf" "$HOME/.$TOOL.conf" fi fi _parse_command_line "${@:-""}" } _parse_pod() { local file="$1" PO_FILE="$file" PO_DIR="$PO_DIR" perl -e ' $/ = ""; my $file = $ENV{PO_FILE}; open my $fh, "<", $file or die "Cannot open $file: $!"; while ( defined(my $para = <$fh>) ) { next unless $para =~ m/^=head1 OPTIONS/; while ( defined(my $para = <$fh>) ) { last if $para =~ m/^=head1/; chomp; if ( $para =~ m/^=item --(\S+)/ ) { my $opt = $1; my $file = "$ENV{PO_DIR}/$opt"; open my $opt_fh, ">", $file or die "Cannot open $file: $!"; print $opt_fh "long:$opt\n"; $para = <$fh>; chomp; if ( $para =~ m/^[a-z ]+:/ ) { map { chomp; my ($attrib, $val) = split(/: /, $_); print $opt_fh "$attrib:$val\n"; } split(/; /, $para); $para = <$fh>; chomp; } my ($desc) = $para =~ m/^([^?.]+)/; print $opt_fh "desc:$desc.\n"; close $opt_fh; } } last; } ' } _eval_po() { local IFS=":" for opt_spec in "$PO_DIR"/*; do local opt="" local default_val="" local neg=0 local size=0 while read key val; do case "$key" in long) opt=$(echo $val | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') ;; default) default_val="$val" ;; "short form") ;; type) [ "$val" = "size" ] && size=1 ;; desc) ;; negatable) if [ "$val" = "yes" ]; then neg=1 fi ;; *) echo "Invalid attribute in $opt_spec: $line" >&2 exit 1 esac done < "$opt_spec" if [ -z "$opt" ]; then echo "No long attribute in option spec $opt_spec" >&2 exit 1 fi if [ $neg -eq 1 ]; then if [ -z "$default_val" ] || [ "$default_val" != "yes" ]; then echo "Option $opt_spec is negatable but not default: yes" >&2 exit 1 fi fi if [ $size -eq 1 -a -n "$default_val" ]; then default_val=$(size_to_bytes $default_val) fi eval "OPT_${opt}"="$default_val" done } _parse_config_files() { for config_file in "${@:-""}"; do test -f "$config_file" || continue while read config_opt; do echo "$config_opt" | grep '^[ ]*[^#]' >/dev/null 2>&1 || continue config_opt="$(echo "$config_opt" | sed -e 's/^ *//g' -e 's/ *$//g' -e 's/[ ]*=[ ]*/=/' -e 's/[ ]+#.*$//')" [ "$config_opt" = "" ] && continue echo "$config_opt" | grep -v 'version-check' >/dev/null 2>&1 || continue if ! 
[ "$HAVE_EXT_ARGV" ]; then config_opt="--$config_opt" fi _parse_command_line "$config_opt" done < "$config_file" HAVE_EXT_ARGV="" # reset for each file done } _parse_command_line() { local opt="" local val="" local next_opt_is_val="" local opt_is_ok="" local opt_is_negated="" local real_opt="" local required_arg="" local spec="" for opt in "${@:-""}"; do if [ "$opt" = "--" -o "$opt" = "----" ]; then HAVE_EXT_ARGV=1 continue fi if [ "$HAVE_EXT_ARGV" ]; then if [ "$EXT_ARGV" ]; then EXT_ARGV="$EXT_ARGV $opt" else EXT_ARGV="$opt" fi continue fi if [ "$next_opt_is_val" ]; then next_opt_is_val="" if [ $# -eq 0 ] || [ $(expr "$opt" : "\-") -eq 1 ]; then option_error "$real_opt requires a $required_arg argument" continue fi val="$opt" opt_is_ok=1 else if [ $(expr "$opt" : "\-") -eq 0 ]; then if [ -z "$ARGV" ]; then ARGV="$opt" else ARGV="$ARGV $opt" fi continue fi real_opt="$opt" if $(echo $opt | grep '^--no[^-]' >/dev/null); then local base_opt=$(echo $opt | sed 's/^--no//') if [ -f "$PT_TMPDIR/po/$base_opt" ]; then opt_is_negated=1 opt="$base_opt" else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi else if $(echo $opt | grep '^--no-' >/dev/null); then opt_is_negated=1 opt=$(echo $opt | sed 's/^--no-//') else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi fi if $(echo $opt | grep '^[a-z-][a-z-]*=' >/dev/null 2>&1); then val="$(echo $opt | awk -F= '{print $2}')" opt="$(echo $opt | awk -F= '{print $1}')" fi if [ -f "$PT_TMPDIR/po/$opt" ]; then spec="$PT_TMPDIR/po/$opt" else spec=$(grep "^short form:-$opt\$" "$PT_TMPDIR"/po/* | cut -d ':' -f 1) if [ -z "$spec" ]; then option_error "Unknown option: $real_opt" continue fi fi required_arg=$(cat "$spec" | awk -F: '/^type:/{print $2}') if [ "$required_arg" ]; then if [ "$val" ]; then opt_is_ok=1 else next_opt_is_val=1 fi else if [ "$val" ]; then option_error "Option $real_opt does not take a value" continue fi if [ "$opt_is_negated" ]; then val="" else val="yes" fi opt_is_ok=1 fi fi if [ "$opt_is_ok" ]; then opt=$(cat "$spec" | grep '^long:' | cut -d':' -f2 | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') if grep "^type:size" "$spec" >/dev/null; then val=$(size_to_bytes $val) fi eval "OPT_$opt"="'$val'" opt="" val="" next_opt_is_val="" opt_is_ok="" opt_is_negated="" real_opt="" required_arg="" spec="" fi done } size_to_bytes() { local size="$1" echo $size | perl -ne '%f=(B=>1, K=>1_024, M=>1_048_576, G=>1_073_741_824, T=>1_099_511_627_776); m/^(\d+)([kMGT])?/i; print $1 * $f{uc($2 || "B")};' } # ########################################################################### # End parse_options package # ########################################################################### # ########################################################################### # mysql_options package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/mysql_options.sh # t/lib/bash/mysql_options.sh # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### set -u mysql_options() { local MYSQL_ARGS="" if [ -n "$OPT_DEFAULTS_FILE" ]; then MYSQL_ARGS="--defaults-file=$OPT_DEFAULTS_FILE" fi if [ -n "$OPT_PORT" ]; then MYSQL_ARGS="$MYSQL_ARGS --port=$OPT_PORT" fi if [ -n "$OPT_SOCKET" ]; then MYSQL_ARGS="$MYSQL_ARGS --socket=$OPT_SOCKET" fi if [ -n "$OPT_HOST" ]; then MYSQL_ARGS="$MYSQL_ARGS --host=$OPT_HOST" fi if [ -n "$OPT_USER" ]; then MYSQL_ARGS="$MYSQL_ARGS --user=$OPT_USER" fi if [ -n "$OPT_ASK_PASS" ]; then stty -echo >&2 printf "Enter MySQL password: " read GIVEN_PASS stty echo printf "\n" MYSQL_ARGS="$MYSQL_ARGS --password=$GIVEN_PASS" elif [ -n "$OPT_PASSWORD" ]; then MYSQL_ARGS="$MYSQL_ARGS --password=$OPT_PASSWORD" fi echo $MYSQL_ARGS } arrange_mysql_options() { local opts="$1" local rearranged="" for opt in $opts; do if [ "$(echo $opt | awk -F= '{print $1}')" = "--defaults-file" ]; then rearranged="$opt $rearranged" else rearranged="$rearranged $opt" fi done echo "$rearranged" } # ########################################################################### # End mysql_options package # ########################################################################### # ########################################################################### # tmpdir package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/tmpdir.sh # t/lib/bash/tmpdir.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u PT_TMPDIR="" mk_tmpdir() { local dir="${1:-""}" if [ -n "$dir" ]; then if [ ! -d "$dir" ]; then mkdir "$dir" || die "Cannot make tmpdir $dir" fi PT_TMPDIR="$dir" else local tool="${0##*/}" local pid="$$" PT_TMPDIR=`mktemp -d -t "${tool}.${pid}.XXXXXX"` \ || die "Cannot make secure tmpdir" fi } rm_tmpdir() { if [ -n "$PT_TMPDIR" ] && [ -d "$PT_TMPDIR" ]; then rm -rf "$PT_TMPDIR" fi PT_TMPDIR="" } # ########################################################################### # End tmpdir package # ########################################################################### # ########################################################################### # alt_cmds package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/alt_cmds.sh # t/lib/bash/alt_cmds.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u _seq() { local i="$1" awk "BEGIN { for(i=1; i<=$i; i++) print i; }" } _pidof() { local cmd="$1" if ! pidof "$cmd" 2>/dev/null; then ps -eo pid,ucomm | awk -v comm="$cmd" '$2 == comm { print $1 }' fi } _lsof() { local pid="$1" if ! lsof -p $pid 2>/dev/null; then /bin/ls -l /proc/$pid/fd 2>/dev/null fi } _which() { if [ -x /usr/bin/which ]; then /usr/bin/which "$1" 2>/dev/null | awk '{print $1}' elif which which 1>/dev/null 2>&1; then which "$1" 2>/dev/null | awk '{print $1}' else echo "$1" fi } # ########################################################################### # End alt_cmds package # ########################################################################### # ########################################################################### # report_formatting package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/report_formatting.sh # t/lib/bash/report_formatting.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u POSIXLY_CORRECT=1 export POSIXLY_CORRECT fuzzy_formula=' rounded = 0; if (fuzzy_var <= 10 ) { rounded = 1; } factor = 1; while ( rounded == 0 ) { if ( fuzzy_var <= 50 * factor ) { fuzzy_var = sprintf("%.0f", fuzzy_var / (5 * factor)) * 5 * factor; rounded = 1; } else if ( fuzzy_var <= 100 * factor) { fuzzy_var = sprintf("%.0f", fuzzy_var / (10 * factor)) * 10 * factor; rounded = 1; } else if ( fuzzy_var <= 250 * factor) { fuzzy_var = sprintf("%.0f", fuzzy_var / (25 * factor)) * 25 * factor; rounded = 1; } factor = factor * 10; }' fuzz () { awk -v fuzzy_var="$1" "BEGIN { ${fuzzy_formula} print fuzzy_var;}" } fuzzy_pct () { local pct="$(awk -v one="$1" -v two="$2" 'BEGIN{ if (two > 0) { printf "%d", one/two*100; } else {print 0} }')"; echo "$(fuzz "${pct}")%" } section () { local str="$1" awk -v var="${str} _" 'BEGIN { line = sprintf("# %-60s", var); i = index(line, "_"); x = substr(line, i); gsub(/[_ \t]/, "#", x); printf("%s%s\n", substr(line, 1, i-1), x); }' } NAME_VAL_LEN=12 name_val () { printf "%+*s | %s\n" "${NAME_VAL_LEN}" "$1" "$2" } shorten() { local num="$1" local prec="${2:-2}" local div="${3:-1024}" echo "$num" | awk -v prec="$prec" -v div="$div" ' { num = $1; unit = num >= 1125899906842624 ? "P" \ : num >= 1099511627776 ? "T" \ : num >= 1073741824 ? "G" \ : num >= 1048576 ? "M" \ : num >= 1024 ? "k" \ : ""; while ( num >= div ) { num /= div; } printf "%.*f%s", prec, num, unit; } ' } group_concat () { sed -e '{H; $!d;}' -e 'x' -e 's/\n[[:space:]]*\([[:digit:]]*\)[[:space:]]*/, \1x/g' -e 's/[[:space:]][[:space:]]*/ /g' -e 's/, //' "${1}" } # ########################################################################### # End report_formatting package # ########################################################################### # ########################################################################### # summary_common package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/summary_common.sh # t/lib/bash/summary_common.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u CMD_FILE="$( _which file 2>/dev/null )" CMD_NM="$( _which nm 2>/dev/null )" CMD_OBJDUMP="$( _which objdump 2>/dev/null )" get_nice_of_pid () { local pid="$1" local niceness="$(ps -p $pid -o nice | awk '$1 !~ /[^0-9]/ {print $1; exit}')" if [ -n "${niceness}" ]; then echo $niceness else local tmpfile="$PT_TMPDIR/nice_through_c.tmp.c" _d "Getting the niceness from ps failed, somehow. We are about to try this:" cat < "$tmpfile" int main(void) { int priority = getpriority(PRIO_PROCESS, $pid); if ( priority == -1 && errno == ESRCH ) { return 1; } else { printf("%d\\n", priority); return 0; } } EOC local c_comp=$(_which gcc) if [ -z "${c_comp}" ]; then c_comp=$(_which cc) fi _d "$tmpfile: $( cat "$tmpfile" )" _d "$c_comp -xc \"$tmpfile\" -o \"$tmpfile\" && eval \"$tmpfile\"" $c_comp -xc "$tmpfile" -o "$tmpfile" 2>/dev/null && eval "$tmpfile" 2>/dev/null if [ $? -ne 0 ]; then echo "?" 
_d "Failed to get a niceness value for $pid" fi fi } get_oom_of_pid () { local pid="$1" local oom_adj="" if [ -n "${pid}" -a -e /proc/cpuinfo ]; then if [ -s "/proc/$pid/oom_score_adj" ]; then oom_adj=$(cat "/proc/$pid/oom_score_adj" 2>/dev/null) _d "For $pid, the oom value is $oom_adj, retreived from oom_score_adj" else oom_adj=$(cat "/proc/$pid/oom_adj" 2>/dev/null) _d "For $pid, the oom value is $oom_adj, retreived from oom_adj" fi fi if [ -n "${oom_adj}" ]; then echo "${oom_adj}" else echo "?" _d "Can't find the oom value for $pid" fi } has_symbols () { local executable="$(_which "$1")" local has_symbols="" if [ "${CMD_FILE}" ] \ && [ "$($CMD_FILE "${executable}" | grep 'not stripped' )" ]; then has_symbols=1 elif [ "${CMD_NM}" ] \ || [ "${CMD_OBJDMP}" ]; then if [ "${CMD_NM}" ] \ && [ !"$("${CMD_NM}" -- "${executable}" 2>&1 | grep 'File format not recognized' )" ]; then if [ -z "$( $CMD_NM -- "${executable}" 2>&1 | grep ': no symbols' )" ]; then has_symbols=1 fi elif [ -z "$("${CMD_OBJDUMP}" -t -- "${executable}" | grep '^no symbols$' )" ]; then has_symbols=1 fi fi if [ "${has_symbols}" ]; then echo "Yes" else echo "No" fi } setup_data_dir () { local existing_dir="$1" local data_dir="" if [ -z "$existing_dir" ]; then mkdir "$PT_TMPDIR/data" || die "Cannot mkdir $PT_TMPDIR/data" data_dir="$PT_TMPDIR/data" else if [ ! -d "$existing_dir" ]; then mkdir "$existing_dir" || die "Cannot mkdir $existing_dir" elif [ "$( ls -A "$existing_dir" )" ]; then die "--save-samples directory isn't empty, halting." fi touch "$existing_dir/test" || die "Cannot write to $existing_dir" rm "$existing_dir/test" || die "Cannot rm $existing_dir/test" data_dir="$existing_dir" fi echo "$data_dir" } get_var () { local varname="$1" local file="$2" awk -v pattern="${varname}" '$1 == pattern { if (length($2)) { len = length($1); print substr($0, len+index(substr($0, len+1), $2)) } }' "${file}" | tr -d '\r' } # ########################################################################### # End summary_common package # ########################################################################### # ########################################################################### # collect_mysql_info package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/collect_mysql_info.sh # t/lib/bash/collect_mysql_info.sh # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### CMD_MYSQL="${CMD_MYSQL:-""}" CMD_MYSQLDUMP="${CMD_MYSQLDUMP:-""}" collect_mysqld_instances () { local variables_file="$1" local pids="$(_pidof mysqld)" if [ -n "$pids" ]; then for pid in $pids; do local nice="$( get_nice_of_pid $pid )" local oom="$( get_oom_of_pid $pid )" echo "internal::nice_of_$pid $nice" >> "$variables_file" echo "internal::oom_of_$pid $oom" >> "$variables_file" done pids="$(echo $pids | sed -e 's/ /,/g')" ps ww -p "$pids" 2>/dev/null else echo "mysqld doesn't appear to be running" fi } find_my_cnf_file() { local file="$1" local port="${2:-""}" local cnf_file="" if [ "$port" ]; then cnf_file="$(grep --max-count 1 "/mysqld.*--port=$port" "$file" \ | awk 'BEGIN{RS=" "; FS="=";} $1 ~ /--defaults-file/ { print $2; }')" else cnf_file="$(grep --max-count 1 '/mysqld' "$file" \ | awk 'BEGIN{RS=" "; FS="=";} $1 ~ /--defaults-file/ { print $2; }')" fi if [ -z "$cnf_file" ]; then if [ -e "/etc/my.cnf" ]; then cnf_file="/etc/my.cnf" elif [ -e "/etc/mysql/my.cnf" ]; then cnf_file="/etc/mysql/my.cnf" elif [ -e "/var/db/mysql/my.cnf" ]; then cnf_file="/var/db/mysql/my.cnf"; fi fi echo "$cnf_file" } collect_mysql_variables () { $CMD_MYSQL $EXT_ARGV -ss -e 'SHOW /*!40100 GLOBAL*/ VARIABLES' } collect_mysql_status () { $CMD_MYSQL $EXT_ARGV -ss -e 'SHOW /*!50000 GLOBAL*/ STATUS' } collect_mysql_databases () { $CMD_MYSQL $EXT_ARGV -ss -e 'SHOW DATABASES' 2>/dev/null } collect_mysql_plugins () { $CMD_MYSQL $EXT_ARGV -ss -e 'SHOW PLUGINS' 2>/dev/null } collect_mysql_slave_status () { $CMD_MYSQL $EXT_ARGV -ssE -e 'SHOW SLAVE STATUS' 2>/dev/null } collect_mysql_innodb_status () { $CMD_MYSQL $EXT_ARGV -ssE -e 'SHOW /*!50000 ENGINE*/ INNODB STATUS' 2>/dev/null } collect_mysql_ndb_status () { $CMD_MYSQL $EXT_ARGV -ssE -e 'show /*!50000 ENGINE*/ NDB STATUS' 2>/dev/null } collect_mysql_processlist () { $CMD_MYSQL $EXT_ARGV -ssE -e 'SHOW FULL PROCESSLIST' 2>/dev/null } collect_mysql_users () { $CMD_MYSQL $EXT_ARGV -ss -e 'SELECT COUNT(*), SUM(user=""), SUM(password=""), SUM(password NOT LIKE "*%") FROM mysql.user' 2>/dev/null if [ "$?" 
-ne 0 ]; then $CMD_MYSQL $EXT_ARGV -ss -e 'SELECT COUNT(*), SUM(user=""), SUM(authentication_string=""), SUM(authentication_string NOT LIKE "*%") FROM mysql.user WHERE account_locked <> "Y" AND password_expired <> "Y" AND authentication_string <> ""' 2>/dev/null fi } collect_mysql_roles () { QUERY="SELECT DISTINCT User 'Role Name', if(from_user is NULL,0, 1) Active FROM mysql.user LEFT JOIN mysql.role_edges ON from_user=user WHERE account_locked='Y' AND password_expired='Y' AND authentication_string=''\G" $CMD_MYSQL $EXT_ARGV -ss -e "$QUERY" 2>/dev/null } collect_mysql_show_slave_hosts () { $CMD_MYSQL $EXT_ARGV -ssE -e 'SHOW SLAVE HOSTS' 2>/dev/null } collect_master_logs_status () { local master_logs_file="$1" local master_status_file="$2" $CMD_MYSQL $EXT_ARGV -ss -e 'SHOW MASTER LOGS' > "$master_logs_file" 2>/dev/null $CMD_MYSQL $EXT_ARGV -ss -e 'SHOW MASTER STATUS' > "$master_status_file" 2>/dev/null } collect_mysql_deferred_status () { local status_file="$1" collect_mysql_status > "$PT_TMPDIR/defer_gatherer" join "$status_file" "$PT_TMPDIR/defer_gatherer" } collect_internal_vars () { local mysqld_executables="${1:-""}" local FNV_64="" if $CMD_MYSQL $EXT_ARGV -e 'SELECT FNV_64("a")' >/dev/null 2>&1; then FNV_64="Enabled"; else FNV_64="Unknown"; fi local now="$($CMD_MYSQL $EXT_ARGV -ss -e 'SELECT NOW()')" local user="$($CMD_MYSQL $EXT_ARGV -ss -e 'SELECT CURRENT_USER()')" local trigger_count=$($CMD_MYSQL $EXT_ARGV -ss -e "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TRIGGERS" 2>/dev/null) echo "pt-summary-internal-mysql_executable $CMD_MYSQL" echo "pt-summary-internal-now $now" echo "pt-summary-internal-user $user" echo "pt-summary-internal-FNV_64 $FNV_64" echo "pt-summary-internal-trigger_count $trigger_count" if [ -e "$mysqld_executables" ]; then local i=1 while read executable; do echo "pt-summary-internal-mysqld_executable_${i} $(has_symbols "$executable")" i=$(($i + 1)) done < "$mysqld_executables" fi } get_mysqldump_for () { local args="$1" local dbtodump="${2:-"--all-databases"}" $CMD_MYSQLDUMP $EXT_ARGV --no-data --skip-comments \ --skip-add-locks --skip-add-drop-table --compact \ --skip-lock-all-tables --skip-lock-tables --skip-set-charset \ ${args} --databases $(local IFS=,; echo ${dbtodump}) } get_mysqldump_args () { local file="$1" local trg_arg="" if $CMD_MYSQLDUMP --help --verbose 2>&1 | grep triggers >/dev/null; then trg_arg="--routines" fi if [ "${trg_arg}" ]; then local triggers="--skip-triggers" local trg=$(get_var "pt-summary-internal-trigger_count" "$file" ) if [ -n "${trg}" ] && [ "${trg}" -gt 0 ]; then triggers="--triggers" fi trg_arg="${trg_arg} ${triggers}"; fi echo "${trg_arg}" } collect_mysqld_executables () { local mysqld_instances="$1" local ps_opt="cmd=" if [ "$(uname -s)" = "Darwin" ]; then ps_opt="command=" fi for pid in $( grep '/mysqld' "$mysqld_instances" | awk '/^.*[0-9]/{print $1}' ); do ps -o $ps_opt -p $pid | sed -e 's/^\(.*mysqld\) .*/\1/' done | sort -u } collect_mysql_info () { local dir="$1" collect_mysql_variables > "$dir/mysql-variables" collect_mysql_status > "$dir/mysql-status" collect_mysql_databases > "$dir/mysql-databases" collect_mysql_plugins > "$dir/mysql-plugins" collect_mysql_slave_status > "$dir/mysql-slave" collect_mysql_innodb_status > "$dir/innodb-status" collect_mysql_ndb_status > "$dir/ndb-status" collect_mysql_processlist > "$dir/mysql-processlist" collect_mysql_users > "$dir/mysql-users" collect_mysql_roles > "$dir/mysql-roles" collect_mysqld_instances "$dir/mysql-variables" > "$dir/mysqld-instances" collect_mysqld_executables 
"$dir/mysqld-instances" > "$dir/mysqld-executables" collect_mysql_show_slave_hosts "$dir/mysql-slave-hosts" > "$dir/mysql-slave-hosts" local binlog="$(get_var log_bin "$dir/mysql-variables")" if [ "${binlog}" ]; then collect_master_logs_status "$dir/mysql-master-logs" "$dir/mysql-master-status" fi local uptime="$(get_var Uptime "$dir/mysql-status")" local current_time="$($CMD_MYSQL $EXT_ARGV -ss -e \ "SELECT LEFT(NOW() - INTERVAL ${uptime} SECOND, 16)")" local port="$(get_var port "$dir/mysql-variables")" local cnf_file="$(find_my_cnf_file "$dir/mysqld-instances" ${port})" [ -e "$cnf_file" ] && cat "$cnf_file" > "$dir/mysql-config-file" local pid_file="$(get_var "pid_file" "$dir/mysql-variables")" local pid_file_exists="" [ -e "${pid_file}" ] && pid_file_exists=1 echo "pt-summary-internal-pid_file_exists $pid_file_exists" >> "$dir/mysql-variables" echo "pt-summary-internal-current_time $current_time" >> "$dir/mysql-variables" echo "pt-summary-internal-Config_File_path $cnf_file" >> "$dir/mysql-variables" collect_internal_vars "$dir/mysqld-executables" >> "$dir/mysql-variables" if [ "$OPT_DATABASES" -o "$OPT_ALL_DATABASES" ]; then local trg_arg="$(get_mysqldump_args "$dir/mysql-variables")" local dbs="${OPT_DATABASES:-""}" get_mysqldump_for "${trg_arg}" "$dbs" > "$dir/mysqldump" fi ( sleep $OPT_SLEEP collect_mysql_deferred_status "$dir/mysql-status" > "$dir/mysql-status-defer" ) & _d "Forked child is $!" } # ########################################################################### # End collect_mysql_info package # ########################################################################### # ########################################################################### # report_mysql_info package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/report_mysql_info.sh # t/lib/bash/report_mysql_info.sh # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### set -u POSIXLY_CORRECT=1 secs_to_time () { awk -v sec="$1" 'BEGIN { printf( "%d+%02d:%02d:%02d", sec / 86400, (sec % 86400) / 3600, (sec % 3600) / 60, sec % 60); }' } feat_on() { local file="$1" local varname="$2" [ -e "$file" ] || return if [ "$( grep "$varname" "${file}" )" ]; then local var="$(awk "\$1 ~ /^$2$/ { print \$2 }" $file)" if [ "${var}" = "ON" ]; then echo "Enabled" elif [ "${var}" = "OFF" -o "${var}" = "0" -o -z "${var}" ]; then echo "Disabled" elif [ "${3:-""}" = "ne" ]; then if [ "${var}" != "$4" ]; then echo "Enabled" else echo "Disabled" fi elif [ "${3:-""}" = "gt" ]; then if [ "${var}" -gt "$4" ]; then echo "Enabled" else echo "Disabled" fi elif [ "${var}" ]; then echo "Enabled" else echo "Disabled" fi else echo "Not Supported" fi } feat_on_renamed () { local file="$1" shift; for varname in "$@"; do local feat_on="$( feat_on "$file" $varname )" if [ "${feat_on:-"Not Supported"}" != "Not Supported" ]; then echo $feat_on return fi done echo "Not Supported" } get_table_cache () { local file="$1" [ -e "$file" ] || return local table_cache="" if [ "$( get_var table_open_cache "${file}" )" ]; then table_cache="$(get_var table_open_cache "${file}")" else table_cache="$(get_var table_cache "${file}")" fi echo ${table_cache:-0} } get_plugin_status () { local file="$1" local plugin="$2" local status="$(grep -w "$plugin" "$file" | awk '{ print $2 }')" echo ${status:-"Not found"} } collect_keyring_plugins() { $CMD_MYSQL $EXT_ARGV --table -ss -e 'SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME LIKE "keyring%";' } collect_encrypted_tables() { $CMD_MYSQL $EXT_ARGV --table -ss -e "SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION=\"Y\"%';" } collect_encrypted_tablespaces() { $CMD_MYSQL $EXT_ARGV --table -ss -e "SELECT SPACE, NAME, SPACE_TYPE from INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES where FLAG&8192 = 8192;" } _NO_FALSE_NEGATIVES="" parse_mysqld_instances () { local file="$1" local variables_file="$2" local socket="" local port="" local datadir="" local defaults_file="" [ -e "$file" ] || return echo " Port Data Directory Nice OOM Socket" echo " ===== ========================== ==== === ======" grep '/mysqld ' "$file" | while read line; do local pid=$(echo "$line" | awk '{print $1;}') for word in ${line}; do if echo "${word}" | grep -- "--socket=" > /dev/null; then socket="$(echo "${word}" | cut -d= -f2)" fi if echo "${word}" | grep -- "--port=" > /dev/null; then port="$(echo "${word}" | cut -d= -f2)" fi if echo "${word}" | grep -- "--datadir=" > /dev/null; then datadir="$(echo "${word}" | cut -d= -f2)" fi if echo "${word}" | grep -- "--defaults-file=" > /dev/null; then defaults_file="$(echo "${word}" | cut -d= -f2)" fi done if [ -n "${defaults_file:-""}" -a -r "${defaults_file:-""}" ]; then socket="${socket:-"$(grep "^socket\>" "$defaults_file" | tail -n1 | cut -d= -f2 | sed 's/^[ \t]*//;s/[ \t]*$//')"}" port="${port:-"$(grep "^port\>" "$defaults_file" | tail -n1 | cut -d= -f2 | sed 's/^[ \t]*//;s/[ \t]*$//')"}" datadir="${datadir:-"$(grep "^datadir\>" "$defaults_file" | tail -n1 | cut -d= -f2 | sed 's/^[ \t]*//;s/[ \t]*$//')"}" fi local nice="$(get_var "internal::nice_of_$pid" "$variables_file")" local oom="$(get_var "internal::oom_of_$pid" "$variables_file")" if [ -n "${_NO_FALSE_NEGATIVES}" ]; then nice="?" oom="?" 
fi printf " %5s %-26s %-4s %-3s %s\n" "${port}" "${datadir}" "${nice:-"?"}" "${oom:-"?"}" "${socket}" defaults_file="" socket="" port="" datadir="" done } get_mysql_timezone () { local file="$1" [ -e "$file" ] || return local tz="$(get_var time_zone "${file}")" if [ "${tz}" = "SYSTEM" ]; then tz="$(get_var system_time_zone "${file}")" fi echo "${tz}" } get_mysql_version () { local file="$1" name_val Version "$(get_var version "${file}") $(get_var version_comment "${file}")" name_val "Built On" "$(get_var version_compile_os "${file}") $(get_var version_compile_machine "${file}")" } get_mysql_uptime () { local uptime="$1" local restart="$2" uptime="$(secs_to_time ${uptime})" echo "${restart} (up ${uptime})" } summarize_binlogs () { local file="$1" [ -e "$file" ] || return local size="$(awk '{t += $2} END{printf "%0.f\n", t}' "$file")" name_val "Binlogs" $(wc -l "$file") name_val "Zero-Sized" $(grep -c '\<0$' "$file") name_val "Total Size" $(shorten ${size} 1) } format_users () { local file="$1" [ -e "$file" ] || return awk '{printf "%d users, %d anon, %d w/o pw, %d old pw\n", $1, $2, $3, $4}' "${file}" } format_binlog_filters () { local file="$1" [ -e "$file" ] || return name_val "binlog_do_db" "$(cut -f3 "$file")" name_val "binlog_ignore_db" "$(cut -f4 "$file")" } format_status_variables () { local file="$1" [ -e "$file" ] || return utime1="$(awk '/Uptime /{print $2}' "$file")"; utime2="$(awk '/Uptime /{print $3}' "$file")"; awk " BEGIN { utime1 = ${utime1}; utime2 = ${utime2}; udays = utime1 / 86400; udiff = utime2 - utime1; printf(\"%-35s %11s %11s %11s\\n\", \"Variable\", \"Per day\", \"Per second\", udiff \" secs\"); } \$2 ~ /^[0-9]*\$/ { if ( \$2 > 0 && \$2 < 18446744073709551615 ) { if ( udays > 0 ) { fuzzy_var=\$2 / udays; ${fuzzy_formula}; perday=fuzzy_var; } if ( utime1 > 0 ) { fuzzy_var=\$2 / utime1; ${fuzzy_formula}; persec=fuzzy_var; } if ( udiff > 0 ) { fuzzy_var=(\$3 - \$2) / udiff; ${fuzzy_formula}; nowsec=fuzzy_var; } perday = int(perday); persec = int(persec); nowsec = int(nowsec); if ( perday + persec + nowsec > 0 ) { perday_format=\"%11.f\"; persec_format=\"%11.f\"; nowsec_format=\"%11.f\"; if ( perday == 0 ) { perday = \"\"; perday_format=\"%11s\"; } if ( persec == 0 ) { persec = \"\"; persec_format=\"%11s\"; } if ( nowsec == 0 ) { nowsec = \"\"; nowsec_format=\"%11s\"; } format=\"%-35s \" perday_format \" \" persec_format \" \" nowsec_format \"\\n\"; printf(format, \$1, perday, persec, nowsec); } } }" "$file" } summarize_processlist () { local file="$1" [ -e "$file" ] || return for param in Command User Host db State; do echo printf ' %-30s %8s %7s %9s %9s\n' \ "${param}" "COUNT(*)" Working "SUM(Time)" "MAX(Time)" echo " ------------------------------" \ "-------- ------- --------- ---------" cut -c1-80 "$file" \ | awk " \$1 == \"${param}:\" { p = substr(\$0, index(\$0, \":\") + 2); if ( index(p, \":\") > 0 ) { p = substr(p, 1, index(p, \":\") - 1); } if ( length(p) > 30 ) { p = substr(p, 1, 30); } } \$1 == \"Time:\" { t = \$2; if ( t == \"NULL\" ) { t = 0; } } \$1 == \"Command:\" { c = \$2; } \$1 == \"Info:\" { count[p]++; if ( c == \"Sleep\" ) { sleep[p]++; } if ( \"${param}\" == \"Command\" || c != \"Sleep\" ) { time[p] += t; if ( t > mtime[p] ) { mtime[p] = t; } } } END { for ( p in count ) { fuzzy_var=count[p]-sleep[p]; ${fuzzy_formula} fuzzy_work=fuzzy_var; fuzzy_var=count[p]; ${fuzzy_formula} fuzzy_count=fuzzy_var; fuzzy_var=time[p]; ${fuzzy_formula} fuzzy_time=fuzzy_var; fuzzy_var=mtime[p]; ${fuzzy_formula} fuzzy_mtime=fuzzy_var; printf \" %-30s %8d %7d %9d 
%9d\n\", p, fuzzy_count, fuzzy_work, fuzzy_time, fuzzy_mtime; } } " | sort done echo } pretty_print_cnf_file () { local file="$1" [ -e "$file" ] || return perl -n -l -e ' my $line = $_; if ( $line =~ /^\s*[a-zA-Z[]/ ) { if ( $line=~/\s*(.*?)\s*=\s*(.*)\s*$/ ) { printf("%-35s = %s\n", $1, $2) } elsif ( $line =~ /\s*\[/ ) { print "\n$line" } else { print $line } }' "$file" } find_checkpoint_age() { local file="$1" awk ' /Log sequence number/{ if ( $5 ) { lsn = $5 + ($4 * 4294967296); } else { lsn = $4; } } /Last checkpoint at/{ if ( $5 ) { print lsn - ($5 + ($4 * 4294967296)); } else { print lsn - $4; } } ' "$file" } find_pending_io_reads() { local file="$1" [ -e "$file" ] || return awk ' /Pending normal aio reads/ { normal_aio_reads = substr($5, 1, index($5, ",")); } /ibuf aio reads/ { ibuf_aio_reads = substr($4, 1, index($4, ",")); } /pending preads/ { preads = $1; } /Pending reads/ { reads = $3; } END { printf "%d buf pool reads, %d normal AIO", reads, normal_aio_reads; printf ", %d ibuf AIO, %d preads", ibuf_aio_reads, preads; } ' "${file}" } find_pending_io_writes() { local file="$1" [ -e "$file" ] || return awk ' /aio writes/ { aio_writes = substr($NF, 1, index($NF, ",")); } /ibuf aio reads/ { log_ios = substr($7, 1, index($7, ",")); sync_ios = substr($10, 1, index($10, ",")); } /pending log writes/ { log_writes = $1; chkp_writes = $5; } /pending pwrites/ { pwrites = $4; } /Pending writes:/ { lru = substr($4, 1, index($4, ",")); flush_list = substr($7, 1, index($7, ",")); single_page = $NF; } END { printf "%d buf pool (%d LRU, %d flush list, %d page); %d AIO, %d sync, %d log IO (%d log, %d chkp); %d pwrites", lru + flush_list + single_page, lru, flush_list, single_page, aio_writes, sync_ios, log_ios, log_writes, chkp_writes, pwrites; } ' "${file}" } find_pending_io_flushes() { local file="$1" [ -e "$file" ] || return awk ' /Pending flushes/ { log_flushes = substr($5, 1, index($5, ";")); buf_pool = $NF; } END { printf "%d buf pool, %d log", buf_pool, log_flushes; } ' "${file}" } summarize_undo_log_entries() { local file="$1" [ -e "$file" ] || return grep 'undo log entries' "${file}" \ | sed -e 's/^.*undo log entries \([0-9]*\)/\1/' \ | awk ' { count++; sum += $1; if ( $1 > max ) { max = $1; } } END { printf "%d transactions, %d total undo, %d max undo\n", count, sum, max; }' } find_max_trx_time() { local file="$1" [ -e "$file" ] || return awk ' BEGIN { max = 0; } /^---TRANSACTION.* sec,/ { for ( i = 0; i < 7; ++i ) { if ( $i == "sec," ) { j = i-1; if ( max < $j ) { max = $j; } } } } END { print max; }' "${file}" } find_transation_states () { local file="$1" local tmpfile="$PT_TMPDIR/find_transation_states.tmp" [ -e "$file" ] || return awk -F, '/^---TRANSACTION/{print $2}' "${file}" \ | sed -e 's/ [0-9]* sec.*//' \ | sort \ | uniq -c > "${tmpfile}" group_concat "${tmpfile}" } format_innodb_status () { local file=$1 [ -e "$file" ] || return name_val "Checkpoint Age" "$(shorten $(find_checkpoint_age "${file}") 0)" name_val "InnoDB Queue" "$(awk '/queries inside/{print}' "${file}")" name_val "Oldest Transaction" "$(find_max_trx_time "${file}") Seconds"; name_val "History List Len" "$(awk '/History list length/{print $4}' "${file}")" name_val "Read Views" "$(awk '/read views open inside/{print $1}' "${file}")" name_val "Undo Log Entries" "$(summarize_undo_log_entries "${file}")" name_val "Pending I/O Reads" "$(find_pending_io_reads "${file}")" name_val "Pending I/O Writes" "$(find_pending_io_writes "${file}")" name_val "Pending I/O Flushes" "$(find_pending_io_flushes "${file}")" name_val 
"Transaction States" "$(find_transation_states "${file}" )" if grep 'TABLE LOCK table' "${file}" >/dev/null ; then echo "Tables Locked" awk '/^TABLE LOCK table/{print $4}' "${file}" \ | sort | uniq -c | sort -rn fi if grep 'has waited at' "${file}" > /dev/null ; then echo "Semaphore Waits" grep 'has waited at' "${file}" | cut -d' ' -f6-8 \ | sort | uniq -c | sort -rn fi if grep 'reserved it in mode' "${file}" > /dev/null; then echo "Semaphore Holders" awk '/has reserved it in mode/{ print substr($0, 1 + index($0, "("), index($0, ")") - index($0, "(") - 1); }' "${file}" | sort | uniq -c | sort -rn fi if grep -e 'Mutex at' -e 'lock on' "${file}" >/dev/null 2>&1; then echo "Mutexes/Locks Waited For" grep -e 'Mutex at' -e 'lock on' "${file}" | sed -e 's/^[XS]-//' -e 's/,.*$//' \ | sort | uniq -c | sort -rn fi } format_ndb_status() { local file=$1 [ -e "$file" ] || return egrep '^[ \t]*Name:|[ \t]*Status:' $file|sed 's/^[ \t]*//g'|while read line; do echo $line; echo $line | grep '^Status:'>/dev/null && echo ; done } format_keyring_plugins() { local keyring_plugins="$1" local encrypted_tables="$2" if [ -z "$keyring_plugins" ]; then echo "No keyring plugins found" if [ ! -z "$encrypted_tables" ]; then echo "Warning! There are encrypted tables but keyring plugins are not loaded" fi else echo "Keyring plugins:" echo "'$keyring_plugins'" fi } format_encrypted_tables() { local encrypted_tables="$1" if [ ! -z "$encrypted_tables" ]; then echo "Encrypted tables:" echo "$encrypted_tables" fi } format_encrypted_tablespaces() { local encrypted_tablespaces="$1" if [ ! -z "$encrypted_tablespaces" ]; then echo "Encrypted tablespaces:" echo "$encrypted_tablespaces" fi } format_mysql_roles() { local file=$1 [ -e "$file" ] || return cat $file } format_overall_db_stats () { local file="$1" local tmpfile="$PT_TMPDIR/format_overall_db_stats.tmp" [ -e "$file" ] || return echo awk ' BEGIN { db = "{chosen}"; num_dbs = 0; } /^USE `.*`;$/ { db = substr($2, 2, length($2) - 3); if ( db_seen[db]++ == 0 ) { dbs[num_dbs] = db; num_dbs++; } } /^CREATE TABLE/ { if (num_dbs == 0) { num_dbs = 1; db_seen[db] = 1; dbs[0] = db; } counts[db ",tables"]++; } /CREATE ALGORITHM=/ { counts[db ",views"]++; } /03 CREATE.*03 PROCEDURE/ { counts[db ",sps"]++; } /03 CREATE.*03 FUNCTION/ { counts[db ",func"]++; } /03 CREATE.*03 TRIGGER/ { counts[db ",trg"]++; } /FOREIGN KEY/ { counts[db ",fk"]++; } /PARTITION BY/ { counts[db ",partn"]++; } END { mdb = length("Database"); for ( i = 0; i < num_dbs; i++ ) { if ( length(dbs[i]) > mdb ) { mdb = length(dbs[i]); } } fmt = " %-" mdb "s %6s %5s %3s %5s %5s %5s %5s\n"; printf fmt, "Database", "Tables", "Views", "SPs", "Trigs", "Funcs", "FKs", "Partn"; for ( i=0;i "$tmpfile" head -n2 "$tmpfile" tail -n +3 "$tmpfile" | sort echo awk ' BEGIN { db = "{chosen}"; num_dbs = 0; num_engines = 0; } /^USE `.*`;$/ { db = substr($2, 2, length($2) - 3); if ( db_seen[db]++ == 0 ) { dbs[num_dbs] = db; num_dbs++; } } /^\) ENGINE=/ { if (num_dbs == 0) { num_dbs = 1; db_seen[db] = 1; dbs[0] = db; } engine=substr($2, index($2, "=") + 1); if ( engine_seen[tolower(engine)]++ == 0 ) { engines[num_engines] = engine; num_engines++; } counts[db "," engine]++; } END { mdb = length("Database"); for ( i=0;i mdb ) { mdb = length(db); } } fmt = " %-" mdb "s" printf fmt, "Database"; for ( i=0;i "$tmpfile" head -n1 "$tmpfile" tail -n +2 "$tmpfile" | sort echo awk ' BEGIN { db = "{chosen}"; num_dbs = 0; num_idxes = 0; } /^USE `.*`;$/ { db = substr($2, 2, length($2) - 3); if ( db_seen[db]++ == 0 ) { dbs[num_dbs] = db; num_dbs++; } } 
/KEY/ { if (num_dbs == 0) { num_dbs = 1; db_seen[db] = 1; dbs[0] = db; } idx="BTREE"; if ( $0 ~ /SPATIAL/ ) { idx="SPATIAL"; } if ( $0 ~ /FULLTEXT/ ) { idx="FULLTEXT"; } if ( $0 ~ /USING RTREE/ ) { idx="RTREE"; } if ( $0 ~ /USING HASH/ ) { idx="HASH"; } if ( idx_seen[idx]++ == 0 ) { idxes[num_idxes] = idx; num_idxes++; } counts[db "," idx]++; } END { mdb = length("Database"); for ( i=0;i mdb ) { mdb = length(db); } } fmt = " %-" mdb "s" printf fmt, "Database"; for ( i=0;i "$tmpfile" head -n1 "$tmpfile" tail -n +2 "$tmpfile" | sort echo awk ' BEGIN { db = "{chosen}"; num_dbs = 0; num_types = 0; } /^USE `.*`;$/ { db = substr($2, 2, length($2) - 3); if ( db_seen[db]++ == 0 ) { dbs[num_dbs] = db; num_dbs++; } } /^ `/ { if (num_dbs == 0) { num_dbs = 1; db_seen[db] = 1; dbs[0] = db; } str = $0; str = substr(str, index(str, "`") + 1); str = substr(str, index(str, "`") + 2); if ( index(str, " ") > 0 ) { str = substr(str, 1, index(str, " ") - 1); } if ( index(str, ",") > 0 ) { str = substr(str, 1, index(str, ",") - 1); } if ( index(str, "(") > 0 ) { str = substr(str, 1, index(str, "(") - 1); } type = str; if ( type_seen[type]++ == 0 ) { types[num_types] = type; num_types++; } counts[db "," type]++; } END { mdb = length("Database"); for ( i=0;i mdb ) { mdb = length(db); } } fmt = " %-" mdb "s" mtlen = 0; # max type length for ( i=0;i mtlen ) { mtlen = length(type); } } for ( i=1;i<=mtlen;i++ ) { printf " %-" mdb "s", ""; for ( j=0;j length(type) ) { ch = " "; } else { ch = substr(type, i, 1); } printf(" %3s", ch); } print ""; } printf " %-" mdb "s", "Database"; for ( i=0;i "$tmpfile" local hdr=$(grep -n Database "$tmpfile" | cut -d: -f1); head -n${hdr} "$tmpfile" tail -n +$((${hdr} + 1)) "$tmpfile" | sort echo } section_percona_server_features () { local file="$1" [ -e "$file" ] || return name_val "Table & Index Stats" \ "$(feat_on_renamed "$file" userstat_running userstat)" name_val "Multiple I/O Threads" \ "$(feat_on "$file" innodb_read_io_threads gt 1)" name_val "Corruption Resilient" \ "$(feat_on_renamed "$file" innodb_pass_corrupt_table innodb_corrupt_table_action)" name_val "Durable Replication" \ "$(feat_on_renamed "$file" innodb_overwrite_relay_log_info innodb_recovery_update_relay_log)" name_val "Import InnoDB Tables" \ "$(feat_on_renamed "$file" innodb_expand_import innodb_import_table_from_xtrabackup)" name_val "Fast Server Restarts" \ "$(feat_on_renamed "$file" innodb_auto_lru_dump innodb_buffer_pool_restore_at_startup)" name_val "Enhanced Logging" \ "$(feat_on "$file" log_slow_verbosity ne microtime)" name_val "Replica Perf Logging" \ "$(feat_on "$file" log_slow_slave_statements)" name_val "Response Time Hist." 
\ "$(feat_on_renamed "$file" enable_query_response_time_stats query_response_time_stats)" local smooth_flushing="$(feat_on_renamed "$file" innodb_adaptive_checkpoint innodb_adaptive_flushing_method)" if [ "${smooth_flushing:-""}" != "Not Supported" ]; then if [ -n "$(get_var innodb_adaptive_checkpoint "$file")" ]; then smooth_flushing="$(feat_on "$file" "innodb_adaptive_checkpoint" ne none)" else smooth_flushing="$(feat_on "$file" "innodb_adaptive_flushing_method" ne native)" fi fi name_val "Smooth Flushing" "$smooth_flushing" name_val "HandlerSocket NoSQL" \ "$(feat_on "$file" handlersocket_port)" name_val "Fast Hash UDFs" \ "$(get_var "pt-summary-internal-FNV_64" "$file")" } section_myisam () { local variables_file="$1" local status_file="$2" [ -e "$variables_file" -a -e "$status_file" ] || return local buf_size="$(get_var key_buffer_size "$variables_file")" local blk_size="$(get_var key_cache_block_size "$variables_file")" local blk_unus="$(get_var Key_blocks_unused "$status_file")" local blk_unfl="$(get_var Key_blocks_not_flushed "$variables_file")" local unus=$((${blk_unus:-0} * ${blk_size:-0})) local unfl=$((${blk_unfl:-0} * ${blk_size:-0})) local used=$((${buf_size:-0} - ${unus})) name_val "Key Cache" "$(shorten ${buf_size} 1)" name_val "Pct Used" "$(fuzzy_pct ${used} ${buf_size})" name_val "Unflushed" "$(fuzzy_pct ${unfl} ${buf_size})" } section_innodb () { local variables_file="$1" local status_file="$2" [ -e "$variables_file" -a -e "$status_file" ] || return local version=$(get_var innodb_version "$variables_file") name_val Version ${version:-default} local bp_size="$(get_var innodb_buffer_pool_size "$variables_file")" name_val "Buffer Pool Size" "$(shorten "${bp_size:-0}" 1)" local bp_pags="$(get_var Innodb_buffer_pool_pages_total "$status_file")" local bp_free="$(get_var Innodb_buffer_pool_pages_free "$status_file")" local bp_dirt="$(get_var Innodb_buffer_pool_pages_dirty "$status_file")" local bp_fill=$((${bp_pags} - ${bp_free})) name_val "Buffer Pool Fill" "$(fuzzy_pct ${bp_fill} ${bp_pags})" name_val "Buffer Pool Dirty" "$(fuzzy_pct ${bp_dirt} ${bp_pags})" name_val "File Per Table" $(get_var innodb_file_per_table "$variables_file") name_val "Page Size" $(shorten $(get_var Innodb_page_size "$status_file") 0) local log_size="$(get_var innodb_log_file_size "$variables_file")" local log_file="$(get_var innodb_log_files_in_group "$variables_file")" local log_total=$(awk "BEGIN {printf \"%.2f\n\", ${log_size}*${log_file}}" ) name_val "Log File Size" \ "${log_file} * $(shorten ${log_size} 1) = $(shorten ${log_total} 1)" name_val "Log Buffer Size" \ "$(shorten $(get_var innodb_log_buffer_size "$variables_file") 0)" name_val "Flush Method" \ "$(get_var innodb_flush_method "$variables_file")" name_val "Flush Log At Commit" \ "$(get_var innodb_flush_log_at_trx_commit "$variables_file")" name_val "XA Support" \ "$(get_var innodb_support_xa "$variables_file")" name_val "Checksums" \ "$(get_var innodb_checksums "$variables_file")" name_val "Doublewrite" \ "$(get_var innodb_doublewrite "$variables_file")" name_val "R/W I/O Threads" \ "$(get_var innodb_read_io_threads "$variables_file") $(get_var innodb_write_io_threads "$variables_file")" name_val "I/O Capacity" \ "$(get_var innodb_io_capacity "$variables_file")" name_val "Thread Concurrency" \ "$(get_var innodb_thread_concurrency "$variables_file")" name_val "Concurrency Tickets" \ "$(get_var innodb_concurrency_tickets "$variables_file")" name_val "Commit Concurrency" \ "$(get_var innodb_commit_concurrency "$variables_file")" name_val 
"Txn Isolation Level" \ "$(get_var tx_isolation "$variables_file")" name_val "Adaptive Flushing" \ "$(get_var innodb_adaptive_flushing "$variables_file")" name_val "Adaptive Checkpoint" \ "$(get_var innodb_adaptive_checkpoint "$variables_file")" } section_rocksdb () { local variables_file="$1" local status_file="$2" local NAME_VAL_LEN=32 [ -e "$variables_file" -a -e "$status_file" ] || return name_val "Block Cache Size" "$(shorten $(get_var rocksdb_block_cache_size "$variables_file") 0)" name_val "Block Size" "$(shorten $(get_var rocksdb_block_size "$variables_file") 0)" name_val "Bytes Per Sync" "$(shorten $(get_var rocksdb_bytes_per_sync "$variables_file") 0)" name_val "Compaction Seq Deletes " "$(shorten $(get_var rocksdb_compaction_sequential_deletes "$variables_file") 0)" name_val "Compaction Seq Deletes Count SD" "$(get_var rocksdb_compaction_sequential_deletes_count_sd "$variables_file")" name_val "Compaction Seq Deletes Window" "$(shorten $(get_var rocksdb_compaction_sequential_deletes_window "$variables_file") 0)" name_val "Default CF Options" "$(get_var rocksdb_default_cf_options "$variables_file")" name_val "Max Background Jobs" "$(shorten $(get_var rocksdb_max_background_jobs "$variables_file") 0)" name_val "Max Block Cache Size" "$(shorten $(get_var rocksdb_max_block_cache_size "$variables_file") 0)" name_val "Max Block Size" "$(shorten $(get_var rocksdb_max_block_size "$variables_file") 0)" name_val "Max Open Files" "$(shorten $(get_var rocksdb_max_open_files "$variables_file") 0)" name_val "Max Total Wal Size" "$(shorten $(get_var rocksdb_max_total_wal_size "$variables_file") 0)" name_val "Rate Limiter Bytes Per Second" "$(shorten $(get_var rocksdb_rate_limiter_bytes_per_sec "$variables_file") 0)" name_val "Rate Limiter Bytes Per Sync" "$(shorten $(get_var rocksdb_bytes_per_sync "$variables_file") 0)" name_val "Rate Limiter Wal Bytes Per Sync" "$(shorten $(get_var rocksdb_wal_bytes_per_sync "$variables_file") 0)" name_val "Table Cache NumHardBits" "$(shorten $(get_var rocksdb_table_cache_numshardbits "$variables_file") 0)" name_val "Wal Bytes per Sync" "$(shorten $(get_var rocksdb_wal_bytes_per_sync "$variables_file") 0)" } section_noteworthy_variables () { local file="$1" [ -e "$file" ] || return name_val "Auto-Inc Incr/Offset" "$(get_var auto_increment_increment "$file")/$(get_var auto_increment_offset "$file")" for v in \ default_storage_engine flush_time init_connect init_file sql_mode; do name_val "${v}" "$(get_var ${v} "$file")" done for v in \ join_buffer_size sort_buffer_size read_buffer_size read_rnd_buffer_size \ bulk_insert_buffer max_heap_table_size tmp_table_size \ max_allowed_packet thread_stack; do name_val "${v}" "$(shorten $(get_var ${v} "$file") 0)" done for v in log log_error log_warnings log_slow_queries \ log_queries_not_using_indexes log_slave_updates; do name_val "${v}" "$(get_var ${v} "$file")" done } _semi_sync_stats_for () { local target="$1" local file="$2" [ -e "$file" ] || return local semisync_status="$(get_var "Rpl_semi_sync_${target}_status" "${file}" )" local semisync_trace="$(get_var "rpl_semi_sync_${target}_trace_level" "${file}")" local trace_extra="" if [ -n "${semisync_trace}" ]; then if [ $semisync_trace -eq 1 ]; then trace_extra="general (for example, time function failures) " elif [ $semisync_trace -eq 16 ]; then trace_extra="detail (more verbose information) " elif [ $semisync_trace -eq 32 ]; then trace_extra="net wait (more information about network waits)" elif [ $semisync_trace -eq 64 ]; then trace_extra="function (information 
about function entry and exit)" else trace_extra="Unknown setting" fi fi name_val "${target} semisync status" "${semisync_status}" name_val "${target} trace level" "${semisync_trace}, ${trace_extra}" if [ "${target}" = "master" ]; then name_val "${target} timeout in milliseconds" \ "$(get_var "rpl_semi_sync_${target}_timeout" "${file}")" name_val "${target} waits for slaves" \ "$(get_var "rpl_semi_sync_${target}_wait_no_slave" "${file}")" _d "Prepend Rpl_semi_sync_master_ to the following" for v in \ clients net_avg_wait_time net_wait_time net_waits \ no_times no_tx timefunc_failures tx_avg_wait_time \ tx_wait_time tx_waits wait_pos_backtraverse \ wait_sessions yes_tx; do name_val "${target} ${v}" \ "$( get_var "Rpl_semi_sync_master_${v}" "${file}" )" done fi } noncounters_pattern () { local noncounters_pattern="" for var in Compression Delayed_insert_threads Innodb_buffer_pool_pages_data \ Innodb_buffer_pool_pages_dirty Innodb_buffer_pool_pages_free \ Innodb_buffer_pool_pages_latched Innodb_buffer_pool_pages_misc \ Innodb_buffer_pool_pages_total Innodb_data_pending_fsyncs \ Innodb_data_pending_reads Innodb_data_pending_writes \ Innodb_os_log_pending_fsyncs Innodb_os_log_pending_writes \ Innodb_page_size Innodb_row_lock_current_waits Innodb_row_lock_time_avg \ Innodb_row_lock_time_max Key_blocks_not_flushed Key_blocks_unused \ Key_blocks_used Last_query_cost Max_used_connections Ndb_cluster_node_id \ Ndb_config_from_host Ndb_config_from_port Ndb_number_of_data_nodes \ Not_flushed_delayed_rows Open_files Open_streams Open_tables \ Prepared_stmt_count Qcache_free_blocks Qcache_free_memory \ Qcache_queries_in_cache Qcache_total_blocks Rpl_status \ Slave_open_temp_tables Slave_running Ssl_cipher Ssl_cipher_list \ Ssl_ctx_verify_depth Ssl_ctx_verify_mode Ssl_default_timeout \ Ssl_session_cache_mode Ssl_session_cache_size Ssl_verify_depth \ Ssl_verify_mode Ssl_version Tc_log_max_pages_used Tc_log_page_size \ Threads_cached Threads_connected Threads_running \ Uptime_since_flush_status; do if [ -z "${noncounters_pattern}" ]; then noncounters_pattern="${var}" else noncounters_pattern="${noncounters_pattern}\|${var}" fi done echo $noncounters_pattern } section_mysqld () { local executables_file="$1" local variables_file="$2" [ -e "$executables_file" -a -e "$variables_file" ] || return section "MySQL Executable" local i=1; while read executable; do name_val "Path to executable" "$executable" name_val "Has symbols" "$( get_var "pt-summary-internal-mysqld_executable_${i}" "$variables_file" )" i=$(($i + 1)) done < "$executables_file" } section_slave_hosts () { local slave_hosts_file="$1" [ -e "$slave_hosts_file" ] || return section "Slave Hosts" if [ -s "$slave_hosts_file" ]; then cat "$slave_hosts_file" else echo "No slaves found" fi } section_mysql_files () { local variables_file="$1" section "MySQL Files" for file_name in pid_file slow_query_log_file general_log_file log_error; do local file="$(get_var "${file_name}" "$variables_file")" local name_out="$(echo "$file_name" | sed 'y/[a-z]/[A-Z]/')" if [ -e "${file}" ]; then name_val "$name_out" "$file" name_val "${name_out} Size" "$(du "$file" | awk '{print $1}')" else name_val "$name_out" "(does not exist)" fi done } section_percona_xtradb_cluster () { local mysql_var="$1" local mysql_status="$2" name_val "Cluster Name" "$(get_var "wsrep_cluster_name" "$mysql_var")" name_val "Cluster Address" "$(get_var "wsrep_cluster_address" "$mysql_var")" name_val "Cluster Size" "$(get_var "wsrep_cluster_size" "$mysql_status")" name_val "Cluster Nodes" "$(get_var 
"wsrep_incoming_addresses" "$mysql_status")" name_val "Node Name" "$(get_var "wsrep_node_name" "$mysql_var")" name_val "Node Status" "$(get_var "wsrep_cluster_status" "$mysql_status")" name_val "SST Method" "$(get_var "wsrep_sst_method" "$mysql_var")" name_val "Slave Threads" "$(get_var "wsrep_slave_threads" "$mysql_var")" name_val "Ignore Split Brain" "$( parse_wsrep_provider_options "pc.ignore_sb" "$mysql_var" )" name_val "Ignore Quorum" "$( parse_wsrep_provider_options "pc.ignore_quorum" "$mysql_var" )" name_val "gcache Size" "$( parse_wsrep_provider_options "gcache.size" "$mysql_var" )" name_val "gcache Directory" "$( parse_wsrep_provider_options "gcache.dir" "$mysql_var" )" name_val "gcache Name" "$( parse_wsrep_provider_options "gcache.name" "$mysql_var" )" } parse_wsrep_provider_options () { local looking_for="$1" local mysql_var_file="$2" grep wsrep_provider_options "$mysql_var_file" \ | perl -Mstrict -le ' my $provider_opts = scalar(); my $looking_for = $ARGV[0]; my %opts = $provider_opts =~ /(\S+)\s*=\s*(\S*)(?:;|$)/g; print $opts{$looking_for}; ' "$looking_for" } report_jemalloc_enabled() { local JEMALLOC_STATUS='' local GENERAL_JEMALLOC_STATUS=0 local JEMALLOC_LOCATION='' for pid in $(pidof mysqld); do grep -qc jemalloc /proc/${pid}/environ || ldd $(which mysqld) 2>/dev/null | grep -qc jemalloc jemalloc_status=$? if [ $jemalloc_status = 1 ]; then echo "jemalloc is not enabled in mysql config for process with id ${pid}" else echo "jemalloc enabled in mysql config for process with id ${pid}" GENERAL_JEMALLOC_STATUS=1 fi done if [ $GENERAL_JEMALLOC_STATUS -eq 1 ]; then JEMALLOC_LOCATION=$(find /usr/lib64/ /usr/lib/x86_64-linux-gnu /usr/lib -name "libjemalloc.*" 2>/dev/null | head -n 1) if [ -z "$JEMALLOC_LOCATION" ]; then echo "Jemalloc library not found" else echo "Using jemalloc from $JEMALLOC_LOCATION" fi fi } report_mysql_summary () { local dir="$1" local NAME_VAL_LEN=25 section "Percona Toolkit MySQL Summary Report" name_val "System time" "`date -u +'%F %T UTC'` (local TZ: `date +'%Z %z'`)" section "Instances" parse_mysqld_instances "$dir/mysqld-instances" "$dir/mysql-variables" section_mysqld "$dir/mysqld-executables" "$dir/mysql-variables" section_slave_hosts "$dir/mysql-slave-hosts" local user="$(get_var "pt-summary-internal-user" "$dir/mysql-variables")" local port="$(get_var port "$dir/mysql-variables")" local now="$(get_var "pt-summary-internal-now" "$dir/mysql-variables")" section "Report On Port ${port}" name_val User "${user}" name_val Time "${now} ($(get_mysql_timezone "$dir/mysql-variables"))" name_val Hostname "$(get_var hostname "$dir/mysql-variables")" get_mysql_version "$dir/mysql-variables" local uptime="$(get_var Uptime "$dir/mysql-status")" local current_time="$(get_var "pt-summary-internal-current_time" "$dir/mysql-variables")" name_val Started "$(get_mysql_uptime "${uptime}" "${current_time}")" local num_dbs="$(grep -c . 
"$dir/mysql-databases")" name_val Databases "${num_dbs}" name_val Datadir "$(get_var datadir "$dir/mysql-variables")" local fuzz_procs=$(fuzz $(get_var Threads_connected "$dir/mysql-status")) local fuzz_procr=$(fuzz $(get_var Threads_running "$dir/mysql-status")) name_val Processes "${fuzz_procs} connected, ${fuzz_procr} running" local slave="" if [ -s "$dir/mysql-slave" ]; then slave=""; else slave="not "; fi local slavecount=$(grep -c 'Binlog Dump' "$dir/mysql-processlist") name_val Replication "Is ${slave}a slave, has ${slavecount} slaves connected" local pid_file="$(get_var "pid_file" "$dir/mysql-variables")" local PID_EXISTS="" if [ "$( get_var "pt-summary-internal-pid_file_exists" "$dir/mysql-variables" )" ]; then PID_EXISTS="(exists)" else PID_EXISTS="(does not exist)" fi name_val Pidfile "${pid_file} ${PID_EXISTS}" section "Processlist" summarize_processlist "$dir/mysql-processlist" section "Status Counters (Wait ${OPT_SLEEP} Seconds)" wait local noncounters_pattern="$(noncounters_pattern)" format_status_variables "$dir/mysql-status-defer" | grep -v "${noncounters_pattern}" section "Table cache" local open_tables=$(get_var "Open_tables" "$dir/mysql-status") local table_cache=$(get_table_cache "$dir/mysql-variables") name_val Size $table_cache name_val Usage "$(fuzzy_pct ${open_tables} ${table_cache})" section "Key Percona Server features" section_percona_server_features "$dir/mysql-variables" section "Percona XtraDB Cluster" local has_wsrep=$($CMD_MYSQL $EXT_ARGV -ss -e 'show session variables like "%wsrep_on%";' | cut -f2 | grep -i "on") if [ -n "${has_wsrep:-""}" ]; then if [ "${has_wsrep:-""}" = "ON" ]; then section_percona_xtradb_cluster "$dir/mysql-variables" "$dir/mysql-status" else name_val "wsrep_on" "OFF" fi fi section "Plugins" name_val "InnoDB compression" "$(get_plugin_status "$dir/mysql-plugins" "INNODB_CMP")" local has_query_cache=$(get_var have_query_cache "$dir/mysql-variables") if [ "$has_query_cache" = 'YES' ]; then section "Query cache" local query_cache_size=$(get_var query_cache_size "$dir/mysql-variables") local used=$(( ${query_cache_size} - $(get_var Qcache_free_memory "$dir/mysql-status") )) local hrat=$(fuzzy_pct $(get_var Qcache_hits "$dir/mysql-status") $(get_var Qcache_inserts "$dir/mysql-status")) name_val query_cache_type $(get_var query_cache_type "$dir/mysql-variables") name_val Size "$(shorten ${query_cache_size} 1)" name_val Usage "$(fuzzy_pct ${used} ${query_cache_size})" name_val HitToInsertRatio "${hrat}" fi local semisync_enabled_master="$(get_var "rpl_semi_sync_master_enabled" "$dir/mysql-variables")" if [ -n "${semisync_enabled_master}" ]; then section "Semisynchronous Replication" if [ "$semisync_enabled_master" = "OFF" -o "$semisync_enabled_master" = "0" -o -z "$semisync_enabled_master" ]; then name_val "Master" "Disabled" else _semi_sync_stats_for "master" "$dir/mysql-variables" fi local semisync_enabled_slave="$(get_var rpl_semi_sync_slave_enabled "$dir/mysql-variables")" if [ "$semisync_enabled_slave" = "OFF" -o "$semisync_enabled_slave" = "0" -o -z "$semisync_enabled_slave" ]; then name_val "Slave" "Disabled" else _semi_sync_stats_for "slave" "$dir/mysql-variables" fi fi section "Schema" if [ -s "$dir/mysqldump" ] \ && grep 'CREATE TABLE' "$dir/mysqldump" >/dev/null 2>&1; then format_overall_db_stats "$dir/mysqldump" elif [ ! 
-e "$dir/mysqldump" -a "$OPT_READ_SAMPLES" ]; then echo "Skipping schema analysis because --read-samples $dir/mysqldump " \ "does not exist" elif [ -z "$OPT_DATABASES" -a -z "$OPT_ALL_DATABASES" ]; then echo "Specify --databases or --all-databases to dump and summarize schemas" else echo "Skipping schema analysis due to apparent error in dump file" fi section "Noteworthy Technologies" if [ -s "$dir/mysqldump" ]; then if grep FULLTEXT "$dir/mysqldump" > /dev/null; then name_val "Full Text Indexing" "Yes" else name_val "Full Text Indexing" "No" fi if grep 'GEOMETRY\|POINT\|LINESTRING\|POLYGON' "$dir/mysqldump" > /dev/null; then name_val "Geospatial Types" "Yes" else name_val "Geospatial Types" "No" fi if grep 'FOREIGN KEY' "$dir/mysqldump" > /dev/null; then name_val "Foreign Keys" "Yes" else name_val "Foreign Keys" "No" fi if grep 'PARTITION BY' "$dir/mysqldump" > /dev/null; then name_val "Partitioning" "Yes" else name_val "Partitioning" "No" fi if grep -e 'ENGINE=InnoDB.*ROW_FORMAT' \ -e 'ENGINE=InnoDB.*KEY_BLOCK_SIZE' "$dir/mysqldump" > /dev/null; then name_val "InnoDB Compression" "Yes" else name_val "InnoDB Compression" "No" fi fi local ssl="$(get_var Ssl_accepts "$dir/mysql-status")" if [ -n "$ssl" -a "${ssl:-0}" -gt 0 ]; then name_val "SSL" "Yes" else name_val "SSL" "No" fi local lock_tables="$(get_var Com_lock_tables "$dir/mysql-status")" if [ -n "$lock_tables" -a "${lock_tables:-0}" -gt 0 ]; then name_val "Explicit LOCK TABLES" "Yes" else name_val "Explicit LOCK TABLES" "No" fi local delayed_insert="$(get_var Delayed_writes "$dir/mysql-status")" if [ -n "$delayed_insert" -a "${delayed_insert:-0}" -gt 0 ]; then name_val "Delayed Insert" "Yes" else name_val "Delayed Insert" "No" fi local xat="$(get_var Com_xa_start "$dir/mysql-status")" if [ -n "$xat" -a "${xat:-0}" -gt 0 ]; then name_val "XA Transactions" "Yes" else name_val "XA Transactions" "No" fi local ndb_cluster="$(get_var "Ndb_cluster_node_id" "$dir/mysql-status")" if [ -n "$ndb_cluster" -a "${ndb_cluster:-0}" -gt 0 ]; then name_val "NDB Cluster" "Yes" else name_val "NDB Cluster" "No" fi local prep=$(( $(get_var "Com_stmt_prepare" "$dir/mysql-status") + $(get_var "Com_prepare_sql" "$dir/mysql-status") )) if [ "${prep}" -gt 0 ]; then name_val "Prepared Statements" "Yes" else name_val "Prepared Statements" "No" fi local prep_count="$(get_var Prepared_stmt_count "$dir/mysql-status")" if [ "${prep_count}" ]; then name_val "Prepared statement count" "${prep_count}" fi section "InnoDB" local have_innodb="$(get_var "have_innodb" "$dir/mysql-variables")" local innodb_version="$(get_var "innodb_version" "$dir/mysql-variables")" if [ "${have_innodb}" = "YES" ] || [ -n "${innodb_version}" ]; then section_innodb "$dir/mysql-variables" "$dir/mysql-status" if [ -s "$dir/innodb-status" ]; then format_innodb_status "$dir/innodb-status" fi fi local has_rocksdb=$($CMD_MYSQL $EXT_ARGV -ss -e 'SHOW ENGINES' 2>/dev/null | grep -i 'rocksdb') if [ ! 
-z "$has_rocksdb" ]; then section "RocksDB" section_rocksdb "$dir/mysql-variables" "$dir/mysql-status" fi if [ -s "$dir/ndb-status" ]; then section "NDB" format_ndb_status "$dir/ndb-status" fi section "MyISAM" section_myisam "$dir/mysql-variables" "$dir/mysql-status" section "Security" local users="$( format_users "$dir/mysql-users" )" name_val "Users" "${users}" name_val "Old Passwords" "$(get_var old_passwords "$dir/mysql-variables")" if [ -s "$dir/mysql-roles" ]; then section "Roles" format_mysql_roles "$dir/mysql-roles" fi section "Encryption" local keyring_plugins="$(collect_keyring_plugins)" local encrypted_tables="" local encrypted_tablespaces="" if [ "${OPT_LIST_ENCRYPTED_TABLES}" = 'yes' ]; then encrypted_tables="$(collect_encrypted_tables)" encrypted_tablespaces="$(collect_encrypted_tablespaces)" fi format_keyring_plugins "$keyring_plugins" "$encrypted_tables" format_encrypted_tables "$encrypted_tables" format_encrypted_tablespaces "$encrypted_tablespaces" section "Binary Logging" if [ -s "$dir/mysql-master-logs" ] \ || [ -s "$dir/mysql-master-status" ]; then summarize_binlogs "$dir/mysql-master-logs" local format="$(get_var binlog_format "$dir/mysql-variables")" name_val binlog_format "${format:-STATEMENT}" name_val expire_logs_days "$(get_var expire_logs_days "$dir/mysql-variables")" name_val sync_binlog "$(get_var sync_binlog "$dir/mysql-variables")" name_val server_id "$(get_var server_id "$dir/mysql-variables")" format_binlog_filters "$dir/mysql-master-status" fi section "Noteworthy Variables" section_noteworthy_variables "$dir/mysql-variables" section "Configuration File" local cnf_file="$(get_var "pt-summary-internal-Config_File_path" "$dir/mysql-variables")" if [ -n "${cnf_file}" ]; then name_val "Config File" "${cnf_file}" pretty_print_cnf_file "$dir/mysql-config-file" else name_val "Config File" "Cannot autodetect or find, giving up" fi section "Memory management library" report_jemalloc_enabled section "The End" } # ########################################################################### # End report_mysql_info package # ########################################################################### # ######################################################################## # Some global setup is necessary for cross-platform compatibility, even # when sourcing this script for testing purposes. # ######################################################################## TOOL="pt-mysql-summary" # These vars are declared earlier in the collect_mysql_info package, # but if they're still undefined here, try to find them in PATH. [ "$CMD_MYSQL" ] || CMD_MYSQL="$(_which mysql)" [ "$CMD_MYSQLDUMP" ] || CMD_MYSQLDUMP="$( _which mysqldump )" check_mysql () { # Check that mysql and mysqldump are in PATH. If not, we're # already dead in the water, so don't bother with cmd line opts, # just error and exit. [ -n "$(${CMD_MYSQL} --help 2>/dev/null)" ] \ || die "Cannot execute mysql. Check that it is in PATH." [ -n "$(${CMD_MYSQLDUMP} --help 2>/dev/null)" ] \ || die "Cannot execute mysqldump. Check that it is in PATH." # Now that we have the cmd line opts, check that we can actually # connect to MySQL. [ -n "$(${CMD_MYSQL} ${EXT_ARGV} -e 'SHOW STATUS')" ] \ || die "Cannot connect to MySQL. Check that MySQL is running and that the options after -- are correct." } sigtrap() { warn "Caught signal, forcing exit" rm_tmpdir exit $EXIT_STATUS } # ############################################################################## # The main() function is called at the end of the script. 
This makes it # testable. Major bits of parsing are separated into functions for testability. # ############################################################################## main() { # Prepending SIG to these doesn't work with NetBSD's sh trap sigtrap HUP INT TERM local MYSQL_ARGS="$(mysql_options)" EXT_ARGV="$(arrange_mysql_options "$EXT_ARGV $MYSQL_ARGS")" # Check if mysql and mysqldump are there, otherwise bail out early. # But don't if they passed in --read-samples, since we don't need # a connection then. [ "$OPT_READ_SAMPLES" ] || check_mysql local RAN_WITH="--sleep=$OPT_SLEEP --databases=$OPT_DATABASES --save-samples=$OPT_SAVE_SAMPLES" _d "Starting $0 $RAN_WITH" # Begin by setting the $PATH to include some common locations that are not # always in the $PATH, including the "sbin" locations. On SunOS systems, # prefix the path with the location of more sophisticated utilities. export PATH="${PATH}:/usr/local/bin:/usr/bin:/bin:/usr/libexec" export PATH="${PATH}:/usr/mysql/bin/:/usr/local/sbin:/usr/sbin:/sbin" export PATH="/usr/gnu/bin/:/usr/xpg4/bin/:${PATH}" _d "Going to use: mysql=${CMD_MYSQL} mysqldump=${CMD_MYSQLDUMP}" # Create the tmpdir for everything to run in mk_tmpdir # Set DATA_DIR where we'll save collected data files. local data_dir="$(setup_data_dir "${OPT_SAVE_SAMPLES:-""}")" if [ -z "$data_dir" ]; then exit $? fi if [ -n "$OPT_READ_SAMPLES" -a -d "$OPT_READ_SAMPLES" ]; then # --read-samples was set and is a directory, so the samples # will already be there. data_dir="$OPT_READ_SAMPLES" else # ##################################################################### # Fetch most info, leave a child in the background gathering the rest # ##################################################################### collect_mysql_info "${data_dir}" 2>"${data_dir}/collect.err" fi # ######################################################################## # Format and pretty-print the data # ######################################################################## report_mysql_summary "${data_dir}" rm_tmpdir } # Execute the program if it was not included from another file. # This makes it possible to include without executing, and thus test. if [ "${0##*/}" = "$TOOL" ] \ || [ "${0##*/}" = "bash" -a "${_:-""}" = "$0" ]; then # Set up temporary dir. mk_tmpdir # Parse command line options. parse_options "$0" "${@:-""}" # Verify that --sleep, if present, is positive if [ -n "$OPT_SLEEP" ] && [ "$OPT_SLEEP" -lt 0 ]; then option_error "Invalid --sleep value: $sleep" fi usage_or_errors "$0" po_status=$? rm_tmpdir if [ $po_status -ne 0 ]; then [ $OPT_ERRS -gt 0 ] && exit 1 exit 0 fi main "${@:-""}" fi # ############################################################################ # Documentation # ############################################################################ :<<'DOCUMENTATION' =pod =head1 NAME pt-mysql-summary - Summarize MySQL information nicely. =head1 SYNOPSIS Usage: pt-mysql-summary [OPTIONS] pt-mysql-summary conveniently summarizes the status and configuration of a MySQL database server so that you can learn about it at a glance. It is not a tuning tool or diagnosis tool. It produces a report that is easy to diff and can be pasted into emails without losing the formatting. It should work well on any modern UNIX systems. =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. 
Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-mysql-summary works by connecting to a MySQL database server and querying it for status and configuration information. It saves these bits of data into files in a temporary directory, and then formats them neatly with awk and other scripting languages. To use, simply execute it. Optionally add a double dash and then the same command-line options you would use to connect to MySQL, such as the following: pt-mysql-summary --user=root The tool interacts minimally with the server upon which it runs. It assumes that you'll run it on the same server you're inspecting, and therefore it assumes that it will be able to find the my.cnf configuration file, for example. However, it should degrade gracefully if this is not the case. Note, however, that its output does not indicate which information comes from the MySQL database and which comes from the host operating system, so it is possible for confusing output to be generated if you run the tool on one server and connect to a MySQL database server running on another server. =head1 OUTPUT Many of the outputs from this tool are deliberately rounded to show their magnitude but not the exact detail. This is called fuzzy-rounding. The idea is that it does not matter whether a server is running 918 queries per second or 921 queries per second; such a small variation is insignificant, and only makes the output hard to compare to other servers. Fuzzy-rounding rounds in larger increments as the input grows. It begins by rounding to the nearest 5, then the nearest 10, nearest 25, and then repeats by a factor of 10 larger (50, 100, 250), and so on, as the input grows. The following is a sample of the report that the tool produces: # Percona Toolkit MySQL Summary Report ####################### System time | 2012-03-30 18:46:05 UTC (local TZ: EDT -0400) # Instances ################################################## Port Data Directory Nice OOM Socket ===== ========================== ==== === ====== 12345 /tmp/12345/data 0 0 /tmp/12345.sock 12346 /tmp/12346/data 0 0 /tmp/12346.sock 12347 /tmp/12347/data 0 0 /tmp/12347.sock The first two sections show which server the report was generated on and which MySQL instances are running on the server. This is detected from the output of C and does not always detect all instances and parameters, but often works well. From this point forward, the report will be focused on a single MySQL instance, although several instances may appear in the above paragraph. # Report On Port 12345 ####################################### User | msandbox@% Time | 2012-03-30 14:46:05 (EDT) Hostname | localhost.localdomain Version | 5.5.20-log MySQL Community Server (GPL) Built On | linux2.6 i686 Started | 2012-03-28 23:33 (up 1+15:12:09) Databases | 4 Datadir | /tmp/12345/data/ Processes | 2 connected, 2 running Replication | Is not a slave, has 1 slaves connected Pidfile | /tmp/12345/data/12345.pid (exists) This section is a quick summary of the MySQL instance: version, uptime, and other very basic parameters. The Time output is generated from the MySQL server, unlike the system date and time printed earlier, so you can see whether the database and operating system times match. 
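Several of the figures above (the Processes counts, for example) and many numbers in the sections that follow are fuzzy-rounded as described at the beginning of this OUTPUT section. As a rough illustration of the idea only, the fuzzy_round helper below is a hypothetical shell/awk sketch, not the tool's own fuzz() routine, and its cut-off points are simplified:

  # Hypothetical sketch of fuzzy-rounding: round to the nearest 5, 10, or 25,
  # scaling those increments by powers of ten as the input grows.
  fuzzy_round () {
     awk -v n="$1" 'BEGIN {
        factor = 1
        while ( n >= 1000 * factor ) { factor *= 10 }   # pick the decade
        if      ( n < 100 * factor ) { incr = 5  * factor }
        else if ( n < 250 * factor ) { incr = 10 * factor }
        else                         { incr = 25 * factor }
        printf "%d\n", int(n / incr + 0.5) * incr       # nearest increment
     }'
  }
  fuzzy_round 918   # prints 925
  fuzzy_round 921   # also prints 925 -- the small difference disappears

The tool's own fuzz() routine uses different cut-offs (later in this report, for instance, 86400 fuzzy-rounds to 90000), so treat the sketch as illustrative only.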
# Processlist ################################################ Command COUNT(*) Working SUM(Time) MAX(Time) ------------------------------ -------- ------- --------- --------- Binlog Dump 1 1 150000 150000 Query 1 1 0 0 User COUNT(*) Working SUM(Time) MAX(Time) ------------------------------ -------- ------- --------- --------- msandbox 2 2 150000 150000 Host COUNT(*) Working SUM(Time) MAX(Time) ------------------------------ -------- ------- --------- --------- localhost 2 2 150000 150000 db COUNT(*) Working SUM(Time) MAX(Time) ------------------------------ -------- ------- --------- --------- NULL 2 2 150000 150000 State COUNT(*) Working SUM(Time) MAX(Time) ------------------------------ -------- ------- --------- --------- Master has sent all binlog to 1 1 150000 150000 NULL 1 1 0 0 This section is a summary of the output from SHOW PROCESSLIST. Each sub-section is aggregated by a different item, which is shown as the first column heading. When summarized by Command, every row in SHOW PROCESSLIST is included, but otherwise, rows whose Command is Sleep are excluded from the SUM and MAX columns, so they do not skew the numbers too much. In the example shown, the server is idle except for this tool itself, and one connected replica, which is executing Binlog Dump. The columns are the number of rows included, the number that are not in Sleep status, the sum of the Time column, and the maximum Time column. The numbers are fuzzy-rounded. # Status Counters (Wait 10 Seconds) ########################## Variable Per day Per second 10 secs Binlog_cache_disk_use 4 Binlog_cache_use 80 Bytes_received 15000000 175 200 Bytes_sent 15000000 175 2000 Com_admin_commands 1 ...................(many lines omitted)............................ Threads_created 40 1 Uptime 90000 1 1 This section shows selected counters from two snapshots of SHOW GLOBAL STATUS, gathered approximately 10 seconds apart and fuzzy-rounded. It includes only items that are incrementing counters; it does not include absolute numbers such as the Threads_running status variable, which represents a current value, rather than an accumulated number over time. The first column is the variable name, and the second column is the counter from the first snapshot divided by 86400 (the number of seconds in a day), so you can see the magnitude of the counter's change per day. 86400 fuzzy-rounds to 90000, so the Uptime counter should always be about 90000. The third column is the value from the first snapshot, divided by Uptime and then fuzzy-rounded, so it represents approximately how quickly the counter is growing per-second over the uptime of the server. The third column is the incremental difference from the first and second snapshot, divided by the difference in uptime and then fuzzy-rounded. Therefore, it shows how quickly the counter is growing per second at the time the report was generated. # Table cache ################################################ Size | 400 Usage | 15% This section shows the size of the table cache, followed by the percentage of the table cache in use. The usage is fuzzy-rounded. # Key Percona Server features ################################ Table & Index Stats | Not Supported Multiple I/O Threads | Enabled Corruption Resilient | Not Supported Durable Replication | Not Supported Import InnoDB Tables | Not Supported Fast Server Restarts | Not Supported Enhanced Logging | Not Supported Replica Perf Logging | Not Supported Response Time Hist. 
| Not Supported Smooth Flushing | Not Supported HandlerSocket NoSQL | Not Supported Fast Hash UDFs | Unknown This section shows features that are available in Percona Server and whether they are enabled or not. In the example shown, the server is standard MySQL, not Percona Server, so the features are generally not supported. # Plugins #################################################### InnoDB compression | ACTIVE This feature shows specific plugins and whether they are enabled. # Query cache ################################################ query_cache_type | ON Size | 0.0 Usage | 0% HitToInsertRatio | 0% This section shows whether the query cache is enabled and its size, followed by the percentage of the cache in use and the hit-to-insert ratio. The latter two are fuzzy-rounded. # Schema ##################################################### Database Tables Views SPs Trigs Funcs FKs Partn mysql 24 performance_schema 17 sakila 16 7 3 6 3 22 Database MyISAM CSV PERFORMANCE_SCHEMA InnoDB mysql 22 2 performance_schema 17 sakila 8 15 Database BTREE FULLTEXT mysql 31 performance_schema sakila 63 1 c t s e l d i t m v s h i e n o a n i e a m a m t u n t t n d r a r e m g e y i c l s b t i u h l t l i n m a i a o m t t r n m b e e t p x t Database === === === === === === === === === === === mysql 61 10 6 78 5 4 26 3 4 5 3 performance_schema 5 16 33 sakila 1 15 1 3 4 3 19 42 26 If you specify L<"--databases"> or L<"--all-databases">, the tool will print the above section. This summarizes the number and type of objects in the databases. It is generated by running C, not by querying the INFORMATION_SCHEMA, which can freeze a busy server. The first sub-report in the section is the count of objects by type in each database: tables, views, and so on. The second one shows how many tables use various storage engines in each database. The third sub-report shows the number of each type of indexes in each database. The last section shows the number of columns of various data types in each database. For compact display, the column headers are formatted vertically, so you need to read downwards from the top. In this example, the first column is C and the second column is C. This example is truncated so it does not wrap on a terminal. All of the numbers in this portion of the output are exact, not fuzzy-rounded. # Noteworthy Technologies #################################### Full Text Indexing | Yes Geospatial Types | No Foreign Keys | Yes Partitioning | No InnoDB Compression | Yes SSL | No Explicit LOCK TABLES | No Delayed Insert | No XA Transactions | No NDB Cluster | No Prepared Statements | No Prepared statement count | 0 This section shows some specific technologies used on this server. Some of them are detected from the schema dump performed for the previous sections; others can be detected by looking at SHOW GLOBAL STATUS. 
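In practice the detection is straightforward: schema-level features are found by searching the mysqldump output collected earlier, and runtime features by checking the corresponding status counters. The following sketch is condensed from the report code earlier in this file; name_val and get_var are the tool's own helpers, and $dir stands for the directory of collected samples:

  # Schema-level feature: search the saved mysqldump output.
  if grep -q 'FOREIGN KEY' "$dir/mysqldump"; then
     name_val "Foreign Keys" "Yes"
  else
     name_val "Foreign Keys" "No"
  fi

  # Runtime feature: look at a counter from SHOW GLOBAL STATUS.
  ssl_accepts="$(get_var Ssl_accepts "$dir/mysql-status")"
  if [ -n "$ssl_accepts" ] && [ "${ssl_accepts:-0}" -gt 0 ]; then
     name_val "SSL" "Yes"
  else
     name_val "SSL" "No"
  fi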
# InnoDB ##################################################### Version | 1.1.8 Buffer Pool Size | 16.0M Buffer Pool Fill | 100% Buffer Pool Dirty | 0% File Per Table | OFF Page Size | 16k Log File Size | 2 * 5.0M = 10.0M Log Buffer Size | 8M Flush Method | Flush Log At Commit | 1 XA Support | ON Checksums | ON Doublewrite | ON R/W I/O Threads | 4 4 I/O Capacity | 200 Thread Concurrency | 0 Concurrency Tickets | 500 Commit Concurrency | 0 Txn Isolation Level | REPEATABLE-READ Adaptive Flushing | ON Adaptive Checkpoint | Checkpoint Age | 0 InnoDB Queue | 0 queries inside InnoDB, 0 queries in queue Oldest Transaction | 0 Seconds History List Len | 209 Read Views | 1 Undo Log Entries | 1 transactions, 1 total undo, 1 max undo Pending I/O Reads | 0 buf pool reads, 0 normal AIO, 0 ibuf AIO, 0 preads Pending I/O Writes | 0 buf pool (0 LRU, 0 flush list, 0 page); 0 AIO, 0 sync, 0 log IO (0 log, 0 chkp); 0 pwrites Pending I/O Flushes | 0 buf pool, 0 log Transaction States | 1xnot started This section shows important configuration variables for the InnoDB storage engine. The buffer pool fill percent and dirty percent are fuzzy-rounded. The last few lines are derived from the output of SHOW INNODB STATUS. It is likely that this output will change in the future to become more useful. # MyISAM ##################################################### Key Cache | 16.0M Pct Used | 10% Unflushed | 0% This section shows the size of the MyISAM key cache, followed by the percentage of the cache in use and percentage unflushed (fuzzy-rounded). # Security ################################################### Users | 2 users, 0 anon, 0 w/o pw, 0 old pw Old Passwords | OFF This section is generated from queries to tables in the mysql system database. It shows how many users exist, and various potential security risks such as old-style passwords and users without passwords. # Binary Logging ############################################# Binlogs | 1 Zero-Sized | 0 Total Size | 21.8M binlog_format | STATEMENT expire_logs_days | 0 sync_binlog | 0 server_id | 12345 binlog_do_db | binlog_ignore_db | This section shows configuration and status of the binary logs. If there are zero-sized binary logs, then it is possible that the binlog index is out of sync with the binary logs that actually exist on disk. # Noteworthy Variables ####################################### Auto-Inc Incr/Offset | 1/1 default_storage_engine | InnoDB flush_time | 0 init_connect | init_file | sql_mode | join_buffer_size | 128k sort_buffer_size | 2M read_buffer_size | 128k read_rnd_buffer_size | 256k bulk_insert_buffer | 0.00 max_heap_table_size | 16M tmp_table_size | 16M max_allowed_packet | 1M thread_stack | 192k log | OFF log_error | /tmp/12345/data/mysqld.log log_warnings | 1 log_slow_queries | ON log_queries_not_using_indexes | OFF log_slave_updates | ON This section shows several noteworthy server configuration variables that might be important to know about when working with this server. 
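Looking back at the MyISAM section above, the Pct Used figure is derived from key_buffer_size, key_cache_block_size, and the Key_blocks_unused status counter. The worked example below uses made-up numbers chosen to match the sample report; the arithmetic mirrors the tool's section_myisam routine:

  # Hypothetical values; the arithmetic mirrors section_myisam above.
  buf_size=16777216      # key_buffer_size           (16.0M)
  blk_size=1024          # key_cache_block_size
  blk_unused=14700       # Key_blocks_unused, from SHOW GLOBAL STATUS
  unused=$((blk_unused * blk_size))   # bytes of the cache holding no data
  used=$((buf_size - unused))         # 1,724,416 bytes
  echo "Pct Used: $((100 * used / buf_size))%"   # prints "Pct Used: 10%"

The reported value is then fuzzy-rounded, so small fluctuations in Key_blocks_unused do not change the report.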
# Configuration File ######################################### Config File | /tmp/12345/my.sandbox.cnf [client] user = msandbox password = msandbox port = 12345 socket = /tmp/12345/mysql_sandbox12345.sock [mysqld] port = 12345 socket = /tmp/12345/mysql_sandbox12345.sock pid-file = /tmp/12345/data/mysql_sandbox12345.pid basedir = /home/baron/5.5.20 datadir = /tmp/12345/data key_buffer_size = 16M innodb_buffer_pool_size = 16M innodb_data_home_dir = /tmp/12345/data innodb_log_group_home_dir = /tmp/12345/data innodb_data_file_path = ibdata1:10M:autoextend innodb_log_file_size = 5M log-bin = mysql-bin relay_log = mysql-relay-bin log_slave_updates server-id = 12345 report-host = 127.0.0.1 report-port = 12345 log-error = mysqld.log innodb_lock_wait_timeout = 3 # The End #################################################### This section shows a pretty-printed version of the my.cnf file, with comments removed and with whitespace added to align things for easy reading. The tool tries to detect the my.cnf file by looking at the output of ps, and if it does not find the location of the file there, it tries common locations until it finds a file. Note that this file might not actually correspond with the server from which the report was generated. This can happen when the tool isn't run on the same server it's reporting on, or when detecting the location of the configuration file fails. =head1 OPTIONS All options after -- are passed to C. =over =item --all-databases mysqldump and summarize all databases. See L<"--databases">. =item --ask-pass Prompt for a password when connecting to MySQL. =item --config type: string Read this comma-separated list of config files. If specified, this must be the first option on the command line. =item --databases type: string mysqldump and summarize this comma-separated list of databases. Specify L<"--all-databases"> instead if you want to dump and summary all databases. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --help Print help and exit. =item --host short form: -h; type: string Host to connect to. =item --list-encrypted-tables default: false Include a list of the encrypted tables in all databases. This can cause slowdowns since querying Information Schema tables can be slow. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --port short form: -P; type: int Port number to use for connection. =item --read-samples type: string Create a report from the files found in this directory. =item --save-samples type: string Save the data files used to generate the summary in this directory. =item --sleep type: int; default: 10 Seconds to sleep when gathering status counters. =item --socket short form: -S; type: string Socket file to use for connection. =item --user short form: -u; type: string User for login if not current user. =item --version Print tool's version and exit. =back =head1 ENVIRONMENT This tool does not use any environment variables. =head1 SYSTEM REQUIREMENTS This tool requires Bash v3 or newer, Perl 5.8 or newer, and binutils. These are generally already provided by most distributions. On BSD systems, it may require a mounted procfs. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. 
Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz, Brian Fraser, and Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2010-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-mysql-summary 3.1.0 =cut DOCUMENTATION percona-toolkit-3.1/bin/pt-online-schema-change000775 001750 001750 00001501764 13535723560 022750 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit VersionCompare OptionParser Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo VersionParser DSNParser Daemon Quoter TableNibbler TableParser Progress Retry Cxn MasterSlave ReplicaLagWaiter FlowControlWaiter MySQLStatusWaiter WeightedAvgRate NibbleIterator Transformers CleanupTask IndexLength HTTP::Micro VersionCheck Percona::XtraDB::Cluster )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # # ########################################################################### # VersionCompare package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCompare.pm # t/lib/VersionCompare.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VersionCompare; use strict; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub cmp { my ($v1, $v2) = @_; $v1 =~ s/[^\d\.]//; $v2 =~ s/[^\d\.]//; my @a = ( $v1 =~ /(\d+)\.?/g ); my @b = ( $v2 =~ /(\d+)\.?/g ); foreach my $n1 (@a) { $n1 += 0; #convert to number if (!@b) { return 1; } my $n2 = shift @b; $n2 += 0; # convert to number if ($n1 == $n2) { next; } else { return $n1 <=> $n2; } } return @b ? -1 : 0; } 1; } # ########################################################################### # End VersionCompare package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
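# _parse_attribs turns an option's attributes into a Getopt::Long spec:
# for instance, an option "where" with short form "w" and a type that
# maps to "s" in $self->{types} becomes "where|w=s", while negatable and
# cumulative options get "!" and "+" added.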
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . 
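# e.g. _stash_for('Foo::Bar') returns a reference to the %Foo::Bar::
# symbol table stash, which _unimport_coderefs below uses to find and
# delete package subs by name.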
"::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! 
exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq . qq . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! 
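# A Maybe[...] constraint accepts undef outright (the early return just
# below); only defined values reach the inner check, so e.g. Maybe[Int]
# allows either undef or an integer.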
defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
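# A handles target given as an array ref is treated as a curried call
# (illustrative names): handles => { log_debug => [ 'log', 'debug' ] }
# creates a log_debug() method that calls $delegate->log('debug', @args).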
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # VersionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionParser.pm # t/lib/VersionParser.t # See https://launchpad.net/percona-toolkit for more information. 
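#
# A minimal usage sketch (illustrative only; the version string is made
# up, and BUILDARGS also accepts a DBI handle and queries the server
# itself):
#
#   my $v = VersionParser->new('5.7.26-29-log');
#   $v->major;                 # 5
#   $v->series;                # "5.7"
#   $v->normalized_version;    # "50726"
#   if ( $v >= '5.6' ) { ... } # comparisons use the overloaded <=>/cmp
#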
# ########################################################################### { package VersionParser; use Lmo; use Scalar::Util qw(blessed); use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use overload ( '""' => "version", '<=>' => "cmp", 'cmp' => "cmp", fallback => 1, ); use Carp (); has major => ( is => 'ro', isa => 'Int', required => 1, ); has [qw( minor revision )] => ( is => 'ro', isa => 'Num', ); has flavor => ( is => 'ro', isa => 'Str', default => sub { 'Unknown' }, ); has innodb_version => ( is => 'ro', isa => 'Str', default => sub { 'NO' }, ); sub series { my $self = shift; return $self->_join_version($self->major, $self->minor); } sub version { my $self = shift; return $self->_join_version($self->major, $self->minor, $self->revision); } sub is_in { my ($self, $target) = @_; return $self eq $target; } sub _join_version { my ($self, @parts) = @_; return join ".", map { my $c = $_; $c =~ s/^0\./0/; $c } grep defined, @parts; } sub _split_version { my ($self, $str) = @_; my @version_parts = map { s/^0(?=\d)/0./; $_ } $str =~ m/(\d+)/g; return @version_parts[0..2]; } sub normalized_version { my ( $self ) = @_; my $result = sprintf('%d%02d%02d', map { $_ || 0 } $self->major, $self->minor, $self->revision); PTDEBUG && _d($self->version, 'normalizes to', $result); return $result; } sub comment { my ( $self, $cmd ) = @_; my $v = $self->normalized_version(); return "/*!$v $cmd */" } my @methods = qw(major minor revision); sub cmp { my ($left, $right) = @_; my $right_obj = (blessed($right) && $right->isa(ref($left))) ? $right : ref($left)->new($right); my $retval = 0; for my $m ( @methods ) { last unless defined($left->$m) && defined($right_obj->$m); $retval = $left->$m <=> $right_obj->$m; last if $retval; } return $retval; } sub BUILDARGS { my $self = shift; if ( @_ == 1 ) { my %args; if ( blessed($_[0]) && $_[0]->can("selectrow_hashref") ) { PTDEBUG && _d("VersionParser got a dbh, trying to get the version"); my $dbh = $_[0]; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $query = eval { $dbh->selectall_arrayref(q/SHOW VARIABLES LIKE 'version%'/, { Slice => {} }) }; if ( $query ) { $query = { map { $_->{variable_name} => $_->{value} } @$query }; @args{@methods} = $self->_split_version($query->{version}); $args{flavor} = delete $query->{version_comment} if $query->{version_comment}; } elsif ( eval { ($query) = $dbh->selectrow_array(q/SELECT VERSION()/) } ) { @args{@methods} = $self->_split_version($query); } else { Carp::confess("Couldn't get the version from the dbh while " . "creating a VersionParser object: $@"); } $args{innodb_version} = eval { $self->_innodb_version($dbh) }; } elsif ( !ref($_[0]) ) { @args{@methods} = $self->_split_version($_[0]); } for my $method (@methods) { delete $args{$method} unless defined $args{$method}; } @_ = %args if %args; } return $self->SUPER::BUILDARGS(@_); } sub _innodb_version { my ( $self, $dbh ) = @_; return unless $dbh; my $innodb_version = "NO"; my ($innodb) = grep { $_->{engine} =~ m/InnoDB/i } map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{ $dbh->selectall_arrayref("SHOW ENGINES", {Slice=>{}}) }; if ( $innodb ) { PTDEBUG && _d("InnoDB support:", $innodb->{support}); if ( $innodb->{support} =~ m/YES|DEFAULT/i ) { my $vars = $dbh->selectrow_hashref( "SHOW VARIABLES LIKE 'innodb_version'"); $innodb_version = !$vars ? 
"BUILTIN" : ($vars->{Value} || $vars->{value}); } else { $innodb_version = $innodb->{support}; # probably DISABLED or NO } } PTDEBUG && _d("InnoDB version:", $innodb_version); return $innodb_version; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End VersionParser package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? ($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." 
unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? ';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . 
" RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . ": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? 
$dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. 
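#
# A minimal usage sketch (the file paths are illustrative only):
#
#   my $daemon = Daemon->new(
#      daemonize => 1,
#      pid_file  => '/var/run/some-tool.pid',
#      log_file  => '/var/log/some-tool.log',
#   );
#   $daemon->run();   # parent exits; the child calls setsid(), chdirs to
#                     # "/" and redirects STDIN/STDOUT/STDERR as configured
#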
# ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); use Fcntl qw(:DEFAULT); sub new { my ($class, %args) = @_; my $self = { log_file => $args{log_file}, pid_file => $args{pid_file}, daemonize => $args{daemonize}, force_log_file => $args{force_log_file}, parent_exit => $args{parent_exit}, pid_file_owner => 0, }; return bless $self, $class; } sub run { my ($self) = @_; my $daemonize = $self->{daemonize}; my $pid_file = $self->{pid_file}; my $log_file = $self->{log_file}; my $force_log_file = $self->{force_log_file}; my $parent_exit = $self->{parent_exit}; PTDEBUG && _d('Starting daemon'); if ( $pid_file ) { eval { $self->_make_pid_file( pid => $PID, # parent's pid pid_file => $pid_file, ); }; die "$EVAL_ERROR\n" if $EVAL_ERROR; if ( !$daemonize ) { $self->{pid_file_owner} = $PID; # parent's pid } } if ( $daemonize ) { defined (my $child_pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $child_pid ) { PTDEBUG && _d('Forked child', $child_pid); $parent_exit->($child_pid) if $parent_exit; exit 0; } POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; if ( $pid_file ) { $self->_update_pid_file( pid => $PID, # child's pid pid_file => $pid_file, ); $self->{pid_file_owner} = $PID; } } if ( $daemonize || $force_log_file ) { PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $log_file ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $log_file); close STDOUT; open STDOUT, '>>', $log_file or die "Cannot open log file $log_file: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } $OUTPUT_AUTOFLUSH = 1; } PTDEBUG && _d('Daemon running'); return; } sub _make_pid_file { my ($self, %args) = @_; my @required_args = qw(pid pid_file); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid = $args{pid}; my $pid_file = $args{pid_file}; eval { sysopen(PID_FH, $pid_file, O_RDWR|O_CREAT|O_EXCL) or die $OS_ERROR; print PID_FH $PID, "\n"; close PID_FH; }; if ( my $e = $EVAL_ERROR ) { if ( $e =~ m/file exists/i ) { my $old_pid = $self->_check_pid_file( pid_file => $pid_file, pid => $PID, ); if ( $old_pid ) { warn "Overwriting PID file $pid_file because PID $old_pid " . "is not running.\n"; } $self->_update_pid_file( pid => $PID, pid_file => $pid_file ); } else { die "Error creating PID file $pid_file: $e\n"; } } return; } sub _check_pid_file { my ($self, %args) = @_; my @required_args = qw(pid_file pid); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid_file = $args{pid_file}; my $pid = $args{pid}; PTDEBUG && _d('Checking if PID in', $pid_file, 'is running'); if ( ! 
-f $pid_file ) { PTDEBUG && _d('PID file', $pid_file, 'does not exist'); return; } open my $fh, '<', $pid_file or die "Error opening $pid_file: $OS_ERROR"; my $existing_pid = do { local $/; <$fh> }; chomp($existing_pid) if $existing_pid; close $fh or die "Error closing $pid_file: $OS_ERROR"; if ( $existing_pid ) { if ( $existing_pid == $pid ) { warn "The current PID $pid already holds the PID file $pid_file\n"; return; } else { PTDEBUG && _d('Checking if PID', $existing_pid, 'is running'); my $pid_is_alive = kill 0, $existing_pid; if ( $pid_is_alive ) { die "PID file $pid_file exists and PID $existing_pid is running\n"; } } } else { die "PID file $pid_file exists but it is empty. Remove the file " . "if the process is no longer running.\n"; } return $existing_pid; } sub _update_pid_file { my ($self, %args) = @_; my @required_args = qw(pid pid_file); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid = $args{pid}; my $pid_file = $args{pid_file}; open my $fh, '>', $pid_file or die "Cannot open $pid_file: $OS_ERROR"; print { $fh } $pid, "\n" or die "Cannot print to $pid_file: $OS_ERROR"; close $fh or warn "Cannot close $pid_file: $OS_ERROR"; return; } sub remove_pid_file { my ($self, $pid_file) = @_; $pid_file ||= $self->{pid_file}; if ( $pid_file && -f $pid_file ) { unlink $self->{pid_file} or warn "Cannot remove PID file $pid_file: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ($self) = @_; if ( $self->{pid_file_owner} == $PID ) { $self->remove_pid_file(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . 
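# Embedded backticks are doubled and each name is wrapped in backticks,
# so quote('db', 'some`tbl') yields `db`.`some``tbl`.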
'`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true return $val if $args{is_float}; $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(TableParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args }; return bless $self, $class; } sub generate_asc_stmt { my ( $self, %args ) = @_; my @required_args = qw(tbl_struct index); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($tbl_struct, $index) = @args{@required_args}; my @cols = $args{cols} ? @{$args{cols}} : @{$tbl_struct->{cols}}; my $q = $self->{Quoter}; die "Index '$index' does not exist in table" unless exists $tbl_struct->{keys}->{$index}; PTDEBUG && _d('Will ascend index', $index); my @asc_cols = @{$tbl_struct->{keys}->{$index}->{cols}}; if ( $args{asc_first} ) { PTDEBUG && _d('Ascending only first column'); @asc_cols = $asc_cols[0]; } elsif ( my $n = $args{n_index_cols} ) { $n = scalar @asc_cols if $n > @asc_cols; PTDEBUG && _d('Ascending only first', $n, 'columns'); @asc_cols = @asc_cols[0..($n-1)]; } PTDEBUG && _d('Will ascend columns', join(', ', @asc_cols)); my @asc_slice; my %col_posn = do { my $i = 0; map { $_ => $i++ } @cols }; foreach my $col ( @asc_cols ) { if ( !exists $col_posn{$col} ) { push @cols, $col; $col_posn{$col} = $#cols; } push @asc_slice, $col_posn{$col}; } PTDEBUG && _d('Will ascend, in ordinal position:', join(', ', @asc_slice)); my $asc_stmt = { cols => \@cols, index => $index, where => '', slice => [], scols => [], }; if ( @asc_slice ) { my $cmp_where; foreach my $cmp ( qw(< <= >= >) ) { $cmp_where = $self->generate_cmp_where( type => $cmp, slice => \@asc_slice, cols => \@cols, quoter => $q, is_nullable => $tbl_struct->{is_nullable}, ); $asc_stmt->{boundaries}->{$cmp} = $cmp_where->{where}; } my $cmp = $args{asc_only} ? 
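# With asc_only the chosen boundary is strict ('>'), so rows equal to
# the placeholder boundary values are excluded; the default '>=' keeps
# them in the result.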
'>' : '>='; $asc_stmt->{where} = $asc_stmt->{boundaries}->{$cmp}; $asc_stmt->{slice} = $cmp_where->{slice}; $asc_stmt->{scols} = $cmp_where->{scols}; } return $asc_stmt; } sub generate_cmp_where { my ( $self, %args ) = @_; foreach my $arg ( qw(type slice cols is_nullable) ) { die "I need a $arg arg" unless defined $args{$arg}; } my @slice = @{$args{slice}}; my @cols = @{$args{cols}}; my $is_nullable = $args{is_nullable}; my $type = $args{type}; my $q = $self->{Quoter}; (my $cmp = $type) =~ s/=//; my @r_slice; # Resulting slice columns, by ordinal my @r_scols; # Ditto, by name my @clauses; foreach my $i ( 0 .. $#slice ) { my @clause; foreach my $j ( 0 .. $i - 1 ) { my $ord = $slice[$j]; my $col = $cols[$ord]; my $quo = $q->quote($col); if ( $is_nullable->{$col} ) { push @clause, "((? IS NULL AND $quo IS NULL) OR ($quo = ?))"; push @r_slice, $ord, $ord; push @r_scols, $col, $col; } else { push @clause, "$quo = ?"; push @r_slice, $ord; push @r_scols, $col; } } my $ord = $slice[$i]; my $col = $cols[$ord]; my $quo = $q->quote($col); my $end = $i == $#slice; # Last clause of the whole group. if ( $is_nullable->{$col} ) { if ( $type =~ m/=/ && $end ) { push @clause, "(? IS NULL OR $quo $type ?)"; } elsif ( $type =~ m/>/ ) { push @clause, "((? IS NULL AND $quo IS NOT NULL) OR ($quo $cmp ?))"; } else { # If $type =~ m/ \@r_slice, scols => \@r_scols, where => $result, }; return $where; } sub generate_del_stmt { my ( $self, %args ) = @_; my $tbl = $args{tbl_struct}; my @cols = $args{cols} ? @{$args{cols}} : (); my $tp = $self->{TableParser}; my $q = $self->{Quoter}; my @del_cols; my @del_slice; my $index = $tp->find_best_index($tbl, $args{index}); die "Cannot find an ascendable index in table" unless $index; if ( $index && $tbl->{keys}->{$index}->{is_unique}) { @del_cols = @{$tbl->{keys}->{$index}->{cols}}; } else { @del_cols = @{$tbl->{cols}}; } PTDEBUG && _d('Columns needed for DELETE:', join(', ', @del_cols)); my %col_posn = do { my $i = 0; map { $_ => $i++ } @cols }; foreach my $col ( @del_cols ) { if ( !exists $col_posn{$col} ) { push @cols, $col; $col_posn{$col} = $#cols; } push @del_slice, $col_posn{$col}; } PTDEBUG && _d('Ordinals needed for DELETE:', join(', ', @del_slice)); my $del_stmt = { cols => \@cols, index => $index, where => '', slice => [], scols => [], }; my @clauses; foreach my $i ( 0 .. $#del_slice ) { my $ord = $del_slice[$i]; my $col = $cols[$ord]; my $quo = $q->quote($col); if ( $tbl->{is_nullable}->{$col} ) { push @clauses, "((? IS NULL AND $quo IS NULL) OR ($quo = ?))"; push @{$del_stmt->{slice}}, $ord, $ord; push @{$del_stmt->{scols}}, $col, $col; } else { push @clauses, "$quo = ?"; push @{$del_stmt->{slice}}, $ord; push @{$del_stmt->{scols}}, $col; } } $del_stmt->{where} = '(' . join(' AND ', @clauses) . ')'; return $del_stmt; } sub generate_ins_stmt { my ( $self, %args ) = @_; foreach my $arg ( qw(ins_tbl sel_cols) ) { die "I need a $arg argument" unless $args{$arg}; } my $ins_tbl = $args{ins_tbl}; my @sel_cols = @{$args{sel_cols}}; die "You didn't specify any SELECT columns" unless @sel_cols; my @ins_cols; my @ins_slice; for my $i ( 0..$#sel_cols ) { next unless $ins_tbl->{is_col}->{$sel_cols[$i]}; push @ins_cols, $sel_cols[$i]; push @ins_slice, $i; } return { cols => \@ins_cols, slice => \@ins_slice, }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableNibbler package # ########################################################################### # ########################################################################### # TableParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TableParser.pm # t/lib/TableParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package TableParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . 
Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . 
$where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . $q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 
1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . $q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # Progress package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Progress.pm # t/lib/Progress.t # See https://launchpad.net/percona-toolkit for more information. 
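# A minimal usage sketch (illustrative only; $total_rows, $done and
# do_some_work() are hypothetical, and the default report callback assumes
# the Transformers package bundled elsewhere in this script):
#
#   my $pr = Progress->new(jobsize => $total_rows, spec => [ 'time', 30 ]);
#   $pr->start();
#   my $done = 0;
#   while ( $done < $total_rows ) {
#      $done += do_some_work();
#      $pr->update(sub { return $done });  # callback returns rows completed so far
#   }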
# ########################################################################### { package Progress; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; foreach my $arg (qw(jobsize)) { die "I need a $arg argument" unless defined $args{$arg}; } if ( (!$args{report} || !$args{interval}) ) { if ( $args{spec} && @{$args{spec}} == 2 ) { @args{qw(report interval)} = @{$args{spec}}; } else { die "I need either report and interval arguments, or a spec"; } } my $name = $args{name} || "Progress"; $args{start} ||= time(); my $self; $self = { last_reported => $args{start}, fraction => 0, # How complete the job is callback => sub { my ($fraction, $elapsed, $remaining) = @_; printf STDERR "$name: %3d%% %s remain\n", $fraction * 100, Transformers::secs_to_time($remaining); }, %args, }; return bless $self, $class; } sub validate_spec { shift @_ if $_[0] eq 'Progress'; # Permit calling as Progress-> or Progress:: my ( $spec ) = @_; if ( @$spec != 2 ) { die "spec array requires a two-part argument\n"; } if ( $spec->[0] !~ m/^(?:percentage|time|iterations)$/ ) { die "spec array's first element must be one of " . "percentage,time,iterations\n"; } if ( $spec->[1] !~ m/^\d+$/ ) { die "spec array's second element must be an integer\n"; } } sub set_callback { my ( $self, $callback ) = @_; $self->{callback} = $callback; } sub start { my ( $self, $start ) = @_; $self->{start} = $self->{last_reported} = $start || time(); $self->{first_report} = 0; } sub update { my ( $self, $callback, %args ) = @_; my $jobsize = $self->{jobsize}; my $now ||= $args{now} || time; $self->{iterations}++; # How many updates have happened; if ( !$self->{first_report} && $args{first_report} ) { $args{first_report}->(); $self->{first_report} = 1; } if ( $self->{report} eq 'time' && $self->{interval} > $now - $self->{last_reported} ) { return; } elsif ( $self->{report} eq 'iterations' && ($self->{iterations} - 1) % $self->{interval} > 0 ) { return; } $self->{last_reported} = $now; my $completed = $callback->(); $self->{updates}++; # How many times we have run the update callback return if $completed > $jobsize; my $fraction = $completed > 0 ? $completed / $jobsize : 0; if ( $self->{report} eq 'percentage' && $self->fraction_modulo($self->{fraction}) >= $self->fraction_modulo($fraction) ) { $self->{fraction} = $fraction; return; } $self->{fraction} = $fraction; my $elapsed = $now - $self->{start}; my $remaining = 0; my $eta = $now; if ( $completed > 0 && $completed <= $jobsize && $elapsed > 0 ) { my $rate = $completed / $elapsed; if ( $rate > 0 ) { $remaining = ($jobsize - $completed) / $rate; $eta = $now + int($remaining); } } $self->{callback}->($fraction, $elapsed, $remaining, $eta, $completed); } sub fraction_modulo { my ( $self, $num ) = @_; $num *= 100; # Convert from fraction to percentage return sprintf('%d', sprintf('%d', $num / $self->{interval}) * $self->{interval}); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Progress package # ########################################################################### # ########################################################################### # Retry package # This package is a copy without comments from the original. 
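# A minimal usage sketch (illustrative only; $dbh and the statement are
# assumed to exist in the caller):
#
#   my $retry  = Retry->new();
#   my $result = $retry->retry(
#      tries      => 3,
#      wait       => sub { sleep 1 },
#      try        => sub { return $dbh->do('SELECT 1') },
#      fail       => sub { my (%args) = @_; return 1 },         # true = try again
#      final_fail => sub { my (%args) = @_; die $args{error} },
#   );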
The original # with comments and its test file can be found in the Bazaar repository at, # lib/Retry.pm # t/lib/Retry.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Retry; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::HiRes qw(sleep); sub new { my ( $class, %args ) = @_; my $self = { %args, }; return bless $self, $class; } sub retry { my ( $self, %args ) = @_; my @required_args = qw(try fail final_fail); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($try, $fail, $final_fail) = @args{@required_args}; my $wait = $args{wait} || sub { sleep 1; }; my $tries = $args{tries} || 3; my $last_error; my $tryno = 0; TRY: while ( ++$tryno <= $tries ) { PTDEBUG && _d("Try", $tryno, "of", $tries); my $result; eval { $result = $try->(tryno=>$tryno); }; if ( $EVAL_ERROR ) { PTDEBUG && _d("Try code failed:", $EVAL_ERROR); $last_error = $EVAL_ERROR; if ( $tryno < $tries ) { # more retries my $retry = $fail->(tryno=>$tryno, error=>$last_error); last TRY unless $retry; PTDEBUG && _d("Calling wait code"); $wait->(tryno=>$tryno); } } else { PTDEBUG && _d("Try code succeeded"); return $result; } } PTDEBUG && _d('Try code did not succeed'); return $final_fail->(error=>$last_error); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Retry package # ########################################################################### # ########################################################################### # Cxn package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Cxn.pm # t/lib/Cxn.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Cxn; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Scalar::Util qw(blessed); use constant { PTDEBUG => $ENV{PTDEBUG} || 0, PERCONA_TOOLKIT_TEST_USE_DSN_NAMES => $ENV{PERCONA_TOOLKIT_TEST_USE_DSN_NAMES} || 0, }; sub new { my ( $class, %args ) = @_; my @required_args = qw(DSNParser OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my ($dp, $o) = @args{@required_args}; my $dsn_defaults = $dp->parse_options($o); my $prev_dsn = $args{prev_dsn}; my $dsn = $args{dsn}; if ( !$dsn ) { $args{dsn_string} ||= 'h=' . ($dsn_defaults->{h} || 'localhost'); $dsn = $dp->parse( $args{dsn_string}, $prev_dsn, $dsn_defaults); } elsif ( $prev_dsn ) { $dsn = $dp->copy($prev_dsn, $dsn); } my $dsn_name = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; my $self = { dsn => $dsn, dbh => $args{dbh}, dsn_name => $dsn_name, hostname => '', set => $args{set}, NAME_lc => defined($args{NAME_lc}) ? 
$args{NAME_lc} : 1, dbh_set => 0, ask_pass => $o->get('ask-pass'), DSNParser => $dp, is_cluster_node => undef, parent => $args{parent}, }; return bless $self, $class; } sub connect { my ( $self, %opts ) = @_; my $dsn = $opts{dsn} || $self->{dsn}; my $dp = $self->{DSNParser}; my $dbh = $self->{dbh}; if ( !$dbh || !$dbh->ping() ) { if ( $self->{ask_pass} && !$self->{asked_for_pass} && !defined $dsn->{p} ) { $dsn->{p} = OptionParser::prompt_noecho("Enter MySQL password: "); $self->{asked_for_pass} = 1; } $dbh = $dp->get_dbh( $dp->get_cxn_params($dsn), { AutoCommit => 1, %opts, }, ); } $dbh = $self->set_dbh($dbh); if ( $opts{dsn} ) { $self->{dsn} = $dsn; $self->{dsn_name} = $dp->as_string($dsn, [qw(h P S)]) || $dp->as_string($dsn, [qw(F)]) || ''; } PTDEBUG && _d($dbh, 'Connected dbh to', $self->{hostname},$self->{dsn_name}); return $dbh; } sub set_dbh { my ($self, $dbh) = @_; if ( $self->{dbh} && $self->{dbh} == $dbh && $self->{dbh_set} ) { PTDEBUG && _d($dbh, 'Already set dbh'); return $dbh; } PTDEBUG && _d($dbh, 'Setting dbh'); $dbh->{FetchHashKeyName} = 'NAME_lc' if $self->{NAME_lc}; my $sql = 'SELECT @@server_id /*!50038 , @@hostname*/'; PTDEBUG && _d($dbh, $sql); my ($server_id, $hostname) = $dbh->selectrow_array($sql); PTDEBUG && _d($dbh, 'hostname:', $hostname, $server_id); if ( $hostname ) { $self->{hostname} = $hostname; } if ( $self->{parent} ) { PTDEBUG && _d($dbh, 'Setting InactiveDestroy=1 in parent'); $dbh->{InactiveDestroy} = 1; } if ( my $set = $self->{set}) { $set->($dbh); } $self->{dbh} = $dbh; $self->{dbh_set} = 1; return $dbh; } sub lost_connection { my ($self, $e) = @_; return 0 unless $e; return $e =~ m/MySQL server has gone away/ || $e =~ m/Lost connection to MySQL server/ || $e =~ m/Server shutdown in progress/; } sub dbh { my ($self) = @_; return $self->{dbh}; } sub dsn { my ($self) = @_; return $self->{dsn}; } sub name { my ($self) = @_; return $self->{dsn_name} if PERCONA_TOOLKIT_TEST_USE_DSN_NAMES; return $self->{hostname} || $self->{dsn_name} || 'unknown host'; } sub description { my ($self) = @_; return sprintf("%s -> %s:%s", $self->name(), $self->{dsn}->{h}, $self->{dsn}->{P} || 'socket'); } sub get_id { my ($self, $cxn) = @_; $cxn ||= $self; my $unique_id; if ($cxn->is_cluster_node()) { # for cluster we concatenate various variables to maximize id 'uniqueness' across versions my $sql = q{SHOW STATUS LIKE 'wsrep\_local\_index'}; my (undef, $wsrep_local_index) = $cxn->dbh->selectrow_array($sql); PTDEBUG && _d("Got cluster wsrep_local_index: ",$wsrep_local_index); $unique_id = $wsrep_local_index."|"; foreach my $val ('server\_id', 'wsrep\_sst\_receive\_address', 'wsrep\_node\_name', 'wsrep\_node\_address') { my $sql = "SHOW VARIABLES LIKE '$val'"; PTDEBUG && _d($cxn->name, $sql); my (undef, $val) = $cxn->dbh->selectrow_array($sql); $unique_id .= "|$val"; } } else { my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); $unique_id = $cxn->dbh->selectrow_array($sql); } PTDEBUG && _d("Generated unique id for cluster:", $unique_id); return $unique_id; } sub is_cluster_node { my ($self, $cxn) = @_; $cxn ||= $self; my $sql = "SHOW VARIABLES LIKE 'wsrep\_on'"; my $dbh; if ($cxn->isa('DBI::db')) { $dbh = $cxn; PTDEBUG && _d($sql); #don't invoke name() if it's not a Cxn! } else { $dbh = $cxn->dbh(); PTDEBUG && _d($cxn->name, $sql); } my $row = $dbh->selectrow_arrayref($sql); return $row && $row->[1] && ($row->[1] eq 'ON' || $row->[1] eq '1') ? 
1 : 0; } sub remove_duplicate_cxns { my ($self, %args) = @_; my @cxns = @{$args{cxns}}; my $seen_ids = $args{seen_ids} || {}; PTDEBUG && _d("Removing duplicates from ", join(" ", map { $_->name } @cxns)); my @trimmed_cxns; for my $cxn ( @cxns ) { my $id = $cxn->get_id(); PTDEBUG && _d('Server ID for ', $cxn->name, ': ', $id); if ( ! $seen_ids->{$id}++ ) { push @trimmed_cxns, $cxn } else { PTDEBUG && _d("Removing ", $cxn->name, ", ID ", $id, ", because we've already seen it"); } } return \@trimmed_cxns; } sub DESTROY { my ($self) = @_; PTDEBUG && _d('Destroying cxn'); if ( $self->{parent} ) { PTDEBUG && _d($self->{dbh}, 'Not disconnecting dbh in parent'); } elsif ( $self->{dbh} && blessed($self->{dbh}) && $self->{dbh}->can("disconnect") ) { PTDEBUG && _d($self->{dbh}, 'Disconnecting dbh on', $self->{hostname}, $self->{dsn_name}); $self->{dbh}->disconnect(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Cxn package # ########################################################################### # ########################################################################### # MasterSlave package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MasterSlave.pm # t/lib/MasterSlave.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package MasterSlave; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub check_recursion_method { my ($methods) = @_; if ( @$methods != 1 ) { if ( grep({ !m/processlist|hosts/i } @$methods) && $methods->[0] !~ /^dsn=/i ) { die "Invalid combination of recursion methods: " . join(", ", map { defined($_) ? $_ : 'undef' } @$methods) . ". " . "Only hosts and processlist may be combined.\n" } } else { my ($method) = @$methods; die "Invalid recursion method: " . ( $method || 'undef' ) unless $method && $method =~ m/^(?:processlist$|hosts$|none$|cluster$|dsn=)/i; } } sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser DSNParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, replication_thread => {}, }; return bless $self, $class; } sub get_slaves { my ($self, %args) = @_; my @required_args = qw(make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($make_cxn) = @args{@required_args}; my $slaves = []; my $dp = $self->{DSNParser}; my $methods = $self->_resolve_recursion_methods($args{dsn}); return $slaves unless @$methods; if ( grep { m/processlist|hosts/i } @$methods ) { my @required_args = qw(dbh dsn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $dsn) = @args{@required_args}; my $o = $self->{OptionParser}; $self->recurse_to_slaves( { dbh => $dbh, dsn => $dsn, slave_user => $o->got('slave-user') ? $o->get('slave-user') : '', slave_password => $o->got('slave-password') ? 
$o->get('slave-password') : '', callback => sub { my ( $dsn, $dbh, $level, $parent ) = @_; return unless $level; PTDEBUG && _d('Found slave:', $dp->as_string($dsn)); my $slave_dsn = $dsn; if ($o->got('slave-user')) { $slave_dsn->{u} = $o->get('slave-user'); PTDEBUG && _d("Using slave user ".$o->get('slave-user')." on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($o->got('slave-password')) { $slave_dsn->{p} = $o->get('slave-password'); PTDEBUG && _d("Slave password set"); } push @$slaves, $make_cxn->(dsn => $slave_dsn, dbh => $dbh); return; }, } ); } elsif ( $methods->[0] =~ m/^dsn=/i ) { (my $dsn_table_dsn = join ",", @$methods) =~ s/^dsn=//i; $slaves = $self->get_cxn_from_dsn_table( %args, dsn_table_dsn => $dsn_table_dsn, ); } elsif ( $methods->[0] =~ m/none/i ) { PTDEBUG && _d('Not getting to slaves'); } else { die "Unexpected recursion methods: @$methods"; } return $slaves; } sub _resolve_recursion_methods { my ($self, $dsn) = @_; my $o = $self->{OptionParser}; if ( $o->got('recursion-method') ) { return $o->get('recursion-method'); } elsif ( $dsn && ($dsn->{P} || 3306) != 3306 ) { PTDEBUG && _d('Port number is non-standard; using only hosts method'); return [qw(hosts)]; } else { return $o->get('recursion-method'); } } sub recurse_to_slaves { my ( $self, $args, $level ) = @_; $level ||= 0; my $dp = $self->{DSNParser}; my $recurse = $args->{recurse} || $self->{OptionParser}->get('recurse'); my $dsn = $args->{dsn}; my $slave_user = $args->{slave_user} || ''; my $slave_password = $args->{slave_password} || ''; my $methods = $self->_resolve_recursion_methods($dsn); PTDEBUG && _d('Recursion methods:', @$methods); if ( lc($methods->[0]) eq 'none' ) { PTDEBUG && _d('Not recursing to slaves'); return; } my $slave_dsn = $dsn; if ($slave_user) { $slave_dsn->{u} = $slave_user; PTDEBUG && _d("Using slave user $slave_user on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($slave_password) { $slave_dsn->{p} = $slave_password; PTDEBUG && _d("Slave password set"); } my $dbh; eval { $dbh = $args->{dbh} || $dp->get_dbh( $dp->get_cxn_params($slave_dsn), { AutoCommit => 1 }); PTDEBUG && _d('Connected to', $dp->as_string($slave_dsn)); }; if ( $EVAL_ERROR ) { print STDERR "Cannot connect to ", $dp->as_string($slave_dsn), "\n" or die "Cannot print: $OS_ERROR"; return; } my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); my ($id) = $dbh->selectrow_array($sql); PTDEBUG && _d('Working on server ID', $id); my $master_thinks_i_am = $dsn->{server_id}; if ( !defined $id || ( defined $master_thinks_i_am && $master_thinks_i_am != $id ) || $args->{server_ids_seen}->{$id}++ ) { PTDEBUG && _d('Server ID seen, or not what master said'); if ( $args->{skip_callback} ) { $args->{skip_callback}->($dsn, $dbh, $level, $args->{parent}); } return; } $args->{callback}->($dsn, $dbh, $level, $args->{parent}); if ( !defined $recurse || $level < $recurse ) { my @slaves = grep { !$_->{master_id} || $_->{master_id} == $id } # Only my slaves. 
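# (SHOW SLAVE HOSTS can also list replicas of other masters; only entries
# whose master_id matches this server's ID, or that carry no master_id at
# all, are followed, and recursion stops once $level reaches the recurse
# limit.)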
$self->find_slave_hosts($dp, $dbh, $dsn, $methods); foreach my $slave ( @slaves ) { PTDEBUG && _d('Recursing from', $dp->as_string($dsn), 'to', $dp->as_string($slave)); $self->recurse_to_slaves( { %$args, dsn => $slave, dbh => undef, parent => $dsn, slave_user => $slave_user, $slave_password => $slave_password }, $level + 1 ); } } } sub find_slave_hosts { my ( $self, $dsn_parser, $dbh, $dsn, $methods ) = @_; PTDEBUG && _d('Looking for slaves on', $dsn_parser->as_string($dsn), 'using methods', @$methods); my @slaves; METHOD: foreach my $method ( @$methods ) { my $find_slaves = "_find_slaves_by_$method"; PTDEBUG && _d('Finding slaves with', $find_slaves); @slaves = $self->$find_slaves($dsn_parser, $dbh, $dsn); last METHOD if @slaves; } PTDEBUG && _d('Found', scalar(@slaves), 'slaves'); return @slaves; } sub _find_slaves_by_processlist { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves = map { my $slave = $dsn_parser->parse("h=$_", $dsn); $slave->{source} = 'processlist'; $slave; } grep { $_ } map { my ( $host ) = $_->{host} =~ m/^([^:]+):/; $host ||= $_->{host}; if ( $host eq 'localhost' ) { $host = '127.0.0.1'; # Replication never uses sockets. } $host; } $self->get_connected_slaves($dbh); return @slaves; } sub _find_slaves_by_hosts { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves; my $sql = 'SHOW SLAVE HOSTS'; PTDEBUG && _d($dbh, $sql); @slaves = @{$dbh->selectall_arrayref($sql, { Slice => {} })}; if ( @slaves ) { PTDEBUG && _d('Found some SHOW SLAVE HOSTS info'); @slaves = map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; my $spec = "h=$hash{host},P=$hash{port}" . ( $hash{user} ? ",u=$hash{user}" : '') . ( $hash{password} ? ",p=$hash{password}" : ''); my $dsn = $dsn_parser->parse($spec, $dsn); $dsn->{server_id} = $hash{server_id}; $dsn->{master_id} = $hash{master_id}; $dsn->{source} = 'hosts'; $dsn; } @slaves; } return @slaves; } sub get_connected_slaves { my ( $self, $dbh ) = @_; my $show = "SHOW GRANTS FOR "; my $user = 'CURRENT_USER()'; my $sql = $show . $user; PTDEBUG && _d($dbh, $sql); my $proc; eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; if ( $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/no such grant defined for user/ ) { PTDEBUG && _d('Retrying SHOW GRANTS without host; error:', $EVAL_ERROR); ($user) = split('@', $user); $sql = $show . $user; PTDEBUG && _d($sql); eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; } die "Failed to $sql: $EVAL_ERROR" if $EVAL_ERROR; } if ( !$proc ) { die "You do not have the PROCESS privilege"; } $sql = 'SHOW FULL PROCESSLIST'; PTDEBUG && _d($dbh, $sql); grep { $_->{command} =~ m/Binlog Dump/i } map { # Lowercase the column names my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{$dbh->selectall_arrayref($sql, { Slice => {} })}; } sub is_master_of { my ( $self, $master, $slave ) = @_; my $master_status = $self->get_master_status($master) or die "The server specified as a master is not a master"; my $slave_status = $self->get_slave_status($slave) or die "The server specified as a slave is not a slave"; my @connected = $self->get_connected_slaves($master) or die "The server specified as a master has no connected slaves"; my (undef, $port) = $master->selectrow_array("SHOW VARIABLES LIKE 'port'"); if ( $port != $slave_status->{master_port} ) { die "The slave is connected to $slave_status->{master_port} " . 
"but the master's port is $port"; } if ( !grep { $slave_status->{master_user} eq $_->{user} } @connected ) { die "I don't see any slave I/O thread connected with user " . $slave_status->{master_user}; } if ( ($slave_status->{slave_io_state} || '') eq 'Waiting for master to send event' ) { my ( $master_log_name, $master_log_num ) = $master_status->{file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; my ( $slave_log_name, $slave_log_num ) = $slave_status->{master_log_file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; if ( $master_log_name ne $slave_log_name || abs($master_log_num - $slave_log_num) > 1 ) { die "The slave thinks it is reading from " . "$slave_status->{master_log_file}, but the " . "master is writing to $master_status->{file}"; } } return 1; } sub get_master_dsn { my ( $self, $dbh, $dsn, $dsn_parser ) = @_; my $master = $self->get_slave_status($dbh) or return undef; my $spec = "h=$master->{master_host},P=$master->{master_port}"; return $dsn_parser->parse($spec, $dsn); } sub get_slave_status { my ( $self, $dbh ) = @_; if ( !$self->{not_a_slave}->{$dbh} ) { my $sth = $self->{sths}->{$dbh}->{SLAVE_STATUS} ||= $dbh->prepare('SHOW SLAVE STATUS'); PTDEBUG && _d($dbh, 'SHOW SLAVE STATUS'); $sth->execute(); my ($sss_rows) = $sth->fetchall_arrayref({}); # Show Slave Status rows my $ss; if ( $sss_rows && @$sss_rows ) { if (scalar @$sss_rows > 1) { if (!$self->{channel}) { die 'This server returned more than one row for SHOW SLAVE STATUS but "channel" was not specified on the command line'; } for my $row (@$sss_rows) { $row = { map { lc($_) => $row->{$_} } keys %$row }; # lowercase the keys if ($row->{channel_name} eq $self->{channel}) { $ss = $row; last; } } } else { if ($sss_rows->[0]->{channel_name} && $sss_rows->[0]->{channel_name} ne $self->{channel}) { die 'This server is using replication channels but "channel" was not specified on the command line'; } else { $ss = $sss_rows->[0]; } } if ( $ss && %$ss ) { $ss = { map { lc($_) => $ss->{$_} } keys %$ss }; # lowercase the keys return $ss; } } PTDEBUG && _d('This server returns nothing for SHOW SLAVE STATUS'); $self->{not_a_slave}->{$dbh}++; } } sub get_master_status { my ( $self, $dbh ) = @_; if ( $self->{not_a_master}->{$dbh} ) { PTDEBUG && _d('Server on dbh', $dbh, 'is not a master'); return; } my $sth = $self->{sths}->{$dbh}->{MASTER_STATUS} ||= $dbh->prepare('SHOW MASTER STATUS'); PTDEBUG && _d($dbh, 'SHOW MASTER STATUS'); $sth->execute(); my ($ms) = @{$sth->fetchall_arrayref({})}; PTDEBUG && _d( $ms ? map { "$_=" . (defined $ms->{$_} ? $ms->{$_} : '') } keys %$ms : ''); if ( !$ms || scalar keys %$ms < 2 ) { PTDEBUG && _d('Server on dbh', $dbh, 'does not seem to be a master'); $self->{not_a_master}->{$dbh}++; } return { map { lc($_) => $ms->{$_} } keys %$ms }; # lowercase the keys } sub wait_for_master { my ( $self, %args ) = @_; my @required_args = qw(master_status slave_dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($master_status, $slave_dbh) = @args{@required_args}; my $timeout = $args{timeout} || 60; my $result; my $waited; if ( $master_status ) { my $slave_status; eval { $slave_status = $self->get_slave_status($slave_dbh); }; if ($EVAL_ERROR) { return { result => undef, waited => 0, error =>'Wait for master: this is a multi-master slave but "channel" was not specified on the command line', }; } my $server_version = VersionParser->new($slave_dbh); my $channel_sql = $server_version > '5.6' && $self->{channel} ? 
", '$self->{channel}'" : ''; my $sql = "SELECT MASTER_POS_WAIT('$master_status->{file}', $master_status->{position}, $timeout $channel_sql)"; PTDEBUG && _d($slave_dbh, $sql); my $start = time; ($result) = $slave_dbh->selectrow_array($sql); $waited = time - $start; PTDEBUG && _d('Result of waiting:', $result); PTDEBUG && _d("Waited", $waited, "seconds"); } else { PTDEBUG && _d('Not waiting: this server is not a master'); } return { result => $result, waited => $waited, }; } sub stop_slave { my ( $self, $dbh ) = @_; my $sth = $self->{sths}->{$dbh}->{STOP_SLAVE} ||= $dbh->prepare('STOP SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } sub start_slave { my ( $self, $dbh, $pos ) = @_; if ( $pos ) { my $sql = "START SLAVE UNTIL MASTER_LOG_FILE='$pos->{file}', " . "MASTER_LOG_POS=$pos->{position}"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } else { my $sth = $self->{sths}->{$dbh}->{START_SLAVE} ||= $dbh->prepare('START SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } } sub catchup_to_master { my ( $self, $slave, $master, $timeout ) = @_; $self->stop_slave($master); $self->stop_slave($slave); my $slave_status = $self->get_slave_status($slave); my $slave_pos = $self->repl_posn($slave_status); my $master_status = $self->get_master_status($master); my $master_pos = $self->repl_posn($master_status); PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); my $result; if ( $self->pos_cmp($slave_pos, $master_pos) < 0 ) { PTDEBUG && _d('Waiting for slave to catch up to master'); $self->start_slave($slave, $master_pos); $result = $self->wait_for_master( master_status => $master_status, slave_dbh => $slave, timeout => $timeout, master_status => $master_status ); if ($result->{error}) { die $result->{error}; } if ( !defined $result->{result} ) { $slave_status = $self->get_slave_status($slave); if ( !$self->slave_is_running($slave_status) ) { PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); $slave_pos = $self->repl_posn($slave_status); if ( $self->pos_cmp($slave_pos, $master_pos) != 0 ) { die "MASTER_POS_WAIT() returned NULL but slave has not " . 
"caught up to master"; } PTDEBUG && _d('Slave is caught up to master and stopped'); } else { die "Slave has not caught up to master and it is still running"; } } } else { PTDEBUG && _d("Slave is already caught up to master"); } return $result; } sub catchup_to_same_pos { my ( $self, $s1_dbh, $s2_dbh ) = @_; $self->stop_slave($s1_dbh); $self->stop_slave($s2_dbh); my $s1_status = $self->get_slave_status($s1_dbh); my $s2_status = $self->get_slave_status($s2_dbh); my $s1_pos = $self->repl_posn($s1_status); my $s2_pos = $self->repl_posn($s2_status); if ( $self->pos_cmp($s1_pos, $s2_pos) < 0 ) { $self->start_slave($s1_dbh, $s2_pos); } elsif ( $self->pos_cmp($s2_pos, $s1_pos) < 0 ) { $self->start_slave($s2_dbh, $s1_pos); } $s1_status = $self->get_slave_status($s1_dbh); $s2_status = $self->get_slave_status($s2_dbh); $s1_pos = $self->repl_posn($s1_status); $s2_pos = $self->repl_posn($s2_status); if ( $self->slave_is_running($s1_status) || $self->slave_is_running($s2_status) || $self->pos_cmp($s1_pos, $s2_pos) != 0) { die "The servers aren't both stopped at the same position"; } } sub slave_is_running { my ( $self, $slave_status ) = @_; return ($slave_status->{slave_sql_running} || 'No') eq 'Yes'; } sub has_slave_updates { my ( $self, $dbh ) = @_; my $sql = q{SHOW VARIABLES LIKE 'log_slave_updates'}; PTDEBUG && _d($dbh, $sql); my ($name, $value) = $dbh->selectrow_array($sql); return $value && $value =~ m/^(1|ON)$/; } sub repl_posn { my ( $self, $status ) = @_; if ( exists $status->{file} && exists $status->{position} ) { return { file => $status->{file}, position => $status->{position}, }; } else { return { file => $status->{relay_master_log_file}, position => $status->{exec_master_log_pos}, }; } } sub get_slave_lag { my ( $self, $dbh ) = @_; my $stat = $self->get_slave_status($dbh); return unless $stat; # server is not a slave return $stat->{seconds_behind_master}; } sub pos_cmp { my ( $self, $a, $b ) = @_; return $self->pos_to_string($a) cmp $self->pos_to_string($b); } sub short_host { my ( $self, $dsn ) = @_; my ($host, $port); if ( $dsn->{master_host} ) { $host = $dsn->{master_host}; $port = $dsn->{master_port}; } else { $host = $dsn->{h}; $port = $dsn->{P}; } return ($host || '[default]') . ( ($port || 3306) == 3306 ? '' : ":$port" ); } sub is_replication_thread { my ( $self, $query, %args ) = @_; return unless $query; my $type = lc($args{type} || 'all'); die "Invalid type: $type" unless $type =~ m/^binlog_dump|slave_io|slave_sql|all$/i; my $match = 0; if ( $type =~ m/binlog_dump|all/i ) { $match = 1 if ($query->{Command} || $query->{command} || '') eq "Binlog Dump"; } if ( !$match ) { if ( ($query->{User} || $query->{user} || '') eq "system user" ) { PTDEBUG && _d("Slave replication thread"); if ( $type ne 'all' ) { my $state = $query->{State} || $query->{state} || ''; if ( $state =~ m/^init|end$/ ) { PTDEBUG && _d("Special state:", $state); $match = 1; } else { my ($slave_sql) = $state =~ m/ ^(Waiting\sfor\sthe\snext\sevent |Reading\sevent\sfrom\sthe\srelay\slog |Has\sread\sall\srelay\slog;\swaiting |Making\stemp\sfile |Waiting\sfor\sslave\smutex\son\sexit)/xi; $match = $type eq 'slave_sql' && $slave_sql ? 1 : $type eq 'slave_io' && !$slave_sql ? 
1 : 0; } } else { $match = 1; } } else { PTDEBUG && _d('Not system user'); } if ( !defined $args{check_known_ids} || $args{check_known_ids} ) { my $id = $query->{Id} || $query->{id}; if ( $match ) { $self->{replication_thread}->{$id} = 1; } else { if ( $self->{replication_thread}->{$id} ) { PTDEBUG && _d("Thread ID is a known replication thread ID"); $match = 1; } } } } PTDEBUG && _d('Matches', $type, 'replication thread:', ($match ? 'yes' : 'no'), '; match:', $match); return $match; } sub get_replication_filters { my ( $self, %args ) = @_; my @required_args = qw(dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh) = @args{@required_args}; my %filters = (); my $status = $self->get_master_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( binlog_do_db binlog_ignore_db ); } $status = $self->get_slave_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( replicate_do_db replicate_ignore_db replicate_do_table replicate_ignore_table replicate_wild_do_table replicate_wild_ignore_table ); my $sql = "SHOW VARIABLES LIKE 'slave_skip_errors'"; PTDEBUG && _d($dbh, $sql); my $row = $dbh->selectrow_arrayref($sql); $filters{slave_skip_errors} = $row->[1] if $row->[1] && $row->[1] ne 'OFF'; } return \%filters; } sub pos_to_string { my ( $self, $pos ) = @_; my $fmt = '%s/%020d'; return sprintf($fmt, @{$pos}{qw(file position)}); } sub reset_known_replication_threads { my ( $self ) = @_; $self->{replication_thread} = {}; return; } sub get_cxn_from_dsn_table { my ($self, %args) = @_; my @required_args = qw(dsn_table_dsn make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn_table_dsn, $make_cxn) = @args{@required_args}; PTDEBUG && _d('DSN table DSN:', $dsn_table_dsn); my $dp = $self->{DSNParser}; my $q = $self->{Quoter}; my $dsn = $dp->parse($dsn_table_dsn); my $dsn_table; if ( $dsn->{D} && $dsn->{t} ) { $dsn_table = $q->quote($dsn->{D}, $dsn->{t}); } elsif ( $dsn->{t} && $dsn->{t} =~ m/\./ ) { $dsn_table = $q->quote($q->split_unquote($dsn->{t})); } else { die "DSN table DSN does not specify a database (D) " . "or a database-qualified table (t)"; } my $dsn_tbl_cxn = $make_cxn->(dsn => $dsn); my $dbh = $dsn_tbl_cxn->connect(); my $sql = "SELECT dsn FROM $dsn_table ORDER BY id"; PTDEBUG && _d($sql); my $dsn_strings = $dbh->selectcol_arrayref($sql); my @cxn; if ( $dsn_strings ) { foreach my $dsn_string ( @$dsn_strings ) { PTDEBUG && _d('DSN from DSN table:', $dsn_string); push @cxn, $make_cxn->(dsn_string => $dsn_string); } } return \@cxn; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MasterSlave package # ########################################################################### # ########################################################################### # ReplicaLagWaiter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/ReplicaLagWaiter.pm # t/lib/ReplicaLagWaiter.t # See https://launchpad.net/percona-toolkit for more information. 
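# A minimal usage sketch (illustrative only; $slaves is an arrayref of Cxn
# objects and lag_for() stands in for whatever lag source the caller uses):
#
#   my $waiter = ReplicaLagWaiter->new(
#      oktorun => sub { 1 },
#      get_lag => sub { my ($cxn) = @_; return lag_for($cxn) },
#      sleep   => sub { sleep 1 },
#      max_lag => 1,                # seconds
#      slaves  => $slaves,
#   );
#   $waiter->wait();               # returns once every replica is under max_lag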
# ########################################################################### { package ReplicaLagWaiter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::HiRes qw(sleep time); use Data::Dumper; sub new { my ( $class, %args ) = @_; my @required_args = qw(oktorun get_lag sleep max_lag slaves); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my $self = { %args, }; return bless $self, $class; } sub wait { my ( $self, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $pr = $args{Progress}; my $oktorun = $self->{oktorun}; my $get_lag = $self->{get_lag}; my $sleep = $self->{sleep}; my $slaves = $self->{slaves}; my $max_lag = $self->{max_lag}; my $worst; # most lagging slave my $pr_callback; my $pr_first_report; if ( $pr ) { $pr_callback = sub { my ($fraction, $elapsed, $remaining, $eta, $completed) = @_; my $dsn_name = $worst->{cxn}->name(); if ( defined $worst->{lag} ) { print STDERR "Replica lag is " . ($worst->{lag} || '?') . " seconds on $dsn_name. Waiting.\n"; } else { if ($self->{fail_on_stopped_replication}) { die 'replication is stopped'; } print STDERR "Replica $dsn_name is stopped. Waiting.\n"; } return; }; $pr->set_callback($pr_callback); $pr_first_report = sub { my $dsn_name = $worst->{cxn}->name(); if ( !defined $worst->{lag} ) { if ($self->{fail_on_stopped_replication}) { die 'replication is stopped'; } print STDERR "Replica $dsn_name is stopped. Waiting.\n"; } return; }; } my @lagged_slaves = map { {cxn=>$_, lag=>undef} } @$slaves; while ( $oktorun->() && @lagged_slaves ) { PTDEBUG && _d('Checking slave lag'); for my $i ( 0..$#lagged_slaves ) { my $lag = $get_lag->($lagged_slaves[$i]->{cxn}); PTDEBUG && _d($lagged_slaves[$i]->{cxn}->name(), 'slave lag:', $lag); if ( !defined $lag || $lag > $max_lag ) { $lagged_slaves[$i]->{lag} = $lag; } else { delete $lagged_slaves[$i]; } } @lagged_slaves = grep { defined $_ } @lagged_slaves; if ( @lagged_slaves ) { @lagged_slaves = reverse sort { defined $a->{lag} && defined $b->{lag} ? $a->{lag} <=> $b->{lag} : defined $a->{lag} ? -1 : 1; } @lagged_slaves; $worst = $lagged_slaves[0]; PTDEBUG && _d(scalar @lagged_slaves, 'slaves are lagging, worst:', $worst->{lag}, 'on', Dumper($worst->{cxn}->dsn())); if ( $pr ) { $pr->update( sub { return 0; }, first_report => $pr_first_report, ); } PTDEBUG && _d('Calling sleep callback'); $sleep->($worst->{cxn}, $worst->{lag}); } } PTDEBUG && _d('All slaves caught up'); return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End ReplicaLagWaiter package # ########################################################################### # ########################################################################### # FlowControlWaiter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/FlowControlWaiter.pm # t/lib/FlowControlWaiter.t # See https://launchpad.net/percona-toolkit for more information. 
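# A minimal usage sketch (illustrative only; $node_dbh is an assumed DBI
# handle to a Percona XtraDB Cluster node):
#
#   my $fc_waiter = FlowControlWaiter->new(
#      oktorun      => sub { 1 },
#      node         => $node_dbh,
#      sleep        => sub { sleep 1 },
#      max_flow_ctl => 50,   # wait while paused time exceeds 50% of wall time
#   );
#   $fc_waiter->wait();      # returns once Flow Control activity abates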
# ########################################################################### { package FlowControlWaiter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::HiRes qw(sleep time); use Data::Dumper; sub new { my ( $class, %args ) = @_; my @required_args = qw(oktorun node sleep max_flow_ctl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my $self = { %args }; $self->{last_time} = time(); my (undef, $last_fc_ns) = $self->{node}->selectrow_array('SHOW STATUS LIKE "wsrep_flow_control_paused_ns"'); $self->{last_fc_secs} = $last_fc_ns/1000_000_000; return bless $self, $class; } sub wait { my ( $self, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $pr = $args{Progress}; my $oktorun = $self->{oktorun}; my $sleep = $self->{sleep}; my $node = $self->{node}; my $max_avg = $self->{max_flow_ctl}/100; my $too_much_fc = 1; my $pr_callback; if ( $pr ) { $pr_callback = sub { print STDERR "Pausing because PXC Flow Control is active\n"; return; }; $pr->set_callback($pr_callback); } while ( $oktorun->() && $too_much_fc ) { my $current_time = time(); my (undef, $current_fc_ns) = $node->selectrow_array('SHOW STATUS LIKE "wsrep_flow_control_paused_ns"'); my $current_fc_secs = $current_fc_ns/1000_000_000; my $current_avg = ($current_fc_secs - $self->{last_fc_secs}) / ($current_time - $self->{last_time}); if ( $current_avg > $max_avg ) { if ( $pr ) { $pr->update(sub { return 0; }); } PTDEBUG && _d('Calling sleep callback'); if ( $self->{simple_progress} ) { print STDERR "Waiting for Flow Control to abate\n"; } $sleep->(); } else { $too_much_fc = 0; } $self->{last_time} = $current_time; $self->{last_fc_secs} = $current_fc_secs; } PTDEBUG && _d('Flow Control is Ok'); return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End FlowControlWaiter package # ########################################################################### # ########################################################################### # MySQLStatusWaiter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MySQLStatusWaiter.pm # t/lib/MySQLStatusWaiter.t # See https://launchpad.net/percona-toolkit for more information. 
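# A minimal usage sketch (illustrative only; $dbh is an assumed DBI handle):
#
#   my $status_waiter = MySQLStatusWaiter->new(
#      max_spec   => [ 'Threads_running=25', 'Threads_connected=100' ],
#      get_status => sub {
#         my ($var) = @_;
#         my (undef, $val)
#            = $dbh->selectrow_array("SHOW GLOBAL STATUS LIKE '$var'");
#         return $val;
#      },
#      sleep   => sub { sleep 1 },
#      oktorun => sub { 1 },
#   );
#   $status_waiter->wait();  # blocks while any watched variable is at or above its max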
# ########################################################################### { package MySQLStatusWaiter; use strict; use warnings FATAL => 'all'; use POSIX qw( ceil ); use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(max_spec get_status sleep oktorun); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } PTDEBUG && _d('Parsing spec for max thresholds'); my $max_val_for = _parse_spec($args{max_spec}); if ( $max_val_for ) { _check_and_set_vals( vars => $max_val_for, get_status => $args{get_status}, threshold_factor => 0.2, # +20% ); } PTDEBUG && _d('Parsing spec for critical thresholds'); my $critical_val_for = _parse_spec($args{critical_spec} || []); if ( $critical_val_for ) { _check_and_set_vals( vars => $critical_val_for, get_status => $args{get_status}, threshold_factor => 1.0, # double (x2; +100%) ); } my $self = { get_status => $args{get_status}, sleep => $args{sleep}, oktorun => $args{oktorun}, max_val_for => $max_val_for, critical_val_for => $critical_val_for, }; return bless $self, $class; } sub _parse_spec { my ($spec) = @_; return unless $spec && scalar @$spec; my %max_val_for; foreach my $var_val ( @$spec ) { die "Empty or undefined spec\n" unless $var_val; $var_val =~ s/^\s+//; $var_val =~ s/\s+$//g; my ($var, $val) = split /[:=]/, $var_val; die "$var_val does not contain a variable\n" unless $var; die "$var is not a variable name\n" unless $var =~ m/^[a-zA-Z_]+$/; if ( !$val ) { PTDEBUG && _d('Will get intial value for', $var, 'later'); $max_val_for{$var} = undef; } else { die "The value for $var must be a number\n" unless $val =~ m/^[\d\.]+$/; $max_val_for{$var} = $val; } } return \%max_val_for; } sub max_values { my ($self) = @_; return $self->{max_val_for}; } sub critical_values { my ($self) = @_; return $self->{critical_val_for}; } sub wait { my ( $self, %args ) = @_; return unless $self->{max_val_for}; my $pr = $args{Progress}; # optional my $oktorun = $self->{oktorun}; my $get_status = $self->{get_status}; my $sleep = $self->{sleep}; my %vals_too_high = %{$self->{max_val_for}}; my $pr_callback; if ( $pr ) { $pr_callback = sub { print STDERR "Pausing because " . join(', ', map { "$_=" . (defined $vals_too_high{$_} ? $vals_too_high{$_} : 'unknown') } sort keys %vals_too_high ) . ".\n"; return; }; $pr->set_callback($pr_callback); } while ( $oktorun->() ) { PTDEBUG && _d('Checking status variables'); foreach my $var ( sort keys %vals_too_high ) { my $val = $get_status->($var); PTDEBUG && _d($var, '=', $val); if ( $val && exists $self->{critical_val_for}->{$var} && $val >= $self->{critical_val_for}->{$var} ) { die "$var=$val exceeds its critical threshold " . 
"$self->{critical_val_for}->{$var}\n"; } if ( $val >= $self->{max_val_for}->{$var} ) { $vals_too_high{$var} = $val; } else { delete $vals_too_high{$var}; } } last unless scalar keys %vals_too_high; PTDEBUG && _d(scalar keys %vals_too_high, 'values are too high:', %vals_too_high); if ( $pr ) { $pr->update(sub { return 0; }); } PTDEBUG && _d('Calling sleep callback'); $sleep->(); %vals_too_high = %{$self->{max_val_for}}; # recheck all vars } PTDEBUG && _d('All var vals are low enough'); return; } sub _check_and_set_vals { my (%args) = @_; my @required_args = qw(vars get_status threshold_factor); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($vars, $get_status, $threshold_factor) = @args{@required_args}; PTDEBUG && _d('Checking and setting values'); return unless $vars && scalar %$vars; foreach my $var ( keys %$vars ) { my $init_val = $get_status->($var); die "Variable $var does not exist or its value is undefined\n" unless defined $init_val; my $val; if ( defined $vars->{$var} ) { $val = $vars->{$var}; } else { PTDEBUG && _d('Initial', $var, 'value:', $init_val); $val = ($init_val * $threshold_factor) + $init_val; $vars->{$var} = int(ceil($val)); } PTDEBUG && _d('Wait if', $var, '>=', $val); } } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MySQLStatusWaiter package # ########################################################################### # ########################################################################### # WeightedAvgRate package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/WeightedAvgRate.pm # t/lib/WeightedAvgRate.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package WeightedAvgRate; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(target_t); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my $self = { %args, avg_n => 0, avg_t => 0, weight => $args{weight} || 0.75, }; return bless $self, $class; } sub update { my ($self, $n, $t) = @_; PTDEBUG && _d('Master op time:', $n, 'n /', $t, 's'); if ( $self->{avg_n} && $self->{avg_t} ) { $self->{avg_n} = ($self->{avg_n} * $self->{weight}) + $n; $self->{avg_t} = ($self->{avg_t} * $self->{weight}) + $t; $self->{avg_rate} = $self->{avg_n} / $self->{avg_t}; PTDEBUG && _d('Weighted avg rate:', $self->{avg_rate}, 'n/s'); } else { $self->{avg_n} = $n; $self->{avg_t} = $t; $self->{avg_rate} = $self->{avg_n} / $self->{avg_t}; PTDEBUG && _d('Initial avg rate:', $self->{avg_rate}, 'n/s'); } my $new_n = int($self->{avg_rate} * $self->{target_t}); PTDEBUG && _d('Adjust n to', $new_n); return $new_n; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End WeightedAvgRate package # ########################################################################### # ########################################################################### # NibbleIterator package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/NibbleIterator.pm # t/lib/NibbleIterator.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package NibbleIterator; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(Cxn tbl chunk_size OptionParser Quoter TableNibbler TableParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($cxn, $tbl, $chunk_size, $o, $q) = @args{@required_args}; my $nibble_params = can_nibble(%args); my %comments = ( bite => "bite table", nibble => "nibble table", ); if ( $args{comments} ) { map { $comments{$_} = $args{comments}->{$_} } grep { defined $args{comments}->{$_} } keys %{$args{comments}}; } my $where = $o->has('where') ? $o->get('where') : ''; my $tbl_struct = $tbl->{tbl_struct}; my $ignore_col = $o->has('ignore-columns') ? ($o->get('ignore-columns') || {}) : {}; my $all_cols = $o->has('columns') ? ($o->get('columns') || $tbl_struct->{cols}) : $tbl_struct->{cols}; my @cols = grep { !$ignore_col->{$_} } @$all_cols; my $self; if ( $nibble_params->{one_nibble} ) { my $nibble_sql = ($args{dml} ? "$args{dml} " : "SELECT ") . ($args{select} ? $args{select} : join(', ', map { $q->quote($_) } @cols)) . " FROM $tbl->{name}" . ($where ? " WHERE $where" : '') . ($args{lock_in_share_mode} ? " LOCK IN SHARE MODE" : "") . " /*$comments{bite}*/"; PTDEBUG && _d('One nibble statement:', $nibble_sql); my $explain_nibble_sql = "EXPLAIN SELECT " . ($args{select} ? $args{select} : join(', ', map { $q->quote($_) } @cols)) . " FROM $tbl->{name}" . ($where ? " WHERE $where" : '') . ($args{lock_in_share_mode} ? " LOCK IN SHARE MODE" : "") . 
" /*explain $comments{bite}*/"; PTDEBUG && _d('Explain one nibble statement:', $explain_nibble_sql); $self = { %args, one_nibble => 1, limit => 0, nibble_sql => $nibble_sql, explain_nibble_sql => $explain_nibble_sql, }; } else { my $index = $nibble_params->{index}; # brevity my $index_cols = $tbl->{tbl_struct}->{keys}->{$index}->{cols}; my $asc = $args{TableNibbler}->generate_asc_stmt( %args, tbl_struct => $tbl->{tbl_struct}, index => $index, n_index_cols => $args{n_chunk_index_cols}, cols => \@cols, asc_only => 1, ); PTDEBUG && _d('Ascend params:', Dumper($asc)); my $force_concat_enums = $o->has('force-concat-enums') && $o->get('force-concat-enums'); my $i=0; for my $index (@{$index_cols}) { last if $args{n_chunk_index_cols} && $i >= $args{n_chunk_index_cols}; $i++; if ($tbl->{tbl_struct}->{type_for}->{$index} eq 'enum') { if ($tbl->{tbl_struct}->{defs}->{$index} =~ m/enum\s*\((.*?)\)/) { my @items = split(/,\s*/, $1); my $sorted = 1; # Asume the items list is sorted to later check if this is true for (my $i=1; $i < scalar(@items); $i++) { if ($items[$i-1] gt $items[$i]) { $sorted = 0; last; } } if (!$force_concat_enums && !$sorted) { die "The index " . $index . " in table " . $tbl->{name} . " has unsorted enum items.\nPlease read the documentation for the --force-concat-enums parameter\n"; } } } } my $from = "$tbl->{name} FORCE INDEX(`$index`)"; my $order_by = join(', ', map { $tbl->{tbl_struct}->{type_for}->{$_} eq 'enum' && $force_concat_enums ? "CONCAT(".$q->quote($_).")" : $q->quote($_)} @{$index_cols}); my $order_by_dec = join(' DESC,', map { $tbl->{tbl_struct}->{type_for}->{$_} eq 'enum' && $force_concat_enums ? "CONCAT(".$q->quote($_).")" : $q->quote($_)} @{$index_cols}); my $first_lb_sql = "SELECT /*!40001 SQL_NO_CACHE */ " . join(', ', map { $q->quote($_) } @{$asc->{scols}}) . " FROM $from" . ($where ? " WHERE $where" : '') . " ORDER BY $order_by" . " LIMIT 1" . " /*first lower boundary*/"; PTDEBUG && _d('First lower boundary statement:', $first_lb_sql); my $resume_lb_sql; if ( $args{resume} ) { $resume_lb_sql = "SELECT /*!40001 SQL_NO_CACHE */ " . join(', ', map { $q->quote($_) } @{$asc->{scols}}) . " FROM $from" . " WHERE " . $asc->{boundaries}->{'>'} . ($where ? " AND ($where)" : '') . " ORDER BY $order_by" . " LIMIT 1" . " /*resume lower boundary*/"; PTDEBUG && _d('Resume lower boundary statement:', $resume_lb_sql); } my $last_ub_sql = "SELECT /*!40001 SQL_NO_CACHE */ " . join(', ', map { $q->quote($_) } @{$asc->{scols}}) . " FROM $from" . ($where ? " WHERE $where" : '') . " ORDER BY " . $order_by_dec . ' DESC' . " LIMIT 1" . " /*last upper boundary*/"; PTDEBUG && _d('Last upper boundary statement:', $last_ub_sql); my $ub_sql = "SELECT /*!40001 SQL_NO_CACHE */ " . join(', ', map { $q->quote($_) } @{$asc->{scols}}) . " FROM $from" . " WHERE " . $asc->{boundaries}->{'>='} . ($where ? " AND ($where)" : '') . " ORDER BY $order_by" . " LIMIT ?, 2" . " /*next chunk boundary*/"; PTDEBUG && _d('Upper boundary statement:', $ub_sql); my $nibble_sql = ($args{dml} ? "$args{dml} " : "SELECT ") . ($args{select} ? $args{select} : join(', ', map { $q->quote($_) } @{$asc->{cols}})) . " FROM $from" . " WHERE " . $asc->{boundaries}->{'>='} # lower boundary . " AND " . $asc->{boundaries}->{'<='} # upper boundary . ($where ? " AND ($where)" : '') . ($args{order_by} ? " ORDER BY $order_by" : "") . ($args{lock_in_share_mode} ? " LOCK IN SHARE MODE" : "") . " /*$comments{nibble}*/"; PTDEBUG && _d('Nibble statement:', $nibble_sql); my $explain_nibble_sql = "EXPLAIN SELECT " . ($args{select} ? 
$args{select} : join(', ', map { $q->quote($_) } @{$asc->{cols}})) . " FROM $from" . " WHERE " . $asc->{boundaries}->{'>='} # lower boundary . " AND " . $asc->{boundaries}->{'<='} # upper boundary . ($where ? " AND ($where)" : '') . ($args{order_by} ? " ORDER BY $order_by" : "") . ($args{lock_in_share_mode} ? " LOCK IN SHARE MODE" : "") . " /*explain $comments{nibble}*/"; PTDEBUG && _d('Explain nibble statement:', $explain_nibble_sql); my $limit = $chunk_size - 1; PTDEBUG && _d('Initial chunk size (LIMIT):', $limit); $self = { %args, index => $index, limit => $limit, first_lb_sql => $first_lb_sql, last_ub_sql => $last_ub_sql, ub_sql => $ub_sql, nibble_sql => $nibble_sql, explain_first_lb_sql => "EXPLAIN $first_lb_sql", explain_ub_sql => "EXPLAIN $ub_sql", explain_nibble_sql => $explain_nibble_sql, resume_lb_sql => $resume_lb_sql, sql => { columns => $asc->{scols}, from => $from, where => $where, boundaries => $asc->{boundaries}, order_by => $order_by, }, }; } $self->{row_est} = $nibble_params->{row_est}, $self->{nibbleno} = 0; $self->{have_rows} = 0; $self->{rowno} = 0; $self->{oktonibble} = 1; $self->{pause_file} = $nibble_params->{pause_file}; $self->{sleep} = $args{sleep} || 60; return bless $self, $class; } sub next { my ($self) = @_; if ( !$self->{oktonibble} ) { PTDEBUG && _d('Not ok to nibble'); return; } my %callback_args = ( Cxn => $self->{Cxn}, tbl => $self->{tbl}, NibbleIterator => $self, ); if ($self->{nibbleno} == 0) { $self->_prepare_sths(); $self->_get_bounds(); if ( my $callback = $self->{callbacks}->{init} ) { $self->{oktonibble} = $callback->(%callback_args); PTDEBUG && _d('init callback returned', $self->{oktonibble}); if ( !$self->{oktonibble} ) { $self->{no_more_boundaries} = 1; return; } } if ( !$self->{one_nibble} && !$self->{first_lower} ) { PTDEBUG && _d('No first lower boundary, table must be empty'); $self->{no_more_boundaries} = 1; return; } } NIBBLE: while ( $self->{have_rows} || $self->_next_boundaries() ) { if ($self->{pause_file}) { while(-f $self->{pause_file}) { print "Sleeping $self->{sleep} seconds because $self->{pause_file} exists\n"; my $dbh = $self->{Cxn}->dbh(); if ( !$dbh || !$dbh->ping() ) { eval { $dbh = $self->{Cxn}->connect() }; # connect or die trying if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; die "Lost connection to " . $self->{Cxn}->name() . " while waiting for " . 
"replica lag ($EVAL_ERROR)\n"; } } $dbh->do("SELECT 'nibble iterator keepalive'"); sleep($self->{sleep}); } } if ( !$self->{have_rows} ) { $self->{nibbleno}++; PTDEBUG && _d('Nibble:', $self->{nibble_sth}->{Statement}, 'params:', join(', ', (@{$self->{lower}} || [], @{$self->{upper} }||[]))); if ( my $callback = $self->{callbacks}->{exec_nibble} ) { $self->{have_rows} = $callback->(%callback_args); } else { $self->{nibble_sth}->execute(@{$self->{lower}}, @{$self->{upper}}); $self->{have_rows} = $self->{nibble_sth}->rows(); } PTDEBUG && _d($self->{have_rows}, 'rows in nibble', $self->{nibbleno}); } if ( $self->{have_rows} ) { my $row = $self->{nibble_sth}->fetchrow_arrayref(); if ( $row ) { $self->{rowno}++; PTDEBUG && _d('Row', $self->{rowno}, 'in nibble',$self->{nibbleno}); return [ @$row ]; } } PTDEBUG && _d('No rows in nibble or nibble skipped'); if ( my $callback = $self->{callbacks}->{after_nibble} ) { $callback->(%callback_args); } $self->{rowno} = 0; $self->{have_rows} = 0; } PTDEBUG && _d('Done nibbling'); if ( my $callback = $self->{callbacks}->{done} ) { $callback->(%callback_args); } return; } sub nibble_number { my ($self) = @_; return $self->{nibbleno}; } sub set_nibble_number { my ($self, $n) = @_; die "I need a number" unless $n; $self->{nibbleno} = $n; PTDEBUG && _d('Set new nibble number:', $n); return; } sub nibble_index { my ($self) = @_; return $self->{index}; } sub statements { my ($self) = @_; return { explain_first_lower_boundary => $self->{explain_first_lb_sth}, nibble => $self->{nibble_sth}, explain_nibble => $self->{explain_nibble_sth}, upper_boundary => $self->{ub_sth}, explain_upper_boundary => $self->{explain_ub_sth}, } } sub boundaries { my ($self) = @_; return { first_lower => $self->{first_lower}, lower => $self->{lower}, upper => $self->{upper}, next_lower => $self->{next_lower}, last_upper => $self->{last_upper}, }; } sub set_boundary { my ($self, $boundary, $values) = @_; die "I need a boundary parameter" unless $boundary; die "Invalid boundary: $boundary" unless $boundary =~ m/^(?:lower|upper|next_lower|last_upper)$/; die "I need a values arrayref parameter" unless $values && ref $values eq 'ARRAY'; $self->{$boundary} = $values; PTDEBUG && _d('Set new', $boundary, 'boundary:', Dumper($values)); return; } sub one_nibble { my ($self) = @_; return $self->{one_nibble}; } sub limit { my ($self) = @_; return $self->{limit}; } sub set_chunk_size { my ($self, $limit) = @_; return if $self->{one_nibble}; die "Chunk size must be > 0" unless $limit; $self->{limit} = $limit - 1; PTDEBUG && _d('Set new chunk size (LIMIT):', $limit); return; } sub sql { my ($self) = @_; return $self->{sql}; } sub more_boundaries { my ($self) = @_; return !$self->{no_more_boundaries}; } sub row_estimate { my ($self) = @_; return $self->{row_est}; } sub can_nibble { my (%args) = @_; my @required_args = qw(Cxn tbl chunk_size OptionParser TableParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($cxn, $tbl, $chunk_size, $o) = @args{@required_args}; my $where = $o->has('where') ? $o->get('where') : ''; my ($row_est, $mysql_index) = get_row_estimate( Cxn => $cxn, tbl => $tbl, where => $where, ); if ( !$where ) { $mysql_index = undef; } my $chunk_size_limit = $o->get('chunk-size-limit') || 1; my $one_nibble = !defined $args{one_nibble} || $args{one_nibble} ? $row_est <= $chunk_size * $chunk_size_limit : 0; PTDEBUG && _d('One nibble:', $one_nibble ? 
'yes' : 'no'); if ( $args{resume} && !defined $args{resume}->{lower_boundary} && !defined $args{resume}->{upper_boundary} ) { PTDEBUG && _d('Resuming from one nibble table'); $one_nibble = 1; } my $index = _find_best_index(%args, mysql_index => $mysql_index); if ( !$index && !$one_nibble ) { die "There is no good index and the table is oversized."; } my $pause_file = ($o->has('pause-file') && $o->get('pause-file')) || undef; return { row_est => $row_est, # nibble about this many rows index => $index, # using this index one_nibble => $one_nibble, # if the table fits in one nibble/chunk pause_file => $pause_file, }; } sub _find_best_index { my (%args) = @_; my @required_args = qw(Cxn tbl TableParser); my ($cxn, $tbl, $tp) = @args{@required_args}; my $tbl_struct = $tbl->{tbl_struct}; my $indexes = $tbl_struct->{keys}; my $best_index; my $want_index = $args{chunk_index}; if ( $want_index ) { PTDEBUG && _d('User wants to use index', $want_index); if ( !exists $indexes->{$want_index} ) { PTDEBUG && _d('Cannot use user index because it does not exist'); $want_index = undef; } else { $best_index = $want_index; } } if ( !$best_index && !$want_index && $args{mysql_index} ) { PTDEBUG && _d('MySQL wants to use index', $args{mysql_index}); $want_index = $args{mysql_index}; } my @possible_indexes; if ( !$best_index && $want_index ) { if ( $indexes->{$want_index}->{is_unique} ) { PTDEBUG && _d('Will use wanted index'); $best_index = $want_index; } else { PTDEBUG && _d('Wanted index is a possible index'); push @possible_indexes, $want_index; } } if (!$best_index) { PTDEBUG && _d('Auto-selecting best index'); foreach my $index ( $tp->sort_indexes($tbl_struct) ) { if ( $index eq 'PRIMARY' || $indexes->{$index}->{is_unique} ) { $best_index = $index; last; } else { push @possible_indexes, $index; } } } if ( !$best_index && @possible_indexes ) { PTDEBUG && _d('No PRIMARY or unique indexes;', 'will use index with highest cardinality'); foreach my $index ( @possible_indexes ) { $indexes->{$index}->{cardinality} = _get_index_cardinality( %args, index => $index, ); } @possible_indexes = sort { my $cmp = $indexes->{$b}->{cardinality} <=> $indexes->{$a}->{cardinality}; if ( $cmp == 0 ) { $cmp = scalar @{$indexes->{$b}->{cols}} <=> scalar @{$indexes->{$a}->{cols}}; } $cmp; } @possible_indexes; $best_index = $possible_indexes[0]; } PTDEBUG && _d('Best index:', $best_index); return $best_index; } sub _get_index_cardinality { my (%args) = @_; my @required_args = qw(Cxn tbl index); my ($cxn, $tbl, $index) = @args{@required_args}; my $sql = "SHOW INDEXES FROM $tbl->{name} " . "WHERE Key_name = '$index'"; PTDEBUG && _d($sql); my $cardinality = 1; my $dbh = $cxn->dbh(); my $key_name = $dbh && ($dbh->{FetchHashKeyName} || '') eq 'NAME_lc' ? 'key_name' : 'Key_name'; my $rows = $dbh->selectall_hashref($sql, $key_name); foreach my $row ( values %$rows ) { $cardinality *= $row->{cardinality} if $row->{cardinality}; } PTDEBUG && _d('Index', $index, 'cardinality:', $cardinality); return $cardinality; } sub get_row_estimate { my (%args) = @_; my @required_args = qw(Cxn tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($cxn, $tbl) = @args{@required_args}; my $sql = "EXPLAIN SELECT * FROM $tbl->{name} " . "WHERE " . 
($args{where} || '1=1'); PTDEBUG && _d($sql); my $expl = $cxn->dbh()->selectrow_hashref($sql); PTDEBUG && _d(Dumper($expl)); my $mysql_index = $expl->{key} || ''; if ( $mysql_index ne 'PRIMARY' ) { $mysql_index = lc($mysql_index); } return ($expl->{rows} || 0), $mysql_index; } sub _prepare_sths { my ($self) = @_; PTDEBUG && _d('Preparing statement handles'); my $dbh = $self->{Cxn}->dbh(); $self->{nibble_sth} = $dbh->prepare($self->{nibble_sql}); $self->{explain_nibble_sth} = $dbh->prepare($self->{explain_nibble_sql}); if ( !$self->{one_nibble} ) { $self->{explain_first_lb_sth} = $dbh->prepare($self->{explain_first_lb_sql}); $self->{ub_sth} = $dbh->prepare($self->{ub_sql}); $self->{explain_ub_sth} = $dbh->prepare($self->{explain_ub_sql}); } return; } sub _get_bounds { my ($self) = @_; if ( $self->{one_nibble} ) { if ( $self->{resume} ) { $self->{no_more_boundaries} = 1; } return; } my $dbh = $self->{Cxn}->dbh(); $self->{first_lower} = $dbh->selectrow_arrayref($self->{first_lb_sql}); PTDEBUG && _d('First lower boundary:', Dumper($self->{first_lower})); if ( my $nibble = $self->{resume} ) { if ( defined $nibble->{lower_boundary} && defined $nibble->{upper_boundary} ) { my $sth = $dbh->prepare($self->{resume_lb_sql}); my @ub = split ',', $nibble->{upper_boundary}; PTDEBUG && _d($sth->{Statement}, 'params:', @ub); $sth->execute(@ub); $self->{next_lower} = $sth->fetchrow_arrayref(); $sth->finish(); } } else { $self->{next_lower} = $self->{first_lower}; } PTDEBUG && _d('Next lower boundary:', Dumper($self->{next_lower})); if ( !$self->{next_lower} ) { PTDEBUG && _d('At end of table, or no more boundaries to resume'); $self->{no_more_boundaries} = 1; $self->{last_upper} = $dbh->selectrow_arrayref($self->{last_ub_sql}); PTDEBUG && _d('Last upper boundary:', Dumper($self->{last_upper})); } return; } sub _next_boundaries { my ($self) = @_; if ( $self->{no_more_boundaries} ) { PTDEBUG && _d('No more boundaries'); return; # stop nibbling } if ( $self->{one_nibble} ) { $self->{lower} = $self->{upper} = []; $self->{no_more_boundaries} = 1; # for next call return 1; # continue nibbling } if ( $self->identical_boundaries($self->{lower}, $self->{next_lower}) ) { PTDEBUG && _d('Infinite loop detected'); my $tbl = $self->{tbl}; my $index = $tbl->{tbl_struct}->{keys}->{$self->{index}}; my $n_cols = scalar @{$index->{cols}}; my $chunkno = $self->{nibbleno}; die "Possible infinite loop detected! " . "The lower boundary for chunk $chunkno is " . "<" . join(', ', @{$self->{lower}}) . "> and the lower " . "boundary for chunk " . ($chunkno + 1) . " is also " . "<" . join(', ', @{$self->{next_lower}}) . ">. " . "This usually happens when using a non-unique single " . "column index. The current chunk index for table " . "$tbl->{db}.$tbl->{tbl} is $self->{index} which is" . ($index->{is_unique} ? '' : ' not') . " unique and covers " . ($n_cols > 1 ? "$n_cols columns" : "1 column") . 
".\n"; } $self->{lower} = $self->{next_lower}; if ( my $callback = $self->{callbacks}->{next_boundaries} ) { my $oktonibble = $callback->( Cxn => $self->{Cxn}, tbl => $self->{tbl}, NibbleIterator => $self, ); PTDEBUG && _d('next_boundaries callback returned', $oktonibble); if ( !$oktonibble ) { $self->{no_more_boundaries} = 1; return; # stop nibbling } } PTDEBUG && _d($self->{ub_sth}->{Statement}, 'params:', join(', ', @{$self->{lower}} || []), $self->{limit}); $self->{ub_sth}->execute(@{$self->{lower}}, $self->{limit}); my $boundary = $self->{ub_sth}->fetchall_arrayref(); PTDEBUG && _d('Next boundary:', Dumper($boundary)); if ( $boundary && @$boundary ) { $self->{upper} = $boundary->[0]; if ( $boundary->[1] ) { $self->{next_lower} = $boundary->[1]; } else { PTDEBUG && _d('End of table boundary:', Dumper($boundary->[0])); $self->{no_more_boundaries} = 1; # for next call $self->{last_upper} = $boundary->[0]; } } else { my $dbh = $self->{Cxn}->dbh(); $self->{upper} = $dbh->selectrow_arrayref($self->{last_ub_sql}); PTDEBUG && _d('Last upper boundary:', Dumper($self->{upper})); $self->{no_more_boundaries} = 1; # for next call $self->{last_upper} = $self->{upper}; } $self->{ub_sth}->finish(); return 1; # continue nibbling } sub identical_boundaries { my ($self, $b1, $b2) = @_; return 0 if ($b1 && !$b2) || (!$b1 && $b2); return 1 if !$b1 && !$b2; die "Boundaries have different numbers of values" if scalar @$b1 != scalar @$b2; # shouldn't happen my $n_vals = scalar @$b1; for my $i ( 0..($n_vals-1) ) { next if (!defined($b1->[$i]) && !defined($b2->[$i])); return 0 if (!defined($b1->[$i]) && defined($b2->[$i])); # diff return 0 if (defined($b1->[$i]) && !defined($b2->[$i])); # diff return 0 if $b1->[$i] ne $b2->[$i]; # diff } return 1; } sub DESTROY { my ( $self ) = @_; foreach my $key ( keys %$self ) { if ( $key =~ m/_sth$/ ) { PTDEBUG && _d('Finish', $key); $self->{$key}->finish(); } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End NibbleIterator package # ########################################################################### # ########################################################################### # Transformers package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Transformers.pm # t/lib/Transformers.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Transformers; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timegm timelocal); use Digest::MD5 qw(md5_hex); use B qw(); BEGIN { require Exporter; our @ISA = qw(Exporter); our %EXPORT_TAGS = (); our @EXPORT = (); our @EXPORT_OK = qw( micro_t percentage_of secs_to_time time_to_secs shorten ts parse_timestamp unix_timestamp any_unix_timestamp make_checksum crc32 encode_json ); } our $mysql_ts = qr/(\d\d)(\d\d)(\d\d) +(\d+):(\d+):(\d+)(\.\d+)?/; our $proper_ts = qr/(\d\d\d\d)-(\d\d)-(\d\d)[T ](\d\d):(\d\d):(\d\d)(\.\d+)?/; our $n_ts = qr/(\d{1,5})([shmd]?)/; # Limit \d{1,5} because \d{6} looks sub micro_t { my ( $t, %args ) = @_; my $p_ms = defined $args{p_ms} ? 
$args{p_ms} : 0; # precision for ms vals my $p_s = defined $args{p_s} ? $args{p_s} : 0; # precision for s vals my $f; $t = 0 if $t < 0; $t = sprintf('%.17f', $t) if $t =~ /e/; $t =~ s/\.(\d{1,6})\d*/\.$1/; if ($t > 0 && $t <= 0.000999) { $f = ($t * 1000000) . 'us'; } elsif ($t >= 0.001000 && $t <= 0.999999) { $f = sprintf("%.${p_ms}f", $t * 1000); $f = ($f * 1) . 'ms'; # * 1 to remove insignificant zeros } elsif ($t >= 1) { $f = sprintf("%.${p_s}f", $t); $f = ($f * 1) . 's'; # * 1 to remove insignificant zeros } else { $f = 0; # $t should = 0 at this point } return $f; } sub percentage_of { my ( $is, $of, %args ) = @_; my $p = $args{p} || 0; # float precision my $fmt = $p ? "%.${p}f" : "%d"; return sprintf $fmt, ($is * 100) / ($of ||= 1); } sub secs_to_time { my ( $secs, $fmt ) = @_; $secs ||= 0; return '00:00' unless $secs; $fmt ||= $secs >= 86_400 ? 'd' : $secs >= 3_600 ? 'h' : 'm'; return $fmt eq 'd' ? sprintf( "%d+%02d:%02d:%02d", int($secs / 86_400), int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : $fmt eq 'h' ? sprintf( "%02d:%02d:%02d", int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : sprintf( "%02d:%02d", int(($secs % 3_600) / 60), $secs % 60); } sub time_to_secs { my ( $val, $default_suffix ) = @_; die "I need a val argument" unless defined $val; my $t = 0; my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; $suffix = $suffix || $default_suffix || 's'; if ( $suffix =~ m/[smhd]/ ) { $t = $suffix eq 's' ? $num * 1 # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $t *= -1 if $prefix && $prefix eq '-'; } else { die "Invalid suffix for $val: $suffix"; } return $t; } sub shorten { my ( $num, %args ) = @_; my $p = defined $args{p} ? $args{p} : 2; # float precision my $d = defined $args{d} ? $args{d} : 1_024; # divisor my $n = 0; my @units = ('', qw(k M G T P E Z Y)); while ( $num >= $d && $n < @units - 1 ) { $num /= $d; ++$n; } return sprintf( $num =~ m/\./ || $n ? '%1$.'.$p.'f%2$s' : '%1$d', $num, $units[$n]); } sub ts { my ( $time, $gmt ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = $gmt ? gmtime($time) : localtime($time); $mon += 1; $year += 1900; my $val = sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); if ( my ($us) = $time =~ m/(\.\d+)$/ ) { $us = sprintf("%.6f", $us); $us =~ s/^0\././; $val .= $us; } return $val; } sub parse_timestamp { my ( $val ) = @_; if ( my($y, $m, $d, $h, $i, $s, $f) = $val =~ m/^$mysql_ts$/ ) { return sprintf "%d-%02d-%02d %02d:%02d:" . (defined $f ? '%09.6f' : '%02d'), $y + 2000, $m, $d, $h, $i, (defined $f ? $s + $f : $s); } elsif ( $val =~ m/^$proper_ts$/ ) { return $val; } return $val; } sub unix_timestamp { my ( $val, $gmt ) = @_; if ( my($y, $m, $d, $h, $i, $s, $us) = $val =~ m/^$proper_ts$/ ) { $val = $gmt ? timegm($s, $i, $h, $d, $m - 1, $y) : timelocal($s, $i, $h, $d, $m - 1, $y); if ( defined $us ) { $us = sprintf('%.6f', $us); $us =~ s/^0\././; $val .= $us; } } return $val; } sub any_unix_timestamp { my ( $val, $callback ) = @_; if ( my ($n, $suffix) = $val =~ m/^$n_ts$/ ) { $n = $suffix eq 's' ? $n # Seconds : $suffix eq 'm' ? $n * 60 # Minutes : $suffix eq 'h' ? $n * 3600 # Hours : $suffix eq 'd' ? 
$n * 86400 # Days : $n; # default: Seconds PTDEBUG && _d('ts is now - N[shmd]:', $n); return time - $n; } elsif ( $val =~ m/^\d{9,}/ ) { PTDEBUG && _d('ts is already a unix timestamp'); return $val; } elsif ( my ($ymd, $hms) = $val =~ m/^(\d{6})(?:\s+(\d+:\d+:\d+))?/ ) { PTDEBUG && _d('ts is MySQL slow log timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp(parse_timestamp($val)); } elsif ( ($ymd, $hms) = $val =~ m/^(\d{4}-\d\d-\d\d)(?:[T ](\d+:\d+:\d+))?/) { PTDEBUG && _d('ts is properly formatted timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp($val); } else { PTDEBUG && _d('ts is MySQL expression'); return $callback->($val) if $callback && ref $callback eq 'CODE'; } PTDEBUG && _d('Unknown ts type:', $val); return; } sub make_checksum { my ( $val ) = @_; my $checksum = uc substr(md5_hex($val), -16); PTDEBUG && _d($checksum, 'checksum for', $val); return $checksum; } sub crc32 { my ( $string ) = @_; return unless $string; my $poly = 0xEDB88320; my $crc = 0xFFFFFFFF; foreach my $char ( split(//, $string) ) { my $comp = ($crc ^ ord($char)) & 0xFF; for ( 1 .. 8 ) { $comp = $comp & 1 ? $poly ^ ($comp >> 1) : $comp >> 1; } $crc = (($crc >> 8) & 0x00FFFFFF) ^ $comp; } return $crc ^ 0xFFFFFFFF; } my $got_json = eval { require JSON }; sub encode_json { return JSON::encode_json(@_) if $got_json; my ( $data ) = @_; return (object_to_json($data) || ''); } sub object_to_json { my ($obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return hash_to_json($obj); } elsif($type eq 'ARRAY'){ return array_to_json($obj); } else { return value_to_json($obj); } } sub hash_to_json { my ($obj) = @_; my @res; for my $k ( sort { $a cmp $b } keys %$obj ) { push @res, string_to_json( $k ) . ":" . ( object_to_json( $obj->{$k} ) || value_to_json( $obj->{$k} ) ); } return '{' . ( @res ? join( ",", @res ) : '' ) . '}'; } sub array_to_json { my ($obj) = @_; my @res; for my $v (@$obj) { push @res, object_to_json($v) || value_to_json($v); } return '[' . ( @res ? join( ",", @res ) : '' ) . ']'; } sub value_to_json { my ($value) = @_; return 'null' if(!defined $value); my $b_obj = B::svref_2object(\$value); # for round trip problem my $flags = $b_obj->FLAGS; return $value # as is if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? my $type = ref($value); if( !$type ) { return string_to_json($value); } else { return 'null'; } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g; $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; utf8::upgrade($arg); utf8::encode($arg); return '"' . $arg . '"'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Transformers package # ########################################################################### # ########################################################################### # CleanupTask package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/CleanupTask.pm # t/lib/CleanupTask.t # See https://launchpad.net/percona-toolkit for more information. 
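# Descriptive note (not part of the original stripped module): CleanupTask below is a
# small guard object. new() requires a coderef and duplicates STDOUT/STDERR; when the
# object goes out of scope, DESTROY() restores those handles and runs the coderef, so
# cleanup runs even if the tool dies mid-operation. Illustrative sketch only -- the
# file name used here is hypothetical:
#
#   my $cleanup = CleanupTask->new( sub { unlink '/tmp/example.lock' } );
#   # ... do work; when $cleanup is destroyed, the unlink above is executed ...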
# ########################################################################### { package CleanupTask; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, $task ) = @_; die "I need a task parameter" unless $task; die "The task parameter must be a coderef" unless ref $task eq 'CODE'; my $self = { task => $task, }; open $self->{stdout_copy}, ">&=", *STDOUT or die "Cannot dup stdout: $OS_ERROR"; open $self->{stderr_copy}, ">&=", *STDERR or die "Cannot dup stderr: $OS_ERROR"; PTDEBUG && _d('Created cleanup task', $task); return bless $self, $class; } sub DESTROY { my ($self) = @_; my $task = $self->{task}; if ( ref $task ) { PTDEBUG && _d('Calling cleanup task', $task); open local(*STDOUT), ">&=", $self->{stdout_copy} if $self->{stdout_copy}; open local(*STDERR), ">&=", $self->{stderr_copy} if $self->{stderr_copy}; $task->(); } else { warn "Lost cleanup task"; } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End CleanupTask package # ########################################################################### # ########################################################################### # IndexLength package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/IndexLength.pm # t/lib/IndexLength.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package IndexLength; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { Quoter => $args{Quoter}, }; return bless $self, $class; } sub index_length { my ($self, %args) = @_; my @required_args = qw(Cxn tbl index); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($cxn) = @args{@required_args}; die "The tbl argument does not have a tbl_struct" unless exists $args{tbl}->{tbl_struct}; die "Index $args{index} does not exist in table $args{tbl}->{name}" unless $args{tbl}->{tbl_struct}->{keys}->{$args{index}}; my $index_struct = $args{tbl}->{tbl_struct}->{keys}->{$args{index}}; my $index_cols = $index_struct->{cols}; my $n_index_cols = $args{n_index_cols}; if ( !$n_index_cols || $n_index_cols > @$index_cols ) { $n_index_cols = scalar @$index_cols; } my $vals = $self->_get_first_values( %args, n_index_cols => $n_index_cols, ); my $sql = $self->_make_range_query( %args, n_index_cols => $n_index_cols, vals => $vals, ); my $sth = $cxn->dbh()->prepare($sql); PTDEBUG && _d($sth->{Statement}, 'params:', @$vals); $sth->execute(@$vals); my $row = $sth->fetchrow_hashref(); $sth->finish(); PTDEBUG && _d('Range scan:', Dumper($row)); return $row->{key_len}, $row->{key}; } sub _get_first_values { my ($self, %args) = @_; my @required_args = qw(Cxn tbl index n_index_cols); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($cxn, $tbl, 
$index, $n_index_cols) = @args{@required_args}; my $q = $self->{Quoter}; my $index_struct = $tbl->{tbl_struct}->{keys}->{$index}; my $index_cols = $index_struct->{cols}; my $index_columns = join (', ', map { $q->quote($_) } @{$index_cols}[0..($n_index_cols - 1)]); my @where; foreach my $col ( @{$index_cols}[0..($n_index_cols - 1)] ) { push @where, $q->quote($col) . " IS NOT NULL" } my $sql = "SELECT /*!40001 SQL_NO_CACHE */ $index_columns " . "FROM $tbl->{name} FORCE INDEX (" . $q->quote($index) . ") " . "WHERE " . join(' AND ', @where) . " ORDER BY $index_columns " . "LIMIT 1 /*key_len*/"; # only need 1 row PTDEBUG && _d($sql); my $vals = $cxn->dbh()->selectrow_arrayref($sql); return $vals; } sub _make_range_query { my ($self, %args) = @_; my @required_args = qw(tbl index n_index_cols vals); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($tbl, $index, $n_index_cols, $vals) = @args{@required_args}; my $q = $self->{Quoter}; my $index_struct = $tbl->{tbl_struct}->{keys}->{$index}; my $index_cols = $index_struct->{cols}; my @where; if ( $n_index_cols > 1 ) { foreach my $n ( 0..($n_index_cols - 2) ) { my $col = $index_cols->[$n]; my $val = $vals->[$n]; push @where, $q->quote($col) . " = ?"; } } my $col = $index_cols->[$n_index_cols - 1]; my $val = $vals->[-1]; # should only be as many vals as cols push @where, $q->quote($col) . " >= ?"; my $sql = "EXPLAIN SELECT /*!40001 SQL_NO_CACHE */ * " . "FROM $tbl->{name} FORCE INDEX (" . $q->quote($index) . ") " . "WHERE " . join(' AND ', @where) . " /*key_len*/"; return $sql; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End IndexLength package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 
1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? $host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? 
$1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for $host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! 
!= EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length mismatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $!
= 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! 
%$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . 
$file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) .
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # Percona::XtraDB::Cluster package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/XtraDB/Cluster.pm # t/lib/Percona/XtraDB/Cluster.t # See https://launchpad.net/percona-toolkit for more information. 
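# Descriptive note (not part of the original stripped module): the package below detects
# Percona XtraDB Cluster / Galera nodes by reading wsrep_* variables (wsrep_on,
# wsrep_cluster_name) and discovers the other members of a cluster from
# SHOW STATUS LIKE 'wsrep_incoming_addresses'. Illustrative sketch only; $cxn is assumed
# to be a Cxn object as used elsewhere in this tool:
#
#   my $cluster = Percona::XtraDB::Cluster->new();
#   if ( $cluster->is_cluster_node($cxn) ) { ... }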
# ########################################################################### { package Percona::XtraDB::Cluster; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Lmo; use Data::Dumper; { local $EVAL_ERROR; eval { require Cxn } }; sub get_cluster_name { my ($self, $cxn) = @_; my $sql = "SHOW VARIABLES LIKE 'wsrep\_cluster\_name'"; PTDEBUG && _d($cxn->name, $sql); my (undef, $cluster_name) = $cxn->dbh->selectrow_array($sql); return $cluster_name; } sub is_cluster_node { my ($self, $cxn) = @_; my $sql = "SHOW VARIABLES LIKE 'wsrep\_on'"; PTDEBUG && _d($cxn->name, $sql); my $row = $cxn->dbh->selectrow_arrayref($sql); PTDEBUG && _d(Dumper($row)); return unless $row && $row->[1] && ($row->[1] eq 'ON' || $row->[1] eq '1'); my $cluster_name = $self->get_cluster_name($cxn); return $cluster_name; } sub same_node { my ($self, $cxn1, $cxn2) = @_; foreach my $val ('wsrep\_sst\_receive\_address', 'wsrep\_node\_name', 'wsrep\_node\_address') { my $sql = "SHOW VARIABLES LIKE '$val'"; PTDEBUG && _d($cxn1->name, $cxn2->name, $sql); my (undef, $val1) = $cxn1->dbh->selectrow_array($sql); my (undef, $val2) = $cxn2->dbh->selectrow_array($sql); return unless ($val1 || '') eq ($val2 || ''); } return 1; } sub find_cluster_nodes { my ($self, %args) = @_; my $dbh = $args{dbh}; my $dsn = $args{dsn}; my $dp = $args{DSNParser}; my $make_cxn = $args{make_cxn}; my $sql = q{SHOW STATUS LIKE 'wsrep\_incoming\_addresses'}; PTDEBUG && _d($sql); my (undef, $addresses) = $dbh->selectrow_array($sql); PTDEBUG && _d("Cluster nodes found: ", $addresses); return unless $addresses; my @addresses = grep { !/\Aunspecified\z/i } split /,\s*/, $addresses; my @nodes; foreach my $address ( @addresses ) { my ($host, $port) = split /:/, $address; my $spec = "h=$host" . ($port ? ",P=$port" : ""); my $node_dsn = $dp->parse($spec, $dsn); my $node_dbh = eval { $dp->get_dbh( $dp->get_cxn_params($node_dsn), { AutoCommit => 1 }) }; if ( $EVAL_ERROR ) { print STDERR "Cannot connect to ", $dp->as_string($node_dsn), ", discovered through $sql: $EVAL_ERROR\n"; if ( !$port && $dsn->{P} != 3306 ) { $address .= ":3306"; redo; } next; } PTDEBUG && _d('Connected to', $dp->as_string($node_dsn)); $node_dbh->disconnect(); push @nodes, $make_cxn->(dsn => $node_dsn); } return \@nodes; } sub remove_duplicate_cxns { my ($self, %args) = @_; my @cxns = @{$args{cxns}}; my $seen_ids = $args{seen_ids} || {}; PTDEBUG && _d("Removing duplicates nodes from ", join(" ", map { $_->name } @cxns)); my @trimmed_cxns; for my $cxn ( @cxns ) { my $id = $cxn->get_id(); PTDEBUG && _d('Server ID for ', $cxn->name, ': ', $id); if ( ! 
$seen_ids->{$id}++ ) { push @trimmed_cxns, $cxn } else { PTDEBUG && _d("Removing ", $cxn->name, ", ID ", $id, ", because we've already seen it"); } } return \@trimmed_cxns; } sub same_cluster { my ($self, $cxn1, $cxn2) = @_; return 0 if !$self->is_cluster_node($cxn1) || !$self->is_cluster_node($cxn2); my $cluster1 = $self->get_cluster_name($cxn1); my $cluster2 = $self->get_cluster_name($cxn2); return ($cluster1 || '') eq ($cluster2 || ''); } sub autodetect_nodes { my ($self, %args) = @_; my $ms = $args{MasterSlave}; my $dp = $args{DSNParser}; my $make_cxn = $args{make_cxn}; my $nodes = $args{nodes}; my $seen_ids = $args{seen_ids}; my $new_nodes = []; return $new_nodes unless @$nodes; for my $node ( @$nodes ) { my $nodes_found = $self->find_cluster_nodes( dbh => $node->dbh(), dsn => $node->dsn(), make_cxn => $make_cxn, DSNParser => $dp, ); push @$new_nodes, @$nodes_found; } $new_nodes = $self->remove_duplicate_cxns( cxns => $new_nodes, seen_ids => $seen_ids ); my $new_slaves = []; foreach my $node (@$new_nodes) { my $node_slaves = $ms->get_slaves( dbh => $node->dbh(), dsn => $node->dsn(), make_cxn => $make_cxn, ); push @$new_slaves, @$node_slaves; } $new_slaves = $self->remove_duplicate_cxns( cxns => $new_slaves, seen_ids => $seen_ids ); my @new_slave_nodes = grep { $self->is_cluster_node($_) } @$new_slaves; my $slaves_of_slaves = $self->autodetect_nodes( %args, nodes => \@new_slave_nodes, ); my @autodetected_nodes = ( @$new_nodes, @$new_slaves, @$slaves_of_slaves ); return \@autodetected_nodes; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::XtraDB::Cluster package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_online_schema_change; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Time::HiRes qw(time sleep); use Data::Dumper; use VersionCompare; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; # Import Term::Readkey if available # Not critical so don't fail if it's not my $term_readkey = eval { require Term::ReadKey; Term::ReadKey->import(); 1; }; use sigtrap 'handler', \&sig_int, 'normal-signals'; my $exit_status = 0; my $oktorun = 1; my $dont_interrupt_now = 0; my @drop_trigger_sqls; my @triggers_not_dropped; my $pxc_version = '0'; my $triggers_info = []; # Completely ignore these error codes. my %ignore_code = ( # Error: 1592 SQLSTATE: HY000 (ER_BINLOG_UNSAFE_STATEMENT) # Message: Statement may not be safe to log in statement format. # Ignore this warning because we have purposely set statement-based # replication. 
1592 => 1, # Error: 1062 SQLSTATE: 23000 ( ER_DUP_ENTRY ) # Message: Duplicate entry '%ld' for key '%s' # MariaDB 5.5.28+ has this as a warning; See https://bugs.launchpad.net/percona-toolkit/+bug/1099836 1062 => 1, ); $OUTPUT_AUTOFLUSH = 1; use constant { INVALID_PARAMETERS => 1, UNSUPORTED_MYSQL_VERSION => 2, NO_MINIMUM_REQUIREMENTS => 3, NO_PRIMARY_OR_UNIQUE_KEY => 4, INVALID_PLUGIN_FILE => 5, INVALID_ALTER_FK_METHOD => 6, INVALID_KEY_SIZE => 7, CANNOT_DETERMINE_KEY_SIZE => 9, NOT_SAFE_TO_ASCEND => 9, ERROR_CREATING_NEW_TABLE => 10, ERROR_ALTERING_TABLE => 11, ERROR_CREATING_TRIGGERS => 12, ERROR_RESTORING_TRIGGERS => 13, ERROR_SWAPPING_TABLES => 14, ERROR_UPDATING_FKS => 15, ERROR_DROPPING_OLD_TABLE => 16, UNSUPORTED_OPERATION => 17, MYSQL_CONNECTION_ERROR => 18, LOST_MYSQL_CONNECTION => 19, }; sub _die { my ($msg, $exit_status) = @_; $exit_status ||= 255; chomp ($msg); print "$msg\n"; exit $exit_status; } sub main { local @ARGV = @_; # Reset global vars else tests will fail. $exit_status = 0; $oktorun = 1; @drop_trigger_sqls = (); @triggers_not_dropped = (); $dont_interrupt_now = 0; %ignore_code = (1592 => 1, 1062 => 1); my %stats = ( INSERT => 0, ); # ######################################################################## # Get configuration information. # ######################################################################## my $q = new Quoter(); my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); if ( $o->get('null-to-not-null') ) { $ignore_code{1048} = 1; } my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); # The original table, i.e. the one being altered, must be specified # on the command line via the DSN. my ($db, $tbl); my $dsn = shift @ARGV; if ( !$dsn ) { $o->save_error('A DSN must be specified'); } else { # Parse DSN string and convert it to a DSN data struct. $dsn = $dp->parse($dsn, $dp->parse_options($o)); $db = $dsn->{D}; $tbl = $dsn->{t}; } my $alter_fk_method = $o->get('alter-foreign-keys-method') || ''; if ( $alter_fk_method eq 'drop_swap' ) { $o->set('swap-tables', 0); $o->set('drop-old-table', 0); } # Explicit --chunk-size disable auto chunk sizing. $o->set('chunk-time', 0) if $o->got('chunk-size'); if (!$o->get('swap-tables') && !$o->get('drop-triggers')) { PTDEBUG && _d('Enabling no-drop-new-table since no-swap-tables & no-drop-triggers were specified'); $o->set('drop-new-table', 0); } foreach my $opt ( qw(max-load critical-load) ) { next unless $o->has($opt); my $spec = $o->get($opt); eval { MySQLStatusWaiter::_parse_spec($o->get($opt)); }; if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; $o->save_error("Invalid --$opt: $EVAL_ERROR"); } } # https://bugs.launchpad.net/percona-toolkit/+bug/1010232 my $n_chunk_index_cols = $o->get('chunk-index-columns'); if ( defined $n_chunk_index_cols && (!$n_chunk_index_cols || $n_chunk_index_cols =~ m/\D/ || $n_chunk_index_cols < 1) ) { $o->save_error('Invalid number of --chunk-index columns: ' . $n_chunk_index_cols); } my $tries = eval { validate_tries($o); }; if ( $EVAL_ERROR ) { $o->save_error($EVAL_ERROR); } if ( !$o->get('drop-triggers') ) { $o->set('drop-old-table', 0); } if ( !$o->get('drop-triggers') && $o->get('preserve-triggers') ) { my $msg = "Cannot use --no-drop-triggers along with --preserve-triggers " . "since --preserve-triggers implies that the old triggers should be deleted" . " and recreated in the new table.\nPlease read the documentation for " . 
"--preserve-triggers"; _die($msg, INVALID_PARAMETERS); } if ( $o->get('preserve-triggers') ) { $o->set('drop-triggers', 1); } if ( !$o->get('help') ) { if ( @ARGV ) { $o->save_error('Specify only one DSN on the command line'); } if ( !$db || !$tbl ) { $o->save_error("The DSN must specify a database (D) and a table (t)"); } if ( $o->get('progress') ) { eval { Progress->validate_spec($o->get('progress')) }; if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; $o->save_error("--progress $EVAL_ERROR"); } } # See the "pod-based-option-value-validation" spec for how this may # be automagically validated. if ( $alter_fk_method && $alter_fk_method ne 'auto' && $alter_fk_method ne 'rebuild_constraints' && $alter_fk_method ne 'drop_swap' && $alter_fk_method ne 'none' ) { $o->save_error("Invalid --alter-foreign-keys-method value: $alter_fk_method"); } # Issue a strong warning if alter-foreign-keys-method = none if ( $alter_fk_method eq 'none' && !$o->get('force') ) { print STDERR "WARNING! Using alter-foreign-keys-method = \"none\". This will typically cause foreign key violations!\nThis method of handling foreign key constraints is only provided so that the database administrator can disable the tool’s built-in functionality if desired.\n\nContinue anyway? (y/N)"; my $response; chomp($response = ); if ($response !~ /y|(yes)/i) { exit 1; } } if ( $alter_fk_method eq 'drop_swap' && !$o->get('drop-new-table') ) { $o->save_error("--alter-foreign-keys-method=drop_swap does not work with --no-drop-new-table."); } } eval { MasterSlave::check_recursion_method($o->get('recursion-method')); }; if ( $EVAL_ERROR ) { $o->save_error("Invalid --recursion-method: $EVAL_ERROR") } $o->usage_or_errors(); if ( $o->get('quiet') ) { # BARON: this will fail on Windows, where there is no /dev/null. I feel # it's a hack, like ignoring a problem instead of fixing it somehow. We # should take a look at the things that get printed in a "normal" # non-quiet run, and "if !quiet" them, and then do some kind of Logger.pm # or Messager.pm module for a future release. close STDOUT; open STDOUT, '>', '/dev/null' or warn "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } # ######################################################################## # Connect to MySQL. # ######################################################################## my $set_on_connect = sub { my ($dbh) = @_; return; }; # Do not call "new Cxn(" directly; use this sub so that set_on_connect # is applied to every cxn. # BARON: why not make this a subroutine instead of a subroutine variable? I # think that can be less confusing. Also, the $set_on_connect variable can be # inlined into this subroutine. Many of our tools have a get_dbh() subroutine # and it might be good to just make a convention of it. my $make_cxn = sub { my (%args) = @_; my $cxn = Cxn->new( %args, DSNParser => $dp, OptionParser => $o, set => $set_on_connect, ); eval { $cxn->connect() }; # connect or die trying if ( $EVAL_ERROR ) { _die("Cannot connect to MySQL: $EVAL_ERROR", MYSQL_CONNECTION_ERROR); } return $cxn; }; my $cxn = $make_cxn->(dsn => $dsn); my $aux_cxn = $make_cxn->(dsn => $dsn, prev_dsn => $dsn); my $cluster = Percona::XtraDB::Cluster->new; if ( $cluster->is_cluster_node($cxn) ) { # Because of https://bugs.launchpad.net/codership-mysql/+bug/1040108 # ptc and pt-osc check Threads_running by default for --max-load. # Strictly speaking, they can run on 5.5.27 as long as that bug doesn't # manifest itself. If it does, however, then the tools will wait forever. 
$pxc_version = VersionParser->new($cxn->dbh); if ( $pxc_version < '5.5.28' ) { _die("Percona XtraDB Cluster 5.5.28 or newer is required to run " . "this tool on a cluster, but node " . $cxn->name . " is running version " . $pxc_version->version . ". Please upgrade the node, or run the tool on a newer node, " . "or contact Percona for support.", UNSUPORTED_MYSQL_VERSION); } if ( $pxc_version < '5.6' && $o->got('max-flow-ctl') ) { _die("Option '--max-flow-ctl is only available for PXC version 5.6 " . "or higher.", INVALID_PARAMETERS); } # If wsrep_OSU_method=RSU the "DDL will be only processed locally at # the node." So _table_new (the altered version of table) will not # replicate to other nodes but our INSERT..SELECT operations on it # will, thereby crashing all other nodes. my (undef, $wsrep_osu_method) = $cxn->dbh->selectrow_array( "SHOW VARIABLES LIKE 'wsrep\_OSU\_method'"); if ( lc($wsrep_osu_method || '') ne 'toi' ) { _die("wsrep_OSU_method=TOI is required because " . $cxn->name . " is a cluster node. wsrep_OSU_method is " . "currently set to " . ($wsrep_osu_method || '') . ". " . "Set it to TOI, or contact Percona for support.", NO_MINIMUM_REQUIREMENTS); } } elsif ( $o->got('max-flow-ctl') ) { _die("Option '--max-flow-ctl' is meant to be used on PXC clusters. " ."For normal async replication use '--max-lag' and '--check-interval' " ."instead.", INVALID_PARAMETERS); } # ######################################################################## # Check if MySQL is new enough to have the triggers we need. # Although triggers were introduced in 5.0.2, "Prior to MySQL 5.0.10, # triggers cannot contain direct references to tables by name." # ######################################################################## my $server_version = VersionParser->new($cxn->dbh()); if ( $server_version < '5.0.10' ) { _die("This tool requires MySQL 5.0.10 or newer.", UNSUPORTED_MYSQL_VERSION); } # Use LOCK IN SHARE mode unless MySQL 5.0 because there's a bug like # http://bugs.mysql.com/bug.php?id=45694 my $lock_in_share_mode = $server_version < '5.1' ? 0 : 1; # ######################################################################## # Check if analyze-before-swap is necessary. # https://bugs.launchpad.net/percona-toolkit/+bug/1491261 # ######################################################################## my $analyze_table = $o->get('analyze-before-swap'); if ( $o->got('analyze-before-swap') ) { # User specified so respect their wish. If --analyze-before-swap, do it # regardless of MySQL version and innodb_stats_peristent. # If --no-analyze-before-swap, don't do it. PTDEBUG && _d('User specified explicit --analyze-before-swap:', ($analyze_table ? 'on' : 'off')); } elsif ( $analyze_table ) { # User did not specify --analyze-before-swap on command line, and it # defaults to "yes", so auto-check for the conditions it's affected by # and enable only if those conditions are true. if ( $server_version >= '5.6' ) { my (undef, $innodb_stats_persistent) = $cxn->dbh->selectrow_array( "SHOW VARIABLES LIKE 'innodb_stats_persistent'"); if ($innodb_stats_persistent eq 'ON' || $innodb_stats_persistent eq '1') { PTDEBUG && _d('innodb_stats_peristent is ON, enabling --analyze-before-swap'); $analyze_table = 1; } else { PTDEBUG && _d('innodb_stats_peristent is OFF, disabling --analyze-before-swap'); $analyze_table = 0; } } else { PTDEBUG && _d('MySQL < 5.6, disabling --analyze-before-swap'); $analyze_table = 0; } } # ######################################################################## # Create --plugin. 
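# A --plugin file is an ordinary Perl file that defines a package named
# pt_online_schema_change_plugin with a new() constructor; the hook methods
# called throughout this tool (init, before_create_new_table, after_copy_rows,
# before_swap_tables, before_exit, and so on) are all optional and are invoked
# only if the plugin can() them. A minimal, hypothetical plugin file might
# look like this sketch:
#
#   package pt_online_schema_change_plugin;
#   use strict;
#   use warnings;
#
#   sub new {
#       my ( $class, %args ) = @_;    # receives cxn, aux_cxn, alter, etc.
#       return bless { %args }, $class;
#   }
#
#   sub after_copy_rows {
#       my ( $self ) = @_;
#       print "PLUGIN: finished copying rows\n";
#   }
#
#   1;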
# ######################################################################## my $plugin; if ( my $file = $o->get('plugin') ) { _die("--plugin file $file does not exist", INVALID_PLUGIN_FILE) unless -f $file; eval { require $file; }; _die("Error loading --plugin $file: $EVAL_ERROR", INVALID_PLUGIN_FILE) if $EVAL_ERROR; eval { $plugin = pt_online_schema_change_plugin->new( cxn => $cxn, aux_cxn => $aux_cxn, alter => $o->get('alter'), execute => $o->get('execute'), dry_run => $o->get('dry-run'), print => $o->get('print'), quiet => $o->get('quiet'), Quoter => $q, ); }; _die("Error creating --plugin: $EVAL_ERROR", INVALID_PLUGIN_FILE) if $EVAL_ERROR; print "Created plugin from $file.\n"; } # ######################################################################## # Setup lag and load monitors. # ######################################################################## my $slaves; # all slaves that are found or specified my $slave_lag_cxns; # slaves whose lag we'll check my $replica_lag; # ReplicaLagWaiter object my $replica_lag_pr; # Progress for ReplicaLagWaiter my $flow_ctl; # FlowControlWaiter object my $flow_ctl_pr; # Progress for FlowControlWaiter my $sys_load; # MySQLStatusWaiter object my $sys_load_pr; # Progress for MySQLStatusWaiter object if ( $o->get('execute') ) { # ##################################################################### # Find and connect to slaves. # ##################################################################### my $ms = new MasterSlave( OptionParser => $o, DSNParser => $dp, Quoter => $q, ); $slaves = $ms->get_slaves( dbh => $cxn->dbh(), dsn => $cxn->dsn(), make_cxn => sub { return $make_cxn->(@_, prev_dsn => $cxn->dsn()); }, ); PTDEBUG && _d(scalar @$slaves, 'slaves found'); if ( scalar @$slaves ) { print "Found " . scalar(@$slaves) . " slaves:\n"; foreach my $cxn ( @$slaves ) { print $cxn->description()."\n"; } } elsif ( ($o->get('recursion-method') || '') ne 'none') { print "No slaves found. See --recursion-method if host " . $cxn->name() . " has slaves.\n"; } else { print "Ignoring all slaves because --recursion-method=none " . "was specified\n"; } if ( my $dsn = $o->get('check-slave-lag') ) { PTDEBUG && _d('Will use --check-slave-lag to check for slave lag'); my $cxn = $make_cxn->( dsn_string => $o->get('check-slave-lag'), prev_dsn => $cxn->dsn(), ); $slave_lag_cxns = [ $cxn ]; } else { PTDEBUG && _d('Will check slave lag on all slaves'); $slave_lag_cxns = $slaves; } if ( $slave_lag_cxns && scalar @$slave_lag_cxns ) { if ($o->get('skip-check-slave-lag')) { my $slaves_to_skip = $o->get('skip-check-slave-lag'); my $filtered_slaves = []; for my $slave (@$slave_lag_cxns) { my $found=0; for my $slave_to_skip (@$slaves_to_skip) { if ($slave->{dsn}->{h} eq $slave_to_skip->{h} && $slave->{dsn}->{P} eq $slave_to_skip->{P}) { $found=1; } } if ($found) { print "Skipping slave ". $slave->description()."\n"; } else { push @$filtered_slaves, $slave; } } $slave_lag_cxns = $filtered_slaves; } if (!scalar @$slave_lag_cxns) { print "Not checking slave lag because all slaves were skipped\n"; } else{ print "Will check slave lag on:\n"; foreach my $cxn ( @$slave_lag_cxns ) { print $cxn->description()."\n"; } } } else { print "Not checking slave lag because no slaves were found " . "and --check-slave-lag was not specified.\n"; } # ##################################################################### # Check for replication filters. 
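# (Note on the slave-lag selection above: --check-slave-lag and
# --skip-check-slave-lag both take DSNs, the latter repeatable; for example,
# hypothetically, "--check-slave-lag h=replica1,P=3306" watches only that
# replica, while "--skip-check-slave-lag h=replica2,P=3306" excludes one.
# Replicas to skip are matched on their h and P DSN values.)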
# ##################################################################### if ( $o->get('check-replication-filters') ) { PTDEBUG && _d("Checking slave replication filters"); my @all_repl_filters; foreach my $slave ( @$slaves ) { my $repl_filters = $ms->get_replication_filters( dbh => $slave->dbh(), ); if ( keys %$repl_filters ) { push @all_repl_filters, { name => $slave->name(), filters => $repl_filters, }; } } if ( @all_repl_filters ) { my $msg = "Replication filters are set on these hosts:\n"; foreach my $host ( @all_repl_filters ) { my $filters = $host->{filters}; $msg .= " $host->{name}\n" . join("\n", map { " $_ = $host->{filters}->{$_}" } keys %{$host->{filters}}) . "\n"; } $msg .= "Please read the --check-replication-filters documentation " . "to learn how to solve this problem."; _die($msg, INVALID_PARAMETERS); } } # ##################################################################### # Make a ReplicaLagWaiter to help wait for slaves after each chunk. # Note: the "sleep" function is also used by MySQLStatusWaiter and # FlowControlWaiter # ##################################################################### my $sleep = sub { # Don't let the master dbh die while waiting for slaves because we # may wait a very long time for slaves. my $dbh = $cxn->dbh(); if ( !$dbh || !$dbh->ping() ) { eval { $dbh = $cxn->connect() }; # connect or die trying if ( $EVAL_ERROR ) { $oktorun = 0; # flag for cleanup tasks chomp $EVAL_ERROR; _die("Lost connection to " . $cxn->name() . " while waiting for " . "replica lag ($EVAL_ERROR)", LOST_MYSQL_CONNECTION); } } $dbh->do("SELECT 'pt-online-schema-change keepalive'"); sleep $o->get('check-interval'); return; }; my $get_lag; # The plugin is able to override the slavelag check so tools like # pt-heartbeat or other replicators (Tungsten...) can be used to # measure replication lag if ( $plugin && $plugin->can('get_slave_lag') ) { $get_lag = $plugin->get_slave_lag(oktorun => \$oktorun); } else { $get_lag = sub { my ($cxn) = @_; my $dbh = $cxn->dbh(); if ( !$dbh || !$dbh->ping() ) { eval { $dbh = $cxn->connect() }; # connect or die trying if ( $EVAL_ERROR ) { # As the docs say: "The tool waits forever for replicas # to stop lagging. If any replica is stopped, the tool # waits forever until the replica is started." # https://bugs.launchpad.net/percona-toolkit/+bug/1402051 #TODO REMOVE DEBUG PTDEBUG && _d('2> Cannot connect to', $cxn->name(), ':', $EVAL_ERROR); # Make ReplicaLagWaiter::wait() report slave is stopped. return undef; } } my $lag; eval { $lag = $ms->get_slave_lag($dbh); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Cannot get lag for', $cxn->name(), ':', $EVAL_ERROR); } return $lag; # undef if error }; } $replica_lag = new ReplicaLagWaiter( slaves => $slave_lag_cxns, max_lag => $o->get('max-lag'), oktorun => sub { return $oktorun }, get_lag => $get_lag, sleep => $sleep, ); my $get_status; { my $sql = "SHOW GLOBAL STATUS LIKE ?"; my $sth = $cxn->dbh()->prepare($sql); $get_status = sub { my ($var) = @_; PTDEBUG && _d($sth->{Statement}, $var); $sth->execute($var); my (undef, $val) = $sth->fetchrow_array(); return $val; }; } eval { $sys_load = new MySQLStatusWaiter( max_spec => $o->get('max-load'), critical_spec => $o->get('critical-load'), get_status => $get_status, oktorun => sub { return $oktorun }, sleep => $sleep, ); }; if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; _die("Error checking --max-load or --critial-load: $EVAL_ERROR. " . "Check that the variables specified for --max-load and " . "--critical-load are spelled correctly and exist in " . 
"SHOW GLOBAL STATUS. Current values for these options are:\n" . " --max-load " . (join(',', @{$o->get('max-load')})) . "\n" . " --critial-load " . (join(',', @{$o->get('critical-load')})) , INVALID_PARAMETERS); } if ( $pxc_version >= '5.6' && $o->got('max-flow-ctl') ) { $flow_ctl = new FlowControlWaiter( node => $cxn->dbh(), max_flow_ctl => $o->get('max-flow-ctl'), oktorun => sub { return $oktorun }, sleep => $sleep, ); } if ( $o->get('progress') ) { $replica_lag_pr = new Progress( jobsize => scalar @$slaves, spec => $o->get('progress'), name => "Waiting for replicas to catch up", # not used ); $sys_load_pr = new Progress( jobsize => scalar @{$o->get('max-load')}, spec => $o->get('progress'), name => "Waiting for --max-load", # not used ); if ( $pxc_version >= '5.6' && $o->got('max-flow-ctl') ) { $flow_ctl_pr = new Progress( jobsize => $o->get('max-flow-ctl'), spec => $o->get('progress'), name => "Waiting for flow control to abate", # not used ); } } } # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ map ( { +{ dbh => $_->dbh(), dsn => $_->dsn() } } $cxn, ($slaves ? @$slaves : ()) ) ], ); } # ######################################################################## # Setup and check the original table. # ######################################################################## my $tp = TableParser->new(Quoter => $q); # Common table data struct (that modules like NibbleIterator expect). my $orig_tbl = { db => $db, tbl => $tbl, name => $q->quote($db, $tbl), }; check_orig_table( orig_tbl => $orig_tbl, Cxn => $cxn, OptionParser => $o, TableParser => $tp, Quoter => $q, ); # ######################################################################## # Print --tries. # ######################################################################## print "Operation, tries, wait:\n"; { my $fmt = " %s, %s, %s\n"; foreach my $op ( sort keys %$tries ) { printf $fmt, $op, $tries->{$op}->{tries}, $tries->{$op}->{wait}; } } # ######################################################################## # Get child tables of the original table, if necessary. # ######################################################################## my $child_tables; my $have_child_tables = find_child_tables( tbl => $orig_tbl, Cxn => $cxn, Quoter => $q, ); my $vp = VersionParser->new($cxn->dbh()); if ($vp->cmp('8.0.14') > -1 && $vp->flavor() !~ m/maria/i) { my $msg = "There is an error in MySQL that makes the server to die when trying to ". "rename a table with FKs. See https://bugs.mysql.com/bug.php?id=96145\n". "Since pt-online-schema change needs to rename the old <-> new tables as the final " . "step, and the requested table has FKs, it cannot be executed under the current MySQL version"; _die($msg, NO_MINIMUM_REQUIREMENTS); } if ( ($alter_fk_method || '') eq 'none' ) { print "Not updating foreign keys because " . "--alter-foreign-keys-method=none. Foreign keys " . "that reference the table will no longer work.\n"; } else { $child_tables = find_child_tables( tbl => $orig_tbl, Cxn => $cxn, Quoter => $q, only_same_schema_fks => $o->get('only-same-schema-fks'), ); if ( !$child_tables ) { if ( $alter_fk_method ) { warn "No foreign keys reference $orig_tbl->{name}; ignoring " . 
"--alter-foreign-keys-method.\n"; if ( $alter_fk_method eq 'drop_swap' ) { # These opts are disabled at the start if the user specifies # the drop_swap method, but now that we know there are no # child tables, we must re-enable these to make the alter work. $o->set('swap-tables', 1); $o->set('drop-old-table', 1); } $alter_fk_method = ''; } # No child tables and --alter-fk-method wasn't specified, # so nothing to do. } else { print "Child tables:\n"; foreach my $child_table ( @$child_tables ) { printf " %s (approx. %s rows)\n", $child_table->{name}, $child_table->{row_est} || '?'; } if ( $alter_fk_method ) { # Let the user know how we're going to update the child table # fk refs. my $choice = $alter_fk_method eq 'none' ? "not" : $alter_fk_method eq 'auto' ? "automatically choose the method to" : "use the $alter_fk_method method to"; print "Will $choice update foreign keys.\n"; } else { print "You did not specify --alter-foreign-keys-method, but there " . "are foreign keys that reference the table. " . "Please read the tool's documentation carefully.\n"; return 1; } } } # ######################################################################## # XXX # Ready to begin the alter! Nothing has been changed on the server at # this point; we've just checked and looked for things. Past this point, # the code is live if --execute, else it's doing a --dry-run. Or, if # the user didn't read the docs, we may bail out here. # XXX # ######################################################################## if ( $o->get('dry-run') ) { print "Starting a dry run. $orig_tbl->{name} will not be altered. " . "Specify --execute instead of --dry-run to alter the table.\n"; } elsif ( $o->get('execute') ) { print "Altering $orig_tbl->{name}...\n"; } else { print "Exiting without altering $orig_tbl->{name} because neither " . "--dry-run nor --execute was specified. Please read the tool's " . "documentation carefully before using this tool.\n"; return 1; } # ######################################################################## # Create a cleanup task object to undo changes (i.e. clean up) if the # code dies, or we may call this explicitly at the end if all goes well. # ######################################################################## my @cleanup_tasks; my $cleanup = new CleanupTask( sub { # XXX We shouldn't copy $EVAL_ERROR here, but I found that # errors are not re-thrown in tests. If you comment (*) out this # line and the die below, an error fails: # not ok 5 - Doesn't try forever to find a new table name # Failed test 'Doesn't try forever to find a new table name' # at /Users/daniel/p/pt-osc-2.1.1/lib/PerconaTest.pm line 559. # '' # doesn't match '(?-xism:Failed to find a unique new table name)' # (*) Frank: commented them out because it caused infinite loop # and the mentioned test error doesn't arise #my $original_error = $EVAL_ERROR; foreach my $task ( reverse @cleanup_tasks ) { eval { $task->(); }; if ( $EVAL_ERROR ) { warn "Error cleaning up: $EVAL_ERROR\n"; } } #die $original_error if $original_error; # rethrow original error return; } ); local $SIG{__DIE__} = sub { return if $EXCEPTIONS_BEING_CAUGHT; local $EVAL_ERROR = $_[0]; undef $cleanup; die @_; }; # The last cleanup task is to report whether or not the orig table # was altered. push @cleanup_tasks, sub { PTDEBUG && _d('Clean up done, report if orig table was altered'); if ( $o->get('dry-run') ) { print "Dry run complete. 
$orig_tbl->{name} was not altered.\n"; } else { if ( $orig_tbl->{swapped} ) { if ( $orig_tbl->{success} ) { print "Successfully altered $orig_tbl->{name}.\n"; } else { print "Altered $orig_tbl->{name} but there were errors " . "or warnings.\n"; } } else { print "$orig_tbl->{name} was not altered.\n"; } } return; }; # The 2nd to last cleanup task is printing the --statistics which # may reveal something about the failure. if ( $o->get('statistics') ) { push @cleanup_tasks, sub { my $n = max( map { length $_ } keys %stats ); my $fmt = "# %-${n}s %5s\n"; printf $fmt, 'Event', 'Count'; printf $fmt, ('=' x $n),'====='; foreach my $event ( sort keys %stats ) { printf $fmt, $event, (defined $stats{$event} ? $stats{$event} : '?'); } }; } # ######################################################################## # Check the --alter statement. # ######################################################################## my $renamed_cols = {}; if ( my $alter = $o->get('alter') ) { $renamed_cols = find_renamed_cols( alter => $o->get('alter'), TableParser => $tp, ); if ( $o->get('check-alter') ) { check_alter( tbl => $orig_tbl, alter => $alter, dry_run => $o->get('dry-run'), renamed_cols => $renamed_cols, Cxn => $cxn, TableParser => $tp, OptionParser => $o, ); } } if ( %$renamed_cols && !$o->get('dry-run') ) { print "Renaming columns:\n" . join("\n", map { " $_ to $renamed_cols->{$_}" } sort keys %$renamed_cols) . "\n"; } # ######################################################################## # Check and create PID file if user specified --pid. # ######################################################################## my $daemon = Daemon->new( daemonize => 0, # not daemoninzing, just PID file pid_file => $o->get('pid'), ); $daemon->run(); # ######################################################################## # Init the --plugin. # ######################################################################## # --plugin hook if ( $plugin && $plugin->can('init') ) { $plugin->init( orig_tbl => $orig_tbl, child_tables => $child_tables, renamed_cols => $renamed_cols, slaves => $slaves, slave_lag_cxns => $slave_lag_cxns, ); } # ##################################################################### # Step 1: Create the new table. # ##################################################################### my $new_table_name = $o->get('new-table-name'); my $new_table_prefix = $o->got('new-table-name') ? undef : '_'; # --plugin hook if ( $plugin && $plugin->can('before_create_new_table') ) { $plugin->before_create_new_table( new_table_name => $new_table_name, new_table_prefix => $new_table_prefix, ); } my $new_tbl; eval { $new_tbl = create_new_table( new_table_name => $new_table_name, new_table_prefix => $new_table_prefix, orig_tbl => $orig_tbl, Cxn => $cxn, Quoter => $q, OptionParser => $o, TableParser => $tp, ); }; if ( $EVAL_ERROR ) { _die("Error creating new table: $EVAL_ERROR", ERROR_CREATING_NEW_TABLE); } # If the new table still exists, drop it unless the tool was interrupted. push @cleanup_tasks, sub { PTDEBUG && _d('Clean up new table'); my $new_tbl_exists = $tp->check_table( dbh => $cxn->dbh(), db => $new_tbl->{db}, tbl => $new_tbl->{tbl}, ); PTDEBUG && _d('New table exists:', $new_tbl_exists ? 'yes' : 'no'); return unless $new_tbl_exists; my $sql = "DROP TABLE IF EXISTS $new_tbl->{name};"; if ( !$oktorun ) { # The tool was interrupted, so do not drop the new table # in case the user wants to resume (once resume capability # is implemented). print "Not dropping the new table $new_tbl->{name} because " . 
"the tool was interrupted. To drop the new table, " . "execute:\n$sql\n"; } elsif ( $orig_tbl->{copied} && !$orig_tbl->{swapped} ) { print "Not dropping the new table $new_tbl->{name} because " . "--swap-tables failed. To drop the new table, " . "execute:\n$sql\n"; } elsif ( !$o->get('drop-new-table') ) { # https://bugs.launchpad.net/percona-toolkit/+bug/998831 print "Not dropping the new table $new_tbl->{name} because " . "--no-drop-new-table was specified. To drop the new table, " . "execute:\n$sql\n"; } elsif ( @triggers_not_dropped ) { # https://bugs.launchpad.net/percona-toolkit/+bug/1188002 print "Not dropping the new table $new_tbl->{name} because " . "dropping these triggers failed:\n" . join("\n", map { " $_" } @triggers_not_dropped) . "\nThese triggers must be dropped before dropping " . "$new_tbl->{name}, else writing to $orig_tbl->{name} will " . "cause MySQL error 1146 (42S02): \"Table $new_tbl->{name} " . " doesn't exist\".\n"; } else { print ts("Dropping new table...\n"); print $sql, "\n" if $o->get('print'); PTDEBUG && _d($sql); eval { $cxn->dbh()->do($sql); }; if ( $EVAL_ERROR ) { warn ts("Error dropping new table $new_tbl->{name}: $EVAL_ERROR\n" . "To try dropping the new table again, execute:\n$sql\n"); } print ts("Dropped new table OK.\n"); } }; my $table_is_replicated; if ( $slaves && scalar @$slaves && $table_is_replicated) { foreach my $slave (@$slaves) { my ($pr, $pr_first_report); if ( $o->get('progress') ) { $pr = new Progress( jobsize => scalar @$slaves, spec => $o->get('progress'), name => "Waiting for " . $slave->name(), ); $pr_first_report = sub { print "Waiting forever for new table $new_tbl->{name} to replicate " . "to " . $slave->name() . "...\n"; }; } $pr->start() if $pr; my $has_table = 0; while ( !$has_table ) { $has_table = $tp->check_table( dbh => $slave->dbh(), db => $new_tbl->{db}, tbl => $new_tbl->{tbl} ); last if $has_table; $pr->update( sub { return 0; }, first_report => $pr_first_report, ) if $pr; sleep 1; } } } # --plugin hook if ( $plugin && $plugin->can('after_create_new_table') ) { $plugin->after_create_new_table( new_tbl => $new_tbl, ); } # ##################################################################### # Step 2: Alter the new, empty table. This should be very quick, # or die if the user specified a bad alter statement. # ##################################################################### # --plugin hook if ( $plugin && $plugin->can('before_alter_new_table') ) { $plugin->before_alter_new_table( new_tbl => $new_tbl, ); } if ( my $alter = $o->get('alter') ) { print "Altering new table...\n"; my $sql = "ALTER TABLE $new_tbl->{name} $alter"; print $sql, "\n" if $o->get('print'); PTDEBUG && _d($sql); eval { $cxn->dbh()->do($sql); }; if ( $EVAL_ERROR ) { # this is trapped by a signal handler. Don't replace it with _die die "Error altering new table $new_tbl->{name}: $EVAL_ERROR\n"; } print "Altered $new_tbl->{name} OK.\n"; } # Get the new table struct. This shouldn't die because # we just created the table successfully so we know it's # there. But the ghost of Ryan is everywhere. my $ddl = $tp->get_create_table( $cxn->dbh(), $new_tbl->{db}, $new_tbl->{tbl}, ); $new_tbl->{tbl_struct} = $tp->parse($ddl); # Determine what columns the original and new table share. # If the user drops a col, that's easy: just don't copy it. If they # add a column, it must have a default value. Other alterations # may or may not affect the copy process--we'll know when we try! 
# Col posn (position) is just for looks because user's like # to see columns listed in their original order, not Perl's # random hash key sorting. my $col_posn = $orig_tbl->{tbl_struct}->{col_posn}; my $orig_cols = $orig_tbl->{tbl_struct}->{is_col}; my $new_cols = $new_tbl->{tbl_struct}->{is_col}; my @common_cols = map { +{ old => $_, new => $renamed_cols->{$_} || $_ } } sort { $col_posn->{$a} <=> $col_posn->{$b} } grep { $new_cols->{$_} || $renamed_cols->{$_} } keys %$orig_cols; PTDEBUG && _d('Common columns', Dumper(\@common_cols)); # Find a pk or unique index to use for the delete trigger. can_nibble() # above returns an index, but NibbleIterator will use non-unique indexes, # so we have to do this again here. { my $indexes = $new_tbl->{tbl_struct}->{keys}; # brevity foreach my $index ( $tp->sort_indexes($new_tbl->{tbl_struct}) ) { if ( $index eq 'PRIMARY' || ($indexes->{$index}->{is_unique} && $indexes->{$index}->{is_nullable} == 0)) { PTDEBUG && _d('Delete trigger new index:', Dumper($index)); $new_tbl->{del_index} = $index; last; } } } { my $indexes = $orig_tbl->{tbl_struct}->{keys}; # brevity foreach my $index ( $tp->sort_indexes($orig_tbl->{tbl_struct}) ) { if ( $index eq 'PRIMARY' || $indexes->{$index}->{is_unique} ) { PTDEBUG && _d('Delete trigger orig index:', Dumper($index)); $orig_tbl->{del_index} = $index; last; } } PTDEBUG && _d('Orig table delete index:', $orig_tbl->{del_index}); } if ( !$new_tbl->{del_index} ) { _die("The new table $new_tbl->{name} does not have a PRIMARY KEY " . "or a unique index which is required for the DELETE trigger.\n" . "Please check you have at least one UNIQUE and NOT NULLABLE index.", NO_PRIMARY_OR_UNIQUE_KEY); } # Determine whether to use the new or orig table delete index. # The new table del index is preferred due to # https://bugs.launchpad.net/percona-toolkit/+bug/1062324 # In short, if the chosen del index is re-created with new columns, # its original columns may be dropped, so just use its new columns. # But, due to https://bugs.launchpad.net/percona-toolkit/+bug/1103672, # the chosen del index on the new table may reference columns which # do not/no longer exist in the orig table, so we check for this # and, if it's the case, we fall back to using the del index from # the orig table. my $del_tbl = $new_tbl; # preferred my $new_del_index_cols # brevity = $new_tbl->{tbl_struct}->{keys}->{ $new_tbl->{del_index} }->{cols}; foreach my $new_del_index_col ( @$new_del_index_cols ) { if ( !exists $orig_cols->{$new_del_index_col} ) { if ( !$orig_tbl->{del_index} ) { _die("The new table index $new_tbl->{del_index} would be used " . "for the DELETE trigger, but it uses column " . "$new_del_index_col which does not exist in the original " . "table and the original table does not have a PRIMARY KEY " . "or a unique index to use for the DELETE trigger.", NO_PRIMARY_OR_UNIQUE_KEY); } print "Using original table index $orig_tbl->{del_index} for the " . "DELETE trigger instead of new table index $new_tbl->{del_index} " . "because the new table index uses column $new_del_index_col " . 
"which does not exist in the original table.\n"; $del_tbl = $orig_tbl; last; } } { my $del_cols = $del_tbl->{tbl_struct}->{keys}->{ $del_tbl->{del_index} }->{cols}; PTDEBUG && _d('Index for delete trigger: table', $del_tbl->{name}, 'index', $del_tbl->{del_index}, 'columns', @$del_cols); } # --plugin hook if ( $plugin && $plugin->can('after_alter_new_table') ) { $plugin->after_alter_new_table( new_tbl => $new_tbl, del_tbl => $del_tbl, ); } # ######################################################################## # Step 3: Create the triggers to capture changes on the original table and # apply them to the new table. # ######################################################################## my $retry = new Retry(); # Drop the triggers. We can save this cleanup task before # adding the triggers because if adding them fails, this will be # called which will drop whichever triggers were created. my $drop_triggers = $o->get('drop-triggers'); push @cleanup_tasks, sub { PTDEBUG && _d('Clean up triggers'); # --plugin hook if ( $plugin && $plugin->can('before_drop_triggers') ) { $plugin->before_drop_triggers( oktorun => $oktorun, drop_triggers => $drop_triggers, drop_trigger_sqls => \@drop_trigger_sqls, ); } if ( !$oktorun ) { print "Not dropping triggers because the tool was interrupted. " . "To drop the triggers, execute:\n" . join("\n", @drop_trigger_sqls) . "\n"; } elsif ( !$drop_triggers ) { print "Not dropping triggers because --no-drop-triggers was " . "specified. To drop the triggers, execute:\n" . join("\n", @drop_trigger_sqls) . "\n"; } else { drop_triggers( tbl => $orig_tbl, Cxn => $cxn, Quoter => $q, OptionParser => $o, Retry => $retry, tries => $tries, stats => \%stats, ); } }; # --plugin hook if ( $plugin && $plugin->can('before_create_triggers') ) { $plugin->before_create_triggers(); } my @trigger_names = eval { create_triggers( orig_tbl => $orig_tbl, new_tbl => $new_tbl, del_tbl => $del_tbl, columns => \@common_cols, Cxn => $cxn, Quoter => $q, OptionParser => $o, Retry => $retry, tries => $tries, stats => \%stats, ); }; if ( $EVAL_ERROR ) { _die("Error creating triggers: $EVAL_ERROR", ERROR_CREATING_TRIGGERS); }; # --plugin hook if ( $plugin && $plugin->can('after_create_triggers') ) { $plugin->after_create_triggers(); } # ##################################################################### # Step 4: Copy rows. # ##################################################################### # The hashref of callbacks below is what NibbleIterator calls internally # to do all the copy work. The callbacks do not need to eval their work # because the higher call to $nibble_iter->next() is eval'ed which will # catch any errors in the callbacks. my $total_rows = 0; my $total_time = 0; my $avg_rate = 0; # rows/second my $limit = $o->get('chunk-size-limit'); # brevity my $chunk_time = $o->get('chunk-time'); # brevity my $callbacks = { init => sub { my (%args) = @_; my $tbl = $args{tbl}; my $nibble_iter = $args{NibbleIterator}; my $statements = $nibble_iter->statements(); my $boundary = $nibble_iter->boundaries(); if ( $o->get('dry-run') ) { print "Not copying rows because this is a dry run.\n"; } else { if ( !$nibble_iter->one_nibble() && !$boundary->{first_lower} ) { # https://bugs.launchpad.net/percona-toolkit/+bug/1020997 print "$tbl->{name} is empty, no rows to copy.\n"; return; } else { print ts("Copying approximately " . $nibble_iter->row_estimate() . " rows...\n"); } } if ( $o->get('print') ) { # Print the checksum and next boundary statements. 
foreach my $sth ( sort keys %$statements ) { next if $sth =~ m/^explain/; if ( $statements->{$sth} ) { print $statements->{$sth}->{Statement}, "\n"; } } } return unless $o->get('execute'); # If table is a single chunk on the master, make sure it's also # a single chunk on all slaves. E.g. if a slave is out of sync # and has a lot more rows than the master, single chunking on the # master could cause the slave to choke. if ( $nibble_iter->one_nibble() ) { PTDEBUG && _d('Getting table row estimate on replicas'); my @too_large; foreach my $slave ( @$slaves ) { my ($n_rows) = NibbleIterator::get_row_estimate( Cxn => $slave, tbl => $tbl, ); PTDEBUG && _d('Table on',$slave->name(),'has', $n_rows, 'rows'); if ( $limit && $n_rows && $n_rows > ($tbl->{chunk_size} * $limit) ) { PTDEBUG && _d('Table too large on', $slave->name()); push @too_large, [$slave->name(), $n_rows || 0]; } } if ( @too_large ) { my $msg = "Cannot copy table $tbl->{name} because" . " on the master it would be checksummed in one chunk" . " but on these replicas it has too many rows:\n"; foreach my $info ( @too_large ) { $msg .= " $info->[1] rows on $info->[0]\n"; } $msg .= "The current chunk size limit is " . ($tbl->{chunk_size} * $limit) . " rows (chunk size=$tbl->{chunk_size}" . " * chunk size limit=$limit).\n"; die ts($msg); } } else { # chunking the table if ( $o->get('check-plan') ) { my $idx_len = new IndexLength(Quoter => $q); my ($key_len, $key) = $idx_len->index_length( Cxn => $args{Cxn}, tbl => $tbl, index => $nibble_iter->nibble_index(), n_index_cols => $o->get('chunk-index-columns'), ); if ( !$key || lc($key) ne lc($nibble_iter->nibble_index()) ) { _die(ts("Cannot determine the key_len of the chunk index " . "because MySQL chose " . ($key ? "the $key" : "no") . " index " . "instead of the " . $nibble_iter->nibble_index() . " index for the first lower boundary statement. " . "See --[no]check-plan in the documentation for more " . "information."), CANNOT_DETERMINE_KEY_SIZE); } elsif ( !$key_len ) { _die(ts("The key_len of the $key index is " . (defined $key_len ? "zero" : "NULL") . ", but this should not be possible. " . "See --[no]check-plan in the documentation for more " . "information."), INVALID_KEY_SIZE); } $tbl->{key_len} = $key_len; } } return 1; # continue nibbling table }, next_boundaries => sub { my (%args) = @_; my $tbl = $args{tbl}; my $nibble_iter = $args{NibbleIterator}; my $sth = $nibble_iter->statements(); my $boundary = $nibble_iter->boundaries(); return 0 if $o->get('dry-run'); return 1 if $nibble_iter->one_nibble(); # Check that MySQL will use the nibble index for the next upper # boundary sql. This check applies to the next nibble. So if # the current nibble number is 5, then nibble 5 is already done # and we're checking nibble number 6. # Skip if --nocheck-plan See: https://bugs.launchpad.net/percona-toolkit/+bug/1340728 if ( $o->get('check-plan') ) { my $expl = explain_statement( tbl => $tbl, sth => $sth->{explain_upper_boundary}, vals => [ @{$boundary->{lower}}, $nibble_iter->limit() ], ); if ( lc($expl->{key} || '') ne lc($nibble_iter->nibble_index() || '') ) { my $msg = "Aborting copying table $tbl->{name} at chunk " . ($nibble_iter->nibble_number() + 1) . " because it is not safe to ascend. Chunking should " . "use the " . ($nibble_iter->nibble_index() || '?') . " index, but MySQL EXPLAIN reports that " . ($expl->{key} ? "the $expl->{key}" : "no") . " index will be used for " . $sth->{upper_boundary}->{Statement} . " with values " . join(", ", map { defined $_ ? 
$_ : "NULL" } (@{$boundary->{lower}}, $nibble_iter->limit())) . "\n"; _die(ts($msg), NOT_SAFE_TO_ASCEND); } } # Once nibbling begins for a table, control does not return to this # tool until nibbling is done because, as noted above, all work is # done in these callbacks. This callback is the only place where we # can prematurely stop nibbling by returning false. This allows # Ctrl-C to stop the tool between nibbles instead of between tables. return $oktorun; # continue nibbling table? }, exec_nibble => sub { my (%args) = @_; my $tbl = $args{tbl}; my $nibble_iter = $args{NibbleIterator}; return if $o->get('dry-run'); # Count every chunk, even if it's ultimately skipped, etc. $tbl->{results}->{n_chunks}++; # Die unless the nibble is safe. nibble_is_safe( %args, OptionParser => $o, ); # Exec and time the chunk checksum query. $tbl->{nibble_time} = exec_nibble( %args, tries => $tries, Retry => $retry, Quoter => $q, stats => \%stats, ); PTDEBUG && _d('Nibble time:', $tbl->{nibble_time}); # We're executing REPLACE queries which don't return rows. # Returning 0 from this callback causes the nibble iter to # get the next boundaries/nibble. return 0; }, after_nibble => sub { my (%args) = @_; my $tbl = $args{tbl}; my $nibble_iter = $args{NibbleIterator}; return unless $o->get('execute'); # Update rate, chunk size, and progress if the nibble actually # selected some rows. my $cnt = $tbl->{row_cnt}; if ( ($cnt || 0) > 0 ) { # Update the rate of rows per second for the entire server. # This is used for the initial chunk size of the next table. $total_rows += $cnt; $total_time += $tbl->{nibble_time}; $avg_rate = int($total_rows / $total_time); PTDEBUG && _d('Average copy rate (rows/s):', $avg_rate); # Adjust chunk size. This affects the next chunk. if ( $chunk_time ) { # Calcuate a new chunk-size based on the rate of rows/s. $tbl->{chunk_size} = $tbl->{rate}->update( $cnt, # processed this many rows $tbl->{nibble_time}, # is this amount of time ); if ( $tbl->{chunk_size} < 1 ) { # This shouldn't happen. WeightedAvgRate::update() may # return a value < 1, but minimum chunk size is 1. $tbl->{chunk_size} = 1; # This warning is printed once per table. if ( !$tbl->{warned_slow} ) { warn ts("Rows are copying very slowly. " . "--chunk-size has been automatically reduced to 1. " . "Check that the server is not being overloaded, " . "or increase --chunk-time. The last chunk " . "selected $cnt rows and took " . sprintf('%.3f', $tbl->{nibble_time}) . " seconds to execute.\n"); $tbl->{warned_slow} = 1; } } # Update chunk-size based on the rate of rows/s. $nibble_iter->set_chunk_size($tbl->{chunk_size}); } # Every table should have a Progress obj; update it. if ( my $tbl_pr = $tbl->{progress} ) { $tbl_pr->update( sub { return $total_rows } ); } } # Wait forever for slaves to catch up. $replica_lag_pr->start() if $replica_lag_pr; $replica_lag->wait(Progress => $replica_lag_pr); # Wait forever for system load to abate. wait() will die if # --critical load is reached. $sys_load_pr->start() if $sys_load_pr; $sys_load->wait(Progress => $sys_load_pr); # Wait forever for flow control to abate. $flow_ctl_pr->start() if $flow_ctl_pr; $flow_ctl->wait(Progress => $flow_ctl_pr) if $flow_ctl; # sleep between chunks to avoid overloading PXC nodes my $sleep = $args{NibbleIterator}->{OptionParser}->get('sleep'); if ( $sleep ) { sleep $sleep; } return; }, done => sub { if ( $o->get('execute') ) { print ts("Copied rows OK.\n"); } }, }; # NibbleIterator combines these two statements and adds # "FROM $orig_table->{name} WHERE ". 
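# For illustration (hypothetical table and columns, default --new-table-name),
# the combined per-chunk copy statement ends up looking roughly like:
#
#   INSERT LOW_PRIORITY IGNORE INTO `db`.`_t_new` (`id`, `c1`, `c2`)
#   SELECT `id`, `c1`, `c2` FROM `db`.`t` FORCE INDEX(`PRIMARY`)
#   WHERE ((`id` >= ?)) AND ((`id` <= ?))
#   LOCK IN SHARE MODE
#   /*pt-online-schema-change 12345 copy nibble*/
#
# LOCK IN SHARE MODE is omitted on MySQL 5.0 (see $lock_in_share_mode above),
# and NibbleIterator fills in the boundary placeholders for each chunk.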
my $dml = "INSERT LOW_PRIORITY IGNORE INTO $new_tbl->{name} " . "(" . join(', ', map { $q->quote($_->{new}) } @common_cols) . ") " . "SELECT"; my $select = join(', ', map { $q->quote($_->{old}) } @common_cols); # The chunk size is auto-adjusted, so use --chunk-size as # the initial value, but then save and update the adjusted # chunk size in the table data struct. $orig_tbl->{chunk_size} = $o->get('chunk-size'); # This won't (shouldn't) fail because we already verified in # check_orig_table() table we can NibbleIterator::can_nibble(). my $nibble_iter = new NibbleIterator( Cxn => $cxn, tbl => $orig_tbl, chunk_size => $orig_tbl->{chunk_size}, chunk_index => $o->get('chunk-index'), n_chunk_index_cols => $o->get('chunk-index-columns'), dml => $dml, select => $select, callbacks => $callbacks, lock_in_share_mode => $lock_in_share_mode, OptionParser => $o, Quoter => $q, TableParser => $tp, TableNibbler => new TableNibbler(TableParser => $tp, Quoter => $q), comments => { bite => "pt-online-schema-change $PID copy table", nibble => "pt-online-schema-change $PID copy nibble", }, ); # Init a new weighted avg rate calculator for the table. $orig_tbl->{rate} = new WeightedAvgRate(target_t => $chunk_time); # Make a Progress obj for this table. It may not be used; # depends on how many rows, chunk size, how fast the server # is, etc. But just in case, all tables have a Progress obj. if ( $o->get('progress') && !$nibble_iter->one_nibble() && $nibble_iter->row_estimate() ) { $orig_tbl->{progress} = new Progress( jobsize => $nibble_iter->row_estimate(), spec => $o->get('progress'), name => "Copying $orig_tbl->{name}", ); } # --plugin hook if ( $plugin && $plugin->can('before_copy_rows') ) { $plugin->before_copy_rows(); } # Start copying rows. This may take awhile, but --progress is on # by default so there will be progress updates to stderr. eval { 1 while $nibble_iter->next(); }; if ( $EVAL_ERROR ) { die ts("Error copying rows from $orig_tbl->{name} to " . "$new_tbl->{name}: $EVAL_ERROR"); } $orig_tbl->{copied} = 1; # flag for cleanup tasks # XXX Auto-choose the alter fk method BEFORE swapping/renaming tables # else everything will break because if drop_swap is chosen, then we # most NOT rename tables or drop the old table. if ( $alter_fk_method eq 'auto' ) { # If chunk time is set, then use the average rate of rows/s # from copying the orig table to determine the max size of # a child table that can be altered within one chunk time. # The limit is a fudge factor. Chunk time won't be set if # the user specified --chunk-size=N on the cmd line, in which # case the max child table size is their specified chunk size # times the fudge factor. my $max_rows = $o->get('dry-run') ? $o->get('chunk-size') * $limit : $chunk_time && $avg_rate ? $avg_rate * $chunk_time * $limit : $o->get('chunk-size') * $limit; PTDEBUG && _d('Max allowed child table size:', $max_rows); $alter_fk_method = determine_alter_fk_method( child_tables => $child_tables, max_rows => $max_rows, Cxn => $cxn, OptionParser => $o, ); if ( $alter_fk_method eq 'drop_swap' ) { $o->set('swap-tables', 0); $o->set('drop-old-table', 0); } } if ($vp->cmp('8.0') > -1 && $vp->flavor() !~ m/maria/i && $alter_fk_method eq 'drop_swap') { my $msg = "--alter-foreign-keys-method=drop_swap doesn't work with MySQL 8.0+\n". 
"See https://bugs.mysql.com/bug.php?id=89441"; _die($msg, INVALID_PARAMETERS); } # --plugin hook if ( $plugin && $plugin->can('after_copy_rows') ) { $plugin->after_copy_rows(); } # ##################################################################### # XXX # Step 5: Rename tables: orig -> old, new -> orig # Past this step, the original table has been altered. This shouldn't # fail, but if it does, the failure could be serious depending on what # state the tables are left in. # XXX # ##################################################################### # --plugin hook if ( $plugin && $plugin->can('before_swap_tables') ) { $plugin->before_swap_tables(); } if ( $o->get('preserve-triggers') ) { if ( !$o->get('swap-tables') && $o->get('drop-new-table') ) { print ts("Skipping triggers creation since --no-swap-tables was specified along with --drop-new-table\n"); } else { print ts("Adding original triggers to new table.\n"); foreach my $trigger_info (@$triggers_info) { next if ! ($trigger_info->{orig_triggers}); foreach my $orig_trigger (@{$trigger_info->{orig_triggers}}) { # if --no-swap-tables is used and --drop-new-table (default), then we don't do any trigger stuff my $new_trigger_sqls; eval { # if --no-swap-tables is used and --no-drop-new-table is used, then we need to duplicate the trigger my $duplicate_trigger = ( ! $o->get('swap-tables') && ! $o->get('drop-new-table') ) ? 1 : undef; $new_trigger_sqls = create_trigger_sql(trigger => $orig_trigger, db => $new_tbl->{db}, new_tbl => $new_tbl->{tbl}, orig_tbl => $orig_tbl->{tbl}, duplicate_trigger => $duplicate_trigger, ); }; if ($EVAL_ERROR) { _die("Cannot create triggers: $EVAL_ERROR", ERROR_CREATING_TRIGGERS); } next if !$o->get('execute'); PTDEBUG && _d('New triggers sqls'); for my $sql (@$new_trigger_sqls) { PTDEBUG && _d($sql); eval { $cxn->dbh()->do($sql); }; if ($EVAL_ERROR) { _die("Exiting due to errors while restoring triggers: $EVAL_ERROR", ERROR_RESTORING_TRIGGERS); } } } } } } my $old_tbl; if ( $o->get('swap-tables') ) { eval { $old_tbl = swap_tables( orig_tbl => $orig_tbl, new_tbl => $new_tbl, suffix => '_old', Cxn => $cxn, Quoter => $q, OptionParser => $o, Retry => $retry, tries => $tries, stats => \%stats, analyze_table => $analyze_table, ); }; if ( $EVAL_ERROR ) { # TODO: one of these values can be undefined _die(ts("Error swapping tables: $EVAL_ERROR\n" . "To clean up, first verify that the original table " . "$orig_tbl->{name} has not been modified or renamed, " . "then drop the new table $new_tbl->{name} if it exists."), ERROR_SWAPPING_TABLES); } } $orig_tbl->{swapped} = 1; # flag for cleanup tasks PTDEBUG && _d('Old table:', Dumper($old_tbl)); # --plugin hook if ( $plugin && $plugin->can('after_swap_tables') ) { $plugin->after_swap_tables( old_tbl => $old_tbl, ); } # ##################################################################### # Step 6: Update foreign key constraints if there are child tables. # ##################################################################### if ( $child_tables ) { # --plugin hook if ( $plugin && $plugin->can('before_update_foreign_keys') ) { $plugin->before_update_foreign_keys(); } eval { if ( $alter_fk_method eq 'none' ) { # This shouldn't happen, but in case it does we should know. warn "The tool detected child tables but " . 
"--alter-foreign-keys-method=none"; } elsif ( $alter_fk_method eq 'rebuild_constraints' ) { rebuild_constraints( orig_tbl => $orig_tbl, old_tbl => $old_tbl, child_tables => $child_tables, OptionParser => $o, Quoter => $q, Cxn => $cxn, TableParser => $tp, stats => \%stats, Retry => $retry, tries => $tries, ); } elsif ( $alter_fk_method eq 'drop_swap' ) { drop_swap( orig_tbl => $orig_tbl, new_tbl => $new_tbl, Cxn => $cxn, OptionParser => $o, stats => \%stats, Retry => $retry, tries => $tries, analyze_table => $analyze_table, ); } elsif ( !$alter_fk_method && $o->has('alter-foreign-keys-method') && ($o->get('alter-foreign-keys-method') || '') eq 'auto' ) { # If --alter-foreign-keys-method is 'auto' and we are on a dry run, # $alter_fk_method is left as an empty string. print "Not updating foreign key constraints because this is a dry run.\n"; } else { # This should "never" happen because we check this var earlier. _die("Invalid --alter-foreign-keys-method: $alter_fk_method", INVALID_ALTER_FK_METHOD); } }; if ( $EVAL_ERROR ) { # TODO: improve error message and handling. _die("Error updating foreign key constraints: $EVAL_ERROR", ERROR_UPDATING_FKS); } # --plugin hook if ( $plugin && $plugin->can('after_update_foreign_keys') ) { $plugin->after_update_foreign_keys(); } } # ######################################################################## # Step 7: Drop the old table. # ######################################################################## if ( $o->get('drop-old-table') ) { if ( $o->get('dry-run') ) { print "Not dropping old table because this is a dry run.\n"; } elsif ( !$old_tbl ) { print "Not dropping old table because --no-swap-tables was specified.\n"; } else { # --plugin hook if ( $plugin && $plugin->can('before_drop_old_table') ) { $plugin->before_drop_old_table(); } print ts("Dropping old table...\n"); if ( $alter_fk_method eq 'none' ) { # Child tables still reference the old table, but the user # has chosen to break fks, so we need to disable fk checks # in order to drop the old table. my $sql = "SET foreign_key_checks=0"; PTDEBUG && _d($sql); print $sql, "\n" if $o->get('print'); $cxn->dbh()->do($sql); } my $sql = "DROP TABLE IF EXISTS $old_tbl->{name}"; print $sql, "\n" if $o->get('print'); PTDEBUG && _d($sql); eval { $cxn->dbh()->do($sql); }; if ( $EVAL_ERROR ) { _die(ts("Error dropping the old table: $EVAL_ERROR\n"), ERROR_DROPPING_OLD_TABLE); } print ts("Dropped old table $old_tbl->{name} OK.\n"); # --plugin hook if ( $plugin && $plugin->can('after_drop_old_table') ) { $plugin->after_drop_old_table(); } } } elsif ( !$drop_triggers ) { print "Not dropping old table because --no-drop-triggers was specified.\n"; } else { print "Not dropping old table because --no-drop-old-table was specified.\n"; } # ######################################################################## # Done. # ######################################################################## $orig_tbl->{success} = 1; # flag for cleanup tasks $cleanup = undef; # exec cleanup tasks # --plugin hook if ( $plugin && $plugin->can('before_exit') ) { $plugin->before_exit( exit_status => $exit_status, ); } return $exit_status; } # ############################################################################ # Subroutines. 
# ############################################################################ sub validate_tries { my ($o) = @_; my @ops = qw( create_triggers drop_triggers copy_rows swap_tables update_foreign_keys analyze_table ); my %user_tries; my $user_tries = $o->get('tries'); if ( $user_tries ) { foreach my $var_val ( @$user_tries ) { my ($op, $tries, $wait) = split(':', $var_val); _die("Invalid --tries value: $var_val\n", INVALID_PARAMETERS) unless $op && $tries && $wait; _die("Invalid --tries operation: $op\n", INVALID_PARAMETERS) unless grep { $op eq $_ } @ops; _die("Invalid --tries tries: $tries\n", INVALID_PARAMETERS) unless $tries > 0; _die("Invalid --tries wait: $wait\n", INVALID_PARAMETERS) unless $wait > 0; $user_tries{$op} = { tries => $tries, wait => $wait, }; } } my %default_tries; my $default_tries = $o->read_para_after(__FILE__, qr/MAGIC_tries/); if ( $default_tries ) { %default_tries = map { my $var_val = $_; my ($op, $tries, $wait) = $var_val =~ m/(\S+)/g; _die("Invalid --tries value: $var_val\n", INVALID_PARAMETERS) unless $op && $tries && $wait; _die("Invalid --tries operation: $op\n", INVALID_PARAMETERS) unless grep { $op eq $_ } @ops; _die("Invalid --tries tries: $tries\n", INVALID_PARAMETERS) unless $tries > 0; _die("Invalid --tries wait: $wait\n", INVALID_PARAMETERS) unless $wait > 0; $op => { tries => $tries, wait => $wait, }; } grep { m/^\s+\w+\s+\d+\s+[\d\.]+/ } split("\n", $default_tries); } my %tries = ( %default_tries, # first the tool's defaults %user_tries, # then the user's which overwrite the defaults ); PTDEBUG && _d('--tries:', Dumper(\%tries)); return \%tries; } sub check_alter { my (%args) = @_; my @required_args = qw(alter tbl dry_run Cxn TableParser OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless exists $args{$arg}; } my ($alter, $tbl, $dry_run, $cxn, $tp, $o) = @args{@required_args}; my $ok = 1; $alter =~ s/^(.*?)\s+COMMENT\s+'(.*?[^\\]')+(.*)/$1$3/; $alter =~ s/^(.*?)\s+COMMENT\s+"(.*?[^\\]")+(.*)/$1$3/; my $unique_fields = get_unique_index_fields($alter); if (scalar @$unique_fields && $o->get('check-unique-key-change')) { my $msg = "You are trying to add an unique key. This can result in data loss if the " . "data is not unique.\n" . "Please read the documentation for the --check-unique-key-change parameter.\n" . "You can check if the column(s) contain duplicate content " . "by running this/these query/queries:\n\n"; foreach my $fields (@$unique_fields) { my $sql = "SELECT IF(COUNT(DISTINCT " . join(", ", @$fields) . ") = COUNT(*),\n" . " 'Yes, the desired unique index currently contains only unique values', \n" . " 'No, the desired unique index contains duplicated values. There will be data loss'\n" . ") AS IsThereUniqueness FROM `$tbl->{db}`.`$tbl->{tbl}`;\n\n"; $msg .= $sql; } $msg .= "Keep in mind that these queries could take a long time and consume a lot of resources\n\n"; _die($msg, INVALID_PARAMETERS); } if ( ($tbl->{tbl_struct}->{engine} || '') =~ m/RocksDB/i ) { if ($alter =~ m/FOREIGN KEY/i) { my $msg = "FOREIGN KEYS are not supported by the RocksDB engine\n\n"; _die($msg, UNSUPORTED_OPERATION); } } if ( $alter =~ m/Engine\s*=\s*["']?RocksDB["']?/i ) { my $row = $cxn->dbh()->selectrow_arrayref('SELECT @@binlog_format'); if (scalar $row > 0 && $row->[0] eq 'STATEMENT') { _die("Cannot change engine to RocksDB while binlog_format is other than 'ROW'", UNSUPORTED_OPERATION); } } # ######################################################################## # Check for DROP PRIMARY KEY. 
# ######################################################################## if ( $alter =~ m/DROP\s+PRIMARY\s+KEY/i ) { my $msg = "--alter contains 'DROP PRIMARY KEY'. Dropping and " . "altering the primary key can be dangerous, " . "especially if the original table does not have other " . "unique indexes.\n"; if ( $dry_run ) { print $msg; } else { $ok = 0; warn $msg . "The tool should handle this correctly, but you should " . "test it first and carefully examine the triggers which " . "rely on the PRIMARY KEY or a unique index. Specify " . "--no-check-alter to disable this check and perform the " . "--alter.\n"; } } # ######################################################################## # Check for renamed columns. # https://bugs.launchpad.net/percona-toolkit/+bug/1068562 # ######################################################################## my $renamed_cols = $args{renamed_cols}; if ( %$renamed_cols ) { # sort is just for making output consistent for testing my $msg = "--alter appears to rename these columns:\n" . join("\n", map { " $_ to $renamed_cols->{$_}" } sort keys %$renamed_cols) . "\n"; if ( $dry_run ) { print $msg; } else { $ok = 0; warn $msg . "The tool should handle this correctly, but you should " . "test it first because if it fails the renamed columns' " . "data will be lost! Specify --no-check-alter to disable " . "this check and perform the --alter.\n"; } } # ######################################################################## # If it's a cluster node, check for MyISAM which does not work. # ######################################################################## my $cluster = Percona::XtraDB::Cluster->new; if ( $cluster->is_cluster_node($cxn) ) { if ( ($tbl->{tbl_struct}->{engine} || '') =~ m/MyISAM/i ) { $ok = 0; warn $cxn->name . " is a cluster node and the table is MyISAM, " . "but MyISAM tables " . "do not work with clusters and this tool. To alter the " . "table, you must manually convert it to InnoDB first.\n"; } elsif ( $alter =~ m/ENGINE=MyISAM/i ) { $ok = 0; warn $cxn->name . " is a cluster node and the table is being " . "converted to MyISAM (ENGINE=MyISAM), but MyISAM tables " . "do not work with clusters and this tool. To alter the " . "table, you must manually convert it to InnoDB first.\n"; } } if ( !$ok ) { # check_alter.t relies on this output. _die("--check-alter failed.\n", UNSUPORTED_OPERATION); } return; } # This function tries to detect if the --alter param is adding unique indexes. # It returns an array of arrays, having a list of fields for each unique index # found. # Example: # Input string: add i int comment "first comment ", ADD UNIQUE INDEX (C1) comment # 'second comment', CREATE UNIQUE INDEX C ON T1 (C2, c3) comment "third" # # Output: # $VAR1 = [ # [ 'C1' ], # [ 'C2', 'c3' ] # ]; # # Thse fields are used to build an example SELECT to detect if currently there are # rows that will produce duplicates when the new UNIQUE INDEX is created. 
sub get_unique_index_fields { my ($alter) = @_; my $remove_comments_re = qr/(.*?\s+)?comment ('.*?'|".*?")(.*)/i; $alter =~ s/\\"//g; # Remove \" just to make remove_comments_re easier my $clean; my $suffix = $alter; while ($alter =~ /$remove_comments_re/g) { $clean .= $1; $suffix = $3; } $clean .= $suffix; my $fields = []; my $fields_re = qr/\s(?:PRIMARY|UNIQUE)\s+(?:INDEX|KEY|)\s*(?:.*?)\s*\((.*?)\)/i; while($clean =~ /$fields_re/g) { push @$fields, [ split /\s*,\s*/, $1 ]; } return $fields; } sub find_renamed_cols { my (%args) = @_; my @required_args = qw(alter TableParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($alter, $tp) = @args{@required_args}; my $unquoted_ident = qr/ (?!\p{Digit}+[.\s]) # Not all digits [0-9a-zA-Z_\x{80}-\x{FFFF}\$]+ # As per the spec /x; my $quoted_ident = do { my $quoted_ident_character = qr/ [\x{01}-\x{5F}\x{61}-\x{FFFF}] # Any character but the null byte and ` /x; qr{ # The following alternation is there because something like (?<=.) # would match if this regex was used like /.$re/, # or even more tellingly, would match on "``" =~ /`$re`/ $quoted_ident_character+ # One or more characters (?:``$quoted_ident_character*)* # possibly followed by `` and # more characters, zero or more times |$quoted_ident_character* # OR, zero or more characters (?:``$quoted_ident_character* )+ # Followed by `` and maybe more # characters, one or more times. }x }; my $ansi_quotes_ident = qr/ [^"]+ (?: "" [^"]* )* | [^"]* (?: "" [^"]* )+ /x; my $table_ident = qr/$unquoted_ident|`$quoted_ident`|"$ansi_quotes_ident"/; # remove comments $alter =~ s/^(.*?)\s+COMMENT\s+'(.*?[^\\]')+(.*)/$1$3/; $alter =~ s/^(.*?)\s+COMMENT\s+"(.*?[^\\]")+(.*)/$1$3/; my $alter_change_col_re = qr/\bCHANGE \s+ (?:COLUMN \s+)? ($table_ident) \s+ ($table_ident)/ix; my %renames; while ( $alter =~ /$alter_change_col_re/g ) { my ($orig, $new) = map { $tp->ansi_to_legacy($_) } $1, $2; next unless $orig && $new; my (undef, $orig_tbl) = Quoter->split_unquote($orig); my (undef, $new_tbl) = Quoter->split_unquote($new); # Silly but plausible: CHANGE COLUMN same_name same_name ... next if lc($orig_tbl) eq lc($new_tbl); $renames{lc($orig_tbl)} = $new_tbl; } PTDEBUG && _d("Renamed columns (old => new): ", Dumper(\%renames)); return \%renames; } sub nibble_is_safe { my (%args) = @_; my @required_args = qw(Cxn tbl NibbleIterator OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($cxn, $tbl, $nibble_iter, $o)= @args{@required_args}; # EXPLAIN the checksum chunk query to get its row estimate and index. # XXX This call and others like it are relying on a Perl oddity. # See https://bugs.launchpad.net/percona-toolkit/+bug/987393 my $sth = $nibble_iter->statements(); my $boundary = $nibble_iter->boundaries(); my $expl = explain_statement( tbl => $tbl, sth => $sth->{explain_nibble}, vals => [ @{$boundary->{lower}}, @{$boundary->{upper}} ], ); # Ensure that MySQL is using the chunk index if the table is being chunked. # Skip if --nocheck-plan See: https://bugs.launchpad.net/percona-toolkit/+bug/1340728 if ( !$nibble_iter->one_nibble() && lc($expl->{key} || '') ne lc($nibble_iter->nibble_index() || '') && $o->get('check-plan') ) { die ts("Error copying rows at chunk " . $nibble_iter->nibble_number() . " of $tbl->{db}.$tbl->{tbl} because MySQL chose " . ($expl->{key} ? "the $expl->{key}" : "no") . " index " . " instead of the " . $nibble_iter->nibble_index() . 
"index.\n"); } # Ensure that the chunk isn't too large if there's a --chunk-size-limit. # If single-chunking the table, this has already been checked, so it # shouldn't have changed. If chunking the table with a non-unique key, # oversize chunks are possible. if ( my $limit = $o->get('chunk-size-limit') ) { my $oversize_chunk = $limit ? ($expl->{rows} || 0) >= $tbl->{chunk_size} * $limit : 0; if ( $oversize_chunk && $nibble_iter->identical_boundaries($boundary->{upper}, $boundary->{next_lower}) ) { die ts("Error copying rows at chunk " . $nibble_iter->nibble_number() . " of $tbl->{db}.$tbl->{tbl} because it is oversized. " . "The current chunk size limit is " . ($tbl->{chunk_size} * $limit) . " rows (chunk size=$tbl->{chunk_size}" . " * chunk size limit=$limit), but MySQL estimates " . "that there are " . ($expl->{rows} || 0) . " rows in the chunk.\n"); } } # Ensure that MySQL is still using the entire index. # https://bugs.launchpad.net/percona-toolkit/+bug/1010232 # Skip if --nocheck-plan See: https://bugs.launchpad.net/percona-toolkit/+bug/1340728 if ( !$nibble_iter->one_nibble() && $tbl->{key_len} && ($expl->{key_len} || 0) < $tbl->{key_len} && $o->get('check-plan') ) { die ts("Error copying rows at chunk " . $nibble_iter->nibble_number() . " of $tbl->{db}.$tbl->{tbl} because MySQL used " . "only " . ($expl->{key_len} || 0) . " bytes " . "of the " . ($expl->{key} || '?') . " index instead of " . $tbl->{key_len} . ". See the --[no]check-plan documentation " . "for more information.\n"); } return 1; # safe } sub create_new_table { my (%args) = @_; my @required_args = qw(new_table_name orig_tbl Cxn Quoter OptionParser TableParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($new_table_name, $orig_tbl, $cxn, $q, $o, $tp) = @args{@required_args}; my $new_table_prefix = $args{new_table_prefix}; # Get the original table struct. my $ddl = $tp->get_create_table( $cxn->dbh(), $orig_tbl->{db}, $orig_tbl->{tbl}, ); $new_table_name =~ s/%T/$orig_tbl->{tbl}/; print "Creating new table...\n"; my $tries = $new_table_prefix ? 10 : 1; my $tryno = 1; my @old_tables; while ( $tryno++ <= $tries ) { if ( $new_table_prefix ) { $new_table_name = $new_table_prefix . $new_table_name; } if ( length($new_table_name) > 64 ) { my $truncated_table_name = substr($new_table_name, 0, 64); PTDEBUG && _d($new_table_name, 'is over 64 characters long, ' . 'truncating to', $truncated_table_name); $new_table_name = $truncated_table_name; } # Generate SQL to create the new table. We do not use CREATE TABLE LIKE # because it doesn't preserve foreign key constraints. Here we need to # rename the FK constraints, too. This is because FK constraints are # internally stored as . and there cannot be # duplicates. If we don't rename the constraints, then InnoDB will throw # error 121 (duplicate key violation) when we try to execute the CREATE # TABLE. TODO: this code isn't perfect. If we rename a constraint from # foo to _foo and there is already a constraint with that name in this # or another table, we can still have a collision. But if there are # multiple FKs on this table, it's hard to know which one is causing the # trouble. Should we generate random/UUID FK names or something instead? 
my $quoted = $q->quote($orig_tbl->{db}, $new_table_name); my $sql = $ddl; $sql =~ s/\ACREATE TABLE .*?\($/CREATE TABLE $quoted (/m; # When the new temp table is created, we need to avoid collisions on constraint names # This is in contrast to previous behavior were we added underscores # indefinitely, sometimes exceeding the allowed name limit # https://bugs.launchpad.net/percona-toolkit/+bug/1215587 # So we do replacements when constraint names: # Has 2 _, we remove them # Has 1 _, we add one to make 2 # Has no _, we add one to make 1 # This gives on more salt where the FK names have been previously been altered # https://bugs.launchpad.net/percona-toolkit/+bug/1632522 my %search_dict = ( 'CONSTRAINT `__' => 'CONSTRAINT `', 'CONSTRAINT `_' => 'CONSTRAINT `__', 'CONSTRAINT `' => 'CONSTRAINT `_' ); my $constraint_pattern = qr((CONSTRAINT `__|CONSTRAINT `_|CONSTRAINT `)); $sql =~ s/$constraint_pattern/$search_dict{$1}/gm; # Limit constraint name to 64 characters $sql =~ s/CONSTRAINT `([^`]{1,64})[^`]*` (.*)/ CONSTRAINT `$1` $2/gm; if ( $o->get('default-engine') ) { $sql =~ s/\s+ENGINE=\S+//; } if ( $o->get('data-dir') && !$o->got('remove-data-dir') ) { if ( (-d $o->get('data-dir')) && (-w $o->get('data-dir')) ){ $sql = insert_data_directory($sql, $o->get('data-dir')); PTDEBUG && _d("adding data dir ".$o->get('data-dir')); PTDEBUG && _d("New query\n$sql\n"); } else { die $o->get('data-dir') . " is not a directory or it is not writable"; } } if ( $o->got('remove-data-dir') ) { $sql =~ s/DATA DIRECTORY\s*=\s*'.*?'//; PTDEBUG && _d("removing data dir"); } PTDEBUG && _d($sql); eval { $cxn->dbh()->do($sql); }; if ( $EVAL_ERROR ) { # Ignore this error because if multiple instances of the tool # are running, or previous runs failed and weren't cleaned up, # then there will be other similarly named tables with fewer # leading prefix chars. Or, in rarer cases, the db just happens # to have a similarly named table created by the user for other # purposes. if ( $EVAL_ERROR =~ m/table.+?already exists/i ) { push @old_tables, $q->quote($orig_tbl->{db}, $new_table_name); next; } # Some other error happened. Let the caller catch it. die $EVAL_ERROR; } print $sql, "\n" if $o->get('print'); # the sql that work print "Created new table $orig_tbl->{db}.$new_table_name OK.\n"; return { # success db => $orig_tbl->{db}, tbl => $new_table_name, name => $q->quote($orig_tbl->{db}, $new_table_name), }; } die "Failed to find a unique new table name after $tries attemps. " . "The following tables exist which may be left over from previous " . "failed runs of the tool:\n" . join("\n", map { " $_" } @old_tables) . "\nExamine these tables and drop some or all of them if they are " . "no longer need, then re-run the tool.\n"; } sub insert_data_directory { my ($sql, $data_dir) = @_; $sql =~ s/DATA DIRECTORY\s*=\s*'.*?'//; my $re_ps=qr/(\/\*!50100 )?(PARTITION|SUBPARTITION)/; if ($sql=~ m/$re_ps/) { my $insert_pos=$-[0]; $sql = substr($sql, 0, $insert_pos - 1). " DATA DIRECTORY = '$data_dir' " .substr($sql, $insert_pos); } else { $sql .= " DATA DIRECTORY = '$data_dir' "; } return $sql; } sub swap_tables { my (%args) = @_; my @required_args = qw(orig_tbl new_tbl Cxn Quoter OptionParser Retry tries stats); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($orig_tbl, $new_tbl, $cxn, $q, $o, $retry, $tries, $stats) = @args{@required_args}; my $prefix = '_'; my $table_name = $orig_tbl->{tbl} . 
($args{suffix} || ''); my $name_tries = 20; # don't try forever my $table_exists = qr/table.+?already exists/i; # This sub only works for --execute. Since the options are # mutually exclusive and we return in the if case, the elsif # is just a paranoid check because swapping the tables is one # of the most sensitive/dangerous operations. if ( $o->get('dry-run') ) { print "Not swapping tables because this is a dry run.\n"; # A return value really isn't needed, but this trick allows # rebuild_constraints() to parse and show the sql statements # it would used. Otherwise, this has no effect. return $orig_tbl; } elsif ( $o->get('execute') ) { # ANALYZE TABLE before renaming to update InnoDB optimizer statistics. # https://bugs.launchpad.net/percona-toolkit/+bug/1491261 if ( $args{analyze_table} ) { print ts("Analyzing new table...\n"); my $sql_analyze = "ANALYZE TABLE $new_tbl->{name} /* pt-online-schema-change */"; osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{analyze_table}, stats => $stats, code => sub { PTDEBUG && _d($sql_analyze); $cxn->dbh()->do($sql_analyze); }, ); } print ts("Swapping tables...\n"); while ( $name_tries-- ) { # https://bugs.launchpad.net/percona-toolkit/+bug/1526105 if ( $name_tries <= 10 ) { # we've already added 10 underscores? # time to try a small random string my @chars = ("A".."Z", "0".."9"); $prefix = ''; $prefix .= $chars[rand @chars] for 1..6; $prefix .= "_"; } $table_name = $prefix . $table_name; if ( length($table_name) > 64 ) { my $truncated_table_name = substr($table_name, 0, 64); PTDEBUG && _d($table_name, 'is > 64 chars, truncating to', $truncated_table_name); $table_name = $truncated_table_name; } my $sql = "RENAME TABLE $orig_tbl->{name} " . "TO " . $q->quote($orig_tbl->{db}, $table_name) . ", $new_tbl->{name} TO $orig_tbl->{name}"; eval { osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{swap_tables}, stats => $stats, code => sub { PTDEBUG && _d($sql); $cxn->dbh()->do($sql); }, ignore_errors => [ # Ignore this error because if multiple instances of the tool # are running, or previous runs failed and weren't cleaned up, # then there will be other similarly named tables with fewer # leading prefix chars. Or, in rare cases, the db happens # to have a similarly named table created by the user for # other purposes. $table_exists, ], operation => "swap_tables", ); }; if ( my $e = $EVAL_ERROR ) { if ( $e =~ $table_exists ) { PTDEBUG && _d($e); next; } die ts($e); # Don't replace this by _die } print $sql, "\n" if $o->get('print'); print ts("Swapped original and new tables OK.\n"); return { # success db => $orig_tbl->{db}, tbl => $table_name, name => $q->quote($orig_tbl->{db}, $table_name), }; } # This shouldn't happen. die ts("Failed to find a unique old table name after " . "serveral attempts.\n"); } } sub check_orig_table { my ( %args ) = @_; my @required_args = qw(orig_tbl Cxn TableParser OptionParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($orig_tbl, $cxn, $tp, $o, $q) = @args{@required_args}; my $dbh = $cxn->dbh(); # The original table must exist, of course. if (!$tp->check_table(dbh=>$dbh,db=>$orig_tbl->{db},tbl=>$orig_tbl->{tbl})) { die "The original table $orig_tbl->{name} does not exist.\n"; } my ( $version ) = $dbh->selectrow_array("SELECT VERSION()"); # There cannot be any triggers on the original table. my $sql = 'SHOW TRIGGERS FROM ' . $q->quote($orig_tbl->{db}) . ' LIKE ' . 
$q->literal_like($orig_tbl->{tbl}); PTDEBUG && _d($sql); my $triggers = $dbh->selectall_arrayref($sql); if ( $triggers && @$triggers ) { if ( VersionCompare::cmp($version, '5.7.0') < 0 && VersionCompare::cmp($version, '10.0.0') <= 0) { die "The table $orig_tbl->{name} has triggers. This tool " . "needs to create its own triggers, so the table cannot " . "already have triggers.\n"; } elsif ( ( VersionCompare::cmp($version, '5.7.0') >= 0 || VersionCompare::cmp($version, '10.0.0') >0 ) && !$o->get('preserve-triggers') ) { die "The table $orig_tbl->{name} has triggers but --preserve-triggers was not specified.\n" . "Please read the documentation for --preserve-triggers.\n"; } } # Get the table struct. NibbleIterator needs this, and so do we. my $ddl = $tp->get_create_table( $cxn->dbh(), $orig_tbl->{db}, $orig_tbl->{tbl}, ); $orig_tbl->{tbl_struct} = $tp->parse($ddl); # Must be able to nibble the original table (to copy rows to the new table). eval { NibbleIterator::can_nibble( Cxn => $cxn, tbl => $orig_tbl, chunk_size => $o->get('chunk-size'), chunk_indx => $o->get('chunk-index'), OptionParser => $o, TableParser => $tp, ); }; if ( $EVAL_ERROR ) { die "Cannot chunk the original table $orig_tbl->{name}: $EVAL_ERROR\n"; } return; # success } sub find_child_tables { my ( %args ) = @_; my @required_args = qw(tbl Cxn Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($tbl, $cxn, $q) = @args{@required_args}; if ( lc($tbl->{tbl_struct}->{engine} || '') eq 'myisam' ) { PTDEBUG && _d(q{MyISAM table, not looking for child tables}); return; } PTDEBUG && _d('Finding child tables'); my $sql = "SELECT table_schema, table_name " . "FROM information_schema.key_column_usage " . "WHERE referenced_table_schema='$tbl->{db}' " . "AND referenced_table_name='$tbl->{tbl}'"; if ($args{only_same_schema_fks}) { $sql .= " AND table_schema='$tbl->{db}'"; } PTDEBUG && _d($sql); my $rows = $cxn->dbh()->selectall_arrayref($sql); if ( !$rows || !@$rows ) { PTDEBUG && _d('No child tables found'); return; } my @child_tables; foreach my $row ( @$rows ) { my $tbl = { db => $row->[0], tbl => $row->[1], name => $q->quote(@$row), }; # Get row estimates for each child table so we can give the user # some input on choosing an --alter-foreign-keys-method if they # don't use "auto". my ($n_rows) = NibbleIterator::get_row_estimate( Cxn => $cxn, tbl => $tbl, ); $tbl->{row_est} = $n_rows; push @child_tables, $tbl; } PTDEBUG && _d('Child tables:', Dumper(\@child_tables)); return \@child_tables; } sub determine_alter_fk_method { my ( %args ) = @_; my @required_args = qw(child_tables max_rows Cxn OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($child_tables, $max_rows, $cxn, $o) = @args{@required_args}; if ( $o->get('dry-run') ) { print "Not determining the method to update foreign keys " . "because this is a dry run.\n"; return ''; # $alter_fk_method can't be undef } # The rebuild_constraints method is the default becuase it's safer # and doesn't cause the orig table to go missing for a moment. my $method = 'rebuild_constraints'; print ts("Max rows for the rebuild_constraints method: $max_rows\n" . 
"Determining the method to update foreign keys...\n"); foreach my $child_tbl ( @$child_tables ) { print ts(" $child_tbl->{name}: "); my ($n_rows) = NibbleIterator::get_row_estimate( Cxn => $cxn, tbl => $child_tbl, ); if ( $n_rows > $max_rows ) { print "too many rows: $n_rows; must use drop_swap\n"; $method = 'drop_swap'; last; } else { print "$n_rows rows; can use rebuild_constraints\n"; } } return $method || ''; # $alter_fk_method can't be undef } sub rebuild_constraints { my ( %args ) = @_; my @required_args = qw(orig_tbl old_tbl child_tables stats Cxn Quoter OptionParser TableParser Retry tries); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($orig_tbl, $old_tbl, $child_tables, $stats, $cxn, $q, $o, $tp, $retry, $tries) = @args{@required_args}; # MySQL has a "feature" where if the parent tbl is in the same db, # then the child tbl ref is simply `parent_tbl`, but if the parent tbl # is in another db, then the child tbl ref is `other_db`.`parent_tbl`. # When we recreate the ref below, we use the db-qualified form, and # MySQL will automatically trim the db if the tables are in the same db. my $quoted_old_table = $q->quote($old_tbl->{tbl}); my $constraint = qr/ ^\s+ ( CONSTRAINT.+? REFERENCES\s(?:$quoted_old_table|$old_tbl->{name}) .+ )$ /xm; PTDEBUG && _d('Rebuilding fk constraint matching', $constraint); if ( $o->get('dry-run') ) { print "Not rebuilding foreign key constraints because this is a dry run.\n"; } else { print ts("Rebuilding foreign key constraints...\n"); } CHILD_TABLE: foreach my $child_tbl ( @$child_tables ) { my $table_def = $tp->get_create_table( $cxn->dbh(), $child_tbl->{db}, $child_tbl->{tbl}, ); my @constraints = $table_def =~ m/$constraint/g; if ( !@constraints ) { warn ts("$child_tbl->{name} has no foreign key " . "constraints referencing $old_tbl->{name}.\n"); next CHILD_TABLE; } my @rebuilt_constraints; foreach my $constraint ( @constraints ) { PTDEBUG && _d('Rebuilding fk constraint:', $constraint); # Remove trailing commas in case there are multiple constraints on the # table. $constraint =~ s/,$//; # Find the constraint name. It will be quoted already. my ($fk) = $constraint =~ m/CONSTRAINT\s+`([^`]+)`/; # Drop the reference to the old table/renamed orig table, and add a new # reference to the new table. InnoDB will throw an error if the new # constraint has the same name as the old one, so we must rename it. # Example: after renaming sakila.actor to sakila.actor_old (for # example), the foreign key on film_actor looks like this: # CONSTRAINT `fk_film_actor_actor` FOREIGN KEY (`actor_id`) REFERENCES # `actor_old` (`actor_id`) ON UPDATE CASCADE # We need it to look like this instead: # CONSTRAINT `_fk_film_actor_actor` FOREIGN KEY (`actor_id`) REFERENCES # `actor` (`actor_id`) ON UPDATE CASCADE # Reference the correct table name... 
$constraint =~ s/REFERENCES[^\(]+/REFERENCES $orig_tbl->{name} /; # And rename the constraint to avoid conflict # If it has a leading underscore, we remove one, otherwise we add one # This is in contrast to previous behavior were we added underscores # indefinitely, sometimes exceeding the allowed name limit # https://bugs.launchpad.net/percona-toolkit/+bug/1215587 # Add one more salt to renaming FK constraint names # This will add 2 _ to a self referencing FK thus avoiding a duplicate key constraint # https://bugs.launchpad.net/percona-toolkit/+bug/1632522 my $new_fk; if ($fk =~ /^__/) { ($new_fk = $fk) =~ s/^__//; } else { $new_fk = '_'.$fk; } PTDEBUG && _d("Old FK name: $fk New FK name: $new_fk"); $constraint =~ s/CONSTRAINT `$fk`/CONSTRAINT `$new_fk`/; my $sql = "DROP FOREIGN KEY `$fk`, " . "ADD $constraint"; push @rebuilt_constraints, $sql; } my $sql = "ALTER TABLE $child_tbl->{name} " . join(', ', @rebuilt_constraints); print $sql, "\n" if $o->get('print'); if ( $o->get('execute') ) { osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{update_foreign_keys}, stats => $stats, code => sub { PTDEBUG && _d($sql); $cxn->dbh()->do($sql); $stats->{rebuilt_constraint}++; }, ); } } if ( $o->get('execute') ) { print ts("Rebuilt foreign key constraints OK.\n"); } return; } sub drop_swap { my ( %args ) = @_; my @required_args = qw(orig_tbl new_tbl Cxn OptionParser stats Retry tries); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($orig_tbl, $new_tbl, $cxn, $o, $stats, $retry, $tries) = @args{@required_args}; if ( $o->get('dry-run') ) { print "Not drop-swapping tables because this is a dry run.\n"; } else { print ts("Drop-swapping tables...\n"); } # ANALYZE TABLE before renaming to update InnoDB optimizer statistics. # https://bugs.launchpad.net/percona-toolkit/+bug/1491261 if ( $args{analyze_table} ) { print ts("Analyzing new table...\n"); my $sql_analyze = "ANALYZE TABLE $new_tbl->{name} /* pt-online-schema-change */"; osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{analyze_table}, stats => $stats, code => sub { PTDEBUG && _d($sql_analyze); $cxn->dbh()->do($sql_analyze); }, ); } my @sqls = ( "SET foreign_key_checks=0", "DROP TABLE IF EXISTS $orig_tbl->{name}", "RENAME TABLE $new_tbl->{name} TO $orig_tbl->{name}", ); # we don't want to be interrupted during the swap! # since it might leave original table dropped # https://bugs.launchpad.net/percona-toolkit/+bug/1368244 $dont_interrupt_now = 1; foreach my $sql ( @sqls ) { PTDEBUG && _d($sql); print $sql, "\n" if $o->get('print'); if ( $o->get('execute') ) { osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{update_foreign_keys}, stats => $stats, code => sub { PTDEBUG && _d($sql); $cxn->dbh()->do($sql); }, ); } } $dont_interrupt_now = 0; if ( $o->get('execute') ) { print ts("Dropped and swapped tables OK.\n"); } return; } sub create_triggers { my ( %args ) = @_; my @required_args = qw(orig_tbl new_tbl del_tbl columns Cxn Quoter OptionParser Retry tries stats); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($orig_tbl, $new_tbl, $del_tbl, $cols, $cxn, $q, $o, $retry, $tries, $stats) = @args{@required_args}; # This sub works for --dry-run and --execute. With --dry-run it's # only interesting if --print is specified, too; then the user can # see the create triggers statements for --execute. 
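# Rough illustration (hypothetical names, assuming table `db`.`t` is being
# copied into `db`.`_t_new`) of the three AFTER triggers assembled below:
#   `pt_osc_db_t_del` AFTER DELETE: DELETE IGNORE FROM `_t_new` WHERE <unique key cols> <=> OLD.<cols>
#   `pt_osc_db_t_ins` AFTER INSERT: REPLACE INTO `_t_new` (<cols>) VALUES (NEW.<cols>)
#   `pt_osc_db_t_upd` AFTER UPDATE: DELETE IGNORE the OLD-key row if the key changed, then REPLACE the NEW row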
if ( $o->get('dry-run') ) { print "Not creating triggers because this is a dry run.\n"; } else { print ts("Creating triggers...\n"); } # Create a unique trigger name prefix based on the orig table name # so multiple instances of the tool can run on different tables. my $prefix = 'pt_osc_' . $orig_tbl->{db} . '_' . $orig_tbl->{tbl}; $prefix =~ s/\W/_/g; if ( length($prefix) > 60 ) { my $truncated_prefix = substr($prefix, 0, 60); PTDEBUG && _d('Trigger prefix', $prefix, 'is over 60 characters long,', 'truncating to', $truncated_prefix); $prefix = $truncated_prefix; } # To be safe, the delete trigger must specify all the columns of the # primary key/unique index. We use null-safe equals, because unique # unique indexes can be nullable. Cols are from the new table and # they may have been renamed my %old_col_for = map { $_->{new} => $_->{old} } @$cols; my $tbl_struct = $del_tbl->{tbl_struct}; # --------------------------------------------------------------------------------------- my $del_index = $del_tbl->{del_index}; my $del_index_cols = join(" AND ", map { my $new_col = $_; my $old_col = $old_col_for{$new_col} || $new_col; my $new_qcol = $q->quote($new_col); my $old_qcol = $q->quote($old_col); "$new_tbl->{name}.$new_qcol <=> OLD.$old_qcol" } @{$tbl_struct->{keys}->{$del_index}->{cols}} ); my $delete_trigger = "CREATE TRIGGER `${prefix}_del` AFTER DELETE ON $orig_tbl->{name} " . "FOR EACH ROW " . "DELETE IGNORE FROM $new_tbl->{name} " . "WHERE $del_index_cols"; # --------------------------------------------------------------------------------------- my $qcols = join(', ', map { $q->quote($_->{new}) } @$cols); my $new_vals = join(', ', map { "NEW.".$q->quote($_->{old}) } @$cols); my $insert_trigger = "CREATE TRIGGER `${prefix}_ins` AFTER INSERT ON $orig_tbl->{name} " . "FOR EACH ROW " . "REPLACE INTO $new_tbl->{name} ($qcols) VALUES ($new_vals)"; # --------------------------------------------------------------------------------------- my $upd_index_cols = join(" AND ", map { my $new_col = $_; my $old_col = $old_col_for{$new_col} || $new_col; my $new_qcol = $q->quote($new_col); my $old_qcol = $q->quote($old_col); "OLD.$old_qcol <=> NEW.$new_qcol" } @{$tbl_struct->{keys}->{$del_index}->{cols}} ); # --------------------------------------------------------------------------------------- my $update_trigger = "CREATE TRIGGER `${prefix}_upd` AFTER UPDATE ON $orig_tbl->{name} " . "FOR EACH ROW " . "BEGIN " . "DELETE IGNORE FROM $new_tbl->{name} WHERE !($upd_index_cols) AND $del_index_cols;" . "REPLACE INTO $new_tbl->{name} ($qcols) VALUES ($new_vals);" . 
"END "; $triggers_info = [ { suffix => 'del', event => 'DELETE', time => 'AFTER', orig_triggers => [], new_trigger_sql => $delete_trigger, new_trigger_name => "${prefix}_del", }, { suffix => 'upd', event => 'UPDATE', time => 'AFTER', orig_triggers => [], new_trigger_sql => $update_trigger, new_trigger_name => "${prefix}_upd", }, { suffix => 'ins', event => 'INSERT', time => 'AFTER', orig_triggers => [], new_trigger_sql => $insert_trigger, new_trigger_name => "${prefix}_ins", }, { suffix => 'delb', event => 'DELETE', time => 'BEFORE', orig_triggers => [], new_trigger_sql => '', new_trigger_name => '' }, { suffix => 'updb', event => 'UPDATE', time => 'BEFORE', orig_triggers => [], new_trigger_sql => '', new_trigger_name => '' }, { suffix => 'insb', event => 'INSERT', time => 'BEFORE', orig_triggers => [], new_trigger_sql => '', new_trigger_name => '' }, ]; $cxn->connect(); my $dbh = $cxn->dbh(); my $trigger_sql = "SELECT TRIGGER_SCHEMA, TRIGGER_NAME, DEFINER, ACTION_STATEMENT, SQL_MODE, " . " CHARACTER_SET_CLIENT, COLLATION_CONNECTION, EVENT_MANIPULATION, ACTION_TIMING " . " FROM INFORMATION_SCHEMA.TRIGGERS " . " WHERE EVENT_MANIPULATION = ? " . " AND ACTION_TIMING = ? " . " AND TRIGGER_SCHEMA = ? " . " AND EVENT_OBJECT_TABLE = ?"; foreach my $trigger_info (@$triggers_info) { $trigger_info->{orig_triggers} = $dbh->selectall_arrayref( $trigger_sql, { Slice => {} }, $trigger_info->{event}, $trigger_info->{time}, $orig_tbl->{db}, $orig_tbl->{tbl} ) || []; } # If --preserve-triggers was specified, try to create the original triggers into the new table. # We are doing this to ensure the original triggers will work in the new modified table # and we want to know this BEFORE copying all rows from the old table to the new one. if ($o->get('preserve-triggers')) { foreach my $trigger_info (@$triggers_info) { foreach my $orig_trigger (@{$trigger_info->{orig_triggers}}) { my $definer = $orig_trigger->{definer} || ''; $definer =~ s/@/`@`/; $definer = "`$definer`" ; my @chars = ("a".."z"); my $tmp_trigger_name; $tmp_trigger_name .= $chars[rand @chars] for 1..15; my $sql = "CREATE DEFINER=$definer " . "TRIGGER `$new_tbl->{db}`.`$tmp_trigger_name` " . "$orig_trigger->{action_timing} $orig_trigger->{event_manipulation} ON $new_tbl->{tbl}\n" . "FOR EACH ROW\n" . $orig_trigger->{action_statement}; eval { $dbh->do($sql); }; if ($EVAL_ERROR) { my $msg = "$EVAL_ERROR.\n" . "Check if all fields referenced by the trigger still exists " . "after the operation you are trying to apply"; die ($msg); } $dbh->do("DROP TRIGGER IF EXISTS `$tmp_trigger_name`"); } } } my @trigger_names; @drop_trigger_sqls = (); foreach my $trigger_info ( @$triggers_info ) { next if !$trigger_info->{new_trigger_sql}; if ( $o->get('execute') ) { osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{create_triggers}, stats => $stats, code => sub { PTDEBUG && _d($trigger_info->{new_trigger_sql}); $cxn->dbh()->do($trigger_info->{new_trigger_sql}); }, ); } # Only save the trigger once it has been created # (or faked to be created) so if the 2nd trigger # fails to create, we know to only drop the 1st. push @trigger_names, $trigger_info->{new_trigger_name}; push @drop_trigger_sqls, "DROP TRIGGER IF EXISTS " . 
$q->quote($orig_tbl->{db}, $trigger_info->{new_trigger_name}); } if ( $o->get('execute') ) { print ts("Created triggers OK.\n"); } return @trigger_names; } sub random_suffix { my @chars = ("a".."z"); my $suffix; $suffix .= $chars[rand @chars] for 1..15; return "_$suffix"; } # Create the sql staments for the new trigger # Required args: # trigger : Hash with trigger definition # db : Database handle # new_table : New table name # # Optional args: # orig_table.......: Original table name. Used to LOCK the table. # In case we are creating a new temporary trigger for testing # purposes or if --no-swap-tables is enabled, this param should # be omitted since we are creating a completelly new trigger so, # since in this case we are not going to DROP the old trigger, # there is no need for a LOCK # # duplicate_trigger: If set, it will create the trigger on the new table # with a random string as a trigger name suffix. # It will also not drop the original trigger. # This is usefull when creating a temporary trigger for testing # purposes or if --no-swap-tables AND --no-drop-new-table was # specified along with --preserve-triggers. In this case, # since the original table and triggers are not going to be # deleted we need a new random name because trigger names # cannot be duplicated sub create_trigger_sql { my (%args) = @_; my @required_args = qw(trigger db new_tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $trigger = $args{trigger}; my $suffix = $args{duplicate_trigger} ? random_suffix() : ''; if (length("$trigger->{trigger_name}$suffix") > 64) { die "New trigger name $trigger->{trigger_name}$suffix is too long"; } my $definer = $args{trigger}->{definer} | ''; $definer =~ s/@/`@`/; $definer = "`$definer`" ; my $sqls = []; push @$sqls, "LOCK TABLES `$args{db}`.`$args{new_tbl}` WRITE, `$args{db}`. `$args{orig_tbl}` WRITE;"; push @$sqls, '/*!50003 SET @saved_sql_mode = @@sql_mode */'; push @$sqls, '/*!50003 SET @saved_cs_client = @@character_set_client */ ;'; push @$sqls, '/*!50003 SET @saved_cs_results = @@character_set_results */ ;'; push @$sqls, '/*!50003 SET @saved_col_connection = @@collation_connection */ ;'; push @$sqls, "/*!50003 SET character_set_client = $trigger->{character_set_client} */ ;"; push @$sqls, "/*!50003 SET collation_connection = $trigger->{collation_connection} */ ;"; push @$sqls, "SET SESSION sql_mode = '$trigger->{sql_mode}'"; push @$sqls, "DROP TRIGGER IF EXISTS `$args{db}`.`$trigger->{trigger_name}` " if ! $args{duplicate_trigger}; push @$sqls, "CREATE DEFINER=$definer " . "TRIGGER `$args{db}`.`$trigger->{trigger_name}$suffix` " . "$trigger->{action_timing} $trigger->{event_manipulation} ON $args{new_tbl}\n" . "FOR EACH ROW\n" . $trigger->{action_statement}; push @$sqls, '/*!50003 SET sql_mode = @saved_sql_mode */ ;'; push @$sqls, '/*!50003 SET character_set_client = @saved_cs_client */ ;'; push @$sqls, '/*!50003 SET character_set_results = @saved_cs_results */'; push @$sqls, '/*!50003 SET collation_connection = @saved_col_connection */ ;'; push @$sqls, 'UNLOCK TABLES'; return $sqls; } sub drop_triggers { my ( %args ) = @_; my @required_args = qw(tbl Cxn Quoter OptionParser Retry tries stats); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($tbl, $cxn, $q, $o, $retry, $tries, $stats) = @args{@required_args}; # This sub works for --dry-run and --execute, although --dry-run is # only interesting with --print so the user can see the drop trigger # statements for --execute. 
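# For example (hypothetical names), @drop_trigger_sqls as collected by
# create_triggers() holds statements of the form:
#   DROP TRIGGER IF EXISTS `db`.`pt_osc_db_t_del`
#   DROP TRIGGER IF EXISTS `db`.`pt_osc_db_t_upd`
#   DROP TRIGGER IF EXISTS `db`.`pt_osc_db_t_ins`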
if ( $o->get('dry-run') ) { print "Not dropping triggers because this is a dry run.\n"; } else { print ts("Dropping triggers...\n"); } foreach my $sql ( @drop_trigger_sqls ) { print $sql, "\n" if $o->get('print'); if ( $o->get('execute') ) { eval { osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{drop_triggers}, stats => $stats, code => sub { PTDEBUG && _d($sql); $cxn->dbh()->do($sql); }, ); }; if ( $EVAL_ERROR ) { warn ts("Error dropping trigger: $EVAL_ERROR\n"); push @triggers_not_dropped, $sql; $exit_status = 1; } } } if ( $o->get('execute') ) { if ( !@triggers_not_dropped ) { print ts("Dropped triggers OK.\n"); } else { warn ts("To try dropping the triggers again, execute:\n" . join("\n", @triggers_not_dropped) . "\n"); } } return; } sub error_event { my ($error) = @_; return 'undefined_error' unless $error; my $event = $error =~ m/Lock wait timeout/ ? 'lock_wait_timeout' : $error =~ m/Deadlock found/ ? 'deadlock' : $error =~ m/execution was interrupted/ ? 'query_killed' : $error =~ m/server has gone away/ ? 'lost_connection' : $error =~ m/Lost connection/ ? 'connection_killed' : 'unknown_error'; return $event; } sub osc_retry { my (%args) = @_; my @required_args = qw(Cxn Retry tries code stats); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $cxn = $args{Cxn}; my $retry = $args{Retry}; my $tries = $args{tries}; my $code = $args{code}; my $stats = $args{stats}; my $ignore_errors = $args{ignore_errors}; return $retry->retry( tries => $tries->{tries}, wait => sub { sleep ($tries->{wait} || 0.25) }, try => $code, fail => sub { my (%args) = @_; my $error = $args{error}; PTDEBUG && _d('Retry fail:', $error); if ( $ignore_errors ) { if ($error =~ /table.+?already exists/i) { PTDEBUG && _d('Aborting retries because of table name conflict. Trying with different name'); } return 0 if grep { $error =~ $_ } @$ignore_errors; } # The query failed/caused an error. If the error is one of these, # then we can possibly retry. if ( $error =~ m/Lock wait timeout exceeded/ || $error =~ m/Deadlock found/ || $error =~ m/Query execution was interrupted/ || $error =~ m/WSREP detected deadlock\/conflict/ ) { # These errors/warnings can be retried, so don't print # a warning yet; do that in final_fail. $stats->{ error_event($error) }++; return 1; # try again } elsif ( $error =~ m/MySQL server has gone away/ || $error =~ m/Lost connection to MySQL server/ ) { # The 1st pattern means that MySQL itself died or was stopped. # The 2nd pattern means that our cxn was killed (KILL ). $stats->{ error_event($error) }++; $cxn->connect(); # connect or die trying return 1; # reconnected, try again } $stats->{retry_fail}++; # At this point, either the error/warning cannot be retried, # or we failed to reconnect. Don't retry; call final_fail. return 0; }, final_fail => sub { my (%args) = @_; my $error = $args{error}; # This die should be caught by the caller. Copying rows and # the tool will stop, which is probably good because by this # point the error or warning indicates that something is wrong. 
$stats->{ error_event($error) }++; die ts($error); } ); } sub exec_nibble { my (%args) = @_; my @required_args = qw(Cxn tbl stats tries Retry NibbleIterator Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($cxn, $tbl, $stats, $tries, $retry, $nibble_iter, $q) = @args{@required_args}; my $sth = $nibble_iter->statements(); my $boundary = $nibble_iter->boundaries(); my $lb_quoted = $q->serialize_list(@{$boundary->{lower}}); my $ub_quoted = $q->serialize_list(@{$boundary->{upper}}); my $chunk = $nibble_iter->nibble_number(); my $chunk_index = $nibble_iter->nibble_index(); # Warn once per-table for these error codes if the error message # matches the pattern. my %warn_code = ( # Error: 1265 SQLSTATE: 01000 (WARN_DATA_TRUNCATED) # Message: Data truncated for column '%s' at row %ld 1265 => { # any pattern # use MySQL's message for this warning }, ); return osc_retry( Cxn => $cxn, Retry => $retry, tries => $tries->{copy_rows}, stats => $stats, code => sub { # ################################################################### # Start timing the query. # ################################################################### my $t_start = time; # Execute the INSERT..SELECT query. PTDEBUG && _d($sth->{nibble}->{Statement}, 'lower boundary:', @{$boundary->{lower}}, 'upper boundary:', @{$boundary->{upper}}); $sth->{nibble}->execute( # WHERE @{$boundary->{lower}}, # upper boundary values @{$boundary->{upper}}, # lower boundary values ); my $t_end = time; $stats->{INSERT}++; # ################################################################### # End timing the query. # ################################################################### # How many rows were inserted this time. Used for auto chunk sizing. $tbl->{row_cnt} = $sth->{nibble}->rows(); # Check if query caused any warnings. my $sql_warn = 'SHOW WARNINGS'; PTDEBUG && _d($sql_warn); my $warnings = $cxn->dbh->selectall_arrayref($sql_warn, {Slice => {}}); foreach my $warning ( @$warnings ) { my $code = ($warning->{code} || 0); my $message = $warning->{message}; if ( $ignore_code{$code} ) { $stats->{"mysql_warning_$code"}++; PTDEBUG && _d('Ignoring warning:', $code, $message); next; } elsif ( $warn_code{$code} && (!$warn_code{$code}->{pattern} || $message =~ m/$warn_code{$code}->{pattern}/) ) { if ( !$stats->{"mysql_warning_$code"}++ ) { # warn once warn "Copying rows caused a MySQL error $code: " . ($warn_code{$code}->{message} ? $warn_code{$code}->{message} : $message) . "\nNo more warnings about this MySQL error will be " . "reported. If --statistics was specified, " . "mysql_warning_$code will list the total count of " . "this MySQL error.\n"; } } else { # This die will propagate to fail which will return 0 # and propagate it to final_fail which will die with # this error message. die "Copying rows caused a MySQL error $code:\n" . " Level: " . ($warning->{level} || '') . "\n" . " Code: " . ($warning->{code} || '') . "\n" . " Message: " . ($warning->{message} || '') . "\n" . " Query: " . $sth->{nibble}->{Statement} . "\n"; } } # Success: no warnings, no errors. Return nibble time. return $t_end - $t_start; }, ); } # Sub: explain_statement # EXPLAIN a statement. 
# # Required Arguments: # * tbl - Standard tbl hashref # * sth - Sth with EXLAIN # * vals - Values for sth, if any # # Returns: # Hashref with EXPLAIN plan sub explain_statement { my ( %args ) = @_; my @required_args = qw(tbl sth vals); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($tbl, $sth, $vals) = @args{@required_args}; my $expl; eval { PTDEBUG && _d($sth->{Statement}, 'params:', @$vals); $sth->execute(@$vals); $expl = $sth->fetchrow_hashref(); $sth->finish(); }; if ( $EVAL_ERROR ) { # This shouldn't happen. die "Error executing " . $sth->{Statement} . ": $EVAL_ERROR\n"; } PTDEBUG && _d('EXPLAIN plan:', Dumper($expl)); return $expl; } sub ts { my ($msg) = @_; my $ts = $ENV{PTTEST_FAKE_TS} ? 'TS' : Transformers::ts(int(time)); return $msg ? "$ts $msg" : $ts; } # find point in trigger we can insert pt-osc code for --preserve-triggers sub trigger_ins_point { my ( %args ) = @_; my @required_args = qw(trigger); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($trigger) = @args{@required_args}; my $ins_point; if ($trigger =~ /begin(.*?)end(?!.*end)/igms) { $ins_point = $+[0] - 3; } else { $ins_point = 0;} return $ins_point; } # sub to add ; if line doesn't end in ; sub terminate_sql { my ( $text ) = @_; die "I need a text argument" unless defined $text; $text = trim($text); if(substr($text, -1) ne ';') { $text .= ';'; } return $text; } sub trim { my ( $text ) = @_; die "I need a text argument" unless defined $text; $text =~ s/^\s+|\s+$//g; return $text; } # Catches signals so we can exit gracefully. sub sig_int { my ( $signal ) = @_; if ( $dont_interrupt_now ) { # we're in the middle of something that shouldn't be interrupted PTDEBUG && _d("Received Signal: \"$signal\" in middle of critical operation. Continuing anyway."); return; } $oktorun = 0; # flag for cleanup tasks print STDERR "# Exiting on SIG$signal.\n"; # This is to restore terminal to "normal". lp #1396870 if ($term_readkey) { ReadMode(0); } exit 1; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-online-schema-change - ALTER tables without locking them. =head1 SYNOPSIS Usage: pt-online-schema-change [OPTIONS] DSN pt-online-schema-change alters a table's structure without blocking reads or writes. Specify the database and table in the DSN. Do not use this tool before reading its documentation and checking your backups carefully. Add a column to sakila.actor: pt-online-schema-change --alter "ADD COLUMN c1 INT" D=sakila,t=actor Change sakila.actor to InnoDB, effectively performing OPTIMIZE TABLE in a non-blocking fashion because it is already an InnoDB table: pt-online-schema-change --alter "ENGINE=InnoDB" D=sakila,t=actor =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. 
Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-online-schema-change emulates the way that MySQL alters tables internally, but it works on a copy of the table you wish to alter. This means that the original table is not locked, and clients may continue to read and change data in it. pt-online-schema-change works by creating an empty copy of the table to alter, modifying it as desired, and then copying rows from the original table into the new table. When the copy is complete, it moves away the original table and replaces it with the new one. By default, it also drops the original table. The data copy process is performed in small chunks of data, which are varied to attempt to make them execute in a specific amount of time (see L<"--chunk-time">). This process is very similar to how other tools, such as pt-table-checksum, work. Any modifications to data in the original tables during the copy will be reflected in the new table, because the tool creates triggers on the original table to update the corresponding rows in the new table. The use of triggers means that the tool will not work if any triggers are already defined on the table. When the tool finishes copying data into the new table, it uses an atomic C operation to simultaneously rename the original and new tables. After this is complete, the tool drops the original table. Foreign keys complicate the tool's operation and introduce additional risk. The technique of atomically renaming the original and new tables does not work when foreign keys refer to the table. The tool must update foreign keys to refer to the new table after the schema change is complete. The tool supports two methods for accomplishing this. You can read more about this in the documentation for L<"--alter-foreign-keys-method">. Foreign keys also cause some side effects. The final table will have the same foreign keys and indexes as the original table (unless you specify differently in your ALTER statement), but the names of the objects may be changed slightly to avoid object name collisions in MySQL and InnoDB. For safety, the tool does not modify the table unless you specify the L<"--execute"> option, which is not enabled by default. The tool supports a variety of other measures to prevent unwanted load or other problems, including automatically detecting replicas, connecting to them, and using the following safety checks: =over =item * In most cases the tool will refuse to operate unless a PRIMARY KEY or UNIQUE INDEX is present in the table. See L<"--alter"> for details. =item * The tool refuses to operate if it detects replication filters. See L<"--[no]check-replication-filters"> for details. =item * The tool pauses the data copy operation if it observes any replicas that are delayed in replication. See L<"--max-lag"> for details. =item * The tool pauses or aborts its operation if it detects too much load on the server. See L<"--max-load"> and L<"--critical-load"> for details. =item * The tool sets C and (for MySQL 5.5 and newer) C so that it is more likely to be the victim of any lock contention, and less likely to disrupt other transactions. These values can be changed by specifying L<"--set-vars">. =item * The tool refuses to alter the table if foreign key constraints reference it, unless you specify L<"--alter-foreign-keys-method">. 
=item * The tool cannot alter MyISAM tables on L<"Percona XtraDB Cluster"> nodes. =back =head1 Percona XtraDB Cluster pt-online-schema-change works with Percona XtraDB Cluster (PXC) 5.5.28-23.7 and newer, but there are two limitations: only InnoDB tables can be altered, and C must be set to C (total order isolation). The tool exits with an error if the host is a cluster node and the table is MyISAM or is being converted to MyISAM (C), or if C is not C. There is no way to disable these checks. =head1 MySQL 5.7+ Generated columns The tool ignores MySQL 5.7+ C columns since the value for those columns is generated according to the expression used to compute column values. =head1 OUTPUT The tool prints information about its activities to STDOUT so that you can see what it is doing. During the data copy phase, it prints L<"--progress"> reports to STDERR. You can get additional information by specifying L<"--print">. If L<"--statistics"> is specified, a report of various internal event counts is printed at the end, like: # Event Count # ====== ===== # INSERT 1 =head1 OPTIONS L<"--dry-run"> and L<"--execute"> are mutually exclusive. This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --alter type: string The schema modification, without the ALTER TABLE keywords. You can perform multiple modifications to the table by specifying them with commas. Please refer to the MySQL manual for the syntax of ALTER TABLE. The following limitations apply which, if attempted, will cause the tool to fail in unpredictable ways: =over =item * In almost all cases a PRIMARY KEY or UNIQUE INDEX needs to be present in the table. This is necessary because the tool creates a DELETE trigger to keep the new table updated while the process is running. A notable exception is when a PRIMARY KEY or UNIQUE INDEX is being created from B as part of the ALTER clause; in that case it will use these column(s) for the DELETE trigger. =item * The C clause cannot be used to rename the table. =item * Columns cannot be renamed by dropping and re-adding with the new name. The tool will not copy the original column's data to the new column. =item * If you add a column without a default value and make it NOT NULL, the tool will fail, as it will not try to guess a default value for you; you must specify the default. =item * C requires specifying C<_constraint_name> rather than the real C. Due to a limitation in MySQL, pt-online-schema-change adds a leading underscore to foreign key constraint names when creating the new table. For example, to drop this constraint: CONSTRAINT `fk_foo` FOREIGN KEY (`foo_id`) REFERENCES `bar` (`foo_id`) You must specify C<--alter "DROP FOREIGN KEY _fk_foo">. =item * The tool does not use C with MySQL 5.0 because it can cause a slave error which breaks replication: Query caused different errors on master and slave. Error on master: 'Deadlock found when trying to get lock; try restarting transaction' (1213), Error on slave: 'no error' (0). Default database: 'pt_osc'. Query: 'INSERT INTO pt_osc.t (id, c) VALUES ('730', 'new row')' The error happens when converting a MyISAM table to InnoDB because MyISAM is non-transactional but InnoDB is transactional. MySQL 5.1 and newer handle this case correctly, but testing reproduces the error 5% of the time with MySQL 5.0. This is a MySQL bug, similar to L, but there is no fix or workaround in MySQL 5.0.
Without C, tests pass 100% of the time, so the risk of data loss or breaking replication should be negligible. B =back =item --alter-foreign-keys-method type: string How to modify foreign keys so they reference the new table. Foreign keys that reference the table to be altered must be treated specially to ensure that they continue to reference the correct table. When the tool renames the original table to let the new one take its place, the foreign keys "follow" the renamed table, and must be changed to reference the new table instead. The tool supports two techniques to achieve this. It automatically finds "child tables" that reference the table to be altered. =over =item auto Automatically determine which method is best. The tool uses C if possible (see the description of that method for details), and if not, then it uses C. =item rebuild_constraints This method uses C to drop and re-add foreign key constraints that reference the new table. This is the preferred technique, unless one or more of the "child" tables is so large that the C would take too long. The tool determines that by comparing the number of rows in the child table to the rate at which the tool is able to copy rows from the old table to the new table. If the tool estimates that the child table can be altered in less time than the L<"--chunk-time">, then it will use this technique. For purposes of estimating the time required to alter the child table, the tool multiplies the row-copying rate by L<"--chunk-size-limit">, because MySQL's C is typically much faster than the external process of copying rows. Due to a limitation in MySQL, foreign keys will not have the same names after the ALTER that they did prior to it. The tool has to rename the foreign key when it redefines it, which adds a leading underscore to the name. In some cases, MySQL also automatically renames indexes required for the foreign key. =item drop_swap Disable foreign key checks (FOREIGN_KEY_CHECKS=0), then drop the original table before renaming the new table into its place. This is different from the normal method of swapping the old and new table, which uses an atomic C that is undetectable to client applications. This method is faster and does not block, but it is riskier for two reasons. First, for a short time between dropping the original table and renaming the temporary table, the table to be altered simply does not exist, and queries against it will result in an error. Secondly, if there is an error and the new table cannot be renamed into the place of the old one, then it is too late to abort, because the old table is gone permanently. This method forces C<--no-swap-tables> and C<--no-drop-old-table>. =item none This method is like C without the "swap". Any foreign keys that referenced the original table will now reference a nonexistent table. This will typically cause foreign key violations that are visible in C, similar to the following: Trying to add to index `idx_fk_staff_id` tuple: DATA TUPLE: 2 fields; 0: len 1; hex 05; asc ;; 1: len 4; hex 80000001; asc ;; But the parent table `sakila`.`staff_old` or its .ibd file does not currently exist! This is because the original table (in this case, sakila.staff) was renamed to sakila.staff_old and then dropped. This method of handling foreign key constraints is provided so that the database administrator can disable the tool's built-in functionality if desired. =back =item --[no]analyze-before-swap default: yes Execute ANALYZE TABLE on the new table before swapping with the old one. 
By default, this happens only when running MySQL 5.6 and newer, and C is enabled. Specify the option explicitly to enable or disable it regardless of MySQL version and C. This circumvents a potentially serious issue related to InnoDB optimizer statistics. If the table being alerted is busy and the tool completes quickly, the new table will not have optimizer statistics after being swapped. This can cause fast, index-using queries to do full table scans until optimizer statistics are updated (usually after 10 seconds). If the table is large and the server very busy, this can cause an outage. =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --[no]check-alter default: yes Parses the L<"--alter"> specified and tries to warn of possible unintended behavior. Currently, it checks for: =over =item Column renames In previous versions of the tool, renaming a column with C would lead to that column's data being lost. The tool now parses the alter statement and tries to catch these cases, so the renamed columns should have the same data as the originals. However, the code that does this is not a full-blown SQL parser, so you should first run the tool with L<"--dry-run"> and L<"--print"> and verify that it detects the renamed columns correctly. =item DROP PRIMARY KEY If L<"--alter"> contain C (case- and space-insensitive), a warning is printed and the tool exits unless L<"--dry-run"> is specified. Altering the primary key can be dangerous, but the tool can handle it. The tool's triggers, particularly the DELETE trigger, are most affected by altering the primary key because the tool prefers to use the primary key for its triggers. You should first run the tool with L<"--dry-run"> and L<"--print"> and verify that the triggers are correct. =back =item --check-interval type: time; default: 1 Sleep time between checks for L<"--max-lag">. =item --[no]check-plan default: yes Check query execution plans for safety. By default, this option causes the tool to run EXPLAIN before running queries that are meant to access a small amount of data, but which could access many rows if MySQL chooses a bad execution plan. These include the queries to determine chunk boundaries and the chunk queries themselves. If it appears that MySQL will use a bad query execution plan, the tool will skip the chunk of the table. The tool uses several heuristics to determine whether an execution plan is bad. The first is whether EXPLAIN reports that MySQL intends to use the desired index to access the rows. If MySQL chooses a different index, the tool considers the query unsafe. The tool also checks how much of the index MySQL reports that it will use for the query. The EXPLAIN output shows this in the key_len column. The tool remembers the largest key_len seen, and skips chunks where MySQL reports that it will use a smaller prefix of the index. This heuristic can be understood as skipping chunks that have a worse execution plan than other chunks. The tool prints a warning the first time a chunk is skipped due to a bad execution plan in each table. Subsequent chunks are skipped silently, although you can see the count of skipped chunks in the SKIPPED column in the tool's output. 
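As a rough sketch only (the table, index, and boundary values below are hypothetical, not taken from this documentation), the kind of check this option performs can be approximated by hand by running EXPLAIN on a chunk query captured with L<"--print">:

  mysql -e "EXPLAIN SELECT id FROM db.tbl FORCE INDEX (PRIMARY) WHERE id BETWEEN 1001 AND 2000"

If the reported key is not the chunk index, or the reported key_len is smaller than that of earlier chunks, the tool considers the chunk unsafe and skips it.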
This option adds some setup work to each table and chunk. Although the work is not intrusive for MySQL, it results in more round-trips to the server, which consumes time. Making chunks too small will cause the overhead to become relatively larger. It is therefore recommended that you not make chunks too small, because the tool may take a very long time to complete if you do. =item --[no]check-replication-filters default: yes Abort if any replication filter is set on any server. The tool looks for server options that filter replication, such as binlog_ignore_db and replicate_do_db. If it finds any such filters, it aborts with an error. If the replicas are configured with any filtering options, you should be careful not to modify any databases or tables that exist on the master and not the replicas, because it could cause replication to fail. For more information on replication rules, see L. =item --check-slave-lag type: string Pause the data copy until this replica's lag is less than L<"--max-lag">. The value is a DSN that inherits properties from the the connection options (L<"--port">, L<"--user">, etc.). This option overrides the normal behavior of finding and continually monitoring replication lag on ALL connected replicas. If you don't want to monitor ALL replicas, but you want more than just one replica to be monitored, then use the DSN option to the L<"--recursion-method"> option instead of this option. =item --chunk-index type: string Prefer this index for chunking tables. By default, the tool chooses the most appropriate index for chunking. This option lets you specify the index that you prefer. If the index doesn't exist, then the tool will fall back to its default behavior of choosing an index. The tool adds the index to the SQL statements in a C clause. Be careful when using this option; a poor choice of index could cause bad performance. =item --chunk-index-columns type: int Use only this many left-most columns of a L<"--chunk-index">. This works only for compound indexes, and is useful in cases where a bug in the MySQL query optimizer (planner) causes it to scan a large range of rows instead of using the index to locate starting and ending points precisely. This problem sometimes occurs on indexes with many columns, such as 4 or more. If this happens, the tool might print a warning related to the L<"--[no]check-plan"> option. Instructing the tool to use only the first N columns of the index is a workaround for the bug in some cases. =item --chunk-size type: size; default: 1000 Number of rows to select for each chunk copied. Allowable suffixes are k, M, G. This option can override the default behavior, which is to adjust chunk size dynamically to try to make chunks run in exactly L<"--chunk-time"> seconds. When this option isn't set explicitly, its default value is used as a starting point, but after that, the tool ignores this option's value. If you set this option explicitly, however, then it disables the dynamic adjustment behavior and tries to make all chunks exactly the specified number of rows. There is a subtlety: if the chunk index is not unique, then it's possible that chunks will be larger than desired. For example, if a table is chunked by an index that contains 10,000 of a given value, there is no way to write a WHERE clause that matches only 1,000 of the values, and that chunk will be at least 10,000 rows large. Such a chunk will probably be skipped because of L<"--chunk-size-limit">. 
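As a rough illustration (the host, database, table, and column names below are placeholders, not taken from this documentation), fixing the chunk size explicitly, and thereby disabling the dynamic adjustment, might look like:

  pt-online-schema-change \
    --alter "ADD COLUMN c1 INT NOT NULL DEFAULT 0" \
    --chunk-size 5000 \
    --execute h=localhost,D=sakila,t=actor

Leaving this option at its default keeps the dynamic behavior described above, in which L<"--chunk-time"> drives the chunk size instead.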
=item --chunk-size-limit type: float; default: 4.0 Do not copy chunks this much larger than the desired chunk size. When a table has no unique indexes, chunk sizes can be inaccurate. This option specifies a maximum tolerable limit to the inaccuracy. The tool uses to estimate how many rows are in the chunk. If that estimate exceeds the desired chunk size times the limit, then the tool skips the chunk. The minimum value for this option is 1, which means that no chunk can be larger than L<"--chunk-size">. You probably don't want to specify 1, because rows reported by EXPLAIN are estimates, which can be different from the real number of rows in the chunk. You can disable oversized chunk checking by specifying a value of 0. The tool also uses this option to determine how to handle foreign keys that reference the table to be altered. See L<"--alter-foreign-keys-method"> for details. =item --chunk-time type: float; default: 0.5 Adjust the chunk size dynamically so each data-copy query takes this long to execute. The tool tracks the copy rate (rows per second) and adjusts the chunk size after each data-copy query, so that the next query takes this amount of time (in seconds) to execute. It keeps an exponentially decaying moving average of queries per second, so that if the server's performance changes due to changes in server load, the tool adapts quickly. If this option is set to zero, the chunk size doesn't auto-adjust, so query times will vary, but query chunk sizes will not. Another way to do the same thing is to specify a value for L<"--chunk-size"> explicitly, instead of leaving it at the default. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --critical-load type: Array; default: Threads_running=50 Examine SHOW GLOBAL STATUS after every chunk, and abort if the load is too high. The option accepts a comma-separated list of MySQL status variables and thresholds. An optional C<=MAX_VALUE> (or C<:MAX_VALUE>) can follow each variable. If not given, the tool determines a threshold by examining the current value at startup and doubling it. See L<"--max-load"> for further details. These options work similarly, except that this option will abort the tool's operation instead of pausing it, and the default value is computed differently if you specify no threshold. The reason for this option is as a safety check in case the triggers on the original table add so much load to the server that it causes downtime. There is probably no single value of Threads_running that is wrong for every server, but a default of 50 seems likely to be unacceptably high for most servers, indicating that the operation should be canceled immediately. =item --database short form: -D; type: string Connect to this database. =item --default-engine Remove C from the new table. By default the new table is created with the same table options as the original table, so if the original table uses InnoDB, then the new table will use InnoDB. In certain cases involving replication, this may cause unintended changes on replicas which use a different engine for the same table. Specifying this option causes the new table to be created with the system's default engine. =item --data-dir type: string Create the new table on a different partition using the DATA DIRECTORY feature. Only available on 5.6+. This parameter is ignored if it is used at the same time than remove-data-dir. 
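For example (the path and DSN below are placeholders, and MySQL 5.6 or newer is assumed, as noted above), a table could be rebuilt onto a separate volume like this:

  pt-online-schema-change \
    --alter "ENGINE=InnoDB" \
    --data-dir /mnt/fast-storage \
    --execute h=localhost,D=test,t=big_table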
=item --remove-data-dir default: no If the original table was created using the DATA DIRECTORY feature, remove it and create the new table in MySQL default directory without creating a new isl file. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --[no]drop-new-table default: yes Drop the new table if copying the original table fails. Specifying C<--no-drop-new-table> and C<--no-swap-tables> leaves the new, altered copy of the table without modifying the original table. See L<"--new-table-name">. L<--no-drop-new-table> does not work with C. =item --[no]drop-old-table default: yes Drop the original table after renaming it. After the original table has been successfully renamed to let the new table take its place, and if there are no errors, the tool drops the original table by default. If there are any errors, the tool leaves the original table in place. If C<--no-swap-tables> is specified, then there is no old table to drop. =item --[no]drop-triggers default: yes Drop triggers on the old table. C<--no-drop-triggers> forces C<--no-drop-old-table>. =item --dry-run Create and alter the new table, but do not create triggers, copy data, or replace the original table. =item --execute Indicate that you have read the documentation and want to alter the table. You must specify this option to alter the table. If you do not, then the tool will only perform some safety checks and exit. This helps ensure that you have read the documentation and understand how to use this tool. If you have not read the documentation, then do not specify this option. =item --[no]check-unique-key-change default: yes Avoid C to run if the specified statement for L<"--alter"> is trying to add an unique index. Since C uses C to copy rows to the new table, if the row being written produces a duplicate key, it will fail silently and data will be lost. Example: CREATE DATABASE test; USE test; CREATE TABLE `a` ( `id` int(11) NOT NULL, `unique_id` varchar(32) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; insert into a values (1, "a"); insert into a values (2, "b"); insert into a values (3, ""); insert into a values (4, ""); insert into a values (5, NULL); insert into a values (6, NULL); Using C to add an unique index on the C field, will cause some rows to be lost due to the use of C to copy rows from the source table. For this reason, C will fail if it detects that the L<"--alter"> parameter is trying to add an unique key and it will show an example query to run to detect if there are rows that will produce duplicated indexes. Even if you run the query and there are no rows that will produce duplicated indexes, take into consideration that after running this query, changes can be made to the table that can produce duplicate rows and this data will be lost. =item --force This options bypasses confirmation in case of using alter-foreign-keys-method = none , which might break foreign key constraints. =item --force-concat-enums The NibbleIterator in Percona Toolkit can detect indexes having ENUM fields and if the items it has are sorted or not. According to MySQL documentation at L: ENUM values are sorted based on their index numbers, which depend on the order in which the enumeration members were listed in the column specification. For example, 'b' sorts before 'a' for ENUM('b', 'a'). The empty string sorts before nonempty strings, and NULL values sort before all other enumeration values. 
To prevent unexpected results when using the ORDER BY clause on an ENUM column, use one of these techniques: - Specify the ENUM list in alphabetic order. - Make sure that the column is sorted lexically rather than by index number by coding ORDER BY CAST(col AS CHAR) or ORDER BY CONCAT(col). The NibbleIterator in Percona Toolkit uses CONCAT(col) but, doing that, adds overhead since MySQL cannot use the column directly and has to calculate the result of CONCAT for every row. To make this scenario vissible to the user, if there are indexes having ENUM fields with usorted items, it is necessary to specify the C<--force-concat-enums> parameter. =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --max-flow-ctl type: float Somewhat similar to --max-lag but for PXC clusters. Check average time cluster spent pausing for Flow Control and make tool pause if it goes over the percentage indicated in the option. A value of 0 would make the tool pause when *any* Flow Control activity is detected. Default is no Flow Control checking. This option is available for PXC versions 5.6 or higher. =item --max-lag type: time; default: 1s Pause the data copy until all replicas' lag is less than this value. After each data-copy query (each chunk), the tool looks at the replication lag of all replicas to which it connects, using Seconds_Behind_Master. If any replica is lagging more than the value of this option, then the tool will sleep for L<"--check-interval"> seconds, then check all replicas again. If you specify L<"--check-slave-lag">, then the tool only examines that server for lag, not all servers. If you want to control exactly which servers the tool monitors, use the DSN value to L<"--recursion-method">. The tool waits forever for replicas to stop lagging. If any replica is stopped, the tool waits forever until the replica is started. The data copy continues when all replicas are running and not lagging too much. The tool prints progress reports while waiting. If a replica is stopped, it prints a progress report immediately, then again at every progress report interval. =item --max-load type: Array; default: Threads_running=25 Examine SHOW GLOBAL STATUS after every chunk, and pause if any status variables are higher than their thresholds. The option accepts a comma-separated list of MySQL status variables. An optional C<=MAX_VALUE> (or C<:MAX_VALUE>) can follow each variable. If not given, the tool determines a threshold by examining the current value and increasing it by 20%. For example, if you want the tool to pause when Threads_connected gets too high, you can specify "Threads_connected", and the tool will check the current value when it starts working and add 20% to that value. If the current value is 100, then the tool will pause when Threads_connected exceeds 120, and resume working when it is below 120 again. If you want to specify an explicit threshold, such as 110, you can use either "Threads_connected:110" or "Threads_connected=110". The purpose of this option is to prevent the tool from adding too much load to the server. If the data-copy queries are intrusive, or if they cause lock waits, then other queries on the server will tend to block and queue. This will typically cause Threads_running to increase, and the tool can detect that by running SHOW GLOBAL STATUS immediately after each query finishes. If you specify a threshold for this variable, then you can instruct the tool to wait until queries are running normally again. 
This will not prevent queueing, however; it will only give the server a chance to recover from the queueing. If you notice queueing, it is best to decrease the chunk time. =item --preserve-triggers Preserves old triggers when specified. As of MySQL 5.7.2, it is possible to define multiple triggers for a given table that have the same trigger event and action time. This allows us to add the triggers needed for C even if the table already has its own triggers. If this option is enabled, C will try to copy all the existing triggers to the new table BEFORE start copying rows from the original table to ensure the old triggers can be applied after altering the table. Example. CREATE TABLE test.t1 ( id INT NOT NULL AUTO_INCREMENT, f1 INT, f2 VARCHAR(32), PRIMARY KEY (id) ); CREATE TABLE test.log ( ts TIMESTAMP, msg VARCHAR(255) ); CREATE TRIGGER test.after_update AFTER UPDATE ON test.t1 FOR EACH ROW INSERT INTO test.log VALUES (NOW(), CONCAT("updated row row with id ", OLD.id, " old f1:", OLD.f1, " new f1: ", NEW.f1 )); For this table and triggers combination, it is not possible to use L<--preserve-triggers> with an L<--alter> like this: C<"DROP COLUMN f1"> since the trigger references the column being dropped and at would make the trigger to fail. After testing the triggers will work on the new table, the triggers are dropped from the new table until all rows have been copied and then they are re-applied. L<--preserve-triggers> cannot be used with these other parameters, L<--no-drop-triggers>, L<--no-drop-old-table> and L<--no-swap-tables> since L<--preserve-triggers> implies that the old triggers should be deleted and recreated in the new table. Since it is not possible to have more than one trigger with the same name, old triggers must be deleted in order to be able to recreate them into the new table. Using C<--preserve-triggers> with C<--no-swap-tables> will cause triggers to remain defined for the original table. Please read the documentation for L<--swap-tables> If both C<--no-swap-tables> and C<--no-drop-new-table> is set, the trigger will remain on the original table and will be duplicated on the new table (the trigger will have a random suffix as no trigger names are unique). =item --new-table-name type: string; default: %T_new New table name before it is swapped. C<%T> is replaced with the original table name. When the default is used, the tool prefixes the name with up to 10 C<_> (underscore) to find a unique table name. If a table name is specified, the tool does not prefix it with C<_>, so the table must not exist. =item --null-to-not-null Allows MODIFYing a column that allows NULL values to one that doesn't allow them. The rows which contain NULL values will be converted to the defined default value. If no explicit DEFAULT value is given MySQL will assign a default value based on datatype, e.g. 0 for number datatypes, '' for string datatypes. =item --only-same-schema-fks Check foreigns keys only on tables on the same schema than the original table. This option is dangerous since if you have FKs refenrencing tables in other schemas, they won't be detected. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pause-file type: string Execution will be paused while the file specified by this param exists. =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. 
However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --plugin type: string Perl module file that defines a C class. A plugin allows you to write a Perl module that can hook into many parts of pt-online-schema-change. This requires a good knowledge of Perl and Percona Toolkit conventions, which are beyond this scope of this documentation. Please contact Percona if you have questions or need help. See L<"PLUGIN"> for more information. =item --port short form: -P; type: int Port number to use for connection. =item --print Print SQL statements to STDOUT. Specifying this option allows you to see most of the statements that the tool executes. You can use this option with L<"--dry-run">, for example. =item --progress type: array; default: time,30 Print progress reports to STDERR while copying rows. The value is a comma-separated list with two parts. The first part can be percentage, time, or iterations; the second part specifies how often an update should be printed, in percentage, seconds, or number of iterations. =item --quiet short form: -q Do not print messages to STDOUT (disables L<"--progress">). Errors and warnings are still printed to STDERR. =item --recurse type: int Number of levels to recurse in the hierarchy when discovering replicas. Default is infinite. See also L<"--recursion-method">. =item --recursion-method type: array; default: processlist,hosts Preferred recursion method for discovering replicas. Possible methods are: METHOD USES =========== ================== processlist SHOW PROCESSLIST hosts SHOW SLAVE HOSTS dsn=DSN DSNs from a table none Do not find slaves The processlist method is the default, because SHOW SLAVE HOSTS is not reliable. However, the hosts method can work better if the server uses a non-standard port (not 3306). The tool usually does the right thing and finds all replicas, but you may give a preferred method and it will be used first. The hosts method requires replicas to be configured with report_host, report_port, etc. The dsn method is special: it specifies a table from which other DSN strings are read. The specified DSN must specify a D and t, or a database-qualified t. The DSN table should have the following structure: CREATE TABLE `dsns` ( `id` int(11) NOT NULL AUTO_INCREMENT, `parent_id` int(11) DEFAULT NULL, `dsn` varchar(255) NOT NULL, PRIMARY KEY (`id`) ); To make the tool monitor only the hosts 10.10.1.16 and 10.10.1.17 for replication lag, insert the values C and C into the table. Currently, the DSNs are ordered by id, but id and parent_id are otherwise ignored. You can change the list of hosts while OSC is executing: if you change the contents of the DSN table, OSC will pick it up very soon. =item --skip-check-slave-lag type: DSN; repeatable: yes DSN to skip when checking slave lag. It can be used multiple times. Example: --skip-check-slave-lag h=127.0.0.1,P=12345 --skip-check-slave-lag h=127.0.0.1,P=12346 Plase take into consideration that even when for the MySQL driver h=127.1 is equal to h=127.0.0.1, for this parameter you need to specify the full IP address. =item --slave-user type: string Sets the user to be used to connect to the slaves. This parameter allows you to have a different user with less privileges on the slaves but that user must exist on all slaves. =item --slave-password type: string Sets the password to be used to connect to the slaves. 
It can be used with --slave-user and the password for the user must be the same on all slaves. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 innodb_lock_wait_timeout=1 lock_wait_timeout=60 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the default value of C<10000>. The tool prints a warning and continues if a variable cannot be set. Note that setting the C variable requires some tricky escapes to be able to parse the quotes and commas. Example: --set-vars sql_mode=\'STRICT_ALL_TABLES\\,ALLOW_INVALID_DATES\' Note the single backslash for the quotes and double backslash for the comma. =item --sleep type: float; default: 0 How long to sleep (in seconds) after copying each chunk. This option is useful when throttling by L<"--max-lag"> and L<"--max-load"> are not possible. A small, sub-second value should be used, like 0.1, else the tool could take a very long time to copy large tables. =item --socket short form: -S; type: string Socket file to use for connection. =item --statistics Print statistics about internal counters. This is useful to see how many warnings were suppressed compared to the number of INSERT. =item --[no]swap-tables default: yes Swap the original table and the new, altered table. This step completes the online schema change process by making the table with the new schema take the place of the original table. The original table becomes the "old table," and the tool drops it unless you disable L<"--[no]drop-old-table">. Using C<--no-swap-tables> will run the whole process, it will create the new table, it will copy all rows but at the end it will drop the new table. It is intended to run a more realistic L<--dry-run>. =item --tries type: array How many times to try critical operations. If certain operations fail due to non-fatal, recoverable errors, the tool waits and tries the operation again. These are the operations that are retried, with their default number of tries and wait time between tries (in seconds): =for comment ignore-pt-internal-value MAGIC_tries OPERATION TRIES WAIT =================== ===== ==== create_triggers 10 1 drop_triggers 10 1 copy_rows 10 0.25 swap_tables 10 1 update_foreign_keys 10 1 analyze_table 10 1 To change the defaults, specify the new values like: --tries create_triggers:5:0.5,drop_triggers:5:0.5 That makes the tool try C and C 5 times with a 0.5 second wait between tries. So the format is: operation:tries:wait[,operation:tries:wait] All three values must be specified. Note that most operations are affected only in MySQL 5.5 and newer by C (see L<"--set-vars">) because of metadata locks. The C operation is affected in any version of MySQL by C. For creating and dropping triggers, the number of tries applies to each C and C statement for each trigger. For copying rows, the number of tries applies to each chunk, not the entire table. For swapping tables, the number of tries usually applies once because there is usually only one C statement. For rebuilding foreign key constraints, the number of tries applies to each statement (C statements for the C L<"--alter-foreign-keys-method">; other statements for the C method). 
The tool retries each operation if these errors occur: Lock wait timeout (innodb_lock_wait_timeout and lock_wait_timeout) Deadlock found Query is killed (KILL QUERY ) Connection is killed (KILL CONNECTION ) Lost connection to MySQL In the case of lost and killed connections, the tool will automatically reconnect. Failures and retries are recorded in the L<"--statistics">. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =back =head1 PLUGIN The file specified by L<"--plugin"> must define a class (i.e. a package) called C with a C subroutine. The tool will create an instance of this class and call any hooks that it defines. No hooks are required, but a plugin isn't very useful without them. These hooks, in this order, are called if defined: init before_create_new_table after_create_new_table before_alter_new_table after_alter_new_table before_create_triggers after_create_triggers before_copy_rows after_copy_rows before_swap_tables after_swap_tables before_update_foreign_keys after_update_foreign_keys before_drop_old_table after_drop_old_table before_drop_triggers before_exit get_slave_lag Each hook is passed different arguments. To see which arguments are passed to a hook, search for the hook's name in the tool's source code, like: # --plugin hook if ( $plugin && $plugin->can('init') ) { $plugin->init( orig_tbl => $orig_tbl, child_tables => $child_tables, renamed_cols => $renamed_cols, slaves => $slaves, slave_lag_cxns => $slave_lag_cxns, ); } The comment C<# --plugin hook> precedes every hook call. 
Here's a plugin file template for all hooks: package pt_online_schema_change_plugin; use strict; sub new { my ($class, %args) = @_; my $self = { %args }; return bless $self, $class; } sub init { my ($self, %args) = @_; print "PLUGIN init\n"; } sub before_create_new_table { my ($self, %args) = @_; print "PLUGIN before_create_new_table\n"; } sub after_create_new_table { my ($self, %args) = @_; print "PLUGIN after_create_new_table\n"; } sub before_alter_new_table { my ($self, %args) = @_; print "PLUGIN before_alter_new_table\n"; } sub after_alter_new_table { my ($self, %args) = @_; print "PLUGIN after_alter_new_table\n"; } sub before_create_triggers { my ($self, %args) = @_; print "PLUGIN before_create_triggers\n"; } sub after_create_triggers { my ($self, %args) = @_; print "PLUGIN after_create_triggers\n"; } sub before_copy_rows { my ($self, %args) = @_; print "PLUGIN before_copy_rows\n"; } sub after_copy_rows { my ($self, %args) = @_; print "PLUGIN after_copy_rows\n"; } sub before_swap_tables { my ($self, %args) = @_; print "PLUGIN before_swap_tables\n"; } sub after_swap_tables { my ($self, %args) = @_; print "PLUGIN after_swap_tables\n"; } sub before_update_foreign_keys { my ($self, %args) = @_; print "PLUGIN before_update_foreign_keys\n"; } sub after_update_foreign_keys { my ($self, %args) = @_; print "PLUGIN after_update_foreign_keys\n"; } sub before_drop_old_table { my ($self, %args) = @_; print "PLUGIN before_drop_old_table\n"; } sub after_drop_old_table { my ($self, %args) = @_; print "PLUGIN after_drop_old_table\n"; } sub before_drop_triggers { my ($self, %args) = @_; print "PLUGIN before_drop_triggers\n"; } sub before_exit { my ($self, %args) = @_; print "PLUGIN before_exit\n"; } sub get_slave_lag { my ($self, %args) = @_; print "PLUGIN get_slave_lag\n"; return sub { return 0; }; } 1; Notice that C must return a function reference; ideally one that returns actual slave lag, not simply zero like in the example. Here's an example that actually does something: package pt_online_schema_change_plugin; use strict; sub new { my ($class, %args) = @_; my $self = { %args }; return bless $self, $class; } sub after_create_new_table { my ($self, %args) = @_; my $new_tbl = $args{new_tbl}; my $dbh = $self->{cxn}->dbh; my $row = $dbh->selectrow_arrayref("SHOW CREATE TABLE $new_tbl->{name}"); warn "after_create_new_table: $row->[1]\n\n"; } sub after_alter_new_table { my ($self, %args) = @_; my $new_tbl = $args{new_tbl}; my $dbh = $self->{cxn}->dbh; my $row = $dbh->selectrow_arrayref("SHOW CREATE TABLE $new_tbl->{name}"); warn "after_alter_new_table: $row->[1]\n\n"; } 1; You could use this with L<"--dry-run"> to check how the table will look before and after. Please contact Percona if you have questions or need help. =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: no Database for the old and new table. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. 
If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * t dsn: table; copy: no Table to alter. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-online-schema-change ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 EXIT STATUS INVALID_PARAMETERS = 1 UNSUPORTED_MYSQL_VERSION = 2 NO_MINIMUM_REQUIREMENTS = 3 NO_PRIMARY_OR_UNIQUE_KEY = 4 INVALID_PLUGIN_FILE = 5 INVALID_ALTER_FK_METHOD = 6 INVALID_KEY_SIZE = 7 CANNOT_DETERMINE_KEY_SIZE = 9 NOT_SAFE_TO_ASCEND = 9 ERROR_CREATING_NEW_TABLE = 10 ERROR_ALTERING_TABLE = 11 ERROR_CREATING_TRIGGERS = 12 ERROR_RESTORING_TRIGGERS = 13 ERROR_SWAPPING_TABLES = 14 ERROR_UPDATING_FKS = 15 ERROR_DROPPING_OLD_TABLE = 16 UNSUPORTED_OPERATION = 17 MYSQL_CONNECTION_ERROR = 18 LOST_MYSQL_CONNECTION = 19 =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. This tool works only on MySQL 5.0.2 and newer versions, because earlier versions do not support triggers. Also a number of permissions should be set on MySQL to make pt-online-schema-change operate as expected. PROCESS, SUPER, REPLICATION SLAVE global privileges, as well as SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, and TRIGGER table privileges should be granted on server. Slave needs only REPLICATION SLAVE and REPLICATION CLIENT privileges. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Daniel Nichter and Baron Schwartz =head1 ACKNOWLEDGMENTS The "online schema change" concept was first implemented by Shlomi Noach in his tool C, part of L. Engineers at Facebook then built another version called C as explained by their blog post: L. This tool is a hybrid of both approaches, with additional features and functionality not present in either. =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates. 
THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-online-schema-change 3.1.0 =cut percona-toolkit-3.1/bin/pt-pmp000775 001750 001750 00000060026 13535723560 017565 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env bash # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. TOOL="pt-pmp" # ########################################################################### # log_warn_die package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/log_warn_die.sh # t/lib/bash/log_warn_die.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u PTFUNCNAME="" PTDEBUG="${PTDEBUG:-""}" EXIT_STATUS=0 ts() { TS=$(date +%F-%T | tr ':-' '_') echo "$TS $*" } info() { [ ${OPT_VERBOSE:-3} -ge 3 ] && ts "$*" } log() { [ ${OPT_VERBOSE:-3} -ge 2 ] && ts "$*" } warn() { [ ${OPT_VERBOSE:-3} -ge 1 ] && ts "$*" >&2 EXIT_STATUS=1 } die() { ts "$*" >&2 EXIT_STATUS=1 exit 1 } _d () { [ "$PTDEBUG" ] && echo "# $PTFUNCNAME: $(ts "$*")" >&2 } # ########################################################################### # End log_warn_die package # ########################################################################### # ########################################################################### # tmpdir package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/tmpdir.sh # t/lib/bash/tmpdir.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u PT_TMPDIR="" mk_tmpdir() { local dir="${1:-""}" if [ -n "$dir" ]; then if [ ! -d "$dir" ]; then mkdir "$dir" || die "Cannot make tmpdir $dir" fi PT_TMPDIR="$dir" else local tool="${0##*/}" local pid="$$" PT_TMPDIR=`mktemp -d -t "${tool}.${pid}.XXXXXX"` \ || die "Cannot make secure tmpdir" fi } rm_tmpdir() { if [ -n "$PT_TMPDIR" ] && [ -d "$PT_TMPDIR" ]; then rm -rf "$PT_TMPDIR" fi PT_TMPDIR="" } # ########################################################################### # End tmpdir package # ########################################################################### # ########################################################################### # parse_options package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/parse_options.sh # t/lib/bash/parse_options.sh # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### set -u ARGV="" # Non-option args (probably input files) EXT_ARGV="" # Everything after -- (args for an external command) HAVE_EXT_ARGV="" # Got --, everything else is put into EXT_ARGV OPT_ERRS=0 # How many command line option errors OPT_VERSION="" # If --version was specified OPT_HELP="" # If --help was specified OPT_ASK_PASS="" # If --ask-pass was specified PO_DIR="" # Directory with program option spec files usage() { local file="$1" local usage="$(grep '^Usage: ' "$file")" echo $usage echo echo "For more information, 'man $TOOL' or 'perldoc $file'." } usage_or_errors() { local file="$1" local version="" if [ "$OPT_VERSION" ]; then version=$(grep '^pt-[^ ]\+ [0-9]' "$file") echo "$version" return 1 fi if [ "$OPT_HELP" ]; then usage "$file" echo echo "Command line options:" echo perl -e ' use strict; use warnings FATAL => qw(all); my $lcol = 20; # Allow this much space for option names. my $rcol = 80 - $lcol; # The terminal is assumed to be 80 chars wide. my $name; while ( <> ) { my $line = $_; chomp $line; if ( $line =~ s/^long:/ --/ ) { $name = $line; } elsif ( $line =~ s/^desc:// ) { $line =~ s/ +$//mg; my @lines = grep { $_ } $line =~ m/(.{0,$rcol})(?:\s+|\Z)/g; if ( length($name) >= $lcol ) { print $name, "\n", (q{ } x $lcol); } else { printf "%-${lcol}s", $name; } print join("\n" . (q{ } x $lcol), @lines); print "\n"; } } ' "$PO_DIR"/* echo echo "Options and values after processing arguments:" echo ( cd "$PO_DIR" for opt in *; do local varname="OPT_$(echo "$opt" | tr a-z- A-Z_)" eval local varvalue=\$$varname if ! grep -q "type:" "$PO_DIR/$opt" >/dev/null; then if [ "$varvalue" -a "$varvalue" = "yes" ]; then varvalue="TRUE" else varvalue="FALSE" fi fi printf -- " --%-30s %s" "$opt" "${varvalue:-(No value)}" echo done ) return 1 fi if [ $OPT_ERRS -gt 0 ]; then echo usage "$file" return 1 fi return 0 } option_error() { local err="$1" OPT_ERRS=$(($OPT_ERRS + 1)) echo "$err" >&2 } parse_options() { local file="$1" shift ARGV="" EXT_ARGV="" HAVE_EXT_ARGV="" OPT_ERRS=0 OPT_VERSION="" OPT_HELP="" OPT_ASK_PASS="" PO_DIR="$PT_TMPDIR/po" if [ ! -d "$PO_DIR" ]; then mkdir "$PO_DIR" if [ $? -ne 0 ]; then echo "Cannot mkdir $PO_DIR" >&2 exit 1 fi fi rm -rf "$PO_DIR"/* if [ $? 
-ne 0 ]; then echo "Cannot rm -rf $PO_DIR/*" >&2 exit 1 fi _parse_pod "$file" # Parse POD into program option (po) spec files _eval_po # Eval po into existence with default values if [ $# -ge 2 ] && [ "$1" = "--config" ]; then shift # --config local user_config_files="$1" shift # that ^ local IFS="," for user_config_file in $user_config_files; do _parse_config_files "$user_config_file" done else _parse_config_files "/etc/percona-toolkit/percona-toolkit.conf" "/etc/percona-toolkit/$TOOL.conf" if [ "${HOME:-}" ]; then _parse_config_files "$HOME/.percona-toolkit.conf" "$HOME/.$TOOL.conf" fi fi _parse_command_line "${@:-""}" } _parse_pod() { local file="$1" PO_FILE="$file" PO_DIR="$PO_DIR" perl -e ' $/ = ""; my $file = $ENV{PO_FILE}; open my $fh, "<", $file or die "Cannot open $file: $!"; while ( defined(my $para = <$fh>) ) { next unless $para =~ m/^=head1 OPTIONS/; while ( defined(my $para = <$fh>) ) { last if $para =~ m/^=head1/; chomp; if ( $para =~ m/^=item --(\S+)/ ) { my $opt = $1; my $file = "$ENV{PO_DIR}/$opt"; open my $opt_fh, ">", $file or die "Cannot open $file: $!"; print $opt_fh "long:$opt\n"; $para = <$fh>; chomp; if ( $para =~ m/^[a-z ]+:/ ) { map { chomp; my ($attrib, $val) = split(/: /, $_); print $opt_fh "$attrib:$val\n"; } split(/; /, $para); $para = <$fh>; chomp; } my ($desc) = $para =~ m/^([^?.]+)/; print $opt_fh "desc:$desc.\n"; close $opt_fh; } } last; } ' } _eval_po() { local IFS=":" for opt_spec in "$PO_DIR"/*; do local opt="" local default_val="" local neg=0 local size=0 while read key val; do case "$key" in long) opt=$(echo $val | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') ;; default) default_val="$val" ;; "short form") ;; type) [ "$val" = "size" ] && size=1 ;; desc) ;; negatable) if [ "$val" = "yes" ]; then neg=1 fi ;; *) echo "Invalid attribute in $opt_spec: $line" >&2 exit 1 esac done < "$opt_spec" if [ -z "$opt" ]; then echo "No long attribute in option spec $opt_spec" >&2 exit 1 fi if [ $neg -eq 1 ]; then if [ -z "$default_val" ] || [ "$default_val" != "yes" ]; then echo "Option $opt_spec is negatable but not default: yes" >&2 exit 1 fi fi if [ $size -eq 1 -a -n "$default_val" ]; then default_val=$(size_to_bytes $default_val) fi eval "OPT_${opt}"="$default_val" done } _parse_config_files() { for config_file in "${@:-""}"; do test -f "$config_file" || continue while read config_opt; do echo "$config_opt" | grep '^[ ]*[^#]' >/dev/null 2>&1 || continue config_opt="$(echo "$config_opt" | sed -e 's/^ *//g' -e 's/ *$//g' -e 's/[ ]*=[ ]*/=/' -e 's/[ ]+#.*$//')" [ "$config_opt" = "" ] && continue echo "$config_opt" | grep -v 'version-check' >/dev/null 2>&1 || continue if ! 
[ "$HAVE_EXT_ARGV" ]; then config_opt="--$config_opt" fi _parse_command_line "$config_opt" done < "$config_file" HAVE_EXT_ARGV="" # reset for each file done } _parse_command_line() { local opt="" local val="" local next_opt_is_val="" local opt_is_ok="" local opt_is_negated="" local real_opt="" local required_arg="" local spec="" for opt in "${@:-""}"; do if [ "$opt" = "--" -o "$opt" = "----" ]; then HAVE_EXT_ARGV=1 continue fi if [ "$HAVE_EXT_ARGV" ]; then if [ "$EXT_ARGV" ]; then EXT_ARGV="$EXT_ARGV $opt" else EXT_ARGV="$opt" fi continue fi if [ "$next_opt_is_val" ]; then next_opt_is_val="" if [ $# -eq 0 ] || [ $(expr "$opt" : "\-") -eq 1 ]; then option_error "$real_opt requires a $required_arg argument" continue fi val="$opt" opt_is_ok=1 else if [ $(expr "$opt" : "\-") -eq 0 ]; then if [ -z "$ARGV" ]; then ARGV="$opt" else ARGV="$ARGV $opt" fi continue fi real_opt="$opt" if $(echo $opt | grep '^--no[^-]' >/dev/null); then local base_opt=$(echo $opt | sed 's/^--no//') if [ -f "$PT_TMPDIR/po/$base_opt" ]; then opt_is_negated=1 opt="$base_opt" else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi else if $(echo $opt | grep '^--no-' >/dev/null); then opt_is_negated=1 opt=$(echo $opt | sed 's/^--no-//') else opt_is_negated="" opt=$(echo $opt | sed 's/^-*//') fi fi if $(echo $opt | grep '^[a-z-][a-z-]*=' >/dev/null 2>&1); then val="$(echo $opt | awk -F= '{print $2}')" opt="$(echo $opt | awk -F= '{print $1}')" fi if [ -f "$PT_TMPDIR/po/$opt" ]; then spec="$PT_TMPDIR/po/$opt" else spec=$(grep "^short form:-$opt\$" "$PT_TMPDIR"/po/* | cut -d ':' -f 1) if [ -z "$spec" ]; then continue fi fi required_arg=$(cat "$spec" | awk -F: '/^type:/{print $2}') if [ "$required_arg" ]; then if [ "$val" ]; then opt_is_ok=1 else next_opt_is_val=1 fi else if [ "$val" ]; then option_error "Option $real_opt does not take a value" continue fi if [ "$opt_is_negated" ]; then val="" else val="yes" fi opt_is_ok=1 fi fi if [ "$opt_is_ok" ]; then opt=$(cat "$spec" | grep '^long:' | cut -d':' -f2 | sed 's/-/_/g' | tr '[:lower:]' '[:upper:]') if grep "^type:size" "$spec" >/dev/null; then val=$(size_to_bytes $val) fi eval "OPT_$opt"="'$val'" opt="" val="" next_opt_is_val="" opt_is_ok="" opt_is_negated="" real_opt="" required_arg="" spec="" fi done } size_to_bytes() { local size="$1" echo $size | perl -ne '%f=(B=>1, K=>1_024, M=>1_048_576, G=>1_073_741_824, T=>1_099_511_627_776); m/^(\d+)([kMGT])?/i; print $1 * $f{uc($2 || "B")};' } # ########################################################################### # End parse_options package # ########################################################################### # ########################################################################### # alt_cmds package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/bash/alt_cmds.sh # t/lib/bash/alt_cmds.sh # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### set -u _seq() { local i="$1" awk "BEGIN { for(i=1; i<=$i; i++) print i; }" } _pidof() { local cmd="$1" if ! pidof "$cmd" 2>/dev/null; then ps -eo pid,ucomm | awk -v comm="$cmd" '$2 == comm { print $1 }' fi } _lsof() { local pid="$1" if ! 
lsof -p $pid 2>/dev/null; then /bin/ls -l /proc/$pid/fd 2>/dev/null fi } _which() { if [ -x /usr/bin/which ]; then /usr/bin/which "$1" 2>/dev/null | awk '{print $1}' elif which which 1>/dev/null 2>&1; then which "$1" 2>/dev/null | awk '{print $1}' else echo "$1" fi } # ########################################################################### # End alt_cmds package # ########################################################################### set +u # Actually does the aggregation. The arguments are the max number of functions # to aggregate, and the files to read. If maxlen=0, it means infinity. We have # to pass the maxlen argument into this function to make maxlen testable. aggregate_stacktrace() { local maxlen=$1; shift awk " BEGIN { s = \"\"; } /^Thread/ { if ( s != \"\" ) { print s; } s = \"\"; c = 0; } /^\#/ { if ( \$2 ~ /0x/ ) { if ( \$4 ~/void|const/ ) { targ = \$5; } else { targ = \$4; tfile= \$NF; } if ( targ ~ /[<\\(]/ ) { targ = substr(\$0, index(\$0, \" in \") + 4); if ( targ ~ / from / ) { targ = substr(targ, 1, index(targ, \" from \") - 1); } if ( targ ~ / at / ) { targ = substr(targ, 1, index(targ, \" at \") - 1); } # Shorten C++ templates, e.g. in t/samples/stacktrace-004.txt while ( targ ~ />::/ ) { if ( 0 == gsub(/<[^<>]*>/, \"\", targ) ) { break; } } # Further shorten argument lists. while ( targ ~ /\\(/ ) { if ( 0 == gsub(/\\([^()]*\\)/, \"\", targ) ) { break; } } # Remove void and const decorators. gsub(/ ?(void|const) ?/, \"\", targ); gsub(/ /, \"\", targ); } else if ( targ ~ /\\?\\?/ && \$2 ~ /[1-9]/ ) { # Substitute ?? by the name of the library. targ = \$NF; while ( targ ~ /\\// ) { targ = substr(targ, index(targ, \"/\") + 1); } targ = substr(targ, 1, index(targ, \".\") - 1); targ = targ \"::??\"; } } else { targ = \$2; } # get rid of long symbol names such as 'pthread_cond_wait@@GLIBC_2.3.2' if ( targ ~ /@@/ ) { fname = substr(targ, 1, index(targ, \"@@\") - 1); } else { fname = targ; if ( tfile ~ /^\// ) { last=split(tfile,filen,/\//); fname = targ \"(\" filen[last] \")\"; } else { fname = targ } } if ( ${maxlen:-0} == 0 || c < ${maxlen:-0} ) { if (s != \"\" ) { s = s \",\" fname; } else { s = fname; } } c++; } END { print s } " "$@" | sort | uniq -c | sort -r -n -k 1,1 } # The main program to run. main() { local output_file="${OPT_SAVE_SAMPLES:-"$PT_TMPDIR/percona-toolkit"}" if [ -z "$ARGV" ]; then # There are no files to analyze, so we'll make one. if [ -z "$OPT_PID" ]; then OPT_PID=$(pidof -s "$OPT_BINARY" 2>/dev/null); if [ -z "$OPT_PID" ]; then OPT_PID=$(pgrep -o -x "$OPT_BINARY" 2>/dev/null) fi if [ -z "$OPT_PID" ]; then OPT_PID=$(ps -eaf | grep "$OPT_BINARY" | grep -v grep | awk '{print $2}' | head -n1); fi fi date for x in $(_seq $OPT_ITERATIONS); do gdb -ex "set pagination 0" \ -ex "thread apply all bt" \ -batch \ -p $OPT_PID \ >> "$output_file" date +'TS %N.%s %F %T' >> "$output_file" sleep $OPT_INTERVAL done fi if [ -z "$ARGV" ]; then aggregate_stacktrace "$OPT_LINES" "$output_file" else aggregate_stacktrace "$OPT_LINES" $ARGV fi } # Execute the program if it was not included from another file. This makes it # possible to include without executing, and thus test. if [ "${0##*/}" = "$TOOL" ] \ || [ "${0##*/}" = "bash" -a "${_:-""}" = "$0" ]; then mk_tmpdir parse_options "$0" "${@:-""}" if [ -z "$OPT_HELP" -a -z "$OPT_VERSION" ]; then # Validate options : fi usage_or_errors "$0" po_status=$? 
if [ $po_status -ne 0 ]; then [ $OPT_ERRS -gt 0 ] && exit 1 exit 0 fi main $ARGV rm_tmpdir fi # ############################################################################ # Documentation # ############################################################################ :<<'DOCUMENTATION' =pod =head1 NAME pt-pmp - Aggregate GDB stack traces for a selected program. =head1 SYNOPSIS Usage: pt-pmp [OPTIONS] [FILES] pt-pmp is a poor man's profiler, inspired by L. It can create and summarize full stack traces of processes on Linux. Summaries of stack traces can be an invaluable tool for diagnosing what a process is waiting for. =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-pmp performs two tasks: it gets a stack trace, and it summarizes the stack trace. If a file is given on the command line, the tool skips the first step and just aggregates the file. To summarize the stack trace, the tool extracts the function name (symbol) from each level of the stack, and combines them with commas. It does this for each thread in the output. Afterwards, it sorts similar threads together and counts how many of each one there are, then sorts them most-frequent first. pt-pmp is a read-only tool. However, collecting GDB stacktraces is achieved by attaching GDB to the program and printing stack traces from all threads. This will freeze the program for some period of time, ranging from a second or so to much longer on very busy systems with a lot of memory and many threads in the program. In the tool's default usage as a MySQL profiling tool, this means that MySQL will be unresponsive while the tool runs, although if you are using the tool to diagnose an unresponsive server, there is really no reason not to do this. In addition to freezing the server, there is also some risk of the server crashing or performing badly after GDB detaches from it. =head1 OPTIONS =over =item --binary short form: -b; type: string; default: mysqld Which binary to trace. =item --help Show help and exit. =item --interval short form: -s; type: int; default: 0 Number of seconds to sleep between L<"--iterations">. =item --iterations short form: -i; type: int; default: 1 How many traces to gather and aggregate. =item --lines short form: -l; type: int; default: 0 Aggregate only first specified number of many functions; 0=infinity. =item --pid short form: -p; type: int Process ID of the process to trace; overrides L<"--binary">. =item --save-samples short form: -k; type: string Keep the raw traces in this file after aggregation. =item --version Show version and exit. =back =head1 ENVIRONMENT This tool does not use any environment variables. =head1 SYSTEM REQUIREMENTS This tool requires Bash v3 or newer. If no backtrace files are given, then gdb is also required to create backtraces for the process specified on the command line. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) 
=back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz, based on a script by Domas Mituzas (L) =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2010-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-pmp 3.1.0 =cut DOCUMENTATION percona-toolkit-3.1/bin/pt-query-digest000775 001750 001750 00002006367 13535723560 021425 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo DSNParser Quoter OptionParser Transformers QueryRewriter Processlist TcpdumpParser MySQLProtocolParser SlowLogParser SlowLogWriter EventAggregator ReportFormatter QueryReportFormatter JSONReportFormatter EventTimeline QueryParser TableParser QueryReview QueryHistory Daemon BinaryLogParser GeneralLogParser RawLogParser ProtocolParser MasterSlave Progress FileIterator Runtime Pipeline HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. 
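# A hedged sketch of the fat-packing technique described in the BEGIN block
# above (illustration only; Some::Module is a hypothetical name, not part of
# the tool):
#
#   BEGIN { $INC{'Some/Module.pm'} = __FILE__; }  # mark the module as loaded
#   { package Some::Module; sub hello { 'hi' } 1; }
#   use Some::Module;   # require() sees the %INC entry and skips @INC entirely
#   print Some::Module::hello(), "\n";
#
# Because require() consults %INC before searching @INC, pre-seeding %INC with
# this file's own path keeps Perl from trying to load the embedded packages
# from disk.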
# ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . "::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. 
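# The _d() helper defined in Percona::Toolkit above prints nothing unless the
# PTDEBUG environment variable is set, because "PTDEBUG && _d(...)" is
# constant-folded away when PTDEBUG is 0.  A hedged usage sketch (slow.log and
# debug.txt are hypothetical file names):
#
#   PTDEBUG=1 pt-query-digest slow.log 2>debug.txt
#
# Debug lines go to STDERR in the form "# Package:line PID message ...", which
# matches the print STDERR statement in _d().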
# ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq . qq . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. 
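# Lmo gives the embedded classes below a small Moose-like declaration syntax
# (has/extends/with/override).  A hedged sketch of that style, using a
# hypothetical class that is not part of this tool:
#
#   { package Example::Widget;
#     use Lmo;
#     has 'name'  => ( is => 'ro', isa => 'Str', required => 1 );
#     has 'count' => ( is => 'rw', isa => 'Int', default  => sub { 0 } );
#     1;
#   }
#   my $w = Example::Widget->new(name => 'q1');
#   $w->count(3);   # read-write accessor generated by has()
#
# The accessors, isa type checks and defaults are implemented by has() below;
# new() is inherited from Lmo::Object.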
use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Quoter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Quoter.pm # t/lib/Quoter.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Quoter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; return bless {}, $class; } sub quote { my ( $self, @vals ) = @_; foreach my $val ( @vals ) { $val =~ s/`/``/g; } return join('.', map { '`' . $_ . '`' } @vals); } sub quote_val { my ( $self, $val, %args ) = @_; return 'NULL' unless defined $val; # undef = NULL return "''" if $val eq ''; # blank string = '' return $val if $val =~ m/^0x[0-9a-fA-F]+$/ # quote hex data && !$args{is_char}; # unless is_char is true $val =~ s/(['\\])/\\$1/g; return "'$val'"; } sub split_unquote { my ( $self, $db_tbl, $default_db ) = @_; my ( $db, $tbl ) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } for ($db, $tbl) { next unless $_; s/\A`//; s/`\z//; s/``/`/g; } return ($db, $tbl); } sub literal_like { my ( $self, $like ) = @_; return unless $like; $like =~ s/([%_])/\\$1/g; return "'$like'"; } sub join_quote { my ( $self, $default_db, $db_tbl ) = @_; return unless $db_tbl; my ($db, $tbl) = split(/[.]/, $db_tbl); if ( !$tbl ) { $tbl = $db; $db = $default_db; } $db = "`$db`" if $db && $db !~ m/^`/; $tbl = "`$tbl`" if $tbl && $tbl !~ m/^`/; return $db ? "$db.$tbl" : $tbl; } sub serialize_list { my ( $self, @args ) = @_; PTDEBUG && _d('Serializing', Dumper(\@args)); return unless @args; my @parts; foreach my $arg ( @args ) { if ( defined $arg ) { $arg =~ s/,/\\,/g; # escape commas $arg =~ s/\\N/\\\\N/g; # escape literal \N push @parts, $arg; } else { push @parts, '\N'; } } my $string = join(',', @parts); PTDEBUG && _d('Serialized: <', $string, '>'); return $string; } sub deserialize_list { my ( $self, $string ) = @_; PTDEBUG && _d('Deserializing <', $string, '>'); die "Cannot deserialize an undefined string" unless defined $string; my @parts; foreach my $arg ( split(/(? 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. 
%args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Transformers package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Transformers.pm # t/lib/Transformers.t # See https://launchpad.net/percona-toolkit for more information. 
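# The OptionParser package above builds its Getopt::Long specs from the tool's
# own POD.  A hedged sketch of the =head1 OPTIONS format it expects, modeled on
# the pt-pmp documentation earlier in this archive (the option shown is only an
# example):
#
#   =item --lines
#
#   short form: -l; type: int; default: 0
#
#   Aggregate only the first specified number of functions; 0=infinity.
#
# _pod_to_specs() turns the attribute line into a Getopt spec such as
# "lines|l=i", and the first sentence of the following paragraph becomes the
# --help description (with the default appended).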
# ########################################################################### { package Transformers; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timegm timelocal); use Digest::MD5 qw(md5_hex); use B qw(); BEGIN { require Exporter; our @ISA = qw(Exporter); our %EXPORT_TAGS = (); our @EXPORT = (); our @EXPORT_OK = qw( micro_t percentage_of secs_to_time time_to_secs shorten ts parse_timestamp unix_timestamp any_unix_timestamp make_checksum crc32 encode_json ); } our $mysql_ts = qr/(\d\d)(\d\d)(\d\d) +(\d+):(\d+):(\d+)(\.\d+)?/; our $proper_ts = qr/(\d\d\d\d)-(\d\d)-(\d\d)[T ](\d\d):(\d\d):(\d\d)(\.\d+)?/; our $n_ts = qr/(\d{1,5})([shmd]?)/; # Limit \d{1,5} because \d{6} looks sub micro_t { my ( $t, %args ) = @_; my $p_ms = defined $args{p_ms} ? $args{p_ms} : 0; # precision for ms vals my $p_s = defined $args{p_s} ? $args{p_s} : 0; # precision for s vals my $f; $t = 0 if $t < 0; $t = sprintf('%.17f', $t) if $t =~ /e/; $t =~ s/\.(\d{1,6})\d*/\.$1/; if ($t > 0 && $t <= 0.000999) { $f = ($t * 1000000) . 'us'; } elsif ($t >= 0.001000 && $t <= 0.999999) { $f = sprintf("%.${p_ms}f", $t * 1000); $f = ($f * 1) . 'ms'; # * 1 to remove insignificant zeros } elsif ($t >= 1) { $f = sprintf("%.${p_s}f", $t); $f = ($f * 1) . 's'; # * 1 to remove insignificant zeros } else { $f = 0; # $t should = 0 at this point } return $f; } sub percentage_of { my ( $is, $of, %args ) = @_; my $p = $args{p} || 0; # float precision my $fmt = $p ? "%.${p}f" : "%d"; return sprintf $fmt, ($is * 100) / ($of ||= 1); } sub secs_to_time { my ( $secs, $fmt ) = @_; $secs ||= 0; return '00:00' unless $secs; $fmt ||= $secs >= 86_400 ? 'd' : $secs >= 3_600 ? 'h' : 'm'; return $fmt eq 'd' ? sprintf( "%d+%02d:%02d:%02d", int($secs / 86_400), int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : $fmt eq 'h' ? sprintf( "%02d:%02d:%02d", int(($secs % 86_400) / 3_600), int(($secs % 3_600) / 60), $secs % 60) : sprintf( "%02d:%02d", int(($secs % 3_600) / 60), $secs % 60); } sub time_to_secs { my ( $val, $default_suffix ) = @_; die "I need a val argument" unless defined $val; my $t = 0; my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; $suffix = $suffix || $default_suffix || 's'; if ( $suffix =~ m/[smhd]/ ) { $t = $suffix eq 's' ? $num * 1 # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $t *= -1 if $prefix && $prefix eq '-'; } else { die "Invalid suffix for $val: $suffix"; } return $t; } sub shorten { my ( $num, %args ) = @_; my $p = defined $args{p} ? $args{p} : 2; # float precision my $d = defined $args{d} ? $args{d} : 1_024; # divisor my $n = 0; my @units = ('', qw(k M G T P E Z Y)); while ( $num >= $d && $n < @units - 1 ) { $num /= $d; ++$n; } return sprintf( $num =~ m/\./ || $n ? '%1$.'.$p.'f%2$s' : '%1$d', $num, $units[$n]); } sub ts { my ( $time, $gmt ) = @_; my ( $sec, $min, $hour, $mday, $mon, $year ) = $gmt ? gmtime($time) : localtime($time); $mon += 1; $year += 1900; my $val = sprintf("%d-%02d-%02dT%02d:%02d:%02d", $year, $mon, $mday, $hour, $min, $sec); if ( my ($us) = $time =~ m/(\.\d+)$/ ) { $us = sprintf("%.6f", $us); $us =~ s/^0\././; $val .= $us; } return $val; } sub parse_timestamp { my ( $val ) = @_; if ( my($y, $m, $d, $h, $i, $s, $f) = $val =~ m/^$mysql_ts$/ ) { return sprintf "%d-%02d-%02d %02d:%02d:" . (defined $f ? '%09.6f' : '%02d'), $y + 2000, $m, $d, $h, $i, (defined $f ? 
$s + $f : $s); } elsif ( $val =~ m/^$proper_ts$/ ) { return $val; } return $val; } sub unix_timestamp { my ( $val, $gmt ) = @_; if ( my($y, $m, $d, $h, $i, $s, $us) = $val =~ m/^$proper_ts$/ ) { $val = $gmt ? timegm($s, $i, $h, $d, $m - 1, $y) : timelocal($s, $i, $h, $d, $m - 1, $y); if ( defined $us ) { $us = sprintf('%.6f', $us); $us =~ s/^0\././; $val .= $us; } } return $val; } sub any_unix_timestamp { my ( $val, $callback ) = @_; if ( my ($n, $suffix) = $val =~ m/^$n_ts$/ ) { $n = $suffix eq 's' ? $n # Seconds : $suffix eq 'm' ? $n * 60 # Minutes : $suffix eq 'h' ? $n * 3600 # Hours : $suffix eq 'd' ? $n * 86400 # Days : $n; # default: Seconds PTDEBUG && _d('ts is now - N[shmd]:', $n); return time - $n; } elsif ( $val =~ m/^\d{9,}/ ) { PTDEBUG && _d('ts is already a unix timestamp'); return $val; } elsif ( my ($ymd, $hms) = $val =~ m/^(\d{6})(?:\s+(\d+:\d+:\d+))?/ ) { PTDEBUG && _d('ts is MySQL slow log timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp(parse_timestamp($val)); } elsif ( ($ymd, $hms) = $val =~ m/^(\d{4}-\d\d-\d\d)(?:[T ](\d+:\d+:\d+))?/) { PTDEBUG && _d('ts is properly formatted timestamp'); $val .= ' 00:00:00' unless $hms; return unix_timestamp($val); } else { PTDEBUG && _d('ts is MySQL expression'); return $callback->($val) if $callback && ref $callback eq 'CODE'; } PTDEBUG && _d('Unknown ts type:', $val); return; } sub make_checksum { my ( $val ) = @_; my $checksum = uc md5_hex($val); PTDEBUG && _d($checksum, 'checksum for', $val); return $checksum; } sub crc32 { my ( $string ) = @_; return unless $string; my $poly = 0xEDB88320; my $crc = 0xFFFFFFFF; foreach my $char ( split(//, $string) ) { my $comp = ($crc ^ ord($char)) & 0xFF; for ( 1 .. 8 ) { $comp = $comp & 1 ? $poly ^ ($comp >> 1) : $comp >> 1; } $crc = (($crc >> 8) & 0x00FFFFFF) ^ $comp; } return $crc ^ 0xFFFFFFFF; } my $got_json = eval { require JSON }; sub encode_json { return JSON::encode_json(@_) if $got_json; my ( $data ) = @_; return (object_to_json($data) || ''); } sub object_to_json { my ($obj) = @_; my $type = ref($obj); if($type eq 'HASH'){ return hash_to_json($obj); } elsif($type eq 'ARRAY'){ return array_to_json($obj); } else { return value_to_json($obj); } } sub hash_to_json { my ($obj) = @_; my @res; for my $k ( sort { $a cmp $b } keys %$obj ) { push @res, string_to_json( $k ) . ":" . ( object_to_json( $obj->{$k} ) || value_to_json( $obj->{$k} ) ); } return '{' . ( @res ? join( ",", @res ) : '' ) . '}'; } sub array_to_json { my ($obj) = @_; my @res; for my $v (@$obj) { push @res, object_to_json($v) || value_to_json($v); } return '[' . ( @res ? join( ",", @res ) : '' ) . ']'; } sub value_to_json { my ($value) = @_; return 'null' if(!defined $value); my $b_obj = B::svref_2object(\$value); # for round trip problem my $flags = $b_obj->FLAGS; return $value # as is if $flags & ( B::SVp_IOK | B::SVp_NOK ) and !( $flags & B::SVp_POK ); # SvTYPE is IV or NV? my $type = ref($value); if( !$type ) { return string_to_json($value); } else { return 'null'; } } my %esc = ( "\n" => '\n', "\r" => '\r', "\t" => '\t', "\f" => '\f', "\b" => '\b', "\"" => '\"', "\\" => '\\\\', "\'" => '\\\'', ); sub string_to_json { my ($arg) = @_; $arg =~ s/([\x22\x5c\n\r\t\f\b])/$esc{$1}/g; $arg =~ s/\//\\\//g; $arg =~ s/([\x00-\x08\x0b\x0e-\x1f])/'\\u00' . unpack('H2', $1)/eg; utf8::upgrade($arg); utf8::encode($arg); return '"' . $arg . '"'; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Transformers package # ########################################################################### # ########################################################################### # QueryRewriter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryRewriter.pm # t/lib/QueryRewriter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package QueryRewriter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; our $verbs = qr{^SHOW|^FLUSH|^COMMIT|^ROLLBACK|^BEGIN|SELECT|INSERT |UPDATE|DELETE|REPLACE|^SET|UNION|^START|^LOCK}xi;
my $quote_re = qr/"(?:(?!(?<!\\)").)*"|'(?:(?!(?<!\\)').)*'/; # Covers quoted strings
my $bal;
$bal         = qr/
                  \(
                  (?:
                     (?> [^()]+ )    # Non-parens without backtracking
                     |
                     (??{ $bal })    # Group with matching parens
                  )*
                  \)
                 /x;
my $olc_re = qr/(?:--|#)[^'"\r\n]*(?=[\r\n]|\Z)/; # One-line comments
my $mlc_re = qr#/\*[^!].*?\*/#sm; # But not /*!version */
my $vlc_re = qr#/\*.*?[0-9]+.*?\*/#sm; # For SHOW + /*!version */
my $vlc_rf = qr#^(?:SHOW).*?/\*![0-9]+(.*?)\*/#sm; # Variation for SHOW
sub new { my ( $class, %args ) = @_; my $self = { %args }; return bless $self, $class; } sub strip_comments { my ( $self, $query ) = @_; return unless $query; $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; if ( $query =~ m/$vlc_rf/i ) { # contains show + version my $qualifier = $1 || ''; $query =~ s/$vlc_re/$qualifier/go; } return $query; } sub shorten { my ( $self, $query, $length ) = @_; $query =~ s{ \A( (?:INSERT|REPLACE) (?:\s+LOW_PRIORITY|DELAYED|HIGH_PRIORITY|IGNORE)? (?:\s\w+)*\s+\S+\s+VALUES\s*\(.*?\) ) \s*,\s*\(.*?(ON\s+DUPLICATE|\Z)} {$1 /*... omitted ...*/$2}xsi; return $query unless $query =~ m/IN\s*\(\s*(?!select)/i; my $last_length = 0; my $query_length = length($query); while ( $length > 0 && $query_length > $length && $query_length < ( $last_length || $query_length + 1 ) ) { $last_length = $query_length; $query =~ s{ (\bIN\s*\() # The opening of an IN list ([^\)]+) # Contents of the list, assuming no item contains paren (?=\)) # Close of the list } { $1 . __shorten($2) }gexsi; } return $query; } sub __shorten { my ( $snippet ) = @_; my @vals = split(/,/, $snippet); return $snippet unless @vals > 20; my @keep = splice(@vals, 0, 20); # Remove and save the first 20 items return join(',', @keep) . "/*... omitted " . scalar(@vals) . " items ...*/"; } sub fingerprint { my ( $self, $query ) = @_; $query =~ m#\ASELECT /\*!40001 SQL_NO_CACHE \*/ \* FROM `# # mysqldump query && return 'mysqldump'; $query =~ m#/\*\w+\.\w+:[0-9]/[0-9]\*/# # pt-table-checksum, etc query && return 'percona-toolkit'; $query =~ m/\Aadministrator command: / && return $query; $query =~ m/\A\s*(call\s+\S+)\(/i && return lc($1); # Warning! $1 used, be careful. 
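# The steps below turn the query into an abstract fingerprint: multi-value
# INSERT/REPLACE statements are cut down to their first row, comments are
# stripped, quoted strings and numbers become "?", IN() and VALUES() lists
# collapse to "(?+)", repeated UNION branches collapse, whitespace is
# squashed and the text is lowercased, so structurally identical queries
# end up with the same fingerprint.  For example, "SELECT * FROM t WHERE
# id=5 OR id=6" becomes "select * from t where id=? or id=?".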
if ( my ($beginning) = $query =~ m/\A((?:INSERT|REPLACE)(?: IGNORE)?\s+INTO.+?VALUES\s*\(.*?\))\s*,\s*\(/is ) { $query = $beginning; # Shorten multi-value INSERT statements ASAP } $query =~ s/$mlc_re//go; $query =~ s/$olc_re//go; $query =~ s/\Ause \S+\Z/use ?/i # Abstract the DB in USE && return $query; $query =~ s/\\["']//g; # quoted strings $query =~ s/".*?"/?/sg; # quoted strings $query =~ s/'.*?'/?/sg; # quoted strings $query =~ s/\bfalse\b|\btrue\b/?/isg; # boolean values if ( $self->{match_md5_checksums} ) { $query =~ s/([._-])[a-f0-9]{32}/$1?/g; } if ( !$self->{match_embedded_numbers} ) { $query =~ s/[0-9+-][0-9a-f.xb+-]*/?/g; } else { $query =~ s/\b[0-9+-][0-9a-f.xb+-]*/?/g; } if ( $self->{match_md5_checksums} ) { $query =~ s/[xb+-]\?/?/g; } else { $query =~ s/[xb.+-]\?/?/g; } $query =~ s/\A\s+//; # Chop off leading whitespace chomp $query; # Kill trailing whitespace $query =~ tr[ \n\t\r\f][ ]s; # Collapse whitespace $query = lc $query; $query =~ s/\bnull\b/?/g; # Get rid of NULLs $query =~ s{ # Collapse IN and VALUES lists \b(in|values?)(?:[\s,]*\([\s?,]*\))+ } {$1(?+)}gx; $query =~ s{ # Collapse UNION \b(select\s.*?)(?:(\sunion(?:\sall)?)\s\1)+ } {$1 /*repeat$2*/}xg; $query =~ s/\blimit \?(?:, ?\?| offset \?)?/limit ?/; # LIMIT if ( $query =~ m/\bORDER BY /gi ) { # Find, anchor on ORDER BY clause 1 while $query =~ s/\G(.+?)\s+ASC/$1/gi && pos $query; } return $query; } sub distill_verbs { my ( $self, $query ) = @_; $query =~ m/\A\s*call\s+(\S+)\(/i && return "CALL $1"; $query =~ m/\A\s*use\s+/ && return "USE"; $query =~ m/\A\s*UNLOCK TABLES/i && return "UNLOCK"; $query =~ m/\A\s*xa\s+(\S+)/i && return "XA_$1"; if ( $query =~ m/\A\s*LOAD/i ) { my ($tbl) = $query =~ m/INTO TABLE\s+(\S+)/i; $tbl ||= ''; $tbl =~ s/`//g; return "LOAD DATA $tbl"; } if ( $query =~ m/\Aadministrator command:/ ) { $query =~ s/administrator command:/ADMIN/; $query = uc $query; return $query; } $query = $self->strip_comments($query); if ( $query =~ m/\A\s*SHOW\s+/i ) { PTDEBUG && _d($query); $query = uc $query; $query =~ s/\s+(?:SESSION|FULL|STORAGE|ENGINE)\b/ /g; $query =~ s/\s+COUNT[^)]+\)//g; $query =~ s/\s+(?:FOR|FROM|LIKE|WHERE|LIMIT|IN)\b.+//ms; $query =~ s/\A(SHOW(?:\s+\S+){1,2}).*\Z/$1/s; $query =~ s/\s+/ /g; PTDEBUG && _d($query); return $query; } eval $QueryParser::data_def_stmts; eval $QueryParser::tbl_ident; my ( $dds ) = $query =~ /^\s*($QueryParser::data_def_stmts)\b/i; if ( $dds) { $query =~ s/\s+IF(?:\s+NOT)?\s+EXISTS/ /i; my ( $obj ) = $query =~ m/$dds.+(DATABASE|TABLE)\b/i; $obj = uc $obj if $obj; PTDEBUG && _d('Data def statment:', $dds, 'obj:', $obj); my ($db_or_tbl) = $query =~ m/(?:TABLE|DATABASE)\s+($QueryParser::tbl_ident)(\s+.*)?/i; PTDEBUG && _d('Matches db or table:', $db_or_tbl); return uc($dds . ($obj ? " $obj" : '')), $db_or_tbl; } my @verbs = $query =~ m/\b($verbs)\b/gio; @verbs = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } map { uc } @verbs; }; if ( ($verbs[0] || '') eq 'SELECT' && @verbs > 1 ) { PTDEBUG && _d("False-positive verbs after SELECT:", @verbs[1..$#verbs]); my $union = grep { $_ eq 'UNION' } @verbs; @verbs = $union ? 
qw(SELECT UNION) : qw(SELECT); } my $verb_str = join(q{ }, @verbs); return $verb_str; } sub __distill_tables { my ( $self, $query, $table, %args ) = @_; my $qp = $args{QueryParser} || $self->{QueryParser}; die "I need a QueryParser argument" unless $qp; my @tables = map { $_ =~ s/`//g; $_ =~ s/(_?)[0-9]+/$1?/g; $_; } grep { defined $_ } $qp->get_tables($query); push @tables, $table if $table; @tables = do { my $last = ''; grep { my $pass = $_ ne $last; $last = $_; $pass } @tables; }; return @tables; } sub distill { my ( $self, $query, %args ) = @_; if ( $args{generic} ) { my ($cmd, $arg) = $query =~ m/^(\S+)\s+(\S+)/; return '' unless $cmd; $query = (uc $cmd) . ($arg ? " $arg" : ''); } else { my ($verbs, $table) = $self->distill_verbs($query, %args); if ( $verbs && $verbs =~ m/^SHOW/ ) { my %alias_for = qw( SCHEMA DATABASE KEYS INDEX INDEXES INDEX ); map { $verbs =~ s/$_/$alias_for{$_}/ } keys %alias_for; $query = $verbs; } elsif ( $verbs && $verbs =~ m/^LOAD DATA/ ) { return $verbs; } else { my @tables = $self->__distill_tables($query, $table, %args); $query = join(q{ }, $verbs, @tables); } } if ( $args{trf} ) { $query = $args{trf}->($query, %args); } return $query; } sub convert_to_select { my ( $self, $query ) = @_; return unless $query; return if $query =~ m/=\s*\(\s*SELECT /i; $query =~ s{ \A.*? update(?:\s+(?:low_priority|ignore))?\s+(.*?) \s+set\b(.*?) (?:\s*where\b(.*?))? (limit\s*[0-9]+(?:\s*,\s*[0-9]+)?)? \Z } {__update_to_select($1, $2, $3, $4)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ .*?\binto\b(.*?)\(([^\)]+)\)\s* values?\s*(\(.*?\))\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select($1, $2, $3)}exsi || $query =~ s{ \A.*? (?:insert(?:\s+ignore)?|replace)\s+ (?:.*?\binto)\b(.*?)\s* set\s+(.*?)\s* (?:\blimit\b|on\s+duplicate\s+key.*)?\s* \Z } {__insert_to_select_with_set($1, $2)}exsi || $query =~ s{ \A.*? delete\s+(.*?) \bfrom\b(.*) \Z } {__delete_to_select($1, $2)}exsi; $query =~ s/\s*on\s+duplicate\s+key\s+update.*\Z//si; $query =~ s/\A.*?(?=\bSELECT\s*\b)//ism; return $query; } sub convert_select_list { my ( $self, $query ) = @_; $query =~ s{ \A\s*select(.*?)\bfrom\b } {$1 =~ m/\*/ ? "select 1 from" : "select isnull(coalesce($1)) from"}exi; return $query; } sub __delete_to_select { my ( $delete, $join ) = @_; if ( $join =~ m/\bjoin\b/ ) { return "select 1 from $join"; } return "select * from $join"; } sub __insert_to_select { my ( $tbl, $cols, $vals ) = @_; PTDEBUG && _d('Args:', @_); my @cols = split(/,/, $cols); PTDEBUG && _d('Cols:', @cols); $vals =~ s/^\(|\)$//g; # Strip leading/trailing parens my @vals = $vals =~ m/($quote_re|[^,]*${bal}[^,]*|[^,]+)/g; PTDEBUG && _d('Vals:', @vals); if ( @cols == @vals ) { return "select * from $tbl where " . join(' and ', map { "$cols[$_]=$vals[$_]" } (0..$#cols)); } else { return "select * from $tbl limit 1"; } } sub __insert_to_select_with_set { my ( $from, $set ) = @_; $set =~ s/,/ and /g; return "select * from $from where $set "; } sub __update_to_select { my ( $from, $set, $where, $limit ) = @_; return "select $set from $from " . ( $where ? "where $where" : '' ) . ( $limit ? " $limit " : '' ); } sub wrap_in_derived { my ( $self, $query ) = @_; return unless $query; return $query =~ m/\A\s*select/i ? "select 1 from ($query) as x limit 1" : $query; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryRewriter package # ########################################################################### # ########################################################################### # Processlist package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Processlist.pm # t/lib/Processlist.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Processlist; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Time::HiRes qw(time usleep); use List::Util qw(max); use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; use constant PTDEBUG => $ENV{PTDEBUG} || 0; use constant { ID => 0, USER => 1, HOST => 2, DB => 3, COMMAND => 4, TIME => 5, STATE => 6, INFO => 7, START => 8, # Calculated start time of statement ($start - TIME) ETIME => 9, # Exec time of SHOW PROCESSLIST (margin of error in START) FSEEN => 10, # First time ever seen PROFILE => 11, # Profile of individual STATE times }; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(MasterSlave) ) { die "I need a $arg argument" unless $args{$arg}; } my $kill_busy_commands = {}; if ($args{kill_busy_commands}) { for my $command (split /,/,$args{kill_busy_commands}) { $command =~ s/^\s+|\s+$//g; $kill_busy_commands->{$command} = 1; } } else { $kill_busy_commands->{Query} = 1; } $args{kill_busy_commands} = $kill_busy_commands; my $self = { %args, polls => 0, last_poll => 0, active_cxn => {}, # keyed off ID event_cache => [], _reasons_for_matching => {}, }; return bless $self, $class; } sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(code); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($code) = @args{@required_args}; if ( @{$self->{event_cache}} ) { PTDEBUG && _d("Returning cached event"); return shift @{$self->{event_cache}}; } if ( $self->{interval} && $self->{polls} ) { PTDEBUG && _d("Sleeping between polls"); usleep($self->{interval}); } PTDEBUG && _d("Polling PROCESSLIST"); my ($time, $etime) = @args{qw(time etime)}; my $start = $etime ? 0 : time; # don't need start if etime given my $rows = $code->(); if ( !$rows ) { warn "Processlist callback did not return an arrayref"; return; } $time = time unless $time; $etime = $time - $start unless $etime; $self->{polls}++; PTDEBUG && _d('Rows:', ($rows ? scalar @$rows : 0), 'in', $etime, 'seconds'); my $active_cxn = $self->{active_cxn}; my $curr_cxn = {}; my @new_cxn = (); CURRENTLY_ACTIVE_CXN: foreach my $curr ( @$rows ) { $curr_cxn->{$curr->[ID]} = $curr; my $query_start = $time - ($curr->[TIME] || 0); if ( $active_cxn->{$curr->[ID]} ) { PTDEBUG && _d('Checking existing cxn', $curr->[ID]); my $prev = $active_cxn->{$curr->[ID]}; # previous state of cxn my $new_query = 0; my $fudge = ($curr->[TIME] || 0) =~ m/\D/ ? 0.001 : 1; # micro-t? 
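# This connection is already being tracked, so decide whether the row shows
# the same statement still executing or a brand-new statement.  $fudge
# absorbs the imprecision of the TIME column (whole seconds normally,
# fractional on servers that report sub-second times); the checks below
# compare INFO, TIME and the recalculated start time against the previous poll.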
if ( $prev->[INFO] ) { if ( !$curr->[INFO] || $prev->[INFO] ne $curr->[INFO] ) { PTDEBUG && _d('Info is different; new query'); $new_query = 1; } elsif ( defined $curr->[TIME] && $curr->[TIME] < $prev->[TIME] ) { PTDEBUG && _d('Time is less than previous; new query'); $new_query = 1; } elsif ( $curr->[INFO] && defined $curr->[TIME] && $query_start - $etime - $prev->[START] > $fudge) { my $ms = $self->{MasterSlave}; my $is_repl_thread = $ms->is_replication_thread({ Command => $curr->[COMMAND], User => $curr->[USER], State => $curr->[STATE], Id => $curr->[ID]}); if ( $is_repl_thread ) { PTDEBUG && _d(q{Query has restarted but it's a replication thread, ignoring}); } else { PTDEBUG && _d('Query restarted; new query', $query_start, $etime, $prev->[START], $fudge); $new_query = 1; } } if ( $new_query ) { $self->_update_profile($prev, $curr, $time); push @{$self->{event_cache}}, $self->make_event($prev, $time); } } if ( $curr->[INFO] ) { if ( $prev->[INFO] && !$new_query ) { PTDEBUG && _d("Query on cxn", $curr->[ID], "hasn't changed"); $self->_update_profile($prev, $curr, $time); } else { PTDEBUG && _d('Saving new query, state', $curr->[STATE]); push @new_cxn, [ @{$curr}[0..7], # proc info int($query_start), # START $etime, # ETIME $time, # FSEEN { ($curr->[STATE] || "") => 0 }, # PROFILE ]; } } } else { PTDEBUG && _d('New cxn', $curr->[ID]); if ( $curr->[INFO] && defined $curr->[TIME] ) { PTDEBUG && _d('Saving query of new cxn, state', $curr->[STATE]); push @new_cxn, [ @{$curr}[0..7], # proc info int($query_start), # START $etime, # ETIME $time, # FSEEN { ($curr->[STATE] || "") => 0 }, # PROFILE ]; } } } # CURRENTLY_ACTIVE_CXN PREVIOUSLY_ACTIVE_CXN: foreach my $prev ( values %$active_cxn ) { if ( !$curr_cxn->{$prev->[ID]} ) { PTDEBUG && _d('cxn', $prev->[ID], 'ended'); push @{$self->{event_cache}}, $self->make_event($prev, $time); delete $active_cxn->{$prev->[ID]}; } elsif ( ($curr_cxn->{$prev->[ID]}->[COMMAND] || "") eq 'Sleep' || !$curr_cxn->{$prev->[ID]}->[STATE] || !$curr_cxn->{$prev->[ID]}->[INFO] ) { PTDEBUG && _d('cxn', $prev->[ID], 'became idle'); delete $active_cxn->{$prev->[ID]}; } } map { $active_cxn->{$_->[ID]} = $_; } @new_cxn; $self->{last_poll} = $time; my $event = shift @{$self->{event_cache}}; PTDEBUG && _d(scalar @{$self->{event_cache}}, "events in cache"); return $event; } sub make_event { my ( $self, $row, $time ) = @_; my $observed_time = $time - $row->[FSEEN]; my $Query_time = max($row->[TIME], $observed_time); my $event = { id => $row->[ID], db => $row->[DB], user => $row->[USER], host => $row->[HOST], arg => $row->[INFO], bytes => length($row->[INFO]), ts => Transformers::ts($row->[START] + $row->[TIME]), # Query END time Query_time => $Query_time, Lock_time => $row->[PROFILE]->{Locked} || 0, }; PTDEBUG && _d('Properties of event:', Dumper($event)); return $event; } sub _get_active_cxn { my ( $self ) = @_; PTDEBUG && _d("Active cxn:", Dumper($self->{active_cxn})); return $self->{active_cxn}; } sub _update_profile { my ( $self, $prev, $curr, $time ) = @_; return unless $prev && $curr; my $time_elapsed = $time - $self->{last_poll}; if ( ($prev->[STATE] || "") eq ($curr->[STATE] || "") ) { PTDEBUG && _d("Query is still in", $curr->[STATE], "state"); $prev->[PROFILE]->{$prev->[STATE] || ""} += $time_elapsed; } else { PTDEBUG && _d("Query changed from state", $prev->[STATE], "to", $curr->[STATE]); my $half_time = ($time_elapsed || 0) / 2; $prev->[PROFILE]->{$prev->[STATE] || ""} += $half_time; $prev->[STATE] = $curr->[STATE]; $prev->[PROFILE]->{$curr->[STATE] || ""} = 
$half_time; } return; } sub find { my ( $self, $proclist, %find_spec ) = @_; PTDEBUG && _d('find specs:', Dumper(\%find_spec)); my $ms = $self->{MasterSlave}; my @matches; $self->{_reasons_for_matching} = undef; QUERY: foreach my $query ( @$proclist ) { PTDEBUG && _d('Checking query', Dumper($query)); my $matched = 0; if ( !$find_spec{replication_threads} && $ms->is_replication_thread($query) ) { PTDEBUG && _d('Skipping replication thread'); next QUERY; } if ( $find_spec{busy_time} && exists($self->{kill_busy_commands}->{$query->{Command} || ''}) ) { next QUERY unless defined($query->{Time}); if ( $query->{Time} < $find_spec{busy_time} ) { PTDEBUG && _d("Query isn't running long enough"); next QUERY; } my $reason = 'Exceeds busy time'; PTDEBUG && _d($reason); push @{$self->{_reasons_for_matching}->{$query} ||= []}, $reason; $matched++; } if ( $find_spec{idle_time} && ($query->{Command} || '') eq 'Sleep' ) { next QUERY unless defined($query->{Time}); if ( $query->{Time} < $find_spec{idle_time} ) { PTDEBUG && _d("Query isn't idle long enough"); next QUERY; } my $reason = 'Exceeds idle time'; PTDEBUG && _d($reason); push @{$self->{_reasons_for_matching}->{$query} ||= []}, $reason; $matched++; } PROPERTY: foreach my $property ( qw(Id User Host db State Command Info) ) { my $filter = "_find_match_$property"; if ( defined $find_spec{ignore}->{$property} && $self->$filter($query, $find_spec{ignore}->{$property}) ) { PTDEBUG && _d('Query matches ignore', $property, 'spec'); next QUERY; } if ( defined $find_spec{match}->{$property} ) { if ( !$self->$filter($query, $find_spec{match}->{$property}) ) { PTDEBUG && _d('Query does not match', $property, 'spec'); next QUERY; } my $reason = 'Query matches ' . $property . ' spec'; PTDEBUG && _d($reason); push @{$self->{_reasons_for_matching}->{$query} ||= []}, $reason; $matched++; } } if ( $matched || $find_spec{all} ) { PTDEBUG && _d("Query matched one or more specs, adding"); push @matches, $query; next QUERY; } PTDEBUG && _d('Query does not match any specs, ignoring'); } # QUERY return @matches; } sub _find_match_Id { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Id} && $query->{Id} == $property; } sub _find_match_User { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{User} && $query->{User} =~ m/$property/; } sub _find_match_Host { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Host} && $query->{Host} =~ m/$property/; } sub _find_match_db { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{db} && $query->{db} =~ m/$property/; } sub _find_match_State { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{State} && $query->{State} =~ m/$property/; } sub _find_match_Command { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Command} && $query->{Command} =~ m/$property/; } sub _find_match_Info { my ( $self, $query, $property ) = @_; return defined $property && defined $query->{Info} && $query->{Info} =~ m/$property/; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Processlist package # ########################################################################### # ########################################################################### # TcpdumpParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TcpdumpParser.pm # t/lib/TcpdumpParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package TcpdumpParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; my $self = {}; return bless $self, $class; } sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(next_event tell); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($next_event, $tell) = @args{@required_args}; local $INPUT_RECORD_SEPARATOR = "\n20"; my $pos_in_log = $tell->(); while ( defined(my $raw_packet = $next_event->()) ) { next if $raw_packet =~ m/^$/; # issue 564 $pos_in_log -= 1 if $pos_in_log; $raw_packet =~ s/\n20\Z//; $raw_packet = "20$raw_packet" unless $raw_packet =~ m/\A20/; $raw_packet =~ s/0x0000:.+?(450.) /0x0000: $1 /; my $packet = $self->_parse_packet($raw_packet); $packet->{pos_in_log} = $pos_in_log; $packet->{raw_packet} = $raw_packet; $args{stats}->{events_read}++ if $args{stats}; return $packet; } $args{oktorun}->(0) if $args{oktorun}; return; } sub _parse_packet { my ( $self, $packet ) = @_; die "I need a packet" unless $packet; my ( $ts, $source, $dest ) = $packet =~ m/\A(\S+ \S+).*? IP .*?(\S+) > (\S+):/; my ( $src_host, $src_port ) = $source =~ m/((?:\d+\.){3}\d+)\.(\w+)/; my ( $dst_host, $dst_port ) = $dest =~ m/((?:\d+\.){3}\d+)\.(\w+)/; $src_port = $self->port_number($src_port); $dst_port = $self->port_number($dst_port); my $hex = qr/[0-9a-f]/; (my $data = join('', $packet =~ m/\s+0x$hex+:\s((?:\s$hex{2,4})+)/go)) =~ s/\s+//g; my $ip_hlen = hex(substr($data, 1, 1)); # Num of 32-bit words in header. my $ip_plen = hex(substr($data, 4, 4)); # Num of BYTES in IPv4 datagram. my $complete = length($data) == 2 * $ip_plen ? 1 : 0; my $tcp_hlen = hex(substr($data, ($ip_hlen + 3) * 8, 1)); my $seq = hex(substr($data, ($ip_hlen + 1) * 8, 8)); my $ack = hex(substr($data, ($ip_hlen + 2) * 8, 8)); my $flags = hex(substr($data, (($ip_hlen + 3) * 8) + 2, 2)); $data = substr($data, ($ip_hlen + $tcp_hlen) * 8); my $pkt = { ts => $ts, seq => $seq, ack => $ack, fin => $flags & 0x01, syn => $flags & 0x02, rst => $flags & 0x04, src_host => $src_host, src_port => $src_port, dst_host => $dst_host, dst_port => $dst_port, complete => $complete, ip_hlen => $ip_hlen, tcp_hlen => $tcp_hlen, dgram_len => $ip_plen, data_len => $ip_plen - (($ip_hlen + $tcp_hlen) * 4), data => $data ? substr($data, 0, 10).(length $data > 10 ? '...' : '') : '', }; PTDEBUG && _d('packet:', Dumper($pkt)); $pkt->{data} = $data; return $pkt; } sub port_number { my ( $self, $port ) = @_; return unless $port; return $port eq 'mysql' ? 3306 : $port; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TcpdumpParser package # ########################################################################### # ########################################################################### # MySQLProtocolParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MySQLProtocolParser.pm # t/lib/MySQLProtocolParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package MySQLProtocolParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; eval { require IO::Uncompress::Inflate; # yum: perl-IO-Compress-Zlib IO::Uncompress::Inflate->import(qw(inflate $InflateError)); }; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; BEGIN { our @ISA = 'ProtocolParser'; } use constant { COM_SLEEP => '00', COM_QUIT => '01', COM_INIT_DB => '02', COM_QUERY => '03', COM_FIELD_LIST => '04', COM_CREATE_DB => '05', COM_DROP_DB => '06', COM_REFRESH => '07', COM_SHUTDOWN => '08', COM_STATISTICS => '09', COM_PROCESS_INFO => '0a', COM_CONNECT => '0b', COM_PROCESS_KILL => '0c', COM_DEBUG => '0d', COM_PING => '0e', COM_TIME => '0f', COM_DELAYED_INSERT => '10', COM_CHANGE_USER => '11', COM_BINLOG_DUMP => '12', COM_TABLE_DUMP => '13', COM_CONNECT_OUT => '14', COM_REGISTER_SLAVE => '15', COM_STMT_PREPARE => '16', COM_STMT_EXECUTE => '17', COM_STMT_SEND_LONG_DATA => '18', COM_STMT_CLOSE => '19', COM_STMT_RESET => '1a', COM_SET_OPTION => '1b', COM_STMT_FETCH => '1c', SERVER_QUERY_NO_GOOD_INDEX_USED => 16, SERVER_QUERY_NO_INDEX_USED => 32, }; my %com_for = ( '00' => 'COM_SLEEP', '01' => 'COM_QUIT', '02' => 'COM_INIT_DB', '03' => 'COM_QUERY', '04' => 'COM_FIELD_LIST', '05' => 'COM_CREATE_DB', '06' => 'COM_DROP_DB', '07' => 'COM_REFRESH', '08' => 'COM_SHUTDOWN', '09' => 'COM_STATISTICS', '0a' => 'COM_PROCESS_INFO', '0b' => 'COM_CONNECT', '0c' => 'COM_PROCESS_KILL', '0d' => 'COM_DEBUG', '0e' => 'COM_PING', '0f' => 'COM_TIME', '10' => 'COM_DELAYED_INSERT', '11' => 'COM_CHANGE_USER', '12' => 'COM_BINLOG_DUMP', '13' => 'COM_TABLE_DUMP', '14' => 'COM_CONNECT_OUT', '15' => 'COM_REGISTER_SLAVE', '16' => 'COM_STMT_PREPARE', '17' => 'COM_STMT_EXECUTE', '18' => 'COM_STMT_SEND_LONG_DATA', '19' => 'COM_STMT_CLOSE', '1a' => 'COM_STMT_RESET', '1b' => 'COM_SET_OPTION', '1c' => 'COM_STMT_FETCH', ); my %flag_for = ( 'CLIENT_LONG_PASSWORD' => 1, # new more secure passwords 'CLIENT_FOUND_ROWS' => 2, # Found instead of affected rows 'CLIENT_LONG_FLAG' => 4, # Get all column flags 'CLIENT_CONNECT_WITH_DB' => 8, # One can specify db on connect 'CLIENT_NO_SCHEMA' => 16, # Don't allow database.table.column 'CLIENT_COMPRESS' => 32, # Can use compression protocol 'CLIENT_ODBC' => 64, # Odbc client 'CLIENT_LOCAL_FILES' => 128, # Can use LOAD DATA LOCAL 'CLIENT_IGNORE_SPACE' => 256, # Ignore spaces before '(' 'CLIENT_PROTOCOL_41' => 512, # New 4.1 protocol 'CLIENT_INTERACTIVE' => 1024, # This is an interactive client 'CLIENT_SSL' => 2048, # Switch to SSL after handshake 'CLIENT_IGNORE_SIGPIPE' => 4096, # IGNORE sigpipes 'CLIENT_TRANSACTIONS' => 8192, # Client knows about transactions 'CLIENT_RESERVED' => 16384, # Old flag for 4.1 protocol 'CLIENT_SECURE_CONNECTION' => 
32768, # New 4.1 authentication 'CLIENT_MULTI_STATEMENTS' => 65536, # Enable/disable multi-stmt support 'CLIENT_MULTI_RESULTS' => 131072, # Enable/disable multi-results ); use constant { MYSQL_TYPE_DECIMAL => 0, MYSQL_TYPE_TINY => 1, MYSQL_TYPE_SHORT => 2, MYSQL_TYPE_LONG => 3, MYSQL_TYPE_FLOAT => 4, MYSQL_TYPE_DOUBLE => 5, MYSQL_TYPE_NULL => 6, MYSQL_TYPE_TIMESTAMP => 7, MYSQL_TYPE_LONGLONG => 8, MYSQL_TYPE_INT24 => 9, MYSQL_TYPE_DATE => 10, MYSQL_TYPE_TIME => 11, MYSQL_TYPE_DATETIME => 12, MYSQL_TYPE_YEAR => 13, MYSQL_TYPE_NEWDATE => 14, MYSQL_TYPE_VARCHAR => 15, MYSQL_TYPE_BIT => 16, MYSQL_TYPE_NEWDECIMAL => 246, MYSQL_TYPE_ENUM => 247, MYSQL_TYPE_SET => 248, MYSQL_TYPE_TINY_BLOB => 249, MYSQL_TYPE_MEDIUM_BLOB => 250, MYSQL_TYPE_LONG_BLOB => 251, MYSQL_TYPE_BLOB => 252, MYSQL_TYPE_VAR_STRING => 253, MYSQL_TYPE_STRING => 254, MYSQL_TYPE_GEOMETRY => 255, }; my %type_for = ( 0 => 'MYSQL_TYPE_DECIMAL', 1 => 'MYSQL_TYPE_TINY', 2 => 'MYSQL_TYPE_SHORT', 3 => 'MYSQL_TYPE_LONG', 4 => 'MYSQL_TYPE_FLOAT', 5 => 'MYSQL_TYPE_DOUBLE', 6 => 'MYSQL_TYPE_NULL', 7 => 'MYSQL_TYPE_TIMESTAMP', 8 => 'MYSQL_TYPE_LONGLONG', 9 => 'MYSQL_TYPE_INT24', 10 => 'MYSQL_TYPE_DATE', 11 => 'MYSQL_TYPE_TIME', 12 => 'MYSQL_TYPE_DATETIME', 13 => 'MYSQL_TYPE_YEAR', 14 => 'MYSQL_TYPE_NEWDATE', 15 => 'MYSQL_TYPE_VARCHAR', 16 => 'MYSQL_TYPE_BIT', 246 => 'MYSQL_TYPE_NEWDECIMAL', 247 => 'MYSQL_TYPE_ENUM', 248 => 'MYSQL_TYPE_SET', 249 => 'MYSQL_TYPE_TINY_BLOB', 250 => 'MYSQL_TYPE_MEDIUM_BLOB', 251 => 'MYSQL_TYPE_LONG_BLOB', 252 => 'MYSQL_TYPE_BLOB', 253 => 'MYSQL_TYPE_VAR_STRING', 254 => 'MYSQL_TYPE_STRING', 255 => 'MYSQL_TYPE_GEOMETRY', ); my %unpack_type = ( MYSQL_TYPE_NULL => sub { return 'NULL', 0; }, MYSQL_TYPE_TINY => sub { return to_num(@_, 1), 1; }, MySQL_TYPE_SHORT => sub { return to_num(@_, 2), 2; }, MYSQL_TYPE_LONG => sub { return to_num(@_, 4), 4; }, MYSQL_TYPE_LONGLONG => sub { return to_num(@_, 8), 8; }, MYSQL_TYPE_DOUBLE => sub { return to_double(@_), 8; }, MYSQL_TYPE_VARCHAR => \&unpack_string, MYSQL_TYPE_VAR_STRING => \&unpack_string, MYSQL_TYPE_STRING => \&unpack_string, ); sub new { my ( $class, %args ) = @_; my $self = { server => $args{server}, port => $args{port} || '3306', version => '41', # MySQL proto version; not used yet sessions => {}, o => $args{o}, fake_thread_id => 2**32, # see _make_event() null_event => $args{null_event}, }; PTDEBUG && $self->{server} && _d('Watching only server', $self->{server}); return bless $self, $class; } sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(event); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $packet = @args{@required_args}; my $src_host = "$packet->{src_host}:$packet->{src_port}"; my $dst_host = "$packet->{dst_host}:$packet->{dst_port}"; if ( my $server = $self->{server} ) { # Watch only the given server. 
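# A specific server was given, so ignore any packet whose source and
# destination are both different from that host:port.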
$server .= ":$self->{port}"; if ( $src_host ne $server && $dst_host ne $server ) { PTDEBUG && _d('Packet is not to or from', $server); return $self->{null_event}; } } my $packet_from; my $client; if ( $src_host =~ m/:$self->{port}$/ ) { $packet_from = 'server'; $client = $dst_host; } elsif ( $dst_host =~ m/:$self->{port}$/ ) { $packet_from = 'client'; $client = $src_host; } else { PTDEBUG && _d('Packet is not to or from a MySQL server'); return $self->{null_event}; } PTDEBUG && _d('Client', $client); my $packetno = -1; if ( $packet->{data_len} >= 5 ) { $packetno = to_num(substr($packet->{data}, 6, 2)); } if ( !exists $self->{sessions}->{$client} ) { if ( $packet->{syn} ) { PTDEBUG && _d('New session (SYN)'); } elsif ( $packetno == 0 ) { PTDEBUG && _d('New session (packetno 0)'); } else { PTDEBUG && _d('Ignoring mid-stream', $packet_from, 'data,', 'packetno', $packetno); return $self->{null_event}; } $self->{sessions}->{$client} = { client => $client, ts => $packet->{ts}, state => undef, compress => undef, raw_packets => [], buff => '', sths => {}, attribs => {}, n_queries => 0, }; } my $session = $self->{sessions}->{$client}; PTDEBUG && _d('Client state:', $session->{state}); push @{$session->{raw_packets}}, $packet->{raw_packet}; if ( $packet->{syn} && ($session->{n_queries} > 0 || $session->{state}) ) { PTDEBUG && _d('Client port reuse and last session did not quit'); $self->fail_session($session, 'client port reuse and last session did not quit'); return $self->parse_event(%args); } if ( $packet->{data_len} == 0 ) { PTDEBUG && _d('TCP control:', map { uc $_ } grep { $packet->{$_} } qw(syn ack fin rst)); if ( $packet->{'fin'} && ($session->{state} || '') eq 'server_handshake' ) { PTDEBUG && _d('Client aborted connection'); my $event = { cmd => 'Admin', arg => 'administrator command: Connect', ts => $packet->{ts}, }; $session->{attribs}->{Error_msg} = 'Client closed connection during handshake'; $event = $self->_make_event($event, $packet, $session); delete $self->{sessions}->{$session->{client}}; return $event; } return $self->{null_event}; } if ( $session->{compress} ) { return unless $self->uncompress_packet($packet, $session); } if ( $session->{buff} && $packet_from eq 'client' ) { $session->{buff} .= $packet->{data}; $packet->{data} = $session->{buff}; $session->{buff_left} -= $packet->{data_len}; $packet->{mysql_data_len} = $session->{mysql_data_len}; $packet->{number} = $session->{number}; PTDEBUG && _d('Appending data to buff; expecting', $session->{buff_left}, 'more bytes'); } else { eval { remove_mysql_header($packet); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('remove_mysql_header() failed; failing session'); $session->{EVAL_ERROR} = $EVAL_ERROR; $self->fail_session($session, 'remove_mysql_header() failed'); return $self->{null_event}; } } my $event; if ( $packet_from eq 'server' ) { $event = $self->_packet_from_server($packet, $session, $args{misc}); } elsif ( $packet_from eq 'client' ) { if ( $session->{buff} ) { if ( $session->{buff_left} <= 0 ) { PTDEBUG && _d('Data is complete'); $self->_delete_buff($session); } else { return $self->{null_event}; # waiting for more data; buff_left was reported earlier } } elsif ( $packet->{mysql_data_len} > ($packet->{data_len} - 4) ) { if ( $session->{cmd} && ($session->{state} || '') eq 'awaiting_reply' ) { PTDEBUG && _d('No server OK to previous command (frag)'); $self->fail_session($session, 'no server OK to previous command'); $packet->{data} = $packet->{mysql_hdr} . 
$packet->{data}; return $self->parse_event(%args); } $session->{buff} = $packet->{data}; $session->{mysql_data_len} = $packet->{mysql_data_len}; $session->{number} = $packet->{number}; $session->{buff_left} ||= $packet->{mysql_data_len} - ($packet->{data_len} - 4); PTDEBUG && _d('Data not complete; expecting', $session->{buff_left}, 'more bytes'); return $self->{null_event}; } if ( $session->{cmd} && ($session->{state} || '') eq 'awaiting_reply' ) { PTDEBUG && _d('No server OK to previous command'); $self->fail_session($session, 'no server OK to previous command'); $packet->{data} = $packet->{mysql_hdr} . $packet->{data}; return $self->parse_event(%args); } $event = $self->_packet_from_client($packet, $session, $args{misc}); } else { die 'Packet origin unknown'; } PTDEBUG && _d('Done parsing packet; client state:', $session->{state}); if ( $session->{closed} ) { delete $self->{sessions}->{$session->{client}}; PTDEBUG && _d('Session deleted'); } $args{stats}->{events_parsed}++ if $args{stats}; return $event || $self->{null_event}; } sub _packet_from_server { my ( $self, $packet, $session, $misc ) = @_; die "I need a packet" unless $packet; die "I need a session" unless $session; PTDEBUG && _d('Packet is from server; client state:', $session->{state}); if ( ($session->{server_seq} || '') eq $packet->{seq} ) { push @{ $session->{server_retransmissions} }, $packet->{seq}; PTDEBUG && _d('TCP retransmission'); return; } $session->{server_seq} = $packet->{seq}; my $data = $packet->{data}; my ( $first_byte ) = substr($data, 0, 2, ''); PTDEBUG && _d('First byte of packet:', $first_byte); if ( !$first_byte ) { $self->fail_session($session, 'no first byte'); return; } if ( !$session->{state} ) { if ( $first_byte eq '0a' && length $data >= 33 && $data =~ m/00{13}/ ) { my $handshake = parse_server_handshake_packet($data); if ( !$handshake ) { $self->fail_session($session, 'failed to parse server handshake'); return; } $session->{state} = 'server_handshake'; $session->{thread_id} = $handshake->{thread_id}; $session->{ts} = $packet->{ts} unless $session->{ts}; } elsif ( $session->{buff} ) { $self->fail_session($session, 'got server response before full buffer'); return; } else { PTDEBUG && _d('Ignoring mid-stream server response'); return; } } else { if ( $first_byte eq '00' ) { if ( ($session->{state} || '') eq 'client_auth' ) { $session->{compress} = $session->{will_compress}; delete $session->{will_compress}; PTDEBUG && $session->{compress} && _d('Packets will be compressed'); PTDEBUG && _d('Admin command: Connect'); return $self->_make_event( { cmd => 'Admin', arg => 'administrator command: Connect', ts => $packet->{ts}, # Events are timestamped when they end }, $packet, $session ); } elsif ( $session->{cmd} ) { my $com = $session->{cmd}->{cmd}; my $ok; if ( $com eq COM_STMT_PREPARE ) { PTDEBUG && _d('OK for prepared statement'); $ok = parse_ok_prepared_statement_packet($data); if ( !$ok ) { $self->fail_session($session, 'failed to parse OK prepared statement packet'); return; } my $sth_id = $ok->{sth_id}; $session->{attribs}->{Statement_id} = $sth_id; $session->{sths}->{$sth_id} = $ok; $session->{sths}->{$sth_id}->{statement} = $session->{cmd}->{arg}; } else { $ok = parse_ok_packet($data); if ( !$ok ) { $self->fail_session($session, 'failed to parse OK packet'); return; } } my $arg; if ( $com eq COM_QUERY || $com eq COM_STMT_EXECUTE || $com eq COM_STMT_RESET ) { $com = 'Query'; $arg = $session->{cmd}->{arg}; } elsif ( $com eq COM_STMT_PREPARE ) { $com = 'Query'; $arg = "PREPARE 
$session->{cmd}->{arg}"; } else { $arg = 'administrator command: ' . ucfirst(lc(substr($com_for{$com}, 4))); $com = 'Admin'; } return $self->_make_event( { cmd => $com, arg => $arg, ts => $packet->{ts}, Insert_id => $ok->{insert_id}, Warning_count => $ok->{warnings}, Rows_affected => $ok->{affected_rows}, }, $packet, $session ); } else { PTDEBUG && _d('Looks like an OK packet but session has no cmd'); } } elsif ( $first_byte eq 'ff' ) { my $error = parse_error_packet($data); if ( !$error ) { $self->fail_session($session, 'failed to parse error packet'); return; } my $event; if ( $session->{state} eq 'client_auth' || $session->{state} eq 'server_handshake' ) { PTDEBUG && _d('Connection failed'); $event = { cmd => 'Admin', arg => 'administrator command: Connect', ts => $packet->{ts}, Error_no => $error->{errno}, }; $session->{attribs}->{Error_msg} = $error->{message}; $session->{closed} = 1; # delete session when done return $self->_make_event($event, $packet, $session); } elsif ( $session->{cmd} ) { my $com = $session->{cmd}->{cmd}; my $arg; if ( $com eq COM_QUERY || $com eq COM_STMT_EXECUTE ) { $com = 'Query'; $arg = $session->{cmd}->{arg}; } else { $arg = 'administrator command: ' . ucfirst(lc(substr($com_for{$com}, 4))); $com = 'Admin'; } $event = { cmd => $com, arg => $arg, ts => $packet->{ts}, }; if ( $error->{errno} ) { $event->{Error_no} = $error->{errno}; } $session->{attribs}->{Error_msg} = $error->{message}; return $self->_make_event($event, $packet, $session); } else { PTDEBUG && _d('Looks like an error packet but client is not ' . 'authenticating and session has no cmd'); } } elsif ( $first_byte eq 'fe' && $packet->{mysql_data_len} < 9 ) { if ( $packet->{mysql_data_len} == 1 && $session->{state} eq 'client_auth' && $packet->{number} == 2 ) { PTDEBUG && _d('Server has old password table;', 'client will resend password using old algorithm'); $session->{state} = 'client_auth_resend'; } else { PTDEBUG && _d('Got an EOF packet'); $self->fail_session($session, 'got an unexpected EOF packet'); } } else { if ( $session->{cmd} ) { PTDEBUG && _d('Got a row/field/result packet'); my $com = $session->{cmd}->{cmd}; PTDEBUG && _d('Responding to client', $com_for{$com}); my $event = { ts => $packet->{ts} }; if ( $com eq COM_QUERY || $com eq COM_STMT_EXECUTE ) { $event->{cmd} = 'Query'; $event->{arg} = $session->{cmd}->{arg}; } else { $event->{arg} = 'administrator command: ' . ucfirst(lc(substr($com_for{$com}, 4))); $event->{cmd} = 'Admin'; } if ( $packet->{complete} ) { my ( $warning_count, $status_flags ) = $data =~ m/fe(.{4})(.{4})\Z/; if ( $warning_count ) { $event->{Warnings} = to_num($warning_count); my $flags = to_num($status_flags); # TODO set all flags? $event->{No_good_index_used} = $flags & SERVER_QUERY_NO_GOOD_INDEX_USED ? 1 : 0; $event->{No_index_used} = $flags & SERVER_QUERY_NO_INDEX_USED ? 
1 : 0; } } return $self->_make_event($event, $packet, $session); } else { PTDEBUG && _d('Unknown in-stream server response'); } } } return; } sub _packet_from_client { my ( $self, $packet, $session, $misc ) = @_; die "I need a packet" unless $packet; die "I need a session" unless $session; PTDEBUG && _d('Packet is from client; state:', $session->{state}); if ( ($session->{client_seq} || '') eq $packet->{seq} ) { push @{ $session->{client_retransmissions} }, $packet->{seq}; PTDEBUG && _d('TCP retransmission'); return; } $session->{client_seq} = $packet->{seq}; my $data = $packet->{data}; my $ts = $packet->{ts}; if ( ($session->{state} || '') eq 'server_handshake' ) { PTDEBUG && _d('Expecting client authentication packet'); my $handshake = parse_client_handshake_packet($data); if ( !$handshake ) { $self->fail_session($session, 'failed to parse client handshake'); return; } $session->{state} = 'client_auth'; $session->{pos_in_log} = $packet->{pos_in_log}; $session->{user} = $handshake->{user}; $session->{db} = $handshake->{db}; $session->{will_compress} = $handshake->{flags}->{CLIENT_COMPRESS}; } elsif ( ($session->{state} || '') eq 'client_auth_resend' ) { PTDEBUG && _d('Client resending password using old algorithm'); $session->{state} = 'client_auth'; } elsif ( ($session->{state} || '') eq 'awaiting_reply' ) { my $arg = $session->{cmd}->{arg} ? substr($session->{cmd}->{arg}, 0, 50) : 'unknown'; PTDEBUG && _d('More data for previous command:', $arg, '...'); return; } else { if ( $packet->{number} != 0 ) { $self->fail_session($session, 'client cmd not packet 0'); return; } if ( !defined $session->{compress} ) { return unless $self->detect_compression($packet, $session); $data = $packet->{data}; } my $com = parse_com_packet($data, $packet->{mysql_data_len}); if ( !$com ) { $self->fail_session($session, 'failed to parse COM packet'); return; } if ( $com->{code} eq COM_STMT_EXECUTE ) { PTDEBUG && _d('Execute prepared statement'); my $exec = parse_execute_packet($com->{data}, $session->{sths}); if ( !$exec ) { PTDEBUG && _d('Failed to parse execute packet'); $session->{state} = undef; return; } $com->{data} = $exec->{arg}; $session->{attribs}->{Statement_id} = $exec->{sth_id}; } elsif ( $com->{code} eq COM_STMT_RESET ) { my $sth_id = get_sth_id($com->{data}); if ( !$sth_id ) { $self->fail_session($session, 'failed to parse prepared statement reset packet'); return; } $com->{data} = "RESET $sth_id"; $session->{attribs}->{Statement_id} = $sth_id; } $session->{state} = 'awaiting_reply'; $session->{pos_in_log} = $packet->{pos_in_log}; $session->{ts} = $ts; $session->{cmd} = { cmd => $com->{code}, arg => $com->{data}, }; if ( $com->{code} eq COM_QUIT ) { # Fire right away; will cleanup later. 
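# COM_QUIT never receives a reply from the server, so the Quit event is
# emitted immediately and the session is flagged for deletion once the
# event has been made.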
PTDEBUG && _d('Got a COM_QUIT'); $session->{closed} = 1; # delete session when done return $self->_make_event( { cmd => 'Admin', arg => 'administrator command: Quit', ts => $ts, }, $packet, $session ); } elsif ( $com->{code} eq COM_STMT_CLOSE ) { my $sth_id = get_sth_id($com->{data}); if ( !$sth_id ) { $self->fail_session($session, 'failed to parse prepared statement close packet'); return; } delete $session->{sths}->{$sth_id}; return $self->_make_event( { cmd => 'Query', arg => "DEALLOCATE PREPARE $sth_id", ts => $ts, }, $packet, $session ); } } return; } sub _make_event { my ( $self, $event, $packet, $session ) = @_; PTDEBUG && _d('Making event'); $session->{raw_packets} = []; $self->_delete_buff($session); if ( !$session->{thread_id} ) { PTDEBUG && _d('Giving session fake thread id', $self->{fake_thread_id}); $session->{thread_id} = $self->{fake_thread_id}++; } my ($host, $port) = $session->{client} =~ m/((?:\d+\.){3}\d+)\:(\w+)/; my $new_event = { cmd => $event->{cmd}, arg => $event->{arg}, bytes => length( $event->{arg} ), ts => tcp_timestamp( $event->{ts} ), host => $host, ip => $host, port => $port, db => $session->{db}, user => $session->{user}, Thread_id => $session->{thread_id}, pos_in_log => $session->{pos_in_log}, Query_time => timestamp_diff($session->{ts}, $packet->{ts}), Rows_affected => ($event->{Rows_affected} || 0), Warning_count => ($event->{Warning_count} || 0), No_good_index_used => ($event->{No_good_index_used} ? 'Yes' : 'No'), No_index_used => ($event->{No_index_used} ? 'Yes' : 'No'), }; @{$new_event}{keys %{$session->{attribs}}} = values %{$session->{attribs}}; foreach my $opt_attrib ( qw(Error_no) ) { if ( defined $event->{$opt_attrib} ) { $new_event->{$opt_attrib} = $event->{$opt_attrib}; } } PTDEBUG && _d('Properties of event:', Dumper($new_event)); delete $session->{cmd}; $session->{state} = undef; $session->{attribs} = {}; $session->{n_queries}++; $session->{server_retransmissions} = []; $session->{client_retransmissions} = []; return $new_event; } sub tcp_timestamp { my ( $ts ) = @_; $ts =~ s/^\d\d(\d\d)-(\d\d)-(\d\d)/$1$2$3/; return $ts; } sub timestamp_diff { my ( $start, $end ) = @_; my $sd = substr($start, 0, 11, ''); my $ed = substr($end, 0, 11, ''); my ( $sh, $sm, $ss ) = split(/:/, $start); my ( $eh, $em, $es ) = split(/:/, $end); my $esecs = ($eh * 3600 + $em * 60 + $es); my $ssecs = ($sh * 3600 + $sm * 60 + $ss); if ( $sd eq $ed ) { return sprintf '%.6f', $esecs - $ssecs; } else { # Assume only one day boundary has been crossed, no DST, etc return sprintf '%.6f', ( 86_400 - $ssecs ) + $esecs; } } sub to_string { my ( $data ) = @_; return pack('H*', $data); } sub unpack_string { my ( $data ) = @_; my $len = 0; my $encode_len = 0; ($data, $len, $encode_len) = decode_len($data); my $t = 'H' . ($len ? 
$len * 2 : '*'); $data = pack($t, $data); return "\"$data\"", $encode_len + $len; } sub decode_len { my ( $data ) = @_; return unless $data; my $first_byte = to_num(substr($data, 0, 2, '')); my $len; my $encode_len; if ( $first_byte <= 251 ) { $len = $first_byte; $encode_len = 1; } elsif ( $first_byte == 252 ) { $len = to_num(substr($data, 4, '')); $encode_len = 2; } elsif ( $first_byte == 253 ) { $len = to_num(substr($data, 6, '')); $encode_len = 3; } elsif ( $first_byte == 254 ) { $len = to_num(substr($data, 16, '')); $encode_len = 8; } else { PTDEBUG && _d('data:', $data, 'first byte:', $first_byte); die "Invalid length encoded byte: $first_byte"; } PTDEBUG && _d('len:', $len, 'encode len', $encode_len); return $data, $len, $encode_len; } sub to_num { my ( $str, $len ) = @_; if ( $len ) { $str = substr($str, 0, $len * 2); } my @bytes = $str =~ m/(..)/g; my $result = 0; foreach my $i ( 0 .. $#bytes ) { $result += hex($bytes[$i]) * (16 ** ($i * 2)); } return $result; } sub to_double { my ( $str ) = @_; return unpack('d', pack('H*', $str)); } sub get_lcb { my ( $string ) = @_; my $first_byte = hex(substr($$string, 0, 2, '')); if ( $first_byte < 251 ) { return $first_byte; } elsif ( $first_byte == 252 ) { return to_num(substr($$string, 0, 4, '')); } elsif ( $first_byte == 253 ) { return to_num(substr($$string, 0, 6, '')); } elsif ( $first_byte == 254 ) { return to_num(substr($$string, 0, 16, '')); } } sub parse_error_packet { my ( $data ) = @_; return unless $data; PTDEBUG && _d('ERROR data:', $data); if ( length $data < 16 ) { PTDEBUG && _d('Error packet is too short:', $data); return; } my $errno = to_num(substr($data, 0, 4)); my $marker = to_string(substr($data, 4, 2)); my $sqlstate = ''; my $message = ''; if ( $marker eq '#' ) { $sqlstate = to_string(substr($data, 6, 10)); $message = to_string(substr($data, 16)); } else { $marker = ''; $message = to_string(substr($data, 4)); } return unless $message; my $pkt = { errno => $errno, sqlstate => $marker . 
$sqlstate, message => $message, }; PTDEBUG && _d('Error packet:', Dumper($pkt)); return $pkt; } sub parse_ok_packet { my ( $data ) = @_; return unless $data; PTDEBUG && _d('OK data:', $data); if ( length $data < 12 ) { PTDEBUG && _d('OK packet is too short:', $data); return; } my $affected_rows = get_lcb(\$data); my $insert_id = get_lcb(\$data); my $status = to_num(substr($data, 0, 4, '')); my $warnings = to_num(substr($data, 0, 4, '')); my $message = to_string($data); my $pkt = { affected_rows => $affected_rows, insert_id => $insert_id, status => $status, warnings => $warnings, message => $message, }; PTDEBUG && _d('OK packet:', Dumper($pkt)); return $pkt; } sub parse_ok_prepared_statement_packet { my ( $data ) = @_; return unless $data; PTDEBUG && _d('OK prepared statement data:', $data); if ( length $data < 8 ) { PTDEBUG && _d('OK prepared statement packet is too short:', $data); return; } my $sth_id = to_num(substr($data, 0, 8, '')); my $num_cols = to_num(substr($data, 0, 4, '')); my $num_params = to_num(substr($data, 0, 4, '')); my $pkt = { sth_id => $sth_id, num_cols => $num_cols, num_params => $num_params, }; PTDEBUG && _d('OK prepared packet:', Dumper($pkt)); return $pkt; } sub parse_server_handshake_packet { my ( $data ) = @_; return unless $data; PTDEBUG && _d('Server handshake data:', $data); my $handshake_pattern = qr{ ^ # ----- ---- (.+?)00 # n Null-Term String server_version (.{8}) # 4 thread_id .{16} # 8 scramble_buff .{2} # 1 filler: always 0x00 (.{4}) # 2 server_capabilities .{2} # 1 server_language .{4} # 2 server_status .{26} # 13 filler: always 0x00 }x; my ( $server_version, $thread_id, $flags ) = $data =~ m/$handshake_pattern/; my $pkt = { server_version => to_string($server_version), thread_id => to_num($thread_id), flags => parse_flags($flags), }; PTDEBUG && _d('Server handshake packet:', Dumper($pkt)); return $pkt; } sub parse_client_handshake_packet { my ( $data ) = @_; return unless $data; PTDEBUG && _d('Client handshake data:', $data); my ( $flags, $user, $buff_len ) = $data =~ m{ ^ (.{8}) # Client flags .{10} # Max packet size, charset (?:00){23} # Filler ((?:..)+?)00 # Null-terminated user name (..) # Length-coding byte for scramble buff }x; if ( !$buff_len ) { PTDEBUG && _d('Did not match client handshake packet'); return; } my $code_len = hex($buff_len); my $db; my $capability_flags = to_num($flags); # $flags is stored as little endian. if ($capability_flags & $flag_for{CLIENT_CONNECT_WITH_DB}) { ( $db ) = $data =~ m! ^.{64}${user}00.. # Everything matched before (?:..){$code_len} # The scramble buffer (.*?)00.*\Z # The database name !x; } my $pkt = { user => to_string($user), db => $db ? to_string($db) : '', flags => parse_flags($flags), }; PTDEBUG && _d('Client handshake packet:', Dumper($pkt)); return $pkt; } sub parse_com_packet { my ( $data, $len ) = @_; return unless $data && $len; PTDEBUG && _d('COM data:', (substr($data, 0, 100).(length $data > 100 ? '...' 
: '')), 'len:', $len); my $code = substr($data, 0, 2); my $com = $com_for{$code}; if ( !$com ) { PTDEBUG && _d('Did not match COM packet'); return; } if ( $code ne COM_STMT_EXECUTE && $code ne COM_STMT_CLOSE && $code ne COM_STMT_RESET ) { $data = to_string(substr($data, 2, ($len - 1) * 2)); } my $pkt = { code => $code, com => $com, data => $data, }; PTDEBUG && _d('COM packet:', Dumper($pkt)); return $pkt; } sub parse_execute_packet { my ( $data, $sths ) = @_; return unless $data && $sths; my $sth_id = to_num(substr($data, 2, 8)); return unless defined $sth_id; my $sth = $sths->{$sth_id}; if ( !$sth ) { PTDEBUG && _d('Skipping unknown statement handle', $sth_id); return; } my $null_count = int(($sth->{num_params} + 7) / 8) || 1; my $null_bitmap = to_num(substr($data, 20, $null_count * 2)); PTDEBUG && _d('NULL bitmap:', $null_bitmap, 'count:', $null_count); substr($data, 0, 20 + ($null_count * 2), ''); my $new_params = to_num(substr($data, 0, 2, '')); my @types; if ( $new_params ) { PTDEBUG && _d('New param types'); for my $i ( 0..($sth->{num_params}-1) ) { my $type = to_num(substr($data, 0, 4, '')); push @types, $type_for{$type}; PTDEBUG && _d('Param', $i, 'type:', $type, $type_for{$type}); } $sth->{types} = \@types; } else { @types = @{$sth->{types}} if $data; } my $arg = $sth->{statement}; PTDEBUG && _d('Statement:', $arg); for my $i ( 0..($sth->{num_params}-1) ) { my $val; my $len; # in bytes if ( $null_bitmap & (2**$i) ) { PTDEBUG && _d('Param', $i, 'is NULL (bitmap)'); $val = 'NULL'; $len = 0; } else { if ( $unpack_type{$types[$i]} ) { ($val, $len) = $unpack_type{$types[$i]}->($data); } else { PTDEBUG && _d('No handler for param', $i, 'type', $types[$i]); $val = '?'; $len = 0; } } PTDEBUG && _d('Param', $i, 'val:', $val); $arg =~ s/\?/$val/; substr($data, 0, $len * 2, '') if $len; } my $pkt = { sth_id => $sth_id, arg => "EXECUTE $arg", }; PTDEBUG && _d('Execute packet:', Dumper($pkt)); return $pkt; } sub get_sth_id { my ( $data ) = @_; return unless $data; my $sth_id = to_num(substr($data, 2, 8)); return $sth_id; } sub parse_flags { my ( $flags ) = @_; die "I need flags" unless $flags; PTDEBUG && _d('Flag data:', $flags); my %flags = %flag_for; my $flags_dec = to_num($flags); foreach my $flag ( keys %flag_for ) { my $flagno = $flag_for{$flag}; $flags{$flag} = ($flags_dec & $flagno ? 1 : 0); } return \%flags; } sub uncompress_data { my ( $data, $len ) = @_; die "I need data" unless $data; die "I need a len argument" unless $len; die "I need a scalar reference to data" unless ref $data eq 'SCALAR'; PTDEBUG && _d('Uncompressing data'); our $InflateError; my $comp_bin_data = pack('H*', $$data); my $uncomp_bin_data = ''; my $z = new IO::Uncompress::Inflate( \$comp_bin_data ) or die "IO::Uncompress::Inflate failed: $InflateError"; my $status = $z->read(\$uncomp_bin_data, $len) or die "IO::Uncompress::Inflate failed: $InflateError"; my $uncomp_data = unpack('H*', $uncomp_bin_data); return \$uncomp_data; } sub detect_compression { my ( $self, $packet, $session ) = @_; PTDEBUG && _d('Checking for client compression'); my $com = parse_com_packet($packet->{data}, $packet->{mysql_data_len}); if ( $com && $com->{code} eq COM_SLEEP ) { PTDEBUG && _d('Client is using compression'); $session->{compress} = 1; $packet->{data} = $packet->{mysql_hdr} . 
$packet->{data}; return 0 unless $self->uncompress_packet($packet, $session); remove_mysql_header($packet); } else { PTDEBUG && _d('Client is NOT using compression'); $session->{compress} = 0; } return 1; } sub uncompress_packet { my ( $self, $packet, $session ) = @_; die "I need a packet" unless $packet; die "I need a session" unless $session; my $data; my $comp_hdr; my $comp_data_len; my $pkt_num; my $uncomp_data_len; eval { $data = \$packet->{data}; $comp_hdr = substr($$data, 0, 14, ''); $comp_data_len = to_num(substr($comp_hdr, 0, 6)); $pkt_num = to_num(substr($comp_hdr, 6, 2)); $uncomp_data_len = to_num(substr($comp_hdr, 8, 6)); PTDEBUG && _d('Compression header data:', $comp_hdr, 'compressed data len (bytes)', $comp_data_len, 'number', $pkt_num, 'uncompressed data len (bytes)', $uncomp_data_len); }; if ( $EVAL_ERROR ) { $session->{EVAL_ERROR} = $EVAL_ERROR; $self->fail_session($session, 'failed to parse compression header'); return 0; } if ( $uncomp_data_len ) { eval { $data = uncompress_data($data, $uncomp_data_len); $packet->{data} = $$data; }; if ( $EVAL_ERROR ) { $session->{EVAL_ERROR} = $EVAL_ERROR; $self->fail_session($session, 'failed to uncompress data'); die "Cannot uncompress packet. Check that IO::Uncompress::Inflate " . "is installed.\nError: $EVAL_ERROR"; } } else { PTDEBUG && _d('Packet is not really compressed'); $packet->{data} = $$data; } return 1; } sub remove_mysql_header { my ( $packet ) = @_; die "I need a packet" unless $packet; my $mysql_hdr = substr($packet->{data}, 0, 8, ''); my $mysql_data_len = to_num(substr($mysql_hdr, 0, 6)); my $pkt_num = to_num(substr($mysql_hdr, 6, 2)); PTDEBUG && _d('MySQL packet: header data', $mysql_hdr, 'data len (bytes)', $mysql_data_len, 'number', $pkt_num); $packet->{mysql_hdr} = $mysql_hdr; $packet->{mysql_data_len} = $mysql_data_len; $packet->{number} = $pkt_num; return; } sub _delete_buff { my ( $self, $session ) = @_; map { delete $session->{$_} } qw(buff buff_left mysql_data_len); return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MySQLProtocolParser package # ########################################################################### # ########################################################################### # SlowLogParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/SlowLogParser.pm # t/lib/SlowLogParser.t # See https://launchpad.net/percona-toolkit for more information. 
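# Usage sketch (illustrative only; kept entirely as a comment so nothing here
# executes): parse_event() chunks the slow log by locally setting
# $INPUT_RECORD_SEPARATOR to ";\n#", then walks each chunk line by line for
# "# Time:", "# User@Host:" and generic "# Attrib: value" metadata before
# treating the remainder as the query text.  A minimal driver, assuming a
# hypothetical readable filehandle $fh opened on a slow log file:
#
#   my $parser = SlowLogParser->new();
#   while ( my $event = $parser->parse_event(
#      next_event => sub { return <$fh>;    },  # readline sees the localized $/
#      tell       => sub { return tell $fh; },  # byte offset used for pos_in_log
#   ) ) {
#      printf "%s\n", $event->{arg} || '(no query)';
#   }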
# ########################################################################### { package SlowLogParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class ) = @_; my $self = { pending => [], last_event_offset => undef, }; return bless $self, $class; } my $slow_log_ts_line = qr/^# Time: ((?:[0-9: ]{15})|(?:[-0-9: T]{19}))/; my $slow_log_uh_line = qr/# User\@Host: ([^\[]+|\[[^[]+\]).*?@ (\S*) \[(.*)\]\s*(?:Id:\s*(\d+))?/; my $slow_log_hd_line = qr{ ^(?: T[cC][pP]\s[pP]ort:\s+\d+ # case differs on windows/unix | [/A-Z].*mysqld,\sVersion.*(?:started\swith:|embedded\slibrary) | Time\s+Id\s+Command ).*\n }xm; sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(next_event tell); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($next_event, $tell) = @args{@required_args}; my $pending = $self->{pending}; local $INPUT_RECORD_SEPARATOR = ";\n#"; my $trimlen = length($INPUT_RECORD_SEPARATOR); my $pos_in_log = $tell->(); my $stmt; EVENT: while ( defined($stmt = shift @$pending) or defined($stmt = $next_event->()) ) { my @properties = ('cmd', 'Query', 'pos_in_log', $pos_in_log); $self->{last_event_offset} = $pos_in_log; $pos_in_log = $tell->(); if ( $stmt =~ s/$slow_log_hd_line//go ){ # Throw away header lines in log my @chunks = split(/$INPUT_RECORD_SEPARATOR/o, $stmt); if ( @chunks > 1 ) { PTDEBUG && _d("Found multiple chunks"); $stmt = shift @chunks; unshift @$pending, @chunks; } } $stmt = '#' . $stmt unless $stmt =~ m/\A#/; $stmt =~ s/;\n#?\Z//; my ($got_ts, $got_uh, $got_ac, $got_db, $got_set, $got_embed); my $pos = 0; my $len = length($stmt); my $found_arg = 0; LINE: while ( $stmt =~ m/^(.*)$/mg ) { # /g is important, requires scalar match. $pos = pos($stmt); # Be careful not to mess this up! my $line = $1; # Necessary for /g and pos() to work. PTDEBUG && _d($line); if ($line =~ m/^(?:#|use |SET (?:last_insert_id|insert_id|timestamp))/o) { if ( !$got_ts && (my ( $time ) = $line =~ m/$slow_log_ts_line/o)) { PTDEBUG && _d("Got ts", $time); push @properties, 'ts', $time; ++$got_ts; if ( !$got_uh && ( my ( $user, $host, $ip, $thread_id ) = $line =~ m/$slow_log_uh_line/o ) ) { PTDEBUG && _d("Got user, host, ip", $user, $host, $ip); $host ||= $ip; # sometimes host is missing when using skip-name-resolve (LP #issue 1262456) push @properties, 'user', $user, 'host', $host, 'ip', $ip; if ( $thread_id ) { push @properties, 'Thread_id', $thread_id; } ++$got_uh; } } elsif ( !$got_uh && ( my ( $user, $host, $ip, $thread_id ) = $line =~ m/$slow_log_uh_line/o ) ) { PTDEBUG && _d("Got user, host, ip", $user, $host, $ip); $host ||= $ip; # sometimes host is missing when using skip-name-resolve (LP #issue 1262456) push @properties, 'user', $user, 'host', $host, 'ip', $ip; if ( $thread_id ) { push @properties, 'Thread_id', $thread_id; } ++$got_uh; } elsif (!$got_ac && $line =~ m/^# (?:administrator command:.*)$/) { PTDEBUG && _d("Got admin command"); $line =~ s/^#\s+//; # string leading "# ". push @properties, 'cmd', 'Admin', 'arg', $line; push @properties, 'bytes', length($properties[-1]); ++$found_arg; ++$got_ac; } elsif ( $line =~ m/^# +[A-Z][A-Za-z_]+: \S+/ ) { # Make the test cheap! 
PTDEBUG && _d("Got some line with properties"); if ( $line =~ m/Schema:\s+\w+: / ) { PTDEBUG && _d('Removing empty Schema attrib'); $line =~ s/Schema:\s+//; PTDEBUG && _d($line); } my @temp = $line =~ m/(\w+):\s+(\S+|\Z)/g; push @properties, @temp; } elsif ( !$got_db && (my ( $db ) = $line =~ m/^use ([^;]+)/ ) ) { PTDEBUG && _d("Got a default database:", $db); push @properties, 'db', $db; ++$got_db; } elsif (!$got_set && (my ($setting) = $line =~ m/^SET\s+([^;]*)/)) { PTDEBUG && _d("Got some setting:", $setting); push @properties, split(/,|\s*=\s*/, $setting); ++$got_set; } if ( !$found_arg && $pos == $len ) { PTDEBUG && _d("Did not find arg, looking for special cases"); local $INPUT_RECORD_SEPARATOR = ";\n"; # get next line if ( defined(my $l = $next_event->()) ) { if ( $l =~ /^\s*[A-Z][a-z_]+: / ) { PTDEBUG && _d("Found NULL query before", $l); local $INPUT_RECORD_SEPARATOR = ";\n#"; my $rest_of_event = $next_event->(); push @{$self->{pending}}, $l . $rest_of_event; push @properties, 'cmd', 'Query', 'arg', '/* No query */'; push @properties, 'bytes', 0; $found_arg++; } else { chomp $l; $l =~ s/^\s+//; PTDEBUG && _d("Found admin statement", $l); push @properties, 'cmd', 'Admin', 'arg', $l; push @properties, 'bytes', length($properties[-1]); $found_arg++; } } else { PTDEBUG && _d("I can't figure out what to do with this line"); next EVENT; } } } else { PTDEBUG && _d("Got the query/arg line"); my $arg = substr($stmt, $pos - length($line)); push @properties, 'arg', $arg, 'bytes', length($arg); if ( $args{misc} && $args{misc}->{embed} && ( my ($e) = $arg =~ m/($args{misc}->{embed})/) ) { push @properties, $e =~ m/$args{misc}->{capture}/g; } last LINE; } } PTDEBUG && _d('Properties of event:', Dumper(\@properties)); my $event = { @properties }; if ( !$event->{arg} ) { PTDEBUG && _d('Partial event, no arg'); } else { $self->{last_event_offset} = undef; if ( $args{stats} ) { $args{stats}->{events_read}++; $args{stats}->{events_parsed}++; } } return $event; } # EVENT @$pending = (); $args{oktorun}->(0) if $args{oktorun}; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End SlowLogParser package # ########################################################################### # ########################################################################### # SlowLogWriter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/SlowLogWriter.pm # t/lib/SlowLogWriter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package SlowLogWriter; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class ) = @_; bless {}, $class; } sub write { my ( $self, $fh, $event, $field ) = @_; if ( $event->{ts} ) { print $fh "# Time: $event->{ts}\n"; } if ( $event->{user} ) { printf $fh "# User\@Host: %s[%s] \@ %s []\n", $event->{user}, $event->{user}, $event->{host}; } if ( $event->{ip} && $event->{port} ) { printf $fh "# Client: $event->{ip}:$event->{port}\n"; } if ( $event->{Thread_id} ) { printf $fh "# Thread_id: $event->{Thread_id}\n"; } my $percona_patched = exists $event->{QC_Hit} ? 
1 : 0; printf $fh "# Query_time: %.6f Lock_time: %.6f Rows_sent: %d Rows_examined: %d\n", map { $_ || 0 } @{$event}{qw(Query_time Lock_time Rows_sent Rows_examined)}; if ( $percona_patched ) { printf $fh "# QC_Hit: %s Full_scan: %s Full_join: %s Tmp_table: %s Tmp_table_on_disk: %s\n# Filesort: %s Filesort_on_disk: %s Merge_passes: %d\n", map { $_ || 0 } @{$event}{qw(QC_Hit Full_scan Full_join Tmp_table Tmp_table_on_disk Filesort Filesort_on_disk Merge_passes)}; if ( exists $event->{InnoDB_IO_r_ops} ) { printf $fh "# InnoDB_IO_r_ops: %d InnoDB_IO_r_bytes: %d InnoDB_IO_r_wait: %s\n# InnoDB_rec_lock_wait: %s InnoDB_queue_wait: %s\n# InnoDB_pages_distinct: %d\n", map { $_ || 0 } @{$event}{qw(InnoDB_IO_r_ops InnoDB_IO_r_bytes InnoDB_IO_r_wait InnoDB_rec_lock_wait InnoDB_queue_wait InnoDB_pages_distinct)}; } else { printf $fh "# No InnoDB statistics available for this query\n"; } } if ( $event->{db} ) { printf $fh "use %s;\n", $event->{db}; } if ( $event->{arg} =~ m/^administrator command/ ) { print $fh '# '; } if ($field && $event->{$field}) { print $fh $event->{$field}, ";\n"; } else { print $fh $event->{arg}, ";\n"; } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End SlowLogWriter package # ########################################################################### # ########################################################################### # EventAggregator package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/EventAggregator.pm # t/lib/EventAggregator.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package EventAggregator; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(min max); use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5); use constant BUCK_SIZE => 1.05; use constant BASE_LOG => log(BUCK_SIZE); use constant BASE_OFFSET => abs(1 - log(0.000001) / BASE_LOG); # 284.1617969 use constant NUM_BUCK => 1000; use constant MIN_BUCK => .000001; my @buck_vals = map { bucket_value($_); } (0..NUM_BUCK-1); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(groupby worst) ) { die "I need a $arg argument" unless $args{$arg}; } my $attributes = $args{attributes} || {}; my $self = { groupby => $args{groupby}, detect_attribs => scalar keys %$attributes == 0 ? 
1 : 0, all_attribs => [ keys %$attributes ], ignore_attribs => { map { $_ => $args{attributes}->{$_} } grep { $_ ne $args{groupby} } @{$args{ignore_attributes}} }, attributes => { map { $_ => $args{attributes}->{$_} } grep { $_ ne $args{groupby} } keys %$attributes }, alt_attribs => { map { $_ => make_alt_attrib(@{$args{attributes}->{$_}}) } grep { $_ ne $args{groupby} } keys %$attributes }, worst => $args{worst}, unroll_limit => $ENV{PT_QUERY_DIGEST_CHECK_ATTRIB_LIMIT} || 1000, attrib_limit => $args{attrib_limit}, result_classes => {}, result_globals => {}, result_samples => {}, class_metrics => {}, global_metrics => {}, n_events => 0, unrolled_loops => undef, type_for => { %{$args{type_for} || { Query_time => 'num' }} }, }; return bless $self, $class; } sub reset_aggregated_data { my ( $self ) = @_; foreach my $class ( values %{$self->{result_classes}} ) { foreach my $attrib ( values %$class ) { delete @{$attrib}{keys %$attrib}; } } foreach my $class ( values %{$self->{result_globals}} ) { delete @{$class}{keys %$class}; } delete @{$self->{result_samples}}{keys %{$self->{result_samples}}}; $self->{n_events} = 0; } sub aggregate { my ( $self, $event ) = @_; my $group_by = $event->{$self->{groupby}}; return unless defined $group_by; $self->{n_events}++; PTDEBUG && _d('Event', $self->{n_events}); return $self->{unrolled_loops}->($self, $event, $group_by) if $self->{unrolled_loops}; if ( $self->{n_events} <= $self->{unroll_limit} ) { $self->add_new_attributes($event) if $self->{detect_attribs}; ATTRIB: foreach my $attrib ( keys %{$self->{attributes}} ) { if ( !exists $event->{$attrib} ) { PTDEBUG && _d("attrib doesn't exist in event:", $attrib); my $alt_attrib = $self->{alt_attribs}->{$attrib}->($event); PTDEBUG && _d('alt attrib:', $alt_attrib); next ATTRIB unless $alt_attrib; } GROUPBY: foreach my $val ( ref $group_by ? @$group_by : ($group_by) ) { my $class_attrib = $self->{result_classes}->{$val}->{$attrib} ||= {}; my $global_attrib = $self->{result_globals}->{$attrib} ||= {}; my $samples = $self->{result_samples}; my $handler = $self->{handlers}->{ $attrib }; if ( !$handler ) { $handler = $self->make_handler( event => $event, attribute => $attrib, alternates => $self->{attributes}->{$attrib}, worst => $self->{worst} eq $attrib, ); $self->{handlers}->{$attrib} = $handler; } next GROUPBY unless $handler; $samples->{$val} ||= $event; # Initialize to the first event. $handler->($event, $class_attrib, $global_attrib, $samples, $group_by); } } } else { $self->_make_unrolled_loops($event); $self->{unrolled_loops}->($self, $event, $group_by); } return; } sub _make_unrolled_loops { my ( $self, $event ) = @_; my $group_by = $event->{$self->{groupby}}; my @attrs = grep { $self->{handlers}->{$_} } keys %{$self->{attributes}}; my $globs = $self->{result_globals}; # Global stats for each my $samples = $self->{result_samples}; my @lines = ( 'my ( $self, $event, $group_by ) = @_;', 'my ($val, $class, $global, $idx);', (ref $group_by ? ('foreach my $group_by ( @$group_by ) {') : ()), 'my $temp = $self->{result_classes}->{ $group_by } ||= { map { $_ => { } } @attrs };', '$samples->{$group_by} ||= $event;', # Always start with the first. ); foreach my $i ( 0 .. $#attrs ) { push @lines, ( '$class = $temp->{\'' . $attrs[$i] . '\'};', '$global = $globs->{\'' . $attrs[$i] . 
'\'};', $self->{unrolled_for}->{$attrs[$i]}, ); } if ( ref $group_by ) { push @lines, '}'; # Close the loop opened above } @lines = map { s/^/ /gm; $_ } @lines; # Indent for debugging unshift @lines, 'sub {'; push @lines, '}'; my $code = join("\n", @lines); PTDEBUG && _d('Unrolled subroutine:', @lines); my $sub = eval $code; die $EVAL_ERROR if $EVAL_ERROR; $self->{unrolled_loops} = $sub; return; } sub results { my ( $self ) = @_; return { classes => $self->{result_classes}, globals => $self->{result_globals}, samples => $self->{result_samples}, }; } sub set_results { my ( $self, $results ) = @_; $self->{result_classes} = $results->{classes}; $self->{result_globals} = $results->{globals}; $self->{result_samples} = $results->{samples}; return; } sub stats { my ( $self ) = @_; return { classes => $self->{class_metrics}, globals => $self->{global_metrics}, }; } sub attributes { my ( $self ) = @_; return $self->{type_for}; } sub set_attribute_types { my ( $self, $attrib_types ) = @_; $self->{type_for} = $attrib_types; return; } sub type_for { my ( $self, $attrib ) = @_; return $self->{type_for}->{$attrib}; } sub make_handler { my ( $self, %args ) = @_; my @required_args = qw(event attribute); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($event, $attrib) = @args{@required_args}; my $val; eval { $val= $self->_get_value(%args); }; if ( $EVAL_ERROR ) { PTDEBUG && _d("Cannot make", $attrib, "handler:", $EVAL_ERROR); return; } return unless defined $val; # can't determine type if it's undef my $float_re = qr{[+-]?(?:(?=\d|[.])\d+(?:[.])\d{0,})(?:E[+-]?\d+)?}i; my $type = $self->type_for($attrib) ? $self->type_for($attrib) : $attrib =~ m/_crc$/ ? 'string' : $val =~ m/^(?:\d+|$float_re)$/o ? 'num' : $val =~ m/^(?:Yes|No)$/ ? 'bool' : 'string'; PTDEBUG && _d('Type for', $attrib, 'is', $type, '(sample:', $val, ')'); $self->{type_for}->{$attrib} = $type; my @lines; my %track = ( sum => $type =~ m/num|bool/ ? 1 : 0, # sum of values unq => $type =~ m/bool|string/ ? 1 : 0, # count of unique values seen all => $type eq 'num' ? 1 : 0, # all values in bucketed list ); my $trf = ($type eq 'bool') ? q{(($val || '') eq 'Yes') ? 1 : 0} : undef; if ( $trf ) { push @lines, q{$val = } . $trf . ';'; } if ( $attrib eq 'Query_time' ) { push @lines, ( '$val =~ s/^(\d+(?:\.\d+)?).*/$1/;', '$event->{\''.$attrib.'\'} = $val;', ); } if ( $type eq 'num' && $self->{attrib_limit} ) { push @lines, ( "if ( \$val > $self->{attrib_limit} ) {", ' $val = $class->{last} ||= 0;', '}', '$class->{last} = $val;', ); } my $lt = $type eq 'num' ? '<' : 'lt'; my $gt = $type eq 'num' ? '>' : 'gt'; foreach my $place ( qw($class $global) ) { my @tmp; # hold lines until PLACE placeholder is replaced push @tmp, '++PLACE->{cnt};'; # count of all values seen if ( $attrib =~ m/_crc$/ ) { push @tmp, '$val = $val % 1_000;'; } push @tmp, ( 'PLACE->{min} = $val if !defined PLACE->{min} || $val ' . $lt . ' PLACE->{min};', ); push @tmp, ( 'PLACE->{max} = $val if !defined PLACE->{max} || $val ' . $gt . ' PLACE->{max};', ); if ( $track{sum} ) { push @tmp, 'PLACE->{sum} += $val;'; } if ( $track{all} ) { push @tmp, ( 'exists PLACE->{all} or PLACE->{all} = {};', '++PLACE->{all}->{ EventAggregator::bucket_idx($val) };', ); } push @lines, map { s/PLACE/$place/g; $_ } @tmp; } if ( $track{unq} ) { push @lines, '++$class->{unq}->{$val}'; } if ( $args{worst} ) { my $op = $type eq 'num' ? '>=' : 'ge'; push @lines, ( 'if ( $val ' . $op . 
' ($class->{max} || 0) ) {', ' $samples->{$group_by} = $event;', '}', ); } my @unrolled = ( "\$val = \$event->{'$attrib'};", ( map { "\$val = \$event->{'$_'} unless defined \$val;" } grep { $_ ne $attrib } @{$args{alternates}} ), 'defined $val && do {', @lines, '};', ); $self->{unrolled_for}->{$attrib} = join("\n", @unrolled); my @code = ( 'sub {', 'my ( $event, $class, $global, $samples, $group_by ) = @_;', 'my ($val, $idx);', $self->{unrolled_for}->{$attrib}, 'return;', '}', ); $self->{code_for}->{$attrib} = join("\n", @code); PTDEBUG && _d($attrib, 'handler code:', $self->{code_for}->{$attrib}); my $sub = eval $self->{code_for}->{$attrib}; if ( $EVAL_ERROR ) { die "Failed to compile $attrib handler code: $EVAL_ERROR"; } return $sub; } sub bucket_idx { my ( $val ) = @_; return 0 if $val < MIN_BUCK; my $idx = int(BASE_OFFSET + log($val)/BASE_LOG); return $idx > (NUM_BUCK-1) ? (NUM_BUCK-1) : $idx; } sub bucket_value { my ( $bucket ) = @_; return 0 if $bucket == 0; die "Invalid bucket: $bucket" if $bucket < 0 || $bucket > (NUM_BUCK-1); return (BUCK_SIZE**($bucket-1)) * MIN_BUCK; } { my @buck_tens; sub buckets_of { return @buck_tens if @buck_tens; my $start_bucket = 0; my @base10_starts = (0); map { push @base10_starts, (10**$_)*MIN_BUCK } (1..7); for my $base10_bucket ( 0..($#base10_starts-1) ) { my $next_bucket = bucket_idx( $base10_starts[$base10_bucket+1] ); PTDEBUG && _d('Base 10 bucket', $base10_bucket, 'maps to', 'base 1.05 buckets', $start_bucket, '..', $next_bucket-1); for my $base1_05_bucket ($start_bucket..($next_bucket-1)) { $buck_tens[$base1_05_bucket] = $base10_bucket; } $start_bucket = $next_bucket; } map { $buck_tens[$_] = 7 } ($start_bucket..(NUM_BUCK-1)); return @buck_tens; } } sub calculate_statistical_metrics { my ( $self, %args ) = @_; my $classes = $self->{result_classes}; my $globals = $self->{result_globals}; my $class_metrics = $self->{class_metrics}; my $global_metrics = $self->{global_metrics}; PTDEBUG && _d('Calculating statistical_metrics'); foreach my $attrib ( keys %$globals ) { if ( exists $globals->{$attrib}->{all} ) { $global_metrics->{$attrib} = $self->_calc_metrics( $globals->{$attrib}->{all}, $globals->{$attrib}, ); } foreach my $class ( keys %$classes ) { if ( exists $classes->{$class}->{$attrib}->{all} ) { $class_metrics->{$class}->{$attrib} = $self->_calc_metrics( $classes->{$class}->{$attrib}->{all}, $classes->{$class}->{$attrib} ); } } } return; } sub _calc_metrics { my ( $self, $vals, $args ) = @_; my $statistical_metrics = { pct_95 => 0, stddev => 0, median => 0, cutoff => undef, }; return $statistical_metrics unless defined $vals && %$vals && $args->{cnt}; my $n_vals = $args->{cnt}; if ( $n_vals == 1 || $args->{max} == $args->{min} ) { my $v = $args->{max} || 0; my $bucket = int(6 + ( log($v > 0 ? $v : MIN_BUCK) / log(10))); $bucket = $bucket > 7 ? 7 : $bucket < 0 ? 0 : $bucket; return { pct_95 => $v, stddev => 0, median => $v, cutoff => $n_vals, }; } elsif ( $n_vals == 2 ) { foreach my $v ( $args->{min}, $args->{max} ) { my $bucket = int(6 + ( log($v && $v > 0 ? $v : MIN_BUCK) / log(10))); $bucket = $bucket > 7 ? 7 : $bucket < 0 ? 0 : $bucket; } my $v = $args->{max} || 0; my $mean = (($args->{min} || 0) + $v) / 2; return { pct_95 => $v, stddev => sqrt((($v - $mean) ** 2) *2), median => $mean, cutoff => $n_vals, }; } my $cutoff = $n_vals >= 10 ? 
int ( $n_vals * 0.95 ) : $n_vals; $statistical_metrics->{cutoff} = $cutoff; my $total_left = $n_vals; my $top_vals = $n_vals - $cutoff; # vals > 95th my $sum_excl = 0; my $sum = 0; my $sumsq = 0; my $mid = int($n_vals / 2); my $median = 0; my $prev = NUM_BUCK-1; # Used for getting median when $cutoff is odd my $bucket_95 = 0; # top bucket in 95th PTDEBUG && _d('total vals:', $total_left, 'top vals:', $top_vals, 'mid:', $mid); my @buckets = map { 0 } (0..NUM_BUCK-1); map { $buckets[$_] = $vals->{$_} } keys %$vals; $vals = \@buckets; # repoint vals from given hashref to our array BUCKET: for my $bucket ( reverse 0..(NUM_BUCK-1) ) { my $val = $vals->[$bucket]; next BUCKET unless $val; $total_left -= $val; $sum_excl += $val; $bucket_95 = $bucket if !$bucket_95 && $sum_excl > $top_vals; if ( !$median && $total_left <= $mid ) { $median = (($cutoff % 2) || ($val > 1)) ? $buck_vals[$bucket] : ($buck_vals[$bucket] + $buck_vals[$prev]) / 2; } $sum += $val * $buck_vals[$bucket]; $sumsq += $val * ($buck_vals[$bucket]**2); $prev = $bucket; } my $var = $sumsq/$n_vals - ( ($sum/$n_vals) ** 2 ); my $stddev = $var > 0 ? sqrt($var) : 0; my $maxstdev = (($args->{max} || 0) - ($args->{min} || 0)) / 2; $stddev = $stddev > $maxstdev ? $maxstdev : $stddev; PTDEBUG && _d('sum:', $sum, 'sumsq:', $sumsq, 'stddev:', $stddev, 'median:', $median, 'prev bucket:', $prev, 'total left:', $total_left, 'sum excl', $sum_excl, 'bucket 95:', $bucket_95, $buck_vals[$bucket_95]); $statistical_metrics->{stddev} = $stddev; $statistical_metrics->{pct_95} = $buck_vals[$bucket_95]; $statistical_metrics->{median} = $median; return $statistical_metrics; } sub metrics { my ( $self, %args ) = @_; foreach my $arg ( qw(attrib where) ) { die "I need a $arg argument" unless defined $args{$arg}; } my $attrib = $args{attrib}; my $where = $args{where}; my $stats = $self->results(); my $metrics = $self->stats(); my $store = $stats->{classes}->{$where}->{$attrib}; my $global_cnt = $stats->{globals}->{$attrib}->{cnt}; return { cnt => $store->{cnt}, pct => $global_cnt && $store->{cnt} ? $store->{cnt} / $global_cnt : 0, sum => $store->{sum}, min => $store->{min}, max => $store->{max}, avg => $store->{sum} && $store->{cnt} ? 
$store->{sum} / $store->{cnt} : 0, median => $metrics->{classes}->{$where}->{$attrib}->{median} || 0, pct_95 => $metrics->{classes}->{$where}->{$attrib}->{pct_95} || 0, stddev => $metrics->{classes}->{$where}->{$attrib}->{stddev} || 0, }; } sub top_events { my ( $self, %args ) = @_; my $classes = $self->{result_classes}; my @sorted = reverse sort { # Sorted list of $groupby values $classes->{$a}->{$args{attrib}}->{$args{orderby}} <=> $classes->{$b}->{$args{attrib}}->{$args{orderby}} || tiebreaker($classes->{$a}, $classes->{$b}); } grep { defined $classes->{$_}->{$args{attrib}}->{$args{orderby}} } keys %$classes; # this should first be sorted for test consistency, but many tests already in place would fail my @chosen; # top events my @other; # other events (< top) my ($total, $count) = (0, 0); foreach my $groupby ( @sorted ) { if ( (!$args{total} || $total < $args{total} ) && ( !$args{count} || $count < $args{count} ) ) { push @chosen, [$groupby, 'top', $count+1]; } elsif ( $args{ol_attrib} && (!$args{ol_freq} || $classes->{$groupby}->{$args{ol_attrib}}->{cnt} >= $args{ol_freq}) ) { my $stats = $self->{class_metrics}->{$groupby}->{$args{ol_attrib}}; if ( ($stats->{pct_95} || 0) >= $args{ol_limit} ) { push @chosen, [$groupby, 'outlier', $count+1]; } else { push @other, [$groupby, 'misc', $count+1]; } } else { push @other, [$groupby, 'misc', $count+1]; } $total += $classes->{$groupby}->{$args{attrib}}->{$args{orderby}}; $count++; } return \@chosen, \@other; } sub tiebreaker { my ($a, $b) = @_; if (defined $a->{pos_in_log}) { return $a->{pos_in_log}->{max} cmp $b->{pos_in_log}->{max}; } return 0; } sub add_new_attributes { my ( $self, $event ) = @_; return unless $event; map { my $attrib = $_; $self->{attributes}->{$attrib} = [$attrib]; $self->{alt_attribs}->{$attrib} = make_alt_attrib($attrib); push @{$self->{all_attribs}}, $attrib; PTDEBUG && _d('Added new attribute:', $attrib); } grep { $_ ne $self->{groupby} && !exists $self->{attributes}->{$_} && !exists $self->{ignore_attribs}->{$_} } keys %$event; return; } sub get_attributes { my ( $self ) = @_; return $self->{all_attribs}; } sub events_processed { my ( $self ) = @_; return $self->{n_events}; } sub make_alt_attrib { my ( @attribs ) = @_; my $attrib = shift @attribs; # Primary attribute. return sub {} unless @attribs; # No alternates. my @lines; push @lines, 'sub { my ( $event ) = @_; my $alt_attrib;'; push @lines, map { "\$alt_attrib = '$_' if !defined \$alt_attrib " . "&& exists \$event->{'$_'};" } @attribs; push @lines, 'return $alt_attrib; }'; PTDEBUG && _d('alt attrib sub for', $attrib, ':', @lines); my $sub = eval join("\n", @lines); die if $EVAL_ERROR; return $sub; } sub merge { my ( @ea_objs ) = @_; PTDEBUG && _d('Merging', scalar @ea_objs, 'ea'); return unless scalar @ea_objs; my $ea1 = shift @ea_objs; my $r1 = $ea1->results; my $worst = $ea1->{worst}; # for merging, finding worst sample my %attrib_types = %{ $ea1->attributes() }; foreach my $ea ( @ea_objs ) { die "EventAggregator objects have different groupby: " . "$ea1->{groupby} and $ea->{groupby}" unless $ea1->{groupby} eq $ea->{groupby}; die "EventAggregator objects have different worst: " . 
"$ea1->{worst} and $ea->{worst}" unless $ea1->{worst} eq $ea->{worst}; my $attrib_types = $ea->attributes(); map { $attrib_types{$_} = $attrib_types->{$_} unless exists $attrib_types{$_}; } keys %$attrib_types; } my $r_merged = { classes => {}, globals => _deep_copy_attribs($r1->{globals}), samples => {}, }; map { $r_merged->{classes}->{$_} = _deep_copy_attribs($r1->{classes}->{$_}); @{$r_merged->{samples}->{$_}}{keys %{$r1->{samples}->{$_}}} = values %{$r1->{samples}->{$_}}; } keys %{$r1->{classes}}; for my $i ( 0..$#ea_objs ) { PTDEBUG && _d('Merging ea obj', ($i + 1)); my $r2 = $ea_objs[$i]->results; eval { CLASS: foreach my $class ( keys %{$r2->{classes}} ) { my $r1_class = $r_merged->{classes}->{$class}; my $r2_class = $r2->{classes}->{$class}; if ( $r1_class && $r2_class ) { CLASS_ATTRIB: foreach my $attrib ( keys %$r2_class ) { PTDEBUG && _d('merge', $attrib); if ( $r1_class->{$attrib} && $r2_class->{$attrib} ) { _add_attrib_vals($r1_class->{$attrib}, $r2_class->{$attrib}); } elsif ( !$r1_class->{$attrib} ) { PTDEBUG && _d('copy', $attrib); $r1_class->{$attrib} = _deep_copy_attrib_vals($r2_class->{$attrib}) } } } elsif ( !$r1_class ) { PTDEBUG && _d('copy class'); $r_merged->{classes}->{$class} = _deep_copy_attribs($r2_class); } my $new_worst_sample; if ( $r_merged->{samples}->{$class} && $r2->{samples}->{$class} ) { if ( $r2->{samples}->{$class}->{$worst} > $r_merged->{samples}->{$class}->{$worst} ) { $new_worst_sample = $r2->{samples}->{$class} } } elsif ( !$r_merged->{samples}->{$class} ) { $new_worst_sample = $r2->{samples}->{$class}; } if ( $new_worst_sample ) { PTDEBUG && _d('New worst sample:', $worst, '=', $new_worst_sample->{$worst}, 'item:', substr($class, 0, 100)); my %new_sample; @new_sample{keys %$new_worst_sample} = values %$new_worst_sample; $r_merged->{samples}->{$class} = \%new_sample; } } }; if ( $EVAL_ERROR ) { warn "Error merging class/sample: $EVAL_ERROR"; } eval { GLOBAL_ATTRIB: PTDEBUG && _d('Merging global attributes'); foreach my $attrib ( keys %{$r2->{globals}} ) { my $r1_global = $r_merged->{globals}->{$attrib}; my $r2_global = $r2->{globals}->{$attrib}; if ( $r1_global && $r2_global ) { PTDEBUG && _d('merge', $attrib); _add_attrib_vals($r1_global, $r2_global); } elsif ( !$r1_global ) { PTDEBUG && _d('copy', $attrib); $r_merged->{globals}->{$attrib} = _deep_copy_attrib_vals($r2_global); } } }; if ( $EVAL_ERROR ) { warn "Error merging globals: $EVAL_ERROR"; } } my $ea_merged = new EventAggregator( groupby => $ea1->{groupby}, worst => $ea1->{worst}, attributes => { map { $_=>[$_] } keys %attrib_types }, ); $ea_merged->set_results($r_merged); $ea_merged->set_attribute_types(\%attrib_types); return $ea_merged; } sub _add_attrib_vals { my ( $vals1, $vals2 ) = @_; foreach my $val ( keys %$vals1 ) { my $val1 = $vals1->{$val}; my $val2 = $vals2->{$val}; if ( (!ref $val1) && (!ref $val2) ) { die "undefined $val value" unless defined $val1 && defined $val2; my $is_num = exists $vals1->{sum} ? 1 : 0; if ( $val eq 'max' ) { if ( $is_num ) { $vals1->{$val} = $val1 > $val2 ? $val1 : $val2; } else { $vals1->{$val} = $val1 gt $val2 ? $val1 : $val2; } } elsif ( $val eq 'min' ) { if ( $is_num ) { $vals1->{$val} = $val1 < $val2 ? $val1 : $val2; } else { $vals1->{$val} = $val1 lt $val2 ? 
$val1 : $val2; } } else { $vals1->{$val} += $val2; } } elsif ( (ref $val1 eq 'ARRAY') && (ref $val2 eq 'ARRAY') ) { die "Empty $val arrayref" unless @$val1 && @$val2; my $n_buckets = (scalar @$val1) - 1; for my $i ( 0..$n_buckets ) { $vals1->{$val}->[$i] += $val2->[$i]; } } elsif ( (ref $val1 eq 'HASH') && (ref $val2 eq 'HASH') ) { die "Empty $val hashref" unless %$val1 and %$val2; map { $vals1->{$val}->{$_} += $val2->{$_} } keys %$val2; } else { PTDEBUG && _d('vals1:', Dumper($vals1)); PTDEBUG && _d('vals2:', Dumper($vals2)); die "$val type mismatch"; } } return; } sub _deep_copy_attribs { my ( $attribs ) = @_; my $copy = {}; foreach my $attrib ( keys %$attribs ) { $copy->{$attrib} = _deep_copy_attrib_vals($attribs->{$attrib}); } return $copy; } sub _deep_copy_attrib_vals { my ( $vals ) = @_; my $copy; if ( ref $vals eq 'HASH' ) { $copy = {}; foreach my $val ( keys %$vals ) { if ( my $ref_type = ref $val ) { if ( $ref_type eq 'ARRAY' ) { my $n_elems = (scalar @$val) - 1; $copy->{$val} = [ map { undef } ( 0..$n_elems ) ]; for my $i ( 0..$n_elems ) { $copy->{$val}->[$i] = $vals->{$val}->[$i]; } } elsif ( $ref_type eq 'HASH' ) { $copy->{$val} = {}; map { $copy->{$val}->{$_} += $vals->{$val}->{$_} } keys %{$vals->{$val}} } else { die "I don't know how to deep copy a $ref_type reference"; } } else { $copy->{$val} = $vals->{$val}; } } } else { $copy = $vals; } return $copy; } sub _get_value { my ( $self, %args ) = @_; my ($event, $attrib, $alts) = @args{qw(event attribute alternates)}; return unless $event && $attrib; my $value; if ( exists $event->{$attrib} ) { $value = $event->{$attrib}; } elsif ( $alts ) { my $found_value = 0; foreach my $alt_attrib( @$alts ) { if ( exists $event->{$alt_attrib} ) { $value = $event->{$alt_attrib}; $found_value = 1; last; } } die "Event does not have attribute $attrib or any of its alternates" unless $found_value; } else { die "Event does not have attribute $attrib and there are no alterantes"; } return $value; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End EventAggregator package # ########################################################################### # ########################################################################### # ReportFormatter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/ReportFormatter.pm # t/lib/ReportFormatter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package ReportFormatter; use Lmo; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(min max); use POSIX qw(ceil); eval { require Term::ReadKey }; my $have_term = $EVAL_ERROR ? 0 : 1; has underline_header => ( is => 'ro', isa => 'Bool', default => sub { 1 }, ); has line_prefix => ( is => 'ro', isa => 'Str', default => sub { '# ' }, ); has line_width => ( is => 'ro', isa => 'Int', default => sub { 78 }, ); has column_spacing => ( is => 'ro', isa => 'Str', default => sub { ' ' }, ); has extend_right => ( is => 'ro', isa => 'Bool', default => sub { '' }, ); has truncate_line_mark => ( is => 'ro', isa => 'Str', default => sub { '...' 
}, ); has column_errors => ( is => 'ro', isa => 'Str', default => sub { 'warn' }, ); has truncate_header_side => ( is => 'ro', isa => 'Str', default => sub { 'left' }, ); has strip_whitespace => ( is => 'ro', isa => 'Bool', default => sub { 1 }, ); has title => ( is => 'rw', isa => 'Str', predicate => 'has_title', ); has n_cols => ( is => 'rw', isa => 'Int', default => sub { 0 }, init_arg => undef, ); has cols => ( is => 'ro', isa => 'ArrayRef', init_arg => undef, default => sub { [] }, clearer => 'clear_cols', ); has lines => ( is => 'ro', isa => 'ArrayRef', init_arg => undef, default => sub { [] }, clearer => 'clear_lines', ); has truncate_headers => ( is => 'rw', isa => 'Bool', default => sub { undef }, init_arg => undef, clearer => 'clear_truncate_headers', ); sub BUILDARGS { my $class = shift; my $args = $class->SUPER::BUILDARGS(@_); if ( ($args->{line_width} || '') eq 'auto' ) { die "Cannot auto-detect line width because the Term::ReadKey module " . "is not installed" unless $have_term; ($args->{line_width}) = GetTerminalSize(); PTDEBUG && _d('Line width:', $args->{line_width}); } return $args; } sub set_columns { my ( $self, @cols ) = @_; my $min_hdr_wid = 0; # check that header fits on line my $used_width = 0; my @auto_width_cols; for my $i ( 0..$#cols ) { my $col = $cols[$i]; my $col_name = $col->{name}; my $col_len = length $col_name; die "Column does not have a name" unless defined $col_name; if ( $col->{width} ) { $col->{width_pct} = ceil(($col->{width} * 100) / $self->line_width()); PTDEBUG && _d('col:', $col_name, 'width:', $col->{width}, 'chars =', $col->{width_pct}, '%'); } if ( $col->{width_pct} ) { $used_width += $col->{width_pct}; } else { PTDEBUG && _d('Auto width col:', $col_name); $col->{auto_width} = 1; push @auto_width_cols, $i; } $col->{truncate} = 1 unless defined $col->{truncate}; $col->{truncate_mark} = '...' unless defined $col->{truncate_mark}; $col->{truncate_side} ||= 'right'; $col->{undef_value} = '' unless defined $col->{undef_value}; $col->{min_val} = 0; $col->{max_val} = 0; $min_hdr_wid += $col_len; $col->{header_width} = $col_len; $col->{right_most} = 1 if $i == $#cols; push @{$self->cols}, $col; } $self->n_cols( scalar @cols ); if ( ($used_width || 0) > 100 ) { die "Total width_pct for all columns is >100%"; } if ( @auto_width_cols ) { my $wid_per_col = int((100 - $used_width) / scalar @auto_width_cols); PTDEBUG && _d('Line width left:', (100-$used_width), '%;', 'each auto width col:', $wid_per_col, '%'); map { $self->cols->[$_]->{width_pct} = $wid_per_col } @auto_width_cols; } $min_hdr_wid += ($self->n_cols() - 1) * length $self->column_spacing(); PTDEBUG && _d('min header width:', $min_hdr_wid); if ( $min_hdr_wid > $self->line_width() ) { PTDEBUG && _d('Will truncate headers because min header width', $min_hdr_wid, '> line width', $self->line_width()); $self->truncate_headers(1); } return; } sub add_line { my ( $self, @vals ) = @_; my $n_vals = scalar @vals; if ( $n_vals != $self->n_cols() ) { $self->_column_error("Number of values $n_vals does not match " . "number of columns " . $self->n_cols()); } for my $i ( 0..($n_vals-1) ) { my $col = $self->cols->[$i]; my $val = defined $vals[$i] ? 
$vals[$i] : $col->{undef_value}; if ( $self->strip_whitespace() ) { $val =~ s/^\s+//g; $val =~ s/\s+$//; $vals[$i] = $val; } my $width = length $val; $col->{min_val} = min($width, ($col->{min_val} || $width)); $col->{max_val} = max($width, ($col->{max_val} || $width)); } push @{$self->lines}, \@vals; return; } sub get_report { my ( $self, %args ) = @_; $self->_calculate_column_widths(); if ( $self->truncate_headers() ) { $self->_truncate_headers(); } $self->_truncate_line_values(%args); my @col_fmts = $self->_make_column_formats(); my $fmt = $self->line_prefix() . join($self->column_spacing(), @col_fmts); PTDEBUG && _d('Format:', $fmt); (my $hdr_fmt = $fmt) =~ s/%([^-])/%-$1/g; my @lines; push @lines, $self->line_prefix() . $self->title() if $self->has_title(); push @lines, $self->_truncate_line( sprintf($hdr_fmt, map { $_->{name} } @{$self->cols}), strip => 1, mark => '', ); if ( $self->underline_header() ) { my @underlines = map { '=' x $_->{print_width} } @{$self->cols}; push @lines, $self->_truncate_line( sprintf($fmt, map { $_ || '' } @underlines), mark => '', ); } push @lines, map { my $vals = $_; my $i = 0; my @vals = map { my $val = defined $_ ? $_ : $self->cols->[$i++]->{undef_value}; $val = '' if !defined $val; $val =~ s/\n/ /g; $val; } @$vals; my $line = sprintf($fmt, @vals); if ( $self->extend_right() ) { $line; } else { $self->_truncate_line($line); } } @{$self->lines}; $self->clear_cols(); $self->clear_lines(); $self->clear_truncate_headers(); return join("\n", @lines) . "\n"; } sub truncate_value { my ( $self, $col, $val, $width, $side ) = @_; return $val if length $val <= $width; return $val if $col->{right_most} && $self->extend_right(); $side ||= $col->{truncate_side}; my $mark = $col->{truncate_mark}; if ( $side eq 'right' ) { $val = substr($val, 0, $width - length $mark); $val .= $mark; } elsif ( $side eq 'left') { $val = $mark . 
substr($val, -1 * $width + length $mark); } else { PTDEBUG && _d("I don't know how to", $side, "truncate values"); } return $val; } sub _calculate_column_widths { my ( $self ) = @_; my $extra_space = 0; foreach my $col ( @{$self->cols} ) { my $print_width = int($self->line_width() * ($col->{width_pct} / 100)); PTDEBUG && _d('col:', $col->{name}, 'width pct:', $col->{width_pct}, 'char width:', $print_width, 'min val:', $col->{min_val}, 'max val:', $col->{max_val}); if ( $col->{auto_width} ) { if ( $col->{min_val} && $print_width < $col->{min_val} ) { PTDEBUG && _d('Increased to min val width:', $col->{min_val}); $print_width = $col->{min_val}; } elsif ( $col->{max_val} && $print_width > $col->{max_val} ) { PTDEBUG && _d('Reduced to max val width:', $col->{max_val}); $extra_space += $print_width - $col->{max_val}; $print_width = $col->{max_val}; } } $col->{print_width} = $print_width; PTDEBUG && _d('print width:', $col->{print_width}); } PTDEBUG && _d('Extra space:', $extra_space); while ( $extra_space-- ) { foreach my $col ( @{$self->cols} ) { if ( $col->{auto_width} && ( $col->{print_width} < $col->{max_val} || $col->{print_width} < $col->{header_width}) ) { $col->{print_width}++; } } } return; } sub _truncate_headers { my ( $self, $col ) = @_; my $side = $self->truncate_header_side(); foreach my $col ( @{$self->cols} ) { my $col_name = $col->{name}; my $print_width = $col->{print_width}; next if length $col_name <= $print_width; $col->{name} = $self->truncate_value($col, $col_name, $print_width, $side); PTDEBUG && _d('Truncated hdr', $col_name, 'to', $col->{name}, 'max width:', $print_width); } return; } sub _truncate_line_values { my ( $self, %args ) = @_; my $n_vals = $self->n_cols() - 1; foreach my $vals ( @{$self->lines} ) { for my $i ( 0..$n_vals ) { my $col = $self->cols->[$i]; my $val = defined $vals->[$i] ? $vals->[$i] : $col->{undef_value}; my $width = length $val; if ( $col->{print_width} && $width > $col->{print_width} ) { if ( !$col->{truncate} ) { $self->_column_error("Value '$val' is too wide for column " . $col->{name}); } my $callback = $args{truncate_callback}; my $print_width = $col->{print_width}; $val = $callback ? $callback->($col, $val, $print_width) : $self->truncate_value($col, $val, $print_width); PTDEBUG && _d('Truncated val', $vals->[$i], 'to', $val, '; max width:', $print_width); $vals->[$i] = $val; } } } return; } sub _make_column_formats { my ( $self ) = @_; my @col_fmts; my $n_cols = $self->n_cols() - 1; for my $i ( 0..$n_cols ) { my $col = $self->cols->[$i]; my $width = $col->{right_most} && !$col->{right_justify} ? '' : $col->{print_width}; my $col_fmt = '%' . ($col->{right_justify} ? '' : '-') . $width . 's'; push @col_fmts, $col_fmt; } return @col_fmts; } sub _truncate_line { my ( $self, $line, %args ) = @_; my $mark = defined $args{mark} ? $args{mark} : $self->truncate_line_mark(); if ( $line ) { $line =~ s/\s+$// if $args{strip}; my $len = length($line); if ( $len > $self->line_width() ) { $line = substr($line, 0, $self->line_width() - length $mark); $line .= $mark if $mark; } } return $line; } sub _column_error { my ( $self, $err ) = @_; my $msg = "Column error: $err"; $self->column_errors() eq 'die' ? die $msg : warn $msg; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End ReportFormatter package # ########################################################################### # ########################################################################### # QueryReportFormatter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryReportFormatter.pm # t/lib/QueryReportFormatter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package QueryReportFormatter; use Lmo; use English qw(-no_match_vars); use POSIX qw(floor); Transformers->import(qw( shorten micro_t parse_timestamp unix_timestamp make_checksum percentage_of crc32 )); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use constant LINE_LENGTH => 74; use constant MAX_STRING_LENGTH => 10; { local $EVAL_ERROR; eval { require Quoter } }; { local $EVAL_ERROR; eval { require ReportFormatter } }; has Quoter => ( is => 'ro', isa => 'Quoter', default => sub { Quoter->new() }, ); has label_width => ( is => 'ro', isa => 'Int', ); has global_headers => ( is => 'ro', isa => 'ArrayRef', default => sub { [qw( total min max avg 95% stddev median)] }, ); has event_headers => ( is => 'ro', isa => 'ArrayRef', default => sub { [qw(pct total min max avg 95% stddev median)] }, ); has show_all => ( is => 'ro', isa => 'HashRef', default => sub { {} }, ); has ReportFormatter => ( is => 'ro', isa => 'ReportFormatter', builder => '_build_report_formatter', ); sub _build_report_formatter { return ReportFormatter->new( line_width => LINE_LENGTH, extend_right => 1, ); } sub BUILDARGS { my $class = shift; my $args = $class->SUPER::BUILDARGS(@_); foreach my $arg ( qw(OptionParser QueryRewriter) ) { die "I need a $arg argument" unless $args->{$arg}; } my $label_width = $args->{label_width} ||= 12; PTDEBUG && _d('Label width:', $label_width); my $o = delete $args->{OptionParser}; my $self = { %$args, options => { shorten => 1024, report_all => $o->get('report-all'), report_histogram => $o->get('report-histogram'), output => $o->got('output') ? $o->get('output') : '', }, num_format => '# %1$-'.$label_width.'s %2$3s %3$7s %4$7s %5$7s %6$7s %7$7s %8$7s %9$7s', bool_format => '# %1$-'.$label_width.'s %2$3d%% yes, %3$3d%% no', string_format => '# %1$-'.$label_width.'s %2$s', no_partitions => 0, hidden_attrib => { # Don't sort/print these attribs in the reports. arg => 1, # They're usually handled specially, or not fingerprint => 1, # printed at all. 
pos_in_log => 1, ts => 1, }, }; if (!defined($self->{max_hostname_length})) { $self->{max_hostname_length} = MAX_STRING_LENGTH; } if (!defined($self->{max_line_length})) { $self->{max_line_length} = LINE_LENGTH; } return $self; } sub print_reports { my ( $self, %args ) = @_; foreach my $arg ( qw(reports ea worst orderby groupby) ) { die "I need a $arg argument" unless exists $args{$arg}; } my $reports = $args{reports}; my $group = $args{group}; my $last_report; foreach my $report ( @$reports ) { PTDEBUG && _d('Printing', $report, 'report'); my $report_output = $self->$report(%args); if ( $report_output ) { print "\n" if !$last_report || !($group->{$last_report} && $group->{$report}); print $report_output; } else { PTDEBUG && _d('No', $report, 'report'); } $last_report = $report; } return; } sub rusage { my ( $self ) = @_; my ( $rss, $vsz, $user, $system ) = ( 0, 0, 0, 0 ); my $rusage = ''; eval { my $mem = `ps -o rss,vsz -p $PID 2>&1`; ( $rss, $vsz ) = $mem =~ m/(\d+)/g; ( $user, $system ) = times(); $rusage = sprintf "# %s user time, %s system time, %s rss, %s vsz\n", micro_t( $user, p_s => 1, p_ms => 1 ), micro_t( $system, p_s => 1, p_ms => 1 ), shorten( ($rss || 0) * 1_024 ), shorten( ($vsz || 0) * 1_024 ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); } return $rusage ? $rusage : "# Could not get rusage\n"; } sub date { my ( $self ) = @_; return "# Current date: " . (scalar localtime) . "\n"; } sub hostname { my ( $self ) = @_; my $hostname = `hostname`; if ( $hostname ) { chomp $hostname; return "# Hostname: $hostname\n"; } return; } sub files { my ( $self, %args ) = @_; if ( $args{files} ) { return "# Files: " . join(', ', map { $_->{name} } @{$args{files}}) . "\n"; } return; } sub header { my ( $self, %args ) = @_; foreach my $arg ( qw(ea orderby) ) { die "I need a $arg argument" unless defined $args{$arg}; } my $ea = $args{ea}; my $orderby = $args{orderby}; my $results = $ea->results(); my @result; my $global_cnt = $results->{globals}->{$orderby}->{cnt} || 0; my ($qps, $conc) = (0, 0); if ( $global_cnt && $results->{globals}->{ts} && ($results->{globals}->{ts}->{max} || '') gt ($results->{globals}->{ts}->{min} || '') ) { eval { my $min = parse_timestamp($results->{globals}->{ts}->{min}); my $max = parse_timestamp($results->{globals}->{ts}->{max}); my $diff = unix_timestamp($max) - unix_timestamp($min); $qps = $global_cnt / ($diff || 1); $conc = $results->{globals}->{$args{orderby}}->{sum} / $diff; }; } PTDEBUG && _d('global_cnt:', $global_cnt, 'unique:', scalar keys %{$results->{classes}}, 'qps:', $qps, 'conc:', $conc); my $line = sprintf( '# Overall: %s total, %s unique, %s QPS, %sx concurrency ', shorten($global_cnt, d=>1_000), shorten(scalar keys %{$results->{classes}}, d=>1_000), shorten($qps || 0, d=>1_000), shorten($conc || 0, d=>1_000)); $line .= ('_' x (LINE_LENGTH - length($line) + $self->label_width() - 12)); push @result, $line; if ( my $ts = $results->{globals}->{ts} ) { my $time_range = $self->format_time_range($ts) || "unknown"; push @result, "# Time range: $time_range"; } if ( $results->{globals}->{rate_limit} ) { print "# Rate limits apply\n"; } push @result, $self->make_global_header(); my $attribs = $self->sort_attribs( $ea ); foreach my $type ( qw(num innodb) ) { if ( $type eq 'innodb' && @{$attribs->{$type}} ) { push @result, "# InnoDB:"; }; NUM_ATTRIB: foreach my $attrib ( @{$attribs->{$type}} ) { next unless exists $results->{globals}->{$attrib}; my $store = $results->{globals}->{$attrib}; my $metrics = $ea->stats()->{globals}->{$attrib}; my $func = 
$attrib =~ m/time|wait$/ ? \µ_t : \&shorten; my @values = ( @{$store}{qw(sum min max)}, $store->{sum} / $store->{cnt}, @{$metrics}{qw(pct_95 stddev median)}, ); @values = map { defined $_ ? $func->($_) : '' } @values; push @result, sprintf $self->{num_format}, $self->make_label($attrib), '', @values; } } if ( @{$attribs->{bool}} ) { push @result, "# Boolean:"; my $printed_bools = 0; BOOL_ATTRIB: foreach my $attrib ( @{$attribs->{bool}} ) { next unless exists $results->{globals}->{$attrib}; my $store = $results->{globals}->{$attrib}; if ( $store->{sum} > 0 ) { push @result, sprintf $self->{bool_format}, $self->make_label($attrib), $self->bool_percents($store); $printed_bools = 1; } } pop @result unless $printed_bools; } return join("\n", map { s/\s+$//; $_ } @result) . "\n"; } sub query_report_values { my ($self, %args) = @_; foreach my $arg ( qw(ea worst orderby groupby) ) { die "I need a $arg argument" unless defined $arg; } my $ea = $args{ea}; my $groupby = $args{groupby}; my $worst = $args{worst}; my $q = $self->Quoter; my $qv = $self->{QueryReview}; my $qr = $self->{QueryRewriter}; my @values; ITEM: foreach my $top_event ( @$worst ) { my $item = $top_event->[0]; my $reason = $args{explain_why} ? $top_event->[1] : ''; my $rank = $top_event->[2]; my $stats = $ea->results->{classes}->{$item}; my $sample = $ea->results->{samples}->{$item}; my $samp_query = ($self->{options}->{output} eq 'secure-slowlog') ? $sample->{fingerprint} || '' : $sample->{arg} || ''; my %item_vals = ( item => $item, samp_query => $samp_query, rank => ($rank || 0), reason => $reason, ); my $review_vals; if ( $qv ) { $review_vals = $qv->get_review_info($item); next ITEM if $review_vals->{reviewed_by} && !$self->{options}->{report_all}; for my $col ( $qv->review_cols() ) { push @{$item_vals{review_vals}}, [$col, $review_vals->{$col}]; } } $item_vals{default_db} = $sample->{db} ? $sample->{db} : $stats->{db}->{unq} ? keys %{$stats->{db}->{unq}} : undef; $item_vals{tables} = [$self->{QueryParser}->extract_tables( query => $samp_query, default_db => $item_vals{default_db}, Quoter => $self->Quoter, )]; if ( $samp_query && ($args{variations} && @{$args{variations}}) ) { $item_vals{crc} = crc32($samp_query); } push @values, \%item_vals; } return \@values; } sub query_report { my ( $self, %args ) = @_; my $ea = $args{ea}; my $groupby = $args{groupby}; my $report_values = $self->query_report_values(%args); my $qr = $self->{QueryRewriter}; my $report = ''; if ( $args{print_header} ) { $report .= "# " . ( '#' x 72 ) . "\n" . "# Report grouped by $groupby\n" . '# ' . ( '#' x 72 ) . "\n\n"; } my $attribs = $self->sort_attribs( $ea ); ITEM: foreach my $vals ( @$report_values ) { my $item = $vals->{item}; $report .= "\n" if $vals->{rank} > 1; # space between each event report $report .= $self->event_report( %args, item => $item, sample => $ea->results->{samples}->{$item}, rank => $vals->{rank}, reason => $vals->{reason}, attribs => $attribs, db => $vals->{default_db}, ); if ( $self->{options}->{report_histogram} ) { $report .= $self->chart_distro( %args, attrib => $self->{options}->{report_histogram}, item => $vals->{item}, ); } if ( $vals->{review_vals} ) { $report .= "# Review information\n"; foreach my $elem ( @{$vals->{review_vals}} ) { my ($col, $val) = @$elem; if ( !$val || $val ne '0000-00-00 00:00:00' ) { # issue 202 $report .= sprintf "# %13s: %-s\n", $col, ($val ? $val : ''); } } } my $partitions_msg = $self->{no_partitions} ? 
'' : '/*!50100 PARTITIONS*/'; if ( $groupby eq 'fingerprint' ) { my $samp_query = $qr->shorten($vals->{samp_query}, $self->{options}->{shorten}) if $self->{options}->{shorten}; PTDEBUG && _d("Fingerprint\n# $vals->{item}\n"); $report .= $self->tables_report($vals->{tables}, \%args); if ( $vals->{crc} ) { $report.= "# CRC " . ($vals->{crc} % 1_000) . "\n"; } my $log_type = $args{log_type} || ''; my $mark = $args{no_v_format} ? '' : '\G'; if ( $item =~ m/^(?:[\(\s]*select|insert|replace)/ ) { if ( $item =~ m/^(?:insert|replace)/ ) { # No EXPLAIN $report .= "$samp_query${mark}\n"; } else { $report .= "# EXPLAIN $partitions_msg\n$samp_query${mark}\n"; $report .= $self->explain_report($samp_query, $vals->{default_db}); } } else { $report .= "$samp_query${mark}\n"; my $converted = $qr->convert_to_select($samp_query); if ( $converted && $converted =~ m/^[\(\s]*select/i ) { $report .= "# Converted for EXPLAIN\n# EXPLAIN $partitions_msg\n$converted${mark}\n"; } } } else { if ( $groupby eq 'tables' ) { my ( $db, $tbl ) = $self->Quoter->split_unquote($item); $report .= $self->tables_report([ [$db, $tbl] ], \%args); } $report .= "$item\n"; } } return $report; } sub event_report_values { my ($self, %args) = @_; my $ea = $args{ea}; my $item = $args{item}; my $orderby = $args{orderby}; my $results = $ea->results(); my %vals; my $store = $results->{classes}->{$item}; return unless $store; my $global_cnt = $results->{globals}->{$orderby}->{cnt}; my $class_cnt = $store->{$orderby}->{cnt}; my ($qps, $conc) = (0, 0); if ( $global_cnt && $store->{ts} && ($store->{ts}->{max} || '') gt ($store->{ts}->{min} || '') ) { eval { my $min = parse_timestamp($store->{ts}->{min}); my $max = parse_timestamp($store->{ts}->{max}); my $diff = unix_timestamp($max) - unix_timestamp($min); $qps = $class_cnt / $diff; $conc = $store->{$orderby}->{sum} / $diff; }; } $vals{groupby} = $ea->{groupby}; $vals{qps} = $qps || 0; $vals{concurrency} = $conc || 0; $vals{checksum} = make_checksum($item); $vals{pos_in_log} = $results->{samples}->{$item}->{pos_in_log} || 0; $vals{reason} = $args{reason}; $vals{variance_to_mean} = do { my $query_time = $ea->metrics(where => $item, attrib => 'Query_time'); $query_time->{stddev}**2 / ($query_time->{avg} || 1) }; $vals{counts} = { class_cnt => $class_cnt, global_cnt => $global_cnt, }; if ( my $ts = $store->{ts}) { $vals{time_range} = $self->format_time_range($ts) || "unknown"; } my $attribs = $args{attribs}; if ( !$attribs ) { $attribs = $self->sort_attribs( $ea ); } $vals{attributes} = { map { $_ => [] } qw(num innodb bool string) }; foreach my $type ( qw(num innodb) ) { NUM_ATTRIB: foreach my $attrib ( @{$attribs->{$type}} ) { next NUM_ATTRIB unless exists $store->{$attrib}; my $vals = $store->{$attrib}; next unless scalar %$vals; my $pct; my $func = $attrib =~ m/time|wait$/ ? \µ_t : \&shorten; my $metrics = $ea->stats()->{classes}->{$item}->{$attrib}; my @values = ( @{$vals}{qw(sum min max)}, $vals->{sum} / $vals->{cnt}, @{$metrics}{qw(pct_95 stddev median)}, ); @values = map { defined $_ ? 
$func->($_) : '' } @values; $pct = percentage_of( $vals->{sum}, $results->{globals}->{$attrib}->{sum}); push @{$vals{attributes}{$type}}, [ $attrib, $pct, @values ]; } } if ( @{$attribs->{bool}} ) { BOOL_ATTRIB: foreach my $attrib ( @{$attribs->{bool}} ) { next BOOL_ATTRIB unless exists $store->{$attrib}; my $vals = $store->{$attrib}; next unless scalar %$vals; if ( $vals->{sum} > 0 ) { push @{$vals{attributes}{bool}}, [ $attrib, $self->bool_percents($vals) ]; } } } if ( @{$attribs->{string}} ) { STRING_ATTRIB: foreach my $attrib ( @{$attribs->{string}} ) { next STRING_ATTRIB unless exists $store->{$attrib}; my $vals = $store->{$attrib}; next unless scalar %$vals; push @{$vals{attributes}{string}}, [ $attrib, $vals ]; } } return \%vals; } sub event_report { my ( $self, %args ) = @_; foreach my $arg ( qw(ea item orderby) ) { die "I need a $arg argument" unless defined $args{$arg}; } my $item = $args{item}; my $val = $self->event_report_values(%args); my @result; return "# No such event $item\n" unless $val; my $line = sprintf( '# %s %d: %s QPS, %sx concurrency, ID 0x%s at byte %.f ', ($val->{groupby} eq 'fingerprint' ? 'Query' : 'Item'), $args{rank} || 0, shorten($val->{qps}, d=>1_000), shorten($val->{concurrency}, d=>1_000), $val->{checksum}, $val->{pos_in_log}, ); my $underscores = LINE_LENGTH - length($line) + $self->label_width() - 12; if ( $underscores < 0 ) { $underscores = 0; } $line .= ('_' x $underscores); push @result, $line; if ( $val->{reason} ) { push @result, "# This item is included in the report because it matches " . ($val->{reason} eq 'top' ? '--limit.' : '--outliers.'); } push @result, sprintf("# Scores: V/M = %.2f", $val->{variance_to_mean} ); if ( $val->{time_range} ) { push @result, "# Time range: $val->{time_range}"; } push @result, $self->make_event_header(); push @result, sprintf $self->{num_format}, 'Count', percentage_of($val->{counts}{class_cnt}, $val->{counts}{global_cnt}), $val->{counts}{class_cnt}, map { '' } (1..8); my $attribs = $val->{attributes}; foreach my $type ( qw(num innodb) ) { if ( $type eq 'innodb' && @{$attribs->{$type}} ) { push @result, "# InnoDB:"; }; NUM_ATTRIB: foreach my $attrib ( @{$attribs->{$type}} ) { my ($attrib_name, @vals) = @$attrib; push @result, sprintf $self->{num_format}, $self->make_label($attrib_name), @vals; } } if ( @{$attribs->{bool}} ) { push @result, "# Boolean:"; BOOL_ATTRIB: foreach my $attrib ( @{$attribs->{bool}} ) { my ($attrib_name, @vals) = @$attrib; push @result, sprintf $self->{bool_format}, $self->make_label($attrib_name), @vals; } } if ( @{$attribs->{string}} ) { push @result, "# String:"; STRING_ATTRIB: foreach my $attrib ( @{$attribs->{string}} ) { my ($attrib_name, $vals) = @$attrib; push @result, sprintf $self->{string_format}, $self->make_label($attrib_name), $self->format_string_list($attrib_name, $vals, $val->{counts}{class_cnt}); } } return join("\n", map { s/\s+$//; $_ } @result) . "\n"; } sub chart_distro { my ( $self, %args ) = @_; foreach my $arg ( qw(ea item attrib) ) { die "I need a $arg argument" unless defined $args{$arg}; } my $ea = $args{ea}; my $item = $args{item}; my $attrib = $args{attrib}; my $results = $ea->results(); my $store = $results->{classes}->{$item}->{$attrib}; my $vals = $store->{all}; return "" unless defined $vals && scalar %$vals; my @buck_tens = $ea->buckets_of(10); my @distro = map { 0 } (0 .. 
7); my @buckets = map { 0 } (0..999); map { $buckets[$_] = $vals->{$_} } keys %$vals; $vals = \@buckets; # repoint vals from given hashref to our array map { $distro[$buck_tens[$_]] += $vals->[$_] } (1 .. @$vals - 1); my $vals_per_mark; # number of vals represented by 1 #-mark my $max_val = 0; my $max_disp_width = 64; my $bar_fmt = "# %5s%s"; my @distro_labels = qw(1us 10us 100us 1ms 10ms 100ms 1s 10s+); my @results = "# $attrib distribution"; foreach my $n_vals ( @distro ) { $max_val = $n_vals if $n_vals > $max_val; } $vals_per_mark = $max_val / $max_disp_width; foreach my $i ( 0 .. $#distro ) { my $n_vals = $distro[$i]; my $n_marks = $n_vals / ($vals_per_mark || 1); $n_marks = 1 if $n_marks < 1 && $n_vals > 0; my $bar = ($n_marks ? ' ' : '') . '#' x $n_marks; push @results, sprintf $bar_fmt, $distro_labels[$i], $bar; } return join("\n", @results) . "\n"; } sub profile { my ( $self, %args ) = @_; foreach my $arg ( qw(ea worst groupby) ) { die "I need a $arg argument" unless defined $arg; } my $ea = $args{ea}; my $worst = $args{worst}; my $other = $args{other}; my $groupby = $args{groupby}; my $qr = $self->{QueryRewriter}; my $results = $ea->results(); my $total_r = $results->{globals}->{Query_time}->{sum} || 0; my @profiles; foreach my $top_event ( @$worst ) { my $item = $top_event->[0]; my $rank = $top_event->[2]; my $stats = $ea->results->{classes}->{$item}; my $sample = $ea->results->{samples}->{$item}; my $samp_query = $sample->{arg} || ''; my $query_time = $ea->metrics(where => $item, attrib => 'Query_time'); my %profile = ( rank => $rank, r => $stats->{Query_time}->{sum}, cnt => $stats->{Query_time}->{cnt}, sample => $groupby eq 'fingerprint' ? $qr->distill($samp_query, %{$args{distill_args}}) : $item, id => $groupby eq 'fingerprint' ? make_checksum($item) : '', vmr => ($query_time->{stddev}**2) / ($query_time->{avg} || 1), ); push @profiles, \%profile; } my $report = $self->ReportFormatter(); $report->title('Profile'); my @cols = ( { name => 'Rank', right_justify => 1, }, { name => 'Query ID', }, { name => 'Response time', right_justify => 1, }, { name => 'Calls', right_justify => 1, }, { name => 'R/Call', right_justify => 1, }, { name => 'V/M', right_justify => 1, width => 5, }, { name => 'Item', }, ); $report->set_columns(@cols); foreach my $item ( sort { $a->{rank} <=> $b->{rank} } @profiles ) { my $rt = sprintf('%10.4f', $item->{r}); my $rtp = sprintf('%4.1f%%', $item->{r} / ($total_r || 1) * 100); my $rc = sprintf('%8.4f', $item->{r} / $item->{cnt}); my $vmr = sprintf('%4.2f', $item->{vmr}); my @vals = ( $item->{rank}, "0x$item->{id}", "$rt $rtp", $item->{cnt}, $rc, $vmr, $item->{sample}, ); $report->add_line(@vals); } if ( $other && @$other ) { my $misc = { r => 0, cnt => 0, }; foreach my $other_event ( @$other ) { my $item = $other_event->[0]; my $stats = $ea->results->{classes}->{$item}; $misc->{r} += $stats->{Query_time}->{sum}; $misc->{cnt} += $stats->{Query_time}->{cnt}; } my $rt = sprintf('%10.4f', $misc->{r}); my $rtp = sprintf('%4.1f%%', $misc->{r} / ($total_r || 1) * 100); my $rc = sprintf('%8.4f', $misc->{r} / $misc->{cnt}); $report->add_line( "MISC", "0xMISC", "$rt $rtp", $misc->{cnt}, $rc, '0.0', # variance-to-mean ratio is not meaningful here "<".scalar @$other." 
ITEMS>", ); } return $report->get_report(); } sub prepared { my ( $self, %args ) = @_; foreach my $arg ( qw(ea worst groupby) ) { die "I need a $arg argument" unless defined $arg; } my $ea = $args{ea}; my $worst = $args{worst}; my $groupby = $args{groupby}; my $qr = $self->{QueryRewriter}; my @prepared; # prepared statements my %seen_prepared; # report each PREP-EXEC pair once my $total_r = 0; foreach my $top_event ( @$worst ) { my $item = $top_event->[0]; my $rank = $top_event->[2]; my $stats = $ea->results->{classes}->{$item}; my $sample = $ea->results->{samples}->{$item}; my $samp_query = $sample->{arg} || ''; $total_r += $stats->{Query_time}->{sum}; next unless $stats->{Statement_id} && $item =~ m/^(?:prepare|execute) /; my ($prep_stmt, $prep, $prep_r, $prep_cnt); my ($exec_stmt, $exec, $exec_r, $exec_cnt); if ( $item =~ m/^prepare / ) { $prep_stmt = $item; ($exec_stmt = $item) =~ s/^prepare /execute /; } else { ($prep_stmt = $item) =~ s/^execute /prepare /; $exec_stmt = $item; } if ( !$seen_prepared{$prep_stmt}++ ) { if ( exists $ea->results->{classes}->{$exec_stmt} ) { $exec = $ea->results->{classes}->{$exec_stmt}; $exec_r = $exec->{Query_time}->{sum}; $exec_cnt = $exec->{Query_time}->{cnt}; } else { PTDEBUG && _d('Statement prepared but not executed:', $item); $exec_r = 0; $exec_cnt = 0; } if ( exists $ea->results->{classes}->{$prep_stmt} ) { $prep = $ea->results->{classes}->{$prep_stmt}; $prep_r = $prep->{Query_time}->{sum}; $prep_cnt = scalar keys %{$prep->{Statement_id}->{unq}}, } else { PTDEBUG && _d('Statement executed but not prepared:', $item); $prep_r = 0; $prep_cnt = 0; } push @prepared, { prep_r => $prep_r, prep_cnt => $prep_cnt, exec_r => $exec_r, exec_cnt => $exec_cnt, rank => $rank, sample => $groupby eq 'fingerprint' ? $qr->distill($samp_query, %{$args{distill_args}}) : $item, id => $groupby eq 'fingerprint' ? 
make_checksum($item) : '', }; } } return unless scalar @prepared; my $report = $self->ReportFormatter(); $report->title('Prepared statements'); $report->set_columns( { name => 'Rank', right_justify => 1, }, { name => 'Query ID', }, { name => 'PREP', right_justify => 1, }, { name => 'PREP Response', right_justify => 1, }, { name => 'EXEC', right_justify => 1, }, { name => 'EXEC Response', right_justify => 1, }, { name => 'Item', }, ); foreach my $item ( sort { $a->{rank} <=> $b->{rank} } @prepared ) { my $exec_rt = sprintf('%10.4f', $item->{exec_r}); my $exec_rtp = sprintf('%4.1f%%',$item->{exec_r}/($total_r || 1) * 100); my $prep_rt = sprintf('%10.4f', $item->{prep_r}); my $prep_rtp = sprintf('%4.1f%%',$item->{prep_r}/($total_r || 1) * 100); $report->add_line( $item->{rank}, "0x$item->{id}", $item->{prep_cnt} || 0, "$prep_rt $prep_rtp", $item->{exec_cnt} || 0, "$exec_rt $exec_rtp", $item->{sample}, ); } return $report->get_report(); } sub make_global_header { my ( $self ) = @_; my @lines; push @lines, sprintf $self->{num_format}, "Attribute", '', @{$self->global_headers()}; push @lines, sprintf $self->{num_format}, (map { "=" x $_ } $self->label_width()), (map { " " x $_ } qw(3)), # no pct column in global header (map { "=" x $_ } qw(7 7 7 7 7 7 7)); return @lines; } sub make_event_header { my ( $self ) = @_; return @{$self->{event_header_lines}} if $self->{event_header_lines}; my @lines; push @lines, sprintf $self->{num_format}, "Attribute", @{$self->event_headers()}; push @lines, sprintf $self->{num_format}, map { "=" x $_ } ($self->label_width(), qw(3 7 7 7 7 7 7 7)); $self->{event_header_lines} = \@lines; return @lines; } sub make_label { my ( $self, $val ) = @_; return '' unless $val; $val =~ s/_/ /g; if ( $val =~ m/^InnoDB/ ) { $val =~ s/^InnoDB //; $val = $val eq 'trx id' ? "InnoDB trxID" : substr($val, 0, $self->label_width()); } $val = $val eq 'user' ? 'Users' : $val eq 'db' ? 'Databases' : $val eq 'Query time' ? 'Exec time' : $val eq 'host' ? 'Hosts' : $val eq 'Error no' ? 'Errors' : $val eq 'bytes' ? 'Query size' : $val eq 'Tmp disk tables' ? 'Tmp disk tbl' : $val eq 'Tmp table sizes' ? 'Tmp tbl size' : substr($val, 0, $self->label_width); return $val; } sub bool_percents { my ( $self, $vals ) = @_; my $p_true = percentage_of($vals->{sum}, $vals->{cnt}); my $p_false = percentage_of(($vals->{cnt} - $vals->{sum}), $vals->{cnt}); return $p_true, $p_false; } sub format_string_list { my ( $self, $attrib, $vals, $class_cnt ) = @_; if ( !exists $vals->{unq} ) { return ($vals->{cnt}); } my $show_all = $self->show_all(); my $cnt_for = $vals->{unq}; if ( 1 == keys %$cnt_for ) { my ($str) = keys %$cnt_for; $str = substr($str, 0, LINE_LENGTH - 30) . '...' if length $str > LINE_LENGTH - 30; return $str; } my $line = ''; my @top = sort { $cnt_for->{$b} <=> $cnt_for->{$a} || $a cmp $b } keys %$cnt_for; my $i = 0; foreach my $str ( @top ) { my $print_str; if ( $str =~ m/(?:\d+\.){3}\d+/ ) { $print_str = $str; # Do not shorten IP addresses. } elsif ( $self->{max_hostname_length} > 0 and length $str > $self->{max_hostname_length} ) { $print_str = substr($str, 0, $self->{max_hostname_length}) . 
'...'; } else { $print_str = $str; } my $p = percentage_of($cnt_for->{$str}, $class_cnt); $print_str .= " ($cnt_for->{$str}/$p%)"; my $trim_length = LINE_LENGTH; if ($self->{max_hostname_length} == 0 or $self->{max_hostname_length} > LINE_LENGTH) { $trim_length = $self->{max_hostname_length}; } if ( $self->{max_line_length} > 0 and !$show_all->{$attrib} ) { last if (length $line) + (length $print_str) > $self->{max_line_length} - 27; } $line .= "$print_str, "; $i++; } $line =~ s/, $//; if ( $i < @top ) { $line .= "... " . (@top - $i) . " more"; } return $line; } sub sort_attribs { my ( $self, $ea ) = @_; my $attribs = $ea->get_attributes(); return unless $attribs && @$attribs; PTDEBUG && _d("Sorting attribs:", @$attribs); my @num_order = qw( Query_time Exec_orig_time Transmit_time Lock_time Rows_sent Rows_examined Rows_affected Rows_read Bytes_sent Merge_passes Tmp_tables Tmp_disk_tables Tmp_table_sizes bytes ); my $i = 0; my %num_order = map { $_ => $i++ } @num_order; my (@num, @innodb, @bool, @string); ATTRIB: foreach my $attrib ( @$attribs ) { next if $self->{hidden_attrib}->{$attrib}; my $type = $ea->type_for($attrib) || 'string'; if ( $type eq 'num' ) { if ( $attrib =~ m/^InnoDB_/ ) { push @innodb, $attrib; } else { push @num, $attrib; } } elsif ( $type eq 'bool' ) { push @bool, $attrib; } elsif ( $type eq 'string' ) { push @string, $attrib; } else { PTDEBUG && _d("Unknown attrib type:", $type, "for", $attrib); } } @num = sort { pref_sort($a, $num_order{$a}, $b, $num_order{$b}) } @num; @innodb = sort { uc $a cmp uc $b } @innodb; @bool = sort { uc $a cmp uc $b } @bool; @string = sort { uc $a cmp uc $b } @string; return { num => \@num, innodb => \@innodb, string => \@string, bool => \@bool, }; } sub pref_sort { my ( $attrib_a, $order_a, $attrib_b, $order_b ) = @_; if ( !defined $order_a && !defined $order_b ) { return $attrib_a cmp $attrib_b; } if ( defined $order_a && defined $order_b ) { return $order_a <=> $order_b; } if ( !defined $order_a ) { return 1; } else { return -1; } } sub tables_report { my ( $self, $tables_ref, $args_ref ) = @_; return '' unless @$tables_ref; my $q = $self->Quoter(); my $tables = ""; my $mark = $args_ref->{no_v_format} ? '' : '\G'; foreach my $db_tbl ( @$tables_ref ) { my ( $db, $tbl ) = @$db_tbl; $tables .= '# SHOW TABLE STATUS' . ($db ? " FROM `$db`" : '') . " LIKE '$tbl'${mark}\n"; $tables .= "# SHOW CREATE TABLE " . $q->quote(grep { $_ } @$db_tbl) . "${mark}\n"; } return $tables ? "# Tables\n$tables" : "# No tables\n"; } sub explain_report { my ( $self, $query, $db ) = @_; return '' unless $query; my $dbh = $self->{dbh}; my $q = $self->Quoter(); my $qp = $self->{QueryParser}; return '' unless $dbh && $q && $qp; my $explain = ''; eval { if ( !$qp->has_derived_table($query) ) { if ( $db ) { PTDEBUG && _d($dbh, "USE", $db); $dbh->do("USE " . $q->quote($db)); } my $sth; eval { $sth = $dbh->prepare("EXPLAIN /*!50100 PARTITIONS*/ $query"); $sth->execute(); }; if ($EVAL_ERROR) { # MySQL 8.0+ doesn't support PARTITIONS $self->{no_partitions} = 1; $sth = $dbh->prepare("EXPLAIN $query"); $sth->execute(); } $sth->execute(); my $i = 1; while ( my @row = $sth->fetchrow_array() ) { $explain .= "# *************************** $i. " . "row ***************************\n"; foreach my $j ( 0 .. $#row ) { # In some OSes/Perl versions, the filtered row can be reported with or without decimals. # Example, in Ubuntu 16.04 it is being printed as 100.00 while in Ubuntu 18.04 it is # being printed as 100. 
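# For example, sprintf('%.02f', 100) and sprintf('%.02f', '100.00') both
# yield '100.00', so the forced format below gives identical output on both.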
# To make it testeable, we need to have a consistent format across versions. my $value_format = $sth->{NAME}->[$j] eq 'filtered' ? "%.02f" : "%s"; $explain .= sprintf "# %13s: $value_format\n", $sth->{NAME}->[$j], defined $row[$j] ? $row[$j] : 'NULL'; } $i++; # next row number } } }; if ( $EVAL_ERROR ) { PTDEBUG && _d("EXPLAIN failed:", $query, $EVAL_ERROR); } return $explain ? $explain : "# EXPLAIN failed: $EVAL_ERROR"; } sub format_time_range { my ( $self, $vals ) = @_; my $min = parse_timestamp($vals->{min} || ''); my $max = parse_timestamp($vals->{max} || ''); if ( $min && $max && $min eq $max ) { return "all events occurred at $min"; } my ($min_day) = split(' ', $min) if $min; my ($max_day) = split(' ', $max) if $max; if ( ($min_day || '') eq ($max_day || '') ) { (undef, $max) = split(' ', $max); } return $min && $max ? "$min to $max" : ''; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End QueryReportFormatter package # ########################################################################### # ########################################################################### # JSONReportFormatter package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/JSONReportFormatter.pm # t/lib/JSONReportFormatter.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package JSONReportFormatter; use Lmo; use List::Util qw(sum); use Transformers qw(make_checksum parse_timestamp); use constant PTDEBUG => $ENV{PTDEBUG} || 0; my $have_json = eval { require JSON }; our $pretty_json = $ENV{PTTEST_PRETTY_JSON} || 0; our $sorted_json = $ENV{PTTEST_PRETTY_JSON} || 0; extends qw(QueryReportFormatter); has 'QueryRewriter' => ( is => 'ro', isa => 'Object', required => 1, ); has 'QueryParser' => ( is => 'ro', isa => 'Object', required => 1, ); has 'Quoter' => ( is => 'ro', isa => 'Object', required => 1, ); has _json => ( is => 'ro', init_arg => undef, builder => '_build_json', ); has 'max_query_length' => ( is => 'rw', isa => 'Int', required => 0, default => sub { return 10_000; }, # characters, not bytes ); has 'max_fingerprint_length' => ( is => 'rw', isa => 'Int', required => 0, default => sub { return 5_000; }, # characters, not bytes ); sub _build_json { return unless $have_json; return JSON->new->utf8 ->pretty($pretty_json) ->canonical($sorted_json); } sub encode_json { my ($self, $encode) = @_; if ( my $json = $self->_json ) { return $json->encode($encode); } else { return Transformers::encode_json($encode); } } override [qw(rusage date hostname files header profile prepared)] => sub { return; }; override event_report => sub { my ($self, %args) = @_; return $self->event_report_values(%args); }; override query_report => sub { my ($self, %args) = @_; foreach my $arg ( qw(ea worst orderby groupby) ) { die "I need a $arg argument" unless defined $arg; } my $ea = $args{ea}; my $worst = $args{worst}; my $orderby = $args{orderby}; my $groupby = $args{groupby}; my $results = $ea->results(); my @attribs = @{$ea->get_attributes()}; my $q = $self->Quoter; my $qr = $self->QueryRewriter; my $global_data = { metrics => {}, files => $args{files}, ($args{resume} && scalar keys 
%{$args{resume}} ? (resume => $args{resume}) : ()), }; my $global_cnt = $results->{globals}->{$orderby}->{cnt} || 0; my $global_unq = scalar keys %{$results->{classes}}; my ($qps, $conc) = (0, 0); if ( $global_cnt && $results->{globals}->{ts} && ($results->{globals}->{ts}->{max} || '') gt ($results->{globals}->{ts}->{min} || '') ) { eval { my $min = parse_timestamp($results->{globals}->{ts}->{min}); my $max = parse_timestamp($results->{globals}->{ts}->{max}); my $diff = unix_timestamp($max) - unix_timestamp($min); $qps = $global_cnt / ($diff || 1); $conc = $results->{globals}->{$orderby}->{sum} / $diff; }; } $global_data->{query_count} = $global_cnt; $global_data->{unique_query_count} = $global_unq; $global_data->{queries_per_second} = $qps if $qps; $global_data->{concurrency} = $conc if $conc; if ( exists $results->{globals}->{rate_limit} ) { my $rate_limit = $results->{globals}->{rate_limit}->{min} || ''; my ($type, $limit) = $rate_limit =~ m/^(\w+):(\d+)$/; if ( $type && $limit ) { $global_data->{rate_limit} = { type => $type, limit => int($limit), }; } else { $global_data->{rate_limit}->{error} = "Invalid rate limit: $rate_limit"; } if ( ($results->{globals}->{rate_limit}->{min} || '') ne ($results->{globals}->{rate_limit}->{max} || '') ) { $global_data->{rate_limit}->{diff} = 1; } } my %hidden_attrib = ( arg => 1, fingerprint => 1, pos_in_log => 1, ts => 1, ); foreach my $attrib ( grep { !$hidden_attrib{$_} } @attribs ) { my $type = $ea->type_for($attrib) || 'string'; next if $type eq 'string'; next unless exists $results->{globals}->{$attrib}; my $store = $results->{globals}->{$attrib}; my $metrics = $ea->stats()->{globals}->{$attrib}; my $int = $attrib =~ m/(?:time|wait)$/ ? 0 : 1; my $real_attrib = $attrib eq 'bytes' ? 'Query_length' : $attrib; if ( $type eq 'num' ) { foreach my $m ( qw(sum min max) ) { if ( $int ) { $global_data->{metrics}->{$real_attrib}->{$m} = sprintf('%d', $store->{$m} || 0); } else { # microsecond $global_data->{metrics}->{$real_attrib}->{$m} = sprintf('%.6f', $store->{$m} || 0); } } foreach my $m ( qw(pct_95 stddev median) ) { if ( $int ) { $global_data->{metrics}->{$real_attrib}->{$m} = sprintf('%d', $metrics->{$m} || 0); } else { # microsecond $global_data->{metrics}->{$real_attrib}->{$m} = sprintf('%.6f', $metrics->{$m} || 0); } } if ( $int ) { $global_data->{metrics}->{$real_attrib}->{avg} = sprintf('%d', $store->{sum} / $store->{cnt}); } else { $global_data->{metrics}->{$real_attrib}->{avg} = sprintf('%.6f', $store->{sum} / $store->{cnt}); } } elsif ( $type eq 'bool' ) { my $store = $results->{globals}->{$real_attrib}; $global_data->{metrics}->{$real_attrib}->{cnt} = sprintf('%d', $store->{sum}); } } my @classes; foreach my $worst_info ( @$worst ) { my $item = $worst_info->[0]; my $stats = $ea->results->{classes}->{$item}; my $sample = $ea->results->{samples}->{$item}; my $all_log_pos = $ea->{result_classes}->{$item}->{pos_in_log}->{all}; my $times_seen = sum values %$all_log_pos; my $distill = $groupby eq 'fingerprint' ? $qr->distill($sample->{arg}) : undef; my $fingerprint = substr($item, 0, $self->max_fingerprint_length); my $checksum = make_checksum($item); my $class = { checksum => $checksum, fingerprint => $fingerprint, distillate => $distill, attribute => $groupby, query_count => $times_seen, $args{anon} ? () : ( example => { query => substr($sample->{arg}, 0, $self->max_query_length), ts => $sample->{ts} ? 
parse_timestamp($sample->{ts}) : undef, Query_time => $sample->{Query_time}, }, ), }; my %metrics; foreach my $attrib ( @attribs ) { my $real_attrib = $attrib eq 'bytes' ? 'Query_length' : $attrib; next if $real_attrib eq 'Rows_affected' && $distill && $distill =~ m/^(?:SELECT|SHOW|SET|ADMIN)/; $metrics{$real_attrib} = $ea->metrics( attrib => $attrib, where => $item, ); } foreach my $attrib ( keys %metrics ) { if ( ! grep { $_ } values %{$metrics{$attrib}} ) { delete $metrics{$attrib}; next; } delete $metrics{pos_in_log}; delete $metrics{$attrib}->{cnt}; if ($attrib eq 'ts') { my $ts = delete $metrics{ts}; foreach my $thing ( qw(min max) ) { next unless defined $ts && defined $ts->{$thing}; $ts->{$thing} = parse_timestamp($ts->{$thing}); } $class->{ts_min} = $ts->{min}; $class->{ts_max} = $ts->{max}; } else { my $type = $attrib eq 'Query_length' ? 'num' : $ea->type_for($attrib) || 'string'; if ( $type eq 'string' ) { $metrics{$attrib} = { value => $metrics{$attrib}{max} }; } elsif ( $type eq 'num' ) { foreach my $value ( values %{$metrics{$attrib}} ) { next unless defined $value; if ( $attrib =~ m/_(?:time|wait)$/ ) { $value = sprintf('%.6f', $value); } else { $value = sprintf('%d', $value); } } } elsif ( $type eq 'bool' ) { $metrics{$attrib} = { yes => sprintf('%d', $metrics{$attrib}->{sum}), }; } } } my @tables; if ( $groupby eq 'fingerprint' ) { my $default_db = $sample->{db} ? $sample->{db} : $stats->{db}->{unq} ? keys %{$stats->{db}->{unq}} : undef; my @table_names = $self->QueryParser->extract_tables( query => $sample->{arg} || '', default_db => $default_db, Quoter => $q, ); my $mark = $args{no_v_format} ? '' : '\G'; foreach my $db_tbl ( @table_names ) { my ( $db, $tbl ) = @$db_tbl; my $status = 'SHOW TABLE STATUS' . ($db ? " FROM `$db`" : '') . " LIKE '$tbl'${mark}"; my $create = "SHOW CREATE TABLE " . $q->quote(grep { $_ } @$db_tbl) . ${mark}; push @tables, { status => $status, create => $create }; } if ( !$args{anon} ) { if ( $item =~ m/^(?:[\(\s]*select|insert|replace)/ ) { if ( $item =~ m/^(?:insert|replace)/ ) { } else { } } else { my $converted = $qr->convert_to_select( $sample->{arg} || '', ); if ( $converted && $converted =~ m/^[\(\s]*select/i ) { $class->{example}->{as_select} = $converted; } } } } my $vals = $stats->{Query_time}->{all}; if ( defined $vals && scalar %$vals ) { my @buck_tens = $ea->buckets_of(10); my @distro = map { 0 } (0 .. 7); my @buckets = map { 0 } (0..999); map { $buckets[$_] = $vals->{$_} } keys %$vals; $vals = \@buckets; # repoint vals from given hashref to our array map { $distro[$buck_tens[$_]] += $vals->[$_] } (1 .. @$vals - 1); $class->{histograms}->{Query_time} = \@distro; } # histogram $class->{metrics} = \%metrics; if ( @tables ) { $class->{tables} = \@tables; } push @classes, $class; } my $data = { global => $global_data, classes => \@classes, }; my $json = $self->encode_json($data); $json .= "\n" unless $json =~ /\n\Z/; return $json; }; no Lmo; 1; } # ########################################################################### # End JSONReportFormatter package # ########################################################################### # ########################################################################### # EventTimeline package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/EventTimeline.pm # t/lib/EventTimeline.t # See https://launchpad.net/percona-toolkit for more information. 
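# EventTimeline groups consecutive events that share the same value(s) of the
# configured groupby attribute(s), counting the events in each group and
# tracking min/max/sum of the remaining attributes; report() prints one line
# per group. A minimal usage sketch (the attribute names are illustrative,
# not required by the package):
#
#   my $et = EventTimeline->new(
#      groupby    => [qw(fingerprint)],
#      attributes => [qw(ts Query_time)],
#   );
#   $et->aggregate($_) for @events;                  # event hashrefs from a parser
#   $et->report($et->results(), sub { print @_ });   # one line per group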
# ########################################################################### { package EventTimeline; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; Transformers->import(qw(parse_timestamp secs_to_time unix_timestamp)); use constant KEY => 0; use constant CNT => 1; use constant ATT => 2; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(groupby attributes) ) { die "I need a $arg argument" unless $args{$arg}; } my %is_groupby = map { $_ => 1 } @{$args{groupby}}; return bless { groupby => $args{groupby}, attributes => [ grep { !$is_groupby{$_} } @{$args{attributes}} ], results => [], }, $class; } sub reset_aggregated_data { my ( $self ) = @_; $self->{results} = []; } sub aggregate { my ( $self, $event ) = @_; my $handler = $self->{handler}; if ( !$handler ) { $handler = $self->make_handler($event); $self->{handler} = $handler; } return unless $handler; $handler->($event); } sub results { my ( $self ) = @_; return $self->{results}; } sub make_handler { my ( $self, $event ) = @_; my $float_re = qr{[+-]?(?:(?=\d|[.])\d*(?:[.])\d{0,})?(?:[E](?:[+-]?\d+)|)}i; my @lines; # lines of code for the subroutine foreach my $attrib ( @{$self->{attributes}} ) { my ($val) = $event->{$attrib}; next unless defined $val; # Can't decide type if it's undef. my $type = $val =~ m/^(?:\d+|$float_re)$/o ? 'num' : $val =~ m/^(?:Yes|No)$/ ? 'bool' : 'string'; PTDEBUG && _d('Type for', $attrib, 'is', $type, '(sample:', $val, ')'); $self->{type_for}->{$attrib} = $type; push @lines, ( "\$val = \$event->{$attrib};", 'defined $val && do {', "# type: $type", "\$store = \$last->[ATT]->{$attrib} ||= {};", ); if ( $type eq 'bool' ) { push @lines, q{$val = $val eq 'Yes' ? 1 : 0;}; $type = 'num'; } my $op = $type eq 'num' ? '<' : 'lt'; push @lines, ( '$store->{min} = $val if !defined $store->{min} || $val ' . $op . ' $store->{min};', ); $op = ($type eq 'num') ? '>' : 'gt'; push @lines, ( '$store->{max} = $val if !defined $store->{max} || $val ' . $op . ' $store->{max};', ); if ( $type eq 'num' ) { push @lines, '$store->{sum} += $val;'; } push @lines, '};'; } unshift @lines, ( 'sub {', 'my ( $event ) = @_;', 'my ($val, $last, $store);', # NOTE: define all variables here '$last = $results->[-1];', 'if ( !$last || ' . join(' || ', map { "\$last->[KEY]->[$_] ne (\$event->{$self->{groupby}->[$_]} || 0)" } (0 .. @{$self->{groupby}} -1)) . ' ) {', ' $last = [[' . join(', ', map { "(\$event->{$self->{groupby}->[$_]} || 0)" } (0 .. @{$self->{groupby}} -1)) . '], 0, {} ];', ' push @$results, $last;', '}', '++$last->[CNT];', ); push @lines, '}'; my $results = $self->{results}; # Referred to by the eval my $code = join("\n", @lines); $self->{code} = $code; PTDEBUG && _d('Timeline handler:', $code); my $sub = eval $code; die if $EVAL_ERROR; return $sub; } sub report { my ( $self, $results, $callback ) = @_; $callback->("# " . ('#' x 72) . "\n"); $callback->("# " . join(',', @{$self->{groupby}}) . " report\n"); $callback->("# " . ('#' x 72) . 
"\n"); foreach my $res ( @$results ) { my $t; my @vals; if ( ($t = $res->[ATT]->{ts}) && $t->{min} ) { my $min = parse_timestamp($t->{min}); push @vals, $min; if ( $t->{max} && $t->{max} gt $t->{min} ) { my $max = parse_timestamp($t->{max}); my $diff = secs_to_time(unix_timestamp($max) - unix_timestamp($min)); push @vals, $diff; } else { push @vals, '0:00'; } } else { push @vals, ('', ''); } $callback->(sprintf("# %19s %7s %3d %s\n", @vals, $res->[CNT], $res->[KEY]->[0])); } } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End EventTimeline package # ########################################################################### # ########################################################################### # QueryParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryParser.pm # t/lib/QueryParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package QueryParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; our $tbl_ident = qr/(?:`[^`]+`|\w+)(?:\.(?:`[^`]+`|\w+))?/; our $tbl_regex = qr{ \b(?:FROM|JOIN|(?get_tables($select); } my ($tbl) = $query =~ m/TABLE\s+($tbl_ident)(\s+.*)?/i; PTDEBUG && _d('Matches table:', $tbl); return ($tbl); } $query =~ s/(?:LOW_PRIORITY|IGNORE|STRAIGHT_JOIN|DELAYED)\s+/ /ig; if ( $query =~ s/^\s*LOCK TABLES\s+//i ) { PTDEBUG && _d('Special table type: LOCK TABLES'); $query =~ s/\s+(?:READ(?:\s+LOCAL)?|WRITE)\s*//gi; PTDEBUG && _d('Locked tables:', $query); $query = "FROM $query"; } $query =~ s/\\["']//g; # quoted strings $query =~ s/".*?"/?/sg; # quoted strings $query =~ s/'.*?'/?/sg; # quoted strings if ( $query =~ m/\A\s*(?:INSERT|REPLACE)(?!\s+INTO)/i ) { $query =~ s/\A\s*((?:INSERT|REPLACE))\s+/$1 INTO /i; } if ( $query =~ m/\A\s*LOAD DATA/i ) { my ($tbl) = $query =~ m/INTO TABLE\s+(\S+)/i; return $tbl; } my @tables; foreach my $tbls ( $query =~ m/$tbl_regex/gio ) { PTDEBUG && _d('Match tables:', $tbls); next if $tbls =~ m/\ASELECT\b/i; foreach my $tbl ( split(',', $tbls) ) { $tbl =~ s/\s*($tbl_ident)(\s+.*)?/$1/gio; if ( $tbl !~ m/[a-zA-Z]/ ) { PTDEBUG && _d('Skipping suspicious table name:', $tbl); next; } push @tables, $tbl; } } return @tables; } sub has_derived_table { my ( $self, $query ) = @_; my $match = $query =~ m/$has_derived/; PTDEBUG && _d($query, 'has ' . ($match ? 'a' : 'no') . ' derived table'); return $match; } sub get_aliases { my ( $self, $query, $list ) = @_; my $result = { DATABASE => {}, TABLE => {}, }; return $result unless $query; $query =~ s/ (?:LOW_PRIORITY|IGNORE|STRAIGHT_JOIN)//ig; $query =~ s/ (?:INNER|OUTER|CROSS|LEFT|RIGHT|NATURAL)//ig; my @tbl_refs; my ($tbl_refs, $from) = $query =~ m{ ( (FROM|INTO|UPDATE)\b\s* # Keyword before table refs .+? 
# Table refs ) (?:\s+|\z) # If the query does not end with the table (?:WHERE|ORDER|LIMIT|HAVING|SET|VALUES|\z) # Keyword after table refs }ix; if ( $tbl_refs ) { if ( $query =~ m/^(?:INSERT|REPLACE)/i ) { $tbl_refs =~ s/\([^\)]+\)\s*//; } PTDEBUG && _d('tbl refs:', $tbl_refs); my $before_tbl = qr/(?:,|JOIN|\s|$from)+/i; my $after_tbl = qr/(?:,|JOIN|ON|USING|\z)/i; $tbl_refs =~ s/ = /=/g; while ( $tbl_refs =~ m{ $before_tbl\b\s* ( ($tbl_ident) (?:\s+ (?:AS\s+)? (\w+))? ) \s*$after_tbl }xgio ) { my ( $tbl_ref, $db_tbl, $alias ) = ($1, $2, $3); PTDEBUG && _d('Match table:', $tbl_ref); push @tbl_refs, $tbl_ref; $alias = $self->trim_identifier($alias); if ( $tbl_ref =~ m/^AS\s+\w+/i ) { PTDEBUG && _d('Subquery', $tbl_ref); $result->{TABLE}->{$alias} = undef; next; } my ( $db, $tbl ) = $db_tbl =~ m/^(?:(.*?)\.)?(.*)/; $db = $self->trim_identifier($db); $tbl = $self->trim_identifier($tbl); $result->{TABLE}->{$alias || $tbl} = $tbl; $result->{DATABASE}->{$tbl} = $db if $db; } } else { PTDEBUG && _d("No tables ref in", $query); } if ( $list ) { return \@tbl_refs; } else { return $result; } } sub split { my ( $self, $query ) = @_; return unless $query; $query = $self->clean_query($query); PTDEBUG && _d('Splitting', $query); my $verbs = qr{SELECT|INSERT|UPDATE|DELETE|REPLACE|UNION|CREATE}i; my @split_statements = grep { $_ } split(m/\b($verbs\b(?!(?:\s*\()))/io, $query); my @statements; if ( @split_statements == 1 ) { push @statements, $query; } else { for ( my $i = 0; $i <= $#split_statements; $i += 2 ) { push @statements, $split_statements[$i].$split_statements[$i+1]; if ( $statements[-2] && $statements[-2] =~ m/on duplicate key\s+$/i ) { $statements[-2] .= pop @statements; } } } PTDEBUG && _d('statements:', map { $_ ? "<$_>" : 'none' } @statements); return @statements; } sub clean_query { my ( $self, $query ) = @_; return unless $query; $query =~ s!/\*.*?\*/! !g; # Remove /* comment blocks */ $query =~ s/^\s+//; # Remove leading spaces $query =~ s/\s+$//; # Remove trailing spaces $query =~ s/\s{2,}/ /g; # Remove extra spaces return $query; } sub split_subquery { my ( $self, $query ) = @_; return unless $query; $query = $self->clean_query($query); $query =~ s/;$//; my @subqueries; my $sqno = 0; # subquery number my $pos = 0; while ( $query =~ m/(\S+)(?:\s+|\Z)/g ) { $pos = pos($query); my $word = $1; PTDEBUG && _d($word, $sqno); if ( $word =~ m/^\(?SELECT\b/i ) { my $start_pos = $pos - length($word) - 1; if ( $start_pos ) { $sqno++; PTDEBUG && _d('Subquery', $sqno, 'starts at', $start_pos); $subqueries[$sqno] = { start_pos => $start_pos, end_pos => 0, len => 0, words => [$word], lp => 1, # left parentheses rp => 0, # right parentheses done => 0, }; } else { PTDEBUG && _d('Main SELECT at pos 0'); } } else { next unless $sqno; # next unless we're in a subquery PTDEBUG && _d('In subquery', $sqno); my $sq = $subqueries[$sqno]; if ( $sq->{done} ) { PTDEBUG && _d('This subquery is done; SQL is for', ($sqno - 1 ? 
"subquery $sqno" : "the main SELECT")); next; } push @{$sq->{words}}, $word; my $lp = ($word =~ tr/\(//) || 0; my $rp = ($word =~ tr/\)//) || 0; PTDEBUG && _d('parentheses left', $lp, 'right', $rp); if ( ($sq->{lp} + $lp) - ($sq->{rp} + $rp) == 0 ) { my $end_pos = $pos - 1; PTDEBUG && _d('Subquery', $sqno, 'ends at', $end_pos); $sq->{end_pos} = $end_pos; $sq->{len} = $end_pos - $sq->{start_pos}; } } } for my $i ( 1..$#subqueries ) { my $sq = $subqueries[$i]; next unless $sq; $sq->{sql} = join(' ', @{$sq->{words}}); substr $query, $sq->{start_pos} + 1, # +1 for ( $sq->{len} - 1, # -1 for ) "__subquery_$i"; } return $query, map { $_->{sql} } grep { defined $_ } @subqueries; } sub query_type { my ( $self, $query, $qr ) = @_; my ($type, undef) = $qr->distill_verbs($query); my $rw; if ( $type =~ m/^SELECT\b/ ) { $rw = 'read'; } elsif ( $type =~ m/^$data_manip_stmts\b/ || $type =~ m/^$data_def_stmts\b/ ) { $rw = 'write' } return { type => $type, rw => $rw, } } sub get_columns { my ( $self, $query ) = @_; my $cols = []; return $cols unless $query; my $cols_def; if ( $query =~ m/^SELECT/i ) { $query =~ s/ ^SELECT\s+ (?:ALL |DISTINCT |DISTINCTROW |HIGH_PRIORITY |STRAIGHT_JOIN |SQL_SMALL_RESULT |SQL_BIG_RESULT |SQL_BUFFER_RESULT |SQL_CACHE |SQL_NO_CACHE |SQL_CALC_FOUND_ROWS )\s+ /SELECT /xgi; ($cols_def) = $query =~ m/^SELECT\s+(.+?)\s+FROM/i; } elsif ( $query =~ m/^(?:INSERT|REPLACE)/i ) { ($cols_def) = $query =~ m/\(([^\)]+)\)\s*VALUE/i; } PTDEBUG && _d('Columns:', $cols_def); if ( $cols_def ) { @$cols = split(',', $cols_def); map { my $col = $_; $col = s/^\s+//g; $col = s/\s+$//g; $col; } @$cols; } return $cols; } sub parse { my ( $self, $query ) = @_; return unless $query; my $parsed = {}; $query =~ s/\n/ /g; $query = $self->clean_query($query); $parsed->{query} = $query, $parsed->{tables} = $self->get_aliases($query, 1); $parsed->{columns} = $self->get_columns($query); my ($type) = $query =~ m/^(\w+)/; $parsed->{type} = lc $type; $parsed->{sub_queries} = []; return $parsed; } sub extract_tables { my ( $self, %args ) = @_; my $query = $args{query}; my $default_db = $args{default_db}; my $q = $self->{Quoter} || $args{Quoter}; return unless $query; PTDEBUG && _d('Extracting tables'); my @tables; my %seen; foreach my $db_tbl ( $self->get_tables($query) ) { next unless $db_tbl; next if $seen{$db_tbl}++; # Unique-ify for issue 337. my ( $db, $tbl ) = $q->split_unquote($db_tbl); push @tables, [ $db || $default_db, $tbl ]; } return @tables; } sub trim_identifier { my ($self, $str) = @_; return unless defined $str; $str =~ s/`//g; $str =~ s/^\s+//; $str =~ s/\s+$//; return $str; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryParser package # ########################################################################### # ########################################################################### # TableParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/TableParser.pm # t/lib/TableParser.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package TableParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; local $EVAL_ERROR; eval { require Quoter; }; sub new { my ( $class, %args ) = @_; my $self = { %args }; $self->{Quoter} ||= Quoter->new(); return bless $self, $class; } sub Quoter { shift->{Quoter} } sub get_create_table { my ( $self, $dbh, $db, $tbl ) = @_; die "I need a dbh parameter" unless $dbh; die "I need a db parameter" unless $db; die "I need a tbl parameter" unless $tbl; my $q = $self->{Quoter}; my $new_sql_mode = q{/*!40101 SET @OLD_SQL_MODE := @@SQL_MODE, } . q{@@SQL_MODE := '', } . q{@OLD_QUOTE := @@SQL_QUOTE_SHOW_CREATE, } . q{@@SQL_QUOTE_SHOW_CREATE := 1 */}; my $old_sql_mode = q{/*!40101 SET @@SQL_MODE := @OLD_SQL_MODE, } . q{@@SQL_QUOTE_SHOW_CREATE := @OLD_QUOTE */}; PTDEBUG && _d($new_sql_mode); eval { $dbh->do($new_sql_mode); }; PTDEBUG && $EVAL_ERROR && _d($EVAL_ERROR); my $use_sql = 'USE ' . $q->quote($db); PTDEBUG && _d($dbh, $use_sql); $dbh->do($use_sql); my $show_sql = "SHOW CREATE TABLE " . $q->quote($db, $tbl); PTDEBUG && _d($show_sql); my $href; eval { $href = $dbh->selectrow_hashref($show_sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); die $e; } PTDEBUG && _d($old_sql_mode); $dbh->do($old_sql_mode); my ($key) = grep { m/create (?:table|view)/i } keys %$href; if ( !$key ) { die "Error: no 'Create Table' or 'Create View' in result set from " . "$show_sql: " . Dumper($href); } return $href->{$key}; } sub parse { my ( $self, $ddl, $opts ) = @_; return unless $ddl; if ( $ddl =~ m/CREATE (?:TEMPORARY )?TABLE "/ ) { $ddl = $self->ansi_to_legacy($ddl); } elsif ( $ddl !~ m/CREATE (?:TEMPORARY )?TABLE `/ ) { die "TableParser doesn't handle CREATE TABLE without quoting."; } my ($name) = $ddl =~ m/CREATE (?:TEMPORARY )?TABLE\s+(`.+?`)/; (undef, $name) = $self->{Quoter}->split_unquote($name) if $name; $ddl =~ s/(`[^`\n]+`)/\L$1/gm; my $engine = $self->get_engine($ddl); my @defs = $ddl =~ m/^(\s+`.*?),?$/gm; my @cols = map { $_ =~ m/`([^`]+)`/ } @defs; PTDEBUG && _d('Table cols:', join(', ', map { "`$_`" } @cols)); my %def_for; @def_for{@cols} = @defs; my (@nums, @null, @non_generated); my (%type_for, %is_nullable, %is_numeric, %is_autoinc, %is_generated); foreach my $col ( @cols ) { my $def = $def_for{$col}; $def =~ s/``//g; my ( $type ) = $def =~ m/`[^`]+`\s([a-z]+)/; die "Can't determine column type for $def" unless $type; $type_for{$col} = $type; if ( $type =~ m/(?:(?:tiny|big|medium|small)?int|float|double|decimal|year)/ ) { push @nums, $col; $is_numeric{$col} = 1; } if ( $def !~ m/NOT NULL/ ) { push @null, $col; $is_nullable{$col} = 1; } if ( remove_quoted_text($def) =~ m/\WGENERATED\W/i ) { $is_generated{$col} = 1; } else { push @non_generated, $col; } $is_autoinc{$col} = $def =~ m/AUTO_INCREMENT/i ? 
1 : 0; } my ($keys, $clustered_key) = $self->get_keys($ddl, $opts, \%is_nullable); my ($charset) = $ddl =~ m/DEFAULT CHARSET=(\w+)/; return { name => $name, cols => \@cols, col_posn => { map { $cols[$_] => $_ } 0..$#cols }, is_col => { map { $_ => 1 } @non_generated }, null_cols => \@null, is_nullable => \%is_nullable, non_generated_cols => \@non_generated, is_autoinc => \%is_autoinc, is_generated => \%is_generated, clustered_key => $clustered_key, keys => $keys, defs => \%def_for, numeric_cols => \@nums, is_numeric => \%is_numeric, engine => $engine, type_for => \%type_for, charset => $charset, }; } sub remove_quoted_text { my ($string) = @_; $string =~ s/[^\\]`[^`]*[^\\]`//g; $string =~ s/[^\\]"[^"]*[^\\]"//g; $string =~ s/[^\\]'[^']*[^\\]'//g; return $string; } sub sort_indexes { my ( $self, $tbl ) = @_; my @indexes = sort { (($a ne 'PRIMARY') <=> ($b ne 'PRIMARY')) || ( !$tbl->{keys}->{$a}->{is_unique} <=> !$tbl->{keys}->{$b}->{is_unique} ) || ( $tbl->{keys}->{$a}->{is_nullable} <=> $tbl->{keys}->{$b}->{is_nullable} ) || ( scalar(@{$tbl->{keys}->{$a}->{cols}}) <=> scalar(@{$tbl->{keys}->{$b}->{cols}}) ) } grep { $tbl->{keys}->{$_}->{type} eq 'BTREE' } sort keys %{$tbl->{keys}}; PTDEBUG && _d('Indexes sorted best-first:', join(', ', @indexes)); return @indexes; } sub find_best_index { my ( $self, $tbl, $index ) = @_; my $best; if ( $index ) { ($best) = grep { uc $_ eq uc $index } keys %{$tbl->{keys}}; } if ( !$best ) { if ( $index ) { die "Index '$index' does not exist in table"; } else { ($best) = $self->sort_indexes($tbl); } } PTDEBUG && _d('Best index found is', $best); return $best; } sub find_possible_keys { my ( $self, $dbh, $database, $table, $quoter, $where ) = @_; return () unless $where; my $sql = 'EXPLAIN SELECT * FROM ' . $quoter->quote($database, $table) . ' WHERE ' . $where; PTDEBUG && _d($sql); my $expl = $dbh->selectrow_hashref($sql); $expl = { map { lc($_) => $expl->{$_} } keys %$expl }; if ( $expl->{possible_keys} ) { PTDEBUG && _d('possible_keys =', $expl->{possible_keys}); my @candidates = split(',', $expl->{possible_keys}); my %possible = map { $_ => 1 } @candidates; if ( $expl->{key} ) { PTDEBUG && _d('MySQL chose', $expl->{key}); unshift @candidates, grep { $possible{$_} } split(',', $expl->{key}); PTDEBUG && _d('Before deduping:', join(', ', @candidates)); my %seen; @candidates = grep { !$seen{$_}++ } @candidates; } PTDEBUG && _d('Final list:', join(', ', @candidates)); return @candidates; } else { PTDEBUG && _d('No keys in possible_keys'); return (); } } sub check_table { my ( $self, %args ) = @_; my @required_args = qw(dbh db tbl); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $db, $tbl) = @args{@required_args}; my $q = $self->{Quoter} || 'Quoter'; my $db_tbl = $q->quote($db, $tbl); PTDEBUG && _d('Checking', $db_tbl); $self->{check_table_error} = undef; my $sql = "SHOW TABLES FROM " . $q->quote($db) . ' LIKE ' . 
$q->literal_like($tbl); PTDEBUG && _d($sql); my $row; eval { $row = $dbh->selectrow_arrayref($sql); }; if ( my $e = $EVAL_ERROR ) { PTDEBUG && _d($e); $self->{check_table_error} = $e; return 0; } if ( !$row->[0] || $row->[0] ne $tbl ) { PTDEBUG && _d('Table does not exist'); return 0; } PTDEBUG && _d('Table', $db, $tbl, 'exists'); return 1; } sub get_engine { my ( $self, $ddl, $opts ) = @_; my ( $engine ) = $ddl =~ m/\).*?(?:ENGINE|TYPE)=(\w+)/; PTDEBUG && _d('Storage engine:', $engine); return $engine || undef; } sub get_keys { my ( $self, $ddl, $opts, $is_nullable ) = @_; my $engine = $self->get_engine($ddl); my $keys = {}; my $clustered_key = undef; KEY: foreach my $key ( $ddl =~ m/^ ((?:[A-Z]+ )?KEY .*)$/gm ) { next KEY if $key =~ m/FOREIGN/; my $key_ddl = $key; PTDEBUG && _d('Parsed key:', $key_ddl); if ( !$engine || $engine !~ m/MEMORY|HEAP/ ) { $key =~ s/USING HASH/USING BTREE/; } my ( $type, $cols ) = $key =~ m/(?:USING (\w+))? \((.+)\)/; my ( $special ) = $key =~ m/(FULLTEXT|SPATIAL)/; $type = $type || $special || 'BTREE'; my ($name) = $key =~ m/(PRIMARY|`[^`]*`)/; my $unique = $key =~ m/PRIMARY|UNIQUE/ ? 1 : 0; my @cols; my @col_prefixes; foreach my $col_def ( $cols =~ m/`[^`]+`(?:\(\d+\))?/g ) { my ($name, $prefix) = $col_def =~ m/`([^`]+)`(?:\((\d+)\))?/; push @cols, $name; push @col_prefixes, $prefix; } $name =~ s/`//g; PTDEBUG && _d( $name, 'key cols:', join(', ', map { "`$_`" } @cols)); $keys->{$name} = { name => $name, type => $type, colnames => $cols, cols => \@cols, col_prefixes => \@col_prefixes, is_unique => $unique, is_nullable => scalar(grep { $is_nullable->{$_} } @cols), is_col => { map { $_ => 1 } @cols }, ddl => $key_ddl, }; if ( ($engine || '') =~ m/InnoDB/i && !$clustered_key ) { my $this_key = $keys->{$name}; if ( $this_key->{name} eq 'PRIMARY' ) { $clustered_key = 'PRIMARY'; } elsif ( $this_key->{is_unique} && !$this_key->{is_nullable} ) { $clustered_key = $this_key->{name}; } PTDEBUG && $clustered_key && _d('This key is the clustered key'); } } return $keys, $clustered_key; } sub get_fks { my ( $self, $ddl, $opts ) = @_; my $q = $self->{Quoter}; my $fks = {}; foreach my $fk ( $ddl =~ m/CONSTRAINT .* FOREIGN KEY .* REFERENCES [^\)]*\)/mg ) { my ( $name ) = $fk =~ m/CONSTRAINT `(.*?)`/; my ( $cols ) = $fk =~ m/FOREIGN KEY \(([^\)]+)\)/; my ( $parent, $parent_cols ) = $fk =~ m/REFERENCES (\S+) \(([^\)]+)\)/; my ($db, $tbl) = $q->split_unquote($parent, $opts->{database}); my %parent_tbl = (tbl => $tbl); $parent_tbl{db} = $db if $db; if ( $parent !~ m/\./ && $opts->{database} ) { $parent = $q->quote($opts->{database}) . ".$parent"; } $fks->{$name} = { name => $name, colnames => $cols, cols => [ map { s/[ `]+//g; $_; } split(',', $cols) ], parent_tbl => \%parent_tbl, parent_tblname => $parent, parent_cols => [ map { s/[ `]+//g; $_; } split(',', $parent_cols) ], parent_colnames=> $parent_cols, ddl => $fk, }; } return $fks; } sub remove_auto_increment { my ( $self, $ddl ) = @_; $ddl =~ s/(^\).*?) AUTO_INCREMENT=\d+\b/$1/m; return $ddl; } sub get_table_status { my ( $self, $dbh, $db, $like ) = @_; my $q = $self->{Quoter}; my $sql = "SHOW TABLE STATUS FROM " . 
$q->quote($db); my @params; if ( $like ) { $sql .= ' LIKE ?'; push @params, $like; } PTDEBUG && _d($sql, @params); my $sth = $dbh->prepare($sql); eval { $sth->execute(@params); }; if ($EVAL_ERROR) { PTDEBUG && _d($EVAL_ERROR); return; } my @tables = @{$sth->fetchall_arrayref({})}; @tables = map { my %tbl; # Make a copy with lowercased keys @tbl{ map { lc $_ } keys %$_ } = values %$_; $tbl{engine} ||= $tbl{type} || $tbl{comment}; delete $tbl{type}; \%tbl; } @tables; return @tables; } my $ansi_quote_re = qr/" [^"]* (?: "" [^"]* )* (?<=.) "/ismx; sub ansi_to_legacy { my ($self, $ddl) = @_; $ddl =~ s/($ansi_quote_re)/ansi_quote_replace($1)/ge; return $ddl; } sub ansi_quote_replace { my ($val) = @_; $val =~ s/^"|"$//g; $val =~ s/`/``/g; $val =~ s/""/"/g; return "`$val`"; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TableParser package # ########################################################################### # ########################################################################### # QueryReview package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryReview.pm # t/lib/QueryReview.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package QueryReview; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; Transformers->import(qw(make_checksum parse_timestamp)); my %basic_cols = map { $_ => 1 } qw(checksum fingerprint sample first_seen last_seen reviewed_by reviewed_on comments); my %skip_cols = map { $_ => 1 } qw(fingerprint sample checksum); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(dbh db_tbl tbl_struct quoter) ) { die "I need a $arg argument" unless $args{$arg}; } foreach my $col ( keys %basic_cols ) { die "Query review table $args{db_tbl} does not have a $col column" unless $args{tbl_struct}->{is_col}->{$col}; } my $now = defined $args{ts_default} ? $args{ts_default} : 'NOW()'; my $sql = <<" SQL"; INSERT INTO $args{db_tbl} (checksum, fingerprint, sample, first_seen, last_seen) VALUES(?, ?, ?, COALESCE(?, $now), COALESCE(?, $now)) ON DUPLICATE KEY UPDATE first_seen = IF( first_seen IS NULL, COALESCE(?, $now), LEAST(first_seen, COALESCE(?, $now))), last_seen = IF( last_seen IS NULL, COALESCE(?, $now), GREATEST(last_seen, COALESCE(?, $now))) SQL PTDEBUG && _d('SQL to insert into review table:', $sql); my $insert_sth = $args{dbh}->prepare($sql); my @review_cols = grep { !$skip_cols{$_} } @{$args{tbl_struct}->{cols}}; $sql = "SELECT " . join(', ', map { $args{quoter}->quote($_) } @review_cols) . ", checksum AS checksum_conv FROM $args{db_tbl}" . 
" WHERE checksum=?"; PTDEBUG && _d('SQL to select from review table:', $sql); my $select_sth = $args{dbh}->prepare($sql); my $self = { dbh => $args{dbh}, db_tbl => $args{db_tbl}, insert_sth => $insert_sth, select_sth => $select_sth, tbl_struct => $args{tbl_struct}, quoter => $args{quoter}, ts_default => $now, }; return bless $self, $class; } sub get_review_info { my ( $self, $id ) = @_; $self->{select_sth}->execute(make_checksum($id)); my $review_vals = $self->{select_sth}->fetchall_arrayref({}); if ( $review_vals && @$review_vals == 1 ) { return $review_vals->[0]; } return undef; } sub set_review_info { my ( $self, %args ) = @_; $self->{insert_sth}->execute( make_checksum($args{fingerprint}), @args{qw(fingerprint sample)}, map { $args{$_} ? parse_timestamp($args{$_}) : undef } qw(first_seen last_seen first_seen first_seen last_seen last_seen)); } sub review_cols { my ( $self ) = @_; return grep { !$skip_cols{$_} } @{$self->{tbl_struct}->{cols}}; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryReview package # ########################################################################### # ########################################################################### # QueryHistory package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/QueryHistory.pm # t/lib/QueryHistory.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package QueryHistory; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Lmo; use Quoter; use Transformers qw(make_checksum parse_timestamp); has history_dbh => ( is => 'ro', required => 1, ); has history_sth => ( is => 'rw', ); has history_metrics => ( is => 'rw', isa => 'ArrayRef', ); has column_pattern => ( is => 'ro', isa => 'Regexp', required => 1, ); has ts_default => ( is => 'ro', isa => 'Str', default => sub { 'NOW()' }, ); sub set_history_options { my ( $self, %args ) = @_; foreach my $arg ( qw(table tbl_struct) ) { die "I need a $arg argument" unless $args{$arg}; } my $col_pat = $self->column_pattern(); my @cols; my @metrics; foreach my $col ( @{$args{tbl_struct}->{cols}} ) { my ( $attr, $metric ) = $col =~ m/$col_pat/; next unless $attr && $metric; $attr = ucfirst $attr if $attr =~ m/_/; $attr = 'Filesort' if $attr eq 'filesort'; $attr =~ s/^Qc_hit/QC_Hit/; # Qc_hit is really QC_Hit $attr =~ s/^Innodb/InnoDB/g; # Innodb is really InnoDB $attr =~ s/_io_/_IO_/g; # io is really IO push @cols, $col; push @metrics, [$attr, $metric]; } my $ts_default = $self->ts_default; my $sql = "REPLACE INTO $args{table}(" . join(', ', map { Quoter->quote($_) } ('checksum', 'sample', @cols)) . ') VALUES (?, ?' . (@cols ? ', ' : '') # issue 1265 . join(', ', map { $_ eq 'ts_min' || $_ eq 'ts_max' ? "COALESCE(?, $ts_default)" : '?' } @cols) . 
')'; PTDEBUG && _d($sql); $self->history_sth($self->history_dbh->prepare($sql)); $self->history_metrics(\@metrics); return; } sub set_review_history { my ( $self, $id, $sample, %data ) = @_; foreach my $thing ( qw(min max) ) { next unless defined $data{ts} && defined $data{ts}->{$thing}; $data{ts}->{$thing} = parse_timestamp($data{ts}->{$thing}); } $self->history_sth->execute( make_checksum($id), $sample, map { $data{$_->[0]}->{$_->[1]} } @{$self->history_metrics}); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End QueryHistory package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); use Fcntl qw(:DEFAULT); sub new { my ($class, %args) = @_; my $self = { log_file => $args{log_file}, pid_file => $args{pid_file}, daemonize => $args{daemonize}, force_log_file => $args{force_log_file}, parent_exit => $args{parent_exit}, pid_file_owner => 0, }; return bless $self, $class; } sub run { my ($self) = @_; my $daemonize = $self->{daemonize}; my $pid_file = $self->{pid_file}; my $log_file = $self->{log_file}; my $force_log_file = $self->{force_log_file}; my $parent_exit = $self->{parent_exit}; PTDEBUG && _d('Starting daemon'); if ( $pid_file ) { eval { $self->_make_pid_file( pid => $PID, # parent's pid pid_file => $pid_file, ); }; die "$EVAL_ERROR\n" if $EVAL_ERROR; if ( !$daemonize ) { $self->{pid_file_owner} = $PID; # parent's pid } } if ( $daemonize ) { defined (my $child_pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $child_pid ) { PTDEBUG && _d('Forked child', $child_pid); $parent_exit->($child_pid) if $parent_exit; exit 0; } POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; if ( $pid_file ) { $self->_update_pid_file( pid => $PID, # child's pid pid_file => $pid_file, ); $self->{pid_file_owner} = $PID; } } if ( $daemonize || $force_log_file ) { PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $log_file ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $log_file); close STDOUT; open STDOUT, '>>', $log_file or die "Cannot open log file $log_file: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } $OUTPUT_AUTOFLUSH = 1; } PTDEBUG && _d('Daemon 
running'); return; } sub _make_pid_file { my ($self, %args) = @_; my @required_args = qw(pid pid_file); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid = $args{pid}; my $pid_file = $args{pid_file}; eval { sysopen(PID_FH, $pid_file, O_RDWR|O_CREAT|O_EXCL) or die $OS_ERROR; print PID_FH $PID, "\n"; close PID_FH; }; if ( my $e = $EVAL_ERROR ) { if ( $e =~ m/file exists/i ) { my $old_pid = $self->_check_pid_file( pid_file => $pid_file, pid => $PID, ); if ( $old_pid ) { warn "Overwriting PID file $pid_file because PID $old_pid " . "is not running.\n"; } $self->_update_pid_file( pid => $PID, pid_file => $pid_file ); } else { die "Error creating PID file $pid_file: $e\n"; } } return; } sub _check_pid_file { my ($self, %args) = @_; my @required_args = qw(pid_file pid); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid_file = $args{pid_file}; my $pid = $args{pid}; PTDEBUG && _d('Checking if PID in', $pid_file, 'is running'); if ( ! -f $pid_file ) { PTDEBUG && _d('PID file', $pid_file, 'does not exist'); return; } open my $fh, '<', $pid_file or die "Error opening $pid_file: $OS_ERROR"; my $existing_pid = do { local $/; <$fh> }; chomp($existing_pid) if $existing_pid; close $fh or die "Error closing $pid_file: $OS_ERROR"; if ( $existing_pid ) { if ( $existing_pid == $pid ) { warn "The current PID $pid already holds the PID file $pid_file\n"; return; } else { PTDEBUG && _d('Checking if PID', $existing_pid, 'is running'); my $pid_is_alive = kill 0, $existing_pid; if ( $pid_is_alive ) { die "PID file $pid_file exists and PID $existing_pid is running\n"; } } } else { die "PID file $pid_file exists but it is empty. Remove the file " . "if the process is no longer running.\n"; } return $existing_pid; } sub _update_pid_file { my ($self, %args) = @_; my @required_args = qw(pid pid_file); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; }; my $pid = $args{pid}; my $pid_file = $args{pid_file}; open my $fh, '>', $pid_file or die "Cannot open $pid_file: $OS_ERROR"; print { $fh } $pid, "\n" or die "Cannot print to $pid_file: $OS_ERROR"; close $fh or warn "Cannot close $pid_file: $OS_ERROR"; return; } sub remove_pid_file { my ($self, $pid_file) = @_; $pid_file ||= $self->{pid_file}; if ( $pid_file && -f $pid_file ) { unlink $self->{pid_file} or warn "Cannot remove PID file $pid_file: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ($self) = @_; if ( $self->{pid_file_owner} == $PID ) { $self->remove_pid_file(); } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # BinaryLogParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/BinaryLogParser.pm # t/lib/BinaryLogParser.t # See https://launchpad.net/percona-toolkit for more information. 
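# BinaryLogParser consumes the text output of mysqlbinlog: parse_event()
# reads records separated by ";\n#", picks up the "at <offset>" line and the
# header line (timestamp, server id, end_log_pos, event type), and returns
# one event hashref per statement, with keys such as ts, arg, bytes and
# Query_time. A minimal usage sketch (the filehandle plumbing here is
# illustrative, not part of the package):
#
#   my $blp = BinaryLogParser->new();
#   open my $fh, '-|', 'mysqlbinlog', $binlog_file or die $OS_ERROR;
#   while ( my $event = $blp->parse_event(
#         next_event => sub { return scalar <$fh> },
#         tell       => sub { return tell $fh },
#   ) ) {
#      # ... hand $event to an aggregator or report formatter ...
#   }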
# ########################################################################### { package BinaryLogParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; my $binlog_line_1 = qr/at (\d+)$/m; my $binlog_line_2 = qr/^#(\d{6}\s+\d{1,2}:\d\d:\d\d)\s+server\s+id\s+(\d+)\s+end_log_pos\s+(\d+)\s+(?:CRC32\s+0x[a-f0-9]{8}\s+)?(\S+)\s*([^\n]*)$/m; my $binlog_line_2_rest = qr/thread_id=(\d+)\s+exec_time=(\d+)\s+error_code=(\d+)/m; sub new { my ( $class, %args ) = @_; my $self = { delim => undef, delim_len => 0, }; return bless $self, $class; } sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(next_event tell); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($next_event, $tell) = @args{@required_args}; local $INPUT_RECORD_SEPARATOR = ";\n#"; my $pos_in_log = $tell->(); my $stmt; my ($delim, $delim_len) = ($self->{delim}, $self->{delim_len}); EVENT: while ( defined($stmt = $next_event->()) ) { my @properties = ('pos_in_log', $pos_in_log); my ($ts, $sid, $end, $type, $rest); $pos_in_log = $tell->(); $stmt =~ s/;\n#?\Z//; my ( $got_offset, $got_hdr ); my $pos = 0; my $len = length($stmt); my $found_arg = 0; LINE: while ( $stmt =~ m/^(.*)$/mg ) { # /g requires scalar match. $pos = pos($stmt); # Be careful not to mess this up! my $line = $1; # Necessary for /g and pos() to work. $line =~ s/$delim// if $delim; PTDEBUG && _d($line); if ( $line =~ m/^\/\*.+\*\/;/ ) { PTDEBUG && _d('Comment line'); next LINE; } if ( $line =~ m/^DELIMITER/m ) { my ( $del ) = $line =~ m/^DELIMITER (\S*)$/m; if ( $del ) { $self->{delim_len} = $delim_len = length $del; $self->{delim} = $delim = quotemeta $del; PTDEBUG && _d('delimiter:', $delim); } else { PTDEBUG && _d('Delimiter reset to ;'); $self->{delim} = $delim = undef; $self->{delim_len} = $delim_len = 0; } next LINE; } next LINE if $line =~ m/End of log file/; if ( !$got_offset && (my ( $offset ) = $line =~ m/$binlog_line_1/m) ) { PTDEBUG && _d('Got the at offset line'); push @properties, 'offset', $offset; $got_offset++; } elsif ( !$got_hdr && $line =~ m/^#(\d{6}\s+\d{1,2}:\d\d:\d\d)/ ) { ($ts, $sid, $end, $type, $rest) = $line =~ m/$binlog_line_2/m; PTDEBUG && _d('Got the header line; type:', $type, 'rest:', $rest); push @properties, 'cmd', 'Query', 'ts', $ts, 'server_id', $sid, 'end_log_pos', $end; $got_hdr++; } elsif ( $line =~ m/^(?:#|use |SET)/i ) { if ( my ( $db ) = $line =~ m/^use ([^;]+)/ ) { PTDEBUG && _d("Got a default database:", $db); push @properties, 'db', $db; } elsif ( my ($setting) = $line =~ m/^SET\s+([^;]*)/ ) { PTDEBUG && _d("Got some setting:", $setting); push @properties, map { s/\s+//; lc } split(/,|\s*=\s*/, $setting); } } else { PTDEBUG && _d("Got the query/arg line at pos", $pos); $found_arg++; if ( $got_offset && $got_hdr ) { if ( $type eq 'Xid' ) { my ($xid) = $rest =~ m/(\d+)/; push @properties, 'Xid', $xid; } elsif ( $type eq 'Query' ) { my ($i, $t, $c) = $rest =~ m/$binlog_line_2_rest/m; push @properties, 'Thread_id', $i, 'Query_time', $t, 'error_code', $c; } elsif ( $type eq 'Start:' ) { PTDEBUG && _d("Binlog start"); } else { PTDEBUG && _d('Unknown event type:', $type); next EVENT; } } else { PTDEBUG && _d("It's not a query/arg, it's just some SQL fluff"); push @properties, 'cmd', 'Query', 'ts', undef; } my $delim_len = ($pos == length($stmt) ? 
$delim_len : 0); my $arg = substr($stmt, $pos - length($line) - $delim_len); $arg =~ s/$delim// if $delim; # Remove the delimiter. if ( $arg =~ m/^DELIMITER/m ) { my ( $del ) = $arg =~ m/^DELIMITER (\S*)$/m; if ( $del ) { $self->{delim_len} = $delim_len = length $del; $self->{delim} = $delim = quotemeta $del; PTDEBUG && _d('delimiter:', $delim); } else { PTDEBUG && _d('Delimiter reset to ;'); $del = ';'; $self->{delim} = $delim = undef; $self->{delim_len} = $delim_len = 0; } $arg =~ s/^DELIMITER.*$//m; # Remove DELIMITER from arg. } $arg =~ s/;$//gm; # Ensure ending ; are gone. $arg =~ s/\s+$//; # Remove trailing spaces and newlines. push @properties, 'arg', $arg, 'bytes', length($arg); last LINE; } } # LINE if ( $found_arg ) { PTDEBUG && _d('Properties of event:', Dumper(\@properties)); my $event = { @properties }; if ( $args{stats} ) { $args{stats}->{events_read}++; $args{stats}->{events_parsed}++; } return $event; } else { PTDEBUG && _d('Event had no arg'); } } # EVENT $args{oktorun}->(0) if $args{oktorun}; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End BinaryLogParser package # ########################################################################### # ########################################################################### # GeneralLogParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/GeneralLogParser.pm # t/lib/GeneralLogParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package GeneralLogParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class ) = @_; my $self = { pending => [], db_for => {}, }; return bless $self, $class; } my $genlog_line_1= qr{ \A (?:(\d{6}\s+\d{1,2}:\d\d:\d\d|\d{4}-\d{1,2}-\d{1,2}T\d\d:\d\d:\d\d\.\d+(?:Z|[-+]?\d\d:\d\d)?))? 
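#
# Usage sketch (added note, illustrative only; not part of the original
# module). GeneralLogParser exposes the same next_event/tell callback
# interface as the other parsers in this file. Assuming $fh is an open handle
# on a general query log:
#
#   my $parser = GeneralLogParser->new();
#   while ( my $event = $parser->parse_event(
#      next_event => sub { return <$fh>; },
#      tell       => sub { return tell($fh); },
#   ) ) {
#      # events carry cmd (Query or Admin), arg, Thread_id, and db when known
#   }
#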
# Timestamp \s+ (?:\s*(\d+)) # Thread ID \s (\w+) # Command \s+ (.*) # Argument \Z }xs; sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(next_event tell); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($next_event, $tell) = @args{@required_args}; my $pending = $self->{pending}; my $db_for = $self->{db_for}; my $line; my $pos_in_log = $tell->(); LINE: while ( defined($line = shift @$pending) or defined($line = $next_event->()) ) { PTDEBUG && _d($line); my ($ts, $thread_id, $cmd, $arg) = $line =~ m/$genlog_line_1/; if ( !($thread_id && $cmd) ) { PTDEBUG && _d('Not start of general log event'); next; } my @properties = ('pos_in_log', $pos_in_log, 'ts', $ts, 'Thread_id', $thread_id); $pos_in_log = $tell->(); @$pending = (); if ( $cmd eq 'Query' ) { my $done = 0; do { $line = $next_event->(); if ( $line ) { my (undef, $next_thread_id, $next_cmd) = $line =~ m/$genlog_line_1/; if ( $next_thread_id && $next_cmd ) { PTDEBUG && _d('Event done'); $done = 1; push @$pending, $line; } else { PTDEBUG && _d('More arg:', $line); $arg .= $line; } } else { PTDEBUG && _d('No more lines'); $done = 1; } } until ( $done ); chomp $arg; push @properties, 'cmd', 'Query', 'arg', $arg; push @properties, 'bytes', length($properties[-1]); push @properties, 'db', $db_for->{$thread_id} if $db_for->{$thread_id}; } else { push @properties, 'cmd', 'Admin'; if ( $cmd eq 'Connect' ) { if ( $arg =~ m/^Access denied/ ) { $cmd = $arg; } else { my ($user) = $arg =~ m/(\S+)/; my ($db) = $arg =~ m/on (\S+)/; my $host; ($user, $host) = split(/@/, $user); PTDEBUG && _d('Connect', $user, '@', $host, 'on', $db); push @properties, 'user', $user if $user; push @properties, 'host', $host if $host; push @properties, 'db', $db if $db; $db_for->{$thread_id} = $db; } } elsif ( $cmd eq 'Init' ) { $cmd = 'Init DB'; $arg =~ s/^DB\s+//; my ($db) = $arg =~ /(\S+)/; PTDEBUG && _d('Init DB:', $db); push @properties, 'db', $db if $db; $db_for->{$thread_id} = $db; } push @properties, 'arg', "administrator command: $cmd"; push @properties, 'bytes', length($properties[-1]); } push @properties, 'Query_time', 0; PTDEBUG && _d('Properties of event:', Dumper(\@properties)); my $event = { @properties }; if ( $args{stats} ) { $args{stats}->{events_read}++; $args{stats}->{events_parsed}++; } return $event; } # LINE @{$self->{pending}} = (); $args{oktorun}->(0) if $args{oktorun}; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End GeneralLogParser package # ########################################################################### # ########################################################################### # RawLogParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/RawLogParser.pm # t/lib/RawLogParser.t # See https://launchpad.net/percona-toolkit for more information. 
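#
# Usage sketch (added note, illustrative only; not part of the original
# module). RawLogParser treats every input line as one query event, so it is
# suitable for plain files of SQL statements, one per line. Assuming $fh is an
# open handle on such a file:
#
#   my $parser = RawLogParser->new();
#   while ( my $event = $parser->parse_event(
#      next_event => sub { return <$fh>; },
#      tell       => sub { return tell($fh); },
#   ) ) {
#      # $event->{arg} is the raw line; Query_time is always 0
#   }
#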
# ########################################################################### { package RawLogParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class ) = @_; my $self = { }; return bless $self, $class; } sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(next_event tell); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($next_event, $tell) = @args{@required_args}; my $line; my $pos_in_log = $tell->(); LINE: while ( defined($line = $next_event->()) ) { PTDEBUG && _d($line); chomp($line); my @properties = ( 'pos_in_log', $pos_in_log, 'cmd', 'Query', 'bytes', length($line), 'Query_time', 0, 'arg', $line, ); $pos_in_log = $tell->(); PTDEBUG && _d('Properties of event:', Dumper(\@properties)); my $event = { @properties }; if ( $args{stats} ) { $args{stats}->{events_read}++; $args{stats}->{events_parsed}++; } return $event; } $args{oktorun}->(0) if $args{oktorun}; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End RawLogParser package # ########################################################################### # ########################################################################### # ProtocolParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/ProtocolParser.pm # t/lib/ProtocolParser.t # See https://launchpad.net/percona-toolkit for more information. 
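#
# Note (added, illustrative only; not part of the original module).
# ProtocolParser is a base class: _packet_from_server() and
# _packet_from_client() deliberately die here and must be overridden by a
# protocol-specific subclass. A hypothetical subclass would be constructed and
# driven roughly like this, where $packet comes from a TCP dump reader:
#
#   my $parser = SomeProtocolParser->new(server => '10.0.0.1', port => '3306');
#   while ( defined(my $packet = read_next_packet()) ) {   # placeholder reader
#      my $event = $parser->parse_event(event => $packet);
#      # an event hashref is returned once a query/response pair is complete
#   }
#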
# ########################################################################### { package ProtocolParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use File::Basename qw(basename); use File::Temp qw(tempfile); eval { require IO::Uncompress::Inflate; # yum: perl-IO-Compress-Zlib IO::Uncompress::Inflate->import(qw(inflate $InflateError)); }; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; sub new { my ( $class, %args ) = @_; my $self = { server => $args{server}, port => $args{port}, sessions => {}, o => $args{o}, }; return bless $self, $class; } sub parse_event { my ( $self, %args ) = @_; my @required_args = qw(event); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $packet = @args{@required_args}; if ( $self->{buffer} ) { my ($packet_from, $session) = $self->_get_session($packet); if ( $packet->{data_len} ) { if ( $packet_from eq 'client' ) { push @{$session->{client_packets}}, $packet; PTDEBUG && _d('Saved client packet'); } else { push @{$session->{server_packets}}, $packet; PTDEBUG && _d('Saved server packet'); } } return unless ($packet_from eq 'client') && ($packet->{fin} || $packet->{rst}); my $event; map { $event = $self->_parse_packet($_, $args{misc}); $args{stats}->{events_parsed}++ if $args{stats}; } sort { $a->{seq} <=> $b->{seq} } @{$session->{client_packets}}; map { $event = $self->_parse_packet($_, $args{misc}); $args{stats}->{events_parsed}++ if $args{stats}; } sort { $a->{seq} <=> $b->{seq} } @{$session->{server_packets}}; return $event; } if ( $packet->{data_len} == 0 ) { PTDEBUG && _d('No TCP data'); return; } my $event = $self->_parse_packet($packet, $args{misc}); $args{stats}->{events_parsed}++ if $args{stats}; return $event; } sub _parse_packet { my ( $self, $packet, $misc ) = @_; my ($packet_from, $session) = $self->_get_session($packet); PTDEBUG && _d('State:', $session->{state}); push @{$session->{raw_packets}}, $packet->{raw_packet} unless $misc->{recurse}; if ( $session->{buff} ) { $session->{buff_left} -= $packet->{data_len}; if ( $session->{buff_left} > 0 ) { PTDEBUG && _d('Added data to buff; expecting', $session->{buff_left}, 'more bytes'); return; } PTDEBUG && _d('Got all data; buff left:', $session->{buff_left}); $packet->{data} = $session->{buff} . 
$packet->{data}; $packet->{data_len} += length $session->{buff}; $session->{buff} = ''; $session->{buff_left} = 0; } $packet->{data} = pack('H*', $packet->{data}) unless $misc->{recurse}; my $event; if ( $packet_from eq 'server' ) { $event = $self->_packet_from_server($packet, $session, $misc); } elsif ( $packet_from eq 'client' ) { $event = $self->_packet_from_client($packet, $session, $misc); } else { die 'Packet origin unknown'; } PTDEBUG && _d('State:', $session->{state}); if ( $session->{out_of_order} ) { PTDEBUG && _d('Session packets are out of order'); push @{$session->{packets}}, $packet; $session->{ts_min} = $packet->{ts} if $packet->{ts} lt ($session->{ts_min} || ''); $session->{ts_max} = $packet->{ts} if $packet->{ts} gt ($session->{ts_max} || ''); if ( $session->{have_all_packets} ) { PTDEBUG && _d('Have all packets; ordering and processing'); delete $session->{out_of_order}; delete $session->{have_all_packets}; map { $event = $self->_parse_packet($_, { recurse => 1 }); } sort { $a->{seq} <=> $b->{seq} } @{$session->{packets}}; } } PTDEBUG && _d('Done with packet; event:', Dumper($event)); return $event; } sub _get_session { my ( $self, $packet ) = @_; my $src_host = "$packet->{src_host}:$packet->{src_port}"; my $dst_host = "$packet->{dst_host}:$packet->{dst_port}"; if ( my $server = $self->{server} ) { # Watch only the given server. $server .= ":$self->{port}"; if ( $src_host ne $server && $dst_host ne $server ) { PTDEBUG && _d('Packet is not to or from', $server); return; } } my $packet_from; my $client; if ( $src_host =~ m/:$self->{port}$/ ) { $packet_from = 'server'; $client = $dst_host; } elsif ( $dst_host =~ m/:$self->{port}$/ ) { $packet_from = 'client'; $client = $src_host; } else { warn 'Packet is not to or from server: ', Dumper($packet); return; } PTDEBUG && _d('Client:', $client); if ( !exists $self->{sessions}->{$client} ) { PTDEBUG && _d('New session'); $self->{sessions}->{$client} = { client => $client, state => undef, raw_packets => [], }; }; my $session = $self->{sessions}->{$client}; return $packet_from, $session; } sub _packet_from_server { die "Don't call parent class _packet_from_server()"; } sub _packet_from_client { die "Don't call parent class _packet_from_client()"; } sub make_event { my ( $self, $session, $packet ) = @_; die "Event has no attributes" unless scalar keys %{$session->{attribs}}; die "Query has no arg attribute" unless $session->{attribs}->{arg}; my $start_request = $session->{start_request} || 0; my $start_reply = $session->{start_reply} || 0; my $end_reply = $session->{end_reply} || 0; PTDEBUG && _d('Request start:', $start_request, 'reply start:', $start_reply, 'reply end:', $end_reply); my $event = { Query_time => $self->timestamp_diff($start_request, $start_reply), Transmit_time => $self->timestamp_diff($start_reply, $end_reply), }; @{$event}{keys %{$session->{attribs}}} = values %{$session->{attribs}}; return $event; } sub _get_errors_fh { my ( $self ) = @_; return $self->{errors_fh} if $self->{errors_fh}; my $exec = basename($0); my ($errors_fh, $filename); if ( $filename = $ENV{PERCONA_TOOLKIT_TCP_ERRORS_FILE} ) { open $errors_fh, ">", $filename or die "Cannot open $filename for writing (supplied from " . 
"PERCONA_TOOLKIT_TCP_ERRORS_FILE): $OS_ERROR"; } else { ($errors_fh, $filename) = tempfile("/tmp/$exec-errors.XXXXXXX", UNLINK => 0); } $self->{errors_file} = $filename; $self->{errors_fh} = $errors_fh; return $errors_fh; } sub fail_session { my ( $self, $session, $reason ) = @_; PTDEBUG && _d('Failed session', $session->{client}, 'because', $reason); delete $self->{sessions}->{$session->{client}}; return if $self->{_no_save_error}; my $errors_fh = $self->_get_errors_fh(); warn "TCP session $session->{client} had errors, will save them in $self->{errors_file}\n" unless $self->{_warned_for}->{$self->{errors_file}}++; my $raw_packets = delete $session->{raw_packets}; $session->{reason_for_failure} = $reason; my $session_dump = '# ' . Dumper($session); chomp $session_dump; $session_dump =~ s/\n/\n# /g; print $errors_fh join("\n", $session_dump, @$raw_packets), "\n"; return; } sub timestamp_diff { my ( $self, $start, $end ) = @_; return 0 unless $start && $end; my $sd = substr($start, 0, 11, ''); my $ed = substr($end, 0, 11, ''); my ( $sh, $sm, $ss ) = split(/:/, $start); my ( $eh, $em, $es ) = split(/:/, $end); my $esecs = ($eh * 3600 + $em * 60 + $es); my $ssecs = ($sh * 3600 + $sm * 60 + $ss); if ( $sd eq $ed ) { return sprintf '%.6f', $esecs - $ssecs; } else { # Assume only one day boundary has been crossed, no DST, etc return sprintf '%.6f', ( 86_400 - $ssecs ) + $esecs; } } sub uncompress_data { my ( $self, $data, $len ) = @_; die "I need data" unless $data; die "I need a len argument" unless $len; die "I need a scalar reference to data" unless ref $data eq 'SCALAR'; PTDEBUG && _d('Uncompressing data'); our $InflateError; my $comp_bin_data = pack('H*', $$data); my $uncomp_bin_data = ''; my $z = new IO::Uncompress::Inflate( \$comp_bin_data ) or die "IO::Uncompress::Inflate failed: $InflateError"; my $status = $z->read(\$uncomp_bin_data, $len) or die "IO::Uncompress::Inflate failed: $InflateError"; my $uncomp_data = unpack('H*', $uncomp_bin_data); return \$uncomp_data; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End ProtocolParser package # ########################################################################### # ########################################################################### # MasterSlave package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/MasterSlave.pm # t/lib/MasterSlave.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package MasterSlave; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub check_recursion_method { my ($methods) = @_; if ( @$methods != 1 ) { if ( grep({ !m/processlist|hosts/i } @$methods) && $methods->[0] !~ /^dsn=/i ) { die "Invalid combination of recursion methods: " . join(", ", map { defined($_) ? $_ : 'undef' } @$methods) . ". " . "Only hosts and processlist may be combined.\n" } } else { my ($method) = @$methods; die "Invalid recursion method: " . 
( $method || 'undef' ) unless $method && $method =~ m/^(?:processlist$|hosts$|none$|cluster$|dsn=)/i; } } sub new { my ( $class, %args ) = @_; my @required_args = qw(OptionParser DSNParser Quoter); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, replication_thread => {}, }; return bless $self, $class; } sub get_slaves { my ($self, %args) = @_; my @required_args = qw(make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($make_cxn) = @args{@required_args}; my $slaves = []; my $dp = $self->{DSNParser}; my $methods = $self->_resolve_recursion_methods($args{dsn}); return $slaves unless @$methods; if ( grep { m/processlist|hosts/i } @$methods ) { my @required_args = qw(dbh dsn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh, $dsn) = @args{@required_args}; my $o = $self->{OptionParser}; $self->recurse_to_slaves( { dbh => $dbh, dsn => $dsn, slave_user => $o->got('slave-user') ? $o->get('slave-user') : '', slave_password => $o->got('slave-password') ? $o->get('slave-password') : '', callback => sub { my ( $dsn, $dbh, $level, $parent ) = @_; return unless $level; PTDEBUG && _d('Found slave:', $dp->as_string($dsn)); my $slave_dsn = $dsn; if ($o->got('slave-user')) { $slave_dsn->{u} = $o->get('slave-user'); PTDEBUG && _d("Using slave user ".$o->get('slave-user')." on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($o->got('slave-password')) { $slave_dsn->{p} = $o->get('slave-password'); PTDEBUG && _d("Slave password set"); } push @$slaves, $make_cxn->(dsn => $slave_dsn, dbh => $dbh); return; }, } ); } elsif ( $methods->[0] =~ m/^dsn=/i ) { (my $dsn_table_dsn = join ",", @$methods) =~ s/^dsn=//i; $slaves = $self->get_cxn_from_dsn_table( %args, dsn_table_dsn => $dsn_table_dsn, ); } elsif ( $methods->[0] =~ m/none/i ) { PTDEBUG && _d('Not getting to slaves'); } else { die "Unexpected recursion methods: @$methods"; } return $slaves; } sub _resolve_recursion_methods { my ($self, $dsn) = @_; my $o = $self->{OptionParser}; if ( $o->got('recursion-method') ) { return $o->get('recursion-method'); } elsif ( $dsn && ($dsn->{P} || 3306) != 3306 ) { PTDEBUG && _d('Port number is non-standard; using only hosts method'); return [qw(hosts)]; } else { return $o->get('recursion-method'); } } sub recurse_to_slaves { my ( $self, $args, $level ) = @_; $level ||= 0; my $dp = $self->{DSNParser}; my $recurse = $args->{recurse} || $self->{OptionParser}->get('recurse'); my $dsn = $args->{dsn}; my $slave_user = $args->{slave_user} || ''; my $slave_password = $args->{slave_password} || ''; my $methods = $self->_resolve_recursion_methods($dsn); PTDEBUG && _d('Recursion methods:', @$methods); if ( lc($methods->[0]) eq 'none' ) { PTDEBUG && _d('Not recursing to slaves'); return; } my $slave_dsn = $dsn; if ($slave_user) { $slave_dsn->{u} = $slave_user; PTDEBUG && _d("Using slave user $slave_user on ".$slave_dsn->{h}.":".$slave_dsn->{P}); } if ($slave_password) { $slave_dsn->{p} = $slave_password; PTDEBUG && _d("Slave password set"); } my $dbh; eval { $dbh = $args->{dbh} || $dp->get_dbh( $dp->get_cxn_params($slave_dsn), { AutoCommit => 1 }); PTDEBUG && _d('Connected to', $dp->as_string($slave_dsn)); }; if ( $EVAL_ERROR ) { print STDERR "Cannot connect to ", $dp->as_string($slave_dsn), "\n" or die "Cannot print: $OS_ERROR"; return; } my $sql = 'SELECT @@SERVER_ID'; PTDEBUG && _d($sql); my ($id) = $dbh->selectrow_array($sql); PTDEBUG && _d('Working on server ID', $id); my 
$master_thinks_i_am = $dsn->{server_id}; if ( !defined $id || ( defined $master_thinks_i_am && $master_thinks_i_am != $id ) || $args->{server_ids_seen}->{$id}++ ) { PTDEBUG && _d('Server ID seen, or not what master said'); if ( $args->{skip_callback} ) { $args->{skip_callback}->($dsn, $dbh, $level, $args->{parent}); } return; } $args->{callback}->($dsn, $dbh, $level, $args->{parent}); if ( !defined $recurse || $level < $recurse ) { my @slaves = grep { !$_->{master_id} || $_->{master_id} == $id } # Only my slaves. $self->find_slave_hosts($dp, $dbh, $dsn, $methods); foreach my $slave ( @slaves ) { PTDEBUG && _d('Recursing from', $dp->as_string($dsn), 'to', $dp->as_string($slave)); $self->recurse_to_slaves( { %$args, dsn => $slave, dbh => undef, parent => $dsn, slave_user => $slave_user, $slave_password => $slave_password }, $level + 1 ); } } } sub find_slave_hosts { my ( $self, $dsn_parser, $dbh, $dsn, $methods ) = @_; PTDEBUG && _d('Looking for slaves on', $dsn_parser->as_string($dsn), 'using methods', @$methods); my @slaves; METHOD: foreach my $method ( @$methods ) { my $find_slaves = "_find_slaves_by_$method"; PTDEBUG && _d('Finding slaves with', $find_slaves); @slaves = $self->$find_slaves($dsn_parser, $dbh, $dsn); last METHOD if @slaves; } PTDEBUG && _d('Found', scalar(@slaves), 'slaves'); return @slaves; } sub _find_slaves_by_processlist { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves = map { my $slave = $dsn_parser->parse("h=$_", $dsn); $slave->{source} = 'processlist'; $slave; } grep { $_ } map { my ( $host ) = $_->{host} =~ m/^([^:]+):/; $host ||= $_->{host}; if ( $host eq 'localhost' ) { $host = '127.0.0.1'; # Replication never uses sockets. } $host; } $self->get_connected_slaves($dbh); return @slaves; } sub _find_slaves_by_hosts { my ( $self, $dsn_parser, $dbh, $dsn ) = @_; my @slaves; my $sql = 'SHOW SLAVE HOSTS'; PTDEBUG && _d($dbh, $sql); @slaves = @{$dbh->selectall_arrayref($sql, { Slice => {} })}; if ( @slaves ) { PTDEBUG && _d('Found some SHOW SLAVE HOSTS info'); @slaves = map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; my $spec = "h=$hash{host},P=$hash{port}" . ( $hash{user} ? ",u=$hash{user}" : '') . ( $hash{password} ? ",p=$hash{password}" : ''); my $dsn = $dsn_parser->parse($spec, $dsn); $dsn->{server_id} = $hash{server_id}; $dsn->{master_id} = $hash{master_id}; $dsn->{source} = 'hosts'; $dsn; } @slaves; } return @slaves; } sub get_connected_slaves { my ( $self, $dbh ) = @_; my $show = "SHOW GRANTS FOR "; my $user = 'CURRENT_USER()'; my $sql = $show . $user; PTDEBUG && _d($dbh, $sql); my $proc; eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; if ( $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/no such grant defined for user/ ) { PTDEBUG && _d('Retrying SHOW GRANTS without host; error:', $EVAL_ERROR); ($user) = split('@', $user); $sql = $show . 
$user; PTDEBUG && _d($sql); eval { $proc = grep { m/ALL PRIVILEGES.*?\*\.\*|PROCESS/ } @{$dbh->selectcol_arrayref($sql)}; }; } die "Failed to $sql: $EVAL_ERROR" if $EVAL_ERROR; } if ( !$proc ) { die "You do not have the PROCESS privilege"; } $sql = 'SHOW FULL PROCESSLIST'; PTDEBUG && _d($dbh, $sql); grep { $_->{command} =~ m/Binlog Dump/i } map { # Lowercase the column names my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{$dbh->selectall_arrayref($sql, { Slice => {} })}; } sub is_master_of { my ( $self, $master, $slave ) = @_; my $master_status = $self->get_master_status($master) or die "The server specified as a master is not a master"; my $slave_status = $self->get_slave_status($slave) or die "The server specified as a slave is not a slave"; my @connected = $self->get_connected_slaves($master) or die "The server specified as a master has no connected slaves"; my (undef, $port) = $master->selectrow_array("SHOW VARIABLES LIKE 'port'"); if ( $port != $slave_status->{master_port} ) { die "The slave is connected to $slave_status->{master_port} " . "but the master's port is $port"; } if ( !grep { $slave_status->{master_user} eq $_->{user} } @connected ) { die "I don't see any slave I/O thread connected with user " . $slave_status->{master_user}; } if ( ($slave_status->{slave_io_state} || '') eq 'Waiting for master to send event' ) { my ( $master_log_name, $master_log_num ) = $master_status->{file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; my ( $slave_log_name, $slave_log_num ) = $slave_status->{master_log_file} =~ m/^(.*?)\.0*([1-9][0-9]*)$/; if ( $master_log_name ne $slave_log_name || abs($master_log_num - $slave_log_num) > 1 ) { die "The slave thinks it is reading from " . "$slave_status->{master_log_file}, but the " . "master is writing to $master_status->{file}"; } } return 1; } sub get_master_dsn { my ( $self, $dbh, $dsn, $dsn_parser ) = @_; my $master = $self->get_slave_status($dbh) or return undef; my $spec = "h=$master->{master_host},P=$master->{master_port}"; return $dsn_parser->parse($spec, $dsn); } sub get_slave_status { my ( $self, $dbh ) = @_; if ( !$self->{not_a_slave}->{$dbh} ) { my $sth = $self->{sths}->{$dbh}->{SLAVE_STATUS} ||= $dbh->prepare('SHOW SLAVE STATUS'); PTDEBUG && _d($dbh, 'SHOW SLAVE STATUS'); $sth->execute(); my ($sss_rows) = $sth->fetchall_arrayref({}); # Show Slave Status rows my $ss; if ( $sss_rows && @$sss_rows ) { if (scalar @$sss_rows > 1) { if (!$self->{channel}) { die 'This server returned more than one row for SHOW SLAVE STATUS but "channel" was not specified on the command line'; } for my $row (@$sss_rows) { $row = { map { lc($_) => $row->{$_} } keys %$row }; # lowercase the keys if ($row->{channel_name} eq $self->{channel}) { $ss = $row; last; } } } else { if ($sss_rows->[0]->{channel_name} && $sss_rows->[0]->{channel_name} ne $self->{channel}) { die 'This server is using replication channels but "channel" was not specified on the command line'; } else { $ss = $sss_rows->[0]; } } if ( $ss && %$ss ) { $ss = { map { lc($_) => $ss->{$_} } keys %$ss }; # lowercase the keys return $ss; } } PTDEBUG && _d('This server returns nothing for SHOW SLAVE STATUS'); $self->{not_a_slave}->{$dbh}++; } } sub get_master_status { my ( $self, $dbh ) = @_; if ( $self->{not_a_master}->{$dbh} ) { PTDEBUG && _d('Server on dbh', $dbh, 'is not a master'); return; } my $sth = $self->{sths}->{$dbh}->{MASTER_STATUS} ||= $dbh->prepare('SHOW MASTER STATUS'); PTDEBUG && _d($dbh, 'SHOW MASTER STATUS'); $sth->execute(); my ($ms) = @{$sth->fetchall_arrayref({})}; PTDEBUG && _d( 
$ms ? map { "$_=" . (defined $ms->{$_} ? $ms->{$_} : '') } keys %$ms : ''); if ( !$ms || scalar keys %$ms < 2 ) { PTDEBUG && _d('Server on dbh', $dbh, 'does not seem to be a master'); $self->{not_a_master}->{$dbh}++; } return { map { lc($_) => $ms->{$_} } keys %$ms }; # lowercase the keys } sub wait_for_master { my ( $self, %args ) = @_; my @required_args = qw(master_status slave_dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($master_status, $slave_dbh) = @args{@required_args}; my $timeout = $args{timeout} || 60; my $result; my $waited; if ( $master_status ) { my $slave_status; eval { $slave_status = $self->get_slave_status($slave_dbh); }; if ($EVAL_ERROR) { return { result => undef, waited => 0, error =>'Wait for master: this is a multi-master slave but "channel" was not specified on the command line', }; } my $server_version = VersionParser->new($slave_dbh); my $channel_sql = $server_version > '5.6' && $self->{channel} ? ", '$self->{channel}'" : ''; my $sql = "SELECT MASTER_POS_WAIT('$master_status->{file}', $master_status->{position}, $timeout $channel_sql)"; PTDEBUG && _d($slave_dbh, $sql); my $start = time; ($result) = $slave_dbh->selectrow_array($sql); $waited = time - $start; PTDEBUG && _d('Result of waiting:', $result); PTDEBUG && _d("Waited", $waited, "seconds"); } else { PTDEBUG && _d('Not waiting: this server is not a master'); } return { result => $result, waited => $waited, }; } sub stop_slave { my ( $self, $dbh ) = @_; my $sth = $self->{sths}->{$dbh}->{STOP_SLAVE} ||= $dbh->prepare('STOP SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } sub start_slave { my ( $self, $dbh, $pos ) = @_; if ( $pos ) { my $sql = "START SLAVE UNTIL MASTER_LOG_FILE='$pos->{file}', " . "MASTER_LOG_POS=$pos->{position}"; PTDEBUG && _d($dbh, $sql); $dbh->do($sql); } else { my $sth = $self->{sths}->{$dbh}->{START_SLAVE} ||= $dbh->prepare('START SLAVE'); PTDEBUG && _d($dbh, $sth->{Statement}); $sth->execute(); } } sub catchup_to_master { my ( $self, $slave, $master, $timeout ) = @_; $self->stop_slave($master); $self->stop_slave($slave); my $slave_status = $self->get_slave_status($slave); my $slave_pos = $self->repl_posn($slave_status); my $master_status = $self->get_master_status($master); my $master_pos = $self->repl_posn($master_status); PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); my $result; if ( $self->pos_cmp($slave_pos, $master_pos) < 0 ) { PTDEBUG && _d('Waiting for slave to catch up to master'); $self->start_slave($slave, $master_pos); $result = $self->wait_for_master( master_status => $master_status, slave_dbh => $slave, timeout => $timeout, master_status => $master_status ); if ($result->{error}) { die $result->{error}; } if ( !defined $result->{result} ) { $slave_status = $self->get_slave_status($slave); if ( !$self->slave_is_running($slave_status) ) { PTDEBUG && _d('Master position:', $self->pos_to_string($master_pos), 'Slave position:', $self->pos_to_string($slave_pos)); $slave_pos = $self->repl_posn($slave_status); if ( $self->pos_cmp($slave_pos, $master_pos) != 0 ) { die "MASTER_POS_WAIT() returned NULL but slave has not " . 
"caught up to master"; } PTDEBUG && _d('Slave is caught up to master and stopped'); } else { die "Slave has not caught up to master and it is still running"; } } } else { PTDEBUG && _d("Slave is already caught up to master"); } return $result; } sub catchup_to_same_pos { my ( $self, $s1_dbh, $s2_dbh ) = @_; $self->stop_slave($s1_dbh); $self->stop_slave($s2_dbh); my $s1_status = $self->get_slave_status($s1_dbh); my $s2_status = $self->get_slave_status($s2_dbh); my $s1_pos = $self->repl_posn($s1_status); my $s2_pos = $self->repl_posn($s2_status); if ( $self->pos_cmp($s1_pos, $s2_pos) < 0 ) { $self->start_slave($s1_dbh, $s2_pos); } elsif ( $self->pos_cmp($s2_pos, $s1_pos) < 0 ) { $self->start_slave($s2_dbh, $s1_pos); } $s1_status = $self->get_slave_status($s1_dbh); $s2_status = $self->get_slave_status($s2_dbh); $s1_pos = $self->repl_posn($s1_status); $s2_pos = $self->repl_posn($s2_status); if ( $self->slave_is_running($s1_status) || $self->slave_is_running($s2_status) || $self->pos_cmp($s1_pos, $s2_pos) != 0) { die "The servers aren't both stopped at the same position"; } } sub slave_is_running { my ( $self, $slave_status ) = @_; return ($slave_status->{slave_sql_running} || 'No') eq 'Yes'; } sub has_slave_updates { my ( $self, $dbh ) = @_; my $sql = q{SHOW VARIABLES LIKE 'log_slave_updates'}; PTDEBUG && _d($dbh, $sql); my ($name, $value) = $dbh->selectrow_array($sql); return $value && $value =~ m/^(1|ON)$/; } sub repl_posn { my ( $self, $status ) = @_; if ( exists $status->{file} && exists $status->{position} ) { return { file => $status->{file}, position => $status->{position}, }; } else { return { file => $status->{relay_master_log_file}, position => $status->{exec_master_log_pos}, }; } } sub get_slave_lag { my ( $self, $dbh ) = @_; my $stat = $self->get_slave_status($dbh); return unless $stat; # server is not a slave return $stat->{seconds_behind_master}; } sub pos_cmp { my ( $self, $a, $b ) = @_; return $self->pos_to_string($a) cmp $self->pos_to_string($b); } sub short_host { my ( $self, $dsn ) = @_; my ($host, $port); if ( $dsn->{master_host} ) { $host = $dsn->{master_host}; $port = $dsn->{master_port}; } else { $host = $dsn->{h}; $port = $dsn->{P}; } return ($host || '[default]') . ( ($port || 3306) == 3306 ? '' : ":$port" ); } sub is_replication_thread { my ( $self, $query, %args ) = @_; return unless $query; my $type = lc($args{type} || 'all'); die "Invalid type: $type" unless $type =~ m/^binlog_dump|slave_io|slave_sql|all$/i; my $match = 0; if ( $type =~ m/binlog_dump|all/i ) { $match = 1 if ($query->{Command} || $query->{command} || '') eq "Binlog Dump"; } if ( !$match ) { if ( ($query->{User} || $query->{user} || '') eq "system user" ) { PTDEBUG && _d("Slave replication thread"); if ( $type ne 'all' ) { my $state = $query->{State} || $query->{state} || ''; if ( $state =~ m/^init|end$/ ) { PTDEBUG && _d("Special state:", $state); $match = 1; } else { my ($slave_sql) = $state =~ m/ ^(Waiting\sfor\sthe\snext\sevent |Reading\sevent\sfrom\sthe\srelay\slog |Has\sread\sall\srelay\slog;\swaiting |Making\stemp\sfile |Waiting\sfor\sslave\smutex\son\sexit)/xi; $match = $type eq 'slave_sql' && $slave_sql ? 1 : $type eq 'slave_io' && !$slave_sql ? 
1 : 0; } } else { $match = 1; } } else { PTDEBUG && _d('Not system user'); } if ( !defined $args{check_known_ids} || $args{check_known_ids} ) { my $id = $query->{Id} || $query->{id}; if ( $match ) { $self->{replication_thread}->{$id} = 1; } else { if ( $self->{replication_thread}->{$id} ) { PTDEBUG && _d("Thread ID is a known replication thread ID"); $match = 1; } } } } PTDEBUG && _d('Matches', $type, 'replication thread:', ($match ? 'yes' : 'no'), '; match:', $match); return $match; } sub get_replication_filters { my ( $self, %args ) = @_; my @required_args = qw(dbh); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dbh) = @args{@required_args}; my %filters = (); my $status = $self->get_master_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( binlog_do_db binlog_ignore_db ); } $status = $self->get_slave_status($dbh); if ( $status ) { map { $filters{$_} = $status->{$_} } grep { defined $status->{$_} && $status->{$_} ne '' } qw( replicate_do_db replicate_ignore_db replicate_do_table replicate_ignore_table replicate_wild_do_table replicate_wild_ignore_table ); my $sql = "SHOW VARIABLES LIKE 'slave_skip_errors'"; PTDEBUG && _d($dbh, $sql); my $row = $dbh->selectrow_arrayref($sql); $filters{slave_skip_errors} = $row->[1] if $row->[1] && $row->[1] ne 'OFF'; } return \%filters; } sub pos_to_string { my ( $self, $pos ) = @_; my $fmt = '%s/%020d'; return sprintf($fmt, @{$pos}{qw(file position)}); } sub reset_known_replication_threads { my ( $self ) = @_; $self->{replication_thread} = {}; return; } sub get_cxn_from_dsn_table { my ($self, %args) = @_; my @required_args = qw(dsn_table_dsn make_cxn); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn_table_dsn, $make_cxn) = @args{@required_args}; PTDEBUG && _d('DSN table DSN:', $dsn_table_dsn); my $dp = $self->{DSNParser}; my $q = $self->{Quoter}; my $dsn = $dp->parse($dsn_table_dsn); my $dsn_table; if ( $dsn->{D} && $dsn->{t} ) { $dsn_table = $q->quote($dsn->{D}, $dsn->{t}); } elsif ( $dsn->{t} && $dsn->{t} =~ m/\./ ) { $dsn_table = $q->quote($q->split_unquote($dsn->{t})); } else { die "DSN table DSN does not specify a database (D) " . "or a database-qualified table (t)"; } my $dsn_tbl_cxn = $make_cxn->(dsn => $dsn); my $dbh = $dsn_tbl_cxn->connect(); my $sql = "SELECT dsn FROM $dsn_table ORDER BY id"; PTDEBUG && _d($sql); my $dsn_strings = $dbh->selectcol_arrayref($sql); my @cxn; if ( $dsn_strings ) { foreach my $dsn_string ( @$dsn_strings ) { PTDEBUG && _d('DSN from DSN table:', $dsn_string); push @cxn, $make_cxn->(dsn_string => $dsn_string); } } return \@cxn; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End MasterSlave package # ########################################################################### # ########################################################################### # Progress package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Progress.pm # t/lib/Progress.t # See https://launchpad.net/percona-toolkit for more information. 
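#
# Usage sketch (added note, illustrative only; not part of the original
# module). Progress reports completion of a job of known size, throttled by
# time, iterations, or percentage. The update() callback returns how much of
# the job is done so far. work_remains() and bytes_done() below are
# placeholders for the caller's own job:
#
#   my $progress = Progress->new(
#      jobsize => $total_bytes,       # total amount of work
#      spec    => [ 'time', 30 ],     # report at most every 30 seconds
#   );
#   while ( work_remains() ) {
#      do_some_work();
#      $progress->update( sub { return bytes_done(); } );
#   }
#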
# ########################################################################### { package Progress; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; foreach my $arg (qw(jobsize)) { die "I need a $arg argument" unless defined $args{$arg}; } if ( (!$args{report} || !$args{interval}) ) { if ( $args{spec} && @{$args{spec}} == 2 ) { @args{qw(report interval)} = @{$args{spec}}; } else { die "I need either report and interval arguments, or a spec"; } } my $name = $args{name} || "Progress"; $args{start} ||= time(); my $self; $self = { last_reported => $args{start}, fraction => 0, # How complete the job is callback => sub { my ($fraction, $elapsed, $remaining) = @_; printf STDERR "$name: %3d%% %s remain\n", $fraction * 100, Transformers::secs_to_time($remaining); }, %args, }; return bless $self, $class; } sub validate_spec { shift @_ if $_[0] eq 'Progress'; # Permit calling as Progress-> or Progress:: my ( $spec ) = @_; if ( @$spec != 2 ) { die "spec array requires a two-part argument\n"; } if ( $spec->[0] !~ m/^(?:percentage|time|iterations)$/ ) { die "spec array's first element must be one of " . "percentage,time,iterations\n"; } if ( $spec->[1] !~ m/^\d+$/ ) { die "spec array's second element must be an integer\n"; } } sub set_callback { my ( $self, $callback ) = @_; $self->{callback} = $callback; } sub start { my ( $self, $start ) = @_; $self->{start} = $self->{last_reported} = $start || time(); $self->{first_report} = 0; } sub update { my ( $self, $callback, %args ) = @_; my $jobsize = $self->{jobsize}; my $now ||= $args{now} || time; $self->{iterations}++; # How many updates have happened; if ( !$self->{first_report} && $args{first_report} ) { $args{first_report}->(); $self->{first_report} = 1; } if ( $self->{report} eq 'time' && $self->{interval} > $now - $self->{last_reported} ) { return; } elsif ( $self->{report} eq 'iterations' && ($self->{iterations} - 1) % $self->{interval} > 0 ) { return; } $self->{last_reported} = $now; my $completed = $callback->(); $self->{updates}++; # How many times we have run the update callback return if $completed > $jobsize; my $fraction = $completed > 0 ? $completed / $jobsize : 0; if ( $self->{report} eq 'percentage' && $self->fraction_modulo($self->{fraction}) >= $self->fraction_modulo($fraction) ) { $self->{fraction} = $fraction; return; } $self->{fraction} = $fraction; my $elapsed = $now - $self->{start}; my $remaining = 0; my $eta = $now; if ( $completed > 0 && $completed <= $jobsize && $elapsed > 0 ) { my $rate = $completed / $elapsed; if ( $rate > 0 ) { $remaining = ($jobsize - $completed) / $rate; $eta = $now + int($remaining); } } $self->{callback}->($fraction, $elapsed, $remaining, $eta, $completed); } sub fraction_modulo { my ( $self, $num ) = @_; $num *= 100; # Convert from fraction to percentage return sprintf('%d', sprintf('%d', $num / $self->{interval}) * $self->{interval}); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Progress package # ########################################################################### # ########################################################################### # FileIterator package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/FileIterator.pm # t/lib/FileIterator.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package FileIterator; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my $self = { %args, }; return bless $self, $class; } sub get_file_itr { my ( $self, @filenames ) = @_; my @final_filenames; FILENAME: foreach my $fn ( @filenames ) { if ( !defined $fn ) { warn "Skipping undefined filename"; next FILENAME; } if ( $fn ne '-' ) { if ( !-e $fn || !-r $fn ) { warn "$fn does not exist or is not readable"; next FILENAME; } } push @final_filenames, $fn; } if ( !@filenames ) { push @final_filenames, '-'; PTDEBUG && _d('Auto-adding "-" to the list of filenames'); } PTDEBUG && _d('Final filenames:', @final_filenames); return sub { while ( @final_filenames ) { my $fn = shift @final_filenames; PTDEBUG && _d('Filename:', $fn); if ( $fn eq '-' ) { # Magical STDIN filename. return (*STDIN, undef, undef); } open my $fh, '<', $fn or warn "Cannot open $fn: $OS_ERROR"; if ( $fh ) { return ( $fh, $fn, -s $fn ); } } return (); # Avoids $f being set to 0 in list context. }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End FileIterator package # ########################################################################### # ########################################################################### # Runtime package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Runtime.pm # t/lib/Runtime.t # See https://launchpad.net/percona-toolkit for more information. 
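#
# Usage sketch (added note, illustrative only; not part of the original
# module). Runtime tracks how long a tool is allowed to run: now is a callback
# returning the current time, and run_time is an optional limit in seconds
# (no limit when omitted).
#
#   my $runtime = Runtime->new(
#      now      => sub { return time; },
#      run_time => 3600,               # for example, stop after one hour
#   );
#   while ( $runtime->have_time() ) {
#      # do a unit of work; have_time() returns true until run_time elapses
#   }
#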
# ########################################################################### { package Runtime; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my @required_args = qw(now); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless exists $args{$arg}; } my $run_time = $args{run_time}; if ( defined $run_time ) { die "run_time must be > 0" if $run_time <= 0; } my $now = $args{now}; die "now must be a callback" unless ref $now eq 'CODE'; my $self = { run_time => $run_time, now => $now, start_time => undef, end_time => undef, time_left => undef, stop => 0, }; return bless $self, $class; } sub time_left { my ( $self, %args ) = @_; if ( $self->{stop} ) { PTDEBUG && _d("No time left because stop was called"); return 0; } my $now = $self->{now}->(%args); PTDEBUG && _d("Current time:", $now); if ( !defined $self->{start_time} ) { $self->{start_time} = $now; } return unless defined $now; my $run_time = $self->{run_time}; return unless defined $run_time; if ( !$self->{end_time} ) { $self->{end_time} = $now + $run_time; PTDEBUG && _d("End time:", $self->{end_time}); } $self->{time_left} = $self->{end_time} - $now; PTDEBUG && _d("Time left:", $self->{time_left}); return $self->{time_left}; } sub have_time { my ( $self, %args ) = @_; my $time_left = $self->time_left(%args); return 1 if !defined $time_left; # run forever return $time_left <= 0 ? 0 : 1; # <=0s means run time has elapsed } sub time_elapsed { my ( $self, %args ) = @_; my $start_time = $self->{start_time}; return 0 unless $start_time; my $now = $self->{now}->(%args); PTDEBUG && _d("Current time:", $now); my $time_elapsed = $now - $start_time; PTDEBUG && _d("Time elapsed:", $time_elapsed); if ( $time_elapsed < 0 ) { warn "Current time $now is earlier than start time $start_time"; } return $time_elapsed; } sub reset { my ( $self ) = @_; $self->{start_time} = undef; $self->{end_time} = undef; $self->{time_left} = undef; $self->{stop} = 0; PTDEBUG && _d("Reset run time"); return; } sub stop { my ( $self ) = @_; $self->{stop} = 1; return; } sub start { my ( $self ) = @_; $self->{stop} = 0; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Runtime package # ########################################################################### # ########################################################################### # Pipeline package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Pipeline.pm # t/lib/Pipeline.t # See https://launchpad.net/percona-toolkit for more information. 
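#
# Usage sketch (added note, illustrative only; not part of the original
# module). A Pipeline is an ordered list of named coderefs that is run over
# and over until the oktorun flag is cleared; a process that returns false
# restarts the pipeline from the first process.
#
#   my $pipeline = Pipeline->new(continue_on_error => 0);
#   $pipeline->add(name => 'read line',  process => sub { ... });
#   $pipeline->add(name => 'parse line', process => sub { ... });
#   my $oktorun = 1;
#   $pipeline->execute(oktorun => \$oktorun);   # loops until $oktorun is 0
#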
# ########################################################################### { package Pipeline; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; use Time::HiRes qw(time); sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my $self = { instrument => PTDEBUG, continue_on_error => 0, %args, procs => [], # coderefs for pipeline processes names => [], # names for each ^ pipeline proc instrumentation => { # keyed on proc index in procs Pipeline => { time => 0, calls => 0, }, }, }; return bless $self, $class; } sub add { my ( $self, %args ) = @_; my @required_args = qw(process name); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless defined $args{$arg}; } my ($process, $name) = @args{@required_args}; push @{$self->{procs}}, $process; push @{$self->{names}}, $name; $self->{retries}->{$name} = $args{retry_on_error} || 100; if ( $self->{instrument} ) { $self->{instrumentation}->{$name} = { time => 0, calls => 0 }; } PTDEBUG && _d("Added pipeline process", $name); return; } sub processes { my ( $self ) = @_; return @{$self->{names}}; } sub execute { my ( $self, %args ) = @_; die "Cannot execute pipeline because no process have been added" unless scalar @{$self->{procs}}; my $oktorun = $args{oktorun}; die "I need an oktorun argument" unless $oktorun; die '$oktorun argument must be a reference' unless ref $oktorun; my $pipeline_data = $args{pipeline_data} || {}; $pipeline_data->{oktorun} = $oktorun; my $stats = $args{stats}; # optional PTDEBUG && _d("Pipeline starting at", time); my $instrument = $self->{instrument}; my $processes = $self->{procs}; EVENT: while ( $$oktorun ) { my $procno = 0; # so we can see which proc if one causes an error my $output; eval { PIPELINE_PROCESS: while ( $procno < scalar @{$self->{procs}} ) { my $call_start = $instrument ? time : 0; PTDEBUG && _d("Pipeline process", $self->{names}->[$procno]); $output = $processes->[$procno]->($pipeline_data); if ( $instrument ) { my $call_end = time; my $call_t = $call_end - $call_start; $self->{instrumentation}->{$self->{names}->[$procno]}->{time} += $call_t; $self->{instrumentation}->{$self->{names}->[$procno]}->{count}++; $self->{instrumentation}->{Pipeline}->{time} += $call_t; $self->{instrumentation}->{Pipeline}->{count}++; } if ( !$output ) { PTDEBUG && _d("Pipeline restarting early after", $self->{names}->[$procno]); if ( $stats ) { $stats->{"pipeline_restarted_after_" .$self->{names}->[$procno]}++; } last PIPELINE_PROCESS; } $procno++; } }; if ( $EVAL_ERROR ) { my $name = $self->{names}->[$procno] || ""; my $msg = "Pipeline process " . ($procno + 1) . " ($name) caused an error: " . $EVAL_ERROR; if ( !$self->{continue_on_error} ) { die $msg . "Terminating pipeline because --continue-on-error " . "is false.\n"; } elsif ( defined $self->{retries}->{$name} ) { my $n = $self->{retries}->{$name}; if ( $n ) { warn $msg . "Will retry pipeline process $procno ($name) " . "$n more " . ($n > 1 ? "times" : "time") . ".\n"; $self->{retries}->{$name}--; } else { die $msg . "Terminating pipeline because process $procno " . 
"($name) caused too many errors.\n"; } } else { warn $msg; } } } PTDEBUG && _d("Pipeline stopped at", time); return; } sub instrumentation { my ( $self ) = @_; return $self->{instrumentation}; } sub reset { my ( $self ) = @_; foreach my $proc_name ( @{$self->{names}} ) { if ( exists $self->{instrumentation}->{$proc_name} ) { $self->{instrumentation}->{$proc_name}->{calls} = 0; $self->{instrumentation}->{$proc_name}->{time} = 0; } } $self->{instrumentation}->{Pipeline}->{calls} = 0; $self->{instrumentation}->{Pipeline}->{time} = 0; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Pipeline package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? 
$host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? $1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for 
$host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? 
BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! = 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or 
return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! %$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . $file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_query_digest; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Time::Local qw(timelocal); use Time::HiRes qw(time usleep); use List::Util qw(max); use Scalar::Util qw(looks_like_number); use POSIX qw(signal_h); use Data::Dumper; use Percona::Toolkit; $Data::Dumper::Indent = 1; $Data::Dumper::Sortkeys = 1; $Data::Dumper::Quotekeys = 0; $OUTPUT_AUTOFLUSH = 1; Transformers->import(qw( shorten micro_t percentage_of ts make_checksum any_unix_timestamp parse_timestamp unix_timestamp crc32 )); use sigtrap 'handler', \&sig_int, 'normal-signals'; # Global variables. Only really essential variables should be here. my $oktorun = 1; my $ep_dbh; # For --explain my $ps_dbh; # For Processlist my $aux_dbh; # For --aux-dsn (--since/--until "MySQL expression") my $resume_file; my $resume = {}; my $offset; my $exit_status = 0; (my $tool = __PACKAGE__) =~ tr/_/-/; sub main { # Reset global vars, else tests will fail. local @ARGV = @_; $oktorun = 1; $resume = {}; $offset = undef; $exit_status = 0; # ########################################################################## # Get configuration information. 
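# (Any @ARGV element that is not an existing file is pulled out below and
# parsed as a DSN; the first such element becomes the auxiliary DSN whose
# connection is later used to evaluate --since/--until values that are
# MySQL expressions.)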
# ########################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); my $aux_dsn; for my $i (0..$#ARGV) { next if -e $ARGV[$i]; $aux_dsn = $dp->parse(splice(@ARGV, $i, 1)); last; } # Frequently used options. my $review_dsn = handle_special_defaults($o, 'review'); my $history_dsn = handle_special_defaults($o, 'history'); my @groupby = @{$o->get('group-by')}; my @orderby; if ( (grep { $_ =~ m/genlog|GeneralLogParser|rawlog|RawLogParser/ } @{$o->get('type')}) && !$o->got('order-by') ) { @orderby = 'Query_time:cnt'; } else { @orderby = @{$o->get('order-by')}; } if ( !$o->get('help') ) { if ( $o->get('outliers') && grep { $_ !~ m/^\w+:[0-9.]+(?::[0-9.]+)?$/ } @{$o->get('outliers')} ) { $o->save_error('--outliers requires two or three colon-separated fields'); } if ( $o->get('progress') ) { eval { Progress->validate_spec($o->get('progress')) }; if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; $o->save_error("--progress $EVAL_ERROR"); } } if ( my $patterns = $o->get('embedded-attributes') ) { $o->save_error("--embedded-attributes should be passed two " . "comma-separated patterns, got " . scalar(@$patterns) ) unless scalar(@$patterns) == 2; for my $re (@$patterns) { no re 'eval'; eval { qr/$re/ }; if ( $EVAL_ERROR ) { $o->save_error("--embedded-attributes $EVAL_ERROR") } } } } # Set an orderby for each groupby; use the default orderby if there # are more groupby than orderby attribs. my $default_orderby = $o->get_defaults()->{'order-by'}; foreach my $i ( 0..$#groupby ) { $orderby[$i] ||= $default_orderby; } $o->set('order-by', \@orderby); my $run_time_mode = lc $o->get('run-time-mode'); my $run_time_interval; eval { $run_time_interval = verify_run_time( run_mode => $run_time_mode, run_time => $o->get('run-time'), ); }; if ( $EVAL_ERROR ) { chomp $EVAL_ERROR; $o->save_error($EVAL_ERROR); } $o->usage_or_errors(); # ######################################################################## # Common modules. # ####################################################################### my $q = new Quoter(); my $qp = new QueryParser(); my $qr = new QueryRewriter(QueryParser=>$qp, match_embedded_numbers => $o->get('preserve-embedded-numbers') ? 1 : 0); my %common_modules = ( OptionParser => $o, DSNParser => $dp, Quoter => $q, QueryParser => $qp, QueryRewriter => $qr, ); # ######################################################################## # Set up for --explain # ######################################################################## if ( my $ep_dsn = $o->get('explain') ) { $ep_dbh = get_cxn( for => '--explain', dsn => $ep_dsn, OptionParser => $o, DSNParser => $dp, opts => { AutoCommit => 1 }, ); $ep_dbh->{InactiveDestroy} = 1; # Don't die on fork(). } # ######################################################################## # Set up for --review. # ######################################################################## my $qv; # QueryReview my $qv_dbh; # For QueryReview my $tp = new TableParser(Quoter => $q); if ( $review_dsn ) { my %dsn_without_Dt = %$review_dsn; delete $dsn_without_Dt{D}; delete $dsn_without_Dt{t}; $qv_dbh = get_cxn( for => '--review', dsn => \%dsn_without_Dt, OptionParser => $o, DSNParser => $dp, opts => { AutoCommit => 1 }, ); $qv_dbh->{InactiveDestroy} = 1; # Don't die on fork(). 
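      # The --review DSN is expected to carry D (database) and t (table),
      # e.g. something like h=localhost,D=percona,t=query_review (values
      # illustrative).  Those two parts are split off below and the
      # MAGIC_create_review_table statement from the tool's POD is
      # rewritten to point at that table before the QueryReview object is
      # built.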
my @db_tbl = @{$review_dsn}{qw(D t)}; my $db_tbl = $q->quote(@db_tbl); my $create_review_sql = $o->read_para_after( __FILE__, qr/\bMAGIC_create_review_table\b/); $create_review_sql =~ s/\bquery_review\b/$db_tbl/; create_review_tables( type => 'review', dbh => $qv_dbh, full_table => $db_tbl, create_table_sql => $create_review_sql, create_table => $o->get('create-review-table'), TableParser => $tp, ); # Set up the new QueryReview object. my $struct = $tp->parse($tp->get_create_table($qv_dbh, @db_tbl)); $qv = new QueryReview( dbh => $qv_dbh, db_tbl => $db_tbl, tbl_struct => $struct, quoter => $q, ); } # ######################################################################## # Set up for --history. # ######################################################################## my $qh; # QueryHistory my $qh_dbh; if ( $history_dsn ) { my %dsn_without_Dt = %$history_dsn; delete $dsn_without_Dt{D}; delete $dsn_without_Dt{t}; my $qh_dbh = get_cxn( for => '--history', dsn => \%dsn_without_Dt, OptionParser => $o, DSNParser => $dp, opts => { AutoCommit => 1 }, ); $qh_dbh->{InactiveDestroy} = 1; # Don't die on fork(). my @hdb_tbl = @{$history_dsn}{qw(D t)}; my $hdb_tbl = $q->quote(@hdb_tbl); my $create_history_sql = $o->read_para_after( __FILE__, qr/\bMAGIC_create_history_table\b/); $create_history_sql =~ s/\bquery_history\b/$hdb_tbl/; create_review_tables( type => 'history', dbh => $qh_dbh, full_table => $hdb_tbl, create_table_sql => $create_history_sql, create_table => $o->get('create-history-table'), TableParser => $tp, ); my $tbl = $tp->parse($tp->get_create_table($qh_dbh, @hdb_tbl)); my $pat = $o->read_para_after(__FILE__, qr/\bMAGIC_history_columns\b/); $pat =~ s/\s+//g; $pat = qr/^(.*?)_($pat)$/; $qh = QueryHistory->new( history_dbh => $qh_dbh, column_pattern => $pat, ); # And tell the QueryReview that it has more work to do. $qh->set_history_options( table => $hdb_tbl, tbl_struct => $tbl, ); } # ######################################################################## # Create all the pipeline processes that do all the work: get input, # parse events, manage runtime, switch iterations, aggregate, etc. # ######################################################################## # These four vars are passed to print_reports(). my @ea; # EventAggregator objs my @tl; # EventTimeline obj my @read_files; # file names that have been parsed my %stats; # various stats/counters used in some procs # The pipeline data hashref is passed to each proc. Procs use this to # pass data through the pipeline. The most importat data is the event. # Other data includes in the next_event callback, time and iters left, # etc. This hashref is accessed inside a proc via the $args arg. my $pipeline_data = { iter => 1, stats => \%stats, }; my $pipeline = new Pipeline( continue_on_error => $o->get('continue-on-error'), ); # ######################################################################## # Procs before the terminator are, in general, responsible for getting # and event that procs after the terminator process before aggregation # at the end of the pipeline. Therefore, these pre-terminator procs # should not assume an event exists. If one does, they should let the # pipeline continue. Only the terminator proc terminates the pipeline. # ######################################################################## { # prep $pipeline->add( name => 'prep', process => sub { my ( $args ) = @_; # Stuff you'd like to do to make sure pipeline data is prepped # and ready to go... 
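            # Like every proc in this pipeline, the sub returns $args to
            # let the pipeline continue to the next proc; returning nothing
            # instead abandons the current pass so the pipeline starts over
            # (usually to fetch the next event).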
$args->{event} = undef; # remove event from previous pass return $args; }, ); } # prep { # input my $fi = FileIterator->new(); my $next_file = $fi->get_file_itr(@ARGV); my $input_fh; # the current input fh my $pr; # Progress obj for ^ $pipeline->add( name => 'input', process => sub { my ( $args ) = @_; # Only get the next file when there's no fh or no more events in # the current fh. This allows us to do collect-and-report cycles # (i.e. iterations) on huge files. This doesn't apply to infinite # inputs because they don't set more_events false. if ( !$args->{input_fh} || !$args->{more_events} ) { # Close the current file. if ( $args->{input_fh} ) { close $args->{input_fh} or die "Cannot close input fh: $OS_ERROR"; } # Open the next file. my ($fh, $filename, $filesize) = $next_file->(); if ( $fh ) { my $fileno = fileno $fh; if ($fileno == 0) { print "Reading from STDIN ...\n"; } PTDEBUG && _d('Reading', $filename); PTDEBUG && _d('File size:', $filesize); # catch if user is trying to use an uncoverted (raw) binlog # issue 1377888 if ( $filename && $o->get('type')->[0] eq 'binlog') { if (is_raw_binlog($filename)) { warn "Binlog file $filename must first be converted to text format using mysqlbinlog"; return 1; } } push @read_files, { name => ($filename || "STDIN"), size => $filesize }; # Read the file offset for --resume. if ( ($resume_file = $o->get('resume')) && $filename ) { if ( -s $resume_file ) { open my $resume_fh, "<", $resume_file or die "Cannot open $resume_file: $OS_ERROR"; my $resume_offset = do { local $/; <$resume_fh> }; close $resume_fh or die "Error close $resume_file: $OS_ERROR"; chomp($resume_offset) if $resume_offset; if ( looks_like_number($resume_offset) ) { PTDEBUG && _d('Resuming at offset', $resume_offset); $resume->{simple} = 1; seek $fh, $resume_offset, 0 or die "Error seeking to $resume_offset in " . "$resume_file: $OS_ERROR"; warn "# Resuming $filename from offset " . "$resume_offset (file size: $filesize)...\n"; } else { $resume->{simple} = 0; # enhanced resume file map { my $line = $_; chomp $line; my ($key, $value) = split('=', $line); if ( !$key || !defined $value || !looks_like_number($value) || $value < 0 ) { $exit_status = 1; warn "Invalid line in --resume $resume_file: $line\n"; $oktorun = 0; return; } $resume->{$key} = $value; } split("\n", $resume_offset); if ( $resume->{end_offset} && $resume->{end_offset} <= ($resume->{stop_offset} || 0) ) { close $args->{input_fh} if $args->{input_fh}; $args->{input_fh} = undef; $args->{more_events} = 0; $oktorun = 0; $resume_file = ''; warn "# Not resuming $filename because " . "end_offset $resume->{end_offset} is " . "less than or equal to stop_offset " . ($resume->{stop_offset} || 0) . "\n"; } else { $resume_offset = $resume->{stop_offset} || $resume->{start_offset} || 0; seek $fh, $resume_offset, 0 or die "Error seeking to $resume_offset in " . "$resume_file: $OS_ERROR"; warn "# Resuming $filename from offset " . "$resume_offset to " . ($resume->{end_offset} ? $resume->{end_offset} : "end of file") . " (file size: $filesize)...\n"; } } } else { warn "# Resuming $filename from offset 0 because " . "resume file $filename does not exist " . "(file size: $filesize)...\n"; $resume->{simple} = 0; $resume->{start_offset} = 0; } } # Create callback to read next event. Some inputs, like # Processlist, may use something else but most next_event. 
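         # A note on the --resume handling above: an "enhanced" resume file
         # is a set of key=value lines, for example (offsets illustrative):
         #
         #    start_offset=0
         #    stop_offset=1048576
         #    end_offset=4194304
         #
         # whereas a "simple" resume file holds just one numeric byte
         # offset.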
if ( my $read_time = $o->get('read-timeout') ) { $args->{next_event} = sub { return read_timeout($fh, $read_time); }; } else { $args->{next_event} = sub { return <$fh>; }; } $args->{filename} = $filename; $args->{input_fh} = $fh; $args->{tell} = sub { $offset = tell $fh; # update global $offset if ( $args->{filename} ) { $args->{pos_for}->{$args->{filename}} = $offset; } return $offset; # legacy: return global $offset }; $args->{more_events} = 1; # Reset in case we read two logs out of order by time. $args->{past_since} = 0 if $o->get('since'); $args->{at_until} = 0 if $o->get('until'); # Make a progress reporter, one per file. if ( $o->get('progress') && $filename && -e $filename ) { $pr = new Progress( jobsize => $filesize, spec => $o->get('progress'), name => $filename, ); } } else { PTDEBUG && _d("No more input"); # This will cause terminator proc to terminate the pipeline. $args->{input_fh} = undef; $args->{more_events} = 0; } } elsif ( $resume->{end_offset} && $offset >= $resume->{end_offset} ) { PTDEBUG && _d('Offset', $offset, 'at end_offset', $resume->{end_offset}); close $args->{input_fh} if $args->{input_fh}; $args->{input_fh} = undef; $args->{more_events} = 0; } else { $pr->update($args->{tell}) if $pr; } return $args; }, ); } # input my $ps_dsn; my @parsers; { # event my $misc; if ( $ps_dsn = $o->get('processlist') ) { my $ms = new MasterSlave( OptionParser => $o, DSNParser => $dp, Quoter => $q, ); my $pl = new Processlist( interval => $o->get('interval') * 1_000_000, MasterSlave => $ms ); my ( $sth, $cxn ); my $cur_server = 'processlist'; my $cur_time = 0; if ( $o->get('ask-pass') ) { $ps_dsn->{p} = OptionParser::prompt_noecho("Enter password for " . "--processlist: "); $o->get('processlist', $ps_dsn); } my $code = sub { my $err; do { eval { $sth->execute; }; $err = $EVAL_ERROR; if ( $err ) { # Try to reconnect when there's an error. eval { if ( !$ps_dbh || !$ps_dbh->ping ) { PTDEBUG && _d('Getting a dbh from', $cur_server); $ps_dbh = $dp->get_dbh( $dp->get_cxn_params($o->get($cur_server)), {AutoCommit => 1}); $ps_dbh->{InactiveDestroy} = 1; # Don't die on fork(). } $cur_time = time(); $sth = $ps_dbh->prepare('SHOW FULL PROCESSLIST'); $cxn = $ps_dbh->{mysql_thread_id}; $sth->execute(); }; $err = $EVAL_ERROR; if ( $err ) { warn $err; sleep 1; } } } until ( $sth && !$err ); return [ grep { $_->[0] != $cxn } @{ $sth->fetchall_arrayref(); } ]; }; $pipeline->add( name => ref $pl, process => sub { my ( $args ) = @_; my $event = $pl->parse_event(code => $code); if ( $event ) { sanitize_event($event); $args->{event} = $event; } return $args; }, ); } # get events from processlist else { my %alias_for = ( slowlog => ['SlowLogParser'], binlog => ['BinaryLogParser'], genlog => ['GeneralLogParser'], tcpdump => ['TcpdumpParser','MySQLProtocolParser'], rawlog => ['RawLogParser'], ); my $type = $o->get('type'); $type = $alias_for{$type->[0]} if $alias_for{$type->[0]}; my ($server, $port); if ( my $watch_server = $o->get('watch-server') ) { # This should match all combinations of HOST and PORT except # "host-name.port" because "host.mysql" could be either # host "host" and port "mysql" or just host "host.mysql" # (e.g. if someone added "127.1 host.mysql" to etc/hosts). # So host-name* requires a colon between it and a port. 
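         # A few examples of how the match below splits --watch-server
         # (values illustrative):
         #
         #    10.0.0.1:3306         => host 10.0.0.1,        port 3306
         #    10.0.0.1.3306         => host 10.0.0.1,        port 3306
         #    db1.example.com:3306  => host db1.example.com, port 3306
         #    db1.example.com       => host db1.example.com, port undef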
($server, $port) = $watch_server =~ m/^((?:\d+\.\d+\.\d+\.\d+|[\w\.\-]+\w))(?:[\:\.](\S+))?/; PTDEBUG && _d('Watch server', $server, 'port', $port); } foreach my $module ( @$type ) { my $parser; eval { $parser = $module->new( server => $server, port => $port, o => $o, ); }; if ( $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/perhaps you forgot to load/ ) { # There is no module to handle --type, so wrong --type die "'$module' is not a valid input type. " . "Please check the documentation for --type.\n"; } die "Failed to load $module module: $EVAL_ERROR"; } push @parsers, $parser; $pipeline->add( name => ref $parser, process => sub { my ( $args ) = @_; if ( $args->{input_fh} ) { my $event = $parser->parse_event( event => $args->{event}, next_event => $args->{next_event}, tell => $args->{tell}, misc => $args->{misc}, oktorun => sub { $args->{more_events} = $_[0]; }, stats => $args->{stats}, ); if ( $event ) { sanitize_event($event); $args->{event} = $event; return $args; } PTDEBUG && _d("No more events, input EOF"); return; # next input } # No input, let pipeline run so the last report is printed. return $args; }, ); } } # get events from log file if ( my $patterns = $o->get('embedded-attributes') ) { $misc->{embed} = qr/$patterns->[0]/; $misc->{capture} = qr/$patterns->[1]/; PTDEBUG && _d('Patterns for embedded attributes:', $misc->{embed}, $misc->{capture}); } $pipeline_data->{misc} = $misc; } # event { # runtime my $now_callback; if ( $run_time_mode eq 'clock' ) { $now_callback = sub { return time; }; } elsif ( $run_time_mode eq 'event' ) { $now_callback = sub { my ( %args ) = @_; my $event = $args{event}; return unless $event && $event->{ts}; PTDEBUG && _d("Log time:", $event->{ts}); return unix_timestamp(parse_timestamp($event->{ts})); }; } else { $now_callback = sub { return; }; } $pipeline_data->{Runtime} = new Runtime( now => $now_callback, run_time => $o->get('run-time'), ); $pipeline->add( name => 'runtime', process => sub { my ( $args ) = @_; if ( $run_time_mode eq 'interval' ) { my $event = $args->{event}; return $args unless $event && $event->{ts}; my $ts = $args->{unix_ts} = unix_timestamp(parse_timestamp($event->{ts})); if ( !$args->{next_ts_interval} ) { # We need to figure out what interval we're in and what # interval is next. So first we need to parse the ts. if ( my($y, $m, $d, $h, $i, $s) = $args->{event}->{ts} =~ m/^$Transformers::mysql_ts$/ ) { my $rt = $o->get('run-time'); if ( $run_time_interval == 60 ) { PTDEBUG && _d("Run-time interval in seconds"); my $this_minute = unix_timestamp(parse_timestamp( "$y$m$d $h:$i:00")); do { $this_minute += $rt } until $this_minute > $ts; $args->{next_ts_interval} = $this_minute; } elsif ( $run_time_interval == 3600 ) { PTDEBUG && _d("Run-time interval in minutes"); my $this_hour = unix_timestamp(parse_timestamp( "$y$m$d $h:00:00")); do { $this_hour += $rt } until $this_hour > $ts; $args->{next_ts_interval} = $this_hour; } elsif ( $run_time_interval == 86400 ) { PTDEBUG && _d("Run-time interval in days"); my $this_day = unix_timestamp(parse_timestamp( "$y$m$d 00:00:00")); $args->{next_ts_interval} = $this_day + $rt; } else { die "Invalid run-time interval: $run_time_interval"; } PTDEBUG && _d("First ts interval:", $args->{next_ts_interval}); } else { PTDEBUG && _d("Failed to parse MySQL ts:", $args->{event}->{ts}); } } } else { # Clock and event run-time modes need to check the time. $args->{time_left} = $args->{Runtime}->time_left(event=>$args->{event}); } return $args; }, ); } # runtime # Filter early for --since and --until. 
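   # (Either option accepts a literal timestamp such as
   # '2019-09-01 00:00:00' or, roughly speaking, a MySQL time expression
   # such as 'NOW() - INTERVAL 1 DAY'; the latter kind is evaluated
   # server-side as SELECT UNIX_TIMESTAMP(<expression>) through the
   # callback set up below.)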
# If --since or --until is a MySQL expression, then any_unix_timestamp() # will need this callback to execute the expression. We don't know what # type of time value the user gave, so we'll create the callback in any case. if ( $o->get('since') || $o->get('until') ) { if ( $aux_dsn ) { $aux_dbh = get_cxn( for => '--aux', dsn => $aux_dsn, OptionParser => $o, DSNParser => $dp, opts => { AutoCommit => 1 } ); $aux_dbh->{InactiveDestroy} = 1; # Don't die on fork(). } $aux_dbh ||= $qv_dbh || $qh_dbh || $ps_dbh || $ep_dbh; PTDEBUG && _d('aux dbh:', $aux_dbh); my $time_callback = sub { my ( $exp ) = @_; return unless $aux_dbh; my $sql = "SELECT UNIX_TIMESTAMP($exp)"; PTDEBUG && _d($sql); return $aux_dbh->selectall_arrayref($sql)->[0]->[0]; }; if ( $o->get('since') ) { my $since = any_unix_timestamp($o->get('since'), $time_callback); die "Invalid --since value" unless $since; $pipeline->add( name => 'since', process => sub { my ( $args ) = @_; my $event = $args->{event}; return $args unless $event; if ( $args->{past_since} ) { PTDEBUG && _d('Already past --since'); return $args; } if ( $event->{ts} ) { my $ts = any_unix_timestamp($event->{ts}, $time_callback); if ( ($ts || 0) >= $since ) { PTDEBUG && _d('Event is at or past --since'); $args->{past_since} = 1; return $args; } } PTDEBUG && _d('Event is before --since (or ts unknown)'); return; # next event }, ); } if ( $o->get('until') ) { my $until = any_unix_timestamp($o->get('until'), $time_callback); die "Invalid --until value" unless $until; $pipeline->add( name => 'until', process => sub { my ( $args ) = @_; my $event = $args->{event}; return $args unless $event; if ( $args->{at_until} ) { PTDEBUG && _d('Already past --until'); return; } if ( $event->{ts} ) { my $ts = any_unix_timestamp($event->{ts}, $time_callback); if ( ($ts || 0) >= $until ) { PTDEBUG && _d('Event at or after --until'); $args->{at_until} = 1; return; } } PTDEBUG && _d('Event is before --until (or ts unknown)'); return $args; }, ); } } # since/until { # iteration $pipeline->add( # This is a critical proc: if we die here, we probably need # to stop, else an infinite loop can develop: # https://bugs.launchpad.net/percona-toolkit/+bug/888114 # We'll retry twice in case the problem is just one bad # query class, or something like that. retry_on_error => 2, name => 'iteration', process => sub { my ( $args ) = @_; # Start the (next) iteration. if ( !$args->{iter_start} ) { my $iter_start = $args->{iter_start} = time; PTDEBUG && _d('Iteration', $args->{iter}, 'started at', ts($iter_start)); if ( PTDEBUG ) { _d("\n# Iteration $args->{iter} started at ", ts($iter_start), "\n"); } } # Determine if we should stop the current iteration. # If we do, then we report events collected during this # iter, then reset and increment for the next iter. my $report = 0; my $time_left = $args->{time_left}; if ( !$args->{more_events} || defined $time_left && $time_left <= 0 ) { PTDEBUG && _d("Runtime elapsed or no more events, reporting"); $report = 1; } elsif ( $run_time_mode eq 'interval' && $args->{next_ts_interval} && $args->{unix_ts} >= $args->{next_ts_interval} ) { PTDEBUG && _d("Event is in the next interval, reporting"); # Get the next ts interval based on the current log ts. # Log ts can make big jumps, so just += $rt might not # set the next ts interval at a time past the current # log ts. 
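            # For instance (numbers illustrative): with --run-time 60 and
            # next_ts_interval at 1000, an event whose log timestamp jumps
            # ahead to 1205 advances next_ts_interval to 1240 below, the
            # first boundary at or past the event, rather than merely to
            # 1060.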
my $rt = $o->get('run-time'); do { $args->{next_ts_interval} += $rt; } until $args->{next_ts_interval} >= $args->{unix_ts}; $report = 1; } if ( $report ) { PTDEBUG && _d("Iteration", $args->{iter}, "stopped at",ts(time)); save_resume_offset( last_event_offset => $parsers[0]->{last_event_offset}, ); # Get this before calling print_reports() because that sub # resets each ea and we may need this later for stats. my $n_events_aggregated = $ea[0]->events_processed(); if ( $n_events_aggregated ) { print_reports( eas => \@ea, tls => \@tl, groupby => \@groupby, orderby => \@orderby, files => \@read_files, Pipeline => $pipeline, QueryReview => $qv, QueryHistory => $qh, %common_modules, ); } else { if ( $o->get('output') eq 'report' ) { print "\n# No events processed.\n"; } } if ( PTDEBUG ) { if ( keys %stats ) { my $report = new ReportFormatter( line_width => 74, ); $report->set_columns( { name => 'Statistic', }, { name => 'Count', right_justify => 1 }, { name => '%/Events', right_justify => 1 }, ); # Have to add this one manually because currently # EventAggregator::aggregate() doesn't know about stats. # It's the same thing as events_processed() though. $stats{events_aggregated} = $n_events_aggregated; # Save value else events_read will be reset during the # foreach loop below and mess up percentage_of(). my $n_events_read = $stats{events_read} || 0; my %stats_sort_order = ( events_read => 1, events_parsed => 2, events_aggregated => 3, ); my @stats = sort { QueryReportFormatter::pref_sort( $a, $stats_sort_order{$a}, $b, $stats_sort_order{$b}) } keys %stats; foreach my $stat ( @stats ) { $report->add_line( $stat, $stats{$stat} || 0, percentage_of( $stats{$stat} || 0, $n_events_read, p => 2), ); $stats{$stat} = 0; # Reset for next iteration. } print STDERR "\n" . $report->get_report(); } else { print STDERR "\n# No statistics values.\n"; } } # Decrement iters_left after finishing an iter because in the # default case, 1 iter, if we decr when the iter starts, then # terminator will think there's no iters left before the one # iter has finished. if ( my $max_iters = $o->get('iterations') ) { $args->{iters_left} = $max_iters - $args->{iter}; PTDEBUG && _d($args->{iters_left}, "iterations left"); } # Next iteration. $args->{iter}++; $args->{iter_start} = undef; # Runtime is per-iteration, so reset it, and reset time_left # else terminator will think runtime has elapsed when really # we may just be between iters. $args->{Runtime}->reset(); $args->{time_left} = undef; } # Continue the pipeline even if we reported and went to the next # iter because there could be an event in the pipeline that is # the first in the next/new iter. return $args; }, ); } # iteration { # terminator $pipeline->add( name => 'terminator', process => sub { my ( $args ) = @_; # The first sure-fire state that terminates the pipeline is # having no more input. if ( !$args->{input_fh} ) { PTDEBUG && _d("No more input, terminating pipeline"); # This shouldn't happen, but I want to know if it does. warn "There's an event in the pipeline but no current input: " . Dumper($args) if $args->{event}; $oktorun = 0; # 2. terminate pipeline return; # 1. exit pipeline early } # The second sure-first state is having no more iterations. my $iters_left = $args->{iters_left}; if ( defined $iters_left && $iters_left <= 0 ) { PTDEBUG && _d("No more iterations, terminating pipeline"); $oktorun = 0; # 2. terminate pipeline return; # 1. exit pipeline early } # There's time or iters left so keep running. 
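         # In other words: the terminator ends the whole run by clearing
         # $oktorun and returning nothing, while returning nothing with
         # $oktorun still true (as below, when there is no event) merely
         # asks the pipeline to go get the next event.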
if ( $args->{event} ) { PTDEBUG && _d("Event in pipeline, continuing"); return $args; } else { PTDEBUG && _d("No event in pipeline, get next event"); return; } }, ); } # terminator # ######################################################################## # All pipeline processes after the terminator expect an event # (i.e. that $args->{event} exists and is a valid event). # ######################################################################## if ( grep { $_ eq 'fingerprint' } @groupby ) { $pipeline->add( name => 'fingerprint', process => sub { my ( $args ) = @_; my $event = $args->{event}; # Skip events which do not have the groupby attribute. my $groupby_val = $event->{arg}; return unless $groupby_val; $event->{fingerprint} = $qr->fingerprint($groupby_val); return $args; }, ); } # Make subs which map attrib aliases to their primary attrib. foreach my $alt_attrib ( @{$o->get('attribute-aliases')} ) { $pipeline->add( name => 'attribute aliases', process => make_alt_attrib($alt_attrib), ); } # Carry attribs forward for --inherit-attributes. my $inherited_attribs = $o->get('inherit-attributes'); if ( @$inherited_attribs ) { my $last_val = {}; $pipeline->add( name => 'inherit attributes', process => sub { my ( $args ) = @_; my $event = $args->{event}; foreach my $attrib ( @$inherited_attribs ) { if ( defined $event->{$attrib} ) { # Event has val for this attrib; save it as the last val. $last_val->{$attrib} = $event->{$attrib}; } else { # Inherit last val for this attrib (if there was a last val). $event->{$attrib} = $last_val->{$attrib} if defined $last_val->{$attrib}; } } return $args; }, ); } { # variations my @variations = @{$o->get('variations')}; if ( @variations ) { $pipeline->add( name => 'variations', process => sub { my ( $args ) = @_; my $event = $args->{event}; foreach my $attrib ( @variations ) { my $checksum = crc32($event->{$attrib}); $event->{"${attrib}_crc"} = $checksum if defined $checksum; } return $args; }, ); } } # variations if ( grep { $_ eq 'tables' } @groupby ) { $pipeline->add( name => 'tables', process => sub { my ( $args ) = @_; my $event = $args->{event}; my $group_by_val = $event->{arg}; return unless defined $group_by_val; $event->{tables} = [ map { # Canonicalize and add the db name in front $_ =~ s/`//g; if ( $_ !~ m/\./ && (my $db = $event->{db} || $event->{Schema}) ) { $_ = "$db.$_"; } $_; } $qp->get_tables($group_by_val) ]; return $args; }, ); } { # distill my %distill_args; if ( grep { $_ eq 'distill' } @groupby ) { $pipeline->add( name => 'distill', process => sub { my ( $args ) = @_; my $event = $args->{event}; my $group_by_val = $event->{arg}; return unless defined $group_by_val; $event->{distill} = $qr->distill($group_by_val, %distill_args); PTDEBUG && !$event->{distill} && _d('Cannot distill', $event->{arg}); return $args; }, ); } } # distill # Former --zero-admin $pipeline->add( name => 'zero admin', process => sub { my ( $args ) = @_; my $event = $args->{event}; if ( $event->{arg} && $event->{arg} =~ m/^administrator/ ) { $event->{Rows_sent} = 0 if exists $event->{Rows_sent}; $event->{Rows_examined} = 0 if exists $event->{Rows_examined}; $event->{Rows_read} = 0 if exists $event->{Rows_read}; $event->{Rows_affected} = 0 if exists $event->{Rows_affected}; } return $args; }, ); # zero admin # Filter after special attributes, like fingerprint, tables, # distill, etc., have been created. 
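   # Illustrative --filter usage: a filter given as
   #    --filter '$event->{fingerprint} =~ m/^select/'
   # is wrapped by the code below into roughly
   #    sub { my ( $args ) = @_; my $event = $args->{event};
   #          ( $event->{fingerprint} =~ m/^select/ ) && return $args; };
   # so events for which the expression is false simply drop out of the
   # pipeline.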
if ( $o->get('filter') ) { my $filter = $o->get('filter'); if ( -f $filter && -r $filter ) { PTDEBUG && _d('Reading file', $filter, 'for --filter code'); open my $fh, "<", $filter or die "Cannot open $filter: $OS_ERROR"; $filter = do { local $/ = undef; <$fh> }; close $fh; } else { $filter = "( $filter )"; # issue 565 } my $code = 'sub { my ( $args ) = @_; my $event = $args->{event}; ' . "$filter && return \$args; };"; PTDEBUG && _d('--filter code:', $code); my $sub = eval $code or die "Error compiling --filter code: $code\n$EVAL_ERROR"; $pipeline->add( name => 'filter', process => $sub, ); } # filter if ( $o->got('sample') ) { my $group_by_val = $groupby[0]; my $num_samples = $o->get('sample'); if ( $group_by_val ) { my %seen; $pipeline->add( name => 'sample', process => sub { my ( $args ) = @_; my $event = $args->{event}; if ( ++$seen{$event->{$group_by_val}} <= $num_samples ) { PTDEBUG && _d("--sample permits event", $event->{$group_by_val}); return $args; } PTDEBUG && _d("--sample rejects event", $event->{$group_by_val}); return; }, ); } } # sample if ( $o->get('output') =~ /slowlog/i ) { my $w = new SlowLogWriter(); my $field = $o->get('output') eq 'secure-slowlog' ? 'fingerprint' : ''; $pipeline->add( name => '--output slowlog', process => sub { my ( $args ) = @_; my $event = $args->{event}; PTDEBUG && _d('callback: --output slowlog'); $w->write(*STDOUT, $event, $field); return $args; }, ); } # print # Combine "# Log_slow_rate_type: query Log_slow_rate_limit: 2" # as rate_limit=>'query:2'. $pipeline->add( name => 'rate limit', process => sub { my ( $args ) = @_; my $event = $args->{event}; PTDEBUG && _d('callback: rate limit'); if ( my $limit = $event->{Log_slow_rate_limit} ) { $event->{rate_limit} = ($event->{Log_slow_rate_type} || 'session') . ":$limit"; delete $event->{Log_slow_rate_limit}; delete $event->{Log_slow_rate_type}; } return $args; }, ); # Finally, add aggregator obj for each groupby attrib to the callbacks. # These aggregating objs should be the last pipeline processes. foreach my $i ( 0..$#groupby ) { my $groupby = $groupby[$i]; # This shouldn't happen. die "No --order-by value for --group-by $groupby" unless $orderby[$i]; my ( $orderby_attrib, $orderby_func ) = split(/:/, $orderby[$i]); # Create an EventAggregator for this groupby attrib and # add it to callbacks. my $type_for = { val => 'string', key_print => 'string', Status_code => 'string', Statement_id => 'string', Error_no => 'string', Last_errno => 'string', Thread_id => 'string', InnoDB_trx_id => 'string', host => 'string', ip => 'string', port => 'string', Killed => 'bool', rate_limit => 'string', }; my $ea = new EventAggregator( groupby => $groupby, attributes => { }, worst => $orderby_attrib, attrib_limit => $o->get('attribute-value-limit'), ignore_attributes => $o->get('ignore-attributes'), type_for => $type_for, ); push @ea, $ea; $pipeline->add( name => "aggregate $groupby", process => sub { my ( $args ) = @_; $ea->aggregate($args->{event}); return $args; }, ); # If user wants a timeline report, too, then create an EventTimeline # aggregator for this groupby attrib and add it to the callbacks, too. 
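   # (--timeline adds a second, much lighter aggregation keyed on the same
   # group-by attribute but tracking only Query_time and ts; it is later
   # rendered as a chronological report in addition to the normal one.)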
if ( $o->get('timeline') ) { my $tl = new EventTimeline( groupby => [$groupby], attributes => [qw(Query_time ts)], ); push @tl, $tl; $pipeline->add( name => "timeline $groupby", process => sub { my ( $args ) = @_; $tl->aggregate($args->{event}); return $args; }, ); } } # aggregate # ######################################################################## # Daemonize now that everything is setup and ready to work. # ######################################################################## my $daemon = Daemon->new( daemonize => $o->get('daemonize'), pid_file => $o->get('pid'), log_file => $o->get('log'), ); $daemon->run(); # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ ($qv_dbh ? { dbh => $qv_dbh, dsn => $review_dsn } : ()), ($qh_dbh ? { dbh => $qh_dbh, dsn => $history_dsn } : ()), ($ps_dbh ? { dbh => $ps_dbh, dsn => $ps_dsn } : ()), ], ); } # ########################################################################## # Parse the input. # ########################################################################## # Pump the pipeline until either no more input, or we're interrupted by # CTRL-C, or--this shouldn't happen--the pipeline causes an error. All # work happens inside the pipeline via the procs we created above. eval { $pipeline->execute( oktorun => \$oktorun, pipeline_data => $pipeline_data, stats => \%stats, ); }; if ( $EVAL_ERROR ) { warn "The pipeline caused an error: $EVAL_ERROR"; } PTDEBUG && _d("Pipeline data:", Dumper($pipeline_data)); save_resume_offset( last_event_offset => $parsers[0]->{last_event_offset}, ); # Disconnect all open $dbh's map { $dp->disconnect($_); PTDEBUG && _d('Disconnected dbh', $_); } grep { $_ } ($qv_dbh, $qh_dbh, $ps_dbh, $ep_dbh, $aux_dbh); return $exit_status; } # End main() # ############################################################################ # Subroutines. # ############################################################################ sub create_review_tables { my ( %args ) = @_; my @required_args = qw(dbh full_table TableParser type); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my $create_table_sql = $args{create_table_sql}; my ($dbh, $full_table, $tp, $type) = @args{@required_args}; PTDEBUG && _d('Checking --review table', $full_table); # If the repl db doesn't exit, auto-create it, maybe. my ($db, $tbl) = Quoter->split_unquote($full_table); my $show_db_sql = qq{SHOW DATABASES LIKE '$db'}; PTDEBUG && _d($show_db_sql); my @db_exists = $dbh->selectrow_array($show_db_sql); if ( !@db_exists && !$args{create_table} ) { die "--$type database $db does not exist and " . "--no-create-$type-table was specified. You need " . "to create the database.\n"; } else { # Even if the db already exists, do this in case it does not exist # on a slave. my $create_db_sql = "CREATE DATABASE IF NOT EXISTS " . Quoter->quote($db) . " /* $tool */"; PTDEBUG && _d($create_db_sql); eval { $dbh->do($create_db_sql); }; if ( $EVAL_ERROR && !@db_exists ) { warn $EVAL_ERROR; die "--$type database $db does not exist and it cannot be " . "created automatically. You need to create the database.\n"; } } # USE the correct db my $sql = "USE " . Quoter->quote($db); PTDEBUG && _d($sql); $dbh->do($sql); # Check if the table exists; if not, create it, maybe. 
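# As an illustration only (this DSN is an example, not a value read from
# the code): with --review h=host2,D=percona_schema,t=query_review the
# split above yields $db = 'percona_schema' and $tbl = 'query_review'; the
# database has just been created if it was missing, and check_table() below
# determines whether $create_table_sql still needs to be run.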
my $tbl_exists = $tp->check_table( dbh => $dbh, db => $db, tbl => $tbl, ); PTDEBUG && _d('Table exists: ', $tbl_exists ? 'yes' : 'no'); if ( !$tbl_exists && !$args{create_table} ) { die "Table $full_table does not exist and " . "--no-create-$type-table was specified. " . "You need to create the table.\n"; } else { PTDEBUG && _d($dbh, $create_table_sql); eval { $dbh->do($create_table_sql); }; if ( $EVAL_ERROR && !$args{create_table} ) { warn $EVAL_ERROR; die "--$type history table $full_table does not exist and it cannot be " . "created automatically. You need to create the table.\n" } } } # TODO: This sub is poorly named since it does more than print reports: # it aggregates, reports, does QueryReview stuff, etc. sub print_reports { my ( %args ) = @_; my @required_args = qw(eas OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($o, $qv, $pipeline) = @args{qw(OptionParser QueryReview Pipeline)}; my ($eas, $tls, $stats) = @args{qw(eas tls stats)}; my $qh = $args{QueryHistory}; my @reports = @{$o->get('report-format')}; my @groupby = @{$args{groupby}}; my @orderby = @{$args{orderby}}; my $show_all = $o->get('show-all'); for my $i ( 0..$#groupby ) { if ( $o->get('report') || $qv || $qh ) { $eas->[$i]->calculate_statistical_metrics(); } my ($orderby_attrib, $orderby_func) = split(/:/, $orderby[$i]); $orderby_attrib = check_orderby_attrib($orderby_attrib, $eas->[$i], $o); PTDEBUG && _d('Doing reports for groupby', $groupby[$i], 'orderby', $orderby_attrib, $orderby_func); my ($worst, $other) = get_worst_queries( OptionParser => $o, ea => $eas->[$i], orderby_attrib => $orderby_attrib, orderby_func => $orderby_func, limit => $o->get('limit')->[$i] || '95%:20', outliers => $o->get('outliers')->[$i], ); if ( $o->get('report') ) { # XXX There's a bug here: --expected-range '','' will cause # Use of uninitialized value in numeric lt (<) # This bug is intentionally left unfixed at the moment because # we exploit it to test a more serious bug: an infinite loop: # https://bugs.launchpad.net/percona-toolkit/+bug/888114 my $expected_range = $o->get('expected-range'); my $explain_why = $expected_range && ( @$worst < $expected_range->[0] || @$worst > $expected_range->[1]); # Print a header for this groupby/class if we're doing the # standard query report and there's more than one class or # there's one class but it's not the normal class grouped # by fingerprint. my $print_header = 0; if ( (grep { $_ eq 'query_report'; } @{$o->get('report-format')}) && (@groupby > 1 || $groupby[$i] ne 'fingerprint') ) { $print_header = 1; } my $report_class = $o->get('output') =~ m/^json/i ? 
'JSONReportFormatter' : 'QueryReportFormatter'; my $qrf = $report_class->new( dbh => $ep_dbh, QueryReview => $args{QueryReview}, QueryRewriter => $args{QueryRewriter}, OptionParser => $args{OptionParser}, QueryParser => $args{QueryParser}, Quoter => $args{Quoter}, show_all => $show_all, max_hostname_length => $o->get('max-hostname-length'), max_line_length => $o->get('max-line-length'), ); $qrf->print_reports( reports => \@reports, ea => $eas->[$i], worst => $worst, other => $other, orderby => $orderby_attrib, groupby => $groupby[$i], print_header => $print_header, explain_why => $explain_why, files => $args{files}, log_type => $o->get('type')->[0], no_v_format => !$o->get('vertical-format'), variations => $o->get('variations'), group => { map { $_=>1 } qw(rusage date hostname files header) }, resume => $resume, anon => $o->get('output') eq 'json-anon', ); } if ( $qv ) { # query review update_query_review_table( ea => $eas->[$i], worst => $worst, QueryReview => $qv, ); } if ( $qh ) { # query history update_query_history_table( ea => $eas->[$i], worst => $worst, QueryHistory => $qh, ); } if ( $o->get('timeline') ) { # --timeline $tls->[$i]->report($tls->[$i]->results(), sub { print @_ }); $tls->[$i]->reset_aggregated_data(); } $eas->[$i]->reset_aggregated_data(); # Reset for next iteration. # Print header report only once. So remove it from the # list of reports after the first groupby's reports. if ( $i == 0 ) { @reports = grep { $_ ne 'header' } @reports; } } # Each groupby if ( PTDEBUG ) { my $report = new ReportFormatter( line_width => 74, ); $report->set_columns( { name => 'Process' }, { name => 'Time', right_justify => 1 }, { name => 'Count', right_justify => 1 }, ); $report->title('Pipeline profile'); my $instrument = $pipeline->instrumentation; my $total_time = $instrument->{Pipeline}; foreach my $process_name ( $pipeline->processes() ) { my $t = $instrument->{$process_name}->{time} || 0; my $tp = sprintf('%.2f %4.1f%%', $t, $t / ($total_time || 1) * 100); $report->add_line($process_name, $tp, $instrument->{$process_name}->{count} || 0); } # Reset profile for next iteration. $pipeline->reset(); _d($report->get_report()); } return; } # Catches signals so we can exit gracefully. sub sig_int { my ( $signal ) = @_; if ( $oktorun ) { print STDERR "# Caught SIG$signal.\n"; $oktorun = 0; } else { print STDERR "# Exiting on SIG$signal.\n"; save_resume_offset(); exit(1); } } # Handle the special defaults for --review & --history sub handle_special_defaults { my ($o, $opt) = @_; my $dsn = $o->get($opt); return unless $dsn; my $para = $o->read_para_after( __FILE__, qr/MAGIC_default_${opt}_table/); my ($default_table) = $para =~ m/default table is C<([^>]+)>/; die "Error parsing special default for --$opt" unless $default_table; my ($D, $t) = Quoter->split_unquote($default_table); $dsn->{D} ||= $D; $dsn->{t} ||= $t; return $dsn; } sub make_alt_attrib { my ( $alt_attrib ) = @_; my @alts = split('\|', $alt_attrib); my $attrib = shift @alts; PTDEBUG && _d('Primary attrib:', $attrib, 'aliases:', @alts); my @lines; push @lines, 'sub { my ( $args ) = @_; ', 'my $event = $args->{event}; ', "if ( exists \$event->{'$attrib'} ) { ", (map { "delete \$event->{'$_'}; "; } @alts), 'return $args; }', # Primary attrib doesn't exist; look for alts (map { "if ( exists \$event->{'$_'} ) { " . "\$event->{'$attrib'} = \$event->{'$_'}; " . "delete \$event->{'$_'}; " . 
'return $args; }'; } @alts), 'return $args; }'; PTDEBUG && _d('attrib alias sub for', $attrib, ':', @lines); my $sub = eval join("\n", @lines); die if $EVAL_ERROR; return $sub; } # Checks that the orderby attrib exists in the ea, returns the default # orderby attrib if not. sub check_orderby_attrib { my ( $orderby_attrib, $ea, $o ) = @_; if ( !$ea->type_for($orderby_attrib) && $orderby_attrib ne 'Query_time' ) { my $default_orderby = $o->get_defaults()->{'order-by'}; # Print the notice only if the query report is being printed, too. if ( grep { $_ eq 'query_report' } @{$o->get('report-format')} ) { print "--order-by attribute $orderby_attrib doesn't exist, " . "using $default_orderby\n"; } # Fall back to the default orderby attrib. ( $orderby_attrib, undef ) = split(/:/, $default_orderby); } PTDEBUG && _d('orderby attrib:', $orderby_attrib); return $orderby_attrib; } # Read the fh and timeout after t seconds. sub read_timeout { my ( $fh, $t ) = @_; return unless $fh; $t ||= 0; # will reset alarm and cause read to wait forever # Set the SIGALRM handler. my $mask = POSIX::SigSet->new(&POSIX::SIGALRM); my $action = POSIX::SigAction->new( sub { # This sub is called when a SIGALRM is received. die 'read timeout'; }, $mask, ); my $oldaction = POSIX::SigAction->new(); sigaction(&POSIX::SIGALRM, $action, $oldaction); my $res; eval { alarm $t; $res = <$fh>; alarm 0; }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Read error:', $EVAL_ERROR); die $EVAL_ERROR unless $EVAL_ERROR =~ m/read timeout/; $oktorun = 0; $res = undef; # res is a blank string after a timeout } return $res; } sub get_cxn { my ( %args ) = @_; my @required_args = qw(dsn OptionParser DSNParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($dsn, $o, $dp) = @args{@required_args}; if ( $o->get('ask-pass') ) { $dsn->{p} = OptionParser::prompt_noecho("Enter password " . ($args{for} ? "for $args{for}: " : ": ")); } my $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), $args{opts}); PTDEBUG && _d('Connected dbh', $dbh); return $dbh; } sub get_worst_queries { my ( %args ) = @_; my $o = $args{OptionParser}; my $ea = $args{ea}; my $orderby_attrib = $args{orderby_attrib}; my $orderby_func = $args{orderby_func}; my $limit = $args{limit}; my $outliers = $args{outliers}; # We don't report on all queries, just the worst, i.e. the top # however many. my ($total, $count); if ( $limit =~ m/^\d+$/ ) { $count = $limit; } else { # It's a percentage, so grab as many as needed to get to # that % of the file. ($total, $count) = $limit =~ m/(\d+)/g; $total *= ($ea->results->{globals}->{$orderby_attrib}->{sum} || 0) / 100; } my %top_spec = ( attrib => $orderby_attrib, orderby => $orderby_func || 'cnt', total => $total, count => $count, ); if ( $args{outliers} ) { @top_spec{qw(ol_attrib ol_limit ol_freq)} = split(/:/, $args{outliers}); } # The queries that will be reported. 
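# A rough worked example (numbers assumed, not computed here): with the
# default limit '95%:20' and --order-by Query_time:sum, %top_spec ends up
# approximately as
#
#   ( attrib  => 'Query_time',
#     orderby => 'sum',
#     total   => 0.95 * <global Query_time sum>,
#     count   => 20 )
#
# so top_events() reports classes until 95% of the total response time is
# covered or 20 classes have been returned, whichever comes first; when
# --outliers is given, the ol_* keys set just above are added as well.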
return $ea->top_events(%top_spec); } sub update_query_review_table { my ( %args ) = @_; foreach my $arg ( qw(ea worst QueryReview) ) { die "I need a $arg argument" unless $args{$arg}; } my $ea = $args{ea}; my $worst = $args{worst}; my $qv = $args{QueryReview}; my $attribs = $ea->get_attributes(); PTDEBUG && _d('Updating query review tables'); foreach my $worst_info ( @$worst ) { my $item = $worst_info->[0]; my $stats = $ea->results->{classes}->{$item}; my $sample = $ea->results->{samples}->{$item}; my $review_vals = $qv->get_review_info($item); $qv->set_review_info( fingerprint => $item, sample => $sample->{arg} || '', first_seen => $stats->{ts}->{min}, last_seen => $stats->{ts}->{max} ); } return; } sub update_query_history_table { my ( %args ) = @_; foreach my $arg ( qw(ea worst QueryHistory) ) { die "I need a $arg argument" unless $args{$arg}; } my $ea = $args{ea}; my $worst = $args{worst}; my $qh = $args{QueryHistory}; my $attribs = $ea->get_attributes(); PTDEBUG && _d('Updating query review tables'); foreach my $worst_info ( @$worst ) { my $item = $worst_info->[0]; my $sample = $ea->results->{samples}->{$item}; my %history; foreach my $attrib ( @$attribs ) { $history{$attrib} = $ea->metrics( attrib => $attrib, where => $item, ); } $qh->set_review_history( $item, $sample->{arg} || '', %history); } return; } # Sub: verify_run_time # Verify that the given run mode and run time are valid. If the run mode # is "interval", the time boundary (in seconds) for the run time is returned # if valid. Else, undef is returned because modes "clock" and "event" have # no boundaries that need to be verified. In any case the sub will die if # something is invalid, so the caller should eval their call. The eval # error message is suitable for . # # Parameters: # %args - Arguments # # Required Arguments: # run_mode - Name of run mode (e.g. "clock", "event" or "interval") # run_time - Run time in seconds # # Returns: # Time boundary in seconds if run mode and time are valid; dies if # they are not. Time boundary is undef except for interval run mode. sub verify_run_time { my ( %args ) = @_; my $run_mode = lc $args{run_mode}; my $run_time = defined $args{run_time} ? lc $args{run_time} : undef; PTDEBUG && _d("Verifying run time mode", $run_mode, "and time", $run_time); die "Invalid --run-time-mode: $run_mode\n" unless $run_mode =~ m/clock|event|interval/; if ( defined $run_time && $run_time < 0 ) { die "--run-time must be greater than zero\n"; } my $boundary; if ( $run_mode eq 'interval' ) { if ( !defined $run_time || $run_time <= 0 ) { die "--run-time must be greater than zero for " . "--run-time-mode $run_mode\n"; } if ( $run_time > 86400 ) { # 1 day # Make sure run time is a whole day and not something like 25h. if ( $run_time % 86400 ) { die "Invalid --run-time argument for --run-time-mode $run_mode; " . "see documentation.\n" } $boundary = $run_time; } else { # If run time is sub-minute (some amount of seconds), it should # divide evenly into minute boundaries. If it's sub-minute # (some amount of minutes), it should divide evenly into hour # boundaries. If it's sub-hour, it should divide eventy into # day boundaries. $boundary = $run_time <= 60 ? 60 # seconds divide into minutes : $run_time <= 3600 ? 3600 # minutes divide into hours : 86400; # hours divide into days if ( $boundary % $run_time ) { die "Invalid --run-time argument for --run-time-mode $run_mode; " . 
"see documentation.\n" } } } return $boundary; } sub save_resume_offset { my (%args) = @_; my $last_event_offset = $args{last_event_offset}; if ( !$resume_file || !$offset ) { PTDEBUG && _d('Not saving resume offset because there is no ' . 'resume file or offset:', $resume_file, $offset); return; } PTDEBUG && _d('Saving resume at offset', $offset, 'to', $resume_file); open my $resume_fh, '>', $resume_file or die "Error opening $resume_file: $OS_ERROR"; if ( $resume->{simple} ) { print { $resume_fh } $offset, "\n"; warn "\n# Saved resume file offset $offset to $resume_file\n"; } else { # 2.2.3+ enhanced resume file $resume->{stop_offset} = defined $last_event_offset ? $last_event_offset : $offset; foreach my $key ( sort keys %$resume ) { next if $key eq 'simple'; print { $resume_fh } "$key=$resume->{$key}\n"; } warn "\n# Saved resume file stop_offset $resume->{stop_offset} to " . "$resume_file\n"; } close $resume_fh or die "Error close $resume_file: $OS_ERROR"; return; } sub sanitize_event { my ($event) = @_; # Quoted and unquoted values should be treated the same # https://bugs.launchpad.net/percona-toolkit/+bug/1176010 if ( $event->{db} ) { $event->{db} =~ s/^`//; $event->{db} =~ s/`$//; } if ( $event->{Schema} ) { $event->{Schema} =~ s/^`//; $event->{Schema} =~ s/`$//; } return; } # make an effort to check if file is a raw binlog # (i.e. was not converted to text using mysqlbinlog) sub is_raw_binlog { my $filename = shift; return -B $filename; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################# # Documentation. # ############################################################################# =pod =head1 NAME pt-query-digest - Analyze MySQL queries from logs, processlist, and tcpdump. =head1 SYNOPSIS Usage: pt-query-digest [OPTIONS] [FILES] [DSN] pt-query-digest analyzes MySQL queries from slow, general, and binary log files. It can also analyze queries from C and MySQL protocol data from tcpdump. By default, queries are grouped by fingerprint and reported in descending order of query time (i.e. the slowest queries first). If no C are given, the tool reads C. The optional C is used for certain options like L<"--since"> and L<"--until">. Report the slowest queries from C: pt-query-digest slow.log Report the slowest queries from the processlist on host1: pt-query-digest --processlist h=host1 Capture MySQL protocol data with tcppdump, then report the slowest queries: tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt pt-query-digest --type tcpdump mysql.tcp.txt Save query data from C to host2 for later review and trend analysis: pt-query-digest --review h=host2 --no-report slow.log =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. 
Before using this tool, please:

=over

=item * Read the tool's documentation

=item * Review the tool's known L<"BUGS">

=item * Test the tool on a non-production server

=item * Backup your production server and verify the backups

=back

=head1 DESCRIPTION

pt-query-digest is a sophisticated but easy-to-use tool for analyzing MySQL queries. It can analyze queries from MySQL slow, general, and binary logs. (Binary logs must first be converted to text, see L<"--type">). It can also use C<SHOW PROCESSLIST> and MySQL protocol data from tcpdump. By default, the tool reports which queries are the slowest, and therefore the most important to optimize. More complex and custom-tailored reports can be created by using options like L<"--group-by">, L<"--filter">, and L<"--embedded-attributes">.

Query analysis is a best practice that should be done frequently. To make this easier, pt-query-digest has two features: query review (L<"--review">) and query history (L<"--history">). When the L<"--review"> option is used, all unique queries are saved to a database. When the tool is run again with L<"--review">, queries marked as reviewed in the database are not printed in the report. This highlights new queries that need to be reviewed. When the L<"--history"> option is used, query metrics (query time, lock time, etc.) for each unique query are saved to the database. Each time the tool is run with L<"--history">, more historical data is saved, which can be used to trend and analyze query performance over time.

=head1 ATTRIBUTES

pt-query-digest works on events, which are a collection of key-value pairs called attributes. You'll recognize most of the attributes right away: C<Query_time>, C<Lock_time>, and so on. You can just look at a slow log and see them. However, there are some that don't exist in the slow log, and slow logs may actually include different kinds of attributes (for example, you may have a server with the Percona patches).

See L<"ATTRIBUTES REFERENCE"> near the end of this documentation for a list of common and L<"--type"> specific attributes. A familiarity with these attributes is necessary for working with L<"--filter">, L<"--ignore-attributes">, and other attribute-related options.

With creative use of L<"--filter">, you can create new attributes derived from existing attributes. For example, to create an attribute called C<Row_ratio> for examining the ratio of C<Rows_sent> to C<Rows_examined>, specify a filter like:

  --filter '($event->{Row_ratio} = $event->{Rows_sent} / ($event->{Rows_examined})) && 1'

The C<&& 1> trick is needed to create a valid one-line syntax that is always true, even if the assignment happens to evaluate false. The new attribute will automatically appear in the output:

  # Row ratio            1.00    0.00       1    0.50       1    0.71    0.50

Attributes created this way can be specified for L<"--order-by"> or any option that requires an attribute.

=head1 OUTPUT

The default L<"--output"> is a query analysis report. The L<"--[no]report"> option controls whether or not this report is printed. Sometimes you may want to parse all the queries but suppress the report, for example when using L<"--review"> or L<"--history">.

There is one paragraph for each class of query analyzed. A "class" of queries all have the same value for the L<"--group-by"> attribute, which is C<fingerprint> by default. (See L<"ATTRIBUTES">.) A fingerprint is an abstracted version of the query text with literals removed, whitespace collapsed, and so forth.
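For example (an illustrative query, not output captured from the tool), a statement such as

  SELECT * FROM orders WHERE customer_id = 123 AND status IN (1, 2, 3)

is reduced to a fingerprint along the lines of

  select * from orders where customer_id = ? and status in(?+)

so that every execution of the same query form, whatever literal values it uses, is aggregated into the same class.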
The report is formatted so it's easy to paste into emails without wrapping, and all non-query lines begin with a comment, so you can save it to a .sql file and open it in your favorite syntax-highlighting text editor. There is a response-time profile at the beginning.

The output described here is controlled by L<"--report-format">. That option allows you to specify what to print and in what order. The default output in the default order is described here.

The report, by default, begins with a paragraph about the entire analysis run. The information is very similar to what you'll see for each class of queries in the log, but it doesn't have some information that would be too expensive to keep globally for the analysis. It also has some statistics about the code's execution itself, such as the CPU and memory usage, the local date and time of the run, and a list of input files read/parsed.

Following this is the response-time profile over the events. This is a highly summarized view of the unique events in the detailed query report that follows. It contains the following columns:

 Column         Meaning
 =============  ==========================================================
 Rank           The query's rank within the entire set of queries analyzed
 Query ID       The query's fingerprint
 Response time  The total response time, and percentage of overall total
 Calls          The number of times this query was executed
 R/Call         The mean response time per execution
 V/M            The variance-to-mean ratio of response time
 Item           The distilled query

A final line whose rank is shown as MISC contains aggregate statistics on the queries that were not included in the report, due to options such as L<"--limit"> and L<"--outliers">. For details on the variance-to-mean ratio, please see http://en.wikipedia.org/wiki/Index_of_dispersion.

Next, the detailed query report is printed. Each query appears in a paragraph. Here is a sample, slightly reformatted so 'perldoc' will not wrap lines in a terminal. The following will all be one paragraph, but we'll break it up for commentary.

  # Query 2: 0.01 QPS, 0.02x conc, ID 0xFDEA8D2993C9CAF3 at byte 160665

This line identifies the sequential number of the query in the sort order specified by L<"--order-by">. Then there's the queries per second, and the approximate concurrency for this query (calculated as a function of the timespan and total Query_time). Next there's a query ID. This ID is a hex version of the query's checksum in the database, if you're using L<"--review">. You can select the reviewed query's details from the database with a query like C results in:

  Query_id: 0xBDDEB6EDA41897A8.1
  SELECT t1
  SELECT t2
  TLIST t1
  TLIST t2

First of all, there are two SELECT contexts, because C and C statements. (This does not include 'SELECT...INTO' statements, which do not return rows but dump output to a file or variable.) If you're using recreatable test or development servers and wish to compare write statements too (e.g. C, C, C), then specify C<--no-read-only>. If using a binary log, you must specify C<--no-read-only> because binary logs don't contain C and C statements. If C<--no-read-only> is specified, I queries are executed: C, C, C, etc. Even when running in default read-only mode, you should use a MySQL user with only C on each host before executing each query.

The table must be database-qualified. The database and table are automatically created unless C<--no-create-upgrade-table> is specified (see L<"--[no]create-upgrade-table">).
If the table does not already exist, it is created with this definition: =for comment ignore-pt-internal-value MAGIC_upgrade_table CREATE TABLE pt_upgrade ( id INT NOT NULL PRIMARY KEY ) =item --user short form: -u; type: string MySQL user if not the current system user. =item --version Show version and exit. =item --[no]version-check default: yes Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is done to perform these checks. Each request is logged by the server, including software version numbers and unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is done for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L. =item --watch-server type: string Parse only events for this IP:port for L<"--type"> tcpdump. All other IP addresses are ignored. If not specified, pt-upgrade watches all servers by looking for any IP address using port 3306 or "mysql". If you're watching a server with a non-standard port, this won't work, so you must specify the IP address and port to watch. If you want to watch a mix of servers, some running on standard port 3306 and some running on non-standard ports, you need to create separate tcpdump outputs for the non-standard port servers and then specify this option for each. At present pt-upgrade cannot auto-detect servers on port 3306 and also be told to watch a server on a non-standard port. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=>, and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. =item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * L copy: yes Explicitly enable LOAD DATA LOCAL INFILE. For some reason, some vendors compile libmysql without the --enable-local-infile option, which disables the statement. This can lead to weird situations, like the server allowing LOCAL INFILE, but the client throwing exceptions if it's used. However, as long as the server allows LOAD DATA, clients can easily re-enable it; See L and L. This option does exactly that. Although we've not found a case where turning this option leads to errors or differing behavior, to be on the safe side, this option is not on by default. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. 
=item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-upgrade ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Daniel Nichter =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2009-2018 Percona LLC and/or its affiliates. Feedback and improvements are welcome. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-upgrade 3.1.0 =cut percona-toolkit-3.1/bin/pt-variable-advisor000775 001750 001750 00000534357 13535723560 022240 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl # This program is part of Percona Toolkit: http://www.percona.com/software/ # See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal # notices and disclaimers. use strict; use warnings FATAL => 'all'; # This tool is "fat-packed": most of its dependent modules are embedded # in this file. Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. 
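# Concretely, the BEGIN block below turns each package name into its %INC
# key (for example, "Percona::Toolkit" becomes "Percona/Toolkit.pm") and
# points that key at this file, so any later require/use of the embedded
# modules is already satisfied and Perl does not search @INC for a
# separate .pm file.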
BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( Percona::Toolkit OptionParser Lmo::Utils Lmo::Meta Lmo::Object Lmo::Types Lmo DSNParser VersionParser Daemon PodParser TextResultSetParser Advisor AdvisorRules VariableAdvisorRules HTTP::Micro VersionCheck )); } # ########################################################################### # Percona::Toolkit package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Percona/Toolkit.pm # t/lib/Percona/Toolkit.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Percona::Toolkit; our $VERSION = '3.1.0'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Carp qw(carp cluck); use Data::Dumper qw(); require Exporter; our @ISA = qw(Exporter); our @EXPORT_OK = qw( have_required_args Dumper _d ); sub have_required_args { my ($args, @required_args) = @_; my $have_required_args = 1; foreach my $arg ( @required_args ) { if ( !defined $args->{$arg} ) { $have_required_args = 0; carp "Argument $arg is not defined"; } } cluck unless $have_required_args; # print backtrace return $have_required_args; } sub Dumper { local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; Data::Dumper::Dumper(@_); } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Percona::Toolkit package # ########################################################################### # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. 
%args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # Lmo::Utils package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Utils.pm # t/lib/Lmo/Utils.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Utils; use strict; use warnings qw( FATAL all ); require Exporter; our (@ISA, @EXPORT, @EXPORT_OK); BEGIN { @ISA = qw(Exporter); @EXPORT = @EXPORT_OK = qw( _install_coderef _unimport_coderefs _glob_for _stash_for ); } { no strict 'refs'; sub _glob_for { return \*{shift()} } sub _stash_for { return \%{ shift() . 
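# Illustrative example of the merge performed by set_vars() above; the
# variable names and values are hypothetical:
#
#   tool default (MAGIC_set_vars paragraph):  wait_timeout=10000
#   user option:                              --set-vars wait_timeout=500
#   resulting entry:                          wait_timeout => { val => 500,
#                                                               default => 0 }
#
# User-supplied values overwrite the tool's defaults because %user_vars is
# merged after %default_vars. Entries that came only from the defaults keep
# default => 1, which lets DSNParser::set_vars() skip them silently when the
# server does not have that variable.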
"::" }; } } sub _install_coderef { my ($to, $code) = @_; return *{ _glob_for $to } = $code; } sub _unimport_coderefs { my ($target, @names) = @_; return unless @names; my $stash = _stash_for($target); foreach my $name (@names) { if ($stash->{$name} and defined(&{$stash->{$name}})) { delete $stash->{$name}; } } } 1; } # ########################################################################### # End Lmo::Utils package # ########################################################################### # ########################################################################### # Lmo::Meta package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Meta.pm # t/lib/Lmo/Meta.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Meta; use strict; use warnings qw( FATAL all ); my %metadata_for; sub new { my $class = shift; return bless { @_ }, $class } sub metadata_for { my $self = shift; my ($class) = @_; return $metadata_for{$class} ||= {}; } sub class { shift->{class} } sub attributes { my $self = shift; return keys %{$self->metadata_for($self->class)} } sub attributes_for_new { my $self = shift; my @attributes; my $class_metadata = $self->metadata_for($self->class); while ( my ($attr, $meta) = each %$class_metadata ) { if ( exists $meta->{init_arg} ) { push @attributes, $meta->{init_arg} if defined $meta->{init_arg}; } else { push @attributes, $attr; } } return @attributes; } 1; } # ########################################################################### # End Lmo::Meta package # ########################################################################### # ########################################################################### # Lmo::Object package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Object.pm # t/lib/Lmo/Object.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Object; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(blessed); use Lmo::Meta; use Lmo::Utils qw(_glob_for); sub new { my $class = shift; my $args = $class->BUILDARGS(@_); my $class_metadata = Lmo::Meta->metadata_for($class); my @args_to_delete; while ( my ($attr, $meta) = each %$class_metadata ) { next unless exists $meta->{init_arg}; my $init_arg = $meta->{init_arg}; if ( defined $init_arg ) { $args->{$attr} = delete $args->{$init_arg}; } else { push @args_to_delete, $attr; } } delete $args->{$_} for @args_to_delete; for my $attribute ( keys %$args ) { if ( my $coerce = $class_metadata->{$attribute}{coerce} ) { $args->{$attribute} = $coerce->($args->{$attribute}); } if ( my $isa_check = $class_metadata->{$attribute}{isa} ) { my ($check_name, $check_sub) = @$isa_check; $check_sub->($args->{$attribute}); } } while ( my ($attribute, $meta) = each %$class_metadata ) { next unless $meta->{required}; Carp::confess("Attribute ($attribute) is required for $class") if ! 
exists $args->{$attribute} } my $self = bless $args, $class; my @build_subs; my $linearized_isa = mro::get_linear_isa($class); for my $isa_class ( @$linearized_isa ) { unshift @build_subs, *{ _glob_for "${isa_class}::BUILD" }{CODE}; } my @args = %$args; for my $sub (grep { defined($_) && exists &$_ } @build_subs) { $sub->( $self, @args); } return $self; } sub BUILDARGS { shift; # No need for the classname if ( @_ == 1 && ref($_[0]) ) { Carp::confess("Single parameters to new() must be a HASH ref, not $_[0]") unless ref($_[0]) eq ref({}); return {%{$_[0]}} # We want a new reference, always } else { return { @_ }; } } sub meta { my $class = shift; $class = Scalar::Util::blessed($class) || $class; return Lmo::Meta->new(class => $class); } 1; } # ########################################################################### # End Lmo::Object package # ########################################################################### # ########################################################################### # Lmo::Types package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo/Types.pm # t/lib/Lmo/Types.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Lmo::Types; use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); our %TYPES = ( Bool => sub { !$_[0] || (defined $_[0] && looks_like_number($_[0]) && $_[0] == 1) }, Num => sub { defined $_[0] && looks_like_number($_[0]) }, Int => sub { defined $_[0] && looks_like_number($_[0]) && $_[0] == int($_[0]) }, Str => sub { defined $_[0] }, Object => sub { defined $_[0] && blessed($_[0]) }, FileHandle => sub { local $@; require IO::Handle; fileno($_[0]) && $_[0]->opened }, map { my $type = /R/ ? $_ : uc $_; $_ . "Ref" => sub { ref $_[0] eq $type } } qw(Array Code Hash Regexp Glob Scalar) ); sub check_type_constaints { my ($attribute, $type_check, $check_name, $val) = @_; ( ref($type_check) eq 'CODE' ? $type_check->($val) : (ref $val eq $type_check || ($val && $val eq $type_check) || (exists $TYPES{$type_check} && $TYPES{$type_check}->($val))) ) || Carp::confess( qq . qq . (defined $val ? Lmo::Dumper($val) : 'undef') ) } sub _nested_constraints { my ($attribute, $aggregate_type, $type) = @_; my $inner_types; if ( $type =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $inner_types = _nested_constraints($1, $2); } else { $inner_types = $TYPES{$type}; } if ( $aggregate_type eq 'ArrayRef' ) { return sub { my ($val) = @_; return unless ref($val) eq ref([]); if ($inner_types) { for my $value ( @{$val} ) { return unless $inner_types->($value) } } else { for my $value ( @{$val} ) { return unless $value && ($value eq $type || (Scalar::Util::blessed($value) && $value->isa($type))); } } return 1; }; } elsif ( $aggregate_type eq 'Maybe' ) { return sub { my ($value) = @_; return 1 if ! 
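# Illustrative examples (hypothetical values) of the type checks in %TYPES
# and the nested ArrayRef[...]/Maybe[...] constraints here:
#
#   Int           : 5       passes;   "5.5"        fails
#   Str           : ""      passes;   undef        fails
#   ArrayRef[Int] : [1, 2]  passes;   [1, "two"]   fails
#   Maybe[Int]    : undef   passes;   "abc"        fails
#
# A failed check ends up in check_type_constaints(), which confesses with
# the attribute, the constraint name and a Dumper() of the offending value.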
defined($value); if ($inner_types) { return unless $inner_types->($value) } else { return unless $value eq $type || (Scalar::Util::blessed($value) && $value->isa($type)); } return 1; } } else { Carp::confess("Nested aggregate types are only implemented for ArrayRefs and Maybe"); } } 1; } # ########################################################################### # End Lmo::Types package # ########################################################################### # ########################################################################### # Lmo package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Lmo.pm # t/lib/Lmo.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { BEGIN { $INC{"Lmo.pm"} = __FILE__; package Lmo; our $VERSION = '0.30_Percona'; # Forked from 0.30 of Mo. use strict; use warnings qw( FATAL all ); use Carp (); use Scalar::Util qw(looks_like_number blessed); use Lmo::Meta; use Lmo::Object; use Lmo::Types; use Lmo::Utils; my %export_for; sub import { warnings->import(qw(FATAL all)); strict->import(); my $caller = scalar caller(); # Caller's package my %exports = ( extends => \&extends, has => \&has, with => \&with, override => \&override, confess => \&Carp::confess, ); $export_for{$caller} = \%exports; for my $keyword ( keys %exports ) { _install_coderef "${caller}::$keyword" => $exports{$keyword}; } if ( !@{ *{ _glob_for "${caller}::ISA" }{ARRAY} || [] } ) { @_ = "Lmo::Object"; goto *{ _glob_for "${caller}::extends" }{CODE}; } } sub extends { my $caller = scalar caller(); for my $class ( @_ ) { _load_module($class); } _set_package_isa($caller, @_); _set_inherited_metadata($caller); } sub _load_module { my ($class) = @_; (my $file = $class) =~ s{::|'}{/}g; $file .= '.pm'; { local $@; eval { require "$file" } } # or warn $@; return; } sub with { my $package = scalar caller(); require Role::Tiny; for my $role ( @_ ) { _load_module($role); _role_attribute_metadata($package, $role); } Role::Tiny->apply_roles_to_package($package, @_); } sub _role_attribute_metadata { my ($package, $role) = @_; my $package_meta = Lmo::Meta->metadata_for($package); my $role_meta = Lmo::Meta->metadata_for($role); %$package_meta = (%$role_meta, %$package_meta); } sub has { my $names = shift; my $caller = scalar caller(); my $class_metadata = Lmo::Meta->metadata_for($caller); for my $attribute ( ref $names ? @$names : $names ) { my %args = @_; my $method = ($args{is} || '') eq 'ro' ? sub { Carp::confess("Cannot assign a value to a read-only accessor at reader ${caller}::${attribute}") if $#_; return $_[0]{$attribute}; } : sub { return $#_ ? $_[0]{$attribute} = $_[1] : $_[0]{$attribute}; }; $class_metadata->{$attribute} = (); if ( my $type_check = $args{isa} ) { my $check_name = $type_check; if ( my ($aggregate_type, $inner_type) = $type_check =~ /\A(ArrayRef|Maybe)\[(.*)\]\z/ ) { $type_check = Lmo::Types::_nested_constraints($attribute, $aggregate_type, $inner_type); } my $check_sub = sub { my ($new_val) = @_; Lmo::Types::check_type_constaints($attribute, $type_check, $check_name, $new_val); }; $class_metadata->{$attribute}{isa} = [$check_name, $check_sub]; my $orig_method = $method; $method = sub { $check_sub->($_[1]) if $#_; goto &$orig_method; }; } if ( my $builder = $args{builder} ) { my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? 
$_[0]{$attribute} = $_[0]->$builder : goto &$original_method }; } if ( my $code = $args{default} ) { Carp::confess("${caller}::${attribute}'s default is $code, but should be a coderef") unless ref($code) eq 'CODE'; my $original_method = $method; $method = sub { $#_ ? goto &$original_method : ! exists $_[0]{$attribute} ? $_[0]{$attribute} = $_[0]->$code : goto &$original_method }; } if ( my $role = $args{does} ) { my $original_method = $method; $method = sub { if ( $#_ ) { Carp::confess(qq) unless Scalar::Util::blessed($_[1]) && eval { $_[1]->does($role) } } goto &$original_method }; } if ( my $coercion = $args{coerce} ) { $class_metadata->{$attribute}{coerce} = $coercion; my $original_method = $method; $method = sub { if ( $#_ ) { return $original_method->($_[0], $coercion->($_[1])) } goto &$original_method; } } _install_coderef "${caller}::$attribute" => $method; if ( $args{required} ) { $class_metadata->{$attribute}{required} = 1; } if ($args{clearer}) { _install_coderef "${caller}::$args{clearer}" => sub { delete shift->{$attribute} } } if ($args{predicate}) { _install_coderef "${caller}::$args{predicate}" => sub { exists shift->{$attribute} } } if ($args{handles}) { _has_handles($caller, $attribute, \%args); } if (exists $args{init_arg}) { $class_metadata->{$attribute}{init_arg} = $args{init_arg}; } } } sub _has_handles { my ($caller, $attribute, $args) = @_; my $handles = $args->{handles}; my $ref = ref $handles; my $kv; if ( $ref eq ref [] ) { $kv = { map { $_,$_ } @{$handles} }; } elsif ( $ref eq ref {} ) { $kv = $handles; } elsif ( $ref eq ref qr// ) { Carp::confess("Cannot delegate methods based on a Regexp without a type constraint (isa)") unless $args->{isa}; my $target_class = $args->{isa}; $kv = { map { $_, $_ } grep { $_ =~ $handles } grep { !exists $Lmo::Object::{$_} && $target_class->can($_) } grep { !$export_for{$target_class}->{$_} } keys %{ _stash_for $target_class } }; } else { Carp::confess("handles for $ref not yet implemented"); } while ( my ($method, $target) = each %{$kv} ) { my $name = _glob_for "${caller}::$method"; Carp::confess("You cannot overwrite a locally defined method ($method) with a delegation") if defined &$name; my ($target, @curried_args) = ref($target) ? 
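# Illustrative sketch of a small class built with the Lmo keywords above;
# the class and attribute names are hypothetical:
#
#   package My::Cxn;
#   use Lmo;
#
#   has 'dsn'   => ( is => 'ro', isa => 'HashRef', required => 1 );
#   has 'tries' => ( is => 'rw', isa => 'Int',     default  => sub { 3 } );
#   has 'hosts' => ( is => 'rw', isa => 'ArrayRef[Str]',
#                    predicate => 'has_hosts', clearer => 'clear_hosts' );
#
#   no Lmo;
#   1;
#
# My::Cxn->new() confesses unless dsn is given (required), every accessor
# call with an argument is type-checked against isa, and reading tries
# before it is set fills it lazily from the default coderef.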
@$target : $target; *$name = sub { my $self = shift; my $delegate_to = $self->$attribute(); my $error = "Cannot delegate $method to $target because the value of $attribute"; Carp::confess("$error is not defined") unless $delegate_to; Carp::confess("$error is not an object (got '$delegate_to')") unless Scalar::Util::blessed($delegate_to) || (!ref($delegate_to) && $delegate_to->can($target)); return $delegate_to->$target(@curried_args, @_); } } } sub _set_package_isa { my ($package, @new_isa) = @_; my $package_isa = \*{ _glob_for "${package}::ISA" }; @{*$package_isa} = @new_isa; } sub _set_inherited_metadata { my $class = shift; my $class_metadata = Lmo::Meta->metadata_for($class); my $linearized_isa = mro::get_linear_isa($class); my %new_metadata; for my $isa_class (reverse @$linearized_isa) { my $isa_metadata = Lmo::Meta->metadata_for($isa_class); %new_metadata = ( %new_metadata, %$isa_metadata, ); } %$class_metadata = %new_metadata; } sub unimport { my $caller = scalar caller(); my $target = caller; _unimport_coderefs($target, keys %{$export_for{$caller}}); } sub Dumper { require Data::Dumper; local $Data::Dumper::Indent = 0; local $Data::Dumper::Sortkeys = 0; local $Data::Dumper::Quotekeys = 0; local $Data::Dumper::Terse = 1; Data::Dumper::Dumper(@_) } BEGIN { if ($] >= 5.010) { { local $@; require mro; } } else { local $@; eval { require MRO::Compat; } or do { *mro::get_linear_isa = *mro::get_linear_isa_dfs = sub { no strict 'refs'; my $classname = shift; my @lin = ($classname); my %stored; foreach my $parent (@{"$classname\::ISA"}) { my $plin = mro::get_linear_isa_dfs($parent); foreach (@$plin) { next if exists $stored{$_}; push(@lin, $_); $stored{$_} = 1; } } return \@lin; }; } } } sub override { my ($methods, $code) = @_; my $caller = scalar caller; for my $method ( ref($methods) ? @$methods : $methods ) { my $full_method = "${caller}::${method}"; *{_glob_for $full_method} = $code; } } } 1; } # ########################################################################### # End Lmo package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
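# Illustrative example of DSNParser::parse() and as_string() above; the DSN
# keys h, P, u, p are assumed to be defined in the tool's DSN OPTIONS POD
# section and the values are hypothetical:
#
#   my $dp  = DSNParser->new(opts => $dsn_opts);   # $dsn_opts built from POD
#   my $dsn = $dp->parse('h=127.0.0.1,P=3306,u=bob,p=secret');
#   # $dsn->{h} is '127.0.0.1', $dsn->{P} is '3306', and so on; a bare word
#   # such as 'localhost' would have been interpreted as h=localhost
#
#   $dp->as_string($dsn);   # "P=3306,h=127.0.0.1,p=...,u=bob" (password masked)
#
#   my $dsn2 = $dp->parse('D=test', $dsn);   # keys marked copy => 1 are
#                                            # inherited from the previous DSN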
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
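# Illustrative sketch of the usual connection path through the code above;
# error handling is omitted and $dsn is the hashref from the parse() example:
#
#   my ($cxn_string, $user, $pass) = $dp->get_cxn_params($dsn);
#   my $dbh = $dp->get_dbh($cxn_string, $user, $pass, { AutoCommit => 1 });
#
# get_dbh() retries the connect once, sets NAMES and STDOUT binmode when the
# DSN carries charset=..., applies any --set-vars stored via
# $dp->prop('set-vars'), and enables SQL_QUOTE_SHOW_CREATE while appending
# the server's existing SQL_MODE.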
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # VersionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionParser.pm # t/lib/VersionParser.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package VersionParser; use Lmo; use Scalar::Util qw(blessed); use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use overload ( '""' => "version", '<=>' => "cmp", 'cmp' => "cmp", fallback => 1, ); use Carp (); our $VERSION = 0.01; has major => ( is => 'ro', isa => 'Int', required => 1, ); has [qw( minor revision )] => ( is => 'ro', isa => 'Num', ); has flavor => ( is => 'ro', isa => 'Str', default => sub { 'Unknown' }, ); has innodb_version => ( is => 'ro', isa => 'Str', default => sub { 'NO' }, ); sub series { my $self = shift; return $self->_join_version($self->major, $self->minor); } sub version { my $self = shift; return $self->_join_version($self->major, $self->minor, $self->revision); } sub is_in { my ($self, $target) = @_; return $self eq $target; } sub _join_version { my ($self, @parts) = @_; return join ".", map { my $c = $_; $c =~ s/^0\./0/; $c } grep defined, @parts; } sub _split_version { my ($self, $str) = @_; my @version_parts = map { s/^0(?=\d)/0./; $_ } $str =~ m/(\d+)/g; return @version_parts[0..2]; } sub normalized_version { my ( $self ) = @_; my $result = sprintf('%d%02d%02d', map { $_ || 0 } $self->major, $self->minor, $self->revision); PTDEBUG && _d($self->version, 'normalizes to', $result); return $result; } sub comment { my ( $self, $cmd ) = @_; my $v = $self->normalized_version(); return "/*!$v $cmd */" } my @methods = qw(major minor revision); sub cmp { my ($left, $right) = @_; my $right_obj = (blessed($right) && $right->isa(ref($left))) ? $right : ref($left)->new($right); my $retval = 0; for my $m ( @methods ) { last unless defined($left->$m) && defined($right_obj->$m); $retval = $left->$m <=> $right_obj->$m; last if $retval; } return $retval; } sub BUILDARGS { my $self = shift; if ( @_ == 1 ) { my %args; if ( blessed($_[0]) && $_[0]->can("selectrow_hashref") ) { PTDEBUG && _d("VersionParser got a dbh, trying to get the version"); my $dbh = $_[0]; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $query = eval { $dbh->selectall_arrayref(q/SHOW VARIABLES LIKE 'version%'/, { Slice => {} }) }; if ( $query ) { $query = { map { $_->{variable_name} => $_->{value} } @$query }; @args{@methods} = $self->_split_version($query->{version}); $args{flavor} = delete $query->{version_comment} if $query->{version_comment}; } elsif ( eval { ($query) = $dbh->selectrow_array(q/SELECT VERSION()/) } ) { @args{@methods} = $self->_split_version($query); } else { Carp::confess("Couldn't get the version from the dbh while " . "creating a VersionParser object: $@"); } $args{innodb_version} = eval { $self->_innodb_version($dbh) }; } elsif ( !ref($_[0]) ) { @args{@methods} = $self->_split_version($_[0]); } for my $method (@methods) { delete $args{$method} unless defined $args{$method}; } @_ = %args if %args; } return $self->SUPER::BUILDARGS(@_); } sub _innodb_version { my ( $self, $dbh ) = @_; return unless $dbh; my $innodb_version = "NO"; my ($innodb) = grep { $_->{engine} =~ m/InnoDB/i } map { my %hash; @hash{ map { lc $_ } keys %$_ } = values %$_; \%hash; } @{ $dbh->selectall_arrayref("SHOW ENGINES", {Slice=>{}}) }; if ( $innodb ) { PTDEBUG && _d("InnoDB support:", $innodb->{support}); if ( $innodb->{support} =~ m/YES|DEFAULT/i ) { my $vars = $dbh->selectrow_hashref( "SHOW VARIABLES LIKE 'innodb_version'"); $innodb_version = !$vars ? 
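# Illustrative examples of VersionParser above (version numbers are
# hypothetical):
#
#   my $version = VersionParser->new($dbh);    # or ->new('5.7.26-29-log')
#   $version->series();                        # "5.7"
#   "$version";                                # stringifies via version(): "5.7.26"
#   $version->normalized_version();            # 50726
#   $version->comment('SET foo=1');            # "/*!50726 SET foo=1 */"
#   if ( $version >= '5.6.5' ) { ... }         # overloaded <=>/cmp compare
#                                              # major, minor, revision in turn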
"BUILTIN" : ($vars->{Value} || $vars->{value}); } else { $innodb_version = $innodb->{support}; # probably DISABLED or NO } } PTDEBUG && _d("InnoDB version:", $innodb_version); return $innodb_version; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } no Lmo; 1; } # ########################################################################### # End VersionParser package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? $o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . 
" and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . "contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # PodParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/PodParser.pm # t/lib/PodParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package PodParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; my %parse_items_from = ( 'OPTIONS' => 1, 'DSN OPTIONS' => 1, 'RULES' => 1, ); my %item_pattern_for = ( 'OPTIONS' => qr/--(.*)/, 'DSN OPTIONS' => qr/\* (.)/, 'RULES' => qr/(.*)/, ); my %section_has_rules = ( 'OPTIONS' => 1, 'DSN OPTIONS' => 0, 'RULES' => 0, ); sub new { my ( $class, %args ) = @_; my $self = { current_section => '', current_item => '', items => {}, # keyed off SECTION magic => {}, # keyed off SECTION->magic ident (without MAGIC_) magic_ident => '', # set when next para is a magic para }; return bless $self, $class; } sub get_items { my ( $self, $section ) = @_; return $section ? $self->{items}->{$section} : $self->{items}; } sub get_magic { my ( $self, $section ) = @_; return $section ? 
$self->{magic}->{$section} : $self->{magic}; } sub parse_from_file { my ( $self, $file ) = @_; return unless $file; PTDEBUG && _d('Parsing POD in', $file); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs my $para; 1 while defined($para = <$fh>) && $para !~ m/^=pod/; die "$file does not contain =pod" unless $para; while ( defined($para = <$fh>) && $para !~ m/^=cut/ ) { if ( $para =~ m/^=(head|item|over|back)/ ) { my ($cmd, $name) = $para =~ m/^=(\w+)(?:\s+(.+))?/; $name ||= ''; PTDEBUG && _d('cmd:', $cmd, 'name:', $name); $self->command($cmd, $name); } elsif ( $parse_items_from{$self->{current_section}} ) { $self->textblock($para); } } close $fh; } sub command { my ( $self, $cmd, $name ) = @_; $name =~ s/\s+\Z//m; # Remove \n and blank line after name. if ( $cmd eq 'head1' ) { PTDEBUG && _d('In section', $name); $self->{current_section} = $name; } elsif ( $cmd eq 'over' ) { if ( $parse_items_from{$name} ) { PTDEBUG && _d('Start items in', $self->{current_section}); $self->{items}->{$self->{current_section}} = {}; } } elsif ( $cmd eq 'item' && $parse_items_from{$self->{current_section}} ) { my $pat = $item_pattern_for{ $self->{current_section} }; my ($item) = $name =~ m/$pat/; if ( $item ) { PTDEBUG && _d($self->{current_section}, 'item:', $item); $self->{items}->{ $self->{current_section} }->{$item} = { desc => '', # every item should have a desc }; $self->{current_item} = $item; } else { warn "Item $name does not match $pat"; } } elsif ( $cmd eq 'back' ) { if ( $parse_items_from{$self->{current_section}} ) { PTDEBUG && _d('End items in', $self->{current_section}); } } else { $self->{current_section} = ''; } return; } sub textblock { my ( $self, $para ) = @_; return unless $self->{current_section} && $self->{current_item}; my $section = $self->{current_section}; my $item = $self->{items}->{$section}->{ $self->{current_item} }; $para =~ s/\s+\Z//; if ( $para =~ m/^[a-z]\w+[:;] / ) { PTDEBUG && _d('Item attributes:', $para); map { my ($attrib, $val) = split(/: /, $_); $item->{$attrib} = defined $val ? $val : 1; } split(/; /, $para); } else { if ( $self->{magic_ident} ) { my ($leading_space) = $para =~ m/^(\s+)/; my $indent = length($leading_space || ''); if ( $indent ) { $para =~ s/^\s{$indent}//mg; $para =~ s/\s+$//; PTDEBUG && _d("MAGIC", $self->{magic_ident}, "para:", $para); $self->{magic}->{$self->{current_section}}->{$self->{magic_ident}} = $para; } else { PTDEBUG && _d("MAGIC", $self->{magic_ident}, "para is not indented; treating as normal para"); } $self->{magic_ident} = ''; # must unset this! } PTDEBUG && _d('Item desc:', substr($para, 0, 40), length($para) > 40 ? '...' : ''); $para =~ s/\n+/ /g; $item->{desc} .= $para; if ( $para =~ m/MAGIC_(\w+)/ ) { $self->{magic_ident} = $1; # XXX PTDEBUG && _d("MAGIC", $self->{magic_ident}, "follows"); } } return; } sub verbatim { my ( $self, $para ) = @_; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End PodParser package # ########################################################################### # ########################################################################### # TextResultSetParser package # This package is a copy without comments from the original. 
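# Illustrative example of PodParser above: it walks a tool's POD and
# collects =item entries from the OPTIONS, DSN OPTIONS and RULES sections;
# the option name 'verbose' is hypothetical:
#
#   my $p = PodParser->new();
#   $p->parse_from_file(__FILE__);
#   my $opts  = $p->get_items('OPTIONS');   # e.g. { verbose => { desc => '...',
#                                           #        type => 'string', ... }, ... }
#   my $magic = $p->get_magic('OPTIONS');   # indented paragraphs tagged MAGIC_*
#
# An attribute line such as "type: string; default: yes" at the start of an
# item's description becomes keys of that item's hashref.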
The original # with comments and its test file can be found in the Bazaar repository at, # lib/TextResultSetParser.pm # t/lib/TextResultSetParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package TextResultSetParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my %value_for = ( 'NULL' => undef, # DBI::selectall_arrayref() does this ($args{value_for} ? %{$args{value_for}} : ()), ); my $self = { %args, value_for => \%value_for, }; return bless $self, $class; } sub _parse_tabular { my ( $text, @cols ) = @_; my %row; my @vals = $text =~ m/\| +([^\|]*?)(?= +\|)/msg; return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub _parse_tab_sep { my ( $text, @cols ) = @_; my %row; my @vals = split(/\t/, $text); return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub parse_vertical_row { my ( $self, $text ) = @_; my %row = $text =~ m/^\s*(\w+):(?: ([^\n]*))?/msg; if ( $self->{NAME_lc} ) { my %lc_row = map { my $key = lc $_; $key => $row{$_}; } keys %row; return \%lc_row; } else { return \%row; } } sub parse { my ( $self, $text ) = @_; my $result_set; if ( $text =~ m/^\+---/m ) { # standard "tabular" output PTDEBUG && _d('Result set text is standard tabular'); my $line_pattern = qr/^(\| .*)[\r\n]+/m; $result_set = $self->parse_horizontal_row($text, $line_pattern, \&_parse_tabular); } elsif ( $text =~ m/^\w+\t\w+/m ) { # tab-separated PTDEBUG && _d('Result set text is tab-separated'); my $line_pattern = qr/^(.*?\t.*)[\r\n]+/m; $result_set = $self->parse_horizontal_row($text, $line_pattern, \&_parse_tab_sep); } elsif ( $text =~ m/\*\*\* \d+\. row/ ) { # "vertical" output PTDEBUG && _d('Result set text is vertical (\G)'); foreach my $row ( split_vertical_rows($text) ) { push @$result_set, $self->parse_vertical_row($row); } } else { my $text_sample = substr $text, 0, 300; my $remaining = length $text > 300 ? (length $text) - 300 : 0; chomp $text_sample; die "Cannot determine if text is tabular, tab-separated or vertical:\n" . "$text_sample\n" . ($remaining ? "(not showing last $remaining bytes of text)\n" : ""); } if ( $self->{value_for} ) { foreach my $result_set ( @$result_set ) { foreach my $key ( keys %$result_set ) { next unless defined $result_set->{$key}; $result_set->{$key} = $self->{value_for}->{ $result_set->{$key} } if exists $self->{value_for}->{ $result_set->{$key} }; } } } return $result_set; } sub parse_horizontal_row { my ( $self, $text, $line_pattern, $sub ) = @_; my @result_sets = (); my @cols = (); foreach my $line ( $text =~ m/$line_pattern/g ) { my ( $row, $cols ) = $sub->($line, @cols); if ( $row ) { push @result_sets, $row; } else { @cols = map { $self->{NAME_lc} ? lc $_ : $_ } @$cols; } } return \@result_sets; } sub split_vertical_rows { my ( $text ) = @_; my $ROW_HEADER = '\*{3,} \d+\. row \*{3,}'; my @rows = $text =~ m/($ROW_HEADER.*?)(?=$ROW_HEADER|\z)/omgs; return @rows; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? 
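# Illustrative examples of TextResultSetParser above; the input mimics the
# three mysql client output styles it recognizes:
#
#   my $trp  = TextResultSetParser->new();
#   my $rows = $trp->parse($text);    # arrayref of hashrefs; 'NULL' -> undef
#
#   # tabular  : "+----+ ... | id | name | ..."           -> [ { id => 1, name => 'bob' } ]
#   # tab-sep  : "id\tname\n1\tbob\n"                     -> [ { id => 1, name => 'bob' } ]
#   # vertical : "*************************** 1. row ***\n" .
#   #            "id: 1\nname: bob\n"                     -> [ { id => 1, name => 'bob' } ]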
$_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End TextResultSetParser package # ########################################################################### # ########################################################################### # Advisor package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Advisor.pm # t/lib/Advisor.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package Advisor; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(match_type) ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, rules => [], # Rules from all advisor modules. rule_index_for => {}, # Maps rules by ID to their array index in $rules. rule_info => {}, # ID, severity, description, etc. for each rule. }; return bless $self, $class; } sub load_rules { my ( $self, $advisor ) = @_; return unless $advisor; PTDEBUG && _d('Loading rules from', ref $advisor); my $i = scalar @{$self->{rules}}; RULE: foreach my $rule ( $advisor->get_rules() ) { my $id = $rule->{id}; if ( $self->{ignore_rules}->{"$id"} ) { PTDEBUG && _d("Ignoring rule", $id); next RULE; } die "Rule $id already exists and cannot be redefined" if defined $self->{rule_index_for}->{$id}; push @{$self->{rules}}, $rule; $self->{rule_index_for}->{$id} = $i++; } return; } sub load_rule_info { my ( $self, $advisor ) = @_; return unless $advisor; PTDEBUG && _d('Loading rule info from', ref $advisor); my $rules = $self->{rules}; foreach my $rule ( @$rules ) { my $id = $rule->{id}; if ( $self->{ignore_rules}->{"$id"} ) { die "Rule $id was loaded but should be ignored"; } my $rule_info = $advisor->get_rule_info($id); next unless $rule_info; die "Info for rule $id already exists and cannot be redefined" if $self->{rule_info}->{$id}; $self->{rule_info}->{$id} = $rule_info; } return; } sub run_rules { my ( $self, %args ) = @_; my @matched_rules; my @matched_pos; my $rules = $self->{rules}; my $match_type = lc $self->{match_type}; foreach my $rule ( @$rules ) { eval { my $match = $rule->{code}->(%args); if ( $match_type eq 'pos' ) { if ( defined $match ) { PTDEBUG && _d('Matches rule', $rule->{id}, 'near pos', $match); push @matched_rules, $rule->{id}; push @matched_pos, $match; } } elsif ( $match_type eq 'bool' ) { if ( $match ) { PTDEBUG && _d("Matches rule", $rule->{id}); push @matched_rules, $rule->{id}; } } }; if ( $EVAL_ERROR ) { warn "Code for rule $rule->{id} caused an error: $EVAL_ERROR"; } } return \@matched_rules, \@matched_pos; }; sub get_rule_info { my ( $self, $id ) = @_; return unless $id; return $self->{rule_info}->{$id}; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Advisor package # ########################################################################### # ########################################################################### # AdvisorRules package # This package is a copy without comments from the original. 
The original # with comments and its test file can be found in the Bazaar repository at, # lib/AdvisorRules.pm # t/lib/AdvisorRules.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package AdvisorRules; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(PodParser) ) { die "I need a $arg argument" unless $args{$arg}; } my $self = { %args, rules => [], rule_info => {}, }; return bless $self, $class; } sub load_rule_info { my ( $self, %args ) = @_; foreach my $arg ( qw(file section ) ) { die "I need a $arg argument" unless $args{$arg}; } my $rules = $args{rules} || $self->{rules}; my $p = $self->{PodParser}; $p->parse_from_file($args{file}); my $rule_items = $p->get_items($args{section}); my %seen; foreach my $rule_id ( keys %$rule_items ) { my $rule = $rule_items->{$rule_id}; die "Rule $rule_id has no description" unless $rule->{desc}; die "Rule $rule_id has no severity" unless $rule->{severity}; die "Rule $rule_id is already defined" if exists $self->{rule_info}->{$rule_id}; $self->{rule_info}->{$rule_id} = { id => $rule_id, severity => $rule->{severity}, description => $rule->{desc}, }; } foreach my $rule ( @$rules ) { die "There is no info for rule $rule->{id} in $args{file}" unless $self->{rule_info}->{ $rule->{id} }; } return; } sub get_rule_info { my ( $self, $id ) = @_; return unless $id; return $self->{rule_info}->{$id}; } sub _reset_rule_info { my ( $self ) = @_; $self->{rule_info} = {}; return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End AdvisorRules package # ########################################################################### # ########################################################################### # VariableAdvisorRules package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VariableAdvisorRules.pm # t/lib/VariableAdvisorRules.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package VariableAdvisorRules; use base 'AdvisorRules'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub new { my ( $class, %args ) = @_; my $self = $class->SUPER::new(%args); @{$self->{rules}} = $self->get_rules(); PTDEBUG && _d(scalar @{$self->{rules}}, "rules"); return $self; } sub get_rules { return { id => 'auto_increment', code => sub { my ( %args ) = @_; my $vars = $args{variables}; return unless defined $vars->{auto_increment_increment} && defined $vars->{auto_increment_offset}; return $vars->{auto_increment_increment} != 1 || $vars->{auto_increment_offset} != 1 ? 1 : 0; }, }, { id => 'concurrent_insert', code => sub { my ( %args ) = @_; if ( $args{variables}->{concurrent_insert} && $args{variables}->{concurrent_insert} =~ m/[^\d]/ ) { return $args{variables}->{concurrent_insert} eq 'ALWAYS' ? 
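# Illustrative sketch of how the advisor pieces above fit together, in the
# style of pt-variable-advisor; it assumes the tool's RULES POD section
# documents every rule id with a severity attribute, and the variable values
# are hypothetical:
#
#   my $rules = VariableAdvisorRules->new( PodParser => PodParser->new() );
#   $rules->load_rule_info( file => __FILE__, section => 'RULES' );
#
#   my $advisor = Advisor->new( match_type => 'bool' );
#   $advisor->load_rules($rules);
#   $advisor->load_rule_info($rules);
#
#   my ($ids) = $advisor->run_rules(
#      variables     => { innodb_flush_log_at_trx_commit => 0 },
#      mysql_version => $version,   # a VersionParser object
#   );
#   # $ids is an arrayref of the ids whose code sub returned true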
1 : 0; } return _var_gt($args{variables}->{concurrent_insert}, 1); }, }, { id => 'connect_timeout', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{connect_timeout}, 10); }, }, { id => 'debug', code => sub { my ( %args ) = @_; return $args{variables}->{debug} ? 1 : 0; }, }, { id => 'delay_key_write', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{delay_key_write}, "ON"); }, }, { id => 'flush', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{flush}, "ON"); }, }, { id => 'flush_time', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{flush_time}, 0); }, }, { id => 'have_bdb', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{have_bdb}, 'YES'); }, }, { id => 'init_connect', code => sub { my ( %args ) = @_; return $args{variables}->{init_connect} ? 1 : 0; }, }, { id => 'init_file', code => sub { my ( %args ) = @_; return $args{variables}->{init_file} ? 1 : 0; }, }, { id => 'init_slave', code => sub { my ( %args ) = @_; return $args{variables}->{init_slave} ? 1 : 0; }, }, { id => 'innodb_additional_mem_pool_size', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{innodb_additional_mem_pool_size}, 20 * 1_048_576); # 20M }, }, { id => 'innodb_buffer_pool_size', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{innodb_buffer_pool_size}, 10 * 1_048_576); # 10M }, }, { id => 'innodb_checksums', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{innodb_checksums}, "ON"); }, }, { id => 'innodb_doublewrite', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{innodb_doublewrite}, "ON"); }, }, { id => 'innodb_fast_shutdown', code => sub { my ( %args ) = @_; return _var_neq($args{variables}->{innodb_fast_shutdown}, 1); }, }, { id => 'innodb_flush_log_at_trx_commit-1', code => sub { my ( %args ) = @_; return _var_neq($args{variables}->{innodb_flush_log_at_trx_commit}, 1); }, }, { id => 'innodb_flush_log_at_trx_commit-2', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{innodb_flush_log_at_trx_commit}, 0); }, }, { id => 'innodb_force_recovery', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{innodb_force_recovery}, 0); }, }, { id => 'innodb_lock_wait_timeout', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{innodb_lock_wait_timeout}, 50); }, }, { id => 'innodb_log_buffer_size', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{innodb_log_buffer_size}, 16 * 1_048_576); # 16M }, }, { id => 'innodb_log_file_size', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{innodb_log_file_size}, 5 * 1_048_576); # 5M }, }, { id => 'innodb_max_dirty_pages_pct', code => sub { my ( %args ) = @_; my $mysql_version = $args{mysql_version}; return _var_lt($args{variables}->{innodb_max_dirty_pages_pct}, ($mysql_version < '5.5' ? 
90 : 75)); }, }, { id => 'key_buffer_size', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{key_buffer_size}, 8 * 1_048_576); # 8M }, }, { id => 'large_pages', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{large_pages}, "ON"); }, }, { id => 'locked_in_memory', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{locked_in_memory}, "ON"); }, }, { id => 'log_warnings-1', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{log_warnings}, 0); }, }, { id => 'log_warnings-2', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{log_warnings}, 1); }, }, { id => 'low_priority_updates', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{low_priority_updates}, "ON"); }, }, { id => 'max_binlog_size', code => sub { my ( %args ) = @_; return _var_lt($args{variables}->{max_binlog_size}, 1 * 1_073_741_824); # 1G }, }, { id => 'max_connect_errors', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{max_connect_errors}, 10); }, }, { id => 'max_connections', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{max_connections}, 1_000); }, }, { id => 'myisam_repair_threads', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{myisam_repair_threads}, 1); }, }, { id => 'old_passwords', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{old_passwords}, "ON"); }, }, { id => 'optimizer_prune_level', code => sub { my ( %args ) = @_; return _var_lt($args{variables}->{optimizer_prune_level}, 1); }, }, { id => 'port', code => sub { my ( %args ) = @_; return _var_neq($args{variables}->{port}, 3306); }, }, { id => 'query_cache_size-1', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{query_cache_size}, 128 * 1_048_576); # 128M }, }, { id => 'query_cache_size-2', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{query_cache_size}, 512 * 1_048_576); # 512M }, }, { id => 'read_buffer_size-1', code => sub { my ( %args ) = @_; return _var_neq($args{variables}->{read_buffer_size}, 131_072); }, }, { id => 'read_buffer_size-2', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{read_buffer_size}, 8 * 1_048_576); # 8M }, }, { id => 'read_rnd_buffer_size-1', code => sub { my ( %args ) = @_; return _var_neq($args{variables}->{read_rnd_buffer_size}, 262_144); }, }, { id => 'read_rnd_buffer_size-2', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{read_rnd_buffer_size}, 4 * 1_048_576); # 4M }, }, { id => 'relay_log_space_limit', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{relay_log_space_limit}, 0); }, }, { id => 'slave_net_timeout', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{slave_net_timeout}, 60); }, }, { id => 'slave_skip_errors', code => sub { my ( %args ) = @_; return $args{variables}->{slave_skip_errors} && $args{variables}->{slave_skip_errors} ne 'OFF' ? 
1 : 0; }, }, { id => 'sort_buffer_size-1', code => sub { my ( %args ) = @_; return _var_neq($args{variables}->{sort_buffer_size}, 2_097_144); }, }, { id => 'sort_buffer_size-2', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{sort_buffer_size}, 4 * 1_048_576); # 4M }, }, { id => 'sql_notes', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{sql_notes}, "OFF"); }, }, { id => 'sync_frm', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{sync_frm}, "ON"); }, }, { id => 'tx_isolation-1', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{tx_isolation}, "REPEATABLE-READ"); }, }, { id => 'tx_isolation-2', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{tx_isolation}, "REPEATABLE-READ") && _var_sneq($args{variables}->{tx_isolation}, "READ-COMMITTED") ? 1 : 0; }, }, { id => 'expire_logs_days', code => sub { my ( %args ) = @_; return _var_eq($args{variables}->{expire_logs_days}, 0) && _var_seq($args{variables}->{log_bin}, "ON"); }, }, { id => 'innodb_file_io_threads', code => sub { my ( %args ) = @_; return _var_neq($args{variables}->{innodb_file_io_threads}, 4) && $OSNAME ne 'MSWin32' ? 1 : 0; }, }, { id => 'innodb_data_file_path', code => sub { my ( %args ) = @_; return ($args{variables}->{innodb_data_file_path} || '') =~ m/autoextend/ ? 1 : 0; }, }, { id => 'innodb_flush_method', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{innodb_flush_method}, 'O_DIRECT') && $OSNAME ne 'MSWin32' ? 1 : 0; }, }, { id => 'innodb_locks_unsafe_for_binlog', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{innodb_locks_unsafe_for_binlog}, "ON") && _var_seq($args{variables}->{log_bin}, "ON"); }, }, { id => 'innodb_support_xa', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{innodb_support_xa}, "ON") && _var_seq($args{variables}->{log_bin}, "ON"); }, }, { id => 'log_bin', code => sub { my ( %args ) = @_; return _var_sneq($args{variables}->{log_bin}, "ON"); }, }, { id => 'log_output', code => sub { my ( %args ) = @_; return ($args{variables}->{log_output} || '') =~ m/TABLE/i ? 1 : 0; }, }, { id => 'max_relay_log_size', code => sub { my ( %args ) = @_; return _var_gt($args{variables}->{max_relay_log_size}, 0) && _var_lt($args{variables}->{max_relay_log_size}, 1 * 1_073_741_824) ? 1 : 0; }, }, { id => 'myisam_recover_options', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{myisam_recover_options}, "OFF") || _var_seq($args{variables}->{myisam_recover_options}, "DEFAULT") ? 1 : 0; }, }, { id => 'storage_engine', code => sub { my ( %args ) = @_; return 0 unless $args{variables}->{storage_engine}; return $args{variables}->{storage_engine} !~ m/InnoDB|MyISAM/i ? 1 : 0; }, }, { id => 'sync_binlog', code => sub { my ( %args ) = @_; return _var_seq($args{variables}->{log_bin}, "ON") && ( _var_eq($args{variables}->{sync_binlog}, 0) || _var_gt($args{variables}->{sync_binlog}, 1)) ? 1 : 0; }, }, { id => 'tmp_table_size', code => sub { my ( %args ) = @_; return ($args{variables}->{tmp_table_size} || 0) > ($args{variables}->{max_heap_table_size} || 0) ? 
1 : 0; }, }, { id => 'old mysql version', code => sub { my ( %args ) = @_; my $mysql_version = $args{mysql_version}; return 0 unless $mysql_version; return 1 if ($mysql_version == '3' && $mysql_version < '3.23' ) || ($mysql_version == '4' && $mysql_version < '4.1.20') || ($mysql_version == '5.0' && $mysql_version < '5.0.37') || ($mysql_version == '5.1' && $mysql_version < '5.1.30'); return 0; }, }, { id => 'end-of-life mysql version', code => sub { my ( %args ) = @_; my $mysql_version = $args{mysql_version}; return 0 unless $mysql_version; return $mysql_version < '5.1' ? 1 : 0; # 5.1.x }, }, }; sub _var_gt { my ($var, $val) = @_; return 0 unless defined $var; return $var > $val ? 1 : 0; } sub _var_lt { my ($var, $val) = @_; return 0 unless defined $var; return $var < $val ? 1 : 0; } sub _var_eq { my ($var, $val) = @_; return 0 unless defined $var; return $var == $val ? 1 : 0; } sub _var_neq { my ($var, $val) = @_; return 0 unless defined $var; return _var_eq($var, $val) ? 0 : 1; } sub _var_seq { my ($var, $val) = @_; return 0 unless defined $var; return $var eq $val ? 1 : 0; } sub _var_sneq { my ($var, $val) = @_; return 0 unless defined $var; return _var_seq($var, $val) ? 0 : 1; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VariableAdvisorRules package # ########################################################################### # ########################################################################### # HTTP::Micro package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/HTTP/Micro.pm # t/lib/HTTP/Micro.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package HTTP::Micro; our $VERSION = '0.01'; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp (); my @attributes; BEGIN { @attributes = qw(agent timeout); no strict 'refs'; for my $accessor ( @attributes ) { *{$accessor} = sub { @_ > 1 ? $_[0]->{$accessor} = $_[1] : $_[0]->{$accessor}; }; } } sub new { my($class, %args) = @_; (my $agent = $class) =~ s{::}{-}g; my $self = { agent => $agent . "/" . ($class->VERSION || 0), timeout => 60, }; for my $key ( @attributes ) { $self->{$key} = $args{$key} if exists $args{$key} } return bless $self, $class; } my %DefaultPort = ( http => 80, https => 443, ); sub request { my ($self, $method, $url, $args) = @_; @_ == 3 || (@_ == 4 && ref $args eq 'HASH') or Carp::croak(q/Usage: $http->request(METHOD, URL, [HASHREF])/); $args ||= {}; # we keep some state in this during _request my $response; for ( 0 .. 1 ) { $response = eval { $self->_request($method, $url, $args) }; last unless $@ && $method eq 'GET' && $@ =~ m{^(?:Socket closed|Unexpected end)}; } if (my $e = "$@") { $response = { success => q{}, status => 599, reason => 'Internal Exception', content => $e, headers => { 'content-type' => 'text/plain', 'content-length' => length $e, } }; } return $response; } sub _request { my ($self, $method, $url, $args) = @_; my ($scheme, $host, $port, $path_query) = $self->_split_url($url); my $request = { method => $method, scheme => $scheme, host_port => ($port == $DefaultPort{$scheme} ? 
$host : "$host:$port"), uri => $path_query, headers => {}, }; my $handle = HTTP::Micro::Handle->new(timeout => $self->{timeout}); $handle->connect($scheme, $host, $port); $self->_prepare_headers_and_cb($request, $args); $handle->write_request_header(@{$request}{qw/method uri headers/}); $handle->write_content_body($request) if $request->{content}; my $response; do { $response = $handle->read_response_header } until (substr($response->{status},0,1) ne '1'); if (!($method eq 'HEAD' || $response->{status} =~ /^[23]04/)) { $response->{content} = ''; $handle->read_content_body(sub { $_[1]->{content} .= $_[0] }, $response); } $handle->close; $response->{success} = substr($response->{status},0,1) eq '2'; return $response; } sub _prepare_headers_and_cb { my ($self, $request, $args) = @_; for ($args->{headers}) { next unless defined; while (my ($k, $v) = each %$_) { $request->{headers}{lc $k} = $v; } } $request->{headers}{'host'} = $request->{host_port}; $request->{headers}{'connection'} = "close"; $request->{headers}{'user-agent'} ||= $self->{agent}; if (defined $args->{content}) { $request->{headers}{'content-type'} ||= "application/octet-stream"; utf8::downgrade($args->{content}, 1) or Carp::croak(q/Wide character in request message body/); $request->{headers}{'content-length'} = length $args->{content}; $request->{content} = $args->{content}; } return; } sub _split_url { my $url = pop; my ($scheme, $authority, $path_query) = $url =~ m<\A([^:/?#]+)://([^/?#]*)([^#]*)> or Carp::croak(qq/Cannot parse URL: '$url'/); $scheme = lc $scheme; $path_query = "/$path_query" unless $path_query =~ m<\A/>; my $host = (length($authority)) ? lc $authority : 'localhost'; $host =~ s/\A[^@]*@//; # userinfo my $port = do { $host =~ s/:([0-9]*)\z// && length $1 ? $1 : $DefaultPort{$scheme} }; return ($scheme, $host, $port, $path_query); } } # HTTP::Micro { package HTTP::Micro::Handle; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Carp qw(croak); use Errno qw(EINTR EPIPE); use IO::Socket qw(SOCK_STREAM); sub BUFSIZE () { 32768 } my $Printable = sub { local $_ = shift; s/\r/\\r/g; s/\n/\\n/g; s/\t/\\t/g; s/([^\x20-\x7E])/sprintf('\\x%.2X', ord($1))/ge; $_; }; sub new { my ($class, %args) = @_; return bless { rbuf => '', timeout => 60, max_line_size => 16384, %args }, $class; } my $ssl_verify_args = { check_cn => "when_only", wildcards_in_alt => "anywhere", wildcards_in_cn => "anywhere" }; sub connect { @_ == 4 || croak(q/Usage: $handle->connect(scheme, host, port)/); my ($self, $scheme, $host, $port) = @_; if ( $scheme eq 'https' ) { eval "require IO::Socket::SSL" unless exists $INC{'IO/Socket/SSL.pm'}; croak(qq/IO::Socket::SSL must be installed for https support\n/) unless $INC{'IO/Socket/SSL.pm'}; } elsif ( $scheme ne 'http' ) { croak(qq/Unsupported URL scheme '$scheme'\n/); } $self->{fh} = IO::Socket::INET->new( PeerHost => $host, PeerPort => $port, Proto => 'tcp', Type => SOCK_STREAM, Timeout => $self->{timeout} ) or croak(qq/Could not connect to '$host:$port': $@/); binmode($self->{fh}) or croak(qq/Could not binmode() socket: '$!'/); if ( $scheme eq 'https') { IO::Socket::SSL->start_SSL($self->{fh}); ref($self->{fh}) eq 'IO::Socket::SSL' or die(qq/SSL connection failed for $host\n/); if ( $self->{fh}->can("verify_hostname") ) { $self->{fh}->verify_hostname( $host, $ssl_verify_args ) or die(qq/SSL certificate not valid for $host\n/); } else { my $fh = $self->{fh}; _verify_hostname_of_cert($host, _peer_certificate($fh), $ssl_verify_args) or die(qq/SSL certificate not valid for 
$host\n/); } } $self->{host} = $host; $self->{port} = $port; return $self; } sub close { @_ == 1 || croak(q/Usage: $handle->close()/); my ($self) = @_; CORE::close($self->{fh}) or croak(qq/Could not close socket: '$!'/); } sub write { @_ == 2 || croak(q/Usage: $handle->write(buf)/); my ($self, $buf) = @_; my $len = length $buf; my $off = 0; local $SIG{PIPE} = 'IGNORE'; while () { $self->can_write or croak(q/Timed out while waiting for socket to become ready for writing/); my $r = syswrite($self->{fh}, $buf, $len, $off); if (defined $r) { $len -= $r; $off += $r; last unless $len > 0; } elsif ($! == EPIPE) { croak(qq/Socket closed by remote server: $!/); } elsif ($! != EINTR) { croak(qq/Could not write to socket: '$!'/); } } return $off; } sub read { @_ == 2 || @_ == 3 || croak(q/Usage: $handle->read(len)/); my ($self, $len) = @_; my $buf = ''; my $got = length $self->{rbuf}; if ($got) { my $take = ($got < $len) ? $got : $len; $buf = substr($self->{rbuf}, 0, $take, ''); $len -= $take; } while ($len > 0) { $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $buf, $len, length $buf); if (defined $r) { last unless $r; $len -= $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } if ($len) { croak(q/Unexpected end of stream/); } return $buf; } sub readline { @_ == 1 || croak(q/Usage: $handle->readline()/); my ($self) = @_; while () { if ($self->{rbuf} =~ s/\A ([^\x0D\x0A]* \x0D?\x0A)//x) { return $1; } $self->can_read or croak(q/Timed out while waiting for socket to become ready for reading/); my $r = sysread($self->{fh}, $self->{rbuf}, BUFSIZE, length $self->{rbuf}); if (defined $r) { last unless $r; } elsif ($! != EINTR) { croak(qq/Could not read from socket: '$!'/); } } croak(q/Unexpected end of stream while looking for line/); } sub read_header_lines { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->read_header_lines([headers])/); my ($self, $headers) = @_; $headers ||= {}; my $lines = 0; my $val; while () { my $line = $self->readline; if ($line =~ /\A ([^\x00-\x1F\x7F:]+) : [\x09\x20]* ([^\x0D\x0A]*)/x) { my ($field_name) = lc $1; $val = \($headers->{$field_name} = $2); } elsif ($line =~ /\A [\x09\x20]+ ([^\x0D\x0A]*)/x) { $val or croak(q/Unexpected header continuation line/); next unless length $1; $$val .= ' ' if length $$val; $$val .= $1; } elsif ($line =~ /\A \x0D?\x0A \z/x) { last; } else { croak(q/Malformed header line: / . $Printable->($line)); } } return $headers; } sub write_header_lines { (@_ == 2 && ref $_[1] eq 'HASH') || croak(q/Usage: $handle->write_header_lines(headers)/); my($self, $headers) = @_; my $buf = ''; while (my ($k, $v) = each %$headers) { my $field_name = lc $k; $field_name =~ /\A [\x21\x23-\x27\x2A\x2B\x2D\x2E\x30-\x39\x41-\x5A\x5E-\x7A\x7C\x7E]+ \z/x or croak(q/Invalid HTTP header field name: / . $Printable->($field_name)); $field_name =~ s/\b(\w)/\u$1/g; $buf .= "$field_name: $v\x0D\x0A"; } $buf .= "\x0D\x0A"; return $self->write($buf); } sub read_content_body { @_ == 3 || @_ == 4 || croak(q/Usage: $handle->read_content_body(callback, response, [read_length])/); my ($self, $cb, $response, $len) = @_; $len ||= $response->{headers}{'content-length'}; croak("No content-length in the returned response, and this " . "UA doesn't implement chunking") unless defined $len; while ($len > 0) { my $read = ($len > BUFSIZE) ? 
BUFSIZE : $len; $cb->($self->read($read), $response); $len -= $read; } return; } sub write_content_body { @_ == 2 || croak(q/Usage: $handle->write_content_body(request)/); my ($self, $request) = @_; my ($len, $content_length) = (0, $request->{headers}{'content-length'}); $len += $self->write($request->{content}); $len == $content_length or croak(qq/Content-Length missmatch (got: $len expected: $content_length)/); return $len; } sub read_response_header { @_ == 1 || croak(q/Usage: $handle->read_response_header()/); my ($self) = @_; my $line = $self->readline; $line =~ /\A (HTTP\/(0*\d+\.0*\d+)) [\x09\x20]+ ([0-9]{3}) [\x09\x20]+ ([^\x0D\x0A]*) \x0D?\x0A/x or croak(q/Malformed Status-Line: / . $Printable->($line)); my ($protocol, $version, $status, $reason) = ($1, $2, $3, $4); return { status => $status, reason => $reason, headers => $self->read_header_lines, protocol => $protocol, }; } sub write_request_header { @_ == 4 || croak(q/Usage: $handle->write_request_header(method, request_uri, headers)/); my ($self, $method, $request_uri, $headers) = @_; return $self->write("$method $request_uri HTTP/1.1\x0D\x0A") + $self->write_header_lines($headers); } sub _do_timeout { my ($self, $type, $timeout) = @_; $timeout = $self->{timeout} unless defined $timeout && $timeout >= 0; my $fd = fileno $self->{fh}; defined $fd && $fd >= 0 or croak(q/select(2): 'Bad file descriptor'/); my $initial = time; my $pending = $timeout; my $nfound; vec(my $fdset = '', $fd, 1) = 1; while () { $nfound = ($type eq 'read') ? select($fdset, undef, undef, $pending) : select(undef, $fdset, undef, $pending) ; if ($nfound == -1) { $! == EINTR or croak(qq/select(2): '$!'/); redo if !$timeout || ($pending = $timeout - (time - $initial)) > 0; $nfound = 0; } last; } $! = 0; return $nfound; } sub can_read { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_read([timeout])/); my $self = shift; return $self->_do_timeout('read', @_) } sub can_write { @_ == 1 || @_ == 2 || croak(q/Usage: $handle->can_write([timeout])/); my $self = shift; return $self->_do_timeout('write', @_) } } # HTTP::Micro::Handle my $prog = <<'EOP'; BEGIN { if ( defined &IO::Socket::SSL::CAN_IPV6 ) { *CAN_IPV6 = \*IO::Socket::SSL::CAN_IPV6; } else { constant->import( CAN_IPV6 => '' ); } my %const = ( NID_CommonName => 13, GEN_DNS => 2, GEN_IPADD => 7, ); while ( my ($name,$value) = each %const ) { no strict 'refs'; *{$name} = UNIVERSAL::can( 'Net::SSLeay', $name ) || sub { $value }; } } { use Carp qw(croak); my %dispatcher = ( issuer => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_issuer_name( shift )) }, subject => sub { Net::SSLeay::X509_NAME_oneline( Net::SSLeay::X509_get_subject_name( shift )) }, ); if ( $Net::SSLeay::VERSION >= 1.30 ) { $dispatcher{commonName} = sub { my $cn = Net::SSLeay::X509_NAME_get_text_by_NID( Net::SSLeay::X509_get_subject_name( shift ), NID_CommonName); $cn =~s{\0$}{}; # work around Bug in Net::SSLeay <1.33 $cn; } } else { $dispatcher{commonName} = sub { croak "you need at least Net::SSLeay version 1.30 for getting commonName" } } if ( $Net::SSLeay::VERSION >= 1.33 ) { $dispatcher{subjectAltNames} = sub { Net::SSLeay::X509_get_subjectAltNames( shift ) }; } else { $dispatcher{subjectAltNames} = sub { return; }; } $dispatcher{authority} = $dispatcher{issuer}; $dispatcher{owner} = $dispatcher{subject}; $dispatcher{cn} = $dispatcher{commonName}; sub _peer_certificate { my ($self, $field) = @_; my $ssl = $self->_get_ssl_object or return; my $cert = ${*$self}{_SSL_certificate} ||= Net::SSLeay::get_peer_certificate($ssl) or 
return $self->error("Could not retrieve peer certificate"); if ($field) { my $sub = $dispatcher{$field} or croak "invalid argument for peer_certificate, valid are: ".join( " ",keys %dispatcher ). "\nMaybe you need to upgrade your Net::SSLeay"; return $sub->($cert); } else { return $cert } } my %scheme = ( ldap => { wildcards_in_cn => 0, wildcards_in_alt => 'leftmost', check_cn => 'always', }, http => { wildcards_in_cn => 'anywhere', wildcards_in_alt => 'anywhere', check_cn => 'when_only', }, smtp => { wildcards_in_cn => 0, wildcards_in_alt => 0, check_cn => 'always' }, none => {}, # do not check ); $scheme{www} = $scheme{http}; # alias $scheme{xmpp} = $scheme{http}; # rfc 3920 $scheme{pop3} = $scheme{ldap}; # rfc 2595 $scheme{imap} = $scheme{ldap}; # rfc 2595 $scheme{acap} = $scheme{ldap}; # rfc 2595 $scheme{nntp} = $scheme{ldap}; # rfc 4642 $scheme{ftp} = $scheme{http}; # rfc 4217 sub _verify_hostname_of_cert { my $identity = shift; my $cert = shift; my $scheme = shift || 'none'; if ( ! ref($scheme) ) { $scheme = $scheme{$scheme} or croak "scheme $scheme not defined"; } return 1 if ! %$scheme; # 'none' my $commonName = $dispatcher{cn}->($cert); my @altNames = $dispatcher{subjectAltNames}->($cert); if ( my $sub = $scheme->{callback} ) { return $sub->($identity,$commonName,@altNames); } my $ipn; if ( CAN_IPV6 and $identity =~m{:} ) { $ipn = IO::Socket::SSL::inet_pton(IO::Socket::SSL::AF_INET6,$identity) or croak "'$identity' is not IPv6, but neither IPv4 nor hostname"; } elsif ( $identity =~m{^\d+\.\d+\.\d+\.\d+$} ) { $ipn = IO::Socket::SSL::inet_aton( $identity ) or croak "'$identity' is not IPv4, but neither IPv6 nor hostname"; } else { if ( $identity =~m{[^a-zA-Z0-9_.\-]} ) { $identity =~m{\0} and croak("name '$identity' has \\0 byte"); $identity = IO::Socket::SSL::idn_to_ascii($identity) or croak "Warning: Given name '$identity' could not be converted to IDNA!"; } } my $check_name = sub { my ($name,$identity,$wtyp) = @_; $wtyp ||= ''; my $pattern; if ( $wtyp eq 'anywhere' and $name =~m{^([a-zA-Z0-9_\-]*)\*(.+)} ) { $pattern = qr{^\Q$1\E[a-zA-Z0-9_\-]*\Q$2\E$}i; } elsif ( $wtyp eq 'leftmost' and $name =~m{^\*(\..+)$} ) { $pattern = qr{^[a-zA-Z0-9_\-]*\Q$1\E$}i; } else { $pattern = qr{^\Q$name\E$}i; } return $identity =~ $pattern; }; my $alt_dnsNames = 0; while (@altNames) { my ($type, $name) = splice (@altNames, 0, 2); if ( $ipn and $type == GEN_IPADD ) { return 1 if $ipn eq $name; } elsif ( ! $ipn and $type == GEN_DNS ) { $name =~s/\s+$//; $name =~s/^\s+//; $alt_dnsNames++; $check_name->($name,$identity,$scheme->{wildcards_in_alt}) and return 1; } } if ( ! $ipn and ( $scheme->{check_cn} eq 'always' or $scheme->{check_cn} eq 'when_only' and !$alt_dnsNames)) { $check_name->($commonName,$identity,$scheme->{wildcards_in_cn}) and return 1; } return 0; # no match } } EOP eval { require IO::Socket::SSL }; if ( $INC{"IO/Socket/SSL.pm"} ) { eval $prog; die $@ if $@; } 1; # ########################################################################### # End HTTP::Micro package # ########################################################################### # ########################################################################### # VersionCheck package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/VersionCheck.pm # t/lib/VersionCheck.t # See https://launchpad.net/percona-toolkit for more information. 
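#
# In short: version_check() below records when each instance was last
# checked in a small state file, one "instance_id,unix_timestamp" line per
# instance (see version_check_file(), get_instances_to_check() and
# update_check_times()), and re-checks an instance only after
# version_check_time_limit() seconds have passed (one day). The exchange
# with the server is plain text as well: the GET response is parsed as
# "item;type;vars" lines by parse_server_response(), and the POST body is
# built as "id;item;version" lines by encode_client_response(). Roughly
# (placeholders only, not real output):
#
#   state file line:  <instance_id>,<unix_timestamp>
#   POST body line:   <instance_id>;<item>;<version>
#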
# ########################################################################### { package VersionCheck; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; local $Data::Dumper::Indent = 1; local $Data::Dumper::Sortkeys = 1; local $Data::Dumper::Quotekeys = 0; use Digest::MD5 qw(md5_hex); use Sys::Hostname qw(hostname); use File::Basename qw(); use File::Spec; use FindBin qw(); eval { require Percona::Toolkit; require HTTP::Micro; }; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my @vc_dirs = ( '/etc/percona', '/etc/percona-toolkit', '/tmp', "$home", ); { my $file = 'percona-version-check'; sub version_check_file { foreach my $dir ( @vc_dirs ) { if ( -d $dir && -w $dir ) { PTDEBUG && _d('Version check file', $file, 'in', $dir); return $dir . '/' . $file; } } PTDEBUG && _d('Version check file', $file, 'in', $ENV{PWD}); return $file; # in the CWD } } sub version_check_time_limit { return 60 * 60 * 24; # one day } sub version_check { my (%args) = @_; my $instances = $args{instances} || []; my $instances_to_check; PTDEBUG && _d('FindBin::Bin:', $FindBin::Bin); if ( !$args{force} ) { if ( $FindBin::Bin && (-d "$FindBin::Bin/../.bzr" || -d "$FindBin::Bin/../../.bzr" || -d "$FindBin::Bin/../.git" || -d "$FindBin::Bin/../../.git" ) ) { PTDEBUG && _d("$FindBin::Bin/../.bzr disables --version-check"); return; } } eval { foreach my $instance ( @$instances ) { my ($name, $id) = get_instance_id($instance); $instance->{name} = $name; $instance->{id} = $id; } push @$instances, { name => 'system', id => 0 }; $instances_to_check = get_instances_to_check( instances => $instances, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); PTDEBUG && _d(scalar @$instances_to_check, 'instances to check'); return unless @$instances_to_check; my $protocol = 'https'; eval { require IO::Socket::SSL; }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); PTDEBUG && _d("SSL not available, won't run version_check"); return; } PTDEBUG && _d('Using', $protocol); my $advice = pingback( instances => $instances_to_check, protocol => $protocol, url => $args{url} # testing || $ENV{PERCONA_VERSION_CHECK_URL} # testing || "$protocol://v.percona.com", ); if ( $advice ) { PTDEBUG && _d('Advice:', Dumper($advice)); if ( scalar @$advice > 1) { print "\n# " . scalar @$advice . " software updates are " . "available:\n"; } else { print "\n# A software update is available:\n"; } print join("\n", map { "# * $_" } @$advice), "\n\n"; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Version check failed:', $EVAL_ERROR); } if ( @$instances_to_check ) { eval { update_check_times( instances => $instances_to_check, vc_file => $args{vc_file}, # testing now => $args{now}, # testing ); }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error updating version check file:', $EVAL_ERROR); } } if ( $ENV{PTDEBUG_VERSION_CHECK} ) { warn "Exiting because the PTDEBUG_VERSION_CHECK " . 
"environment variable is defined.\n"; exit 255; } return; } sub get_instances_to_check { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); if ( !-f $vc_file ) { PTDEBUG && _d('Version check file', $vc_file, 'does not exist;', 'version checking all instances'); return $instances; } open my $fh, '<', $vc_file or die "Cannot open $vc_file: $OS_ERROR"; chomp(my $file_contents = do { local $/ = undef; <$fh> }); PTDEBUG && _d('Version check file', $vc_file, 'contents:', $file_contents); close $fh; my %last_check_time_for = $file_contents =~ /^([^,]+),(.+)$/mg; my $check_time_limit = version_check_time_limit(); my @instances_to_check; foreach my $instance ( @$instances ) { my $last_check_time = $last_check_time_for{ $instance->{id} }; PTDEBUG && _d('Intsance', $instance->{id}, 'last checked', $last_check_time, 'now', $now, 'diff', $now - ($last_check_time || 0), 'hours until next check', sprintf '%.2f', ($check_time_limit - ($now - ($last_check_time || 0))) / 3600); if ( !defined $last_check_time || ($now - $last_check_time) >= $check_time_limit ) { PTDEBUG && _d('Time to check', Dumper($instance)); push @instances_to_check, $instance; } } return \@instances_to_check; } sub update_check_times { my (%args) = @_; my $instances = $args{instances}; my $now = $args{now} || int(time); my $vc_file = $args{vc_file} || version_check_file(); PTDEBUG && _d('Updating last check time:', $now); my %all_instances = map { $_->{id} => { name => $_->{name}, ts => $now } } @$instances; if ( -f $vc_file ) { open my $fh, '<', $vc_file or die "Cannot read $vc_file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; foreach my $line ( split("\n", ($contents || '')) ) { my ($id, $ts) = split(',', $line); if ( !exists $all_instances{$id} ) { $all_instances{$id} = { ts => $ts }; # original ts, not updated } } } open my $fh, '>', $vc_file or die "Cannot write to $vc_file: $OS_ERROR"; foreach my $id ( sort keys %all_instances ) { PTDEBUG && _d('Updated:', $id, Dumper($all_instances{$id})); print { $fh } $id . ',' . $all_instances{$id}->{ts} . "\n"; } close $fh; return; } sub get_instance_id { my ($instance) = @_; my $dbh = $instance->{dbh}; my $dsn = $instance->{dsn}; my $sql = q{SELECT CONCAT(@@hostname, @@port)}; PTDEBUG && _d($sql); my ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $sql = q{SELECT @@hostname}; PTDEBUG && _d($sql); ($name) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { PTDEBUG && _d($EVAL_ERROR); $name = ($dsn->{h} || 'localhost') . ($dsn->{P} || 3306); } else { $sql = q{SHOW VARIABLES LIKE 'port'}; PTDEBUG && _d($sql); my (undef, $port) = eval { $dbh->selectrow_array($sql) }; PTDEBUG && _d('port:', $port); $name .= $port || ''; } } my $id = md5_hex($name); PTDEBUG && _d('MySQL instance:', $id, $name, Dumper($dsn)); return $name, $id; } sub get_uuid { my $uuid_file = '/.percona-toolkit.uuid'; foreach my $dir (@vc_dirs) { my $filename = $dir.$uuid_file; my $uuid=_read_uuid($filename); return $uuid if $uuid; } my $filename = $ENV{"HOME"} . 
$uuid_file; my $uuid = _generate_uuid(); open(my $fh, '>', $filename) or die "Could not open file '$filename' $!"; print $fh $uuid; close $fh; return $uuid; } sub _generate_uuid { return sprintf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7; } sub _read_uuid { my $filename = shift; my $fh; eval { open($fh, '<:encoding(UTF-8)', $filename); }; return if ($EVAL_ERROR); my $uuid; eval { $uuid = <$fh>; }; return if ($EVAL_ERROR); chomp $uuid; return $uuid; } sub pingback { my (%args) = @_; my @required_args = qw(url instances); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my $url = $args{url}; my $instances = $args{instances}; my $ua = $args{ua} || HTTP::Micro->new( timeout => 3 ); my $response = $ua->request('GET', $url); PTDEBUG && _d('Server response:', Dumper($response)); die "No response from GET $url" if !$response; die("GET on $url returned HTTP status $response->{status}; expected 200\n", ($response->{content} || '')) if $response->{status} != 200; die("GET on $url did not return any programs to check") if !$response->{content}; my $items = parse_server_response( response => $response->{content} ); die "Failed to parse server requested programs: $response->{content}" if !scalar keys %$items; my $versions = get_versions( items => $items, instances => $instances, ); die "Failed to get any program versions; should have at least gotten Perl" if !scalar keys %$versions; my $client_content = encode_client_response( items => $items, versions => $versions, general_id => get_uuid(), ); my $client_response = { headers => { "X-Percona-Toolkit-Tool" => File::Basename::basename($0) }, content => $client_content, }; PTDEBUG && _d('Client response:', Dumper($client_response)); $response = $ua->request('POST', $url, $client_response); PTDEBUG && _d('Server suggestions:', Dumper($response)); die "No response from POST $url $client_response" if !$response; die "POST $url returned HTTP status $response->{status}; expected 200" if $response->{status} != 200; return unless $response->{content}; $items = parse_server_response( response => $response->{content}, split_vars => 0, ); die "Failed to parse server suggestions: $response->{content}" if !scalar keys %$items; my @suggestions = map { $_->{vars} } sort { $a->{item} cmp $b->{item} } values %$items; return \@suggestions; } sub encode_client_response { my (%args) = @_; my @required_args = qw(items versions general_id); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items, $versions, $general_id) = @args{@required_args}; my @lines; foreach my $item ( sort keys %$items ) { next unless exists $versions->{$item}; if ( ref($versions->{$item}) eq 'HASH' ) { my $mysql_versions = $versions->{$item}; for my $id ( sort keys %$mysql_versions ) { push @lines, join(';', $id, $item, $mysql_versions->{$id}); } } else { push @lines, join(';', $general_id, $item, $versions->{$item}); } } my $client_response = join("\n", @lines) . 
"\n"; return $client_response; } sub parse_server_response { my (%args) = @_; my @required_args = qw(response); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($response) = @args{@required_args}; my %items = map { my ($item, $type, $vars) = split(";", $_); if ( !defined $args{split_vars} || $args{split_vars} ) { $vars = [ split(",", ($vars || '')) ]; } $item => { item => $item, type => $type, vars => $vars, }; } split("\n", $response); PTDEBUG && _d('Items:', Dumper(\%items)); return \%items; } my %sub_for_type = ( os_version => \&get_os_version, perl_version => \&get_perl_version, perl_module_version => \&get_perl_module_version, mysql_variable => \&get_mysql_variable, ); sub valid_item { my ($item) = @_; return unless $item; if ( !exists $sub_for_type{ $item->{type} } ) { PTDEBUG && _d('Invalid type:', $item->{type}); return 0; } return 1; } sub get_versions { my (%args) = @_; my @required_args = qw(items); foreach my $arg ( @required_args ) { die "I need a $arg arugment" unless $args{$arg}; } my ($items) = @args{@required_args}; my %versions; foreach my $item ( values %$items ) { next unless valid_item($item); eval { my $version = $sub_for_type{ $item->{type} }->( item => $item, instances => $args{instances}, ); if ( $version ) { chomp $version unless ref($version); $versions{$item->{item}} = $version; } }; if ( $EVAL_ERROR ) { PTDEBUG && _d('Error getting version for', Dumper($item), $EVAL_ERROR); } } return \%versions; } sub get_os_version { if ( $OSNAME eq 'MSWin32' ) { require Win32; return Win32::GetOSDisplayName(); } chomp(my $platform = `uname -s`); PTDEBUG && _d('platform:', $platform); return $OSNAME unless $platform; chomp(my $lsb_release = `which lsb_release 2>/dev/null | awk '{print \$1}'` || ''); PTDEBUG && _d('lsb_release:', $lsb_release); my $release = ""; if ( $platform eq 'Linux' ) { if ( -f "/etc/fedora-release" ) { $release = `cat /etc/fedora-release`; } elsif ( -f "/etc/redhat-release" ) { $release = `cat /etc/redhat-release`; } elsif ( -f "/etc/system-release" ) { $release = `cat /etc/system-release`; } elsif ( $lsb_release ) { $release = `$lsb_release -ds`; } elsif ( -f "/etc/lsb-release" ) { $release = `grep DISTRIB_DESCRIPTION /etc/lsb-release`; $release =~ s/^\w+="([^"]+)".+/$1/; } elsif ( -f "/etc/debian_version" ) { chomp(my $rel = `cat /etc/debian_version`); $release = "Debian $rel"; if ( -f "/etc/apt/sources.list" ) { chomp(my $code_name = `awk '/^deb/ {print \$3}' /etc/apt/sources.list | awk -F/ '{print \$1}'| awk 'BEGIN {FS="|"} {print \$1}' | sort | uniq -c | sort -rn | head -n1 | awk '{print \$2}'`); $release .= " ($code_name)" if $code_name; } } elsif ( -f "/etc/os-release" ) { # openSUSE chomp($release = `grep PRETTY_NAME /etc/os-release`); $release =~ s/^PRETTY_NAME="(.+)"$/$1/; } elsif ( `ls /etc/*release 2>/dev/null` ) { if ( `grep DISTRIB_DESCRIPTION /etc/*release 2>/dev/null` ) { $release = `grep DISTRIB_DESCRIPTION /etc/*release | head -n1`; } else { $release = `cat /etc/*release | head -n1`; } } } elsif ( $platform =~ m/(?:BSD|^Darwin)$/ ) { my $rel = `uname -r`; $release = "$platform $rel"; } elsif ( $platform eq "SunOS" ) { my $rel = `head -n1 /etc/release` || `uname -r`; $release = "$platform $rel"; } if ( !$release ) { PTDEBUG && _d('Failed to get the release, using platform'); $release = $platform; } chomp($release); $release =~ s/^"|"$//g; PTDEBUG && _d('OS version =', $release); return $release; } sub get_perl_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $version = 
sprintf '%vd', $PERL_VERSION; PTDEBUG && _d('Perl version', $version); return $version; } sub get_perl_module_version { my (%args) = @_; my $item = $args{item}; return unless $item; my $var = '$' . $item->{item} . '::VERSION'; my $version = eval "use $item->{item}; $var;"; PTDEBUG && _d('Perl version for', $var, '=', $version); return $version; } sub get_mysql_variable { return get_from_mysql( show => 'VARIABLES', @_, ); } sub get_from_mysql { my (%args) = @_; my $show = $args{show}; my $item = $args{item}; my $instances = $args{instances}; return unless $show && $item; if ( !$instances || !@$instances ) { PTDEBUG && _d('Cannot check', $item, 'because there are no MySQL instances'); return; } if ($item->{item} eq 'MySQL' && $item->{type} eq 'mysql_variable') { @{$item->{vars}} = grep { $_ eq 'version' || $_ eq 'version_comment' } @{$item->{vars}}; } my @versions; my %version_for; foreach my $instance ( @$instances ) { next unless $instance->{id}; # special system instance has id=0 my $dbh = $instance->{dbh}; local $dbh->{FetchHashKeyName} = 'NAME_lc'; my $sql = qq/SHOW $show/; PTDEBUG && _d($sql); my $rows = $dbh->selectall_hashref($sql, 'variable_name'); my @versions; foreach my $var ( @{$item->{vars}} ) { $var = lc($var); my $version = $rows->{$var}->{value}; PTDEBUG && _d('MySQL version for', $item->{item}, '=', $version, 'on', $instance->{name}); push @versions, $version; } $version_for{ $instance->{id} } = join(' ', @versions); } return \%version_for; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End VersionCheck package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_variable_advisor; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use Percona::Toolkit; use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub main { local @ARGV = @_; # set global ARGV for this package # ######################################################################## # Get configuration information. # ######################################################################## my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); my $vars_from = $o->get('source-of-variables'); # my $status_from = lc $o->get('source-of-status'); # my $slave_status_from = lc $o->get('source-of-slave-status'); my $need_dbh = $vars_from =~ m/^mysql$/i; # || $status_from eq 'mysql' etc. if ( !$o->get('help') ) { if ( $vars_from =~ m/^mysql$/i && @ARGV == 0 ) { $o->save_error("A DSN must be specified when --source-of-variables=mysql"); } } $o->usage_or_errors(); # ######################################################################### # Check that any files given exit. 
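#
# --source-of-variables accepts "mysql" (the default, which is why a DSN is
# required above in that case), "none", or the name of a file holding saved
# SHOW VARIABLES output, e.g. (from the SYNOPSIS below):
#
#   pt-variable-advisor localhost
#   pt-variable-advisor --source-of-variables vars.txt
#
# The existence check that follows is meant for the file case.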
# ######################################################################### if ( $vars_from !~ m/^mysql|none^/i ) { die "The --source-of-variables file $vars_from does not exist" unless -f $vars_from; } # ######################################################################### # Load rules from POD and plugins. # ######################################################################### my $p = new PodParser(); my $var = new VariableAdvisorRules(PodParser => $p); my $adv = new Advisor( match_type => "bool", ignore_rules => $o->get('ignore-rules'), ); $var->load_rule_info( file => __FILE__, section => 'RULES', ); $adv->load_rules($var); $adv->load_rule_info($var); # TODO: load rules from plugins # ######################################################################### # Make common modules. # ######################################################################### my $trp = new TextResultSetParser(); my %common_modules = ( OptionParser => $o, DSNParser => $dp, TextResultSetParser => $trp, ); # ########################################################################## # Connect to MySQL if any of the input sources is mysql. # ########################################################################## my ($dbh, $dsn); if ( $need_dbh ) { my $dsn_defaults = $dp->parse_options($o); $dsn = $dp->parse(shift @ARGV, $dsn_defaults); if ( $o->get('ask-pass') ) { $dsn->{p} = OptionParser::prompt_noecho("Enter password: "); } $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), {AutoCommit => 1}); $dbh->{FetchHashKeyName} = 'NAME_lc'; PTDEBUG && _d('Connected dbh', $dbh); } # ######################################################################## # Daemonize now that everything is setup and ready to work. # ######################################################################## my $daemon; if ( $o->get('daemonize') ) { $daemon = new Daemon(o=>$o); $daemon->daemonize(); PTDEBUG && _d('I am a daemon now'); } elsif ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # ######################################################################## # Do the version-check # ######################################################################## if ( $o->get('version-check') && (!$o->has('quiet') || !$o->get('quiet')) ) { VersionCheck::version_check( force => $o->got('version-check'), instances => [ ($dbh ? { dbh => $dbh, dsn => $dsn } : ()) ], ); } # ######################################################################### # Get the variables and other MySQL info to pass to rules. # ######################################################################### my $vars = get_variables( source => $vars_from, dbh => $dbh, %common_modules, ); my $mysql_version = VersionParser->new($vars->{version}); my $innodb_version = VersionParser->new($dbh)->innodb_version() if $dbh; PTDEBUG && _d("MySQL version", $mysql_version, "InnoDB version", $innodb_version); # ######################################################################### # Run rules, print advice. 
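#
# Each rule loaded above (see the VariableAdvisorRules entries earlier in
# this file) is a hashref with an "id" and a "code" sub that receives the
# variables hashref and returns 1 when the rule matches. A rough sketch of
# the evaluation that run_rules() performs (simplified, not the actual
# Advisor internals):
#
#   foreach my $rule ( @rules ) {
#       push @matched_ids, $rule->{id}
#           if $rule->{code}->(
#               variables     => $vars,            # from get_variables()
#               mysql_version => $mysql_version,
#           );
#   }
#
# print_advice() then maps each matched ID to the severity and description
# defined in the RULES section of the POD via $adv->get_rule_info().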
# ######################################################################### my ($advice) = $adv->run_rules( variables => $vars, mysql_version => $mysql_version, innodb_version => $innodb_version, %common_modules, ); print_advice( advice => $advice, Advisor => $adv, %common_modules, ); return 0; } # ########################################################################## # Subroutines # ########################################################################## # Sub: get_variables # Get SHOW VARIABLES from MySQL or a file. # # Parameters: # %args - Arguments # # Required Arguments: # source - "mysql" or a file name # # Optional Arguments: # dbh - dbh if source=="mysql" # TextResultSetParser - object if source==file # # Returns: # Hashref of SHOW /*40003 GLOBAL*/ VARIABLES values. sub get_variables { my ( %args ) = @_; my @required_args = qw(source); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($source) = @args{@required_args}; my $vars; if ( ($source || '') =~ m/^mysql$/i ) { my $dbh = $args{dbh}; die "I need a dbh argument" unless $dbh; PTDEBUG && _d("Getting variables from dbh", $dbh); my $sql = "SHOW /*40003 GLOBAL*/ VARIABLES"; PTDEBUG && _d($dbh, $sql); map { $vars->{$_->{variable_name}} = $_->{value}; } @{ $dbh->selectall_arrayref($sql, {Slice=>{}}) }; } else { my $trp = $args{TextResultSetParser}; die "I need a TextResultSetParser arg" unless $trp; PTDEBUG && _d("Getting variables from", $source); open my $fh, "<", $source or die "Cannot open $source: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; map { $vars->{$_->{Variable_name}} = $_->{Value} } @{ $trp->parse($contents) }; } return $vars; } # Sub: print_advice # Print information about rules that matched. # # Parameters: # %args - Arguments # # Required Arguments: # advice - Arrayref of rule IDs, returned by # Advisor - object # OptionParser - object sub print_advice { my ( %args ) = @_; my @required_args = qw(advice Advisor OptionParser); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($advice, $adv, $o) = @args{@required_args}; my $verbose = $o->get('verbose'); return unless scalar @$advice; foreach my $id ( @$advice ) { my $info = $adv->get_rule_info($id); my @desc = map { $_ .= '.' unless m/[.?]$/; $_; } split(/(?<=[.?])\s{1,2}/, $info->{description} || ''); $desc[1] ||= ""; # Some desc have only 1 sentence. my $desc = $verbose == 1 ? $desc[0] # terse : $verbose == 2 ? "$desc[0] $desc[1]" # fuller : $verbose > 2 ? $info->{description} # complete : ''; # none print "# ", uc $info->{severity}, " $id: $desc\n\n"; } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation # ############################################################################ =pod =head1 NAME pt-variable-advisor - Analyze MySQL variables and advise on possible problems. =head1 SYNOPSIS Usage: pt-variable-advisor [OPTIONS] [DSN] pt-variable-advisor analyzes variables and advises on possible problems. 
Get SHOW VARIABLES from localhost: pt-variable-advisor localhost Get SHOW VARIABLES output saved in vars.txt: pt-variable-advisor --source-of-variables vars.txt =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-variable-advisor examines C<SHOW VARIABLES> for bad values and settings according to the L<"RULES"> described below. It reports on variables that match the rules, so you can find bad settings in your MySQL server. At the time of this release, pt-variable-advisor only examines C<SHOW VARIABLES>, but other input sources are planned like C<SHOW STATUS> and C<SHOW SLAVE STATUS>. =head1 RULES These are the rules that pt-variable-advisor will apply to SHOW VARIABLES. Each rule has three parts: an ID, a severity, and a description. The rule's ID is a short, unique name for the rule. It usually relates to the variable that the rule examines. If a variable is examined by several rules, then the rules' IDs are numbered like "-1", "-2", "-N". The rule's severity is an indication of how important it is that this rule matched a variable. We use NOTE, WARN, and CRIT to denote these levels. The rule's description is a textual, human-readable explanation of what it means when a variable matches this rule. Depending on the verbosity of the report you generate, you will see more of the text in the description. By default, you'll see only the first sentence, which is sort of a terse synopsis of the rule's meaning. At a higher verbosity, you'll see subsequent sentences. =over =item auto_increment severity: note Are you trying to write to more than one server in a dual-master or ring replication configuration? This is potentially very dangerous and in most cases is a serious mistake. Most people's reasons for doing this are actually not valid at all. =item concurrent_insert severity: note Holes (spaces left by deletes) in MyISAM tables might never be reused. =item connect_timeout severity: note A large value of this setting can create a denial of service vulnerability. =item debug severity: crit Servers built with debugging capability should not be used in production because of the large performance impact. =item delay_key_write severity: warn MyISAM index blocks are never flushed until necessary. If there is a server crash, data corruption on MyISAM tables can be much worse than usual. =item flush severity: warn This option might decrease performance greatly. =item flush_time severity: warn This option might decrease performance greatly. =item have_bdb severity: note The BDB engine is deprecated. If you aren't using it, you should disable it with the skip_bdb option. =item init_connect severity: note The init_connect option is enabled on this server. =item init_file severity: note The init_file option is enabled on this server. =item init_slave severity: note The init_slave option is enabled on this server. =item innodb_additional_mem_pool_size severity: warn This variable generally doesn't need to be larger than 20MB. =item innodb_buffer_pool_size severity: warn The InnoDB buffer pool size is unconfigured. In a production environment it should always be configured explicitly, and the default 10MB size is not good. =item innodb_checksums severity: warn InnoDB checksums are disabled.
Your data is not protected from hardware corruption or other errors! =item innodb_doublewrite severity: warn InnoDB doublewrite is disabled. Unless you use a filesystem that protects against partial page writes, your data is not safe! =item innodb_fast_shutdown severity: warn InnoDB's shutdown behavior is not the default. This can lead to poor performance, or the need to perform crash recovery upon startup. =item innodb_flush_log_at_trx_commit-1 severity: warn InnoDB is not configured in strictly ACID mode. If there is a crash, some transactions can be lost. =item innodb_flush_log_at_trx_commit-2 severity: warn Setting innodb_flush_log_at_trx_commit to 0 has no performance benefits over setting it to 2, and more types of data loss are possible. If you are trying to change it from 1 for performance reasons, you should set it to 2 instead of 0. =item innodb_force_recovery severity: warn InnoDB is in forced recovery mode! This should be used only temporarily when recovering from data corruption or other bugs, not for normal usage. =item innodb_lock_wait_timeout severity: warn This option has an unusually long value, which can cause system overload if locks are not being released. =item innodb_log_buffer_size severity: warn The InnoDB log buffer size generally should not be set larger than 16MB. If you are doing large BLOB operations, InnoDB is not really a good choice of engines anyway. =item innodb_log_file_size severity: warn The InnoDB log file size is set to its default value, which is not usable on production systems. =item innodb_max_dirty_pages_pct severity: note The innodb_max_dirty_pages_pct is lower than the default. This can cause overly aggressive flushing and add load to the I/O system. =item flush_time severity: warn This setting is likely to cause very bad performance every flush_time seconds. =item key_buffer_size severity: warn The key buffer size is set to its default value, which is not good for most production systems. In a production environment, key_buffer_size should be larger than the default 8MB size. =item large_pages severity: note Large pages are enabled. =item locked_in_memory severity: note The server is locked in memory with --memlock. =item log_warnings-1 severity: note Log_warnings is disabled, so unusual events such as statements unsafe for replication and aborted connections will not be logged to the error log. =item log_warnings-2 severity: note Log_warnings must be set greater than 1 to log unusual events such as aborted connections. =item low_priority_updates severity: note The server is running with non-default lock priority for updates. This could cause update queries to wait unexpectedly for read queries. =item max_binlog_size severity: note The max_binlog_size is smaller than the default of 1GB. =item max_connect_errors severity: note max_connect_errors should probably be set as large as your platform allows. =item max_connections severity: warn If the server ever really has more than a thousand threads running, then the system is likely to spend more time scheduling threads than really doing useful work. This variable's value should be considered in light of your workload. =item myisam_repair_threads severity: note myisam_repair_threads > 1 enables multi-threaded repair, which is relatively untested and is still listed as beta-quality code in the official documentation. =item old_passwords severity: warn Old-style passwords are insecure. They are sent in plain text across the wire. 
=item optimizer_prune_level severity: warn The optimizer will use an exhaustive search when planning complex queries, which can cause the planning process to take a long time. =item port severity: note The server is listening on a non-default port. =item query_cache_size-1 severity: note The query cache does not scale to large sizes and can cause unstable performance when larger than 128MB, especially on multi-core machines. =item query_cache_size-2 severity: warn The query cache can cause severe performance problems when it is larger than 256MB, especially on multi-core machines. =item read_buffer_size-1 severity: note The read_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it. =item read_buffer_size-2 severity: warn The read_buffer_size variable should not be larger than 8MB. It should generally be left at its default unless an expert determines it is necessary to change it. Making it larger than 2MB can hurt performance significantly, and can make the server crash, swap to death, or just become extremely unstable. =item read_rnd_buffer_size-1 severity: note The read_rnd_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it. =item read_rnd_buffer_size-2 severity: warn The read_rnd_buffer_size variable should not be larger than 4M. It should generally be left at its default unless an expert determines it is necessary to change it. =item relay_log_space_limit severity: warn Setting relay_log_space_limit can cause replicas to stop fetching binary logs from their master immediately. This could increase the risk that your data will be lost if the master crashes. If the replicas have encountered a limit on relay log space, then it is possible that the latest transactions exist only on the master and no replica has retrieved them. =item slave_net_timeout severity: warn This variable is set too high. This is too long to wait before noticing that the connection to the master has failed and retrying. This should probably be set to 60 seconds or less. It is also a good idea to use pt-heartbeat to ensure that the connection does not appear to time out when the master is simply idle. =item slave_skip_errors severity: crit You should not set this option. If replication is having errors, you need to find and resolve the cause of that; it is likely that your slave's data is different from the master. You can find out with pt-table-checksum. =item sort_buffer_size-1 severity: note The sort_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it. =item sort_buffer_size-2 severity: note The sort_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it. Making it larger than a few MB can hurt performance significantly, and can make the server crash, swap to death, or just become extremely unstable. =item sql_notes severity: note This server is configured not to log Note level warnings to the error log. =item sync_frm severity: warn It is best to set sync_frm so that .frm files are flushed safely to disk in case of a server crash. =item tx_isolation-1 severity: note This server's transaction isolation level is non-default. =item tx_isolation-2 severity: warn Most applications should use the default REPEATABLE-READ transaction isolation level, or in a few cases READ-COMMITTED. 
=item expire_logs_days severity: warn Binary logs are enabled, but automatic purging is not enabled. If you do not purge binary logs, your disk will fill up. If you delete binary logs externally to MySQL, you will cause unwanted behaviors. Always ask MySQL to purge obsolete logs, never delete them externally. =item innodb_file_io_threads severity: note This option is useless except on Windows. =item innodb_data_file_path severity: note Auto-extending InnoDB files can consume a lot of disk space that is very difficult to reclaim later. Some people prefer to set innodb_file_per_table and allocate a fixed-size file for ibdata1. =item innodb_flush_method severity: note Most production database servers that use InnoDB should set innodb_flush_method to O_DIRECT to avoid double-buffering, unless the I/O system is very low performance. =item innodb_locks_unsafe_for_binlog severity: warn This option makes point-in-time recovery from binary logs, and replication, untrustworthy if statement-based logging is used. =item innodb_support_xa severity: warn MySQL's internal XA transaction support between InnoDB and the binary log is disabled. The binary log might not match InnoDB's state after crash recovery, and replication might drift out of sync due to out-of-order statements in the binary log. =item log_bin severity: warn Binary logging is disabled, so point-in-time recovery and replication are not possible. =item log_output severity: warn Directing log output to tables has a high performance impact. =item max_relay_log_size severity: note A custom max_relay_log_size is defined. =item myisam_recover_options severity: warn myisam_recover_options should be set to some value such as BACKUP,FORCE to ensure that table corruption is noticed. =item storage_engine severity: note The server is using a non-standard storage engine as default. =item sync_binlog severity: warn Binary logging is enabled, but sync_binlog isn't configured so that every transaction is flushed to the binary log for durability. =item tmp_table_size severity: note The effective minimum size of in-memory implicit temporary tables used internally during query execution is min(tmp_table_size, max_heap_table_size), so max_heap_table_size should be at least as large as tmp_table_size. =item old mysql version severity: warn These are the recommended minimum version for each major release: 3.23, 4.1.20, 5.0.37, 5.1.30. =item end-of-life mysql version severity: note Every release older than 5.1 is now officially end-of-life. =back =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --daemonize Fork to the background and detach from the shell. POSIX operating systems only. =item --database short form: -D; type: string Connect to this database. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --help Show help and exit. 
=item --host
short form: -h; type: string
Connect to host.
=item --ignore-rules
type: hash
Ignore these rule IDs. Specify a comma-separated list of rule IDs (e.g. LIT.001,RES.002,etc.) to ignore.
=item --password
short form: -p; type: string
Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple"
=item --pid
type: string
Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits.
=item --port
short form: -P; type: int
Port number to use for connection.
=item --set-vars
type: Array
Set the MySQL variables in this comma-separated list of C<variable=value> pairs. By default, the tool sets:
=for comment ignore-pt-internal-value MAGIC_set_vars
wait_timeout=10000
Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the default value of C<10000>. The tool prints a warning and continues if a variable cannot be set.
=item --socket
short form: -S; type: string
Socket file to use for connection.
=item --source-of-variables
type: string; default: mysql
Read C<SHOW GLOBAL VARIABLES> from this source. Possible values are "mysql", "none" or a file name. If "mysql" is specified then you must also specify a DSN on the command line.
=item --user
short form: -u; type: string
User for login if not current user.
=item --verbose
short form: -v; cumulative: yes; default: 1
Increase verbosity of output. At the default level of verbosity, the program prints only the first sentence of each rule's description. At higher levels, the program prints more of the description.
=item --version
Show version and exit.
=item --[no]version-check
default: yes
Check for the latest version of Percona Toolkit, MySQL, and other programs. This is a standard "check for updates automatically" feature, with two additional features. First, the tool checks its own version and also the versions of the following software: operating system, Percona Monitoring and Management (PMM), MySQL, Perl, MySQL driver for Perl (DBD::mysql), and Percona Toolkit. Second, it checks for and warns about versions with known problems. For example, MySQL 5.5.25 had a critical bug and was re-released as 5.5.25a. A secure connection to Percona’s Version Check database server is made to perform these checks. Each request is logged by the server, including software version numbers and the unique ID of the checked system. The ID is generated by the Percona Toolkit installation script or when the Version Check database call is made for the first time. Any updates or known problems are printed to STDOUT before the tool's normal output. This feature should never interfere with the normal operation of the tool. For more information, visit L<https://www.percona.com/doc/percona-toolkit/LATEST/version-check.html>.
=back
=head1 DSN OPTIONS
These DSN options are used to create a DSN. Each option is given like C<option=value>. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L<percona-toolkit> manpage for full details.
=over
=item * A
dsn: charset; copy: yes
Default character set.
=item * D
dsn: database; copy: yes
Default database.
=item * F
dsn: mysql_read_default_file; copy: yes
Only read default options from the given file
=item * h
dsn: host; copy: yes
Connect to host.
=item * p
dsn: password; copy: yes
Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple"
=item * P
dsn: port; copy: yes
Port number to use for connection.
=item * S
dsn: mysql_socket; copy: yes
Socket file to use for connection.
=item * u
dsn: user; copy: yes
User for login if not current user.
=back
=head1 ENVIRONMENT
The environment variable C<PTDEBUG> enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like:
PTDEBUG=1 pt-variable-advisor ... > FILE 2>&1
Be careful: debugging output is voluminous and can generate several megabytes of output.
=head1 SYSTEM REQUIREMENTS
You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl.
=head1 BUGS
For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report:
=over
=item * Complete command-line used to run the tool
=item * Tool L<"--version">
=item * MySQL version of all servers involved
=item * Output from the tool including STDERR
=item * Input files (log/dump/config files, etc.)
=back
If possible, include debugging output by running the tool with C<PTDEBUG>; see L<"ENVIRONMENT">.
=head1 DOWNLOADING
Visit L<http://www.percona.com/software/percona-toolkit/> to download the latest release of Percona Toolkit. Or, get the latest release from the command line:
wget percona.com/get/percona-toolkit.tar.gz
wget percona.com/get/percona-toolkit.rpm
wget percona.com/get/percona-toolkit.deb
You can also get individual tools from the latest release:
wget percona.com/get/TOOL
Replace C<TOOL> with the name of any tool.
=head1 AUTHORS
Baron Schwartz and Daniel Nichter
=head1 ABOUT PERCONA TOOLKIT
This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L<http://www.percona.com/software/> to learn about other free, open-source software from Percona.
=head1 COPYRIGHT, LICENSE, AND WARRANTY
This program is copyright 2010-2018 Percona LLC and/or its affiliates. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
=head1 VERSION
pt-variable-advisor 3.1.0
=cut
percona-toolkit-3.1/bin/pt-visual-explain000775 001750 001750 00000310221 13535723560 021725 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/env perl
# This program is part of Percona Toolkit: http://www.percona.com/software/
# See "COPYRIGHT, LICENSE, AND WARRANTY" at the end of this file for legal
# notices and disclaimers.
use strict;
use warnings FATAL => 'all';
# This tool is "fat-packed": most of its dependent modules are embedded
# in this file.
Setting %INC to this file for each module makes Perl aware # of this so it will not try to load the module from @INC. See the tool's # documentation for a full list of dependencies. BEGIN { $INC{$_} = __FILE__ for map { (my $pkg = "$_.pm") =~ s!::!/!g; $pkg } (qw( ExplainParser ExplainTree OptionParser DSNParser Daemon )); } # ########################################################################### # Converts text (e.g. saved output) to a "recordset" -- an array of hashrefs # -- just like EXPLAIN does for selectall_arrayref({}). # ########################################################################### package ExplainParser; use strict; use warnings FATAL => 'all'; sub new { bless {}, shift; } sub parse_tabular { my ( $text, @cols ) = @_; my %row; my @vals = $text =~ m/\| +([^\|]*?)(?= +\|)/msg; return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub parse_tab_sep { my ( $text, @cols ) = @_; my %row; my @vals = split(/\t/, $text); return (undef, \@vals) unless @cols; @row{@cols} = @vals; return (\%row, undef); } sub parse_vertical { my ( $text, @cols ) = @_; my %row = $text =~ m/^ *(\w+): ([^\n]*) *$/msg; return (\%row, undef); } sub parse { my ($self, $text) = @_; my $started = 0; my $lines = 0; my @cols = (); my @result = (); # Detect which kind of input it is my ( $line_re, $vals_sub ); if ( $text =~ m/^\+---/m ) { # standard "tabular" output $line_re = qr/^(\| .*)[\r\n]+/m; $vals_sub = \&parse_tabular; } elsif ( $text =~ m/^id\tselect_type\t/m ) { # tab-separated $line_re = qr/^(.*?\t.*)[\r\n]+/m; $vals_sub = \&parse_tab_sep; } elsif ( $text =~ m/\*\*\* 1. row/ ) { # "vertical" output $line_re = qr/^( *.*?^ *Extra:[^\n]*$)/ms; $vals_sub = \&parse_vertical; } if ( $line_re ) { # Pull it apart into lines and parse them. LINE: foreach my $line ( $text =~ m/$line_re/g ) { my ($row, $cols) = $vals_sub->($line, @cols); if ( $row ) { foreach my $key ( keys %$row ) { if ( !$row->{$key} || $row->{$key} eq 'NULL' ) { $row->{$key} = undef; } } push @result, $row; } else { @cols = @$cols; } } } return \@result; } # ########################################################################### # Converts output of EXPLAIN into a human-readable tree. # ########################################################################### package ExplainTree; use List::Util qw(max); use Data::Dumper; sub new { my ( $class, $options ) = @_; my $self = bless {}, $class; $self->load_options($options); return $self; } sub load_options { my ( $self, $options ) = @_; if ( $options && ref $options eq 'HASH' ) { @{$self}{keys %$options} = values %$options; } else { delete @{$self}{keys %$self}; } } sub parse { my ( $self, $text, $options ) = @_; return $self->process(ExplainParser->new->parse($text), $options); } # The main method that turns a result set into a tree. Accepts an arrayref of # hashrefs which correspond to the rows in EXPLAIN. See the ALGORITHM in the # documentation for a small novel about this process. sub process { my ( $self, $rows, $options ) = @_; $self->load_options($options); return unless ref $rows eq 'ARRAY' && @$rows; # Pre-process and sanity check the rows. my @rows = @$rows; foreach my $i ( 0 .. $#rows ) { my $row = $rows[$i]; $row->{rowid} = $i; $row->{Extra} ||= ''; # The source code says if there are too many tables unioned together, the # table column will get truncated, like "". If this # happens, I've got to bail out. I'm not going to check all the source # code for all versions, but in 5.0 it looks like I can get this to happen # around table 20. 
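# (Illustration only, not output from a real server: such a truncated value
# would look something like "<union1,2,3,...>"; the exact form and the point
# at which it is cut off depend on the MySQL version.)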
die "UNION has too many tables: $row->{table}" if $row->{table} && $row->{table} =~ m/\./;
if ( !defined $row->{id} ) {
if ( $row->{table} && (my ($id) = $row->{table} =~ m/^<union(\d+)/) ) { $row->{id} = $id; }
else { die "Unexpected NULL in id column, please report as a bug"; }
} }
# Re-order the rows so all references are forward.
my %union_for = map { $_->{id} => $_ } grep { $_->{select_type} eq 'UNION RESULT' } @rows;
my $last_id = 0;
my @reordered;
foreach my $row ( grep { $_->{select_type} ne 'UNION RESULT' } @rows ) {
if ( $last_id != $row->{id} && $union_for{$row->{id}} ) { push @reordered, $union_for{$row->{id}}; }
push @reordered, $row;
$last_id = $row->{id};
}
# Process the rows recursively.
my $tree = $self->build_query_plan(@reordered);
return $tree;
}
sub build_query_plan {
my ( $self, @rows ) = @_;
if ( !@rows ) { die "I got no rows"; }
# Is it a UNION RESULT? Split it up into sub-scopes and recurse.
if ( $rows[0]->{select_type} eq 'UNION RESULT' ) {
my $row = shift @rows;
my @kids;
my @ids = $row->{table} =~ m/(\d+)/g;
my $enclosing_scope;
if ( $rows[0]->{select_type} =~ m/SUBQUERY/ ) { $enclosing_scope = $rows[0]; }
foreach my $i ( 0 .. $#ids ) {
my $start = $self->index_of($ids[$i], @rows);
my $end = $i < $#ids ? $self->index_of($ids[$i + 1], @rows) : @rows;
push @kids, $self->build_query_plan(splice(@rows, $start, $end - $start));
}
$row->{children} = [ @kids ];
$row->{table} = "union(" . join(',', map { $self->recursive_table_name($_) || '' } @kids) . ")";
my $tree = $self->transform($row);
if ( $enclosing_scope ) { my $node = $self->transform($enclosing_scope); $node->{children} = [ $tree ]; $tree = $node; }
return $tree;
}
# Are there DERIVED tables? If so, find their children and pull them out of the
# list under them.
while ( my ($der) = grep { $_->{table} && $_->{table} =~ m/^<derived\d+>$/ } @rows ) {
# Figure out the start and end of the derived scope.
my ($der_id) = $der->{table} =~ m/^<derived(\d+)>$/;
my $start = $self->index_of($der_id, @rows);
my $end = $start;
while ( $end < @rows && $rows[$end]->{id} >= $der_id ) { $end++; }
# Get the rows that belong to this scope and recurse.
my @enclosed_scope = splice(@rows, $start, $end - $start);
my $kids = $self->build_query_plan(@enclosed_scope);
$der->{children} = [$kids];
$der->{table} = "derived(" . ($self->recursive_table_name($kids) || '') . ")";
}
# Handle the "normal case." For each node, if the id is the same as the last
# one, JOIN and continue. If the id is greater, it's a subquery, so should
# be recursed.
# But, filesort/temporary have to be handled specially, because they appear
# in the first row, even if they are done later. Here are the cases,
# according to http://s.petrunia.net/blog/?p=24:
# ... MySQL has three ways to run a join and produce ordered output:
# Method                              EXPLAIN output
# ##################################  ####################################
# Use index-based access method that  no mention of filesort
# produces ordered output
# ----------------------------------  ------------------------------------
# Use filesort() on 1st non-constant  "Using filesort" in the first row
# table
# ----------------------------------  ------------------------------------
# Put join result into a temporary    "Using temporary; Using filesort" in
# table and use filesort() on it      the first row
# ----------------------------------  ------------------------------------
my $first = shift(@rows);
# This is "case three" above.
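# To illustrate the two special cases handled below (hypothetical rows, not
# taken from a real server):
#  - case two:   the first row is a const/system table whose Extra says
#                "Using filesort"; the filesort is moved to the first
#                non-constant table before the tree is built.
#  - case three: the first row's Extra says "Using temporary; Using filesort";
#                that is noted here and, once the whole join tree is built,
#                the tree is wrapped in TEMPORARY and Filesort nodes.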
my $is_temp_filesort; if ( $first->{Extra} =~ m/Using temporary; Using filesort/ ) { # The entire join is being placed into a temporary table and filesorted, # so I'll make a note of that and apply it afterwards. In the meantime I # must remove mention of it from the node so the node doesn't get extra # transformations in transform(). $is_temp_filesort = 1; $first->{Extra} =~ s/Using temporary; Using filesort(?:; )?//; } # This is "case two" above. Must find first non-constant table and move # the filesort() there. elsif ( $first->{Extra} =~ m/Using filesort/ && $first->{type} =~ m/^(?:system|const)$/ ) { my ( $first_non_const ) = grep { $_->{type} !~ m/^(?:system|const)$/ } @rows; if ( $first_non_const ) { $first->{Extra} =~ s/Using filesort(?:; )?//; $first_non_const->{Extra} .= '; Using filesort'; } } my $scope = $first->{id}; my $tree = $self->transform($first); my $i = 0; while ( $i < @rows ) { my $row = $rows[$i]; if ( $row->{id} == $scope ) { $tree = { type => 'JOIN', children => [ $tree, $self->transform($row) ], }; $i++; } else { # It's another kind of "join". Find the enclosing scope boundaries and # recurse. The scope starts at $i. my $end = $i; while ( $end < @rows && $rows[$end]->{id} >= $row->{id} ) { $end++; } my @enclosed_scope = splice(@rows, $i, $end - $i); $tree = { type => $row->{select_type}, children => [ $tree, $self->build_query_plan(@enclosed_scope) ], }; # Don't increment the pointer because I just removed rows from @rows. # $i++ } } if ( $is_temp_filesort ) { $tree = $self->filesort( $self->temporary($tree, $self->recursive_table_name($tree))); } return $tree; } sub transform { my ( $self, $row ) = @_; my $sub = $row->{type}; # ################################################################## # Dispatch to a class method to generate the tree. # ################################################################## my $no_matching_row = join('|', "Impossible (?:WHERE|HAVING)(?: noticed after reading const tables)?", 'No matching.*row', '(?:unique|const) row not found', ); my $node = $sub ? $self->$sub($row) : $row->{Extra} =~ m/No tables/ ? { type => ( $row->{select_type} !~ m/^(?:PRIMARY|SIMPLE)$/ ? $row->{select_type} : 'DUAL') } : $row->{Extra} =~ m/(?:$no_matching_row)/i ? { type => 'IMPOSSIBLE' } : $row->{Extra} =~ m/optimized away/ ? { type => 'CONSTANT' } : die "Can't handle " . Dumper($row); my ($warn) = $row->{Extra} =~ m/($no_matching_row)/; if ( $warn ) { $node->{warning} = $warn; } # ################################################################## # Apply other tree transformations. # ################################################################## if ( $row->{Extra} =~ m/Using where/ ) { $node = { type => 'Filter with WHERE', children => [$node], }; } if ( $row->{Extra} =~ m/Using join buffer/ ) { $node = { type => 'Join buffer', children => [$node], }; } if ( $row->{Extra} =~ m/Distinct|Not exists/ ) { $node = { type => 'Distinct/Not-Exists', children => [$node], }; } if ( $row->{Extra} =~ m/Range checked for each record \(\w+ map: ([^\)]+)\)/ ) { # (index map: N) is a bitmap of which indexes are used. For example: # 0x5 base 16 (or base 10) # 0101 base 2 # 4321 position of bits # 3 1 indexes used my $bitmap = eval "int($1)"; # Hex to decimal if it begins with '0x' $bitmap = unpack("B32", pack("N", $bitmap)); # Convert into binary string of 1/0 $bitmap =~ s/^0+//; # Remove leading zeros $bitmap = reverse $bitmap; # Iterate from left-to-right my $possible_keys = join(',', grep { substr($bitmap, $_ - 1, 1) } ( 1 .. 
length($bitmap) )); $node = { type => 'Re-evaluate indexes each row', possible_keys => $possible_keys, children => [$node], }; } if ( $row->{Extra} =~ m/Using filesort/ ) { $node = $self->filesort($node); } if ( $row->{Extra} =~ m/Using temporary/ ) { $node = $self->temporary($node, $row->{table}, 1); } # Add some data that will help me keep track of nodes as I manipulate # them later $node->{id} = $row->{id}; $node->{rowid} = $row->{rowid}; return $node; } sub index_of { my ( $self, $id, @rows ) = @_; my $i = 0; foreach my $row ( @rows ) { if ( $row->{id} && $row->{id} == $id ) { return $i; } $i++; } die "Can't find row $id in " . join(',', map { $_->{id} || '' } @rows); } sub pretty_print { my ( $self, $node, $prefix ) = @_; $prefix ||= ''; my $branch = $prefix ? substr($prefix, 0, length($prefix) -3) . '+- ' : ''; my $output = $branch . $node->{type} . "\n"; my @kids; if ( $node->{children} ) { @kids = reverse @{$node->{children}}; } my $suffix = (@kids > 1) ? '| ' : ' '; foreach my $thing ( qw(table key partitions possible_keys method key_len ref rows warning) ) { if ( defined $node->{$thing} ) { $output .= $prefix . sprintf('%-14s %s', $thing, $node->{$thing}) . "\n"; } } my $last_child = pop @kids; foreach my $child ( @kids ) { $output .= $self->pretty_print($child, $prefix . $suffix); } if ( $last_child ) { $output .= $self->pretty_print($last_child, $prefix . ' '); } return $output; } ############################################################################# # Each method in this section corresponds to a value you will find in the 'type' # column in EXPLAIN. ############################################################################# sub ALL { my ( $self, $row ) = @_; return { type => 'Table scan', rows => $row->{rows}, children => [$self->table($row)], }; } sub fulltext { my ( $self, $row ) = @_; return $self->index_access($row, 'Fulltext scan'); } sub range { my ( $self, $row ) = @_; return $self->index_access($row, 'Index range scan'); } sub index { my ( $self, $row ) = @_; return $self->index_access($row, 'Index scan'); } sub eq_ref { my ( $self, $row ) = @_; return $self->index_access($row, 'Unique index lookup'); } sub ref { my ( $self, $row ) = @_; return $self->index_access($row, 'Index lookup'); } sub ref_or_null { my ( $self, $row ) = @_; return $self->index_access($row, 'Index lookup with extra null lookup'); } sub const { my ( $self, $row ) = @_; return $self->index_access($row, 'Constant index lookup'); } sub system { my ( $self, $row ) = @_; return { type => 'Constant table access', rows => $row->{rows}, children => [$self->table($row)], }; } sub unique_subquery { my ( $self, $row ) = @_; return $self->index_access($row, 'Unique subquery'); } sub index_subquery { my ( $self, $row ) = @_; return $self->index_access($row, 'Index subquery'); } # From the manual: "The Index Merge method is used to retrieve rows with # several range scans and to merge their results into one." Therefore each # index access should be shown as an index range scan. The unions and # intersections can be recursive, as in # union(intersect(key1,key2),intersect(key3,key4)) sub index_merge { my ( $self, $row ) = @_; my ( $merge_spec ) = $row->{Extra} =~ m/Using ((?:intersect|union|sort_union)\(.*?\))(?=;|$)/; my ($merge, $num) = $self->recurse_index_merge($row, $merge_spec, 0); # index_merge_bookmark_lookup note: # From the manual, "If the used indexes don't cover all columns used in the # query, full rows are retrieved only when the range conditions for all # used keys are satisfied." 
So a bookmark lookup shouldn't be shown for # all indexes; it should be shown from the merge results. return $self->bookmark_lookup($merge, $row); } # ########################################################################### # Helper subroutines. # ########################################################################### sub recursive_table_name { my ( $self, $node ) = @_; if ( $node->{table} ) { return $node->{table}; } if ( $node->{key} ) { my ( $table ) = $node->{key} =~ m/(.*?)->/; return $table; } if ( $node->{type} eq 'Bookmark lookup' ) { return $node->{children}->[1]->{table}; } if ( $node->{type} eq 'IMPOSSIBLE' ) { return ''; } if ( $node->{children} ) { return join(',', grep { $_ } map { $self->recursive_table_name($_) } @{$node->{children}}); } } # $num is the number of nodes to the left of this node in a depth-first # traversal. It lets me figure out which value goes in key_len. my $bal; # Workaround for issue 90 (Variable "$bal" will not stay shared). sub recurse_index_merge { my ( $self, $row, $spec, $num ) = @_; my ($type, $args) = $spec =~ m/(intersect|union|sort_union)\((.*)\)$/; my @children; # See 'man perlre' and search for 'matches a parenthesized group'. $bal = qr/ \( (?: (?> [^()]+ ) # Non-parens without backtracking | (??{ $bal }) # Group with matching parens )* \) /x; # Extract a thing, followed by balanced parentheses. foreach my $child ( $args =~ m/(\w+$bal)/g ) { my ( $subtree, $num ) = $self->recurse_index_merge($row, $child, $num); push @children, $subtree; } if ( !@children ) { # Recursion base case; $args is an index list foreach my $idx ( split(/,/, $args) ) { my $index_scan = $self->index_access($row, 'Index range scan', $idx); $index_scan->{key_len} = ($row->{key_len} =~ m/(\d+)/g)[$num++]; push @children, $index_scan; } } return ( { type => 'Index merge', method => $type, rows => $row->{rows}, children => \@children, }, $num ); } sub table { my ( $self, $row ) = @_; my $node = { type => ($row->{table} && $row->{table} =~ m/^(derived|union)\(/) ? uc $1 : 'Table', table => $row->{table}, possible_keys => $row->{possible_keys}, partitions => $row->{partitions}, }; if ( $row->{children} ) { $node->{children} = $row->{children}; } return $node; } sub bookmark_lookup { my ( $self, $node, $row ) = @_; if ( $row->{Extra} =~ m/Using index/ || ( $self->{clustered} && $row->{key} && $row->{key} eq 'PRIMARY' )) { return $node; } return { type => 'Bookmark lookup', children => [ $node, $self->table($row) ], }; } sub filesort { my ( $self, $node ) = @_; return { type => 'Filesort', children => [$node], }; } sub temporary { my ( $self, $node, $table_name, $is_scan ) = @_; $node = { type => 'TEMPORARY', table => "temporary($table_name)", possible_keys => undef, partitions => undef, children => [$node], }; if ( $is_scan ) { $node = { type => 'Table scan', rows => undef, children => [ $node ], }; } return $node; } sub index_access { my ( $self, $row, $type, $key ) = @_; my $node = { type => $type, key => $row->{table} . '->' . ( $key || $row->{key} ), possible_keys => $row->{possible_keys}, partitions => $row->{partitions}, key_len => $row->{key_len}, 'ref' => $row->{ref}, rows => $row->{rows}, }; if ( $row->{Extra} =~ m/Full scan on NULL key/ ) { $node->{warning} = 'Full scan on NULL key'; } if ( $row->{Extra} =~ m/Using index for group-by/ ) { $node->{type} = 'Loose index scan'; } # See index_merge_bookmark_lookup note above. 
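# (That is: for index_merge, the per-index range scans produced by
# recurse_index_merge() are not wrapped individually; index_merge() adds a
# single bookmark lookup, if one is needed, around the merged result.)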
if ( $row->{type} ne 'index_merge' ) { $node = $self->bookmark_lookup($node, $row); } return $node; } # ########################################################################### # OptionParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/OptionParser.pm # t/lib/OptionParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package OptionParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use List::Util qw(max); use Getopt::Long; use Data::Dumper; my $POD_link_re = '[LC]<"?([^">]+)"?>'; sub new { my ( $class, %args ) = @_; my @required_args = qw(); foreach my $arg ( @required_args ) { die "I need a $arg argument" unless $args{$arg}; } my ($program_name) = $PROGRAM_NAME =~ m/([.A-Za-z-]+)$/; $program_name ||= $PROGRAM_NAME; my $home = $ENV{HOME} || $ENV{HOMEPATH} || $ENV{USERPROFILE} || '.'; my %attributes = ( 'type' => 1, 'short form' => 1, 'group' => 1, 'default' => 1, 'cumulative' => 1, 'negatable' => 1, 'repeatable' => 1, # means it can be specified more than once ); my $self = { head1 => 'OPTIONS', # These args are used internally skip_rules => 0, # to instantiate another Option- item => '--(.*)', # Parser obj that parses the attributes => \%attributes, # DSN OPTIONS section. Tools parse_attributes => \&_parse_attribs, # don't tinker with these args. %args, strict => 1, # disabled by a special rule program_name => $program_name, opts => {}, got_opts => 0, short_opts => {}, defaults => {}, groups => {}, allowed_groups => {}, errors => [], rules => [], # desc of rules for --help mutex => [], # rule: opts are mutually exclusive atleast1 => [], # rule: at least one opt is required disables => {}, # rule: opt disables other opts defaults_to => {}, # rule: opt defaults to value of other opt DSNParser => undef, default_files => [ "/etc/percona-toolkit/percona-toolkit.conf", "/etc/percona-toolkit/$program_name.conf", "$home/.percona-toolkit.conf", "$home/.$program_name.conf", ], types => { string => 's', # standard Getopt type int => 'i', # standard Getopt type float => 'f', # standard Getopt type Hash => 'H', # hash, formed from a comma-separated list hash => 'h', # hash as above, but only if a value is given Array => 'A', # array, similar to Hash array => 'a', # array, similar to hash DSN => 'd', # DSN size => 'z', # size with kMG suffix (powers of 2^10) time => 'm', # time, with an optional suffix of s/h/m/d }, }; return bless $self, $class; } sub get_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; my @specs = $self->_pod_to_specs($file); $self->_parse_specs(@specs); open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $contents = do { local $/ = undef; <$fh> }; close $fh; if ( $contents =~ m/^=head1 DSN OPTIONS/m ) { PTDEBUG && _d('Parsing DSN OPTIONS'); my $dsn_attribs = { dsn => 1, copy => 1, }; my $parse_dsn_attribs = sub { my ( $self, $option, $attribs ) = @_; map { my $val = $attribs->{$_}; if ( $val ) { $val = $val eq 'yes' ? 1 : $val eq 'no' ? 0 : $val; $attribs->{$_} = $val; } } keys %$attribs; return { key => $option, %$attribs, }; }; my $dsn_o = new OptionParser( description => 'DSN OPTIONS', head1 => 'DSN OPTIONS', dsn => 0, # XXX don't infinitely recurse! 
item => '\* (.)', # key opts are a single character skip_rules => 1, # no rules before opts attributes => $dsn_attribs, parse_attributes => $parse_dsn_attribs, ); my @dsn_opts = map { my $opts = { key => $_->{spec}->{key}, dsn => $_->{spec}->{dsn}, copy => $_->{spec}->{copy}, desc => $_->{desc}, }; $opts; } $dsn_o->_pod_to_specs($file); $self->{DSNParser} = DSNParser->new(opts => \@dsn_opts); } if ( $contents =~ m/^=head1 VERSION\n\n^(.+)$/m ) { $self->{version} = $1; PTDEBUG && _d($self->{version}); } return; } sub DSNParser { my ( $self ) = @_; return $self->{DSNParser}; }; sub get_defaults_files { my ( $self ) = @_; return @{$self->{default_files}}; } sub _pod_to_specs { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; open my $fh, '<', $file or die "Cannot open $file: $OS_ERROR"; my @specs = (); my @rules = (); my $para; local $INPUT_RECORD_SEPARATOR = ''; while ( $para = <$fh> ) { next unless $para =~ m/^=head1 $self->{head1}/; last; } while ( $para = <$fh> ) { last if $para =~ m/^=over/; next if $self->{skip_rules}; chomp $para; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; PTDEBUG && _d('Option rule:', $para); push @rules, $para; } die "POD has no $self->{head1} section" unless $para; do { if ( my ($option) = $para =~ m/^=item $self->{item}/ ) { chomp $para; PTDEBUG && _d($para); my %attribs; $para = <$fh>; # read next paragraph, possibly attributes if ( $para =~ m/: / ) { # attributes $para =~ s/\s+\Z//g; %attribs = map { my ( $attrib, $val) = split(/: /, $_); die "Unrecognized attribute for --$option: $attrib" unless $self->{attributes}->{$attrib}; ($attrib, $val); } split(/; /, $para); if ( $attribs{'short form'} ) { $attribs{'short form'} =~ s/-//; } $para = <$fh>; # read next paragraph, probably short help desc } else { PTDEBUG && _d('Option has no attributes'); } $para =~ s/\s+\Z//g; $para =~ s/\s+/ /g; $para =~ s/$POD_link_re/$1/go; $para =~ s/\.(?:\n.*| [A-Z].*|\Z)//s; PTDEBUG && _d('Short help:', $para); die "No description after option spec $option" if $para =~ m/^=item/; if ( my ($base_option) = $option =~ m/^\[no\](.*)/ ) { $option = $base_option; $attribs{'negatable'} = 1; } push @specs, { spec => $self->{parse_attributes}->($self, $option, \%attribs), desc => $para . (defined $attribs{default} ? " (default $attribs{default})" : ''), group => ($attribs{'group'} ? $attribs{'group'} : 'default'), attributes => \%attribs }; } while ( $para = <$fh> ) { last unless $para; if ( $para =~ m/^=head1/ ) { $para = undef; # Can't 'last' out of a do {} block. last; } last if $para =~ m/^=item /; } } while ( $para ); die "No valid specs in $self->{head1}" unless @specs; close $fh; return @specs, @rules; } sub _parse_specs { my ( $self, @specs ) = @_; my %disables; # special rule that requires deferred checking foreach my $opt ( @specs ) { if ( ref $opt ) { # It's an option spec, not a rule. 
PTDEBUG && _d('Parsing opt spec:', map { ($_, '=>', $opt->{$_}) } keys %$opt); my ( $long, $short ) = $opt->{spec} =~ m/^([\w-]+)(?:\|([^!+=]*))?/; if ( !$long ) { die "Cannot parse long option from spec $opt->{spec}"; } $opt->{long} = $long; die "Duplicate long option --$long" if exists $self->{opts}->{$long}; $self->{opts}->{$long} = $opt; if ( length $long == 1 ) { PTDEBUG && _d('Long opt', $long, 'looks like short opt'); $self->{short_opts}->{$long} = $long; } if ( $short ) { die "Duplicate short option -$short" if exists $self->{short_opts}->{$short}; $self->{short_opts}->{$short} = $long; $opt->{short} = $short; } else { $opt->{short} = undef; } $opt->{is_negatable} = $opt->{spec} =~ m/!/ ? 1 : 0; $opt->{is_cumulative} = $opt->{spec} =~ m/\+/ ? 1 : 0; $opt->{is_repeatable} = $opt->{attributes}->{repeatable} ? 1 : 0; $opt->{is_required} = $opt->{desc} =~ m/required/ ? 1 : 0; $opt->{group} ||= 'default'; $self->{groups}->{ $opt->{group} }->{$long} = 1; $opt->{value} = undef; $opt->{got} = 0; my ( $type ) = $opt->{spec} =~ m/=(.)/; $opt->{type} = $type; PTDEBUG && _d($long, 'type:', $type); $opt->{spec} =~ s/=./=s/ if ( $type && $type =~ m/[HhAadzm]/ ); if ( (my ($def) = $opt->{desc} =~ m/default\b(?: ([^)]+))?/) ) { $self->{defaults}->{$long} = defined $def ? $def : 1; PTDEBUG && _d($long, 'default:', $def); } if ( $long eq 'config' ) { $self->{defaults}->{$long} = join(',', $self->get_defaults_files()); } if ( (my ($dis) = $opt->{desc} =~ m/(disables .*)/) ) { $disables{$long} = $dis; PTDEBUG && _d('Deferring check of disables rule for', $opt, $dis); } $self->{opts}->{$long} = $opt; } else { # It's an option rule, not a spec. PTDEBUG && _d('Parsing rule:', $opt); push @{$self->{rules}}, $opt; my @participants = $self->_get_participants($opt); my $rule_ok = 0; if ( $opt =~ m/mutually exclusive|one and only one/ ) { $rule_ok = 1; push @{$self->{mutex}}, \@participants; PTDEBUG && _d(@participants, 'are mutually exclusive'); } if ( $opt =~ m/at least one|one and only one/ ) { $rule_ok = 1; push @{$self->{atleast1}}, \@participants; PTDEBUG && _d(@participants, 'require at least one'); } if ( $opt =~ m/default to/ ) { $rule_ok = 1; $self->{defaults_to}->{$participants[0]} = $participants[1]; PTDEBUG && _d($participants[0], 'defaults to', $participants[1]); } if ( $opt =~ m/restricted to option groups/ ) { $rule_ok = 1; my ($groups) = $opt =~ m/groups ([\w\s\,]+)/; my @groups = split(',', $groups); %{$self->{allowed_groups}->{$participants[0]}} = map { s/\s+//; $_ => 1; } @groups; } if( $opt =~ m/accepts additional command-line arguments/ ) { $rule_ok = 1; $self->{strict} = 0; PTDEBUG && _d("Strict mode disabled by rule"); } die "Unrecognized option rule: $opt" unless $rule_ok; } } foreach my $long ( keys %disables ) { my @participants = $self->_get_participants($disables{$long}); $self->{disables}->{$long} = \@participants; PTDEBUG && _d('Option', $long, 'disables', @participants); } return; } sub _get_participants { my ( $self, $str ) = @_; my @participants; foreach my $long ( $str =~ m/--(?:\[no\])?([\w-]+)/g ) { die "Option --$long does not exist while processing rule $str" unless exists $self->{opts}->{$long}; push @participants, $long; } PTDEBUG && _d('Participants for', $str, ':', @participants); return @participants; } sub opts { my ( $self ) = @_; my %opts = %{$self->{opts}}; return %opts; } sub short_opts { my ( $self ) = @_; my %short_opts = %{$self->{short_opts}}; return %short_opts; } sub set_defaults { my ( $self, %defaults ) = @_; $self->{defaults} = {}; foreach my $long ( 
keys %defaults ) { die "Cannot set default for nonexistent option $long" unless exists $self->{opts}->{$long}; $self->{defaults}->{$long} = $defaults{$long}; PTDEBUG && _d('Default val for', $long, ':', $defaults{$long}); } return; } sub get_defaults { my ( $self ) = @_; return $self->{defaults}; } sub get_groups { my ( $self ) = @_; return $self->{groups}; } sub _set_option { my ( $self, $opt, $val ) = @_; my $long = exists $self->{opts}->{$opt} ? $opt : exists $self->{short_opts}->{$opt} ? $self->{short_opts}->{$opt} : die "Getopt::Long gave a nonexistent option: $opt"; $opt = $self->{opts}->{$long}; if ( $opt->{is_cumulative} ) { $opt->{value}++; } elsif ( ($opt->{type} || '') eq 's' && $val =~ m/^--?(.+)/ ) { my $next_opt = $1; if ( exists $self->{opts}->{$next_opt} || exists $self->{short_opts}->{$next_opt} ) { $self->save_error("--$long requires a string value"); return; } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } } else { if ($opt->{is_repeatable}) { push @{$opt->{value}} , $val; } else { $opt->{value} = $val; } } $opt->{got} = 1; PTDEBUG && _d('Got option', $long, '=', $val); } sub get_opts { my ( $self ) = @_; foreach my $long ( keys %{$self->{opts}} ) { $self->{opts}->{$long}->{got} = 0; $self->{opts}->{$long}->{value} = exists $self->{defaults}->{$long} ? $self->{defaults}->{$long} : $self->{opts}->{$long}->{is_cumulative} ? 0 : undef; } $self->{got_opts} = 0; $self->{errors} = []; if ( @ARGV && $ARGV[0] =~/^--config=/ ) { $ARGV[0] = substr($ARGV[0],9); $ARGV[0] =~ s/^'(.*)'$/$1/; $ARGV[0] =~ s/^"(.*)"$/$1/; $self->_set_option('config', shift @ARGV); } if ( @ARGV && $ARGV[0] eq "--config" ) { shift @ARGV; $self->_set_option('config', shift @ARGV); } if ( $self->has('config') ) { my @extra_args; foreach my $filename ( split(',', $self->get('config')) ) { eval { push @extra_args, $self->_read_config_file($filename); }; if ( $EVAL_ERROR ) { if ( $self->got('config') ) { die $EVAL_ERROR; } elsif ( PTDEBUG ) { _d($EVAL_ERROR); } } } unshift @ARGV, @extra_args; } Getopt::Long::Configure('no_ignore_case', 'bundling'); GetOptions( map { $_->{spec} => sub { $self->_set_option(@_); } } grep { $_->{long} ne 'config' } # --config is handled specially above. values %{$self->{opts}} ) or $self->save_error('Error parsing options'); if ( exists $self->{opts}->{version} && $self->{opts}->{version}->{got} ) { if ( $self->{version} ) { print $self->{version}, "\n"; exit 0; } else { print "Error parsing version. See the VERSION section of the tool's documentation.\n"; exit 1; } } if ( @ARGV && $self->{strict} ) { $self->save_error("Unrecognized command-line options @ARGV"); } foreach my $mutex ( @{$self->{mutex}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$mutex; if ( @set > 1 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$mutex}[ 0 .. scalar(@$mutex) - 2] ) . ' and --'.$self->{opts}->{$mutex->[-1]}->{long} . ' are mutually exclusive.'; $self->save_error($err); } } foreach my $required ( @{$self->{atleast1}} ) { my @set = grep { $self->{opts}->{$_}->{got} } @$required; if ( @set == 0 ) { my $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } @{$required}[ 0 .. scalar(@$required) - 2] ) .' 
or --'.$self->{opts}->{$required->[-1]}->{long}; $self->save_error("Specify at least one of $err"); } } $self->_check_opts( keys %{$self->{opts}} ); $self->{got_opts} = 1; return; } sub _check_opts { my ( $self, @long ) = @_; my $long_last = scalar @long; while ( @long ) { foreach my $i ( 0..$#long ) { my $long = $long[$i]; next unless $long; my $opt = $self->{opts}->{$long}; if ( $opt->{got} ) { if ( exists $self->{disables}->{$long} ) { my @disable_opts = @{$self->{disables}->{$long}}; map { $self->{opts}->{$_}->{value} = undef; } @disable_opts; PTDEBUG && _d('Unset options', @disable_opts, 'because', $long,'disables them'); } if ( exists $self->{allowed_groups}->{$long} ) { my @restricted_groups = grep { !exists $self->{allowed_groups}->{$long}->{$_} } keys %{$self->{groups}}; my @restricted_opts; foreach my $restricted_group ( @restricted_groups ) { RESTRICTED_OPT: foreach my $restricted_opt ( keys %{$self->{groups}->{$restricted_group}} ) { next RESTRICTED_OPT if $restricted_opt eq $long; push @restricted_opts, $restricted_opt if $self->{opts}->{$restricted_opt}->{got}; } } if ( @restricted_opts ) { my $err; if ( @restricted_opts == 1 ) { $err = "--$restricted_opts[0]"; } else { $err = join(', ', map { "--$self->{opts}->{$_}->{long}" } grep { $_ } @restricted_opts[0..scalar(@restricted_opts) - 2] ) . ' or --'.$self->{opts}->{$restricted_opts[-1]}->{long}; } $self->save_error("--$long is not allowed with $err"); } } } elsif ( $opt->{is_required} ) { $self->save_error("Required option --$long must be specified"); } $self->_validate_type($opt); if ( $opt->{parsed} ) { delete $long[$i]; } else { PTDEBUG && _d('Temporarily failed to parse', $long); } } die "Failed to parse options, possibly due to circular dependencies" if @long == $long_last; $long_last = @long; } return; } sub _validate_type { my ( $self, $opt ) = @_; return unless $opt; if ( !$opt->{type} ) { $opt->{parsed} = 1; return; } my $val = $opt->{value}; if ( $val && $opt->{type} eq 'm' ) { # type time PTDEBUG && _d('Parsing option', $opt->{long}, 'as a time value'); my ( $prefix, $num, $suffix ) = $val =~ m/([+-]?)(\d+)([a-z])?$/; if ( !$suffix ) { my ( $s ) = $opt->{desc} =~ m/\(suffix (.)\)/; $suffix = $s || 's'; PTDEBUG && _d('No suffix given; using', $suffix, 'for', $opt->{long}, '(value:', $val, ')'); } if ( $suffix =~ m/[smhd]/ ) { $val = $suffix eq 's' ? $num # Seconds : $suffix eq 'm' ? $num * 60 # Minutes : $suffix eq 'h' ? $num * 3600 # Hours : $num * 86400; # Days $opt->{value} = ($prefix || '') . 
$val; PTDEBUG && _d('Setting option', $opt->{long}, 'to', $val); } else { $self->save_error("Invalid time suffix for --$opt->{long}"); } } elsif ( $val && $opt->{type} eq 'd' ) { # type DSN PTDEBUG && _d('Parsing option', $opt->{long}, 'as a DSN'); my $prev = {}; my $from_key = $self->{defaults_to}->{ $opt->{long} }; if ( $from_key ) { PTDEBUG && _d($opt->{long}, 'DSN copies from', $from_key, 'DSN'); if ( $self->{opts}->{$from_key}->{parsed} ) { $prev = $self->{opts}->{$from_key}->{value}; } else { PTDEBUG && _d('Cannot parse', $opt->{long}, 'until', $from_key, 'parsed'); return; } } my $defaults = $self->{DSNParser}->parse_options($self); if (!$opt->{attributes}->{repeatable}) { $opt->{value} = $self->{DSNParser}->parse($val, $prev, $defaults); } else { my $values = []; for my $dsn_string (@$val) { push @$values, $self->{DSNParser}->parse($dsn_string, $prev, $defaults); } $opt->{value} = $values; } } elsif ( $val && $opt->{type} eq 'z' ) { # type size PTDEBUG && _d('Parsing option', $opt->{long}, 'as a size value'); $self->_parse_size($opt, $val); } elsif ( $opt->{type} eq 'H' || (defined $val && $opt->{type} eq 'h') ) { $opt->{value} = { map { $_ => 1 } split(/(?{type} eq 'A' || (defined $val && $opt->{type} eq 'a') ) { $opt->{value} = [ split(/(?{long}, 'type', $opt->{type}, 'value', $val); } $opt->{parsed} = 1; return; } sub get { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{value}; } sub got { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; return $self->{opts}->{$long}->{got}; } sub has { my ( $self, $opt ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); return defined $long ? exists $self->{opts}->{$long} : 0; } sub set { my ( $self, $opt, $val ) = @_; my $long = (length $opt == 1 ? $self->{short_opts}->{$opt} : $opt); die "Option $opt does not exist" unless $long && exists $self->{opts}->{$long}; $self->{opts}->{$long}->{value} = $val; return; } sub save_error { my ( $self, $error ) = @_; push @{$self->{errors}}, $error; return; } sub errors { my ( $self ) = @_; return $self->{errors}; } sub usage { my ( $self ) = @_; warn "No usage string is set" unless $self->{usage}; # XXX return "Usage: " . ($self->{usage} || '') . "\n"; } sub descr { my ( $self ) = @_; warn "No description string is set" unless $self->{description}; # XXX my $descr = ($self->{description} || $self->{program_name} || '') . " For more details, please use the --help option, " . "or try 'perldoc $PROGRAM_NAME' " . 
"for complete documentation."; $descr = join("\n", $descr =~ m/(.{0,80})(?:\s+|$)/g) unless $ENV{DONT_BREAK_LINES}; $descr =~ s/ +$//mg; return $descr; } sub usage_or_errors { my ( $self, $file, $return ) = @_; $file ||= $self->{file} || __FILE__; if ( !$self->{description} || !$self->{usage} ) { PTDEBUG && _d("Getting description and usage from SYNOPSIS in", $file); my %synop = $self->_parse_synopsis($file); $self->{description} ||= $synop{description}; $self->{usage} ||= $synop{usage}; PTDEBUG && _d("Description:", $self->{description}, "\nUsage:", $self->{usage}); } if ( $self->{opts}->{help}->{got} ) { print $self->print_usage() or die "Cannot print usage: $OS_ERROR"; exit 0 unless $return; } elsif ( scalar @{$self->{errors}} ) { print $self->print_errors() or die "Cannot print errors: $OS_ERROR"; exit 1 unless $return; } return; } sub print_errors { my ( $self ) = @_; my $usage = $self->usage() . "\n"; if ( (my @errors = @{$self->{errors}}) ) { $usage .= join("\n * ", 'Errors in command-line arguments:', @errors) . "\n"; } return $usage . "\n" . $self->descr(); } sub print_usage { my ( $self ) = @_; die "Run get_opts() before print_usage()" unless $self->{got_opts}; my @opts = values %{$self->{opts}}; my $maxl = max( map { length($_->{long}) # option long name + ($_->{is_negatable} ? 4 : 0) # "[no]" if opt is negatable + ($_->{type} ? 2 : 0) # "=x" where x is the opt type } @opts); my $maxs = max(0, map { length($_) + ($self->{opts}->{$_}->{is_negatable} ? 4 : 0) + ($self->{opts}->{$_}->{type} ? 2 : 0) } values %{$self->{short_opts}}); my $lcol = max($maxl, ($maxs + 3)); my $rcol = 80 - $lcol - 6; my $rpad = ' ' x ( 80 - $rcol ); $maxs = max($lcol - 3, $maxs); my $usage = $self->descr() . "\n" . $self->usage(); my @groups = reverse sort grep { $_ ne 'default'; } keys %{$self->{groups}}; push @groups, 'default'; foreach my $group ( reverse @groups ) { $usage .= "\n".($group eq 'default' ? 'Options' : $group).":\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } grep { $_->{group} eq $group } @opts ) { my $long = $opt->{is_negatable} ? "[no]$opt->{long}" : $opt->{long}; my $short = $opt->{short}; my $desc = $opt->{desc}; $long .= $opt->{type} ? "=$opt->{type}" : ""; if ( $opt->{type} && $opt->{type} eq 'm' ) { my ($s) = $desc =~ m/\(suffix (.)\)/; $s ||= 's'; $desc =~ s/\s+\(suffix .\)//; $desc .= ". Optional suffix s=seconds, m=minutes, h=hours, " . "d=days; if no suffix, $s is used."; } $desc = join("\n$rpad", grep { $_ } $desc =~ m/(.{0,$rcol}(?!\W))(?:\s+|(?<=\W)|$)/g); $desc =~ s/ +$//mg; if ( $short ) { $usage .= sprintf(" --%-${maxs}s -%s %s\n", $long, $short, $desc); } else { $usage .= sprintf(" --%-${lcol}s %s\n", $long, $desc); } } } $usage .= "\nOption types: s=string, i=integer, f=float, h/H/a/A=comma-separated list, d=DSN, z=size, m=time\n"; if ( (my @rules = @{$self->{rules}}) ) { $usage .= "\nRules:\n\n"; $usage .= join("\n", map { " $_" } @rules) . "\n"; } if ( $self->{DSNParser} ) { $usage .= "\n" . $self->{DSNParser}->usage(); } $usage .= "\nOptions and values after processing arguments:\n\n"; foreach my $opt ( sort { $a->{long} cmp $b->{long} } @opts ) { my $val = $opt->{value}; my $type = $opt->{type} || ''; my $bool = $opt->{spec} =~ m/^[\w-]+(?:\|[\w-])?!?$/; $val = $bool ? ( $val ? 'TRUE' : 'FALSE' ) : !defined $val ? '(No value)' : $type eq 'd' ? $self->{DSNParser}->as_string($val) : $type =~ m/H|h/ ? join(',', sort keys %$val) : $type =~ m/A|a/ ? 
join(',', @$val) : $val; $usage .= sprintf(" --%-${lcol}s %s\n", $opt->{long}, $val); } return $usage; } sub prompt_noecho { shift @_ if ref $_[0] eq __PACKAGE__; my ( $prompt ) = @_; local $OUTPUT_AUTOFLUSH = 1; print STDERR $prompt or die "Cannot print: $OS_ERROR"; my $response; eval { require Term::ReadKey; Term::ReadKey::ReadMode('noecho'); chomp($response = ); Term::ReadKey::ReadMode('normal'); print "\n" or die "Cannot print: $OS_ERROR"; }; if ( $EVAL_ERROR ) { die "Cannot read response; is Term::ReadKey installed? $EVAL_ERROR"; } return $response; } sub _read_config_file { my ( $self, $filename ) = @_; open my $fh, "<", $filename or die "Cannot open $filename: $OS_ERROR\n"; my @args; my $prefix = '--'; my $parse = 1; LINE: while ( my $line = <$fh> ) { chomp $line; next LINE if $line =~ m/^\s*(?:\#|\;|$)/; $line =~ s/\s+#.*$//g; $line =~ s/^\s+|\s+$//g; if ( $line eq '--' ) { $prefix = ''; $parse = 0; next LINE; } if ( $parse && !$self->has('version-check') && $line =~ /version-check/ ) { next LINE; } if ( $parse && (my($opt, $arg) = $line =~ m/^\s*([^=\s]+?)(?:\s*=\s*(.*?)\s*)?$/) ) { push @args, grep { defined $_ } ("$prefix$opt", $arg); } elsif ( $line =~ m/./ ) { push @args, $line; } else { die "Syntax error in file $filename at line $INPUT_LINE_NUMBER"; } } close $fh; return @args; } sub read_para_after { my ( $self, $file, $regex ) = @_; open my $fh, "<", $file or die "Can't open $file: $OS_ERROR"; local $INPUT_RECORD_SEPARATOR = ''; my $para; while ( $para = <$fh> ) { next unless $para =~ m/^=pod$/m; last; } while ( $para = <$fh> ) { next unless $para =~ m/$regex/; last; } $para = <$fh>; chomp($para); close $fh or die "Can't close $file: $OS_ERROR"; return $para; } sub clone { my ( $self ) = @_; my %clone = map { my $hashref = $self->{$_}; my $val_copy = {}; foreach my $key ( keys %$hashref ) { my $ref = ref $hashref->{$key}; $val_copy->{$key} = !$ref ? $hashref->{$key} : $ref eq 'HASH' ? { %{$hashref->{$key}} } : $ref eq 'ARRAY' ? [ @{$hashref->{$key}} ] : $hashref->{$key}; } $_ => $val_copy; } qw(opts short_opts defaults); foreach my $scalar ( qw(got_opts) ) { $clone{$scalar} = $self->{$scalar}; } return bless \%clone; } sub _parse_size { my ( $self, $opt, $val ) = @_; if ( lc($val || '') eq 'null' ) { PTDEBUG && _d('NULL size for', $opt->{long}); $opt->{value} = 'null'; return; } my %factor_for = (k => 1_024, M => 1_048_576, G => 1_073_741_824); my ($pre, $num, $factor) = $val =~ m/^([+-])?(\d+)([kMG])?$/; if ( defined $num ) { if ( $factor ) { $num *= $factor_for{$factor}; PTDEBUG && _d('Setting option', $opt->{y}, 'to num', $num, '* factor', $factor); } $opt->{value} = ($pre || '') . $num; } else { $self->save_error("Invalid size for --$opt->{long}: $val"); } return; } sub _parse_attribs { my ( $self, $option, $attribs ) = @_; my $types = $self->{types}; return $option . ($attribs->{'short form'} ? '|' . $attribs->{'short form'} : '' ) . ($attribs->{'negatable'} ? '!' : '' ) . ($attribs->{'cumulative'} ? '+' : '' ) . ($attribs->{'type'} ? '=' . 
$types->{$attribs->{type}} : '' ); } sub _parse_synopsis { my ( $self, $file ) = @_; $file ||= $self->{file} || __FILE__; PTDEBUG && _d("Parsing SYNOPSIS in", $file); local $INPUT_RECORD_SEPARATOR = ''; # read paragraphs open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; my $para; 1 while defined($para = <$fh>) && $para !~ m/^=head1 SYNOPSIS/; die "$file does not contain a SYNOPSIS section" unless $para; my @synop; for ( 1..2 ) { # 1 for the usage, 2 for the description my $para = <$fh>; push @synop, $para; } close $fh; PTDEBUG && _d("Raw SYNOPSIS text:", @synop); my ($usage, $desc) = @synop; die "The SYNOPSIS section in $file is not formatted properly" unless $usage && $desc; $usage =~ s/^\s*Usage:\s+(.+)/$1/; chomp $usage; $desc =~ s/\n/ /g; $desc =~ s/\s{2,}/ /g; $desc =~ s/\. ([A-Z][a-z])/. $1/g; $desc =~ s/\s+$//; return ( description => $desc, usage => $usage, ); }; sub set_vars { my ($self, $file) = @_; $file ||= $self->{file} || __FILE__; my %user_vars; my $user_vars = $self->has('set-vars') ? $self->get('set-vars') : undef; if ( $user_vars ) { foreach my $var_val ( @$user_vars ) { my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $user_vars{$var} = { val => $val, default => 0, }; } } my %default_vars; my $default_vars = $self->read_para_after($file, qr/MAGIC_set_vars/); if ( $default_vars ) { %default_vars = map { my $var_val = $_; my ($var, $val) = $var_val =~ m/([^\s=]+)=(\S+)/; die "Invalid --set-vars value: $var_val\n" unless $var && defined $val; $var => { val => $val, default => 1, }; } split("\n", $default_vars); } my %vars = ( %default_vars, # first the tool's defaults %user_vars, # then the user's which overwrite the defaults ); PTDEBUG && _d('--set-vars:', Dumper(\%vars)); return \%vars; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } if ( PTDEBUG ) { print STDERR '# ', $^X, ' ', $], "\n"; if ( my $uname = `uname -a` ) { $uname =~ s/\s+/ /g; print STDERR "# $uname\n"; } print STDERR '# Arguments: ', join(' ', map { my $a = "_[$_]_"; $a =~ s/\n/\n# /g; $a; } @ARGV), "\n"; } 1; } # ########################################################################### # End OptionParser package # ########################################################################### # ########################################################################### # DSNParser package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/DSNParser.pm # t/lib/DSNParser.t # See https://launchpad.net/percona-toolkit for more information. # ########################################################################### { package DSNParser; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use Data::Dumper; $Data::Dumper::Indent = 0; $Data::Dumper::Quotekeys = 0; my $dsn_sep = qr/(? {} # h, P, u, etc. Should come from DSN OPTIONS section in POD. }; foreach my $opt ( @{$args{opts}} ) { if ( !$opt->{key} || !$opt->{desc} ) { die "Invalid DSN option: ", Dumper($opt); } PTDEBUG && _d('DSN option:', join(', ', map { "$_=" . (defined $opt->{$_} ? 
($opt->{$_} || '') : 'undef') } keys %$opt ) ); $self->{opts}->{$opt->{key}} = { dsn => $opt->{dsn}, desc => $opt->{desc}, copy => $opt->{copy} || 0, }; } return bless $self, $class; } sub prop { my ( $self, $prop, $value ) = @_; if ( @_ > 2 ) { PTDEBUG && _d('Setting', $prop, 'property'); $self->{$prop} = $value; } return $self->{$prop}; } sub parse { my ( $self, $dsn, $prev, $defaults ) = @_; if ( !$dsn ) { PTDEBUG && _d('No DSN to parse'); return; } PTDEBUG && _d('Parsing', $dsn); $prev ||= {}; $defaults ||= {}; my %given_props; my %final_props; my $opts = $self->{opts}; foreach my $dsn_part ( split($dsn_sep, $dsn) ) { $dsn_part =~ s/\\,/,/g; if ( my ($prop_key, $prop_val) = $dsn_part =~ m/^(.)=(.*)$/ ) { $given_props{$prop_key} = $prop_val; } else { PTDEBUG && _d('Interpreting', $dsn_part, 'as h=', $dsn_part); $given_props{h} = $dsn_part; } } foreach my $key ( keys %$opts ) { PTDEBUG && _d('Finding value for', $key); $final_props{$key} = $given_props{$key}; if ( !defined $final_props{$key} && defined $prev->{$key} && $opts->{$key}->{copy} ) { $final_props{$key} = $prev->{$key}; PTDEBUG && _d('Copying value for', $key, 'from previous DSN'); } if ( !defined $final_props{$key} ) { $final_props{$key} = $defaults->{$key}; PTDEBUG && _d('Copying value for', $key, 'from defaults'); } } foreach my $key ( keys %given_props ) { die "Unknown DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless exists $opts->{$key}; } if ( (my $required = $self->prop('required')) ) { foreach my $key ( keys %$required ) { die "Missing required DSN option '$key' in '$dsn'. For more details, " . "please use the --help option, or try 'perldoc $PROGRAM_NAME' " . "for complete documentation." unless $final_props{$key}; } } return \%final_props; } sub parse_options { my ( $self, $o ) = @_; die 'I need an OptionParser object' unless ref $o eq 'OptionParser'; my $dsn_string = join(',', map { "$_=".$o->get($_); } grep { $o->has($_) && $o->get($_) } keys %{$self->{opts}} ); PTDEBUG && _d('DSN string made from options:', $dsn_string); return $self->parse($dsn_string); } sub as_string { my ( $self, $dsn, $props ) = @_; return $dsn unless ref $dsn; my @keys = $props ? @$props : sort keys %$dsn; return join(',', map { "$_=" . ($_ eq 'p' ? '...' : $dsn->{$_}) } grep { exists $self->{opts}->{$_} && exists $dsn->{$_} && defined $dsn->{$_} } @keys); } sub usage { my ( $self ) = @_; my $usage = "DSN syntax is key=value[,key=value...] Allowable DSN keys:\n\n" . " KEY COPY MEANING\n" . " === ==== =============================================\n"; my %opts = %{$self->{opts}}; foreach my $key ( sort keys %opts ) { $usage .= " $key " . ($opts{$key}->{copy} ? 'yes ' : 'no ') . ($opts{$key}->{desc} || '[No description]') . "\n"; } $usage .= "\n If the DSN is a bareword, the word is treated as the 'h' key.\n"; return $usage; } sub get_cxn_params { my ( $self, $info ) = @_; my $dsn; my %opts = %{$self->{opts}}; my $driver = $self->prop('dbidriver') || ''; if ( $driver eq 'Pg' ) { $dsn = 'DBI:Pg:dbname=' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(h P)); } else { $dsn = 'DBI:mysql:' . ( $info->{D} || '' ) . ';' . join(';', map { "$opts{$_}->{dsn}=$info->{$_}" } grep { defined $info->{$_} } qw(F h P S A)) . ';mysql_read_default_group=client' . ($info->{L} ? 
';mysql_local_infile=1' : ''); } PTDEBUG && _d($dsn); return ($dsn, $info->{u}, $info->{p}); } sub fill_in_dsn { my ( $self, $dbh, $dsn ) = @_; my $vars = $dbh->selectall_hashref('SHOW VARIABLES', 'Variable_name'); my ($user, $db) = $dbh->selectrow_array('SELECT USER(), DATABASE()'); $user =~ s/@.*//; $dsn->{h} ||= $vars->{hostname}->{Value}; $dsn->{S} ||= $vars->{'socket'}->{Value}; $dsn->{P} ||= $vars->{port}->{Value}; $dsn->{u} ||= $user; $dsn->{D} ||= $db; } sub get_dbh { my ( $self, $cxn_string, $user, $pass, $opts ) = @_; $opts ||= {}; my $defaults = { AutoCommit => 0, RaiseError => 1, PrintError => 0, ShowErrorStatement => 1, mysql_enable_utf8 => ($cxn_string =~ m/charset=utf8/i ? 1 : 0), }; @{$defaults}{ keys %$opts } = values %$opts; if (delete $defaults->{L}) { # L for LOAD DATA LOCAL INFILE, our own extension $defaults->{mysql_local_infile} = 1; } if ( $opts->{mysql_use_result} ) { $defaults->{mysql_use_result} = 1; } if ( !$have_dbi ) { die "Cannot connect to MySQL because the Perl DBI module is not " . "installed or not found. Run 'perl -MDBI' to see the directories " . "that Perl searches for DBI. If DBI is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbi-perl\n" . " RHEL/CentOS yum install perl-DBI\n" . " OpenSolaris pkg install pkg:/SUNWpmdbi\n"; } my $dbh; my $tries = 2; while ( !$dbh && $tries-- ) { PTDEBUG && _d($cxn_string, ' ', $user, ' ', $pass, join(', ', map { "$_=>$defaults->{$_}" } keys %$defaults )); $dbh = eval { DBI->connect($cxn_string, $user, $pass, $defaults) }; if ( !$dbh && $EVAL_ERROR ) { if ( $EVAL_ERROR =~ m/locate DBD\/mysql/i ) { die "Cannot connect to MySQL because the Perl DBD::mysql module is " . "not installed or not found. Run 'perl -MDBD::mysql' to see " . "the directories that Perl searches for DBD::mysql. If " . "DBD::mysql is not installed, try:\n" . " Debian/Ubuntu apt-get install libdbd-mysql-perl\n" . " RHEL/CentOS yum install perl-DBD-MySQL\n" . " OpenSolaris pgk install pkg:/SUNWapu13dbd-mysql\n"; } elsif ( $EVAL_ERROR =~ m/not a compiled character set|character set utf8/ ) { PTDEBUG && _d('Going to try again without utf8 support'); delete $defaults->{mysql_enable_utf8}; } if ( !$tries ) { die $EVAL_ERROR; } } } if ( $cxn_string =~ m/mysql/i ) { my $sql; if ( my ($charset) = $cxn_string =~ m/charset=([\w]+)/ ) { $sql = qq{/*!40101 SET NAMES "$charset"*/}; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting NAMES to $charset: $EVAL_ERROR"; } PTDEBUG && _d('Enabling charset for STDOUT'); if ( $charset eq 'utf8' ) { binmode(STDOUT, ':utf8') or die "Can't binmode(STDOUT, ':utf8'): $OS_ERROR"; } else { binmode(STDOUT) or die "Can't binmode(STDOUT): $OS_ERROR"; } } if ( my $vars = $self->prop('set-vars') ) { $self->set_vars($dbh, $vars); } $sql = 'SELECT @@SQL_MODE'; PTDEBUG && _d($dbh, $sql); my ($sql_mode) = eval { $dbh->selectrow_array($sql) }; if ( $EVAL_ERROR ) { die "Error getting the current SQL_MODE: $EVAL_ERROR"; } $sql = 'SET @@SQL_QUOTE_SHOW_CREATE = 1' . '/*!40101, @@SQL_MODE=\'NO_AUTO_VALUE_ON_ZERO' . ($sql_mode ? ",$sql_mode" : '') . '\'*/'; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( $EVAL_ERROR ) { die "Error setting SQL_QUOTE_SHOW_CREATE, SQL_MODE" . ($sql_mode ? " and $sql_mode" : '') . 
": $EVAL_ERROR"; } } PTDEBUG && _d('DBH info: ', $dbh, Dumper($dbh->selectrow_hashref( 'SELECT DATABASE(), CONNECTION_ID(), VERSION()/*!50038 , @@hostname*/')), 'Connection info:', $dbh->{mysql_hostinfo}, 'Character set info:', Dumper($dbh->selectall_arrayref( "SHOW VARIABLES LIKE 'character_set%'", { Slice => {}})), '$DBD::mysql::VERSION:', $DBD::mysql::VERSION, '$DBI::VERSION:', $DBI::VERSION, ); return $dbh; } sub get_hostname { my ( $self, $dbh ) = @_; if ( my ($host) = ($dbh->{mysql_hostinfo} || '') =~ m/^(\w+) via/ ) { return $host; } my ( $hostname, $one ) = $dbh->selectrow_array( 'SELECT /*!50038 @@hostname, */ 1'); return $hostname; } sub disconnect { my ( $self, $dbh ) = @_; PTDEBUG && $self->print_active_handles($dbh); $dbh->disconnect; } sub print_active_handles { my ( $self, $thing, $level ) = @_; $level ||= 0; printf("# Active %sh: %s %s %s\n", ($thing->{Type} || 'undef'), "\t" x $level, $thing, (($thing->{Type} || '') eq 'st' ? $thing->{Statement} || '' : '')) or die "Cannot print: $OS_ERROR"; foreach my $handle ( grep {defined} @{ $thing->{ChildHandles} } ) { $self->print_active_handles( $handle, $level + 1 ); } } sub copy { my ( $self, $dsn_1, $dsn_2, %args ) = @_; die 'I need a dsn_1 argument' unless $dsn_1; die 'I need a dsn_2 argument' unless $dsn_2; my %new_dsn = map { my $key = $_; my $val; if ( $args{overwrite} ) { $val = defined $dsn_1->{$key} ? $dsn_1->{$key} : $dsn_2->{$key}; } else { $val = defined $dsn_2->{$key} ? $dsn_2->{$key} : $dsn_1->{$key}; } $key => $val; } keys %{$self->{opts}}; return \%new_dsn; } sub set_vars { my ($self, $dbh, $vars) = @_; return unless $vars; foreach my $var ( sort keys %$vars ) { my $val = $vars->{$var}->{val}; (my $quoted_var = $var) =~ s/_/\\_/; my ($var_exists, $current_val); eval { ($var_exists, $current_val) = $dbh->selectrow_array( "SHOW VARIABLES LIKE '$quoted_var'"); }; my $e = $EVAL_ERROR; if ( $e ) { PTDEBUG && _d($e); } if ( $vars->{$var}->{default} && !$var_exists ) { PTDEBUG && _d('Not setting default var', $var, 'because it does not exist'); next; } if ( $current_val && $current_val eq $val ) { PTDEBUG && _d('Not setting var', $var, 'because its value', 'is already', $val); next; } my $sql = "SET SESSION $var=$val"; PTDEBUG && _d($dbh, $sql); eval { $dbh->do($sql) }; if ( my $set_error = $EVAL_ERROR ) { chomp($set_error); $set_error =~ s/ at \S+ line \d+//; my $msg = "Error setting $var: $set_error"; if ( $current_val ) { $msg .= " The current value for $var is $current_val. " . "If the variable is read only (not dynamic), specify " . "--set-vars $var=$current_val to avoid this warning, " . "else manually set the variable and restart MySQL."; } warn $msg . "\n\n"; } } return; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End DSNParser package # ########################################################################### # ########################################################################### # Daemon package # This package is a copy without comments from the original. The original # with comments and its test file can be found in the Bazaar repository at, # lib/Daemon.pm # t/lib/Daemon.t # See https://launchpad.net/percona-toolkit for more information. 
# ########################################################################### { package Daemon; use strict; use warnings FATAL => 'all'; use English qw(-no_match_vars); use constant PTDEBUG => $ENV{PTDEBUG} || 0; use POSIX qw(setsid); sub new { my ( $class, %args ) = @_; foreach my $arg ( qw(o) ) { die "I need a $arg argument" unless $args{$arg}; } my $o = $args{o}; my $self = { o => $o, log_file => $o->has('log') ? $o->get('log') : undef, PID_file => $o->has('pid') ? $o->get('pid') : undef, }; check_PID_file(undef, $self->{PID_file}); PTDEBUG && _d('Daemonized child will log to', $self->{log_file}); return bless $self, $class; } sub daemonize { my ( $self ) = @_; PTDEBUG && _d('About to fork and daemonize'); defined (my $pid = fork()) or die "Cannot fork: $OS_ERROR"; if ( $pid ) { PTDEBUG && _d('Parent PID', $PID, 'exiting after forking child PID',$pid); exit; } PTDEBUG && _d('Daemonizing child PID', $PID); $self->{PID_owner} = $PID; $self->{child} = 1; POSIX::setsid() or die "Cannot start a new session: $OS_ERROR"; chdir '/' or die "Cannot chdir to /: $OS_ERROR"; $self->_make_PID_file(); $OUTPUT_AUTOFLUSH = 1; PTDEBUG && _d('Redirecting STDIN to /dev/null'); close STDIN; open STDIN, '/dev/null' or die "Cannot reopen STDIN to /dev/null: $OS_ERROR"; if ( $self->{log_file} ) { PTDEBUG && _d('Redirecting STDOUT and STDERR to', $self->{log_file}); close STDOUT; open STDOUT, '>>', $self->{log_file} or die "Cannot open log file $self->{log_file}: $OS_ERROR"; close STDERR; open STDERR, ">&STDOUT" or die "Cannot dupe STDERR to STDOUT: $OS_ERROR"; } else { if ( -t STDOUT ) { PTDEBUG && _d('No log file and STDOUT is a terminal;', 'redirecting to /dev/null'); close STDOUT; open STDOUT, '>', '/dev/null' or die "Cannot reopen STDOUT to /dev/null: $OS_ERROR"; } if ( -t STDERR ) { PTDEBUG && _d('No log file and STDERR is a terminal;', 'redirecting to /dev/null'); close STDERR; open STDERR, '>', '/dev/null' or die "Cannot reopen STDERR to /dev/null: $OS_ERROR"; } } return; } sub check_PID_file { my ( $self, $file ) = @_; my $PID_file = $self ? $self->{PID_file} : $file; PTDEBUG && _d('Checking PID file', $PID_file); if ( $PID_file && -f $PID_file ) { my $pid; eval { chomp($pid = (slurp_file($PID_file) || '')); }; if ( $EVAL_ERROR ) { die "The PID file $PID_file already exists but it cannot be read: " . $EVAL_ERROR; } PTDEBUG && _d('PID file exists; it contains PID', $pid); if ( $pid ) { my $pid_is_alive = kill 0, $pid; if ( $pid_is_alive ) { die "The PID file $PID_file already exists " . " and the PID that it contains, $pid, is running"; } else { warn "Overwriting PID file $PID_file because the PID that it " . "contains, $pid, is not running"; } } else { die "The PID file $PID_file already exists but it does not " . 
"contain a PID"; } } else { PTDEBUG && _d('No PID file'); } return; } sub make_PID_file { my ( $self ) = @_; if ( exists $self->{child} ) { die "Do not call Daemon::make_PID_file() for daemonized scripts"; } $self->_make_PID_file(); $self->{PID_owner} = $PID; return; } sub _make_PID_file { my ( $self ) = @_; my $PID_file = $self->{PID_file}; if ( !$PID_file ) { PTDEBUG && _d('No PID file to create'); return; } $self->check_PID_file(); open my $PID_FH, '>', $PID_file or die "Cannot open PID file $PID_file: $OS_ERROR"; print $PID_FH $PID or die "Cannot print to PID file $PID_file: $OS_ERROR"; close $PID_FH or die "Cannot close PID file $PID_file: $OS_ERROR"; PTDEBUG && _d('Created PID file:', $self->{PID_file}); return; } sub _remove_PID_file { my ( $self ) = @_; if ( $self->{PID_file} && -f $self->{PID_file} ) { unlink $self->{PID_file} or warn "Cannot remove PID file $self->{PID_file}: $OS_ERROR"; PTDEBUG && _d('Removed PID file'); } else { PTDEBUG && _d('No PID to remove'); } return; } sub DESTROY { my ( $self ) = @_; $self->_remove_PID_file() if ($self->{PID_owner} || 0) == $PID; return; } sub slurp_file { my ($file) = @_; return unless $file; open my $fh, "<", $file or die "Cannot open $file: $OS_ERROR"; return do { local $/; <$fh> }; } sub _d { my ($package, undef, $line) = caller 0; @_ = map { (my $temp = $_) =~ s/\n/\n# /g; $temp; } map { defined $_ ? $_ : 'undef' } @_; print STDERR "# $package:$line $PID ", join(' ', @_), "\n"; } 1; } # ########################################################################### # End Daemon package # ########################################################################### # ########################################################################### # This is a combination of modules and programs in one -- a runnable module. # http://www.perl.com/pub/a/2006/07/13/lightning-articles.html?page=last # Or, look it up in the Camel book on pages 642 and 643 in the 3rd edition. # # Check at the end of this package for the call to main() which actually runs # the program. # ########################################################################### package pt_visual_explain; use English qw(-no_match_vars); use Getopt::Long; use Data::Dumper; $Data::Dumper::Indent = 1; $Data::Dumper::Quotekeys = 0; use constant PTDEBUG => $ENV{PTDEBUG} || 0; sub main { @ARGV = @_; # set global ARGV for this package # ####################################################################### # Get configuration information and parse command line options. # ####################################################################### my $o = new OptionParser(); $o->get_specs(); $o->get_opts(); my $dp = $o->DSNParser(); $dp->prop('set-vars', $o->set_vars()); $o->usage_or_errors(); # ######################################################################## # If --pid, check it first since we'll die if it already exits. # ######################################################################## my $daemon; if ( $o->get('pid') ) { # We're not daemoninzing, it just handles PID stuff. Keep $daemon # in the the scope of main() because when it's destroyed it automatically # removes the PID file. $daemon = new Daemon(o=>$o); $daemon->make_PID_file(); } # ####################################################################### # Get ready to do the main work. # ####################################################################### # Magically read STDIN or files in @ARGV my $text = do { local $INPUT_RECORD_SEPARATOR = undef; <>; }; my $rows; if ( $o->got('connect') ) { # Connect to the database. 
if ( $o->got('ask-pass') && !$o->got('password') ) { $o->set('password', OptionParser::prompt_noecho("Enter password: ")); } my $dsn = $dp->parse_options($o); my $dbh = $dp->get_dbh($dp->get_cxn_params($dsn), { AutoCommit => 1 } ); $text =~ s{^.*?select}{EXPLAIN /*!50115 PARTITIONS*/ SELECT}is; $rows = $dbh->selectall_arrayref($text, { Slice => {} } ); $dbh->disconnect(); } else { $rows = ExplainParser->new->parse($text); } # ####################################################################### # Do the main work. # ####################################################################### my $et = ExplainTree->new(); my $tree = $et->process($rows, { clustered => $o->get('clustered-pk') }); if ( $tree ) { print $o->get('format') eq 'dump' ? Dumper($tree) : $et->pretty_print($tree); } return 0; } # ############################################################################ # Run the program. # ############################################################################ if ( !caller ) { exit main(@ARGV); } 1; # Because this is a module as well as a script. # ############################################################################ # Documentation. # ############################################################################ =pod =head1 NAME pt-visual-explain - Format EXPLAIN output as a tree. =head1 SYNOPSIS Usage: pt-visual-explain [OPTIONS] [FILES] pt-visual-explain transforms EXPLAIN output into a tree representation of the query plan. If FILE is given, input is read from the file(s). With no FILE, or when FILE is -, read standard input. Examples: pt-visual-explain pt-visual-explain -c mysql -e "explain select * from mysql.user" | pt-visual-explain =head1 RISKS Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server. Before using this tool, please: =over =item * Read the tool's documentation =item * Review the tool's known L<"BUGS"> =item * Test the tool on a non-production server =item * Backup your production server and verify the backups =back =head1 DESCRIPTION pt-visual-explain reverse-engineers MySQL's EXPLAIN output into a query execution plan, which it then formats as a left-deep tree -- the same way the plan is represented inside MySQL. It is possible to do this by hand, or to read EXPLAIN's output directly, but it requires patience and expertise. Many people find a tree representation more understandable. You can pipe input into pt-visual-explain or specify a filename at the command line, including the magical '-' filename, which will read from standard input. It can do two things with the input: parse it for something that looks like EXPLAIN output, or connect to a MySQL instance and run EXPLAIN on the input. When parsing its input, pt-visual-explain understands three formats: tabular like that shown in the mysql command-line client, vertical like that created by using the \G line terminator in the mysql command-line client, and tab separated. It ignores any lines it doesn't know how to parse. When executing the input, pt-visual-explain replaces everything in the input up to the first SELECT keyword with 'EXPLAIN SELECT,' and then executes the result. You must specify L<"--connect"> to execute the input as a query. Either way, it builds a tree from the result set and prints it to standard output. 
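
To make that rewrite concrete, here is a minimal sketch of the substitution
applied in L<"--connect"> mode (the regular expression is the one used by the
embedded C<main()> above; the sample query is only an illustration):

  my $text = "/* some leading text */ select * from sakila.film limit 1";
  $text =~ s{^.*?select}{EXPLAIN /*!50115 PARTITIONS*/ SELECT}is;
  # $text is now:
  # EXPLAIN /*!50115 PARTITIONS*/ SELECT * from sakila.film limit 1

The rewritten statement is executed and the returned rows are then processed
exactly as parsed EXPLAIN output would be.
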
For the following query, select * from sakila.film_actor join sakila.film using(film_id); pt-visual-explain generates this query plan: JOIN +- Bookmark lookup | +- Table | | table film_actor | | possible_keys idx_fk_film_id | +- Index lookup | key film_actor->idx_fk_film_id | possible_keys idx_fk_film_id | key_len 2 | ref sakila.film.film_id | rows 2 +- Table scan rows 952 +- Table table film possible_keys PRIMARY The query plan is left-deep, depth-first search, and the tree's root is the output node -- the last step in the execution plan. In other words, read it like this: =over =item 1 Table scan the 'film' table, which accesses an estimated 952 rows. =item 2 For each row, find matching rows by doing an index lookup into the film_actor->idx_fk_film_id index with the value from sakila.film.film_id, then a bookmark lookup into the film_actor table. =back For more information on how to read EXPLAIN output, please see L, and this talk titled "MySQL query optimizer internals and upcoming features in v. 5.2": from Timour Katchaounov, one of the MySQL developers: L =head1 MODULES This program is actually a runnable module, not just an ordinary Perl script. In fact, there are two modules embedded in it. This makes unit testing easy, but it also makes it easy for you to use the parsing and tree-building functionality if you want. The ExplainParser package accepts a string and parses whatever it thinks looks like EXPLAIN output from it. The synopsis is as follows: require "pt-visual-explain"; my $p = ExplainParser->new(); my $rows = $p->parse("some text"); # $rows is an arrayref of hashrefs. The ExplainTree package accepts a set of rows and turns it into a tree. For convenience, you can also have it delegate to ExplainParser and parse text for you. Here's the synopsis: require "pt-visual-explain"; my $e = ExplainTree->new(); my $tree = $e->parse("some text", \%options); my $output = $e->pretty_print($tree); print $tree; =head1 ALGORITHM This section explains the algorithm that converts EXPLAIN into a tree. You may be interested in reading this if you want to understand EXPLAIN more fully, or trying to figure out how this works, but otherwise this section will probably not make your life richer. The tree can be built by examining the id, select_type, and table columns of each row. Here's what I know about them: The id column is the sequential number of the select. This does not indicate nesting; it just comes from counting SELECT from the left of the SQL statement. It's like capturing parentheses in a regular expression. A UNION RESULT row doesn't have an id, because it isn't a SELECT. The source code actually refers to UNIONs as a fake_lex, as I recall. If two adjacent rows have the same id value, they are joined with the standard single-sweep multi-join method. The select_type column tells a) that a new sub-scope has opened b) what kind of relationship the row has to the previous row c) what kind of operation the row represents. =over =item * SIMPLE means there are no subqueries or unions in the whole query. =item * PRIMARY means there are, but this is the outermost SELECT. =item * [DEPENDENT] UNION means this result is UNIONed with the previous result (not row; a result might encompass more than one row). =item * UNION RESULT terminates a set of UNIONed results. =item * [DEPENDENT|UNCACHEABLE] SUBQUERY means a new sub-scope is opening. This is the kind of subquery that happens in a WHERE clause, SELECT list or whatnot; it does not return a so-called "derived table." 
=item * DERIVED is a subquery in the FROM clause. =back Tables that are JOINed all have the same select_type. For example, if you JOIN three tables inside a dependent subquery, they'll all say the same thing: DEPENDENT SUBQUERY. The table column usually specifies the table name or alias, but may also say or . If it says , the row represents an access to the temporary table that holds the result of the subquery whose id is N. If it says it's the same thing, but it refers to the results it UNIONs together. Finally, order matters. If a row's id is less than the one before it, I think that means it is dependent on something other than the one before it. For example, explain select (select 1 from sakila.film), (select 2 from sakila.film_actor), (select 3 from sakila.actor); | id | select_type | table | +----+-------------+------------+ | 1 | PRIMARY | NULL | | 4 | SUBQUERY | actor | | 3 | SUBQUERY | film_actor | | 2 | SUBQUERY | film | If the results were in order 2-3-4, I think that would mean 3 is a subquery of 2, 4 is a subquery of 3. As it is, this means 4 is a subquery of the nearest previous recent row with a smaller id, which is 1. Likewise for 3 and 2. This structure is hard to programmatically build into a tree for the same reason it's hard to understand by inspection: there are both forward and backward references. is a forward reference to selectN, while is a backward reference to selectM and selectN. That makes recursion and other tree-building algorithms hard to get right (NOTE: after implementation, I now see how it would be possible to deal with both forward and backward references, but I have no motivation to change something that works). Consider the following: select * from ( select 1 from sakila.actor as actor_1 union select 1 from sakila.actor as actor_2 ) as der_1 union select * from ( select 1 from sakila.actor as actor_3 union all select 1 from sakila.actor as actor_4 ) as der_2; | id | select_type | table | +------+--------------+------------+ | 1 | PRIMARY | | | 2 | DERIVED | actor_1 | | 3 | UNION | actor_2 | | NULL | UNION RESULT | | | 4 | UNION | | | 5 | DERIVED | actor_3 | | 6 | UNION | actor_4 | | NULL | UNION RESULT | | | NULL | UNION RESULT | | This would be a lot easier to work with if it looked like this (I've bracketed the id on rows I moved): | id | select_type | table | +------+--------------+------------+ | [1] | UNION RESULT | | | 1 | PRIMARY | | | [2] | UNION RESULT | | | 2 | DERIVED | actor_1 | | 3 | UNION | actor_2 | | 4 | UNION | | | [5] | UNION RESULT | | | 5 | DERIVED | actor_3 | | 6 | UNION | actor_4 | In fact, why not re-number all the ids, so the PRIMARY row becomes 2, and so on? That would make it even easier to read. Unfortunately that would also have the effect of destroying the meaning of the id column, which I think is important to preserve in the final tree. Also, though it makes it easier to read, it doesn't make it easier to manipulate programmatically; so it's fine to leave them numbered as they are. The goal of re-ordering is to make it easier to figure out which rows are children of which rows in the execution plan. Given the reordered list and some row whose table is or , it is easy to find the beginning of the slice of rows that should be child nodes in the tree: you just look for the first row whose ID is the same as the first number in the table. The next question is how to find the last row that should be a child node of a UNION or DERIVED. I'll start with DERIVED, because the solution makes UNION easy. 
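
Before tackling that, here is a rough sketch of the easy half, finding where a
child slice begins (this is not the code the tool actually uses; it assumes
rows shaped like ExplainParser's output, that is, hashrefs with C<id> and
C<table> keys):

  # Return the index of the first row whose id matches the first number
  # in a <derivedN> or <unionN,M...> table reference, i.e. where the
  # child slice begins in the reordered row list.
  sub child_slice_start {
     my ( $rows, $table ) = @_;
     my ($first_id) = $table =~ m/(\d+)/;
     return unless defined $first_id;
     for my $i ( 0 .. $#$rows ) {
        my $id = $rows->[$i]->{id};
        return $i if defined $id && $id == $first_id;
     }
     return;
  }

The harder half, finding where the slice ends, is what the rest of this
section works out.
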
Consider how MySQL numbers the SELECTs sequentially according to their position in the SQL, left-to-right. Since a DERIVED table encloses everything within it in a scope, which becomes a temporary table, there are only two things to think about: its child subqueries and unions (if any), and its next siblings in the scope that encloses it. Its children will all have an id greater than it does, by definition, so any later rows with a smaller id terminate the scope. Here's an example. The middle derived table here has a subquery and a UNION to make it a little more complex for the example. explain select 1 from ( select film_id from sakila.film limit 1 ) as der_1 join ( select film_id, actor_id, (select count(*) from sakila.rental) as r from sakila.film_actor limit 1 union all select 1, 1, 1 from sakila.film_actor as dummy ) as der_2 using (film_id) join ( select actor_id from sakila.actor limit 1 ) as der_3 using (actor_id); Here's the output of EXPLAIN: | id | select_type | table | | 1 | PRIMARY | | | 1 | PRIMARY | | | 1 | PRIMARY | | | 6 | DERIVED | actor | | 3 | DERIVED | film_actor | | 4 | SUBQUERY | rental | | 5 | UNION | dummy | | NULL | UNION RESULT | | | 2 | DERIVED | film | The siblings all have id 1, and the middle one I care about is derived3. (Notice MySQL doesn't execute them in the order I defined them, which is fine). Now notice that MySQL prints out the rows in the opposite order I defined the subqueries: 6, 3, 2. It always seems to do this, and there might be other methods of finding the scope boundaries including looking for the lower boundary of the next largest sibling, but this is a good enough heuristic. I am forced to rely on it for non-DERIVED subqueries, so I rely on it here too. Therefore, I decide that everything greater than or equal to 3 belongs to the DERIVED scope. The rule for UNION is simple: they consume the entire enclosing scope, and to find the component parts of each one, you find each part's beginning as referred to in the definition, and its end is either just before the next one, or if it's the last part, the end is the end of the scope. This is only simple because UNION consumes the entire scope, which is either the entire statement, or the scope of a DERIVED table. This is because a UNION cannot be a sibling of another UNION or a table, DERIVED or not. (Try writing such a statement if you don't see it intuitively). Therefore, you can just find the enclosing scope's boundaries, and the rest is easy. Notice in the example above, the UNION is over , which includes the row with id 4 -- it includes every row between 3 and 5. Finally, there are non-derived subqueries to deal with as well. In this case I can't look at siblings to find the end of the scope as I did for DERIVED. I have to trust that MySQL executes depth-first. Here's an example: explain select actor_id, ( select count(film_id) + (select count(*) from sakila.film) from sakila.film join sakila.film_actor using(film_id) where exists( select * from sakila.actor where sakila.actor.actor_id = sakila.film_actor.actor_id ) ) from sakila.actor; | id | select_type | table | | 1 | PRIMARY | actor | | 2 | SUBQUERY | film | | 2 | SUBQUERY | film_actor | | 4 | DEPENDENT SUBQUERY | actor | | 3 | SUBQUERY | film | In order, the tree should be built like this: =over =item * See row 1. =item * See row 2. It's a higher id than 1, so it's a subquery, along with every other row whose id is greater than 2. =item * Inside this scope, see 2 and 2 and JOIN them. See 4. 
It's a higher id than 2, so it's again a subquery; recurse. After that, see 3, which is also higher; recurse. =back But the only reason the nested subquery didn't include select 3 is because select 4 came first. In other words, if EXPLAIN looked like this, | id | select_type | table | | 1 | PRIMARY | actor | | 2 | SUBQUERY | film | | 2 | SUBQUERY | film_actor | | 3 | SUBQUERY | film | | 4 | DEPENDENT SUBQUERY | actor | I would be forced to assume upon seeing select 3 that select 4 is a subquery of it, rather than just being the next sibling in the enclosing scope. If this is ever wrong, then the algorithm is wrong, and I don't see what could be done about it. UNION is a little more complicated than just "the entire scope is a UNION," because the UNION might itself be inside an enclosing scope that's only indicated by the first item inside the UNION. There are only three kinds of enclosing scopes: UNION, DERIVED, and SUBQUERY. A UNION can't enclose a UNION, and a DERIVED has its own "scope markers," but a SUBQUERY can wholly enclose a UNION, like this strange example on the empty table t1: explain select * from t1 where not exists( (select t11.i from t1 t11) union (select t12.i from t1 t12)); | id | select_type | table | Extra | +------+--------------+------------+--------------------------------+ | 1 | PRIMARY | t1 | const row not found | | 2 | SUBQUERY | NULL | No tables used | | 3 | SUBQUERY | NULL | no matching row in const table | | 4 | UNION | t12 | const row not found | | NULL | UNION RESULT | | | The UNION's backward references might make it look like the UNION encloses the subquery, but studying the query makes it clear this isn't the case. So when a UNION's first row says SUBQUERY, it is this special case. By the way, I don't fully understand this query plan; there are 4 numbered SELECT in the plan, but only 3 in the query. The parens around the UNIONs are meaningful. Removing them will make the EXPLAIN different. Please tell me how and why this works if you know. Armed with this knowledge, it's possible to use recursion to turn the parent-child relationship between all the rows into a tree representing the execution plan. MySQL prints the rows in execution order, even the forward and backward references. At any given scope, the rows are processed as a left-deep tree. MySQL does not do "bushy" execution plans. It begins with a table, finds a matching row in the next table, and continues till the last table, when it emits a row. When it runs out, it backtracks till it can find the next row and repeats. There are subtleties of course, but this is the basic plan. This is why MySQL transforms all RIGHT OUTER JOINs into LEFT OUTER JOINs and cannot do FULL OUTER JOIN. This means in any given scope, say | id | select_type | table | | 1 | SIMPLE | tbl1 | | 1 | SIMPLE | tbl2 | | 1 | SIMPLE | tbl3 | The execution plan looks like a depth-first traversal of this tree: JOIN / \ JOIN tbl3 / \ tbl1 tbl2 The JOIN might not be a JOIN. It might be a subquery, for example. This comes from the type column of EXPLAIN. The documentation says this is a "join type," but I think "access type" is more accurate, because it's "how MySQL accesses rows." pt-visual-explain decorates the tree significantly more than just turning rows into nodes. Each node may get a series of transformations that turn it into a subtree of more than one node. For example, an index scan not marked with 'Using index' must do a bookmark lookup into the table rows; that is a three-node subtree. 
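
A minimal sketch of that left-deep folding (purely illustrative; the node
shape and the C<left_deep_join> name are assumptions, not the tool's actual
data structures):

  # Fold same-id rows into a left-deep join tree:
  # tbl1, tbl2, tbl3  =>  JOIN( JOIN(tbl1, tbl2), tbl3 )
  sub left_deep_join {
     my ( @nodes ) = @_;
     my $tree = shift @nodes;
     foreach my $node ( @nodes ) {
        $tree = { JOIN => [ $tree, $node ] };   # earlier tables stay on the left
     }
     return $tree;
  }

In the real tree each leaf is then decorated as described above, for example
with the bookmark-lookup subtree for an index scan that is not covering.
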
However, after the above node-ordering and scoping stuff, the rest of the process is pretty simple. =head1 OPTIONS This tool accepts additional command-line arguments. Refer to the L<"SYNOPSIS"> and usage information for details. =over =item --ask-pass Prompt for a password when connecting to MySQL. =item --charset short form: -A; type: string Default character set. If the value is utf8, sets Perl's binmode on STDOUT to utf8, passes the mysql_enable_utf8 option to DBD::mysql, and runs SET NAMES UTF8 after connecting to MySQL. Any other value sets binmode on STDOUT without the utf8 layer, and runs SET NAMES after connecting to MySQL. =item --clustered-pk Assume that PRIMARY KEY index accesses don't need to do a bookmark lookup to retrieve rows. This is the case for InnoDB. =item --config type: Array Read this comma-separated list of config files; if specified, this must be the first option on the command line. =item --connect Treat input as a query, and obtain EXPLAIN output by connecting to a MySQL instance and running EXPLAIN on the query. When this option is given, pt-visual-explain uses the other connection-specific options such as L<"--user"> to connect to the MySQL instance. If you have a .my.cnf file, it will read it, so you may not need to specify any connection-specific options. =item --database short form: -D; type: string Connect to this database. =item --defaults-file short form: -F; type: string Only read mysql options from the given file. You must give an absolute pathname. =item --format type: string; default: tree Set output format. The default is a terse pretty-printed tree. The valid values are: Value Meaning ===== ================================================ tree Pretty-printed terse tree. dump Data::Dumper output (see Data::Dumper for more). =item --help Show help and exit. =item --host short form: -h; type: string Connect to host. =item --password short form: -p; type: string Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item --pid type: string Create the given PID file. The tool won't start if the PID file already exists and the PID it contains is different than the current PID. However, if the PID file exists and the PID it contains is no longer running, the tool will overwrite the PID file with the current PID. The PID file is removed automatically when the tool exits. =item --port short form: -P; type: int Port number to use for connection. =item --set-vars type: Array Set the MySQL variables in this comma-separated list of C pairs. By default, the tool sets: =for comment ignore-pt-internal-value MAGIC_set_vars wait_timeout=10000 Variables specified on the command line override these defaults. For example, specifying C<--set-vars wait_timeout=500> overrides the defaultvalue of C<10000>. The tool prints a warning and continues if a variable cannot be set. =item --socket short form: -S; type: string Socket file to use for connection. =item --user short form: -u; type: string User for login if not current user. =item --version Show version and exit. =back =head1 DSN OPTIONS These DSN options are used to create a DSN. Each option is given like C. The options are case-sensitive, so P and p are not the same option. There cannot be whitespace before or after the C<=> and if the value contains whitespace it must be quoted. DSN options are comma-separated. See the L manpage for full details. =over =item * A dsn: charset; copy: yes Default character set. =item * D dsn: database; copy: yes Default database. 
=item * F dsn: mysql_read_default_file; copy: yes Only read default options from the given file =item * h dsn: host; copy: yes Connect to host. =item * p dsn: password; copy: yes Password to use when connecting. If password contains commas they must be escaped with a backslash: "exam\,ple" =item * P dsn: port; copy: yes Port number to use for connection. =item * S dsn: mysql_socket; copy: yes Socket file to use for connection. =item * u dsn: user; copy: yes User for login if not current user. =back =head1 ENVIRONMENT The environment variable C enables verbose debugging output to STDERR. To enable debugging and capture all output to a file, run the tool like: PTDEBUG=1 pt-visual-explain ... > FILE 2>&1 Be careful: debugging output is voluminous and can generate several megabytes of output. =head1 SYSTEM REQUIREMENTS You need Perl, DBI, DBD::mysql, and some core packages that ought to be installed in any reasonably new version of Perl. =head1 BUGS For a list of known bugs, see L. Please report bugs at L. Include the following information in your bug report: =over =item * Complete command-line used to run the tool =item * Tool L<"--version"> =item * MySQL version of all servers involved =item * Output from the tool including STDERR =item * Input files (log/dump/config files, etc.) =back If possible, include debugging output by running the tool with C; see L<"ENVIRONMENT">. =head1 DOWNLOADING Visit L to download the latest release of Percona Toolkit. Or, get the latest release from the command line: wget percona.com/get/percona-toolkit.tar.gz wget percona.com/get/percona-toolkit.rpm wget percona.com/get/percona-toolkit.deb You can also get individual tools from the latest release: wget percona.com/get/TOOL Replace C with the name of any tool. =head1 AUTHORS Baron Schwartz =head1 ABOUT PERCONA TOOLKIT This tool is part of Percona Toolkit, a collection of advanced command-line tools for MySQL developed by Percona. Percona Toolkit was forked from two projects in June, 2011: Maatkit and Aspersa. Those projects were created by Baron Schwartz and primarily developed by him and Daniel Nichter. Visit L to learn about other free, open-source software from Percona. =head1 COPYRIGHT, LICENSE, AND WARRANTY This program is copyright 2011-2018 Percona LLC and/or its affiliates, 2007-2011 Baron Schwartz. THIS PROGRAM IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. On UNIX and similar systems, you can issue `man perlgpl' or `man perlartistic' to read these licenses. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. =head1 VERSION pt-visual-explain 3.1.0 =cut percona-toolkit-3.1/config/000775 001750 001750 00000000000 13535723557 017121 5ustar00jenkinsjenkins000000 000000 percona-toolkit-3.1/config/NaturalDocs/000775 001750 001750 00000000000 13535723557 021340 5ustar00jenkinsjenkins000000 000000 percona-toolkit-3.1/config/NaturalDocs/Languages.txt000664 001750 001750 00000011501 13535723557 024005 0ustar00jenkinsjenkins000000 000000 Format: 1.52 # This is the Natural Docs languages file for this project. 
If you change # anything here, it will apply to THIS PROJECT ONLY. If you'd like to change # something for all your projects, edit the Languages.txt in Natural Docs' # Config directory instead. # You can prevent certain file extensions from being scanned like this: # Ignore Extensions: [extension] [extension] ... #------------------------------------------------------------------------------- # SYNTAX: # # Unlike other Natural Docs configuration files, in this file all comments # MUST be alone on a line. Some languages deal with the # character, so you # cannot put comments on the same line as content. # # Also, all lists are separated with spaces, not commas, again because some # languages may need to use them. # # Language: [name] # Alter Language: [name] # Defines a new language or alters an existing one. Its name can use any # characters. If any of the properties below have an add/replace form, you # must use that when using Alter Language. # # The language Shebang Script is special. It's entry is only used for # extensions, and files with those extensions have their shebang (#!) lines # read to determine the real language of the file. Extensionless files are # always treated this way. # # The language Text File is also special. It's treated as one big comment # so you can put Natural Docs content in them without special symbols. Also, # if you don't specify a package separator, ignored prefixes, or enum value # behavior, it will copy those settings from the language that is used most # in the source tree. # # Extensions: [extension] [extension] ... # [Add/Replace] Extensions: [extension] [extension] ... # Defines the file extensions of the language's source files. You can # redefine extensions found in the main languages file. You can use * to # mean any undefined extension. # # Shebang Strings: [string] [string] ... # [Add/Replace] Shebang Strings: [string] [string] ... # Defines a list of strings that can appear in the shebang (#!) line to # designate that it's part of the language. You can redefine strings found # in the main languages file. # # Ignore Prefixes in Index: [prefix] [prefix] ... # [Add/Replace] Ignored Prefixes in Index: [prefix] [prefix] ... # # Ignore [Topic Type] Prefixes in Index: [prefix] [prefix] ... # [Add/Replace] Ignored [Topic Type] Prefixes in Index: [prefix] [prefix] ... # Specifies prefixes that should be ignored when sorting symbols in an # index. Can be specified in general or for a specific topic type. # #------------------------------------------------------------------------------ # For basic language support only: # # Line Comments: [symbol] [symbol] ... # Defines a space-separated list of symbols that are used for line comments, # if any. # # Block Comments: [opening sym] [closing sym] [opening sym] [closing sym] ... # Defines a space-separated list of symbol pairs that are used for block # comments, if any. # # Package Separator: [symbol] # Defines the default package separator symbol. The default is a dot. # # [Topic Type] Prototype Enders: [symbol] [symbol] ... # When defined, Natural Docs will attempt to get a prototype from the code # immediately following the topic type. It stops when it reaches one of # these symbols. Use \n for line breaks. # # Line Extender: [symbol] # Defines the symbol that allows a prototype to span multiple lines if # normally a line break would end it. # # Enum Values: [global|under type|under parent] # Defines how enum values are referenced. The default is global. 
# global - Values are always global, referenced as 'value'. # under type - Values are under the enum type, referenced as # 'package.enum.value'. # under parent - Values are under the enum's parent, referenced as # 'package.value'. # # Perl Package: [perl package] # Specifies the Perl package used to fine-tune the language behavior in ways # too complex to do in this file. # #------------------------------------------------------------------------------ # For full language support only: # # Full Language Support: [perl package] # Specifies the Perl package that has the parsing routines necessary for full # language support. # #------------------------------------------------------------------------------- # The following languages are defined in the main file, if you'd like to alter # them: # # Text File, Shebang Script, C/C++, C#, Java, JavaScript, Perl, Python, # PHP, SQL, Visual Basic, Pascal, Assembly, Ada, Tcl, Ruby, Makefile, # ActionScript, ColdFusion, R, Fortran # If you add a language that you think would be useful to other developers # and should be included in Natural Docs by default, please e-mail it to # languages [at] naturaldocs [dot] org. percona-toolkit-3.1/config/NaturalDocs/Menu.txt000664 001750 001750 00000017017 13535723557 023013 0ustar00jenkinsjenkins000000 000000 Format: 1.52 # You can add a title and sub-title to your menu like this: # Title: [project name] # SubTitle: [subtitle] # You can add a footer to your documentation like this: # Footer: [text] # If you want to add a copyright notice, this would be the place to do it. # You can add a timestamp to your documentation like one of these: # Timestamp: Generated on month day, year # Timestamp: Updated mm/dd/yyyy # Timestamp: Last updated mon day # # m - One or two digit month. January is "1" # mm - Always two digit month. January is "01" # mon - Short month word. January is "Jan" # month - Long month word. January is "January" # d - One or two digit day. 1 is "1" # dd - Always two digit day. 1 is "01" # day - Day with letter extension. 1 is "1st" # yy - Two digit year. 2006 is "06" # yyyy - Four digit year. 2006 is "2006" # year - Four digit year. 2006 is "2006" # -------------------------------------------------------------------------- # # Cut and paste the lines below to change the order in which your files # appear on the menu. Don't worry about adding or removing files, Natural # Docs will take care of that. # # You can further organize the menu by grouping the entries. Add a # "Group: [name] {" line to start a group, and add a "}" to end it. # # You can add text and web links to the menu by adding "Text: [text]" and # "Link: [name] ([URL])" lines, respectively. # # The formatting and comments are auto-generated, so don't worry about # neatness when editing the file. Natural Docs will clean it up the next # time it is run. When working with groups, just deal with the braces and # forget about the indentation and comments. 
# # -------------------------------------------------------------------------- Group: Modules { File: Advisor (modules/Advisor.pm) File: AdvisorRules (modules/AdvisorRules.pm) File: BinaryLogParser (modules/BinaryLogParser.pm) File: ChangeHandler (modules/ChangeHandler.pm) File: CompareQueryTimes (modules/CompareQueryTimes.pm) File: CompareResults (modules/CompareResults.pm) File: CompareTableStructs (modules/CompareTableStructs.pm) File: CompareWarnings (modules/CompareWarnings.pm) File: CopyRowsInsertSelect (modules/CopyRowsInsertSelect.pm) File: Daemon (modules/Daemon.pm) File: DSNParser (modules/DSNParser.pm) File: DuplicateKeyFinder (modules/DuplicateKeyFinder.pm) File: EventAggregator (modules/EventAggregator.pm) File: EventTimeline (modules/EventTimeline.pm) File: ExecutionThrottler (modules/ExecutionThrottler.pm) File: ExplainAnalyzer (modules/ExplainAnalyzer.pm) File: FileIterator (modules/FileIterator.pm) File: ForeignKeyIterator (modules/ForeignKeyIterator.pm) File: GeneralLogParser (modules/GeneralLogParser.pm) File: HTTPProtocolParser (modules/HTTPProtocolParser.pm) File: IndexUsage (modules/IndexUsage.pm) File: InnoDBStatusParser (modules/InnoDBStatusParser.pm) File: KeySize (modules/KeySize.pm) File: LogSplitter (modules/LogSplitter.pm) File: MaatkitTest (modules/MaatkitTest.pm) File: MasterSlave (modules/MasterSlave.pm) File: MemcachedEvent (modules/MemcachedEvent.pm) File: MemcachedProtocolParser (modules/MemcachedProtocolParser.pm) File: MockSth (modules/MockSth.pm) File: MockSync (modules/MockSync.pm) File: MockSyncStream (modules/MockSyncStream.pm) File: MySQLConfig (modules/MySQLConfig.pm) File: MySQLConfigComparer (modules/MySQLConfigComparer.pm) File: MySQLDump (modules/MySQLDump.pm) File: MySQLProtocolParser (modules/MySQLProtocolParser.pm) File: OptionParser (modules/OptionParser.pm) File: OSCCaptureSync (modules/OSCCaptureSync.pm) File: Outfile (modules/Outfile.pm) File: PgLogParser (modules/PgLogParser.pm) File: Pipeline (modules/Pipeline.pm) File: PodParser (modules/PodParser.pm) File: Processlist (modules/Processlist.pm) File: ProcesslistAggregator (modules/ProcesslistAggregator.pm) File: Progress (modules/Progress.pm) File: ProtocolParser (modules/ProtocolParser.pm) File: QueryAdvisorRules (modules/QueryAdvisorRules.pm) File: QueryParser (modules/QueryParser.pm) File: QueryReportFormatter (modules/QueryReportFormatter.pm) File: QueryReview (modules/QueryReview.pm) File: QueryRewriter (modules/QueryRewriter.pm) File: Quoter (modules/Quoter.pm) File: ReportFormatter (modules/ReportFormatter.pm) File: Retry (modules/Retry.pm) File: RowDiff (modules/RowDiff.pm) File: Runtime (modules/Runtime.pm) File: Sandbox (modules/Sandbox.pm) File: Schema (modules/Schema.pm) File: SchemaIterator (modules/SchemaIterator.pm) File: SimpleTCPDumpParser (modules/SimpleTCPDumpParser.pm) File: SlowLogParser (modules/SlowLogParser.pm) File: SlowLogWriter (modules/SlowLogWriter.pm) File: SQLParser (modules/SQLParser.pm) File: SysLogParser (modules/SysLogParser.pm) File: TableChecksum (modules/TableChecksum.pm) File: TableChunker (modules/TableChunker.pm) File: TableNibbler (modules/TableNibbler.pm) File: TableParser (modules/TableParser.pm) File: TableSyncChunk (modules/TableSyncChunk.pm) File: TableSyncer (modules/TableSyncer.pm) File: TableSyncGroupBy (modules/TableSyncGroupBy.pm) File: TableSyncNibble (modules/TableSyncNibble.pm) File: TableSyncStream (modules/TableSyncStream.pm) File: TableUsage (modules/TableUsage.pm) File: TcpdumpParser (modules/TcpdumpParser.pm) File: 
TCPRequestAggregator (modules/TCPRequestAggregator.pm) File: TextResultSetParser (modules/TextResultSetParser.pm) File: TimeSeriesTrender (modules/TimeSeriesTrender.pm) File: Transformers (modules/Transformers.pm) File: UpgradeReportFormatter (modules/UpgradeReportFormatter.pm) File: VariableAdvisorRules (modules/VariableAdvisorRules.pm) File: VersionParser (modules/VersionParser.pm) } # Group: Modules Group: Tools { File: pt_archiver (tools/pt-archiver.pm) File: pt_config_diff (tools/pt-config-diff.pm) File: pt_deadlock_logger (tools/pt-deadlock-logger.pm) File: pt_duplicate_key_checker (tools/pt-duplicate-key-checker.pm) File: pt_fifo_split (tools/pt-fifo-split.pm) File: pt_find (tools/pt-find.pm) File: pt_fk_error_logger (tools/pt-fk-error-logger.pm) File: pt_heartbeat (tools/pt-heartbeat.pm) File: pt_index_usage (tools/pt-index-usage.pm) File: pt_kill (tools/pt-kill.pm) File: pt_log_player (tools/pt-log-player.pm) File: pt_online_schema_change (tools/pt-online-schema-change.pm) File: pt_profile_compact (tools/pt-profile-compact.pm) File: pt_query_advisor (tools/pt-query-advisor.pm) File: pt_query_digest (tools/pt-query-digest.pm) File: pt_query_profiler (tools/pt-query-profiler.pm) File: pt_schema_advisor (tools/pt-schema-advisor.pm) File: pt_show_grants (tools/pt-show-grants.pm) File: pt_slave_delay (tools/pt-slave-delay.pm) File: pt_slave_find (tools/pt-slave-find.pm) File: pt_slave_restart (tools/pt-slave-restart.pm) File: pt_table_checksum (tools/pt-table-checksum.pm) File: pt_table_sync (tools/pt-table-sync.pm) File: pt_table_usage (tools/pt-table-usage.pm) File: pt_tcp_model (tools/pt-tcp-model.pm) File: pt_trend (tools/pt-trend.pm) File: pt_upgrade (tools/pt-upgrade.pm) File: pt_variable_advisor (tools/pt-variable-advisor.pm) File: pt_visual_explain (tools/pt-visual-explain.pm) } # Group: Tools Group: Index { Index: Everything Class Index: Classes Function Index: Functions Variable Index: Variables } # Group: Index percona-toolkit-3.1/config/NaturalDocs/Topics.txt000664 001750 001750 00000006077 13535723557 023354 0ustar00jenkinsjenkins000000 000000 Format: 1.52 # This is the Natural Docs topics file for this project. If you change anything # here, it will apply to THIS PROJECT ONLY. If you'd like to change something # for all your projects, edit the Topics.txt in Natural Docs' Config directory # instead. # If you'd like to prevent keywords from being recognized by Natural Docs, you # can do it like this: # Ignore Keywords: [keyword], [keyword], ... # # Or you can use the list syntax like how they are defined: # Ignore Keywords: # [keyword] # [keyword], [plural keyword] # ... #------------------------------------------------------------------------------- # SYNTAX: # # Topic Type: [name] # Alter Topic Type: [name] # Creates a new topic type or alters one from the main file. Each type gets # its own index and behavior settings. Its name can have letters, numbers, # spaces, and these charaters: - / . ' # # Plural: [name] # Sets the plural name of the topic type, if different. # # Keywords: # [keyword] # [keyword], [plural keyword] # ... # Defines or adds to the list of keywords for the topic type. They may only # contain letters, numbers, and spaces and are not case sensitive. Plural # keywords are used for list topics. You can redefine keywords found in the # main topics file. # # Index: [yes|no] # Whether the topics get their own index. Defaults to yes. Everything is # included in the general index regardless of this setting. 
# # Scope: [normal|start|end|always global] # How the topics affects scope. Defaults to normal. # normal - Topics stay within the current scope. # start - Topics start a new scope for all the topics beneath it, # like class topics. # end - Topics reset the scope back to global for all the topics # beneath it. # always global - Topics are defined as global, but do not change the scope # for any other topics. # # Class Hierarchy: [yes|no] # Whether the topics are part of the class hierarchy. Defaults to no. # # Page Title If First: [yes|no] # Whether the topic's title becomes the page title if it's the first one in # a file. Defaults to no. # # Break Lists: [yes|no] # Whether list topics should be broken into individual topics in the output. # Defaults to no. # # Can Group With: [type], [type], ... # Defines a list of topic types that this one can possibly be grouped with. # Defaults to none. #------------------------------------------------------------------------------- # The following topics are defined in the main file, if you'd like to alter # their behavior or add keywords: # # Generic, Class, Interface, Section, File, Group, Function, Variable, # Property, Type, Constant, Enumeration, Event, Delegate, Macro, # Database, Database Table, Database View, Database Index, Database # Cursor, Database Trigger, Cookie, Build Target # If you add something that you think would be useful to other developers # and should be included in Natural Docs by default, please e-mail it to # topics [at] naturaldocs [dot] org. percona-toolkit-3.1/config/deb/000775 001750 001750 00000000000 13535723557 017653 5ustar00jenkinsjenkins000000 000000 percona-toolkit-3.1/config/deb/changelog000664 001750 001750 00000142454 13535723557 021537 0ustar00jenkinsjenkins000000 000000 percona-toolkit (3.0.13-1) unstable; urgency=low * Fixed bug PT-1673: Fix pt-show-grants for MariaDB 10+ (thanks Tim Birkett) * Fixed bug PT-1638: pt-online-schema-change not working with MariaDB 10.x * Improvement PT-1637: Added --fail-on stopped-replication param to table checksum * Fixed bug PT-1616: pt-table-checksum fails to --resume on certain binary strings * Fixed bug PT-1573: pt-query-digest log_timestamps = SYSTEM # No events processed * Improvement PT-1340: pt-stalk should not call mysqladmin debug by default * Fixed bug PT-1114: pt-table-checksum fails when table is empty * Fixed bug PT-157: Specifying the index to use for pt-archiver ignores --primary-key-only -- Percona Toolkit Developers Fri, 28 Dec 2018 08:06:00 +0000 percona-toolkit (3.0.12-1) unstable; urgency=low * Fixed bug PT-1611: pt-archiver fails with UTF-8 chars * Fixed bug PT-1603: pt-table-sync is not considering unsorted enum fields in indexes for calculating chunk boundaries * Fixed bug PT-1574: pt-online-schema-change fails on UK and NULLs * Fixed bug PT-1572: Better usage of ENUM fields in keys in NibbleIterator * Fixed bug PT-1422: pt-mysql-summary may get stuck when Time: NULL in processlist * Improvement PT-1321: Add required MySQL privileges to pt-online-schema-change documentation -- Percona Toolkit Developers Tue, 11 Sep 2018 11:20:08 +0000 percona-toolkit (3.0.11-1) unstable; urgency=low * Improvement PT-1571 : Improved hostname recognition in pt-secure-collect * Fixed bug PT-1570 : pt-archiver fails to detect columns with the word GENERATED as part of the comment * Improvement PT-1569 : Disabled --alter-foreign-keys-method=drop_swap in pt-osc * Fixed bug PT-1563 : Fixed pt-show-grants for MySQL 5.6 * Improvement PT-1562 : pt-mysql-summary: Fix mysqld 
command for Travis * Fixed bug PT-1551 : pt-table-checksum fails on MySQL 8.0.11 * Improvement PT-242 : (pt-stalk) Include SHOW SLAVE STATUS on 5.7 (Thanks Marcelo Altmann) * Fixed bug PT-241 : (pt-stalk) Slave queries doesn't run on 5.7 (Thanks Marcelo Altmann) -- Percona Toolkit Developers Fri, 06 Jul 2018 15:07:41 +0000 percona-toolkit (3.0.11-1) unstable; urgency=low * Improvement PT-1571 : Improved hostname recognition in pt-secure-collect * Fixed bug PT-1570 : pt-archiver fails to detect columns with the word GENERATED as part of the comment * Improvement PT-1569 : Disabled --alter-foreign-keys-method=drop_swap in pt-osc * Fixed bug PT-1563 : Fixed pt-show-grants for MySQL 5.6 * Improvement PT-1562 : pt-mysql-summary: Fix mysqld command for Travis * Fixed bug PT-1551 : pt-table-checksum fails on MySQL 8.0.11 * Improvement PT-242 : (pt-stalk) Include SHOW SLAVE STATUS on 5.7 (Thanks Marcelo Altmann) * Fixed bug PT-241 : (pt-stalk) Slave queries doesn't run on 5.7 (Thanks Marcelo Altmann) -- Percona Toolkit Developers Tue, 03 Jul 2018 12:54:53 +0000 percona-toolkit (3.0.10-1) unstable; urgency=low -- Percona Toolkit Developers Mon, 21 May 2018 17:36:10 +0000 percona-toolkit (3.0.9-1) unstable; urgency=low -- Percona Toolkit Developers Tue, 17 Apr 2018 10:24:47 +0000 percona-toolkit (3.0.8-1) unstable; urgency=low -- Percona Toolkit Developers Tue, 13 Mar 2018 13:58:27 +0000 percona-toolkit (3.0.7-1) unstable; urgency=low -- Percona Toolkit Developers Thu, 01 Mar 2018 12:11:13 +0000 percona-toolkit (3.0.6-1) unstable; urgency=low -- Percona Toolkit Developers Wed, 20 Dec 2017 14:02:29 +0000 percona-toolkit (3.0.5-1) unstable; urgency=low -- Percona Toolkit Developers Fri, 17 Nov 2017 08:02:47 +0000 percona-toolkit (3.0.4-1) unstable; urgency=low * Fixed bug PT-181 : pt-online-schema-change not in sync with modules (Thanks Daniël van Eeden) * Fixed bug PT-180 : pt-online-schema-change --skip-check-slave-lag doesn't work * Fixed bug PT-178 : pt-online-schema-change appears to ignore the --check-slave-lag option * Fixed bug PT-162 : Updated pt-table-checksum ignored dbs (Thanks Agustin Gallego) * Fixed bug PT-161 : Safely check for undefined values in --skip-check-slave-lag (Thanks Chris Swingler) * Fixed bug PT-154 : pt-online-schema-change --no-use-insert-ignore is broken * Fixed bug PT-153 : pt-online-schema-change data loss when adding unique keys * Fixed bug PT-151 : point is not decimal * Fixed bug PT-148 : pt-osc Use of uninitialized value in printf * Fixed bug PT-146 : Turn off statement based binlog checks * Fixed bug PT-144 : Constraint name is too long (> 64 chars) * Fixed bug PT-143 : pt-archiver SELECT query fails because of primary key * Fixed bug PT-142 : pt-online-schema-change find_child_tables slow * Fixed bug PT-138 : Added --output-format option to pt-mongodb-summary * Fixed bug PT-136 : pt-table-checksum fails with columns having different collation/charset * Feature PT-173 : Enable pt-table-checksum to ensure stale data is removed * Feature PT-141 : pt-archiver archive records into csv file * Feature PT-91 : Make pt-osc compatible with AFTER triggers * Feature PT-90 : pt-stalk: Collect information about prepared statements if P_S is enabled (Thanks Agustin Gallego) -- Percona Toolkit Developers Tue, 01 Aug 2017 09:35:45 +0000 percona-toolkit (3.0.3-1) unstable; urgency=low * Fixed bug PT-133 : Sandbox won't start correctly if autocommit=0 in my.cnf * Fixed bug PT-132 : pt-online-schema-change should imply --no-drop-new-table * Fixed bug PT-130 : Fixed pt-mext not 
working with not empty Rsa_public_key * Fixed bug PT-128 : pt-stalk ps include memory usage outputs * Fixed bug PT-126 : Recognize comments in ALTER * Fixed bug PT-116 : pt-online-schema change eats data on adding a unique index. Added --[no]use-insert-ignore * Feature PT-115 : Make DSNs params able to be repeatable * Fixed bug PT-115 : Made OptionParser to accept repeatable DSNs * Fixed bug PT-111 : Collect MySQL variables * Fixed bug PT-087 : Add --skip-check-slave-lag to pt-table-checksum * Fixed bug PT-086 : Added --skip-check-slave-lag to pt-osc * Fixed bug PT-080 : Added support for slave status in pt-stalk -- Percona Toolkit Developers Tue, 01 Aug 2017 10:23:25 +0000 percona-toolkit (3.0.2-1) unstable; urgency=low * Fixed bug PT-73 : pt-mongodb tools add support for SSL connections * Fixed bug PT-74 : pt-mongodb-summary Cannot get security settings when connected to a mongod instance * Fixed bug PT-75 : pt-mongodb-query-digest Change the default sort order to -count (descending) * Fixed bug PT-76 : pt-mysql-summary password doesn't support '&' and '#' symbols * Fixed bug PT-77 : Update Makefile for mongodb tools * Fixed bug PT-81 : Collect information about locks and transactions using P_S (Thanks Agustin Gallego) * Fixed bug PT-89 : pt-stalk top CPU usage is useless * Fixed bug PT-93 : Fix pt-mongodb-query-digest query ID (Thanks Kamil Dziedzic) * Fixed bug PT-94 : pt-online-schema-change makes duplicate rows in _t_new for UPDATE t set pk=0 where pk=1 * Fixed bug PT-96 : Fixed PT tests * Fixed bug PT-101 : pt-table-checksum ignores slave-user and slave-password * Fixed bug PT-105 : pt-table-checksum fails if a database is dropped while the tool is running -- Percona Toolkit Developers Thu, 23 Mar 2017 12:16:45 +0000 percona-toolkit (3.0.1-1) unstable; urgency=low -- Percona Toolkit Developers Thu, 16 Feb 2017 16:08:43 +0000 percona-toolkit (3.0-1) unstable; urgency=low * Fixed bug 1402776: Improved fix (protocol parser fix): error when parsing tcpdump capture with pt-query-digest * Fixed bug 1632522: pt-osc: Fails with duplicate key in table for self-referencing (Thanks Amiel Marqeta) * Fixed bug 1654668: pt-summary exists with an error (Thanks Marcelo Altmann) * New tool : pt-mongodb-summary * New tool : pt-mongodb-query-digest -- Percona Toolkit Developers Fri, 03 Feb 2017 23:23:31 +0000 percona-toolkit (2.2.20-1) unstable; urgency=low -- Percona Toolkit Developers Tue, 06 Dec 2016 21:57:10 +0000 percona-toolkit (2.2.19-1) unstable; urgency=low -- Percona Toolkit Developers Mon, 15 Aug 2016 06:01:02 +0000 percona-toolkit (2.2.18-1) unstable; urgency=low * Feature 1537416 : pt-stalk now sorts the output of transactions by id * Feature 1553340 : Added "Shared" memory info to pt-summary * Feature PT-24 : Added the --no-vertical-format option for pt-query-digest, allowing compatibility with non-standard MySQL clients that don't support the \G directive at the end of a statement * Fixed bug 1402776: Fixed error when parsing tcpdump capture with pt-query-digest * Fixed bug 1521880: Improved pt-online-schema-change plugin documentation * Fixed bug 1547225: Clarified the description of the --attribute-value-limit option for pt-query-digest * Fixed bug 1569564: Fixed all PERL-based tools to return a zero exit status when run with the --version option * Fixed bug 1576036: Fixed error that sometimes prevented to choose the primary key as index, when using the -where option for pt-table-checksum * Fixed bug 1585412: Fixed the inability of pt-query-digest to parse the general log generated by 
MySQL (and Percona Server) 5.7 instance * Fixed bug PT-36 : Clarified the description of the --verbose option for pt-slave-restart -- Percona Toolkit Developers Fri, 24 Jun 2016 08:02:55 +0000 percona-toolkit (2.2.17-1) unstable; urgency=low * Feature : General compatibility with MySQL 5.7 tools, docs and test suite * Fixed bug 1529411: pt-mysql-summary displays incorrect info about Fast Server Restarts for Percona Server 5.6 * Fixed bug 1506748: pt-online-schema-change cannot set sql_mode using --set-vars * Fixed bug 1336734: pt-online-schema-change added --null-to-non-null option to allow NULLable columns to be converted to NOT NULL * Fixed bug 1336734: pt-online-schema-change doesn't apply underscores to foreign keys individually * Fixed bug 1523685: pt-online-schema-change Invalid recursion method: t=dsns * Fixed bug 1526105: pt-online-schema-change fails when using --no-drop-old-table after 10 times * Fixed bug 1536305: pt-query-digest : Redundant argument in sprintf * Fixed bug PT-27 : pt-query-digest doc bug with --since and too many colons * Fixed bug PT-28 : pt-query-digest: Make documentation of --attribute-value-limit option more clear * Fixed bug 1435370: pt-show-grants fails against MySQL-5.7.6 * Fixed bug 1523730: pt-show-grants doesn't sort column-level privileges * Fixed bug 1362942: pt-slave-restart fails on MariaDB 10.0.13 (gtid_mode confusion) * Fixed bug PT-30: pt-stalk: new var binlog_error_action causes bug in collect module * Fixed bug 1389041: pt-table-checksum has high likelihood to skip a table when row count is around chunk-size * chunk-size-limit * Fixed bug 1480719: pt-table-checksum redundant argument in printf -- Percona Toolkit Developers Fri, 04 Mar 2016 12:09:54 +0000 percona-toolkit (2.2.16-1) unstable; urgency=low * Fixed bug 1452895: pt-archiver dies with "MySQL server has gone away" when innodb_kill_idle_transaction set to low value and bulk insert/delete process takes too long time * Fixed bug 1488685: pt-kill option --filter does not work * Feature 1402051: pt-online-schema-change should reconnect to slaves * Fixed bug 1491261: pt-online-schema-change, MySQL 5.6, and InnoDB optimizer stats can cause downtime * Fixed bug 1494082: pt-stalk find -warn option is not portable * Feature 1389041: Document that pt-table-checksum has high likelihood to skip a table when row count is around chunk-size * chunk-size-limit -- Percona Toolkit Developers Fri, 06 Nov 2015 19:47:42 +0000 percona-toolkit (2.2.15-2) unstable; urgency=low * Fixed bug 1056507: pt-archiver checked lag too frequently * Fixed bug 1443763: pt-archiver clarified function of --check-interval [DOC] * Feature 1452911: pt-archiver now accepts checking lag on multiple slaves * Feature 1413137: pt-archiver now checks for PXC flow control via --max-flow-ctl option * Fixed bug 1452914: pt-archiver options --no-delete and --purge were not mutually exclusive * Fixed bug 1449226: pt-archiver mysql timed out when innodb_kill_idle_transaction set to low value and check-slave-lag used * Fixed bug 1462904: pt-duplicate-key-checker doesn't support triple quote in column name * Feature 1470127: pt-kill enable support for RDS * Fixed bug 1455486: pt-mysql-summary lacked an --ask-pass option * Feature 1413140: pt-online-schema-change added --sleep option * Fixed bug 1446928: pt-online-schema-change core dump on erroneous alter directive * Feature 1413101: pt-online-schema-change now checks for PXC flow control via --max-flow-ctl option * Fixed bug 1450499: pt-online-schema-change unstable signal handling * Feature
1215587: pt-online-schema-change now controls constraint name length * Fixed bug 1441928: pt-online-schema-change --chunk-size-limit=0 inhibited checksumming of single nibble tables * Fixed bug 1457573: pt-sift failed when fetching missing tools * Feature 1488600: pt-stalk monitors tokudb status * Fixed bug 1042727: pt-table-checksum doesn't reconnect to slaves when timed out on very long lags * Fixed bug 1277049: password parameter must escape commas - all tools [DOC] * Fixed bug BLD-271: changes needed to build packages from git tree * Fixed bug PT-21 : write-user-docs script stopped working after switching to github * Fixed bug 1488611: testing bugs related to newer perl versions -- Percona Toolkit Developers Fri, 28 Aug 2015 08:38:24 +0000 percona-toolkit (2.2.14) unstable; urgency=low * Fixed bug 1402730 pt-duplicate-key-checker seems useless with MySQL 5.6 * Fixed bug 1415646 pt-duplicate-key-checker documentation does not explain how Size Duplicate Indexes is calculated * Fixed bug 1406390 pt-heartbeat crashes with sleep argument error * Fixed bug 1368244 pt-online-schema-change --alter-foreign-keys-method=drop-swap is not atomic * Fixed bug 1417864 pt-online-schema-change documentation, the interpretation of --tries create_triggers:5:0.5,drop_triggers:5:0.5 is wrong * Fixed bug 1404313 pt-query-digest: specifying a file that doesn't exist as log causes the tool to wait for STDIN instead of giving an error * Feature 1418446 pt-slave-find resolve IP addresses option * Fixed bug 1417558 pt-stalk with --collect-strace output doesn't go to a YYYY_MM_DD_HH_mm_ss-strace file * Fixed bug 1425478 pt-stalk removes non-empty files that start with empty line * Fixed bug 925781 pt-table-checksum checksum error when default-character-set = utf8 * Fixed bug 1381280 pt-table-checksum fails on BINARY field in PK * Feature 1439842 pt-table-sync lacks --ignore-tables-regex option * Fixed bug 1401399 pt-table-sync fails to close one db handle * Fixed bug 1442277 pt-table-sync ignores system databases but doc doesn't clarify this * Fixed bug 1421781 pt-upgrade fails on SELECT ...
INTO queries * Fixed bug 1421405 pt-upgrade fails to aggregate queries based on fingerprint * Fixed bug 1439348 pt-upgrade erroneously reports number of diffs * Fixed bug 1421025 rpm missing dependency on perl-TermReadKey for --ask-pass -- Percona Toolkit Developers Fri, 10 Apr 2015 08:38:24 +0000 percona-toolkit (2.2.13) unstable; urgency=low * Feature 1391240: pt-kill added query fingerprint hash to output * Fixed bug 1402668: pt-mysql-summary fails on cluster in Donor/Desynced status * Fixed bug 1396870: pt-online-schema-change CTRL+C leaves terminal in inconsistent state * Fixed bug 1396868: pt-online-schema-change --ask-pass option error * Fixed bug 1266869: pt-stalk fails to start if $HOME environment variable is not set * Fixed bug 1019479: pt-table-checksum does not work with sql_mode ONLY_FULL_GROUP_BY * Fixed bug 1394934: pt-table-checksum error in debug mode * Fixed bug 1321297: pt-table-checksum reports diffs on timestamp columns in 5.5 vs 5.6 * Fixed bug 1399789: pt-table-checksum fails to find pxc nodes when wsrep_node_incoming_address is set to AUTO * Fixed bug 1388870: pt-table-checksum has some errors with different time zones * Fixed bug 1408375: vulnerable to MITM attack which would allow exfiltration of MySQL configuration information via --version-check * Fixed bug 1404298: missing MySQL 5.7 test files for pt-table-checksum * Fixed bug 1403900: added sandbox and fixed sakila test db for 5.7 -- Percona Toolkit Developers Fri, 23 Jan 2015 10:08:15 +0000 percona-toolkit (2.2.12) unstable; urgency=low * Fixed bug 1376561: pt-archiver is not able to archive all the rows when a table has a hash partition * Fixed bug 1328686: pt-heartbeat check-read-only option does not prevent creates or inserts * Fixed bug 1269695: pt-online-schema-change does not allow ALTER for a table without a non-unique index, while the manual does not explain this * Fixed bug 1217466: pt-table-checksum refuses to run on PXC if server_id is the same on all nodes * Fixed bug 1373937: pt-table-checksum requires recursion when working with an XtraDB Cluster node * Fixed bug 1377888: pt-query-digest manual for --type binlog is ambiguous * Fixed bug 1349086: pt-stalk should also gather dmesg output * Fixed bug 1361293: Some scripts fail when no-version-check option is put in global config file -- Percona Toolkit Developers Tue, 11 Nov 2014 13:02:51 +0000 percona-toolkit (2.2.11) unstable; urgency=low * Fixed bug 1262456: pt-query-digest doesn't report host details * Fixed bug 1264580: pt-mysql-summary incorrectly tries to parse key/value pairs in wsrep_provider_options resulting in incomplete my.cnf information * Fixed bug 1318985: pt-stalk should use SQL_NO_CACHE * Fixed bug 1348679: pt-stalk handles mysql user password in awkward way * Fixed bug 1365085: Various issues with tests * Fixed bug 1368379: pt-summary problem parsing dmidecode output on some machines * Fixed bug 1303388: Typo in pt-variable-advisor -- Percona Toolkit Developers Thu, 25 Sep 2014 13:43:24 +0000 percona-toolkit (2.2.10) unstable; urgency=low * Fixed bug 1287253: pt-table-checksum deadlock * Fixed bug 1299387: 5.6 slow query log Thread_id becomes Id * Fixed bug 1311654: pt-table-checksum + PXC inconsistent results upon --resume * Fixed bug 1340728: pt-online-schema-change doesn't work with HASH indexes * Fixed bug 1253872: pt-table-checksum max load 20% rounds down * Fixed bug 1340364: some shell tools output error when queried for --version -- Percona Toolkit Developers Wed, 06 Aug 2014 19:53:53 +0000 percona-toolkit (2.2.9) unstable;
urgency=low * Fixed bug 1258135: pt-deadlock-logger introduces noise to MySQL * Fixed bug 1329422: pt-online-schema-change foreign-keys-method=none breaks constraints * Fixed bug 1315130: pt-online-schema-change not properly detecting foreign keys * Fixed bug 1335960: pt-query-digest cannot parse binlogs from 5.6 * Fixed bug 1335322: pt-stalk fails when variable or threshold is non-integer -- Percona Toolkit Developers Tue, 08 Jul 2014 12:04:05 +0000 percona-toolkit (2.2.8) unstable; urgency=low * Removed pt-agent * Added pt-slave-restart GTID support * Added pt-table-checksum --plugin * Fixed bug 1304062: --ignore-tables does not work correctly * Fixed bug 1295667: pt-deadlock-logger logs incorrect ts * Fixed bug 1254233: pt-mysql-summary blank InnoDB section for 5.6 * Fixed bug 1286250: pt-online-schema-change requests password twice * Fixed bug 965553: pt-query-digest doesn't fingerprint true/false literals correctly * Fixed bug 290911: pt-show-grants --ask-pass prints "Enter password" to STDOUT -- Percona Toolkit Developers Thu, 05 Jun 2014 07:07:52 +0000 percona-toolkit (2.2.7) unstable; urgency=low * Fixed bug 1279502: --version-check behaves like spyware -- Percona Toolkit Developers Thu, 20 Feb 2014 08:09:53 +0000 percona-toolkit (2.2.6) unstable; urgency=low * Added pt-query-digest support for Percona Server slow log rate limiting * Added pt-agent --ping * Added pt-mysql-summary --all-databases * Added pt-stalk --sleep-collect * Added pt-table-sync --[no]check-child-tables * Fixed bug 1249150: PTDEBUG prints some info to STDOUT * Fixed bug 1248363: pt-agent requires restart after changing MySQL options * Fixed bug 1248778: pt-agent --install on PXC is not documented * Fixed bug 1250973: pt-agent --install doesn't check for previous install * Fixed bug 1250968: pt-agent --install suggested MySQL user isn't quoted * Fixed bug 1251004: pt-agent --install error about slave is confusing * Fixed bug 1251726: pt-agent --uninstall fails if agent is running * Fixed bug 1248785: pt-agent docs don't list privs required for its MySQL user * Fixed bug 1215016: pt-deadlock-logger docs use pt-fk-error-logger * Fixed bug 1201443: pt-duplicate-key-checker error when EXPLAIN key_len=0 * Fixed bug 1217013: pt-duplicate-key-checker misses exact duplicate unique indexes * Fixed bug 1214685: pt-mysql-summary schema dump prompt can't be disabled * Fixed bug 1195628: pt-online-schema-change gets stuck looking for its own _new table * Fixed bug 1249149: pt-query-digest stats prints to STDOUT instead of STDERR * Fixed bug 1071979: pt-stalk error parsing df with NFS * Fixed bug 1223458: pt-table-sync deletes child table rows -- Percona Toolkit Developers Wed, 18 Dec 2013 23:50:43 +0000 percona-toolkit (2.2.5) unstable; urgency=low * Added Query_time histogram bucket counts to pt-query-digest JSON output * Added pt-online-schema-change --[no]drop-triggers option * Fixed bug #1199589: pt-archiver deletes data despite --dry-run * Fixed bug #944051: pt-table-checksum has ambiguous exit status * Fixed bug #1209436: pt-kill --log-dsn may not work on Perl 5.8 * Fixed bug #1210537: pt-table-checksum --recursion-method=cluster crashes if no nodes are found * Fixed bug #1215608: pt-online-schema-change new table suffix is hard-coded * Fixed bug #1229861: pt-table-sync quotes float values, can't sync * Fixed bug #821692: pt-query-digest doesn't distill LOAD DATA correctly * Fixed bug #984053: pt-query-digest doesn't distill INSERT/REPLACE without INTO correctly * Fixed bug #1206728: pt-deadlock-logger 2.2 requires DSN on
command line * Fixed bug #1226721: pt-agent on CentOS 5 fails to send data * Fixed bug #821690: pt-query-digest doesn't distill IF EXISTS correctly * Fixed bug #1206677: pt-agent docs reference clodu.percona.com -- Percona Toolkit Developers Thu, 17 Oct 2013 05:00:27 +0000 percona-toolkit (2.2.3) unstable; urgency=low * Added new tool: pt-agent * Fixed bug 1188002: pt-online-schema-change causes "ERROR 1146 (42S02): Table 'db._t_new' doesn't exist" -- Percona Toolkit Developers Mon, 17 Jun 2013 07:07:54 +0000 percona-toolkit (2.2.2) unstable; urgency=low * Added --show-all to pt-query-digest * Added --recursion-method=cluster to pt-table-checksum * Fixed bug 1127450: pt-archiver --bulk-insert may corrupt data * Fixed bug 1163372: pt-heartbeat --utc --check always returns 0 * Fixed bug 1156901: pt-query-digest --processlist reports duplicate queries for replication thread * Fixed bug 1160338: pt-query-digest 2.2 prints unwanted debug info on tcpdump parsing errors * Fixed bug 1160918: pt-query-digest 2.2 prints too many string values * Fixed bug 1156867: pt-stalk prints the wrong variable name in verbose mode when --function is used * Fixed bug 1081733: pt-stalk plugins can't access the real --prefix * Fixed bug 1099845: pt-table-checksum pxc same_node function incorrectly uses wsrep_sst_receive_address * Fixed bug 821502: Some tools don't have --help or --version * Fixed bug 947893: Some tools use @@hostname without /*!50038*/ * Fixed bug 1082406: An explicitly set wsrep_node_incoming_address may make SHOW STATUS LIKE 'wsrep_incoming_addresses' return a portless address -- Percona Toolkit Developers Wed, 24 Apr 2013 23:23:00 +0000 percona-toolkit (2.2.1) unstable; urgency=low * Official support for MySQL 5.6 * Official support for Percona XtraDB Cluster * Redesigned pt-query-digest * Redesigned pt-upgrade * Redesigned pt-fk-error-logger * Redesigned pt-deadlock-logger * Changed --set-vars in all tools * Renamed --retries to --tries in pt-online-schema-change * Added --check-read-only to pt-heartbeat * Added MySQL options to pt-mysql-summary * Added MySQL options to pt-stalk * Removed --lock-wait-timeout from pt-online-schema-change (use --set-vars) * Removed --lock-wait-timeout from pt-table-checksum (use --set-vars) * Removed pt-query-advisor * Removed pt-tcp-model * Removed pt-trend * Removed pt-log-player * Enabled --version-check by default in all tools * Fixed bug 1008796: Several tools don't have --database * Fixed bug 1087319: Quoter::serialize_list() doesn't handle multiple NULL values * Fixed bug 1086018: pt-config-diff needs to parse wsrep_provider_options * Fixed bug 1056838: pt-fk-error-logger --run-time works differently than pt-deadlock-logger --run-time * Fixed bug 1093016: pt-online-schema-change doesn't retry RENAME TABLE * Fixed bug 1113301: pt-online-schema-change blocks on metadata locks * Fixed bug 1125665: pt-stalk --no-stalk silently clobbers other options, acts magically * Fixed bug 1019648: pt-stalk truncates InnoDB status if there are too many transactions * Fixed bug 1087804: pt-table-checksum doesn't warn if no slaves are found -- Percona Toolkit Developers Thu, 14 Mar 2013 17:18:34 +0000 percona-toolkit (2.1.9) unstable; urgency=low * Fixed bug 1103221: pt-heartbeat 2.1.8 doesn't use precision/sub-second timestamps * Fixed bug 1099665: pt-heartbeat 2.1.8 reports big time drift with UTC_TIMESTAMP * Fixed bug 1099836: pt-online-schema-change fails with "Duplicate entry" on MariaDB * Fixed bug 1103672: pt-online-schema-change makes bad DELETE trigger if PK is 
re-created with new columns * Fixed bug 1115333: pt-pmp doesn't list the origin lib for each function * Fixed bug 823411: pt-query-digest shouldn't print "Error: none" for tcpdump * Fixed bug 1103045: pt-query-digest fails to parse non-SQL errors * Fixed bug 1105077: pt-table-checksum: Confusing error message with binlog_format ROW or MIXED on slave * Fixed bug 918056: pt-table-sync false-positive error "Cannot nibble table because MySQL chose no index instead of the PRIMARY index" * Fixed bug 1099933: pt-stalk is too verbose, fills up log -- Percona Toolkit Developers Thu, 14 Feb 2013 17:25:44 +0000 percona-toolkit (2.1.8) unstable; urgency=low * Beta support for MySQL 5.6 * Beta support for Percona XtraDB Cluster * pt-online-schema-change: If ran on Percona XtraDB Cluster, requires PXC 5.5.28 or newer * pt-table-checksum: If ran on Percona XtraDB Cluster, requires PXC 5.5.28 or newer * pt-upgrade: Added --[no]disable-query-cache * Fixed bug 927955: Bad pod2rst transformation * Fixed bug 898665: Bad online docs formatting for --[no]vars * Fixed bug 1022622: pt-config-diff is case-sensitive * Fixed bug 1007938: pt-config-diff doesn't handle end-of-line comments * Fixed bug 917770: pt-config-diff Use of uninitialized value in substitution (s///) at line 1996 * Fixed bug 1082104: pt-deadlock-logger doesn't handle usernames with dashes * Fixed bug 886059: pt-heartbeat handles timezones inconsistently * Fixed bug 1086259: pt-kill --log-dsn timestamp is wrong * Fixed bug 1015590: pt-mysql-summary doesn't handle renamed variables in Percona Server 5.5 * Fixed bug 1079341: pt-online-schema-change checks for foreign keys on MyISAM tables * Fixed bug 823431: pt-query-advisor hangs on big queries * Fixed bug 996069: pt-query-advisor RES.001 is incorrect * Fixed bug 933465: pt-query-advisor false positive on RES.001 * Fixed bug 937234: pt-query-advisor issues wrong RES.001 * Fixed bug 1082599: pt-query-digest fails to parse timestamp with no query * Fixed bug 1078838: pt-query-digest doesn't parse general log with "Connect user as user" * Fixed bug 957442: pt-query-digest with custom --group-by throws error * Fixed bug 887638: pt-query-digest prints negative byte offset * Fixed bug 831525: pt-query-digest help output mangled * Fixed bug 932614: pt-slave-restart CHANGE MASTER query causes error * Fixed bug 1046440: pt-stalk purge_samples slows down checks * Fixed bug 986847: pt-stalk does not report NFS iostat * Fixed bug 1074179: pt-table-checksum doesn't ignore tables for --replicate-check-only * Fixed bug 911385: pt-table-checksum v2 fails when --resume + --ignore-database is used * Fixed bug 1041391: pt-table-checksum debug statement for "Chosen hash func" prints undef * Fixed bug 1075638: pt-table-checksum Illegal division by zero at line 7950 * Fixed bug 1052475: pt-table-checksum uninitialized value in numeric lt (<) at line 8611 * Fixed bug 1078887: Tools let --set-vars clobber the required SQL mode -- Percona Toolkit Developers Fri, 21 Dec 2012 17:31:09 +0000 percona-toolkit (2.1.7) unstable; urgency=low * Fixed bug 1080384: pt-table-checksum 2.1.6 crashes using PTDEBUG * Fixed bug 1080385: pt-table-checksum 2.1.6 --check-binlog-format doesn't ignore PXC nodes -- Percona Toolkit Developers Mon, 19 Nov 2012 18:43:13 +0000 percona-toolkit (2.1.6) unstable; urgency=low * pt-online-schema-change: Columns can now be renamed without data loss * pt-online-schema-change: New --default-engine option * pt-stalk: Plugin hooks available through the --plugin option to extend the tool's functionality * 
Fixed bug 1069951: --version-check default should be explicitly "off" * Fixed bug 821715: LOAD DATA LOCAL INFILE broken in some platforms * Fixed bug 995896: Useless use of cat in Daemon.pm * Fixed bug 1039074: Tools exit 0 on error parsing options, should exit non-zero * Fixed bug 938068: pt-table-checksum doesn't warn if binlog_format=row or mixed on slaves * Fixed bug 1009510: pt-table-checksum breaks replication if a slave table is missing or different * Fixed bug 1043438: pt-table-checksum doesn't honor --run-time while checking replication lag * Fixed bug 1073532: pt-table-checksum error: Use of uninitialized value in int at line 2778 * Fixed bug 1016131: pt-table-checksum can crash with --columns if none match * Fixed bug 1039569: pt-table-checksum dies if creating the --replicate table fails * Fixed bug 1059732: pt-table-checksum doesn't test all hash functions * Fixed bug 1062563: pt-table-checksum 2.1.4 doesn't detect diffs on Percona XtraDB Cluster nodes * Fixed bug 1043528: pt-deadlock-logger can't parse db/tbl/index on partitioned tables * Fixed bug 1062324: pt-online-schema-change DELETE trigger fails when altering primary key * Fixed bug 1058285: pt-online-schema-change fails if sql_mode explicitly or implicitly uses ANSI_QUOTES * Fixed bug 1073996: pt-online-schema-change fails with "I need a max_rows argument" * Fixed bug 1039541: pt-online-schema-change --quiet doesn't disable --progress * Fixed bug 1045317: pt-online-schema-change doesn't report how many warnings it suppressed * Fixed bug 1060774: pt-upgrade fails if select column > 64 chars * Fixed bug 1070916: pt-mysql-summary may report the wrong cnf file * Fixed bug 903229: pt-mysql-summary incorrectly categorizes databases * Fixed bug 866075: pt-show-grant doesn't support column-level grants * Fixed bug 978133: pt-query-digest review table privilege checks don't work * Fixed bug 956981: pt-query-digest docs for event attributes link to defunct Maatkit wiki * Fixed bug 1047335: pt-duplicate-key-checker fails when it encounters a crashed table * Fixed bug 1047701: pt-stalk deletes non-empty files * Fixed bug 1070434: pt-stalk --no-stalk and --iterations 1 don't wait for the collect * Fixed bug 1052722: pt-fifo-split is processing n-1 rows initially * Fixed bug 1013407: pt-find documentation error with mtime and InnoDB * Fixed bug 1059757: pt-trend output has no header * Fixed bug 1063933: pt-visual-explain docs link to missing pdf * Fixed bug 1075773: pt-fk-error-logger crashes if there's no foreign key error * Fixed bug 1075775: pt-fk-error-logger --dest table example doesn't work -- Percona Toolkit Developers Tue, 13 Nov 2012 15:10:55 +0000 percona-toolkit (2.1.5) unstable; urgency=low * Fixed bug 1062563: pt-table-checksum 2.1.4 doesn't detect diffs on Percona XtraDB Cluster nodes * Fixed bug 1063912: pt-table-checksum 2.1.4 miscategorizes Percona XtraDB Cluster-based slaves as cluster nodes * Fixed bug 1064016: pt-table-sync 2.1.4 --version-check may not work with HTTPS/SSL * Fixed bug 1060423: Missing version-check page -- Percona Toolkit Developers Mon, 08 Oct 2012 21:00:06 +0000 percona-toolkit (2.1.4) unstable; urgency=low * pt-table-checksum: Percona XtraDB Cluster support * pt-table-checksum: Implemented the standard --run-time option * Implemented the version-check feature in several tools, enabled with the --version-check option * Fixed bug 856060: Document gdb dependency * Fixed bug 1041394: Unquoted arguments to tr break the bash tools * Fixed bug 1035311: pt-diskstats shows wrong device names * Fixed 
bug 1036804: pt-duplicate-key-checker error parsing InnoDB table with no PK or unique keys * Fixed bug 1022658: pt-online-schema-change dropping FK limitation isn't documented * Fixed bug 1041372: pt-online-schema-changes fails if db+tbl name exceeds 64 characters * Fixed bug 1029178: pt-query-digest --type tcpdump memory usage keeps increasing * Fixed bug 1037211: pt-query-digest won't distill LOCK TABLES in lowercase * Fixed bug 942114: pt-stalk warns about bad "find" usage * Fixed bug 1035319: pt-stalk df -h throws away needed details * Fixed bug 1038995: pt-stalk --notify-by-email fails * Fixed bug 1038995: pt-stalk does not get all InnoDB lock data * Fixed bug 952722: pt-summary should show information about Fusion-io cards * Fixed bug 899415: pt-table-checksum doesn't work if slaves use RBR * Fixed bug 954588: pt-table-checksum --check-slave-lag docs aren't clear * Fixed bug 1034170: pt-table-checksum --defaults-file isn't used for slaves * Fixed bug 930693: pt-table-sync and text columns with just whitespace * Fixed bug 1028710: pt-table-sync base_count fails on n = 1000, base = 10 * Fixed bug 1034717: pt-table-sync division by zero error with varchar primary key * Fixed bug 1036747: pt-table-sync priv checks need to be removed * Fixed bug 1039184: pt-upgrade error "I need a right_sth argument" * Fixed bug 1035260: sh warnings in pt-summary and pt-mysql-summary * Fixed bug 1038276: ChangeHandler doesn't quote varchar columns with hex-looking values * Fixed bug 916925: CentOS 5 yum dependency resolution for perl module is wrong * Fixed bug 1035950: Percona Toolkit RPM should contain a dependency on perl-Time-HiRes -- Percona Toolkit Developers Thu, 20 Sep 2012 12:41:45 +0000 percona-toolkit (2.1.3) unstable; urgency=low * pt-kill: Implemented --log-dsn to log info about killed queries to a table * Fixed bug 1016127: Install hint for DBD::mysql is wrong * Fixed bug 984915: DSNParser does not check success of --set-vars * Fixed bug 889739: pt-config-diff doesn't diff quoted strings properly * Fixed bug 969669: pt-duplicate-key-checker --key-types=k doesn't work * Fixed bug 1004567: pt-heartbeat --update --replace causes duplicate key error * Fixed bug 1028614: pt-index-usage ignores --database * Fixed bug 940733: pt-ioprofile leaves behind temp directory * Fixed bug 941469: pt-kill doesn't reconnect if its connection is lost * Fixed bug 1016114: pt-online-schema-change docs don't mention default values * Fixed bug 1020997: pt-online-schema-change fails when table is empty * Fixed bug 1022628: pt-online-schema-change error: Use of uninitialized value in numeric lt (<) at line 6519 * Fixed bug 937225: pt-query-advisor OUTER JOIN advice in JOI.003 is confusing * Fixed bug 821703: pt-query-digest --processlist may crash * Fixed bug 883098: pt-query-digest crashes if processlist has extra columns * Fixed bug 924950: pt-query-digest --group-by db may crash profile report * Fixed bug 1022851: pt-sift error: PREFIX: unbound variable * Fixed bug 969703: pt-sift defaults to '.' 
instead of '/var/lib/pt-talk' * Fixed bug 962330: pt-slave-delay incorrectly computes lag if started when slave is already lagging * Fixed bug 954990: pt-stalk --nostalk does not work * Fixed bug 977226: pt-summary doesn't detect LSI RAID control * Fixed bug 1030031: pt-table-checksum reports wrong number of DIFFS * Fixed bug 916168: pt-table-checksum privilege check fails on MySQL 5.5 * Fixed bug 950294: pt-table-checksum should always create schema and tables with IF NOT EXISTS * Fixed bug 953141: pt-table-checksum ignores its default and explicit --recursion-method * Fixed bug 1030975: pt-table-sync crashes if sql_mode includes ANSI_QUOTES * Fixed bug 869005: pt-table-sync should always set REPEATABLE READ * Fixed bug 903510: pt-tcp-model crashes in --type=requests mode on empty file * Fixed bug 934310: pt-tcp-model --quantile docs wrong * Fixed bug 980318: pt-upgrade results truncated if hostnames are long * Fixed bug 821696: pt-variable-advisor shows too long of a snippet * Fixed bug 844880: pt-variable-advisor shows binary logging as both enabled and disabled -- Percona Toolkit Developers Fri, 03 Aug 2012 18:39:39 +0000 percona-toolkit (2.1.2) unstable; urgency=low * pt-heartbeat: Implemented --recursion-method=none * pt-index-usage: MySQL 5.5 compatibility fixes * pt-log-player: MySQL 5.5 compatibility fixes * pt-online-schema-change: Added --chunk-index-columns * pt-online-schema-change: Added --[no]check-plan * pt-online-schema-change: Added --[no]drop-new-table * pt-online-schema-change: Implemented --recursion-method=none * pt-query-advisor: Added --report-type for JSON output * pt-query-digest: Removed --[no]zero-bool * pt-slave-delay: Added --database * pt-slave-find: Implemented --recursion-method=none * pt-slave-restart: Implemented --recursion-method=none * pt-table-checksum: Added --chunk-index-columns * pt-table-checksum: Added --[no]check-plan * pt-table-checksum: Implemented --recursion-method=none * pt-table-sync: Disabled --lock-and-rename except for MySQL 5.5 and newer * pt-table-sync: Implemented --recursion-method=none * Fixed bug 945079: Shell tools TMPDIR may break * Fixed bug 912902: Some shell tools still use basename * Fixed bug 987694: There is no --recursion-method=none option * Fixed bug 886077: Passwords with commas don't work, expose part of password * Fixed bug 856024: Lintian warnings when building percona-toolkit Debian package * Fixed bug 903379: pt-archiver --file doesn't create a file * Fixed bug 979092: pt-archiver --sleep conflicts with bulk operations * Fixed bug 903443: pt-deadlock-logger crashes on MySQL 5.5 * Fixed bug 941064: pt-deadlock-logger can't clear deadlocks on 5.5 * Fixed bug 952727: pt-diskstats shows incorrect wr_mb_s * Fixed bug 994176: pt-diskstats --group-by=all --headers=scroll prints a header for every sample * Fixed bug 894140: pt-duplicate-key-checker sometimes recreates a key it shouldn't * Fixed bug 923896: pt-kill: uninitialized value causes script to exit * Fixed bug 1003003: pt-online-schema-change uses different keys for chunking and triggers * Fixed bug 1003315: pt-online-schema-change --dry-run always fails on table with foreign keys * Fixed bug 1004551: pt-online-schema-change --no-swap-tables causes error * Fixed bug 976108: pt-online-schema-change doesn't allow to disable foreign key checks * Fixed bug 976109: pt-online-schema-change doesn't handle column renames * Fixed bug 988036: pt-online-schema-change causes deadlocks under heavy write load * Fixed bug 989227: pt-online-schema-change crashes with PTDEBUG * 
Fixed bug 994002: pt-online-schema-change 2.1.1 doesn't choose the PRIMARY KEY * Fixed bug 994010: pt-online-schema-change 2.1.1 crashes without InnoDB * Fixed bug 996915: pt-online-schema-change crashes with invalid --max-load and --critical-load * Fixed bug 998831: pt-online-schema-change -- Should have an option to NOT drop tables on failure * Fixed bug 1002448: pt-online-schema-change: typo for finding usable indexes * Fixed bug 885382: pt-query-digest --embedded-attributes doesn't check cardinality * Fixed bug 888114: pt-query-digest report crashes with infinite loop * Fixed bug 949630: pt-query-digest mentions a Subversion repository * Fixed bug 844034: pt-show-grants --separate fails with proxy user * Fixed bug 946707: pt-sift loses STDIN after pt-diskstats * Fixed bug 994947: pt-stalk doesn't reset cycles_true after collection * Fixed bug 986151: pt-stalk has mktemp error * Fixed bug 993436: pt-summary Memory: Total reports M instead of G * Fixed bug 1008778: pt-table-checksum doesn't wait for checksum table to replicate * Fixed bug 1010232: pt-table-checksum doesn't check the size of checksum chunks * Fixed bug 1011738: pt-table-checksum SKIPPED is zero but chunks were skipped * Fixed bug 919499: pt-table-checksum fails with binary log error in mysql >= 5.5.18 * Fixed bug 972399: pt-table-checksum docs are not rendered right * Fixed bug 978432: pt-table-checksum ignoring primary key * Fixed bug 995274: pt-table-checksum can't use an undefined value as an ARRAY reference at line 2206 * Fixed bug 996110: pt-table-checksum crashes if InnoDB is disabled * Fixed bug 987393: pt-table-checksum: Empty tables cause "undefined value as an ARRAY" errors * Fixed bug 1002365: pt-table-sync --ignore-* options don't work with --replicate * Fixed bug 1003014: pt-table-sync --replicate and --sync-to-master error "index does not exist" * Fixed bug 823403: pt-table-sync --lock-and-rename doesn't work on 5.1 * Fixed bug 898138: pt-variable-advisor doesn't recognize 5.5.3+ concurrent_insert values -- Percona Toolkit Developers Tue, 12 Jun 2012 14:03:06 +0000 percona-toolkit (2.1.1) unstable; urgency=low * Completely redesigned pt-online-schema-change * Completely redesigned pt-mysql-summary * Completely redesigned pt-summary * Added new tool: pt-table-usage * Added new tool: pt-fingerprint * Fixed bug 955860: pt-stalk doesn't run vmstat, iostat, and mpstat for --run-time * Fixed bug 960513: SHOW TABLE STATUS is used needlessly * Fixed bug 969726: pt-online-schema-change loses foreign keys * Fixed bug 846028: pt-online-schema-change does not show progress until completed * Fixed bug 898695: pt-online-schema-change adds useless ORDER BY * Fixed bug 952727: pt-diskstats shows incorrect wr_mb_s * Fixed bug 963225: pt-query-digest fails to set history columns for disk tmp tables and disk filesort * Fixed bug 967451: Char chunking doesn't quote column name * Fixed bug 972399: pt-table-checksum docs are not rendered right * Fixed bug 896553: Various documentation spelling fixes * Fixed bug 949154: pt-variable-advisor advice for relay-log-space-limit * Fixed bug 953461: pt-upgrade manual broken 'output' section * Fixed bug 949653: pt-table-checksum docs don't mention risks posed by inconsistent schemas -- Percona Toolkit Developers Tue, 03 Apr 2012 19:40:42 +0000 percona-toolkit (2.0.4) unstable; urgency=low * Added --filter to pt-kill to allow arbitrary --group-by * Added --[no]stalk to pt-stalk (bug 932331) * Added --execute to pt-online-schema-change (bug 933232) * Fixed bug 873598: pt-online-schema-change
doesn't like reserved words in column names * Fixed bug 928966: pt-pmp still uses insecure /tmp * Fixed bug 933232: pt-online-schema-change can break replication * Fixed bug 941225: Use of qw(...) as parentheses is deprecated at pt-kill line 3511 * Fixed bug 821694: pt-query-digest doesn't recognize hex InnoDB txn IDs * Fixed bug 894255: pt-kill shouldn't check if STDIN is a tty when --daemonize is given * Fixed bug 916999: pt-table-checksum error: DBD::mysql::st execute failed: called with 2 bind variables when 6 are needed * Fixed bug 926598: DBD::mysql bug causes pt-upgrade to use wrong precision (M) and scale (D) * Fixed bug 928226: pt-diskstats illegal division by zero * Fixed bug 928415: Typo in pt-stalk doc: --trigger should be --function * Fixed bug 930317: pt-archiver doc refers to nonexistent pt-query-profiler * Fixed bug 930533: pt-sift looking for *-processlist1; broken compatibility with pt-stalk * Fixed bug 932331: pt-stalk cannot collect without stalking * Fixed bug 932442: pt-table-checksum error when column name has two spaces * Fixed bug 932883: File Debian bug after each release * Fixed bug 940503: pt-stalk disk space checks wrong on 32bit platforms * Fixed bug 944420: --daemonize doesn't always close STDIN * Fixed bug 945834: pt-sift invokes pt-diskstats with deprecated argument * Fixed bug 945836: pt-sift prints awk error if there are no stack traces to aggregate * Fixed bug 945842: pt-sift generates wrong state sum during processlist analysis * Fixed bug 946438: pt-query-digest should print a better message when an unsupported log format is specified * Fixed bug 946776: pt-table-checksum ignores --lock-wait-timeout * Fixed bug 940440: Bad grammar in pt-kill docs -- Percona Toolkit Developers Wed, 07 Mar 2012 23:38:27 +0000 percona-toolkit (2.0.3) unstable; urgency=low * Completely redesigned pt-diskstats * Completely redesigned pt-stalk * Removed pt-collect and put its functionality in pt-stalk * Fixed bug 871438: Bash tools are insecure * Fixed bug 897758: Failed to prepare TableSyncChunk plugin: Use of uninitialized value $args{"chunk_range"} in lc at pt-table-sync line 3055 * Fixed bug 919819: pt-kill --execute-command creates zombies * Fixed bug 894255: pt-kill: when --daemonize is given, should not check that stdin is a tty * Fixed bug 925778: pt-ioprofile doesn't run without a file * Fixed bug 925477: pt-ioprofile docs refer to pt-iostats * Fixed bug 857091: pt-sift downloads http://percona.com/get/pt-pmp, which does not work * Fixed bug 857104: pt-sift tries to invoke mext, should be pt-mext * Fixed bug 872699: pt-diskstats: rd_avkb & wr_avkb derived incorrectly * Fixed bug 882918: pt-stalk spams warning if oprofile isn't installed * Fixed bug 884504: pt-stalk doesn't check pt-collect * Fixed bug 897483: pt-online-schema-change "uninitialized value" due to update-foreign-keys-method * Fixed bug 925007: pt-online-schema-change Use of uninitialized value $tables{"old_table"} in concatenation (.) 
or string at line 4330 * Fixed bug 915598: pt-config-diff ignores --ask-pass option * Fixed bug 919352: pt-table-checksum changes binlog_format even if already set to statement * Fixed bug 921700: pt-table-checksum doesn't add --where to chunk size test on replicas * Fixed bug 921802: pt-table-checksum does not recognize --recursion-method=processlist * Fixed bug 925855: pt-table-checksum index check is case-sensitive * Fixed bug 821709: pt-show-grants --revoke and --separate don't work together * Fixed bug 918247: Some tools use VALUE instead of VALUES -- Percona Toolkit Developers Fri, 03 Feb 2012 23:22:54 +0000 percona-toolkit (2.0.2) unstable; urgency=low * Fixed bug 911996: pt-table-sync --replicate causes "Unknown column" error -- Percona Toolkit Developers Thu, 05 Jan 2012 19:18:08 +0000 percona-toolkit (2.0.1) unstable; urgency=low * Completely redesigned pt-table-checksum * Fixed bug 856065: pt-trend does not work * Fixed bug 887688: Prepared statements crash pt-query-digest * Fixed bug 888286: align not part of percona-toolkit * Fixed bug 897961: ptc 2.0 replicate-check error does not include hostname * Fixed bug 898318: ptc 2.0 --resume with --tables does not always work * Fixed bug 903513: MKDEBUG should be PTDEBUG * Fixed bug 908256: Percona Toolkit should include pt-ioprofile * Fixed bug 821717: pt-tcp-model --type=requests crashes * Fixed bug 844038: pt-online-schema-change documentation example w/drop-tmp-table does not work * Fixed bug 864205: Remove the query to reset @crc from pt-table-checksum * Fixed bug 898663: Typo in pt-log-player documentation -- Percona Toolkit Developers Fri, 30 Dec 2011 22:43:13 +0000 percona-toolkit (1.0.1) unstable; urgency=low * Fixed bug 819421: MasterSlave::is_replication_thread() doesn't match all * Fixed bug 821673: pt-table-checksum doesn't include --where in min max queries * Fixed bug 821688: pt-table-checksum SELECT MIN MAX for char chunking is wrong * Fixed bug 838211: pt-collect: line 24: [: : integer expression expected * Fixed bug 838248: pt-collect creates a "5.1" file -- Percona Toolkit Developers Thu, 01 Sep 2011 15:59:21 +0000 percona-toolkit (0.9.5) unstable; urgency=low * Forked, combined, and rebranded Maatkit and Aspersa as Percona Toolkit. -- Percona Toolkit Developers Thu, 04 Aug 2011 21:00:00 +0000 percona-toolkit-3.1/config/deb/compat000664 001750 001750 00000000002 13535723557 021051 0ustar00jenkinsjenkins000000 000000 5 percona-toolkit-3.1/config/deb/control000664 001750 001750 00000002435 13535723557 021262 0ustar00jenkinsjenkins000000 000000 Source: percona-toolkit Section: utils Priority: optional Maintainer: Percona Toolkit Developers Build-Depends: debhelper (>= 4.2) Build-Depends-Indep: perl (>= 5.6.0-16) Standards-Version: 3.7.2 Homepage: http://www.percona.com/software/percona-toolkit/ Vcs-Browser: https://github.com/percona/percona-toolkit Vcs-Git: git://github.com/percona/percona-toolkit.git Package: percona-toolkit Architecture: amd64 Depends: ${perl:Depends}, libdbi-perl (>= 1.13), libdbd-mysql-perl | libdbd-mysql-5.1-perl, libterm-readkey-perl (>=2.10), libio-socket-ssl-perl Description: Advanced MySQL and system command-line tools Percona Toolkit is a collection of advanced command-line tools used by Percona (http://www.percona.com/) support staff to perform a variety of MySQL and system tasks that are too difficult or complex to perform manually. . These tools are ideal alternatives to private or "one-off" scripts because they are professionally developed, formally tested, and fully documented. 
They are also fully self-contained, so installation is quick and easy and no libraries are installed. . Percona Toolkit is developed and supported by Percona. For more information and other free, open-source software developed by Percona, visit http://www.percona.com/software/. percona-toolkit-3.1/config/deb/copyright000664 001750 001750 00000002464 13535723557 021614 0ustar00jenkinsjenkins000000 000000 This package was debianized by Percona Toolkit Developers on Sun, 10 Jun 2007 22:30:36 -0500 It was downloaded from http://www.percona.com/downloads/ Upstream Author: Percona Toolkit Developers Copyright: Copyright 2013 Percona Ireland Ltd. License: This software is dual licensed, either GPL version 2 or Artistic License. This package is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2; OR the Perl Artistic License. This package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this package; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA On Debian systems, the complete text of the GNU General Public License version 2 can be found in `/usr/share/common-licenses/GPL-2'. On Debian systems, the complete text of the Artistic License can be found in `/usr/share/common-licenses/Artistic'. percona-toolkit-3.1/config/deb/postinst000664 001750 001750 00000000602 13535723557 021457 0ustar00jenkinsjenkins000000 000000 #!/bin/bash if [ ! -e /etc/percona-toolkit/.percona.toolkit.uuid ]; then mkdir -p /etc/percona-toolkit if [ -r /sys/class/dmi/id/product_uuid ]; then cat /sys/class/dmi/id/product_uuid > /etc/percona-toolkit/.percona.toolkit.uuid else perl -e 'printf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7;' > /etc/percona-toolkit/.percona.toolkit.uuid fi fi #DEBHELPER# percona-toolkit-3.1/config/deb/rules000775 001750 001750 00000001345 13535723557 020736 0ustar00jenkinsjenkins000000 000000 #!/usr/bin/make -f build: build-stamp build-stamp: dh_testdir perl Makefile.PL INSTALLDIRS=vendor $(MAKE) touch build-stamp clean: dh_testdir dh_testroot -rm -f build-stamp [ ! 
-f Makefile ] || $(MAKE) distclean dh_clean install: build dh_testdir dh_testroot dh_clean dh_installdirs $(MAKE) install DESTDIR=$(CURDIR)/debian/percona-toolkit rm -rf debian/percona-toolkit/usr/lib binary-arch: binary-indep: build install dh_testdir dh_testroot dh_installdocs dh_installmenu dh_installchangelogs Changelog dh_installdocs dh_install dh_installman dh_compress dh_fixperms dh_perl dh_installdeb dh_gencontrol dh_md5sums dh_builddeb binary: binary-indep .PHONY: binary binary-arch binary-indep clean checkroot percona-toolkit-3.1/config/rpm/000775 001750 001750 00000000000 13535723557 017717 5ustar00jenkinsjenkins000000 000000 percona-toolkit-3.1/config/rpm/percona-toolkit.spec000664 001750 001750 00000004457 13535723557 023717 0ustar00jenkinsjenkins000000 000000 %undefine _missing_build_ids_terminate_build %define debug_package %{nil} Name: percona-toolkit Summary: Advanced MySQL and system command-line tools Version: %{version} Release: %{release} Group: Applications/Databases License: GPLv2 Vendor: Percona URL: http://www.percona.com/software/percona-toolkit/ Source: percona-toolkit-%{version}.tar.gz BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root BuildArch: x86_64 BuildRequires: perl(ExtUtils::MakeMaker) make Requires: perl(DBI) >= 1.13, perl(DBD::mysql) >= 1.0, perl(Time::HiRes), perl(IO::Socket::SSL), perl(Digest::MD5), perl(Term::ReadKey) AutoReq: no %description Percona Toolkit is a collection of advanced command-line tools used by Percona (http://www.percona.com/) support staff to perform a variety of MySQL and system tasks that are too difficult or complex to perform manually. These tools are ideal alternatives to private or "one-off" scripts because they are professionally developed, formally tested, and fully documented. They are also fully self-contained, so installation is quick and easy and no libraries are installed. Percona Toolkit is developed and supported by Percona. For more information and other free, open-source software developed by Percona, visit http://www.percona.com/software/. %prep %setup -q %build %{__perl} Makefile.PL INSTALLDIRS=vendor < /dev/null make %{?_smp_mflags} %install rm -rf $RPM_BUILD_ROOT make pure_install PERL_INSTALL_ROOT=$RPM_BUILD_ROOT find $RPM_BUILD_ROOT -type f -name .packlist -exec rm -f {} ';' find $RPM_BUILD_ROOT -type d -depth -exec rmdir {} 2>/dev/null ';' find $RPM_BUILD_ROOT -type f -name 'percona-toolkit.pod' -exec rm -f {} ';' rm -rf $RPM_BUILD_ROOT/usr/share/perl5 chmod -R u+w $RPM_BUILD_ROOT/* %post if [ ! -e /etc/percona-toolkit/.percona.toolkit.uuid ]; then mkdir -p /etc/percona-toolkit if [ -r /sys/class/dmi/id/product_uuid ]; then cat /sys/class/dmi/id/product_uuid > /etc/percona-toolkit/.percona.toolkit.uuid else perl -e 'printf+($}="%04x")."$}-$}-$}-$}-".$}x3,map rand 65537,0..7;' > /etc/percona-toolkit/.percona.toolkit.uuid fi fi %clean rm -rf $RPM_BUILD_ROOT %files %defattr(-,root,root,-) %doc COPYING INSTALL README.md Changelog %{_bindir}/* %{_mandir}/man1/*.1* %changelog * Mon Jul 18 2011 Daniel Nichter - Initial implementation percona-toolkit-3.1/config/sphinx-build/000775 001750 001750 00000000000 13535723560 021521 5ustar00jenkinsjenkins000000 000000 percona-toolkit-3.1/config/sphinx-build/Makefile000664 001750 001750 00000011575 13535723557 023200 0ustar00jenkinsjenkins000000 000000 # Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = ../../docs/user SOURCE = ../../docs/user # Internal variables. 
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -c . -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) $(SOURCE) .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: @echo "Downloading percona-theme ..." @wget -O percona-theme.tar.gz https://www.percona.com/docs/theme-1-4/percona-toolkit/2.2 @rm -rf percona-theme @echo "Extracting theme." @tar -zxf percona-theme.tar.gz @mv percona-theme-1-4 percona-theme @rm percona-theme.tar.gz @echo "Building html doc" $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/PerconaXtraBackup.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/PerconaXtraBackup.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/PerconaXtraBackup" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/PerconaXtraBackup" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." 
latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." make -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." percona-toolkit-3.1/config/sphinx-build/_static/000775 001750 001750 00000000000 13535723557 023155 5ustar00jenkinsjenkins000000 000000 percona-toolkit-3.1/config/sphinx-build/_static/percona_favicon.ico000664 001750 001750 00000001576 13535723557 027016 0ustar00jenkinsjenkins000000 000000 [binary icon data omitted] percona-toolkit-3.1/config/sphinx-build/percona-logo.jpg000664 001750 001750 00000154233 13535723557 024626 0ustar00jenkinsjenkins000000 000000 [binary JPEG data omitted]
3G y_b#\hukw겵i yٚx Y^ezMv]661 GmmCQ6Q(1Uky+(]Tb6ꨄi1RЀ ؃ vys[$B CFe3Do}|TJ2'O5, ڌeWQ"qWzl}͒MJhvEf ;hlPEmn1E0T| WؾLZX6%8w⿶Wz*.hm--.$X$B"dY qK0kV~]е~ENWcA$-AM(s ~c_O46@vȈ%?%Ρy]4Ee6gb!y͵毡;-Y^iBV"U$P~'ztֳ[ri7 Ȫ\5XR(z*Z۾cWإ(|!a*V-cKDh4W_8Yܴ-b @+˗JSz↭m-Ỵ'E `n++ 5b/Ua Pw$Ԓhsҁtjߘݧ|=bCoRHn7geQ? 4YGk[(xe=o%,n*gGI/+nqC_k?nsMXZiѧK24,4!~jl]M:kkZhfrЖ5 ~#j')eZ_Yt&iPkȬ GEjDY++o/%5PZmWڕ¬(oJ'$+cV ?Ƙ9+YVkYeh2`4gc-lԡ^e AbҿoPwe$GQg/Mev1P??Iy_s*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWtXǥ$$}>qג3/kRɿ;o=LMK֣r.f>DD%'/X㽷淘|Eք7wW'׆LR.hZ4/Y]6q0y,;*woiwEio-J9‡nQL`32[KmmԾ#y'/^j!kOo͟-[2K{JGFN[0(]C_kO剼 ;/.ogTE^ ?kY*o_ƛ_̿8Iȼ +ad.L@8zxT ~\Rm:NI l!mR}@O)q8ao80~6z{z<dhՌXEck߱C:(fž m{ǿA)ReB5XRΜkN;b}Ckil$@(75'>ѡח$h` It[[8v$r)f󋿛7WPFVЇڒbEҝ.Mj2(Y.%a^(dJ>qC,մ;].mZn y7Wn:U%ߝO~TͲs-Ŝ5[Uq .)8a̟:j"OiHHۘX*Tn\Uf(x?@~n#ǭ~μ=7קq/n,6m-{y;-IPH^PY+jYYW'oM͂_P+-\L?g숔_Fɶ@򥇕m/n5lTsvܜ%P6D>'k_<'񩪤 캞C!//qIoyyn՗ыUvC2ĕ1W>As<[L\NB4e P~q~Wͧ?Jz~^3EÇ+ח/銦ߖJw/^%7龗O\Kq9)OS=+_7a_ѿӵ?R+5`^?kytjb>Xⷙt4?* 57=ByG!dG.M8Fy͏~`]*}FYm"1JQحw@.KP}J O+1hi# 5Ik^ _Zͺii 5<^j/xn>Ŋ^ ]Ҽk#KoZAe}X# hM+Nءr~Ih6<`$V/Je,H5BXq|yzQ.o_LvDJq/|o yRʶ6*@nNK("~dx Ex[jK|ԊxRdMW]O!ߐ8$Ѽr.=X_V~LP~Zy+Uߒt'}{߬_W>q-|=N?h*1W*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*U_#!nYԴ8.Q2>;ҘثWb]v*UثWb]v*UثWb]v*UثWb]v*UثW*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb]v*UثWb_percona-toolkit-3.1/config/sphinx-build/percona-theme/000775 001750 001750 00000000000 13535723557 024256 5ustar00jenkinsjenkins000000 000000 
percona-toolkit-3.1/config/sphinx-build/percona-theme/layout.html000664 001750 001750 00000054141 13535723557 026466 0ustar00jenkinsjenkins000000 000000 {# basic/layout.html ~~~~~~~~~~~~~~~~~ Master layout template for Sphinx themes. :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {%- block doctype -%} {%- endblock %} {%- set reldelim1 = reldelim1 is not defined and ' »' or reldelim1 %} {%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %} {%- set render_sidebar = (not embedded) and (not theme_nosidebar|tobool) and (sidebars != []) %} {%- set url_root = pathto('', 1) %} {# XXX necessary? #} {%- if url_root == '#' %}{% set url_root = '' %}{% endif %} {%- if not embedded and docstitle %} {%- set titlesuffix = " — "|safe + docstitle|e %} {%- else %} {%- set titlesuffix = "" %} {%- endif %} {%- macro relbar() %} {%- endmacro %} {%- macro sidebar() %} {%- if render_sidebar %}
Percona Toolkit
{%- if sidebars != None %} {#- new style sidebar: explicitly include/exclude templates #} {%- for sidebartemplate in sidebars %} {%- include sidebartemplate %} {%- endfor %} {%- else %} {#- old style sidebars: using blocks -- should be deprecated #} {%- block serverseries %} {%- endblock %} {%- block sidebartoc %} {%- include "localtoc.html" %} {%- endblock %} {%- block sidebarrel %} {%- include "relations.html" %} {%- endblock %} {%- block sidebarsourcelink %} {%- include "sourcelink.html" %} {%- endblock %} {%- if customsidebar %} {%- include customsidebar %} {%- endif %} {%- block sidebarsearch %} {%- include "searchbox.html" %} {%- endblock %} {%- endif %}
{%- endif %} {%- endmacro %} {%- macro script() %} {%- for scriptfile in script_files %} {%- endfor %} {%- endmacro %} {%- macro css() %} {%- for cssfile in css_files %} {%- endfor %} {%- endmacro %} {{ metatags }} {%- block htmltitle %} {{ title|striptags|e }}{{ titlesuffix }} {%- endblock %} {{ css() }} {%- if not embedded %} {{ script() }} {%- if use_opensearch %} {%- endif %} {%- endif %} {%- block linktags %} {%- if hasdoc('about') %} {%- endif %} {%- if hasdoc('genindex') %} {%- endif %} {%- if hasdoc('search') %} {%- endif %} {%- if hasdoc('copyright') %} {%- endif %} {%- if parents %} {%- endif %} {%- if next %} {%- endif %} {%- if prev %} {%- endif %} {%- endblock %} {%- block extrahead %} {% endblock %}
{%- block header %} {% endblock %} {%- block content %}
{%- block sidebar1 %} {# possible location for sidebar #} {% endblock %}
{%- block document %}
{%- if render_sidebar %}
{%- endif %} {%- block relbar1 %}{{ relbar() }}{% endblock %} {%- block relbar2 %}{{ relbar() }} {%- if render_sidebar %}
{%- endif %}
{%- endblock %} {%- block sidebar2 %}{{ sidebar() }}{% endblock %}
{%- if last_updated %} {% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %} {%- endif %}
{%- if show_copyright %} {%- if hasdoc('copyright') %} {% trans path=pathto('copyright'), copyright=copyright|e %}© Copyright {{ copyright }}.{% endtrans %} {%- else %} {% trans copyright=copyright|e %}© Copyright {{ copyright }}.{% endtrans %} {%- endif %} {%- endif %}
Except where otherwise noted, this documentation is licensed under the following license:
CC Attribution-ShareAlike 2.0 Generic
{%- if show_sphinx %} {% trans sphinx_version=sphinx_version|e %}Created using Sphinx {{ sphinx_version }}.{% endtrans %} {%- endif %}
{%- endblock %}
{%- block footer %}
This documentation is developed in Launchpad as part of the Percona Toolkit source code.
If you spot inaccuracies or errors, don't understand something, or think something is missing or should be improved, please file a bug.
{% endblock %} {%- endblock %}
{% if theme_collapsiblesidebar|tobool %} {% set script_files = script_files + ['_static/sidebar.js'] %} {% endif %}
percona-toolkit-3.1/config/sphinx-build/percona-theme/searchbox.html000664 001750 001750 00000001334 13535723557 027123 0ustar00jenkinsjenkins000000 000000 {# basic/searchbox.html ~~~~~~~~~~~~~~~~~~~~ Sphinx sidebar template: quick search box. :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. #} {%- if pagename != "search" %} {%- endif %}
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/000775 001750 001750 00000000000 13535723557 025545 5ustar00jenkinsjenkins000000 000000 
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/boxes_header_bulletpoint.png000664 001750 001750 00000005435 13535723557 033333 0ustar00jenkinsjenkins000000 000000 [binary PNG image data omitted]
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/button-download-icon.png000664 001750 001750 00000005676 13535723557 032327 0ustar00jenkinsjenkins000000 000000 [binary PNG image data omitted]
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/content_list_bulletpoint.png000664 001750 001750 00000005406 13535723557 033406 0ustar00jenkinsjenkins000000 000000 [binary PNG image data omitted]
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/default.css_t000664 001750 001750 00000016440 13535723557 030233 0ustar00jenkinsjenkins000000 000000 /* * default.css_t * ~~~~~~~~~~~~~ * * Sphinx stylesheet -- default theme. * * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-size: 12px; line-height: 16px; font-family: Arial; background-color: #fff; color: #333333; line-height: 20px; margin: 0; padding: 0; } div.document { background-color: #fff; margin: 0 auto; overflow: hidden; padding: 30px 0; width: 1000px; } div.documentwrapper { /* float: left; */ /* width: 100%; */ } div.bodywrapper { margin: 10 0 0 0px; padding-right: 12px; width: 700px; float:right; } div.body { background-color: #fff; color: #333333; padding: 0 0 20px; width: 712px; } {%- if theme_rightsidebar|tobool %} div.bodywrapper { margin: 0 {{ theme_sidebarwidth }}px 0 0; } {%- endif %} div.footer { /* color: {{ theme_footertextcolor }}; */ /* width: 100%; */ /* padding: 9px 0 9px 0; */ /* text-align: center; */ /* font-size: 75%; */ /*font: 80% "Lucida Grande",Verdana,Lucida,Helvetica,Arial,sans-serif; */ /*background: none repeat scroll 0 0 #333333;*/ /*border-top: 2px solid #D95200;*/ overflow: hidden; } div.footer a { color: {{ theme_footertextcolor }}; text-decoration: underline; } div.related { background-color: #ccc; line-height: 30px; color: #333; background: none repeat scroll 0 0 #F8F8F8; border: 1px solid #E0E0E0; clear: both; padding: 10px; width: 690px; margin-bottom: 5px; } div.related a { color: #999; } div.sphinxsidebar { margin-left: 0px; margin-right: 20px; color: #333333; font-family: Arial,Helvetica,sans-serif; width: 250px; {%- if theme_stickysidebar|tobool %} top: 30px; bottom: 0; margin: 0; position: fixed; overflow: auto; height: auto; {%- endif %} {%- if theme_rightsidebar|tobool %} float: right; {%- if theme_stickysidebar|tobool %} right: 0; {%- endif %} {%- endif %} } {%- if theme_stickysidebar|tobool %} /* this is nice, but it leads to hidden headings when jumping to an anchor */ /* div.related { position: fixed; } div.documentwrapper { margin-top: 30px; } */ {%- endif %} div.sphinxsidebar h3 { /* font-family: {{ theme_headfont }}; */ color: #999; font-size: 18px; line-height: 15px; font-weight: 400; margin-left: 5px; padding: 0; } div.sphinxsidebar h3 a { color: #333; } div.sphinxsidebar a:hover { /* color: #333; */ } div.sphinxsidebar h4 { /* font-family: {{ theme_headfont }}; */ color: #666; font-size: 18px; font-weight: normal; margin: 5px 0 0 5px; padding: 0; } div.sphinxsidebar p { color: #333;
margin-bottom: 5px; margin-left: 5px; } div.sphinxsidebar p.topless { margin: 5px 10px 10px 10px; } div.sphinxsidebar ul { margin: 10px; padding-left: 5px; color: #333; } div.sphinxsidebar a { color: #D12907; } div.sphinxsidebar input { border: 1px solid #999; font-family: sans-serif; font-size: 1em; } {% if theme_collapsiblesidebar|tobool %} /* for collapsible sidebar */ div#sidebarbutton { background-color: {{ theme_sidebarbtncolor }}; } {% endif %} /* -- hyperlink styles ------------------------------------------------------ */ a { color: #D12907; text-decoration: underline; border: medium none; cursor: pointer; } a:visited { color: #D12907; text-decoration: underline; } a:hover { color: #D12907; text-decoration: underline; } {% if theme_externalrefs|tobool %} a.external { text-decoration: none; border-bottom: 1px dashed {{ theme_linkcolor }}; } a.external:hover { text-decoration: none; border-bottom: none; } a.external:visited { text-decoration: none; border-bottom: 1px dashed {{ theme_visitedlinkcolor }}; } {% endif %} /* -- body styles ----------------------------------------------------------- */ a.headerlink { color: {{ theme_headlinkcolor }}; font-size: 0.8em; padding: 0 4px 0 4px; text-decoration: none; } a.headerlink:hover { background-color: {{ theme_headlinkcolor }}; color: white; } div.body dd, div.body li { text-align: justify; } div.body dt { list-style-type: square; /* margin: 8px 0 8px 30px; */ padding: 0 4px 0 5px; } div.body p { text-align: justify; /* line-height: 130%;*/ margin: 10px 0; } /* div.body ul, */ div.body li { list-style-type: square; margin: 8px 0 8px 30px; padding: 0 4px 0 5px; } .reference em { font-style: normal; } .std-term { font-style: normal; font-weight: 400; color: #FF7400 } div.admonition p.admonition-title + p { display: inline; } div.admonition p { margin-bottom: 5px; } div.admonition pre { margin-bottom: 5px; } div.admonition ul, div.admonition ol { margin-bottom: 5px; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } div.warning { background-color: #ffe4e4; border: 1px solid #f66; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre { padding: 10px; background-color: #F0E6D9/* {{ theme_codebgcolor }} */; color: {{ theme_codetextcolor }}; line-height: 120%; border: 1px dashed #ED9821; font-size: 80%; overflow: auto; /* border-left: none; */ /* border-right: none; */ } tt { /* background-color: #ecf0f3; padding: 0 1px 0 1px; font-size: 0.95em;*/ font-weight: 400; } div.body td { text-align: none; } table.docutils th { background-color: #DEE7EC; border: 1px solid #8CACBB; padding: 3px; text-align: center; font-style: normal; font-weight: 400; } table.docutils td { border: 1px solid #8CACBB; padding: 3px; line-height: 16px; vertical-align: middle; } .warning tt { background: #efc2c2; } .note tt { background: #d6d6d6; } .viewcode-back { font-family: {{ theme_bodyfont }}; } div.viewcode-block:target { background-color: #f4debf; border-top: 1px solid #ac9; border-bottom: 1px solid #ac9; } .file-bugs { background: none repeat scroll 0 0 #E8E8E8; border: 1px solid #E0E0E0; clear: both; color: #333333; padding: 10px; text-align: center; font-size: 90%; } div.footer .footer { margin: 0 auto; overflow: hidden; padding: 10px 0; width: 960px; border-top: none; } div.footer .footer .logo { float: left; overflow: hidden; padding: 17px 0 0; } div.footer .footer .text { color: #E0E0E0; float: right; font-family: 
Arial,Helvetica,sans-serif; font-size: 11px; line-height: 15px; overflow: hidden; text-align: right; } div.license { font-size: 80%; /* padding: 0.5em; */ text-align: center; margin: 13px 0 -15px; } #sidenavi.noborder { border-right: medium none; border-top: medium none; margin: 0; } #sidenavi { color: #333333; /* float: left; */ font-family: Arial,Helvetica,sans-serif; font-size: 18px; line-height: 25px; overflow: hidden; padding: 10px 0 8px 0px; width: 255px; } percona-toolkit-3.1/config/sphinx-build/percona-theme/static/email-small.png000664 001750 001750 00000001667 13535723557 030462 0ustar00jenkinsjenkins000000 000000 PNG  IHDRPLTEhilwmxՌ {օ }օ ~{Ӂ~ׇ |؁ـقللن ن و و ڌ۝۞܁݆݊ ݋ ގ ސ     #(,# % (..  !##%' $'(+,-.13$%(+.566<+@I )/34=@/058==>AA9'RG)!2;R|tRNS[\\]^^TrIDATc``bdy[2pNӂ̞o? l oX< 4L(6dhWWT&0H7N5II] REE]Nm phYT$ _dJ^@aIhϵw, 2Gyvx2T%EEGG'V⁠SvA^[eJhllENڼ^g޼ 1nZ6sg$%'gWTg'''͘kZ# n9/+" Dd͋TNaд5-$4 Le)>?qcK x\|6!^ɹJIENDB`percona-toolkit-3.1/config/sphinx-build/percona-theme/static/jquery.min.js000664 001750 001750 00000214756 13535723557 030223 0ustar00jenkinsjenkins000000 000000 /*! * jQuery JavaScript Library v1.4.2 * http://jquery.com/ * * Copyright 2010, John Resig * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * Includes Sizzle.js * http://sizzlejs.com/ * Copyright 2010, The Dojo Foundation * Released under the MIT, BSD, and GPL Licenses. * * Date: Sat Feb 13 22:33:48 2010 -0500 */ (function(A,w){function ma(){if(!c.isReady){try{s.documentElement.doScroll("left")}catch(a){setTimeout(ma,1);return}c.ready()}}function Qa(a,b){b.src?c.ajax({url:b.src,async:false,dataType:"script"}):c.globalEval(b.text||b.textContent||b.innerHTML||"");b.parentNode&&b.parentNode.removeChild(b)}function X(a,b,d,f,e,j){var i=a.length;if(typeof b==="object"){for(var o in b)X(a,o,b[o],f,e,d);return a}if(d!==w){f=!j&&f&&c.isFunction(d);for(o=0;o)[^>]*$|^#([\w-]+)$/,Ua=/^.[^:#\[\.,]*$/,Va=/\S/, Wa=/^(\s|\u00A0)+|(\s|\u00A0)+$/g,Xa=/^<(\w+)\s*\/?>(?:<\/\1>)?$/,P=navigator.userAgent,xa=false,Q=[],L,$=Object.prototype.toString,aa=Object.prototype.hasOwnProperty,ba=Array.prototype.push,R=Array.prototype.slice,ya=Array.prototype.indexOf;c.fn=c.prototype={init:function(a,b){var d,f;if(!a)return this;if(a.nodeType){this.context=this[0]=a;this.length=1;return this}if(a==="body"&&!b){this.context=s;this[0]=s.body;this.selector="body";this.length=1;return this}if(typeof a==="string")if((d=Ta.exec(a))&& (d[1]||!b))if(d[1]){f=b?b.ownerDocument||b:s;if(a=Xa.exec(a))if(c.isPlainObject(b)){a=[s.createElement(a[1])];c.fn.attr.call(a,b,true)}else a=[f.createElement(a[1])];else{a=sa([d[1]],[f]);a=(a.cacheable?a.fragment.cloneNode(true):a.fragment).childNodes}return c.merge(this,a)}else{if(b=s.getElementById(d[2])){if(b.id!==d[2])return T.find(a);this.length=1;this[0]=b}this.context=s;this.selector=a;return this}else if(!b&&/^\w+$/.test(a)){this.selector=a;this.context=s;a=s.getElementsByTagName(a);return c.merge(this, a)}else return!b||b.jquery?(b||T).find(a):c(b).find(a);else if(c.isFunction(a))return T.ready(a);if(a.selector!==w){this.selector=a.selector;this.context=a.context}return c.makeArray(a,this)},selector:"",jquery:"1.4.2",length:0,size:function(){return this.length},toArray:function(){return R.call(this,0)},get:function(a){return a==null?this.toArray():a<0?this.slice(a)[0]:this[a]},pushStack:function(a,b,d){var f=c();c.isArray(a)?ba.apply(f,a):c.merge(f,a);f.prevObject=this;f.context=this.context;if(b=== "find")f.selector=this.selector+(this.selector?" 
":"")+d;else if(b)f.selector=this.selector+"."+b+"("+d+")";return f},each:function(a,b){return c.each(this,a,b)},ready:function(a){c.bindReady();if(c.isReady)a.call(s,c);else Q&&Q.push(a);return this},eq:function(a){return a===-1?this.slice(a):this.slice(a,+a+1)},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},slice:function(){return this.pushStack(R.apply(this,arguments),"slice",R.call(arguments).join(","))},map:function(a){return this.pushStack(c.map(this, function(b,d){return a.call(b,d,b)}))},end:function(){return this.prevObject||c(null)},push:ba,sort:[].sort,splice:[].splice};c.fn.init.prototype=c.fn;c.extend=c.fn.extend=function(){var a=arguments[0]||{},b=1,d=arguments.length,f=false,e,j,i,o;if(typeof a==="boolean"){f=a;a=arguments[1]||{};b=2}if(typeof a!=="object"&&!c.isFunction(a))a={};if(d===b){a=this;--b}for(;b
a"; var e=d.getElementsByTagName("*"),j=d.getElementsByTagName("a")[0];if(!(!e||!e.length||!j)){c.support={leadingWhitespace:d.firstChild.nodeType===3,tbody:!d.getElementsByTagName("tbody").length,htmlSerialize:!!d.getElementsByTagName("link").length,style:/red/.test(j.getAttribute("style")),hrefNormalized:j.getAttribute("href")==="/a",opacity:/^0.55$/.test(j.style.opacity),cssFloat:!!j.style.cssFloat,checkOn:d.getElementsByTagName("input")[0].value==="on",optSelected:s.createElement("select").appendChild(s.createElement("option")).selected, parentNode:d.removeChild(d.appendChild(s.createElement("div"))).parentNode===null,deleteExpando:true,checkClone:false,scriptEval:false,noCloneEvent:true,boxModel:null};b.type="text/javascript";try{b.appendChild(s.createTextNode("window."+f+"=1;"))}catch(i){}a.insertBefore(b,a.firstChild);if(A[f]){c.support.scriptEval=true;delete A[f]}try{delete b.test}catch(o){c.support.deleteExpando=false}a.removeChild(b);if(d.attachEvent&&d.fireEvent){d.attachEvent("onclick",function k(){c.support.noCloneEvent= false;d.detachEvent("onclick",k)});d.cloneNode(true).fireEvent("onclick")}d=s.createElement("div");d.innerHTML="";a=s.createDocumentFragment();a.appendChild(d.firstChild);c.support.checkClone=a.cloneNode(true).cloneNode(true).lastChild.checked;c(function(){var k=s.createElement("div");k.style.width=k.style.paddingLeft="1px";s.body.appendChild(k);c.boxModel=c.support.boxModel=k.offsetWidth===2;s.body.removeChild(k).style.display="none"});a=function(k){var n= s.createElement("div");k="on"+k;var r=k in n;if(!r){n.setAttribute(k,"return;");r=typeof n[k]==="function"}return r};c.support.submitBubbles=a("submit");c.support.changeBubbles=a("change");a=b=d=e=j=null}})();c.props={"for":"htmlFor","class":"className",readonly:"readOnly",maxlength:"maxLength",cellspacing:"cellSpacing",rowspan:"rowSpan",colspan:"colSpan",tabindex:"tabIndex",usemap:"useMap",frameborder:"frameBorder"};var G="jQuery"+J(),Ya=0,za={};c.extend({cache:{},expando:G,noData:{embed:true,object:true, applet:true},data:function(a,b,d){if(!(a.nodeName&&c.noData[a.nodeName.toLowerCase()])){a=a==A?za:a;var f=a[G],e=c.cache;if(!f&&typeof b==="string"&&d===w)return null;f||(f=++Ya);if(typeof b==="object"){a[G]=f;e[f]=c.extend(true,{},b)}else if(!e[f]){a[G]=f;e[f]={}}a=e[f];if(d!==w)a[b]=d;return typeof b==="string"?a[b]:a}},removeData:function(a,b){if(!(a.nodeName&&c.noData[a.nodeName.toLowerCase()])){a=a==A?za:a;var d=a[G],f=c.cache,e=f[d];if(b){if(e){delete e[b];c.isEmptyObject(e)&&c.removeData(a)}}else{if(c.support.deleteExpando)delete a[c.expando]; else a.removeAttribute&&a.removeAttribute(c.expando);delete f[d]}}}});c.fn.extend({data:function(a,b){if(typeof a==="undefined"&&this.length)return c.data(this[0]);else if(typeof a==="object")return this.each(function(){c.data(this,a)});var d=a.split(".");d[1]=d[1]?"."+d[1]:"";if(b===w){var f=this.triggerHandler("getData"+d[1]+"!",[d[0]]);if(f===w&&this.length)f=c.data(this[0],a);return f===w&&d[1]?this.data(d[0]):f}else return this.trigger("setData"+d[1]+"!",[d[0],b]).each(function(){c.data(this, a,b)})},removeData:function(a){return this.each(function(){c.removeData(this,a)})}});c.extend({queue:function(a,b,d){if(a){b=(b||"fx")+"queue";var f=c.data(a,b);if(!d)return f||[];if(!f||c.isArray(d))f=c.data(a,b,c.makeArray(d));else f.push(d);return f}},dequeue:function(a,b){b=b||"fx";var 
d=c.queue(a,b),f=d.shift();if(f==="inprogress")f=d.shift();if(f){b==="fx"&&d.unshift("inprogress");f.call(a,function(){c.dequeue(a,b)})}}});c.fn.extend({queue:function(a,b){if(typeof a!=="string"){b=a;a="fx"}if(b=== w)return c.queue(this[0],a);return this.each(function(){var d=c.queue(this,a,b);a==="fx"&&d[0]!=="inprogress"&&c.dequeue(this,a)})},dequeue:function(a){return this.each(function(){c.dequeue(this,a)})},delay:function(a,b){a=c.fx?c.fx.speeds[a]||a:a;b=b||"fx";return this.queue(b,function(){var d=this;setTimeout(function(){c.dequeue(d,b)},a)})},clearQueue:function(a){return this.queue(a||"fx",[])}});var Aa=/[\n\t]/g,ca=/\s+/,Za=/\r/g,$a=/href|src|style/,ab=/(button|input)/i,bb=/(button|input|object|select|textarea)/i, cb=/^(a|area)$/i,Ba=/radio|checkbox/;c.fn.extend({attr:function(a,b){return X(this,a,b,true,c.attr)},removeAttr:function(a){return this.each(function(){c.attr(this,a,"");this.nodeType===1&&this.removeAttribute(a)})},addClass:function(a){if(c.isFunction(a))return this.each(function(n){var r=c(this);r.addClass(a.call(this,n,r.attr("class")))});if(a&&typeof a==="string")for(var b=(a||"").split(ca),d=0,f=this.length;d-1)return true;return false},val:function(a){if(a===w){var b=this[0];if(b){if(c.nodeName(b,"option"))return(b.attributes.value||{}).specified?b.value:b.text;if(c.nodeName(b,"select")){var d=b.selectedIndex,f=[],e=b.options;b=b.type==="select-one";if(d<0)return null;var j=b?d:0;for(d=b?d+1:e.length;j=0;else if(c.nodeName(this,"select")){var u=c.makeArray(r);c("option",this).each(function(){this.selected= c.inArray(c(this).val(),u)>=0});if(!u.length)this.selectedIndex=-1}else this.value=r}})}});c.extend({attrFn:{val:true,css:true,html:true,text:true,data:true,width:true,height:true,offset:true},attr:function(a,b,d,f){if(!a||a.nodeType===3||a.nodeType===8)return w;if(f&&b in c.attrFn)return c(a)[b](d);f=a.nodeType!==1||!c.isXMLDoc(a);var e=d!==w;b=f&&c.props[b]||b;if(a.nodeType===1){var j=$a.test(b);if(b in a&&f&&!j){if(e){b==="type"&&ab.test(a.nodeName)&&a.parentNode&&c.error("type property can't be changed"); a[b]=d}if(c.nodeName(a,"form")&&a.getAttributeNode(b))return a.getAttributeNode(b).nodeValue;if(b==="tabIndex")return(b=a.getAttributeNode("tabIndex"))&&b.specified?b.value:bb.test(a.nodeName)||cb.test(a.nodeName)&&a.href?0:w;return a[b]}if(!c.support.style&&f&&b==="style"){if(e)a.style.cssText=""+d;return a.style.cssText}e&&a.setAttribute(b,""+d);a=!c.support.hrefNormalized&&f&&j?a.getAttribute(b,2):a.getAttribute(b);return a===null?w:a}return c.style(a,b,d)}});var O=/\.(.*)$/,db=function(a){return a.replace(/[^\w\s\.\|`]/g, function(b){return"\\"+b})};c.event={add:function(a,b,d,f){if(!(a.nodeType===3||a.nodeType===8)){if(a.setInterval&&a!==A&&!a.frameElement)a=A;var e,j;if(d.handler){e=d;d=e.handler}if(!d.guid)d.guid=c.guid++;if(j=c.data(a)){var i=j.events=j.events||{},o=j.handle;if(!o)j.handle=o=function(){return typeof c!=="undefined"&&!c.event.triggered?c.event.handle.apply(o.elem,arguments):w};o.elem=a;b=b.split(" ");for(var k,n=0,r;k=b[n++];){j=e?c.extend({},e):{handler:d,data:f};if(k.indexOf(".")>-1){r=k.split("."); k=r.shift();j.namespace=r.slice(0).sort().join(".")}else{r=[];j.namespace=""}j.type=k;j.guid=d.guid;var u=i[k],z=c.event.special[k]||{};if(!u){u=i[k]=[];if(!z.setup||z.setup.call(a,f,r,o)===false)if(a.addEventListener)a.addEventListener(k,o,false);else 
a.attachEvent&&a.attachEvent("on"+k,o)}if(z.add){z.add.call(a,j);if(!j.handler.guid)j.handler.guid=d.guid}u.push(j);c.event.global[k]=true}a=null}}},global:{},remove:function(a,b,d,f){if(!(a.nodeType===3||a.nodeType===8)){var e,j=0,i,o,k,n,r,u,z=c.data(a), C=z&&z.events;if(z&&C){if(b&&b.type){d=b.handler;b=b.type}if(!b||typeof b==="string"&&b.charAt(0)==="."){b=b||"";for(e in C)c.event.remove(a,e+b)}else{for(b=b.split(" ");e=b[j++];){n=e;i=e.indexOf(".")<0;o=[];if(!i){o=e.split(".");e=o.shift();k=new RegExp("(^|\\.)"+c.map(o.slice(0).sort(),db).join("\\.(?:.*\\.)?")+"(\\.|$)")}if(r=C[e])if(d){n=c.event.special[e]||{};for(B=f||0;B=0){a.type= e=e.slice(0,-1);a.exclusive=true}if(!d){a.stopPropagation();c.event.global[e]&&c.each(c.cache,function(){this.events&&this.events[e]&&c.event.trigger(a,b,this.handle.elem)})}if(!d||d.nodeType===3||d.nodeType===8)return w;a.result=w;a.target=d;b=c.makeArray(b);b.unshift(a)}a.currentTarget=d;(f=c.data(d,"handle"))&&f.apply(d,b);f=d.parentNode||d.ownerDocument;try{if(!(d&&d.nodeName&&c.noData[d.nodeName.toLowerCase()]))if(d["on"+e]&&d["on"+e].apply(d,b)===false)a.result=false}catch(j){}if(!a.isPropagationStopped()&& f)c.event.trigger(a,b,f,true);else if(!a.isDefaultPrevented()){f=a.target;var i,o=c.nodeName(f,"a")&&e==="click",k=c.event.special[e]||{};if((!k._default||k._default.call(d,a)===false)&&!o&&!(f&&f.nodeName&&c.noData[f.nodeName.toLowerCase()])){try{if(f[e]){if(i=f["on"+e])f["on"+e]=null;c.event.triggered=true;f[e]()}}catch(n){}if(i)f["on"+e]=i;c.event.triggered=false}}},handle:function(a){var b,d,f,e;a=arguments[0]=c.event.fix(a||A.event);a.currentTarget=this;b=a.type.indexOf(".")<0&&!a.exclusive; if(!b){d=a.type.split(".");a.type=d.shift();f=new RegExp("(^|\\.)"+d.slice(0).sort().join("\\.(?:.*\\.)?")+"(\\.|$)")}e=c.data(this,"events");d=e[a.type];if(e&&d){d=d.slice(0);e=0;for(var j=d.length;e-1?c.map(a.options,function(f){return f.selected}).join("-"):"";else if(a.nodeName.toLowerCase()==="select")d=a.selectedIndex;return d},fa=function(a,b){var d=a.target,f,e;if(!(!da.test(d.nodeName)||d.readOnly)){f=c.data(d,"_change_data");e=Fa(d);if(a.type!=="focusout"||d.type!=="radio")c.data(d,"_change_data", e);if(!(f===w||e===f))if(f!=null||e){a.type="change";return c.event.trigger(a,b,d)}}};c.event.special.change={filters:{focusout:fa,click:function(a){var b=a.target,d=b.type;if(d==="radio"||d==="checkbox"||b.nodeName.toLowerCase()==="select")return fa.call(this,a)},keydown:function(a){var b=a.target,d=b.type;if(a.keyCode===13&&b.nodeName.toLowerCase()!=="textarea"||a.keyCode===32&&(d==="checkbox"||d==="radio")||d==="select-multiple")return fa.call(this,a)},beforeactivate:function(a){a=a.target;c.data(a, "_change_data",Fa(a))}},setup:function(){if(this.type==="file")return false;for(var a in ea)c.event.add(this,a+".specialChange",ea[a]);return da.test(this.nodeName)},teardown:function(){c.event.remove(this,".specialChange");return da.test(this.nodeName)}};ea=c.event.special.change.filters}s.addEventListener&&c.each({focus:"focusin",blur:"focusout"},function(a,b){function d(f){f=c.event.fix(f);f.type=b;return c.event.handle.call(this,f)}c.event.special[b]={setup:function(){this.addEventListener(a, d,true)},teardown:function(){this.removeEventListener(a,d,true)}}});c.each(["bind","one"],function(a,b){c.fn[b]=function(d,f,e){if(typeof d==="object"){for(var j in d)this[b](j,f,d[j],e);return this}if(c.isFunction(f)){e=f;f=w}var i=b==="one"?c.proxy(e,function(k){c(this).unbind(k,i);return 
e.apply(this,arguments)}):e;if(d==="unload"&&b!=="one")this.one(d,f,e);else{j=0;for(var o=this.length;j0){y=t;break}}t=t[g]}m[q]=y}}}var f=/((?:\((?:\([^()]+\)|[^()]+)+\)|\[(?:\[[^[\]]*\]|['"][^'"]*['"]|[^[\]'"]+)+\]|\\.|[^ >+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g, e=0,j=Object.prototype.toString,i=false,o=true;[0,0].sort(function(){o=false;return 0});var k=function(g,h,l,m){l=l||[];var q=h=h||s;if(h.nodeType!==1&&h.nodeType!==9)return[];if(!g||typeof g!=="string")return l;for(var p=[],v,t,y,S,H=true,M=x(h),I=g;(f.exec(""),v=f.exec(I))!==null;){I=v[3];p.push(v[1]);if(v[2]){S=v[3];break}}if(p.length>1&&r.exec(g))if(p.length===2&&n.relative[p[0]])t=ga(p[0]+p[1],h);else for(t=n.relative[p[0]]?[h]:k(p.shift(),h);p.length;){g=p.shift();if(n.relative[g])g+=p.shift(); t=ga(g,t)}else{if(!m&&p.length>1&&h.nodeType===9&&!M&&n.match.ID.test(p[0])&&!n.match.ID.test(p[p.length-1])){v=k.find(p.shift(),h,M);h=v.expr?k.filter(v.expr,v.set)[0]:v.set[0]}if(h){v=m?{expr:p.pop(),set:z(m)}:k.find(p.pop(),p.length===1&&(p[0]==="~"||p[0]==="+")&&h.parentNode?h.parentNode:h,M);t=v.expr?k.filter(v.expr,v.set):v.set;if(p.length>0)y=z(t);else H=false;for(;p.length;){var D=p.pop();v=D;if(n.relative[D])v=p.pop();else D="";if(v==null)v=h;n.relative[D](y,v,M)}}else y=[]}y||(y=t);y||k.error(D|| g);if(j.call(y)==="[object Array]")if(H)if(h&&h.nodeType===1)for(g=0;y[g]!=null;g++){if(y[g]&&(y[g]===true||y[g].nodeType===1&&E(h,y[g])))l.push(t[g])}else for(g=0;y[g]!=null;g++)y[g]&&y[g].nodeType===1&&l.push(t[g]);else l.push.apply(l,y);else z(y,l);if(S){k(S,q,l,m);k.uniqueSort(l)}return l};k.uniqueSort=function(g){if(B){i=o;g.sort(B);if(i)for(var h=1;h":function(g,h){var l=typeof h==="string";if(l&&!/\W/.test(h)){h=h.toLowerCase();for(var m=0,q=g.length;m=0))l||m.push(v);else if(l)h[p]=false;return false},ID:function(g){return g[1].replace(/\\/g,"")},TAG:function(g){return g[1].toLowerCase()}, CHILD:function(g){if(g[1]==="nth"){var h=/(-?)(\d*)n((?:\+|-)?\d*)/.exec(g[2]==="even"&&"2n"||g[2]==="odd"&&"2n+1"||!/\D/.test(g[2])&&"0n+"+g[2]||g[2]);g[2]=h[1]+(h[2]||1)-0;g[3]=h[3]-0}g[0]=e++;return g},ATTR:function(g,h,l,m,q,p){h=g[1].replace(/\\/g,"");if(!p&&n.attrMap[h])g[1]=n.attrMap[h];if(g[2]==="~=")g[4]=" "+g[4]+" ";return g},PSEUDO:function(g,h,l,m,q){if(g[1]==="not")if((f.exec(g[3])||"").length>1||/^\w/.test(g[3]))g[3]=k(g[3],null,null,h);else{g=k.filter(g[3],h,l,true^q);l||m.push.apply(m, g);return false}else if(n.match.POS.test(g[0])||n.match.CHILD.test(g[0]))return true;return g},POS:function(g){g.unshift(true);return g}},filters:{enabled:function(g){return g.disabled===false&&g.type!=="hidden"},disabled:function(g){return g.disabled===true},checked:function(g){return g.checked===true},selected:function(g){return g.selected===true},parent:function(g){return!!g.firstChild},empty:function(g){return!g.firstChild},has:function(g,h,l){return!!k(l[3],g).length},header:function(g){return/h\d/i.test(g.nodeName)}, text:function(g){return"text"===g.type},radio:function(g){return"radio"===g.type},checkbox:function(g){return"checkbox"===g.type},file:function(g){return"file"===g.type},password:function(g){return"password"===g.type},submit:function(g){return"submit"===g.type},image:function(g){return"image"===g.type},reset:function(g){return"reset"===g.type},button:function(g){return"button"===g.type||g.nodeName.toLowerCase()==="button"},input:function(g){return/input|select|textarea|button/i.test(g.nodeName)}}, setFilters:{first:function(g,h){return h===0},last:function(g,h,l,m){return h===m.length-1},even:function(g,h){return 
h%2===0},odd:function(g,h){return h%2===1},lt:function(g,h,l){return hl[3]-0},nth:function(g,h,l){return l[3]-0===h},eq:function(g,h,l){return l[3]-0===h}},filter:{PSEUDO:function(g,h,l,m){var q=h[1],p=n.filters[q];if(p)return p(g,l,h,m);else if(q==="contains")return(g.textContent||g.innerText||a([g])||"").indexOf(h[3])>=0;else if(q==="not"){h= h[3];l=0;for(m=h.length;l=0}},ID:function(g,h){return g.nodeType===1&&g.getAttribute("id")===h},TAG:function(g,h){return h==="*"&&g.nodeType===1||g.nodeName.toLowerCase()===h},CLASS:function(g,h){return(" "+(g.className||g.getAttribute("class"))+" ").indexOf(h)>-1},ATTR:function(g,h){var l=h[1];g=n.attrHandle[l]?n.attrHandle[l](g):g[l]!=null?g[l]:g.getAttribute(l);l=g+"";var m=h[2];h=h[4];return g==null?m==="!=":m=== "="?l===h:m==="*="?l.indexOf(h)>=0:m==="~="?(" "+l+" ").indexOf(h)>=0:!h?l&&g!==false:m==="!="?l!==h:m==="^="?l.indexOf(h)===0:m==="$="?l.substr(l.length-h.length)===h:m==="|="?l===h||l.substr(0,h.length+1)===h+"-":false},POS:function(g,h,l,m){var q=n.setFilters[h[2]];if(q)return q(g,l,h,m)}}},r=n.match.POS;for(var u in n.match){n.match[u]=new RegExp(n.match[u].source+/(?![^\[]*\])(?![^\(]*\))/.source);n.leftMatch[u]=new RegExp(/(^(?:.|\r|\n)*?)/.source+n.match[u].source.replace(/\\(\d+)/g,function(g, h){return"\\"+(h-0+1)}))}var z=function(g,h){g=Array.prototype.slice.call(g,0);if(h){h.push.apply(h,g);return h}return g};try{Array.prototype.slice.call(s.documentElement.childNodes,0)}catch(C){z=function(g,h){h=h||[];if(j.call(g)==="[object Array]")Array.prototype.push.apply(h,g);else if(typeof g.length==="number")for(var l=0,m=g.length;l";var l=s.documentElement;l.insertBefore(g,l.firstChild);if(s.getElementById(h)){n.find.ID=function(m,q,p){if(typeof q.getElementById!=="undefined"&&!p)return(q=q.getElementById(m[1]))?q.id===m[1]||typeof q.getAttributeNode!=="undefined"&& q.getAttributeNode("id").nodeValue===m[1]?[q]:w:[]};n.filter.ID=function(m,q){var p=typeof m.getAttributeNode!=="undefined"&&m.getAttributeNode("id");return m.nodeType===1&&p&&p.nodeValue===q}}l.removeChild(g);l=g=null})();(function(){var g=s.createElement("div");g.appendChild(s.createComment(""));if(g.getElementsByTagName("*").length>0)n.find.TAG=function(h,l){l=l.getElementsByTagName(h[1]);if(h[1]==="*"){h=[];for(var m=0;l[m];m++)l[m].nodeType===1&&h.push(l[m]);l=h}return l};g.innerHTML=""; if(g.firstChild&&typeof g.firstChild.getAttribute!=="undefined"&&g.firstChild.getAttribute("href")!=="#")n.attrHandle.href=function(h){return h.getAttribute("href",2)};g=null})();s.querySelectorAll&&function(){var g=k,h=s.createElement("div");h.innerHTML="

";if(!(h.querySelectorAll&&h.querySelectorAll(".TEST").length===0)){k=function(m,q,p,v){q=q||s;if(!v&&q.nodeType===9&&!x(q))try{return z(q.querySelectorAll(m),p)}catch(t){}return g(m,q,p,v)};for(var l in g)k[l]=g[l];h=null}}(); (function(){var g=s.createElement("div");g.innerHTML="
";if(!(!g.getElementsByClassName||g.getElementsByClassName("e").length===0)){g.lastChild.className="e";if(g.getElementsByClassName("e").length!==1){n.order.splice(1,0,"CLASS");n.find.CLASS=function(h,l,m){if(typeof l.getElementsByClassName!=="undefined"&&!m)return l.getElementsByClassName(h[1])};g=null}}})();var E=s.compareDocumentPosition?function(g,h){return!!(g.compareDocumentPosition(h)&16)}: function(g,h){return g!==h&&(g.contains?g.contains(h):true)},x=function(g){return(g=(g?g.ownerDocument||g:0).documentElement)?g.nodeName!=="HTML":false},ga=function(g,h){var l=[],m="",q;for(h=h.nodeType?[h]:h;q=n.match.PSEUDO.exec(g);){m+=q[0];g=g.replace(n.match.PSEUDO,"")}g=n.relative[g]?g+"*":g;q=0;for(var p=h.length;q=0===d})};c.fn.extend({find:function(a){for(var b=this.pushStack("","find",a),d=0,f=0,e=this.length;f0)for(var j=d;j0},closest:function(a,b){if(c.isArray(a)){var d=[],f=this[0],e,j= {},i;if(f&&a.length){e=0;for(var o=a.length;e-1:c(f).is(e)){d.push({selector:i,elem:f});delete j[i]}}f=f.parentNode}}return d}var k=c.expr.match.POS.test(a)?c(a,b||this.context):null;return this.map(function(n,r){for(;r&&r.ownerDocument&&r!==b;){if(k?k.index(r)>-1:c(r).is(a))return r;r=r.parentNode}return null})},index:function(a){if(!a||typeof a=== "string")return c.inArray(this[0],a?c(a):this.parent().children());return c.inArray(a.jquery?a[0]:a,this)},add:function(a,b){a=typeof a==="string"?c(a,b||this.context):c.makeArray(a);b=c.merge(this.get(),a);return this.pushStack(qa(a[0])||qa(b[0])?b:c.unique(b))},andSelf:function(){return this.add(this.prevObject)}});c.each({parent:function(a){return(a=a.parentNode)&&a.nodeType!==11?a:null},parents:function(a){return c.dir(a,"parentNode")},parentsUntil:function(a,b,d){return c.dir(a,"parentNode", d)},next:function(a){return c.nth(a,2,"nextSibling")},prev:function(a){return c.nth(a,2,"previousSibling")},nextAll:function(a){return c.dir(a,"nextSibling")},prevAll:function(a){return c.dir(a,"previousSibling")},nextUntil:function(a,b,d){return c.dir(a,"nextSibling",d)},prevUntil:function(a,b,d){return c.dir(a,"previousSibling",d)},siblings:function(a){return c.sibling(a.parentNode.firstChild,a)},children:function(a){return c.sibling(a.firstChild)},contents:function(a){return c.nodeName(a,"iframe")? a.contentDocument||a.contentWindow.document:c.makeArray(a.childNodes)}},function(a,b){c.fn[a]=function(d,f){var e=c.map(this,b,d);eb.test(a)||(f=d);if(f&&typeof f==="string")e=c.filter(f,e);e=this.length>1?c.unique(e):e;if((this.length>1||gb.test(f))&&fb.test(a))e=e.reverse();return this.pushStack(e,a,R.call(arguments).join(","))}});c.extend({filter:function(a,b,d){if(d)a=":not("+a+")";return c.find.matches(a,b)},dir:function(a,b,d){var f=[];for(a=a[b];a&&a.nodeType!==9&&(d===w||a.nodeType!==1||!c(a).is(d));){a.nodeType=== 1&&f.push(a);a=a[b]}return f},nth:function(a,b,d){b=b||1;for(var f=0;a;a=a[d])if(a.nodeType===1&&++f===b)break;return a},sibling:function(a,b){for(var d=[];a;a=a.nextSibling)a.nodeType===1&&a!==b&&d.push(a);return d}});var Ja=/ jQuery\d+="(?:\d+|null)"/g,V=/^\s+/,Ka=/(<([\w:]+)[^>]*?)\/>/g,hb=/^(?:area|br|col|embed|hr|img|input|link|meta|param)$/i,La=/<([\w:]+)/,ib=/"},F={option:[1,""],legend:[1,"
","
"],thead:[1,"","
"],tr:[2,"","
"],td:[3,"","
"],col:[2,"","
"],area:[1,"",""],_default:[0,"",""]};F.optgroup=F.option;F.tbody=F.tfoot=F.colgroup=F.caption=F.thead;F.th=F.td;if(!c.support.htmlSerialize)F._default=[1,"div
","
"];c.fn.extend({text:function(a){if(c.isFunction(a))return this.each(function(b){var d= c(this);d.text(a.call(this,b,d.text()))});if(typeof a!=="object"&&a!==w)return this.empty().append((this[0]&&this[0].ownerDocument||s).createTextNode(a));return c.text(this)},wrapAll:function(a){if(c.isFunction(a))return this.each(function(d){c(this).wrapAll(a.call(this,d))});if(this[0]){var b=c(a,this[0].ownerDocument).eq(0).clone(true);this[0].parentNode&&b.insertBefore(this[0]);b.map(function(){for(var d=this;d.firstChild&&d.firstChild.nodeType===1;)d=d.firstChild;return d}).append(this)}return this}, wrapInner:function(a){if(c.isFunction(a))return this.each(function(b){c(this).wrapInner(a.call(this,b))});return this.each(function(){var b=c(this),d=b.contents();d.length?d.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){c(this).wrapAll(a)})},unwrap:function(){return this.parent().each(function(){c.nodeName(this,"body")||c(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.appendChild(a)})}, prepend:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this)});else if(arguments.length){var a=c(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b, this.nextSibling)});else if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,c(arguments[0]).toArray());return a}},remove:function(a,b){for(var d=0,f;(f=this[d])!=null;d++)if(!a||c.filter(a,[f]).length){if(!b&&f.nodeType===1){c.cleanData(f.getElementsByTagName("*"));c.cleanData([f])}f.parentNode&&f.parentNode.removeChild(f)}return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++)for(b.nodeType===1&&c.cleanData(b.getElementsByTagName("*"));b.firstChild;)b.removeChild(b.firstChild); return this},clone:function(a){var b=this.map(function(){if(!c.support.noCloneEvent&&!c.isXMLDoc(this)){var d=this.outerHTML,f=this.ownerDocument;if(!d){d=f.createElement("div");d.appendChild(this.cloneNode(true));d=d.innerHTML}return c.clean([d.replace(Ja,"").replace(/=([^="'>\s]+\/)>/g,'="$1">').replace(V,"")],f)[0]}else return this.cloneNode(true)});if(a===true){ra(this,b);ra(this.find("*"),b.find("*"))}return b},html:function(a){if(a===w)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(Ja, ""):null;else if(typeof a==="string"&&!ta.test(a)&&(c.support.leadingWhitespace||!V.test(a))&&!F[(La.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(Ka,Ma);try{for(var b=0,d=this.length;b0||e.cacheable||this.length>1?k.cloneNode(true):k)}o.length&&c.each(o,Qa)}return this}});c.fragments={};c.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){c.fn[a]=function(d){var f=[];d=c(d);var e=this.length===1&&this[0].parentNode;if(e&&e.nodeType===11&&e.childNodes.length===1&&d.length===1){d[b](this[0]); return this}else{e=0;for(var j=d.length;e0?this.clone(true):this).get();c.fn[b].apply(c(d[e]),i);f=f.concat(i)}return this.pushStack(f,a,d.selector)}}});c.extend({clean:function(a,b,d,f){b=b||s;if(typeof b.createElement==="undefined")b=b.ownerDocument||b[0]&&b[0].ownerDocument||s;for(var 
e=[],j=0,i;(i=a[j])!=null;j++){if(typeof i==="number")i+="";if(i){if(typeof i==="string"&&!jb.test(i))i=b.createTextNode(i);else if(typeof i==="string"){i=i.replace(Ka,Ma);var o=(La.exec(i)||["", ""])[1].toLowerCase(),k=F[o]||F._default,n=k[0],r=b.createElement("div");for(r.innerHTML=k[1]+i+k[2];n--;)r=r.lastChild;if(!c.support.tbody){n=ib.test(i);o=o==="table"&&!n?r.firstChild&&r.firstChild.childNodes:k[1]===""&&!n?r.childNodes:[];for(k=o.length-1;k>=0;--k)c.nodeName(o[k],"tbody")&&!o[k].childNodes.length&&o[k].parentNode.removeChild(o[k])}!c.support.leadingWhitespace&&V.test(i)&&r.insertBefore(b.createTextNode(V.exec(i)[0]),r.firstChild);i=r.childNodes}if(i.nodeType)e.push(i);else e= c.merge(e,i)}}if(d)for(j=0;e[j];j++)if(f&&c.nodeName(e[j],"script")&&(!e[j].type||e[j].type.toLowerCase()==="text/javascript"))f.push(e[j].parentNode?e[j].parentNode.removeChild(e[j]):e[j]);else{e[j].nodeType===1&&e.splice.apply(e,[j+1,0].concat(c.makeArray(e[j].getElementsByTagName("script"))));d.appendChild(e[j])}return e},cleanData:function(a){for(var b,d,f=c.cache,e=c.event.special,j=c.support.deleteExpando,i=0,o;(o=a[i])!=null;i++)if(d=o[c.expando]){b=f[d];if(b.events)for(var k in b.events)e[k]? c.event.remove(o,k):Ca(o,k,b.handle);if(j)delete o[c.expando];else o.removeAttribute&&o.removeAttribute(c.expando);delete f[d]}}});var kb=/z-?index|font-?weight|opacity|zoom|line-?height/i,Na=/alpha\([^)]*\)/,Oa=/opacity=([^)]*)/,ha=/float/i,ia=/-([a-z])/ig,lb=/([A-Z])/g,mb=/^-?\d+(?:px)?$/i,nb=/^-?\d/,ob={position:"absolute",visibility:"hidden",display:"block"},pb=["Left","Right"],qb=["Top","Bottom"],rb=s.defaultView&&s.defaultView.getComputedStyle,Pa=c.support.cssFloat?"cssFloat":"styleFloat",ja= function(a,b){return b.toUpperCase()};c.fn.css=function(a,b){return X(this,a,b,true,function(d,f,e){if(e===w)return c.curCSS(d,f);if(typeof e==="number"&&!kb.test(f))e+="px";c.style(d,f,e)})};c.extend({style:function(a,b,d){if(!a||a.nodeType===3||a.nodeType===8)return w;if((b==="width"||b==="height")&&parseFloat(d)<0)d=w;var f=a.style||a,e=d!==w;if(!c.support.opacity&&b==="opacity"){if(e){f.zoom=1;b=parseInt(d,10)+""==="NaN"?"":"alpha(opacity="+d*100+")";a=f.filter||c.curCSS(a,"filter")||"";f.filter= Na.test(a)?a.replace(Na,b):b}return f.filter&&f.filter.indexOf("opacity=")>=0?parseFloat(Oa.exec(f.filter)[1])/100+"":""}if(ha.test(b))b=Pa;b=b.replace(ia,ja);if(e)f[b]=d;return f[b]},css:function(a,b,d,f){if(b==="width"||b==="height"){var e,j=b==="width"?pb:qb;function i(){e=b==="width"?a.offsetWidth:a.offsetHeight;f!=="border"&&c.each(j,function(){f||(e-=parseFloat(c.curCSS(a,"padding"+this,true))||0);if(f==="margin")e+=parseFloat(c.curCSS(a,"margin"+this,true))||0;else e-=parseFloat(c.curCSS(a, "border"+this+"Width",true))||0})}a.offsetWidth!==0?i():c.swap(a,ob,i);return Math.max(0,Math.round(e))}return c.curCSS(a,b,d)},curCSS:function(a,b,d){var f,e=a.style;if(!c.support.opacity&&b==="opacity"&&a.currentStyle){f=Oa.test(a.currentStyle.filter||"")?parseFloat(RegExp.$1)/100+"":"";return f===""?"1":f}if(ha.test(b))b=Pa;if(!d&&e&&e[b])f=e[b];else if(rb){if(ha.test(b))b="float";b=b.replace(lb,"-$1").toLowerCase();e=a.ownerDocument.defaultView;if(!e)return null;if(a=e.getComputedStyle(a,null))f= a.getPropertyValue(b);if(b==="opacity"&&f==="")f="1"}else if(a.currentStyle){d=b.replace(ia,ja);f=a.currentStyle[b]||a.currentStyle[d];if(!mb.test(f)&&nb.test(f)){b=e.left;var j=a.runtimeStyle.left;a.runtimeStyle.left=a.currentStyle.left;e.left=d==="fontSize"?"1em":f||0;f=e.pixelLeft+"px";e.left=b;a.runtimeStyle.left=j}}return 
f},swap:function(a,b,d){var f={};for(var e in b){f[e]=a.style[e];a.style[e]=b[e]}d.call(a);for(e in b)a.style[e]=f[e]}});if(c.expr&&c.expr.filters){c.expr.filters.hidden=function(a){var b= a.offsetWidth,d=a.offsetHeight,f=a.nodeName.toLowerCase()==="tr";return b===0&&d===0&&!f?true:b>0&&d>0&&!f?false:c.curCSS(a,"display")==="none"};c.expr.filters.visible=function(a){return!c.expr.filters.hidden(a)}}var sb=J(),tb=//gi,ub=/select|textarea/i,vb=/color|date|datetime|email|hidden|month|number|password|range|search|tel|text|time|url|week/i,N=/=\?(&|$)/,ka=/\?/,wb=/(\?|&)_=.*?(&|$)/,xb=/^(\w+:)?\/\/([^\/?#]+)/,yb=/%20/g,zb=c.fn.load;c.fn.extend({load:function(a,b,d){if(typeof a!== "string")return zb.call(this,a);else if(!this.length)return this;var f=a.indexOf(" ");if(f>=0){var e=a.slice(f,a.length);a=a.slice(0,f)}f="GET";if(b)if(c.isFunction(b)){d=b;b=null}else if(typeof b==="object"){b=c.param(b,c.ajaxSettings.traditional);f="POST"}var j=this;c.ajax({url:a,type:f,dataType:"html",data:b,complete:function(i,o){if(o==="success"||o==="notmodified")j.html(e?c("
").append(i.responseText.replace(tb,"")).find(e):i.responseText);d&&j.each(d,[i.responseText,o,i])}});return this}, serialize:function(){return c.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?c.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||ub.test(this.nodeName)||vb.test(this.type))}).map(function(a,b){a=c(this).val();return a==null?null:c.isArray(a)?c.map(a,function(d){return{name:b.name,value:d}}):{name:b.name,value:a}}).get()}});c.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "), function(a,b){c.fn[b]=function(d){return this.bind(b,d)}});c.extend({get:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b=null}return c.ajax({type:"GET",url:a,data:b,success:d,dataType:f})},getScript:function(a,b){return c.get(a,null,b,"script")},getJSON:function(a,b,d){return c.get(a,b,d,"json")},post:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b={}}return c.ajax({type:"POST",url:a,data:b,success:d,dataType:f})},ajaxSetup:function(a){c.extend(c.ajaxSettings,a)},ajaxSettings:{url:location.href, global:true,type:"GET",contentType:"application/x-www-form-urlencoded",processData:true,async:true,xhr:A.XMLHttpRequest&&(A.location.protocol!=="file:"||!A.ActiveXObject)?function(){return new A.XMLHttpRequest}:function(){try{return new A.ActiveXObject("Microsoft.XMLHTTP")}catch(a){}},accepts:{xml:"application/xml, text/xml",html:"text/html",script:"text/javascript, application/javascript",json:"application/json, text/javascript",text:"text/plain",_default:"*/*"}},lastModified:{},etag:{},ajax:function(a){function b(){e.success&& e.success.call(k,o,i,x);e.global&&f("ajaxSuccess",[x,e])}function d(){e.complete&&e.complete.call(k,x,i);e.global&&f("ajaxComplete",[x,e]);e.global&&!--c.active&&c.event.trigger("ajaxStop")}function f(q,p){(e.context?c(e.context):c.event).trigger(q,p)}var e=c.extend(true,{},c.ajaxSettings,a),j,i,o,k=a&&a.context||e,n=e.type.toUpperCase();if(e.data&&e.processData&&typeof e.data!=="string")e.data=c.param(e.data,e.traditional);if(e.dataType==="jsonp"){if(n==="GET")N.test(e.url)||(e.url+=(ka.test(e.url)? 
"&":"?")+(e.jsonp||"callback")+"=?");else if(!e.data||!N.test(e.data))e.data=(e.data?e.data+"&":"")+(e.jsonp||"callback")+"=?";e.dataType="json"}if(e.dataType==="json"&&(e.data&&N.test(e.data)||N.test(e.url))){j=e.jsonpCallback||"jsonp"+sb++;if(e.data)e.data=(e.data+"").replace(N,"="+j+"$1");e.url=e.url.replace(N,"="+j+"$1");e.dataType="script";A[j]=A[j]||function(q){o=q;b();d();A[j]=w;try{delete A[j]}catch(p){}z&&z.removeChild(C)}}if(e.dataType==="script"&&e.cache===null)e.cache=false;if(e.cache=== false&&n==="GET"){var r=J(),u=e.url.replace(wb,"$1_="+r+"$2");e.url=u+(u===e.url?(ka.test(e.url)?"&":"?")+"_="+r:"")}if(e.data&&n==="GET")e.url+=(ka.test(e.url)?"&":"?")+e.data;e.global&&!c.active++&&c.event.trigger("ajaxStart");r=(r=xb.exec(e.url))&&(r[1]&&r[1]!==location.protocol||r[2]!==location.host);if(e.dataType==="script"&&n==="GET"&&r){var z=s.getElementsByTagName("head")[0]||s.documentElement,C=s.createElement("script");C.src=e.url;if(e.scriptCharset)C.charset=e.scriptCharset;if(!j){var B= false;C.onload=C.onreadystatechange=function(){if(!B&&(!this.readyState||this.readyState==="loaded"||this.readyState==="complete")){B=true;b();d();C.onload=C.onreadystatechange=null;z&&C.parentNode&&z.removeChild(C)}}}z.insertBefore(C,z.firstChild);return w}var E=false,x=e.xhr();if(x){e.username?x.open(n,e.url,e.async,e.username,e.password):x.open(n,e.url,e.async);try{if(e.data||a&&a.contentType)x.setRequestHeader("Content-Type",e.contentType);if(e.ifModified){c.lastModified[e.url]&&x.setRequestHeader("If-Modified-Since", c.lastModified[e.url]);c.etag[e.url]&&x.setRequestHeader("If-None-Match",c.etag[e.url])}r||x.setRequestHeader("X-Requested-With","XMLHttpRequest");x.setRequestHeader("Accept",e.dataType&&e.accepts[e.dataType]?e.accepts[e.dataType]+", */*":e.accepts._default)}catch(ga){}if(e.beforeSend&&e.beforeSend.call(k,x,e)===false){e.global&&!--c.active&&c.event.trigger("ajaxStop");x.abort();return false}e.global&&f("ajaxSend",[x,e]);var g=x.onreadystatechange=function(q){if(!x||x.readyState===0||q==="abort"){E|| d();E=true;if(x)x.onreadystatechange=c.noop}else if(!E&&x&&(x.readyState===4||q==="timeout")){E=true;x.onreadystatechange=c.noop;i=q==="timeout"?"timeout":!c.httpSuccess(x)?"error":e.ifModified&&c.httpNotModified(x,e.url)?"notmodified":"success";var p;if(i==="success")try{o=c.httpData(x,e.dataType,e)}catch(v){i="parsererror";p=v}if(i==="success"||i==="notmodified")j||b();else c.handleError(e,x,i,p);d();q==="timeout"&&x.abort();if(e.async)x=null}};try{var h=x.abort;x.abort=function(){x&&h.call(x); g("abort")}}catch(l){}e.async&&e.timeout>0&&setTimeout(function(){x&&!E&&g("timeout")},e.timeout);try{x.send(n==="POST"||n==="PUT"||n==="DELETE"?e.data:null)}catch(m){c.handleError(e,x,null,m);d()}e.async||g();return x}},handleError:function(a,b,d,f){if(a.error)a.error.call(a.context||a,b,d,f);if(a.global)(a.context?c(a.context):c.event).trigger("ajaxError",[b,a,f])},active:0,httpSuccess:function(a){try{return!a.status&&location.protocol==="file:"||a.status>=200&&a.status<300||a.status===304||a.status=== 1223||a.status===0}catch(b){}return false},httpNotModified:function(a,b){var d=a.getResponseHeader("Last-Modified"),f=a.getResponseHeader("Etag");if(d)c.lastModified[b]=d;if(f)c.etag[b]=f;return a.status===304||a.status===0},httpData:function(a,b,d){var f=a.getResponseHeader("content-type")||"",e=b==="xml"||!b&&f.indexOf("xml")>=0;a=e?a.responseXML:a.responseText;e&&a.documentElement.nodeName==="parsererror"&&c.error("parsererror");if(d&&d.dataFilter)a=d.dataFilter(a,b);if(typeof 
a==="string")if(b=== "json"||!b&&f.indexOf("json")>=0)a=c.parseJSON(a);else if(b==="script"||!b&&f.indexOf("javascript")>=0)c.globalEval(a);return a},param:function(a,b){function d(i,o){if(c.isArray(o))c.each(o,function(k,n){b||/\[\]$/.test(i)?f(i,n):d(i+"["+(typeof n==="object"||c.isArray(n)?k:"")+"]",n)});else!b&&o!=null&&typeof o==="object"?c.each(o,function(k,n){d(i+"["+k+"]",n)}):f(i,o)}function f(i,o){o=c.isFunction(o)?o():o;e[e.length]=encodeURIComponent(i)+"="+encodeURIComponent(o)}var e=[];if(b===w)b=c.ajaxSettings.traditional; if(c.isArray(a)||a.jquery)c.each(a,function(){f(this.name,this.value)});else for(var j in a)d(j,a[j]);return e.join("&").replace(yb,"+")}});var la={},Ab=/toggle|show|hide/,Bb=/^([+-]=)?([\d+-.]+)(.*)$/,W,va=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]];c.fn.extend({show:function(a,b){if(a||a===0)return this.animate(K("show",3),a,b);else{a=0;for(b=this.length;a").appendTo("body");f=e.css("display");if(f==="none")f="block";e.remove();la[d]=f}c.data(this[a],"olddisplay",f)}}a=0;for(b=this.length;a=0;f--)if(d[f].elem===this){b&&d[f](true);d.splice(f,1)}});b||this.dequeue();return this}});c.each({slideDown:K("show",1),slideUp:K("hide",1),slideToggle:K("toggle",1),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"}},function(a,b){c.fn[a]=function(d,f){return this.animate(b,d,f)}});c.extend({speed:function(a,b,d){var f=a&&typeof a==="object"?a:{complete:d||!d&&b||c.isFunction(a)&&a,duration:a,easing:d&&b||b&&!c.isFunction(b)&&b};f.duration=c.fx.off?0:typeof f.duration=== "number"?f.duration:c.fx.speeds[f.duration]||c.fx.speeds._default;f.old=f.complete;f.complete=function(){f.queue!==false&&c(this).dequeue();c.isFunction(f.old)&&f.old.call(this)};return f},easing:{linear:function(a,b,d,f){return d+f*a},swing:function(a,b,d,f){return(-Math.cos(a*Math.PI)/2+0.5)*f+d}},timers:[],fx:function(a,b,d){this.options=b;this.elem=a;this.prop=d;if(!b.orig)b.orig={}}});c.fx.prototype={update:function(){this.options.step&&this.options.step.call(this.elem,this.now,this);(c.fx.step[this.prop]|| c.fx.step._default)(this);if((this.prop==="height"||this.prop==="width")&&this.elem.style)this.elem.style.display="block"},cur:function(a){if(this.elem[this.prop]!=null&&(!this.elem.style||this.elem.style[this.prop]==null))return this.elem[this.prop];return(a=parseFloat(c.css(this.elem,this.prop,a)))&&a>-10000?a:parseFloat(c.curCSS(this.elem,this.prop))||0},custom:function(a,b,d){function f(j){return e.step(j)}this.startTime=J();this.start=a;this.end=b;this.unit=d||this.unit||"px";this.now=this.start; this.pos=this.state=0;var e=this;f.elem=this.elem;if(f()&&c.timers.push(f)&&!W)W=setInterval(c.fx.tick,13)},show:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.show=true;this.custom(this.prop==="width"||this.prop==="height"?1:0,this.cur());c(this.elem).show()},hide:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.hide=true;this.custom(this.cur(),0)},step:function(a){var b=J(),d=true;if(a||b>=this.options.duration+this.startTime){this.now= this.end;this.pos=this.state=1;this.update();this.options.curAnim[this.prop]=true;for(var f in 
this.options.curAnim)if(this.options.curAnim[f]!==true)d=false;if(d){if(this.options.display!=null){this.elem.style.overflow=this.options.overflow;a=c.data(this.elem,"olddisplay");this.elem.style.display=a?a:this.options.display;if(c.css(this.elem,"display")==="none")this.elem.style.display="block"}this.options.hide&&c(this.elem).hide();if(this.options.hide||this.options.show)for(var e in this.options.curAnim)c.style(this.elem, e,this.options.orig[e]);this.options.complete.call(this.elem)}return false}else{e=b-this.startTime;this.state=e/this.options.duration;a=this.options.easing||(c.easing.swing?"swing":"linear");this.pos=c.easing[this.options.specialEasing&&this.options.specialEasing[this.prop]||a](this.state,e,0,1,this.options.duration);this.now=this.start+(this.end-this.start)*this.pos;this.update()}return true}};c.extend(c.fx,{tick:function(){for(var a=c.timers,b=0;b
"; a.insertBefore(b,a.firstChild);d=b.firstChild;f=d.firstChild;e=d.nextSibling.firstChild.firstChild;this.doesNotAddBorder=f.offsetTop!==5;this.doesAddBorderForTableAndCells=e.offsetTop===5;f.style.position="fixed";f.style.top="20px";this.supportsFixedPosition=f.offsetTop===20||f.offsetTop===15;f.style.position=f.style.top="";d.style.overflow="hidden";d.style.position="relative";this.subtractsBorderForOverflowNotVisible=f.offsetTop===-5;this.doesNotIncludeMarginInBodyOffset=a.offsetTop!==j;a.removeChild(b); c.offset.initialize=c.noop},bodyOffset:function(a){var b=a.offsetTop,d=a.offsetLeft;c.offset.initialize();if(c.offset.doesNotIncludeMarginInBodyOffset){b+=parseFloat(c.curCSS(a,"marginTop",true))||0;d+=parseFloat(c.curCSS(a,"marginLeft",true))||0}return{top:b,left:d}},setOffset:function(a,b,d){if(/static/.test(c.curCSS(a,"position")))a.style.position="relative";var f=c(a),e=f.offset(),j=parseInt(c.curCSS(a,"top",true),10)||0,i=parseInt(c.curCSS(a,"left",true),10)||0;if(c.isFunction(b))b=b.call(a, d,e);d={top:b.top-e.top+j,left:b.left-e.left+i};"using"in b?b.using.call(a,d):f.css(d)}};c.fn.extend({position:function(){if(!this[0])return null;var a=this[0],b=this.offsetParent(),d=this.offset(),f=/^body|html$/i.test(b[0].nodeName)?{top:0,left:0}:b.offset();d.top-=parseFloat(c.curCSS(a,"marginTop",true))||0;d.left-=parseFloat(c.curCSS(a,"marginLeft",true))||0;f.top+=parseFloat(c.curCSS(b[0],"borderTopWidth",true))||0;f.left+=parseFloat(c.curCSS(b[0],"borderLeftWidth",true))||0;return{top:d.top- f.top,left:d.left-f.left}},offsetParent:function(){return this.map(function(){for(var a=this.offsetParent||s.body;a&&!/^body|html$/i.test(a.nodeName)&&c.css(a,"position")==="static";)a=a.offsetParent;return a})}});c.each(["Left","Top"],function(a,b){var d="scroll"+b;c.fn[d]=function(f){var e=this[0],j;if(!e)return null;if(f!==w)return this.each(function(){if(j=wa(this))j.scrollTo(!a?f:c(j).scrollLeft(),a?f:c(j).scrollTop());else this[d]=f});else return(j=wa(e))?"pageXOffset"in j?j[a?"pageYOffset": "pageXOffset"]:c.support.boxModel&&j.document.documentElement[d]||j.document.body[d]:e[d]}});c.each(["Height","Width"],function(a,b){var d=b.toLowerCase();c.fn["inner"+b]=function(){return this[0]?c.css(this[0],d,false,"padding"):null};c.fn["outer"+b]=function(f){return this[0]?c.css(this[0],d,false,f?"margin":"border"):null};c.fn[d]=function(f){var e=this[0];if(!e)return f==null?null:this;if(c.isFunction(f))return this.each(function(j){var i=c(this);i[d](f.call(this,j,i[d]()))});return"scrollTo"in e&&e.document?e.document.compatMode==="CSS1Compat"&&e.document.documentElement["client"+b]||e.document.body["client"+b]:e.nodeType===9?Math.max(e.documentElement["client"+b],e.body["scroll"+b],e.documentElement["scroll"+b],e.body["offset"+b],e.documentElement["offset"+b]):f===w?c.css(e,d):this.css(d,typeof f==="string"?f:f+"px")}});A.jQuery=A.$=c})(window); percona-toolkit-3.1/config/sphinx-build/percona-theme/static/percona.com.css000664 001750 001750 00000227514 13535723557 030476 0ustar00jenkinsjenkins000000 000000 @font-face { font-family: 'Vegur-Regular_font'; src: url('Vegur-Regular.otf') format('opentype'); font-family: 'GeosansLight'; src: url('GeosansLight.ttf') format('truetype'); } body, html{ height:100%; margin:0; padding:0; font-size: 12px; line-height: 16px; font-family: Arial; color: #333333; } form{ display:inline } a { text-decoration:none; } a, a:visited{ border:none; cursor:pointer; color: #d5390b; } a:hover { color: #712000; } img{ border:none } div,img{ behavior:url(/static/css/iepngfix.htc) 
} em { font-style: italic; } del { text-decoration: line-through; } .hidden{ display:none } strong { font-weight: bold; } html, body { height: 100%; } #stickywrapper { position: relative; min-height: 100%; } * html #stickywrapper { height: 100%; } #wrapper { min-height: 100%; position: relative; } #stickycontent { padding-bottom: 102px; margin: 0px auto; overflow: hidden; width: 1000px; border: 0px solid #CCC; } #stickyfooter { position: relative; margin: -97px auto 0 auto; } a.links { font-style: oblique; } span.subpart { font-weight: bold; } /** * LAYOUT */ #header{ background:#333; overflow:hidden; zoom:1 } #header .header{ height:95px; margin:0 auto; overflow:hidden; zoom:1; } #header .header .logo{ float:left; overflow:hidden; padding:20px 0; width:230px; zoom:1; } #topnav a { text-decoration: none !important; } #header .header .right { overflow: hidden; zoom: 1; float: left; width: 730px; height: 35px; padding: 30px 0px; } #header .header .right .searchlink { overflow: hidden; zoom: 1; width: 35px; height: 35px; float: right; padding: 0px 0px 0px 10px; } #header .header .right .navicontainer { overflow: hidden; zoom: 1; float: right; height: 35px; background: #ffffff url(/static/images/ui-navi-l.png) top left no-repeat; } #header .header .right .navi { overflow: hidden; zoom: 1; height: 27px; padding: 4px 10px; background: url(/static/images/ui-navi-r.png) top right no-repeat; } #header .header .right .navicontainer .navi span { overflow: hidden; zoom: 1; height: 27px; float: left; margin: 0px; } #header .header .right .navicontainer .navi span:hover { background: #f0f0f0 url(/static/images/ui-navi-hover-l.png) top left no-repeat; } #header .header .right .navicontainer .navi span.selected, #header .header .right .navicontainer .navi span.selected:hover { background: #d5390b url(/static/images/ui-navi-sel-l.png) top left no-repeat; } #header .header .right .navicontainer .navi span a, #header .header .right .navicontainer .navi span a:visited { display: block; padding: 6px 10px 4px 10px; height: 17px; font-family: Arial, Helvetica, sans-serif; font-size: 14px; line-height: 14px; color: #000000; text-decoration: none; } #header .header .right .navicontainer .navi span a:hover { background: url(/static/images/ui-navi-hover-r.png) top right no-repeat; } #header .header .right .navicontainer .navi span.selected a.selected, #header .header .right .navicontainer .navi span.selected a.selected:visited, #header .header .right .navicontainer .navi span.selected a.selected:hover { background: url(/static/images/ui-navi-sel-r.png) top right no-repeat; color: #ffffff; } /* * Banners */ #slogan { overflow: hidden; zoom: 1; height: 125px; background: #d24300 url(/static/images/ui-orange-front.png) bottom center no-repeat; text-align: center; } #stripe { overflow: hidden; zoom: 1; height: 20px; background: #d24300 url(/static/images/ui-orange-content.png) top center no-repeat; } /* * Footer */ #footer { overflow: hidden; zoom: 1; background: #333333; border-top: 2px #d95200 solid; } #footer .footer { overflow: hidden; zoom: 1; width: 960px; margin: 0px auto; padding: 10px 0px; } #footer .footer .logo { overflow: hidden; zoom: 1; float: left; padding: 17px 0px 0px 0px; } #footer .footer .text { overflow: hidden; zoom: 1; float: right; text-align: right; font-family: Arial, Helvetica, sans-serif; font-size: 11px; line-height: 15px; color: #e0e0e0; } #footer .footer .text a, #footer .footer .text a:visited { color: #e0e0e0; text-decoration: none; } #footer .footer .text a:hover { color: #ffffff; 
text-decoration: underline; } /* * Menu Dropdowns */ div.navi-dropdown { display: none; position: absolute; overflow: hidden; zoom: 1; width: 255px; z-index: 900; } div.navi-dropdown .navi-dropdown-header-l { overflow: hidden; zoom: 1; width: 255px; height: 20px; background: url(/static/images/ui-dropdown-header-l.png) top left no-repeat; } div.navi-dropdown .navi-dropdown-header-r { overflow: hidden; zoom: 1; width: 255px; height: 20px; background: url(/static/images/ui-dropdown-header-r.png) top left no-repeat; } div.navi-dropdown .navi-dropdown-content { overflow: hidden; zoom: 1; width: 225px; padding: 0px 15px 5px 15px; background: #ffffff url(/static/images/ui-dropdown-bg.png) repeat-y; font-family: Arial, Helvetica, sans-serif; font-size: 14px; line-height: 20px; color: #333333; } div.navi-dropdown .navi-dropdown-content .item { padding: 3px 0px 1px 0px; } div.navi-dropdown .navi-dropdown-content a, div.navi-dropdown .navi-dropdown-content a:visited { display: block; color: #333333; text-decoration: none; } div.navi-dropdown .navi-dropdown-content a:hover { color: #d12907; text-decoration: underline; } div.navi-dropdown .navi-dropdown-footer { overflow: hidden; zoom: 1; width: 255px; height: 10px; background: url(/static/images/ui-dropdown-footer.png) top left no-repeat; } div.search-dropdown { display: none; position: absolute; overflow: hidden; zoom: 1; width: 255px; } div.search-dropdown .search-dropdown-header { overflow: hidden; zoom: 1; width: 255px; height: 20px; background: url(/static/images/ui-dropdown-header-search.png) top left no-repeat; } div.search-dropdown .search-dropdown-content { overflow: hidden; zoom: 1; width: 225px; padding: 0px 15px 5px 15px; background: #ffffff url(/static/images/ui-dropdown-bg.png) repeat-y; font-family: Arial, Helvetica, sans-serif; font-size: 14px; line-height: 20px; color: #333333; } div.search-dropdown .search-dropdown-content .form { padding-top: 2px; } div.search-dropdown .search-dropdown-content .form input { border: 1px #c0c0c0 solid; padding: 4px; width: 210px; outline: none; } div.search-dropdown .search-dropdown-content .form input:focus { border: 1px #707070 solid; outline: none; } div.search-dropdown .search-dropdown-footer { overflow: hidden; zoom: 1; width: 255px; height: 10px; background: url(/static/images/ui-dropdown-footer.png) top left no-repeat; } /* * Layout columns */ #left-column { width: 165px; float: left; margin-right: 25px; } #right-column { width: 230px; float:left; margin-left: 25px; } #left-column .navi-title { font-size: 15px; color: #d5390b; margin-bottom: 20px; } #left-column .navi-title span { background: url("/static/images/left_menu_title_bullet.png") right top no-repeat; padding-right: 20px; padding-left: 10px; } #left-column .item, #left-column .subitem { border-top: 1px solid #f3f1ee; } #left-column .subitem { padding-left: 25px; } #left-column .item a, #left-column .subitem a { min-height: 18px; /* line-height: 23px; */ color: #a3a3a3; font-size: 11px; text-decoration: none; display: block; padding: 5px 0 2px 10px; background: url('/static/images/left_column_item_bulletpoint.png') left no-repeat; } #left-column .item a:hover { text-decoration: underline; } #left-column .selected > a { color: #b82c02; background: url('/static/images/left_menu_item_selected_bulletpoint.png') left no-repeat; } /* * Front page */ .frontpage-text { font-size: 13px; } .frontpage-text p { text-align: justify; } div.frontpage-banners { overflow: hidden; zoom: 1; width: 960px; margin: 0px; margin-bottom: 20px; padding: 0px; } 
div.frontpage-banners div.frontpage-banners-container { overflow: hidden; zoom: 1; width: 960px; height: 200px; padding-bottom: 30px; z-index: 98; } div.frontpage-banners a.nivo-imageLink { position: absolute; top: 0px; left: 0px; width: 960px; height: 200px; border: 0; padding: 0; margin: 0; z-index: 20; display: none; } div.frontpage-banners div.frontpage-banners-container div.nivo-directionNav { width: 1px; height: 1px; display: none !important; z-index: 18 !important; } div.frontpage-banners div.frontpage-banners-container div.nivo-controlNav { position: absolute; top: 200px; width: 960px; text-align: center; font-size: 28px; line-height: 25px; } div.frontpage-banners div.frontpage-banners-container div.nivo-controlNav a { padding: 0px 3px; text-decoration: none !important; color: #919191 !important; } div.frontpage-banners div.frontpage-banners-container div.nivo-controlNav a.active { color: #d12907 !important; } div.frontpage-banners div.frontpage-banners-selector { overflow: hidden; zoom: 1; height: 20px; text-align: center; } div.frontpage-main { overflow: hidden; zoom: 1; width: 960px; padding: 0px; } div.frontpage-boxes { overflow: hidden; zoom: 1; margin: 25px 0px 0px 0px; width: 515px; padding: 0px; } div.frontpage-boxes div.frontpage-boxes-selector { overflow: hidden; zoom: 1; width: 515px; } div.frontpage-boxes div.frontpage-boxes-selector div.frontpage-boxes-selector-item { overflow: hidden; zoom: 1; width: 127px; float: left; margin-left: -1px; border-top: 1px #cccccc solid; border-left: 1px #cccccc solid; border-right: 1px #cccccc solid; border-bottom: 1px #cccccc solid; text-align: center; margin-top: 10px; padding: 0px 0px 0px 0px; } div.frontpage-boxes div.frontpage-boxes-selector div.frontpage-boxes-selector-item:first-child { margin-left: 0px; } div.frontpage-boxes div.frontpage-boxes-selector div.frontpage-boxes-selector-item.selected { background: url(/static/images/ui-tab-bg.png) repeat-x; margin-top: 0px; border-bottom: none; padding: 5px 0px 6px 0px; } div.frontpage-boxes div.frontpage-boxes-selector div.frontpage-boxes-selector-item a { display: block; font-size: 14px; line-height: 13px; font-weight: bold; color: #444444 !important; text-decoration: none !important; padding: 9px 0px 8px 0px; } div.frontpage-boxes div.frontpage-boxes-container { overflow: hidden; zoom: 1; font-size: 14px; width: 511px; height: 150px; padding-top: 20px; border-left: 1px #cccccc solid; border-right: 1px #cccccc solid; border-bottom: 1px #cccccc solid; padding-left: 0px; } div.frontpage-boxes div.frontpage-boxes-container div.frontpage-box img { padding: 5px 15px 55px 0; float:left; } .innerpage-box { padding: 10px 20px; } .frontpage-icon-section { clear: both; } .frontpage-icon-section img{ float: left; margin-right: 12px; width: 55px; height: 55px; } .frontpage-icon-section p { text-align: justify; } /* * Content */ #contentcontainer { overflow: hidden; zoom: 1; width: 960px; padding: 0px 0px; margin: 0px auto; } #content { float: left; width: 515px; } #content p { margin-bottom:15px; } /*#content h1 { font-size: 23px; line-height: 30px; margin: 5px 0px 10px 0px; color: #d5390b; font-weight: normal; } */ #content h2 { font-weight: normal; font-size: 17px; line-height: 20px; padding: 0px 0px 3px 0px; margin: 30px 0px 15px 0px; color: #000000; border-bottom: 1px #e0e0e0 solid; } #content h3 { font-weight: bold; font-size: 14px; line-height: 17px; margin: 20px 0px 10px 0px; padding-bottom:5px; color: #000000; text-transform: uppercase; border-bottom:1px solid #ccc; } #content dl dt { 
font-weight: bold; margin-top: 10px; margin-bottom: 10px; } #content dl dd { padding: 0px 0px 0px 30px; } #content ol { list-style-type: decimal; padding: 0px 0px 0px 30px; } #content ol li { list-style-type: decimal; margin: 8px 0px; } div.side-column-block { position: relative; margin: 0 0 20px; } div.side-column-block .header { background: url("/static/images/boxes_header_bulletpoint.png") left no-repeat; text-align: left; font-size: 14px; line-height: 20px; padding: 3px 0px 3px 20px; margin-bottom: 10px; font-weight: bold; border-bottom: 1px solid #efece8; } div.side-column-block div.content { font-size: 11px; padding-left: 5px; } div.side-column-block div.content div.more { overflow: hidden; zoom: 1; text-align: right; padding: 5px 0px 0px 0px; font-size: 11px; line-height: 17px; } div.side-column-block div.content li { padding-left: 10px; list-style:none; background: url('/static/images/boxes_list_bulletpoints.png') left top no-repeat; margin-bottom:7px; } div.side-column-block .content a { color: #b82c02; text-decoration: none; } div.side-column-block .content table.call-us { width: 100%; } div.side-column-block .content table.call-us tr { margin-bottom: 10px; } div.side-column-block .content table.call-us th { vertical-align: text-top; color: #e05d02; font-size: 11px; font-weight: normal; } div.side-column-block .content table.call-us td, div.side-column-block .content table.call-us th { padding-bottom: 10px; } div.side-column-block .content table.call-us td a { color: #e05d02; } div.side-column-block .content .button-customer-login span{ display: none; } .side-column-block .content .button-customer-login { display: block; background: url('/static/images/button-customer-login.png') top left no-repeat; width: 220px; height: 30px; margin-top: 12px; } .side-column-block .side-contact-buttons { margin-left: -5px; } .side-column-block .side-contact-buttons li { margin-bottom: 5px; padding-left: 0 !important; } .side-column-block .side-contact-buttons a { display: block; width: 228px; height: 32px; background: url('/static/images/side_buttons.png') top left no-repeat; } .side-column-block .side-contact-buttons a.contact-me { background-position: 0px 0px; } .side-column-block .side-contact-buttons a.webinar { background-position: 0px -36px; } .side-column-block .side-contact-buttons a.download-software { background-position: 0px -72px; } .side-column-block .side-contact-buttons a.newsletters { background-position: 0px -108px; } .side-column-block .side-contact-buttons a span { display: none; } .side-column-block .content .numbers { padding-left: 45px; background: url(/static/images/phone.png) 2px center no-repeat; line-height: 18px; text-align: right; padding-right: 5px; } /* * Our team list */ .our-team-page h2 { clear: both; } .our-team-page div.views-row { margin-top: 0px !important; text-align: center; } .our-team-page .views-field-title a { text-decoration: none; font-weight: bold; } .our-team-page .views-field-field-job-title { color: #555555; } .our-team-page .views-field-field-picture img { border-radius: 8px; box-shadow: 2px 2px 3px #bbbbbb; } div.our-team-item { text-align: center; overflow: hidden; zoom: 1; width: 162px; margin-left: 5px; height: 200px; float: left; } .pagecontent div.our-team-profile { height: inherit; margin-left: 0; } .pagecontent div.our-team-profile img{ margin-top: 20px; margin-left: 0; margin-right: 0; } div.our-team-item .img img { border: 1px #e0e0e0 solid; padding: 2px; } /* * Phone Directory */ table.phonetable { margin-top: 1em; border-collapse: 
collapse; } .phonetable { margin-left: 2em; } table.phonetable th, table.phonetable tr { text-align: left; } table.phonetable th { border: 1px #e0e0e0 solid; background: #f8f8f8; padding: 7px; font-weight: bold; } table.phonetable td { border: 1px #e0e0e0 solid; padding: 7px ; } table.phonetable .label { text-align: left; } table.phonetable td.label { } /* * contact forms */ .form-item { clear: both; margin-top: 1em; position: relative; } .form-item label { display: block; font-weight: bold; position: absolute; top: 0; left: 0; width: 13em; } .form-required { color: #fa0; font-weight: bold; } .form-item .description { color: #888; font-size: 0.85em; line-height: 150%; position: relative; left: 14em; } #edit-submitbutton { position: relative; left: 12em; margin: 1em 0; } .form-item input[type=text], .form-item input[type=password], .form-item select, .form-item textarea { border: 1px solid #CCC; width: 350px; position: relative; top: 0; left: 13em; } .form-item textarea { height: 12em; padding: 3px; left: 12em; } .form-item-spacer { padding-top: 1em; } .form-item #captcha { position: relative; top: 0; left: 12em; } #edit-submitted-captcha { margin-top: 0.5em; width: 154px; } fieldset.form-item-set { border: 1px solid #ccc; display: block; margin: 12px 0; padding: 12px; } fieldset.form-item-set legend { border: none; display: block; padding: 0 2px; } div.error { background: #fff; border: solid 1px #c52020; color: #C52020; padding: 5px; } div.messages { margin: 5px 20px; font-size: 12px; } div.messages ul, div.messages ul li { list-style-image: none; list-style-type: none; } div.messages ul li { padding: 0 0 0.2em 0; } .form-item .error { background-color :#fdd; } #emergency-contacts .node-field-name { text-align: right; font-weight: bold; padding-right: 1em; } #emergency-contacts .node-field-value-phone { font-weight: bold; color: #137F00; } #emergency-contacts .node-field-help { font-weight: bold; } /* * compact template */ #compact_footer { width: auto !important; } #compact_contentcontainer { overflow: hidden; zoom: 1; width: auto; padding: 30px 0px; margin: 0px 20px; } .content-featured-box { float: right; background-color: #f9f9f9; color: #999999; font-size: 11px; padding: 20px; width: 130px; margin: 0px 0 10px 10px; } .content-featured-box .footer { margin-top: 10px; text-align: right; color: #666666; font-size: 11px; } .content-featured-box .footer em { font-style: normal; font-size: 11px; } .quotes-box { /*height: 200px;*/ } .quotes-box .content { /*height: 190px; overflow: auto;*/ } #content ul { margin-left: 0px; padding-left: 15px; margin-bottom: 30px; } #content ul li { /*list-style-image: url('/static/images/content_list_bulletpoint.png');*/ list-style-image: url('/sites/all/themes/percona/images/orange-bullet-square.png'); margin-bottom: 5px; line-height: 16px; } #content ul.alt-list li { /*list-style-image: url('/static/images/content_list_bulletpoint.png');*/ list-style-image: url('/sites/all/themes/percona/images/orange-bullet-square.png'); font-size: 12px; } #content ul.alt-list li a { text-decoration: none; } .field-row { margin-bottom: 20px; } .field-row label { display:block; width: 80px; color: #cc3300; float: left; } .field-row .field { display: block; width: 400px; float: left; } .field-row:after { content: "."; display: block; visibility: hidden; height: 0; clear:both; } /* * Customer list */ .custlogo img { /* Images are 175px wide; three side-by-side in a 720px div means each has * roughly 240px of space to occupy. 
*/ padding: 20px; border: 1px solid white; } .custlogo img:hover { border: 1px solid gray; } /* * Form table */ table.formtable { border-collapse: collapse; } table.formtable th { font-weight: normal; white-space: nowrap; padding: 4px 20px 4px 0px; } table.formtable td { padding: 4px 0px 4px 0px; } table.formtable td input[type='text'], table.formtable td select { border: 1px #e0e0e0 solid; padding: 4px; width: 400px; } table.formtable td.submit { padding: 10px 0px 0px 0px; text-align: center; } /* * Data table */ table.datatable { margin-top: 1em; border-collapse: collapse; font-size:12px; } .datatable { margin-left: 2em; } table.datatable th, table.datatable tr { text-align: center; } table.datatable th { background: none repeat scroll 0 0 #525151; border-bottom: 3px solid #e67b1a; border-top: 1px solid #e0e0e0; border-right: 1px solid #e0e0e0; border-left: 1px solid #e0e0e0; font-weight: normal; padding: 5px; font-size:16px; text-transform:uppercase; color:#fff; } table.datatable td { border: 1px #e0e0e0 solid; padding: 7px; } table.datatable .label { text-align: left; } /* * Presentations */ div.presentation { border-top: 1px #e0e0e0 dotted; padding: 15px 0px; } div.presentation:first-child { margin-top: 0; border: none; } div.presentation table.presentation { border-collapse: collapse; } div.presentation table.presentation td div.bordered-image-screenshot { border: 1px #e0e0e0 solid; background: #f8f8f8; padding: 5px; } div.presentation table.presentation td.presentations-descr { vertical-align: top; padding-left: 15px; } div.presentation table.presentation td.presentations-descr * { vertical-align: top; } div.presentation table.presentation td.presentations-descr h4 { font-size: 14px; font-weight: bold; margin-top: 0px !important; margin-bottom: 10px; border-top: none !important; } /** New Presentations */ .view-technical-presentations .view-filters * { font-size: 10px; } .view-technical-presentations .view-filters .form-item input { left: 0 !important; width: 100px; } .view-technical-presentations .view-filters .form-item select { left: 0 !important; width: 60px; /*left: 70px;*/ } .view-technical-presentations .view-filters .form-item label { width: 70px; } .view-technical-presentations .view-filters .views-exposed-form { /* padding-left: 300px;*/ border-bottom: 1px solid #eee; } /*.view-technical-presentations .view-filters .views-exposed-form .views-exposed-widget { float: none; }*/ .view-technical-presentations li { list-style-image: url("/static/images/left_menu_item_selected_bulletpoint.png") !important; margin-bottom: 20px; border-bottom: 1px solid #eee; } .view-technical-presentations h3 { font-size: 16px; } .view-technical-presentations h3 a { text-decoration: none !important; } .view-technical-presentations .presentation-date { font-size:12px; font-weight: bold; margin-bottom:5px } .view-percona-tv .views-exposed-form .views-exposed-widgets { padding-left: 25px; } .view-percona-tv .views-exposed-form .views-exposed-widget { display: inline-block; float: left; } .view-percona-tv .views-exposed-form .views-exposed-widget label { display: none !important; } .view-percona-tv .views-exposed-form .views-exposed-widget select, .view-percona-tv .views-exposed-form .views-exposed-widget input[type="text"] { left: 0 !important; position: inherit; width: 150px; } .view-percona-tv .views-exposed-form .views-exposed-widget input[type="submit"] { margin-top: 0; } .view-percona-tv .views-submit-button { text-align: right; } .view-percona-tv .view-content { margin-top:25px; } 
.view-percona-tv .view-content .views-field-title { font-weight: bold; font-size: 14px; margin: 10px 0; } .view-percona-tv .view-content .views-field-field-video-thumbnail img { border-radius: 7px; } .view-percona-tv .view-content .views-row { width: 205px; float: left; margin-bottom: 30px; } .view-percona-tv .view-content .views-row-odd { margin-right: 35px; margin-left: 15px; clear: left; } .view-percona-tv .view-content .views-field-field-video-thumbnail .field-content { text-align: center; } .view-percona-tv .view-content .views-field-field-video-thumbnail a { opacity:0.5; filter:alpha(opacity=50); /* For IE8 and earlier */ } .view-percona-tv .view-content .views-field-field-video-thumbnail a:hover { opacity:1; filter:alpha(opacity=100); /* For IE8 and earlier */ } .views-field-created { font-weight: bold; } .customer-vote { width: 412px; text-align: justify; } .training-info-container { float: right; width:200px; background-color:#fff; padding:10px; margin:0 15px; border:1px solid #E0E0E0; } .node-percona-tv-video .content iframe { width: 100% !important; } /* * Software info container */ div.software-info-container { overflow: hidden; zoom: 1; float: right; padding: 0px 0px 25px 25px; } table.software-info-container { width: 240px; border-collapse: collapse; } table.software-info-container td { border: 1px #e0e0e0 solid; padding: 10px 10px 10px 10px; } table.software-info-container td.latest { padding: 10px 10px 10px 62px; background: url(/static/images/software-latest.png) 15px center no-repeat; min-height: 32px; font-size: 16px; line-height: 24px; } table.software-info-container td.download a { font-weight: bold; } table.software-info-container td.download { padding: 15px 62px 15px 20px; background: url(/static/images/software-download.png) 195px center no-repeat; min-height: 32px; font-size: 18px; line-height: 24px; } table.software-info-container td.links { text-align: left; padding: 4px 10px 4px 20px; } table.software-info-container td.links div { margin: 10px 0px; } .inv { display: none; } /* * Downloads table */ ul.downloads_list { font-size: 16px; } ul.downloads_list li { line-height: 20px; } ul.downloads_list a { text-decoration: none; } table.downloadstable { margin-top: 20px; border-collapse: collapse; } table.downloadstable th { border: 1px #e0e0e0 solid; background: #f8f8f8; padding: 7px; font-weight: bold; } table.downloadstable td { border: 1px #e0e0e0 solid; padding: 7px; } table.downloadstable td.file { width: 90%; text-align: left; } table.downloadstable td.modified { width: 5%; text-align: center; white-space: nowrap; } table.downloadstable td.size { width: 5%; text-align: right; white-space: nowrap; } /* * Maintenance promo on /downloads */ div.downloads-maintenance-promo, div.downloads-warning { border: 1px #e0e0e0 solid; padding: 15px 20px 15px 150px; margin: 30px 0px; background: url(/static/images/shield.png) 35px center no-repeat; } div.downloads-maintenance-promo > .close-button { float:right; clear:right; margin-top:-10px; margin-right:-15px; cursor: pointer; cursor: hand; } div.downloads-warning { background: url(/static/images/warning.png) 35px center no-repeat; } /* * Downloads pages */ div.downloads-version { border-top: 1px #e0e0e0 solid; margin-top: -10px; padding-top: 10px; text-align: right; font-size: 14px; line-height: 14px; font-weight: bold; } div.downloads-anchors { font-size: 13px; line-height: 15px; margin: 20px 0px; } div.downloads-backtotop { font-size: 13px; line-height: 15px; padding-top: 8px; } .fixedwidth-wrapper { margin: 0 auto; 
overflow: hidden; width: 1000px; border: solid #ccc 0px; } /*** * side style buttons */ /* .a-btn-container { overflow:hidden; border-radius:4px; } .a-btn, .a-btn-new { width: 220px; height: 48px; background: #fdcd34; background:-webkit-gradient(linear,left top,left bottom,color-stop(##fdcd34,0),color-stop(##d42c14,1)); background:-webkit-linear-gradient(top, #fdcd34 0%, #d42c14 100%); background:-moz-linear-gradient(top, #fdcd34 0%, #d42c14 100%); background:-o-linear-gradient(top, #fdcd34 0%, #d42c14 100%); background:linear-gradient(top, #fdcd34 0%, #d42c14 100%); filter:progid:DXImageTransform.Microsoft.gradient( startColorstr='#fdcd34', endColorstr='#d42c14',GradientType=0 ); padding:0px; display:inline-block; position:relative; -webkit-border-radius:4px; -moz-border-radius:4px; border-radius:4px; overflow:hidden; -webkit-transition:all 0.3s linear; -moz-transition:all 0.3s linear; -o-transition:all 0.3s linear; transition:all 0.3s linear; text-decoration: none !important; border: 1px solid #f08900; margin-bottom: 10px; } .a-btn-text{ padding-left:10px; padding-top:10px; display:block; font-size:13px; white-space:nowrap; color: #ffffff; -webkit-transition:all 0.3s linear; -moz-transition:all 0.3s linear; -o-transition:all 0.3s linear; transition:all 0.3s linear; } .a-btn-slide-text, .a-btn-fixed-slide-text { padding-left: 10px; left:0px; width:auto; right:52px; height:0px; color:#000000; font-size:12px; white-space:nowrap; font-family:Georgia, serif; font-style:italic; text-indent:15px; overflow:hidden; } .a-btn-fixed-slide-text{ height: 30px; } .a-btn-icon-right{ position:absolute; right:0px; top:0px; height:100%; width:52px; } .a-btn-icon-right span{ width:38px; height:38px; position:absolute; left:50%; top:50%; margin:-20px 0px 0px -20px; background:transparent url('/static/images/button-download-icon.png') no-repeat 50% 55%; -webkit-transition:all 0.3s linear; -moz-transition:all 0.3s linear; -o-transition:all 0.3s linear; transition:all 0.3s linear; } */ .a-btn-green, .a-btn-green-big { background: #84bb81; /* Old browsers */ border: none; border-radius: 0; display: inline-block; height: 65px; margin-bottom: 10px; overflow: hidden; padding: 0; position: relative; text-decoration: none !important; transition: all 0.3s linear 0s; width: 100%; box-shadow: 0px 5px #4d9355; } .a-btn-text-green { color: #FFFFFF; display: block; font-size: 13px; padding-left: 15px; padding-top: 16px; transition: all 0.3s linear 0s; white-space: nowrap; font-weight: bold; } .a-btn-text-green-big { color: #FFFFFF; display: block; font-size: 23px; padding-left: 50px; padding-top: 18px; transition: all 0.3s linear 0s; white-space: nowrap; } .a-btn-text-green-big-second { color: #fbdc32; display: block; font-size: 16px; padding-left: 50px; padding-top: 8px; transition: all 0.3s linear 0s; white-space: nowrap; } .a-btn-fixed-slide-text-green { color: #ffffff; font-family: Arial, Helvetica, sans-serif; font-size: 14px; font-style: italic; left: 0; overflow: hidden; padding-left: 14px; right: 52px; text-indent: 15px; white-space: nowrap; width: auto; line-height:20px; } .a-btn-icon-right-green { height: 100%; position: absolute; right: 0; top: 0; width: 52px; border-left: 1px solid #eee; } .a-btn-icon-left-green, .a-btn-icon-left-orange { height: 100%; position: absolute; left: 0; top: 0; width: 52px; } .a-btn-icon-right-green span, .a-btn-icon-left-green span, .a-btn-icon-left-orange span { height: 38px; left: 50%; margin: -19px 0 0 -22px; position: absolute; top: 50%; transition: all 0.3s linear 0s; width: 
38px; } .a-btn-icon-right-green span { background: url("bullet-white.png") no-repeat scroll 50% 55% transparent; margin: -19px 0 0 -22px; } .a-btn-icon-left-green span { background: url("/static/images/bullet-yellow-green.png") no-repeat scroll 50% 55% transparent; margin: -20px 0 0 -17px; } .a-btn-icon-left-orange span { background: url("/static/images/bullet-orange.png") no-repeat scroll 50% 55% transparent; margin: -20px 0 0 -30px; } /* **** */ .paper-desc { padding-left: 20px; } .papers-content h2 a { text-decoration: none !important; color: inherit !important; } .paper-author { padding-bottom: 20px; } .paper-author img { border: 1px solid #CCCCCC; display: block; float: left; margin-right: 15px; padding: 1px; width: 110px; } .paper-author h2 { border-bottom: 0 !important; color: #000000; font-family: Arial,Helvetica,sans-serif; font-size: 20px; line-height: 20px; margin: 30px 0 0 !important; padding: 0 0 3px; } .paper-author h3 { color: #555555 !important; font-family: Times New Roman,serif !important; font-style: italic; font-weight: normal !important; margin-left: 130px !important; margin-top: 5px !important; } .paper-long-desc { width: 310px; float: left; } .paper-long-desc > p{ padding: 0 20px; margin:0 !important; } .paper-details .paper-thumb { width: 185px; float: left; margin-right: 10px; } .paper-details .paper-thumb div.content { padding: 5px; } .paper-details .paper-thumb img { width: 175px; border: 1px solid #999; box-shadow: 5px 5px 5px #888888; } .pr-subhead { font-size: 16px !important; border: none !important; text-align: center; margin-top: 15px !important; font-style: italic; } .clearer { clear: both; } blockquote { display: block; padding: 0 0 0 35px; font-style: italic; } .book_cover { border: 1px solid #CCCCCC; margin-right: 20px; } .support-yes { color: green; font-weight: bold; } .support-no { color: red; } .support-cs { color: blue; } table.support-table * { font-size: 12px !important; } /* * social share styles * .sharrre .button, .sharrre .IN-widget, .linkedin-company-follow { height: 35px; padding-left: 5px; } .sharrre .linkedin { display: none; } .sharrre .IN-widget, .sharrre .twitter { height: 30px; } */ .translation-buttons { text-align: right; } .translation-buttons img { width: 25px; margin-right: 10px; } .news-article { clear:both; margin-top: 10px; padding-bottom: 10px; } .news-article hr { clear: both; border: none; } .news-article > .image { float: left; width: 155px; border: 1px solid #ccc; } .news-article > .image img { margin: 5px; border: 0; } .news-article > .news-body { float: left; max-width: 700px; margin-left: 10px; } .news-article > .news-body .date { font-style: italic; font-size: 12px; } .news-article > .news-body h3 { margin: 0 !important; } .block-aggregator .more-link { display: none; } #agreement_form .form_field label { display: inline-block !important; width: 100px; font-size: 14px; } #agreement_form .form_field .for_checkbox { display: inline !important; } #agreement_form .form_field input:not([type=checkbox]) { font-size: 14px; padding: 5px; width: 175px; border: 1px solid #999999; border-radius: 10px; color: #333333; outline: medium none; padding-left: 10px; } #agreement_form .form-submit { display: block; margin:auto; font-size: 18px; } #agreement_form .form_field { padding: 15px 0 10px 25px; } #location_proposal_form { background-color: #F9F9F9; border: 1px solid #ccc; border-radius: 10px; padding: 20px 10px; display: none; } #percona-training-location-form label { font-size: 13px; width: 8em; text-align: right; padding-top: 
5px; } #percona-training-location-form .form-text { font-size: 12px; padding: 5px; border: 1px solid #999999; border-radius: 10px; color: #333333; outline: medium none; left: 9em; } #percona-training-location-form .form-submit { display: block; margin:auto; font-size: 14px; margin-top: 10px; } #location_proposal_form .error { margin-bottom: 10px; } .view-percona-mysql-university a { text-decoration: none; } .view-percona-mysql-university li.views-row { margin-bottom: 15px !important; } .node-percona-mysql-university-event .where_content, .node-percona-mysql-university-event .agenda_content { padding-left: 25px; } .view-percona-mysql-university .views-field-field-city, .view-percona-mysql-university .views-field-field-country { font-size:16px; } .view-display-id-agenda_block .views-field-field-time { text-align:center; font-size: 11px; font-weight: bold; width: 75px; padding: 8px 0; border-bottom: 1px solid #ccc; } .view-display-id-agenda_block .views-field-title { padding: 8px 10px; border-bottom: 1px solid #ccc; } .view-display-id-agenda_block td.views-field-field-time { background-color: #eee; } .view-display-id-agenda_block .views-field-nothing, .view-display-id-agenda_block .views-field-field-slides-link-1 { font-size: 13px; padding: 8px 10px; text-align: center; border-bottom: 1px solid #ccc; } .view-display-id-agenda_block tr.item-percona-mysql-university-session { background: none; } .view-display-id-agenda_block tr.item-percona-mysql-university-session td.views-field-nothing, .view-display-id-agenda_block tr.item-percona-mysql-university-session .views-field-field-slides-link-1 { border-left: 1px solid #ccc; } .view-display-id-agenda_block td.views-field-field-slides-link-1 { border-right: 1px solid #ccc; } .node-percona-mysql-university-session img { margin-top: 0 !important; } .node-percona-mysql-university-session .our-team-list { } .social-share-toolbar { } .social-share-toolbar .share_button { display: inline-block; width: 36px; margin-right: 10px; height: 37px; background: url('/static/images/social_share_icons.png') top left no-repeat; } .social-share-toolbar .share_button span { display: none; } .social-share-toolbar .facebook { background-position: -34px 0; } .social-share-toolbar .linkedin { background-position: -69px 0; } .social-share-toolbar .googleplus { background-position: 0 0; width: 34px; } .social-share-toolbar .twitter { background-position: -105px 0; } h2 a, h2 a:visited { text-decoration: none; color: #000000; } #related-resources { padding-left: 10px; } #related-resources h3 { cursor: pointer; outline: none; } #related-resources h3 span { background: url("/static/images/arrow_bulletpoints.png") 5px center no-repeat; width: 15px; height: 9px; display: inline-block; margin-right: 5px; } #related-resources h3.ui-state-active span { background-position: -8px; } #related-resources > div { padding-left: 15px; } #related-resources .more-links { text-align: right; padding-right: 25px; } /** DOWNLOADS */ #downloads { } #downloads .product { padding: 25px 25px; border-bottom: 1px solid #eee; overflow: hidden; } #downloads .product h2 { display: none; } #downloads .product img { float: left; margin-right: 20px; width: 180px; } #downloads .product p { float: left; } #downloads .product .links { clear: both; overflow: hidden; } #downloads .product .links a{ font-size: 12px; display: block; text-decoration: none; width: 40%; text-align: center; float: left; } #downloads .product .links ul.major-versions { margin: 0; padding: 0; } #downloads .product .links 
ul.major-versions li { list-style: none; width: 22%; float: left; margin-right: 2%; } #downloads .product .links .minor-versions li { list-style: none !important; } #downloads .product .links ul.major-versions li a { clear: both; width: 100%; font-size: 12px; } #downloads .product .links ul.major-versions li a.latest { margin-bottom: 10px; font-size: 14px; } #downloads .product .links ul.major-versions li a .version { font-size: 20px; color: #fff; margin-bottom: 0px; } #downloads .product .links ul.major-versions li a span{ display: block; text-align: center; font-size:11px; } #downloads .product .links a.latest{ /** border: none; border-radius: 4px 4px 4px 4px; display: inline-block; height: 65px; margin-bottom: 10px; overflow: hidden; padding: 0; position: relative; text-decoration: none !important; transition: all 0.3s linear 0s; width: 100%; */ background: #0a4200; /* Old browsers */ background: -moz-linear-gradient(top, #de6616 0%, #de6616 100%); /* FF3.6+ */ background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#de6616), color-stop(100%,#de6616)); /* Chrome,Safari4+ */ background: -webkit-linear-gradient(top, #de6616 0%,#de6616 100%); /* Chrome10+,Safari5.1+ */ background: -o-linear-gradient(top, #de6616 0%,#de6616 100%); /* Opera 11.10+ */ background: -ms-linear-gradient(top, #de6616 0%,#de6616 100%); /* IE10+ */ background: linear-gradient(to bottom, #de6616 0%,#de6616 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#de6616', endColorstr='#de6616',GradientType=0 ); /* IE6-9 */ border: none; background-color:#de6616; font-size: 14px; border-radius: 0px; color: #fff; padding: 5px; } #downloads .other-products li { font-size: 14px; } #downloads .download-form { clear: both; overflow: hidden; } .side-column-block .download-form { margin-bottom: 20px; display: block; border: 1px solid #ccc; padding: 5px; } h2.w_links a{ color: #D5390B !important; } h2.w_links a:hover { text-decoration: underline !important; } .inapplp-button-dwnl { width: 40%; margin: 20px; float: left; } a.inapplp-button { background: #0a4200; /* Old browsers */ background: -moz-linear-gradient(top, #099b00 0%, #0a4200 100%); /* FF3.6+ */ background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#099b00), color-stop(100%,#0a4200)); /* Chrome,Safari4+ */ background: -webkit-linear-gradient(top, #099b00 0%,#0a4200 100%); /* Chrome10+,Safari5.1+ */ background: -o-linear-gradient(top, #099b00 0%,#0a4200 100%); /* Opera 11.10+ */ background: -ms-linear-gradient(top, #099b00 0%,#0a4200 100%); /* IE10+ */ background: linear-gradient(to bottom, #099b00 0%,#0a4200 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#099b00', endColorstr='#0a4200',GradientType=0 ); /* IE6-9 */ border: none; background-color:#0a4200; font-size: 16px; border-radius: 5px; color: #FBDC32; text-decoration: none; padding: 20px 0; width: 40%; margin: 20px; display: inline-block; float: left; text-align: center; } .2cols { clear: both; overflow: hidden; width: 100%; } #block-breadcrumb-breadcrumb { margin-bottom: 15px; } .pager li{ font-size: 16px; } .pager li a{ padding: 0 5px; text-decoration: none; } .transcript-speaker { float: left; width: 100px; font-weight: bold; } .transcript-text { margin-left: 120px; } /* * WM123 OVERRIDES */ body, html { color: #666; font-family: Arial; font-size: 14px; line-height: 1.6; } #contentcontainer { margin: 0 auto; overflow: hidden; width: 1024px; border: solid #ccc 0px; } .front #contentcontainer { padding-top: 
0px; } #content { float: left; width: 770px; margin-top:20px; } .field-content { margin-bottom:25px; } /* HEADER (Logo, Top Right Nav, Contact Button) */ .topNavContainer { width:1000px; margin-left:auto; margin-right:auto; margin-top:20px; } #topnav { width:720px; float:right; font-size:11px; text-align:right; } .search-field { margin-top:5px; } .search-button { border: 0px solid #006; background: #b4b3b3; color:#fff; text-transform:uppercase; padding:3px 5px 3px 5px; font-weight:bold; } .search-button:hover { background: #777; cursor:pointer; } .getHelpBox { border:0px solid #000; width:240px; height:30px; float:right; background-color:#e67b1a; text-align:center; color:#fff; font-weight:bold; font-size:16px; border-radius: 5px 5px 0px 0px; margin-bottom:0px; margin-top:25px; padding-top:5px; } .getHelpArrow img{ margin-left:5px; } .topnavLink a { font-size:11px; text-align:right; margin-left:10px; margin-right:10px; text-decoration:none; text-decoration:none; margin-right:8px; margin-left:8px; color:#333 important!; } .topnavLink a:hover { color:#d5390b important!; } #header { background: none repeat scroll 0 0 #fff; /*height: 0px;*/ overflow: visible; } #header .header .right .navicontainer { background: url("") no-repeat scroll left top #333; float: left; margin-left:80px; height: 35px; overflow: hidden; background-color: #333; } #header input[type="submit"] { background-color: #b6b6b6; border: medium none; border-radius: 0px; box-shadow: none; color: #fff; cursor: pointer; font-size: 12px; line-height: 1; padding: 6px; text-transform: uppercase; width: auto; font-family:arial; } #header input[type="submit"]:hover { background-color: #777; color: #fff; } #header input { background-color: #fff; border: 1px solid #e3e3e3; box-shadow: 1px 1px 3px #eee inset; color: #999; font-size: 12px; padding: 4px; width: 150px; border-radius: 0px; margin-right: 5px; } /* Eof HEADER (Logo, Top Right Nav, Contact Button) */ /* Main top nav */ .tb-megamenu{ position: relative; background-color: #000; } .tb-megamenu .nav-collapse.collapse { height: auto !important; overflow: visible !important; } .tb-megamenu .nav { margin: 0px; list-style: outside none none; padding: 0px !important; } .tb-megamenu .nav > li { float: left; margin-left: 0px; } .tb-megamenu .nav > li > a { display: block; border-right: 1px solid #222; border-top: 0px none; color: #FFF; font-size: 18px; font-weight: normal; padding: 13px 26px; text-shadow: none; text-transform: uppercase; text-decoration: none; } .tb-megamenu .nav li a:hover { background-color: #eff0f5 !important; color: #0e4a86 !important; text-decoration: none !important; } .tb-megamenu:after { clear: both; } .tb-megamenu:before, .tb-megamenu:after { content: ""; display: table; line-height: 0; } /* Eof main top nav */ .searchlink { overflow: hidden; zoom: 1; width: 35px; height: 35px; float: right; padding: 0px 0px 0px 10px; display:none; } .lifecycle-box { display: inline-block; padding: 5px; width: 130px; text-align: center; vertical-align: middle; text-decoration: none; background-color: #f9f9f9; border: 1px solid #ccc; height: 40px; font-size:12px; margin-bottom:10px; } .one-line-align { display: inline-block; margin-top:15px; } .two-line-align { display: inline-block; margin-top:10px; } .quotes-box .content { font-size:18px; line-height:20px; font-family:'times'; color:#626161; font-weight:normal; font-style:italic; padding:10px; border-top:1px solid #ccc; border-bottom:1px solid #ccc; } td.label {font-size:11px;} .content-featured-box .footer { color: #666666; 
font-size: 16px; margin-top: 5px; text-align: right; font-family:'times'; font-style:normal; } .content-featured-box .footer em { font-size: 12px; font-style: normal; } #header .header .logo{ float:left; overflow:hidden; padding:10px 0; width:220px; height:66px; zoom:1; } .team-name a { color: #d5390b; cursor: pointer; text-decoration: none !important; } .pagecontent div.our-team-profile img{ margin-top: 0px; margin-left: 0; margin-right: 0; } div.our-team-item .img img { border: 0px #e0e0e0 solid; padding: 0px 10px 0px 0px; } div.navi-dropdown .navi-dropdown-header-l { background: url('') no-repeat scroll left top rgba(0, 0, 0, 0); height: 0px; overflow: hidden; width: 255px; } div.navi-dropdown .navi-dropdown-footer { background: url("") no-repeat scroll left top rgba(0, 0, 0, 0); height: 10px; overflow: hidden; width: 255px; } div.navi-dropdown .navi-dropdown-content { background: url("") repeat-y scroll 0 0 #ffffff; color: #333333; font-family: "MyriadPro",Arial,Helvetica,sans-serif; font-size: 12px; line-height: 18px; overflow: hidden; padding: 0 15px 5px; width: 180px; margin-top:2px; border-top: 1px solid #333; border-left: 1px solid #333; border-right: 1px solid #333; border-bottom: 1px solid #333; } #header .header .right .navicontainer .navi span.selected, #header .header .right .navicontainer .navi span.selected:hover { background: #445c90 url("") no-repeat scroll left top; } #header .header .right .navicontainer .navi span a:hover { background: #445c90 url() top right no-repeat; color:#fff; background-color: #445c90; } #header .header .right .navicontainer .navi span.selected a.selected, #header .header .right .navicontainer .navi span.selected a.selected:visited, #header .header .right .navicontainer .navi span.selected a.selected:hover { background: url() top right no-repeat; background-color: #445c90; color: #ffffff; } #header .header .right .navicontainer .navi span a, #header .header .right .navicontainer .navi span a:visited { color: #fff; display: block; font-family: "MyriadPro",Arial,Helvetica,sans-serif; font-size: 18px; height: 17px; line-height: 20px; padding: 8px 24px 8px; text-decoration: none; text-transform: uppercase; position: relative; background-color: #333; float:left; margin-left:auto; margin-right:auto; } #header .header .right .navicontainer .navi span.selected a.selected, #header .header .right .navicontainer .navi span.selected a.selected:visited, #header .header .right .navicontainer .navi span.selected a.selected:hover { background: url("") no-repeat scroll right top #445c90; background-color: #445c90; color: #ffffff; } .home-box-titles a:hover{ text-decoration:none; } .home-box-titles { text-decoration:none; } .home-box-titles a{ text-decoration:none; } .home-box-links { text-decoration:none; color: #333; } .home-box-links a { text-decoration:none; color: #333; } .home-box-links a:hover{ text-decoration:none; color: #de6616; } .home-box-more-link a { margin-left:10px; font-weight:bold; color:#de6616; font-size:12px; text-decoration:none; } .home-box-more-link a:hover { margin-left:10px; font-weight:bold; color:#b94d06; font-size:12px; text-decoration:none; } .home-community-events-news-links ul { font-family: arial; font-size: 12px; margin-bottom: 5px; margin-left: 5px; padding-left: 10px; padding-top: 7px; } .home-community-events-news-links a { text-decoration:none; color: #333; } .home-community-events-news-links a:hover{ text-decoration:none; color: #de6616; } div.frontpage-banners { margin: 0 0 20px; overflow: hidden; padding: 0; width: 1024px; 
} div.frontpage-banners a.nivo-imageLink { border: 0 none; display: none; height: 200px; left: 0; margin: 0; padding: 0; position: absolute; top: 0; width: 1024px; z-index: 20; } div.frontpage-banners div.frontpage-banners-container div.nivo-controlNav { font-size: 28px; line-height: 25px; position: absolute; text-align: center; top: 200px; width: 1024px; display:none; } div.frontpage-banners div.frontpage-banners-container div.nivo-controlNav a { color: #ccc !important; padding: 0 3px; text-decoration: none !important; } div.frontpage-banners div.frontpage-banners-container { height: 200px; overflow: hidden; padding-bottom: 0; width: 1024px; z-index: 98; } #left-column { width: 220px; float: left; margin-right: 25px; } div.side-column-block .header { display:none; } #stripe { background: url("") no-repeat scroll center top #d24300; height: 0px; overflow: hidden; } .front h1#page-title { display: none; } .front #left-column { width: 165px; float: right; margin-right: 0px; border:1px solid #000; display: none; } .front #content { float: left; width: 100%; border:0px solid #000; margin-top:0px; } .front #content ul { padding-top: 7px; margin-bottom: 5px; margin-left: 5px; padding-left: 10px; font-size: 14px; font-family:'Open Sans',arial; } #content ul li { line-height: 20px; /*list-style-image: url("/sites/all/themes/percona/images/home-arrow-list.png");*/ list-style-type: square; margin-bottom: 5px; } .content-featured-box { background-color: #fff; color: #999999; float: right; font-size: 11px; margin: 0 0 10px 10px; padding: 20px; width: 750px; border:1px 1px solid #000; } .body h1 { color: #333; font-family: Arial,helvetica,Tahoma,Verdana; font-size: 24px; font-weight: bold; line-height: 30px; margin: 5px 0px 10px; text-transform: uppercase; border-bottom: 2px solid #E67B1A; } .body h2 { color: #333; font-family: helvetica,Arial,Tahoma,Verdana; font-size: 18px; font-weight: bold; line-height: 22px; margin: 5px 0 20px; text-transform: uppercase; border-bottom: 2px solid #de6616 } .breadcrumb { color: #000; font-size: 10px; text-decoration: none; text-transform: uppercase; } .breadcrumb a { color: #000; font-size: 10px; text-decoration: none; text-transform: uppercase; } .introText { font-size:16px; line-height:21px; margin-bottom:15px; } .breadcrumb a:hover { color: #de6616; text-decoration: none; text-transform: uppercase; } #left-column .navi-title { color: #333; font-size: 15px; font-weight:bold; margin-bottom: 0px; text-align:right; border-bottom: 2px solid #5073bc; text-transform: uppercase; margin-top:15px; } #left-column .navi-title-submain{ border-bottom: 0px solid #5073bc; color: #999; font-size: 15px; font-weight: bold; padding-right: 5px; margin-top: 15px; text-align: right; text-transform: uppercase; text-decoration:none; } #left-column .navi-title-submain a{ border-bottom: 0px solid #5073bc; color: #999; font-size: 15px; font-weight: bold; padding-right: 5px; margin-top: 15px; text-align: right; text-transform: uppercase; text-decoration:none; } #left-column .navi-title-submain a:hover{ color: #333; } #left-column .navi-title span { background: url("") no-repeat scroll right top rgba(0, 0, 0, 0); padding-left: 0px; padding-right: 10px; } #left-column .item a, #left-column .subitem a { background: url("/sites/all/themes/percona/images/leftnav-blue-gradation.jpg"); color: #666; display: block; font-size: 11px; min-height: 18px; padding: 7px 12px 7px 7px; text-decoration: none; text-align:right; border-bottom: 3px solid #333; border-right: 1px solid #e0dede; } #left-column .item 
a, #left-column .subitem .selected > a { background: url("/sites/all/themes/percona/images/leftnav-blue-gradation.jpg"); color: #666; display: block; font-size: 11px; min-height: 18px; padding: 7px 12px 7px 7px; text-decoration: none; text-align:right; border-bottom: 1px solid #e0dede; border-right: 1px solid #e0dede; } #left-column .item, #left-column .subitem .selected > { border-top: 0px solid #f3f1ee; } #left-column .item a:hover { text-decoration: none; color: #333; } #left-column .selected > a { font-weight:bold; font-size: 12px; color: #333; background: url('/sites/all/themes/percona/images/leftnav-active-arrow.png') right no-repeat; } /* About */ .SectionTitle { font-family:'GeosansLight'; font-size:20px; color:#bf3a32; line-height:22px; font-weight:bold; padding-left:0px; letter-spacing:1px; margin-top:30px; } .SectionContent { text-align:justify; line-height:18px; margin-top:7px; font-size:13px; padding:0px 10px 10px 0px; } /* Services */ .ServicesBoxContent { text-align:justify; line-height:18px; margin-top:10px; font-size:13px; padding:0px 10px 10px 10px; } .ServicesBoxPerconaTitle { font-family:'GeosansLight'; font-size:17px; letter-spacing:2px; color:#333; line-height:16px; padding-top:10px; padding-left:10px; } .ServicesBoxTitle { font-family:'GeosansLight'; font-size:20px; color:#bf3a32; line-height:22px; font-weight:bold; padding-left:10px; letter-spacing:1px; } .ServicesBoxOutline { margin-top:20px; width:362px; height:185px; border:1px solid #d9d9d9; -moz-border-radius: 8px; border-radius: 8px; background-color:#f7f7f7; float:left; } .ServicesBoxDetailsOutline { border-top:1px solid #d9d9d9; height:15px; } .ServicesBoxDetails { width:230px; float:left; border-right:1px solid #d9d9d9; font-size:11px; text-align:right; padding: 5px 10px; } .ServicesBoxLearnMore { width:90px; float:right; font-size:12px; font-weight:bold; padding: 3px 10px; text-align:right; } /* Software */ .SoftwareBoxOutline { margin-top:-10px; width:100%; border:1px solid #d9d9d9; -moz-border-radius: 0px 0px 8px 8px; border-radius: 0px 0px 8px 8px; background-color:#f7f7f7; float:left; } .SoftwareServicesBoxOutline { margin-top:20px; width:362px; height:185px; background-color:#f7f7f7; float:left; } .SoftwareClusterBoxOutline { margin-top:20px; width:362px; height:185px; background-color:#f7f7f7; float:left; } .SoftwareBoxDetailsOutline { border-top:1px solid #d9d9d9; border-bottom:1px solid #d9d9d9; height:25px; } .SoftwareBoxPerconaTitle { font-family:'GeosansLight'; font-size:17px; letter-spacing:2px; color:#333; line-height:16px; padding-top:10px; padding-left:10px; } .SoftwareBoxTitle { font-family:'GeosansLight'; font-size:20px; color:#bf3a32; line-height:22px; font-weight:bold; padding-left:10px; letter-spacing:1px; } .SoftwareBoxContent { text-align:justify; line-height:18px; margin-top:10px; font-size:13px; padding:0px 10px 10px 10px; } .SoftwareBoxDetails { width:230px; float:left; border-right:1px solid #d9d9d9; font-size:11px; text-align:right; padding: 5px 10px; } .SoftwareBoxLearnMore { width:90px; float:right; font-size:12px; font-weight:bold; padding: 3px 10px; text-align:right; } /* Resources */ .ResourcesBoxContent { text-align:justify; line-height:18px; margin-top:10px; font-size:13px; padding:0px 10px 10px 10px; } .ResourcesBoxTitle { font-family:'GeosansLight'; font-size:20px; color:#bf3a32; line-height:22px; font-weight:bold; padding-top:10px; padding-left:10px; letter-spacing:1px; } .ResourcesBoxOutline { margin-top:20px; width:362px; height:155px; border:1px solid #d9d9d9; 
-moz-border-radius: 8px; border-radius: 8px; background-color:#f7f7f7; float:left; } .ResourcesBoxDetailsOutline { border-top:1px solid #d9d9d9; height:15px; margin-top:20px; } .ResourcesBoxDetails { width:230px; float:left; border-right:1px solid #d9d9d9; font-size:11px; text-align:right; padding: 5px 10px; } .ResourcesBoxLearnMore { width:90px; float:right; font-size:12px; font-weight:bold; padding: 3px 10px; text-align:right; } #block-block-33 h2 { display:none; } /* #Tablet (Portrait) ================================================== */ /* Note: Design for a width of 768px */ @media screen and (max-width: 768px) { #contentcontainer { width: 768px; } #content { float: left; width: 490px; } .content-featured-box { width: 450px; } } /*============================================================= * MEGA MENU *------------------------------------------------------------- * This style sheet is divided into the following 3 sections: * 1 Fundamentals * 1.1 Reset * 1.2 Structure * 1.3 Default styles * 2 User Overrides * 2.1 Widths, margins, and padding * 2.2 Vertical menu * 2.3 Horizontal slots * 3 Skins *=============================================================*/ /* @group Fundamentals */ /* Reset *------------------------------------------------------------*/ /* @group Reset */ .megamenu-menu, .megamenu-menu *, .megamenu-menu .megamenu-parent, .megamenu-menu .megamenu-parent-title, .megamenu-menu .megamenu-bin, .megamenu-menu .megamenu-slot, .megamenu-menu .megamenu-slot-title, .megamenu-menu .megamenu-items { margin: 0; padding: 0; border: 0; outline: 0; font-size: 100%; font-weight: normal; vertical-align: baseline; background: transparent; } ul.megamenu-menu, .megamenu-menu ul.megamenu-items { list-style: none; } /* @end */ /* Structure *------------------------------------------------------------*/ /* @group Structure */ .megamenu-menu { overflow: visible; /* Contain floated elements */ } .megamenu-menu .megamenu-parent { display: inline; position: relative; z-index: 99; } .megamenu-menu .megamenu-parent-title { display: inline; } .megamenu-menu .megamenu-bin { position: absolute; left: 0; /* Default horizontal orientation */ top: -9000px; /* default hidden position */ z-index: 10000; overflow: hidden; } /* Bin Alignment *--------------------------------------------------------*/ .megamenu-menu .megamenu-bin-right{ left:auto; right:-500px; } /* @group Horizontal Links */ .megamenu-menu .megamenu-links-horizontal { overflow: hidden; } .megamenu-menu .megamenu-links-horizontal li, .megamenu-menu .megamenu-links-horizontal li a { float: left; } /* @end Horizontal Links */ .megamenu-menu .megamenu-menu-vertical .megamenu-parent { display: block; } /* @end Structure */ /* Default styles *--------------------------------------------------------*/ /* @group Default styles */ .megamenu-menu ul a { text-decoration: none; } .megamenu-menu ul a:hover { text-decoration: underline; } /* @end */ /* @end Fundamentals */ /* User Overrides * * This part will have to be dynamically generated in the * customization admin screen. The user can specify the * proper widths for structural elements according to the * ID of the mega menu. 
*--------------------------------------------------------*/ /* @group Widths, Padding, & Margins */ .megamenu-parent { margin: 0.1em 1em; /* Left margin affects fly-out value */ } .megamenu-menu .megamenu-bin { padding: 0.5em; } .megamenu-menu .megamenu-slot { width: 200px; /* move this to skins or php */ margin-bottom: 0.5em; } /* Vertical Orientation *--------------------------------------------------------*/ /* @group Vertical Menu */ .megamenu-menu-vertical { width: 8em; } .megamenu-menu-vertical .megamenu-parent { margin-top: 1em; } .megamenu-slots.flyright { left: 7em; /* [megamenu-menu-vertical] (width) - [megamenu-parent] (margin) */ top: 0; } .megamenu-slots.flyleft { left: -16.2em; /* Calculation of this value did not seem straightforward */ top: 0; } /* @end */ /* Horizontal Orientation *--------------------------------------------------------*/ /* @group Horizontal Slots */ /* This value will have to be calculated to account for slot widths + margins */ /* 2009.1009.1437 EFD: this width conflicted with our layout. it made more sense for us to set width on the slots themselves. unsure how the interface should handle this. */ .megamenu-menu .megamenu-slots-columnar { overflow: hidden; /* contain floated slots */ } .megamenu-slots-columnar li.megamenu-slot { float: left; } /* @end Horizontal Slots */ /* @end Widths*/ /* megamenu-menu end */ /* Compatibilty with Admin Menu */ .megamenu-menu { z-index:1000; } /* Skins * * Skin styles should be limited to typography, colors, and * backgrounds. Except for text elements (i.e. h3, megamenu-title, * etc.), widths, margins, and padding of structural blocks are * set in the administration area. *--------------------------------------------------------*/ /* @group Minimal */ .megamenu-skin-minimal { background-color:#bbb; } .megamenu-skin-minimal .megamenu-title { font-size:18px; font-weight:normal; } .megamenu-skin-minimal .megamenu-bin { background-color:#eee; border:2px groove #7e7e7e; -moz-border-radius:0.583em; -webkit-border-radius:0.583em; border-radius:0.583em; } .megamenu-skin-minimal .megamenu-slot { margin-right: 4px; border-bottom:1px solid #bbb; } .megamenu-skin-minimal .megamenu-slot-title, .megamenu-skin-minimal .megamenu-slot-title a { background-color:#bbb; color:#fff; /* white */ font-size:20px;; text-align:center; text-transform:uppercase; } .megamenu-skin-minimal .megamenu-link { font-size:85%; } /* @end Minimal*/ /* @group friendly (ucsf theme)*/ /* overrides - begin */ .megamenu-skin-friendly li{ margin:0px; padding:0px; line-height:120%; } .megamenu-skin-friendly *{ list-style:none; } /* overrides - end */ .megamenu-skin-friendly{ font-family:verdana,tahoma,arial,helvetica,sans-serif; background:#333; /* pharmacy medium gold */ padding:7px 0px 6px 10px; color:#fff; } .megamenu-skin-friendly a:hover{ text-decoration:none; } .megamenu-skin-friendly .megamenu-parent-title{ text-transform:uppercase; } .megamenu-skin-friendly .megamenu-parent-title{ font-weight:normal; font-size:100%; vertical-align:bottom; } .megamenu-skin-friendly .megamenu-parent-title a, .megamenu-skin-friendly .megamenu-parent-title span, .megamenu-skin-friendly .megamenu-parent-title a:visited{ font-style:normal; color:#fff; /* black */ padding:5px; } .megamenu-skin-friendly .megamenu-parent-title a, .megamenu-skin-friendly .megamenu-parent-title span, .megamenu-skin-friendly .hovering .megamenu-parent-title a:hover{ -moz-border-radius-topright:0px; -webkit-border-top-right-radius:0px; -moz-border-radius-topleft:0px; 
-webkit-border-top-left-radius:0px; } .megamenu-skin-friendly .megamenu-slot, .megamenu-skin-friendly .megamenu-slot a:hover{ -moz-border-radius:0px; -webkit-border-radius:0px; border-radius:0px; } .megamenu-skin-friendly li.megamenu-slot{ /*width:auto;*/ /*max-width:200px;*/ width:200px; margin:10px 10px auto auto; background:#445c90; /* pharmacy dark gold */ padding-top:0px; padding-bottom:10px; color:#333; } .megamenu-skin-friendly li.megamenu-slot:hover{ background-color:#445c90; /* pharmacy dark gold */ color:#333; } .megamenu-skin-friendly .hovering .megamenu-parent-title a, .megamenu-skin-friendly .hovering .megamenu-parent-title span{ color:#fff; background:#445c90; /* pharmacy dark gold */ } .megamenu-skin-friendly .hovering .megamenu-parent-title span:hover, .megamenu-skin-friendly .megamenu-slot-title span{ cursor:default; } .megamenu-skin-friendly .hovering .megamenu-parent-title a:hover{ color:#ccc; /* white */ background:#445c90; /* 540 blue */ } .megamenu-skin-friendly .megamenu-bin{ margin-top:5px; padding:0px; color:#000; /* white */ background: #eef1f6; border: 1px solid #000; z-index: 1000; } .megamenu-skin-friendly .megamenu-slot-title{ font-family:"Arial Black",Arial,Helvetica,sans-serif; color:#fff; /* 80% white on pharmacy dark gold */ } .megamenu-skin-friendly .megamenu-slot-title a:link, .megamenu-skin-friendly .megamenu-slot-title a:visited{ color:#fff; /* 80% white on pharmacy dark gold */ } .megamenu-skin-friendly .megamenu-slot-title a:link, .megamenu-skin-friendly .megamenu-slot-title a{ color:#333; /* 80% white on pharmacy dark gold */ } .megamenu-skin-friendly .megamenu-slot-title .megamenu-skin-friendly .megamenu-slot-title { color:#333; /* 80% white on pharmacy dark gold */ } .megamenu-skin-friendly .megamenu-slot-title a, .megamenu-skin-friendly .megamenu-slot-title span, .megamenu-skin-friendly .megamenu-item a, .megamenu-skin-friendly .megamenu-item span{ padding:5px 12px 7px 12px; display:block; } .megamenu-skin-friendly .megamenu-slot a{ color:#333; /* black */ } .megamenu-skin-friendly .megamenu-slot a:hover{ color:#333; /* black */ background:#e1e6f1; /* 540 blue */ } .megamenu-skin-friendly .megamenu-item a, .megamenu-skin-friendly .megamenu-item span{ color:#333; /* white */ font-size:12px;; background:#eef1f6; } .megamenu-skin-friendly .megamenu-item a:hover{ color:#333; /* white */ background:#e1e6f1; /* 540 blue */ } .megamenu-skin-friendly li.megamenu-slot{ z-index:3000; background:#eef1f6; /* pharmacy dark gold */ margin-bottom:1.5em; border:0px; border-radius:0px; -moz-border-radius:0px; -webkit-border-radius:0px; -opera-border-radius:0px; -khtml-border-radius:0px; -khtml-box-shadow:rgba(0,0,0,.33) 0px 0px 0px; -moz-box-shadow:rgba(0,0,0,.33) 0px 0px 0px; box-shadow:rgba(0,0,0,.33) 0px 0px 0px; color:#333; } .megamenu-skin-friendly ul a:hover{ text-decoration:none; } .megamenu-skin-friendly .megamenu-slot-title{ font-size:100%; line-height:100%; font-weight:bold; } /* start - ensure that #megamenu-menu bar is the same height in all browsers * very frustrating to get this correct - do not touch! 
*--------------------------------------------------------*/ .megamenu-skin-friendly .megamenu-item a, .megamenu-skin-friendly .megamenu-item span { font-size: 10px; line-height: 12px; } .megamenu-menu, .megamenu-skin-friendly .megamenu-parent, .megamenu-skin-friendly .megamenu-parent-title /* this is the key -- all these must be set to the same value */ { font-size:100%; line-height:100%; height:100%; } .megamenu-skin-friendly .megamenu-parent-title a, .megamenu-skin-friendly .megamenu-parent-title a:link, .megamenu-skin-friendly .megamenu-parent-title a:visited, .megamenu-skin-friendly .megamenu-parent-title a:focus, .megamenu-skin-friendly .megamenu-parent-title a:hover, .megamenu-skin-friendly .megamenu-parent-title a:active, .megamenu-skin-friendly .megamenu-parent-title span { padding:4px 8px; } /* end - ensure that #megamenu-menu bar is the same height in all browsers *--------------------------------------------------------*/ .megamenu-skin-friendly .megamenu-parent-title a:hover { text-decoration:none; color:#fff; /* white */ background:#445c90; /* 540 blue coated */ } /* @end Friendly */ #stickyfooter { margin: -97px auto 0; position: relative; } #footer { background-color: #EDECEC; border-top: 0; margin-top: 20px; overflow: hidden; /*padding: 0 60px; font-family: Arial,Helvetica,sans-serif; font-size: 11px; color: #E0E0E0; line-height: 15px; position: relative;*/ } #footer .footer { overflow: hidden; width: 1024px; margin: 0px auto; padding: 0px; } .footerContainer { max-width: 1000px; min-height: 250px; background-color: #EDECEC; margin-left: auto; margin-right: auto; padding-top: 10px; padding-left: 0px; } .footerNewsletterContainer { width: 190px; min-height: 200px; float: left; margin-right: 15px; margin-top: 10px; border-right: 1px solid #CCC; } .footerSocialIcon img { float: left; margin-right: 6px; width: 30px; height: 30px; } .footerNewsletterTitle { font-family: "Lucida Sans Unicode","Lucida Grande",sans-serif; font-size: 17px; color: #333; line-height: 1.1; margin: 20px 0px 10px; } #footer a { border: medium none; color: #333; cursor: pointer; text-decoration: none; } .footerNewsletter { width: 165px; background-color: #E67B1A; color: #FFF; padding: 5px; margin-bottom: 20px; text-align: center; text-transform: uppercase; font-weight: bold; } .footerLinksContainer { max-width: 580px; float: left; min-height: 200px; margin-top: 10px; } .footerGetHelpContainer { background-color: #DDD; float: right; margin-right: 0px; margin-top: -10px; min-height: 260px; width: 212px; } .footerLinksRow1Col1 { width: 140px; height: 120px; float: left; margin-right: 15px; margin-left: 35px; } .footerLinksTitle { font-family: Arial,Helvetica,sans-serif; font-weight: bold; font-size: 13px; color: #333; } ul.footerLinks { color: #333; font-family: Arial,Helvetica,sans-serif; font-size: 11px; font-style: normal; font-variant: normal; font-weight: normal; list-style-type: none; list-style-position: outside; text-decoration: none; text-indent: 0px; margin: 0px; padding: 0px; } .footerLinksRow1Col2 { width: 170px; height: 120px; float: left; margin-right: 15px; margin-left: 15px; } .footerLinksRow1Col3 { width: 150px; height: 130px; float: left; margin-right: 15px; margin-left: 15px; } .footerLinksRow2Col4 { width: 140px; height: 100px; float: left; margin-right: 15px; margin-left: 35px; } .footerLinksRow2Col5 { width: 170px; height: 100px; float: left; margin-right: 15px; margin-left: 15px; } .footerLinksRow2Col6 { width: 150px; height: 100px; float: left; margin-right: 15px; margin-left: 15px; } 
.footerGetHelpEmergency { background-color: #D12906; padding: 10px; color: #FFF; font-weight: bold; font-size: 18px; text-align: center; line-height: 1.3; } .footerPhoneNoContainer { margin: 15px; } .footerGetHelpNow { color: #FFF; font-size: 12px; text-align: center; } .footerSalesRegionTitle { font-size: 12px; color: #333; font-weight: bold; line-height: 12px; margin-top: 15px; } .footerPhoneNo { font-size: 11px; color: #424242; margin-bottom: 10px; } .copyright-container { width: 100%; background-color: #000; bottom: 0px; } .copyright { margin-left: auto; margin-right: auto; font-size: 10px; padding: 10px; color: #CCC; text-align: center; } .cta-sidebar-second { width: 236px; float: right; margin: 80px 0px 10px 20px; border-left: 1px solid #CCC; padding-left: 20px; } .CTA-buttons { background-color: #E67B1A; padding: 5px 5px 5px 3px; margin-bottom: 5px; } .a-btn-text-orange, .a-btn-text { color: #FFF; display: block; font-size: 14px; padding-left: 5px; padding-top: 4px; transition: all 0.3s linear 0s; white-space: nowrap; background-color: #E67B1A; } #block-cta-buttons .content { padding: 0; } percona-toolkit-3.1/config/sphinx-build/percona-theme/static/percona.com.js000664 001750 001750 00000016425 13535723557 030317 0ustar00jenkinsjenkins000000 000000 window.jQuery(function($) { function setCookie(c_name, value, expiredays) { var exdate = new Date(); exdate.setDate(exdate.getDate() + expiredays); document.cookie = c_name + "=" + escape(value) + ((expiredays == null) ? "" : ";path=/;expires=" + exdate.toUTCString()); } function getCookie(c_name) { if (document.cookie.length > 0) { c_start = document.cookie.indexOf(c_name + "="); if (c_start != -1) { c_start = c_start + c_name.length+1; c_end = document.cookie.indexOf(";",c_start); if (c_end == -1) { c_end = document.cookie.length; } return unescape(document.cookie.substring(c_start, c_end)); } } return ""; } NAVI = new Object(); NAVI.CloseTimer = null; NAVI.Open = function ( menu_tag, dir ) { NAVI_CancelTimer(); NAVI_Close(); SEARCH_CancelTimer(); SEARCH_Close(); if ('h' == dir) { return; } var pos = $("#navilink-span-"+menu_tag).offset(); if (dir=='r') { $("#navi-dropdown-"+menu_tag).css( { "position": "absolute", "left": (pos.left + ($("#navilink-span-"+menu_tag).width()) + 2 - ($("#navi-dropdown-"+menu_tag).width())) + "px", "top": (pos.top + 32) + "px" } ); } else { $("#navi-dropdown-"+menu_tag).css( { "position": "absolute", "left": (pos.left) + "px", "top": (pos.top + 32) + "px" } ); } $("#navi-dropdown-"+menu_tag).show(); $("#navilink-span-"+menu_tag).bind('mouseover',NAVI_CancelTimer); $("#navilink-span-"+menu_tag).bind('mouseout',NAVI_Timer); $("#navi-dropdown-"+menu_tag).bind('mouseover',NAVI_CancelTimer); $("#navi-dropdown-"+menu_tag).bind('mouseout',NAVI_Timer); }; function NAVI_Close() { $(".navi-dropdown").hide(); $("#menu div").unbind('mouseover'); $(".dropdown").unbind('mouseover'); $(".dropdown").unbind('mouseout'); $("#search-dropdown").hide(); $("#searchlink-anchor").unbind('mouseover'); $("#searchlink-anchor").unbind('mouseout'); $("#search-dropdown").unbind('mouseover'); $("#search-dropdown").unbind('mouseout'); } function NAVI_CancelTimer() { if (NAVI.CloseTimer!=null) { window.clearTimeout(NAVI.CloseTimer); NAVI.CloseTimer = null; } } function NAVI_Timer() { if (NAVI.CloseTimer==null) { NAVI.CloseTimer = window.setTimeout(NAVI_Close, 300); } } SUBNAVI = new Object(); SUBNAVI.Open = function ( menu_tag ) { if ($("#sidesubnavi-" + menu_tag + ':hidden').length) { $(".sidesubnavi").hide(); 
$("#sidesubnavi-"+menu_tag).show(); return false; } else { return true; } }; SEARCH = new Object(); SEARCH.CloseTimer = null; SEARCH.Open = function() { NAVI_CancelTimer(); NAVI_Close(); SEARCH_CancelTimer(); SEARCH_Close(); var pos = $("#searchlink-anchor").offset(); $("#search-dropdown").css( { "position": "absolute", "left": (pos.left - ($("#search-dropdown").width()) + 40) + "px", "top": (pos.top + 36) + "px" } ); $("#search-dropdown").show(); $("#searchlink-anchor").bind('mouseover',NAVI_CancelTimer); $("#searchlink-anchor").bind('mouseout',NAVI_Timer); $("#search-dropdown").bind('mouseover',NAVI_CancelTimer); $("#search-dropdown").bind('mouseout',NAVI_Timer); $("#search-input")[0].focus(); }; function SEARCH_Close() { $(".navi-dropdown").hide(); $("#menu div").unbind('mouseover'); $(".dropdown").unbind('mouseover'); $(".dropdown").unbind('mouseout'); $("#search-dropdown").hide(); $("#searchlink-anchor").unbind('mouseover'); $("#searchlink-anchor").unbind('mouseout'); $("#search-dropdown").unbind('mouseover'); $("#search-dropdown").unbind('mouseout'); } function SEARCH_CancelTimer() { if (SEARCH.CloseTimer!=null) { window.clearTimeout(SEARCH.CloseTimer); SEARCH.CloseTimer = null; } } function SEARCH_Timer() { if (SEARCH.CloseTimer==null) { SEARCH.CloseTimer = window.setTimeout(SEARCH_Close, 300); } } menuImg1 = new Image(); menuImg1.src = 'http://s1.percona.com/ui-dropdown-header-l.png'; menuImg2 = new Image(); menuImg2.src = 'http://s2.percona.com/ui-dropdown-header-r.png'; menuImg3 = new Image(); menuImg3.src = 'http://s3.percona.com/ui-dropdown-header-search.png'; menuImg4 = new Image(); menuImg4.src = 'http://s0.percona.com/ui-dropdown-bg.png'; menuImg5 = new Image(); menuImg5.src = 'http://s1.percona.com/ui-dropdown-footer.png'; }); var Percona = { ssl: false, host: 'www.percona.com' }; /** * @param string selector jQuery selector string */ Percona.getRecentServerVersion = function(selector) { if ('string' != typeof(selector)) { alert('Percona.getRecentServerVersion: missed or wrong selector!'); } /* Localize jQuery variable */ var jQuery; /******** Load jQuery if not present *********/ if (window.jQuery === undefined || window.jQuery.fn.jquery !== '1.4.2') { var script_tag = document.createElement('script'); script_tag.setAttribute("type","text/javascript"); script_tag.setAttribute('src', 'http' + (Percona.ssl ? 's' : '') + ':/' + '/ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js'); script_tag.onload = scriptLoadHandler; script_tag.onreadystatechange = function () /* Same thing but for IE */ { if (this.readyState == 'complete' || this.readyState == 'loaded') { scriptLoadHandler(); } }; /* Try to find the head, otherwise default to the documentElement */ (document.getElementsByTagName("head")[0] || document.documentElement).appendChild(script_tag); } else { /* The jQuery version on the window is the one we want to use */ jQuery = window.jQuery; main(); } var scriptLoadHandler_counter = 0; /******** Called once jQuery has loaded ******/ function scriptLoadHandler() { if (++scriptLoadHandler_counter > 1) { return; } /* Restore $ and window.jQuery to their previous values and store the new jQuery in our local jQuery variable */ jQuery = window.jQuery.noConflict(true); /* Call our main function */ main(jQuery); } /******** Our main function ********/ function main($) { var fillRecentServerVersion = function($) { if ($(selector).get(0)) { $.get('http' + (Percona.ssl ? 
's' : '') + ':/' + '/' + Percona.host + '/ajax/server-version/?callback=?', {}, function(json) { if ('object' == typeof(json) && 'string' == typeof(json.recentServerVersion)) { $(selector).text(' ' + json.recentServerVersion); } }, 'jsonp'); } }; $(document).ready(function() { fillRecentServerVersion(jQuery); }); } }; $(document).ready(function(){ $(window).bind("resize", resizeWindow); resizeWindow(); function resizeWindow() { var win_w = $(window).width(); var ribon = $("#support-ribbon"); if(win_w < 1265){ if(/mobile/i.test(navigator.userAgent)){ ribon.hide(); }else{ if(ribon.hasClass("vertical")){ ribon.removeClass("vertical"); ribon.addClass("horizontal"); } ribon.css({"left":'50%', "margin-left": '-'+(ribon.width() / 2)+'px'}); } }else{ if(ribon.hasClass("horizontal")){ ribon.addClass("vertical"); ribon.removeClass("horizontal"); ribon.removeAttr("style"); } } } });
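/*
 * Usage sketch (not part of the original percona.com.js): the
 * Percona.getRecentServerVersion() helper defined above loads jQuery 1.4.2 if
 * needed, fetches the latest server version over JSONP from Percona.host, and
 * writes it into every element matched by the selector. The
 * "#latest-server-version" id below is a hypothetical example element, not
 * something this theme defines; a page template could call the helper like so:
 */
// Percona.getRecentServerVersion('#latest-server-version');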
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/phone-small.png000664 001750 001750 00000002423 13535723557 030473 0ustar00jenkinsjenkins000000 000000 [binary PNG image data omitted]
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/phone.png000664 001750 001750 00000005242 13535723557 027367 0ustar00jenkinsjenkins000000 000000 [binary PNG image data omitted]
percona-toolkit-3.1/config/sphinx-build/percona-theme/static/side_buttons.png000664 001750 001750 00000062335 13535723557 030766 0ustar00jenkinsjenkins000000 000000 [binary PNG image data omitted, truncated in this section]