zmat-0.9.8/000077500000000000000000000000001366304620100125045ustar00rootroot00000000000000zmat-0.9.8/.gitmodules000066400000000000000000000000001366304620100146470ustar00rootroot00000000000000zmat-0.9.8/AUTHORS.txt000066400000000000000000000015741366304620100144010ustar00rootroot00000000000000ZMAT is written by Qianqian Fang. Qianqian is currently an Assistant Professor in the Department of Bioengineering at Northeastern University. Address: Dept. of Bioengineering Northeastern University 360 Huntington Ave, ISEC 206, Boston, MA 02115, USA URL: http://fanglab.org Email: ZMAT Website : https://github.com/fangq/zmat The base64_{encode,decode} functions in the zmat.cpp functions were copied from http://web.mit.edu/freebsd/head/contrib/wpa/src/utils/base64.c http://web.mit.edu/freebsd/head/contrib/wpa/src/utils/base64.h written by Jouni Malinen licensed under the BSD license The optional LZMA/LZIP compression support is provided by the easylzma library written by Lloyd Hilaiel, which was built on Igor Pavlov's LZMA library (both previous works were released under the public domain) https://github.com/lloyd/easylzma zmat-0.9.8/ChangeLog.txt000066400000000000000000000061661366304620100151050ustar00rootroot00000000000000= Change Log = == ZMAT 0.9.8 (Archie-the-goat - beta), FangQ == 2020-05-25 [b83efba] trim base64 newline, update compilezmat, update win64 mex, release 1.0 beta 2020-05-25 [ea83b12] move zmatlib.h header to the dedicated include folder 2020-05-25 [b333987] add fortran 90 function interface and demo code 2020-05-24 [f0cbcf5] add c example 2020-05-24 [18215a2] fix compilation instruction format 2020-05-24 [392d446] add doxygen documentation 2020-05-24 [6a3e038] fix crashing if input is empty 2020-05-24 [bc8349e] accept empty input, update help info, add cmake compilation 2020-05-24 [211cddb] change windows mex file permission 2020-05-23 [5261474] add cmake script 2020-05-23 [5fac274] add makefile in the top folder 2020-05-23 [cd45748] support compression level via iscompress, add code header 2020-05-23 [be5bba0] Merge branch 'master' of https://github.com/fangq/zmat 2020-05-23 [864e9d2] add cmake support 2020-05-11 [62ecc75] Add description for libzmat and interface 2020-05-11 [7252216] make compilezmat compatible with matlab 2010 2020-05-11 [6cedd38] fix all warnings for easylzma 2020-05-11 [1ac6464] include easylzma source tree directly for easy deployment and building 2019-10-07 [a477939] compile static and dynamic libraries == ZMAT 0.9.0 (Gus-the-duck), FangQ == 2019-09-17 [73c6257] update windows mex files 2019-09-16 [a6768b9] update mex files for mac os 2019-09-16*[a940d36] initial support for the fast lz4 compression method 2019-07-12 [2cd2ac6] Update formats 2019-07-12 [53485d9] Additional format updates 2019-07-12 [b3df542] Add compilation instructions == ZMAT 0.8.0 (Mox-the-fox), FangQ == 2019-07-11 [177ed52] move mex to private/, zmat.m can pre/post process, accept nd-array and logical, can restore array size/type 2019-07-11*[0412419] change makefile to compile both mex and library 2019-06-24 [274ce37] compile zmat on octave 5 2019-06-24 [a662701] update in-matlab compile script 2019-06-23 [68f35d0] place functions into a separate unit for libzmat.a 2019-06-03 [14a84a5] update changelog to add lzma support 2019-05-07 [27a8583] make lzma optional in compilation, use static library 2019-05-06*[3d8de61] support lzma compression via easylzma, close #1 == ZMAT 0.5.0 (Zac-the-rat), FangQ == 2019-05-04 [ ] tag and release v0.5.0 2019-05-04 [d8cd440] apply patch to compile on 
newer matlab and octave 2019-05-04 [bd099b9] handle large inflate output with dynamic buffer, compile with 1 command 2019-05-03 [86a0dea] return the zlib return value for debugging 2019-05-03 [f204eb4] add README 2019-05-02 [e3c2ae1] function is fully compatible with octave 2019-05-02 [d48a771] avoid windows mex error 2019-05-02 [ed388d7] zmat now supports base64 encoding and decoding 2019-05-01 [545876c] add help file 2019-05-01 [5ab9a67] compile zmat on mac 2019-05-01 [396cb6f] zmat now supports gnu octave on Linux 2019-05-01 [d6a48c6] now integrated with jsonlab 2019-05-01 [3c42350] first working version, both zipping and unzipping 2019-05-01 [9300e18] rename project to zmat 2019-04-30 [93b0a77] Initial commit zmat-0.9.8/LICENSE.txt000066400000000000000000001045141366304620100143340ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. 
Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. 
A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. 
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. 
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). 
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. 
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. 
For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. 
You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . 
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read .

zmat-0.9.8/Makefile

############################################################
# ZMat: A portable C-library and MATLAB/Octave toolbox for inline data compression
#
# Author: Qianqian Fang
############################################################

PKGNAME=zmat
LIBNAME=lib$(PKGNAME)
MEXNAME=zipmat
VERSION=0.9.0
SOURCE=src

all: mex oct lib dll

lib:
	-$(MAKE) -C $(SOURCE) lib
dll:
	-$(MAKE) -C $(SOURCE) dll
mex:
	-$(MAKE) -C $(SOURCE) mex
oct:
	-$(MAKE) -C $(SOURCE) oct

clean:
	-rm -rf $(LIBNAME).* $(MEXNAME).mex*
	-$(MAKE) -C $(SOURCE) clean

.DEFAULT_GOAL := mex

zmat-0.9.8/README.rst

##############################################################################
ZMAT: A portable C-library and MATLAB toolbox for zlib/gzip/lzma/lz4/lz4hc data compression
##############################################################################

* Copyright (C) 2019,2020 Qianqian Fang
* License: GNU General Public License version 3 (GPL v3), see License*.txt
* Version: 0.9.8 (Archie-the-goat - beta)
* URL: http://github.com/fangq/zmat

#################
Table of Contents
#################
.. contents::
  :local:
  :depth: 3

============
Introduction
============

ZMat provides both an easy-to-use C-based data compression library - `libzmat` -
as well as a portable mex function to enable `zlib/gzip/lzma/lzip/lz4/lz4hc` based
data compression/decompression and `base64` encoding/decoding support in MATLAB
and GNU Octave. It is fast and compact, and can process a large array within a
fraction of a second. Among the 6 supported compression methods, `lz4` is the
fastest for compression/decompression; `lzma` is the slowest but has the highest
compression ratio; `zlib/gzip` have the best balance between speed and compression
ratio.

The `libzmat` library, including the static library (`libzmat.a`) and the dynamic
library `libzmat.so` or `libzmat.dll`, provides a simple interface to conveniently
compress or decompress a memory buffer:

.. code:: c

   int zmat_run(
        const size_t inputsize,    /* input buffer data length */
        unsigned char *inputstr,   /* input buffer */
        size_t *outputsize,        /* output buffer data length */
        unsigned char **outputbuf, /* output buffer */
        const int zipid,           /* 0-zlib,1-gzip,2-base64,3-lzip,4-lzma,5-lz4,6-lz4hc */
        int *status,               /* return status for error handling */
        const int level            /* 1 compress (default level); -1 to -9 compression level, 0 decompress */
   );

The library is highly portable and can be directly embedded in the source code to
provide maximal portability. In the ``test`` folder, we provide sample codes to call
``zmat_run/zmat_encode/zmat_decode`` for stream-level compression and decompression
in C and Fortran90. The Fortran90 C-binding module can be found in the ``fortran90``
folder.

The ZMat MATLAB function accepts 3 types of inputs: char-based strings, numerical
arrays or vectors, or logical arrays/vectors. Any other input format will result
in an error unless you typecast the input into ``int8/uint8`` format.
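For example, the sketch below (illustrative only; the variable names are arbitrary)
shows how an input type that zmat does not directly accept, such as a complex vector,
can be typecast into a ``uint8`` byte stream before compression and recovered afterwards:

.. code-block:: matlab

   % a minimal sketch: complex arrays are not directly supported, so the
   % real/imaginary parts are typecast into a uint8 byte stream first
   data  = rand(1,10) + 1i*rand(1,10);                   % hypothetical complex input
   bytes = typecast([real(data), imag(data)], 'uint8');  % 160 raw bytes
   [compressed, info] = zmat(bytes);                     % compress with zlib (default)
   raw   = typecast(zmat(compressed, info), 'double');   % decompress, then typecast back
   restored = complex(raw(1:10), raw(11:20));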
A multi-dimensional numerical array is accepted, and the original input's type/dimension
info is stored in the 2nd output ``"info"``. If one calls ``zmat`` with both the encoded
data (in a byte vector) and the ``"info"`` structure, zmat will first decode the binary
data and then restore the original input's type and size.

ZMat uses ``zlib`` - an open-source and widely used library for data compression. On
Linux/Mac OSX, you need to have libz.so or libz.dylib installed in your system library
path (defined by the environment variables ``LD_LIBRARY_PATH`` or ``DYLD_LIBRARY_PATH``,
respectively).

The pre-compiled mex binaries for MATLAB are stored inside the subfolder named
``private``. Those precompiled for GNU Octave are stored in the subfolder named
``octave``, with one operating system per subfolder.

If you do not want to compile zmat yourself, you can download the precompiled package
by either clicking on the "Download ZIP" button on the above URL, or using the below
git command:

.. code:: shell

   git clone https://github.com/fangq/zmat.git

================
Installation
================

The installation of ZMat is no different from that of any other simple MATLAB toolbox.
You only need to download/unzip the package to a folder, and add the folder's path
(that contains ``zmat.m`` and the ``"private"`` folder) to MATLAB's path list by using
the following command:

.. code:: matlab

   addpath('/path/to/zmat');

For Octave, one needs to copy the ``zipmat.mex*`` file inside the "``octave``" folder,
from the subfolder matching the OS, into the "``private``" subfolder.

If you want to add this path permanently, you need to type "``pathtool``", browse to
the zmat root folder and add to the list, then click "Save". Then, run "``rehash``" in
MATLAB, and type "``which zmat``"; if you see an output, that means ZMat is installed
for MATLAB/Octave.

If you use MATLAB in a shared environment such as a Linux server, the best way to add
the path is to type

.. code:: shell

   mkdir ~/matlab/
   nano ~/matlab/startup.m

and type ``addpath('/path/to/zmat')`` in this file, save and quit the editor. MATLAB
will execute this file every time it starts. For Octave, the file you need to edit is
``~/.octaverc``, where "``~``" is your home directory.

================
Using ZMat in MATLAB
================

ZMat provides a single mex function, ``zipmat.mex*``, for both compressing/encoding
and decompressing/decoding data streams. The help info of the function is shown below.

----------
zmat.m
----------

.. code-block:: matlab

    output=zmat(input)
       or
    [output, info]=zmat(input, iscompress, method)
    output=zmat(input, info)

    A portable data compression/decompression toolbox for MATLAB/GNU Octave

    author: Qianqian Fang
    initial version created on 04/30/2019

    input:
        input: a char, non-complex numeric or logical vector or array
        iscompress: (optional) if iscompress is 1, zmat compresses/encodes the input,
            if 0, it decompresses/decodes the input. Default value is 1.

            if iscompress is set to a negative integer, (-iscompress) specifies
            the compression level. For zlib/gzip, default level is 6 (1-9); for
            lzma/lzip, default level is 5 (1-9); for lz4hc, default level is 8 (1-16).
            the default compression level is used if iscompress is set to 1.

            zmat removes the trailing newline when iscompress=2 and method='base64';
            all newlines are removed when iscompress=3 and method='base64'.

            if one defines iscompress as the info struct (2nd output of zmat),
            zmat will perform a decoding/decompression operation and recover the
            original input using the info stored in the info structure.
        method: (optional) compression method; currently, zmat supports the below methods
            'zlib':  zlib/zip based data compression (default)
            'gzip':  gzip formatted data compression
            'lzip':  lzip formatted data compression
            'lzma':  lzma formatted data compression
            'lz4':   lz4 formatted data compression
            'lz4hc': lz4hc (LZ4 with high-compression ratio) formatted data compression
            'base64': encode or decode using the base64 format

    output:
        output: a uint8 row vector, storing the compressed or decompressed data;
            empty when an error is encountered
        info: (optional) a struct storing additional info regarding the input data, may have
            'type':   the class of the input array
            'size':   the dimensions of the input array
            'byte':   the number of bytes per element in the input array
            'method': a copy of the 3rd input indicating the encoding method
            'status': the zlib/lzma/lz4 compression/decompression function return value,
                      including potential error codes; see documentation of the
                      respective libraries for details
            'level':  a copy of the iscompress flag; if non-zero, it specifies the
                      compression level, see above

    example:
        [ss, info]=zmat(eye(5))
        orig=zmat(ss,0)
        orig=zmat(ss,info)
        ss=char(zmat('zmat test',1,'base64'))
        orig=char(zmat(ss,0,'base64'))

    -- this function is part of the zmat toolbox (http://github.com/fangq/zmat)

---------
examples
---------

Under the ``"example"`` folder, you can find a demo script showing the basic utilities
of ZMat. Running the ``"demo_zmat_basic.m"`` script, you can see how to
compress/decompress a simple array, as well as apply base64 encoding/decoding to
strings.

Please run these examples and understand how ZMat works before you use it to process
your data.

==========================
Compile ZMat
==========================

To recompile ZMat, you first need to check out the ZMat source code, along with the
needed submodules, from the Github repository using the below command

.. code:: shell

   git clone https://github.com/fangq/zmat.git zmat

Next, you need to make sure your system has ``gcc``, ``g++``, ``mex`` and ``mkoctfile``
(if compiling for Octave is needed). If not, please install gcc, MATLAB and GNU Octave
and add the paths to these utilities to the system PATH environment variable.

To compile zmat, you may choose one of the three methods:

1. Method 1: please open MATLAB or Octave, and run the below commands

.. code-block:: matlab

   cd zmat/src
   compilezmat

On Windows, the above script utilizes the MinGW-w64 MATLAB compiler plugin. To install
the MinGW-w64 compiler plugin for MATLAB, please follow the below steps

- If you have MATLAB R2017b or later, you may skip this step. To compile zmat in
  MATLAB R2017a or earlier on Windows, you must pre-install the MATLAB support for
  MinGW-w64 compiler
  https://www.mathworks.com/matlabcentral/fileexchange/52848-matlab-support-for-mingw-w64-c-c-compiler

  Note: it appears that installing the above Add On is no longer working and may give
  an error at the download stage. In this case, you should install MSYS2 from
  https://www.msys2.org/. Once you install MSYS2, run "MSYS2 MinGW 64-bit" from the
  Start menu, and in the popup terminal window, type

.. code-block:: shell

   pacman -Syu
   pacman -S base-devel gcc git mingw-w64-x86_64-opencl-headers

  Then, start MATLAB, and in the command window, run

.. code-block:: matlab

   setenv('MW_MINGW64_LOC','C:\msys64\usr');

- After installation of MATLAB MinGW support, you must type ``mex -setup C`` in MATLAB
  and select "MinGW64 Compiler (C)".
- Once you select the MinGW C compiler, you should run ``mex -setup C++`` again in
  MATLAB and select "MinGW64 Compiler (C++)" to compile C++.

2. Method 2: Compile with cmake (3.3 or later)

Please open a terminal, and run the below shell commands

.. code-block:: shell

   cd zmat/src
   rm -rf build
   mkdir build && cd build
   cmake ../
   make clean
   make

If MATLAB is not installed in a standard path, you may change ``cmake ../`` to

.. code-block:: shell

   cmake -DMatlab_ROOT_DIR=/path/to/matlab/root ../

By default, this will first compile ``libzmat.a`` and then create the ``.mex`` file
that is statically linked with ``libzmat.a``. If one prefers to create a dynamic
library ``libzmat.so`` and then a dynamically linked ``.mex`` file, this can be done by

.. code-block:: shell

   cmake -DMatlab_ROOT_DIR=/path/to/matlab/root -DSTATIC_LIB=off ../

3. Method 3: please open a terminal, and run the below shell commands

.. code-block:: shell

   cd zmat/src
   make clean mex

to create the mex file for MATLAB, and run ``make clean oct`` to compile the mex file
for Octave.

The compiled mex files are named ``zipmat.mex*`` under the zmat root folder. One may
move those into the ``private`` folder to overwrite the existing files, or leave them
in the root folder. MATLAB/Octave will use these files when ``zmat`` is called.

==========================
Contribution and feedback
==========================

ZMat is an open-source project. This means you can not only use it and modify it as
you wish, but also contribute your changes back to ZMat so that everyone else can
enjoy the improvement. Anyone who wants to contribute can download the ZMat source
code from its source code repository by using the following command:

.. code:: shell

   git clone https://github.com/fangq/zmat.git zmat

or browse the github site at

.. code:: shell

   https://github.com/fangq/zmat

You can make changes to the files as needed. Once you are satisfied with your changes,
and ready to share them with others, please submit your changes as a "pull request"
on github. The project maintainer, Dr. Qianqian Fang, will review the changes and
decide whether to accept the patch. We appreciate any suggestions and feedback from
you.

Please use the iso2mesh mailing list to report any questions you may have regarding
ZMat: `iso2mesh-users `_ (Subscription to the mailing list is needed in order to post
messages).

==========================
Acknowledgement
==========================

ZMat is linked against 4 open-source data compression libraries:

1. ZLib library: https://www.zlib.net/
   * Copyright (C) 1995-2017 Jean-loup Gailly and Mark Adler
   * License: Zlib license
2. easylzma: https://github.com/lloyd/easylzma
   * Author: Lloyd Hilaiel (lloyd)
   * License: public domain
3. Original LZMA library:
   * Author: Igor Pavlov
   * License: public domain
4. LZ4 library: https://lz4.github.io/lz4/
   * Copyright (C) 2011-2019, Yann Collet.
* License: BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) zmat-0.9.8/example/000077500000000000000000000000001366304620100141375ustar00rootroot00000000000000zmat-0.9.8/example/demo_zmat_basic.m000066400000000000000000000007101366304620100174330ustar00rootroot00000000000000% addpath('../') % compression [dzip,info]=zmat(uint8(eye(5,5))) % decompression orig=reshape(zmat(dzip,0),info.size) % base64 encoding and decoding base64=zmat('zmat toolbox',1,'base64'); char(base64) orig=zmat(base64,0,'base64'); char(orig) % encode ND numeric array orig=single(rand(5)); [Aencoded,info]=zmat(orig,1,'lzma') % decode compressed ND array and restore size/type Adecoded=zmat(Aencoded,info) all(all(Adecoded==orig)) class(Adecoded) zmat-0.9.8/fortran90/000077500000000000000000000000001366304620100143305ustar00rootroot00000000000000zmat-0.9.8/fortran90/zmatlib.f90000066400000000000000000000132201366304620100163100ustar00rootroot00000000000000!------------------------------------------------------------------------------ ! ZMat - A portable C-library and MATLAB/Octave toolbox for inline data compression !------------------------------------------------------------------------------ ! ! module: zmatlib ! !> @author Qianqian Fang ! !> @brief An interface to call the zmatlib C-library ! ! DESCRIPTION: !> ZMat provides an easy-to-use interface for stream compression and decompression. !> !> It can be compiled as a MATLAB/Octave mex function (zipmat.mex/zmat.m) and compresses !> arrays and strings in MATLAB/Octave. It can also be compiled as a lightweight !> C-library (libzmat.a/libzmat.so) that can be called in C/C++/FORTRAN etc to !> provide stream-level compression and decompression. !> !> Currently, zmat/libzmat supports 6 different compression algorthms, including !> - zlib and gzip : the most widely used algorithm algorithms for .zip and .gz files !> - lzma and lzip : high compression ratio LZMA based algorithms for .lzma and .lzip files !> - lz4 and lz4hc : real-time compression based on LZ4 and LZ4HC algorithms !> - base64 : base64 encoding and decoding ! !> @section slicense License !> GPL v3, see LICENSE.txt for details !------------------------------------------------------------------------------ module zmatlib use iso_c_binding, only: c_char,c_size_t,c_int,c_ptr, c_loc, c_f_pointer implicit none !------------------------------------------------------------------------------ !> @brief Compression/encoding methods ! ! DESCRIPTION: !> 0: zmZlib !> 1: zmGzip !> 2: zmBase64 !> 3: zmLzip !> 4: zmLzma !> 5: zmLz4 !> 6: zmLz4hc !------------------------------------------------------------------------------ integer(c_int), parameter :: zmZlib=0, zmGzip=1, zmBase64=2, zmLzip=3, zmLzma=4, zmLz4=5, zmLz4hc=6 interface !------------------------------------------------------------------------------ !> @brief Main interface to perform compression/decompression ! !> @param[in] inputsize: input stream buffer length !> @param[in] inputstr: input stream buffer pointer !> @param[out] outputsize: output stream buffer length !> @param[out] outputbuf: output stream buffer pointer !> @param[out] ret: encoder/decoder specific detailed error code (if error occurs) !> @param[in] iscompress: 0: decompression, 1: use default compression level; !> negative interger: set compression level (-1, less, to -9, more compression) !> @return return the coarse grained zmat error code; detailed error code is in ret. 
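!>
!> A minimal, illustrative usage sketch (the variable names below are examples
!> only and are not part of the library):
!> @code
!>   use zmatlib
!>   use iso_c_binding
!>   character(kind=c_char), target :: input(5) = ['h','e','l','l','o']
!>   type(c_ptr)       :: outbuf
!>   integer(c_size_t) :: outlen
!>   integer(c_int)    :: res, errcode
!>   res = zmat_run(int(size(input), c_size_t), c_loc(input), outlen, outbuf, &
!>                  zmZlib, errcode, 1_c_int)
!>   call zmat_free(outbuf)  ! free the C-allocated output buffer after use
!> @endcode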
!------------------------------------------------------------------------------ integer(c_int) function zmat_run(inputsize, inputbuf, outputsize, outputbuf, zipid, ret, level) bind(C) use iso_c_binding, only: c_char,c_size_t,c_int,c_ptr integer(c_size_t), value :: inputsize integer(c_int), value :: zipid, level integer(c_size_t), intent(out) :: outputsize integer(c_int), intent(out) :: ret type(c_ptr), value, intent(in) :: inputbuf type(c_ptr),intent(out) :: outputbuf end function zmat_run !------------------------------------------------------------------------------ !> @brief Simplified interface to perform compression, same as zmat_run(...,1) ! !> @param[in] inputsize: input stream buffer length !> @param[in] inputstr: input stream buffer pointer !> @param[out] outputsize: output stream buffer length !> @param[out] outputbuf: output stream buffer pointer !> @param[out] ret: encoder/decoder specific detailed error code (if error occurs) !> @return return the coarse grained zmat error code; detailed error code is in ret. !------------------------------------------------------------------------------ integer(c_int) function zmat_encode(inputsize, inputbuf, outputsize, outputbuf, zipid, ret) bind(C) use iso_c_binding, only: c_char,c_size_t,c_int,c_ptr integer(c_size_t), value :: inputsize integer(c_int), value :: zipid integer(c_size_t), intent(out) :: outputsize integer(c_int), intent(out) :: ret type(c_ptr), value, intent(in) :: inputbuf type(c_ptr),intent(out) :: outputbuf end function zmat_encode !------------------------------------------------------------------------------ !> @brief Simplified interface to perform decompression, same as zmat_run(...,0) ! !> @param[in] inputsize: input stream buffer length !> @param[in] inputstr: input stream buffer pointer !> @param[out] outputsize: output stream buffer length !> @param[out] outputbuf: output stream buffer pointer !> @param[out] ret: encoder/decoder specific detailed error code (if error occurs) !> @return return the coarse grained zmat error code; detailed error code is in ret. !------------------------------------------------------------------------------ integer(c_int) function zmat_decode(inputsize, inputbuf, outputsize, outputbuf, zipid, ret) bind(C) use iso_c_binding, only: c_char,c_size_t,c_int,c_ptr integer(c_size_t), value :: inputsize integer(c_int), value :: zipid integer(c_size_t), intent(out) :: outputsize integer(c_int), intent(out) :: ret type(c_ptr), value, intent(in) :: inputbuf type(c_ptr),intent(out) :: outputbuf end function zmat_decode !------------------------------------------------------------------------------ !> @brief Deallocating the C-allocated output buffer, must be called after each zmat use ! 
!> @param[in,out] outputbuf: output stream buffer pointer !------------------------------------------------------------------------------ subroutine zmat_free(outputbuf) bind(C) use iso_c_binding, only: c_ptr type(c_ptr),intent(inout) :: outputbuf end subroutine zmat_free end interface end module zmat-0.9.8/include/000077500000000000000000000000001366304620100141275ustar00rootroot00000000000000zmat-0.9.8/include/zmatlib.h000066400000000000000000000133441366304620100157470ustar00rootroot00000000000000/***************************************************************************//** ** \mainpage ZMat - A portable C-library and MATLAB/Octave toolbox for inline data compression ** ** \author Qianqian Fang ** \copyright Qianqian Fang, 2019-2020 ** ** ZMat provides an easy-to-use interface for stream compression and decompression. ** ** It can be compiled as a MATLAB/Octave mex function (zipmat.mex/zmat.m) and compresses ** arrays and strings in MATLAB/Octave. It can also be compiled as a lightweight ** C-library (libzmat.a/libzmat.so) that can be called in C/C++/FORTRAN etc to ** provide stream-level compression and decompression. ** ** Currently, zmat/libzmat supports 6 different compression algorithms, including ** - zlib and gzip : the most widely used algorithms for .zip and .gz files ** - lzma and lzip : high compression ratio LZMA based algorithms for .lzma and .lzip files ** - lz4 and lz4hc : real-time compression based on LZ4 and LZ4HC algorithms ** - base64 : base64 encoding and decoding ** ** Dependency: ZLib library: https://www.zlib.net/ ** author: (C) 1995-2017 Jean-loup Gailly and Mark Adler ** ** Dependency: LZ4 library: https://lz4.github.io/lz4/ ** author: (C) 2011-2019, Yann Collet, ** ** Dependency: Original LZMA library ** author: Igor Pavlov ** ** Dependency: easylzma: https://github.com/lloyd/easylzma ** author: Lloyd Hilaiel (lloyd) ** ** Dependency: base64_encode()/base64_decode() ** \copyright 2005-2011, Jouni Malinen ** ** \section slicense License ** GPL v3, see LICENSE.txt for details *******************************************************************************/ /***************************************************************************//** \file zmatlib.h @brief zmat library header file *******************************************************************************/ #ifndef ZMAT_LIB_H #define ZMAT_LIB_H #ifdef __cplusplus extern "C" { #endif /** * @brief Compression/encoding methods * * 0: zlib * 1: gzip * 2: base64 * 3: lzip * 4: lzma * 5: lz4 * 6: lz4hc */ enum TZipMethod {zmZlib, zmGzip, zmBase64, zmLzip, zmLzma, zmLz4, zmLz4hc}; /** * @brief Main interface to perform compression/decompression * * @param[in] inputsize: input stream buffer length * @param[in] inputstr: input stream buffer pointer * @param[out] outputsize: output stream buffer length * @param[out] outputbuf: output stream buffer pointer * @param[out] ret: encoder/decoder specific detailed error code (if error occurs) * @param[in] iscompress: 0: decompression, 1: use default compression level; * negative integer: set compression level (-1, less, to -9, more compression) * @return return the coarse grained zmat error code; detailed error code is in ret.
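 *
 * A minimal usage sketch (for illustration only; the input buffer name and
 * length below are hypothetical). The output buffer is allocated inside the
 * library and must be released with zmat_free():
 * @code
 *     unsigned char *outbuf = NULL;
 *     size_t outlen = 0;
 *     int status = 0;
 *     int err = zmat_run(inlen, inbuf, &outlen, &outbuf, zmZlib, &status, 1);
 *     if (err == 0) {
 *         // use outbuf[0] .. outbuf[outlen - 1]
 *     }
 *     zmat_free(&outbuf);
 * @endcode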
*/ int zmat_run(const size_t inputsize, unsigned char *inputstr, size_t *outputsize, unsigned char **outputbuf, const int zipid, int *ret, const int iscompress); /** * @brief Simplified interface to perform compression (use default compression level) * * @param[in] inputsize: input stream buffer length * @param[in] inputstr: input stream buffer pointer * @param[out] outputsize: output stream buffer length * @param[out] outputbuf: output stream buffer pointer * @param[out] ret: encoder/decoder specific detailed error code (if error occurs) * @return return the coarse grained zmat error code; detailed error code is in ret. */ int zmat_encode(const size_t inputsize, unsigned char *inputstr, size_t *outputsize, unsigned char **outputbuf, const int zipid, int *ret); /** * @brief Simplified interface to perform decompression * * @param[in] inputsize: input stream buffer length * @param[in] inputstr: input stream buffer pointer * @param[out] outputsize: output stream buffer length * @param[out] outputbuf: output stream buffer pointer * @param[out] ret: encoder/decoder specific detailed error code (if error occurs) * @return return the coarse grained zmat error code; detailed error code is in ret. */ int zmat_decode(const size_t inputsize, unsigned char *inputstr, size_t *outputsize, unsigned char **outputbuf, const int zipid, int *ret); /** * @brief Free the output buffer to facilitate use in Fortran * * @param[in,out] outputbuf: the outputbuf buffer's initial address to be freed */ void zmat_free(unsigned char **outputbuf); /** * @brief Look up a string in a string list and return the index * * @param[in] origkey: string to be looked up * @param[in] table: the dictionary where the string is searched * @return if found, return the index of the string in the dictionary, otherwise -1. */ int zmat_keylookup(char *origkey, const char *table[]); /** * @brief Convert error code to a string error message * * @param[in] id: zmat error code */ char *zmat_error(int id); /** * @brief base64_encode - Base64 encode * @src: Data to be encoded * @len: Length of the data to be encoded * @out_len: Pointer to output length variable, or %NULL if not used * Returns: Allocated buffer of out_len bytes of encoded data, * or %NULL on failure * * Caller is responsible for freeing the returned buffer. Returned buffer is * nul terminated to make it easier to use as a C string. The nul terminator is * not included in out_len. */ unsigned char * base64_encode(const unsigned char *src, size_t len, size_t *out_len); /** * base64_decode - Base64 decode * @src: Data to be decoded * @len: Length of the data to be decoded * @out_len: Pointer to output length variable * Returns: Allocated buffer of out_len bytes of decoded data, * or %NULL on failure * * Caller is responsible for freeing the returned buffer.
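 *
 * A round-trip sketch (for illustration only; data/datalen are hypothetical),
 * pairing base64_encode() above with this decoder; both returned buffers are
 * owned and freed by the caller:
 *
 *     size_t elen = 0, dlen = 0;
 *     unsigned char *enc = base64_encode(data, datalen, &elen);
 *     unsigned char *dec = enc ? base64_decode(enc, elen, &dlen) : NULL;
 *     free(dec);
 *     free(enc);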
*/ unsigned char * base64_decode(const unsigned char *src, size_t len, size_t *out_len); #ifdef __cplusplus } #endif #endif zmat-0.9.8/src/000077500000000000000000000000001366304620100132735ustar00rootroot00000000000000zmat-0.9.8/src/CMakeLists.txt000066400000000000000000000033611366304620100160360ustar00rootroot00000000000000################################################################# # CMake configure file for ZMat # Qianqian Fang # 2020/05/23 ################################################################# cmake_minimum_required(VERSION 3.3) project(zmat) find_package(ZLIB REQUIRED) find_package(Matlab) option(STATIC_LIB "Build static library" ON) # C Options set(CMAKE_C_FLAGS "-g -Wall -O3 -fPIC") set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/../) # Add include directories include_directories(../include) include_directories(lz4) include_directories(easylzma) include_directories(easylzma/pavlov) # Add all project units if(STATIC_LIB) add_library(zmat STATIC zmatlib.c lz4/lz4.c lz4/lz4hc.c easylzma/compress.c easylzma/decompress.c easylzma/lzma_header.c easylzma/lzip_header.c easylzma/common_internal.c easylzma/pavlov/LzmaEnc.c easylzma/pavlov/LzmaDec.c easylzma/pavlov/LzmaLib.c easylzma/pavlov/LzFind.c easylzma/pavlov/Bra.c easylzma/pavlov/BraIA64.c easylzma/pavlov/Alloc.c easylzma/pavlov/7zCrc.c ) else() # Add all project units add_library(zmat SHARED zmatlib.c lz4/lz4.c lz4/lz4hc.c easylzma/compress.c easylzma/decompress.c easylzma/lzma_header.c easylzma/lzip_header.c easylzma/common_internal.c easylzma/pavlov/LzmaEnc.c easylzma/pavlov/LzmaDec.c easylzma/pavlov/LzmaLib.c easylzma/pavlov/LzFind.c easylzma/pavlov/Bra.c easylzma/pavlov/BraIA64.c easylzma/pavlov/Alloc.c easylzma/pavlov/7zCrc.c ) endif() # Link options target_link_libraries( zmat ZLIB::ZLIB ) if(Matlab_FOUND) matlab_add_mex( NAME zipmat SRC zmat.cpp LINK_TO mex mx zmat ) endif() zmat-0.9.8/src/Makefile000066400000000000000000000076231366304620100147430ustar00rootroot00000000000000################################################################# # Makefile for ZMAT # Qianqian Fang # 2019/04/30 ################################################################# BACKEND ?= ROOTDIR ?=.. 
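# Note: an out-of-tree CMake build (see src/CMakeLists.txt) is an alternative
# to this Makefile; a typical invocation sketch (assuming cmake and the zlib
# development headers are installed) is:
#   mkdir build && cd build && cmake .. -DSTATIC_LIB=ON && make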
ZMATDIR ?=$(ROOTDIR) LIBDIR ?=$(ROOTDIR)/lib MKDIR :=mkdir MEX=mex AR=$(CC) ECHO := echo BINARY:=zipmat OUTPUT_DIR=$(ZMATDIR) DOXY := doxygen DOCDIR := $(ZMATDIR)/doc DOXYCFG=zmat.cfg INCLUDEDIRS=-I../include -Ieasylzma -Ieasylzma/pavlov -Ilz4 CUOMPLINK= ARCH = $(shell uname -m) PLATFORM = $(shell uname -s) DLLFLAG=-fPIC OMP=-fopenmp CPPOPT=-g -Wall -O3 -fPIC #-g -Wall -std=c99 # -DUSE_OS_TIMER OUTPUTFLAG:=-o OBJSUFFIX=.o EXESUFFIX=.mex* FILES=zmatlib lz4/lz4 lz4/lz4hc easylzma/compress easylzma/decompress \ easylzma/lzma_header easylzma/lzip_header easylzma/common_internal \ easylzma/pavlov/LzmaEnc easylzma/pavlov/LzmaDec easylzma/pavlov/LzmaLib \ easylzma/pavlov/LzFind easylzma/pavlov/Bra easylzma/pavlov/BraIA64 \ easylzma/pavlov/Alloc easylzma/pavlov/7zCrc ifeq ($(findstring CYGWIN,$(PLATFORM)), CYGWIN) ifeq ($(findstring x86_64,$(ARCH)), x86_64) LINKOPT=-L"$(CUDA_PATH)/lib/x64" $(CUDART) else LINKOPT=-L"$(CUDA_PATH)/lib/Win32" $(CUDART) endif INCLUDEDIRS +=-I"$(CUDA_PATH)/lib/include" CPPOPT =-c -D_CRT_SECURE_NO_DEPRECATE -DWIN32 OBJSUFFIX=.obj EXESUFFIX= DLLFLAG= OMP=-fopenmp MEX=cmd /c mex else ifeq ($(findstring Darwin,$(PLATFORM)), Darwin) else CPPOPT+= CUCCOPT+=-Xcompiler $(OMP) ifeq ($(findstring x86_64,$(ARCH)), x86_64) CPPOPT += CUCCOPT +=-m64 ifeq "$(wildcard /usr/local/cuda/lib64)" "/usr/local/cuda/lib64" ifeq ($(BACKEND),cuda) LINKOPT=-L/usr/local/cuda/lib64 $(CUDART) -lm -lstdc++ else ifeq ($(BACKEND),cudastatic) LINKOPT=-L/usr/local/cuda/lib64 $(CUDART) -lm -static-libgcc -static-libstdc++ endif endif endif endif ifeq ($(MAKECMDGOALS),lib) AR :=ar ARFLAGS :=cr BINARY :=libzmat.a AROUTPUT := LINKOPT := OUTPUT_DIR :=$(LIBDIR) ifeq ($(findstring Darwin,$(PLATFORM)), Darwin) OUTPUTFLAG := endif endif ifeq ($(MAKECMDGOALS),dll) OUTPUTFLAG :=-o BINARY :=libzmat.so OUTPUT_DIR :=$(LIBDIR) ifeq ($(findstring Darwin,$(PLATFORM)), Darwin) ARFLAGS :=-shared -Wl,-install_name,$(BINARY).1 -lz else ARFLAGS :=-shared -Wl,-soname,$(BINARY).1 -lz endif endif ifeq ($(MAKECMDGOALS),dll) BINARY :=libzmat.so endif dll: CPPOPT +=$(DLLFLAG) dll: AR :=gcc dll: ARFLAGS ?=-shared -Wl,-soname,$(BINARY).1 dll: LINKOPT := dll: AROUTPUT :=-o oct mex: CPPOPT+= $(DLLFLAG) oct: OUTPUT_DIR=.. oct: AR= CXXFLAGS='-O3' LFLAGS='$(-lz)' LDFLAGS='$(LFLAGS)' mkoctfile zmat.cpp oct: BINARY=zmat.mex oct: ARFLAGS := oct: LINKOPT+=--mex $(INCLUDEDIRS) oct: CXX=mkoctfile mex: CXX=$(MEX) mex: OUTPUTFLAG:=-output mex: AR=$(MEX) zmat.cpp $(INCLUDEDIRS) mex: LINKOPT+= -cxx CXXLIBS='$$CXXLIBS -lz -static-libgcc -static-libstdc++' -outdir $(ZMATDIR) mex: ARFLAGS := mex: OUTPUT_DIR=.. all: mex TARGETSUFFIX:=$(suffix $(BINARY)) doc: makedocdir $(DOXY) $(DOXYCFG) OBJS := $(addsuffix $(OBJSUFFIX), $(FILES)) all dll lib mex oct: $(OUTPUT_DIR)/$(BINARY) makedirs: @if test ! -d $(OUTPUT_DIR); then $(MKDIR) $(OUTPUT_DIR); fi makedocdir: @if test ! 
-d $(DOCDIR); then $(MKDIR) $(DOCDIR); fi $(OUTPUT_DIR)/$(BINARY): makedirs $(OBJS) $(OUTPUT_DIR)/$(BINARY): $(OBJS) @$(ECHO) Building $@ $(AR) $(ARFLAGS) $(OUTPUTFLAG) $@ $(OBJS) $(LINKOPT) $(USERLINKOPT) %$(OBJSUFFIX): %.cpp $(CXX) $(INCLUDEDIRS) $(CPPOPT) -c -o $@ $< %$(OBJSUFFIX): %.c @$(ECHO) Building $@ $(CC) $(INCLUDEDIRS) $(CPPOPT) -c -o $@ $< %$(OBJSUFFIX): %.cu @$(ECHO) Building $@ $(CUDACC) -c $(CUCCOPT) -o $@ $< clean: -rm -f $(OBJS) $(OUTPUT_DIR)/$(BINARY)$(EXESUFFIX) zmat$(OBJSUFFIX) $(LIBDIR)/* .PHONY: all mex oct lib dll .DEFAULT_GOAL := all zmat-0.9.8/src/compilezmat.m000066400000000000000000000062641366304620100160050ustar00rootroot00000000000000%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Compilation script for zmat in MATLAB and GNU Octave % % author: Qianqian Fang % % Dependency (Windows only): % 1.If you have MATLAB R2017b or later, you may skip this step. % To compile mcxlabcl in MATLAB R2017a or earlier on Windows, you must % pre-install the MATLAB support for MinGW-w64 compiler % https://www.mathworks.com/matlabcentral/fileexchange/52848-matlab-support-for-mingw-w64-c-c-compiler % % Note: it appears that installing the above Add On is no longer working % and may give an error at the download stage. In this case, you should % install MSYS2 from https://www.msys2.org/. Once you install MSYS2, % run MSYS2.0 MinGW 64bit from Start menu, in the popup terminal window, % type % % pacman -Syu % pacman -S base-devel gcc git mingw-w64-x86_64-opencl-headers % % Then, start MATLAB, and in the command window, run % % setenv('MW_MINGW64_LOC','C:\msys64\usr'); % 2.After installation of MATLAB MinGW support, you must type % "mex -setup C" in MATLAB and select "MinGW64 Compiler (C)". % 3.Once you select the MingW C compiler, you should run "mex -setup C++" % again in MATLAB and select "MinGW64 Compiler (C++)" to compile C++. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% filelist={'lz4/lz4.c','lz4/lz4hc.c','easylzma/compress.c','easylzma/decompress.c', ... 'easylzma/lzma_header.c', 'easylzma/lzip_header.c', 'easylzma/common_internal.c', ... 'easylzma/pavlov/LzmaEnc.c', 'easylzma/pavlov/LzmaDec.c', 'easylzma/pavlov/LzmaLib.c' ... 'easylzma/pavlov/LzFind.c', 'easylzma/pavlov/Bra.c', 'easylzma/pavlov/BraIA64.c' ... 
'easylzma/pavlov/Alloc.c', 'easylzma/pavlov/7zCrc.c','zmatlib.c'}; mexfile='zmat.cpp'; suffix='.o'; if(ispc) suffix='.obj'; end if(~exist('OCTAVE_VERSION','builtin')) delete(['*',suffix]); if(ispc) CCFLAG='CFLAGS=''-O3 -g -I../include -Ieasylzma -Ieasylzma/pavlov -Ilz4'' -c'; LINKFLAG='CXXLIBS=''$CLIBS -lz'' -output ../zipmat -outdir ../'; else CCFLAG='CFLAGS=''-O3 -g -I../include -Ieasylzma -Ieasylzma/pavlov -Ilz4 -fPIC'' -c'; LINKFLAG='CXXLIBS=''\$CLIBS -lz'' -output ../zipmat -outdir ../'; end for i=1:length(filelist) fprintf(1,'mex %s %s\n', CCFLAG, filelist{i}); eval(sprintf('mex %s %s', CCFLAG, filelist{i})); end filelist=dir(['*' suffix]); filelist={filelist.name}; cmd=sprintf('mex %s -I../include -Ieasylzma %s %s',mexfile, LINKFLAG, sprintf('%s ' ,filelist{:})); fprintf(1,'%s\n',cmd); eval(cmd) else delete('*.o'); CCFLAG='-O3 -g -c -I../include -Ieasylzma -Ieasylzma/pavlov -Ilz4'; LINKFLAG='-o ../zipmat -lz'; for i=1:length(filelist) fprintf(stdout,'mex %s %s\n', CCFLAG, filelist{i}); fflush(stdout); eval(sprintf('mex %s %s', CCFLAG, filelist{i})); end if(ispc) filelist=dir(['*.o']); filelist={filelist.name}; end cmd=sprintf('mex %s -I../include -Ieasylzma %s %s',mexfile, LINKFLAG, regexprep(sprintf('%s ' ,filelist{:}),'\.c[p]*','\.o')); fprintf(stdout,'%s\n',cmd);fflush(stdout); eval(cmd) end zmat-0.9.8/src/easylzma/000077500000000000000000000000001366304620100151205ustar00rootroot00000000000000zmat-0.9.8/src/easylzma/Makefile000066400000000000000000001011121366304620100165540ustar00rootroot00000000000000# CMAKE generated file: DO NOT EDIT! # Generated by "Unix Makefiles" Generator, CMake Version 3.10 # Default target executed when no arguments are given to make. default_target: all .PHONY : default_target # Allow only one "make -f Makefile2" at a time, but pass parallelism. .NOTPARALLEL: #============================================================================= # Special targets provided by cmake. # Disable implicit rules so canonical targets will work. .SUFFIXES: # Remove some rules from gmake that .SUFFIXES does not remove. SUFFIXES = .SUFFIXES: .hpux_make_needs_suffix_list # Suppress display of executed commands. $(VERBOSE).SILENT: # A target that is always out of date. cmake_force: .PHONY : cmake_force #============================================================================= # Set environment variables for the build. # The shell in which to execute make rules. SHELL = /bin/sh # The CMake executable. CMAKE_COMMAND = /usr/bin/cmake # The command to remove a file. RM = /usr/bin/cmake -E remove -f # Escaping for special characters. EQUALS = = # The top-level source directory on which CMake was run. CMAKE_SOURCE_DIR = /home/fangq/space/git/Project/github/zmat/src/easylzma # The top-level build directory on which CMake was run. CMAKE_BINARY_DIR = /home/fangq/space/git/Project/github/zmat/src/easylzma #============================================================================= # Targets provided globally by CMake. # Special rule for the target rebuild_cache rebuild_cache: @$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "Running CMake to regenerate build system..." /usr/bin/cmake -H$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) .PHONY : rebuild_cache # Special rule for the target rebuild_cache rebuild_cache/fast: rebuild_cache .PHONY : rebuild_cache/fast # Special rule for the target edit_cache edit_cache: @$(CMAKE_COMMAND) -E cmake_echo_color --switch=$(COLOR) --cyan "No interactive CMake dialog available..." 
/usr/bin/cmake -E echo No\ interactive\ CMake\ dialog\ available. .PHONY : edit_cache # Special rule for the target edit_cache edit_cache/fast: edit_cache .PHONY : edit_cache/fast # The main all target all: cmake_check_build_system cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(CMAKE_COMMAND) -E cmake_progress_start /home/fangq/space/git/Project/github/zmat/src/easylzma/CMakeFiles /home/fangq/space/git/Project/github/zmat/src/easylzma/src/CMakeFiles/progress.marks cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f CMakeFiles/Makefile2 src/all $(CMAKE_COMMAND) -E cmake_progress_start /home/fangq/space/git/Project/github/zmat/src/easylzma/CMakeFiles 0 .PHONY : all # The main clean target clean: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f CMakeFiles/Makefile2 src/clean .PHONY : clean # The main clean target clean/fast: clean .PHONY : clean/fast # Prepare targets for installation. preinstall: all cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f CMakeFiles/Makefile2 src/preinstall .PHONY : preinstall # Prepare targets for installation. preinstall/fast: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f CMakeFiles/Makefile2 src/preinstall .PHONY : preinstall/fast # clear depends depend: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(CMAKE_COMMAND) -H$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) --check-build-system CMakeFiles/Makefile.cmake 1 .PHONY : depend # Convenience name for target. src/CMakeFiles/easylzma_s.dir/rule: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f CMakeFiles/Makefile2 src/CMakeFiles/easylzma_s.dir/rule .PHONY : src/CMakeFiles/easylzma_s.dir/rule # Convenience name for target. easylzma_s: src/CMakeFiles/easylzma_s.dir/rule .PHONY : easylzma_s # fast build rule for target. easylzma_s/fast: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/build .PHONY : easylzma_s/fast # Convenience name for target. src/CMakeFiles/easylzma.dir/rule: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f CMakeFiles/Makefile2 src/CMakeFiles/easylzma.dir/rule .PHONY : src/CMakeFiles/easylzma.dir/rule # Convenience name for target. easylzma: src/CMakeFiles/easylzma.dir/rule .PHONY : easylzma # fast build rule for target. 
easylzma/fast: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/build .PHONY : easylzma/fast common_internal.o: common_internal.c.o .PHONY : common_internal.o # target to build an object file common_internal.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/common_internal.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/common_internal.c.o .PHONY : common_internal.c.o common_internal.i: common_internal.c.i .PHONY : common_internal.i # target to preprocess a source file common_internal.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/common_internal.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/common_internal.c.i .PHONY : common_internal.c.i common_internal.s: common_internal.c.s .PHONY : common_internal.s # target to generate assembly for a file common_internal.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/common_internal.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/common_internal.c.s .PHONY : common_internal.c.s compress.o: compress.c.o .PHONY : compress.o # target to build an object file compress.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/compress.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/compress.c.o .PHONY : compress.c.o compress.i: compress.c.i .PHONY : compress.i # target to preprocess a source file compress.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/compress.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/compress.c.i .PHONY : compress.c.i compress.s: compress.c.s .PHONY : compress.s # target to generate assembly for a file compress.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/compress.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/compress.c.s .PHONY : compress.c.s decompress.o: decompress.c.o .PHONY : decompress.o # target to build an object file decompress.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/decompress.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/decompress.c.o .PHONY : decompress.c.o decompress.i: decompress.c.i .PHONY : decompress.i # target to preprocess a source file decompress.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make 
src/CMakeFiles/easylzma_s.dir/decompress.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/decompress.c.i .PHONY : decompress.c.i decompress.s: decompress.c.s .PHONY : decompress.s # target to generate assembly for a file decompress.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/decompress.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/decompress.c.s .PHONY : decompress.c.s lzip_header.o: lzip_header.c.o .PHONY : lzip_header.o # target to build an object file lzip_header.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/lzip_header.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/lzip_header.c.o .PHONY : lzip_header.c.o lzip_header.i: lzip_header.c.i .PHONY : lzip_header.i # target to preprocess a source file lzip_header.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/lzip_header.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/lzip_header.c.i .PHONY : lzip_header.c.i lzip_header.s: lzip_header.c.s .PHONY : lzip_header.s # target to generate assembly for a file lzip_header.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/lzip_header.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/lzip_header.c.s .PHONY : lzip_header.c.s lzma_header.o: lzma_header.c.o .PHONY : lzma_header.o # target to build an object file lzma_header.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/lzma_header.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/lzma_header.c.o .PHONY : lzma_header.c.o lzma_header.i: lzma_header.c.i .PHONY : lzma_header.i # target to preprocess a source file lzma_header.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/lzma_header.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/lzma_header.c.i .PHONY : lzma_header.c.i lzma_header.s: lzma_header.c.s .PHONY : lzma_header.s # target to generate assembly for a file lzma_header.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/lzma_header.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/lzma_header.c.s .PHONY : lzma_header.c.s pavlov/7zBuf.o: pavlov/7zBuf.c.o .PHONY : pavlov/7zBuf.o # target to build an object file pavlov/7zBuf.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f 
src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zBuf.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zBuf.c.o .PHONY : pavlov/7zBuf.c.o pavlov/7zBuf.i: pavlov/7zBuf.c.i .PHONY : pavlov/7zBuf.i # target to preprocess a source file pavlov/7zBuf.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zBuf.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zBuf.c.i .PHONY : pavlov/7zBuf.c.i pavlov/7zBuf.s: pavlov/7zBuf.c.s .PHONY : pavlov/7zBuf.s # target to generate assembly for a file pavlov/7zBuf.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zBuf.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zBuf.c.s .PHONY : pavlov/7zBuf.c.s pavlov/7zBuf2.o: pavlov/7zBuf2.c.o .PHONY : pavlov/7zBuf2.o # target to build an object file pavlov/7zBuf2.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zBuf2.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zBuf2.c.o .PHONY : pavlov/7zBuf2.c.o pavlov/7zBuf2.i: pavlov/7zBuf2.c.i .PHONY : pavlov/7zBuf2.i # target to preprocess a source file pavlov/7zBuf2.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zBuf2.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zBuf2.c.i .PHONY : pavlov/7zBuf2.c.i pavlov/7zBuf2.s: pavlov/7zBuf2.c.s .PHONY : pavlov/7zBuf2.s # target to generate assembly for a file pavlov/7zBuf2.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zBuf2.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zBuf2.c.s .PHONY : pavlov/7zBuf2.c.s pavlov/7zCrc.o: pavlov/7zCrc.c.o .PHONY : pavlov/7zCrc.o # target to build an object file pavlov/7zCrc.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zCrc.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zCrc.c.o .PHONY : pavlov/7zCrc.c.o pavlov/7zCrc.i: pavlov/7zCrc.c.i .PHONY : pavlov/7zCrc.i # target to preprocess a source file pavlov/7zCrc.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zCrc.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zCrc.c.i .PHONY : pavlov/7zCrc.c.i pavlov/7zCrc.s: pavlov/7zCrc.c.s .PHONY : pavlov/7zCrc.s # target to generate 
assembly for a file pavlov/7zCrc.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zCrc.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zCrc.c.s .PHONY : pavlov/7zCrc.c.s pavlov/7zFile.o: pavlov/7zFile.c.o .PHONY : pavlov/7zFile.o # target to build an object file pavlov/7zFile.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zFile.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zFile.c.o .PHONY : pavlov/7zFile.c.o pavlov/7zFile.i: pavlov/7zFile.c.i .PHONY : pavlov/7zFile.i # target to preprocess a source file pavlov/7zFile.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zFile.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zFile.c.i .PHONY : pavlov/7zFile.c.i pavlov/7zFile.s: pavlov/7zFile.c.s .PHONY : pavlov/7zFile.s # target to generate assembly for a file pavlov/7zFile.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zFile.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zFile.c.s .PHONY : pavlov/7zFile.c.s pavlov/7zStream.o: pavlov/7zStream.c.o .PHONY : pavlov/7zStream.o # target to build an object file pavlov/7zStream.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zStream.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zStream.c.o .PHONY : pavlov/7zStream.c.o pavlov/7zStream.i: pavlov/7zStream.c.i .PHONY : pavlov/7zStream.i # target to preprocess a source file pavlov/7zStream.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zStream.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zStream.c.i .PHONY : pavlov/7zStream.c.i pavlov/7zStream.s: pavlov/7zStream.c.s .PHONY : pavlov/7zStream.s # target to generate assembly for a file pavlov/7zStream.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/7zStream.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/7zStream.c.s .PHONY : pavlov/7zStream.c.s pavlov/Alloc.o: pavlov/Alloc.c.o .PHONY : pavlov/Alloc.o # target to build an object file pavlov/Alloc.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Alloc.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f 
src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Alloc.c.o .PHONY : pavlov/Alloc.c.o pavlov/Alloc.i: pavlov/Alloc.c.i .PHONY : pavlov/Alloc.i # target to preprocess a source file pavlov/Alloc.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Alloc.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Alloc.c.i .PHONY : pavlov/Alloc.c.i pavlov/Alloc.s: pavlov/Alloc.c.s .PHONY : pavlov/Alloc.s # target to generate assembly for a file pavlov/Alloc.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Alloc.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Alloc.c.s .PHONY : pavlov/Alloc.c.s pavlov/Bcj2.o: pavlov/Bcj2.c.o .PHONY : pavlov/Bcj2.o # target to build an object file pavlov/Bcj2.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bcj2.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bcj2.c.o .PHONY : pavlov/Bcj2.c.o pavlov/Bcj2.i: pavlov/Bcj2.c.i .PHONY : pavlov/Bcj2.i # target to preprocess a source file pavlov/Bcj2.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bcj2.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bcj2.c.i .PHONY : pavlov/Bcj2.c.i pavlov/Bcj2.s: pavlov/Bcj2.c.s .PHONY : pavlov/Bcj2.s # target to generate assembly for a file pavlov/Bcj2.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bcj2.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bcj2.c.s .PHONY : pavlov/Bcj2.c.s pavlov/Bra.o: pavlov/Bra.c.o .PHONY : pavlov/Bra.o # target to build an object file pavlov/Bra.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bra.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bra.c.o .PHONY : pavlov/Bra.c.o pavlov/Bra.i: pavlov/Bra.c.i .PHONY : pavlov/Bra.i # target to preprocess a source file pavlov/Bra.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bra.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bra.c.i .PHONY : pavlov/Bra.c.i pavlov/Bra.s: pavlov/Bra.c.s .PHONY : pavlov/Bra.s # target to generate assembly for a file pavlov/Bra.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bra.c.s cd 
/home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bra.c.s .PHONY : pavlov/Bra.c.s pavlov/Bra86.o: pavlov/Bra86.c.o .PHONY : pavlov/Bra86.o # target to build an object file pavlov/Bra86.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bra86.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bra86.c.o .PHONY : pavlov/Bra86.c.o pavlov/Bra86.i: pavlov/Bra86.c.i .PHONY : pavlov/Bra86.i # target to preprocess a source file pavlov/Bra86.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bra86.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bra86.c.i .PHONY : pavlov/Bra86.c.i pavlov/Bra86.s: pavlov/Bra86.c.s .PHONY : pavlov/Bra86.s # target to generate assembly for a file pavlov/Bra86.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/Bra86.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/Bra86.c.s .PHONY : pavlov/Bra86.c.s pavlov/BraIA64.o: pavlov/BraIA64.c.o .PHONY : pavlov/BraIA64.o # target to build an object file pavlov/BraIA64.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/BraIA64.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/BraIA64.c.o .PHONY : pavlov/BraIA64.c.o pavlov/BraIA64.i: pavlov/BraIA64.c.i .PHONY : pavlov/BraIA64.i # target to preprocess a source file pavlov/BraIA64.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/BraIA64.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/BraIA64.c.i .PHONY : pavlov/BraIA64.c.i pavlov/BraIA64.s: pavlov/BraIA64.c.s .PHONY : pavlov/BraIA64.s # target to generate assembly for a file pavlov/BraIA64.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/BraIA64.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/BraIA64.c.s .PHONY : pavlov/BraIA64.c.s pavlov/LzFind.o: pavlov/LzFind.c.o .PHONY : pavlov/LzFind.o # target to build an object file pavlov/LzFind.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzFind.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzFind.c.o .PHONY : pavlov/LzFind.c.o pavlov/LzFind.i: pavlov/LzFind.c.i .PHONY : pavlov/LzFind.i # target to preprocess a source file pavlov/LzFind.c.i: cd 
/home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzFind.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzFind.c.i .PHONY : pavlov/LzFind.c.i pavlov/LzFind.s: pavlov/LzFind.c.s .PHONY : pavlov/LzFind.s # target to generate assembly for a file pavlov/LzFind.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzFind.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzFind.c.s .PHONY : pavlov/LzFind.c.s pavlov/LzmaDec.o: pavlov/LzmaDec.c.o .PHONY : pavlov/LzmaDec.o # target to build an object file pavlov/LzmaDec.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaDec.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaDec.c.o .PHONY : pavlov/LzmaDec.c.o pavlov/LzmaDec.i: pavlov/LzmaDec.c.i .PHONY : pavlov/LzmaDec.i # target to preprocess a source file pavlov/LzmaDec.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaDec.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaDec.c.i .PHONY : pavlov/LzmaDec.c.i pavlov/LzmaDec.s: pavlov/LzmaDec.c.s .PHONY : pavlov/LzmaDec.s # target to generate assembly for a file pavlov/LzmaDec.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaDec.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaDec.c.s .PHONY : pavlov/LzmaDec.c.s pavlov/LzmaEnc.o: pavlov/LzmaEnc.c.o .PHONY : pavlov/LzmaEnc.o # target to build an object file pavlov/LzmaEnc.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaEnc.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaEnc.c.o .PHONY : pavlov/LzmaEnc.c.o pavlov/LzmaEnc.i: pavlov/LzmaEnc.c.i .PHONY : pavlov/LzmaEnc.i # target to preprocess a source file pavlov/LzmaEnc.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaEnc.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaEnc.c.i .PHONY : pavlov/LzmaEnc.c.i pavlov/LzmaEnc.s: pavlov/LzmaEnc.c.s .PHONY : pavlov/LzmaEnc.s # target to generate assembly for a file pavlov/LzmaEnc.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaEnc.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make 
src/CMakeFiles/easylzma.dir/pavlov/LzmaEnc.c.s .PHONY : pavlov/LzmaEnc.c.s pavlov/LzmaLib.o: pavlov/LzmaLib.c.o .PHONY : pavlov/LzmaLib.o # target to build an object file pavlov/LzmaLib.c.o: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaLib.c.o cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaLib.c.o .PHONY : pavlov/LzmaLib.c.o pavlov/LzmaLib.i: pavlov/LzmaLib.c.i .PHONY : pavlov/LzmaLib.i # target to preprocess a source file pavlov/LzmaLib.c.i: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaLib.c.i cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaLib.c.i .PHONY : pavlov/LzmaLib.c.i pavlov/LzmaLib.s: pavlov/LzmaLib.c.s .PHONY : pavlov/LzmaLib.s # target to generate assembly for a file pavlov/LzmaLib.c.s: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma_s.dir/build.make src/CMakeFiles/easylzma_s.dir/pavlov/LzmaLib.c.s cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(MAKE) -f src/CMakeFiles/easylzma.dir/build.make src/CMakeFiles/easylzma.dir/pavlov/LzmaLib.c.s .PHONY : pavlov/LzmaLib.c.s # Help Target help: @echo "The following are some of the valid targets for this Makefile:" @echo "... all (the default if no target is provided)" @echo "... clean" @echo "... depend" @echo "... rebuild_cache" @echo "... edit_cache" @echo "... easylzma_s" @echo "... easylzma" @echo "... common_internal.o" @echo "... common_internal.i" @echo "... common_internal.s" @echo "... compress.o" @echo "... compress.i" @echo "... compress.s" @echo "... decompress.o" @echo "... decompress.i" @echo "... decompress.s" @echo "... lzip_header.o" @echo "... lzip_header.i" @echo "... lzip_header.s" @echo "... lzma_header.o" @echo "... lzma_header.i" @echo "... lzma_header.s" @echo "... pavlov/7zBuf.o" @echo "... pavlov/7zBuf.i" @echo "... pavlov/7zBuf.s" @echo "... pavlov/7zBuf2.o" @echo "... pavlov/7zBuf2.i" @echo "... pavlov/7zBuf2.s" @echo "... pavlov/7zCrc.o" @echo "... pavlov/7zCrc.i" @echo "... pavlov/7zCrc.s" @echo "... pavlov/7zFile.o" @echo "... pavlov/7zFile.i" @echo "... pavlov/7zFile.s" @echo "... pavlov/7zStream.o" @echo "... pavlov/7zStream.i" @echo "... pavlov/7zStream.s" @echo "... pavlov/Alloc.o" @echo "... pavlov/Alloc.i" @echo "... pavlov/Alloc.s" @echo "... pavlov/Bcj2.o" @echo "... pavlov/Bcj2.i" @echo "... pavlov/Bcj2.s" @echo "... pavlov/Bra.o" @echo "... pavlov/Bra.i" @echo "... pavlov/Bra.s" @echo "... pavlov/Bra86.o" @echo "... pavlov/Bra86.i" @echo "... pavlov/Bra86.s" @echo "... pavlov/BraIA64.o" @echo "... pavlov/BraIA64.i" @echo "... pavlov/BraIA64.s" @echo "... pavlov/LzFind.o" @echo "... pavlov/LzFind.i" @echo "... pavlov/LzFind.s" @echo "... pavlov/LzmaDec.o" @echo "... pavlov/LzmaDec.i" @echo "... pavlov/LzmaDec.s" @echo "... pavlov/LzmaEnc.o" @echo "... pavlov/LzmaEnc.i" @echo "... pavlov/LzmaEnc.s" @echo "... pavlov/LzmaLib.o" @echo "... pavlov/LzmaLib.i" @echo "... pavlov/LzmaLib.s" .PHONY : help #============================================================================= # Special targets to cleanup operation of make. # Special rule to run CMake to check the build system integrity. 
# No rule that depends on this can have commands that come from listfiles # because they might be regenerated. cmake_check_build_system: cd /home/fangq/space/git/Project/github/zmat/src/easylzma && $(CMAKE_COMMAND) -H$(CMAKE_SOURCE_DIR) -B$(CMAKE_BINARY_DIR) --check-build-system CMakeFiles/Makefile.cmake 0 .PHONY : cmake_check_build_system zmat-0.9.8/src/easylzma/README000066400000000000000000000002721366304620100160010ustar00rootroot00000000000000pavlov/ - contains original lzma compress/decompress source from Igor Pavlov easylzma/ - contains the public api of this library ./ - contains the implementation of this wrapper library zmat-0.9.8/src/easylzma/common_internal.c000066400000000000000000000024071366304620100204530ustar00rootroot00000000000000/* * Written in 2009 by Lloyd Hilaiel * * License * * All the cruft you find here is public domain. You don't have to credit * anyone to use this code, but my personal request is that you mention * Igor Pavlov for his hard, high quality work. */ #include "common_internal.h" static void *elzmaAlloc(void *p, size_t size) { struct elzma_alloc_struct * as = (struct elzma_alloc_struct *) p; if (as->clientMallocFunc) { return as->clientMallocFunc(as->clientMallocContext, size); } return malloc(size); } static void elzmaFree(void *p, void *address) { struct elzma_alloc_struct * as = (struct elzma_alloc_struct *) p; if (as->clientFreeFunc) { as->clientFreeFunc(as->clientMallocContext, address); } else { free(address); } } void init_alloc_struct(struct elzma_alloc_struct * as, elzma_malloc clientMallocFunc, void * clientMallocContext, elzma_free clientFreeFunc, void * clientFreeContext) { as->Alloc = elzmaAlloc; as->Free = elzmaFree; as->clientMallocFunc = clientMallocFunc; as->clientMallocContext = clientMallocContext; as->clientFreeFunc = clientFreeFunc; as->clientFreeContext = clientFreeContext; } zmat-0.9.8/src/easylzma/common_internal.h000066400000000000000000000041201366304620100204520ustar00rootroot00000000000000#ifndef __ELZMA_COMMON_INTERNAL_H__ #define __ELZMA_COMMON_INTERNAL_H__ #include "easylzma/common.h" /** a structure which may be cast and passed into Igor's allocate * routines */ struct elzma_alloc_struct { void *(*Alloc)(void *p, size_t size); void (*Free)(void *p, void *address); /* address can be 0 */ elzma_malloc clientMallocFunc; void * clientMallocContext; elzma_free clientFreeFunc; void * clientFreeContext; }; /* initialize an allocation structure, may be called safely multiple * times */ void init_alloc_struct(struct elzma_alloc_struct * allocStruct, elzma_malloc clientMallocFunc, void * clientMallocContext, elzma_free clientFreeFunc, void * clientFreeContext); /** superset representation of a compressed file header */ struct elzma_file_header { unsigned char pb; unsigned char lp; unsigned char lc; unsigned char isStreamed; long long unsigned int uncompressedSize; unsigned int dictSize; }; /** superset representation of a compressed file footer */ struct elzma_file_footer { unsigned int crc32; long long unsigned int uncompressedSize; }; /** a structure which encapsulates information about the particular * file header and footer in use (lzip vs lzma vs (eventually) xz. * The intention of this structure is to simplify compression and * decompression logic by abstracting the file format details a bit. 
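 *  In this source tree, lzma_header.c and lzip_header.c provide
 *  initializeLZMAFormatHandler() and initializeLZIPFormatHandler(), which fill
 *  these callbacks with the format-specific header/footer routines; compress.c
 *  and decompress.c then drive either format through this struct.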
*/ struct elzma_format_handler { unsigned int header_size; void (*init_header)(struct elzma_file_header * hdr); int (*parse_header)(const unsigned char * hdrBuf, struct elzma_file_header * hdr); int (*serialize_header)(unsigned char * hdrBuf, const struct elzma_file_header * hdr); unsigned int footer_size; int (*serialize_footer)(struct elzma_file_footer * ftr, unsigned char * ftrBuf); int (*parse_footer)(const unsigned char * ftrBuf, struct elzma_file_footer * ftr); }; #endif zmat-0.9.8/src/easylzma/compress.c000066400000000000000000000222541366304620100171240ustar00rootroot00000000000000/* * Written in 2009 by Lloyd Hilaiel * * License * * All the cruft you find here is public domain. You don't have to credit * anyone to use this code, but my personal request is that you mention * Igor Pavlov for his hard, high quality work. */ #include "easylzma/compress.h" #include "lzma_header.h" #include "lzip_header.h" #include "common_internal.h" #include "pavlov/Types.h" #include "pavlov/LzmaEnc.h" #include "pavlov/7zCrc.h" #include struct _elzma_compress_handle { CLzmaEncProps props; CLzmaEncHandle encHand; unsigned long long uncompressedSize; elzma_file_format format; struct elzma_alloc_struct allocStruct; struct elzma_format_handler formatHandler; }; elzma_compress_handle elzma_compress_alloc() { elzma_compress_handle hand = malloc(sizeof(struct _elzma_compress_handle)); memset((void *) hand, 0, sizeof(struct _elzma_compress_handle)); /* "reasonable" defaults for props */ LzmaEncProps_Init(&(hand->props)); hand->props.lc = 3; hand->props.lp = 0; hand->props.pb = 2; hand->props.level = 5; hand->props.algo = 1; hand->props.fb = 32; hand->props.dictSize = 1 << 24; hand->props.btMode = 1; hand->props.numHashBytes = 4; hand->props.mc = 32; hand->props.numThreads = 1; hand->props.writeEndMark = 1; init_alloc_struct(&(hand->allocStruct), NULL, NULL, NULL, NULL); /* default format is LZMA-Alone */ initializeLZMAFormatHandler(&(hand->formatHandler)); return hand; } void elzma_compress_free(elzma_compress_handle * hand) { if (hand && *hand) { if ((*hand)->encHand) { LzmaEnc_Destroy((*hand)->encHand, (ISzAlloc *) &((*hand)->allocStruct), (ISzAlloc *) &((*hand)->allocStruct)); } } *hand = NULL; } int elzma_compress_config(elzma_compress_handle hand, unsigned char lc, unsigned char lp, unsigned char pb, unsigned char level, unsigned int dictionarySize, elzma_file_format format, unsigned long long uncompressedSize) { /* XXX: validate arguments are in valid ranges */ hand->props.lc = lc; hand->props.lp = lp; hand->props.pb = pb; hand->props.level = level; hand->props.dictSize = dictionarySize; hand->uncompressedSize = uncompressedSize; hand->format = format; /* default of LZMA-Alone is set at alloc time, and there are only * two possible formats */ if (format == ELZMA_lzip) { initializeLZIPFormatHandler(&(hand->formatHandler)); } return ELZMA_E_OK; } /* use Igor's stream hooks for compression. 
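 * elzmaInStream wraps the client-supplied elzma_read_callback behind the
 * SRes-returning ReadPtr slot expected by the LZMA SDK; when the selected
 * format defines a footer (lzip), it also keeps a running CRC32 of the
 * uncompressed input so the footer can be written at the end of
 * elzma_compress_run().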
*/ struct elzmaInStream { SRes (*ReadPtr)(void *p, void *buf, size_t *size); elzma_read_callback inputStream; void * inputContext; unsigned int crc32; unsigned int crc32a; unsigned int crc32b; unsigned int crc32c; int calculateCRC; }; static SRes elzmaReadFunc(void *p, void *buf, size_t *size) { int rv; struct elzmaInStream * is = (struct elzmaInStream *) p; rv = is->inputStream(is->inputContext, buf, size); if (rv == 0 && *size > 0 && is->calculateCRC) { is->crc32 = CrcUpdate(is->crc32, buf, *size); } return rv; } struct elzmaOutStream { size_t (*WritePtr)(void *p, const void *buf, size_t size); elzma_write_callback outputStream; void * outputContext; }; static size_t elzmaWriteFunc(void *p, const void *buf, size_t size) { struct elzmaOutStream * os = (struct elzmaOutStream *) p; return os->outputStream(os->outputContext, buf, size); } /* use Igor's stream hooks for compression. */ struct elzmaProgressStruct { SRes (*Progress)(void *p, UInt64 inSize, UInt64 outSize); long long unsigned int uncompressedSize; elzma_progress_callback progressCallback; void * progressContext; }; #include static SRes elzmaProgress(void *p, UInt64 inSize, UInt64 outSize) { struct elzmaProgressStruct * ps = (struct elzmaProgressStruct *) p; if (ps->progressCallback) { ps->progressCallback(ps->progressContext, inSize, ps->uncompressedSize); } return SZ_OK; } void elzma_compress_set_allocation_callbacks( elzma_compress_handle hand, elzma_malloc mallocFunc, void * mallocFuncContext, elzma_free freeFunc, void * freeFuncContext) { if (hand) { init_alloc_struct(&(hand->allocStruct), mallocFunc, mallocFuncContext, freeFunc, freeFuncContext); } } int elzma_compress_run(elzma_compress_handle hand, elzma_read_callback inputStream, void * inputContext, elzma_write_callback outputStream, void * outputContext, elzma_progress_callback progressCallback, void * progressContext) { struct elzmaInStream inStreamStruct; struct elzmaOutStream outStreamStruct; struct elzmaProgressStruct progressStruct; SRes r; CrcGenerateTable(); if (hand == NULL || inputStream == NULL) return ELZMA_E_BAD_PARAMS; /* initialize stream structrures */ inStreamStruct.ReadPtr = elzmaReadFunc; inStreamStruct.inputStream = inputStream; inStreamStruct.inputContext = inputContext; inStreamStruct.crc32 = CRC_INIT_VAL; inStreamStruct.calculateCRC = (hand->formatHandler.serialize_footer != NULL); outStreamStruct.WritePtr = elzmaWriteFunc; outStreamStruct.outputStream = outputStream; outStreamStruct.outputContext = outputContext; progressStruct.Progress = elzmaProgress; progressStruct.uncompressedSize = hand->uncompressedSize; progressStruct.progressCallback = progressCallback; progressStruct.progressContext = progressContext; /* create an encoding object */ hand->encHand = LzmaEnc_Create((ISzAlloc *) &(hand->allocStruct)); if (hand->encHand == NULL) { return ELZMA_E_COMPRESS_ERROR; } /* inintialize with compression parameters */ if (SZ_OK != LzmaEnc_SetProps(hand->encHand, &(hand->props))) { return ELZMA_E_BAD_PARAMS; } /* verify format is sane */ if (ELZMA_lzma != hand->format && ELZMA_lzip != hand->format) { return ELZMA_E_UNSUPPORTED_FORMAT; } /* now write the compression header header */ { unsigned char * hdr = hand->allocStruct.Alloc(&(hand->allocStruct), hand->formatHandler.header_size); struct elzma_file_header h; size_t wt; hand->formatHandler.init_header(&h); h.pb = (unsigned char) hand->props.pb; h.lp = (unsigned char) hand->props.lp; h.lc = (unsigned char) hand->props.lc; h.dictSize = hand->props.dictSize; h.isStreamed = (unsigned char) 
(hand->uncompressedSize == 0); h.uncompressedSize = hand->uncompressedSize; hand->formatHandler.serialize_header(hdr, &h); wt = outputStream(outputContext, (void *) hdr, hand->formatHandler.header_size); hand->allocStruct.Free(&(hand->allocStruct), hdr); if (wt != hand->formatHandler.header_size) { return ELZMA_E_OUTPUT_ERROR; } } /* begin LZMA encoding */ /* XXX: expose encoding progress */ r = LzmaEnc_Encode(hand->encHand, (ISeqOutStream *) &outStreamStruct, (ISeqInStream *) &inStreamStruct, (ICompressProgress *) &progressStruct, (ISzAlloc *) &(hand->allocStruct), (ISzAlloc *) &(hand->allocStruct)); if (r != SZ_OK) return ELZMA_E_COMPRESS_ERROR; /* support a footer! (lzip) */ if (hand->formatHandler.serialize_footer != NULL && hand->formatHandler.footer_size > 0) { size_t wt; unsigned char * ftrBuf = hand->allocStruct.Alloc(&(hand->allocStruct), hand->formatHandler.footer_size); struct elzma_file_footer ftr; ftr.crc32 = inStreamStruct.crc32 ^ 0xFFFFFFFF; ftr.uncompressedSize = hand->uncompressedSize; hand->formatHandler.serialize_footer(&ftr, ftrBuf); wt = outputStream(outputContext, (void *) ftrBuf, hand->formatHandler.footer_size); hand->allocStruct.Free(&(hand->allocStruct), ftrBuf); if (wt != hand->formatHandler.footer_size) { return ELZMA_E_OUTPUT_ERROR; } } return ELZMA_E_OK; } unsigned int elzma_get_dict_size(unsigned long long size) { int i = 13; /* 16k dict is minimum */ /* now we'll find the closest power of two, with a max of 1 << 23 * * if the size is greater than 8m, we'll divide by two, all of this * is based on a quick set of empirical tests on hopefully * representative sample data */ if ( size > ( 1 << 23 ) ) size >>= 1; while (size >> i) i++; if (i > 23) return 1 << 23; /* now 1 << i is greater than size, let's return either 1<<i or 1<<(i-1), * whichever is closer to size */ return 1 << ((((1 << i) - size) > (size - (1 << (i-1)))) ? i-1 : i); } zmat-0.9.8/src/easylzma/decompress.c000066400000000000000000000176301366304620100174370ustar00rootroot00000000000000/* * Written in 2009 by Lloyd Hilaiel * * License * * All the cruft you find here is public domain. You don't have to credit * anyone to use this code, but my personal request is that you mention * Igor Pavlov for his hard, high quality work.
*/ #include "easylzma/decompress.h" #include "pavlov/LzmaDec.h" #include "pavlov/7zCrc.h" #include "common_internal.h" #include "lzma_header.h" #include "lzip_header.h" #include #include #define ELZMA_DECOMPRESS_INPUT_BUFSIZE (1024 * 64) #define ELZMA_DECOMPRESS_OUTPUT_BUFSIZE (1024 * 256) /** an opaque handle to an lzma decompressor */ struct _elzma_decompress_handle { char inbuf[ELZMA_DECOMPRESS_INPUT_BUFSIZE]; char outbuf[ELZMA_DECOMPRESS_OUTPUT_BUFSIZE]; struct elzma_alloc_struct allocStruct; }; elzma_decompress_handle elzma_decompress_alloc() { elzma_decompress_handle hand = malloc(sizeof(struct _elzma_decompress_handle)); memset((void *) hand, 0, sizeof(struct _elzma_decompress_handle)); init_alloc_struct(&(hand->allocStruct), NULL, NULL, NULL, NULL); return hand; } void elzma_decompress_set_allocation_callbacks( elzma_decompress_handle hand, elzma_malloc mallocFunc, void * mallocFuncContext, elzma_free freeFunc, void * freeFuncContext) { if (hand) { init_alloc_struct(&(hand->allocStruct), mallocFunc, mallocFuncContext, freeFunc, freeFuncContext); } } void elzma_decompress_free(elzma_decompress_handle * hand) { if (*hand) free(*hand); *hand = NULL; } int elzma_decompress_run(elzma_decompress_handle hand, elzma_read_callback inputStream, void * inputContext, elzma_write_callback outputStream, void * outputContext, elzma_file_format format) { unsigned long long int totalRead = 0; /* total amount read from stream */ unsigned int crc32 = CRC_INIT_VAL; /* running crc32 (lzip case) */ CLzmaDec dec; unsigned int errorCode = ELZMA_E_OK; struct elzma_format_handler formatHandler; struct elzma_file_header h; struct elzma_file_footer f; /* switch between supported formats */ if (format == ELZMA_lzma) { initializeLZMAFormatHandler(&formatHandler); } else if (format == ELZMA_lzip) { CrcGenerateTable(); initializeLZIPFormatHandler(&formatHandler); } else { return ELZMA_E_BAD_PARAMS; } /* initialize footer */ f.crc32 = 0; f.uncompressedSize = 0; /* initialize decoder memory */ memset((void *) &dec, 0, sizeof(dec)); LzmaDec_Init(&dec); /* decode the header. */ { unsigned char * hdr = hand->allocStruct.Alloc(&(hand->allocStruct), formatHandler.header_size); size_t sz = formatHandler.header_size; formatHandler.init_header(&h); if (inputStream(inputContext, hdr, &sz) != 0 || sz != formatHandler.header_size) { hand->allocStruct.Free(&(hand->allocStruct), hdr); return ELZMA_E_INPUT_ERROR; } if (0 != formatHandler.parse_header(hdr, &h)) { hand->allocStruct.Free(&(hand->allocStruct), hdr); return ELZMA_E_CORRUPT_HEADER; } /* the LzmaDec_Allocate call requires 5 bytes which have * compression properties encoded in them. 
In the case of * lzip, the header format does not already contain what * LzmaDec_Allocate expects, so we must craft it, silly */ { unsigned char propsBuf[13]; const unsigned char * propsPtr = hdr; if (format == ELZMA_lzip) { struct elzma_format_handler lzmaHand; initializeLZMAFormatHandler(&lzmaHand); lzmaHand.serialize_header(propsBuf, &h); propsPtr = propsBuf; } /* now we're ready to allocate the decoder */ LzmaDec_Allocate(&dec, propsPtr, 5, (ISzAlloc *) &(hand->allocStruct)); } hand->allocStruct.Free(&(hand->allocStruct), hdr); } /* perform the decoding */ for (;;) { size_t dstLen = ELZMA_DECOMPRESS_OUTPUT_BUFSIZE; size_t srcLen = ELZMA_DECOMPRESS_INPUT_BUFSIZE; size_t amt = 0; size_t bufOff = 0; ELzmaStatus stat; if (0 != inputStream(inputContext, hand->inbuf, &srcLen)) { errorCode = ELZMA_E_INPUT_ERROR; goto decompressEnd; } /* handle the case where the input prematurely finishes */ if (srcLen == 0) { errorCode = ELZMA_E_INSUFFICIENT_INPUT; goto decompressEnd; } amt = srcLen; /* handle the case where a single read buffer of compressed bytes * will translate into multiple buffers of uncompressed bytes, * with this inner loop */ stat = LZMA_STATUS_NOT_SPECIFIED; while (bufOff < srcLen) { SRes r = LzmaDec_DecodeToBuf(&dec, (Byte *) hand->outbuf, &dstLen, ((Byte *) hand->inbuf + bufOff), &amt, LZMA_FINISH_ANY, &stat); /* XXX deal with result code more granularly*/ if (r != SZ_OK) { errorCode = ELZMA_E_DECOMPRESS_ERROR; goto decompressEnd; } /* write what we've read */ { size_t wt; /* if decoding lzip, update our crc32 value */ if (format == ELZMA_lzip && dstLen > 0) { crc32 = CrcUpdate(crc32, hand->outbuf, dstLen); } totalRead += dstLen; wt = outputStream(outputContext, hand->outbuf, dstLen); if (wt != dstLen) { errorCode = ELZMA_E_OUTPUT_ERROR; goto decompressEnd; } } /* do we have more data on the input buffer? */ bufOff += amt; assert( bufOff <= srcLen ); if (bufOff >= srcLen) break; amt = srcLen - bufOff; /* with lzip, we will have the footer left on the buffer! */ if (stat == LZMA_STATUS_FINISHED_WITH_MARK) { break; } } /* now check status */ if (stat == LZMA_STATUS_FINISHED_WITH_MARK) { /* read a footer if one is expected and * present */ if (formatHandler.footer_size > 0 && amt >= formatHandler.footer_size && formatHandler.parse_footer != NULL) { formatHandler.parse_footer( (unsigned char *) hand->inbuf + bufOff, &f); } break; } /* for LZMA utils, we don't always have a finished mark */ if (!h.isStreamed && totalRead >= h.uncompressedSize) { break; } } /* finish the calculated crc32 */ crc32 ^= 0xFFFFFFFF; /* if we have a footer, check that the calculated crc32 matches * the encoded crc32, and that the sizes match */ if (formatHandler.footer_size) { if (f.crc32 != crc32) { errorCode = ELZMA_E_CRC32_MISMATCH; } else if (f.uncompressedSize != totalRead) { errorCode = ELZMA_E_SIZE_MISMATCH; } } else if (!h.isStreamed) { /* if the format does not support a footer and has an uncompressed * size in the header, let's compare that with how much we actually * read */ if (h.uncompressedSize != totalRead) { errorCode = ELZMA_E_SIZE_MISMATCH; } } decompressEnd: LzmaDec_Free(&dec, (ISzAlloc *) &(hand->allocStruct)); return errorCode; } zmat-0.9.8/src/easylzma/easylzma/000077500000000000000000000000001366304620100167455ustar00rootroot00000000000000zmat-0.9.8/src/easylzma/easylzma/common.h000066400000000000000000000104271366304620100204120ustar00rootroot00000000000000/* * Written in 2009 by Lloyd Hilaiel * * License * * All the cruft you find here is public domain. 
You don't have to credit * anyone to use this code, but my personal request is that you mention * Igor Pavlov for his hard, high quality work. * * easylzma/common.h - definitions common to both compression and * decompression */ #ifndef __EASYLZMACOMMON_H__ #define __EASYLZMACOMMON_H__ #include #ifdef __cplusplus extern "C" { #endif /* msft dll export gunk. To build a DLL on windows, you * must define WIN32, EASYLZMA_SHARED, and EASYLZMA_BUILD. To use a * DLL, you must define EASYLZMA_SHARED and WIN32 */ #if defined(WIN32) && defined(EASYLZMA_SHARED) # ifdef EASYLZMA_BUILD # define EASYLZMA_API __declspec(dllexport) # else # define EASYLZMA_API __declspec(dllimport) # endif #else # define EASYLZMA_API #endif /** error codes */ /** no error */ #define ELZMA_E_OK 0 /** bad parameters passed to an ELZMA function */ #define ELZMA_E_BAD_PARAMS 10 /** could not initialize the encode with configured parameters. */ #define ELZMA_E_ENCODING_PROPERTIES_ERROR 11 /** an error occured during compression (XXX: be more specific) */ #define ELZMA_E_COMPRESS_ERROR 12 /** currently unsupported lzma file format was specified*/ #define ELZMA_E_UNSUPPORTED_FORMAT 13 /** an error occured when reading input */ #define ELZMA_E_INPUT_ERROR 14 /** an error occured when writing output */ #define ELZMA_E_OUTPUT_ERROR 15 /** LZMA header couldn't be parsed */ #define ELZMA_E_CORRUPT_HEADER 16 /** an error occured during decompression (XXX: be more specific) */ #define ELZMA_E_DECOMPRESS_ERROR 17 /** the input stream returns EOF before the decompression could complete */ #define ELZMA_E_INSUFFICIENT_INPUT 18 /** for formats which have an emebedded crc, this error would indicated that * what came out was not what went in, i.e. data corruption */ #define ELZMA_E_CRC32_MISMATCH 19 /** for formats which have an emebedded uncompressed content length, * this error indicates that the amount we read was not what we expected */ #define ELZMA_E_SIZE_MISMATCH 20 /** Supported file formats */ typedef enum { ELZMA_lzip, /**< the lzip format which includes a magic number and * CRC check */ ELZMA_lzma /**< the LZMA-Alone format, originally designed by * Igor Pavlov and in widespread use due to lzmautils, * lacking both aforementioned features of lzip */ /* XXX: future, potentially , ELZMA_xz */ } elzma_file_format; /** * A callback invoked during elzma_[de]compress_run when the [de]compression * process has generated [de]compressed output. * * the size parameter indicates how much data is in buf to be written. * it is required that the write callback consume all data, and a return * value not equal to input size indicates and error. */ typedef size_t (*elzma_write_callback)(void *ctx, const void *buf, size_t size); /** * A callback invoked during elzma_[de]compress_run when the [de]compression * process requires more [un]compressed input. * * the size parameter is an in/out argument. on input it indicates * the buffer size. on output it indicates the amount of data read into * buf. when *size is zero on output it indicates EOF. * * \returns the read callback should return nonzero on failure. */ typedef int (*elzma_read_callback)(void *ctx, void *buf, size_t *size); /** * A callback invoked during elzma_[de]compress_run to report progress * on the [de]compression. * * \returns the read callback should return nonzero on failure. 
*/ typedef void (*elzma_progress_callback)(void *ctx, size_t complete, size_t total); /** pointer to a malloc function, supporting client overriding memory * allocation routines */ typedef void * (*elzma_malloc)(void *ctx, unsigned int sz); /** pointer to a free function, supporting client overriding memory * allocation routines */ typedef void (*elzma_free)(void *ctx, void * ptr); #ifdef __cplusplus }; #endif #endif zmat-0.9.8/src/easylzma/easylzma/compress.h000066400000000000000000000047431366304620100207610ustar00rootroot00000000000000/* * Written in 2009 by Lloyd Hilaiel * * License * * All the cruft you find here is public domain. You don't have to credit * anyone to use this code, but my personal request is that you mention * Igor Pavlov for his hard, high quality work. * * easylzma/compress.h - the API for LZMA compression using easylzma */ #ifndef __EASYLZMACOMPRESS_H__ #define __EASYLZMACOMPRESS_H__ #include "easylzma/common.h" #include #ifdef __cplusplus extern "C" { #endif /** suggested default values */ #define ELZMA_LC_DEFAULT 3 #define ELZMA_LP_DEFAULT 0 #define ELZMA_PB_DEFAULT 2 #define ELZMA_DICT_SIZE_DEFAULT_MAX (1 << 24) /** an opaque handle to an lzma compressor */ typedef struct _elzma_compress_handle * elzma_compress_handle; /** * Allocate a handle to an LZMA compressor object. */ elzma_compress_handle EASYLZMA_API elzma_compress_alloc(); /** * set allocation routines (optional, if not called malloc & free will * be used) */ void EASYLZMA_API elzma_compress_set_allocation_callbacks( elzma_compress_handle hand, elzma_malloc mallocFunc, void * mallocFuncContext, elzma_free freeFunc, void * freeFuncContext); /** * Free all data associated with an LZMA compressor object. */ void EASYLZMA_API elzma_compress_free(elzma_compress_handle * hand); /** * Set configuration paramters for a compression run. If not called, * reasonable defaults will be used. */ int EASYLZMA_API elzma_compress_config(elzma_compress_handle hand, unsigned char lc, unsigned char lp, unsigned char pb, unsigned char level, unsigned int dictionarySize, elzma_file_format format, unsigned long long uncompressedSize); /** * Run compression */ int EASYLZMA_API elzma_compress_run( elzma_compress_handle hand, elzma_read_callback inputStream, void * inputContext, elzma_write_callback outputStream, void * outputContext, elzma_progress_callback progressCallback, void * progressContext); /** * a heuristic utility routine to guess a dictionary size that gets near * optimal compression while reducing memory usage. * accepts a size in bytes, returns a proposed dictionary size */ unsigned int EASYLZMA_API elzma_get_dict_size(unsigned long long size); #ifdef __cplusplus }; #endif #endif zmat-0.9.8/src/easylzma/easylzma/decompress.h000066400000000000000000000031401366304620100212600ustar00rootroot00000000000000/* * Written in 2009 by Lloyd Hilaiel * * License * * All the cruft you find here is public domain. You don't have to credit * anyone to use this code, but my personal request is that you mention * Igor Pavlov for his hard, high quality work. * * easylzma/decompress.h - The API for LZMA decompression using easylzma */ #ifndef __EASYLZMADECOMPRESS_H__ #define __EASYLZMADECOMPRESS_H__ #include "easylzma/common.h" #ifdef __cplusplus extern "C" { #endif /** an opaque handle to an lzma decompressor */ typedef struct _elzma_decompress_handle * elzma_decompress_handle; /** * Allocate a handle to an LZMA decompressor object. 
*/ elzma_decompress_handle EASYLZMA_API elzma_decompress_alloc(); /** * set allocation routines (optional, if not called malloc & free will * be used) */ void EASYLZMA_API elzma_decompress_set_allocation_callbacks( elzma_decompress_handle hand, elzma_malloc mallocFunc, void * mallocFuncContext, elzma_free freeFunc, void * freeFuncContext); /** * Free all data associated with an LZMA decompressor object. */ void EASYLZMA_API elzma_decompress_free(elzma_decompress_handle * hand); /** * Perform decompression * * XXX: should the library automatically detect format by reading stream? * currently it's based on data external to stream (such as extension * or convention) */ int EASYLZMA_API elzma_decompress_run( elzma_decompress_handle hand, elzma_read_callback inputStream, void * inputContext, elzma_write_callback outputStream, void * outputContext, elzma_file_format format); #ifdef __cplusplus }; #endif #endif zmat-0.9.8/src/easylzma/lzip_header.c000066400000000000000000000046341366304620100175610ustar00rootroot00000000000000#include "lzip_header.h" #include #define ELZMA_LZIP_HEADER_SIZE 6 #define ELZMA_LZIP_FOOTER_SIZE 12 static void initLzipHeader(struct elzma_file_header * hdr) { memset((void *) hdr, 0, sizeof(struct elzma_file_header)); } static int parseLzipHeader(const unsigned char * hdrBuf, struct elzma_file_header * hdr) { if (0 != strncmp("LZIP", (char *) hdrBuf, 4)) return 1; /* XXX: ignore version for now */ hdr->pb = 2; hdr->lp = 0; hdr->lc = 3; /* unknown at this point */ hdr->isStreamed = 1; hdr->uncompressedSize = 0; hdr->dictSize = 1 << (hdrBuf[5] & 0x1F); return 0; } static int serializeLzipHeader(unsigned char * hdrBuf, const struct elzma_file_header * hdr) { hdrBuf[0] = 'L'; hdrBuf[1] = 'Z'; hdrBuf[2] = 'I'; hdrBuf[3] = 'P'; hdrBuf[4] = 0; { int r = 0; while ((hdr->dictSize >> r) != 0) r++; hdrBuf[5] = (unsigned char) (r-1) & 0x1F; } return 0; } static int serializeLzipFooter(struct elzma_file_footer * ftr, unsigned char * ftrBuf) { unsigned int i = 0; /* first crc32 */ for (i = 0; i < 4; i++) { *(ftrBuf++) = (unsigned char) (ftr->crc32 >> (i * 8)); } /* next data size */ for (i = 0; i < 8; i++) { *(ftrBuf++) = (unsigned char) (ftr->uncompressedSize >> (i * 8)); } /* write version 0 files, omit member length for now*/ return 0; } static int parseLzipFooter(const unsigned char * ftrBuf, struct elzma_file_footer * ftr) { unsigned int i = 0; ftr->crc32 = 0; ftr->uncompressedSize = 0; /* first crc32 */ for (i = 0; i < 4; i++) { ftr->crc32 += ((unsigned int) *(ftrBuf++) << (i * 8)); } /* next data size */ for (i = 0; i < 8; i++) { ftr->uncompressedSize += (unsigned long long) *(ftrBuf++) << (i * 8); } /* read version 0 files, omit member length for now*/ return 0; } void initializeLZIPFormatHandler(struct elzma_format_handler * hand) { hand->header_size = ELZMA_LZIP_HEADER_SIZE; hand->init_header = initLzipHeader; hand->parse_header = parseLzipHeader; hand->serialize_header = serializeLzipHeader; hand->footer_size = ELZMA_LZIP_FOOTER_SIZE; hand->serialize_footer = serializeLzipFooter; hand->parse_footer = parseLzipFooter; } zmat-0.9.8/src/easylzma/lzip_header.h000066400000000000000000000004351366304620100175610ustar00rootroot00000000000000#ifndef __EASYLZMA_LZIP_HEADER__ #define __EASYLZMA_LZIP_HEADER__ #include "common_internal.h" /* lzip file format documented here: * http://download.savannah.gnu.org/releases-noredirect/lzip/manual/ */ void initializeLZIPFormatHandler(struct elzma_format_handler * hand); #endif 
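The lzip handlers above (lzip_header.c/.h) pin down the container layout behind easylzma's ELZMA_lzip format: a 6-byte header (the magic "LZIP", a version byte, and one byte coding the dictionary size as a power of two), the raw LZMA stream, and a 12-byte footer holding the little-endian CRC32 followed by the 64-bit uncompressed size. The short standalone sketch below mirrors the dictionary-size decoding done by parseLzipHeader; it is illustrative only, and the function name and sample bytes are invented for the example.

#include <stdio.h>

/* Illustrative only: recover the dictionary size from byte 5 of an lzip
 * header, the same way parseLzipHeader does (dictSize = 1 << (byte & 0x1F)). */
static unsigned int example_lzip_dict_size(const unsigned char hdr[6])
{
    return 1u << (hdr[5] & 0x1F);
}

int main(void)
{
    /* "LZIP", version 0, dictionary code 24 -> 1 << 24 bytes (16 MiB) */
    const unsigned char hdr[6] = { 'L', 'Z', 'I', 'P', 0, 24 };
    printf("lzip dictionary size: %u bytes\n", example_lzip_dict_size(hdr));
    return 0;
}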
zmat-0.9.8/src/easylzma/lzma_header.c000066400000000000000000000072131366304620100175420ustar00rootroot00000000000000/* * Written in 2009 by Lloyd Hilaiel * * License * * All the cruft you find here is public domain. You don't have to credit * anyone to use this code, but my personal request is that you mention * Igor Pavlov for his hard, high quality work. */ /* XXX: clean this up, it's mostly lifted from pavel */ #include "lzma_header.h" #include #include #define ELZMA_LZMA_HEADER_SIZE 13 #define ELZMA_LZMA_PROPSBUF_SIZE 5 /**************** Header parsing ****************/ #ifndef UINT64_MAX #define UINT64_MAX ((unsigned long long) -1) #endif /* Parse the properties byte */ static char lzmadec_header_properties ( unsigned char *pb, unsigned char *lp, unsigned char *lc, const unsigned char c) { /* pb, lp and lc are encoded into a single byte. */ if (c > (9 * 5 * 5)) return -1; *pb = c / (9 * 5); /* 0 <= pb <= 4 */ *lp = (c % (9 * 5)) / 9; /* 0 <= lp <= 4 */ *lc = c % 9; /* 0 <= lc <= 8 */ assert (*pb < 5 && *lp < 5 && *lc < 9); return 0; } /* Parse the dictionary size (4 bytes, little endian) */ static char lzmadec_header_dictionary (unsigned int *size, const unsigned char *buffer) { unsigned int i; *size = 0; for (i = 0; i < 4; i++) *size += (unsigned int)(*buffer++) << (i * 8); /* The dictionary size is limited to 256 MiB (checked from * LZMA SDK 4.30) */ if (*size > (1 << 28)) return -1; return 0; } /* Parse the uncompressed size field (8 bytes, little endian) */ static void lzmadec_header_uncompressed (unsigned long long *size, unsigned char *is_streamed, const unsigned char *buffer) { unsigned int i; /* Streamed files have all 64 bits set in the size field. * We don't know the uncompressed size beforehand. */ *is_streamed = 1; /* Assume streamed. 
*/ *size = 0; for (i = 0; i < 8; i++) { *size += (unsigned long long)buffer[i] << (i * 8); if (buffer[i] != 255) *is_streamed = 0; } assert ((*is_streamed == 1 && *size == UINT64_MAX) || (*is_streamed == 0 && *size < UINT64_MAX)); } static void initLzmaHeader(struct elzma_file_header * hdr) { memset((void *) hdr, 0, sizeof(struct elzma_file_header)); } static int parseLzmaHeader(const unsigned char * hdrBuf, struct elzma_file_header * hdr) { if (lzmadec_header_properties(&(hdr->pb), &(hdr->lp), &(hdr->lc), *hdrBuf) || lzmadec_header_dictionary(&(hdr->dictSize), hdrBuf + 1)) { return 1; } lzmadec_header_uncompressed(&(hdr->uncompressedSize), &(hdr->isStreamed), hdrBuf + 5); return 0; } static int serializeLzmaHeader(unsigned char * hdrBuf, const struct elzma_file_header * hdr) { unsigned int i; memset((void *) hdrBuf, 0, ELZMA_LZMA_HEADER_SIZE); /* encode lc, pb, and lp */ *hdrBuf++ = hdr->lc + (hdr->pb * 45) + (hdr->lp * 45 * 9); /* encode dictionary size */ for (i = 0; i < 4; i++) { *(hdrBuf++) = (unsigned char) (hdr->dictSize >> (i * 8)); } /* encode uncompressed size */ for (i = 0; i < 8; i++) { if (hdr->isStreamed) { *(hdrBuf++) = 0xff; } else { *(hdrBuf++) = (unsigned char) (hdr->uncompressedSize >> (i * 8)); } } return 0; } void initializeLZMAFormatHandler(struct elzma_format_handler * hand) { hand->header_size = ELZMA_LZMA_HEADER_SIZE; hand->init_header = initLzmaHeader; hand->parse_header = parseLzmaHeader; hand->serialize_header = serializeLzmaHeader; hand->footer_size = 0; hand->serialize_footer = NULL; } zmat-0.9.8/src/easylzma/lzma_header.h000066400000000000000000000003601366304620100175430ustar00rootroot00000000000000#ifndef __EASYLZMA_LZMA_HEADER__ #define __EASYLZMA_LZMA_HEADER__ #include "common_internal.h" /* LZMA-Alone header format gleaned from reading Igor's code */ void initializeLZMAFormatHandler(struct elzma_format_handler * hand); #endif zmat-0.9.8/src/easylzma/pavlov/000077500000000000000000000000001366304620100164275ustar00rootroot00000000000000zmat-0.9.8/src/easylzma/pavlov/7zBuf.c000077500000000000000000000007651366304620100176030ustar00rootroot00000000000000/* 7zBuf.c -- Byte Buffer 2008-03-28 Igor Pavlov Public domain */ #include "7zBuf.h" void Buf_Init(CBuf *p) { p->data = 0; p->size = 0; } int Buf_Create(CBuf *p, size_t size, ISzAlloc *alloc) { p->size = 0; if (size == 0) { p->data = 0; return 1; } p->data = (Byte *)alloc->Alloc(alloc, size); if (p->data != 0) { p->size = size; return 1; } return 0; } void Buf_Free(CBuf *p, ISzAlloc *alloc) { alloc->Free(alloc, p->data); p->data = 0; p->size = 0; } zmat-0.9.8/src/easylzma/pavlov/7zBuf.h000077500000000000000000000011041366304620100175740ustar00rootroot00000000000000/* 7zBuf.h -- Byte Buffer 2008-10-04 : Igor Pavlov : Public domain */ #ifndef __7Z_BUF_H #define __7Z_BUF_H #include "Types.h" typedef struct { Byte *data; size_t size; } CBuf; void Buf_Init(CBuf *p); int Buf_Create(CBuf *p, size_t size, ISzAlloc *alloc); void Buf_Free(CBuf *p, ISzAlloc *alloc); typedef struct { Byte *data; size_t size; size_t pos; } CDynBuf; void DynBuf_Construct(CDynBuf *p); void DynBuf_SeekToBeg(CDynBuf *p); int DynBuf_Write(CDynBuf *p, const Byte *buf, size_t size, ISzAlloc *alloc); void DynBuf_Free(CDynBuf *p, ISzAlloc *alloc); #endif zmat-0.9.8/src/easylzma/pavlov/7zBuf2.c000077500000000000000000000015051366304620100176560ustar00rootroot00000000000000/* 7zBuf2.c -- Byte Buffer 2008-10-04 : Igor Pavlov : Public domain */ #include #include "7zBuf.h" void DynBuf_Construct(CDynBuf *p) { p->data = 0; p->size = 0; p->pos = 0; } 
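/* Illustrative usage sketch (added example; the function name and calling
 * context are assumptions, not part of the original 7-Zip/easylzma sources).
 * CDynBuf is a growable byte buffer: DynBuf_Write, defined just below,
 * reallocates with 25% headroom (newSize = p->pos + size; newSize += newSize / 4)
 * whenever the pending bytes do not fit, copies the old contents over, and
 * appends at p->pos.  Kept under #if 0 so it is never compiled into the library. */
#if 0
static int DynBuf_ExampleAppend(CDynBuf *b, const Byte *bytes, size_t n, ISzAlloc *alloc)
{
    if (!DynBuf_Write(b, bytes, n, alloc)) /* returns 0 on allocation failure */
        return 0;
    /* b->data now holds b->pos bytes; rewind if the caller wants to reuse it */
    DynBuf_SeekToBeg(b);
    return 1;
}
#endif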
void DynBuf_SeekToBeg(CDynBuf *p) { p->pos = 0; } int DynBuf_Write(CDynBuf *p, const Byte *buf, size_t size, ISzAlloc *alloc) { if (size > p->size - p->pos) { size_t newSize = p->pos + size; Byte *data; newSize += newSize / 4; data = (Byte *)alloc->Alloc(alloc, newSize); if (data == 0) return 0; p->size = newSize; memcpy(data, p->data, p->pos); alloc->Free(alloc, p->data); p->data = data; } memcpy(p->data + p->pos, buf, size); p->pos += size; return 1; } void DynBuf_Free(CDynBuf *p, ISzAlloc *alloc) { alloc->Free(alloc, p->data); p->data = 0; p->size = 0; p->pos = 0; } zmat-0.9.8/src/easylzma/pavlov/7zCrc.c000077500000000000000000000012531366304620100175670ustar00rootroot00000000000000/* 7zCrc.c -- CRC32 calculation 2008-08-05 Igor Pavlov Public domain */ #include "7zCrc.h" #define kCrcPoly 0xEDB88320 UInt32 g_CrcTable[256]; void MY_FAST_CALL CrcGenerateTable(void) { UInt32 i; for (i = 0; i < 256; i++) { UInt32 r = i; int j; for (j = 0; j < 8; j++) r = (r >> 1) ^ (kCrcPoly & ~((r & 1) - 1)); g_CrcTable[i] = r; } } UInt32 MY_FAST_CALL CrcUpdate(UInt32 v, const void *data, size_t size) { const Byte *p = (const Byte *)data; for (; size > 0 ; size--, p++) v = CRC_UPDATE_BYTE(v, *p); return v; } UInt32 MY_FAST_CALL CrcCalc(const void *data, size_t size) { return CrcUpdate(CRC_INIT_VAL, data, size) ^ 0xFFFFFFFF; } zmat-0.9.8/src/easylzma/pavlov/7zCrc.h000077500000000000000000000010231366304620100175670ustar00rootroot00000000000000/* 7zCrc.h -- CRC32 calculation 2008-03-13 Igor Pavlov Public domain */ #ifndef __7Z_CRC_H #define __7Z_CRC_H #include #include "Types.h" extern UInt32 g_CrcTable[]; void MY_FAST_CALL CrcGenerateTable(void); #define CRC_INIT_VAL 0xFFFFFFFF #define CRC_GET_DIGEST(crc) ((crc) ^ 0xFFFFFFFF) #define CRC_UPDATE_BYTE(crc, b) (g_CrcTable[((crc) ^ (b)) & 0xFF] ^ ((crc) >> 8)) UInt32 MY_FAST_CALL CrcUpdate(UInt32 crc, const void *data, size_t size); UInt32 MY_FAST_CALL CrcCalc(const void *data, size_t size); #endif zmat-0.9.8/src/easylzma/pavlov/7zFile.c000077500000000000000000000135601366304620100177430ustar00rootroot00000000000000/* 7zFile.c -- File IO 2008-11-22 : Igor Pavlov : Public domain */ #include "7zFile.h" #ifndef USE_WINDOWS_FILE #include #endif #ifdef USE_WINDOWS_FILE /* ReadFile and WriteFile functions in Windows have BUG: If you Read or Write 64MB or more (probably min_failure_size = 64MB - 32KB + 1) from/to Network file, it returns ERROR_NO_SYSTEM_RESOURCES (Insufficient system resources exist to complete the requested service). Probably in some version of Windows there are problems with other sizes: for 32 MB (maybe also for 16 MB). And message can be "Network connection was lost" */ #define kChunkSizeMax (1 << 22) #endif void File_Construct(CSzFile *p) { #ifdef USE_WINDOWS_FILE p->handle = INVALID_HANDLE_VALUE; #else p->file = NULL; #endif } static WRes File_Open(CSzFile *p, const char *name, int writeMode) { #ifdef USE_WINDOWS_FILE p->handle = CreateFileA(name, writeMode ? GENERIC_WRITE : GENERIC_READ, FILE_SHARE_READ, NULL, writeMode ? CREATE_ALWAYS : OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); return (p->handle != INVALID_HANDLE_VALUE) ? 0 : GetLastError(); #else p->file = fopen(name, writeMode ? "wb+" : "rb"); return (p->file != 0) ? 
0 : errno; #endif } WRes InFile_Open(CSzFile *p, const char *name) { return File_Open(p, name, 0); } WRes OutFile_Open(CSzFile *p, const char *name) { return File_Open(p, name, 1); } WRes File_Close(CSzFile *p) { #ifdef USE_WINDOWS_FILE if (p->handle != INVALID_HANDLE_VALUE) { if (!CloseHandle(p->handle)) return GetLastError(); p->handle = INVALID_HANDLE_VALUE; } #else if (p->file != NULL) { int res = fclose(p->file); if (res != 0) return res; p->file = NULL; } #endif return 0; } WRes File_Read(CSzFile *p, void *data, size_t *size) { size_t originalSize = *size; if (originalSize == 0) return 0; #ifdef USE_WINDOWS_FILE *size = 0; do { DWORD curSize = (originalSize > kChunkSizeMax) ? kChunkSizeMax : (DWORD)originalSize; DWORD processed = 0; BOOL res = ReadFile(p->handle, data, curSize, &processed, NULL); data = (void *)((Byte *)data + processed); originalSize -= processed; *size += processed; if (!res) return GetLastError(); if (processed == 0) break; } while (originalSize > 0); return 0; #else *size = fread(data, 1, originalSize, p->file); if (*size == originalSize) return 0; return ferror(p->file); #endif } WRes File_Write(CSzFile *p, const void *data, size_t *size) { size_t originalSize = *size; if (originalSize == 0) return 0; #ifdef USE_WINDOWS_FILE *size = 0; do { DWORD curSize = (originalSize > kChunkSizeMax) ? kChunkSizeMax : (DWORD)originalSize; DWORD processed = 0; BOOL res = WriteFile(p->handle, data, curSize, &processed, NULL); data = (void *)((Byte *)data + processed); originalSize -= processed; *size += processed; if (!res) return GetLastError(); if (processed == 0) break; } while (originalSize > 0); return 0; #else *size = fwrite(data, 1, originalSize, p->file); if (*size == originalSize) return 0; return ferror(p->file); #endif } WRes File_Seek(CSzFile *p, Int64 *pos, ESzSeek origin) { #ifdef USE_WINDOWS_FILE LARGE_INTEGER value; DWORD moveMethod; value.LowPart = (DWORD)*pos; value.HighPart = (LONG)((UInt64)*pos >> 16 >> 16); /* for case when UInt64 is 32-bit only */ switch (origin) { case SZ_SEEK_SET: moveMethod = FILE_BEGIN; break; case SZ_SEEK_CUR: moveMethod = FILE_CURRENT; break; case SZ_SEEK_END: moveMethod = FILE_END; break; default: return ERROR_INVALID_PARAMETER; } value.LowPart = SetFilePointer(p->handle, value.LowPart, &value.HighPart, moveMethod); if (value.LowPart == 0xFFFFFFFF) { WRes res = GetLastError(); if (res != NO_ERROR) return res; } *pos = ((Int64)value.HighPart << 32) | value.LowPart; return 0; #else int moveMethod; int res; switch (origin) { case SZ_SEEK_SET: moveMethod = SEEK_SET; break; case SZ_SEEK_CUR: moveMethod = SEEK_CUR; break; case SZ_SEEK_END: moveMethod = SEEK_END; break; default: return 1; } res = fseek(p->file, (long)*pos, moveMethod); *pos = ftell(p->file); return res; #endif } WRes File_GetLength(CSzFile *p, UInt64 *length) { #ifdef USE_WINDOWS_FILE DWORD sizeHigh; DWORD sizeLow = GetFileSize(p->handle, &sizeHigh); if (sizeLow == 0xFFFFFFFF) { DWORD res = GetLastError(); if (res != NO_ERROR) return res; } *length = (((UInt64)sizeHigh) << 32) + sizeLow; return 0; #else long pos = ftell(p->file); int res = fseek(p->file, 0, SEEK_END); *length = ftell(p->file); fseek(p->file, pos, SEEK_SET); return res; #endif } /* ---------- FileSeqInStream ---------- */ static SRes FileSeqInStream_Read(void *pp, void *buf, size_t *size) { CFileSeqInStream *p = (CFileSeqInStream *)pp; return File_Read(&p->file, buf, size) == 0 ? 
SZ_OK : SZ_ERROR_READ; } void FileSeqInStream_CreateVTable(CFileSeqInStream *p) { p->s.Read = FileSeqInStream_Read; } /* ---------- FileInStream ---------- */ static SRes FileInStream_Read(void *pp, void *buf, size_t *size) { CFileInStream *p = (CFileInStream *)pp; return (File_Read(&p->file, buf, size) == 0) ? SZ_OK : SZ_ERROR_READ; } static SRes FileInStream_Seek(void *pp, Int64 *pos, ESzSeek origin) { CFileInStream *p = (CFileInStream *)pp; return File_Seek(&p->file, pos, origin); } void FileInStream_CreateVTable(CFileInStream *p) { p->s.Read = FileInStream_Read; p->s.Seek = FileInStream_Seek; } /* ---------- FileOutStream ---------- */ static size_t FileOutStream_Write(void *pp, const void *data, size_t size) { CFileOutStream *p = (CFileOutStream *)pp; File_Write(&p->file, data, &size); return size; } void FileOutStream_CreateVTable(CFileOutStream *p) { p->s.Write = FileOutStream_Write; } zmat-0.9.8/src/easylzma/pavlov/7zFile.h000077500000000000000000000023731366304620100177500ustar00rootroot00000000000000/* 7zFile.h -- File IO 2008-11-22 : Igor Pavlov : Public domain */ #ifndef __7Z_FILE_H #define __7Z_FILE_H #ifdef _WIN32 #define USE_WINDOWS_FILE #endif #ifdef USE_WINDOWS_FILE #include #else #include #endif #include "Types.h" /* ---------- File ---------- */ typedef struct { #ifdef USE_WINDOWS_FILE HANDLE handle; #else FILE *file; #endif } CSzFile; void File_Construct(CSzFile *p); WRes InFile_Open(CSzFile *p, const char *name); WRes OutFile_Open(CSzFile *p, const char *name); WRes File_Close(CSzFile *p); /* reads max(*size, remain file's size) bytes */ WRes File_Read(CSzFile *p, void *data, size_t *size); /* writes *size bytes */ WRes File_Write(CSzFile *p, const void *data, size_t *size); WRes File_Seek(CSzFile *p, Int64 *pos, ESzSeek origin); WRes File_GetLength(CSzFile *p, UInt64 *length); /* ---------- FileInStream ---------- */ typedef struct { ISeqInStream s; CSzFile file; } CFileSeqInStream; void FileSeqInStream_CreateVTable(CFileSeqInStream *p); typedef struct { ISeekInStream s; CSzFile file; } CFileInStream; void FileInStream_CreateVTable(CFileInStream *p); typedef struct { ISeqOutStream s; CSzFile file; } CFileOutStream; void FileOutStream_CreateVTable(CFileOutStream *p); #endif zmat-0.9.8/src/easylzma/pavlov/7zStream.c000077500000000000000000000076071366304620100203240ustar00rootroot00000000000000/* 7zStream.c -- 7z Stream functions 2008-11-23 : Igor Pavlov : Public domain */ #include #include "Types.h" SRes SeqInStream_Read2(ISeqInStream *stream, void *buf, size_t size, SRes errorType) { while (size != 0) { size_t processed = size; RINOK(stream->Read(stream, buf, &processed)); if (processed == 0) return errorType; buf = (void *)((Byte *)buf + processed); size -= processed; } return SZ_OK; } SRes SeqInStream_Read(ISeqInStream *stream, void *buf, size_t size) { return SeqInStream_Read2(stream, buf, size, SZ_ERROR_INPUT_EOF); } SRes SeqInStream_ReadByte(ISeqInStream *stream, Byte *buf) { size_t processed = 1; RINOK(stream->Read(stream, buf, &processed)); return (processed == 1) ? 
SZ_OK : SZ_ERROR_INPUT_EOF; } SRes LookInStream_SeekTo(ILookInStream *stream, UInt64 offset) { Int64 t = offset; return stream->Seek(stream, &t, SZ_SEEK_SET); } SRes LookInStream_LookRead(ILookInStream *stream, void *buf, size_t *size) { void *lookBuf; if (*size == 0) return SZ_OK; RINOK(stream->Look(stream, &lookBuf, size)); memcpy(buf, lookBuf, *size); return stream->Skip(stream, *size); } SRes LookInStream_Read2(ILookInStream *stream, void *buf, size_t size, SRes errorType) { while (size != 0) { size_t processed = size; RINOK(stream->Read(stream, buf, &processed)); if (processed == 0) return errorType; buf = (void *)((Byte *)buf + processed); size -= processed; } return SZ_OK; } SRes LookInStream_Read(ILookInStream *stream, void *buf, size_t size) { return LookInStream_Read2(stream, buf, size, SZ_ERROR_INPUT_EOF); } static SRes LookToRead_Look_Lookahead(void *pp, void **buf, size_t *size) { SRes res = SZ_OK; CLookToRead *p = (CLookToRead *)pp; size_t size2 = p->size - p->pos; if (size2 == 0 && *size > 0) { p->pos = 0; size2 = LookToRead_BUF_SIZE; res = p->realStream->Read(p->realStream, p->buf, &size2); p->size = size2; } if (size2 < *size) *size = size2; *buf = p->buf + p->pos; return res; } static SRes LookToRead_Look_Exact(void *pp, void **buf, size_t *size) { SRes res = SZ_OK; CLookToRead *p = (CLookToRead *)pp; size_t size2 = p->size - p->pos; if (size2 == 0 && *size > 0) { p->pos = 0; if (*size > LookToRead_BUF_SIZE) *size = LookToRead_BUF_SIZE; res = p->realStream->Read(p->realStream, p->buf, size); size2 = p->size = *size; } if (size2 < *size) *size = size2; *buf = p->buf + p->pos; return res; } static SRes LookToRead_Skip(void *pp, size_t offset) { CLookToRead *p = (CLookToRead *)pp; p->pos += offset; return SZ_OK; } static SRes LookToRead_Read(void *pp, void *buf, size_t *size) { CLookToRead *p = (CLookToRead *)pp; size_t rem = p->size - p->pos; if (rem == 0) return p->realStream->Read(p->realStream, buf, size); if (rem > *size) rem = *size; memcpy(buf, p->buf + p->pos, rem); p->pos += rem; *size = rem; return SZ_OK; } static SRes LookToRead_Seek(void *pp, Int64 *pos, ESzSeek origin) { CLookToRead *p = (CLookToRead *)pp; p->pos = p->size = 0; return p->realStream->Seek(p->realStream, pos, origin); } void LookToRead_CreateVTable(CLookToRead *p, int lookahead) { p->s.Look = lookahead ? 
LookToRead_Look_Lookahead : LookToRead_Look_Exact; p->s.Skip = LookToRead_Skip; p->s.Read = LookToRead_Read; p->s.Seek = LookToRead_Seek; } void LookToRead_Init(CLookToRead *p) { p->pos = p->size = 0; } static SRes SecToLook_Read(void *pp, void *buf, size_t *size) { CSecToLook *p = (CSecToLook *)pp; return LookInStream_LookRead(p->realStream, buf, size); } void SecToLook_CreateVTable(CSecToLook *p) { p->s.Read = SecToLook_Read; } static SRes SecToRead_Read(void *pp, void *buf, size_t *size) { CSecToRead *p = (CSecToRead *)pp; return p->realStream->Read(p->realStream, buf, size); } void SecToRead_CreateVTable(CSecToRead *p) { p->s.Read = SecToRead_Read; } zmat-0.9.8/src/easylzma/pavlov/7zVersion.h000077500000000000000000000003761366304620100205170ustar00rootroot00000000000000#define MY_VER_MAJOR 4 #define MY_VER_MINOR 63 #define MY_VER_BUILD 0 #define MY_VERSION "4.63" #define MY_DATE "2008-12-31" #define MY_COPYRIGHT ": Igor Pavlov : Public domain" #define MY_VERSION_COPYRIGHT_DATE MY_VERSION " " MY_COPYRIGHT " : " MY_DATE zmat-0.9.8/src/easylzma/pavlov/Alloc.c000077500000000000000000000051741366304620100176370ustar00rootroot00000000000000/* Alloc.c -- Memory allocation functions 2008-09-24 Igor Pavlov Public domain */ #ifdef _WIN32 #include #endif #include #include "Alloc.h" /* #define _SZ_ALLOC_DEBUG */ /* use _SZ_ALLOC_DEBUG to debug alloc/free operations */ #ifdef _SZ_ALLOC_DEBUG #include int g_allocCount = 0; int g_allocCountMid = 0; int g_allocCountBig = 0; #endif void *MyAlloc(size_t size) { if (size == 0) return 0; #ifdef _SZ_ALLOC_DEBUG { void *p = malloc(size); fprintf(stderr, "\nAlloc %10d bytes, count = %10d, addr = %8X", size, g_allocCount++, (unsigned)p); return p; } #else return malloc(size); #endif } void MyFree(void *address) { #ifdef _SZ_ALLOC_DEBUG if (address != 0) fprintf(stderr, "\nFree; count = %10d, addr = %8X", --g_allocCount, (unsigned)address); #endif free(address); } #ifdef _WIN32 void *MidAlloc(size_t size) { if (size == 0) return 0; #ifdef _SZ_ALLOC_DEBUG fprintf(stderr, "\nAlloc_Mid %10d bytes; count = %10d", size, g_allocCountMid++); #endif return VirtualAlloc(0, size, MEM_COMMIT, PAGE_READWRITE); } void MidFree(void *address) { #ifdef _SZ_ALLOC_DEBUG if (address != 0) fprintf(stderr, "\nFree_Mid; count = %10d", --g_allocCountMid); #endif if (address == 0) return; VirtualFree(address, 0, MEM_RELEASE); } #ifndef MEM_LARGE_PAGES #undef _7ZIP_LARGE_PAGES #endif #ifdef _7ZIP_LARGE_PAGES SIZE_T g_LargePageSize = 0; typedef SIZE_T (WINAPI *GetLargePageMinimumP)(); #endif void SetLargePageSize() { #ifdef _7ZIP_LARGE_PAGES SIZE_T size = 0; GetLargePageMinimumP largePageMinimum = (GetLargePageMinimumP) GetProcAddress(GetModuleHandle(TEXT("kernel32.dll")), "GetLargePageMinimum"); if (largePageMinimum == 0) return; size = largePageMinimum(); if (size == 0 || (size & (size - 1)) != 0) return; g_LargePageSize = size; #endif } void *BigAlloc(size_t size) { if (size == 0) return 0; #ifdef _SZ_ALLOC_DEBUG fprintf(stderr, "\nAlloc_Big %10d bytes; count = %10d", size, g_allocCountBig++); #endif #ifdef _7ZIP_LARGE_PAGES if (g_LargePageSize != 0 && g_LargePageSize <= (1 << 30) && size >= (1 << 18)) { void *res = VirtualAlloc(0, (size + g_LargePageSize - 1) & (~(g_LargePageSize - 1)), MEM_COMMIT | MEM_LARGE_PAGES, PAGE_READWRITE); if (res != 0) return res; } #endif return VirtualAlloc(0, size, MEM_COMMIT, PAGE_READWRITE); } void BigFree(void *address) { #ifdef _SZ_ALLOC_DEBUG if (address != 0) fprintf(stderr, "\nFree_Big; count = %10d", --g_allocCountBig); #endif if (address == 
0) return; VirtualFree(address, 0, MEM_RELEASE); } #endif zmat-0.9.8/src/easylzma/pavlov/Alloc.h000077500000000000000000000010451366304620100176350ustar00rootroot00000000000000/* Alloc.h -- Memory allocation functions 2008-03-13 Igor Pavlov Public domain */ #ifndef __COMMON_ALLOC_H #define __COMMON_ALLOC_H #include void *MyAlloc(size_t size); void MyFree(void *address); #ifdef _WIN32 void SetLargePageSize(); void *MidAlloc(size_t size); void MidFree(void *address); void *BigAlloc(size_t size); void BigFree(void *address); #else #define MidAlloc(size) MyAlloc(size) #define MidFree(address) MyFree(address) #define BigAlloc(size) MyAlloc(size) #define BigFree(address) MyFree(address) #endif #endif zmat-0.9.8/src/easylzma/pavlov/Bcj2.c000077500000000000000000000061421366304620100173610ustar00rootroot00000000000000/* Bcj2.c -- Converter for x86 code (BCJ2) 2008-10-04 : Igor Pavlov : Public domain */ #include "Bcj2.h" #ifdef _LZMA_PROB32 #define CProb UInt32 #else #define CProb UInt16 #endif #define IsJcc(b0, b1) ((b0) == 0x0F && ((b1) & 0xF0) == 0x80) #define IsJ(b0, b1) ((b1 & 0xFE) == 0xE8 || IsJcc(b0, b1)) #define kNumTopBits 24 #define kTopValue ((UInt32)1 << kNumTopBits) #define kNumBitModelTotalBits 11 #define kBitModelTotal (1 << kNumBitModelTotalBits) #define kNumMoveBits 5 #define RC_READ_BYTE (*buffer++) #define RC_TEST { if (buffer == bufferLim) return SZ_ERROR_DATA; } #define RC_INIT2 code = 0; range = 0xFFFFFFFF; \ { int i; for (i = 0; i < 5; i++) { RC_TEST; code = (code << 8) | RC_READ_BYTE; }} #define NORMALIZE if (range < kTopValue) { RC_TEST; range <<= 8; code = (code << 8) | RC_READ_BYTE; } #define IF_BIT_0(p) ttt = *(p); bound = (range >> kNumBitModelTotalBits) * ttt; if (code < bound) #define UPDATE_0(p) range = bound; *(p) = (CProb)(ttt + ((kBitModelTotal - ttt) >> kNumMoveBits)); NORMALIZE; #define UPDATE_1(p) range -= bound; code -= bound; *(p) = (CProb)(ttt - (ttt >> kNumMoveBits)); NORMALIZE; int Bcj2_Decode( const Byte *buf0, SizeT size0, const Byte *buf1, SizeT size1, const Byte *buf2, SizeT size2, const Byte *buf3, SizeT size3, Byte *outBuf, SizeT outSize) { CProb p[256 + 2]; SizeT inPos = 0, outPos = 0; const Byte *buffer, *bufferLim; UInt32 range, code; Byte prevByte = 0; unsigned int i; for (i = 0; i < sizeof(p) / sizeof(p[0]); i++) p[i] = kBitModelTotal >> 1; buffer = buf3; bufferLim = buffer + size3; RC_INIT2 if (outSize == 0) return SZ_OK; for (;;) { Byte b; CProb *prob; UInt32 bound; UInt32 ttt; SizeT limit = size0 - inPos; if (outSize - outPos < limit) limit = outSize - outPos; while (limit != 0) { Byte b = buf0[inPos]; outBuf[outPos++] = b; if (IsJ(prevByte, b)) break; inPos++; prevByte = b; limit--; } if (limit == 0 || outPos == outSize) break; b = buf0[inPos++]; if (b == 0xE8) prob = p + prevByte; else if (b == 0xE9) prob = p + 256; else prob = p + 257; IF_BIT_0(prob) { UPDATE_0(prob) prevByte = b; } else { UInt32 dest; const Byte *v; UPDATE_1(prob) if (b == 0xE8) { v = buf1; if (size1 < 4) return SZ_ERROR_DATA; buf1 += 4; size1 -= 4; } else { v = buf2; if (size2 < 4) return SZ_ERROR_DATA; buf2 += 4; size2 -= 4; } dest = (((UInt32)v[0] << 24) | ((UInt32)v[1] << 16) | ((UInt32)v[2] << 8) | ((UInt32)v[3])) - ((UInt32)outPos + 4); outBuf[outPos++] = (Byte)dest; if (outPos == outSize) break; outBuf[outPos++] = (Byte)(dest >> 8); if (outPos == outSize) break; outBuf[outPos++] = (Byte)(dest >> 16); if (outPos == outSize) break; outBuf[outPos++] = prevByte = (Byte)(dest >> 24); } } return (outPos == outSize) ? 
SZ_OK : SZ_ERROR_DATA; } zmat-0.9.8/src/easylzma/pavlov/Bcj2.h000077500000000000000000000011761366304620100173700ustar00rootroot00000000000000/* Bcj2.h -- Converter for x86 code (BCJ2) 2008-10-04 : Igor Pavlov : Public domain */ #ifndef __BCJ2_H #define __BCJ2_H #include "Types.h" /* Conditions: outSize <= FullOutputSize, where FullOutputSize is full size of output stream of x86_2 filter. If buf0 overlaps outBuf, there are two required conditions: 1) (buf0 >= outBuf) 2) (buf0 + size0 >= outBuf + FullOutputSize). Returns: SZ_OK SZ_ERROR_DATA - Data error */ int Bcj2_Decode( const Byte *buf0, SizeT size0, const Byte *buf1, SizeT size1, const Byte *buf2, SizeT size2, const Byte *buf3, SizeT size3, Byte *outBuf, SizeT outSize); #endif zmat-0.9.8/src/easylzma/pavlov/Bra.c000077500000000000000000000061101366304620100173000ustar00rootroot00000000000000/* Bra.c -- Converters for RISC code 2008-10-04 : Igor Pavlov : Public domain */ #include "Bra.h" SizeT ARM_Convert(Byte *data, SizeT size, UInt32 ip, int encoding) { SizeT i; if (size < 4) return 0; size -= 4; ip += 8; for (i = 0; i <= size; i += 4) { if (data[i + 3] == 0xEB) { UInt32 dest; UInt32 src = ((UInt32)data[i + 2] << 16) | ((UInt32)data[i + 1] << 8) | (data[i + 0]); src <<= 2; if (encoding) dest = ip + (UInt32)i + src; else dest = src - (ip + (UInt32)i); dest >>= 2; data[i + 2] = (Byte)(dest >> 16); data[i + 1] = (Byte)(dest >> 8); data[i + 0] = (Byte)dest; } } return i; } SizeT ARMT_Convert(Byte *data, SizeT size, UInt32 ip, int encoding) { SizeT i; if (size < 4) return 0; size -= 4; ip += 4; for (i = 0; i <= size; i += 2) { if ((data[i + 1] & 0xF8) == 0xF0 && (data[i + 3] & 0xF8) == 0xF8) { UInt32 dest; UInt32 src = (((UInt32)data[i + 1] & 0x7) << 19) | ((UInt32)data[i + 0] << 11) | (((UInt32)data[i + 3] & 0x7) << 8) | (data[i + 2]); src <<= 1; if (encoding) dest = ip + (UInt32)i + src; else dest = src - (ip + (UInt32)i); dest >>= 1; data[i + 1] = (Byte)(0xF0 | ((dest >> 19) & 0x7)); data[i + 0] = (Byte)(dest >> 11); data[i + 3] = (Byte)(0xF8 | ((dest >> 8) & 0x7)); data[i + 2] = (Byte)dest; i += 2; } } return i; } SizeT PPC_Convert(Byte *data, SizeT size, UInt32 ip, int encoding) { SizeT i; if (size < 4) return 0; size -= 4; for (i = 0; i <= size; i += 4) { if ((data[i] >> 2) == 0x12 && (data[i + 3] & 3) == 1) { UInt32 src = ((UInt32)(data[i + 0] & 3) << 24) | ((UInt32)data[i + 1] << 16) | ((UInt32)data[i + 2] << 8) | ((UInt32)data[i + 3] & (~3)); UInt32 dest; if (encoding) dest = ip + (UInt32)i + src; else dest = src - (ip + (UInt32)i); data[i + 0] = (Byte)(0x48 | ((dest >> 24) & 0x3)); data[i + 1] = (Byte)(dest >> 16); data[i + 2] = (Byte)(dest >> 8); data[i + 3] &= 0x3; data[i + 3] |= dest; } } return i; } SizeT SPARC_Convert(Byte *data, SizeT size, UInt32 ip, int encoding) { UInt32 i; if (size < 4) return 0; size -= 4; for (i = 0; i <= size; i += 4) { if ((data[i] == 0x40 && (data[i + 1] & 0xC0) == 0x00) || (data[i] == 0x7F && (data[i + 1] & 0xC0) == 0xC0)) { UInt32 src = ((UInt32)data[i + 0] << 24) | ((UInt32)data[i + 1] << 16) | ((UInt32)data[i + 2] << 8) | ((UInt32)data[i + 3]); UInt32 dest; src <<= 2; if (encoding) dest = ip + i + src; else dest = src - (ip + i); dest >>= 2; dest = (((0 - ((dest >> 22) & 1)) << 22) & 0x3FFFFFFF) | (dest & 0x3FFFFF) | 0x40000000; data[i + 0] = (Byte)(dest >> 24); data[i + 1] = (Byte)(dest >> 16); data[i + 2] = (Byte)(dest >> 8); data[i + 3] = (Byte)dest; } } return i; } zmat-0.9.8/src/easylzma/pavlov/Bra.h000077500000000000000000000034461366304620100173160ustar00rootroot00000000000000/* Bra.h -- 
Branch converters for executables 2008-10-04 : Igor Pavlov : Public domain */ #ifndef __BRA_H #define __BRA_H #include "Types.h" /* These functions convert relative addresses to absolute addresses in CALL instructions to increase the compression ratio. In: data - data buffer size - size of data ip - current virtual Instruction Pinter (IP) value state - state variable for x86 converter encoding - 0 (for decoding), 1 (for encoding) Out: state - state variable for x86 converter Returns: The number of processed bytes. If you call these functions with multiple calls, you must start next call with first byte after block of processed bytes. Type Endian Alignment LookAhead x86 little 1 4 ARMT little 2 2 ARM little 4 0 PPC big 4 0 SPARC big 4 0 IA64 little 16 0 size must be >= Alignment + LookAhead, if it's not last block. If (size < Alignment + LookAhead), converter returns 0. Example: UInt32 ip = 0; for () { ; size must be >= Alignment + LookAhead, if it's not last block SizeT processed = Convert(data, size, ip, 1); data += processed; size -= processed; ip += processed; } */ #define x86_Convert_Init(state) { state = 0; } SizeT x86_Convert(Byte *data, SizeT size, UInt32 ip, UInt32 *state, int encoding); SizeT ARM_Convert(Byte *data, SizeT size, UInt32 ip, int encoding); SizeT ARMT_Convert(Byte *data, SizeT size, UInt32 ip, int encoding); SizeT PPC_Convert(Byte *data, SizeT size, UInt32 ip, int encoding); SizeT SPARC_Convert(Byte *data, SizeT size, UInt32 ip, int encoding); SizeT IA64_Convert(Byte *data, SizeT size, UInt32 ip, int encoding); #endif zmat-0.9.8/src/easylzma/pavlov/Bra86.c000077500000000000000000000042101366304620100174550ustar00rootroot00000000000000/* Bra86.c -- Converter for x86 code (BCJ) 2008-10-04 : Igor Pavlov : Public domain */ #include "Bra.h" #define Test86MSByte(b) ((b) == 0 || (b) == 0xFF) const Byte kMaskToAllowedStatus[8] = {1, 1, 1, 0, 1, 0, 0, 0}; const Byte kMaskToBitNumber[8] = {0, 1, 2, 2, 3, 3, 3, 3}; SizeT x86_Convert(Byte *data, SizeT size, UInt32 ip, UInt32 *state, int encoding) { SizeT bufferPos = 0, prevPosT; UInt32 prevMask = *state & 0x7; if (size < 5) return 0; ip += 5; prevPosT = (SizeT)0 - 1; for (;;) { Byte *p = data + bufferPos; Byte *limit = data + size - 4; for (; p < limit; p++) if ((*p & 0xFE) == 0xE8) break; bufferPos = (SizeT)(p - data); if (p >= limit) break; prevPosT = bufferPos - prevPosT; if (prevPosT > 3) prevMask = 0; else { prevMask = (prevMask << ((int)prevPosT - 1)) & 0x7; if (prevMask != 0) { Byte b = p[4 - kMaskToBitNumber[prevMask]]; if (!kMaskToAllowedStatus[prevMask] || Test86MSByte(b)) { prevPosT = bufferPos; prevMask = ((prevMask << 1) & 0x7) | 1; bufferPos++; continue; } } } prevPosT = bufferPos; if (Test86MSByte(p[4])) { UInt32 src = ((UInt32)p[4] << 24) | ((UInt32)p[3] << 16) | ((UInt32)p[2] << 8) | ((UInt32)p[1]); UInt32 dest; for (;;) { Byte b; int index; if (encoding) dest = (ip + (UInt32)bufferPos) + src; else dest = src - (ip + (UInt32)bufferPos); if (prevMask == 0) break; index = kMaskToBitNumber[prevMask] * 8; b = (Byte)(dest >> (24 - index)); if (!Test86MSByte(b)) break; src = dest ^ ((1 << (32 - index)) - 1); } p[4] = (Byte)(~(((dest >> 24) & 1) - 1)); p[3] = (Byte)(dest >> 16); p[2] = (Byte)(dest >> 8); p[1] = (Byte)dest; bufferPos += 5; } else { prevMask = ((prevMask << 1) & 0x7) | 1; bufferPos++; } } prevPosT = bufferPos - prevPosT; *state = ((prevPosT > 3) ? 
0 : ((prevMask << ((int)prevPosT - 1)) & 0x7)); return bufferPos; } zmat-0.9.8/src/easylzma/pavlov/BraIA64.c000077500000000000000000000033241366304620100176700ustar00rootroot00000000000000/* BraIA64.c -- Converter for IA-64 code 2008-10-04 : Igor Pavlov : Public domain */ #include "Bra.h" static const Byte kBranchTable[32] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 4, 6, 6, 0, 0, 7, 7, 4, 4, 0, 0, 4, 4, 0, 0 }; SizeT IA64_Convert(Byte *data, SizeT size, UInt32 ip, int encoding) { SizeT i; if (size < 16) return 0; size -= 16; for (i = 0; i <= size; i += 16) { UInt32 instrTemplate = data[i] & 0x1F; UInt32 mask = kBranchTable[instrTemplate]; UInt32 bitPos = 5; int slot; for (slot = 0; slot < 3; slot++, bitPos += 41) { UInt32 bytePos, bitRes; UInt64 instruction, instNorm; int j; if (((mask >> slot) & 1) == 0) continue; bytePos = (bitPos >> 3); bitRes = bitPos & 0x7; instruction = 0; for (j = 0; j < 6; j++) instruction += (UInt64)data[i + j + bytePos] << (8 * j); instNorm = instruction >> bitRes; if (((instNorm >> 37) & 0xF) == 0x5 && ((instNorm >> 9) & 0x7) == 0) { UInt32 src = (UInt32)((instNorm >> 13) & 0xFFFFF); UInt32 dest; src |= ((UInt32)(instNorm >> 36) & 1) << 20; src <<= 4; if (encoding) dest = ip + (UInt32)i + src; else dest = src - (ip + (UInt32)i); dest >>= 4; instNorm &= ~((UInt64)(0x8FFFFF) << 13); instNorm |= ((UInt64)(dest & 0xFFFFF) << 13); instNorm |= ((UInt64)(dest & 0x100000) << (36 - 20)); instruction &= (1 << bitRes) - 1; instruction |= (instNorm << bitRes); for (j = 0; j < 6; j++) data[i + j + bytePos] = (Byte)(instruction >> (8 * j)); } } } return i; } zmat-0.9.8/src/easylzma/pavlov/CpuArch.h000077500000000000000000000036351366304620100201370ustar00rootroot00000000000000/* CpuArch.h 2008-08-05 Igor Pavlov Public domain */ #ifndef __CPUARCH_H #define __CPUARCH_H /* LITTLE_ENDIAN_UNALIGN means: 1) CPU is LITTLE_ENDIAN 2) it's allowed to make unaligned memory accesses if LITTLE_ENDIAN_UNALIGN is not defined, it means that we don't know about these properties of platform. 
*/ #if defined(_M_IX86) || defined(_M_X64) || defined(_M_AMD64) || defined(__i386__) || defined(__x86_64__) #define LITTLE_ENDIAN_UNALIGN #endif #ifdef LITTLE_ENDIAN_UNALIGN #define GetUi16(p) (*(const UInt16 *)(p)) #define GetUi32(p) (*(const UInt32 *)(p)) #define GetUi64(p) (*(const UInt64 *)(p)) #define SetUi32(p, d) *(UInt32 *)(p) = (d); #else #define GetUi16(p) (((const Byte *)(p))[0] | ((UInt16)((const Byte *)(p))[1] << 8)) #define GetUi32(p) ( \ ((const Byte *)(p))[0] | \ ((UInt32)((const Byte *)(p))[1] << 8) | \ ((UInt32)((const Byte *)(p))[2] << 16) | \ ((UInt32)((const Byte *)(p))[3] << 24)) #define GetUi64(p) (GetUi32(p) | ((UInt64)GetUi32(((const Byte *)(p)) + 4) << 32)) #define SetUi32(p, d) { UInt32 _x_ = (d); \ ((Byte *)(p))[0] = (Byte)_x_; \ ((Byte *)(p))[1] = (Byte)(_x_ >> 8); \ ((Byte *)(p))[2] = (Byte)(_x_ >> 16); \ ((Byte *)(p))[3] = (Byte)(_x_ >> 24); } #endif #if defined(LITTLE_ENDIAN_UNALIGN) && defined(_WIN64) && (_MSC_VER >= 1300) #pragma intrinsic(_byteswap_ulong) #pragma intrinsic(_byteswap_uint64) #define GetBe32(p) _byteswap_ulong(*(const UInt32 *)(const Byte *)(p)) #define GetBe64(p) _byteswap_uint64(*(const UInt64 *)(const Byte *)(p)) #else #define GetBe32(p) ( \ ((UInt32)((const Byte *)(p))[0] << 24) | \ ((UInt32)((const Byte *)(p))[1] << 16) | \ ((UInt32)((const Byte *)(p))[2] << 8) | \ ((const Byte *)(p))[3] ) #define GetBe64(p) (((UInt64)GetBe32(p) << 32) | GetBe32(((const Byte *)(p)) + 4)) #endif #define GetBe16(p) (((UInt16)((const Byte *)(p))[0] << 8) | ((const Byte *)(p))[1]) #endif zmat-0.9.8/src/easylzma/pavlov/LzFind.c000077500000000000000000000460451366304620100177750ustar00rootroot00000000000000/* LzFind.c -- Match finder for LZ algorithms 2008-10-04 : Igor Pavlov : Public domain */ #include #include "LzFind.h" #include "LzHash.h" #define kEmptyHashValue 0 #define kMaxValForNormalize ((UInt32)0xFFFFFFFF) #define kNormalizeStepMin (1 << 10) /* it must be power of 2 */ #define kNormalizeMask (~(kNormalizeStepMin - 1)) #define kMaxHistorySize ((UInt32)3 << 30) #define kStartMaxLen 3 static void LzInWindow_Free(CMatchFinder *p, ISzAlloc *alloc) { if (!p->directInput) { alloc->Free(alloc, p->bufferBase); p->bufferBase = 0; } } /* keepSizeBefore + keepSizeAfter + keepSizeReserv must be < 4G) */ static int LzInWindow_Create(CMatchFinder *p, UInt32 keepSizeReserv, ISzAlloc *alloc) { UInt32 blockSize = p->keepSizeBefore + p->keepSizeAfter + keepSizeReserv; if (p->directInput) { p->blockSize = blockSize; return 1; } if (p->bufferBase == 0 || p->blockSize != blockSize) { LzInWindow_Free(p, alloc); p->blockSize = blockSize; p->bufferBase = (Byte *)alloc->Alloc(alloc, (size_t)blockSize); } return (p->bufferBase != 0); } Byte *MatchFinder_GetPointerToCurrentPos(CMatchFinder *p) { return p->buffer; } Byte MatchFinder_GetIndexByte(CMatchFinder *p, Int32 index) { return p->buffer[index]; } UInt32 MatchFinder_GetNumAvailableBytes(CMatchFinder *p) { return p->streamPos - p->pos; } void MatchFinder_ReduceOffsets(CMatchFinder *p, UInt32 subValue) { p->posLimit -= subValue; p->pos -= subValue; p->streamPos -= subValue; } static void MatchFinder_ReadBlock(CMatchFinder *p) { if (p->streamEndWasReached || p->result != SZ_OK) return; for (;;) { Byte *dest = p->buffer + (p->streamPos - p->pos); size_t size = (p->bufferBase + p->blockSize - dest); if (size == 0) return; p->result = p->stream->Read(p->stream, dest, &size); if (p->result != SZ_OK) return; if (size == 0) { p->streamEndWasReached = 1; return; } p->streamPos += (UInt32)size; if (p->streamPos - p->pos > 
p->keepSizeAfter) return; } } void MatchFinder_MoveBlock(CMatchFinder *p) { memmove(p->bufferBase, p->buffer - p->keepSizeBefore, (size_t)(p->streamPos - p->pos + p->keepSizeBefore)); p->buffer = p->bufferBase + p->keepSizeBefore; } int MatchFinder_NeedMove(CMatchFinder *p) { /* if (p->streamEndWasReached) return 0; */ return ((size_t)(p->bufferBase + p->blockSize - p->buffer) <= p->keepSizeAfter); } void MatchFinder_ReadIfRequired(CMatchFinder *p) { if (p->streamEndWasReached) return; if (p->keepSizeAfter >= p->streamPos - p->pos) MatchFinder_ReadBlock(p); } static void MatchFinder_CheckAndMoveAndRead(CMatchFinder *p) { if (MatchFinder_NeedMove(p)) MatchFinder_MoveBlock(p); MatchFinder_ReadBlock(p); } static void MatchFinder_SetDefaultSettings(CMatchFinder *p) { p->cutValue = 32; p->btMode = 1; p->numHashBytes = 4; /* p->skipModeBits = 0; */ p->directInput = 0; p->bigHash = 0; } #define kCrcPoly 0xEDB88320 void MatchFinder_Construct(CMatchFinder *p) { UInt32 i; p->bufferBase = 0; p->directInput = 0; p->hash = 0; MatchFinder_SetDefaultSettings(p); for (i = 0; i < 256; i++) { UInt32 r = i; int j; for (j = 0; j < 8; j++) r = (r >> 1) ^ (kCrcPoly & ~((r & 1) - 1)); p->crc[i] = r; } } static void MatchFinder_FreeThisClassMemory(CMatchFinder *p, ISzAlloc *alloc) { alloc->Free(alloc, p->hash); p->hash = 0; } void MatchFinder_Free(CMatchFinder *p, ISzAlloc *alloc) { MatchFinder_FreeThisClassMemory(p, alloc); LzInWindow_Free(p, alloc); } static CLzRef* AllocRefs(UInt32 num, ISzAlloc *alloc) { size_t sizeInBytes = (size_t)num * sizeof(CLzRef); if (sizeInBytes / sizeof(CLzRef) != num) return 0; return (CLzRef *)alloc->Alloc(alloc, sizeInBytes); } int MatchFinder_Create(CMatchFinder *p, UInt32 historySize, UInt32 keepAddBufferBefore, UInt32 matchMaxLen, UInt32 keepAddBufferAfter, ISzAlloc *alloc) { UInt32 sizeReserv; if (historySize > kMaxHistorySize) { MatchFinder_Free(p, alloc); return 0; } sizeReserv = historySize >> 1; if (historySize > ((UInt32)2 << 30)) sizeReserv = historySize >> 2; sizeReserv += (keepAddBufferBefore + matchMaxLen + keepAddBufferAfter) / 2 + (1 << 19); p->keepSizeBefore = historySize + keepAddBufferBefore + 1; p->keepSizeAfter = matchMaxLen + keepAddBufferAfter; /* we need one additional byte, since we use MoveBlock after pos++ and before dictionary using */ if (LzInWindow_Create(p, sizeReserv, alloc)) { UInt32 newCyclicBufferSize = (historySize /* >> p->skipModeBits */) + 1; UInt32 hs; p->matchMaxLen = matchMaxLen; { p->fixedHashSize = 0; if (p->numHashBytes == 2) hs = (1 << 16) - 1; else { hs = historySize - 1; hs |= (hs >> 1); hs |= (hs >> 2); hs |= (hs >> 4); hs |= (hs >> 8); hs >>= 1; /* hs >>= p->skipModeBits; */ hs |= 0xFFFF; /* don't change it! It's required for Deflate */ if (hs > (1 << 24)) { if (p->numHashBytes == 3) hs = (1 << 24) - 1; else hs >>= 1; } } p->hashMask = hs; hs++; if (p->numHashBytes > 2) p->fixedHashSize += kHash2Size; if (p->numHashBytes > 3) p->fixedHashSize += kHash3Size; if (p->numHashBytes > 4) p->fixedHashSize += kHash4Size; hs += p->fixedHashSize; } { UInt32 prevSize = p->hashSizeSum + p->numSons; UInt32 newSize; p->historySize = historySize; p->hashSizeSum = hs; p->cyclicBufferSize = newCyclicBufferSize; p->numSons = (p->btMode ? 
newCyclicBufferSize * 2 : newCyclicBufferSize); newSize = p->hashSizeSum + p->numSons; if (p->hash != 0 && prevSize == newSize) return 1; MatchFinder_FreeThisClassMemory(p, alloc); p->hash = AllocRefs(newSize, alloc); if (p->hash != 0) { p->son = p->hash + p->hashSizeSum; return 1; } } } MatchFinder_Free(p, alloc); return 0; } static void MatchFinder_SetLimits(CMatchFinder *p) { UInt32 limit = kMaxValForNormalize - p->pos; UInt32 limit2 = p->cyclicBufferSize - p->cyclicBufferPos; if (limit2 < limit) limit = limit2; limit2 = p->streamPos - p->pos; if (limit2 <= p->keepSizeAfter) { if (limit2 > 0) limit2 = 1; } else limit2 -= p->keepSizeAfter; if (limit2 < limit) limit = limit2; { UInt32 lenLimit = p->streamPos - p->pos; if (lenLimit > p->matchMaxLen) lenLimit = p->matchMaxLen; p->lenLimit = lenLimit; } p->posLimit = p->pos + limit; } void MatchFinder_Init(CMatchFinder *p) { UInt32 i; for (i = 0; i < p->hashSizeSum; i++) p->hash[i] = kEmptyHashValue; p->cyclicBufferPos = 0; p->buffer = p->bufferBase; p->pos = p->streamPos = p->cyclicBufferSize; p->result = SZ_OK; p->streamEndWasReached = 0; MatchFinder_ReadBlock(p); MatchFinder_SetLimits(p); } static UInt32 MatchFinder_GetSubValue(CMatchFinder *p) { return (p->pos - p->historySize - 1) & kNormalizeMask; } void MatchFinder_Normalize3(UInt32 subValue, CLzRef *items, UInt32 numItems) { UInt32 i; for (i = 0; i < numItems; i++) { UInt32 value = items[i]; if (value <= subValue) value = kEmptyHashValue; else value -= subValue; items[i] = value; } } static void MatchFinder_Normalize(CMatchFinder *p) { UInt32 subValue = MatchFinder_GetSubValue(p); MatchFinder_Normalize3(subValue, p->hash, p->hashSizeSum + p->numSons); MatchFinder_ReduceOffsets(p, subValue); } static void MatchFinder_CheckLimits(CMatchFinder *p) { if (p->pos == kMaxValForNormalize) MatchFinder_Normalize(p); if (!p->streamEndWasReached && p->keepSizeAfter == p->streamPos - p->pos) MatchFinder_CheckAndMoveAndRead(p); if (p->cyclicBufferPos == p->cyclicBufferSize) p->cyclicBufferPos = 0; MatchFinder_SetLimits(p); } static UInt32 * Hc_GetMatchesSpec(UInt32 lenLimit, UInt32 curMatch, UInt32 pos, const Byte *cur, CLzRef *son, UInt32 _cyclicBufferPos, UInt32 _cyclicBufferSize, UInt32 cutValue, UInt32 *distances, UInt32 maxLen) { son[_cyclicBufferPos] = curMatch; for (;;) { UInt32 delta = pos - curMatch; if (cutValue-- == 0 || delta >= _cyclicBufferSize) return distances; { const Byte *pb = cur - delta; curMatch = son[_cyclicBufferPos - delta + ((delta > _cyclicBufferPos) ? _cyclicBufferSize : 0)]; if (pb[maxLen] == cur[maxLen] && *pb == *cur) { UInt32 len = 0; while (++len != lenLimit) if (pb[len] != cur[len]) break; if (maxLen < len) { *distances++ = maxLen = len; *distances++ = delta - 1; if (len == lenLimit) return distances; } } } } } UInt32 * GetMatchesSpec1(UInt32 lenLimit, UInt32 curMatch, UInt32 pos, const Byte *cur, CLzRef *son, UInt32 _cyclicBufferPos, UInt32 _cyclicBufferSize, UInt32 cutValue, UInt32 *distances, UInt32 maxLen) { CLzRef *ptr0 = son + (_cyclicBufferPos << 1) + 1; CLzRef *ptr1 = son + (_cyclicBufferPos << 1); UInt32 len0 = 0, len1 = 0; for (;;) { UInt32 delta = pos - curMatch; if (cutValue-- == 0 || delta >= _cyclicBufferSize) { *ptr0 = *ptr1 = kEmptyHashValue; return distances; } { CLzRef *pair = son + ((_cyclicBufferPos - delta + ((delta > _cyclicBufferPos) ? _cyclicBufferSize : 0)) << 1); const Byte *pb = cur - delta; UInt32 len = (len0 < len1 ? 
len0 : len1); if (pb[len] == cur[len]) { if (++len != lenLimit && pb[len] == cur[len]) while (++len != lenLimit) if (pb[len] != cur[len]) break; if (maxLen < len) { *distances++ = maxLen = len; *distances++ = delta - 1; if (len == lenLimit) { *ptr1 = pair[0]; *ptr0 = pair[1]; return distances; } } } if (pb[len] < cur[len]) { *ptr1 = curMatch; ptr1 = pair + 1; curMatch = *ptr1; len1 = len; } else { *ptr0 = curMatch; ptr0 = pair; curMatch = *ptr0; len0 = len; } } } } static void SkipMatchesSpec(UInt32 lenLimit, UInt32 curMatch, UInt32 pos, const Byte *cur, CLzRef *son, UInt32 _cyclicBufferPos, UInt32 _cyclicBufferSize, UInt32 cutValue) { CLzRef *ptr0 = son + (_cyclicBufferPos << 1) + 1; CLzRef *ptr1 = son + (_cyclicBufferPos << 1); UInt32 len0 = 0, len1 = 0; for (;;) { UInt32 delta = pos - curMatch; if (cutValue-- == 0 || delta >= _cyclicBufferSize) { *ptr0 = *ptr1 = kEmptyHashValue; return; } { CLzRef *pair = son + ((_cyclicBufferPos - delta + ((delta > _cyclicBufferPos) ? _cyclicBufferSize : 0)) << 1); const Byte *pb = cur - delta; UInt32 len = (len0 < len1 ? len0 : len1); if (pb[len] == cur[len]) { while (++len != lenLimit) if (pb[len] != cur[len]) break; { if (len == lenLimit) { *ptr1 = pair[0]; *ptr0 = pair[1]; return; } } } if (pb[len] < cur[len]) { *ptr1 = curMatch; ptr1 = pair + 1; curMatch = *ptr1; len1 = len; } else { *ptr0 = curMatch; ptr0 = pair; curMatch = *ptr0; len0 = len; } } } } #define MOVE_POS \ ++p->cyclicBufferPos; \ p->buffer++; \ if (++p->pos == p->posLimit) MatchFinder_CheckLimits(p); #define MOVE_POS_RET MOVE_POS return offset; static void MatchFinder_MovePos(CMatchFinder *p) { MOVE_POS; } #define GET_MATCHES_HEADER2(minLen, ret_op) \ UInt32 lenLimit; UInt32 hashValue; const Byte *cur; UInt32 curMatch; \ lenLimit = p->lenLimit; { if (lenLimit < minLen) { MatchFinder_MovePos(p); ret_op; }} \ cur = p->buffer; #define GET_MATCHES_HEADER(minLen) GET_MATCHES_HEADER2(minLen, return 0) #define SKIP_HEADER(minLen) GET_MATCHES_HEADER2(minLen, continue) #define MF_PARAMS(p) p->pos, p->buffer, p->son, p->cyclicBufferPos, p->cyclicBufferSize, p->cutValue #define GET_MATCHES_FOOTER(offset, maxLen) \ offset = (UInt32)(GetMatchesSpec1(lenLimit, curMatch, MF_PARAMS(p), \ distances + offset, maxLen) - distances); MOVE_POS_RET; #define SKIP_FOOTER \ SkipMatchesSpec(lenLimit, curMatch, MF_PARAMS(p)); MOVE_POS; static UInt32 Bt2_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances) { UInt32 offset; GET_MATCHES_HEADER(2) HASH2_CALC; curMatch = p->hash[hashValue]; p->hash[hashValue] = p->pos; offset = 0; GET_MATCHES_FOOTER(offset, 1) } UInt32 Bt3Zip_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances) { UInt32 offset; GET_MATCHES_HEADER(3) HASH_ZIP_CALC; curMatch = p->hash[hashValue]; p->hash[hashValue] = p->pos; offset = 0; GET_MATCHES_FOOTER(offset, 2) } static UInt32 Bt3_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances) { UInt32 hash2Value, delta2, maxLen, offset; GET_MATCHES_HEADER(3) HASH3_CALC; delta2 = p->pos - p->hash[hash2Value]; curMatch = p->hash[kFix3HashSize + hashValue]; p->hash[hash2Value] = p->hash[kFix3HashSize + hashValue] = p->pos; maxLen = 2; offset = 0; if (delta2 < p->cyclicBufferSize && *(cur - delta2) == *cur) { for (; maxLen != lenLimit; maxLen++) if (cur[(ptrdiff_t)maxLen - delta2] != cur[maxLen]) break; distances[0] = maxLen; distances[1] = delta2 - 1; offset = 2; if (maxLen == lenLimit) { SkipMatchesSpec(lenLimit, curMatch, MF_PARAMS(p)); MOVE_POS_RET; } } GET_MATCHES_FOOTER(offset, maxLen) } static UInt32 
Bt4_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances) { UInt32 hash2Value, hash3Value, delta2, delta3, maxLen, offset; GET_MATCHES_HEADER(4) HASH4_CALC; delta2 = p->pos - p->hash[ hash2Value]; delta3 = p->pos - p->hash[kFix3HashSize + hash3Value]; curMatch = p->hash[kFix4HashSize + hashValue]; p->hash[ hash2Value] = p->hash[kFix3HashSize + hash3Value] = p->hash[kFix4HashSize + hashValue] = p->pos; maxLen = 1; offset = 0; if (delta2 < p->cyclicBufferSize && *(cur - delta2) == *cur) { distances[0] = maxLen = 2; distances[1] = delta2 - 1; offset = 2; } if (delta2 != delta3 && delta3 < p->cyclicBufferSize && *(cur - delta3) == *cur) { maxLen = 3; distances[offset + 1] = delta3 - 1; offset += 2; delta2 = delta3; } if (offset != 0) { for (; maxLen != lenLimit; maxLen++) if (cur[(ptrdiff_t)maxLen - delta2] != cur[maxLen]) break; distances[offset - 2] = maxLen; if (maxLen == lenLimit) { SkipMatchesSpec(lenLimit, curMatch, MF_PARAMS(p)); MOVE_POS_RET; } } if (maxLen < 3) maxLen = 3; GET_MATCHES_FOOTER(offset, maxLen) } static UInt32 Hc4_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances) { UInt32 hash2Value, hash3Value, delta2, delta3, maxLen, offset; GET_MATCHES_HEADER(4) HASH4_CALC; delta2 = p->pos - p->hash[ hash2Value]; delta3 = p->pos - p->hash[kFix3HashSize + hash3Value]; curMatch = p->hash[kFix4HashSize + hashValue]; p->hash[ hash2Value] = p->hash[kFix3HashSize + hash3Value] = p->hash[kFix4HashSize + hashValue] = p->pos; maxLen = 1; offset = 0; if (delta2 < p->cyclicBufferSize && *(cur - delta2) == *cur) { distances[0] = maxLen = 2; distances[1] = delta2 - 1; offset = 2; } if (delta2 != delta3 && delta3 < p->cyclicBufferSize && *(cur - delta3) == *cur) { maxLen = 3; distances[offset + 1] = delta3 - 1; offset += 2; delta2 = delta3; } if (offset != 0) { for (; maxLen != lenLimit; maxLen++) if (cur[(ptrdiff_t)maxLen - delta2] != cur[maxLen]) break; distances[offset - 2] = maxLen; if (maxLen == lenLimit) { p->son[p->cyclicBufferPos] = curMatch; MOVE_POS_RET; } } if (maxLen < 3) maxLen = 3; offset = (UInt32)(Hc_GetMatchesSpec(lenLimit, curMatch, MF_PARAMS(p), distances + offset, maxLen) - (distances)); MOVE_POS_RET } UInt32 Hc3Zip_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances) { UInt32 offset; GET_MATCHES_HEADER(3) HASH_ZIP_CALC; curMatch = p->hash[hashValue]; p->hash[hashValue] = p->pos; offset = (UInt32)(Hc_GetMatchesSpec(lenLimit, curMatch, MF_PARAMS(p), distances, 2) - (distances)); MOVE_POS_RET } static void Bt2_MatchFinder_Skip(CMatchFinder *p, UInt32 num) { do { SKIP_HEADER(2) HASH2_CALC; curMatch = p->hash[hashValue]; p->hash[hashValue] = p->pos; SKIP_FOOTER } while (--num != 0); } void Bt3Zip_MatchFinder_Skip(CMatchFinder *p, UInt32 num) { do { SKIP_HEADER(3) HASH_ZIP_CALC; curMatch = p->hash[hashValue]; p->hash[hashValue] = p->pos; SKIP_FOOTER } while (--num != 0); } static void Bt3_MatchFinder_Skip(CMatchFinder *p, UInt32 num) { do { UInt32 hash2Value; SKIP_HEADER(3) HASH3_CALC; curMatch = p->hash[kFix3HashSize + hashValue]; p->hash[hash2Value] = p->hash[kFix3HashSize + hashValue] = p->pos; SKIP_FOOTER } while (--num != 0); } static void Bt4_MatchFinder_Skip(CMatchFinder *p, UInt32 num) { do { UInt32 hash2Value, hash3Value; SKIP_HEADER(4) HASH4_CALC; curMatch = p->hash[kFix4HashSize + hashValue]; p->hash[ hash2Value] = p->hash[kFix3HashSize + hash3Value] = p->pos; p->hash[kFix4HashSize + hashValue] = p->pos; SKIP_FOOTER } while (--num != 0); } static void Hc4_MatchFinder_Skip(CMatchFinder *p, UInt32 num) { do { UInt32 hash2Value, hash3Value; SKIP_HEADER(4) 
HASH4_CALC; curMatch = p->hash[kFix4HashSize + hashValue]; p->hash[ hash2Value] = p->hash[kFix3HashSize + hash3Value] = p->hash[kFix4HashSize + hashValue] = p->pos; p->son[p->cyclicBufferPos] = curMatch; MOVE_POS } while (--num != 0); } void Hc3Zip_MatchFinder_Skip(CMatchFinder *p, UInt32 num) { do { SKIP_HEADER(3) HASH_ZIP_CALC; curMatch = p->hash[hashValue]; p->hash[hashValue] = p->pos; p->son[p->cyclicBufferPos] = curMatch; MOVE_POS } while (--num != 0); } void MatchFinder_CreateVTable(CMatchFinder *p, IMatchFinder *vTable) { vTable->Init = (Mf_Init_Func)MatchFinder_Init; vTable->GetIndexByte = (Mf_GetIndexByte_Func)MatchFinder_GetIndexByte; vTable->GetNumAvailableBytes = (Mf_GetNumAvailableBytes_Func)MatchFinder_GetNumAvailableBytes; vTable->GetPointerToCurrentPos = (Mf_GetPointerToCurrentPos_Func)MatchFinder_GetPointerToCurrentPos; if (!p->btMode) { vTable->GetMatches = (Mf_GetMatches_Func)Hc4_MatchFinder_GetMatches; vTable->Skip = (Mf_Skip_Func)Hc4_MatchFinder_Skip; } else if (p->numHashBytes == 2) { vTable->GetMatches = (Mf_GetMatches_Func)Bt2_MatchFinder_GetMatches; vTable->Skip = (Mf_Skip_Func)Bt2_MatchFinder_Skip; } else if (p->numHashBytes == 3) { vTable->GetMatches = (Mf_GetMatches_Func)Bt3_MatchFinder_GetMatches; vTable->Skip = (Mf_Skip_Func)Bt3_MatchFinder_Skip; } else { vTable->GetMatches = (Mf_GetMatches_Func)Bt4_MatchFinder_GetMatches; vTable->Skip = (Mf_Skip_Func)Bt4_MatchFinder_Skip; } } zmat-0.9.8/src/easylzma/pavlov/LzFind.h000077500000000000000000000062241366304620100177750ustar00rootroot00000000000000/* LzFind.h -- Match finder for LZ algorithms 2008-10-04 : Igor Pavlov : Public domain */ #ifndef __LZFIND_H #define __LZFIND_H #include "Types.h" typedef UInt32 CLzRef; typedef struct _CMatchFinder { Byte *buffer; UInt32 pos; UInt32 posLimit; UInt32 streamPos; UInt32 lenLimit; UInt32 cyclicBufferPos; UInt32 cyclicBufferSize; /* it must be = (historySize + 1) */ UInt32 matchMaxLen; CLzRef *hash; CLzRef *son; UInt32 hashMask; UInt32 cutValue; Byte *bufferBase; ISeqInStream *stream; int streamEndWasReached; UInt32 blockSize; UInt32 keepSizeBefore; UInt32 keepSizeAfter; UInt32 numHashBytes; int directInput; int btMode; /* int skipModeBits; */ int bigHash; UInt32 historySize; UInt32 fixedHashSize; UInt32 hashSizeSum; UInt32 numSons; SRes result; UInt32 crc[256]; } CMatchFinder; #define Inline_MatchFinder_GetPointerToCurrentPos(p) ((p)->buffer) #define Inline_MatchFinder_GetIndexByte(p, index) ((p)->buffer[(Int32)(index)]) #define Inline_MatchFinder_GetNumAvailableBytes(p) ((p)->streamPos - (p)->pos) int MatchFinder_NeedMove(CMatchFinder *p); Byte *MatchFinder_GetPointerToCurrentPos(CMatchFinder *p); void MatchFinder_MoveBlock(CMatchFinder *p); void MatchFinder_ReadIfRequired(CMatchFinder *p); void MatchFinder_Construct(CMatchFinder *p); /* Conditions: historySize <= 3 GB keepAddBufferBefore + matchMaxLen + keepAddBufferAfter < 511MB */ int MatchFinder_Create(CMatchFinder *p, UInt32 historySize, UInt32 keepAddBufferBefore, UInt32 matchMaxLen, UInt32 keepAddBufferAfter, ISzAlloc *alloc); void MatchFinder_Free(CMatchFinder *p, ISzAlloc *alloc); void MatchFinder_Normalize3(UInt32 subValue, CLzRef *items, UInt32 numItems); void MatchFinder_ReduceOffsets(CMatchFinder *p, UInt32 subValue); UInt32 * GetMatchesSpec1(UInt32 lenLimit, UInt32 curMatch, UInt32 pos, const Byte *buffer, CLzRef *son, UInt32 _cyclicBufferPos, UInt32 _cyclicBufferSize, UInt32 _cutValue, UInt32 *distances, UInt32 maxLen); /* Conditions: Mf_GetNumAvailableBytes_Func must be called before each 
Mf_GetMatchLen_Func. Mf_GetPointerToCurrentPos_Func's result must be used only before any other function */ typedef void (*Mf_Init_Func)(void *object); typedef Byte (*Mf_GetIndexByte_Func)(void *object, Int32 index); typedef UInt32 (*Mf_GetNumAvailableBytes_Func)(void *object); typedef const Byte * (*Mf_GetPointerToCurrentPos_Func)(void *object); typedef UInt32 (*Mf_GetMatches_Func)(void *object, UInt32 *distances); typedef void (*Mf_Skip_Func)(void *object, UInt32); typedef struct _IMatchFinder { Mf_Init_Func Init; Mf_GetIndexByte_Func GetIndexByte; Mf_GetNumAvailableBytes_Func GetNumAvailableBytes; Mf_GetPointerToCurrentPos_Func GetPointerToCurrentPos; Mf_GetMatches_Func GetMatches; Mf_Skip_Func Skip; } IMatchFinder; void MatchFinder_CreateVTable(CMatchFinder *p, IMatchFinder *vTable); void MatchFinder_Init(CMatchFinder *p); UInt32 Bt3Zip_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances); UInt32 Hc3Zip_MatchFinder_GetMatches(CMatchFinder *p, UInt32 *distances); void Bt3Zip_MatchFinder_Skip(CMatchFinder *p, UInt32 num); void Hc3Zip_MatchFinder_Skip(CMatchFinder *p, UInt32 num); #endif zmat-0.9.8/src/easylzma/pavlov/LzHash.h000077500000000000000000000036521366304620100200020ustar00rootroot00000000000000/* LzHash.h -- HASH functions for LZ algorithms 2008-10-04 : Igor Pavlov : Public domain */ #ifndef __LZHASH_H #define __LZHASH_H #define kHash2Size (1 << 10) #define kHash3Size (1 << 16) #define kHash4Size (1 << 20) #define kFix3HashSize (kHash2Size) #define kFix4HashSize (kHash2Size + kHash3Size) #define kFix5HashSize (kHash2Size + kHash3Size + kHash4Size) #define HASH2_CALC hashValue = cur[0] | ((UInt32)cur[1] << 8); #define HASH3_CALC { \ UInt32 temp = p->crc[cur[0]] ^ cur[1]; \ hash2Value = temp & (kHash2Size - 1); \ hashValue = (temp ^ ((UInt32)cur[2] << 8)) & p->hashMask; } #define HASH4_CALC { \ UInt32 temp = p->crc[cur[0]] ^ cur[1]; \ hash2Value = temp & (kHash2Size - 1); \ hash3Value = (temp ^ ((UInt32)cur[2] << 8)) & (kHash3Size - 1); \ hashValue = (temp ^ ((UInt32)cur[2] << 8) ^ (p->crc[cur[3]] << 5)) & p->hashMask; } #define HASH5_CALC { \ UInt32 temp = p->crc[cur[0]] ^ cur[1]; \ hash2Value = temp & (kHash2Size - 1); \ hash3Value = (temp ^ ((UInt32)cur[2] << 8)) & (kHash3Size - 1); \ hash4Value = (temp ^ ((UInt32)cur[2] << 8) ^ (p->crc[cur[3]] << 5)); \ hashValue = (hash4Value ^ (p->crc[cur[4]] << 3)) & p->hashMask; \ hash4Value &= (kHash4Size - 1); } /* #define HASH_ZIP_CALC hashValue = ((cur[0] | ((UInt32)cur[1] << 8)) ^ p->crc[cur[2]]) & 0xFFFF; */ #define HASH_ZIP_CALC hashValue = ((cur[2] | ((UInt32)cur[0] << 8)) ^ p->crc[cur[1]]) & 0xFFFF; #define MT_HASH2_CALC \ hash2Value = (p->crc[cur[0]] ^ cur[1]) & (kHash2Size - 1); #define MT_HASH3_CALC { \ UInt32 temp = p->crc[cur[0]] ^ cur[1]; \ hash2Value = temp & (kHash2Size - 1); \ hash3Value = (temp ^ ((UInt32)cur[2] << 8)) & (kHash3Size - 1); } #define MT_HASH4_CALC { \ UInt32 temp = p->crc[cur[0]] ^ cur[1]; \ hash2Value = temp & (kHash2Size - 1); \ hash3Value = (temp ^ ((UInt32)cur[2] << 8)) & (kHash3Size - 1); \ hash4Value = (temp ^ ((UInt32)cur[2] << 8) ^ (p->crc[cur[3]] << 5)) & (kHash4Size - 1); } #endif zmat-0.9.8/src/easylzma/pavlov/LzmaDec.c000077500000000000000000000654211366304620100201250ustar00rootroot00000000000000/* LzmaDec.c -- LZMA Decoder 2008-11-06 : Igor Pavlov : Public domain */ #include "LzmaDec.h" #include #define kNumTopBits 24 #define kTopValue ((UInt32)1 << kNumTopBits) #define kNumBitModelTotalBits 11 #define kBitModelTotal (1 << kNumBitModelTotalBits) #define kNumMoveBits 5 #define 
RC_INIT_SIZE 5 #define NORMALIZE if (range < kTopValue) { range <<= 8; code = (code << 8) | (*buf++); } #define IF_BIT_0(p) ttt = *(p); NORMALIZE; bound = (range >> kNumBitModelTotalBits) * ttt; if (code < bound) #define UPDATE_0(p) range = bound; *(p) = (CLzmaProb)(ttt + ((kBitModelTotal - ttt) >> kNumMoveBits)); #define UPDATE_1(p) range -= bound; code -= bound; *(p) = (CLzmaProb)(ttt - (ttt >> kNumMoveBits)); #define GET_BIT2(p, i, A0, A1) IF_BIT_0(p) \ { UPDATE_0(p); i = (i + i); A0; } else \ { UPDATE_1(p); i = (i + i) + 1; A1; } #define GET_BIT(p, i) GET_BIT2(p, i, ; , ;) #define TREE_GET_BIT(probs, i) { GET_BIT((probs + i), i); } #define TREE_DECODE(probs, limit, i) \ { i = 1; do { TREE_GET_BIT(probs, i); } while (i < limit); i -= limit; } /* #define _LZMA_SIZE_OPT */ #ifdef _LZMA_SIZE_OPT #define TREE_6_DECODE(probs, i) TREE_DECODE(probs, (1 << 6), i) #else #define TREE_6_DECODE(probs, i) \ { i = 1; \ TREE_GET_BIT(probs, i); \ TREE_GET_BIT(probs, i); \ TREE_GET_BIT(probs, i); \ TREE_GET_BIT(probs, i); \ TREE_GET_BIT(probs, i); \ TREE_GET_BIT(probs, i); \ i -= 0x40; } #endif #define NORMALIZE_CHECK if (range < kTopValue) { if (buf >= bufLimit) return DUMMY_ERROR; range <<= 8; code = (code << 8) | (*buf++); } #define IF_BIT_0_CHECK(p) ttt = *(p); NORMALIZE_CHECK; bound = (range >> kNumBitModelTotalBits) * ttt; if (code < bound) #define UPDATE_0_CHECK range = bound; #define UPDATE_1_CHECK range -= bound; code -= bound; #define GET_BIT2_CHECK(p, i, A0, A1) IF_BIT_0_CHECK(p) \ { UPDATE_0_CHECK; i = (i + i); A0; } else \ { UPDATE_1_CHECK; i = (i + i) + 1; A1; } #define GET_BIT_CHECK(p, i) GET_BIT2_CHECK(p, i, ; , ;) #define TREE_DECODE_CHECK(probs, limit, i) \ { i = 1; do { GET_BIT_CHECK(probs + i, i) } while (i < limit); i -= limit; } #define kNumPosBitsMax 4 #define kNumPosStatesMax (1 << kNumPosBitsMax) #define kLenNumLowBits 3 #define kLenNumLowSymbols (1 << kLenNumLowBits) #define kLenNumMidBits 3 #define kLenNumMidSymbols (1 << kLenNumMidBits) #define kLenNumHighBits 8 #define kLenNumHighSymbols (1 << kLenNumHighBits) #define LenChoice 0 #define LenChoice2 (LenChoice + 1) #define LenLow (LenChoice2 + 1) #define LenMid (LenLow + (kNumPosStatesMax << kLenNumLowBits)) #define LenHigh (LenMid + (kNumPosStatesMax << kLenNumMidBits)) #define kNumLenProbs (LenHigh + kLenNumHighSymbols) #define kNumStates 12 #define kNumLitStates 7 #define kStartPosModelIndex 4 #define kEndPosModelIndex 14 #define kNumFullDistances (1 << (kEndPosModelIndex >> 1)) #define kNumPosSlotBits 6 #define kNumLenToPosStates 4 #define kNumAlignBits 4 #define kAlignTableSize (1 << kNumAlignBits) #define kMatchMinLen 2 #define kMatchSpecLenStart (kMatchMinLen + kLenNumLowSymbols + kLenNumMidSymbols + kLenNumHighSymbols) #define IsMatch 0 #define IsRep (IsMatch + (kNumStates << kNumPosBitsMax)) #define IsRepG0 (IsRep + kNumStates) #define IsRepG1 (IsRepG0 + kNumStates) #define IsRepG2 (IsRepG1 + kNumStates) #define IsRep0Long (IsRepG2 + kNumStates) #define PosSlot (IsRep0Long + (kNumStates << kNumPosBitsMax)) #define SpecPos (PosSlot + (kNumLenToPosStates << kNumPosSlotBits)) #define Align (SpecPos + kNumFullDistances - kEndPosModelIndex) #define LenCoder (Align + kAlignTableSize) #define RepLenCoder (LenCoder + kNumLenProbs) #define Literal (RepLenCoder + kNumLenProbs) #define LZMA_BASE_SIZE 1846 #define LZMA_LIT_SIZE 768 #define LzmaProps_GetNumProbs(p) ((UInt32)LZMA_BASE_SIZE + (LZMA_LIT_SIZE << ((p)->lc + (p)->lp))) #if Literal != LZMA_BASE_SIZE StopCompilingDueBUG #endif static const Byte 
kLiteralNextStates[kNumStates * 2] = { 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 4, 5, 7, 7, 7, 7, 7, 7, 7, 10, 10, 10, 10, 10 }; #define LZMA_DIC_MIN (1 << 12) /* First LZMA-symbol is always decoded. And it decodes new LZMA-symbols while (buf < bufLimit), but "buf" is without last normalization Out: Result: SZ_OK - OK SZ_ERROR_DATA - Error p->remainLen: < kMatchSpecLenStart : normal remain = kMatchSpecLenStart : finished = kMatchSpecLenStart + 1 : Flush marker = kMatchSpecLenStart + 2 : State Init Marker */ static int MY_FAST_CALL LzmaDec_DecodeReal(CLzmaDec *p, SizeT limit, const Byte *bufLimit) { CLzmaProb *probs = p->probs; unsigned state = p->state; UInt32 rep0 = p->reps[0], rep1 = p->reps[1], rep2 = p->reps[2], rep3 = p->reps[3]; unsigned pbMask = ((unsigned)1 << (p->prop.pb)) - 1; unsigned lpMask = ((unsigned)1 << (p->prop.lp)) - 1; unsigned lc = p->prop.lc; Byte *dic = p->dic; SizeT dicBufSize = p->dicBufSize; SizeT dicPos = p->dicPos; UInt32 processedPos = p->processedPos; UInt32 checkDicSize = p->checkDicSize; unsigned len = 0; const Byte *buf = p->buf; UInt32 range = p->range; UInt32 code = p->code; do { CLzmaProb *prob; UInt32 bound; unsigned ttt; unsigned posState = processedPos & pbMask; prob = probs + IsMatch + (state << kNumPosBitsMax) + posState; IF_BIT_0(prob) { unsigned symbol; UPDATE_0(prob); prob = probs + Literal; if (checkDicSize != 0 || processedPos != 0) prob += (LZMA_LIT_SIZE * (((processedPos & lpMask) << lc) + (dic[(dicPos == 0 ? dicBufSize : dicPos) - 1] >> (8 - lc)))); if (state < kNumLitStates) { symbol = 1; do { GET_BIT(prob + symbol, symbol) } while (symbol < 0x100); } else { unsigned matchByte = p->dic[(dicPos - rep0) + ((dicPos < rep0) ? dicBufSize : 0)]; unsigned offs = 0x100; symbol = 1; do { unsigned bit; CLzmaProb *probLit; matchByte <<= 1; bit = (matchByte & offs); probLit = prob + offs + bit + symbol; GET_BIT2(probLit, symbol, offs &= ~bit, offs &= bit) } while (symbol < 0x100); } dic[dicPos++] = (Byte)symbol; processedPos++; state = kLiteralNextStates[state]; /* if (state < 4) state = 0; else if (state < 10) state -= 3; else state -= 6; */ continue; } else { UPDATE_1(prob); prob = probs + IsRep + state; IF_BIT_0(prob) { UPDATE_0(prob); state += kNumStates; prob = probs + LenCoder; } else { UPDATE_1(prob); if (checkDicSize == 0 && processedPos == 0) return SZ_ERROR_DATA; prob = probs + IsRepG0 + state; IF_BIT_0(prob) { UPDATE_0(prob); prob = probs + IsRep0Long + (state << kNumPosBitsMax) + posState; IF_BIT_0(prob) { UPDATE_0(prob); dic[dicPos] = dic[(dicPos - rep0) + ((dicPos < rep0) ? dicBufSize : 0)]; dicPos++; processedPos++; state = state < kNumLitStates ? 9 : 11; continue; } UPDATE_1(prob); } else { UInt32 distance; UPDATE_1(prob); prob = probs + IsRepG1 + state; IF_BIT_0(prob) { UPDATE_0(prob); distance = rep1; } else { UPDATE_1(prob); prob = probs + IsRepG2 + state; IF_BIT_0(prob) { UPDATE_0(prob); distance = rep2; } else { UPDATE_1(prob); distance = rep3; rep3 = rep2; } rep2 = rep1; } rep1 = rep0; rep0 = distance; } state = state < kNumLitStates ? 
8 : 11; prob = probs + RepLenCoder; } { unsigned limit, offset; CLzmaProb *probLen = prob + LenChoice; IF_BIT_0(probLen) { UPDATE_0(probLen); probLen = prob + LenLow + (posState << kLenNumLowBits); offset = 0; limit = (1 << kLenNumLowBits); } else { UPDATE_1(probLen); probLen = prob + LenChoice2; IF_BIT_0(probLen) { UPDATE_0(probLen); probLen = prob + LenMid + (posState << kLenNumMidBits); offset = kLenNumLowSymbols; limit = (1 << kLenNumMidBits); } else { UPDATE_1(probLen); probLen = prob + LenHigh; offset = kLenNumLowSymbols + kLenNumMidSymbols; limit = (1 << kLenNumHighBits); } } TREE_DECODE(probLen, limit, len); len += offset; } if (state >= kNumStates) { UInt32 distance; prob = probs + PosSlot + ((len < kNumLenToPosStates ? len : kNumLenToPosStates - 1) << kNumPosSlotBits); TREE_6_DECODE(prob, distance); if (distance >= kStartPosModelIndex) { unsigned posSlot = (unsigned)distance; int numDirectBits = (int)(((distance >> 1) - 1)); distance = (2 | (distance & 1)); if (posSlot < kEndPosModelIndex) { distance <<= numDirectBits; prob = probs + SpecPos + distance - posSlot - 1; { UInt32 mask = 1; unsigned i = 1; do { GET_BIT2(prob + i, i, ; , distance |= mask); mask <<= 1; } while (--numDirectBits != 0); } } else { numDirectBits -= kNumAlignBits; do { NORMALIZE range >>= 1; { UInt32 t; code -= range; t = (0 - ((UInt32)code >> 31)); /* (UInt32)((Int32)code >> 31) */ distance = (distance << 1) + (t + 1); code += range & t; } /* distance <<= 1; if (code >= range) { code -= range; distance |= 1; } */ } while (--numDirectBits != 0); prob = probs + Align; distance <<= kNumAlignBits; { unsigned i = 1; GET_BIT2(prob + i, i, ; , distance |= 1); GET_BIT2(prob + i, i, ; , distance |= 2); GET_BIT2(prob + i, i, ; , distance |= 4); GET_BIT2(prob + i, i, ; , distance |= 8); } if (distance == (UInt32)0xFFFFFFFF) { len += kMatchSpecLenStart; state -= kNumStates; break; } } } rep3 = rep2; rep2 = rep1; rep1 = rep0; rep0 = distance + 1; if (checkDicSize == 0) { if (distance >= processedPos) return SZ_ERROR_DATA; } else if (distance >= checkDicSize) return SZ_ERROR_DATA; state = (state < kNumStates + kNumLitStates) ? kNumLitStates : kNumLitStates + 3; /* state = kLiteralNextStates[state]; */ } len += kMatchMinLen; if (limit == dicPos) return SZ_ERROR_DATA; { SizeT rem = limit - dicPos; unsigned curLen = ((rem < len) ? (unsigned)rem : len); SizeT pos = (dicPos - rep0) + ((dicPos < rep0) ? 
dicBufSize : 0); processedPos += curLen; len -= curLen; if (pos + curLen <= dicBufSize) { Byte *dest = dic + dicPos; ptrdiff_t src = (ptrdiff_t)pos - (ptrdiff_t)dicPos; const Byte *lim = dest + curLen; dicPos += curLen; do *(dest) = (Byte)*(dest + src); while (++dest != lim); } else { do { dic[dicPos++] = dic[pos]; if (++pos == dicBufSize) pos = 0; } while (--curLen != 0); } } } } while (dicPos < limit && buf < bufLimit); NORMALIZE; p->buf = buf; p->range = range; p->code = code; p->remainLen = len; p->dicPos = dicPos; p->processedPos = processedPos; p->reps[0] = rep0; p->reps[1] = rep1; p->reps[2] = rep2; p->reps[3] = rep3; p->state = state; return SZ_OK; } static void MY_FAST_CALL LzmaDec_WriteRem(CLzmaDec *p, SizeT limit) { if (p->remainLen != 0 && p->remainLen < kMatchSpecLenStart) { Byte *dic = p->dic; SizeT dicPos = p->dicPos; SizeT dicBufSize = p->dicBufSize; unsigned len = p->remainLen; UInt32 rep0 = p->reps[0]; if (limit - dicPos < len) len = (unsigned)(limit - dicPos); if (p->checkDicSize == 0 && p->prop.dicSize - p->processedPos <= len) p->checkDicSize = p->prop.dicSize; p->processedPos += len; p->remainLen -= len; while (len-- != 0) { dic[dicPos] = dic[(dicPos - rep0) + ((dicPos < rep0) ? dicBufSize : 0)]; dicPos++; } p->dicPos = dicPos; } } static int MY_FAST_CALL LzmaDec_DecodeReal2(CLzmaDec *p, SizeT limit, const Byte *bufLimit) { do { SizeT limit2 = limit; if (p->checkDicSize == 0) { UInt32 rem = p->prop.dicSize - p->processedPos; if (limit - p->dicPos > rem) limit2 = p->dicPos + rem; } RINOK(LzmaDec_DecodeReal(p, limit2, bufLimit)); if (p->processedPos >= p->prop.dicSize) p->checkDicSize = p->prop.dicSize; LzmaDec_WriteRem(p, limit); } while (p->dicPos < limit && p->buf < bufLimit && p->remainLen < kMatchSpecLenStart); if (p->remainLen > kMatchSpecLenStart) { p->remainLen = kMatchSpecLenStart; } return 0; } typedef enum { DUMMY_ERROR, /* unexpected end of input stream */ DUMMY_LIT, DUMMY_MATCH, DUMMY_REP } ELzmaDummy; static ELzmaDummy LzmaDec_TryDummy(const CLzmaDec *p, const Byte *buf, SizeT inSize) { UInt32 range = p->range; UInt32 code = p->code; const Byte *bufLimit = buf + inSize; CLzmaProb *probs = p->probs; unsigned state = p->state; ELzmaDummy res; { CLzmaProb *prob; UInt32 bound; unsigned ttt; unsigned posState = (p->processedPos) & ((1 << p->prop.pb) - 1); prob = probs + IsMatch + (state << kNumPosBitsMax) + posState; IF_BIT_0_CHECK(prob) { UPDATE_0_CHECK /* if (bufLimit - buf >= 7) return DUMMY_LIT; */ prob = probs + Literal; if (p->checkDicSize != 0 || p->processedPos != 0) prob += (LZMA_LIT_SIZE * ((((p->processedPos) & ((1 << (p->prop.lp)) - 1)) << p->prop.lc) + (p->dic[(p->dicPos == 0 ? p->dicBufSize : p->dicPos) - 1] >> (8 - p->prop.lc)))); if (state < kNumLitStates) { unsigned symbol = 1; do { GET_BIT_CHECK(prob + symbol, symbol) } while (symbol < 0x100); } else { unsigned matchByte = p->dic[p->dicPos - p->reps[0] + ((p->dicPos < p->reps[0]) ? 
p->dicBufSize : 0)]; unsigned offs = 0x100; unsigned symbol = 1; do { unsigned bit; CLzmaProb *probLit; matchByte <<= 1; bit = (matchByte & offs); probLit = prob + offs + bit + symbol; GET_BIT2_CHECK(probLit, symbol, offs &= ~bit, offs &= bit) } while (symbol < 0x100); } res = DUMMY_LIT; } else { unsigned len; UPDATE_1_CHECK; prob = probs + IsRep + state; IF_BIT_0_CHECK(prob) { UPDATE_0_CHECK; state = 0; prob = probs + LenCoder; res = DUMMY_MATCH; } else { UPDATE_1_CHECK; res = DUMMY_REP; prob = probs + IsRepG0 + state; IF_BIT_0_CHECK(prob) { UPDATE_0_CHECK; prob = probs + IsRep0Long + (state << kNumPosBitsMax) + posState; IF_BIT_0_CHECK(prob) { UPDATE_0_CHECK; NORMALIZE_CHECK; return DUMMY_REP; } else { UPDATE_1_CHECK; } } else { UPDATE_1_CHECK; prob = probs + IsRepG1 + state; IF_BIT_0_CHECK(prob) { UPDATE_0_CHECK; } else { UPDATE_1_CHECK; prob = probs + IsRepG2 + state; IF_BIT_0_CHECK(prob) { UPDATE_0_CHECK; } else { UPDATE_1_CHECK; } } } state = kNumStates; prob = probs + RepLenCoder; } { unsigned limit, offset; CLzmaProb *probLen = prob + LenChoice; IF_BIT_0_CHECK(probLen) { UPDATE_0_CHECK; probLen = prob + LenLow + (posState << kLenNumLowBits); offset = 0; limit = 1 << kLenNumLowBits; } else { UPDATE_1_CHECK; probLen = prob + LenChoice2; IF_BIT_0_CHECK(probLen) { UPDATE_0_CHECK; probLen = prob + LenMid + (posState << kLenNumMidBits); offset = kLenNumLowSymbols; limit = 1 << kLenNumMidBits; } else { UPDATE_1_CHECK; probLen = prob + LenHigh; offset = kLenNumLowSymbols + kLenNumMidSymbols; limit = 1 << kLenNumHighBits; } } TREE_DECODE_CHECK(probLen, limit, len); len += offset; } if (state < 4) { unsigned posSlot; prob = probs + PosSlot + ((len < kNumLenToPosStates ? len : kNumLenToPosStates - 1) << kNumPosSlotBits); TREE_DECODE_CHECK(prob, 1 << kNumPosSlotBits, posSlot); if (posSlot >= kStartPosModelIndex) { int numDirectBits = ((posSlot >> 1) - 1); /* if (bufLimit - buf >= 8) return DUMMY_MATCH; */ if (posSlot < kEndPosModelIndex) { prob = probs + SpecPos + ((2 | (posSlot & 1)) << numDirectBits) - posSlot - 1; } else { numDirectBits -= kNumAlignBits; do { NORMALIZE_CHECK range >>= 1; code -= range & (((code - range) >> 31) - 1); /* if (code >= range) code -= range; */ } while (--numDirectBits != 0); prob = probs + Align; numDirectBits = kNumAlignBits; } { unsigned i = 1; do { GET_BIT_CHECK(prob + i, i); } while (--numDirectBits != 0); } } } } } NORMALIZE_CHECK; return res; } static void LzmaDec_InitRc(CLzmaDec *p, const Byte *data) { p->code = ((UInt32)data[1] << 24) | ((UInt32)data[2] << 16) | ((UInt32)data[3] << 8) | ((UInt32)data[4]); p->range = 0xFFFFFFFF; p->needFlush = 0; } void LzmaDec_InitDicAndState(CLzmaDec *p, Bool initDic, Bool initState) { p->needFlush = 1; p->remainLen = 0; p->tempBufSize = 0; if (initDic) { p->processedPos = 0; p->checkDicSize = 0; p->needInitState = 1; } if (initState) p->needInitState = 1; } void LzmaDec_Init(CLzmaDec *p) { p->dicPos = 0; LzmaDec_InitDicAndState(p, True, True); } static void LzmaDec_InitStateReal(CLzmaDec *p) { UInt32 numProbs = Literal + ((UInt32)LZMA_LIT_SIZE << (p->prop.lc + p->prop.lp)); UInt32 i; CLzmaProb *probs = p->probs; for (i = 0; i < numProbs; i++) probs[i] = kBitModelTotal >> 1; p->reps[0] = p->reps[1] = p->reps[2] = p->reps[3] = 1; p->state = 0; p->needInitState = 0; } SRes LzmaDec_DecodeToDic(CLzmaDec *p, SizeT dicLimit, const Byte *src, SizeT *srcLen, ELzmaFinishMode finishMode, ELzmaStatus *status) { SizeT inSize = *srcLen; (*srcLen) = 0; LzmaDec_WriteRem(p, dicLimit); *status = LZMA_STATUS_NOT_SPECIFIED; while 
(p->remainLen != kMatchSpecLenStart) { int checkEndMarkNow; if (p->needFlush != 0) { for (; inSize > 0 && p->tempBufSize < RC_INIT_SIZE; (*srcLen)++, inSize--) p->tempBuf[p->tempBufSize++] = *src++; if (p->tempBufSize < RC_INIT_SIZE) { *status = LZMA_STATUS_NEEDS_MORE_INPUT; return SZ_OK; } if (p->tempBuf[0] != 0) return SZ_ERROR_DATA; LzmaDec_InitRc(p, p->tempBuf); p->tempBufSize = 0; } checkEndMarkNow = 0; if (p->dicPos >= dicLimit) { if (p->remainLen == 0 && p->code == 0) { *status = LZMA_STATUS_MAYBE_FINISHED_WITHOUT_MARK; return SZ_OK; } if (finishMode == LZMA_FINISH_ANY) { *status = LZMA_STATUS_NOT_FINISHED; return SZ_OK; } if (p->remainLen != 0) { *status = LZMA_STATUS_NOT_FINISHED; return SZ_ERROR_DATA; } checkEndMarkNow = 1; } if (p->needInitState) LzmaDec_InitStateReal(p); if (p->tempBufSize == 0) { SizeT processed; const Byte *bufLimit; if (inSize < LZMA_REQUIRED_INPUT_MAX || checkEndMarkNow) { int dummyRes = LzmaDec_TryDummy(p, src, inSize); if (dummyRes == DUMMY_ERROR) { memcpy(p->tempBuf, src, inSize); p->tempBufSize = (unsigned)inSize; (*srcLen) += inSize; *status = LZMA_STATUS_NEEDS_MORE_INPUT; return SZ_OK; } if (checkEndMarkNow && dummyRes != DUMMY_MATCH) { *status = LZMA_STATUS_NOT_FINISHED; return SZ_ERROR_DATA; } bufLimit = src; } else bufLimit = src + inSize - LZMA_REQUIRED_INPUT_MAX; p->buf = src; if (LzmaDec_DecodeReal2(p, dicLimit, bufLimit) != 0) return SZ_ERROR_DATA; processed = (SizeT)(p->buf - src); (*srcLen) += processed; src += processed; inSize -= processed; } else { unsigned rem = p->tempBufSize, lookAhead = 0; while (rem < LZMA_REQUIRED_INPUT_MAX && lookAhead < inSize) p->tempBuf[rem++] = src[lookAhead++]; p->tempBufSize = rem; if (rem < LZMA_REQUIRED_INPUT_MAX || checkEndMarkNow) { int dummyRes = LzmaDec_TryDummy(p, p->tempBuf, rem); if (dummyRes == DUMMY_ERROR) { (*srcLen) += lookAhead; *status = LZMA_STATUS_NEEDS_MORE_INPUT; return SZ_OK; } if (checkEndMarkNow && dummyRes != DUMMY_MATCH) { *status = LZMA_STATUS_NOT_FINISHED; return SZ_ERROR_DATA; } } p->buf = p->tempBuf; if (LzmaDec_DecodeReal2(p, dicLimit, p->buf) != 0) return SZ_ERROR_DATA; lookAhead -= (rem - (unsigned)(p->buf - p->tempBuf)); (*srcLen) += lookAhead; src += lookAhead; inSize -= lookAhead; p->tempBufSize = 0; } } if (p->code == 0) *status = LZMA_STATUS_FINISHED_WITH_MARK; return (p->code == 0) ? 
SZ_OK : SZ_ERROR_DATA; } SRes LzmaDec_DecodeToBuf(CLzmaDec *p, Byte *dest, SizeT *destLen, const Byte *src, SizeT *srcLen, ELzmaFinishMode finishMode, ELzmaStatus *status) { SizeT outSize = *destLen; SizeT inSize = *srcLen; *srcLen = *destLen = 0; for (;;) { SizeT inSizeCur = inSize, outSizeCur, dicPos; ELzmaFinishMode curFinishMode; SRes res; if (p->dicPos == p->dicBufSize) p->dicPos = 0; dicPos = p->dicPos; if (outSize > p->dicBufSize - dicPos) { outSizeCur = p->dicBufSize; curFinishMode = LZMA_FINISH_ANY; } else { outSizeCur = dicPos + outSize; curFinishMode = finishMode; } res = LzmaDec_DecodeToDic(p, outSizeCur, src, &inSizeCur, curFinishMode, status); src += inSizeCur; inSize -= inSizeCur; *srcLen += inSizeCur; outSizeCur = p->dicPos - dicPos; memcpy(dest, p->dic + dicPos, outSizeCur); dest += outSizeCur; outSize -= outSizeCur; *destLen += outSizeCur; if (res != 0) return res; if (outSizeCur == 0 || outSize == 0) return SZ_OK; } } void LzmaDec_FreeProbs(CLzmaDec *p, ISzAlloc *alloc) { alloc->Free(alloc, p->probs); p->probs = 0; } static void LzmaDec_FreeDict(CLzmaDec *p, ISzAlloc *alloc) { alloc->Free(alloc, p->dic); p->dic = 0; } void LzmaDec_Free(CLzmaDec *p, ISzAlloc *alloc) { LzmaDec_FreeProbs(p, alloc); LzmaDec_FreeDict(p, alloc); } SRes LzmaProps_Decode(CLzmaProps *p, const Byte *data, unsigned size) { UInt32 dicSize; Byte d; if (size < LZMA_PROPS_SIZE) return SZ_ERROR_UNSUPPORTED; else dicSize = data[1] | ((UInt32)data[2] << 8) | ((UInt32)data[3] << 16) | ((UInt32)data[4] << 24); if (dicSize < LZMA_DIC_MIN) dicSize = LZMA_DIC_MIN; p->dicSize = dicSize; d = data[0]; if (d >= (9 * 5 * 5)) return SZ_ERROR_UNSUPPORTED; p->lc = d % 9; d /= 9; p->pb = d / 5; p->lp = d % 5; return SZ_OK; } static SRes LzmaDec_AllocateProbs2(CLzmaDec *p, const CLzmaProps *propNew, ISzAlloc *alloc) { UInt32 numProbs = LzmaProps_GetNumProbs(propNew); if (p->probs == 0 || numProbs != p->numProbs) { LzmaDec_FreeProbs(p, alloc); p->probs = (CLzmaProb *)alloc->Alloc(alloc, numProbs * sizeof(CLzmaProb)); p->numProbs = numProbs; if (p->probs == 0) return SZ_ERROR_MEM; } return SZ_OK; } SRes LzmaDec_AllocateProbs(CLzmaDec *p, const Byte *props, unsigned propsSize, ISzAlloc *alloc) { CLzmaProps propNew; RINOK(LzmaProps_Decode(&propNew, props, propsSize)); RINOK(LzmaDec_AllocateProbs2(p, &propNew, alloc)); p->prop = propNew; return SZ_OK; } SRes LzmaDec_Allocate(CLzmaDec *p, const Byte *props, unsigned propsSize, ISzAlloc *alloc) { CLzmaProps propNew; SizeT dicBufSize; RINOK(LzmaProps_Decode(&propNew, props, propsSize)); RINOK(LzmaDec_AllocateProbs2(p, &propNew, alloc)); dicBufSize = propNew.dicSize; if (p->dic == 0 || dicBufSize != p->dicBufSize) { LzmaDec_FreeDict(p, alloc); p->dic = (Byte *)alloc->Alloc(alloc, dicBufSize); if (p->dic == 0) { LzmaDec_FreeProbs(p, alloc); return SZ_ERROR_MEM; } } p->dicBufSize = dicBufSize; p->prop = propNew; return SZ_OK; } SRes LzmaDecode(Byte *dest, SizeT *destLen, const Byte *src, SizeT *srcLen, const Byte *propData, unsigned propSize, ELzmaFinishMode finishMode, ELzmaStatus *status, ISzAlloc *alloc) { CLzmaDec p; SRes res; SizeT inSize = *srcLen; SizeT outSize = *destLen; *srcLen = *destLen = 0; if (inSize < RC_INIT_SIZE) return SZ_ERROR_INPUT_EOF; LzmaDec_Construct(&p); res = LzmaDec_AllocateProbs(&p, propData, propSize, alloc); if (res != 0) return res; p.dic = dest; p.dicBufSize = outSize; LzmaDec_Init(&p); *srcLen = inSize; res = LzmaDec_DecodeToDic(&p, outSize, src, srcLen, finishMode, status); if (res == SZ_OK && *status == LZMA_STATUS_NEEDS_MORE_INPUT) res = 
SZ_ERROR_INPUT_EOF; (*destLen) = p.dicPos; LzmaDec_FreeProbs(&p, alloc); return res; } zmat-0.9.8/src/easylzma/pavlov/LzmaDec.h000077500000000000000000000152121366304620100201230ustar00rootroot00000000000000/* LzmaDec.h -- LZMA Decoder 2008-10-04 : Igor Pavlov : Public domain */ #ifndef __LZMADEC_H #define __LZMADEC_H #include "Types.h" /* #define _LZMA_PROB32 */ /* _LZMA_PROB32 can increase the speed on some CPUs, but memory usage for CLzmaDec::probs will be doubled in that case */ #ifdef _LZMA_PROB32 #define CLzmaProb UInt32 #else #define CLzmaProb UInt16 #endif /* ---------- LZMA Properties ---------- */ #define LZMA_PROPS_SIZE 5 typedef struct _CLzmaProps { unsigned lc, lp, pb; UInt32 dicSize; } CLzmaProps; /* LzmaProps_Decode - decodes properties Returns: SZ_OK SZ_ERROR_UNSUPPORTED - Unsupported properties */ SRes LzmaProps_Decode(CLzmaProps *p, const Byte *data, unsigned size); /* ---------- LZMA Decoder state ---------- */ /* LZMA_REQUIRED_INPUT_MAX = number of required input bytes for worst case. Num bits = log2((2^11 / 31) ^ 22) + 26 < 134 + 26 = 160; */ #define LZMA_REQUIRED_INPUT_MAX 20 typedef struct { CLzmaProps prop; CLzmaProb *probs; Byte *dic; const Byte *buf; UInt32 range, code; SizeT dicPos; SizeT dicBufSize; UInt32 processedPos; UInt32 checkDicSize; unsigned state; UInt32 reps[4]; unsigned remainLen; int needFlush; int needInitState; UInt32 numProbs; unsigned tempBufSize; Byte tempBuf[LZMA_REQUIRED_INPUT_MAX]; } CLzmaDec; #define LzmaDec_Construct(p) { (p)->dic = 0; (p)->probs = 0; } void LzmaDec_Init(CLzmaDec *p); /* There are two types of LZMA streams: 0) Stream with end mark. That end mark adds about 6 bytes to compressed size. 1) Stream without end mark. You must know exact uncompressed size to decompress such stream. */ typedef enum { LZMA_FINISH_ANY, /* finish at any point */ LZMA_FINISH_END /* block must be finished at the end */ } ELzmaFinishMode; /* ELzmaFinishMode has meaning only if the decoding reaches output limit !!! You must use LZMA_FINISH_END, when you know that current output buffer covers last bytes of block. In other cases you must use LZMA_FINISH_ANY. If LZMA decoder sees end marker before reaching output limit, it returns SZ_OK, and output value of destLen will be less than output buffer size limit. You can check status result also. You can use multiple checks to test data integrity after full decompression: 1) Check Result and "status" variable. 2) Check that output(destLen) = uncompressedSize, if you know real uncompressedSize. 3) Check that output(srcLen) = compressedSize, if you know real compressedSize. You must use correct finish mode in that case. */ typedef enum { LZMA_STATUS_NOT_SPECIFIED, /* use main error code instead */ LZMA_STATUS_FINISHED_WITH_MARK, /* stream was finished with end mark. */ LZMA_STATUS_NOT_FINISHED, /* stream was not finished */ LZMA_STATUS_NEEDS_MORE_INPUT, /* you must provide more input bytes */ LZMA_STATUS_MAYBE_FINISHED_WITHOUT_MARK /* there is probability that stream was finished without end mark */ } ELzmaStatus; /* ELzmaStatus is used only as output value for function call */ /* ---------- Interfaces ---------- */ /* There are 3 levels of interfaces: 1) Dictionary Interface 2) Buffer Interface 3) One Call Interface You can select any of these interfaces, but don't mix functions from different groups for same object. 
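   As a rough illustration (not part of the original header; the buffer names
   inBuf/outBuf, their sizes, and the allocator below are placeholders chosen
   only for this sketch), the Buffer Interface described further below is
   typically driven like this:

     CLzmaDec dec;
     LzmaDec_Construct(&dec);
     if (LzmaDec_Allocate(&dec, props, LZMA_PROPS_SIZE, alloc) != SZ_OK)
       handle the error;
     LzmaDec_Init(&dec);
     for (;;)
     {
       SizeT inLen = bytes currently available in inBuf;
       SizeT outLen = sizeof(outBuf);
       ELzmaStatus status;
       SRes res = LzmaDec_DecodeToBuf(&dec, outBuf, &outLen, inBuf, &inLen,
           LZMA_FINISH_ANY, &status);
       consume outLen bytes from outBuf and drop inLen bytes from inBuf;
       if (res != SZ_OK || status == LZMA_STATUS_FINISHED_WITH_MARK)
         break;
       if (status == LZMA_STATUS_NEEDS_MORE_INPUT)
         refill inBuf from the input stream;
     }
     LzmaDec_Free(&dec, alloc);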
*/ /* There are two variants to allocate state for Dictionary Interface: 1) LzmaDec_Allocate / LzmaDec_Free 2) LzmaDec_AllocateProbs / LzmaDec_FreeProbs You can use variant 2, if you set dictionary buffer manually. For Buffer Interface you must always use variant 1. LzmaDec_Allocate* can return: SZ_OK SZ_ERROR_MEM - Memory allocation error SZ_ERROR_UNSUPPORTED - Unsupported properties */ SRes LzmaDec_AllocateProbs(CLzmaDec *p, const Byte *props, unsigned propsSize, ISzAlloc *alloc); void LzmaDec_FreeProbs(CLzmaDec *p, ISzAlloc *alloc); SRes LzmaDec_Allocate(CLzmaDec *state, const Byte *prop, unsigned propsSize, ISzAlloc *alloc); void LzmaDec_Free(CLzmaDec *state, ISzAlloc *alloc); /* ---------- Dictionary Interface ---------- */ /* You can use it, if you want to eliminate the overhead for data copying from dictionary to some other external buffer. You must work with CLzmaDec variables directly in this interface. STEPS: LzmaDec_Constr() LzmaDec_Allocate() for (each new stream) { LzmaDec_Init() while (it needs more decompression) { LzmaDec_DecodeToDic() use data from CLzmaDec::dic and update CLzmaDec::dicPos } } LzmaDec_Free() */ /* LzmaDec_DecodeToDic The decoding to internal dictionary buffer (CLzmaDec::dic). You must manually update CLzmaDec::dicPos, if it reaches CLzmaDec::dicBufSize !!! finishMode: It has meaning only if the decoding reaches output limit (dicLimit). LZMA_FINISH_ANY - Decode just dicLimit bytes. LZMA_FINISH_END - Stream must be finished after dicLimit. Returns: SZ_OK status: LZMA_STATUS_FINISHED_WITH_MARK LZMA_STATUS_NOT_FINISHED LZMA_STATUS_NEEDS_MORE_INPUT LZMA_STATUS_MAYBE_FINISHED_WITHOUT_MARK SZ_ERROR_DATA - Data error */ SRes LzmaDec_DecodeToDic(CLzmaDec *p, SizeT dicLimit, const Byte *src, SizeT *srcLen, ELzmaFinishMode finishMode, ELzmaStatus *status); /* ---------- Buffer Interface ---------- */ /* It's zlib-like interface. See LzmaDec_DecodeToDic description for information about STEPS and return results, but you must use LzmaDec_DecodeToBuf instead of LzmaDec_DecodeToDic and you don't need to work with CLzmaDec variables manually. finishMode: It has meaning only if the decoding reaches output limit (*destLen). LZMA_FINISH_ANY - Decode just destLen bytes. LZMA_FINISH_END - Stream must be finished after (*destLen). */ SRes LzmaDec_DecodeToBuf(CLzmaDec *p, Byte *dest, SizeT *destLen, const Byte *src, SizeT *srcLen, ELzmaFinishMode finishMode, ELzmaStatus *status); /* ---------- One Call Interface ---------- */ /* LzmaDecode finishMode: It has meaning only if the decoding reaches output limit (*destLen). LZMA_FINISH_ANY - Decode just destLen bytes. LZMA_FINISH_END - Stream must be finished after (*destLen). Returns: SZ_OK status: LZMA_STATUS_FINISHED_WITH_MARK LZMA_STATUS_NOT_FINISHED LZMA_STATUS_MAYBE_FINISHED_WITHOUT_MARK SZ_ERROR_DATA - Data error SZ_ERROR_MEM - Memory allocation error SZ_ERROR_UNSUPPORTED - Unsupported properties SZ_ERROR_INPUT_EOF - It needs more bytes in input buffer (src). 
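   A minimal sketch of the One Call Interface (illustrative only, not part of
   the original header): dest, destSize, src, srcSize and the allocator are
   assumed to be supplied by the caller, destSize is assumed to be the known
   uncompressed size, and the first LZMA_PROPS_SIZE bytes of src are assumed
   to hold the raw LZMA property block followed by the compressed payload:

     SizeT destLen = destSize;
     SizeT srcLen = srcSize - LZMA_PROPS_SIZE;
     ELzmaStatus status;
     SRes res = LzmaDecode(dest, &destLen,
                           src + LZMA_PROPS_SIZE, &srcLen,
                           src, LZMA_PROPS_SIZE,
                           LZMA_FINISH_END, &status, alloc);

   Decompression succeeded when res == SZ_OK and status is either
   LZMA_STATUS_FINISHED_WITH_MARK or LZMA_STATUS_MAYBE_FINISHED_WITHOUT_MARK;
   destLen then holds the number of bytes actually written to dest.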
*/ SRes LzmaDecode(Byte *dest, SizeT *destLen, const Byte *src, SizeT *srcLen, const Byte *propData, unsigned propSize, ELzmaFinishMode finishMode, ELzmaStatus *status, ISzAlloc *alloc); #endif zmat-0.9.8/src/easylzma/pavlov/LzmaEnc.c000077500000000000000000001725521366304620100201430ustar00rootroot00000000000000/* LzmaEnc.c -- LZMA Encoder 2008-10-04 : Igor Pavlov : Public domain */ #include /* #define SHOW_STAT */ /* #define SHOW_STAT2 */ #if defined(SHOW_STAT) || defined(SHOW_STAT2) #include #endif #include "LzmaEnc.h" #include "LzFind.h" #ifdef COMPRESS_MF_MT #include "LzFindMt.h" #endif #ifdef SHOW_STAT static int ttt = 0; #endif #define kBlockSizeMax ((1 << LZMA_NUM_BLOCK_SIZE_BITS) - 1) #define kBlockSize (9 << 10) #define kUnpackBlockSize (1 << 18) #define kMatchArraySize (1 << 21) #define kMatchRecordMaxSize ((LZMA_MATCH_LEN_MAX * 2 + 3) * LZMA_MATCH_LEN_MAX) #define kNumMaxDirectBits (31) #define kNumTopBits 24 #define kTopValue ((UInt32)1 << kNumTopBits) #define kNumBitModelTotalBits 11 #define kBitModelTotal (1 << kNumBitModelTotalBits) #define kNumMoveBits 5 #define kProbInitValue (kBitModelTotal >> 1) #define kNumMoveReducingBits 4 #define kNumBitPriceShiftBits 4 #define kBitPrice (1 << kNumBitPriceShiftBits) void LzmaEncProps_Init(CLzmaEncProps *p) { p->level = 5; p->dictSize = p->mc = 0; p->lc = p->lp = p->pb = p->algo = p->fb = p->btMode = p->numHashBytes = p->numThreads = -1; p->writeEndMark = 0; } void LzmaEncProps_Normalize(CLzmaEncProps *p) { int level = p->level; if (level < 0) level = 5; p->level = level; if (p->dictSize == 0) p->dictSize = (level <= 5 ? (1 << (level * 2 + 14)) : (level == 6 ? (1 << 25) : (1 << 26))); if (p->lc < 0) p->lc = 3; if (p->lp < 0) p->lp = 0; if (p->pb < 0) p->pb = 2; if (p->algo < 0) p->algo = (level < 5 ? 0 : 1); if (p->fb < 0) p->fb = (level < 7 ? 32 : 64); if (p->btMode < 0) p->btMode = (p->algo == 0 ? 0 : 1); if (p->numHashBytes < 0) p->numHashBytes = 4; if (p->mc == 0) p->mc = (16 + (p->fb >> 1)) >> (p->btMode ? 0 : 1); if (p->numThreads < 0) p->numThreads = ((p->btMode && p->algo) ? 2 : 1); } UInt32 LzmaEncProps_GetDictSize(const CLzmaEncProps *props2) { CLzmaEncProps props = *props2; LzmaEncProps_Normalize(&props); return props.dictSize; } /* #define LZMA_LOG_BSR */ /* Define it for Intel's CPU */ #ifdef LZMA_LOG_BSR #define kDicLogSizeMaxCompress 30 #define BSR2_RET(pos, res) { unsigned long i; _BitScanReverse(&i, (pos)); res = (i + i) + ((pos >> (i - 1)) & 1); } UInt32 GetPosSlot1(UInt32 pos) { UInt32 res; BSR2_RET(pos, res); return res; } #define GetPosSlot2(pos, res) { BSR2_RET(pos, res); } #define GetPosSlot(pos, res) { if (pos < 2) res = pos; else BSR2_RET(pos, res); } #else #define kNumLogBits (9 + (int)sizeof(size_t) / 2) #define kDicLogSizeMaxCompress ((kNumLogBits - 1) * 2 + 7) void LzmaEnc_FastPosInit(Byte *g_FastPos) { int c = 2, slotFast; g_FastPos[0] = 0; g_FastPos[1] = 1; for (slotFast = 2; slotFast < kNumLogBits * 2; slotFast++) { UInt32 k = (1 << ((slotFast >> 1) - 1)); UInt32 j; for (j = 0; j < k; j++, c++) g_FastPos[c] = (Byte)slotFast; } } #define BSR2_RET(pos, res) { UInt32 i = 6 + ((kNumLogBits - 1) & \ (0 - (((((UInt32)1 << (kNumLogBits + 6)) - 1) - pos) >> 31))); \ res = p->g_FastPos[pos >> i] + (i * 2); } /* #define BSR2_RET(pos, res) { res = (pos < (1 << (kNumLogBits + 6))) ? 
\ p->g_FastPos[pos >> 6] + 12 : \ p->g_FastPos[pos >> (6 + kNumLogBits - 1)] + (6 + (kNumLogBits - 1)) * 2; } */ #define GetPosSlot1(pos) p->g_FastPos[pos] #define GetPosSlot2(pos, res) { BSR2_RET(pos, res); } #define GetPosSlot(pos, res) { if (pos < kNumFullDistances) res = p->g_FastPos[pos]; else BSR2_RET(pos, res); } #endif #define LZMA_NUM_REPS 4 typedef unsigned CState; typedef struct _COptimal { UInt32 price; CState state; int prev1IsChar; int prev2; UInt32 posPrev2; UInt32 backPrev2; UInt32 posPrev; UInt32 backPrev; UInt32 backs[LZMA_NUM_REPS]; } COptimal; #define kNumOpts (1 << 12) #define kNumLenToPosStates 4 #define kNumPosSlotBits 6 #define kDicLogSizeMin 0 #define kDicLogSizeMax 32 #define kDistTableSizeMax (kDicLogSizeMax * 2) #define kNumAlignBits 4 #define kAlignTableSize (1 << kNumAlignBits) #define kAlignMask (kAlignTableSize - 1) #define kStartPosModelIndex 4 #define kEndPosModelIndex 14 #define kNumPosModels (kEndPosModelIndex - kStartPosModelIndex) #define kNumFullDistances (1 << (kEndPosModelIndex / 2)) #ifdef _LZMA_PROB32 #define CLzmaProb UInt32 #else #define CLzmaProb UInt16 #endif #define LZMA_PB_MAX 4 #define LZMA_LC_MAX 8 #define LZMA_LP_MAX 4 #define LZMA_NUM_PB_STATES_MAX (1 << LZMA_PB_MAX) #define kLenNumLowBits 3 #define kLenNumLowSymbols (1 << kLenNumLowBits) #define kLenNumMidBits 3 #define kLenNumMidSymbols (1 << kLenNumMidBits) #define kLenNumHighBits 8 #define kLenNumHighSymbols (1 << kLenNumHighBits) #define kLenNumSymbolsTotal (kLenNumLowSymbols + kLenNumMidSymbols + kLenNumHighSymbols) #define LZMA_MATCH_LEN_MIN 2 #define LZMA_MATCH_LEN_MAX (LZMA_MATCH_LEN_MIN + kLenNumSymbolsTotal - 1) #define kNumStates 12 typedef struct { CLzmaProb choice; CLzmaProb choice2; CLzmaProb low[LZMA_NUM_PB_STATES_MAX << kLenNumLowBits]; CLzmaProb mid[LZMA_NUM_PB_STATES_MAX << kLenNumMidBits]; CLzmaProb high[kLenNumHighSymbols]; } CLenEnc; typedef struct { CLenEnc p; UInt32 prices[LZMA_NUM_PB_STATES_MAX][kLenNumSymbolsTotal]; UInt32 tableSize; UInt32 counters[LZMA_NUM_PB_STATES_MAX]; } CLenPriceEnc; typedef struct _CRangeEnc { UInt32 range; Byte cache; UInt64 low; UInt64 cacheSize; Byte *buf; Byte *bufLim; Byte *bufBase; ISeqOutStream *outStream; UInt64 processed; SRes res; } CRangeEnc; typedef struct _CSeqInStreamBuf { ISeqInStream funcTable; const Byte *data; SizeT rem; } CSeqInStreamBuf; static SRes MyRead(void *pp, void *data, size_t *size) { size_t curSize = *size; CSeqInStreamBuf *p = (CSeqInStreamBuf *)pp; if (p->rem < curSize) curSize = p->rem; memcpy(data, p->data, curSize); p->rem -= curSize; p->data += curSize; *size = curSize; return SZ_OK; } typedef struct { CLzmaProb *litProbs; CLzmaProb isMatch[kNumStates][LZMA_NUM_PB_STATES_MAX]; CLzmaProb isRep[kNumStates]; CLzmaProb isRepG0[kNumStates]; CLzmaProb isRepG1[kNumStates]; CLzmaProb isRepG2[kNumStates]; CLzmaProb isRep0Long[kNumStates][LZMA_NUM_PB_STATES_MAX]; CLzmaProb posSlotEncoder[kNumLenToPosStates][1 << kNumPosSlotBits]; CLzmaProb posEncoders[kNumFullDistances - kEndPosModelIndex]; CLzmaProb posAlignEncoder[1 << kNumAlignBits]; CLenPriceEnc lenEnc; CLenPriceEnc repLenEnc; UInt32 reps[LZMA_NUM_REPS]; UInt32 state; } CSaveState; typedef struct _CLzmaEnc { IMatchFinder matchFinder; void *matchFinderObj; #ifdef COMPRESS_MF_MT Bool mtMode; CMatchFinderMt matchFinderMt; #endif CMatchFinder matchFinderBase; #ifdef COMPRESS_MF_MT Byte pad[128]; #endif UInt32 optimumEndIndex; UInt32 optimumCurrentIndex; UInt32 longestMatchLength; UInt32 numPairs; UInt32 numAvail; COptimal opt[kNumOpts]; #ifndef LZMA_LOG_BSR Byte 
g_FastPos[1 << kNumLogBits]; #endif UInt32 ProbPrices[kBitModelTotal >> kNumMoveReducingBits]; UInt32 matches[LZMA_MATCH_LEN_MAX * 2 + 2 + 1]; UInt32 numFastBytes; UInt32 additionalOffset; UInt32 reps[LZMA_NUM_REPS]; UInt32 state; UInt32 posSlotPrices[kNumLenToPosStates][kDistTableSizeMax]; UInt32 distancesPrices[kNumLenToPosStates][kNumFullDistances]; UInt32 alignPrices[kAlignTableSize]; UInt32 alignPriceCount; UInt32 distTableSize; unsigned lc, lp, pb; unsigned lpMask, pbMask; CLzmaProb *litProbs; CLzmaProb isMatch[kNumStates][LZMA_NUM_PB_STATES_MAX]; CLzmaProb isRep[kNumStates]; CLzmaProb isRepG0[kNumStates]; CLzmaProb isRepG1[kNumStates]; CLzmaProb isRepG2[kNumStates]; CLzmaProb isRep0Long[kNumStates][LZMA_NUM_PB_STATES_MAX]; CLzmaProb posSlotEncoder[kNumLenToPosStates][1 << kNumPosSlotBits]; CLzmaProb posEncoders[kNumFullDistances - kEndPosModelIndex]; CLzmaProb posAlignEncoder[1 << kNumAlignBits]; CLenPriceEnc lenEnc; CLenPriceEnc repLenEnc; unsigned lclp; Bool fastMode; CRangeEnc rc; Bool writeEndMark; UInt64 nowPos64; UInt32 matchPriceCount; Bool finished; Bool multiThread; SRes result; UInt32 dictSize; UInt32 matchFinderCycles; ISeqInStream *inStream; CSeqInStreamBuf seqBufInStream; CSaveState saveState; } CLzmaEnc; void LzmaEnc_SaveState(CLzmaEncHandle pp) { CLzmaEnc *p = (CLzmaEnc *)pp; CSaveState *dest = &p->saveState; int i; dest->lenEnc = p->lenEnc; dest->repLenEnc = p->repLenEnc; dest->state = p->state; for (i = 0; i < kNumStates; i++) { memcpy(dest->isMatch[i], p->isMatch[i], sizeof(p->isMatch[i])); memcpy(dest->isRep0Long[i], p->isRep0Long[i], sizeof(p->isRep0Long[i])); } for (i = 0; i < kNumLenToPosStates; i++) memcpy(dest->posSlotEncoder[i], p->posSlotEncoder[i], sizeof(p->posSlotEncoder[i])); memcpy(dest->isRep, p->isRep, sizeof(p->isRep)); memcpy(dest->isRepG0, p->isRepG0, sizeof(p->isRepG0)); memcpy(dest->isRepG1, p->isRepG1, sizeof(p->isRepG1)); memcpy(dest->isRepG2, p->isRepG2, sizeof(p->isRepG2)); memcpy(dest->posEncoders, p->posEncoders, sizeof(p->posEncoders)); memcpy(dest->posAlignEncoder, p->posAlignEncoder, sizeof(p->posAlignEncoder)); memcpy(dest->reps, p->reps, sizeof(p->reps)); memcpy(dest->litProbs, p->litProbs, (0x300 << p->lclp) * sizeof(CLzmaProb)); } void LzmaEnc_RestoreState(CLzmaEncHandle pp) { CLzmaEnc *dest = (CLzmaEnc *)pp; const CSaveState *p = &dest->saveState; int i; dest->lenEnc = p->lenEnc; dest->repLenEnc = p->repLenEnc; dest->state = p->state; for (i = 0; i < kNumStates; i++) { memcpy(dest->isMatch[i], p->isMatch[i], sizeof(p->isMatch[i])); memcpy(dest->isRep0Long[i], p->isRep0Long[i], sizeof(p->isRep0Long[i])); } for (i = 0; i < kNumLenToPosStates; i++) memcpy(dest->posSlotEncoder[i], p->posSlotEncoder[i], sizeof(p->posSlotEncoder[i])); memcpy(dest->isRep, p->isRep, sizeof(p->isRep)); memcpy(dest->isRepG0, p->isRepG0, sizeof(p->isRepG0)); memcpy(dest->isRepG1, p->isRepG1, sizeof(p->isRepG1)); memcpy(dest->isRepG2, p->isRepG2, sizeof(p->isRepG2)); memcpy(dest->posEncoders, p->posEncoders, sizeof(p->posEncoders)); memcpy(dest->posAlignEncoder, p->posAlignEncoder, sizeof(p->posAlignEncoder)); memcpy(dest->reps, p->reps, sizeof(p->reps)); memcpy(dest->litProbs, p->litProbs, (0x300 << dest->lclp) * sizeof(CLzmaProb)); } SRes LzmaEnc_SetProps(CLzmaEncHandle pp, const CLzmaEncProps *props2) { CLzmaEnc *p = (CLzmaEnc *)pp; CLzmaEncProps props = *props2; LzmaEncProps_Normalize(&props); if (props.lc > LZMA_LC_MAX || props.lp > LZMA_LP_MAX || props.pb > LZMA_PB_MAX || props.dictSize > (1 << kDicLogSizeMaxCompress) || props.dictSize > (1 << 30)) 
return SZ_ERROR_PARAM; p->dictSize = props.dictSize; p->matchFinderCycles = props.mc; { unsigned fb = props.fb; if (fb < 5) fb = 5; if (fb > LZMA_MATCH_LEN_MAX) fb = LZMA_MATCH_LEN_MAX; p->numFastBytes = fb; } p->lc = props.lc; p->lp = props.lp; p->pb = props.pb; p->fastMode = (props.algo == 0); p->matchFinderBase.btMode = props.btMode; { UInt32 numHashBytes = 4; if (props.btMode) { if (props.numHashBytes < 2) numHashBytes = 2; else if (props.numHashBytes < 4) numHashBytes = props.numHashBytes; } p->matchFinderBase.numHashBytes = numHashBytes; } p->matchFinderBase.cutValue = props.mc; p->writeEndMark = props.writeEndMark; #ifdef COMPRESS_MF_MT /* if (newMultiThread != _multiThread) { ReleaseMatchFinder(); _multiThread = newMultiThread; } */ p->multiThread = (props.numThreads > 1); #endif return SZ_OK; } static const int kLiteralNextStates[kNumStates] = {0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 4, 5}; static const int kMatchNextStates[kNumStates] = {7, 7, 7, 7, 7, 7, 7, 10, 10, 10, 10, 10}; static const int kRepNextStates[kNumStates] = {8, 8, 8, 8, 8, 8, 8, 11, 11, 11, 11, 11}; static const int kShortRepNextStates[kNumStates]= {9, 9, 9, 9, 9, 9, 9, 11, 11, 11, 11, 11}; #define IsCharState(s) ((s) < 7) #define GetLenToPosState(len) (((len) < kNumLenToPosStates + 1) ? (len) - 2 : kNumLenToPosStates - 1) #define kInfinityPrice (1 << 30) static void RangeEnc_Construct(CRangeEnc *p) { p->outStream = 0; p->bufBase = 0; } #define RangeEnc_GetProcessed(p) ((p)->processed + ((p)->buf - (p)->bufBase) + (p)->cacheSize) #define RC_BUF_SIZE (1 << 16) static int RangeEnc_Alloc(CRangeEnc *p, ISzAlloc *alloc) { if (p->bufBase == 0) { p->bufBase = (Byte *)alloc->Alloc(alloc, RC_BUF_SIZE); if (p->bufBase == 0) return 0; p->bufLim = p->bufBase + RC_BUF_SIZE; } return 1; } static void RangeEnc_Free(CRangeEnc *p, ISzAlloc *alloc) { alloc->Free(alloc, p->bufBase); p->bufBase = 0; } static void RangeEnc_Init(CRangeEnc *p) { /* Stream.Init(); */ p->low = 0; p->range = 0xFFFFFFFF; p->cacheSize = 1; p->cache = 0; p->buf = p->bufBase; p->processed = 0; p->res = SZ_OK; } static void RangeEnc_FlushStream(CRangeEnc *p) { size_t num; if (p->res != SZ_OK) return; num = p->buf - p->bufBase; if (num != p->outStream->Write(p->outStream, p->bufBase, num)) p->res = SZ_ERROR_WRITE; p->processed += num; p->buf = p->bufBase; } static void MY_FAST_CALL RangeEnc_ShiftLow(CRangeEnc *p) { if ((UInt32)p->low < (UInt32)0xFF000000 || (int)(p->low >> 32) != 0) { Byte temp = p->cache; do { Byte *buf = p->buf; *buf++ = (Byte)(temp + (Byte)(p->low >> 32)); p->buf = buf; if (buf == p->bufLim) RangeEnc_FlushStream(p); temp = 0xFF; } while (--p->cacheSize != 0); p->cache = (Byte)((UInt32)p->low >> 24); } p->cacheSize++; p->low = (UInt32)p->low << 8; } static void RangeEnc_FlushData(CRangeEnc *p) { int i; for (i = 0; i < 5; i++) RangeEnc_ShiftLow(p); } static void RangeEnc_EncodeDirectBits(CRangeEnc *p, UInt32 value, int numBits) { do { p->range >>= 1; p->low += p->range & (0 - ((value >> --numBits) & 1)); if (p->range < kTopValue) { p->range <<= 8; RangeEnc_ShiftLow(p); } } while (numBits != 0); } static void RangeEnc_EncodeBit(CRangeEnc *p, CLzmaProb *prob, UInt32 symbol) { UInt32 ttt = *prob; UInt32 newBound = (p->range >> kNumBitModelTotalBits) * ttt; if (symbol == 0) { p->range = newBound; ttt += (kBitModelTotal - ttt) >> kNumMoveBits; } else { p->low += newBound; p->range -= newBound; ttt -= ttt >> kNumMoveBits; } *prob = (CLzmaProb)ttt; if (p->range < kTopValue) { p->range <<= 8; RangeEnc_ShiftLow(p); } } static void LitEnc_Encode(CRangeEnc *p, 
CLzmaProb *probs, UInt32 symbol) { symbol |= 0x100; do { RangeEnc_EncodeBit(p, probs + (symbol >> 8), (symbol >> 7) & 1); symbol <<= 1; } while (symbol < 0x10000); } static void LitEnc_EncodeMatched(CRangeEnc *p, CLzmaProb *probs, UInt32 symbol, UInt32 matchByte) { UInt32 offs = 0x100; symbol |= 0x100; do { matchByte <<= 1; RangeEnc_EncodeBit(p, probs + (offs + (matchByte & offs) + (symbol >> 8)), (symbol >> 7) & 1); symbol <<= 1; offs &= ~(matchByte ^ symbol); } while (symbol < 0x10000); } void LzmaEnc_InitPriceTables(UInt32 *ProbPrices) { UInt32 i; for (i = (1 << kNumMoveReducingBits) / 2; i < kBitModelTotal; i += (1 << kNumMoveReducingBits)) { const int kCyclesBits = kNumBitPriceShiftBits; UInt32 w = i; UInt32 bitCount = 0; int j; for (j = 0; j < kCyclesBits; j++) { w = w * w; bitCount <<= 1; while (w >= ((UInt32)1 << 16)) { w >>= 1; bitCount++; } } ProbPrices[i >> kNumMoveReducingBits] = ((kNumBitModelTotalBits << kCyclesBits) - 15 - bitCount); } } #define GET_PRICE(prob, symbol) \ p->ProbPrices[((prob) ^ (((-(int)(symbol))) & (kBitModelTotal - 1))) >> kNumMoveReducingBits]; #define GET_PRICEa(prob, symbol) \ ProbPrices[((prob) ^ ((-((int)(symbol))) & (kBitModelTotal - 1))) >> kNumMoveReducingBits]; #define GET_PRICE_0(prob) p->ProbPrices[(prob) >> kNumMoveReducingBits] #define GET_PRICE_1(prob) p->ProbPrices[((prob) ^ (kBitModelTotal - 1)) >> kNumMoveReducingBits] #define GET_PRICE_0a(prob) ProbPrices[(prob) >> kNumMoveReducingBits] #define GET_PRICE_1a(prob) ProbPrices[((prob) ^ (kBitModelTotal - 1)) >> kNumMoveReducingBits] static UInt32 LitEnc_GetPrice(const CLzmaProb *probs, UInt32 symbol, UInt32 *ProbPrices) { UInt32 price = 0; symbol |= 0x100; do { price += GET_PRICEa(probs[symbol >> 8], (symbol >> 7) & 1); symbol <<= 1; } while (symbol < 0x10000); return price; } static UInt32 LitEnc_GetPriceMatched(const CLzmaProb *probs, UInt32 symbol, UInt32 matchByte, UInt32 *ProbPrices) { UInt32 price = 0; UInt32 offs = 0x100; symbol |= 0x100; do { matchByte <<= 1; price += GET_PRICEa(probs[offs + (matchByte & offs) + (symbol >> 8)], (symbol >> 7) & 1); symbol <<= 1; offs &= ~(matchByte ^ symbol); } while (symbol < 0x10000); return price; } static void RcTree_Encode(CRangeEnc *rc, CLzmaProb *probs, int numBitLevels, UInt32 symbol) { UInt32 m = 1; int i; for (i = numBitLevels; i != 0;) { UInt32 bit; i--; bit = (symbol >> i) & 1; RangeEnc_EncodeBit(rc, probs + m, bit); m = (m << 1) | bit; } } static void RcTree_ReverseEncode(CRangeEnc *rc, CLzmaProb *probs, int numBitLevels, UInt32 symbol) { UInt32 m = 1; int i; for (i = 0; i < numBitLevels; i++) { UInt32 bit = symbol & 1; RangeEnc_EncodeBit(rc, probs + m, bit); m = (m << 1) | bit; symbol >>= 1; } } static UInt32 RcTree_GetPrice(const CLzmaProb *probs, int numBitLevels, UInt32 symbol, UInt32 *ProbPrices) { UInt32 price = 0; symbol |= (1 << numBitLevels); while (symbol != 1) { price += GET_PRICEa(probs[symbol >> 1], symbol & 1); symbol >>= 1; } return price; } static UInt32 RcTree_ReverseGetPrice(const CLzmaProb *probs, int numBitLevels, UInt32 symbol, UInt32 *ProbPrices) { UInt32 price = 0; UInt32 m = 1; int i; for (i = numBitLevels; i != 0; i--) { UInt32 bit = symbol & 1; symbol >>= 1; price += GET_PRICEa(probs[m], bit); m = (m << 1) | bit; } return price; } static void LenEnc_Init(CLenEnc *p) { unsigned i; p->choice = p->choice2 = kProbInitValue; for (i = 0; i < (LZMA_NUM_PB_STATES_MAX << kLenNumLowBits); i++) p->low[i] = kProbInitValue; for (i = 0; i < (LZMA_NUM_PB_STATES_MAX << kLenNumMidBits); i++) p->mid[i] = kProbInitValue; for (i = 
0; i < kLenNumHighSymbols; i++) p->high[i] = kProbInitValue; } static void LenEnc_Encode(CLenEnc *p, CRangeEnc *rc, UInt32 symbol, UInt32 posState) { if (symbol < kLenNumLowSymbols) { RangeEnc_EncodeBit(rc, &p->choice, 0); RcTree_Encode(rc, p->low + (posState << kLenNumLowBits), kLenNumLowBits, symbol); } else { RangeEnc_EncodeBit(rc, &p->choice, 1); if (symbol < kLenNumLowSymbols + kLenNumMidSymbols) { RangeEnc_EncodeBit(rc, &p->choice2, 0); RcTree_Encode(rc, p->mid + (posState << kLenNumMidBits), kLenNumMidBits, symbol - kLenNumLowSymbols); } else { RangeEnc_EncodeBit(rc, &p->choice2, 1); RcTree_Encode(rc, p->high, kLenNumHighBits, symbol - kLenNumLowSymbols - kLenNumMidSymbols); } } } static void LenEnc_SetPrices(CLenEnc *p, UInt32 posState, UInt32 numSymbols, UInt32 *prices, UInt32 *ProbPrices) { UInt32 a0 = GET_PRICE_0a(p->choice); UInt32 a1 = GET_PRICE_1a(p->choice); UInt32 b0 = a1 + GET_PRICE_0a(p->choice2); UInt32 b1 = a1 + GET_PRICE_1a(p->choice2); UInt32 i = 0; for (i = 0; i < kLenNumLowSymbols; i++) { if (i >= numSymbols) return; prices[i] = a0 + RcTree_GetPrice(p->low + (posState << kLenNumLowBits), kLenNumLowBits, i, ProbPrices); } for (; i < kLenNumLowSymbols + kLenNumMidSymbols; i++) { if (i >= numSymbols) return; prices[i] = b0 + RcTree_GetPrice(p->mid + (posState << kLenNumMidBits), kLenNumMidBits, i - kLenNumLowSymbols, ProbPrices); } for (; i < numSymbols; i++) prices[i] = b1 + RcTree_GetPrice(p->high, kLenNumHighBits, i - kLenNumLowSymbols - kLenNumMidSymbols, ProbPrices); } static void MY_FAST_CALL LenPriceEnc_UpdateTable(CLenPriceEnc *p, UInt32 posState, UInt32 *ProbPrices) { LenEnc_SetPrices(&p->p, posState, p->tableSize, p->prices[posState], ProbPrices); p->counters[posState] = p->tableSize; } static void LenPriceEnc_UpdateTables(CLenPriceEnc *p, UInt32 numPosStates, UInt32 *ProbPrices) { UInt32 posState; for (posState = 0; posState < numPosStates; posState++) LenPriceEnc_UpdateTable(p, posState, ProbPrices); } static void LenEnc_Encode2(CLenPriceEnc *p, CRangeEnc *rc, UInt32 symbol, UInt32 posState, Bool updatePrice, UInt32 *ProbPrices) { LenEnc_Encode(&p->p, rc, symbol, posState); if (updatePrice) if (--p->counters[posState] == 0) LenPriceEnc_UpdateTable(p, posState, ProbPrices); } static void MovePos(CLzmaEnc *p, UInt32 num) { #ifdef SHOW_STAT ttt += num; printf("\n MovePos %d", num); #endif if (num != 0) { p->additionalOffset += num; p->matchFinder.Skip(p->matchFinderObj, num); } } static UInt32 ReadMatchDistances(CLzmaEnc *p, UInt32 *numDistancePairsRes) { UInt32 lenRes = 0, numPairs; p->numAvail = p->matchFinder.GetNumAvailableBytes(p->matchFinderObj); numPairs = p->matchFinder.GetMatches(p->matchFinderObj, p->matches); #ifdef SHOW_STAT printf("\n i = %d numPairs = %d ", ttt, numPairs / 2); ttt++; { UInt32 i; for (i = 0; i < numPairs; i += 2) printf("%2d %6d | ", p->matches[i], p->matches[i + 1]); } #endif if (numPairs > 0) { lenRes = p->matches[numPairs - 2]; if (lenRes == p->numFastBytes) { const Byte *pby = p->matchFinder.GetPointerToCurrentPos(p->matchFinderObj) - 1; UInt32 distance = p->matches[numPairs - 1] + 1; UInt32 numAvail = p->numAvail; if (numAvail > LZMA_MATCH_LEN_MAX) numAvail = LZMA_MATCH_LEN_MAX; { const Byte *pby2 = pby - distance; for (; lenRes < numAvail && pby[lenRes] == pby2[lenRes]; lenRes++); } } } p->additionalOffset++; *numDistancePairsRes = numPairs; return lenRes; } #define MakeAsChar(p) (p)->backPrev = (UInt32)(-1); (p)->prev1IsChar = False; #define MakeAsShortRep(p) (p)->backPrev = 0; (p)->prev1IsChar = False; #define 
IsShortRep(p) ((p)->backPrev == 0) static UInt32 GetRepLen1Price(CLzmaEnc *p, UInt32 state, UInt32 posState) { return GET_PRICE_0(p->isRepG0[state]) + GET_PRICE_0(p->isRep0Long[state][posState]); } static UInt32 GetPureRepPrice(CLzmaEnc *p, UInt32 repIndex, UInt32 state, UInt32 posState) { UInt32 price; if (repIndex == 0) { price = GET_PRICE_0(p->isRepG0[state]); price += GET_PRICE_1(p->isRep0Long[state][posState]); } else { price = GET_PRICE_1(p->isRepG0[state]); if (repIndex == 1) price += GET_PRICE_0(p->isRepG1[state]); else { price += GET_PRICE_1(p->isRepG1[state]); price += GET_PRICE(p->isRepG2[state], repIndex - 2); } } return price; } static UInt32 GetRepPrice(CLzmaEnc *p, UInt32 repIndex, UInt32 len, UInt32 state, UInt32 posState) { return p->repLenEnc.prices[posState][len - LZMA_MATCH_LEN_MIN] + GetPureRepPrice(p, repIndex, state, posState); } static UInt32 Backward(CLzmaEnc *p, UInt32 *backRes, UInt32 cur) { UInt32 posMem = p->opt[cur].posPrev; UInt32 backMem = p->opt[cur].backPrev; p->optimumEndIndex = cur; do { if (p->opt[cur].prev1IsChar) { MakeAsChar(&p->opt[posMem]) p->opt[posMem].posPrev = posMem - 1; if (p->opt[cur].prev2) { p->opt[posMem - 1].prev1IsChar = False; p->opt[posMem - 1].posPrev = p->opt[cur].posPrev2; p->opt[posMem - 1].backPrev = p->opt[cur].backPrev2; } } { UInt32 posPrev = posMem; UInt32 backCur = backMem; backMem = p->opt[posPrev].backPrev; posMem = p->opt[posPrev].posPrev; p->opt[posPrev].backPrev = backCur; p->opt[posPrev].posPrev = cur; cur = posPrev; } } while (cur != 0); *backRes = p->opt[0].backPrev; p->optimumCurrentIndex = p->opt[0].posPrev; return p->optimumCurrentIndex; } #define LIT_PROBS(pos, prevByte) (p->litProbs + ((((pos) & p->lpMask) << p->lc) + ((prevByte) >> (8 - p->lc))) * 0x300) static UInt32 GetOptimum(CLzmaEnc *p, UInt32 position, UInt32 *backRes) { UInt32 numAvail, mainLen, numPairs, repMaxIndex, i, posState, lenEnd, len, cur; UInt32 matchPrice, repMatchPrice, normalMatchPrice; UInt32 reps[LZMA_NUM_REPS], repLens[LZMA_NUM_REPS]; UInt32 *matches; const Byte *data; Byte curByte, matchByte; if (p->optimumEndIndex != p->optimumCurrentIndex) { const COptimal *opt = &p->opt[p->optimumCurrentIndex]; UInt32 lenRes = opt->posPrev - p->optimumCurrentIndex; *backRes = opt->backPrev; p->optimumCurrentIndex = opt->posPrev; return lenRes; } p->optimumCurrentIndex = p->optimumEndIndex = 0; if (p->additionalOffset == 0) mainLen = ReadMatchDistances(p, &numPairs); else { mainLen = p->longestMatchLength; numPairs = p->numPairs; } numAvail = p->numAvail; if (numAvail < 2) { *backRes = (UInt32)(-1); return 1; } if (numAvail > LZMA_MATCH_LEN_MAX) numAvail = LZMA_MATCH_LEN_MAX; data = p->matchFinder.GetPointerToCurrentPos(p->matchFinderObj) - 1; repMaxIndex = 0; for (i = 0; i < LZMA_NUM_REPS; i++) { UInt32 lenTest; const Byte *data2; reps[i] = p->reps[i]; data2 = data - (reps[i] + 1); if (data[0] != data2[0] || data[1] != data2[1]) { repLens[i] = 0; continue; } for (lenTest = 2; lenTest < numAvail && data[lenTest] == data2[lenTest]; lenTest++); repLens[i] = lenTest; if (lenTest > repLens[repMaxIndex]) repMaxIndex = i; } if (repLens[repMaxIndex] >= p->numFastBytes) { UInt32 lenRes; *backRes = repMaxIndex; lenRes = repLens[repMaxIndex]; MovePos(p, lenRes - 1); return lenRes; } matches = p->matches; if (mainLen >= p->numFastBytes) { *backRes = matches[numPairs - 1] + LZMA_NUM_REPS; MovePos(p, mainLen - 1); return mainLen; } curByte = *data; matchByte = *(data - (reps[0] + 1)); if (mainLen < 2 && curByte != matchByte && repLens[repMaxIndex] < 2) { *backRes = 
(UInt32)-1; return 1; } p->opt[0].state = (CState)p->state; posState = (position & p->pbMask); { const CLzmaProb *probs = LIT_PROBS(position, *(data - 1)); p->opt[1].price = GET_PRICE_0(p->isMatch[p->state][posState]) + (!IsCharState(p->state) ? LitEnc_GetPriceMatched(probs, curByte, matchByte, p->ProbPrices) : LitEnc_GetPrice(probs, curByte, p->ProbPrices)); } MakeAsChar(&p->opt[1]); matchPrice = GET_PRICE_1(p->isMatch[p->state][posState]); repMatchPrice = matchPrice + GET_PRICE_1(p->isRep[p->state]); if (matchByte == curByte) { UInt32 shortRepPrice = repMatchPrice + GetRepLen1Price(p, p->state, posState); if (shortRepPrice < p->opt[1].price) { p->opt[1].price = shortRepPrice; MakeAsShortRep(&p->opt[1]); } } lenEnd = ((mainLen >= repLens[repMaxIndex]) ? mainLen : repLens[repMaxIndex]); if (lenEnd < 2) { *backRes = p->opt[1].backPrev; return 1; } p->opt[1].posPrev = 0; for (i = 0; i < LZMA_NUM_REPS; i++) p->opt[0].backs[i] = reps[i]; len = lenEnd; do p->opt[len--].price = kInfinityPrice; while (len >= 2); for (i = 0; i < LZMA_NUM_REPS; i++) { UInt32 repLen = repLens[i]; UInt32 price; if (repLen < 2) continue; price = repMatchPrice + GetPureRepPrice(p, i, p->state, posState); do { UInt32 curAndLenPrice = price + p->repLenEnc.prices[posState][repLen - 2]; COptimal *opt = &p->opt[repLen]; if (curAndLenPrice < opt->price) { opt->price = curAndLenPrice; opt->posPrev = 0; opt->backPrev = i; opt->prev1IsChar = False; } } while (--repLen >= 2); } normalMatchPrice = matchPrice + GET_PRICE_0(p->isRep[p->state]); len = ((repLens[0] >= 2) ? repLens[0] + 1 : 2); if (len <= mainLen) { UInt32 offs = 0; while (len > matches[offs]) offs += 2; for (; ; len++) { COptimal *opt; UInt32 distance = matches[offs + 1]; UInt32 curAndLenPrice = normalMatchPrice + p->lenEnc.prices[posState][len - LZMA_MATCH_LEN_MIN]; UInt32 lenToPosState = GetLenToPosState(len); if (distance < kNumFullDistances) curAndLenPrice += p->distancesPrices[lenToPosState][distance]; else { UInt32 slot; GetPosSlot2(distance, slot); curAndLenPrice += p->alignPrices[distance & kAlignMask] + p->posSlotPrices[lenToPosState][slot]; } opt = &p->opt[len]; if (curAndLenPrice < opt->price) { opt->price = curAndLenPrice; opt->posPrev = 0; opt->backPrev = distance + LZMA_NUM_REPS; opt->prev1IsChar = False; } if (len == matches[offs]) { offs += 2; if (offs == numPairs) break; } } } cur = 0; #ifdef SHOW_STAT2 if (position >= 0) { unsigned i; printf("\n pos = %4X", position); for (i = cur; i <= lenEnd; i++) printf("\nprice[%4X] = %d", position - cur + i, p->opt[i].price); } #endif for (;;) { UInt32 numAvailFull, newLen, numPairs, posPrev, state, posState, startLen; UInt32 curPrice, curAnd1Price, matchPrice, repMatchPrice; Bool nextIsChar; Byte curByte, matchByte; const Byte *data; COptimal *curOpt; COptimal *nextOpt; cur++; if (cur == lenEnd) return Backward(p, backRes, cur); newLen = ReadMatchDistances(p, &numPairs); if (newLen >= p->numFastBytes) { p->numPairs = numPairs; p->longestMatchLength = newLen; return Backward(p, backRes, cur); } position++; curOpt = &p->opt[cur]; posPrev = curOpt->posPrev; if (curOpt->prev1IsChar) { posPrev--; if (curOpt->prev2) { state = p->opt[curOpt->posPrev2].state; if (curOpt->backPrev2 < LZMA_NUM_REPS) state = kRepNextStates[state]; else state = kMatchNextStates[state]; } else state = p->opt[posPrev].state; state = kLiteralNextStates[state]; } else state = p->opt[posPrev].state; if (posPrev == cur - 1) { if (IsShortRep(curOpt)) state = kShortRepNextStates[state]; else state = kLiteralNextStates[state]; } else { UInt32 pos; 
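/* walk back to the predecessor node to recover the encoder state and the four rep distances valid at this position */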
const COptimal *prevOpt; if (curOpt->prev1IsChar && curOpt->prev2) { posPrev = curOpt->posPrev2; pos = curOpt->backPrev2; state = kRepNextStates[state]; } else { pos = curOpt->backPrev; if (pos < LZMA_NUM_REPS) state = kRepNextStates[state]; else state = kMatchNextStates[state]; } prevOpt = &p->opt[posPrev]; if (pos < LZMA_NUM_REPS) { UInt32 i; reps[0] = prevOpt->backs[pos]; for (i = 1; i <= pos; i++) reps[i] = prevOpt->backs[i - 1]; for (; i < LZMA_NUM_REPS; i++) reps[i] = prevOpt->backs[i]; } else { UInt32 i; reps[0] = (pos - LZMA_NUM_REPS); for (i = 1; i < LZMA_NUM_REPS; i++) reps[i] = prevOpt->backs[i - 1]; } } curOpt->state = (CState)state; curOpt->backs[0] = reps[0]; curOpt->backs[1] = reps[1]; curOpt->backs[2] = reps[2]; curOpt->backs[3] = reps[3]; curPrice = curOpt->price; nextIsChar = False; data = p->matchFinder.GetPointerToCurrentPos(p->matchFinderObj) - 1; curByte = *data; matchByte = *(data - (reps[0] + 1)); posState = (position & p->pbMask); curAnd1Price = curPrice + GET_PRICE_0(p->isMatch[state][posState]); { const CLzmaProb *probs = LIT_PROBS(position, *(data - 1)); curAnd1Price += (!IsCharState(state) ? LitEnc_GetPriceMatched(probs, curByte, matchByte, p->ProbPrices) : LitEnc_GetPrice(probs, curByte, p->ProbPrices)); } nextOpt = &p->opt[cur + 1]; if (curAnd1Price < nextOpt->price) { nextOpt->price = curAnd1Price; nextOpt->posPrev = cur; MakeAsChar(nextOpt); nextIsChar = True; } matchPrice = curPrice + GET_PRICE_1(p->isMatch[state][posState]); repMatchPrice = matchPrice + GET_PRICE_1(p->isRep[state]); if (matchByte == curByte && !(nextOpt->posPrev < cur && nextOpt->backPrev == 0)) { UInt32 shortRepPrice = repMatchPrice + GetRepLen1Price(p, state, posState); if (shortRepPrice <= nextOpt->price) { nextOpt->price = shortRepPrice; nextOpt->posPrev = cur; MakeAsShortRep(nextOpt); nextIsChar = True; } } numAvailFull = p->numAvail; { UInt32 temp = kNumOpts - 1 - cur; if (temp < numAvailFull) numAvailFull = temp; } if (numAvailFull < 2) continue; numAvail = (numAvailFull <= p->numFastBytes ? 
numAvailFull : p->numFastBytes); if (!nextIsChar && matchByte != curByte) /* speed optimization */ { /* try Literal + rep0 */ UInt32 temp; UInt32 lenTest2; const Byte *data2 = data - (reps[0] + 1); UInt32 limit = p->numFastBytes + 1; if (limit > numAvailFull) limit = numAvailFull; for (temp = 1; temp < limit && data[temp] == data2[temp]; temp++); lenTest2 = temp - 1; if (lenTest2 >= 2) { UInt32 state2 = kLiteralNextStates[state]; UInt32 posStateNext = (position + 1) & p->pbMask; UInt32 nextRepMatchPrice = curAnd1Price + GET_PRICE_1(p->isMatch[state2][posStateNext]) + GET_PRICE_1(p->isRep[state2]); /* for (; lenTest2 >= 2; lenTest2--) */ { UInt32 curAndLenPrice; COptimal *opt; UInt32 offset = cur + 1 + lenTest2; while (lenEnd < offset) p->opt[++lenEnd].price = kInfinityPrice; curAndLenPrice = nextRepMatchPrice + GetRepPrice(p, 0, lenTest2, state2, posStateNext); opt = &p->opt[offset]; if (curAndLenPrice < opt->price) { opt->price = curAndLenPrice; opt->posPrev = cur + 1; opt->backPrev = 0; opt->prev1IsChar = True; opt->prev2 = False; } } } } startLen = 2; /* speed optimization */ { UInt32 repIndex; for (repIndex = 0; repIndex < LZMA_NUM_REPS; repIndex++) { UInt32 lenTest; UInt32 lenTestTemp; UInt32 price; const Byte *data2 = data - (reps[repIndex] + 1); if (data[0] != data2[0] || data[1] != data2[1]) continue; for (lenTest = 2; lenTest < numAvail && data[lenTest] == data2[lenTest]; lenTest++); while (lenEnd < cur + lenTest) p->opt[++lenEnd].price = kInfinityPrice; lenTestTemp = lenTest; price = repMatchPrice + GetPureRepPrice(p, repIndex, state, posState); do { UInt32 curAndLenPrice = price + p->repLenEnc.prices[posState][lenTest - 2]; COptimal *opt = &p->opt[cur + lenTest]; if (curAndLenPrice < opt->price) { opt->price = curAndLenPrice; opt->posPrev = cur; opt->backPrev = repIndex; opt->prev1IsChar = False; } } while (--lenTest >= 2); lenTest = lenTestTemp; if (repIndex == 0) startLen = lenTest + 1; /* if (_maxMode) */ { UInt32 lenTest2 = lenTest + 1; UInt32 limit = lenTest2 + p->numFastBytes; UInt32 nextRepMatchPrice; if (limit > numAvailFull) limit = numAvailFull; for (; lenTest2 < limit && data[lenTest2] == data2[lenTest2]; lenTest2++); lenTest2 -= lenTest + 1; if (lenTest2 >= 2) { UInt32 state2 = kRepNextStates[state]; UInt32 posStateNext = (position + lenTest) & p->pbMask; UInt32 curAndLenCharPrice = price + p->repLenEnc.prices[posState][lenTest - 2] + GET_PRICE_0(p->isMatch[state2][posStateNext]) + LitEnc_GetPriceMatched(LIT_PROBS(position + lenTest, data[lenTest - 1]), data[lenTest], data2[lenTest], p->ProbPrices); state2 = kLiteralNextStates[state2]; posStateNext = (position + lenTest + 1) & p->pbMask; nextRepMatchPrice = curAndLenCharPrice + GET_PRICE_1(p->isMatch[state2][posStateNext]) + GET_PRICE_1(p->isRep[state2]); /* for (; lenTest2 >= 2; lenTest2--) */ { UInt32 curAndLenPrice; COptimal *opt; UInt32 offset = cur + lenTest + 1 + lenTest2; while (lenEnd < offset) p->opt[++lenEnd].price = kInfinityPrice; curAndLenPrice = nextRepMatchPrice + GetRepPrice(p, 0, lenTest2, state2, posStateNext); opt = &p->opt[offset]; if (curAndLenPrice < opt->price) { opt->price = curAndLenPrice; opt->posPrev = cur + lenTest + 1; opt->backPrev = 0; opt->prev1IsChar = True; opt->prev2 = True; opt->posPrev2 = cur; opt->backPrev2 = repIndex; } } } } } } /* for (UInt32 lenTest = 2; lenTest <= newLen; lenTest++) */ if (newLen > numAvail) { newLen = numAvail; for (numPairs = 0; newLen > matches[numPairs]; numPairs += 2); matches[numPairs] = newLen; numPairs += 2; } if (newLen >= startLen) { UInt32 
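/* price of starting a fresh (non-repeat) match from this node */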
normalMatchPrice = matchPrice + GET_PRICE_0(p->isRep[state]); UInt32 offs, curBack, posSlot; UInt32 lenTest; while (lenEnd < cur + newLen) p->opt[++lenEnd].price = kInfinityPrice; offs = 0; while (startLen > matches[offs]) offs += 2; curBack = matches[offs + 1]; GetPosSlot2(curBack, posSlot); for (lenTest = /*2*/ startLen; ; lenTest++) { UInt32 curAndLenPrice = normalMatchPrice + p->lenEnc.prices[posState][lenTest - LZMA_MATCH_LEN_MIN]; UInt32 lenToPosState = GetLenToPosState(lenTest); COptimal *opt; if (curBack < kNumFullDistances) curAndLenPrice += p->distancesPrices[lenToPosState][curBack]; else curAndLenPrice += p->posSlotPrices[lenToPosState][posSlot] + p->alignPrices[curBack & kAlignMask]; opt = &p->opt[cur + lenTest]; if (curAndLenPrice < opt->price) { opt->price = curAndLenPrice; opt->posPrev = cur; opt->backPrev = curBack + LZMA_NUM_REPS; opt->prev1IsChar = False; } if (/*_maxMode && */lenTest == matches[offs]) { /* Try Match + Literal + Rep0 */ const Byte *data2 = data - (curBack + 1); UInt32 lenTest2 = lenTest + 1; UInt32 limit = lenTest2 + p->numFastBytes; UInt32 nextRepMatchPrice; if (limit > numAvailFull) limit = numAvailFull; for (; lenTest2 < limit && data[lenTest2] == data2[lenTest2]; lenTest2++); lenTest2 -= lenTest + 1; if (lenTest2 >= 2) { UInt32 state2 = kMatchNextStates[state]; UInt32 posStateNext = (position + lenTest) & p->pbMask; UInt32 curAndLenCharPrice = curAndLenPrice + GET_PRICE_0(p->isMatch[state2][posStateNext]) + LitEnc_GetPriceMatched(LIT_PROBS(position + lenTest, data[lenTest - 1]), data[lenTest], data2[lenTest], p->ProbPrices); state2 = kLiteralNextStates[state2]; posStateNext = (posStateNext + 1) & p->pbMask; nextRepMatchPrice = curAndLenCharPrice + GET_PRICE_1(p->isMatch[state2][posStateNext]) + GET_PRICE_1(p->isRep[state2]); /* for (; lenTest2 >= 2; lenTest2--) */ { UInt32 offset = cur + lenTest + 1 + lenTest2; UInt32 curAndLenPrice; COptimal *opt; while (lenEnd < offset) p->opt[++lenEnd].price = kInfinityPrice; curAndLenPrice = nextRepMatchPrice + GetRepPrice(p, 0, lenTest2, state2, posStateNext); opt = &p->opt[offset]; if (curAndLenPrice < opt->price) { opt->price = curAndLenPrice; opt->posPrev = cur + lenTest + 1; opt->backPrev = 0; opt->prev1IsChar = True; opt->prev2 = True; opt->posPrev2 = cur; opt->backPrev2 = curBack + LZMA_NUM_REPS; } } } offs += 2; if (offs == numPairs) break; curBack = matches[offs + 1]; if (curBack >= kNumFullDistances) GetPosSlot2(curBack, posSlot); } } } } } #define ChangePair(smallDist, bigDist) (((bigDist) >> 7) > (smallDist)) static UInt32 GetOptimumFast(CLzmaEnc *p, UInt32 *backRes) { UInt32 numAvail, mainLen, mainDist, numPairs, repIndex, repLen, i; const Byte *data; const UInt32 *matches; if (p->additionalOffset == 0) mainLen = ReadMatchDistances(p, &numPairs); else { mainLen = p->longestMatchLength; numPairs = p->numPairs; } numAvail = p->numAvail; *backRes = (UInt32)-1; if (numAvail < 2) return 1; if (numAvail > LZMA_MATCH_LEN_MAX) numAvail = LZMA_MATCH_LEN_MAX; data = p->matchFinder.GetPointerToCurrentPos(p->matchFinderObj) - 1; repLen = repIndex = 0; for (i = 0; i < LZMA_NUM_REPS; i++) { UInt32 len; const Byte *data2 = data - (p->reps[i] + 1); if (data[0] != data2[0] || data[1] != data2[1]) continue; for (len = 2; len < numAvail && data[len] == data2[len]; len++); if (len >= p->numFastBytes) { *backRes = i; MovePos(p, len - 1); return len; } if (len > repLen) { repIndex = i; repLen = len; } } matches = p->matches; if (mainLen >= p->numFastBytes) { *backRes = matches[numPairs - 1] + LZMA_NUM_REPS; MovePos(p, 
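/* minus 1: ReadMatchDistances already advanced the position by one byte */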
mainLen - 1); return mainLen; } mainDist = 0; /* for GCC */ if (mainLen >= 2) { mainDist = matches[numPairs - 1]; while (numPairs > 2 && mainLen == matches[numPairs - 4] + 1) { if (!ChangePair(matches[numPairs - 3], mainDist)) break; numPairs -= 2; mainLen = matches[numPairs - 2]; mainDist = matches[numPairs - 1]; } if (mainLen == 2 && mainDist >= 0x80) mainLen = 1; } if (repLen >= 2 && ( (repLen + 1 >= mainLen) || (repLen + 2 >= mainLen && mainDist >= (1 << 9)) || (repLen + 3 >= mainLen && mainDist >= (1 << 15)))) { *backRes = repIndex; MovePos(p, repLen - 1); return repLen; } if (mainLen < 2 || numAvail <= 2) return 1; p->longestMatchLength = ReadMatchDistances(p, &p->numPairs); if (p->longestMatchLength >= 2) { UInt32 newDistance = matches[p->numPairs - 1]; if ((p->longestMatchLength >= mainLen && newDistance < mainDist) || (p->longestMatchLength == mainLen + 1 && !ChangePair(mainDist, newDistance)) || (p->longestMatchLength > mainLen + 1) || (p->longestMatchLength + 1 >= mainLen && mainLen >= 3 && ChangePair(newDistance, mainDist))) return 1; } data = p->matchFinder.GetPointerToCurrentPos(p->matchFinderObj) - 1; for (i = 0; i < LZMA_NUM_REPS; i++) { UInt32 len, limit; const Byte *data2 = data - (p->reps[i] + 1); if (data[0] != data2[0] || data[1] != data2[1]) continue; limit = mainLen - 1; for (len = 2; len < limit && data[len] == data2[len]; len++); if (len >= limit) return 1; } *backRes = mainDist + LZMA_NUM_REPS; MovePos(p, mainLen - 2); return mainLen; } static void WriteEndMarker(CLzmaEnc *p, UInt32 posState) { UInt32 len; RangeEnc_EncodeBit(&p->rc, &p->isMatch[p->state][posState], 1); RangeEnc_EncodeBit(&p->rc, &p->isRep[p->state], 0); p->state = kMatchNextStates[p->state]; len = LZMA_MATCH_LEN_MIN; LenEnc_Encode2(&p->lenEnc, &p->rc, len - LZMA_MATCH_LEN_MIN, posState, !p->fastMode, p->ProbPrices); RcTree_Encode(&p->rc, p->posSlotEncoder[GetLenToPosState(len)], kNumPosSlotBits, (1 << kNumPosSlotBits) - 1); RangeEnc_EncodeDirectBits(&p->rc, (((UInt32)1 << 30) - 1) >> kNumAlignBits, 30 - kNumAlignBits); RcTree_ReverseEncode(&p->rc, p->posAlignEncoder, kNumAlignBits, kAlignMask); } static SRes CheckErrors(CLzmaEnc *p) { if (p->result != SZ_OK) return p->result; if (p->rc.res != SZ_OK) p->result = SZ_ERROR_WRITE; if (p->matchFinderBase.result != SZ_OK) p->result = SZ_ERROR_READ; if (p->result != SZ_OK) p->finished = True; return p->result; } static SRes Flush(CLzmaEnc *p, UInt32 nowPos) { /* ReleaseMFStream(); */ p->finished = True; if (p->writeEndMark) WriteEndMarker(p, nowPos & p->pbMask); RangeEnc_FlushData(&p->rc); RangeEnc_FlushStream(&p->rc); return CheckErrors(p); } static void FillAlignPrices(CLzmaEnc *p) { UInt32 i; for (i = 0; i < kAlignTableSize; i++) p->alignPrices[i] = RcTree_ReverseGetPrice(p->posAlignEncoder, kNumAlignBits, i, p->ProbPrices); p->alignPriceCount = 0; } static void FillDistancesPrices(CLzmaEnc *p) { UInt32 tempPrices[kNumFullDistances]; UInt32 i, lenToPosState; for (i = kStartPosModelIndex; i < kNumFullDistances; i++) { UInt32 posSlot = GetPosSlot1(i); UInt32 footerBits = ((posSlot >> 1) - 1); UInt32 base = ((2 | (posSlot & 1)) << footerBits); tempPrices[i] = RcTree_ReverseGetPrice(p->posEncoders + base - posSlot - 1, footerBits, i - base, p->ProbPrices); } for (lenToPosState = 0; lenToPosState < kNumLenToPosStates; lenToPosState++) { UInt32 posSlot; const CLzmaProb *encoder = p->posSlotEncoder[lenToPosState]; UInt32 *posSlotPrices = p->posSlotPrices[lenToPosState]; for (posSlot = 0; posSlot < p->distTableSize; posSlot++) posSlotPrices[posSlot] = 
RcTree_GetPrice(encoder, kNumPosSlotBits, posSlot, p->ProbPrices); for (posSlot = kEndPosModelIndex; posSlot < p->distTableSize; posSlot++) posSlotPrices[posSlot] += ((((posSlot >> 1) - 1) - kNumAlignBits) << kNumBitPriceShiftBits); { UInt32 *distancesPrices = p->distancesPrices[lenToPosState]; UInt32 i; for (i = 0; i < kStartPosModelIndex; i++) distancesPrices[i] = posSlotPrices[i]; for (; i < kNumFullDistances; i++) distancesPrices[i] = posSlotPrices[GetPosSlot1(i)] + tempPrices[i]; } } p->matchPriceCount = 0; } void LzmaEnc_Construct(CLzmaEnc *p) { RangeEnc_Construct(&p->rc); MatchFinder_Construct(&p->matchFinderBase); #ifdef COMPRESS_MF_MT MatchFinderMt_Construct(&p->matchFinderMt); p->matchFinderMt.MatchFinder = &p->matchFinderBase; #endif { CLzmaEncProps props; LzmaEncProps_Init(&props); LzmaEnc_SetProps(p, &props); } #ifndef LZMA_LOG_BSR LzmaEnc_FastPosInit(p->g_FastPos); #endif LzmaEnc_InitPriceTables(p->ProbPrices); p->litProbs = 0; p->saveState.litProbs = 0; } CLzmaEncHandle LzmaEnc_Create(ISzAlloc *alloc) { void *p; p = alloc->Alloc(alloc, sizeof(CLzmaEnc)); if (p != 0) LzmaEnc_Construct((CLzmaEnc *)p); return p; } void LzmaEnc_FreeLits(CLzmaEnc *p, ISzAlloc *alloc) { alloc->Free(alloc, p->litProbs); alloc->Free(alloc, p->saveState.litProbs); p->litProbs = 0; p->saveState.litProbs = 0; } void LzmaEnc_Destruct(CLzmaEnc *p, ISzAlloc *alloc, ISzAlloc *allocBig) { #ifdef COMPRESS_MF_MT MatchFinderMt_Destruct(&p->matchFinderMt, allocBig); #endif MatchFinder_Free(&p->matchFinderBase, allocBig); LzmaEnc_FreeLits(p, alloc); RangeEnc_Free(&p->rc, alloc); } void LzmaEnc_Destroy(CLzmaEncHandle p, ISzAlloc *alloc, ISzAlloc *allocBig) { LzmaEnc_Destruct((CLzmaEnc *)p, alloc, allocBig); alloc->Free(alloc, p); } static SRes LzmaEnc_CodeOneBlock(CLzmaEnc *p, Bool useLimits, UInt32 maxPackSize, UInt32 maxUnpackSize) { UInt32 nowPos32, startPos32; if (p->inStream != 0) { p->matchFinderBase.stream = p->inStream; p->matchFinder.Init(p->matchFinderObj); p->inStream = 0; } if (p->finished) return p->result; RINOK(CheckErrors(p)); nowPos32 = (UInt32)p->nowPos64; startPos32 = nowPos32; if (p->nowPos64 == 0) { UInt32 numPairs; Byte curByte; if (p->matchFinder.GetNumAvailableBytes(p->matchFinderObj) == 0) return Flush(p, nowPos32); ReadMatchDistances(p, &numPairs); RangeEnc_EncodeBit(&p->rc, &p->isMatch[p->state][0], 0); p->state = kLiteralNextStates[p->state]; curByte = p->matchFinder.GetIndexByte(p->matchFinderObj, 0 - p->additionalOffset); LitEnc_Encode(&p->rc, p->litProbs, curByte); p->additionalOffset--; nowPos32++; } if (p->matchFinder.GetNumAvailableBytes(p->matchFinderObj) != 0) for (;;) { UInt32 pos, len, posState; if (p->fastMode) len = GetOptimumFast(p, &pos); else len = GetOptimum(p, nowPos32, &pos); #ifdef SHOW_STAT2 printf("\n pos = %4X, len = %d pos = %d", nowPos32, len, pos); #endif posState = nowPos32 & p->pbMask; if (len == 1 && pos == (UInt32)-1) { Byte curByte; CLzmaProb *probs; const Byte *data; RangeEnc_EncodeBit(&p->rc, &p->isMatch[p->state][posState], 0); data = p->matchFinder.GetPointerToCurrentPos(p->matchFinderObj) - p->additionalOffset; curByte = *data; probs = LIT_PROBS(nowPos32, *(data - 1)); if (IsCharState(p->state)) LitEnc_Encode(&p->rc, probs, curByte); else LitEnc_EncodeMatched(&p->rc, probs, curByte, *(data - p->reps[0] - 1)); p->state = kLiteralNextStates[p->state]; } else { RangeEnc_EncodeBit(&p->rc, &p->isMatch[p->state][posState], 1); if (pos < LZMA_NUM_REPS) { RangeEnc_EncodeBit(&p->rc, &p->isRep[p->state], 1); if (pos == 0) { RangeEnc_EncodeBit(&p->rc, 
&p->isRepG0[p->state], 0); RangeEnc_EncodeBit(&p->rc, &p->isRep0Long[p->state][posState], ((len == 1) ? 0 : 1)); } else { UInt32 distance = p->reps[pos]; RangeEnc_EncodeBit(&p->rc, &p->isRepG0[p->state], 1); if (pos == 1) RangeEnc_EncodeBit(&p->rc, &p->isRepG1[p->state], 0); else { RangeEnc_EncodeBit(&p->rc, &p->isRepG1[p->state], 1); RangeEnc_EncodeBit(&p->rc, &p->isRepG2[p->state], pos - 2); if (pos == 3) p->reps[3] = p->reps[2]; p->reps[2] = p->reps[1]; } p->reps[1] = p->reps[0]; p->reps[0] = distance; } if (len == 1) p->state = kShortRepNextStates[p->state]; else { LenEnc_Encode2(&p->repLenEnc, &p->rc, len - LZMA_MATCH_LEN_MIN, posState, !p->fastMode, p->ProbPrices); p->state = kRepNextStates[p->state]; } } else { UInt32 posSlot; RangeEnc_EncodeBit(&p->rc, &p->isRep[p->state], 0); p->state = kMatchNextStates[p->state]; LenEnc_Encode2(&p->lenEnc, &p->rc, len - LZMA_MATCH_LEN_MIN, posState, !p->fastMode, p->ProbPrices); pos -= LZMA_NUM_REPS; GetPosSlot(pos, posSlot); RcTree_Encode(&p->rc, p->posSlotEncoder[GetLenToPosState(len)], kNumPosSlotBits, posSlot); if (posSlot >= kStartPosModelIndex) { UInt32 footerBits = ((posSlot >> 1) - 1); UInt32 base = ((2 | (posSlot & 1)) << footerBits); UInt32 posReduced = pos - base; if (posSlot < kEndPosModelIndex) RcTree_ReverseEncode(&p->rc, p->posEncoders + base - posSlot - 1, footerBits, posReduced); else { RangeEnc_EncodeDirectBits(&p->rc, posReduced >> kNumAlignBits, footerBits - kNumAlignBits); RcTree_ReverseEncode(&p->rc, p->posAlignEncoder, kNumAlignBits, posReduced & kAlignMask); p->alignPriceCount++; } } p->reps[3] = p->reps[2]; p->reps[2] = p->reps[1]; p->reps[1] = p->reps[0]; p->reps[0] = pos; p->matchPriceCount++; } } p->additionalOffset -= len; nowPos32 += len; if (p->additionalOffset == 0) { UInt32 processed; if (!p->fastMode) { if (p->matchPriceCount >= (1 << 7)) FillDistancesPrices(p); if (p->alignPriceCount >= kAlignTableSize) FillAlignPrices(p); } if (p->matchFinder.GetNumAvailableBytes(p->matchFinderObj) == 0) break; processed = nowPos32 - startPos32; if (useLimits) { if (processed + kNumOpts + 300 >= maxUnpackSize || RangeEnc_GetProcessed(&p->rc) + kNumOpts * 2 >= maxPackSize) break; } else if (processed >= (1 << 15)) { p->nowPos64 += nowPos32 - startPos32; return CheckErrors(p); } } } p->nowPos64 += nowPos32 - startPos32; return Flush(p, nowPos32); } #define kBigHashDicLimit ((UInt32)1 << 24) static SRes LzmaEnc_Alloc(CLzmaEnc *p, UInt32 keepWindowSize, ISzAlloc *alloc, ISzAlloc *allocBig) { UInt32 beforeSize = kNumOpts; #ifdef COMPRESS_MF_MT Bool btMode; #endif if (!RangeEnc_Alloc(&p->rc, alloc)) return SZ_ERROR_MEM; #ifdef COMPRESS_MF_MT btMode = (p->matchFinderBase.btMode != 0); p->mtMode = (p->multiThread && !p->fastMode && btMode); #endif { unsigned lclp = p->lc + p->lp; if (p->litProbs == 0 || p->saveState.litProbs == 0 || p->lclp != lclp) { LzmaEnc_FreeLits(p, alloc); p->litProbs = (CLzmaProb *)alloc->Alloc(alloc, (0x300 << lclp) * sizeof(CLzmaProb)); p->saveState.litProbs = (CLzmaProb *)alloc->Alloc(alloc, (0x300 << lclp) * sizeof(CLzmaProb)); if (p->litProbs == 0 || p->saveState.litProbs == 0) { LzmaEnc_FreeLits(p, alloc); return SZ_ERROR_MEM; } p->lclp = lclp; } } p->matchFinderBase.bigHash = (p->dictSize > kBigHashDicLimit); if (beforeSize + p->dictSize < keepWindowSize) beforeSize = keepWindowSize - p->dictSize; #ifdef COMPRESS_MF_MT if (p->mtMode) { RINOK(MatchFinderMt_Create(&p->matchFinderMt, p->dictSize, beforeSize, p->numFastBytes, LZMA_MATCH_LEN_MAX, allocBig)); p->matchFinderObj = &p->matchFinderMt; 
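/* multi-threaded match finder chosen: expose it through the generic p->matchFinder vtable */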
MatchFinderMt_CreateVTable(&p->matchFinderMt, &p->matchFinder); } else #endif { if (!MatchFinder_Create(&p->matchFinderBase, p->dictSize, beforeSize, p->numFastBytes, LZMA_MATCH_LEN_MAX, allocBig)) return SZ_ERROR_MEM; p->matchFinderObj = &p->matchFinderBase; MatchFinder_CreateVTable(&p->matchFinderBase, &p->matchFinder); } return SZ_OK; } void LzmaEnc_Init(CLzmaEnc *p) { UInt32 i; p->state = 0; for (i = 0 ; i < LZMA_NUM_REPS; i++) p->reps[i] = 0; RangeEnc_Init(&p->rc); for (i = 0; i < kNumStates; i++) { UInt32 j; for (j = 0; j < LZMA_NUM_PB_STATES_MAX; j++) { p->isMatch[i][j] = kProbInitValue; p->isRep0Long[i][j] = kProbInitValue; } p->isRep[i] = kProbInitValue; p->isRepG0[i] = kProbInitValue; p->isRepG1[i] = kProbInitValue; p->isRepG2[i] = kProbInitValue; } { UInt32 num = 0x300 << (p->lp + p->lc); for (i = 0; i < num; i++) p->litProbs[i] = kProbInitValue; } { for (i = 0; i < kNumLenToPosStates; i++) { CLzmaProb *probs = p->posSlotEncoder[i]; UInt32 j; for (j = 0; j < (1 << kNumPosSlotBits); j++) probs[j] = kProbInitValue; } } { for (i = 0; i < kNumFullDistances - kEndPosModelIndex; i++) p->posEncoders[i] = kProbInitValue; } LenEnc_Init(&p->lenEnc.p); LenEnc_Init(&p->repLenEnc.p); for (i = 0; i < (1 << kNumAlignBits); i++) p->posAlignEncoder[i] = kProbInitValue; p->optimumEndIndex = 0; p->optimumCurrentIndex = 0; p->additionalOffset = 0; p->pbMask = (1 << p->pb) - 1; p->lpMask = (1 << p->lp) - 1; } void LzmaEnc_InitPrices(CLzmaEnc *p) { if (!p->fastMode) { FillDistancesPrices(p); FillAlignPrices(p); } p->lenEnc.tableSize = p->repLenEnc.tableSize = p->numFastBytes + 1 - LZMA_MATCH_LEN_MIN; LenPriceEnc_UpdateTables(&p->lenEnc, 1 << p->pb, p->ProbPrices); LenPriceEnc_UpdateTables(&p->repLenEnc, 1 << p->pb, p->ProbPrices); } static SRes LzmaEnc_AllocAndInit(CLzmaEnc *p, UInt32 keepWindowSize, ISzAlloc *alloc, ISzAlloc *allocBig) { UInt32 i; for (i = 0; i < (UInt32)kDicLogSizeMaxCompress; i++) if (p->dictSize <= ((UInt32)1 << i)) break; p->distTableSize = i * 2; p->finished = False; p->result = SZ_OK; RINOK(LzmaEnc_Alloc(p, keepWindowSize, alloc, allocBig)); LzmaEnc_Init(p); LzmaEnc_InitPrices(p); p->nowPos64 = 0; return SZ_OK; } static SRes LzmaEnc_Prepare(CLzmaEncHandle pp, ISeqInStream *inStream, ISeqOutStream *outStream, ISzAlloc *alloc, ISzAlloc *allocBig) { CLzmaEnc *p = (CLzmaEnc *)pp; p->inStream = inStream; p->rc.outStream = outStream; return LzmaEnc_AllocAndInit(p, 0, alloc, allocBig); } SRes LzmaEnc_PrepareForLzma2(CLzmaEncHandle pp, ISeqInStream *inStream, UInt32 keepWindowSize, ISzAlloc *alloc, ISzAlloc *allocBig) { CLzmaEnc *p = (CLzmaEnc *)pp; p->inStream = inStream; return LzmaEnc_AllocAndInit(p, keepWindowSize, alloc, allocBig); } static void LzmaEnc_SetInputBuf(CLzmaEnc *p, const Byte *src, SizeT srcLen) { p->seqBufInStream.funcTable.Read = MyRead; p->seqBufInStream.data = src; p->seqBufInStream.rem = srcLen; } SRes LzmaEnc_MemPrepare(CLzmaEncHandle pp, const Byte *src, SizeT srcLen, UInt32 keepWindowSize, ISzAlloc *alloc, ISzAlloc *allocBig) { CLzmaEnc *p = (CLzmaEnc *)pp; LzmaEnc_SetInputBuf(p, src, srcLen); p->inStream = &p->seqBufInStream.funcTable; return LzmaEnc_AllocAndInit(p, keepWindowSize, alloc, allocBig); } void LzmaEnc_Finish(CLzmaEncHandle pp) { #ifdef COMPRESS_MF_MT CLzmaEnc *p = (CLzmaEnc *)pp; if (p->mtMode) MatchFinderMt_ReleaseStream(&p->matchFinderMt); #else pp = pp; #endif } typedef struct _CSeqOutStreamBuf { ISeqOutStream funcTable; Byte *data; SizeT rem; Bool overflow; } CSeqOutStreamBuf; static size_t MyWrite(void *pp, const void *data, size_t size) { 
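/* ISeqOutStream callback backing the in-memory encoders: copies into the caller's fixed buffer and raises the overflow flag when space runs out */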
CSeqOutStreamBuf *p = (CSeqOutStreamBuf *)pp; if (p->rem < size) { size = p->rem; p->overflow = True; } memcpy(p->data, data, size); p->rem -= size; p->data += size; return size; } UInt32 LzmaEnc_GetNumAvailableBytes(CLzmaEncHandle pp) { const CLzmaEnc *p = (CLzmaEnc *)pp; return p->matchFinder.GetNumAvailableBytes(p->matchFinderObj); } const Byte *LzmaEnc_GetCurBuf(CLzmaEncHandle pp) { const CLzmaEnc *p = (CLzmaEnc *)pp; return p->matchFinder.GetPointerToCurrentPos(p->matchFinderObj) - p->additionalOffset; } SRes LzmaEnc_CodeOneMemBlock(CLzmaEncHandle pp, Bool reInit, Byte *dest, size_t *destLen, UInt32 desiredPackSize, UInt32 *unpackSize) { CLzmaEnc *p = (CLzmaEnc *)pp; UInt64 nowPos64; SRes res; CSeqOutStreamBuf outStream; outStream.funcTable.Write = MyWrite; outStream.data = dest; outStream.rem = *destLen; outStream.overflow = False; p->writeEndMark = False; p->finished = False; p->result = SZ_OK; if (reInit) LzmaEnc_Init(p); LzmaEnc_InitPrices(p); nowPos64 = p->nowPos64; RangeEnc_Init(&p->rc); p->rc.outStream = &outStream.funcTable; res = LzmaEnc_CodeOneBlock(p, True, desiredPackSize, *unpackSize); *unpackSize = (UInt32)(p->nowPos64 - nowPos64); *destLen -= outStream.rem; if (outStream.overflow) return SZ_ERROR_OUTPUT_EOF; return res; } SRes LzmaEnc_Encode(CLzmaEncHandle pp, ISeqOutStream *outStream, ISeqInStream *inStream, ICompressProgress *progress, ISzAlloc *alloc, ISzAlloc *allocBig) { CLzmaEnc *p = (CLzmaEnc *)pp; SRes res = SZ_OK; #ifdef COMPRESS_MF_MT Byte allocaDummy[0x300]; int i = 0; for (i = 0; i < 16; i++) allocaDummy[i] = (Byte)i; #endif RINOK(LzmaEnc_Prepare(pp, inStream, outStream, alloc, allocBig)); for (;;) { res = LzmaEnc_CodeOneBlock(p, False, 0, 0); if (res != SZ_OK || p->finished != 0) break; if (progress != 0) { res = progress->Progress(progress, p->nowPos64, RangeEnc_GetProcessed(&p->rc)); if (res != SZ_OK) { res = SZ_ERROR_PROGRESS; break; } } } LzmaEnc_Finish(pp); return res; } SRes LzmaEnc_WriteProperties(CLzmaEncHandle pp, Byte *props, SizeT *size) { CLzmaEnc *p = (CLzmaEnc *)pp; int i; UInt32 dictSize = p->dictSize; if (*size < LZMA_PROPS_SIZE) return SZ_ERROR_PARAM; *size = LZMA_PROPS_SIZE; props[0] = (Byte)((p->pb * 5 + p->lp) * 9 + p->lc); for (i = 11; i <= 30; i++) { if (dictSize <= ((UInt32)2 << i)) { dictSize = (2 << i); break; } if (dictSize <= ((UInt32)3 << i)) { dictSize = (3 << i); break; } } for (i = 0; i < 4; i++) props[1 + i] = (Byte)(dictSize >> (8 * i)); return SZ_OK; } SRes LzmaEnc_MemEncode(CLzmaEncHandle pp, Byte *dest, SizeT *destLen, const Byte *src, SizeT srcLen, int writeEndMark, ICompressProgress *progress, ISzAlloc *alloc, ISzAlloc *allocBig) { SRes res; CLzmaEnc *p = (CLzmaEnc *)pp; CSeqOutStreamBuf outStream; LzmaEnc_SetInputBuf(p, src, srcLen); outStream.funcTable.Write = MyWrite; outStream.data = dest; outStream.rem = *destLen; outStream.overflow = False; p->writeEndMark = writeEndMark; res = LzmaEnc_Encode(pp, &outStream.funcTable, &p->seqBufInStream.funcTable, progress, alloc, allocBig); *destLen -= outStream.rem; if (outStream.overflow) return SZ_ERROR_OUTPUT_EOF; return res; } SRes LzmaEncode(Byte *dest, SizeT *destLen, const Byte *src, SizeT srcLen, const CLzmaEncProps *props, Byte *propsEncoded, SizeT *propsSize, int writeEndMark, ICompressProgress *progress, ISzAlloc *alloc, ISzAlloc *allocBig) { CLzmaEnc *p = (CLzmaEnc *)LzmaEnc_Create(alloc); SRes res; if (p == 0) return SZ_ERROR_MEM; res = LzmaEnc_SetProps(p, props); if (res == SZ_OK) { res = LzmaEnc_WriteProperties(p, propsEncoded, propsSize); if (res == SZ_OK) res = 
LzmaEnc_MemEncode(p, dest, destLen, src, srcLen, writeEndMark, progress, alloc, allocBig); } LzmaEnc_Destroy(p, alloc, allocBig); return res; } zmat-0.9.8/src/easylzma/pavlov/LzmaEnc.h000077500000000000000000000054221366304620100201370ustar00rootroot00000000000000/* LzmaEnc.h -- LZMA Encoder 2008-10-04 : Igor Pavlov : Public domain */ #ifndef __LZMAENC_H #define __LZMAENC_H #include "Types.h" #define LZMA_PROPS_SIZE 5 typedef struct _CLzmaEncProps { int level; /* 0 <= level <= 9 */ UInt32 dictSize; /* (1 << 12) <= dictSize <= (1 << 27) for 32-bit version (1 << 12) <= dictSize <= (1 << 30) for 64-bit version default = (1 << 24) */ int lc; /* 0 <= lc <= 8, default = 3 */ int lp; /* 0 <= lp <= 4, default = 0 */ int pb; /* 0 <= pb <= 4, default = 2 */ int algo; /* 0 - fast, 1 - normal, default = 1 */ int fb; /* 5 <= fb <= 273, default = 32 */ int btMode; /* 0 - hashChain Mode, 1 - binTree mode - normal, default = 1 */ int numHashBytes; /* 2, 3 or 4, default = 4 */ UInt32 mc; /* 1 <= mc <= (1 << 30), default = 32 */ unsigned writeEndMark; /* 0 - do not write EOPM, 1 - write EOPM, default = 0 */ int numThreads; /* 1 or 2, default = 2 */ } CLzmaEncProps; void LzmaEncProps_Init(CLzmaEncProps *p); void LzmaEncProps_Normalize(CLzmaEncProps *p); UInt32 LzmaEncProps_GetDictSize(const CLzmaEncProps *props2); /* ---------- CLzmaEncHandle Interface ---------- */ /* LzmaEnc_* functions can return the following exit codes: Returns: SZ_OK - OK SZ_ERROR_MEM - Memory allocation error SZ_ERROR_PARAM - Incorrect paramater in props SZ_ERROR_WRITE - Write callback error. SZ_ERROR_PROGRESS - some break from progress callback SZ_ERROR_THREAD - errors in multithreading functions (only for Mt version) */ typedef void * CLzmaEncHandle; CLzmaEncHandle LzmaEnc_Create(ISzAlloc *alloc); void LzmaEnc_Destroy(CLzmaEncHandle p, ISzAlloc *alloc, ISzAlloc *allocBig); SRes LzmaEnc_SetProps(CLzmaEncHandle p, const CLzmaEncProps *props); SRes LzmaEnc_WriteProperties(CLzmaEncHandle p, Byte *properties, SizeT *size); SRes LzmaEnc_Encode(CLzmaEncHandle p, ISeqOutStream *outStream, ISeqInStream *inStream, ICompressProgress *progress, ISzAlloc *alloc, ISzAlloc *allocBig); SRes LzmaEnc_MemEncode(CLzmaEncHandle p, Byte *dest, SizeT *destLen, const Byte *src, SizeT srcLen, int writeEndMark, ICompressProgress *progress, ISzAlloc *alloc, ISzAlloc *allocBig); /* ---------- One Call Interface ---------- */ /* LzmaEncode Return code: SZ_OK - OK SZ_ERROR_MEM - Memory allocation error SZ_ERROR_PARAM - Incorrect paramater SZ_ERROR_OUTPUT_EOF - output buffer overflow SZ_ERROR_THREAD - errors in multithreading functions (only for Mt version) */ SRes LzmaEncode(Byte *dest, SizeT *destLen, const Byte *src, SizeT srcLen, const CLzmaEncProps *props, Byte *propsEncoded, SizeT *propsSize, int writeEndMark, ICompressProgress *progress, ISzAlloc *alloc, ISzAlloc *allocBig); #endif zmat-0.9.8/src/easylzma/pavlov/LzmaLib.c000077500000000000000000000027061366304620100201350ustar00rootroot00000000000000/* LzmaLib.c -- LZMA library wrapper 2008-08-05 Igor Pavlov Public domain */ #include "LzmaEnc.h" #include "LzmaDec.h" #include "Alloc.h" #include "LzmaLib.h" static void *SzAlloc(void *p, size_t size) { p = p; return MyAlloc(size); } static void SzFree(void *p, void *address) { p = p; MyFree(address); } static ISzAlloc g_Alloc = { SzAlloc, SzFree }; MY_STDAPI LzmaCompress(unsigned char *dest, size_t *destLen, const unsigned char *src, size_t srcLen, unsigned char *outProps, size_t *outPropsSize, int level, /* 0 <= level <= 9, default = 5 */ unsigned dictSize, 
/* use (1 << N) or (3 << N). 4 KB < dictSize <= 128 MB */ int lc, /* 0 <= lc <= 8, default = 3 */ int lp, /* 0 <= lp <= 4, default = 0 */ int pb, /* 0 <= pb <= 4, default = 2 */ int fb, /* 5 <= fb <= 273, default = 32 */ int numThreads /* 1 or 2, default = 2 */ ) { CLzmaEncProps props; LzmaEncProps_Init(&props); props.level = level; props.dictSize = dictSize; props.lc = lc; props.lp = lp; props.pb = pb; props.fb = fb; props.numThreads = numThreads; return LzmaEncode(dest, destLen, src, srcLen, &props, outProps, outPropsSize, 0, NULL, &g_Alloc, &g_Alloc); } MY_STDAPI LzmaUncompress(unsigned char *dest, size_t *destLen, const unsigned char *src, size_t *srcLen, const unsigned char *props, size_t propsSize) { ELzmaStatus status; return LzmaDecode(dest, destLen, src, srcLen, props, (unsigned)propsSize, LZMA_FINISH_ANY, &status, &g_Alloc); } zmat-0.9.8/src/easylzma/pavlov/LzmaLib.h000077500000000000000000000104301366304620100201330ustar00rootroot00000000000000/* LzmaLib.h -- LZMA library interface 2008-08-05 Igor Pavlov Public domain */ #ifndef __LZMALIB_H #define __LZMALIB_H #include "Types.h" #ifdef __cplusplus #define MY_EXTERN_C extern "C" #else #define MY_EXTERN_C extern #endif #define MY_STDAPI MY_EXTERN_C int MY_STD_CALL #define LZMA_PROPS_SIZE 5 /* RAM requirements for LZMA: for compression: (dictSize * 11.5 + 6 MB) + state_size for decompression: dictSize + state_size state_size = (4 + (1.5 << (lc + lp))) KB by default (lc=3, lp=0), state_size = 16 KB. LZMA properties (5 bytes) format Offset Size Description 0 1 lc, lp and pb in encoded form. 1 4 dictSize (little endian). */ /* LzmaCompress ------------ outPropsSize - In: the pointer to the size of outProps buffer; *outPropsSize = LZMA_PROPS_SIZE = 5. Out: the pointer to the size of written properties in outProps buffer; *outPropsSize = LZMA_PROPS_SIZE = 5. LZMA Encoder will use defult values for any parameter, if it is -1 for any from: level, loc, lp, pb, fb, numThreads 0 for dictSize level - compression level: 0 <= level <= 9; level dictSize algo fb 0: 16 KB 0 32 1: 64 KB 0 32 2: 256 KB 0 32 3: 1 MB 0 32 4: 4 MB 0 32 5: 16 MB 1 32 6: 32 MB 1 32 7+: 64 MB 1 64 The default value for "level" is 5. algo = 0 means fast method algo = 1 means normal method dictSize - The dictionary size in bytes. The maximum value is 128 MB = (1 << 27) bytes for 32-bit version 1 GB = (1 << 30) bytes for 64-bit version The default value is 16 MB = (1 << 24) bytes. It's recommended to use the dictionary that is larger than 4 KB and that can be calculated as (1 << N) or (3 << N) sizes. lc - The number of literal context bits (high bits of previous literal). It can be in the range from 0 to 8. The default value is 3. Sometimes lc=4 gives the gain for big files. lp - The number of literal pos bits (low bits of current position for literals). It can be in the range from 0 to 4. The default value is 0. The lp switch is intended for periodical data when the period is equal to 2^lp. For example, for 32-bit (4 bytes) periodical data you can use lp=2. Often it's better to set lc=0, if you change lp switch. pb - The number of pos bits (low bits of current position). It can be in the range from 0 to 4. The default value is 2. The pb switch is intended for periodical data when the period is equal 2^pb. fb - Word size (the number of fast bytes). It can be in the range from 5 to 273. The default value is 32. Usually, a big number gives a little bit better compression ratio and slower compression process. numThreads - The number of thereads. 1 or 2. The default value is 2. 
Fast mode (algo = 0) can use only 1 thread. Out: destLen - processed output size Returns: SZ_OK - OK SZ_ERROR_MEM - Memory allocation error SZ_ERROR_PARAM - Incorrect paramater SZ_ERROR_OUTPUT_EOF - output buffer overflow SZ_ERROR_THREAD - errors in multithreading functions (only for Mt version) */ MY_STDAPI LzmaCompress(unsigned char *dest, size_t *destLen, const unsigned char *src, size_t srcLen, unsigned char *outProps, size_t *outPropsSize, /* *outPropsSize must be = 5 */ int level, /* 0 <= level <= 9, default = 5 */ unsigned dictSize, /* default = (1 << 24) */ int lc, /* 0 <= lc <= 8, default = 3 */ int lp, /* 0 <= lp <= 4, default = 0 */ int pb, /* 0 <= pb <= 4, default = 2 */ int fb, /* 5 <= fb <= 273, default = 32 */ int numThreads /* 1 or 2, default = 2 */ ); /* LzmaUncompress -------------- In: dest - output data destLen - output data size src - input data srcLen - input data size Out: destLen - processed output size srcLen - processed input size Returns: SZ_OK - OK SZ_ERROR_DATA - Data error SZ_ERROR_MEM - Memory allocation arror SZ_ERROR_UNSUPPORTED - Unsupported properties SZ_ERROR_INPUT_EOF - it needs more bytes in input buffer (src) */ MY_STDAPI LzmaUncompress(unsigned char *dest, size_t *destLen, const unsigned char *src, SizeT *srcLen, const unsigned char *props, size_t propsSize); #endif zmat-0.9.8/src/easylzma/pavlov/Types.h000077500000000000000000000110711366304620100177070ustar00rootroot00000000000000/* Types.h -- Basic types 2008-11-23 : Igor Pavlov : Public domain */ #ifndef __7Z_TYPES_H #define __7Z_TYPES_H #include #ifdef _WIN32 #include #endif #define SZ_OK 0 #define SZ_ERROR_DATA 1 #define SZ_ERROR_MEM 2 #define SZ_ERROR_CRC 3 #define SZ_ERROR_UNSUPPORTED 4 #define SZ_ERROR_PARAM 5 #define SZ_ERROR_INPUT_EOF 6 #define SZ_ERROR_OUTPUT_EOF 7 #define SZ_ERROR_READ 8 #define SZ_ERROR_WRITE 9 #define SZ_ERROR_PROGRESS 10 #define SZ_ERROR_FAIL 11 #define SZ_ERROR_THREAD 12 #define SZ_ERROR_ARCHIVE 16 #define SZ_ERROR_NO_ARCHIVE 17 typedef int SRes; #ifdef _WIN32 typedef DWORD WRes; #else typedef int WRes; #endif #ifndef RINOK #define RINOK(x) { int __result__ = (x); if (__result__ != 0) return __result__; } #endif typedef unsigned char Byte; typedef short Int16; typedef unsigned short UInt16; #ifdef _LZMA_UINT32_IS_ULONG typedef long Int32; typedef unsigned long UInt32; #else typedef int Int32; typedef unsigned int UInt32; #endif #ifdef _SZ_NO_INT_64 /* define _SZ_NO_INT_64, if your compiler doesn't support 64-bit integers. NOTES: Some code will work incorrectly in that case! */ typedef long Int64; typedef unsigned long UInt64; #else #if defined(_MSC_VER) || defined(__BORLANDC__) typedef __int64 Int64; typedef unsigned __int64 UInt64; #else typedef long long int Int64; typedef unsigned long long int UInt64; #endif #endif #ifdef _LZMA_NO_SYSTEM_SIZE_T typedef UInt32 SizeT; #else typedef size_t SizeT; #endif typedef int Bool; #define True 1 #define False 0 #ifdef _MSC_VER #if _MSC_VER >= 1300 #define MY_NO_INLINE __declspec(noinline) #else #define MY_NO_INLINE #endif #define MY_CDECL __cdecl #define MY_STD_CALL __stdcall #define MY_FAST_CALL MY_NO_INLINE __fastcall #else #define MY_CDECL #define MY_STD_CALL #define MY_FAST_CALL #endif /* The following interfaces use first parameter as pointer to structure */ typedef struct { SRes (*Read)(void *p, void *buf, size_t *size); /* if (input(*size) != 0 && output(*size) == 0) means end_of_stream. 
(output(*size) < input(*size)) is allowed */ } ISeqInStream; /* it can return SZ_ERROR_INPUT_EOF */ SRes SeqInStream_Read(ISeqInStream *stream, void *buf, size_t size); SRes SeqInStream_Read2(ISeqInStream *stream, void *buf, size_t size, SRes errorType); SRes SeqInStream_ReadByte(ISeqInStream *stream, Byte *buf); typedef struct { size_t (*Write)(void *p, const void *buf, size_t size); /* Returns: result - the number of actually written bytes. (result < size) means error */ } ISeqOutStream; typedef enum { SZ_SEEK_SET = 0, SZ_SEEK_CUR = 1, SZ_SEEK_END = 2 } ESzSeek; typedef struct { SRes (*Read)(void *p, void *buf, size_t *size); /* same as ISeqInStream::Read */ SRes (*Seek)(void *p, Int64 *pos, ESzSeek origin); } ISeekInStream; typedef struct { SRes (*Look)(void *p, void **buf, size_t *size); /* if (input(*size) != 0 && output(*size) == 0) means end_of_stream. (output(*size) > input(*size)) is not allowed (output(*size) < input(*size)) is allowed */ SRes (*Skip)(void *p, size_t offset); /* offset must be <= output(*size) of Look */ SRes (*Read)(void *p, void *buf, size_t *size); /* reads directly (without buffer). It's same as ISeqInStream::Read */ SRes (*Seek)(void *p, Int64 *pos, ESzSeek origin); } ILookInStream; SRes LookInStream_LookRead(ILookInStream *stream, void *buf, size_t *size); SRes LookInStream_SeekTo(ILookInStream *stream, UInt64 offset); /* reads via ILookInStream::Read */ SRes LookInStream_Read2(ILookInStream *stream, void *buf, size_t size, SRes errorType); SRes LookInStream_Read(ILookInStream *stream, void *buf, size_t size); #define LookToRead_BUF_SIZE (1 << 14) typedef struct { ILookInStream s; ISeekInStream *realStream; size_t pos; size_t size; Byte buf[LookToRead_BUF_SIZE]; } CLookToRead; void LookToRead_CreateVTable(CLookToRead *p, int lookahead); void LookToRead_Init(CLookToRead *p); typedef struct { ISeqInStream s; ILookInStream *realStream; } CSecToLook; void SecToLook_CreateVTable(CSecToLook *p); typedef struct { ISeqInStream s; ILookInStream *realStream; } CSecToRead; void SecToRead_CreateVTable(CSecToRead *p); typedef struct { SRes (*Progress)(void *p, UInt64 inSize, UInt64 outSize); /* Returns: result. (result != SZ_OK) means break. Value (UInt64)(Int64)-1 for size means unknown value. */ } ICompressProgress; typedef struct { void *(*Alloc)(void *p, size_t size); void (*Free)(void *p, void *address); /* address can be 0 */ } ISzAlloc; #define IAlloc_Alloc(p, size) (p)->Alloc((p), size) #define IAlloc_Free(p, a) (p)->Free((p), a) #endif zmat-0.9.8/src/lz4/000077500000000000000000000000001366304620100140045ustar00rootroot00000000000000zmat-0.9.8/src/lz4/lz4.c000066400000000000000000003056441366304620100146750ustar00rootroot00000000000000/* LZ4 - Fast LZ compression algorithm Copyright (C) 2011-present, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - LZ4 homepage : http://www.lz4.org - LZ4 source repository : https://github.com/lz4/lz4 */ /*-************************************ * Tuning parameters **************************************/ /* * LZ4_HEAPMODE : * Select how default compression functions will allocate memory for their hash table, * in memory stack (0:default, fastest), or in memory heap (1:requires malloc()). */ #ifndef LZ4_HEAPMODE # define LZ4_HEAPMODE 0 #endif /* * ACCELERATION_DEFAULT : * Select "acceleration" for LZ4_compress_fast() when parameter value <= 0 */ #define ACCELERATION_DEFAULT 1 /*-************************************ * CPU Feature Detection **************************************/ /* LZ4_FORCE_MEMORY_ACCESS * By default, access to unaligned memory is controlled by `memcpy()`, which is safe and portable. * Unfortunately, on some target/compiler combinations, the generated assembly is sub-optimal. * The below switch allow to select different access method for improved performance. * Method 0 (default) : use `memcpy()`. Safe and portable. * Method 1 : `__packed` statement. It depends on compiler extension (ie, not portable). * This method is safe if your compiler supports it, and *generally* as fast or faster than `memcpy`. * Method 2 : direct access. This method is portable but violate C standard. * It can generate buggy code on targets which assembly generation depends on alignment. * But in some circumstances, it's the only known way to get the most performance (ie GCC + ARMv6) * See https://fastcompression.blogspot.fr/2015/08/accessing-unaligned-memory.html for details. 
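 * Illustrative aside (added, not upstream text): for a 32-bit read, method 0 amounts to
 *   U32 v; memcpy(&v, ptr, sizeof(v));
 * whereas method 2 dereferences the pointer directly, e.g. *(const U32*)ptr.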
* Prefer these methods in priority order (0 > 1 > 2) */ #ifndef LZ4_FORCE_MEMORY_ACCESS /* can be defined externally */ # if defined(__GNUC__) && \ ( defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) \ || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) || defined(__ARM_ARCH_6T2__) ) # define LZ4_FORCE_MEMORY_ACCESS 2 # elif (defined(__INTEL_COMPILER) && !defined(_WIN32)) || defined(__GNUC__) # define LZ4_FORCE_MEMORY_ACCESS 1 # endif #endif /* * LZ4_FORCE_SW_BITCOUNT * Define this parameter if your target system or compiler does not support hardware bit count */ #if defined(_MSC_VER) && defined(_WIN32_WCE) /* Visual Studio for WinCE doesn't support Hardware bit count */ # define LZ4_FORCE_SW_BITCOUNT #endif /*-************************************ * Dependency **************************************/ /* * LZ4_SRC_INCLUDED: * Amalgamation flag, whether lz4.c is included */ #ifndef LZ4_SRC_INCLUDED # define LZ4_SRC_INCLUDED 1 #endif #ifndef LZ4_STATIC_LINKING_ONLY #define LZ4_STATIC_LINKING_ONLY #endif #ifndef LZ4_DISABLE_DEPRECATE_WARNINGS #define LZ4_DISABLE_DEPRECATE_WARNINGS /* due to LZ4_decompress_safe_withPrefix64k */ #endif #define LZ4_STATIC_LINKING_ONLY /* LZ4_DISTANCE_MAX */ #include "lz4.h" /* see also "memory routines" below */ /*-************************************ * Compiler Options **************************************/ #ifdef _MSC_VER /* Visual Studio */ # include # pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */ # pragma warning(disable : 4293) /* disable: C4293: too large shift (32-bits) */ #endif /* _MSC_VER */ #ifndef LZ4_FORCE_INLINE # ifdef _MSC_VER /* Visual Studio */ # define LZ4_FORCE_INLINE static __forceinline # else # if defined (__cplusplus) || defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L /* C99 */ # ifdef __GNUC__ # define LZ4_FORCE_INLINE static inline __attribute__((always_inline)) # else # define LZ4_FORCE_INLINE static inline # endif # else # define LZ4_FORCE_INLINE static # endif /* __STDC_VERSION__ */ # endif /* _MSC_VER */ #endif /* LZ4_FORCE_INLINE */ /* LZ4_FORCE_O2_GCC_PPC64LE and LZ4_FORCE_O2_INLINE_GCC_PPC64LE * gcc on ppc64le generates an unrolled SIMDized loop for LZ4_wildCopy8, * together with a simple 8-byte copy loop as a fall-back path. * However, this optimization hurts the decompression speed by >30%, * because the execution does not go to the optimized loop * for typical compressible data, and all of the preamble checks * before going to the fall-back path become useless overhead. * This optimization happens only with the -O3 flag, and -O2 generates * a simple 8-byte copy loop. * With gcc on ppc64le, all of the LZ4_decompress_* and LZ4_wildCopy8 * functions are annotated with __attribute__((optimize("O2"))), * and also LZ4_wildCopy8 is forcibly inlined, so that the O2 attribute * of LZ4_wildCopy8 does not affect the compression speed. 
*/
#if defined(__PPC64__) && defined(__LITTLE_ENDIAN__) && defined(__GNUC__) && !defined(__clang__)
#  define LZ4_FORCE_O2_GCC_PPC64LE __attribute__((optimize("O2")))
#  define LZ4_FORCE_O2_INLINE_GCC_PPC64LE __attribute__((optimize("O2"))) LZ4_FORCE_INLINE
#else
#  define LZ4_FORCE_O2_GCC_PPC64LE
#  define LZ4_FORCE_O2_INLINE_GCC_PPC64LE static
#endif

#if (defined(__GNUC__) && (__GNUC__ >= 3)) || (defined(__INTEL_COMPILER) && (__INTEL_COMPILER >= 800)) || defined(__clang__)
#  define expect(expr,value)    (__builtin_expect ((expr),(value)) )
#else
#  define expect(expr,value)    (expr)
#endif

#ifndef likely
#define likely(expr)     expect((expr) != 0, 1)
#endif
#ifndef unlikely
#define unlikely(expr)   expect((expr) != 0, 0)
#endif


/*-************************************
*  Memory routines
**************************************/
#include <stdlib.h>   /* malloc, calloc, free */
#define ALLOC(s)          malloc(s)
#define ALLOC_AND_ZERO(s) calloc(1,s)
#define FREEMEM(p)        free(p)
#include <string.h>   /* memset, memcpy */
#define MEM_INIT(p,v,s)   memset((p),(v),(s))


/*-************************************
*  Common Constants
**************************************/
#define MINMATCH 4

#define WILDCOPYLENGTH 8
#define LASTLITERALS   5   /* see ../doc/lz4_Block_format.md#parsing-restrictions */
#define MFLIMIT       12   /* see ../doc/lz4_Block_format.md#parsing-restrictions */
#define MATCH_SAFEGUARD_DISTANCE  ((2*WILDCOPYLENGTH) - MINMATCH)   /* ensure it's possible to write 2 x wildcopyLength without overflowing output buffer */
#define FASTLOOP_SAFE_DISTANCE 64
static const int LZ4_minLength = (MFLIMIT+1);

#define KB *(1 <<10)
#define MB *(1 <<20)
#define GB *(1U<<30)

#define LZ4_DISTANCE_ABSOLUTE_MAX 65535
#if (LZ4_DISTANCE_MAX > LZ4_DISTANCE_ABSOLUTE_MAX)   /* max supported by LZ4 format */
#  error "LZ4_DISTANCE_MAX is too big : must be <= 65535"
#endif

#define ML_BITS  4
#define ML_MASK  ((1U<<ML_BITS)-1)
#define RUN_BITS (8-ML_BITS)
#define RUN_MASK ((1U<<RUN_BITS)-1)


/*-************************************
*  Error detection
**************************************/
#if defined(LZ4_DEBUG) && (LZ4_DEBUG>=1)
#  include <assert.h>
#else
#  ifndef assert
#    define assert(condition) ((void)0)
#  endif
#endif

#define LZ4_STATIC_ASSERT(c)   { enum { LZ4_static_assert = 1/(int)(!!(c)) }; }   /* use after variable declarations */

#if defined(LZ4_DEBUG) && (LZ4_DEBUG>=2)
#  include <stdio.h>
static int g_debuglog_enable = 1;
#  define DEBUGLOG(l, ...) {                          \
    if ((g_debuglog_enable) && (l<=LZ4_DEBUG)) {      \
        fprintf(stderr, __FILE__ ": ");               \
        fprintf(stderr, __VA_ARGS__);                 \
        fprintf(stderr, " \n");                       \
    }   }
#else
#  define DEBUGLOG(l, ...)
{} /* disabled */ #endif /*-************************************ * Types **************************************/ #if defined(__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) # include typedef uint8_t BYTE; typedef uint16_t U16; typedef uint32_t U32; typedef int32_t S32; typedef uint64_t U64; typedef uintptr_t uptrval; #else typedef unsigned char BYTE; typedef unsigned short U16; typedef unsigned int U32; typedef signed int S32; typedef unsigned long long U64; typedef size_t uptrval; /* generally true, except OpenVMS-64 */ #endif #if defined(__x86_64__) typedef U64 reg_t; /* 64-bits in x32 mode */ #else typedef size_t reg_t; /* 32-bits in x32 mode */ #endif typedef enum { notLimited = 0, limitedOutput = 1, fillOutput = 2 } limitedOutput_directive; /*-************************************ * Reading and writing into memory **************************************/ static unsigned LZ4_isLittleEndian(void) { const union { U32 u; BYTE c[4]; } one = { 1 }; /* don't use static : performance detrimental */ return one.c[0]; } #if defined(LZ4_FORCE_MEMORY_ACCESS) && (LZ4_FORCE_MEMORY_ACCESS==2) /* lie to the compiler about data alignment; use with caution */ static U16 LZ4_read16(const void* memPtr) { return *(const U16*) memPtr; } static U32 LZ4_read32(const void* memPtr) { return *(const U32*) memPtr; } static reg_t LZ4_read_ARCH(const void* memPtr) { return *(const reg_t*) memPtr; } static void LZ4_write16(void* memPtr, U16 value) { *(U16*)memPtr = value; } static void LZ4_write32(void* memPtr, U32 value) { *(U32*)memPtr = value; } #elif defined(LZ4_FORCE_MEMORY_ACCESS) && (LZ4_FORCE_MEMORY_ACCESS==1) /* __pack instructions are safer, but compiler specific, hence potentially problematic for some compilers */ /* currently only defined for gcc and icc */ typedef union { U16 u16; U32 u32; reg_t uArch; } __attribute__((packed)) unalign; static U16 LZ4_read16(const void* ptr) { return ((const unalign*)ptr)->u16; } static U32 LZ4_read32(const void* ptr) { return ((const unalign*)ptr)->u32; } static reg_t LZ4_read_ARCH(const void* ptr) { return ((const unalign*)ptr)->uArch; } static void LZ4_write16(void* memPtr, U16 value) { ((unalign*)memPtr)->u16 = value; } static void LZ4_write32(void* memPtr, U32 value) { ((unalign*)memPtr)->u32 = value; } #else /* safe and portable access using memcpy() */ static U16 LZ4_read16(const void* memPtr) { U16 val; memcpy(&val, memPtr, sizeof(val)); return val; } static U32 LZ4_read32(const void* memPtr) { U32 val; memcpy(&val, memPtr, sizeof(val)); return val; } static reg_t LZ4_read_ARCH(const void* memPtr) { reg_t val; memcpy(&val, memPtr, sizeof(val)); return val; } static void LZ4_write16(void* memPtr, U16 value) { memcpy(memPtr, &value, sizeof(value)); } static void LZ4_write32(void* memPtr, U32 value) { memcpy(memPtr, &value, sizeof(value)); } #endif /* LZ4_FORCE_MEMORY_ACCESS */ static U16 LZ4_readLE16(const void* memPtr) { if (LZ4_isLittleEndian()) { return LZ4_read16(memPtr); } else { const BYTE* p = (const BYTE*)memPtr; return (U16)((U16)p[0] + (p[1]<<8)); } } static void LZ4_writeLE16(void* memPtr, U16 value) { if (LZ4_isLittleEndian()) { LZ4_write16(memPtr, value); } else { BYTE* p = (BYTE*)memPtr; p[0] = (BYTE) value; p[1] = (BYTE)(value>>8); } } /* customized variant of memcpy, which can overwrite up to 8 bytes beyond dstEnd */ LZ4_FORCE_O2_INLINE_GCC_PPC64LE void LZ4_wildCopy8(void* dstPtr, const void* srcPtr, void* dstEnd) { BYTE* d = (BYTE*)dstPtr; const BYTE* s = (const BYTE*)srcPtr; BYTE* const e = (BYTE*)dstEnd; do 
{ memcpy(d,s,8); d+=8; s+=8; } while (d= 16. */ LZ4_FORCE_O2_INLINE_GCC_PPC64LE void LZ4_wildCopy32(void* dstPtr, const void* srcPtr, void* dstEnd) { BYTE* d = (BYTE*)dstPtr; const BYTE* s = (const BYTE*)srcPtr; BYTE* const e = (BYTE*)dstEnd; do { memcpy(d,s,16); memcpy(d+16,s+16,16); d+=32; s+=32; } while (d= dstPtr + MINMATCH * - there is at least 8 bytes available to write after dstEnd */ LZ4_FORCE_O2_INLINE_GCC_PPC64LE void LZ4_memcpy_using_offset(BYTE* dstPtr, const BYTE* srcPtr, BYTE* dstEnd, const size_t offset) { BYTE v[8]; assert(dstEnd >= dstPtr + MINMATCH); LZ4_write32(dstPtr, 0); /* silence an msan warning when offset==0 */ switch(offset) { case 1: memset(v, *srcPtr, 8); break; case 2: memcpy(v, srcPtr, 2); memcpy(&v[2], srcPtr, 2); memcpy(&v[4], &v[0], 4); break; case 4: memcpy(v, srcPtr, 4); memcpy(&v[4], srcPtr, 4); break; default: LZ4_memcpy_using_offset_base(dstPtr, srcPtr, dstEnd, offset); return; } memcpy(dstPtr, v, 8); dstPtr += 8; while (dstPtr < dstEnd) { memcpy(dstPtr, v, 8); dstPtr += 8; } } #endif /*-************************************ * Common functions **************************************/ static unsigned LZ4_NbCommonBytes (reg_t val) { if (LZ4_isLittleEndian()) { if (sizeof(val)==8) { # if defined(_MSC_VER) && defined(_WIN64) && !defined(LZ4_FORCE_SW_BITCOUNT) unsigned long r = 0; _BitScanForward64( &r, (U64)val ); return (int)(r>>3); # elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT) return (unsigned)__builtin_ctzll((U64)val) >> 3; # else static const int DeBruijnBytePos[64] = { 0, 0, 0, 0, 0, 1, 1, 2, 0, 3, 1, 3, 1, 4, 2, 7, 0, 2, 3, 6, 1, 5, 3, 5, 1, 3, 4, 4, 2, 5, 6, 7, 7, 0, 1, 2, 3, 3, 4, 6, 2, 6, 5, 5, 3, 4, 5, 6, 7, 1, 2, 4, 6, 4, 4, 5, 7, 2, 6, 5, 7, 6, 7, 7 }; return DeBruijnBytePos[((U64)((val & -(long long)val) * 0x0218A392CDABBD3FULL)) >> 58]; # endif } else /* 32 bits */ { # if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT) unsigned long r; _BitScanForward( &r, (U32)val ); return (int)(r>>3); # elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT) return (unsigned)__builtin_ctz((U32)val) >> 3; # else static const int DeBruijnBytePos[32] = { 0, 0, 3, 0, 3, 1, 3, 0, 3, 2, 2, 1, 3, 2, 0, 1, 3, 3, 1, 2, 2, 2, 2, 0, 3, 1, 2, 0, 1, 0, 1, 1 }; return DeBruijnBytePos[((U32)((val & -(S32)val) * 0x077CB531U)) >> 27]; # endif } } else /* Big Endian CPU */ { if (sizeof(val)==8) { /* 64-bits */ # if defined(_MSC_VER) && defined(_WIN64) && !defined(LZ4_FORCE_SW_BITCOUNT) unsigned long r = 0; _BitScanReverse64( &r, val ); return (unsigned)(r>>3); # elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT) return (unsigned)__builtin_clzll((U64)val) >> 3; # else static const U32 by32 = sizeof(val)*4; /* 32 on 64 bits (goal), 16 on 32 bits. Just to avoid some static analyzer complaining about shift by 32 on 32-bits target. Note that this code path is never triggered in 32-bits mode. 
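            An illustrative example (not upstream wording): the value returned by
            LZ4_NbCommonBytes() is the number of leading bytes of the XOR'ed words
            that are zero, i.e. how many bytes still match. On a little-endian
            target, comparing "ABCDxxxx" against "ABCZxxxx" gives a diff whose three
            lowest bytes are zero, so the function returns 3 and LZ4_count()
            advances the match by 3 bytes.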
*/ unsigned r; if (!(val>>by32)) { r=4; } else { r=0; val>>=by32; } if (!(val>>16)) { r+=2; val>>=8; } else { val>>=24; } r += (!val); return r; # endif } else /* 32 bits */ { # if defined(_MSC_VER) && !defined(LZ4_FORCE_SW_BITCOUNT) unsigned long r = 0; _BitScanReverse( &r, (unsigned long)val ); return (unsigned)(r>>3); # elif (defined(__clang__) || (defined(__GNUC__) && (__GNUC__>=3))) && !defined(LZ4_FORCE_SW_BITCOUNT) return (unsigned)__builtin_clz((U32)val) >> 3; # else unsigned r; if (!(val>>16)) { r=2; val>>=8; } else { r=0; val>>=24; } r += (!val); return r; # endif } } } #define STEPSIZE sizeof(reg_t) LZ4_FORCE_INLINE unsigned LZ4_count(const BYTE* pIn, const BYTE* pMatch, const BYTE* pInLimit) { const BYTE* const pStart = pIn; if (likely(pIn < pInLimit-(STEPSIZE-1))) { reg_t const diff = LZ4_read_ARCH(pMatch) ^ LZ4_read_ARCH(pIn); if (!diff) { pIn+=STEPSIZE; pMatch+=STEPSIZE; } else { return LZ4_NbCommonBytes(diff); } } while (likely(pIn < pInLimit-(STEPSIZE-1))) { reg_t const diff = LZ4_read_ARCH(pMatch) ^ LZ4_read_ARCH(pIn); if (!diff) { pIn+=STEPSIZE; pMatch+=STEPSIZE; continue; } pIn += LZ4_NbCommonBytes(diff); return (unsigned)(pIn - pStart); } if ((STEPSIZE==8) && (pIn<(pInLimit-3)) && (LZ4_read32(pMatch) == LZ4_read32(pIn))) { pIn+=4; pMatch+=4; } if ((pIn<(pInLimit-1)) && (LZ4_read16(pMatch) == LZ4_read16(pIn))) { pIn+=2; pMatch+=2; } if ((pIn compression run slower on incompressible data */ /*-************************************ * Local Structures and types **************************************/ typedef enum { clearedTable = 0, byPtr, byU32, byU16 } tableType_t; /** * This enum distinguishes several different modes of accessing previous * content in the stream. * * - noDict : There is no preceding content. * - withPrefix64k : Table entries up to ctx->dictSize before the current blob * blob being compressed are valid and refer to the preceding * content (of length ctx->dictSize), which is available * contiguously preceding in memory the content currently * being compressed. * - usingExtDict : Like withPrefix64k, but the preceding content is somewhere * else in memory, starting at ctx->dictionary with length * ctx->dictSize. * - usingDictCtx : Like usingExtDict, but everything concerning the preceding * content is in a separate context, pointed to by * ctx->dictCtx. ctx->dictionary, ctx->dictSize, and table * entries in the current context that refer to positions * preceding the beginning of the current compression are * ignored. Instead, ctx->dictCtx->dictionary and ctx->dictCtx * ->dictSize describe the location and size of the preceding * content, and matches are found by looking in the ctx * ->dictCtx->hashTable. 
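 * An illustrative memory layout (buffer names are arbitrary, not upstream wording):
 *   withPrefix64k : one contiguous buffer, history immediately before the input
 *       [ ...history, ctx->dictSize bytes... ][ ...block being compressed... ]
 *                                             ^ source
 *   usingExtDict  : the dictionary lives in an unrelated buffer
 *       [ ...ctx->dictionary, ctx->dictSize bytes... ]     [ ...block being compressed... ]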
*/ typedef enum { noDict = 0, withPrefix64k, usingExtDict, usingDictCtx } dict_directive; typedef enum { noDictIssue = 0, dictSmall } dictIssue_directive; /*-************************************ * Local Utils **************************************/ int LZ4_versionNumber (void) { return LZ4_VERSION_NUMBER; } const char* LZ4_versionString(void) { return LZ4_VERSION_STRING; } int LZ4_compressBound(int isize) { return LZ4_COMPRESSBOUND(isize); } int LZ4_sizeofState() { return LZ4_STREAMSIZE; } /*-************************************ * Internal Definitions used in Tests **************************************/ #if defined (__cplusplus) extern "C" { #endif int LZ4_compress_forceExtDict (LZ4_stream_t* LZ4_dict, const char* source, char* dest, int srcSize); int LZ4_decompress_safe_forceExtDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const void* dictStart, size_t dictSize); #if defined (__cplusplus) } #endif /*-****************************** * Compression functions ********************************/ static U32 LZ4_hash4(U32 sequence, tableType_t const tableType) { if (tableType == byU16) return ((sequence * 2654435761U) >> ((MINMATCH*8)-(LZ4_HASHLOG+1))); else return ((sequence * 2654435761U) >> ((MINMATCH*8)-LZ4_HASHLOG)); } static U32 LZ4_hash5(U64 sequence, tableType_t const tableType) { const U32 hashLog = (tableType == byU16) ? LZ4_HASHLOG+1 : LZ4_HASHLOG; if (LZ4_isLittleEndian()) { const U64 prime5bytes = 889523592379ULL; return (U32)(((sequence << 24) * prime5bytes) >> (64 - hashLog)); } else { const U64 prime8bytes = 11400714785074694791ULL; return (U32)(((sequence >> 24) * prime8bytes) >> (64 - hashLog)); } } LZ4_FORCE_INLINE U32 LZ4_hashPosition(const void* const p, tableType_t const tableType) { if ((sizeof(reg_t)==8) && (tableType != byU16)) return LZ4_hash5(LZ4_read_ARCH(p), tableType); return LZ4_hash4(LZ4_read32(p), tableType); } static void LZ4_clearHash(U32 h, void* tableBase, tableType_t const tableType) { switch (tableType) { default: /* fallthrough */ case clearedTable: { /* illegal! */ assert(0); return; } case byPtr: { const BYTE** hashTable = (const BYTE**)tableBase; hashTable[h] = NULL; return; } case byU32: { U32* hashTable = (U32*) tableBase; hashTable[h] = 0; return; } case byU16: { U16* hashTable = (U16*) tableBase; hashTable[h] = 0; return; } } } static void LZ4_putIndexOnHash(U32 idx, U32 h, void* tableBase, tableType_t const tableType) { switch (tableType) { default: /* fallthrough */ case clearedTable: /* fallthrough */ case byPtr: { /* illegal! */ assert(0); return; } case byU32: { U32* hashTable = (U32*) tableBase; hashTable[h] = idx; return; } case byU16: { U16* hashTable = (U16*) tableBase; assert(idx < 65536); hashTable[h] = (U16)idx; return; } } } static void LZ4_putPositionOnHash(const BYTE* p, U32 h, void* tableBase, tableType_t const tableType, const BYTE* srcBase) { switch (tableType) { case clearedTable: { /* illegal! */ assert(0); return; } case byPtr: { const BYTE** hashTable = (const BYTE**)tableBase; hashTable[h] = p; return; } case byU32: { U32* hashTable = (U32*) tableBase; hashTable[h] = (U32)(p-srcBase); return; } case byU16: { U16* hashTable = (U16*) tableBase; hashTable[h] = (U16)(p-srcBase); return; } } } LZ4_FORCE_INLINE void LZ4_putPosition(const BYTE* p, void* tableBase, tableType_t tableType, const BYTE* srcBase) { U32 const h = LZ4_hashPosition(p, tableType); LZ4_putPositionOnHash(p, h, tableBase, tableType, srcBase); } /* LZ4_getIndexOnHash() : * Index of match position registered in hash table. 
* hash position must be calculated by using base+index, or dictBase+index. * Assumption 1 : only valid if tableType == byU32 or byU16. * Assumption 2 : h is presumed valid (within limits of hash table) */ static U32 LZ4_getIndexOnHash(U32 h, const void* tableBase, tableType_t tableType) { LZ4_STATIC_ASSERT(LZ4_MEMORY_USAGE > 2); if (tableType == byU32) { const U32* const hashTable = (const U32*) tableBase; assert(h < (1U << (LZ4_MEMORY_USAGE-2))); return hashTable[h]; } if (tableType == byU16) { const U16* const hashTable = (const U16*) tableBase; assert(h < (1U << (LZ4_MEMORY_USAGE-1))); return hashTable[h]; } assert(0); return 0; /* forbidden case */ } static const BYTE* LZ4_getPositionOnHash(U32 h, const void* tableBase, tableType_t tableType, const BYTE* srcBase) { if (tableType == byPtr) { const BYTE* const* hashTable = (const BYTE* const*) tableBase; return hashTable[h]; } if (tableType == byU32) { const U32* const hashTable = (const U32*) tableBase; return hashTable[h] + srcBase; } { const U16* const hashTable = (const U16*) tableBase; return hashTable[h] + srcBase; } /* default, to ensure a return */ } LZ4_FORCE_INLINE const BYTE* LZ4_getPosition(const BYTE* p, const void* tableBase, tableType_t tableType, const BYTE* srcBase) { U32 const h = LZ4_hashPosition(p, tableType); return LZ4_getPositionOnHash(h, tableBase, tableType, srcBase); } LZ4_FORCE_INLINE void LZ4_prepareTable(LZ4_stream_t_internal* const cctx, const int inputSize, const tableType_t tableType) { /* If compression failed during the previous step, then the context * is marked as dirty, therefore, it has to be fully reset. */ if (cctx->dirty) { DEBUGLOG(5, "LZ4_prepareTable: Full reset for %p", cctx); MEM_INIT(cctx, 0, sizeof(LZ4_stream_t_internal)); return; } /* If the table hasn't been used, it's guaranteed to be zeroed out, and is * therefore safe to use no matter what mode we're in. Otherwise, we figure * out if it's safe to leave as is or whether it needs to be reset. */ if (cctx->tableType != clearedTable) { assert(inputSize >= 0); if (cctx->tableType != tableType || ((tableType == byU16) && cctx->currentOffset + (unsigned)inputSize >= 0xFFFFU) || ((tableType == byU32) && cctx->currentOffset > 1 GB) || tableType == byPtr || inputSize >= 4 KB) { DEBUGLOG(4, "LZ4_prepareTable: Resetting table in %p", cctx); MEM_INIT(cctx->hashTable, 0, LZ4_HASHTABLESIZE); cctx->currentOffset = 0; cctx->tableType = clearedTable; } else { DEBUGLOG(4, "LZ4_prepareTable: Re-use hash table (no reset)"); } } /* Adding a gap, so all previous entries are > LZ4_DISTANCE_MAX back, is faster * than compressing without a gap. However, compressing with * currentOffset == 0 is faster still, so we preserve that case. 
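 * Illustrative numbers (not upstream wording): if a previous block left
 * currentOffset at 7000, adding 64 KB puts every new index at least 65536
 * above any stale table entry, which is farther back than LZ4_DISTANCE_MAX
 * (65535), so stale entries are discarded by the ordinary distance check.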
*/ if (cctx->currentOffset != 0 && tableType == byU32) { DEBUGLOG(5, "LZ4_prepareTable: adding 64KB to currentOffset"); cctx->currentOffset += 64 KB; } /* Finally, clear history */ cctx->dictCtx = NULL; cctx->dictionary = NULL; cctx->dictSize = 0; } /** LZ4_compress_generic() : inlined, to ensure branches are decided at compilation time */ LZ4_FORCE_INLINE int LZ4_compress_generic( LZ4_stream_t_internal* const cctx, const char* const source, char* const dest, const int inputSize, int *inputConsumed, /* only written when outputDirective == fillOutput */ const int maxOutputSize, const limitedOutput_directive outputDirective, const tableType_t tableType, const dict_directive dictDirective, const dictIssue_directive dictIssue, const int acceleration) { int result; const BYTE* ip = (const BYTE*) source; U32 const startIndex = cctx->currentOffset; const BYTE* base = (const BYTE*) source - startIndex; const BYTE* lowLimit; const LZ4_stream_t_internal* dictCtx = (const LZ4_stream_t_internal*) cctx->dictCtx; const BYTE* const dictionary = dictDirective == usingDictCtx ? dictCtx->dictionary : cctx->dictionary; const U32 dictSize = dictDirective == usingDictCtx ? dictCtx->dictSize : cctx->dictSize; const U32 dictDelta = (dictDirective == usingDictCtx) ? startIndex - dictCtx->currentOffset : 0; /* make indexes in dictCtx comparable with index in current context */ int const maybe_extMem = (dictDirective == usingExtDict) || (dictDirective == usingDictCtx); U32 const prefixIdxLimit = startIndex - dictSize; /* used when dictDirective == dictSmall */ const BYTE* const dictEnd = dictionary + dictSize; const BYTE* anchor = (const BYTE*) source; const BYTE* const iend = ip + inputSize; const BYTE* const mflimitPlusOne = iend - MFLIMIT + 1; const BYTE* const matchlimit = iend - LASTLITERALS; /* the dictCtx currentOffset is indexed on the start of the dictionary, * while a dictionary in the current context precedes the currentOffset */ const BYTE* dictBase = (dictDirective == usingDictCtx) ? dictionary + dictSize - dictCtx->currentOffset : dictionary + dictSize - startIndex; BYTE* op = (BYTE*) dest; BYTE* const olimit = op + maxOutputSize; U32 offset = 0; U32 forwardH; DEBUGLOG(5, "LZ4_compress_generic: srcSize=%i, tableType=%u", inputSize, tableType); /* If init conditions are not met, we don't have to mark stream * as having dirty context, since no action was taken yet */ if (outputDirective == fillOutput && maxOutputSize < 1) { return 0; } /* Impossible to store anything */ if ((U32)inputSize > (U32)LZ4_MAX_INPUT_SIZE) { return 0; } /* Unsupported inputSize, too large (or negative) */ if ((tableType == byU16) && (inputSize>=LZ4_64Klimit)) { return 0; } /* Size too large (not within 64K limit) */ if (tableType==byPtr) assert(dictDirective==noDict); /* only supported use case with byPtr */ assert(acceleration >= 1); lowLimit = (const BYTE*)source - (dictDirective == withPrefix64k ? dictSize : 0); /* Update context state */ if (dictDirective == usingDictCtx) { /* Subsequent linked blocks can't use the dictionary. */ /* Instead, they use the block we just compressed. 
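 * For illustration, this mode is typically reached through LZ4_attach_dictionary()
 * (hypothetical names, error handling omitted):
 *     LZ4_stream_t* dictCtx = LZ4_createStream();
 *     LZ4_loadDict(dictCtx, dictBuf, dictLen);          prepared once, shared read-only
 *     LZ4_stream_t* work = LZ4_createStream();
 *     LZ4_attach_dictionary(work, dictCtx);
 *     LZ4_compress_fast_continue(work, src, dst, srcSize, dstCapacity, 1);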
*/ cctx->dictCtx = NULL; cctx->dictSize = (U32)inputSize; } else { cctx->dictSize += (U32)inputSize; } cctx->currentOffset += (U32)inputSize; cctx->tableType = (U16)tableType; if (inputSizehashTable, tableType, base); ip++; forwardH = LZ4_hashPosition(ip, tableType); /* Main Loop */ for ( ; ; ) { const BYTE* match; BYTE* token; const BYTE* filledIp; /* Find a match */ if (tableType == byPtr) { const BYTE* forwardIp = ip; int step = 1; int searchMatchNb = acceleration << LZ4_skipTrigger; do { U32 const h = forwardH; ip = forwardIp; forwardIp += step; step = (searchMatchNb++ >> LZ4_skipTrigger); if (unlikely(forwardIp > mflimitPlusOne)) goto _last_literals; assert(ip < mflimitPlusOne); match = LZ4_getPositionOnHash(h, cctx->hashTable, tableType, base); forwardH = LZ4_hashPosition(forwardIp, tableType); LZ4_putPositionOnHash(ip, h, cctx->hashTable, tableType, base); } while ( (match+LZ4_DISTANCE_MAX < ip) || (LZ4_read32(match) != LZ4_read32(ip)) ); } else { /* byU32, byU16 */ const BYTE* forwardIp = ip; int step = 1; int searchMatchNb = acceleration << LZ4_skipTrigger; do { U32 const h = forwardH; U32 const current = (U32)(forwardIp - base); U32 matchIndex = LZ4_getIndexOnHash(h, cctx->hashTable, tableType); assert(matchIndex <= current); assert(forwardIp - base < (ptrdiff_t)(2 GB - 1)); ip = forwardIp; forwardIp += step; step = (searchMatchNb++ >> LZ4_skipTrigger); if (unlikely(forwardIp > mflimitPlusOne)) goto _last_literals; assert(ip < mflimitPlusOne); if (dictDirective == usingDictCtx) { if (matchIndex < startIndex) { /* there was no match, try the dictionary */ assert(tableType == byU32); matchIndex = LZ4_getIndexOnHash(h, dictCtx->hashTable, byU32); match = dictBase + matchIndex; matchIndex += dictDelta; /* make dictCtx index comparable with current context */ lowLimit = dictionary; } else { match = base + matchIndex; lowLimit = (const BYTE*)source; } } else if (dictDirective==usingExtDict) { if (matchIndex < startIndex) { DEBUGLOG(7, "extDict candidate: matchIndex=%5u < startIndex=%5u", matchIndex, startIndex); assert(startIndex - matchIndex >= MINMATCH); match = dictBase + matchIndex; lowLimit = dictionary; } else { match = base + matchIndex; lowLimit = (const BYTE*)source; } } else { /* single continuous memory segment */ match = base + matchIndex; } forwardH = LZ4_hashPosition(forwardIp, tableType); LZ4_putIndexOnHash(current, h, cctx->hashTable, tableType); DEBUGLOG(7, "candidate at pos=%u (offset=%u \n", matchIndex, current - matchIndex); if ((dictIssue == dictSmall) && (matchIndex < prefixIdxLimit)) { continue; } /* match outside of valid area */ assert(matchIndex < current); if ( ((tableType != byU16) || (LZ4_DISTANCE_MAX < LZ4_DISTANCE_ABSOLUTE_MAX)) && (matchIndex+LZ4_DISTANCE_MAX < current)) { continue; } /* too far */ assert((current - matchIndex) <= LZ4_DISTANCE_MAX); /* match now expected within distance */ if (LZ4_read32(match) == LZ4_read32(ip)) { if (maybe_extMem) offset = current - matchIndex; break; /* match found */ } } while(1); } /* Catch up */ filledIp = ip; while (((ip>anchor) & (match > lowLimit)) && (unlikely(ip[-1]==match[-1]))) { ip--; match--; } /* Encode Literals */ { unsigned const litLength = (unsigned)(ip - anchor); token = op++; if ((outputDirective == limitedOutput) && /* Check output buffer overflow */ (unlikely(op + litLength + (2 + 1 + LASTLITERALS) + (litLength/255) > olimit)) ) { return 0; /* cannot compress within `dst` budget. 
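                          A caller can rule this case out by sizing dst with
                          LZ4_compressBound(); illustrative sketch (hypothetical
                          names, malloc checks omitted):
                              int bound = LZ4_compressBound(srcSize);      worst case is srcSize + srcSize/255 + 16
                              char* dst = (char*)malloc(bound);
                              int csize = LZ4_compress_default(src, dst, srcSize, bound);      csize > 0 expected here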
Stored indexes in hash table are nonetheless fine */ } if ((outputDirective == fillOutput) && (unlikely(op + (litLength+240)/255 /* litlen */ + litLength /* literals */ + 2 /* offset */ + 1 /* token */ + MFLIMIT - MINMATCH /* min last literals so last match is <= end - MFLIMIT */ > olimit))) { op--; goto _last_literals; } if (litLength >= RUN_MASK) { int len = (int)(litLength - RUN_MASK); *token = (RUN_MASK<= 255 ; len-=255) *op++ = 255; *op++ = (BYTE)len; } else *token = (BYTE)(litLength< olimit)) { /* the match was too close to the end, rewind and go to last literals */ op = token; goto _last_literals; } /* Encode Offset */ if (maybe_extMem) { /* static test */ DEBUGLOG(6, " with offset=%u (ext if > %i)", offset, (int)(ip - (const BYTE*)source)); assert(offset <= LZ4_DISTANCE_MAX && offset > 0); LZ4_writeLE16(op, (U16)offset); op+=2; } else { DEBUGLOG(6, " with offset=%u (same segment)", (U32)(ip - match)); assert(ip-match <= LZ4_DISTANCE_MAX); LZ4_writeLE16(op, (U16)(ip - match)); op+=2; } /* Encode MatchLength */ { unsigned matchCode; if ( (dictDirective==usingExtDict || dictDirective==usingDictCtx) && (lowLimit==dictionary) /* match within extDict */ ) { const BYTE* limit = ip + (dictEnd-match); assert(dictEnd > match); if (limit > matchlimit) limit = matchlimit; matchCode = LZ4_count(ip+MINMATCH, match+MINMATCH, limit); ip += (size_t)matchCode + MINMATCH; if (ip==limit) { unsigned const more = LZ4_count(limit, (const BYTE*)source, matchlimit); matchCode += more; ip += more; } DEBUGLOG(6, " with matchLength=%u starting in extDict", matchCode+MINMATCH); } else { matchCode = LZ4_count(ip+MINMATCH, match+MINMATCH, matchlimit); ip += (size_t)matchCode + MINMATCH; DEBUGLOG(6, " with matchLength=%u", matchCode+MINMATCH); } if ((outputDirective) && /* Check output buffer overflow */ (unlikely(op + (1 + LASTLITERALS) + (matchCode+240)/255 > olimit)) ) { if (outputDirective == fillOutput) { /* Match description too long : reduce it */ U32 newMatchCode = 15 /* in token */ - 1 /* to avoid needing a zero byte */ + ((U32)(olimit - op) - 1 - LASTLITERALS) * 255; ip -= matchCode - newMatchCode; assert(newMatchCode < matchCode); matchCode = newMatchCode; if (unlikely(ip <= filledIp)) { /* We have already filled up to filledIp so if ip ends up less than filledIp * we have positions in the hash table beyond the current position. This is * a problem if we reuse the hash table. So we have to remove these positions * from the hash table. */ const BYTE* ptr; DEBUGLOG(5, "Clearing %u positions", (U32)(filledIp - ip)); for (ptr = ip; ptr <= filledIp; ++ptr) { U32 const h = LZ4_hashPosition(ptr, tableType); LZ4_clearHash(h, cctx->hashTable, tableType); } } } else { assert(outputDirective == limitedOutput); return 0; /* cannot compress within `dst` budget. Stored indexes in hash table are nonetheless fine */ } } if (matchCode >= ML_MASK) { *token += ML_MASK; matchCode -= ML_MASK; LZ4_write32(op, 0xFFFFFFFF); while (matchCode >= 4*255) { op+=4; LZ4_write32(op, 0xFFFFFFFF); matchCode -= 4*255; } op += matchCode / 255; *op++ = (BYTE)(matchCode % 255); } else *token += (BYTE)(matchCode); } /* Ensure we have enough space for the last literals. 
*/ assert(!(outputDirective == fillOutput && op + 1 + LASTLITERALS > olimit)); anchor = ip; /* Test end of chunk */ if (ip >= mflimitPlusOne) break; /* Fill table */ LZ4_putPosition(ip-2, cctx->hashTable, tableType, base); /* Test next position */ if (tableType == byPtr) { match = LZ4_getPosition(ip, cctx->hashTable, tableType, base); LZ4_putPosition(ip, cctx->hashTable, tableType, base); if ( (match+LZ4_DISTANCE_MAX >= ip) && (LZ4_read32(match) == LZ4_read32(ip)) ) { token=op++; *token=0; goto _next_match; } } else { /* byU32, byU16 */ U32 const h = LZ4_hashPosition(ip, tableType); U32 const current = (U32)(ip-base); U32 matchIndex = LZ4_getIndexOnHash(h, cctx->hashTable, tableType); assert(matchIndex < current); if (dictDirective == usingDictCtx) { if (matchIndex < startIndex) { /* there was no match, try the dictionary */ matchIndex = LZ4_getIndexOnHash(h, dictCtx->hashTable, byU32); match = dictBase + matchIndex; lowLimit = dictionary; /* required for match length counter */ matchIndex += dictDelta; } else { match = base + matchIndex; lowLimit = (const BYTE*)source; /* required for match length counter */ } } else if (dictDirective==usingExtDict) { if (matchIndex < startIndex) { match = dictBase + matchIndex; lowLimit = dictionary; /* required for match length counter */ } else { match = base + matchIndex; lowLimit = (const BYTE*)source; /* required for match length counter */ } } else { /* single memory segment */ match = base + matchIndex; } LZ4_putIndexOnHash(current, h, cctx->hashTable, tableType); assert(matchIndex < current); if ( ((dictIssue==dictSmall) ? (matchIndex >= prefixIdxLimit) : 1) && (((tableType==byU16) && (LZ4_DISTANCE_MAX == LZ4_DISTANCE_ABSOLUTE_MAX)) ? 1 : (matchIndex+LZ4_DISTANCE_MAX >= current)) && (LZ4_read32(match) == LZ4_read32(ip)) ) { token=op++; *token=0; if (maybe_extMem) offset = current - matchIndex; DEBUGLOG(6, "seq.start:%i, literals=%u, match.start:%i", (int)(anchor-(const BYTE*)source), 0, (int)(ip-(const BYTE*)source)); goto _next_match; } } /* Prepare next loop */ forwardH = LZ4_hashPosition(++ip, tableType); } _last_literals: /* Encode Last Literals */ { size_t lastRun = (size_t)(iend - anchor); if ( (outputDirective) && /* Check output buffer overflow */ (op + lastRun + 1 + ((lastRun+255-RUN_MASK)/255) > olimit)) { if (outputDirective == fillOutput) { /* adapt lastRun to fill 'dst' */ assert(olimit >= op); lastRun = (size_t)(olimit-op) - 1; lastRun -= (lastRun+240)/255; } else { assert(outputDirective == limitedOutput); return 0; /* cannot compress within `dst` budget. Stored indexes in hash table are nonetheless fine */ } } if (lastRun >= RUN_MASK) { size_t accumulator = lastRun - RUN_MASK; *op++ = RUN_MASK << ML_BITS; for(; accumulator >= 255 ; accumulator-=255) *op++ = 255; *op++ = (BYTE) accumulator; } else { *op++ = (BYTE)(lastRun< 0); return result; } int LZ4_compress_fast_extState(void* state, const char* source, char* dest, int inputSize, int maxOutputSize, int acceleration) { LZ4_stream_t_internal* const ctx = & LZ4_initStream(state, sizeof(LZ4_stream_t)) -> internal_donotuse; assert(ctx != NULL); if (acceleration < 1) acceleration = ACCELERATION_DEFAULT; if (maxOutputSize >= LZ4_compressBound(inputSize)) { if (inputSize < LZ4_64Klimit) { return LZ4_compress_generic(ctx, source, dest, inputSize, NULL, 0, notLimited, byU16, noDict, noDictIssue, acceleration); } else { const tableType_t tableType = ((sizeof(void*)==4) && ((uptrval)source > LZ4_DISTANCE_MAX)) ? 
byPtr : byU32; return LZ4_compress_generic(ctx, source, dest, inputSize, NULL, 0, notLimited, tableType, noDict, noDictIssue, acceleration); } } else { if (inputSize < LZ4_64Klimit) { return LZ4_compress_generic(ctx, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, byU16, noDict, noDictIssue, acceleration); } else { const tableType_t tableType = ((sizeof(void*)==4) && ((uptrval)source > LZ4_DISTANCE_MAX)) ? byPtr : byU32; return LZ4_compress_generic(ctx, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, tableType, noDict, noDictIssue, acceleration); } } } /** * LZ4_compress_fast_extState_fastReset() : * A variant of LZ4_compress_fast_extState(). * * Using this variant avoids an expensive initialization step. It is only safe * to call if the state buffer is known to be correctly initialized already * (see comment in lz4.h on LZ4_resetStream_fast() for a definition of * "correctly initialized"). */ int LZ4_compress_fast_extState_fastReset(void* state, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration) { LZ4_stream_t_internal* ctx = &((LZ4_stream_t*)state)->internal_donotuse; if (acceleration < 1) acceleration = ACCELERATION_DEFAULT; if (dstCapacity >= LZ4_compressBound(srcSize)) { if (srcSize < LZ4_64Klimit) { const tableType_t tableType = byU16; LZ4_prepareTable(ctx, srcSize, tableType); if (ctx->currentOffset) { return LZ4_compress_generic(ctx, src, dst, srcSize, NULL, 0, notLimited, tableType, noDict, dictSmall, acceleration); } else { return LZ4_compress_generic(ctx, src, dst, srcSize, NULL, 0, notLimited, tableType, noDict, noDictIssue, acceleration); } } else { const tableType_t tableType = ((sizeof(void*)==4) && ((uptrval)src > LZ4_DISTANCE_MAX)) ? byPtr : byU32; LZ4_prepareTable(ctx, srcSize, tableType); return LZ4_compress_generic(ctx, src, dst, srcSize, NULL, 0, notLimited, tableType, noDict, noDictIssue, acceleration); } } else { if (srcSize < LZ4_64Klimit) { const tableType_t tableType = byU16; LZ4_prepareTable(ctx, srcSize, tableType); if (ctx->currentOffset) { return LZ4_compress_generic(ctx, src, dst, srcSize, NULL, dstCapacity, limitedOutput, tableType, noDict, dictSmall, acceleration); } else { return LZ4_compress_generic(ctx, src, dst, srcSize, NULL, dstCapacity, limitedOutput, tableType, noDict, noDictIssue, acceleration); } } else { const tableType_t tableType = ((sizeof(void*)==4) && ((uptrval)src > LZ4_DISTANCE_MAX)) ? 
byPtr : byU32; LZ4_prepareTable(ctx, srcSize, tableType); return LZ4_compress_generic(ctx, src, dst, srcSize, NULL, dstCapacity, limitedOutput, tableType, noDict, noDictIssue, acceleration); } } } int LZ4_compress_fast(const char* source, char* dest, int inputSize, int maxOutputSize, int acceleration) { int result; #if (LZ4_HEAPMODE) LZ4_stream_t* ctxPtr = ALLOC(sizeof(LZ4_stream_t)); /* malloc-calloc always properly aligned */ if (ctxPtr == NULL) return 0; #else LZ4_stream_t ctx; LZ4_stream_t* const ctxPtr = &ctx; #endif result = LZ4_compress_fast_extState(ctxPtr, source, dest, inputSize, maxOutputSize, acceleration); #if (LZ4_HEAPMODE) FREEMEM(ctxPtr); #endif return result; } int LZ4_compress_default(const char* src, char* dst, int srcSize, int maxOutputSize) { return LZ4_compress_fast(src, dst, srcSize, maxOutputSize, 1); } /* hidden debug function */ /* strangely enough, gcc generates faster code when this function is uncommented, even if unused */ int LZ4_compress_fast_force(const char* src, char* dst, int srcSize, int dstCapacity, int acceleration) { LZ4_stream_t ctx; LZ4_initStream(&ctx, sizeof(ctx)); if (srcSize < LZ4_64Klimit) { return LZ4_compress_generic(&ctx.internal_donotuse, src, dst, srcSize, NULL, dstCapacity, limitedOutput, byU16, noDict, noDictIssue, acceleration); } else { tableType_t const addrMode = (sizeof(void*) > 4) ? byU32 : byPtr; return LZ4_compress_generic(&ctx.internal_donotuse, src, dst, srcSize, NULL, dstCapacity, limitedOutput, addrMode, noDict, noDictIssue, acceleration); } } /* Note!: This function leaves the stream in an unclean/broken state! * It is not safe to subsequently use the same state with a _fastReset() or * _continue() call without resetting it. */ static int LZ4_compress_destSize_extState (LZ4_stream_t* state, const char* src, char* dst, int* srcSizePtr, int targetDstSize) { void* const s = LZ4_initStream(state, sizeof (*state)); assert(s != NULL); (void)s; if (targetDstSize >= LZ4_compressBound(*srcSizePtr)) { /* compression success is guaranteed */ return LZ4_compress_fast_extState(state, src, dst, *srcSizePtr, targetDstSize, 1); } else { if (*srcSizePtr < LZ4_64Klimit) { return LZ4_compress_generic(&state->internal_donotuse, src, dst, *srcSizePtr, srcSizePtr, targetDstSize, fillOutput, byU16, noDict, noDictIssue, 1); } else { tableType_t const addrMode = ((sizeof(void*)==4) && ((uptrval)src > LZ4_DISTANCE_MAX)) ? 
byPtr : byU32; return LZ4_compress_generic(&state->internal_donotuse, src, dst, *srcSizePtr, srcSizePtr, targetDstSize, fillOutput, addrMode, noDict, noDictIssue, 1); } } } int LZ4_compress_destSize(const char* src, char* dst, int* srcSizePtr, int targetDstSize) { #if (LZ4_HEAPMODE) LZ4_stream_t* ctx = (LZ4_stream_t*)ALLOC(sizeof(LZ4_stream_t)); /* malloc-calloc always properly aligned */ if (ctx == NULL) return 0; #else LZ4_stream_t ctxBody; LZ4_stream_t* ctx = &ctxBody; #endif int result = LZ4_compress_destSize_extState(ctx, src, dst, srcSizePtr, targetDstSize); #if (LZ4_HEAPMODE) FREEMEM(ctx); #endif return result; } /*-****************************** * Streaming functions ********************************/ LZ4_stream_t* LZ4_createStream(void) { LZ4_stream_t* const lz4s = (LZ4_stream_t*)ALLOC(sizeof(LZ4_stream_t)); LZ4_STATIC_ASSERT(LZ4_STREAMSIZE >= sizeof(LZ4_stream_t_internal)); /* A compilation error here means LZ4_STREAMSIZE is not large enough */ DEBUGLOG(4, "LZ4_createStream %p", lz4s); if (lz4s == NULL) return NULL; LZ4_initStream(lz4s, sizeof(*lz4s)); return lz4s; } #ifndef _MSC_VER /* for some reason, Visual fails the aligment test on 32-bit x86 : it reports an aligment of 8-bytes, while actually aligning LZ4_stream_t on 4 bytes. */ static size_t LZ4_stream_t_alignment(void) { struct { char c; LZ4_stream_t t; } t_a; return sizeof(t_a) - sizeof(t_a.t); } #endif LZ4_stream_t* LZ4_initStream (void* buffer, size_t size) { DEBUGLOG(5, "LZ4_initStream"); if (buffer == NULL) { return NULL; } if (size < sizeof(LZ4_stream_t)) { return NULL; } #ifndef _MSC_VER /* for some reason, Visual fails the aligment test on 32-bit x86 : it reports an aligment of 8-bytes, while actually aligning LZ4_stream_t on 4 bytes. */ if (((size_t)buffer) & (LZ4_stream_t_alignment() - 1)) { return NULL; } /* alignment check */ #endif MEM_INIT(buffer, 0, sizeof(LZ4_stream_t)); return (LZ4_stream_t*)buffer; } /* resetStream is now deprecated, * prefer initStream() which is more general */ void LZ4_resetStream (LZ4_stream_t* LZ4_stream) { DEBUGLOG(5, "LZ4_resetStream (ctx:%p)", LZ4_stream); MEM_INIT(LZ4_stream, 0, sizeof(LZ4_stream_t)); } void LZ4_resetStream_fast(LZ4_stream_t* ctx) { LZ4_prepareTable(&(ctx->internal_donotuse), 0, byU32); } int LZ4_freeStream (LZ4_stream_t* LZ4_stream) { if (!LZ4_stream) return 0; /* support free on NULL */ DEBUGLOG(5, "LZ4_freeStream %p", LZ4_stream); FREEMEM(LZ4_stream); return (0); } #define HASH_UNIT sizeof(reg_t) int LZ4_loadDict (LZ4_stream_t* LZ4_dict, const char* dictionary, int dictSize) { LZ4_stream_t_internal* dict = &LZ4_dict->internal_donotuse; const tableType_t tableType = byU32; const BYTE* p = (const BYTE*)dictionary; const BYTE* const dictEnd = p + dictSize; const BYTE* base; DEBUGLOG(4, "LZ4_loadDict (%i bytes from %p into %p)", dictSize, dictionary, LZ4_dict); /* It's necessary to reset the context, * and not just continue it with prepareTable() * to avoid any risk of generating overflowing matchIndex * when compressing using this dictionary */ LZ4_resetStream(LZ4_dict); /* We always increment the offset by 64 KB, since, if the dict is longer, * we truncate it to the last 64k, and if it's shorter, we still want to * advance by a whole window length so we can provide the guarantee that * there are only valid offsets in the window, which allows an optimization * in LZ4_compress_fast_continue() where it uses noDictIssue even when the * dictionary isn't a full 64k. 
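 * An illustrative use of this entry point (hypothetical names, error handling omitted):
 *     LZ4_stream_t* s = LZ4_createStream();
 *     LZ4_loadDict(s, dictBuf, dictLen);                only the last 64 KB are retained
 *     int n = LZ4_compress_fast_continue(s, src, dst, srcSize,
 *                                        LZ4_compressBound(srcSize), 1);
 *     LZ4_freeStream(s);
 * Decoding such a block needs the same dictionary, e.g. via LZ4_decompress_safe_usingDict().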
*/ dict->currentOffset += 64 KB; if (dictSize < (int)HASH_UNIT) { return 0; } if ((dictEnd - p) > 64 KB) p = dictEnd - 64 KB; base = dictEnd - dict->currentOffset; dict->dictionary = p; dict->dictSize = (U32)(dictEnd - p); dict->tableType = tableType; while (p <= dictEnd-HASH_UNIT) { LZ4_putPosition(p, dict->hashTable, tableType, base); p+=3; } return (int)dict->dictSize; } void LZ4_attach_dictionary(LZ4_stream_t* workingStream, const LZ4_stream_t* dictionaryStream) { const LZ4_stream_t_internal* dictCtx = dictionaryStream == NULL ? NULL : &(dictionaryStream->internal_donotuse); DEBUGLOG(4, "LZ4_attach_dictionary (%p, %p, size %u)", workingStream, dictionaryStream, dictCtx != NULL ? dictCtx->dictSize : 0); /* Calling LZ4_resetStream_fast() here makes sure that changes will not be * erased by subsequent calls to LZ4_resetStream_fast() in case stream was * marked as having dirty context, e.g. requiring full reset. */ LZ4_resetStream_fast(workingStream); if (dictCtx != NULL) { /* If the current offset is zero, we will never look in the * external dictionary context, since there is no value a table * entry can take that indicate a miss. In that case, we need * to bump the offset to something non-zero. */ if (workingStream->internal_donotuse.currentOffset == 0) { workingStream->internal_donotuse.currentOffset = 64 KB; } /* Don't actually attach an empty dictionary. */ if (dictCtx->dictSize == 0) { dictCtx = NULL; } } workingStream->internal_donotuse.dictCtx = dictCtx; } static void LZ4_renormDictT(LZ4_stream_t_internal* LZ4_dict, int nextSize) { assert(nextSize >= 0); if (LZ4_dict->currentOffset + (unsigned)nextSize > 0x80000000) { /* potential ptrdiff_t overflow (32-bits mode) */ /* rescale hash table */ U32 const delta = LZ4_dict->currentOffset - 64 KB; const BYTE* dictEnd = LZ4_dict->dictionary + LZ4_dict->dictSize; int i; DEBUGLOG(4, "LZ4_renormDictT"); for (i=0; ihashTable[i] < delta) LZ4_dict->hashTable[i]=0; else LZ4_dict->hashTable[i] -= delta; } LZ4_dict->currentOffset = 64 KB; if (LZ4_dict->dictSize > 64 KB) LZ4_dict->dictSize = 64 KB; LZ4_dict->dictionary = dictEnd - LZ4_dict->dictSize; } } int LZ4_compress_fast_continue (LZ4_stream_t* LZ4_stream, const char* source, char* dest, int inputSize, int maxOutputSize, int acceleration) { const tableType_t tableType = byU32; LZ4_stream_t_internal* streamPtr = &LZ4_stream->internal_donotuse; const BYTE* dictEnd = streamPtr->dictionary + streamPtr->dictSize; DEBUGLOG(5, "LZ4_compress_fast_continue (inputSize=%i)", inputSize); if (streamPtr->dirty) { return 0; } /* Uninitialized structure detected */ LZ4_renormDictT(streamPtr, inputSize); /* avoid index overflow */ if (acceleration < 1) acceleration = ACCELERATION_DEFAULT; /* invalidate tiny dictionaries */ if ( (streamPtr->dictSize-1 < 4-1) /* intentional underflow */ && (dictEnd != (const BYTE*)source) ) { DEBUGLOG(5, "LZ4_compress_fast_continue: dictSize(%u) at addr:%p is too small", streamPtr->dictSize, streamPtr->dictionary); streamPtr->dictSize = 0; streamPtr->dictionary = (const BYTE*)source; dictEnd = (const BYTE*)source; } /* Check overlapping input/dictionary space */ { const BYTE* sourceEnd = (const BYTE*) source + inputSize; if ((sourceEnd > streamPtr->dictionary) && (sourceEnd < dictEnd)) { streamPtr->dictSize = (U32)(dictEnd - sourceEnd); if (streamPtr->dictSize > 64 KB) streamPtr->dictSize = 64 KB; if (streamPtr->dictSize < 4) streamPtr->dictSize = 0; streamPtr->dictionary = dictEnd - streamPtr->dictSize; } } /* prefix mode : source data follows dictionary */ if (dictEnd == 
(const BYTE*)source) { if ((streamPtr->dictSize < 64 KB) && (streamPtr->dictSize < streamPtr->currentOffset)) return LZ4_compress_generic(streamPtr, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, tableType, withPrefix64k, dictSmall, acceleration); else return LZ4_compress_generic(streamPtr, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, tableType, withPrefix64k, noDictIssue, acceleration); } /* external dictionary mode */ { int result; if (streamPtr->dictCtx) { /* We depend here on the fact that dictCtx'es (produced by * LZ4_loadDict) guarantee that their tables contain no references * to offsets between dictCtx->currentOffset - 64 KB and * dictCtx->currentOffset - dictCtx->dictSize. This makes it safe * to use noDictIssue even when the dict isn't a full 64 KB. */ if (inputSize > 4 KB) { /* For compressing large blobs, it is faster to pay the setup * cost to copy the dictionary's tables into the active context, * so that the compression loop is only looking into one table. */ memcpy(streamPtr, streamPtr->dictCtx, sizeof(LZ4_stream_t)); result = LZ4_compress_generic(streamPtr, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, tableType, usingExtDict, noDictIssue, acceleration); } else { result = LZ4_compress_generic(streamPtr, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, tableType, usingDictCtx, noDictIssue, acceleration); } } else { if ((streamPtr->dictSize < 64 KB) && (streamPtr->dictSize < streamPtr->currentOffset)) { result = LZ4_compress_generic(streamPtr, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, tableType, usingExtDict, dictSmall, acceleration); } else { result = LZ4_compress_generic(streamPtr, source, dest, inputSize, NULL, maxOutputSize, limitedOutput, tableType, usingExtDict, noDictIssue, acceleration); } } streamPtr->dictionary = (const BYTE*)source; streamPtr->dictSize = (U32)inputSize; return result; } } /* Hidden debug function, to force-test external dictionary mode */ int LZ4_compress_forceExtDict (LZ4_stream_t* LZ4_dict, const char* source, char* dest, int srcSize) { LZ4_stream_t_internal* streamPtr = &LZ4_dict->internal_donotuse; int result; LZ4_renormDictT(streamPtr, srcSize); if ((streamPtr->dictSize < 64 KB) && (streamPtr->dictSize < streamPtr->currentOffset)) { result = LZ4_compress_generic(streamPtr, source, dest, srcSize, NULL, 0, notLimited, byU32, usingExtDict, dictSmall, 1); } else { result = LZ4_compress_generic(streamPtr, source, dest, srcSize, NULL, 0, notLimited, byU32, usingExtDict, noDictIssue, 1); } streamPtr->dictionary = (const BYTE*)source; streamPtr->dictSize = (U32)srcSize; return result; } /*! LZ4_saveDict() : * If previously compressed data block is not guaranteed to remain available at its memory location, * save it into a safer place (char* safeBuffer). * Note : you don't need to call LZ4_loadDict() afterwards, * dictionary is immediately usable, you can therefore call LZ4_compress_fast_continue(). * Return : saved dictionary size in bytes (necessarily <= dictSize), or 0 if error. 
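 * An illustrative double-buffer loop (hypothetical names, read_chunk() is an
 * assumed helper, error handling omitted):
 *     char dictBuf[64 * 1024];
 *     while (read_chunk(chunk, &chunkSize)) {
 *         int n = LZ4_compress_fast_continue(stream, chunk, out, chunkSize, outCap, 1);
 *         LZ4_saveDict(stream, dictBuf, (int)sizeof(dictBuf));   keep history before chunk is overwritten
 *     }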
*/ int LZ4_saveDict (LZ4_stream_t* LZ4_dict, char* safeBuffer, int dictSize) { LZ4_stream_t_internal* const dict = &LZ4_dict->internal_donotuse; const BYTE* const previousDictEnd = dict->dictionary + dict->dictSize; if ((U32)dictSize > 64 KB) { dictSize = 64 KB; } /* useless to define a dictionary > 64 KB */ if ((U32)dictSize > dict->dictSize) { dictSize = (int)dict->dictSize; } memmove(safeBuffer, previousDictEnd - dictSize, dictSize); dict->dictionary = (const BYTE*)safeBuffer; dict->dictSize = (U32)dictSize; return dictSize; } /*-******************************* * Decompression functions ********************************/ typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive; typedef enum { decode_full_block = 0, partial_decode = 1 } earlyEnd_directive; #undef MIN #define MIN(a,b) ( (a) < (b) ? (a) : (b) ) /* Read the variable-length literal or match length. * * ip - pointer to use as input. * lencheck - end ip. Return an error if ip advances >= lencheck. * loop_check - check ip >= lencheck in body of loop. Returns loop_error if so. * initial_check - check ip >= lencheck before start of loop. Returns initial_error if so. * error (output) - error code. Should be set to 0 before call. */ typedef enum { loop_error = -2, initial_error = -1, ok = 0 } variable_length_error; LZ4_FORCE_INLINE unsigned read_variable_length(const BYTE**ip, const BYTE* lencheck, int loop_check, int initial_check, variable_length_error* error) { unsigned length = 0; unsigned s; if (initial_check && unlikely((*ip) >= lencheck)) { /* overflow detection */ *error = initial_error; return length; } do { s = **ip; (*ip)++; length += s; if (loop_check && unlikely((*ip) >= lencheck)) { /* overflow detection */ *error = loop_error; return length; } } while (s==255); return length; } /*! LZ4_decompress_generic() : * This generic decompression function covers all use cases. * It shall be instantiated several times, using different sets of directives. * Note that it is important for performance that this function really get inlined, * in order to remove useless branches during compilation optimization. */ LZ4_FORCE_INLINE int LZ4_decompress_generic( const char* const src, char* const dst, int srcSize, int outputSize, /* If endOnInput==endOnInputSize, this value is `dstCapacity` */ endCondition_directive endOnInput, /* endOnOutputSize, endOnInputSize */ earlyEnd_directive partialDecoding, /* full, partial */ dict_directive dict, /* noDict, withPrefix64k, usingExtDict */ const BYTE* const lowPrefix, /* always <= dst, == dst when no prefix */ const BYTE* const dictStart, /* only if dict==usingExtDict */ const size_t dictSize /* note : = 0 if noDict */ ) { if (src == NULL) { return -1; } { const BYTE* ip = (const BYTE*) src; const BYTE* const iend = ip + srcSize; BYTE* op = (BYTE*) dst; BYTE* const oend = op + outputSize; BYTE* cpy; const BYTE* const dictEnd = (dictStart == NULL) ? NULL : dictStart + dictSize; const int safeDecode = (endOnInput==endOnInputSize); const int checkOffset = ((safeDecode) && (dictSize < (int)(64 KB))); /* Set up the "end" pointers for the shortcut. */ const BYTE* const shortiend = iend - (endOnInput ? 14 : 8) /*maxLL*/ - 2 /*offset*/; const BYTE* const shortoend = oend - (endOnInput ? 
14 : 8) /*maxLL*/ - 18 /*maxML*/; const BYTE* match; size_t offset; unsigned token; size_t length; DEBUGLOG(5, "LZ4_decompress_generic (srcSize:%i, dstSize:%i)", srcSize, outputSize); /* Special cases */ assert(lowPrefix <= op); if ((endOnInput) && (unlikely(outputSize==0))) { /* Empty output buffer */ if (partialDecoding) return 0; return ((srcSize==1) && (*ip==0)) ? 0 : -1; } if ((!endOnInput) && (unlikely(outputSize==0))) { return (*ip==0 ? 1 : -1); } if ((endOnInput) && unlikely(srcSize==0)) { return -1; } /* Currently the fast loop shows a regression on qualcomm arm chips. */ #if LZ4_FAST_DEC_LOOP if ((oend - op) < FASTLOOP_SAFE_DISTANCE) { DEBUGLOG(6, "skip fast decode loop"); goto safe_decode; } /* Fast loop : decode sequences as long as output < iend-FASTLOOP_SAFE_DISTANCE */ while (1) { /* Main fastloop assertion: We can always wildcopy FASTLOOP_SAFE_DISTANCE */ assert(oend - op >= FASTLOOP_SAFE_DISTANCE); if (endOnInput) { assert(ip < iend); } token = *ip++; length = token >> ML_BITS; /* literal length */ assert(!endOnInput || ip <= iend); /* ip < iend before the increment */ /* decode literal length */ if (length == RUN_MASK) { variable_length_error error = ok; length += read_variable_length(&ip, iend-RUN_MASK, endOnInput, endOnInput, &error); if (error == initial_error) { goto _output_error; } if ((safeDecode) && unlikely((uptrval)(op)+length<(uptrval)(op))) { goto _output_error; } /* overflow detection */ if ((safeDecode) && unlikely((uptrval)(ip)+length<(uptrval)(ip))) { goto _output_error; } /* overflow detection */ /* copy literals */ cpy = op+length; LZ4_STATIC_ASSERT(MFLIMIT >= WILDCOPYLENGTH); if (endOnInput) { /* LZ4_decompress_safe() */ if ((cpy>oend-32) || (ip+length>iend-32)) { goto safe_literal_copy; } LZ4_wildCopy32(op, ip, cpy); } else { /* LZ4_decompress_fast() */ if (cpy>oend-8) { goto safe_literal_copy; } LZ4_wildCopy8(op, ip, cpy); /* LZ4_decompress_fast() cannot copy more than 8 bytes at a time : * it doesn't know input length, and only relies on end-of-block properties */ } ip += length; op = cpy; } else { cpy = op+length; if (endOnInput) { /* LZ4_decompress_safe() */ DEBUGLOG(7, "copy %u bytes in a 16-bytes stripe", (unsigned)length); /* We don't need to check oend, since we check it once for each loop below */ if (ip > iend-(16 + 1/*max lit + offset + nextToken*/)) { goto safe_literal_copy; } /* Literals can only be 14, but hope compilers optimize if we copy by a register size */ memcpy(op, ip, 16); } else { /* LZ4_decompress_fast() */ /* LZ4_decompress_fast() cannot copy more than 8 bytes at a time : * it doesn't know input length, and relies on end-of-block properties */ memcpy(op, ip, 8); if (length > 8) { memcpy(op+8, ip+8, 8); } } ip += length; op = cpy; } /* get offset */ offset = LZ4_readLE16(ip); ip+=2; match = op - offset; assert(match <= op); /* get matchlength */ length = token & ML_MASK; if (length == ML_MASK) { variable_length_error error = ok; if ((checkOffset) && (unlikely(match + dictSize < lowPrefix))) { goto _output_error; } /* Error : offset outside buffers */ length += read_variable_length(&ip, iend - LASTLITERALS + 1, endOnInput, 0, &error); if (error != ok) { goto _output_error; } if ((safeDecode) && unlikely((uptrval)(op)+length<(uptrval)op)) { goto _output_error; } /* overflow detection */ length += MINMATCH; if (op + length >= oend - FASTLOOP_SAFE_DISTANCE) { goto safe_match_copy; } } else { length += MINMATCH; if (op + length >= oend - FASTLOOP_SAFE_DISTANCE) { goto safe_match_copy; } /* Fastpath check: Avoids a branch in 
LZ4_wildCopy32 if true */ if ((dict == withPrefix64k) || (match >= lowPrefix)) { if (offset >= 8) { assert(match >= lowPrefix); assert(match <= op); assert(op + 18 <= oend); memcpy(op, match, 8); memcpy(op+8, match+8, 8); memcpy(op+16, match+16, 2); op += length; continue; } } } if ((checkOffset) && (unlikely(match + dictSize < lowPrefix))) { goto _output_error; } /* Error : offset outside buffers */ /* match starting within external dictionary */ if ((dict==usingExtDict) && (match < lowPrefix)) { if (unlikely(op+length > oend-LASTLITERALS)) { if (partialDecoding) { length = MIN(length, (size_t)(oend-op)); /* reach end of buffer */ } else { goto _output_error; /* end-of-block condition violated */ } } if (length <= (size_t)(lowPrefix-match)) { /* match fits entirely within external dictionary : just copy */ memmove(op, dictEnd - (lowPrefix-match), length); op += length; } else { /* match stretches into both external dictionary and current block */ size_t const copySize = (size_t)(lowPrefix - match); size_t const restSize = length - copySize; memcpy(op, dictEnd - copySize, copySize); op += copySize; if (restSize > (size_t)(op - lowPrefix)) { /* overlap copy */ BYTE* const endOfMatch = op + restSize; const BYTE* copyFrom = lowPrefix; while (op < endOfMatch) { *op++ = *copyFrom++; } } else { memcpy(op, lowPrefix, restSize); op += restSize; } } continue; } /* copy match within block */ cpy = op + length; assert((op <= oend) && (oend-op >= 32)); if (unlikely(offset<16)) { LZ4_memcpy_using_offset(op, match, cpy, offset); } else { LZ4_wildCopy32(op, match, cpy); } op = cpy; /* wildcopy correction */ } safe_decode: #endif /* Main Loop : decode remaining sequences where output < FASTLOOP_SAFE_DISTANCE */ while (1) { token = *ip++; length = token >> ML_BITS; /* literal length */ assert(!endOnInput || ip <= iend); /* ip < iend before the increment */ /* A two-stage shortcut for the most common case: * 1) If the literal length is 0..14, and there is enough space, * enter the shortcut and copy 16 bytes on behalf of the literals * (in the fast mode, only 8 bytes can be safely copied this way). * 2) Further if the match length is 4..18, copy 18 bytes in a similar * manner; but we ensure that there's enough space in the output for * those 18 bytes earlier, upon entering the shortcut (in other words, * there is a combined check for both stages). */ if ( (endOnInput ? length != RUN_MASK : length <= 8) /* strictly "less than" on input, to re-enter the loop with at least one byte */ && likely((endOnInput ? ip < shortiend : 1) & (op <= shortoend)) ) { /* Copy the literals */ memcpy(op, ip, endOnInput ? 16 : 8); op += length; ip += length; /* The second stage: prepare for match copying, decode full info. * If it doesn't work out, the info won't be wasted. */ length = token & ML_MASK; /* match length */ offset = LZ4_readLE16(ip); ip += 2; match = op - offset; assert(match <= op); /* check overflow */ /* Do not deal with overlapping matches. */ if ( (length != ML_MASK) && (offset >= 8) && (dict==withPrefix64k || match >= lowPrefix) ) { /* Copy the match. */ memcpy(op + 0, match + 0, 8); memcpy(op + 8, match + 8, 8); memcpy(op +16, match +16, 2); op += length + MINMATCH; /* Both stages worked, load the next token. */ continue; } /* The second stage didn't work out, but the info is ready. * Propel it right to the point of match copying. 
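 * A worked example of the shortcut above (illustrative): a token byte of 0x52
 * encodes a literal length of 5 (high 4 bits) and a match-length code of 2,
 * i.e. a match of 2 + MINMATCH = 6 bytes; both fields are below their escape
 * values (RUN_MASK / ML_MASK == 15), so the shortcut applies whenever the
 * remaining input/output room checks also pass.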
*/ goto _copy_match; } /* decode literal length */ if (length == RUN_MASK) { variable_length_error error = ok; length += read_variable_length(&ip, iend-RUN_MASK, endOnInput, endOnInput, &error); if (error == initial_error) { goto _output_error; } if ((safeDecode) && unlikely((uptrval)(op)+length<(uptrval)(op))) { goto _output_error; } /* overflow detection */ if ((safeDecode) && unlikely((uptrval)(ip)+length<(uptrval)(ip))) { goto _output_error; } /* overflow detection */ } /* copy literals */ cpy = op+length; #if LZ4_FAST_DEC_LOOP safe_literal_copy: #endif LZ4_STATIC_ASSERT(MFLIMIT >= WILDCOPYLENGTH); if ( ((endOnInput) && ((cpy>oend-MFLIMIT) || (ip+length>iend-(2+1+LASTLITERALS))) ) || ((!endOnInput) && (cpy>oend-WILDCOPYLENGTH)) ) { /* We've either hit the input parsing restriction or the output parsing restriction. * If we've hit the input parsing condition then this must be the last sequence. * If we've hit the output parsing condition then we are either using partialDecoding * or we've hit the output parsing condition. */ if (partialDecoding) { /* Since we are partial decoding we may be in this block because of the output parsing * restriction, which is not valid since the output buffer is allowed to be undersized. */ assert(endOnInput); /* If we're in this block because of the input parsing condition, then we must be on the * last sequence (or invalid), so we must check that we exactly consume the input. */ if ((ip+length>iend-(2+1+LASTLITERALS)) && (ip+length != iend)) { goto _output_error; } assert(ip+length <= iend); /* We are finishing in the middle of a literals segment. * Break after the copy. */ if (cpy > oend) { cpy = oend; assert(op<=oend); length = (size_t)(oend-op); } assert(ip+length <= iend); } else { /* We must be on the last sequence because of the parsing limitations so check * that we exactly regenerate the original size (must be exact when !endOnInput). */ if ((!endOnInput) && (cpy != oend)) { goto _output_error; } /* We must be on the last sequence (or invalid) because of the parsing limitations * so check that we exactly consume the input and don't overrun the output buffer. */ if ((endOnInput) && ((ip+length != iend) || (cpy > oend))) { goto _output_error; } } memmove(op, ip, length); /* supports overlapping memory regions, which only matters for in-place decompression scenarios */ ip += length; op += length; /* Necessarily EOF when !partialDecoding. When partialDecoding * it is EOF if we've either filled the output buffer or hit * the input parsing restriction. 
*/ if (!partialDecoding || (cpy == oend) || (ip == iend)) { break; } } else { LZ4_wildCopy8(op, ip, cpy); /* may overwrite up to WILDCOPYLENGTH beyond cpy */ ip += length; op = cpy; } /* get offset */ offset = LZ4_readLE16(ip); ip+=2; match = op - offset; /* get matchlength */ length = token & ML_MASK; _copy_match: if (length == ML_MASK) { variable_length_error error = ok; length += read_variable_length(&ip, iend - LASTLITERALS + 1, endOnInput, 0, &error); if (error != ok) goto _output_error; if ((safeDecode) && unlikely((uptrval)(op)+length<(uptrval)op)) goto _output_error; /* overflow detection */ } length += MINMATCH; #if LZ4_FAST_DEC_LOOP safe_match_copy: #endif if ((checkOffset) && (unlikely(match + dictSize < lowPrefix))) goto _output_error; /* Error : offset outside buffers */ /* match starting within external dictionary */ if ((dict==usingExtDict) && (match < lowPrefix)) { if (unlikely(op+length > oend-LASTLITERALS)) { if (partialDecoding) length = MIN(length, (size_t)(oend-op)); else goto _output_error; /* doesn't respect parsing restriction */ } if (length <= (size_t)(lowPrefix-match)) { /* match fits entirely within external dictionary : just copy */ memmove(op, dictEnd - (lowPrefix-match), length); op += length; } else { /* match stretches into both external dictionary and current block */ size_t const copySize = (size_t)(lowPrefix - match); size_t const restSize = length - copySize; memcpy(op, dictEnd - copySize, copySize); op += copySize; if (restSize > (size_t)(op - lowPrefix)) { /* overlap copy */ BYTE* const endOfMatch = op + restSize; const BYTE* copyFrom = lowPrefix; while (op < endOfMatch) *op++ = *copyFrom++; } else { memcpy(op, lowPrefix, restSize); op += restSize; } } continue; } assert(match >= lowPrefix); /* copy match within block */ cpy = op + length; /* partialDecoding : may end anywhere within the block */ assert(op<=oend); if (partialDecoding && (cpy > oend-MATCH_SAFEGUARD_DISTANCE)) { size_t const mlen = MIN(length, (size_t)(oend-op)); const BYTE* const matchEnd = match + mlen; BYTE* const copyEnd = op + mlen; if (matchEnd > op) { /* overlap copy */ while (op < copyEnd) { *op++ = *match++; } } else { memcpy(op, match, mlen); } op = copyEnd; if (op == oend) { break; } continue; } if (unlikely(offset<8)) { LZ4_write32(op, 0); /* silence msan warning when offset==0 */ op[0] = match[0]; op[1] = match[1]; op[2] = match[2]; op[3] = match[3]; match += inc32table[offset]; memcpy(op+4, match, 4); match -= dec64table[offset]; } else { memcpy(op, match, 8); match += 8; } op += 8; if (unlikely(cpy > oend-MATCH_SAFEGUARD_DISTANCE)) { BYTE* const oCopyLimit = oend - (WILDCOPYLENGTH-1); if (cpy > oend-LASTLITERALS) { goto _output_error; } /* Error : last LASTLITERALS bytes must be literals (uncompressed) */ if (op < oCopyLimit) { LZ4_wildCopy8(op, match, oCopyLimit); match += oCopyLimit - op; op = oCopyLimit; } while (op < cpy) { *op++ = *match++; } } else { memcpy(op, match, 8); if (length > 16) { LZ4_wildCopy8(op+8, match+8, cpy); } } op = cpy; /* wildcopy correction */ } /* end of decoding */ if (endOnInput) { return (int) (((char*)op)-dst); /* Nb of output bytes decoded */ } else { return (int) (((const char*)ip)-src); /* Nb of input bytes read */ } /* Overflow error detected */ _output_error: return (int) (-(((const char*)ip)-src))-1; } } /*===== Instantiate the API decoding functions. 
=====*/ LZ4_FORCE_O2_GCC_PPC64LE int LZ4_decompress_safe(const char* source, char* dest, int compressedSize, int maxDecompressedSize) { return LZ4_decompress_generic(source, dest, compressedSize, maxDecompressedSize, endOnInputSize, decode_full_block, noDict, (BYTE*)dest, NULL, 0); } LZ4_FORCE_O2_GCC_PPC64LE int LZ4_decompress_safe_partial(const char* src, char* dst, int compressedSize, int targetOutputSize, int dstCapacity) { dstCapacity = MIN(targetOutputSize, dstCapacity); return LZ4_decompress_generic(src, dst, compressedSize, dstCapacity, endOnInputSize, partial_decode, noDict, (BYTE*)dst, NULL, 0); } LZ4_FORCE_O2_GCC_PPC64LE int LZ4_decompress_fast(const char* source, char* dest, int originalSize) { return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize, decode_full_block, withPrefix64k, (BYTE*)dest - 64 KB, NULL, 0); } /*===== Instantiate a few more decoding cases, used more than once. =====*/ LZ4_FORCE_O2_GCC_PPC64LE /* Exported, an obsolete API function. */ int LZ4_decompress_safe_withPrefix64k(const char* source, char* dest, int compressedSize, int maxOutputSize) { return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, endOnInputSize, decode_full_block, withPrefix64k, (BYTE*)dest - 64 KB, NULL, 0); } /* Another obsolete API function, paired with the previous one. */ int LZ4_decompress_fast_withPrefix64k(const char* source, char* dest, int originalSize) { /* LZ4_decompress_fast doesn't validate match offsets, * and thus serves well with any prefixed dictionary. */ return LZ4_decompress_fast(source, dest, originalSize); } LZ4_FORCE_O2_GCC_PPC64LE static int LZ4_decompress_safe_withSmallPrefix(const char* source, char* dest, int compressedSize, int maxOutputSize, size_t prefixSize) { return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, endOnInputSize, decode_full_block, noDict, (BYTE*)dest-prefixSize, NULL, 0); } LZ4_FORCE_O2_GCC_PPC64LE int LZ4_decompress_safe_forceExtDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const void* dictStart, size_t dictSize) { return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, endOnInputSize, decode_full_block, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize); } LZ4_FORCE_O2_GCC_PPC64LE static int LZ4_decompress_fast_extDict(const char* source, char* dest, int originalSize, const void* dictStart, size_t dictSize) { return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize, decode_full_block, usingExtDict, (BYTE*)dest, (const BYTE*)dictStart, dictSize); } /* The "double dictionary" mode, for use with e.g. ring buffers: the first part * of the dictionary is passed as prefix, and the second via dictStart + dictSize. * These routines are used only once, in LZ4_decompress_*_continue(). 
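/* Editor's illustrative sketch, not part of the upstream LZ4 sources: the decoder
 * instantiated above walks a stream of "sequences" -- a token byte whose high
 * nibble is the literal length and whose low nibble is the match length minus
 * MINMATCH (4), optional 255-valued extension bytes, the literals, then a 2-byte
 * little-endian match offset. The helper below only parses one sequence header;
 * it has no bounds checks and all names are made up. */
#include <stddef.h>
#include <stdint.h>

typedef struct { size_t litLength; size_t matchLength; unsigned offset; } lz4_seq_header;

static const uint8_t* read_seq_header(const uint8_t* p, lz4_seq_header* h)
{
    unsigned const token = *p++;
    h->litLength   = token >> 4;           /* high nibble: literal run length      */
    h->matchLength = (token & 0x0F) + 4;   /* low nibble + MINMATCH                */
    if ((token >> 4) == 15) {              /* 15 means "read extension bytes"      */
        uint8_t b;
        do { b = *p++; h->litLength += b; } while (b == 255);
    }
    p += h->litLength;                     /* skip the literals themselves         */
    h->offset = (unsigned)p[0] | ((unsigned)p[1] << 8);   /* 2-byte LE offset      */
    p += 2;
    if ((token & 0x0F) == 15) {            /* match length extension, same scheme  */
        uint8_t b;
        do { b = *p++; h->matchLength += b; } while (b == 255);
    }
    return p;   /* next token; note the final sequence of a block carries literals only */
}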
*/ LZ4_FORCE_INLINE int LZ4_decompress_safe_doubleDict(const char* source, char* dest, int compressedSize, int maxOutputSize, size_t prefixSize, const void* dictStart, size_t dictSize) { return LZ4_decompress_generic(source, dest, compressedSize, maxOutputSize, endOnInputSize, decode_full_block, usingExtDict, (BYTE*)dest-prefixSize, (const BYTE*)dictStart, dictSize); } LZ4_FORCE_INLINE int LZ4_decompress_fast_doubleDict(const char* source, char* dest, int originalSize, size_t prefixSize, const void* dictStart, size_t dictSize) { return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize, decode_full_block, usingExtDict, (BYTE*)dest-prefixSize, (const BYTE*)dictStart, dictSize); } /*===== streaming decompression functions =====*/ LZ4_streamDecode_t* LZ4_createStreamDecode(void) { LZ4_streamDecode_t* lz4s = (LZ4_streamDecode_t*) ALLOC_AND_ZERO(sizeof(LZ4_streamDecode_t)); LZ4_STATIC_ASSERT(LZ4_STREAMDECODESIZE >= sizeof(LZ4_streamDecode_t_internal)); /* A compilation error here means LZ4_STREAMDECODESIZE is not large enough */ return lz4s; } int LZ4_freeStreamDecode (LZ4_streamDecode_t* LZ4_stream) { if (LZ4_stream == NULL) { return 0; } /* support free on NULL */ FREEMEM(LZ4_stream); return 0; } /*! LZ4_setStreamDecode() : * Use this function to instruct where to find the dictionary. * This function is not necessary if previous data is still available where it was decoded. * Loading a size of 0 is allowed (same effect as no dictionary). * @return : 1 if OK, 0 if error */ int LZ4_setStreamDecode (LZ4_streamDecode_t* LZ4_streamDecode, const char* dictionary, int dictSize) { LZ4_streamDecode_t_internal* lz4sd = &LZ4_streamDecode->internal_donotuse; lz4sd->prefixSize = (size_t) dictSize; lz4sd->prefixEnd = (const BYTE*) dictionary + dictSize; lz4sd->externalDict = NULL; lz4sd->extDictSize = 0; return 1; } /*! LZ4_decoderRingBufferSize() : * when setting a ring buffer for streaming decompression (optional scenario), * provides the minimum size of this ring buffer * to be compatible with any source respecting maxBlockSize condition. * Note : in a ring buffer scenario, * blocks are presumed decompressed next to each other. * When not enough space remains for next block (remainingSize < maxBlockSize), * decoding resumes from beginning of ring buffer. * @return : minimum ring buffer size, * or 0 if there is an error (invalid maxBlockSize). */ int LZ4_decoderRingBufferSize(int maxBlockSize) { if (maxBlockSize < 0) return 0; if (maxBlockSize > LZ4_MAX_INPUT_SIZE) return 0; if (maxBlockSize < 16) maxBlockSize = 16; return LZ4_DECODER_RING_BUFFER_SIZE(maxBlockSize); } /* *_continue() : These decoding functions allow decompression of multiple blocks in "streaming" mode. Previously decoded blocks must still be available at the memory position where they were decoded. If it's not possible, save the relevant part of decoded data into a safe buffer, and indicate where it stands using LZ4_setStreamDecode() */ LZ4_FORCE_O2_GCC_PPC64LE int LZ4_decompress_safe_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int compressedSize, int maxOutputSize) { LZ4_streamDecode_t_internal* lz4sd = &LZ4_streamDecode->internal_donotuse; int result; if (lz4sd->prefixSize == 0) { /* The first call, no dictionary yet. 
*/ assert(lz4sd->extDictSize == 0); result = LZ4_decompress_safe(source, dest, compressedSize, maxOutputSize); if (result <= 0) return result; lz4sd->prefixSize = (size_t)result; lz4sd->prefixEnd = (BYTE*)dest + result; } else if (lz4sd->prefixEnd == (BYTE*)dest) { /* They're rolling the current segment. */ if (lz4sd->prefixSize >= 64 KB - 1) result = LZ4_decompress_safe_withPrefix64k(source, dest, compressedSize, maxOutputSize); else if (lz4sd->extDictSize == 0) result = LZ4_decompress_safe_withSmallPrefix(source, dest, compressedSize, maxOutputSize, lz4sd->prefixSize); else result = LZ4_decompress_safe_doubleDict(source, dest, compressedSize, maxOutputSize, lz4sd->prefixSize, lz4sd->externalDict, lz4sd->extDictSize); if (result <= 0) return result; lz4sd->prefixSize += (size_t)result; lz4sd->prefixEnd += result; } else { /* The buffer wraps around, or they're switching to another buffer. */ lz4sd->extDictSize = lz4sd->prefixSize; lz4sd->externalDict = lz4sd->prefixEnd - lz4sd->extDictSize; result = LZ4_decompress_safe_forceExtDict(source, dest, compressedSize, maxOutputSize, lz4sd->externalDict, lz4sd->extDictSize); if (result <= 0) return result; lz4sd->prefixSize = (size_t)result; lz4sd->prefixEnd = (BYTE*)dest + result; } return result; } LZ4_FORCE_O2_GCC_PPC64LE int LZ4_decompress_fast_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* source, char* dest, int originalSize) { LZ4_streamDecode_t_internal* lz4sd = &LZ4_streamDecode->internal_donotuse; int result; assert(originalSize >= 0); if (lz4sd->prefixSize == 0) { assert(lz4sd->extDictSize == 0); result = LZ4_decompress_fast(source, dest, originalSize); if (result <= 0) return result; lz4sd->prefixSize = (size_t)originalSize; lz4sd->prefixEnd = (BYTE*)dest + originalSize; } else if (lz4sd->prefixEnd == (BYTE*)dest) { if (lz4sd->prefixSize >= 64 KB - 1 || lz4sd->extDictSize == 0) result = LZ4_decompress_fast(source, dest, originalSize); else result = LZ4_decompress_fast_doubleDict(source, dest, originalSize, lz4sd->prefixSize, lz4sd->externalDict, lz4sd->extDictSize); if (result <= 0) return result; lz4sd->prefixSize += (size_t)originalSize; lz4sd->prefixEnd += originalSize; } else { lz4sd->extDictSize = lz4sd->prefixSize; lz4sd->externalDict = lz4sd->prefixEnd - lz4sd->extDictSize; result = LZ4_decompress_fast_extDict(source, dest, originalSize, lz4sd->externalDict, lz4sd->extDictSize); if (result <= 0) return result; lz4sd->prefixSize = (size_t)originalSize; lz4sd->prefixEnd = (BYTE*)dest + originalSize; } return result; } /* Advanced decoding functions : *_usingDict() : These decoding functions work the same as "_continue" ones, the dictionary must be explicitly provided within parameters */ int LZ4_decompress_safe_usingDict(const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize) { if (dictSize==0) return LZ4_decompress_safe(source, dest, compressedSize, maxOutputSize); if (dictStart+dictSize == dest) { if (dictSize >= 64 KB - 1) { return LZ4_decompress_safe_withPrefix64k(source, dest, compressedSize, maxOutputSize); } assert(dictSize >= 0); return LZ4_decompress_safe_withSmallPrefix(source, dest, compressedSize, maxOutputSize, (size_t)dictSize); } assert(dictSize >= 0); return LZ4_decompress_safe_forceExtDict(source, dest, compressedSize, maxOutputSize, dictStart, (size_t)dictSize); } int LZ4_decompress_fast_usingDict(const char* source, char* dest, int originalSize, const char* dictStart, int dictSize) { if (dictSize==0 || dictStart+dictSize == dest) return 
LZ4_decompress_fast(source, dest, originalSize); assert(dictSize >= 0); return LZ4_decompress_fast_extDict(source, dest, originalSize, dictStart, (size_t)dictSize); } /*=************************************************* * Obsolete Functions ***************************************************/ /* obsolete compression functions */ int LZ4_compress_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize) { return LZ4_compress_default(source, dest, inputSize, maxOutputSize); } int LZ4_compress(const char* src, char* dest, int srcSize) { return LZ4_compress_default(src, dest, srcSize, LZ4_compressBound(srcSize)); } int LZ4_compress_limitedOutput_withState (void* state, const char* src, char* dst, int srcSize, int dstSize) { return LZ4_compress_fast_extState(state, src, dst, srcSize, dstSize, 1); } int LZ4_compress_withState (void* state, const char* src, char* dst, int srcSize) { return LZ4_compress_fast_extState(state, src, dst, srcSize, LZ4_compressBound(srcSize), 1); } int LZ4_compress_limitedOutput_continue (LZ4_stream_t* LZ4_stream, const char* src, char* dst, int srcSize, int dstCapacity) { return LZ4_compress_fast_continue(LZ4_stream, src, dst, srcSize, dstCapacity, 1); } int LZ4_compress_continue (LZ4_stream_t* LZ4_stream, const char* source, char* dest, int inputSize) { return LZ4_compress_fast_continue(LZ4_stream, source, dest, inputSize, LZ4_compressBound(inputSize), 1); } /* These decompression functions are deprecated and should no longer be used. They are only provided here for compatibility with older user programs. - LZ4_uncompress is totally equivalent to LZ4_decompress_fast - LZ4_uncompress_unknownOutputSize is totally equivalent to LZ4_decompress_safe */ int LZ4_uncompress (const char* source, char* dest, int outputSize) { return LZ4_decompress_fast(source, dest, outputSize); } int LZ4_uncompress_unknownOutputSize (const char* source, char* dest, int isize, int maxOutputSize) { return LZ4_decompress_safe(source, dest, isize, maxOutputSize); } /* Obsolete Streaming functions */ int LZ4_sizeofStreamState() { return LZ4_STREAMSIZE; } int LZ4_resetStreamState(void* state, char* inputBuffer) { (void)inputBuffer; LZ4_resetStream((LZ4_stream_t*)state); return 0; } void* LZ4_create (char* inputBuffer) { (void)inputBuffer; return LZ4_createStream(); } char* LZ4_slideInputBuffer (void* state) { /* avoid const char * -> char * conversion warning */ return (char *)(uptrval)((LZ4_stream_t*)state)->internal_donotuse.dictionary; } #endif /* LZ4_COMMONDEFS_ONLY */ zmat-0.9.8/src/lz4/lz4.h000066400000000000000000001160071366304620100146730ustar00rootroot00000000000000/* * LZ4 - Fast LZ compression algorithm * Header File * Copyright (C) 2011-present, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - LZ4 homepage : http://www.lz4.org - LZ4 source repository : https://github.com/lz4/lz4 */ #if defined (__cplusplus) extern "C" { #endif #ifndef LZ4_H_2983827168210 #define LZ4_H_2983827168210 /* --- Dependency --- */ #include /* size_t */ /** Introduction LZ4 is lossless compression algorithm, providing compression speed >500 MB/s per core, scalable with multi-cores CPU. It features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems. The LZ4 compression library provides in-memory compression and decompression functions. It gives full buffer control to user. Compression can be done in: - a single step (described as Simple Functions) - a single step, reusing a context (described in Advanced Functions) - unbounded multiple steps (described as Streaming compression) lz4.h generates and decodes LZ4-compressed blocks (doc/lz4_Block_format.md). Decompressing such a compressed block requires additional metadata. Exact metadata depends on exact decompression function. For the typical case of LZ4_decompress_safe(), metadata includes block's compressed size, and maximum bound of decompressed size. Each application is free to encode and pass such metadata in whichever way it wants. lz4.h only handle blocks, it can not generate Frames. Blocks are different from Frames (doc/lz4_Frame_format.md). Frames bundle both blocks and metadata in a specified manner. Embedding metadata is required for compressed data to be self-contained and portable. Frame format is delivered through a companion API, declared in lz4frame.h. The `lz4` CLI can only manage frames. */ /*^*************************************************************** * Export parameters *****************************************************************/ /* * LZ4_DLL_EXPORT : * Enable exporting of functions when building a Windows DLL * LZ4LIB_VISIBILITY : * Control library symbols visibility. 
*/ #ifndef LZ4LIB_VISIBILITY # if defined(__GNUC__) && (__GNUC__ >= 4) # define LZ4LIB_VISIBILITY __attribute__ ((visibility ("default"))) # else # define LZ4LIB_VISIBILITY # endif #endif #if defined(LZ4_DLL_EXPORT) && (LZ4_DLL_EXPORT==1) # define LZ4LIB_API __declspec(dllexport) LZ4LIB_VISIBILITY #elif defined(LZ4_DLL_IMPORT) && (LZ4_DLL_IMPORT==1) # define LZ4LIB_API __declspec(dllimport) LZ4LIB_VISIBILITY /* It isn't required but allows to generate better code, saving a function pointer load from the IAT and an indirect jump.*/ #else # define LZ4LIB_API LZ4LIB_VISIBILITY #endif /*------ Version ------*/ #define LZ4_VERSION_MAJOR 1 /* for breaking interface changes */ #define LZ4_VERSION_MINOR 9 /* for new (non-breaking) interface capabilities */ #define LZ4_VERSION_RELEASE 2 /* for tweaks, bug-fixes, or development */ #define LZ4_VERSION_NUMBER (LZ4_VERSION_MAJOR *100*100 + LZ4_VERSION_MINOR *100 + LZ4_VERSION_RELEASE) #define LZ4_LIB_VERSION LZ4_VERSION_MAJOR.LZ4_VERSION_MINOR.LZ4_VERSION_RELEASE #define LZ4_QUOTE(str) #str #define LZ4_EXPAND_AND_QUOTE(str) LZ4_QUOTE(str) #define LZ4_VERSION_STRING LZ4_EXPAND_AND_QUOTE(LZ4_LIB_VERSION) LZ4LIB_API int LZ4_versionNumber (void); /**< library version number; useful to check dll version */ LZ4LIB_API const char* LZ4_versionString (void); /**< library version string; useful to check dll version */ /*-************************************ * Tuning parameter **************************************/ /*! * LZ4_MEMORY_USAGE : * Memory usage formula : N->2^N Bytes (examples : 10 -> 1KB; 12 -> 4KB ; 16 -> 64KB; 20 -> 1MB; etc.) * Increasing memory usage improves compression ratio. * Reduced memory usage may improve speed, thanks to better cache locality. * Default value is 14, for 16KB, which nicely fits into Intel x86 L1 cache */ #ifndef LZ4_MEMORY_USAGE # define LZ4_MEMORY_USAGE 14 #endif /*-************************************ * Simple Functions **************************************/ /*! LZ4_compress_default() : * Compresses 'srcSize' bytes from buffer 'src' * into already allocated 'dst' buffer of size 'dstCapacity'. * Compression is guaranteed to succeed if 'dstCapacity' >= LZ4_compressBound(srcSize). * It also runs faster, so it's a recommended setting. * If the function cannot compress 'src' into a more limited 'dst' budget, * compression stops *immediately*, and the function result is zero. * In which case, 'dst' content is undefined (invalid). * srcSize : max supported value is LZ4_MAX_INPUT_SIZE. * dstCapacity : size of buffer 'dst' (which must be already allocated) * @return : the number of bytes written into buffer 'dst' (necessarily <= dstCapacity) * or 0 if compression fails * Note : This function is protected against buffer overflow scenarios (never writes outside 'dst' buffer, nor read outside 'source' buffer). */ LZ4LIB_API int LZ4_compress_default(const char* src, char* dst, int srcSize, int dstCapacity); /*! LZ4_decompress_safe() : * compressedSize : is the exact complete size of the compressed block. * dstCapacity : is the size of destination buffer (which must be already allocated), presumed an upper bound of decompressed size. * @return : the number of bytes decompressed into destination buffer (necessarily <= dstCapacity) * If destination buffer is not large enough, decoding will stop and output an error code (negative value). * If the source stream is detected malformed, the function will stop decoding and return a negative result. 
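/* Editor's illustrative sketch, not part of the upstream LZ4 sources: a minimal
 * round trip with the two simple functions documented here, assuming lz4.h is on
 * the include path and the payload fits in memory. Buffer names are made up. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "lz4.h"

static int lz4_roundtrip_demo(void)
{
    const char src[] = "an example payload: zmat uses lz4 for its 'lz4'/'lz4hc' codecs";
    int const srcSize = (int)sizeof(src);
    int const bound = LZ4_compressBound(srcSize);        /* worst-case compressed size */
    char* cbuf = (char*)malloc((size_t)bound);
    char* dbuf = (char*)malloc((size_t)srcSize);
    int ok = 0;
    if (cbuf && dbuf) {
        int const cSize = LZ4_compress_default(src, cbuf, srcSize, bound);
        if (cSize > 0) {                                  /* 0 means "did not fit"      */
            int const dSize = LZ4_decompress_safe(cbuf, dbuf, cSize, srcSize);
            ok = (dSize == srcSize) && (memcmp(src, dbuf, (size_t)srcSize) == 0);
            printf("%d -> %d -> %d bytes, ok=%d\n", srcSize, cSize, dSize, ok);
        }
    }
    free(cbuf); free(dbuf);
    return ok ? 0 : 1;
}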
* Note 1 : This function is protected against malicious data packets : * it will never writes outside 'dst' buffer, nor read outside 'source' buffer, * even if the compressed block is maliciously modified to order the decoder to do these actions. * In such case, the decoder stops immediately, and considers the compressed block malformed. * Note 2 : compressedSize and dstCapacity must be provided to the function, the compressed block does not contain them. * The implementation is free to send / store / derive this information in whichever way is most beneficial. * If there is a need for a different format which bundles together both compressed data and its metadata, consider looking at lz4frame.h instead. */ LZ4LIB_API int LZ4_decompress_safe (const char* src, char* dst, int compressedSize, int dstCapacity); /*-************************************ * Advanced Functions **************************************/ #define LZ4_MAX_INPUT_SIZE 0x7E000000 /* 2 113 929 216 bytes */ #define LZ4_COMPRESSBOUND(isize) ((unsigned)(isize) > (unsigned)LZ4_MAX_INPUT_SIZE ? 0 : (isize) + ((isize)/255) + 16) /*! LZ4_compressBound() : Provides the maximum size that LZ4 compression may output in a "worst case" scenario (input data not compressible) This function is primarily useful for memory allocation purposes (destination buffer size). Macro LZ4_COMPRESSBOUND() is also provided for compilation-time evaluation (stack memory allocation for example). Note that LZ4_compress_default() compresses faster when dstCapacity is >= LZ4_compressBound(srcSize) inputSize : max supported value is LZ4_MAX_INPUT_SIZE return : maximum output size in a "worst case" scenario or 0, if input size is incorrect (too large or negative) */ LZ4LIB_API int LZ4_compressBound(int inputSize); /*! LZ4_compress_fast() : Same as LZ4_compress_default(), but allows selection of "acceleration" factor. The larger the acceleration value, the faster the algorithm, but also the lesser the compression. It's a trade-off. It can be fine tuned, with each successive value providing roughly +~3% to speed. An acceleration value of "1" is the same as regular LZ4_compress_default() Values <= 0 will be replaced by ACCELERATION_DEFAULT (currently == 1, see lz4.c). */ LZ4LIB_API int LZ4_compress_fast (const char* src, char* dst, int srcSize, int dstCapacity, int acceleration); /*! LZ4_compress_fast_extState() : * Same as LZ4_compress_fast(), using an externally allocated memory space for its state. * Use LZ4_sizeofState() to know how much memory must be allocated, * and allocate it on 8-bytes boundaries (using `malloc()` typically). * Then, provide this buffer as `void* state` to compression function. */ LZ4LIB_API int LZ4_sizeofState(void); LZ4LIB_API int LZ4_compress_fast_extState (void* state, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration); /*! LZ4_compress_destSize() : * Reverse the logic : compresses as much data as possible from 'src' buffer * into already allocated buffer 'dst', of size >= 'targetDestSize'. * This function either compresses the entire 'src' content into 'dst' if it's large enough, * or fill 'dst' buffer completely with as much data as possible from 'src'. * note: acceleration parameter is fixed to "default". * * *srcSizePtr : will be modified to indicate how many bytes where read from 'src' to fill 'dst'. * New value is necessarily <= input value. * @return : Nb bytes written into 'dst' (necessarily <= targetDestSize) * or 0 if compression fails. 
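/* Editor's illustrative sketch, not part of the upstream LZ4 sources: reusing one
 * externally allocated state across many LZ4_compress_fast_extState() calls, as
 * described above. Not thread-safe (single static scratch buffer); `src`, `dst`,
 * `srcSize` and `dstCapacity` are assumed to be provided by the caller. */
#include <stdlib.h>
#include "lz4.h"

static int compress_with_reused_state(const char* src, char* dst,
                                      int srcSize, int dstCapacity)
{
    static void* state = NULL;                     /* malloc() gives the 8-byte alignment asked for above */
    if (state == NULL) state = malloc((size_t)LZ4_sizeofState());
    if (state == NULL) return 0;
    /* acceleration = 1 behaves like LZ4_compress_default() */
    return LZ4_compress_fast_extState(state, src, dst, srcSize, dstCapacity, 1);
}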
*/ LZ4LIB_API int LZ4_compress_destSize (const char* src, char* dst, int* srcSizePtr, int targetDstSize); /*! LZ4_decompress_safe_partial() : * Decompress an LZ4 compressed block, of size 'srcSize' at position 'src', * into destination buffer 'dst' of size 'dstCapacity'. * Up to 'targetOutputSize' bytes will be decoded. * The function stops decoding on reaching this objective, * which can boost performance when only the beginning of a block is required. * * @return : the number of bytes decoded in `dst` (necessarily <= dstCapacity) * If source stream is detected malformed, function returns a negative result. * * Note : @return can be < targetOutputSize, if compressed block contains less data. * * Note 2 : this function features 2 parameters, targetOutputSize and dstCapacity, * and expects targetOutputSize <= dstCapacity. * It effectively stops decoding on reaching targetOutputSize, * so dstCapacity is kind of redundant. * This is because in a previous version of this function, * decoding operation would not "break" a sequence in the middle. * As a consequence, there was no guarantee that decoding would stop at exactly targetOutputSize, * it could write more bytes, though only up to dstCapacity. * Some "margin" used to be required for this operation to work properly. * This is no longer necessary. * The function nonetheless keeps its signature, in an effort to not break API. */ LZ4LIB_API int LZ4_decompress_safe_partial (const char* src, char* dst, int srcSize, int targetOutputSize, int dstCapacity); /*-********************************************* * Streaming Compression Functions ***********************************************/ typedef union LZ4_stream_u LZ4_stream_t; /* incomplete type (defined later) */ LZ4LIB_API LZ4_stream_t* LZ4_createStream(void); LZ4LIB_API int LZ4_freeStream (LZ4_stream_t* streamPtr); /*! LZ4_resetStream_fast() : v1.9.0+ * Use this to prepare an LZ4_stream_t for a new chain of dependent blocks * (e.g., LZ4_compress_fast_continue()). * * An LZ4_stream_t must be initialized once before usage. * This is automatically done when created by LZ4_createStream(). * However, should the LZ4_stream_t be simply declared on stack (for example), * it's necessary to initialize it first, using LZ4_initStream(). * * After init, start any new stream with LZ4_resetStream_fast(). * A same LZ4_stream_t can be re-used multiple times consecutively * and compress multiple streams, * provided that it starts each new stream with LZ4_resetStream_fast(). * * LZ4_resetStream_fast() is much faster than LZ4_initStream(), * but is not compatible with memory regions containing garbage data. * * Note: it's only useful to call LZ4_resetStream_fast() * in the context of streaming compression. * The *extState* functions perform their own resets. * Invoking LZ4_resetStream_fast() before is redundant, and even counterproductive. */ LZ4LIB_API void LZ4_resetStream_fast (LZ4_stream_t* streamPtr); /*! LZ4_loadDict() : * Use this function to reference a static dictionary into LZ4_stream_t. * The dictionary must remain available during compression. * LZ4_loadDict() triggers a reset, so any previous data will be forgotten. * The same dictionary will have to be loaded on decompression side for successful decoding. * Dictionary are useful for better compression of small data (KB range). * While LZ4 accept any input as dictionary, * results are generally better when using Zstandard's Dictionary Builder. * Loading a size of 0 is allowed, and is the same as reset. 
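/* Editor's illustrative sketch, not part of the upstream LZ4 sources: decoding
 * only the beginning of a block with LZ4_decompress_safe_partial(), e.g. to peek
 * at a header. `cbuf`/`cSize` are assumed to hold a valid LZ4 block produced
 * elsewhere; `want` is the number of leading bytes the caller needs. */
#include "lz4.h"

static int peek_first_bytes(const char* cbuf, int cSize,
                            char* out, int want, int outCapacity)
{
    /* Returns the number of bytes actually decoded (may be < want for small
     * blocks), or a negative value if the block is malformed. */
    return LZ4_decompress_safe_partial(cbuf, out, cSize, want, outCapacity);
}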
* @return : loaded dictionary size, in bytes (necessarily <= 64 KB) */ LZ4LIB_API int LZ4_loadDict (LZ4_stream_t* streamPtr, const char* dictionary, int dictSize); /*! LZ4_compress_fast_continue() : * Compress 'src' content using data from previously compressed blocks, for better compression ratio. * 'dst' buffer must be already allocated. * If dstCapacity >= LZ4_compressBound(srcSize), compression is guaranteed to succeed, and runs faster. * * @return : size of compressed block * or 0 if there is an error (typically, cannot fit into 'dst'). * * Note 1 : Each invocation to LZ4_compress_fast_continue() generates a new block. * Each block has precise boundaries. * Each block must be decompressed separately, calling LZ4_decompress_*() with relevant metadata. * It's not possible to append blocks together and expect a single invocation of LZ4_decompress_*() to decompress them together. * * Note 2 : The previous 64KB of source data is __assumed__ to remain present, unmodified, at same address in memory ! * * Note 3 : When input is structured as a double-buffer, each buffer can have any size, including < 64 KB. * Make sure that buffers are separated, by at least one byte. * This construction ensures that each block only depends on previous block. * * Note 4 : If input buffer is a ring-buffer, it can have any size, including < 64 KB. * * Note 5 : After an error, the stream status is undefined (invalid), it can only be reset or freed. */ LZ4LIB_API int LZ4_compress_fast_continue (LZ4_stream_t* streamPtr, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration); /*! LZ4_saveDict() : * If last 64KB data cannot be guaranteed to remain available at its current memory location, * save it into a safer place (char* safeBuffer). * This is schematically equivalent to a memcpy() followed by LZ4_loadDict(), * but is much faster, because LZ4_saveDict() doesn't need to rebuild tables. * @return : saved dictionary size in bytes (necessarily <= maxDictSize), or 0 if error. */ LZ4LIB_API int LZ4_saveDict (LZ4_stream_t* streamPtr, char* safeBuffer, int maxDictSize); /*-********************************************** * Streaming Decompression Functions * Bufferless synchronous API ************************************************/ typedef union LZ4_streamDecode_u LZ4_streamDecode_t; /* tracking context */ /*! LZ4_createStreamDecode() and LZ4_freeStreamDecode() : * creation / destruction of streaming decompression tracking context. * A tracking context can be re-used multiple times. */ LZ4LIB_API LZ4_streamDecode_t* LZ4_createStreamDecode(void); LZ4LIB_API int LZ4_freeStreamDecode (LZ4_streamDecode_t* LZ4_stream); /*! LZ4_setStreamDecode() : * An LZ4_streamDecode_t context can be allocated once and re-used multiple times. * Use this function to start decompression of a new stream of blocks. * A dictionary can optionally be set. Use NULL or size 0 for a reset order. * Dictionary is presumed stable : it must remain accessible and unmodified during next decompression. * @return : 1 if OK, 0 if error */ LZ4LIB_API int LZ4_setStreamDecode (LZ4_streamDecode_t* LZ4_streamDecode, const char* dictionary, int dictSize); /*! LZ4_decoderRingBufferSize() : v1.8.2+ * Note : in a ring buffer scenario (optional), * blocks are presumed decompressed next to each other * up to the moment there is not enough remaining space for next block (remainingSize < maxBlockSize), * at which stage it resumes from beginning of ring buffer. 
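/* Editor's illustrative sketch, not part of the upstream LZ4 sources: chained-block
 * compression with LZ4_compress_fast_continue() using a double buffer, so that the
 * previous block stays resident as history (Note 2/3 above). read_chunk() and
 * write_block() are hypothetical I/O callbacks supplied by the application. */
#include "lz4.h"

#define CHUNK 4096

extern int  read_chunk(char* dst, int maxBytes);      /* hypothetical: bytes read, 0 at EOF */
extern void write_block(const char* data, int size);  /* hypothetical sink for one block    */

static void stream_compress(void)
{
    LZ4_stream_t* const st = LZ4_createStream();
    char inBuf[2][CHUNK + 8];                  /* small gap keeps the two buffers separated */
    char outBuf[LZ4_COMPRESSBOUND(CHUNK)];
    int idx = 0;
    if (st == NULL) return;
    for (;;) {
        int const n = read_chunk(inBuf[idx], CHUNK);
        if (n <= 0) break;
        {   int const cSize = LZ4_compress_fast_continue(st, inBuf[idx], outBuf, n,
                                                          (int)sizeof(outBuf), 1);
            if (cSize <= 0) break;             /* should not happen with a bound-sized outBuf */
            write_block(outBuf, cSize);        /* each block must later be decoded separately */
        }
        idx ^= 1;                              /* alternate buffers so the last 64KB of history stays untouched */
    }
    LZ4_freeStream(st);
}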
* When setting such a ring buffer for streaming decompression, * provides the minimum size of this ring buffer * to be compatible with any source respecting maxBlockSize condition. * @return : minimum ring buffer size, * or 0 if there is an error (invalid maxBlockSize). */ LZ4LIB_API int LZ4_decoderRingBufferSize(int maxBlockSize); #define LZ4_DECODER_RING_BUFFER_SIZE(maxBlockSize) (65536 + 14 + (maxBlockSize)) /* for static allocation; maxBlockSize presumed valid */ /*! LZ4_decompress_*_continue() : * These decoding functions allow decompression of consecutive blocks in "streaming" mode. * A block is an unsplittable entity, it must be presented entirely to a decompression function. * Decompression functions only accepts one block at a time. * The last 64KB of previously decoded data *must* remain available and unmodified at the memory position where they were decoded. * If less than 64KB of data has been decoded, all the data must be present. * * Special : if decompression side sets a ring buffer, it must respect one of the following conditions : * - Decompression buffer size is _at least_ LZ4_decoderRingBufferSize(maxBlockSize). * maxBlockSize is the maximum size of any single block. It can have any value > 16 bytes. * In which case, encoding and decoding buffers do not need to be synchronized. * Actually, data can be produced by any source compliant with LZ4 format specification, and respecting maxBlockSize. * - Synchronized mode : * Decompression buffer size is _exactly_ the same as compression buffer size, * and follows exactly same update rule (block boundaries at same positions), * and decoding function is provided with exact decompressed size of each block (exception for last block of the stream), * _then_ decoding & encoding ring buffer can have any size, including small ones ( < 64 KB). * - Decompression buffer is larger than encoding buffer, by a minimum of maxBlockSize more bytes. * In which case, encoding and decoding buffers do not need to be synchronized, * and encoding ring buffer can have any size, including small ones ( < 64 KB). * * Whenever these conditions are not possible, * save the last 64KB of decoded data into a safe buffer where it can't be modified during decompression, * then indicate where this data is saved using LZ4_setStreamDecode(), before decompressing next block. */ LZ4LIB_API int LZ4_decompress_safe_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* src, char* dst, int srcSize, int dstCapacity); /*! LZ4_decompress_*_usingDict() : * These decoding functions work the same as * a combination of LZ4_setStreamDecode() followed by LZ4_decompress_*_continue() * They are stand-alone, and don't need an LZ4_streamDecode_t structure. * Dictionary is presumed stable : it must remain accessible and unmodified during decompression. * Performance tip : Decompression speed can be substantially increased * when dst == dictStart + dictSize. */ LZ4LIB_API int LZ4_decompress_safe_usingDict (const char* src, char* dst, int srcSize, int dstCapcity, const char* dictStart, int dictSize); #endif /* LZ4_H_2983827168210 */ /*^************************************* * !!!!!! STATIC LINKING ONLY !!!!!! ***************************************/ /*-**************************************************************************** * Experimental section * * Symbols declared in this section must be considered unstable. Their * signatures or semantics may change, or they may be removed altogether in the * future. 
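/* Editor's illustrative sketch, not part of the upstream LZ4 sources: decoding the
 * chained blocks produced by the compression sketch above, mirroring its double-
 * buffer layout so previously decoded data stays where it was decoded.
 * read_block() and consume() are hypothetical application callbacks. */
#include "lz4.h"

#define CHUNK 4096

extern int  read_block(char* dst, int maxBytes);   /* hypothetical: one compressed block, 0 at EOF */
extern void consume(const char* data, int size);   /* hypothetical sink for decoded data           */

static void stream_decompress(void)
{
    LZ4_streamDecode_t* const sd = LZ4_createStreamDecode();
    char cBuf[LZ4_COMPRESSBOUND(CHUNK)];
    char dBuf[2][CHUNK + 8];                        /* same sizes and rotation as the encoder */
    int idx = 0;
    if (sd == NULL) return;
    for (;;) {
        int const cSize = read_block(cBuf, (int)sizeof(cBuf));
        if (cSize <= 0) break;
        {   int const dSize = LZ4_decompress_safe_continue(sd, cBuf, dBuf[idx], cSize, CHUNK);
            if (dSize < 0) break;                   /* malformed block */
            consume(dBuf[idx], dSize);
        }
        idx ^= 1;                                   /* previously decoded block must remain in place */
    }
    LZ4_freeStreamDecode(sd);
}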
They are therefore only safe to depend on when the caller is * statically linked against the library. * * To protect against unsafe usage, not only are the declarations guarded, * the definitions are hidden by default * when building LZ4 as a shared/dynamic library. * * In order to access these declarations, * define LZ4_STATIC_LINKING_ONLY in your application * before including LZ4's headers. * * In order to make their implementations accessible dynamically, you must * define LZ4_PUBLISH_STATIC_FUNCTIONS when building the LZ4 library. ******************************************************************************/ #ifdef LZ4_STATIC_LINKING_ONLY #ifndef LZ4_STATIC_3504398509 #define LZ4_STATIC_3504398509 #ifdef LZ4_PUBLISH_STATIC_FUNCTIONS #define LZ4LIB_STATIC_API LZ4LIB_API #else #define LZ4LIB_STATIC_API #endif /*! LZ4_compress_fast_extState_fastReset() : * A variant of LZ4_compress_fast_extState(). * * Using this variant avoids an expensive initialization step. * It is only safe to call if the state buffer is known to be correctly initialized already * (see above comment on LZ4_resetStream_fast() for a definition of "correctly initialized"). * From a high level, the difference is that * this function initializes the provided state with a call to something like LZ4_resetStream_fast() * while LZ4_compress_fast_extState() starts with a call to LZ4_resetStream(). */ LZ4LIB_STATIC_API int LZ4_compress_fast_extState_fastReset (void* state, const char* src, char* dst, int srcSize, int dstCapacity, int acceleration); /*! LZ4_attach_dictionary() : * This is an experimental API that allows * efficient use of a static dictionary many times. * * Rather than re-loading the dictionary buffer into a working context before * each compression, or copying a pre-loaded dictionary's LZ4_stream_t into a * working LZ4_stream_t, this function introduces a no-copy setup mechanism, * in which the working stream references the dictionary stream in-place. * * Several assumptions are made about the state of the dictionary stream. * Currently, only streams which have been prepared by LZ4_loadDict() should * be expected to work. * * Alternatively, the provided dictionaryStream may be NULL, * in which case any existing dictionary stream is unset. * * If a dictionary is provided, it replaces any pre-existing stream history. * The dictionary contents are the only history that can be referenced and * logically immediately precede the data compressed in the first subsequent * compression call. * * The dictionary will only remain attached to the working stream through the * first compression call, at the end of which it is cleared. The dictionary * stream (and source buffer) must remain in-place / accessible / unchanged * through the completion of the first compression call on the stream. */ LZ4LIB_STATIC_API void LZ4_attach_dictionary(LZ4_stream_t* workingStream, const LZ4_stream_t* dictionaryStream); /*! In-place compression and decompression * * It's possible to have input and output sharing the same buffer, * for highly contrained memory environments. * In both cases, it requires input to lay at the end of the buffer, * and decompression to start at beginning of the buffer. * Buffer size must feature some margin, hence be larger than final size. 
* * |<------------------------buffer--------------------------------->| * |<-----------compressed data--------->| * |<-----------decompressed size------------------>| * |<----margin---->| * * This technique is more useful for decompression, * since decompressed size is typically larger, * and margin is short. * * In-place decompression will work inside any buffer * which size is >= LZ4_DECOMPRESS_INPLACE_BUFFER_SIZE(decompressedSize). * This presumes that decompressedSize > compressedSize. * Otherwise, it means compression actually expanded data, * and it would be more efficient to store such data with a flag indicating it's not compressed. * This can happen when data is not compressible (already compressed, or encrypted). * * For in-place compression, margin is larger, as it must be able to cope with both * history preservation, requiring input data to remain unmodified up to LZ4_DISTANCE_MAX, * and data expansion, which can happen when input is not compressible. * As a consequence, buffer size requirements are much higher, * and memory savings offered by in-place compression are more limited. * * There are ways to limit this cost for compression : * - Reduce history size, by modifying LZ4_DISTANCE_MAX. * Note that it is a compile-time constant, so all compressions will apply this limit. * Lower values will reduce compression ratio, except when input_size < LZ4_DISTANCE_MAX, * so it's a reasonable trick when inputs are known to be small. * - Require the compressor to deliver a "maximum compressed size". * This is the `dstCapacity` parameter in `LZ4_compress*()`. * When this size is < LZ4_COMPRESSBOUND(inputSize), then compression can fail, * in which case, the return code will be 0 (zero). * The caller must be ready for these cases to happen, * and typically design a backup scheme to send data uncompressed. * The combination of both techniques can significantly reduce * the amount of margin required for in-place compression. * * In-place compression can work in any buffer * which size is >= (maxCompressedSize) * with maxCompressedSize == LZ4_COMPRESSBOUND(srcSize) for guaranteed compression success. * LZ4_COMPRESS_INPLACE_BUFFER_SIZE() depends on both maxCompressedSize and LZ4_DISTANCE_MAX, * so it's possible to reduce memory requirements by playing with them. */ #define LZ4_DECOMPRESS_INPLACE_MARGIN(compressedSize) (((compressedSize) >> 8) + 32) #define LZ4_DECOMPRESS_INPLACE_BUFFER_SIZE(decompressedSize) ((decompressedSize) + LZ4_DECOMPRESS_INPLACE_MARGIN(decompressedSize)) /**< note: presumes that compressedSize < decompressedSize. 
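/* Editor's illustrative sketch, not part of the upstream LZ4 sources: sizing a
 * single buffer for in-place decompression with the macro above. The macro lives
 * in the static-linking-only section, so LZ4_STATIC_LINKING_ONLY must be defined
 * before including lz4.h; `decompressedSize` is assumed to be stored alongside
 * the block by the application. */
#define LZ4_STATIC_LINKING_ONLY
#include <stdlib.h>
#include <string.h>
#include "lz4.h"

static char* decompress_in_place(const char* block, int compressedSize, int decompressedSize)
{
    size_t const bufSize = LZ4_DECOMPRESS_INPLACE_BUFFER_SIZE(decompressedSize);
    char* const buf = (char*)malloc(bufSize);
    if (buf == NULL) return NULL;
    /* input must lie at the end of the buffer; output starts at its beginning */
    memcpy(buf + bufSize - (size_t)compressedSize, block, (size_t)compressedSize);
    if (LZ4_decompress_safe(buf + bufSize - (size_t)compressedSize, buf,
                            compressedSize, decompressedSize) < 0) {
        free(buf);
        return NULL;
    }
    return buf;   /* holds decompressedSize bytes of decoded data */
}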
note2: margin is overestimated a bit, since it could use compressedSize instead */ #ifndef LZ4_DISTANCE_MAX /* history window size; can be user-defined at compile time */ # define LZ4_DISTANCE_MAX 65535 /* set to maximum value by default */ #endif #define LZ4_COMPRESS_INPLACE_MARGIN (LZ4_DISTANCE_MAX + 32) /* LZ4_DISTANCE_MAX can be safely replaced by srcSize when it's smaller */ #define LZ4_COMPRESS_INPLACE_BUFFER_SIZE(maxCompressedSize) ((maxCompressedSize) + LZ4_COMPRESS_INPLACE_MARGIN) /**< maxCompressedSize is generally LZ4_COMPRESSBOUND(inputSize), but can be set to any lower value, with the risk that compression can fail (return code 0(zero)) */ #endif /* LZ4_STATIC_3504398509 */ #endif /* LZ4_STATIC_LINKING_ONLY */ #ifndef LZ4_H_98237428734687 #define LZ4_H_98237428734687 /*-************************************************************ * PRIVATE DEFINITIONS ************************************************************** * Do not use these definitions directly. * They are only exposed to allow static allocation of `LZ4_stream_t` and `LZ4_streamDecode_t`. * Accessing members will expose code to API and/or ABI break in future versions of the library. **************************************************************/ #define LZ4_HASHLOG (LZ4_MEMORY_USAGE-2) #define LZ4_HASHTABLESIZE (1 << LZ4_MEMORY_USAGE) #define LZ4_HASH_SIZE_U32 (1 << LZ4_HASHLOG) /* required as macro for static allocation */ #if defined(__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) #include typedef struct LZ4_stream_t_internal LZ4_stream_t_internal; struct LZ4_stream_t_internal { uint32_t hashTable[LZ4_HASH_SIZE_U32]; uint32_t currentOffset; uint16_t dirty; uint16_t tableType; const uint8_t* dictionary; const LZ4_stream_t_internal* dictCtx; uint32_t dictSize; }; typedef struct { const uint8_t* externalDict; size_t extDictSize; const uint8_t* prefixEnd; size_t prefixSize; } LZ4_streamDecode_t_internal; #else typedef struct LZ4_stream_t_internal LZ4_stream_t_internal; struct LZ4_stream_t_internal { unsigned int hashTable[LZ4_HASH_SIZE_U32]; unsigned int currentOffset; unsigned short dirty; unsigned short tableType; const unsigned char* dictionary; const LZ4_stream_t_internal* dictCtx; unsigned int dictSize; }; typedef struct { const unsigned char* externalDict; const unsigned char* prefixEnd; size_t extDictSize; size_t prefixSize; } LZ4_streamDecode_t_internal; #endif /*! LZ4_stream_t : * information structure to track an LZ4 stream. * LZ4_stream_t can also be created using LZ4_createStream(), which is recommended. * The structure definition can be convenient for static allocation * (on stack, or as part of larger structure). * Init this structure with LZ4_initStream() before first use. * note : only use this definition in association with static linking ! * this definition is not API/ABI safe, and may change in a future version. */ #define LZ4_STREAMSIZE_U64 ((1 << (LZ4_MEMORY_USAGE-3)) + 4 + ((sizeof(void*)==16) ? 4 : 0) /*AS-400*/ ) #define LZ4_STREAMSIZE (LZ4_STREAMSIZE_U64 * sizeof(unsigned long long)) union LZ4_stream_u { unsigned long long table[LZ4_STREAMSIZE_U64]; LZ4_stream_t_internal internal_donotuse; } ; /* previously typedef'd to LZ4_stream_t */ /*! LZ4_initStream() : v1.9.0+ * An LZ4_stream_t structure must be initialized at least once. * This is automatically done when invoking LZ4_createStream(), * but it's not when the structure is simply declared on stack (for example). * * Use LZ4_initStream() to properly initialize a newly declared LZ4_stream_t. 
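/* Editor's illustrative sketch, not part of the upstream LZ4 sources: a stack-
 * declared LZ4_stream_t initialized once with LZ4_initStream(), as the note above
 * requires, then used to emit a single independent block. */
#include "lz4.h"

static int compress_with_stack_state(const char* src, char* dst,
                                     int srcSize, int dstCapacity)
{
    LZ4_stream_t state;                                         /* on the stack: must be initialized */
    LZ4_stream_t* const s = LZ4_initStream(&state, sizeof(state));
    if (s == NULL) return 0;                                    /* size/alignment problem */
    return LZ4_compress_fast_continue(s, src, dst, srcSize, dstCapacity, 1);
}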
* It can also initialize any arbitrary buffer of sufficient size, * and will @return a pointer of proper type upon initialization. * * Note : initialization fails if size and alignment conditions are not respected. * In which case, the function will @return NULL. * Note2: An LZ4_stream_t structure guarantees correct alignment and size. * Note3: Before v1.9.0, use LZ4_resetStream() instead */ LZ4LIB_API LZ4_stream_t* LZ4_initStream (void* buffer, size_t size); /*! LZ4_streamDecode_t : * information structure to track an LZ4 stream during decompression. * init this structure using LZ4_setStreamDecode() before first use. * note : only use in association with static linking ! * this definition is not API/ABI safe, * and may change in a future version ! */ #define LZ4_STREAMDECODESIZE_U64 (4 + ((sizeof(void*)==16) ? 2 : 0) /*AS-400*/ ) #define LZ4_STREAMDECODESIZE (LZ4_STREAMDECODESIZE_U64 * sizeof(unsigned long long)) union LZ4_streamDecode_u { unsigned long long table[LZ4_STREAMDECODESIZE_U64]; LZ4_streamDecode_t_internal internal_donotuse; } ; /* previously typedef'd to LZ4_streamDecode_t */ /*-************************************ * Obsolete Functions **************************************/ /*! Deprecation warnings * * Deprecated functions make the compiler generate a warning when invoked. * This is meant to invite users to update their source code. * Should deprecation warnings be a problem, it is generally possible to disable them, * typically with -Wno-deprecated-declarations for gcc * or _CRT_SECURE_NO_WARNINGS in Visual. * * Another method is to define LZ4_DISABLE_DEPRECATE_WARNINGS * before including the header file. */ #ifdef LZ4_DISABLE_DEPRECATE_WARNINGS # define LZ4_DEPRECATED(message) /* disable deprecation warnings */ #else # define LZ4_GCC_VERSION (__GNUC__ * 100 + __GNUC_MINOR__) # if defined (__cplusplus) && (__cplusplus >= 201402) /* C++14 or greater */ # define LZ4_DEPRECATED(message) [[deprecated(message)]] # elif (LZ4_GCC_VERSION >= 405) || defined(__clang__) # define LZ4_DEPRECATED(message) __attribute__((deprecated(message))) # elif (LZ4_GCC_VERSION >= 301) # define LZ4_DEPRECATED(message) __attribute__((deprecated)) # elif defined(_MSC_VER) # define LZ4_DEPRECATED(message) __declspec(deprecated(message)) # else # pragma message("WARNING: You need to implement LZ4_DEPRECATED for this compiler") # define LZ4_DEPRECATED(message) # endif #endif /* LZ4_DISABLE_DEPRECATE_WARNINGS */ /* Obsolete compression functions */ LZ4_DEPRECATED("use LZ4_compress_default() instead") LZ4LIB_API int LZ4_compress (const char* src, char* dest, int srcSize); LZ4_DEPRECATED("use LZ4_compress_default() instead") LZ4LIB_API int LZ4_compress_limitedOutput (const char* src, char* dest, int srcSize, int maxOutputSize); LZ4_DEPRECATED("use LZ4_compress_fast_extState() instead") LZ4LIB_API int LZ4_compress_withState (void* state, const char* source, char* dest, int inputSize); LZ4_DEPRECATED("use LZ4_compress_fast_extState() instead") LZ4LIB_API int LZ4_compress_limitedOutput_withState (void* state, const char* source, char* dest, int inputSize, int maxOutputSize); LZ4_DEPRECATED("use LZ4_compress_fast_continue() instead") LZ4LIB_API int LZ4_compress_continue (LZ4_stream_t* LZ4_streamPtr, const char* source, char* dest, int inputSize); LZ4_DEPRECATED("use LZ4_compress_fast_continue() instead") LZ4LIB_API int LZ4_compress_limitedOutput_continue (LZ4_stream_t* LZ4_streamPtr, const char* source, char* dest, int inputSize, int maxOutputSize); /* Obsolete decompression functions */ LZ4_DEPRECATED("use 
LZ4_decompress_fast() instead") LZ4LIB_API int LZ4_uncompress (const char* source, char* dest, int outputSize); LZ4_DEPRECATED("use LZ4_decompress_safe() instead") LZ4LIB_API int LZ4_uncompress_unknownOutputSize (const char* source, char* dest, int isize, int maxOutputSize); /* Obsolete streaming functions; degraded functionality; do not use! * * In order to perform streaming compression, these functions depended on data * that is no longer tracked in the state. They have been preserved as well as * possible: using them will still produce a correct output. However, they don't * actually retain any history between compression calls. The compression ratio * achieved will therefore be no better than compressing each chunk * independently. */ LZ4_DEPRECATED("Use LZ4_createStream() instead") LZ4LIB_API void* LZ4_create (char* inputBuffer); LZ4_DEPRECATED("Use LZ4_createStream() instead") LZ4LIB_API int LZ4_sizeofStreamState(void); LZ4_DEPRECATED("Use LZ4_resetStream() instead") LZ4LIB_API int LZ4_resetStreamState(void* state, char* inputBuffer); LZ4_DEPRECATED("Use LZ4_saveDict() instead") LZ4LIB_API char* LZ4_slideInputBuffer (void* state); /* Obsolete streaming decoding functions */ LZ4_DEPRECATED("use LZ4_decompress_safe_usingDict() instead") LZ4LIB_API int LZ4_decompress_safe_withPrefix64k (const char* src, char* dst, int compressedSize, int maxDstSize); LZ4_DEPRECATED("use LZ4_decompress_fast_usingDict() instead") LZ4LIB_API int LZ4_decompress_fast_withPrefix64k (const char* src, char* dst, int originalSize); /*! LZ4_decompress_fast() : **unsafe!** * These functions used to be faster than LZ4_decompress_safe(), * but it has changed, and they are now slower than LZ4_decompress_safe(). * This is because LZ4_decompress_fast() doesn't know the input size, * and therefore must progress more cautiously in the input buffer to not read beyond the end of block. * On top of that `LZ4_decompress_fast()` is not protected vs malformed or malicious inputs, making it a security liability. * As a consequence, LZ4_decompress_fast() is strongly discouraged, and deprecated. * * The last remaining LZ4_decompress_fast() specificity is that * it can decompress a block without knowing its compressed size. * Such functionality could be achieved in a more secure manner, * by also providing the maximum size of input buffer, * but it would require new prototypes, and adaptation of the implementation to this new use case. * * Parameters: * originalSize : is the uncompressed size to regenerate. * `dst` must be already allocated, its size must be >= 'originalSize' bytes. * @return : number of bytes read from source buffer (== compressed size). * The function expects to finish at block's end exactly. * If the source stream is detected malformed, the function stops decoding and returns a negative result. * note : LZ4_decompress_fast*() requires originalSize. Thanks to this information, it never writes past the output buffer. * However, since it doesn't know its 'src' size, it may read an unknown amount of input, past input buffer bounds. * Also, since match offsets are not validated, match reads from 'src' may underflow too. * These issues never happen if input (compressed) data is correct. * But they may happen if input data is invalid (error or intentional tampering). * As a consequence, use these functions in trusted environments with trusted data **only**. */ LZ4_DEPRECATED("This function is deprecated and unsafe. 
Consider using LZ4_decompress_safe() instead") LZ4LIB_API int LZ4_decompress_fast (const char* src, char* dst, int originalSize); LZ4_DEPRECATED("This function is deprecated and unsafe. Consider using LZ4_decompress_safe_continue() instead") LZ4LIB_API int LZ4_decompress_fast_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* src, char* dst, int originalSize); LZ4_DEPRECATED("This function is deprecated and unsafe. Consider using LZ4_decompress_safe_usingDict() instead") LZ4LIB_API int LZ4_decompress_fast_usingDict (const char* src, char* dst, int originalSize, const char* dictStart, int dictSize); /*! LZ4_resetStream() : * An LZ4_stream_t structure must be initialized at least once. * This is done with LZ4_initStream(), or LZ4_resetStream(). * Consider switching to LZ4_initStream(), * invoking LZ4_resetStream() will trigger deprecation warnings in the future. */ LZ4LIB_API void LZ4_resetStream (LZ4_stream_t* streamPtr); #endif /* LZ4_H_98237428734687 */ #if defined (__cplusplus) } #endif zmat-0.9.8/src/lz4/lz4hc.c000066400000000000000000002015211366304620100151750ustar00rootroot00000000000000/* LZ4 HC - High Compression Mode of LZ4 Copyright (C) 2011-2017, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. You can contact the author at : - LZ4 source repository : https://github.com/lz4/lz4 - LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c */ /* note : lz4hc is not an independent module, it requires lz4.h/lz4.c for proper compilation */ /* ************************************* * Tuning Parameter ***************************************/ /*! HEAPMODE : * Select how default compression function will allocate workplace memory, * in stack (0:fastest), or in heap (1:requires malloc()). * Since workplace is rather large, heap mode is recommended. 
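/* Editor's illustrative sketch, not part of the upstream LZ4 sources: the high-
 * compression entry point and default level constant are assumed to be the ones
 * declared in lz4hc.h (LZ4_compress_HC, LZ4HC_CLEVEL_DEFAULT). The output is a
 * regular LZ4 block, decodable with LZ4_decompress_safe(); only the encoder
 * differs, trading compression speed for ratio. */
#include <stdlib.h>
#include "lz4.h"
#include "lz4hc.h"

static int compress_hc_demo(const char* src, int srcSize, char** out)
{
    int const bound = LZ4_compressBound(srcSize);
    char* const dst = (char*)malloc((size_t)bound);
    int cSize = 0;
    if (dst != NULL) {
        /* higher levels cost compression time; decompression speed is unchanged */
        cSize = LZ4_compress_HC(src, dst, srcSize, bound, LZ4HC_CLEVEL_DEFAULT);
    }
    *out = dst;
    return cSize;   /* 0 on failure (or allocation error) */
}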
*/ #ifndef LZ4HC_HEAPMODE # define LZ4HC_HEAPMODE 1 #endif /*=== Dependency ===*/ #define LZ4_HC_STATIC_LINKING_ONLY #include "lz4hc.h" /*=== Common LZ4 definitions ===*/ #if defined(__GNUC__) # pragma GCC diagnostic ignored "-Wunused-function" #endif #if defined (__clang__) # pragma clang diagnostic ignored "-Wunused-function" #endif /*=== Enums ===*/ typedef enum { noDictCtx, usingDictCtxHc } dictCtx_directive; #define LZ4_COMMONDEFS_ONLY #ifndef LZ4_SRC_INCLUDED #include "lz4.c" /* LZ4_count, constants, mem */ #endif /*=== Constants ===*/ #define OPTIMAL_ML (int)((ML_MASK-1)+MINMATCH) #define LZ4_OPT_NUM (1<<12) /*=== Macros ===*/ #define MIN(a,b) ( (a) < (b) ? (a) : (b) ) #define MAX(a,b) ( (a) > (b) ? (a) : (b) ) #define HASH_FUNCTION(i) (((i) * 2654435761U) >> ((MINMATCH*8)-LZ4HC_HASH_LOG)) #define DELTANEXTMAXD(p) chainTable[(p) & LZ4HC_MAXD_MASK] /* flexible, LZ4HC_MAXD dependent */ #define DELTANEXTU16(table, pos) table[(U16)(pos)] /* faster */ /* Make fields passed to, and updated by LZ4HC_encodeSequence explicit */ #define UPDATABLE(ip, op, anchor) &ip, &op, &anchor static U32 LZ4HC_hashPtr(const void* ptr) { return HASH_FUNCTION(LZ4_read32(ptr)); } /************************************** * HC Compression **************************************/ static void LZ4HC_clearTables (LZ4HC_CCtx_internal* hc4) { MEM_INIT((void*)hc4->hashTable, 0, sizeof(hc4->hashTable)); MEM_INIT(hc4->chainTable, 0xFF, sizeof(hc4->chainTable)); } static void LZ4HC_init_internal (LZ4HC_CCtx_internal* hc4, const BYTE* start) { uptrval startingOffset = (uptrval)(hc4->end - hc4->base); if (startingOffset > 1 GB) { LZ4HC_clearTables(hc4); startingOffset = 0; } startingOffset += 64 KB; hc4->nextToUpdate = (U32) startingOffset; hc4->base = start - startingOffset; hc4->end = start; hc4->dictBase = start - startingOffset; hc4->dictLimit = (U32) startingOffset; hc4->lowLimit = (U32) startingOffset; } /* Update chains up to ip (excluded) */ LZ4_FORCE_INLINE void LZ4HC_Insert (LZ4HC_CCtx_internal* hc4, const BYTE* ip) { U16* const chainTable = hc4->chainTable; U32* const hashTable = hc4->hashTable; const BYTE* const base = hc4->base; U32 const target = (U32)(ip - base); U32 idx = hc4->nextToUpdate; while (idx < target) { U32 const h = LZ4HC_hashPtr(base+idx); size_t delta = idx - hashTable[h]; if (delta>LZ4_DISTANCE_MAX) delta = LZ4_DISTANCE_MAX; DELTANEXTU16(chainTable, idx) = (U16)delta; hashTable[h] = idx; idx++; } hc4->nextToUpdate = target; } /** LZ4HC_countBack() : * @return : negative value, nb of common bytes before ip/match */ LZ4_FORCE_INLINE int LZ4HC_countBack(const BYTE* const ip, const BYTE* const match, const BYTE* const iMin, const BYTE* const mMin) { int back = 0; int const min = (int)MAX(iMin - ip, mMin - match); assert(min <= 0); assert(ip >= iMin); assert((size_t)(ip-iMin) < (1U<<31)); assert(match >= mMin); assert((size_t)(match - mMin) < (1U<<31)); while ( (back > min) && (ip[back-1] == match[back-1]) ) back--; return back; } #if defined(_MSC_VER) # define LZ4HC_rotl32(x,r) _rotl(x,r) #else # define LZ4HC_rotl32(x,r) ((x << r) | (x >> (32 - r))) #endif static U32 LZ4HC_rotatePattern(size_t const rotate, U32 const pattern) { size_t const bitsToRotate = (rotate & (sizeof(pattern) - 1)) << 3; if (bitsToRotate == 0) return pattern; return LZ4HC_rotl32(pattern, (int)bitsToRotate); } /* LZ4HC_countPattern() : * pattern32 must be a sample of repetitive pattern of length 1, 2 or 4 (but not 3!) 
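 * (for example, a run of the single byte 0x20 is sampled as pattern32 == 0x20202020,
 *  and a repeated 2-byte pair "ab" is sampled as the 4 bytes 'a','b','a','b')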
*/ static unsigned LZ4HC_countPattern(const BYTE* ip, const BYTE* const iEnd, U32 const pattern32) { const BYTE* const iStart = ip; reg_t const pattern = (sizeof(pattern)==8) ? (reg_t)pattern32 + (((reg_t)pattern32) << 32) : pattern32; while (likely(ip < iEnd-(sizeof(pattern)-1))) { reg_t const diff = LZ4_read_ARCH(ip) ^ pattern; if (!diff) { ip+=sizeof(pattern); continue; } ip += LZ4_NbCommonBytes(diff); return (unsigned)(ip - iStart); } if (LZ4_isLittleEndian()) { reg_t patternByte = pattern; while ((ip>= 8; } } else { /* big endian */ U32 bitOffset = (sizeof(pattern)*8) - 8; while (ip < iEnd) { BYTE const byte = (BYTE)(pattern >> bitOffset); if (*ip != byte) break; ip ++; bitOffset -= 8; } } return (unsigned)(ip - iStart); } /* LZ4HC_reverseCountPattern() : * pattern must be a sample of repetitive pattern of length 1, 2 or 4 (but not 3!) * read using natural platform endianess */ static unsigned LZ4HC_reverseCountPattern(const BYTE* ip, const BYTE* const iLow, U32 pattern) { const BYTE* const iStart = ip; while (likely(ip >= iLow+4)) { if (LZ4_read32(ip-4) != pattern) break; ip -= 4; } { const BYTE* bytePtr = (const BYTE*)(&pattern) + 3; /* works for any endianess */ while (likely(ip>iLow)) { if (ip[-1] != *bytePtr) break; ip--; bytePtr--; } } return (unsigned)(iStart - ip); } /* LZ4HC_protectDictEnd() : * Checks if the match is in the last 3 bytes of the dictionary, so reading the * 4 byte MINMATCH would overflow. * @returns true if the match index is okay. */ static int LZ4HC_protectDictEnd(U32 const dictLimit, U32 const matchIndex) { return ((U32)((dictLimit - 1) - matchIndex) >= 3); } typedef enum { rep_untested, rep_not, rep_confirmed } repeat_state_e; typedef enum { favorCompressionRatio=0, favorDecompressionSpeed } HCfavor_e; LZ4_FORCE_INLINE int LZ4HC_InsertAndGetWiderMatch ( LZ4HC_CCtx_internal* hc4, const BYTE* const ip, const BYTE* const iLowLimit, const BYTE* const iHighLimit, int longest, const BYTE** matchpos, const BYTE** startpos, const int maxNbAttempts, const int patternAnalysis, const int chainSwap, const dictCtx_directive dict, const HCfavor_e favorDecSpeed) { U16* const chainTable = hc4->chainTable; U32* const HashTable = hc4->hashTable; const LZ4HC_CCtx_internal * const dictCtx = hc4->dictCtx; const BYTE* const base = hc4->base; const U32 dictLimit = hc4->dictLimit; const BYTE* const lowPrefixPtr = base + dictLimit; const U32 ipIndex = (U32)(ip - base); const U32 lowestMatchIndex = (hc4->lowLimit + (LZ4_DISTANCE_MAX + 1) > ipIndex) ? 
hc4->lowLimit : ipIndex - LZ4_DISTANCE_MAX; const BYTE* const dictBase = hc4->dictBase; int const lookBackLength = (int)(ip-iLowLimit); int nbAttempts = maxNbAttempts; U32 matchChainPos = 0; U32 const pattern = LZ4_read32(ip); U32 matchIndex; repeat_state_e repeat = rep_untested; size_t srcPatternLength = 0; DEBUGLOG(7, "LZ4HC_InsertAndGetWiderMatch"); /* First Match */ LZ4HC_Insert(hc4, ip); matchIndex = HashTable[LZ4HC_hashPtr(ip)]; DEBUGLOG(7, "First match at index %u / %u (lowestMatchIndex)", matchIndex, lowestMatchIndex); while ((matchIndex>=lowestMatchIndex) && (nbAttempts)) { int matchLength=0; nbAttempts--; assert(matchIndex < ipIndex); if (favorDecSpeed && (ipIndex - matchIndex < 8)) { /* do nothing */ } else if (matchIndex >= dictLimit) { /* within current Prefix */ const BYTE* const matchPtr = base + matchIndex; assert(matchPtr >= lowPrefixPtr); assert(matchPtr < ip); assert(longest >= 1); if (LZ4_read16(iLowLimit + longest - 1) == LZ4_read16(matchPtr - lookBackLength + longest - 1)) { if (LZ4_read32(matchPtr) == pattern) { int const back = lookBackLength ? LZ4HC_countBack(ip, matchPtr, iLowLimit, lowPrefixPtr) : 0; matchLength = MINMATCH + (int)LZ4_count(ip+MINMATCH, matchPtr+MINMATCH, iHighLimit); matchLength -= back; if (matchLength > longest) { longest = matchLength; *matchpos = matchPtr + back; *startpos = ip + back; } } } } else { /* lowestMatchIndex <= matchIndex < dictLimit */ const BYTE* const matchPtr = dictBase + matchIndex; if (LZ4_read32(matchPtr) == pattern) { const BYTE* const dictStart = dictBase + hc4->lowLimit; int back = 0; const BYTE* vLimit = ip + (dictLimit - matchIndex); if (vLimit > iHighLimit) vLimit = iHighLimit; matchLength = (int)LZ4_count(ip+MINMATCH, matchPtr+MINMATCH, vLimit) + MINMATCH; if ((ip+matchLength == vLimit) && (vLimit < iHighLimit)) matchLength += LZ4_count(ip+matchLength, lowPrefixPtr, iHighLimit); back = lookBackLength ? 
LZ4HC_countBack(ip, matchPtr, iLowLimit, dictStart) : 0; matchLength -= back; if (matchLength > longest) { longest = matchLength; *matchpos = base + matchIndex + back; /* virtual pos, relative to ip, to retrieve offset */ *startpos = ip + back; } } } if (chainSwap && matchLength==longest) { /* better match => select a better chain */ assert(lookBackLength==0); /* search forward only */ if (matchIndex + (U32)longest <= ipIndex) { int const kTrigger = 4; U32 distanceToNextMatch = 1; int const end = longest - MINMATCH + 1; int step = 1; int accel = 1 << kTrigger; int pos; for (pos = 0; pos < end; pos += step) { U32 const candidateDist = DELTANEXTU16(chainTable, matchIndex + (U32)pos); step = (accel++ >> kTrigger); if (candidateDist > distanceToNextMatch) { distanceToNextMatch = candidateDist; matchChainPos = (U32)pos; accel = 1 << kTrigger; } } if (distanceToNextMatch > 1) { if (distanceToNextMatch > matchIndex) break; /* avoid overflow */ matchIndex -= distanceToNextMatch; continue; } } } { U32 const distNextMatch = DELTANEXTU16(chainTable, matchIndex); if (patternAnalysis && distNextMatch==1 && matchChainPos==0) { U32 const matchCandidateIdx = matchIndex-1; /* may be a repeated pattern */ if (repeat == rep_untested) { if ( ((pattern & 0xFFFF) == (pattern >> 16)) & ((pattern & 0xFF) == (pattern >> 24)) ) { repeat = rep_confirmed; srcPatternLength = LZ4HC_countPattern(ip+sizeof(pattern), iHighLimit, pattern) + sizeof(pattern); } else { repeat = rep_not; } } if ( (repeat == rep_confirmed) && (matchCandidateIdx >= lowestMatchIndex) && LZ4HC_protectDictEnd(dictLimit, matchCandidateIdx) ) { const int extDict = matchCandidateIdx < dictLimit; const BYTE* const matchPtr = (extDict ? dictBase : base) + matchCandidateIdx; if (LZ4_read32(matchPtr) == pattern) { /* good candidate */ const BYTE* const dictStart = dictBase + hc4->lowLimit; const BYTE* const iLimit = extDict ? dictBase + dictLimit : iHighLimit; size_t forwardPatternLength = LZ4HC_countPattern(matchPtr+sizeof(pattern), iLimit, pattern) + sizeof(pattern); if (extDict && matchPtr + forwardPatternLength == iLimit) { U32 const rotatedPattern = LZ4HC_rotatePattern(forwardPatternLength, pattern); forwardPatternLength += LZ4HC_countPattern(lowPrefixPtr, iHighLimit, rotatedPattern); } { const BYTE* const lowestMatchPtr = extDict ? 
dictStart : lowPrefixPtr; size_t backLength = LZ4HC_reverseCountPattern(matchPtr, lowestMatchPtr, pattern); size_t currentSegmentLength; if (!extDict && matchPtr - backLength == lowPrefixPtr && hc4->lowLimit < dictLimit) { U32 const rotatedPattern = LZ4HC_rotatePattern((U32)(-(int)backLength), pattern); backLength += LZ4HC_reverseCountPattern(dictBase + dictLimit, dictStart, rotatedPattern); } /* Limit backLength not go further than lowestMatchIndex */ backLength = matchCandidateIdx - MAX(matchCandidateIdx - (U32)backLength, lowestMatchIndex); assert(matchCandidateIdx - backLength >= lowestMatchIndex); currentSegmentLength = backLength + forwardPatternLength; /* Adjust to end of pattern if the source pattern fits, otherwise the beginning of the pattern */ if ( (currentSegmentLength >= srcPatternLength) /* current pattern segment large enough to contain full srcPatternLength */ && (forwardPatternLength <= srcPatternLength) ) { /* haven't reached this position yet */ U32 const newMatchIndex = matchCandidateIdx + (U32)forwardPatternLength - (U32)srcPatternLength; /* best position, full pattern, might be followed by more match */ if (LZ4HC_protectDictEnd(dictLimit, newMatchIndex)) matchIndex = newMatchIndex; else { /* Can only happen if started in the prefix */ assert(newMatchIndex >= dictLimit - 3 && newMatchIndex < dictLimit && !extDict); matchIndex = dictLimit; } } else { U32 const newMatchIndex = matchCandidateIdx - (U32)backLength; /* farthest position in current segment, will find a match of length currentSegmentLength + maybe some back */ if (!LZ4HC_protectDictEnd(dictLimit, newMatchIndex)) { assert(newMatchIndex >= dictLimit - 3 && newMatchIndex < dictLimit && !extDict); matchIndex = dictLimit; } else { matchIndex = newMatchIndex; if (lookBackLength==0) { /* no back possible */ size_t const maxML = MIN(currentSegmentLength, srcPatternLength); if ((size_t)longest < maxML) { assert(base + matchIndex < ip); if (ip - (base+matchIndex) > LZ4_DISTANCE_MAX) break; assert(maxML < 2 GB); longest = (int)maxML; *matchpos = base + matchIndex; /* virtual pos, relative to ip, to retrieve offset */ *startpos = ip; } { U32 const distToNextPattern = DELTANEXTU16(chainTable, matchIndex); if (distToNextPattern > matchIndex) break; /* avoid overflow */ matchIndex -= distToNextPattern; } } } } } continue; } } } } /* PA optimization */ /* follow current chain */ matchIndex -= DELTANEXTU16(chainTable, matchIndex + matchChainPos); } /* while ((matchIndex>=lowestMatchIndex) && (nbAttempts)) */ if ( dict == usingDictCtxHc && nbAttempts && ipIndex - lowestMatchIndex < LZ4_DISTANCE_MAX) { size_t const dictEndOffset = (size_t)(dictCtx->end - dictCtx->base); U32 dictMatchIndex = dictCtx->hashTable[LZ4HC_hashPtr(ip)]; assert(dictEndOffset <= 1 GB); matchIndex = dictMatchIndex + lowestMatchIndex - (U32)dictEndOffset; while (ipIndex - matchIndex <= LZ4_DISTANCE_MAX && nbAttempts--) { const BYTE* const matchPtr = dictCtx->base + dictMatchIndex; if (LZ4_read32(matchPtr) == pattern) { int mlt; int back = 0; const BYTE* vLimit = ip + (dictEndOffset - dictMatchIndex); if (vLimit > iHighLimit) vLimit = iHighLimit; mlt = (int)LZ4_count(ip+MINMATCH, matchPtr+MINMATCH, vLimit) + MINMATCH; back = lookBackLength ? 
LZ4HC_countBack(ip, matchPtr, iLowLimit, dictCtx->base + dictCtx->dictLimit) : 0; mlt -= back; if (mlt > longest) { longest = mlt; *matchpos = base + matchIndex + back; *startpos = ip + back; } } { U32 const nextOffset = DELTANEXTU16(dictCtx->chainTable, dictMatchIndex); dictMatchIndex -= nextOffset; matchIndex -= nextOffset; } } } return longest; } LZ4_FORCE_INLINE int LZ4HC_InsertAndFindBestMatch(LZ4HC_CCtx_internal* const hc4, /* Index table will be updated */ const BYTE* const ip, const BYTE* const iLimit, const BYTE** matchpos, const int maxNbAttempts, const int patternAnalysis, const dictCtx_directive dict) { const BYTE* uselessPtr = ip; /* note : LZ4HC_InsertAndGetWiderMatch() is able to modify the starting position of a match (*startpos), * but this won't be the case here, as we define iLowLimit==ip, * so LZ4HC_InsertAndGetWiderMatch() won't be allowed to search past ip */ return LZ4HC_InsertAndGetWiderMatch(hc4, ip, ip, iLimit, MINMATCH-1, matchpos, &uselessPtr, maxNbAttempts, patternAnalysis, 0 /*chainSwap*/, dict, favorCompressionRatio); } /* LZ4HC_encodeSequence() : * @return : 0 if ok, * 1 if buffer issue detected */ LZ4_FORCE_INLINE int LZ4HC_encodeSequence ( const BYTE** ip, BYTE** op, const BYTE** anchor, int matchLength, const BYTE* const match, limitedOutput_directive limit, BYTE* oend) { size_t length; BYTE* const token = (*op)++; #if defined(LZ4_DEBUG) && (LZ4_DEBUG >= 6) static const BYTE* start = NULL; static U32 totalCost = 0; U32 const pos = (start==NULL) ? 0 : (U32)(*anchor - start); U32 const ll = (U32)(*ip - *anchor); U32 const llAdd = (ll>=15) ? ((ll-15) / 255) + 1 : 0; U32 const mlAdd = (matchLength>=19) ? ((matchLength-19) / 255) + 1 : 0; U32 const cost = 1 + llAdd + ll + 2 + mlAdd; if (start==NULL) start = *anchor; /* only works for single segment */ /* g_debuglog_enable = (pos >= 2228) & (pos <= 2262); */ DEBUGLOG(6, "pos:%7u -- literals:%3u, match:%4i, offset:%5u, cost:%3u + %u", pos, (U32)(*ip - *anchor), matchLength, (U32)(*ip-match), cost, totalCost); totalCost += cost; #endif /* Encode Literal length */ length = (size_t)(*ip - *anchor); if ((limit) && ((*op + (length / 255) + length + (2 + 1 + LASTLITERALS)) > oend)) return 1; /* Check output limit */ if (length >= RUN_MASK) { size_t len = length - RUN_MASK; *token = (RUN_MASK << ML_BITS); for(; len >= 255 ; len -= 255) *(*op)++ = 255; *(*op)++ = (BYTE)len; } else { *token = (BYTE)(length << ML_BITS); } /* Copy Literals */ LZ4_wildCopy8(*op, *anchor, (*op) + length); *op += length; /* Encode Offset */ assert( (*ip - match) <= LZ4_DISTANCE_MAX ); /* note : consider providing offset as a value, rather than as a pointer difference */ LZ4_writeLE16(*op, (U16)(*ip-match)); *op += 2; /* Encode MatchLength */ assert(matchLength >= MINMATCH); length = (size_t)matchLength - MINMATCH; if ((limit) && (*op + (length / 255) + (1 + LASTLITERALS) > oend)) return 1; /* Check output limit */ if (length >= ML_MASK) { *token += ML_MASK; length -= ML_MASK; for(; length >= 510 ; length -= 510) { *(*op)++ = 255; *(*op)++ = 255; } if (length >= 255) { length -= 255; *(*op)++ = 255; } *(*op)++ = (BYTE)length; } else { *token += (BYTE)(length); } /* Prepare next loop */ *ip += matchLength; *anchor = *ip; return 0; } LZ4_FORCE_INLINE int LZ4HC_compress_hashChain ( LZ4HC_CCtx_internal* const ctx, const char* const source, char* const dest, int* srcSizePtr, int const maxOutputSize, unsigned maxNbAttempts, const limitedOutput_directive limit, const dictCtx_directive dict ) { const int inputSize = *srcSizePtr; const int 
patternAnalysis = (maxNbAttempts > 128); /* levels 9+ */ const BYTE* ip = (const BYTE*) source; const BYTE* anchor = ip; const BYTE* const iend = ip + inputSize; const BYTE* const mflimit = iend - MFLIMIT; const BYTE* const matchlimit = (iend - LASTLITERALS); BYTE* optr = (BYTE*) dest; BYTE* op = (BYTE*) dest; BYTE* oend = op + maxOutputSize; int ml0, ml, ml2, ml3; const BYTE* start0; const BYTE* ref0; const BYTE* ref = NULL; const BYTE* start2 = NULL; const BYTE* ref2 = NULL; const BYTE* start3 = NULL; const BYTE* ref3 = NULL; /* init */ *srcSizePtr = 0; if (limit == fillOutput) oend -= LASTLITERALS; /* Hack for support LZ4 format restriction */ if (inputSize < LZ4_minLength) goto _last_literals; /* Input too small, no compression (all literals) */ /* Main Loop */ while (ip <= mflimit) { ml = LZ4HC_InsertAndFindBestMatch(ctx, ip, matchlimit, &ref, maxNbAttempts, patternAnalysis, dict); if (ml encode ML1 */ optr = op; if (LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), ml, ref, limit, oend)) goto _dest_overflow; continue; } if (start0 < ip) { /* first match was skipped at least once */ if (start2 < ip + ml0) { /* squeezing ML1 between ML0(original ML1) and ML2 */ ip = start0; ref = ref0; ml = ml0; /* restore initial ML1 */ } } /* Here, start0==ip */ if ((start2 - ip) < 3) { /* First Match too small : removed */ ml = ml2; ip = start2; ref =ref2; goto _Search2; } _Search3: /* At this stage, we have : * ml2 > ml1, and * ip1+3 <= ip2 (usually < ip1+ml1) */ if ((start2 - ip) < OPTIMAL_ML) { int correction; int new_ml = ml; if (new_ml > OPTIMAL_ML) new_ml = OPTIMAL_ML; if (ip+new_ml > start2 + ml2 - MINMATCH) new_ml = (int)(start2 - ip) + ml2 - MINMATCH; correction = new_ml - (int)(start2 - ip); if (correction > 0) { start2 += correction; ref2 += correction; ml2 -= correction; } } /* Now, we have start2 = ip+new_ml, with new_ml = min(ml, OPTIMAL_ML=18) */ if (start2 + ml2 <= mflimit) { ml3 = LZ4HC_InsertAndGetWiderMatch(ctx, start2 + ml2 - 3, start2, matchlimit, ml2, &ref3, &start3, maxNbAttempts, patternAnalysis, 0, dict, favorCompressionRatio); } else { ml3 = ml2; } if (ml3 == ml2) { /* No better match => encode ML1 and ML2 */ /* ip & ref are known; Now for ml */ if (start2 < ip+ml) ml = (int)(start2 - ip); /* Now, encode 2 sequences */ optr = op; if (LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), ml, ref, limit, oend)) goto _dest_overflow; ip = start2; optr = op; if (LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), ml2, ref2, limit, oend)) goto _dest_overflow; continue; } if (start3 < ip+ml+3) { /* Not enough space for match 2 : remove it */ if (start3 >= (ip+ml)) { /* can write Seq1 immediately ==> Seq2 is removed, so Seq3 becomes Seq1 */ if (start2 < ip+ml) { int correction = (int)(ip+ml - start2); start2 += correction; ref2 += correction; ml2 -= correction; if (ml2 < MINMATCH) { start2 = start3; ref2 = ref3; ml2 = ml3; } } optr = op; if (LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), ml, ref, limit, oend)) goto _dest_overflow; ip = start3; ref = ref3; ml = ml3; start0 = start2; ref0 = ref2; ml0 = ml2; goto _Search2; } start2 = start3; ref2 = ref3; ml2 = ml3; goto _Search3; } /* * OK, now we have 3 ascending matches; * let's write the first one ML1. * ip & ref are known; Now decide ml. 
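 * (the code below resolves any overlap with ML2 : when the gap to ML2 is small,
 *  ml is capped at OPTIMAL_ML and trimmed so that ML2 keeps at least MINMATCH bytes,
 *  with ML2's start shifted forward accordingly; otherwise ML1 simply stops where ML2 begins)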
*/ if (start2 < ip+ml) { if ((start2 - ip) < OPTIMAL_ML) { int correction; if (ml > OPTIMAL_ML) ml = OPTIMAL_ML; if (ip + ml > start2 + ml2 - MINMATCH) ml = (int)(start2 - ip) + ml2 - MINMATCH; correction = ml - (int)(start2 - ip); if (correction > 0) { start2 += correction; ref2 += correction; ml2 -= correction; } } else { ml = (int)(start2 - ip); } } optr = op; if (LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), ml, ref, limit, oend)) goto _dest_overflow; /* ML2 becomes ML1 */ ip = start2; ref = ref2; ml = ml2; /* ML3 becomes ML2 */ start2 = start3; ref2 = ref3; ml2 = ml3; /* let's find a new ML3 */ goto _Search3; } _last_literals: /* Encode Last Literals */ { size_t lastRunSize = (size_t)(iend - anchor); /* literals */ size_t litLength = (lastRunSize + 255 - RUN_MASK) / 255; size_t const totalSize = 1 + litLength + lastRunSize; if (limit == fillOutput) oend += LASTLITERALS; /* restore correct value */ if (limit && (op + totalSize > oend)) { if (limit == limitedOutput) return 0; /* Check output limit */ /* adapt lastRunSize to fill 'dest' */ lastRunSize = (size_t)(oend - op) - 1; litLength = (lastRunSize + 255 - RUN_MASK) / 255; lastRunSize -= litLength; } ip = anchor + lastRunSize; if (lastRunSize >= RUN_MASK) { size_t accumulator = lastRunSize - RUN_MASK; *op++ = (RUN_MASK << ML_BITS); for(; accumulator >= 255 ; accumulator -= 255) *op++ = 255; *op++ = (BYTE) accumulator; } else { *op++ = (BYTE)(lastRunSize << ML_BITS); } memcpy(op, anchor, lastRunSize); op += lastRunSize; } /* End */ *srcSizePtr = (int) (((const char*)ip) - source); return (int) (((char*)op)-dest); _dest_overflow: if (limit == fillOutput) { op = optr; /* restore correct out pointer */ goto _last_literals; } return 0; } static int LZ4HC_compress_optimal( LZ4HC_CCtx_internal* ctx, const char* const source, char* dst, int* srcSizePtr, int dstCapacity, int const nbSearches, size_t sufficient_len, const limitedOutput_directive limit, int const fullUpdate, const dictCtx_directive dict, HCfavor_e favorDecSpeed); LZ4_FORCE_INLINE int LZ4HC_compress_generic_internal ( LZ4HC_CCtx_internal* const ctx, const char* const src, char* const dst, int* const srcSizePtr, int const dstCapacity, int cLevel, const limitedOutput_directive limit, const dictCtx_directive dict ) { typedef enum { lz4hc, lz4opt } lz4hc_strat_e; typedef struct { lz4hc_strat_e strat; U32 nbSearches; U32 targetLength; } cParams_t; static const cParams_t clTable[LZ4HC_CLEVEL_MAX+1] = { { lz4hc, 2, 16 }, /* 0, unused */ { lz4hc, 2, 16 }, /* 1, unused */ { lz4hc, 2, 16 }, /* 2, unused */ { lz4hc, 4, 16 }, /* 3 */ { lz4hc, 8, 16 }, /* 4 */ { lz4hc, 16, 16 }, /* 5 */ { lz4hc, 32, 16 }, /* 6 */ { lz4hc, 64, 16 }, /* 7 */ { lz4hc, 128, 16 }, /* 8 */ { lz4hc, 256, 16 }, /* 9 */ { lz4opt, 96, 64 }, /*10==LZ4HC_CLEVEL_OPT_MIN*/ { lz4opt, 512,128 }, /*11 */ { lz4opt,16384,LZ4_OPT_NUM }, /* 12==LZ4HC_CLEVEL_MAX */ }; DEBUGLOG(4, "LZ4HC_compress_generic(ctx=%p, src=%p, srcSize=%d)", ctx, src, *srcSizePtr); if (limit == fillOutput && dstCapacity < 1) return 0; /* Impossible to store anything */ if ((U32)*srcSizePtr > (U32)LZ4_MAX_INPUT_SIZE) return 0; /* Unsupported input size (too large or negative) */ ctx->end += *srcSizePtr; if (cLevel < 1) cLevel = LZ4HC_CLEVEL_DEFAULT; /* note : convention is different from lz4frame, maybe something to review */ cLevel = MIN(LZ4HC_CLEVEL_MAX, cLevel); { cParams_t const cParam = clTable[cLevel]; HCfavor_e const favor = ctx->favorDecSpeed ? 
favorDecompressionSpeed : favorCompressionRatio; int result; if (cParam.strat == lz4hc) { result = LZ4HC_compress_hashChain(ctx, src, dst, srcSizePtr, dstCapacity, cParam.nbSearches, limit, dict); } else { assert(cParam.strat == lz4opt); result = LZ4HC_compress_optimal(ctx, src, dst, srcSizePtr, dstCapacity, (int)cParam.nbSearches, cParam.targetLength, limit, cLevel == LZ4HC_CLEVEL_MAX, /* ultra mode */ dict, favor); } if (result <= 0) ctx->dirty = 1; return result; } } static void LZ4HC_setExternalDict(LZ4HC_CCtx_internal* ctxPtr, const BYTE* newBlock); static int LZ4HC_compress_generic_noDictCtx ( LZ4HC_CCtx_internal* const ctx, const char* const src, char* const dst, int* const srcSizePtr, int const dstCapacity, int cLevel, limitedOutput_directive limit ) { assert(ctx->dictCtx == NULL); return LZ4HC_compress_generic_internal(ctx, src, dst, srcSizePtr, dstCapacity, cLevel, limit, noDictCtx); } static int LZ4HC_compress_generic_dictCtx ( LZ4HC_CCtx_internal* const ctx, const char* const src, char* const dst, int* const srcSizePtr, int const dstCapacity, int cLevel, limitedOutput_directive limit ) { const size_t position = (size_t)(ctx->end - ctx->base) - ctx->lowLimit; assert(ctx->dictCtx != NULL); if (position >= 64 KB) { ctx->dictCtx = NULL; return LZ4HC_compress_generic_noDictCtx(ctx, src, dst, srcSizePtr, dstCapacity, cLevel, limit); } else if (position == 0 && *srcSizePtr > 4 KB) { memcpy(ctx, ctx->dictCtx, sizeof(LZ4HC_CCtx_internal)); LZ4HC_setExternalDict(ctx, (const BYTE *)src); ctx->compressionLevel = (short)cLevel; return LZ4HC_compress_generic_noDictCtx(ctx, src, dst, srcSizePtr, dstCapacity, cLevel, limit); } else { return LZ4HC_compress_generic_internal(ctx, src, dst, srcSizePtr, dstCapacity, cLevel, limit, usingDictCtxHc); } } static int LZ4HC_compress_generic ( LZ4HC_CCtx_internal* const ctx, const char* const src, char* const dst, int* const srcSizePtr, int const dstCapacity, int cLevel, limitedOutput_directive limit ) { if (ctx->dictCtx == NULL) { return LZ4HC_compress_generic_noDictCtx(ctx, src, dst, srcSizePtr, dstCapacity, cLevel, limit); } else { return LZ4HC_compress_generic_dictCtx(ctx, src, dst, srcSizePtr, dstCapacity, cLevel, limit); } } int LZ4_sizeofStateHC(void) { return (int)sizeof(LZ4_streamHC_t); } #ifndef _MSC_VER /* for some reason, Visual fails the aligment test on 32-bit x86 : * it reports an aligment of 8-bytes, * while actually aligning LZ4_streamHC_t on 4 bytes. */ static size_t LZ4_streamHC_t_alignment(void) { struct { char c; LZ4_streamHC_t t; } t_a; return sizeof(t_a) - sizeof(t_a.t); } #endif /* state is presumed correctly initialized, * in which case its size and alignment have already been validate */ int LZ4_compress_HC_extStateHC_fastReset (void* state, const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel) { LZ4HC_CCtx_internal* const ctx = &((LZ4_streamHC_t*)state)->internal_donotuse; #ifndef _MSC_VER /* for some reason, Visual fails the aligment test on 32-bit x86 : * it reports an aligment of 8-bytes, * while actually aligning LZ4_streamHC_t on 4 bytes. 
*/ assert(((size_t)state & (LZ4_streamHC_t_alignment() - 1)) == 0); /* check alignment */ #endif if (((size_t)(state)&(sizeof(void*)-1)) != 0) return 0; /* Error : state is not aligned for pointers (32 or 64 bits) */ LZ4_resetStreamHC_fast((LZ4_streamHC_t*)state, compressionLevel); LZ4HC_init_internal (ctx, (const BYTE*)src); if (dstCapacity < LZ4_compressBound(srcSize)) return LZ4HC_compress_generic (ctx, src, dst, &srcSize, dstCapacity, compressionLevel, limitedOutput); else return LZ4HC_compress_generic (ctx, src, dst, &srcSize, dstCapacity, compressionLevel, notLimited); } int LZ4_compress_HC_extStateHC (void* state, const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel) { LZ4_streamHC_t* const ctx = LZ4_initStreamHC(state, sizeof(*ctx)); if (ctx==NULL) return 0; /* init failure */ return LZ4_compress_HC_extStateHC_fastReset(state, src, dst, srcSize, dstCapacity, compressionLevel); } int LZ4_compress_HC(const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel) { #if defined(LZ4HC_HEAPMODE) && LZ4HC_HEAPMODE==1 LZ4_streamHC_t* const statePtr = (LZ4_streamHC_t*)ALLOC(sizeof(LZ4_streamHC_t)); #else LZ4_streamHC_t state; LZ4_streamHC_t* const statePtr = &state; #endif int const cSize = LZ4_compress_HC_extStateHC(statePtr, src, dst, srcSize, dstCapacity, compressionLevel); #if defined(LZ4HC_HEAPMODE) && LZ4HC_HEAPMODE==1 FREEMEM(statePtr); #endif return cSize; } /* state is presumed sized correctly (>= sizeof(LZ4_streamHC_t)) */ int LZ4_compress_HC_destSize(void* state, const char* source, char* dest, int* sourceSizePtr, int targetDestSize, int cLevel) { LZ4_streamHC_t* const ctx = LZ4_initStreamHC(state, sizeof(*ctx)); if (ctx==NULL) return 0; /* init failure */ LZ4HC_init_internal(&ctx->internal_donotuse, (const BYTE*) source); LZ4_setCompressionLevel(ctx, cLevel); return LZ4HC_compress_generic(&ctx->internal_donotuse, source, dest, sourceSizePtr, targetDestSize, cLevel, fillOutput); } /************************************** * Streaming Functions **************************************/ /* allocation */ LZ4_streamHC_t* LZ4_createStreamHC(void) { LZ4_streamHC_t* const LZ4_streamHCPtr = (LZ4_streamHC_t*)ALLOC(sizeof(LZ4_streamHC_t)); if (LZ4_streamHCPtr==NULL) return NULL; LZ4_initStreamHC(LZ4_streamHCPtr, sizeof(*LZ4_streamHCPtr)); /* full initialization, malloc'ed buffer can be full of garbage */ return LZ4_streamHCPtr; } int LZ4_freeStreamHC (LZ4_streamHC_t* LZ4_streamHCPtr) { DEBUGLOG(4, "LZ4_freeStreamHC(%p)", LZ4_streamHCPtr); if (!LZ4_streamHCPtr) return 0; /* support free on NULL */ FREEMEM(LZ4_streamHCPtr); return 0; } LZ4_streamHC_t* LZ4_initStreamHC (void* buffer, size_t size) { LZ4_streamHC_t* const LZ4_streamHCPtr = (LZ4_streamHC_t*)buffer; if (buffer == NULL) return NULL; if (size < sizeof(LZ4_streamHC_t)) return NULL; #ifndef _MSC_VER /* for some reason, Visual fails the aligment test on 32-bit x86 : * it reports an aligment of 8-bytes, * while actually aligning LZ4_streamHC_t on 4 bytes. 
*/ if (((size_t)buffer) & (LZ4_streamHC_t_alignment() - 1)) return NULL; /* alignment check */ #endif /* if compilation fails here, LZ4_STREAMHCSIZE must be increased */ LZ4_STATIC_ASSERT(sizeof(LZ4HC_CCtx_internal) <= LZ4_STREAMHCSIZE); DEBUGLOG(4, "LZ4_initStreamHC(%p, %u)", LZ4_streamHCPtr, (unsigned)size); /* end-base will trigger a clearTable on starting compression */ LZ4_streamHCPtr->internal_donotuse.end = (const BYTE *)(ptrdiff_t)-1; LZ4_streamHCPtr->internal_donotuse.base = NULL; LZ4_streamHCPtr->internal_donotuse.dictCtx = NULL; LZ4_streamHCPtr->internal_donotuse.favorDecSpeed = 0; LZ4_streamHCPtr->internal_donotuse.dirty = 0; LZ4_setCompressionLevel(LZ4_streamHCPtr, LZ4HC_CLEVEL_DEFAULT); return LZ4_streamHCPtr; } /* just a stub */ void LZ4_resetStreamHC (LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel) { LZ4_initStreamHC(LZ4_streamHCPtr, sizeof(*LZ4_streamHCPtr)); LZ4_setCompressionLevel(LZ4_streamHCPtr, compressionLevel); } void LZ4_resetStreamHC_fast (LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel) { DEBUGLOG(4, "LZ4_resetStreamHC_fast(%p, %d)", LZ4_streamHCPtr, compressionLevel); if (LZ4_streamHCPtr->internal_donotuse.dirty) { LZ4_initStreamHC(LZ4_streamHCPtr, sizeof(*LZ4_streamHCPtr)); } else { /* preserve end - base : can trigger clearTable's threshold */ LZ4_streamHCPtr->internal_donotuse.end -= (uptrval)LZ4_streamHCPtr->internal_donotuse.base; LZ4_streamHCPtr->internal_donotuse.base = NULL; LZ4_streamHCPtr->internal_donotuse.dictCtx = NULL; } LZ4_setCompressionLevel(LZ4_streamHCPtr, compressionLevel); } void LZ4_setCompressionLevel(LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel) { DEBUGLOG(5, "LZ4_setCompressionLevel(%p, %d)", LZ4_streamHCPtr, compressionLevel); if (compressionLevel < 1) compressionLevel = LZ4HC_CLEVEL_DEFAULT; if (compressionLevel > LZ4HC_CLEVEL_MAX) compressionLevel = LZ4HC_CLEVEL_MAX; LZ4_streamHCPtr->internal_donotuse.compressionLevel = (short)compressionLevel; } void LZ4_favorDecompressionSpeed(LZ4_streamHC_t* LZ4_streamHCPtr, int favor) { LZ4_streamHCPtr->internal_donotuse.favorDecSpeed = (favor!=0); } /* LZ4_loadDictHC() : * LZ4_streamHCPtr is presumed properly initialized */ int LZ4_loadDictHC (LZ4_streamHC_t* LZ4_streamHCPtr, const char* dictionary, int dictSize) { LZ4HC_CCtx_internal* const ctxPtr = &LZ4_streamHCPtr->internal_donotuse; DEBUGLOG(4, "LZ4_loadDictHC(%p, %p, %d)", LZ4_streamHCPtr, dictionary, dictSize); assert(LZ4_streamHCPtr != NULL); if (dictSize > 64 KB) { dictionary += (size_t)dictSize - 64 KB; dictSize = 64 KB; } /* need a full initialization, there are bad side-effects when using resetFast() */ { int const cLevel = ctxPtr->compressionLevel; LZ4_initStreamHC(LZ4_streamHCPtr, sizeof(*LZ4_streamHCPtr)); LZ4_setCompressionLevel(LZ4_streamHCPtr, cLevel); } LZ4HC_init_internal (ctxPtr, (const BYTE*)dictionary); ctxPtr->end = (const BYTE*)dictionary + dictSize; if (dictSize >= 4) LZ4HC_Insert (ctxPtr, ctxPtr->end-3); return dictSize; } void LZ4_attach_HC_dictionary(LZ4_streamHC_t *working_stream, const LZ4_streamHC_t *dictionary_stream) { working_stream->internal_donotuse.dictCtx = dictionary_stream != NULL ? 
&(dictionary_stream->internal_donotuse) : NULL; } /* compression */ static void LZ4HC_setExternalDict(LZ4HC_CCtx_internal* ctxPtr, const BYTE* newBlock) { DEBUGLOG(4, "LZ4HC_setExternalDict(%p, %p)", ctxPtr, newBlock); if (ctxPtr->end >= ctxPtr->base + ctxPtr->dictLimit + 4) LZ4HC_Insert (ctxPtr, ctxPtr->end-3); /* Referencing remaining dictionary content */ /* Only one memory segment for extDict, so any previous extDict is lost at this stage */ ctxPtr->lowLimit = ctxPtr->dictLimit; ctxPtr->dictLimit = (U32)(ctxPtr->end - ctxPtr->base); ctxPtr->dictBase = ctxPtr->base; ctxPtr->base = newBlock - ctxPtr->dictLimit; ctxPtr->end = newBlock; ctxPtr->nextToUpdate = ctxPtr->dictLimit; /* match referencing will resume from there */ /* cannot reference an extDict and a dictCtx at the same time */ ctxPtr->dictCtx = NULL; } static int LZ4_compressHC_continue_generic (LZ4_streamHC_t* LZ4_streamHCPtr, const char* src, char* dst, int* srcSizePtr, int dstCapacity, limitedOutput_directive limit) { LZ4HC_CCtx_internal* const ctxPtr = &LZ4_streamHCPtr->internal_donotuse; DEBUGLOG(4, "LZ4_compressHC_continue_generic(ctx=%p, src=%p, srcSize=%d)", LZ4_streamHCPtr, src, *srcSizePtr); assert(ctxPtr != NULL); /* auto-init if forgotten */ if (ctxPtr->base == NULL) LZ4HC_init_internal (ctxPtr, (const BYTE*) src); /* Check overflow */ if ((size_t)(ctxPtr->end - ctxPtr->base) > 2 GB) { size_t dictSize = (size_t)(ctxPtr->end - ctxPtr->base) - ctxPtr->dictLimit; if (dictSize > 64 KB) dictSize = 64 KB; LZ4_loadDictHC(LZ4_streamHCPtr, (const char*)(ctxPtr->end) - dictSize, (int)dictSize); } /* Check if blocks follow each other */ if ((const BYTE*)src != ctxPtr->end) LZ4HC_setExternalDict(ctxPtr, (const BYTE*)src); /* Check overlapping input/dictionary space */ { const BYTE* sourceEnd = (const BYTE*) src + *srcSizePtr; const BYTE* const dictBegin = ctxPtr->dictBase + ctxPtr->lowLimit; const BYTE* const dictEnd = ctxPtr->dictBase + ctxPtr->dictLimit; if ((sourceEnd > dictBegin) && ((const BYTE*)src < dictEnd)) { if (sourceEnd > dictEnd) sourceEnd = dictEnd; ctxPtr->lowLimit = (U32)(sourceEnd - ctxPtr->dictBase); if (ctxPtr->dictLimit - ctxPtr->lowLimit < 4) ctxPtr->lowLimit = ctxPtr->dictLimit; } } return LZ4HC_compress_generic (ctxPtr, src, dst, srcSizePtr, dstCapacity, ctxPtr->compressionLevel, limit); } int LZ4_compress_HC_continue (LZ4_streamHC_t* LZ4_streamHCPtr, const char* src, char* dst, int srcSize, int dstCapacity) { if (dstCapacity < LZ4_compressBound(srcSize)) return LZ4_compressHC_continue_generic (LZ4_streamHCPtr, src, dst, &srcSize, dstCapacity, limitedOutput); else return LZ4_compressHC_continue_generic (LZ4_streamHCPtr, src, dst, &srcSize, dstCapacity, notLimited); } int LZ4_compress_HC_continue_destSize (LZ4_streamHC_t* LZ4_streamHCPtr, const char* src, char* dst, int* srcSizePtr, int targetDestSize) { return LZ4_compressHC_continue_generic(LZ4_streamHCPtr, src, dst, srcSizePtr, targetDestSize, fillOutput); } /* dictionary saving */ int LZ4_saveDictHC (LZ4_streamHC_t* LZ4_streamHCPtr, char* safeBuffer, int dictSize) { LZ4HC_CCtx_internal* const streamPtr = &LZ4_streamHCPtr->internal_donotuse; int const prefixSize = (int)(streamPtr->end - (streamPtr->base + streamPtr->dictLimit)); DEBUGLOG(4, "LZ4_saveDictHC(%p, %p, %d)", LZ4_streamHCPtr, safeBuffer, dictSize); if (dictSize > 64 KB) dictSize = 64 KB; if (dictSize < 4) dictSize = 0; if (dictSize > prefixSize) dictSize = prefixSize; memmove(safeBuffer, streamPtr->end - dictSize, dictSize); { U32 const endIndex = (U32)(streamPtr->end - streamPtr->base); 
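/* re-base all indexes so that the bytes just copied into 'safeBuffer' become the
 * current prefix : 'end' now points at the end of the saved dictionary, and
 * dictLimit/lowLimit are moved so that exactly 'dictSize' bytes remain referenceable */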
streamPtr->end = (const BYTE*)safeBuffer + dictSize; streamPtr->base = streamPtr->end - endIndex; streamPtr->dictLimit = endIndex - (U32)dictSize; streamPtr->lowLimit = endIndex - (U32)dictSize; if (streamPtr->nextToUpdate < streamPtr->dictLimit) streamPtr->nextToUpdate = streamPtr->dictLimit; } return dictSize; } /*************************************************** * Deprecated Functions ***************************************************/ /* These functions currently generate deprecation warnings */ /* Wrappers for deprecated compression functions */ int LZ4_compressHC(const char* src, char* dst, int srcSize) { return LZ4_compress_HC (src, dst, srcSize, LZ4_compressBound(srcSize), 0); } int LZ4_compressHC_limitedOutput(const char* src, char* dst, int srcSize, int maxDstSize) { return LZ4_compress_HC(src, dst, srcSize, maxDstSize, 0); } int LZ4_compressHC2(const char* src, char* dst, int srcSize, int cLevel) { return LZ4_compress_HC (src, dst, srcSize, LZ4_compressBound(srcSize), cLevel); } int LZ4_compressHC2_limitedOutput(const char* src, char* dst, int srcSize, int maxDstSize, int cLevel) { return LZ4_compress_HC(src, dst, srcSize, maxDstSize, cLevel); } int LZ4_compressHC_withStateHC (void* state, const char* src, char* dst, int srcSize) { return LZ4_compress_HC_extStateHC (state, src, dst, srcSize, LZ4_compressBound(srcSize), 0); } int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* src, char* dst, int srcSize, int maxDstSize) { return LZ4_compress_HC_extStateHC (state, src, dst, srcSize, maxDstSize, 0); } int LZ4_compressHC2_withStateHC (void* state, const char* src, char* dst, int srcSize, int cLevel) { return LZ4_compress_HC_extStateHC(state, src, dst, srcSize, LZ4_compressBound(srcSize), cLevel); } int LZ4_compressHC2_limitedOutput_withStateHC (void* state, const char* src, char* dst, int srcSize, int maxDstSize, int cLevel) { return LZ4_compress_HC_extStateHC(state, src, dst, srcSize, maxDstSize, cLevel); } int LZ4_compressHC_continue (LZ4_streamHC_t* ctx, const char* src, char* dst, int srcSize) { return LZ4_compress_HC_continue (ctx, src, dst, srcSize, LZ4_compressBound(srcSize)); } int LZ4_compressHC_limitedOutput_continue (LZ4_streamHC_t* ctx, const char* src, char* dst, int srcSize, int maxDstSize) { return LZ4_compress_HC_continue (ctx, src, dst, srcSize, maxDstSize); } /* Deprecated streaming functions */ int LZ4_sizeofStreamStateHC(void) { return LZ4_STREAMHCSIZE; } /* state is presumed correctly sized, aka >= sizeof(LZ4_streamHC_t) * @return : 0 on success, !=0 if error */ int LZ4_resetStreamStateHC(void* state, char* inputBuffer) { LZ4_streamHC_t* const hc4 = LZ4_initStreamHC(state, sizeof(*hc4)); if (hc4 == NULL) return 1; /* init failed */ LZ4HC_init_internal (&hc4->internal_donotuse, (const BYTE*)inputBuffer); return 0; } void* LZ4_createHC (const char* inputBuffer) { LZ4_streamHC_t* const hc4 = LZ4_createStreamHC(); if (hc4 == NULL) return NULL; /* not enough memory */ LZ4HC_init_internal (&hc4->internal_donotuse, (const BYTE*)inputBuffer); return hc4; } int LZ4_freeHC (void* LZ4HC_Data) { if (!LZ4HC_Data) return 0; /* support free on NULL */ FREEMEM(LZ4HC_Data); return 0; } int LZ4_compressHC2_continue (void* LZ4HC_Data, const char* src, char* dst, int srcSize, int cLevel) { return LZ4HC_compress_generic (&((LZ4_streamHC_t*)LZ4HC_Data)->internal_donotuse, src, dst, &srcSize, 0, cLevel, notLimited); } int LZ4_compressHC2_limitedOutput_continue (void* LZ4HC_Data, const char* src, char* dst, int srcSize, int dstCapacity, int cLevel) { return 
LZ4HC_compress_generic (&((LZ4_streamHC_t*)LZ4HC_Data)->internal_donotuse, src, dst, &srcSize, dstCapacity, cLevel, limitedOutput); } char* LZ4_slideInputBufferHC(void* LZ4HC_Data) { LZ4_streamHC_t *ctx = (LZ4_streamHC_t*)LZ4HC_Data; const BYTE *bufferStart = ctx->internal_donotuse.base + ctx->internal_donotuse.lowLimit; LZ4_resetStreamHC_fast(ctx, ctx->internal_donotuse.compressionLevel); /* avoid const char * -> char * conversion warning :( */ return (char *)(uptrval)bufferStart; } /* ================================================ * LZ4 Optimal parser (levels [LZ4HC_CLEVEL_OPT_MIN - LZ4HC_CLEVEL_MAX]) * ===============================================*/ typedef struct { int price; int off; int mlen; int litlen; } LZ4HC_optimal_t; /* price in bytes */ LZ4_FORCE_INLINE int LZ4HC_literalsPrice(int const litlen) { int price = litlen; assert(litlen >= 0); if (litlen >= (int)RUN_MASK) price += 1 + ((litlen-(int)RUN_MASK) / 255); return price; } /* requires mlen >= MINMATCH */ LZ4_FORCE_INLINE int LZ4HC_sequencePrice(int litlen, int mlen) { int price = 1 + 2 ; /* token + 16-bit offset */ assert(litlen >= 0); assert(mlen >= MINMATCH); price += LZ4HC_literalsPrice(litlen); if (mlen >= (int)(ML_MASK+MINMATCH)) price += 1 + ((mlen-(int)(ML_MASK+MINMATCH)) / 255); return price; } typedef struct { int off; int len; } LZ4HC_match_t; LZ4_FORCE_INLINE LZ4HC_match_t LZ4HC_FindLongerMatch(LZ4HC_CCtx_internal* const ctx, const BYTE* ip, const BYTE* const iHighLimit, int minLen, int nbSearches, const dictCtx_directive dict, const HCfavor_e favorDecSpeed) { LZ4HC_match_t match = { 0 , 0 }; const BYTE* matchPtr = NULL; /* note : LZ4HC_InsertAndGetWiderMatch() is able to modify the starting position of a match (*startpos), * but this won't be the case here, as we define iLowLimit==ip, * so LZ4HC_InsertAndGetWiderMatch() won't be allowed to search past ip */ int matchLength = LZ4HC_InsertAndGetWiderMatch(ctx, ip, ip, iHighLimit, minLen, &matchPtr, &ip, nbSearches, 1 /*patternAnalysis*/, 1 /*chainSwap*/, dict, favorDecSpeed); if (matchLength <= minLen) return match; if (favorDecSpeed) { if ((matchLength>18) & (matchLength<=36)) matchLength=18; /* favor shortcut */ } match.len = matchLength; match.off = (int)(ip-matchPtr); return match; } static int LZ4HC_compress_optimal ( LZ4HC_CCtx_internal* ctx, const char* const source, char* dst, int* srcSizePtr, int dstCapacity, int const nbSearches, size_t sufficient_len, const limitedOutput_directive limit, int const fullUpdate, const dictCtx_directive dict, const HCfavor_e favorDecSpeed) { #define TRAILING_LITERALS 3 LZ4HC_optimal_t opt[LZ4_OPT_NUM + TRAILING_LITERALS]; /* ~64 KB, which is a bit large for stack... 
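   (LZ4_OPT_NUM is 1<<12, and each LZ4HC_optimal_t holds four ints,
    so on a typical target with 4-byte int this array is about 4096 * 16 = 64 KB)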
*/ const BYTE* ip = (const BYTE*) source; const BYTE* anchor = ip; const BYTE* const iend = ip + *srcSizePtr; const BYTE* const mflimit = iend - MFLIMIT; const BYTE* const matchlimit = iend - LASTLITERALS; BYTE* op = (BYTE*) dst; BYTE* opSaved = (BYTE*) dst; BYTE* oend = op + dstCapacity; /* init */ DEBUGLOG(5, "LZ4HC_compress_optimal(dst=%p, dstCapa=%u)", dst, (unsigned)dstCapacity); *srcSizePtr = 0; if (limit == fillOutput) oend -= LASTLITERALS; /* Hack for support LZ4 format restriction */ if (sufficient_len >= LZ4_OPT_NUM) sufficient_len = LZ4_OPT_NUM-1; /* Main Loop */ assert(ip - anchor < LZ4_MAX_INPUT_SIZE); while (ip <= mflimit) { int const llen = (int)(ip - anchor); int best_mlen, best_off; int cur, last_match_pos = 0; LZ4HC_match_t const firstMatch = LZ4HC_FindLongerMatch(ctx, ip, matchlimit, MINMATCH-1, nbSearches, dict, favorDecSpeed); if (firstMatch.len==0) { ip++; continue; } if ((size_t)firstMatch.len > sufficient_len) { /* good enough solution : immediate encoding */ int const firstML = firstMatch.len; const BYTE* const matchPos = ip - firstMatch.off; opSaved = op; if ( LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), firstML, matchPos, limit, oend) ) /* updates ip, op and anchor */ goto _dest_overflow; continue; } /* set prices for first positions (literals) */ { int rPos; for (rPos = 0 ; rPos < MINMATCH ; rPos++) { int const cost = LZ4HC_literalsPrice(llen + rPos); opt[rPos].mlen = 1; opt[rPos].off = 0; opt[rPos].litlen = llen + rPos; opt[rPos].price = cost; DEBUGLOG(7, "rPos:%3i => price:%3i (litlen=%i) -- initial setup", rPos, cost, opt[rPos].litlen); } } /* set prices using initial match */ { int mlen = MINMATCH; int const matchML = firstMatch.len; /* necessarily < sufficient_len < LZ4_OPT_NUM */ int const offset = firstMatch.off; assert(matchML < LZ4_OPT_NUM); for ( ; mlen <= matchML ; mlen++) { int const cost = LZ4HC_sequencePrice(llen, mlen); opt[mlen].mlen = mlen; opt[mlen].off = offset; opt[mlen].litlen = llen; opt[mlen].price = cost; DEBUGLOG(7, "rPos:%3i => price:%3i (matchlen=%i) -- initial setup", mlen, cost, mlen); } } last_match_pos = firstMatch.len; { int addLit; for (addLit = 1; addLit <= TRAILING_LITERALS; addLit ++) { opt[last_match_pos+addLit].mlen = 1; /* literal */ opt[last_match_pos+addLit].off = 0; opt[last_match_pos+addLit].litlen = addLit; opt[last_match_pos+addLit].price = opt[last_match_pos].price + LZ4HC_literalsPrice(addLit); DEBUGLOG(7, "rPos:%3i => price:%3i (litlen=%i) -- initial setup", last_match_pos+addLit, opt[last_match_pos+addLit].price, addLit); } } /* check further positions */ for (cur = 1; cur < last_match_pos; cur++) { const BYTE* const curPtr = ip + cur; LZ4HC_match_t newMatch; if (curPtr > mflimit) break; DEBUGLOG(7, "rPos:%u[%u] vs [%u]%u", cur, opt[cur].price, opt[cur+1].price, cur+1); if (fullUpdate) { /* not useful to search here if next position has same (or lower) cost */ if ( (opt[cur+1].price <= opt[cur].price) /* in some cases, next position has same cost, but cost rises sharply after, so a small match would still be beneficial */ && (opt[cur+MINMATCH].price < opt[cur].price + 3/*min seq price*/) ) continue; } else { /* not useful to search here if next position has same (or lower) cost */ if (opt[cur+1].price <= opt[cur].price) continue; } DEBUGLOG(7, "search at rPos:%u", cur); if (fullUpdate) newMatch = LZ4HC_FindLongerMatch(ctx, curPtr, matchlimit, MINMATCH-1, nbSearches, dict, favorDecSpeed); else /* only test matches of minimum length; slightly faster, but misses a few bytes */ newMatch = LZ4HC_FindLongerMatch(ctx, 
curPtr, matchlimit, last_match_pos - cur, nbSearches, dict, favorDecSpeed); if (!newMatch.len) continue; if ( ((size_t)newMatch.len > sufficient_len) || (newMatch.len + cur >= LZ4_OPT_NUM) ) { /* immediate encoding */ best_mlen = newMatch.len; best_off = newMatch.off; last_match_pos = cur + 1; goto encode; } /* before match : set price with literals at beginning */ { int const baseLitlen = opt[cur].litlen; int litlen; for (litlen = 1; litlen < MINMATCH; litlen++) { int const price = opt[cur].price - LZ4HC_literalsPrice(baseLitlen) + LZ4HC_literalsPrice(baseLitlen+litlen); int const pos = cur + litlen; if (price < opt[pos].price) { opt[pos].mlen = 1; /* literal */ opt[pos].off = 0; opt[pos].litlen = baseLitlen+litlen; opt[pos].price = price; DEBUGLOG(7, "rPos:%3i => price:%3i (litlen=%i)", pos, price, opt[pos].litlen); } } } /* set prices using match at position = cur */ { int const matchML = newMatch.len; int ml = MINMATCH; assert(cur + newMatch.len < LZ4_OPT_NUM); for ( ; ml <= matchML ; ml++) { int const pos = cur + ml; int const offset = newMatch.off; int price; int ll; DEBUGLOG(7, "testing price rPos %i (last_match_pos=%i)", pos, last_match_pos); if (opt[cur].mlen == 1) { ll = opt[cur].litlen; price = ((cur > ll) ? opt[cur - ll].price : 0) + LZ4HC_sequencePrice(ll, ml); } else { ll = 0; price = opt[cur].price + LZ4HC_sequencePrice(0, ml); } assert((U32)favorDecSpeed <= 1); if (pos > last_match_pos+TRAILING_LITERALS || price <= opt[pos].price - (int)favorDecSpeed) { DEBUGLOG(7, "rPos:%3i => price:%3i (matchlen=%i)", pos, price, ml); assert(pos < LZ4_OPT_NUM); if ( (ml == matchML) /* last pos of last match */ && (last_match_pos < pos) ) last_match_pos = pos; opt[pos].mlen = ml; opt[pos].off = offset; opt[pos].litlen = ll; opt[pos].price = price; } } } /* complete following positions with literals */ { int addLit; for (addLit = 1; addLit <= TRAILING_LITERALS; addLit ++) { opt[last_match_pos+addLit].mlen = 1; /* literal */ opt[last_match_pos+addLit].off = 0; opt[last_match_pos+addLit].litlen = addLit; opt[last_match_pos+addLit].price = opt[last_match_pos].price + LZ4HC_literalsPrice(addLit); DEBUGLOG(7, "rPos:%3i => price:%3i (litlen=%i)", last_match_pos+addLit, opt[last_match_pos+addLit].price, addLit); } } } /* for (cur = 1; cur <= last_match_pos; cur++) */ assert(last_match_pos < LZ4_OPT_NUM + TRAILING_LITERALS); best_mlen = opt[last_match_pos].mlen; best_off = opt[last_match_pos].off; cur = last_match_pos - best_mlen; encode: /* cur, last_match_pos, best_mlen, best_off must be set */ assert(cur < LZ4_OPT_NUM); assert(last_match_pos >= 1); /* == 1 when only one candidate */ DEBUGLOG(6, "reverse traversal, looking for shortest path (last_match_pos=%i)", last_match_pos); { int candidate_pos = cur; int selected_matchLength = best_mlen; int selected_offset = best_off; while (1) { /* from end to beginning */ int const next_matchLength = opt[candidate_pos].mlen; /* can be 1, means literal */ int const next_offset = opt[candidate_pos].off; DEBUGLOG(7, "pos %i: sequence length %i", candidate_pos, selected_matchLength); opt[candidate_pos].mlen = selected_matchLength; opt[candidate_pos].off = selected_offset; selected_matchLength = next_matchLength; selected_offset = next_offset; if (next_matchLength > candidate_pos) break; /* last match elected, first match to encode */ assert(next_matchLength > 0); /* can be 1, means literal */ candidate_pos -= next_matchLength; } } /* encode all recorded sequences in order */ { int rPos = 0; /* relative position (to ip) */ while (rPos < last_match_pos) { int 
const ml = opt[rPos].mlen; int const offset = opt[rPos].off; if (ml == 1) { ip++; rPos++; continue; } /* literal; note: can end up with several literals, in which case, skip them */ rPos += ml; assert(ml >= MINMATCH); assert((offset >= 1) && (offset <= LZ4_DISTANCE_MAX)); opSaved = op; if ( LZ4HC_encodeSequence(UPDATABLE(ip, op, anchor), ml, ip - offset, limit, oend) ) /* updates ip, op and anchor */ goto _dest_overflow; } } } /* while (ip <= mflimit) */ _last_literals: /* Encode Last Literals */ { size_t lastRunSize = (size_t)(iend - anchor); /* literals */ size_t litLength = (lastRunSize + 255 - RUN_MASK) / 255; size_t const totalSize = 1 + litLength + lastRunSize; if (limit == fillOutput) oend += LASTLITERALS; /* restore correct value */ if (limit && (op + totalSize > oend)) { if (limit == limitedOutput) return 0; /* Check output limit */ /* adapt lastRunSize to fill 'dst' */ lastRunSize = (size_t)(oend - op) - 1; litLength = (lastRunSize + 255 - RUN_MASK) / 255; lastRunSize -= litLength; } ip = anchor + lastRunSize; if (lastRunSize >= RUN_MASK) { size_t accumulator = lastRunSize - RUN_MASK; *op++ = (RUN_MASK << ML_BITS); for(; accumulator >= 255 ; accumulator -= 255) *op++ = 255; *op++ = (BYTE) accumulator; } else { *op++ = (BYTE)(lastRunSize << ML_BITS); } memcpy(op, anchor, lastRunSize); op += lastRunSize; } /* End */ *srcSizePtr = (int) (((const char*)ip) - source); return (int) ((char*)op-dst); _dest_overflow: if (limit == fillOutput) { op = opSaved; /* restore correct out pointer */ goto _last_literals; } return 0; } zmat-0.9.8/src/lz4/lz4hc.h000066400000000000000000000515551366304620100152140ustar00rootroot00000000000000/* LZ4 HC - High Compression Mode of LZ4 Header File Copyright (C) 2011-2017, Yann Collet. BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
You can contact the author at : - LZ4 source repository : https://github.com/lz4/lz4 - LZ4 public forum : https://groups.google.com/forum/#!forum/lz4c */ #ifndef LZ4_HC_H_19834876238432 #define LZ4_HC_H_19834876238432 #if defined (__cplusplus) extern "C" { #endif /* --- Dependency --- */ /* note : lz4hc requires lz4.h/lz4.c for compilation */ #include "lz4.h" /* stddef, LZ4LIB_API, LZ4_DEPRECATED */ /* --- Useful constants --- */ #define LZ4HC_CLEVEL_MIN 3 #define LZ4HC_CLEVEL_DEFAULT 9 #define LZ4HC_CLEVEL_OPT_MIN 10 #define LZ4HC_CLEVEL_MAX 12 /*-************************************ * Block Compression **************************************/ /*! LZ4_compress_HC() : * Compress data from `src` into `dst`, using the powerful but slower "HC" algorithm. * `dst` must be already allocated. * Compression is guaranteed to succeed if `dstCapacity >= LZ4_compressBound(srcSize)` (see "lz4.h") * Max supported `srcSize` value is LZ4_MAX_INPUT_SIZE (see "lz4.h") * `compressionLevel` : any value between 1 and LZ4HC_CLEVEL_MAX will work. * Values > LZ4HC_CLEVEL_MAX behave the same as LZ4HC_CLEVEL_MAX. * @return : the number of bytes written into 'dst' * or 0 if compression fails. */ LZ4LIB_API int LZ4_compress_HC (const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel); /* Note : * Decompression functions are provided within "lz4.h" (BSD license) */ /*! LZ4_compress_HC_extStateHC() : * Same as LZ4_compress_HC(), but using an externally allocated memory segment for `state`. * `state` size is provided by LZ4_sizeofStateHC(). * Memory segment must be aligned on 8-bytes boundaries (which a normal malloc() should do properly). */ LZ4LIB_API int LZ4_sizeofStateHC(void); LZ4LIB_API int LZ4_compress_HC_extStateHC(void* stateHC, const char* src, char* dst, int srcSize, int maxDstSize, int compressionLevel); /*! LZ4_compress_HC_destSize() : v1.9.0+ * Will compress as much data as possible from `src` * to fit into `targetDstSize` budget. * Result is provided in 2 parts : * @return : the number of bytes written into 'dst' (necessarily <= targetDstSize) * or 0 if compression fails. * `srcSizePtr` : on success, *srcSizePtr is updated to indicate how much bytes were read from `src` */ LZ4LIB_API int LZ4_compress_HC_destSize(void* stateHC, const char* src, char* dst, int* srcSizePtr, int targetDstSize, int compressionLevel); /*-************************************ * Streaming Compression * Bufferless synchronous API **************************************/ typedef union LZ4_streamHC_u LZ4_streamHC_t; /* incomplete type (defined later) */ /*! LZ4_createStreamHC() and LZ4_freeStreamHC() : * These functions create and release memory for LZ4 HC streaming state. * Newly created states are automatically initialized. * A same state can be used multiple times consecutively, * starting with LZ4_resetStreamHC_fast() to start a new stream of blocks. */ LZ4LIB_API LZ4_streamHC_t* LZ4_createStreamHC(void); LZ4LIB_API int LZ4_freeStreamHC (LZ4_streamHC_t* streamHCPtr); /* These functions compress data in successive blocks of any size, using previous blocks as dictionary, to improve compression ratio. One key assumption is that previous blocks (up to 64 KB) remain read-accessible while compressing next blocks. There is an exception for ring buffers, which can be smaller than 64 KB. Ring-buffer scenario is automatically detected and handled within LZ4_compress_HC_continue(). Before starting compression, state must be allocated and properly initialized. 
LZ4_createStreamHC() does both, though compression level is set to LZ4HC_CLEVEL_DEFAULT. Selecting the compression level can be done with LZ4_resetStreamHC_fast() (starts a new stream) or LZ4_setCompressionLevel() (anytime, between blocks in the same stream) (experimental). LZ4_resetStreamHC_fast() only works on states which have been properly initialized at least once, which is automatically the case when state is created using LZ4_createStreamHC(). After reset, a first "fictional block" can be designated as initial dictionary, using LZ4_loadDictHC() (Optional). Invoke LZ4_compress_HC_continue() to compress each successive block. The number of blocks is unlimited. Previous input blocks, including initial dictionary when present, must remain accessible and unmodified during compression. It's allowed to update compression level anytime between blocks, using LZ4_setCompressionLevel() (experimental). 'dst' buffer should be sized to handle worst case scenarios (see LZ4_compressBound(), it ensures compression success). In case of failure, the API does not guarantee recovery, so the state _must_ be reset. To ensure compression success whenever `dst` buffer size cannot be made >= LZ4_compressBound(), consider using LZ4_compress_HC_continue_destSize(). Whenever previous input blocks can't be preserved unmodified in-place during compression of next blocks, it's possible to copy the last blocks into a more stable memory space, using LZ4_saveDictHC(). Return value of LZ4_saveDictHC() is the size of dictionary effectively saved into 'safeBuffer' (<= 64 KB) After completing a streaming compression, it's possible to start a new stream of blocks, using the same LZ4_streamHC_t state, just by resetting it, using LZ4_resetStreamHC_fast(). */ LZ4LIB_API void LZ4_resetStreamHC_fast(LZ4_streamHC_t* streamHCPtr, int compressionLevel); /* v1.9.0+ */ LZ4LIB_API int LZ4_loadDictHC (LZ4_streamHC_t* streamHCPtr, const char* dictionary, int dictSize); LZ4LIB_API int LZ4_compress_HC_continue (LZ4_streamHC_t* streamHCPtr, const char* src, char* dst, int srcSize, int maxDstSize); /*! LZ4_compress_HC_continue_destSize() : v1.9.0+ * Similar to LZ4_compress_HC_continue(), * but will read as much data as possible from `src` * to fit into `targetDstSize` budget. * Result is provided into 2 parts : * @return : the number of bytes written into 'dst' (necessarily <= targetDstSize) * or 0 if compression fails. * `srcSizePtr` : on success, *srcSizePtr will be updated to indicate how much bytes were read from `src`. * Note that this function may not consume the entire input. */ LZ4LIB_API int LZ4_compress_HC_continue_destSize(LZ4_streamHC_t* LZ4_streamHCPtr, const char* src, char* dst, int* srcSizePtr, int targetDstSize); LZ4LIB_API int LZ4_saveDictHC (LZ4_streamHC_t* streamHCPtr, char* safeBuffer, int maxDictSize); /*^********************************************** * !!!!!! STATIC LINKING ONLY !!!!!! ***********************************************/ /*-****************************************************************** * PRIVATE DEFINITIONS : * Do not use these definitions directly. * They are merely exposed to allow static allocation of `LZ4_streamHC_t`. * Declare an `LZ4_streamHC_t` directly, rather than any type below. * Even then, only do so in the context of static linking, as definitions may change between versions. 
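 * Note also that the resulting LZ4_streamHC_t object is large
 * (LZ4_STREAMHCSIZE below, roughly 256 KB), so when declaring one directly,
 * prefer static or heap storage over the stack.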
********************************************************************/ #define LZ4HC_DICTIONARY_LOGSIZE 16 #define LZ4HC_MAXD (1<= 199901L) /* C99 */) #include typedef struct LZ4HC_CCtx_internal LZ4HC_CCtx_internal; struct LZ4HC_CCtx_internal { uint32_t hashTable[LZ4HC_HASHTABLESIZE]; uint16_t chainTable[LZ4HC_MAXD]; const uint8_t* end; /* next block here to continue on current prefix */ const uint8_t* base; /* All index relative to this position */ const uint8_t* dictBase; /* alternate base for extDict */ uint32_t dictLimit; /* below that point, need extDict */ uint32_t lowLimit; /* below that point, no more dict */ uint32_t nextToUpdate; /* index from which to continue dictionary update */ short compressionLevel; int8_t favorDecSpeed; /* favor decompression speed if this flag set, otherwise, favor compression ratio */ int8_t dirty; /* stream has to be fully reset if this flag is set */ const LZ4HC_CCtx_internal* dictCtx; }; #else typedef struct LZ4HC_CCtx_internal LZ4HC_CCtx_internal; struct LZ4HC_CCtx_internal { unsigned int hashTable[LZ4HC_HASHTABLESIZE]; unsigned short chainTable[LZ4HC_MAXD]; const unsigned char* end; /* next block here to continue on current prefix */ const unsigned char* base; /* All index relative to this position */ const unsigned char* dictBase; /* alternate base for extDict */ unsigned int dictLimit; /* below that point, need extDict */ unsigned int lowLimit; /* below that point, no more dict */ unsigned int nextToUpdate; /* index from which to continue dictionary update */ short compressionLevel; char favorDecSpeed; /* favor decompression speed if this flag set, otherwise, favor compression ratio */ char dirty; /* stream has to be fully reset if this flag is set */ const LZ4HC_CCtx_internal* dictCtx; }; #endif /* Do not use these definitions directly ! * Declare or allocate an LZ4_streamHC_t instead. */ #define LZ4_STREAMHCSIZE (4*LZ4HC_HASHTABLESIZE + 2*LZ4HC_MAXD + 56 + ((sizeof(void*)==16) ? 56 : 0) /* AS400*/ ) /* 262200 or 262256*/ #define LZ4_STREAMHCSIZE_SIZET (LZ4_STREAMHCSIZE / sizeof(size_t)) union LZ4_streamHC_u { size_t table[LZ4_STREAMHCSIZE_SIZET]; LZ4HC_CCtx_internal internal_donotuse; }; /* previously typedef'd to LZ4_streamHC_t */ /* LZ4_streamHC_t : * This structure allows static allocation of LZ4 HC streaming state. * This can be used to allocate statically, on state, or as part of a larger structure. * * Such state **must** be initialized using LZ4_initStreamHC() before first use. * * Note that invoking LZ4_initStreamHC() is not required when * the state was created using LZ4_createStreamHC() (which is recommended). * Using the normal builder, a newly created state is automatically initialized. * * Static allocation shall only be used in combination with static linking. */ /* LZ4_initStreamHC() : v1.9.0+ * Required before first use of a statically allocated LZ4_streamHC_t. 
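 * A hedged sketch of the static-allocation path (illustrative only, not part
 * of the original header; as noted above, only sensible with static linking):
 *   LZ4_streamHC_t state;                       (static or stack allocation)
 *   LZ4_initStreamHC(&state, sizeof(state));    (mandatory one-time initialization)
 *   ... the state may then be used with the streaming functions declared above ...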
* Before v1.9.0 : use LZ4_resetStreamHC() instead */ LZ4LIB_API LZ4_streamHC_t* LZ4_initStreamHC (void* buffer, size_t size); /*-************************************ * Deprecated Functions **************************************/ /* see lz4.h LZ4_DISABLE_DEPRECATE_WARNINGS to turn off deprecation warnings */ /* deprecated compression functions */ LZ4_DEPRECATED("use LZ4_compress_HC() instead") LZ4LIB_API int LZ4_compressHC (const char* source, char* dest, int inputSize); LZ4_DEPRECATED("use LZ4_compress_HC() instead") LZ4LIB_API int LZ4_compressHC_limitedOutput (const char* source, char* dest, int inputSize, int maxOutputSize); LZ4_DEPRECATED("use LZ4_compress_HC() instead") LZ4LIB_API int LZ4_compressHC2 (const char* source, char* dest, int inputSize, int compressionLevel); LZ4_DEPRECATED("use LZ4_compress_HC() instead") LZ4LIB_API int LZ4_compressHC2_limitedOutput(const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel); LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") LZ4LIB_API int LZ4_compressHC_withStateHC (void* state, const char* source, char* dest, int inputSize); LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") LZ4LIB_API int LZ4_compressHC_limitedOutput_withStateHC (void* state, const char* source, char* dest, int inputSize, int maxOutputSize); LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") LZ4LIB_API int LZ4_compressHC2_withStateHC (void* state, const char* source, char* dest, int inputSize, int compressionLevel); LZ4_DEPRECATED("use LZ4_compress_HC_extStateHC() instead") LZ4LIB_API int LZ4_compressHC2_limitedOutput_withStateHC(void* state, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel); LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") LZ4LIB_API int LZ4_compressHC_continue (LZ4_streamHC_t* LZ4_streamHCPtr, const char* source, char* dest, int inputSize); LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") LZ4LIB_API int LZ4_compressHC_limitedOutput_continue (LZ4_streamHC_t* LZ4_streamHCPtr, const char* source, char* dest, int inputSize, int maxOutputSize); /* Obsolete streaming functions; degraded functionality; do not use! * * In order to perform streaming compression, these functions depended on data * that is no longer tracked in the state. They have been preserved as well as * possible: using them will still produce a correct output. However, use of * LZ4_slideInputBufferHC() will truncate the history of the stream, rather * than preserve a window-sized chunk of history. */ LZ4_DEPRECATED("use LZ4_createStreamHC() instead") LZ4LIB_API void* LZ4_createHC (const char* inputBuffer); LZ4_DEPRECATED("use LZ4_saveDictHC() instead") LZ4LIB_API char* LZ4_slideInputBufferHC (void* LZ4HC_Data); LZ4_DEPRECATED("use LZ4_freeStreamHC() instead") LZ4LIB_API int LZ4_freeHC (void* LZ4HC_Data); LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") LZ4LIB_API int LZ4_compressHC2_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int compressionLevel); LZ4_DEPRECATED("use LZ4_compress_HC_continue() instead") LZ4LIB_API int LZ4_compressHC2_limitedOutput_continue (void* LZ4HC_Data, const char* source, char* dest, int inputSize, int maxOutputSize, int compressionLevel); LZ4_DEPRECATED("use LZ4_createStreamHC() instead") LZ4LIB_API int LZ4_sizeofStreamStateHC(void); LZ4_DEPRECATED("use LZ4_initStreamHC() instead") LZ4LIB_API int LZ4_resetStreamStateHC(void* state, char* inputBuffer); /* LZ4_resetStreamHC() is now replaced by LZ4_initStreamHC(). 
* The intention is to emphasize the difference with LZ4_resetStreamHC_fast(), * which is now the recommended function to start a new stream of blocks, * but cannot be used to initialize a memory segment containing arbitrary garbage data. * * It is recommended to switch to LZ4_initStreamHC(). * LZ4_resetStreamHC() will generate deprecation warnings in a future version. */ LZ4LIB_API void LZ4_resetStreamHC (LZ4_streamHC_t* streamHCPtr, int compressionLevel); #if defined (__cplusplus) } #endif #endif /* LZ4_HC_H_19834876238432 */ /*-************************************************** * !!!!! STATIC LINKING ONLY !!!!! * Following definitions are considered experimental. * They should not be linked from DLL, * as there is no guarantee of API stability yet. * Prototypes will be promoted to "stable" status * after successfull usage in real-life scenarios. ***************************************************/ #ifdef LZ4_HC_STATIC_LINKING_ONLY /* protection macro */ #ifndef LZ4_HC_SLO_098092834 #define LZ4_HC_SLO_098092834 #define LZ4_STATIC_LINKING_ONLY /* LZ4LIB_STATIC_API */ #include "lz4.h" #if defined (__cplusplus) extern "C" { #endif /*! LZ4_setCompressionLevel() : v1.8.0+ (experimental) * It's possible to change compression level * between successive invocations of LZ4_compress_HC_continue*() * for dynamic adaptation. */ LZ4LIB_STATIC_API void LZ4_setCompressionLevel( LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel); /*! LZ4_favorDecompressionSpeed() : v1.8.2+ (experimental) * Opt. Parser will favor decompression speed over compression ratio. * Only applicable to levels >= LZ4HC_CLEVEL_OPT_MIN. */ LZ4LIB_STATIC_API void LZ4_favorDecompressionSpeed( LZ4_streamHC_t* LZ4_streamHCPtr, int favor); /*! LZ4_resetStreamHC_fast() : v1.9.0+ * When an LZ4_streamHC_t is known to be in a internally coherent state, * it can often be prepared for a new compression with almost no work, only * sometimes falling back to the full, expensive reset that is always required * when the stream is in an indeterminate state (i.e., the reset performed by * LZ4_resetStreamHC()). * * LZ4_streamHCs are guaranteed to be in a valid state when: * - returned from LZ4_createStreamHC() * - reset by LZ4_resetStreamHC() * - memset(stream, 0, sizeof(LZ4_streamHC_t)) * - the stream was in a valid state and was reset by LZ4_resetStreamHC_fast() * - the stream was in a valid state and was then used in any compression call * that returned success * - the stream was in an indeterminate state and was used in a compression * call that fully reset the state (LZ4_compress_HC_extStateHC()) and that * returned success * * Note: * A stream that was last used in a compression call that returned an error * may be passed to this function. However, it will be fully reset, which will * clear any existing history and settings from the context. */ LZ4LIB_STATIC_API void LZ4_resetStreamHC_fast( LZ4_streamHC_t* LZ4_streamHCPtr, int compressionLevel); /*! LZ4_compress_HC_extStateHC_fastReset() : * A variant of LZ4_compress_HC_extStateHC(). * * Using this variant avoids an expensive initialization step. It is only safe * to call if the state buffer is known to be correctly initialized already * (see above comment on LZ4_resetStreamHC_fast() for a definition of * "correctly initialized"). From a high level, the difference is that this * function initializes the provided state with a call to * LZ4_resetStreamHC_fast() while LZ4_compress_HC_extStateHC() starts with a * call to LZ4_resetStreamHC(). 
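 *
 * A hedged usage sketch (illustrative only, not part of the original header;
 * buffer names and sizes are placeholders, error handling omitted):
 *   void* st = malloc(LZ4_sizeofStateHC());     (malloc satisfies the 8-byte alignment)
 *   (the first call performs the full, expensive reset)
 *   int n1 = LZ4_compress_HC_extStateHC(st, src1, dst1, len1, cap1, 9);
 *   (later calls on the now well-initialized state may use the cheaper variant)
 *   int n2 = LZ4_compress_HC_extStateHC_fastReset(st, src2, dst2, len2, cap2, 9);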
*/ LZ4LIB_STATIC_API int LZ4_compress_HC_extStateHC_fastReset ( void* state, const char* src, char* dst, int srcSize, int dstCapacity, int compressionLevel); /*! LZ4_attach_HC_dictionary() : * This is an experimental API that allows for the efficient use of a * static dictionary many times. * * Rather than re-loading the dictionary buffer into a working context before * each compression, or copying a pre-loaded dictionary's LZ4_streamHC_t into a * working LZ4_streamHC_t, this function introduces a no-copy setup mechanism, * in which the working stream references the dictionary stream in-place. * * Several assumptions are made about the state of the dictionary stream. * Currently, only streams which have been prepared by LZ4_loadDictHC() should * be expected to work. * * Alternatively, the provided dictionary stream pointer may be NULL, in which * case any existing dictionary stream is unset. * * A dictionary should only be attached to a stream without any history (i.e., * a stream that has just been reset). * * The dictionary will remain attached to the working stream only for the * current stream session. Calls to LZ4_resetStreamHC(_fast) will remove the * dictionary context association from the working stream. The dictionary * stream (and source buffer) must remain in-place / accessible / unchanged * through the lifetime of the stream session. */ LZ4LIB_STATIC_API void LZ4_attach_HC_dictionary( LZ4_streamHC_t *working_stream, const LZ4_streamHC_t *dictionary_stream); #if defined (__cplusplus) } #endif #endif /* LZ4_HC_SLO_098092834 */ #endif /* LZ4_HC_STATIC_LINKING_ONLY */ zmat-0.9.8/src/runcmake.sh000077500000000000000000000001151366304620100154340ustar00rootroot00000000000000#!/bin/sh rm -rf build mkdir build && cd build cmake $@ ../ make clean make zmat-0.9.8/src/zmat.cpp000066400000000000000000000144431366304620100147600ustar00rootroot00000000000000/***************************************************************************//** ** \mainpage ZMat - A portable C-library and MATLAB/Octave toolbox for inline data compression ** ** \author Qianqian Fang ** \copyright Qianqian Fang, 2019-2020 ** ** ZMat provides an easy-to-use interface for stream compression and decompression. ** ** It can be compiled as a MATLAB/Octave mex function (zipmat.mex/zmat.m) and compresses ** arrays and strings in MATLAB/Octave. It can also be compiled as a lightweight ** C-library (libzmat.a/libzmat.so) that can be called in C/C++/FORTRAN etc to ** provide stream-level compression and decompression. 
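**
** A minimal C call sketch (illustrative only; `in`/`inlen` are placeholders for
** the input buffer, the complete buildable example is in test/c/testzmat.c, and
** zmat_run()/zmZlib are declared in zmatlib.h):
**
**     unsigned char *out = NULL; size_t outlen = 0; int status = 0;
**     int err = zmat_run(inlen, in, &outlen, &out, zmZlib, &status, 1);
**     if (err == 0) { ... use out[0..outlen-1] ..., then free(out); }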
** ** Currently, zmat/libzmat supports 6 different compression algorthms, including ** - zlib and gzip : the most widely used algorithm algorithms for .zip and .gz files ** - lzma and lzip : high compression ratio LZMA based algorithms for .lzma and .lzip files ** - lz4 and lz4hc : real-time compression based on LZ4 and LZ4HC algorithms ** - base64 : base64 encoding and decoding ** ** Depencency: ZLib library: https://www.zlib.net/ ** author: (C) 1995-2017 Jean-loup Gailly and Mark Adler ** ** Depencency: LZ4 library: https://lz4.github.io/lz4/ ** author: (C) 2011-2019, Yann Collet, ** ** Depencency: Original LZMA library ** author: Igor Pavlov ** ** Depencency: Eazylzma: https://github.com/lloyd/easylzma ** author: Lloyd Hilaiel (lloyd) ** ** Depencency: base64_encode()/base64_decode() ** \copyright 2005-2011, Jouni Malinen ** ** \section slicense License ** GPL v3, see LICENSE.txt for details *******************************************************************************/ /***************************************************************************//** \file zmat.cpp @brief mex function for ZMAT *******************************************************************************/ #include #include #include #include #include #include "mex.h" #include "zmatlib.h" #include "zlib.h" void zmat_usage(); const char *metadata[]={"type","size","byte","method","status","level"}; /** @brief Mex function for the zmat - an interface to compress/decompress binary data * This is the master function to interface for zipping and unzipping a char/int8 buffer */ void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[]){ TZipMethod zipid=zmZlib; int iscompress=1; #if defined(NO_LZ4) && defined(NO_LZMA) const char *zipmethods[]={"zlib","gzip","base64",""}; #elif !defined(NO_LZMA) && defined(NO_LZ4) const char *zipmethods[]={"zlib","gzip","base64","lzip","lzma",""}; #elif defined(NO_LZMA) && !defined(NO_LZ4) const char *zipmethods[]={"zlib","gzip","base64","lz4","lz4hc",""}; #else const char *zipmethods[]={"zlib","gzip","base64","lzip","lzma","lz4","lz4hc",""}; #endif /** * If no input is given for this function, it prints help information and return. */ if (nrhs==0){ zmat_usage(); return; } if(nrhs>=2){ double *val=mxGetPr(prhs[1]); iscompress=val[0]; } if(nrhs>=3){ int len=mxGetNumberOfElements(prhs[2]); if(!mxIsChar(prhs[2]) || len==0) mexErrMsgTxt("the 'method' field must be a non-empty string"); if((zipid=(TZipMethod)zmat_keylookup((char *)mxArrayToString(prhs[2]), zipmethods))<0) mexErrMsgTxt("the specified compression method is not supported"); } try{ if(mxIsChar(prhs[0]) || (mxIsNumeric(prhs[0]) && ~mxIsComplex(prhs[0])) || mxIsLogical(prhs[0])){ int ret=-1; mwSize inputsize=mxGetNumberOfElements(prhs[0])*mxGetElementSize(prhs[0]); mwSize buflen[2]={0}; unsigned char *outputbuf=NULL; size_t outputsize=0; unsigned char * inputstr=(mxIsChar(prhs[0])? 
(unsigned char *)mxArrayToString(prhs[0]) : (unsigned char *)mxGetData(prhs[0])); int errcode=0; if(inputsize>0) errcode=zmat_run(inputsize, inputstr, &outputsize, &outputbuf, zipid, &ret, iscompress); if(errcode<0){ if(outputbuf) free(outputbuf); outputbuf=NULL; outputsize=0; } buflen[0]=1; buflen[1]=outputsize; plhs[0] = mxCreateNumericArray(2,buflen,mxUINT8_CLASS,mxREAL); if(outputbuf){ memcpy((unsigned char*)mxGetPr(plhs[0]),outputbuf,buflen[1]); free(outputbuf); } if(nlhs>1){ mwSize inputdim[2]={1,0}, *dims=(mwSize *)mxGetDimensions(prhs[0]); unsigned int *inputsize=NULL; plhs[1]=mxCreateStructMatrix(1,1,6,metadata); mxArray *val = mxCreateString(mxGetClassName(prhs[0])); mxSetFieldByNumber(plhs[1],0,0, val); inputdim[1]=mxGetNumberOfDimensions(prhs[0]); inputsize=(unsigned int *)malloc(inputdim[1]*sizeof(unsigned int)); val = mxCreateNumericArray(2, inputdim, mxUINT32_CLASS, mxREAL); for(int i=0;i ** \copyright Qianqian Fang, 2019-2020 ** ** ZMat provides an easy-to-use interface for stream compression and decompression. ** ** It can be compiled as a MATLAB/Octave mex function (zipmat.mex/zmat.m) and compresses ** arrays and strings in MATLAB/Octave. It can also be compiled as a lightweight ** C-library (libzmat.a/libzmat.so) that can be called in C/C++/FORTRAN etc to ** provide stream-level compression and decompression. ** ** Currently, zmat/libzmat supports 6 different compression algorthms, including ** - zlib and gzip : the most widely used algorithm algorithms for .zip and .gz files ** - lzma and lzip : high compression ratio LZMA based algorithms for .lzma and .lzip files ** - lz4 and lz4hc : real-time compression based on LZ4 and LZ4HC algorithms ** - base64 : base64 encoding and decoding ** ** Depencency: ZLib library: https://www.zlib.net/ ** author: (C) 1995-2017 Jean-loup Gailly and Mark Adler ** ** Depencency: LZ4 library: https://lz4.github.io/lz4/ ** author: (C) 2011-2019, Yann Collet, ** ** Depencency: Original LZMA library ** author: Igor Pavlov ** ** Depencency: Eazylzma: https://github.com/lloyd/easylzma ** author: Lloyd Hilaiel (lloyd) ** ** Depencency: base64_encode()/base64_decode() ** \copyright 2005-2011, Jouni Malinen ** ** \section slicense License ** GPL v3, see LICENSE.txt for details *******************************************************************************/ /***************************************************************************//** \file zmatlib.c @brief Compression and decompression interfaces: zmat_run, zmat_encode, zmat_decode *******************************************************************************/ #include #include #include #include #include "zmatlib.h" #include "zlib.h" #ifndef NO_LZMA #include "easylzma/compress.h" #include "easylzma/decompress.h" #endif #ifndef NO_LZ4 #include "lz4/lz4.h" #include "lz4/lz4hc.h" #endif #ifndef NO_LZMA /** * @brief Easylzma interface to perform compression * * @param[in] format: output format (0 for lzip format, 1 for lzma-alone format) * @param[in] inData: input stream buffer pointer * @param[in] inLen: input stream buffer length * @param[in] outData: output stream buffer pointer * @param[in] outLen: output stream buffer length * @param[in] level: positive number: use default compression level (5); * negative interger: set compression level (-1, less, to -9, more compression) * @return return the fine grained lzma error code. 
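 *
 * A hedged call sketch (illustrative only; per the notes above, format 0 selects
 * the lzip container, and *outData is allocated inside the callee, so the caller
 * must free it after use):
 *   unsigned char *out = NULL; size_t outlen = 0;
 *   int rc = simpleCompress((elzma_file_format)0, in, inlen, &out, &outlen, -9);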
*/ int simpleCompress(elzma_file_format format, const unsigned char * inData, size_t inLen, unsigned char ** outData, size_t * outLen, int level); /** * @brief Easylzma interface to perform decompression * * @param[in] format: output format (0 for lzip format, 1 for lzma-alone format) * @param[in] inData: input stream buffer pointer * @param[in] inLen: input stream buffer length * @param[in] outData: output stream buffer pointer * @param[in] outLen: output stream buffer length * @return return the fine grained lzma error code. */ int simpleDecompress(elzma_file_format format, const unsigned char * inData, size_t inLen, unsigned char ** outData, size_t * outLen); #endif /** * @brief Coarse grained error messages (encoder-specific detailed error codes are in the status parameter) * */ char *zmat_errcode[]={ "No error", /*0*/ "input can not be empty", /*-1*/ "failed to initialize zlib", /*-2*/ "zlib error, see info.status for error flag, often a result of mismatch in compression method", /*-3*/ "easylzma error, see info.status for error flag, often a result of mismatch in compression method",/*-4*/ "can not allocate output buffer",/*-5*/ "lz4 error, see info.status for error flag, often a result of mismatch in compression method",/*-6*/ "unsupported method" /*-7*/ }; /** * @brief Convert error code to a string error message * * @param[in] id: zmat error code */ char * zmat_error(int id){ if(id>=0 && id<(sizeof(zmat_errcode) / sizeof(zmat_errcode[0]))) return zmat_errcode[id]; else return "unknown error"; } /** * @brief Main interface to perform compression/decompression * * @param[in] inputsize: input stream buffer length * @param[in] inputstr: input stream buffer pointer * @param[in] outputsize: output stream buffer length * @param[in] outputbuf: output stream buffer pointer * @param[in] ret: encoder/decoder specific detailed error code (if error occurs) * @param[in] iscompress: 0: decompression, 1: use default compression level; * negative interger: set compression level (-1, less, to -9, more compression) * @return return the coarse grained zmat error code; detailed error code is in ret. */ int zmat_run(const size_t inputsize, unsigned char *inputstr, size_t *outputsize, unsigned char **outputbuf, const int zipid, int *ret, const int iscompress){ z_stream zs; size_t buflen[2]={0}; *outputbuf=NULL; zs.zalloc = Z_NULL; zs.zfree = Z_NULL; zs.opaque = Z_NULL; if(inputsize==0) return -1; if(iscompress){ /** * perform compression or encoding */ if(zipid==zmBase64){ /** * base64 encoding */ *outputbuf=base64_encode((const unsigned char*)inputstr, inputsize, outputsize); }else if(zipid==zmZlib || zipid==zmGzip){ /** * zlib (.zip) or gzip (.gz) compression */ if(zipid==zmZlib){ if(deflateInit(&zs, (iscompress>0) ? Z_DEFAULT_COMPRESSION : (-iscompress)) != Z_OK) return -2; }else{ if(deflateInit2(&zs, (iscompress>0) ? 
Z_DEFAULT_COMPRESSION : (-iscompress), Z_DEFLATED, 15|16, MAX_MEM_LEVEL, Z_DEFAULT_STRATEGY) != Z_OK) return -2; } buflen[0] =deflateBound(&zs,inputsize); *outputbuf=(unsigned char *)malloc(buflen[0]); zs.avail_in = inputsize; /* size of input, string + terminator*/ zs.next_in = (Bytef *)inputstr; /* input char array*/ zs.avail_out = buflen[0]; /* size of output*/ zs.next_out = (Bytef *)(*outputbuf); /*(Bytef *)(); // output char array*/ *ret=deflate(&zs, Z_FINISH); *outputsize=zs.total_out; if(*ret!=Z_STREAM_END && *ret!=Z_OK) return -3; deflateEnd(&zs); #ifndef NO_LZMA }else if(zipid==zmLzma || zipid==zmLzip){ /** * lzma (.lzma) or lzip (.lzip) compression */ *ret = simpleCompress((elzma_file_format)(zipid-3), (unsigned char *)inputstr, inputsize, outputbuf, outputsize, iscompress); if(*ret!=ELZMA_E_OK) return -4; #endif #ifndef NO_LZ4 }else if(zipid==zmLz4 || zipid==zmLz4hc){ /** * lz4 or lz4hc compression */ *outputsize=LZ4_compressBound(inputsize); if (!(*outputbuf = (unsigned char *)malloc(*outputsize))) return -5; if(zipid==zmLz4) *outputsize = LZ4_compress_default((const char *)inputstr, (char*)(*outputbuf), inputsize, *outputsize); else *outputsize = LZ4_compress_HC((const char *)inputstr, (char*)(*outputbuf), inputsize, *outputsize, (iscompress>0)?8:(-iscompress)); *ret=*outputsize; if(*outputsize==0) return -6; #endif }else{ return -7; } }else{ /** * perform decompression or decoding */ if(zipid==zmBase64){ /** * base64 decoding */ *outputbuf=base64_decode((const unsigned char*)inputstr, inputsize, outputsize); }else if(zipid==zmZlib || zipid==zmGzip){ /** * zlib (.zip) or gzip (.gz) decompression */ int count=1; if(zipid==zmZlib){ if(inflateInit(&zs) != Z_OK) return -2; }else{ if(inflateInit2(&zs, 15|32) != Z_OK) return -2; } buflen[0] =inputsize*20; *outputbuf=(unsigned char *)malloc(buflen[0]); zs.avail_in = inputsize; /* size of input, string + terminator*/ zs.next_in =inputstr; /* input char array*/ zs.avail_out = buflen[0]; /* size of output*/ zs.next_out = (Bytef *)(*outputbuf); /*(Bytef *)(); // output char array*/ while((*ret=inflate(&zs, Z_SYNC_FLUSH))!=Z_STREAM_END && count<=10){ *outputbuf=(unsigned char *)realloc(*outputbuf, (buflen[0]< * * This software may be distributed under the terms of the BSD license. * See README for more details. */ static const unsigned char base64_table[65] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; /** * @brief base64_encode - Base64 encode * @src: Data to be encoded * @len: Length of the data to be encoded * @out_len: Pointer to output length variable, or %NULL if not used * Returns: Allocated buffer of out_len bytes of encoded data, * or %NULL on failure * * Caller is responsible for freeing the returned buffer. Returned buffer is * nul terminated to make it easier to use as a C string. The nul terminator is * not included in out_len. 
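 *
 * Worked example (a small illustration, not part of the original header):
 * encoding the 3-byte input "Man" produces the base64 text "TWFu"; because this
 * implementation wraps output at 72 characters and terminates any partial line,
 * the returned buffer holds "TWFu\n" and out_len is 5.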
*/ unsigned char * base64_encode(const unsigned char *src, size_t len, size_t *out_len) { unsigned char *out, *pos; const unsigned char *end, *in; size_t olen; int line_len; olen = len * 4 / 3 + 4; /* 3-byte blocks to 4-byte */ olen += olen / 72; /* line feeds */ olen++; /* nul termination */ if (olen < len) return NULL; /* integer overflow */ out = (unsigned char *)malloc(olen); if (out == NULL) return NULL; end = src + len; in = src; pos = out; line_len = 0; while (end - in >= 3) { *pos++ = base64_table[in[0] >> 2]; *pos++ = base64_table[((in[0] & 0x03) << 4) | (in[1] >> 4)]; *pos++ = base64_table[((in[1] & 0x0f) << 2) | (in[2] >> 6)]; *pos++ = base64_table[in[2] & 0x3f]; in += 3; line_len += 4; if (line_len >= 72) { *pos++ = '\n'; line_len = 0; } } if (end - in) { *pos++ = base64_table[in[0] >> 2]; if (end - in == 1) { *pos++ = base64_table[(in[0] & 0x03) << 4]; *pos++ = '='; } else { *pos++ = base64_table[((in[0] & 0x03) << 4) | (in[1] >> 4)]; *pos++ = base64_table[(in[1] & 0x0f) << 2]; } *pos++ = '='; line_len += 4; } if (line_len) *pos++ = '\n'; *pos = '\0'; if (out_len) *out_len = pos - out; return out; } /** * base64_decode - Base64 decode * @src: Data to be decoded * @len: Length of the data to be decoded * @out_len: Pointer to output length variable * Returns: Allocated buffer of out_len bytes of decoded data, * or %NULL on failure * * Caller is responsible for freeing the returned buffer. */ unsigned char * base64_decode(const unsigned char *src, size_t len, size_t *out_len) { unsigned char dtable[256], *out, *pos, block[4], tmp; size_t i, count, olen; int pad = 0; memset(dtable, 0x80, 256); for (i = 0; i < sizeof(base64_table) - 1; i++) dtable[base64_table[i]] = (unsigned char) i; dtable['='] = 0; count = 0; for (i = 0; i < len; i++) { if (dtable[src[i]] != 0x80) count++; } if (count == 0 || count % 4) return NULL; olen = count / 4 * 3; pos = out = (unsigned char *)malloc(olen); if (out == NULL) return NULL; count = 0; for (i = 0; i < len; i++) { tmp = dtable[src[i]]; if (tmp == 0x80) continue; if (src[i] == '=') pad++; block[count] = tmp; count++; if (count == 4) { *pos++ = (block[0] << 2) | (block[1] >> 4); *pos++ = (block[1] << 4) | (block[2] >> 2); *pos++ = (block[2] << 6) | block[3]; count = 0; if (pad) { if (pad == 1) pos--; else if (pad == 2) pos -= 2; else { /* Invalid padding */ free(out); return NULL; } break; } } } *out_len = pos - out; return out; } #ifndef NO_LZMA /** * @brief Easylzma compression interface */ struct dataStream { const unsigned char * inData; size_t inLen; unsigned char * outData; size_t outLen; }; /** * @brief Easylzma input callback function */ static int inputCallback(void *ctx, void *buf, size_t * size) { size_t rd = 0; struct dataStream * ds = (struct dataStream *) ctx; assert(ds != NULL); rd = (ds->inLen < *size) ? 
ds->inLen : *size; if (rd > 0) { memcpy(buf, (void *) ds->inData, rd); ds->inData += rd; ds->inLen -= rd; } *size = rd; return 0; } /** * @brief Easylzma output callback function */ static size_t outputCallback(void *ctx, const void *buf, size_t size) { struct dataStream * ds = (struct dataStream *) ctx; assert(ds != NULL); if (size > 0) { ds->outData = (unsigned char *)realloc(ds->outData, ds->outLen + size); memcpy((void *) (ds->outData + ds->outLen), buf, size); ds->outLen += size; } return size; } /** * @brief Easylzma interface to perform compression * * @param[in] format: output format (0 for lzip format, 1 for lzma-alone format) * @param[in] inData: input stream buffer pointer * @param[in] inLen: input stream buffer length * @param[in] outData: output stream buffer pointer * @param[in] outLen: output stream buffer length * @param[in] level: positive number: use default compression level (5); * negative interger: set compression level (-1, less, to -9, more compression) * @return return the fine grained lzma error code. */ int simpleCompress(elzma_file_format format, const unsigned char * inData, size_t inLen, unsigned char ** outData, size_t * outLen, int level) { int rc; elzma_compress_handle hand; /* allocate compression handle */ hand = elzma_compress_alloc(); assert(hand != NULL); rc = elzma_compress_config(hand, ELZMA_LC_DEFAULT, ELZMA_LP_DEFAULT, ELZMA_PB_DEFAULT, ((level>0)?5:-level), (1 << 20) /* 1mb */, format, inLen); if (rc != ELZMA_E_OK) { elzma_compress_free(&hand); return rc; } /* now run the compression */ { struct dataStream ds; ds.inData = inData; ds.inLen = inLen; ds.outData = NULL; ds.outLen = 0; rc = elzma_compress_run(hand, inputCallback, (void *) &ds, outputCallback, (void *) &ds, NULL, NULL); if (rc != ELZMA_E_OK) { if (ds.outData != NULL) free(ds.outData); elzma_compress_free(&hand); return rc; } *outData = ds.outData; *outLen = ds.outLen; } return rc; } /** * @brief Easylzma interface to perform decompression * * @param[in] format: output format (0 for lzip format, 1 for lzma-alone format) * @param[in] inData: input stream buffer pointer * @param[in] inLen: input stream buffer length * @param[in] outData: output stream buffer pointer * @param[in] outLen: output stream buffer length * @return return the fine grained lzma error code. 
*/ int simpleDecompress(elzma_file_format format, const unsigned char * inData, size_t inLen, unsigned char ** outData, size_t * outLen) { int rc; elzma_decompress_handle hand; hand = elzma_decompress_alloc(); /* now run the compression */ { struct dataStream ds; ds.inData = inData; ds.inLen = inLen; ds.outData = NULL; ds.outLen = 0; rc = elzma_decompress_run(hand, inputCallback, (void *) &ds, outputCallback, (void *) &ds, format); if (rc != ELZMA_E_OK) { if (ds.outData != NULL) free(ds.outData); elzma_decompress_free(&hand); return rc; } *outData = ds.outData; *outLen = ds.outLen; } return rc; } #endif zmat-0.9.8/test/000077500000000000000000000000001366304620100134635ustar00rootroot00000000000000zmat-0.9.8/test/c/000077500000000000000000000000001366304620100137055ustar00rootroot00000000000000zmat-0.9.8/test/c/Makefile000066400000000000000000000002041366304620100153410ustar00rootroot00000000000000all: $(CC) -g -Wall -pedantic testzmat.c -o testzmat -DNO_LZMA -DNO_LZ4 -I../../src -L../../lib -lzmat -lz clean: -rm -f testzmat zmat-0.9.8/test/c/testzmat.c000066400000000000000000000057101366304620100157270ustar00rootroot00000000000000/***************************************************************************//** ** \mainpage ZMat - A portable C-library and MATLAB/Octave toolbox for inline data compression ** ** \author Qianqian Fang ** \copyright Qianqian Fang, 2019-2020 ** ** Demo on how to use zmatlib for simple compression and decompression in C ** ** \section slicense License ** GPL v3, see LICENSE.txt for details *******************************************************************************/ #include #include #include /** * if only zlib/gzip/base64 is used, one only need to add -I/path/to/zmatlib.h * if lzma/lzip is used, one must add -I/path/to/src/easylzma/ * if lz4/lz4hc is used, one must add -I/path/to/src/lz4 */ #include "zmatlib.h" int main(void){ char *test[]={"__o000o__(o)(o)__o000o__ =^_^= __o000o__(o)(o)__o000o__"}; int ret=0, status=0; size_t compressedlen, encodedlen, decodelen, decompressedlen; /*output buffers will be allocated inside zmat functions, host is responsible to free after use*/ unsigned char *compressed=NULL, *encoded=NULL, *decoded=NULL, *decompressed=NULL; /*=====================================*/ /* compressing and encoding the string */ /*=====================================*/ /*first, perform zlib compression use the highest compression (-9); one can use zmat_encode as well*/ ret=zmat_run(strlen(test[0]),(unsigned char*)test[0], &compressedlen, &compressed, zmZlib, &status, -9); if(ret==0){ /*next, encode the compressed data using base64*/ ret=zmat_encode(compressedlen,compressed, &encodedlen, &encoded, zmBase64, &status); if(ret==0){ printf("{\n\t\"original\":\"%s\",\n",test[0]); printf("\t\"encoded\":\"%s\",\n",encoded); } } /* error handling */ if(ret){ printf("encoding failed, error code: %d: encoder error code %d\n",ret,status); return ret; } /*==========================================================*/ /* decode and then decompress to restore the orginal string */ /*==========================================================*/ /*first, perform base64 decoding*/ ret=zmat_decode(encodedlen,encoded, &decodelen, &decoded, zmBase64, &status); if(ret==0){ /*next, decompress using zlib (deflate) */ ret=zmat_decode(decodelen,decoded, &decompressedlen, &decompressed, zmZlib, &status); if(ret==0) printf("\t\"decompressed\":\"%s\",\n",decompressed); } printf("}\n"); /* error handling */ if(ret){ printf("decoding failed, error code: %d: decoder error code 
%d\n",ret,status); return ret; } /*==================================*/ /* host must free buffers after use */ /*==================================*/ if(compressed) free(compressed); if(encoded) free(encoded); if(decoded) free(decoded); if(decompressed) free(decompressed); return 0; } zmat-0.9.8/test/f90/000077500000000000000000000000001366304620100140615ustar00rootroot00000000000000zmat-0.9.8/test/f90/Makefile000066400000000000000000000002741366304620100155240ustar00rootroot00000000000000F90 :=gfortran all: $(F90) -g -Wall -pedantic -c ../../fortran90/zmatlib.f90 $(F90) -g -Wall -pedantic testzmat.f90 -o testzmat -L../../lib -lzmat -lz clean: -rm -f testzmat *.o *.mod zmat-0.9.8/test/f90/testzmat.f90000066400000000000000000000074371366304620100162670ustar00rootroot00000000000000!------------------------------------------------------------------------------ ! ZMatLib test program !------------------------------------------------------------------------------ ! !> @author Qianqian Fang !> @brief A demo program to call zmatlib functions to encode and decode ! ! DESCRIPTION: !> This demo program shows how to call zmat_run/zmat_encode/zmat_decode to !> perform buffer encoding and decoding !------------------------------------------------------------------------------ program zmatdemo !------------------------------------------------------------------------------ ! step 1: add the below line to use zmatlib unit !------------------------------------------------------------------------------ use zmatlib use iso_c_binding, only: c_double implicit none !------------------------------------------------------------------------------ ! step 2: define needed input and output variables !------------------------------------------------------------------------------ ! here are the fortran arrays/variables that you want to pass to zmat_run character (len=128), target :: inputstr real(kind=c_double), target :: dat(10) ! here are the interface variables, note that inputbuf/outputbuf are pointers (void*) integer(kind=c_int) :: ret, res, i integer(kind=c_size_t) :: inputlen, outputlen type(c_ptr) :: inputbuf, outputbuf ! zmat output is defined as a char pointer, one must free this after each call character(kind=c_char),pointer :: myout(:) !------------------------------------------------------------------------------ ! step 3: set whatever fortran variables !------------------------------------------------------------------------------ inputstr="__o000o__(o)(o)__o000o__ =^_^= __o000o__(o)(o)__o000o__" print *, trim(inputstr) !------------------------------------------------------------------------------ ! step 4: set inputbuf to the variable you want to encode, then call zmat_run !------------------------------------------------------------------------------ inputbuf=c_loc(inputstr) inputlen=len(trim(inputstr)) res=zmat_run(inputlen,inputbuf,outputlen, outputbuf, zmBase64, ret, 1); !------------------------------------------------------------------------------ ! step 5: once done, call c_f_pointer to transfer myout to outputbuf, print results !------------------------------------------------------------------------------ call c_f_pointer(outputbuf, myout, [outputlen]) print *, myout !------------------------------------------------------------------------------ ! step 6: free the c-allocated buffer !------------------------------------------------------------------------------ call zmat_free(outputbuf) !------------------------------------------------------------------------------ ! 
repeat step 3 for another variable: set whatever fortran variables !------------------------------------------------------------------------------ dat = (/ (i, i = 1, 10) /) !------------------------------------------------------------------------------ ! repeat step 4: set inputbuf to the variable you want to encode, then call zmat_run !------------------------------------------------------------------------------ inputlen=sizeof(dat) inputbuf=c_loc(dat) res=zmat_run(inputlen,inputbuf,outputlen, outputbuf, zmBase64, ret, 1); !------------------------------------------------------------------------------ ! repeat step 5: once done, call c_f_pointer to transfer myout to outputbuf, print results !------------------------------------------------------------------------------ call c_f_pointer(outputbuf, myout, [outputlen]) print *, myout !------------------------------------------------------------------------------ ! repeat step 6: free the c-allocated buffer !------------------------------------------------------------------------------ call zmat_free(outputbuf) end program zmat-0.9.8/zmat.m000066400000000000000000000103131366304620100136330ustar00rootroot00000000000000function varargout=zmat(varargin) % % output=zmat(input) % or % [output, info]=zmat(input, iscompress, method) % output=zmat(input, info) % % A portable data compression/decompression toolbox for MATLAB/GNU Octave % % author: Qianqian Fang % initial version created on 04/30/2019 % % input: % input: a char, non-complex numeric or logical vector or array % iscompress: (optional) if iscompress is 1, zmat compresses/encodes the input, % if 0, it decompresses/decodes the input. Default value is 1. % % if iscompress is set to a negative integer, (-iscompress) specifies % the compression level. For zlib/gzip, default level is 6 (1-9); for % lzma/lzip, default level is 5 (1-9); for lz4hc, default level is 8 (1-16). % the default compression level is used if iscompress is set to 1. % % zmat removes the trailing newline when iscompress=2 and method='base64' % all newlines are removed when iscompress=3 and method='base64' % % if one defines iscompress as the info struct (2nd output of zmat), zmat % will perform a decoding/decompression operation and recover the original % input using the info stored in the info structure. 
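%
%              for example (a short sketch; the same pattern appears in the
%              example section below):
%                  [cdata, info]=zmat(eye(5));   % compress with default settings
%                  orig=zmat(cdata, info);       % restore the original 5x5 array
%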
% method: (optional) compression method, currently, zmat supports the below methods % 'zlib': zlib/zip based data compression (default) % 'gzip': gzip formatted data compression % 'lzip': lzip formatted data compression % 'lzma': lzma formatted data compression % 'lz4': lz4 formatted data compression % 'lz4hc':lz4hc (LZ4 with high-compression ratio) formatted data compression % 'base64': encode or decode use base64 format % % output: % output: a uint8 row vector, storing the compressed or decompressed data; % empty when an error is encountered % info: (optional) a struct storing additional info regarding the input data, may have % 'type': the class of the input array % 'size': the dimensions of the input array % 'byte': the number of bytes per element in the input array % 'method': a copy of the 3rd input indicating the encoding method % 'status': the zlib/lzma/lz4 compression/decompression function return value, % including potential error codes; see documentation of the respective % libraries for details % 'level': a copy of the iscompress flag; if non-zero, specifying compression % level, see above % % example: % % [ss, info]=zmat(eye(5)) % orig=zmat(ss,0) % orig=zmat(ss,info) % ss=char(zmat('zmat test',1,'base64')) % orig=char(zmat(ss,0,'base64')) % % -- this function is part of the ZMAT toolbox (http://github.com/fangq/zmat) % if(exist('zipmat')~=3 && exist('zipmat')~=2) error('zipmat mex file is not found. you must download the mex file or recompile'); end if(nargin==0) fprintf(1,'Usage:\n\t[output,info]=zmat(input,iscompress,method);\nPlease run "help zmat" for more details.\n'); return; end input=varargin{1}; iscompress=1; zipmethod='zlib'; if(~(ischar(input) || islogical(input) || (isnumeric(input) && isreal(input)))) error('input must be a char, non-complex numeric or logical vector or N-D array'); end if(ischar(input)) input=uint8(input); end if(nargin>1) iscompress=varargin{2}; if(isstruct(varargin{2})) inputinfo=varargin{2}; iscompress=0; zipmethod=inputinfo.method; end end if(nargin>2) zipmethod=varargin{3}; end iscompress=round(iscompress); if((strcmp(zipmethod,'zlib') || strcmp(zipmethod,'gzip')) && iscompress<=-10) iscompress=-9; end [varargout{1:max(1,nargout)}]=zipmat(input,iscompress,zipmethod); if(strcmp(zipmethod,'base64') && iscompress>1) varargout{1}=char(varargout{1}); if(iscompress==2) varargout{1}=regexprep(varargout{1},'\n$',''); elseif(iscompress>2) varargout{1}=regexprep(varargout{1},'\n',''); end end if(exist('inputinfo','var') && isfield(inputinfo,'type')) varargout{1}=typecast(varargout{1},inputinfo.type); varargout{1}=reshape(varargout{1},inputinfo.size); end