xmlparser-0.13.5/.cargo_vcs_info.json0000644000000001360000000000100131370ustar { "git": { "sha1": "636d555def2256c1f94a77d9abd3dc9d34a65e42" }, "path_in_vcs": "" }xmlparser-0.13.5/.github/workflows/main.yml000064400000000000000000000006121046102023000167720ustar 00000000000000name: Rust on: [push, pull_request] env: CARGO_TERM_COLOR: always jobs: build: runs-on: ubuntu-latest strategy: matrix: rust: - 1.31.0 - stable steps: - name: Checkout uses: actions/checkout@v2 - name: Run tests run: cargo test - name: Run tests without default features run: cargo test --no-default-features xmlparser-0.13.5/.gitignore000064400000000000000000000000301046102023000137100ustar 00000000000000target Cargo.lock .idea xmlparser-0.13.5/CHANGELOG.md000064400000000000000000000172661046102023000135540ustar 00000000000000# Change Log All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](http://keepachangelog.com/) and this project adheres to [Semantic Versioning](http://semver.org/). ## [Unreleased] ## [0.13.5] - 2022-10-18 ### Fixed - Do no use recursive calls during parsing. Could lead to stack overflow on some input. - Revert _Do not expand predefined references in `Stream::consume_reference`._ - Tests on Rust 1.61. Thanks to [@krtab](https://github.com/krtab). ## [0.13.4] - 2021-06-24 ### Fixed - Do not expand predefined references in `Stream::consume_reference`. Thanks to [@Jesse-Bakker](https://github.com/Jesse-Bakker). ## [0.13.3] - 2020-09-02 ### Changed - Documentation fixes by [@kneasle](https://github.com/kneasle). ### Fixed - `DtdEnd` token parsing when `]` and `>` are separated by a whitespace. ## [0.13.2] - 2020-06-15 ### Fixed - Allow processing instruction before DTD. ## [0.13.1] - 2020-03-12 ### Fixed - Allow comments before DTD. ## [0.13.0] - 2020-01-07 ### Changed - Moved to Rust 2018. - Completely new `Error` enum. - New error messages. - 10-20% faster parsing. - Use `Tokenizer::from_fragment` instead of `Tokenizer::enable_fragment_mode`. ### Removed - `TokenType`. ## [0.12.0] - 2019-12-21 ### Changed - `]]>` is no longer allowed inside a Text node. - Only [XML characters](https://www.w3.org/TR/xml/#char32) are allowed now. Otherwise, `StreamError::NonXmlChar` will occur. - Disallow `-` at the end of a comment. `` is an error now. - A missing space between attributes is an error now. - `StreamError::InvalidQuote` and `StreamError::InvalidSpace` signature changed. ## [0.11.0] - 2019-11-18 ### Added - `no_std` support thanks to [hugwijst](https://github.com/hugwijst). ### Changed - `StreamError::InvalidString` doesn't store an actual string now. ## [0.10.0] - 2019-09-14 ### Changed - 10-15% faster parsing. - Merge `ByteStream` and `Stream`. - `StreamError::InvalidChar` signature changed. - `StreamError::InvalidChar` was split into `InvalidChar` and `InvalidCharMultiple`. ### Fixed - Check for [NameStartChar](https://www.w3.org/TR/xml/#NT-NameStartChar) during qualified name parsing. E.g. `<-p>` is an invalid tag name from now. - Qualified name with multiple `:` is an error now. - `]>` is a valid text/`CharData` now. Previously it was parsed as `DoctypeEnd`. ### Removed - `StreamError::InvalidAttributeValue`. `StreamError::InvalidChar` will be emitted instead. ## [0.9.0] - 2019-02-27 ### Added - `span` field to all `Token` variants, which contains a whole token span in bytes. - `Stream::try_consume_byte`. ### Changed - All `Token` variants are structs now and not tuples. - `StrSpan` contains an actual string span an not only region now. 
So we can use a non-panic and zero-cost `StrSpan::as_str` instead of `StrSpan::to_str`, that was performing slicing each time. - Split `Stream` into `ByteStream` and `Stream`. - `Stream::skip_spaces` will parse only ASCII whitespace now. - Rename `StrSpan::to_str` into `StrSpan::as_str`. - Rename `Reference::EntityRef` into `Reference::Entity`. - Rename `Reference::CharRef` into `Reference::Char`. - `StrSpan::from_substr` and `StrSpan::slice_region` are private now. ### Removed - `Token::Whitespaces`. Will be parsed as `Token::Text`. - `Stream::curr_char`. - `Stream::is_curr_byte_eq`. - `Stream::consume_either`. - `Stream::skip_ascii_spaces`. Use `Stream::skip_spaces` instead. - `StrSpan::trim`. - `StrSpan::len`. - `StrSpan::full_len`. - `StrSpan::as_bytes`. ### Fixed - Declaration attributes with mixed quotes parsing. ## [0.8.1] - 2019-01-02 ### Changed - Changed the crate category in the Cargo.toml ## [0.8.0] - 2018-12-13 ### Added - `Error::pos()`. ### Changed - Rename `Stream::gen_error_pos` into `Stream::gen_text_pos`. - Rename `Stream::gen_error_pos_from` into `Stream::gen_text_pos_from`. - `Stream::gen_text_pos` speed up. ### Fixed - `TextPos` is Unicode aware now. - XML declaration parsing when file has a BOM. ## [0.7.0] - 2018-10-29 ### Changed - `<` inside an attribute value is an error now. - `Token::Declaration` represents *standalone* as `bool` now. - XML declaration must be defined only once now. - XML declaration must start at 0 position. - DTD must be defined only once now. ## [0.6.1] - 2018-10-08 ### Added - `Stream::curr_byte_unchecked`. ### Fixed - UTF-8 BOM processing. ## [0.6.0] - 2018-08-31 ### Changed - `Reference::EntityRef` contains `&str` and not `StrSpan` now. - Rename `Stream::try_consume_char_reference` into `try_consume_reference`. And it will return `Reference` and not `char` now. - Rename `Tokenizer::set_fragment_mode` into `enable_fragment_mode`. - Rename `ErrorPos` into `TextPos`. ### Fixed - `TextPos` calculation via `Stream::gen_error_pos`. ### Removed - `TextUnescape` and `XmlSpace` because useless. ## [0.5.0] - 2018-06-14 ### Added - `StreamError::InvalidChar`. - `StreamError::InvalidSpace`. - `StreamError::InvalidString`. ### Changed - `Stream::consume_reference` will return only `InvalidReference` error from now. - `Error::InvalidTokenWithCause` merged into `Error::InvalidToken`. - `Stream::gen_error_pos_from` does not require `mut self` from now. - `StreamError::InvalidChar` requires `Vec` and not `String` from now. - `ErrorPos` uses `u32` and not `usize` from now. ### Removed - `failure` dependency. - `log` dependency. ## [0.4.1] - 2018-05-23 ### Added - An ability to parse an XML fragment. ## [0.4.0] - 2018-04-21 ### Changed - Relicense from MIT to MIT/Apache-2.0. ### Removed - `FromSpan` trait. - `from_str` and `from_span` methods are removed. Use the `From` trait instead. ## [0.3.0] - 2018-04-10 ### Changed - Use `failure` instead of `error-chain`. - Minimum Rust version is 1.18. - New error messages. - `TokenType` is properly public now. ### Removed - `ChainedError` ## [0.2.0] - 2018-03-11 ### Added - Qualified name parsing. ### Changed - **Breaking**. `Token::ElementStart` and `Token::Attribute` contains prefix and local part of the qualified name now. ## [0.1.2] - 2018-02-12 ### Added - `Stream::skip_ascii_spaces`. - Small performance optimizations. 
## [0.1.1] - 2018-01-17 ### Changed - `log` 0.3 -> 0.4 [Unreleased]: https://github.com/RazrFalcon/xmlparser/compare/v0.13.4...HEAD [0.13.4]: https://github.com/RazrFalcon/xmlparser/compare/v0.13.3...v0.13.4 [0.13.3]: https://github.com/RazrFalcon/xmlparser/compare/v0.13.2...v0.13.3 [0.13.2]: https://github.com/RazrFalcon/xmlparser/compare/v0.13.1...v0.13.2 [0.13.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.13.0...v0.13.1 [0.13.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.12.0...v0.13.0 [0.12.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.11.0...v0.12.0 [0.11.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.10.0...v0.11.0 [0.10.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.9.0...v0.10.0 [0.9.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.8.1...v0.9.0 [0.8.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.8.0...v0.8.1 [0.8.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.7.0...v0.8.0 [0.7.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.6.1...v0.7.0 [0.6.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.6.0...v0.6.1 [0.6.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.5.0...v0.6.0 [0.5.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.4.1...v0.5.0 [0.4.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.4.0...v0.4.1 [0.4.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.3.0...v0.4.0 [0.3.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.2.0...v0.3.0 [0.2.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.1.2...v0.2.0 [0.1.2]: https://github.com/RazrFalcon/xmlparser/compare/v0.1.1...v0.1.2 [0.1.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.1.0...v0.1.1 xmlparser-0.13.5/Cargo.lock0000644000000002320000000000100111070ustar # This file is automatically @generated by Cargo. # It is not intended for manual editing. version = 3 [[package]] name = "xmlparser" version = "0.13.5" xmlparser-0.13.5/Cargo.toml0000644000000016350000000000100111420ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. [package] edition = "2018" name = "xmlparser" version = "0.13.5" authors = ["Evgeniy Reizner "] description = "Pull-based, zero-allocation XML parser." documentation = "https://docs.rs/xmlparser/" readme = "README.md" keywords = [ "xml", "parser", "tokenizer", ] categories = ["parser-implementations"] license = "MIT/Apache-2.0" repository = "https://github.com/RazrFalcon/xmlparser" [features] default = ["std"] std = [] xmlparser-0.13.5/Cargo.toml.orig000064400000000000000000000006731046102023000146240ustar 00000000000000[package] name = "xmlparser" version = "0.13.5" authors = ["Evgeniy Reizner "] categories = ["parser-implementations"] description = "Pull-based, zero-allocation XML parser." 
documentation = "https://docs.rs/xmlparser/" keywords = ["xml", "parser", "tokenizer"] license = "MIT/Apache-2.0" readme = "README.md" repository = "https://github.com/RazrFalcon/xmlparser" edition = "2018" [features] default = ["std"] std = [] xmlparser-0.13.5/LICENSE-APACHE000064400000000000000000000251371046102023000136630ustar 00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. xmlparser-0.13.5/LICENSE-MIT000064400000000000000000000020721046102023000133640ustar 00000000000000The MIT License (MIT) Copyright (c) 2018 Reizner Evgeniy Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. xmlparser-0.13.5/README.md000064400000000000000000000047321046102023000132140ustar 00000000000000## xmlparser ![Build Status](https://github.com/RazrFalcon/xmlparser/workflows/Rust/badge.svg) [![Crates.io](https://img.shields.io/crates/v/xmlparser.svg)](https://crates.io/crates/xmlparser) [![Documentation](https://docs.rs/xmlparser/badge.svg)](https://docs.rs/xmlparser) [![Rust 1.31+](https://img.shields.io/badge/rust-1.31+-orange.svg)](https://www.rust-lang.org) ![](https://img.shields.io/badge/unsafe-forbidden-brightgreen.svg) *xmlparser* is a low-level, pull-based, zero-allocation [XML 1.0](https://www.w3.org/TR/xml/) parser. ### Example ```rust for token in xmlparser::Tokenizer::from("") { println!("{:?}", token); } ``` ### Why a new library? This library is basically a low-level XML tokenizer that preserves the positions of the tokens and is not intended to be used directly. If you are looking for a higher level solution, check out [roxmltree](https://github.com/RazrFalcon/roxmltree). ### Benefits - All tokens contain `StrSpan` structs which represent the position of the substring in the original document. - Good error processing. All error types contain the position (line:column) where it occurred. - No heap allocations. - No dependencies. - Tiny. ~1400 LOC and ~30KiB in the release build according to `cargo-bloat`. - Supports `no_std` builds. To use without the standard library, disable the default features. ### Limitations - Currently, only ENTITY objects are parsed from the DOCTYPE. All others are ignored. - No tree structure validation. So an XML like `` or a string without root element will be parsed without errors. You should check for this manually. On the other hand `` will lead to an error. - Duplicated attributes is not an error. So XML like `` will be parsed without errors. You should check for this manually. - UTF-8 only. ### Safety - The library must not panic. Any panic is considered a critical bug and should be reported. 
- The library forbids unsafe code. ### License Licensed under either of - Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) - MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. xmlparser-0.13.5/README.tpl000064400000000000000000000016361046102023000134130ustar 00000000000000## {{crate}} [![Build Status](https://travis-ci.org/RazrFalcon/{{crate}}.svg?branch=master)](https://travis-ci.org/RazrFalcon/{{crate}}) [![Crates.io](https://img.shields.io/crates/v/{{crate}}.svg)](https://crates.io/crates/{{crate}}) [![Documentation](https://docs.rs/{{crate}}/badge.svg)](https://docs.rs/{{crate}}) [![Rust 1.31+](https://img.shields.io/badge/rust-1.31+-orange.svg)](https://www.rust-lang.org) {{readme}} ### License Licensed under either of - Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) - MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. xmlparser-0.13.5/examples/parse.rs000064400000000000000000000012461046102023000152300ustar 00000000000000extern crate xmlparser as xml; use std::env; use std::fs; use std::io::Read; fn main() { let args = env::args().collect::>(); if args.len() != 2 { println!("Usage: parse file.xml"); return; } let text = load_file(&args[1]); if let Err(e) = parse(&text) { println!("Error: {}.", e); } } fn parse(text: &str) -> Result<(), xml::Error> { for token in xml::Tokenizer::from(text) { println!("{:?}", token?); } Ok(()) } fn load_file(path: &str) -> String { let mut file = fs::File::open(path).unwrap(); let mut text = String::new(); file.read_to_string(&mut text).unwrap(); text } xmlparser-0.13.5/src/error.rs000064400000000000000000000165451046102023000142300ustar 00000000000000use core::fmt; use core::str; #[cfg(feature = "std")] use std::error; /// An XML parser errors. #[allow(missing_docs)] #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum Error { InvalidDeclaration(StreamError, TextPos), InvalidComment(StreamError, TextPos), InvalidPI(StreamError, TextPos), InvalidDoctype(StreamError, TextPos), InvalidEntity(StreamError, TextPos), InvalidElement(StreamError, TextPos), InvalidAttribute(StreamError, TextPos), InvalidCdata(StreamError, TextPos), InvalidCharData(StreamError, TextPos), UnknownToken(TextPos), } impl Error { /// Returns the error position. 
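    ///
    /// # Examples
    ///
    /// A minimal sketch of reporting the position of a parsing error
    /// (the malformed input below is only an example):
    ///
    /// ```
    /// for token in xmlparser::Tokenizer::from("<a><<") {
    ///     if let Err(e) = token {
    ///         // `TextPos` is a 1-based row:column pair.
    ///         println!("error at {}", e.pos());
    ///     }
    /// }
    /// ```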
pub fn pos(&self) -> TextPos { match *self { Error::InvalidDeclaration(_, pos) => pos, Error::InvalidComment(_, pos) => pos, Error::InvalidPI(_, pos) => pos, Error::InvalidDoctype(_, pos) => pos, Error::InvalidEntity(_, pos) => pos, Error::InvalidElement(_, pos) => pos, Error::InvalidAttribute(_, pos) => pos, Error::InvalidCdata(_, pos) => pos, Error::InvalidCharData(_, pos) => pos, Error::UnknownToken(pos) => pos, } } } impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { Error::InvalidDeclaration(ref cause, pos) => { write!(f, "invalid XML declaration at {} cause {}", pos, cause) } Error::InvalidComment(ref cause, pos) => { write!(f, "invalid comment at {} cause {}", pos, cause) } Error::InvalidPI(ref cause, pos) => { write!(f, "invalid processing instruction at {} cause {}", pos, cause) } Error::InvalidDoctype(ref cause, pos) => { write!(f, "invalid DTD at {} cause {}", pos, cause) } Error::InvalidEntity(ref cause, pos) => { write!(f, "invalid DTD entity at {} cause {}", pos, cause) } Error::InvalidElement(ref cause, pos) => { write!(f, "invalid element at {} cause {}", pos, cause) } Error::InvalidAttribute(ref cause, pos) => { write!(f, "invalid attribute at {} cause {}", pos, cause) } Error::InvalidCdata(ref cause, pos) => { write!(f, "invalid CDATA at {} cause {}", pos, cause) } Error::InvalidCharData(ref cause, pos) => { write!(f, "invalid character data at {} cause {}", pos, cause) } Error::UnknownToken(pos) => { write!(f, "unknown token at {}", pos) } } } } #[cfg(feature = "std")] impl error::Error for Error { fn description(&self) -> &str { "an XML parsing error" } } /// A stream parser errors. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum StreamError { /// The steam ended earlier than we expected. /// /// Should only appear on invalid input data. /// Errors in a valid XML should be handled by errors below. UnexpectedEndOfStream, /// An invalid name. InvalidName, /// A non-XML character has occurred. /// /// Valid characters are: NonXmlChar(char, TextPos), /// An invalid/unexpected character. /// /// The first byte is an actual one, the second one is expected. /// /// We are using a single value to reduce the struct size. InvalidChar(u8, u8, TextPos), /// An invalid/unexpected character. /// /// Just like `InvalidChar`, but specifies multiple expected characters. InvalidCharMultiple(u8, &'static [u8], TextPos), /// An unexpected character instead of `"` or `'`. InvalidQuote(u8, TextPos), /// An unexpected character instead of an XML space. /// /// Includes: `' ' \n \r \t `. InvalidSpace(u8, TextPos), /// An unexpected string. /// /// Contains what string was expected. InvalidString(&'static str, TextPos), /// An invalid reference. InvalidReference, /// An invalid ExternalID in the DTD. InvalidExternalID, /// Comment cannot contain `--`. InvalidCommentData, /// Comment cannot end with `-`. InvalidCommentEnd, /// A Character Data node contains an invalid data. /// /// Currently, only `]]>` is not allowed. 
InvalidCharacterData, } impl fmt::Display for StreamError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { StreamError::UnexpectedEndOfStream => { write!(f, "unexpected end of stream") } StreamError::InvalidName => { write!(f, "invalid name token") } StreamError::NonXmlChar(c, pos) => { write!(f, "a non-XML character {:?} found at {}", c, pos) } StreamError::InvalidChar(actual, expected, pos) => { write!(f, "expected '{}' not '{}' at {}", expected as char, actual as char, pos) } StreamError::InvalidCharMultiple(actual, ref expected, pos) => { let mut expected_iter = expected.iter().peekable(); write!(f, "expected ")?; while let Some(&c) = expected_iter.next() { write!(f, "'{}'", c as char)?; if expected_iter.peek().is_some() { write!(f, ", ")?; } } write!(f, " not '{}' at {}", actual as char, pos) } StreamError::InvalidQuote(c, pos) => { write!(f, "expected quote mark not '{}' at {}", c as char, pos) } StreamError::InvalidSpace(c, pos) => { write!(f, "expected space not '{}' at {}", c as char, pos) } StreamError::InvalidString(expected, pos) => { write!(f, "expected '{}' at {}", expected, pos) } StreamError::InvalidReference => { write!(f, "invalid reference") } StreamError::InvalidExternalID => { write!(f, "invalid ExternalID") } StreamError::InvalidCommentData => { write!(f, "'--' is not allowed in comments") } StreamError::InvalidCommentEnd => { write!(f, "comment cannot end with '-'") } StreamError::InvalidCharacterData => { write!(f, "']]>' is not allowed inside a character data") } } } } #[cfg(feature = "std")] impl error::Error for StreamError { fn description(&self) -> &str { "an XML stream parsing error" } } /// Position in text. /// /// Position indicates a row/line and a column in the original text. Starting from 1:1. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] #[allow(missing_docs)] pub struct TextPos { pub row: u32, pub col: u32, } impl TextPos { /// Constructs a new `TextPos`. /// /// Should not be invoked manually, but rather via `Stream::gen_text_pos`. pub fn new(row: u32, col: u32) -> TextPos { TextPos { row, col } } } impl fmt::Display for TextPos { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "{}:{}", self.row, self.col) } } xmlparser-0.13.5/src/lib.rs000064400000000000000000000750131046102023000136400ustar 00000000000000/*! *xmlparser* is a low-level, pull-based, zero-allocation [XML 1.0](https://www.w3.org/TR/xml/) parser. ## Example ```rust for token in xmlparser::Tokenizer::from("") { println!("{:?}", token); } ``` ## Why a new library? This library is basically a low-level XML tokenizer that preserves the positions of the tokens and is not intended to be used directly. If you are looking for a higher level solution, check out [roxmltree](https://github.com/RazrFalcon/roxmltree). ## Benefits - All tokens contain `StrSpan` structs which represent the position of the substring in the original document. - Good error processing. All error types contain the position (line:column) where it occurred. - No heap allocations. - No dependencies. - Tiny. ~1400 LOC and ~30KiB in the release build according to `cargo-bloat`. - Supports `no_std` builds. To use without the standard library, disable the default features. ## Limitations - Currently, only ENTITY objects are parsed from the DOCTYPE. All others are ignored. - No tree structure validation. So an XML like `` or a string without root element will be parsed without errors. You should check for this manually. On the other hand `` will lead to an error. 
- Duplicated attributes is not an error. So XML like `` will be parsed without errors. You should check for this manually. - UTF-8 only. ## Safety - The library must not panic. Any panic is considered a critical bug and should be reported. - The library forbids unsafe code. */ #![no_std] #![forbid(unsafe_code)] #![warn(missing_docs)] #![allow(ellipsis_inclusive_range_patterns)] #[cfg(feature = "std")] #[macro_use] extern crate std; macro_rules! matches { ($expression:expr, $($pattern:tt)+) => { match $expression { $($pattern)+ => true, _ => false } } } mod error; mod stream; mod strspan; mod xmlchar; pub use crate::error::*; pub use crate::stream::*; pub use crate::strspan::*; pub use crate::xmlchar::*; /// An XML token. #[allow(missing_docs)] #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum Token<'a> { /// Declaration token. /// /// ```text /// /// --- - version /// ----- - encoding? /// --- - standalone? /// ------------------------------------------------------- - span /// ``` Declaration { version: StrSpan<'a>, encoding: Option>, standalone: Option, span: StrSpan<'a>, }, /// Processing instruction token. /// /// ```text /// /// ------ - target /// ------- - content? /// ------------------ - span /// ``` ProcessingInstruction { target: StrSpan<'a>, content: Option>, span: StrSpan<'a>, }, /// Comment token. /// /// ```text /// /// ------ - text /// ------------- - span /// ``` Comment { text: StrSpan<'a>, span: StrSpan<'a>, }, /// DOCTYPE start token. /// /// ```text /// , external_id: Option>, span: StrSpan<'a>, }, /// Empty DOCTYPE token. /// /// ```text /// /// -------- - name /// ------------------ - external_id? /// -------------------------------------- - span /// ``` EmptyDtd { name: StrSpan<'a>, external_id: Option>, span: StrSpan<'a>, }, /// ENTITY token. /// /// Can appear only inside the DTD. /// /// ```text /// /// --------- - name /// --------------- - definition /// ------------------------------------- - span /// ``` EntityDeclaration { name: StrSpan<'a>, definition: EntityDefinition<'a>, span: StrSpan<'a>, }, /// DOCTYPE end token. /// /// ```text /// /// -- - span /// ``` DtdEnd { span: StrSpan<'a>, }, /// Element start token. /// /// ```text /// /// -- - prefix /// ---- - local /// -------- - span /// ``` ElementStart { prefix: StrSpan<'a>, local: StrSpan<'a>, span: StrSpan<'a>, }, /// Attribute token. /// /// ```text /// /// -- - prefix /// ---- - local /// ----- - value /// --------------- - span /// ``` Attribute { prefix: StrSpan<'a>, local: StrSpan<'a>, value: StrSpan<'a>, span: StrSpan<'a>, }, /// Element end token. /// /// ```text /// text /// - ElementEnd::Open /// - - span /// ``` /// /// ```text /// text /// -- ---- - ElementEnd::Close(prefix, local) /// ---------- - span /// ``` /// /// ```text /// /// - ElementEnd::Empty /// -- - span /// ``` ElementEnd { end: ElementEnd<'a>, span: StrSpan<'a>, }, /// Text token. /// /// Contains text between elements including whitespaces. /// Basically everything between `>` and `<`. /// Except `]]>`, which is not allowed and will lead to an error. /// /// ```text ///
<p> text </p>
/// ------ - text /// ``` /// /// The token span is equal to the `text`. Text { text: StrSpan<'a>, }, /// CDATA token. /// /// ```text ///
<p><![CDATA[text]]></p>
/// ---- - text /// ---------------- - span /// ``` Cdata { text: StrSpan<'a>, span: StrSpan<'a>, }, } /// `ElementEnd` token. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum ElementEnd<'a> { /// Indicates `>` Open, /// Indicates `` Close(StrSpan<'a>, StrSpan<'a>), /// Indicates `/>` Empty, } /// Representation of the [ExternalID](https://www.w3.org/TR/xml/#NT-ExternalID) value. #[allow(missing_docs)] #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum ExternalId<'a> { System(StrSpan<'a>), Public(StrSpan<'a>, StrSpan<'a>), } /// Representation of the [EntityDef](https://www.w3.org/TR/xml/#NT-EntityDef) value. #[allow(missing_docs)] #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum EntityDefinition<'a> { EntityValue(StrSpan<'a>), ExternalId(ExternalId<'a>), } type Result = core::result::Result; type StreamResult = core::result::Result; #[derive(Clone, Copy, PartialEq)] enum State { Declaration, AfterDeclaration, Dtd, AfterDtd, Elements, Attributes, AfterElements, End, } /// Tokenizer for the XML structure. pub struct Tokenizer<'a> { stream: Stream<'a>, state: State, depth: usize, fragment_parsing: bool, } impl<'a> From<&'a str> for Tokenizer<'a> { #[inline] fn from(text: &'a str) -> Self { let mut stream = Stream::from(text); // Skip UTF-8 BOM. if stream.starts_with(&[0xEF, 0xBB, 0xBF]) { stream.advance(3); } Tokenizer { stream, state: State::Declaration, depth: 0, fragment_parsing: false, } } } macro_rules! map_err_at { ($fun:expr, $stream:expr, $err:ident) => {{ let start = $stream.pos(); $fun.map_err(|e| Error::$err(e, $stream.gen_text_pos_from(start)) ) }} } impl<'a> Tokenizer<'a> { /// Enables document fragment parsing. /// /// By default, `xmlparser` will check for DTD, root element, etc. /// But if we have to parse an XML fragment, it will lead to an error. /// This method switches the parser to the root element content parsing mode, /// so it will treat any data as a content of the root element. pub fn from_fragment(full_text: &'a str, fragment: core::ops::Range) -> Self { Tokenizer { stream: Stream::from_substr(full_text, fragment), state: State::Elements, depth: 0, fragment_parsing: true, } } fn parse_next_impl(&mut self) -> Option>> { let s = &mut self.stream; if s.at_end() { return None; } let start = s.pos(); match self.state { State::Declaration => { self.state = State::AfterDeclaration; if s.starts_with(b" { if s.starts_with(b" self.state = State::Dtd, Ok(Token::EmptyDtd { .. }) => self.state = State::AfterDtd, _ => {} } Some(t) } else if s.starts_with(b"' fn parse_comment_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos(); s.advance(4); let text = s.consume_chars(|s, c| !(c == '-' && s.starts_with(b"-->")))?; s.skip_string(b"-->")?; if text.as_str().contains("--") { return Err(StreamError::InvalidCommentData); } if text.as_str().ends_with('-') { return Err(StreamError::InvalidCommentEnd); } let span = s.slice_back(start); Ok(Token::Comment { text, span }) } fn parse_pi(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_pi_impl(s), s, InvalidPI) } // PI ::= '' Char*)))? '?>' // PITarget ::= Name - (('X' | 'x') ('M' | 'm') ('L' | 'l')) fn parse_pi_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos(); s.advance(2); let target = s.consume_name()?; s.skip_spaces(); let content = s.consume_chars(|s, c| !(c == '?' 
&& s.starts_with(b"?>")))?; let content = if !content.is_empty() { Some(content) } else { None }; s.skip_string(b"?>")?; let span = s.slice_back(start); Ok(Token::ProcessingInstruction { target, content, span }) } fn parse_doctype(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_doctype_impl(s), s, InvalidDoctype) } // doctypedecl ::= '' fn parse_doctype_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos(); s.advance(9); s.consume_spaces()?; let name = s.consume_name()?; s.skip_spaces(); let external_id = Self::parse_external_id(s)?; s.skip_spaces(); let c = s.curr_byte()?; if c != b'[' && c != b'>' { static EXPECTED: &[u8] = &[b'[', b'>']; return Err(StreamError::InvalidCharMultiple(c, EXPECTED, s.gen_text_pos())); } s.advance(1); let span = s.slice_back(start); if c == b'[' { Ok(Token::DtdStart { name, external_id, span }) } else { Ok(Token::EmptyDtd { name, external_id, span }) } } // ExternalID ::= 'SYSTEM' S SystemLiteral | 'PUBLIC' S PubidLiteral S SystemLiteral fn parse_external_id(s: &mut Stream<'a>) -> StreamResult>> { let v = if s.starts_with(b"SYSTEM") || s.starts_with(b"PUBLIC") { let start = s.pos(); s.advance(6); let id = s.slice_back(start); s.consume_spaces()?; let quote = s.consume_quote()?; let literal1 = s.consume_bytes(|_, c| c != quote); s.consume_byte(quote)?; let v = if id.as_str() == "SYSTEM" { ExternalId::System(literal1) } else { s.consume_spaces()?; let quote = s.consume_quote()?; let literal2 = s.consume_bytes(|_, c| c != quote); s.consume_byte(quote)?; ExternalId::Public(literal1, literal2) }; Some(v) } else { None }; Ok(v) } fn parse_entity_decl(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_entity_decl_impl(s), s, InvalidEntity) } // EntityDecl ::= GEDecl | PEDecl // GEDecl ::= '' // PEDecl ::= '' fn parse_entity_decl_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos(); s.advance(8); s.consume_spaces()?; let is_ge = if s.try_consume_byte(b'%') { s.consume_spaces()?; false } else { true }; let name = s.consume_name()?; s.consume_spaces()?; let definition = Self::parse_entity_def(s, is_ge)?; s.skip_spaces(); s.consume_byte(b'>')?; let span = s.slice_back(start); Ok(Token::EntityDeclaration { name, definition, span }) } // EntityDef ::= EntityValue | (ExternalID NDataDecl?) // PEDef ::= EntityValue | ExternalID // EntityValue ::= '"' ([^%&"] | PEReference | Reference)* '"' | "'" ([^%&'] // | PEReference | Reference)* "'" // ExternalID ::= 'SYSTEM' S SystemLiteral | 'PUBLIC' S PubidLiteral S SystemLiteral // NDataDecl ::= S 'NDATA' S Name fn parse_entity_def(s: &mut Stream<'a>, is_ge: bool) -> StreamResult> { let c = s.curr_byte()?; match c { b'"' | b'\'' => { let quote = s.consume_quote()?; let value = s.consume_bytes(|_, c| c != quote); s.consume_byte(quote)?; Ok(EntityDefinition::EntityValue(value)) } b'S' | b'P' => { if let Some(id) = Self::parse_external_id(s)? 
{ if is_ge { s.skip_spaces(); if s.starts_with(b"NDATA") { s.advance(5); s.consume_spaces()?; s.skip_name()?; // TODO: NDataDecl is not supported } } Ok(EntityDefinition::ExternalId(id)) } else { Err(StreamError::InvalidExternalID) } } _ => { static EXPECTED: &[u8] = &[b'"', b'\'', b'S', b'P']; let pos = s.gen_text_pos(); Err(StreamError::InvalidCharMultiple(c, EXPECTED, pos)) } } } fn consume_decl(s: &mut Stream) -> StreamResult<()> { s.skip_bytes(|_, c| c != b'>'); s.consume_byte(b'>')?; Ok(()) } fn parse_cdata(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_cdata_impl(s), s, InvalidCdata) } // CDSect ::= CDStart CData CDEnd // CDStart ::= '' Char*)) // CDEnd ::= ']]>' fn parse_cdata_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos(); s.advance(9); let text = s.consume_chars(|s, c| !(c == ']' && s.starts_with(b"]]>")))?; s.skip_string(b"]]>")?; let span = s.slice_back(start); Ok(Token::Cdata { text, span }) } fn parse_element_start(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_element_start_impl(s), s, InvalidElement) } // '<' Name (S Attribute)* S? '>' fn parse_element_start_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos(); s.advance(1); let (prefix, local) = s.consume_qname()?; let span = s.slice_back(start); Ok(Token::ElementStart { prefix, local, span }) } fn parse_close_element(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_close_element_impl(s), s, InvalidElement) } // '' fn parse_close_element_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos(); s.advance(2); let (prefix, tag_name) = s.consume_qname()?; s.skip_spaces(); s.consume_byte(b'>')?; let span = s.slice_back(start); Ok(Token::ElementEnd { end: ElementEnd::Close(prefix, tag_name), span }) } // Name Eq AttValue fn parse_attribute(s: &mut Stream<'a>) -> StreamResult> { let attr_start = s.pos(); let has_space = s.starts_with_space(); s.skip_spaces(); if let Ok(c) = s.curr_byte() { let start = s.pos(); match c { b'/' => { s.advance(1); s.consume_byte(b'>')?; let span = s.slice_back(start); return Ok(Token::ElementEnd { end: ElementEnd::Empty, span }); } b'>' => { s.advance(1); let span = s.slice_back(start); return Ok(Token::ElementEnd { end: ElementEnd::Open, span }); } _ => {} } } if !has_space { if !s.at_end() { return Err(StreamError::InvalidSpace( s.curr_byte_unchecked(), s.gen_text_pos_from(attr_start)) ); } else { return Err(StreamError::UnexpectedEndOfStream); } } let start = s.pos(); let (prefix, local) = s.consume_qname()?; s.consume_eq()?; let quote = s.consume_quote()?; let quote_c = quote as char; // The attribute value must not contain the < character. let value = s.consume_chars(|_, c| c != quote_c && c != '<')?; s.consume_byte(quote)?; let span = s.slice_back(start); Ok(Token::Attribute { prefix, local, value, span }) } fn parse_text(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_text_impl(s), s, InvalidCharData) } fn parse_text_impl(s: &mut Stream<'a>) -> StreamResult> { let text = s.consume_chars(|_, c| c != '<')?; // According to the spec, `]]>` must not appear inside a Text node. // https://www.w3.org/TR/xml/#syntax // // Search for `>` first, since it's a bit faster than looking for `]]>`. 
if text.as_str().contains('>') { if text.as_str().contains("]]>") { return Err(StreamError::InvalidCharacterData); } } Ok(Token::Text { text }) } } impl<'a> Iterator for Tokenizer<'a> { type Item = Result>; #[inline] fn next(&mut self) -> Option { let mut t = None; while !self.stream.at_end() && self.state != State::End && t.is_none() { t = self.parse_next_impl(); } if let Some(Err(_)) = t { self.stream.jump_to_end(); self.state = State::End; } t } } xmlparser-0.13.5/src/stream.rs000064400000000000000000000411651046102023000143660ustar 00000000000000use core::char; use core::cmp; use core::ops::Range; use core::str; use crate::{ StreamError, StrSpan, TextPos, XmlByteExt, XmlCharExt, }; type Result = ::core::result::Result; /// Representation of the [Reference](https://www.w3.org/TR/xml/#NT-Reference) value. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub enum Reference<'a> { /// An entity reference. /// /// Entity(&'a str), /// A character reference. /// /// Char(char), } /// A streaming XML parsing interface. #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)] pub struct Stream<'a> { pos: usize, end: usize, span: StrSpan<'a>, } impl<'a> From<&'a str> for Stream<'a> { #[inline] fn from(text: &'a str) -> Self { Stream { pos: 0, end: text.len(), span: text.into(), } } } impl<'a> From> for Stream<'a> { #[inline] fn from(span: StrSpan<'a>) -> Self { Stream { pos: 0, end: span.as_str().len(), span, } } } impl<'a> Stream<'a> { /// Creates a new stream from a specified `text` substring. #[inline] pub fn from_substr(text: &'a str, fragment: Range) -> Self { Stream { pos: fragment.start, end: fragment.end, span: text.into(), } } /// Returns an underling string span. #[inline] pub fn span(&self) -> StrSpan<'a> { self.span } /// Returns current position. #[inline] pub fn pos(&self) -> usize { self.pos } /// Sets current position equal to the end. /// /// Used to indicate end of parsing on error. #[inline] pub fn jump_to_end(&mut self) { self.pos = self.end; } /// Checks if the stream is reached the end. /// /// Any [`pos()`] value larger than original text length indicates stream end. /// /// Accessing stream after reaching end via safe methods will produce /// an `UnexpectedEndOfStream` error. /// /// Accessing stream after reaching end via *_unchecked methods will produce /// a Rust's bound checking error. /// /// [`pos()`]: #method.pos #[inline] pub fn at_end(&self) -> bool { self.pos >= self.end } /// Returns a byte from a current stream position. /// /// # Errors /// /// - `UnexpectedEndOfStream` #[inline] pub fn curr_byte(&self) -> Result { if self.at_end() { return Err(StreamError::UnexpectedEndOfStream); } Ok(self.curr_byte_unchecked()) } /// Returns a byte from a current stream position. /// /// # Panics /// /// - if the current position is after the end of the data #[inline] pub fn curr_byte_unchecked(&self) -> u8 { self.span.as_bytes()[self.pos] } /// Returns a next byte from a current stream position. /// /// # Errors /// /// - `UnexpectedEndOfStream` #[inline] pub fn next_byte(&self) -> Result { if self.pos + 1 >= self.end { return Err(StreamError::UnexpectedEndOfStream); } Ok(self.span.as_bytes()[self.pos + 1]) } /// Advances by `n` bytes. /// /// # Examples /// /// ```rust,should_panic /// use xmlparser::Stream; /// /// let mut s = Stream::from("text"); /// s.advance(2); // ok /// s.advance(20); // will cause a panic via debug_assert!(). 
/// ``` #[inline] pub fn advance(&mut self, n: usize) { debug_assert!(self.pos + n <= self.end); self.pos += n; } /// Checks that the stream starts with a selected text. /// /// We are using `&[u8]` instead of `&str` for performance reasons. /// /// # Examples /// /// ``` /// use xmlparser::Stream; /// /// let mut s = Stream::from("Some text."); /// s.advance(5); /// assert_eq!(s.starts_with(b"text"), true); /// assert_eq!(s.starts_with(b"long"), false); /// ``` #[inline] pub fn starts_with(&self, text: &[u8]) -> bool { self.span.as_bytes()[self.pos..self.end].starts_with(text) } /// Consumes the current byte if it's equal to the provided byte. /// /// # Errors /// /// - `InvalidChar` /// - `UnexpectedEndOfStream` /// /// # Examples /// /// ``` /// use xmlparser::Stream; /// /// let mut s = Stream::from("Some text."); /// assert!(s.consume_byte(b'S').is_ok()); /// assert!(s.consume_byte(b'o').is_ok()); /// assert!(s.consume_byte(b'm').is_ok()); /// assert!(s.consume_byte(b'q').is_err()); /// ``` pub fn consume_byte(&mut self, c: u8) -> Result<()> { let curr = self.curr_byte()?; if curr != c { return Err(StreamError::InvalidChar(curr, c, self.gen_text_pos())); } self.advance(1); Ok(()) } /// Tries to consume the current byte if it's equal to the provided byte. /// /// Unlike `consume_byte()` will not return any errors. pub fn try_consume_byte(&mut self, c: u8) -> bool { match self.curr_byte() { Ok(b) if b == c => { self.advance(1); true } _ => false, } } /// Skips selected string. /// /// # Errors /// /// - `InvalidString` pub fn skip_string(&mut self, text: &'static [u8]) -> Result<()> { if !self.starts_with(text) { let pos = self.gen_text_pos(); // Assume that all input `text` are valid UTF-8 strings, so unwrap is safe. let expected = str::from_utf8(text).unwrap(); return Err(StreamError::InvalidString(expected, pos)); } self.advance(text.len()); Ok(()) } /// Consumes bytes by the predicate and returns them. /// /// The result can be empty. #[inline] pub fn consume_bytes(&mut self, f: F) -> StrSpan<'a> where F: Fn(&Stream, u8) -> bool { let start = self.pos; self.skip_bytes(f); self.slice_back(start) } /// Skips bytes by the predicate. pub fn skip_bytes(&mut self, f: F) where F: Fn(&Stream, u8) -> bool { while !self.at_end() && f(self, self.curr_byte_unchecked()) { self.advance(1); } } /// Consumes chars by the predicate and returns them. /// /// The result can be empty. #[inline] pub fn consume_chars(&mut self, f: F) -> Result> where F: Fn(&Stream, char) -> bool { let start = self.pos; self.skip_chars(f)?; Ok(self.slice_back(start)) } /// Skips chars by the predicate. #[inline] pub fn skip_chars(&mut self, f: F) -> Result<()> where F: Fn(&Stream, char) -> bool { for c in self.chars() { if !c.is_xml_char() { return Err(StreamError::NonXmlChar(c, self.gen_text_pos())); } else if f(self, c) { self.advance(c.len_utf8()); } else { break; } } Ok(()) } #[inline] pub(crate) fn chars(&self) -> str::Chars<'a> { self.span.as_str()[self.pos..self.end].chars() } /// Slices data from `pos` to the current position. #[inline] pub fn slice_back(&self, pos: usize) -> StrSpan<'a> { self.span.slice_region(pos, self.pos) } /// Slices data from the current position to the end. #[inline] pub fn slice_tail(&self) -> StrSpan<'a> { self.span.slice_region(self.pos, self.end) } /// Skips whitespaces. /// /// Accepted values: `' ' \n \r \t`. 
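    ///
    /// # Examples
    ///
    /// A short sketch of the expected behaviour:
    ///
    /// ```
    /// use xmlparser::Stream;
    ///
    /// let mut s = Stream::from(" \t\r\n!");
    /// s.skip_spaces();
    /// assert_eq!(s.curr_byte(), Ok(b'!'));
    /// ```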
#[inline] pub fn skip_spaces(&mut self) { while !self.at_end() && self.curr_byte_unchecked().is_xml_space() { self.advance(1); } } /// Checks if the stream is starts with a space. #[inline] pub fn starts_with_space(&self) -> bool { !self.at_end() && self.curr_byte_unchecked().is_xml_space() } /// Consumes whitespaces. /// /// Like [`skip_spaces()`], but checks that first char is actually a space. /// /// [`skip_spaces()`]: #method.skip_spaces /// /// # Errors /// /// - `InvalidSpace` pub fn consume_spaces(&mut self) -> Result<()> { if self.at_end() { return Err(StreamError::UnexpectedEndOfStream); } if !self.starts_with_space() { return Err(StreamError::InvalidSpace(self.curr_byte_unchecked(), self.gen_text_pos())); } self.skip_spaces(); Ok(()) } /// Consumes an XML character reference if there is one. /// /// On error will reset the position to the original. pub fn try_consume_reference(&mut self) -> Option> { let start = self.pos(); // Consume reference on a substream. let mut s = self.clone(); match s.consume_reference() { Ok(r) => { // If the current data is a reference than advance the current stream // by number of bytes read by substream. self.advance(s.pos() - start); Some(r) } Err(_) => { None } } } /// Consumes an XML reference. /// /// Consumes according to: /// /// # Errors /// /// - `InvalidReference` pub fn consume_reference(&mut self) -> Result> { self._consume_reference().map_err(|_| StreamError::InvalidReference) } #[inline(never)] fn _consume_reference(&mut self) -> Result> { if !self.try_consume_byte(b'&') { return Err(StreamError::InvalidReference); } let reference = if self.try_consume_byte(b'#') { let (value, radix) = if self.try_consume_byte(b'x') { let value = self.consume_bytes(|_, c| c.is_xml_hex_digit()).as_str(); (value, 16) } else { let value = self.consume_bytes(|_, c| c.is_xml_digit()).as_str(); (value, 10) }; let n = u32::from_str_radix(value, radix).map_err(|_| StreamError::InvalidReference)?; let c = char::from_u32(n).unwrap_or('\u{FFFD}'); if !c.is_xml_char() { return Err(StreamError::InvalidReference); } Reference::Char(c) } else { let name = self.consume_name()?; match name.as_str() { "quot" => Reference::Char('"'), "amp" => Reference::Char('&'), "apos" => Reference::Char('\''), "lt" => Reference::Char('<'), "gt" => Reference::Char('>'), _ => Reference::Entity(name.as_str()), } }; self.consume_byte(b';')?; Ok(reference) } /// Consumes an XML name and returns it. /// /// Consumes according to: /// /// # Errors /// /// - `InvalidName` - if name is empty or starts with an invalid char /// - `UnexpectedEndOfStream` pub fn consume_name(&mut self) -> Result> { let start = self.pos(); self.skip_name()?; let name = self.slice_back(start); if name.is_empty() { return Err(StreamError::InvalidName); } Ok(name) } /// Skips an XML name. /// /// The same as `consume_name()`, but does not return a consumed name. /// /// # Errors /// /// - `InvalidName` - if name is empty or starts with an invalid char pub fn skip_name(&mut self) -> Result<()> { let mut iter = self.chars(); if let Some(c) = iter.next() { if c.is_xml_name_start() { self.advance(c.len_utf8()); } else { return Err(StreamError::InvalidName); } } for c in iter { if c.is_xml_name() { self.advance(c.len_utf8()); } else { break; } } Ok(()) } /// Consumes a qualified XML name and returns it. 
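    ///
    /// Returns the `(prefix, local)` pair; for a name without a prefix
    /// the prefix span is empty.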
/// /// Consumes according to: /// /// # Errors /// /// - `InvalidName` - if name is empty or starts with an invalid char #[inline(never)] pub fn consume_qname(&mut self) -> Result<(StrSpan<'a>, StrSpan<'a>)> { let start = self.pos(); let mut splitter = None; while !self.at_end() { // Check for ASCII first for performance reasons. let b = self.curr_byte_unchecked(); if b < 128 { if b == b':' { if splitter.is_none() { splitter = Some(self.pos()); self.advance(1); } else { // Multiple `:` is an error. return Err(StreamError::InvalidName); } } else if b.is_xml_name() { self.advance(1); } else { break; } } else { // Fallback to Unicode code point. match self.chars().nth(0) { Some(c) if c.is_xml_name() => { self.advance(c.len_utf8()); } _ => break, } } } let (prefix, local) = if let Some(splitter) = splitter { let prefix = self.span().slice_region(start, splitter); let local = self.slice_back(splitter + 1); (prefix, local) } else { let local = self.slice_back(start); ("".into(), local) }; // Prefix must start with a `NameStartChar`. if let Some(c) = prefix.as_str().chars().nth(0) { if !c.is_xml_name_start() { return Err(StreamError::InvalidName); } } // Local name must start with a `NameStartChar`. if let Some(c) = local.as_str().chars().nth(0) { if !c.is_xml_name_start() { return Err(StreamError::InvalidName); } } else { // If empty - error. return Err(StreamError::InvalidName); } Ok((prefix, local)) } /// Consumes `=`. /// /// Consumes according to: /// /// # Errors /// /// - `InvalidChar` /// - `UnexpectedEndOfStream` pub fn consume_eq(&mut self) -> Result<()> { self.skip_spaces(); self.consume_byte(b'=')?; self.skip_spaces(); Ok(()) } /// Consumes quote. /// /// Consumes `'` or `"` and returns it. /// /// # Errors /// /// - `InvalidQuote` /// - `UnexpectedEndOfStream` pub fn consume_quote(&mut self) -> Result { let c = self.curr_byte()?; if c == b'\'' || c == b'"' { self.advance(1); Ok(c) } else { Err(StreamError::InvalidQuote(c, self.gen_text_pos())) } } /// Calculates a current absolute position. /// /// This operation is very expensive. Use only for errors. #[inline(never)] pub fn gen_text_pos(&self) -> TextPos { let text = self.span.as_str(); let end = self.pos; let row = Self::calc_curr_row(text, end); let col = Self::calc_curr_col(text, end); TextPos::new(row, col) } /// Calculates an absolute position at `pos`. /// /// This operation is very expensive. Use only for errors. /// /// # Examples /// /// ``` /// let s = xmlparser::Stream::from("text"); /// /// assert_eq!(s.gen_text_pos_from(2), xmlparser::TextPos::new(1, 3)); /// assert_eq!(s.gen_text_pos_from(9999), xmlparser::TextPos::new(1, 5)); /// ``` #[inline(never)] pub fn gen_text_pos_from(&self, pos: usize) -> TextPos { let mut s = self.clone(); s.pos = cmp::min(pos, s.span.as_str().len()); s.gen_text_pos() } fn calc_curr_row(text: &str, end: usize) -> u32 { let mut row = 1; for c in &text.as_bytes()[..end] { if *c == b'\n' { row += 1; } } row } fn calc_curr_col(text: &str, end: usize) -> u32 { let mut col = 1; for c in text[..end].chars().rev() { if c == '\n' { break; } else { col += 1; } } col } } xmlparser-0.13.5/src/strspan.rs000064400000000000000000000035621046102023000145640ustar 00000000000000use core::fmt; use core::ops::{Deref, Range}; /// A string slice. /// /// Like `&str`, but also contains the position in the input XML /// from which it was parsed. 
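///
/// A rough sketch of how spans map back to the original input
/// (the XML snippet below is only an example):
///
/// ```
/// use xmlparser::{Token, Tokenizer};
///
/// let mut tokens = Tokenizer::from("<svg:rect/>");
/// if let Some(Ok(Token::ElementStart { prefix, local, span })) = tokens.next() {
///     assert_eq!(prefix.as_str(), "svg");
///     assert_eq!(local.as_str(), "rect");
///     assert_eq!(span.as_str(), "<svg:rect");
///     assert_eq!(span.range(), 0..9);
/// }
/// ```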
xmlparser-0.13.5/src/strspan.rs000064400000000000000000000035621046102023000145630ustar 00000000000000
use core::fmt;
use core::ops::{Deref, Range};


/// A string slice.
///
/// Like `&str`, but also contains the position in the input XML
/// from which it was parsed.
#[must_use]
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct StrSpan<'a> {
    text: &'a str,
    start: usize,
}

impl<'a> From<&'a str> for StrSpan<'a> {
    #[inline]
    fn from(text: &'a str) -> Self {
        StrSpan {
            text,
            start: 0,
        }
    }
}

impl<'a> StrSpan<'a> {
    /// Constructs a new `StrSpan` from a substring.
    #[inline]
    pub(crate) fn from_substr(text: &str, start: usize, end: usize) -> StrSpan {
        debug_assert!(start <= end);
        StrSpan { text: &text[start..end], start }
    }

    /// Returns the start position of the span.
    #[inline]
    pub fn start(&self) -> usize {
        self.start
    }

    /// Returns the end position of the span.
    #[inline]
    pub fn end(&self) -> usize {
        self.start + self.text.len()
    }

    /// Returns the range of the span.
    #[inline]
    pub fn range(&self) -> Range<usize> {
        self.start..self.end()
    }

    /// Returns the span as a string slice.
    #[inline]
    pub fn as_str(&self) -> &'a str {
        &self.text
    }

    /// Returns the underlying string region as a `StrSpan`.
    #[inline]
    pub(crate) fn slice_region(&self, start: usize, end: usize) -> StrSpan<'a> {
        StrSpan::from_substr(self.text, start, end)
    }
}

impl<'a> fmt::Debug for StrSpan<'a> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "StrSpan({:?} {}..{})", self.as_str(), self.start(), self.end())
    }
}

impl<'a> fmt::Display for StrSpan<'a> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.as_str())
    }
}

impl<'a> Deref for StrSpan<'a> {
    type Target = str;

    fn deref(&self) -> &Self::Target {
        self.text
    }
}
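
// A small usage sketch (illustrative only, assuming just the public `StrSpan`
// API above): spans built via `From<&str>` start at offset 0 and expose both
// their text and their byte range.
#[cfg(test)]
mod strspan_sketch {
    use super::StrSpan;

    #[test]
    fn basic_span() {
        let span = StrSpan::from("xml");
        assert_eq!(span.as_str(), "xml");
        assert_eq!(span.range(), 0..3);
    }
}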
xmlparser-0.13.5/src/xmlchar.rs000064400000000000000000000071231046102023000145250ustar 00000000000000
/// Extension methods for XML-subset only operations.
pub trait XmlCharExt {
    /// Checks if the value is within the
    /// [NameStartChar](https://www.w3.org/TR/xml/#NT-NameStartChar) range.
    fn is_xml_name_start(&self) -> bool;

    /// Checks if the value is within the
    /// [NameChar](https://www.w3.org/TR/xml/#NT-NameChar) range.
    fn is_xml_name(&self) -> bool;

    /// Checks if the value is within the
    /// [Char](https://www.w3.org/TR/xml/#NT-Char) range.
    fn is_xml_char(&self) -> bool;
}

impl XmlCharExt for char {
    #[inline]
    fn is_xml_name_start(&self) -> bool {
        // Check for ASCII first.
        if *self as u32 <= 128 {
            return match *self as u8 {
                b'A'...b'Z' | b'a'...b'z' | b':' | b'_' => true,
                _ => false,
            };
        }

        match *self as u32 {
              0x0000C0...0x0000D6
            | 0x0000D8...0x0000F6
            | 0x0000F8...0x0002FF
            | 0x000370...0x00037D
            | 0x00037F...0x001FFF
            | 0x00200C...0x00200D
            | 0x002070...0x00218F
            | 0x002C00...0x002FEF
            | 0x003001...0x00D7FF
            | 0x00F900...0x00FDCF
            | 0x00FDF0...0x00FFFD
            | 0x010000...0x0EFFFF => true,
            _ => false,
        }
    }

    #[inline]
    fn is_xml_name(&self) -> bool {
        // Check for ASCII first.
        if *self as u32 <= 128 {
            return (*self as u8).is_xml_name();
        }

        match *self as u32 {
              0x0000B7
            | 0x0000C0...0x0000D6
            | 0x0000D8...0x0000F6
            | 0x0000F8...0x0002FF
            | 0x000300...0x00036F
            | 0x000370...0x00037D
            | 0x00037F...0x001FFF
            | 0x00200C...0x00200D
            | 0x00203F...0x002040
            | 0x002070...0x00218F
            | 0x002C00...0x002FEF
            | 0x003001...0x00D7FF
            | 0x00F900...0x00FDCF
            | 0x00FDF0...0x00FFFD
            | 0x010000...0x0EFFFF => true,
            _ => false,
        }
    }

    #[inline]
    fn is_xml_char(&self) -> bool {
        match *self as u32 {
              0x000009
            | 0x00000A
            | 0x00000D
            | 0x000020...0x00D7FF
            | 0x00E000...0x00FFFD
            | 0x010000...0x10FFFF => true,
            _ => false,
        }
    }
}

/// Extension methods for XML-subset only operations.
pub trait XmlByteExt {
    /// Checks if the byte is a digit.
    ///
    /// `[0-9]`
    fn is_xml_digit(&self) -> bool;

    /// Checks if the byte is a hex digit.
    ///
    /// `[0-9A-Fa-f]`
    fn is_xml_hex_digit(&self) -> bool;

    /// Checks if the byte is a space.
    ///
    /// `[ \r\n\t]`
    fn is_xml_space(&self) -> bool;

    /// Checks if the byte is an ASCII letter.
    ///
    /// `[A-Za-z]`
    fn is_xml_letter(&self) -> bool;

    /// Checks if the byte is within the ASCII
    /// [NameChar](https://www.w3.org/TR/xml/#NT-NameChar) range.
    fn is_xml_name(&self) -> bool;
}

impl XmlByteExt for u8 {
    #[inline]
    fn is_xml_digit(&self) -> bool {
        matches!(*self, b'0'...b'9')
    }

    #[inline]
    fn is_xml_hex_digit(&self) -> bool {
        matches!(*self, b'0'...b'9' | b'A'...b'F' | b'a'...b'f')
    }

    #[inline]
    fn is_xml_space(&self) -> bool {
        matches!(*self, b' ' | b'\t' | b'\n' | b'\r')
    }

    #[inline]
    fn is_xml_letter(&self) -> bool {
        matches!(*self, b'A'...b'Z' | b'a'...b'z')
    }

    #[inline]
    fn is_xml_name(&self) -> bool {
        matches!(*self, b'A'...b'Z' | b'a'...b'z' | b'0'...b'9' | b':' | b'_' | b'-' | b'.')
    }
}
xmlparser-0.13.5/tests/integration/api.rs000064400000000000000000000014571046102023000165400ustar 00000000000000
extern crate xmlparser;

use xmlparser::*;

#[test]
fn text_pos_1() {
    let mut s = Stream::from("text");
    s.advance(2);
    assert_eq!(s.gen_text_pos(), TextPos::new(1, 3));
}

#[test]
fn text_pos_2() {
    let mut s = Stream::from("text\ntext");
    s.advance(6);
    assert_eq!(s.gen_text_pos(), TextPos::new(2, 2));
}

#[test]
fn text_pos_3() {
    let mut s = Stream::from("текст\nтекст");
    s.advance(15);
    assert_eq!(s.gen_text_pos(), TextPos::new(2, 3));
}

#[test]
fn token_size() {
    assert!(::std::mem::size_of::<Token>() <= 196);
}

#[test]
fn span_size() {
    assert!(::std::mem::size_of::<StrSpan>() <= 48);
}

#[test]
fn err_size_1() {
    assert!(::std::mem::size_of::<Error>() <= 64);
}

#[test]
fn err_size_2() {
    assert!(::std::mem::size_of::<StreamError>() <= 64);
}
xmlparser-0.13.5/tests/integration/cdata.rs000064400000000000000000000050671046102023000170440ustar 00000000000000
extern crate xmlparser as xml;

use crate::token::*;

test!(cdata_01, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("content", 3..22), Token::ElementEnd(ElementEnd::Close("", "p"), 22..26) ); test!(cdata_02, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("&ing", 3..22), Token::ElementEnd(ElementEnd::Close("", "p"), 22..26) ); test!(cdata_03, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("&ing ]", 3..24), Token::ElementEnd(ElementEnd::Close("", "p"), 24..28) ); test!(cdata_04, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("&ing]] ", 3..25), Token::ElementEnd(ElementEnd::Close("", "p"), 25..29) ); test!(cdata_05, "

text]]>

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("text", 3..38), Token::ElementEnd(ElementEnd::Close("", "p"), 38..42) ); test!(cdata_06, "

]]>

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("", 3..66), Token::ElementEnd(ElementEnd::Close("", "p"), 66..70) ); test!(cdata_07, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("1", 3..16), Token::Cdata("2", 16..29), Token::ElementEnd(ElementEnd::Close("", "p"), 29..33) ); test!(cdata_08, "

\n \t

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text(" \n ", 3..6), Token::Cdata("data", 6..22), Token::Text(" \t ", 22..25), Token::ElementEnd(ElementEnd::Close("", "p"), 25..29) ); test!(cdata_09, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("bracket ]after", 3..29), Token::ElementEnd(ElementEnd::Close("", "p"), 29..33) ); test!(cdata_err_01, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Error("invalid CDATA at 1:4 cause a non-XML character '\\u{1}' found at 1:13".to_string()) ); xmlparser-0.13.5/tests/integration/comments.rs000064400000000000000000000045431046102023000176150ustar 00000000000000use crate::token::*; test!(comment_01, "", Token::Comment("comment", 0..14)); test!(comment_02, "", Token::Comment("", 0..13)); test!(comment_03, "", Token::Comment("", Token::Comment("", Token::Comment("<", Token::Comment("<", Token::Comment("-->", Token::Comment("<>", 0..9)); test!(comment_09, "", Token::Comment("<", 0..8)); test!(comment_10, "", Token::Comment("", Token::Comment("", 0..7)); macro_rules! test_err { ($name:ident, $text:expr) => ( #[test] fn $name() { let mut p = xml::Tokenizer::from($text); assert!(p.next().unwrap().is_err()); } ) } test_err!(comment_err_01, ""); test_err!(comment_err_02, ""); test_err!(comment_err_05, ""); test_err!(comment_err_07, ""); test_err!(comment_err_15, ""); test_err!(comment_err_20, ""); test_err!(comment_err_27, ""); test_err!(comment_err_28, ""); test_err!(comment_err_29, ""); test_err!(comment_err_33, ""); test_err!(comment_err_34, ""); test_err!(comment_err_35, ""); xmlparser-0.13.5/tests/integration/doctype.rs000064400000000000000000000122001046102023000174240ustar 00000000000000use crate::token::*; test!(dtd_01, "", Token::EmptyDtd("greeting", Some(ExternalId::System("hello.dtd")), 0..38) ); test!(dtd_02, "", Token::EmptyDtd("greeting", Some(ExternalId::Public("hello.dtd", "goodbye.dtd")), 0..52) ); test!(dtd_03, "", Token::EmptyDtd("greeting", Some(ExternalId::System("hello.dtd")), 0..38) ); test!(dtd_04, "", Token::EmptyDtd("greeting", None, 0..19) ); test!(dtd_05, "", Token::DtdStart("greeting", None, 0..20), Token::DtdEnd(20..22) ); test!(dtd_06, "
", Token::EmptyDtd("greeting", None, 0..19), Token::ElementStart("", "a", 19..21), Token::ElementEnd(ElementEnd::Empty, 21..23) ); test!(dtd_07, "", Token::DtdStart("greeting", None, 0..20), Token::DtdEnd(20..23) ); test!(dtd_08, "", Token::DtdStart("greeting", None, 0..20), Token::DtdEnd(21..24) ); test!(dtd_entity_01, " ]>", Token::DtdStart("svg", None, 0..15), Token::EntityDecl( "ns_extend", EntityDefinition::EntityValue("http://ns.adobe.com/Extensibility/1.0/"), 20..80, ), Token::DtdEnd(81..83) ); test!(dtd_entity_02, " ]>", Token::DtdStart("svg", None, 0..15), Token::EntityDecl( "Pub-Status", EntityDefinition::EntityValue("This is a pre-release of the\nspecification."), 20..86, ), Token::DtdEnd(87..89) ); test!(dtd_entity_03, " ]>", Token::DtdStart("svg", None, 0..15), Token::EntityDecl( "open-hatch", EntityDefinition::ExternalId(ExternalId::System("http://www.textuality.com/boilerplate/OpenHatch.xml")), 20..101, ), Token::DtdEnd(102..104) ); test!(dtd_entity_04, " ]>", Token::DtdStart("svg", None, 0..15), Token::EntityDecl( "open-hatch", EntityDefinition::ExternalId( ExternalId::Public( "-//Textuality//TEXT Standard open-hatch boilerplate//EN", "http://www.textuality.com/boilerplate/OpenHatch.xml" ) ), 20..185, ), Token::DtdEnd(186..188) ); // TODO: NDATA will be ignored test!(dtd_entity_05, " ]>", Token::DtdStart("svg", None, 0..15), Token::EntityDecl( "hatch-pic", EntityDefinition::ExternalId(ExternalId::System("../grafix/OpenHatch.gif")), 20..83, ), Token::DtdEnd(84..86) ); // TODO: unsupported data will be ignored test!(dtd_entity_06, " ]>", Token::DtdStart("svg", None, 0..15), Token::EntityDecl( "ns_extend", EntityDefinition::EntityValue("http://ns.adobe.com/Extensibility/1.0/"), 44..104 ), Token::DtdEnd(203..205) ); // We do not support !ELEMENT DTD token and it will be skipped. // Previously, we were calling `Tokenizer::next` after the skip, // which is recursive and could cause a stack overflow when there are too many sequential // unsupported tokens. // This tests checks that the current code do not crash with stack overflow. #[test] fn dtd_entity_07() { let mut text = "\n"); } text.push_str("]>\n"); let mut p = xml::Tokenizer::from(text.as_str()); assert_eq!(to_test_token(p.next().unwrap()), Token::DtdStart("svg", None, 0..15)); assert_eq!(to_test_token(p.next().unwrap()), Token::DtdEnd(10016..10018)); } test!(dtd_err_01, "\u{000a}<", Token::Error("invalid DTD at 1:1 cause expected space not 'E' at 1:10".to_string()) ); test!(dtd_err_02, "' not '!' 
at 1:16".to_string()) ); xmlparser-0.13.5/tests/integration/document.rs000064400000000000000000000055001046102023000176000ustar 00000000000000use std::str; use crate::token::*; test!(document_01, "", ); test!(document_02, " ", ); test!(document_03, " \n\t\r ", ); // BOM test!(document_05, str::from_utf8(b"\xEF\xBB\xBF").unwrap(), Token::ElementStart("", "a", 3..5), Token::ElementEnd(ElementEnd::Empty, 5..7) ); test!(document_06, str::from_utf8(b"\xEF\xBB\xBF").unwrap(), Token::Declaration("1.0", None, None, 3..24) ); test!(document_07, "\n\n\ ", Token::Declaration("1.0", Some("utf-8"), None, 0..38), Token::Comment(" comment ", 39..55), Token::EmptyDtd("svg", Some(ExternalId::Public( "-//W3C//DTD SVG 1.1//EN", "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" )), 56..154) ); test!(document_08, "\n\ ", Token::PI("xml-stylesheet", None, 0..18), Token::EmptyDtd("svg", Some(ExternalId::Public( "-//W3C//DTD SVG 1.1//EN", "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" )), 19..117) ); test!(document_09, "\n\n\ ", Token::Declaration("1.0", Some("utf-8"), None, 0..38), Token::PI("xml-stylesheet", None, 39..57), Token::EmptyDtd("svg", Some(ExternalId::Public( "-//W3C//DTD SVG 1.1//EN", "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd" )), 58..156) ); test!(document_err_01, "", Token::Error("unknown token at 1:1".to_string()) ); test!(document_err_02, " &www---------Ӥ+----------w-----www_", Token::Error("unknown token at 1:2".to_string()) ); test!(document_err_03, "q", Token::Error("unknown token at 1:1".to_string()) ); test!(document_err_04, "", Token::Error("unknown token at 1:1".to_string()) ); test!(document_err_05, "", Token::EmptyDtd("greeting1", None, 0..20), Token::Error("unknown token at 1:21".to_string()) ); test!(document_err_06, " ", Token::Error("unknown token at 1:1".to_string()) ); #[test] fn parse_fragment_1() { let s = "

"; let mut p = xml::Tokenizer::from_fragment(s, 0..s.len()); match p.next().unwrap().unwrap() { xml::Token::ElementStart { local, .. } => assert_eq!(local.as_str(), "p"), _ => panic!(), } match p.next().unwrap().unwrap() { xml::Token::ElementEnd { .. } => {} _ => panic!(), } match p.next().unwrap().unwrap() { xml::Token::ElementStart { local, .. } => assert_eq!(local.as_str(), "p"), _ => panic!(), } } xmlparser-0.13.5/tests/integration/elements.rs000064400000000000000000000156301046102023000176030ustar 00000000000000use crate::token::*; test!(element_01, "", Token::ElementStart("", "a", 0..2), Token::ElementEnd(ElementEnd::Empty, 2..4) ); test!(element_02, "", Token::ElementStart("", "a", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::ElementEnd(ElementEnd::Close("", "a"), 3..7) ); test!(element_03, " \t \n ", Token::ElementStart("", "a", 5..7), Token::ElementEnd(ElementEnd::Empty, 7..9) ); test!(element_04, " \t \n ", Token::ElementStart("", "b", 5..7), Token::ElementEnd(ElementEnd::Open, 7..8), Token::ElementStart("", "a", 8..10), Token::ElementEnd(ElementEnd::Empty, 10..12), Token::ElementEnd(ElementEnd::Close("", "b"), 12..16) ); test!(element_06, "<俄语 լեզու=\"ռուսերեն\">данные", Token::ElementStart("", "俄语", 0..7), Token::Attribute("", "լեզու", "ռուսերեն", 8..37), Token::ElementEnd(ElementEnd::Open, 37..38), Token::Text("данные", 38..50), Token::ElementEnd(ElementEnd::Close("", "俄语"), 50..59) ); test!(element_07, "", Token::ElementStart("svg", "circle", 0..11), Token::ElementEnd(ElementEnd::Open, 11..12), Token::ElementEnd(ElementEnd::Close("svg", "circle"), 12..25) ); test!(element_08, "<:circle/>", Token::ElementStart("", "circle", 0..8), Token::ElementEnd(ElementEnd::Empty, 8..10) ); test!(element_err_01, "<>", Token::Error("invalid element at 1:1 cause invalid name token".to_string()) ); test!(element_err_02, "", Token::ElementStart("", "a", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::ElementEnd(ElementEnd::Close("", "a"), 3..7), Token::Error("unknown token at 1:8".to_string()) ); test!(element_err_10, "", Token::ElementStart("", "a", 0..2), Token::ElementEnd(ElementEnd::Empty, 2..4), Token::Error("unknown token at 1:5".to_string()) ); test!(element_err_11, "
", Token::ElementStart("", "a", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Error("invalid element at 1:4 cause expected '>' not '/' at 1:8".to_string()) ); test!(element_err_12, "", Token::Error("invalid element at 1:1 cause invalid name token".to_string()) ); test!(element_err_13, "\ ", Token::ElementStart("", "root", 0..5), Token::ElementEnd(ElementEnd::Open, 5..6), Token::Text("\n", 6..7), Token::ElementEnd(ElementEnd::Close("", "root"), 7..14), Token::Error("unknown token at 3:1".to_string()) ); test!(element_err_14, "<-svg/>", Token::Error("invalid element at 1:1 cause invalid name token".to_string()) ); test!(element_err_15, "", Token::Error("invalid element at 1:1 cause invalid name token".to_string()) ); test!(element_err_16, "", Token::Error("invalid element at 1:1 cause invalid name token".to_string()) ); test!(element_err_17, "", Token::Error("invalid element at 1:1 cause invalid name token".to_string()) ); test!(element_err_18, "<::svg/>", Token::Error("invalid element at 1:1 cause invalid name token".to_string()) ); test!(element_err_19, "<", Token::ElementStart("", "a", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Error("unknown token at 1:4".to_string()) ); test!(attribute_01, "", Token::ElementStart("", "a", 0..2), Token::Attribute("", "ax", "test", 3..12), Token::ElementEnd(ElementEnd::Empty, 12..14) ); test!(attribute_02, "", Token::ElementStart("", "a", 0..2), Token::Attribute("", "ax", "test", 3..12), Token::ElementEnd(ElementEnd::Empty, 12..14) ); test!(attribute_03, "", Token::ElementStart("", "a", 0..2), Token::Attribute("", "b", "test1", 3..12), Token::Attribute("", "c", "test2", 13..22), Token::ElementEnd(ElementEnd::Empty, 22..24) ); test!(attribute_04, "", Token::ElementStart("", "a", 0..2), Token::Attribute("", "b", "\"test1\"", 3..14), Token::Attribute("", "c", "'test2'", 15..26), Token::ElementEnd(ElementEnd::Empty, 26..28) ); test!(attribute_05, "", Token::ElementStart("", "c", 0..2), Token::Attribute("", "a", "test1' c='test2", 3..22), Token::Attribute("", "b", "test1\" c=\"test2", 23..42), Token::ElementEnd(ElementEnd::Empty, 42..44) ); test!(attribute_06, "", Token::ElementStart("", "c", 0..2), Token::Attribute("", "a", "test1", 5..21), Token::ElementEnd(ElementEnd::Empty, 26..28) ); test!(attribute_07, "", Token::ElementStart("", "c", 0..2), Token::Attribute("q", "a", "b", 3..10), Token::ElementEnd(ElementEnd::Empty, 10..12) ); test!(attribute_err_01, "", Token::ElementStart("", "c", 0..2), Token::Error("invalid attribute at 1:3 cause expected quote mark not 't' at 1:7".to_string()) ); test!(attribute_err_02, "", Token::ElementStart("", "c", 0..2), Token::Error("invalid attribute at 1:3 cause expected \'=\' not \'>\' at 1:5".to_string()) ); test!(attribute_err_03, "", Token::ElementStart("", "c", 0..2), Token::Error("invalid attribute at 1:3 cause expected '=' not '/' at 1:5".to_string()) ); test!(attribute_err_04, "", Token::ElementStart("", "c", 0..2), Token::Attribute("", "a", "b", 3..8), Token::Error("invalid attribute at 1:9 cause expected '=' not '/' at 1:11".to_string()) ); test!(attribute_err_05, "", Token::ElementStart("", "c", 0..2), Token::Error("invalid attribute at 1:3 cause expected ''' not '<' at 1:7".to_string()) ); test!(attribute_err_06, "", Token::ElementStart("", "c", 0..2), Token::Error("invalid attribute at 1:3 cause a non-XML character '\\u{1}' found at 1:7".to_string()) ); test!(attribute_err_07, "", Token::ElementStart("", "c", 0..2), Token::Attribute("", "a", "v", 3..8), Token::Error("invalid attribute 
at 1:9 cause expected space not 'b' at 1:9".to_string()) ); xmlparser-0.13.5/tests/integration/main.rs000064400000000000000000000002261046102023000167060ustar 00000000000000extern crate xmlparser as xml; #[macro_use] mod token; mod api; mod cdata; mod comments; mod doctype; mod document; mod elements; mod pi; mod text; xmlparser-0.13.5/tests/integration/pi.rs000064400000000000000000000106621046102023000163770ustar 00000000000000use crate::token::*; test!(pi_01, "", Token::PI("xslt", Some("ma"), 0..11) ); test!(pi_02, "", Token::PI("xslt", Some("m"), 0..13) ); test!(pi_03, "", Token::PI("xslt", None, 0..8) ); test!(pi_04, "", Token::PI("xslt", None, 0..9) ); test!(pi_05, "", Token::PI("xml-stylesheet", None, 0..18) ); test!(pi_err_01, "", Token::Error("invalid processing instruction at 1:1 cause invalid name token".to_string()) ); test!(declaration_01, "", Token::Declaration("1.0", None, None, 0..21) ); test!(declaration_02, "", Token::Declaration("1.0", None, None, 0..21) ); test!(declaration_03, "", Token::Declaration("1.0", Some("UTF-8"), None, 0..38) ); test!(declaration_04, "", Token::Declaration("1.0", Some("UTF-8"), None, 0..38) ); test!(declaration_05, "", Token::Declaration("1.0", Some("utf-8"), None, 0..38) ); test!(declaration_06, "", Token::Declaration("1.0", Some("EUC-JP"), None, 0..39) ); test!(declaration_07, "", Token::Declaration("1.0", Some("UTF-8"), Some(true), 0..55) ); test!(declaration_08, "", Token::Declaration("1.0", Some("UTF-8"), Some(false), 0..54) ); test!(declaration_09, "", Token::Declaration("1.0", None, Some(false), 0..37) ); test!(declaration_10, "", Token::Declaration("1.0", None, Some(false), 0..38) ); // Declaration with an invalid order test!(declaration_err_01, "", Token::Error("invalid XML declaration at 1:1 cause expected 'version' at 1:7".to_string()) ); test!(declaration_err_02, "", Token::Error("invalid XML declaration at 1:1 cause expected '\'' not '*' at 1:31".to_string()) ); test!(declaration_err_03, "", Token::Error("invalid XML declaration at 1:1 cause expected '1.' at 1:16".to_string()) ); test!(declaration_err_04, "", Token::Error("invalid XML declaration at 1:1 cause expected 'yes', 'no' at 1:33".to_string()) ); test!(declaration_err_05, "", Token::Error("invalid XML declaration at 1:1 cause expected '?>' at 1:21".to_string()) ); test!(declaration_err_06, "", Token::Error("invalid XML declaration at 1:1 cause expected '?>' at 1:55".to_string()) ); test!(declaration_err_07, "\u{000a}' at 3:7".to_string()) ); test!(declaration_err_08, "", Token::Error("invalid XML declaration at 1:1 cause expected 'version' at 2:2".to_string()) ); test!(declaration_err_09, "", Token::Error("invalid XML declaration at 1:1 cause expected 'version' at 2:2".to_string()) ); // XML declaration allowed only at the start of the document. test!(declaration_err_10, " ", Token::Error("unknown token at 1:2".to_string()) ); // XML declaration allowed only at the start of the document. test!(declaration_err_11, "", Token::Comment(" comment ", 0..16), Token::Error("unknown token at 1:17".to_string()) ); // Duplicate. 
test!(declaration_err_12, "", Token::Declaration("1.0", None, None, 0..21), Token::Error("unknown token at 1:22".to_string()) ); test!(declaration_err_13, "", Token::Error("invalid processing instruction at 1:1 cause a non-XML character '\\u{1}' found at 1:10".to_string()) ); test!(declaration_err_14, "", Token::Error("invalid XML declaration at 1:1 cause expected space not 'e' at 1:20".to_string()) ); test!(declaration_err_15, "", Token::Error("invalid XML declaration at 1:1 cause expected space not 's' at 1:37".to_string()) ); test!(declaration_err_16, "' at 1:20".to_string()) ); xmlparser-0.13.5/tests/integration/text.rs000064400000000000000000000037441046102023000167560ustar 00000000000000use crate::token::*; test!(text_01, "

text

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text("text", 3..7), Token::ElementEnd(ElementEnd::Close("", "p"), 7..11) ); test!(text_02, "

text

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text(" text ", 3..9), Token::ElementEnd(ElementEnd::Close("", "p"), 9..13) ); // 欄 is EF A4 9D. And EF can be mistreated for UTF-8 BOM. test!(text_03, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text("欄", 3..6), Token::ElementEnd(ElementEnd::Close("", "p"), 6..10) ); test!(text_04, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text(" ", 3..4), Token::ElementEnd(ElementEnd::Close("", "p"), 4..8) ); test!(text_05, "

\r\n\t

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text(" \r\n\t ", 3..8), Token::ElementEnd(ElementEnd::Close("", "p"), 8..12) ); test!(text_06, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text(" ", 3..9), Token::ElementEnd(ElementEnd::Close("", "p"), 9..13) ); test!(text_07, "

]>

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text("]>", 3..5), Token::ElementEnd(ElementEnd::Close("", "p"), 5..9) ); test!(text_err_01, "

]]>

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Error("invalid character data at 1:4 cause ']]>' is not allowed inside a character data".to_string()) ); test!(text_err_02, "

\u{0c}

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Error("invalid character data at 1:4 cause a non-XML character '\\u{c}' found at 1:4".to_string()) );
xmlparser-0.13.5/tests/integration/token.rs000064400000000000000000000105741046102023000171070ustar 00000000000000
type Range = ::std::ops::Range<usize>;

#[derive(PartialEq, Debug)]
pub enum Token<'a> {
    Declaration(&'a str, Option<&'a str>, Option<bool>, Range),
    PI(&'a str, Option<&'a str>, Range),
    Comment(&'a str, Range),
    DtdStart(&'a str, Option<ExternalId<'a>>, Range),
    EmptyDtd(&'a str, Option<ExternalId<'a>>, Range),
    EntityDecl(&'a str, EntityDefinition<'a>, Range),
    DtdEnd(Range),
    ElementStart(&'a str, &'a str, Range),
    Attribute(&'a str, &'a str, &'a str, Range),
    ElementEnd(ElementEnd<'a>, Range),
    Text(&'a str, Range),
    Cdata(&'a str, Range),
    Error(String),
}

#[derive(PartialEq, Debug)]
pub enum ElementEnd<'a> {
    Open,
    Close(&'a str, &'a str),
    Empty,
}

#[derive(PartialEq, Debug)]
pub enum ExternalId<'a> {
    System(&'a str),
    Public(&'a str, &'a str),
}

#[derive(PartialEq, Debug)]
pub enum EntityDefinition<'a> {
    EntityValue(&'a str),
    ExternalId(ExternalId<'a>),
}

#[macro_export]
macro_rules! test {
    ($name:ident, $text:expr, $($token:expr),*) => (
        #[test]
        fn $name() {
            let mut p = xml::Tokenizer::from($text);
            $(
                let t = p.next().unwrap();
                assert_eq!(to_test_token(t), $token);
            )*
            assert!(p.next().is_none());
        }
    )
}

#[inline(never)]
pub fn to_test_token(token: Result<xml::Token, xml::Error>) -> Token {
    match token {
        Ok(xml::Token::Declaration { version, encoding, standalone, span }) => {
            Token::Declaration(
                version.as_str(),
                encoding.map(|v| v.as_str()),
                standalone,
                span.range(),
            )
        }
        Ok(xml::Token::ProcessingInstruction { target, content, span }) => {
            Token::PI(
                target.as_str(),
                content.map(|v| v.as_str()),
                span.range(),
            )
        }
        Ok(xml::Token::Comment { text, span }) => Token::Comment(text.as_str(), span.range()),
        Ok(xml::Token::DtdStart { name, external_id, span }) => {
            Token::DtdStart(
                name.as_str(),
                external_id.map(|v| to_test_external_id(v)),
                span.range(),
            )
        }
        Ok(xml::Token::EmptyDtd { name, external_id, span }) => {
            Token::EmptyDtd(
                name.as_str(),
                external_id.map(|v| to_test_external_id(v)),
                span.range(),
            )
        }
        Ok(xml::Token::EntityDeclaration { name, definition, span }) => {
            Token::EntityDecl(
                name.as_str(),
                match definition {
                    xml::EntityDefinition::EntityValue(name) => {
                        EntityDefinition::EntityValue(name.as_str())
                    }
                    xml::EntityDefinition::ExternalId(id) => {
                        EntityDefinition::ExternalId(to_test_external_id(id))
                    }
                },
                span.range(),
            )
        }
        Ok(xml::Token::DtdEnd { span }) => Token::DtdEnd(span.range()),
        Ok(xml::Token::ElementStart { prefix, local, span }) => {
            Token::ElementStart(prefix.as_str(), local.as_str(), span.range())
        }
        Ok(xml::Token::Attribute { prefix, local, value, span }) => {
            Token::Attribute(prefix.as_str(), local.as_str(), value.as_str(), span.range())
        }
        Ok(xml::Token::ElementEnd { end, span }) => {
            Token::ElementEnd(
                match end {
                    xml::ElementEnd::Open => ElementEnd::Open,
                    xml::ElementEnd::Close(prefix, local) => {
                        ElementEnd::Close(prefix.as_str(), local.as_str())
                    }
                    xml::ElementEnd::Empty => ElementEnd::Empty,
                },
                span.range()
            )
        }
        Ok(xml::Token::Text { text }) => Token::Text(text.as_str(), text.range()),
        Ok(xml::Token::Cdata { text, span }) => Token::Cdata(text.as_str(), span.range()),
        Err(ref e) => Token::Error(e.to_string()),
    }
}

fn to_test_external_id(id: xml::ExternalId) -> ExternalId {
    match id {
        xml::ExternalId::System(name) => {
            ExternalId::System(name.as_str())
        }
        xml::ExternalId::Public(name, value) => {
            ExternalId::Public(name.as_str(), value.as_str())
        }
    }
}