xmlparser-0.11.0/.gitignore010064400017500001750000000000301322766170700140230ustar0000000000000000target Cargo.lock .idea xmlparser-0.11.0/.travis.yml010064400017500001750000000001731356436353600141570ustar0000000000000000language: rust rust: - 1.18.0 - stable script: - cargo test --verbose - cargo test --verbose --no-default-features xmlparser-0.11.0/CHANGELOG.md010064400017500001750000000135251356436417000136570ustar0000000000000000# Change Log All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](http://keepachangelog.com/) and this project adheres to [Semantic Versioning](http://semver.org/). ## [Unreleased] ## [0.11.0] - 2019-11-18 ### Added - `no_std` support thanks to [hugwijst](https://github.com/hugwijst). ### Changed - `StreamError::InvalidString` doesn't store an actual string now. ## [0.10.0] - 2019-09-14 ### Changed - 10-15% faster parsing. - Merge `ByteStream` and `Stream`. - `StreamError::InvalidChar` signature changed. - `StreamError::InvalidChar` was split into `InvalidChar` and `InvalidCharMultiple`. ### Fixed - Check for [NameStartChar](https://www.w3.org/TR/xml/#NT-NameStartChar) during qualified name parsing. E.g. `<-p>` is an invalid tag name from now. - Qualified name with multiple `:` is an error now. - `]>` is a valid text/`CharData` now. Previously it was parsed as `DoctypeEnd`. ### Removed - `StreamError::InvalidAttributeValue`. `StreamError::InvalidChar` will be emitted instead. ## [0.9.0] - 2019-02-27 ### Added - `span` field to all `Token` variants, which contains a whole token span in bytes. - `Stream::try_consume_byte`. ### Changed - All `Token` variants are structs now and not tuples. - `StrSpan` contains an actual string span an not only region now. So we can use a non-panic and zero-cost `StrSpan::as_str` instead of `StrSpan::to_str`, that was performing slicing each time. - Split `Stream` into `ByteStream` and `Stream`. - `Stream::skip_spaces` will parse only ASCII whitespace now. - Rename `StrSpan::to_str` into `StrSpan::as_str`. - Rename `Reference::EntityRef` into `Reference::Entity`. - Rename `Reference::CharRef` into `Reference::Char`. - `StrSpan::from_substr` and `StrSpan::slice_region` are private now. ### Removed - `Token::Whitespaces`. Will be parsed as `Token::Text`. - `Stream::curr_char`. - `Stream::is_curr_byte_eq`. - `Stream::consume_either`. - `Stream::skip_ascii_spaces`. Use `Stream::skip_spaces` instead. - `StrSpan::trim`. - `StrSpan::len`. - `StrSpan::full_len`. - `StrSpan::as_bytes`. ### Fixed - Declaration attributes with mixed quotes parsing. ## [0.8.1] - 2019-01-02 ### Changed - Changed the crate category in the Cargo.toml ## [0.8.0] - 2018-12-13 ### Added - `Error::pos()`. ### Changed - Rename `Stream::gen_error_pos` into `Stream::gen_text_pos`. - Rename `Stream::gen_error_pos_from` into `Stream::gen_text_pos_from`. - `Stream::gen_text_pos` speed up. ### Fixed - `TextPos` is Unicode aware now. - XML declaration parsing when file has a BOM. ## [0.7.0] - 2018-10-29 ### Changed - `<` inside an attribute value is an error now. - `Token::Declaration` represents *standalone* as `bool` now. - XML declaration must be defined only once now. - XML declaration must start at 0 position. - DTD must be defined only once now. ## [0.6.1] - 2018-10-08 ### Added - `Stream::curr_byte_unchecked`. ### Fixed - UTF-8 BOM processing. ## [0.6.0] - 2018-08-31 ### Changed - `Reference::EntityRef` contains `&str` and not `StrSpan` now. 
- Rename `Stream::try_consume_char_reference` into `try_consume_reference`. And it will return `Reference` and not `char` now. - Rename `Tokenizer::set_fragment_mode` into `enable_fragment_mode`. - Rename `ErrorPos` into `TextPos`. ### Fixed - `TextPos` calculation via `Stream::gen_error_pos`. ### Removed - `TextUnescape` and `XmlSpace` because useless. ## [0.5.0] - 2018-06-14 ### Added - `StreamError::InvalidChar`. - `StreamError::InvalidSpace`. - `StreamError::InvalidString`. ### Changed - `Stream::consume_reference` will return only `InvalidReference` error from now. - `Error::InvalidTokenWithCause` merged into `Error::InvalidToken`. - `Stream::gen_error_pos_from` does not require `mut self` from now. - `StreamError::InvalidChar` requires `Vec` and not `String` from now. - `ErrorPos` uses `u32` and not `usize` from now. ### Removed - `failure` dependency. - `log` dependency. ## [0.4.1] - 2018-05-23 ### Added - An ability to parse an XML fragment. ## [0.4.0] - 2018-04-21 ### Changed - Relicense from MIT to MIT/Apache-2.0. ### Removed - `FromSpan` trait. - `from_str` and `from_span` methods are removed. Use the `From` trait instead. ## [0.3.0] - 2018-04-10 ### Changed - Use `failure` instead of `error-chain`. - Minimum Rust version is 1.18. - New error messages. - `TokenType` is properly public now. ### Removed - `ChainedError` ## [0.2.0] - 2018-03-11 ### Added - Qualified name parsing. ### Changed - **Breaking**. `Token::ElementStart` and `Token::Attribute` contains prefix and local part of the qualified name now. ## [0.1.2] - 2018-02-12 ### Added - `Stream::skip_ascii_spaces`. - Small performance optimizations. ## [0.1.1] - 2018-01-17 ### Changed - `log` 0.3 -> 0.4 [Unreleased]: https://github.com/RazrFalcon/xmlparser/compare/v0.11.0...HEAD [0.11.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.10.0...v0.11.0 [0.10.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.9.0...v0.10.0 [0.9.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.8.1...v0.9.0 [0.8.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.8.0...v0.8.1 [0.8.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.7.0...v0.8.0 [0.7.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.6.1...v0.7.0 [0.6.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.6.0...v0.6.1 [0.6.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.5.0...v0.6.0 [0.5.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.4.1...v0.5.0 [0.4.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.4.0...v0.4.1 [0.4.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.3.0...v0.4.0 [0.3.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.2.0...v0.3.0 [0.2.0]: https://github.com/RazrFalcon/xmlparser/compare/v0.1.2...v0.2.0 [0.1.2]: https://github.com/RazrFalcon/xmlparser/compare/v0.1.1...v0.1.2 [0.1.1]: https://github.com/RazrFalcon/xmlparser/compare/v0.1.0...v0.1.1 xmlparser-0.11.0/Cargo.toml.orig010064400017500001750000000011461356436406400147330ustar0000000000000000[package] name = "xmlparser" # When updating version, also modify html_root_url in the lib.rs version = "0.11.0" authors = ["Evgeniy Reizner "] categories = ["parser-implementations"] description = "Pull-based, zero-allocation XML parser." 
documentation = "https://docs.rs/xmlparser/" keywords = ["xml", "parser", "tokenizer"] license = "MIT/Apache-2.0" readme = "README.md" repository = "https://github.com/RazrFalcon/xmlparser" [badges] travis-ci = { repository = "RazrFalcon/xmlparser" } [features] default = ["std"] std = [] [lib] path = "src/lib.rs" # for cargo-readme doctest = true xmlparser-0.11.0/Cargo.toml0000644000000020300000000000000111600ustar00# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. If you're # editing this file be aware that the upstream Cargo.toml # will likely look very different (and much more reasonable) [package] name = "xmlparser" version = "0.11.0" authors = ["Evgeniy Reizner "] description = "Pull-based, zero-allocation XML parser." documentation = "https://docs.rs/xmlparser/" readme = "README.md" keywords = ["xml", "parser", "tokenizer"] categories = ["parser-implementations"] license = "MIT/Apache-2.0" repository = "https://github.com/RazrFalcon/xmlparser" [lib] path = "src/lib.rs" doctest = true [features] default = ["std"] std = [] [badges.travis-ci] repository = "RazrFalcon/xmlparser" xmlparser-0.11.0/LICENSE-APACHE010064400017500001750000000251371322753041200137620ustar0000000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. xmlparser-0.11.0/LICENSE-MIT010064400017500001750000000020721326664711400134760ustar0000000000000000The MIT License (MIT) Copyright (c) 2018 Reizner Evgeniy Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. xmlparser-0.11.0/README.md010064400017500001750000000046641356436353600133360ustar0000000000000000## xmlparser [![Build Status](https://travis-ci.org/RazrFalcon/xmlparser.svg?branch=master)](https://travis-ci.org/RazrFalcon/xmlparser) [![Crates.io](https://img.shields.io/crates/v/xmlparser.svg)](https://crates.io/crates/xmlparser) [![Documentation](https://docs.rs/xmlparser/badge.svg)](https://docs.rs/xmlparser) [![Rust 1.18+](https://img.shields.io/badge/rust-1.18+-orange.svg)](https://www.rust-lang.org) *xmlparser* is a low-level, pull-based, zero-allocation [XML 1.0](https://www.w3.org/TR/xml/) parser. ### Example ```rust for token in xmlparser::Tokenizer::from("") { println!("{:?}", token); } ``` ### Why a new library This library is basically a low-level XML tokenizer that preserves a position of the tokens and does not intend to be used directly. If you are looking for a more high-level solution - checkout [roxmltree](https://github.com/RazrFalcon/roxmltree). ### Benefits - All tokens contain `StrSpan` objects which contain a position of the data in the original document. - Good error processing. All error types contain position (line:column) where it occurred. - No heap allocations. - No dependencies. - Tiny. ~1500 LOC and ~40KiB in the release build according to the `cargo-bloat`. - Supports `no_std` builds. To use without the standard library, disable the default features. ### Limitations - Currently, only ENTITY objects are parsed from the DOCTYPE. Other ignored. - No tree structure validation. So an XML like `` or a string without root element will be parsed without errors. You should check for this manually. On the other hand `` will lead to an error. - Duplicated attributes is not an error. So an XML like `` will be parsed without errors. You should check for this manually. - UTF-8 only. ### Safety - The library must not panic. Any panic considered as a critical bug and should be reported. - The library forbids the unsafe code. ### License Licensed under either of - Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) - MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. ### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. xmlparser-0.11.0/README.tpl010064400017500001750000000016361353173327200135210ustar0000000000000000## {{crate}} [![Build Status](https://travis-ci.org/RazrFalcon/{{crate}}.svg?branch=master)](https://travis-ci.org/RazrFalcon/{{crate}}) [![Crates.io](https://img.shields.io/crates/v/{{crate}}.svg)](https://crates.io/crates/{{crate}}) [![Documentation](https://docs.rs/{{crate}}/badge.svg)](https://docs.rs/{{crate}}) [![Rust 1.18+](https://img.shields.io/badge/rust-1.18+-orange.svg)](https://www.rust-lang.org) {{readme}} ### License Licensed under either of - Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) - MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) at your option. 
### Contribution Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions. xmlparser-0.11.0/examples/parse.rs010064400017500001750000000012461353713610500153340ustar0000000000000000extern crate xmlparser as xml; use std::env; use std::fs; use std::io::Read; fn main() { let args = env::args().collect::>(); if args.len() != 2 { println!("Usage: parse file.xml"); return; } let text = load_file(&args[1]); if let Err(e) = parse(&text) { println!("Error: {}.", e); } } fn parse(text: &str) -> Result<(), xml::Error> { for token in xml::Tokenizer::from(text) { println!("{:?}", token?); } Ok(()) } fn load_file(path: &str) -> String { let mut file = fs::File::open(path).unwrap(); let mut text = String::new(); file.read_to_string(&mut text).unwrap(); text } xmlparser-0.11.0/src/error.rs010064400017500001750000000122001356436353600143260ustar0000000000000000use core::fmt; use core::str; #[cfg(feature = "std")] use std::error; use TokenType; /// An XML parser errors. #[derive(Debug)] pub enum Error { /// An invalid token with an optional cause. InvalidToken(TokenType, TextPos, Option), /// An unexpected token. UnexpectedToken(TokenType, TextPos), /// An unknown token. UnknownToken(TextPos), } impl Error { /// Returns the error position. pub fn pos(&self) -> TextPos { match *self { Error::InvalidToken(_, pos, _) => pos, Error::UnexpectedToken(_, pos) => pos, Error::UnknownToken(pos) => pos, } } } impl fmt::Display for Error { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { Error::InvalidToken(token_type, pos, ref cause) => { match *cause { Some(ref cause) => { write!(f, "invalid token '{}' at {} cause {}", token_type, pos, cause) } None => { write!(f, "invalid token '{}' at {}", token_type, pos) } } } Error::UnexpectedToken(token_type, pos) => { write!(f, "unexpected token '{}' at {}", token_type, pos) } Error::UnknownToken(pos) => { write!(f, "unknown token at {}", pos) } } } } #[cfg(feature = "std")] impl error::Error for Error { fn description(&self) -> &str { "an XML parsing error" } } /// A stream parser errors. #[derive(Debug)] pub enum StreamError { /// The steam ended earlier than we expected. /// /// Should only appear on invalid input data. /// Errors in a valid XML should be handled by errors below. UnexpectedEndOfStream, /// An invalid name. InvalidName, /// An invalid/unexpected character. /// /// The first byte is an actual one, the second one is expected. /// /// We are using a single value to reduce the struct size. InvalidChar(u8, u8, TextPos), /// An invalid/unexpected character. /// /// Just like `InvalidChar`, but specifies multiple expected characters. InvalidCharMultiple(u8, &'static [u8], TextPos), /// An unexpected character instead of `"` or `'`. InvalidQuote(char, TextPos), /// An unexpected character instead of an XML space. /// /// Includes: `' ' \n \r \t `. InvalidSpace(char, TextPos), /// An unexpected string. /// /// Contains what string was expected. InvalidString(&'static str, TextPos), /// An invalid reference. InvalidReference, /// An invalid ExternalID in the DTD. 
InvalidExternalID, } impl fmt::Display for StreamError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { match *self { StreamError::UnexpectedEndOfStream => { write!(f, "unexpected end of stream") } StreamError::InvalidName => { write!(f, "invalid name token") } StreamError::InvalidChar(actual, expected, pos) => { write!(f, "expected '{}' not '{}' at {}", expected as char, actual as char, pos) } StreamError::InvalidCharMultiple(actual, ref expected, pos) => { let mut expected_iter = expected.iter().peekable(); write!(f, "expected ")?; while let Some(&c) = expected_iter.next() { write!(f, "'{}'", c as char)?; if expected_iter.peek().is_some() { write!(f, ", ")?; } } write!(f, " not '{}' at {}", actual as char, pos) } StreamError::InvalidQuote(c, pos) => { write!(f, "expected quote mark not '{}' at {}", c, pos) } StreamError::InvalidSpace(c, pos) => { write!(f, "expected space not '{}' at {}", c, pos) } StreamError::InvalidString(expected, pos) => { write!(f, "expected '{}' at {}", expected, pos) } StreamError::InvalidReference => { write!(f, "invalid reference") } StreamError::InvalidExternalID => { write!(f, "invalid ExternalID") } } } } #[cfg(feature = "std")] impl error::Error for StreamError { fn description(&self) -> &str { "an XML stream parsing error" } } /// Position in text. /// /// Position indicates a row/line and a column in the original text. Starting from 1:1. #[derive(Clone, Copy, PartialEq, Debug)] #[allow(missing_docs)] pub struct TextPos { pub row: u32, pub col: u32, } impl TextPos { /// Constructs a new `TextPos`. /// /// Should not be invoked manually, but rather via `Stream::gen_text_pos`. pub fn new(row: u32, col: u32) -> TextPos { TextPos { row, col } } } impl fmt::Display for TextPos { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "{}:{}", self.row, self.col) } } xmlparser-0.11.0/src/lib.rs010064400017500001750000001027161356436406400137540ustar0000000000000000/*! *xmlparser* is a low-level, pull-based, zero-allocation [XML 1.0](https://www.w3.org/TR/xml/) parser. ## Example ```rust for token in xmlparser::Tokenizer::from("") { println!("{:?}", token); } ``` ## Why a new library This library is basically a low-level XML tokenizer that preserves a position of the tokens and does not intend to be used directly. If you are looking for a more high-level solution - checkout [roxmltree](https://github.com/RazrFalcon/roxmltree). ## Benefits - All tokens contain `StrSpan` objects which contain a position of the data in the original document. - Good error processing. All error types contain position (line:column) where it occurred. - No heap allocations. - No dependencies. - Tiny. ~1500 LOC and ~40KiB in the release build according to the `cargo-bloat`. ## Limitations - Currently, only ENTITY objects are parsed from the DOCTYPE. Other ignored. - No tree structure validation. So an XML like `` or a string without root element will be parsed without errors. You should check for this manually. On the other hand `` will lead to an error. - Duplicated attributes is not an error. So an XML like `` will be parsed without errors. You should check for this manually. - UTF-8 only. ## Safety - The library must not panic. Any panic considered as a critical bug and should be reported. - The library forbids the unsafe code. 
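## Error position example

A small, illustrative sketch (the truncated input below is arbitrary) showing how a
token error and its `line:column` position can be inspected:

```rust
for token in xmlparser::Tokenizer::from("<tag attr='broken") {
    if let Err(e) = token {
        // `Error::pos()` returns a `TextPos` with the row and column.
        println!("error at {}", e.pos());
    }
}
```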
*/ #![no_std] #![cfg_attr(feature = "cargo-clippy", allow(unreadable_literal))] #![doc(html_root_url = "https://docs.rs/xmlparser/0.11.0")] #![forbid(unsafe_code)] #![warn(missing_docs)] #![allow(ellipsis_inclusive_range_patterns)] #[cfg(feature = "std")] #[macro_use] extern crate std; use core::fmt; macro_rules! matches { ($expression:expr, $($pattern:tt)+) => { match $expression { $($pattern)+ => true, _ => false } } } mod error; mod stream; mod strspan; mod xmlchar; pub use error::*; pub use stream::*; pub use strspan::*; pub use xmlchar::*; /// An XML token. #[allow(missing_docs)] #[derive(Clone, Copy, PartialEq, Debug)] pub enum Token<'a> { /// Declaration token. /// /// ```text /// /// --- - version /// ----- - encoding? /// --- - standalone? /// ------------------------------------------------------- - span /// ``` Declaration { version: StrSpan<'a>, encoding: Option>, standalone: Option, span: StrSpan<'a>, }, /// Processing instruction token. /// /// ```text /// /// ------ - target /// ------- - content? /// ------------------ - span /// ``` ProcessingInstruction { target: StrSpan<'a>, content: Option>, span: StrSpan<'a>, }, /// Comment token. /// /// ```text /// /// ------ - text /// ------------- - span /// ``` Comment { text: StrSpan<'a>, span: StrSpan<'a>, }, /// DOCTYPE start token. /// /// ```text /// , external_id: Option>, span: StrSpan<'a>, }, /// Empty DOCTYPE token. /// /// ```text /// /// -------- - name /// ------------------ - external_id? /// -------------------------------------- - span /// ``` EmptyDtd { name: StrSpan<'a>, external_id: Option>, span: StrSpan<'a>, }, /// ENTITY token. /// /// Can appear only inside the DTD. /// /// ```text /// /// --------- - name /// --------------- - definition /// ------------------------------------- - span /// ``` EntityDeclaration { name: StrSpan<'a>, definition: EntityDefinition<'a>, span: StrSpan<'a>, }, /// DOCTYPE end token. /// /// ```text /// /// -- - span /// ``` DtdEnd { span: StrSpan<'a>, }, /// Element start token. /// /// ```text /// /// -- - prefix /// ---- - local /// -------- - span /// ``` ElementStart { prefix: StrSpan<'a>, local: StrSpan<'a>, span: StrSpan<'a>, }, /// Attribute token. /// /// ```text /// /// -- - prefix /// ---- - local /// ----- - value /// --------------- - span /// ``` Attribute { prefix: StrSpan<'a>, local: StrSpan<'a>, value: StrSpan<'a>, span: StrSpan<'a>, }, /// Element end token. /// /// ```text /// text /// - ElementEnd::Open /// - - span /// ``` /// /// ```text /// text /// -- ---- - ElementEnd::Close(prefix, local) /// ---------- - span /// ``` /// /// ```text /// /// - ElementEnd::Empty /// -- - span /// ``` ElementEnd { end: ElementEnd<'a>, span: StrSpan<'a>, }, /// Text token. /// /// Contains text between elements including whitespaces. /// Basically everything between `>` and `<`. /// /// ```text ///
<p> text </p>
/// ------ - text /// ``` /// /// The token span is equal to the `text`. Text { text: StrSpan<'a>, }, /// CDATA token. /// /// ```text ///
<p><![CDATA[text]]></p>
/// ---- - text /// ---------------- - span /// ``` Cdata { text: StrSpan<'a>, span: StrSpan<'a>, }, } /// `ElementEnd` token. #[derive(Clone, Copy, PartialEq, Debug)] pub enum ElementEnd<'a> { /// Indicates `>` Open, /// Indicates `` Close(StrSpan<'a>, StrSpan<'a>), /// Indicates `/>` Empty, } /// Representation of the [ExternalID](https://www.w3.org/TR/xml/#NT-ExternalID) value. #[allow(missing_docs)] #[derive(Clone, Copy, PartialEq, Debug)] pub enum ExternalId<'a> { System(StrSpan<'a>), Public(StrSpan<'a>, StrSpan<'a>), } /// Representation of the [EntityDef](https://www.w3.org/TR/xml/#NT-EntityDef) value. #[allow(missing_docs)] #[derive(Clone, Copy, PartialEq, Debug)] pub enum EntityDefinition<'a> { EntityValue(StrSpan<'a>), ExternalId(ExternalId<'a>), } type Result = core::result::Result; type StreamResult = core::result::Result; /// List of token types. /// /// For internal use and errors. #[derive(Clone, Copy, PartialEq, Debug)] #[allow(missing_docs)] pub enum TokenType { XMLDecl, Comment, PI, DoctypeDecl, ElementDecl, AttlistDecl, EntityDecl, NotationDecl, DoctypeEnd, ElementStart, ElementClose, Attribute, CDSect, Whitespace, CharData, Unknown, } impl fmt::Display for TokenType { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let s = match *self { TokenType::XMLDecl => "Declaration", TokenType::Comment => "Comment", TokenType::PI => "Processing Instruction", TokenType::DoctypeDecl => "Doctype Declaration", TokenType::ElementDecl => "Doctype Element Declaration", TokenType::AttlistDecl => "Doctype Attributes Declaration", TokenType::EntityDecl => "Doctype Entity Declaration", TokenType::NotationDecl => "Doctype Notation Declaration", TokenType::DoctypeEnd => "Doctype End", TokenType::ElementStart => "Element Start", TokenType::ElementClose => "Element Close", TokenType::Attribute => "Attribute", TokenType::CDSect => "CDATA", TokenType::Whitespace => "Whitespace", TokenType::CharData => "Character data", TokenType::Unknown => "Unknown", }; write!(f, "{}", s) } } #[derive(Clone, Copy, PartialEq)] enum State { Start, Dtd, AfterDtd, Elements, Attributes, AfterElements, End, } /// Tokenizer for the XML structure. pub struct Tokenizer<'a> { stream: Stream<'a>, state: State, depth: usize, fragment_parsing: bool, } impl<'a> From<&'a str> for Tokenizer<'a> { #[inline] fn from(text: &'a str) -> Self { Self::from(StrSpan::from(text)) } } impl<'a> From> for Tokenizer<'a> { #[inline] fn from(span: StrSpan<'a>) -> Self { Tokenizer { stream: Stream::from(span), state: State::Start, depth: 0, fragment_parsing: false, } } } /// Shorthand for: /// /// ```no_run /// let start = stream.pos() - 2; // or any other number /// some_func().map_err(|e| /// Error::InvalidToken(Token::SomeToken, stream.gen_error_pos_from(start), Some(e)) /// ) /// ``` macro_rules! map_err_at { ($fun:expr, $token:expr, $stream:expr, $d:expr) => {{ let mut start = $stream.pos() as isize + $d; debug_assert!(start >= 0); if start < 0 { start = 0; } $fun.map_err(|e| Error::InvalidToken($token, $stream.gen_text_pos_from(start as usize), Some(e)) ) }} } impl<'a> Tokenizer<'a> { /// Enables document fragment parsing. /// /// By default, `xmlparser` will check for DTD, root element, etc. /// But if we have to parse an XML fragment, it will lead to an error. /// This method switches the parser to the root element content parsing mode. /// So it will treat any data as a content of the root element. 
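///
/// # Examples
///
/// A minimal sketch; the rootless two-element input is arbitrary and would be
/// rejected without fragment mode:
///
/// ```
/// let mut tokenizer = xmlparser::Tokenizer::from("<a/><b/>");
/// tokenizer.enable_fragment_mode();
/// assert!(tokenizer.all(|token| token.is_ok()));
/// ```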
pub fn enable_fragment_mode(&mut self) { self.state = State::Elements; self.fragment_parsing = true; } fn parse_next_impl(s: &mut Stream<'a>, state: State) -> Option>> { if s.at_end() { return None; } let start = s.pos(); if start == 0 { // Skip UTF-8 BOM. if s.starts_with(&[0xEF, 0xBB, 0xBF]) { s.advance(3); } } macro_rules! parse_token_type { () => ({ match Self::parse_token_type(s, state) { Ok(v) => v, Err(_) => { let pos = s.gen_text_pos_from(start); return Some(Err(Error::UnknownToken(pos))); } } }) } macro_rules! gen_err { ($token_type:expr) => ({ let pos = s.gen_text_pos_from(start); if $token_type == TokenType::Unknown { return Some(Err(Error::UnknownToken(pos))); } else { return Some(Err(Error::UnexpectedToken($token_type, pos))); } }) } let t = match state { State::Start => { let token_type = parse_token_type!(); match token_type { TokenType::XMLDecl => { // XML declaration allowed only at the start of the document. if start == 0 { Self::parse_declaration(s) } else { gen_err!(token_type); } } TokenType::Comment => { Self::parse_comment(s) } TokenType::PI => { Self::parse_pi(s) } TokenType::DoctypeDecl => { Self::parse_doctype(s) } TokenType::ElementStart => { Self::parse_element_start(s) } TokenType::Whitespace => { s.skip_spaces(); return Self::parse_next_impl(s, state); } _ => { gen_err!(token_type); } } } State::Dtd => { let token_type = parse_token_type!(); match token_type { TokenType::ElementDecl | TokenType::NotationDecl | TokenType::AttlistDecl => { if Self::consume_decl(s).is_err() { gen_err!(token_type); } return Self::parse_next_impl(s, state); } TokenType::EntityDecl => { Self::parse_entity_decl(s) } TokenType::Comment => { Self::parse_comment(s) } TokenType::PI => { Self::parse_pi(s) } TokenType::DoctypeEnd => { Ok(Token::DtdEnd { span: s.slice_back(s.pos() - 2) }) } TokenType::Whitespace => { s.skip_spaces(); return Self::parse_next_impl(s, state); } _ => { gen_err!(token_type); } } } State::AfterDtd => { let token_type = parse_token_type!(); match token_type { TokenType::Comment => { Self::parse_comment(s) } TokenType::PI => { Self::parse_pi(s) } TokenType::ElementStart => { Self::parse_element_start(s) } TokenType::Whitespace => { s.skip_spaces(); return Self::parse_next_impl(s, state); } _ => { gen_err!(token_type); } } } State::Elements => { let token_type = parse_token_type!(); match token_type { TokenType::ElementStart => { Self::parse_element_start(s) } TokenType::ElementClose => { Self::parse_close_element(s) } TokenType::CDSect => { Self::parse_cdata(s) } TokenType::PI => { Self::parse_pi(s) } TokenType::Comment => { Self::parse_comment(s) } TokenType::CharData => { Self::parse_text(s) } _ => { gen_err!(token_type); } } } State::Attributes => { Self::parse_attribute(s).map_err(|e| Error::InvalidToken(TokenType::Attribute, s.gen_text_pos_from(start), Some(e))) } State::AfterElements => { let token_type = parse_token_type!(); match token_type { TokenType::Comment => { Self::parse_comment(s) } TokenType::PI => { Self::parse_pi(s) } TokenType::Whitespace => { s.skip_spaces(); return Self::parse_next_impl(s, state); } _ => { gen_err!(token_type); } } } State::End => { return None; } }; Some(t) } fn parse_token_type(s: &mut Stream, state: State) -> StreamResult { let c1 = s.curr_byte()?; let t = match c1 { b'<' => { s.advance(1); let c2 = s.curr_byte()?; match c2 { b'?' => { // TODO: technically, we should check for any whitespace if s.starts_with(b"?xml ") { s.advance(5); TokenType::XMLDecl } else { s.advance(1); TokenType::PI } } b'!' 
=> { s.advance(1); let c3 = s.curr_byte()?; match c3 { b'-' if s.starts_with(b"--") => { s.advance(2); TokenType::Comment } b'D' if s.starts_with(b"DOCTYPE") => { s.advance(7); TokenType::DoctypeDecl } b'E' if s.starts_with(b"ELEMENT") => { s.advance(7); TokenType::ElementDecl } b'A' if s.starts_with(b"ATTLIST") => { s.advance(7); TokenType::AttlistDecl } b'E' if s.starts_with(b"ENTITY") => { s.advance(6); TokenType::EntityDecl } b'N' if s.starts_with(b"NOTATION") => { s.advance(8); TokenType::NotationDecl } b'[' if s.starts_with(b"[CDATA[") => { s.advance(7); TokenType::CDSect } _ => { TokenType::Unknown } } } b'/' => { s.advance(1); TokenType::ElementClose } _ => { TokenType::ElementStart } } } b']' if state == State::Dtd && s.starts_with(b"]>") => { s.advance(2); TokenType::DoctypeEnd } _ => { match state { State::Start | State::AfterDtd | State::AfterElements | State::Dtd => { if s.starts_with_space() { TokenType::Whitespace } else { TokenType::Unknown } } State::Elements => { TokenType::CharData } _ => { TokenType::Unknown } } } }; Ok(t) } fn parse_declaration(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_declaration_impl(s), TokenType::XMLDecl, s, -6) } // XMLDecl ::= '' fn parse_declaration_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 6; let version = Self::parse_version_info(s)?; let encoding = Self::parse_encoding_decl(s)?; let standalone = Self::parse_standalone(s)?; s.skip_spaces(); s.skip_string(b"?>")?; let span = s.slice_back(start); Ok(Token::Declaration { version, encoding, standalone, span }) } // VersionInfo ::= S 'version' Eq ("'" VersionNum "'" | '"' VersionNum '"') // VersionNum ::= '1.' [0-9]+ fn parse_version_info(s: &mut Stream<'a>) -> StreamResult> { s.skip_spaces(); s.skip_string(b"version")?; s.consume_eq()?; let quote = s.consume_quote()?; let start = s.pos(); s.skip_string(b"1.")?; s.skip_bytes(|_, c| c.is_xml_digit()); let ver = s.slice_back(start); s.consume_byte(quote)?; Ok(ver) } // EncodingDecl ::= S 'encoding' Eq ('"' EncName '"' | "'" EncName "'" ) // EncName ::= [A-Za-z] ([A-Za-z0-9._] | '-')* fn parse_encoding_decl(s: &mut Stream<'a>) -> StreamResult>> { s.skip_spaces(); if s.skip_string(b"encoding").is_err() { return Ok(None); } s.consume_eq()?; let quote = s.consume_quote()?; // [A-Za-z] ([A-Za-z0-9._] | '-')* // TODO: check that first byte is [A-Za-z] let name = s.consume_bytes(|_, c| { c.is_xml_letter() || c.is_xml_digit() || c == b'.' 
|| c == b'-' || c == b'_' }); s.consume_byte(quote)?; Ok(Some(name)) } // SDDecl ::= S 'standalone' Eq (("'" ('yes' | 'no') "'") | ('"' ('yes' | 'no') '"')) fn parse_standalone(s: &mut Stream<'a>) -> StreamResult> { s.skip_spaces(); if s.skip_string(b"standalone").is_err() { return Ok(None); } s.consume_eq()?; let quote = s.consume_quote()?; let start = s.pos(); let value = s.consume_name()?.as_str(); let flag = match value { "yes" => true, "no" => false, _ => { let pos = s.gen_text_pos_from(start); return Err(StreamError::InvalidString("yes', 'no", pos)); } }; s.consume_byte(quote)?; Ok(Some(flag)) } fn parse_comment(s: &mut Stream<'a>) -> Result> { let start = s.pos() - 4; Self::parse_comment_impl(s) .map_err(|_| Error::InvalidToken(TokenType::Comment, s.gen_text_pos_from(start), None)) } // '' fn parse_comment_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 4; let text = s.consume_chars(|s, c| { if c == '-' && s.starts_with(b"-->") { return false; } c.is_xml_char() }); s.skip_string(b"-->")?; if text.as_str().contains("--") { return Err(StreamError::UnexpectedEndOfStream); // Error type doesn't matter. } let span = s.slice_back(start); Ok(Token::Comment { text, span }) } fn parse_pi(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_pi_impl(s), TokenType::PI, s, -2) } // PI ::= '' Char*)))? '?>' // PITarget ::= Name - (('X' | 'x') ('M' | 'm') ('L' | 'l')) fn parse_pi_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 2; let target = s.consume_name()?; s.skip_spaces(); let content = s.consume_chars(|s, c| { if c == '?' && s.starts_with(b"?>") { return false; } if !c.is_xml_char() { return false; } true }); let content = if !content.is_empty() { Some(content) } else { None }; s.skip_string(b"?>")?; let span = s.slice_back(start); Ok(Token::ProcessingInstruction { target, content, span }) } fn parse_doctype(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_doctype_impl(s), TokenType::DoctypeDecl, s, -9) } // doctypedecl ::= '' fn parse_doctype_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 9; s.consume_spaces()?; let name = s.consume_name()?; s.skip_spaces(); let external_id = Self::parse_external_id(s)?; s.skip_spaces(); let c = s.curr_byte()?; if c != b'[' && c != b'>' { static EXPECTED: &[u8] = &[b'[', b'>']; return Err(StreamError::InvalidCharMultiple(c, EXPECTED, s.gen_text_pos())); } s.advance(1); let span = s.slice_back(start); if c == b'[' { Ok(Token::DtdStart { name, external_id, span }) } else { Ok(Token::EmptyDtd { name, external_id, span }) } } // ExternalID ::= 'SYSTEM' S SystemLiteral | 'PUBLIC' S PubidLiteral S SystemLiteral fn parse_external_id(s: &mut Stream<'a>) -> StreamResult>> { let v = if s.starts_with(b"SYSTEM") || s.starts_with(b"PUBLIC") { let start = s.pos(); s.advance(6); let id = s.slice_back(start); s.consume_spaces()?; let quote = s.consume_quote()?; let literal1 = s.consume_bytes(|_, c| c != quote); s.consume_byte(quote)?; let v = if id.as_str() == "SYSTEM" { ExternalId::System(literal1) } else { s.consume_spaces()?; let quote = s.consume_quote()?; let literal2 = s.consume_bytes(|_, c| c != quote); s.consume_byte(quote)?; ExternalId::Public(literal1, literal2) }; Some(v) } else { None }; Ok(v) } fn parse_entity_decl(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_entity_decl_impl(s), TokenType::EntityDecl, s, -8) } // EntityDecl ::= GEDecl | PEDecl // GEDecl ::= '' // PEDecl ::= '' fn parse_entity_decl_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 8; 
s.consume_spaces()?; let is_ge = if s.try_consume_byte(b'%') { s.consume_spaces()?; false } else { true }; let name = s.consume_name()?; s.consume_spaces()?; let definition = Self::parse_entity_def(s, is_ge)?; s.skip_spaces(); s.consume_byte(b'>')?; let span = s.slice_back(start); Ok(Token::EntityDeclaration { name, definition, span }) } // EntityDef ::= EntityValue | (ExternalID NDataDecl?) // PEDef ::= EntityValue | ExternalID // EntityValue ::= '"' ([^%&"] | PEReference | Reference)* '"' | "'" ([^%&'] // | PEReference | Reference)* "'" // ExternalID ::= 'SYSTEM' S SystemLiteral | 'PUBLIC' S PubidLiteral S SystemLiteral // NDataDecl ::= S 'NDATA' S Name fn parse_entity_def(s: &mut Stream<'a>, is_ge: bool) -> StreamResult> { let c = s.curr_byte()?; match c { b'"' | b'\'' => { let quote = s.consume_quote()?; let value = s.consume_bytes(|_, c| c != quote); s.consume_byte(quote)?; Ok(EntityDefinition::EntityValue(value)) } b'S' | b'P' => { if let Some(id) = Self::parse_external_id(s)? { if is_ge { s.skip_spaces(); if s.starts_with(b"NDATA") { s.advance(5); s.consume_spaces()?; s.skip_name()?; // TODO: NDataDecl is not supported } } Ok(EntityDefinition::ExternalId(id)) } else { Err(StreamError::InvalidExternalID) } } _ => { static EXPECTED: &[u8] = &[b'"', b'\'', b'S', b'P']; let pos = s.gen_text_pos(); Err(StreamError::InvalidCharMultiple(c, EXPECTED, pos)) } } } fn consume_decl(s: &mut Stream) -> StreamResult<()> { s.consume_spaces()?; s.skip_bytes(|_, c| c != b'>'); s.consume_byte(b'>')?; Ok(()) } fn parse_cdata(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_cdata_impl(s), TokenType::CDSect, s, -9) } // CDSect ::= CDStart CData CDEnd // CDStart ::= '' Char*)) // CDEnd ::= ']]>' fn parse_cdata_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 9; let text = s.consume_bytes(|s, c| { !(c == b']' && s.starts_with(b"]]>")) }); s.skip_string(b"]]>")?; let span = s.slice_back(start); Ok(Token::Cdata { text, span }) } fn parse_element_start(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_element_start_impl(s), TokenType::ElementStart, s, -1) } // '<' Name (S Attribute)* S? '>' fn parse_element_start_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 1; let (prefix, local) = s.consume_qname()?; let span = s.slice_back(start); Ok(Token::ElementStart { prefix, local, span }) } fn parse_close_element(s: &mut Stream<'a>) -> Result> { map_err_at!(Self::parse_close_element_impl(s), TokenType::ElementClose, s, -2) } // '' fn parse_close_element_impl(s: &mut Stream<'a>) -> StreamResult> { let start = s.pos() - 2; let (prefix, tag_name) = s.consume_qname()?; s.skip_spaces(); s.consume_byte(b'>')?; let span = s.slice_back(start); Ok(Token::ElementEnd { end: ElementEnd::Close(prefix, tag_name), span }) } // Name Eq AttValue fn parse_attribute(s: &mut Stream<'a>) -> StreamResult> { s.skip_spaces(); if let Ok(c) = s.curr_byte() { let start = s.pos(); match c { b'/' => { s.advance(1); s.consume_byte(b'>')?; let span = s.slice_back(start); return Ok(Token::ElementEnd { end: ElementEnd::Empty, span }); } b'>' => { s.advance(1); let span = s.slice_back(start); return Ok(Token::ElementEnd { end: ElementEnd::Open, span }); } _ => {} } } let start = s.pos(); let (prefix, local) = s.consume_qname()?; s.consume_eq()?; let quote = s.consume_quote()?; // The attribute value must not contain the < character. 
let value = s.consume_bytes(|_, c| c != quote && c != b'<'); s.consume_byte(quote)?; let span = s.slice_back(start); s.skip_spaces(); Ok(Token::Attribute { prefix, local, value, span }) } fn parse_text(s: &mut Stream<'a>) -> Result> { let text = s.consume_bytes(|_, c| c != b'<'); Ok(Token::Text { text }) } } impl<'a> Iterator for Tokenizer<'a> { type Item = Result>; #[inline] fn next(&mut self) -> Option { if self.stream.at_end() || self.state == State::End { return None; } let t = Self::parse_next_impl(&mut self.stream, self.state); if let Some(ref t) = t { match *t { Ok(t) => match t { Token::ElementStart { .. } => { self.state = State::Attributes; } Token::ElementEnd { ref end, .. } => { match *end { ElementEnd::Open => self.depth += 1, ElementEnd::Close(..) if self.depth > 0 => self.depth -= 1, _ => {} } if self.depth == 0 && !self.fragment_parsing { self.state = State::AfterElements; } else { self.state = State::Elements; } } Token::DtdStart { .. } => { self.state = State::Dtd; } Token::EmptyDtd { .. } | Token::DtdEnd { .. } => { self.state = State::AfterDtd; } _ => {} }, Err(_) => { self.stream.jump_to_end(); self.state = State::End; } } } t } } xmlparser-0.11.0/src/stream.rs010064400017500001750000000402311356436353600144750ustar0000000000000000use core::char; use core::cmp; use core::str; use { StreamError, StrSpan, TextPos, XmlByteExt, XmlCharExt, }; type Result = ::core::result::Result; /// Representation of the [Reference](https://www.w3.org/TR/xml/#NT-Reference) value. #[derive(Clone, Copy, PartialEq, Debug)] pub enum Reference<'a> { /// An entity reference. /// /// Entity(&'a str), /// A character reference. /// /// Char(char), } /// A streaming XML parsing interface. #[derive(Clone, Copy, PartialEq)] pub struct Stream<'a> { pos: usize, end: usize, span: StrSpan<'a>, } impl<'a> From<&'a str> for Stream<'a> { #[inline] fn from(text: &'a str) -> Self { Stream { pos: 0, end: text.len(), span: text.into(), } } } impl<'a> From> for Stream<'a> { #[inline] fn from(span: StrSpan<'a>) -> Self { Stream { pos: 0, end: span.as_str().len(), span, } } } impl<'a> Stream<'a> { /// Returns an underling string span. #[inline] pub fn span(&self) -> StrSpan<'a> { self.span } /// Returns current position. #[inline] pub fn pos(&self) -> usize { self.pos } /// Sets current position equal to the end. /// /// Used to indicate end of parsing on error. #[inline] pub fn jump_to_end(&mut self) { self.pos = self.end; } /// Checks if the stream is reached the end. /// /// Any [`pos()`] value larger than original text length indicates stream end. /// /// Accessing stream after reaching end via safe methods will produce /// an `UnexpectedEndOfStream` error. /// /// Accessing stream after reaching end via *_unchecked methods will produce /// a Rust's bound checking error. /// /// [`pos()`]: #method.pos #[inline] pub fn at_end(&self) -> bool { self.pos >= self.end } /// Returns a byte from a current stream position. /// /// # Errors /// /// - `UnexpectedEndOfStream` #[inline] pub fn curr_byte(&self) -> Result { if self.at_end() { return Err(StreamError::UnexpectedEndOfStream); } Ok(self.curr_byte_unchecked()) } /// Returns a byte from a current stream position. /// /// # Panics /// /// - if the current position is after the end of the data #[inline] pub fn curr_byte_unchecked(&self) -> u8 { self.span.as_bytes()[self.pos] } /// Returns a next byte from a current stream position. 
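///
/// # Examples
///
/// A small sketch with an arbitrary two-byte input:
///
/// ```
/// let s = xmlparser::Stream::from("ab");
/// assert_eq!(s.next_byte().unwrap(), b'b');
/// ```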
/// /// # Errors /// /// - `UnexpectedEndOfStream` #[inline] pub fn next_byte(&self) -> Result { if self.pos + 1 >= self.end { return Err(StreamError::UnexpectedEndOfStream); } Ok(self.span.as_bytes()[self.pos + 1]) } /// Advances by `n` bytes. /// /// # Examples /// /// ```rust,should_panic /// use xmlparser::Stream; /// /// let mut s = Stream::from("text"); /// s.advance(2); // ok /// s.advance(20); // will cause a panic via debug_assert!(). /// ``` #[inline] pub fn advance(&mut self, n: usize) { debug_assert!(self.pos + n <= self.end); self.pos += n; } /// Checks that the stream starts with a selected text. /// /// We are using `&[u8]` instead of `&str` for performance reasons. /// /// # Examples /// /// ``` /// use xmlparser::Stream; /// /// let mut s = Stream::from("Some text."); /// s.advance(5); /// assert_eq!(s.starts_with(b"text"), true); /// assert_eq!(s.starts_with(b"long"), false); /// ``` #[inline] pub fn starts_with(&self, text: &[u8]) -> bool { self.span.as_bytes()[self.pos..self.end].starts_with(text) } /// Consumes the current byte if it's equal to the provided byte. /// /// # Errors /// /// - `InvalidChar` /// - `UnexpectedEndOfStream` /// /// # Examples /// /// ``` /// use xmlparser::Stream; /// /// let mut s = Stream::from("Some text."); /// assert!(s.consume_byte(b'S').is_ok()); /// assert!(s.consume_byte(b'o').is_ok()); /// assert!(s.consume_byte(b'm').is_ok()); /// assert!(s.consume_byte(b'q').is_err()); /// ``` pub fn consume_byte(&mut self, c: u8) -> Result<()> { let curr = self.curr_byte()?; if curr != c { return Err(StreamError::InvalidChar(curr, c, self.gen_text_pos())); } self.advance(1); Ok(()) } /// Tries to consume the current byte if it's equal to the provided byte. /// /// Unlike `consume_byte()` will not return any errors. pub fn try_consume_byte(&mut self, c: u8) -> bool { match self.curr_byte() { Ok(b) if b == c => { self.advance(1); true } _ => false, } } /// Skips selected string. /// /// # Errors /// /// - `InvalidString` pub fn skip_string(&mut self, text: &'static [u8]) -> Result<()> { if !self.starts_with(text) { let pos = self.gen_text_pos(); // Assume that all input `text` are valid UTF-8 strings, so unwrap is safe. let expected = str::from_utf8(text).unwrap(); return Err(StreamError::InvalidString(expected, pos)); } self.advance(text.len()); Ok(()) } /// Consumes bytes by the predicate and returns them. /// /// The result can be empty. #[inline] pub fn consume_bytes(&mut self, f: F) -> StrSpan<'a> where F: Fn(&Stream, u8) -> bool { let start = self.pos; self.skip_bytes(f); self.slice_back(start) } /// Skips bytes by the predicate. pub fn skip_bytes(&mut self, f: F) where F: Fn(&Stream, u8) -> bool { while !self.at_end() && f(self, self.curr_byte_unchecked()) { self.advance(1); } } /// Consumes chars by the predicate and returns them. /// /// The result can be empty. pub fn consume_chars(&mut self, f: F) -> StrSpan<'a> where F: Fn(&Stream, char) -> bool { let start = self.pos; self.skip_chars(f); self.slice_back(start) } /// Skips chars by the predicate. pub fn skip_chars(&mut self, f: F) where F: Fn(&Stream, char) -> bool { for c in self.chars() { if f(self, c) { self.advance(c.len_utf8()); } else { break; } } } #[inline] pub(crate) fn chars(&self) -> str::Chars<'a> { self.span.as_str()[self.pos..self.end].chars() } /// Slices data from `pos` to the current position. #[inline] pub fn slice_back(&self, pos: usize) -> StrSpan<'a> { self.span.slice_region(pos, self.pos) } /// Slices data from the current position to the end. 
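///
/// # Examples
///
/// A small sketch; advancing by `5` simply skips the word `Some `:
///
/// ```
/// let mut s = xmlparser::Stream::from("Some text.");
/// s.advance(5);
/// assert_eq!(s.slice_tail().as_str(), "text.");
/// ```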
#[inline] pub fn slice_tail(&self) -> StrSpan<'a> { self.span.slice_region(self.pos, self.end) } /// Calculates a current absolute position. /// /// This operation is very expensive. Use only for errors. #[inline(never)] pub fn gen_text_pos(&self) -> TextPos { let text = self.span.full_str(); let end = self.pos + self.span.start(); let row = Self::calc_curr_row(text, end); let col = Self::calc_curr_col(text, end); TextPos::new(row, col) } /// Calculates an absolute position at `pos`. /// /// This operation is very expensive. Use only for errors. /// /// # Examples /// /// ``` /// let s = xmlparser::Stream::from("text"); /// /// assert_eq!(s.gen_text_pos_from(2), xmlparser::TextPos::new(1, 3)); /// assert_eq!(s.gen_text_pos_from(9999), xmlparser::TextPos::new(1, 5)); /// ``` #[inline(never)] pub fn gen_text_pos_from(&self, pos: usize) -> TextPos { let mut s = self.clone(); s.pos = cmp::min(pos, s.span.full_str().len()); s.gen_text_pos() } fn calc_curr_row(text: &str, end: usize) -> u32 { let mut row = 1; for c in &text.as_bytes()[..end] { if *c == b'\n' { row += 1; } } row } fn calc_curr_col(text: &str, end: usize) -> u32 { let mut col = 1; for c in text[..end].chars().rev() { if c == '\n' { break; } else { col += 1; } } col } /// Skips whitespaces. /// /// Accepted values: `' ' \n \r \t`. #[inline] pub fn skip_spaces(&mut self) { while !self.at_end() && self.curr_byte_unchecked().is_xml_space() { self.advance(1); } } /// Checks if the stream is starts with a space. #[inline] pub fn starts_with_space(&self) -> bool { !self.at_end() && self.curr_byte_unchecked().is_xml_space() } /// Consumes whitespaces. /// /// Like [`skip_spaces()`], but checks that first char is actually a space. /// /// [`skip_spaces()`]: #method.skip_spaces /// /// # Errors /// /// - `InvalidSpace` pub fn consume_spaces(&mut self) -> Result<()> { if self.at_end() { return Err(StreamError::UnexpectedEndOfStream); } if !self.starts_with_space() { let c = self.curr_byte_unchecked() as char; let pos = self.gen_text_pos(); return Err(StreamError::InvalidSpace(c, pos)); } self.skip_spaces(); Ok(()) } /// Consumes an XML character reference if there is one. /// /// On error will reset the position to the original. pub fn try_consume_reference(&mut self) -> Option> { let start = self.pos(); // Consume reference on a substream. let mut s = self.clone(); match s.consume_reference() { Ok(r) => { // If the current data is a reference than advance the current stream // by number of bytes read by substream. self.advance(s.pos() - start); Some(r) } Err(_) => { None } } } /// Consumes an XML reference. 
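///
/// For example, the character reference `&#x3C;` resolves to `<`:
///
/// ```
/// use xmlparser::{Stream, Reference};
///
/// let mut s = Stream::from("&#x3C;");
/// assert_eq!(s.consume_reference().unwrap(), Reference::Char('<'));
/// ```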
    pub fn consume_reference(&mut self) -> Result<Reference<'a>> {
        self._consume_reference().map_err(|_| StreamError::InvalidReference)
    }

    fn _consume_reference(&mut self) -> Result<Reference<'a>> {
        if !self.try_consume_byte(b'&') {
            return Err(StreamError::InvalidReference);
        }

        let reference = if self.try_consume_byte(b'#') {
            let (value, radix) = if self.try_consume_byte(b'x') {
                let value = self.consume_bytes(|_, c| c.is_xml_hex_digit()).as_str();
                (value, 16)
            } else {
                let value = self.consume_bytes(|_, c| c.is_xml_digit()).as_str();
                (value, 10)
            };

            let n = u32::from_str_radix(value, radix).map_err(|_| StreamError::InvalidReference)?;

            let c = char::from_u32(n).unwrap_or('\u{FFFD}');
            if !c.is_xml_char() {
                return Err(StreamError::InvalidReference);
            }

            Reference::Char(c)
        } else {
            let name = self.consume_name()?;
            match name.as_str() {
                "quot" => Reference::Char('"'),
                "amp" => Reference::Char('&'),
                "apos" => Reference::Char('\''),
                "lt" => Reference::Char('<'),
                "gt" => Reference::Char('>'),
                _ => Reference::Entity(name.as_str()),
            }
        };

        self.consume_byte(b';')?;

        Ok(reference)
    }

    /// Consumes an XML name and returns it.
    ///
    /// Consumes according to: <https://www.w3.org/TR/xml/#NT-Name>
    ///
    /// # Errors
    ///
    /// - `InvalidName` - if name is empty or starts with an invalid char
    /// - `UnexpectedEndOfStream`
    pub fn consume_name(&mut self) -> Result<StrSpan<'a>> {
        let start = self.pos();
        self.skip_name()?;

        let name = self.slice_back(start);
        if name.is_empty() {
            return Err(StreamError::InvalidName);
        }

        Ok(name)
    }

    /// Skips an XML name.
    ///
    /// The same as `consume_name()`, but does not return a consumed name.
    ///
    /// # Errors
    ///
    /// - `InvalidName` - if name is empty or starts with an invalid char
    pub fn skip_name(&mut self) -> Result<()> {
        let mut iter = self.chars();
        if let Some(c) = iter.next() {
            if c.is_xml_name_start() {
                self.advance(c.len_utf8());
            } else {
                return Err(StreamError::InvalidName);
            }
        }

        for c in iter {
            if c.is_xml_name() {
                self.advance(c.len_utf8());
            } else {
                break;
            }
        }

        Ok(())
    }

    /// Consumes a qualified XML name and returns it.
    ///
    /// Consumes according to: <https://www.w3.org/TR/xml-names/#NT-QName>
    ///
    /// # Errors
    ///
    /// - `InvalidName` - if name is empty or starts with an invalid char
    pub fn consume_qname(&mut self) -> Result<(StrSpan<'a>, StrSpan<'a>)> {
        let start = self.pos();

        let mut splitter = None;

        while !self.at_end() {
            // Check for ASCII first for performance reasons.
            let b = self.curr_byte_unchecked();
            if b < 128 {
                if b == b':' {
                    if splitter.is_none() {
                        splitter = Some(self.pos());
                        self.advance(1);
                    } else {
                        // Multiple `:` is an error.
                        return Err(StreamError::InvalidName);
                    }
                } else if b.is_xml_name() {
                    self.advance(1);
                } else {
                    break;
                }
            } else {
                // Fallback to Unicode code point.
                match self.chars().nth(0) {
                    Some(c) if c.is_xml_name() => {
                        self.advance(c.len_utf8());
                    }
                    _ => break,
                }
            }
        }

        let (prefix, local) = if let Some(splitter) = splitter {
            let prefix = self.span().slice_region(start, splitter);
            let local = self.slice_back(splitter + 1);

            (prefix, local)
        } else {
            let local = self.slice_back(start);

            ("".into(), local)
        };

        // Prefix must start with a `NameStartChar`.
        if let Some(c) = prefix.as_str().chars().nth(0) {
            if !c.is_xml_name_start() {
                return Err(StreamError::InvalidName);
            }
        }

        // Local name must start with a `NameStartChar`.
        if let Some(c) = local.as_str().chars().nth(0) {
            if !c.is_xml_name_start() {
                return Err(StreamError::InvalidName);
            }
        } else {
            // If empty - error.
            return Err(StreamError::InvalidName);
        }

        Ok((prefix, local))
    }

    /// Consumes `=`.
    ///
    /// Consumes according to: <https://www.w3.org/TR/xml/#NT-Eq>
    ///
    /// # Errors
    ///
    /// - `InvalidChar`
    /// - `UnexpectedEndOfStream`
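    ///
    /// # Examples
    ///
    /// A small sketch (added illustration, based only on the public API above):
    ///
    /// ```
    /// use xmlparser::Stream;
    ///
    /// let mut s = Stream::from(" = 'value'");
    /// s.consume_eq().unwrap();
    /// assert!(s.starts_with(b"'value'"));
    /// ```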
    pub fn consume_eq(&mut self) -> Result<()> {
        self.skip_spaces();
        self.consume_byte(b'=')?;
        self.skip_spaces();

        Ok(())
    }

    /// Consumes a quote.
    ///
    /// Consumes `'` or `"` and returns it.
    ///
    /// # Errors
    ///
    /// - `InvalidQuote`
    /// - `UnexpectedEndOfStream`
    pub fn consume_quote(&mut self) -> Result<u8> {
        let c = self.curr_byte()?;
        if c == b'\'' || c == b'"' {
            self.advance(1);
            Ok(c)
        } else {
            Err(StreamError::InvalidQuote(c as char, self.gen_text_pos()))
        }
    }
}
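
// Illustrative usage sketch (an added example, not part of the original sources):
// it exercises only the public `Stream` methods defined above, end to end.
#[cfg(test)]
mod usage_sketch {
    use super::*;

    #[test]
    fn consume_a_start_tag() {
        let mut s = Stream::from("<svg:rect width='10'/>");
        // `<` then a qualified name, then an attribute name, `=` and a quote.
        s.consume_byte(b'<').unwrap();
        let (prefix, local) = s.consume_qname().unwrap();
        assert_eq!(prefix.as_str(), "svg");
        assert_eq!(local.as_str(), "rect");
        s.skip_spaces();
        assert_eq!(s.consume_name().unwrap().as_str(), "width");
        s.consume_eq().unwrap();
        assert_eq!(s.consume_quote().unwrap(), b'\'');
    }
}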
xmlparser-0.11.0/src/strspan.rs010064400017500001750000000043211356436353600146740ustar0000000000000000use core::fmt;
use core::ops::Range;

/// An immutable string slice.
///
/// Unlike `&str`, contains a reference to the original string
/// and a span region.
#[must_use]
#[derive(Clone, Copy, PartialEq)]
pub struct StrSpan<'a> {
    text: &'a str,
    span: &'a str,
    start: usize,
}

impl<'a> From<&'a str> for StrSpan<'a> {
    #[inline]
    fn from(text: &'a str) -> Self {
        StrSpan {
            text,
            start: 0,
            span: text,
        }
    }
}

impl<'a> StrSpan<'a> {
    /// Constructs a new `StrSpan` from a substring.
    #[inline]
    pub(crate) fn from_substr(text: &str, start: usize, end: usize) -> StrSpan {
        debug_assert!(start <= end);
        StrSpan { text, span: &text[start..end], start }
    }

    /// Returns the start position of the span.
    #[inline]
    pub fn start(&self) -> usize {
        self.start
    }

    /// Returns the end position of the span.
    #[inline]
    pub fn end(&self) -> usize {
        self.start + self.span.len()
    }

    /// Returns the range of the span.
    #[inline]
    pub fn range(&self) -> Range<usize> {
        self.start..self.end()
    }

    /// Checks if the span is empty.
    #[inline]
    pub fn is_empty(&self) -> bool {
        self.span.is_empty()
    }

    /// Returns the span slice.
    #[inline]
    pub fn as_str(&self) -> &'a str {
        &self.span
    }

    /// Returns the span slice as bytes.
    #[inline]
    pub(crate) fn as_bytes(&self) -> &'a [u8] {
        self.span.as_bytes()
    }

    /// Returns the underlying string.
    #[inline]
    pub fn full_str(&self) -> &'a str {
        self.text
    }

    /// Returns an underlying string region as `StrSpan`.
    #[inline]
    pub(crate) fn slice_region(&self, start: usize, end: usize) -> StrSpan<'a> {
        let start = self.start + start;
        let end = self.start + end;

        StrSpan::from_substr(self.text, start, end)
    }
}

impl<'a> fmt::Debug for StrSpan<'a> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "StrSpan({:?} {}..{})", self.as_str(), self.start(), self.end())
    }
}

impl<'a> fmt::Display for StrSpan<'a> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{}", self.as_str())
    }
}
xmlparser-0.11.0/src/xmlchar.rs010064400017500001750000000071401353713207500146320ustar0000000000000000/// Extension methods for XML-subset only operations.
pub trait XmlCharExt {
    /// Checks if the value is within the
    /// [NameStartChar](https://www.w3.org/TR/xml/#NT-NameStartChar) range.
    fn is_xml_name_start(&self) -> bool;

    /// Checks if the value is within the
    /// [NameChar](https://www.w3.org/TR/xml/#NT-NameChar) range.
    fn is_xml_name(&self) -> bool;

    /// Checks if the value is within the
    /// [Char](https://www.w3.org/TR/xml/#NT-Char) range.
    fn is_xml_char(&self) -> bool;
}

impl XmlCharExt for char {
    #[inline]
    fn is_xml_name_start(&self) -> bool {
        // Check for ASCII first.
        if *self as u32 <= 128 {
            return match *self as u8 {
                  b'A'...b'Z'
                | b'a'...b'z'
                | b':'
                | b'_' => true,
                _ => false,
            };
        }

        match *self as u32 {
              0x0000C0...0x0000D6
            | 0x0000D8...0x0000F6
            | 0x0000F8...0x0002FF
            | 0x000370...0x00037D
            | 0x00037F...0x001FFF
            | 0x00200C...0x00200D
            | 0x002070...0x00218F
            | 0x002C00...0x002FEF
            | 0x003001...0x00D7FF
            | 0x00F900...0x00FDCF
            | 0x00FDF0...0x00FFFD
            | 0x010000...0x0EFFFF => true,
            _ => false,
        }
    }

    #[inline]
    fn is_xml_name(&self) -> bool {
        // Check for ASCII first.
        if *self as u32 <= 128 {
            return (*self as u8).is_xml_name();
        }

        match *self as u32 {
              0x0000B7
            | 0x0000C0...0x0000D6
            | 0x0000D8...0x0000F6
            | 0x0000F8...0x0002FF
            | 0x000300...0x00036F
            | 0x000370...0x00037D
            | 0x00037F...0x001FFF
            | 0x00200C...0x00200D
            | 0x00203F...0x002040
            | 0x002070...0x00218F
            | 0x002C00...0x002FEF
            | 0x003001...0x00D7FF
            | 0x00F900...0x00FDCF
            | 0x00FDF0...0x00FFFD
            | 0x010000...0x0EFFFF => true,
            _ => false,
        }
    }

    #[inline]
    fn is_xml_char(&self) -> bool {
        match *self as u32 {
              0x000009
            | 0x00000A
            | 0x00000D
            | 0x000020...0x000D7FF
            | 0x00E000...0x000FFFD
            | 0x010000...0x010FFFF => true,
            _ => false,
        }
    }
}

/// Extension methods for XML-subset only operations.
pub trait XmlByteExt {
    /// Checks if a byte is a digit.
    ///
    /// `[0-9]`
    fn is_xml_digit(&self) -> bool;

    /// Checks if a byte is a hex digit.
    ///
    /// `[0-9A-Fa-f]`
    fn is_xml_hex_digit(&self) -> bool;

    /// Checks if a byte is a space.
    ///
    /// `[ \r\n\t]`
    fn is_xml_space(&self) -> bool;

    /// Checks if a byte is an ASCII letter.
    ///
    /// `[A-Za-z]`
    fn is_xml_letter(&self) -> bool;

    /// Checks if a byte is within the ASCII
    /// [NameChar](https://www.w3.org/TR/xml/#NT-NameChar) range.
    fn is_xml_name(&self) -> bool;
}

impl XmlByteExt for u8 {
    #[inline]
    fn is_xml_digit(&self) -> bool {
        matches!(*self, b'0'...b'9')
    }

    #[inline]
    fn is_xml_hex_digit(&self) -> bool {
        matches!(*self, b'0'...b'9' | b'A'...b'F' | b'a'...b'f')
    }

    #[inline]
    fn is_xml_space(&self) -> bool {
        matches!(*self, b' ' | b'\t' | b'\n' | b'\r')
    }

    #[inline]
    fn is_xml_letter(&self) -> bool {
        matches!(*self, b'A'...b'Z' | b'a'...b'z')
    }

    #[inline]
    fn is_xml_name(&self) -> bool {
        matches!(*self, b'A'...b'Z' | b'a'...b'z' | b'0'...b'9' | b':' | b'_' | b'-' | b'.')
    }
}
xmlparser-0.11.0/tests/api.rs010064400017500001750000000014571353677742400143370ustar0000000000000000extern crate xmlparser;

use xmlparser::*;

#[test]
fn text_pos_1() {
    let mut s = Stream::from("text");
    s.advance(2);
    assert_eq!(s.gen_text_pos(), TextPos::new(1, 3));
}

#[test]
fn text_pos_2() {
    let mut s = Stream::from("text\ntext");
    s.advance(6);
    assert_eq!(s.gen_text_pos(), TextPos::new(2, 2));
}

#[test]
fn text_pos_3() {
    let mut s = Stream::from("текст\nтекст");
    s.advance(15);
    assert_eq!(s.gen_text_pos(), TextPos::new(2, 3));
}

#[test]
fn token_size() {
    assert!(::std::mem::size_of::<Token>() <= 196);
}

#[test]
fn span_size() {
    assert!(::std::mem::size_of::<StrSpan>() <= 48);
}

#[test]
fn err_size_1() {
    assert!(::std::mem::size_of::<Error>() <= 64);
}

#[test]
fn err_size_2() {
    assert!(::std::mem::size_of::<StreamError>() <= 64);
}
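
// Illustrative sketch (an added example, not part of the original test suite):
// drives `Stream` directly through the public API exercised above.
#[test]
fn stream_usage_sketch() {
    let mut s = Stream::from("name='value'");
    assert_eq!(s.consume_name().unwrap().as_str(), "name");
    assert!(s.consume_eq().is_ok());
    assert_eq!(s.consume_quote().unwrap(), b'\'');
}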

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("content", 3..22), Token::ElementEnd(ElementEnd::Close("", "p"), 22..26) ); test!(cdata_02, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("&ing", 3..22), Token::ElementEnd(ElementEnd::Close("", "p"), 22..26) ); test!(cdata_03, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("&ing ]", 3..24), Token::ElementEnd(ElementEnd::Close("", "p"), 24..28) ); test!(cdata_04, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("&ing]] ", 3..25), Token::ElementEnd(ElementEnd::Close("", "p"), 25..29) ); test!(cdata_05, "

text]]>

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("text", 3..38), Token::ElementEnd(ElementEnd::Close("", "p"), 38..42) ); test!(cdata_06, "

]]>

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("", 3..66), Token::ElementEnd(ElementEnd::Close("", "p"), 66..70) ); test!(cdata_07, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("1", 3..16), Token::Cdata("2", 16..29), Token::ElementEnd(ElementEnd::Close("", "p"), 29..33) ); test!(cdata_08, "

\n \t

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Text(" \n ", 3..6), Token::Cdata("data", 6..22), Token::Text(" \t ", 22..25), Token::ElementEnd(ElementEnd::Close("", "p"), 25..29) ); test!(cdata_09, "

", Token::ElementStart("", "p", 0..2), Token::ElementEnd(ElementEnd::Open, 2..3), Token::Cdata("bracket ]after", 3..29), Token::ElementEnd(ElementEnd::Close("", "p"), 29..33) ); xmlparser-0.11.0/tests/comments.rs010064400017500001750000000047121342510354700153740ustar0000000000000000extern crate xmlparser as xml; #[macro_use] mod token; use token::*; test!(comment_01, "", Token::Comment("comment", 0..14)); test!(comment_02, "", Token::Comment("", 0..13)); test!(comment_03, "", Token::Comment("-", 0..8)); test!(comment_04, "", Token::Comment("", Token::Comment("", Token::Comment("<", Token::Comment("<", Token::Comment("-->", Token::Comment("<>", 0..9)); test!(comment_10, "", Token::Comment("<", 0..8)); test!(comment_11, "", Token::Comment("<-", 0..9)); test!(comment_12, "", Token::Comment("", Token::Comment("", 0..7)); macro_rules! test_err { ($name:ident, $text:expr) => ( #[test] fn $name() { let mut p = xml::Tokenizer::from($text); assert_eq!(p.next().unwrap().unwrap_err().to_string(), "invalid token 'Comment' at 1:1"); } ) } test_err!(comment_err_01, ""); test_err!(comment_err_02, ""); test_err!(comment_err_05, ""); test_err!(comment_err_07, ""); test_err!(comment_err_15, ""); test_err!(comment_err_20, ""); test_err!(comment_err_27, ""); test_err!(comment_err_28, ""); test_err!(comment_err_29, "", Token::Comment(" comment ", 0..16), Token::Error("unexpected token 'Declaration' at 1:17".to_string()) ); // Duplicate. test!(declaration_err_12, "", Token::Declaration("1.0", None, None, 0..21), Token::Error("unexpected token 'Declaration' at 1:22".to_string()) ); test!(declaration_err_13, "", Token::Error("invalid token 'Declaration' at 1:1 cause expected '\"' not '\'' at 1:19".to_string()) ); xmlparser-0.11.0/tests/text.rs010064400017500001750000000030731353712471200145330ustar0000000000000000extern crate xmlparser as xml; #[macro_use] mod token; use token::*; test!(text_01, "

test!(text_01, "<p>text</p>",
    Token::ElementStart("", "p", 0..2),
    Token::ElementEnd(ElementEnd::Open, 2..3),
    Token::Text("text", 3..7),
    Token::ElementEnd(ElementEnd::Close("", "p"), 7..11)
);

test!(text_02, "<p> text </p>",
    Token::ElementStart("", "p", 0..2),
    Token::ElementEnd(ElementEnd::Open, 2..3),
    Token::Text(" text ", 3..9),
    Token::ElementEnd(ElementEnd::Close("", "p"), 9..13)
);

// 欄 is EF A4 9D. And EF can be mistreated for UTF-8 BOM.
test!(text_03, "<p>欄</p>",
    Token::ElementStart("", "p", 0..2),
    Token::ElementEnd(ElementEnd::Open, 2..3),
    Token::Text("欄", 3..6),
    Token::ElementEnd(ElementEnd::Close("", "p"), 6..10)
);

test!(text_04, "<p> </p>",
    Token::ElementStart("", "p", 0..2),
    Token::ElementEnd(ElementEnd::Open, 2..3),
    Token::Text(" ", 3..4),
    Token::ElementEnd(ElementEnd::Close("", "p"), 4..8)
);

test!(text_05, "<p> \r\n\t </p>",
    Token::ElementStart("", "p", 0..2),
    Token::ElementEnd(ElementEnd::Open, 2..3),
    Token::Text(" \r\n\t ", 3..8),
    Token::ElementEnd(ElementEnd::Close("", "p"), 8..12)
);

test!(text_06, "<p>&#x20;</p>",
    Token::ElementStart("", "p", 0..2),
    Token::ElementEnd(ElementEnd::Open, 2..3),
    Token::Text("&#x20;", 3..9),
    Token::ElementEnd(ElementEnd::Close("", "p"), 9..13)
);

test!(text_07, "<p>]></p>",
    Token::ElementStart("", "p", 0..2),
    Token::ElementEnd(ElementEnd::Open, 2..3),
    Token::Text("]>", 3..5),
    Token::ElementEnd(ElementEnd::Close("", "p"), 5..9)
);
xmlparser-0.11.0/tests/token.rs010064400017500001750000000106341356436351600146770ustar0000000000000000extern crate xmlparser as xml;

type Range = ::std::ops::Range<usize>;

#[derive(PartialEq, Debug)]
pub enum Token<'a> {
    Declaration(&'a str, Option<&'a str>, Option<bool>, Range),
    PI(&'a str, Option<&'a str>, Range),
    Comment(&'a str, Range),
    DtdStart(&'a str, Option<ExternalId<'a>>, Range),
    EmptyDtd(&'a str, Option<ExternalId<'a>>, Range),
    EntityDecl(&'a str, EntityDefinition<'a>, Range),
    DtdEnd(Range),
    ElementStart(&'a str, &'a str, Range),
    Attribute(&'a str, &'a str, &'a str, Range),
    ElementEnd(ElementEnd<'a>, Range),
    Text(&'a str, Range),
    Cdata(&'a str, Range),
    Error(String),
}

#[derive(PartialEq, Debug)]
pub enum ElementEnd<'a> {
    Open,
    Close(&'a str, &'a str),
    Empty,
}

#[derive(PartialEq, Debug)]
pub enum ExternalId<'a> {
    System(&'a str),
    Public(&'a str, &'a str),
}

#[derive(PartialEq, Debug)]
pub enum EntityDefinition<'a> {
    EntityValue(&'a str),
    ExternalId(ExternalId<'a>),
}

#[macro_export]
macro_rules! test {
    ($name:ident, $text:expr, $($token:expr),*) => (
        #[test]
        fn $name() {
            let mut p = xml::Tokenizer::from($text);
            $(
                let t = p.next().unwrap();
                assert_eq!(to_test_token(t), $token);
            )*
            assert!(p.next().is_none());
        }
    )
}

#[inline(never)]
pub fn to_test_token(token: Result<xml::Token, xml::Error>) -> Token {
    match token {
        Ok(xml::Token::Declaration { version, encoding, standalone, span }) => {
            Token::Declaration(
                version.as_str(),
                encoding.map(|v| v.as_str()),
                standalone,
                span.range(),
            )
        }
        Ok(xml::Token::ProcessingInstruction { target, content, span }) => {
            Token::PI(
                target.as_str(),
                content.map(|v| v.as_str()),
                span.range(),
            )
        }
        Ok(xml::Token::Comment { text, span }) => Token::Comment(text.as_str(), span.range()),
        Ok(xml::Token::DtdStart { name, external_id, span }) => {
            Token::DtdStart(
                name.as_str(),
                external_id.map(|v| to_test_external_id(v)),
                span.range(),
            )
        }
        Ok(xml::Token::EmptyDtd { name, external_id, span }) => {
            Token::EmptyDtd(
                name.as_str(),
                external_id.map(|v| to_test_external_id(v)),
                span.range(),
            )
        }
        Ok(xml::Token::EntityDeclaration { name, definition, span }) => {
            Token::EntityDecl(
                name.as_str(),
                match definition {
                    xml::EntityDefinition::EntityValue(name) => {
                        EntityDefinition::EntityValue(name.as_str())
                    }
                    xml::EntityDefinition::ExternalId(id) => {
                        EntityDefinition::ExternalId(to_test_external_id(id))
                    }
                },
                span.range(),
            )
        }
        Ok(xml::Token::DtdEnd { span }) => Token::DtdEnd(span.range()),
        Ok(xml::Token::ElementStart { prefix, local, span }) => {
            Token::ElementStart(prefix.as_str(), local.as_str(), span.range())
        }
        Ok(xml::Token::Attribute { prefix, local, value, span }) => {
            Token::Attribute(prefix.as_str(), local.as_str(), value.as_str(), span.range())
        }
        Ok(xml::Token::ElementEnd { end, span }) => {
            Token::ElementEnd(
                match end {
                    xml::ElementEnd::Open => ElementEnd::Open,
                    xml::ElementEnd::Close(prefix, local) => {
                        ElementEnd::Close(prefix.as_str(), local.as_str())
                    }
                    xml::ElementEnd::Empty => ElementEnd::Empty,
                },
                span.range()
            )
        }
        Ok(xml::Token::Text { text }) => Token::Text(text.as_str(), text.range()),
        Ok(xml::Token::Cdata { text, span }) => Token::Cdata(text.as_str(), span.range()),
        Err(ref e) => Token::Error(e.to_string()),
    }
}

fn to_test_external_id(id: xml::ExternalId) -> ExternalId {
    match id {
        xml::ExternalId::System(name) => {
            ExternalId::System(name.as_str())
        }
        xml::ExternalId::Public(name, value) => {
            ExternalId::Public(name.as_str(), value.as_str())
        }
    }
}
xmlparser-0.11.0/.cargo_vcs_info.json0000644000000001120000000000000131610ustar00{ "git": { "sha1": "2d90a4b7431a642a0be578399e712809289e083c" } } xmlparser-0.11.0/Cargo.lock0000644000000002160000000000000111410ustar00# This file is automatically @generated by Cargo. # It is not intended for manual editing. [[package]] name = "xmlparser" version = "0.11.0"