Event-driven parsing and function callbacks have not yet been added for
S4/Splus5. They require mutable state, and hence integration of the
CORBA/Java/XML driver interface for this kind of functionality.
XML is one of several facilities we are investigating for enhancing
communication between applications. The ability to easily transfer
data between applications is enhanced if the data is
self-describing. One can imagine datasets being well-formed XML
documents that reference a DTD specifying their format. For example,
Statlib datasets would contain information about the variables, the
number of observations, the levels of factors, etc., as well as
meta-data about the source of the data, its precision, and additional
commands to help specific systems interpret the data. This format
allows readers to process a document in a single pass, since the
dimensions, types, etc. are announced at the top of the document.
Additionally, entities in the DTD and document can contain
code for specific systems. These then act as "portable" methods.
(Security is an issue, but somewhat orthogonal to the parsing mechanism.)
Data can also be exchanged dynamically with other systems that use
XML. For example, Office, Oracle, Lotus Notes, browsers, HTTP
servers, etc.
The markup language for mathematics - MathML - will be important
at the research end of statistics, and also to some extent in applied
data analysis for specifying models, etc.
See the Math ML Example;
the DTD can be fetched from mmlents.zip.
Scalable Vector Graphics (SVG)
is an XML based format for specifying graphics descriptions that
can be scaled easily without distortion. We may be using it (or an
extension of it) in Omegahat to represent plots.
The DTD is available from here.
Since XML is similar to HTML, we can encourage people to use this type
of format for different inputs. We have used it effectively for defining
options with potentially more complicated structure than simple
name-value pairs; hierarchical structures are easily handled by XML.
Plot descriptions can be described in this way, and indeed we intend
to do this in Omegahat.
This XML approach is in contrast to a simple ASCII or native object dump, which relies
on the receiving system or user to understand the format.
(Communicating via the S4 object ASCII dump format was used
effectively to communicate between Java and S4, but was heavily
dependent on the parsing facilities being migrated to Java, and any
other system engaging in such communication.)
In contrast
with the embedded
Java facilities and CORBA packages for
R and S, XML is a more static representation of data rather than a
live object offering methods.
In addition to providing an environment neutral form of persistence,
XML can be used for configuration files, plot template descriptions,
documentation, etc.
The aim of providing facilities in R and S for reading XML at the user
level is to encourage users to consider the development of DTDs for
statistical data and concepts. If we can "standardize" on some basic
descriptions, we can exchange data easily between numerous systems,
including spreadsheets, etc. These DTDs, coupled with Java
interface classes and IDL modules create an integrated
framework for open network computing in multiple ways and at multiple
user levels. We strongly encourage people to actively make their DTDs
available to others.
In the future, we will develop facilities for writing objects from R,
S and Omegahat in XML format using the DTDs we develop. A general
mechanism for having validated output filters can be created.
See Writing XML
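Until those writing facilities exist, the idea can be sketched in a few lines of plain R. The element and attribute names below (dataset, record, numRecords) echo the DTD discussion elsewhere in this document but are illustrative, not an official format:

```r
# Sketch only: serialize a numeric matrix as a simple <dataset> document.
# The tag and attribute names are hypothetical, not a finalized DTD.
writeDatasetXML <- function(x, name = "dataset") {
  header <- sprintf('<%s numRecords="%d">', name, nrow(x))
  # One <record> element per row, values separated by spaces.
  rows <- apply(x, 1, function(r)
    sprintf("  <record>%s</record>", paste(r, collapse = " ")))
  c(header, rows, sprintf("</%s>", name))
}

writeDatasetXML(matrix(1:4, 2, 2))
```

A validated output filter would additionally check the result against the DTD before emitting it.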
This is a small, but sufficient, collection of both C routines and R
and S functions for reading the contents of XML files for processing
in these two environments. There are two general styles of XML
parsers.
Document/Tree-based.
Here, the entire XML document is read and a tree constructed containing the
different elements in the document. At this point, we process the
elements by traversing the tree and generating a user-level
representation of the nodes. We allow the user to specify functions
that are called for different types of nodes so that she can customize
the resulting tree as it is being constructed.
Event driven.
This style involves reading the XML elements in the document
one at a time and invoking different user level functions/methods that
correspond to the type of element - a tag, text, entity, CData,
etc. The methods are responsible for processing the information
locally and building the appropriate data structure from all of
them. Thus, no tree need be constructed and traversed
post-construction. This reduces the memory used in reading the
document and provides much greater control over the parsing.
Rather than offering one of these styles, we provide functions that
work both ways for R. In S, we currently only support the
document/tree-based approach. xmlTreeParse() is the tree
based version which generates an internal tree and then converts it to
a list of lists in R/S. This uses the libxml library from Daniel
Veillard of W3.org.
The second function, xmlEventParse(), is event driven. The user
specifies a collection of R/S-level functions, in addition to the file
name, and the parser invokes the appropriate function as new XML elements
are encountered. The C-level parser we use is Expat developed by
Jim Clark.
Unless you have very large XML documents, if you want to experiment
with just one parser, use the first of these, i.e., the document-based
one. It is the simplest to use, at the cost of less control over the
creation of the data structures and of potential memory growth.
In R, the collection of functions is usually a closure and it can
manipulate local data. In S, these are usually
a list of functions. In order to handle mutable state, one should
use the interface
driver mechanism.
The closure approach is described in more detail in
Docs/Outline.nw and the R document in man/.
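The difference matters because a closure's local variables give the handler functions shared mutable state. A minimal base-R sketch of the mechanism, independent of the XML parser itself (the handler names mimic the event-parser convention):

```r
# A closure whose handler functions share state across calls,
# mimicking how event-parser handlers accumulate results.
makeCounter <- function() {
  n <- 0                 # local state, shared by all the functions below
  tags <- character()
  list(
    startElement = function(name, ...) {
      n <<- n + 1        # superassignment updates the enclosing environment
      tags <<- c(tags, name)
    },
    count = function() n,
    seen  = function() tags
  )
}

h <- makeCounter()
h$startElement("dataset")
h$startElement("record")
h$count()   # 2
```

A plain list of functions has no such shared environment, which is why S requires the interface driver mechanism for mutable state.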
Now we turn our attention to manipulating the previously generated
tree. We can do this in R/S using the following version of
treeApply.
treeApply <- function(x, func, post = NULL, pre = NULL, ...) {
  ans <- NULL
  value <- func(x)
  if(length(value))
    ans <- list(value = value)
  # If there are any children, apply recursively to those also,
  # forwarding the pre/post hooks.
  if(length(x[["children"]]) > 0) {
    tmp <- lapply(x[["children"]], treeApply, func, post = post, pre = pre, ...)
    if(length(tmp) > 0)
      ans$children <- tmp
  }
  # Invoke the post-processing-of-children hook.
  if(length(post)) {
    post(x)
  }
  invisible(ans)
}
Armed with this version of apply(), we can start doing
some processing of the tree. First, let's display the type of each node in the tree.
v <- treeApply(x, function(x) cat(class(x),"\n"))
named
XMLComment
XMLNode
XMLNode
XMLNode
XMLEntityRef
XMLProcessingInstruction
XMLNode
XMLNode
XMLNode
A slightly more interesting example is to
produce a graphical display of the tree.
I use PStricks
for this purpose.
We define a node function that produces the relevant TeX commands
and also a post function to tidy up the groups.
\pstree{\Tr{doc}}{%
\Tr{ A comment }%
\pstree{\Tr{foo}}{%
\Tr{element}%
\Tr{ }%
\Tr{test entity}%
\Tr{print "This is some more PHP code being executed."; }%
\pstree{\Tr{duncan}}{%
\pstree{\Tr{temple}}{%
\Tr{extEnt;}%
}
}
}
}
Note that the post function is more naturally done
in an event-driven parser, via the
endElement handler.
Another example is that this document has been carefully constructed
to be parseable by the xmlTreeParse
function.
The event-driven style is essentially a filtering mechanism. It
provides lower-level control over the processing of the
elements. Because neither R nor S has references, incremental
processing is slightly more complicated than it is in languages such
as C or Java. However, the event-driven style does allow us to avoid
reading the entire document into memory at once, and it is ideal for
situations where most of the document is not of interest but a small
number of nodes are important and their location in the document
matters.
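The control flow can be mimicked in plain R without any XML library: a toy driver pushes "events" at a list of handlers, and a text handler filters out just the character data. This is only a sketch of the dispatch idea, not the actual xmlEventParse implementation:

```r
# Toy event driver: walk a list of (type, value) events and dispatch
# each to the matching handler function, if one was supplied.
fireEvents <- function(events, handlers) {
  for (e in events) {
    f <- handlers[[e$type]]
    if (!is.null(f)) f(e$value)
  }
}

# Collect only the text events, discarding the structure -- the filtering idea.
collected <- character()
handlers <- list(text = function(v) collected <<- c(collected, v))
events <- list(list(type = "startElement", value = "doc"),
               list(type = "text", value = "hello"),
               list(type = "text", value = "world"),
               list(type = "endElement", value = "doc"))
fireEvents(events, handlers)
collected   # c("hello", "world")
```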
A simple example is where we gather all the character text in the
document. In other words, we throw away the XML hierarchical
structure and any nodes that are not simply character text.
Note that we can discard the lines that are simply white space
using the trim argument.
This trims all text values. More granularity is needed here.
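The trimming itself amounts to stripping leading and trailing white space from each text node and then discarding nodes that were all white space; a plausible base-R equivalent (an assumption about what trim and ignoreBlanks do, not the package's actual code):

```r
# Hedged sketch: strip leading/trailing white space (trim), then drop
# strings that were nothing but white space (ignoreBlanks).
trimText <- function(x) gsub("^[ \t\r\n]+|[ \t\r\n]+$", "", x)

vals <- c("  GBackup \n", "\n   ", "Open")
trimmed <- trimText(vals)
trimmed[trimmed != ""]    # c("GBackup", "Open")
```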
z <- xmlEventParse("data/job.xml", characterOnlyHandler(), ignoreBlanks=T, trim=T)
> z$getText()
[1] "GBackup"
[2] "Development"
[3] "Open"
[4] "Mon, 07 Jun 1999 20:27:45 -0400 MET DST"
[5] "USD 0.00"
[6] "Nathan Clemons"
[7] "nathan@windsofstorm.net"
[8] "The program should be released as free software, under the GPL."
Much as we did with the tree-based parser,
we can construct a display of the structure of the document
using the event driven parser.
Note that we use a list of functions rather than a closure in this
example. This is because we do not have data that persists across
function calls.
Parsing the mtcars.xml file (or generally
files using the DTD used by that file) can be done via the event
parser in the following manner. First we define a closure with
methods for handling the different tags of interest. Rather than
using startElement and looking at the name of the tag/element, we will
instruct the xmlEventParse to look for a method whose
name is the same as the tag, before defaulting to use the
startElement() method. As with most event-driven
code, the control flow is inverted and may seem complicated. The idea
is that we will see the dataset tag first. So we define a
function with this name. The dataset tag will have
attributes that we store to attach to the data frame that we construct
from reading the entire XML structure. Of special interest in this
list is the number of records. We store this separately, converting
it to an integer, so that when we find the number of variables, we can
allocate the array.
The next thing we do is define a method for handling the
variables element. There we find the number of variables.
Note that if the DTD didn't provide this count, we could defer the
computation of variables and the allocation of the array until we saw
the end of the variables tag. This would allow the user
to avoid having to specify the number of variables explicitly.
As we encounter each variable element, we expect the next
text element to be the name of the variable. So, within
the variable() method, we set the flag
expectingVariableName to be true. Then in the text() function, we interpret the value as
either a variable name if expectingVariableName is true,
or as the value of a record if not. In the former case, we append the
value to the list of variable names in varNames. We need
to set the value expectingVariableName to false when we
have enough. We do this when the length of varNames
equals the number of columns in data, computed from the
count attribute.
A different way to do this is to have an endElement()
function which sets expectingVariableName to false when
the element being ended was variables. Again, this is a
choice and different implementations will have advantages with respect
to robustness, error handling, etc.
The text() function handles the case where we are not
expecting the name of a variable, but instead interpret the string as
the value of a record. To do this, we have to convert the collection
of numbers separated by white space into a numeric vector. We do this
by splitting the string on white space and then converting each entry
to a numeric value. We assign the resulting numeric vector to the
matrix data in the current row. The index of the record
is stored in currentRecord. This is incremented by the
record method. (We could do this in text()
also, but this is more interesting.)
We will ignore issues where the values are separated across
lines, contain strings, etc. The latter is orthogonal to the event
driven XML parsing. The former (partial record per line) can be
handled by computing the number seen so far for this record and
storing this across calls to text() and adding to the
appropriate columns.
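The string-to-numeric conversion described above can be checked in isolation; both the strsplit route used in the handler below and a one-step scan() alternative are shown:

```r
record <- "21 6 160 110"

# Split on runs of white space, then convert (what the text() handler does):
as.numeric(strsplit(record, "[ \t]+")[[1]])
# returns c(21, 6, 160, 110)

# Or in one step, using scan() on the string itself:
scan(text = record, quiet = TRUE)
```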
handler <- function() {
data <- NULL
# Private or local variables used to store information across
# method calls from the event parser
numRecords <- 0
varNames <- NULL
meta <- NULL
currentRecord <- 0
expectingVariableName <- F
rowNames <- NULL
# read the attributes from the dataset
dataset <- function(x,atts) {
numRecords <<- as.integer(atts[["numRecords"]])
# store these so that we can put these as attributes
# on data when we create it.
meta <<- atts
}
variables <- function(x, atts) {
# From the DTD, we expect a count attribute telling us the number
# of variables.
data <<- matrix(0., numRecords, as.integer(atts[["count"]]))
# set the XML attributes from the dataset element as R
# attributes of the data.
attributes(data) <<- c(attributes(data),meta)
}
# when we see the start of a variable tag, then we are expecting
# its name next, so handle text accordingly.
variable <- function(x,...) {
expectingVariableName <<- T
}
record <- function(x,atts) {
# advance the current record index.
currentRecord <<- currentRecord + 1
rowNames <<- c(rowNames, atts[["id"]])
}
text <- function(x,...) {
if(x == "")
return(NULL)
if(expectingVariableName) {
varNames <<- c(varNames, x)
if(length(varNames) >= ncol(data)) {
expectingVariableName <<- F
dimnames(data) <<- list(NULL, varNames)
}
} else {
e <- gsub("[ \t]+", ",", x)
vals <- sapply(strsplit(e, ",")[[1]], as.numeric)
data[currentRecord,] <<- vals
}
}
# Called at the end of each tag.
endElement <- function(x,...) {
if(x == "dataset") {
# set the row names for the matrix.
dimnames(data)[[1]] <<- rowNames
}
}
return(list(variable = variable,
variables = variables,
dataset=dataset,
text = text,
record= record,
endElement = endElement,
data = function() {data },
rowNames = function() rowNames
))
}
A more robust version of this that handles row names and produces a
data frame rather than a matrix is given in the function dataFrameEvents.
The uncompiled, installable version as an R package.
This is probably the easiest to install as
at the end you can simply invoke library(XML).
You can use the GNUmakefiles in libxml and expat
to configure each of those distributions appropriately.
(Basically, these build shared libraries.)
There are no binaries for Unix.
If there is a need, please ask.
This software is known to run on both Linux (RedHat 6.1) and Solaris
(2.6).
To run the R functions, you will need to install
either or both of the following packages.
The code, documentation, etc. is released under the terms of the GNU
General Public License and the owner of the copyright is the Omega
Project for Statistical Computing.
The goal is to share this code with an S4/Splus5 version. In order to
keep the programming interfaces consistent, we would appreciate being
notified of changes.
The package, also known as a chapter, can be configured to use either of
the XML parsing styles discussed above, or both. The event-based parser
uses the Expat library by Jim Clark. The tree/document-based
parser uses libxml from
Daniel Veillard. You can use either or both of these. First install
whichever of these you will use, and make sure to build them as shared
libraries. See below for some assistance in doing this.
Having decided to use either libxml and/or expat, you must specify
their locations. Edit the GNUmakefile, and uncomment the
line defining LIBXML and/or LIBEXPAT as
appropriate. Change the value on the right hand side of the = sign to
the location of these directories.
Next, you need to specify whether you are building for R or S4/Splus5.
You can do this via the variable LANGUAGE in the
GNUmakefile.
It defaults to R.
All of these can be specified on the command line such as:
make LIBXML=$HOME/libxml-1.7.3 LIBEXPAT=$HOME/expat LANGUAGE=R CC=gcc
Untar the XML_3.98-0.tar.gz file in the appropriate directory,
probably one of the library/ directories your R distribution
searches for libraries. (See library(), R_LIBS, etc.)
cd XML
Invoke make, specifying the different values for the
3rd party distributions, etc. on the command line.
make LIBXML=$HOME/libxml-1.7.3 LIBEXPAT=$HOME/expat LANGUAGE=R CC=gcc
I have installed using the makefiles here and the
GNUmakefile.admin in the omegahat source tree version of this. That
however relies on some other makefiles in the R section of the
Omegahat tree. If anyone else wishes to package this, please send me
the changes so I can make them available to others. Of course you can
use it by just attaching the chapter and using dyn.load().
Some of this would be easier if we used either the R or S4/Splus5
package installation facilities. However, I do not have time at the
moment to handle both cases in the common code.
Make sure to specify the location of the library path. Use the
environment variable LD_LIBRARY_PATH to include the
location of the libxml distribution and also the lib directory in the
expat distribution.
There is now a version of the package for Windows.
One can install from source or download a binary, pre-compiled version of the package
from Brian Ripley's R windows package builds.
Installing From Binary
Change directory to the location in which you want to
install the library. This is usually R_HOME/library.
Install the libxml2 (and iconv) libraries into a directory
and add that to your PATH.
Run R and load the library using
library(XML)!
Installing From Source on Windows
To install from source, you can follow these steps.
Change directory to the src/library/ within the R distribution.
cd R_HOME/src/library
Untar the XML_3.98-0.tar.gz.
tar zxf XML_3.98-0.tar.gz
Edit the Makevars.win
file in the XML/src/ directory.
You will need to provide the names of the directories
in which the libxml2 header files and the libxml2 library
can be found.
Change directory to the src/gnuwin32 directory within the R
distribution.
cd ../gnuwin32
File that
allows us to use the same code for R and S4/Splus5 by hiding the
differences between these two via C pre-processor macros.
This file is copied from $OMEGA_HOME/Interfaces/CORBA/CORBAConfig
Utils
routines shared by both files above
for handling white space in text.
RS_XML.h
name space macro for routines used
to avoid conflicts with routine names with other
libraries.
RSDTD
Routines for converting DTDs to
user-level objects.
GNUmakefile
makefile controlling the
compilation of the shared libraries, etc.
expat/
makefiles that can be copied into expat distribution to make shared
libraries for use here.
libxml/
makefiles that can be copied into libxml distribution to make shared
library
Src/
R/S functions for parsing XML documents/buffers.
man/
R documentation for R/S functions.
Docs/
document (in noweb) describing initial ideas.
data/
example functions, closure
definitions, DTDs, etc that are not quite official functions.
The following information helps in installing the 3rd party libraries.
The particular approach is optional, but the libraries must be built as shared libraries.
GNU makefiles are provided (in the subdirectories expat/
and libxml/ of this distribution) to perform the
necessary operations. A simple way to place these in the
appropriate distribution is to give the command,
make LIBEXPAT=/dir/subdir expat
and
make LIBXML=/dir/subdir libxml
These require GNU make to be installed.
These makefiles circumvent the regular Makefiles in the
distributions.
Unzip the expat.zip
file. This will create a directory expat/.
Copy the contents of the directory named expat located within the
directory where you are reading this installation file. There should be 4 files
in total that are copied. Two of these, GNUmakefile and GNUmakefile.lib, go into
expat/. The other two, one in each of xmltok and xmlparse, should
be copied to the corresponding directories in the expat
distribution.
You can do this via the command
make LIBEXPAT=/wherever expat
issued from this directory.
Before doing this, you will have to edit these files to ensure that the correct
values are used for compiling shared libraries. At present, there are
settings for gcc and Solaris compilers.
Edit the file expat/GNUmakefile.lib
and comment out the settings that do not apply to your machine.
Specifically, if you are using the GNU compiler (gcc),
comment out the two lines for the Solaris compilers
(the second settings for PIC_FLAG and PIC_LD_FLAG)
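The edit involved is small; a hypothetical fragment of GNUmakefile.lib showing the two alternative settings (the variable names come from the text above, but the flag values here are illustrative, not copied from the distribution):

```make
# gcc settings -- leave these active when using the GNU compiler.
PIC_FLAG    = -fPIC
PIC_LD_FLAG = -shared

# Solaris compiler settings -- uncomment these (and comment out the
# gcc ones above) when using the Sun compilers.
# PIC_FLAG    = -KPIC
# PIC_LD_FLAG = -G
```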
The steps are similar to those for expat.
However, when compiling this for use with Splus5/S4, there are
additional steps. Please follow these or you will likely see
segmentation faults, etc. due to conflicting symbols.
Untar the libxml
distribution, creating a directory called, say,
libxml/.
Copy the single GNUmakefile from the directory named libxml/ below
this one (where you are reading this file) to the location where you
have installed the libxml distribution.
You can do this via the command
make LIBXML=/wherever libxml
You will have to edit these files to ensure that the correct
values are used for compiling shared libraries. At present, there are
settings for gcc and Solaris compilers.
Change directory back to the libxml distribution.
Type ./configure.
Type make.
An alternative to this involves the following steps.
It has not been extensively tested at all.
Apply the patch in the directory libxml/
to the libxml directory.
This can be done via the commands.
cd libxml
make LIBXML=/wherever/installed patch
Append the following lines to
either Makefile.in or Makefile in the libxml distribution
(depending on whether you have already configured that distribution
and/or whether you want the changes to persist across reconfigurations).
Both S4 and libxml have a symbol attribute. Because of the way
dynamically loaded code resolves symbols, the libxml facilities will
use the one from S4, incorrectly. Until we determine the appropriate
linker flags, please modify the three references to attribute in
libxml before compiling the shared libraries.
The following patch makes the changes.
Apply them by invoking the
Added names to the children field from an XMLNode in xmlTreeParse().
Simple example of reading a Gnumeric worksheet.
Fix up special case for trim when string is of length 1.
Duncan Temple Lang<duncan@wald.ucdavis.edu>
Last modified: Mon Dec 13 21:28:37 EST 1999
XML/inst/examples/gnumericHandler.R
#
# Should turn this into a data frame rather than a matrix.
# This would allow us to preserve different data types across
# columns/variables. Of course, there isn't an exact one-to-one
# correspondence between spreadsheets and data frames.
gnumericHandler <-
function(fileName)
{
# read the XML tree from the file.
d <- xmlTreeParse(fileName)
# Get the Sheet
sh <- d$doc$children[["Workbook"]]$children[["Sheets"]]$children[["Sheet"]]$children
mat <- matrix(0, as.integer(sh$MaxRow$children[[1]]$value)+1, as.integer(sh$MaxCol$children[[1]]$value)+1)
vals <- sh$Cells$children
gnumericCellEntry <- function(x)
{
atts <- sapply(x$attributes, as.integer)
val <- x$children$Content$children$text$value
tmp <- switch(atts[["Style"]], "1"= as.numeric(val), "2"=as.numeric(val), "3"=val)
mat[atts[["Row"]]+1, atts[["Col"]]+1] <<- tmp
tmp
}
sapply(vals, gnumericCellEntry)
return(mat)
}
XML/inst/examples/xpath.xml
Some textOther textMore text
XML/inst/examples/redirection.R
# This is an example of downloading data that is accessible via a text file
# that is identified via a link in an HTML document that is returned from
# a form submission.
# The original form is available via the SWISSPROT site which is redirected
# to www.expasy.org.
#
# This example illustrates the use of the FOLLOWLOCATION options in libcurl and hence
# RCurl.
#
# The example was raised by Linda Tran at UC Davis.
# Works as of May 12, 2006
#
tt = getForm("http://www.expasy.org/cgi-bin/search",
db = "sptrde", SEARCH = "fmod_bovin",
.opts = list("FOLLOWLOCATION" = TRUE))
# Then, find the link node which has "raw text"
# in the text of the link
h = function() {
link = ""
a = function(node, ...) {
v = xmlValue(node)
if(length(grep("raw text", v)))
link <<- xmlGetAttr(node, "href")
node
}
list(a = a, .link = function() link)
}
a = h()
htmlTreeParse(tt, asText = TRUE, handlers = a)
a$.link()
u = paste("http://www.expasy.org", a$.link(), sep = "")
getURL(u)
XML/inst/examples/index.html
XML Package for R and S-Plus
This package provides facilities for the S language
to
parse XML files, URLs and strings,
using either the DOM (Document Object Model)/tree-based
approach, or the event-driven SAX (Simple API for XML)
mechanism;
generate XML content to buffers, files, URLs,
and internal XML trees;
read DTDs as S objects.
The package supports both R and S-Plus 5 and higher.
NOTE
The most significant visible changes to the package include:
uses libxml2, by default and only libxml(version 1) if libxml2
is not present
uses a namespace for R.
Download
The source for the S package can
be downloaded as XML_0.97-0.tar.gz.
Note that this latest version has not been tested with S-Plus.
Specifically, it should work as before; however, the state
mechanism for the SAX parser may not. This just requires testing.
Documentation
A reasonably detailed overview
of the package and what we might use XML for.
Duncan Temple Lang<duncan@wald.ucdavis.edu>
Last modified: Fri Apr 1 04:32:29 PST 2005
XML/inst/examples/mathml.R
#
# Functions to illustrate how to convert a MathML tree to an
# R expression.
#
#
mathml <-
# generic method that converts an XMLNode
# object to an R/S expression.
function(node)
{
UseMethod("mathml", node)
}
mathml.XMLDocument <-
function(doc)
{
return(mathml(doc$doc$children))
}
mathml.default <-
#
# Attempts to create an expression from the Math ML
# document tree given to it.
# This is an example using the mathml.xml and is not
# in any way intended to be a general MathML "interpreter"
# for R/S.
#
function(children)
{
expr <- list()
for(i in children) {
if(class(i) == "XMLComment")
next
expr <- c(expr, mathml(i))
}
return(expr)
}
mergeMathML <-
#
# This takes a list of objects previously converted to R terms
# from MathML and aggregates them by collapsing elements
# such as
# term operator term
# into R calls.
#
# see mathml.XMLNode
#
function(els)
{
#cat("Merging",length(els));
#print(els)
ans <- list()
more <- T
ctr <- 1
while(more) {
i <- els[[ctr]]
if(inherits(i, "MathMLOperator")) {
ans <- c(i, ans, els[[ctr+1]])
mode(ans) <- "call"
ctr <- ctr + 1
} else if(inherits(i,"MathMLGroup")) {
#print("MathMLGroup")
ans <- c(ans, i)
mode(ans) <- "call"
} else
ans <- c(ans, i)
ctr <- ctr + 1
more <- (ctr <= length(els))
}
#cat("Merged: "); print(ans)
return(ans)
}
mathml.XMLNode <-
#
# Interprets a MathML node and converts it
# to an R expression term. This handles tags
# such as mi, mo, mn, msup, mfrac, mrow, mfenced,
# msqrt, mroot
#
# Other tags include:
# msub
# msubsup
# munder
# mover
# munderover
# mmultiscripts
#
# mtable
# mtr
# mtd
#
# set, interval, vector, matrix
# cn
# matrix, matrixrow
# transpose
# Attributes for mfenced: open, close "["
function(node)
{
nm <- name(node)
if(nm == "msup" || nm == "mfrac") {
op <- switch(nm, msup="^", mfrac="/")
a <- mathml(node$children[[1]])
b <- mathml(node$children[[2]])
expr <- list(as.name(op), a, b)
mode(expr) <- "call"
val <- expr
} else if(nm == "mi" || nm == "ci") {
# display in italics
if(!is.null(node$children[[1]]$value))
val <- as.name(node$children[[1]]$value)
} else if(nm == "mo") {
if(inherits(node$children[[1]],"XMLEntityRef")) {
# node$children[[1]]$value
val <- as.name("*")
class(val) <- "MathMLOperator"
} else {
# operator
tmp <- node$children[[1]]$value
if(!is.null(tmp)) {
if(tmp == "=") {
# or we could use "=="
# to indicate equality, not assignment.
tmp <- "<-"
}
val <- as.name(tmp)
class(val) <- "MathMLOperator"
}
}
} else if(nm == "text") {
val <- node$value
} else if(nm == "matrix"){
val <- mathml.matrix(node)
} else if(nm == "vector"){
val <- mathml.vector(node)
} else if(nm == "mn" || nm == "cn") {
# number tag.
if(!is.null(node$children[[1]]$value))
val <- as.numeric(node$children[[1]]$value)
} else if(nm == "mrow" || nm == "mfenced" || nm == "msqrt" || nm == "mroot") {
# group of elements (displayed in a single row)
ans <- vector("list", length(node$children))
ctr <- 1
for(i in node$children) {
ans[[ctr]] <- mathml(i)
#cat(ctr,i$name,length(ans),"\n")
ctr <- ctr + 1
}
ans <- mergeMathML(ans)
# if this is an mfenced, msqrt or mroot element, add the
# enclosing parentheses or function call.
# ....
if(nm == "msqrt") {
ans <- c(as.name("sqrt"), ans)
mode(ans) <- "call"
} else if(nm == "mfenced") {
class(ans) <- "MathMLGroup"
}
val <- ans
} else if(nm == "reln") {
val <- mathml(node$children)
mode(val) <- "call"
} else if(nm == "eq") {
val <- as.name("==")
} else if(nm == "apply") {
val <- mathml(node$children)
cat("apply:",length(val),"\n")
print(val)
mode(val) <- "call"
} else if(nm == "times") {
val <- as.name("%*%")
} else if(nm == "transpose") {
val <- as.name("t")
}
return(val)
}
mathml.matrix <-
#
#
#
#
function(node)
{
m <- matrix(character(1), length(node$children), length(node$children[[1]]$children))
i <- 1
for(row in node$children) {
j <- 1
for(cell in row$children) {
tmp <- mathml(cell)
m[i,j] <- as.character((tmp))
j <- j + 1
}
i <- i + 1
}
print(m)
return(m)
}
mathml.vector <-
function(node)
{
ans <- character(length(node$children))
for(i in 1:length(node$children)) {
tmp <- mathml(node$children[[i]])
ans[i] <- as.character(tmp)
}
print(ans)
return(ans)
}
XML/inst/examples/schema.xsd
XML/inst/examples/CIS.R
require(RCurl)
require(XML)
#myCurl = getCurlHandle()
#getURL("http://www.statindex.org/CIS/psqlQuery/", cookiejar = "-", curl = myCurl)
#.opts = list(cookie = '_ZopeId="19324353A25Uc.N15jM"', verbose = TRUE))
mycookie <- '_ZopeId="19324353A25Uc.N15jM"'
CISQuery <- function(author = "Ihaka", title = "", keyword = "",
journal = "", yearbeg = "", yearend = "", format = "bib",
url = "http://www.statindex.org/CIS/psqlQuery/Searcher",
cookie=mycookie){
v <- postForm(url, skip = "+", authed = "+", authed = "a",
authorstring = author, titlestring = title,
keywordstring = keyword, jnt = "jp", jnamestring = journal, pt= "+",
pt = "b", pt = "j", pt = "p", pt = "+",
yearbeg = yearbeg, yearend = yearend,
startDisplay = "1", endDisplay = "50", fmt = format,
.opts = list(cookie = cookie, verbose = TRUE))
browser()
g <- htmlTreeParse(v,asText = TRUE)
h <- g$children$html[["body"]]
uh <- unlist(h)
ugh <- uh[grep("not an authenticated user",uh)]
if(length(ugh)>0)
stop("Not an authenticated CIS user")
h <- h[["pre"]][["text"]]
i <- unlist(strsplit(h$value,"@"))[-1]
j <- gsub("\n+$","",i)
k <- gsub("^","@",j)
l <- sapply(k, function(x) strsplit(x,"\n"),USE.NAMES = FALSE)
lapply(l,function(x) {x; class(x) <- "Bibtex"; x})
}
f <- CISQuery(cookie=mycookie)
XML/inst/examples/RhelpInfo.xml
Omegahat
The libxml2 library
DuncanTemple Langduncan@wald.ucdavis.edu
XML/inst/examples/createTree.R
doc <- xmlTree()
doc$addTag("EXAMPLE", close= FALSE, attrs=c("prop1" = "gnome is great", prop2 = "& linux too"))
doc$addComment("A comment")
doc$addTag("head", close= FALSE)
doc$addTag("title", "Welcome to Gnome")
doc$addTag("chapter", close= FALSE)
doc$addTag("title", "The Linux Adventure")
doc$addTag("p")
doc$addTag("image", attrs=c(href="linux.gif"))
doc$closeTag()
doc$closeTag()
doc$addTag("foot")
doc$closeTag()
XML/inst/examples/functionIndex.Sxml
]>
functionIndex
This function returns the names of the functions that are to
be defined in this file. This allows one to know ahead of time
what functions the file defines and to source specific
functions from this file using the
which argument of xmlSource
function(file, ...) {
d &sgets; xmlRoot(xmlTreeParse(file, ...))
sapply(d[names(d) == "function"],
function(x) {
if(!is.na(match("sname", names(x))))
xmlValue(x[["sname"]][[1]])
else {
xmlValue(x[[1]][[1]])
}
})
}
XML/inst/examples/xpath.R

xpathExprCollector =
function(targetAttributes = c("test", "select"))
{
# Collect the values of the target attributes we see, grouped by attribute name.
tags = list()
# frequency table for the element names
counts = integer()
start =
function(name, attrs, ...) {
attrs = attrs[ names(attrs) %in% targetAttributes ]
if(length(attrs) == 0)
return(TRUE)
tags[names(attrs)] <<-
lapply(names(attrs),
function(id)
c(tags[[id]] , attrs[id]))
}
list(.startElement = start,
.getEntity = function(x, ...) "xxx",
.getParameterEntity = function(x, ...) "xxx",
result = function() lapply(tags, function(x) sort(table(x), decreasing = TRUE)))
}
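A hedged usage sketch for the collector above (the stylesheet name is hypothetical): run an XSLT document through xmlEventParse() and then ask for the frequency tables of the XPath expressions found in test and select attributes.

```r
library(XML)

# "style.xsl" is a stand-in for any XSLT document of interest.
h <- xpathExprCollector()
invisible(xmlEventParse("style.xsl", handlers = h))

# A named list: one sorted frequency table per target attribute.
h$result()
```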
XML/inst/examples/dataFrameEvent.R

# A closure for use with xmlEventParse
# and for reading a data frame using the DatasetByRecord.dtd
# DTD in $OMEGA_HOME/XML/DTDs.
# To test
# xmlEventParse("mtcars.xml", handler())
#
handler <- function() {
data <- NULL
# Private or local variables used to store information across
# method calls from the event parser
numRecords <- 0
varNames <- NULL
meta <- NULL
currentRecord <- 0
expectingVariableName <- FALSE
rowNames <- NULL
currentColumn <- 1
# read the attributes from the dataset
dataset <- function(x, atts) {
numRecords <<- as.integer(atts[["numRecords"]])
# store these so that we can put these as attributes
# on data when we create it.
meta <<- atts
}
variables <- function(x, atts) {
# From the DTD, we expect a count attribute telling us the number
# of variables.
#cat("Creating matrix",numRecords, as.integer(atts[["count"]]),"\n")
data <<- matrix(0., numRecords, as.integer(atts[["count"]]))
# set the XML attributes from the dataset element as R
# attributes of the data.
attributes(data) <<- c(attributes(data),meta)
}
# when we see the start of a variable tag, then we are expecting
# its name next, so handle text accordingly.
variable <- function(x,...) {
expectingVariableName <<- TRUE
}
record <- function(x,atts) {
# advance the current record index.
currentRecord <<- currentRecord + 1
rowNames <<- c(rowNames, atts[["id"]])
}
text <- function(x,...) {
if(x == "")
return(NULL)
if(expectingVariableName) {
varNames <<- c(varNames, x)
if(length(varNames) >= ncol(data)) {
expectingVariableName <<- FALSE
dimnames(data) <<- list(NULL, varNames)
}
} else {
# Collapse runs of whitespace into commas. Use "+" rather than "*" so that
# zero-length matches do not create empty fields (which would become NAs).
e <- gsub("[ \t]+", ",", x)
els <- strsplit(e, ",")[[1]]
els <- els[els != ""]
for(i in els) {
data[currentRecord, currentColumn] <<- as.numeric(i)
currentColumn <<- currentColumn + 1
}
}
}
endElement <- function(x,...) {
if(x == "dataset") {
dimnames(data)[[1]] <<- rowNames
} else if(x == "record") {
currentColumn <<- 1
}
}
return(list(variable = variable,
variables = variables,
dataset=dataset,
text = text,
record= record,
endElement = endElement,
data = function() {data },
rowNames = function() rowNames
))
}
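A short driver for the closure above, following its "To test" comment; mtcars.xml is assumed to be a dataset marked up against the DatasetByRecord DTD:

```r
library(XML)

h <- handler()
xmlEventParse("mtcars.xml", handlers = h)

# The parsed matrix and its row names are retrieved from the closure,
# since the SAX callbacks cannot return values directly.
d <- h$data()
rn <- h$rowNames()
```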
XML/inst/examples/sbmlSAX.S

MathMLOperations =
c("power" = "^",
"times" = "*",
"plus" = "+"
)
handlers =
function(operations = MathMLOperations)
{
current = list()
state = character()
start =
function(x, atts, ...) {
# Handle the opening tags and set the stack appropriately.
if(x == "apply") {
# Make a call with a silly name that we will change when we read the next element
# giving the operation.
current <<- c(call(""), current)
state <<- c("call", state)
} else if(x == 'ci') {
# Expecting the next text contents to be a name of a variable.
state <<- c("name", state)
current <<- c("", current)
} else if(!is.na(idx <- match(x, names(operations)))) {
# If we are dealing with a call and the name of this element being opened
# matches our operation names, then insert the S name of the corresponding
# function into the previously created call.
if(length(state) && state[1] == "call")
current[[1]][[1]] <<- as.name(operations[idx])
# make certain that we add something to state stack so that when we close the
# tag, we remove it, not the previously active element on the stack.
state <<- c("<>", state)
}
}
text = function(x, atts, ...) {
if(x == "")
return(FALSE)
if(length(state) && state[1] == "name") {
current[[1]] <<- paste(current[[1]], x, sep = "")
}
}
end =
function(x, atts, ...) {
# If there is nothing on the stack, then nothing to close.
if(length(state)) {
if(state[1] == "call" && length(current) > 1) {
# If ending an apply (call) and we have 2 or more things
# on the stack, then fold this call (current[[1]]) into the argument of the
# of the previous call (current[[2]]) at the end.
e = current[[1]]
f = current[[2]]
# Should check f is a call or state[2] == "call"
f[[length(f) + 1]] = e
current[[2]] = f
current <<- current[-1]
} else if(state[1] == "name") {
# Ending a ci element, so we have a name; put this into the
# current call.
if(length(state) > 1 && state[2] == "call") {
# This is very similar to the previous block for a call,
# except we apply as.name(). Could easily consolidate by doing
# this coercion first. Left like this for clarity of concept.
e = current[[2]]
e[[length(e) + 1]] = as.name(current[[1]])
current[[2]] = e
# Remove the elements from the top of the stacks.
current <<- current[-1]
}
}
state <<- state[-1]
}
}
list(startElement = start, endElement = end, text = text,
state = function() state,
current = function() current)
}
XML/inst/examples/author.R

xsd = xmlTreeParse("examples/author.xsd", isSchema = TRUE, useInternal = TRUE)
doc = xmlInternalTreeParse("examples/author.xml")
#h = schemaValidationErrorHandler()
#.Call("RS_XML_xmlSchemaValidateDoc", xsd@ref, doc, 0L, h)
xmlSchemaValidate(xsd, doc)
XML/inst/examples/itunesSax.R

saxHandlers =
function()
{
tracks = list()
dictLevel = 0L
key = NA
value = character()
track = list()
text = function(val) {
value <<- paste(value, val, sep = "")
}
startElement =
function(name, attrs) {
if(name %in% c("integer", "string", "date", "key"))
value <<- character()
if(name == "dict")
dictLevel <<- dictLevel + 1L
}
convertValue =
function(value, textType) {
switch(textType,
integer = as.numeric(value),
string = value,
date = as.POSIXct(strptime(value, "%Y-%m-%dT%H:%M:%S")),
value)  # an unnamed final argument is switch()'s default, not 'default ='
}
endElement = function(name) {
if(name %in% c("integer", "string", "date"))
track[[key]] <<- convertValue(value, name)
else if(name == "key")
key <<- value
else if(name == "dict" && dictLevel == 3) {
class(track) = "iTunesTrackInfo"
tracks[[ length(tracks) + 1]] <<- track
track <<- list()
dictLevel <<- 2
}
}
list(startElement = startElement, endElement = endElement, text = text, tracks = function() tracks)
}
h = saxHandlers()
#xmlEventParse(path.expand(fileName), handlers = h)
# 5.9 seconds. But this is parsing and processing into tracks.
# system.time({dd = xmlEventParse(path.expand(fileName), handlers = h, addContext = FALSE)})
# 5.93 seconds on average (SD of .09)
# sax = replicate(10, system.time({dd = xmlEventParse(path.expand(fileName), handlers = h, addContext = FALSE)}))
XML/inst/examples/xml2tex.Sxml

The functions in this file are an initial attempt to define some
filters for an XML document to produce LaTeX output by translating
the contents of the XML document.
Note that using XSL is slightly problematic because the result needs
to be a valid XML document, which no LaTeX document ever is.

xml2tex <-
function(node, mappings = .XMLTexMappings) {
  n <- 10
  x <-
    print(x + 10)
  x
}

cat("Got to here\n")

xml2texUnderline <-
function(node, tex)
{
}

xml2texCode <-
function(node, tex)
{
}

xxx
seq(1, n) + 10

xml2tex.map <-
  list("i" = "textit",
       "b" = "textbf",
       "sfunction" = "SFunction",
       "item" = "item",
       "label" = c("[", "]"),
       "cite" = function(x) paste("\\cite{", xmlAttrs(x)["id"], "}", collapse = ""),
       "bibitem" = "",
       "bibliography" = "")

mapXML2TeX <-
function(node, attr) {
  name <- xmlName(node)
  el <- xml2tex.map[[name]]
  if (!is.null(el)) {
    if (mode(el) == "character") {
    } else if (mode(el) == "function") {
      el(node)
    }
  }
}
XML/inst/examples/bondYields.R
uri = "http://www.treas.gov/offices/domestic-finance/debt-management/interest-rate/yield.xml"
h =
function() {
tables = list()
tb = function(node) {
# this will drop any NULL values from empty nodes.
els = unlist(xmlApply(node, xmlValue))
vals = as.numeric(els)
names(vals) = gsub("BC_", "", names(els))
tables[[length(tables) + 1]] <<- vals
NULL
}
list("G_BC_CAT" = tb, getTables = function() tables)
}
xmlTreeParse(uri, handlers = h())$getTables()
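Since each G_BC_CAT node yields one named numeric vector, the list returned by getTables() can be stacked into a matrix; a sketch assuming all records carry the same field names (the Treasury feed's layout may have changed since this example was written):

```r
library(XML)

# 'uri' and 'h' are the feed URL and handler factory defined above.
yields <- xmlTreeParse(uri, handlers = h())$getTables()
yieldMatrix <- do.call(rbind, yields)
```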
XML/inst/examples/mexico.xml

57.9332 53.4599 54.8241 55.7852 55.008 54.9579 53.3819 54.1589 53.3658 58.4322
24.9125 24.0581 24.9128 25.5737 24.4704 25.697 26.1449 25.9035 25.4705 24.0121
30.0 28.0 16.0 28.0 31.0 34.0 32.0 24.0 32.0 28.0